What you’ll do
The Big Data Engineer plays a key role in the Digital Engineering team by supporting the strategy and deliverables related to the Cloud Platform and Data Engineering. The Big Data Engineer is a core member of the agile teams that deliver data pipelines and capabilities across the organisation by building and automating data flows, and provides specialised guidance, delivering through self and others, to:
* Design, develop and integrate ETL/ELT data pipelines that bring together the necessary data from multiple sources for analysis and for Technology actions;
* Build applications and products that use large volumes of data and produce outputs that enable actions delivering incremental value;
* Deliver and implement core capabilities (frameworks, platform, development infrastructure, documentation, guidelines and support) to speed up delivery in the Big Data Programme, assuring quality, performance and alignment with the technology blueprint and patterns;
* Support stakeholders and functions in deriving business value from operational data;
* Design and produce high-performing, stable end-to-end data applications that perform cost-efficient, complex processing of massive volumes of batch and streaming data on a multi-tenant big data platform in the cloud;
* Ingest and automate the necessary data from local and group sources onto the GCP platform;
* Ensure delivery of solution and use-case enablement, GCP project and resource enablement, data source ingestion for Networks sources, application production rollouts, and code/execution optimisation for big data solutions;
* Work with key stakeholders such as the Group Big Data/Neuron team, ADS, ACoE, and local-market IT and Big Data teams to define the strategy for evolving the Big Data capability, including solution architecture decisions aligned with the platform architecture;
* Investigate and drive the adoption of new technologies, identifying where they can bring benefits;
* Ensure common data architecture, structure and definitions, data cleansing and data integrity;
* Support data security and privacy, and maintain thorough documentation processes.
Who you are
* Degree in Computer Science or a related field
* 3 years of hands-on Big Data Engineering experience, ideally in a cloud computing environment
* Knowledge of cloud providers such as GCP, AWS or Azure
* Expertise in developing and optimising Apache Spark applications
* Skilled in OOP languages (such as Python, Java or Scala), SQL and bash scripting
* Experience implementing large-scale batch and streaming Big Data solutions
* Understanding of big data ecosystems and tools such as Hadoop, Airflow, NiFi and Kafka, as well as various data formats
* Background in telecommunication networks and their related data sources, including their strengths, weaknesses, semantics, and formats
* Portuguese/English language proficiency.
Not a perfect fit?
Worried that you don’t meet all the desired criteria exactly? At Vodafone we are passionate about Inclusion for All and creating a workplace where everyone can thrive, whatever their personal or professional background. If you’re excited about this role but your experience doesn’t align exactly with every part of the job description, we encourage you to apply as you may be the right candidate for this role or another role, and our recruitment team can help you see how your skills fit in.