Data, Insights & Analytics, Business Strategy & Delivery

Data Platform Engineer

Gurugram, India

Permanent

Full Time

#R-00196944

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive.

This role is based in India, and as such, all normal working days must be carried out in India.

Job description

Join us as a Data Platform Engineer

  • We're seeking a talented Data Platform Engineer to build effortless, digital-first customer experiences and simplify the bank by developing innovative, data-driven solutions
  • You'll inspire the bank to be commercially successful through insights, while keeping our customers' and the bank's data safe and secure
  • This is a chance to hone your expert programming and data engineering skills in a fast-paced and innovative environment

What you'll do

As a Data Platform Engineer, you'll partner with technology and architecture teams to deepen your data knowledge and build data solutions that deliver value for our customers. Working closely with universal analysts, platform engineers and data scientists, you'll carry out data engineering tasks to build a scalable data architecture, including data extraction and transformation.

As well as this, you’ll be:

  • Loading data into data platforms
  • Building automated data engineering pipelines
  • Delivering streaming data ingestion and transformation solutions
  • Participating in the data engineering community to identify and deliver opportunities that support the bank's strategic direction
  • Developing a clear understanding of data platform cost levers to build cost effective and strategic solutions

The skills you'll need

You'll be an experienced programmer and data engineer, with a BSc qualification or equivalent in Computer Science or Software Engineering. Along with this, you'll have a proven track record of extracting value and features from large-scale data, and a developed understanding of data usage and its dependencies across wider teams and the end customer.

You'll need experience in deploying and managing distributed data/ETL pipelines (batch mode and real-time streaming) hosted on Hadoop, Spark, Kafka, Informatica and MongoDB. You'll also need experience of managing data engineering tooling and orchestration, such as StreamSets and Informatica PowerCenter/BDM/IICS, on on-premise or cloud infrastructure. You'll also demonstrate:

  • Experience of developing real-time data streaming pipelines using Change Data Capture (CDC), Kafka and StreamSets/NiFi/Flume/Flink
  • Experience with Change Data Capture tooling such as IBM InfoSphere, Oracle GoldenGate, Attunity or Debezium
  • Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
  • Expertise in Unix and DevOps automation tools like Terraform and Puppet, and experience in deploying applications to at least one of the major public cloud providers, such as AWS, GCP or Azure
  • Extensive experience using RDBMS and a NoSQL database such as MongoDB, building ETL pipelines, Python, Java APIs using Spring Boot, and writing complex SQL queries
  • A good understanding of cloud data warehouses such as Snowflake
  • A good understanding of modern code development practices
  • Good critical thinking and proven problem solving abilities

What’s it like to work at NatWest Group?

Find out more about what it’s like to work here, including Rewards and Benefits, and Learning and Development.