Data Engineer

#R-00124348
Location
Bangalore, India
Contract
Permanent - Full Time
Brand
NatWest Group
Job category
Insights & Analytics - Business Strategy & Delivery
Posted
07/05/2021
Closing date for applications: 06/07/2021

Join us as a Data Engineer

  • We're seeking a talented Data Engineer to build effortless, digital-first customer experiences and simplify the bank by developing innovative, data-driven solutions
  • You’ll inspire the bank to be commercially successful through insights, while at the same time keeping our customers’ and the bank's data safe and secure
  • This is a chance to hone your expert programming and data engineering skills in a fast-paced and innovative environment
  • We’re recruiting for multiple roles across a range of levels, up to and including experienced managers

What you'll do

As a Data Engineer, you’ll partner with technology and architecture teams to build your data knowledge and develop data solutions that deliver value for our customers. Working closely with universal analysts, platform engineers and data scientists, you’ll carry out data engineering tasks to build a scalable data architecture, including data extraction and transformation.

As well as this, you’ll be:

  • Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code
  • Helping to define common coding standards and best practices for monitoring model performance
  • Loading data into data platforms and building automated data engineering pipelines
  • Delivering streaming data ingestion and transformation solutions
  • Participating in the data engineering community to deliver opportunities to support the bank's strategic direction
  • Developing a clear understanding of data platform cost levers to build cost effective and strategic solutions

The skills you'll need

You’ll be an experienced programmer and data engineer, with a BSc qualification or equivalent in Computer Science or Software Engineering. Along with this, you’ll have a proven track record in extracting value and features from large scale data, and a developed understanding of data usage and dependencies with wider teams and the end customer.

We'll also expect you to have extensive experience using RDBMS, ETL pipelines, Python, Hadoop, SQL and data wrangling, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure.

You’ll also demonstrate:

  • Knowledge of core computer science concepts such as common data structures and algorithms, profiling or optimisation
  • An understanding of machine learning, information retrieval or recommendation systems
  • Good working knowledge of CI/CD tools
  • Knowledge of programming languages used in data engineering, such as Python, PySpark, SQL, Java and Scala
  • An understanding of Apache Spark and ETL tools such as Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
  • Knowledge of messaging, event or streaming technology such as Apache Kafka
  • Knowledge of big data platforms such as Snowflake, AWS Redshift, PostgreSQL, MongoDB, Neo4j and Hadoop
  • Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing and data modelling