Data Engineer

#R-00123877
Location
Edinburgh, United Kingdom
Contract
Permanent - Full Time
Brand
NatWest Group
Job category
Insights & Analytics - Business Strategy & Delivery
Posted
31/03/2021

Join us as a Data Engineer

  • This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
  • You’ll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers and the bank safe and secure
  • Participating actively in the data engineering community, you’ll deliver opportunities to support our strategic direction while building your network across the bank
  • We’re recruiting for multiple roles across a range of levels, up to and including experienced managers

What you'll do

We’ll look to you to drive value for the customer through modelling, sourcing and data transformation. You’ll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering.

We’ll also expect you to be:

  • Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code
  • Helping to define common coding standards and best practices for monitoring model performance
  • Delivering the automation of data engineering pipelines through the removal of manual stages
  • Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development
  • Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
  • Delivering data engineering strategies to build a scalable data architecture and feature-rich customer datasets for data scientists

The skills you'll need

To be successful in this role, you’ll need to be a programmer and Data Engineer with a qualification in Computer Science or Software Engineering. You’ll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as a proven track record in extracting value and features from large scale data.

You’ll need knowledge of data engineering programming languages such as Python or PySpark, SQL, Java, and Scala. You’ll also have an understanding of Apache Spark and of ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets, and Apache Airflow.

You’ll also demonstrate:

  • Knowledge of core computer science concepts such as common data structures and algorithms, profiling or optimisation
  • An understanding of machine learning, information retrieval or recommendation systems
  • Good working knowledge of CI/CD tools
  • Knowledge of messaging, event or streaming technology such as Apache Kafka, and of big data platforms such as Snowflake, AWS Redshift, Postgres, MongoDB, Neo4J and Hadoop
  • Good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure
  • Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
  • Extensive experience using RDBMS and ETL pipelines

If you need any adjustments to support your application, such as information in alternative formats or special requirements to access our buildings, or if you’re eligible under the Disability Confident Scheme please contact us and we’ll do everything we can to help.