πŸ“’ Join our team as a Senior Data Engineer! πŸ“’

As a Senior Data Engineer you will work in the team responsible for all cross-cutting cloud/Hadoop topics in the Big Data & Advanced Analytics key area. You will drive cloud and data engineering in the cloud migration and the development of cloud-native applications, mentor others in technical concepts, and implement best practices for our cloud-based data ingestion solutions. Furthermore, you will be responsible for the investigation, preparation and migration of our products from our on-premise infrastructure to cloud-native platforms (GCP and Azure).

A dynamic working environment with a collegial atmosphere and attractive benefits ensures that you feel at home from the start.

What will you be doing? πŸ‘‡

  1. Working in a task-oriented, test-driven team to investigate our products and select candidates for on-premise-to-cloud migration
  2. Implementing and maintaining cloud-native landing zones and solutions
  3. Developing future-proof software solutions to connect cloud and on-premise environments, and vice versa
  4. Integrating security standards, policies and controls into service offerings

Alternative tasks:

  1. Focusing on stability, performance tuning and innovation of the applications
  2. Coaching colleagues and end users, actively contributing to knowledge sharing and to a learning culture

Which technologies & skills are important to us? πŸ‘Œ

  1. Cloud Data Development:
    • Design and develop cloud-based data storage solutions using Google Cloud Platform services such as Cloud Composer, Dataproc, Dataflow and Pub/Sub
    • Build and maintain data pipelines to ensure data accuracy, consistency, and real-time processing
    • Optimize data infrastructure for performance, scalability and reliability
  2. Infrastructure as Code (IaC):
    • Utilize Terraform to create and manage cloud resources efficiently
    • Implement CI/CD pipelines for automated deployment and continuous integration
  3. Microservices and API:
    • Work with microservices architecture and design APIs for seamless data integration
  4. Scripting and Testing:
    • Proficient in Python and SQL for ETL processes, schema evolution pipeline development and performance tuning
    • Develop, test, and deploy solutions using Dataproc, Dataflow, and Cloud Functions
  5. Scala Expertise:
    • Leverage Scala for Dataproc pipeline development, enhancement and performance tuning

How do we work?
πŸ“Œ Hybrid at ul. Wersalska 6 in Łódź (two days per week)

Below you can find more information about the Big Data cluster and Commerzbank.

The Big Data cluster is the enabler for data scientists, providing a huge collection of data and a data science workbench in one place.

    • BI technology within the lake infrastructure
    • Establish a stable, state-of-the-art technology base with on-prem and cloud solutions
    • Set up data lake as single data and analytics hub and effectively ingest most important data sources
    • Establish data quality and metadata management
    • Provide Data Marts and sandboxes for segments and functions with the most important combination of data sources

Commerzbank is a leading international commercial bank with branches and offices in almost 50 countries. The world is changing and becoming digital, and so are we. We are leaving the traditional bank behind us and moving forward as a digital enterprise. This is exactly why we need talented people to join us on this journey. We work in cross-location, international teams using agile methodologies.

What do we offer? πŸ₯³

Of course, we offer development plans for employees, life insurance, flexible working hours, integration events and much more 😎

Important! Please add the clause to your CV. πŸ“‘ You can find it at the end of the advert.

* * *

🚩 Please add the following clause to your application:
1. I consent to the processing of personal data contained in this document by Commerzbank Aktiengesellschaft with its registered office at Kaiserstrasse 16, 60311 Frankfurt am Main, Germany, operating through the Branch in Poland with its registered office in Łódź, 91-203 Łódź, ul. Wersalska 6, KRS 0000631053, for the purposes of the current recruitment process and of future recruitment for a period of 6 months, in accordance with Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (RODO), and in accordance with the Act of 10 May 2018 on the protection of personal data (Journal of Laws of 2018, item 1000). I have provided my personal data voluntarily and I declare that it is truthful. I have the right to withdraw this consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal.

2. I have read the content of the information clause, including information about the purpose and methods of processing my personal data, and about my right to access, correct, rectify and delete my personal data.