Mid Data Engineer



Apache Airflow

Job description

  • Design and build scalable, reliable data pipelines (ETL) for our data platform;
  • Constantly evolve the data models and schema design of our Data Warehouse to support self-service needs;
  • Work cross-functionally with various teams, creating solutions that deal with large volumes of data;
  • Work with the team to set and maintain standards and development practices;
  • Be a keen advocate of quality and continuous improvement.


Requirements
  • 2+ years of experience building and maintaining data pipelines in a custom or commercial ETL tool (e.g. SSIS, Talend, Informatica); Airflow experience is a plus;
  • 2+ years of experience working in a Data Warehouse environment with varied forms of data infrastructure, including relational databases, Hadoop, and column stores;
  • Proficient in creating and evolving dimensional data models & schema designs to improve accessibility of data and provide intuitive analytics;
  • Experience working with cloud environments (e.g. AWS, GCP, Azure) is a plus;
  • Proficient in SQL;
  • Basic understanding of Hadoop/BigData ecosystem (HDFS, Hive);
  • Proficient in one of the following programming languages: C#, Java, Python;
  • Basic knowledge in distributed computing (Spark);
  • Experience working with a BI reporting tool (e.g. Tableau, QlikView, Power BI, Looker) is a plus;
  • Basic understanding of continuous delivery principles: version control, unit and automated tests;
  • Intermediate level in English, both written and spoken;
  • Good analytical and problem-solving skills and the ability to work in a fast-moving operational environment.

Want to apply?
