Transform existing complex ETL flows into Python/Spark processes.
Develop new ETL flows in Python/Spark based on functional analysis.
The delivered solutions must be resilient, have minimal latency, and be deployed and configured on the open IT platform.
- Improve the setup around the open IT platform.
- Provide support to the other developers within the team.
- Contribute to the internal Python/Spark community and the move towards the Cloud.
- Python: a minimum of 3 years of experience
- DevOps: CI/CD pipeline setup
- ETL tools: for example DataStage, Informatica
- Apache Hadoop: Hive
This role involves frequent interaction with various teams and profiles.
Strong communication skills in English are therefore a must.
Chaussée de La Hulpe 185, 1170 Brussels