Data Engineer (AWS/Databricks/PySpark)
Meyrin
Key information
- Publication date: 26 July 2025
- Workload: 100%
- Contract type: Permanent position
- Place of work: Meyrin
Job summary
Talan is an international consulting group driving transformation through technology. This is a fantastic opportunity to work in an innovative, dynamic environment.
Tasks
- Design and optimize complex data pipelines on AWS and Databricks.
- Develop Python/PySpark processes for data ingestion and transformation.
- Collaborate with cross-functional teams in an international setting.
Skills
- 5+ years as a Data Engineer with strong Python and Databricks skills.
- Solid understanding of AWS services like Glue and S3.
- Analytical mindset with fluency in French and professional English.
Company Description
Talan is an international consulting and technology expertise group that accelerates the transformation of its clients through the levers of innovation, technology, and data. For over 20 years, Talan has been advising and supporting companies and public institutions in implementing their transformation and innovation projects in France and internationally.
Present in 18 countries across five continents, the Group, certified Great Place To Work, expects to have more than 7,200 employees by the end of 2024, aims to achieve a turnover of 850 million euros that same year, and intends to exceed the one-billion-euro mark by 2025.
Equipped with a Research and Innovation Center, Talan places innovation at the heart of its development and operates in areas of technological change such as Artificial Intelligence, Data Intelligence, and Blockchain, serving the growth of large groups and mid-sized companies through a committed and responsible approach. www.talan.com
By placing "Positive Innovation" at the heart of its strategy, the Talan Group is convinced that it is by serving humans that technology multiplies its potential for society.
Job Description
We are looking for a senior Data Engineer to join an international company based in Geneva as part of a strategic project to modernize its data platform.
The role sits within an advanced cloud environment with a strong culture of performance, real-time data, and automation.
🔍 Your responsibilities
You will join a multidisciplinary data team and contribute to the design, productionization, and optimization of complex data pipelines on AWS and Databricks.
In this capacity, you will work on:
- Developing Python / PySpark processes for data ingestion, transformation, and exposure (illustrated by the sketch after this list)
- Setting up robust pipelines on Databricks (Delta Lake, Notebooks, orchestrations)
- Optimizing the performance and scalability of data flows
- Interacting with technical and business teams in an international, demanding, and data-oriented environment
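To make these responsibilities concrete, below is a minimal PySpark sketch of the kind of ingestion-to-Delta step this role involves. It is illustrative only: the bucket path, column names, and table name (s3://example-raw-bucket/orders/, order_id, analytics.orders_clean) are hypothetical placeholders, and in a Databricks notebook the preconfigured `spark` session would normally be used directly.

```python
# Minimal sketch: ingest raw CSV from S3, clean it, and expose it as a
# Delta table. All paths, columns, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

# Databricks provides a ready-made `spark` session; building one here
# keeps the sketch self-contained outside a notebook.
spark = SparkSession.builder.appName("orders-ingestion").getOrCreate()

# Ingestion: read raw CSV files landed in S3 (hypothetical bucket/prefix).
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Transformation: cast types, deduplicate, and stamp the load time.
clean = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .withColumn("_ingested_at", F.current_timestamp())
)

# Exposure: write a Delta table for downstream consumers (Delta is built
# into Databricks; elsewhere it requires the delta-spark package).
(
    clean.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.orders_clean")
)
```

In production, a step like this would typically be scheduled as a Databricks job or workflow, with the transformation logic factored into unit-testable functions rather than kept inline.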
Qualifications
- Minimum 5 years of experience as a Data Engineer
- Excellent mastery of Python and PySpark
- Significant experience with Databricks
- Good knowledge of the AWS environment (Glue, S3, IAM, etc.)
- Analytical mindset, rigor, autonomy
- Languages: fluent French and professional English