Job Summary
A company is looking for a Data Engineer to build and optimize data pipelines that support retention strategy and campaign performance analytics.
Key Responsibilities
- Design, develop, and optimize ETL pipelines using Databricks and Snowflake (a brief sketch of this kind of pipeline follows this list)
- Collect, transform, and load data into the warehouse and reporting environments
- Automate data dependencies and tasks, ensuring seamless data flow and pipeline reliability
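The following is a minimal, illustrative sketch of the kind of ETL step described above, assuming a Databricks environment with PySpark and the Snowflake Spark connector available. The storage path, table name, column names, and connection options are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("campaign-etl").getOrCreate()

# Extract: raw campaign events from cloud storage (hypothetical path)
events = spark.read.parquet("s3://example-bucket/raw/campaign_events/")

# Transform: daily campaign performance aggregates
daily_perf = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("campaign_id", "event_date")
    .agg(
        F.count("*").alias("impressions"),
        F.sum(F.col("clicked").cast("int")).alias("clicks"),
    )
)

# Load: write the aggregates to a Snowflake reporting table
# (all connector options below are placeholders)
sf_options = {
    "sfURL": "example.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "REPORTING",
    "sfWarehouse": "ETL_WH",
}

(
    daily_perf.write
    .format("snowflake")  # short name on Databricks; elsewhere the full
                          # "net.snowflake.spark.snowflake" format may be needed
    .options(**sf_options)
    .option("dbtable", "CAMPAIGN_DAILY_PERFORMANCE")
    .mode("overwrite")
    .save()
)
```

In practice a job like this would be scheduled and monitored through a workflow tool (for example Databricks Workflows or a similar orchestrator), which is what the automation and pipeline-reliability responsibility refers to.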
Required Qualifications
- 7+ years of experience in data engineering
- Proficiency in Python, SQL, Spark, and Databricks
- Experience working with Snowflake and cloud-based data platforms (AWS, Azure, or GCP)
- Strong understanding of ETL processes, data pipelines, and workflow automation
- Hands-on experience with data partitioning strategies for performance optimization (illustrated in the sketch after this list)
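As a small illustration of the partitioning requirement, the sketch below shows date-based partitioning in PySpark with Parquet output; the paths and column names are assumptions for the example, not part of the role description.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("partitioning-example").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/campaign_events/")

# Partition the curated table on event_date so downstream queries that
# filter by date only scan the matching partitions (partition pruning).
(
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .write
    .partitionBy("event_date")
    .mode("overwrite")
    .parquet("s3://example-bucket/curated/campaign_events/")
)

# A query like this then reads a single partition instead of the full table:
one_day = (
    spark.read.parquet("s3://example-bucket/curated/campaign_events/")
    .where(F.col("event_date") == "2024-01-01")
)
```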