Job Summary
A company is looking for a Big Data Engineer to build and maintain data pipelines using modern big data technologies.
Key Responsibilities
- Design, develop, and maintain ETL processes and data pipelines for both structured and unstructured data
- Write and optimize complex SQL queries to support analytics and reporting needs
- Implement distributed data processing solutions using Apache Spark and Kafka
Required Qualifications
- Bachelor's degree in Computer Science, Data Engineering, or a related field, or 1-3 years of relevant experience
- Experience in ETL design, development, and data warehousing concepts
- Proficiency in Python programming with a focus on object-oriented principles
- Experience with distributed data processing frameworks like Apache Spark and Apache Flink
- Familiarity with AWS or other major cloud platforms and Linux/Unix shell scripting