Job Role
- Create and maintain optimal data pipeline architecture.
- Build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from a variety of data sources using AWS technologies.
- Build large-scale batch and real-time data pipelines using data processing frameworks such as Spark or Storm, or equivalent AWS technologies.
Skills Required
- Good understanding of Java or Python.
- B.Tech / M.Tech in Computer Science with a major in Data Engineering or Big Data.
- Understanding of Spark and other big data technologies.
- Experience in data modeling and in ETL design, implementation, and maintenance is an added advantage.
Personality
- You want to work in a small, agile team.
- You mentor other developers when needed.
- You work hard and don’t need much oversight to deliver.
- You like variety in your projects.
- You want to be proud of what you do at your job.