Join a diverse team of engineers, experts, and business leaders in building the future of Artificial Intelligence.
Create and maintain optimal data pipeline architecture.
Build the infrastructure required for optimal ETL of data from a variety of data sources using AWS technologies.
Build large-scale batch and real-time data pipelines with data processing frameworks like Spark, Storm, or AWS-native services.
Good understanding of Java / Python.
B.Tech / M.Tech in CS with a specialization in Data Engineering or Big Data.
Understanding of Spark and other big data technologies.
Experience in data modeling and in ETL design, implementation, and maintenance is an added advantage.
You want to work in a small, agile team.
You mentor other developers when needed.
You work hard and don’t need much oversight.
You like variety in your projects.
You want to be proud of what you do at your job.
Interested applicants should send their resume and cover letter to firstname.lastname@example.org.