1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet functional/non-functional business requirements
3. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies
4. Combine raw information from different sources
5. Explore ways to enhance data quality and reliability
6. Collaborate with data scientists and architects on several projects
Only candidates who meet the following criteria can apply:
1. are available for a full-time (in-office) internship
2. can start the internship between 21st Jul’23 and 25th Aug’23
3. are available for a duration of 2 months
4. have relevant skills and interests