Key Responsibilities
- Design, develop, and optimize data pipelines using Databricks
- Build scalable ETL/ELT solutions for large datasets
- Work closely with data engineers, architects, and stakeholders
- Ensure data quality, performance, and reliability
- Take ownership of deliverables end-to-end and drive solutions independently
- Participate in design discussions and technical decision-making
Required Skills & Experience
- Total Experience: Minimum 8 years
- Databricks: At least 5 years of hands-on experience (mandatory)
- Strong experience as a Data Engineer or Data Warehouse professional
- Proficiency in Apache Spark, SQL, and Python or Scala
- Experience working with large-scale data systems
- Strong understanding of data modeling, performance tuning, and optimization
- Exposure to cloud platforms (AWS / Azure preferred)
Behavioral & Professional Expectations
- Good Communication Skills: Ability to communicate clearly with both technical and non-technical stakeholders
- Career Stability: Demonstrates long-term commitment and consistent career progression
- Ownership: Takes full responsibility for assigned modules and delivers with accountability