Posted 6d ago

Senior Data Engineer - Starburst

@ Dataeconomy
Pune City, Maharashtra, India
Hybrid · Full Time
Responsibilities: Build data pipelines, integrate data from multiple sources, ensure data performance and reliability
Requirements Summary: 8+ years of data engineering experience; strong Starburst/Trino/Presto; SQL performance optimization; data lakes (AWS S3/ADLS/HDFS); Python/Java/Scala; ETL/ELT; Spark/Kafka; cloud platforms.
Technical Tools Mentioned: Starburst, Trino, Presto, SQL, Python, Java, Scala, ETL/ELT, Spark, Kafka, Iceberg, Delta Lake, AWS, Azure, GCP, S3, ADLS, HDFS
Job Description

Job Role: Senior Data Engineer – Starburst

Experience: 8+ Years

Location: Pune (Hybrid/Remote depending on project)

Key Skills Required:

· Strong experience with Starburst / Trino / Presto
· Hands-on experience in Data Engineering and Data Platforms
· Expertise in SQL and query performance optimization
· Experience working with Data Lakes (AWS S3 / ADLS / HDFS)
· Good programming knowledge in Python / Java / Scala
· Experience with ETL/ELT pipelines and distributed data processing
· Familiarity with Big Data tools like Spark, Kafka, Iceberg, Delta Lake
· Experience with AWS / Azure / GCP cloud platforms


Responsibilities:

· Build and optimize data pipelines and federated data queries using Starburst
· Integrate data from multiple sources (databases, data lakes, warehouses)
· Ensure data performance, reliability, and governance
· Work with analytics, AI/ML, and business teams to deliver scalable data solutions


