Posted 1d ago

Principal Data Engineer

@ UnitedHealth Group
United States
Hybrid · Full Time
Responsibilities: Designing models, building pipelines, governing data
Requirements Summary: 10+ years in software/data engineering; cloud platforms (Azure/GCP); Snowflake, Databricks; SQL and Python; data modeling, governance, and pipelines.
Technical Tools Mentioned: Azure, GCP, Snowflake, Databricks, SQL, Python
Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.  


Primary Responsibilities:

  • Data Architecture & Canonical Model Design
    • Design, build, and maintain canonical data models that serve as the single source of truth across analytics and AI use cases
    • Define and enforce data contracts between upstream systems and downstream consumers
    • Handle schema evolution, versioning, and drift management proactively
    • Ensure alignment between business semantics and physical data models
  • Data Engineering & Pipeline Development
    • Build scalable and efficient data pipelines using Snowflake, SQL, and Python
    • Process both structured and semi-structured data (JSON, logs, API payloads)
    • Optimize transformations for performance, cost, and scalability
    • Implement reusable, modular pipeline components
  • Advanced Data Modeling for Analytics
    • Design dimensional and normalized data models for reporting, ML, and AI workloads
    • Optimize data models for BI tools, self-service analytics, and LLM consumption
    • Develop metric-layer ready models to ensure consistency across reporting
  • Data Governance & Quality
    • Implement data validation, monitoring, and quality checks across pipelines
    • Build frameworks to detect schema drift and data inconsistencies
    • Ensure adherence to data governance, lineage, and auditability standards
    • Support compliance requirements (PHI/PII handling, access control, traceability)
  • AI/ML & GenAI Enablement
    • Structure data to support RAG pipelines, embeddings, and LLM-based applications
    • Enable feature-ready datasets for ML and AI use cases
    • Collaborate with AI/ML engineers to ensure data readiness for agentic workflows
  • Performance Optimization & Platform Engineering
    • Optimize Snowflake performance (clustering, partitioning, query tuning, cost management)
    • Build frameworks for data observability, monitoring, and alerting
    • Improve pipeline reliability, scalability, and fault tolerance
  • Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:

  • Bachelor’s degree in Computer Science, Engineering, Data Engineering, or a related technical field (or equivalent practical experience)
  • 10+ years of overall experience in software engineering and data engineering roles, with significant experience designing and delivering large-scale data platforms in enterprise environments
  • Solid hands-on experience with cloud-based data platforms (Azure and/or GCP), including data storage, processing, orchestration, and monitoring services
  • Deep experience with ETL/ELT frameworks, batch and streaming data processing, and distributed data systems
  • Experience collaborating with Analytics, BI, Data Science, and Product teams to deliver trusted, reusable, and performant data assets
  • Proven expertise with Snowflake and Databricks
  • Proven expertise in data engineering architecture and solution design, including building, optimizing, and scaling high-volume, high-availability data pipelines
  • Advanced proficiency in SQL and at least one programming language such as Python for data pipeline and platform development
  • Solid knowledge of data quality, data observability, lineage, and metadata management, and implementing governance controls in enterprise data ecosystems
  • Demonstrated ability to work across cloud and on-prem ecosystems, supporting hybrid data architectures at scale


At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.