Posted 21h ago

Senior Data Engineer - Integrations

@ UnityAI
Nashville, Tennessee, United States
Onsite · Full Time
Responsibilities: Build pipelines, map data, develop APIs
Requirements Summary: 5+ years software engineering with focus on data integrations, pipelines, or API development; strong TypeScript, Node.js, Python; SQL (Postgres/BigQuery); GCP (GKE, BigQuery, Airflow); familiarity with FHIR R4; REST or event-driven APIs.
Technical Tools Mentioned: GKE, BigQuery, Airflow, Python, TypeScript, Node.js, PostgreSQL
Job Description

Why we’re hiring

After a successful Series A fundraise, our engineering team is growing to meet the demands of an expanding partner network. We’re looking for a Senior Data Engineer to build and maintain the data connectors that power our core agent platform. 

You’ll own data pipelines that ingest appointment and scheduling data from external EMR and scheduling systems, map that data to FHIR R4 resources, and keep those pipelines healthy as our partner network grows. 

You’ll also create APIs that help us scale our Connector ecosystem across internal and external use cases.


In this role, you’ll help lead:

Data Integrations & Pipelines

    • Design, build, and maintain data pipelines that ingest appointment and scheduling data from external EMR and scheduling systems.
    • Map ingested data to FHIR R4 resources, ensuring accuracy, completeness, and compliance with healthcare data standards.
    • Monitor pipeline health and reliability as our partner network scales; triage and resolve integration failures quickly.
    • Contribute to a growing library of reusable Connector components that can be deployed across new partner integrations with minimal lift.
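To give a flavor of the mapping work described above, here is a minimal sketch of translating a scheduling record into a FHIR R4 Appointment resource. The vendor payload shape and its field names are hypothetical (real EMR feeds vary widely), and the FHIR Appointment interface is trimmed to a few fields for illustration:

```typescript
// Hypothetical vendor payload -- real EMR and scheduling feeds differ per partner.
interface VendorAppointment {
  apptId: string;
  patientId: string;
  startIso: string; // ISO 8601 timestamp, e.g. "2025-05-01T09:00:00Z"
  endIso: string;
  state: "scheduled" | "cancelled";
}

// Small subset of the FHIR R4 Appointment resource.
interface FhirAppointment {
  resourceType: "Appointment";
  id: string;
  status: "booked" | "cancelled"; // FHIR R4 defines more statuses (proposed, fulfilled, noshow, ...)
  start: string;
  end: string;
  participant: Array<{
    actor: { reference: string }; // e.g. "Patient/123"
    status: "accepted";
  }>;
}

// Map one vendor record to its FHIR representation.
function toFhirAppointment(src: VendorAppointment): FhirAppointment {
  return {
    resourceType: "Appointment",
    id: src.apptId,
    status: src.state === "scheduled" ? "booked" : "cancelled",
    start: src.startIso,
    end: src.endIso,
    participant: [
      { actor: { reference: `Patient/${src.patientId}` }, status: "accepted" },
    ],
  };
}
```

In practice each Connector would carry its own vendor-specific mapping plus validation against the FHIR R4 value sets, but the shape of the work is the same: normalize, map, and reference-link.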

API & Platform Development

    • Build and maintain internal and external-facing APIs that expose Connector capabilities to our agent platform and partner ecosystem.
    • Design for scalability and extensibility so that onboarding a new EMR or scheduling vendor is a repeatable, well-documented process.
    • Collaborate with product and client operations teams to translate integration requirements into clean, well-tested implementations.

Infrastructure & Reliability

    • Work within our Google Cloud Platform environment (GKE, BigQuery, Cloud Services) to deploy and operate integration workloads.
    • Instrument pipelines with appropriate logging, alerting, and monitoring so issues surface before they impact customers.
    • Participate in on-call rotations and incident response for integration-related production issues.

Engineering Standards

    • Write clean, well-tested TypeScript, Node.js, and Python code with an emphasis on maintainability and clarity.
    • Author technical documentation, runbooks, and architectural decision records that help the team move fast safely.
    • Contribute to code review culture and help raise the bar for integration quality across the engineering org.

What you’ll need to be successful:

We value demonstrable knowledge and the ability to deliver real-world solutions above formal education. Candidates should have experience with, or an interest in learning, the following:


  • 5+ years of software engineering experience, with a meaningful portion focused on data integrations, data pipelines, or API development.
  • Strong TypeScript, Node.js, and Python proficiency: you write idiomatic, testable code and know when to reach for a library versus rolling your own.
  • Strong SQL (Postgres and/or BigQuery) skills with experience in data modeling, querying, and pipeline validation.
  • Familiarity with Google Cloud Platform services, particularly GKE, BigQuery, Airflow, and general cloud infrastructure patterns.
  • Working knowledge of FHIR R4 or a demonstrated ability to learn and apply healthcare interoperability standards quickly.
  • Experience designing and building REST or event-driven APIs intended for both internal consumers and external partners.
  • Highly organized with strong debugging instincts and a bias toward building observable, well-instrumented systems.
  • Comfortable operating in a startup environment where requirements evolve and you’re expected to own outcomes, not just tasks.

    Preferred: 
  • Experience integrating with major EMR or healthcare scheduling platforms (Epic, Cerner, Athenahealth, etc.).
  • Familiarity with healthcare data standards beyond FHIR.
  • Experience operating production workloads in Kubernetes (GKE).


Additional values, skills, and mindsets include:


  • Do good: You solve problems and do the right things to move healthcare forward. You dig deep to enable our company and product’s success, knowing this will help our customers do the important work of caring for patients.
  • Extreme ownership: You own everything in your world. When you find an issue that needs to be handled, your first thought is to take ownership and get it done.
  • Humble nerd: You're unreasonably curious—about the tech, about healthcare, about our customers. You set your ego aside and learn from others to do the best job possible.
  • Velocity and delivery: You ship solutions efficiently and effectively. You'd rather get something working than debate the perfect architecture.
  • Adaptability: You're comfortable with ambiguity and can operate independently while knowing when to pull others in. Your key priority may change tomorrow; if that doesn't excite you, this may not be the right fit.


What we offer

  • Competitive compensation package, including equity options
  • Medical, Dental, Vision, HSA, Life, Accident, Hospital, and Critical Illness insurance
  • Flexible work hours and vacation policy
  • 401(k)
  • Free lunch when working onsite
  • A dynamic, innovative, and supportive work environment where your contributions have a direct impact on the future of healthcare