This is our story
Born from our Care Transformation and Innovation team, DT&I was created to expand HCA Healthcare’s digital and AI strategy. We’re building intelligent systems, enhancing workflows, and driving innovation across a nationwide network. If you’re ready to build technology that saves lives and improves care, your future starts here.
What you will accomplish in this role
Job Summary
The Staff Data Engineer is responsible for the design, development, testing, implementation, documentation, and maintenance of NextGen solutions for the Google Cloud Platform (GCP) Cloud Data Office initiatives. This role requires working closely with data teams, frequently in a matrixed environment as part of a broader project team. The organization's culture emphasizes teamwork, so social and interpersonal skills are as important as technical capability. Because GCP technology and practice are emerging and fast-evolving, the position requires staying well-informed of technological advancements and putting new innovations into effective practice.
As a Staff Data Engineer, you will collaborate closely with all team members to create a modular, scalable solution that addresses current needs and serves as a foundation for future success. The position will be critical in building the team's engineering practices in test-driven development, continuous integration, and automated deployment, and calls for a hands-on team member who actively coaches the team to solve complex problems.
In addition, this position requires a candidate who can analyze business requirements and design, construct, test, and implement solutions with minimal supervision. The candidate will have a record of successful project accomplishments in a fast-paced, mixed team (consultant and employee) environment.
The primary responsibility of this position is to implement and scale best practices of processing multimodal data including images, documents, unstructured text, and sensor data. Experience with multimodal data is a must for this role and should be demonstrated through prior projects.
What You'll Do:
- Develop, maintain, and optimize streaming, near real-time, and batch GCP data pipelines for enterprise-wide analysis of structured, semi-structured, and unstructured data.
- Collaborate closely with the Lead Architect and Product Owner to define, design, and build new features and improve existing data products.
- Lead, contribute to, and participate in peer code reviews.
- Participate in on-call support rotation.
- Adhere to established development guidelines.
- Translate business requirements into technical design specifications.
- Mentor other Data Engineer colleagues.
- Work with Project Managers to estimate, establish, and meet target dates.
- Work independently and complete tasks on schedule by exercising strong judgment and critical thinking skills.
- Create and maintain technical documentation, including source-to-target mappings, job scheduling and dependency details, and business-driven transformation rules.
- Lead and participate in the deployment, change management, configuration, administration, and maintenance of deployment processes and systems.
- Lead and contribute to technical group discussions to adopt innovative technologies, improve development practices, and reduce technical debt.
- Parse and structure multimodal datasets using AI and other cloud services.
What qualifications you will need:
- Bachelor's degree preferred.
- 7+ years of experience in Data Warehouse ETL/ELT Data Engineering required.
- 3+ years of experience in Google Cloud Platform (GCP) Data Engineering preferred.
- 3+ years of experience in Healthcare IT preferred.
- Experience with GCP services including Google Cloud Storage (GCS), BigQuery, Cloud Composer (Airflow), Cloud Functions, Dataflow, and Dataproc.
- Experience writing optimized BigQuery SQL transformation queries and scripts.
- Experience with raw data formats such as JSON, Avro, and Parquet.
- Experience with Oracle, SQL Server, Teradata, and other database platforms.
- Experience writing and maintaining Unix/Linux and Python scripts.
- Experience with GitHub source control and CI/CD workflows.
- Knowledge of issue tracking tools such as Jira and ServiceNow.
- Ability to troubleshoot, maintain, reverse engineer, and optimize existing data pipelines.
- Ability to analyze and interpret complex data and offer solutions to complex problems.
- Ability to work independently on assigned tasks.
- Strong written and verbal communication skills, including the ability to explain complex technical issues in a way that non-technical people can understand.
- Excellent problem-solving and critical thinking skills.
- Hands-on experience parsing documents and storing data in a structured format.
- Experience with unstructured data.
- Knowledge of IT governance and operations.
At HCA Healthcare, we are committed to fostering a culture of growth that allows you to build the career of a lifetime. We encourage you to apply for our Staff Data Engineer opening today. We review all applications promptly, and qualified candidates will be contacted to continue the process. Join us!
We are an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.