Bridgeway is seeking a Senior Data Engineer to design, develop, and maintain our data warehouse infrastructure. This role involves working closely with analysts, engineers, and other stakeholders to shape our data architecture, ensure secure and efficient data pipelines, and enable advanced analytics across the organization. The ideal candidate will have a strong background in data engineering, data warehousing, and ELT processes, along with a passion for optimizing data systems.
This is a remote position, with preference given to East Coast candidates.
Key Responsibilities:
- Design, develop, and maintain a scalable lakehouse architecture, including a medallion (bronze/silver/gold) data model optimized for analytics and AI/ML consumption (a minimal sketch of such a flow appears after this list).
- Design, implement, and operate ELT pipelines, including workflow orchestration, scheduling, and monitoring, to ensure reliable and scalable execution.
- Establish data quality, testing, and observability practices, and proactively monitor and resolve data and automation issues to ensure platform reliability and trust.
- Ensure data security and compliance, including role-based access controls, encryption, masking, and governance best practices for the compliant handling of sensitive information.
- Optimize performance of data workflows and storage for cost efficiency and speed.
- Partner with engineers, analysts, and business stakeholders to meet data needs, balancing cost, performance, simplicity, and time-to-value.
- Provide technical leadership and mentorship to team members, guiding best practices, skill development, and cross-functional collaboration, and document standards for the team.
- Enable AI/ML use cases through well-structured data models, feature availability, and platform integrations using tools such as Databricks Vector Search and Model Serving.
- Develop and maintain data pipelines using version control and CI/CD best practices in a collaborative engineering environment.
- Collaborate within an Agile-Scrum framework and develop comprehensive technical design documentation to ensure efficient and successful delivery.
- Serve as a trusted expert on organizational data domains, processes, and best practices.
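To give candidates a concrete sense of the lakehouse work described above, here is a minimal sketch of a medallion (bronze/silver/gold) ELT flow in PySpark. Everything in it is an illustrative assumption for this posting: the landing path, table names, and columns (orders, order_id, order_ts, amount, customer_id) are hypothetical, not Bridgeway schemas.

```python
# Minimal medallion (bronze/silver/gold) ELT sketch in PySpark.
# All paths, table names, and columns are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw source data as-is, stamped with ingestion metadata.
bronze = (
    spark.read.json("/landing/orders/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: cleanse and conform (deduplicate, enforce types, drop bad rows).
silver = (
    spark.table("bronze.orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: business-level aggregates ready for BI and AI/ML consumption.
gold = (
    spark.table("silver.orders")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.sum("amount").alias("daily_revenue"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue")
```

In practice, a flow like this would run under an orchestrator with scheduling, data-quality checks, and monitoring attached, which is the operational work the responsibilities above describe.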
Requirements:
- 5+ years of hands-on data engineering experience required
- 3+ years of experience building and operating data pipelines on a modern lakehouse platform (e.g., Databricks with Unity Catalog, Delta Live Tables, and Asset Bundles), including data modeling, governance, and CI/CD deployment patterns required
- 3+ years of experience with analytical SQL (ANSI SQL/T-SQL/Spark SQL) and Python for data engineering, including pipeline construction, transformation logic, and automation required (a brief example of this kind of work follows this list)
- Strong communication skills, with the ability to collaborate with and influence engineering, analytics, and business stakeholders, required
- Experience with streaming and ingestion tools such as Kafka, Kinesis, Event Hubs, Debezium, or Fivetran preferred
- Knowledge of DAX, LookML, or dbt; Airflow, Dagster, or Prefect; Terraform; Azure DevOps; Power BI, Looker, or Tableau; and GitHub Copilot is a plus
- Bachelor’s degree in Computer Science, Information Technology, or a related field required; Master’s degree preferred
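As a rough illustration of the analytical SQL and Python expectations in the requirements above, the sketch below runs a windowed aggregation through Spark SQL from Python. It reuses the hypothetical silver.orders table from the earlier medallion sketch; the query and output table are likewise assumptions, not real Bridgeway assets.

```python
# Analytical Spark SQL driven from Python; tables and columns are
# the same illustrative assumptions used in the medallion sketch.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("analytical-sql-sketch").getOrCreate()

# Rank customers by total revenue using an aggregate plus a window function,
# the kind of transformation logic this role builds into pipelines.
top_customers = spark.sql("""
    SELECT customer_id,
           SUM(amount)                              AS revenue,
           RANK() OVER (ORDER BY SUM(amount) DESC)  AS revenue_rank
    FROM silver.orders
    GROUP BY customer_id
""")
top_customers.write.format("delta").mode("overwrite").saveAsTable(
    "gold.customer_revenue_rank"
)
```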