Posted 1w ago

Sr. Data Architect, Enterprise Data Platform

@ Healthcare of Ontario Pension Plan
Toronto, Ontario, Canada
Onsite · Full Time
Responsibilities: translate requirements, define patterns, lead governance
Requirements Summary: Senior Data Architect with Snowflake, Python, PySpark, AWS; strong governance, security, and collaboration with stakeholders; 12+ years data experience; 4+ years Snowflake.
Technical Tools Mentioned: Snowflake, Python, PySpark, SQL, AWS
Job Description

Why you’ll love working here:

  • high-performance, people-focused culture

  • our commitment that equity, diversity, and inclusion are fundamental to our work environment and business success, which helps employees feel valued and empowered to be their authentic selves

  • learning and development initiatives, including workshops, Speaker Series events and access to LinkedIn Learning, that support employees’ career growth

  • membership in HOOPP’s world class defined benefit pension plan, which can serve as an important part of your retirement security

  • competitive, 100% company-paid extended health and dental benefits for permanent employees, including coverage supporting our team's diversity and mental health (e.g., gender affirmation, fertility and drug treatment, psychological support benefits of $2,500 per year, parental leave top-up, and a health spending account).

  • optional post-retirement health and dental benefits subsidized at 50%

  • yoga classes, meditation workshops, nutritional consultations, and wellness seminars

  • the opportunity to make a difference and help take care of those who care for us, by providing a financially secure retirement for Ontario healthcare workers

 

Job Summary

The Sr. Data Architect is responsible for the strategy, architecture, implementation, and optimization of scalable data architecture and data pipelines, with a primary focus on the Snowflake platform. This role owns key Data Platform and Snowflake capabilities, including access management (RBAC, roles, grants), performance and cost tuning (warehouses, clustering, query optimization), and enablement of AI/ML capabilities. The role is also responsible for the design and implementation of production-grade data pipelines and data products using Python and PySpark, partnering with business stakeholders (for example, Investment Management) and technology teams to deliver secure, governed, and high-performing solutions in a fast-moving environment.

This role owns the enterprise architecture of the Enterprise Data Platform: defining the target-state architecture, establishing reference architectures and standards, shaping multi-quarter roadmaps, and leading architecture governance to ensure solutions remain aligned with enterprise security, risk, and compliance requirements across domains. The role applies enterprise architecture frameworks and governance practices (e.g., reference architectures, standards, and decision records) to drive consistent outcomes across teams.

What you will do:

  • Work with Investment Management stakeholders to understand requirements and define solutions to address them.

  • Work with product and domain stakeholders to translate analytical and operational requirements into Snowflake-first data solutions (schemas, marts, data products, and pipelines), iterating quickly as needs evolve.

  • Partner with Enterprise Architecture to define target-state patterns for Snowflake (data modeling, layering/medallion, data sharing, and secure-by-design standards) and ensure alignment to enterprise standards.

  • Define and maintain Snowflake platform reference architectures and reusable implementation patterns (e.g., onboarding templates, naming conventions, security patterns, and data product standards) and socialize them across engineering teams.

  • Drive architecture governance for major data initiatives by documenting architecture decisions and tradeoffs, reviewing designs for alignment to standards, and escalating risks or gaps with pragmatic mitigation plans.

  • Provide technical leadership across teams by aligning stakeholders on standards and outcomes, mentoring engineers on platform best practices, and representing the data platform in cross-functional forums.

  • Own Snowflake platform configuration and optimization: warehouses & sizing, resource monitors, query profiling, clustering/materialized views where appropriate, and cost/performance tuning.

  • Implement Snowflake access management and security controls (RBAC/roles, grants, least privilege, secure views, masking/row access policies as applicable) and support governed subscription/onboarding processes.

  • Design and build scalable data architectures and data products with Snowflake as the core platform, balancing performance, cost, reliability, and governance.

  • Enable AI/ML capabilities by supporting AI/ML agents and governed semantic models for analytics and decisioning.

  • Participate in design sessions and code reviews; troubleshoot production issues end-to-end (pipelines, Snowflake objects, and workloads) and iterate quickly to restore service and improve reliability.

  • Define and implement consumption patterns for BI and analytics (e.g., Power BI), including semantic models, performance best practices, and governed access to Snowflake-backed datasets.
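To give a feel for the cost/performance ownership described above: standard Snowflake warehouses consume credits that double with each size step (X-Small = 1 credit/hour), billed per second with a 60-second minimum. A minimal, illustrative Python sketch of that billing model (not part of the posting; names and the example query are hypothetical):

```python
# Illustrative sketch: estimating Snowflake warehouse credit consumption.
# Standard warehouses double in credits/hour with each size step
# (X-Small = 1 credit/hour); billing is per-second with a 60-second minimum.

SIZES = ["X-Small", "Small", "Medium", "Large", "X-Large", "2X-Large"]

def credits_per_hour(size: str) -> int:
    """Credits per hour for a standard warehouse of the given size."""
    return 2 ** SIZES.index(size)

def estimate_credits(size: str, runtime_seconds: float) -> float:
    """Credits consumed by one run, applying the 60-second billing minimum."""
    billed_seconds = max(runtime_seconds, 60.0)
    return credits_per_hour(size) * billed_seconds / 3600.0

# A hypothetical 90-second query on a Medium warehouse:
# 4 credits/hour * 90/3600 hours = 0.1 credits
print(estimate_credits("Medium", 90))
```

Sizing decisions then become a tradeoff: a larger warehouse may finish a query fast enough that the doubled rate still costs less overall, which is exactly the kind of analysis the tuning responsibilities above imply.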

What you bring:

  • 12+ years of proven hands-on experience as a Data Architect building modern data platforms and pipelines on the AWS platform.

  • 4+ years of deep hands-on Snowflake experience, including access management (RBAC/roles/grants), data modeling, performance and cost tuning (warehouse sizing, query profiling), and platform best practices.

  • Strong experience with Snowflake Intelligence components, including AI/ML agents and semantic models, and integrating these capabilities into governed analytics and operational workflows.

  • Hands-on data engineering experience with Python and PySpark/Spark, building, testing, and operating scalable ETL/ELT pipelines.

  • Hands-on experience with AWS platform capabilities in general, and data architecture components specifically.

  • Strong problem-solving skills with a focus on automation, including CI/CD, infrastructure as code (IaC), DevOps practices, and version control.

  • Proficiency in SQL and Python, with strong data modeling skills and a track record of performance optimization (including Snowflake query tuning and workload management).

  • Experience in data validation, cleansing, and quality checks to maintain data integrity.

  • Ability to thrive in a fast-moving environment: comfortable with ambiguity, takes end-to-end ownership, and can deliver iteratively while maintaining quality and security.

  • Excellent communication and collaboration skills to work with stakeholders and teams.
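The data validation, cleansing, and quality-check experience listed above often reduces in practice to a small set of declarative rules applied to each record. A minimal, hypothetical sketch in pure Python (no specific framework assumed; field names and rules are illustrative only, not from the posting):

```python
# Hypothetical sketch of row-level data quality checks of the kind the
# posting mentions (validation, cleansing, integrity). All names are
# illustrative assumptions, not part of any real schema.

from typing import Callable

# Each rule is a (name, predicate over a row dict) pair.
RULES: list[tuple[str, Callable[[dict], bool]]] = [
    ("member_id present", lambda r: bool(r.get("member_id"))),
    ("contribution non-negative", lambda r: r.get("contribution", 0) >= 0),
    ("plan code known", lambda r: r.get("plan") in {"DB", "DC"}),
]

def validate(rows: list[dict]) -> tuple[list[dict], list[tuple[int, str]]]:
    """Split rows into clean rows and (row index, failed rule name) records."""
    clean, failures = [], []
    for i, row in enumerate(rows):
        failed = [name for name, pred in RULES if not pred(row)]
        if failed:
            failures.extend((i, name) for name in failed)
        else:
            clean.append(row)
    return clean, failures

rows = [
    {"member_id": "M1", "contribution": 100.0, "plan": "DB"},
    {"member_id": "", "contribution": -5.0, "plan": "XX"},
]
clean, failures = validate(rows)
print(len(clean), len(failures))  # 1 3
```

In a production pipeline, the same rule-based pattern typically runs inside the ETL/ELT framework (e.g., as PySpark column expressions), with failed records quarantined for review rather than silently dropped.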

 

The actual base salary offered to the successful candidate may vary based on multiple factors including, but not limited to, the individual's expertise and level of experience applicable to the role they are being offered.

 

This role is eligible to participate in discretionary incentive plan(s), subject to the terms and conditions of the applicable incentive plan text.  

 

This job is for an existing vacancy.