Posted 2d ago

Senior Data Platform Engineer

@ OrderYOYO
Copenhagen, Hovedstaden, Denmark
Remote · Full Time
Responsibilities: Lead architecture, migrate platform, design models
Requirements Summary: 6+ years in data warehousing or analytics engineering; strong Microsoft Fabric or Azure Synapse/Databricks experience; SQL/T-SQL and Python/PySpark; Power BI and DAX; experience migrating legacy to modern data platforms; production data systems management; Git-based workflows.
Technical Tools Mentioned: Microsoft Fabric, Azure Synapse, Databricks, SQL, Python, PySpark, Power BI, Git
Job Description
Senior Data Platform Engineer

At OrderYOYO, data powers executive reporting, payments, finance, merchant insights, product analytics, AI, and M&A integration. This role will shape the governed data foundation that supports our next stage of scale.

We offer a competitive salary, growth opportunities, and the chance to join a growing international company.

Role mission

Own the continuity and evolution of OrderYOYO’s modern data platform during a critical scaling phase. You will lead the migration from legacy reporting and metric tooling into a governed Microsoft Fabric platform, keep business-critical BI and semantic models reliable, improve data pipeline stability and monitoring, support CRM data integration, and provide senior technical leadership for data engineering delivery.

Core responsibilities

• Lead hands-on Microsoft Fabric architecture across lakehouse, warehouse, notebooks, semantic models, Git-backed delivery and production governance.

• Drive migration from legacy reporting and metric tooling into a governed Fabric semantic layer, including parity testing, stakeholder sign-off and safe decommissioning.

• Own and improve data pipelines across APIs, files, events and operational stores; establish robust orchestration, monitoring, alerting, data-quality checks and incident response.

• Design high-quality Power BI semantic models, DAX measures and reusable metric definitions for leadership, finance, commercial, product, marketing, payments and support reporting.

• Support CRM and operational data integrations, including outbound data feeds, identity mapping, schema mapping, reverse-ETL patterns and monitoring.

• Create reliable ingestion and modelling patterns for acquired businesses, so future integrations are repeatable, auditable and faster to execute.

• Set data-engineering standards: definition of ready/done, code review, release discipline, documentation, runbooks and platform change governance.

• Mentor engineers and analysts and translate business-critical data needs into pragmatic technical delivery.

Must-have requirements

• 6+ years in modern data warehousing, analytics engineering or data platform engineering, ideally in a SaaS, marketplace, fintech, payments, e-commerce or multi-region B2B2C environment.

• Strong Microsoft Fabric capability, or deep Azure Synapse/Databricks experience with a clear ability to specialise quickly in Fabric.

• Expert SQL/T-SQL plus strong Python or PySpark, with a track record of building maintainable ELT/ETL pipelines and analytical data models.

• Strong Power BI and DAX experience, including semantic modelling, incremental refresh, performance tuning, model governance and capacity/cost awareness.

• Experience leading legacy-to-modern data platform migrations, including metric parity, stakeholder validation, change control and safe decommissioning.

• Experience operating production data systems: monitoring, alert design, incident triage, root-cause analysis, data-quality checks, lineage and runbooks.

• Comfortable with Git-based data engineering workflows, pull requests, release discipline and standards for notebooks, pipelines and semantic model changes.

Strong-to-have experience

• Payments, settlement, reconciliation, fees, chargebacks, merchant reporting or finance-domain data.

• CRM-side data flows and reverse-ETL patterns, especially HubSpot, Salesforce, Zendesk or similar platforms.

• M&A or acquired-company data integrations: schema discovery, file/API ingestion, data profiling, master-data mapping, migration QA and reporting continuity.

• NoSQL-to-analytics modelling, including change-feed patterns from operational databases into lakehouse or warehouse structures.

• GA4, BigQuery export, Google Ads / SEM feeds, Segment or other event and marketing analytics sources.

• Practical use of AI-assisted engineering tools to improve migration speed, documentation, testing or developer productivity.

Apply now if you fulfill the above criteria. We look forward to hearing from you.