Posted 2d ago

Data Engineer

@ Avanade
Krakow or Wroclaw
Hybrid | Full Time
Responsibilities: Create pipelines, develop platforms, collaborate with teams
Requirements Summary: Hands-on experience with Azure data services, Databricks, Python/PySpark, SQL, and data governance; English proficiency at B2 or higher.
Technical Tools Mentioned: Databricks, PySpark, Azure Data Factory, Azure Data Lake, SQL Database, Azure Databricks, SQL, Python, MS Fabric
Job Description

Come join us

In the Data Engineer role, you’ll expand your expertise by contributing to modern initiatives across a variety of industries and business domains, working with a broad toolset.

What you’ll do

  • Creating, developing, and operating scalable, efficient, and dependable data pipelines
  • Delivering end-to-end data platforms, including data architecture and ETL implementations
  • Partnering with data scientists, analysts, and engineering teams to integrate data and improve end-to-end performance
  • Applying best practices for data governance, quality, and security across the data estate
  • Tuning and streamlining data workflows to maximize reliability and throughput
  • Keeping current with new trends and advancements in data engineering
  • Supporting and coaching junior data engineers through mentoring and knowledge sharing
  • Growing your skills in advanced AI technologies

Tech stack

You’ll use a range of tools depending on the project, but our core stack typically includes:

Databricks, PySpark, Azure cloud and services (Data Lake, SQL Database, Azure Databricks, Azure Data Factory), SQL, Python, MS Fabric


Skills and experience:

Must-have skills

To be successful in this position, you should have hands-on commercial experience with:

  • Azure cloud and services (e.g. Azure Data Factory, Data Lake, SQL Database, Azure Databricks)
  • Databricks (commercial project experience)
  • Python and PySpark for building and operating data solutions
  • SQL for querying, transforming, and validating data
  • Core data engineering practices (ETL, data modeling, data warehousing, data governance)
  • English at B2 level (or higher)

Nice-to-have skills

  • Hands-on experience with MS Fabric
  • Familiarity with containerization and orchestration (Docker/Kubernetes)
  • Understanding of machine learning concepts and frameworks (e.g. MLflow, TensorFlow)
  • Knowledge or experience with LLMs and orchestration frameworks
