
Data Engineer

@ SoftwareOne
Madrid or Barcelona or León or Santiago de Compostela
Hybrid | All Commitments Available
Responsibilities: designing pipelines, building models, optimizing performance
Requirements Summary: 2-4 years of data engineering with the Azure stack; strong Big Data/AI background; experience building data pipelines; scalable data architectures; SQL, Python, PySpark; Fabric/Databricks; certifications a plus.
Technical Tools Mentioned: Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Microsoft Fabric, SQL, Python, PySpark, Databricks
Job Description
Why SoftwareOne?

SoftwareOne and Crayon have come together to form a global, AI-powered software and cloud solutions provider with a bold vision for the future. With a footprint in over 70 countries and a diverse team of 13,000+ professionals, we offer unparalleled opportunities for talent to grow, make an impact, and shape the future of technology. At the heart of our business is our people. We empower our teams to work across borders, innovate fearlessly, and continuously develop their skills through world-class learning and development programs. Whether you're passionate about cloud, software, data, AI, or building meaningful client relationships, you’ll find a place to thrive here. Join us and be part of a purpose-driven culture where your ideas matter, your growth is supported, and your career can go global. 


The role

We are looking for someone with experience in both Microsoft Fabric and Databricks to join our Data & AI team.

What will you do?

You will participate in the maintenance and evolution of systems based on both Fabric and Databricks. Some of your responsibilities will include:

  • Design and Develop Data Solutions: Collaborate with internal teams and stakeholders to understand business requirements and design effective, scalable data solutions.
  • Implement Data Pipelines: Develop robust data pipelines using Azure Data Factory, Azure Databricks, Microsoft Fabric, and other Azure tools for data ingestion, transformation, and loading from multiple sources into target systems.
  • Create Data Models: Design and build optimized data models to support advanced analytics and BI applications, ensuring data integrity and quality.
  • Optimize Performance: Identify and address performance bottlenecks in pipelines and databases by implementing optimization and tuning techniques to improve efficiency and scalability.
  • Automate Processes: Develop scripts and automation tools for the deployment, monitoring, and continuous maintenance of data pipelines and processes, improving operational efficiency.
  • Provide Technical Support: Offer technical guidance and support to other team members in the design, development, and maintenance of data solutions in the Azure cloud environment.
  • Stay Up to Date: Keep up with the latest trends and technologies in data engineering and the Azure cloud ecosystem by participating in training and certification activities as needed.
  • Contribute to a Data-Driven Culture: Promote a data culture within the company by encouraging best practices in data management and usage for informed decision-making.
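The first two responsibilities describe a classic ingest-transform-load flow. As a minimal, hypothetical illustration of those stages (in production this work would run on Azure Data Factory and PySpark; the function names and sample data here are invented for the sketch):

```python
# Stdlib-only sketch of the ingest -> transform -> load stages described
# above. Real pipelines would use Azure Data Factory / PySpark; this
# illustrates only the shape of the flow.

def ingest(raw_rows):
    """Parse raw CSV-like rows into dicts (ingestion stage)."""
    header, *rows = raw_rows
    keys = header.split(",")
    return [dict(zip(keys, row.split(","))) for row in rows]

def transform(records):
    """Drop incomplete records and normalise types (transformation stage)."""
    return [
        {"id": int(r["id"]), "amount": float(r["amount"])}
        for r in records
        if r.get("id") and r.get("amount")
    ]

def load(records, target):
    """Upsert records into a target store keyed by id (loading stage)."""
    for r in records:
        target[r["id"]] = r
    return target

raw = ["id,amount", "1,10.5", "2,", "3,7.0"]
warehouse = load(transform(ingest(raw)), {})
print(sorted(warehouse))  # row 2 is dropped for its missing amount
```

The same three-stage structure carries over to PySpark, where `ingest` becomes a `spark.read` call, `transform` a chain of DataFrame operations, and `load` a write to the target system.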

Practical Information:

 

Location: Madrid, Barcelona, León, or Santiago de Compostela | Work Arrangement: Hybrid | Contract type: Full-time | Language requirements: Fluent Spanish and English (desirable)

 


What we need to see from you

If you have 2 to 4 years of experience in Data Engineering, we want to hear from you.

We are looking for someone with a strong technical background in Big Data and AI, as well as the ability to understand business functional needs.

  • Experience creating and maintaining efficient data pipelines using services such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data ingestion, transformation, and loading.
  • Proven previous experience in a similar role as a Data Engineer, preferably with a focus on the Azure cloud ecosystem.
  • Experience designing and developing scalable, high-performance data architectures.
  • Strong programming skills in languages such as SQL, Python, PySpark, etc.
  • Excellent problem-solving skills and strong attention to detail.
  • Ability to work independently and collaboratively in a dynamic, fast-paced environment.
  • Fabric or Databricks certifications are considered a plus.

Nice to Have

  • Experience with the Azure ecosystem. Any experience with GCP or AWS will be considered an advantage.
  • Experience with Machine Learning and/or Artificial Intelligence.
  • Strong interpersonal skills, including verbal and written communication.

Job Function

Software & Cloud
Accommodations


SoftwareOne welcomes applicants from all backgrounds and abilities to apply. If you require reasonable adjustments at any point during the recruitment process, email us at [email protected].   
Please include the role for which you are applying and your country location. Someone from our organization who is not part of the decision-making process will be in touch to discuss your specific needs, and we will make every effort to accommodate you. Any information shared will be stored securely and treated in the strictest confidence in line with GDPR.
  
At SoftwareOne, we are committed to providing an environment of mutual respect where equal employment opportunities are available to all applicants and teammates without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Additionally, we encourage experienced individuals who have taken an intentional career break and are now prepared to return to work to explore our SOAR program.