Senior Data Engineer in Krakow Opportunity

Company: Luxoft

Senior Data Engineer in Krakow, Poland

No longer accepting applications
Visa sponsorship & Relocation (posted 1 year ago)

🔔 Are you already in Poland? If YES, this is the project for YOU!


Our benefits:

👩‍⚕️ Private Medical Care at Luxmed and Life Insurance

🏋️‍♀️ Multisport Card

👨‍👧‍👦 Paid referrals

📚 Self-learning libraries

🛫 Relocation package for seniors and assistance throughout the whole process... and MORE!



👉 Location: KRAKOW - hybrid (3 days/week from the office)




Project Description:

We are embarking on an exciting project that requires expertise in a diverse range of technologies. This project aims to enhance our systems' efficiency and performance through the implementation of cutting-edge tools and methodologies. As part of our team, you will be responsible for deploying and maintaining critical infrastructure components, streamlining development workflows, and optimizing system performance. This project offers an opportunity to work with state-of-the-art technologies and contribute to the evolution of our IT landscape.


Responsibilities:

1. Data Engineering and Processing:

• Utilize Databricks (or equivalent platforms) to develop and maintain data engineering solutions.

• Write and optimize SQL queries and Python scripts for data processing and analysis.

2. Data Modeling and Architecture:

• Design and implement data models, ensuring they align with data warehousing principles and Lakehouse architecture.

• Develop and maintain data warehouses to support business intelligence and analytics.
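
By way of illustration only: a minimal sketch of what such a model could look like as Delta tables on Databricks. The analytics schema, table names, and columns below are hypothetical, not taken from the project.

```python
# Hypothetical Lakehouse star-schema DDL issued through Spark SQL.
# Schema, table, and column names are invented for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.dim_customer (
        customer_id BIGINT,
        name        STRING,
        country     STRING
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.fact_orders (
        order_id    BIGINT,
        customer_id BIGINT,
        order_date  DATE,
        quantity    INT,
        unit_price  DOUBLE,
        revenue     DOUBLE
    ) USING DELTA
    PARTITIONED BY (order_date)
""")
```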

3. ETL Processes:

• Design, develop, and manage ETL pipelines using Azure Databricks (Spark, Spark SQL, Python, SQL).

• Implement ETL/ELT design patterns to ensure efficient data extraction, transformation, and loading.
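
As a rough sketch of such a pipeline (reusing the hypothetical schema above), a PySpark job that extracts raw CSV files, transforms them, and loads a Delta table:

```python
# Minimal batch ETL sketch in PySpark (Databricks-style). The input
# path and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV files landed in cloud storage.
raw = (
    spark.read
         .option("header", "true")
         .option("inferSchema", "true")
         .csv("/mnt/raw/orders/")
)

# Transform: deduplicate, fix types, derive revenue.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
)

# Load: overwrite the Delta fact table used by analytics.
orders.write.format("delta").mode("overwrite").saveAsTable("analytics.fact_orders")
```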

4. Spark-Based Data Processing:

• Leverage Spark and Python for large-scale data processing tasks.

• Develop and optimize data processing workflows using Spark.
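
One common optimization of this kind, sketched below with the same hypothetical tables: broadcasting a small dimension so the large fact table is joined without a shuffle.

```python
# Broadcast-join sketch: ship the small dimension table to every
# executor instead of shuffling the large fact table on the join key.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

fact = spark.table("analytics.fact_orders")   # large
dim = spark.table("analytics.dim_customer")   # small lookup

daily_revenue = (
    fact.join(F.broadcast(dim), "customer_id")
        .groupBy("country", "order_date")
        .agg(F.sum("revenue").alias("revenue"))
)

daily_revenue.write.format("delta").mode("overwrite") \
    .saveAsTable("analytics.daily_revenue_by_country")
```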

5. Real-Time Data Processing:

• Implement real-time data processing solutions using Delta Live Tables and Spark Streaming.

• Ensure timely and accurate processing of streaming data for real-time analytics and applications.
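
For flavor, a minimal Delta Live Tables sketch of a streaming aggregation. It assumes a Databricks DLT pipeline, which supplies `spark` and the `dlt` module; the source path and column names are invented.

```python
# Delta Live Tables sketch; runs only inside a Databricks DLT pipeline,
# where `spark` and `dlt` are provided. Path and columns are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw click events ingested incrementally with Auto Loader.")
def clicks_raw():
    return (
        spark.readStream
             .format("cloudFiles")                  # Databricks Auto Loader
             .option("cloudFiles.format", "json")
             .load("/mnt/raw/clicks/")
    )

@dlt.table(comment="Per-minute click counts for real-time dashboards.")
def clicks_per_minute():
    return (
        dlt.read_stream("clicks_raw")
           .withColumn("event_ts", F.col("event_ts").cast("timestamp"))
           .withWatermark("event_ts", "2 minutes")
           .groupBy(F.window("event_ts", "1 minute"), F.col("page"))
           .count()
    )
```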

6. Workflow Design and Optimization:

• Design data processing workflows that are scalable and maintainable.

• Continuously monitor and optimize workflows to improve performance and efficiency.

7. Collaboration and Communication:

• Work closely with cross-functional teams to understand data requirements and deliver solutions that meet business needs.

• Communicate effectively with stakeholders to ensure alignment on data engineering initiatives.


Mandatory Skills Description:

1) Hands-on data engineering experience using Databricks (or equivalent), with strong SQL and Python scripting skills

2) Solid understanding of data modeling, data warehousing principles, and Lakehouse architecture

3) Expert knowledge of ETL using Azure Databricks (Spark, Spark SQL, Python, SQL) and understanding of ETL/ELT design patterns

4) Proficiency in Spark-based Python for data processing.

5) Strong experience in data modeling and designing data processing workflows using Spark.

6) Familiarity with Delta Live Tables and Spark Streaming for real-time data processing.


Nice-to-Have Skills Description:

a) Understanding of ETL (Extract, Transform, Load) products and their lifecycle.

b) Data reconciliation experience

