Senior Palantir Data Engineer (Python/PySpark) (relocation to Malta) Opportunity

Location: Romania · Visa sponsorship & relocation

Undelucram.ro on behalf of:

EPAM Romania

Are you passionate about solving complex big data analytics problems using cutting-edge technologies? EPAM is looking for a skilled Senior Palantir Data Engineer to join our growing, globally distributed team. In this role, you’ll work on a high-impact Data Transformation project with our client. This initiative leverages Big Data and Machine Learning technologies to shape data-driven decisions in the Property & Casualty business domain.

If you’re eager to apply your expertise in Python/PySpark, SQL and Palantir to design and implement complex data pipelines, while collaborating with a multicultural and dynamic team, we’d love to hear from you!

Join us at our Malta office, which offers a flexible hybrid work setup.

Responsibilities

  • Lead the design and implementation of robust, large-scale data pipelines and analytics solutions
  • Monitor and optimize data pipelines for performance and scalability using advanced tools and techniques, including Python/PySpark and SQL
  • Optimize data workflows to support critical decision-making processes
  • Harness state-of-the-art tools and technologies (including Palantir Foundry) to address new and emerging business challenges
  • Partner with cross-functional and globally distributed teams (e.g., data scientists, analysts, business stakeholders) to align project goals and execution strategies
  • Contribute to a global strategic initiative focused on enhancing the ability to make data-driven decisions across the Property & Casualty value chain
  • Stay ahead of emerging technologies and trends (e.g., Generative AI, Machine Learning) and recommend potential applications in the data ecosystem

Requirements

  • A Bachelor’s degree (or equivalent) in Computer Science, Data Science or a related discipline
  • 5+ years of experience working with large-scale distributed computing systems
  • Proficiency in Python/PySpark to build and optimize complex data pipelines
  • Hands-on experience working with Databricks for large-scale data processing and analytics
  • Strong SQL skills (preferably Spark SQL) for data querying and manipulation
  • Deep understanding of data warehousing concepts and ELT techniques
  • Experience with Palantir Foundry is a must
  • Familiarity with Agile and Scrum development methodologies

Nice to have

  • Knowledge of HTML, CSS, JavaScript and Gradle
  • Familiarity with Microsoft Power BI
  • Exposure to Machine Learning or Generative AI technologies

Apply now
