Senior Databricks Developer Opportunity


Senior Databricks Developer in Australia

Visa sponsorship available

We're partnering with a leading organisation in the Government Health sector, dedicated to optimising public health outcomes, resource allocation, and policy analysis through a modern, cloud-native data platform. For the right candidate with the necessary skills and experience, we are pleased to offer subclass 482 visa sponsorship.

This client requires a Senior Databricks Developer to serve as a technical leader for their advanced Analytics and Research workloads. You will be instrumental in architecting and building highly scalable data pipelines using PySpark/Scala within the Databricks Lakehouse Platform. The role demands expertise in Delta Lake and performance optimisation, together with strict data governance and security for sensitive patient data, in line with all compliance and privacy regulations.


What You'll Do
  • Lead the design and development of large-scale, resilient, and performant ETL/ELT data pipelines using PySpark/Scala within Databricks notebooks and jobs.
  • Architect and manage the Delta Lake environment, focusing on data ingestion, quality enforcement (using Delta Live Tables or similar; see the sketch after this list), and schema evolution for complex public health datasets.
  • Optimise Databricks clusters, notebooks, and Spark jobs for cost-efficiency and performance, specifically targeting bottlenecks in high-volume batch and streaming workloads.
  • Define and enforce data governance practices within the Lakehouse, using Unity Catalog for centralised metadata and access control, adhering to government standards.
  • Collaborate closely with government analysts and data scientists to transition analytical models and research findings into scalable, production-ready pipelines.
  • Champion CI/CD and MLOps practices for Databricks notebooks and workflows, using tools like Azure DevOps or Jenkins.
  • Mentor and guide junior engineers on Databricks development standards, Spark optimisation, and modern data engineering practices.
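
To give a flavour of the quality-enforcement work described above, here is a minimal, illustrative Delta Live Tables sketch in PySpark. The table and column names (encounters_bronze, patient_id, encounter_date) are hypothetical placeholders, not drawn from any real client dataset.

    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Cleansed encounter records (illustrative example only).")
    @dlt.expect_or_fail("patient_id_present", "patient_id IS NOT NULL")
    @dlt.expect_or_drop("valid_encounter_date", "encounter_date IS NOT NULL")
    def encounters_silver():
        # Read the hypothetical bronze table and standardise the date column;
        # rows failing the expectations above are dropped, or fail the update.
        return (
            dlt.read("encounters_bronze")
            .withColumn("encounter_date", F.to_date("encounter_date"))
        )

Declaring expectations alongside the transformation itself keeps quality rules auditable, which suits sensitive government health datasets.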


What You'll Bring
  • 6+ years of progressive professional experience in Data Engineering, with at least 3 years dedicated to developing solutions on the Databricks Platform.
  • Expert-level proficiency in PySpark and/or Scala for distributed data processing.
  • Mandatory hands-on experience with Delta Lake architecture, including DLT, time travel, and VACUUM operations (illustrated in the sketch after this list).
  • Deep understanding of cloud infrastructure (Azure preferred) and how Databricks integrates with cloud storage (ADLS Gen2) and services.
  • Expert proficiency in SQL and dimensional modelling principles.
  • Proven experience with CI/CD, Infrastructure as Code (e.g., Terraform), and Databricks command-line tools for automation.
  • Exceptional communication and problem-solving skills, with the ability to analyse complex requirements and design resilient solutions in a highly regulated environment.
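
As a rough illustration of the time-travel and VACUUM experience listed above, the snippet below assumes a Databricks notebook (where spark is predefined); the ADLS Gen2 storage path is a placeholder, and the retention window is only an example.

    from delta.tables import DeltaTable

    # Hypothetical ADLS Gen2 path to a Delta table (placeholder only).
    path = "abfss://lake@<storage-account>.dfs.core.windows.net/silver/encounters"

    # Time travel: read the table as it stood at an earlier version.
    previous = spark.read.format("delta").option("versionAsOf", 5).load(path)

    # VACUUM: physically remove data files no longer referenced by the
    # transaction log and older than the retention window.
    DeltaTable.forPath(spark, path).vacuum(retentionHours=168)  # 7 days

Note that vacuuming below Delta Lake's default seven-day retention threshold trades away time travel to older versions, which matters where audits require point-in-time reconstruction.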

Apply now
