Data Engineer with Databricks, SQL - 100% Remote Opportunity

floga technologies company


Location: United States


Role: Data Engineer with Databricks, SQL

Mode of work: 100% Remote

Duration: Long Term

Experience: 12+ Years

Submission Details

Full Legal Name

Contact Number

Email ID

LinkedIn

Current Location

This is a remote position, but the candidate will need to visit the client location in Dallas, TX occasionally.

Full Education Details

Full Date of Birth (Full DOB)

Visa on which you came to the US

Year in which you came to the US

Work Authorization

Total IT Experience

US Experience

Notice Period

Have you previously been submitted to KNITT Global?

Rate

Mandatory Skills (minimum experience required; for each skill, please also state how many years the candidate has and the client projects in which it was used)

Total IT Experience: 12+ years

Data Engineer: 7-9 years

Databricks with Python & Scala: 7-8 years

SQL development (complex queries, performance tuning, stored procedures): 5-6 years

ETL processing: 5-6 years

Cloud data platforms (Azure/AWS/GCP), a plus: 7-8 years

Databricks or cloud certifications (SQL, Databricks, Azure Data Engineer) are a strong plus.

About The Job

Job Summary:

We're looking for a skilled Data Engineer with strong expertise in Databricks and SQL to join our data analytics team. You will work as part of a cross-functional team to design, build, and optimize data pipelines, frameworks, and warehouses that support business-critical analytics and reporting. The role requires hands-on experience with SQL-based transformations, Databricks, and modern data engineering practices.

Experience

6+ years of relevant experience or equivalent education in ETL processing/data engineering or related field.

4+ years of experience with SQL development (complex queries, performance tuning, stored procedures).

3+ years of experience building data pipelines on Databricks (Python/Scala).

Exposure to cloud data platforms (Azure/AWS/GCP) is a plus.

Roles & Responsibilities

Design, develop, and maintain data pipelines and ETL processes using Databricks and SQL.

Write optimized SQL queries for data extraction, transformation, and loading across large-scale datasets.

Monitor, validate, and optimize data movement, cleansing, normalization, and updating processes to ensure data quality, consistency, and reliability.

Collaborate with business and analytics teams to define data models, schemas, and frameworks within the data warehouse.

Document source-to-target mapping and transformation logic.

Build data frameworks and visualizations to support analytics and reporting.

Ensure compliance with data governance, security, and regulatory standards.

Communicate effectively with internal and external stakeholders to understand and deliver on data needs.

Qualifications & Experience

Bachelor's degree in Computer Science, Data Engineering, or related field.

6+ years of hands-on experience in ETL/data engineering.

Strong SQL development experience (query optimization, indexing strategies, stored procedures).

3+ years of Databricks experience with Python/Scala.

Experience with cloud platforms (Azure/AWS/GCP) preferred.

Databricks or cloud certifications (SQL, Databricks, Azure Data Engineer) are a strong plus.
Apply now
