Databricks Data Engineer

Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Key Responsibilities:
Develop high-quality, scalable ETL/ELT pipelines using Databricks technologies including Delta Lake, Auto Loader, and DLT
Excellent programming and debugging skills in Python
Strong hands-on experience with PySpark to build efficient data transformation and validation logic
Must be proficient in at least one cloud platform: AWS, GCP, or Azure
Create modular dbx functions for transformation, PII masking, and validation logic, reusable across DLT and notebook pipelines
Implement ingestion patterns using Auto Loader with checkpointing and schema evolution for structured and semi-structured data
Build secure and observable DLT pipelines with DLT Expectations, supporting Bronze/Silver/Gold medallion layering
Configure Unity Catalog: set up catalogs, schemas, and user/group access; enable audit logging; and define masking for PII fields
Enable secure data access across domains and workspaces via Unity Catalog External Locations, Volumes, and lineage tracking
Access and utilize data assets from the Databricks Marketplace to support enrichment, model training, or benchmarking
Collaborate with data sharing stakeholders to implement Delta Sharing, both internally and externally
Integrate Power BI/Tableau/Looker with Databricks using optimized connectors (ODBC/JDBC) and Unity Catalog security controls
Build stakeholder-facing SQL dashboards within Databricks to monitor KPIs, data pipeline health, and operational SLAs
Prepare GenAI-compatible datasets: manage vector embeddings, index with Databricks Vector Search, and use Feature Store with MLflow
Package and deploy pipelines using Databricks Asset Bundles through CI/CD pipelines in GitHub or GitLab
Troubleshoot, tune, and optimize jobs using the Photon engine and serverless compute, ensuring cost efficiency and SLA reliability.
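To illustrate the kind of reusable PII-masking helpers the responsibilities describe, here is a minimal plain-Python sketch. The function names, salt, and masking rules are hypothetical, not part of the role's actual codebase; in a Databricks pipeline, helpers like these would typically be wrapped as PySpark UDFs and applied in the Silver layer.

```python
import hashlib
import re


def mask_email(email: str, salt: str = "demo-salt") -> str:
    """Mask an email address: keep the domain, replace the local
    part with a short salted SHA-256 digest so joins on the masked
    value remain possible while the raw value is hidden."""
    local, _, domain = email.partition("@")
    digest = hashlib.sha256((salt + local).encode("utf-8")).hexdigest()[:10]
    return f"{digest}@{domain}"


def mask_phone(phone: str) -> str:
    """Keep only the last four digits of a phone number."""
    digits = re.sub(r"\D", "", phone)
    if len(digits) < 4:
        return "*" * len(digits)
    return "*" * (len(digits) - 4) + digits[-4:]
```

Because the digest is deterministic for a fixed salt, the same input always masks to the same output, which is what makes such helpers reusable across DLT and notebook pipelines.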
Qualifications:
At least 2 years of relevant work experience with cloud-based services for data engineering, data storage, data processing, data warehousing, real-time streaming, and serverless computing
Hands-on experience applying performance optimization techniques
Understanding of data modeling and data warehousing principles is essential

Good to have:
Certifications: Databricks Certified Professional or similar certifications
Machine Learning: knowledge of machine learning concepts and experience with popular ML libraries
Knowledge of big data processing frameworks (e.g., Spark, Hadoop, Hive, Kafka)
Data Orchestration: Apache Airflow
Knowledge of CI/CD pipelines and DevOps practices in a cloud environment
Experience with ETL tools like Informatica, Talend, Matillion, or Fivetran
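The orchestration skill mentioned above (Apache Airflow) boils down to running tasks in dependency order over a DAG. As a rough, tool-free sketch of that core idea, the standard library's `graphlib` can compute a valid run order; the task names and dependencies below are hypothetical, not an actual pipeline from this role.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical pipeline tasks mapped to their upstream dependencies,
# mirroring how an orchestrator such as Airflow orders a DAG run.
deps = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
    "publish_dashboard": {"load"},
}

# static_order() yields tasks so that every task appears after
# all of its dependencies.
run_order = list(TopologicalSorter(deps).static_order())
```

A real Airflow DAG adds scheduling, retries, and operators on top, but the dependency-ordered execution shown here is the concept an orchestrator guarantees.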