Job Description

Role Overview

We are looking for a seasoned Data Architect with strong expertise in Databricks to lead the design, development, and optimization of scalable data platforms and analytics solutions. The role involves defining end-to-end data architecture, building cloud-native data pipelines, and enabling advanced analytics and AI workloads for enterprise environments.

Key Responsibilities

Define enterprise data architecture, including data ingestion, transformation, storage, governance, and consumption layers.

Lead the design and implementation of Databricks Lakehouse architecture, Delta Lake, Unity Catalog, and optimized ETL/ELT pipelines.

Develop scalable data models, metadata frameworks, and integration patterns across structured and unstructured datasets.

Collaborate with data engineering, analytics, ML, and business teams to understand data needs and translate them into architectural solutions.

Define best practices for data quality, lineage, cataloging, security, and lifecycle management.

Drive cloud-based data modernization using Azure/AWS/GCP + Databricks.

Establish data platform governance, including RBAC, data privacy, and compliance controls.

Optimize data performance, storage costs, pipeline reliability, and cluster usage.

Review and guide implementation of notebooks, workflows, Delta Live Tables, and ML/AI workloads.

Create architecture artifacts including HLDs, LLDs, technology standards, and integration patterns.

Provide thought leadership on data strategy, migration paths, and adoption of Databricks features.

Required Skills & Experience

8-12+ years of experience in data engineering/architecture, with at least 3-5+ years of hands-on Databricks experience.

Strong knowledge of Databricks Lakehouse, Delta Lake, Unity Catalog, Workflows, Model Serving, and cluster management.

Expertise in Python, SQL, PySpark/Spark, and distributed data processing.

Experience designing cloud-native data platforms on Azure/AWS/GCP.

Strong understanding of ETL/ELT frameworks, streaming data (Kafka/Kinesis/Event Hubs), and data integration patterns.

Proven experience driving enterprise data platform migrations or modernization programs.

Solid understanding of data modeling (3NF, Star/Snowflake), data warehousing, and performance tuning.

Knowledge of security frameworks, IAM, encryption, GDPR/PII handling, and data governance practices.

Experience with CI/CD for data, Infrastructure as Code (Terraform/ARM/CloudFormation), and DevOps for data pipelines.

Excellent communication and stakeholder-management skills.

Preferred Qualifications

Databricks certifications (Data Engineer Professional, Lakehouse Architect).

Experience with MLflow, feature stores, and model deployment in Databricks.

Background in enterprise analytics, BI platforms, data mesh, or data product architecture.



Job Details

Role Level: Mid-Level
Work Type: Full-Time
Country: India
City: Chennai, Tamil Nadu
Company Website: https://www.prodapt.com/
Job Function: Engineering
Company Industry/Sector: Technology, Information and Internet

About the Company

Searching, interviewing, and hiring are all part of professional life. The TALENTMATE portal brings these requisites together under one roof to support professionals at every stage. Whether you're hunting for your next job opportunity or looking for potential employers, we're here to lend a helping hand.

Disclaimer: talentmate.com is only a platform to bring jobseekers and employers together. Applicants are advised to independently research the bona fides of any prospective employer. We do NOT endorse any requests for money payments and strictly advise against sharing personal or bank-related information. We also recommend you visit Security Advice for more information. If you suspect any fraud or malpractice, email us at abuse@talentmate.com.

