Job Description

About The Role

We are looking for a highly skilled and motivated Senior Data Engineer to join our team. In this role, you will be responsible for designing, building, and maintaining scalable and robust data pipelines and architectures that support business intelligence, analytics, and machine learning initiatives. You will collaborate closely with data scientists, analysts, and software engineers to ensure the availability, quality, and security of data across the organization.

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL workflows using orchestration tools such as Apache Airflow or dbt.
  • Architect and implement data integration solutions that consolidate data from multiple heterogeneous sources into cloud-based data platforms.
  • Work with large-scale distributed data processing systems such as Apache Spark and Kafka to build real-time and batch data pipelines.
  • Build and maintain data warehouses, ensuring efficient data modeling, schema design, and performance tuning.
  • Collaborate with data scientists and analysts to understand data requirements and deliver reliable datasets to power analytics and ML models.
  • Implement best practices for data governance, data quality, and security in compliance with organizational standards.
  • Optimize data workflows for efficiency and scalability in cloud environments such as AWS, GCP, or Azure.
  • Monitor and troubleshoot production data pipelines, proactively identifying and resolving bottlenecks or failures.
  • Document data engineering processes, data flows, and architecture to facilitate team knowledge sharing.
  • Mentor junior data engineers and contribute to continuous improvement of data engineering practices and technologies.

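Purely as an illustration of the kind of batch extract-transform-load step described in the responsibilities above (in practice scheduled by an orchestrator such as Airflow), here is a minimal sketch using only Python's standard library; the feed format, table name, and columns are invented for the example:

```python
import csv
import io
import sqlite3

def run_etl(raw_csv: str, conn: sqlite3.Connection) -> int:
    """Extract rows from a CSV feed, clean them, and load them into a warehouse table."""
    # Extract: parse the raw feed into dict rows.
    rows = list(csv.DictReader(io.StringIO(raw_csv)))

    # Transform: drop rows missing an order id, normalize amounts to integer cents.
    cleaned = [
        (r["order_id"], int(round(float(r["amount"]) * 100)))
        for r in rows
        if r.get("order_id")
    ]

    # Load: idempotent upsert, so re-running the pipeline is safe.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT PRIMARY KEY, amount_cents INTEGER)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

raw = "order_id,amount\nA1,19.99\n,5.00\nA2,3.50\n"
conn = sqlite3.connect(":memory:")
print(run_etl(raw, conn))  # 2 (the row with a blank order_id is dropped)
```

The idempotent load is the design point: production pipelines are routinely retried after failures, so each step should be safe to run twice.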
Required Skills & Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
  • Minimum 5 years of professional experience as a Data Engineer or in a similar role.
  • Strong proficiency in SQL and Python; familiarity with Scala, Java, or other languages is a plus.
  • Hands-on experience with data orchestration tools such as Apache Airflow, dbt, or similar platforms.
  • Expertise in cloud data platforms like AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse).
  • Working knowledge of big data ecosystems, including Apache Spark, Kafka, Hive, and related tools.
  • Deep understanding of data warehousing concepts, dimensional modeling, star and snowflake schemas, and data partitioning strategies.
  • Experience with data modeling tools like Erwin Data Modeler or MySQL Workbench.
  • Strong analytical and problem-solving skills with a proactive, team-focused mindset.
  • Familiarity with version control systems (Git) and CI/CD practices for data engineering.
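As a toy illustration of the dimensional-modeling concepts listed above (the table and column names are invented for the example), a minimal star schema is one fact table of measures joined to descriptive dimension tables; sketched here with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per product, holding descriptive attributes.
cur.execute("""
    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        product_name TEXT,
        category     TEXT
    )
""")

# Fact table: one row per sale, foreign-keyed to dimensions, holding measures.
cur.execute("""
    CREATE TABLE fact_sales (
        sale_id     INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity    INTEGER,
        revenue     REAL
    )
""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(10, 1, 3, 30.0), (11, 2, 1, 15.0), (12, 1, 2, 20.0)])

# Typical star-schema query: aggregate fact measures grouped by a dimension attribute.
totals = cur.execute("""
    SELECT d.product_name, SUM(f.revenue)
    FROM fact_sales f JOIN dim_product d USING (product_key)
    GROUP BY d.product_name
    ORDER BY d.product_name
""").fetchall()
print(totals)  # [('Gadget', 15.0), ('Widget', 50.0)]
```

A snowflake schema would further normalize the dimensions (e.g. splitting `category` into its own table) at the cost of extra joins.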

(ref:hirist.tech)


Job Details

Role Level: Mid-Level
Work Type: Full-Time
Country: India
City: Chennai, Tamil Nadu
Company Website: https://www.youtube.com/@TechyGeeks/about
Job Function: Information Technology (IT)
Company Industry/Sector: Software Development, Technology, Information and Internet, and Data Infrastructure and Analytics

About the Company

Searching, interviewing, and hiring are all part of professional life. The TALENTMATE Portal aims to help professionals with each of these by bringing the requisites together under one roof. Whether you're hunting for your next job opportunity or looking for potential employers, we're here to lend you a helping hand.


Disclaimer: talentmate.com is only a platform to bring jobseekers and employers together. Applicants are advised to independently research the bona fides of any prospective employer. We do NOT endorse any requests for money payments and strictly advise against sharing personal or bank-related information. We also recommend you visit Security Advice for more information. If you suspect any fraud or malpractice, email us at abuse@talentmate.com.

