Job Description

Key Responsibilities

  • Design, develop, and support robust ETL pipelines to extract, transform, and load data into analytical products that drive strategic organizational goals.
  • Develop and maintain data workflows on platforms like Databricks and Apache Spark using Python and Scala.
  • Create and support data visualizations using tools such as MicroStrategy, Power BI, or Tableau, with a preference for MicroStrategy.
  • Implement streaming data solutions utilizing frameworks like Kafka for real-time data processing.
  • Collaborate with cross-functional teams to gather requirements, design solutions, and ensure smooth data operations.
  • Manage data storage and processing in cloud environments, with strong experience in AWS cloud services.
  • Use knowledge of data warehousing, data modeling, and SQL to optimize data flow and accessibility.
  • Develop scripts and automation tools using Linux shell scripting and other languages as needed.
  • Ensure continuous integration and continuous delivery (CI/CD) practices are followed for data pipeline deployments using containerization and orchestration technologies.
  • Troubleshoot production issues, optimize system performance, and ensure data accuracy and integrity.
  • Work effectively within Agile development teams and contribute to sprint planning and reviews.

Skills & Experience

  • 7+ years of experience in technology with a focus on application development and production support.
  • At least 5 years of experience in developing ETL pipelines and data engineering workflows.
  • Minimum 3 years hands-on experience in ETL development and support using Python/Scala on Databricks/Spark platforms.
  • Strong experience with data visualization tools, preferably MicroStrategy, Power BI, or Tableau.
  • Proficient in Python, Apache Spark, Hive, and SQL.
  • Solid understanding of data warehousing concepts, data modeling techniques, and analytics tools.
  • Experience working with streaming data frameworks such as Kafka.
  • Working knowledge of Core Java, Linux, SQL, and at least one scripting language.
  • Experience with relational databases, preferably Oracle.
  • Hands-on experience with AWS cloud platform services related to data engineering.
  • Familiarity with CI/CD pipelines, containerization, and orchestration tools (e.g., Docker, Kubernetes).
  • Exposure to Agile development methodologies.
  • Strong interpersonal, communication, and collaboration skills.
  • Ability and eagerness to quickly learn and adapt to new technologies.

Qualifications

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • Experience working in large-scale, enterprise data environments.
  • Prior experience with cloud-native big data solutions and data governance best practices.



Job Details

Role Level: Entry-Level
Work Type: Full-Time
Country: India
City: Hyderabad, Telangana
Company Website: https://www.curatal.com
Job Function: Information Technology (IT)
Company Industry/Sector: Data Infrastructure and Analytics; Technology, Information and Internet; Software Development

About the Company

Searching, interviewing, and hiring are all part of professional life. The TALENTMATE portal brings these together under one roof to help professionals with each of them. Whether you're hunting for your next job opportunity or looking for potential employers, we're here to lend a helping hand.


Disclaimer: talentmate.com is only a platform to bring jobseekers & employers together. Applicants are advised to independently research the bona fides of any prospective employer. We do NOT endorse any requests for money payments and strictly advise against sharing personal or bank-related information. We also recommend you visit Security Advice for more information. If you suspect any fraud or malpractice, email us at abuse@talentmate.com.

