YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation.
At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive change in an increasingly virtual world – and it drives us beyond generational gaps and the disruptions of the future.
We are looking to hire Python professionals in the following areas:
Job Description
Job Title: Senior Data Engineer/ DevOps - Enterprise Big Data Platform
In this role, you will be part of a growing, global team of data engineers who collaborate in a DevOps mode to enable the business with state-of-the-art technology, leveraging data as an asset to make better-informed decisions.
The Enabling Functions Data Office Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on the Enabling Functions' data management and analytics platform (Palantir Foundry, AWS, and other components).
The Foundry platform comprises multiple technology stacks, hosted on Amazon Web Services (AWS) infrastructure or in our own data centers. Developing pipelines and applications on Foundry requires:
Proficiency in SQL, Scala, or Python (Python is required; all three are not necessary)
Proficiency in PySpark for distributed computation
Proficiency in Ontology and Slate; familiarity with the Workshop app, plus basic design/visual competency
Familiarity with common databases (e.g. Oracle, MySQL, Microsoft SQL Server); not all types are required
This position is project-based; you may work across multiple smaller projects or on a single large project, using an agile project methodology.
Roles & Responsibilities
B.Tech/B.Sc./M.Sc. in Computer Science or a related field, and 6+ years of overall industry experience
Strong experience in Big Data & Data Analytics
Experience in building robust ETL pipelines for batch as well as streaming ingestion.
Experience with Palantir Foundry
Most important Foundry apps: Code Repository, Data Lineage and Scheduling, Ontology Manager, Contour, Object View Editor, Object Explorer, Quiver, Workshop, Vertex
Experience with Data Connection, external transforms, Foundry APIs, SDK and Webhooks is a plus
Experience interacting with RESTful APIs, including authentication via SAML and OAuth2
Experience with test-driven development and CI/CD workflows
Knowledge of Git for source control management
Agile experience in Scrum environments, with tools such as Jira
Experience in visualization tools like Tableau or Qlik is a plus
Experience in Palantir Foundry, AWS or Snowflake is an advantage
Basic knowledge of statistics and machine learning is a plus
Problem-solving abilities
Proficient in English with strong written and verbal communication
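To illustrate the kind of batch ETL work described above, here is a minimal, hypothetical sketch using only the Python standard library (csv and sqlite3). It is not the team's actual pipeline code; table and column names are invented for the example.

```python
# Illustrative only: a minimal extract-transform-load step using the
# Python standard library. All names (orders, amount, etc.) are hypothetical.
import csv
import io
import sqlite3

def extract(raw_csv: str) -> list:
    """Read raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list) -> list:
    """Normalize types and drop rows with a missing amount."""
    out = []
    for row in rows:
        if not row["amount"]:
            continue  # skip incomplete records rather than failing the batch
        out.append((row["order_id"], row["country"].upper(), float(row["amount"])))
    return out

def load(rows: list, conn: sqlite3.Connection) -> int:
    """Insert transformed rows; return the total row count after loading."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT, country TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

raw = "order_id,country,amount\n1,de,10.5\n2,us,\n3,in,7.25\n"
conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(raw)), conn)  # row 2 is dropped, so 2 rows load
```

In a production pipeline the same extract/transform/load separation applies, but with PySpark DataFrames and a managed scheduler in place of the stdlib pieces.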
Primary Responsibilities
Design, develop, test, and support data pipelines and applications
Industrialize data pipelines
Establish a continuous quality-improvement process to systematically optimize data quality
Collaborate with various stakeholders, including business and IT
Education
Bachelor (or higher) degree in Computer Science, Engineering, Mathematics, Physical Sciences or related fields
Professional Experience
6+ years of experience in system engineering or software development
4+ years of engineering experience in ETL-type work with databases and Hadoop platforms.
Skills
Hadoop (general): Deep knowledge of distributed file system concepts, MapReduce principles, and distributed computing; knowledge of Spark and the differences between Spark and MapReduce; familiarity with encryption and security in a Hadoop cluster.
Data management/data structures: Proficiency in technical data management tasks, i.e. writing code to read, transform, and store data; knowledge of XML/JSON; experience working with REST APIs.
Spark: Experience launching Spark jobs in client mode and cluster mode; familiarity with the property settings of Spark jobs and their implications for performance.
Application development: Familiarity with HTML, CSS, and JavaScript, plus basic design/visual competency.
SCC/Git: Experience with source code control systems such as Git.
ETL: Experience developing ELT/ETL processes, including loading data from enterprise-scale RDBMSs such as Oracle, DB2, and MySQL.
Authorization: Basic understanding of user authorization (Apache Ranger preferred).
Programming: Ability to code in Python, or expertise in at least one other high-level language such as Java, C, or Scala; experience using REST APIs.
SQL: Expertise in manipulating database data using SQL; familiarity with views, functions, stored procedures, and exception handling.
AWS: General knowledge of the AWS stack (EC2, S3, EBS, …).
IT process compliance: SDLC experience and formalized change controls; working in DevOps teams based on Agile principles (e.g. Scrum); ITIL knowledge (especially incident, problem, and change management).
Languages: Fluent English skills.
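The SQL expectations above (views, functions) can be sketched with Python's built-in sqlite3 module. This is purely illustrative; the table, view, and function names are hypothetical, and a production system would use an enterprise RDBMS instead.

```python
# Illustrative only: a view plus a user-defined SQL function in SQLite.
# All names (sales, region_totals, to_eur) are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('EMEA', 100.0), ('EMEA', 50.0), ('APAC', 70.0);
    -- A view encapsulating an aggregation, so consumers query a stable shape
    CREATE VIEW region_totals AS
        SELECT region, SUM(amount) AS total FROM sales GROUP BY region;
""")

# Register a scalar function callable from SQL (a fixed rate, for illustration)
conn.create_function("to_eur", 1, lambda usd: round(usd * 0.9, 2))

rows = conn.execute(
    "SELECT region, total, to_eur(total) FROM region_totals ORDER BY region"
).fetchall()
```

Stored procedures and richer exception handling are not available in SQLite, but the view-plus-function pattern transfers directly to Oracle, DB2, or MySQL.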
Specific Information Related To The Position
Physical presence in primary work location (Bangalore)
Flexibility to work CEST and US EST time zones (according to the team rotation plan)
Willingness to travel to Germany, the US, and potentially other locations (as per project demand)
At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale.
Our Hyperlearning workplace is grounded upon four principles
Flexible work arrangements, free spirit, and emotional positivity
Agile self-determination, trust, transparency, and open collaboration
All support needed for the realization of business goals
Stable employment with a great atmosphere and ethical corporate culture
Searching, interviewing, and hiring are all part of professional life. The idea behind the TALENTMATE portal is to help professionals with each of them by bringing the requisites together under one roof. Whether you're hunting for your next job opportunity or looking for potential employers, we're here to lend you a helping hand.
Disclaimer: talentmate.com is only a platform to bring jobseekers and employers together. Applicants are advised to independently research the bona fides of any prospective employer. We do NOT endorse any requests for money payments and strictly advise against sharing personal or bank-related information. We also recommend you visit Security Advice for more information. If you suspect any fraud or malpractice, email us at abuse@talentmate.com.