Job Description
We are seeking a highly skilled Senior Data Integration Engineer to design, develop, and manage enterprise-grade data integration solutions. The ideal candidate will have extensive experience with ETL/ELT processes, API-driven integrations, and enterprise data platforms.
Please apply if you have the passion and standard methodologies to work in an environment where challenges are the norm, where individual brilliance is valued and goes hand in hand with team performance, and where being proactive is how we do things.
Responsibilities
Key responsibilities:
- Architect, design, and optimize scalable big data solutions for batch and real-time processing.
- Develop and maintain ETL/ELT pipelines to ingest, transform, and synchronize data from diverse sources.
- Integrate data from cloud applications, on-prem systems, APIs, and streaming platforms into centralized data repositories.
- Implement and manage data lake and data warehouse solutions on cloud infrastructure.
- Ensure data consistency, quality, and compliance with governance and security standards.
- Collaborate with data architects, data engineers, and business stakeholders to align integration solutions with organizational needs.
Core Qualifications
Successful applicants should have or demonstrate:
- Programming: Proficiency in Python, Java, or Scala for big data processing.
- Big Data Frameworks: Strong expertise in Apache Spark, Hadoop, Hive, Flink, or Kafka.
- Data Platforms: Hands-on experience with data modeling, data lakes (Delta Lake, Iceberg, Hudi), and data warehouses (Snowflake, Redshift, BigQuery).
- ETL/ELT Development: Expertise with tools like Informatica, Talend, SSIS, Apache NiFi, dbt, or custom Python-based frameworks.
- APIs & Integration: Strong hands-on experience with REST, SOAP, GraphQL APIs, and integration platforms (MuleSoft, Dell Boomi, SnapLogic).
- Data Pipelines: Proficiency in batch and real-time integration (Kafka, AWS Kinesis, Azure Event Hubs, GCP Pub/Sub).
- Databases: Deep knowledge of SQL (Oracle, PostgreSQL, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) systems.
Preferred Experience
- Expertise with at least one major cloud platform (AWS, Azure, GCP).
- Experience with data services such as AWS EMR/Glue, GCP Dataflow/Dataproc, or Azure Data Factory.
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Knowledge of CI/CD pipelines for data engineering.
- Experience with OCI and Oracle Database (including JSON/REST, sharding) and/or Oracle microservices tooling.
Qualifications
Career Level - IC3
About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity.
We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.