Job Description

Wissen Technology is Hiring a System Engineer




About Wissen Technology:

At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time. Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia.

Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives.

We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more. Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients with the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes.

Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.


Job Summary: We are looking for an experienced System Engineer with strong hands-on expertise in building and managing real-time and batch data processing systems. The ideal candidate must have deep specialization in at least one distributed data technology—Apache Flink, Apache Kafka, Spark Structured Streaming, or Dremio—along with solid working knowledge of the others. This role requires strong experience in distributed systems, event-driven architectures, streaming data pipelines, and observability of production-grade data platforms.



Experience: 6-15 Years
Location: Pune
Mode of Work: Full-time


Key Responsibilities:

  • Design, build, and optimize real-time streaming pipelines and batch workloads.
  • Ensure data correctness, reliability, and processing guarantees (at-least-once / exactly-once).
  • Develop stateful stream processing solutions including joins, aggregations, windowing, and CDC pipelines.
  • Build and operate scalable, low-latency event-driven architectures using technologies like Flink, Kafka, Pulsar, or Spark.
  • Design and support semantic layers, distributed SQL engines, and query acceleration using Dremio.
  • Integrate systems across the data platform, including:
      • Data lakes (Iceberg, Delta Lake)
      • OLAP / analytical query engines
      • Downstream applications, APIs, and data consumers
  • Manage schema evolution and compatibility across producers and consumers (Schema Registry, formats, versions).
  • Monitor, troubleshoot, and performance-tune distributed data processing jobs in production environments.
  • Implement observability standards across streaming platforms and distributed components.
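To make the windowing and aggregation work above concrete, here is a minimal, engine-agnostic sketch of a tumbling-window count in plain Python. It is illustrative only (the function name and event shape are assumptions, not tied to Flink, Kafka, or Spark); production engines additionally handle watermarks, late data, and state checkpointing.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_s):
    """Assign each (timestamp, key) event to a tumbling window and count per key.

    events: iterable of (epoch_seconds, key) pairs.
    window_size_s: window length in seconds; windows start at multiples of it.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Tumbling windows: each event belongs to exactly one fixed-size window.
        window_start = ts - (ts % window_size_s)
        windows[window_start][key] += 1
    # Return plain dicts keyed by window start time, in time order.
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(0, "a"), (3, "b"), (7, "a"), (12, "a")]
print(tumbling_window_counts(events, 5))
# Windows: [0,5) -> {"a": 1, "b": 1}, [5,10) -> {"a": 1}, [10,15) -> {"a": 1}
```

In a real pipeline, this per-window state would live in the engine's managed state backend so it survives restarts and supports exactly-once semantics.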




Requirements:

  • Expert-level knowledge in at least one of the following: Apache Flink, Apache Kafka, Spark Structured Streaming, or a distributed SQL engine such as Dremio (or Trino/Presto/Drill).
  • Strong hands-on experience in real-time and batch data processing systems.
  • Solid understanding of distributed systems, scalability, and fault-tolerant architectures.
  • Experience building and operating event-driven architectures.
  • Proficiency in managing schema evolution and compatibility across streaming ecosystems.
  • Hands-on experience integrating data pipelines with data lakes, OLAP engines, and downstream applications.
  • Strong skills in troubleshooting, performance tuning, and optimizing distributed data processing jobs.
  • Knowledge of observability practices across streaming and distributed systems.
  • Strong analytical, problem-solving, and communication skills.


Good To Have Skills:

  • Experience with Pulsar, Redpanda, or other distributed messaging systems.
  • Knowledge of Kubernetes, containerized deployments, and orchestration.
  • Exposure to cloud platforms (AWS, Azure, GCP) and their managed streaming services.
  • Familiarity with CI/CD pipelines and DevOps practices.
  • Hands-on experience with data governance, lineage, and cataloging tools.
  • Knowledge of SQL performance tuning and BI acceleration mechanisms.
  • Understanding of Data Mesh or distributed data ownership models.



Wissen Sites:

Website: www.wissen.com

LinkedIn: https://www.linkedin.com/company/wissen-technology

Wissen Leadership: https://www.wissen.com/company/leadership-team/

Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/?feedView=All
Wissen Thought Leadership: https://www.wissen.com/articles/


Job Details

Role Level: Mid-Level
Work Type: Full-Time
Country: India
City: Haveli, Maharashtra
Company Website: http://www.wissen.com
Job Function: Information Technology (IT)
Company Industry/Sector: IT Services and IT Consulting

About the Company

Searching, interviewing, and hiring are all part of professional life. The TALENTMATE portal brings these requisites together under one roof to help professionals with each of them. Whether you're hunting for your next job opportunity or looking for potential employers, we're here to lend you a helping hand.


Disclaimer: talentmate.com is only a platform to bring jobseekers & employers together. Applicants are advised to independently research the bona fides of the prospective employer. We do NOT endorse any requests for money payments and strictly advise against sharing personal or bank-related information. We also recommend you visit Security Advice for more information. If you suspect any fraud or malpractice, email us at abuse@talentmate.com.

