Job Description

  • Collaborate with core engineering, customers, and solution engineering teams for functional and technical discovery sessions to understand requirements and architect TigerGraph-based solutions.
  • Prepare and deliver compelling product demonstrations, live software prototypes, and proofs of concept showcasing a leading Graph Database product's multi-modal Graph + Vector capabilities, including hybrid search for AI applications.
  • Create and maintain public documentation, internal knowledge base articles, FAQs, and best practices for a leading Graph Database product's implementations.
  • Design efficient graph schemas, develop GSQL queries and algorithms, and build prototypes that address customer requirements (e.g., Fraud Detection, Recommendation Engines, Knowledge Graphs, Entity Resolution, Anti-Money Laundering, and Cybersecurity).
  • Optimize indexing strategies, partitioning, and query performance in a leading Graph Database product's distributed environment, leveraging GSQL for parallel processing and real-time analytics.
  • Lead large-scale production implementations of a leading Graph Database product's solutions for enterprise clients, ensuring seamless integration with existing systems such as Kafka for streaming, K8s for orchestration, and cloud platforms.
  • Provide expert guidance on Graph Neural Networks (GNNs), Retrieval-Augmented Generation (RAG), semantic search, and AI-driven optimizations to enhance customer outcomes.
  • Troubleshoot complex issues in distributed systems, including networking, load balancing, and performance monitoring.
  • Foster cross-functional collaboration, including data modeling sessions, whiteboarding architectures, and stakeholder management to validate solutions.
  • Drive customer success through exceptional service, project management, and clear communication of a leading Graph Database product's value in AI/enterprise use cases.
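The responsibilities above center on graph queries for use cases such as fraud detection and entity resolution. As a purely illustrative sketch (plain Python rather than GSQL; the transaction data and function name are hypothetical), the k-hop neighborhood traversal that underlies many fraud-ring queries might look like:

```python
from collections import deque

def accounts_within_hops(graph, start, max_hops):
    """Breadth-first search: all accounts reachable from `start`
    in at most `max_hops` transaction edges."""
    seen = {start: 0}          # node -> hop distance
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue           # don't expand past the hop limit
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    # Exclude the start node itself from the result.
    return {n for n, d in seen.items() if 0 < d <= max_hops}

# Toy transaction graph (hypothetical data): edges point payer -> payee.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
}
nearby = accounts_within_hops(graph, "A", 2)  # accounts within 2 hops of "A"
```

In a production graph database this traversal would be expressed declaratively (e.g., as a GSQL query over a schema of Account vertices and Transaction edges) and executed in parallel across partitions; the sketch only shows the underlying idea.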



Requirements:

  • Graph and Vector Data Science: Experience in applying graph algorithms, vector embeddings, and data science techniques for enterprise analytics.
  • SQL Expertise: Experience in SQL for querying, performance tuning, and debugging in relational and graph contexts.
  • Graph Databases and Platforms: Experience with Graph Database Products like TigerGraph, Neo4j, JanusGraph, or similar systems, focusing on multi-modal graph + vector integrations.
  • Programming & Scripting: Experience in Python, C++, and automation tools for task management, issue resolution, and GSQL development.
  • HTTP/REST and APIs: Expertise in building and integrating RESTful services for database interactions.
  • Linux and Systems: Strong background in Linux administration, scripting (bash/Python), and distributed environments.
  • Kafka and Streaming: Experience with Kafka for real-time data ingestion and event-driven architectures.
  • Cloud Computing: Experience with AWS, Azure, or GCP for virtualization, deployments, and hybrid setups.
  • Graph Neural Networks (GNNs) and Graph Machine Learning: Hands-on experience with frameworks like PyTorch Geometric for predictive analytics on graphs.
  • Retrieval-Augmented Generation (RAG) and Semantic Search: Building pipelines with vector embeddings and LLMs for AI applications.
  • Multimodal Data Handling: Managing text, images, video in graph + vector setups.
  • Agile Methodologies and Tools: 3+ years with Scrum/Agile, JIRA, or Confluence.
  • Presentation and Technical Communication: Advanced whiteboarding, architecture reviews, and demos.
  • Cross-Functional Collaboration: Leading discovery, data modeling (UML, ER diagrams), and on-call incident management.
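Several of these requirements (vector embeddings, hybrid search, RAG pipelines) reduce to nearest-neighbor retrieval over embedding vectors. A minimal, dependency-free sketch of cosine-similarity top-k retrieval — illustrative only; the vectors and function names are hypothetical, and a real deployment would use the database's native vector index rather than a linear scan:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, corpus, k=2):
    """Indices of the k corpus vectors most similar to the query."""
    ranked = sorted(range(len(corpus)),
                    key=lambda i: cosine(query, corpus[i]),
                    reverse=True)
    return ranked[:k]

# Toy embeddings (hypothetical, dimension 3 for readability).
corpus = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 0.0, 1.0],
]
query = [1.0, 0.05, 0.0]
top = top_k(query, corpus, k=2)  # indices of the two nearest vectors
```

In a hybrid graph + vector search, this similarity ranking would be combined with graph-structural filters (e.g., restricting candidates to a k-hop neighborhood) before results are fed to an LLM in a RAG pipeline.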


Nice-to-Have Skills

  • Data Governance, Security, and Compliance: Knowledge of encryption, access controls, GDPR/HIPAA, and ethical AI practices.
  • Big Data Processing Tools: Proficiency in Apache Spark, Hadoop, or Flink for distributed workloads.
  • AI-Driven Database Management and Optimization: Skills in AI-enhanced query optimization and performance tuning.
  • Monitoring & Observability Tools: 4+ years with Prometheus, Grafana, Datadog, or ELK Stack.
  • Networking & Load Balancing: Proficient in TCP/IP, load balancers (NGINX, HAProxy), and troubleshooting.
  • K8s (Kubernetes): Proficiency in container orchestration for scalable deployments.
  • DevOps and CI/CD Pipelines: Advanced use of Git, Jenkins, or ArgoCD for automation.
  • Real-Time Analytics and Streaming Integration: Beyond Kafka, experience with Flink or Pulsar.


Job Details

Role Level: Mid-Level
Work Type: Full-Time
Country: India
City: Hyderabad, Telangana
Company Website: https://www.agivant.com/
Job Function: Information Technology (IT)
Company Industry/Sector: IT Services and IT Consulting

About the Company

Searching, interviewing, and hiring are all part of professional life. The TALENTMATE portal aims to help professionals with each of these by bringing the requisites together under one roof. Whether you're hunting for your next job opportunity or looking for potential employers, we're here to lend you a helping hand.


Disclaimer: talentmate.com is only a platform to bring jobseekers and employers together. Applicants are advised to independently research the bona fides of the prospective employer. We do NOT endorse any requests for money payments and strictly advise against sharing personal or bank-related information. We also recommend you visit Security Advice for more information. If you suspect any fraud or malpractice, email us at abuse@talentmate.com.
