Speaker

Divyanshu Mishra

Site Reliability Engineer - 66degrees

Bengaluru, India

With 7 years of experience in IT, Divyanshu Mishra has worked at Capgemini and Tietoevry and is now with GCP partner 66degrees, specializing in cloud computing, DevOps, and containerization technologies such as Kubernetes and Docker. He has delivered projects across public, hybrid, and private cloud environments and is an active speaker on DevOps, Apache Kafka, and cloud-native technologies. At Tietoevry, he also trained teams on OpenAI technologies, driving innovation and automation in the organization.

Area of Expertise

  • Information & Communications Technology

Topics

  • Kubernetes
  • Artificial Intelligence
  • AI Agents
  • Generative AI
  • Machine Learning
  • KubeCon

Beyond Keyword Search: Combining ML Models with OpenSearch for Smarter Results

Traditional keyword search often falls short when users phrase queries in natural language or search for concepts rather than exact terms. This talk explores how to enhance OpenSearch with machine learning techniques to deliver smarter, context-aware results. By combining semantic models like BERT or sentence-transformers with OpenSearch’s k-NN and hybrid search capabilities, we can implement semantic search that understands meaning, not just keywords. Attendees will learn how to index and search dense vector embeddings, integrate ML inference pipelines, and balance precision with recall using hybrid scoring strategies. The session will include real-world examples, deployment patterns, and performance tips to help teams bring intelligent search to their applications.
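The hybrid scoring the abstract describes can be sketched as a query body that combines a BM25 keyword clause with a k-NN clause over a dense embedding field. This is a minimal, dependency-free sketch: the field names (`content`, `content_vector`) and parameters are illustrative assumptions, and actually sending the body to a cluster (e.g. via opensearch-py) is left out.

```python
def build_hybrid_query(text: str, vector: list[float], k: int = 10) -> dict:
    """Return an OpenSearch hybrid query mixing keyword and semantic search."""
    return {
        "query": {
            "hybrid": {
                "queries": [
                    # Lexical leg: classic BM25 keyword matching.
                    {"match": {"content": {"query": text}}},
                    # Semantic leg: k-NN over pre-indexed dense embeddings
                    # (e.g. sentence-transformer vectors).
                    {"knn": {"content_vector": {"vector": vector, "k": k}}},
                ]
            }
        }
    }

query = build_hybrid_query("reset my password", [0.1, 0.2, 0.3], k=5)
```

In OpenSearch 2.x, balancing the two legs is handled server-side by a search pipeline with a normalization processor, which normalizes and combines the sub-query scores before ranking.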

The Intersection of Sustainability, AI, and Responsible Tech Practices in Industry

This session explores the intersection of sustainability, AI, and responsible tech practices, focusing on how businesses can harness AI for positive environmental and social impact while mitigating risks. AI technologies offer significant potential for sustainability, from optimizing energy usage to improving supply chain efficiency. However, the environmental impact of training AI models raises concerns. Responsible AI practices address ethical issues like bias, fairness, and data privacy. The session will cover strategies for integrating sustainability and ethics into AI development, including green AI, ethical frameworks, and AI for social good. By prioritizing transparency, accountability, and collaboration across sectors, businesses can create AI systems that contribute to global sustainability goals while ensuring ethical outcomes. This discussion provides insights into how AI can drive innovation while balancing technological advancement with environmental and social responsibility.

RAG, LLM Ops, and the Next Wave of AI Frameworks for the Enterprise

RAG (Retrieval-Augmented Generation), LLM (Large Language Model) Ops, and the next wave of AI frameworks are transforming how enterprises approach AI development and deployment. RAG enhances language models by integrating external data retrieval into the generation process, improving the quality and relevance of AI outputs. LLM Ops focuses on the operational aspects of deploying and maintaining large-scale language models, ensuring scalability, reliability, and continuous improvement. As AI continues to evolve, these frameworks are critical for enterprises to efficiently integrate advanced AI into their operations. They enable organizations to manage complex AI workflows, optimize performance, and maintain control over AI systems. The next generation of AI frameworks is designed to handle vast datasets, ensure high-quality responses, and seamlessly integrate with existing enterprise infrastructure. Together, RAG and LLM Ops represent a significant leap forward in enterprise AI, offering innovative solutions for real-time decision-making, automation, and customer experience enhancement, all while maintaining operational efficiency and ethical responsibility.
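The retrieve-then-generate loop at the heart of RAG can be sketched in a few lines. This toy version uses token-overlap scoring as a stand-in for a real embedding model, and stops at assembling the augmented prompt rather than calling an LLM; both simplifications are assumptions made to keep the sketch dependency-free.

```python
def score(query: str, doc: str) -> float:
    """Jaccard overlap between query and document token sets."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the generation prompt with retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Invoices are generated on the first business day of each month.",
    "Password resets require a verified email address.",
    "Our data centers run entirely on renewable energy.",
]
prompt = build_prompt("when are invoices generated", corpus)
```

In a production system the scoring function would be a dense-vector retriever (and the prompt would be sent to a model behind an LLM Ops pipeline), but the control flow is the same.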

Serverless AI on Kubernetes: The Future of Event-Driven Machine Learning

This session explores the integration of serverless computing with AI workloads on Kubernetes, examining how event-driven machine learning (using tools such as Knative and Kubeless) can scale AI model deployment with minimal infrastructure management, with a focus on efficiency and cost-effectiveness.
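The serverless inference pattern the abstract describes boils down to a plain HTTP endpoint that a platform like Knative Serving can scale to zero between events. A minimal sketch, in which the model, route, port, and payload shape are all illustrative assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features: list[float]) -> float:
    """Stub model: a real service would load a trained model at startup."""
    return sum(features) / len(features)

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the event payload, run inference, and return JSON.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = {"prediction": predict(body["features"])}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())

def main():
    # Run locally on 8080 (Knative's conventional default); under Knative
    # Serving, this container scales to zero between requests.
    HTTPServer(("", 8080), InferenceHandler).serve_forever()
```

Packaged in a container, a handler like this needs no Deployment sizing or HPA configuration; the serverless layer handles concurrency-based scaling.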

Explainable AI (XAI) for Cloud-Native Environments

This session addresses the need for transparency and trust in AI/ML models, focusing on explainability for decision-making in Kubernetes and cloud-native contexts.
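One widely used model-agnostic explainability technique is permutation feature importance: a feature's importance is measured as the drop in model accuracy when that feature's column is shuffled. A dependency-free sketch, where the tiny dataset and linear "model" are illustrative assumptions:

```python
import random

def accuracy(model, X, y):
    """Fraction of examples the model classifies correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Mean accuracy drop when `feature`'s column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(trials):
        col = [x[feature] for x in X]
        rng.shuffle(col)
        Xp = [x[:feature] + (v,) + x[feature + 1:] for x, v in zip(X, col)]
        drops.append(base - accuracy(model, Xp, y))
    return sum(drops) / trials

# Label depends only on feature 0; feature 1 is pure noise.
X = [(i % 2, i % 3) for i in range(60)]
y = [x[0] for x in X]
model = lambda x: x[0]   # predicts from the informative feature

important = permutation_importance(model, X, y, feature=0)
noise = permutation_importance(model, X, y, feature=1)
```

Shuffling the informative feature collapses accuracy while shuffling the noise feature changes nothing, which is exactly the kind of signal operators need when auditing a model served in a cluster.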

AI-Powered Kubernetes Resource Optimization: Reducing Costs While Maintaining Performance

Efficient resource management in Kubernetes is key to balancing performance and costs. This session will explore how AI algorithms can predict and optimize resource allocation for Kubernetes workloads, ensuring clusters are right-sized without over-provisioning. We’ll dive into techniques like predictive autoscaling, where AI anticipates workload fluctuations and adjusts resources proactively, and cost-efficient provisioning, optimizing CPU, memory, and storage usage. We’ll also cover memory optimization for AI applications, improving cluster efficiency. By integrating AI, organizations can achieve better cost control and scalable performance, making Kubernetes clusters both efficient and cost-effective. The session will provide practical strategies and real-world examples for using AI to automate Kubernetes resource management, ensuring optimal performance while minimizing unnecessary resource consumption.
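The predictive-autoscaling idea above can be sketched in two steps: forecast the next interval's load from recent samples, then size replicas the way Kubernetes' HPA does (ceil of load divided by the per-replica target). The moving-average forecaster, window size, and targets are illustrative assumptions; a real system would use a proper time-series model and drive the Kubernetes API.

```python
import math

def forecast(samples: list[float], window: int = 3) -> float:
    """Predict next-interval load as the mean of the last `window` samples."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def desired_replicas(predicted_load: float, target_per_replica: float,
                     min_r: int = 1, max_r: int = 20) -> int:
    """HPA-style sizing, clamped to the allowed replica range."""
    return max(min_r, min(max_r, math.ceil(predicted_load / target_per_replica)))

cpu_millicores = [400, 450, 500, 620, 700, 760]        # recent usage samples
predicted = forecast(cpu_millicores)                   # mean of last 3 samples
replicas = desired_replicas(predicted, target_per_replica=250)
```

Because the forecast leads the observed load, replicas are added before demand arrives rather than after a threshold is breached, which is the core difference from reactive autoscaling.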

AI-Enhanced Kubernetes for Continuous Optimization: Predictive Scaling and Self-Tuning

As cloud-native environments become more complex, efficiently managing Kubernetes clusters is crucial. This session will explore how AI and machine learning can integrate with Kubernetes to enable predictive scaling and self-tuning of resources. Attendees will learn how AI models can analyze workload patterns, forecast demand, and automatically adjust cluster resources. We’ll cover techniques such as using reinforcement learning for resource allocation and predictive analytics for auto-scaling, ensuring performance while optimizing costs. By allowing Kubernetes to autonomously fine-tune itself, organizations can achieve more efficient, resilient, and cost-effective cloud-native systems. Join us to discover how AI can transform Kubernetes into an autonomous platform for continuous optimization.
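The reinforcement-learning angle mentioned above can be illustrated with tabular Q-learning over replica counts: states are current replica counts, actions scale down/hold/up, and the reward penalizes the gap between supply and demand. The toy environment, reward shape, and hyperparameters are all illustrative assumptions, not a production design.

```python
import random

ACTIONS = (-1, 0, 1)                  # scale down / hold / scale up
DEMAND, MAX_R = 4, 8                  # fixed load; replica cap

def reward(replicas: int) -> float:
    """Best when supply matches demand; penalize under- and over-provisioning."""
    return -abs(replicas - DEMAND)

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(1, MAX_R + 1) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randint(1, MAX_R)
        for _ in range(10):           # short episode
            # Epsilon-greedy action selection.
            a = rng.choice(ACTIONS) if rng.random() < eps else \
                max(ACTIONS, key=lambda a: Q[(s, a)])
            s2 = min(MAX_R, max(1, s + a))
            best_next = max(Q[(s2, b)] for b in ACTIONS)
            # Standard Q-learning update.
            Q[(s, a)] += alpha * (reward(s2) + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, MAX_R + 1)}
```

In a real cluster the state would include workload metrics rather than a fixed demand, and the learned policy would feed an operator or autoscaler rather than being read off a table.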
