
Kumaran Ponnambalam
Principal AI Engineer, Outshift by Cisco
Union City, California, United States
Kumaran Ponnambalam is a technology leader with 20+ years of experience in AI/ML and Big Data. His focus is on creating robust, scalable AI models and services that drive effective business solutions. He currently leads AI initiatives at Outshift by Cisco. In previous roles, he built data pipelines, analytics, integrations, and conversational bots for customer engagement. He has also authored several courses on AI and Big Data for the LinkedIn Learning platform.
Area of Expertise
Topics
Building RAG systems for Enterprise use cases - Challenges & solutions
Retrieval-Augmented Generation (RAG) systems have emerged as powerful tools for leveraging enterprise data to generate contextually relevant and accurate responses. However, building RAG systems on enterprise data poses several challenges that must be addressed to ensure their effectiveness and reliability. To begin with, data from sources such as document hubs, ticketing systems, intranets, and other internal repositories must be pulled together. Structured and unstructured data then need to be integrated into indexes. High-quality retrieval is challenging, as one has to deal with enterprise vocabulary and user expectations. In this session, we will discuss these unique enterprise challenges for RAG and solutions to overcome them. We will share results from our real-world experience improving RAG for enterprises. Enterprise data scientists and AI managers will learn best practices they can apply in their organizations.
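For readers who want a concrete picture of the kind of pipeline the abstract describes, here is a minimal, self-contained sketch of indexing chunks from multiple enterprise sources and retrieving them for generation. The embed and llm_generate functions are toy placeholders standing in for real embedding and LLM endpoints; they are assumptions for illustration, not material from the talk.

```python
import math
from dataclasses import dataclass

def embed(text: str) -> list[float]:
    # Toy bag-of-letters embedding so the sketch runs end to end;
    # a real system would call an enterprise-approved embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def llm_generate(prompt: str) -> str:
    # Placeholder for the generation call to a large language model.
    return f"[LLM answer grounded in: {prompt[:80]}...]"

@dataclass
class Chunk:
    source: str          # e.g. "document hub", "ticketing system"
    text: str
    vector: list[float]

def build_index(docs: dict[str, list[str]]) -> list[Chunk]:
    # Pull chunks from multiple enterprise sources into one index.
    return [Chunk(src, t, embed(t)) for src, texts in docs.items() for t in texts]

def retrieve(index: list[Chunk], query: str, k: int = 3) -> list[Chunk]:
    # Rank chunks by cosine similarity to the query (vectors are unit-normalized).
    q = embed(query)
    scored = sorted(index, key=lambda c: -sum(a * b for a, b in zip(q, c.vector)))
    return scored[:k]

def answer(index: list[Chunk], query: str) -> str:
    # Assemble retrieved context into a grounded prompt and generate.
    context = "\n".join(f"[{c.source}] {c.text}" for c in retrieve(index, query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm_generate(prompt)

index = build_index({
    "document hub": ["VPN setup requires the AnyConnect client."],
    "ticketing system": ["Ticket 4211: VPN drops on macOS after update."],
})
print(answer(index, "How do I set up the VPN?"))
```

In a production setting, the indexing step would also normalize enterprise vocabulary (acronyms, product names) and attach access-control metadata to each chunk, which are two of the retrieval-quality concerns the session addresses.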
Creating a Generative AI roadmap for the enterprise
In today's rapidly evolving digital landscape, enterprises are increasingly turning to Generative Artificial Intelligence (AI) as a strategic tool for innovation, efficiency, and competitive advantage. This talk explores the essential elements of crafting a comprehensive Generative AI roadmap tailored specifically to enterprise needs. We will discuss the process of identifying candidate use cases for Generative AI, then look at formulating evaluation criteria for prioritizing those candidates. We will explore the technology horizon and execution risks to arrive at a portfolio with a mix of quick-win, tactical, and strategic use cases. The audience will learn about tools and methodologies for deciding on a Generative AI roadmap for their enterprises.
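As a rough illustration of the prioritization step mentioned above, the following sketch scores hypothetical candidate use cases against weighted evaluation criteria and sorts them into a portfolio. The criteria, weights, and candidates are invented placeholders, not recommendations from the talk.

```python
# Candidate Generative AI use cases scored on simple evaluation criteria.
# All weights and scores below are illustrative assumptions.
CRITERIA_WEIGHTS = {"business_value": 0.4, "feasibility": 0.3, "risk_tolerance": 0.3}

candidates = [
    {"name": "Support-ticket summarization", "business_value": 7, "feasibility": 9, "risk_tolerance": 8},
    {"name": "Contract drafting assistant",  "business_value": 9, "feasibility": 5, "risk_tolerance": 4},
]

def priority_score(candidate: dict) -> float:
    # Weighted sum; higher is better. "risk_tolerance" is scored so 10 = low risk.
    return sum(candidate[c] * w for c, w in CRITERIA_WEIGHTS.items())

for c in sorted(candidates, key=priority_score, reverse=True):
    bucket = "quick win" if c["feasibility"] >= 8 else "tactical/strategic"
    print(f"{c['name']}: score={priority_score(c):.1f} ({bucket})")
```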
Implement safe and compliant Generative AI solutions for the enterprise
Generative AI is revolutionizing how enterprises do business, engage customers, and improve employee productivity. But enterprises face several challenges around the safe and compliant use of Generative AI technologies. Can such applications be hacked and made to produce undesirable results? Will confidential and private data leak? How can organizations protect against bias, hallucination, toxicity, and off-topic content from large language models? Can shadow usage of Generative AI services by employees be prevented? In this session, the speaker will discuss these key challenges and lay out solutions and best practices for adding guardrails to an enterprise's Generative AI framework.
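To make the idea of guardrails concrete, here is a minimal sketch of input and output checks wrapped around an LLM call. The regex patterns, blocked-topic list, and llm_generate stub are illustrative assumptions; a production framework would use trained toxicity, bias, and jailbreak classifiers plus policy engines rather than these toy checks.

```python
import re

def llm_generate(prompt: str) -> str:
    # Stand-in for the enterprise LLM endpoint.
    return "Here is a draft response for the customer."

# --- Input guardrails ---
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-like pattern
    re.compile(r"\b\d{16}\b"),               # bare card-number-like pattern
]
BLOCKED_TOPICS = {"insider trading", "password dump"}

def redact_pii(text: str) -> str:
    # Mask obvious private data before it reaches the model.
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def is_on_topic(text: str) -> bool:
    # A real system would use a trained topic / jailbreak classifier here.
    return not any(topic in text.lower() for topic in BLOCKED_TOPICS)

# --- Output guardrails ---
def passes_output_checks(text: str) -> bool:
    # Placeholder for toxicity, bias, and data-leakage checks on the response.
    return all(not p.search(text) for p in PII_PATTERNS)

def guarded_generate(user_input: str) -> str:
    if not is_on_topic(user_input):
        return "Request declined: this topic is outside the approved scope."
    response = llm_generate(redact_pii(user_input))
    if not passes_output_checks(response):
        return "Response withheld pending human review."
    return response

print(guarded_generate("My SSN is 123-45-6789, can you update my record?"))
```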
Bringing Agentic AI to enterprise workflows: Possibilities & challenges
In the fast-growing field of Generative AI, Agentic AI is the next major technology that will change the way enterprises run their business workflows. Agentic AI has the power to analyze inputs, generate next-best actions, and execute those actions. This has widespread applications in sales, marketing, manufacturing, HR, and healthcare. There are several challenges to overcome, though, before Agentic AI can become mainstream in enterprises. The session will focus on the concepts of Agentic AI and its applications in enterprises. It will also discuss the challenges of getting Agentic AI to production and best practices for a successful implementation.
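The following is a minimal sketch of the analyze/act loop described above: an agent plans the next-best action, executes it against a registry of allowed business tools, and stops when the goal is reached or a step budget is exhausted. The plan_next_action planner and lookup_order tool are toy stand-ins introduced only for illustration, not part of the session material.

```python
def plan_next_action(goal: str, history: list[dict]) -> dict:
    # Stand-in for an LLM planner that emits structured tool calls.
    if not history:
        return {"tool": "lookup_order", "args": {"order_id": "A-1001"}}
    return {"tool": "finish", "args": {"summary": "Order A-1001 has shipped."}}

def lookup_order(order_id: str) -> str:
    # Example business action; real tools would call enterprise systems.
    return f"Order {order_id}: status=shipped"

# Tool registry: the only actions the agent is permitted to execute.
TOOLS = {"lookup_order": lookup_order}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[dict] = []
    for _ in range(max_steps):
        action = plan_next_action(goal, history)
        if action["tool"] == "finish":
            return action["args"]["summary"]
        result = TOOLS[action["tool"]](**action["args"])
        history.append({"action": action, "result": result})
    return "Stopped: step budget exhausted."  # guardrail against runaway loops

print(run_agent("Where is my order A-1001?"))
```

The explicit tool registry and step budget reflect two of the production concerns the session discusses: constraining what actions an agent may take, and bounding its autonomy.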