Jagan PS
Senior AI/ML Engineer
Bengaluru, India
I am a Senior Software Development Engineer – AI/ML at Flexera with 6+ years of experience in full-stack and AI/ML development. Based in Bengaluru, I have presented at BIACONF (IIM Bangalore), BangPypers, and RVCE on LLMs and MLOps. A FlexHack25 winner, I build scalable AI systems using Go, Python, and cloud-native tools, and hold certifications in data science and full-stack engineering.
Engineering Intelligence: LLM Councils, Domain Context, and Agentic Control
AI agents are no longer just thinking systems — they are becoming context-rich, tool-aware, and decision-oriented collaborators.
The real shift is not better models, but how context, retrieval, tools, and governance are structured around agents.
Design and scale agentic AI frameworks where multiple LLM agents, governed by councils, domain-specific context, and agentic tools, work together to augment human intelligence while maintaining quality, security, and enterprise-grade guardrails.
In this talk, we explore a practical agentic framework where:
LLM agents collaborate through an LLM Council (multi-agent review, validation, and arbitration)
Each agent operates with domain-specific context powered by 16 types of RAG
Traditional MCP tools evolve into Agentic Tools with intent, memory, and policy awareness
Quality, security, and jailbreak resistance are enforced at the framework level, not bolted on later
We’ll show how agentic systems move from reactive question-answering to proactive problem-solving, and how this elevates—not replaces—human intelligence in real engineering workflows.
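To make the council pattern concrete, here is a minimal Python sketch of multi-agent review, validation, and arbitration: several agents draft answers, peer reviewers rate each draft, and the top-scoring draft wins. `call_llm`, the agent names, and the stubbed scoring are placeholder assumptions for illustration, not the framework presented in the talk.

```python
# Minimal "LLM Council" sketch: domain agents draft answers, peers review
# each draft, and arbitration picks the winner. All names and the scoring
# scheme are illustrative assumptions, not the talk's actual framework.
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    """Placeholder for a real model client; returns a canned string here."""
    return f"[model output for: {prompt[:40]}...]"

@dataclass
class Draft:
    agent: str
    answer: str
    score: float = 0.0

def council_answer(question: str, agents: list[str], reviewers: int = 2) -> Draft:
    # 1. Each domain agent drafts an answer from its own persona/context.
    drafts = [Draft(a, call_llm(f"As the {a} agent, answer: {question}"))
              for a in agents]
    # 2. Peer review and validation: each draft is rated by other council
    #    members; the 7.0 is a stub for parsing a score from the reply.
    for d in drafts:
        reviews = [call_llm(f"Rate this answer 0-10 for accuracy, "
                            f"reply with a number: {d.answer}")
                   for _ in range(reviewers)]
        d.score = sum(7.0 for _ in reviews) / len(reviews)
    # 3. Arbitration: the highest-scoring draft wins; low-confidence ties
    #    could instead escalate to a dedicated arbiter agent or a human.
    return max(drafts, key=lambda d: d.score)

best = council_answer("How should we shard this Postgres table?",
                      agents=["database", "security", "cost"])
print(f"winner: {best.agent} (score {best.score})")
```

In a real council, the review step would parse a numeric score out of each reviewer's reply, and low-scoring or disputed drafts would trigger the arbitration path rather than a simple max().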
Beyond Single Agents: LLM Councils, Agentic Tools, and the Future of Human-AI Intelligence
AI agents are rapidly moving beyond chatbots and copilots—but scaling them reliably requires more than better models. The real challenge lies in how context, tools, and governance are engineered around agents.
This session presents a practical agentic framework where multiple LLM agents collaborate through LLM Councils, operate with domain-specific context powered by multiple RAG strategies, and use agentic tools instead of static APIs. We’ll explore how agents transition from reactive responders to proactive problem-solvers, and how these systems elevate human intelligence rather than replace it.
The talk focuses on real design patterns: context as a versioned artifact, agent-to-agent validation, bounded autonomy, and framework-level guardrails that significantly reduce hallucinations, security risks, and jailbreak attempts. Attendees will leave with a clear mental model for building AI-native systems that scale across teams while preserving quality, trust, and engineering discipline.
This is not a product demo. It’s a blueprint for how AI-native systems actually work at scale.
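To illustrate the difference between a static API and an agentic tool, here is a minimal sketch of a tool wrapper with intent, memory, and policy awareness. The class shape, intent allow-list, and call budget are my assumptions for illustration, not a published interface.

```python
# Sketch of an "agentic tool": instead of exposing a raw API function, the
# wrapper checks the caller's declared intent against a policy, enforces a
# call budget (bounded autonomy), and keeps a memory of past invocations.
# All names here are illustrative assumptions, not a published interface.
from datetime import datetime, timezone

class AgenticTool:
    def __init__(self, name, fn, allowed_intents, max_calls_per_run=5):
        self.name = name
        self.fn = fn                        # the underlying "static" API
        self.allowed_intents = allowed_intents
        self.max_calls = max_calls_per_run  # hard budget per agent run
        self.memory = []                    # invocation log agents can inspect

    def __call__(self, intent: str, **kwargs):
        # Policy awareness: refuse intents outside the declared allow-list.
        if intent not in self.allowed_intents:
            raise PermissionError(f"{self.name}: intent '{intent}' not allowed")
        # Bounded autonomy: the framework, not the model, enforces the cap.
        if len(self.memory) >= self.max_calls:
            raise RuntimeError(f"{self.name}: call budget exhausted")
        result = self.fn(**kwargs)
        self.memory.append({"at": datetime.now(timezone.utc).isoformat(),
                            "intent": intent, "args": kwargs})
        return result

# Usage: wrap an ordinary search function as a policy-aware tool.
def web_search(query: str) -> str:
    return f"results for '{query}'"         # placeholder backend

search = AgenticTool("search", web_search,
                     allowed_intents={"research", "fact_check"})
print(search(intent="research", query="agentic guardrails"))
```

The design point is that the policy check and call budget live in the framework-level wrapper, so the guardrails hold even when the model misbehaves, rather than being bolted on later.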
From RAG to ReFrAG: Building Agentic AI That Actually Works in Production
This talk will enable the audience to confidently choose, design, and implement the right retrieval architecture—RAG, ReFrAG, or agentic retrieval—based on their problem, data, and production constraints. Attendees will learn how to move beyond basic RAG setups by understanding when feedback-driven retrieval (ReFrAG) is necessary and how agentic workflows can orchestrate retrieval, evaluation, and decision-making at scale.
By the end of the session, participants will be able to:
Identify common failure modes of traditional RAG systems
Decide when ReFrAG or agentic retrieval patterns provide measurable improvements
Design retrieval pipelines that incorporate feedback loops, evaluation layers, and guardrails
Apply practical architectural patterns that improve reliability, accuracy, and cost control in production LLM systems
The goal is to equip the audience with actionable mental models and design patterns they can immediately apply to build AI systems that are more trustworthy, scalable, and production-ready—rather than relying on trial-and-error or demo-only approaches.
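As a rough sketch of the feedback-driven retrieval loop the abstract points at, assuming ReFrAG-style retrieval means retrieve, evaluate, refine, and retry: `retrieve`, `evaluate`, and `refine_query` below are stand-ins for a vector store, an LLM-as-judge (or trained reranker), and a query rewriter.

```python
# Hedged sketch of a feedback-driven ("ReFrAG"-style) retrieval loop:
# retrieve, let an evaluator judge whether the context actually supports
# the question, and refine the query before retrying. The three helpers
# are placeholder stand-ins; the loop shape is the point.

def retrieve(query: str, k: int = 4) -> list[str]:
    return [f"doc about {query} #{i}" for i in range(k)]   # placeholder store

def evaluate(question: str, docs: list[str]) -> float:
    # In practice: an LLM-as-judge or reranker scoring 0-1 whether `docs`
    # contain enough evidence to answer `question`.
    return 0.4 if "refined" not in docs[0] else 0.9        # stub behaviour

def refine_query(question: str, docs: list[str]) -> str:
    # In practice: ask the model to rewrite the query using what the
    # first pass *did* find (the feedback signal).
    return f"refined: {question}"

def refrag(question: str, threshold: float = 0.7, max_rounds: int = 3):
    query = question
    for round_ in range(max_rounds):
        docs = retrieve(query)
        score = evaluate(question, docs)
        if score >= threshold:               # only answer once the evidence
            return docs, score, round_ + 1   # clears the quality bar
        query = refine_query(question, docs)
    return None, score, max_rounds           # abstain / escalate instead

docs, score, rounds = refrag("why did invoice ingestion fail in Q3?")
print(score, rounds)
```

The guardrail here is the abstain branch: when the evidence never clears the threshold, the pipeline escalates instead of letting the generator answer from weak context, which is where cost and hallucination control come from.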
Agents of the Future: From MCP to Organoid Intelligence
In this talk, we explore the emerging landscape of autonomous AI through MCP agents — modular, composable systems capable of reasoning, planning, and interacting with tools like Tavily and Amazon Bedrock. But the real frontier lies beyond: Organoid Intelligence (OI) — a revolutionary approach where lab-grown brain organoids serve as living computational platforms.
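For a concrete picture of the agent loop behind such systems, here is a toy plan-act-observe sketch. The `TOOL:`/`DONE:` convention and the stubbed `call_llm` are assumptions for illustration; a real deployment would speak the MCP protocol to tool servers (e.g., Tavily) and call a hosted model (e.g., via Amazon Bedrock) instead of using a hard-coded registry.

```python
# Toy plan-act-observe loop for an MCP-style agent. `call_llm` is a stub a
# real system would replace with a hosted model call, and TOOLS is a
# hard-coded stand-in for tool servers reached over the MCP protocol.

def call_llm(prompt: str) -> str:
    # Stub model: plan a search first, then declare the goal done once the
    # transcript shows a tool observation.
    if "Used search" in prompt:
        return "DONE: summary of findings"
    return "TOOL:search:latest organoid intelligence papers"

TOOLS = {"search": lambda q: f"top results for '{q}'"}  # toy registry

def run_agent(goal: str, max_steps: int = 4) -> str:
    transcript = f"Goal: {goal}"
    for _ in range(max_steps):
        decision = call_llm(f"Plan the next step.\n{transcript}")
        if decision.startswith("DONE:"):
            return decision[5:].strip()
        _, tool, arg = decision.split(":", 2)  # e.g. TOOL:search:<query>
        observation = TOOLS[tool](arg)
        transcript += f"\nUsed {tool}: {observation}"
    return "step budget exhausted"

print(run_agent("survey the state of organoid intelligence"))
```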