Session

The Inquisitive Agent: Curing RAG's Ambiguity with Autonomous Dialogue

For all their power, most RAG systems today are locked in a monologue. They listen to a query, retrieve what they think is relevant, and speak their answer. If the initial query is vague, the monologue fails. The future of intelligent assistance, however, is not a monologue; it's a rich, collaborative dialogue.

This session dives deep into the architecture of AID (Assisted Intelligent Dialogue), an Agentic system we built and deployed at Cisco to make this future a reality. We've designed a system that doesn't just search for answers, but actively seeks clarity. Instead of a linear pipeline, our agent operates in a cognitive loop, embodying different roles to deconstruct ambiguity and co-create understanding with the user.
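
To make the loop concrete before stepping through it, here is a minimal Python sketch of how such a clarification loop could be orchestrated. Every name in it (clarification_loop, looks_ambiguous, plan_clarifying_question) and the toy logic are illustrative assumptions, not the actual AID implementation.

# Minimal sketch of the cognitive loop described above; all names and the toy
# logic are illustrative assumptions, not the actual AID implementation.
def clarification_loop(query, ask_user, max_turns=3):
    """Alternate between assessing ambiguity and asking the user, then answer."""
    context = {"query": query, "clarifications": []}
    for _ in range(max_turns):
        if not looks_ambiguous(context):                  # 1. Detective: is clarification needed?
            break
        question = plan_clarifying_question(context)      # 2-3. Strategist + Interviewer
        context["clarifications"].append(ask_user(question))
    return answer(context)                                # 4. Expert: answer with resolved context

def looks_ambiguous(context):
    # Toy stand-in: very short queries with no clarifications yet are treated as ambiguous.
    return len(context["query"].split()) < 4 and not context["clarifications"]

def plan_clarifying_question(context):
    return f"Which product or deployment does '{context['query']}' refer to?"

def answer(context):
    return f"Answer for '{context['query']}' given: {context['clarifications']}"

# Example run, simulating a user who replies with a concrete product.
print(clarification_loop("VPN drops", ask_user=lambda q: "AnyConnect on a Catalyst 8300"))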

We will walk you through its autonomous reasoning process:

1. The Agent as the Detective (Ambiguity Assessment):
When a query arrives, the agent’s first task is to assess the evidence. It doesn't immediately jump to conclusions. Instead, it analyzes the query for semantic ambiguity and potential for misinterpretation, identifying cases where a simple vector search would likely fail.
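
One way such an assessment could work (a sketch under our own assumptions, not necessarily the signal AID uses) is to measure how scattered the top retrieval results are: if the best-matching chunks disagree with one another, a single-shot answer is risky and clarification is warranted.

# Hypothetical ambiguity heuristic: if the top retrieved chunks are spread across
# many distinct topics, flag the query for clarification.
import numpy as np

def ambiguity_score(query_vec, chunk_vecs, top_k=5):
    """Dispersion of the top retrieved chunks; higher means more scattered evidence."""
    sims = chunk_vecs @ query_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    top = chunk_vecs[np.argsort(sims)[-top_k:]]
    top = top / (np.linalg.norm(top, axis=1, keepdims=True) + 1e-9)
    pairwise = top @ top.T                                 # cosine similarities among the top hits
    n = len(top)
    mean_off_diag = (pairwise.sum() - n) / (n * (n - 1))
    return float(1.0 - mean_off_diag)                      # high dispersion -> high ambiguity

# Example with random embeddings standing in for a real vector index.
rng = np.random.default_rng(0)
print(ambiguity_score(rng.normal(size=384), rng.normal(size=(100, 384))))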

2. The Agent as the Strategist (Hypothesis and Reasoning):
Upon detecting ambiguity, the agent consults its "knowledge model"—a rich knowledge graph. Here, it maps the user's query onto the graph to explore interconnected concepts and relationships. It generates multiple hypotheses about the user's true intent and identifies the critical piece of missing information needed to validate one of them. This is the agent’s core reasoning phase.
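
As an illustration of this phase, the sketch below maps query terms onto a toy graph and turns each neighbouring concept into a candidate hypothesis. The schema, the use of networkx, and the traversal are assumptions for exposition, not the AID knowledge model.

# Hypothetical hypothesis generation over a toy knowledge graph (networkx assumed);
# the schema and traversal are illustrative, not the AID knowledge model.
import networkx as nx

graph = nx.Graph()
graph.add_edges_from([
    ("vpn", "anyconnect"), ("vpn", "site-to-site"),
    ("anyconnect", "certificate error"), ("site-to-site", "ikev2 timeout"),
])

def generate_hypotheses(query, graph):
    """Each query term that matches a node spawns one hypothesis per neighbouring concept."""
    hypotheses = []
    for term in query.lower().split():
        if term in graph:
            for neighbor in graph.neighbors(term):
                hypotheses.append({"anchor": term, "interpretation": neighbor})
    return hypotheses

def missing_information(hypotheses):
    """The discriminating fact is whatever still separates the competing interpretations."""
    options = sorted({h["interpretation"] for h in hypotheses})
    return options if len(options) > 1 else None

hyps = generate_hypotheses("vpn keeps failing", graph)
print(hyps)                        # two competing interpretations of "vpn"
print(missing_information(hyps))   # ['anyconnect', 'site-to-site'] -> what to ask about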

3. The Agent as the Interviewer (Probing with Purpose):
Armed with this insight, the agent reaches its "moment of truth." It plans and executes a dialogue action, formulating a targeted, clarifying question. This isn't a generic "Can you rephrase?"; it's a precise probe designed to efficiently collapse the space of uncertainty and guide the user toward the correct context.
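
As a concrete, deliberately simplified illustration of purposeful probing, the sketch below scores candidate attributes by how much they would split the surviving hypotheses and asks about the most informative one. The entropy criterion, attribute names, and question template are our assumptions, not necessarily how AID plans its questions.

# Hypothetical question planner: ask about the attribute whose values best split
# the surviving hypotheses (an information-gain-style proxy, assumed for illustration).
from collections import Counter
from math import log2

def split_entropy(hypotheses, attribute):
    """Entropy of the hypothesis set when grouped by one attribute's values."""
    counts = Counter(h[attribute] for h in hypotheses)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def plan_question(hypotheses, attributes):
    best = max(attributes, key=lambda a: split_entropy(hypotheses, a))
    options = sorted({h[best] for h in hypotheses})
    return f"To narrow this down: is the issue {', '.join(options[:-1])}, or {options[-1]}?"

hypotheses = [
    {"deployment": "AnyConnect", "symptom": "certificate error"},
    {"deployment": "AnyConnect", "symptom": "intermittent drops"},
    {"deployment": "site-to-site", "symptom": "slow throughput"},
]
print(plan_question(hypotheses, attributes=["deployment", "symptom"]))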

4. The Agent as the Expert (High-Fidelity Resolution):
Once the user responds, the agent integrates this new, crucial context. With the ambiguity resolved, it can now perform a high-fidelity retrieval from a precisely defined subgraph, delivering an accurate, actionable answer that solves the user's true underlying problem.
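
To illustrate what retrieval from a precisely defined subgraph can mean in code, the sketch below simply filters the chunk index down to the nodes resolved by the dialogue before ranking. The metadata layout and scoring are assumptions, not the production AID retriever.

# Hypothetical scoped retrieval: restrict ranking to chunks tagged with nodes from
# the resolved subgraph; the metadata layout is assumed, not the AID schema.
import numpy as np

def scoped_retrieve(query_vec, chunks, allowed_nodes, top_k=3):
    """chunks: dicts with 'vec' (embedding), 'node' (graph tag), and 'text'."""
    candidates = [c for c in chunks if c["node"] in allowed_nodes]
    ranked = sorted(candidates, key=lambda c: float(c["vec"] @ query_vec), reverse=True)
    return [c["text"] for c in ranked[:top_k]]

rng = np.random.default_rng(1)
index = [
    {"vec": rng.normal(size=8), "node": "anyconnect", "text": "AnyConnect certificate renewal steps"},
    {"vec": rng.normal(size=8), "node": "site-to-site", "text": "IKEv2 rekey troubleshooting"},
    {"vec": rng.normal(size=8), "node": "anyconnect", "text": "AnyConnect DTLS drop fixes"},
]
print(scoped_retrieve(rng.normal(size=8), index, allowed_nodes={"anyconnect"}))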

Join us to learn how to build agents that don't just answer, but actively listen, reason, and understand. This is the blueprint for the next generation of truly helpful, conversational AI.

What Attendees Will Learn:
a) Design Pattern: How to architect an Agentic "clarification loop" to move beyond brittle, single-shot RAG systems.
b) Cognitive Architecture: How to leverage a graph as a "world model" for an agent to reason about information and generate hypotheses.
c) Practical Implementation: Techniques for an agent to autonomously decide when to ask a question and how to formulate it with purpose.
d) Human-Agent Interaction: Principles for designing effective, low-friction dialogues that guide users without causing fatigue.
e) From Prototype to Production: Lessons learned from deploying this Agentic system at enterprise scale at Cisco to solve daily customer issues, complete with a look at our evaluation metrics.

Vivek Singh

Cisco Systems, Sr. Technical Leader, Customer Experience

Pune, India
