Ragasudha R
SDE 3 at Gartner
Bengaluru, India
I am a Senior Software Engineer with over six years of experience building and operating distributed systems on Java and AWS. I work extensively with Spring Boot, Kubernetes, and AWS services and have hands-on experience designing systems that run at scale in production environments.
I have been exploring the intersection of backend engineering and AI, with recent work covering Spring AI 1.0, including building a semantic cache and persistent memory on Amazon Bedrock. My technical writing on Medium and LinkedIn covers practical engineering topics, from JVM internals to cloud infrastructure to applied AI.
Kaaval, my latest project, applies static analysis, vector similarity search against known threat corpora, and LLM-powered deep analysis to audit agent configurations and MCP server definitions before they reach production.
LinkedIn - https://www.linkedin.com/in/ragasudha-r-dev/
Medium - https://medium.com/@ragasudha99.rr
Blast Radius: How We Built a Scanner to Quantify AI Agent Risk
Most teams shipping AI agents spend a lot of time making them capable and very little time asking what happens if they are manipulated. A LangChain agent with a delete tool and no input constraints, a CrewAI agent that can delegate to sub-agents without limits, an MCP server that can read your database and post to Slack: these are not hypothetical risks. They are the default configurations most developers ship without realising it.
We built Kaaval, an open-source AI agent security scanner that takes an agent definition or MCP server config and tells you exactly what it can do, what it should not be able to do, and what the damage looks like if something goes wrong. We call that number the Blast Radius, a score from 0 to 10 that reflects the real-world impact of a compromised agent.
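The idea of collapsing many findings into one 0-to-10 score can be sketched roughly as follows. The weighting formula, function name, and input shape here are illustrative assumptions for this abstract, not Kaaval's actual scoring algorithm:

```python
# Illustrative blast-radius-style aggregation: the worst finding dominates,
# and additional findings push the score toward 10 with diminishing effect.
# This formula is an assumption for illustration, not Kaaval's algorithm.
def blast_radius(severities: list[float]) -> float:
    """Combine per-finding severities (each 0-10) into one 0-10 score."""
    if not severities:
        return 0.0
    worst = max(severities)
    # Remaining findings contribute a fraction of the headroom above the worst.
    rest = sorted(severities, reverse=True)[1:]
    bump = sum(s / 10 * (10 - worst) * 0.2 for s in rest)
    return round(min(10.0, worst + bump), 1)

print(blast_radius([8.0, 6.0, 3.0]))  # one critical finding keeps the score high
```

A worst-finding-dominated formula matches the intuition in the abstract: a single tool that can delete production data is a large blast radius on its own, regardless of how benign the other tools look.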
Kaaval works in three detection layers. The first layer runs deterministic rule checks mapped to the OWASP LLM Top 10, covering things like missing system prompt boundaries, tools with delete or execute access and no scope constraints, and sensitive credentials exposed in agent context. The second layer runs vector similarity search against a corpus of known attack patterns from MITRE ATLAS and Garak, so every finding is grounded in a documented threat rather than an LLM guess. The third layer runs an optional deep analysis that catches semantic risks the rules miss, such as two tools that look safe individually but together create a data exfiltration path.
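A first-layer deterministic check of the kind described above might look like the sketch below. The agent-config shape, rule identifier, and OWASP mapping are illustrative assumptions, not Kaaval's actual rule set:

```python
# Minimal sketch of a layer-1 rule: flag tools whose names suggest
# delete/execute capability but that declare no scope constraint.
# The config schema and rule ID are assumptions for illustration.
DANGEROUS_VERBS = {"delete", "execute", "drop", "shell"}

def check_unscoped_dangerous_tools(agent_config: dict) -> list[dict]:
    """Return a finding for each dangerous tool that has no scope."""
    findings = []
    for tool in agent_config.get("tools", []):
        name = tool.get("name", "").lower()
        if any(verb in name for verb in DANGEROUS_VERBS) and not tool.get("scope"):
            findings.append({
                "rule": "unscoped-dangerous-tool",  # mapping to OWASP LLM Top 10 would go here
                "tool": tool.get("name"),
                "severity": "critical",
                "fix": "Add an explicit scope constraint to this tool.",
            })
    return findings

agent = {"tools": [{"name": "delete_records"}, {"name": "search", "scope": "read-only"}]}
print(check_unscoped_dangerous_tools(agent))  # flags only delete_records
```

Because checks like this are pure functions of the config, they are fast and fully deterministic, which is what makes them suitable as the first gate before the more expensive vector-search and LLM layers.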
Kaaval has two modules, built and presented together in this session. The first module audits agent definitions across LangChain, CrewAI, and AutoGen. The second module audits MCP server configurations for trust boundary violations, supply chain risks, and dangerous capability combinations across servers. Both modules produce findings with severity scores, remediation guidance, and a combined blast radius when run together.
In the demo, we scan a realistic agent configuration that resembles what most teams are shipping today: an agent with several tools, a minimal system prompt, and no explicit permission boundaries. We run Kaaval against it and walk through the findings layer by layer.
Kaaval runs as a CLI tool and is designed to integrate into deployment pipelines, with a single flag that fails the build on critical findings.
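In a CI pipeline, that integration could look something like the fragment below. The `kaaval scan` invocation and the `--fail-on` flag name are placeholders assumed for illustration, not Kaaval's documented interface:

```yaml
# Illustrative GitHub Actions step; command and flag names are assumptions.
- name: Scan agent configs with Kaaval
  run: kaaval scan agents/ --fail-on critical
```

The point of the single-flag design is that a critical finding turns into a nonzero exit code, so the pipeline blocks the deploy without any extra glue.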