Chandan Vedavyas
IT Engineer, Carnegie Mellon University
San Francisco, California, United States
Chandan Vedavyas is an IT Engineer and security researcher whose work focuses on the intersection of AI governance and insider threat modeling. He holds an MS in Information Security Policy and Management from Carnegie Mellon University's Heinz College, where his graduate research led to the development of the MA-STITM framework.
The framework maps 25 years of CERT Insider Threat Center research to autonomous AI systems and is validated through Chimera, a multi-agent simulation environment he built to test attack scenarios under controlled conditions. The research produced two findings that surprised even him: a single injected instruction autonomously compounding into $250,000 in fraudulent transactions, and a hardened system prompt failing completely against a sophisticated injection that a deterministic authorization gate stopped cold.
Chandan is also an active community organizer, speaker, and volunteer at events like BSides, Null, and academic security forums.
Topics
Loop, Rinse, Repeat: The Self-Amplifying Agent Attack Prompt Hardening Won't Stop
One poisoned calendar invite. One agent that read it, believed it, and acted on it. And then acted on it again. And again. Five unauthorized wire transfers totaling $250,000 in under ten seconds, with nothing in the architecture to make it stop. Not a theoretical scenario. A reproducible result from a controlled multi-agent simulation, and evidence of a threat category that does not yet have a name: the Cascading Amplifier.
A human insider is naturally bounded by fatigue, hesitation, and the friction of organizational life. An autonomous agent reasoning loop has none of that. Give it a consequential tool, an iterative loop, and no per-action authorization gate, and a single injected instruction just keeps executing. Loop, rinse, repeat, until the damage is done.
This talk presents findings from seven controlled simulation runs across two enterprise attack scenarios: a Confused Deputy financial fraud chain and an IP exfiltration scenario targeting a patent-pending algorithm worth over $2.5M. The central finding challenges a widely held assumption: system prompt hardening achieved 100% attack prevention against moderate-sophistication injections and exactly 0% against high-sophistication injections that used authority language and compliance-urgency framing, both tested in the same environment. Probabilistic defenses have a ceiling, and motivated adversaries will find it.
The only control that held across every configuration was a deterministic, infrastructure-level authorization gate called an Intent Capsule. It enforces permitted tool scope regardless of what the LLM decides. When the hardened prompt failed, the capsule blocked the attack before any data moved.
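The Intent Capsule described in the talk is infrastructure-level and its implementation is not reproduced here. Purely to illustrate the pattern, the following is a minimal sketch of a deterministic per-action authorization gate; every name in it (IntentCapsule, run_agent_action, the tool names) is hypothetical, not the talk's code:

```python
# Hypothetical sketch of a deterministic per-action authorization gate.
# All names here are illustrative, not the talk's implementation.

class IntentCapsule:
    """Fixed allow-list of tools plus a per-tool call budget,
    granted before the agent runs and never renegotiated by the LLM."""

    def __init__(self, allowed_tools, max_calls_per_tool=1):
        self.allowed = set(allowed_tools)
        self.budget = {tool: max_calls_per_tool for tool in allowed_tools}

    def authorize(self, tool_name):
        # Enforced regardless of what the model "decides" to do.
        if tool_name not in self.allowed:
            return False
        if self.budget[tool_name] <= 0:
            return False  # the amplification loop stops here
        self.budget[tool_name] -= 1
        return True


def run_agent_action(capsule, tool_name, action):
    """Gate every consequential tool call through the capsule."""
    if not capsule.authorize(tool_name):
        return f"BLOCKED: {tool_name} outside permitted scope"
    return action()


capsule = IntentCapsule(allowed_tools={"read_calendar"}, max_calls_per_tool=5)
print(run_agent_action(capsule, "read_calendar", lambda: "ok"))
print(run_agent_action(capsule, "wire_transfer", lambda: "sent $50,000"))
# The second call is blocked no matter what instruction the agent ingested.
```

The key design property is that the check sits outside the reasoning loop: a poisoned calendar invite can change what the agent wants to do, but not what the gate permits.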
Attendees will leave with a clear mental model for why prompt-layer defenses cannot provide enterprise security guarantees and a concrete architectural pattern they can actually deploy.
Original research. Live simulation demo. Strictly vendor-agnostic.
Seeing Through the Cipher: AI-Powered Threat Detection in Encrypted Traffic
As encryption becomes ubiquitous across networks and applications, defenders face a growing paradox: while encryption protects user privacy, it also blinds traditional security tools. Intrusion detection systems, data loss prevention tools, and firewalls now struggle to detect threats hiding inside TLS 1.3 sessions and VPN tunnels.
This talk presents a novel AI-driven approach to regain visibility without decrypting traffic. By analyzing encrypted metadata such as packet sizes, flow directionality, timing, and TLS fingerprints, we trained machine learning models to detect threats, including command-and-control channels and data exfiltration attempts, without compromising encryption. Attendees will gain insights into feature engineering, model selection, deployment strategies, and real-world applications of this technique.
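The feature classes the abstract names (packet sizes, directionality, timing) can all be computed without touching payload bytes. As a minimal stdlib-only sketch under assumed inputs (the flow-record shape and every field name here are hypothetical, not the talk's pipeline):

```python
# Illustrative sketch: statistical features from encrypted-flow metadata.
# A flow is modeled as a list of (timestamp, size, direction) tuples,
# direction +1 for client->server, -1 for server->client.
# Field names and thresholds are hypothetical.
import statistics


def flow_features(packets):
    sizes = [size for _, size, _ in packets]
    gaps = [t2 - t1 for (t1, _, _), (t2, _, _) in zip(packets, packets[1:])]
    out_bytes = sum(size for _, size, d in packets if d > 0)
    in_bytes = sum(size for _, size, d in packets if d < 0)
    return {
        "pkt_count": len(packets),
        "mean_size": statistics.mean(sizes),
        "stdev_size": statistics.pstdev(sizes),
        "mean_gap": statistics.mean(gaps) if gaps else 0.0,
        # Beacon-like C2 traffic tends to show very regular inter-packet gaps.
        "gap_jitter": statistics.pstdev(gaps) if gaps else 0.0,
        # Exfiltration skews the byte balance heavily outbound.
        "out_ratio": out_bytes / (out_bytes + in_bytes),
    }


flow = [(0.0, 120, 1), (1.0, 118, 1), (2.0, 121, 1), (2.1, 60, -1)]
print(flow_features(flow))
```

Vectors like these, not decrypted content, are what a downstream classifier would consume; the encryption itself is never broken.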
From Exploit to Alert: Reverse-Engineering Database Privilege Escalations Using Suricata
Database privilege escalation is a subtle yet powerful exploit vector that can grant attackers full control over enterprise systems. This talk demonstrates how to reverse-engineer database exploitation behavior and convert it into actionable detections.
Using a controlled PostgreSQL lab, we’ll trace a privilege escalation attempt (a CREATE ROLE ... SUPERUSER grant) from network capture to IDS alert, showing how attackers embed privilege-manipulating SQL within otherwise normal traffic. Attendees will see how Suricata rules are engineered, tested, and tuned to detect these threats with minimal false positives.
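The talk's tuned rules aren't reproduced here, but a detection of this shape typically looks like the following sketch; the $DB_SERVERS variable, sid, and message text are placeholders, not the session's actual rule:

```
# Illustrative Suricata rule sketch (placeholder sid and variables).
# Flags SUPERUSER grants sent to PostgreSQL servers on the wire.
alert tcp any any -> $DB_SERVERS 5432 (msg:"Possible PostgreSQL privilege escalation - SUPERUSER grant"; \
    flow:to_server,established; \
    content:"SUPERUSER"; nocase; \
    pcre:"/(CREATE|ALTER)\s+(ROLE|USER)[^;]{0,200}SUPERUSER/i"; \
    classtype:attempted-admin; sid:1000001; rev:1;)
```

In practice the content match acts as a fast pre-filter and the pcre confirms the statement structure, which is where most of the false-positive tuning happens.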
All demonstrations are performed in a secure, isolated environment using sanitized PCAPs and synthetic payloads. The focus is purely defensive, understanding attacker logic to strengthen detection. The session concludes with remediation techniques for hardening database access, monitoring privilege changes, and integrating rule-based detections into SOC workflows.
Attendees leave with a repeatable workflow for transforming complex database exploits into reliable, production-grade IDS alerts.