Scan, Learn, Prevent: Cross-Agent Security Policy Generation from Automated Vulnerability Detection

AI coding agents repeat the same security mistakes across sessions — XSS, hardcoded secrets, hallucinated packages — because each session starts fresh. Instruction files like CLAUDE.md and .cursorrules were meant to fix this, but today they're written by hand and never updated.

We built a closed-loop system where eight security scanners detect vulnerabilities in AI-generated code, classify them by CWE, and convert them into deterministic template-driven rules. No LLM in the rule generation step — rules are auditable and reproducible. They get injected into CLAUDE.md, .cursorrules, and copilot-instructions.md through a draft PR so humans stay in the loop.
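The deterministic rule-generation step might be sketched roughly as below. Every name here (`TEMPLATES`, `Finding`, `render_rule`) is illustrative, not the project's actual API; the point is only that a fixed template table keyed by CWE, with no LLM involved, makes the same finding always produce the same rule text.

```python
from dataclasses import dataclass

# Fixed rule templates keyed by CWE id. Because rendering is a pure
# table lookup, rules are reproducible and auditable by diffing the table.
TEMPLATES = {
    "CWE-79": "Never interpolate untrusted input into HTML; use a templating engine with auto-escaping.",
    "CWE-798": "Never hardcode credentials; load secrets from environment variables or a secrets manager.",
}

@dataclass(frozen=True)
class Finding:
    cwe: str       # e.g. "CWE-798", assigned by the classification step
    file: str      # where the scanner flagged the issue
    scanner: str   # which of the eight scanners reported it

def render_rule(finding: Finding) -> str:
    """Deterministically convert a classified finding into an instruction-file rule line."""
    body = TEMPLATES[finding.cwe]
    return f"- [{finding.cwe}] {body} (first seen: {finding.file}, via {finding.scanner})"

# The same rendered line can be appended to CLAUDE.md, .cursorrules, and
# copilot-instructions.md, then proposed in a draft PR for human review.
rule = render_rule(Finding("CWE-798", "app/config.py", "gitleaks"))
```

Running the same finding through `render_rule` twice yields byte-identical output, which is what makes the draft-PR diff reviewable by a human.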

A security lesson learned by one agent now transfers to every agent on the project — Claude, Cursor, Copilot, Goose — with zero fine-tuning.

This talk covers:
- The scanner-to-instruction-file pipeline
- Why deterministic templates beat LLM-generated rules
- Cross-agent knowledge transfer from a single vulnerability
- Which rule phrasings actually change model behavior and which get ignored
- Instruction poisoning as a new attack surface and how to mitigate it

Adhithya Rajasekaran

AI Product Manager | AI Ethics & Governance | Cybersecurity | github.com/adhit-r

Chennai, India
