SecurePrompt: Building a Pre-Flight Security Layer for Agentic AI

As enterprises deploy agentic AI, everyone's building capabilities—but who's building the guardrails? When an autonomous agent generates a prompt containing AWS credentials, or a compromised data source injects malicious instructions, what stops that payload from reaching the LLM?
This session reveals how I built SecurePrompt, a pre-flight security scanner that intercepts prompts before they're sent to any AI model—addressing the critical blind spot at the boundary of autonomous AI systems.

You'll learn:

1. Real-world scenarios where credentials leak, prompt injections propagate, and PII compliance fails
2. Why I chose Go and rules-based detection for sub-10ms latency
3. Parallel scanning architecture for secrets, injection attacks, PII, and data exfiltration
4. Policy-as-code profiles for enterprise risk tolerances
5. HMAC-signed audit logs with causal traceability
6. Evolving from deterministic rules to LLM-powered semantic analysis

Leave with practical patterns for implementing security at the prompt boundary—the layer nobody else is building.
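One of those patterns, HMAC-signed audit logs with causal traceability, can be sketched as a hash-chained log: each entry carries the previous entry's signature, so altering any record invalidates everything after it. This is a simplified illustration under assumed field names (`AuditEntry`, `PromptID`, `Verdict`), not the tool's real schema.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// AuditEntry records one scan decision. PrevSig links each entry
// to its predecessor, forming a tamper-evident chain.
type AuditEntry struct {
	Timestamp string `json:"ts"`
	PromptID  string `json:"prompt_id"`
	Verdict   string `json:"verdict"`
	PrevSig   string `json:"prev_sig"`
	Sig       string `json:"sig,omitempty"`
}

// sign computes an HMAC-SHA256 over every field except the
// signature itself.
func sign(key []byte, e AuditEntry) string {
	e.Sig = ""
	payload, _ := json.Marshal(e)
	mac := hmac.New(sha256.New, key)
	mac.Write(payload)
	return hex.EncodeToString(mac.Sum(nil))
}

// Append links and signs a new entry onto the log.
func Append(key []byte, entries []AuditEntry, e AuditEntry) []AuditEntry {
	if n := len(entries); n > 0 {
		e.PrevSig = entries[n-1].Sig
	}
	e.Sig = sign(key, e)
	return append(entries, e)
}

// Verify walks the chain, recomputing every signature and
// checking each back-link.
func Verify(key []byte, entries []AuditEntry) bool {
	prev := ""
	for _, e := range entries {
		if e.PrevSig != prev || !hmac.Equal([]byte(e.Sig), []byte(sign(key, e))) {
			return false
		}
		prev = e.Sig
	}
	return true
}

func main() {
	key := []byte("demo-key") // in practice, a managed secret
	var entries []AuditEntry
	entries = Append(key, entries, AuditEntry{Timestamp: "t1", PromptID: "p-1", Verdict: "blocked: secret detected"})
	entries = Append(key, entries, AuditEntry{Timestamp: "t2", PromptID: "p-2", Verdict: "allowed"})
	fmt.Println("chain valid:", Verify(key, entries))
	entries[0].Verdict = "allowed" // tamper with history
	fmt.Println("after tamper:", Verify(key, entries))
}
```

The back-link is what gives causal traceability: a verifier can prove not just that each record is authentic, but that the recorded sequence of agent decisions is the sequence that actually occurred.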

Ravi Sastry Kadali

Engineering Leader | Go Ecosystem Contributor | Security Tooling Author

Mountain View, California, United States
