Session

SecurePrompt: Building a Pre-Flight Security Layer for Agentic AI

As enterprises race to deploy agentic AI, everyone's building capabilities—but who's building the guardrails? When an autonomous agent generates a prompt containing your AWS credentials, or a compromised data source injects malicious instructions, what stops that payload from reaching the LLM?
This session reveals how I built SecurePrompt, a pre-flight security scanner that intercepts prompts before they're sent to any AI model. Born from a simple realization—that the agentic AI ecosystem has a critical blind spot at the boundary - SecurePrompt now provides the missing security infrastructure for autonomous AI systems.

What you'll learn:
1. The Hidden Risk: Real-world scenarios where credentials leak, prompt injections propagate, and PII compliance fails—all in a single API call
2. Architecture Decisions: Why I chose Go, rules-based detection for v1, and how to achieve sub-10ms latency without sacrificing coverage
3. Detection Engine Deep Dive: Parallel scanning for secrets, prompt injection, PII, risky operations, and data exfiltration attempts
4. Policy-as-Code: Implementing strict, moderate, and permissive profiles for different enterprise risk tolerances
5. Audit by Default: HMAC-signed decision logs with causal traceability for compliance teams
6. Evolution Path: How to layer LLM-powered semantic analysis on top of deterministic rules for catching sophisticated attacks

Whether you're building AI agents, deploying enterprise copilots, or architecting AI platforms, you'll leave with practical patterns for implementing security at the prompt boundary - the layer nobody else is building.

Ravi Sastry Kadali

Engineering Leader | Go Ecosystem Contributor | Security Tooling Author

Mountain View, California, United States

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top