Ravi Sastry Kadali
Engineering Leader | Go Ecosystem Contributor | Security Tooling Author
Mountain View, California, United States
Ravi Sastry Kadali is an engineering leader with more than two decades of experience spanning defense, enterprise, and hyperscale systems. He built platform integrity systems at Meta protecting 3B+ users, engineered Windows platform releases at Microsoft, designed intrusion detection systems at India's Defence Research and Development Organisation (DRDO), hardened API security infrastructure at Neustar Security Services, and delivered edge-security and networking solutions at Volterra (acquired by F5) and X Corp.
An active open-source contributor to Kubernetes, etcd, gosec, and gqlgen, Ravi Sastry authored the go-safeinput and cryptoguard-go security libraries for the Go ecosystem. He is a co-editor of technical papers on security engineering and a recurring invited panelist at cybersecurity conferences. Ravi Sastry holds awards for Outstanding Achievement in Cybersecurity and brings both research depth and practitioner credibility to every stage he takes.
Detect, Trace, Fix: Bringing AI-Powered Taint Analysis to gosec
Go 1.26 changed how we think about static analysis. The rewritten go fix command proved that Go's analysis.Analyzer framework is not just for finding problems but for fixing them too, with dozens of modernizers that safely rewrite code at scale. Meanwhile, gopls brings these same analyzers into your editor in real time. The Go ecosystem is clearly moving toward a world where tools don't just detect issues but remediate them.
Security tooling hasn't kept up. gosec, the most widely adopted Go security scanner with over 8,700 GitHub stars, has long relied on pattern-based rules that inspect the AST for suspicious function calls. These rules are effective at catching common mistakes like hardcoded credentials or weak crypto, but they have a fundamental limitation: they cannot track data flow. When a user's HTTP query parameter passes through three helper functions and a struct field before landing inside a SQL query, pattern matching either misses it entirely or flags every call to db.Query regardless of whether the data is actually tainted. The result is either false negatives that leave real vulnerabilities undetected, or false positives that train developers to ignore security warnings.
This talk presents a taint analysis engine contributed to gosec that closes this gap. Built on golang.org/x/tools/go/ssa for Static Single Assignment representation and golang.org/x/tools/go/callgraph/cha for Class Hierarchy Analysis call graphs, the engine tracks data flow from untrusted sources (http.Request, os.Args, os.Getenv) through the entire program to dangerous sinks (db.Query, exec.Command, os.Open, http.Get, fmt.Fprintf on a ResponseWriter, log.Printf). Because SSA assigns every variable exactly once, every value can be traced deterministically to its origin. CHA resolves interface method calls conservatively, so taint is tracked even when data flows through interface boundaries.
The engine adds six new detection rules to gosec: G701 for SQL injection (CWE-89), G702 for command injection (CWE-78), G703 for path traversal (CWE-22), G704 for SSRF (CWE-918), G705 for XSS (CWE-79), and G706 for log injection (CWE-117). Each rule is purely configuration-driven, defined as a Go struct of Sources and Sinks with no engine changes required. A notable design feature is CheckArgs, which specifies exactly which function arguments to inspect. For db.Query(queryString, param1, param2), only the query string (argument index 1) is checked because prepared statement parameters are inherently safe. This eliminates an entire category of false positives that plague pattern-based scanners.
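A plausible shape for such a configuration-driven rule might look like the sketch below; the struct and field names are illustrative, not gosec's actual types:

```go
package main

import "fmt"

// Source and Sink are hypothetical configuration shapes for a taint rule.
type Source struct{ Pkg, Name string }

type Sink struct {
	Pkg, Method string
	// CheckArgs lists the argument indices to inspect. In SSA call
	// arguments the receiver occupies index 0, so the query string of
	// db.Query is index 1; prepared-statement parameters (2+) are skipped.
	CheckArgs []int
}

// Rule ties sources to sinks; the engine needs no changes to add one.
type Rule struct {
	ID, CWE string
	Sources []Source
	Sinks   []Sink
}

var g701 = Rule{
	ID: "G701", CWE: "CWE-89",
	Sources: []Source{{"net/http", "Request"}, {"os", "Args"}, {"os", "Getenv"}},
	Sinks: []Sink{{
		Pkg: "database/sql", Method: "Query",
		CheckArgs: []int{1}, // only the query string is taint-checked
	}},
}

func main() {
	fmt.Println(g701.ID, g701.CWE, g701.Sinks[0].CheckArgs)
}
```

Because a rule is pure data, adding G707 for a new sink class would mean appending one more struct literal, not touching the engine.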
But detection alone is only half the story, and this is where the talk looks forward. Go 1.26's go fix demonstrated that the analysis.Analyzer framework supports not just diagnostics but SuggestedFixes, machine-applicable patches that tools can apply automatically. The taint engine's architecture is designed with this trajectory in mind. Because each finding carries the full taint flow path from source to sink, including every intermediate step, it provides exactly the context needed for automated remediation. For deterministic fixes like replacing string concatenation with parameterized queries, the engine can propose concrete SuggestedFixes in the go fix style. For more complex remediations, like input validation, encoding strategies, or architectural refactoring, the structured taint flow data becomes a rich prompt context for LLM-assisted remediation. Imagine running gosec and getting not just "G701: SQL injection at line 42" but a generated fix that rewrites your string concatenation to a prepared statement, or a context-aware suggestion to add path sanitization, powered by an LLM that understands exactly how the tainted data reached the sink. The talk will demonstrate this pluggable remediation architecture and discuss the design considerations for making LLM-assisted fixes reliable enough to trust.
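As a sketch of the remediation mechanics, the following applies a SuggestedFix-style edit, modeled loosely on the framework's text-edit idea but using plain byte offsets, that rewrites a concatenated query into a parameterized one:

```go
package main

import "fmt"

// TextEdit is a minimal stand-in for an analysis-framework text edit:
// a byte range in the source plus replacement text. The real framework
// positions edits with token.Pos, but the mechanics are the same.
type TextEdit struct {
	Start, End int
	NewText    string
}

// apply splices the replacement into the source string.
func apply(src string, e TextEdit) string {
	return src[:e.Start] + e.NewText + src[e.End:]
}

func main() {
	src := `db.Query("SELECT * FROM users WHERE name = '" + name + "'")`
	// Deterministic fix: swap the concatenated arguments for a
	// parameterized query with the tainted value as a bind argument.
	fix := TextEdit{
		Start:   len("db.Query("),
		End:     len(src) - 1, // keep the closing parenthesis
		NewText: `"SELECT * FROM users WHERE name = ?", name`,
	}
	fmt.Println(apply(src, fix))
}
```

The taint flow path recorded with each finding supplies the offsets and the variable name, which is what makes a fix like this machine-applicable rather than a lint hint.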
The entire implementation uses zero external dependencies beyond golang.org/x/tools, integrates with the standard analysis.Analyzer framework, and runs alongside gosec's existing 30+ rules. It produces diagnostics in the standard format consumed by go vet, gopls, and golangci-lint.
Attendees will leave with three things: a deep understanding of how SSA and CHA make data flow analysis tractable in Go, practical knowledge of building analyzers that not only detect but suggest fixes using the same framework powering go fix, and a concrete look at where Go security tooling is headed: toward a world where vulnerabilities are not just flagged but understood, traced, and resolved before they ever reach production.
SecurePrompt: Building a Pre-Flight Security Layer for Agentic AI
As enterprises race to deploy agentic AI, everyone's building capabilities—but who's building the guardrails? When an autonomous agent generates a prompt containing your AWS credentials, or a compromised data source injects malicious instructions, what stops that payload from reaching the LLM?
This session reveals how I built SecurePrompt, a pre-flight security scanner that intercepts prompts before they're sent to any AI model. Born from a simple realization that the agentic AI ecosystem has a critical blind spot at the model boundary, SecurePrompt now provides the missing security infrastructure for autonomous AI systems.
What you'll learn:
1. The Hidden Risk: Real-world scenarios where credentials leak, prompt injections propagate, and PII compliance fails—all in a single API call
2. Architecture Decisions: Why I chose Go, rules-based detection for v1, and how to achieve sub-10ms latency without sacrificing coverage
3. Detection Engine Deep Dive: Parallel scanning for secrets, prompt injection, PII, risky operations, and data exfiltration attempts
4. Policy-as-Code: Implementing strict, moderate, and permissive profiles for different enterprise risk tolerances
5. Audit by Default: HMAC-signed decision logs with causal traceability for compliance teams
6. Evolution Path: How to layer LLM-powered semantic analysis on top of deterministic rules for catching sophisticated attacks
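The audit-by-default idea in point 5 can be sketched with the standard library alone; the Decision fields below are illustrative, not SecurePrompt's actual schema:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// Decision is a hypothetical audit-log entry. PrevSig chains each entry to
// the previous one, giving the causal traceability compliance teams need.
type Decision struct {
	PromptHash string `json:"prompt_hash"`
	Verdict    string `json:"verdict"` // "allow" or "block"
	Rule       string `json:"rule"`
	PrevSig    string `json:"prev_sig"`
}

// sign computes an HMAC-SHA256 over the canonical JSON encoding.
func sign(key []byte, d Decision) string {
	payload, _ := json.Marshal(d)
	mac := hmac.New(sha256.New, key)
	mac.Write(payload)
	return hex.EncodeToString(mac.Sum(nil))
}

// verify recomputes the signature and compares in constant time.
func verify(key []byte, d Decision, sig string) bool {
	return hmac.Equal([]byte(sign(key, d)), []byte(sig))
}

func main() {
	key := []byte("audit-signing-key")
	d := Decision{PromptHash: "sha256:ab12cd34", Verdict: "block", Rule: "secrets/aws-key"}
	sig := sign(key, d)
	fmt.Println(verify(key, d, sig)) // true: entry is intact
	d.Verdict = "allow"              // tampering breaks the signature
	fmt.Println(verify(key, d, sig)) // false
}
```

Signing the decision rather than just logging it means a tampered "block became allow" entry is detectable after the fact, without trusting the log store.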
Whether you're building AI agents, deploying enterprise copilots, or architecting AI platforms, you'll leave with practical patterns for implementing security at the prompt boundary: the layer nobody else is building.
Unified Defense Against Injection Vulnerabilities
Injection attacks dominate the MITRE 2025 CWE Top 25—with XSS ranked #1, SQL injection #2, and OS command injection holding the highest count of CISA Known Exploited Vulnerabilities. Yet developers still juggle fragmented tools: one library for HTML sanitization, another for SQL, manual validation for paths and shell arguments. This context fragmentation creates gaps attackers exploit.
This session introduces go-safeinput, an open-source Go library providing unified, context-aware sanitization across all major injection categories through a single API. You will learn:
1. Why existing solutions fall short: Context fragmentation, lack of defense-in-depth, and supply-chain risks from excessive dependencies
2. The unified approach: One API that automatically applies the right sanitization for HTML, SQL identifiers, file paths, URL components, shell arguments, and deserialization
3. Real-world implementation: Live demonstration securing a vulnerable application against XSS, SQL injection, path traversal, command injection, and unsafe deserialization
4. Compliance alignment: How unified input validation supports NIST SP 800-53, CMMC, and federal security requirements
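The single-API idea can be sketched as context-aware dispatch over standard-library escapers; this is an illustrative mock-up, not go-safeinput's actual surface:

```go
package main

import (
	"fmt"
	"html"
	"net/url"
	"path/filepath"
	"strings"
)

// Context selects which sanitization is applied, so callers state intent
// once instead of juggling a different library per injection class.
type Context int

const (
	HTML Context = iota
	URLComponent
	FilePath
	ShellArg
)

// Sanitize applies the escaping appropriate to the output context.
func Sanitize(input string, ctx Context) string {
	switch ctx {
	case HTML:
		return html.EscapeString(input)
	case URLComponent:
		return url.QueryEscape(input)
	case FilePath:
		// Resolve traversal sequences, then confine to a relative path.
		return filepath.Clean("/" + input)[1:]
	case ShellArg:
		// POSIX-style single quoting, escaping embedded quotes.
		return "'" + strings.ReplaceAll(input, "'", `'\''`) + "'"
	}
	return input
}

func main() {
	fmt.Println(Sanitize(`<script>alert(1)</script>`, HTML))
	fmt.Println(Sanitize(`../../etc/passwd`, FilePath))
}
```

The defense-in-depth payoff is that the context enum, not each call site, encodes which escaping rules apply, so adding a new context upgrades every caller at once.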
Whether you're building enterprise applications, federal systems, or open-source projects, you'll leave with practical techniques to reduce your injection vulnerability surface using defense-in-depth strategies that don't sacrifice developer productivity.
GraphQLShield: CWE-Aware Defense in Depth for GraphQL APIs in Go
GraphQL APIs face a unique threat landscape: deeply nested queries cause resource exhaustion, introspection exposes entire schemas, and mutation variables carry injection payloads past traditional WAFs. Yet most Go-based GraphQL servers ship with zero security middleware between HTTP and resolver execution.
I introduce GraphQLShield, an open-source Go middleware bringing defense-in-depth to GraphQL APIs through three layers: (1) static schema analysis that detects cyclic types, missing depth limits, and sensitive field exposure before deployment; (2) runtime CWE-aware input sanitization that catches SQL injection, XSS, command injection, path traversal, and NoSQL injection in GraphQL variables, bridging go-safeinput's MITRE CWE Top 25 coverage to GraphQL; and (3) resolver code auditing, inspired by gosec and cryptoguard-go, that flags insecure crypto, hardcoded secrets, and missing auth checks.
A quick demo shows GraphQLShield intercepting 7 attack vectors against a gqlgen API, from SQL injection in mutation variables to depth-based DoS, while legitimate requests pass cleanly. Attendees leave with a zero-dependency Go library covering 14 CWE vulnerability classes across static and runtime analysis.