
Governing the AI-Graph: Observability and Security for LLM-Generated Queries

When we give AI agents access to our GraphQL APIs, we introduce a new class of distributed system challenges: non-deterministic queries, potential N+1 floods, and authorization bypasses. How do we ensure our "AI-generated" queries are safe and efficient?

This talk bridges the gap between AI Quality Engineering and GraphQL governance. Building on my work designing evaluation frameworks for multi-agent systems, I will present strategies for monitoring and governing agents that interact with GraphQL endpoints. We will discuss how to implement "Semantic Rate Limiting" (analyzing query complexity vs. user intent) and how to evaluate the accuracy of agent-generated GraphQL syntax using "LLM-as-a-Judge" frameworks.
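As a rough illustration of the "Semantic Rate Limiting" idea, the sketch below estimates a query's structural complexity (nesting depth and field count) and checks it against a per-intent budget. All names here (`within_budget`, `INTENT_BUDGETS`, the brace-counting heuristic) are hypothetical illustrations, not the approach presented in the talk; a production system would use a real GraphQL parser and cost model.

```python
# Toy "semantic rate limiting" sketch: reject queries whose structural
# complexity exceeds what the user's stated intent should require.
# All identifiers and budgets here are illustrative assumptions.

def estimate_complexity(query: str) -> dict:
    """Crude estimate from raw text: max selection-set nesting depth
    and a rough field count, based on brace nesting."""
    depth = max_depth = fields = 0
    for token in query.replace("{", " { ").replace("}", " } ").split():
        if token == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif token == "}":
            depth -= 1
        else:
            fields += 1
    return {"max_depth": max_depth, "fields": fields}

# Assumed per-intent budgets: a single-user lookup should never need
# a deeply nested traversal of the graph.
INTENT_BUDGETS = {
    "lookup_single_user": {"max_depth": 3, "fields": 10},
}

def within_budget(query: str, intent: str) -> bool:
    est = estimate_complexity(query)
    budget = INTENT_BUDGETS[intent]
    return (est["max_depth"] <= budget["max_depth"]
            and est["fields"] <= budget["fields"])

shallow = "query { user(id: 1) { name email } }"
deep = ("query { users { friends { friends { friends "
        "{ posts { comments { author { name } } } } } } } }")
print(within_budget(shallow, "lookup_single_user"))  # True
print(within_budget(deep, "lookup_single_user"))     # False: N+1-style fan-out
```

The point of the heuristic is the comparison itself: the same deep query might be legitimate for a "social graph export" intent but is suspicious when the user only asked to look up one profile.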

We will also cover the "Human-in-the-Loop" aspect: using GraphQL subscriptions to stream agent reasoning to human supervisors for real-time validation before a mutation is executed. Attendees will learn how to open their graphs to AI without compromising security, performance, or reliability.
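The human-in-the-loop gate can be sketched as an approval flow: the agent streams its reasoning over a channel (standing in for a GraphQL subscription) and blocks until a supervisor decision arrives before executing any mutation. This is a minimal asyncio sketch under assumed names (`supervisor`, `agent`, the toy auto-approve policy), not the architecture from the talk.

```python
import asyncio

# Hypothetical human-in-the-loop gate: agent reasoning is streamed to a
# supervisor channel, and the proposed mutation runs only once approved.

async def supervisor(channel: asyncio.Queue, decision: asyncio.Future) -> None:
    while True:
        event = await channel.get()
        print(f"[supervisor] {event}")
        if event["type"] == "mutation_proposed":
            # A real system would await human input here; this toy policy
            # auto-approves only low-risk profile updates.
            decision.set_result(event["mutation"].startswith("updateProfile"))
            return

async def agent(channel: asyncio.Queue, decision: asyncio.Future) -> str:
    # Stream reasoning first, then propose the mutation and wait.
    await channel.put({"type": "reasoning",
                       "text": "User asked to change their display name."})
    await channel.put({"type": "mutation_proposed",
                       "mutation": 'updateProfile(name: "Ada")'})
    approved = await decision
    return "executed" if approved else "blocked"

async def main() -> str:
    channel: asyncio.Queue = asyncio.Queue()
    decision: asyncio.Future = asyncio.get_running_loop().create_future()
    result, _ = await asyncio.gather(agent(channel, decision),
                                     supervisor(channel, decision))
    return result

outcome = asyncio.run(main())
print(outcome)
```

The key design point is that the mutation is never fired optimistically: the agent's side-effecting step is structurally gated on the supervisor's decision, mirroring how a subscription-fed review UI would sit between agent and API.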


Rajeshwari Sah, Machine Learning Engineer at Apple

Sunnyvale, California, United States


