How to Prevent AI Agents from Accessing Unauthorized Data

It’s time for Day 2 Ops in the world of AI.

Building enterprise-ready AI poses challenges around data security, scalability, and integration, especially in compliance-regulated industries. Once AI Agents are in the loop, guardrails are needed to safeguard data while still optimizing query response quality and efficiency.

This session will cover how modern permissions systems can ensure AI Agents have access only to authorized data. The talk will look at why the Google Zanzibar model of authorization, which uses Relationship-Based Access Control (ReBAC), is well suited for fine-grained authorization at scale. It covers the nuts and bolts of how this works, as well as how to apply it to AI Agents, RAG pipelines, and similar LLM implementations.
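
To make the model concrete, here is a minimal sketch of the ReBAC idea using the authzed Python client against a local SpiceDB instance. The endpoint, token, two-type schema, and object IDs are illustrative assumptions for this abstract, not the talk's demo code.

    # Minimal ReBAC sketch: access is modeled as relationships between
    # objects, and permissions are computed by walking those relationships.
    from authzed.api.v1 import (
        CheckPermissionRequest,
        CheckPermissionResponse,
        Client,
        ObjectReference,
        Relationship,
        RelationshipUpdate,
        SubjectReference,
        WriteRelationshipsRequest,
        WriteSchemaRequest,
    )
    from grpcutil import insecure_bearer_token_credentials

    # Illustrative two-type schema in SpiceDB's schema language.
    SCHEMA = """
    definition user {}

    definition document {
        relation viewer: user
        permission view = viewer
    }
    """

    # Assumed local SpiceDB endpoint and preshared key.
    client = Client("localhost:50051", insecure_bearer_token_credentials("sometoken"))
    client.WriteSchema(WriteSchemaRequest(schema=SCHEMA))

    # Record "user:alice is a viewer of document:quarterly_report".
    client.WriteRelationships(
        WriteRelationshipsRequest(
            updates=[
                RelationshipUpdate(
                    operation=RelationshipUpdate.Operation.OPERATION_TOUCH,
                    relationship=Relationship(
                        resource=ObjectReference(
                            object_type="document", object_id="quarterly_report"
                        ),
                        relation="viewer",
                        subject=SubjectReference(
                            object=ObjectReference(object_type="user", object_id="alice")
                        ),
                    ),
                )
            ]
        )
    )

    # The question every agent request should ask before touching data:
    resp = client.CheckPermission(
        CheckPermissionRequest(
            resource=ObjectReference(
                object_type="document", object_id="quarterly_report"
            ),
            permission="view",
            subject=SubjectReference(
                object=ObjectReference(object_type="user", object_id="alice")
            ),
        )
    )
    assert resp.permissionship == CheckPermissionResponse.PERMISSIONSHIP_HAS_PERMISSION

Because permissions are derived from relationships rather than static role lists, the same check stays fast and correct as documents, teams, and sharing rules multiply.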

The talk will also include a practical demo implementing fine-grained authorization for AI Agents + RAG using open source tools such as PGVector, LangChain, OpenAI, and SpiceDB.
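
As a taste of how those pieces fit together, the sketch below pre-filters a RAG query: SpiceDB's LookupResources API streams back every document the user can view, and that ID set constrains the vector search so unauthorized chunks never reach the LLM. The endpoint, token, and "doc_id" metadata field are assumptions, and the LangChain retriever wiring is shown only as a commented example.

    # Pre-filter sketch: resolve the user's viewable documents first,
    # then restrict retrieval to that set.
    from authzed.api.v1 import (
        Client,
        LookupResourcesRequest,
        ObjectReference,
        SubjectReference,
    )
    from grpcutil import insecure_bearer_token_credentials

    client = Client("localhost:50051", insecure_bearer_token_credentials("sometoken"))

    def viewable_document_ids(user_id: str) -> set[str]:
        """Stream back the IDs of every document this user may view."""
        request = LookupResourcesRequest(
            resource_object_type="document",
            permission="view",
            subject=SubjectReference(
                object=ObjectReference(object_type="user", object_id=user_id)
            ),
        )
        return {resp.resource_object_id for resp in client.LookupResources(request)}

    # Hypothetical wiring into a LangChain PGVector retriever, assuming
    # each stored chunk carries a "doc_id" metadata field:
    # retriever = vector_store.as_retriever(
    #     search_kwargs={
    #         "filter": {"doc_id": {"$in": list(viewable_document_ids("alice"))}}
    #     }
    # )

Filtering before retrieval, rather than after generation, means the model can never leak a document it was never shown.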

We're working with OpenAI to secure 37 billion documents for 5 million users in ChatGPT connectors; this session is based on lessons learned from that work.

I've presented this and related topics at DevOpsDays, KCDs, and DevConfs.

The target audience is software architects, developers, and team leads.

Sohan Maheshwar

Developer Advocate Lead at AuthZed

Amsterdam, The Netherlands
