Building Identity into LLM Workflows with Verifiable Credentials
LLMs power everything from chatbots to autonomous agents, but their non-deterministic nature exposes you to spoofing, privilege escalation, and compliance pitfalls. In this session, we'll draw on the social engineering experiments I undertook while building conversational AI systems to show how attackers can bypass security guardrails. We'll explore:
* Real-world injection attacks and the vulnerabilities that make them possible
* Emerging identity patterns, from W3C Verifiable Credentials to on-chain verification
* Methods to protect against prompt manipulation, and the audit-log elements that are often overlooked
* A roadmap to LLM-aware identity ecosystems, including policy-as-code enforcement and federated governance models
You'll discover practical approaches to securing LLM workflows today while preparing for tomorrow's decentralised identity architectures. Through demos and case studies, you'll leave with actionable patterns for building trust into AI systems, and insight into where the ecosystem is heading.
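As a taste of the credential-gating pattern the session covers, here's a minimal sketch of allowing an agent's tool call only when a credential verifies and carries the required claim. Everything here is illustrative: the shared-secret HMAC proof, the `gate_tool_call` helper, and the payload shape are simplifications, since real W3C Verifiable Credentials use asymmetric proofs (e.g. Ed25519) and DID resolution.

```python
import hashlib
import hmac
import json

# Illustrative only: a real issuer signs with an asymmetric key pair,
# and verifiers resolve the issuer's DID to fetch the public key.
ISSUER_SECRET = b"demo-issuer-key"

def sign_credential(claims: dict) -> dict:
    """Issue a credential: claims plus a proof over their canonical form."""
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(credential: dict) -> bool:
    """Recompute the proof; tampered claims no longer match it."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

def gate_tool_call(credential: dict, required_role: str) -> bool:
    """Permit an agent tool call only for a verified credential with the role."""
    return (verify_credential(credential)
            and credential["claims"].get("role") == required_role)

vc = sign_credential({"sub": "agent-42", "role": "billing-admin"})
print(gate_tool_call(vc, "billing-admin"))   # True: proof and role both check out

# An agent that edits its own claims fails verification: the proof
# was computed over the original claims, not the escalated ones.
tampered = {"claims": {**vc["claims"], "role": "superadmin"}, "proof": vc["proof"]}
print(gate_tool_call(tampered, "superadmin"))  # False
```

The key design point previews the session's theme: the authorization decision is anchored in a cryptographic proof issued outside the LLM, so no amount of prompt manipulation can alter what the credential attests.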

Ben Dechrai
Disaster Postponement Officer
Kansas City, Missouri, United States