Session
Safeguarding LLM-Powered Apps with Guardrails
LLM-powered apps and agentic flows behave like a new entry point into your system: similar to an API, but far less predictable.
The best place to manage that risk is before a request ever reaches the LLM, using an “incoming guardrails” layer.
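As a rough illustration of where that layer sits, here is a minimal sketch of a gate function that runs before the model call. The `run_guardrails` and `call_llm` helpers are hypothetical placeholders, not tied to any particular framework.

```python
# Minimal sketch of an incoming guardrails layer placed in front of the model.
# `run_guardrails` and `call_llm` are hypothetical helpers used for illustration.
from dataclasses import dataclass


@dataclass
class GuardrailResult:
    allowed: bool       # may the request proceed to the LLM?
    reason: str = ""    # why it was stopped, if not allowed


def run_guardrails(user_input: str) -> GuardrailResult:
    # Real checks (prompt injection, toxicity, scope, ...) would go here.
    return GuardrailResult(allowed=True)


def call_llm(user_input: str) -> str:
    # Placeholder for the actual model or agent call.
    return "model response"


def handle_request(user_input: str) -> str:
    verdict = run_guardrails(user_input)
    if not verdict.allowed:
        # Blocked requests never reach the model.
        return f"Sorry, I can't help with that. ({verdict.reason})"
    return call_llm(user_input)
```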
This implementation-minded talk starts with a quick refresher on how LLMs work, what “agents” are, and what people mean by agentic flows.
We will then shift into a practical, software-engineer-friendly approach for adding incoming guardrails before requests hit a model.
We will cover common checks such as prompt injection, malicious intent, toxicity, and out-of-scope requests, as well as how to recognize higher-risk cases like potential self-harm or medical emergencies and route them to an escalation path.
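To make those categories concrete, the sketch below shows one way the checks might be organized. The keyword-based detector stubs are purely illustrative stand-ins for real classifiers, moderation endpoints, or rule engines.

```python
# Sketch of the check categories and the escalation route; the detector
# functions are naive keyword stubs, shown only to make the flow concrete.
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()
    BLOCK = auto()
    ESCALATE = auto()  # hand off to a human / crisis / emergency path


def is_high_risk(text: str) -> bool:
    # Placeholder for self-harm / medical-emergency detection.
    return any(k in text.lower() for k in ("hurt myself", "overdose", "chest pain"))


def is_prompt_injection(text: str) -> bool:
    # Placeholder for prompt-injection detection.
    return "ignore previous instructions" in text.lower()


def is_toxic_or_malicious(text: str) -> bool:
    # Placeholder for toxicity / malicious-intent detection.
    return False


def is_out_of_scope(text: str) -> bool:
    # Placeholder for topic / scope classification.
    return False


def classify_request(text: str) -> Verdict:
    # High-risk cases are checked first so they are escalated, not just blocked.
    if is_high_risk(text):
        return Verdict.ESCALATE
    if is_prompt_injection(text) or is_toxic_or_malicious(text) or is_out_of_scope(text):
        return Verdict.BLOCK
    return Verdict.ALLOW
```

The design point the sketch tries to capture is that high-risk requests are routed to an escalation path rather than simply refused.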
You’ll leave this talk with a simple reference architecture you can adapt to your own stack.
Tags:
AI Engineering, LLMs, Guardrails, Agentic, Agents, Caching, Architecture, Observability, LangGraph
Eyal Wirsansky
AI practitioner • speaker • mentor
Jacksonville, Florida, United States