Session

Safeguarding LLM-Powered Apps with Incoming Guardrails

LLM-powered applications and agentic workflows introduce a new kind of entry point into your system: one that behaves somewhat like an API, but is far less predictable. That shift is part of what makes AI engineering different from traditional software engineering. In addition to building application logic, we now need to account for model behavior, misuse, and risk before a request ever reaches the model.

This implementation-minded talk presents a practical approach to adding an incoming guardrails layer in front of LLM-powered applications. After a brief refresher on how LLMs work, what agents are, and how AI engineering extends familiar software engineering practices, we will walk through a reference architecture for screening requests before they hit the model.

We will cover common checks such as prompt injection, malicious intent, toxicity, and out-of-scope requests, as well as higher-risk situations like potential self-harm or medical emergencies that may require escalation rather than generation. The focus throughout is on architecture and implementation: how to think about guardrails as a real software component rather than a vague safety add-on.

Attendees will leave with a practical mental model and a simple reference design they can adapt to their own stack.

Who should attend:
Software engineers, architects, tech leads, and AI engineers who are building or planning LLM-powered applications, copilots, or agentic workflows and want a practical approach to safety and control.

Tags:
LLM, Generative AI, AI Engineering, Software Architecture, AI Safety, Guardrails, Agentic Workflows, Application Security

Eyal Wirsansky

Staff AI Engineer | Adjunct AI Professor | Author of ‘Hands-On Genetic Algorithms with Python’ | JUG and GDG Community Leader

Jacksonville, Florida, United States

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top