Session
Building Trustworthy and Reliable LLM Applications
We are entering the next wave of LLM understanding, so we want to make Natural Language Processing applications more reliable, robust, and trustworthy. This talk introduces the best practices and techniques for securing LLMs through guardrails that monitor and filter inputs and outputs.
We’ll examine how to effectively prevent malicious intent in inputs and protect against the exposure of sensitive data in outputs, using practical examples with Hugging Face and OpenAI models.
Through live demonstrations, attendees will learn how tools like LangChain4J can work together to detect and mitigate risks such as prompt injection attacks, unauthorized data access, poisoning RAG, and sanitization.
The session will conclude with a discussion on the evolving landscape of LLM security, the importance of vigilant permissions, and continuous monitoring to safeguard user and organizational data. This talk is ideal for developers, security practitioners, and anyone responsible for deploying LLMs into real-world applications.
Markus Eisele
Cloud Native and AI in a hybrid world. Loves Java and programming.
Munich, Germany
Links
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top