
Ensuring quality and safety in LLM-based applications

In this talk we will explore a variety of open-source tools designed to improve the quality and security of applications based on Large Language Models (LLMs). Using simple examples, we will learn how these tools can be used to address common challenges such as hallucinations (inaccurate or made-up answers), jailbreaks (attempts to evade safety restrictions), toxicity, and data leaks. Additionally, we will discuss strategies to evaluate and mitigate these risks, ensuring that LLM applications are both robust and reliable.

Patty O'Callaghan

Technical Director @ Charles River Laboratories | Google Developer Advisory Board | Google Developer Expert in AI/ML | Google Cloud Champion Innovator

Glasgow, United Kingdom


