
How to hack an LLM

In this interactive workshop we will learn how to change the answers of an LLM using a technique called RAG (retrieval-augmented generation). After a brief introduction to open-source LLMs and how prompt engineering works, we will see how the output of a model can be changed using the LangChain Python library.
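The core idea behind changing a model's answer with RAG can be sketched in a few lines of plain Python: retrieve the document most relevant to the question, then splice it into the prompt so the model answers from that context rather than from its training data. The keyword-overlap retriever and function names below are illustrative stand-ins, not LangChain APIs (the workshop would use LangChain's own retriever and chain components):

```python
import re

def tokens(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents, k=1):
    """Return the k documents sharing the most words with the query.
    A toy stand-in for a vector-store retriever."""
    q = tokens(query)
    ranked = sorted(documents, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Prepend the retrieved context, steering the LLM's answer."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge base: whatever we put here "hacks" the answer.
docs = [
    "The workshop takes place in Malaga, Spain.",
    "LangChain is a Python library for building LLM applications.",
]
print(build_prompt("What is LangChain?", docs))
```

Because the retrieved text is injected ahead of the question, editing the document store is enough to change what the model says, which is exactly the lever the workshop explores.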

A. Rosa Castillo

Data Scientist / ML Engineer

Málaga, Spain


