Session
Gaslighting your chatbot to steal your secrets
As company's convert to LLM based chatbots, a new attack vector arises. Manipulating the model instructions and command obfuscation can reveal sensitive data that model has been granted access to. Break the LLM by gaslighting it into thinking it is still doing the right thing. Lets dive into how this is accomplished using various methods.
Ricky Ray
Sr Security Analyst and Security Researcher
Goodyear, Arizona, United States
Links
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top