Session

AI Risks - there are monsters in your LLMs

Your model is not your friend. It produces human-like responses while remaining fundamentally alien, and attackers are getting very good at exploiting that gap.
Every LLM you ship opens new attack surfaces. Prompt injection. Data exfiltration. Crescendo attacks that walk the model, step by step, into territory it would have refused to enter if asked directly. The list keeps getting longer, and most of it looks nothing like the threats your security stack was designed to handle.
We'll go through the attack patterns showing up in the wild, the kinds of defenses that tend to hold up and the ones that backfire, and how to red-team your own application before someone else does.
Then we'll close on the long game—what attackers are actually working toward, and the worst-case scenarios worth taking seriously now.

Andreas Erben

CTO for Applied AI and Metaverse at daenet

Ponte Vedra Beach, Florida, United States
