Session
The dark side of AI: Security risks developers ignore until It’s too late
AI is moving fast—faster than most engineering teams can keep up with. While everyone talks about productivity boosts and clever prompts, very few talk about the dangerous security gaps quietly introduced into modern applications. This session uncovers the hidden risks developers overlook when integrating AI tools, LLMs, and AI-driven automation into their workflows.
We’ll cut through the hype and focus on real, practical vulnerabilities: prompt injection, model hijacking, insecure API usage, supply-chain risks in AI tooling, leaking secrets through logs, hallucinated dependencies, and dangerous assumptions developers make when trusting AI outputs. You’ll see concrete examples, demos, and step-by-step mitigation strategies you can apply the same day.
If your team uses AI—or plans to—you can’t afford to ignore this session.
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top