Session
Governing AI: The "Original Sin"
Why are Large Language Models (LLMs) so inherently fragile? The answer is not found in a passing bug, but in the "original sin" of their architecture: the lack of ontological separation between instructions and data. While traditional computing separates code from input, LLMs merge everything into a "Single Tape" flow, where any data can potentially become a system instruction.
This session explores the symptoms of this architectural confusion using the OWASP Top 10 for LLM framework. We will analyze critical risks such as Prompt Injection (LLM01), Sensitive Information Disclosure (LLM02), and Excessive Agency (LLM06).
We will demonstrate how to build "artificial barriers" to simulate the missing separation by leveraging native services from leading cloud providers.
Maurizio Argoneto
DevOps Culture, IT Project Manager (Scrum Master | Senior Software Engineer)
Pignola, Italy
Links
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top