Future open source LLM kill chains
Many mission-critical software systems depend on a single, seemingly insignificant open-source library. As the xz Utils backdoor showed, such libraries are prime targets for sophisticated adversaries with time and resources, and a successful infiltration that goes undetected can have catastrophic consequences.
A parallel scenario could unfold in the open-source AI ecosystem, where a select few of the most powerful large language models (LLMs) are reused everywhere: fine-tuned for tasks ranging from casual conversation to code generation, or compressed to run on personal computers. These derivatives are then redistributed, sometimes by untrusted entities and individuals.
In this talk, Andrew Martin and Vicente Herrera will explain how an advanced adversary could leverage access to these LLMs. They will walk through full kill chains that exploit the open nature of the ecosystem or gaps in MLOps infrastructure and model repositories, and show how these attacks can introduce vulnerabilities into your software.
Finally, they will show both new and existing security practices that should be in place to prevent and mitigate these risks.
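One baseline practice in this space is verifying the integrity of model artifacts before loading them, since fine-tuned or quantised weights often pass through several untrusted hands. Below is a minimal sketch of such a check in Python, assuming the publisher's SHA-256 digest is obtained out of band (e.g. from a signed release note); the function names and the error handling are illustrative, not taken from the talk.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large model weights fit in constant memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_sha256: str) -> None:
    """Refuse to load a model artifact whose digest differs from the published one."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"checksum mismatch for {path}: expected {expected_sha256}, got {actual}"
        )
```

A hash check only detects tampering after publication; it does not protect against a malicious model published with a valid digest, which is where the broader supply-chain practices discussed in the talk come in.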
Vicente Herrera
Principal Consultant at Control Plane
Alcalá de Guadaira, Spain