Session

Using GenAI on and inside your code, what could possibly go wrong?

With GenAI, developers are shifting from traditional code reuse to generating new code snippets by prompting GenAI, significantly changing the way software is developed.
Several academic studies show that AI-generated code from LLMs trained on vulnerable OSS implementations is itself likely to be vulnerable. Another study showed that developers tend to trust GenAI-created code more than human-written code. Combined with the higher code velocity, this will result in more vulnerabilities in the generated output.
Using an AI system that runs an LLM also carries additional risks related to jailbreaks, data poisoning, malicious agents, recursive learning, and IP infringement.
In this presentation, we will examine real-world data from several academic studies to understand how GenAI is changing software security, the risks it introduces, and possible strategies to address these emerging issues.

Niels Tanis

Sr. Principal Security Researcher at Veracode | Microsoft MVP | International Speaker

Amersfoort, The Netherlands
