
Security and auditing tools in Large Language Models (LLMs)

Large Language Models (LLMs) are a subcategory of deep learning models based on neural networks and natural language processing (NLP). Security and auditing are critical concerns when building applications on top of large language models such as GPT (Generative Pre-trained Transformer).

This talk analyzes the security of these language models from the developer's point of view, covering the main vulnerabilities that can arise when building and deploying them. The main points to be discussed include:

- Introduction to LLMs
- Introduction to the OWASP LLM Top 10
- Auditing tools for applications that use LLM models
- Use case with the TextAttack tool (https://textattack.readthedocs.io/en/master/)
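To give a flavor of the last point: TextAttack automates adversarial attacks on NLP models, perturbing input text until a model's prediction flips. The toy sketch below illustrates that core idea only; the keyword classifier and the character-swap attack are deliberately trivial stand-ins written for this example, not TextAttack's actual API or recipes.

```python
# Toy illustration of the idea behind adversarial text attacks that tools
# like TextAttack automate: perturb the input slightly until the
# classifier's decision flips. Both functions are hypothetical stand-ins.

def toy_sentiment(text):
    """Trivial keyword-based classifier standing in for a real model."""
    negative_words = {"bad", "awful", "terrible"}
    words = text.lower().split()
    return "negative" if any(w in negative_words for w in words) else "positive"

def character_swap_attack(text, classify):
    """Swap adjacent characters inside each word until the label flips."""
    original = classify(text)
    words = text.split()
    for i, word in enumerate(words):
        for j in range(len(word) - 1):
            chars = list(word)
            chars[j], chars[j + 1] = chars[j + 1], chars[j]
            candidate = " ".join(words[:i] + ["".join(chars)] + words[i + 1:])
            if classify(candidate) != original:
                return candidate  # adversarial example found
    return None  # no single swap changed the prediction

adv = character_swap_attack("this movie was awful", toy_sentiment)
print(adv)  # a minimally perturbed input that the toy model misclassifies
```

In a real audit, TextAttack replaces both pieces: the classifier is a wrapped production model, and the perturbation is a published attack recipe (e.g. word substitutions constrained to preserve meaning) rather than a blind character swap.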

Jose Manuel Ortega

Software engineer & Security Researcher


