Securing the AI Frontier: Challenges and Strategies in LLM and AI Security

As Artificial Intelligence (AI) and Large Language Models (LLMs) become integral to modern business operations, they introduce unprecedented security challenges that traditional cybersecurity measures struggle to address. This presentation explores the unique security landscape of AI systems, with a particular focus on LLMs, and examines the vulnerabilities and attack vectors that security professionals must be prepared to confront.
We will delve into critical issues such as data poisoning, model inversion, prompt injection, and the privacy implications of AI systems. Special attention will be given to LLM-specific challenges, including hallucination, unauthorized data disclosure, and the potential for misuse in creating malicious content.
Through a combination of technical insights and strategic considerations, attendees will gain a comprehensive understanding of how to secure AI assets within their organizations. We'll examine defensive strategies, best practices for robust ML model development, and the importance of ethical considerations in AI deployment.
Real-world case studies will illuminate the practical applications of these concepts, while a look at emerging trends will prepare attendees for the future of AI security. This session aims to equip information security professionals, AI/ML engineers, and decision-makers with the knowledge and tools necessary to navigate the complex intersection of AI and cybersecurity effectively.

Vaibhav Malik, Partner Solutions Architect, Cloudflare

St. Louis, Missouri, United States
