Securing Generative AI with Google Cloud Model Armor
Large Language Models unlock powerful new capabilities, but they also introduce risks: prompt injection, data leakage, unsafe outputs, and compliance challenges. In this session, we’ll explore Google Cloud’s Model Armor, a service that proactively screens LLM prompts and responses to enforce safety, privacy, and compliance policies. I’ll share real-world use cases, architecture patterns, and best practices for integrating Model Armor into enterprise AI systems, whether they run on GCP, across multiple clouds, or in hybrid environments. Attendees will leave with practical strategies to safeguard their AI applications and build trustworthy, scalable solutions.
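As a rough illustration of the integration pattern the abstract describes, prompts are screened before they reach the model and responses are screened before they reach the user. The sketch below is hypothetical: `screen_text` is a stand-in for a real Model Armor sanitize call (the service exposes template-based sanitization for user prompts and model responses), and the toy marker check is purely illustrative.

```python
def screen_text(text: str) -> bool:
    """Stand-in for a Model Armor sanitize call.

    Returns True if the text passes policy. A real implementation
    would call the configured Model Armor template's sanitize
    endpoint and inspect the filter results in the response.
    """
    blocked_markers = ["ignore previous instructions"]  # toy injection check
    return not any(marker in text.lower() for marker in blocked_markers)


def guarded_generate(prompt: str, llm_call) -> str:
    """Wrap an LLM call with pre- and post-screening."""
    # Screen the incoming prompt before it reaches the model.
    if not screen_text(prompt):
        return "[prompt blocked by policy]"
    response = llm_call(prompt)
    # Screen the model's output before returning it to the user.
    if not screen_text(response):
        return "[response blocked by policy]"
    return response
```

The same wrapper shape applies regardless of where the LLM itself runs, which is what makes the pattern usable in multi-cloud and hybrid deployments.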
Vinoth Arumugam
Principal Machine Learning Engineer - Qodea
London, United Kingdom