Session
Securing the Generative AI Lifecycle: Evaluation as a Key Defense
As generative AI applications become integral to decision-making, customer experiences, and innovation, their security and reliability are paramount. This session will explore how systematic evaluation is crucial for identifying and mitigating risks throughout the AI lifecycle—from base model selection to pre-production testing and ongoing post-deployment monitoring. Learn how to implement robust evaluation strategies to prevent vulnerabilities such as misinformation, bias, and malicious outputs, ensuring your AI applications are not only effective but also secure and trustworthy in real-world environments.
- Introduction to security concerns of the Generative AI
- Base model selection and risk assessment
- Handling edge cases, adversarial inputs, and ensuring ethical output
- Continuous assessment to address emerging risks and maintain quality
- Best practices for iterative evaluation and ensuring secure AI development

Maxim Salnikov
Developer Productivity Lead at Microsoft, Tech Communities Lead, Keynote Speaker
Oslo, Norway
Links
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top