Responsible AI in practice
Predictive and Generative AI model quality scanning tools and patterns for implementing responsible AI practices.
The presentation offers a comprehensive exploration of tools and strategies for monitoring and improving the integrity of generative AI and ML systems in production addressing responsible ai challenges such as:
Hallucination and Misinformation
Harmful Content Generation
Prompt Injection
Information disclosure
Robustness
Stereotypes & Discrimination
Performance Bias
Unrobustness
Overconfidence
Underconfidence
Unethical behaviour
Data Leakage
Stochasticity
Spurious correlation
Some of the topics covered include:
Current state of the art in automated model quality scanning.
Best practices for implementing responsible AI in Predictive ML and Generative AI pipelines.
How to mitigate potential security vulnerabilities introduced by ML models.
How to automate testing of generative ai solutions including domain specific tests for RAG.
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top