Responsible AI and Content Safety
With the rapid development of AI, it has become a big part of our lives. AI started with statistics, so it was focused mostly on numbers: precision, recall, and similar metrics. But at some point it became obvious that many AI models were missing something very important: responsibility.
Content Safety got everyone's attention with the increasing popularity of Generative AI. How do you protect both inputs and outputs? What can be done beyond prompt engineering? Content Safety detects harmful user-generated and AI-generated content in applications and services, and can help you put guardrails around your Generative AI models.
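The guardrail pattern described above (screening both the prompt and the model's output) can be sketched as follows. This is a minimal, hypothetical illustration: the real Azure AI Content Safety service scores text across harm categories via its API, while the keyword screen below is a stand-in so the control flow is runnable offline. The term list, function names, and refusal messages are all assumptions for illustration.

```python
# Minimal sketch of the input/output guardrail pattern.
# BLOCKED_TERMS is a hypothetical stand-in for a real harm classifier.
BLOCKED_TERMS = {"violence", "self-harm"}


def is_safe(text: str) -> bool:
    """Return True if no blocked term appears in the text."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def guarded_generate(prompt: str, model) -> str:
    """Screen the prompt before the model sees it,
    and the completion before the user sees it."""
    if not is_safe(prompt):           # input guardrail
        return "Sorry, I can't help with that request."
    completion = model(prompt)
    if not is_safe(completion):       # output guardrail
        return "The response was withheld by content safety."
    return completion
```

For example, `guarded_generate("describe violence", my_model)` would refuse before the model is ever called, while a harmful completion from the model would be withheld on the way out. A production version would replace `is_safe` with calls to a moderation service on both sides of the model.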
Let's discuss the principles of Responsible AI and some tools that support AI developers. In this session we will cover AI and ML in general, data preparation, and Responsible AI and Content Safety principles and tools. At the end, I will show a demo using some of the tools supported by Azure AI.
Veronika Kolesnikova
Senior Software Engineer, Microsoft MVP (AI)
Boston, Massachusetts, United States