Session

Open Source Tools to Empower Ethical and Robust AI Systems

In this talk, Vicente Herrera will show us some open source tools for evaluating and securing AI models that are essential to building responsible AI systems. He will present an ontology explaining where each tool can assist in these tasks.

He will show tools like Garak, that helps identifying undesirable behaviors. LLM Guard and LLM Canary, providing detection and prevention of adversarial attacks and unintended data disclosures. Promptfoo, that optimizes prompt engineering and testing, leading to more reliable and consistent AI outputs.
For adversarial robustness, Counterfit, the Adversarial Robustness Toolkit, and BrokenHill provide solutions to assess AI models against malicious threats. Regarding fairness and compliance, AI Fairness 360 and Audit AI are important to understand how models can be just and accountable.

The final goal is being able to choose a model not only because how big ir is or good a knowledge evaluation score it has, but also about how robust and fair it is.

Vicente Herrera

Principal Consultant at Control Plane

Alcalá de Guadaira, Spain

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top