Enhancing AI Transparency and Trust with Model Cards
Current Model Card implementations are inconsistent, non-standardized, and rarely machine-actionable. They often live in READMEs or ad-hoc templates, with no integration into model registries, supply chain tools, or security pipelines. Critical metadata such as evaluations, SBOMs, and vulnerability attestations is fragmented or missing entirely, which hinders discoverability, auditing, and responsible AI deployment.

Our work introduces a structured Model Card specification, generator libraries, and a discovery service that attaches and indexes Model Cards in OCI registries using referrers. We integrate metadata from across the AI supply chain (evaluations, fairness benchmarks, security scans, and training pipelines) to build a complete, verifiable profile of each model. These Model Cards are queryable through a local search service that supports automated validation and compliance checks. The result is a portable, transparent model identity that regulators, developers, and downstream consumers can trust, enabling responsible AI at scale across tools, registries, and teams.
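As an illustration of the attach-and-discover workflow the abstract describes, the sketch below builds a small, machine-readable Model Card and attaches it to a model image in an OCI registry as a referrer artifact using the ORAS CLI. The JSON field names, artifact type, and registry reference are assumptions chosen for illustration; they are not the specification or tooling presented in the session.

```python
"""
Minimal sketch: build a structured Model Card and attach it to a model
image in an OCI registry as a referrer artifact.

Assumptions (not from the session abstract): the JSON field names, the
artifact type string, and the registry reference are illustrative only.
Requires the ORAS CLI (https://oras.land) on PATH.
"""
import json
import subprocess
import tempfile
from pathlib import Path

MODEL_REF = "registry.example.com/models/sentiment:1.2.0"    # hypothetical model reference
ARTIFACT_TYPE = "application/vnd.example.modelcard.v1+json"  # hypothetical artifact type

# A structured, machine-actionable Model Card: evaluations, SBOM and
# security-scan references live alongside basic model identity.
model_card = {
    "name": "sentiment-classifier",
    "version": "1.2.0",
    "training": {"pipeline": "github.com/example/train-pipeline", "commit": "abc123"},
    "evaluations": [
        {"benchmark": "sst2", "metric": "accuracy", "value": 0.91},
        {"benchmark": "fairness-suite", "metric": "demographic_parity_gap", "value": 0.03},
    ],
    "sbom": {"format": "spdx-json", "digest": "sha256:<sbom-digest>"},
    "security": {"scanner": "trivy", "critical_vulnerabilities": 0},
}


def attach_model_card(card: dict, model_ref: str) -> None:
    """Write the card to disk and attach it to the model image as an OCI referrer."""
    with tempfile.TemporaryDirectory() as tmp:
        card_path = Path(tmp) / "modelcard.json"
        card_path.write_text(json.dumps(card, indent=2))
        # `oras attach` pushes the file as a referrer of the model's manifest,
        # so registries and discovery services can index it without changing the model.
        subprocess.run(
            ["oras", "attach", "--artifact-type", ARTIFACT_TYPE,
             model_ref, "modelcard.json:application/json"],
            check=True,
            cwd=tmp,
        )


if __name__ == "__main__":
    attach_model_card(model_card, MODEL_REF)
    # `oras discover` lists referrers of the model image, which is how a
    # discovery service could find and index the attached Model Card.
    subprocess.run(["oras", "discover", MODEL_REF], check=True)
```

Because referrers are linked to the digest of the model's manifest, the metadata travels with the model and can be discovered and indexed without mutating the image itself.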