Engineering for the EU AI Act: What Should PyTorch Expose Natively?

The EU AI Act introduces concrete technical obligations for ML systems: traceability, risk management, monitoring, and auditability. Today, most of this burden is handled outside the ML framework—through ad-hoc tooling, documentation, or bespoke infrastructure.

This Birds of a Feather session is an open, practitioner-driven discussion on a forward-looking question:
What primitives, hooks, or abstractions should PyTorch expose natively to better support AI accountability and regulatory readiness?

Topics for discussion may include:
- Native support for provenance, lineage, and training/inference traces
- Standardized hooks for fairness, robustness, and drift monitoring
- Model and dataset metadata as first-class PyTorch objects
- Privacy-preserving logging and zero-retention execution patterns
- Gaps between regulatory requirements (e.g., the EU AI Act) and current ML frameworks

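As a concrete starting point for the "standardized hooks" topic, PyTorch's existing `register_forward_hook` API already allows per-layer observation of activations. The sketch below shows how a drift monitor might be attached today; the `ActivationStats` class and its record layout are hypothetical illustrations, not part of any PyTorch or proposed API.

```python
import torch
import torch.nn as nn

class ActivationStats:
    """Hypothetical drift-monitoring hook: records summary statistics
    of a layer's outputs on every forward pass."""

    def __init__(self):
        self.records = []

    def __call__(self, module, inputs, output):
        # Summarize the output tensor rather than retaining it,
        # in the spirit of privacy-preserving / low-retention logging.
        self.records.append({
            "module": type(module).__name__,
            "mean": output.detach().mean().item(),
            "std": output.detach().std().item(),
        })

monitor = ActivationStats()
layer = nn.Linear(4, 2)
handle = layer.register_forward_hook(monitor)

_ = layer(torch.randn(8, 4))
_ = layer(torch.randn(8, 4))
handle.remove()  # detach the hook once monitoring ends

print(len(monitor.records))  # one record per forward pass
```

A native, standardized version of such a hook (with agreed-upon statistics, schemas, and retention semantics) is one shape an upstream contribution could take; the discussion is about whether and how to define that contract.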
The goal is not consensus, but shared understanding and concrete ideas that can inform community practices, tooling, and potential upstream contributions. This BoF is intended for PyTorch users, maintainers, researchers, and infra engineers interested in the future of responsible, production-grade ML.

Roy Saurabh

Founder & CEO, AffectLog - AI governance & compliance engineering
