Speaker

Roy Saurabh

Founder & CEO, AffectLog - AI governance & compliance engineering

Roy Saurabh is Founder & CEO of AffectLog and an applied researcher in AI governance, privacy engineering, and accountable ML systems. He has worked with UNESCO, the European Commission, and national governments on operationalising trustworthy AI, and leads EU-funded projects focused on embedding compliance, auditability, and privacy directly into ML pipelines.

From Gradients to Tokens: Standardising Observability Primitives for PyTorch, LLMs, & Agent Systems

Modern AI systems expose rich internal signals (gradients, activations, logits), but PyTorch doesn't provide structured interfaces to surface them. Teams repeatedly reimplement ad hoc instrumentation for stability, fairness, provenance, and logging.

We present a concrete, code-level analysis of missing observability primitives in PyTorch, derived from building a compliance and monitoring system using only core APIs (hooks, autograd, param.grad). We identify four recurring gaps: lack of training provenance, absence of dataset-level semantics (e.g. sensitive attributes), no structured hook outputs, and no visibility into optimiser-level dynamics.
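A minimal sketch of the kind of ad hoc instrumentation the abstract describes, built from core APIs alone. All names here (`activation_stats`, `grad_report`) are illustrative, not an existing PyTorch interface: forward hooks hand back raw tensors, so any structure (summary statistics, provenance, a gradient health report) must be assembled by hand.

```python
import torch
import torch.nn as nn

# A toy model; in practice this pattern is rebuilt per project.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
activation_stats = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Hooks expose raw tensors only; the "structured output"
        # (means, stds, layer names) is assembled manually.
        activation_stats[name] = {"mean": output.mean().item(),
                                  "std": output.std().item()}
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

x = torch.randn(16, 4)
loss = model(x).pow(2).mean()
loss.backward()

# An ad hoc gradient "health report" read straight from param.grad.
grad_report = {name: p.grad.norm().item()
               for name, p in model.named_parameters()
               if p.grad is not None}
```

Nothing here carries run identity, dataset semantics, or optimiser state; those are exactly the gaps the analysis identifies.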

We then generalise these gaps to LLM inference and agent systems, where gradients are replaced by token probabilities, and optimiser steps by multi-step decisions. The same structural problem persists: the signals exist but aren't standardised or exposed.
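To illustrate the inference-time analogue, a short sketch (toy logits, not a real model) of the per-token signals that exist but have no standard surface: each decoding step's chosen-token log-probability and distribution entropy can be derived from raw logits, yet every serving stack computes and logs them differently.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for a decoder's per-step output: 3 steps, vocab of 10.
vocab_size = 10
logits = torch.randn(3, vocab_size)

log_probs = F.log_softmax(logits, dim=-1)
chosen = log_probs.argmax(dim=-1)  # greedy token choice per step

# Per-step confidence: log-probability of the chosen token.
chosen_logprob = log_probs.gather(-1, chosen.unsqueeze(-1)).squeeze(-1)

# Per-step uncertainty: entropy of the full token distribution.
entropy = -(log_probs.exp() * log_probs).sum(dim=-1)
```

These quantities play the role gradients play in training: cheap, informative, and currently surfaced only through bespoke instrumentation.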

We propose framework-level primitives (structured hook outputs, gradient health reports, and run-scoped audit contexts) that enable interoperable tooling without expanding PyTorch's scope. This is a discussion grounded in implementation details, API design trade-offs, and reproducible engineering patterns.
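One way a run-scoped audit context could look. This is purely a hypothetical sketch of the proposed primitive (`AuditContext`, `audit_run`, and all field names are our illustration, not an existing or proposed PyTorch API): a context manager that gives every training or inference run an identity and a structured event stream that tooling can consume.

```python
import contextlib
import json
import time
import uuid

class AuditContext:
    """Hypothetical run-scoped container for structured audit events."""
    def __init__(self, run_name):
        self.run_id = str(uuid.uuid4())  # stable identity for the run
        self.run_name = run_name
        self.events = []

    def log(self, kind, **payload):
        # Structured events instead of free-form log lines.
        self.events.append({"ts": time.time(), "kind": kind, **payload})

    def report(self):
        # Serialisable artifact an auditor or monitor could ingest.
        return json.dumps({"run_id": self.run_id,
                           "run_name": self.run_name,
                           "events": self.events})

@contextlib.contextmanager
def audit_run(name):
    ctx = AuditContext(name)
    ctx.log("run_start")
    try:
        yield ctx
    finally:
        ctx.log("run_end")

# Usage: instrumentation attaches events to the active run context.
with audit_run("train-epoch-0") as ctx:
    ctx.log("grad_health", layer="fc1", grad_norm=0.42)
```

The design point is that provenance lives with the run, not in whatever logger each team happens to wire up.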

Engineering for the EU AI Act: What Should PyTorch Expose Natively?

The EU AI Act introduces concrete technical obligations for ML systems: traceability, risk management, monitoring, and auditability. Today, most of this burden is handled outside the ML framework, through ad hoc tooling, documentation, or bespoke infrastructure.

This Birds of a Feather session is an open, practitioner-driven discussion on a forward-looking question:
What primitives, hooks, or abstractions should PyTorch expose natively to better support AI accountability and regulatory readiness?

Topics for discussion may include:
- Native support for provenance, lineage, and training/inference traces
- Standardized hooks for fairness, robustness, and drift monitoring
- Model and dataset metadata as first-class PyTorch objects
- Privacy-preserving logging and zero-retention execution patterns
- Gaps between regulatory requirements (e.g. the EU AI Act) and current ML frameworks
The goal is not consensus, but shared understanding and concrete ideas that can inform community practices, tooling, and potential upstream contributions. This BoF is intended for PyTorch users, maintainers, researchers, and infra engineers interested in the future of responsible, production-grade ML.

