Speaker

Parul Singh

Red Hat

Parul is a Principal Software Engineer in Red Hat’s Office of the CTO, exploring emerging technologies and building cloud-native solutions. She leads initiatives at the intersection of AI infrastructure and model governance, currently focused on standardizing Model Card generation and discoverability to enable transparent, compliant, and trustworthy AI supply chains.

Enhancing AI Transparency and Trust with Model Cards

Current Model Card implementations are inconsistent, non-standardized, and rarely machine-actionable. They often live in READMEs or templates, lacking integration with model registries, supply chain tools, or security pipelines. Critical metadata such as evaluations, SBOMs, or vulnerability attestations is fragmented or missing entirely. This hinders discoverability, auditing, and responsible AI deployment. Our work introduces a structured Model Card specification, generator libraries, and a discovery service that attaches and indexes Model Cards in OCI registries using referrers. We integrate metadata from across the AI supply chain—evaluations, fairness benchmarks, security scans, and training pipelines—to build a complete, verifiable profile of the model. These Model Cards are queryable via a local search service, supporting automated validation and compliance. The result is a portable, transparent model identity that regulators, developers, and downstream consumers can trust—enabling responsible AI at scale across tools, registries, and teams.

Surfacing Trust: An OCI-Native Model Card Discoverability Service

Model Cards are critical for AI transparency—but today they are not standardized, are often buried in README files or repos, and lack integration with the AI supply chain. This limits discoverability of key metadata such as CVEs, SBOMs, evaluations, performance, and intended use.

We present an OCI-compliant Model Card Discoverability Service that surfaces structured metadata from Model Cards attached to models via OCI referrers, without modifying model blobs. This separates metadata from models, allowing trusted updates when new evaluations, CVEs, or attestations emerge—without republishing the model itself.
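As a rough illustration of the referrers mechanism (not the project's actual manifest layout), an OCI 1.1 image manifest can attach a Model Card to a model by naming the model's manifest in a `subject` descriptor; the digests and the `application/vnd.modelcard.v1+json` media type below are hypothetical placeholders:

```python
import json

# Digest of the model's existing image manifest (hypothetical value).
MODEL_DIGEST = "sha256:" + "ab" * 32

def model_card_manifest(card_digest: str, card_size: int) -> dict:
    """Build an OCI image manifest that references a model via the
    `subject` field, so registries index it under the model's
    referrers API without the model blob ever changing."""
    return {
        "schemaVersion": 2,
        "mediaType": "application/vnd.oci.image.manifest.v1+json",
        # artifactType lets clients filter referrers by kind; this
        # media type is an illustrative assumption, not a standard.
        "artifactType": "application/vnd.modelcard.v1+json",
        # OCI's well-known empty descriptor (the digest of "{}").
        "config": {
            "mediaType": "application/vnd.oci.empty.v1+json",
            "digest": "sha256:44136fa355b3678a1146ad16f7e8649e"
                      "94fb4fc21fe77e8310c060f61caaff8a",
            "size": 2,
        },
        "layers": [{
            "mediaType": "application/vnd.modelcard.v1+json",
            "digest": card_digest,
            "size": card_size,
        }],
        # The subject descriptor points at the model image; this is
        # what makes the card show up when a client lists the
        # model's referrers.
        "subject": {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "digest": MODEL_DIGEST,
            "size": 1024,
        },
    }

manifest = model_card_manifest("sha256:" + "cd" * 32, 4096)
print(json.dumps(manifest, indent=2))
```

In practice a tool such as ORAS can push an artifact with such a subject reference, and the registry's referrers API then returns the card when queried with the model's digest.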

The system pulls and indexes Model Cards stored as OCI artifacts (e.g., via ORAS) and builds a searchable SQLite database. This enables users and automated systems to filter models by architecture, licensing, compliance benchmarks, and security attestations—without modifying the registry or model blob. It bridges the gap between open standards and registry-native workflows, enabling better governance, interoperability, and trust in AI deployments.
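A minimal sketch of the indexing step, assuming Model Cards have already been pulled from the registry (e.g. with ORAS); the card fields and values here are illustrative placeholders, not the specification's schema:

```python
import json
import sqlite3

# Model Cards as they might look after being pulled from the
# registry; names, fields, and values are illustrative.
CARDS = [
    {"name": "granite-7b", "architecture": "transformer",
     "license": "Apache-2.0", "cve_count": 0},
    {"name": "resnet50-v2", "architecture": "cnn",
     "license": "MIT", "cve_count": 3},
]

def build_index(cards):
    """Flatten pulled Model Cards into a queryable SQLite table,
    keeping the raw JSON alongside the indexed columns."""
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE cards (
        name TEXT PRIMARY KEY, architecture TEXT,
        license TEXT, cve_count INTEGER, raw TEXT)""")
    db.executemany(
        "INSERT INTO cards VALUES (?, ?, ?, ?, ?)",
        [(c["name"], c["architecture"], c["license"],
          c["cve_count"], json.dumps(c)) for c in cards])
    return db

db = build_index(CARDS)
# Filter: transformer models with no known CVEs.
rows = db.execute(
    "SELECT name FROM cards WHERE architecture = ? AND cve_count = 0",
    ("transformer",)).fetchall()
print(rows)  # → [('granite-7b',)]
```

The same query interface serves both interactive users and automated compliance checks, since the registry and model blobs are never touched.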

Energy Observability using Kepler: Revolutionizing Cloud Efficiency

With the rise of Kubernetes, addressing power consumption in the cloud is vital for capacity planning, assessing environmental impact, and detecting anomalies. We present Kepler, an observability framework for monitoring power within and beyond Kubernetes. Kepler works across platforms and collects data via eBPF for a minimal energy footprint. During the session, we will cover the following key points:

- Importance & challenges of power observability in the cloud.
- Kepler's methodologies:
  - compiling data from performance counters;
  - approximating power consumption with machine learning when direct monitoring isn't possible.
- Real-world adoption of Kepler for power monitoring.
- Live demo of power monitoring in Kubernetes via Kepler, Prometheus & Grafana.
- Kepler on the Edge for power observability, using OpenTelemetry for centralized dashboarding.
- A glimpse into our ongoing work on PEAKS, a power- and energy-aware Kubernetes scheduler built on the Kepler observability framework.
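To give a feel for the ML-approximation point above, here is a toy stand-in for the idea, fitting a linear utilization-to-power curve with plain least squares; the calibration samples are made up, and Kepler's trained models are far more sophisticated:

```python
# Toy example: estimate node power from CPU utilization when
# hardware counters (e.g. RAPL) are unavailable. Sample data and
# the linear model are illustrative only.
samples = [  # (cpu utilization %, measured watts) on a calibration node
    (10, 52.0), (30, 68.0), (50, 85.0), (70, 101.0), (90, 118.0),
]

def fit_linear(points):
    """Ordinary least squares for watts ≈ a * utilization + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_linear(samples)
estimate = a * 60 + b  # predicted watts at 60% utilization
print(round(estimate, 1))
```

Once fitted on nodes that do expose hardware counters, such a model can be applied on nodes that don't, which is the spirit of Kepler's fallback estimation.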

Empowering Efficiency: PEAKS - Orchestrating Power-Aware Kubernetes Scheduling

Existing Kubernetes schedulers prioritize resource allocation and ignore differences in node power efficiency. PEAKS (Power Efficiency Aware Kubernetes Scheduler) optimizes aggregate power consumption during scheduling. Using ML models that relate node utilization to power consumption, PEAKS recommends nodes for pod placement, addressing power inefficiencies on underutilized nodes. This dynamic approach aligns nodes along the utilization-power curve, significantly reducing power draw compared to the default scheduler. By emphasizing multi-objective optimization and power efficiency, PEAKS advances cloud-native system management. Kepler facilitates energy-metric collection from cluster nodes, enabling power-aware scheduling. The discussion explores pod placement strategies based on node utilization-power relations, enriching Kubernetes' energy optimization.
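The placement idea can be sketched as picking the node whose predicted marginal power increase for the incoming pod is smallest; the per-node power curves, utilizations, and node names below are hypothetical, not PEAKS's learned models:

```python
# Hypothetical per-node power models: watts as a function of CPU
# utilization in [0, 1]. Real PEAKS models are learned from Kepler
# energy metrics, not hand-written linear curves.
def power_node_a(u):  # efficient node: shallow utilization-power curve
    return 40 + 60 * u

def power_node_b(u):  # less efficient node: steeper curve
    return 30 + 120 * u

NODES = {
    "node-a": {"model": power_node_a, "util": 0.50},
    "node-b": {"model": power_node_b, "util": 0.10},
}

def pick_node(nodes, pod_util):
    """Choose the node with the smallest marginal power increase
    if the pod's utilization were added to its current load."""
    def marginal(info):
        model, util = info["model"], info["util"]
        return model(util + pod_util) - model(util)
    return min(nodes, key=lambda name: marginal(nodes[name]))

best = pick_node(NODES, pod_util=0.20)
print(best)  # → node-a (12 W marginal vs 24 W on node-b)
```

Note that the busier node wins here because its power curve is shallower, which is exactly the effect a purely utilization-based scheduler would miss.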
