Speaker

Rahul Vishwakarma

WorkOnward, CTO

Rahul Vishwakarma is the CTO at WorkOnward in Los Angeles, USA, where he leads the development of generative AI-based solutions, driving the company's mission to deliver innovative AI technologies. With over 15 years of experience and an M.S. in Computer Science from CSULB, he specializes in AI systems. Rahul's innovative contributions are evidenced by his 60 granted U.S. patents. He has held significant positions at Dell Technologies and Hewlett Packard Enterprise.

Inside the Black Box: Hardware-Rooted Observability for the AI Era

AI-driven infrastructures demand observability pipelines that are both intelligent and hardware-trusted. Conventional Kubernetes observability tools such as Fluent Bit, OpenTelemetry, and Loki provide deep insights but risk exposing sensitive telemetry data during processing. This work presents a confidential observability framework using confidential-compute-enabled Kubernetes nodes. By leveraging hardware-based isolation, telemetry remains encrypted and is processed only within attested workloads, ensuring end-to-end confidentiality. Integrating CNCF observability components with secure hardware advances trustworthy AI operations, enabling privacy-preserving analytics, regulatory compliance, and resilient monitoring essential for future intelligent ecosystems across finance, healthcare, and autonomous digital systems.
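The attestation-gated flow the abstract describes can be sketched in miniature. This is a hypothetical illustration, not the framework itself: in a real confidential-compute node the sealing key is derived from TEE hardware and the measurement comes from a remote-attestation report, whereas here both are simulated constants so the control flow is runnable anywhere.

```python
import hashlib
import hmac
import json

# Assumption: a TEE-derived sealing key; simulated here as a constant.
SEALING_KEY = b"simulated-tee-derived-key"

def attest(measurement: str, expected: str) -> bool:
    """Stand-in for remote attestation: accept only the expected workload hash."""
    return hmac.compare_digest(measurement, expected)

def seal(telemetry: dict) -> bytes:
    """Simulate sealing telemetry. An HMAC tag guards integrity here; a real
    confidential pipeline would encrypt with a key released only after
    successful attestation."""
    payload = json.dumps(telemetry, sort_keys=True).encode()
    tag = hmac.new(SEALING_KEY, payload, hashlib.sha256).digest()
    return tag + payload

def process_if_attested(sealed: bytes, measurement: str, expected: str) -> dict:
    """Unseal and return telemetry only inside an attested workload."""
    if not attest(measurement, expected):
        raise PermissionError("workload not attested; telemetry stays sealed")
    tag, payload = sealed[:32], sealed[32:]
    if not hmac.compare_digest(
        tag, hmac.new(SEALING_KEY, payload, hashlib.sha256).digest()
    ):
        raise ValueError("telemetry tampered with in transit")
    return json.loads(payload)

sealed = seal({"pod": "fluent-bit-x7", "cpu_ms": 412})
record = process_if_attested(sealed, "sha256:abc", "sha256:abc")
```

The key property mirrored here is that telemetry is never readable outside the attested path: a mismatched measurement fails closed before any payload is exposed.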

AI-Driven Framework for CNCF-Aligned, Standards-Compliant Digital Twins in Cloud-Native Storage

Modern cloud-native infrastructures face challenges validating and scaling storage systems due to limited hardware, tight timelines, and high costs. This talk introduces a CNCF-aligned, AI-driven framework that uses Large Language Models to create standards-compliant Digital Twins of storage devices for realistic, hardware-free simulation. Integrating SNIA Swordfish and DMTF Redfish within a containerized, Kubernetes-native pipeline, it leverages key CNCF projects: Kubernetes for orchestration, Prometheus and OpenTelemetry for metrics and observability, Flux/Argo CD for GitOps-driven deployment, and Envoy for secure service communication. LLMs synthesize JSON device models that emulate real hardware, enabling scalable, standards-based validation and orchestration across cloud environments and empowering engineers to "design anywhere, test everywhere" while ensuring strict SNIA/DMTF and CNCF compliance.
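A Digital Twin payload of the kind the abstract describes might look like the sketch below. This is an assumption for illustration: the field names loosely follow the shape of a DMTF Redfish Drive resource (the `@odata.type` version string and the required-key set are hypothetical, not taken from the talk), with the idea that an LLM synthesizes such JSON and a validation step checks it before it is served by the simulated endpoint.

```python
import json

def synthesize_drive_twin(drive_id: str, capacity_gb: int) -> dict:
    """Build a Redfish-style drive model. In the talk's pipeline an LLM would
    synthesize this JSON; here it is hand-written for illustration."""
    return {
        "@odata.type": "#Drive.v1_0_0.Drive",  # hypothetical schema version
        "Id": drive_id,
        "Name": f"Simulated NVMe Drive {drive_id}",
        "CapacityBytes": capacity_gb * 10**9,
        "MediaType": "SSD",
        "Protocol": "NVMe",
        "Status": {"State": "Enabled", "Health": "OK"},
    }

# Hypothetical minimal key set a validator might enforce before serving a twin.
REQUIRED_KEYS = {"@odata.type", "Id", "CapacityBytes", "Status"}

def validate_twin(twin: dict) -> None:
    """Reject LLM-synthesized models that lack required schema fields."""
    missing = REQUIRED_KEYS - twin.keys()
    if missing:
        raise ValueError(f"twin missing required keys: {sorted(missing)}")

twin = synthesize_drive_twin("0", 960)
validate_twin(twin)
serialized = json.dumps(twin, indent=2)
```

The validation gate is the important design point: because LLM output is probabilistic, every synthesized model must be checked against the schema before it participates in hardware-free testing.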

Where Do Billions in Research Funding Really Go When Self-Citations Inflate Impact Scores by 20%

Self-citations can inflate research impact metrics by 10-20%, potentially skewing billions in research funding. This session explores the PyTorch-based architecture for analyzing self-citation patterns in massive bibliometric databases with millions of publications across diverse fields.

We’ll tackle computational challenges in developing the Self-Citation Adjusted Index (SCAI), a metric recalibrating citation counts based on discipline-specific patterns using PyTorch’s distributed training. We’ll explore PyTorch-based deep learning, including transformer architectures and graph neural networks via PyTorch Geometric, to distinguish legitimate from problematic self-citations. Hands-on examples will demonstrate training citation classification models with PyTorch’s autograd, emphasizing transparency through interpretable AI.
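The recalibration idea behind SCAI can be shown with a toy calculation. The abstract does not give the exact formula, so the version below is an assumption for demonstration only: self-citations above a discipline-specific baseline rate are treated as excess and discounted from the raw count.

```python
def scai(total_citations: int, self_citations: int, baseline_rate: float) -> float:
    """Illustrative Self-Citation Adjusted Index (formula is hypothetical):
    discount only the self-citations that exceed the discipline's typical
    self-citation rate, leaving normal self-citation behavior unpenalized."""
    expected_self = baseline_rate * total_citations
    excess = max(0.0, self_citations - expected_self)
    return total_citations - excess

# A paper with 100 citations, 25 of them self-citations, in a field where
# ~10% self-citation is typical: 15 excess self-citations are discounted.
adjusted = scai(100, 25, 0.10)  # 85.0
```

Note that a paper at or below its field's baseline keeps its full count, which is why discipline-specific calibration matters: a flat penalty would unfairly hit fields where self-citation is structurally higher.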

Attendees will learn to architect large-scale scientific data systems using PyTorch’s ecosystem, integrating Ray for distributed hyperparameter tuning and MLflow for experiment tracking, fostering equitable research evaluation impacting over $100 billion in annual funding.
