Satvik Kumar
Product Leader & OSS Mentor
Santa Clara, California, United States
Satvik Kumar is a Product Manager at Pure Storage specializing in cyber resilience, data protection, and AI-driven product innovation. He leads strategy for enterprise resilience and replication solutions, works across ecosystem partnerships including Veeam, Commvault, and Rubrik, and regularly presents on these topics to executive and technical audiences in global customer briefings. Earlier, he led global airline operations product initiatives at Sabre and co-founded PESU IO, an edtech platform that has served 25,000+ students. Satvik is also an IEEE-published researcher, a Carnegie Mellon University graduate, and a regular judge at hackathons.
Sessions
From Type Safety to Trusted Inference: Confidential AI Patterns in Scala with LLM4S
Enterprises want GenAI on sensitive data, but many teams still glue together brittle prototypes that are hard to govern, audit, and secure. This session shows how JVM teams can use LLM4S, an open-source Scala-first framework, as the application layer for confidential AI systems. Using examples drawn from cyber resilience, enterprise data services, and agent workflows, we will walk through a practical reference architecture for privacy-preserving AI: type-safe tool calling, guardrails, RAG, observability, memory, multi-provider routing, and secure tool execution. We will also map which protections belong in the framework, platform, and confidential-computing layers, including isolated execution, attestation-aware deployment, and governed data access. Attendees will leave with concrete design patterns for building production-ready AI agents that are reliable, auditable, and aligned with enterprise security requirements.
Confidential AI for JVM Enterprises: Design Patterns from LLM4S
Regulated enterprises already run critical JVM services. Their challenge is adding confidential AI controls, auditable guardrails, and secure workflow boundaries without replatforming to Python. Using LLM4S as an open-source reference implementation, this talk separates application-layer concerns (orchestration, prompt-injection defense, PII handling, retrieval grounding, and agent handoffs) from infrastructure-layer controls (attested TEEs, isolated inference, secret release, and policy enforcement). Attendees leave with a practical blueprint for privacy-preserving, production-oriented agent workflows on top of existing JVM estates.