Session
From Type Safety to Trusted Inference: Confidential AI Patterns in Scala with LLM4S
Enterprises want GenAI on sensitive data, but many teams still glue together brittle prototypes that are hard to govern, audit, and secure. This session shows how JVM teams can use LLM4S, an open-source, Scala-first framework, as the application layer for confidential AI systems. Using examples drawn from cyber resilience, enterprise data services, and agent workflows, we will walk through a practical reference architecture for privacy-preserving AI: type-safe tool calling, guardrails, retrieval-augmented generation (RAG), observability, memory, multi-provider routing, and secure tool execution. We will also map which protections belong in the framework, platform, and confidential-computing layers, including isolated execution, attestation-aware deployment, and governed data access. Attendees will leave with concrete design patterns for building production-ready AI agents that are reliable, auditable, and aligned with enterprise security requirements.
Satvik Kumar
Product Leader & OSS Mentor
Santa Clara, California, United States