
From Zero to Hero: Self-host Large Language Models on Kubernetes

Explore when and how to self-host large language models (LLMs) on Kubernetes. This session covers the key considerations for hybrid AI infrastructure and provides a technical overview of the components required to run LLMs on Kubernetes, including GPU resource management, model deployment, inference serving, monitoring, scaling, and the operational challenges involved.
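As a taste of the GPU resource management topic, a Kubernetes workload requests GPUs through the device-plugin resource name rather than CPU/memory alone. The sketch below assumes the NVIDIA device plugin is installed on the cluster; the container image and model name are illustrative placeholders, not the session's specific stack.

```yaml
# Minimal sketch: one LLM inference pod pinned to a single GPU.
# Assumes the NVIDIA device plugin is running on the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-inference
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-inference
  template:
    metadata:
      labels:
        app: llm-inference
    spec:
      containers:
        - name: server
          image: vllm/vllm-openai:latest          # placeholder inference-server image
          args: ["--model", "example-org/example-7b"]  # placeholder model name
          ports:
            - containerPort: 8000
          resources:
            limits:
              nvidia.com/gpu: 1                   # schedules the pod onto a GPU node
```

With a Service in front, a pod like this could serve requests on port 8000; scaling and monitoring would layer on top (for example via an HPA and Prometheus), which is the kind of operational ground the session covers.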

Seif Bassem

Cloud Solution Architect at Microsoft

Cairo, Egypt


