Session
Yes you can run LLMs on Kubernetes
As LLMs become increasingly powerful and ubiquitous, the need to deploy and scale these models in production environments grows. However, their size, accelerator requirements, and operational complexity can make them challenging to run reliably and efficiently. In this talk, we'll explore how Kubernetes can be leveraged to run LLMs at scale.
We'll cover the key considerations and best practices for packaging LLM inference services as containerized applications using popular OSS inference servers like TGI, vLLM, and Ollama, and deploying them on Kubernetes. This includes managing model weights, handling dynamic batching and scaling, implementing advanced traffic routing, and ensuring high availability and fault tolerance.
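To make the packaging and deployment step concrete, here is a minimal sketch of deploying vLLM's OpenAI-compatible server as a Kubernetes Deployment using the official Kubernetes Python client. The image tag, model name, namespace, replica count, GPU count, and health-check path are illustrative assumptions, not settings prescribed by the talk.

from kubernetes import client, config

# Load credentials from the local kubeconfig (use load_incluster_config() inside a cluster).
config.load_kube_config()

# One vLLM container serving an (assumed) Hugging Face model on its default port 8000.
container = client.V1Container(
    name="vllm",
    image="vllm/vllm-openai:latest",                          # assumed image tag
    args=["--model", "mistralai/Mistral-7B-Instruct-v0.2"],   # assumed model
    ports=[client.V1ContainerPort(container_port=8000)],
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
    readiness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/health", port=8000),
        initial_delay_seconds=120,                            # model weights take a while to load
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="vllm-inference", labels={"app": "vllm"}),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "vllm"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "vllm"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

A Service and an Ingress or Gateway API route in front of this Deployment would provide the traffic routing mentioned above; for model weights, teams commonly bake them into the image or stage them on a persistent volume rather than downloading them on every pod start.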
Additionally, we'll discuss accelerator management and serving models across multiple hosts. By the end of this talk, attendees will have a comprehensive understanding of how to successfully run their LLMs on Kubernetes, unlocking the benefits of scalability, resilience, and DevOps-friendly deployments.
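As one hedged illustration of the scaling piece, the sketch below attaches a HorizontalPodAutoscaler to the hypothetical vllm-inference Deployment from the previous sketch, again via the Kubernetes Python client. CPU utilization is used only for brevity; LLM inference services are more commonly scaled on inference-specific signals such as request queue depth, exposed through a custom metrics adapter.

from kubernetes import client, config

config.load_kube_config()

# Autoscale the (assumed) vllm-inference Deployment between 1 and 4 replicas.
hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="vllm-inference-hpa"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="vllm-inference"
        ),
        min_replicas=1,
        max_replicas=4,
        metrics=[client.V2MetricSpec(
            type="Resource",
            resource=client.V2ResourceMetricSource(
                name="cpu",
                target=client.V2MetricTarget(type="Utilization", average_utilization=70),
            ),
        )],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)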