
From Developer to AI Pioneer: Kubernetes-Driven LLM Deployment Practices

As an AI developer and open-source contributor, I have explored cloud-native deployment of LLMs using tooling from the Hugging Face ecosystem. This presentation will show how to build efficient LLMOps pipelines on Kubernetes, covering model management, containerized deployment, and inference optimization. Drawing on a case study from DaoCloud, I will share practical experience from prototype to production, helping developers become pioneers in cloud-native AI.
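As a rough illustration of the containerized-deployment step the abstract mentions, a minimal Kubernetes Deployment for serving an open LLM might look like the sketch below. This is an assumption-laden example, not material from the talk: the vLLM serving image, the Hugging Face model ID, and the resource numbers are all illustrative placeholders.

```yaml
# Illustrative sketch only: a minimal Kubernetes Deployment serving an LLM
# via the vLLM OpenAI-compatible server. Image tag, model ID, and resource
# figures are assumptions for demonstration, not details from the session.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-server                         # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-server
  template:
    metadata:
      labels:
        app: llm-server
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest
          args:
            - "--model"
            - "Qwen/Qwen2.5-7B-Instruct"   # example Hugging Face model ID
          ports:
            - containerPort: 8000          # OpenAI-compatible HTTP API
          resources:
            limits:
              nvidia.com/gpu: 1            # one GPU per replica
```

In practice a pipeline like the one described would layer model management (pulling and caching weights), autoscaling, and inference optimization on top of a base manifest of this shape.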

Samzong Lu

PM at DaoCloud, AI/LLMOps PM Leader, Contributor to Multiple CNCF Projects, Open Source Enthusiast

Shanghai, China
