LLM Local Inference: Tools, Techniques, and Insights
Over the past few months, I’ve had the chance to work on a research and development project exploring the feasibility of adopting solutions powered by Small and Large Language Models, with one unique twist: inference had to be performed at the edge.
This journey led me to scratch the surface of the vast, fascinating, and rapidly evolving world of LLMs, with a particular focus on inference servers. Along the way, I encountered my fair share of head-scratching moments and gathered valuable insights that I’m excited to share with you.
In this talk, we’ll cover the key concepts behind LLM inference, untangle the tricky jargon, and give you a glimpse into the primary tools and solutions you can leverage if you ever need or want to explore LLM local inference.
- Working Software 2025 https://www.agilemovement.it/workingsoftware/schedule.html
- We Are Developers World Congress 2025