Local Generative AI Inference: The Rise of Your Devices

With the advent of Small Language Models (SLMs), the time has come for your devices to rise! As SLMs continue to improve, we'll explore how leveraging the untapped power of personal devices can create exceptional offline Generative AI-enabled user experiences. From enhanced data privacy and regulatory compliance to reduced latency and cost savings, local inference offers a multitude of benefits. We'll present the multi-layered landscape of local inference, explaining how the hardware, backends, libraries, and applications that enable it fit together. Then, we'll showcase live demos running local SLMs with Llama.cpp and ONNX Runtime. Join us to unlock the potential of your devices!

François Bouteruche

Senior Developer Productivity Specialist at Microsoft

Paris, France
