Building Your (Local) LLM Second Brain
LLMs are hotter than ever, but most LLM-based solutions available to us require you to use models trained on data of unknown provenance, send your most important data off to corporate-controlled servers, and burn prodigious amounts of energy every time you write an email.
What if you could design a “second brain” assistant with OSS technologies that lives entirely on your laptop?
We’ll walk through the OSS landscape, discussing the nuts and bolts of combining Ollama, LangChain, OpenWebUI, AutoGen, and Granite models to build a fully local LLM assistant. We’ll also discuss some of the particular complexities that arise when your solution involves a local quantized model vs. a cloud-hosted one.
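To make the quantized-vs-cloud trade-off concrete, here is a minimal, self-contained sketch of symmetric 4-bit weight quantization, the kind of compression local model builds rely on to fit in laptop memory. The function names and numbers are illustrative assumptions of mine, not code from Ollama, Granite, or any library the talk uses:

```python
import random

# Illustrative sketch of symmetric 4-bit weight quantization (names and
# numbers are my own, not from any library mentioned in this session).
# Each float weight is rounded to one of 16 integer levels (-8..7)
# multiplied by a single shared scale factor.

def quantize_int4(weights):
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [level * scale for level in q]

random.seed(0)
weights = [random.gauss(0.0, 0.02) for _ in range(4096)]

q, scale = quantize_int4(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))

# Rounding to the nearest level can be off by at most half a step.
assert max_err <= scale / 2 + 1e-12
print(f"step size: {scale:.5f}, worst-case error: {max_err:.5f}")
```

Every weight lands within half a quantization step of its original value, which is exactly the per-weight noise a 4-bit local model accepts and a full-precision cloud deployment doesn't.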
In this talk, we'll build on the lightning talk to include complexities like:
* how much latency are you dealing with when running on a laptop?
* does the quality degradation of a 7-8B model reduce the assistant's effectiveness?
* how do reasoning + multimodal abilities help with the assistant's tasks?
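The latency and model-size questions above are connected by a common rule of thumb: single-stream decoding is usually memory-bandwidth bound, so per-token time is roughly the size of the quantized weights divided by memory bandwidth. The bandwidth and model-size figures below are illustrative assumptions, not benchmarks from the talk:

```python
# Back-of-envelope only: assumes decoding is memory-bandwidth bound and
# that every weight is read once per generated token. The bandwidth and
# model-size figures are illustrative, not measurements.

def tokens_per_second(n_params, bits_per_weight, bandwidth_bytes_per_s):
    weight_bytes = n_params * bits_per_weight / 8
    return bandwidth_bytes_per_s / weight_bytes

# An 8B-parameter model at 4 bits/weight is ~4 GB of weights;
# assume a laptop with ~100 GB/s of usable memory bandwidth.
rate = tokens_per_second(8e9, 4, 100e9)
print(f"~{rate:.0f} tokens/s upper bound")  # roughly 25 tokens/s
```

This is an optimistic ceiling (it ignores prompt processing, KV-cache reads, and thermal throttling), but it explains why a 7-8B quantized model is the practical sweet spot for a laptop assistant.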
Olivia Buzek
Senior Staff Developer Advocate for AI
Boulder, Colorado, United States