Session
Building an LLM on your Laptop to Learn Faster
With the rise of LLMs (Large Language Models) and all of the amazing capabilities they bring, tons of applications are being built on top of huge LLM providers like OpenAI, Anthropic, AzureAI, and Cohere.
A common assumption among everyday users is that these applications are outside the realm of what many of us could build ourselves. Fortunately, that is not the reality of our capabilities.
In this workshop, we'll show how, with a little bit of effort, the fantastic tech that is retrieval augmented generation (RAG), and a few components, we can build our own chatbot to speed up our learning.
Not only will we walk through what these components do and how they interact, but we'll show everyone how to use them to construct an application similar to ChatPDF, only even simpler, that we can run locally (or in a VM or container).
We'll then load up some documentation for PowerShell, PowerCLI, and azcli, and maybe even PowerShell in a Month of Lunches, to see how we can leverage this knowledge live without taking the time to RTFM.
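To give a feel for what we'll be assembling, here is a minimal sketch of the retrieve-then-ask loop at the heart of a RAG chatbot. The workshop's actual stack isn't spelled out in this abstract, so the example assumes a local Ollama server at http://localhost:11434 with a model named "llama3" pulled, and it uses a toy keyword-overlap retriever in place of a real embedding model and vector store:

```python
# Minimal RAG sketch (illustrative only; not the workshop's exact stack).
# Assumes a local Ollama server and a "llama3" model -- adjust as needed.
import json
import urllib.request

# 1. "Index" a few documentation snippets (in the workshop these would be
#    chunks of PowerShell / PowerCLI / azcli docs).
docs = [
    "Get-Help displays help about PowerShell cmdlets and concepts.",
    "Connect-VIServer establishes a PowerCLI connection to a vCenter Server.",
    "az login signs you in to Azure with the Azure CLI.",
]

def score(question: str, doc: str) -> int:
    """Toy retriever: count shared words. A real pipeline would use
    embeddings and a vector store instead."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def ask(question: str) -> str:
    # 2. Retrieve the most relevant snippet.
    context = max(docs, key=lambda d: score(question, d))
    # 3. Augment the prompt with that snippet and ask the local model.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    body = json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("How do I connect to vCenter?"))
```

In the workshop we'll swap the toy retriever for proper document loading, chunking, and embedding, but the overall flow (index, retrieve, augment, generate) stays the same.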
Seriously though, there is huge power in taking the time to understand how these types of tools work, giving yourself some shortcuts in using them, and gaining a new advantage for you and your peers in your day job - or just your personal learning.
Come join this workshop; let's learn a few things and build some cool tooling that you can use immediately. Hopefully it will launch you into some other projects as well.
Joe Houghes
Solutions Architect/FullStackGeek/Champion of Community
Castle Rock, Colorado, United States