Local LLMs for the Curious Data Practitioner

This presentation and Q&A session (up to 60 minutes) is for data practitioners curious about how Large Language Models (LLMs) can be integrated into their workflows while reducing costs, maintaining security, and managing complexity.

Practitioners will learn:
* How LLMs in general work and what they are (and are not) good for
* What LLMOps is and the criteria to consider when deciding whether to use open/closed, local/hosted LLMs
* "Context engineering" techniques to elicit the best possible results from LLMs

Further, they will get an overview of:
* Options for finding, installing, configuring, and managing LLMs on their own devices
* Techniques for making domain-specific data available to LLMs
* How LLMs can be incorporated into their existing working environment and workflows

Practitioners will leave the session with enough information to begin exploring LLMs in general, and local LLMs in particular, as tools for achieving their data-centric goals.

[Note to conference organizers: This is a condensed overview of a one-day workshop that provides hands-on details on the same subjects.]

Ram Singh

Builder, leader, and consultant exploring how AI models can deliver cheap, effective, secure, and simple solutions.

Nashville, Tennessee, United States
