Ram Singh

Builder, leader, and consultant exploring how AI models can deliver cheap, effective, secure, and simple solutions.

Nashville, Tennessee, United States

Ram is a technologist and operator who root-cause solves existential, enterprise-scale business and product problems while attempting *not* to create entirely new ones. When not leading or consulting cross-functional product development teams in delivering simple, trusted, fun, and useful solutions that delight customers and deliver significant ROI, he's usually in the guts of yet another startup idea to do the same. Or jeeping and camping with (The RDML Grace) Hopper, a dog.

Childhood access to computers, blended with classic sci-fi like Asimov's R. Daneel Olivaw, Simak's Jenkins, and Adams' Marvin, piqued an early interest in AI and robotics that has persisted throughout Ram's professional career. Like many, he's been exploring what these latest AI model offerings are *aktshually* good for.

Ram also:

  • Operates "id8 inc", his consulting business and innovation incubator
  • Is developing "coreograf", a federated context-engineering system to improve efficacy and outcomes when humans and generative AI models collaborate on the development of complex, enterprise-scale software
  • Co-organizes the "Artificial Intelligencers" and "Beerly Functional" meetup groups

Area of Expertise

  • Business & Management
  • Information & Communications Technology
  • Media & Information

Topics

  • AI/ML
  • Local LLMs
  • Product Management
  • SLMs
  • Lean Enterprise
  • Lean Startup
  • Lean Product Management
  • Complex Systems
  • Collective Intelligence
  • Complex Adaptive Systems
  • Systems Thinking
  • Functional Programming
  • Elixir

Leveraging Local LLMs for the Resource-Constrained Data Practitioner

This one-day hands-on-keyboards working session is for data practitioners curious to explore how Large Language Models (LLMs) can be integrated into their workflows, but who have concerns about cost, security, and complexity.

On their own laptops, starting from scratch, practitioners will learn how to:
* Find, install, configure, and manage LLMs on their own devices
* Integrate those LLMs into a working environment
* And, utilize LLMs in a data workflow (a minimal sketch follows this list)
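
As a taste of what that looks like, here is a minimal sketch (not the session's actual materials) of calling a locally running model from Python. It assumes Ollama (https://ollama.com) is installed with its server on the default port, and that a model has been pulled with something like `ollama pull llama3.2`; the model name and prompt are illustrative.

```python
# Minimal sketch: query a locally running LLM from Python.
# Assumes Ollama is installed, serving on its default port (11434),
# and that a model such as llama3.2 has already been pulled.
import requests

def ask_local_llm(prompt: str, model: str = "llama3.2") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local_llm("Summarize: quarterly revenue rose 12% on services growth."))
```

Note that the entire exchange stays on the laptop; no prompt or data leaves the machine.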

They will also learn:
* How LLMs in general work and what they are (and are not) good for
* LLMOps criteria to consider when deciding whether to use open/closed, local/hosted LLMs
* How to make domain-specific data available to their LLMs
* And, “context engineering” techniques to elicit the best possible results from LLMs (see the sketch after this list)
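
To make the domain-specific-data and context-engineering ideas concrete, here is a minimal sketch: score a few in-house snippets against a question and place the winners in the prompt so the model answers from *your* data. The toy keyword retriever, the sample documents, and the model name are illustrative stand-ins; a real setup would typically use embeddings and a vector store.

```python
# Minimal sketch of retrieval-based context engineering against a
# local model (Ollama on its default port; see the previous sketch).
import requests

DOCS = [
    "Refunds over $500 require a manager's approval within 48 hours.",
    "The ETL job 'nightly_sales' runs at 02:00 UTC and writes to the warehouse.",
    "Customer tier is recalculated on the first Monday of each month.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the question.
    q = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda d: -len(q & set(d.lower().split())))
    return ranked[:k]

def answer(question: str, model: str = "llama3.2") -> str:
    context = "\n".join(f"- {snippet}" for snippet in retrieve(question))
    prompt = (
        "Answer using ONLY the context below. If the answer isn't there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(answer("When does the nightly sales ETL run?"))
```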

Practitioners will leave the session with:
* A functional local LLM working environment set up on their laptops
* Experience using this environment in a simplified but realistic use case
* Sufficient understanding of how to customize this setup to suit their specific/evolving needs
* And, a basis to continue exploring and refining how they employ LLMs to achieve desired data-centric outcomes

[Note to conference organizers: This is an expanded, hands-on version of a ~40-minute presentation providing an overview of the same subjects.]

Not Your Models, Not Your Data? Private Local AI for the Cautious Data Practitioner

Cloud-hosted AI models offer undeniable convenience. But every prompt or API call sent to a hosted AI model carries the risk of your company’s "secret sauce" leaking into a provider’s operational or training data sets.

In this session, we will identify the "hidden lifecycles of data" in AI systems, and the criteria you can use to determine whether the attendant risks and unknowns are acceptable. We will also outline a "local-first" AI stack that brings your own models to your data and workflows, for those instances where the risks, costs, or unknowns of using hosted models are not acceptable.

Your takeaways will include:
* The Risk Surface: Understanding the vulnerabilities that can arise from bringing your data and workflows to hosted AI models.
* The Decision Framework: Criteria for balancing privacy, cost, and performance to determine when you should stay local and when you may need hosted models.
* The Local Stack: A curated guide to discovering, deploying, and operating your own AI models and tools (a minimal sketch follows this list).
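
As one illustration of the local-stack idea: several local servers (Ollama among them) expose an OpenAI-compatible endpoint, so client code written for a hosted provider can be pointed at your own hardware just by changing the base URL. The sketch below assumes the `openai` Python package and Ollama's compatibility endpoint; the model name and ticket text are illustrative, and the API key is unused locally.

```python
# Minimal sketch: the same client code many teams already use with a
# hosted provider, redirected to a model running on your own hardware,
# so prompts and data never leave your infrastructure.
from openai import OpenAI

# Ollama's OpenAI-compatible endpoint; no real key is needed locally.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

reply = client.chat.completions.create(
    model="llama3.2",
    messages=[{
        "role": "user",
        "content": "Classify this ticket as billing, outage, or other: "
                   "'We were double charged in March.'",
    }],
)
print(reply.choices[0].message.content)
```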

The Bottom Line: Treat your data and workflows like the valuable, protected assets that they are. Deploying a local AI stack gives you modern AI capabilities on your own terms and in your own infrastructure.
