
Practical RAG: Building a Semantic Memory App

LLMs hallucinate. They forget context. They don't know your data. Retrieval-Augmented Generation (RAG) addresses these problems by giving your app context-aware memory: information is stored by meaning and retrieved by intent.

You'll see how user input is processed through an LLM to categorize, tag, and generate metadata, then converted into embeddings and stored for semantic retrieval. As users interact with the system, their questions are analyzed to determine intent, transformed into targeted semantic queries, and matched against relevant stored content. The retrieved results are then passed back to an LLM to synthesize accurate, context-aware responses.
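The ingest half of that workflow can be sketched in a few lines. This is a minimal, self-contained sketch, not a production implementation: `generate_metadata` is a stand-in for the LLM call that categorizes and tags input, and `embed` is a toy hashed bag-of-words vectorizer standing in for a real embedding model; `memory` is a hypothetical in-memory store.

```python
import hashlib
import math

def embed(text: str, dims: int = 256) -> list[float]:
    """Toy stand-in for an embedding model: hash each word into a
    fixed-size vector, then unit-normalize. A real app would call an
    embedding API here instead."""
    vec = [0.0] * dims
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def generate_metadata(text: str) -> dict:
    """Placeholder for the LLM call that categorizes and tags input."""
    return {"category": "note", "tags": text.lower().split()[:3]}

memory: list[dict] = []  # stand-in for a vector database

def store(text: str) -> None:
    """Enrich the input with metadata, embed it, and save it
    for semantic retrieval."""
    memory.append({
        "text": text,
        "meta": generate_metadata(text),
        "vector": embed(text),
    })
```

The key design point the sketch illustrates: each stored record carries the original text, LLM-generated metadata, and an embedding, so later retrieval can match on meaning rather than keywords.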

This talk walks through the full end-to-end workflow: structuring content for effective retrieval, generating embeddings, orchestrating multiple LLM calls, and designing a system that moves beyond simple prompting toward intelligent, knowledge-driven applications. You'll leave knowing how to build AI applications that actually know your data.
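The retrieval-and-synthesis half of the workflow can be sketched the same way. Again a hedged, self-contained sketch: `embed` is the same toy hashed bag-of-words vectorizer (a real system would use an embedding model), the two `memory` entries are made-up sample data, and `synthesize` is a placeholder for the final LLM call that turns retrieved context into an answer.

```python
import hashlib
import math

def embed(text: str, dims: int = 256) -> list[float]:
    """Toy hashed bag-of-words embedding (stand-in for a real model)."""
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Hypothetical stored content, as produced by an earlier ingest step.
memory = [
    {"text": "The deploy pipeline runs on every push to main."},
    {"text": "Unit tests must pass before merging a pull request."},
]
for item in memory:
    item["vector"] = embed(item["text"])

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Embed the question and return the most similar stored texts."""
    qvec = embed(question)
    ranked = sorted(memory, key=lambda m: cosine(qvec, m["vector"]),
                    reverse=True)
    return [m["text"] for m in ranked[:top_k]]

def synthesize(question: str, context: list[str]) -> str:
    """Placeholder for the final LLM call: a real app would prompt a
    model with the retrieved context plus the user's question."""
    return f"Q: {question}\nContext: {' '.join(context)}"
```

A usage example: `synthesize("When does the deploy pipeline run?", retrieve("when does the deploy pipeline run"))` builds the context-grounded prompt from the best-matching stored note.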

Brent Stewart

Co-Founder of Alien Arc Technologies

Blue Springs, Missouri, United States


