
From Silicon to Agents: A Technological Deep Dive into the AI Stack

Modern AI feels like magic, yet it follows a clear technological evolution from the transistor to the agent. This talk walks step by step through that AI stack.

We examine why GPUs, with their high parallelism and memory bandwidth, became the foundation of large models, and why these models are so powerful yet so resource-intensive. Building on this, we explain transformer models: tokens, context windows, and text generation as consequences of the architecture.

Finally, we connect this to retrieval-augmented generation (RAG), tool calling, and agentic loops as logical extensions of LLMs, each with clearly defined strengths and limitations.

The goal is a solid overall understanding that turns AI from a black box into a comprehensible technology, without going too deep into detail, enabling better technical as well as organizational decisions.

Alexander Lehmann

Software Architect, Inventor of QuineAI

Dresden, Germany


