GenAI 101

If you want to understand the current state of the art in generative AI technology, this session is for you.

We will start with LLM basics: How do Large Language Models (LLMs) work internally (at a high level of abstraction)? What does the API of an LLM look like, and what are its limitations? How do "reasoning models" work?
Then we'll visit the peripherals of an LLM: What is Retrieval-Augmented Generation (RAG)? What is "tool calling"?
Finally, we'll turn to AI agents, answering the questions: What defines an AI agent? How do current state-of-the-art AI agents function? What is the Model Context Protocol (MCP)?

We will conclude the session with a practical example of what an internal AI agent for a software development team could look like. We will also share some lessons learned about AI use cases and their pitfalls.

Alexander Lehmann

Software Architect @ One.O (OTTO Group)

Dresden, Germany
