AI Apps: From LLMs to Agentic AI Systems (with Azure)
This session walks attendees through a practical progression for building real-world AI systems:
- LLM app foundations (prompting, tool use, reliability basics)
- RAG done right (retrieval design, chunking, grounding, citations)
- Copilots & dev assistants (how to design assistive workflows and context)
- Agents & multi-agent systems (planning, tools, memory, orchestration)
- Evaluation & guardrails (hallucination mitigation, safety boundaries, monitoring)
Attendees leave with repeatable patterns and a production checklist they can apply immediately, especially in Azure-centric architectures (Azure OpenAI, Azure AI Search, Functions, App Service/Container Apps, Prompt Flow/eval pipelines, observability).
Audience level: Intermediate
Pre-reqs (Required):
- Comfortable with APIs, JSON, and basic backend concepts (auth, logging, retries).
- Familiarity with LLM basics (tokens, temperature, prompt/response).
- Built at least one small LLM prototype or used an LLM SDK.
Pre-reqs (Helpful):
- Basic understanding of vector search / embeddings.
- Familiarity with Azure (App Service/Functions, Azure OpenAI, Azure AI Search).