Beyond LLMs: Hybrid AI Patterns for Real Applications
Large language models have made it easier than ever to add AI capabilities to applications, but they are not a universal solution. In production systems, asking an LLM to do everything can lead to unnecessary cost, weaker control, and architectures that are harder to test and maintain. Many real applications work better when LLMs are combined with more traditional techniques such as rules, retrieval, scoring, search, or optimization.
This talk presents a practical, engineering-focused approach to hybrid AI design. We will look at where LLMs shine, where they struggle, and how to decide which parts of a system should remain deterministic or algorithmic. Rather than treating the model as the application, we will explore how to use it as one component within a broader architecture that may also include workflow logic, retrieval pipelines, classification, ranking, or search-based methods.
The goal is not to argue against LLMs, but to show how to use them more effectively by pairing them with the right supporting techniques. Attendees will leave with a clearer framework for choosing between LLM-driven behavior and classical approaches, along with practical design patterns for building AI-powered applications that are more reliable, maintainable, and production-ready.
Key takeaways
- How to identify which parts of an application are a good fit for LLMs and which are better handled by classical logic or algorithms.
- Practical patterns for combining LLMs with retrieval, rules, ranking, search, and workflow orchestration.
- A more disciplined architecture mindset for building AI-powered applications in production.
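To make the "LLMs plus classical logic" idea concrete, here is a minimal illustrative sketch (not taken from the talk) of one common hybrid pattern: a deterministic rules pass that handles the easy, auditable cases, with an LLM used only as a fallback. All names are hypothetical, and `call_llm` is a stand-in for whatever model client an application actually uses.

```python
# Hypothetical "rules first, LLM fallback" router for support tickets.
# The rules are cheap, deterministic, and unit-testable; the model only
# sees inputs the rules could not classify.

import re
from typing import Optional

def classify_with_rules(ticket: str) -> Optional[str]:
    """Deterministic first pass: pattern-based routing."""
    rules = [
        (r"refund|charged? twice", "billing"),
        (r"password|locked out", "account"),
        (r"crash|error code", "technical"),
    ]
    for pattern, label in rules:
        if re.search(pattern, ticket, re.IGNORECASE):
            return label
    return None  # no confident rule match -> defer to the model

def call_llm(ticket: str) -> str:
    """Placeholder for a real LLM call; the interface is assumed."""
    return "general"

def route_ticket(ticket: str) -> str:
    # The model is one component, not the whole application.
    return classify_with_rules(ticket) or call_llm(ticket)
```

The design choice this sketch makes is the one the talk argues for: the deterministic path is the default, so most traffic stays cheap and predictable, and the LLM's behavior is confined to the cases where its flexibility is actually needed.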
Target audience
Software developers, architects, tech leads, and AI engineers who want a practical framework for applying modern AI techniques without over-engineering or over-relying on LLMs.
Suggested tags
AI engineering, hybrid AI, LLMs, software architecture, applied AI, production systems
Compact version
Large language models are powerful, but they are not the whole solution. In production systems, many applications work better when LLMs are combined with traditional techniques such as rules, retrieval, scoring, search, or optimization rather than being asked to do everything on their own.
This talk presents a practical approach to hybrid AI design, showing how to decide which parts of a system should be LLM-driven and which should remain deterministic or algorithmic. Attendees will leave with a framework and design patterns for building AI-powered applications that are more reliable, maintainable, and production-ready.
Optional alternate titles
Hybrid AI for Real Applications: Where LLMs End and Classical Algorithms Begin
LLMs Are Not Enough: Practical Hybrid AI for Production Systems
Building Better AI Applications with LLMs, Rules, Retrieval, and Search
A Better Way to Build AI Applications: Hybrid Intelligence in Practice
Eyal Wirsansky
Staff AI Engineer | Adjunct AI Professor | Author of ‘Hands-On Genetic Algorithms with Python’ | JUG and GDG Community Leader
Jacksonville, Florida, United States