Session
Utilizing and Testing LLMs
The entire way that humanity has built, tested, and shipped software has (seemingly) been flipped on its head with the introduction of large language models and Generative AI. These systems are powerful, increasingly ubiquitous and -- to be brutally honest -- unbelievably useful. Avoiding them in modern products is unrealistic at best.
In this talk, we'll explore practical strategies for integrating LLMs into real applications in ways that are actually useful to users (no, your app absolutely does not need its own chat buddy), and how to test systems that don't behave like traditional, deterministic software. We'll cover how LLMs are poised to change the way we interact with computers, along with techniques for building confidence in AI-powered features.
This session focuses on treating LLMs as what they are: systems that can exhibit (very) limited reasoning, respond to users in plain language, and help us get rid of the bane of every app user's existence, the step-by-step wizard.
Key Takeaways:
- Practical use cases showing how industry leaders are using LLMs in ways that will change the world
- How to implement non-deterministic systems without harming user experience
- Testing strategies that reduce the inherent risks of LLM-based features (one such strategy is sketched below)
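To give a flavor of that last takeaway, here is a minimal sketch (in Python, standard library only) of one common approach: asserting invariants on an LLM's output instead of exact strings. The generate_summary helper is a hypothetical stand-in for a real model call, not the API of any particular library, and the specifics are assumptions rather than the session's own material.

# Property-style testing for non-deterministic output.
# generate_summary is a hypothetical stand-in for a real LLM call; swap in
# your own client. Instead of asserting an exact string, the test checks
# invariants that must hold no matter how the model phrases its answer.

import json
import unittest


def generate_summary(ticket_text: str) -> str:
    # Placeholder for a real model call; returns JSON the way a prompted LLM might.
    return json.dumps({
        "summary": "Customer cannot reset their password from the mobile app.",
        "sentiment": "negative",
        "priority": 2,
    })


class TestTicketSummary(unittest.TestCase):
    def test_output_respects_contract(self):
        raw = generate_summary("I can't reset my password on my phone! Please help.")

        # 1. The output must be valid JSON with exactly the fields we prompted for.
        data = json.loads(raw)
        self.assertEqual(set(data), {"summary", "sentiment", "priority"})

        # 2. Fields must stay inside the ranges the rest of the app expects.
        self.assertIn(data["sentiment"], {"positive", "neutral", "negative"})
        self.assertIn(data["priority"], {1, 2, 3})

        # 3. Loose content checks beat brittle exact-match assertions.
        self.assertLessEqual(len(data["summary"]), 200)
        self.assertIn("password", data["summary"].lower())


if __name__ == "__main__":
    unittest.main()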
Chris Sellek
Writer of things
Raleigh, North Carolina, United States