Actionable user feedback for LLM applications
LLM-based use cases are being introduced into digital products every day. Yet, revolutionary as these new tools may be, product teams must stay grounded in actual user requirements and measure effectiveness within the boundaries of product management best practices: observing user behaviour and A/B testing to validate product assumptions.
But how can traditional product analytics cope with the non-deterministic nature of LLM outputs?
In this talk we’ll explore how the open-source project Langfuse can help correlate user actions and explicit feedback with LLM generations, so that teams can quickly A/B test different prompt versions and measure what works best for end users.
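To give a flavour of the approach, here is a minimal sketch in Python assuming a v2-style Langfuse SDK; the trace name, model, metadata keys and score values are illustrative, not prescribed by the talk:

    from langfuse import Langfuse

    # Reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from the environment.
    langfuse = Langfuse()

    # Log one LLM generation, tagged with the prompt variant under A/B test.
    trace = langfuse.trace(name="support-answer", metadata={"prompt_version": "B"})
    trace.generation(
        name="answer",
        model="gpt-4o-mini",
        input=[{"role": "user", "content": "How do I reset my password?"}],
        output="Open Settings > Account > Reset password.",
    )

    # Later, when the user clicks thumbs-up/down, attach the explicit
    # feedback to the same trace as a score.
    langfuse.score(
        trace_id=trace.id,
        name="user-feedback",
        value=1,  # e.g. 1 = thumbs up, 0 = thumbs down
    )

    # Ensure queued events are sent before the process exits.
    langfuse.flush()

Because the score is attached to the trace, feedback can then be sliced by the prompt_version metadata to compare how each prompt variant performs for real users.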