LLM pipelines built for critical performance
When designing LLM-based applications, it's tempting to pick the best-in-class option for every component in the content generation pipeline. Yet, especially in consumer use cases, we must evaluate trade-offs to balance content quality and performance: both severely impact UX, and neither can compensate for the other.
Drawing on our own experience building a consumer app that serves users real-time content generated on the fly by a complex LLM pipeline, we will explore lessons learnt, along with tips and tricks for finding the sweet spot between these two contrasting forces.