
From Traction to Production: Maturing your LLMOps step by step

Your concept for a product that leverages the power of Large Language Models is gaining traction, and your initial API calls to the selected LLM vendor show promising results. What's next? Should you look forward, ensuring your app is scalable, maintainable, safe, and secure, and ready for the introduction and monitoring of quality metrics like groundedness as well as operational metrics such as latency and token consumption? Or should you look back to reconsider model selection or fine-tuning, refine your prompts, and improve your RAG techniques? And how do you close the continuous improvement loop?

In this session, we cover all the important milestones of your LLM-powered product from both application and operations perspectives by introducing the LLMOps maturity model, as well as useful tools and services to support your journey. This will help you determine the best next steps for your Generative AI-based product to achieve operational excellence in today's dynamic and ever-evolving landscape of LLM technology.

After introducing LLMOps and its maturity model, we'll embark on the adoption journey step by step, covering:
- Selecting the best LLM for your business requirements
- Enriching LLM responses with relevant, contextualized data
- Best practices for prompt engineering
- Assessing the performance of your LLM solution
- Effective management of LLM application deployment
- Continuous monitoring techniques for LLM applications
- Ensuring the safety and security of LLM-generated content

Maxim Salnikov

Developer Productivity Lead at Microsoft, Tech Communities Lead, Keynote Speaker

Oslo, Norway

