OpenAI & GPT Workshop
Introduction
Large language models (LLMs) have rapidly taken artificial intelligence from near-obscurity into the mainstream. The constantly evolving models, and their impact on society, regularly make front-page news. Many developers, data scientists, and companies are rushing to adopt these developments in their business processes, products, and solutions, which raises some key questions:
• What do I need to consider when developing an LLM based solution?
• What is a generative pre-trained transformer, and how does it work?
• How can these models be leveraged by application developers to deliver real business value?
In this workshop you will explore how to leverage OpenAI and third-party models to develop AI-enhanced solutions. You will learn the options for hosting and consuming different models, and develop code and prompts to explore their capabilities. We will cover integrating company knowledge bases to provide a ChatGPT-style experience tailored to your business requirements, along with using prompt engineering to influence the outputs generated by the model.
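As a taste of the prompt-engineering topic, a chat request in OpenAI-style APIs is just a list of role-tagged messages, and the system message is the main lever for steering tone and behaviour. The assistant name and instructions below are illustrative placeholders, not part of the workshop material:

```python
# A minimal chat payload: the system prompt constrains how the model answers.
# Only the message structure is shown; the client call and model name are
# deliberately omitted, as those depend on your hosting choice.
messages = [
    {
        "role": "system",
        "content": "You are a support assistant for Contoso. Answer in one sentence.",
    },
    {
        "role": "user",
        "content": "What is your refund policy?",
    },
]

# Changing only the system message (e.g. "Answer as a formal legal notice.")
# changes the style of every response without touching the user's question.
print(messages[0]["content"])
```

The labs explore how small changes to that system message reshape the model's output.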
Hands-on labs using the latest LLM frameworks will be available in C# and Python, ranging from walk-through exercises to advanced challenges and group activities.
Because the field of LLMs is constantly evolving, the contents of the workshop are continuously updated to cover the latest developments.
This is what you will learn
• Large Language Models: Learn about the history, current state and possible future of the rapidly changing field of LLMs, focusing on hosted and open-source models.
• OpenAI, Azure OpenAI & Azure Machine Learning: Understand the different capabilities of cloud-hosted LLM model offerings, including options for model selection and fine-tuning.
• Prompt Engineering: Explore and experiment with prompt-engineering techniques, including output formatting, jailbreaking, and securing your applications.
• Developing LLM Solutions: Leverage frameworks such as LangChain, Semantic Kernel and Prompt Flow to develop solutions that integrate with LLMs.
• Inside GPT: Gain an understanding of the internals of GPT models and how tokenization, embedding and output sampling work together with the model’s attention mechanism.
• Retrieval Augmented Generation (RAG): Integrate GPT solutions with external services, such as vector-based, text and hybrid search engines, to develop “chat with your data” solutions.
• Testing & Evaluating Responses: Understand the challenges of testing and evaluating the responses generated by LLMs and develop a testing strategy that provides quantitative metrics of the output quality.
• Working with Agents & Plugins: Explore the power of using pre-built and custom agents and plugins to create tools that LLM solutions can leverage to perform tasks and integrate with other systems.
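The RAG topic above can be sketched in a few lines: embed your documents, find the closest match to the query embedding, and inject it into the prompt. The corpus, vectors, and query embedding below are made-up stand-ins for a real embedding model and vector store, used only to show the retrieval step:

```python
import math

# Toy corpus with invented 3-dimensional "embeddings". A real solution would
# obtain these from an embedding model and store them in a vector database.
corpus = [
    ("Our refund policy allows returns within 30 days.", [0.9, 0.1, 0.2]),
    ("The office is open Monday to Friday, 9am-5pm.", [0.1, 0.8, 0.3]),
    ("Support tickets are answered within 24 hours.", [0.2, 0.3, 0.9]),
]

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_embedding, k=1):
    """Return the k document texts whose embeddings best match the query."""
    ranked = sorted(
        corpus,
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

# Pretend this vector embeds: "How long do I have to return an item?"
query_embedding = [0.85, 0.15, 0.25]
context = retrieve(query_embedding)[0]

# The retrieved passage is placed into the prompt before calling the LLM,
# grounding the answer in your own data rather than the model's training set.
prompt = (
    f"Answer using only this context:\n{context}\n\n"
    "Question: How long do I have to return an item?"
)
print(prompt)
```

The hands-on labs replace the toy vectors with real embedding calls and vector, text, and hybrid search services.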
Who should attend?
This workshop is ideal for developers and data scientists looking to deepen their understanding of generative AI and LLMs and integrate them into their projects.
Programming experience in C# or Python will be required for most of the hands-on labs.
At the end of this workshop, you will be able to:
• Understand the evolution and internals of LLMs
• Utilize cloud-hosted and third-party models
• Master prompt engineering
• Implement retrieval augmented generation (RAG)
• Test and evaluate LLM responses
• Leverage and develop agents & plugins
What should you bring?
• Laptop with a development environment suitable for Python or C# development.
• Access to OpenAI or Azure OpenAI services.