Mastering Prompt Engineering Techniques

This session provides an in-depth exploration of prompt engineering techniques for recent large language models. A comprehensive walkthrough of prompt engineering strategies is presented, demonstrating how to optimize the model's responses and improve its performance across different tasks.

The session begins by explaining the distinction between base Large Language Models (LLMs) and instruction-tuned LLMs. It then moves on to techniques that guide the model's responses, such as controlling temperature and providing explicit instructions. A significant part of the session is dedicated to strategies for handling potential model errors, including self-check mechanisms and 'disallow' lists. We also introduce the audience to advanced techniques such as 'few-shot prompting', 'self-consistency', 'tree of thoughts', 'retrieval-augmented generation', and 'jailbreaking'. Practical examples are provided throughout the session, demonstrating how these techniques can be applied in tasks like sentiment analysis, summarization, entity extraction, translation, and more.
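As a taste of what the hands-on portion looks like, the sketch below combines two of the techniques named above, few-shot prompting and temperature control, for a small sentiment-analysis task. It uses the OpenAI Python SDK purely for illustration; the model name, example reviews, and choice of library are assumptions, not part of the session material.

# Minimal sketch: few-shot sentiment classification with a low temperature.
# The model name and example reviews are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

few_shot_messages = [
    {"role": "system", "content": "Classify the sentiment of each review as positive or negative."},
    # Few-shot examples: show the model the expected input/output pattern.
    {"role": "user", "content": "Review: The battery lasts all day."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Review: The screen cracked within a week."},
    {"role": "assistant", "content": "negative"},
    # The actual input we want classified.
    {"role": "user", "content": "Review: Setup was painless and the sound is great."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",        # placeholder model name
    messages=few_shot_messages,
    temperature=0,              # low temperature keeps the label output deterministic
)
print(response.choices[0].message.content)

The same pattern extends to the other tasks covered in the session (summarization, entity extraction, translation) by swapping the system instruction and the few-shot examples.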

Attendees will leave the session equipped with practical knowledge and insights that can be directly applied to enhance their work with large language models. This session is a must-attend for AI practitioners, developers, data scientists, and anyone interested in leveraging the power of LLMs for various applications.

Daron Yöndem

Microsoft - Tech Lead

Istanbul, Turkey
