Embeddings, Transformers, RLHF: Three Key Ideas to Understand ChatGPT

ChatGPT has achieved tremendous success and is already transforming the daily routines of many professionals across various industries.

While countless articles highlight the "30 must-know commands," few delve into the actual workings of the technology behind ChatGPT. To understand it, it's essential to grasp three key concepts:

- Embeddings: These represent words and phrases numerically, allowing large language models like GPT to process natural language.
- Transformers: The core component of large language models. Using the attention mechanism, they can focus on semantically related words even when they appear distant from one another.
- RLHF (Reinforcement Learning from Human Feedback): The technique used to fine-tune models so that their answers align with human preferences, using human rankings of model outputs as a reward signal.
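The first two ideas can be sketched in a few lines of code. Below is a minimal, illustrative example (not the talk's material): toy 4-dimensional word embeddings and single-head scaled dot-product attention without learned projections. The words, vectors, and dimensions are invented for illustration; real models use learned embeddings with hundreds or thousands of dimensions and learned query/key/value matrices.

```python
import numpy as np

# Toy embeddings: each word is mapped to a 4-dimensional vector.
# The values are made up; real embeddings are learned during training.
embeddings = {
    "the":   np.array([0.1, 0.2, 0.0, 0.1]),
    "river": np.array([0.9, 0.1, 0.8, 0.2]),
    "bank":  np.array([0.8, 0.3, 0.7, 0.1]),
}

def attention(X):
    """Simplified single-head scaled dot-product attention:
    softmax(X X^T / sqrt(d)) X, with no learned projections."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                   # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ X, weights

X = np.stack([embeddings[w] for w in ["the", "river", "bank"]])
contextual, weights = attention(X)

# "bank" attends more strongly to the semantically related "river"
# than to the function word "the".
print(weights[2].round(2))
```

Even in this toy setting, the attention weights let "bank" draw on "river" regardless of how far apart the two words sit in the sequence, which is the property the bullet above describes.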

In this talk, Emanuele Fabbiani from xtream will provide a concise yet thorough introduction to embeddings, transformers, and RLHF. He'll describe the technology powering ChatGPT, enabling the audience to harness the tool to its fullest potential.

First delivered at Talks at Buildo, 12 July 2023, Milan, Italy
Also held at BI Digital, 7 October 2023, Biella, Italy
Also held at Talks at BitRocket, 10 November 2023, Palermo, Italy
Also held at Boolean Masterclass, 21 November 2023, Milan, Italy
Also held at SIIAM Congress, 7 December 2023, Rome, Italy
Also held at Futuru 2024, 11 May 2024, Iglesias, Sardinia
Recorded session at https://youtu.be/m5qY4GNFEsA?si=a1SvJQYFVeQIcKZo

Emanuele Fabbiani

Head of AI at xtream, Professor at Catholic University of Milan

Milan, Italy
