
Embeddings, Transformers, RLHF: Three key ideas to understand ChatGPT

ChatGPT has become a groundbreaking tool, transforming how professionals across industries work. Yet while many articles focus on "the 30 prompts everybody needs to know", they often overlook the technology underlying ChatGPT. To truly understand it, three key concepts matter: Embeddings (how LLMs convert words and phrases into numerical vectors), Transformers (deep-learning modules that capture semantic connections), and RLHF (Reinforcement Learning from Human Feedback, which aligns models with intended purposes and ethical standards). The talk also covers the four primary steps involved in building and training a GPT-like model, the limitations and strengths of current generative AI models, and actionable insights for safe adoption.
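To make the embeddings idea concrete, here is a minimal sketch (not from the talk itself): each token is mapped to a vector of numbers, and semantically similar tokens end up with nearby vectors, which we can check with cosine similarity. The tiny three-dimensional vectors below are made up for illustration; real models use hundreds or thousands of dimensions learned from data.

```python
import math

# Toy, hand-crafted "embeddings": real LLMs learn these vectors during training.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: close to 1.0 means similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "king" and "queen" point in nearly the same direction; "apple" does not.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

This is the intuition the abstract refers to: once words are numbers, a Transformer can compute relationships between them.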

Luca Baggi

AI Engineer @xtream

Milan, Italy


