Frontiers of Large Language Model-Based Agentic Systems: Construction, Efficacy and Safety (Oct '24)

Large Language Models (LLMs) have recently demonstrated remarkable potential in achieving human-level intelligence, sparking a surge of interest in LLM-based autonomous agents. However, there is a noticeable absence of a thorough guide that methodically compiles the latest methods for building LLM-based agents, their assessment, and the associated challenges. As a pioneering initiative, this tutorial delves into the intricacies of constructing LLM-based agents, providing a systematic exploration of key components and recent innovations. We dissect agent design using an established taxonomy, focusing on essential keywords prevalent in discussions of agent frameworks. Key components include profiling, perception, memory, planning, and action. We unravel the intricacies of each element, emphasizing state-of-the-art techniques. Beyond individual agents, we explore the extension from single-agent paradigms to multi-agent frameworks. Participants will gain insights into orchestrating collaborative intelligence within complex environments. Additionally, we introduce and compare popular open-source frameworks for LLM-based agent development, enabling practitioners to choose the right tools for their projects. We discuss evaluation methodologies for assessing agent systems, addressing efficiency and safety concerns.
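The component taxonomy named above (profiling, perception, memory, planning, action) can be sketched as a minimal agent loop. This is an illustrative sketch only, not code from the tutorial: all class, method, and field names here (`Agent`, `perceive`, `plan`, `act`) are hypothetical, and the LLM call is replaced by a placeholder.

```python
# Hypothetical sketch of a single LLM-based agent composed of the
# components named in the abstract. Names are illustrative assumptions,
# not APIs from the tutorial or any specific framework.
from dataclasses import dataclass, field

@dataclass
class Agent:
    profile: str                                 # profiling: role/persona for the LLM
    memory: list = field(default_factory=list)   # memory: log of observations and actions

    def perceive(self, observation: str) -> None:
        # perception: ingest an observation from the environment
        self.memory.append(("obs", observation))

    def plan(self) -> str:
        # planning: a real agent would prompt an LLM with profile + memory;
        # here we derive a placeholder plan from the latest observation
        last_obs = next(v for k, v in reversed(self.memory) if k == "obs")
        return f"respond to: {last_obs}"

    def act(self) -> str:
        # action: execute the plan and record it in memory
        action = self.plan()
        self.memory.append(("act", action))
        return action

agent = Agent(profile="helpful assistant")
agent.perceive("user asks for the weather")
print(agent.act())  # -> respond to: user asks for the weather
```

A multi-agent framework, as discussed in the tutorial, would coordinate several such agents by routing one agent's actions into another's perception.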

The tutorial was initially delivered in October 2024 at ACM CIKM '24; the proceedings can be found in the Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (https://dl.acm.org/doi/pdf/10.1145/3627673.3679105). See https://frontiers-of-ai-agents-tutorial.github.io/ for more information.

April Hazel

Advising large organizations on the edge of AI, Data, and Innovation

St. Louis, Missouri, United States
