Augmenting the Agentic Workforce: From LLM Creation to Human-Centered Evaluation

As organizations adopt large language models (LLMs) to automate content creation, summarization, and communication tasks, the need for thoughtful oversight becomes critical. This session introduces a scalable framework for building agentic workforces—semi-autonomous teams powered by LLMs that generate, evaluate, and translate content under structured human guidance. The proposed pipeline includes LLM-based content generation, self-evaluation through LLM-as-a-judge techniques, and dynamic human-in-the-loop (HitL) checkpoints that ensure factual accuracy, alignment with organizational values, and contextual appropriateness.
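The pipeline described above—generation, LLM-as-a-judge self-evaluation, and a dynamic human-in-the-loop checkpoint—can be sketched as follows. This is a minimal illustration, not the session's actual implementation; `generate_content` and `judge_content` are hypothetical placeholders standing in for real LLM calls, and the review threshold is an assumed parameter.

```python
from dataclasses import dataclass

def generate_content(prompt: str) -> str:
    """Placeholder generator; a real system would call an LLM here."""
    return f"Draft response for: {prompt}"

def judge_content(draft: str) -> float:
    """Placeholder LLM-as-a-judge scorer returning a quality score in [0, 1]."""
    return 0.62  # illustrative fixed score

@dataclass
class PipelineResult:
    draft: str
    score: float
    needs_human_review: bool

def run_pipeline(prompt: str, review_threshold: float = 0.8) -> PipelineResult:
    """Generate a draft, self-evaluate it, and flag it for human review."""
    draft = generate_content(prompt)
    score = judge_content(draft)
    # Dynamic HitL checkpoint: drafts scoring below the threshold are
    # routed to a human reviewer for accuracy and alignment checks.
    return PipelineResult(draft, score, needs_human_review=score < review_threshold)
```

Raising or lowering `review_threshold` is one way to make the checkpoint "dynamic": high-stakes content can demand a stricter bar, while routine content passes through with spot checks.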

We’ll explore how this architecture can be extended to multilingual use cases, where translation and translation evaluation are managed through a blend of automated scoring and human quality review. Attendees will learn practical patterns for building these hybrid workflows, including evaluation criteria, escalation logic, and feedback integration for continual improvement. By fusing machine efficiency with human discernment, agentic workforces offer a responsible path forward for deploying generative AI in content-rich, high-stakes domains.
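Escalation logic for the multilingual case might look like the sketch below: an automated score (for example, a judge-model or metric-based score) routes each translation to auto-acceptance, human quality review, or retranslation. The function name, routes, and thresholds are all hypothetical assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class TranslationReview:
    text: str
    automated_score: float  # assumed to be a quality score in [0, 1]
    route: str

def route_translation(text: str, automated_score: float,
                      auto_accept: float = 0.9,
                      human_review: float = 0.6) -> TranslationReview:
    """Route a translation based on its automated quality score."""
    if automated_score >= auto_accept:
        route = "auto-accept"      # high confidence: publish directly
    elif automated_score >= human_review:
        route = "human-review"     # escalate to a human linguist
    else:
        route = "retranslate"      # regenerate, feeding the low score back
    return TranslationReview(text, automated_score, route)
```

Outcomes from the human-review tier can feed back into the thresholds and prompts over time, which is the "feedback integration for continual improvement" pattern the session describes.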

Nitin Kumar

Director of Data Science, Marriott International

Dallas, Texas, United States
