Speaker

Liji Thomas

Gen AI Manager - H&R Block, MVP (AI)

Kansas City, Missouri, United States

Liji Thomas, a Microsoft MVP in AI, is a seasoned technologist with over 15 years of experience in creating transformative digital experiences. As Gen AI Manager at H&R Block, she drives innovation by leveraging generative AI to enhance customer experiences and deliver business value. Her expertise spans AI, machine learning, natural language processing, and conversational AI.

Liji is passionate about harnessing AI's transformative potential to revolutionize business and human interaction with technology. A thought leader in the field, she actively shares her insights through speaking engagements, publications, and community involvement. Committed to diversity and inclusion, she mentors aspiring technologists and advocates for the next generation of AI leaders.

Badges

  • Most Active Speaker 2023
  • Most Active Speaker 2022

Area of Expertise

  • Business & Management
  • Information & Communications Technology

Topics

  • Azure
  • Azure AI
  • .NET
  • App Development
  • Generative AI

Building Trustworthy AI: Practical Approaches with Azure AI Foundry

We are moving beyond building merely intelligent systems to creating responsible, ethical, and impactful solutions. Join us for a hands-on session on building ethical and safe AI solutions using Azure AI Foundry. Let's cut through the jargon and discuss practical ways to make your AI projects more responsible. In this session, you'll learn:
• The basics of Responsible AI and why it matters
• How to use Azure AI Foundry's tools for safer AI development
• Practical tips for content moderation in AI applications
• Real-world techniques for ethical AI governance

The session is intended for developers, data scientists, and tech leaders who want to create AI that's not just smart, but also trustworthy. Let's see how it's done; a small content-moderation sketch follows the key takeaways below.
Key takeaways:
• Understand what Responsible AI really means in practice
• Learn to use Azure AI Foundry's safety features effectively
• Master practical content moderation techniques
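
To make the content-moderation takeaway concrete, here is a minimal sketch that screens text with the Azure AI Content Safety SDK for .NET (the Azure.AI.ContentSafety package). The endpoint, key, and severity threshold are placeholders, and the client surface shown here should be verified against the current SDK documentation; treat it as a shape, not a reference implementation.

```csharp
// Minimal sketch: screening user-supplied text with Azure AI Content Safety
// before it reaches (or after it leaves) a generative model.
// The endpoint, key, and severity threshold below are placeholders.
using System;
using Azure;
using Azure.AI.ContentSafety;

var client = new ContentSafetyClient(
    new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

Response<AnalyzeTextResult> response =
    client.AnalyzeText(new AnalyzeTextOptions("Text to screen goes here."));

// Each category (Hate, SelfHarm, Sexual, Violence) comes back with a severity.
// Block or route to human review when any severity crosses your policy threshold.
const int BlockThreshold = 2; // assumption: tune per your content policy
foreach (TextCategoriesAnalysis category in response.Value.CategoriesAnalysis)
{
    Console.WriteLine($"{category.Category}: severity {category.Severity}");
    if (category.Severity >= BlockThreshold)
    {
        Console.WriteLine("Flagged for review.");
    }
}
```

The same check can run on both the user's prompt and the model's answer, so moderation wraps the generation step rather than being bolted on afterward.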

HAX Toolkit: Guidelines, Patterns, and Real-World Examples for Human-Centered AI

Microsoft's Human-AI eXperience (HAX) Toolkit was designed in direct response to AI practitioners who have expressed that "AI is the most ambiguous space I've ever worked in... There aren't any real rules and we don't have a lot of tools." It's a comprehensive resource designed to address the ambiguity in AI design.

The HAX Toolkit includes essential components such as the HAX Design Guidelines, HAX Design Library, HAX Workbook, and HAX Playbook. These tools are crafted to provide clarity and structure in the design of human-AI interactions.

The toolkit synthesizes over 20 years of research, and its design guidelines were introduced in an award-winning 2019 CHI paper, grounding it in real-world insights from AI practitioners. This session is ideal for product designers, AI developers, UX researchers, and design strategists looking to enhance their approach to AI product development.

Key takeaways:
- Explore the HAX Design Guidelines: Learn 18 evidence-based best practices for creating intuitive AI user experiences, synthesizing over 20 years of research.

- Explore the HAX Design Library: Discover a rich database of design patterns and real-world examples to effectively implement AI interaction guidelines.

- Leverage the HAX Workbook: Facilitate cross-functional collaboration and alignment in early-stage AI product development, prioritizing design goals and resource allocation.

- Utilize the HAX Playbook: Identify and mitigate potential AI system errors, particularly in natural language processing, to create more robust and user-friendly applications.

Evaluation-Driven Development: The Next Step in LLM Ops

Generative AI often feels like magic — surprising, creative, and full of potential. But magic alone doesn’t scale. Without the discipline of measurement, prototypes stall, trust erodes, and production never arrives. To build reliable, enterprise-grade AI, you have to measure your magic.

This session introduces the Microsoft.Extensions.AI.Evaluation libraries, designed to simplify the process of evaluating model outputs in Gen AI apps. These libraries provide a robust foundation for evaluating key dimensions like relevance, truthfulness, coherence, completeness, and safety. They offer a range of built-in quality, NLP, and safety evaluators — with the flexibility to customize and add your own.
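
By way of illustration, here is a minimal sketch of scoring a single model response with the built-in quality evaluators, assuming the Microsoft.Extensions.AI.Evaluation API roughly as publicly documented (ChatConfiguration, CompositeEvaluator, CoherenceEvaluator, FluencyEvaluator). CreateJudgeChatClient() is a hypothetical placeholder for however you construct the judge model's IChatClient, and exact type and member names should be checked against the current package docs.

```csharp
// Minimal sketch: scoring one response with Microsoft.Extensions.AI.Evaluation.
// Type and member names reflect the library as publicly documented; verify
// against the current package before relying on them.
using System;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// Assumption: CreateJudgeChatClient() is a hypothetical helper that returns an
// IChatClient for the "judge" model (e.g., an Azure OpenAI deployment).
IChatClient judge = CreateJudgeChatClient();
var chatConfiguration = new ChatConfiguration(judge);

// Combine built-in quality evaluators; relevance, completeness, safety, and
// custom IEvaluator implementations plug in the same way.
IEvaluator evaluator = new CompositeEvaluator(new IEvaluator[]
{
    new CoherenceEvaluator(),
    new FluencyEvaluator()
});

var conversation = new[]
{
    new ChatMessage(ChatRole.User, "How do I amend a tax return I already filed?")
};
var candidate = new ChatResponse(
    new ChatMessage(ChatRole.Assistant, "File Form 1040-X within three years..."));

EvaluationResult result =
    await evaluator.EvaluateAsync(conversation, candidate, chatConfiguration);

// Each named metric can gate a CI check or feed a dashboard instead of
// relying on ad-hoc spot checks.
foreach (EvaluationMetric metric in result.Metrics.Values)
{
    Console.WriteLine($"{metric.Name}: {(metric as NumericMetric)?.Value}");
}
```

Running a suite of such checks against a fixed set of prompts on every change is the "measure your magic" discipline the session argues for.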

And as agentic AI becomes all the rage — applications that plan, reason, and take multi-step actions autonomously — evaluation becomes even more critical. We'll explore how to extend evaluation practices beyond static responses to agent workflows, action orchestration, and decision-making chains.

By the end, you’ll know why the only way to scale AI with confidence is simple: measure your magic.

Key takeaways:

- Understand why evaluations are the foundation of LLM Ops — not an afterthought.
- Learn how to use the Microsoft.Extensions.AI.Evaluation libraries to measure the quality of AI responses.
- Discover how to evaluate agentic AI applications — from workflows to reasoning steps.
- Apply the principles of Evaluation-Driven Development (EDD) — designing evaluations first to guide how AI features are built and scaled.

Global AI Bootcamp 2025 - Delhi Edition Sessionize Event

April 2025

Azure AI Connect Sessionize Event

March 2025

Global AI Bootcamp Dhaka 2025 Sessionize Event

March 2025

Global AI Bootcamp, Pune 2024 Sessionize Event

March 2024

.NET Conf Manila, Philippines 2023 Sessionize Event

January 2024

dev up 2023 Sessionize Event

August 2023 St. Louis, Missouri, United States

Atlanta Cloud Conference 2023 Sessionize Event

March 2023 Marietta, Georgia, United States

Global AI Bootcamp Philippines 2023 Sessionize Event

March 2023

Global AI Bootcamp - London 2023 Sessionize Event

March 2023

Global AI Developer Days, Pune 2022 Sessionize Event

October 2022

Global AI Developer Days Philippines 2022 Sessionize Event

October 2022

CollabCon 2022 Sessionize Event

October 2022 Overland Park, Kansas, United States

Global AI Developers Days Sessionize Event

October 2022

dev up 2022 Sessionize Event

June 2022 St. Louis, Missouri, United States

Global Azure - Verona 2022 Sessionize Event

May 2022

Global AI Bootcamp 2022 Sessionize Event

March 2022 Madrid, Spain

Global AI Bootcamp [London] Sessionize Event

March 2022

Austrian Developer Community Day (ADCD 2022) Sessionize Event

February 2022

Canadian Cloud Summit 2022 Sessionize Event

February 2022

Welsh Azure User Group - Event User group Sessionize Event

February 2022

Build Stuff 2021 Lithuania Sessionize Event

November 2021 Vilnius, Lithuania

Azure Community Conference 2021 Sessionize Event

October 2021

Azure Summit Sessionize Event

September 2021

Azure AI Day'21 Sessionize Event

September 2021

Women Data Summit Sessionize Event

June 2021

Global AI On Virtual Tour 2021 Sessionize Event

June 2021

MCT Summit 2021 Sessionize Event

March 2021

Virtual NetCoreConf 2021 Sessionize Event

February 2021

Azure Saturday - Belgrade 2021 Sessionize Event

February 2021

Global AI Bootcamp 2020 Sessionize Event

January 2021

Global AI Bootcamp Latinoamérica Sessionize Event

January 2021

AzConf Sessionize Event

November 2020
