Liji Thomas
Gen AI Manager - H&R Block, MVP (AI)
Kansas City, Missouri, United States
Liji Thomas, a Microsoft MVP in AI, is a seasoned technologist with over 15 years of experience in creating transformative digital experiences. As Gen AI Manager at H&R Block, she drives innovation by leveraging generative AI to enhance customer experiences and deliver business value. Her expertise spans AI, machine learning, natural language processing, and conversational AI.
Liji is passionate about harnessing AI's transformative potential to revolutionize business and human interaction with technology. A thought leader in the field, she actively shares her insights through speaking engagements, publications, and community involvement. Committed to diversity and inclusion, she mentors aspiring technologists and advocates for the next generation of AI leaders.
Building Trustworthy AI: Practical Approaches with Azure AI Foundry
We are moving beyond building merely intelligent systems to creating responsible, ethical, and impactful solutions. Join us for a hands-on session on building ethical and safe AI solutions using Azure AI Foundry. Let's cut through the jargon and discuss practical ways to make your AI projects more responsible. In this session, you'll learn:
• The basics of Responsible AI and why it matters
• How to use Azure AI Foundry's tools for safer AI development
• Practical tips for content moderation in AI applications
• Real-world techniques for ethical AI governance
The session is intended for developers, data scientists, and tech leaders who want to create AI that's not just smart, but also trustworthy. Let's see how it's done.
Key takeaways:
• Understand what Responsible AI really means in practice
• Learn to use Azure AI Foundry's safety features effectively
• Master practical content moderation techniques, as sketched below
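To make the content moderation piece concrete, here is a minimal sketch using the Azure AI Content Safety service, one of the safety tools surfaced through Azure AI Foundry. It assumes the azure-ai-contentsafety Python package; the environment variable names and the severity threshold are illustrative assumptions, not prescribed values.

import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder environment variables pointing at a Content Safety resource.
client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

def is_safe(text: str, max_severity: int = 2) -> bool:
    """Return True when every harm category scores at or below max_severity."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all((item.severity or 0) <= max_severity for item in result.categories_analysis)

# Gate user input before it ever reaches the model.
if is_safe("A user-submitted prompt goes here."):
    print("Forward the text to the model.")
else:
    print("Block the text or route it for human review.")

The same analyze-then-gate pattern applies on the output side: run the model's response through the classifier before showing it to the user.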
HAX Toolkit: Guidelines, Patterns, and Real-World Examples for Human-Centered AI
Microsoft's Human-AI eXperience (HAX) Toolkit was designed in direct response to AI practitioners who have expressed that "AI is the most ambiguous space I've ever worked in... There aren't any real rules and we don't have a lot of tools." It is a comprehensive resource for cutting through that ambiguity in AI design.
The HAX Toolkit includes essential components such as the HAX Design Guidelines, HAX Design Library, HAX Workbook, and HAX Playbook. These tools are crafted to provide clarity and structure in the design of human-AI interactions.
The toolkit synthesizes over 20 years of research and was introduced in an award-winning 2019 CHI paper, ensuring that it is grounded in real-world insights from AI practitioners. This session is ideal for product designers, AI developers, UX researchers, and design strategists looking to enhance their approach to AI product development.
Key takeaways:
- Explore the HAX Design Guidelines: Learn 18 evidence-based best practices for creating intuitive AI user experiences, synthesizing over 20 years of research.
- Explore the HAX Design Library: Discover a rich database of design patterns and real-world examples to effectively implement AI interaction guidelines.
- Leverage the HAX Workbook: Facilitate cross-functional collaboration and alignment in early-stage AI product development, prioritizing design goals and resource allocation.
- Utilize the HAX Playbook: Identify and mitigate potential AI system errors, particularly in natural language processing, to create more robust and user-friendly applications.
Evaluation-Driven Development: Turning AI Demos into Real Products
If you want to move POCs into production, they have to do more than impress. They have to work.
Generative AI demos can feel powerful: fast, fluent, and full of potential. But capability alone doesn't scale. Without measurement, prototypes stall, trust erodes, and systems never make it to production. The gap between a compelling demo and a reliable product is rarely the model. It's the absence of evaluation.
To build enterprise-grade AI, you have to measure what you build.
This session introduces the Microsoft.Extensions.AI.Evaluation libraries, designed to make evaluation a first-class part of Gen AI applications. These libraries provide a practical foundation for assessing what matters in real systems: relevance, truthfulness, coherence, completeness, and safety. They include built-in quality, NLP, and safety evaluators, with the flexibility to extend or tailor them to your domain.
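The libraries themselves are .NET packages; as a language-agnostic illustration of the same idea, here is a minimal LLM-as-judge relevance check in Python. The judge model, rubric wording, and 1-5 scale are illustrative assumptions, not the Microsoft.Extensions.AI.Evaluation API.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

RUBRIC = (
    "Rate the RESPONSE for relevance to the QUESTION on a 1-5 scale, "
    "where 5 is fully relevant. Reply with the number only."
)

def score_relevance(question: str, response: str) -> int:
    """Ask a judge model to grade one response against the rubric."""
    judgment = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative judge model
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"QUESTION: {question}\nRESPONSE: {response}"},
        ],
    )
    return int(judgment.choices[0].message.content.strip())

# Run the check over a fixed regression set and fail the build on low scores.
score = score_relevance(
    "What is the standard deduction?",
    "The standard deduction is a fixed amount that reduces your taxable income.",
)
assert score >= 4, f"Relevance regression: scored {score}"

Running a check like this over a fixed prompt set on every change is what turns a subjective "the demo looks good" into a pass/fail signal.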
And as agentic AI takes hold, with systems that plan, reason, and take multi-step actions, evaluation becomes even more critical. We'll explore how evaluation extends beyond static responses to cover agent workflows, action orchestration, and decision chains. When AI can act, understanding why it acted is as important as the outcome.
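For agents, the unit under evaluation is the trajectory, not a single response. A simple hypothetical check, sketched below, verifies that the tools an agent actually invoked appear in the order a correct run requires; the ToolCall record and the expected plan are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class ToolCall:
    """One step recorded from an agent run."""
    name: str
    arguments: dict

def trajectory_matches(calls: list[ToolCall], expected: list[str]) -> bool:
    """True when the expected tool names appear in order; extra calls are allowed."""
    remaining = iter(call.name for call in calls)
    return all(name in remaining for name in expected)

run = [
    ToolCall("search_docs", {"query": "filing status"}),
    ToolCall("fetch_form", {"form_id": "1040"}),
    ToolCall("summarize", {}),
]
print(trajectory_matches(run, ["search_docs", "summarize"]))  # True
print(trajectory_matches(run, ["summarize", "search_docs"]))  # False: wrong order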
By the end, one principle should be clear:
You can’t scale AI on intuition alone. You scale it by measuring it.
Key takeaways:
- Why evaluation is the foundation of LLM Ops, not an afterthought
- How to use Microsoft.Extensions.AI.Evaluation to measure response quality
- How to evaluate agentic AI, from workflows to reasoning steps
Azure AI Connect 2026 Sessionize Event Upcoming
Global AI Bootcamp 2025 - Delhi Edition Sessionize Event
Azure AI Connect Sessionize Event
Global AI Bootcamp Dhaka 2025 Sessionize Event
Global AI Bootcamp, Pune 2024 Sessionize Event
.NET Conf Manila, Philippines 2023 Sessionize Event
dev up 2023 Sessionize Event
Atlanta Cloud Conference 2023 Sessionize Event
Global AI Bootcamp Philippines 2023 Sessionize Event
Global AI Bootcamp - London 2023 Sessionize Event
Global AI Developer Days, Pune 2022 Sessionize Event
Global AI Developer Days Philippines 2022 Sessionize Event
CollabCon 2022 Sessionize Event
Global AI Developers Days Sessionize Event
dev up 2022 Sessionize Event
Global Azure - Verona 2022 Sessionize Event
Global AI Bootcamp 2022 Sessionize Event
Global AI Bootcamp [London] Sessionize Event
Austrian Developer Community Day (ADCD 2022) Sessionize Event
Canadian Cloud Summit 2022 Sessionize Event
Welsh Azure User Group - Event User group Sessionize Event
Build Stuff 2021 Lithuania Sessionize Event
Azure Community Conference 2021 Sessionize Event
Azure Summit Sessionize Event
Azure AI Day'21 Sessionize Event
Women Data Summit Sessionize Event
Global AI On Virtual Tour 2021 Sessionize Event
MCT Summit 2021 Sessionize Event
Virtual NetCoreConf 2021 Sessionize Event
Azure Saturday - Belgrade 2021 Sessionize Event
Global AI Bootcamp 2020 Sessionize Event
Global AI Bootcamp Latinoamérica Sessionize Event
AzConf Sessionize Event