
Liji Thomas
Gen AI Manager - H&R Block, Microsoft MVP (AI)
Kansas City, Missouri, United States
Liji Thomas, a Microsoft MVP in AI, is a seasoned technologist with over 15 years of experience in creating transformative digital experiences. As Gen AI Manager at H&R Block, she drives innovation by leveraging generative AI to enhance customer experiences and deliver business value. Her expertise spans AI, machine learning, natural language processing, and conversational AI.
Liji is passionate about harnessing AI's transformative potential to revolutionize business and human interaction with technology. A thought leader in the field, she actively shares her insights through speaking engagements, publications, and community involvement. Committed to diversity and inclusion, she mentors aspiring technologists and advocates for the next generation of AI leaders.
Area of Expertise
Topics
Building Trustworthy AI: Practical Approaches with Azure AI Foundry
We are moving beyond building merely intelligent systems to creating responsible, ethical, and impactful solutions. Join us for a hands-on session on building ethical and safe AI solutions using Azure AI Foundry. Let's cut through the jargon and discuss practical ways to make your AI projects more responsible. In this session, you'll learn:
• The basics of Responsible AI and why it matters
• How to use Azure AI Foundry's tools for safer AI development
• Practical tips for content moderation in AI applications
• Real-world techniques for ethical AI governance
The session is intended for developers, data scientists, and tech leaders who want to create AI that's not just smart, but also trustworthy. Let's see how it's done.
Key takeaways:
• Understand what Responsible AI really means in practice
• Learn to use Azure AI Foundry's safety features effectively
• Master practical content moderation techniques
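To make the content moderation idea concrete, here is a minimal illustrative sketch in Python of threshold-based moderation, modeled loosely on severity-score APIs such as Azure AI Content Safety (which scores categories on a 0-7 scale). All function and class names below are hypothetical stand-ins, not the actual SDK.

```python
# Illustrative sketch: threshold-based content moderation.
# Modeled loosely on severity-score APIs (e.g. 0-7 per category);
# names here are hypothetical, not a real SDK.
from dataclasses import dataclass, field

@dataclass
class ModerationResult:
    allowed: bool
    flagged_categories: list = field(default_factory=list)

def moderate(severities: dict, thresholds: dict) -> ModerationResult:
    """Block content whose severity meets or exceeds its per-category threshold."""
    flagged = [cat for cat, sev in severities.items()
               if sev >= thresholds.get(cat, 4)]  # default: block at "medium"
    return ModerationResult(allowed=not flagged, flagged_categories=flagged)

# Example: one category crosses its threshold, so the content is blocked.
result = moderate(
    severities={"hate": 2, "violence": 5, "self_harm": 0},
    thresholds={"hate": 4, "violence": 4, "self_harm": 2},
)
print(result.allowed)             # False
print(result.flagged_categories)  # ['violence']
```

The key design point, which carries over to production services, is that thresholds are per-category policy decisions, not model outputs: stricter categories (here, self-harm) get lower cutoffs.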
HAX Toolkit: Guidelines, Patterns, and Real-World Examples for Human-Centered AI
Microsoft's Human-AI eXperience (HAX) Toolkit was designed in direct response to AI practitioners who have expressed that "AI is the most ambiguous space I've ever worked in... There aren't any real rules and we don't have a lot of tools." It's a comprehensive resource designed to address the ambiguity in AI design.
The HAX Toolkit includes essential components such as the HAX Design Guidelines, HAX Design Library, HAX Workbook, and HAX Playbook. These tools are crafted to provide clarity and structure in the design of human-AI interactions.
The toolkit synthesizes over 20 years of research and was introduced in an award-winning 2019 CHI paper, ensuring that it is grounded in real-world insights from AI practitioners. This session is ideal for product designers, AI developers, UX researchers, and design strategists looking to enhance their approach to AI product development.
Key takeaways:
- Explore the HAX Design Guidelines: Learn 18 evidence-based best practices for creating intuitive AI user experiences, synthesizing over 20 years of research.
- Explore the HAX Design Library: Discover a rich database of design patterns and real-world examples to effectively implement AI interaction guidelines.
- Leverage the HAX Workbook: Facilitate cross-functional collaboration and alignment in early-stage AI product development, prioritizing design goals and resource allocation.
- Utilize the HAX Playbook: Identify and mitigate potential AI system errors, particularly in natural language processing, to create more robust and user-friendly applications.
Evaluation-Driven Development: The Next Step in LLM Ops
Generative AI often feels like magic — surprising, creative, and full of potential. But magic alone doesn’t scale. Without the discipline of measurement, prototypes stall, trust erodes, and production never arrives. To build reliable, enterprise-grade AI, you have to measure your magic.
This session introduces the Microsoft.Extensions.AI.Evaluation libraries, designed to simplify the process of evaluating model outputs in Gen AI apps. These libraries provide a robust foundation for evaluating key dimensions like relevance, truthfulness, coherence, completeness, and safety. They offer a range of built-in quality, NLP, and safety evaluators — with the flexibility to customize and add your own.
And as agentic AI becomes all the rage — applications that plan, reason, and take multi-step actions autonomously — evaluation becomes even more critical. We’ll explore how to extend evaluation practices beyond static responses to agent workflows, action orchestration, and decision-making chains.
By the end, you’ll know why the only way to scale AI with confidence is simple: measure your magic.
Key Takeaways
- Understand why evaluations are the foundation of LLM Ops — not an afterthought.
- Learn how to use the Microsoft.Extensions.AI.Evaluation libraries to measure the quality of AI responses.
- Discover how to evaluate agentic AI applications — from workflows to reasoning steps.
- Apply the principles of Evaluation-Driven Development (EDD) — designing evaluations first to guide how AI features are built and scaled.
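The evaluator pattern the abstract describes — built-in evaluators for dimensions like relevance and completeness, plus the ability to add your own — can be sketched in plain Python. Microsoft.Extensions.AI.Evaluation is a .NET library; the names below are hypothetical stand-ins for illustration, not its actual API, and the scoring heuristics are deliberately crude.

```python
# Illustrative sketch of an evaluator pattern: each evaluator scores one
# quality dimension of a model response; a runner aggregates the scores.
# Hypothetical names, not the Microsoft.Extensions.AI.Evaluation API.
from typing import Protocol

class Evaluator(Protocol):
    name: str
    def evaluate(self, question: str, answer: str) -> float: ...

class KeywordRelevanceEvaluator:
    """Crude relevance proxy: fraction of question words echoed in the answer."""
    name = "relevance"
    def evaluate(self, question: str, answer: str) -> float:
        q_words = set(question.lower().split())
        a_words = set(answer.lower().split())
        return len(q_words & a_words) / len(q_words) if q_words else 0.0

class CompletenessEvaluator:
    """Crude completeness proxy: score rises with length, saturating at 20 words."""
    name = "completeness"
    def evaluate(self, question: str, answer: str) -> float:
        return min(len(answer.split()) / 20, 1.0)

def run_evaluations(question: str, answer: str, evaluators: list) -> dict:
    """Run every evaluator and collect scores by name — the 'measure' step of EDD."""
    return {e.name: e.evaluate(question, answer) for e in evaluators}

scores = run_evaluations(
    "What is evaluation driven development",
    "Evaluation driven development means designing evaluations first.",
    [KeywordRelevanceEvaluator(), CompletenessEvaluator()],
)
```

The point of the pattern is the shared interface: swapping a keyword heuristic for an LLM-as-judge or a safety classifier changes one class, not the pipeline — which is what lets evaluations be designed first and implementations evolve behind them.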
Global AI Bootcamp 2025 - Delhi Edition Sessionize Event
Azure AI Connect Sessionize Event
Global AI Bootcamp Dhaka 2025 Sessionize Event
Global AI Bootcamp, Pune 2024 Sessionize Event
.NET Conf Manila, Philippines 2023 Sessionize Event
dev up 2023 Sessionize Event
Atlanta Cloud Conference 2023 Sessionize Event
Global AI Bootcamp Philippines 2023 Sessionize Event
Global AI Bootcamp - London 2023 Sessionize Event
Global AI Developer Days, Pune 2022 Sessionize Event
Global AI Developer Days Philippines 2022 Sessionize Event
CollabCon 2022 Sessionize Event
Global AI Developers Days Sessionize Event
dev up 2022 Sessionize Event
Global Azure - Verona 2022 Sessionize Event
Global AI Bootcamp 2022 Sessionize Event
Global AI Bootcamp [London] Sessionize Event
Austrian Developer Community Day (ADCD 2022) Sessionize Event
Canadian Cloud Summit 2022 Sessionize Event
Welsh Azure User Group Sessionize Event
Build Stuff 2021 Lithuania Sessionize Event
Azure Community Conference 2021 Sessionize Event
Azure Summit Sessionize Event
Azure AI Day'21 Sessionize Event
Women Data Summit Sessionize Event
Global AI On Virtual Tour 2021 Sessionize Event
MCT Summit 2021 Sessionize Event
Virtual NetCoreConf 2021 Sessionize Event
Azure Saturday - Belgrade 2021 Sessionize Event
Global AI Bootcamp 2020 Sessionize Event
Global AI Bootcamp Latinoamérica Sessionize Event
AzConf Sessionize Event