Liji Thomas
Gen AI Manager at H&R Block · Microsoft MVP (AI)
Kansas City, Missouri, United States
Liji Thomas, a Microsoft MVP in AI, is a seasoned technologist with over 15 years of experience in creating transformative digital experiences. As Gen AI Manager at H&R Block, she drives innovation by leveraging generative AI to enhance customer experiences and deliver business value. Her expertise spans AI, machine learning, natural language processing, and conversational AI.
Liji is passionate about harnessing AI's transformative potential to revolutionize business and human interaction with technology. A thought leader in the field, she actively shares her insights through speaking engagements, publications, and community involvement. Committed to diversity and inclusion, she mentors aspiring technologists and advocates for the next generation of AI leaders.
Area of Expertise
Topics
Building Trustworthy AI: Practical Approaches with Azure AI Foundry
We are moving beyond building merely intelligent systems to creating responsible, ethical, and impactful solutions. Join us for a hands-on session on building ethical and safe AI solutions using Azure AI Foundry. Let's cut through the jargon and discuss practical ways to make your AI projects more responsible. In this session, you'll learn:
• The basics of Responsible AI and why it matters
• How to use Azure AI Foundry's tools for safer AI development
• Practical tips for content moderation in AI applications
• Real-world techniques for ethical AI governance
The session is intended for developers, data scientists, and tech leaders who want to create AI that's not just smart, but also trustworthy. Let's see how it's done.
Key takeaways:
• Understand what Responsible AI really means in practice
• Learn to use Azure AI Foundry's safety features effectively
• Master practical content moderation techniques
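Azure AI Content Safety, one of the moderation tools surfaced through Azure AI Foundry, returns per-category severity scores for analyzed text. As a rough, self-contained illustration of the threshold-based moderation pattern (not the actual SDK; the category names and 0–7 severity scale here are stand-ins), a gate might look like:

```python
from dataclasses import dataclass

# Hypothetical analysis result, mimicking the shape of a content-safety
# response: a severity score per harm category (illustrative values).
@dataclass
class SafetyResult:
    categories: dict[str, int]  # category name -> severity score

def is_allowed(result: SafetyResult, max_severity: int = 2) -> bool:
    """Allow content only when every category is at or below the threshold."""
    return all(score <= max_severity for score in result.categories.values())

safe = SafetyResult({"hate": 0, "violence": 1, "sexual": 0, "self_harm": 0})
flagged = SafetyResult({"hate": 5, "violence": 1, "sexual": 0, "self_harm": 0})

print(is_allowed(safe))     # True
print(is_allowed(flagged))  # False
```

In a real application the scores would come from a moderation API call rather than being constructed by hand, and the threshold would typically vary per category.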
HAX Toolkit: Guidelines, Patterns, and Real-World Examples for Human-Centered AI
Microsoft's Human-AI eXperience (HAX) Toolkit was designed in direct response to AI practitioners who have expressed that "AI is the most ambiguous space I've ever worked in... There aren't any real rules and we don't have a lot of tools." It's a comprehensive resource designed to address the ambiguity in AI design.
The HAX Toolkit includes essential components such as the HAX Design Guidelines, HAX Design Library, HAX Workbook, and HAX Playbook. These tools are crafted to provide clarity and structure in the design of human-AI interactions.
The toolkit synthesizes over 20 years of research and was introduced in an award-winning 2019 CHI paper, ensuring that it is grounded in real-world insights from AI practitioners. This session is ideal for product designers, AI developers, UX researchers, and design strategists looking to enhance their approach to AI product development.
Key takeaways:
• Explore the HAX Design Guidelines: Learn 18 evidence-based best practices for creating intuitive AI user experiences, synthesizing over 20 years of research.
• Explore the HAX Design Library: Discover a rich database of design patterns and real-world examples to effectively implement AI interaction guidelines.
• Leverage the HAX Workbook: Facilitate cross-functional collaboration and alignment in early-stage AI product development, prioritizing design goals and resource allocation.
• Utilize the HAX Playbook: Identify and mitigate potential AI system errors, particularly in natural language processing, to create more robust and user-friendly applications.
AI POCs Are Easy. Production Is Hard. Evaluation Closes the Gap.
Generative AI prototypes are easy. With a few prompts and a model endpoint, teams can create impressive demos in minutes. But once these systems meet real users and real data, the cracks appear: retrieval pipelines drift, responses hallucinate, costs and latency fluctuate, and agent workflows take unexpected paths.
The gap between a compelling POC and a reliable production system is rarely the model. It's the absence of systematic evaluation.
This session introduces Evaluation-Driven Development as a practical engineering discipline for production AI systems. Using tools like Microsoft's Evaluation SDK and Azure AI Foundry, we'll explore how developers can instrument AI applications with automated evaluators to measure quality and safety.
From there, we’ll examine how evaluation applies across modern AI architectures including RAG pipelines, tool-calling agents, and multi-step reasoning workflows. You’ll see how to design evaluation datasets, run automated evaluation pipelines, and integrate these checks into CI/CD so changes to prompts, retrieval, or orchestration can be validated before reaching production.
Scaling AI systems isn’t about better demos. It’s about trust. Evaluation closes the gap.
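The evaluation loop described above, running evaluators over a dataset and gating deployment on the results, can be sketched in a few lines. The metric here (token overlap between answer and retrieved context) is a deliberately crude stand-in for a real groundedness evaluator such as those in Azure AI Foundry's evaluation tooling; the dataset and 0.5 threshold are likewise illustrative.

```python
def groundedness(answer: str, context: str) -> float:
    """Fraction of answer tokens that appear in the retrieved context."""
    answer_tokens = answer.lower().split()
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    hits = sum(1 for t in answer_tokens if t in context_tokens)
    return hits / len(answer_tokens)

# A tiny evaluation dataset: (question, retrieved context, model answer).
dataset = [
    ("capital of France?", "paris is the capital of france", "paris"),
    ("capital of France?", "paris is the capital of france", "london"),
]

scores = [groundedness(ans, ctx) for _, ctx, ans in dataset]

# A CI/CD pipeline could gate prompt or retrieval changes on this aggregate,
# failing the build when quality regresses below the agreed threshold.
passed = sum(scores) / len(scores) >= 0.5
```

The same structure scales from this toy to production: swap in model-graded or statistical evaluators, grow the dataset from real traffic, and run the gate on every change to prompts, retrieval, or orchestration.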
Oct 7, 12 pm CT
What Talking to AI Taught Me About Talking to Humans
Over the past few years, many of us have spent an unusual amount of time in conversation with machines.
We prompt them. Clarify intent. Add context. Correct misunderstandings. Iterate until the response improves.
And somewhere along the way, something interesting happens.
You start noticing how different these conversations are from the way humans often communicate with each other.
AI conversations reward clarity. They surface ambiguity instantly. They encourage iteration through feedback. They respond without ego or judgment. And they force us to be explicit about what we actually mean.
These patterns reveal something deeper: many of the habits that make AI interactions effective are the same habits that make human collaboration work.
In this talk, we explore the unexpected communication lessons that emerge from working closely with AI systems. Not just prompt engineering, but broader conversational dynamics like clarity of intent, structured thinking, iterative feedback, and shared understanding.
Through real examples from building and working with AI systems, we’ll look at how these patterns can improve conversations with teammates, stakeholders, and cross-functional partners.
Turns out the most surprising thing about talking to AI isn’t what the machine learns. It’s what we learn about ourselves.
How AI Teams Innovate Faster Through Experimentation
Traditional software teams ship features. AI teams run experiments.
When systems are probabilistic, behavior evolves with data, and small changes in prompts, retrieval, or context can shift outcomes, the path to improvement is rarely linear. Progress comes through rapid cycles of hypothesis, experimentation, evaluation, and learning.
This talk explores why experimentation sits at the heart of successful AI product development and how high-performing AI teams structure their work around it. Instead of relying on intuition or isolated demos, these teams build experimentation into their engineering workflow, allowing them to test ideas quickly, measure outcomes reliably, and iterate toward better systems.
We will walk through a practical framework for effective AI experimentation culture, including how teams design meaningful experiments, build evaluation datasets, compare system variations, and use production feedback loops to guide continuous improvement.
The goal is not just to experiment more, but to experiment better.
Attendees will learn practical patterns for building faster learning loops, structuring experimentation frameworks, and creating the cultural conditions that allow AI teams to move from promising ideas to reliable products.
Because in AI development, the teams that innovate the fastest are not the ones who guess the best. They are the ones who learn the fastest.
10 Things I Learned from Shipping AI Systems to Production
AI systems behave very differently from traditional software.
Once AI moves beyond prototypes, teams quickly discover that the biggest challenges are rarely about the model itself. They are about the systems around it: architecture, evaluation, observability, guardrails, and the engineering discipline required to make probabilistic systems dependable.
This talk shares ten practical lessons learned from building and operating AI systems in production. We will explore why the model is only one piece of the architecture, why classic engineering fundamentals remain essential, and why teams must become comfortable working with ambiguity when building systems that are inherently non-deterministic.
We will also examine how a failure-mode mindset shapes reliable AI systems. High-performing teams design guardrails, evaluation loops, monitoring, and feedback mechanisms from the beginning rather than treating them as afterthoughts.
If you are building AI-powered applications, this talk will highlight the engineering patterns and practices that help teams move from impressive demos to dependable production systems.
Because the hardest part of AI is rarely getting the model to respond.
It is building the system around it so people can trust it.
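One concrete instance of the failure-mode mindset above is a guardrail that validates a model response before trusting it, retries a bounded number of times, and falls back to a safe default. This is a minimal sketch; `call_model` is a hypothetical stand-in for any LLM endpoint, and the JSON schema check is illustrative.

```python
import json

def call_model(prompt: str) -> str:
    # Placeholder: a real system would call a model endpoint here.
    return '{"answer": "42"}'

def guarded_call(prompt: str, max_retries: int = 2, fallback: str = "") -> str:
    """Return a validated answer, retrying on bad output, else a fallback."""
    for _ in range(max_retries + 1):
        raw = call_model(prompt)
        try:
            parsed = json.loads(raw)        # structural check: valid JSON?
        except json.JSONDecodeError:
            continue                        # malformed output: retry
        if isinstance(parsed, dict) and "answer" in parsed:
            return parsed["answer"]         # schema check passed
    return fallback                         # all attempts failed: degrade safely

print(guarded_call("meaning of life?"))
```

Designed in from the start, the same wrapper becomes the natural place to hang logging, latency budgets, and content-safety checks, rather than bolting them on after an incident.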
Azure AI Connect 2026 Sessionize Event
Global AI Bootcamp 2025 - Delhi Edition Sessionize Event
Azure AI Connect Sessionize Event
Global AI Bootcamp Dhaka 2025 Sessionize Event
Global AI Bootcamp, Pune 2024 Sessionize Event
.NET Conf Manila, Philippines 2023 Sessionize Event
dev up 2023 Sessionize Event
Atlanta Cloud Conference 2023 Sessionize Event
Global AI Bootcamp Philippines 2023 Sessionize Event
Global AI Bootcamp - London 2023 Sessionize Event
Global AI Developer Days, Pune 2022 Sessionize Event
Global AI Developer Days Philippines 2022 Sessionize Event
CollabCon 2022 Sessionize Event
Global AI Developers Days Sessionize Event
dev up 2022 Sessionize Event
Global Azure - Verona 2022 Sessionize Event
Global AI Bootcamp 2022 Sessionize Event
Global AI Bootcamp [London] Sessionize Event
Austrian Developer Community Day (ADCD 2022) Sessionize Event
Canadian Cloud Summit 2022 Sessionize Event
Welsh Azure User Group - Event User group Sessionize Event
Build Stuff 2021 Lithuania Sessionize Event
Azure Community Conference 2021 Sessionize Event
Azure Summit Sessionize Event
Azure AI Day'21 Sessionize Event
Women Data Summit Sessionize Event
Global AI On Virtual Tour 2021 Sessionize Event
MCT Summit 2021 Sessionize Event
Virtual NetCoreConf 2021 Sessionize Event
Azure Saturday - Belgrade 2021 Sessionize Event
Global AI Bootcamp 2020 Sessionize Event
Global AI Bootcamp Latinoamérica Sessionize Event
AzConf Sessionize Event