Tricia Diamond
Director/Founder of Diamond PMO Solutions | Speaker (AI, Portfolio and Program Management, Professional Development, Heritage Management)
Seattle, Washington, United States
Dr. Tricia Diamond is a portfolio and program management executive, aerospace engineer, and organizational strategist with more than two decades of leadership across the public sector, technology, and consulting in the United States and the Netherlands. She holds a PhD in Aerospace Engineering, has professional aviation experience, and carries advanced credentials including the PMP®, PMI-ACP®, PMO-CP®, GPM-b®, CAL®, ITIL®, AWS-CCP, AWS-AIP, and MCASE. As Director of ARPA Implementation for a major U.S. city, she built and led a PMO governing a $386M federal recovery portfolio with 100% regulatory compliance. She has spoken at ASPA and PMI conferences, previously served as VP of Professional Development for PMI Puget Sound, and is a Seattle Parks and Recreation Commissioner. She is the founder of Diamond PMO Solutions, an MWBE-owned management consultancy specializing in portfolio governance, PMO design, and organizational strategy, as well as PMI certification training.
Topics
When AI Gets It Wrong and Nobody Notices: Governance, Accountability, and the Cost of Unaccountable AI
Conference Alignment
This session addresses the conference’s Ethical AI and Governance track through a case-study-grounded examination of bias, transparency, accountability, and the organisational conditions that allow AI errors to compound undetected, set in environments where the consequences are measured not in model metrics but in community harm and regulatory exposure.
Problem Statement
Most AI governance discourse focuses on model-level risks: bias in training data, lack of explainability, regulatory non-compliance. These are real problems. However, the governance failures that cause the most damage in practice are not model failures — they are organisational failures. They happen when no human in the process has clear accountability for verifying AI outputs, when the speed of AI-assisted analysis creates institutional pressure to skip the validation step, and when the systems that should catch errors are themselves partially automated. In high-stakes environments — government programmes, healthcare systems, public infrastructure, financial services — these organisational governance gaps are not hypothetical. They are operational realities with measurable human costs.
Session Description
When a federal programme deploys AI-assisted tools to analyse eligibility, prioritise allocations, or generate compliance documentation, the AI does not bear the consequences if the output is wrong. The programme director does. The community does. The regulator does. The accountability gap between what AI produces and who answers for it is not a technical problem. It is a governance problem, and it requires a governance solution.
Dr. Tricia Diamond draws on her experience directing a $386 million ARPA Implementation PMO, where every allocation decision was subject to federal audit, every requirement was traceable to a community outcome, and the cost of an undetected error was measured in potential clawback of public funds, to present a practitioner’s framework for AI governance in high-stakes programme environments. This session examines the specific conditions under which AI errors go undetected in organisational workflows, the governance structures that prevent them, and the accountability architecture that makes AI-assisted decision-making defensible under scrutiny.
This is not a theoretical session about responsible AI. It is a direct account of what governance looks like when the stakes are real, the scrutiny is constant, and the humans in the loop must be able to explain every decision to a federal auditor.
Key Takeaways
• The three organisational conditions that allow AI errors to compound undetected in high-stakes programme environments, and the governance structures that interrupt each one.
• How to design human-in-the-loop validation checkpoints that are genuinely effective rather than performative, including the accountability assignment that makes them function under operational pressure.
• A practical AI governance framework drawn from federal programme management practice that is transferable to any high-stakes delivery environment — healthcare, finance, government services, infrastructure.
• How to build the documentation and traceability practices that make AI-assisted decisions auditable, defensible, and correctable after the fact.
• Why the accountability gap between AI output and human responsibility is the most consequential and least addressed dimension of AI governance in organisations today, and what closing it actually requires.
How This Session Aligns With the Conference Theme
The conference’s Ethical AI and Governance track is explicitly seeking content on bias mitigation, regulatory frameworks, privacy, and transparent AI systems. This session contributes a practitioner-level case study perspective that is rare in AI governance discourse: that of not a policy researcher or a vendor, but a programme director who built and operated AI-assisted governance systems in a federally scrutinised environment and can speak directly to what worked, what failed, and what the accountability architecture actually needs to look like.
Intended Audience and Prerequisites
Advanced. Intended for senior leaders, programme directors, governance professionals, and policy practitioners who are responsible for AI adoption decisions in environments where errors carry regulatory, financial, or community consequences. Assumes basic familiarity with AI concepts and organisational governance frameworks.
AI in The New Era (Sessionize event, upcoming)