Noble Ackerson
Responsible AI Product Strategy & Data Governance
Noble's professional career sits at the intersection of data ethics and emergent technology. His impact in product leadership spans industries including health, edtech, international development, and media.
Day-to-day, he leads product strategy for enterprises implementing complex applications and responsible artificial intelligence solutions in a human-centered way, including AI bias mitigation, explainability, privacy-preserving approaches, and continuous learning automation to streamline software delivery.
Mr. Ackerson is a Certified AI Product Manager, a Google Certified Design Sprint Master, and a former Google Developers Expert for Product Strategy.
Topics
The Role of Explainable Interfaces and Building Trust in Government
As artificial intelligence (AI) becomes increasingly integrated into government services, building public trust and ensuring transparency in AI-driven decision-making processes have emerged as critical challenges. To address these concerns, explainable interfaces are playing a crucial role in enabling transparent AI and fostering trust in government AI systems.
In this talk, Noble Ackerson, a Fractional AI Risk Manager and Chief Product Strategist at Byte an Atom Research, will explore the importance of trust and transparency in adopting AI within government services and how explainable interfaces can help achieve these goals. By adhering to Paul Grice's maxims of effective communication, these interfaces ensure clear, relevant, and cooperative interactions between users and AI, making AI more accessible and user-friendly compared to traditional chatbots.
Noble will present real-world examples of government agencies using explainable AI, enabled by intuitive interfaces, to provide users with interpretable insights from predictive and prescriptive data on dashboards. These case studies will demonstrate how explainable interfaces can foster public trust and engagement by demystifying AI decision-making processes and promoting accountability.
Finally, Noble will discuss the future of AI in government, and the critical role explainable interfaces will play in ensuring responsible and transparent AI deployment. Attendees will learn about the challenges and opportunities associated with implementing explainable AI in government services and gain insights into how they can leverage these interfaces to build trust in their own AI initiatives.
Attend this GovAI Summit talk to explore the cutting-edge of explainable AI and discover how explainable interfaces are transforming the relationship between government, AI, and the public.
Building Explainable Interfaces for your Apps
In this talk, Noble Ackerson, a Fractional AI Risk Manager and Chief Product Strategist at Byte an Atom Research, will explore how to make user interfaces smarter with Large Language Models (LLMs).
As AI becomes increasingly integrated into our applications, it's crucial to rethink how we design user interfaces. Traditional chatbot interfaces place an undue cognitive burden on users, expecting them to become "prompt engineers" to effectively communicate with AI, leading to potentially harmful responses, hallucinations, and unreliable outputs.
What if we could make websites smarter with AI? Noble will walk through how he creates smarter, more intuitive interfaces for AI-powered apps.
This code lab-style talk applies the principles of cooperative communication, leveraging Large Language Models to enable context-sensitive, real-time assistance that adheres to Paul Grice's maxims of effective communication.
We'll explore how his approach abstracts prompting from users, fosters clear and relevant interactions between humans and AI, and ultimately delivers a more seamless and satisfying user experience. Attend this Voice&AI talk to discover the future of AI-driven interfaces and learn how to build smarter apps that truly harness the potential of AI.
AI's Complexity Trap: Unpacking the Operational Complexity Paradox
What if our efforts to improve the reliability of generative AI solutions are making these systems more cumbersome to build and harder to use? I call this the "AI Complexity Trap." As we add more layers of safety and accuracy, we inadvertently create complex systems that drain resources and complicate both development and the user experience.
In this talk, I'll expose the hidden costs of over-engineering AI and share solutions for balancing excellence with simplicity. You'll discover strategies to harmonize technological advancement with pragmatic management. The goal? Harnessing AI's potential without falling into the complexity trap.
GenAI: An Unreliable Information Store...and What to Do About It
Synopsis: Embark on an enlightening journey with Noble as he tackles the challenges of integrating Large Language Models (LLMs) into enterprise environments. Understand the inherent unreliability of these models and explore innovative solutions, ranging from vector databases to prompt chaining, that aim to enhance the trustworthiness of LLMs in crucial applications.

About Noble Ackerson: An authority on data ethics and emergent technology, Noble is at the forefront of applied AI integration at Ventera Corporation, Virginia. His illustrious career boasts accolades as an AI Product Manager, Google Design Sprint Master, and former Google Developers Expert. With a legacy of crafting award-winning mobile and advanced analytics solutions, Noble's passion extends to mentoring underrepresented communities. [Read more](https://medium.com/@nobleackerson)
Trustworthy AI for Enterprise: Enhancing LLMs
GPT Is an Unreliable Information Store
Current large language models (LLMs) exhibit limitations in deductive reasoning and cognitive architecture, posing challenges to their reliability in enterprise applications. This talk presents novel techniques to address these limitations and to evaluate and improve LLM performance.
We will highlight the issue of epistemological blindness in LLMs and propose a solution using an embeddings model endpoint and a vector index database for enhanced factual accuracy. Additionally, we will discuss innovative approaches to automating feature engineering, including automatic feature extraction with LLMs and LangChain integration in an MLOps pipeline, so that we can evaluate the accuracy of these tools.
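To make the embeddings-plus-vector-index idea concrete, here is a minimal sketch of retrieval-grounded prompting. It is an illustration only: the hard-coded three-dimensional vectors stand in for real embeddings from a model endpoint, and the `index` dictionary stands in for a vector database such as Chroma or Pinecone.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "vector index": document texts paired with stand-in embeddings.
# In practice each vector would come from an embeddings model endpoint.
index = {
    "Refund policy: returns accepted within 30 days.": np.array([0.9, 0.1, 0.0]),
    "Shipping: orders ship within 2 business days.":   np.array([0.1, 0.9, 0.0]),
}

def retrieve(query_embedding, index, k=1):
    # Rank stored documents by similarity to the query embedding
    # and return the top-k texts to ground the LLM prompt.
    scored = sorted(index.items(),
                    key=lambda kv: cosine_similarity(query_embedding, kv[1]),
                    reverse=True)
    return [text for text, _ in scored[:k]]

# Stand-in embedding for the question "What is the refund window?"
query = np.array([0.85, 0.15, 0.0])
context = retrieve(query, index)
prompt = (f"Answer using only this context:\n{context[0]}\n\n"
          "Q: What is the refund window?")
```

At answer time, the retrieved passage is placed into the prompt so the LLM is asked to answer from supplied facts rather than from its unreliable parametric memory.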
This presentation aims to offer valuable insights for practitioners and researchers, contributing to developing responsible and trustworthy AI in enterprise contexts.
Tags: Explainability, ML Model Pipelines, Chroma, Pinecone, OpenAI, ada, Embeddings, Cosine Similarity, Semantic Search
Fight AI bias with… bias
How can we ensure that AI systems treat everyone fairly?
Human biases influence the outputs of an AI model. AI amplifies bias, and socio-technical harms impact fairness, adoption, safety, and well-being.
Harms such as improperly implemented AI-driven predictive policing disproportionately affect legally protected classes of individuals and groups in the United States.
In this talk, Noble walks through tools and libraries to mitigate bias within your ML pipeline, along with explainability solutions such as SHAP for explaining the predictions of machine learning models, to avoid disproportionate product failures due to fairness and bias issues.
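The talk itself covers SHAP and dedicated bias-mitigation libraries; as a self-contained illustration of the kind of check such a pipeline runs, the sketch below computes the demographic parity difference, a deliberately coarse fairness metric. The predictions and group labels here are made up for the example.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Difference in positive-prediction rates between two groups.
    # A value near 0 suggests the model grants the favorable outcome
    # at similar rates; a large gap flags the pipeline for review.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return float(rate_a - rate_b)

# Hypothetical binary predictions (1 = favorable outcome) for two groups.
preds  = [1, 1, 0, 1,  0, 0, 1, 0]
groups = [0, 0, 0, 0,  1, 1, 1, 1]
gap = demographic_parity_difference(preds, groups)  # 0.75 - 0.25 = 0.5
```

A check like this is a tripwire, not a verdict: a large gap is a prompt to inspect the data and model with richer tools (SHAP values, per-group error rates), not proof of unfairness on its own.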
It’s so fitting that the 2022 theme for International Women’s Day was #BreakTheBias. Join Noble as he returns to Strange Loop to expand on the topic of bias and deconstruct techniques to de-bias datasets by example, building intelligent systems that are fair and equitable while increasing trust and adoption.
Tags: Explainability, XAI, Fairness Metrics, ML Evaluation, Model Monitoring, Interpretable Models, Machine Learning, ML/AI
Defining Data Trust in XR - IAPP Global Privacy Summit 2022
Defining Data Trust in Extended Reality: Noble joins Rob Sherman, Deputy CPO at Meta, and Nicol Turner-Lee, Fellow, Center for Technology Innovation, Brookings Institution, to discuss privacy in XR at the IAPP Global Privacy Summit 2022.
Technical constraints and privacy considerations for #UNID2020 mission
During the 63rd Session of the UN Commission on the Status of Women, I was invited to join a panel of thought leaders and international practitioners on the impact and opportunities the UN "ID for all" mission presents and, from a technical perspective, to address the intersection of technical feasibility, standards, and privacy regulations such as the EU's GDPR.
ARCore Parkour: Incorporating marker-based AR into your app
Data gathered through augmented reality and camera vision systems helps computers bridge the real and virtual worlds so that each can inform the other.
Augmented reality (AR) and mixed reality (MR) were once considered technologies reserved for games and media entertainment. However, these emerging technologies are growing rapidly as many industries realize they can be game changers, representing potential new leaps in how business is run, how data is collected, and even in new value-add propositions.
In this talk, Noble Ackerson, a 7-year Augmented Reality Developer and Product Expert, shares development best practices and industry insights, with examples such as incorporating marker-based AR into your app and creating XR experiences that cannot happen anywhere else.
Product Innovation in a Privacy by Design World
Personal data privacy is a hot-button item today. It can often seem overly burdensome for the software developer looking to change the world. Established businesses may look at regulations and current data privacy trends as negatively impacting product innovation.
This is the wrong lens through which we should look at this.
In this talk, Noble walks through his framework for respecting user privacy AND innovating amid the growth in data and in regulations like the GDPR.
He shares insights on human-centered design opportunities that can help the independent developer and an established corporation.
Finally, he presents practical examples to minimize the data you collect and best practices for asking for data in software only when it provides value for the user.
AWE: Shadows of the emerging worlds
Today we see advancement in practical applications of emerging technologies in our homes and at work. As these solutions grow towards ubiquity, so do the possible harms and obstacles to the world we all want to live in.
In this talk, Noble Ackerson walks through 3 data trust and ethics issues in XR specifically:
1. Dark patterns in XR - impact, example, opportunity;
2. AI fairness & bias - impact, example, opportunity;
3. Data privacy/trust and security - impact, example, opportunity.
He presents practical examples of principles to avoid wading into dark patterns within the shadows of the emerging world, and shows how to optimize for trust through his explainability, context, control, and choice framework for good data stewardship.
VOICE & AI / GovAI Summit 2024 Sessionize Event
CodeForward Sessionize Event
2023 Tech Tactics in Education: Data and IT Security in the New Now Sessionize Event
Bloomberg Power of Difference Panel
Power of Difference Panel at Bloomberg New York City explores the impact and opportunities of Generative AI on the BIPOC communities.
DevOpsDays DC 2023 Sessionize Event
VOICE 2023 Sessionize Event
Google I/O Extended 2023
GPT (GenAI) Is an Unreliable Data Store...and What to Do About It
Embark on an enlightening journey with Noble as he tackles the challenges of integrating Large Language Models (LLMs) and explores practical approaches to improve the reliability of Generative AI solutions for enterprise environments.
Medium Day 2023
Session #1 GPT Is An Unreliable Information Store
Session #2 Fighting AI Bias...With Bias
Strange Loop 2022 Sessionize Event
Refactr Tech 2022
Fight AI bias with…bias

Noble Ackerson, President, CyberXR and Director of AI/ML Products, Ventera Corporation

I know; the title of this talk is like saying the only way to stop a bad Terminator is to use a good Terminator, but hear me out. Human biases influence the outputs of an AI model. AI amplifies bias, and socio-technical harms impact fairness, adoption, safety, and well-being. These harms disproportionately affect legally protected classes of individuals and groups in the United States. It’s so fitting that this year's theme for International Women’s Day was #BreakTheBias, so join Noble as he expands on the topic of bias and techniques to de-bias datasets by example, building intelligent systems that are fair and equitable while increasing trust and adoption.
Loudoun Agile Network [Virtual]
From Strategy to Outcomes
In this talk, I expand upon ways to measure the movement of your users’ or segments of users' interactions with digital software products, toward our vision, strategic objectives, and product goals.
Augmented World Expo (AWE2021)
In this talk, Noble Ackerson talks about trust and ethics in XR specifically:
Dark patterns in XR - impact, example, opportunity;
AI fairness & bias - impact, example, opportunity;
Data privacy/trust and security - impact, example, opportunity.
He presents practical examples of principles to avoid wading into dark patterns within the shadows of the emerging world and how you optimize for trust through explainability, context, control, and choice.
Refactr Tech 2019
Today there are lots of case studies for Spatial Computing (Augmented Reality, Virtual Reality, Mixed Reality, etc.) in education and entertainment. Unfortunately, the newly released ARCore features, like Augmented Images, still lack tutorials and practical examples. In this interactive session, Noble Ackerson, a seasoned Technical Product Lead and GDE, shares insights with examples through a fun "choose your own adventure" technical talk (Bandersnatch-style) on how to build marker-based AR with Augmented Images, creating experiences that transform data from the world into practical utility for your users.
Strangeloop 2019
How to Build with Data Trust and Privacy as a Baseline
Tim Berners-Lee recently published his Contract for the Web with a core principle stating we must "Respect consumers' privacy and personal data so people are in control of their lives online."
Noble's talk explores tactical approaches to be good stewards of data in the face of growing regulations through software with techniques like differential privacy.
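As a small illustration of one such technique, the sketch below implements the classic Laplace mechanism for differential privacy: noise with scale sensitivity/ε is added to a count before it is released. The function names and parameters are illustrative, not drawn from the talk or any particular library.

```python
import numpy as np

def laplace_scale(sensitivity, epsilon):
    # Scale parameter b for the Laplace mechanism: b = sensitivity / epsilon.
    # Smaller epsilon means stronger privacy and therefore more noise.
    return sensitivity / epsilon

def private_count(true_count, epsilon, rng):
    # Release a count with epsilon-differential privacy by adding
    # Laplace noise calibrated to the query's sensitivity.
    # A counting query has sensitivity 1: one person changes it by at most 1.
    b = laplace_scale(sensitivity=1.0, epsilon=epsilon)
    return true_count + rng.laplace(loc=0.0, scale=b)

rng = np.random.default_rng(42)
noisy = private_count(100, epsilon=0.5, rng=rng)  # near 100, rarely exact
```

The stewardship point is that the analyst still gets a useful aggregate, while no individual's presence in the dataset can be confidently inferred from the released number.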
Abstractions Conf 2019
ARCore Parkour: Building Your First Augmented Reality App
Noble Ackerson
Level: Advanced · Tags: augmented reality, AR Foundation, ARKit, ARCore
Today there are lots of case studies for Spatial Computing (Augmented Reality, Virtual Reality, Mixed Reality, etc.) in education and entertainment. Unfortunately, the newly released ARCore features, like Augmented Images, still lack tutorials and practical examples.
Positive Sum, Building with Privacy as a Basic Need
Noble Ackerson
Level: Advanced · Tags: design, privacy by design, software ethics
Personal data privacy is a hot-button item today. It can often seem overly burdensome for the software developer looking to change the world. Developers and designers may look at regulations and current data privacy trends as negatively impacting product innovation.
DevFest DC 2019
ARCore Parkour: Building your first Augmented Reality Experiences
How do you blend the physical and the digital within your apps?
Today there are many case studies for Spatial Computing (Augmented Reality, Virtual Reality, Mixed Reality, etc.) in education and entertainment. Unfortunately, the newly released ARCore features, like Augmented Images, Cloud Anchors, Sceneform, and Augmented Faces, still lack tutorials and practical examples.
In this interactive session, Noble Ackerson, a seasoned Technical Product Lead and GDE, shares insights with examples through a fun "choose your path" technical talk on how to build markerless and/or marker-based AR with ARCore.
DevFest DC 2018
Dueling Technologists
10:00 - 10:45
Noble and Allen will be debating each other about a current set of technologies (They're keeping exactly which ones a secret) and their place in society.
DevFest DC 2017
From Game jams to Publisher, Building Immersive VR experiences
Johns Hopkins University Innovation Factory Summit 2017
Optimizing for machines