Alison Cossette

Data Science Strategist, Advocate, Educator

Burlington, Vermont, United States

Alison Cossette is a dynamic Data Science Strategist, Educator, and Podcast Host. As a Developer Advocate at Neo4j specializing in Graph Data Science, she brings a wealth of expertise to the field. With her strong technical background and exceptional communication skills, Alison bridges the gap between complex data science concepts and practical applications.

Alison's passion for responsible AI shines through in her work. She actively promotes ethical and transparent AI practices and believes in the transformative potential of responsible AI for industries and society. Through her engagements with industry professionals, policymakers, and the public, she advocates for the responsible development and deployment of AI technologies. She is currently a volunteer member of the Generative AI Public Working Group at the US Department of Commerce's National Institute of Standards and Technology (NIST).

Alison's academic journey includes Master of Science in Data Science studies, specializing in Artificial Intelligence, at Northwestern University, and research with the Stanford University Human-Computer Interaction Crowd Research Collective. She combines this academic knowledge with real-world experience and leverages it to educate and empower individuals and organizations in the field of data science.

Alison's multifaceted background, commitment to responsible AI, and expertise in data science make her a respected figure in the field. Through her role as a Developer Advocate at Neo4j and her podcast, she continues to drive innovation, education, and responsible practices in data science and AI.

Badges

  • Most Active Speaker 2024

Area of Expertise

  • Business & Management
  • Finance & Banking
  • Government, Social Sector & Education
  • Information & Communications Technology
  • Transports & Logistics

Topics

  • Artificial Intelligence
  • Machine Learning and Artificial Intelligence
  • Artificial Intelligence (AI) and Machine Learning
  • Developing Artificial Intelligence Technologies
  • Democratized Artificial Intelligence
  • Databases
  • Data Science
  • Data Analytics
  • Data Management
  • Data Engineering
  • Data Science & AI
  • Azure Data & AI
  • Data Security
  • Database
  • All things data
  • Responsible AI
  • Technical Practices for Detecting Bias in AI: Building Fair and Ethical Models
  • Governance
  • Governance, Risk, and Compliance
  • Digital Governance
  • Data / Data Ops / Data Governance
  • Provenance and supply chains
  • Diverse Backgrounds Into Tech
  • Women in Technology
  • Emerging Technologies
  • Technology
  • tech leadership
  • Tech Ethics
  • Artificial Intelligence and Machine Learning for Cybersecurity
  • tech speaker
  • Tech Startups
  • Technology Startups
  • Technical Leadership
  • Technological Innovation
  • Information Technology
  • cyber security
  • CISO
  • CDO
  • AI Risk
  • AI risk management
  • Artificial Intelligence
  • artificial intelligence risk
  • The Future of Artificial Intelligence: Trends and Transformations
  • Artificial Intelligence and machine learning
  • Artificial Intelligence
  • Artificial Intelligence and its impact on our IT ecosystems
  • Legal Artificial Intelligence (AI) Tool
  • Artificial Intelligence of Things
  • Artificial Intelligence & Machine Teaching
  • Artificial Life
  • Artificial Intelligence & Machine Learning
  • Machine Learning/Artificial Intelligence
  • Agentic AI
  • Agentic automation
  • Cloud Native Artificial Intelligence
  • Microsoft Power Virtual Agents
  • AI Agents
  • AI Bias
  • AI Ethics
  • AI Builder
  • AI Research
  • AI in Health
  • AI for Startups
  • AI / Copilot
  • AI & ML Solutions
  • AI for Social Good
  • AI & ML Architecture
  • AI & product management
  • AI & Machine Learning
  • AI and Cybersecurity
  • AI Healthcare Agents
  • Generative AI Use Cases
  • Agentic AI architecture
  • Microsoft (Azure) AI + Machine Learning
  • The Generative AI LLM Revolution (ChatGPT)
  • Big Data Machine Learning AI and Analytics
  • Azure OpenAI Service
  • Virtual Agents
  • Autonomous Agents
  • Copilot Agents
  • Multiagent Systems
  • Ethics
  • Ethical AI
  • Ethics in AI
  • Ethical Data
  • data ethics
  • Data Science Ethics
  • Data Warehousing
  • Power Virtual Agent
  • Government Innovation
  • Data Governance
  • Cybersecurity Governance and Risk Management
  • Risk

Beyond Vectors: Evolving GenAI through Transformative Tools and Methods

Embark on a thought-provoking exploration of GenAI's evolution with "Beyond Vectors: Evolving GenAI through Transformative Tools and Methods." Tailored for engineers seeking fresh perspectives, this session encourages practitioners to step beyond familiar Vector Database practices. It's not just a departure; it's a pragmatic leap forward into precision methodologies for data quality and crafting datasets essential for Retrieval-Augmented Generation (RAG) excellence. We'll navigate the complexities of adding non-semantic context through graph databases, shedding light on the nuanced limitations of distance metrics like Cosine Similarity. Join us for this insightful journey, pushing the boundaries of GenAI evolution with transformative tools and methods.

Key Themes:

Methodical Precision in Data Quality and Dataset Construction for RAG Excellence: Uncover an integrated methodology for refining, curating, and constructing datasets that form the bedrock of transformative GenAI applications. Specifically, focus on the six key aspects crucial for Retrieval-Augmented Generation (RAG) excellence.

Navigating Non-Semantic Context with Awareness: Explore the infusion of non-semantic context through graph databases while understanding the nuanced limitations of the Cosine Similarity distance metric. Recognize its constraints in certain contexts and the importance of informed selection in the quest for enhanced data richness.
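As a toy illustration of the limitation described above (not code from the session), the snippet below shows that cosine similarity compares direction only: two hypothetical embedding vectors with very different magnitudes score as essentially identical, and orthogonal vectors score zero even when a non-semantic relationship, such as an explicit graph edge, links them.

```python
import math

def cosine_similarity(a, b):
    """Angle-only similarity: vector magnitude is ignored entirely."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two hypothetical embeddings pointing in the same direction but with
# very different magnitudes: cosine similarity cannot tell them apart.
v1 = [1.0, 2.0, 3.0]
v2 = [10.0, 20.0, 30.0]
print(round(cosine_similarity(v1, v2), 6))  # 1.0

# Orthogonal vectors score zero even if they are strongly related in the
# source data: non-semantic context needs another representation.
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```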

The Logging Imperative: Recognize the strategic significance of logging in the GenAI landscape. From application health to profound business insights, discover how meticulous logging practices unlock valuable information and contribute to strategic decision-making.
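To make the logging point concrete, here is a minimal, hypothetical sketch of structured per-call logging for an LLM application; the field names and model name are invented, not a standard schema.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("genai")

def log_llm_call(prompt, response, model, latency_s):
    """Emit one structured JSON record per LLM call and return it.

    The schema here is illustrative: structured records make both
    application-health monitoring and business analytics queryable."""
    record = {
        "event": "llm_call",
        "model": model,
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        "latency_s": round(latency_s, 3),
        "ts": time.time(),
    }
    log.info(json.dumps(record))
    return record

rec = log_llm_call("What is RAG?", "Retrieval-Augmented Generation...", "example-model", 0.417)
```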

Key Takeaways:

  • Master a methodical approach to ensuring data quality and constructing datasets specifically tailored for Retrieval-Augmented Generation (RAG) excellence.
  • Navigate the complexities of adding non-semantic context, including an awareness of limitations in distance metrics like Cosine Similarity.
  • Understand the strategic significance of logging for application health and insightful business analytics.

Join us on this methodologically rich exploration, "Beyond Vectors," engineered to take your GenAI practices beyond current Vector Database norms and unlock a new frontier in GenAI evolution with transformative tools and methods!

Practical GraphRAG - Making LLMs smarter with Knowledge Graphs

We all know that LLMs hallucinate, and RAG can help by providing current, relevant information to the model for generative tasks. But can we do better than just vector retrievals? A knowledge graph can represent data (and reality) at high fidelity and can make this rich context available based on the user's questions.

But how do you turn your text data into graph data structures? This is where the language skills of LLMs can help: they extract entities and relationships from text, which you can then correlate with sources, cluster into communities, and navigate while answering questions.

In this talk we will dive into Microsoft Research's GraphRAG approach and run the indexing and search live with Neo4j and LangChain.
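As a rough sketch of the extraction-to-graph idea (the talk itself uses Neo4j and LangChain; here a plain Python dict stands in for the graph store, and the LLM extraction step is stubbed with hard-coded triples):

```python
# Triples an LLM extraction step might produce for a short text,
# hard-coded here as a stand-in for the actual model call.
extracted_triples = [
    ("Neo4j", "IS_A", "graph database"),
    ("GraphRAG", "USES", "Neo4j"),
    ("GraphRAG", "USES", "LLM"),
    ("LLM", "EXTRACTS", "entities"),
]

# A dict keyed by subject stands in for a real graph store such as Neo4j.
graph = {}  # entity -> list of (relation, target)
for subj, rel, obj in extracted_triples:
    graph.setdefault(subj, []).append((rel, obj))

def context_for(entity):
    """Retrieve the outgoing neighborhood of an entity to enrich a prompt."""
    return [(entity, rel, obj) for rel, obj in graph.get(entity, [])]

print(context_for("GraphRAG"))
# [('GraphRAG', 'USES', 'Neo4j'), ('GraphRAG', 'USES', 'LLM')]
```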

Identify Unknown Risk in Your Systems With Centrality Algorithms

How do you identify strengths and risk points in your complex systems? See how the Jedi and Rebel Alliance leverage graph data science for the forces of good.

Traditional risk management approaches often fail to capture the complexity of a system. The most impactful Imperial target may not be the most obvious target!

In this presentation, Alison and Jason will take the example of the Star Wars Rebel Alliance network and demo with a Jupyter Notebook how to explore, clean up, and analyze the Star Wars galaxy, with its planets and hyperdrive lanes. They'll show you how to use centrality algorithms to examine the most vulnerable Rebel planets, betweenness centrality to discover how to disrupt the supply chain, and pathfinding to navigate the optimal route.
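A minimal, self-contained sketch of the kind of analysis described (the session itself uses graph data science tooling; the planet network below is invented): degree centrality to surface hub planets, and breadth-first search for the fewest-jumps route.

```python
from collections import deque

# Hypothetical hyperdrive-lane network: planet -> connected planets.
lanes = {
    "Coruscant": ["Corellia", "Kuat"],
    "Corellia": ["Coruscant", "Hoth", "Kuat"],
    "Kuat": ["Coruscant", "Corellia", "Endor"],
    "Hoth": ["Corellia"],
    "Endor": ["Kuat"],
}

def degree_centrality(graph):
    """Planets with many lanes are hubs, and potential points of failure."""
    n = len(graph) - 1
    return {node: len(neigh) / n for node, neigh in graph.items()}

def shortest_path(graph, start, goal):
    """BFS pathfinding: the optimal (fewest-jumps) route between planets."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(degree_centrality(lanes))
print(shortest_path(lanes, "Hoth", "Endor"))  # ['Hoth', 'Corellia', 'Kuat', 'Endor']
```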

Pattern Rights - An Ethical Framework for Generative AI Training Data

As generative AI continues to push boundaries, creating novel content by learning from massive datasets, we are faced with complex issues around intellectual property, privacy, and the ethical use of data. Current systems of copyright, fair use, and data protection lack the scope to fully address the unique challenges posed by AI pattern recognition and generation.

This pivotal talk introduces the pioneering concept of "Pattern Rights" - a holistic ethical framework to inform the development and deployment of generative AI technologies. Pattern Rights serves as an umbrella construct, encompassing principles of copyright, fair use, training data transparency, privacy, data ownership, and accountability.

We will explore how Pattern Rights can ensure appropriate attribution and compensation when AI models learn from copyrighted works or personal data. It establishes guidelines around consent, anonymization, and ethical data sourcing practices.

As the AI industry is rapidly evolving, we urgently need governance to foster innovation while upholding rights and safeguarding against misuse. Pattern Rights provides a roadmap to navigate this uncharted territory responsibly and equitably.

AI Evaluation: Tracing LLM Decisions for Reliability and Business Impact

As enterprises rapidly adopt LLMs for decision-making, they face a critical challenge: How do we evaluate and control AI-driven outcomes? Traditional AI monitoring tools only catch failures after they happen, but businesses need a way to trace, validate, and align LLM decisions before they cause financial or compliance risks.

This talk introduces graph-based AI evaluation—a method for mapping LLM decision pathways using Neo4j and Retrieval-Augmented Generation (RAG) to track data influence, improve model reliability, and ensure alignment with business goals. We will cover:

  • Why LLM decision failures happen, and why enterprises struggle to detect them early.
  • How graph-based AI evaluation helps businesses visualize AI decision logic, detect biases, and prevent costly mistakes.
  • Real-world applications of Graph AI in LLM deployments, including data-driven decision tracing and compliance monitoring.

LLMs are transforming business processes, but AI evaluation remains an unsolved challenge. This talk equips technical and business leaders with a practical framework for tracing AI decision-making, improving trust, and reducing risk.
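To illustrate the decision-tracing idea in miniature (a plain Python list of edges stands in for a real graph database such as Neo4j; all identifiers are invented):

```python
# Record which sources influenced which answer so that a bad answer can
# be traced back to its inputs. Each trace entry is a directed edge.
trace = []  # (from_node, relation, to_node)

def record_retrieval(question_id, chunk_id, source):
    trace.append((source, "PROVIDED", chunk_id))
    trace.append((chunk_id, "RETRIEVED_FOR", question_id))

def record_answer(question_id, answer_id):
    trace.append((question_id, "ANSWERED_BY", answer_id))

def sources_behind(answer_id):
    """Walk the trace backwards from an answer to its source documents."""
    questions = {f for f, r, t in trace if r == "ANSWERED_BY" and t == answer_id}
    chunks = {f for f, r, t in trace if r == "RETRIEVED_FOR" and t in questions}
    return {f for f, r, t in trace if r == "PROVIDED" and t in chunks}

record_retrieval("q1", "chunk-17", "policy.pdf")
record_retrieval("q1", "chunk-42", "faq.md")
record_answer("q1", "a1")
print(sorted(sources_behind("a1")))  # ['faq.md', 'policy.pdf']
```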

The Hidden Patterns of Agentic AI—How Context Shapes Intelligence

AI agents don't think in isolation: they follow patterns of behavior based on the context they retrieve, the decisions they make, and the data they rely on. Yet most AI today is blind to its own patterns, reacting without understanding the broader structures influencing its actions.

This workshop will uncover the emergent patterns that shape agentic AI and show how we can design agents that leverage real-time context to become more adaptive, reliable, and effective. We'll explore:

  • Common behavioral patterns in AI agents: how they retrieve, process, and act on context.
  • Why agentic AI needs structured, evolving context to break free from rigid, static behavior.
  • How to analyze, refine, and optimize agentic patterns for better decision-making and adaptability.

Through hands-on exercises, participants will learn how to trace, visualize, and shape AI agent behavior using dynamic context retrieval, interaction modeling, and live data adaptation.

If we want AI agents to be truly intelligent, we need to understand the patterns they follow, the context they need, and the behaviors they evolve. This workshop will teach you how.
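A toy sketch of the retrieve-then-act loop discussed above: before each decision the agent pulls fresh context instead of relying on a static prompt. The memory store, retrieval, and policy functions are invented stand-ins for real components.

```python
# Invented memory store: what the agent currently knows about the world.
memory = {
    "weather": "Heavy rain expected this afternoon.",
    "calendar": "Outdoor team event at 3pm.",
}

def retrieve_context(goal):
    """Pull only the memory entries relevant to the current goal."""
    return [v for k, v in memory.items() if k in goal or "plan" in goal]

def decide(goal, context):
    """Toy policy: adapt the plan when retrieved context mentions rain."""
    if any("rain" in c.lower() for c in context):
        return "move the event indoors"
    return "proceed as planned"

context = retrieve_context("plan the afternoon")
print(decide("plan the afternoon", context))  # move the event indoors
```

The point of the sketch is the shape of the loop, not the policy: because context is retrieved at decision time, updating `memory` changes behavior without touching the agent's code.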

AI’s Diversity Debt: How Compounding Bias Threatens Innovation—and What We Can Do About It

The tech industry is quietly amassing “Diversity Debt”—a hidden liability that arises from neglecting inclusivity, diverse data practices, and ethical governance in AI development. Like technical debt, Diversity Debt grows over time, with small biases in datasets, synthetic data generation, and model design compounding into systemic flaws that are increasingly costly and complex to address. These unchecked biases erode trust, stifle innovation, and create products that fail to serve the diverse populations they’re meant to empower.

In this keynote, Alison Cossette unpacks the concept of Diversity Debt and its far-reaching consequences, from skewed AI outcomes to diminished market opportunities. Drawing on her pioneering work in data provenance and governance with Neo4j and the FORGE platform, Alison demonstrates how organizations can identify and address compounding bias across all stages of AI development. Using compelling examples—such as synthetic datasets that reinforce disparities and feedback loops that amplify exclusion—she highlights the urgency of tackling bias before it scales.

Attendees will gain actionable insights into reducing Diversity Debt through inclusive governance frameworks, ethical data practices, and proactive model evaluation. Whether you're a startup founder, data scientist, or industry leader, this talk will equip you to build AI systems that reflect the diversity of the world and unlock innovation without limits. Join us to discover why paying down Diversity Debt isn't just ethical; it's essential for creating AI that thrives in an increasingly complex, interconnected world.

The Black Box is a Lie: Why You Should Stop Blaming the Algorithm

It’s easy to call AI a black box, but that’s just an excuse for bad design. This session flips the script on opaque AI by exposing how human decisions—bad assumptions, shortcuts, and ignored data provenance—are the real culprits. Learn why transparency isn’t just an add-on but the foundation of ethical, accountable AI. This talk challenges participants to take back control from the “black box” myth and design systems that are clear, traceable, and human-centric.

Dynamic Data Intelligence: Enabling Proactive Governance and Risk Management in AI Systems

As artificial intelligence reshapes industries, the reliability, traceability, and risk associated with data take on unprecedented importance. Dynamic Data Intelligence (DDI) is an emerging competency of data governance that empowers organizations to understand, assess, and manage the origin, quality, and cascading impacts of their data across interconnected systems.

This session will explore how DDI enhances transparency and trust in AI workflows by dynamically identifying and prioritizing high-impact data. Through advanced modeling and analytical techniques, organizations can anticipate potential vulnerabilities, mitigate cascading risks, and optimize decision-making processes.

Key takeaways will include:

* Practical applications of DDI in pre-ingestion data governance to ensure the integrity of AI training data.
* Case studies showcasing how data risks were identified and managed across complex workflows.
* Metrics and methodologies for assessing data quality and understanding amplified risks.
* Real-world challenges in implementing dynamic data governance and how to overcome them effectively.

Join us to explore how Dynamic Data Intelligence transforms AI from a theoretical powerhouse into a practical, responsible, and sustainable solution, enabling organizations to achieve impactful AI innovations with confidence.
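As a simplified illustration of cascading risk (all dataset names and scores below are invented), one could model lineage as a DAG and let each dataset inherit the maximum risk of its upstream dependencies, so that a single risky source surfaces in every downstream artifact:

```python
from functools import lru_cache

# Toy data-lineage DAG: dataset -> upstream dependencies.
lineage = {
    "raw_events": [],
    "user_profiles": [],
    "features": ["raw_events", "user_profiles"],
    "training_set": ["features"],
}
# Invented standalone risk scores assigned during pre-ingestion review.
base_risk = {
    "raw_events": 0.7,      # e.g. an unvetted third-party feed
    "user_profiles": 0.2,
    "features": 0.1,
    "training_set": 0.0,
}

@lru_cache(maxsize=None)
def effective_risk(dataset):
    """A dataset is only as trustworthy as its riskiest upstream source."""
    upstream = [effective_risk(u) for u in lineage[dataset]]
    return max([base_risk[dataset]] + upstream)

print(effective_risk("training_set"))  # 0.7, inherited from raw_events
```

Taking the maximum is one deliberately pessimistic choice; a real governance model might instead weight, decay, or aggregate upstream risk differently.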

AI DevWorld 2025 Sessionize Event

February 2025 Santa Clara, California, United States

DeveloperWeek 2025 Sessionize Event

February 2025 Santa Clara, California, United States

AI Community Day Sessionize Event

December 2024 Utrecht, The Netherlands

MLOps + Generative AI World 2024 Sessionize Event

November 2024 Austin, Texas, United States

NODES 2024 Sessionize Event

November 2024

PyBay2024 Sessionize Event

September 2024 San Francisco, California, United States

AI Risk Summit + CISO Forum Sessionize Event

June 2024 Half Moon Bay, California, United States

NDC Oslo 2024 Sessionize Event

June 2024 Oslo, Norway

AI DevSummit 2024 Sessionize Event

May 2024 South San Francisco, California, United States

AI42 Conference Sessionize Event

March 2024 Oslo, Norway

Orlando Code Camp 2024 Sessionize Event

February 2024 Sanford, Florida, United States

CodeForward Sessionize Event

December 2023 Arlington, Virginia, United States

NODES 2023 Sessionize Event

October 2023
