Speaker

Andrew Madson

Dremio | Data Science, AI, and Analytics Evangelist

New City, New York, United States

Andrew Madson is a Data Analytics, Data Science, and AI Evangelist at Dremio, where he leverages his extensive expertise in data analytics, machine learning, and artificial intelligence to drive innovation and educate the wider community. With a strong academic background, including multiple master's degrees in data analytics and business management, Andrew deeply understands the technical intricacies involved in data-driven decision-making.

Andrew's career is marked by impactful roles at prominent organizations. He served as the Senior Director of Data Analytics & AI at Arizona State University, where he streamlined analytics processes and significantly enhanced team productivity. At LPL Financial, he played a pivotal role in creating a substantial strategic budget and led enterprise-wide initiatives. His tenure at MassMutual involved leading data projects across diverse teams and countries, managing a significant budget, and leading data privacy initiatives.

Andrew's technical ability is further exemplified by his experience at JP Morgan Chase, where he led data analytics, machine learning, and AI projects for Global Wealth Supervision. He spearheaded innovative AI solutions, including a real-time communication monitoring system and predictive models for advisor retention. His ability to automate processes and lead diverse teams of technical experts underscores his leadership and technical acumen.

In addition to his technical expertise, Andrew is a seasoned public speaker and educator. He has served as an adjunct instructor and faculty member at various universities, including Trine University, Grand Canyon University, Southern New Hampshire University, Western Governors University, Maryville University, and Indianapolis University. His ability to communicate complex technical concepts in an accessible manner makes him a sought-after speaker and thought leader in the data science community.

Andrew's current role as an Evangelist at Dremio allows him to combine his passion for data science with his exceptional communication skills. He actively engages with the wider community, sharing his knowledge and insights to empower organizations to harness the full potential of their data.

Area of Expertise

  • Business & Management
  • Finance & Banking
  • Government, Social Sector & Education
  • Information & Communications Technology
  • Law & Regulation

Topics

  • SQL
  • SQL Server
  • PostgreSQL
  • NoSQL
  • Database
  • Data Science
  • Data Privacy
  • Data Management
  • Data Science & AI
  • Big Data
  • Data Governance
  • Data Engineering
  • Data Visualization
  • Analytics
  • BI & Analytics
  • Predictive Analytics
  • Realtime Analytics
  • Database and Analytics
  • Data Science tools
  • Data Science (AI/ML)
  • Data Science and Analytics
  • Data Science and the Data Lake
  • Data Products
  • Data Lakes
  • Data Lakehouse
  • Apache Iceberg
  • Open Source Software
  • Big Data Machine Learning AI and Analytics
  • Artificial Intelligence
  • Generative AI
  • Agile Digital Transformation & Data Analytics
  • Data Virtualization
  • Data Integration
  • Microsoft SQL Server
  • MySQL
  • Business Strategy
  • Business Intelligence
  • Databases
  • Data Analysis
  • Data Mesh
  • Python

AI Considerations for Education

Artificial Intelligence (AI) is revolutionizing the educational landscape, offering unprecedented opportunities to enhance learning experiences, personalize education, and streamline administrative processes. This session, "AI Considerations for Education," explores the critical factors educators, administrators, and policymakers need to consider when integrating AI into educational settings.

Participants will delve into the ethical, technical, and practical aspects of AI implementation in education. Through an examination of cutting-edge AI technologies and their applications, attendees will gain insights into how AI can support diverse learning needs, improve student outcomes, and drive institutional efficiency. The session will also address the challenges and risks associated with AI in education, providing a balanced perspective on this transformative technology.

Key Takeaways:
1. AI in the Classroom: Understand how AI can be used to create adaptive learning environments, offer personalized instruction, and support teachers in delivering more effective education.
2. Ethical and Privacy Considerations: Explore the ethical implications of AI in education, including data privacy, bias mitigation, and the importance of maintaining human oversight in AI-driven decisions.
3. Technical Integration: Learn about the technical requirements and infrastructure needed to successfully implement AI solutions in educational institutions.
4. Improving Student Outcomes: Discover how AI can help identify at-risk students, tailor interventions, and support diverse learning needs to enhance overall student performance.
5. Administrative Efficiency: Examine how AI can optimize administrative tasks, from enrollment and scheduling to resource management, freeing up educators to focus more on teaching and student engagement.
6. Challenges and Solutions: Gain insights into the common challenges faced during AI implementation and explore practical solutions to overcome these barriers.

Join us for this forward-thinking session to explore the multifaceted considerations of AI in education and learn how to harness its potential responsibly and effectively.

AI Considerations for Enterprise Change Management

As enterprises navigate digital transformation, integrating Artificial Intelligence (AI) into change management processes is crucial for success. This session, "AI Considerations for Enterprise Change Management," explores the strategic role of AI in facilitating and accelerating organizational change, ensuring seamless transitions and enhanced business outcomes.

Key Takeaways:
- AI in Change Management: Understand the foundational concepts of integrating AI into change management, including its impact on the planning, execution, and monitoring of change initiatives.
- Process Automation: Explore the role of AI in automating routine tasks and processes, reducing manual effort, and increasing efficiency during organizational transitions.
- Employee Engagement and Training: Discover how AI can enhance employee engagement, provide personalized training, and support smoother adaptation to change.
- Challenges and Solutions: Address the common challenges of implementing AI in change management, including data privacy concerns, resistance to change, and ethical considerations.

AI Ready Data with Apache Iceberg: Unifying, Controlling, and Optimizing Your Data for Effective AI

Target Audience:
- Data engineers
- Data scientists
- Data architects
- Technical leaders (CTOs, CIOs)
- Anyone interested in improving data quality for AI/ML initiatives

Abstract
In today's data-driven world, the effectiveness of Artificial Intelligence (AI) and Machine Learning (ML) models depends heavily on the quality and organization of your underlying data. "AI Ready Data with Apache Iceberg" addresses this challenge, describing how Apache Iceberg helps you unify, govern, and optimize your data so that it is truly AI ready.

Key Takeaways:
The Data Lakehouse Advantage:
Explain how Apache Iceberg, combined with the lakehouse architecture, provides a unified platform for all types of data, breaking down silos and simplifying data management.

Git-Like Data Governance with Nessie:
Introduce Nessie and demonstrate how its Git-like functionality brings version control, branching, and collaboration to your data, enabling efficient experimentation and ensuring data reproducibility.
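
As a rough, illustrative sketch (not part of the session materials), the snippet below shows what this Git-like workflow can look like from PySpark using the Nessie Spark SQL extensions. The Nessie URI, warehouse path, catalog name, and branch and table names are assumptions, and the exact configuration keys and SQL syntax depend on your Spark, Iceberg, and Nessie versions.

```python
from pyspark.sql import SparkSession

# Requires the Iceberg Spark runtime and Nessie Spark extensions jars on the
# classpath; package names and config keys below may differ across versions.
spark = (
    SparkSession.builder.appName("nessie-branching-sketch")
    .config(
        "spark.sql.extensions",
        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,"
        "org.projectnessie.spark.extensions.NessieSparkSessionExtensions",
    )
    .config("spark.sql.catalog.nessie", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.nessie.catalog-impl", "org.apache.iceberg.nessie.NessieCatalog")
    .config("spark.sql.catalog.nessie.uri", "http://nessie:19120/api/v2")        # assumed Nessie endpoint
    .config("spark.sql.catalog.nessie.ref", "main")
    .config("spark.sql.catalog.nessie.warehouse", "s3://lakehouse/warehouse/")   # assumed warehouse path
    .getOrCreate()
)

# Work on an isolated branch, exactly like a Git feature branch, then merge it
# back into main once the experiment looks good.
spark.sql("CREATE BRANCH IF NOT EXISTS experiment_1 IN nessie FROM main")
spark.sql("USE REFERENCE experiment_1 IN nessie")
spark.sql(
    "UPDATE nessie.features.customer_segments "   # hypothetical table
    "SET segment = 'trial' WHERE segment IS NULL"
)
spark.sql("MERGE BRANCH experiment_1 INTO main IN nessie")
```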

Data Contracts for Quality Assurance:
Discuss the concept of data contracts and how they can be used to define and enforce quality standards, ensuring that data meets the necessary criteria for AI/ML workloads.
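
To make the idea concrete, here is a deliberately simple, hypothetical data contract check in Python with pandas; the column names, dtypes, and thresholds are made up for illustration, and production teams would typically reach for a dedicated validation framework instead.

```python
import pandas as pd

# Hypothetical contract for a feature table: required columns, dtypes, and
# simple row-level rules.
CONTRACT = {
    "customer_id": {"dtype": "int64", "nullable": False},
    "churn_score": {"dtype": "float64", "nullable": False, "min": 0.0, "max": 1.0},
    "signup_date": {"dtype": "datetime64[ns]", "nullable": True},
}

def validate(df: pd.DataFrame, contract: dict) -> list[str]:
    """Return a list of contract violations (an empty list means the batch passes)."""
    errors = []
    for col, rules in contract.items():
        if col not in df.columns:
            errors.append(f"missing column: {col}")
            continue
        if str(df[col].dtype) != rules["dtype"]:
            errors.append(f"{col}: expected {rules['dtype']}, got {df[col].dtype}")
        if not rules["nullable"] and df[col].isna().any():
            errors.append(f"{col}: null values are not allowed")
        if "min" in rules and (df[col].dropna() < rules["min"]).any():
            errors.append(f"{col}: values below {rules['min']}")
        if "max" in rules and (df[col].dropna() > rules["max"]).any():
            errors.append(f"{col}: values above {rules['max']}")
    return errors

df = pd.DataFrame(
    {
        "customer_id": [1, 2, 3],
        "churn_score": [0.12, 0.87, 0.45],
        "signup_date": pd.to_datetime(["2024-01-05", "2024-02-11", None]),
    }
)
print(validate(df, CONTRACT))  # [] -> this batch meets the contract
```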

Iceberg's Optimized Data Structures:
Highlight how Iceberg's optimized data layouts (e.g., columnar formats, partitioning, hidden partitioning) improve query performance and resource utilization, leading to faster AI/ML model training and inference.
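
As a hedged example of what these layouts look like in practice, the sketch below creates an Iceberg table with hidden partitioning from PySpark; the catalog, schema, and table names are hypothetical, and it assumes Spark is already launched with the Iceberg runtime and a catalog named `lake` configured.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-layout-sketch").getOrCreate()

# Hidden partitioning: Iceberg derives partition values from transforms on
# regular columns, so readers and writers never manage partition columns by hand.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.ml.training_events (
        event_id BIGINT,
        user_id  BIGINT,
        event_ts TIMESTAMP,
        label    DOUBLE
    )
    USING iceberg
    PARTITIONED BY (days(event_ts), bucket(16, user_id))
""")

# A plain filter on event_ts is enough for Iceberg to prune partitions,
# which is where much of the training/inference speed-up comes from.
recent = spark.sql("""
    SELECT user_id, label
    FROM lake.ml.training_events
    WHERE event_ts >= TIMESTAMP '2024-09-01 00:00:00'
""")
recent.show()
```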

Real-World Use Cases:
Share examples of how organizations are using Iceberg, Nessie, and data contracts to build robust data pipelines, enhance data quality, and achieve tangible results with their AI initiatives.

Uncover insights in your complex data with graph visualization

Finding hidden relationships is the key to unlocking insights. Traditional charts and graphs fall short when visualizing complex, interconnected data. This presentation will dive into the world of graph visualization, a powerful technique for unveiling hidden patterns, dependencies, and anomalies in your data.

We will explore:

Graph Fundamentals: An introduction to graph theory concepts, including nodes, edges, and different types of graphs, providing a foundation for understanding graph visualization.

Open-Source Graph Visualization Tools: A showcase of popular open-source libraries and tools like NetworkX, PyVis, and Gephi, demonstrating their capabilities for creating interactive and informative graph visualizations.

Use Cases: Real-world applications of graph visualization across various domains, such as social network analysis, fraud detection, recommendation systems, and knowledge graphs. We'll delve into specific examples and case studies to highlight the versatility and impact of this approach.

Best Practices and Design Principles: Guidelines for designing effective graph visualizations that are both aesthetically pleasing and informative. We'll discuss topics like layout algorithms, node and edge styling, interaction design, and scalability considerations.

(if time allows) Hands-On Demonstration: A live demonstration of building a graph visualization using an open-source tool, showcasing the step-by-step process and highlighting key features.
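
A minimal sketch of what such a demonstration might look like with NetworkX and PyVis is shown below; the toy fraud-style dataset and output file name are purely illustrative.

```python
import networkx as nx
from pyvis.network import Network

# Build a small relationship graph with NetworkX (toy data for illustration)
G = nx.Graph()
G.add_edges_from([
    ("alice", "acct_1"), ("bob", "acct_1"),       # shared account
    ("bob", "acct_2"),
    ("carol", "device_9"), ("dave", "device_9"),  # shared device
    ("dave", "acct_2"),
])

# Simple graph analytics: degree centrality surfaces the most connected nodes
top_nodes = sorted(nx.degree_centrality(G).items(), key=lambda kv: kv[1], reverse=True)[:3]
print("Most connected:", top_nodes)

# Render an interactive HTML visualization with PyVis
net = Network(height="600px", width="100%")
net.from_nx(G)
net.write_html("graph.html")  # some pyvis versions also support net.show("graph.html")
```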

The Who, What, and Why of Data Lake Table Formats

A comprehensive exploration of the intricacies of Data Lake Table Formats and their impact on business analytics.
Data lake table formats are a critical component of modern data analytics. They provide a way to organize and manage data in a data lake, and they offer several benefits for business analytics, including:
Scalability: Data lake table formats can scale to handle large amounts of data.
Performance: Data lake table formats can improve the performance of queries on large datasets.
Durability: Data lake table formats can ensure that data is durable and recoverable.
Auditability: Data lake table formats can help to ensure that data is auditable and compliant.
This presentation will explore the who, what, and why of data lake table formats. We will discuss the different data lake table formats, such as Apache Iceberg, Apache Hudi, and Delta Lake. We will also discuss the benefits of using data lake table formats for business analytics.
By the end of this presentation, you will better understand data lake table formats and how they can be used to improve business analytics.
Key Takeaways:
Data lake table formats are a critical component of modern data analytics.
They offer a number of benefits for business analytics, including scalability, performance, durability, and auditability.
There are a variety of data lake table formats available, including Apache Iceberg, Apache Hudi, and Delta Lake.
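
To illustrate the durability and auditability points above, here is a small, hedged PySpark sketch against an Apache Iceberg table; the catalog and table names are hypothetical, it assumes a Spark session already configured with an Iceberg catalog named `lake`, and the time-travel syntax shown assumes a recent Spark version.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-audit-sketch").getOrCreate()

# Every commit to an Iceberg table produces an immutable snapshot, which is
# what the durability and auditability benefits rest on. Inspect the commit
# history through Iceberg's metadata tables:
spark.sql("""
    SELECT snapshot_id, committed_at, operation
    FROM lake.sales.orders.snapshots
""").show()

# Time travel: query the table as it existed at an earlier point in time
spark.sql("""
    SELECT COUNT(*) AS orders_then
    FROM lake.sales.orders TIMESTAMP AS OF '2024-06-01 00:00:00'
""").show()
```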

Sub-second Power BI Dashboards Directly on ADLS Storage

Unleash the full potential of your data with our talk, "Sub-second Power BI Dashboards Directly on the Data in Your ADLS Storage Using Dremio." In this engaging discussion, we'll dive into the challenges that arise when using extracts and cubes to accelerate Power BI dashboards and how Dremio, with its data reflections, simplifies the process while preserving consistency.

Traditional methods of accelerating Power BI dashboards often involve the creation of extracts and cubes. While these techniques can enhance query performance, they introduce data preparation, maintenance, and synchronization complexities. The quest for faster dashboards often results in a trade-off between speed and consistency.

Enter Dremio, a game-changer in the world of data acceleration. In this talk, we'll explore how Dremio's approach to data reflections revolutionizes how you create dashboards directly on your data stored in Azure Data Lake Storage (ADLS). Key highlights include:

A comprehensive examination of the complexities and consistency issues associated with extract and cube-based approaches for accelerating Power BI dashboards.
How Dremio's data reflections eliminate the need for extracts and cubes by providing sub-second query performance directly on your data in ADLS.
Real-world examples showcasing how Dremio empowers data professionals to create high-performance dashboards while maintaining data consistency.
The benefits of a simplified and more agile approach to data acceleration, resulting in faster insights and reduced overhead.

Join us to discover how Dremio's data reflections can redefine your Power BI dashboard acceleration strategy. Say goodbye to the complexities of extracts and cubes, and embrace a more direct, efficient, and consistent way to access and visualize your data. Take advantage of this opportunity to learn how Dremio can supercharge your analytics workflow.

Building and Scaling Analytics Products

Organizations are increasingly recognizing the value of data-driven decision-making. Simply collecting and storing data isn't enough. To truly harness its potential, businesses need to transform their data into actionable insights and deliver those insights to their end-users through well-designed analytics products.

This presentation will explore the concept of "analytics as a product" and provide a practical framework for building, launching, and scaling analytics products that drive business value.

Key topics will include:

The Product Mindset: Understanding the principles of product management and how they apply to analytics.

Defining Your Target Audience: Identifying the needs and pain points of your end-users and designing solutions that address them.

Building Your Analytics Product: From MVP to full-fledged product, understanding the development lifecycle and key considerations for success.

Data Storytelling: Effectively communicating insights and recommendations to stakeholders through compelling narratives and visualizations.

Scaling Your Product: Strategies for expanding your user base, monetizing your product, and continuously improving its value proposition.

Whether you're a data analyst, product manager, or business leader, this presentation will equip you with the knowledge and tools to successfully build and scale analytics products that deliver real impact for your organization.

Key Takeaways:

Attendees will gain a deeper understanding of the "analytics as a product" mindset.

Learn a practical framework for building and scaling analytics products.

Discover strategies for effective data storytelling and communication.

Gain insights into the latest trends and best practices in analytics product management.

Accelerate Your Analytics with SQL Server + Data Lakehouse

Organizations are collecting enormous amounts of structured and unstructured data. This data holds valuable insights, but traditional data warehouses can struggle to handle the volume, variety, and velocity of modern data sets. This presentation will explore how combining SQL Server with a data lakehouse architecture can revolutionize your analytics capabilities.

We'll dive into the concept of a data lakehouse, which brings the best of data lakes and data warehouses together. You'll learn how to leverage SQL Server's robust relational capabilities alongside the scalability and flexibility of a data lake, all while maintaining data quality and governance. We'll discuss how to:

Ingest and store: Efficiently capture and store data from diverse sources, including databases, IoT devices, and cloud applications, in your data lakehouse.

Transform and model: Prepare your data for analysis using familiar SQL Server tools and techniques, ensuring data quality and consistency.

Query and analyze: Leverage the power of SQL Server's query engine, augmented by data lakehouse capabilities, for high-performance analytics and reporting.

Govern and secure: Implement comprehensive data governance and security measures to protect sensitive data and ensure compliance with regulatory requirements.
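
As a hedged illustration of the "query and analyze" step above, the snippet below runs a SQL Server query from Python over a hypothetical external table that exposes lake data; the server, credentials, and table name are placeholders, and it assumes the pyodbc driver stack and ODBC Driver 18 are installed.

```python
import pyodbc

# Placeholder connection details -- replace with your own server and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sqlserver.example.com;"
    "DATABASE=Analytics;"
    "UID=analyst;PWD=<password>;Encrypt=yes;"
)
cursor = conn.cursor()

# Aggregate over a hypothetical external table that points at lake storage
cursor.execute("""
    SELECT TOP (10) region, SUM(amount) AS total_sales
    FROM dbo.SalesLakeExternal
    GROUP BY region
    ORDER BY total_sales DESC;
""")
for row in cursor.fetchall():
    print(row.region, row.total_sales)

conn.close()
```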

Whether you're a data engineer, analyst, or business leader, this session will provide actionable insights and strategies to accelerate your analytics initiatives, reduce costs, and unlock the full potential of your data assets. Discover how SQL Server and data lakehouse can empower your organization to make faster, more informed decisions and drive business growth.

Data-Centric AI: Accelerate Success with Apache Iceberg Data Products

Data-Centric AI: Accelerating Success Through Modern Data Products

Target Audience
- Data Leaders & Architects
- Data Engineers & Platform Engineers
- ML/AI Engineers & Data Scientists
- Analytics Engineers & Practitioners

Abstract
While organizations rush to advance their AI models, research shows that "reducing the technological gap alone is not enough to ensure success in AI projects" [The Data Death Cycle, 2024]. The key to accelerating AI success lies not in model optimization, but in data excellence. This session reveals how a data-centric approach, powered by Apache Iceberg and modern data architectures, dramatically improves AI systems by ensuring complete, consistent, and curated datasets from the start.

Overview
AI and analytics initiatives demand high-quality data, yet traditional model-centric approaches often overlook this fundamental requirement. We'll explore how data products, implemented through a combination of data mesh and data fabric patterns, provide the systematic data excellence that AI requires. Learn how Apache Iceberg's lakehouse architecture eliminates costly ETL while enabling "git-like" version control through metadata catalogs like Polaris and Nessie, providing comprehensive write-audit-publish capabilities for data changes.
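
For readers unfamiliar with the write-audit-publish pattern mentioned above, here is a hedged sketch of what it can look like with Nessie branches from PySpark; the catalog, branch, and table names are assumptions, it presumes a SparkSession already wired to a Nessie-backed Iceberg catalog named `nessie`, and the exact SQL syntax varies with the Nessie extension version.

```python
# 1. Write: land new data on an isolated branch instead of main
spark.sql("CREATE BRANCH IF NOT EXISTS wap_ingest IN nessie FROM main")
spark.sql("USE REFERENCE wap_ingest IN nessie")
spark.sql(
    "INSERT INTO nessie.curated.customer_features "   # hypothetical tables
    "SELECT * FROM nessie.raw.customer_events"
)

# 2. Audit: run data-quality checks against the branch before anyone sees the data
row_count = spark.sql(
    "SELECT COUNT(*) AS n FROM nessie.curated.customer_features"
).first()["n"]
null_keys = spark.sql(
    "SELECT COUNT(*) AS n FROM nessie.curated.customer_features WHERE customer_id IS NULL"
).first()["n"]

# 3. Publish: merge into main only if the checks pass, otherwise discard the branch
if row_count > 0 and null_keys == 0:
    spark.sql("MERGE BRANCH wap_ingest INTO main IN nessie")
else:
    spark.sql("DROP BRANCH wap_ingest IN nessie")
```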

Through real-world examples and architectural patterns, we'll demonstrate how organizations can:
- Accelerate AI success through systematic data excellence rather than just model optimization
- Create trusted data products that ensure quality, completeness, and consistency
- Implement efficient data integration without expensive ETL processes
- Enable version control and auditability for data changes
- Balance centralized governance with domain agility

Key Takeaways

1. Data-Centric Advantage
- Why focusing on data quality accelerates AI success more effectively than model optimization
- How systematic data excellence reduces the 80% of time data scientists spend on data preparation
- The critical role of complete, consistent, and curated datasets in AI/ML success

2. Modern Data Products
- How data products enable systematic management of data quality
- Why combining data mesh and data fabric creates the ideal architecture for AI-ready data
- Patterns for implementing data products that ensure quality, governance, and reliability

3. Technical Foundation
- How Apache Iceberg enables efficient data integration without costly ETL
- The role of metadata catalogs in providing git-like version control for data
- Practical patterns for implementing write-audit-publish workflows for data changes

4. Implementation Path
- Steps for transitioning to a data-centric approach
- How to begin implementing data products in your organization
- Methods for measuring and demonstrating success

Whether you're struggling with AI initiatives or looking to accelerate existing programs, this session provides practical insights into building the data foundation that modern AI demands. You'll learn why data-centric approaches succeed where model-centric efforts fail, and how to implement the technical architecture that makes it possible.

Join us to discover how combining data-centric thinking with modern technologies like Apache Iceberg can transform your organization's ability to deliver successful AI initiatives. Leave with concrete steps for implementing data products that provide the complete, consistent, and curated data that AI systems require.

This is an updated version of my most popular conference talk, which was requested at 20+ conferences in 2024.

AI Summit Vancouver Sessionize Event

November 2024 Vancouver, Canada

2024 All Day DevOps Sessionize Event

October 2024

AI Community Conference - Boston 2024 Sessionize Event

September 2024 Cambridge, Massachusetts, United States

2024 Data.SQL.Saturday.SD (SQLSatSD) Sessionize Event

September 2024

Data Saturday Dallas 2024 Sessionize Event

September 2024

SQLSaturday Denver 2024 Sessionize Event

August 2024 Denver, Colorado, United States
