Speaker

Oscar Garcia

Microsoft MVP @ozkary

Oscar Garcia is a Principal Software Architect and VP of Product Development. He is a Microsoft MVP and certified solutions developer with many years of experience building software solutions. He specializes in building cloud solutions using technologies like GCP, AWS, Azure, ASP.NET, NodeJS, Angular, React, SharePoint, Microsoft 365, and Firebase, as well as BI projects for data visualization using tools like Tableau, Power BI, and Spark.

Autonomous AI Agent: A Primer’s Guide - June 2025

What’s the mystique around AI agents? Are they just chatbots with automation? What makes them different, and why does it matter?

This presentation breaks it down from the ground up. We’ll explore what truly sets AI agents apart—how they perceive, reason, and act with autonomy across industries ranging from healthcare to retail to logistics. You'll walk away with a clear understanding of what an agent is, how it works, and what it takes to build one.

Whether you’re a developer, strategist, or simply curious, this session is your entry point to one of the most transformative ideas in AI today.

Agenda:

What is an AI Agent?

Autonomy Advantage: How AI Agents Go Beyond Automation

The Agent’s Secret Power

Model Context Protocol (MCP): The Key to Tool Integration

How Does an Agent Talk MCP?

Benefits of MCP for AI Agents

Shape Agent Behavior Through Prompting
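The MCP agenda items above can be made concrete. MCP messages are JSON-RPC 2.0 envelopes; a minimal `tools/call` request, assuming a hypothetical `get_weather` tool exposed by an MCP server, might look like this sketch:

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request. MCP messages are JSON-RPC 2.0 envelopes."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# "get_weather" is a hypothetical tool; a real server advertises its tools
# in response to a tools/list request.
request = make_tool_call(1, "get_weather", {"city": "Miami"})
print(json.dumps(request, indent=2))
```

Because every tool speaks the same envelope, the agent needs no tool-specific client code; only the `name` and `arguments` change per call.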

Why Attend?

Understand the Future – Discover how AI agents are redefining autonomy and real-time decision-making across industries.

Build Smarter Systems – Learn how to design agents that perceive, reason, and act—without human micromanagement.

Get Hands-On Concepts – Explore the roles of LLMs, short-term memory, tool orchestration, and prompting in building agents.

Bring Ideas to Life – Whether you're exploring a product idea or reimagining business operations, you'll leave with practical frameworks to build real AI-powered tools today.
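The perceive-reason-act loop with short-term memory mentioned above can be sketched in a few lines. Here a hard-coded rule stands in for the LLM reasoning step, and names like `run_agent` are illustrative, not a real framework:

```python
def reason(observation, memory):
    # Stand-in policy: a real agent would send observation + memory to an LLM.
    if "error" in observation:
        return "open_ticket"
    return "log_and_continue"

def run_agent(observations):
    short_term_memory, actions = [], []
    for obs in observations:                          # perceive
        action = reason(obs, short_term_memory)       # reason
        actions.append(action)                        # act (record the decision)
        short_term_memory = (short_term_memory + [obs])[-5:]  # keep last 5 observations
    return actions

print(run_agent(["ok", "disk error detected", "ok"]))
# → ['log_and_continue', 'open_ticket', 'log_and_continue']
```

Swapping the rule for an LLM call, and the recorded action for a tool invocation, is what turns this toy loop into an autonomous agent.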

Please RSVP to secure your spot for this insightful session on autonomous AI agents. We’re excited to explore the future of intelligent systems—together.

SaaS Fundamentals: A Primer's Guide for Success - March 2025

Have you ever encountered a problem, big or small, and thought, 'Wouldn't it be great if there was software to solve this?' You envision a tool, a solution, something that would make life easier or a process more efficient. That spark of an idea, that 'what if' moment, is where innovation begins. But what if you could do more than just imagine it? What if you could take that spark and transform it into a real, thriving application? With the SaaS approach, that's not just a dream – it's an achievable reality. This presentation will show you the fundamental steps to take your software idea from a simple thought to a successful online service.

This presentation provides a foundational understanding of building a successful SaaS (Software as a Service) solution. Whether you have a brilliant software idea or simply want to learn about the SaaS development process, this session will guide you through the key concepts. We'll cover everything from initial planning and technical considerations to deployment, growth, and ongoing support, empowering you to turn your software vision into a reality.

Agenda:

Introduction to SaaS
Planning for SaaS Success
Technical Approach & MVP
Automation, Security, and Data
Rollout and Market Presence
Continuous Retention and Support
How Do I Get Started?

Why Attend?

Demystify SaaS Development: Gain a clear understanding of the core principles and steps involved in building a successful SaaS application.
Transform Ideas into Reality: Learn how to take your software concepts from initial ideation to a fully functional online service.
Navigate the SaaS Landscape: Discover essential strategies for planning, development, deployment, and ongoing support.
Empower Your Entrepreneurial Spirit: Equip yourself with the knowledge and tools to launch your own SaaS venture.
Please RSVP to secure your spot for this insightful session. Looking forward to embarking on this SaaS journey together!

SaaS Fundamentals: A Primer's Guide for Success - April 2025

Have you ever encountered a problem, big or small, and thought, 'Wouldn't it be great if there was software to solve this?' You envision a tool, a solution, something that would make life easier or a process more efficient. That spark of an idea, that 'what if' moment, is where innovation begins. But what if you could do more than just imagine it? What if you could take that spark and transform it into a real, thriving application? With the SaaS approach, that's not just a dream – it's an achievable reality. This presentation will show you the fundamental steps to take your software idea from a simple thought to a successful online service.

This presentation provides a foundational understanding of building a successful SaaS (Software as a Service) solution. Whether you have a brilliant software idea or simply want to learn about the SaaS development process, this session will guide you through the key concepts. We'll cover everything from initial planning and technical considerations to deployment, growth, and ongoing support, empowering you to turn your software vision into a reality.

Agenda:

Introduction to SaaS
Planning for SaaS Success
Technical Approach & MVP
Automation, Security, and Data
Rollout and Market Presence
Continuous Retention and Support
How Do I Get Started?

Why Attend?

Demystify SaaS Development: Gain a clear understanding of the core principles and steps involved in building a successful SaaS application.
Transform Ideas into Reality: Learn how to take your software concepts from initial ideation to a fully functional online service.
Navigate the SaaS Landscape: Discover essential strategies for planning, development, deployment, and ongoing support.
Empower Your Entrepreneurial Spirit: Equip yourself with the knowledge and tools to launch your own SaaS venture.
Please RSVP to secure your spot for this insightful session. Looking forward to embarking on this SaaS journey together!

A Hands-On Exploration into the Discovery Phase - Data Engineering Process Fundamentals

In this session, we will delve into the essential building blocks of data engineering, placing a spotlight on the discovery process. From framing the problem statement to navigating the intricacies of exploratory data analysis (EDA) using Python, VSCode, Jupyter Notebooks, and GitHub, you'll gain a solid understanding of the fundamental aspects that drive effective data engineering projects.

Data Engineering Process Fundamentals: Introduction to Data Lakes and Data Warehouses

Overview:

In this technical presentation, we will delve into the fundamental concepts of Data Engineering, focusing on two pivotal components of modern data architecture - Data Lakes and Data Warehouses. We will explore their roles, differences, and how they collectively empower organizations to harness the true potential of their data.

Architecting Insights: Analytical Data Modeling - Data Engineering Process Fundamentals

Building on our previous exploration of data pipelines and orchestration, we now delve into the pivotal phase of data modeling and analytics. In this continuation of our data engineering process series, we focus on architecting insights by designing and implementing data warehouses, constructing logical and physical models, and optimizing tables for efficient analysis.

Generative AI: Create Code from GitHub User Stories - Introduction to AI LLM Models

This presentation explores the potential of Generative AI, specifically Large Language Models (LLMs) and Prompt Engineering, for streamlining software development by generating code directly from user stories written in GitHub.

Coupling Data Flows: Data Pipelines and Orchestration - Data Engineering Process Fundamentals

Join us for this presentation as we transition from design and planning into the implementation of data pipelines and orchestration. Explore the intricacies of building and orchestrating data pipelines in this session, covering tools, coding with Python, and deployments with Docker.

Decoding Data: A Journey into the Discovery Phase - Data Engineering Process Fundamentals

Embark on a journey through the core principles of data engineering with our tech talk. In this session, we will delve into the essential building blocks of data engineering, placing a spotlight on the discovery process.

Building a cloud-based data pipeline from raw data to visualization

In this presentation, we take a look at a real use case where we need to create a cloud-based big data pipeline from unstructured raw data and build a data visualization solution to enable business decisions. We use these technologies: Python, Jupyter Notebook, SQL, GitHub, Docker, a data lake, and data orchestration and modeling for incremental data processing into the data warehouse, which serves the data to visualization tools like Looker or Power BI.

Data Engineering Fundamentals - Building a Cloud Based Data Pipeline

Let's build a data pipeline from CSV to a visualization dashboard using cloud technologies and fundamental principles of data engineering.

For this technical session, we look at a data engineering process and the related cloud technologies needed to build a data pipeline from CSV files to data visualization, covering big data scenarios based on real use cases.

Some of the technologies that we will be covering:

- Data Lakes
- Data Warehouse
- Data Analysis and Visualization
- Python
- Jupyter Notebook
- SQL

Introduction to AI Large Language Models (LLMs) - ChatGPT, Bard, Prompt Engineering

Introduction to AI Large Language Models (LLMs): Bard, ChatGPT, and prompt engineering

In this session, you will be introduced to the fascinating world of AI Large Language Models (LLMs) and explore a few groundbreaking models: Bard, ChatGPT, and Claude. Learn about their capabilities, applications, and potential impact on various industries.

Data Engineering Process Fundamentals: Unveiling the Power of Data Lakes and Data Warehouses

In this technical presentation, we will delve into the fundamental concepts of Data Engineering, focusing on two pivotal components of modern data architecture - Data Lakes and Data Warehouses. We will explore their roles, differences, and how they collectively empower organizations to harness the true potential of their data.

Unlocking Insights: Data Analysis and Visualization - Data Engineering Process Fundamentals

Building on our previous exploration of architecting a data warehouse, we now delve into unlocking the insights from our data with data analysis and visualization. In this continuation of our data engineering process series, we focus on visualizing insights. We learn about best practices for data analysis and visualization, then move into an implementation of a code-centric dashboard using Python, Pandas, and Plotly. We follow up by using a high-quality enterprise tool, such as Looker, to construct a low-code, cloud-hosted dashboard, giving us insight into the effort each method takes.

Agenda:

1. Introduction:

Recap the importance of data warehousing and data modeling, and transition to data analysis and visualization.
2. Data Analysis Foundations:

Data Profiling: Understand the structure and characteristics of your data.
Data Preprocessing: Clean and prepare data for analysis.
Statistical Analysis: Utilize statistical techniques to extract meaningful patterns.
Business Intelligence: Define key metrics and answer business questions.
Identifying Data Analysis Requirements: Explore filtering criteria, KPIs, data distribution, and time partitioning.
3. Mastering Data Visualization:

Common Chart Types: Explore a variety of charts and graphs for effective data visualization.
Designing Powerful Reports and Dashboards: Understand user-centered design principles for clarity, simplicity, consistency, filtering options, and mobile responsiveness.
Layout Configuration and UI Components: Learn about dashboard design techniques for impactful presentations.
4. Implementation Showcase:

Code-Centric Dashboard: Build a data dashboard using Python, Pandas, and Plotly (demonstrates code-centric approach).
Low-Code Cloud-Hosted Dashboard: Explore a high-quality enterprise tool like Looker to construct a dashboard (demonstrates low-code efficiency).
Effort Comparison: Analyze the time and effort required for each development approach.
5. Conclusion:

Recap key takeaways and the importance of data analysis and visualization for data-driven decision-making.
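As a minimal sketch of the profiling, preprocessing, and key-metric steps from the agenda, assuming a hypothetical sales dataset (the Plotly rendering step is omitted; the `kpi` series is exactly what a bar chart would consume):

```python
import pandas as pd

# Hypothetical sales data standing in for a warehouse query result.
df = pd.DataFrame({
    "region": ["East", "West", "East", "West", "East"],
    "revenue": [120.0, 95.0, None, 110.0, 130.0],
})

# Data profiling: structure and characteristics.
print(df.dtypes)
print(df.describe())

# Preprocessing: fill the missing revenue with the column mean.
df["revenue"] = df["revenue"].fillna(df["revenue"].mean())

# Business intelligence: a key metric per region, ready for a chart.
kpi = df.groupby("region")["revenue"].sum()
print(kpi)
```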

Why Join This Session?

Learn best practices for data analysis and visualization to unlock hidden insights in your data.
Gain hands-on experience through code-centric and low-code dashboard implementations using popular tools.
Understand the effort involved in different dashboard development approaches.
Discover how to create user-centered, impactful visualizations for data-driven decision-making.
This session empowers data engineers and analysts with the skills and tools to transform data into actionable insights that drive business value.

May the Tech Force Be With You: Unlock Your Journey in Technology

Buckle up for an in-depth exploration of technical careers. We'll launch with a look at next steps in tech and the diverse opportunities (What's Next?). Then, we'll dive into the exciting specializations that power the tech landscape (Explore Your Passion).

Next, we'll unveil the fundamental languages that build the tech world (Building Blocks of Tech). Don't worry if coding isn't your sole focus! We'll also venture beyond coding to discover a spectrum of roles within the tech industry (Beyond Coding).

Feeling overwhelmed by coding options? We'll demystify the debate between code-centric and low-code/no-code development (Code-Centric vs. Low-Code/No-Code Development). Finally, we'll chart a course for a bright future, exploring the impact of emerging technologies like AI and cloud computing (The Future is Bright).

Agenda:

- What's Next?:

Understanding the Technical Landscape.
Continuous Learning.
Exploring Industry Trends and Job Market.
- Explore Your Passion: Diverse Areas of Specialization:

Showcase different areas of CS specialization (e.g., web development, data science, artificial intelligence, cybersecurity).
- Building Blocks of Tech: Programming Languages:

Showcase and explain some popular programming languages used in different areas.
- Beyond Coding: Programming vs. Non-Programming Roles:

Debunk the myth that all CS careers involve coding.
Introduce non-programming roles in tech.
- Code-Centric vs. Low-Code/No-Code Development:

Explain the concept of code-centric and low-code/no-code development approaches.
Discuss the advantages and disadvantages of each approach.
- The Future is Bright

Discuss emerging technologies like AI, cloud computing, and automation, and their impact on the future of CS careers.
Emphasize the importance of continuous learning and adaptability in this ever-changing landscape.
Why Attend?

- In-demand skills: Discover the technical and soft skills sought after by employers in today's tech industry.
- Matching your passion with a career: Explore diverse areas of specialization and identify the one that aligns with your interests and strengths.
- Career paths beyond coding: Uncover a range of opportunities in tech, whether you're a coding whiz or have a different area of expertise.
- Future-proofing your career: Gain knowledge of emerging technologies and how they'll shape the future of computer science.

By attending, you'll leave equipped with the knowledge and confidence to make informed decisions about your future in the ever-evolving world of technology.

Please RSVP to secure your spot for this enriching session. Looking forward to exploring the future of data engineering together! We believe in fostering a welcoming and inclusive environment where everyone's unique perspectives are valued and contribute to our collective success.

Building Real-Time Data Pipelines: A Practical Guide - Data Engineering Process Fundamentals

Description:
This session builds upon your existing batch data processing knowledge! We'll delve into the world of data streaming, equipping you with the skills to seamlessly integrate a real-time pipeline into your data lake. Discover how to leverage Apache Kafka and Apache Spark to capture and process information as it's generated, unlocking the power of continuous data flow. Learn how this real-time data seamlessly integrates with your existing data lake, ultimately feeding into your data warehouse for even deeper analysis. Gain valuable insights from a combined batch and real-time approach, empowering you to make faster and more informed decisions.

This is a presentation from the series Data Engineering Process Fundamentals, with a supported GitHub Repo and a book.

Agenda:

1. What is Data Streaming?

- Understanding the concept of continuous data flow.

- Real-time vs. batch processing.

- Benefits and use cases of data streaming.

2. Data Streaming Channels

- APIs (Application Programming Interfaces)

- Events (system-generated signals)

- Webhooks (HTTP callbacks triggered by events)

3. Data Streaming Components

- Message Broker (Apache Kafka)

- Producers and consumers

- Topics for data categorization

- Stream Processing Engine (Apache Spark Structured Streaming)

4. Solution Design and Architecture

- Real-time data source integration

- Leveraging Kafka for reliable message delivery

- Spark Structured Streaming for real-time processing

- Writing processed data to the data lake

5. Q&A Session

- Get your questions answered by the presenters.
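The broker concepts from the agenda (topics, producers, consumers) can be illustrated with a toy in-memory stand-in. This is not Kafka, just a sketch of the message-flow pattern a real broker provides durably and at scale:

```python
from collections import defaultdict, deque

class ToyBroker:
    """In-memory stand-in for a message broker: topics hold ordered messages,
    producers append, consumers read in arrival order (like one Kafka partition)."""
    def __init__(self):
        self.topics = defaultdict(deque)

    def produce(self, topic, message):
        self.topics[topic].append(message)

    def consume(self, topic):
        return self.topics[topic].popleft() if self.topics[topic] else None

broker = ToyBroker()
broker.produce("sales-events", {"order_id": 1, "amount": 42.5})
broker.produce("sales-events", {"order_id": 2, "amount": 17.0})

# A stream processor would run continuously; here we drain the topic once.
total = 0.0
while (event := broker.consume("sales-events")) is not None:
    total += event["amount"]
print(total)  # 59.5
```

In the session, Apache Kafka plays the broker role and Spark Structured Streaming plays the draining loop, writing results to the data lake instead of a local variable.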

Why Attend:
- Stay Ahead of the Curve: Gain a comprehensive understanding of data streaming, a crucial aspect of modern data engineering.

- Unlock Real-Time Insights: Learn how to leverage data streaming for immediate processing and analysis, enabling faster decision-making.

- Master Kafka and Spark: Explore the power of Apache Kafka as a message broker and Apache Spark Structured Streaming for real-time data processing.

- Build a Robust Data Lake: Discover how to integrate real-time data into your data lake for a unified data repository.

- Ask the Experts: Get your questions answered by data engineering professionals during the Q&A session.

Please RSVP to secure your spot for this session. We believe in fostering a welcoming and inclusive environment where everyone's unique perspectives are valued and contribute to our collective success.

Medallion Architecture: A Blueprint for Data Insights and Governance - Data Engineering Process

Build upon your existing data engineering expertise and discover how Medallion Architecture can transform your data strategy. This session provides a hands-on approach to implementing Medallion principles, empowering you to create a robust, scalable, and governed data platform. Learn how to optimize data pipelines, enhance data quality, and unlock valuable insights through a structured, layered approach.

We'll explore how to align data engineering processes with Medallion Architecture, identifying opportunities for optimization and improvement. By understanding the core principles and practical implementation steps, you'll be equipped to leverage Medallion Architecture to drive business success.

Live Dashboards: Boosting App Performance with Real-Time Integration

Dive into the future of web applications. We're moving beyond traditional API polling and embracing real-time integration. Imagine your client app maintaining a persistent connection with the server, enabling bidirectional communication and live data streaming. We'll also tackle scalability challenges and integrate Redis as our in-memory data solution.

Smart Charts: Using AI to Enhance Data Understanding

This presentation explores how Generative AI, particularly Large Language Models (LLMs), can empower engineers with deeper data understanding. We'll delve into creating complex charts using Python and demonstrate how LLMs can analyze these visualizations, identify trends, and suggest actionable insights. Learn how to effectively utilize LLMs through prompt engineering with natural language and discover how this technology can save you valuable time and effort.

Agenda:

1. Introduction to LLMs and their Role in Data Analysis and Training

- What are LLMs, and how do they work?

- LLMs in the context of data analysis and visualization.

2. Prompt Engineering - Guiding the LLM

- Crafting effective prompts for chart analysis.

- Providing context within the prompt (chart type, data).

3. Tokens - The Building Blocks

- Understanding the concept of tokens in LLMs.

- How token limits impact prompt design and model performance.

4. Let AI Help with Data Insights - Real Use Case

- Creating complex charts using Python libraries.

- Write Prompts for Chart Analysis

- Utilizing an LLM to analyze the generated charts.

- Demonstrating how LLMs can identify trends, anomalies, and potential areas for improvement.

5. Live Demo - Create complex charts using Python and ask AI to help with the analysis

- Live coding demonstration of creating a complex chart and using an LLM to analyze it.
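A minimal sketch of the prompting and token ideas above, assuming a hypothetical revenue chart. The four-characters-per-token heuristic is only a rough approximation, and a real setup would send the prompt to an LLM API:

```python
def build_chart_prompt(chart_type, metric, data_summary):
    """Assemble an analysis prompt with explicit context (chart type, data)."""
    return (
        f"You are a data analyst. The chart is a {chart_type} of {metric}.\n"
        f"Data summary: {data_summary}\n"
        "Identify trends, anomalies, and one actionable recommendation."
    )

def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English text.
    return len(text) // 4

prompt = build_chart_prompt(
    "line chart", "monthly revenue",
    "Jan-Jun values: 100, 105, 98, 140, 138, 145 (thousands USD)",
)
print(prompt)
print("approx tokens:", estimate_tokens(prompt))
```

Passing the chart type and a data summary inside the prompt is the "providing context" step from the agenda; the token estimate is what you would check against the model's context limit before sending.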

Discovering Machine Learning: A Primer Guide

Machine Learning can seem like a complex and mysterious field. This presentation aims to discover the core concepts of Machine Learning, providing a primer guide to key ideas like supervised and unsupervised learning, along with practical examples to illustrate their real-world applications. We'll also explore a GitHub repository with code examples to help you further your understanding and experimentation.


Agenda:

1. What is Machine Learning?

Definition and core concepts

2. Why is Machine Learning Important?

Key applications and benefits

3. Types of Machine Learning

Supervised Learning
Examples: Classification & Regression
Unsupervised Learning
Examples: Clustering & Dimensionality Reduction

4. Problem Types

Regression: Predicting continuous values
Classification: Predicting categorical outcomes

5. Model Development Process

Understand the Problem
Exploratory Data Analysis (EDA)
Data Preprocessing
Feature Engineering
Data Splitting
Model Selection
Training & Evaluation
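As a concrete taste of supervised regression from the agenda, here is a least-squares line fit on toy data; pure Python and purely illustrative, where a real project would use a library like scikit-learn:

```python
# Supervised learning in miniature: fit y ≈ slope*x + intercept on labeled examples.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]  # roughly y = 2x, with some noise

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
slope = num / den
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))          # fitted parameters
print("prediction for x=5:", round(slope * 5 + intercept, 2))
```

Training is the closed-form fit on the labeled pairs; evaluation would compare predictions against a held-out split, which is exactly why the data-splitting step appears in the process above.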

Autonomous AI Agent: A Primer’s Guide - July 2025

What’s the mystique around AI agents? Are they just chatbots with automation? What makes them different, and why does it matter?

This presentation breaks it down from the ground up. We’ll explore what truly sets AI agents apart—how they perceive, reason, and act with autonomy across industries ranging from healthcare to retail to logistics. You'll walk away with a clear understanding of what an agent is, how it works, and what it takes to build one.

Whether you’re a developer, strategist, or simply curious, this session is your entry point to one of the most transformative ideas in AI today.

Agenda:

- Autonomous AI Agent: A Primer’s Guide
- Autonomy Advantage: How AI Agents Go Beyond Automation
- The Agent’s Secret Power
- Model Context Protocol (MCP): The Key to Tool Integration
- How Does an Agent Talk MCP?
- Benefits of MCP for AI Agents
- Shape Agent Behavior Through Prompting

Why Attend?

- Understand the Future – Discover how AI agents are redefining autonomy and real-time decision-making across industries.

- Build Smarter Systems – Learn how to design agents that perceive, reason, and act—without human micromanagement.

- Get Hands-On Concepts – Explore the roles of LLMs, short-term memory, tool orchestration, and prompting in building agents.

- Bring Ideas to Life – Whether you're exploring a product idea or reimagining business operations, you'll leave with practical frameworks to build real AI-powered tools today.

Please RSVP to secure your spot for this insightful session on autonomous AI agents. We’re excited to explore the future of intelligent systems—together.

SaaS Fundamentals: A Primer's Guide for Success - June 2025

Have you ever encountered a problem, big or small, and thought, 'Wouldn't it be great if there was software to solve this?' You envision a tool, a solution, something that would make life easier or a process more efficient. That spark of an idea, that 'what if' moment, is where innovation begins. But what if you could do more than just imagine it? What if you could take that spark and transform it into a real, thriving application? With the SaaS approach, that's not just a dream – it's an achievable reality. This presentation will show you the fundamental steps to take your software idea from a simple thought to a successful online service.

This presentation provides a foundational understanding of building a successful SaaS (Software as a Service) solution. Whether you have a brilliant software idea or simply want to learn about the SaaS development process, this session will guide you through the key concepts. We'll cover everything from initial planning and technical considerations to deployment, growth, and ongoing support, empowering you to turn your software vision into a reality.

Agenda:

- Introduction to SaaS
- Planning for SaaS Success
- Technical Approach & MVP
- Automation, Security, and Data
- Rollout and Market Presence
- Continuous Retention and Support
- How Do I Get Started?

Why Attend?

Demystify SaaS Development: Gain a clear understanding of the core principles and steps involved in building a successful SaaS application.

Transform Ideas into Reality: Learn how to take your software concepts from initial ideation to a fully functional online service.

Navigate the SaaS Landscape: Discover essential strategies for planning, development, deployment, and ongoing support.

Empower Your Entrepreneurial Spirit: Equip yourself with the knowledge and tools to launch your own SaaS venture.

Please RSVP to secure your spot for this insightful session. Looking forward to embarking on this SaaS journey together!

From Raw Data to Roadmap: The Discovery Phase in Data Engineering

In this session, we will delve into the essential building blocks of data engineering, placing a spotlight on the discovery process. From framing the problem statement to navigating the intricacies of exploratory data analysis (EDA), data modeling using Python, VS Code, Jupyter Notebooks, SQL, and GitHub, you'll gain a solid understanding of the fundamental aspects that drive effective data engineering projects.

#DevFest Series

1. Introduction:

The "Why": We'll discuss why understanding your data upfront is crucial for success.

The Problem: We'll introduce a real-world problem that will guide our exploration.

2. Data Loading and Preparation:

Loading: We'll demonstrate how to efficiently load data from an online source directly into our workspace.

Structuring: We'll prepare the loaded data for analysis, making it easy to work with.

3. Exploratory Data Analysis (EDA):

First Look: We'll learn how to quickly generate and interpret summary statistics for our data.

The Story: We'll use these statistics to understand the data's characteristics and identify any red flags or anomalies.

4. Data Cleaning and Modeling:

Cleaning: We'll identify and handle common data issues like missing values and inconsistencies.

Modeling: We'll organize our data into separate tables for dimensions (descriptive attributes) and facts (measurable values).

5. Visualization and Real-World Application:

Bringing it to Life: We'll create charts to visualize the data and find patterns.

Solving the Problem: We'll apply the insights gained to address our original problem and discuss practical solutions.
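The cleaning and dimension/fact modeling steps above can be sketched with Pandas, using hypothetical transit records in place of the session's dataset:

```python
import pandas as pd

# Hypothetical raw records, standing in for data loaded from an online CSV.
raw = pd.DataFrame({
    "station": ["Central", "Central", "North"],
    "borough": ["Manhattan", "Manhattan", "Bronx"],
    "riders": [5400, None, 2100],
})

# Cleaning: drop rows missing the measurable value.
clean = raw.dropna(subset=["riders"])

# Modeling: a dimension table (descriptive attributes) and a fact table (measures).
dim_station = clean[["station", "borough"]].drop_duplicates().reset_index(drop=True)
fact_riders = clean[["station", "riders"]]

print(dim_station)
print(clean["riders"].describe())  # the quick summary statistics from the EDA step
```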

Key Takeaways:

- Mastery of the foundational aspects of data engineering.

- Hands-on experience with EDA techniques, emphasizing the discovery phase.

- Appreciation for the value of a code-centric approach in the data engineering discovery process.

Upcoming Talks:

Join us for subsequent sessions in our Data Engineering Process Fundamentals series, where we will delve deeper into specific facets of data engineering, exploring topics such as data modeling, pipelines, and best practices in data governance.

This presentation is based on the book, "Data Engineering Process Fundamentals," which provides a more comprehensive guide to the topics we'll cover. You can find all the sample code and datasets used in this presentation on our popular GitHub repository.

From Blueprint to Build: The Design and Planning Phase in Data Engineering

Agenda:

In this session, we embark on the next chapter of our data journey, delving into the critical Design and Planning Phase. As we transition from discovery to design, we'll unravel the intricacies of:

#DevFest Series

System Design and Architecture:

- Understanding the foundational principles that shape a robust and scalable data system.

Data Pipeline and Orchestration:

- Uncovering the essentials of designing an efficient data pipeline and orchestrating seamless data flows.

Source Control and Deployment:

- Navigating the best practices for source control, versioning, and deployment strategies.

CI/CD in Data Engineering:

- Implementing Continuous Integration and Continuous Deployment (CI/CD) practices for agility and reliability.

Docker Container and Docker Hub:

- Harnessing the power of Docker containers and Docker Hub for containerized deployments.

Cloud Infrastructure with IaC:

- Exploring technologies for building out cloud infrastructure using Infrastructure as Code (IaC), ensuring efficiency and consistency.

Why Join:

- Gain insights into designing scalable and efficient data systems.

- Learn best practices for cloud infrastructure and IaC.

- Discover the importance of data pipeline orchestration and source control.

- Explore the world of CI/CD in the context of data engineering.

- Unlock the potential of Docker containers for your data workflows.

Get ready to elevate your data engineering expertise and empower your projects with robust design and planning strategies.

Please RSVP to secure your spot for this enriching session. Looking forward to exploring the future of data engineering together!

Upcoming Talks:

Join us for subsequent sessions in our Data Engineering Process Fundamentals series, where we will delve deeper into specific facets of data engineering, exploring topics such as data modeling, pipelines, and best practices in data governance.

This presentation is based on the book, Data Engineering Process Fundamentals, which provides a more comprehensive guide to the topics we'll cover. You can find all the sample code and datasets used in this presentation on our popular GitHub repository Introduction to Data Engineering Process Fundamentals.

From Raw Data to Analytics: The Modern Data Layer Architecture

This presentation is part of the Data Engineering Process Fundamentals series, focusing on the essential architectural components—the Data Lake and the Data Warehouse—and defining their respective roles in a modern analytics ecosystem.



Agenda:

1. Introduction to Data Engineering:

- Brief overview of the data engineering landscape and its critical role in modern data-driven organizations.

- Operational Data

2. Understanding Data Lakes:

- Explanation of what a data lake is and its purpose in storing vast amounts of raw and unstructured data.

3. Exploring Data Warehouses:

- Definition of data warehouses and their role in storing structured, processed, and business-ready data.

4. Comparing Data Lakes and Data Warehouses:

- Comparative analysis of data lakes and data warehouses, highlighting their strengths and weaknesses.

- Discussing when to use each based on specific use cases and business needs.

5. Integration and Data Pipelines:

- Insight into the seamless integration of data lakes and data warehouses within a data engineering pipeline.

- Code walkthrough showcasing data movement and transformation between these two crucial components.

6. Real-world Use Cases:

- Presentation of real-world use cases where effective use of data lakes and data warehouses led to actionable insights and business success.

- Hands-on demonstration using Python, Jupyter Notebook and SQL to solidify the concepts discussed, providing attendees with practical insights and skills.

7. Q&A and Hands-on Session:

- An interactive Q&A session to address any queries.

Conclusion:

This session aims to equip attendees with a strong foundation in data engineering, focusing on the pivotal role of data lakes and data warehouses. By the end of this presentation, participants will grasp how to effectively utilize these tools, enabling them to design efficient data solutions and drive informed business decisions.

This presentation will be accompanied by live code demonstrations and interactive discussions, ensuring attendees gain practical knowledge and valuable insights into the dynamic world of data engineering.
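To make the lake-to-warehouse flow from the agenda concrete, here is a minimal, self-contained sketch in Python. The file contents, column names, and schema are illustrative stand-ins (the session's actual demo uses its own datasets); SQLite plays the role of the warehouse and an inline CSV string plays the role of a raw file landed in the lake.

```python
# Minimal sketch: moving data from a "lake" (raw CSV) into a "warehouse"
# (SQLite table). Names and schema are illustrative, not from the talk.
import csv
import io
import sqlite3

# Raw landing data as it might sit in a data lake: messy whitespace,
# a missing value, imperial units.
RAW_CSV = """station,temp_f,reading_date
A1, 71.6 ,2024-01-05
A1,68.0,2024-01-06
B2,,2024-01-05
"""

def load_lake_to_warehouse(raw_text: str) -> sqlite3.Connection:
    """Clean raw lake records and load them into a warehouse fact table."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE fact_readings (station TEXT, temp_c REAL, reading_date TEXT)"
    )
    for row in csv.DictReader(io.StringIO(raw_text)):
        if not row["temp_f"].strip():        # drop incomplete raw records
            continue
        temp_c = (float(row["temp_f"]) - 32) * 5 / 9   # standardize units
        conn.execute(
            "INSERT INTO fact_readings VALUES (?, ?, ?)",
            (row["station"].strip(), round(temp_c, 2), row["reading_date"]),
        )
    conn.commit()
    return conn

conn = load_lake_to_warehouse(RAW_CSV)
rows = conn.execute(
    "SELECT station, temp_c FROM fact_readings ORDER BY reading_date"
).fetchall()
print(rows)
```

The pattern is the one the session walks through at scale: the lake keeps the raw file untouched, while the transformation step filters, cleans, and types the data before it reaches the structured, query-ready warehouse layer.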

From Raw Data to Governance: Refining Data with the Medallion Architecture - Nov 2025

💡 Overview: Architecting Data Trust

Build upon your existing data engineering expertise and discover how Medallion Architecture can transform your data strategy. This session provides a hands-on approach to implementing Medallion principles, empowering you to create a robust, scalable, and governed data platform.

We'll explore how to align data engineering processes with Medallion Architecture, identifying opportunities for optimization and improvement. By understanding the core principles and practical implementation steps, you'll learn how to optimize data pipelines, enhance data quality, and unlock valuable insights through a structured, layered approach to drive business success.


#DevFest Series

🗓️ Presentation Agenda

1. Introduction to Medallion Architecture

* Defining Medallion Architecture (Bronze, Silver, Gold).

* Understanding the Core Principles (Immutability, Quality, Structure).

* Benefits of Medallion Architecture (Trust, Performance, Governance).

---

2. The Bronze Layer: The Landing Zone

* Understanding the purpose of the Raw/Bronze Zone.

* Best practices for data ingestion and immutable storage.

---

3. The Silver Layer: Integration & Cleansing

* Data transformation and cleansing (Silver Zone).

* Creating a robust foundation for unified analysis.

---

4. The Gold Layer: Curated Insights

* Data optimization and summarization (Gold Zone).

* Preparing data for consumption and enabling self-service analytics.

* Curated data for insights and action.

---

5. Empowering Insights & Governance

* Driving data-driven decision-making and accelerated insights.

* Importance of data governance in Medallion Architecture.

* Implementing data ownership and stewardship.

* Ensuring data quality and security.

---

✨ Why Attend

Gain a deep understanding of Medallion Architecture and its application in modern data engineering. You will learn how to:

* Optimize data pipelines and significantly improve data quality.

* Unlock valuable insights through a structured architectural approach.

* Discover practical steps to implement Medallion principles in your organization.
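As a quick illustration of the Bronze, Silver, and Gold layering covered in the agenda, here is a deliberately simplified sketch in plain Python. The records and rules are invented for illustration; a production implementation would typically run as Spark or SQL transformations over lake storage, not in-memory lists.

```python
# Illustrative Medallion layering: the data and rules are made up,
# and each layer is just a Python list standing in for a lake table.

# Bronze: raw records land as-is and stay immutable — duplicates,
# bad values, and string-typed fields included.
bronze = [
    {"id": 1, "region": "us", "amount": "100.0"},
    {"id": 1, "region": "us", "amount": "100.0"},   # duplicate ingest
    {"id": 2, "region": "eu", "amount": None},      # invalid record
    {"id": 3, "region": "eu", "amount": "250.5"},
]

# Silver: cleansed and conformed — dedupe by id, drop invalid rows,
# cast amounts to proper numeric types.
seen = set()
silver = []
for rec in bronze:
    if rec["amount"] is None or rec["id"] in seen:
        continue
    seen.add(rec["id"])
    silver.append({**rec, "amount": float(rec["amount"])})

# Gold: curated, business-ready aggregate (total amount by region).
gold = {}
for rec in silver:
    gold[rec["region"]] = gold.get(rec["region"], 0.0) + rec["amount"]

print(gold)
```

Each layer only reads from the one before it, which is what gives the architecture its trust and governance properties: Bronze is the immutable audit trail, Silver is the conformed foundation, and Gold is what analysts and BI tools actually consume.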

From Raw Data to Governance: Refining Data with the Medallion Architecture - Dec 2025

Overview: Architecting Data Trust

Build upon your existing data engineering expertise and discover how Medallion Architecture can transform your data strategy. This session provides a hands-on approach to implementing Medallion principles, empowering you to create a robust, scalable, and governed data platform.

We'll explore how to align data engineering processes with Medallion Architecture, identifying opportunities for optimization and improvement. By understanding the core principles and practical implementation steps, you'll learn how to optimize data pipelines, enhance data quality, and unlock valuable insights through a structured, layered approach to drive business success.

Presentation Agenda

1. Introduction to Medallion Architecture

* Defining Medallion Architecture (Bronze, Silver, Gold).

* Understanding the Core Principles (Immutability, Quality, Structure).

* Benefits of Medallion Architecture (Trust, Performance, Governance).

---

2. The Bronze Layer: The Landing Zone

* Understanding the purpose of the Raw/Bronze Zone.

* Best practices for data ingestion and immutable storage.

---

3. The Silver Layer: Integration & Cleansing

* Data transformation and cleansing (Silver Zone).

* Creating a robust foundation for unified analysis.

---

4. The Gold Layer: Curated Insights

* Data optimization and summarization (Gold Zone).

* Preparing data for consumption and enabling self-service analytics.

* Curated data for insights and action.

---

5. Empowering Insights & Governance

* Driving data-driven decision-making and accelerated insights.

* Importance of data governance in Medallion Architecture.

* Implementing data ownership and stewardship.

* Ensuring data quality and security.

---

✨ Why Attend

Gain a deep understanding of Medallion Architecture and its application in modern data engineering. You will learn how to:

* Optimize data pipelines and significantly improve data quality.

* Unlock valuable insights through a structured architectural approach.

* Discover practical steps to implement Medallion principles in your organization.

Please RSVP to secure your spot for this session. We believe in fostering a welcoming and inclusive environment where everyone's unique perspectives are valued and contribute to our collective success.

The Cognitive Data Lakehouse: AI-Driven Unification and Semantic Modeling in a Zero-ETL Environment

In the modern data landscape, the wall between "where data lives" and "how we get insights" is crumbling. This session focuses on the Cognitive Data Lakehouse—a paradigm shift that allows developers to treat a fragmented data lake as a unified, high-performance warehouse.

We will explore how to move beyond brittle ETL pipelines using Zero-ETL architecture in the cloud. The core of our discussion will center on using integrated AI capabilities and semantic modeling to solve the "Metadata Mess" inherent in global manufacturing feeds without moving a single byte of data. From raw telemetry in object storage to semantic intelligence via large language models, we’ll show you the real-world application of AI in modern data engineering.

🚀 Agenda Details

Phase 1: Foundations & The Zero-ETL Strategy

We kick off with the infrastructure layer. We'll discuss the design of cross-region telemetry tables and how modern cloud engines allow us to query raw files in object storage with the performance of a native table. We’ll establish why "zero data movement" is the goal for modern scalability.

Phase 2: Confronting the Metadata Mess

Schema drift and inconsistent naming across global regions are the enemies of unified analytics. We will look at why traditional manual mapping fails and how we can use AI inference to bridge these gaps and standardize naming conventions automatically.
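To give a feel for what that unification layer looks like, here is a pared-down Python sketch. The column aliases and canonical names are invented for illustration; in the talk's scenario an AI model infers the mapping, whereas here a hand-written dictionary stands in for the model's output, and the records are conformed in place without copying or moving the underlying data.

```python
# Sketch of a semantic unification layer: regional feeds arrive with
# inconsistent column names, and a mapping (inferred by an AI model in
# the talk's scenario; hard-coded here as a stand-in) conforms them to
# one canonical schema on the fly.

CANONICAL = {
    # inferred_alias -> canonical column name (illustrative values)
    "temp": "temperature_c",
    "temp_celsius": "temperature_c",
    "machine": "machine_id",
    "machine_no": "machine_id",
}

def conform(record: dict) -> dict:
    """Rename a record's columns to the canonical schema."""
    return {CANONICAL.get(key, key): value for key, value in record.items()}

# Two regional feeds that drifted apart in naming conventions.
feed_us = {"machine": "M-100", "temp": 71.2}
feed_eu = {"machine_no": "M-200", "temp_celsius": 21.8}

unified = [conform(feed_us), conform(feed_eu)]
print(unified)
```

The point of the pattern is that consumers only ever see the canonical schema: when a new regional feed with yet another naming convention appears, only the mapping is extended, and no downstream query or byte of stored data has to change.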

Phase 3: AI-Driven Unification & Semantic Modeling

The "Cognitive" part of the Lakehouse. We’ll dive into the technical implementation of registering AI models directly within your data warehouse environment. You'll see how to create an abstraction layer that uses AI to normalize data on the fly, creating a robust semantic model.

Phase 4: Scaling to a Global Feed

Finally, we’ll demonstrate the DevOps workflow for integrating a new international factory feed into a global telemetry view. We'll show how to maintain a "Single Source of Intelligence" that BI tools and analysts can consume without needing to know the complexities of the underlying lake.

💡 Why Attend?

Master Modern Architecture: Learn the "Abstraction Layer" design pattern that is replacing traditional, slow ETL/ELT processes.

Hands-on AI for Data Ops: See exactly how to use AI and semantic modeling within SQL-based workflows to automate data cleaning and schema mapping.

Scale Without Pain: Discover how to manage global data sources (multi-region, multi-format) through a single governing layer.

Developer Networking: Connect with other data architects, engineering leaders, and professionals solving similar scale and complexity challenges.

Target Audience: Data Engineers, Analytics Architects, Cloud Developers, and anyone interested in the intersection of Big Data and Generative AI.

Tech Camels - Events 2025 User group Sessionize Event

January 2025

SQL Saturday Jacksonville #1068 Sessionize Event

May 2024 Jacksonville, Florida, United States

Tech Camels - Events 2024 User group Sessionize Event

January 2024

SQL Saturday South FL 2023 Sessionize Event

June 2023 Davie, Florida, United States
