Shaurya Agrawal
Startup CTO & Board Advisor
Austin, Texas, United States
Shaurya Agrawal is a Data & Analytics leader with 25+ years of experience driving transformative initiatives across Tech/SaaS, E-commerce and FinTech. With expertise in AI/ML, Enterprise Data Architecture and BI, Shaurya has led impactful projects, creating customer-centric solutions and modernizing data platforms. As a Board Advisor to Hoonartek and CTO of YourNxt Technologies, a mobile tech start-up, Shaurya shapes global data strategies.
Holding an MBA and pursuing an MS in Data Science from UT Austin, Shaurya leverages data to unlock business value, specializing in unified customer views and personalized experiences.
Area of Expertise
Topics
Policy‑as‑Code for Security & Lineage: Board‑Ready Controls for Agentic AI
Session discusses how encoding governance as code (policy gates, immutable evidence capture & lineage) turns vague risk statements into automated, auditable defenses. Session shows how Policy-as-Code stops risky promotions, surfaces exfiltration attempts and produces the evidence auditors and boards require to approve AI deployments. Key Takeaways-
#1 Understand the minimal, board‑readable architecture (policy‑as‑code + secrets + evidence store + lineage) required to make agentic AI auditable
#2 Learn how automated policy gates and immutable evidence capture convert governance requirements into enforceable, testable controls
#3 Get a pragmatic 30–90 day pilot plan and KPIs that you can present to the board to demonstrate controlled AI adoption
Value‑Stream Lakehouse: From Commit → Cash (Measure Time‑to‑Value)
Session demos a Snowflake/Databricks/Fabric data platform pattern that ingests dev telemetry, release events, feature usage and finance metrics to compute time‑to‑value and initiative ROI.
Attendees will see a compact schema, critical joins and a demo notebook that turns noisy telemetry into a single dashboard PMOs and execs can trust for portfolio decisions.
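To make the pattern concrete, here is a minimal PySpark sketch of the time‑to‑value computation, assuming hypothetical tables (dev_commits, release_events, feature_revenue) and column names; your schema and joins will differ.

```python
# Minimal PySpark sketch of the commit -> cash join described above.
# Table and column names are illustrative placeholders, not a prescribed schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

commits = spark.table("dev_commits")        # initiative_id, committed_at
releases = spark.table("release_events")    # initiative_id, released_at
revenue = spark.table("feature_revenue")    # initiative_id, first_revenue_at, amount

time_to_value = (
    commits.groupBy("initiative_id").agg(F.min("committed_at").alias("first_commit"))
    .join(releases.groupBy("initiative_id").agg(F.min("released_at").alias("first_release")), "initiative_id")
    .join(revenue.groupBy("initiative_id").agg(
        F.min("first_revenue_at").alias("first_revenue"),
        F.sum("amount").alias("revenue_to_date")), "initiative_id")
    .withColumn("commit_to_release_days", F.datediff("first_release", "first_commit"))
    .withColumn("commit_to_cash_days", F.datediff("first_revenue", "first_commit"))
)

# One row per initiative feeds the single portfolio dashboard.
time_to_value.write.mode("overwrite").saveAsTable("portfolio.time_to_value")
```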
Auditable Lineage & One‑Click Investment Dossiers (Neo4j + Databricks)
Boards and steering committees demand concise, auditable evidence when approving investments.
This session demonstrates how to assemble compact, decision‑grade evidence packs by pairing Neo4j relationship graphs with Databricks lineage and KPI artifacts. Learn how to automatically generate an “investment dossier” containing lineage, cost rollups, outcome metrics, risk dependencies and approval history.
Walk away with a cheat sheet of everything a portfolio board needs to approve or pause funding with confidence.
Policy-as-Code - Enforcing ML Governance with Terraform + Sentinel + Vault
This talk shows a practical pattern for encoding ModelOps guardrails as policy-as-code: Terraform provisions model infra/deployment pipelines, Sentinel policies enforce promotion gates and Vault handles secrets and sensitivity labels. The session demonstrates how to block risky promotions automatically, capture governance artifacts in an immutable store and integrate alerts, with concrete examples you can adapt to Databricks, Fabric or on‑prem pipelines. Key takeaways:
#1 How to write promotion gates as Sentinel policies (quality, fairness, groundedness).
#2 Vault patterns for model secrets, short‑lived creds and sensitivity labels.
#3 CI/CD + policy enforcement pattern that automatically blocks/rolls back risky model promotions.
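Sentinel policies are written in HashiCorp's own policy language; as a language‑neutral illustration of the same gate logic, here is a minimal Python sketch of a promotion gate. The metric names and thresholds are illustrative assumptions, not prescribed values.

```python
# Minimal sketch of a promotion gate: the checks the talk encodes as Sentinel
# policies (quality, fairness, groundedness), expressed as a CI/CD step.
import sys

THRESHOLDS = {
    "accuracy": 0.85,        # minimum offline quality
    "fairness_gap": 0.05,    # maximum disparity between groups
    "groundedness": 0.90,    # minimum share of source-backed answers
}

def gate(metrics: dict) -> list:
    """Return a list of violations; an empty list means promotion may proceed."""
    violations = []
    if metrics.get("accuracy", 0.0) < THRESHOLDS["accuracy"]:
        violations.append("accuracy below threshold")
    if metrics.get("fairness_gap", 1.0) > THRESHOLDS["fairness_gap"]:
        violations.append("fairness gap above threshold")
    if metrics.get("groundedness", 0.0) < THRESHOLDS["groundedness"]:
        violations.append("groundedness below threshold")
    return violations

if __name__ == "__main__":
    candidate = {"accuracy": 0.88, "fairness_gap": 0.07, "groundedness": 0.93}
    problems = gate(candidate)
    if problems:
        print("Promotion blocked:", "; ".join(problems))
        sys.exit(1)   # non-zero exit fails the pipeline stage and blocks promotion
    print("Promotion allowed")
```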
Rapidly Investigate Agentic Misuse with a Neo4j Provenance Graph
Session demonstrates modeling agent activity (agents, users, artifacts, actions) as a provenance graph in Neo4j and using a few investigative Cypher queries to surface multi‑hop exfiltration, suspicious delegation or lateral movement. The session shows visualized subgraphs, how to drill into node and context metadata, and how to export an explainable evidence package (graph path + artifacts + timestamps) for incident response and regulator review.
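As a rough illustration of the investigative queries mentioned above, here is a minimal sketch using the Neo4j Python driver; the labels, relationship types and properties (Agent, Action, Artifact, PERFORMED, READ, SENT_TO, sensitivity) are hypothetical and should be mapped to your own provenance model.

```python
# Minimal sketch: one investigative query against a provenance graph in Neo4j.
from neo4j import GraphDatabase

EXFIL_QUERY = """
MATCH path = (a:Agent)-[:PERFORMED]->(:Action)-[:READ]->(d:Artifact {sensitivity: 'restricted'}),
      (a)-[:PERFORMED]->(:Action)-[:SENT_TO]->(ext:Endpoint {internal: false})
RETURN a.name AS agent, d.name AS artifact, ext.url AS destination, path
LIMIT 25
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(EXFIL_QUERY):
        # Each hit is a candidate multi-hop exfiltration path to investigate.
        print(record["agent"], "read", record["artifact"], "then contacted", record["destination"])
driver.close()
```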
Graph-Powered Service Maps - Rapid Root Cause & Billing Reconciliation
This session demonstrates how to build a lightweight, queryable service graph that unifies topology, telemetry, configuration and billing relationships. Session will walk through the data model (resources, relationships, incidents, usage events), ingestion options for CMDBs/monitoring systems and the most useful graph queries for common MSP problems.
Attendees will see how a small set of Cypher queries and visualizations reduces mean time to resolution, simplifies cross-customer dependency analysis, and closes gaps that cause billing disputes. The demo will be a short live walkthrough of querying a sample multi-tenant graph and resolving a simulated incident + invoice mismatch in under 5 minutes.
This talk is practical; no heavy architecture is required.
From Tickets to Signals - Using ML to Stop Recurring Work
This session explains a compact ML pipeline for ticket-driven signal engineering with data extraction from ticketing systems, text featurization, clustering and simple classifiers to surface preventive alerts. Session will discuss human-in-the-loop design, low-risk automation patterns and metrics for tracking deflection.
A short live notebook snippet will show clustering on sample ticket data and how a discovered pattern becomes a signal with a suggested playbook. Session will discuss how to start with limited data, validate improvements quickly and measure the operational impact that convinces leadership.
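For readers who want a feel for the clustering step, here is a minimal scikit‑learn sketch; the CSV path and description column are placeholder assumptions, and a production pipeline would pull from the ticketing system's API or a lakehouse table.

```python
# Minimal sketch of the ticket-clustering step: featurize ticket text and
# group near-duplicates so recurring work becomes a visible signal.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tickets = pd.read_csv("sample_tickets.csv")          # assumes a 'description' column
vectorizer = TfidfVectorizer(max_features=5000, stop_words="english")
X = vectorizer.fit_transform(tickets["description"])

model = KMeans(n_clusters=12, random_state=42)
tickets["cluster"] = model.fit_predict(X)

# Large clusters of near-identical tickets are candidates for a preventive
# playbook instead of repeated manual fixes.
print(tickets["cluster"].value_counts().head())
```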
Audit‑Ready Grounding - Build a “Cite‑or‑Fail” Databricks Agent in Minutes
Live demo of a minimal Retrieval‑Augmented Generation pipeline on Databricks that only answers when retrieved evidence meets confidence thresholds, and otherwise politely refuses. The demo shows grounding into governed Delta tables, inline citations in replies and automatic logging of an audit evidence row (prompt, sources, scores, answer) to Delta so compliance, security and incident teams can trace every response.
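A minimal sketch of the cite‑or‑fail gate plus evidence logging might look like the following, assuming a Databricks notebook context and a hypothetical retriever, LLM callable, confidence threshold and audit table name.

```python
# Minimal sketch: refuse when retrieval confidence is low, otherwise answer
# from governed context, and always append one audit evidence row to Delta.
import datetime
import json
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
CONFIDENCE_THRESHOLD = 0.75

def answer_with_evidence(prompt, retriever, llm):
    hits = retriever(prompt)   # assumed shape: [{"source": ..., "text": ..., "score": ...}]
    strong = [h for h in hits if h["score"] >= CONFIDENCE_THRESHOLD]

    if not strong:
        answer = "I can't answer that from the governed sources I'm allowed to use."
    else:
        context = "\n".join(h["text"] for h in strong)
        answer = llm(f"Answer only from this context and cite sources:\n{context}\n\nQ: {prompt}")

    # One audit row per interaction: prompt, sources, scores, answer.
    evidence = [(datetime.datetime.utcnow().isoformat(), prompt,
                 json.dumps([h["source"] for h in strong]),
                 json.dumps([h["score"] for h in strong]), answer)]
    (spark.createDataFrame(evidence, "ts string, prompt string, sources string, scores string, answer string")
         .write.mode("append").saveAsTable("governance.rag_audit_log"))
    return answer
```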
Real-Time Customer Graph Activation - From Identity Resolution to Personalization with GraphRAG
Session will demonstrate how to build a unified customer graph in Neo4j with identity resolution and household detection, enrich it with GraphRAG-powered insights from unstructured data (support tickets, reviews, chat logs) and activate personalized experiences. You'll learn how to stream customer events from Kafka/Kinesis into Neo4j for sub-second identity stitching and use GraphRAG to extract sentiment, intent and relationship signals.
This session focuses on the operational "last mile", turning graph insights into revenue-generating actions with measurable lift and full auditability.
Productionizing GraphRAG - Hybrid Retrieval, Eval Harnesses and Audit‑Ready Traces
Session will show you how to move from demo to deployment with hybrid retrieval architectures, rigorous evaluation frameworks and audit-ready lineage. You'll learn how to combine keyword search (BM25), vector embeddings and graph traversals in Neo4j for maximum retrieval relevance. You will also understand how to implement evaluation harnesses measuring Recall@k, nDCG and groundedness, using negative mining and labeled test sets.
This session will equip you with practical patterns to make GraphRAG maintainable, measurable and compliant in enterprise environments.
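As one illustration of hybrid retrieval fusion, the sketch below merges BM25, vector and graph result lists with reciprocal rank fusion; the document ids are dummy values and the retrievers themselves are assumed to exist elsewhere.

```python
# Minimal sketch: fuse ranked lists from keyword search, vector search and a
# graph traversal using reciprocal rank fusion (RRF).
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Combine several ranked lists of document ids into one fused ranking."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits   = ["doc7", "doc2", "doc9"]   # keyword (BM25) results
vector_hits = ["doc2", "doc5", "doc7"]   # embedding similarity results
graph_hits  = ["doc5", "doc2", "doc1"]   # k-hop neighbours from Neo4j

print(reciprocal_rank_fusion([bm25_hits, vector_hits, graph_hits]))
# doc2 surfaces first because all three retrievers agree on it.
```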
Microsoft Fabric + Neo4j - Unified Semantic Layer for GraphRAG and Power BI
Shaurya Agrawal will show you how to build a unified semantic layer that connects Fabric, Neo4j and Power BI with full lineage and governance. You'll learn how to use the Fabric Workload for AuraDB to ingest OneLake data into Neo4j with authentication via Entra ID, build a semantic knowledge graph enriched with LLM-extracted entities and relationships. You will also learn how to run GraphRAG queries, publish graph-derived features and insights back to Fabric's semantic model in Power BI.
This session will equip you with integration patterns, authentication flows and a reference architecture for enterprises running the Microsoft + Neo4j stack.
Lakehouse + Graph - Delta Lake, Unity Catalog and Neo4j for Governed AI Pipelines
Shaurya Agrawal will demonstrate how to pair Delta Lake's bronze/silver/gold architecture with Neo4j to build governed AI pipelines that track lineage, derive graph features and power GraphRAG, while keeping RBAC, audit logs and data contracts synchronized across both systems.
You'll learn architectural patterns for dividing responsibilities, such as Delta for events, aggregations and feature tables. You will also learn lineage tracking, relationship modeling and GraphRAG context retrieval, and how to sync Unity Catalog metadata with the Neo4j schema and propagate RBAC policies and audit trails. You'll walk away with a reference architecture, code samples and a governance dashboard design that spans lakehouse and graph.
Explainable Fraud Investigation with GraphRAG - From Alert to Evidence Package
Session will show you how to combine Neo4j graph patterns, GraphRAG retrieval and GNN embeddings to build an explainable fraud investigation platform. You'll learn how to model fraud entities (accounts, devices, transactions, shared attributes) and detect suspicious patterns with multi-hop Cypher queries.
This session emphasizes the compliance and investigator experience, making graph-powered fraud detection actionable, explainable and audit-ready for financial institutions and fintechs.
Agentic Graph Memory - Temporal Knowledge Graphs for Persistent, Explainable Agents
Shaurya Agrawal will demonstrate how temporal knowledge graphs in Neo4j provide agents with structured, queryable memory for planning, backtracking and multi-step reasoning. You'll learn design patterns for modeling agent observations, state transitions and provenance as temporal nodes and relationships.
By the end of this session, you'll understand how to build agents that remember context across sessions, justify their reasoning with graph paths and scale to production workloads with testable memory fidelity and retrieval latency SLOs.
Reliable LLM Domain Services - Grounded Retrieval and Telemetry on Databricks
Build an assistant that answers only from governed domain data and shows its evidence. Teams will define a domain slice, curate a small corpus, implement grounded retrieval with cite‑or‑fail, and add refusal logic and PII guards. Session will capture traces linking prompt to sources to filters to answer and practice a review ritual to refine boundaries and tests. Most time is hands‑on in pairs with quick checkpoints. You leave with a slim checklist and a repeatable evaluation harness.
Relationship Modelling in Practice with Neo4j and the Lakehouse
In this lab, teams model a bounded context as a property graph, map it to Delta entities and iterate on the model through exercises. Session will run Cypher patterns for centrality, paths, and communities to test hypotheses, then round‑trip “graph features” back to analytical tables for ML and BI. Most of the time is group modelling and pair querying with short debriefs. You leave with a schema sketch, example queries, and a path to add relationship intelligence without breaking contracts.
Finance Copilots Leaders Accept with Grounded Answers and Evidence
This talk frames the finance copilot as a domain service with policy‑as‑code, cite‑or‑fail retrieval, and telemetry that links question to sources to filters to summary. We will show how to define context boundaries for the assistant, encode refusal rules, and run a 30‑60‑90 rollout that balances autonomy and governance. Outcome is a credible path from demo to daily use without eroding trust.
Graph‑Augmented Data Products with Neo4j & Lakehouse for Cross‑Domain Insight
This one‑day workshop adds relationship intelligence to your Data Mesh. In the morning, the session translates context maps into a property graph for a chosen domain, defines stable identifiers, and sketches a graph schema aligned to data product contracts. Midday, the session runs Cypher pattern labs (centrality, k‑hop paths and communities) to answer fraud, identity, churn and dependency questions. In the afternoon, the session publishes “graph features” as versioned outputs of data products, defines tests and lineage and designs a contract to consume graph outputs across domains. The entire session involves practice, model critiques, pair queries, and contract design. You leave with schema sketches, notebooks, loaders, and a three‑week activation plan to add graph to your Mesh.
Data Products that Scale - Contracts, Semantics, and Operating Cadence
This full‑day workshop is a practical path to data products that scale. In the morning, the session maps bounded contexts to data products, writes explicit product contracts/SLAs, and defines upstream/downstream agreements. Midday, the session uses decision trees to balance latency and cost (materialize vs virtualize) and tunes Delta and Direct Lake patterns without centralizing ownership. In the afternoon, the session designs federated governance by code with lineage and access by purpose. Throughout the session, interaction is hands‑on in pairs and teams. You leave with contract templates, KPI trees, a semantic alignment playbook and a 90‑day adoption plan.
Data Products that Leaders Trust with One Copy of Truth
Leaders want reliable decisions from domain data products without sprawl or handoff friction. This talk shows an operating model for “one copy of truth” across domains with explicit contracts, SLAs, and lineage. Session will map bounded contexts to productized data, define interfaces between operational and analytical streams and give decision trees for materialize vs virtualize and latency vs cost. Expect a case‑style walkthrough and a checklist that turns platform choices into predictable outcomes.
Trustworthy AI for Finance - Grounded Variance Answers with a Lakehouse Copilot
This deep dive shows how to build a CFO‑centric finance copilot on Azure Databricks that answers only from governed data, always cites sources and logs audit-ready evidence. I will implement grounded RAG over curated Delta tables, enforce cite‑or‑fail and PII safeguards, add runtime evaluation and refusal logic and capture telemetry linking prompt --> sources --> filters --> summary. Then I will walk through a compact live demo and a 30/60/90 rollout plan for Finance/FP&A. You’ll leave with patterns, notebooks, and a reliability checklist to ship a CFO-ready copilot.
Shipping ML Faster - A Minimal, Repeatable Pipeline From Data to Decisions
This talk presents an opinionated, vendor-neutral blueprint for delivering ML to production reliably. I will cover the minimum viable pipeline: data contracts and quality checks, feature definitions with ownership, experiment tracking and reproducibility, promotion gates tied to evals, safe rollout patterns (blue/green, shadow), and lightweight monitoring for drift, quality, and cost.
You’ll get a reference lifecycle, a promotion checklist, and anti-patterns to avoid (pipeline sprawl, silent schema breaks, unmanaged “notebook ops”). Walk away with a practical template you can apply on any stack.
Relationship‑First Customer 360 - Graph‑Augmented CDPs with Databricks + Neo4j
Customer 360 demands more than just relational joins. In this hands-on workshop, I will show how to build a graph-powered CDP on Databricks using Neo4j GraphDB. With the Neo4j Spark Connector, I will stream multi-channel data into Delta Lake, run graph queries to uncover hidden affinities, and deliver insights back into Databricks ML and BI. I will cover real-world use cases like identity resolution, cross-channel attribution, and personalization.
Attendees will leave with a detailed understanding of Customer 360 and CDPs, a hands-on deep-dive immersion in Neo4j GraphDB and the Databricks Data Platform, and patterns and design approaches to bring graph analytics into your Lakehouse and deliver customer intelligence that traditional CDPs struggle to achieve.
Production-Ready RAG & AI Assistants on Databricks + Azure AI Foundry
In this workshop, I will prepare trustworthy data in Delta, implement vector and keyword retrieval with confidence thresholds, apply grounded-answer patterns (cite-or-fail, source allow/deny) and add input/output filters for safety and PII.
You’ll wire up eval gates that run in CI/CD, capture telemetry (prompts, sources, decisions), and push actionable metrics to a simple ops dashboard. I will also cover least-privilege tool use and per-identity permissions to prevent “over-powered” assistants.
You’ll leave with a reference repo, notebooks, and checklists to deploy assistants that are accurate, auditable, and cost-aware.
Red Teaming Enterprise Assistants - Hidden Instructions, Data Leaks & Tool Misuse
This talk shows how to “red team” your assistants safely - plant realistic tests, measure what they accessed and shared, and write findings leaders can act on. Then we flip to defense with practical guardrails such as 'allow lists' for trusted sources, “show your sources” rules, output limits and filters, per-identity tool permissions, action budgets and simple approval steps. I will also cover the evidence leaders expect, i.e. traces linking user --> prompt --> data --> actions, plus review and sign-off. Leave with a ready-to-use test kit, a controls checklist, and a blueprint to keep assistants helpful, contained, and auditable.
Operationalizing ML - A Technical Guide to MLOps, DataOps & ModelOps in Fabric
Deep Dive into MLOps in Fabric -
Ship AI in Microsoft Fabric with guardrails. This session covers MLflow training, registry, deployment, drift/quality monitors, approvals, sensitivity labels, lineage to Power BI, and model cards. Leave with templates and a dashboard for accuracy, freshness and SLA breaches.
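As a flavor of the MLflow training‑to‑registry flow, here is a minimal sketch; the experiment path, model name and toy dataset are illustrative, and approvals, sensitivity labels and lineage are configured in Fabric or Databricks outside this snippet.

```python
# Minimal MLflow sketch: track a run, log the model, then register it so
# promotion gates and approvals can act on a named registry entry.
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

mlflow.set_experiment("/Shared/churn-demo")
with mlflow.start_run() as run:
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")

# Register the run's model under a governed name.
mlflow.register_model(f"runs:/{run.info.run_id}/model", "churn_classifier")
```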
MLOps on Databricks - Features, CI/CD, Monitoring, and Cost Control
In this workshop, you’ll implement an opinionated MLOps stack on Databricks: feature engineering patterns, MLflow experiment tracking, model registry with promotion gates and managed endpoints with blue-green or canary rollout. You will wire up CI/CD pipelines that run eval suites (quality, robustness, fairness screens) and automatically block risky promotions.
You’ll add drift/quality monitors, alerting, and rollback runbooks, plus cost controls that right-size clusters and endpoints by workload. The result: faster iteration with fewer incidents and a transparent “evidence pack” leaders and auditors trust.
MLOps on Azure Databricks - Features, Registry Gates, and Safe Deployments in a Day
Ship models with confidence on Azure Databricks. In one day, you’ll stand up a minimal, repeatable pipeline: define features with ownership, track experiments, implement model registry promotion gates and deploy with shadow/canary plus rollback runbooks. Add drift/quality monitors, alerts, and cost guardrails so ML won’t surprise SRE. Wire CI/CD via GitHub Actions, generate an “evidence pack” (metrics, lineage, approvals) and adopt a simple promotion checklist. You’ll leave with a repo template, pipeline YAML and a working path from data to decisions, fitting .NET and Python teams who want production results, not just notebooks.
Graph + Lakehouse on Azure: When Relationships Beat Tables
This deep dive shows how to pair Azure Databricks with Neo4j to surface relationship intelligence for fraud, identity resolution, churn, and supply chain risk. I will design a graph-augmented data model, sync Delta <--> Neo4j reliably, run Cypher patterns (k-hop paths, triangles, community detection) and push graph features back to Delta for ML and BI. You’ll learn governance and lineage across both systems, performance/cost trade-offs and a pragmatic rollout plan. Expect practical code, reference patterns, and a compact live demo you can reuse on Azure today.
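A minimal sketch of the Delta → Neo4j sync step with the Neo4j Connector for Apache Spark is shown below; the table, label, credentials and exact option names should be verified against the connector version you deploy.

```python
# Minimal sketch: push a curated Delta table into Neo4j as nodes via the
# Neo4j Connector for Apache Spark. Names and credentials are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

accounts = spark.table("silver.accounts").select("account_id", "name", "segment")

(accounts.write
    .format("org.neo4j.spark.DataSource")
    .mode("Overwrite")
    .option("url", "neo4j+s://<your-neo4j-host>:7687")
    .option("authentication.basic.username", "neo4j")
    .option("authentication.basic.password", "<secret-from-your-vault>")
    .option("labels", ":Account")          # node label to write
    .option("node.keys", "account_id")     # merge key to keep the sync idempotent
    .save())
```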
Graph + Lakehouse on Azure - Relationship Intelligence with Neo4j and Databricks
When tables hide the big picture, graph reveals it. This hands-on workshop shows how to pair Azure Databricks with Neo4j for relationship intelligence. You’ll model entities/relationships, sync Delta <--> Neo4j, run Cypher patterns for fraud, identity resolution, and supply chain, and extract graph features back to Delta for ML/BI. I will cover governance and lineage across both stores, performance trade-offs and a pragmatic cost model. You’ll leave with code, data models, and sync patterns to add graph where it matters, without breaking pipelines or budgets.
Direct Lake at Scale: Performance, Cost & Semantic Model Design Patterns
Learn how to tune Microsoft Fabric Direct Lake for sub‑second BI: when to use Direct Lake vs Import/DirectQuery, Delta layout/partitioning, semantic model patterns, aggregations, and a troubleshooting playbook. Optimize cost, latency, and freshness.
Direct Lake and Delta at Scale: Performance Engineering Bootcamp
This hands-on bootcamp teaches a pragmatic, measurement-driven approach to performance. I will tune Delta tables (partitioning, file sizing, Z-Order/V-Order, compaction), address tiny file and high-cardinality pain, and use shortcuts and metadata strategies wisely. On the semantic side, I will build star schemas, composite models, and aggregations that keep p95 latency down without ballooning costs.
You’ll learn when to choose Direct Lake vs Import vs DirectQuery, how to diagnose bottlenecks, and how to monitor freshness/latency with a lightweight perf dashboard. Bring your toughest scenarios as this day is about fixes you can apply immediately.
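For reference, the Delta maintenance commands discussed in the bootcamp look roughly like the following on Databricks (Fabric's V‑Order is configured differently); the table and column names are examples only.

```python
# Minimal sketch of Delta maintenance issued as Spark SQL from a notebook.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on the most common filter column,
# which improves scan behaviour for downstream BI queries.
spark.sql("OPTIMIZE sales.fact_orders ZORDER BY (customer_id)")

# Remove files no longer referenced by the table (default retention applies).
spark.sql("VACUUM sales.fact_orders")
```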
Demo of Practical NLP on Azure Databricks - From Text to Features to Impact
In this live, code‑forward session on Azure Databricks, I will ingest messy enterprise text (tickets, emails, notes), normalize and label it, and build task‑ready representations: TF‑IDF/n‑grams, domain keywords and lightweight transformer embeddings. I will compare classical models vs small transformers, show evaluation that survives drift and wire outputs into downstream apps and dashboards. You’ll see working notebooks, Delta tables, and a simple “NLP scorecard” for precision/recall, latency and cost. Leave with reusable patterns and a starter repo to ship explainable, cost‑aware NLP on Databricks.
Deep Learning for NLP on Azure Databricks with Hugging Face: Small Models, Big Wins
In this live, deep learning–focused session, I will use Hugging Face on Azure Databricks to build fast, explainable NLP services with small, efficient transformers. I will select task-specific models (classification, NER, summarization), apply efficient tokenization, batching/caching and compare distilled/quantized variants for cost and latency. You’ll see how to evaluate quality with slice-aware metrics, log latency/throughput/cost to a scorecard and deploy via batch jobs or a lightweight real-time endpoint. Optional GPU paths are shown, but everything runs well on CPU. Leave with notebooks, a model selection playbook, and production-ready patterns
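As a small taste of the pattern, here is a minimal Hugging Face pipeline sketch with a publicly available distilled checkpoint; the model choice and sample texts are illustrative, not recommendations.

```python
# Minimal sketch: a small, CPU-friendly transformer served via the pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

tickets = [
    "The portal keeps timing out when I upload invoices.",
    "Thanks, the fix worked perfectly!",
]

# Batched inference keeps latency and cost predictable on CPU.
for ticket, result in zip(tickets, classifier(tickets, batch_size=8)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {ticket}")
```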
Data Platforms Without the Sprawl - Clear Handoffs, One Copy of Truth
This session shows how to design a platform-agnostic operating model: data products with SLAs and schemas, “one copy” policies, and explicit handoffs between engineering, analytics, and ML. I will cover physical layout and query patterns that balance cost, latency, and freshness; when to materialize vs virtualize; and how to preserve lineage, security, and governance across tools.
Expect concrete decision trees and checklists that reduce duplication and platform churn—no matter which vendors you use.
Building Reliable LLM Apps on Azure Databricks - Grounded RAG, Cite‑or‑Fail, and Telemetry
Build LLM apps that leaders can trust without heavy CI/CD. In this hands‑on workshop, you’ll implement grounded retrieval on Azure Databricks, combine keyword and vector search and enforce “cite‑or‑fail” so every answer shows sources or refuses safely. Add safety filters (PII/toxicity), light policy‑as‑code tests, and runtime evaluation to catch regressions before users do. We’ll instrument telemetry & traces that link prompt --> data --> answer for auditing and debugging, then discuss simple rollout patterns (shadow/canary) and cost/performance tuning. You’ll leave with a working notebook repo, datasets, and a practical checklist to move from prototype to reliable production on Databricks.
Build an Agentic Finance Copilot on Azure Databricks - Guardrails and Red Team Tests
Build a CFO-ready copilot from scratch.
Day 1 - Curate “finance facts” in Delta, implement grounded Q&A with cite‑or‑fail, redact PII and wire a minimal API.
Day 2 - Add an evaluation harness (variance QA, refusal tests, prompt injection), telemetry/traces and shadow/canary rollout with rollback.
You’ll integrate Azure OpenAI or a local model, log evidence (prompts, sources, summaries) to Delta and ship a working prototype. Leave with a repo, CI evals and dashboards for accuracy, freshness, and policy exceptions, ready to impress C-suite leaders in your environment.
Disrupting Defaults: Smarter Credit Risk w/ Neo4j Graphs
In this lightning talk, Shaurya Agrawal will demonstrate how Neo4j’s graph database technology can revolutionize credit risk management for Financial and FinTech firms by uncovering hidden relationships between entities. Using a real-world scenario where Company A, Company B and Company C are subsidiaries, with varying ownership, under a common parent, you will see how traditional systems often miss these indirect connections, potentially underestimating aggregate exposure and risk. With Neo4j, you will learn how to model and visualize complex corporate hierarchies, instantly revealing cross-entity dependencies and shared liabilities.
Attendees will discover how graph queries can surface risk concentrations, identify circular ownership, and support more informed credit decisions. This session will show you how leveraging Neo4j’s relationship-first approach enables smarter, faster risk assessment—empowering you to move beyond the limitations of legacy, table-based systems.
Smarter Credit Risk with Databricks + Neo4j Graphs
Hidden relationships between corporate entities are often missing from traditional risk systems, leading to underestimated exposures. This session demonstrates how Databricks + Neo4j can transform credit risk modeling. We’ll integrate ownership structures and transactions into Delta Lake and apply Cypher graph queries to detect circular ownership, shared liabilities, and exposure chains. With Databricks ML, we’ll take these graph insights further, enabling predictive models for entity-level risk and portfolio concentrations.
Attendees will see a novel architecture that unifies graph algorithms with the scalability of Databricks, delivering faster and smarter credit risk decisions.
Visualizing Risk in Cozystack: A Graph Approach
Multi‑tenant platforms like Cozystack make it possible to run powerful shared data and AI workloads, but visibility across tenants, roles, and resources can quickly get complex. Logs alone often don’t reveal how different identities and workloads are actually connected. This session introduces a beginner‑friendly framework for applying graph thinking to Cozystack, helping teams model and visualize cross‑tenant relationships. Through simple examples, attendees will see how graphs highlight over‑permissioned accounts, hidden dependencies, and potential risk paths that traditional reporting may miss. No graph theory background required.
Participants will leave with practical ideas and methods they can use right away to make Cozystack deployments more secure, transparent, and trusted.
Signal-Led Growth: A CEO/CMO Playbook for AI-Driven GTM Efficiency
Boards want growth with fewer dollars. This session shows CEOs/CMOs how to pivot from MQLs to a signal-led operating model that unifies intent, product usage, and account context to trigger next-best actions automatically. I will cover the executive blueprint: what to instrument, how to govern data and AI risk, where AI creates leverage (propensity, uplift, personalization), and how to prove impact to the board. Expect clear org design, KPI trees, and vendor guardrails so you can scale pipeline and win rates without bloating headcount or tooling.
Securing AI and Test Automation Pipelines with HashiCorp Vault on Databricks
AI and ML workflows increasingly drive customer‑facing applications, but they depend on dozens of sensitive credentials: API keys, service accounts, model endpoints, cloud secrets. Too often, these credentials are stored in notebooks, Git repos, or CI/CD pipelines — a major security risk. In this session, we’ll showcase how HashiCorp Vault can seamlessly secure Databricks‑based AI pipelines, focusing on QA and test automation for conversational AI. You’ll see how using Vault to manage secrets reduces risk, enforces compliance, and provides dynamic secrets for ephemeral jobs. We’ll walk through a simple pattern: Vault securely stores API keys for testing LLM workflows, while Databricks consumes them in QA pipelines without exposing credentials. Attendees will leave with a clear blueprint for making AI automation pipelines not just smarter, but safer.
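The pattern boils down to something like the following hvac sketch, where the Vault address, auth method, mount point, secret path and key name are all placeholders for your environment.

```python
# Minimal sketch: a QA job pulls an LLM API key from Vault at runtime instead
# of hard-coding it in notebooks, Git repos, or CI/CD variables.
import os
import hvac

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],      # e.g. https://vault.example.com:8200
    token=os.environ["VAULT_TOKEN"],   # or an AppRole / JWT login in CI
)

secret = client.secrets.kv.v2.read_secret_version(
    mount_point="secret",
    path="qa/conversational-ai",
)
api_key = secret["data"]["data"]["llm_api_key"]

# Hand the credential to the test harness for this run only, without persisting it.
os.environ["LLM_API_KEY"] = api_key
```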
Security Lakehouse in Action: Using Databricks for Scalable Threat Hunting
Security data is exploding across on-prem, multi-cloud, and SaaS systems, overwhelming traditional SIEMs. This session demonstrates how Databricks can act as a Security Lakehouse, enabling scalable threat hunting and advanced analytics on diverse telemetry. We’ll showcase how to consolidate logs, streams, and identity data into Delta Lake, and then apply graph analytics and machine learning to uncover hidden attack paths.
Attendees will learn how the Lakehouse approach unlocks deep visibility, reduces detection blind spots, and empowers security teams to hunt threats across massive hybrid environments, faster and smarter than with traditional tooling.
Scaling Analytics Without ETL Nightmares: Trino + Databricks in the Data Fabric
Enterprises today struggle with sprawling data estates — Delta Lake in Databricks for ML workloads, Snowflake for BI, and legacy relational systems still running critical operations. The default answer is usually another pipeline, another data copy, and yet another layer of complexity.
In this session, we’ll share how combining Trino’s federated query engine with Databricks’ lakehouse platform creates an agile data fabric that eliminates unnecessary ETL. With Trino, we can query Delta tables in Databricks directly alongside Snowflake, S3, or even legacy RDBMS — all through one SQL interface. This means faster time to insight, fewer brittle pipelines, and a unified view across silos.
We’ll explore architectural patterns, performance considerations for querying Delta Lake via Trino, and how this approach empowers teams to democratize analytics without duplication. Attendees will leave with practical examples of making Databricks a first‑class participant in a broader federated architecture, powered by Trino.
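To ground the idea, here is a minimal sketch using the Trino Python client to join a Delta table with a Snowflake table in one statement; the host, catalogs, schemas and table names are placeholders for your own configuration.

```python
# Minimal sketch: one federated SQL statement through Trino joining a Delta
# Lake table with a Snowflake table, no copy or extra pipeline required.
import trino

conn = trino.dbapi.connect(
    host="trino.example.com",
    port=443,
    user="analyst",
    http_scheme="https",
)

cur = conn.cursor()
cur.execute("""
    SELECT o.customer_id, o.order_total, c.segment
    FROM delta.sales.orders AS o          -- Delta table in Databricks storage
    JOIN snowflake.crm.customers AS c     -- Snowflake table, same statement
      ON o.customer_id = c.customer_id
    LIMIT 100
""")
for row in cur.fetchall():
    print(row)
```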
Scaling Analytics Platforms as Code: Automating Databricks with Terraform
Data platforms must move as fast as the businesses they serve. Yet, provisioning workspaces, clusters, and secure storage for analytics often involves manual, error‑prone processes. In this session, we’ll explore how to use HashiCorp Terraform to provision Databricks Lakehouse infrastructure as repeatable, version‑controlled code. Attendees will see a live walkthrough of using Terraform to spin up Databricks workspaces, configure clusters, and integrate cloud storage, all with just a few lines of HCL and terraform apply. We’ll also discuss patterns for scaling this approach across teams, applying GitOps to infrastructure, and improving cloud cost efficiency by standardizing data platform deployment.
Revolutionizing Data Governance with Databricks Lakehouse Architecture
"The Databricks lakehouse architecture represents a transformative approach to data management by seamlessly unifying the flexibility of data lakes with the reliability and performance of data warehouses. This convergence simplifies data governance by providing a single platform that supports both traditional OLAP workloads and modern AI/ML applications. Attendees will gain insights into how this architecture enables consistent data quality, robust security controls, and streamlined compliance processes, addressing the complex challenges organizations face in managing diverse data environments.
This session will also delve into best practices for implementing effective governance across the entire data lifecycle, including DataOps, MLOps, and ModelOps. By integrating governance into these operational workflows, organizations can ensure that data and models remain trustworthy, auditable, and compliant with regulatory requirements. Participants will leave equipped with practical strategies to leverage the lakehouse paradigm to drive innovation while maintaining control and transparency over their data assets."
Optimizing Analytics Spend: Using Trino to Complement Databricks for Cost‑Efficient Workloads
As data teams scale, one of the biggest pain points is runaway compute and storage cost from platforms designed for heavy workloads but misused for everyday analytics. Databricks shines at large‑scale ML and advanced analytics, but using it as a catch‑all BI engine can quickly inflate budgets.
In this talk, we’ll show how layering Trino’s high‑performance federation with Databricks provides a cost‑optimized analytics strategy. Rather than keeping expensive SQL warehouses or large clusters hot, we can offload exploratory queries, BI workloads, and cross‑system joins to Trino. Delta Lake tables in Databricks remain accessible through Trino, but now joined seamlessly with data in S3, Kafka, or cloud warehouses — at lower cost.
We’ll walk through real patterns where Trino reduced compute bills while still enabling Databricks to focus on what it does best: advanced ML/AI and scalable storage. Attendees will learn a pragmatic approach to balancing cost, performance, and flexibility by leveraging Trino and Databricks together instead of in silos.
Operationalizing ML Governance: A Practical Guide to MLOps in Microsoft Fabric
This session will focus on the practical implementation of MLOps within Microsoft Fabric to achieve effective AI/ML model governance. We'll walk through the end-to-end process of building automated pipelines for model training, deployment, monitoring, and retraining, emphasizing how Fabric's integrated tools (Synapse, Data Factory, MLflow) facilitate version control, experiment tracking, and continuous validation. We'll also discuss strategies for seamless integration with Power BI for operational reporting and real-time insights into model performance and governance metrics.
Operational Intelligence for CX (LLM + Databricks)
Companies manage a firehose of unstructured communication — emails, letters, calls, chat transcripts — that contain insights into customer needs, risks, and compliance. In this session, we’ll show how to apply Databricks + LLMOps pipelines to structure and analyze these interactions at scale. Using Delta Lake for storage, Spark NLP for preprocessing, and GenAI models for classification and summarization, we’ll extract themes, sentiment, and risk flags. The outputs feed back into Customer Support/Call Center and Operation & Service workflows.
Attendees will learn repeatable architectures for unifying communications data into their Lakehouse and see how unstructured → structured transformation unlocks new intelligence to improve operations and customer experience (CX).
Marketing Agent on a Laptop — From Signals to SDR Task in 30 Minutes
Build a lightweight “marketing agent” that ingests sample buying signals, scores opportunities, drafts personalized outreach, and posts tasks to Slack/CRM—live, on stage. I will use provided templates and a simple dataset to keep it practical, with guardrails for data privacy and content quality. You’ll leave with a reusable starter kit to pilot agents safely in your GTM stack and a rubric to evaluate agent actions before production.
Machine Learning for Cyber Defense: Building Adversary-Aware Models on Databricks
As cyber threats grow more evasive, traditional detection methods often fall short. This session explores how to leverage Databricks as a Security Lakehouse to build ML-driven defenses that adapt to adversary tactics. We’ll dive into creating adversary-aware models that detect anomalies, recognize lateral movement, and retrain continuously as attackers evolve. Using Databricks capabilities such as Delta Lake, LakeFlow pipelines, and MLlib, security teams can operationalize scalable detection without drowning in noise.
Attendees will learn practical steps to develop resilient ML workflows, mitigate adversarial risks, and empower SOCs with data-driven insights that anticipate and counter attacker innovations in real time.
LLMs That CISOs Trust: RAG, QA Loops, and SME‑in‑the‑Loop
This session translates security governance into practical GTM guardrails. I will cover policy‑as‑code for prompts and data use, lineage and access by purpose, evaluation gates for AI outputs, and vendor risk clauses. You’ll get a compact governance kit that protects customers and brand without slowing campaigns, built for MOPs to implement and audit with minimal friction.
Lateral Movement in Hybrid Clouds - A Framework for Threat Discovery
This session introduces a practical framework for detecting lateral movement across hybrid clouds, grounded in lessons learned from enterprise-scale environments. We’ll explore how attackers exploit trust relationships in Azure AD/Entra, APIs & multi-cloud connectors. We’ll share strategic models and playbooks for mapping privileged pathways, using graph-powered analysis, and prioritizing high-value choke points.
Attendees will leave with an actionable framework they can apply in their partner practices, enabling them to advise clients, align with Microsoft’s Zero Trust principles, and deliver differentiated managed detection services across hybrid and multi-cloud estates.
Hybrid Cloud Threat Hunting: Visualizing Lateral Movement
Modern adversaries exploit the seams between on-premises and multi-cloud environments, making lateral movement harder to detect. This session dives into advanced threat hunting techniques for hybrid cloud infrastructures. We'll explore how to unify disparate security telemetry, from cloud identity providers (IAM, Azure AD) to API logs and workload activity, and construct a holistic view of attacker progression.
Attendees will learn practical strategies for visualizing complex attack paths and identifying anomalous lateral movement across diverse cloud services and traditional networks. Attendees will gain actionable insights to uncover hidden threats that bypass conventional security controls in today's interconnected enterprise.
Graph-Powered Threat Hunting: Uncovering Hidden Attack Paths in Complex Systems
Traditional security tools often miss sophisticated threats that exploit complex relationships across users, devices, and applications. This session will demonstrate how graph technology, integrated with scalable data platforms like Databricks, offers a powerful new approach to cybersecurity. By mapping these intricate connections into a graph database, security teams can visualize and analyze the entire attack surface, revealing hidden lateral movement and anomalous behaviors that are invisible in siloed data.
Attendees will learn practical strategies for leveraging graph-based threat models to enhance proactive threat hunting. This method enables precise identification of critical vulnerabilities and attack paths, allowing security professionals to prioritize defenses more effectively. Discover how connected insights can transform your security operations, empowering your team to detect advanced threats and strengthen overall organizational resilience.
Graph-Powered CDPs on Databricks with Neo4j
Customer Data Platforms (CDPs) demand more than just relational joins. Modern organizations need to model relationships across channels, devices, and identities. In this session, we’ll show how to build a graph-powered CDP on Databricks using Neo4j. With the Neo4j Spark Connector, we’ll stream multi-channel data into Delta Lake, run graph queries to uncover hidden affinities, and deliver insights back into Databricks ML and BI. We’ll cover real-world use cases like identity resolution, cross-channel attribution, and personalization.
Attendees will leave with code patterns and design approaches to bring graph analytics into their Lakehouse and deliver customer intelligence that traditional CDPs struggle to achieve.
Getting Started with Vector Search on Databricks: Building Intelligent Search Applications
Vector search is a hot topic in the AI/ML and data engineering space, especially with the rise of generative AI and semantic search. Databricks recently introduced native vector search capabilities, making it a timely topic for discussion.
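A minimal query sketch, assuming an index has already been created over a Delta table and using the databricks-vectorsearch client as documented at the time of writing (verify the API against current docs), might look like this:

```python
# Minimal sketch: query an existing Databricks Vector Search index from Python.
# Endpoint, index and column names are placeholders for your workspace.
from databricks.vector_search.client import VectorSearchClient

client = VectorSearchClient()  # picks up workspace auth inside a Databricks notebook

index = client.get_index(
    endpoint_name="vs_endpoint",
    index_name="catalog.schema.product_docs_index",
)

results = index.similarity_search(
    query_text="how do I reset a customer password?",
    columns=["doc_id", "title", "chunk_text"],
    num_results=5,
)
print(results)
```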
From Data to AI Agents: Building a Production-Ready AI Stack With Databricks and Azure AI Foundry
Deploying generative AI in the enterprise requires more than just fine‑tuning a model — it demands a reliable data backbone, scalable orchestration, and secure integration into business workflows. In this session, we’ll explore how combining Databricks’ lakehouse platform with Azure AI Foundry provides an end‑to‑end foundation for bringing LLM‑driven applications to life.
We’ll begin with how Databricks can prepare and govern trusted data in Delta Lake, enabling robust data pipelines that feed directly into AI training and retrieval‑augmented generation workflows. From there, we’ll show how Azure AI Foundry operationalizes those models — managing deployment, guardrails, evaluation, and integration into enterprise systems.
Attendees will leave with a reference architecture demonstrating how to link data engineering + model development + secure AI deployment across Databricks and Azure, enabling agentic AI applications that are both cost‑efficient and enterprise‑ready.
From Data Lakehouse to Kubernetes: Practical Lessons in ML Infrastructure for Non-Kubernetes Experts
In this talk, I will look at how organizations building on Databricks, Azure, and open ML stacks can start aligning with Kubernetes‑native practices for batch workloads, observability, and governance—without needing to be cluster experts. We’ll discuss how ML pipelines (training/inference) can map into Kubernetes batch workloads. We will explore Agentic AI on top of Kubernetes for governance (drift, explainability, compliance).
Attendees will walk away with a clear mental model of how their ML infrastructure intersects Kubernetes, plus a pragmatic adoption path for leaders coming from a Data/AI background
Disrupting CDPs: Neo4j vs. the Status Quo
In this session, Shaurya Agrawal will guide you through the architecture and implementation of a modern Customer Data Platform (CDP) powered by Neo4j, the industry-leading graph DB. As organizations strive to unify and activate customer data from disparate sources, traditional relational models often fall short in capturing the complex, interconnected relationships that drive true customer understanding. This session will demonstrate how Neo4j’s flexible schema enables seamless integration of multi-channel customer data, and showcase how graph algorithms can uncover hidden patterns in customer journeys, preferences, and behaviors.
You will explore techniques for ingesting, linking, and querying customer data using Cypher, Neo4j’s powerful query language. Shaurya Agrawal will walk you through real-world use cases such as identity resolution, personalized recommendations, and advanced segmentation, illustrating how a graph-based CDP can deliver actionable insights and drive business value. By the end of the session, you will understand how to design and build a scalable CDP on Neo4j and leverage graph analytics for deeper customer intelligence. Whether you are a developer, architect, or data professional, you will leave equipped with the knowledge and resources to start your own graph-powered customer data journey.
Data to Decisions — Building a Security Lakehouse with Azure & Databricks
Workshop introduces a Security Lakehouse framework that uses Azure & Databricks to unify data, unlock deeper analytics & accelerate threat detection. We’ll walk through strategies for ingesting diverse data sources, from Azure AD & Defender telemetry to AWS, GCP, and on-prem workloads into Delta Lake. Building on this foundation, we’ll explore how advanced analytics, graph models, and machine learning can move security operations from reactive alert triage to proactive decision-making.
Participants will leave with a reference architecture, practical design patterns, and hands-on examples that they can adapt to partner offerings or client environments, helping them transform security data into actionable outcomes at enterprise scale.
Cost Object & Driver Analysis for Faster Corporate Financial Close
Firms face complex cost allocations across policies, products, and departments during financial close. Traditional systems struggle with transparency and runtime. In this session, we’ll demonstrate how to use Databricks for cost object and driver analysis, unifying finance, HR, and operational feeds into Delta Lake. We’ll build allocation models that can iterate quickly at scale, with lineage tracked in Unity Catalog, making period-end processing more auditable. Examples include allocating IT shared services, HR costs, operational costs, customer acquisition expenses and other OPEX costs.
Attendees will leave with patterns for accelerating close cycles and producing explainable cost allocations using Databricks instead of legacy allocation engines.
Catastrophe Loss, Risk or Actuarial Modeling with Databricks
Catastrophic events like pandemics (e.g., COVID-19) or natural disasters create extreme volatility for insurers — and require computationally heavy simulations. This session shows how Databricks Lakehouse + MLflow can be used to model fat-tail Loss/Risk or actuarial scenarios at scale. We’ll cover techniques for simulating mortality surges, stress-testing cash reserves, and integrating external demographic and health datasets alongside policy data. Attendees will see how distributed compute accelerates scenario evaluation and improves the transparency of actuarial assumptions.
By the end, actuaries and data scientists will understand how to leverage Databricks for catastrophe risk — moving from spreadsheets to scalable, explainable simulation models.
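To illustrate the kind of fat‑tailed simulation being scaled out, here is a minimal NumPy sketch with a Poisson event frequency and Pareto severities; all parameter values are illustrative, not calibrated assumptions.

```python
# Minimal sketch: simulate annual catastrophe losses with Poisson event counts
# and heavy-tailed (Pareto) severities, then read off tail risk metrics.
import numpy as np

rng = np.random.default_rng(7)
n_years, lam = 100_000, 1.2      # simulated years, mean catastrophe count per year
alpha, x_min = 1.8, 5.0          # Pareto tail index and minimum loss ($M), illustrative

annual_losses = np.zeros(n_years)
event_counts = rng.poisson(lam, size=n_years)
for i, n_events in enumerate(event_counts):
    if n_events:
        severities = x_min * (1 + rng.pareto(alpha, size=n_events))
        annual_losses[i] = severities.sum()

# Tail metrics actuaries care about: 1-in-200-year loss and expected shortfall.
var_99_5 = np.quantile(annual_losses, 0.995)
tvar_99_5 = annual_losses[annual_losses >= var_99_5].mean()
print(f"99.5% VaR: {var_99_5:.1f}  TVaR: {tvar_99_5:.1f}")
```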
Brand, Trust, and AI: Building a CxO Credible Narrative Without FUD
In cybersecurity, trust is the brand. This talk gives CEOs/CIOs a framework to use AI to scale thought leadership and content while avoiding hallucinations and fear marketing. I will show how to anchor messaging in first-party research, implement “cite-or-fail” workflows (RAG + SME review) and turn insights into multi-channel narratives that resonate with CIOs and boards. You’ll leave with governance for AI-generated content, measurement for brand demand, and the operating cadence to sustain credibility in a noisy market.
Building an AI‑Native, Signal‑Led GTM Engine
This talk shows how to unify intent, web, product & firmographic data into a signal taxonomy that powers AI-driven next‑best actions. I will cover data design, lightweight propensity/uplift models, routing logic and governance to keep PII/IP safe. You’ll see orchestration patterns that auto-trigger SDR tasks, ads & email plays while preserving analyst- and CISO‑friendly messaging. I will share measurement frameworks to prove impact on pipeline velocity and conversion, without boiling the ocean or over‑engineering your stack.
Automating IoT Digital Twin Infrastructure with Terraform and Consul
IoT digital twins are powerful tools for modeling real‑world systems — but managing the infrastructure behind them is complex. Analytics platforms, graph databases, storage, and streaming components must all be deployed consistently across environments. In this session, we’ll explore how to use HashiCorp Terraform to automate provisioning of digital twin infrastructure, along with HashiCorp Consul for service discovery across components. We’ll demonstrate provisioning a Databricks workspace for telemetry analytics, a graph layer for asset relationships, and connecting them with cloud storage — all as code. With Consul, these components can discover and communicate reliably across environments. Attendees will leave with a practical understanding of how IaC enables scalable IoT analytics and digital twin platforms, making deployments repeatable, observable, and cloud‑agnostic.
AI/ML-Powered Student Early Alert Systems: Proactive & Personalized
As K-12 education institutions strive to improve student retention and achievement, AI-powered early alert systems are emerging as a game-changer. This session will explore how integrating artificial intelligence and data analytics can help institutions proactively identify students in need of support, academically, financially, or emotionally. We will draw on use cases and lessons learned from the corporate world and explore their application in the education sector.
AI Security Basics for Protecting Shared AI Workloads in Cozystack
In shared environments like Cozystack, simple missteps, from weak tenant isolation to unsecured model training data, can lead to data leakage, compliance issues, or even trust failures. This session introduces attendees to the fundamentals of AI security in multi-tenant environments, covering common risks, practical best practices, and easy wins that every team can adopt. Instead of diving into research-heavy adversarial techniques, we’ll start with the real-world basics: access controls, safe data handling, monitoring, and tenant boundaries.
Attendees will walk away with a starter playbook for building secure and reliable AI workloads in Cozystack without needing deep security expertise.
AI for PLG + ABM in Cyber: Score, Segment, Sequence
Merge product telemetry with account intent to prioritize targets, shape messaging & orchestrate plays automatically. In this fast‑paced session, I will outline features for propensity scoring, micro‑segmentation patterns, and activation across paid, email, and SDR sequences. You’ll get a lean blueprint to replace the MQL hamster wheel with signal‑based GTM that respects buyers and accelerates deal cycles, without rebuilding your entire stack.
Adversarial AI in Cybersecurity: How Attackers Trick Detection Models
As machine learning becomes embedded in SOC workflows, attackers are learning to exploit its weaknesses. This session explores the emerging field of adversarial AI in cybersecurity, showing how models can be evaded, poisoned, and manipulated. We’ll walk through real tactics adversaries use, from subtly engineered inputs that bypass classifiers to data poisoning that corrupts training sets. More importantly, we’ll outline defensive strategies to build resilience into your AI-driven security pipelines.
Attendees will leave with a grounded understanding of adversarial ML threats and practical steps to avoid being blindsided as AI adoption accelerates in defense tools.
Adversarial AI in Cybersecurity - Hardening Enterprise ML Models
This session provides a practical framework to secure enterprise-scale AI systems in Azure, including Copilot deployments, Azure Machine Learning workloads, and Fabric AI. We’ll ground the discussion in real-world case studies and the MITRE ATLAS framework to reveal how adversaries exploit AI pipelines and how to defend against them. Instead of features, we’ll share strategic lessons learned and design patterns partners can use to help their customers safely innovate with AI.
Attendees will walk away with actionable guidance to differentiate their partner services by enabling organizations to adopt AI confidently, turning security into a business accelerator rather than a blocker.
Adversarial AI and Safeguarding Enterprise Machine Learning Models with Azure
This workshop takes participants deep into the threat landscape of adversarial attacks against AI models & provides hands-on exercises to develop defensive strategies. Using case studies & guided labs, we’ll explore how poisoning, evasion & prompt injection attacks manifest in enterprise contexts & how defensive methods like secure MLOps, red-teaming, & MITRE ATLAS can be applied.
Attendees will be engaged with frameworks, architectures & response patterns that protect AI pipelines, models & outputs across Azure deployments. By the end, participants will have a partner-ready methodology to bring back to their clients, turning AI security into a differentiator for trust, credibility & competitive advantage.
Advanced AI/ML Analytics on ERP Data (SAP S/4HANA + Databricks)
ERP systems like SAP S/4HANA hold valuable finance, supply chain, and operations data. However, it’s often underutilized beyond reporting. In this session, we’ll explore how to unlock SAP as well as other ERP (NetSuite, Workday etc.) data with Databricks for AI and ML. We’ll walk through data engineering patterns for staging SAP data in Delta Lake, then demonstrate how to feed it into predictive demand models, supplier risk monitoring, and even GenAI-powered auditing assistants.
Attendees will see working examples of advanced analytics pipelines without duplicating ERP systems. By the end, you’ll know how to turn static ERP records into predictive and prescriptive insights powered by Databricks.
End-to-End Machine Learning Pipelines on Databricks: From Data Ingestion to Model Deployment
See the full lifecycle of ML on Databricks, including data engineering, feature engineering, model training, and deployment.
Integrating Microsoft Fabric and Databricks for Modern Data Analytics
This session will explore the strategic integration of Microsoft Fabric and Databricks to build a robust, cloud-centric modern data analytics ecosystem. We will delve into how these powerful platforms complement each other, leveraging Microsoft Fabric's comprehensive suite for data integration, governance, and business intelligence with Databricks' advanced capabilities for data engineering, AI/ML, and complex analytics at scale.
Attendees will learn best practices for seamless data flow, optimizing performance, and unlocking deeper insights by combining the strengths of both environments. We will be drawing on real-world examples from FinTech and E-commerce to illustrate practical implementation strategies and benefits for enterprise data architecture.
Finance Copilots That CFOs Trust: Policy-as-Code, Lineage, and Evidence in 30 Days
This session gives CFO/FP&A leaders a practical blueprint to deploy AI safely and credibly by defining policy-as-code for sensitive data and model use. I will discuss grounding AI outputs with cite-or-fail retrieval, capturing lineage and approvals, and ways to implement lightweight evaluation gates before rollout. I will also share a 30/60/90 plan, metrics to monitor accuracy and risk, and templates for audit-ready evidence. Attendees leave with a governance starter kit that accelerates benefits without compromising compliance.
Key Takeaways-
#1 A step-by-step policy-as-code and lineage checklist for AI in finance workflows
#2 Guardrail patterns (source citations, PII redaction, human-in-the-loop thresholds)
#3 CFO-ready metrics and evidence pack for audits and board updates
(Live Demo) Fast, Auditable Close: Driver-Based Allocations & Variance Explain with a Lakehouse
This session shows how to standardize cost objects and drivers, run transparent allocations at scale and produce variance narratives leaders understand. I will cover a pragmatic data model, quality checks before posting and lineage so auditors can see “input → allocation → result.” A short demo will highlight how a lakehouse approach streamlines allocations and generates explainable variance by entity, product, or channel, ready for CFO review.
Key Takeaways-
#1 A driver-based allocation blueprint that’s explainable and repeatable
#2 How to generate variance narratives and drill-downs leaders can act on
#3 A data lineage checklist to make the close both faster and auditable
(Live Demo) Board‑Ready Variance Answers - A Grounded, Cited, Auditable AI Assistant on Databricks
In this session, I’ll live‑demo a CFO‑ready AI Assistant built on a lakehouse that answers only from governed data, always shows its sources, and leaves an audit trail. You’ll see how grounding restricts the assistant to trusted actuals/budgets/drivers; how “cite‑or‑fail” eliminates hallucinations and how telemetry logs every interaction (who asked, which tables, which filters, summary) so Finance can trust and audit results. I will also cover simple refusal logic for low‑trust queries and a 30‑day rollout plan to pilot this in your environment. Walk away with a repeatable pattern and starter templates to deliver board‑grade variance narratives that are accurate, sourced, and explainable (demonstrated live on Databricks).
Key Takeaways-
#1 Implement a grounded, “cite‑or‑fail” copilot pattern that eliminates hallucinations
#2 Capture audit‑ready evidence with telemetry linking prompt → data sources → filters → summary for board and audit
#3 Apply a practical 30‑day rollout plan: scope, data curation, refusal rules, evaluation gates and success KPIs
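The telemetry piece can be as simple as an append-only log of every exchange. The sketch below is a minimal illustration with an assumed record schema; in a Databricks deployment the records would land in a Delta table rather than being printed.

```python
# Interaction-telemetry sketch (hypothetical schema): record who asked,
# which tables and filters were used, and what the assistant returned.
import json
import uuid
from datetime import datetime, timezone

def log_interaction(user, question, tables, filters, summary, refused):
    record = {
        "interaction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,          # who asked
        "question": question,
        "tables": tables,      # which governed tables were read
        "filters": filters,    # which filters were applied
        "summary": summary,    # what the assistant returned
        "refused": refused,    # True when cite-or-fail blocked the answer
    }
    print(json.dumps(record))  # stand-in for an append to an audit table
    return record

log_interaction("cfo@example.com", "Why is Q3 opex over budget?",
                ["gl_actuals", "budget_v3"], {"period": "2025-Q3"},
                "Opex is 6% over budget, driven by contractor spend.", refused=False)
```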
SHRM Expo26 - Orlando Upcoming
AI-driven HRtelligence - Uncovering Hidden Internal Talent for Future-Ready Organizations (Live Demo)-
This session reveals how AI & modern data platforms can revolutionize internal mobility by shifting to a skills-based approach. Learn to build a "skills graph" using data architectures like the Lakehouse and graph databases. This enables HR to precisely identify skill adjacencies, forecast future talent needs, and proactively uncover hidden internal candidates for emerging roles, fostering an agile and resilient workforce.
I will provide practical strategies for implementing an AI-driven talent intelligence system that maps current capabilities and predicts future requirements. Discover how to build a future-ready organization that maximizes its most valuable asset: its people, through responsible AI practices.
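As a self-contained illustration of the skill-adjacency idea, the sketch below models employee-skill edges with networkx instead of a full graph database; the employees, skills, and the simple adjacency heuristic are placeholders.

```python
# Skills-graph sketch (hypothetical data): find people who do not yet hold a
# target skill but share other skills with those who do.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Alice", "SQL"), ("Alice", "Python"), ("Alice", "Forecasting"),
    ("Bob", "Python"), ("Bob", "Machine Learning"),
    ("Carol", "SQL"), ("Carol", "Forecasting"),
])

target = "Machine Learning"
holders = set(G.neighbors(target))              # people who already have the skill
adjacent_candidates = {
    person
    for holder in holders
    for skill in G.neighbors(holder)            # skills those holders also have
    for person in G.neighbors(skill)            # people who share those skills
} - holders - {target}

print("Current holders:", holders)                     # {'Bob'}
print("Reskilling candidates:", adjacent_candidates)   # {'Alice'}
```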
SHRM Talent 2026 Upcoming
Title - AI-driven HRtelligence - Uncovering Hidden Internal Talent for Future-Ready Organizations
Description - Organizations struggle to fully leverage their internal talent. This session reveals how AI and modern data platforms can revolutionize internal mobility by shifting to a skills-based approach. Learn to build a "skills graph" using data architectures like the Lakehouse and graph databases. This enables HR to precisely identify skill adjacencies, forecast future talent needs, and proactively uncover hidden internal candidates for emerging roles, fostering an agile and resilient workforce.
We'll provide practical strategies for implementing an AI-driven talent intelligence system that maps current capabilities and predicts future requirements. Discover how to build a future-ready organization that maximizes its most valuable asset: its people, through responsible AI practices.
Learnings -
Understand how AI and data platforms can enable a skills-based approach to internal talent management
Learn to identify and map hidden internal talent using advanced data strategies like "skills graphs"
Discover actionable methods for leveraging AI to enhance internal mobility, reskilling, and workforce planning
Austin SHRM Breakfast Event Upcoming
Confidential by Design HR: Protecting Employee Data in the Age of GenAI -
This session gives HR leaders a practical playbook to harness AI while safeguarding PHI/PII and complying with privacy laws. We’ll cover policy-as-code guardrails, safe prompt patterns, redaction/classification, vendor risk in HRIS/ATS/LMS, and employee consent/disclosure language. Attendees will leave with a checklist for SHRM/HRCI-aligned Legal, Risk & Ethics governance and a 90-day rollout plan.
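As one concrete example of the redaction step, the sketch below masks a few obvious PII patterns before text is sent to a GenAI tool. The regular expressions are illustrative rather than production-grade; a real deployment would pair them with a proper classifier and review workflow.

```python
# Minimal PII-redaction sketch (illustrative patterns only).
import re

PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or 512-555-0100, SSN 123-45-6789."))
```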
General HR topics Upcoming
Agentic AI in HR: Automating Operations, Enhancing Employee Experience, and Ensuring Ethical Oversight -
This session explores the transformative potential of agentic AI—AI systems that can act autonomously to achieve specific goals—within HR operations. We will delve into practical applications, from automating routine tasks like onboarding and query resolution to proactively identifying employee needs and personalizing development paths. The discussion will also critically examine the ethical considerations and necessary oversight mechanisms to ensure these AI agents operate fairly, transparently, and in alignment with organizational values and employee well-being.
General HR topics Upcoming
Cybersecurity for HR: Protecting Your People, Your Data, and Your Organization from Evolving Threats -
In an era of increasing digital threats, HR departments are prime targets due to the vast amount of sensitive employee data they manage. This presentation will equip HR professionals with essential cybersecurity knowledge and practical strategies to protect their organization's most valuable assets: its people and their data. We will cover best practices for data governance, identifying common attack vectors, fostering a security-aware culture among employees, and building resilience against sophisticated cyber threats.
Society of Corporate Compliance - Regional Event Upcoming
AI's Wild Wild West: Taming LLM Risks for Corporate Compliance-
This session will provide a guide for IT and compliance leaders on how to tame these emerging risks. I will cover essential strategies for establishing AI governance frameworks, implementing data ingress/egress controls for LLM interactions and developing ethical AI guidelines that protect sensitive information and maintain public trust.
Learn how to identify shadow AI, conduct effective risk assessments and build a proactive compliance posture that transforms AI from a liability into a trusted asset.
Understand key AI/LLM-driven compliance risks (data leakage, bias, IP) and their impact on Corporate Compliance
Learn actionable strategies for establishing AI governance frameworks and data controls within your organization
Develop a roadmap for proactive compliance, transforming AI risks into managed opportunities
cybersecuritymarketingsociety.com Upcoming
(Talk or Live Demo) Building an AI-Native GTM Engine: From Signals to Pipeline -
Learn how to operationalize AI across the funnel, unifying intent, product usage and web signals to trigger next-best actions automatically. I will cover data design, model selection (propensity, uplift), routing and governance to deliver a signal-led GTM that boosts conversion and velocity without the MQL hamster wheel. Attendees will take away a signal taxonomy and scoring template, a playbook for AI-triggered sequences, and a measurement model for pipeline influence.
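To ground the propensity-scoring idea, here is a small sketch on synthetic data; the feature names and the trigger threshold are assumptions made for illustration only.

```python
# Propensity-scoring sketch (synthetic data): combine intent, usage, and web
# signals into one score that triggers the next-best action above a threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
# Hypothetical features: [intent_score, weekly_active_users, pricing_page_visits]
X = rng.random((500, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + 0.1 * rng.standard_normal(500) > 0.6).astype(int)

model = LogisticRegression().fit(X, y)

new_account = np.array([[0.9, 0.7, 0.4]])       # signals for one account
propensity = model.predict_proba(new_account)[0, 1]
action = "trigger AI sequence" if propensity > 0.7 else "keep nurturing"
print(f"propensity={propensity:.2f} -> {action}")
```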
LLMs That CISOs Trust: Generating Technical Content Without Hallucinations-
A practical framework for using LLMs to produce credible, technical cyber content: grounded retrieval (RAG), fact-checking loops, style guards, and SME-in-the-loop review. See workflows for briefs, threat writeups, sales sheets, and analyst responses that enhance accuracy and brand voice. Attendees will take away a RAG prompt pack, a QA checklist to reduce hallucinations, and an SME review workflow.
AI for PLG + ABM in Cyber: Score, Segment, and Sequence at Scale-
Combine product telemetry with account intent using ML to prioritize accounts, tailor messaging, and auto-orchestrate programs across paid, email, and SDR. We’ll show uplift modeling, micro-segmentation, and how to align ops for clean handoffs from marketing to sales. Attendees will take away a feature map for propensity models, an ABM micro-segmentation playbook, and an SDR/MOPs activation blueprint.
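The uplift step can be illustrated with a simple two-model ("T-learner") approach on synthetic data; the features, treatment flag, and segment size below are placeholders rather than a recommended production design.

```python
# Two-model uplift sketch (synthetic data): score each account by the
# difference between predicted conversion with vs. without outreach.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.random((1000, 4))                  # usage + intent features (hypothetical)
treated = rng.integers(0, 2, 1000)         # 1 = account received outreach
# Synthetic outcome: outreach helps accounts with a high first feature
y = ((0.4 * X[:, 0] + 0.3 * treated * X[:, 0] + 0.1 * rng.random(1000)) > 0.4).astype(int)

m_treated = LogisticRegression().fit(X[treated == 1], y[treated == 1])
m_control = LogisticRegression().fit(X[treated == 0], y[treated == 0])

uplift = m_treated.predict_proba(X)[:, 1] - m_control.predict_proba(X)[:, 1]
top_segment = np.argsort(uplift)[-10:]     # accounts to prioritize for SDR outreach
print("Highest-uplift accounts:", top_segment)
```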
IRM UK
Policy-as-Code for Security & Lineage: Active AI and Data Governance for CIOs & CISOs-
This session presents a leadership blueprint to converge data, security and AI governance: policy-as-code, automated lineage and access (by identity and purpose), model risk controls, and runtime guardrails that produce evidence for auditors. We’ll show how to align IT operating models with GDPR/NIS2 obligations, embed zero-trust and data minimization, and measure value with business-aligned KPIs. The outcome is trusted, explainable analytics and AI that accelerate growth while standing up to scrutiny.
Texas Education Conference
Agentic AI for Event Project Management: Smarter Cost Tracking and Planning with Databricks -
In this hands-on session, we’ll demonstrate how Databricks Community Edition combined with Agentic AI can act as a project co‑pilot—helping to clean messy budget spreadsheets, reconcile vendor invoices, and generate natural‑language insights (“Why is catering 12% over budget?”). By applying IT project management principles and leveraging agentic AI automation, attendees will see how to track burn rate, detect variances, and document project risks with less manual effort. We’ll build a quick live demo: importing event budgets into Databricks, standardizing expense categories, visualizing variance, and then asking the AI agent for explanations and recommendations. Participants will leave with a repeatable cost-tracking template and a glimpse into how AI‑assisted project governance can elevate both efficiency and trust in event delivery.
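The variance math at the heart of the demo is simple; the sketch below shows a pandas version with made-up categories and amounts, which on Databricks would typically run against Delta tables instead of in-memory frames.

```python
# Budget-vs-actual variance sketch (illustrative event data).
import pandas as pd

budget = pd.DataFrame({"category": ["Catering", "Venue", "AV"],
                       "budget":   [50_000, 80_000, 20_000]})
actuals = pd.DataFrame({"category": ["Catering", "Venue", "AV"],
                        "actual":   [56_000, 78_000, 21_500]})

report = budget.merge(actuals, on="category")
report["variance"] = report["actual"] - report["budget"]
report["variance_pct"] = (report["variance"] / report["budget"] * 100).round(1)
# Flag lines worth asking the agent about ("Why is catering 12% over budget?")
report["flag"] = report["variance_pct"].abs() > 10
print(report)
```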
innovateenergynow.com
(Live Demo) Production-Grade MLOps for Industrial Perception-
Demonstrates an end-to-end Databricks pipeline for drone/robot imagery and time-series data: Delta Lake ingestion, feature engineering, model training, MLflow experiment tracking, Model Registry, and drift monitoring. Covers rollout/rollback runbooks, SLAs, and synthetic data for edge cases. Attendees see how to maintain consistent model performance across assets, sites, and seasons.
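The experiment-tracking step can be sketched with MLflow's standard APIs; the toy classifier and metric below stand in for the imagery and time-series models the session actually covers.

```python
# MLflow tracking sketch (synthetic data): log params, a metric, and the
# trained model so it can later be promoted through the Model Registry.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="perception_baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")   # promoted later via the Model Registry
```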
Securing the AI/ML Pipeline: Edge-to-Cloud Protection-
Outlines a defense-in-depth approach for AI in energy operations. Discusses risks like data poisoning, model evasion, and IP theft; hardening inference at the edge; identity and access controls; encrypted lineage in Unity Catalog; and continuous adversarial monitoring. Provides a checklist for secure deployment and incident response tailored to industrial AI.
(Live Demo) Agentic AI for Autonomous Operational Decisions-
Explores architecting safe, auditable agentic AI on Databricks that detects anomalies, reasons with policy constraints, and proposes actions (e.g., work orders) with human-in-the-loop approvals. Shows safety envelopes, oversight, and audit trails, enabling faster, trusted optimization of production and maintenance.
Data Governance for AI: Building a Trusted Foundation-
Presents a practical governance blueprint: data quality metrics, contracts, lineage from capture to model input, metadata standards, and access policies. Addresses bias mitigation, compliance, and vendor data onboarding. Attendees leave with templates for a scalable, governed AI data foundation.
CVision - IBM AWS F1 Roundtable Austin
Confidential by Design: Managing Protected Information in the Age of LLMs-
This session offers a strategic CTO & Board Advisor perspective on governing data in an AI-driven world: specifically, how to classify and protect critical information assets, implement “confidential by design” controls and enforce zero-trust principles across cloud and SaaS LLM integrations. I will also explore regulatory expectations under GDPR, NIS2 and emerging AI frameworks, and share a pragmatic blueprint for balancing innovation with compliance. Attendees will gain actionable models, governance patterns and messaging to ensure their organizations benefit from LLM adoption without compromising trust.