Shaurya Agrawal
Start-up CTO & Board Advisor
Austin, Texas, United States
Shaurya Agrawal is a Data & Analytics leader with 25+ years of experience driving transformative initiatives across Tech/SaaS, E-commerce, and FinTech. With expertise in AI/ML, Enterprise Data Architecture, and BI, he has led impactful projects, creating customer-centric solutions and modernizing data platforms. As CTO of YourNxt Technologies, a mobile tech start-up, and Board Advisor to Hoonartek, Shaurya shapes global data strategies.
Holding an MBA and pursuing an MS in Data Science from UT Austin, Shaurya leverages data to unlock business value, specializing in unified customer views and personalized experiences.
Area of Expertise
Topics
Disrupting Defaults: Smarter Credit Risk w/ Neo4j Graphs
In this lightning talk, Shaurya Agrawal will demonstrate how Neo4j’s graph database technology can revolutionize credit risk management for Financial and FinTech firms by uncovering hidden relationships between entities. Using a real-world scenario in which Company A, Company B, and Company C are subsidiaries with varying ownership under a common parent, you will see how traditional systems often miss these indirect connections—potentially underestimating aggregate exposure and risk. With Neo4j, you will learn how to model and visualize complex corporate hierarchies, instantly revealing cross-entity dependencies and shared liabilities.
Attendees will discover how graph queries can surface risk concentrations, identify circular ownership, and support more informed credit decisions. This session will show you how leveraging Neo4j’s relationship-first approach enables smarter, faster risk assessment—empowering you to move beyond the limitations of legacy, table-based systems.
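As a taste of the relationship-first approach, here is a minimal sketch of the kind of roll-up query the session describes, using the Neo4j Python driver. The node labels, relationship types, and property names (:Company, :OWNS, :HAS_EXPOSURE, amount) are illustrative assumptions, not a prescribed data model.

```python
# Minimal sketch: aggregate credit exposure across a corporate hierarchy in Neo4j.
# Labels, relationship types, and properties below are illustrative assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

ROLLUP_QUERY = """
MATCH (parent:Company {name: $parent})-[:OWNS*1..]->(sub:Company)
OPTIONAL MATCH (sub)-[e:HAS_EXPOSURE]->(:Facility)
RETURN parent.name                 AS parent,
       collect(DISTINCT sub.name)  AS subsidiaries,
       sum(e.amount)               AS aggregate_exposure
"""

with driver.session() as session:
    record = session.run(ROLLUP_QUERY, parent="Parent Holdings").single()
    print(record["subsidiaries"], record["aggregate_exposure"])

driver.close()
```

Because ownership is traversed as a variable-length path rather than joined table by table, indirect subsidiaries are included in the exposure total automatically.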
Smarter Credit Risk with Databricks + Neo4j Graphs
Hidden relationships between corporate entities are often missing from traditional risk systems, leading to underestimated exposures. This session demonstrates how Databricks + Neo4j can transform credit risk modeling. We’ll integrate ownership structures and transactions into Delta Lake and apply Cypher graph queries to detect circular ownership, shared liabilities, and exposure chains. With Databricks ML, we’ll take these graph insights further, enabling predictive models for entity-level risk and portfolio concentrations.
Attendees will see a novel architecture that unifies graph algorithms with the scalability of Databricks, delivering faster and smarter credit risk decisions.
Visualizing Risk in Cozystack: A Graph Approach
Multi‑tenant platforms like Cozystack make it possible to run powerful shared data and AI workloads, but visibility across tenants, roles, and resources can quickly get complex. Logs alone often don’t reveal how different identities and workloads are actually connected. This session introduces a beginner‑friendly framework for applying graph thinking to Cozystack, helping teams model and visualize cross‑tenant relationships. Through simple examples, attendees will see how graphs highlight over‑permissioned accounts, hidden dependencies, and potential risk paths that traditional reporting may miss. No graph theory background required.
Participants will leave with practical ideas and methods they can use right away to make Cozystack deployments more secure, transparent, and trusted.
Signal-Led Growth: A CEO/CMO Playbook for AI-Driven GTM Efficiency
Boards want growth with fewer dollars. This session shows CEOs/CMOs how to pivot from MQLs to a signal-led operating model that unifies intent, product usage, and account context to trigger next-best actions automatically. I will cover the executive blueprint: what to instrument, how to govern data and AI risk, where AI creates leverage (propensity, uplift, personalization), and how to prove impact to the board. Expect clear org design, KPI trees, and vendor guardrails so you can scale pipeline and win rates without bloating headcount or tooling.
Securing AI and Test Automation Pipelines with HashiCorp Vault on Databricks
AI and ML workflows increasingly drive customer‑facing applications, but they depend on dozens of sensitive credentials: API keys, service accounts, model endpoints, cloud secrets. Too often, these credentials are stored in notebooks, Git repos, or CI/CD pipelines — a major security risk. In this session, we’ll showcase how HashiCorp Vault can seamlessly secure Databricks‑based AI pipelines, focusing on QA and test automation for conversational AI. You’ll see how using Vault to manage secrets reduces risk, enforces compliance, and provides dynamic secrets for ephemeral jobs. We’ll walk through a simple pattern: Vault securely stores API keys for testing LLM workflows, while Databricks consumes them in QA pipelines without exposing credentials. Attendees will leave with a clear blueprint for making AI automation pipelines not just smarter, but safer.
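To illustrate the pattern, here is a minimal sketch of a Databricks QA job pulling an LLM API key from Vault at runtime using the hvac client. The Vault address, secret path, and key names are illustrative assumptions.

```python
# Minimal sketch: a Databricks QA job reading an LLM API key from HashiCorp Vault
# at runtime instead of storing it in the notebook, repo, or job configuration.
# The Vault address, secret path, and key names are illustrative assumptions.
import os
import hvac

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],     # e.g. injected by a cluster init script
    token=os.environ["VAULT_TOKEN"],  # short-lived token for the ephemeral job
)

secret = client.secrets.kv.v2.read_secret_version(path="qa/conversational-ai")
api_key = secret["data"]["data"]["llm_api_key"]

# Expose the key only to this job's test process for the duration of the run.
os.environ["LLM_API_KEY"] = api_key
```

The same pattern extends to dynamic secrets, so each ephemeral test run can receive credentials that expire when the job finishes.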
Security Lakehouse in Action: Using Databricks for Scalable Threat Hunting
Security data is exploding across on-prem, multi-cloud, and SaaS systems, overwhelming traditional SIEMs. This session demonstrates how Databricks can act as a Security Lakehouse, enabling scalable threat hunting and advanced analytics on diverse telemetry. We’ll showcase how to consolidate logs, streams, and identity data into Delta Lake, and then apply graph analytics and machine learning to uncover hidden attack paths.
Attendees will learn how the Lakehouse approach unlocks deep visibility, reduces detection blind spots, and empowers security teams to hunt threats across massive hybrid environments, faster and smarter than with traditional tooling.
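As a small example of the consolidation step, the sketch below (assuming a Databricks notebook where `spark` is predefined, and hypothetical table and column names) unifies sign-in and network-flow telemetry in Delta Lake into an identity-to-resource edge list that graph analytics can consume.

```python
# Minimal sketch: consolidate security telemetry in Delta Lake and derive an
# entity edge list for downstream graph analytics. Table and column names are
# illustrative assumptions.
from pyspark.sql import functions as F

signins = spark.read.table("security.bronze_signin_logs")    # e.g. identity provider sign-ins
flows   = spark.read.table("security.bronze_network_flows")  # e.g. cloud flow logs

edges = (
    signins.select(
        F.col("user_principal_name").alias("src"),
        F.col("resource_id").alias("dst"),
        F.lit("SIGNED_IN_TO").alias("relationship"),
        "event_time",
    )
    .unionByName(
        flows.select(
            F.col("source_host").alias("src"),
            F.col("dest_host").alias("dst"),
            F.lit("CONNECTED_TO").alias("relationship"),
            F.col("flow_start").alias("event_time"),
        )
    )
)

edges.write.format("delta").mode("append").saveAsTable("security.silver_entity_edges")
```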
Scaling Analytics Without ETL Nightmares: Trino + Databricks in the Data Fabric
Enterprises today struggle with sprawling data estates — Delta Lake in Databricks for ML workloads, Snowflake for BI, and legacy relational systems still running critical operations. The default answer is usually another pipeline, another data copy, and yet another layer of complexity.
In this session, we’ll share how combining Trino’s federated query engine with Databricks’ lakehouse platform creates an agile data fabric that eliminates unnecessary ETL. With Trino, we can query Delta tables in Databricks directly alongside Snowflake, S3, or even legacy RDBMS — all through one SQL interface. This means faster time to insight, fewer brittle pipelines, and a unified view across silos.
We’ll explore architectural patterns, performance considerations for querying Delta Lake via Trino, and how this approach empowers teams to democratize analytics without duplication. Attendees will leave with practical examples of making Databricks a first‑class participant in a broader federated architecture, powered by Trino.
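To show what "one SQL interface" looks like in practice, here is a minimal sketch using the Trino Python client to join a Delta table with a Snowflake table in a single query. The host, catalog, schema, and table names are illustrative assumptions.

```python
# Minimal sketch: one federated Trino query joining a Delta Lake table with a
# Snowflake table, with no copy and no new pipeline. Catalog, schema, and table
# names are illustrative assumptions.
import trino

conn = trino.dbapi.connect(
    host="trino.internal.example.com",
    port=443,
    user="analyst",
    http_scheme="https",
)

SQL = """
SELECT o.customer_id,
       SUM(o.order_total) AS lifetime_value,
       MAX(c.segment)     AS bi_segment
FROM delta.sales.orders      AS o   -- Delta table managed in Databricks
JOIN snowflake.crm.customers AS c   -- Snowflake table queried in place
  ON o.customer_id = c.customer_id
GROUP BY o.customer_id
"""

cur = conn.cursor()
cur.execute(SQL)
rows = cur.fetchall()
```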
Scaling Analytics Platforms as Code: Automating Databricks with Terraform
Data platforms must move as fast as the businesses they serve. Yet, provisioning workspaces, clusters, and secure storage for analytics often involves manual, error‑prone processes. In this session, we’ll explore how to use HashiCorp Terraform to provision Databricks Lakehouse infrastructure as repeatable, version‑controlled code. Attendees will see a live walkthrough of using Terraform to spin up Databricks workspaces, configure clusters, and integrate cloud storage, all with just a few lines of HCL and terraform apply. We’ll also discuss patterns for scaling this approach across teams, applying GitOps to infrastructure, and improving cloud cost efficiency by standardizing data platform deployment.
Revolutionizing Data Governance with Databricks Lakehouse Architecture
"The Databricks lakehouse architecture represents a transformative approach to data management by seamlessly unifying the flexibility of data lakes with the reliability and performance of data warehouses. This convergence simplifies data governance by providing a single platform that supports both traditional OLAP workloads and modern AI/ML applications. Attendees will gain insights into how this architecture enables consistent data quality, robust security controls, and streamlined compliance processes, addressing the complex challenges organizations face in managing diverse data environments.
This session will also delve into best practices for implementing effective governance across the entire data lifecycle, including DataOps, MLOps, and ModelOps. By integrating governance into these operational workflows, organizations can ensure that data and models remain trustworthy, auditable, and compliant with regulatory requirements. Participants will leave equipped with practical strategies to leverage the lakehouse paradigm to drive innovation while maintaining control and transparency over their data assets."
Optimizing Analytics Spend: Using Trino to Complement Databricks for Cost‑Efficient Workloads
As data teams scale, one of the biggest pain points is runaway compute and storage cost from platforms designed for heavy workloads but misused for everyday analytics. Databricks shines at large‑scale ML and advanced analytics, but using it as a catch‑all BI engine can quickly inflate budgets.
In this talk, we’ll show how layering Trino’s high‑performance federation with Databricks provides a cost‑optimized analytics strategy. Rather than keeping expensive SQL warehouses or large clusters hot, we can offload exploratory queries, BI workloads, and cross‑system joins to Trino. Delta Lake tables in Databricks remain accessible through Trino, but now joined seamlessly with data in S3, Kafka, or cloud warehouses — at lower cost.
We’ll walk through real patterns where Trino reduced compute bills while still enabling Databricks to focus on what it does best: advanced ML/AI and scalable storage. Attendees will learn a pragmatic approach to balancing cost, performance, and flexibility by leveraging Trino and Databricks together instead of in silos.
Operationalizing ML Governance: A Practical Guide to MLOps in Microsoft Fabric
This session will focus on the practical implementation of MLOps within Microsoft Fabric to achieve effective AI/ML model governance. We'll walk through the end-to-end process of building automated pipelines for model training, deployment, monitoring, and retraining, emphasizing how Fabric's integrated tools (Synapse, Data Factory, MLflow) facilitate version control, experiment tracking, and continuous validation. We'll also discuss strategies for seamless integration with Power BI for operational reporting and real-time insights into model performance and governance metrics.
Operational Intelligence for CX (LLM + Databricks)
Companies manage a firehose of unstructured communication — emails, letters, calls, chat transcripts — that contain insights into customer needs, risks, and compliance. In this session, we’ll show how to apply Databricks + LLMOps pipelines to structure and analyze these interactions at scale. Using Delta Lake for storage, Spark NLP for preprocessing, and GenAI models for classification and summarization, we’ll extract themes, sentiment, and risk flags. The outputs feed back into Customer Support/Call Center and Operations & Service workflows.
Attendees will learn repeatable architectures for unifying communications data into their Lakehouse and see how unstructured → structured transformation unlocks new intelligence to improve operations and customer experience (CX).
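As one illustration of the unstructured → structured step, the sketch below (assuming a Databricks notebook where `spark` is predefined) wraps an LLM call in a UDF and lands the extracted fields in Delta. The `classify_interaction` helper is a hypothetical stand-in for whatever model endpoint the pipeline calls; table and column names are also assumptions.

```python
# Minimal sketch: turning raw support emails into structured rows in Delta Lake.
# `classify_interaction` is a hypothetical stand-in for the deployed LLM endpoint;
# table and column names are illustrative assumptions.
import json
from pyspark.sql import functions as F, types as T

def classify_interaction(text: str) -> str:
    # Placeholder: the real implementation would call the deployed LLM endpoint
    # and return a JSON string with theme, sentiment, and risk_flag fields.
    return json.dumps({"theme": "billing", "sentiment": "negative", "risk_flag": "complaint"})

classify_udf = F.udf(classify_interaction, T.StringType())

emails = spark.read.table("cx.bronze_raw_emails")

structured = (
    emails
    .withColumn("llm_json", classify_udf("body"))
    .withColumn("theme", F.get_json_object("llm_json", "$.theme"))
    .withColumn("sentiment", F.get_json_object("llm_json", "$.sentiment"))
    .withColumn("risk_flag", F.get_json_object("llm_json", "$.risk_flag"))
)

structured.write.format("delta").mode("append").saveAsTable("cx.silver_classified_interactions")
```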
Marketing Agent on a Laptop — From Signals to SDR Task in 30 Minutes
Build a lightweight “marketing agent” that ingests sample buying signals, scores opportunities, drafts personalized outreach, and posts tasks to Slack/CRM—live, on stage. I will use provided templates and a simple dataset to keep it practical, with guardrails for data privacy and content quality. You’ll leave with a reusable starter kit to pilot agents safely in your GTM stack and a rubric to evaluate agent actions before production.
Machine Learning for Cyber Defense: Building Adversary-Aware Models on Databricks
As cyber threats grow more evasive, traditional detection methods often fall short. This session explores how to leverage Databricks as a Security Lakehouse to build ML-driven defenses that adapt to adversary tactics. We’ll dive into creating adversary-aware models that detect anomalies, recognize lateral movement, and retrain continuously as attackers evolve. Using Databricks capabilities such as Delta Lake, LakeFlow pipelines, and MLlib, security teams can operationalize scalable detection without drowning in noise.
Attendees will learn practical steps to develop resilient ML workflows, mitigate adversarial risks, and empower SOCs with data-driven insights that anticipate and counter attacker innovations in real time.
LLMs That CISOs Trust: RAG, QA Loops, and SME‑in‑the‑Loop
This session translates security governance into practical GTM guardrails. I will cover policy‑as‑code for prompts and data use, lineage and access by purpose, evaluation gates for AI outputs, and vendor risk clauses. You’ll get a compact governance kit that protects customers and brand without slowing campaigns, built for MOPs to implement and audit with minimal friction.
Lateral Movement in Hybrid Clouds - A Framework for Threat Discovery
This session introduces a practical framework for detecting lateral movement across hybrid clouds, grounded in lessons learned from enterprise-scale environments. We’ll explore how attackers exploit trust relationships in Azure AD/Entra, APIs & multi-cloud connectors. We’ll share strategic models and playbooks for mapping privileged pathways, using graph-powered analysis, and prioritizing high-value choke points.
Attendees will leave with an actionable framework they can apply in their partner practices, enabling them to advise clients, align with Microsoft’s Zero Trust principles, and deliver differentiated managed detection services across hybrid and multi-cloud estates.
Hybrid Cloud Threat Hunting: Visualizing Lateral Movement
Modern adversaries exploit the seams between on-premises and multi-cloud environments, making lateral movement harder to detect. This session dives into advanced threat hunting techniques for hybrid cloud infrastructures. We'll explore how to unify disparate security telemetry, from cloud identity providers (IAM, Azure AD) to API logs and workload activity, and how to construct a holistic view of attacker progression.
Attendees will learn practical strategies for visualizing complex attack paths and identifying anomalous lateral movement across diverse cloud services and traditional networks, gaining actionable insights to uncover hidden threats that bypass conventional security controls in today's interconnected enterprise.
Graph-Powered Threat Hunting: Uncovering Hidden Attack Paths in Complex Systems
Traditional security tools often miss sophisticated threats that exploit complex relationships across users, devices, and applications. This session will demonstrate how graph technology, integrated with scalable data platforms like Databricks, offers a powerful new approach to cybersecurity. By mapping these intricate connections into a graph database, security teams can visualize and analyze the entire attack surface, revealing hidden lateral movement and anomalous behaviors that are invisible in siloed data.
Attendees will learn practical strategies for leveraging graph-based threat models to enhance proactive threat hunting. This method enables precise identification of critical vulnerabilities and attack paths, allowing security professionals to prioritize defenses more effectively. Discover how connected insights can transform your security operations, empowering your team to detect advanced threats and strengthen overall organizational resilience.
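For a concrete flavor of graph-based hunting on Databricks, here is a minimal sketch using GraphFrames to search for short paths from a flagged account to high-value assets. It assumes a Databricks notebook where `spark` is predefined, the graphframes package is installed, and hypothetical vertex/edge tables and entity values.

```python
# Minimal sketch: load an entity graph into GraphFrames and search for attack paths
# from a flagged account to anything tagged as a crown-jewel asset. Table names,
# columns, and entity ids are illustrative assumptions.
from graphframes import GraphFrame

vertices = spark.read.table("security.silver_entities")      # columns: id, entity_type
edges    = spark.read.table("security.silver_entity_edges")  # columns: src, dst, relationship

g = GraphFrame(vertices, edges)

paths = g.bfs(
    fromExpr="id = 'user:jdoe@example.com'",
    toExpr="entity_type = 'crown_jewel'",
    maxPathLength=4,
)
paths.show(truncate=False)
```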
Graph-Powered CDPs on Databricks with Neo4j
Customer Data Platforms (CDPs) demand more than just relational joins. Modern organizations need to model relationships across channels, devices, and identities. In this session, we’ll show how to build a graph-powered CDP on Databricks using Neo4j. With the Neo4j Spark Connector, we’ll stream multi-channel data into Delta Lake, run graph queries to uncover hidden affinities, and deliver insights back into Databricks ML and BI. We’ll cover real-world use cases like identity resolution, cross-channel attribution, and personalization.
Attendees will leave with code patterns and design approaches to bring graph analytics into their Lakehouse and deliver customer intelligence that traditional CDPs struggle to achieve.
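One such code pattern, sketched under assumptions: pushing resolved customer identities from a Delta table into Neo4j with the Neo4j Spark Connector. The connection URL, secret scope, label, and column names are hypothetical, and `spark`/`dbutils` are the standard Databricks notebook globals.

```python
# Minimal sketch: write unified customer records from Delta Lake into Neo4j using
# the Neo4j Spark Connector. Connection details, labels, and column names are
# illustrative assumptions.
customers = spark.read.table("cdp.silver_unified_customers")

(customers.write
    .format("org.neo4j.spark.DataSource")
    .mode("Overwrite")
    .option("url", "neo4j+s://<your-instance>.databases.neo4j.io")
    .option("authentication.basic.username", "neo4j")
    .option("authentication.basic.password", dbutils.secrets.get(scope="cdp", key="neo4j_password"))
    .option("labels", ":Customer")
    .option("node.keys", "customer_id")
    .save())
```

With the nodes in place, identity-resolution and affinity queries run in Cypher and the results can be read back into Databricks the same way for ML and BI.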
Getting Started with Vector Search on Databricks: Building Intelligent Search Applications
Vector search is a hot topic in the AI/ML and data engineering space, especially with the rise of generative AI and semantic search. Databricks recently introduced native vector search capabilities, making it a timely topic for discussion.
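As a rough sketch of what getting started can look like, the snippet below roughly follows the databricks-vectorsearch client to create a Delta Sync index and run a semantic query; the endpoint, table, column, and embedding-model names are assumptions rather than a fixed recipe.

```python
# Minimal sketch: create a Delta Sync vector index and run a semantic query,
# roughly following the databricks-vectorsearch client. Endpoint, table, and
# column names are illustrative assumptions.
from databricks.vector_search.client import VectorSearchClient

vsc = VectorSearchClient()

vsc.create_delta_sync_index(
    endpoint_name="kb-endpoint",
    index_name="main.kb.docs_index",
    source_table_name="main.kb.docs",
    pipeline_type="TRIGGERED",
    primary_key="doc_id",
    embedding_source_column="doc_text",
    embedding_model_endpoint_name="databricks-gte-large-en",
)

index = vsc.get_index(endpoint_name="kb-endpoint", index_name="main.kb.docs_index")
hits = index.similarity_search(
    query_text="how do I rotate service credentials?",
    columns=["doc_id", "title", "doc_text"],
    num_results=5,
)
```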
From Data to AI Agents: Building a Production-Ready AI Stack With Databricks and Azure AI Foundry
Deploying generative AI in the enterprise requires more than just fine‑tuning a model — it demands a reliable data backbone, scalable orchestration, and secure integration into business workflows. In this session, we’ll explore how combining Databricks’ lakehouse platform with Azure AI Foundry provides an end‑to‑end foundation for bringing LLM‑driven applications to life.
We’ll begin with how Databricks can prepare and govern trusted data in Delta Lake, enabling robust data pipelines that feed directly into AI training and retrieval‑augmented generation workflows. From there, we’ll show how Azure AI Foundry operationalizes those models — managing deployment, guardrails, evaluation, and integration into enterprise systems.
Attendees will leave with a reference architecture demonstrating how to link data engineering + model development + secure AI deployment across Databricks and Azure, enabling agentic AI applications that are both cost‑efficient and enterprise‑ready.
From Data Lakehouse to Kubernetes: Practical Lessons in ML Infrastructure for Non-Kubernetes Experts
In this talk, I will look at how organizations building on Databricks, Azure, and open ML stacks can start aligning with Kubernetes‑native practices for batch workloads, observability, and governance—without needing to be cluster experts. We’ll discuss how ML pipelines (training/inference) can map into Kubernetes batch workloads, and we’ll explore Agentic AI on top of Kubernetes for governance (drift, explainability, compliance).
Attendees will walk away with a clear mental model of how their ML infrastructure intersects Kubernetes, plus a pragmatic adoption path for leaders coming from a Data/AI background.
Disrupting CDPs: Neo4j vs. the Status Quo
In this session, Shaurya Agrawal will guide you through the architecture and implementation of a modern Customer Data Platform (CDP) powered by Neo4j, the industry-leading graph DB. As organizations strive to unify and activate customer data from disparate sources, traditional relational models often fall short in capturing the complex, interconnected relationships that drive true customer understanding. This session will demonstrate how Neo4j’s flexible schema enables seamless integration of multi-channel customer data, and showcase how graph algorithms can uncover hidden patterns in customer journeys, preferences, and behaviors.
You will explore techniques for ingesting, linking, and querying customer data using Cypher, Neo4j’s powerful query language. Shaurya Agrawal will walk you through real-world use cases such as identity resolution, personalized recommendations, and advanced segmentation, illustrating how a graph-based CDP can deliver actionable insights and drive business value. By the end of the session, you will understand how to design and build a scalable CDP on Neo4j and leverage graph analytics for deeper customer intelligence. Whether you are a developer, architect, or data professional, you will leave equipped with the knowledge and resources to start your own graph-powered customer data journey.
Data to Decisions — Building a Security Lakehouse with Azure & Databricks
This workshop introduces a Security Lakehouse framework that uses Azure & Databricks to unify data, unlock deeper analytics, and accelerate threat detection. We’ll walk through strategies for ingesting diverse data sources, from Azure AD & Defender telemetry to AWS, GCP, and on-prem workloads into Delta Lake. Building on this foundation, we’ll explore how advanced analytics, graph models, and machine learning can move security operations from reactive alert triage to proactive decision-making.
Participants will leave with a reference architecture, practical design patterns, and hands-on examples that they can adapt to partner offerings or client environments, helping them transform security data into actionable outcomes at enterprise scale.
Cost Object & Driver Analysis for Faster Corporate Financial Close
Firms face complex cost allocations across policies, products, and departments during financial close. Traditional systems struggle with transparency and runtime. In this session, we’ll demonstrate how to use Databricks for cost object and driver analysis, unifying finance, HR, and operational feeds into Delta Lake. We’ll build allocation models that can iterate quickly at scale, with lineage tracked in Unity Catalog, making period-end processing more auditable. Examples include allocating IT shared services, HR costs, operational costs, customer acquisition expenses, and other OPEX.
Attendees will leave with patterns for accelerating close cycles and producing explainable cost allocations using Databricks instead of legacy allocation engines.
Catastrophe Loss, Risk or Actuarial Modeling with Databricks
Catastrophic events like pandemics (e.g., COVID-19) or natural disasters create extreme volatility for insurers — and require computationally heavy simulations. This session shows how Databricks Lakehouse + MLflow can be used to model fat-tail loss, risk, or actuarial scenarios at scale. We’ll cover techniques for simulating mortality surges, stress-testing cash reserves, and integrating external demographic and health datasets alongside policy data. Attendees will see how distributed compute accelerates scenario evaluation and improves the transparency of actuarial assumptions.
By the end, actuaries and data scientists will understand how to leverage Databricks for catastrophe risk — moving from spreadsheets to scalable, explainable simulation models.
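To make the move "from spreadsheets to scalable simulation" concrete, here is a minimal sketch that distributes Monte Carlo scenarios across a cluster with pandas-on-Spark groups and logs tail metrics to MLflow. The frequency/severity parameters and run names are illustrative assumptions, not an actuarial standard; `spark` is the Databricks notebook global.

```python
# Minimal sketch: distribute catastrophe loss scenarios across a cluster and log
# the resulting tail metric with MLflow. The loss model and its parameters are
# illustrative assumptions.
import numpy as np
import pandas as pd
import mlflow
from pyspark.sql import functions as F

N_SCENARIOS = 1_000_000

def simulate_partition(pdf: pd.DataFrame) -> pd.DataFrame:
    rng = np.random.default_rng()
    counts = rng.poisson(lam=2.0, size=len(pdf))                    # events per scenario
    pdf["loss"] = [rng.lognormal(14.0, 1.2, size=c).sum() for c in counts]
    return pdf

losses = (
    spark.range(N_SCENARIOS)
    .withColumn("bucket", F.col("id") % 200)                        # 200 parallel simulation groups
    .groupBy("bucket")
    .applyInPandas(simulate_partition, schema="id long, bucket long, loss double")
)

var_99 = losses.approxQuantile("loss", [0.99], 0.001)[0]

with mlflow.start_run(run_name="pandemic_stress_test"):
    mlflow.log_param("n_scenarios", N_SCENARIOS)
    mlflow.log_metric("var_99", var_99)
```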
Brand, Trust, and AI: Building a CxO Credible Narrative Without FUD
In cybersecurity, trust is the brand. This talk gives CEOs/CIOs a framework to use AI to scale thought leadership and content while avoiding hallucinations and fear marketing. I will show how to anchor messaging in first-party research, implement “cite-or-fail” workflows (RAG + SME review) and turn insights into multi-channel narratives that resonate with CIOs and boards. You’ll leave with governance for AI-generated content, measurement for brand demand, and the operating cadence to sustain credibility in a noisy market.
Building an AI‑Native, Signal‑Led GTM Engine
This talk shows how to unify intent, web, product & firmographic data into a signal taxonomy that powers AI-driven next‑best actions. I will cover data design, lightweight propensity/uplift models, routing logic and governance to keep PII/IP safe. You’ll see orchestration patterns that auto-trigger SDR tasks, ads & email plays while preserving analyst- and CISO‑friendly messaging. I will share measurement frameworks to prove impact on pipeline velocity and conversion, without boiling the ocean or over‑engineering your stack.
Automating IoT Digital Twin Infrastructure with Terraform and Consul
IoT digital twins are powerful tools for modeling real‑world systems — but managing the infrastructure behind them is complex. Analytics platforms, graph databases, storage, and streaming components must all be deployed consistently across environments. In this session, we’ll explore how to use HashiCorp Terraform to automate provisioning of digital twin infrastructure, along with HashiCorp Consul for service discovery across components. We’ll demonstrate provisioning a Databricks workspace for telemetry analytics, a graph layer for asset relationships, and connecting them with cloud storage — all as code. With Consul, these components can discover and communicate reliably across environments. Attendees will leave with a practical understanding of how IaC enables scalable IoT analytics and digital twin platforms, making deployments repeatable, observable, and cloud‑agnostic.
AI/ML-Powered Student Early Alert Systems: Proactive & Personalized
As K-12 education institutions strive to improve student retention and achievement, AI-powered early alert systems are emerging as a game-changer. This session will explore how integrating artificial intelligence and data analytics can help institutions proactively identify students in need of academic, financial, or emotional support. We will draw on use cases and lessons learned from the corporate world and explore their application in the education sector.
AI Security Basics for Protecting Shared AI Workloads in Cozystack
In shared environments like Cozystack, simple missteps, from weak tenant isolation to unsecured model training data, can lead to data leakage, compliance issues, or even trust failures. This session introduces attendees to the fundamentals of AI security in multi-tenant environments, covering common risks, practical best practices, and easy wins that every team can adopt. Instead of diving into research-heavy adversarial techniques, we’ll start with the real-world basics: access controls, safe data handling, monitoring, and tenant boundaries.
Attendees will walk away with a starter playbook for building secure and reliable AI workloads in Cozystack without needing deep security expertise.
AI for PLG + ABM in Cyber: Score, Segment, Sequence
Merge product telemetry with account intent to prioritize targets, shape messaging & orchestrate plays automatically. In this fast‑paced session, I will outline features for propensity scoring, micro‑segmentation patterns, and activation across paid, email, and SDR sequences. You’ll get a lean blueprint to replace the MQL hamster wheel with signal‑based GTM that respects buyers and accelerates deal cycles, without rebuilding your entire stack.
Adversarial AI in Cybersecurity: How Attackers Trick Detection Models
As machine learning becomes embedded in SOC workflows, attackers are learning to exploit its weaknesses. This session explores the emerging field of adversarial AI in cybersecurity, showing how models can be evaded, poisoned, and manipulated. We’ll walk through real tactics adversaries use, from subtly engineered inputs that bypass classifiers to data poisoning that corrupts training sets. More importantly, we’ll outline defensive strategies to build resilience into your AI-driven security pipelines.
Attendees will leave with a grounded understanding of adversarial ML threats and practical steps to avoid being blindsided as AI adoption accelerates in defense tools.
Adversarial AI in Cybersecurity - Hardening Enterprise ML Models
This session provides a practical framework to secure enterprise-scale AI systems in Azure, including Copilot deployments, Azure Machine Learning workloads, and Fabric AI. We’ll ground the discussion in real-world case studies and the MITRE ATLAS framework to reveal how adversaries exploit AI pipelines and how to defend against them. Instead of product features, we’ll share strategic lessons learned and design patterns partners can use to help their customers safely innovate with AI.
Attendees will walk away with actionable guidance to differentiate their partner services by enabling organizations to adopt AI confidently, turning security into a business accelerator rather than a blocker.
Adversarial AI and Safeguarding Enterprise Machine Learning Models with Azure
This workshop takes participants deep into the threat landscape of adversarial attacks against AI models & provides hands-on exercises to develop defensive strategies. Using case studies & guided labs, we’ll explore how poisoning, evasion & prompt injection attacks manifest in enterprise contexts & how defensive methods like secure MLOps, red-teaming, & MITRE ATLAS can be applied.
Attendees will be engaged with frameworks, architectures & response patterns that protect AI pipelines, models & outputs across Azure deployments. By the end, participants will have a partner-ready methodology to bring back to their clients, turning AI security into a differentiator for trust, credibility & competitive advantage.
Advanced AI/ML Analytics on ERP Data (SAP S/4HANA + Databricks)
ERP systems like SAP S/4HANA hold valuable finance, supply chain, and operations data, yet it is often underutilized beyond reporting. In this session, we’ll explore how to unlock SAP and other ERP data (NetSuite, Workday, etc.) with Databricks for AI and ML. We’ll walk through data engineering patterns for staging SAP data in Delta Lake, then demonstrate how to feed it into predictive demand models, supplier risk monitoring, and even GenAI-powered auditing assistants.
Attendees will see working examples of advanced analytics pipelines without duplicating ERP systems. By the end, you’ll know how to turn static ERP records into predictive and prescriptive insights powered by Databricks.
End-to-End Machine Learning Pipelines on Databricks: From Data Ingestion to Model Deployment
See the full lifecycle of ML on Databricks, including data engineering, feature engineering, model training, and deployment.
Integrating Microsoft Fabric and Databricks for Modern Data Analytics
This session will explore the strategic integration of Microsoft Fabric and Databricks to build a robust, cloud-centric modern data analytics ecosystem. We will delve into how these powerful platforms complement each other, leveraging Microsoft Fabric's comprehensive suite for data integration, governance, and business intelligence with Databricks' advanced capabilities for data engineering, AI/ML, and complex analytics at scale.
Attendees will learn best practices for seamless data flow, optimizing performance, and unlocking deeper insights by combining the strengths of both environments. We will be drawing on real-world examples from FinTech and E-commerce to illustrate practical implementation strategies and benefits for enterprise data architecture.
SHRM Expo26 - Orlando Upcoming
AI driven HRtelligence - Uncovering Hidden Internal Talent for Future-Ready Organizations (Live Demo)-
This session reveals how AI & modern data platforms can revolutionize internal mobility by shifting to a skills-based approach. Learn to build a "skills graph" using data architectures like the Lakehouse and graph databases. This enables HR to precisely identify skill adjacencies, forecast future talent needs, and proactively uncover hidden internal candidates for emerging roles, fostering an agile and resilient workforce.
I will provide practical strategies for implementing an AI-driven talent intelligence system that maps current capabilities and predicts future requirements. Discover how to build a future-ready organization that maximizes its most valuable asset: its people, through responsible AI practices.
SHRM Talent 2026 Upcoming
Title - AI driven HRtelligence - Uncovering Hidden Internal Talent for Future-Ready Organizations
Description - Organizations struggle to fully leverage their internal talent. This session reveals how AI and modern data platforms can revolutionize internal mobility by shifting to a skills-based approach. Learn to build a "skills graph" using data architectures like the Lakehouse and graph databases. This enables HR to precisely identify skill adjacencies, forecast future talent needs, and proactively uncover hidden internal candidates for emerging roles, fostering an agile and resilient workforce.
We'll provide practical strategies for implementing an AI-driven talent intelligence system that maps current capabilities and predicts future requirements. Discover how to build a future-ready organization that maximizes its most valuable asset: its people, through responsible AI practices.
Learnings -
Understand how AI and data platforms can enable a skills-based approach to internal talent management
Learn to identify and map hidden internal talent using advanced data strategies like "skills graphs"
Discover actionable methods for leveraging AI to enhance internal mobility, reskilling, and workforce planning
Austin SHRM Breakfast Event Upcoming
Confidential by Design HR: Protecting Employee Data in the Age of GenAI -
This session gives HR leaders a practical playbook to harness AI while safeguarding PHI/PII and complying with privacy laws. We’ll cover policy-as-code guardrails, safe prompt patterns, redaction/classification, vendor risk in HRIS/ATS/LMS, and employee consent/disclosure language. Attendees will leave with a checklist for SHRM/HRCI-aligned Legal, Risk & Ethics governance and a 90-day rollout plan.
General HR topics Upcoming
Agentic AI in HR: Automating Operations, Enhancing Employee Experience, and Ensuring Ethical Oversight -
This session explores the transformative potential of agentic AI—AI systems that can act autonomously to achieve specific goals—within HR operations. We will delve into practical applications, from automating routine tasks like onboarding and query resolution to proactively identifying employee needs and personalizing development paths. The discussion will also critically examine the ethical considerations and necessary oversight mechanisms to ensure these AI agents operate fairly, transparently, and in alignment with organizational values and employee well-being.
General HR topics Upcoming
Cybersecurity for HR: Protecting Your People, Your Data, and Your Organization from Evolving Threats -
In an era of increasing digital threats, HR departments are prime targets due to the vast amount of sensitive employee data they manage. This presentation will equip HR professionals with essential cybersecurity knowledge and practical strategies to protect their organization's most valuable assets: its people and their data. We will cover best practices for data governance, identifying common attack vectors, fostering a security-aware culture among employees, and building resilience against sophisticated cyber threats.
Society of Corporate Compliance - Regional Event Upcoming
AI's Wild Wild West: Taming LLM Risks for Corporate Compliance-
This session will provide a guide for IT and compliance leaders on how to tame these emerging risks. I will cover essential strategies for establishing AI governance frameworks, implementing data ingress/egress controls for LLM interactions and developing ethical AI guidelines that protect sensitive information and maintain public trust.
Learn how to identify shadow AI, conduct effective risk assessments and build a proactive compliance posture that transforms AI from a liability into a trusted asset.
Understand key AI/LLM-driven compliance risks (data leakage, bias, IP) and their impact on Corporate Compliance
Learn actionable strategies for establishing AI governance frameworks and data controls within your organization
Develop a roadmap for proactive compliance, transforming AI risks into managed opportunities
cybersecuritymarketingsociety.com Upcoming
(Talk or Live Demo) Building an AI-Native GTM Engine: From Signals to Pipeline -
Learn how to operationalize AI across the funnel, unifying intent, product usage, and web signals to trigger next-best actions automatically. I will cover data design, model selection (propensity, uplift), routing, and governance to deliver a signal-led GTM that boosts conversion and velocity without the MQL hamster wheel. Attendees will take away a signal taxonomy and scoring template, a playbook for AI-triggered sequences, and a measurement model for pipeline influence.
LLMs That CISOs Trust: Generating Technical Content Without Hallucinations-
A practical framework for using LLMs to produce credible, technical cyber content—grounded retrieval (RAG), fact-checking loops, style guards, and SME-in-the-loop review. See workflows for briefs, threat writeups, sales sheets, and analyst responses that enhance accuracy and brand voice. Attendees will take away a RAG prompt pack, a QA checklist to reduce hallucinations, and an SME review workflow.
AI for PLG + ABM in Cyber: Score, Segment, and Sequence at Scale-
Combine product telemetry with account intent using ML to prioritize accounts, tailor messaging, and auto-orchestrate programs across paid, email, and SDR. We’ll show uplift modeling, micro-segmentation, and how to align ops for clean handoffs from marketing to sales. Attendees will take away a feature map for propensity models, an ABM micro-segmentation playbook, and an SDR/MOPs activation blueprint.
IRM UK Upcoming
Policy-as-Code for Security & Lineage: Active AI and Data Governance for CIOs & CISOs-
This session presents a leadership blueprint to converge data, security, and AI governance: policy‑as‑code, automated lineage and access (by identity and purpose), model risk controls, and runtime guardrails that produce evidence for auditors. We’ll show how to align IT operating models with GDPR/NIS2 obligations, embed zero‑trust and data minimization, and measure value with business‑aligned KPIs. The outcome is trusted, explainable analytics and AI that accelerate growth while standing up to scrutiny.
Texas Education Conference Upcoming
Agentic AI for Event Project Management: Smarter Cost Tracking and Planning with Databricks -
In this hands-on session, we’ll demonstrate how Databricks Community Edition combined with Agentic AI can act as a project co‑pilot—helping to clean messy budget spreadsheets, reconcile vendor invoices, and generate natural‑language insights (“Why is catering 12% over budget?”). By applying IT project management principles and leveraging agentic AI automation, attendees will see how to track burn rate, detect variances, and document project risks with less manual effort. We’ll build a quick live demo: importing event budgets into Databricks, standardizing expense categories, visualizing variance, and then asking the AI agent for explanations and recommendations. Participants will leave with a repeatable cost-tracking template and a glimpse into how AI‑assisted project governance can elevate both efficiency and trust in event delivery.
innovateenergynow.com Upcoming
(Live Demo) Production-Grade MLOps for Industrial Perception-
Demonstrates an end-to-end Databricks pipeline for drone/robot imagery and time-series data: Delta Lake ingestion, feature engineering, model training, MLflow experiment tracking, Model Registry, and drift monitoring. Covers rollout/rollback runbooks, SLAs, and synthetic data for edge cases. Attendees see how to maintain consistent model performance across assets, sites, and seasons.
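For flavor, here is a minimal sketch of the tracking-and-registry portion of such a pipeline with MLflow; the experiment, model, and metric names are illustrative assumptions, and a synthetic dataset stands in for the imagery features that would come from Delta Lake.

```python
# Minimal sketch: track a perception-model run and register it in the MLflow Model
# Registry so rollout/rollback is version-controlled. Names and data are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Stand-in features: in the real pipeline these would be engineered from drone/robot
# imagery and time-series data stored in Delta Lake.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/industrial-perception")

with mlflow.start_run(run_name="corrosion-detector") as run:
    model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("val_f1", f1_score(y_val, model.predict(X_val)))
    mlflow.sklearn.log_model(model, artifact_path="model")

# Registering the model provides versioned targets for the rollout/rollback runbooks.
mlflow.register_model(f"runs:/{run.info.run_id}/model", "corrosion_detector")
```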
Securing the AI/ML Pipeline: Edge-to-Cloud Protection-
Outlines a defense-in-depth approach for AI in energy operations. Discusses risks like data poisoning, model evasion, and IP theft; hardening inference at the edge; identity and access controls; encrypted lineage in Unity Catalog; and continuous adversarial monitoring. Provides a checklist for secure deployment and incident response tailored to industrial AI.
(Live Demo) Agentic AI for Autonomous Operational Decisions-
Explores architecting safe, auditable agentic AI on Databricks that detects anomalies, reasons with policy constraints, and proposes actions (e.g., work orders) with human-in-the-loop approvals. Shows safety envelopes, oversight, and audit trails, enabling faster, trusted optimization of production and maintenance.
Data Governance for AI: Building a Trusted Foundation-
Presents a practical governance blueprint: data quality metrics, contracts, lineage from capture to model input, metadata standards, and access policies. Addresses bias mitigation, compliance, and vendor data onboarding. Attendees leave with templates for a scalable, governed AI data foundation.
CVision - IBM AWS F1 Roundtable Austin
Confidential by Design: Managing Protected Information in the Age of LLMs-
This session offers a strategic CTO & Board Advisor perspective on governing data in an AI‑driven world: how to classify and protect critical information assets, implement “confidential by design” controls, and enforce zero‑trust principles across cloud and SaaS LLM integrations. I will also explore regulatory expectations under GDPR, NIS2, and emerging AI frameworks and share a pragmatic blueprint for balancing innovation with compliance. Attendees will gain actionable models, governance patterns, and messages to ensure their organizations benefit from LLM adoption, without compromising trust.