Speaker

Michael Roytman

CTO at Empirical Security

Chicago, Illinois, United States

Michael Roytman is the CTO of Empirical Security. Previously, he was the Chief Data Scientist at Kenna Security and a Distinguished Engineer at Cisco. He has served on boards for the Society of Information Risk Analysts, Cryptomove, and Social Capital. He was the co-founder and executive chair of Dharma Platform (acquired by BAO Systems), for which he landed on the 2017 Forbes 30 Under 30 list. He currently serves on the Forbes Technology Council.

Area of Expertise

  • Information & Communications Technology

Topics

  • Cybersecurity
  • AI and Cybersecurity
  • Cybersecurity Governance and Risk Management
  • Cybersecurity Threats and Trends
  • Artificial Intelligence and Machine Learning for Cybersecurity
  • Information Security
  • InfoSec

Cybersecurity is Ready for Local Models

Cybersecurity has long relied on global models—threat intel feeds, vulnerability rankings, behavior baselines—built on the aggregation of telemetry across organizations. But the shift to cloud-native architectures, the explosion of software supply chains, and the operational limits of probabilistic prioritization (e.g., CVSS and even EPSS) are exposing the brittleness of these global models. The next frontier in defensible security decision-making is the integration of local models—statistical and causal models grounded in the specific configurations, assets, behaviors, and exposures of an individual environment.

This talk presents a practical framework for building and operationalizing local models in production environments. We’ll explore where local inference is outperforming global heuristics in vulnerability management, alert triage, and identity risk. Drawing from recent advances in telemetry-based modeling and on-premise LLMs, the talk will demonstrate how teams can move beyond generic scoring toward dynamic, environment-specific assessments—with measurable gains in precision and reduced mean-time-to-decision.

The audience will leave with a clear understanding of:
• Why global models are structurally limited in high-variance environments
• How to architect and deploy local models on real-world security data
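One way to picture the move from global to local models is Platt-style recalibration: fitting a small logistic model that maps a generic global score onto an organization's own outcome data. The sketch below is illustrative only — the data, field names, and the choice of a one-feature logistic regression are assumptions for demonstration, not the architecture the talk presents.

```python
import math

# Hypothetical sketch: locally recalibrating a global risk score.
# A one-feature logistic regression is fit on an organization's own
# outcome labels (e.g., whether a finding was ever exploited in that
# environment), turning a generic score into a local probability.

def fit_local_model(global_scores, local_outcomes, lr=0.5, epochs=2000):
    """Fit p(outcome) = sigmoid(a * score + b) by batch gradient descent."""
    a, b = 0.0, 0.0
    n = len(global_scores)
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for x, y in zip(global_scores, local_outcomes):
            p = 1.0 / (1.0 + math.exp(-(a * x + b)))
            grad_a += (p - y) * x   # derivative of log loss w.r.t. a
            grad_b += (p - y)       # derivative of log loss w.r.t. b
        a -= lr * grad_a / n
        b -= lr * grad_b / n
    return a, b

def predict(a, b, score):
    """Environment-specific probability for a given global score."""
    return 1.0 / (1.0 + math.exp(-(a * score + b)))

# Toy local telemetry (invented): in this environment, high global
# scores only sometimes translate into actual exploitation.
global_scores  = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
local_outcomes = [0,   1,   0,   0,   1,   0,   0,   0]
a, b = fit_local_model(global_scores, local_outcomes)
```

Because logistic regression with an intercept matches the observed base rate, the recalibrated probabilities reflect this environment's exploitation frequency rather than the global average — the essence of a local model.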

Benchmarking LLMs on Vulnerability Prioritization

We present the first large-scale benchmarking of leading LLMs (GPT-4o mini, Claude 3.7, Gemini 2.5) against EPSS on the vulnerability prioritization task, using 50,000 CVEs stratified by real-world exploitation. Our results show that LLMs provide lumpy, poorly calibrated probability estimates, fail to maintain efficiency and coverage beyond 15%, and incur prohibitive inference costs at operational scale. In contrast, predictive models like EPSS and our Global Model deliver higher accuracy, better coverage, and practical cost profiles. We release our full dataset, agent (JayPT), and methodology under an MIT license to enable reproducibility and further research on scalable, evidence-driven vulnerability triage.
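Coverage and efficiency are the standard lenses for this kind of evaluation: remediate the top-ranked fraction of CVEs, then ask what share of exploited CVEs were caught (coverage) and what share of remediation effort was spent on CVEs that were actually exploited (efficiency). A minimal sketch of that computation, with invented toy data and an assumed remediation budget:

```python
# Hypothetical sketch: coverage (recall) and efficiency (precision) for a
# prioritization strategy that remediates the top-k CVEs by predicted score.
# The scores, labels, and budget below are illustrative assumptions.

def coverage_efficiency(scores, exploited, budget=0.15):
    """Rank CVEs by score, remediate the top `budget` fraction, and report
    coverage (share of exploited CVEs caught) and efficiency (share of
    remediated CVEs that were actually exploited)."""
    ranked = sorted(zip(scores, exploited), key=lambda t: t[0], reverse=True)
    k = max(1, int(len(ranked) * budget))
    caught = sum(label for _, label in ranked[:k])
    total_exploited = sum(exploited)
    coverage = caught / total_exploited if total_exploited else 0.0
    efficiency = caught / k
    return coverage, efficiency

# Toy example: 10 CVEs, 3 exploited in the wild, model scores in [0, 1].
scores    = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
exploited = [1,   0,   1,   0,   0,   0,   1,   0,   0,   0]
cov, eff = coverage_efficiency(scores, exploited, budget=0.3)
```

Sweeping the budget and plotting coverage against efficiency is what lets one model's curve be compared against another's at a fixed operational capacity.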

