Speaker

Angus Chen

CEO, Qerberos, AI-powered detections

Angus Chen is an experienced AI-powered cybersecurity executive and community leader known for his ability to connect the dots and foster a culture of trust. As a DEFCON Goon, board chair, and board member, he actively contributes to communities and champions diversity in all its forms. Angus is also a sought-after speaker, sharing insights on AI, quantum computing, and deep tech.

With over 20 years of hands-on experience, he has developed and implemented data science and deep tech solutions for organizations including MITRE, the Federal Reserve Board, and FINRA. Currently, he serves as the CEO of Qerberos, an AI-powered cybersecurity company.

Angus is a passionate community builder, an avid rock climber always looking for his next project, and a dedicated trail runner. He holds an executive certificate from NYU Stern, a Global Executive MBA from IESE, and a Master's in Applied and Computational Mathematics from Johns Hopkins University. He is also certified in CISSP, CCSP, and PMP.

Model Context Protocol (MCP) and the Unseen Security Risks

The Model Context Protocol (MCP) is transforming how AI systems interact with data and services. By making APIs accessible to large language models, MCP offers a simple and powerful way to connect applications with AI agents. With growing adoption by major AI companies and widespread deployment of MCP servers, this protocol is rapidly becoming the de facto bridge between systems and intelligent agents.

But with great power comes great risk.

MCP allows remote AI agents to perform any action a user could take—often without traditional safeguards. Imagine an AI accessing your GitHub account and issuing commands on your behalf. MCP makes this possible. The problem? Most APIs were designed with cautious, technically savvy developers in mind—not ordinary users experimenting with LLM prompts and untrusted MCP servers found online.
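To make the risk concrete, here is a minimal sketch of an MCP server tool, written against the reference MCP Python SDK's FastMCP interface (an assumption; the GitHub CLI wrapper below is a hypothetical example of an over-broad capability, not a real integration). Any agent connected to this server can invoke the tool with arguments of the model's choosing:

```python
# Minimal sketch, assuming the reference MCP Python SDK (FastMCP interface).
# The "run_gh_command" tool is a hypothetical example of an over-broad capability.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("github-helper")


@mcp.tool()
def run_gh_command(args: str) -> str:
    """Run an arbitrary GitHub CLI command on the user's behalf."""
    # No allow-list, no confirmation prompt, no credential scoping:
    # a prompt-injected agent could delete repos, leak secrets, or push code.
    result = subprocess.run(["gh", *args.split()], capture_output=True, text=True)
    return result.stdout + result.stderr


if __name__ == "__main__":
    mcp.run()
```

A safer design would expose narrowly scoped, read-mostly tools and require explicit user confirmation for anything destructive.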

In this talk, we’ll unpack the security implications of MCP, walk through real-world attack vectors, and provide concrete strategies for mitigation. The session is led by Angus Chen, a cybersecurity expert, and Aleks Jakulin, an AI researcher and contributor to open web standards. Both are members of Sundai, a leading hacker collective affiliated with Harvard and MIT.

Multi-Agentic System Threat Modeling

This session applies the OWASP Agentic AI – Threats and Mitigations taxonomy to real-world multi-agent systems (MAS). These systems, characterized by multiple autonomous agents coordinating to achieve shared or distributed goals, introduce additional complexity and new attack surfaces.
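As a rough illustration of the exercise, the sketch below enumerates agents, their tools, and their communication edges as candidate attack surfaces. The agent names and threat labels are illustrative placeholders, not entries quoted from the OWASP taxonomy:

```python
# Illustrative sketch: inventory agents, tools, and agent-to-agent edges as the
# starting point for MAS threat modeling. Names and threat labels are placeholders,
# not quotations from the OWASP Agentic AI taxonomy.
from dataclasses import dataclass, field


@dataclass
class AgentNode:
    name: str
    tools: list[str]                  # capabilities the agent can invoke
    talks_to: list[str]               # agents it exchanges messages with
    threats: list[str] = field(default_factory=list)


system = [
    AgentNode("planner", tools=["task_decomposition"], talks_to=["researcher", "executor"]),
    AgentNode("researcher", tools=["web_search"], talks_to=["planner"]),
    AgentNode("executor", tools=["shell", "email"], talks_to=["planner"]),
]

# Flag every tool and every agent-to-agent edge for review.
for agent in system:
    for tool in agent.tools:
        agent.threats.append(f"misuse or excessive permissions of tool '{tool}'")
    for peer in agent.talks_to:
        agent.threats.append(f"message tampering or goal manipulation via '{peer}'")

for agent in system:
    print(agent.name, "->", agent.threats)
```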

To Scan or Not to Scan? That Just Might Be Quishing

From 2021 to 2025, user-generated dynamic QR codes accumulated 7,181,345 scans globally, a 433% increase over 2021. Since COVID, QR codes have been rapidly adopted for everything from information retrieval to payments. Human eyes can't see the URL or action encoded in a QR code, so threat actors can easily replace legitimate QR codes with malicious ones that lure users into credential theft, financial fraud, and malware downloads.
Phishing via QR codes, a.k.a. quishing, is rapidly becoming a high-risk threat vector, especially in regulated industries where user trust, secure access, and data protection are paramount. The problem? Human eyes can't inspect QR codes, and most compliance controls can't intercept them.
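One mitigation is simply to surface the encoded URL before anyone acts on it. The sketch below decodes a QR image and checks its hostname against an allow-list; it assumes the third-party pyzbar and Pillow packages, and the domain list is a hypothetical example:

```python
# Minimal sketch: decode a QR image and inspect the embedded URL before acting on it.
# Assumes the third-party pyzbar and Pillow packages; the allow-list is hypothetical.
from urllib.parse import urlparse

from PIL import Image
from pyzbar.pyzbar import decode

TRUSTED_DOMAINS = {"parking.example.gov", "pay.example.com"}  # hypothetical allow-list


def inspect_qr(image_path: str) -> None:
    for symbol in decode(Image.open(image_path)):
        payload = symbol.data.decode("utf-8", errors="replace")
        host = urlparse(payload).hostname or ""
        verdict = "allow" if host in TRUSTED_DOMAINS else "review: untrusted domain"
        print(f"{payload} -> {verdict}")


if __name__ == "__main__":
    inspect_qr("scanned_qr.png")
```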

In this session, we will:
- Discuss QR code use in critical sectors (financial services, health, transit, government, and military).
- Walk through an actual parking-scam case study to simulate a phishing flow.
- Share actionable mitigation strategies.

Security AI

Welcome to Security AI! The goal of this course is to explain how artificial intelligence is becoming one of the major tools in our security arsenal. The problem is that, unless you have a specific type of degree, you are at the mercy of product vendors, collaborators, ChatGPT, or search engines to understand these concepts. This course demystifies artificial intelligence and its relationship to security.

This is an interactive course that teaches security professionals how to implement AI to obtain valuable insights. It covers topics including machine learning (ML) and large language models (LLMs). Combining AI and security allows the security community to turn assumptions, opinions, and beliefs into knowledge.

No previous experience is necessary. A background in programming, specifically Python, is very helpful.

Course outline:
00 Introduction to Security AI
01 History of AI
02 Introduction to AI
03 Difficulties of Security AI
04 The Data Science Process
05 Introduction to Machine Learning
Unsupervised Learning - event log analysis for persistence
Case Study: Phishing Detection
Generative AI
Large Language Models
Llama - synthetic event log generation
Case Study: Phishing Detection with Generative AI
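As a small taste of the phishing-detection case study listed above, here is a minimal supervised-learning sketch using scikit-learn; the URLs and labels are toy examples, not a real training set:

```python
# Minimal sketch of a phishing-detection case study: character n-gram features over
# URLs plus logistic regression. URLs and labels are toy examples. Assumes scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

urls = [
    "https://login.example.com/account",
    "https://example.com/docs/start",
    "http://examp1e-login.xyz/verify?session=1",
    "http://secure-update.example-pay.top/confirm",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = phishing (toy labels)

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(urls, labels)

# Probability that a new URL is phishing.
print(model.predict_proba(["http://examp1e-login.xyz/reset"])[:, 1])
```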

Get your tickets: https://www.eventbrite.com/e/bsidesnova-2024-8-bit-games-sept-6-7-arlington-va-tickets-971314707437

AI-Driven Security

The goal of this course is to explain how Artificial Intelligence is becoming one of the major tools in our security arsenal. The problem is that, unless you have a specific type of degree, you are at the mercy of product vendors, collaborators, ChatGPT, or search engines to understand these concepts. This course demystifies Artificial Intelligence and its relationship to security.

This interactive course teaches security professionals how to apply data science techniques to manipulate and analyze security data and uncover valuable insights. The course covers topics ranging from data preparation, feature engineering and selection, exploratory data analysis, and data visualization to machine learning, model evaluation and optimization, and, finally, implementing at scale.
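As a minimal sketch of that workflow under stated assumptions (synthetic log rows, invented column names, pandas and scikit-learn installed), the example below engineers a few behavioral features and scores events with an unsupervised model:

```python
# Minimal sketch: engineer features from synthetic auth-log rows, fit an unsupervised
# model, and review the most anomalous events. Columns and values are invented.
import pandas as pd
from sklearn.ensemble import IsolationForest

logs = pd.DataFrame({
    "user": ["alice", "alice", "bob", "bob", "svc_backup"],
    "hour": [9, 10, 14, 3, 3],
    "failed_logins": [0, 1, 0, 7, 0],
    "bytes_out_mb": [12, 8, 20, 950, 15],
})

# Feature engineering: keep numeric behavioral features only.
features = logs[["hour", "failed_logins", "bytes_out_mb"]]

model = IsolationForest(contamination=0.2, random_state=42).fit(features)
logs["anomaly_score"] = -model.score_samples(features)  # higher = more unusual

print(logs.sort_values("anomaly_score", ascending=False).head())
```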

No previous experience is necessary. A background in Python programming is helpful.

Course outline:
• Introduction to AI
• Applied AI to Cybersecurity
• Difficulties of AI-Driven Security
• Introduction to Machine Learning & Data Science
• A coding case study
• Walking through the case study coding exercise with a real-world data set

A Data Scientist and a Threat Hunter Walk into a Bar

In this talk, we'll explore the dynamic partnership between a data scientist and a threat hunter as they join forces to elevate their company's machine learning (ML) powered detection and response capabilities. By blending threat intelligence with advanced ML techniques, they built capabilities that uncover unknown threats across a variety of data sources. Attendees will learn how to bring ML into their detection strategy, see practical applications of machine learning for threat detection, understand how to layer it with attacker operations and threat intelligence to enhance detections, and see real-world examples of these concepts in action.
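One simple way to picture the layering is a triage function that fuses a model's anomaly score with threat-intelligence and tradecraft context; the indicator list, weights, and threshold below are hypothetical examples of the idea, not a production scoring scheme:

```python
# Minimal sketch: fuse an ML anomaly score with threat-intel and tradecraft context
# before alerting. Indicators, weights, and thresholds are hypothetical.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # placeholder IOC feed


def triage(event: dict, ml_score: float) -> str:
    """Combine a model's anomaly score (0-1) with intel and behavioral context."""
    score = ml_score
    if event.get("dest_ip") in KNOWN_BAD_IPS:
        score += 0.4                       # corroborated by threat intelligence
    if event.get("technique") == "T1053":  # e.g., scheduled-task persistence
        score += 0.2                       # matches known attacker tradecraft
    if score >= 0.8:
        return "alert"
    return "enrich-and-review" if score >= 0.5 else "log-only"


print(triage({"dest_ip": "203.0.113.7", "technique": "T1053"}, ml_score=0.35))
```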
