

James McQuiggan
Security Awareness Advocate and Teller of Dad Jokes
Orlando, Florida, United States
James McQuiggan has over 20 years of experience in cybersecurity and is currently a Security Awareness Advocate at KnowBe4. Prior to joining KnowBe4, McQuiggan worked at Siemens in the Energy and Wind divisions. Over the years he has held various cybersecurity roles, including consulting on cybersecurity standards, information security awareness, incident response, and securing industrial control system networks.
McQuiggan is a part-time professor at Full Sail University, teaching Cyber Threat Intelligence. He also volunteers with ISC2, serving as a member of the North American Region Advisory Council and as a past president of the ISC2 Central Florida Chapter.
Area of Expertise
Topics
Deepfakes & Dark Side AI Attacks
Deepfake technology leverages AI to create convincing fake audio and video clips, and it is evolving rapidly, posing significant risks to personal and organizational cybersecurity. These technologies present unique challenges and opportunities for IT and cybersecurity programs. In this context, our presentation delves into the complexities of deepfakes, highlighting their creation, their detection, and the latest tools and techniques, including AI-driven technologies, for identifying them.
AI: The Double-Edged Sword of Cybersecurity and Social Engineering
Artificial intelligence (AI) is revolutionizing multiple industries, providing unprecedented automation, analytics, and decision-making capabilities. However, rapid advancements in AI technology have also led to the emergence of sophisticated social engineering attacks, posing significant challenges to individuals, businesses, and governments.
This presentation has been updated with new concepts and attack vectors since it was last delivered in October. It continues to explore the impact of AI on social engineering, highlighting the potential benefits, dangers, and strategies for defending against these new-age threats.
Boardroom Intelligence: AI Strategies for Winning in the Boardroom
In the digital age, where Artificial Intelligence (AI) is rapidly reshaping the business landscape, it's crucial to examine AI from a unique, boardroom-centric perspective. Venture beyond the hype to understand how strategic decision-making interlaces with the most advanced technological frontiers.
Gain insights into AI's transformative power, not just as a buzzword but as a cornerstone of competitive advantage, operational efficiency, and unparalleled customer engagement. Through captivating case studies and actionable insights, this presentation will illuminate the path for leveraging AI to navigate the complexities of today's business challenges, from enhancing cybersecurity to fostering sustainable growth.
Whether you're looking to revolutionize your organization's approach to data-driven decision-making, streamline operations with AI-driven innovations, or enhance your cybersecurity defenses, this presentation will give you the knowledge to lead your organization into a prosperous, AI-enabled future. Discover why embracing AI in the boardroom isn't just an option—it's imperative for those aiming to stay at the forefront of industry innovation and leadership.
Digital Doppelgängers: The Dual Faces of Deepfake Technology
Deepfake technology, which leverages artificial intelligence to create convincing fake audio and video clips, is evolving rapidly and becoming more common in society. These technologies not only pose significant threats to personal and organizational cybersecurity programs but also present unique challenges and opportunities in IT and cybersecurity. This presentation delves into the complexities of deepfakes, offering insights into their creation and detection. With a focus on informing IT professionals, cybersecurity practitioners, CIOs, and CISOs, we will explore the latest tools and techniques used to generate and identify deepfakes while providing practical guidance on mitigating the associated risks.
Learning Objectives
- Learn about the potential risks and threats posed by deepfakes to both individuals and organizations.
- Discover the latest tools and techniques to detect deepfakes, including AI-driven technologies.
- Develop strategies and best practices for IT and cybersecurity teams to mitigate the risks associated with deepfakes.
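To make the mitigation objective above concrete, here is a minimal, purely illustrative Python sketch (not a tool or process from the presentation itself) of one common best practice: any high-impact request that arrives over a channel where audio or video can plausibly be deepfaked is held until it is confirmed through an independently trusted channel. The request types, channels, and field names below are hypothetical.

from dataclasses import dataclass

# Channels where convincing audio/video deepfakes are a realistic risk (illustrative).
IMPERSONATION_PRONE_CHANNELS = {"voice_call", "video_call", "voicemail"}

# Request types worth protecting with out-of-band verification (illustrative).
HIGH_IMPACT_REQUESTS = {"wire_transfer", "credential_reset", "payroll_change"}

@dataclass
class Request:
    request_type: str            # e.g. "wire_transfer"
    channel: str                 # e.g. "video_call"
    confirmed_out_of_band: bool  # verified via a known-good number or in person?

def should_hold_for_verification(req: Request) -> bool:
    """Hold the request if it is high impact, arrived over a deepfake-prone
    channel, and has not yet been confirmed through an independent channel."""
    return (req.request_type in HIGH_IMPACT_REQUESTS
            and req.channel in IMPERSONATION_PRONE_CHANNELS
            and not req.confirmed_out_of_band)

if __name__ == "__main__":
    req = Request("wire_transfer", "video_call", confirmed_out_of_band=False)
    if should_hold_for_verification(req):
        print("Hold: verify the requester via a known-good channel before acting.")
    else:
        print("Proceed under normal controls.")

The point of the sketch is the policy, not the code: detection tools can fail against convincing synthetic media, so the process should assume a request may be fake until it is verified out of band.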
The Art of Deception 2.0: Navigating Deepfakes and Digital Manipulation in Organizations
The convergence of advanced AI technologies and social engineering tactics has introduced sophisticated threats leveraging deepfakes and synthetic media. These malicious tools manipulate trust, compromise security, and exploit vulnerabilities within organizations. By utilizing realistic synthetic voices, manipulated videos, and fake credentials, cybercriminals deceive individuals, bypass authentication systems, and undermine organizational integrity. Understanding the mechanics of these attacks, their potential impacts, and the strategies to detect and mitigate them is critical for maintaining security. Insights into real-world examples and forward-looking solutions provide the foundation for safeguarding organizations against these increasingly convincing synthetic deceptions.
Learning Objectives:
- Identify and Analyze Synthetic Media Threats
- Evaluate the Impact of Synthetic Media on Insider Threats
- Implement Defensive Measures to Combat Deepfakes
AI Agents in Action: Protecting Against Synthetic Media and Human Risk Exploits
The emergence of AI agents and Agentic AI is transforming offensive and defensive strategies in cybersecurity. Cybercriminals are leveraging these tools to execute sophisticated social engineering attacks, utilizing deepfakes and synthetic media to exploit human vulnerabilities and bypass traditional security measures. AI agents now play a dual role as both enablers of advanced cyberattacks and critical tools for organizational defense. Their capabilities and risks drive security professionals to develop proactive strategies that harness the power of AI agents while mitigating their misuse.
Learning Objectives
- Understand the dual role of AI agents in cybersecurity.
- Detect and mitigate threats from Agentic AI.
- Develop strategies to harness AI agents for defense.
Call to Action: Strengthen your organization’s defenses by implementing proactive measures, adopting detection tools, and integrating AI agents into your cybersecurity framework.
Ransomware, Ransom-war and Ran-some-where
We've all heard about ransomware and its impact on organizations, which suffer an attack almost every two seconds. How do these cybercriminal organizations operate, what are their business models, and what level of experience is needed to work for them? Last year, the Conti ransomware group fell victim to its own style of operations when its playbook, chat sessions, and other critical information ended up on the dark web.
Come and learn about the tactics, techniques, and procedures used by various cybercriminal groups, including one that provides a ransomware service. Understand their modus operandi and learn from them how to defend against their styles of attacks.
Learning Objectives:
- Understand the operations of cybercriminal groups
- See how their attacks can be used to help you defend your organization
- Learn from their mistakes to ensure your organization can effectively defend against a ransomware attack
How Hackers Hack and Why They Do It
Cybercriminals use a wide range of tools, techniques, and technologies to gain access to organizations. From social engineering to attacks on internet-facing endpoints, their main goal is to get inside the organization and steal its data, which they can then sell off or, even more lucratively, encrypt so they can extort the victim for large sums of money. It's important that we understand how they do it, why they do it, and what can be done to reduce the risk or opportunity for them to succeed. With the recent Conti data leak and leaks from other cybercriminal groups, we can get to know our enemy, helping us protect and defend our organizations and reduce the risk of an attack.
Learning Objectives:
- Understand the various technological methods cybercriminals use to attack organizations
- Learn the ways we can help protect ourselves and the organization
- See how cybercriminals use social engineering to get users to open the front door for them
Cybersecurity & You
If you discovered a rash of home burglaries in your neighborhood, you'd find out how the burglars were getting in and what was being stolen, and you'd work to secure your home. In cybersecurity, criminals are always trying to steal your personal information, passwords, and money.
Phishing and social engineering attacks cost organizations billions of dollars each year, and the attacks are getting worse. As these attacks intensify and become more refined, technology struggles to keep up, and users will continue to fall prey. To effectively defend against this, you have to understand how the attacks work, including the psychological triggers and tricks the attackers are using. This session will explore the different techniques that social engineers and scam artists use to make users more likely to do their bidding by clicking links and opening attachments.
Learning Objectives:
- Understand the continuously emerging threats facing organizations
- Learn why people fall victim to social engineering attacks like phishing
- Recognize and understand security culture as the next evolution to security awareness
The target audience is non-technical people. The session can be geared toward the technically minded as a refresher, but its purpose is to educate users and raise additional awareness as they learn more about cybersecurity.
Security Awareness is Unalived. Long Live Human Risk Management
Security Awareness Training (SAT) has long been the standard approach to reducing human-related cyber risks, yet it often falls short in driving measurable behavior change. Human Risk Management (HRM) shifts the focus from awareness to action, leveraging continuous assessment, behavioral analytics, and adaptive interventions. The integration of Artificial Intelligence (AI), Generative AI (GenAI), and Agentic AI presents a transformative opportunity to enhance HRM strategies, enabling organizations to move beyond static training programs toward dynamic, real-time risk mitigation.
This session examines the limitations of traditional SAT programs and highlights the role of AI-driven HRM solutions in proactively identifying and addressing human risk factors. AI and GenAI can analyze user behavior, predict potential security lapses, and tailor interventions based on contextual risk levels, ensuring a more personalized and effective approach. Agentic AI further enhances HRM by automating decision-making processes, providing real-time guidance, and reducing the cognitive burden on employees, making security a seamless part of daily workflows.
By shifting from periodic training to continuous education and risk management, organizations can reduce human-driven threats with precision and scale. This discussion explores the intersection of HRM and AI, presenting a forward-looking approach to securing the human element in cybersecurity.
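As a purely illustrative sketch of what continuous, adaptive human risk management might look like in code (the signals, weights, and thresholds below are hypothetical and not a description of any vendor's product), consider a simple risk score that combines behavioral signals and maps the result to a tailored intervention rather than a one-size-fits-all training assignment.

from dataclasses import dataclass

@dataclass
class UserRiskSignals:
    """Hypothetical behavioral signals an HRM platform might collect per user."""
    phishing_sim_clicks: int      # clicks on simulated phishing in the last 90 days
    reported_phish: int           # suspicious emails the user reported
    training_completion: float    # 0.0-1.0 completion of assigned training
    handles_sensitive_data: bool  # role-based exposure

def risk_score(s: UserRiskSignals) -> float:
    """Combine signals into a 0-100 risk score. Weights are illustrative only."""
    score = 0.0
    score += min(s.phishing_sim_clicks, 5) * 12      # repeated clicks raise risk
    score -= min(s.reported_phish, 5) * 5            # reporting behavior lowers risk
    score += (1.0 - s.training_completion) * 20      # incomplete training raises risk
    score += 15 if s.handles_sensitive_data else 0   # role exposure raises the baseline
    return max(0.0, min(100.0, score))

def choose_intervention(score: float) -> str:
    """Map a risk level to an adaptive intervention instead of a fixed annual course."""
    if score >= 60:
        return "real-time coaching nudge plus targeted micro-training"
    if score >= 30:
        return "short refresher module on the user's weakest topic"
    return "no action; continue passive monitoring"

if __name__ == "__main__":
    user = UserRiskSignals(phishing_sim_clicks=3, reported_phish=0,
                           training_completion=0.4, handles_sensitive_data=True)
    score = risk_score(user)
    print(f"risk score: {score:.0f} -> {choose_intervention(score)}")

In a real HRM program the scoring would be driven by the analytics and AI models described above rather than fixed weights; the sketch only illustrates the shift from periodic training to continuous, per-user risk assessment and intervention.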
CactusCon 13 Sessionize Event