Abhinav Singh
Cyber Security Research in AI,Cloud & Data.
Sunnyvale, California, United States
Actions
Abhinav Singh is an esteemed cybersecurity leader and researcher with over a decade of experience working with global technology leaders, startups, financial institutions, and as an independent trainer and consultant. He is the author of the widely acclaimed "Metasploit Penetration Testing Cookbook" (first, second, and third editions) and "Instant Wireshark Starter." Abhinav's contributions to the cybersecurity community are extensive, including patents, open-source tools, and numerous publications in security and privacy portals. He also serves as an advisor to startups, editorial and review board member for International events like RSA, NeurIPS, CSA events, ISSA & OWASP.
He is a sought-after speaker & trainer at international conferences like Black Hat, RSAC, DEFCON and many more, where he shares his deep industry insights and innovative approaches in cybersecurity. His expertise spans into areas like AI, Cloud, Data & enterprise security.
Links
Area of Expertise
Topics
Defending Public Cloud Infrastructure and Applications
Enhance your cloud security knowledge to defend AWS & Azure infrastructure and apps through building automated detection, alerting, and response systems. This training offers hands-on lab exercises & CTF challenges(metal coins to win) for a practical learning experience.
This hands-on CTF-style training focuses on elevating your security knowledge into the cloud. Learn to defend your AWS & Azure cloud infrastructure by building automated detection, alerting and response pipelines for your workloads by using native cloud services. This training focuses on building security knowledge on the cloud and for the cloud.
AI-Enhanced Futures: Securing the Data Within the GenAI Supply Chain
This talk highlights the engineering controls and tool development to safeguard data across AI's lifecycle. Addressing vulnerabilities from collection to deployment, it emphasizes the creation of bespoke security practices, advanced access controls, and sophisticated anomaly detection tools. The presentation delves into the engineering practices essential for embedding security by design in GenAI systems, advocating for a proactive approach to data protection. It calls for cross-disciplinary collaboration in developing robust security frameworks and tools, ensuring AI applications are both powerful, secure and trustworthy.
Cloud Security Masterclass: Defender's guide to securing public cloud Infrastructure
This hands-on CTF-style training focuses on elevating your security knowledge into the cloud. Learn to defend your public cloud infrastructure by building automated detection, alerting and response pipelines for your public cloud workloads by using native cloud services. This training focuses on building security knowledge on the cloud and for the cloud.
AI-Enhanced Futures: A CISO's Blueprint for Navigating Emerging Data risks in GenAI Era
In the rapidly advancing field of cloud computing, Artificial Intelligence (AI) stands at a crossroads as both a powerful ally and a potential concern in the realm of data security. This talk is focused on unraveling the dual role of AI as both a driver of enhanced security measures and a source of new challenges in data protection for CISO's and cyber executives.
AI-Enhanced Futures: A CISO's Blueprint for Navigating Emerging Data risks in GenAI Era
In the rapidly advancing field of cloud computing, Artificial Intelligence (AI) stands at a crossroads as both a powerful ally and a potential concern in the realm of data security. This talk is focused on unraveling the dual role of AI as both a driver of enhanced security measures and a source of new challenges in data protection for CISO's and cyber executives.
AI SecureOps: Attacking & Defending AI Applications and Agents
Can prompt injections lead to complete infrastructure takeovers? Could AI agents be exploited to compromise backend services? Can jailbreaks create false crisis alerts in security systems? In multi-agent systems, what if an attacker takes over an agent’s goals, turning other agents into coordinated threats? This immersive, CTF-styled training in AI and LLM security dives into these pressing questions. Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications & agentic systems to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for AI apps & agents, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.
By the end of this training, you will be able to:
- Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as instruction injection, agent control bypass, remote code execution for infrastructure takeover as well as chaining multiple agents for goal hijacking.
- Conduct AI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
- Execute and defend against adversarial attacks, including prompt injection, data poisoning, jailbreaks and agentic attacks.
- Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
- Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
- Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.
- Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
- Implement an incident response and risk management plan for enterprises developing or using GenAI services.
AI SecureOps: Attacking & Defending AI Applications and Agents
Can prompt injections lead to complete infrastructure takeovers? Could AI agents be exploited to compromise backend services? Can jailbreaks create false crisis alerts in security systems? In multi-agent systems, what if an attacker takes over an agent’s goals, turning other agents into coordinated threats? This immersive, CTF-styled training in AI and LLM security dives into these pressing questions. Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications & agentic systems to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for AI apps & agents, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.
By the end of this training, you will be able to:
Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as instruction injection, agent control bypass, remote code execution for infrastructure takeover as well as chaining multiple agents for goal hijacking.
Conduct AI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
Execute and defend against adversarial attacks, including prompt injection, data poisoning, jailbreaks and agentic attacks.
Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.
Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
Implement an incident response and risk management plan for enterprises developing or using AI services.
## Detailed Outline (For Reviewers. Do not publish if not needed)
### Introduction
- Introduction to LLM and AI.
- Terminologies and architecture.
- Transformers, Attention & their security implications(hallucinations, jailbreaks etc).
- Agents, multi-agents and multi-modal models.
### Elements of AI Security (1 lab)
- Understanding AI vulnerabilities with case studies on AI security breaches.
- OWASP LLM Top 10 and MITRE mapping of attacks on AI supply chain.
- Threat modeling of AI Applications.
### Adversarial LLM Attacks and Defenses (6 labs)
- Direct and indirect prompt injection attacks and their subtypes.
- Advanced prompt injections through obfuscation and cross-model injections.
- Breaking system prompts and their trust criteria.
- Indirect prompt injections through external input sources.
### Responsible AI & Jailbreaking (6 labs)
- Jailbreaking public LLMs covering adversarial AI, offensive security, and CBRN use-cases.
- Responsible AI frameworks and benchmarks.
- Model alignment, system prompt optimization, and defense.
### Building Enterprise-grade LLM Defenses (2 labs)
- Deploying LLM security scanner, adding custom rules, prompt block-lists, and guardrails.
- Writing custom detection logic, trustworthiness checks, and filters.
- Building security log monitoring and alerting for models using open-source tools.
- LLM security benchmarking and continuous reporting.
### Red & Blue Teaming of Enterprise AI applications(4 labs)
- Business control flow testing for risky responses & misaligned behavior of applications.
- Using Colab notebooks for automation of API calls and reporting
- Vector database and model-weight tracing for root-cause investigation.
- Rainbow teaming through a 3-way LLM implementation: target, attacker, and judge with self-improving attack prompts.
### Attacking & Defending Agentic Systems (5 labs)
- Attacking LLM agents for task manipulation, risky behavior and PII disclosure in RAG.
- Injection attacks on AI agents for code and command execution.
- Compromising backend infrastructure by abusing over-permissioning and tool usage in agentic systems.
- Multi-agent attacks causing privilege too calls, goal manipulation & chained escalations.
### Building AI SecOps Process
- Summarizing the learnings into a SecOps workflow.
- Monitoring trustworthiness, safety and security of enterprise AI applications.
- Implementing NIST AI Risk Management Framework (RMF) for security monitoring.
## Why should people attend your course?
- Practical, hands-on labs, simulating real attacks on AI Applications and Implementing Defense controls on applications to measure the effectiveness of controls.
- Focus on technical discussion, attendee engagement through open-ended questions, brainstorming, and security policy/controls related discussions.
- Continued learning experience since the shared labs are always online with a shared channel of discussion over a dedicated Discord server.
## Who Should Take This Course
- Security professionals seeking to update their skills for the AI era.
- Red & Blue team members.
- AI Developers & Engineers interested in the security aspects of AI and LLM models.
- AI Safety professionals and analysts working on regulations, controls, and policies related to AI.
- Product Managers & Founders looking to strengthen their PoVs and models with security best practices.
## Student Requirements
- Familiarity with AI and machine learning concepts is beneficial but not required.
- Experience in one or more areas of Security.
- Familiarity with common AI applications like OpenAI.
## What should students bring
- API key for OpenAI.
- Google Colab account.
- Complete the pre-training setup before the first day.
## What will students be provided with
- One year access to a live interactive playground with various exercises to practice different attack and defense scenarios for GenAI and LLM applications.
- "AI SecureOps" Metal coin for CTF players.
- Complete course guide containing 200+ pages in PDF format. It will contain step-by-step guidelines for all the exercises, labs, and a detailed explanation of concepts discussed during the training.
- PDF versions of slides that will be used during the training.
- Access to Slack channel for continued engagement, support, and development.
- Access to Github account for accessing custom-built source codes and tools.
- Access to HuggingFace models, datasets, and transformers.
OWASP Global AppSec EU (Vienna) 2026 - CFT Sessionize Event Upcoming
CISO Forum Virtual Summit Sessionize Event
AI Risk Summit + CISO Forum Sessionize Event
SAINTCON 2022 Sessionize Event
Abhinav Singh
Cyber Security Research in AI,Cloud & Data.
Sunnyvale, California, United States
Links
Actions
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top