Speaker

Aviral Srivastava

Aviral Srivastava

Offensive security for the age of machine intelligence

Sunnyvale, California, United States

Actions

Aviral Srivastava is a L4 Security Engineer at Amazon, where he focuses on offensive security, threat detection, and the intersection of AI and cybersecurity. He holds a Master’s degree in Cybersecurity Analytics and Operations from Penn State University, where his research explored adversarial machine learning, red teaming AI systems, and automated generation of cryptographic CTF challenges using large language models.

With over 30 research papers published in top-tier venues such as ICLR, NeurIPS, and ACM, Aviral’s work bridges the gap between academic rigor and real-world impact. His projects have ranged from automated exploit generation and fuzzing defenses to AI red teaming frameworks and symbolic execution enhancements.

He has presented at major cybersecurity conferences including Hackers on planet earth, RSAC, CypherCon, CactusCon, and multiple Bsides venues, delivering talks on topics such as adversarial AI attacks, AI governance and compliance, and the vulnerabilities hidden within AI. In recognition of his contributions, he was selected as an RSA Security Scholar in 2025 and awarded Cybersecurity Innovator of the Year (2023) at BSides Bangalore.

Aviral is also an active open-source contributor and developer of tools like KernelGhost , ZeroDayForge (offensive research framework), Polymorphic Shellcode Engine, and a Crypto CTF Challenge Builder powered by LLMs. His mission is to build tools that help defenders understand how attackers think and help attackers sharpen their edge responsibly.

Whether he’s reverse engineering binaries, crafting AI-driven payloads, or speaking to future researchers, Aviral remains passionate about pushing the boundaries of offensive security, AI safety, and cybersecurity education.

Area of Expertise

  • Information & Communications Technology

Topics

  • cybersecurity
  • AI and Cybersecurity
  • Adversarial AI
  • Hacking
  • Emerging Cybersecurity Topics
  • Information Security
  • reinforcement learning
  • Information Tehnology
  • Information Security Governance and Risk
  • Deep Reinforcement Learning
  • Information Protection
  • Cloud Security
  • Application Security
  • Developer Tools
  • AppSec
  • Machine Learning
  • Machine Learning and AI
  • Hacker
  • AWS Security
  • Data Security
  • Multi-cloud Security
  • mobile application security
  • api security
  • Offensive AI
  • Offensive Security
  • Red Team / Blue Team / Purple Team
  • AI Red Teaming
  • SOC
  • Security Operation Center

Weaponizing AI: Adversarial Attacks, Hallucinations, and the Offensive Security Frontier

As artificial intelligence becomes more integral to critical systems, it increasingly serves as a vector for offensive security attacks. Our session, "Weaponizing AI," delves into the murky intersection of AI hallucinations, adversarial attacks, and the development of offensive techniques exploiting these vulnerabilities. We will explore how attackers use adversarial techniques to induce AI hallucinations, which deceive systems into producing false data and disrupt machine perception, thereby opening new avenues for attacks in sectors like autonomous vehicles, healthcare, and financial systems.

We'll dissect the mechanics of adversarial examples, backdoor attacks, and model poisoning, demonstrating how these tactics compromise the integrity of AI systems in mission-critical environments. The session will also cover effective defense strategies, such as red-teaming AI models, employing anomaly detection, and using adversarial training to bolster system defenses.

Key Takeaways:

1) An understanding of how adversarial attack vectors exploit AI hallucinations and failure modes.

2) Insights into offensive techniques like adversarial examples, model manipulation, and backdoor attacks.

3) Defense strategies including adversarial training and anomaly detection to mitigate risks from AI manipulations.

Hacking Neural Networks: The Hidden Vulnerabilities of AI Systems

As artificial intelligence (AI) and machine learning (ML) revolutionize industries, from healthcare to finance and beyond, neural networks are at the heart of this transformation. But beneath their groundbreaking capabilities lies a hidden vulnerability: adversarial attacks. These subtle, often undetectable attacks can manipulate AI systems in ways that can be catastrophic for security-critical applications.

This session will expose the reality that neural networks, despite their sophistication, can be hacked with surprisingly simple techniques. We'll delve into how adversarial attacks exploit weaknesses in AI models, from tricking image recognition systems into misclassifying objects to manipulating financial models to produce faulty outcomes. Through real-world examples and a live demonstration, attendees will witness firsthand how seemingly minor changes in input data can have devastating consequences.

With AI rapidly becoming an integral part of modern cybersecurity defenses, the question isn't whether neural networks will be targeted, but when. This session will not only explore the mechanics of these attacks but will also arm participants with strategies to defend against them, highlighting the critical need for securing AI systems as they become increasingly integrated into our daily lives.

This talk is a must-attend for security professionals, AI developers, and anyone interested in the future of cybersecurity. As the AI landscape expands, understanding its vulnerabilities is crucial to protecting the systems that power our world.

Key Takeaways:
1. Multiple examples
2. A comprehensive breakdown of adversarial attacks and their potential to compromise AI systems.
3. Real-time demonstration of hacking a neural network. (maybe)
4. Insight into emerging defense mechanisms to secure AI systems.
5. Ethical implications of deploying vulnerable AI systems in critical applications.

Filling Gaps in AI Governance: How ISO/IEC 42001 Shapes the Future of AI Risk and Compliance

Abstract
In this presentation, we will explore the emerging gaps in AI governance and how the newly released ISO/IEC 42001 framework addresses these critical issues. As AI technologies evolve rapidly, organizations face increasing challenges in managing risks related to ethics, security, transparency, and accountability. This talk will provide an in-depth analysis of ISO/IEC 42001's role in mitigating these gaps and aligning governance frameworks with the unique demands of AI systems. Attendees will gain actionable insights on how to integrate these principles into their risk and compliance strategies while ensuring ethical and secure AI practices. Whether you’re a technologist, hacker, or executive, this talk will provide a roadmap to navigate the complexities of AI governance effectively.

Agents Under Siege: Live Attacks from RAG to Tool Calls to Protocols

Agentic AI doesn’t fail at the chat box it fails at the actions it takes. In this talk I chain three concise, live demos that move from data to action to supply chain:

RAG plan-graft: a tiny poisoned snippet in a local index silently adds an extra workflow step that changes business logic, reflecting recent RAG-poisoning research (e.g., “PoisonedRAG”).

Function-call abuse: adversarial inputs and crafted error paths cause argument drift so a “read-only” tool becomes a write—aligned with new tool-calling attack work.

Malicious protocol plugin: a benign-looking Model Context Protocol (MCP) server exfiltrates data, echoing real incidents and vendor advisories.

Each demo shows the second prompt (what the model tells tools to do), the observable failure signatures (unauthorized tool calls, argument-shape mutations, plan revisions), and simple fixes you can ship this week: pre/post-conditions and safelists on tools, schema-aware linting of generated calls, action-prompt logging with provenance, and basic KPIs (Unsafe Tool-Call Ratio, Off-Policy Action Rate). We’ll also map where OWASP LLM guidance fits and where it stops so you can harden real agent workflows, not just prompts.

Aviral Srivastava

Offensive security for the age of machine intelligence

Sunnyvale, California, United States

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top