Speaker

Saurabh Kumar Pandey

Saurabh Kumar Pandey

MIQ - Associate Engineering Manager

Bengaluru, India

Actions

Saurabh is a passionate cybersecurity professional and a recognized Offsec Ambassador, dedicated to strengthening the security community. As the Organizer of B-sides Ahmedabad, he actively fosters collaboration and knowledge-sharing in the infosec space. With OSCP and CRTP certifications, Saurabh specializes in penetration testing, red teaming, and adversary simulation. He currently works as a Associate Engineering Manager , where he plays a crucial role in securing complex infrastructures

Area of Expertise

  • Information & Communications Technology
  • Law & Regulation

Topics

  • Red Team
  • penetration testing
  • Web Exploitation
  • Exploit Development
  • exploiting IOT
  • System Exploitation
  • Blue Team
  • window exploit
  • linux exploit
  • AI Red Teaming
  • Social Engineering and Phishing:

Stealth Mode Engaged: Advanced Antivirus and AMSI Evasion Techniques

In a post-signature world, antivirus engines have evolved — but so have attackers. AMSI is Microsoft’s last line of defense against malicious scripts, while UAC acts as the gatekeeper to elevated execution. What happens when both are bypassed with surgical precision?

In this talk, we’ll venture deep into advanced evasion on Windows, focusing on PowerShell and JScript-based attacks designed to dismantle modern defenses. Through the lens of offensive operations and red team engagements, we’ll explore:

Bypassing AMSI with .NET Reflection in PowerShell

Overwriting AMSI buffers in memory (Wrecking AMSI)

UAC Bypasses that remain effective even against Microsoft Defender's real-time scans

AMSI evasion using JScript and COM-based execution

Each evasion technique will be demonstrated, dissected, and explained — including the underlying memory manipulation, abuse of trusted interfaces, and how attackers ensure persistence under heavy endpoint monitoring.

Alongside these practical payloads, we’ll cover:

Why AMSI is broken by design in certain contexts

The cat-and-mouse game between red teams and Microsoft patches

Defensive techniques to detect these evasions in real-time

This session is not for the faint of heart. It’s for defenders, red teamers, and researchers who want to stay ahead in a game where stealth, not brute force, wins.

Hacking the Mind of the Machine: Pwn the Prompt – Inside LLM Vulnerabilities and Exploits

As Large Language Models (LLMs) like GPT, Claude, and Gemini become embedded in everything from customer support agents to autonomous cybersecurity tools, they bring with them a radically new attack surface—one shaped not by traditional code execution, but by language, intent, and contextual manipulation. This talk is for the red teamers, hackers, and curious minds who want to pull back the curtain and see how these so-called “intelligent” systems can be broken, hijacked, and subverted.

In this session, we’ll begin by demystifying LLMs—how they work, what they actually do under the hood, and why they behave more like improv actors than deterministic programs. From there, we’ll dive into the meat of the talk: practical, offensive security techniques that exploit the quirks, limitations, and architectural oversights of LLM-powered systems.

You’ll learn how prompt injection works—and why it’s way more than just asking the AI to “ignore previous instructions.” We’ll show real-world examples of jailbreaks that bypass filters, inject unintended commands, and even exfiltrate private data across session contexts. We'll cover improper output handling that turns trusted AI responses into cross-system attack vectors, and explore the fragile security assumptions in API-integrated LLMs that allow privilege escalation, function abuse, or total system compromise.

But we’re not stopping at prompts. We’ll go deeper into the AI development lifecycle—unpacking supply chain attacks on model fine-tuning, vulnerabilities in prompt engineering frameworks, and the risks of deploying autonomous LLM agents with too much agency and not enough oversight. If you've ever wondered whether a chatbot could trigger an internal API call that deletes your database, you're in the right place.

This talk doesn’t require a PhD in machine learning—just a hacker mindset and a willingness to explore the limits of emerging tech. Attendees will walk away with a red team–ready methodology for testing LLM systems, a mental map of their weak points, and a toolkit of real tactics that go beyond theoretical risks into practical exploitation.

Building on this foundation, the session will transition into an in-depth examination of emerging threats and offensive techniques targeting LLMs in real-world environments. Attendees will explore:

Prompt Injection: Techniques for manipulating model prompts to subvert expected behavior, including direct injection, indirect (latent) prompt manipulation, and prompt leaking.

Jailbreaking: Advanced methods for bypassing model safety layers and restriction policies, allowing unauthorized actions or outputs.

Output Handling Vulnerabilities: Common failure points in downstream systems that trust LLM outputs without proper validation or sanitization.

LLM API and Deployment Security: Attack vectors exposed by insufficient authentication, poor input/output filtering, and insecure API integrations.

Supply Chain Risks: Threats targeting the LLM development lifecycle—including poisoned datasets, backdoored fine-tuning checkpoints, and compromised third-party tools.

Autonomous Agent Overreach: Risks arising from LLMs with agency, including goal misalignment, unchecked tool usage, and recursive decision-making loops.

Resource Abuse Scenarios: Tactics for exploiting LLM endpoints via prompt amplification, looped interactions, and denial-of-service through compute exhaustion.

Through a blend of real-world examples, technical deep dives, and hands-on offensive demonstrations, attendees will gain a red team–oriented perspective on securing LLMs.

From Shodan to Secrets: Red Teaming Vault in Kubernetes and Building a Secure Defense

In this session, we’ll walk through a real-world attack scenario targeting a misconfigured Kubernetes cluster with HashiCorp Vault deployed for secrets management. The attack begins externally—with Shodan reconnaissance—demonstrating how Vault, Consul, and Nomad instances are often unintentionally exposed on the internet. From there, we pivot into the Kubernetes environment, gaining access through an exposed dashboard or pod and abusing misconfigured policies, insecure token handling, and overly permissive access to exfiltrate secrets such as cloud credentials.
We’ll draw from a structured Kubernetes security learning path to simulate each stage of the attack chain:
External discovery using Shodan dorks to identify exposed HashiCorp services
Privilege escalation using service account tokens and Vault API misuse
Secrets extraction from Vault’s AWS secrets engine
Terraform state secrets misuse and token leakage
Defensive hardening using Vault auth methods, short-lived tokens, namespace scoping, and Kubernetes RBAC and network policies
To empower defenders, we’ll also share a practical method to automate exposure monitoring using the Shodan API. This script allows security teams to detect when their Vault or related infrastructure becomes exposed—helping to close the gap between discovery and response.
Whether you're just starting your cloud security journey or actively defending production clusters, this talk offers an actionable blueprint for identifying risks, simulating real-world threats, and implementing security best practices for modern secrets management.

Security Scorecard: A Data-Driven Approach to Measuring Cyber Resilience

We've built a team-wise reporting system that scores services based on vulnerabilities. Data from multiple security tools grades each service from A to D, motivating teams to strive for an A+ rating.

Hacking the Mind of the Machine: Red Teaming Autonomous and Prompt-Based Systems

As Large Language Models (LLMs) like GPT, Claude, and Gemini become embedded in everything from customer support agents to autonomous cybersecurity tools, they bring with them a radically new attack surface—one shaped not by traditional code execution, but by language, intent, and contextual manipulation. This talk is for the red teamers, hackers, and curious minds who want to pull back the curtain and see how these so-called “intelligent” systems can be broken, hijacked, and subverted.

In this session, we’ll begin by demystifying LLMs—how they work, what they actually do under the hood, and why they behave more like improv actors than deterministic programs. From there, we’ll dive into the meat of the talk: practical, offensive security techniques that exploit the quirks, limitations, and architectural oversights of LLM-powered systems.

You’ll learn how prompt injection works—and why it’s way more than just asking the AI to “ignore previous instructions.” We’ll show real-world examples of jailbreaks that bypass filters, inject unintended commands, and even exfiltrate private data across session contexts. We'll cover improper output handling that turns trusted AI responses into cross-system attack vectors, and explore the fragile security assumptions in API-integrated LLMs that allow privilege escalation, function abuse, or total system compromise.

But we’re not stopping at prompts. We’ll go deeper into the AI development lifecycle—unpacking supply chain attacks on model fine-tuning, vulnerabilities in prompt engineering frameworks, and the risks of deploying autonomous LLM agents with too much agency and not enough oversight. If you've ever wondered whether a chatbot could trigger an internal API call that deletes your database, you're in the right place.

This talk doesn’t require a PhD in machine learning—just a hacker mindset and a willingness to explore the limits of emerging tech. Attendees will walk away with a red team–ready methodology for testing LLM systems, a mental map of their weak points, and a toolkit of real tactics that go beyond theoretical risks into practical exploitation.

From Shodan to Secrets: Red Teaming Vault in Kubernetes—and Building Resilient Defenses with the Has

Abstract
In this session, we’ll explore a realistic, end-to-end attack scenario that highlights how misconfigurations in Kubernetes and secrets management can lead to serious exposures—and how the robust features of HashiCorp Vault can be used to defend against them.
We begin our journey from the outside in, using Shodan reconnaissance to identify exposed instances of Vault, Consul, and Nomad—demonstrating how these powerful tools, when misconfigured, can unintentionally be made visible on the public internet. From there, we pivot into a Kubernetes cluster, exploiting common weak points such as insecure dashboards, overly permissive policies, and poor token hygiene to escalate privileges and access secrets.
Each phase of the attack chain is mapped to a structured Kubernetes security learning path:
External discovery using Shodan dorks to surface exposed HashiCorp services
Privilege escalation through service account tokens and Vault API misuse
Secrets extraction from Vault’s AWS secrets engine
Terraform state file leaks and hardcoded tokens
Real-world defensive strategies using Vault’s flexible auth methods, short-lived credentials, namespace segmentation, and integrations with Kubernetes RBAC and network policies
To empower defenders, we’ll share an automated approach for exposure monitoring using the Shodan API—enabling teams to proactively detect and respond when Vault or related infrastructure is exposed.
This talk balances offense and defense, giving both red and blue teams a hands-on blueprint for identifying risks, simulating real-world threats, and ultimately strengthening secrets management with Vault as a core security pillar in modern cloud-native environments

From Shodan to Secrets: Red Teaming Vault in Kubernetes—and Building Resilient Defenses

In this session, we’ll explore a realistic, end-to-end attack scenario that highlights how misconfigurations in Kubernetes and secrets management can lead to serious exposures—and how the robust features of HashiCorp Vault can be used to defend against them.

From DevSecOps to AIOps — Securing the AI-Integrated Software Supply Chain with Real-World Intellige

As enterprises adopt AI-native architectures to drive innovation and automation, the software development lifecycle is evolving rapidly — shifting from traditional DevOps to AIOps. In this new paradigm, models, data, and code co-exist in a unified pipeline, enabling continuous learning and deployment of intelligent systems. But this transformation brings with it a drastically expanded and undersecured attack surface.
In this session, we’ll explore how our organization is addressing these risks through hands-on experience securing AI-enabled platforms like MiQ Sigma — a multi-agent, LLM-powered system that automates campaign planning and orchestration across hundreds of data sources and DSPs. We’ll walk through the technical and governance challenges we faced securing a dynamic AI supply chain, and the solutions we implemented across data ingestion, model deployment, and inference execution.
You’ll gain actionable insights into:
The anatomy of an AI-integrated software pipeline, and how it differs from traditional DevOps
Unique supply chain threats in AI systems, including model poisoning, prompt injection, and malicious pretrained models
Our internal security blueprint for protecting LLM workflows — from data integrity checks to prompt pipeline hardening
Best practices for securing CI/CD pipelines that manage AI artifacts
Guardrails for continuous training (CT) and safe inference in production
Approaches to ensure compliance with evolving AI regulations (e.g., GDPR, EU AI Act)
Attendees will leave with a real-world playbook for securing AI-native environments — balancing innovation with resilience, and aligning DevSecOps principles with the demands of modern AIOps.

Saurabh Kumar Pandey

MIQ - Associate Engineering Manager

Bengaluru, India

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top