Krishnendu Dasgupta
Founder, AXONVERTEX AI
Bengaluru, India
Krishnendu Dasgupta is an engineer with 15+ years in applied Machine Learning. His interests span healthcare, generative AI, and decentralized AI. He is currently applying AI innovation in clinical trials, graph ML, NLP, and privacy-preserving AI. A Stanford Code in Place mentor and MIT Bootcamp alum, he has contributed to NIST and MIT Hacking Medicine, and won the Molypix AI award (MIT, 2025). Krishnendu focuses on ethical, scalable AI with global impact. He spoke at The Linux Foundation AI Dev Summit in 2025 and recently at NODES 2025. He has contributed to more than 12 books on Artificial Intelligence and Machine Learning published globally by Springer Nature.
Area of Expertise
Topics
Building, Securing, and Deploying AI Agent Swarms in a Trustless Decentralized Ecosystem
Modern AppSec teams are starting to use agentic workflows to triage vulnerability reports and incident tickets containing logs, stack traces, and chat transcripts, often laden with PII, secrets, and sensitive internal context. Once these workflows add RAG and autonomous tool use (function calling), the attack surface expands: prompt injection can trigger unsafe actions, sensitive data can leak through memory, RAG, or tool outputs, agents can be spoofed, and controls can be bypassed.
Enter AI Agent Swarms in a trustless decentralized ecosystem. This training covers why, and how, to build, secure, and deploy AI agent swarms in a decentralized, trustless environment.
In this hands-on 1-day training, you will build a Secure AppSec Triage & Remediation Swarm: a policy-governed, privacy-preserving multi-agent system powered by open-source foundation models in the 4B–20B range (Mistral/Qwen-class), with an explicit focus on EU policy-driven controls.
End-to-end understanding: prototype rapidly in Google Colab, then transition to a self-hosted Docker deployment hardened with container security best practices and protected at the edge using Cloudflare Zero Trust, WAF, and rate limiting.
You will implement:
1. Policy-as-code guardrails aligned to EU governance expectations (risk tiers, tool/model/RAG permissions, and human-oversight triggers).
2. PII detection and masking/pseudonymization using an anonymization framework across user input, inter-agent messages, and retrieval context.
3. Secured tool use via structured function calling, including a focused exploration of security-specific LLMs for security-oriented reasoning and structured outputs.
4. An eval + security test harness covering PII leakage, prompt-injection resilience, agent spoofing/tampering, and DoS/rate-limit checks.
5. An auditable deployment that produces a compliance-friendly evidence bundle (policies, logs, and test results).
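The policy-as-code guardrails in item 1 above can be sketched as a small authorization layer consulted before every tool call. This is a minimal illustration; the tier names, tool names, and oversight rules are hypothetical placeholders, not a mapping of any specific EU regulation.

```python
# Minimal policy-as-code sketch: risk tiers map to tool permissions and
# human-oversight triggers. All names here are illustrative.

POLICY = {
    "low":    {"allowed_tools": {"search_docs", "summarize"},                "needs_human": False},
    "medium": {"allowed_tools": {"search_docs", "summarize", "open_ticket"}, "needs_human": False},
    "high":   {"allowed_tools": {"open_ticket"},                             "needs_human": True},
}

def authorize(risk_tier: str, tool: str) -> dict:
    """Return an authorization decision for a single tool call."""
    rules = POLICY.get(risk_tier)
    if rules is None or tool not in rules["allowed_tools"]:
        # Unknown tiers fail closed: nothing is permitted by default.
        return {"allowed": False, "needs_human": False, "reason": "tool not permitted at this risk tier"}
    return {"allowed": True, "needs_human": rules["needs_human"], "reason": "ok"}
```

Keeping the policy as data rather than code means it can be versioned, reviewed, and shipped in the same evidence bundle as the logs and test results.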
After 8 hours, attendees leave with runnable take-home assets: Colab notebooks, a hardened Docker Compose stack, policy templates, and red-team scripts that can be directly adapted to real AppSec triage pipelines.
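The PII masking step above can be illustrated with a simplified stand-in for a full anonymization framework (Presidio-class tools do this with trained recognizers): regex detection plus deterministic pseudonyms, so the same entity maps to the same token across user input, inter-agent messages, and retrieval context.

```python
import hashlib
import re

# Simplified PII pseudonymization sketch. Real deployments would use an
# anonymization framework with proper recognizers; the two regexes below
# are illustrative only.
RECOGNIZERS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IP":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def pseudonymize(text: str) -> str:
    """Replace detected PII with stable hashed placeholders."""
    for label, pattern in RECOGNIZERS.items():
        def repl(match, label=label):
            # Deterministic digest: same entity -> same token everywhere.
            digest = hashlib.sha256(match.group().encode()).hexdigest()[:8]
            return f"<{label}_{digest}>"
        text = pattern.sub(repl, text)
    return text
```

Determinism matters for triage: agents can still correlate "the same reporter" across tickets without ever seeing the raw identifier.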
Privacy as Infrastructure: Declarative Data Protection for AI on Kubernetes
AI services are multiplying faster than privacy controls can keep up. This talk covers a Kubernetes-native approach to make privacy "just work": an open-source framework that treats data protection as infrastructure, not application code. It introduces the concept of a Privacy Operator that discovers AI and ML workloads, applies declarative privacy policies, and enforces anonymization at deployment and runtime. Instead of developers wiring in libraries or filters, the platform ensures that sensitive data never leaves a workload unprotected. We will demonstrate the architecture, policy model, and enforcement patterns, from webhook-based mutation to service-level mediation, with key trade-offs for latency, reliability, and observability. This session will show privacy automation in action as policies update dynamically across running AI workloads.
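The webhook-based mutation pattern mentioned above can be sketched as follows: a mutating admission webhook inspects an incoming Pod spec and injects an anonymization sidecar when the workload is labeled as handling sensitive data. The label key and sidecar image are hypothetical, not identifiers from any specific operator.

```python
# Sketch of a mutating-webhook handler body, operating on the Pod spec as a
# plain dict (the shape Kubernetes sends in an AdmissionReview). The label
# "privacy.example.io/enforce" and the sidecar image are made-up examples.

SIDECAR = {"name": "privacy-proxy", "image": "example/privacy-proxy:latest"}

def mutate_pod(pod: dict) -> dict:
    """Inject the anonymization sidecar into policy-covered Pods."""
    labels = pod.get("metadata", {}).get("labels", {})
    if labels.get("privacy.example.io/enforce") == "true":
        containers = pod["spec"]["containers"]
        if not any(c["name"] == SIDECAR["name"] for c in containers):
            containers.append(dict(SIDECAR))  # idempotent: inject at most once
    return pod
```

Because enforcement happens at admission time, application teams never wire in privacy libraries themselves, which is the "privacy as infrastructure" point of the talk.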
AGENTS OF S.E.A.L.E.D: AI Agentic Cybersecurity Framework
The Agents of S.E.A.L.E.D (Secure, Encrypt, Analyze, Locate, Eliminate, Defend) framework introduces an agentic cybersecurity ecosystem leveraging AI-driven cryptography and intelligent agent-based guardrails. Our research focuses on deploying customizable AI agents that integrate encryption, real-time threat analysis, cyber threat localization, targeted elimination, and robust defense tactics.
This session explores practical and foundational approaches using frameworks like Semantic Kernel and AutoGen, along with guardrail models such as IBM Granite and Meta Llama Guard. Attendees will learn about RAG, RAF, Graph-RAG, and RLHF methodologies for cybersecurity, with attention to compliance (Digital Protection Act, US/UK privacy acts).
The session will highlight agent-to-agent communication, decentralized frameworks, and multimodal data integration (text, voice, image, video) to transform cybersecurity strategies. Discussion includes plug-and-play deployment, agent orchestration, and real-time policy-to-action pipelines.
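The agent-to-agent communication with guardrails described above can be sketched as a message gate: every inter-agent message passes a guardrail check before delivery. The keyword screen below is a deliberately naive placeholder for a real guardrail model (a Llama Guard-class classifier); the agent names and phrases are illustrative.

```python
# Illustrative agent-to-agent message gate. The blocklist stands in for a
# proper guardrail classifier and is not a real detection mechanism.
BLOCKLIST = ("ignore previous instructions", "exfiltrate", "disable logging")

def guardrail(message: str) -> bool:
    """Return True if the message is considered safe to forward."""
    lowered = message.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

def send(sender: str, receiver: str, message: str, inbox: dict) -> bool:
    """Deliver a message only if the guardrail passes; drop it otherwise."""
    if not guardrail(message):
        return False  # in practice: drop, audit-log, and alert
    inbox.setdefault(receiver, []).append((sender, message))
    return True
```

Placing the gate in the transport, rather than in each agent, keeps the policy-to-action pipeline centralized and auditable.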
Disclaimer: Research-oriented implementations only.
OWASP AppSec Italy 2026 - Call for Trainings (upcoming)
KubeCon + CloudNativeCon Europe 2026 (upcoming)
NODES 2025
AI_dev: Open Source GenAI & ML Summit Europe 2025