Amar Pandey
Engineer@IBM
Bengaluru, India
Amar Nath Pandey is an Advisory Engineer at IBM India Software Lab with approximately 18 years of IT experience specializing in J2EE, Cloud technologies, and Machine Learning. He has a proven track record of leading teams to deliver complex platforms, including Kubernetes-based ML serving systems and intelligent automation foundations. Amar holds certifications in Data Science and Machine Learning, and his technical expertise spans Java, Golang, Python, and cloud-native tools like Docker and OpenShift. Throughout his career at organizations like Oracle, Nokia Siemens, and IBM, he has consistently focused on building scalable web applications and innovative software solutions.
The "Security Guard" for AI: Making Enterprise Tools Safe with MCP Gateways
Abstract:
Over the last few years, everyone has started talking to AI. Now we are entering a new phase where AI doesn't just talk; it actually does work for us. While this is exciting, it creates a serious safety problem for company data and reputation. Leaders are now asking: "How do we let our teams use AI without accidentally opening our doors to hackers?"
This presentation gives leaders a clear, non-technical plan for using AI safely. We will look at the real risks companies face and show how an MCP Gateway acts as a single, strong "Security Guard" for all your AI tools. We will explain how to keep your data private and follow legal rules without slowing down your AI projects.
Join us to learn how to turn AI into a safe, powerful engine for your company while keeping your data under your control.
1. What are you hoping to get from your presentation?
My main goal is to show that we can connect AI agents to private company data without making it easy for hackers to get in. I am hoping to:
• Encourage a Standard Way of Working: I want to show why using a single "Gateway" (a front door) is better than trying to secure every single AI tool one by one.
• Get Feedback: I want to talk to other developers and security experts to see how they would use such "Secret Tunnels" (WireGuard) in their own companies.
• Find Collaborators: I hope to find people who want to help build better "detectors" to spot when an AI is being tricked into doing something dangerous (a rough sketch of such a detector follows this list).
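To make the "detector" idea concrete, here is a minimal, hypothetical Python sketch of a check a gateway could run before forwarding an agent's tool call. The pattern list, the deny-list entries, and the function name are illustrative assumptions, not part of any specific product or of the talk itself.

```python
import re

# Illustrative rules only: patterns that suggest an AI agent is being steered
# into a risky action (destructive commands, secrets leaving the system, etc.).
RISKY_ARGUMENT_PATTERNS = [
    re.compile(r"rm\s+-rf", re.IGNORECASE),       # destructive shell command
    re.compile(r"DROP\s+TABLE", re.IGNORECASE),   # destructive SQL
    re.compile(r"BEGIN\s+PRIVATE\s+KEY"),         # private key in the payload
]

def flag_risky_tool_call(tool_name: str, arguments: dict) -> list[str]:
    """Return human-readable reasons why this tool call looks dangerous."""
    reasons = []
    flattened = " ".join(str(value) for value in arguments.values())
    for pattern in RISKY_ARGUMENT_PATTERNS:
        if pattern.search(flattened):
            reasons.append(f"argument matches risky pattern: {pattern.pattern}")
    # Example of a policy rule: some tools are simply off-limits to agents.
    if tool_name in {"delete_customer_records", "wire_transfer"}:
        reasons.append(f"tool '{tool_name}' is on the deny list for AI agents")
    return reasons

if __name__ == "__main__":
    print(flag_risky_tool_call("run_shell", {"command": "rm -rf /data"}))
```

In practice such rules would be far richer, and probably combined with anomaly detection, but the point is that the check sits at the gateway rather than inside every individual tool.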
2. What do you expect the audience to gain from your presentation?
The audience will walk away with a clear plan for how to keep their AI projects safe. They will learn:
• The "Blueprint": A simple guide on how to use a Gateway to handle the complicated login stuff (OAuth 2.1) so their developers don't have to.
• How to Hide Data: A simple explanation of how to use "Zero Trust" tunnels to keep private information off the public internet.
• A New Safety Checklist: An introduction to the MAESTRO framework, which is like a 7-layer checklist to make sure every part of the AI system is protected.
• Free Tools They Can Use: A list of ready-to-use open-source tools, such as Traefik and CrowdSec, that act like digital bouncers for their AI.
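As a rough illustration of the "Blueprint" point above, here is a minimal sketch of a gateway that validates an OAuth bearer token once and only then forwards the request to an internal MCP tool server. It assumes FastAPI, httpx, and PyJWT; the signing key, audience, and backend URL are placeholders, and a real deployment would verify tokens against the identity provider's keys and more likely use an off-the-shelf gateway such as Traefik than hand-rolled code.

```python
import jwt                      # PyJWT
import httpx
from fastapi import FastAPI, HTTPException, Request

# Placeholders: a real gateway would verify tokens against the identity
# provider's JWKS keys instead of a shared secret, and TOOL_BACKEND would
# point at the actual internal MCP tool server.
SIGNING_KEY = "replace-with-real-key-material"
TOOL_BACKEND = "http://internal-mcp-server:8080"

app = FastAPI()

def require_valid_token(request: Request) -> dict:
    """Reject the request unless it carries a valid bearer token."""
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        raise HTTPException(status_code=401, detail="missing bearer token")
    try:
        return jwt.decode(
            auth.removeprefix("Bearer "),
            SIGNING_KEY,
            algorithms=["HS256"],
            audience="mcp-gateway",
        )
    except jwt.PyJWTError as exc:
        raise HTTPException(status_code=401, detail=str(exc))

@app.post("/mcp/{tool_path:path}")
async def forward_to_tool(tool_path: str, request: Request):
    claims = require_valid_token(request)        # one checkpoint for every tool
    async with httpx.AsyncClient() as client:    # forward only after the check passes
        upstream = await client.post(
            f"{TOOL_BACKEND}/{tool_path}",
            content=await request.body(),
            headers={"X-Authenticated-Subject": claims.get("sub", "")},
        )
    return upstream.json()
```

Because the token check lives in one place, the tools behind the gateway never handle raw credentials, which is exactly what frees developers from doing this work tool by tool.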
3. How will your presentation help better the ecosystem?
This presentation benefits the whole AI community by making security easier for everyone. It helps the ecosystem by:
• Making AI more Trustworthy: When companies feel their data is safe, they are more likely to use helpful AI tools.
• Saving Time for Developers: By using a "Security Guard" (Gateway) at the front door, developers can build tools faster without having to be security experts themselves.
• Promoting "Defense-in-Depth": It teaches people not to rely on just one lock, but to use multiple layers of protection—like encryption, ID checks, and behavior monitoring—all at once.