Session
Engineering Trust in LLM-Powered Applications
LLM-powered applications introduce a new class of risks: prompt injection, data leakage, hallucinations, model abuse, and indirect exploit paths that traditional security models do not cover. This session examines how these vulnerabilities emerge in real-world AI systems and how they can be systematically controlled.
Explore exploit patterns and safety architectures for LLM applications, and see how SAP AI Core enables secure-by-design adoption through seamless LLM integration, embedded guardrails, policy enforcement, and operational controls, helping organisations move from AI experimentation to production-grade, trustworthy systems.
Animesh Kumar Mishra
Product Security Senior Specialist, SAP Labs India