Session

Red-for-Blue: Fortifying Applications through Actionable Red-Teaming

With GenAI and LLM applications conquering the world at an unprecedented pace, the evolution of the new attack surface associated with these applications, puts a challenge to security practitioners in general, and specifically also for red-teams. GenAI security red-teaming can focus on three victim-objects; the LLM model itself, the prompt, and the entire application, with each of these having its own challenges and opportunities.
With a defender mindset, striving for utilization of red-teaming within the application development lifecycle in a manner that contributes to proactive security by providing actionable insights on fortifying the application, we will present a novel security approach, based on a triangle of tools: a) Threat-wise prompts red-teaming; b) Prompt hardening through prompt patching; c) Adversarially robust LLM that has high Security Steerability.

Itsik Mantin

Head of AI Security Research, Intuit

Tel Aviv, Israel

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top