Pedram Hayati
Founder SecDim.com, SecTalks.org. Senior Lecturer UNSW.edu
Sydney, Australia
Actions
Dr. Pedram Hayati is the Founder and CEO of SecDim, where he focuses on redefining developer engagement in security through developer-oriented wargames.
As a security researcher who transitioned from the offsec to appsec, he has reported thousands of vulnerabilities to Fortune 500 companies, published over 25 zero-days, and has led a global penetration testing team for 2nd largest Defence contractor.
Pedram lectures postgraduate security courses at the University of New South Wales, Australian Defence Force Academy.
He is the founder of SecTalks.org, the largest non-profit security community in Australia. He has presented at top global security conferences such as at Black Hat, DEF CON, Hack In The Box, OWASP and FirstCon.
Area of Expertise
Topics
Fix the Flag Wargame (Day 1)
AppSec Village is proud to present our DEF CON Contest in partnership with SecDim.
Unlike a typical CTF, this contest rewards the players who fix security vulnerabilities. You can choose between secure coding challenges or competing against other players in an Attack & Defence AppSec challenge, where you need to secure your app before being given a chance to hack other players’ apps😈.
You can also develop your own AppSec challenge by following challenge contribution guidelines. All approved challenges will be hosted during AppSec Village @ DEF CON.
There are two categories of winners:
- The player with the highest total points by the end of the event (August 11 at 12:00 PM PDT)
- The best-contributed challenge submission
The Award Ceremony will be held at 12:45 PM PDT in the AppSec Village on August 11.
Fix The Flag - Battle Challenge
Find a security vulnerability in an app and get a score when you effectively fix it. The winner of the competition is the first person who fixes the vulnerability.
Jailbreaking and Protecting LLM Apps: A Public Wargame Experiment
This presentation captures findings from a public AI security challenge designed to evaluate the resilience of Large Language Models (LLMs) against prompt injection attacks. The experiment involved an Attack & Defence wargame where participants were tasked with securing their LLMs, specifically preventing secret phrase disclosure. They were given access to the source code of the app that interfaced with OpenAI API. Simultaneously, participants were to attack other LLMs in an attempt to exfiltrate the secret phrase. A notable aspect of this experiment was the real-time evolution of defensive strategies and offensive tactics by participants. The results indicated that all LLMs were exploited at least once, thus highlighting the complexity behind LLM security and lack of in-depth understanding of prompt injection. This underscores how there is no silver bullet for securing against prompt injection and that it remains as an open problem.
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top