
Pedram Hayati
Founder SecDim.com, SecTalks.org | Senior Lecturer UNSW.edu | Security Researcher | Bug Hunter 🛡️
Sydney, Australia
Actions
Dr Pedram Hayati is the founder and CEO of SecDim, where he makes secure coding second nature for developers. A researcher across both offensive and application security, Pedram has:
* Published 25 + zero-day advisories since 2005
* Reported thousands of vulnerabilities to Fortune 500 companies
* Led the global penetration-testing unit at the world’s second-largest defence contractor
Holding a PhD in Information Security & Machine Learning, Pedram lectures postgraduate cyber-security at the University of New South Wales – Australian Defence Force Academy (UNSW ADFA) and founded SecTalks.org, a multinational non-profit security community with more than 25 000 members. His research regularly features on the global stage at Black Hat, DEF CON, Hack In The Box, FIRSTCon, NDC, and OWASP AppSec.
Area of Expertise
Topics
Fix the Flag Wargame (Day 1)
AppSec Village is proud to present our DEF CON Contest in partnership with SecDim.
Unlike a typical CTF, this contest rewards the players who fix security vulnerabilities. You can choose between secure coding challenges or competing against other players in an Attack & Defence AppSec challenge, where you need to secure your app before being given a chance to hack other players’ apps😈.
You can also develop your own AppSec challenge by following challenge contribution guidelines. All approved challenges will be hosted during AppSec Village @ DEF CON.
There are two categories of winners:
- The player with the highest total points by the end of the event (August 11 at 12:00 PM PDT)
- The best-contributed challenge submission
The Award Ceremony will be held at 12:45 PM PDT in the AppSec Village on August 11.
Fix The Flag - Battle Challenge
Find a security vulnerability in an app and get a score when you effectively fix it. The winner of the competition is the first person who fixes the vulnerability.
Jailbreaking and Protecting LLM Apps: A Public Wargame Experiment
This presentation captures findings from a public AI security challenge designed to evaluate the resilience of Large Language Models (LLMs) against prompt injection attacks. The experiment involved an Attack & Defence wargame where participants were tasked with securing their LLMs, specifically preventing secret phrase disclosure. They were given access to the source code of the app that interfaced with OpenAI API. Simultaneously, participants were to attack other LLMs in an attempt to exfiltrate the secret phrase. A notable aspect of this experiment was the real-time evolution of defensive strategies and offensive tactics by participants. The results indicated that all LLMs were exploited at least once, thus highlighting the complexity behind LLM security and lack of in-depth understanding of prompt injection. This underscores how there is no silver bullet for securing against prompt injection and that it remains as an open problem.
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top