Session
Leveraging Generative AI for Cybersecurity. Attack, Defence and Ethics.
As cyber threats become more sophisticated, it is crucial to leverage advanced technologies to defend against them. One technology that has gained significant attention recently is generative AI, such as OpenAI's GPT-based models. In this session, we will explore how these models can be used both to carry out and to defend against cyber-attacks.
We will discuss how generative AI can be leveraged to assist with targeted phishing attacks. Using a GPT-based AI, an attacker can quickly create convincing phishing emails that are more likely to succeed. Further, a GPT-based AI can guide the attacker in building the infrastructure needed to host the whole attack chain. We will demonstrate this potential and discuss the implications of these attacks for cybersecurity.
Next, we will explore how generative AI can assist in defending against cyber threats. This technology can detect and advise on threats in real time, allowing organizations to quickly identify and neutralize attacks and limit damage. We will provide examples of how GPT-based AI is used for defence, such as Microsoft Security Copilot, and demonstrate this technology for organizations looking to improve their cybersecurity posture.
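As a rough illustration of the kind of defensive workflow discussed above, the sketch below asks a GPT-based model to triage a suspicious email. It is only an assumption of how such a check might look, not material from the session: it uses the OpenAI Python client, and the model name, prompt wording, and sample email are all hypothetical choices.

    # Minimal sketch: asking a GPT-based model to triage a suspicious email.
    # Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set.
    # Model name, prompt, and sample email are illustrative, not session material.
    from openai import OpenAI

    client = OpenAI()

    suspicious_email = """Subject: Urgent: verify your payroll details
    Your account will be suspended unless you confirm your credentials here:
    http://example.com/login
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security analyst. Classify the email as 'phishing' "
                    "or 'benign' and list the indicators behind your decision."
                ),
            },
            {"role": "user", "content": suspicious_email},
        ],
    )

    # Print the model's classification and its reasoning for the analyst to review.
    print(response.choices[0].message.content)

In practice a tool like this would sit alongside, not replace, existing email-security controls, with an analyst reviewing the model's advice before acting on it.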
Lastly, we will discuss the ethical implications of using generative AI for cybersecurity. As with any emerging technology, there are potential risks and drawbacks to consider, including issues related to privacy, bias, and accountability. We will explore these issues and discuss how organizations can balance the benefits of this technology with the potential risks.
Michelle Sandford - Developer Engagement Lead @Microsoft
Microsoftie | TEDx Speaker | Dev Community Advocate | MCT | New Breakpoint Host | She Codes Mentor | International Conference Speaker | User Group Host | Hackathon Judge | Open Sourcerer
Perth, Australia