Words as weapons: The dark arts of Prompt Engineering
With a "Co-pilot for everything," the way we interact with technology and handle daily tasks is evolving. Organizations are increasingly integrating LLMs into their applications, powering features like search tools, chatbots, and internal information systems.
Chances are you're already working with Generative AI, or soon will be. But are you aware of the risks before diving into use cases or development? Curiosity drives us to explore limits, which can lead to finding ways around "guard rails." These guard rails prevent misuse, such as asking ChatGPT how to build a bomb. But what if clever wordplay can confuse an LLM into bypassing these safeguards?
In "The dark arts of Prompt Engineering," we'll explore prompt injections: what guard rails are, how they function, and how they can be circumvented, purely for educational purposes. Understanding these risks is crucial for knowing where to add safeguards or when to reconsider a use case. This session goes beyond technical details; it's about real-life impact. You'll see examples that will make you rethink AI's role in our lives, and we'll discuss the ethical side: how to use AI responsibly and safely. Plus, learn a fun trick: hiding some "words as weapons" in your CV to trick recruiters!
AI is here to stay, so let's make it secure. Join this session and learn from the mistakes other companies have already made. Don't miss this important and eye-opening talk!
Prompt Engineering, ChatGPT, Bard, Copilot, Claude, Midjourney, AI, LLMs.
Jeroen Egelmeers
Prompt Engineering Advocate / GenAI Whisperer & Public Speaker - Sogeti Netherlands
Uden, The Netherlands