Jeff Watkins

Chief Technology Officer - Writer, Podcaster, Public Speaker

Leeds, United Kingdom

Jeff Watkins is a seasoned CTO and CPTO who has spent his career leading technology, product, and AI organisations through rapid growth and reinvention. He’s steered engineering, product, cloud, cybersecurity, and AI strategy teams through major transformations, turning ambitious ideas into secure, human-centred digital experiences for some of the world’s best-known brands.

A lifelong technologist, Jeff brings over 25 years of hands-on experience across financial services, healthcare, the public sector and retail. He’s founded and led multiple cybersecurity teams and is a vocal champion of secure-by-design practices for generative AI. With an MSc in Cybersecurity and a second Master’s in Artificial Intelligence underway, he’s currently researching anthropomorphised LLM agents and deception detection.

On the international circuit, Jeff is a sought-after keynote speaker, headlining events such as Webinale (Berlin), AppDevCon (Amsterdam), the International JavaScript Conference (London), and Edinburgh Napier’s PlusEquals5 summit. His candid, story-driven take on the collision of AI, cybersecurity, culture and human behaviour resonates with audiences from engineers to executives.
He also co-hosts the multi-award-winning Compromising Positions podcast, where he interviews psychologists, anthropologists, UX researchers and security leaders to bring fresh, outsider insight to cyber. His thought leadership has been featured in Wired, Forbes, Raconteur, IT Pro, Business Cloud, and Information Age, and he’s a regular commentator for the wider technology and security press.

Whether mentoring emerging leaders, building prototypes, or demystifying AI on stage, Jeff’s mission is simple: build technology that elevates people, never the other way around.

Area of Expertise

  • Information & Communications Technology
  • Media & Information

Topics

  • Cybersecurity
  • Cybercrime
  • Cybersecurity Threats and Trends
  • AI and Cybersecurity
  • Artificial Intelligence and Machine Learning for Cybersecurity
  • Artificial Intelligence (AI) and Machine Learning
  • Artificial Intelligence
  • Software Practices
  • Software Design
  • Cyberthreats
  • Software Development
  • Software Architecture
  • Software Engineering
  • Generative AI
  • Vibe Coding
  • Large Language Models (LLMs)
  • Machine Learning
  • User Experience
  • UX / UI
  • User Experience Design

From Patient Zero to Zero-Days: Containing Cybersecurity Incidents like an Epidemiologist

What if we treated cybersecurity incidents like epidemiologists handle pandemics?

We reimagine cybersecurity through the lens of epidemiology, exploring how digital outbreaks mimic viral contagions. Using the investigative framework “What do we think, what do we know, what can we prove?”, you’ll learn to cut through the chaos of an incident and communicate effectively under pressure.

From patient zero to zero-days, we’ll uncover strategies for early detection, containment, and mitigation, equipping you with a fresh mindset for tackling cybersecurity crises. Whether you’re dealing with public health or public WiFi, this talk will help you stay cool, focused, and communicative when the digital flu hits your network.

No hazmat suit required (but encouraged).

Key Takeaways:

* Apply an epidemiologist's mindset to cybersecurity incidents—using lessons from disease outbreaks to improve digital threat response.

* Frame threat intelligence and incident response through “digital epidemiology”, leveraging early detection, layered containment, and long-term resilience strategies.

* Use the investigative approach “What do we think, know, and prove?” to enhance attribution, forensics, and decision-making under pressure.

* Communicate threat evolution and response strategies more effectively, using a shareable metaphor that bridges technical and non-technical teams.

Descartes’ Daughter: How We Taught Machines to Feel (and Why We Believed Them)

What do René Descartes and Rick Deckard have in common? More than you'd imagine.

For centuries, we've dreamed of building minds in our own image, from Descartes' mechanical automata to the replicants of Blade Runner. Only in the last few years has science fiction begun to seem closer to science fact than ever.

Today’s large language models feel, to many, much closer to that dream of conscious machines, not because they think, but because they so effectively perform the things we associate with thought: emotion, personality, vulnerability, even care.

This talk explores the psychology and design of anthropomorphised AI. It discusses why we instinctively project humanity onto machines, how modern LLMs exploit those cognitive seams, and what that means for trust, safety, and user experience.

Drawing on academic and practical research into human-like prompting, deception detection, and a concrete taxonomy of anthropomorphism, we’ll examine the subtle cues that make an AI “come alive”. We’ll also discuss the ethical edge where connection becomes manipulation, and how to design systems that are engaging without being deceptive.

Part philosophy, part cognitive science, part practical AI design, Descartes’ Daughter traces the line from the first chatbots to today’s emotionally fluent models, asking a simple but unsettling question: when the machine feels real, is that because it progressed, or because we did not?

Wearable, Shareable... Unbearable? The IoT and AI Tech Nobody Asked For but Cybercriminals Love!

In a world of 5G (no conspiracy theories, please!), smartphones, smart TVs and smart homes, we're inviting more tech into our lives than ever, sleepwalking into a future nobody asked for but many may fear. Always-on microphones, cameras and AI are creating a "digital panopticon" that few of us want or need (unless you're Amazon). Should we become "digital preppers"? Just how high is the privacy and security risk, and what are the stakes? This is an anthro-technologist's view of how dumb an idea the smart revolution is, and of how we've eroded our social contracts in favour of big tech.

Compromising Positions: How Understanding Human Behaviours Can Build a Great Security Culture

Insecure behaviours continue to proliferate in our organisations, despite all our processes, policies and punishments.

This has resulted in cybersecurity breaches becoming an increasingly regular feature in the mainstream media.

Despite this, the advice given by cybersecurity teams hasn't varied much since the inception of the practice 20-30 years ago. We haven't adapted to a remote-first workforce, nor to a digital-native generation that demonstrably engages in riskier behaviours online.

As a community, we need to start asking ourselves some difficult questions, such as:
* If the messaging on how to keep safe has been consistent, why is it not working?
* Are people engaged with our communications?
* How do they perceive the security team?
* Do they have any kind of understanding of the risks and impacts of breaches?

But perhaps the real question is: who is really in the compromising position? Those on the periphery of security who aren't heeding our advice? Or security professionals who refuse to compromise, driving workarounds and other dangerous behaviours? That turns the narrative on its head, and through a series of 30+ interviews with experts outside cybersecurity, we discussed:

* How cybersecurity teams could benefit from having behavioural scientists, agile practitioners and marketing experts in their midst (or at least adopting their practices)
* Why processes and policies matter far less than how people are brought along on the journey
* Why humans shouldn't be treated as the weakest link
* Why we should be not gatekeepers or police but the enabling force in a business, and how we can change our image to suit

(Ab)user Experience: The dark side of Product and Security

Security can often feel like an unapproachable and mysterious part of an organisation – the department of work prevention, the department of “nope.” But it doesn’t have to be that way.

In this talk we will look at the unintended users of a product, the “threat agents”.

By engaging the security team in the product process, we can model the dark side of use cases and user stories through threat modelling techniques. This helps demystify impenetrable security NFRs (non-functional requirements) through concrete examples of how these threat agents may try to misuse your shiny new digital product.

Who this event will benefit:
* Those building products/apps exposed to the web
* Those wanting to build an awareness of possible attack-vector use cases (i.e. how you might be attacked)
* Those who need to write these down as a set of requirements to help build a DevSecOps approach into projects

The Four Horsemen of the Information Apocalypse: Taming the Wild West of Information Overload

Prepare yourself for a whirlwind tour through the chaotic landscape of today's information overload! In this talk we'll step into the realm of the Four Horsemen wreaking havoc on our digital lives: Misinformation, Disinformation, Malinformation, and Noninformation.

We'll start by uncovering how the propaganda and rumour mills of yesteryear evolved into today's sophisticated, AI-powered info-wars. From the rise of the internet and social media to the shocking realities of deepfakes and bot armies, we'll explore how these modern plagues are accelerating faster than ever.

But it's not all doom and gloom: we'll also look at the heroes fighting back, from fact-checkers to innovative tech solutions to critical thinkers (and how we can become one ourselves). Join us for an eye-opening journey through the highs and lows of our information age, and discover how we can all play a part in taming these four threats to reality.

Symbiotic Futures: The Human-Machine Love Affair and the Evolution of Experience

We're in a world where the lines between human and machine intelligence are blurring, one where we need to build a harmonious relationship through experience.

In this talk we'll explore how humans and AI can work together as partners and learn to love one another. We'll discover how human-centred design principles can be used to craft AI systems that anticipate our context, needs and wants, and enhance our capabilities while still respecting our privacy and autonomy. We'll also look at the invisible conversations between AI and digital systems, and how we can make this a more seamless journey.

You'll learn about real-world use cases and frameworks that can help you craft both human-to-AI and AI-to-machine interactions, providing a loveable user experience for all parties.

Symbiotic Futures is an exploration of trust, shared experience and even love, in the next era of digital interactions.

STOIC Security: Shielding Your Generative AI App from the Five Deadly Risks

Generative AI offers incredible opportunities but comes with significant cybersecurity challenges. As adoption accelerates, so do the risks—data theft, model manipulation, poisoned training data, operational disruptions, and supply chain vulnerabilities. This talk introduces the "STOIC" framework—Stolen, Tricked, Obstructed, Infected, Compromised—to help you identify and mitigate these threats.

Key takeaways:
* Understanding your Gen AI risks and how they link to the OWASP LLM Top 10 and MITRE ATLAS
* Hardening your systems and securing the supply chain
* Governing with clarity while staying agile

Generative AI is transformative but requires proactive, layered defences to avoid becoming a liability. With the right strategy, it can be a safe and game-changing tool for your organisation.

This session assumes a basic knowledge of generative AI tools such as ChatGPT and Claude.

The Farce Awakens: Becoming a Software Jedi in an Age of Vibe-Coders

Forget the fine discipline of the Jedi coder, clean architecture, rigorous XP practices - welcome to the age of the vibe-coder, where AI writes half the code and nobody reads the other half, PR comments consist of "LGTM", and everybody's busy on TikTok. In this irreverent take on the future of software engineering, we explore the existential crisis facing software development thanks to agentic AI, code generation and "the vibe".

As AI takes on more of our work, is the end of the software jedi in sight? Where will the padawans come from if all our juniors are just GitHub Copilot? Or will our powers be reduced until we're just waving our hands around and hoping the linter saves us?

We will cover:
* Just what is a software jedi anyway?
* The rise of vibe-coding: when AI writes code based on vibes, not requirements
* The illusion of mastery: why feeling productive ≠ shipping good software
* Training young padawans: how to mentor junior devs in a world where Stack Overflow is obsolete
* Force ghosts of the codebase: maintaining software written by agents that no one understands
* Building with intention: reclaiming clarity, craftsmanship, and code as a shared language

Whether you're a battle-worn engineering lead or a wide-eyed dev navigating a galaxy of GitHub Copilots, this talk brings humour, caution, and a bit of wisdom to the galaxy far, far away (well, right here) of AI-powered software development.

The Great Brain Robbery: Navigating the Dark Future of Online Manipulation

The financial losses stemming from cybercrime represent one of the most significant transfers of wealth in human history, already exceeding the GDP of many countries.

In this session, Jeff Watkins examines how AI is transforming the landscape of digital manipulation and what this means for individual and organisational security.

You'll learn about the psyops techniques being used against you and how AI is turning our online world into a weapon of psychological mass destruction.

You'll also learn how you can respond to this growing threat with greater foresight and responsibility.

Model Citizen: How to Secure Your SDLC in the Age of AI

Generative AI is rapidly becoming embedded in software delivery pipelines through code copilots, third-party models, and autonomous agents that shape products in real time. For technology leaders, this introduces a new class of risks that traditional secure SDLC practices don't fully address: poisoned dependencies, model supply-chain vulnerabilities, opaque agent behaviour, and regulatory scrutiny.

In this session, we’ll examine the implications of both using your own AI models and consuming third-party ones, and what this means for the resilience and reputation of your organisation. Attendees will learn how to evolve their delivery lifecycle to account for AI, where governance must catch up, and how AI itself can play a role in defending the enterprise.

Key points covered:

* Strategic Impact: AI changes the threat model of your delivery organisation; leaders must reassess governance and risk appetite.
* Supply Chain Reality: third-party AI models and agents become critical dependencies that require the same scrutiny as open-source packages.
* Secure Evolution: the secure SDLC must evolve with AI; treat AI outputs as untrusted, enforce model provenance, and explore AI-enabled defences.

Events

NDC Manchester 2025 - AI & Security
December 2025, Manchester, United Kingdom

WeAreDevelopers World Congress 2025
July 2025, Berlin, Germany

Appdevcon 2025
March 2025, Amsterdam, The Netherlands

NDC Security 2025
January 2025, Oslo, Norway

Webdevcon 2024
March 2024, Amsterdam, The Netherlands

DevSecCon24 2023
June 2023
