Jeff Watkins

Chief Technology Officer - Writer, Podcaster, Public Speaker

Leeds, United Kingdom

A lifelong technologist who started coding at the age of six, Jeff has been in the industry for over 25 years. He has a love for cybersecurity and AI, especially the human elements of both subjects. Working for a consultancy, he has seen the world shift to being product- and service-oriented, and he bangs the drum for ensuring that everybody is involved in delivering secure and usable offerings. Outside of work, he is co-host of the Compromising Positions podcast, which offers a unique view of cybersecurity from outsiders' perspectives.

Publications include:
* Wired
* Forbes
* Raconteur
* IT Pro
* Business Cloud
* Information Age

Area of Expertise

  • Information & Communications Technology

Topics

  • Cyber Security
  • Cybercrime
  • Cybersecurity Threats and Trends
  • AI and Cybersecurity
  • Artificial Intelligence and Machine Learning for Cybersecurity
  • Artificial Intelligence (AI) and Machine Learning
  • Artificial Intelligence
  • Software Practices
  • Software Design
  • Cyberthreats
  • Software Development
  • Software Architecture

Wearable, Shareable... Unbearable? The IoT and AI Tech Nobody Asked For but Cybercriminals Love!

In a world of 5G (no conspiracy theories, please!), smartphones, smart TVs and smart homes, we're inviting more tech into our lives than ever. We're sleepwalking into a future nobody asked for, but one that many may fear. We'll look at how always-on microphones, cameras and AI are creating a "digital panopticon" that none of us probably want or need (unless you're Amazon). Should we become "digital preppers"? Just how high is the privacy and security risk, and what are the stakes? This is an anthro-technologist's view of how dumb an idea the smart revolution is, and how we've eroded our social contracts in favour of big tech.

Compromising Positions: How Understanding Human Behaviours Can Build a Great Security Culture

Insecure behaviours continue to proliferate in our organisations, despite processes, policies and punishments.

This has resulted in cybersecurity breaches becoming an increasingly regular feature in the mainstream media.

Despite this, the advice given by cybersecurity teams hasn't varied much since the inception of the practice 20-30 years ago. We haven't adapted to a remote-first workforce, or to a digital-native generation that demonstrably engages in riskier behaviours online.

As a community, we need to start asking ourselves some difficult questions, such as:
* If the messaging on how to keep safe has been consistent, why is it not working?
* Are people engaged with our communications?
* How do they perceive the security team?
* Do they have any kind of understanding of the risks and impacts of breaches?

But perhaps the real question here is: who is really in the compromising position? Those on the periphery of security who are not heeding our advice? Or security professionals who refuse to compromise, leading to workarounds and other dangerous behaviours? That turns the narrative on its head, and through a series of 30+ interviews with experts outside of cybersecurity, we discussed:

* How cybersecurity teams could benefit from having behavioural scientists, agile practitioners and marketing experts in their midst (or at least adopting their practices)
* How processes and policies matter much less than how people are brought on the journey
* Why humans shouldn't be treated as the weakest link
* Why we shouldn't be the gatekeepers or the police, but rather the enabling force in a business, and how we can change our image to suit that

(Ab)user Experience: The dark side of Product and Security

Security can often feel like an unapproachable and mysterious part of an organisation – the department of work prevention, the department of “nope.” But it doesn’t have to be that way.

In this talk we will look at the unintended users of a product, the “threat agents”.

By engaging the security team in the product process, we can model the dark side of use cases and user stories through threat-modelling techniques. This can help demystify impenetrable security non-functional requirements (NFRs) with concrete examples of how these threat agents may try to misuse your shiny new digital product.

Who this event will benefit:
* Those building products/apps exposed to the web
* People wanting to build an awareness of possible attack vector use cases (i.e. how you might be attacked)
* People who need to write that down as a set of requirements to help build a DevSecOps approach in projects

The Four Horsemen of the Information Apocalypse: Taming the Wild West of Information Overload

Prepare yourself for a whirlwind tour through the chaotic landscape of today's information overload! In this talk we'll step into the realm of the Four Horsemen wreaking havoc on our digital lives: Misinformation, Disinformation, Malinformation, and Noninformation.

We'll start by uncovering how the propaganda and rumour mills of yesteryear have evolved into today's sophisticated, AI-powered info-wars. From the rise of the internet and social media to the shocking realities of deepfakes and bot armies, we'll explore how these modern plagues are accelerating faster than ever.

But it's not all doom and gloom, as we'll also look at the heroes fighting back: fact-checkers, innovative tech solutions, and critical thinkers (and how we can each become one). Join us for an eye-opening journey through the highs and lows of our information age, and discover how we can all play a part in taming these four threats to reality.

Symbiotic Futures: The Human-Machine Love Affair and the Evolution of Experience

We're in a world where the lines are blurred between human and machine intelligence, one where we need to build a harmonious relationship through experience.

In this talk we'll explore how humans and AI can work together as partners and learn to love one another. We'll discover how human-centred design principles can be used to craft AI systems that anticipate our context, needs and wants, and that enhance our capabilities while still respecting our privacy and autonomy. We'll also look at the invisible conversations between AI and digital systems, and how we can make this a more seamless journey.

You'll learn about real-world use cases and frameworks that can help you craft both your human-to-AI and AI-to-machine interactions to provide a loveable user experience for all parties.

Symbiotic Futures is an exploration of trust, shared experience and even love, in the next era of digital interactions.

STOIC Security: Shielding Your Generative AI App from the Five Deadly Risks

Generative AI offers incredible opportunities but comes with significant cybersecurity challenges. As adoption accelerates, so do the risks: data theft, model manipulation, poisoned training data, operational disruptions, and supply chain vulnerabilities. This talk introduces the "STOIC" framework (Stolen, Tricked, Obstructed, Infected, Compromised) to help you identify and mitigate these threats.

Key takeaways include:
* Understanding your risks
* Hardening your systems
* Securing the model pipeline
* Governing with clarity
* Staying agile

Generative AI is transformative but requires proactive, layered defences to avoid becoming a liability. With the right strategy, it can be a safe and game-changing tool for your organisation.

This session assumes a basic knowledge of generative AI solutions such as those from OpenAI.

The Farce Awakens: Becoming a Software Jedi in an Age of Vibe-Coders

Forget the fine discipline of the Jedi coder (clean architecture, rigorous XP practices): welcome to the age of the vibe-coder, where AI writes half the code and nobody reads the other half, PR comments consist of "LGTM", and everybody's busy on TikTok. In this irreverent look at the future of software engineering, we explore the upcoming existential crisis that software development faces thanks to agentic AI, code generation and "the vibe".

As AI takes on more of our work, is the end of the software jedi in sight? Where will the padawans come from if all our juniors are just GitHub Copilot? Or will our powers be reduced until we're just waving our hands around and hoping the linter saves us?

We will cover:
* Just what is a software jedi anyway?
* The rise of vibe-coding: when AI writes code based on vibes, not requirements
* The illusion of mastery: why feeling productive ≠ shipping good software
* Training young padawans: how to mentor junior devs in a world where Stack Overflow is obsolete
* Force ghosts of the codebase: maintaining software written by agents that no one understands
* Building with intention: reclaiming clarity, craftsmanship, and code as a shared language

Whether you’re a battle-worn engineering lead or a wide-eyed dev navigating a galaxy of GitHub Copilots, this talk brings humour, caution, and a bit of wisdom to the galaxy far far away (well, right here) of AI-powered software development.
