Speaker

Lianne Potter

Award-Winning Digital and Cyber Anthropologist, Cybersecurity Operations and Technology Leader, Podcast Host @ Compromising Positions

Leeds, United Kingdom

When you follow the cables, behind every piece of tech is a person, both consumer and creator, and we should never lose sight of this.

Lianne is an award-winning digital anthropologist and technologist specialising in software engineering, cybersecurity and AI. Her talks are designed not only to challenge audiences, but to change them.

She provides strategic consultancy to help organisations build and transform security operations into resilient, forward-thinking teams. As former Head of SecOps for Europe’s largest greenfield technology transformation, she established a cutting-edge security function that set new industry benchmarks through innovation and collaboration.

An international keynote speaker, Lianne bridges cybersecurity, technology, and anthropology, drawing on her experience as both a security-focused software developer and practising anthropologist. Currently pursuing an MSc in AI and Data Science, she focuses her research on AI safety, alignment, and the societal implications of emerging technologies.

Recognised with accolades including Security Specialist of the Year and Cybersecurity Personality of the Year, she also champions diversity and inclusion in tech through community initiatives, publications, and her award-winning podcast Compromising Positions. Each week, she interviews experts from anthropology, psychology, and behavioural science to explore how human culture shapes cybersecurity. The show has become a platform for challenging assumptions, amplifying diverse voices, and reframing security as a deeply human issue rather than a purely technical one.

Publications include:

* The Times
* Raconteur
* Computing.com
* The Yorkshire Post
* Security Magazine
* IT Pro
* EmTech Anthropology Careers at the Frontier (Book)

Recent awards and honours include:

* Podcast Newcomer - 2024 European Cybersecurity Blogger Awards
* Cybersecurity Personality of the Year 2023 - The Real Cyber Awards
* Security Woman of the Year - Computing Security Excellence Awards 2023 (Highly Commended)
* 40 Under 40 in Cybersecurity - Cybersecurity Magazine
* Security Leader of The Year 2021 - Women in Tech Excellence
* Woman of the Year 2021 - Women in Tech Excellence
* Security Specialist of the Year 2021 - Computing.com

Area of Expertise

  • Business & Management
  • Humanities & Social Sciences
  • Information & Communications Technology

Topics

  • cybersecurity
  • cybersecurity awareness
  • cybersecurity compliance
  • behavioural science
  • DevSecOps
  • Security
  • cyber security
  • Cloud Security
  • Information Security
  • Application Security
  • IT Security
  • Security & Compliance
  • Cloud App Security
  • anthropology
  • Artificial Intelligence
  • Machine Learning & AI
  • Philosophy
  • Organizational Philosophy
  • Leadership
  • culture
  • Culture & Collaboration
  • threat modeling
  • Threat Intelligence
  • Social Engineering and Phishing
  • ai
  • Ethics in AI
  • Ethics in Tech
  • Ethics in Software
  • Tech Ethics
  • Data Science Ethics
  • AI Safety
  • People & Culture

Compromising Positions: how understanding human behaviours can build a great security culture

Insecure behaviours in our organisations continue to proliferate, despite the processes, policies and punishments we have in place.

This has resulted in cybersecurity breaches becoming an increasingly regular feature in the mainstream media.

Despite this, the advice given by cybersecurity teams hasn't varied much since the inception of the practice 20-30 years ago. We haven't adapted to a remote-first workforce, or to a digital-native generation that demonstrably engages in riskier behaviours online.

As a community, we need to start asking ourselves some difficult questions, such as:
* If the messaging on how to keep safe has been consistent, why is it not working?
* Are people engaged with our communications?
* How do they perceive the security team?
* Do they have any kind of understanding of the risks and impacts of breaches?

But perhaps the real question here is: who is really in the compromising position? Those on the periphery of security who are not heeding our advice? Or the security professionals who refuse to compromise, leading to workarounds and other dangerous behaviours? That turns the narrative on its head, and through a series of 30+ interviews with experts outside of cybersecurity, we discussed:

* How cybersecurity teams could benefit from having behavioural scientists, agile practitioners and marketing experts in their midst (or at least adopting their practices)
* How processes and policies matter much less than how people are brought on the journey
* Why humans shouldn't be treated as the weakest link
* Why we shouldn't be the gatekeepers or the police, but rather the enabling force in a business, and how we can change our image to suit that

Wearable, Shareable... Unbearable? The IoT and AI Tech Nobody Asked For but Cybercriminals Love!

In a world where convenience reigns and privacy erodes, we’re trading our social contracts for a digital surveillance state—and big tech is the only winner.

In an era where 5G, smartphones, smart TVs, and AI-powered homes are no longer futuristic luxuries but everyday essentials, we're ushering in a tech-driven world that raises some uncomfortable questions. As we voluntarily invite more digital surveillance into our lives, are we sleepwalking toward a "digital panopticon," where privacy is a relic of the past? In this rapidly evolving landscape, the stakes are high—and maybe we're the last to realize it. As an anthro-technologist, I'll explore why the so-called "smart revolution" may not be so brilliant after all, and how we've unwittingly traded our social contracts for the convenience of big tech. Should we be preparing for a digital apocalypse—or is it already here?

NOTE: This talk title may have been delivered before, but the talk is NEVER the same. There are far too many mad and outlandish AI/IoT and cybersecurity stories, which means this talk has new content every time it is delivered!

The Dismantling of Perception: Cybersecurity AI Threats and Countermeasures

It has been quite the year: OpenAI has democratised AI for many, but is this a Pandora's box moment? Jeff and Lianne, hosts of the Compromising Positions podcast, will take you through some of the advancements in cybercrime, blending technology and anthropology. They will discuss how the enemy is using AI against us. These attacks are more believable than ever; will we have to rethink how we treat what we read, see and hear? But the threat can also be the saviour, as we can leverage AI technology in the fight against cybercrime. Join us to find out more.

This company sucks: why your next cyber risk might be easier to spot than you think!

With the cost of living increasing, people navigating a post-COVID world, and other uncertainties in business, there is a potential that we, the security function, could see a surge in risky behaviours that would be detrimental to the security of the organisations we serve. When people are under stress, mistakes happen and shortcuts get taken, and that makes them one of the hardest adversaries to build resilience against: insider threats. In this session I will discuss how exciting research using Glassdoor for OSINT purposes can help you predict whether your organisation is likely to engage in risky cyber activities, how to embrace grey-area thinking to illuminate your blindspots, and how the tools and methodologies of anthropology can give us a strong foundation for building anthro-centric security cultures that enable you to be proactive, not reactive, to insider threats.

(Ab)user Experience: The dark side of Product and Security

Security can often feel like an unapproachable and mysterious part of an organisation – the department of work prevention, the department of “nope.” But it doesn’t have to be that way.

In this talk we will look at the unintended users of a product, the “threat agents”.

By engaging the security team in the product process, we can model the dark side of use cases and user stories through threat modelling techniques. This can help demystify impenetrable security NFRs (non-functional requirements) through concrete examples of how these threat agents may try to misuse your shiny new digital product.
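
To give a concrete feel for what "modelling the dark side of a user story" can look like, here is a minimal, hypothetical sketch in Python. The structure, field names and example threat agent are illustrative assumptions, not a prescribed framework from the talk:

```python
from dataclasses import dataclass, field


@dataclass
class ThreatAgent:
    """An unintended 'user' of the product: who they are and what they want."""
    name: str
    motivation: str
    capabilities: list[str]


@dataclass
class AbuserStory:
    """The mirror image of a user story, told from the threat agent's point of view."""
    agent: ThreatAgent
    story: str                 # "As <agent>, I want <abuse>, so that <gain>"
    targeted_feature: str
    mitigations: list[str] = field(default_factory=list)

    def as_requirement(self) -> str:
        """Turn the abuser story into a concrete, testable security requirement."""
        controls = ", ".join(self.mitigations) or "TBD"
        return f"{self.targeted_feature}: must resist '{self.story}' via {controls}"


# Hypothetical example for a web checkout feature
card_tester = ThreatAgent(
    name="Card tester",
    motivation="validate stolen card numbers at scale",
    capabilities=["scripted requests", "rotating proxies"],
)
abuse = AbuserStory(
    agent=card_tester,
    story="As a card tester, I want to submit thousands of small payments, "
          "so that I can find which stolen cards still work",
    targeted_feature="Checkout API",
    mitigations=["per-card/IP rate limiting", "velocity checks", "step-up challenge on anomalies"],
)
print(abuse.as_requirement())
```

Writing abuser stories alongside user stories in this way gives product and security teams a shared artefact: the same backlog item describes both the intended use and the anticipated misuse.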

Who this event will benefit:
* Those building products/apps exposed to the web
* People who want to build awareness of the possible attack-vector use cases (i.e. how you might be attacked)
* People who need to write those down as a set of requirements to help build a DevSecOps approach in projects

Are ‘Friends’ Electric?: What It Means to Be Human Now and Tomorrow in the Age of AI

In a world where loneliness is an epidemic and human connection feels increasingly elusive, could artificial intelligence be the answer? Are ‘friends’ truly electric?

In 1979, synth-pop legend Gary Numan posed the question in his number one hit song ‘Are ‘Friends’ Electric?’, inspired by Philip K. Dick’s sci-fi novel “Do Androids Dream of Electric Sheep?” (1968), later adapted into the iconic sci-fi film “Blade Runner” (1982). In that dystopian future, replicants—synthetic humans who laboured as slaves—evolved, rebelled, and were hunted down by humans to restore order.

For centuries, we have been captivated by the potential of creating and interacting with synthetic ‘friends’—from the chess-playing Mechanical Turk of 1770 to today’s AI-driven companions. Once a staple of science fiction, the idea of electric companions is now a tangible reality. But what does this mean for our understanding of friendship, love, and humanity itself?

45 years after Numan asked us to imagine a world in which our needs, wants and desires would be catered for by mechanised ‘friends’, have we moved from mere science-fiction fantasy to a new dawn of human/synth relationships, driven by the growth and development of robotics and AI?

Under the guidance of a digital anthropologist specialising in cybersecurity and tackling the digital divide, this talk explores our cultural fascination with replicating not only the human form and character traits but also the human condition, and asks how AI entities and robotics have transformed, and will continue to transform, our interactions with machines and ourselves.

This talk explores the cultural, emotional, and societal impact of AI ‘friends.’ Will they enhance connection—or rewrite humanity itself?

In this talk, you will be challenged to consider:

Culture: What is culture, and can electric ‘friends’ truly grasp the richness of human culture, or will they merely mimic it? Will electric ‘friends’ create their own culture, or are they only capable of a facsimile of culture based on billions of human data points?

Emotions: Are love, creativity, and heartache exclusive to humans, or can electric companions experience these emotions?

Companionship: Could these electric friends be better for us than human friends? Will they increase or decrease social isolation? Will we become dependent on electric friends?

Dust off Donna Haraway’s ‘A Cyborg Manifesto’, turn up the stereo and turn on your friend as we ask: ARE ‘friends’ truly going to be electric?

AI’s Hidden History: Unpacking Bias, Power, and the Builders Who Shaped It

Ever wondered how the early builders of AI shaped the tech we use today? This talk dives into the pioneering work of anthropologist Diana Forsythe, who revealed how AI development in the '80s and '90s was deeply influenced by the biases, power structures, and cultural values of its creators. Through Forsythe's lens, we’ll explore the myth of "neutral" technology, how power dynamics in the lab shaped design, and the underrepresentation of women in computing—issues that still resonate today. By revisiting Forsythe's groundbreaking research, we’ll uncover key lessons for building more inclusive, responsible AI. Whether you're a developer or researcher, this talk offers valuable insights into how our past influences the future of AI—and how we can learn from it to avoid the same mistakes.

Jurassic Park: Send in the Consultants!

Welcome... to Jurassic Park—where cutting-edge technology, corporate ambition, and one very disgruntled sysadmin collided to create the biggest cybersecurity disaster of the Cretaceous period. But what if InGen had brought in the consultants before things went prehistoric?

Join three seasoned cybersecurity experts as they step into the khaki-clad shoes of SPOOF (Security Practices Of Obvious Foolishness), a big consultancy firm tasked with auditing Jurassic Park’s IT failures. With hindsight, scepticism, and a touch of disaster recovery, we’ll analyse single points of failure (one guy controlled everything? Seriously?), non-existent incident response (was anyone monitoring those fences?), and other prehistoric blunders.

In this unofficial sequel, we present something quite different! Part talk, part parody, this session takes a lighthearted yet insightful approach to the lessons we can learn from dodgy firewalls, rogue programmers, and forgetting to factor in the raptor risk. Combining humour, expertise, and chaos theory (Life finds a way), this talk will leave you roaring with laughter—and rethinking your own systems.

Join us for a deep dive into one of the most infamous tech meltdowns in cinematic history. No expense has been spared—except on cybersecurity!

A light-hearted talk featuring the crew of the Tech Film Noir podcast.

From Patient Zero to Zero-Days: Containing Cybersecurity Incidents like an Epidemiologist

What if we treated cybersecurity incidents like epidemiologists handle pandemics?

We reimagine cybersecurity through the lens of epidemiology, exploring how digital outbreaks mimic viral contagions. Using the investigative framework “What do we think, what do we know, what can we prove?”, you’ll learn to cut through the chaos of an incident and communicate effectively under pressure.

From patient zero to zero-days, we’ll uncover strategies for early detection, containment, and mitigation, equipping you with a fresh mindset for tackling cybersecurity crises. Whether you’re dealing with public health or public WiFi, this talk will help you stay cool, focused, and communicative when the digital flu hits your network.

No hazmat suit required (but encouraged).

Key Takeaways:

* Apply an epidemiologist's mindset to cybersecurity incidents—using lessons from disease outbreaks to improve digital threat response.

* Frame threat intelligence and incident response through “digital epidemiology”, leveraging early detection, layered containment, and long-term resilience strategies.

* Use the investigative approach “What do we think, know, and prove?” to enhance attribution, forensics, and decision-making under pressure.

* Communicate threat evolution and response strategies more effectively, using a shareable metaphor that bridges technical and non-technical teams.

Two speakers

AI’n’t Very Clear: Bad Language, Worse Governance (A Lesson from Cybersecurity)

Language in tech isn’t just clumsy - it’s consequential.

The words we use don’t just describe technologies—they frame them. In AI, we talk about "hallucinations" instead of errors, as if the model is a quirky creative writing student. We talk about "alignment" as if we’re tuning a misbehaving pet robot, not reckoning with the vast complexity of embedding values into sociotechnical systems. And we call it "artificial intelligence" as if we’re dealing with something godlike and autonomous, rather than a series of design decisions made by very real humans with very real biases.

Sound familiar?

Cybersecurity knows this problem intimately. From the militarised metaphors of "threat actors" and "defence in depth" to the technical gatekeeping of terms like "zero-day" and "kill chain," the language of cyber has often alienated the very people it's meant to protect—and obscured the systems of power that shape how risk is distributed. Governance conversations became technical monologues. Strategy became jargon. Responsibility became everyone’s and no one’s.

In this talk, I’ll argue that bad naming is not just a quirk of our industry—it’s a structural problem. A legacy feature. A long-standing, poorly version-controlled tradition of framing technologies in ways that obscure agency, distort accountability, and shape what gets built (and who gets blamed when it fails).

And if we don’t learn from the lessons cybersecurity has already taught us (the hard way) we’ll make the same mistakes all over again. Only this time, with global systems and lives at stake.

Join a digital anthropologist specialising in cybersecurity and AI as they explore:

🔹 Why naming isn’t neutral
Words like “AI” and “cyber” carry metaphors, histories, and ideologies. I’ll show how certain terms constrain how we think, regulate, and design systems—and how others make humans disappear from the picture entirely.

🔹 What cybersecurity can teach us
From the overuse of “best practices” to the illusion of silver-bullet solutions, cybersecurity’s struggle with language has real governance consequences. I’ll walk through case studies of how naming affected incident response, regulation, and even funding priorities—and what AI governance folks can learn before it’s too late.

🔹 How framing shapes power
Whether we call it a “data leak” or a “breach,” a “user error” or a “design flaw,” language decides where blame lands. In AI, this becomes existential: who gets to define harm, fairness, or risk?

🔹 How to name better (or at least, less badly)
No, I’m not proposing we rename everything. But I am arguing for a more intentional, human-centred approach to how we talk about technology—especially as we rush to regulate systems most people barely understand. We need metaphors that illuminate, not obfuscate. Language that invites people in, not pushes them out.

A light-hearted take on an academic paper I wrote: Naming is Framing: How Cybersecurity’s Language Problems are Repeating in AI Governance (https://arxiv.org/pdf/2504.13957)

Your Culture Is Leaking Data: Forecasting Cyber Attacks with Glassdoor Reviews

Cybersecurity teams love to talk about zero-days, nation-state actors, and cutting-edge tooling. Yet the earliest signs of an impending breach rarely appear in threat intel feeds—they show up in employee reviews long before an attacker ever touches the network. This talk introduces a new approach to forecasting cyber incidents by analysing anonymous organisational sentiment at scale.

Using millions of employee reviews, I built a Cyber-Risk Feature (CRF) index that detects patterns linked to governance failures, insider-threat indicators, workplace stress, and cultural dysfunction. When analysed over time, these signals form a striking pattern: they consistently intensify two to three years before major incidents occur.

The research shows that rising mentions of burnout, poor training, politics, blame, outdated systems, and chaotic change management aren’t just HR problems—they are leading indicators of future compromise. In several organisations, these cultural stressors spiked long before any attacker showed up, suggesting that breaches often emerge from environments already strained, distracted, or poorly governed.

This talk introduces a practical, data-driven way to surface cultural cyber risk using publicly available sentiment, without naming or shaming individual companies. Attendees will learn how human factors manifest in language, how to track cultural drift, and why behavioural signals should sit alongside technical telemetry in modern security strategy.
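
As a rough illustration of the kind of analysis described above, a keyword-feature approach over review text might look like the sketch below. The categories, trigger phrases and scoring here are hypothetical stand-ins, not the published CRF methodology:

```python
from collections import Counter

# Hypothetical cyber-risk feature categories and trigger phrases.
# The real CRF index uses a richer, validated feature set; these are illustrative only.
CRF_FEATURES = {
    "burnout":        ["burnout", "overworked", "exhausted", "understaffed"],
    "poor_training":  ["no training", "poor training", "thrown in the deep end"],
    "blame_culture":  ["blame culture", "scapegoat", "finger pointing"],
    "legacy_tech":    ["outdated systems", "legacy systems", "ancient software"],
    "chaotic_change": ["constant restructuring", "endless reorgs", "chaotic change"],
}


def crf_score(reviews: list[str]) -> dict[str, float]:
    """Return the fraction of reviews that mention each risk category."""
    hits = Counter()
    for text in reviews:
        lowered = text.lower()
        for category, phrases in CRF_FEATURES.items():
            if any(phrase in lowered for phrase in phrases):
                hits[category] += 1
    total = max(len(reviews), 1)
    return {category: hits[category] / total for category in CRF_FEATURES}


# Scoring each year's reviews separately would surface the multi-year
# drift that the research links to later incidents.
reviews_2021 = [
    "Great colleagues, but everyone is overworked and exhausted.",
    "Constant restructuring, no training, and a blame culture when things break.",
]
print(crf_score(reviews_2021))
```

Tracked over time and compared against incident dates, per-category scores like these are the sort of behavioural signal the talk argues should sit alongside technical telemetry.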

Attendees will walk away with:

• A new lens for spotting cyber risk hidden in plain sight.
• A practical understanding of how employee sentiment correlates with breach likelihood.
• Early-warning indicators rooted in behaviour, not just logs.
• A case for integrating cultural analytics into security strategy before—not after—compromise.

If we want to prevent tomorrow’s breaches, we need to listen to the people living today’s organisational reality. Culture leaves data trails—and those trails tell the story long before the attackers do.

Commodification of the Dead: How AI Turns the Dead into Products — and Why We Participate

Chen speaks to his mum every Sunday from the oil rig where he works. They talk about the weather, the neighbour’s cat, and whether he’s eating properly. What Chen doesn’t know is that his mum died months ago. Instead of breaking his heart, his family hired a company to “resurrect” her using AI-generated video calls and text messages stitched together from her digital traces. What happens when our loved ones become something you can access… with a monthly plan?

Stories like this are becoming more common as people turn to AI to keep connections alive. This talk asks why. Are these companies ghoulish profiteers, selling grief back to us, or do they genuinely help people through loss? And what does it mean when the dead cannot consent to being resurrected, yet their likeness is repurposed for profit?

Guided by a digital anthropologist, we’ll explore the folklore and early research on “digital afterlives,” from séances and spirit photography to today’s deadbots and voice clones. This is a story about how technology commodifies grief, reshapes our relationships with the dead, and why we keep inviting them back.
