Speaker

Lianne Potter

Award-Winning Digital and Cyber Anthropologist, Cybersecurity Operations and Technology Leader, Podcast Host @ Compromising Positions

Leeds, United Kingdom

When you follow the cables, behind every piece of tech is a person, consumer and creator, and we should never lose sight of this.

Lianne is an award-winning cyber anthropologist and security transformation leader with experience in the retail, healthcare, finance, private and non-profit sectors.

Driving Security Operations Maturity and Resilient Security Functions

I specialise in providing strategic and technical consultancy to help organisations build and evolve their security operations functions into mature, resilient, and forward-thinking teams. From guiding the establishment of new security functions to elevating existing ones, I partner with organisations to ensure their SecOps department is not just operational but transformational.

Previously, as Head of SecOps for Europe's largest greenfield technology transformation, I built a cutting-edge security team from scratch to safeguard a modern retail organisation. My leadership empowered the team to set new benchmarks in industry best practices through innovation and collaboration.

As an international keynote speaker for both technical and academic audiences, I share insights at the intersection of cybersecurity, technology, and anthropology. Drawing on my practical experience as a security-focused software developer and my continued work as a practising anthropologist, I bring a unique perspective that combines deep technical expertise with a focus on human-centred design and cultural understanding.

Currently, I’m pursuing an MSc in AI and Data Science, deepening my understanding of emerging technologies and their societal implications. My work focuses on AI safety and alignment, striving to ensure AI systems respect human rights, promote equity, and are resilient against adversarial risks.

Committed to fostering diversity and inclusion in tech, I advise community initiatives, author publications, and host award-winning podcasts, and I have earned accolades including Security Specialist of the Year, Security Leader of the Year, and Cybersecurity Personality of the Year.

Beyond cybersecurity, I co-founded Liria Digital Health, a health-tech startup focused on improving outcomes for underrepresented health conditions within marginalised communities.

You can listen to me talk about my human-centric approach every Thursday on my award-winning technology podcast Compromising Positions, in which I interview people from outside cybersecurity, from the worlds of anthropology, psychology, and behavioural science, about cybersecurity culture.

Publications include:

* The Times
* Raconteur
* Computing.com
* The Yorkshire Post
* Security Magazine
* IT Pro
* EmTech Anthropology Careers at the Frontier (Book)

Recent awards and honours include:

* Podcast Newcomer - 2024 European Cybersecurity Blogger Awards
* Cybersecurity Personality of the Year 2023 - The Real Cyber Awards
* Security Woman of the Year - Computing Security Excellence Awards 2023 (Highly Commended)
* 40 Under 40 in Cybersecurity - Cybersecurity Magazine
* Security Leader of the Year 2021 - Women in Tech Excellence
* Woman of the Year 2021 - Women in Tech Excellence
* Security Specialist of the Year 2021 - Computing.com

Area of Expertise

  • Business & Management
  • Humanities & Social Sciences
  • Information & Communications Technology

Topics

  • cybersecurity
  • cybersecurity awareness
  • cybersecurity compliance
  • behavioural science
  • DevSecOps
  • Security
  • cyber security
  • Cloud Security
  • Information Security
  • Application Security
  • IT Security
  • Security & Compliance
  • Cloud App Security
  • anthropology
  • Artificial Intelligence
  • Machine Learning & AI
  • Philosophy
  • Organizational Philosophy
  • Leadership
  • culture
  • Culture & Collaboration
  • threat modeling
  • Threat Intelligence
  • Social Engineering and Phishing
  • ai
  • Ethics in AI
  • Ethics in Tech
  • Ethics in Software
  • Tech Ethics
  • Data Science Ethics
  • AI Safety
  • People & Culture

Compromising Positions: how understanding human behaviours can build a great security culture

Insecure behaviours continue to proliferate in our organisations, despite all our processes, policies, and punishments.

This has resulted in cybersecurity breaches becoming an increasingly regular feature in the mainstream media.

Despite this, the advice given by cybersecurity teams hasn't varied much since the practice's inception 20-30 years ago. We haven't adapted to a remote-first workforce, or to a digital-native generation that demonstrably engages in riskier behaviours online.

As a community, we need to start asking ourselves some difficult questions, such as:
* If the messaging on how to keep safe has been consistent, why is it not working?
* Are people engaged with our communications?
* How do they perceive the security team?
* Do they have any kind of understanding of the risks and impacts of breaches?

But perhaps the real question here is: who is really in the compromising position? Those on the periphery of security who are not heeding our advice? Or the security professionals who refuse to compromise, leading to workarounds and other dangerous behaviours? That turns the narrative on its head, and through a series of 30+ interviews with experts outside cybersecurity, we discussed:

* How cybersecurity teams could benefit from having behavioural scientists, agile practitioners, and marketing experts in their midst (or at least adopting their practices)
* How processes and policies matter much less than how people are brought along on the journey
* Why humans shouldn't be treated as the weakest link
* Why we shouldn't be the gatekeepers or the police, but rather the enabling force in a business, and how we can change our image to suit that

Wearable, Shareable... Unbearable? The IoT and AI Tech Nobody Asked For but Cybercriminals Love!

In a world where convenience reigns and privacy erodes, we’re trading our social contracts for a digital surveillance state—and big tech is the only winner.

In an era where 5G, smartphones, smart TVs, and AI-powered homes are no longer futuristic luxuries but everyday essentials, we're ushering in a tech-driven world that raises some uncomfortable questions. As we voluntarily invite more digital surveillance into our lives, are we sleepwalking toward a "digital panopticon," where privacy is a relic of the past? In this rapidly evolving landscape, the stakes are high—and maybe we're the last to realize it. As an anthro-technologist, I'll explore why the so-called "smart revolution" may not be so brilliant after all, and how we've unwittingly traded our social contracts for the convenience of big tech. Should we be preparing for a digital apocalypse—or is it already here?

The Dismantling of Perception: Cybersecurity AI Threats and Countermeasures

It has been quite the year: OpenAI has democratised AI for many, but is this a Pandora's box moment? Jeff and Lianne, hosts of the Compromising Positions podcast, will take you through some of the advancements in cybercrime, blending technology and anthropology. They will discuss how the enemy is using AI against us. These attacks are more believable than ever; will we have to rethink how we treat what we read, see, and hear? But the threat can also be the saviour, as we can leverage AI technology in the fight against cybercrime. Join us to find out more.

This company sucks: why your next cyber risk might be easier to spot than you think!

With the cost of living rising, people navigating a post-Covid world, and other uncertainties in business, there is a real potential that we, the security function, will see a surge in risky behaviours detrimental to the security of the organisations we serve. When people are under stress, mistakes happen and shortcuts are taken, which makes them one of the hardest adversaries to build resilience against: insider threats. In this session I will discuss how exciting research using Glassdoor for OSINT purposes can be applied to help you predict whether your organisation is likely to see risky cyber activities, how to embrace grey-area thinking to illuminate your blind spots, and how the tools and methodologies of anthropology can give us a strong foundation for building anthro-centric security cultures within your organisation, enabling you to be proactive, not reactive, to insider threats.

(Ab)user Experience: The dark side of Product and Security

Security can often feel like an unapproachable and mysterious part of an organisation – the department of work prevention, the department of “nope.” But it doesn’t have to be that way.

In this talk we will look at the unintended users of a product, the “threat agents”.

By engaging the security team in the product process, we can model the dark side of use cases and user stories through threat modelling techniques. This can help demystify impenetrable security NFRs (non-functional requirements) with concrete examples of how these threat agents may try to misuse your shiny new digital product.
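
As a flavour of the approach, here is a minimal, hypothetical sketch of how an "abuser story" might be captured alongside ordinary user stories. The field names and scenario are illustrative assumptions, not material from the talk:

```python
from dataclasses import dataclass, field

@dataclass
class AbuserStory:
    """An inverted user story: what a threat agent wants, and how we stop them."""
    threat_agent: str  # who might misuse the product
    goal: str          # what they are trying to achieve
    entry_point: str   # where they touch the product
    mitigations: list[str] = field(default_factory=list)  # candidate security NFRs

# Turning the vague NFR "the product must be secure" into a concrete misuse case.
story = AbuserStory(
    threat_agent="credential-stuffing bot",
    goal="take over accounts using leaked username/password lists",
    entry_point="public login endpoint",
    mitigations=[
        "rate-limit login attempts per IP and per account",
        "step up to MFA on suspicious sign-ins",
        "alert on spikes in failed logins",
    ],
)

print(f"As a {story.threat_agent}, I want to {story.goal} via the {story.entry_point}.")
for nfr in story.mitigations:
    print(f"  -> mitigation / NFR candidate: {nfr}")
```

Writing threats in the same story format product teams already use is what lets the resulting security requirements read as acceptance criteria rather than edicts.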

Who this event will benefit:

* Those building products/apps exposed to the web
* People who want to build an awareness of the possible attack-vector use cases (i.e. how you might be attacked)
* People who need to write those down as a set of requirements to help build a DevSecOps approach into projects

Are ‘Friends’ Electric?: What It Means to Be Human Now and Tomorrow in the Age of AI

In a world where loneliness is an epidemic and human connection feels increasingly elusive, could artificial intelligence be the answer? Are ‘friends’ truly electric?

In 1979, synth-pop legend Gary Numan asked the question in his number-one hit ‘Are ‘Friends’ Electric?’, a song inspired by Philip K. Dick’s sci-fi novel “Do Androids Dream of Electric Sheep?” (1968), later adapted into the iconic sci-fi film “Blade Runner” (1982). In that dystopian future, replicants (synthetic humans) laboured as slaves, evolved, rebelled, and were hunted down by humans sent to restore order.

For centuries, we have been captivated by the potential of creating and interacting with synthetic ‘friends’—from the chess-playing Mechanical Turk of 1770 to today’s AI-driven companions. Once a staple of science fiction, the idea of electric companions is now a tangible reality. But what does this mean for our understanding of friendship, love, and humanity itself?

45 years after Numan asked us to imagine a world in which our needs, wants, and desires would be catered for by mechanised ‘friends’, have we moved from mere sci-fi fantasy to a new dawn of human/synth relationships, driven by the growth and development of robotics and AI?

Under the guidance of a digital anthropologist specialising in cybersecurity and tackling the digital divide, this talk explores our cultural fascination with replicating not only the human form and character traits but also the human condition, and how AI entities and robotics have transformed, and will continue to transform, our interactions with machines and with ourselves.

This talk explores the cultural, emotional, and societal impact of AI ‘friends.’ Will they enhance connection—or rewrite humanity itself?

In this talk, you will be challenged to consider:

Culture: What is culture and can electric ‘friends’ truly grasp the richness of human culture, or will they merely mimic it? Will electric ‘friends’ create their own culture or are they only capable of a facsimile of culture based on billions of human data points?

Emotions: Are love, creativity, and heartache exclusive to humans, or can electric companions experience these emotions?

Companionship: Could these electric friends be better for us than human friends? Will they increase or decrease social isolation? Will we become dependent on electric friends?

Dust off Donna Haraway’s ‘A Cyborg Manifesto’, turn up the stereo and turn on your friend as we ask: ARE ‘friends’ truly going to be electric?

AI’s Hidden History: Unpacking Bias, Power, and the Builders Who Shaped It

Ever wondered how the early builders of AI shaped the tech we use today? This talk dives into the pioneering work of anthropologist Diana Forsythe, who revealed how AI development in the '80s and '90s was deeply influenced by the biases, power structures, and cultural values of its creators. Through Forsythe's lens, we’ll explore the myth of "neutral" technology, how power dynamics in the lab shaped design, and the underrepresentation of women in computing—issues that still resonate today. By revisiting Forsythe's groundbreaking research, we’ll uncover key lessons for building more inclusive, responsible AI. Whether you're a developer or researcher, this talk offers valuable insights into how our past influences the future of AI—and how we can learn from it to avoid the same mistakes.

Jurassic Park: Send in the Consultants!

Welcome... to Jurassic Park—where cutting-edge technology, corporate ambition, and one very disgruntled sysadmin collided to create the biggest cybersecurity disaster of the Cretaceous period. But what if InGen had brought in the consultants before things went prehistoric?

Join three seasoned cybersecurity experts as they step into the khaki-clad shoes of SPOOF (Security Practices Of Obvious Foolishness), a big consultancy firm tasked with auditing Jurassic Park’s IT failures. With hindsight, scepticism, and a touch of disaster recovery, we’ll analyse single points of failure (one guy controlled everything? Seriously?), non-existent incident response (was anyone monitoring those fences?), and other prehistoric blunders.

In this unofficial sequel, we present something quite different! Part talk, part parody, this session takes a lighthearted yet insightful approach to the lessons we can learn from dodgy firewalls, rogue programmers, and forgetting to factor in the raptor risk. Combining humour, expertise, and chaos theory (Life finds a way), this talk will leave you roaring with laughter—and rethinking your own systems.

Join us for a deep dive into one of the most infamous tech meltdowns in cinematic history. No expense has been spared—except on cybersecurity!

A light-hearted talk featuring the crew of the Tech Film Noir podcast.

From Patient Zero to Zero-Days: Containing Cybersecurity Incidents like an Epidemiologist

What if we treated cybersecurity incidents like epidemiologists handle pandemics?

We reimagine cybersecurity through the lens of epidemiology, exploring how digital outbreaks mimic viral contagions. Using the investigative framework “What do we think, what do we know, what can we prove?”, you’ll learn to cut through the chaos of an incident and communicate effectively under pressure.

From patient zero to zero-days, we’ll uncover strategies for early detection, containment, and mitigation, equipping you with a fresh mindset for tackling cybersecurity crises. Whether you’re dealing with public health or public WiFi, this talk will help you stay cool, focused, and communicative when the digital flu hits your network.

No hazmat suit required (but encouraged).

Key Takeaways:

* Apply an epidemiologist's mindset to cybersecurity incidents—using lessons from disease outbreaks to improve digital threat response.

* Frame threat intelligence and incident response through “digital epidemiology”, leveraging early detection, layered containment, and long-term resilience strategies.

* Use the investigative approach “What do we think, know, and prove?” to enhance attribution, forensics, and decision-making under pressure.

* Communicate threat evolution and response strategies more effectively, using a shareable metaphor that bridges technical and non-technical teams.
