Speaker

Lianne Potter

Award-Winning Cyber Anthropologist, Head of Security Operations, and Technology Podcast Host @ Compromising Positions

Leeds, United Kingdom

When you follow the cables, behind every piece of tech is a person, consumer and creator, and we should never lose sight of this.

Lianne is an award-winning cyber anthropologist and security transformation leader with experience in the retail, healthcare, finance, private and non-profit sectors.

Her consultancy, The AnthroSecurist, helps teams in complex organisations understand each other’s motivations, identifies barriers that have prevented good security practices in the past, and provides practical steps and insights to increase collaboration between the security team and the rest of the organisation. Lianne is also the Head of SecOps for the largest greenfield technology project in Europe, where she builds strategies to create sustainable security cultures throughout the organisation.

As a respected keynote speaker, Lianne has delivered talks across the globe to share her vision for a new type of security function. Drawing upon her expertise as an anthropologist and her practical experience as a security-focused software developer and security practitioner, Lianne combines the human and the technical aspects of security to evangelise a cultural security transformation.

In 2020 Lianne formed a health-tech start-up, Liria Digital Health, using technology to improve patient outcomes for those with under-researched, under-represented or unloved health conditions, particularly for people in marginalised or minority communities.

You can listen to Lianne talk about her human-centric approach every Thursday on her award-winning technology podcast Compromising Positions, in which she interviews non-cybersecurity experts from the worlds of anthropology, psychology and behavioural science about cybersecurity culture.

Lianne is also undertaking an MSc in AI and Data in 2024.

Publications include:

* The Times
* Raconteur
* Computing.com
* The Yorkshire Post
* Security Magazine
* IT Pro
* EmTech Anthropology Careers at the Frontier (Book)

Recent awards and honours include:

* Podcast Newcomer - 2024 European Cybersecurity Blogger Awards
* Cybersecurity Personality of the Year 2023 - The Real Cyber Awards
* Security Woman of the Year - Computing Security Excellence Awards 2023 (Highly Commended)
* 40 Under 40 in Cybersecurity - Cybersecurity Magazine
* Security Leader of The Year 2021 - Women in Tech Excellence
* Woman of the Year 2021 - Women in Tech Excellence
* Security Specialist of the Year 2021 - Computing.com

Areas of Expertise

  • Business & Management
  • Humanities & Social Sciences
  • Information & Communications Technology

Topics

  • cybersecurity
  • cybersecurity awareness
  • cybersecurity compliance
  • behavioural science
  • DevSecOps
  • Security
  • cyber security
  • Cloud Security
  • Information Security
  • Application Security
  • IT Security
  • Security & Compliance
  • Cloud App Security
  • anthropology
  • Artificial Intelligence
  • Machine Learning & AI
  • Philosophy
  • Organizational Philosophy
  • Leadership
  • culture
  • Culture & Collaboration
  • threat modeling
  • Threat Intelligence
  • Social Engineering and Phishing
  • ai
  • Ethics in AI
  • Ethics in Tech
  • Ethics in Software
  • Tech Ethics
  • Data Science Ethics
  • AI Safety
  • People & Culture

Compromising Positions: how understanding human behaviours can build a great security culture

Insecure behaviours in our organisations continue to proliferate, despite the processes, policies and punishments we have in place.

This has resulted in cybersecurity breaches becoming an increasingly regular feature in the mainstream media.

Despite this, the advice given by cybersecurity teams hasn't varied much since the inception of the practice 20-30 years ago. We haven't adapted to a remote-first workforce or to a digital-native generation that demonstrably engages in riskier behaviours online.

As a community, we need to start asking ourselves some difficult questions, such as:
* If the messaging on how to keep safe has been consistent, why is it not working?
* Are people engaged with our communications?
* How do they perceive the security team?
* Do they have any kind of understanding of the risks and impacts of breaches?

But perhaps the real question here is: who is actually in the compromising position? Those on the periphery of security who are not heeding our advice? Or the security professionals who refuse to compromise, leading to workarounds and other dangerous behaviours? That turns the narrative on its head, and through a series of 30+ interviews with experts outside of cybersecurity, we discussed:

* How Cybersecurity teams could benefit from having Behavioural Scientists, Agile Practitioners and Marketing experts in their midst (or at least adopting their practices)
* How Processes and Policies matter much less than how the People are brought on the journey.
* Why Humans shouldn't be treated as the weakest link
* Why we shouldn't be the gatekeepers or the police but rather the enabling force in a business, and how we can change our image to suit that

Wearable, Shareable... Unbearable? The IoT and AI Tech Nobody Asked For but Cybercriminals Love!

In a world of 5G (no conspiracy theories, please!), smartphones, smart TVs and smart homes, we're inviting more tech into our lives than ever. We're sleepwalking into a future nobody asked for, but one that many may fear: always-on microphones, cameras and AI are creating a "digital panopticon" that none of us probably want or need – unless you're Amazon. Should we become "digital preppers"? Just how high are the privacy and security stakes? This is an anthro-technologist's view on how dumb an idea the smart revolution is, and how we've eroded our social contracts in favour of big tech.

The Dismantling of Perception - Cybersecurity AI Threats and Countermeasures

It has been quite the year: OpenAI has democratised AI for many, but is this a Pandora's box moment? Jeff and Lianne, hosts of the Compromising Positions podcast, will take you through some of the advancements in cybercrime, blending technology and anthropology. They will discuss how the enemy is using AI against us. These attacks are more believable than ever; will we have to rethink how we treat what we read, see and hear? But the threat can also be the saviour, as we can leverage AI technology in the fight against cybercrime. Join us to find out more.

This company sucks: why your next cyber risk might be easier to spot than you think!

With the cost of living increasing, people navigating a post-COVID world, and other uncertainties in business, there is a potential that we, the security function, could see a surge in risky behaviours detrimental to the security of the organisations we serve. When people are under stress, mistakes happen and shortcuts get taken, which can turn them into one of the hardest adversaries to build resilience against: insider threats. In this session I will discuss how exciting research using Glassdoor for OSINT purposes can be applied to help you predict whether your organisation is likely to engage in risky cyber activities, how to embrace grey-area thinking to illuminate your blind spots, and how the tools and methodologies of anthropology can give us a strong foundation for building anthro-centric security cultures within your organisation that will enable you to be proactive, not reactive, to insider threats.

Tales of an Anthropologist in Cyber Security

Hacking humans is a very lucrative business. Social engineering is one of the easiest and most effective ways to access a secure system. Cyber criminals know this, and they are increasingly leaning on the research and techniques of the social science disciplines to leverage the human element into letting them into our lives, our businesses and our bank accounts.

As the threat continues to grow, how can security practitioners increase security awareness and build up our resilience to unite against malicious actors looking to leverage what makes us human against us? This talk argues that we do the same, and use what makes us human to make us stronger.

This discussion puts people and social science at the centre of the solution with real-life examples and experiences from a SOC analyst turned cyber anthropologist.

Product AI - The key to killer AI implementation

In this talk we'll discuss how the last few months have changed the game in the field of AI, why it's more important than ever to involve Product in finding those killer use cases, and how we can go from a pocket of Data Scientists to a fully formed AI Centre of Excellence.

(Ab)user Experience: The dark side of Product and Security

Security can often feel like an unapproachable and mysterious part of an organisation – the department of work prevention, the department of “nope.” But it doesn’t have to be that way.

In this talk we will look at the unintended users of a product, the “threat agents”.

By engaging the Security team in the Product process, we can model the dark side of use cases and user stories through threat modelling techniques. This can help demystify impenetrable security NFRs through concrete examples of how these threat agents may try to misuse your shiny new digital product.

Who this event will benefit:
* Those building products/apps exposed to the web
* People who want to build awareness of the possible attack vector use cases (i.e. how you might be attacked)
* People who need to write that down as a set of requirements to help build a DevSecOps approach in projects

Compromising Positions: An Anthro-Centric Look at Organizational Security Culture

Insecure behaviours continue to proliferate, despite cybersecurity breaches being a regular newsworthy item in mainstream media, and despite the fact that the advice given by cybersecurity teams hasn't varied much since the inception of the practice 20-30 years ago. As a community, we need to start asking ourselves: if the messaging on how to keep safe has been consistent, why is it not working?

Perhaps the real question here is: who is actually in the compromising position? Those on the periphery of security who are not heeding our advice?

Or is it security professionals who refuse to compromise, leading to workarounds and other dangerous behaviours?

Join me as we ask these very questions and look at ways we can build a stronger security culture in our organisations.

Are ‘Friends’ Electric?: What It Means to Be Human Now and Tomorrow in the Age of AI

In a world where loneliness is an epidemic and human connection feels increasingly elusive, could artificial intelligence be the answer? Are ‘friends’ truly electric?

In 1979, synth-pop legend Gary Numan asked the question in his number-one hit song ‘Are “Friends” Electric?’, inspired by Philip K. Dick’s sci-fi novel “Do Androids Dream of Electric Sheep?” (1968), later adapted into the iconic sci-fi film “Blade Runner” (1982). In that dystopian future, replicants (synthetic humans who laboured as slaves) evolved, rebelled, and were pursued by humans seeking to restore order.

For centuries, we have been captivated by the potential of creating and interacting with synthetic ‘friends’—from the chess-playing Mechanical Turk of 1770 to today’s AI-driven companions. Once a staple of science fiction, the idea of electric companions is now a tangible reality. But what does this mean for our understanding of friendship, love, and humanity itself?

45 years after Numan asked us to imagine a world in which our needs, wants and desires would be catered for by mechanised ‘friends’, have we moved from mere science-fiction fantasy to a new dawn of human/synth relationships, driven by the growth and development of robotics and AI?

Under the guidance of a digital anthropologist specialising in cybersecurity and tackling the digital divide, this talk explores our cultural fascination with replicating not only the human form and character traits but also the human condition, and how AI entities and robotics have transformed, and will continue to transform, our interactions with machines and with ourselves.

In this talk, you will be challenged to consider:

Culture: What is culture and can electric ‘friends’ truly grasp the richness of human culture, or will they merely mimic it? Will electric ‘friends’ create their own culture or are they only capable of a facsimile of culture based on billions of human data points?

Emotions: Are love, creativity, and heartache exclusive to humans, or can electric companions experience these emotions?

Companionship: Could these electric friends be better for us than human friends? Will they increase or decrease social isolation? Will we become dependent on electric friends?

Dust off Donna Haraway’s ‘A Cyborg Manifesto’, turn up the stereo and turn on your friend as we ask: ARE ‘friends’ truly going to be electric?

Never Neutral: Unveiling the Sociocultural Fabric of AI Development

We seldom discuss the early builders of the AI landscape. However, there is much we can learn from the grandfathers of AI to help us understand how we arrived at what we know as AI today, and where we might be going with AI in the future.

This talk examines the pioneering work of anthropologist Diana Forsythe, who immersed herself in the world of AI research during the 1980s and 1990s. Forsythe's groundbreaking research revealed AI as a deeply sociocultural product, shaped by the values, biases, and power structures of its creators.

This talk is a tribute to the impactful and still pertinent work of Forsythe, who tragically died in an accident in 1997. As a modern anthropologist, I will share my journey through the evolving landscape of AI, examining how Forsythe's insights resonate with contemporary challenges and opportunities. Together, we will explore her key findings, including the power dynamics within AI research, the underrepresentation of women, and the critical role of social and cultural factors in shaping AI development.

By revisiting Forsythe's work, we will uncover enduring lessons about the human element in AI creation. Ultimately, I argue that understanding the past is essential for building a more responsible and equitable AI future.

Together we will explore:

Never Neutral: Forsythe challenged the notion that technology is neutral and showed how neglecting the social and the cultural leads to shelfware

Power in the Lab: We explore her analysis of the power researchers and technology developers wield, as their attitudes and perspectives profoundly influence technological design

The disappearing women in the social world of computing (a problem that doesn’t seem to age well!)

What can we learn from the builders of AI in the 1980s and 1990s to inform our understanding of contemporary AI technologists?
