AI’n’t Very Clear: Bad Language, Worse Governance (A Lesson from Cybersecurity)
Language in tech isn’t just clumsy; it’s consequential.
The words we use don’t just describe technologies—they frame them. In AI, we talk about "hallucinations" instead of errors, as if the model is a quirky creative writing student. We talk about "alignment" as if we’re tuning a misbehaving pet robot, not reckoning with the vast complexity of embedding values into sociotechnical systems. And we call it "artificial intelligence" as if we’re dealing with something godlike and autonomous, rather than a series of design decisions made by very real humans with very real biases.
Sound familiar?
Cybersecurity knows this problem intimately. From the militarised metaphors of "threat actors" and "defence in depth" to the technical gatekeeping of terms like "zero-day" and "kill chain," the language of cyber has often alienated the very people it's meant to protect—and obscured the systems of power that shape how risk is distributed. Governance conversations became technical monologues. Strategy became jargon. Responsibility became everyone’s and no one’s.
In this talk, I’ll argue that bad naming is not just a quirk of our industry—it’s a structural problem. A legacy feature. A long-standing, poorly version-controlled tradition of framing technologies in ways that obscure agency, distort accountability, and shape what gets built (and who gets blamed when it fails).
And if we don’t learn from the lessons cybersecurity has already taught us (the hard way), we’ll make the same mistakes all over again. Only this time, with global systems and lives at stake.
Join a digital anthropologist specialising in cybersecurity and AI as they explore:
🔹 Why naming isn’t neutral
Words like “AI” and “cyber” carry metaphors, histories, and ideologies. I’ll show how certain terms constrain how we think, regulate, and design systems—and how others make humans disappear from the picture entirely.
🔹 What cybersecurity can teach us
From the overuse of “best practices” to the illusion of silver-bullet solutions, cybersecurity’s struggle with language has real governance consequences. I’ll walk through case studies of how naming affected incident response, regulation, and even funding priorities—and what AI governance folks can learn before it’s too late.
🔹 How framing shapes power
Whether we call it a “data leak” or a “breach,” a “user error” or a “design flaw,” language decides where blame lands. In AI, this becomes existential: who gets to define harm, fairness, or risk?
🔹 How to name better (or at least, less badly)
No, I’m not proposing we rename everything. But I am arguing for a more intentional, human-centred approach to how we talk about technology—especially as we rush to regulate systems most people barely understand. We need metaphors that illuminate, not obfuscate. Language that invites people in, not pushes them out.
A light-hearted take on an academic paper I wrote: "Naming is Framing: How Cybersecurity’s Language Problems Are Repeating in AI Governance"
https://arxiv.org/pdf/2504.13957

Lianne Potter
Award-Winning Digital and Cyber Anthropologist, Cybersecurity Operations and Technology Leader, Podcast Host @ Compromising Positions
Leeds, United Kingdom