Lakeidra Smith
Founder - The Cyber Consultant
Birmingham, Alabama, United States
Lakeidra Smith is the founder of The Cyber Consultant, a boutique firm offering Human-Centered Cybersecurity and Digital Risk Solutions tailored for modern families and mission-driven firms. A published author and sought-after speaker, Lakeidra transforms complex digital threats into empowering knowledge. Her Cyber Curiosity framework invites a more mindful, resilient relationship with technology, anchored in curiosity, caution, and care.
Area of Expertise
Topics
The Curiosity Firewall: Why AI Risk Starts (and Ends) With Humans
AI isn’t just accelerating attacks—it’s making deception feel indistinguishable from reality. From hyper-realistic deepfake voicemails to synthetic executives scheduling wire transfers, today’s threats don’t just bypass firewalls—they exploit our most human instincts: trust, urgency, and confusion.
But if attackers are getting smarter, our defenses must get more curious.
In this session, cybersecurity author and risk advisor Lakeidra Smith introduces the Cyber Curiosity Mindset™, a human-first framework designed to help teams recognize and respond to AI-enabled threats before damage is done. Drawing from her work training high-risk sectors—from financial institutions and energy operators to family offices—Lakeidra explains why curiosity, not fear, is the foundation for effective human risk mitigation.
Attendees will learn:
- How AI-powered phishing and impersonation attacks bypass traditional training—and what to do instead
- The three cognitive levers attackers exploit (status, urgency, affinity)—and how to short-circuit them with micro-habits
- How to embed “curiosity drills” into workflows, executive routines, and even staff onboarding
- A simple pulse-check method to measure human risk tolerance and decision hygiene
This isn’t another checkbox training or abstract framework. It’s a practical playbook for leaders managing AI risk at scale—whether they’re overseeing front-line operations or governing ethical AI systems at the board level.
By the end of the session, attendees will walk away with:
- A 3-part strategy to reduce human attack surface
- Techniques to coach cyber curiosity into technical and non-technical teams
- Metrics they can brief to CISOs, audit committees, or the board
Because in the age of AI, your biggest vulnerability isn’t just your data. It’s your people. And your biggest untapped defense? Their curiosity.
Geopolitical Threat, Local Impact: Helping Frontline Teams Understand Nation-State Risk
Nation-state cyber threats get serious attention at the leadership level. Security teams track the advisories. Executives get the briefings. But somewhere between the intelligence report and the plant floor, the warehouse, the clinic, or the IT help desk, the message stops traveling.
And that gap is exactly where attackers find their way in.
This session is about closing the distance between what security leaders know about geopolitical cyber risk and what frontline workers in critical infrastructure environments actually understand about their role in defending against it. Because the IT admin at a regional energy provider, the procurement coordinator at a mid-size manufacturer, and the clinic scheduler at a regional health system — they're all part of the threat surface. They just don't know it yet.
Lakeidra Smith draws on her background in cybersecurity education and curriculum design to break down how nation-state tactics translate into day-to-day risk for workers across Manufacturing, Energy, Healthcare, and Technology — and what it actually looks like to build threat awareness at every level of an organization, not just at the top.
Attendees will leave with a communication framework for translating geopolitical threat intelligence into language that resonates with non-technical employees, a tiered education model that meets workers where they are, and concrete examples of how frontline behavior has been the deciding factor in both successful attacks and near-misses on critical infrastructure.
Because your perimeter is only as strong as the person who least understands why it matters.
Human in the Loop: How Attackers Are Exploiting the People Behind Agentic AI
We've spent years hardening our systems, patching our software, and training our teams to spot phishing emails. And now we're deploying AI agents that can take autonomous action — browsing, deciding, executing — on behalf of our organizations. That's a remarkable shift. It's also opened a threat surface most security programs aren't accounting for yet.
The attack isn't always against the model. Increasingly, it's against the person managing it.
This session examines how threat actors are learning to exploit the humans in agentic AI workflows — the employees who prompt, override, trust, and respond to these systems every day. When an AI agent generates an urgent request, flags an exception, or asks a user to confirm an action, how do we know that interaction hasn't been manipulated? Most people don't stop to ask — and that's exactly why it matters.
Drawing on behavioral research and emerging real-world attack patterns, this session walks practitioners through what human-layer exploitation looks like in agentic AI environments, where existing social engineering defenses fall short, and what organizations in technology, healthcare, manufacturing, and energy need to build into their policies and training before this becomes a headline they're reading about themselves.
You'll leave with a clear picture of the human risk surface agentic AI introduces, a framework for identifying where your people are most exposed in AI-assisted workflows, and practical controls that don't require overhauling your entire security program to implement.
The Cyber Curiosity Mindset: Reducing Human Risk in an AI-Driven World
AI is the ultimate double-edged sword: powering both innovation and exploitation. From deepfakes and AI-powered phishing to careless employee use of generative AI models, human behavior remains the single greatest risk vector. Traditional awareness programs aren’t built for this new era.
This bold, practical session introduces The Cyber Curiosity Mindset™, a proven behavioral framework that helps organizations turn their people into active participants in AI risk reduction. Attendees will learn how to train employees and executives alike to safely leverage AI models without leaking sensitive data, violating compliance, or exposing the company to brand or regulatory disaster.
Key takeaways include:
- How attackers weaponize AI to manipulate human behavior and gain access
- Why employees inadvertently leak sensitive data into public AI models—and how to stop it
- A step-by-step model to build AI usage curiosity: pause, question, verify, protect
- How Cyber Curiosity™ complements enterprise-level technical controls and governance
AI risk starts and ends with human behavior. Attendees will leave with actionable tools to make “human risk management” a living, breathing part of their AI risk mitigation program.