Jay James

Cybersecurity Strategy, Responsible AI, and Education Leader

Atlanta, Georgia, United States

Jay James is a nationally recognized technology leader, educator, and speaker working at the intersection of cybersecurity, artificial intelligence, and workforce development. He is passionate about helping organizations and professionals move faster with confidence by pairing technical insight with responsible decision-making.

He currently works across higher education and industry, leading security and AI initiatives while also serving as an educator for universities and global programs such as MITxPro, Purdue University Global, and Auburn University, where his teaching emphasizes applied learning, systems thinking, and career readiness in AI-driven environments.

He is a National Leadership Award recipient from EDUCAUSE, was named a Global Achievement Award winner as Mid-Career Professional of the Year for the Americas by ISC2, and was selected as an international finalist for Community Champion of the Year through the SANS Institute Difference Maker Awards.

He has spoken on national stages including Microsoft Ignite, the EDUCAUSE National Conference, and the SANS New2Cyber Summit, delivering practical, human-centered insights on responsible AI adoption, security as a behavioral practice, and career growth in high-impact technical roles.

Area of Expertise

  • Business & Management
  • Government, Social Sector & Education
  • Information & Communications Technology
  • Media & Information

Topics

  • Leadership
  • Leadership Development
  • Technical Leadership
  • IT Leadership
  • Cybersecurity
  • Cybersecurity Strategy
  • Cybersecurity Awareness
  • Technology Strategy
  • Workforce Development
  • Mentorship
  • Leadership Empowerment
  • Artificial Intelligence
  • AI
  • Strategy
  • Systems Thinking
  • Inclusive Leadership
  • Education
  • EdTech
  • Higher Education
  • Instructional Technology

AI-Supported Model for Student Workforce Development

This course explores how higher education institutions can design and scale student workforce development programs, using artificial intelligence as a structured support system for learning, mentorship, and operational contribution while also meeting the day-to-day needs of operational staff. The session demonstrates how students can be integrated into live university operations while developing workforce-ready technical and professional skills.
Participants will learn how students support real institutional work through operational tasks, structured workflows, documentation, and program support within clearly defined roles and guardrails. The session highlights how AI can be used to scaffold learning, standardize workflows, support reflection, and scale coaching without replacing human judgment or increasing institutional risk. While grounded in cybersecurity, the model is transferable across student workforce programs.

Learning Objectives
1. Understand how to design an AI-supported student workforce development model that integrates students into live operational environments while maintaining institutional trust and oversight.
2. Identify how artificial intelligence can be applied to scale coaching, documentation, and skill development within student programs without replacing human mentorship.
3. Examine key structures, roles, and guardrails required to manage risk and ensure quality in AI-supported, student-led operations.
4. Apply a transferable framework for positioning student workforce programs as strategic assets that support career readiness and institutional outcomes.

AI Security: Career Advantage Hiding in Plain Sight

AI tools are already embedded in how work gets done across engineering, product, data, design, and operations. Most professionals use them to move faster. Few realize they are already making security and risk decisions every day without ever calling them that.

This lightning talk introduces a simple, repeatable lens called the "Leverage Check." Before using AI, ask three questions: What data is involved? What assumptions am I trusting? What happens if this goes wrong at scale? These small decisions quietly shape trust, responsibility, and career momentum.

Drawing on real-world experience applying AI in high-impact environments, this session shows how AI increases leverage and why leverage raises the bar for judgment and accountability. In seven to ten minutes, attendees will see how everyday AI use can either create invisible risk or signal readiness for greater autonomy.

This talk explains why professionals who develop AI security awareness early stand out faster, earn more trust, and position themselves for leadership opportunities. You do not need to work in cybersecurity to benefit from security thinking. In the AI era, it is one of the clearest indicators of who is ready for what comes next.

AI Security in Higher Education: Governance, Risk, and Responsibility

As artificial intelligence becomes embedded across teaching, administration, research, and student services, higher education institutions face increasing pressure to secure AI systems while enabling innovation. AI security extends beyond technical controls and requires coordinated governance, risk management, and shared responsibility across the institution.

This course provides a practical, role-inclusive overview of AI security in higher education, grounded in three interconnected domains: AI governance and program management, AI risk management, and AI technologies and controls. Participants will learn how institutions can establish clear ownership for AI decisions, align AI use with compliance and ethical obligations, and reduce risk from fragmented or shadow AI deployments.

The presentation examines real-world implications of AI adoption for faculty, staff, administrators, and IT leaders, including data privacy risks, bias and equity considerations, vendor risk, and misuse of generative AI tools. Rather than focusing solely on technical defenses, the session highlights how governance structures, operational guardrails, and everyday decisions across roles contribute to institutional trust and responsible AI adoption.

Learning Objectives
1. Understand the core components of AI security in higher education, including governance, risk management, and technical controls, and how they intersect across institutional functions.
2. Identify common AI-related risks affecting students, faculty, staff, and institutional reputation, including data privacy, bias, misuse, and vendor-related exposure.
3. Learn how different campus roles contribute to AI security through policy, oversight, and daily practices, not just technical implementation.
4. Apply practical strategies for establishing AI guardrails that enable innovation while maintaining compliance, trust, and accountability.

Intentional Mentorship 101

This course demonstrates how structured mentorship can address staffing gaps, accelerate skill development, and strengthen institutional operations.

Participants will learn how intentional mentorship can be designed as a system rather than an informal practice, using clear roles, developmental goals, and repeatable processes to support early-career, mid-career, and senior staff. The session highlights practical mentoring frameworks that improve onboarding, reduce time to competency, and increase engagement across teams. While grounded in cybersecurity operations, the mentorship model is presented as transferable across IT, compliance, and administrative functions, offering actionable guidance for building sustainable talent pipelines and resilient teams in higher education.


Learning Objectives
1. Understand how intentional mentorship can be designed as a workforce strategy to address staffing challenges and skill gaps in higher education IT and cybersecurity.
2. Identify key mentor roles, structures, and processes that enable students and early-career professionals to contribute safely and effectively to operational environments.
3. Learn how to build and scale a mentorship program that balances professional oversight with student autonomy and growth.
4. Apply a practical framework for measuring the impact of mentorship programs on learning, behavior, retention, and operational outcomes.

From Operator to Architect: The Systems Shift That Separates Good Leaders from Great Ones

Most high performers hit a ceiling, not because they lack skill, but because they never make the shift from executing tasks to designing the systems that make execution possible. This session shares a hard-won framework built through years of leading teams, launching programs, and earning recognition as a national cybersecurity and higher-ed leader. Using four diagnostic questions that cut through self-deception, attendees will identify exactly where they are in their leadership evolution and what has to change next. This isn't a talk about working harder. It's about working at the right level of abstraction.

AI Governance in Practice: Building Security-Aware AI Programs for Your Organization

AI adoption is accelerating faster than most organizations can manage. Leaders are under pressure to deploy, but without governance frameworks in place, they're exposing their organizations to regulatory risk, data loss, and algorithmic failures they won't see coming. This session cuts through the hype and delivers a practitioner's blueprint for building a security-conscious AI program from scratch. Drawing on hands-on experience standing up institutional AI initiatives and deep study of the EU AI Act and emerging U.S. frameworks, attendees will leave with a tiered governance model they can adapt, whether they're a 5-person startup or a 5,000-person enterprise. This isn't policy theory. It's how to actually do it.
