© Mapbox, © OpenStreetMap

Speaker

Nnenna Ndukwe

Nnenna Ndukwe

Principal Developer Advocate at Qodo AI

Boston, Massachusetts, United States

Actions

Nnenna Ndukwe is a Principal Developer Advocate and Software Engineer, passionate about AI. With 9+ years in industry, she's a global AI community architect championing engineers to build in emerging tech. She studied Computer Science at Boston University and is a proud member of Women Defining AI. Nnenna believes that AI should augment: enabling creativity, accelerating learning, and preserving the intuition and humanity of its users.

Area of Expertise

  • Information & Communications Technology

Topics

  • DevOps
  • Artificial Intelligence
  • Machine Learning
  • Python Programming Language
  • Infrastructure as Code
  • Security
  • Software Deveopment
  • Software Engineering
  • python
  • DevSecOps
  • Google AI
  • LLMs
  • LLMOps
  • Generative AI
  • MLOps

Leading in the AI Development Era: Empowering Engineers as Innovators

As AI transforms the way software gets built, leadership can evolve from managing output to enabling creativity. This session explores how engineering leaders can foster cultures where developers thrive as innovators, partnering with AI systems instead of competing with them. Attendees will learn how to guide teams beyond automation toward augmentation, leveraging agentic coding tools to amplify human problem-solving and accelerate delivery. Drawing from real-world experiences in code quality automation, developer relations, and AI strategy, this talk reimagines software leadership as creative enablement. Attendees will leave with actionable strategies to elevate their teams’ skills, strengthen technical culture, and future-proof their organizations in the age of agentic software.

Choose Your Fighter: A Pragmatic Guide to AI Mechanisms vs Automated Ops

Engineers get bombarded with industry noise about integrating AI into all software/platforms. But not every tool needs to leverage AI, so how do we think pragmatically about solid use cases for them? This talk emphasizes clear distinctions between automation and AI mechanisms to encourage implementations that truly solve problems without over-engineering. We'll explore mechanisms of both paradigms, dissect their strengths and limitations, and choose the right tool for your use case.

This talk is geared for software architects, technical leads, and senior engineers who make strategic technology decisions. This presentation is also great for general Software Engineers, DevOps practitioners, and any AI/ML enthusiasts who are bombarded with industry noise about integrating AI into all software/platforms. Sometimes, AI isn't the solution. Automated workflows are. Not every tool needs to leverage AI, but how do we pause to think more strategically and pragmatically about solid use cases for AI? This talk can emphasize clear delineations between automation and AI mechanisms to encourage implementations of both for specific use cases in order to truly solve problems without over-engineering solutions.

Basic understanding of automation tools and DevOps practices, familiarity with AI/ML concepts (no deep expertise required), and experience with distributed systems and application architecture is helpful in following along with this talk.

Key Takeaways:
- Framework for evaluating automation vs. AI solutions for specific use cases
- Understanding of integration patterns for hybrid solutions
- Risk assessment strategies for each approach
- Best practices for implementation and maintenance

From DevOps to MLOps: Bridging the Gap Between Software Engineering and Machine Learning

Both DevOps and MLOps aim to streamline the development and deployment lifecycle through automation, CI/CD, and close collaboration between teams. But there are key differences in the purposes and applications of DevOps and MLOps. This talk demonstrates how your existing DevOps expertise creates a strong foundation for understanding and implementing MLOps practices. We'll explore how familiar concepts like CI/CD, monitoring, and automated testing map to ML workflows, while highlighting the key differences that make MLOps unique.
Through practical examples, we'll show how software engineers can apply their current skills to ML systems by extending DevOps practices to handle model artifacts, training pipelines, and feature engineering. You'll learn where your existing tools and practices fit in, what new tools you'll need, and how to identify when MLOps practices are necessary for your projects.

Attendees should have experience with DevOps practices and general software engineering principles. No ML or data science experience is required - we'll focus on how your existing knowledge applies to ML systems.

Prerequisites: Familiarity with CI/CD, infrastructure as code, monitoring, and automated testing. Experience with containerization (e.g., Docker) and cloud platforms is helpful but not required.

Building with Confidence: Mastering Feature Flags in React Applications

Feature flags have become an essential tool in modern software development, enabling teams to deploy code safely, conduct A/B tests, and manage feature releases with precision. This session will take you on a journey from understanding basic feature flag implementation in React to advanced patterns used by high-performing teams. Through live coding demonstrations and real-world examples, you'll learn how to leverage feature flags to deploy confidently, experiment rapidly, and deliver value to your users continuously.

This talk is ideal for intermediate to advanced React developers, tech leads, and architects who want to implement or improve feature flag usage in their applications. Basic knowledge of React and modern JavaScript is required. Attendees will leave with a solid understanding of feature flag architecture in React applications, code templates and patterns they can implement immediately, best practices for feature flag management in production, strategies for scaling feature flags across large applications, and tools and resources for additional learning.

Red Teaming AI: How to Stress-Test LLM-Integrated Apps Like an Attacker

It’s not enough to ask if your LLM app is working in production. You need to understand how it fails in a battle-tested environment. In this talk, we’ll dive into red teaming for Gen AI systems: adversarial prompts, model behavior probing, jailbreaks, and novel evasion strategies that mimic real-world threat actors. You’ll learn how to build an AI-specific adversarial testing playbook, simulate misuse scenarios, and embed red teaming into your SDLC. LLMs are unpredictable, but they can be systematically evaluated. We'll explore how to make AI apps testable, repeatable, and secure by design.

Target audience:
- Application security engineers and red teamers
- AI/ML engineers integrating LLMs into apps
- DevSecOps teams building Gen AI pipelines
- Security architects looking to operationalize AI security
- Developers and technical product leads responsible for AI features

Separation of Agentic Concerns: Why One AI Can't Rule Your Codebase

The dream of a single all-knowing AI running your entire SDLC is both admirable and widespread. But in reality, specialized agents with distinct responsibilities outperform generalist systems.

This talk makes the case for using multiple agents in the SDLC: planning agents that think like principal architects, testing agents that prepare code with adversarial precision, and review agents that enforce quality like seasoned QA engineers.

We’ll explore the technical foundations that make this possible, including deep codebase context engineering, real-world benchmarks, and developer workflow patterns that ensure AI-assisted development scales with both velocity and quality.

Attendees will leave with practical knowledge to leverage agentic AI throughout the development lifecycle that deliver safer, smarter, and more reliable software.

From Spec to Prod: Continuous Code Quality in AI-Native Workflows

AI is accelerating how code gets written, but it’s also widening the gap between specs and production-ready implementation. The result is both velocity and hidden risks. This talk reframes code quality as a living lifecycle instead of a static checkpoint.

We’ll explore how a “code review lifecycle” approach can transform pull requests into continuous feedback loops that evolve with your team’s standards, architecture, and best practices. You’ll learn how to close the “last mile” gap in AI-generated code, embed quality checks across the SDLC, and turn review findings into one-click fixes.

By the end, you’ll have a practical playbook for making code review the backbone of AI-native development to make sure speed and quality move forward together.

From Code to Confidence: Building AI-Driven Quality Gates into Your Developer Platform

Internal Developer Platforms (IDPs) help teams ship faster, but manual code reviews quickly become the bottleneck as organizations scale. Traditional code analysis tools miss contextual standards and impede velocity, leaving platform engineers stuck between speed and quality.

This session shows how AI-powered code review agents deliver context-aware, automated quality gates in golden path workflows, reducing bottlenecks and improving developer experience. Through real world examples and actionable playbooks, you’ll learn how platform teams embed intelligent review into CI/CD, measure impact using DORA metrics, and keep standards high as velocity grows.

Takeaways:

Embed context-aware quality gates in platform workflows

Automate code review for faster, safer delivery

Measure impact on velocity and developer experience

For: Platform engineering teams, developer experience leaders, cloud native architects.

Solving the Last Mile Problem: How to Ship AI Code at Scale

AI coding agents can now generate 80-90% of new code in enterprises, but the biggest barrier to reliable delivery is verifying that AI-written code meets critical business and compliance standards before merging into production.

In this session, Dedy Kredo and Nnenna Ndukwe reveal enterprise-tested best practices for closing the "AI last mile" gap, focusing on context-aware code review agents and automated compliance gates.

They will highlight a case study from a Fortune 10 retailer that saved 450,000 developer hours in six months and empowered 12,000 monthly users by deploying AI-driven PR review across massive, distributed teams. Attendees will learn concrete strategies for operationalizing AI at the pull request stage, architecting cross-repo review, and delivering production-ready, enterprise-compliant code with new confidence and speed.

Agentic Code Quality: How Platform Teams Can Scale AI-Driven Development

The rise of AI-generated code and autonomous development agents introduces incredible speed, while simultaneously introducing risk into the cloud native software lifecycle. Platform teams are faced with supporting developer velocity and governing the quality, safety, and compliance of code changes created by and for AI.

This talk explores how platform engineering teams can harness agentic AI to embed continuous, context-aware code quality directly into infrastructure and application pipelines. Based on real-world implementations, platform engineers and AI/ML practitioners will learn how “agentic” workflows enable scalable quality gates that adapt to evolving codebases and organizational standards, assisting human reviews and ensuring trust in AI-driven development.

It's a recurring theme across the cloud-native ecosystem that automation promises increased efficiency and scalability for internal platforms, yet many practitioners find that too much reliance on AI-driven controls can alienate engineers, slow adoption, and inhibit innovation. Both newcomers and seasoned platform engineers are increasingly curious about how to strike a meaningful balance between automated governance and preserving human intuition in workflows such as code reviews, quality gates, and platform onboarding.

This talk aims to clarify the governance landscape for internal developer platforms by examining how automation, AI agents, and policy frameworks interact with human judgment, sharing real practitioner stories as well as emerging techniques. Attendees will see how teams can design trust signals and feedback systems that bring developers into the loop, enabling responsible experimentation and cultural growth alongside scaled automation.

Ultimately, as we witness enterprise platforms moving past simple controls into more human-centered systems, this talk will equip practitioners, advocates, and platform engineers with practical strategies to cultivate trust, encourage adoption, and maintain resilient platforms where automation empowers the human element.

Code Review Hell? 5 Rules to Fix It in the AI-Era

AI is accelerating code velocity, but code review remains the last human safety gate before production. In this talk, discover five battle-tested rules to make code reviews effective, not noisy. We'll focus on code intent and behavior over lines changed, treat AI suggestions like junior contributions, avoid scope creep around the PR, reduce false alarms, and always tie feedback to blast radius.

Through live PR diffs and comment threads, you'll see these rules in action by blending linters, static analysis, and AI tools. You'll walk away with a checklist you can use and techniques to calibrate tools for your team's context in order to improve your processes for high quality software development.

Technical: Laptop with GitHub access for live PR demos (browser-based, no installs). First public delivery.

Target: Mid-Sr engineers, tech leads adopting AI tools.

Preferred: 30-45 min talk including Q&A.

Don't Ditch Your Linters: Exploring a Modern Code Review Stack

Don't ditch your linters. Stack them with AI as your code quality gates. This talk maps the toolchain: what linters nail down, what static analysis owns, and what AI unlocks with humans maintaining the final say with your code.

We will build a PR pipeline showing integration pitfalls and wins, referencing multiple tools' pros and cons. We'll walk through real Python diffs exposing bad vs balanced code quality outcomes, and tips to leverage in your developer workflow.

Technical: Laptop for live GitHub Actions workflow chaining (prepped repo, browser/GitHub). First public delivery.

Target: Platform teams, tech leads, engineers.

Preferred: 30-45 min talk, including Q&A.

4 Best Practices for Evaluating AI Code Quality

Now that AI is handling unprecedented code velocity, human judgment is now the real constraint (and differentiator) in shipping trustworthy code. Explore how "code integrity" trumps speed when AI misses context, risk tradeoffs, and business invariants that explode in production.

Using a real AI pipeline (prompt → output → PR → deploy), we'll identify four irreplaceable judgment checkpoints that help scale dev teams without sacrificing quality. We'll also draw on real-world failures and engineering evaluation principles.

Attendees will leave with frameworks to audit their own workflows and push back on "ship faster, review later" hype.

Technical: Slides. First public delivery.
Target: Engineerings, staff devs, and managers in AI-heavy or AI-curious teams.
Preferred: 30 min talk.

11 Principles for Evaluating AI Dev Tools

Benchmarks measure narrow capabilities. Demos show best-case scenarios. Neither tells you whether AI-generated code will survive production or whether that shiny new tool deserves a place in your stack.

This talk presents a unified framework of 11 principles for evaluating AI-generated code and the tools that manage it. Because code quality and tool quality are inseparable: bad tools generate bad code, and bad code evaluation processes never catch it.

We'll reframe AI use with responsibility boundaries where the core questions shift from "is it fast?" to "can this be understood under pressure, safely changed, and defended to a stakeholder?"

Through real-world patterns from teams adopting AI across their SDLC, we'll apply these principles to distinguish tools that surface risk from tools that hide it.

Attendees will leave with a practical rubric to decide which AI tools to trust, which to constrain, and how to keep human judgment at the center of fast-moving, AI-augmented engineering.

5 Pillars for Mastering Context Engineering

Many teams treat context as prompt stuffing. Add more docs, longer system prompts, hope the model figures it out. This breaks the moment you try to build agentic systems, multi-step workflows, or a consistent DevEx.

This talk breaks down context engineering, its evolution, layers, and impact. We'll walk through a “context stack” and AI architectural components that answer the “what”, “why”, and “how” that your dev tools will need to optimize your team.

Whether you're building or buying AI dev tools, these pillars will help you gauge AI engineering and products with keen eyes.

Attendees will leave with a framework to audit their current context approach and principles for leveraging context systems that scale beyond ad hoc RAG.

Design Systems for AI Code: Preserving Engineering Judgment with AI

AI is a force multiplier that turns weak standards into architectural chaos. As code review becomes the ultimate bottleneck, engineering teams must bridge the gap between human intuition and machine output. This talk introduces a holistic framework for designing systems around AI coding. We explore how to codify architectural intent, from module boundaries to failure awareness, into machine-readable guardrails. Learn how to leverage context engineering to ensure your AI code tools respect your system’s design, preserving long-term maintainability without sacrificing the speed of the AI era.

1. Attendees will learn how to apply design, maintainability, quality, and chaos prevention to build a holistic verification layer for AI-generated output.
2. Learn how to build infrastructure that automatically feeds architecture diagrams, API contracts, and memory layers into AI agents to ensure alignment with senior intuition
3. Gain a practical methodology for encoding architectural reasoning into durable systems, allowing senior developers to shift from manual "line-coders" to "Policy Guardians" of the codebase.

MCP From Complete Scratch: Building an Agent Tool Server

MCP is a connective tissue from agents to tools, but for many it still feels abstract, magical, or hidden behind SDKs.

This talk strips MCP down to first principles.

Rather than starting with a framework, we'll walk through the journey of building an MCP server from scratch over stdio and JSON-RPC. We'll follow the protocol exactly as it exists, showing how agents discover tools, negotiate capabilities, and invoke real work step by step.

By the end of this session, MCP will no longer feel like a black box. You will understand how transports, JSON-RPC, initialization, tool discovery, and invocation work together on a local machine and how to architect your own MCP tools with confidence.

AI Code Flood: Protecting Open Source Maintainers and Code Quality at Scale

AI has dramatically increased the volume of contributions to open source projects. While this lowers barriers to entry, it has also created a new strain on maintainers: pull requests scaling faster than human judgment and unsustainable review capacity. The result is rising cognitive load, burnout, and declining long-term project health.

This talk explores how AI-generated code is changing the shape of open source contribution. Rather than positioning AI as the problem, we'll introduce maintainer-first principles for the AI era, including treating code review as a boundary rather than a rubber stamp, prioritizing intent and explainability over raw output, and using AI to amplify human discernment instead of replacing it.

Code Review Is Not a Bottleneck: Why Judgment Is the Product in Open Source

Code review is often a source of friction in the software delivery process. And with AI-assisted development dramatically increasing contribution, maintainers are now expected to assess changes that technically work, but are difficult to explain, contextualize, or trust in production environments.

This session frames code review as critical infrastructure for open source sustainability. We will explore how review functions as a boundary that protects maintainers, preserves architectural coherence, and enables healthy contributor communities. The talk will also examine how AI can support this work by amplifying human discernment, rather than attempting to replace it.

Attendees will leave with practical principles for designing review processes that prioritize comprehension, intent, and maintainability, especially in an era of AI-generated contributions.

TechBash 2025 Sessionize Event

November 2025 Mount Pocono, Pennsylvania, United States

The Commit Your Code Conference 2025! Sessionize Event

September 2025 Dallas, Texas, United States

AppSec Village - DEF CON 33 Sessionize Event

August 2025 Las Vegas, Nevada, United States

Women on Stage Global Conference

5 Security Best Practices for Production-Ready Containers

October 2023 Boston, Massachusetts, United States

Nnenna Ndukwe

Principal Developer Advocate at Qodo AI

Boston, Massachusetts, United States

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top