Nnenna Ndukwe
Principal Developer Advocate at Qodo AI
Boston, Massachusetts, United States
Actions
Nnenna Ndukwe is a Principal Developer Advocate and Software Engineer, passionate about AI. With 9+ years in industry, she's a global AI community architect championing engineers to build in emerging tech. She studied Computer Science at Boston University and is a proud member of Women Defining AI. Nnenna believes that AI should augment: enabling creativity, accelerating learning, and preserving the intuition and humanity of its users.
Links
Area of Expertise
Topics
4 Best Practices for Evaluating AI Code Quality
Now that AI is handling unprecedented code velocity, human judgment is now the real constraint (and differentiator) in shipping trustworthy code. Explore how "code integrity" trumps speed when AI misses context, risk tradeoffs, and business invariants that explode in production.
Using a real AI pipeline (prompt → output → PR → deploy), we'll identify four irreplaceable judgment checkpoints that help scale dev teams without sacrificing quality. We'll also draw on real-world failures and engineering evaluation principles.
Attendees will leave with frameworks to audit their own workflows and push back on "ship faster, review later" hype.
Technical: Slides. First public delivery.
Target: Engineerings, staff devs, and managers in AI-heavy or AI-curious teams.
Preferred: 30 min talk.
From Pilot Theatre to Production: Rolling Out AI Coding Tools Without Breaking Your Org
Many organizations are stuck in delivering impressive AI pilot demos in a few teams with no safe path to scale, and growing shadow AI. Others rush to broad rollout and wake up months later with fragmented standards and more incidents more frequently.
In this session, we'll walk through a phased rollout framework grounded in real failure modes and hard exit criteria. We'll cover how to stage adoption by repository criticality across codebases, "context eligibility" before higher-tier usage, and use champions and feedback loops to ensure compliant pathways are easier and more optimal than shadow AI workarounds. You'll walk away with checklists, stage gates, and anti-patterns you can apply to your own AI rollout plans.
Beyond Lines of Code: Measuring AI's Real Impact on Engineering Quality
AI adoption vanity metrics are pervasive: more code generated, more PRs, faster cycle time. But if those gains come with higher defect escape, rollback rates, and incident load, are they truly gains?
In this session, we'll design an AI ROI metric tree for engineering leaders, linking leading indicators (adoption coverage, review load, flaky test rates, context drift signals) to the outcomes your business cares about: stable throughput, lower incident rates, and reduced remediation tax.
We'll also cover "integrity constraints" that prevent metric gaming, like requiring that cycle-time improvements not coincide with rising production defects. You'll leave with a measurement blueprint you can hand to your ops or analytics team to instrument AI-assisted engineering without lying to yourself or your board.
Stop AI-Driven Quality Collapse: A 5-Layer Operating Model for Engineering Leaders
As AI coding tools spread through your engineering org, your dashboards show a "productivity boom" right up until incidents spike, rollbacks climb, and architectural consistency quietly falls apart. This is engineering quality collapse disguised as productivity, and it's becoming the default failure mode of AI-assisted development at scale.
This strategy and methodology session introduces a vendor-neutral, 5-layer operating model (Access & Governance, Context Architecture, Quality Gates, Measurement & ROI, and Rollout Strategy) that treats AI-assisted engineering as a true asset. You'll hear about defining decision rights, risk tiers, and non-negotiable controls so teams can move faster and preserve correctness, maintainability, and operational trust.
You'll walk away with a concrete model you can adapt to your org before "AI productivity" shows up as instability in production.
How to Design Code Quality Gates in the AI Era
Sometimes, the small and seemingly insignificant code changes can have the greatest critical path impact on software in production.
This talk provides a systematic approach to designing quality gates that correlate with production stability. We'll break down the four categories worth catching early: functional correctness signals, integration readiness, performance guardrails, and architectural consistency.
We'll build an adaptive enforcement model that distinguishes mission-critical paths from velocity-focused code. You'll see risk-based severity in action, learning loops that strengthen checks, and methods that prove your gates actually work.
You'll leave with a framework to audit their current code quality gates and patterns to strengthen their understanding of impact analysis when shipping software.
Leading in the AI Development Era: Empowering Engineers as Innovators
As AI transforms the way software gets built, leadership can evolve from managing output to enabling creativity. This session explores how engineering leaders can foster cultures where developers thrive as innovators, partnering with AI systems instead of competing with them. Attendees will learn how to guide teams beyond automation toward augmentation, leveraging agentic coding tools to amplify human problem-solving and accelerate delivery. Drawing from real-world experiences in code quality automation, developer relations, and AI strategy, this talk reimagines software leadership as creative enablement. Attendees will leave with actionable strategies to elevate their teams’ skills, strengthen technical culture, and future-proof their organizations in the age of agentic software.
Choose Your Fighter: A Pragmatic Guide to AI Mechanisms vs Automated Ops
Engineers get bombarded with industry noise about integrating AI into all software/platforms. But not every tool needs to leverage AI, so how do we think pragmatically about solid use cases for them? This talk emphasizes clear distinctions between automation and AI mechanisms to encourage implementations that truly solve problems without over-engineering. We'll explore mechanisms of both paradigms, dissect their strengths and limitations, and choose the right tool for your use case.
This talk is geared for software architects, technical leads, and senior engineers who make strategic technology decisions. This presentation is also great for general Software Engineers, DevOps practitioners, and any AI/ML enthusiasts who are bombarded with industry noise about integrating AI into all software/platforms. Sometimes, AI isn't the solution. Automated workflows are. Not every tool needs to leverage AI, but how do we pause to think more strategically and pragmatically about solid use cases for AI? This talk can emphasize clear delineations between automation and AI mechanisms to encourage implementations of both for specific use cases in order to truly solve problems without over-engineering solutions.
Basic understanding of automation tools and DevOps practices, familiarity with AI/ML concepts (no deep expertise required), and experience with distributed systems and application architecture is helpful in following along with this talk.
Key Takeaways:
- Framework for evaluating automation vs. AI solutions for specific use cases
- Understanding of integration patterns for hybrid solutions
- Risk assessment strategies for each approach
- Best practices for implementation and maintenance
From DevOps to MLOps: Bridging the Gap Between Software Engineering and Machine Learning
Both DevOps and MLOps aim to streamline the development and deployment lifecycle through automation, CI/CD, and close collaboration between teams. But there are key differences in the purposes and applications of DevOps and MLOps. This talk demonstrates how your existing DevOps expertise creates a strong foundation for understanding and implementing MLOps practices. We'll explore how familiar concepts like CI/CD, monitoring, and automated testing map to ML workflows, while highlighting the key differences that make MLOps unique.
Through practical examples, we'll show how software engineers can apply their current skills to ML systems by extending DevOps practices to handle model artifacts, training pipelines, and feature engineering. You'll learn where your existing tools and practices fit in, what new tools you'll need, and how to identify when MLOps practices are necessary for your projects.
Attendees should have experience with DevOps practices and general software engineering principles. No ML or data science experience is required - we'll focus on how your existing knowledge applies to ML systems.
Prerequisites: Familiarity with CI/CD, infrastructure as code, monitoring, and automated testing. Experience with containerization (e.g., Docker) and cloud platforms is helpful but not required.
Building with Confidence: Mastering Feature Flags in React Applications
Feature flags have become an essential tool in modern software development, enabling teams to deploy code safely, conduct A/B tests, and manage feature releases with precision. This session will take you on a journey from understanding basic feature flag implementation in React to advanced patterns used by high-performing teams. Through live coding demonstrations and real-world examples, you'll learn how to leverage feature flags to deploy confidently, experiment rapidly, and deliver value to your users continuously.
This talk is ideal for intermediate to advanced React developers, tech leads, and architects who want to implement or improve feature flag usage in their applications. Basic knowledge of React and modern JavaScript is required. Attendees will leave with a solid understanding of feature flag architecture in React applications, code templates and patterns they can implement immediately, best practices for feature flag management in production, strategies for scaling feature flags across large applications, and tools and resources for additional learning.
Red Teaming AI: How to Stress-Test LLM-Integrated Apps Like an Attacker
It’s not enough to ask if your LLM app is working in production. You need to understand how it fails in a battle-tested environment. In this talk, we’ll dive into red teaming for Gen AI systems: adversarial prompts, model behavior probing, jailbreaks, and novel evasion strategies that mimic real-world threat actors. You’ll learn how to build an AI-specific adversarial testing playbook, simulate misuse scenarios, and embed red teaming into your SDLC. LLMs are unpredictable, but they can be systematically evaluated. We'll explore how to make AI apps testable, repeatable, and secure by design.
Target audience:
- Application security engineers and red teamers
- AI/ML engineers integrating LLMs into apps
- DevSecOps teams building Gen AI pipelines
- Security architects looking to operationalize AI security
- Developers and technical product leads responsible for AI features
Separation of Agentic Concerns: Why One AI Can't Rule Your Codebase
The dream of a single all-knowing AI running your entire SDLC is both admirable and widespread. But in reality, specialized agents with distinct responsibilities outperform generalist systems.
This talk makes the case for using multiple agents in the SDLC: planning agents that think like principal architects, testing agents that prepare code with adversarial precision, and review agents that enforce quality like seasoned QA engineers.
We’ll explore the technical foundations that make this possible, including deep codebase context engineering, real-world benchmarks, and developer workflow patterns that ensure AI-assisted development scales with both velocity and quality.
Attendees will leave with practical knowledge to leverage agentic AI throughout the development lifecycle that deliver safer, smarter, and more reliable software.
From Spec to Prod: Continuous Code Quality in AI-Native Workflows
AI is accelerating how code gets written, but it’s also widening the gap between specs and production-ready implementation. The result is both velocity and hidden risks. This talk reframes code quality as a living lifecycle instead of a static checkpoint.
We’ll explore how a “code review lifecycle” approach can transform pull requests into continuous feedback loops that evolve with your team’s standards, architecture, and best practices. You’ll learn how to close the “last mile” gap in AI-generated code, embed quality checks across the SDLC, and turn review findings into one-click fixes.
By the end, you’ll have a practical playbook for making code review the backbone of AI-native development to make sure speed and quality move forward together.
From Code to Confidence: Building AI-Driven Quality Gates into Your Developer Platform
Internal Developer Platforms (IDPs) help teams ship faster, but manual code reviews quickly become the bottleneck as organizations scale. Traditional code analysis tools miss contextual standards and impede velocity, leaving platform engineers stuck between speed and quality.
This session shows how AI-powered code review agents deliver context-aware, automated quality gates in golden path workflows, reducing bottlenecks and improving developer experience. Through real world examples and actionable playbooks, you’ll learn how platform teams embed intelligent review into CI/CD, measure impact using DORA metrics, and keep standards high as velocity grows.
Takeaways:
Embed context-aware quality gates in platform workflows
Automate code review for faster, safer delivery
Measure impact on velocity and developer experience
For: Platform engineering teams, developer experience leaders, cloud native architects.
Code Review Hell? 5 Rules to Fix It in the AI-Era
AI is accelerating code velocity, but code review remains the last human safety gate before production. In this talk, discover five battle-tested rules to make code reviews effective, not noisy. We'll focus on code intent and behavior over lines changed, treat AI suggestions like junior contributions, avoid scope creep around the PR, reduce false alarms, and always tie feedback to blast radius.
Through live PR diffs and comment threads, you'll see these rules in action by blending linters, static analysis, and AI tools. You'll walk away with a checklist you can use and techniques to calibrate tools for your team's context in order to improve your processes for high quality software development.
Technical: Laptop with GitHub access for live PR demos (browser-based, no installs). First public delivery.
Target: Mid-Sr engineers, tech leads adopting AI tools.
Preferred: 30-45 min talk including Q&A.
Don't Ditch Your Linters: Exploring a Modern Code Review Stack
Don't ditch your linters. Stack them with AI as your code quality gates. This talk maps the toolchain: what linters nail down, what static analysis owns, and what AI unlocks with humans maintaining the final say with your code.
We will build a PR pipeline showing integration pitfalls and wins, referencing multiple tools' pros and cons. We'll walk through real Python diffs exposing bad vs balanced code quality outcomes, and tips to leverage in your developer workflow.
Technical: Laptop for live GitHub Actions workflow chaining (prepped repo, browser/GitHub). First public delivery.
Target: Platform teams, tech leads, engineers.
Preferred: 30-45 min talk, including Q&A.
AI Code Flood: Protecting Open Source Maintainers and Code Quality at Scale
AI has dramatically increased the volume of contributions to open source projects. While this lowers barriers to entry, it has also created a new strain on maintainers: pull requests scaling faster than human judgment and unsustainable review capacity. The result is rising cognitive load, burnout, and declining long-term project health.
This talk explores how AI-generated code is changing the shape of open source contribution. Rather than positioning AI as the problem, we'll introduce maintainer-first principles for the AI era, including treating code review as a boundary rather than a rubber stamp, prioritizing intent and explainability over raw output, and using AI to amplify human discernment instead of replacing it.
Code Review Is Not a Bottleneck: Why Judgment Is the Product in Open Source
Code review is often a source of friction in the software delivery process. And with AI-assisted development dramatically increasing contribution, maintainers are now expected to assess changes that technically work, but are difficult to explain, contextualize, or trust in production environments.
This session frames code review as critical infrastructure for open source sustainability. We will explore how review functions as a boundary that protects maintainers, preserves architectural coherence, and enables healthy contributor communities. The talk will also examine how AI can support this work by amplifying human discernment, rather than attempting to replace it.
Attendees will leave with practical principles for designing review processes that prioritize comprehension, intent, and maintainability, especially in an era of AI-generated contributions.
11 Principles for Evaluating AI Dev Tools
Benchmarks measure narrow capabilities. Demos show best-case scenarios. Neither tells you whether AI-generated code will survive production or whether that shiny new tool deserves a place in your stack.
This talk presents a unified framework of 11 principles for evaluating AI-generated code and the tools that manage it. Because code quality and tool quality are inseparable: bad tools generate bad code, and bad code evaluation processes never catch it.
We'll reframe AI use with responsibility boundaries where the core questions shift from "is it fast?" to "can this be understood under pressure, safely changed, and defended to a stakeholder?"
Through real-world patterns from teams adopting AI across their SDLC, we'll apply these principles to distinguish tools that surface risk from tools that hide it.
Attendees will leave with a practical rubric to decide which AI tools to trust, which to constrain, and how to keep human judgment at the center of fast-moving, AI-augmented engineering.
5 Pillars for Mastering Context Engineering
Many teams treat context as prompt stuffing. Add more docs, longer system prompts, hope the model figures it out. This breaks the moment you try to build agentic systems, multi-step workflows, or a consistent DevEx.
This talk breaks down context engineering, its evolution, layers, and impact. We'll walk through a “context stack” and AI architectural components that answer the “what”, “why”, and “how” that your dev tools will need to optimize your team.
Whether you're building or buying AI dev tools, these pillars will help you gauge AI engineering and products with keen eyes.
Attendees will leave with a framework to audit their current context approach and principles for leveraging context systems that scale beyond ad hoc RAG.
Design Systems for AI Code: Preserving Engineering Judgment with AI
AI is a force multiplier that turns weak standards into architectural chaos. As code review becomes the ultimate bottleneck, engineering teams must bridge the gap between human intuition and machine output. This talk introduces a holistic framework for designing systems around AI coding. We explore how to codify architectural intent, from module boundaries to failure awareness, into machine-readable guardrails. Learn how to leverage context engineering to ensure your AI code tools respect your system’s design, preserving long-term maintainability without sacrificing the speed of the AI era.
1. Attendees will learn how to apply design, maintainability, quality, and chaos prevention to build a holistic verification layer for AI-generated output.
2. Learn how to build infrastructure that automatically feeds architecture diagrams, API contracts, and memory layers into AI agents to ensure alignment with senior intuition
3. Gain a practical methodology for encoding architectural reasoning into durable systems, allowing senior developers to shift from manual "line-coders" to "Policy Guardians" of the codebase.
TechBash 2025 Sessionize Event
The Commit Your Code Conference 2025! Sessionize Event
AppSec Village - DEF CON 33 Sessionize Event
Women on Stage Global Conference
5 Security Best Practices for Production-Ready Containers
Nnenna Ndukwe
Principal Developer Advocate at Qodo AI
Boston, Massachusetts, United States
Links
Actions
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top