
Nnenna Ndukwe
Principal Developer Advocate at Qodo AI
Boston, Massachusetts, United States
Actions
Nnenna Ndukwe is a Principal Developer Advocate and Software Engineer, enthusiastic about AI. With 8+ experience spanning across startups, media tech, cybersecurity, and AI, she's an active global AI/ML community architect championing engineers to build in emerging tech. She studied Computer Science at Boston University and is a proud member of Women Defining AI, Women Applying AI, and Reg.exe. Nnenna believes that AI should augment: enabling creativity, accelerating learning, and preserving the intuition and humanity of its users. She's an international speaker and serves communities through content creation, open-source contributions, and philanthropy.
Links
Area of Expertise
Topics
Choose Your Fighter: A Pragmatic Guide to AI Mechanisms vs Automated Ops
Engineers get bombarded with industry noise about integrating AI into all software/platforms. But not every tool needs to leverage AI, so how do we think pragmatically about solid use cases for them? This talk emphasizes clear distinctions between automation and AI mechanisms to encourage implementations that truly solve problems without over-engineering. We'll explore mechanisms of both paradigms, dissect their strengths and limitations, and choose the right tool for your use case.
This talk is geared for software architects, technical leads, and senior engineers who make strategic technology decisions. This presentation is also great for general Software Engineers, DevOps practitioners, and any AI/ML enthusiasts who are bombarded with industry noise about integrating AI into all software/platforms. Sometimes, AI isn't the solution. Automated workflows are. Not every tool needs to leverage AI, but how do we pause to think more strategically and pragmatically about solid use cases for AI? This talk can emphasize clear delineations between automation and AI mechanisms to encourage implementations of both for specific use cases in order to truly solve problems without over-engineering solutions.
Basic understanding of automation tools and DevOps practices, familiarity with AI/ML concepts (no deep expertise required), and experience with distributed systems and application architecture is helpful in following along with this talk.
Key Takeaways:
- Framework for evaluating automation vs. AI solutions for specific use cases
- Understanding of integration patterns for hybrid solutions
- Risk assessment strategies for each approach
- Best practices for implementation and maintenance
Spec-Driven Code Quality in Action with Tessl MCP and Qodo CLI
Specs are the backbone of reliable software, but AI tools can lose them once you're in the weeds of development phases. In this demo, I’ll show how Tessl’s spec-driven MCP server and Qodo CLI combine to keep specs alive throughout the development lifecycle. You’ll see how spec files drive AI-generated code, how review agents validate against organizational standards, and how feedback loops push insights back into specs. You'll see how continuous, spec-aligned code quality can scales with AI-assisted development.
From Code to Confidence: Building AI-Driven Quality Gates into Your Developer Platform
Internal Developer Platforms (IDPs) help teams ship faster, but manual code reviews quickly become the bottleneck as organizations scale. Traditional code analysis tools miss contextual standards and impede velocity, leaving platform engineers stuck between speed and quality.
This session shows how AI-powered code review agents deliver context-aware, automated quality gates in golden path workflows, reducing bottlenecks and improving developer experience. Through real world examples and actionable playbooks, you’ll learn how platform teams embed intelligent review into CI/CD, measure impact using DORA metrics, and keep standards high as velocity grows.
Takeaways:
Embed context-aware quality gates in platform workflows
Automate code review for faster, safer delivery
Measure impact on velocity and developer experience
For: Platform engineering teams, developer experience leaders, cloud native architects.
Bridging the AI Last Mile: Enterprise-Ready AI Code at Scale
AI coding agents can now generate 80-90% of new code in enterprises, but the biggest barrier to reliable delivery is verifying that AI-written code meets critical business and compliance standards before merging into production.
In this session, Dedy Kredo and Nnenna Ndukwe reveal enterprise-tested best practices for closing the "AI last mile" gap, focusing on context-aware code review agents and automated compliance gates.
They will highlight a case study from a Fortune 10 retailer that saved 450,000 developer hours in six months and empowered 12,000 monthly users by deploying AI-driven PR review across massive, distributed teams. Attendees will learn concrete strategies for operationalizing AI at the pull request stage, architecting cross-repo review, and delivering production-ready, enterprise-compliant code with new confidence and speed.
Agentic Code Quality: How Platform Teams Can Scale AI-Driven Development
The rise of AI-generated code and autonomous development agents introduces incredible speed, while simultaneously introducing risk into the cloud native software lifecycle. Platform teams are faced with supporting developer velocity and governing the quality, safety, and compliance of code changes created by and for AI.
This talk explores how platform engineering teams can harness agentic AI to embed continuous, context-aware code quality directly into infrastructure and application pipelines. Based on real-world implementations, platform engineers and AI/ML practitioners will learn how “agentic” workflows enable scalable quality gates that adapt to evolving codebases and organizational standards, assisting human reviews and ensuring trust in AI-driven development.
From DevOps to MLOps: Bridging the Gap Between Software Engineering and Machine Learning
Both DevOps and MLOps aim to streamline the development and deployment lifecycle through automation, CI/CD, and close collaboration between teams. But there are key differences in the purposes and applications of DevOps and MLOps. This talk demonstrates how your existing DevOps expertise creates a strong foundation for understanding and implementing MLOps practices. We'll explore how familiar concepts like CI/CD, monitoring, and automated testing map to ML workflows, while highlighting the key differences that make MLOps unique.
Through practical examples, we'll show how software engineers can apply their current skills to ML systems by extending DevOps practices to handle model artifacts, training pipelines, and feature engineering. You'll learn where your existing tools and practices fit in, what new tools you'll need, and how to identify when MLOps practices are necessary for your projects.
Attendees should have experience with DevOps practices and general software engineering principles. No ML or data science experience is required - we'll focus on how your existing knowledge applies to ML systems.
Prerequisites: Familiarity with CI/CD, infrastructure as code, monitoring, and automated testing. Experience with containerization (e.g., Docker) and cloud platforms is helpful but not required.
Building with Confidence: Mastering Feature Flags in React Applications
Feature flags have become an essential tool in modern software development, enabling teams to deploy code safely, conduct A/B tests, and manage feature releases with precision. This session will take you on a journey from understanding basic feature flag implementation in React to advanced patterns used by high-performing teams. Through live coding demonstrations and real-world examples, you'll learn how to leverage feature flags to deploy confidently, experiment rapidly, and deliver value to your users continuously.
This talk is ideal for intermediate to advanced React developers, tech leads, and architects who want to implement or improve feature flag usage in their applications. Basic knowledge of React and modern JavaScript is required. Attendees will leave with a solid understanding of feature flag architecture in React applications, code templates and patterns they can implement immediately, best practices for feature flag management in production, strategies for scaling feature flags across large applications, and tools and resources for additional learning.
Red Teaming AI: How to Stress-Test LLM-Integrated Apps Like an Attacker
It’s not enough to ask if your LLM app is working in production. You need to understand how it fails in a battle-tested environment. In this talk, we’ll dive into red teaming for Gen AI systems: adversarial prompts, model behavior probing, jailbreaks, and novel evasion strategies that mimic real-world threat actors. You’ll learn how to build an AI-specific adversarial testing playbook, simulate misuse scenarios, and embed red teaming into your SDLC. LLMs are unpredictable, but they can be systematically evaluated. We'll explore how to make AI apps testable, repeatable, and secure by design.
Target audience:
- Application security engineers and red teamers
- AI/ML engineers integrating LLMs into apps
- DevSecOps teams building Gen AI pipelines
- Security architects looking to operationalize AI security
- Developers and technical product leads responsible for AI features
Separation of Agentic Concerns: Why One AI Can't Rule Your Codebase
The dream of a single all-knowing AI running your entire SDLC is both admirable and widespread. But in reality, specialized agents with distinct responsibilities outperform generalist systems.
This talk makes the case for using multiple agents in the SDLC: planning agents that think like principal architects, testing agents that prepare code with adversarial precision, and review agents that enforce quality like seasoned QA engineers.
We’ll explore the technical foundations that make this possible, including deep codebase context engineering for specialization, essential MCPs and tool delegation, and developer workflow patterns that ensure AI-assisted development scales with both velocity and quality. Attendees will leave with practical knowledge to leverage agentic AI throughout the development lifecycle that deliver safer, smarter, and more reliable software.
From Spec to Prod: Continuous Code Quality in AI-Native Workflows
AI is accelerating how code gets written, but it’s also widening the gap between specs and production-ready implementation. The result is both velocity and hidden risks. This talk reframes code quality as a living lifecycle instead of a static checkpoint.
We’ll explore how a “code review lifecycle” approach can transform pull requests into continuous feedback loops that evolve with your team’s standards, architecture, and best practices. You’ll learn how to close the “last mile” gap in AI-generated code, embed quality checks across the SDLC, and turn review findings into one-click fixes.
By the end, you’ll have a practical playbook for making code review the backbone of AI-native development to make sure speed and quality move forward together.
Beyond Stateless Agents: Engineering Persistent Context for AI-Native Teams
Most AI agents today are fast, but forgetful. They generate code in the moment, then lose the architectural context, design trade-offs, and standards that make software sustainable. The next leap in agentic AI is memory.
This talk introduces the concept of the “second brain”: AI systems that capture, preserve, and apply organizational knowledge across the SDLC. We’ll show how moving beyond Gen 3 agentic workflows to Gen 4 memory-driven systems enables persistent PR memory, architectural decision tracking, and real-time rule enforcement.
We'll explore memory architecture, knowledge structuring, and governance mechanisms to see how AI is evolving from a temporary productivity boost into an orchestrator of organizational intelligence. This concept is gearing up to accelerate new hire onboarding, improve code quality, and embed institutional knowledge into every line of code.
Democratizing AI: Building Resilient and Secure Open-Source LLMs for Digital Sovereignty
This presentation explores the critical role of open-source LLMs in achieving digital sovereignty. We will delve into the technical challenges and opportunities in building secure and resilient open-source LLMs, focusing on practical strategies for data privacy, model security, and community governance. We will examine case studies showcasing successful community-driven initiatives and discuss best practices for fostering collaboration and knowledge sharing within the open-source ecosystem.
TechBash 2025 Sessionize Event Upcoming
The Commit Your Code Conference 2025! Sessionize Event
AppSec Village - DEF CON 33 Sessionize Event
Women on Stage Global Conference
5 Security Best Practices for Production-Ready Containers

Nnenna Ndukwe
Principal Developer Advocate at Qodo AI
Boston, Massachusetts, United States
Links
Actions
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top