Session

4 Best Practices for Evaluating AI Code Quality

With AI driving unprecedented code velocity, human judgment is now the real constraint (and differentiator) in shipping trustworthy code. Explore how "code integrity" trumps speed when AI misses context, risk tradeoffs, and business invariants that explode in production.

Using a real AI pipeline (prompt → output → PR → deploy), we'll identify four irreplaceable judgment checkpoints that help scale dev teams without sacrificing quality. We'll also draw on real-world failures and engineering evaluation principles.

Attendees will leave with frameworks to audit their own workflows and push back on "ship faster, review later" hype.

Technical: Slides. First public delivery.
Target: Engineers, staff devs, and managers on AI-heavy or AI-curious teams.
Preferred: 30 min talk.

Nnenna Ndukwe

Principal Developer Advocate at Qodo AI

Boston, Massachusetts, United States
