The New Review Burden: More Code Generated Than Written — Now What?

As AI-assisted coding and automated generation tools produce more lines of code than human developers could ever write alone, the responsibility for catching subtle bugs and maintaining quality has shifted heavily onto the review stage.

In this talk, we’ll walk through practical ways to adapt your review process for a world where more code is generated than written by hand. We’ll cover how to establish clear team guidelines and require structured artifacts with every pull request — not just a list of files changed, but well-organized summaries that categorize and group code modifications by purpose and impact.
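As one illustration, such a structured artifact could be modeled as a small data schema. The Kotlin sketch below is hypothetical: the type and field names (ChangeGroup, PrSummary, purpose, impact) are assumptions for illustration, not part of any standard or tool named in the talk.

```kotlin
// A minimal sketch of a structured change-summary artifact.
// All names here are illustrative assumptions, not a standard.
data class ChangeGroup(
    val purpose: String,          // e.g. "input validation", "retry logic"
    val impact: String,           // e.g. "public API", "internal only"
    val files: List<String>       // files touched for this purpose
)

data class PrSummary(
    val title: String,
    val groups: List<ChangeGroup> // modifications grouped by purpose and impact
)
```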

We’ll discuss how to classify PR types — from bug fixes to new features — and define levels of effort so reviewers can gauge how much time and scrutiny each change deserves. As tools generate more of our code, we also need AI and automated helpers to reason through those changes, highlight potential security concerns, and map out how modifications ripple through the codebase.
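To make the classification concrete, here is one possible scheme sketched in Kotlin; the enum values and the time-budget heuristic in suggestedReviewMinutes are illustrative assumptions a team would tune, not a prescribed standard.

```kotlin
// Hypothetical classification: PR types and effort levels a team might
// agree on so reviewers can budget time and scrutiny per change.
enum class PrType { BUG_FIX, NEW_FEATURE, REFACTOR, DEPENDENCY_BUMP, DOCS }

enum class EffortLevel { TRIVIAL, SMALL, MEDIUM, LARGE }

// Illustrative mapping from classification to a suggested review budget.
fun suggestedReviewMinutes(type: PrType, effort: EffortLevel): Int {
    val base = when (type) {
        PrType.NEW_FEATURE -> 30
        PrType.BUG_FIX -> 20
        PrType.REFACTOR -> 15
        PrType.DEPENDENCY_BUMP, PrType.DOCS -> 5
    }
    val multiplier = when (effort) {
        EffortLevel.TRIVIAL -> 1
        EffortLevel.SMALL -> 2
        EffortLevel.MEDIUM -> 4
        EffortLevel.LARGE -> 8
    }
    return base * multiplier
}
```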

Finally, we’ll look at the non-negotiables: setting up static analysis gates (linters, PMD, SpotBugs, Checkstyle), enforcing agreed-upon quality metrics, and ensuring unit tests and coverage thresholds are met before code ever hits the main branch.
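For teams on the JVM, these gates are typically wired directly into the build. The build.gradle.kts sketch below shows one way to combine Checkstyle, PMD, SpotBugs, and a JaCoCo coverage threshold; the tool and plugin versions, the 80% minimum, and the rule-file path are assumptions to adapt to your project.

```kotlin
// build.gradle.kts — one possible quality-gate setup (versions are examples).
plugins {
    java
    checkstyle                                   // style rules
    pmd                                          // source-level static analysis
    jacoco                                       // test coverage
    id("com.github.spotbugs") version "6.0.15"   // bytecode analysis (third-party plugin)
}

checkstyle {
    toolVersion = "10.12.4"
    maxWarnings = 0                      // fail the build on any Checkstyle warning
}

pmd {
    isConsoleOutput = true
    ruleSetFiles = files("config/pmd/ruleset.xml")  // assumed project-local ruleset
    ruleSets = emptyList()               // use only the file above, not the defaults
}

tasks.jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = "0.80".toBigDecimal()  // assumed 80% coverage floor
            }
        }
    }
}

// Make `gradle check` (and thus CI) fail when coverage drops below the floor.
tasks.check {
    dependsOn(tasks.jacocoTestCoverageVerification)
}
```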

You’ll leave with a blueprint to review smarter and faster — and catch the bugs that AI might sneak in.

David Parry

Unlocking Innovation Through Expertise: David Parry, Developer Advocate

Dallas, Texas, United States
