Async Code Reviews Are Choking Your Company’s Throughput

“Never had a huge Pull Request that didn't look good to me.”

We've all been there. A PR is so big you don't even bother reviewing it properly. It's already too late to build the quality in, so you make a sad face, comment “Looks good to me”, and click approve.
That's still the case in lots of teams, but over time our industry has, on average, learned to apply Lean's small-batches idea to PRs. And that's a step in the right direction.
We do a bit of coding, raise a PR, and ask for feedback. People are more likely to engage with smaller PRs, it takes them less time to review, and they feel they can course-correct if something goes astray. PRs go out the door sooner, and we feel productive as a team.

But, here's the surprise.
What if I told you that teams doing small PRs (with async code reviews) actually have far lower throughput than teams doing big PRs? “Wait, what?!”
Yes.

I got this surprising systemic insight from analyzing more than 40,000 PRs, and in this talk I'll present the results of that study. At the bigger end of the PR-size spectrum we tend to lose quality, while at the smaller end we lose throughput. We're forced to make a trade-off.

But! There's a parallel universe of ways of working where you get to have your cake and eat it too, where you can have both throughput and quality. That universe is called co-creation patterns (Pair and Mob programming).

Join me on a journey where I'll show you the data invalidating the assumption that two or more people working on the same thing at the same time hurts a team's throughput, and why the opposite is actually the case.

Dragan Stepanović

Sr. Principal Engineer at Talabat/Delivery Hero

Berlin, Germany
