Session

Mythbusting code coverage (while writing better code)

Code coverage is one of the rare metrics that genuinely measures something useful about your code. Yet most of us get it completely wrong.

Everyone knows that aiming for 100% coverage is crazy, so teams settle for some arbitrary figure they deem good enough. Most teams aim for 80%, which is roughly the industry average.

Excuse me for spoiling the party, but why? Why is it okay to leave 20% of your code untested? How do you decide which 20% not to cover? More often than not, it's the most complicated 20%, and the most impactful 20%.

For the most part, the 80% figure is just a mantra. For some team, at some point in the past, it made a lot of sense. But then it became one of those "best practices" that everyone follows but nobody understands anymore. The reason it's okay not to test 20% of your code isn't that it matches your team's benchmark; it's that the untested 20% doesn't actually need testing.

This session shows you how to structure your code so that there is a clear split between the code that needs testing and the code you can safely omit from your unit tests. The end result is more robust, more testable code, and test coverage that is a meaningful figure rather than just a number your team settled for.
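The abstract doesn't spell out the exact technique, but a minimal sketch of the general idea might look like this: keep decision-making logic pure and easily unit-testable, and push side effects into thin glue code with no branching of its own. The names below (Order, calculate_discount, apply_discount, and the db/logger dependencies) are hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass


@dataclass
class Order:
    subtotal: float
    is_loyal_customer: bool


def calculate_discount(order: Order) -> float:
    """Pure business logic: no I/O, no dependencies, trivially unit-testable.
    This is the code that belongs inside your coverage target."""
    if order.subtotal >= 100 and order.is_loyal_customer:
        return order.subtotal * 0.10
    if order.subtotal >= 100:
        return order.subtotal * 0.05
    return 0.0


def apply_discount(order: Order, db, logger) -> None:
    """Thin glue: just wires dependencies together (db and logger are
    hypothetical collaborators). With no decision logic of its own, it can
    reasonably be left out of unit-test coverage and checked by an
    integration test instead."""
    discount = calculate_discount(order)
    db.save_discount(order, discount)
    logger.info("Applied discount of %.2f", discount)
```

With a split like this, the untested portion of the codebase is untested by design, not by accident, so the coverage number actually tells you something.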

Vjekoslav Babic

Solutions Architect | MVP

Zagreb, Croatia

