The Quality Dashboard
You’ve got thousands of automated tests running, plus multiple test reports, coverage reports, and logs – but you can’t see the forest for the trees. The real problem is that you don’t know: is it safe to release? With refined, specific metrics, you can build reports (or a dashboard) that tell you the real quality of the product – and then decide what to do about it.
This is a case study of building a quality dashboard, with metrics and reports that matter, for an application with hundreds of APIs and multiple front-ends. Some features were better covered than others, but what that coverage actually meant was vague. The dashboard we built collects information from multiple sources – test and coverage reports from Jenkins, custom logs mined for data, SonarQube, and more. We then added some “brains” to analyze the metrics and present them in terms of covered and uncovered test cases, test quality, and more, along with a confidence level calculated from those metrics. The effort involved developers, quality advisors, DevOps engineers, and others. This session tells the story of that project.
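The abstract doesn’t spell out how the confidence level is derived. As an illustration only, one simple approach is a weighted average of normalized metrics; every metric name and weight below is a hypothetical assumption, not the project’s actual formula:

```python
# Hypothetical sketch: combine normalized quality metrics into one
# confidence score, as a dashboard "brain" might. The metric names and
# weights are illustrative assumptions, not the speakers' real formula.

def confidence_level(metrics: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Weighted average of metrics, each expected in the range 0.0-1.0."""
    total_weight = sum(weights.values())
    score = sum(metrics[name] * w for name, w in weights.items())
    return score / total_weight

metrics = {
    "test_pass_rate": 0.97,   # e.g. from Jenkins test reports
    "line_coverage": 0.72,    # e.g. from coverage reports
    "case_coverage": 0.60,    # covered vs. uncovered test cases
    "static_quality": 0.85,   # e.g. from a SonarQube quality gate
}
weights = {
    "test_pass_rate": 0.4,
    "line_coverage": 0.2,
    "case_coverage": 0.3,
    "static_quality": 0.1,
}

print(f"confidence: {confidence_level(metrics, weights):.0%}")  # → confidence: 80%
```

A real dashboard would fetch these numbers from the build and analysis tools rather than hard-coding them, but the aggregation step can stay this simple.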
The dashboard helps managers see which features are ready and where the gaps are, and gives developers feedback on how well their tests are working for them. This session may inspire you to build quality reports that tell you how well your team is doing.