Using Metrics to Track Code Review Performance

Title: Using Metrics to Track Code Review Performance
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Izquierdo-Cortazar, D., Sekitoleko, N., Gonzalez-Barahona, J. M., Kurth, L.
Secondary Title: Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering
Pagination: 214–223
Publisher: ACM
Place Published: New York, NY, USA
ISBN Number: 978-1-4503-4804-1
Keywords: code review, data mining, software development analytics
Abstract

During 2015, some members of the Xen Project Advisory Board became worried about the performance of their code review process. The Xen Project is a free, open source software project developing one of the most popular virtualization platforms in the industry. It uses a pre-commit peer review process similar to that of the Linux kernel, based on email messages. The board members had observed a large increase over time in the number of messages related to code review, and were worried that this could signal problems with their code review process.

To address these concerns, we designed and conducted, with their continuous feedback, a detailed analysis focused on finding these problems, if any. During the study, we dealt with the methodological problems of analyzing Linux-style code review, and with the deeper issue of finding metrics that could uncover the problems they were worried about. To provide a benchmark, we ran the same analysis on a comparable project with very similar code review practices: the Linux Netdev (Netdev) project. As a result, we learned that the Xen Project had indeed had some problems, but that at the time of the analysis they were already under control. We also found that the Xen and Netdev projects behaved quite differently with respect to code review performance, despite being so similar in many other respects.

In this paper we present the results of both analyses and propose a comprehensive, fully automated methodology for studying Linux-style code review. We also discuss the difficulty of obtaining meaningful metrics to track improvements or detect problems in this kind of code review.

URL: http://doi.acm.org/10.1145/3084226.3084247
DOI: 10.1145/3084226.3084247