After some time, I started getting the idea that the code review process that had evolved here may not be so good. Basically, team members use SD Pack and email. Everyone on the team gets the email and everyone sends the review back to the author. The author then pastes all the responses into another email, comments on each response, and then sends that email out. Suffice it to say: not a great use of time (the author has to do a lot of reading and matching email comments back up with the real code), and it encourages superficial comments (everyone spends the same amount of time skimming the same general areas).
I started brainstorming ways to improve code reviews. At first I was just thinking of having more sit-down paper code reviews. Fortunately, and very serendipitously, an EE training class on code reviews came to Shenzhen. It was taught by a principal developer on loan here in China from Redmond. He’s a solid guy, an awesome dev, and one of the best teachers I’ve seen here.
The training was exactly what we were looking for. The great thing was that it actually demonstrated an easy way to measure how useful your code reviews are. We brainstormed some more on this topic, I gave three presentations to the team on the various ideas, we held one quick sample code inspection, and we’ve derived a streamlined process that we’re trying out here at MACH.
My goal: Make the process easy and rewarding.
The training focused mostly on “code inspections”. We’ve taken some of the ideas and simplified them:
For example, code inspections define roles like Gatekeeper, Moderator, and Scribe. We’ve rolled many of these together. We still see value in a moderator (someone other than the author who runs the meeting), but that person can also act as the scribe (writing down bug statistics). The Gatekeeper role (the one who ensures the code is ready to be reviewed) can be performed by each author; with an optimized and more visible version of the style guide, there should be no style issues left by the time the code reaches review.
There are also some simple but effective statistics that we will collect:
1. Counting the total number of issues.
2. Of the total issues, which ones are major? Major issues should be labeled and counted. Right now, we’re defining “major” as any issue that causes a crash, leaks a resource, hangs the application, hurts performance, introduces serious threading problems, or creates maintainability and design problems. If we know our team’s bug fix rate (we are calculating this now), then we can calculate whether the code review was a good use of time. Also, as the percentage of major issues rises (without the overall quantity of major issues rising), we’re getting better at gatekeeping and style.
3. Of the total issues, which ones overlap? Meaning, how many people found the same issue? This helps us judge the effectiveness of the review. If similar issues, especially major issues, are found by multiple reviewers, then we have higher confidence that the review was effective at finding bugs. A rough sketch of how these numbers could be tallied follows below.
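To make those statistics concrete, here’s a minimal sketch (in Python, with made-up issue data, field names, and a hypothetical bug-fix-cost figure; none of this is our actual tooling) of how a scribe could tally the three numbers above and do a rough “was this worth the time” check against the team’s bug fix rate:

```python
# Hypothetical sketch of the review statistics we plan to collect.
# Issue records, reviewer names, and the bug-fix-cost figure are
# made-up examples, not real data from our team.
from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class Issue:
    issue_id: str   # stable id so the same bug reported twice can be matched
    reviewer: str
    major: bool     # crash, leak, hang, perf, threading, design/maintainability


def review_stats(issues, minutes_spent, bug_fix_cost_minutes):
    """Summarize one review: totals, major %, overlap, and a rough ROI check."""
    total = len(issues)
    majors = sum(1 for i in issues if i.major)

    # Overlap: how many distinct issues were reported by more than one reviewer.
    reports_per_issue = Counter(i.issue_id for i in issues)
    overlapping = sum(1 for count in reports_per_issue.values() if count > 1)

    # Rough "was this worth it" check: compare review time against what it would
    # have cost to find and fix these bugs later, using the team's historical
    # average cost per bug (bug_fix_cost_minutes).
    estimated_savings = len(reports_per_issue) * bug_fix_cost_minutes
    worthwhile = estimated_savings > minutes_spent

    return {
        "total_issues": total,
        "major_issues": majors,
        "major_pct": (majors / total * 100) if total else 0.0,
        "overlapping_issues": overlapping,
        "worth_the_time": worthwhile,
    }


issues = [
    Issue("leak-in-parser", "Ann", major=True),
    Issue("leak-in-parser", "Bo", major=True),   # same bug found twice -> overlap
    Issue("naming-nit", "Ann", major=False),
]
print(review_stats(issues, minutes_spent=240, bug_fix_cost_minutes=180))
```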
There are two other documents that fell out of this exercise:
1. Improved Style Guide: This idea came from the realization that most of our comments were high-level style issues. So, how do we fix that? The first thought was to have a more useful style guide. So, I took the initiative and cut 80% out of the style guide so that it only covers style (not coding best practices or project settings)! We want to refer to it frequently and make changes as needed. One of the main goals, though, is to keep it short.
2. Code Review Checklist: This idea came from the training class. The team brainstormed and created this list. We haven’t made much use of it yet, but the idea is that it reminds us of what’s important to review and, more importantly, if we need someone to focus deeply on a particular area, the checklist helps with that.
Of course, this process would not be required for all code reviews, but only for the biggest, most important ones. Walk-in peer reviews are still useful for quick reviews, SD Pack and email reviews are still useful for small sections of existing code that you’ve already touched, etc. Part of my job here will be to help team members understand just when each kind of review is appropriate.