Reflections on Our First Mob Code Review
Two heads are better than one. That is the phrase commonly used to justify all manner of pairing and collaboration efforts, including the deceptively contentious topic of pair and mob programming. Diverse perspectives come from individuals with different backgrounds, experiences, and stages of career development.
What about three heads? That was the question I was pondering ahead of a recent mob review. I’m certainly known for my slightly odd ideas, and this latest harebrained scheme was an attempt of mine to correct some worrying patterns of behaviour being exhibited in our regular UI code reviews.
None of us have three heads, so we rely on collaboration with others to obtain the best solution we can
I have since moved on to a new team, yet the story of our first mob review is one I am eager not to leave untold. Here I recount the tale of using a mob review to educate myself and other developers in review standards and best practices, and how such sessions can serve as a health check for team review behaviours and psychological safety.
Part of the Queue
Like many great tales, this story begins a little further back than the actual event. It starts with a queue. A queue of rather large UI code reviews that appeared suddenly a few days before the end of the sprint. Of course, one could immediately comment on sudden bursts of work, suggesting that tasks should tick down gradually, with developers raising small, regular requests to ensure continuous integration. I certainly raised that very point rather quickly. However, it was the next step which defines the turning point in our story.
In the interest of evaluating a raised concern that UI peer reviews were not approved as quickly as backend service reviews, I waited. I waited eagerly to see what feedback would be bestowed upon these solutions. I waited avidly for the green tick celebrating an approval and merging. Yet what actually happened was…
Queues may be a part of pandemic life, but a regular queue of large pull requests is a warning sign that small, frequent reviews are not being raised
Nothing. The backend service requests were reviewed in a perfect swarming formation. Interestingly the UI reviews were left outstanding until I weighed in with either commentary or an approval. If I wasn’t concerned before about the expertise hole I would leave behind upon moving to a new team, I certainly was now. I’m not proud to say, but I was also feeling the effects of review fatigue after spending the majority of three days reviewing gargantuan pull requests. It’s not just Zoom fatigue we need to worry about working remotely!
When two more requests appeared the next working day, I felt it was time to understand this review reluctance plaguing the UI codebase. My working theory was a lack of UI expertise, a challenge faced not only by this team but by the wider group for several years.
A Little Less Talk and a Lot More Action
Team dynamics were also an issue. A culture supporting pair programming still doesn’t really exist. This is partly a product of distributed teams and a lack of dedicated tooling for collaborative programming. The deliver-at-all-costs ethos pushed by the wider group doesn’t help either. However, it’s also down to the preference of some engineers to work alone and just get stuff done; not everyone is geared towards the pairing style.
Like pairing with humans, rubber duck (or cuddly elephant) programming is not for everyone
I knew that leaving this issue to persist would have a detrimental impact on team productivity and code quality, mainly because I was moving to a new team and could no longer provide dedicated review time as often. I had also realised that I had become the gatekeeper for changes, which is not healthy! Rather than continue having pair reviews with individuals, it was time to shake things up with a mob review.
One morning at our standup, the number of open PRs was raised as a blocker. This was my cue to strike. When we discussed blockers after updates were finished, I asked how comfortable others were reviewing UI requests. The mixture of silence and mumbled admissions of discomfort validated my assumption.
I invited everyone to stay on the Zoom meeting post-standup where we would conduct a mob review. For those unfamiliar with the term, this involves walking through an outstanding request and discussing the feedback in a group. This is far more collaborative than our normal async structure of raising a review request and pinging the team chat to notify others of the request.
After a few staggered drop-offs, we were left with myself and two others. Perfect: a trio review it is! We simply opened the PR within the portal and started talking. Bitbucket is our tool of choice, but many others are available.
I found the best thing was to try to limit my contributions to questions to ensure others could speak their mind, which is easier said than done
When you are the more experienced engineer, it can be easy to dominate the session by simply giving feedback and noting it down in the tool. What worked well for me in this situation was asking questions about specific segments of the change to hear what others thought. This gave everyone, irrespective of experience, a chance to share their views comfortably.
Another useful element was being mindful of giving balanced feedback. As I have covered previously in my piece on UI code review best practices, developers learn from both positive and negative feedback. Actively calling out the nice elements of the design and implementation in this review proved a great entry point for discussing alternative approaches and their trade-offs in a friendly forum.
On the Sunny Side of the Street
There were two key benefits that came out of this session. Firstly, discussion on the usage of null and undefined enabled us to agree a convention for our team, one that could be documented in a set of UI review guidelines to help communicate our standards to new joiners and encourage consistency across our many modules. Rather than it being mandated by me, formulating these conventions from the collective opinions of all establishes a shared ownership model that the team can continue after my departure.
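To make this concrete, here is a minimal sketch of the kind of null-versus-undefined convention a team might agree on in such a session. The interface, function, and field names below are hypothetical illustrations, not the team’s actual code: absent optional values are modelled with `undefined` (via optional properties), with `null` reserved for external APIs that explicitly require it.

```typescript
// Hypothetical convention sketch: prefer `undefined` for "no value",
// reserving `null` for external APIs that demand it.
interface UserProfile {
  name: string;
  avatarUrl?: string; // absent means "not set", i.e. undefined, not null
}

// Nullish coalescing (`??`) treats null and undefined uniformly,
// so call sites stay safe whichever convention an upstream API uses.
function avatarOrDefault(profile: UserProfile): string {
  return profile.avatarUrl ?? "default-avatar.png";
}

const withAvatar: UserProfile = { name: "Ada", avatarUrl: "ada.png" };
const withoutAvatar: UserProfile = { name: "Grace" };

console.log(avatarOrDefault(withAvatar));    // "ada.png"
console.log(avatarOrDefault(withoutAvatar)); // "default-avatar.png"
```

Whichever way a team decides, the value lies less in the specific rule than in everyone having agreed it together, so reviewers can point at a shared convention rather than personal preference.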
The second benefit luckily proved to be the intended myth-busting effect. By engaging in discussion rather than pushing my agenda, I managed to clear up an assumption held by a colleague new to the team that reviews had to go through me. I was happy to hear that this discussion gave them the confidence to hit the approve button and provide comments on future PRs. All experience levels can contribute meaningful feedback to a review, be it async, pair or mob style.
Learning experiences come from peers at all levels, not just the teacher at the front of the class
I’ve had the opportunity in my almost 10 years of experience to learn from not only those further on in their careers, but also those just starting out. While pair programming would be the preferred mechanism for sharing opinions and writing code in a more collaborative style, embarking on shared reviews and discussions is a great alternative where pairing is not possible.
Get Yourself Together
If I have inspired you to give a mob review a try, consider the following best practices which worked for me on this occasion:
- Make the review session optional so people can drop to get on with other things if needed.
- Consider your tooling carefully. I imagine using collaborative coding tools such as JetBrains Code With Me or Visual Studio Code Live Share would be great. However, if those are not available, a screen share and your existing review tool or IDE will also work well.
- Use questions to elicit ideas on what to change rather than imposing your own senior viewpoint. This will better support contribution from colleagues at all levels and facilitate discussion.
- Explain the reason for a comment. Explaining why you consider a particular approach to be good or bad helps educate the wider group on your thinking.
- Encourage questions. This is a collaborative effort, not a dictation!
- Document any agreed guideline conventions that come out of any mob reviews in a shared space. A good example is the TypeScript project guidelines, which live within the repository.
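As an illustration of that last point, a documented convention entry might look something like the sketch below. The file path and exact wording are hypothetical; the point is that the agreed rule lives alongside the code, where reviewers and new joiners can link to it.

```markdown
<!-- docs/ui-review-guidelines.md (hypothetical path) -->
## null vs undefined
- Model absent optional values with `undefined` (optional properties).
- Reserve `null` for external APIs that explicitly return or require it.
- Prefer `??` over `||` when supplying defaults, so `0` and `""` survive.

Agreed in mob review; raise a PR against this file to propose changes.
```

Keeping the guideline in the repository also means changes to it go through the same review process as code, reinforcing the shared ownership model.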
Thanks for reading! Do reach out and share your own experiences of async and mob reviews, or even pair programming. I would love to hear them!