1 Minute Pull Request Review Hack

August 2, 2024
By WTP

So far we've covered "creating" PRs - but let's discuss the flip side of that: reviewing them.

So who should be reviewing them? The honest answer is "anyone who can understand tests & feature flags". You don't need to be a full-fledged engineer to review a PR - just someone who can verify the author did what they were supposed to do.

Since robots (through "checks") handle validating the code for syntax, security, style, patterns, buildability, deployability and such - humans don't need to.

Since humans don't need to validate the code - a much broader audience than was historically possible can review PRs. Take advantage of this!

PRs can now be looked at, reviewed, and merged with much greater velocity than before, when you had to wait on the handful of engineers knowledgeable in that specific area of the code to weigh in.

How often you review PRs is somewhat up to personal preference.

Reviewing Pull Requests used to take a lot more effort: context switching into a new feature branch, downloading any new dependencies, building the app, and testing the new functionality (which often required contextual setup, with specific data and scenarios in a database, just to reproduce).

With robots now reliably doing that style of work for us, PR reviews are really just quick look-overs to see that the boxes have been metaphorically ticked and things look generally "ok".

That being the case, everyone on a team should take a moment to set down their editors and review their peers' PRs the moment they get the notification(s) to do so. Yes, this can be distracting - but it frees up the author to iterate faster and get their work deployed sooner. We're after business Profit - not "don't bother me" engineering time windows.

This guideline changes a bit if you're in a senior position with the responsibility of mentoring others. PRs become a catalyst for education: teaching alternative ways of doing things and asking guiding questions to help grow the engineering org as a whole.

If that's you, then feel free to review PRs when convenient; context switching into deep-dive teaching can wear on a person when it's too spur-of-the-moment, back to back. Set aside dedicated time, or review PRs in between your own tasks.

I've covered this in bits n' pieces throughout the other sections but we can recap it here.

Pull Request reviewers are held responsible for:

  • confirming PR author wrapped changes in a feature flag
  • confirming PR author wrote automated tests that cover the code changes

They are NOT (unless acting as a mentor) responsible for:

  • making sure the code solves the intended problem
  • grading syntax, naming conventions, patterns, etc.
  • "did the app blow up" confirmations

The human in the PR review loop guards against the things too subjective for robots to determine, at a high level. Small things might fall through the cracks, but we aren't going for technical perfection here - we're going for Profit.

Now that we know what the human's job is in all of this, let's talk about the optimal way of doing it.

Once you receive the notification that you've been assigned as a reviewer on a Pull Request (if you haven't yet set up GitHub Scheduled Reminders, go do that - it's in the notification settings), and you aren't a mentor (or you are a mentor and just finished a task), click it.

Read the title and description to get a feel for what you're looking at. Is it a chore of some tweaks to stuff, a bug fix, a new feature? Figuring that out sets the tone but isn't all that important.

If there's a screenshot, does it look "ok"? We're not trying to analyze perfection here nor problem/solution fit... more like a sanity check of "yea there's something in the UI that's different now. Cool". Feel free to comment on it and ask "what changed?" if it's not obvious.

You should be able to see checks running, and completed ones passing or failing. It's still worth reviewing if checks fail, but know that you'll likely be back for another review once they're fixed (dismissing stale reviews is a good setting to enable for your repository).

Head on over to the "Files changed" tab and start reading the changes. You're looking for evidence of test automation, and a general overview of what areas of the codebase were edited.

Once you go through each file, I'd suggest ticking the "Viewed" box. The next time you look at this PR, any new changes will be highlighted, so you only have to look at the differences between reviews.

If there are no new tests written (or edits to existing ones, where applicable), then click "Review changes", tell the author as much, and fail it. Go back to whatever you were doing before and keep cranking out profitable code.

If you do see test tweaks - see if they line up with the code that was changed. We're going for ballpark here, not perfection. New tests should cover new code, and edits to existing code should come with edits to existing tests - simple enough.
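As a concrete (and entirely hypothetical) example of what "lining up" looks like: if the diff adds a pricing helper, you'd expect a matching test in the same PR. A minimal sketch, assuming Vitest - any test runner reads the same way:

```ts
// src/discount.ts - the new code in the PR
export function applyDiscount(subtotal: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new RangeError("percent out of range");
  return subtotal * (1 - percent / 100);
}

// src/discount.test.ts - the evidence the reviewer is scanning for
import { describe, it, expect } from "vitest";
import { applyDiscount } from "./discount";

describe("applyDiscount", () => {
  it("reduces the subtotal by the given percent", () => {
    expect(applyDiscount(200, 25)).toBe(150); // 200 * 0.75
  });

  it("rejects nonsense percentages", () => {
    expect(() => applyDiscount(100, 140)).toThrow(RangeError);
  });
});
```

New function, new tests exercising it - that's the ballpark match you're eyeballing.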

Before approving the PR - do the same level of investigation to make sure any code changes are covered by at least one feature flag. This can be discretionary if the changes just need to go live asap - but make that a talking point in your PR feedback when you fail it for not having flags applied.
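For clarity on what "covered by a feature flag" tends to look like in a diff, here's a hedged sketch. The flag client and flag name below are made up - substitute whatever your team actually uses (LaunchDarkly, Unleash, a home-grown lookup, etc.):

```ts
// Stand-in flag client; a real project would query its flag service instead.
const enabledFlags = new Set<string>(["new-tax-engine"]); // toggled at runtime in reality
const isEnabled = (flag: string): boolean => enabledFlags.has(flag);

// The shape a reviewer scans for: new behavior gated, old behavior untouched.
export function checkoutTotal(subtotal: number): number {
  if (isEnabled("new-tax-engine")) {
    return subtotal * 1.08; // the PR's new path, dark until the flag flips
  }
  return subtotal * 1.07; // existing behavior, unchanged for everyone else
}
```

If the new logic sits behind an `if (isEnabled(...))` style gate and the old path is intact, the box is ticked.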

That's it! ...unless you're supposed to be teaching people : ).

In that case review the code as one would expect, finding areas of improvement and conventions to convey, patterns to align, yada yada. If you're in a position to educate others I'll assume you know how to do so.

Beyond the aggregated "yay/nay" feedback on the pass/fail button itself, there are times you'll want to comment or ask questions directly within the code changes.

Click the line number relevant to your question (drag for multiple lines) and ask away. Common things to ask:

  • why this way and not that?
  • is there a better way?
  • what are your thoughts on this piece?
  • etc.

If you want to directly give code examples of what to do instead, click the "suggest changes" button and literally write the code you were expecting to see. Do so sparingly, and educate as to the reasoning for such changes; re-writing folks' work without context feels pretty bad.

Providing the exact code changes through GitHub's built-in suggestion feature lets the PR author 1-click patch them into their changes. Just note this could violate linting rules and cause a check to fail... can't have it all!
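For reference, a suggestion is just a review comment containing a fenced block marked `suggestion` - whatever's inside replaces the line(s) you commented on when the author clicks "Apply suggestion". The code inside this one is made up purely for illustration:

````markdown
```suggestion
const taxRate = lookupTaxRate(region); // look it up rather than hard-coding it
```
````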

Based on what you've seen by glancing at the code changes (and the feedback you've left, if in a position of seniority), it's time to approve or deny the PR.

Herein lies the distinction between normal and Profit engineering: don't deny a PR because it's not how you would have done it!

The variable names might be different, camel case instead of snake, the spacing wider than you'd want, files in different locations, etc. Move on.

If these things matter to you, then update your linting rules to flag it. All of these kinds of things can be coded in and automated away by the robots. Don't - as a human - flag humans for things robots can cover. If the robots didn't flag it when they should - fix your robots.
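If a style opinion really matters to your team, encode it once. A rough sketch, assuming ESLint 9's flat config via the typescript-eslint helper (the two rules shown are real core ESLint rules; adapt to whatever linter your stack runs):

```ts
// eslint.config.ts - style opinions live here, enforced by the robots in CI,
// so no human ever has to leave a naming-convention comment again.
import tseslint from "typescript-eslint";

export default tseslint.config({
  files: ["**/*.ts"],
  rules: {
    camelcase: "error", // camelCase, not snake_case - the robot's call now
    "max-lines-per-function": ["warn", 80], // nudge, don't block
  },
});
```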

Getting code (that doesn't break) into production is priority #1; niceties come second. Plenty of companies fail with perfect code.

If the author covered their changes with tests, and wrapped it up in a feature flag bow, approve it. Say "nice work!" while you're at it! Build that culture of positivity one PR at a time.
