Now that you've created a PR, given it a decent context-providing description, and have some checks running/passing, you can start involving the humans!
Let's get our bearings by answering two questions: what role do these humans play, and how can their time be used most effectively? That tells us what the robots should handle versus the engineers, and the optimal lane for each to stay in.
Ideally you have at least one other team member who can understand what you were trying to do and validate that it was accomplished... but this isn't strictly required.
Welcome to the most contentious part of this article : ).
With profit in mind, human engineer time shouldn't be spent pulling down someone else's code, running it on their own machine (or even viewing an external ephemeral environment for it), and checking whether it works or meets expectations.
This is the author's job! Ownership must be practiced here, and whoever created the code changes in the PR should be solely responsible for ensuring the code:
- solves the problem in the best way possible
- doesn't introduce security issues
- matches product requirements
- builds correctly
- is styled appropriately
- follows agreed-upon patterns & practices
- is covered by automated tests that hold the changes to the above standards
- is encapsulated in feature flags so existing looks, behaviors, and functions don't change
It's the reviewer's job to:
- confirm the author wrote automated tests that cover the code changes
- confirm the author wrapped changes in feature flag(s)
- educate on better ways of doing things, optionally
- ... and that's it!
Notice how much more there is on the author's side of things. They wrote the code, so they should own its quality, its accuracy, and its delivery to the end destination (usually production).
Notice also how we included automated tests and feature flags in this list. These are the two things that modernize the whole coding, PR, and CI/CD development cycle.
When you wrap code changes behind feature flags (that default to "off"), you're able to deploy the code to production without the changes being "applied". When stakeholders and/or "the powers that be" decide to turn on a feature/change, this is called a "release" and is simply a flag switched to "on".
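To make this concrete, here's a minimal sketch of that "deployed but not released" pattern. The flag name, the discount logic, and the environment-variable lookup are all made up for illustration - real teams typically resolve flags through a service like LaunchDarkly or Unleash rather than environment variables:

```python
import os

def flag_enabled(name: str) -> bool:
    """Hypothetical flag lookup: deploys default to 'off'; flipping a flag
    to 'on' is the 'release', no new deployment required."""
    return os.environ.get(f"FLAG_{name.upper()}", "off") == "on"

def checkout_total(cart: list[float]) -> float:
    if flag_enabled("new_discount_logic"):
        # New behavior ships dark and stays dormant until the flag flips.
        return sum(cart) * 0.9
    # Existing behavior is untouched while the flag is off.
    return sum(cart)
```

With the flag off (the default), `checkout_total([10.0, 20.0])` behaves exactly as before the change; only flipping `FLAG_NEW_DISCOUNT_LOGIC` to `"on"` exposes the new path.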
The automated unit and/or e2e tests guarantee the code changes didn't break existing things, and they add to the validations the robots will use to check future PRs once your code is fully released & utilized.
Side note on that: it's helpful to run an e2e test suite against the code twice - once with the flag off, and once with the flag on - as separate checks. That way you know whether your changes broke existing things, and whether your new code correctly validates expectations once released.
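The flag-off/flag-on dual run above could be sketched like this - in real CI these would be two separate jobs (e.g. a matrix build) so they report as distinct checks, but a loop shows the idea. The flag name and suite command are hypothetical:

```python
import os
import subprocess

def run_e2e_for_flag_states(suite_cmd: list[str]) -> dict[str, int]:
    """Run the same e2e suite twice: flag off guards existing behavior,
    flag on validates the new behavior. Returns exit code per state."""
    results = {}
    for state in ("off", "on"):
        # Pass the flag state to the suite via the environment.
        env = {**os.environ, "FLAG_NEW_DISCOUNT_LOGIC": state}
        results[state] = subprocess.run(suite_cmd, env=env).returncode
    return results
```

A nonzero exit code in the `"off"` run means your changes broke existing things; one in the `"on"` run means the new code doesn't meet expectations once released.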
Said another way: it's the PR reviewers job to ensure the flags & tests in the PR cover the changes. That's it.
The concept of a PR "assignee" is kinda loose and team-dependent.
It can mean the person who's ultimately going to click the "merge" button, or the person responsible for that area of tech, etc. It could be the QA team lead if that's your thing, or even the engineering manager governing a body of work.
To each their own, and most of the time you can ignore it. PR author, and PR reviewers are the mandatory bits - assignee is optional for unique situations and workflows.
Even though code owners are covered in the "reviewers" section, I wanted to highlight them specifically here.
Some teams have knowledge silos (ok, a lot of teams do unfortunately) and specific individuals have better knowledge of things than others. If you're trying to add safeguards and safety rails to things by throwing people at the problem, code owners is a convenient way to do it.
GitHub lets you define files, code areas, types of changes, etc. to automatically add individuals & groups to PRs to serve as a sort of "specialist reviewer" to them. You can customize PR rules at the repository level to enforce the approval of code owners as a "check" before a PR can merge.
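For reference, this is configured with a `CODEOWNERS` file in the repository. The paths, team names, and handles below are purely illustrative:

```
# .github/CODEOWNERS - owners are auto-requested as reviewers on matching paths
# UI changes pull in a hypothetical frontend team
/src/ui/            @org/frontend
# CI config changes pull in a specific individual
/.github/workflows/ @jane-doe
# Database migrations go to the data team
/db/migrations/     @org/data
```

Combined with a branch protection rule requiring code owner approval, matching changes can't merge without a sign-off from the listed owner.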
This is situational and not something I'd recommend most teams do; the point of human review is to validate the tests and flags - not to expertly divine the code changes themselves.
It's tempting to take the URL of your newly created Pull Request and plaster it in Slack, Discord, etc. to let people know to look at it. Don't do this.
It pings people who might not care or are busy with their own stuff. Instead, use the built-in tools GitHub provides... by just assigning the specific person or group. That's it!
Your teammates should have their notifications set up to their liking and will receive the notification that they've been assigned to your PR. They'll get to it when they get to it (timing discussed below) and don't need the additional attention-grabbing ping of a Slack notice.
Any discussion about the PR contents itself should stay in-context with that PR in the form of comments & threads - not in separate messaging systems or issue trackers. Keep discussion about the code, with the code.
This goes for PR updates too - if you need someone to review your latest alterations, click the "Re-request review" button instead of sending them a message.
One exception: if you've tagged people as reviewers and it's been 24 hours - tell them on Slack. They're leaving you hanging at this point, and it's on them for not getting to it fast enough. Ping 'em.
After creating a PR and pushing code to it, keep working. Checks will run and various feedback will start to flow in with passes & failures, etc. Fix what's broken and get to a deployable state sooner rather than later.
Add human reviewers to the PR and start working on something else. Keep an eye on whether your PR gets that needed approval, or whether someone else adds you as a reviewer on their own PR.
Instead of keeping a bunch of tabs open to look at your team's PRs, your PRs, PRs you're a reviewer on, etc. - just add a Pull Request widget to your Portal. You'll see all this info in a single place and can better focus on writing profitable code.