To make money with pull requests, the cost of creating, updating, reviewing, and merging them has to be less than the value your organization gets from users accessing your changes (the revenue).
If you spend less than the company makes - congrats! You're practicing Profit Engineering correctly.
This post will cover:
- where to make pull requests,
- how to make them superfast and easy to review,
- the right automations to plug into them,
- how to review them in less than 2 minutes,
- and how to merge them.
There are a lot of options for PRs, so we'll cover the basics and most common providers.
Step one is understanding your git provider. We've briefly mentioned a few above, but we're going to assume you're like most people and use GitHub.
Each vendor offers their own flavor of PRs but with GitHub they're literally called "Pull Requests". You can find them listed in the "Pull requests" tab when looking at any of your repos in their web UI.
Lots of tools exist for viewing PRs as well, but those are more personal preference than efficiency related.
Personally I like to use GitKraken for pushing/pulling/branching code with PR insights. The main github.com will do just fine though.
PRs belong to a specific repository, and they're how you get code from one branch of the repo into another.
Each pull request has two branches:
- from: the branch containing your new code changes
- to: the branch where you want your new changes to go
There are several branching methodologies and I won't go too much into detail here about them, but the most common is simply called "feature branches". This is where you create a branch from the production - aka "main" - branch, push your work into it, and merge that feature branch back into the main branch.
It's about as simple as it gets. Far more complex branching concepts exist... but we're Profit Engineering, not technical-savvy engineering. Companies don't care "how" you get your code into production, just that you do so both often & safely.
I'll cover more about branching in a separate article (there's a lot to cover about Profit Engineering) but the advice I'd give you here is to push changes to your feature branch often, all throughout the day. Think of it as a "save" button that re-validates how you're doing along the way. Don't wait until you're done to push your changes.
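That feature-branch flow, sketched as plain git commands (the branch and file names are made up, and a throwaway local bare repo stands in for GitHub):

```shell
# Sketch of the feature-branch flow; a local bare repo stands in for GitHub.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"            # pretend this is GitHub
git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git config user.email "dev@example.com"
git config user.name "Dev"

git commit -q --allow-empty -m "initial commit"
git push -q origin HEAD:main                    # the production branch

git checkout -q -b feature/widget               # branch off for the feature
echo "new stuff" > widget.txt
git add widget.txt
git commit -q -m "feat: add widget"
git push -q -u origin feature/widget            # push often - it's your save button
```

Once that branch exists on the remote, opening the PR from feature/widget into main is a couple of clicks.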
I put this section here to suggest "not" using robots & APIs to create PRs.
This is technically possible, and it seems like it would save humans the time of creating a PR manually after pushing a new branch - but it's not worth it.
PRs "belong" to a person - their author is whoever is ultimately responsible for the work. Most of the time, creating a PR via code makes the bot or application authorizing the API request the author of the PR, losing that association.
It might be possible to code a bot to open a PR on behalf of a user, but that user would then need to grant the bot permissions to do so... and it's just not worth the time to fiddle with it.
Speed is the name of the game here; have the human developers themselves file PRs for their work. At least with GitHub, when a new branch is pushed, the web UI shows a little notice prompting you to create a PR, so the experience is fairly frictionless.
This way you retain all the benefits of a user owning the PR start to finish. Gotta cut the technical fancy sometimes.
Ultimately the titles you use on your Pull Requests don't matter.
Some might suggest using semantic messaging concepts like "Feat: new feature XYZ" or "Chore: do the thing", but that's personal preference. I personally do use semantic titles, so I know at a glance if a PR is for new user-facing functionality vs generic changes, but the PR is treated the same.
We care about getting code into production - not about the PRs showing up pretty in a list.
The description carries a bit more importance. Since humans will be reading the Pull Request, the description's job is to inform reviewers of what's going on - at a glance - as quickly and clearly as possible.
That means a one to two sentence outline of what changes are in the PR, what they do at a high level, and why they were introduced. Don't write full paragraphs here; if more context is needed, paste a link to the issue outlining the work itself (like a Jira/Linear ticket).
Below that should be screenshots - if applicable - so the reviewer can glance to see what the changes look like if it impacts some UI or visual thing. Don't screenshot the code... that's already viewable in the "changes" tab of the PR.
PR descriptions can vary by team & preference, so "templates" are useful here. GitHub lets you specify a template when creating a PR to pre-fill sections and expectations for the author to fill in - like the description and screenshot sections.
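A minimal .github/pull_request_template.md sketch (the section names here are just my preference, not a GitHub requirement):

```markdown
## What & why

One to two sentences on what changed and why. Link the ticket for deeper context.

## Screenshots

Before/after images if the change touches UI - otherwise delete this section.

## Checklist

- [ ] documentation was updated
- [ ] tests were written
```

GitHub pre-fills new PR descriptions with this file's contents automatically once it's committed to the repo.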
Lastly, descriptions enable a fancy bit of automation: checklists (with the help of some bot(s)) can be surfaced in the PR's "checks" section, requiring the author to explicitly mark certain things as having been accomplished. Items like "documentation was updated" or "tests were written" provide some accountability and can literally block deployments until complete.
With GitHub you can specify a PR as "draft", meaning it's not ready for review.
There are some caveats to using this functionality which might make them useless to you; generally it's best to just create the PR as "Ready for Review" and wait to add reviewers until you're ready for them to look.
This gets odd when utilizing the concept of "code owners". This is a feature letting the repository owners assign specific people to get auto-added as mandatory reviewers of PRs that meet certain criteria - such as specific files having been altered as part of the PRs changes.
If you open the PR in draft mode, the code owners won't be added & notified until the PR gets transitioned to "Ready for Review". This might seem like a good idea, but it depends on your automations.
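For reference, code owners are defined in a .github/CODEOWNERS file; a sketch (the paths and team names are hypothetical):

```text
# .github/CODEOWNERS - owners get auto-added as reviewers when matching files change
/migrations/   @acme/db-team
*.tsx          @acme/frontend-team
/docs/         @acme/tech-writers
```

The last matching pattern wins, so order the entries from broadest to most specific.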
If you have ephemeral environments that auto spin up (like with Vercel) or build scripts / linters / tests that run, they might not trigger on draft PRs. This can be overridden most of the time but if that's not an option for you, then you'll be missing out on the previously stated benefits of progressive frequent PR check runs by waiting to transition the PR out of draft mode.
I've found it better to "poke" the code owners with not-ready-to-merge-yet PRs by creating them in "Ready for Review" mode, ignoring the concept of "draft" PRs altogether, in the name of faster, more iterative development & bot feedback. The code owners will review it, say something like "it ain't ready yet, yo" - which you can ignore - and then you click the little "re-request review" icon next to their name when the PR is "actually" ready to be looked at.
Here's where some magic can be worked - so long as it makes developers' lives more efficient, not more complex & cumbersome.
Adding labels to PRs for us isn't for organizational purposes, but rather to trigger automations. With GitHub you can run GitHub Action scripts for PRs based on labels, giving you conditional logic controls to do custom things in certain scenarios.
For example, you can use a label called "db", listen for it in a script, and spin up a new PlanetScale database for that feature branch automatically. That can be pretty useful for running checks against a branch database like End-to-End tests or giving stakeholders a playground to tinker with alongside ephemeral applications through the likes of Vercel.
That's a fairly complex example but you get the point. If there's something meaningful to help you get feedback on your code faster, deploy faster, iterate faster, or reduce risk - then labels can probably help you out.
Otherwise, feel free to ignore them.
We've covered Pull Request "checks" throughout this guide but can summarize them a tad more here for clarity.
These might be the entire reason we bother with PRs in the first place. Yes, they're good for passing on knowledge and reviewing code to ensure the right things were done by the right people, but those almost come second to the gains received through automations / robots / checks.
PRs are a conduit for robots to validate & analyze your code changes. That is first and foremost their purpose in this style of engineering.
Before checks & automations, PRs were mostly a way for other engineers to check out your code locally, run it, test that it worked how it should, then shove it onto a staging environment for the QA team to manually review.
That became a slow, outdated practice, and as automations grew more prevalent the roles shifted quite a bit (we'll go into more detail below about who's responsible for which part of a PR). Now PRs are mostly a benefit to the author instead of the org receiving the code changes.
Checks can be used both as "indicators" of pass/success conditions, and as "gatekeepers" marked as "required" to block PRs from merging until the required checks all pass.
A quick rundown of valuable automations / checks to consider utilizing:
"Linting is the automated checking of your source code for programmatic and stylistic errors."
In other words, adding a check to your PR that validates your code changes for style & syntax correctness helps keep everyone's stuff looking the same and following the same patterns. An example in the TypeScript world is using a combination of ESLint and Prettier.
If developers use different conventions across the codebase, reading it becomes difficult, PR changes become noisy with syntax differences instead of actual functional alterations, and differing opinions from human reviewers can get in the way of merging.
Linting helps streamline efficiency so only the meaningful stuff gets the time investment, not the mundane & pointless.
I mark this style of check as "required". Even though lint failures don't technically prevent the app from building or deploying, I don't want nasty mixtures of code syntax confusion muddled into the repository.
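Wired into GitHub Actions, a lint check might look like this sketch (it assumes ESLint & Prettier are already configured in the repo, and that you're on npm with Node 20):

```yaml
# .github/workflows/lint.yml - reports pass/fail in the PR's checks section
name: lint
on: pull_request

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx eslint .
      - run: npx prettier --check .
```

Any nonzero exit from ESLint or Prettier fails the check, which blocks the merge if you've marked it required.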
Common to most apps will be some kind of build process.
Most things these days are built in a CI/CD pipeline like GitHub Actions, CircleCI, etc. and can be triggered by changes to a branch (which PRs help facilitate). Build processes that fire off this way can report back their progress & a pass/fail with a link to details directly from the "checks" interface of the PR.
In Next.js, for example, you can have a GHA script that runs next build to generate output files for placing in a server/app environment. After linting, this is the most common place to see errors in your code.
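That build check might be a workflow as simple as this sketch (the package manager and Node version are assumptions - match yours):

```yaml
# .github/workflows/build.yml - fails the PR check if the app won't build
name: build
on: pull_request

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx next build   # production-mode build, not a dev server
```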
Running and building things locally often works fine (the classic "works on my machine") but CI will find out if it "actually" builds in a production-esque context. Even running things locally sets NODE_ENV to "development" mode vs building as "production" - so who knows what the differences might be?
Robots are who!
Don't waste time doing production build commands locally - just code up your changes, and shove (push) them up to your git provider to trigger a fresh set of checks against what you've written.
Keep on coding - even moving on to other tasks - while the checks are running, but keep an eye on them to see if any fail. "Keeping an eye on them" is much harder than it sounds - and is why WhatThePortal exists, to provide a single place to see everything going on.
If you notice a failure, click it in WhatThePortal (or hunt around your git provider's UI) to see details on what went wrong, code in your fix, and push the changes to fire off the checks again.
Rinse and repeat until everything is green. I mark these checks as "required", since... I don't ever want to merge in code that can't build. This is known as "breaking the build".
These days ephemeral environments are all the rage - and for good reason. Temporary isolated full instances of applications, databases, clients, APIs and more are all possible now.
You can either rig these up yourself, or just use modern frameworks like Next.js through Vercel to do it for you.
Either way you'll want to see if the code in your PR can actually deploy successfully to an environment and be usable. These are what I refer to as "deployment" checks.
Not to be confused with actual "deployments" - the raw, real actual deployment happening - but rather a check to confirm a particular deployment was successful. Even if your app builds in the CI pipeline in a "production" mode, it might not land on a server / in a service successfully.
Context matters - having a different set of values for environment variables inside of GitHub than you do for Vercel is one such example. Knowing for sure that your changes can deploy successfully is a very important piece of feedback to validate merging or not.
For that reason, I mark these type of checks as "required".
Here's where some of the important distinctions between "normal" development and "profit" development come to full fruition.
Traditionally - and in more corporate / slow-paced environments - PR changes would be tested by the reviewers themselves, and sometimes a dedicated QA team. This could happen as the PR progressed or once it was done, but the theme is the same - human effort validating code changes.
This is a no-go for us. We want to move fast, shove our code live often, and need to rely more on robots to tell us we're in the clear than spend human time doing the same.
That means more validation done in PR checks, usually in the form of "automated tests". Here's some examples of common types:
- Unit Tests for validating business logic in isolation
- E2E tests for validating features still work and apps/UIs didn't break
- Integration tests for tying disparate systems together (I just lump these into E2E)
We'll go into more detail about each one in a separate article covering "workflows".
This style of check is always marked as "required"; an optional check for tests means the tests aren't reliable and thus not worth keeping - so fix or delete them.
While we try to keep PRs as automated as possible, there's still a human element (outlined below) to account for.
Did the author of the PR do everything they were supposed to do before merging the code? Did a reviewer blindly approve it without even reading it?
These sorts of questions can be streamlined a bit through various "accountability" style checks. One of my favorites is Shopify's task-list-checker.
Define some "to-do" items in your PR descriptions like "added documentation" or "wrote unit tests" and have this script validate that they were indeed checked. If not, the check will fail and the PR can't be merged.
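In the PR description, the markdown such a checker inspects is just a plain task list - a sketch:

```markdown
## Checklist

- [x] added documentation
- [ ] wrote unit tests
```

With an item left unticked like above, the bot's check stays failing, and a required check that's failing blocks the merge.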
Sure, the author can tick the boxes just to merge the code, but it's at least a human intentionality sticking point for "discussions" such as "Stop lying on PRs; if you tick the box saying you did it, then do it". Fostering an engineering culture valuing ownership & accountability will help with this.
Since the checks will fail until those items are checked, human reviewers don't need to get involved until they all pass. Green checkmarks in a single place are easier to spot at a glance than hunting around for potential missed requirements.
I mark these as "required" checks.
These types of checks all aim to inspect your code and deliver a report of some sort.
Things like security, vulnerability, and code quality scans fall into this category. Mostly "nice to have" in terms of productivity, but different business requirements call for different levels of "meets expectations" for risk, licensing, and compliance reasons.
Medical software has different needs than blog widget technology, so use discretion here. Don't add more than is necessary, or you'll end up with a very long PR cycle time having to wait 15 minutes for valuable robotic feedback.
In most scenarios I mark these as "optional". They're good to know about, but not something that should prevent the feature development cycle from happening. They do a good job at creating future task items to tackle.