review-prompts: Refactoring the code review process

review-prompts is a set of AI-assisted code review prompts written by Linux kernel developer Chris Mason, aimed at speeding up patch review for the Linux kernel and systemd. It installs via bash scripts, after which slash commands such as /kreview and /kdebug become available in tools like Claude. These commands are described as automatically loading project context, breaking large diffs into subtasks, cutting token costs by 40%-60%, and catching more bugs.
Reported benefits:

  • Speeds up code review by 30%-50%
  • Lowers cost and reviewer fatigue
  • Improves code quality without replacing human review

It works with GPT-4, Claude, and other models, and code navigation works best when paired with semcode.

Some projects look "very simple" at first glance, even almost too simple to mention: no complex architecture, no flashy UI, no pile of automated pipelines. But once you actually use them, you find they quietly change the way you think about code.

Review-prompts is one such project.

Its author is Chris Mason, a long-time participant in Linux kernel development, so the project has had an obvious temperament from the start: it isn't trying to teach you how to write code; it's trying to teach you how to doubt code.

The way many people use AI is much the same: write code, complete code, fix bugs. Few people seriously consider one thing: can AI help you do code review? And not the mild, advisory kind, but review close to the kernel community's style: direct, strict, even a little ruthless.

What this project does is deliberately restrained. It does not attempt to build a complex system, implement automated diff analysis, or wrap itself into a full agent. It simply provides a set of polished prompts, exposed as commands like /kreview and /kdebug, whose essence is a role setting plus a review strategy.
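For tools like Claude Code, a custom slash command is typically nothing more than a markdown prompt file dropped into a commands directory. The sketch below illustrates that mechanism; the `~/.claude/commands/` layout follows Claude Code's convention, and the prompt text is an illustrative stand-in, not the actual review-prompts content:

```python
# Sketch: registering a /kreview-style slash command for Claude Code.
# Assumption: Claude Code reads custom slash commands from markdown
# files in ~/.claude/commands/ (or .claude/commands/ in a project).
# The prompt text is an illustrative stand-in, not review-prompts' own.
from pathlib import Path

KREVIEW_PROMPT = """\
You are a long-time maintainer of a complex system reviewing a patch.
Identify the most likely problems with this change, including boundary
conditions, performance regressions, and potential concurrency risks.
Be direct and specific; do not soften your findings.
"""

def install_command(commands_dir: Path, name: str, prompt: str) -> Path:
    """Write a prompt file so that /name becomes a slash command."""
    commands_dir.mkdir(parents=True, exist_ok=True)
    path = commands_dir / f"{name}.md"
    path.write_text(prompt)
    return path

# Example (writes into your home directory, so left commented out):
# install_command(Path.home() / ".claude" / "commands", "kreview", KREVIEW_PROMPT)
```

Once the file exists, typing the command name in the tool pulls in the whole role setting at once.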

But the key point is that these prompts are written in a completely different way from how you would casually ask an AI.

You might ask, "Is there a problem with this code?"
The AI will usually give you a polite, even conservative answer.

In this project, that question is rewritten into another form, closer to a real-world code review:

As an engineer who has maintained complex systems for a long time, identify the most likely problems with this change, including boundary conditions, performance regressions, and potential concurrency risks.

Just this change of angle produces a significant change in output quality.

You will start to see things that are easy to overlook, such as:

  • A branch that breaks on extreme inputs
  • A change that introduces unnecessary lock contention
  • Logic that becomes hard to understand over long-term maintenance

This is not because the AI got "smarter"; it is because you put it in a more appropriate role.
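That reframing is easy to make mechanical. A minimal sketch, with the strict wording paraphrased from the example above; `build_review_prompt` is an illustrative helper, not anything shipped by review-prompts:

```python
# Sketch: the same question, framed two ways. The strict wording
# paraphrases the article's example; build_review_prompt is an
# illustrative helper, not part of review-prompts itself.
CASUAL = "Is there a problem with this code?\n\n{code}"

REVIEWER = (
    "As an engineer who has maintained complex systems for a long time, "
    "identify the most likely problems with this change, including "
    "boundary conditions, performance regressions, and potential "
    "concurrency risks.\n\n{code}"
)

def build_review_prompt(code: str, strict: bool = True) -> str:
    """Frame the question as a strict review rather than a casual check."""
    template = REVIEWER if strict else CASUAL
    return template.format(code=code)
```

The code being reviewed is identical in both cases; only the frame around it changes.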

This is also the genuinely interesting part of the project. It doesn't try to make AI a "stronger programmer"; it makes AI play a role you rarely have access to: a reviewer who is willing to keep questioning you.

If you have worked on projects, especially multi-person collaborations, you probably know that good code review is actually scarce. Many reviews are either pure formality or reduced to "just a glance" under time pressure. Over time, code quality does not collapse suddenly; it slowly drifts out of control.

Review-prompts offers a kind of low-cost simulation: locally, before you submit, you can have AI play the strictest reviewer and expose problems in advance.
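A minimal local version of that pre-submit check might look like the sketch below, which wraps the staged diff in a strict reviewer prompt. It assumes the `claude` CLI is available with its non-interactive `-p` print mode, and the prompt wording is illustrative, not the real /kreview content:

```python
# Sketch: reviewing the staged diff before committing. Assumes the
# `claude` CLI is installed and supports non-interactive -p (print)
# mode; the prompt wording is illustrative, not the real /kreview text.
import subprocess

REVIEW_PROMPT = (
    "Act as the strictest reviewer on this project. Point out boundary "
    "conditions, performance regressions, and concurrency risks in the "
    "following diff before it gets submitted:\n\n"
)

def staged_diff() -> str:
    """Return the diff of whatever is currently staged in git."""
    return subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    ).stdout

def review_command(diff: str) -> list:
    """Build the claude invocation that reviews the given diff."""
    return ["claude", "-p", REVIEW_PROMPT + diff]
```

Calling `subprocess.run(review_command(staged_diff()))` from a git pre-commit hook would surface the critique before every commit.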

It may even change, to some extent, the way you write code. Once you are used to being picked apart before committing, you start thinking ahead while writing:

  • Where are the boundaries of this function?
  • Are there any hidden side effects?
  • If I were the reviewer, which part would I tear into?

This kind of thinking is itself the core of an engineering culture like kernel development's.

Of course, this project is widely misunderstood. Some people describe it as an "automated code review system", or claim it automatically splits diffs and cuts token costs by some fixed amount. That is not what it does by itself. It has no complex engineering machinery, and it is not a full agent.

It is closer to a very simple thing:
a set of prompts that make "how to ask AI" more professional.

But precisely because it’s simple, it’s easier to embed into your own workflow.

For example, whether you are building AI projects, automating workflows, or writing blog posts and videos, you can apply the same idea: after the "generate" step, add a "review" step. Let the AI find problems rather than only produce output.

After writing an article, you can ask it to critique the piece from the angle of "will the reader get bored?".
When designing an agent workflow, you can have it pick the design apart along its "failure paths".
Even when studying physics, you can have it hunt specifically for holes in your derivation.
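That generate-then-review pattern can be sketched in a few lines. Here `model` is a stand-in for any function that sends a prompt to an LLM and returns text, since the article does not fix a specific API:

```python
# Sketch: adding a "review" step after any "generate" step. `model` is a
# stand-in for any callable that sends a prompt to an LLM and returns
# text; no specific API is assumed.
from typing import Callable, Dict

CRITIC_PROMPT = (
    "Act as a harsh reviewer of the text below. List concrete problems: "
    "weak arguments, missing failure paths, claims a careful reader "
    "would question. Do not praise.\n\n"
)

def generate_then_review(task: str, model: Callable[[str], str]) -> Dict[str, str]:
    """Run the generator, then turn the same model on its own output."""
    draft = model(task)
    critique = model(CRITIC_PROMPT + draft)
    return {"draft": draft, "critique": critique}
```

The same model plays both roles; only the framing of the second call changes.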

The moment you turn the AI from a "creator" into a "critic", its value starts to multiply.

What review-prompts does is, in the end, simple, but it reminds you of something easily overlooked:

What really sets people apart is not how much code you can write, but whether you can spot which code should not have been written that way.

That ability used to depend heavily on experience. Now you have an "external brain" you can call at any time, provided you learn to ask it the right way.

GitHub: https://github.com/masoncl/review-prompts
YouTube:
