Code Review in the Age of AI
Engineering teams are drowning in pull requests that look correct but feel off. The code passes tests and follows style guides, yet reviewers sense something's missing. AI-assisted coding has fundamentally changed what's landing in your review queue, but the review process hasn't evolved to match.
The Core Problem
Code review serves two purposes: quality control and developer education. AI-generated code breaks the second one entirely unless you know who (or what) wrote it.
When a senior engineer provides detailed feedback on problematic patterns, that knowledge transfer only works if a human is on the receiving end. If the code came from Claude or Copilot, your carefully crafted review comments vanish into the void—no one learns, and the same mistakes reappear next week.
The fix requires three pieces of information current tools don't provide: the original prompt, which corrections the developer made, and clear markers for untouched AI output.
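To make that concrete, here is a minimal sketch of what such an annotation convention might look like, assuming a commit-message trailer format. The trailer names (AI-Prompt, AI-Modified, AI-Untouched) are invented for illustration, not an existing standard:

```python
import re

# Hypothetical commit-message trailers for AI provenance.
# These keys are not a standard; they illustrate the convention:
#   AI-Prompt:    the prompt given to the coding assistant
#   AI-Modified:  files or regions the developer corrected by hand
#   AI-Untouched: files or regions merged exactly as generated
TRAILER_RE = re.compile(
    r"^(AI-Prompt|AI-Modified|AI-Untouched):\s*(.+)$", re.MULTILINE
)

def parse_ai_trailers(commit_message: str) -> dict[str, list[str]]:
    """Collect AI-provenance trailers from a commit message."""
    trailers: dict[str, list[str]] = {}
    for key, value in TRAILER_RE.findall(commit_message):
        trailers.setdefault(key, []).append(value.strip())
    return trailers

example = """Add retry logic to payment client

AI-Prompt: add exponential backoff to PaymentClient.post, max 5 retries
AI-Modified: payment_client.py (tightened exception handling)
AI-Untouched: test_payment_client.py
"""

print(parse_ai_trailers(example))
```

The exact keys matter less than making them machine-readable, so tooling can route AI-heavy PRs into a different review mode.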
Enter AI-Assisted Code Review
Here's where the process can flip in your favor: use an AI agent to pre-review pull requests before a human ever looks at them.
The agent scans the PR and generates structured feedback, categorized into:
- Critical issues: Security vulnerabilities, breaking changes, logic errors, performance bottlenecks—things that must be addressed before merge.
- Nice-to-haves: Style improvements, refactoring suggestions, documentation gaps—valuable but not blocking.
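One plausible way to represent that output is a severity-tagged finding record. The field names below are illustrative, not any specific tool's schema:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"          # must be addressed before merge
    NICE_TO_HAVE = "nice_to_have"  # valuable, but not blocking

@dataclass
class Finding:
    """One item of agent feedback on a pull request."""
    severity: Severity
    file: str
    line: int
    message: str             # explanation of the issue
    suggested_fix: str = ""  # optional patch or rewrite

def blocking(findings: list[Finding]) -> list[Finding]:
    """Only the findings that must be fixed before merge."""
    return [f for f in findings if f.severity is Severity.CRITICAL]
```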
When the human reviewer steps in, they're not starting from scratch. Instead, they:
- Review the AI's analysis — See the flagged issues with explanations and suggested fixes.
- Accept proposals — One-click approval for AI suggestions that make sense.
- Reject proposals — Dismiss false positives or suggestions that miss context.
- Add their own feedback — Layer in human insight the AI couldn't catch.
- Approve or delegate back — Send it back to the author with a clear, prioritized list of changes.
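Continuing the sketch above, the reviewer's accept/reject pass might reduce to something like this, producing the prioritized change list sent back to the author (Finding and Severity are the types from the previous snippet):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    finding: Finding   # Finding/Severity as defined in the earlier sketch
    accepted: bool     # reviewer accepted or dismissed the suggestion
    note: str = ""     # optional human context on the decision

def change_list(
    decisions: list[Decision], human_feedback: list[Finding]
) -> list[Finding]:
    """Accepted agent findings plus the reviewer's own, critical first."""
    kept = [d.finding for d in decisions if d.accepted]
    combined = kept + human_feedback
    # False sorts before True, so critical findings come first.
    return sorted(combined, key=lambda f: f.severity is not Severity.CRITICAL)
```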
This workflow lets reviewers focus their energy on high-judgment decisions while the AI handles the tedious pattern-matching. The result: faster turnaround and higher quality.
What To Do About It
- Instrument your AI usage: Add a simple convention—a comment block or commit message prefix—capturing the prompt used and what was modified. Start with one team as a pilot.
- Implement AI pre-review: Set up an agent to analyze PRs before human review. Configure it to flag critical vs. nice-to-have issues so reviewers can triage efficiently.
- Separate review modes: Reviews of AI-generated code should focus on teaching the developer to prompt better, not just on fixing the output. Document common AI coding failures and the context improvements that prevent them.
- Audit your metrics: Check whether increased PR velocity is shipping better software or just more code. If PRs are approved faster but bugs are up, your review process may be rubber-stamping AI output. A rough version of this check is sketched below.
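As an illustration of that audit, assuming you track median merge time and post-merge defects per month (the field names and numbers here are hypothetical):

```python
from statistics import mean

# Illustrative monthly snapshots; field names and data are made up.
#   hours_to_merge:  median time from PR open to merge
#   post_merge_bugs: defects traced back to PRs merged that month
history = [
    {"month": "2024-01", "hours_to_merge": 30.0, "post_merge_bugs": 4},
    {"month": "2024-02", "hours_to_merge": 22.0, "post_merge_bugs": 7},
    {"month": "2024-03", "hours_to_merge": 14.0, "post_merge_bugs": 11},
]

def velocity_vs_quality(history: list[dict]) -> str:
    """Flag the rubber-stamping pattern: merges getting faster
    while post-merge defects climb."""
    baseline = history[:-1]
    latest = history[-1]
    faster = latest["hours_to_merge"] < mean(h["hours_to_merge"] for h in baseline)
    buggier = latest["post_merge_bugs"] > mean(h["post_merge_bugs"] for h in baseline)
    if faster and buggier:
        return "warning: velocity up, quality down; review may be rubber-stamping"
    return "no rubber-stamping signal in this window"

print(velocity_vs_quality(history))
```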
Originally published by Martin Fowler: AI Changed How We Write Code—Now It Needs to Change How We Review It