The PR Review
If you wrote code professionally before AI showed up, you have lived this scene.
You spent days on a piece of work. You thought about the edge cases. You read it back to yourself with the kind of care you reserve for things you quietly hope will outlive you. You opened the pull request, picked your reviewers, and walked away.
When you came back, the comments were waiting.
Lots of them. Some short. Some lengthy. Some flagged real issues. Some were taste. Some were nits the reviewer probably wrote at eleven at night because the kid had a fever and they couldn’t sleep.
You felt two things at once.
The first was gratitude. You knew exactly how long it had taken to read all that code, because you had read all that code yourself. Someone had spent their evening crawling through your work. That is a gift.
The second feeling — and this is the one that matters here — was something closer to getting jumped. Even if no one meant it that way. Even if every comment was thoughtful and well-intentioned.
You had put yourself into the work. The reviewer’s pile of comments did not feel like a pile of comments. It felt like a list of things wrong with you.
Hold onto that feeling. It’s where the rest of this essay lives. (I wrote a different version of the same emotional layer in It Wasn’t Wrong. This one is the cousin.)
Now Look Around
That same feeling — overwhelmed by someone else’s output, judged before you were ready to be, asked to read a wall when you wanted a sentence — is showing up everywhere now.
Inboxes full of dense AI-assisted analyses. Slack messages that arrive looking like position papers. Feedback that runs five pages when you asked for a paragraph. The sense that everyone you work with has somehow levelled up overnight, and you are now expected to keep up while also doing the actual job.
Most of us, when this hits, point at the same culprit.
It’s the AI.
AI is making us tired. AI is making interactions colder. AI is making the office feel inhuman. AI is the reason I came home with my brain on fire on Tuesday. AI is why my colleague’s note felt like a lecture instead of a conversation.
I want to interrogate that. Because the more I sit with it, the more I think we are blaming the wrong thing.
The Wrong Diagnosis
Let me start by taking the complaints seriously, because they are not nothing.
The brain-fry from context switching is real. The discomfort of reading something you suspect a person did not actually write themselves is real. The suspicion of slop is real. So is the feeling that AI is sitting between you and the person who sent the message — that there is now a third presence in the room you did not invite.
These are not invented problems.
But none of them are new problems.
Critique-as-attack is older than software. Long before AI, a colleague could send you a five-page memo full of valid points and leave you feeling like you had been disassembled in your own kitchen. The problem in that exchange was never the volume of the analysis. It was the absence of context for how to receive it.
Context switching has been crushing knowledge workers since open-plan offices were invented. Even on a quiet day, with no AI in sight, going from a deep code session to a meeting to a Slack thread costs you more than people admit.
Communication where the sender is fluent and the receiver is overwhelmed is the human condition. AI did not invent it. AI just made it cheaper to produce more of it, faster.
The Amplifier
Here is the framing I keep coming back to.
AI is an amplifier. Almost every interpersonal complaint we have about it is an old human-communication problem turned up to eleven.
That sentence is doing a lot of work, so let me sit with it for a moment.
If a behaviour that creates friction between people existed before AI, then AI cannot be the cause. It can only be the volume knob.
And the conclusion most people quietly draw — therefore we should use less AI — is precisely the wrong conclusion.
It is the equivalent of looking at the road-traffic-fatality statistics and concluding that cars were a bad idea. I shall stick with horse wagons. They are slower, but at least I will arrive alive. Except you will not arrive at all. Take a horse wagon onto the highway and you will be flattened by the first lorry that comes around the bend.
The right response to a powerful technology that amplifies an old problem is not abstinence. It is to learn to drive the thing — and to demand that the thing be built better.
When We Get It Wrong
Imagine someone asks me, casually, what I think of their plan.
I run their plan through Claude. Claude, being Claude, returns a thoughtful five-page document with cross-references, edge cases, and a tidy summary at the bottom.
I send the whole thing back. Here are some thoughts.
Now imagine being that person on the other end. They asked for an opinion. They got a tribunal.
Even if every single point in that document is correct — and it might be — the message that lands is not here are some thoughts. The message that lands is:
I didn’t have time for you. I outsourced you to a machine. And the machine had a lot to say.
The relationship gets dented. Not because the analysis was wrong. Because the framing was missing.
When We Get It Right
Same scenario. Same tools. Different framing.
I reply: I’d love to give you my take. Let me have Claude do a first pass on the factual side — it’ll catch things faster than I will. Then I’ll read its output alongside your plan and write you my actual opinion on top.
Now I send three things instead of one. The Claude analysis, clearly labelled as Claude’s analysis. My own response, clearly labelled as mine. And a single sentence at the top connecting the two: here is the data layer; here is what I, the human, think about it.
Same person. Same hours of my time. Wildly different conversation.
The difference is not the AI. The difference is the labelling.
The Oldest Rule, Newly Exposed
Here is the principle the labelled version is observing. A principle older than computers, older than office work, possibly older than language itself.
The sender of a message has full visibility into their intention. The receiver has none.
Every breakdown between two well-meaning people lives in that gap.
The PR-review sting from the start of this essay was that gap. The reviewer knew their intent — help, sharpen, ship. The author saw none of it. They saw a stack of things that were wrong.
The five-page-AI-analysis sting is the same gap. The sender knows their intent — I want to give you a careful answer. The receiver sees a wall. They fill in the missing intent themselves, usually with the worst available reading.
AI did not create this. AI just made it easier to produce a rich, dense artefact at speed, without giving anyone the context for how to read it. The artefact is rich. The framing is poor. The gap widens.
The fix is not less AI. The fix is the same fix it has always been. Close the gap. Tell the receiver what you are doing, why, and how to read it. (This is also the heart of The Feedback Loop — the same lesson, applied to systems.)
Two Layers, Not One
Here is the mental shift I have been making, and that I think we all need.
There are now two layers in any meaningful exchange between colleagues.
The first is the human-to-human layer. Your opinion. Your stance. Your judgement. Your this matters to me. This is the layer that has always existed and is not going away. AI does not replace it. AI cannot replace it.
The second is the data package. AI-assisted analysis. Cross-references. Verification. The thorough first read no tired human has the patience for. The thing AI is genuinely better at than any of us — pulling on threads, surfacing facts, catching the reconciliation issue at three in the morning.
These two layers are not the same thing. They should not be conflated.
When I send you a five-page Claude analysis as though it were my opinion, I am compressing two layers into one and asking you to read them as the same thing. They are not. The analysis is not slop, and it is not laziness. It is often immensely valuable. But it is a different layer from the human-to-human conversation we are having, and you, the receiver, deserve to know which is which.
This reframe is liberating once you make it. Long, dense AI output stops feeling like an attack. You can read it for what it is — a richer source than you used to have access to — without taking it personally, because nobody is personally sending it.
The Tools Are Making This Harder Than It Should Be
Now for the part of this that is not on us.
Today’s AI tools blur human and machine on purpose. The agent runs under your credentials. The draft is signed with your name. The Slack message goes out with your avatar. The email arrives in the receiver’s inbox looking exactly like every other email you have ever sent — except this one was largely written by something that is not you.
That is a structural problem. We cannot mindset our way around it.
When the receiver cannot tell which layer they are looking at, they cannot read it correctly. And right now, no major communication tool gives them a clean signal. There is no badge that says this paragraph was AI; this one was Niels. There is no header that says the analysis below was machine-assisted; the conclusion was not. There is no inbox affordance for please skim, this is the data layer versus please read carefully, this is from me.
We need that. The companies building AI products need to ship it. (I argued the bigger version of this in It’s All About Trust — agent identity and perimeter. The version I am asking for here is the smaller, social one: tell me when I am reading you, and tell me when I am reading the model.)
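To make the ask concrete: the signal does not need to be complicated. Here is a minimal sketch, in TypeScript, of what per-segment provenance could look like in a message payload. Every name in it is hypothetical — none of this is any existing tool's API — it only illustrates the shape of the signal the receiver needs.

```typescript
// Hypothetical sketch: per-segment provenance travelling with a message.
// None of these names belong to a real product.

type Provenance =
  | { kind: "human" }                   // written by the sender
  | { kind: "model"; model: string }    // generated, e.g. "claude"
  | { kind: "mixed"; model: string };   // model draft, human-edited

interface MessageSegment {
  provenance: Provenance;
  text: string;
}

interface Message {
  from: string;
  segments: MessageSegment[];  // a client could render a badge per segment
}

// The labelled reply from earlier in this essay, expressed as data:
const reply: Message = {
  from: "niels",
  segments: [
    { provenance: { kind: "human" },
      text: "Here is the data layer; below it, what I actually think." },
    { provenance: { kind: "model", model: "claude" },
      text: "First-pass analysis of the plan (five pages)..." },
    { provenance: { kind: "human" },
      text: "My take: the plan is sound, but the timeline is optimistic." },
  ],
};
```

The point is not this particular shape. The point is that the layer distinction travels with the message, instead of living only in the sender's head.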
Until They Fix It, We Have To
So here is the practice.
When you reach for AI in a human exchange, label it. I had Claude run a first pass — here is the analysis, separately here is my take. One sentence. Changes everything.
When you receive something that smells AI-assisted, update your read. It is not slop. It is probably valuable. It is a data package, not your colleague’s intent. Read accordingly.
When the channel is one where you were the human and only the human, say so when that changes. Even in casual messages. Especially in casual messages. The trust we have built with each other over years is worth more than the seconds it costs to be clear.
And when you are the one designing or buying the tools that mediate all this — be loud about wanting human/AI provenance to be a first-class feature. The companies will build it eventually. They will build it sooner if their best customers keep asking.
Back to the PR Review
The feeling I started with — the bittersweet sting of receiving a careful review and reading it as an attack — was not about code. And it was not about AI.
It was about the gap that opens between well-meaning humans whenever the framing of a message lags behind the substance of it.
AI did not create that gap. AI just walked into the room, sat down between us, and made the gap easier to fall into.
The work of staying clear with each other across that gap is older than the technology. It belongs to us, not to the model. And it is, mostly, the same work it always was — say what you are doing, say why, and tell the person on the other end which layer they are reading.
Until the tools learn to do it for us, that work is on us.
Between us.
My brother Johan and I went deeper on this same theme — including the embarrassing real-life version of the labelled-feedback story above, lived from the wrong end — on episode 3 of Johnny’s Garage, our podcast. The episode is in Danish: listen on Spotify or on Apple Podcasts.