Essay

The Feedback Loop

On Danish housing, AI, and the skill politicians haven’t accounted for

By Niels Kristian Schjødt · March 2026 · ~8 min read
[Illustration: a small Danish town with a lone figure and AI-generated conversation bubbles]

The Missing Debate

I've spent the last year inside a whirlwind. Building with AI every day. Watching it reshape how my team works, how we ship, how we think about what's possible. I've seen entire categories of work collapse in weeks. I've watched companies lose 20% of their market value in a single day because a new model made their core product redundant. The pace of change right now is unlike anything I've experienced in two decades of building software.

And then I turn on the election coverage.

It's March 2026. Denmark is heading into an election. Housing dominates every debate, every talk show, every party platform. How do we make cities affordable? How do we distribute opportunity more evenly? How do we make sure the Danish model — productivity through equality — keeps working?

These are important questions. I'm not dismissing them. But I keep having this unsettling feeling: AI is almost entirely absent from the conversation. The thing that is already reshaping industries, eliminating job categories, and rewriting the economics of knowledge work — it's not on the topic list. Not really.

I know I'm biased. When you live inside a transformation, everything looks like that transformation. But this doesn't feel like my filter talking. It feels like watching a debate where an important piece of the picture is missing. Where politicians are forming policies for a world that might look fundamentally different in eighteen months — and the housing debate is just the most recent example of why it matters.

The Productivity Argument

The Danish housing discussion isn't really about fairness for fairness' sake. Underneath the surface, there's a serious economic argument — one that economists and academics have been making for years.

The core idea: where you live determines how productive you are. Not because of the building you sit in, but because of the people around you. Knowledge work thrives on friction. On challenge. On having someone nearby who can look at your idea and say, "That won't work — but what about this?" Ideas get better when they're pushed. And they get pushed when smart people are physically close to each other.

This is why cities exist the way they do. It's why universities cluster. It's the intellectual backbone of the argument that affordable housing isn't charity — it's economic infrastructure. If only the wealthy can afford to live where the knowledge work happens, you're locking out the people who could contribute the most. And in a Danish context, that matters doubly: the Nordic model depends on broad participation. Productivity isn't just a GDP number — it's the engine that funds universal healthcare, free education, and the social safety net. When fewer people can access the environments that make them productive, everyone pays the price.

It's a compelling argument. And for most of modern economic history, it's been correct.

The Mathematician in Mistville

Last Friday, cycling home from the office, I was listening to Zetland's podcast series about the housing crisis — Det Store Boligfix, or something close to that title. They lay out exactly this argument: how housing affects productivity, how proximity creates the conditions for high-value knowledge work.

And they use an image that stuck with me.

Imagine you're the only mathematician in a small Danish provincial town. Your ideas aren't worse because you're less talented. They're worse because your feedback loop is weaker. There's no one to challenge you. No one to build on what you've started. No one to tell you you're wrong.

If the same mathematician sat in Copenhagen — or Cambridge, or MIT — she'd be surrounded by peers. The ideas would be sharper. The output would be more valuable. Not because she's smarter in the city. Because the feedback loop is stronger.

That's the whole argument in one image. And it's been true for centuries.

The question is whether it's still true tomorrow.

The Twist

Here's what hit me on that bike ride: AI is quietly changing the economics of feedback loops.

One of the most undervalued things AI does isn't automation. It isn't code generation or summarization or image creation. It's something much more fundamental: it gives you a sparring partner.

Before: Productivity requires physical proximity to other knowledge workers.
Now: AI democratizes feedback loops — geography becomes less decisive.

Think about what happens when someone opens ChatGPT and talks through a problem. Or when they draft an email and ask Claude to challenge their reasoning. Or when they dictate a business idea into a voice interface and get real, structured pushback.

They're not just "using an AI tool." They're accessing a feedback loop that used to require a colleague, a mentor, a peer in the next office. The mathematician in Mistville now has someone to talk to. Not a replacement for a department of peers — but a massive upgrade from silence.

And this matters for the equality argument just as much as the productivity one. If AI can partially close the feedback gap, it doesn't just help individuals — it loosens the grip that expensive cities have on who gets to be productive. A talented person in Thisted or Nakskov or Sønderborg isn't locked out of meaningful knowledge work the way they were ten years ago. The playing field doesn't become perfectly flat. But it tilts.

The Sycophancy Problem

But there's a catch. And it's one that most people don't talk about.

AI providers have optimized their products for confirmation. If you've used ChatGPT or Claude, you've noticed it: the AI is polite. Encouraging. Supportive. It tells you your idea is great. Your writing is strong. Your reasoning is sound.

That's not a feedback loop. That's a mirror.

The whole value of proximity — the whole reason cities boost productivity — is that the people around you push back. They disagree. They poke holes. They force you to defend your ideas or improve them. A sparring partner who only tells you what you want to hear isn't sparring. They're flattering.

This sycophancy is a deliberate design choice. AI companies know we like being affirmed. It drives engagement. It reduces churn. It's good marketing.

But it actively undermines the most valuable thing AI could be doing for you.

If you want to capture the real value of AI as a feedback loop, you need to insist on it. You need to prompt for challenge, not comfort. You need to explicitly ask: "What's wrong with this? Where does this fall apart? What am I missing?" And you need to be persistent about it, because the default setting will always drift back toward politeness.
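
To make this concrete, here is a minimal sketch of what "prompting for challenge" can look like when you work against a model programmatically, assuming the OpenAI Python SDK. The model name and the critic instructions are placeholders, not a recommendation of any particular setup; the point is simply that the request for pushback is written into the system prompt instead of being left to the model's defaults.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITIC_PROMPT = (
    "You are a sparring partner, not a cheerleader. "
    "Do not compliment the idea. List the three strongest objections, "
    "the assumptions most likely to be wrong, and what is missing."
)

def challenge(idea: str) -> str:
    """Ask the model to attack an idea instead of affirming it."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": CRITIC_PROMPT},
            {"role": "user", "content": idea},
        ],
    )
    return response.choices[0].message.content

print(challenge("We should rebuild our onboarding flow as a chatbot."))
```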

The feedback loop is available. But you have to demand it.

Phase Two

This connects to something I've been writing about across this blog: the shift from phase one to phase two of working with AI.

Phase one was individual acceleration. A single person goes into a room, closes the door, and produces at ten times the speed. That phase is real, and it's transformative. I've lived it. I wrote about the identity crisis it creates in It Wasn't Wrong.

But phase one has a ceiling. The people who locked themselves in a room and ran as fast as they could — they hit the wall where individual speed stops mattering and the quality of the thinking becomes the bottleneck.

Phase two is about finding new ways for humans to have high-level conversations. The old frameworks — agile, sprints, standups — are collapsing. Not because they were bad ideas, but because AI has blown apart every assumption about velocity and capacity that they were built on. We need new structures for the conversations that actually matter: the strategic ones, the philosophical ones, the ones where someone says something that changes how everyone else in the room thinks.

AI should handle everything underneath. The execution, the iteration, the day-to-day sparring on incremental decisions. That frees humans to spend their time on the interactions that only humans can have — the high-level meetings of minds that produce genuinely new ideas.

This is, in my opinion, the secret ingredient for fulfilling the promises people have been making about AI. Solving cancer. Accelerating scientific breakthroughs. Those exceptional leaps don't come from a model running in a loop. They come from humans having better conversations, more often, with AI clearing the path beneath them.

It's Not the Model

I still meet people who say AI isn't good enough yet. That the models aren't smart enough. That we need to wait for GPT-6 or Claude 5 before any of this becomes real.

I've written about this before, and my view hasn't changed. With very few exceptions — geometric reasoning and spatial architecture being the most notable — AI is good enough right now. When it fails, the failure is almost never the model's intelligence.

🎯 The one skill that matters: Context engineering. The ability to give AI what it needs to be relevant — your domain knowledge, your history, your specific reality. Not the base training. The layer on top of it that makes the output yours.

When someone tells me "AI can't do what I do," I hear one of two things. Either they haven't given the AI the context it needs. Or they haven't invested in the thinking required to build that context — understanding what the AI needs to know, and how to make it available.

This is where those high-level human conversations from phase two become critical. The thinking that produces good context doesn't happen in isolation. It happens when people sit together and figure out: what do we actually know? What does the AI need to understand about our domain? How do we structure that knowledge so it's accessible, reusable, and alive?
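
As a rough illustration of what that layer can look like, here is a small sketch that treats a folder of plain Markdown files as the shared domain context and feeds it to the model on every call. The folder name, file layout, and model name are invented for the example, and it again assumes the OpenAI Python SDK; the pattern is what matters: the context lives outside any single prompt, so anyone on the team can read it, improve it, and reuse it.

```python
from pathlib import Path
from openai import OpenAI

# Hypothetical context repository: context/product.md, context/customers.md, ...
CONTEXT_DIR = Path("context")

def build_context() -> str:
    """Concatenate every context file into one block the model reads before answering."""
    sections = [
        f"## {f.stem}\n{f.read_text(encoding='utf-8')}"
        for f in sorted(CONTEXT_DIR.glob("*.md"))
    ]
    return "\n\n".join(sections)

def ask(question: str) -> str:
    """Answer a question with the team's domain context prepended."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "Use the domain context below. Flag anything it does not cover.\n\n" + build_context()},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```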

If you want to go deeper on how this works in practice — the infrastructure, the tooling, the approach to making context available to AI agents — I wrote about it in Still True. And for the tactical patterns of working with AI day to day, the AI Cookbook lays out the full framework.

The Compound Effect

Here's the part that excites me most: context engineering compounds.

The first time you invest in building context — structuring your domain knowledge, setting up the right tools, creating the repository of information that makes AI relevant to your work — it's hard. It takes thought. It takes time. It feels like the AI should just "know" this already.

But once you've done it, something remarkable happens. The next time a related problem comes up, the AI produces high-quality output with almost no input from you. And the time after that. And the time after that.

This is because much of knowledge work is more repetitive than it feels. We think each problem is unique because it feels unique in the moment. But the patterns repeat. The domain stays the same. The constraints are similar. And the context you built for the first problem turns out to be 80% of what's needed for the next ten.

That's the real productivity multiplier. Not a faster model. Not a better prompt. A growing body of context that makes every future interaction richer, faster, and more relevant. It's the same compounding effect I describe in the enforcement pattern for code quality — except applied to knowledge work broadly.
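
As a hypothetical continuation of the earlier sketch: the compounding shows up when finished work feeds the repository, so the next task starts with more than the last one did. The file names and notes below are invented for illustration.

```python
from pathlib import Path

CONTEXT_DIR = Path("context")  # same hypothetical repository as in the earlier sketch

def record_learning(topic: str, note: str) -> None:
    """Append what a finished task taught us, so future tasks inherit it for free."""
    CONTEXT_DIR.mkdir(exist_ok=True)
    path = CONTEXT_DIR / f"{topic}.md"
    with path.open("a", encoding="utf-8") as fh:
        fh.write(f"- {note}\n")

# After shipping a pricing analysis, capture the reusable parts once:
record_learning("pricing", "Enterprise deals are quoted per seat and billed annually.")
record_learning("pricing", "Discounts above 20 percent need CFO sign-off.")
# Every later question that touches pricing now starts from this, with no extra prompting.
```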

The Real Question

So where does this leave the Danish housing debate?

The housing argument isn't wrong. Proximity still matters. The highest-level human synergy — the kind that produces breakthrough ideas, that fuels the deepest innovation — still benefits from people being in the same room. Phase two of AI depends on humans having better conversations, and some conversations are better in person.

But the premise that you need a city to have a functional feedback loop? That the only way to participate in high-productivity knowledge work is to live in an expensive postcode? That premise is already cracking.

The mathematician in Mistville isn't stuck in silence anymore. She has a sparring partner — one that's available around the clock, that knows her domain if she invests in teaching it, and that can push back with real substance if she insists on being challenged. She still benefits from going to conferences, collaborating with peers, and being part of a community. But the baseline has shifted. The gap between Mistville and Copenhagen just got smaller.

And if we're serious about the Danish model — about equality as a vehicle for productivity, and productivity as a vehicle for equality — then maybe the most important question for this election isn't just where we put people. It's whether we're preparing them for the tools that are already changing what "where" means.

The debate about housing matters. But the debate about AI — about context engineering, about new ways of working, about what it means for every knowledge worker in every corner of the country — that debate hasn't started yet. Sweden is already having it — their national AI Labour Market Council is producing real data on how AI is reshaping who gets hired, and the findings should concern anyone who cares about the Danish model.

It should.