
Nobody Has a Map: What Matt Shumer's 80 Million Readers Still Need to Hear About AI


Wednesday morning. I'm at my desk when I check my phone and see the fourth forwarded link in two hours. "Have you seen this?" Each message points to the same place: Matt Shumer's "Something Big Is Happening" post on X.


By lunchtime it had 20 million views. By the end of the week, 50 million. At the time of writing, 80 million. Friends who normally send me articles about resilience risk or cat videos were sending me this instead, each asking some version of the same question: should we be worried?


Shumer Isn't Wrong. He's Incomplete.

Here's my take, one among the memes and many others you will no doubt see: Shumer isn't wrong about the direction. But his framing collapses three critical distinctions that determine whether you come through this well or badly. Those distinctions come not from the critics, but from Dario Amodei himself, CEO of Anthropic, in a recent interview with Dwarkesh Patel that deserves ten times the attention Shumer's post received.


The stakes are real. Anthropic's revenue went from roughly zero to $100 million in 2023, to $1 billion in 2024, to nearly $10 billion in 2025. In January 2026 alone, they added several billion more. This is not a drill.


Unfortunately, most people will respond to Shumer's post in one of two predictable ways: panic or dismissal. The panickers will over-invest in tools (my monthly subscription bill shows I can empathise here). The dismissers will sleepwalk into irrelevance (harsh, but that's what I currently believe about that camp). Both groups share the same error. They think someone, somewhere, has a reliable map for what comes next.


Nobody has a map. Not the labs, not governments, not the builders. I wrote that on LinkedIn last week and I meant every word. The direction is clear. The speed and shape of arrival are genuinely unknown. If you don't grasp that distinction, you'll either freeze in place or sprint in the wrong direction.


The most valuable thing I can offer isn't a prediction. It's a framework for operating without one.


With that, let’s dive in.


The Disruption Isn't Coming. You're Standing in It.

Shumer frames this as AI's 'February 2020 moment.' A vivid analogy. A virus most people haven't heard of is about to reshape the world. But the analogy breaks down on closer inspection.

 

February 2020 was the moment before disruption hit. What Amodei's revenue numbers actually show is something different: we're already inside the disruption. This is closer to February 2021, when the vaccines existed but distribution was the bottleneck. The capability is here. The question is plumbing.


In his interview with Patel, Amodei draws a distinction that should be required reading for every strategy team on earth. Two exponentials are running at the same time. The first is the capability curve: models getting smarter, faster, more capable. That one is steep and shows no signs of slowing. The second is the diffusion curve: how quickly those capabilities actually change how organisations work. Also fast. But as Amodei puts it, 'extremely fast, but not instant.'
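If "two exponentials" sounds abstract, here is a toy sketch in Python. It is my own illustration, not anything from the interview: the growth rates are invented, and the only point is that two curves can both be steep and still pull apart.

```python
# Toy illustration only: two exponentials growing at different rates,
# echoing the capability-vs-diffusion distinction. The growth rates
# below are invented for illustration, not real estimates.

capability_growth = 2.0   # hypothetical: capability doubles each year
diffusion_growth = 1.4    # hypothetical: organisational adoption grows 40% a year

capability = 1.0
diffusion = 1.0

for year in range(1, 6):
    capability *= capability_growth
    diffusion *= diffusion_growth
    gap = capability / diffusion
    print(f"Year {year}: capability x{capability:.1f}, "
          f"adoption x{diffusion:.1f}, gap x{gap:.1f}")
```

Both lines climb fast. The gap between them climbs too, and that gap is where strategy gets made.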


This is not a trivial difference. Big enterprises, he says, are adopting tools like Claude Code 'much faster than enterprises typically adopt new technology. But again, it takes time.' Legal review. Security provisioning. Compliance sign-off. Change management. The leaders of a company who are 'further away from the AI revolution' still need convincing that spending millions makes sense. Then they explain it two levels down. Then thousands of developers get onboarded.


I spend my working life inside this machinery. I can tell you that the bottleneck in most large organisations isn't intelligence. It isn't even willingness. It's plumbing.


Why Software Is the Exception, Not the Template

Here's where Shumer's post becomes genuinely misleading, if unintentionally. His most compelling examples all involve coding. AI writes tens of thousands of lines, tests the app by clicking through it, fixes issues on its own. I'm using it myself and this week it allowed me to instantly implement some additional database encryption on an app I’m building. It's remarkable.


But buried in the Amodei interview is a fundamental structural insight. Dwarkesh Patel puts it plainly: coding made fast progress 'precisely because there is an external scaffold of memory which exists instantiated in the codebase.' The AI can read your entire repository and know what a six-month employee would know. Then Patel asks the question that gets to the heart of the impact on every non-software industry: 'How many other jobs have that?'


Almost none.


Software engineering is the anomaly, not the template. It is the one profession where accumulated institutional knowledge is already machine-readable, version-controlled, and structured for automated consumption. Amodei himself notes the difference between '90% of code written by AI' and '90% of end-to-end SWE [software engineering] tasks done by AI,' calling them 'very different benchmarks.' As Fortune's Jeremy Kahn observed in his critique of Shumer's post, there are 'no compilers for law, no unit tests for a medical treatment plan.'


After a long career in consulting and banking, I can tell you exactly where my industry's critical knowledge lives. In the heads of experienced relationship managers. In the unwritten rules about how a particular regulator actually behaves versus what the handbook says. In the instinct a credit officer develops after a thousand deal reviews, when something in the numbers feels wrong before the spreadsheet confirms it.


None of that is in a repository. None of it is machine-readable. And until it is, the timeline for AI transforming these domains will look nothing like the timeline for software.


This doesn't mean it won't happen. It means the path runs through a different bottleneck: making your domain's knowledge as legible as a codebase. That's the real work, and almost nobody is doing it.


A Framework for Operating Without a Map

So if confident prediction is a trap, what's the alternative?


Amodei himself models the answer, perhaps without realising it. He puts himself at 90% confidence that the 'country of geniuses in a data centre' arrives within ten years. But on a one-to-two-year timeline? He drops to 50/50. He describes sample efficiency as a 'genuine puzzle.' He admits honest uncertainty about tasks that can't be verified: planning a mission to Mars, writing a novel, making a fundamental scientific discovery like CRISPR.


Decision scientists call this calibrated uncertainty. Daniel Kahneman spent decades showing that most experts lack it. His research demonstrates that confident predictions generate disproportionate trust; we are wired to follow whoever sounds most sure of themselves. Shumer sounds sure. His critics sound sure. Amodei, who arguably knows more about the trajectory than either group, sounds carefully, precisely unsure. And he structures actual billion-dollar decisions around that uncertainty rather than his hopes. He told Patel that being wrong by a single year on compute investment could mean bankruptcy. If the man spending billions hedges his timing, perhaps you should too.


Three principles fall out of this:


Stop mapping your AI strategy to a date. Build for optionality, not a target. Pilot programmes with flexible contracts and modular architecture that can scale up faster than it scales down. The cost of being one year early is real. The cost of building rigid plans around a confident timeline is worse.

 

Audit where your institutional knowledge actually lives. If the answer is 'in people's heads' or 'that's just how we do things here,' you've found your AI readiness bottleneck. The organisations that turn tacit knowledge into structured, legible assets will ride every AI improvement upward; those that don't will watch from the platform as the train leaves. This, personally, is my #1 conviction for tackling non-coding domains.
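What might a 'structured, legible asset' look like in practice? Here is a minimal, purely illustrative sketch in Python. The KnowledgeEntry structure, its fields, and the example rule are my own hypothetical choices, not an established method.

```python
# Illustrative sketch only: one way to start making tacit knowledge
# machine-readable. Fields and example content are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class KnowledgeEntry:
    domain: str          # e.g. "credit review"
    rule: str            # the unwritten rule, finally written down
    source: str          # whose head it came out of
    evidence: str        # why we believe it
    last_reviewed: str   # keeps the entry from going stale

entry = KnowledgeEntry(
    domain="credit review",
    rule="Flag deals where reported margins improve while receivables age worsens.",
    source="Senior credit officer interview",
    evidence="Pattern seen in prior problem loans",
    last_reviewed="2026-02",
)

# Once it is structured like this, it can be versioned, searched,
# and handed to an AI system much as a codebase can.
print(json.dumps(asdict(entry), indent=2))
```

The format matters far less than the habit: get the unwritten rules out of heads and into something reviewable.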


Build a learning system, not a plan. Run fifty small experiments. Update your model monthly. Measure actual output, not feelings of productivity. The organisation that learns fastest will outperform the one that predicts most confidently, every single time.
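To make that third principle concrete, here is a minimal sketch of an experiment log that measures actual output rather than feelings of productivity. The experiment names, metrics, and numbers are hypothetical; the point is the shape of the loop, not the specifics.

```python
# Minimal sketch of a learning system: log experiments, measure output,
# review monthly. All names, metrics, and figures are invented.
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    hypothesis: str
    metric: str          # an actual output measure, not a feeling
    baseline: float
    result: float

    def lift(self) -> float:
        # relative improvement over baseline (higher is better here)
        return (self.result - self.baseline) / self.baseline

experiments = [
    Experiment("AI-drafted credit memos", "More memos per analyst",
               "memos per analyst per day", 2.0, 3.4),
    Experiment("AI meeting summaries", "More follow-ups closed",
               "actions closed per week", 12.0, 13.0),
]

# Monthly review: scale what moved the metric, kill what didn't.
for e in sorted(experiments, key=lambda x: x.lift(), reverse=True):
    print(f"{e.name}: {e.lift():+.0%} on {e.metric}")
```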


The Bottom Line

The old framing is binary. Something big is either happening or it isn't. The alarm rings. You run or you don't.


The new framing is messier, but truer. Something big is already happening. It's happening unevenly. The capability curve is ahead of the plumbing. The plumbing matters more than the demos. Software is sprinting ahead because its knowledge was machine-readable before AI arrived. Every other domain has to earn that advantage the hard way. And nobody, including the people building the technology, can tell you precisely how fast the gap closes.


This 'two exponentials' gap is where most of the arguments about how AI will affect us all actually live.


Amodei says we're near 'the end of the exponential.' What surprised him most wasn't the technology itself. It was 'the lack of public recognition of how close we are.' Eighty million people just got a glimpse. Now the question is what they do with it.


Until next time, you'll find me turning tacit knowledge into structured assets, running small experiments that fail cheaply, and resisting the considerable urge to pretend I know what March 2027 looks like.

 
 
 
