The Lobster Enlightenment

Why the dumbest statue in finance clarified everything about the AI moment
Sometimes you see something so profoundly stupid that it snaps everything else into focus.
That was my reaction to the photos of a giant lobster posed in front of the Wall Street bull. If you found that installation compelling -- a crustacean staring down the symbol of American capitalism -- I'm sorry, but you're part of the problem.
And yet, I'm grateful for it. Because that lobster gave me clarity on something I'd been struggling to articulate about AI.
This isn't a bubble. That's not the right word.
The Scale Is Real
Let's be precise about what's happening.
The Big Five hyperscalers -- Amazon, Microsoft, Google, Meta, Apple -- are on track to spend $600 billion on AI infrastructure in 2026. Alphabet alone announced up to $185 billion in capex. Total tech AI spending is approaching $700 billion this year.
We haven't seen capital deployed at this scale into a single technology since the industrial revolution. The comparisons to the dot-com bubble are inevitable, but they miss the point. The dot-com boom was about speculation on future revenue. This is about building physical infrastructure -- data centers, chips, power plants -- at a pace that will reshape the geography of compute for decades.
The promise is real. Cognitive labor is already changing. Embodied AI will eventually reshape physical labor too.
So no -- I'm not bearish on the technology.
What I'm bearish on is the idea that anyone actually knows what they're doing.
The Acqui-Hire Phase of Confusion
Here's the pattern that should concern you:
2024:
- Google hires Character.AI's founders for ~$2.5 billion
- Amazon hires Adept AI's team
- Microsoft hires Inflection's leadership

2025:
- Google hires Windsurf executives for $2.4 billion -- after OpenAI's $3 billion acquisition attempt collapsed
But here's what that pattern actually reveals: the well-capitalized companies don't know what they're doing either.
If Google, Microsoft, and Amazon had clear conviction about how to build the future, they wouldn't need to absorb every promising team that emerges. They're buying optionality because they're as confused as everyone else -- just with deeper pockets.
When you build something marginally interesting with AI in 2025-2026, you don't scale it. You get absorbed. That's not a sign of a healthy ecosystem. It's a sign of a market trying to convince itself it understands something that's still being figured out -- in public, at scale, with real money on the line.
The lobster reminded me of that.
The Adoption Gap Nobody Talks About
Here's the uncomfortable truth: almost no one actually uses this stuff.
ChatGPT claims 300 million weekly active users. That sounds impressive until you realize:
- Most usage is casual, one-off queries
- Enterprise adoption remains concentrated in tech-forward companies
- The people who use AI for everything -- the power users -- are a tiny, unrepresentative cohort:
  - A slice of tech Twitter
  - Some people under 25 (often using it poorly)
  - A small group of heavy users who've rebuilt their entire workflow around AI
Brain Fry Is Real
There's a term circulating now: AI brain fry.
A Harvard Business Review / BCG study of 1,488 U.S. workers found that 14% of heavy AI users report significant mental fatigue. The study identified a cognitive ceiling: three AI agents is about all most people can effectively manage.
The symptoms are familiar:
- Mental fog from constant context-switching
- Exhaustion from verifying outputs
- Cognitive overload from managing parallel workflows
- The feeling of being chronically online for 16-20 hours a day
I usually dismiss concerns like this. But I recognize the feeling. It's the same cloudiness people reported during peak-COVID internet life -- when being online around the clock flattened your thinking into a permanent haze.
AI didn't cause that. But it's accelerating it.
The K-Shaped Reality
People love to talk about a K-shaped future: elite users up, everyone else down, permanent underclass, etc.
But here's the part missing from that story:
The people at the top of that K are doing an unsustainable amount of work.
They're not being replaced by autonomous agents. They are the agents -- coordinating, supervising, correcting, and stitching together systems that don't quite work on their own. The productivity gains are real, but they come at the cost of cognitive bandwidth that humans weren't designed to sustain.
As one analysis put it: the top half of the K is soaring, but it's soaring on fumes.
Something has to give:
- Either workflows simplify dramatically
- Or cognitive burnout becomes the limiting factor
- Or adoption stalls outside a narrow elite
What the Lobster Taught Me
The lobster in front of the bull doesn't signal irrational exuberance to me.
It signals a market that's lost the plot.
We're in a phase where:
- Capital is moving faster than understanding. $600 billion in capex, but no one can articulate what the killer app is beyond "chatbots" and "code completion."
- Acqui-hires are the dominant exit. Build something interesting, get absorbed by a company that also doesn't know what to do with you.
- Adoption is narrower than the discourse suggests. The people who talk about AI the most are the least representative users.
- Cognitive limits are becoming visible. The brain fry is real, and it's hitting the heaviest users hardest.
- The K-shaped future is already here. But the top of the K is running on unsustainable workloads.
The capital is there. The technology is there. But the understanding isn't yet. We're building the future while still figuring out what it's for.
And that's the most dangerous phase of any technological revolution -- not when the hype exceeds the reality, but when the capital exceeds the clarity.
The lobster knew.