
The Red Wedding of AI: What a Defensible AI Company Actually Looks Like
Every era of software has a defining way you win.
Mobile was distribution.
SaaS was switching costs.
AI is something else. AI is the era where your platform partner can watch you grow... and then kill you.
Call it the Red Wedding problem, after the Game of Thrones massacre where the hosts turn on their own guests.
Foundation model labs -- OpenAI, Anthropic, Google -- are not normal infrastructure vendors. They can observe usage patterns, see what application-layer startups are building, and ship the feature natively 6-12 months later.
And they are getting faster.
If you are building an AI company today, there is one question that matters:
Does your company survive a 3x capability jump from your model provider?
If the answer is no, you are not building a company. You are running an arbitrage.
This is the framework for what actually makes an AI company defensible and how to invest accordingly.
The Kill Pattern: How AI Startups Die
The Red Wedding usually follows this sequence:
- A startup finds a valuable workflow on top of GPT/Claude.
- It scales quickly because distribution and code are cheap.
- The model provider ships the feature natively.
- Growth stalls. Multiples compress. Equity value evaporates.
The casualties are familiar:
- Prompt-engineered writing tools
- Persona generators
- Coding assistants that add minor UX around existing APIs
- "AI for X, but faster"
Why?
Because they don't control anything unique.
LLM inference costs have collapsed over 90% in just a few years. The barrier to building an MVP is now measured in weekends. Code is no longer a moat. Feature complexity is no longer a moat. Prompt cleverness is definitely not a moat.
If your advantage is "we use AI better," that advantage has a half-life.
The Lai Test: Minimum Viable Moat
Jon Lai framed it well: the defining concept for AI apps is the Minimum Viable Moat -- the smallest edge that survives a 3x capability jump from your model provider.
<https://x.com/Tocelot/status/2037185591440511088?s=20>
Let's make that more concrete.
If GPT-6 ships tomorrow and:
- Gets 10x better at reasoning
- Cuts inference cost by another 80%
- Bundles your core feature into ChatGPT
...does your company become:
- More valuable?
- Or obsolete?
If the model improvement makes your unique asset more valuable, you are defensible. If it replaces your core value, you are not.
That's the dividing line.
What a Defensible AI Company Actually Looks Like
There are five durable moats in the AI era.
1. Proprietary Data Flywheels
This is the strongest moat in AI.
A defensible AI company owns data the foundation models do not have and cannot easily get.
Not scraped web data.
Not public PDFs.
Not "we fine-tuned on open datasets."
I mean:
- Millions of annotated sales calls (Gong).
- Billions of real design decisions across teams (Figma).
- Private clinical data inside hospital systems.
- Operational sensor data from factories.
- Transaction-level workflows inside enterprises.
The test has two parts. First: does your product generate new, non-public data every time someone uses it?
And:
Does that data improve the product in a way competitors cannot replicate?
If yes, you have a flywheel.
If no, you have a wrapper.
A proprietary data flywheel compounds over time. A wrapper depreciates as models improve.
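To make the flywheel mechanical, here is a minimal sketch in Python. Everything in it is hypothetical -- `Interaction`, `DataFlywheel`, and the retraining trigger are invented for illustration, not any real product's API -- but the shape is the point: every use of the product emits a labeled, non-public example.

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    """One use of the product: proprietary context, our call, the real outcome."""
    context: str     # e.g., the annotated sales call only our product sees
    prediction: str  # what the system recommended
    outcome: bool    # ground truth observed later (deal closed, claim paid, ...)

@dataclass
class DataFlywheel:
    """Usage -> labeled proprietary example -> better model -> more usage."""
    corpus: list[Interaction] = field(default_factory=list)

    def record(self, interaction: Interaction) -> None:
        # Every interaction becomes a training example competitors
        # cannot scrape: it exists only inside the product.
        self.corpus.append(interaction)

    def ready_to_retrain(self, batch_size: int = 1000) -> bool:
        # Retrain whenever enough new proprietary labels have accumulated.
        return len(self.corpus) >= batch_size
```

A competitor who clones the code perfectly still starts with an empty `corpus`. The gap widens with every interaction.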
2. Workflow Integration Depth (Beyond Switching Costs)
Traditional SaaS talked about switching costs. AI demands something deeper.
There are four levels of workflow integration:
Level 1: Storage
You hold data. Weak moat. Migration tools are getting better.
Level 2: Process Execution
You run automations. Moderate moat.
Level 3: Decision Automation
You make decisions on behalf of users -- routing tickets, approving claims, prioritizing leads. Strong moat.
Level 4: Institutional Memory
You understand the nuance of the organization. You know how Sarah interprets "urgent." You know which deals stall. You know which vendors fail.
At Level 4, your product is not just software.
It is embedded judgment.
Replacing you means retraining institutional intuition. That is painful, risky, and slow.
AI companies that aim directly for Level 3-4 integration are defensible.
AI companies that sit at Level 1-2 are feature vendors.
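A sketch of what Level 3-4 feels like in code, using a hypothetical ticket router (every name here is invented for illustration). The routing logic is trivially replicable; the learned organizational memory it consults is not:

```python
# Hypothetical Level 3-4 router: the code is replaceable,
# the learned organizational memory is not.
ORG_MEMORY = {
    # Learned from this org's history, not shipped with the product:
    # when Sarah says "urgent", it has meant a same-day incident.
    ("sarah", "urgent"): "p0",
    # Acme's "delay" notices have historically cascaded downstream.
    ("acme", "delay"): "escalate",
}

def route_ticket(author: str, text: str) -> str:
    """Level 3: decide on the user's behalf.
    Level 4: decide using context no generic model has seen."""
    for (entity, keyword), action in ORG_MEMORY.items():
        if entity == author.lower() and keyword in text.lower():
            return action
    return "triage"  # generic fallback, same as any competitor

print(route_ticket("Sarah", "This is urgent, checkout is down"))  # p0
```

Ripping this out means rediscovering every entry in that table from scratch, with real tickets and real mistakes.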
3. Real Network Effects (Not Vanity Metrics)
"10,000 users" is not a moat.
A network effect exists when the product becomes more valuable to each user as more users join.
Marketplaces are the cleanest example.
Imagine a vertical AI marketplace for independent funeral homes trading excess inventory. With 5 participants, it's useless. With 500, it's liquid. A competitor can clone the software in a week -- but they cannot clone the network density.
Other examples:
- Talent marketplaces enhanced by AI matching.
- Supplier discovery networks.
- Developer ecosystems where third parties build on your platform.
If you cloned this product perfectly tomorrow, could you replicate the value without replicating the users?
If the answer is no, you have something real.
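The jump from useless to liquid is just compounding probability. A back-of-envelope sketch in Python, assuming (purely for illustration) that any given pair of participants matches with 1% probability:

```python
def match_probability(n_participants: int, p_pair: float = 0.01) -> float:
    """Chance a listing finds at least one counterparty, assuming each of
    the other n-1 participants independently matches with p_pair."""
    return 1 - (1 - p_pair) ** (n_participants - 1)

print(f"{match_probability(5):.1%}")    # ~3.9%  -- useless
print(f"{match_probability(500):.1%}")  # ~99.3% -- liquid
```

Same software, roughly a 25x difference in the odds any listing clears. The moat is the density, not the code.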
4. Distribution You Own (Not Rent)
In the AI era, the cost of building has collapsed -- which makes distribution the scarce asset.
Everyone can ship a product. Everyone can run ads. Everyone can plug into the same APIs.
Owning distribution means:
- You control a niche community.
- You have a media presence that aggregates attention.
- You have a trusted brand inside a vertical.
Why? Because distribution is now scarcer than code.
If your user acquisition depends entirely on paid ads or platform algorithms, your leverage is fragile.
If your distribution is organic, trusted, and domain-specific, you control a strategic asset.
5. Trust and Security as Infrastructure
This one is under-discussed.
As enterprises push more private data into AI systems, paranoia becomes structural.
Who guarantees:
- My proprietary data isn't being used to train models?
- My workflows aren't leaking?
- My AI layer is compliant?
The vendor who can answer those questions credibly wins the deployment. Trust compounds. Once embedded, it is extremely hard to displace.
In highly regulated sectors -- healthcare, finance, defense -- this moat may be stronger than any data advantage.
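One way "trust as infrastructure" shows up concretely is as an explicit, machine-checkable data-handling policy. A hypothetical sketch -- the field names and the 30-day bar are invented for illustration, not any vendor's real contract:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataHandlingPolicy:
    """An auditable answer to the enterprise's three questions."""
    train_on_customer_data: bool          # is my data training models?
    retention_days: int                   # how long do my workflows persist?
    compliance_regimes: tuple[str, ...]   # e.g., ("HIPAA", "SOC 2")

    def enterprise_ready(self) -> bool:
        # The bar regulated buyers tend to apply (illustrative thresholds).
        return (not self.train_on_customer_data
                and self.retention_days <= 30
                and "SOC 2" in self.compliance_regimes)

policy = DataHandlingPolicy(False, 30, ("HIPAA", "SOC 2"))
print(policy.enterprise_ready())  # True
```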
What Is Not a Moat in AI
Let's be blunt.
The following are not durable advantages:
- Prompt engineering skill.
- Fine-tuning on mostly public data.
- Being "first to market."
- Slight UX improvements over ChatGPT.
- Cost arbitrage.
- Better model selection logic.
They can generate revenue. They can even generate hype.
They rarely generate durable equity value.
The Investor's Red Wedding Checklist
If you're investing in AI companies, here's the diagnostic framework.
Ask these questions in order.
1. What unique asset does this company control?
If the answer is:
- "Great engineers"
- "Strong AI capabilities"
- "Brand"
If the answer is:
- "Non-public, compounding data"
- "A dense two-sided network"
- "Deep institutional integration"
- "Regulatory trust position"
2. If foundation models improve dramatically, does this company get stronger or weaker?
Stronger:
- Their data layer becomes more valuable.
- Their automation gets more powerful.
- Their network gets more liquid.
- Their decision engine gets better.
Weaker:
- The core feature becomes table stakes.
- The user experience is replicated natively.
- The pricing power collapses.
3. Is the moat structural or behavioral?
Structural moats:
- Data ownership.
- Network density.
- Integration depth.
- Regulatory barriers.
Behavioral moats:
- Users like the product.
- It's convenient.
- It's trendy.
Behavioral moats evaporate under competitive pressure. Structural moats do not.
4. Can this be rebuilt in a weekend?
The "Weekend Test" is brutal but clarifying.
If a talented solo founder with access to APIs could rebuild 80% of this product in two days, the moat must live elsewhere -- in data, integration, or network.
If everything valuable is visible in the interface, you are in danger.
The Big Misconception: Revenue ≠ Defensibility
The AI cycle has created a strange phenomenon:
It is easier than ever to reach $1M ARR.
It is harder than ever to justify a durable multiple.
A startup can hit millions in revenue with:
- Strong positioning.
- Smart distribution.
- Model wrappers.
- Good UX.
None of that guarantees the revenue survives the next model release.
The market is starting to price this in.
The next generation of category leaders will not just "use AI well." They will own something AI cannot replace.
What to Build (If You're a Founder)
If you're building in AI right now:
- Design your product so usage generates proprietary data.
- Integrate deeply into decision workflows early.
- Focus on vertical depth over horizontal breadth.
- Embed trust and compliance from day one.
- Consider owning distribution before shipping product.
The goal is not to outrun model improvements. The goal is to build the system those improvements flow through -- so every capability jump makes your product stronger.
The Survivors of the Next Feast
The Red Wedding will keep happening.
Foundation model labs will keep shipping up the stack.
Capabilities will keep commoditizing.
Costs will keep collapsing.
The survivors will not be the ones with the best prompts.
They will be the ones who built something the models need but cannot generate:
- Proprietary data.
- Dense networks.
- Embedded institutional memory.
- Trusted infrastructure.
- Owned distribution.
If your company shrinks as AI improves, you are not building a moat.
You are building a demo. And demos do not survive weddings.