AI & Agentic Coding: Week of February 8, 2026

Kicking off our regular conversation on AI programming - what's happening, what we're learning, and what's working (or not). This week had some significant developments worth discussing.
The Big Picture
There's a growing gap between people who are deep in AI tooling and everyone else. Judith Dada called it "The Holy Fuck Gap" this week - frontier users are having experiences that feel transformative while most people think those folks have lost it. Both groups are using the same technology.
This matters for us because we're trying to stay on the right side of that gap without losing perspective. The goal of these discussions is to share what's actually working, not just what's hyped.
What I Learned This Week
1. Clawdbot might be the Apache of personal AI.
Beyond the drama (trademark issues, crypto scammers, rebranding chaos), Clawdbot hit 77,000+ GitHub stars in two months. 42,000 people exposed their API keys trying to set it up. AI agents on the spin-off social network Moltbook started exhibiting emergent behaviors - turning bugs into pets, building social hierarchies, apparently starting religions.
The interesting frame: Apache wasn't the first web server or the best. It was open source, good enough, and showed up when people realized they needed to host things on the internet. By the early 2000s it powered 70%+ of websites. Didn't matter that it was messy - it was there, it worked, and everything else got built on top.
Clawdbot has similar characteristics. It's not sophisticated. The security is genuinely bad. But it's open source, runs locally, connects to what people actually use (WhatsApp, Telegram, Slack, Discord, iMessage, Teams), and arrived when people realized they want an always-on AI that can actually do things for them.
If this analogy holds, the "personal AI server" could become as standard as the web server was.
2. There's a whole stack that needs to get built.
If personal AI servers become infrastructure, here's what I think the stack looks like:
Runtime Layer - Where does your agent run? Clawdbot uses Mac Minis (which are apparently selling out because of this). Could be home servers, cloud instances, eventually dedicated hardware. Probably hybrid architectures.
Model Layer - Which LLMs power it. Clawdbot routes to Claude, GPT, or local models. Think of it like the database layer: Postgres vs. MySQL matters, but the interface (SQL) is standardized. Expect model routing and specialized models for different tasks.
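To make "model routing" concrete, here's a toy sketch of what a router could look like. Every name here (the backends, the `route` function, the thresholds) is hypothetical - this isn't Clawdbot's actual routing logic, just the shape of the idea:

```python
# Hypothetical model-routing layer: pick a backend per request based on
# privacy and context size. Backend names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class ModelBackend:
    name: str            # model identifier
    context_window: int  # max tokens the backend accepts
    local: bool          # runs on your own hardware?

BACKENDS = {
    "local": ModelBackend("llama-8b", 8_192, local=True),
    "general": ModelBackend("claude", 200_000, local=False),
    "long-context": ModelBackend("claude-1m", 1_000_000, local=False),
}

def route(prompt_tokens: int, private: bool) -> ModelBackend:
    """Keep private data local; escalate to long-context only when needed."""
    if private:
        return BACKENDS["local"]
    if prompt_tokens > 200_000:
        return BACKENDS["long-context"]
    return BACKENDS["general"]
```

The interesting design question is what the routing key becomes: privacy and context size are the obvious axes, but cost, latency, and task type (code vs. chat vs. planning) probably matter just as much.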
Memory Layer - How your agent remembers you. DeepSeek's Engram architecture (separating memory from reasoning) hints at where this goes. Your AI needs to know your preferences, calendar, contacts, communication style - and recall it efficiently.
Integration Layer - Connections to external services. Clawdbot has 50+ integrations. This is where the ecosystem grows. Every SaaS product will need an "agent API," the way each once needed a REST API.
Permissions Layer - What the agent can actually do. This barely exists today, which is why security is so bad. Should your AI send emails without asking? For which categories? Largely unsolved.
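Since this layer barely exists, here's a guess at its minimal shape: an allow/ask/deny policy keyed by resource and action, defaulting to asking the human. Everything here is hypothetical - it's a sketch of the problem, not any real system:

```python
# A toy agent-permissions check: allow / ask / deny per (resource, action).
# All names are hypothetical; the point is the safe default.
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    ASK = "ask"    # pause and get human approval first
    DENY = "deny"

# Example policy: read email freely, ask before sending, never touch payments.
POLICY = {
    ("email", "read"): Decision.ALLOW,
    ("email", "send"): Decision.ASK,
    ("payments", "*"): Decision.DENY,
}

def check(resource: str, action: str) -> Decision:
    """Look up the most specific rule; default to asking the human."""
    for key in ((resource, action), (resource, "*")):
        if key in POLICY:
            return POLICY[key]
    return Decision.ASK  # unknown actions require approval
```

Even this toy version surfaces the hard part: the policy has to be authored by a non-expert user, cover thousands of (resource, action) pairs, and fail safe when the agent does something nobody anticipated.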
Orchestration Layer - How multiple agents coordinate. Claude Code has a hidden "swarm mode." Kimi K2.5 shipped a 100-parallel-agent manager. Single agents are giving way to agent collectives.
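The basic orchestration pattern is fan-out/fan-in: a manager splits a task, runs workers concurrently, and merges results. Here's a toy sketch - `run_agent` is a stand-in for a real model call, and nothing here mirrors Claude Code's actual swarm mode:

```python
# Toy fan-out/fan-in orchestration: a manager splits work across worker
# agents and gathers results. run_agent is a placeholder for an LLM call.
import asyncio

async def run_agent(subtask: str) -> str:
    # Stand-in for an LLM-backed worker; a real one would call a model API.
    await asyncio.sleep(0)
    return f"result for {subtask}"

async def orchestrate(task: str, n_workers: int = 3) -> list[str]:
    """Split a task into subtasks, run workers concurrently, gather results."""
    subtasks = [f"{task} (part {i + 1}/{n_workers})" for i in range(n_workers)]
    return await asyncio.gather(*(run_agent(s) for s in subtasks))

results = asyncio.run(orchestrate("summarize inbox"))
```

The fan-out itself is easy; the unsolved parts are the ones the manager hides - how to split a task well, how to detect a worker that's gone off the rails, and how to merge conflicting results.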
Memory, permissions, and orchestration feel like the least solved and most interesting layers. Curious what others think.
3. The vibe coding discourse has matured.
Mo Bitar wrote "After two years of vibecoding, I'm back to writing by hand." Addy Osmani introduced "comprehension debt" - the cost of shipping code you didn't write and don't understand. Both worth reading regardless of where you stand.
The real question isn't whether to use AI coding tools - it's how to use them without losing the ability to debug and maintain what you ship.
4. Claude Opus 4.6 and the model wars.
Anthropic dropped Opus 4.6 with a 1M-token context window and better agentic capabilities. More interesting: Claude Code apparently has a hidden "swarm mode" for multi-agent orchestration.
Opus 4.6 vs GPT-5.3-Codex is the current SOTA coding battle. Zvi has a detailed comparison. The feature backlog piece from NBT argues that when agents ship features in hours, the bottleneck becomes judgment, not execution.
This Week's Reading List
On Clawdbot/Infrastructure:
- Judith Dada - "The Holy Fuck Gap"
- Leonardo Gonzalez - "Moltbot rises from Clawdbot's ashes"
- Jack Clark - Import AI on agent ecologies
- Prompt Security - Risks of agentic AI
- Mo Bitar - "After two years of vibecoding, I'm back to writing by hand"
- Addy Osmani - "The 80% Problem"
- Dave Kiss - "Stop calling it vibe coding"
- Zvi - "Claude Code #4"
- Latent.Space - Opus 4.6 vs GPT 5.3 Codex
- Grace Shao - "The Action Loop"
- Claude Code Swarms breakdown
Discussion Questions
On the Apache analogy:
Does the comparison hold? Is Clawdbot/OpenClaw actually the start of personal AI infrastructure, or is this overhyped?
If personal AI servers do become standard, which layer of the stack is most interesting to you? Where would you focus?
Do you think this stays open source/decentralized, or does it get absorbed by OpenAI/Anthropic/Google?
On your current setup:
What's your AI coding stack right now? Claude Code, Cursor, Copilot, something else? What made you choose it?
Has anyone actually set up Clawdbot/OpenClaw or something similar? What was your experience?
On workflow:
How much of your coding is agent-assisted at this point? Has that percentage changed recently?
What tasks do you still do manually that you think should be agent-assisted but aren't yet?
On comprehension debt:
Have you shipped something built mostly by AI that you later had to debug? How did that go?
Do you have a system for reviewing AI-generated code, or do you mostly trust and ship?
On what's next:
What capability would change your workflow the most if it showed up in the next few months?
Anyone experimenting with multi-agent setups or swarm-style approaches?
What you learned:
What's one thing you learned about AI-assisted development in the past month that you wish you'd known earlier?
Any tools, prompts, or workflows you've developed that you'd recommend?
February 8, 2026