A short build log about a feature I never expected to write, leading to musings about why an open source PR is now a donation in two currencies, and what an agentic OSS economy looks like once the friction collapses.


Like most of the readers in my bubble right now, I was using Claude Code to wire up a personal CRM that reads, among other sources, my WhatsApp messages. The Skill was working (markdown wrapping a third-party Go CLI built on whatsmeow, send tested end-to-end), and then I asked it for the chat history. It returned 20 chats: I have hundreds.

I went down the protocol-level rabbit hole first: the protocol does have a real history limit, the phone showed “Chat history: Paused” on the linked client, and that sounded conclusive enough to send me into the entire research-and-fork loop. Only after the patched binary was running and pulling thousands of new messages did I notice that chats list defaults to --limit 20: the local DB had 828 chats all along, and a one-line flag bump would have shown them. The fork earned its keep anyway, since I then discovered a real depth problem (most of those 828 chats had a single anchor message and nothing more, the rest sitting on my phone and never pushed to the linked client) which did require adding a way for the CLI to ask for a chat’s history. Still, I’ll admit that Claude and I happily went about building the whole thing before noticing the default: we seem to share a jump-the-gun attitude to coding, which is a known GenAI problem… but I digress.

My agentic hour, in order

(Non-technical peer: feel free to skim the agentic flexing in this section, the actual argument resumes in the next one.)

I don’t write Go. I can read it (probably still), have never opened a go.mod from scratch, and the CLI was 1,200 lines of someone else’s Go I’d never seen. In a previous era this is where the story ends: file a feature request, hope someone with the right neurons gets bored on a weekend, or just shrug and ship a workaround.

Instead, in one hour: an agent inventoried the upstream issues, PRs, branches, and 22 forks (we’ll come back to the 22 forks); a second pass surfaced 3 independent codebases that had already solved the underlying problem against whatsmeow (mautrix’s bridge, openclaw/wacli, and a wa-agent-pipeline 6 days old at time of writing); a read of the codebase located the right hook (the existing handler already processed the right protocol event, so my entry point could reuse it); the patch went in across 6 files (a store method for the chat’s oldest message, a client method that builds the history request as a peer message, a command that walks the cursor backward and stops when it doesn’t move); the mock interfaces were updated until Go’s structural typing stopped complaining; go test ./... came back green; validation against the real account showed the cursor stopping on a chat that had nothing older, and thousands of new messages on one that did; the issue went up with the working PoC linked and the 3 reference implementations cited. (Issue #20.)

One agentic hour later

What was new in that loop wasn’t the code (3 other projects had already shown the way), or the diagnosis, or the patch; what was new was that I was in the loop at all. The friction was never intellectual, it was the cost of paging in someone else’s codebase and idiom in a language I’d otherwise have to commit to learning, and that cost has collapsed.

But this isn’t really a post about my agentic hour; it’s about the 22 other forks the agent found while inventorying the repo.

The CLI is 6 months old, with 162 stars and 22 public forks; its closed-issue list runs to 8, the busiest thread topping out at 5 comments; the merged-PR list has 5 entries from 4 external developers. The maintainer pushes daily, merges small PRs in 1-2 days, and has invited the community to “propose a design and send a PR”; the bottleneck isn’t him.

The bottleneck was nowhere in particular. 22 people cared enough to fork; 4 ever shipped a contribution back; the other 18 are running their own copies, presumably solving their own problems, presumably reinventing each other’s wheels in private. And those are just the public ones. Every consultant, every weekend hobbyist, every team using this in a closed-source product has its own private fork too, it goes without saying (sometimes a licence violation; here it’s murkier still, since the repo has no licence at all, which strictly speaking leaves all rights reserved rather than granting any).

I’m not blaming anyone in this. I almost became the 19th wheel-reinventer earlier today and only didn’t because the alternative was suddenly cheaper. What was holding them back? Not unwillingness, in most cases, and not company policy either, since the thing being forked rarely contains the secret sauce: it’s the boilerplate, the integration glue, the missing flag, the sensible default. The cost was participation: read the codebase, learn the idiom, satisfy the test scaffold, address review and follow-up. For a small fix in a project you didn’t write, in a language you don’t think in, that cost dwarfed the value, so forking was cheaper, and 22 forks were the result.

That equation flipped today, and it flipped specifically because every step of the loop except having an opinion is now agent-cheap.

I don’t want to oversell my flavor of this. Simon Willison described the same loop last November, contributing to OpenAI’s Rust CLI without writing Rust, and Vjeux ported 100k lines of TypeScript to Rust at the maximalist end of the same curve. So this isn’t a first essay, just my flavor of the pattern, with one argument I haven’t seen made elsewhere.

The agentic OSS re-rise

There’s a story being told about AI-and-software-development that goes: the agent writes the codebase, the human directs, lines-typed-by-humans approaches zero. It’s true and getting truer faster than I expected. Boris Cherny said in May 2025 that ~80% of Claude Code (the tool I’m writing inside) was written by Claude Code itself; by June, Mike Krieger called it 90-95%; in March 2026 Cherny tweeted it had reached 100%, with his prior-month output of 259 PRs all Claude-written; Anthropic-wide, Fortune cited a 70-90% range in January. (LessWrong is right that the metric is undefined, but the direction is clear.) The big-product, closed-source story is real and getting more real every week.

The quieter story alongside it is the 18-of-22 case. The same drop in cost-of-typing that lets a small Anthropic team ship Claude Code also lets a much larger, less coordinated population of small contributors fix small things in projects they don’t own. The risk of cheap code is everyone reinventing the same wheel privately, in their own context window; the opportunity is the opposite: the boilerplate and the integration glue (the not-secret-sauce parts of what every team builds) flowing back into the projects that should hold them. That’s what history extend is, structurally: not my secret sauce, not even a feature of any product I’m building, just a missing primitive in a community CLI that 22 forks were silently working around.

The part of the framing I want to put at the centre is this: when I publish my patch upstream, I’m not just contributing the code, I’m contributing the LLM token spend that produced it, the tokens that mapped the repo, the tokens that surfaced the 3 prior implementations, the tokens that designed the cursor-walk, the tokens that debugged Go’s structural typing complaints, the tokens that researched whether anyone had filed this issue before. If my fork stays private, the next person who hits the same gap pays those tokens again, in their own context window; 22 public forks is 22 sets of duplicated token spend, and the dark-matter private forks layer who-knows-what on top. Token spend has a real cost in dollars and in GPU compute, and at scale it’s the kind of scarcity worth pooling. Open source has always been a public good; in the agent era it also pools the upstream cost of getting to the answer, not just the answer itself, so a PR is now a donation in two currencies.

That changes what contributing back means: it used to mean “I wrote some code that you can use for free,” and it now also means “I burned the tokens to figure this out so you don’t have to.” The civilisation-level efficiency of that is non-trivial; we’ve been cheerfully leaving it on the table because contributing was harder than re-burning the tokens ourselves, a trade-off that’s suddenly bad on both sides.

There’s a precedent for this kind of pooling, just not in software. SETI@home and the BOINC family spent two decades harvesting spare CPU cycles from millions of laptops; the newer decentralised-GPU networks (io.net, Akash, Render, Bittensor) generalise the instinct into deliberate compute marketplaces, settled in stablecoins on public blockchains with no trusted middleman required. What’s structurally new in 2026 is that the analogy applies to writing software, not just to running it. If code is increasingly agent-written, then any given patch is increasingly a proxy for compute spend rather than for programmer ingenuity (again, secret sauce excluded), the kind of statement that doesn’t help anyone’s hiring narrative (mine included) but that I find I can’t argue with; sharing the patch back upstream is the OSS analog of returning a SETI work-unit’s output to Berkeley, the asymmetric one-to-many flow that makes the whole network worth participating in.

(Francesco Ciuccarelli made an adjacent and darker point, on the labor side rather than the OSS side: “Gli agenti, a differenza delle persone, non lasciano l’azienda […] la conoscenza non risiede più esclusivamente nelle persone, ma viene progressivamente codificata nei sistemi.” For non-Italian readers: agents, unlike people, don’t leave the company; knowledge no longer lives only in heads, it’s being codified into the systems companies own. The same externalisation that makes OSS contribution this cheap also shifts leverage from people to companies. He’s right; both stories run at the same time, and “open source economy” reads warmer if you don’t squint at who allocates the token budget on the company side.)

The skill that’s getting more valuable, not less, is having opinions about the projects you depend on, because what you’re contributing has gotten richer at the same time as the friction of contributing has collapsed.

Being built, and what’s to come

Worth flagging that the contribution loop itself is being rebuilt for agents in real time. GitHub launched Agent HQ at Universe in October 2025 with first-class primitives for parallel agent sessions and “managing how agents interact with each other”; GitHub Agentic Workflows and Atlassian’s Rovo Agent Connector followed in February, both converging on Google’s A2A protocol, now governed by the Linux Foundation alongside MCP. Today my issue #20 is a human-shaped artifact (prose, a hyperlink, an open question about a stalled migration); on a one-year horizon it can be a structured object the maintainer’s triage agent picks up, validates against the repo’s conventions, runs the test scaffold over my fork, and returns with a one-line “looks safe to accept; here’s the conflict surface with #11”. The non-secret-sauce part of every contribution loop becomes infrastructure, and at the limit even the merging happens autonomously, inside an open source economy that runs on agents but stays governed by humans, who set only the token budget that feeds it and the strategic direction that tells each project what to become.


The phone is still warm from the QR scan. The fork is at feat/history-extend on my GitHub, the issue is open upstream, the backfill is grinding through 828 chats in the background as I write this, and I’ve still never written a hand-rolled Go file in my life.

22 of us forked, and 18 stayed silent because it wasn’t worth the cost. The cost just changed, and what flows back upstream is no longer just code, it’s the compute that produced it.

What this can become, if I’m reading the curve right, is an open source economy that runs autonomously on agents but stays governed by humans. Maintainers as stewards of judgment, contributors as directors of effort, the duplication tax finally retired.

The loop runs from here.