Does this mean that Claude is the 'infrastructure' layer, the wrapper, so the LLM can be swapped so cost could be reduced? What is the benefit long term for Claude?
There are different takes:
1. Convergence has been happening on two axes. Harness designs are converging, and model behavior inside them is converging too, since labs train on Claude Code / OpenClaw / Cursor loops. Neither the model nor the harness is a moat anymore. What wins: distribution, capacity, ecosystem, speed, and trust.
2. They want to offload capacity. They are clearly hitting the compute wall: "let's see how other labs handle Cowork's agentic loops while we keep users in our UI." Models are easy to swap at any time (see 1). Though I doubt that's the main reason.
3. Anthropic doesn't want to promote this for individual users, but can't block it without hurting enterprise deals. That would explain the silence.
More:
- The individual path is documented, not an accident. Anthropic's docs don't explicitly say end users get this; the install section just tells you to "enter the values supplied by your administrator."
- But on a personal install, with no administrator in the picture, those fields sit there for you to set.
- And requiring a subscription from an enterprise admin who just wants to test what's possible before promoting Cowork in the organization wouldn't make sense.
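To make the "fields sit there for you to set" point concrete, here's a sketch of what that kind of gateway configuration typically looks like. Note the variable names below are hypothetical placeholders for illustration, not Anthropic's documented settings; the docs only say the values come from your administrator.

```shell
# Hypothetical sketch — names are placeholders, not Anthropic's actual config keys.
# On a personal install there is no administrator, so you fill these in yourself:
GATEWAY_BASE_URL=https://your-llm-gateway.example.com  # where the swapped-in model is served
GATEWAY_API_KEY=your-gateway-credential                # auth for that gateway
MODEL_NAME=your-preferred-model                        # model the agentic loop should call
```

The point is that nothing in a config shape like this distinguishes an enterprise admin from an individual; whoever can edit the values can point the harness at another provider.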
My biggest bets: 1 and 3.
I think Claude is a bit more than a wrapper, since you're still using Anthropic's interface for your tasks, projects, and use cases. But it's not a full infra layer either, as the swapped-in LLM is hosted on another vendor's cloud service or on your local machine.
The benefit is that users can reduce costs while still staying Claude subscribers. But everyone's mileage may vary.
In this AI race, competitive advantage erodes fast, so Anthropic is trying to defend its position with a differentiated and fast-evolving experience for getting value out of AI in multiple ways - dispatch, skills, tasks, etc. That in turn increases switching costs and ultimately protects customer LTV, since users will still pay subscription fees.