OpenClaw is the self-hosted orchestrator that drives Claude Code, Codex, and Gemini CLI under one harness — useful when you need multi-model routing and unattended runs. But after Anthropic's April 2026 carve-out, you cannot use a Pro/Max subscription with it. Below: when Claude Code earns the default slot, and when the harness pays back its API costs.

OpenClaw exists because some teams want to drive Claude Code, OpenAI Codex, and Gemini CLI under one harness. It is a real piece of software with real users, and as of April 2026 it sits in a more awkward commercial position than it did a year ago: Anthropic prohibits using a Claude Pro/Max subscription with OpenClaw, so any team running it pays full API rates for every Claude call.

The result is a sharper buying decision than the surface comparison suggests.

When Claude Code wins

The default for most engineering teams under 50 engineers. The Pro/Max subscription is priced for individual usage, the harness is mature, and the integration with MCP, Cowork, and the rest of the Anthropic ecosystem is first-party. If your usage profile is 'a developer running an agent on their laptop during the day', you are in the Claude Code lane.

The places Claude Code is weakest: multi-model routing (you cannot ask the same harness to use GPT-5 nano for cheap classification and Sonnet 4.6 for refactoring), unattended overnight runs across many parallel sessions, and self-hosted observability with custom logging.

When OpenClaw earns its place

  • Multi-model routing. You want to send classification work to a cheap model and reasoning work to an expensive one without your developers having to think about it.
  • Unattended fleets. Overnight runs that span hours and dozens of concurrent sessions. Per-call API costs get large here, but this is also the only scale at which the harness is genuinely needed.
  • Self-hosted ops. You want every model call logged in your own observability stack. Compliance teams sometimes require this.
  • Cross-vendor portability. You're nervous about Anthropic-specific lock-in and want a layer that abstracts the model away from the agent.

OpenClaw is for teams already paying API rates and already in the GPU-ops business. For everyone else, Claude Code is the answer.
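The routing bullet above is the clearest of the four to make concrete. A minimal sketch of the idea, cheap model for classification, expensive model for reasoning, picked by task kind so developers never choose manually. The model identifiers and the `route()` interface here are illustrative assumptions, not OpenClaw's actual API:

```python
# Sketch of task-kind-based model routing. Model ids and the route()
# signature are ASSUMPTIONS for illustration, not OpenClaw's real API.

CHEAP_MODEL = "gpt-5-nano"      # assumed id for the cheap classifier
EXPENSIVE_MODEL = "sonnet-4.6"  # assumed id for the reasoning model

def route(task_kind: str) -> str:
    """Pick a model by task kind; fall back to the strong model."""
    routes = {
        "classify": CHEAP_MODEL,
        "extract": CHEAP_MODEL,
        "refactor": EXPENSIVE_MODEL,
        "plan": EXPENSIVE_MODEL,
    }
    # Unknown task kinds default to the expensive model: wasting tokens
    # is cheaper than giving a hard task to a weak model.
    return routes.get(task_kind, EXPENSIVE_MODEL)
```

The whole value proposition is that this table lives in the harness, not in each developer's head.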

The April 2026 carve-out

Anthropic's policy change matters because it removes the cheapest entry point into OpenClaw. Before the carve-out, an engineer could run OpenClaw on top of an existing Claude Pro subscription and get most of the harness benefits at no marginal cost. After it, every Claude call through the harness is billed at full API rates, which for an active engineer is €40-150/month at minimum.
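The back-of-envelope arithmetic behind a figure like that is worth showing. In the sketch below, the per-million-token prices and daily token volumes are assumptions chosen for illustration, not Anthropic's published rates; plug in your own numbers:

```python
# Back-of-envelope monthly API cost for one engineer on pay-per-call
# billing. Prices and volumes below are ASSUMPTIONS for illustration,
# not published Anthropic rates.

def monthly_api_cost(
    input_mtok_per_day: float,      # millions of input tokens per workday
    output_mtok_per_day: float,     # millions of output tokens per workday
    price_in_per_mtok: float = 3.0,   # assumed $/1M input tokens
    price_out_per_mtok: float = 15.0, # assumed $/1M output tokens
    workdays: int = 21,
) -> float:
    daily = (input_mtok_per_day * price_in_per_mtok
             + output_mtok_per_day * price_out_per_mtok)
    return daily * workdays

light = monthly_api_cost(1.0, 0.2)  # ~1M in, ~0.2M out per day: ≈ 126
heavy = monthly_api_cost(3.0, 0.8)  # ~3M in, ~0.8M out per day: ≈ 441
```

Even the light-usage estimate lands in the same band as the €40-150/month floor quoted above, which is the point: casual OpenClaw use no longer rides free on a subscription.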

The change does not break the harness. It does mean that smaller teams who were running OpenClaw casually now have a clearer commercial signal: either you have the volume that justifies API billing, or you do not, and Claude Code is the cheaper home.

A simple decision rule

Three questions: do you need multi-model routing in the same session? Do you run unattended fleets? Do you have a compliance requirement that demands self-hosted observability? Two yeses, and OpenClaw earns its keep. Fewer than two, and the harness is overhead.
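The three-question rule is simple enough to write down as code. This is a sketch of the article's heuristic, not a tool; the parameter names are mine:

```python
# The three-question decision rule: two or more "yes" answers and
# OpenClaw earns its keep; fewer, and the harness is overhead.

def harness_earns_keep(
    needs_multi_model_routing: bool,
    runs_unattended_fleets: bool,
    requires_self_hosted_observability: bool,
) -> bool:
    yeses = sum([
        needs_multi_model_routing,
        runs_unattended_fleets,
        requires_self_hosted_observability,
    ])
    return yeses >= 2

harness_earns_keep(True, True, False)   # → True: OpenClaw
harness_earns_keep(False, True, False)  # → False: Claude Code
```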

The honest version: most SMB engineering teams answer no to all three, run Claude Code, and ship faster.
