Inertia Is All They Need

A short essay on how AI companies turn convenience, opacity, and polished workflows into user dependence.

Published 6 Apr 2026 · Updated 6 Apr 2026
  • ai
  • lock-in
  • subscriptions
  • open-source
  • product

I have tremendous respect for Anthropic’s research and engineering teams. Claude is an excellent model. But the model and the company behind it are not the same thing, and users should stop pretending otherwise.

Anthropic’s recent decision to cut subscription access for third-party agent tools like OpenClaw is easy to frame as a narrow infrastructure policy. It is not narrow. It points to a broader pattern that is starting to define the AI industry.

The issue is not whether a company has the right to manage its own systems. Of course it does. The real issue is what kind of market these decisions are building. A market in which users are steadily pushed away from open, portable workflows and back toward closed, first-party habits. And once habits harden, habit becomes dependence.

That is the point. AI companies do not need every user to believe they are the best forever. They only need users to stop comparing.

We have seen this before. Microsoft Office did not win because Word, Excel, and PowerPoint had to remain permanently superior to every alternative. It won because it became the default. Once schools, offices, and institutions standardized around it, the question was no longer “what is best?” but “what is everyone else already using?” That is how inertia works. The product does not need to dominate your judgment. It only needs to dominate your workflow.

AI is moving in the same direction. The real target is not the power user who pays for several models, tests alternatives, and keeps switching costs low. The real target is the ordinary user who opens one tab, builds routines around it, and slowly stops looking elsewhere.

This is why subscription opacity matters.

With an API, the unit of account is at least visible. You know the nominal price per million tokens, even if the total bill is still painful. With subscriptions, that clarity disappears. You are shown a percentage bar, perhaps a /usage command, and a set of soft limits that can shift without a clean public conversion like “$20 equals X million tokens.” From the consumer side, that creates a fog around what is actually being bought.
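To make the contrast concrete, here is a minimal sketch of the two accounting regimes. All numbers are hypothetical assumptions for illustration, not any provider's actual pricing or limits: with a published per-token rate, the conversion is a one-line calculation; with a subscription, the user can only guess backwards from observed soft limits.

```python
# Illustrative only: every rate and limit below is a hypothetical
# assumption, not any provider's real pricing.
API_PRICE_PER_MTOK = 3.00    # assumed $ per million tokens
SUBSCRIPTION_PRICE = 20.00   # assumed monthly subscription fee

# With an API, the unit of account is visible and invertible:
api_budget_mtok = SUBSCRIPTION_PRICE / API_PRICE_PER_MTOK
print(f"${SUBSCRIPTION_PRICE:.2f} buys ~{api_budget_mtok:.1f}M tokens at API rates")

# With a subscription, there is no published conversion to invert.
# The best a user can do is estimate from observed behavior:
observed_messages_per_window = 45    # hypothetical soft limit
assumed_tokens_per_message = 4_000   # rough, unverifiable guess
implied_mtok = observed_messages_per_window * assumed_tokens_per_message / 1e6
print(f"Implied window budget: ~{implied_mtok:.2f}M tokens (pure guesswork)")
```

The asymmetry is the point: the first number is a fact anyone can check against a price sheet, while the second rests on two guesses that the provider can silently change.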

That fog matters because it creates room for silent adjustment. If a model becomes more efficient, the provider can say the value has improved. If limits feel tighter, the user has little ground from which to argue, because the unit of account was never clear in the first place. The problem is not that every subscription is dishonest. The problem is that the structure is unusually resistant to scrutiny.

This becomes more serious with Computer Use and agentic workflows, which are inherently token-heavy. The more useful these tools become, the more deeply they sink into a person’s routine. Once an ordinary user gets used to an AI that can operate a computer, edit files, and run tasks on its own, the switching cost rises. If prices later increase, or usage rules tighten, many users will not leave. They will grumble and stay. That is exactly how lock-in matures.

What makes this more concerning is that Anthropic is not merely building capability. It is building polish. The more seamless the experience becomes across chat, desktop, and agentic workflows, the less often ordinary users will ask whether the workflow is portable. Convenience is not neutral here. Convenience is how dependence becomes pleasant enough to ignore.

Open-source tools like OpenClaw matter because they interrupt that process. They keep powerful AI workflows more portable across providers. They also serve another function that is rarely acknowledged: they lower public resistance before the large companies arrive. Open-source communities tested the idea that an AI should be trusted with broad computer access. They normalized the tradeoff. By the time official versions appeared, much of the psychological ground had already been cleared.

That is why this moment should concern people beyond Anthropic users. The immediate issue is not one policy change at one company. The deeper issue is an industry pattern: open experimentation helps discover what users want, but once the behavior becomes valuable enough, the pressure shifts toward enclosure.

So no, I do not think the deepest problem here is simple greed, nor do I think the right response is melodrama. The problem is more ordinary than that, and therefore more dangerous. It is the slow construction of dependence through convenience, opacity, and habit.

The best defense is not loyalty. It is comparison. Keep more than one provider active. Keep an eye on open-source alternatives. Refuse to let your workflow become so comfortable that you stop asking what it is costing you.

Because the moment you stop comparing, the moment you settle into comfortable inertia, that is the exact moment they win.