They did not. Anthropic is protecting its huge asset: the Claude Code value chain, which has proven itself to be a winner among devs (me included, after trying everything under the sun in 2025). If anything, Anthropic's mistake is that they are incapable of monetizing their great models in the chat market, where ChatGPT reigns: i.e., Anthropic did not invest in image generation, Google did, and Gemini has a shot at the market now.
Apparently nobody gets the Anthropic move: they are only good at coding, and that's a very thin layer. Opencode and other tools are well positioned to collect inputs and outputs that can later be used to train their own models - not necessarily being done now, but they could; Cursor did it. Also, Opencode makes it all easily swappable: just run an eval by popping in another API key and see whether Codex or GLM can replicate the CC solution. Oh, it does! So let's cancel Claude and save big bucks!
Even though CC the agent supports external providers (via the ANTHROPIC_BASE_URL env var), they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc). The move totally makes sense, like it or not.
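For context, "supports external providers" today is basically just environment variables. A rough sketch, assuming the provider exposes an Anthropic-compatible endpoint and accepts its key via ANTHROPIC_AUTH_TOKEN (the URL below is a placeholder, and your provider's docs may name the variables differently):

```sh
# Rough sketch: point the CC agent at an Anthropic-compatible third-party endpoint.
# The endpoint URL is hypothetical and ANTHROPIC_AUTH_TOKEN is an assumption;
# check your provider's docs for the exact variables it expects.
export ANTHROPIC_BASE_URL="https://example-provider.invalid/anthropic"
export ANTHROPIC_AUTH_TOKEN="<your provider API key>"
claude   # requests now go to the configured endpoint instead of Anthropic's API
```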
Agreed. The system is ALL about who controls the customer relationship.
If Anthropic ended up in a position where they had to beg various client providers to be integrated (properly), had to compete with other LLMs on the same clients, and could be swapped out at a moment's notice, they would just become a commodity and lose all leverage. They don't want to end up in such a situation. They need to control the delivery of the product end-to-end to ensure that they control the customer relationship and the quality.
This is also going to be KEY in terms of democratizing the AI industry for small startups, because this model of ai-outside-tools-inside provides an alternative to tools-outside-ai-inside platforms like Lovable, Base44 and Replit, which don't leave as much flexibility for swapping out tooling.
It's all easily swappable without OpenCode. Just symlink CLAUDE.md -> AGENTS.md and run `codex` instead of `claude`.
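In shell terms, the swap is roughly this (a sketch; it assumes Codex picks up AGENTS.md from the repo root, which may differ by version):

```sh
# Reuse the same project instructions for Codex by exposing CLAUDE.md as AGENTS.md.
ln -s CLAUDE.md AGENTS.md   # AGENTS.md now points at the existing CLAUDE.md
codex                       # run Codex in the same repo instead of `claude`
```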
> they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc).
Every feature you listed has an open-source MCP server implementation, which means every agent that supports MCP already has all those features. MCP is so epic because it has already nailed the commodification coffin firmly shut. Besides, Anthropic has way less funding than OAI or Google. They wouldn't win the moat-building race even if there were one.
That said, the conventional wisdom is that lowering switching costs benefits the underdogs, because the incumbents have more market share to lose.
> i.e., Anthropic did not invest in image generation, Google did, and Gemini has a shot at the market now.
They're after the enterprise market - where office / workspace + app + directory integration, security, safety, compliance etc. are more important. 80% of their revenue is from enterprise - less churn, much higher revenue per W/token, better margins, better $/user.
Microsoft adopting the Anthropic models into copilot and Azure - despite being a large and early OpenAI investor - is a much bigger win than yet another image model used to make memes for users who balk at spending $20 per month.
Same with the office connector - which is only available to enterprises[0] (further speaking to where their focus is). There hasn't yet been a "claude code" moment for office productivity, but Anthropic are the closest to it.
[0] This may be a mistake as Claude Code has been adopted from the ground up
Usually you can see it when someone nags about “call us” pricing that is targeted at enterprise. People that nag about it are most likely not the customers someone wants to cater to.
When I was a software developer, I mostly griped about this when I wanted to experiment before deciding whether to even ask my larger enterprise to look into something. I always felt like companies were killing a useful marketing stream coming from the enterprise's own employees. I think Tailscale has really nailed it, though. They give away the store to casual users, but make it so that a business will want to talk to sales to get all the features they need with better pricing per user. Small businesses can survive quite well on the free plan.
> making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc)
I use CC as my harness but switch between third party models thanks to ccs. If Anthropic decided to stop me from using third party models in CC, I wouldn't just go "oh well, let's buy another $200/mo Claude subscription now". No. I'd be like: "Ok, I invested in CC—hooks/skills/whatever—but now let's ask CC to port them all to OpenCode and continue my work there".
I'd rather have a product that is only good at one single thing than mid at everything else, especially when the developer experience for me is much more consistent than using Gemini and ChatGPT, to the point that I only keep ChatGPT for productivity reasons and sometimes for making better prompts for Claude (when I don't use Claude to make a better prompt). Once Anthropic realized they were discounting token usage for Claude Code, they should have made that more explicit, and the same for the API key (but hindsight is 20/20): they should already have been blocking third-party apps, or just had you create a separate API key with no discount, though even that could have pissed off developers.
The problem is that the second you stop subsidizing Claude Code and start making money on it, the incentive to use it over opencode disappears. If opencode is a better tool than claude code - and that's the reason people are using their claude subscription with it instead of claude code - people will end up switching to it.
Maybe they can hope to murder opencode in the meantime with predatory pricing and build an advantage that they don't currently have. It seems unlikely though - the fact that they're currently behind proves the barrier to building this sort of tool isn't that high, and there's lots of developers who build their own tooling for fun that you can't really starve out of doing that.
I'm not convinced that attempting to murder opencode is a mistake - if you're losing you might as well try desperate tactics. I think the attempt is a pretty clear signal that Anthropic is losing, though.
It’s possible that tokens become cheap enough that they don’t need to raise prices to make a profit. The latest opus is 3x less expensive than the previous.
Then the competitors drop prices though. The current justification for claude code is just that it's an order of magnitude (or more) cheaper per token than comparable alternatives. That's a terrible business model to be stuck in.
If everyone is dropping prices in this scenario then I don’t see how the user eventually gets squeezed.
I mean, I guess they could do a bait and switch (drop prices so low that Anthropic goes bankrupt, then raise prices), but that's possible in literally any industry, and seems unlikely given the current number of competitors.
I am pretty sure most people get Anthropic's move. I also think "getting it" is perfectly compatible with being unhappy about it and voicing that opinion online.
> Anthropic's mistake is that they are incapable of monetizing their great models in the chat market
The types of people who would use this tool are precisely the types of people who don't pay for licenses or tools. They're in a race to the bottom and they don't even know it.
> and that's a very thin layer
I don't think Anthropic understands the market they just made massive investments in.
It might make sense from Anthropic's perspective, but as a user of these tools I think it would be a huge mistake to build your workflow around Claude Code when they are pushing vendor lock-in this aggressively.
Making this mistake could end up being the AI equivalent of choosing Oracle over Postgres
As a user of Claude Code via API (the expensive way), Anthropic's "huge mistake" is capping monthly spend (billed in advance and pay-as-you-go, some $500 - $1500 at a time, by credit card) at just $5,000 a month.
It's a supposedly professional tool with a value proposition that requires being in your work flow. Are you going to keep using a power drill on your construction site that bricks itself the last week or two of every month?
An error message says contact support. They then point you to an enterprise plan for 150 seats when you have only a couple dozen devs. Note that 5000 / 25 = 200 ... coincidence? Yeah, you are forbidden to give them more than Max-like $200/dev/month for the usage-based API that's "so expensive".
They are literally saying "please don't give us any more money this month, thanks".
Sounds plausible that they're not really making any money. Arbitrary and inflexible pricing policies aren't unusual, but it sounds easy enough for a new, rapidly-growing company to let the account managers decide which companies they might have a chance of upselling 150-seat enterprise licenses to, and just bill overage for everyone else...
Their target is the enterprise anyway. So they are apparently willing to enrage their non-CC user base over vendor lock-in.
But this is not the equivalent of Oracle over Postgres, as those are different technology stacks that each implement an independent relational database. Here we're talking about Opencode, which depends on Claude models to work "as a better Claude" (according to the enraged users on the web). Of course, one can still use OC with a bazillion other models, but Anthropic is saying that if you want the Claude Code experience, you gotta use the CC agent, period.
Now put yourself in the Anthropic support person's shoes, and suppose you have to answer an issue from a Claude Max user who is mad that OC is throwing errors when calling a tool during a vibe session, probably because the multi-million-dollar Sonnet model is telling OC to do something it can't, because it's not the Claude agent. Claude models are fine-tuned for their agent! If the support person replies "OC is an unsupported agent for Claude Code Max" you get an enraged customer anyway, so you might as well cut the whole thing off at the root.
If you’ve only got a CLAUDE.md and sub agent definitions in markdown it is pretty easy to do at the moment, although more of their feature set is moving in a direction that doesn’t have 1:1 equivalents in other tools.
The client is closed source for a reason and they issued DMCA takedowns against people who published sourcemaps for a reason.
> Anthropic is protecting its huge asset: the Claude Code value chain
Why is that their "huge asset"? The crux of this complaint is that Opencode et al replace everything but the LLM, so it seems like the latter is the true "huge asset."
If Claude Code is being offered at or near operational breakeven, I don't see the advantage of lock-in. If it's being offered at a subsidy, then it's a hint that Claude Code itself is medium-term unsustainable.
"Training data" is a partial but not full explanation of the gap, since it's not obvious to me how Anthropic can learn from Claude Code sessions but not OpenCode sessions.
Anthropic and OpenAI are essentially betting that a somewhat small difference in accuracy translates to a huge advantage, and continuing to be the one that's slightly but consistently better than others is the only way they can justify investments in them at all. It's natural to then consider that an agent trained to use a specific tool will be better at using that tool. If Claude continues to be slightly better than other models at coding, and Claude Code continues to be slightly better than OpenCode, combined it can be difficult to beat them even at a cheaper price. Right now, even though Kimi K2 and the likes are cheaper with OpenCode and perform decently, I spend more than 10x the amount on Claude Code.
I think they're trading future customer acquisition and model quality for the current Claude Code userbase, which they might also lose from this choice.
The reason I got the subscription wasn't to use Claude Code. When I subscribed you couldn't even use it for Claude Code. I got it because I figured I could use those tokens for anything, and as I figured out useful stuff, I could split it off onto API calls.
Now that exploration of "what can I do with Claude" will need to happen elsewhere, and the results of a working thing will want to stay with the model it's working on.
They’re betting that the stickiness of today’s regular users is more valuable than the market research and training data they were receiving from those nerdy, rule-breaking users.
It's crazy how bad the interface is. I'm generally a fan of the model performance, but there is not a day where their CLI will not flash random parts of scrollback or have a second of input lag just typing in the initial prompt (how is that even possible? you are not doing anything?). If this is their "premier tool", no vending machine business can save them.
I'll be honest; I'm pretty sure this "mistake" will be completely forgotten within a month. Their enforcing that their subscription only works with their product should not really come as a surprise to anyone, and the alt-agent users are a small enough minority that they'll get over it.
I’m starting to think you’re right but only because software engineers don’t seem to actually value or care about open source anymore. Apparently we have collectively forgotten how bad it can be to let your tools own you instead of the other way around.
Maybe another symptom of Silicon Valley hustle culture — nobody cares about the long term consequences if you can make a quick buck.
There's nothing stopping you from using OpenCode with any other provider, including Anthropic: you just can't get the subsidised pricing while doing so. This is irritating, yes - it certainly disincentivises me from trying out OpenCode - but it's also, like, not unexpected?
In any case, the long-term solution for true openness is to be able to run open-weight models locally or through third-party inference providers.
Yes but why are they subsidizing the pricing and requiring to use their closed source client to benefit from it? It’s the same reason the witch in the story of Hansel and Gretel was giving out free candy.
Is this a serious question? Why would they subsidize people when there is no benefit to them? Subsidization means they are LOSING money when people use it. If the customers using 3rd-party clients are unwilling to pay a price that is profitable for Anthropic, then losing them is a very positive thing, not a negative one.
The reason to subsidize is the exact reason you are worried about. Lock in, network effects, economies of scale, etc.
> Apparently we have collectively forgotten how bad it can be to let your tools own you instead of the other way around.
We've collectively forgotten because a large enough number of professional developers have never experienced anything other than a thriving open source ecosystem.
As with everything else (finance and politics come to mind in particular), humans will have to learn the same lessons the hard way over and over. Unfortunately, I think we're at the beginning of that lesson and hope the experience doesn't negatively impact me too much.
I mean... I don't like it either but this is pretty standard stuff and it's obvious why they're doing it.
Claude, ChatGPT, Gemini, and Grok are all more or less on par with each other, or a couple months behind at most. Chinese open models are also not far behind.
There's nothing inherent to these products to make them "sticky". If your tooling is designed for it, you can trivially switch models at any time. Mid-conversation, even. And it just works.
When you have basically equivalent products with no switching cost, you have perfect competition. They are all commodities. And that means: none of them can make a profit. It's a basic law of economics.
If they can't make a profit, no matter how revolutionary the tech is, their valuation is not justified, and they will be in big trouble when people figure this out.
So they need to make the product sticky somehow. So they:
1. Add a subscription payment model. Once you are paying a subscription fee, then the calculus on switching changes: if you only maintain one subscription, you have a strong reason to stick with it for everything.
2. Force you to use their client app, which only talks to their model, so you can't even try other models without changing your whole workflow, which most people won't bother to do.
These are bog standard tactics across the tech industry and beyond for limiting competitive pressure.
Everyone is mad about #2 but honestly I'm more mad about #1. The best thing for consumers would be if all these model providers strictly provided usage-based API pricing, which makes switching easy. But right now the subscription prices offer an enormous discount over API pricing, which just shows how much they are really desperate to create some sort of stickiness. The subscriptions don't even provide the "peace of mind" benefit that Spotify-like subscription models provide, where you don't have to worry about usage, because they still have enforced usage limits that people regularly hit. It's just purely a discount offered for locking yourself in.
But again I can't really be that mad because of course they are doing this, not doing it would be terrible business strategy.
Well, no. It just means no single player can dominate the field in terms of profits. Anthropic is probably still losing money on subscribers, so other companies "reselling" their offering does them no good. Forcing you to use their TUI at least gives them back control of how you interact with the models. I'm guessing, but since they've gone full send into the developer tooling space, their pitch to investors likely highlights the # of users on CC, not their subscriber numbers (which, again, lose money). The move makes sense in that respect.
I'm not "mad", I'm "sad" -- because I was very much on "Team Anthropic" a few months ago ... but the tool has failed to keep up in terms of quality.
If they're going to close the sub off to other tools, they need to make very strong improvements to the tool. And I don't really see that. It's "fine" but I actually think these tools are letting developers down.
They take over too much. They fail to give good insights into what's happening. They have poor stop/interrupt/correct dynamics. They don't properly incorporate a basic review cycle which is something we demand of junior developers and interns on our teams, but somehow not our AIs?
They're producing mountains of sometimes-good but often unreviewable code and it isn't the "AI"'s fault, it's the heuristics in the tools.
So I want to see innovation here. And I was hoping to see it from Anthropic. But I just saw the opposite.
There is so much low-hanging fruit in the tooling side right now. There's no way Anthropic alone can stay ahead of it all -- we need lots of different teams trying different things.
I myself have been building a special-purpose vibe-coding environment and it's just astounding how easy it is to get great results by trying totally random ideas that are just trivial to implement.
Lots of companies are hoping to win here by creating the tool that everyone uses, but I think that's folly. The more likely outcome is that there are a million niche tools and everyone is using something different. That means nobody ends up with a giant valuation, and open source tools can compete easily. Bad for business, great for users.
(Also, Kenton, I'd add that I'm an admirer more broadly of your work, and so if by chance you end up creating some public project commercial or open source in the general vein we're talking about here, I'd love to contribute)
Yep. And in a way this has always been the story. It's why there are so few companies making $$ in the pure devtooling space.
I have no idea what JetBrains' financials are like, but I doubt they're raking in huge $$ despite having very good tools, & unfortunately their attempts to keep abreast of the AI wave have been middling.
Basically, I need Claude Code with a proper review phase built in. I need it to slow-the-fuck-down and work with me more closely instead of shooting mountains of text at me and making me jam on the escape key over and over (and shout WTF I didn't ask for that!) at least twice a day.
IMHO these are not professional SWE tools right now. I use them on hobby projects but struggle to integrate them into professional day jobs where I have to be responsible in a code review for the output they produced.
And, again, it's not the LLM that's at fault. It's the steering wheel driving it missing a basic non-yeet process flow.
> Basically, I need Claude Code with a proper review phase built in. I need it to slow-the-fuck-down and work with me more closely instead of shooting mountains of text at me and making me jam on the escape key over and over (and shout WTF I didn't ask for that!) at least twice a day.
It sounds like you want Codex (for the second part)
Try plan mode if you haven't already. Stay in plan mode until it is to your satisfaction. With Opus 4.5, when you approve the plan it'll implement the exact spec without getting off track 95% of the time.
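For reference, this is roughly how I start sessions that way (the flag name is from memory, so treat it as an assumption; Shift+Tab also cycles modes inside the session):

```sh
# Start Claude Code in plan mode so it proposes a plan before touching files.
# The --permission-mode flag and its "plan" value are from memory and may vary
# by version; interactively, Shift+Tab cycles between permission modes.
claude --permission-mode plan
```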
It's fine, but it's still "make big giant plan then yeet the impl" at the end. It's still not appropriate for the kind of incremental, chunked, piecework that's needed in a shop that has a decent review cycle.
It's irresponsible to your teammates to dump giant finished pieces of work on them for review. I try to impress that on my coworkers; I don't appreciate receiving code reviews like that, and I'd feel bad doing the same.
Even worse if the code review contains blocks of code which the author doesn't even fully understand themselves because it came as one big block from an LLM.
I'll give you an example -- I have a longer-term, bigger task at work for a new service. I had discussions and initial designs I fed into Claude. "We" came to a consensus and ... it just built it. In one go, mainly. It looks fine. That was Friday.
But now I have to go through that and say -- let's now turn this into something reviewable for my teammates. Which means basically learning everything this thing did, and trying to parcel it up into individual commits.
Which is something that the tool should have done for me, and involved me in.
Yes, you can prompt it to do that kind of thing. Plan is part of that, yes. But plan, implement, review in small chunks should be the default way of working, not something I have to force on it externally.
What I'd say is this: these tools right now are programmer tools, but they're not engineer tools.
I think the review cycles we've been doing for the past decade or two are going to change to match the output of the LLMs and how the LLMs prefer to make whole big changes.
I immediately see that the most important thing to have understand a change is future LLMs more than people. We still need to understand what's going on, but if my LLM and my coworker's LLM are better aligned, chances are my coworker will have a better time working with the code that I publish than if I got them to understand it well but without their LLM understanding it.
With humans as the architects of LLM systems that build and maintain a code-based system, I think the constraints are different, and that we don't have a great idea of what the actual requirements are yet.
It certainly mismatches with how we've been doing things, publishing small change requests that only do a part of a whole.
Have any of these sorts of proclamations ever actually come true? I recall when Reddit effectively cut off all the clients from their API, there were similar loud proclamations that they had ruined their business and everyone would defect. I remember something similar with Twitter. These businesses both have their problems, but blocking third-party apps doesn’t seem to be one of them.
I think Anthropic took a look at the market, realized they had a strong position with Claude Code, and decided to capitalize on that rather than joining the race to the bottom and becoming just another option for OpenCode. OpenAI looked at the market and decided the opposite, because they don’t have strong market share with Codex and they would rather undercut Claude, which is a legitimate strategy. Don’t know who wins.
I feel like Anthropic is probably making the right choice here. What do they have to gain by helping competitors undercut them? I don’t think Anthropic wants to be just another model that you could use. They want to be the ecosystem you use to code. Probably better to try to win a profitable market than to try to compete to be the cheapest commodity model.
I am sure the company is going to get very upset at people no longer paying who were using their product in a way that they did not intend. Just going to be heartbroken. I will never understand the people who make a big deal about "I will never support this business again because of X" when X is not something the company ever officially said they cared about.
In all seriousness, I really don't think it should be a controversial opinion that if you are using a company's servers for something, they have a right to dictate how and on what terms. It is up to the user to determine whether that is acceptable or not.
Particularly when there is a subscription involved. You are very clearly paying for "Claude Code" which is very clearly a piece of software connected to an online component. You are not paying for API access or anything along those lines.
Especially when they are not blocking the ability to use the normal API with these tools.
I really don't want to defend any of these AI companies but if I remove the AI part of this and just focus on it being a tool, this seems perfectly fine what they are doing.
To me it's very easy to understand why people would be upset and post about it online.
1. The company did something the customers did not like.
2. The company's reputation has value.
3. Therefore highlighting the unpopular move online, and throwing shade at the company so to speak, is (alongside with "speaking with your wallet") one of the few levers customers have to push companies to do what they want them to do.
Sure, it is perfectly valid to complain all you want. But it is also important to remember the context here.
I could write an article and complain about Taco Bell not selling burgers, and that is perfectly within my rights, but that is something they are clearly not interested in doing. So me saying I am not going to give them money until they start selling burgers is meaningless to them.
Everything I have seen about how they have marketed Claude Code makes it clear that what you are paying for is a tool that is a combination of a client-side app made by them and the server component.
Considering you need to tell the agent that the tool you are using is something it isn't, it is clear that this was never intended to work.
> So me saying I am not going to give them money until they start selling burgers is meaningless to them.
Sure, but that's because you're you. No offense, but you don't have a following that people use to decide what fast food to eat. You don't have posts about how Taco Bell should serve burgers, frequently topping one of the main internet forums for people interested in fast food.
HN front page articles do matter. They get huge numbers of eyeballs. They help shape the opinions of developers. If lots of people write articles like this one, and it front pages again and again, Anthropic will be at serious risk of losing their mindshare advantage.
Of course, that may not happen. But people are aware it could.
Before this drama started, OpenCode was just another item on a long list of tools I've been meaning to test. I was 100% content with CC (still am, mostly). But it was nice to know that there were alternatives, and that I could try them, maybe even switch to them, without having to base my decision on token pricing. The idea of there being an escape hatch made me less concerned about vendor lock-in and encouraged me to a) get my entire team onto CC and b) invest time into building CC's flavor of agents, skills, commands, hooks, etc., as well as setting up a marketplace to distribute them internally.
While Anthropic was within their right to enforce their ToS, the move has changed my perspective. In the language of moats and lock-ins, it all makes sense, sure, but as a potential sign of the shape of things to come, it has hurt my trust in CC as something I want to build on top of.
Yesterday, I finally installed OpenCode and tried it. It feels genuinely more polished, and the results were satisfactory.
So while this is all very anecdotal, here's what Anthropic accomplished:
1) I no longer feel like evangelizing for their tool
2) I installed a competitor and validated it's as good as others are claiming.
Perhaps I'm overly dramatic, but I can't imagine I'm the only one who has responded this way.
I responded in a similar way. More than that I preemptively canceled my claude subscription (which just cancels auto-renewal) to make sure it was an affirmative choice to continue with it next month, after I have some time to try out the alternative they are so worried about and see if I should switch to it instead.
Claude already played their card, from threatening that 90% of code will be written by AI to cutting off their most enthusiastic followers. Opencode and others haven't threatened the industry and generally have better standing with most devs. I do not see how Claude can ever be profitable at this point; they don't have any stickiness and they actively propose cutting their own market.
Anthropic doesn’t want you to use a tool that makes it easy to switch to a competitor’s model when you reach a cap. They want to nudge you toward upgrading - Pro -> Max -> Max 20× -> extra usage - rather than switching to Codex. They can afford to make moves like this as long as they stay on top. OpenAI isn’t the good guy here - it’s just an opportunity for them to bite off a bit more of the cake.
It seems that Anthropic's thesis is that vertical integration wins.
It's too soon to tell if that's true or not.
One of the features of vertical integration is that there will be folks complaining about it. Like the way folks would complain that it's impossible or hard to install macOS on anything other than a Mac, and impossible or hard to install anything other than macOS on a Mac. Yet, despite those complaints, the Mac and macOS are successful. So: the fact that folks are complaining about Anthropic's vertical integration play does not mean that it won't be successful for them. It also doesn't mean that they are clueless.
Interestingly, another front page article today is about Apple choosing to use Gemini for Siri.
A lot of the comments revolve around how much they will be locked in and how much the base models are commoditized.
Google is pretty clearly OK with being an infrastructure/service provider for all comers. The same is true for OpenAI (especially via Azure?). I guess Anthropic does not want to compete like that.
Anthropic offer their API, including for tools like Opencode. It’s more expensive than Claude Code, but I don’t think it’s priced significantly differently to competitors. Obviously Apple aren’t paying API prices, and Google have a lot more to offer them, but I don’t think Anthropic would turn down that deal if they could have it. They have their models in AWS Bedrock too, and that is an option to auth with Claude Code.
I think they do see vertical integration opportunities on product, but they definitely want to compete to power everything else too.
While I respect the author's opinion (and it's interesting that the term "vibe coding" is less than a year old), I am more than happy to be an Anthropic customer, and actually happy that they've opened more capacity for their paying customers. What I'm achieving with Claude is spectacular, and for now it's the best system I've found to meet my goals.
When I signed up for a subscription it was with the understanding that I'd be able to use those tokens on whichever agent I wanted to play with, and that once I got to something I wanted persistently running, I'd switch that to be an API client. I quickly figured out that Claude Code was the current best coding agent for the model, but seeing other folks calling Opus now, I'm not actually sure that's true, in which case that subsidized token might be more expensive for both me and Anthropic, because it's not the most token-efficient route over their model.
I dislike that now I won't be able to feed them training data using many different starting points and paths, which I think will have a bad impact on their models, making them worse over time.
I want them to cut off these electron wrappers. If there's no tokens going to these third parties, the more they can keep subsidizing my claude code usage.
You are just taking advantage of their CC subscription business model, which they are subsidizing because you are using CC. Why should they do this when you don't use their product?
Also, you can still use OpenCode with API access... so no, they didn't lock anything down. Basically, people just don't want to pay what is fair and are whining about it.
Note - we primarily make use of Gemini CLI, which is very promising, but have made pretty extensive trials of Claude Code.
Anthropic hasn't changed their licensing, just enforcing what the licensing always required by closing a loophole.
Business models aside - what is interesting is whether the agent :: model relationship requires a proprietary context and language, such that without that mutual interaction the coding accuracy and safety are somehow degraded. Or will it be possible for agentic frameworks to plug and play with models and still generate similar outcomes?
So far, we tend to see the former is needed --- that there are improvements that can be had when the agentic framework and model language understanding are optimized to their unique properties. Not sure how long this distinction will matter, though.
Honestly very confused by the people happy or agreeing with Anthropic here. You can use their API on a pay-per-use basis, or (as I interpreted the agreement) you can prepay as a subscription and use their service with hourly & weekly session limits.
What's changed is that I thought I was subscribing to use their API services, claude code as a service. They are now pushing it more as using only their specific CLI tool.
As a user, I am surprised, because why should it matter to them whether I open my terminal and start up `claude code`, `opencode`, `pi`, or any other local client I want to use to send bits to their server.
Now, having done some work with other clients, I can kind of see the point of this change (to play devil's advocate): their subscription limits likely assume aggregate usage among all users doing X amount of coding, which with their own CLI tool works especially well with client-side and service caching and tool-call log filtering, something 3rd-party clients also do with varying effectiveness.
So I can imagine a reason why they might make this change, but again, I thought I was subscribing to a prepaid account where I can use their service within certain session limits, and I see no reason why the cli tool on my laptop would matter then.
Anthropic has been doing this from the start and they are justified in it (the plan has different pricing rates than API). People have been making workarounds and they are justified in that as well - those people understand their workarounds are fragile when they made them.
I want to like Anthropic, they have such a great knowledge-sharing culture and their content is second to none, but then they keep pulling stuff like this... I just can't bring myself to trust their leadership's values or ethics.
I would disagree on the knowledge sharing. They're the only major AI company that's released zero open weight models. Nor do they share any research regarding safety training, even though that's supposedly the whole reason for their existence.
I agree with you on your examples, but would point out there are some places they have contributed excellent content.
In building my custom replacement for Copilot in VS Code, Anthropic's knowledge sharing on what they are doing to make Claude Code better has been invaluable
Q: Do I need extra AI subscriptions to use OpenCode?
A: Not necessarily. OpenCode comes with a set of free models that you can use without creating an account. Aside from these, you can use any of the popular coding models by creating a Zen account. While we encourage users to use Zen, OpenCode also works with all popular providers such as OpenAI, Anthropic, xAI, etc. You can even connect your local models.
I was paying for Max, but after trying GLM 4.7 I am a convert. I hardly hit the limit, but even if I do, it is cheaper to get two accounts from Z.ai than one Max from Anthropic.
Technically, isn't the API they want third-party software to use better anyway? This is really about pricing. The price difference between the regular API and the OAuth API is too large.
> they really, really want to own the entire value chain
That is it. That is the problem. Everyone wants vertical integration and to corner the market, from Standard Oil on down. And everyone who wants that should be smacked down.
my guess is that they are probably drowning in traffic since claude code really took off over the break and are now doing everything they can to reduce traffic and keep things up.
I don't think I agree with this claim. Also, they didn't cut off anyone. You can still use their API as you wish. It's out there for anyone who wants it.
They simply stopped people from abusing an accessibility feature that they created for their own product.
> "For me personally, I have decided I will never be an Anthropic customer, because I refuse to do business with a company that takes its customers for granted."
The best pressure on companies comes from viable alternatives, not from boycotts that leave you without tools altogether.
> they really, really want to own the entire value chain rather than being relegated to becoming just another "model provider"
I remember the story used to be the other way around - "just a wrapper", "wrapper AI startups" were everywhere, nobody trusted they can make it.
Maybe being "just a model provider" or "just a LLM wrapper" matter less than the context of work. What I mean is that benefits collect not at the model provider, nor at the wrapper provider, but where the usage takes place, who sets the prompts and uses the code gets the lion share of benefits from AI.
After reading this opinion ten times today: can someone explain to me why OpenCode is a "better harness"? Or is it just because it's open source that people support it?
No matter what the answer to the question is.. IMO "just" is out of place here. Being free/open source software is a big deal, particularly for a developer tool.
It's mostly based on feelings/"vibes", and hugely dependent on the workflow you use. I'm so happy with Claude Code, Opus and plan mode that I don't feel any need to check the others.
OpenCode has some more advanced features and plays nicely in more advanced setups. ClaudeCode isn't bad at all, but OpenCode has some tricks up its sleeve.
Can't Opencode just modify their implementation to use the Anthropic Claude Code SDK directly? The issue is that they were spoofing OAuth. I tried OpenCode before this whole drama, immediately noticed the OAuth spoofing, and never authorized it. Doesn't opencode speak ACP? https://agentclientprotocol.com/overview/agents
OpenCode wasn't using claude CLI at all (or claude SDK). They were using their own agent loop and bypassing claude cli entirely (except for spoofing auth).
The SDK bundles Claude Code and uses it for its agentic work. The SDK really only lets you control the UI layer. It also doesn't yet fully support plan mode.
I use the SDK in my app and it works fine with plan mode. I don't deal with auth at all. I detect if the CLI is installed and it just reuses whatever auth the user has already setup. Works fine.
When the only winning move is corner-the-market, the only way for the customer to win is not to play the game. I'll take my token-money elsewhere.
That said, the author is deluding themselves if they think OpenAI is supporting OpenCode in earnest. Unlike Anthropic, they don't have explicit usage limits. It's a 'we'll let you use our service as long as we want' kind of subscription.
I got a paid plan with GPT 5.2 and after a day of usage was just told 'try again in a week'. Then in a week I hit it again and didn't even get a time estimate. I wasn't even doing anything heavy or high reasoning. It's not a dependable service.
A good example of an extremely small but extremely vocal minority doing their best to punish a company for not catering to their explicitly disallowed use case for no reason other than they want it. I'd bet this has 0 negative impact on their business.
What I learned from all this is that OpenAI is willing to offer a service compatible with my preferred workflow/method of billing and Anthropic clearly is not. That's fine but disappointing, I'm keeping my Codex subscription and letting my Claude subscription lapse but sure, it would be nice if Anthropic changed their mind to keep that option available because yes, I do want it.
I'm a bit perplexed by some comments describing the situation as if OpenCode users were getting something for free and stealing from CC users, when the plan quota was enforced either way and they were paying the same amount for it. Or why you seem to think this post pointing out that Anthropic's direct competitor endorses that method of subscription usage is somehow malicious or manipulative behavior.
Commerce is a two-way street and customers giving feedback/complaining/cancelling when something changes is normal and healthy for competition as evidenced by OpenAI jumping in to support OpenCode users without needing to break their TOS.
"they utterly failed to consider the second-order effects of this business decision"
Or maybe they did consider it but were capital- or inference-capacity-constrained in continuing to serve at this price point. Pretty sure that without any constraints they would eagerly go for 100% market share.
CC users give them the reins to the agentic process. Non-CC users take (mostly indirect) control themselves. So if you are forced to slow growth, where do you pump the brakes (by charging de facto more per (API) token)?
> they really, really want to own the entire value chain rather than being relegated to becoming just another "model provider"
This is really the salient point for everything. The models are expensive to train but ultimately worthless if paying customers aren't captive and can switch at will. The issue is that a lot of the recent gains are in prefill inference and in the model's RAG, which aren't truly a moat (except maybe for Google, if their RAG includes Google Scholar). That's where the bubble will pop.
This reads like an overreaction. I think both OpenAI and Anthropic will soon settle on their target markets; each of them is attracting a separate crowd/type of coder, and the people already sold on Claude Code don't care about this decision.
I just cancelled, citing this as the reason. I’m actually not all that torn up about it. I mostly want to see how Anthropic responds to the community about this issue.
I think they’re smart enough to know that they’re not making a mistake here. I’m fine with it. The API costs are not outrageous. I don’t mind paying per token prices and I don’t mind getting a discounted all-inclusive plan.
Yeah I think Anthropic has the "right" to do this. That's fine.
But they also have shown a weakness by failing to understand why people might want to do this (use their Max membership with OpenCode etc instead).
People aren't using opencode or crush with their Claude Code memberships because they're trying to exploit or overuse tokens or something. That isn't possible.
They do it because Claude Code the tool itself is full of bugs and has performance issues, and OpenCode is of higher quality, has more open (surprise) development, is more responsive to bug fixes, and gives them far more knobs and dials to control how it works.
I use Claude Code quite a bit and there isn't a session that goes by where I don't bump into a sharp edge of some kind. Notorious terminal rendering issues, slow memory leaks, or compaction related bugs that took them 3 months to fix...
Failure to deal with quality issues and listen to customers is hardly a good sign of company culture, leading up to IPO... If they're trying to build a moat... this isn't a strong way to do it.
If you want to own the market and have complete control at the tooling level, you're simply going to have to make a better product. With their mountain of cash and army of engineers at their disposal ... they absolutely could. But they're not.
Meh. I've never used my x20 Max account in OpenCode because the OAuth solution was clearly "hacky".
But to me the appeal of OpenCode is that I can mix and match APIs and local models. I have DeepSeek R1 doing research while KLM is planning and doing code reviews and o4 mini breaking down screenshots into specs while local QWEN is doing the work.
My experience with bugs has also been the exact opposite of what you described.
I'm supposed to adopt these wonderful new tools, but no one can figure out exactly what they are, how they should work, how much they cost, or other basics. This is worse than the early days of the cloud. Hopefully most of this goes the way of SOAP.
> For me personally, I have decided I will never be an Anthropic customer, because I refuse to do business with a company that takes its customers for granted.
Archaeologist.dev Made a Big Mistake
If guided by this morality column, Archaeologist should immediately stop using pretty much anything they use in their life. There's no company today that doesn't have its hands dirty. Life is a dance of choosing the least bad option, not radically cutting off anything that looks "bad".
The people defending Anthropic because “muh terms of service” are completely missing the point. These are bad terms. You should not accept these terms and bet the future of your business on proprietary tooling like this. It might be a good deal right now, but they only want to lock you in so that they can screw you later.
By only supporting their own cloud service for remote execution & slowly adding more and more proprietary integration points that are incompatible with other tools.
But the switching cost to a different CLI coding tool is close to zero… I truly don't understand the argument that using Claude Code means betting your business on that particular tool. I use Claude Code daily, but if tomorrow they massively raised prices, made the tool worse, or whatever, I'd just switch to a competitor and keep working like nothing happened.
To be clear, I’ve seen this sentiment across various comments not just yours, but I just don’t agree with it.
They wouldn’t require you to use their closed source client if they weren’t planning on using it to extract value from you later. It’s still early & a lot more capabilities are going to be coming to these tools in the coming months. Claude Code or an equivalent will be a full IDE replacement and a lot of the integration and automation mechanisms are going to be proprietary. Want to offload some of that to the cloud? Claude Code Web is your only option. Someone else drops a better model or a model that’s situationally better at certain types of tasks? You can’t use it unless you move everything off of that stack.
As an example, this is the exact type of thing Anthropic doesn’t want you to be able to build with Claude & it’s why they want you on their proprietary tooling:
I wonder when they will add another level and have an LLM talk to another LLM about how to talk to yet another LLM.
The CLI tool is terrible compared to opencode.
That is the unfortunate reality: claude code is now being foisted on us. :( I wish they would just fork opencode.
But it was only a matter of time before: a) Microsoft reclaimed its IDE b) Frontier model providers reclaimed their models
Sage advice: don’t fill potholes in another company’s roadmap.
Re: b) "frontier" models can reclaim all they want; bring it. that's not a moat.
Maybe another symptom of Silicon Valley hustle culture — nobody cares about the long term consequences if you can make a quick buck.
In any case, the long-term solution for true openness is to be able to run open-weight models locally or through third-party inference providers.
The reason to subsidize is the exact reason you are worried about. Lock in, network effects, economies of scale, etc.
We've collectively forgotten because a large enough number of professional developers have never experienced anything other than a thriving open source ecosystem.
As with everything else (finance and politics come to mind in particular), humans will have to learn the same lessons the hard way over and over. Unfortunately, I think we're at the beginning of that lesson and hope the experience doesn't negatively impact me too much.
Claude, ChatGPT, Gemini, and Grok are all more or less on par with each other, or a couple months behind at most. Chinese open models are also not far behind.
There's nothing inherent to these products to make them "sticky". If your tooling is designed for it, you can trivially switch models at any time. Mid-conversation, even. And it just works.
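To make that concrete, here's a minimal sketch of what provider-agnostic tooling can look like, assuming each backend speaks an OpenAI-compatible chat API (OpenAI itself does, and many third-party and local servers do too). The endpoint URLs and model names are illustrative assumptions, not a claim about any vendor's terms:

```python
# Minimal sketch of mid-conversation model switching over OpenAI-compatible
# /chat/completions endpoints. URLs and model names are illustrative.
from openai import OpenAI

PROVIDERS = {
    "openai": OpenAI(),  # reads OPENAI_API_KEY from the environment
    "local": OpenAI(base_url="http://localhost:11434/v1",  # e.g. an Ollama-style local server
                    api_key="unused"),
}

history = [{"role": "user", "content": "Sketch a binary search in Python."}]

def ask(provider: str, model: str) -> str:
    """Send the shared conversation history to whichever provider/model is selected."""
    resp = PROVIDERS[provider].chat.completions.create(model=model, messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Nothing ties the conversation to one vendor: the transcript is plain JSON,
# so the next turn can go to a different model entirely.
ask("openai", "gpt-4o-mini")
history.append({"role": "user", "content": "Now add unit tests."})
ask("local", "qwen2.5-coder")
```

The point is that the "state" of the conversation is just a list of messages the client owns, which is exactly why stickiness has to be created somewhere other than the protocol.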
When you have basically equivalent products with no switching cost, you have perfect competition. They are all commodities. And that means: none of them can make a profit. It's a basic law of economics.
If they can't make a profit, no matter how revolutionary the tech is, their valuation is not justified, and they will be in big trouble when people figure this out.
So they need to make the product sticky somehow. So they:
1. Add a subscription payment model. Once you are paying a subscription fee, then the calculus on switching changes: if you only maintain one subscription, you have a strong reason to stick with it for everything.
2. Force you to use their client app, which only talks to their model, so you can't even try other models without changing your whole workflow, which most people won't bother to do.
These are bog standard tactics across the tech industry and beyond for limiting competitive pressure.
Everyone is mad about #2 but honestly I'm more mad about #1. The best thing for consumers would be if all these model providers strictly provided usage-based API pricing, which makes switching easy. But right now the subscription prices offer an enormous discount over API pricing, which just shows how much they are really desperate to create some sort of stickiness. The subscriptions don't even provide the "peace of mind" benefit that Spotify-like subscription models provide, where you don't have to worry about usage, because they still have enforced usage limits that people regularly hit. It's just purely a discount offered for locking yourself in.
But again I can't really be that mad because of course they are doing this, not doing it would be terrible business strategy.
Well, no. It just means no single player can dominate the field in terms of profits. Anthropic is probably still losing money on subscribers, so other companies "reselling" their offering does them no good. Forcing you to use their TUI at least gives them back control of how you interact with the models. I'm guessing, but since they've gone full send into the developer tooling space, their pitch to investors likely highlights the number of users on CC, not their subscriber numbers (which, again, lose money). The move makes sense in that respect.
If they're going to close the sub off to other tools, they need to make very strong improvements to the tool. And I don't really see that. It's "fine" but I actually think these tools are letting developers down.
They take over too much. They fail to give good insights into what's happening. They have poor stop/interrupt/correct dynamics. They don't properly incorporate a basic review cycle, which is something we demand of junior developers and interns on our teams, but somehow not of our AIs?
They're producing mountains of sometimes-good but often unreviewable code and it isn't the "AI"'s fault, it's the heuristics in the tools.
So I want to see innovation here. And I was hoping to see it from Anthropic. But I just saw the opposite.
I myself have been building a special-purpose vibe-coding environment and it's just astounding how easy it is to get great results by trying totally random ideas that are just trivial to implement.
Lots of companies are hoping to win here by creating the tool that everyone uses, but I think that's folly. The more likely outcome is that there are a million niche tools and everyone is using something different. That means nobody ends up with a giant valuation, and open source tools can compete easily. Bad for business, great for users.
I have no idea what JetBrains' financials are like, but I doubt they're raking in huge $$ despite having very good tools & unfortunately their attempts to keep abreast of the AI wave have been middling.
Basically, I need Claude Code with a proper review phase built in. I need it to slow-the-fuck-down and work with me more closely instead of shooting mountains of text at me and making me jam on the escape key over and over (and shout WTF I didn't ask for that!) at least twice a day.
IMHO these are not professional SWE tools right now. I use them on hobby projects but struggle to integrate them into professional day jobs where I have to be responsible in a code review for the output they produced.
And, again, it's not the LLM that's at fault. It's the steering wheel driving it missing a basic non-yeet process flow.
It sounds like you want Codex (for the second part)
It's irresponsible to your teammates to dump giant finished pieces of work on them for review. I try to impress that on my coworkers; I don't appreciate getting code reviews like that submitted to me, and I would feel bad if I did the same.
Even worse if the code review contains blocks of code which the author doesn't even fully understand themselves because it came as one big block from an LLM.
I'll give you an example -- I have a longer-term, bigger task at work for a new service. I had discussions and initial designs I fed into Claude. "We" came to a consensus and ... it just built it. In one go, mainly. It looks fine. That was Friday.
But now I have to go through that and say -- let's now turn this into something reviewable for my teammates. Which means basically learning everything this thing did, and trying to parcel it up into individual commits.
Which is something that the tool should have done for me, and involved me in.
Yes, you can prompt it to do that kind of thing. Plan is part of that, yes. But plan, implement, review in small chunks should be the default way of working, not something I have to force on it externally.
What I'd say is this: these tools right now are programmer tools, but they're not engineer tools.
I immediately see that the most important thing to have understand a change is future LLMs, more than people. We still need to understand what's going on, but if my LLM and my coworker's LLM are better aligned, chances are my coworker will have a better time working with the code I publish than if I got them to understand it well without their LLM understanding it.
With humans as the architects of LLM systems that build and maintain a code-based system, I think the constraints are different, and I don't think we have a great idea of what the actual requirements are yet.
It certainly mismatches with how we've been doing things: publishing small change requests that each only do a part of a whole.
I expect that from all my team mates, coworkers and reports. Submitting something for code review that they don't understand is unacceptable.
I think Anthropic took a look at the market, realized they had a strong position with Claude Code, and decided to capitalize on that rather than joining the race to the bottom and becoming just another option for OpenCode. OpenAI looked at the market and decided the opposite, because they don’t have strong market share with Codex and they would rather undercut Claude, which is a legitimate strategy. Don’t know who wins.
I feel like Anthropic is probably making the right choice here. What do they have to gain by helping competitors undercut them? I don’t think Anthropic wants to be just another model that you could use. They want to be the ecosystem you use to code. Probably better to try to win a profitable market than to try to compete to be the cheapest commodity model.
And if they've made a business decision to do this, rolling it out without announcement is even worse.
Did they think no one would notice?
Plus I’m the one who compared them to Reddit. They certainly didn’t issue a statement that said “well it worked for Reddit”.
In all seriousness, I really don't think it should be a controversial opinion that if you are using a company's servers for something, they have a right to dictate how and on what terms. It is up to the user to determine if that is acceptable or not.
Particularly when there is a subscription involved. You are very clearly paying for "Claude Code" which is very clearly a piece of software connected to an online component. You are not paying for API access or anything along those lines.
Especially when they are not blocking the ability to use the normal API with these tools.
I really don't want to defend any of these AI companies but if I remove the AI part of this and just focus on it being a tool, this seems perfectly fine what they are doing.
1. The company did something the customers did not like.
2. The company's reputation has value.
3. Therefore highlighting the unpopular move online, and throwing shade at the company so to speak, is (along with "speaking with your wallet") one of the few levers customers have to push companies to do what they want them to do.
I could write an article complaining about Taco Bell not selling burgers, and that is perfectly within my rights, but that is something they are clearly not interested in doing. So me saying I am not going to give them money until they start selling burgers is meaningless to them.
Everything I have seen about how they have marketed Claude Code makes it clear that what you are paying for is a tool that is a combination of a client-side app made by them and the server component.
Considering that you need to tell the agent that the tool you are using is something it isn't, it is clear that this was never intended to work.
Sure, but that's because you're you. No offense, but you don't have a following that people use to decide what fast food to eat. You don't have posts about how Taco Bell should serve burgers, frequently topping one of the main internet forums for people interested in fast food.
HN front page articles do matter. They get huge numbers of eyeballs. They help shape the opinions of developers. If lots of people write articles like this one, and it front pages again and again, Anthropic will be at serious risk of losing their mindshare advantage.
Of course, that may not happen. But people are aware it could.
> It is up to the user to determine if that is acceptable or not.
It sounds like you understand it perfectly.
While Anthropic was within their right to enforce their ToS, the move has changed my perspective. In the language of moats and lock-ins, it all makes sense, sure, but as a potential sign of the shape of things to come, it has hurt my trust in CC as something I want to build on top of.
Yesterday, I finally installed OpenCode and tried it. It feels genuinely more polished, and the results were satisfactory.
So while this is all very anecdotal, here's what Anthropic accomplished:
1) I no longer feel like evangelizing for their tool.
2) I installed a competitor and validated it's as good as others are claiming.
Perhaps I'm overly dramatic, but I can't imagine I'm the only one who has responded this way.
- Google cutting off using search from other than their home page code. (At one time there was an official SOAP API for Google Search.)
- Apple cutting off non-Apple hardware in the Power PC era. ("We lost our license for speeding", from a third party seller of faster hardware.)
- Twitter cutting off external clients. (The end of TweetDeck.)
It’s CC with Qwen and KLM and other OSS and/or local models.
It's too soon to tell if that's true or not.
One of the features of vertical integration is that there will be folks complaining about it. Like the way folks would complain that it's impossible or hard to install macOS on anything other than a Mac, and impossible or hard to install anything other than macOS on a Mac. Yet, despite those complaints, the Mac and macOS are successful. So: the fact that folks are complaining about Anthropic's vertical integration play does not mean that it won't be successful for them. It also doesn't mean that they are clueless.
A lot of the comments revolve around how much they will be locked in and how much the base models are commoditized.
Google is pretty clearly OK with being an infrastructure/service provider for all comers. The same is true for OpenAI (especially via Azure?). I guess Anthropic does not want to compete like that.
I think they do see vertical integration opportunities on product, but they definitely want to compete to power everything else too.
They're probably losing money on each pro subscription so they probably won't miss me!
looool
Maybe the LLM thing will be profitable some day?
When I signed up for a subscription, it was with the understanding that I'd be able to use those tokens with whichever agent I wanted to play with, and that once I got something I wanted to have persistently running, I'd switch that over to an API client. I quickly figured out that Claude Code was the current best coding agent for the model, but seeing other folks calling Opus now, I'm not actually sure that's true, in which case that subsidized token might be more expensive to both me and Anthropic, because it's not the most token-efficient route over their model.
I dislike that now I won't be able to feed them training data from many different starting points and paths, which I think will have a bad impact on their models, making them worse over time.
Also, you can still use OpenCode with API access... so no, they didn't lock anything down. Basically, people just don't want to pay what is fair and are whining about it.
Anthropic hasn't changed their licensing, just enforcing what the licensing always required by closing a loophole.
Business models aside, what is interesting is whether the agent :: model relationship requires a proprietary context and language such that, without that mutual interaction, coding accuracy and safety would somehow be degraded. Or will it be possible for agentic frameworks to plug and play with models and generate similar outcomes?
So far, we tend to see that the former is needed --- that there are improvements to be had when the agentic framework and the model's language understanding are optimized for each other's unique properties. Not sure how long this distinction will matter, though.
I have a gut feeling that the real top dog harness (profitability, sticky users, growth) is VSCode + Copilot.
It's a trivial violation until it isn't. Competitors need to be fought off early else they become much harder to fight in the future.
What's changed is that I thought I was subscribing to use their API services, claude code as a service. They are now pushing it more as using only their specific CLI tool.
As a user, I am surprised, because why should it matter to them whether I open my terminal and start up using `claude code`, `opencode`, `pi`, or any other local client I want to send bits to their server.
Now, having done some work with other clients, I can kind of see the point of this change (to play devil's advocate): their subscription limits likely assume aggregate usage among all users doing X amount of coding, and their own CLI tool works especially well with client-side and server-side caching and tool-call log filtering, something third-party clients also do with varying effectiveness.
So I can imagine a reason why they might make this change, but again, I thought I was subscribing to a prepaid account where I can use their service within certain session limits, and I see no reason why the cli tool on my laptop would matter then.
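On that caching point, here is a rough, hedged sketch of the kind of client-side behavior that changes serving cost, assuming the prompt-caching fields of Anthropic's public Messages API. The model name and prompt contents are placeholders, and this is not a claim about what Claude Code actually does internally:

```python
# Sketch: mark the large, stable prefix (system prompt / tool instructions)
# as cacheable so repeated agent turns don't re-pay for it. Assumes the
# prompt-caching fields of Anthropic's Messages API; details are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

BIG_SYSTEM_PROMPT = "...project conventions, style guide, tool instructions..."

def agent_turn(user_message: str):
    return client.messages.create(
        model="claude-sonnet-4-5",  # model name is a placeholder
        max_tokens=1024,
        system=[
            {
                "type": "text",
                "text": BIG_SYSTEM_PROMPT,
                # Ask the server to cache this block so later turns can reuse it.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[{"role": "user", "content": user_message}],
    )
```

A client that resends the full prefix uncached on every tool call costs the provider far more per turn than one structured like this, which is one plausible reason flat-rate limits get tuned around a particular client's behavior.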
Just pay per token if you want to use third party tools. Stop feeling entitled to other people's stuff.
that and they "stole" my money
In building my custom replacement for Copilot in VS Code, Anthropic's knowledge sharing on what they are doing to make Claude Code better has been invaluable.
It looks like they need to update their FAQ:
Q: Do I need extra AI subscriptions to use OpenCode?
A: Not necessarily, OpenCode comes with a set of free models that you can use without creating an account. Aside from these, you can use any of the popular coding models by creating a Zen account. While we encourage users to use Zen, OpenCode also works with all popular providers such as OpenAI, Anthropic, xAI etc. You can even connect your local models.
That is it. That is the problem. Everyone wants vertical integration and to corner the market, from Standard Oil on down. And everyone who wants that should be smacked down.
They simply stopped people from abusing an accessibility feature that they created for their own product.
They did ban a lot of people. Later, they "unbanned" them, but your comment isn't truthful.
The best pressure on companies comes from viable alternatives, not from boycotts that leave you without tools altogether.
I remember the story used to be the other way around - "just a wrapper", "wrapper AI startups" were everywhere, nobody trusted they could make it.
Maybe being "just a model provider" or "just an LLM wrapper" matters less than the context of the work. What I mean is that the benefits accrue not to the model provider, nor to the wrapper provider, but where the usage takes place: whoever sets the prompts and uses the code gets the lion's share of the benefits from AI.
Being "just a wrapper" wouldn't be a risky position if the LLMs would be content to be "just a model." But they clearly wouldn't be, and so it wasn't.
You can use the Anthropic API in any tool, but these users wanted to use the claude code subscription.
(@dang often doesn't work, I just happened to see this. If you want guaranteed message delivery it's best to email hn@ycombinator.com)
That said, the author is deluding themselves if they think OpenAI is supporting OpenCode in earnest. Unlike Anthropic, they don't have explicit usage limits. It's a 'we'll let you use our service as long as we want' kind of subscription.
I got a paid plan with GPT 5.2 and after a day of usage was just told 'try again in a week'. Then in a week I hit it again and didn't even get a time estimate. I wasn't even doing anything heavy or high reasoning. It's not a dependable service.
This will be completely forgotten in like a week.
And if you leave because of this, more support for those that abide by the TOS and stay.
This is akin to someone selling/operating a cloud platform named Blazure and it’s just a front for Azure.
My view to everyone is to stop trying to control the ecosystem and just build shit. Fast.
What I learned from all this is that OpenAI is willing to offer a service compatible with my preferred workflow/method of billing and Anthropic clearly is not. That's fine but disappointing, I'm keeping my Codex subscription and letting my Claude subscription lapse but sure, it would be nice if Anthropic changed their mind to keep that option available because yes, I do want it.
I'm a bit perplexed by some comments describing the situation as if OpenCode users were getting something for free and stealing from CC users, when the plan quota was enforced either way and they were paying the same amount for it. Or why you seem to think this post pointing out that Anthropic's direct competitor endorses that method of subscription usage is somehow malicious or manipulative behavior.
Commerce is a two-way street and customers giving feedback/complaining/cancelling when something changes is normal and healthy for competition as evidenced by OpenAI jumping in to support OpenCode users without needing to break their TOS.
Or maybe they did consider it but were capital/inference-capacity constrained and couldn't keep serving at this price point. Pretty sure that without any constraints they would eagerly go for 100% market share.
CC users give them the reins to the agentic process. Non-CC users take (mostly indirect) control themselves. So if you are forced to slow growth, where do you apply the brake (by charging de facto more per (API) token)?
This is really the salient point for everything. The models are expensive to train but ultimately worthless if paying customers aren't captive and can switch at will. The issue is that a lot of the recent gains are in the prefill inference and in the model's RAG, which aren't truly a moat (except maybe for Google, if their RAG includes Google Scholar). That's where the bubble will pop.
Anthropic blocks third-party use of Claude Code subscriptions
https://news.ycombinator.com/item?id=46549823
But they also have shown a weakness by failing to understand why people might want to do this (use their Max membership with OpenCode etc instead).
People aren't using opencode or crush with their Claude Code memberships because they're trying to exploit or overuse tokens or something. That isn't possible.
They do it because Claude Code the tool itself is full of bugs and has performance issues, and OpenCode is of higher quality, has more open (surprise) development, is more responsive to bug fixes, and gives them far more knobs and dials to control how it works.
I use Claude Code quite a bit and there isn't a session that goes by where I don't bump into a sharp edge of some kind. Notorious terminal rendering issues, slow memory leaks, or compaction related bugs that took them 3 months to fix...
Failure to deal with quality issues and listen to customers is hardly a good sign of company culture, leading up to IPO... If they're trying to build a moat... this isn't a strong way to do it.
If you want to own the market and have complete control at the tooling level, you're simply going to have to make a better product. With their mountain of cash and army of engineers at their disposal ... they absolutely could. But they're not.
But to me the appeal of OpenCode is that I can mix and match APIs and local models. I have DeepSeek R1 doing research while GLM is planning and doing code reviews and o4-mini is breaking down screenshots into specs while local Qwen is doing the work.
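For anyone curious what that mix-and-match looks like in practice, here is a hypothetical sketch (not OpenCode's actual configuration) that routes each role to a different backend over OpenAI-compatible endpoints. The environment variable names, URLs, and model names are illustrative assumptions; check each provider's docs:

```python
# Hypothetical role-based routing across several OpenAI-compatible backends.
# DeepSeek's API and local servers such as Ollama expose this style of endpoint.
import os
from openai import OpenAI

ROLES = {
    "research":  (OpenAI(base_url="https://api.deepseek.com",
                         api_key=os.environ["DEEPSEEK_API_KEY"]), "deepseek-reasoner"),
    "review":    (OpenAI(), "o4-mini"),  # uses OPENAI_API_KEY from the environment
    "implement": (OpenAI(base_url="http://localhost:11434/v1",
                         api_key="unused"), "qwen2.5-coder"),
}

def run(role: str, prompt: str) -> str:
    """Send a prompt to whichever backend is configured for the given role."""
    client, model = ROLES[role]
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

notes = run("research", "Summarize prior art for feature X: ...")
patch = run("implement", f"Write the code for feature X, given these notes:\n{notes}")
print(run("review", f"Review this patch for correctness and style:\n{patch}"))
```

None of the individual backends needs to know the others exist, which is exactly the flexibility a single-vendor client takes away.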
My experience with bugs has also been the exact opposite of what you described.
And you let local QWEN write the code for you? Is the output any good or comparable to frontier models?
Archaeologist.dev Made a Big Mistake
If guided by this morality column, Archaeologist should immediately stop using pretty much anything they use in their life. There's no company today that doesn't have its hands dirty. Life is a dance of choosing the least bad option, not radically cutting off anything that looks "bad".
What? That's a thing? Why would a vibe coder be "renowned"? I use Claude every day, but this is just too much.
https://clawd.bot/ https://github.com/clawdbot/clawdbot
He's also the guy behind https://github.com/steipete/oracle/
To be clear, I’ve seen this sentiment across various comments not just yours, but I just don’t agree with it.
https://builders.ramp.com/post/why-we-built-our-background-a...