It's very clear that Anthropic doesn't really want to expose the secret sauce to end users. I have to patch Claude every release to bring this functionality back.
Remember there are no moats in this industry - at best, one company might have a two-month lead, sometimes. We've also noticed that companies paying OpenAI may shift to paying Google or Anthropic in a heartbeat.
That means pricing is going to be competitive. You may still get your wish, though: instead of the price of an engineer remaining the same, it will cut itself down by 95%.
Yeah. If you ignore the negligible fact that some investor may want a return on all that money going into capex, I'm pretty sure you can, Enron-style, reach the conclusion that any of those companies have “healthy” margins.
Amazon was founded in 1994, went public in 1997 and became profitable in 2001. So Anthropic is two years behind with the IPO but who knows, maybe they'll be profitable by 2028? OpenAI is even more behind schedule.
How much loss did they accumulate until 2001? Pretty sure it wasn't the $44 billion OpenAI has. And Amazon didn't have many direct competitors offering the same services.
Did Amazon really not turn a profit, or did they apply a bunch of tricks to make it appear like they didn't in order to avoid taxes? Given their history, I'd assume the latter: https://en.wikipedia.org/wiki/Amazon_tax_avoidance
Anyway, this has nothing to do with whether inference is profitable.
Their price is not a signal of their costs, it is the result of competitive pressure. This shouldn't be so hard to understand. Companies have burned investor money for market share for quite some time in our world.
This is expected, it's normal - why are you so defensive?
Because you made stuff up, did not show any proof, and ignored my proof to the contrary.
You made the claim:
> Deepseek lies about costs systematically.
DeepSeek broke down their cost in great detail, yet you simply called it "lies", but did not even mention which specific number of theirs you claim is a lie, so your statement is difficult to falsify. You also ignored my request for clarification.
You’re citing DeepSeek's unaudited numbers. That is not even close to proof.
Unless proven otherwise, it is propaganda.
Meanwhile we have several industry experts pointing not only at DeepSeek's ridiculous claims of efficiency, but also at the lies from other labs.
That's not how valuations work. A company's valuation is typically based on an NPV (net present value) calculation, which discounts its expected future cash flows back to the present and sums them. Depending on the company's strategy, it's often rational for it to not be profitable for quite a long while, as long as it can give investors the expectation of significant profitability down the line.
Having said that, I do think that there is an investment bubble in AI, but am just arguing that you're not looking at the right signal.
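To make that NPV logic concrete, here is a minimal sketch (the cash-flow numbers are purely illustrative, not any real company's): a firm can lose money for its first two years and still have a positive present value, as long as the later, discounted profits outweigh the early losses.

```python
# Illustrative sketch: NPV as the sum of time-discounted future cash flows.
def npv(rate, cash_flows):
    """Discount each year's cash flow back to the present and sum.

    cash_flows[t] is the cash flow received at the end of year t+1.
    """
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# Hypothetical firm: losses in years 1-2, growing profits afterwards (in $M).
flows = [-100, -50, 40, 80, 120, 150]
print(round(npv(0.10, flows), 1))  # → 111.6, positive despite early losses
```

At a 10% discount rate the early losses are outweighed by the later profits, which is exactly why "not profitable yet" alone tells you little about a valuation.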
Why would you gladly pay more than what it's worth? It's not an engineer you are hiring, it's AI. The whole point of it was to make intelligent workflows cheaper. If it's going to cost as much as an engineer, hire the engineer, at least you'd have an escape goat when things invariably go wrong.
Scapegoat, got it. Can't blame the autocorrect though... I honestly thought it was spelled like that, which is a shame since I've been studying English my entire life as a second language.
> one of two goats that was chosen by lot to be sent alive into the wilderness, the sins of the people having been symbolically laid upon it, while the other was appointed to be sacrificed
Let me get this straight: in the Bible, the scapegoat does survive, while the "pure" goat that did nothing wrong gets killed? That's... messed up, even for a tribal rite.
I'd pay up to $1000 pretty easily just based off the time it saves me personally from a lot of grindy type work which frees me up for more high value stuff.
It's not 10x by any means, but at most dev salaries it doesn't need to be to pay for itself. A 1.5x improvement alone is probably enough for most above-junior developers for a company to justify $1000/month.
I suppose if your area of responsibility wasn't very broad the value would decrease pretty quickly so maybe less value for people at very large companies?
Yes, easily. Paying for Claude would be investing that money. Assuming a 10% return, which would be great, I'd make an extra $1200 a year investing it instead. I'm pretty sure that over the course of a year of not having to spend time on low-value or repetitive work, I can increase productivity enough to more than cover the $13k difference. Developer work scales really well, so removing a bunch of the low end and freeing up time for the more difficult problems is going to return a lot of value.
It's *worth it* when you're salaried? Compared to investing the money? Do you plan to land a very-high-paying executive role years down the line? Are you already extremely highly paid? Did Claude legitimately 10x your productivity?
I'm serious - the productivity boost I'm getting from using AI models is so significant that it's absolutely worth paying even $2k/month. It saves me a lot of time, and enables me to deliver new features much faster (making me look better for my employer) - both of which would justify spending a small fraction of my own money. I don't have to, because my employer pays for it, but as I said, if I had to, I would pay.
I am not paying this myself, but the place I work at is definitely paying around $2k a month for my Claude Code usage. I pay 2 x $200 for my personal projects.
I think personal subs are subsidized while corporate ones definitely aren't. I have CC for my personal projects running 16h a day with multiple instances, but work CC still racks up way higher bills with less usage. If I had to guess, my work CC is using a quarter as much for 5x the cost, so at least a 20x difference.
I am not going to say it has 10x'd my productivity or whatever, but I would never, ever have built all those things in that timeframe without it.
I don't know why you keep insisting that no one is making any money off of this. Claude Code has made me outrageously more productive. Time = Money right?
I'm an employee, and my boss loves me because I deliver things he wants quickly and reliably - because I use AI tools. Guess who he will keep in the next round of layoffs?
> It's very clear that Anthropic doesn't really want to expose the secret sauce to end users
Meanwhile, I am observing precisely how VS+Copilot works in my OAI logs with zero friction. Plug in your own API key and you can MITM everything via the provider's logging features.
How much longer is Anthropic going to allow OpenCode to use Pro/Max subscriptions? Yes, it's technically possible, but it's against Anthropic's ToS. [1]
It's amazing how much other agentic tools suck in comparison to Claude Code. I'd love to have a proper alternative. But they all suck. I keep trying them every few months and keep running back to Claude Code.
Just yesterday I installed Cursor and Codex, and removed both after a few hours.
Cursor disrespected my setting to ask before editing files. Codex renamed my tabs after I had named them. It also went ahead and edited a bunch of my files after a fresh install without asking me. The heck, the default behavior should have been to seek permission at least the first time.
OpenCode does not allow me to scroll back and edit a prior prompt for reuse. It also keeps throwing up all kinds of weird errors, especially when I'm trying to use free or lower-cost models.
Gemini CLI reads strange Python files when I'm working on a Node.js project, what the heck. It also never fixed the diff display issues in the terminal; it's always so difficult for me to actually see what edits it is trying to make before it makes them. It also frequently throws random internal errors.
At this point, I'm not sure we'll be seeing a proper competitor to Claude Code anytime soon.
Same, I still use CC mainly due to it being so wildly better at compaction. The overall experience of using OpenCode was far superior - especially with the LSP configured.
I use Opencode as my main driver, and I don’t experience what you have experienced.
For instance, OpenCode has an /undo command which allows you to scroll back and edit a prior prompt. It also supports forking conversations from any prior message.
I think it depends on the setup. I overwrote the default planning agent prompt of OpenCode to fit my own use cases and my own MCP servers. I've been using OpenAI's GPT Codex models and they have been performing very well; I am able to make it do exactly what I ask it to do.
Claude Code may do stuff fast, but in terms of quality and the ability to edit only what I want it to, I don't think it's the best. Claude Code often takes shortcuts or does extra stuff that I didn't ask for.
Not in my (limited) experience. I gave CC and codex detailed instructions for reworking a UI, and codex did a much worse job and took 5x as long to finish.
If they cared about that, they wouldn't expose the thinking blocks to the end-user client in the first place; they'd have the user-side context store hashes to the blocks (stored server-side) instead.
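A minimal sketch of that hash-reference idea (all names and the storage layout here are hypothetical, not Anthropic's actual design): the server keeps each thinking block in a content-addressed store and hands the client only a digest; on the next turn the client sends the digests back and the server rehydrates the blocks, so the client never sees the block text at all.

```python
import hashlib

# Hypothetical sketch: server-side content-addressed store for thinking blocks.
# The user-side context would only ever hold opaque digests, never block text.
class BlockStore:
    def __init__(self):
        self._blocks = {}  # digest -> thinking-block text, server-side only

    def put(self, block: str) -> str:
        """Store a thinking block; return the digest the client sees."""
        digest = hashlib.sha256(block.encode("utf-8")).hexdigest()
        self._blocks[digest] = block
        return digest

    def rehydrate(self, digests):
        """On the next turn, expand client-supplied digests back into blocks."""
        return [self._blocks[d] for d in digests]

store = BlockStore()
ref = store.put("...chain-of-thought the user never sees...")
print(len(ref))                # the client only gets a 64-char hex digest
print(store.rehydrate([ref]))  # the server recovers the original block
```

Under that scheme a patched client would have nothing to reveal, which is the commenter's point: shipping the blocks to the client at all suggests hiding them isn't the real priority.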
GitHub Issues as a customer support funnel is horrible. It's easy for them, but it hides all the important bugs and only surfaces "wanted features" that are thumbs-up'd a lot. So you see "Highlight text X" as the top requested feature; meanwhile, 10% of users experience a critical bug, but they don't all find "the github issue" one user poorly wrote about it, so it has like 7 upvotes.
GitHub Codespaces has a critical bug that makes the copilot terminal integration unusable after 1 prompt, but the company has no idea, because there is no clear way to report it from the product, no customer support funnel, etc. There's 10 upvotes on a poorly-written sorta-related GH issue and no company response. People are paying for this feature and it's just broken.
Claude Code can reverse engineer it to a degree. Doing it for more than a single version is a PITA though. Easier to build your own client on top of their SDK.
I think it's more classic enshittification. Currently, as a percentage, still not many devs use it. In a few months or 1-2 years all these products will start to cater to the median developer and start to get dumbed down.
https://github.com/anthropics/claude-code/issues/15263
https://github.com/anthropics/claude-code/issues/9099
https://github.com/anthropics/claude-code/issues/8371