
o3 is multiple orders of magnitude more expensive for a marginal performance gain. You could hire 50 full-time PhDs for the cost of using o3. You're witnessing the blow-off top of the scaling hype bubble.


What they’ve proven here is that it can be done.

Now they just have to make it cheap.

Tell me, what has this industry been good at since its birth? Driving down the cost of compute and making things more efficient.

Are you seriously going to assume that won’t happen here?


>> Now they just have to make it cheap.

Like they've been making it all this time? Cheaper and cheaper? Less data, less compute, fewer parameters, but the same or improved performance? That's not what we can observe.

>> Tell me, what has this industry been good at since its birth? Driving down the cost of compute and making things more efficient.

No, actually, the cheaper compute gets, the more of it they need to use, or their progress stalls.


> Like they've been making it all this time?

Yes, exactly like they've been doing this whole time, with the cost of running each model dropping massively, sometimes quite rapidly, after release.


No, the cost of training is the one that isn't dropping any time soon. When data, compute and parameters increase, then the cost increases, yes?


Do you understand the difference between training and inference?

Yes, it costs a lot to train a model. Those costs go up. But once you've trained it, it's done. At that point inference (the actual execution/usage of the model) is the cost you worry about.

Inference cost drops rapidly after a model is released, as new optimizations and more efficient compute come online.
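A toy amortization sketch in Python (every number below is invented, just to show the shape of the argument, not anyone's real figures):

    # Illustrative only: all numbers are made up.
    training_cost = 100e6        # one-time training cost, in dollars
    cost_per_query = 0.01        # inference cost per query, in dollars
    queries = 100e9              # queries served over the model's lifetime

    inference_cost = queries * cost_per_query
    total = training_cost + inference_cost
    print(f"training share:  {training_cost / total:.1%}")   # ~9%
    print(f"inference share: {inference_cost / total:.1%}")  # ~91%

At that kind of volume, the one-time training bill is small next to the ongoing inference bill.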


That’s precisely what’s different about this approach. Now the inference itself is expensive because the system spends far more time coming up with potential solutions and searching for the optimal one.
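o3's internals aren't public, but the general shape of the idea (sample many candidate solutions, score them, keep the best) looks roughly like this sketch, with hypothetical generate/score placeholders standing in for the real system:

    # Generic test-time search sketch. `generate` and `score` are hypothetical
    # stand-ins, not OpenAI's actual mechanism.
    import random

    def generate(task, seed):
        rng = random.Random(seed)
        return f"candidate {rng.randint(0, 9999)} for {task}"

    def score(candidate):
        return random.random()   # stand-in for a verifier or learned reward model

    def solve(task, n_candidates=1024):
        # You pay inference for every candidate but keep only one answer,
        # so cost per task grows roughly linearly with n_candidates.
        candidates = [generate(task, s) for s in range(n_candidates)]
        return max(candidates, key=score)

    print(solve("ARC task", n_candidates=16))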


I feel like I’m taking crazy pills.

Inference always starts expensive. It comes down.


And again, no. The cost of inference is a function of the size of the model, and if models keep getting bigger, deeper, badder, the cost of inference will keep going up. And if models stop getting bigger because improved performance can be achieved just by scaling inference, without scaling the model, well, that's still more inference. Even if the cost per unit falls, there will need to be so much more inference to keep AI companies in competition with each other that the money they have to spend will keep increasing. In other words, it's not how much it costs but how much you need to buy.

This is a thing, you should know. It's called the Jevons paradox:

In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes Jevons effect) occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces increases in demand enough that resource use is increased, rather than reduced.[1][2][3][4]

https://en.wikipedia.org/wiki/Jevons_paradox
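To put toy numbers on it (all invented):

    # Jevons paradox with invented numbers: per-task cost falls 10x,
    # induced demand grows 30x, so total spend triples instead of falling.
    cost_before, tasks_before = 20.0, 1_000_000
    cost_after, tasks_after = 2.0, 30_000_000

    print(f"spend before: ${cost_before * tasks_before:,.0f}")   # $20,000,000
    print(f"spend after:  ${cost_after * tasks_after:,.0f}")     # $60,000,000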

Better check those pills then.

Oh but, you know, merry chrimbo to you too.


>> Do you understand the difference between training and inference?

Oh yes indeed-ee-o, and I'm referring to training, not inference, because the big problem is the cost of training. The cost of training has increased steeply with every new generation of models because it has to, in order to improve performance. That process has already reached the point where training ever larger models is prohibitively expensive, even for companies with the resources of OpenAI. For example, the following is from an article posted on HN a couple of days ago that is basically all about the overwhelming cost of training GPT-5:

In mid-2023, OpenAI started a training run that doubled as a test for a proposed new design for Orion. But the process was sluggish, signaling that a larger training run would likely take an incredibly long time, which would in turn make it outrageously expensive. And the results of the project, dubbed Arrakis, indicated that creating GPT-5 wouldn’t go as smoothly as hoped.

(...)

Altman has said training GPT-4 cost more than $100 million. Future AI models are expected to push past $1 billion. A failed training run is like a space rocket exploding in the sky shortly after launch.

(...)

By May, OpenAI’s researchers decided they were ready to attempt another large-scale training run for Orion, which they expected to last through November.

Once the training began, researchers discovered a problem in the data: It wasn’t as diversified as they had thought, potentially limiting how much Orion would learn.

The problem hadn’t been visible in smaller-scale efforts and only became apparent after the large training run had already started. OpenAI had spent too much time and money to start over.

From:

https://archive.ph/L7fOF

HN discussion:

https://news.ycombinator.com/item?id=42485938

"Once you trained it it's done" - no. First, because you need to train new models continuously so that they pick up new information (e.g. the name of the President of the US). Second because companies are trying to compete with each other and to do that they have to train bigger models all the time.

Bigger models mean more parameters and more data (assuming there is enough, which is a whole other can of worms); more parameters and data mean more compute, and more compute means more millions, or even billions. Nothing in all this suggests that costs are coming down in any way, shape or form, and yep, that's absolutely about training and not inference. You can't do inference before you do training, you need to train continuously, and for that reason you can't ignore the cost of training and consider only the cost of inference. Inference is not the problem.
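For a sense of scale, the usual back-of-the-envelope is that training compute grows roughly as 6 x parameters x training tokens; the two configurations below are purely illustrative, not anyone's actual numbers:

    # Rough rule of thumb: training FLOPs ~ 6 * params * tokens.
    # Both configurations are invented for illustration.
    def training_flops(params, tokens):
        return 6 * params * tokens

    smaller = training_flops(70e9, 1.4e12)    # ~6e23 FLOPs
    bigger = training_flops(1e12, 15e12)      # ~9e25 FLOPs

    print(f"~{bigger / smaller:.0f}x more compute for the bigger run")

Scale both axes at once and the compute bill multiplies, which is exactly why each new training run costs so much more than the last one.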


> What they’ve proven here is that it can be done.

No, they haven't; these results do not generalize, as mentioned in the article:

"Furthermore, early data points suggest that the upcoming ARC-AGI-2 benchmark will still pose a significant challenge to o3, potentially reducing its score to under 30% even at high compute"

Meaning, they haven't solved AGI, and the tasks themselves do not represent programming well; these models do not perform that well on engineering benchmarks.


Sure, AGI hasn’t been solved today.

But what they’ve done is show that progress isn’t slowing down. In fact, it looks like things are accelerating.

So sure, we’ll be splitting hairs for a while about when we reach AGI. But the point is that just yesterday people were still talking about a plateau.


About 10,000 times the cost for twice the performance sure looks like progress is slowing to me.


Just to be clear — your position is that the cost of inference for o3 will not go down over time (which would be the first time that has happened for any of these models).


Even if compute costs drop by 10X a year (which seems like a gross overestimate IMO), you're still looking at 1000X the cost for a 2X annual performance gain. Costs outpacing progress is the very definition of diminishing returns.
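Spelling the arithmetic out (the 10X/year drop is the assumption above; the 10,000X starting point is the parent's estimate):

    # All ratios are relative to the cheaper model's cost on the same benchmark.
    cost_multiple_today = 10_000     # "10,000 times the cost for twice the performance"
    assumed_annual_drop = 10         # the (generous) 10X-per-year assumption

    cost_multiple_next_year = cost_multiple_today / assumed_annual_drop
    print(cost_multiple_next_year)   # 1000.0 -- still 1000x the cost for that 2x gain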


From their charts, o3-mini outperforms o1 while using less energy. I don't see the diminishing returns you're talking about. Improvement outpacing cost. By your logic, perhaps the very definition of progress?

You can also use the full o3 model, consume insane power, and get insane results. Sure, it will probably take longer to drive down those costs.

You’re welcome to bet against them succeeding at that. I won’t be.


Yes, that's exactly what I'm implying, otherwise they would have done it a long time ago, given that the fundamental transformer architecture hasn't changed since 2017. This bubble is like watching first year CS students trying to brute force homework problems.


> Yes, that's exactly what I'm implying, otherwise they would have done it a long time ago

They've been doing it literally this entire time. o3-mini, according to the charts they've released, is less expensive than o1 but performs better.

The costs of running these models have been falling precipitously.


I would agree if the cost of AI compute per unit of performance hadn't been dropping by more than 90-99% per year since GPT-3 launched.

This type of compute will be cheaper than Claude 3.5 within 2 years.

It's kinda nuts. Give these models tools to navigate and build on the internet and they'll be building companies and selling services.


That's a very static view of affairs. Once you have a master AI, at a minimum you can use it to train cheaper, slightly less capable AIs. At the other end, the master AI can keep training to become even smarter.


The high-efficiency version got 75% at just $20/task. When you count the time it takes to fill in the squares, that doesn't sound far off from what a skilled human would charge.
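Rough numbers (the human time and rate below are my guesses, not from the ARC write-up):

    # o3 high-efficiency figure from the post vs. a guessed human cost per task.
    o3_cost_per_task = 20.0    # dollars, from the post
    minutes_per_task = 10      # guessed time to solve a task and fill in the grid
    hourly_rate = 75.0         # guessed rate for a skilled human, dollars/hour

    human_cost = hourly_rate * minutes_per_task / 60
    print(f"human ~${human_cost:.2f}/task vs o3 ${o3_cost_per_task:.2f}/task")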



