
LLMs get to maybe ~20% of a GPU's rated peak FLOPS. It's not hard to imagine that a purpose-built ASIC with an adapted software stack gets us significantly more real performance.
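The utilization figure being discussed is usually MFU (model FLOPs utilization): achieved FLOP/s divided by the hardware's rated peak. A minimal sketch, using the common approximation that a forward pass costs ~2 FLOPs per parameter per token; the numbers below are hypothetical, for illustration only:

```python
def mfu(tokens_per_s: float, params: float, peak_flops: float) -> float:
    # Forward pass costs roughly 2 * params FLOPs per token.
    achieved_flops = 2 * params * tokens_per_s
    return achieved_flops / peak_flops

# Hypothetical: a 70B-param model generating 1,000 tok/s on a GPU
# rated at 1e15 FLOP/s gives 2 * 70e9 * 1e3 / 1e15 = 0.14, i.e. ~14% MFU.
print(f"{mfu(1_000, 70e9, 1e15):.0%}")
```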


They get more than this. For prefill we can get 70% matmul utilization; for generation it's lower, but we'll get to >50% there too eventually.


And even when you get to 100% utilization, you're still wasting a crazy amount of gates / die area on a general-purpose GPU, plus you're paying the Nvidia tax. There is no way in hell that goes on for 10 years if we have good AGI but inference is too expensive.



