
I thought that, too. It wasn’t really true, though.

Some papers pointed out that models start failing after being trained on too much synthetic data ("model collapse"). They also need tons of raw Internet data in the first place. Humans don't have those failure modes. The AIs also got smarter the more data we produced.
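The collapse failure mode can be shown with a toy sketch (my own illustration, not from any particular paper the comment alludes to): fit a Gaussian, sample from the fit, refit on those samples, and repeat. Estimation noise compounds across generations and the fitted spread drifts toward zero, so the model loses the diversity of the original data.

```python
import numpy as np

def collapse_demo(generations=300, n_samples=20, seed=0):
    # Toy "model collapse": each generation trains (fits a Gaussian)
    # only on samples produced by the previous generation's model.
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0  # generation 0 matches the real data
    for _ in range(generations):
        synthetic = rng.normal(mu, sigma, n_samples)   # model-generated data
        mu, sigma = synthetic.mean(), synthetic.std()  # refit on it
    return sigma

# After many synthetic-only generations, sigma has shrunk far below
# the original 1.0: the chain has forgotten the tails of the data.
```

The specific numbers (20 samples, 300 generations) are arbitrary; the downward drift in variance is the point, and it appears for a wide range of settings.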

So, there are some critical differences between what we're doing and what they're doing that keep it from being a neat flow like that. What many humans do in training other humans fits that, though.


