
I still don't understand. That presupposes that the AGI will just materialize out of thin air and immediately have human-level intelligence. That one day you just have a bunch of dumb computers, and the next day you have an AGI hell-bent on escaping at all costs.

That's not going to happen - even Carmack believes so. The process of getting to AGI is going to take a long time, and we'll go through lots and lots of iterations of progressively more intelligent machines, starting with ones that are at toddler level at best. And yes, toddlers are little monkeys when it comes to escaping, but they are not a world-ending threat.



They will inevitably reach the point of being a world-ending threat, but I'm very confident Carmack is right that it won't happen so quickly that we can't see the signs of danger.

There's a lot of policy work we could do to slow things down once we get near that point, which is something rarely talked about among AI safety researchers, but the fundamental existential danger of this technology is obvious.



