
>The "6 or so" breakthroughs sounds about right to me. But I don't really see the reason for being optimistic about 2030. It could just as easily be 2050, or 2100, etc.

Well, if you read between the lines of the Gato paper, there may be no fundamental hurdles remaining, and scale is the only boundary left.

>Not a recognition that we fundamentally still don't know how brains work! (or what intelligence is, etc.)

This is a really bad trope. We don't need to understand the brain to make an intelligence. Does evolution understand how the brain works? Did we solve the Navier-Stokes equations before building planes? No.



I can acknowledge the point about planes, since I believe in tinkering/engineering over theory.

But I'd say planes are more like "narrow AI", and we already have that. Planes do an economically useful thing, just like narrow AIs do economically useful things. (But what birds do is also valuable and efficient, and it's still an open research problem to emulate them. Try getting a plane or drone to outmaneuver prey like an eagle does.)

I'd consider the possibility that we WILL get AGI in 10, 30, or 100 years, but it won't be that impactful compared to the narrow AIs already running everything! It will be slow and suffer from Moravec's paradox (i.e. it will be much less efficient than a human, for a very long time).

---

"Solving AGI" isn't solving a well-defined problem IMO. If you want to say "well, we'll just ask the AGI how to get to Mars and how to create nuclear fusion and it will tell us", to me that sounds like a hacker / uninformed philosopher fantasy, with no basis in reality.

Nothing about deep learning / DALL-E-type systems is close to that. I think people who believe the contrary are mostly projecting meaning from their own minds onto computing systems -- something that, if they'd studied human cognition, they'd realize humans are EXTREMELY prone to!

These seem like a lot of the same people who didn't believe that level 5 self-driving would take human-level intelligence. That is, they literally misunderstood what THE ACTIVITY OF DRIVING IS, and didn't understand that current approaches have a diseconomy where the last 1% of the problem takes 99% of the time. Now even Musk admits that, after Gary Marcus and others had been telling him so since 2015.

(The point about evolution doesn't make sense, because people want AI within 10 years, not within 100M or 1B years.)


>But I'd say planes are more like "narrow AI", and we already have that.

Not sure I buy the analogy. A plane is already more like an AGI: you have to figure out enough aerodynamics to get the thing to be airworthy, enough materials science to develop the proper materials to build it, enough mechanical engineering to make it maneuverable, enough software to make it all work together, etc. So it's already an amalgam of many other types of systems. A hang glider might be more akin to a narrow AI in this framework.

>I'd consider the possibility that we WILL get AGI in 10, 30 or 100 years, but it won't be that impactful compared to the narrow AIs already running everything!

I think this misunderstands what an AGI really represents. Imagine John von Neumann compared to a chimpanzee. There's no comparison, right? The chimp can't even begin to understand the simplest plans or motivations von Neumann has. Now imagine an AGI is to von Neumann as von Neumann is to the chimp -- only the metaphor doesn't even work, because there's no reason the AGI can't scale further, until we're talking about something relative to us as we are relative to ants or bacteria. If you think nothing will change when a system like that exists, then I don't know what to tell you.

>I'd consider the possibility that we WILL get AGI in 10, 30 or 100 years, but it won't be that impactful compared to the narrow AIs already running everything! It will be slow and suffer from Moravec's paradox (i.e. being much less efficient than a human, for a very long time)

If one considers the above and is comfortable with the idea that such a thing might be possible in our or our children's lifetimes, then we should be doing everything in our power to solve the alignment problem, which is extremely non-trivial and enormously consequential. At minimum, an AGI could direct, orchestrate, or improve the narrow AIs in ways that no human could hope to understand. All bets are off at that point.

I think the paradox was more pertinent in robotics, but in any case recent advances have put tasks like object recognition, real-world path planning, and logical reasoning well past human-child level, and at or beyond adult level in some cases.

>Now even Musk admits that, after Gary Marcus and others were telling him that since 2015.

Marcus is a constant goalpost mover; he'll still be shouting about AGIs not really understanding the world while he's being disassembled by nanobots.



