
Carmack gave his opinions about AGI in a recent Lex Fridman interview. He has some good ideas.


I remember him saying we don't have "line of sight" to AGI, and there could just be "6 or so" breakthrough ideas needed to get there.

And he said he was over 50% on us seeing "signs of life" by 2030. Something like being able to "boot up a bunch of remote Zoom workers" for your company.

The "6 or so" breakthroughs sounds about right to me. But I don't really see the reason for being optimistic about 2030. It could just as easily be 2050, or 2100, etc.

That timeline sounds more like a Kurzweil-ish argument based on computing power equivalence to a human brain. Not a recognition that we fundamentally still don't know how brains work! (or what intelligence is, etc.)

Also, a lot of people question the very idea of AGI. We could live in a future of scary powerful narrow AIs for over a century (and arguably we already do).


There is an inverse relationship between the age of a futurist and the amount of time they think it will take for their predictions to become true.

In other words, people making these sorts of predictions about the future are biased towards believing they'll be alive to benefit from them.


> There is an inverse relationship between the age of a futurist and the amount of time they think it will take for their predictions to become true.

That's not true. The so-called Maes-Garreau law or effect does not replicate in actual surveys, as opposed to a few cherry-picked futurist examples.


> There is an inverse relationship between the age of a futurist and the amount of time they think it will take for their predictions to become true.

I think calling Carmack a Futurist is pretty insulting.


Why? Because he also wrote some game engines?


Because he's an extremely analytical, data-driven person. He isn't trying to sell some TED talk.


There's all sorts of accomplished people on this list: https://en.wikipedia.org/wiki/List_of_futurologists


>The "6 or so" breakthroughs sounds about right to me. But I don't really see the reason for being optimistic about 2030. It could just as easily be 2050, or 2100, etc.

Well, if you read between the lines of the Gato paper, there may be no conceptual hurdles left, and scale is the only remaining boundary.

>Not a recognition that we fundamentally still don't know how brains work! (or what intelligence is, etc.)

This is a really bad trope. We don't need to understand the brain to build an intelligence. Does evolution understand how the brain works? Did we solve the Navier-Stokes equations before building planes that fly? No.


I can acknowledge the point about planes, since I believe in tinkering/engineering over theory.

But I'd say planes are more like "narrow AI", and we already have that. Planes do an economically useful thing, just like narrow AIs do economically useful things. (But what birds do is also valuable and efficient, and it's still an open research problem to emulate them. Try getting a plane or drone to outmaneuver prey like an eagle.)

I'd consider the possibility that we WILL get AGI in 10, 30 or 100 years, but it won't be that impactful compared to the narrow AIs already running everything! It will be slow and suffer from Moravec's paradox (i.e. being much less efficient than a human, for a very long time)

---

"Solving AGI" isn't solving a well-defined problem IMO. If you want to say "well we'll just ask the AGI how to get to Mars and how to create nuclear fusion and it will tell us", well to me that sounds like a hacker / uninformed philosopher fantasy, which has no bearing in reality.

Nothing about deep learning / DALL-E-type systems is close to that. I think people who believe the contrary are mostly projecting meaning from their own minds onto computing systems -- something that, if they'd studied human cognition, they'd realize humans are EXTREMELY prone to!

A lot of these seem to be the same people who didn't believe that level 5 self-driving would take human-level intelligence. That is, they literally misunderstood what THE ACTIVITY OF DRIVING IS, and didn't understand that current approaches have a diseconomy where the last 1% takes 99% of the time. Now even Musk admits that, after Gary Marcus and others had been telling him so since 2015.

(The point about evolution doesn't make sense, because people want AI within 10 years, not 100M or 1B years)


>But I'd say planes are more like "narrow AI", and we already have that.

Not sure I buy the analogy. A plane is already more like an AGI: you have to figure out enough aerodynamics to make the thing airworthy, enough materials science to develop the proper materials to build it, enough mechanical engineering to make it maneuverable, enough software to make it all work together, etc. So it's already an amalgam of many other types of systems. A hang glider might be more akin to a narrow AI in this framework.

>I'd consider the possibility that we WILL get AGI in 10, 30 or 100 years, but it won't be that impactful compared to the narrow AIs already running everything!

I think this misunderstands what an AGI really represents. Imagine John von Neumann compared to a chimpanzee. There's no comparison, right? The chimp can't even begin to understand the simplest plans or motivations von Neumann has. Now imagine an AGI is to von Neumann as von Neumann is to the chimp -- only the metaphor doesn't even work, because there's no reason the AGI can't scale further, until we're talking about something that is to us as we are to ants or bacteria. If you think nothing will change when a system like that exists, then I don't know what to tell you.

>I'd consider the possibility that we WILL get AGI in 10, 30 or 100 years, but it won't be that impactful compared to the narrow AIs already running everything! It will be slow and suffer from Moravec's paradox (i.e. being much less efficient than a human, for a very long time)

If one considers the above and is comfortable with the idea that such a thing might be possible in our or our children's lifetimes, then we should be doing everything in our power to solve the alignment problem, which is extremely non-trivial and enormously consequential. At minimum, an AGI could direct, orchestrate, or improve the narrow AIs in ways that no human could hope to understand. All bets are off at that point.

I think the paradox was more salient in robotics, but in any case recent advances have put tasks like object recognition, real-world path planning, logical reasoning, etc. well past human-child levels, and at or beyond adult levels in some cases.

>Now even Musk admits that, after Gary Marcus and others had been telling him so since 2015.

Marcus is a constant goalpost-mover; he'll still be shouting about AGIs not really understanding the world while he's being disassembled by nanobots.


> The "6 or so" breakthroughs sounds about right to me.

What’s your logic? Or his if you know it?


Yeah he didn't elaborate on this in the interview, which I would have liked.

I should amend my comment to say that "6 or so breakthroughs" sounds a lot more plausible to me than "1 breakthrough" or "just scaling", which you will see some people advocate, including in this thread. That's what I meant.

I believe those latter views are fundamentally mistaken, and that the people holding them simply don't understand what they don't understand (intelligence). The views of Gary Marcus and Steven Pinker are closer to my own -- they have actually studied human cognition and are not just hackers and uninformed philosophers pontificating.

Jeff Hawkins is another AI person from an unconventional background, and I respect him because he puts his money where his mouth is and has been funding his own research since the early 2000s. I read his first book in 2006, and the most recent one a couple of years ago.

But I feel Jeff Hawkins has had the "one breakthrough" feeling for 10-15 years now. TBH I am not sure if they have even made what qualifies as one breakthrough in the last 10-15 years, and I don't mean that to be insulting, since it's obviously extremely difficult and beyond 99.99% of us, including me.

I am not sure that even deep learning counts as a breakthrough. We will only know in retrospect. Based on Moravec's paradox, my opinion is there's a good chance that deep learning won't play any role in AGI, if and when it arrives. Some other mechanism could obviate the technique.

So to me "6 or so breakthroughs" simply sounds more realistic than the current AI zeitgeist. But nobody really knows.


What do you mean exactly when you say “deep learning”?


I'm guessing he means neural networks with a large number of layers.
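
Something like this toy sketch, say -- my own illustration of "many stacked layers", not anything Carmack or the parent specified (untrained, just to show the shape of the idea):

    # Toy sketch: "deep" just means many stacked layers,
    # each a linear map followed by a nonlinearity.
    import numpy as np

    def relu(x):
        return np.maximum(0, x)

    rng = np.random.default_rng(0)
    depth, width = 50, 64
    layers = [rng.normal(size=(width, width)) * 0.1 for _ in range(depth)]

    def forward(x):
        for W in layers:   # 50 stacked layers -> "deep"
            x = relu(W @ x)
        return x

    print(forward(rng.normal(size=width)).shape)  # (64,)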


Oh, so something like a brain, yeah, that can’t possibly lead to AGI, right.


If you think about big areas of cognition like memory, planning, exploration, internal rewards, etc., it's conceivable that a breakthrough in each could lead to amazing results if they can be combined.
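
To make the "combined" part concrete, here's a purely hypothetical stub -- every name below is made up, and each class stands in for a breakthrough nobody has yet; only the composition is illustrated:

    import random

    class Memory:
        def __init__(self):
            self.log = []
        def recall(self, obs):
            return self.log[-5:]              # last few experiences
        def store(self, obs, action):
            self.log.append((obs, action))

    class Planner:
        def plan(self, obs, context, goal):
            return ["act"]                    # trivial one-step "plan"

    class Explorer:
        def maybe_override(self, action):
            # occasionally try something novel instead of the plan
            return "explore" if random.random() < 0.1 else action

    class RewardModel:
        def current_goal(self, context):
            return "survive"                  # internally generated goal

    class Agent:
        def __init__(self):
            self.memory, self.planner = Memory(), Planner()
            self.explorer, self.reward = Explorer(), RewardModel()
        def step(self, obs):
            context = self.memory.recall(obs)
            goal = self.reward.current_goal(context)
            plan = self.planner.plan(obs, context, goal)
            action = self.explorer.maybe_override(plan[0])
            self.memory.store(obs, action)
            return action

    agent = Agent()
    print([agent.step(obs) for obs in range(5)])

The interesting question is exactly the one these stubs dodge: whether real versions of the modules could share representations well enough to be combined at all.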



