> all of the bulls were physicists, philosophers or Deepak-Chopra-for-the-TED-crowd bullshit artists who have never written a line of code in their lives
One of the cofounders of DeepMind (founded in 2010) wrote a blog post that year saying his estimate of the time until AGI was a lognormal distribution peaking at 2025. While I haven't personally read any of his work, https://en.wikipedia.org/wiki/Shane_Legg sounds pretty technical.
If you'd written "nobody I was paying attention to said this", that would've been reasonable.
I might point out that Shane Legg coined the term Artificial General Intelligence, and that the title of his PhD thesis is "Machine Super Intelligence".
Of course, since the beginning there have always been many, many AI researchers who were bullish on AGI: if you think it's possible, you should try to build it. But people often avoid broadcasting such opinions. Still, very few AI researchers are AGI researchers, historically because it's hard to make a small contribution, to get taken seriously, and to get funding. At the AGI conference just ~6 years ago, I was told there were maybe fewer than 10 funded AGI researchers, not counting DeepMind. Almost nobody at the AGI conference had funding to work on AGI!