
> This is a common problem I see - people are always trying to compare human reasoning to artificial reasoning rather than looking at the output. The output is all that matters.

Entire schools of philosophy disagree completely.

There was also a fantastic article linked here on HN a while ago [1] in which Douglas Hofstadter rants against Google Translate and similar systems, arguing that the current state-of-the-art statistical brute force word nearness ML cluster doesn't understand the text it's translating, and therefore will always be lacking, and will always have cases that it simply cannot solve.

He's basically saying that Weak AI can only get to 99% when it comes to translating, but we would need Strong AI to get to 100%, and Strong AI is probably impossible.

[1] https://news.ycombinator.com/item?id=16296738
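
To make the "word nearness" point concrete, here's a minimal sketch of what such a system looks like from the outside, using a small pretrained sequence-to-sequence model via the Hugging Face transformers library. The model choice is an assumption for illustration; it is not the system Hofstadter tested:

    # Minimal sketch of the kind of statistical translation being
    # criticized: a pretrained seq2seq model maps source tokens to
    # target tokens with no explicit model of meaning.
    # Assumption: t5-small stands in for "Google Translate and similar".
    from transformers import pipeline

    translator = pipeline("translation_en_to_de", model="t5-small")
    result = translator("The spirit is willing, but the flesh is weak.")
    print(result[0]["translation_text"])

Nothing in the model represents what "spirit" or "flesh" mean; it only knows which target words tend to occur near which source words, which is exactly the complaint.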



> Entire schools of philosophy disagree completely.

I'm sure they do; it's very much a matter of debate.

> brute force word nearness ML cluster doesn't understand the text it's translating, and therefore will always be lacking

Humanlike reasoning doesn't solve this problem. Understanding does not prevent this abstract "lacking": human translators understand the text and still mistranslate. Strong AI would not suddenly be perfect.

> always have cases that it simply cannot solve.

This is an inherent problem with reasoning, not with strong vs weak AI. Which leads us back to:

> Strong AI is probably impossible.

If we define "Strong AI" as infallibility, then yes. If we define it as "understanding," then no. Which is why I care more about results than about the philosophical debate over understanding/consciousness/humanness. 99.999% is acceptable if 100% is impossible.


Given that we now have superhuman image recognition (in particular, even highly trained experts given extensive time per sample have a hard time classifying dog breeds, while modern NNs classify them with 99+% accuracy), I don't think this is true.
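
For what it's worth, this capability is a few lines of code today. A minimal sketch with a pretrained ImageNet classifier via torchvision; the model choice and the "dog.jpg" path are illustrative assumptions:

    # Minimal sketch: classifying a dog photo with a pretrained
    # ImageNet network. ImageNet's 1000 classes include ~120 dog
    # breeds, which is why breed recognition is a common benchmark.
    import torch
    from PIL import Image
    from torchvision.models import resnet50, ResNet50_Weights

    weights = ResNet50_Weights.DEFAULT
    model = resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    batch = preprocess(Image.open("dog.jpg")).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    idx = int(probs.argmax())
    print(weights.meta["categories"][idx], float(probs[idx]))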

We will probably exceed humans' ability to come up with reliable training data before we reach strong AI.


So you're basically saying that Weak AI is good enough, and who cares if the machine really understands what it is doing? As if the ever-increasing accuracy numbers somehow move us closer to Strong AI?


It's hard to say for certain, but I would say that, as observed by humans, we'll probably get computers to the point of appearing to be conscious, appearing to have minds, and passing the most challenging tests we can construct.

It seems like a reasonable extrapolation from modern technology. Think about text-to-speech producing voices that are hard to distinguish from human speakers, or deepfakes producing footage that looks like it was actually filmed. Those are all weak AI. They don't need strong AI, just tons of labelled data, to produce things that can fool inexperienced humans.
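
The text-to-speech case really is that accessible now. A minimal sketch using the open-source Coqui TTS library; the specific model name is an assumption (one of Coqui's published LJSpeech models):

    # Minimal sketch of neural text-to-speech with a pretrained model.
    # No strong AI involved: the model was fit to hours of labelled
    # (text, audio) pairs and synthesizes speech without understanding.
    from TTS.api import TTS

    tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
    tts.tts_to_file(text="This voice was produced without any "
                         "understanding of what it is saying.",
                    file_path="sample.wav")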



