
That's why I specifically didn't call the LLM itself Turing complete, but stated that if you put a loop around an LLM you can trivially make the result Turing complete. Maybe I should have been clearer and written "the combined system" instead of "it".

But the point is that this is irrelevant, because it is proof that unless human brains exceed the Turing computable, LLMs can at least theoretically be made to think. And that makes pushing the "they're just predicting the next token" argument anti-intellectual nonsense.
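To make the "loop around it" construction concrete, here is a minimal sketch. The transition table below is a hypothetical stand-in for a model call; the claim is only that a system which repeatedly maps (state, symbol) to (write, move, next state) over an unbounded tape is a Turing machine, regardless of what implements that mapping.

```python
# Sketch of the "combined system" argument: an outer loop plus unbounded
# tape around any transition function yields a Turing machine. `step` is
# a hand-written table standing in for an LLM call that would be prompted
# to return the same (write, move, next_state) triple.

from collections import defaultdict

# Toy machine: flip bits left to right, halt on the first blank cell.
TABLE = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),
}

def step(state, symbol):
    """Stand-in for one model call."""
    return TABLE[(state, symbol)]

def run(tape_input):
    tape = defaultdict(lambda: "_", enumerate(tape_input))
    head, state = 0, "scan"
    while state != "halt":  # the outer loop doing the real work
        write, move, state = step(state, tape[head])
        tape[head] = write
        head += move
    return "".join(tape[i] for i in range(len(tape_input)))

print(run("1011"))  # -> "0100"
```

The point of the sketch is that nothing about `step` matters beyond it being a computable mapping; swap the table for any other implementation of the same interface and the combined system's computational class is unchanged.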



I am not sure it is proof, at least not in an interesting way. It's also proof that Magic: The Gathering could theoretically be made to think. Which is true, but doesn't tell you much about MtG other than that it is a slightly complicated ruleset with a couple of properties (like Turing completeness) that are pretty common.

I think both sides of this end up proving "too much" in their respective directions.



