
It's amusing, but when it comes to doing actual work, I just don't care if my LLM fails things like this.

I'm not trying to trick it, so falling for tricks is harmless for my use cases. Does it write quality, secure code? Does it give me accurate answers about coding/physics/biology? If it gets those wrong, that's a problem. If it fails to solve riddles, well, that'll be a problem iff I decide to build a riddle solver using it.



Additionally, I don't think that these kinds of failures say much about overall intelligence. Humans are largely visual creatures, and we fall prey to innumerable visual illusions where we fail to see what's actually there or imagine something that isn't there under certain visual patterns.

LLMs are largely textual creatures, and they fail to see things that are there, or imagine things that aren't, under certain textual patterns.

I don't think you would say a human "isn't really intelligent" because they imagine grey spots at the intersections of black squares on a white background (the Hermann grid illusion), even though the spots aren't there.



