
Without an empirical methodology it's hard to know how true this is. There are known and well-documented human biases (e.g., placebo effect) that could easily be involved here. And besides that, there's a convincing (but often overlooked on HN) argument to be made that modern LLMs are optimized in the same manner as other attention economy technologies. That is to say, they're addictive in the same general way that the YouTube/TikTok/Facebook/etc. feed algorithms are. They may be useful, but they also manipulate your attention, and it's difficult to disentangle those when the person evaluating the claims is the same person (potentially) being manipulated.

I'd love to see an empirical study that actually dives into this and attempts to show one way or another how true it is. Otherwise it's just all anecdotes.



I don't understand how the placebo effect is a human bias. Is it?


At least in some instances you could frame it that way: you believe that doctors and medicine are effective at treating disease, so when you are sick and a doctor gives you a bottle of sugar pills and you take them, you now interpret your state through the lens that you should feel better. It's a bias in how you perceive your condition.

That's not all the placebo effect is, but it's probably the aspect that best fits the framing as a bias.


It's much more than a bias.

You actually get better through placebo, as long as your body has an available pathway to that improvement.

It's a really weird effect.

The fight isn't against triggering the placebo effect; it's against letting it muddle study results.


I really love the back-and-forth in this mini-thread, I learned a lot about good thinking skills here. Thanks everyone.





