Hacker News

I don't quite follow what you're attempting to say here. I take you to mean that the study is flawed because "it's a study on your ability to generate random lists of results", with the inference that, supposedly, those who generate the most random results and those who appear by human estimation to do so are the same people, even though humans are actually bad at randomness, exactly as you yourself say here.

So what "appears random" is not at all a good measure of what is "actually random", to put it as simply as possible.



The goal is the ability to generate strings that match the subjects' "approximate sense of complexity" (ASC). Doing so requires the ability to avoid any routine and to inhibit prepotent responses.

The research goal was to measure cognitive ability; randomness is just the measuring stick. The actual mathematical complexity is correlated, but there is a human bias. The bias itself is irrelevant if it's constant. What is relevant is how closely subjects can generate strings that appear random and complex to humans (randomness with bias).

In other words:

measure = statistical randomness + bias

Because the bias is almost universal (see the modulating factors in the article), it does not interfere with the thing they are trying to measure.
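One rough way to see the gap between "statistically random" and "appears random" is to score strings with a compression-based complexity proxy. This is a minimal sketch under my own assumptions; `complexity_proxy` and the sample strings are illustrative and are not the metric the study actually used:

```python
import zlib

def complexity_proxy(s: str) -> float:
    """Compression ratio as a crude stand-in for statistical randomness.

    Highly patterned strings compress well (low ratio); irregular strings
    compress poorly (ratio near or above 1.0 for short inputs, because of
    the zlib header overhead). This is only an illustrative proxy.
    """
    data = s.encode("utf-8")
    return len(zlib.compress(data, 9)) / len(data)

# A repetitive, obviously non-random string compresses well...
print(complexity_proxy("ababababababababababababababab"))
# ...while an irregular string of the same length compresses far less.
print(complexity_proxy("qzj1x8vkp3m7wt2rh9c4ys6bn0gd5f"))
```

A human rater might still judge some high-ratio strings as "not random enough" (too many repeats, too regular an alternation), which is exactly the bias term the comment above is describing.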



