
Intelligence is speed, but not in the linear way we think of it. When people think faster, they can put slightly more time into examining each possible solution branch, which allows them to discard the ones that resolve as "bad." This, then, culls those branches from their memory, allowing them to process the rest of the solution tree even faster. Someone who knows a little more CS than I do could actually give a definite answer to just how much faster it makes someone, assuming, just for the sake of argument, that solution trees are binary.
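To make the branch-culling idea concrete, here's a toy sketch (all names are mine, not from the comment): walking a binary solution tree with and without a "bad branch" predicate, counting how many nodes get examined. The specific `is_bad` rule is an arbitrary stand-in.

```python
# Hypothetical sketch: count nodes examined in a binary solution tree,
# with and without culling branches a predicate marks as "bad".

def count_visited(depth, is_bad, cull=True):
    """Walk a binary tree of the given depth; return nodes examined.

    is_bad(path) -> True means the whole branch can be discarded.
    With cull=True we glance at a bad child once and never descend.
    """
    def walk(path):
        visited = 1
        if len(path) == depth:
            return visited
        for choice in (0, 1):
            child = path + (choice,)
            if cull and is_bad(child):
                visited += 1  # examined once, then rejected
            else:
                visited += walk(child)
        return visited
    return walk(())

# Arbitrary toy rule: any branch starting with two equal choices is "bad".
bad = lambda p: len(p) >= 2 and p[0] == p[1]

full = count_visited(10, bad, cull=False)
culled = count_visited(10, bad, cull=True)
print(full, culled)  # culling examines roughly half the nodes here
```

Even this mild rule cuts the work nearly in half; a cull that fires deeper in the tree saves exponentially more.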

Anyway, there are certain problems that require a "minimum mental speed" to accomplish; Mensa test questions are basically designed so that your working memory capacity will be exhausted by extraneous branches unless your mind can cull them as quickly as it can think of them.



> Anyway, there are certain problems that require a "minimum mental speed" to accomplish; Mensa test questions are basically designed so that your working memory capacity will be exhausted by extraneous branches unless your mind can cull them as quickly as it can think of them.

What about people who have a larger or longer-lasting working memory? Or those who have better culling algorithms? They may be able to solve classes of problems that 'faster' thinkers could never solve. They might do well on the Mensa test but poorly on a simpler speed-based test.

Intelligence is not speed.


You missed the symmetry of the equality. If you have "better culling algorithms", each branch gets culled away faster, so, on an EEG or a similar device, you appear to be thinking faster.

The reason I didn't mention the possible variation in working memory capacity was a single word that I left out, leaving it, like I said, to the Computer Scientist: "exponential." Every time you have an additional binary choice to make, your possible solution space doubles. It doesn't matter how large your working memory is; as long as it's a fixed, finite size, an exponential growth in memory consumption will consume it with relative ease. The culling algorithm needs to be of a certain minimum efficiency to keep you from "blowing your stack" and losing your place repeatedly, and this is what is perceived as "speed."
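A toy illustration of that doubling (the setup and the `ok` predicate are mine): k binary choices generate 2**k candidate solutions, so any fixed store overflows quickly unless candidates are discarded as they are generated.

```python
# Toy illustration: k binary choices produce 2**k candidates, so an
# uncullled candidate set doubles with every extra choice, while an
# aggressive cull keeps the surviving set tiny.

from itertools import product

def survivors(k, keep):
    """Generate all 2**k bit-strings, retaining only those `keep` accepts."""
    kept = []
    for bits in product((0, 1), repeat=k):
        if keep(bits):
            kept.append(bits)
    return kept

# Without culling, the candidate count doubles per choice:
print([2 ** k for k in range(1, 11)])  # [2, 4, 8, ..., 1024]

# With an aggressive cull (here: no two adjacent equal bits), only the
# two alternating strings survive out of 2**20 = 1,048,576 candidates:
ok = lambda b: all(b[i] != b[i + 1] for i in range(len(b) - 1))
print(len(survivors(20, ok)))  # 2
```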

I think you misinterpreted, though, that I meant that intelligence is the speed at which you solve macro-level problems. It certainly isn't. Intelligence is the speed at which you prune decisions; external to the mind, this affects the confidence you can take in your assertions, not the speed at which you reach them.

The more confident you are in lemma 1, the more quickly you can consciously decide to move onto lemma 2, and so on, but this requires some internal message passing that's much slower than the complete decision-branch path-finding "system call" in your mind. Your macro "problem-solving speed" is much more affected by the speed of the IRQ handler (the neocortex, I think), but if the system calls keep returning low confidence, you have to keep consciously pulling apart intuitions into graphs of semantic knowledge, and passing each node to the path-finder in turn, until you feel sure enough of your decision to become aware of it. If your culling algorithm is more efficient (as directly measured by your score, not your time, on an IQ test) then you will become aware of things that others never will, because their minds are caught in loops that return false-negative confidences.

Sleep, then, gives your culling algorithm a larger fixed bound on its runtime before it must return a confidence; this is why sleeping on something will let you make decisions that you felt involved too much complexity the day before. (Of course, sleeping does other things as well, like transferring things between short- and long-term memory (where they're stored--if they are stored--with a much lower path cost), and shutting off the neocortex so the mind doesn't constantly have to context-switch to ring 3 while it's trying to process things.)


I think you are focusing heavily on one particular model of an 'intelligence algorithm'. The model you are assuming sounds a lot like the type used for chess programs. I played along with the 'culling algorithm' bit for the sake of convenience, but I'm not sure there is good enough evidence to assume that as the underlying model.

The definition of speed I am using is analogous to the clock speed or memory retrieval speed in a computer and it does seem at one level to be the way you are thinking of it as well. I think this fits in with what the researchers are measuring.

> The reason I didn't mention the possible variation in working memory capacity was a single word that I left out, leaving it, like I said, to the Computer Scientist: "exponential."

The memory requirements of a particular algorithm don't necessarily grow exponentially with the solution space or the input data. As with time complexity, common space-complexity relationships to the input size n are n (you need to store at least the input), n log(n), n^k, and k^n. Most tractable problems would be n log(n), or at worst n^k where k is small. The implication of this is that small differences in memory can often make a huge difference. Possibly you are forgetting that every extra bit of memory 'exponentially' increases the number of options it can represent.
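Both halves of that point can be shown numerically (a sketch of my own, with n = 16 picked arbitrarily): the common space-complexity classes side by side, and the doubling of representable states per extra bit.

```python
# Sketch: space demand depends on the algorithm's complexity class, not
# on the raw solution space, and each extra bit doubles representable states.

import math

def demands(n):
    """Space demand for input size n under the common complexity classes."""
    return {
        "n": n,
        "n log n": round(n * math.log2(n)),
        "n^2": n ** 2,
        "2^n": 2 ** n,
    }

print(demands(16))  # only the 2^n row explodes

# Conversely, b bits of memory represent 2**b distinct states:
print([2 ** b for b in (8, 9, 10)])  # each extra bit doubles capacity
```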

One of the first things you learn in CS algorithms is that an efficient (let's say n log(n)) algorithm running on a very slow computer will, for large enough inputs, beat an inefficient one (say n^2) on a much faster computer. Sometimes resources such as memory will make the difference between being able to use an n log(n) rather than an n^2 algorithm. To me this is a good analogue to apply to intelligence. For particular classes of problems (the harder ones), the better algorithm will trump a faster processor. Having a store of 'right algorithms' probably has a lot more to do with intelligence than raw thought speed.
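The classic argument above can be sketched with step counts standing in for time (the 100x slowdown factor is an arbitrary assumption of mine): the slow machine loses on small inputs but wins once n grows.

```python
# Sketch: a 100x-slower machine running O(n log n) eventually beats a
# fast machine running O(n^2). Step counts stand in for wall time;
# the constant factors are made up for illustration.

import math

def slow_machine_nlogn(n):   # 100x cost per step
    return 100 * n * math.log2(n)

def fast_machine_n2(n):      # 1x cost per step
    return n ** 2

for n in (100, 10_000, 1_000_000):
    winner = "slow+smart" if slow_machine_nlogn(n) < fast_machine_n2(n) else "fast+dumb"
    print(n, winner)
```

With these constants the crossover lands somewhere between n = 100 and n = 10,000; past it, no amount of raw speed saves the n^2 algorithm.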

Your view seems to be that the fundamental thinking algorithm is a fixed tree-pruning-like one. If it were, then yes, speed would be the main differentiating factor. I think this is the crux of our disagreement.


That's an interesting theory, but wrong. Having a larger short-term memory makes you better at specific tests regardless of how fast you prune your decision tree. g relates to all useful mental processes and is separate from how well you score on a specific IQ test.

EX: It's not part of most IQ tests, but some people are really good at remembering what happened 3 flash cards ago; others suck at it.


So what happens when each node in the decision tree has a probability attached to it? Worse still, what happens when you do not know the true probabilities of these nodes? Now imagine you have to deal with problems all day with uncertain characteristics, i.e., decision trees whose nodes have unknown probabilities. You will be stuck, unable to make a decision.
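A small sketch of that difficulty (the tree encoding and payoff numbers are invented for illustration): with known probabilities you can rank branches by expected value and cull the worse one; with unknown probabilities the best you get is overlapping bounds, and no branch can be confidently culled.

```python
# Sketch: expected-value pruning works with known probabilities but
# stalls when probabilities are unknown and the EV bounds overlap.

def expected_value(node):
    """node is either a payoff (number) or a list of (prob, subtree) pairs."""
    if isinstance(node, (int, float)):
        return node
    return sum(p * expected_value(child) for p, child in node)

# Known probabilities: branches are comparable, so one can be culled.
branch_a = [(0.5, 10), (0.5, 0)]   # EV = 5.0
branch_b = [(0.9, 4), (0.1, 20)]   # EV = 5.6
print(expected_value(branch_a), expected_value(branch_b))

# Unknown probabilities: the EV can only be bounded by the payoffs,
# and the bounds overlap, so neither branch can be culled.
def ev_bounds(payoffs):
    return min(payoffs), max(payoffs)

print(ev_bounds([10, 0]), ev_bounds([4, 20]))  # (0, 10) vs (4, 20): overlap
```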

This also reminds me of the study in which the emotional part of a person's brain was removed (I forget the name of the part; it's the one that uses heuristics). So he would have to make every decision completely rationally. That caused him to take a very long time to make even the simplest decisions, like which pencil to choose from a set of pencils to write with.



