No, most of these do. To build a search engine, for example, you need to collect a massive amount of data (say, every web page on the internet) and perform a relatively trivial amount of computation on it (building a TF-IDF index over the terms, computing PageRank, etc.) to build your indexes.
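To make the shape of that computation concrete, here's a toy TF-IDF index build (my own illustration, nothing from the article): one cheap counting pass per document, plus a log per term, so the work per GB stays small.

    # Toy in-memory TF-IDF index -- a sketch, not a real search engine.
    import math
    from collections import Counter, defaultdict

    def build_tfidf_index(docs):
        """docs: dict of doc_id -> text. Returns term -> {doc_id: tf-idf weight}."""
        doc_term_counts = {doc_id: Counter(text.lower().split())
                           for doc_id, text in docs.items()}
        # Document frequency: how many documents contain each term.
        df = Counter()
        for counts in doc_term_counts.values():
            df.update(counts.keys())
        n_docs = len(docs)
        index = defaultdict(dict)
        for doc_id, counts in doc_term_counts.items():
            total_terms = sum(counts.values())
            for term, count in counts.items():
                tf = count / total_terms
                idf = math.log(n_docs / df[term])
                index[term][doc_id] = tf * idf
        return index

    docs = {"a": "the cat sat on the mat", "b": "the dog chased the cat"}
    print(build_tfidf_index(docs)["cat"])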
While building a search index is computationally intensive in aggregate, the number of CPU cycles spent per GB of data is relatively low. So if you wanted to 'distribute' this problem by farming the data out across a high-latency connection to be processed on another node and then returned to you, it would actually be slower in many cases than just processing it locally on a decent machine.
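A quick back-of-envelope (illustrative numbers only, assuming roughly a 1 Gbit/s link and a machine that can scan about 1 GB/s): the transfer alone already costs more than just scanning the data locally.

    # All numbers below are assumptions for illustration, not measurements.
    data_gb = 1.0
    local_scan_rate_gb_per_s = 1.0    # assumed: sequential read + cheap per-byte work
    network_rate_gb_per_s = 0.125     # assumed: ~1 Gbit/s link

    local_time = data_gb / local_scan_rate_gb_per_s
    remote_time = (data_gb / network_rate_gb_per_s        # ship the data over the wire
                   + data_gb / local_scan_rate_gb_per_s)  # it still has to be scanned there

    print(f"local:  {local_time:.1f} s")
    print(f"remote: {remote_time:.1f} s  (transfer alone exceeds the local scan)")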
It depends on what is meant by a computer. In the context of this article I'd call it a mini-cluster that can fit into a basement for heating (and soon enough, by Moore's law, it will be desktop-sized). On that scale, all these tasks are embarrassingly parallel. The exceptions are multi-user systems where user interactions matter, such as Facebook, which do require a huge cluster. My argument is that over time more computing will be devoted to the former (basically machine learning and scientific/engineering optimization) than to the latter (multi-user databases), simply because humans (all 7 billion of us) can only enter so much info compared to what machines can gather and compute.
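For what "embarrassingly parallel" means in practice, a minimal sketch (my example, not the article's): each chunk of data is processed completely independently, so it can go straight to a process pool with no coordination beyond the final merge.

    # Embarrassingly parallel word counting: no communication between tasks.
    from collections import Counter
    from multiprocessing import Pool

    def count_words(chunk):
        return Counter(chunk.lower().split())

    if __name__ == "__main__":
        chunks = ["the cat sat", "the dog ran", "cats and dogs"]
        with Pool() as pool:
            partial_counts = pool.map(count_words, chunks)  # each chunk handled independently
        total = sum(partial_counts, Counter())              # cheap merge at the end
        print(total.most_common(3))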