
The best layman's example of O(log N) I've heard is finding a name in the phone book. You open it around the middle, select which half gets you closer, and repeat. It's then easy to "get" why it's not a big deal if it's a huge phone book versus a tiny phone book.
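That phone-book strategy is just binary search written out. A minimal sketch in Python (the names are made up for illustration):

```python
def binary_search(names, target):
    """Return the index of target in the sorted list names, or -1 if absent.

    Each step halves the remaining range, so at most about
    log2(len(names)) iterations are needed -- the phone-book trick.
    """
    lo, hi = 0, len(names) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if names[mid] == target:
            return mid
        elif names[mid] < target:
            lo = mid + 1  # target is in the "later" half
        else:
            hi = mid - 1  # target is in the "earlier" half
    return -1

phone_book = ["Adams", "Baker", "Chen", "Diaz", "Evans", "Ford"]
print(binary_search(phone_book, "Diaz"))   # → 3
print(binary_search(phone_book, "Zelda"))  # → -1, not listed
```

Doubling the phone book's size adds only one more halving step, which is the whole point.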


That seems more like a layman's explanation of binary search than of O(log N). Binary search just happens to be O(log N).


Binary search is an example of a O(log N) algorithm, as OP said. Having tangible examples is great for explaining algorithmic complexity to people.

One I've used is sorting with a deck of cards. Take a single suit, A-K, and have people sort it using bubble sort, insertion sort, or others. It's only 13 cards, but the difference even between the O(n^2) algorithms becomes obvious when you compare ones that postpone swaps until the last stage against ones that swap early.

Then you can also demonstrate radix sort. One of the most underrated sorting algorithms, in my opinion. A lot of people focus on the (potentially) large constant. But it can be appropriate, or may be reduced depending on your use-case. And it's a fantastic sort when you can't give a neat numeric quality for sorting and have to sort over multiple dimensions. Again with cards, you can do two passes. One with buckets by value, and one with buckets by suit. After the second, you collect each of the 4 stacks and the deck is sorted.
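The two-pass card trick above can be sketched in Python (the suit/value encoding here is my own choice, just for illustration):

```python
import random

SUITS = ["clubs", "diamonds", "hearts", "spades"]
VALUES = list(range(1, 14))  # A=1 .. K=13

def radix_sort_deck(deck):
    # Pass 1: 13 buckets, one per value (the less significant key).
    by_value = [[] for _ in VALUES]
    for value, suit in deck:
        by_value[value - 1].append((value, suit))
    deck = [card for bucket in by_value for card in bucket]

    # Pass 2: 4 buckets, one per suit. Both passes are stable, so
    # stacking the 4 suit piles leaves each pile sorted by value.
    by_suit = [[] for _ in SUITS]
    for value, suit in deck:
        by_suit[SUITS.index(suit)].append((value, suit))
    return [card for bucket in by_suit for card in bucket]

deck = [(v, s) for s in SUITS for v in VALUES]
random.shuffle(deck)
sorted_deck = radix_sort_deck(deck)
```

Note that the order of passes matters: least significant key first, and each pass must be stable, or the second pass scrambles the work of the first.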

Cards, phone books, recipes: these are things most people encounter in their lives (ok, maybe not phone books anymore). They offer great aids for demonstrating CS concepts to new students or to people who have only a passing curiosity (or none, but they shouldn't have asked :P).


I'm not sure how drawing that distinction helps a layman.


Clarifying something as an example of a principle instead of the principle itself seems pretty important, regardless of lay status.


Ah, when you phrase it that way I see what you mean, I agree.


A phone book search does O(log N) string comparisons, but its running time is not O(log N) unless you consider all string comparisons to take a constant amount of time. Because for the names in the phone book to be distinct, the strings have to be about log N in size, so each string comparison costs O(log N) in the worst case. So the cost of phone book search is O((log N)^2).

I've skipped some details here: string comparisons can finish early, and maybe N should be the size of the phone book rather than the number of names in the phone book -- but even if you consider these factors, the runtime complexity is still more than O(log N).
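A rough back-of-the-envelope model of that argument (the cost function and constants are illustrative, not exact):

```python
import math

def phone_book_cost(N):
    """Rough cost model: about log2(N) probes, each comparing strings
    of about log_26(N) characters in the worst case (the minimum
    length needed for N distinct names over a 26-letter alphabet)."""
    probes = math.log2(N)
    chars_per_compare = math.log(N, 26)
    return probes * chars_per_compare

# The product grows like (log N)^2: raising N's exponent by 4x
# multiplies the cost by about 16, not 4.
ratio = phone_book_cost(26**8) / phone_book_cost(26**2)
```

If string comparisons really were constant-time (short, bounded-length names), the model would collapse back to plain O(log N).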


That's not a good example for Big O, because the number of operations comes out to exactly log N in the worst case.

Try it with mergesort and see how it ends up being n log n.
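For the curious, a comparison-counting merge sort makes that bound easy to check empirically (the counter plumbing here is just one way to do it):

```python
import math
import random

def merge_sort(xs, counter):
    """Merge sort that tallies element comparisons in counter[0].
    For n items the tally stays within n * log2(n), which is the
    O(n log n) bound in action."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid], counter)
    right = merge_sort(xs[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1  # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = random.sample(range(1000), 256)
count = [0]
result = merge_sort(data, count)
# count[0] never exceeds 256 * log2(256) = 2048 comparisons.
```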


Isn't O(log n) just an upper bound? So isn't it more correct to say that, in the worst case, the number of steps is < O(log n)?


Big-O describes an upper limit in itself. Something that is O(N) is also O(2^N). No less-than needed, it's included in the definition of big-O.
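Spelled out, the standard definition makes that built-in upper bound explicit:

```latex
% f is big-O of g iff g eventually dominates f up to a constant factor:
f(n) = O\bigl(g(n)\bigr)
  \iff \exists\, c > 0,\ n_0 \ \text{such that}\
  f(n) \le c \cdot g(n) \quad \text{for all } n \ge n_0.
% Since n \le 2^n for every n \ge 1 (take c = 1, n_0 = 1),
% anything that is O(n) is automatically O(2^n) as well.
```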


Yes, but context matters; Big O is often used informally. The interviewer's going to look at you weird when you tell him the O(N) algorithm is O(2^N).


We have the same problem with things being "in NP". It's clear that all of P is in NP, but not clear whether all of NP is in P. But people often say "NP" with the implication of "(apparently, as far as we know) not in P", which is not actually correct based on the meanings of those terms.

(In this case the problem is probably made worse by the mental interpretation of "NP" as "Not Polynomial", when it really means "Nondeterministic machine can solve in Polynomial time", and if a deterministic machine can solve something in polynomial time, a nondeterministic machine can do so as well!)


A phone book? How is Amazon's Kindle App O(log N)? ;-)



