
The problem with doing away with the implementation-specific details of recursion is that these details are inextricably linked to the correctness of your program in most languages.

The fact is, even with modern optimizing compilers that prove all sorts of correctness theorems about transformations, stack overflows will occur in recursive programs if you aren't constantly aware of their existence. You always have to code recursive programs defensively.

A program coded like "recurse(n-1)" will probably blow its lid if you pass in 1,000,000 as the parameter while "for (i=0; i<N; i++)" will happily chug away a billion times. Or worse, if you call "recurse(n-1)" twice in the same function and then pass in a measly value of 100 as the parameter, you will be waiting for the program to terminate on your deathbed.

Explaining why these differences matter to a beginning student, let alone teaching them how to avoid the pitfalls on their own, is a task that can only get in the way. Avoiding recursion for a while is a much better way to keep the first few attempts at independent coding from going up in flames.



I just tried, and ghci (i.e. Haskell) does not blow up when confronted with

    > let f n = if n > 0 then n + f (n - 1) else 0
    > f 1000000


...which is why we should be using lazy languages.


Although laziness is the usual suspect, it has nothing to do with this. GHC's runtime allocates its call stack on the heap and grows it on demand, so deep recursion doesn't slam into a fixed-size OS stack; a strict language with the same runtime strategy would behave the same way.



