Hacker News

There exists a form of survivalist mindset in some programmers. It goes roughly like this: if you can't boot it, it's fluff programming.

The attraction of C to this mindset is that it can be transliterated (vs. translated) into machine code. This transliteration is so straightforward that if you know C and are familiar with the basics of the instruction set, you can do it by hand.

The survivalist mindset fears dependence. VMs and interpreters are the essence of dependence. So those with this mindset will always seek a way to do it in plain C when they can.



On the next episode of "Doomsday Coders"

Programmer writes an entire operating system from scratch in case all the copies of Linux in the world are deleted.

A web developer writes his entire application in x86 assembly to prepare for an event in which all the world's compilers and interpreters disappear.

And more...


Replace "disappear" with "cannot be trusted not to be owned by the NSA" and someone might actually take a stab...


> The attraction of C to this mindset is that it can be transliterated (vs. translated) into machine code. This transliteration is so straightforward that if you know C and are familiar with the basics of the instruction set, you can do it by hand.

Except that on modern processor architectures there is no longer a one-to-one correspondence between C and assembly code.


There has never been a one-to-one mapping (rather one-to-many), but it's still perfectly possible to transform C into assembly by hand. It's not even that difficult.


Yes, back in the old 8 and 16 bit days, it was pretty much one-to-one for most use cases.

Nowadays that's no longer true if you want to write code that takes advantage of branch prediction, speculative execution, cache lines, vector units, GPGPU ...

Just watch this GoingNative talk on how generating code that is 4x bigger than the direct translation can sometimes yield up to a 30% performance increase.

http://channel9.msdn.com/Events/GoingNative/2013/Compiler-Co...


Auto vectorization can be impressive, but it applies to hard numerical code, not so much in regular code bases. I regularly debug assembly in my job, and in the vast majority of cases it's a fairly straightforward translation of the C/C++ code. Stuff like cache locality or using the GPU is still determined by the C code.

I'll concede there are finer variations to think about than with an 8-bit system, though, with the padding, etc.


> VMs and interpreters are the essence of dependence.

I find that an odd perspective. If you can rewrite your VM/interpreter from scratch, are you really dependent on it?

I guess I see it less as "use some black-box language runtime to solve this problem" and more as "I will use plain C to solve this problem... by writing an interpreter in plain C for a good DSL to solve this problem in, and then writing the solution in that DSL. But, you know, someone else already did the first part. I'd do it myself if they hadn't, though."



