
I see it as absurd because when you get to that level, there's no information about what you can practically do to fix it. You've gone way past the threshold of usefulness — if you get my meaning (in other words, this reductionist view could end with "because, physics", but that's not very helpful).

You don't attempt to fix human fallibility itself but instead create systems and processes to reduce both its likelihood and its impact (as you describe) for specific scenarios. Those systems and processes can then be improved and new scenarios added.

In this particular case, I find the reduction to 'management and hard deadlines' to be a useless (as well as incorrect) summary, because it goes beyond that threshold.



> I see it as absurd because when you get to that level, there's no information about what you can practically do to fix it.

I disagree. I see it as important to recognize our shortcomings and find successful ways to deal with them. It may be that we haven't yet figured this out in general, but figuring out specific cases may allow us to solve individual pieces of the problem one at a time and improve things even if we cannot fully overcome our flaws.


I don't see how your comment disagrees with mine.



