Which is often equally dangerous--applications that rely on garbage collection often use vastly more memory than applications that don't.
A particularly analogous situation, however, would be a C application that relies on a non-guaranteed GC, i.e. one that is not guaranteed to collect every unreachable object (a conservative collector, for instance, can mistake an integer for a pointer and never free what it "points" to). In such a case, you're almost sure to eventually run out of memory. Contrast this with the Java GC, which, while potentially inefficient, will (AFAIK) eventually collect every unreachable object.
Another analogous situation would be an application built to run on a server with 2 gigabytes of memory. The application uses 1.6 gigabytes and relies on GC. A change in the GC results in the program actually needing 2.4 gigabytes despite the amount of live data never changing, and your servers grind to a halt.
I wouldn't want my pacemaker running a garbage-collected language, and yet most web apps are written in GC'd languages. Google uses GC languages. GC can be done correctly and efficiently.
TCO (tail-call optimization) is not very hard to implement in a compiler/interpreter if considered from the beginning. Unless your compiler/interpreter was designed very poorly, it probably isn't very hard to implement later, either.
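To make concrete what the optimization actually does: a minimal sketch in Python (which notably does *not* perform TCO), showing a tail-recursive function and the loop a TCO-capable implementation would effectively turn it into. The function names are made up for illustration.

```python
# Tail-recursive countdown: the recursive call is the very last thing
# the function does, so nothing from the current frame is needed after
# the call returns. A TCO-capable implementation can therefore reuse
# the current stack frame instead of pushing a new one.
def countdown(n):
    if n == 0:
        return "done"
    return countdown(n - 1)  # call in tail position

# CPython does not do TCO, so countdown(1_000_000) would blow the
# recursion limit. TCO effectively rewrites the tail call as a jump:
def countdown_tco(n):
    while True:           # "reuse the frame"
        if n == 0:
            return "done"
        n = n - 1         # rebind the argument, jump back to the top

print(countdown_tco(1_000_000))  # runs in constant stack space
```

The point is that the transformation is mechanical--replace a tail call with "rebind the arguments and jump"--which is why it's cheap to support when the compiler is designed with it in mind.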
Given this, it doesn't really make sense to not trust TCO because it's somehow black magic. There may be other arguments against wanting TCO in your language of choice ("I really need backtraces all the time and I can't sacrifice simple, provable interpreter optimizations!") but reliability can't be it.
Given this, it doesn't really make sense to not trust TCO because it's somehow black magic.
I wouldn't say it's black magic at all--it's just that in my experience you can rarely trust compilers with even the most basic of things, let alone black magic.