While I agree that you should definitely use your tools in the best way you're capable of, I think for most people there's a baseline expectation that if you save data in a database, that data will be safe (at the very least, recoverable) unless something happens like the server catching on fire.
Nearly every other major database has this as the default -- MySQL, PostgreSQL, CouchDB, and Berkeley DB, to name a few. (Redis doesn't, but it's also very upfront about it, and has offered this kind of durability as an option from early on.) So when MongoDB breaks this expectation, and when asked to support it as an option just says, "That's what more hardware is for," it's a pretty big turnoff.
The Mongo client's "safe" mode makes the client check whether the database reported an error and, if so, raise one itself. As someone mentioned, it mostly falls on the Mongo clients to implement this. We mostly use fire-and-forget for our application, since it just logs stats and speed matters more to us than a lost increment here or there. There should probably be better documentation telling people to always use the safe operations for important data.
There is also the durability issue. Early versions shipped with the journal turned off by default and relied on replication for durability. Mongo has had the journal feature since 1.8 and has had it enabled by default since 1.9.2 (the current version is 2.0).
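For anyone who wants to be explicit rather than rely on the default, journaling can be set in mongod.conf (this reflects the 2.0-era config syntax; there are equivalent --journal / --nojournal command-line flags):

```
# mongod.conf -- journaling is on by default since 1.9.2,
# but it doesn't hurt to say so explicitly
journal = true
```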
So while Mongo has definitely been unsafe in the past, both kinds of safety are now supported, and one is on by default. The other being opt-in is either not that big a deal or egregious, depending on how you view Mongo.
Care to explain? I believe for Redis, "appendfsync everysec" is the default. The poster's point was that MySQL and Postgres both ship with something like "appendfsync always", and you have to opt in to the less safe mode if you want more performance. Redis ships with the less safe mode pre-selected, and so has higher performance out of the box.
You're right. The Postgres equivalent of "appendfsync always" is "synchronous_commit = on", which AFAIK is the default.
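For reference, the relevant postgresql.conf line (this is the shipped default, just written out explicitly):

```
# postgresql.conf -- wait for the WAL record to be flushed to disk
# before reporting a commit as successful (the default)
synchronous_commit = on
```

Turning it off trades a bounded window of recent commits for throughput, much like Redis's everysec, but without risking corruption either way since the WAL still protects consistency.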
However, one of the nice things about Redis is that even if you run "appendfsync everysec" you never run the risk of corruption. Your only risk is losing a maximum of 2 seconds' worth of data.
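For reference, these are the redis.conf knobs being discussed (everysec is the shipped default value of appendfsync once the AOF is enabled):

```
# redis.conf -- append-only file persistence
appendonly yes          # enable the AOF at all (off by default)
appendfsync everysec    # fsync once per second (the default policy)
# appendfsync always    # fsync on every write -- safest, slowest
# appendfsync no        # let the OS decide when to flush -- fastest
```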