I've maintained large systems and small systems. FreeBSD and Linux systems. I helped build and maintain the serverless platform that hosts the Netflix API. I managed build systems and CI/CD for Docker images deployed to k8s, and I built a package manager to address the instability inherent in rebuilding artifacts the way tools like Docker do (which is already a marked improvement over Ansible).
Ansible is essentially logging in and running a series of shell scripts. This works great in isolation, but do it long enough and you'll realize a lot of things you thought were idempotent, atomic, and infallible are not. Most package managers are glorified tarballs with shell scripts wired up to lifecycle hooks during install. You YOLO unpack them into a global namespace and hope for the best. With any luck, when something surprising happens, you can just rerun your script to bring the server back into a good state. But oftentimes the server just ends up borked and you have to throw it away and start over.
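To make the idempotency trap concrete, here's a hypothetical sketch (names and paths are mine, not from any real playbook) of the classic "append a line to a config file" task. The naive version looks like it converges; rerun it and state drifts:

```python
from pathlib import Path

EXPORT_LINE = "export PATH=$PATH:/opt/tool/bin"  # illustrative config line

def naive_task(config: Path) -> None:
    """Looks idempotent, isn't: every rerun appends another copy."""
    with config.open("a") as f:
        f.write(EXPORT_LINE + "\n")

def checked_task(config: Path) -> None:
    """Actually idempotent: only writes if the line is missing."""
    lines = config.read_text().splitlines() if config.exists() else []
    if EXPORT_LINE not in lines:
        with config.open("a") as f:
            f.write(EXPORT_LINE + "\n")

if __name__ == "__main__":
    cfg = Path("/tmp/demo_bashrc")
    cfg.write_text("")
    naive_task(cfg)
    naive_task(cfg)  # "just rerun the script"
    print(cfg.read_text().count(EXPORT_LINE))  # 2 -- state drifted
    cfg.write_text("")
    checked_task(cfg)
    checked_task(cfg)
    print(cfg.read_text().count(EXPORT_LINE))  # 1 -- converged
```

The checked version is what well-written modules do internally; the trouble is that ad-hoc shell tasks quietly default to the naive version.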
K8S somewhat addresses this by maintaining the desired state in a declarative format and comparing the actual state against the declared state in an eval loop. But K8S is absolutely massive and unbelievably complicated. Most declarative systems are non-trivial. The closest I've seen our industry get to this ideal is Nix.
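The eval-loop idea itself is simple even though k8s's implementation isn't. A minimal sketch (illustrative only, nothing like the real controller machinery): diff the declared state against the observed state and emit the actions needed to converge.

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Return create/update/delete actions that move actual toward desired."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

# Hypothetical cluster state for illustration:
desired = {"web": {"replicas": 3}, "worker": {"replicas": 1}}
actual = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
print(reconcile(desired, actual))
# [('update', 'web', {'replicas': 3}),
#  ('create', 'worker', {'replicas': 1}),
#  ('delete', 'cache')]
```

Run that in a loop against live state and you have the core of a controller; the other 99% of k8s is making that loop safe, distributed, and extensible.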
Linux itself is a beast, a reliable beast, but it's a chunk of software I don't think you can just wave your hand at and say "this is easy!" It's easy because it works. When it doesn't work, it's absolutely not trivial.
And this is the core of it: everything you just listed off that makes server administration easy involves no delegation of responsibility. They're abstractions that you ultimately own. When they stop working, that's your problem. The Ansible project has no vested interest in the health of your server or the success of your CI/CD pipeline. They have no engineers standing by to help you bring your site back up. That's all 100% you, even if you've pushed it down under the covers.
Compare that to my serverless deployments. I pay a vendor to be responsible for everything I possibly can, and everything I end up being responsible for I keep as minimal as possible. These deployments aren't mine, they are my customers'. My customers are small to medium-sized businesses (for my Fortune 500 contracts, I build the systems you're talking about and a whole lot more). A small to medium-sized business cannot maintain Ansible. They are mechanics, plumbers, drywallers, etc. They are not Linux system administrators. And I'm not here to milk them for money; I want to get in, get done, and leave them with a stable system that requires minimal maintenance. I do that by having vendors lined up that are responsible for the system running below my software, and those vendors' support contracts are a lot cheaper than my weekly rate.