> The additional bandwidth is not really that big.
It's actually not so much about bandwidth, it's more that if they don't have a computer at home they will have to use a computer at a library that forbids all HTTPS, making the resources unavailable.
> A non-argument. One cannot let things remain the same for ages just because of backwards compatibility.
So you volunteer to migrate and maintain all the software we're talking about here, or to pay the people who will do it over the decades to come?
Migration doesn't come out of thin air by simply snapping your fingers. Yes, the software is old. Yes, it's ugly. Yes, it's hard to maintain. But if you read the article, you would know that there is no funding to maintain it, so they're doing what they can with what they have. Everyone would love to use the cleanest, leanest, most UNIX-y tech stack, but the realities of the world make it far more difficult than what we're used to seeing on HN.
> It's actually not so much about bandwidth, it's more that if they don't have a computer at home they will have to use a computer at a library that forbids all HTTPS, making the resources unavailable.
That's a weak argument. If places like libraries keep blocking all HTTPS content, before long they will have blocked so much of the internet that the service they offer no longer has any coherent point.
Most sites in the top 50 are HTTPS only now. How long until they all are?
Regarding old software, I'd say the problem is being overblown. You don't need to touch any of it. If the service is HTTPS-only and the client is HTTP-only, the obvious solution is to put a proxy in the middle, converting between the two.
For example: "Stunnel is a proxy designed to add TLS encryption functionality to existing clients and servers without any changes in the programs' code."
The kinds of projects and organizations that will suffer the most from forcing HTTPS are ones like those under government research umbrellas which require vast amounts of paperwork to do something as simple as setup an SSH key so rsync can be used. That pales in comparison to the amount of paperwork required to do something like install a new piece of software or setup a new server. On top of that, such changes are not budgeted for, so you end up with highly educated and trained people, who ought to be doing their research or actually making progress in their work, spending time fixing artificial problems that ought to be handled by dedicated staff. This wastes taxpayer dollars and leads to personnel burnout.
Why? All because some memo-writer in Washington said, "There's no such thing as insensitive web traffic!"
It's easy to make these proclamations when you have no part in fixing the fallout.