I work for a digital bank, and this kind of versioning is essentially how we handle T&Cs.
The user accepts a certain version of some terms, and if we launch, for example, a new product that requires changed T&Cs, we ask the user to accept them if they want to use the new product.
If they don't, well, then they just keep using the existing offering without accepting any new terms.
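A minimal sketch of how that version-gated consent could look (the struct and function names are illustrative, not our actual schema):

// Hypothetical sketch of version-gated T&C consent; names are illustrative.
#include <map>
#include <string>

struct User {
    // product id -> highest T&C version this user has accepted
    std::map<std::string, int> acceptedTerms;
};

// A user may use a product only if they accepted its current T&C version;
// otherwise they simply stay on the old offering under the old terms.
bool mayUseProduct(const User& user, const std::string& product, int currentVersion) {
    auto it = user.acceptedTerms.find(product);
    return it != user.acceptedTerms.end() && it->second >= currentVersion;
}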
That sounds reasonable for services-based consumer offerings, which would include consumer SaaS services.
Where it gets a little muddy for me is hardware with services attached (a new EV, etc.)… you pay $60k for a car, and it really shouldn't be possible to force a new ToS on something the buyer physically owns. And it definitely shouldn't be possible to brick or de-option the car because they refuse to accept a new ToS.
Versioned terms help when changes apply to a product, not the whole platform. For a new product with different rules, require explicit, time-stamped consent before first use; otherwise grandfather users on existing terms. Provide a changelog, a grace period, and an easy opt-out. At Getly, keeping per-product terms and payout rules separate can reduce friction.
Are you sure? The article makes the point that the nop is actually required for this to work in GDB because the instruction pointer might otherwise point at an entirely different scope.
I have to admit I didn't try it out though. Maybe this changed in the meantime and it is not needed anymore.
// Q: Why is there a __nop() before __debugbreak()?
// A: VS' debug engine has a bug where it will silently swallow explicit
// breakpoint interrupts when single-step debugging either line-by-line or
// over call instructions. This can hide legitimate reasons to trap. Asserts
// for example, which can appear as if they did not fire, leaving a programmer
// unknowingly debugging an undefined process.
(This comment has been there for at least a couple of years, and I don't know if it still applies to the newest version of Visual Studio.)
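For reference, a minimal sketch of the pattern that comment is attached to (MSVC intrinsics; the macro name is my own, not from the original code):

// __nop() gives the breakpoint its own instruction, so the debugger's
// single-stepping doesn't swallow the explicit interrupt (see above).
#include <intrin.h>

#define DEBUG_BREAK() (__nop(), __debugbreak())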
I think even if it was, it's likely better to have the attackers show their hand by attacking (comparatively) irrelevant targets.
I would assume there are insights to be gained from this that could help mitigate potential future attacks even more effectively.
In my personal experience, zone files work quite well as a universal format for that.
To pick up your Fastmail example: Fastmail could generate a matching zone file for your domain and let you download it. You could then upload it to any domain service provider that supports importing zone files.
It's obviously not as hassle-free as something like your OAuth example, but it uses the infrastructure that is already there.
Incidentally, just an hour ago I was setting up a mail server on a Digital Ocean droplet, and had to manually copy and paste 20+ DNS entries because Digital Ocean doesn't support zone file upload (only download). So, the zone file seems like a good enough solution if only everyone would use it.
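For illustration, a hypothetical excerpt of such a zone file (example.com and the addresses are documentation placeholders, not from my actual setup):

; BIND-style zone file excerpt for a basic mail setup (placeholders only)
$ORIGIN example.com.
$TTL 3600
@       IN  MX   10 mail.example.com.
mail    IN  A    203.0.113.10
@       IN  TXT  "v=spf1 mx -all"
_dmarc  IN  TXT  "v=DMARC1; p=quarantine"

A provider that supports importing zone files could ingest exactly this and recreate all the records in one step instead of 20+ copy-pastes.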
I know from experience that they might take action if you attract too much of a DDoS on their free plan, but I've never heard of that happening with ordinary traffic, especially not when it's as low as the blog suggests (sub-10 rpm).
The attacks my blog received were in the thousands of requests per second when it got suspended.
I'm not sure what exactly you are trying to say. As far as I can tell, there are indeed safe variants for arrays in the standard, both static and dynamic. People just choose not to use them for arbitrary reasons.
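Assuming the comment is about C++, a minimal sketch of those safe variants: bounds-checked access for both a static array (std::array) and a dynamic one (std::vector):

// .at() throws std::out_of_range instead of invoking undefined behavior.
#include <array>
#include <cstdio>
#include <stdexcept>
#include <vector>

int main() {
    std::array<int, 3> fixed{1, 2, 3};   // static size
    std::vector<int> dynamic{4, 5, 6};   // dynamic size

    std::printf("%d\n", fixed.at(2));        // ok: prints 3
    try {
        std::printf("%d\n", dynamic.at(10)); // out of bounds: throws
    } catch (const std::out_of_range& e) {
        std::printf("caught: %s\n", e.what());
    }
}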
I don't think it is.
(disclaimer: I do live in the EU)
The current US administration is already trying to force close allies to conform to its will using economic pressure. I can imagine a future where this escalates, so in my opinion forcing US companies to block certain origin countries is not that far-fetched.
We're dealing with a deeper problem here. Since a lot of the internet relies on Cloudflare DNS in one way or another, even many backup solutions fail.
Since so much of DNS is centralised in so few services, such outages hit the core infrastructure of the internet.
A sudden disruption of a large number of services for everybody at once doesn't look like a DNS problem to me, given all the caching in DNS. It would fail progressively and inconsistently.
DNS absolutely was an issue. I changed DNS manually from Cloudflare's 1.0.0.1 (which is what DHCP was giving me) to 8.8.8.8 (Google) and most things I'm trying to reach work. There may be other failures going on as well, but 1.0.0.1 was completely unreachable.
No, I changed the setting back and forth while it was down to confirm that the issue was that I could not reach 1.0.0.1. All the entries I tried from my host file were responsive (which is how I ruled out an issue with my local equipment initially and confirmed that it wasn't a complete failure upstream -- I could still reach my servers). Changing 1.0.0.1 to 8.8.8.8 allowed me to reach websites like Google and HN, and changing back to default DNS settings (which reset to 1.0.0.1, confirmed in Wireshark) resulted in the immediate return of failures. 1.0.0.1 was not responsive to anything else I tried.
Again, it may not have been the only issue -- and there are a number of possible reasons why 1.0.0.1 wasn't reachable -- but it certainly was an issue.
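For anyone wanting to reproduce that kind of check, querying each resolver directly isolates the failure (example.com stands in for whatever site you test):

# A timeout from one resolver but not the other points at that resolver,
# not at your local network.
dig @1.0.0.1 example.com
dig @8.8.8.8 example.com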
> I don't use cloudflare DNS but google DNS and got the same problems as everyone else
Cloudflare is also the authoritative DNS server for many services. If Cloudflare is down, then for those services Google's DNS has nowhere to get the authoritative answers from.
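One quick way to check whether a given domain is in that situation is to look up its authoritative nameservers; Cloudflare-hosted zones typically list *.ns.cloudflare.com (example.com is a placeholder):

# List the domain's authoritative nameservers:
dig example.com NS +short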
You're right that power consumption is predictable. But looking at some great-parent posts, I'd assume we're still talking about using nuclear as a backup for renewable sources.
In that case, renewable power production is often the bigger variable factor in my opinion, and it's less predictable than usage patterns.