Clothing doesn't get cancer. Also a lot of what can get left in a patient doesn't show up super well on x-rays, so more general solutions like counting in and out are preferable.
If it happens it'll probably be the result of a positive feedback loop forming: miners leaving slowing down transactions and affecting utility/faith in the system, resulting in people selling, meaning more miners leaving, etc. That said, I don't know of any clear examples of this happening to any other proof of work coins: I think in general other parts of a cryptocurrency tend to fail first; it requires a particularly fast death for this kind of thing to happen.
Which has always been the case. Attackers only have to find one exploit in the weakest part of the system, and usually that's more a function of grunt work than it is being particularly sophisticated.
I dunno, it's not obvious to me that it shifts the balance that way. It's always kind of been the case that a sufficiently determined attacker is going to be able to spend way more effort than you put into securing a system to break into it. If anyone can find the holes, that includes the people defending the system. This might actually make state-level threats less scary than they were before.
This has tended to be significantly overblown recently, with a huge amount of 'no CGI' advertising coming from studios that often verges into utter BS. There's an incredible amount of CG at every level of modern productions, regardless of how much stuntwork and practical effects were done as well. (This video series has a good breakdown of it, including cases of studios releasing doctored 'behind the scenes' footage! https://www.youtube.com/watch?v=7ttG90raCNo)
That's not to say that doing these things is pointless or unimpressive, but it's often used to denigrate and minimize the work of a lot of already quite underappreciated artists.
The news articles on it are going to affect this. I wonder if the original paper is in the base models at all; almost certainly these results were from the article showing up in an Internet search.
Similarly, I wonder what a frontier model would say if just given the paper in isolation and asked to summarise/opine on it. I suspect it would successfully recognize such obvious signs; the failure is when less sophisticated LLMs are just skimming search results and summarising them.
Doesn't even need the companies to react fast. Now that the Google results are returning news articles on it the LLMs are going to find and report on that as opposed to the original paper.
Even if you could do this rigorously (not at all obvious with how LLMs work), it's not a reliable metric: you can easily fabricate debate as well. And in this case the main issue was essentially skimming the surface of the reports and not looking any deeper to see the obvious red flags that it was an april-fools-level fake (which even a person can obviously fall for, but LLMs are being given a far greater level of trust for some reason).
I would also say that it matters how you remember things. It's possible to memorize a large quantity of facts without really meshing them together into a coherent whole, and this is a lot less useful than remembering things through the relationships between them.