"Designing AI for Disruptive Science" is a bit market-ey, but "AI Risks 'Hypernormal' Science" is just a trimmed section heading "Current AI Training Risks Hypernormal Science".
Ooh, that's a worthy challenge. Of course, I can imagine getting enough data on all of those cities and deciding to launch everywhere else but not Boston "because your roads are garbage and you all drive like you're impaired 24/7" :-)
That's not how you should measure "worth". In that world, you'd have a P/E ratio of 1. Compared to a bond, it would be like expecting to get paid the face amount back in a single year. Many people are quite happy with 5-10% interest as a benchmark for risky assets, so a P/E of 10-20 isn't wild. That puts the market cap for tech itself at $10-20T as a reasonable baseline.
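Spelling out the arithmetic (the roughly $1T annual-earnings figure is my reading of the parent's number, not something stated outright):

    \text{yield} = \frac{1}{P/E} \in [0.05,\, 0.10] \;\Rightarrow\; P/E \in [10,\, 20]
    \text{market cap} \approx (P/E) \times \text{earnings} \approx (10 \text{ to } 20) \times \$1\text{T} \approx \$10\text{T to } \$20\text{T}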
I feel vindicated :). We put in a lot of effort with great customers to get nested virtualization running well on GCE years ago, and I'm glad to hear AWS is coming around.
You can tell people to just do something else (there's probably a separate natural solution, etc.), but sometimes you're willing to sacrifice some peak performance just to have that uniformity of operations and control.
This isn't strictly correct: you probably mean with respect to compressed size. Compression is a tradeoff between size reduction and compression/decompression speed. So while things like Bellard's ts_zip (https://bellard.org/ts_zip/) or nncp compress really well, they are extremely slow compared to, say, zstd or the much faster compression scheme in the article. It's a totally different class of codec.
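To make the size/speed tradeoff concrete, here's a rough sketch using Python's standard-library zlib as a stand-in (zstd and ts_zip aren't in the stdlib, and exact numbers depend heavily on your data):

    import time
    import zlib

    # Mildly repetitive sample data; real corpora will behave differently.
    data = b"the quick brown fox jumps over the lazy dog " * 5000

    for level in (1, 6, 9):
        t0 = time.perf_counter()
        compressed = zlib.compress(data, level)
        elapsed = time.perf_counter() - t0
        ratio = len(data) / len(compressed)
        print(f"level {level}: ratio {ratio:.1f}x in {elapsed * 1000:.2f} ms")

Higher levels squeeze out a bit more size at a steep cost in time, and neural codecs push that curve much further in the "small but slow" direction.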
An LLM can be used to losslessly compress a string to a size roughly equal to the total cross-entropy, in bits, of its next-token predictions over that string, by feeding those per-token probabilities into an arithmetic coder. It's SOTA compression for the distribution of strings found on the internet.
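A minimal sketch of the size bound being described, using GPT-2 via Hugging Face transformers purely as an example model; an actual compressor would additionally run an arithmetic coder off these same probabilities and land within a few bits of this total:

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    text = "an example string pulled from somewhere on the internet"
    enc = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        # labels=input_ids makes the model report mean next-token
        # cross-entropy (in nats) over the sequence.
        out = model(**enc, labels=enc["input_ids"])

    n_predicted = enc["input_ids"].shape[1] - 1  # first token has no prediction
    total_bits = out.loss.item() * n_predicted / math.log(2)

    print(f"~{total_bits:.0f} bits (~{total_bits / 8:.0f} bytes) "
          f"vs {len(text.encode('utf-8'))} bytes raw")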