That’s a good question. A cryptographic hash must not leak any information beyond “these inputs match” or “they don’t”. A checksum-type hash might have the additional property that data with only a single bit flipped hashes to something close to the original hash, letting you refine a preimage or collision search by knowing whether you’re hot or cold.
For a checksum, this isn’t a bad thing and can actually be useful: that same structure is what lets you detect (and with error-correcting codes, even repair) small corruptions.
I think this is a really good point. A logging system could theoretically toggle "text" mode on and off, giving human readable logs in development and small scale deployments.
> In fact, I'm going to build a toy one in python!
I suggest building it as a normal Python `logging` handler rather than something fully custom; that way you don't need a "text" toggle, and it can be used without changing any existing standard-library logging code. It only requires one tweak to the idea: rather than a template table at the start of the file, have two types of log entries, and write each template out the first time it's used.
The drawback is having to parse the whole file to find all the templates, but you could also do something like putting the templates into a separate file to avoid that...
I heard about someone using power-of-two-choices (PO2C) for cache eviction recently. Maybe Oxide talking about their storage work? Seems obviously smart: no need to keep an LRU list or carefully track and decide, just let statistics do the work by comparing two random elements and dropping the less recently used one.
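The scheme can be sketched in a few lines (a toy illustration of the general idea, not any particular implementation; it needs capacity of at least 2):

```python
import random

class TwoChoiceCache:
    """Toy cache with power-of-two-choices eviction: when full, sample
    two random keys and evict the one that was used less recently.
    No global LRU ordering is ever maintained."""

    def __init__(self, capacity):
        self.capacity = capacity   # must be >= 2 for random.sample below
        self.data = {}             # key -> value
        self.last_used = {}        # key -> logical access time
        self.clock = 0

    def get(self, key):
        self.clock += 1
        if key in self.data:
            self.last_used[key] = self.clock
            return self.data[key]
        return None

    def put(self, key, value):
        self.clock += 1
        if key not in self.data and len(self.data) >= self.capacity:
            # The PO2C step: two random candidates, evict the colder one.
            a, b = random.sample(list(self.data), 2)
            victim = a if self.last_used[a] <= self.last_used[b] else b
            del self.data[victim]
            del self.last_used[victim]
        self.data[key] = value
        self.last_used[key] = self.clock
```

A hot key is never the colder of any sampled pair, so it survives eviction without anything explicitly protecting it; that's the statistical argument doing the bookkeeping for you.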
Well, one thought about your pricing model: you're saying it might need work in the future, but you're charging every month without delivering any additional value to customers. How about charging for each new major version instead, corresponding either to new feature development or to support for new versions of the OS?
The Due app does something like this.
Basically it’s 1 time fee + subscription.
The day you buy the app, you get every new feature for the next year and every previous feature.
If a new feature is added, you can subscribe for $5 a year. Upon subscribing, you get all new features since your subscription lapsed.
Blog post here: https://www.dueapp.com/blog/future-of-paid-upgrades.html
I don't think the App Store supports that. But if you're interested, it's a planned feature in my macOS license management SDK (padlockSDK.com; the pricing is a placeholder, it's free until I know whether people are interested).
Yeah, it's strange to me that that's a CVE. It seems like "working as intended": if I, the owner of the machine, want to load other libraries, why shouldn't it respect that?
Honestly, I think it will be used for the reverse (and unfortunately more evil) purpose: Google wants to be able to control YOUR machine's compute environment for things like playing back DRM'd content. They want a chain of trust proving your browser hasn't been modified to do things like block ads.
From a service owner perspective, if I offer content and want to enforce strong identity from the user then this seems like a win. I may lose eyeballs but will gain higher confidence that my content is being consumed as intended.
I'm fine with more controls in place; a safer internet is clearly a social win that would reduce life-altering fraud, scams, etc. If power users want to go to their peer-to-peer cesspool, then go for it.
A safer internet does not necessarily follow from having this system in place. I'd like to point out that this is an opinion that you have which I and others disagree with.
I also don't believe that content creators have any kind of legal or moral right to force the general public to "consume as intended". For instance, I've got a shelf in my office that's built with supports that are designed for plumbing. I have not consumed these pipes as intended.
How does enforcing strong attestation from the user result in a safer internet or reduce life-altering fraud and scams? It's not users injecting those onto pages; it's the ad networks that site operators choose to use.
I made something called shell workspaces[0] for this.
I like to tinker with bash and this has helped me keep all of the commands relevant to a particular project discoverable, accessible, and documented.
It uses a function named "," for command execution. So, for instance, if the workspace file defines a function "build", then while you're in the workspace, running `$ , build` will execute it.
It's not the most comprehensive solution to this, but it's probably the bit of shell programming that I use almost every day and has saved me tons of time.
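The core trick can be sketched in a few lines of bash (a hypothetical illustration, not the actual shell-workspaces code; bash allows "," as a function name, though strict POSIX shells do not):

```shell
#!/usr/bin/env bash
# Sketch: a workspace file defines plain functions, and "," dispatches
# to whichever one you name, passing along any remaining arguments.

# Example workspace-defined commands (hypothetical):
build() { echo "building project"; }
deploy() { echo "deploying to $1"; }

# The dispatcher: run the first argument as a command with the rest as args.
,() { "$@"; }

, build            # runs the workspace's build function
, deploy staging   # arguments pass through
```

Keeping everything behind one odd-but-short name means tab-completing `, ` can list exactly the project's commands without polluting the global namespace.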
I'm not sure I understand. Is the idea to step through a script, maybe executing the default value of each line, but offering the opportunity to edit that line before executing it?
I am blown away that this is not already part of gRPC. A lifetime ago, I worked on Twitter's version of this sort of thing (Thrift/Finagle), and I assumed it was standard.
Both the context and message metadata are already part of gRPC, and gRPC also allows for server [2] and client [3] message interceptors. Essentially, this konig-kontext library provides interceptor implementations that use a hard-coded key to read and write your serializable context from a gRPC header. The context konig-kontext exposes in your code is a wrapper around the existing gRPC Context [1].
The library is convenient, for sure, but I feel that if you had a need to propagate context within gRPC, you'd probably have already discovered the API and implemented propagation with your own header keys.
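Stripped of the gRPC plumbing, the pattern such interceptors implement is just "inject a context value into outgoing metadata, extract it on the receiving side." A minimal sketch (header name and helper names are hypothetical; konig-kontext hard-codes its own key internally):

```python
import contextvars

# Hypothetical header key for the propagated context.
CONTEXT_HEADER = "x-propagated-context"

# The per-request value a service wants to flow across RPC hops.
request_context = contextvars.ContextVar("request_context", default="")

def inject(metadata):
    """Client-interceptor logic: append the current context value to the
    outgoing metadata (gRPC headers are (key, value) pairs)."""
    return list(metadata) + [(CONTEXT_HEADER, request_context.get())]

def extract(metadata):
    """Server-interceptor logic: restore the context value from incoming
    metadata before the handler runs, so downstream calls re-inject it."""
    for key, value in metadata:
        if key == CONTEXT_HEADER:
            request_context.set(value)
```

A real implementation would wrap these in gRPC client/server interceptors and serialize a structured context rather than a string, but the header round-trip is the whole mechanism.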
While I agree with your opinion here, I find it more alarming that these researchers are mixing the reporting of empirical evidence with "just like their opinion, man".
Joking aside, I think that's worrying. It immediately calls the researcher's motives into question.