Hacker News | 13security's comments

The problem is in supplying the CA certificate(s) used to verify the chain of trust. Web browsers are pre-loaded with a huge number of trusted CA certificates. Pre-loading and maintaining that list on an embedded microcontroller is non-trivial. Not to mention what happens when a CA root is compromised or goes rogue: then you have to deal with the revocation process.

Your link mentions global_ca_store but provides no guidance on how to effectively populate it. That's the problem.
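(`global_ca_store` is a C-side API in ESP-IDF's esp_tls; as a rough sketch of the alternative the thread implies, pinning the single root CA your own endpoint chains to, rather than shipping a browser-sized bundle, here is a minimal Python illustration. The function name and the idea of passing the root as PEM text are my own, not from the linked docs.)

```python
import ssl

def make_pinned_context(root_ca_pem: str) -> ssl.SSLContext:
    """Trust exactly one pinned root CA instead of a full browser bundle.

    root_ca_pem is the PEM text of the single root your cloud endpoint
    chains to; anything not signed under it fails verification.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject unverified peers
    ctx.check_hostname = True             # and mismatched hostnames
    ctx.load_verify_locations(cadata=root_ca_pem)
    return ctx
```

This sidesteps the bundle-maintenance problem but inherits the one the parent describes: if that one root is compromised or rotated, every deployed device needs an update.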

Interestingly, providing a non-Tivo-ized system, e.g. one that allows connection to an arbitrary cloud server, requires even more work than just hardcoding in "your" CA certificates.

None of this is insurmountable, but it leaves devs pining for a pre-HTTPS world where you could just do a DNS lookup and send "GET / HTTP/1.0", and not have to worry about all the attack vectors that HTTPS protects against, as well as the ones that HTTPS opens you up to.
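For contrast, that entire pre-HTTPS flow fits in a few lines; a sketch (host and port are whatever your device targets):

```python
import socket

def plain_http_get(host: str, port: int = 80) -> bytes:
    """The whole pre-HTTPS client: resolve, connect, send one line, read.

    No trust store and no handshake, but also no integrity
    or confidentiality for anything on the wire.
    """
    request = f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii")
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request)
        chunks = []
        while data := sock.recv(4096):  # HTTP/1.0 server closes when done
            chunks.append(data)
    return b"".join(chunks)
```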


The best description of this I've seen is "chain of custody": Like with evidence in criminal proceedings. Docker and Git's use of content-addressable hashes helps in this regard, but there is still considerable work to be done when tracking third-party dependencies.
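The content-addressing idea itself is tiny; a sketch (Git actually uses a SHA-1/SHA-256 hash over a typed header plus the bytes, Docker uses sha256 digests of layers, but the principle is the same):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Name content by a digest of its bytes, as Git and Docker do.

    Any change to the bytes changes the address, so a verified
    address is evidence the artifact is the one originally recorded,
    i.e. one link of custody that can't be silently swapped.
    """
    return "sha256:" + hashlib.sha256(data).hexdigest()
```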


We used "chain of custody" as our analogy for buildpacks when I was working on them.

It's not completely accurate, though, for the same reason "supply chain" isn't. There isn't a single linear sequence of agents, actions, and assets behind a given asset. It's a graph in which the same things can appear many times, in many permutations.
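A toy example of why a chain falls short (the package names here are hypothetical): the same asset is reached through two different consumers, so custody is a DAG with multiple paths, not one chain.

```python
# Hypothetical dependency graph: "openssl" appears on two paths.
DEPS = {
    "app": ["libcurl", "python"],
    "libcurl": ["openssl"],
    "python": ["openssl"],
    "openssl": [],
}

def custody_paths(graph, start, target, path=()):
    """Enumerate every path from start to target; each is one 'chain'."""
    path = path + (start,)
    if start == target:
        return [path]
    paths = []
    for dep in graph[start]:
        paths.extend(custody_paths(graph, dep, target, path))
    return paths
```

Running `custody_paths(DEPS, "app", "openssl")` yields two distinct chains for the same asset, which is exactly what a single linear custody record can't express.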



Because they have no way of measuring how much of their outbound mail goes straight to Junk, they just assume their messages are being ignored.

