> What we did not know was how to authenticate a site's public key. Today, we'd use a certificate issued by a certificate authority.
He's correct, of course, that we'd use a CA, but I don't know if we ought to. Why should I trust dozens or hundreds of companies worldwide to certify that I'm talking to my local university?
> The next thing we considered was neighbor authentication: each site could, at least in principle, know and authenticate its neighbors, due to the way the flooding algorithm worked. That idea didn't work, either. For one thing, it was trivial to impersonate a site that appeared to be further away.
I'm actually much more confident that neighbour authentication could have worked: each message could have been signed by the originating user, by his site, and by each site in the path it took to reach its destination. Keys could have been exchanged when setting up links between sites.
This wouldn't have fixed the Sybil problem (e.g. my local university's news admin could have created as many fake sites as he liked, each claiming to be on the other side of the university from me), but it would have enabled admins to trace the source of bad messages, and potentially cut off misbehaving sites, in a way that Usenet ultimately didn't really support.
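The scheme above, simplified to shared per-link keys, can be sketched with stdlib HMACs. This is a hypothetical illustration, not anything Usenet actually did: the site names and keys are made up, and real signatures (per-user and per-site public keys) would replace the symmetric MACs, but the chained structure is the same: each hop authenticates the message plus the trail built by earlier hops.

```python
import hmac
import hashlib

# Hypothetical per-link keys, exchanged when each link was set up.
LINK_KEYS = {"alice-site": b"k1", "relay-site": b"k2"}

def sign_hop(site, payload, trail):
    """Append site's MAC over the message plus the trail so far."""
    mac = hmac.new(LINK_KEYS[site], payload + b"".join(trail),
                   hashlib.sha256).hexdigest().encode()
    return trail + [mac]

def verify_trail(path, payload, trail):
    """Recompute every hop's MAC; tampering anywhere breaks the chain."""
    expected = []
    for site in path:
        expected = sign_hop(site, payload, expected)
    return expected == trail

msg = b"article body"
trail = sign_hop("alice-site", msg, [])          # originating site signs
trail = sign_hop("relay-site", msg, trail)       # each relay signs on top
assert verify_trail(["alice-site", "relay-site"], msg, trail)
```

Because each MAC covers the earlier ones, a downstream admin who finds a bad article can walk the trail back and identify the first hop whose MAC fails to verify, which is exactly the traceability the neighbour-authentication idea was after.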
> Why should I trust dozens or hundreds of companies worldwide to certify that I'm talking to my local university?
Couldn't the university provide you with a copy of the certificate chain it uses, so that you can import it into your browser's (or other client's) certificate authority store? Then you personally have verified the university's identity and can tell your client to trust it as an authority.
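In Python terms, that amounts to building a TLS context that trusts only the chain the university handed you, rather than the system's full CA bundle. A minimal sketch, where `university-chain.pem` is a hypothetical file obtained from the university out of band:

```python
import ssl

# PROTOCOL_TLS_CLIENT implies CERT_REQUIRED and hostname checking.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# Trust ONLY the university's own chain (hypothetical file name);
# connections then fail unless the server's certificate chains to it.
# ctx.load_verify_locations(cafile="university-chain.pem")

assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname
```

Sockets wrapped with `ctx.wrap_socket(...)` would then refuse any server certificate that doesn't chain to that specific root, regardless of what the dozens of public CAs say.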