
> In general, asking a database for set membership is not close to the slowest thing applications do.

From Jeff Dean's list of numbers every programmer should know[0], a round trip within the same datacenter takes on the order of 500,000 ns; a main memory reference is on the order of 100 ns. How often will an application be making an order of magnitude more than 5,000 main memory references to service a request? Sometimes, sure.
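For scale, the back-of-envelope arithmetic behind those numbers (illustrative, using the round figures from the list):

```python
# How many main-memory references fit in one same-datacenter round
# trip, per the canonical latency numbers cited above.
DC_ROUND_TRIP_NS = 500_000
MAIN_MEMORY_REF_NS = 100
print(DC_ROUND_TRIP_NS // MAIN_MEMORY_REF_NS)  # 5000
```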

In our experience online validation completely dominates our workloads.

> I don't think it's a given that internal auth has to work the same way external auth does, for a bunch of reasons.

Oh, you're completely correct. It has upsides & downsides.

> Without revocation, you can't log out of things, and you can't invalidate sessions after credential rotation.

I pointed out that I like the use of stateless tokens, which can be revoked. In many cases 'log out' can just be destruction of the token. And in many cases it doesn't make sense to invalidate access just because credentials have rotated (in many more cases it does, which of course is perfectly supportable with stateless tokens).
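To make the pattern concrete, here's a minimal sketch of a stateless, expiring token using only stdlib HMAC. This is not a real JWT implementation and the names, secret, and scheme are all illustrative assumptions; the point is that "logout" is just discarding the token, and the hard revocation bound is the exp claim:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # illustrative only; use a real key in practice

def mint(claims: dict, ttl_s: int) -> str:
    """Mint a stateless token: base64(claims + exp) plus an HMAC tag."""
    body = dict(claims, exp=int(time.time()) + ttl_s)
    payload = base64.urlsafe_b64encode(json.dumps(body).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def validate(token: str):
    """Return claims if the token is authentic and unexpired, else None."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        return None  # expired: revocation-by-expiry, no server state needed
    return claims
```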

> So unless you're saying tokens should be valid for some tiny number of seconds.

I'm not: I'm saying that in many cases there's no business need to assure revocation within $SMALLNUM seconds, and that the business costs of online validation utterly dominate the costs of running a service.

> If you really desperately want to make the check faster, why is JWT better than a local token cache?

There are only two hard things in computer science … cache invalidation is one of them.

[0] https://gist.github.com/jboner/2841832



> From Jeff Dean's list of numbers every programmer should know[0], a round trip within the same datacenter takes on the order of 500,000 ns; a main memory reference is on the order of 100 ns. How often will an application be making an order of magnitude more than 5,000 main memory references to service a request? Sometimes, sure.

That argument is only valid if your application doesn't touch disk and doesn't touch the network to go talk to some other service anyway.

Do you have a concrete description of what "online validation" specifically means for you, and how long it takes? How long does validating the token take instead?


Just in case someone else reads this thread later, the point I was going to make:

- DC roundtrip: 0.5ms (per GP's own numbers)

- P256 ECDSA signature validation: 2ms [0]

[0]: https://www.cryptopp.com/benchmarks.html
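Putting those two figures side by side (illustrative arithmetic only, using the numbers above):

```python
# A P-256 ECDSA verify can cost several same-DC round trips, not less.
dc_round_trip_ms = 0.5
p256_ecdsa_verify_ms = 2.0
print(p256_ecdsa_verify_ms / dc_round_trip_ms)  # 4.0
```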


> That argument is only valid if your application doesn't touch disk and doesn't touch the network to go talk to some other service anyway.

Why? Adding another source of “slowness”[1] isn’t free just because other sources of “slowness” already exist, especially if it’s already close to being unacceptably slow as it is.

I mean, sure, if the request takes ten minutes anyway and the validation check takes a few seconds, nobody will notice, but if I’m doing, say, a single non-local database lookup for the request then adding a second one for verification doubles the time it takes to service the request.

[1] I’m not saying that a network round trip and database lookup are slow, just that for the sake of this argument, they are being considered slow.


Do you have specific performance data for a comparable token validation?


I’m not really arguing against your point, just pointing out that your statement isn’t necessarily true. You said that if you do any network or disk access, validation will be negligible because it’s dominated by that. I’m simply saying that while this is probably true in most real-world cases, it may not hold if, for example, both validation and normal request handling each do a simple database query: then validation is 50% of the request time (or 25% if the request takes twice as long, etc.). Since the person you were replying to above mentioned performance sensitivity, this overhead could be too much.

Having said that, I do believe that what you’re saying is the right approach for 99.9% of use cases and I would imagine that in almost all cases the performance hit really is negligible.
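The relative-overhead point can be stated as a one-liner (illustrative numbers; `validation_fraction` is a hypothetical helper, not anything from a real library):

```python
def validation_fraction(handling_ms: float, validation_ms: float) -> float:
    """Fraction of total service time spent on the online validation call."""
    return validation_ms / (handling_ms + validation_ms)

# If request handling and validation are each one ~1 ms database query,
# validation accounts for half of the total service time.
print(validation_fraction(1.0, 1.0))  # 0.5
```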


I gave specific examples of relative latency going the other direction in the GP thread. But yes: not only is it not actually slow, it’s a much simpler engineering exercise to make it fast (see the caching argument, same thread).

Is it literally definitionally impossible that minted tokens have a useful application? No, of course not. But absent very specific cases I’m going to argue for the thing that’s safe, fast & simple :-D


> There are only two hard things in computer science … cache invalidation is one of them.

I mean, it's a pithy saying, but TTL invalidation is a problem you have to solve with JWT too. Caching tokens is the easiest possible cache invalidation problem: definitionally, you know a priori exactly when the token is invalid.



