IMO stream abstractions make it too convenient to write fragile programs that are slow to recover from disconnections (if they recover at all), and they generally place too many restrictions on the transport layer. Congestion control is definitely needed, but everything else seems questionable.
In a datagram-first world we would have no issue bonding any number of data links with very high efficiency or seamlessly roaming across network boundaries without dropping connections. Many types of applications can handle out-of-order frames with zero overhead and would work much faster if written for the UDP model.
> In a datagram-first world we would have no issue bonding any number of data links with very high efficiency or seamlessly roaming across network boundaries without dropping connections. Many types of applications can handle out-of-order frames with zero overhead and would work much faster if written for the UDP model.
So your argument is that software isn’t written well because TCP is too convenient, but we’re supposed to believe that a substantially more complicated datagram-first world would have perfectly robust and efficient software?
In practice, moving to less reliable transports doesn't automatically make software more reliable or more efficient. It actually introduces a huge number of failure modes and complexities that teams would have to deal with.
> So your argument is that software isn’t written well because TCP is too convenient, but we’re supposed to believe that a substantially more complicated datagram-first world would have perfectly robust and efficient software?
I think you can make a better argument:
Your operating system should offer you something like UDP, but you can handle all the extra features you need on top of that, e.g. to simulate something like TCP, at the level of an unprivileged library you link into your application.
That would be exactly as convenient for 'normal' programmers as the current world: you'd get something like TCP by default. But it would also be easy for people to innovate and to swap in a different implementation, without needing access to the privileged innards of your kernel.
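To make the idea concrete, here is a toy sketch of what such an unprivileged "reliable stream over datagrams" library might look like. All names are hypothetical, and a simulated lossy channel stands in for real sockets; a real userspace stack (QUIC being the obvious example) also needs handshakes, flow control, congestion control, and much more.

```python
import random

class LossyChannel:
    """Simulates an unreliable datagram transport that drops packets."""
    def __init__(self, loss_rate, seed=0):
        self.loss_rate = loss_rate
        self.rng = random.Random(seed)
        self.queue = []

    def send(self, pkt):
        if self.rng.random() >= self.loss_rate:
            self.queue.append(pkt)

    def recv_all(self):
        pkts, self.queue = self.queue, []
        return pkts

class ReliableSender:
    """Tags each datagram with a sequence number; retransmits until ACKed."""
    def __init__(self, channel):
        self.channel = channel
        self.unacked = {}   # seq -> payload
        self.next_seq = 0

    def send(self, payload):
        self.unacked[self.next_seq] = payload
        self.channel.send((self.next_seq, payload))
        self.next_seq += 1

    def handle_ack(self, seq):
        self.unacked.pop(seq, None)

    def retransmit(self):   # a real stack would drive this off timers
        for seq, payload in self.unacked.items():
            self.channel.send((seq, payload))

class ReliableReceiver:
    """Buffers out-of-order datagrams, delivers an in-order stream."""
    def __init__(self):
        self.buffer = {}
        self.next_expected = 0

    def receive(self, seq, payload):
        if seq >= self.next_expected:   # ignore stale duplicates
            self.buffer[seq] = payload
        delivered = []
        while self.next_expected in self.buffer:
            delivered.append(self.buffer.pop(self.next_expected))
            self.next_expected += 1
        return seq, delivered   # seq to ACK, plus any in-order data

# Drive the simulation: keep retransmitting until everything is delivered.
channel = LossyChannel(loss_rate=0.3)
sender = ReliableSender(channel)
receiver = ReliableReceiver()
delivered = []
for chunk in ["he", "llo", " wor", "ld"]:
    sender.send(chunk)
while sender.unacked:
    for seq, payload in channel.recv_all():
        ack, data = receiver.receive(seq, payload)
        delivered.extend(data)
        sender.handle_ack(ack)
    sender.retransmit()
print("".join(delivered))  # "hello world"
```

The point isn't the details; it's that nothing here needs kernel privileges, so an application that wants different retransmission or ordering semantics can just link a different library.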
> Congestion control is definitely needed but everything else seems questionable.
Even congestion control can be optional for some applications, given the right error-correcting code. (Though if you have a narrow bottleneck somewhere along your connection, I guess it doesn't make much sense to produce lots and lots of extra packets that will just be discarded at the bottleneck.)
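The simplest error-correcting code that avoids retransmission is XOR parity: for every group of k equal-length data packets, send one parity packet, and the receiver can reconstruct any single lost packet in the group on its own. A toy sketch (real FEC schemes like Reed-Solomon handle multiple losses):

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """XOR parity over a group of equal-length data packets."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return parity

def recover(received, parity):
    """received: the group with exactly one None (the lost packet)."""
    missing = parity
    for p in received:
        if p is not None:
            missing = xor_bytes(missing, p)
    return missing

group = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(group)
# Suppose the second packet is lost in transit:
lost = recover([b"AAAA", None, b"CCCC"], parity)
print(lost)  # b'BBBB'
```

The cost is fixed overhead (one extra packet per group) paid up front, instead of a round trip per loss.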
As mentioned in the article, this doesn't handle the case of buffer bloat: packets all eventually arrive rather than being dropped, but increasingly late, and only backing off can help reduce the latency.
Any data block that must be fully received before processing comes to mind (e.g. a website's HTML): just request retransmission of the parts that didn't make it in the first pass. Funnily enough, it's (realtime) video and audio that already use UDP, since they prefer timely data to complete data.
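A sketch of that "request only what's missing" scheme, with hypothetical names: the block is split into numbered chunks, and after the first pass the receiver asks for just the chunk indices it never got.

```python
CHUNK = 4  # toy chunk size; real transports use ~MTU-sized chunks

def split(data):
    """Split a block into numbered fixed-size chunks."""
    n = (len(data) + CHUNK - 1) // CHUNK
    return {i: data[i * CHUNK:(i + 1) * CHUNK] for i in range(n)}

def missing_chunks(received, total):
    """Indices the receiver must request in a second pass."""
    return [i for i in range(total) if i not in received]

def reassemble(received, total):
    return b"".join(received[i] for i in range(total))

chunks = split(b"<html><body>hi</body></html>")
total = len(chunks)
# First pass: suppose chunks 2 and 5 were lost.
received = {i: c for i, c in chunks.items() if i not in (2, 5)}
need = missing_chunks(received, total)
# Second pass: the sender retransmits only those.
for i in need:
    received[i] = chunks[i]
page = reassemble(received, total)
```

Because chunks carry their own indices, order of arrival doesn't matter at all; only completeness does.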
On the contrary, for me it's hard to imagine a video game with missed packets, as the state can get out of sync too easily; you'd need eventual consistency via retransmission or some clever data structures (though this is the domain I know least about).
I'm no expert in this area, but my understanding is that for latency-sensitive video games, client-side state is predicted between updates. The state may be updated via the network at, say, roughly 10Hz (probably faster, but probably less than 60Hz), but updated locally at 60Hz via interpolation, dead reckoning, and other predictive/smoothing heuristics; a few missed updates just means a bit more prediction is used.

"State" is not usually transmitted as one lump "frame" at a time, but rather per game unit (per spaceship, asteroid, robot, or whatever is in the game) or per small group of units. When some updates are delayed too long, you might get visible artifacts ("rubber-banding", "teleportation", "warping", etc.), often lumped together by players under the umbrella term "lag".

For out-of-order packets, older packets might be dropped, as newer packets may already have been applied to the state of some game units (typically there's a timestamp or monotonically increasing counter associated with each unit's state updates, used by the prediction/interpolation/smoothing heuristics). The state is usually represented in absolute terms rather than relative terms (e.g. (x, y) = (100, 100) rather than x += 10, y += 10), so that any update can be applied in isolation.
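The per-unit, absolute-state model described above can be sketched in a few lines (names hypothetical): each update carries a unit id, a monotonically increasing sequence number, and absolute position, so updates apply in isolation and stale packets are simply discarded.

```python
class GameState:
    def __init__(self):
        self.units = {}  # unit_id -> (seq, (x, y))

    def apply(self, unit_id, seq, pos):
        """Apply an update unless a newer one has already arrived."""
        current = self.units.get(unit_id)
        if current is None or seq > current[0]:
            self.units[unit_id] = (seq, pos)

    def position(self, unit_id):
        return self.units[unit_id][1]

state = GameState()
state.apply("ship", seq=1, pos=(10, 10))
state.apply("ship", seq=3, pos=(30, 30))  # newer update arrives first
state.apply("ship", seq=2, pos=(20, 20))  # late packet: silently dropped
print(state.position("ship"))  # (30, 30)
```

Note that with relative updates (x += 10), dropping the late packet would corrupt the position; absolute state is what makes loss tolerable.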
Many video games just transmit the whole state the player should be able to see, every tick. While some games send a diff of some kind, sometimes that turns out not to be worth it. This is particularly true of the games that care most about lag (think Mario Kart, or Street Fighter and friends).