Hacker News

Agreed. TCP stands for "Transmission Control Protocol", and it started out as a protocol for file transfer. These days it is exquisitely unsuitable for most things it gets used for, even fetching web pages. The delays, retries, and congestion controls are set arbitrarily and rarely adjusted. In this modern world of wireless roaming and streaming media, TCP has little or nothing to offer, except that it's there.


I've long wished for a reliable protocol that was negotiated on a per-link basis. I mean: we have the processing power now. (So, effectively, packets aren't removed from the router's buffer until it knows the next link has the packet in buffer. Lots of implementation details to be gone over, though.)

It seems a mite... silly... to resend a packet from the other side of the world just because your ISP couldn't shove the packet down your internet connection fast enough.


When packets are being sent too fast the sender needs to be slowed down. Otherwise the buffers would just fill up. And in traditional TCP, the only thing that tells the sender this is dropped packets.
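To make that concrete, here is a toy sketch of the classic loss-driven response (additive increase, multiplicative decrease); the function name and numbers are mine, not from any real TCP stack:

```python
# Illustrative AIMD sketch: grow the congestion window slowly each
# round-trip, halve it when a drop signals congestion.

def aimd_step(cwnd, packet_dropped, mss=1):
    """One round-trip of additive-increase / multiplicative-decrease."""
    if packet_dropped:
        return max(mss, cwnd // 2)   # multiplicative decrease on loss
    return cwnd + mss                # additive increase otherwise

cwnd = 10
cwnd = aimd_step(cwnd, packet_dropped=False)  # grows to 11
cwnd = aimd_step(cwnd, packet_dropped=True)   # halves to 5
```

The point is that the *only* feedback in this loop is the drop itself, which is exactly what ECN was designed to improve on.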

The smart solution you're looking for is TCP ECN (Explicit Congestion Notification) - a way for routers to say "I'm buffering it for now, but you'd better slow down". If you're running Linux, it's a kernel setting you can enable (it's disabled by default because some routers mishandle ECN-marked packets).
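For reference, the Linux knob is the `net.ipv4.tcp_ecn` sysctl. A small sketch of reading it (the helper name is mine; the value meanings are from the kernel's documented semantics):

```python
# Meanings of Linux's net.ipv4.tcp_ecn sysctl values.
TCP_ECN_MODES = {
    0: "ECN disabled",
    1: "ECN enabled for incoming and outgoing connections",
    2: "ECN enabled only when requested by the peer",
}

def read_tcp_ecn(path="/proc/sys/net/ipv4/tcp_ecn"):
    """Return the current tcp_ecn setting, or None when unavailable (non-Linux)."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except OSError:
        return None

mode = read_tcp_ecn()
if mode is not None:
    print(TCP_ECN_MODES.get(mode, f"unknown value {mode}"))
```

Turning it on fully is `sysctl -w net.ipv4.tcp_ecn=1` as root.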


Sorry, I should have specified. One of the things this suggests is explicit buffer management for individual links. I.e. "I have space for <x> packets before my next ack".

Don't treat links as end-to-end. Treat them as a bucket brigade - each link in the chain negotiates with its immediate neighbors only. Currently we play a game of "toss the bucket at the next guy and hope he catches it".
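A toy sketch of what that per-link credit negotiation could look like (all names hypothetical - this is the idea, not any real protocol): the downstream hop advertises how many packet slots it has free, and the upstream hop transmits only while it holds credits, so nothing is ever tossed at a full buffer.

```python
from collections import deque

class Link:
    """One hop with credit-based flow control between two neighbors."""

    def __init__(self, buffer_slots):
        self.credits = buffer_slots  # advertised "space for <x> packets"
        self.buffer = deque()

    def send(self, packet):
        """Upstream side: transmit only if the neighbor has granted credit."""
        if self.credits == 0:
            return False             # hold the packet locally instead of dropping it
        self.credits -= 1
        self.buffer.append(packet)
        return True

    def forward_one(self):
        """Downstream side: drain one packet and return one credit upstream."""
        if self.buffer:
            self.credits += 1
            return self.buffer.popleft()
        return None

link = Link(buffer_slots=2)
assert link.send("p1") and link.send("p2")
assert not link.send("p3")   # no credit: caller keeps the packet, nothing is lost
link.forward_one()           # draining a packet returns a credit
assert link.send("p3")
```

The retransmit then only ever has to cross one hop, not the whole path.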


That would cause abysmal performance because of the head-of-line blocking inherent in this approach. If one outbound link becomes congested, sooner or later all of the router's memory will be occupied by packets destined for that link: packets destined for the other links get transmitted, making room for more inbound packets, of which again everything headed for the uncongested links drains right away while the packets headed for the congested link sit in the buffer. Repeat until the buffer holds nothing but packets for the congested link, at which point it blocks the flow of packets to the uncongested links that might be queued elsewhere in the network.
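The effect is easy to reproduce in a toy simulation (parameters are mine, purely illustrative): one shared buffer, two output links, and link "X" congested so its packets never drain.

```python
from collections import deque

CAPACITY = 8        # shared buffer slots in the router
buffer = deque()    # holds the destination link of each queued packet

def tick(arrivals):
    # Accept new packets only while the shared buffer has room.
    for dest in arrivals:
        if len(buffer) < CAPACITY:
            buffer.append(dest)
    # Drain everything bound for the uncongested link "Y";
    # packets for the congested link "X" must be re-queued.
    for _ in range(len(buffer)):
        pkt = buffer.popleft()
        if pkt == "X":
            buffer.append(pkt)

for _ in range(10):
    tick(["X", "Y"])   # balanced arrivals for both output links

# After a few rounds the shared buffer holds only "X" packets, so "Y"
# traffic is locked out even though its own output link is idle.
print(list(buffer))
```

Even with perfectly balanced traffic, the congested destination ends up monopolizing the shared buffer - which is why the reply above points at per-flow queues as the (expensive) fix.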

The only way to avoid that problem would be for every router to keep state for each connection/flow it sees and to manage separate buffers per flow - but that would mean keeping state for hundreds of millions of flows on backbone routers, essentially all of which would need to sit in level-1 cache in order to move packets at line speed, and even that would probably be too slow. There is a reason those routers use CAMs just to keep up with routing-table lookups across only about 500,000 routes.


Carnegie Mellon has a clean-slate project to manage data center traffic along similar lines. As I understand it, it involves bandwidth/buffer credits passed between the endpoints and the subnet. The idea is that buffer backup at the source is inevitable if subnet bandwidth is insufficient; you have to accept that. What you CAN do is avoid pointlessly sending packets to places where they can't be kept, because that wastes bandwidth - and the resulting signaling and retries waste even more.


It seems like the routes would have to be a lot more static for that to work, negating the big advantage of the internet over traditional circuit switching. Right now each end-to-end connection negotiates its own window size and can have that many packets in flight before an ack, and it doesn't matter whether half of those packets go by one route and half by another - they just all have to arrive at the end.


Not particularly. You can still do all the fancy and not-so-fancy tricks regarding packet routing. As long as each router knows a "closer" router to the destination, you're fine. This is identical to the current setup in that regard.

As a matter of fact, it would probably be easier to make dynamic. (Router A gets a packet for router Z - router A wants to send it to router B, but router B is currently congested, and router A knows that router C is an alternate route, so it sends it to router C instead.)
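A hypothetical sketch of that decision (names and metrics are mine): router A prefers the neighbor with the best metric toward Z, but only among neighbors that still have buffer credit.

```python
def pick_next_hop(neighbors):
    """neighbors: list of (name, hop_metric, free_credits); lower metric wins."""
    usable = [n for n in neighbors if n[2] > 0]   # skip congested neighbors
    if not usable:
        return None                               # hold the packet locally
    return min(usable, key=lambda n: n[1])[0]

# Router A's view of the path to Z: B is the preferred route but has no
# credits left, so the packet detours through C.
neighbors_toward_Z = [("B", 1, 0), ("C", 2, 5), ("D", 3, 4)]
print(pick_next_hop(neighbors_toward_Z))   # prints "C"
```

The congestion signal the routers already exchange for flow control doubles as the input to the routing decision.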

Now, there are circumstances where this approach doesn't hold up - in particular, on wireless networks. However, TCP over wireless networks isn't exactly great either. (TCP and this approach both make the same assumption: namely that most packet loss is due to congestion as opposed to actual link loss.) This approach is for the segment of the network that's wired routers with little to no packet loss aside from drops caused by exhausted buffer space. I.e. this approach is for the reliable segment of modern networks - wireless should probably have an entirely different underlying protocol.


Router A knows that router B is congested - but this is actually due to congestion in the link between router K and router L. How does it know which of router C or D would be using the same link? It has to have a global understanding of all the routing paths, no?

Routing the packet to Z and telling you that the path to Z is congested are mirror images of each other; it makes sense to use the same mechanism for both.



