Hacker News

That would cause abysmal performance because of the head-of-line blocking inherent in this approach. If one of the outbound links becomes congested, sooner or later all of the memory will be occupied by packets destined for that congested link: the packets destined for other links are transmitted, making room for more inbound packets, of which again all the packets destined for uncongested links are transmitted right away, leaving in the buffer only the ones destined for the congested link. Repeat until the buffer is filled entirely with packets for the congested link, at which point no more packets can flow even to the uncongested links, however many might be queued elsewhere in the network.
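The feedback loop above can be sketched with a toy simulation. This is a minimal illustration, not real router code: the buffer size, link names, and one-packet-per-step arrival model are all invented for the example. The "fast" link drains its packets instantly while the congested "slow" link drains nothing, so the shared buffer gradually fills with slow-link packets until fast-link packets are dropped even though their link is idle:

```python
import random
from collections import deque

random.seed(0)

BUFFER_CAPACITY = 8   # shared buffer slots (tiny, for illustration only)
STEPS = 100

buffer = deque()             # one shared buffer for all outbound links
dropped_for_fast_link = 0    # drops caused purely by head-of-line blocking

for _ in range(STEPS):
    # One inbound packet per step, destined for either link at random.
    pkt = random.choice(["fast", "slow"])
    if len(buffer) < BUFFER_CAPACITY:
        buffer.append(pkt)
    elif pkt == "fast":
        # Buffer is full of packets for the congested link, so a packet
        # for the perfectly healthy link gets dropped.
        dropped_for_fast_link += 1

    # The fast link transmits its packets immediately; the congested
    # slow link transmits nothing, so its packets accumulate.
    buffer = deque(p for p in buffer if p != "fast")

print(list(buffer))               # every remaining packet is for the slow link
print(dropped_for_fast_link)      # fast-link packets lost to HOL blocking
```

After a few dozen steps the shared buffer holds nothing but slow-link packets, and from then on the uncongested link is starved despite having spare capacity.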

The only way to avoid that problem would be for every router to keep state for each connection/flow it sees and to manage a separate buffer per flow - but that would mean keeping state for hundreds of millions of flows on backbone routers, essentially all of which would need to sit in L1 cache in order to move packets at line speed, and even that would probably be too slow. There is a reason those routers use CAMs just to keep up with routing table lookups across only about 500,000 routes.
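For concreteness, per-flow queueing looks something like the sketch below - a hypothetical illustration, not a real router implementation. Each flow, identified by its 5-tuple, gets its own queue, and a round-robin scheduler serves them so one congested flow cannot starve the rest. The point is the cost: the `flow_queues` dict grows by one entry per active flow, which is exactly the state that explodes into hundreds of millions of entries on a backbone router:

```python
from collections import deque

# One queue per flow, keyed by the (src, dst, sport, dport, proto) 5-tuple.
# On a backbone router this table would need hundreds of millions of entries.
flow_queues: dict[tuple, deque] = {}

def enqueue(pkt: dict) -> None:
    key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
    flow_queues.setdefault(key, deque()).append(pkt)

def dequeue_round_robin():
    # Serve one packet per non-empty flow queue per pass, so a backed-up
    # flow only ever delays its own packets.
    for key in list(flow_queues):
        q = flow_queues[key]
        if q:
            yield q.popleft()

enqueue({"src": "10.0.0.1", "dst": "10.9.9.9", "sport": 1234, "dport": 80, "proto": "tcp"})
enqueue({"src": "10.0.0.2", "dst": "10.8.8.8", "sport": 5678, "dport": 443, "proto": "tcp"})
served = list(dequeue_round_robin())
print(len(flow_queues))   # one queue per flow seen so far
```

Every lookup, insert, and scheduling decision here has to happen per packet at line rate, which is why the memory would effectively have to be L1 cache.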


