I'm no network engineer. But is the centralization described in this article a good thing for the Internet as a whole?
I can see that within an organization's internal network, they can assess the importance of different communications, and route accordingly. So on the internal side, it's potentially a big win.
But across the globe, who can assign the priority of traffic accurately and impartially? And isn't the decentralized nature of the current architecture an important feature, because of the way it can route around problems (be they technical or regulatory) of its own accord, without requiring a higher authority to tell it how (and thus without being susceptible to the agenda of that authority)?
Yes, it's great. Not because of anything specific to what they're doing with it but because it fundamentally changes the game from you jumping through whatever flaming hoops the network vendor chooses to provide to being able to implement what makes sense for your business.
If you haven't dealt with network gear before, it's like going back 3-4 decades in general computing: bizarre, obscure UIs; features hemmed in by very strict limits; management as a bolt-on afterthought generally treated as a profit center ("We'll sell you tools to deal with our arcane UI!"); paid bug fixes which have to be installed by hand; very limited interoperability across vendors; etc.
The point is that the switch is programmable. You can implement the centralised behaviour, or a decentralised one. You can implement anything you want.
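To make "the switch is programmable" concrete, here's a toy sketch of a switch's forwarding behaviour reduced to a match-action table that software can rewrite at will. The names (`FlowTable`, `Rule`) are invented for illustration, not any real OpenFlow or vendor API:

```python
class Rule:
    def __init__(self, match, action, priority=0):
        self.match = match        # header fields to match, e.g. {"dst": "10.0.0.2"}
        self.action = action      # e.g. ("forward", port) or ("drop",)
        self.priority = priority

class FlowTable:
    def __init__(self):
        self.rules = []

    def install(self, rule):
        # Keep highest-priority rules first so lookup finds them before others.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)

    def lookup(self, packet):
        for r in self.rules:
            if all(packet.get(k) == v for k, v in r.match.items()):
                return r.action
        return ("drop",)          # default: drop unmatched traffic

table = FlowTable()
table.install(Rule({"dst": "10.0.0.2"}, ("forward", 3)))
table.install(Rule({"src": "10.0.0.9"}, ("drop",), priority=10))

print(table.lookup({"src": "10.0.0.1", "dst": "10.0.0.2"}))  # ('forward', 3)
print(table.lookup({"src": "10.0.0.9", "dst": "10.0.0.2"}))  # ('drop',)
```

Whether those rules implement shortest-path routing, load balancing, or a firewall is entirely up to whoever fills the table -- that's the game change.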
The idea is to build a globally consistent view of the network. Currently, each node builds up its own view of the global state and routes based on it. Sure, distributed protocols allow us to share this information, but there's no guarantee that the state will be accurate a few hops away.
OpenFlow allows an entity to keep a globally consistent state and calculate the rules by which each of the nodes should forward. This logically centralised control can then enable higher utilisation of the network. Think about traffic reports on the radio: if you are driving and know that there is a bottleneck on one highway, you can take an alternative route.
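A hedged sketch of what "logically centralised" means in practice: a controller that sees the whole topology computes each switch's next hop towards a destination, instead of every node running its own distributed protocol. BFS stands in for a real path-cost computation, and the topology here is made up:

```python
from collections import deque

topology = {                      # adjacency list: switch -> neighbours
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def next_hops(dst):
    """For every switch, the neighbour it should forward to in order to reach dst."""
    prev = {dst: None}
    q = deque([dst])
    while q:                      # BFS outward from the destination
        node = q.popleft()
        for nb in topology[node]:
            if nb not in prev:
                prev[nb] = node   # nb reaches dst via node
                q.append(nb)
    return {sw: via for sw, via in prev.items() if via is not None}

# The controller would now push one rule per switch: "to reach s4, go via ...".
print(next_hops("s4"))           # {'s2': 's4', 's3': 's4', 's1': 's2'}
```

Because one piece of software computed all the rules from one consistent picture, they can't disagree with each other the way independently converging nodes can.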
EDIT: I use "Global" in this context for within an AS, not necessarily internet-wide.
Thank you for that analogy, because it actually serves to illustrate my concern.
Here in NJ there's a station that reaches most of the state, and makes a big deal of its every-15-minute traffic reports. I used to listen to these while commuting, until I found from experience that their reports, at least for the roads I deal with, carried data that was either so stale as to be useless, or was just plain wrong. So now I don't listen to that station anymore. Instead, I use an app called Waze for my phone. This uses crowd-sourced data (i.e., decentralized), which also isn't wholly dependable (there's not always another user there ahead of me to make a report, and it's still susceptible to gaming), but on the whole it gives me a better picture of the traffic situation.
Is that analogy necessarily parallel to networking? The radio station communicates with an entire city. The traffic jam only affects traffic within X miles of the bottleneck. I can imagine a car radio that automatically switches to a local radio station that only broadcasts traffic jams that are relevant to cars in that area, eliminating the need for a larger centralized station.
Well, I think any analogy starts to fall apart when you look too closely. But you've got the right idea. Sure, there's no reason you couldn't distribute it out further. I used the word 'entity' above in an attempt to imply that it could be "one large radio station" or "a group of smaller radio stations"--the point is that the decision-making is abstracted out to somewhere else.
Note also the use of "logically centralized", not "physically centralized".
The point is that this centralization is within an org boundary, not across networks.
Today, to do this, you may need to configure several switches and routers between the server and the source and destination of its traffic, and even then you aren't able to globally optimize.
Depending on security considerations, it may even preclude certain servers from being in certain racks, based on the switch they are connected through.
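A sketch of how centralised control removes that constraint, with all names invented for illustration: one controller installs the same security policy on every switch along a path, so the policy no longer depends on which rack (and thus which switch) the server happens to sit behind.

```python
def install_policy(path, match, action, switch_configs):
    """Push the same match->action rule to each switch on the path."""
    for sw in path:
        switch_configs.setdefault(sw, []).append((match, action))

configs = {}
# Allow only TCP/443 to the server, wherever it lives, and drop everything else:
install_policy(["tor-3", "agg-1", "core-2"],
               {"dst": "10.0.5.20", "dport": 443}, "allow", configs)
install_policy(["tor-3", "agg-1", "core-2"],
               {"dst": "10.0.5.20"}, "drop", configs)

# Every switch on the path now carries an identical, ordered pair of rules.
print(configs["tor-3"])
```

If the server moves racks, the controller just recomputes the path and pushes the same rules to the new set of switches -- no per-box hand configuration.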