After server virtualization took off, virtualization became a buzzword that made it easy to get attention from the market and, for startup companies, to get funding. As a result, you've seen many technologies claiming to be "*-virtualization," mostly for marketing purposes. Network virtualization is one such case. The even newer term for it is software-defined networking, or simply SDN.
It’s Centralization, Really!
Network virtualization initially referred to OpenFlow (http://www.openflow.org/), a new technology that started as a research project at Stanford University. Traditionally, the intelligence of an IP network lives in the routers and switches: the individual network components talk to each other with various protocols (see the many RFCs from the Internet Engineering Task Force) and learn where to forward IP packets. One benefit is that if the topology changes or some nodes fail, the network adjusts itself. This comes at a price in coordination among the nodes.
OpenFlow tries to do things differently. Instead of having individual nodes decide whether or where to move IP packets, it centralizes the decisions in a server called the controller. The advantage is clear: the controller has the big picture of the network and can therefore operate it more efficiently and more flexibly. Centralization can also reduce the cost of management. Would you like to manage many different switches and routers, or just one server for everything? Most people would prefer the latter: less effort for the same or even better results.
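The split described above can be sketched in a few lines of Python. This is an illustration of the concept only, not the real OpenFlow protocol or any controller's API: the switch keeps a simple flow table and, on a table miss, asks the controller, which holds the global routing picture and pushes the decision back down as a cached rule. All class and method names here (`Switch`, `Controller`, `packet_in`, etc.) are hypothetical.

```python
class Switch:
    """A switch that only matches packets against its local flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}      # dst_ip -> output port
        self.controller = None

    def install_flow(self, dst_ip, out_port):
        # Called by the controller to push a forwarding rule down.
        self.flow_table[dst_ip] = out_port

    def handle_packet(self, dst_ip):
        if dst_ip in self.flow_table:          # fast path: local match
            return self.flow_table[dst_ip]
        # Table miss: punt the decision to the central controller.
        return self.controller.packet_in(self, dst_ip)


class Controller:
    """Holds the whole-network view and makes all forwarding decisions."""
    def __init__(self, routes):
        self.routes = routes      # (switch_name, dst_ip) -> output port

    def packet_in(self, switch, dst_ip):
        port = self.routes[(switch.name, dst_ip)]
        switch.install_flow(dst_ip, port)      # cache the decision on the switch
        return port


# Usage: the first packet triggers a controller round-trip; later packets
# for the same destination are handled entirely on the switch.
ctl = Controller({("s1", "10.0.0.2"): 3})
s1 = Switch("s1")
s1.controller = ctl
print(s1.handle_packet("10.0.0.2"))   # controller installs the rule, prints 3
print(s1.flow_table)                  # {'10.0.0.2': 3} -- now a local match
```

The point of the sketch is the division of labor: the switches stay dumb and fast, while all the intelligence sits in one place.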
Two Fatal Weaknesses
OpenFlow comes with two fatal weaknesses: a single point of failure and scalability. What if the controller fails? Existing flows may continue to work, but the network can no longer respond to new changes. That is why it's critical to have HA capability built into the controller.
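One common way to build that HA capability is an active/standby pair with heartbeats: the standby controller watches the active one and promotes itself when heartbeats stop arriving. The sketch below illustrates just that idea with hypothetical names; it is not how any particular controller implements failover.

```python
import time

HEARTBEAT_TIMEOUT = 3.0   # seconds of silence before the standby takes over

class ControllerNode:
    def __init__(self, name, role):
        self.name = name
        self.role = role              # "active" or "standby"
        self.last_heartbeat = time.monotonic()

    def receive_heartbeat(self):
        # Called whenever a heartbeat arrives from the active peer.
        self.last_heartbeat = time.monotonic()

    def check_peer(self, now=None):
        """Standby promotes itself if the active controller goes silent."""
        now = now if now is not None else time.monotonic()
        if self.role == "standby" and now - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.role = "active"      # take over the flow-rule decisions
        return self.role


standby = ControllerNode("ctl-2", "standby")
standby.receive_heartbeat()
print(standby.check_peer())                              # still "standby"
print(standby.check_peer(now=time.monotonic() + 10))     # "active" after timeout
```

In a real deployment the two controllers would also have to share flow state, so the new active node can pick up where the failed one left off rather than recomputing everything.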
Even if the controller never fails, can it scale to the scope of your network, or of the Internet? The answer depends on the size of your network. For the Internet, I think the answer is no, given current computing power and the Internet's scope.
Even if someday a computer is powerful enough to handle the scope of the Internet (I doubt it, because IPv6 will soon increase its size dramatically), centralization still would not be the direction to go, because distribution is one of the fundamental principles of the Internet. Dating from the Cold War era, the Internet was designed to keep functioning even when parts of it are completely destroyed. Even without a war, we can still see the value of this distributed design.
Where Would Network Centralization Fit?
Given the points above, I think OpenFlow will never be used at Internet scope in the foreseeable future, as distribution is at the heart of the Internet.
But, how about businesses?
It really depends. The decision point is ROI, as with most other decisions. If it's a small shop, you can grab Cisco or Juniper switches/routers, hook them up, and you are ready to go. The best design and operation? Probably not, but it works as needed. It's hard to beat that cost when the network is small. Yes, you may save on the network gear itself, but most likely not enough to pay for the controller license and the related training in this new way of network management. My guess is you'll end up paying more with OpenFlow than with traditional networking over the next few years.
For big enterprises, it also depends. More often than not, big enterprises already have mature networks. They are not likely to dump their existing investment and buy into OpenFlow unless they have a good-sized green field of data centers to be constructed. With this limitation, the number of customers who would run OpenFlow in production shrinks significantly. Having said that, I believe IT shops will still start to try and test OpenFlow in small pilot projects.
The perfect match for OpenFlow is the new datacenters of service providers and big Internet shops that meet the two important criteria: reasonably large scope and a green field. On top of that comes the added flexibility for serving tenants. I heard Google implemented OpenFlow two years ago. Again, that represents only a very small percentage of the overall market.
For the technology to be really successful, there has to be a transition story that lets typical enterprises gradually migrate to this new way of networking. That gives traditional network behemoths like Cisco more time.