
What Software-defined Networking Is and Is Not and Where It Fits

April 29th, 2013

After server virtualization took off, virtualization became a buzzword that made it easy to attract attention from the market and, for startup companies, to get funding. As a result, you've seen many technologies claiming to be "* virtualization," mostly for marketing purposes. Network virtualization is such a case. The even newer term for it is software-defined networking, or simply SDN.

It’s Centralization, Really!


Network virtualization initially referred to OpenFlow (http://www.openflow.org/), a technology that started at Stanford University as a research project. Traditionally, the intelligence of an IP network lives in the routers and switches: the individual network components talk to each other with various protocols (see the many RFCs from the Internet Engineering Task Force) and decide where to forward IP packets. One benefit is that if the topology changes or some nodes break, the network adjusts itself. This comes at a price: the nodes must constantly coordinate with one another.

OpenFlow tries to do things differently. Instead of having individual nodes decide whether and where to forward IP packets, it centralizes those decisions in a server called the controller. The advantage is clear: the controller has the big picture of the network and can therefore potentially operate more efficiently and more flexibly. Centralization can also reduce the cost of management. Would you like to manage many different switches and routers, or just one server for everything? Most people would prefer the latter: less effort for the same or even better results.
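To make the idea concrete, here is a toy sketch of the split described above: switches hold only match-action flow tables, and a central controller decides the entries and pushes them down. All names here are illustrative; this is not the real OpenFlow protocol or any controller's actual API.

```python
# Toy sketch of centralized flow control (hypothetical names,
# not the real OpenFlow wire protocol).

class Switch:
    """A 'dumb' switch: forwards only by its installed flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # match (dst prefix) -> action (output port)

    def install_flow(self, match, action):
        self.flow_table[match] = action

    def forward(self, dst):
        # Longest-prefix match over installed flows; no local routing logic.
        for prefix in sorted(self.flow_table, key=len, reverse=True):
            if dst.startswith(prefix):
                return self.flow_table[prefix]
        return "send-to-controller"   # unknown flows are punted upward

class Controller:
    """Central brain: sees the whole network, programs every switch."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_policy(self, match, action):
        # One decision, applied network-wide -- the centralization win.
        for sw in self.switches:
            sw.install_flow(match, action)

ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)
ctrl.push_policy("10.0.", "port-1")

print(s1.forward("10.0.3.7"))      # port-1
print(s2.forward("192.168.1.1"))   # send-to-controller
```

The point of the sketch is the asymmetry: the switches contain no routing protocol at all, only table lookups, while every decision flows from one place.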

Two Fatal Weaknesses

OpenFlow comes with two fatal weaknesses: a single point of failure and scalability. What if the controller fails? Existing flows may continue to work, but the network cannot respond to new changes. That is why it's critical to have HA capability built into the controller.
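A common HA pattern is an active-standby controller pair with heartbeats: the standby takes over when the primary goes silent. The following is a toy sketch under that assumption; the names and the timeout-based election are illustrative, not how any real controller implements HA (production systems use clustering and consensus protocols).

```python
import time

# Toy active-standby failover sketch (illustrative only).

HEARTBEAT_TIMEOUT = 3.0   # seconds without a heartbeat before failover

class ControllerNode:
    def __init__(self, name, active=False):
        self.name = name
        self.active = active
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def is_alive(self, now=None):
        now = time.monotonic() if now is None else now
        return (now - self.last_heartbeat) < HEARTBEAT_TIMEOUT

def elect_active(primary, standby, now=None):
    """Promote the standby if the primary missed its heartbeats."""
    if primary.is_alive(now):
        primary.active, standby.active = True, False
    else:
        primary.active, standby.active = False, True
    return primary if primary.active else standby

primary = ControllerNode("ctrl-a", active=True)
standby = ControllerNode("ctrl-b")

# Primary keeps heart-beating, so it stays active.
primary.heartbeat()
print(elect_active(primary, standby).name)   # ctrl-a

# Simulate a crashed primary: evaluate well past its last heartbeat.
later = time.monotonic() + 10
print(elect_active(primary, standby, now=later).name)   # ctrl-b
```

Even this toy shows why HA only softens the problem: during the timeout window the network still has no brain, which is exactly the window the existing flows must survive on their own.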

Even if the controller never fails, can it scale to the scope of your network, or of the Internet? Depending on the size of your network, the answer may differ. For the Internet, I think the answer is no, given its sheer scale and current computing power.

Even if someday a computer were powerful enough to handle the scope of the Internet (I doubt it, because IPv6 will soon increase that scope dramatically), centralization still would not be the direction to go, because distribution is one of the fundamental principles of the Internet. Dating from the Cold War era, the Internet was designed to remain functional even if parts of it were completely destroyed. Even without a war, we can still see the value of this distributed design.

Where Would Network Centralization Fit?

Given the points above, I think OpenFlow will never be used at Internet scope in the foreseeable future, as distribution is at the heart of the Internet.

But, how about businesses?

It really depends. The decision point, as with most other decisions, is ROI. If it's a small shop, you can grab Cisco or Juniper switches/routers, hook them up, and you are ready to go. The best design and operation? Probably not, but it works as needed. It's hard to beat that cost when the network is small. Yes, you may save on the network gear, but most likely not enough to pay for the controller license and the related training on the new way of network management. My guess is that you'll end up paying more with OpenFlow than with traditional networking over the next few years.

For big enterprises, it also depends. More often than not, big enterprises already have mature networks. They are unlikely to dump their existing investment and buy into OpenFlow unless they have a good-sized greenfield data center to be built. With this limitation, the number of customers who would use OpenFlow in production is significantly reduced. Having said that, I believe IT shops will still start to try and test OpenFlow in small pilot projects.

The perfect match for OpenFlow is new data centers for service providers and big Internet shops that meet two important criteria: reasonably large scope and a greenfield build. On top of that comes the added flexibility for serving tenants. I heard Google implemented OpenFlow two years ago. Again, that represents only a very small percentage of the overall market.

For the technology to be really successful, there has to be a transition story that lets typical enterprises gradually migrate to this new way of networking. That gives traditional networking behemoths like Cisco more time.
