
Dodging open protocols with open software

February 8, 2012 By Brad Hedlund 17 Comments

 I have a hunch.  Pure speculation, in fact, that there may be an even more interesting story developing here with the unveiling of networking software startup Nicira Networks.  The story being that of open source networking software minimizing the role of network “protocols” and the diminishing role of standards bodies in building next generation networks.

Nicira Networks’ network virtualization platform (NVP) leverages the Open vSwitch (much of which was developed by Nicira engineers) as the data path for whatever edge networking device it’s installed on.  In this case, it’s a server with a hypervisor and a bunch of virtual machines.  Though, as Nicira points out in their literature, there’s nothing preventing the Open vSwitch from being installed on other networking devices and form factors, such as firewalls, load balancers, and physical switches.  With the Open vSwitch, an open source software project, as the workhorse, one can certainly make the claim that the solution is “open” and not proprietary.  Furthermore, the data path of the Open vSwitch uses established and well understood open protocols: 802.1Q, GRE, etc.
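To put a fine point on “well understood”: those data-path encapsulations are tiny fixed headers that anyone can construct. A quick Python sketch (the field layouts come from the 802.1Q and GRE specs; the helper functions themselves are just my illustration):

```python
import struct

def dot1q_tag(vlan_id, priority=0):
    """Build the 4-byte 802.1Q tag: TPID 0x8100 followed by the
    priority/VLAN tag control information."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", 0x8100, tci)

def gre_header(proto=0x6558):
    """Minimal GRE header (RFC 2784): no optional fields, version 0.
    Ethertype 0x6558 is Transparent Ethernet Bridging, i.e. the
    Ethernet-over-GRE encapsulation a vswitch tunnel would carry."""
    return struct.pack("!HH", 0x0000, proto)

print(dot1q_tag(100).hex())   # 81000064
print(gre_header().hex())     # 00006558
```

Four bytes each. The point being: the physical-network plumbing underneath all of this is stable, simple, and long since standardized.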

Now that you have these Open vSwitches everywhere you need something to centrally configure and control their data path with a forwarding policy.  This is where Nicira’s clustered controller comes in, or perhaps a controller provided by some other vendor.  The Nicira central controller will control all of the edge Open vSwitches in an elegant way (perhaps a gross understatement).  That’s where they’ll make money — selling their controller software and all the professional services you might need to get things working right in your environment.

This is where things get interesting.  Most people think that you would need an open “protocol” for the controller to interface with the edge Open vSwitch.  And absolutely, you certainly should have that.  That way any vendor can supply the controller while using the same Open vSwitches. Right?  And if you take a cursory look at the Open vSwitch documentation, as expected you’ll see OpenFlow as the protocol for this purpose.
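For the curious, “speaking OpenFlow” boils down to exchanging binary messages that all start with the same small fixed header. A Python sketch of the OpenFlow 1.0 header layout (the field layout and type number come from the spec; the helper name is my own):

```python
import struct

OFP_VERSION_1_0 = 0x01
OFPT_FLOW_MOD = 14  # message type number in the OpenFlow 1.0 spec

def ofp_header(msg_type, body_len, xid=0):
    """The 8-byte header every OpenFlow 1.0 message starts with:
    version, type, total message length, transaction id."""
    return struct.pack("!BBHI", OFP_VERSION_1_0, msg_type, 8 + body_len, xid)

# Header for a (hypothetical) 64-byte flow_mod body.
hdr = ofp_header(OFPT_FLOW_MOD, body_len=64, xid=1)
print(hdr.hex())  # 010e004800000001
```

A controller from Vendor-A and a switch from Vendor-B both have to agree on every one of those bytes — which is exactly why a standards body has to bless the layout before heterogeneous gear can interoperate.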

When you use a protocol, you obviously need to follow the rules of the protocol; otherwise you’re not adhering to the standard, and people tend to get really upset about that kind of stuff.  So, somebody has to set the rules for others to follow.  Which usually involves getting a group of people with inflated egos together to agree on something, be it a vendor-led standards body such as the IETF, or, in the case of OpenFlow, a customer-led “foundation” such as the ONF.  All of this takes time to get right. Lots of time.  Meanwhile, there’s a market out there willing to pay for a solution now.

Think about this for a second — Why do we need to use an open “protocol” for a controller to program a switch? “Well, Brad, that’s obvious, because otherwise the solution would be deemed proprietary, heaven forbid!” True, perhaps, if you’re thinking in terms of the usual paradigm where Vendor-A’s box is running Vendor-A software, connected to Vendor-B’s box running Vendor-B software.  This is obviously where we need protocols.  But what if Vendor-A’s box was running open source software, and Vendor-B’s box was running the same open source software? Or at least, the communication path between Vendor-A and Vendor-B is through an open source software module. Do you need a “protocol” then?

With that in mind, take a closer look at the Open vSwitch documentation, dig deep, and what you’ll find is that there are other means of controlling the configuration of the Open vSwitch, other than the OpenFlow protocol.

Take for example the ovs-vsctl “component” of the Open vSwitch.  This component can be used to remotely configure the Open vSwitch at a granular level — such as editing tables and records in the vswitch configuration database.  It’s one piece of open software talking to another piece of open software over a standard TCP connection — you can’t call that proprietary.  And guess what, you don’t need a dinosaur standards body to decide what goes in the code.  Is ovs-vsctl a “protocol”? No.
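To illustrate what that looks like on the wire: ovs-vsctl talks to the Open vSwitch configuration database server (ovsdb-server) using JSON-RPC over that TCP connection. A rough Python sketch of the kind of request involved — the helper is purely illustrative, not the actual Open vSwitch code, and I’m glossing over connection handling and framing:

```python
import json

def ovsdb_request(method, params, req_id=0):
    """Frame a JSON-RPC request of the kind ovs-vsctl's backing
    database server (ovsdb-server) understands, as sent over a
    plain TCP or Unix socket."""
    return json.dumps({"method": method, "params": params, "id": req_id})

# Ask for every record in the Bridge table of the Open_vSwitch database.
msg = ovsdb_request("transact",
                    ["Open_vSwitch",
                     {"op": "select", "table": "Bridge", "where": []}])
print(msg)
```

No standards-body ballot decided that message format — the programmers working on the code did, and the code itself is the reference anyone can read.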

Nicira may or may not be using the OpenFlow “protocol” to control the Open vSwitch in their current deployments.  There’s enough evidence to suggest they had that choice to make.  Perhaps the OpenFlow 1.0 spec was just too limited for what their customers needed at that time.  If so, what’s wrong with coding around the limitations in an open software platform?

The point here isn’t to blow a standards dodger whistle, but rather to observe that, perhaps, a significant shift is underway when it comes to the relevance and role of “protocols” in building next generation virtual data center networks.  Yes, we will always need protocols to define the underlying link level and data path properties of the physical network — and those haven’t changed much and are pretty well understood today.

However, with the possibility of open source software facilitating the data path not only in hypervisor virtual switches but in many other network devices, what then will be the role of the “protocol”? And what role will a standards body have in such a case, when the pace of software development far exceeds that of protocol standardization?

Great example: Take a look at the OpenStack Quantum project

Thoughts?  Have I lost my marbles? Chime in with a comment.


Disclaimer: The author is an employee of Dell, Inc. However, the views and opinions expressed by the author do not necessarily represent those of Dell, Inc. The author is not an official media spokesperson for Dell, Inc.

Filed Under: Open vSwitch, OpenFlow, SDN


  1.  DanielG says:
    February 8, 2012 at 2:10 pm

Certainly agree with you on the protocol, but wouldn’t ovs-vsctl be considered an API or “language” of sorts? If you wanted to add VendorB to your VendorA controller network, VendorB would at least need to speak the same control language. No?

    •  Brad Hedlund says:
      February 8, 2012 at 4:50 pm

      Yes! This underscores the point I attempted to make — you need the same control language on both ends. In networking today, that language has always been defined by standards bodies (love’em or hate’em). However in a world where you have open source software controlling the network, the control language is defined by the programmers working on the code. Big difference.

      •  Jason Edelman says:
        February 8, 2012 at 7:35 pm

        Brad, nice write up. Now I fully understand where you were going with your tweets earlier. Great perspective.

        Some thoughts…

        When we normally think protocols (in the network world), they are most of the time L2/L3 protocols, right? If it’s a control plane or management plane “interface,” i.e. protocol or API, it’s higher up in the stack obviously, but we still have standards today. Maybe it’s these standards that go away in the shorter term.

        But, is it less important to be a standard if it’s at the management plane of a network? Maybe we use a new way to manage/configure/update flows to an OVS day 1 (today)? That’s okay because this just means really cool management systems. After all, what great NMSs are out there today? Network guys will like that…and they’ll even get to keep standards like OSPF/BGP to calculate the FIB keeping the same guys semi-happy without too much change at once.

        Shortly after, it will be VERY easy to modify OSPF or create a totally new “flow protocol,” for lack of a better term, which will be another new proprietary protocol, this time at the control plane. This is where it gets interesting from a standards perspective. So the more I think about this, you’re definitely onto something, and standards bodies may have less of a role here?

        I’m starting to think I lost my marbles too. It’s hard thinking straight these days.


  2.  Adam Thompson says:
    February 8, 2012 at 2:22 pm

    Brad, I think you’re confusing Protocols with Standards.
    Manipulating ovs-vsctl over TCP is a protocol. Unofficial, de facto, but still a protocol.
    GRE and 802.1Q are official, de jure, standards.
    s/protocol/standard/ in your article, and it makes more sense. It also ceases to be a big deal – lots of things aren’t standardized, this is just one more.

    •  Brad Hedlund says:
      February 8, 2012 at 3:50 pm

      I see your point. The word “protocol” can mean slightly different things in network speak vs. programmer speak. In the new era of blending networking with programming, this won’t be the first time we see this.
      Thanks for the comment.

  3.  Kurt Bales (@networkjanitor) says:
    February 8, 2012 at 2:28 pm

    Great post, as usual, Brad. I was actually having a similar conversation with somebody just the other day. I guess the main purpose of many “Protocols” as we use them today is to allow independent network components to communicate state and to determine an agreement on that state.

    Within an autonomous device it is generally up to the manufacturer to determine how to handle the state information. In the OpenFlow model that determination is a function of the OpenFlow controller. It would make sense for the communications channel between the controller and the switch in an SDN environment to use OpenFlow as the protocol of choice, though if the spec doesn’t allow for a full set of features then an open platform and an available specification is the next best thing. Who knows, having “features in the field” may be exactly what is needed to get them worked into the OpenFlow standards…


  4.  Wes Felter says:
    February 8, 2012 at 2:45 pm

    I think it’s worth remembering that “built on open source” always means “proprietary” and “arguably open” also means “proprietary”, because if something is really open then it doesn’t need to hedge. I don’t mean to diminish what Nicira has accomplished; I think it can survive on its merits and doesn’t need openwashing.

    We’ve seen the “who needs standards” attitude quite a bit in the open source world; in many cases it works well but in other cases it creates a sort of “Linux ghetto” where free stuff can’t interoperate with other stuff. For example, people with Cisco boxes probably don’t appreciate hearing “IPSec is bloated, just install OpenVPN”. I think Nicira has been careful to set expectations that you need their gateway and you shouldn’t expect any third-party switches to be able to join the party. Ultimately I expect a standard overlay protocol will be developed (see NVO3) and at that point there will still be an advantage to being standard-compliant instead of “inspired by the standard”.

  5.  Jody Scott says:
    February 8, 2012 at 6:11 pm

    I think the difference between an API and a protocol is largely a matter of scale acceptance. The impact of standards bodies on the development of relevant protocols is generally overstated and forms long after the natural evolution of the protocol has developed. Protocols developed by committee have not usually fared so well. The future of SDN is bright and the protocol, whether openflow or not will follow.

  6.  JS says:
    February 8, 2012 at 9:42 pm

    And how can you make the Nicira controller talk with a controller of another network that is not Nicira? How do you make it talk to existing infrastructure? How do you cross administrative domain boundaries? This is where standards are needed… The reason the Internet works is that there is a well defined set of protocols for building networks of networks, and adversaries can still exchange traffic and routes. If you get rid of those you build islands… and if you are an enterprise you might be ok with that (you need a network for yourself). But this is not the Internet. This is SNA.

    The interface between the controller and the switch is not interesting by itself and it can be anything … Within a single administrative domain, life is simple.

    It is sad that people are forgetting why the design choices of the Internet (not just IP) were made, and how big of an impact this had in its exponential growth.

    •  Brad Hedlund says:
      February 9, 2012 at 7:13 am

      If the same underlying open source software exists on both ends, presumably each end has the same tools and APIs to interface with. You don’t need a bloated standards body to decide what goes in the code. That’s the difference.

      A perfect example of this in action is the OpenStack Quantum project.


      •  JS says:
        February 9, 2012 at 11:56 am

        Ok. Let me know when Nicira open sources their controller software so that it can go in both sides of different administrative domains and inter-work…

        That’s my point…

    •  Jason Edelman says:
      February 9, 2012 at 7:42 am


      Was voice doomed by IP Telephony? Maybe for some, but did the TDM world that had been around for even longer than IP networks simply die? No, there are gateways and devices alike that are used to inter-connect them. We would need something similar for SDN. Similar, but different.

      My 2 cents.


  7.  Abner Germanow says:
    February 8, 2012 at 10:46 pm

    Bang on sir. Standards bodies always lag dudes with duct tape and water leaking.

    Also, to prove your point in some regard, there is no reason the de facto software stack won’t show up in all sorts of interesting places. The Juniper MX router OpenFlow implementation runs Open vSwitch.


  8.  Salvatore Orlando says:
    February 9, 2012 at 8:48 am

    Whenever I hear about new protocols, I always recall reading in a book that “The world is not happy when a new protocol is born”; and I believe that holds true no matter how open the protocol is. (The book was “Computer Networks” by A. Tanenbaum, I guess.)

    From my limited perspective, I see open source components and open APIs as a way of building virtualized networks. As cited by the author, Quantum is an example; however, the very ovs-vsctl CLI utility cited in this post uses an API exposed by the ovsdb-server process, which is a simple JSON-RPC interface to manipulate the OVS layout — and by extension the topology of your virtual networks; in the open source space, CloudStack has a rich API for building virtual networks as well.

    Nevertheless, I do believe in the concept of separating data forwarding, management and control planes, which is at the foundation of OpenFlow and Open vSwitch; it is my humble opinion anyway that this will happen independently of OpenFlow.

    Thanks for this post!



  1. OpenFlow, Software Defined Networking and Standards - Is it the process open and reasonable ? — My EtherealMind says:
    February 9, 2012 at 3:09 pm

    [...] vibrant exchange on Twitter), Brad Hedlund asks whether Nicira’s Open vSwitch is open – Dodging open protocols with open software ? The story being that of open source networking software minimizing the role of network [...]

  2. Open Cloud Initiative Is Dead Long Live OCI says:
    February 9, 2012 at 4:13 pm

    [...] but it is quite possible to work around its absence if there is open source, a fact once again highlighted by this brilliant post by Brad Hedlund. No, I am nowhere close to claiming that we don’t need to focus on open standards but I am [...]

  3. Networking Guy’s, Just don’t understand software | says:
    February 9, 2012 at 11:13 pm

    [...] very interesting points of view from my distinguished ex-colleague Brad Hedlund entitled “Dodging Open Protocols with open software“. In his post he tries to dissect both the intentions and impact of a new breed of [...]

