More on Open: Standards
This is a follow-up to a post from a couple of weeks ago, "Several Different Meanings Behind 'Open'." The intent is to make more precise what people mean when they say Open, with an eye toward getting to better conversations. This is not meant to advocate any one definition over the others; all are valid.
Discussions around Open frequently gravitate toward technology standards and the bodies that support them (IETF, ONF, and so on). In this context, the thinking is that standardized technologies will exhibit fewer (if any) vendor-specific elements. A well-documented technical specification means that any vendor can develop the technology, and that components from disparate vendors will be able to interoperate.
Standards are a tremendous means of getting cross-industry collaboration on a set of technologies that have a high degree of interdependence in multi-vendor environments. When a particular protocol is standardized, for example, it prevents one vendor from using its IP rights around the technology to essentially prevent other vendors from implementing the protocol. In this regard, standards are an important means of ensuring a level playing field.
Perhaps the most notable example of the need for standards is Cisco, which has frequently developed new technologies and used its dominant incumbent position to establish de facto standards. WCCP and CDP are two examples of Cisco protocols that effectively became enterprise standards. Cisco's reluctance to submit these protocols to a major standards body meant that other vendors could not implement them and guarantee interoperability with Cisco devices. This strengthened Cisco's market position and created strong vendor lock-in once the protocols were deployed in production environments.
The risks of non-standard technologies are twofold: vendor lock-in (as described above) and a lack of interoperability. Examining standards through these two lenses is a useful exercise.
Standards do very little to constrain technologies. The intent is to capture broadly which aspects of a technology ought to be commonly specified, but this in and of itself does not prevent, or even dissuade, companies from extending the technology in vendor-specific (and often proprietary) ways. Virtually every routing protocol has a set of vendor-specific extensions that cater to particular use cases within each vendor's solution. The extent to which standards prevent vendor lock-in therefore depends primarily on customers' willingness to deploy only the standardized (and ubiquitously implemented) aspects of a technology.
In fact, this is desired behavior. Because standards bodies use a rough-consensus model, movement tends to be slow: by design, nothing becomes a standard until most members agree on a direction. In the early phases of a technology's evolution, this means that technological advances will almost certainly outpace standards activity.
For mature technology areas, standards provide guidance and oversight, a protection against vendor hijacking that serves end users well. For emerging technologies, standards often move too slowly, acting more as inhibitor than steward and keeping technologies out of customer environments en masse.
So when are standards important? As a rule of thumb, once the majority of vendors have adopted a technology, standards serve a necessary governance function. Before adoption is broad, while the technology is still in its formative stages, standardization is likely premature and could slow the evolution of production-ready solutions.
The second major desire behind the push for standards is interoperability. Indeed, if companies agree on what black-box behavior should be, the ability to integrate different solutions should improve. In this case, though, the driving desire is interoperability, not standards.
It is, in fact, possible to have standardized behavior that is not interoperable. Technologies like MPLS and multicast are long-time routing staples, yet both Cisco and Juniper have implemented extensions that prevent interoperability in some cases.
Note that this is not intended to make the case against extensions; in many cases, they are absolutely required to solve real-world problems. Instead, this is to say that if the primary desire is interoperability, standards might not be sufficient (and in some cases might not be necessary).
So how do we apply this to SDN, where there is a lot of work by two somewhat competing bodies (ONF and ODP)? SDN needs to get code into production so we can observe, learn, iterate, and evolve. Think of ODP as a young athlete eager for a chance on the field. The ONF, by contrast, is taking a more measured, academic approach, soliciting input and providing guidance to make sure decisions are firm and will hold; it is more like a seasoned veteran. Each has a role, and together they create a healthy tension.
But if I had to choose, gun to my head, what do I think will advance the cause?
We need working code in deployment before we will really learn what the operational issues of SDN are, and I predict those issues will be the biggest inhibitor to broad adoption in real production environments. The real concern in the industry is not whether something is standard but whether it is proprietary. Vendor lock-in is the thing we need to fight, and an extensible protocol, standardized or not, is both a great tool and a great weapon. Our best shot at interoperability is rapid code development, testing, and deployment from a common platform. And if this plays out in public, rather than behind closed doors and on secret email lists, all the better.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)