---------------------------------------------

NOTE: This article is an archived copy for portfolio purposes only, and may refer to obsolete products or technologies. Old articles are not maintained for continued relevance and accuracy.
August 24, 1998

Standards Never Die

I've been really busy over the past couple of months, writing a variety of articles on new networking technologies for various trade publications, and slaving away on my Internet Protocols book for O'Reilly. As you can probably imagine, a big part of this work has involved heavy research into the various networking standards and protocols that keep our networks working.

Protocols are really interesting to me, partly because of the functionality that they provide, but also because they present unique problems whenever they need to be enhanced. Some protocols have support for future enhancements designed in from the start, leaving plenty of room for improvement while also maintaining backwards compatibility. Conversely, many of the older protocols have extraordinarily rigid structures that provide no opportunities for modification without breaking interoperability altogether.

Support for forward compatibility in networking protocols is becoming a critical issue, particularly as new technologies that push the envelope of network utilization are being deployed. As a result, many of the core elements of today's data networks are being retrofitted to allow these new technologies to work reliably. In some cases, entirely new protocols are being developed to get around those protocols that are so inflexible that they cannot accommodate any sort of tweaking.

In all these cases, the fundamental issue revolves around the options for dealing with outmoded technologies. This is similar conceptually to the issues surrounding installed base management and migration for just about any kind of product, whether it be productivity applications or computer systems. We've all felt this kind of pain directly in the past. However, we're about to go through some major changes in the networking space which will make those efforts pale in comparison.

New VoIP Technologies

One of the articles that I just finished is a feature on Voice-over-IP, looking at the various technologies and implementations available today, and how they impact traditional data networks (this article will run in Network Computing later this fall). By far, the most widely-implemented standard today is H.323, an ITU standard for transmitting audio over packet-based networks like IP.

Just about everybody agrees that H.323 is a lousy standard, particularly when you look at it from an IP-centric perspective. The nature of the protocol is such that any node can contact any other node directly, with both systems dynamically negotiating the TCP and UDP ports that they will use for the duration of the call. As such, H.323 calls don't use fixed port numbers like most other Internet-based protocols, resulting in all kinds of problems for firewalls (you have to open every TCP and UDP port on your firewall if you want bi-directional calling), as well as for network address translators, IP gateways and other technologies that rely on fixed port numbers.
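
To make the problem concrete, here is a minimal sketch (in Python, with made-up media port numbers) of why a fixed-rule packet filter can pass the predictable H.323 call-setup channel but has no way to describe the ports that carry the actual call:

    # Sketch: why fixed-port firewall rules break down for H.323.
    # SMTP and HTTP live on well-known ports, and even H.323 call
    # setup (Q.931) predictably uses TCP port 1720. But the media
    # channels use ports negotiated inside the call, so no static
    # rule can describe them in advance. Illustrative only.

    FIXED_PORT_RULES = {
        ("tcp", 25): "allow",    # SMTP
        ("tcp", 80): "allow",    # HTTP
        ("tcp", 1720): "allow",  # H.323 call setup
    }

    def filter_packet(proto, dst_port):
        """Return the action for a packet, defaulting to deny."""
        return FIXED_PORT_RULES.get((proto, dst_port), "deny")

    print(filter_packet("tcp", 1720))   # allow -- call setup gets through
    print(filter_packet("udp", 49170))  # deny  -- a negotiated RTP port

The only static "fix" is to allow everything, which is no fix at all.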

Yet, H.323 is the only workable standard today, so vendors have to support it if they want to be taken seriously. Most are doing so, but are also saying they'll drop it like a bad habit once something else becomes widely available. Among the alternatives currently being proposed are the Session Initiation Protocol (SIP), the Simple Gateway Control Protocol (SGCP) and the Internet Protocol Device Control (IPDC) protocol, each of which promises to make VoIP easier to deploy and manage in complex, multi-vendor installations.

Unfortunately, all of these protocols are extraordinarily new, with the ink on the proposals barely even dry. It will be at least a year before these alternatives make it through the first round of RFC negotiations, and probably two or three years before they are functional enough to compete with H.323 on an even footing.

But in the meantime, the number of H.323 implementations is likely to go through the roof, as more vendors build support for the one working standard into their current offerings. As a result of this rapid adoption, H.323 will likely become tomorrow's de rigueur standard, rather than just a de facto one for today. In the end, nobody will be able to drop H.323 from their products, regardless of the alternatives available, because it will be so pervasive. So even if H.323 sucks, it'll be around for years and years, proving that standards are harder to kill than anybody wants them to be.

Another example of this ghost effect can be found with POP and IMAP. Although IMAP is vastly superior to POP in almost every regard, and although most of the e-mail systems available today support IMAP, no vendor would imagine trying to release a product that didn't support POP. It's a de rigueur standard with far too many implementations, and far too large an installed base, to be ignored. In five years' time, H.323 will look much the same.

The New Ethernet

Sometimes wide-scale deployment is such a huge thorn that you don't even have the option of replacing a legacy architecture with better alternatives. One example of this can be found with Ethernet, arguably the most widely-deployed protocol on the planet, which cannot simply be replaced with an entirely new protocol. Instead, any enhancements to Ethernet have to be implemented using the architecture of the existing protocol, allowing legacy infrastructure equipment (e.g., hubs) to continue working. This can get pretty tricky, since Ethernet's design doesn't really allow changes to be made easily.

We've already seen this in action with Ethernet II and 802.3, two fairly different protocols that have to use the same header structures in order to peacefully co-exist on the network. In fact, these frames are so much alike that the only way networking equipment can tell them apart is by examining the value of the ethertype/length field. This is a hack, almost by definition, but it works, and there are no better alternatives for getting around the limitations inherent in the Ethernet architecture while also maintaining compatibility with legacy equipment.
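
For the curious, the entire disambiguation test fits in a few lines. Here is a rough sketch of the check in Python: values of 0x0600 (1536) and above are ethertypes, while values of 1500 and below are 802.3 payload lengths (the frame bytes below are fabricated for the example):

    import struct

    def classify_frame(frame):
        """Inspect the two bytes after the destination and source
        addresses to decide which kind of frame this is."""
        (type_or_len,) = struct.unpack("!H", frame[12:14])
        if type_or_len >= 0x0600:
            return "Ethernet II, ethertype 0x%04x" % type_or_len
        if type_or_len <= 1500:
            return "802.3, payload length %d" % type_or_len
        return "undefined"  # 1501-1535 is a no-man's land

    # Two zeroed MAC addresses followed by ethertype 0x0800 (IP)
    frame = bytes(12) + struct.pack("!H", 0x0800) + bytes(46)
    print(classify_frame(frame))  # Ethernet II, ethertype 0x0800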

This trickery is also being used to implement the new IEEE 802.1Q standard, a protocol that provides standardized VLAN and prioritization capabilities to Ethernet networks through four new bytes of data in the header. Although this changes the fundamental design of the frame's header, it also allows end-station equipment to continue using simple arithmetic on the ethertype/length field to figure out the frame's contents, without breaking infrastructure equipment at the same time. With this design, 802.1Q frames appear to contain data for ethertype "8100" networks, so if an 802.1Q-compliant end-station sees an ethertype of 8100, it can pretty much assume that the frame is formatted for 802.1Q data (and then conduct additional tests to verify the assumption). But older hubs and repeaters won't be affected, since the frame still looks like Ethernet.
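
Here is a sketch of what that additional parsing looks like from an 802.1Q-aware station's perspective, in Python with hand-rolled field offsets: after the 8100 ethertype come two bytes carrying a 3-bit priority and a 12-bit VLAN identifier, with the frame's real ethertype following the tag:

    import struct

    def parse_vlan_tag(frame):
        """Return (priority, vlan_id, real_ethertype) for a tagged
        frame, or None if the frame carries no 802.1Q tag."""
        (tpid,) = struct.unpack("!H", frame[12:14])
        if tpid != 0x8100:
            return None  # ordinary, untagged frame
        tci, real_type = struct.unpack("!HH", frame[14:18])
        priority = tci >> 13      # 3-bit priority, 0-7
        vlan_id = tci & 0x0FFF    # 12-bit VLAN membership
        return priority, vlan_id, real_type

    # A fabricated frame: priority 5, VLAN 42, carrying IP (0x0800)
    tag = struct.pack("!HHH", 0x8100, (5 << 13) | 42, 0x0800)
    print(parse_vlan_tag(bytes(12) + tag + bytes(46)))  # (5, 42, 2048)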

The downside of this design is that older end-stations can't see whatever data is inside the 802.1Q frame. Thus, print servers won't see print jobs submitted by 802.1Q devices, database clients won't find servers with 802.1Q adapters, and a host of other problems will arise from mixing and matching old and new gear together without careful planning. To keep these problems to a minimum, most of the 802.1Q switch vendors will be providing support for legacy equipment on a per-port basis, stripping the excess header data from 802.1Q traffic that is being sent to a legacy device. However, users who don't deploy these new switches will surely have problems, and we can expect to see lots of ink being spilled on this issue for the next two or three years at least.
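
The stripping operation itself is trivial, as this sketch suggests (reusing the fabricated frame layout from above); the hard part is knowing which ports need it:

    def strip_vlan_tag(frame):
        """Remove the four 802.1Q tag bytes so a legacy device sees
        a plain Ethernet frame."""
        if frame[12:14] == b"\x81\x00":
            return frame[:12] + frame[16:]  # drop TPID and tag control
        return frame  # already untagged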

Lots of minor (yet indicative) problems are already starting to crop up from the use of 802.1Q frames, particularly in the network management space. None of the protocol analyzers in use today support 802.1Q, of course, so they aren't able to decode the frame's contents, making network diagnostics more difficult when problems do crop up from the deployment of 802.1Q equipment. Also, it turns out that the 8100 ethertype was used by Wellfleet at one time, so network management consoles are now showing sudden increases in "Wellfleet" traffic, which is causing a fair amount of consternation for users with nothing but Cisco routers, to say the least.

So, interoperability will definitely be an issue, and bringing the entire industry up to speed on the technology will be expensive and time-consuming. But given that Ethernet doesn't provide for different datatypes and versions directly, this is about the best we can expect while also maintaining basic compatibility with the existing infrastructure equipment. Such is the price you pay for widespread deployment.

The New IP TOS Byte

Another widely-deployed protocol undergoing some dramatic changes is IP. Unlike Ethernet, however, IP already provides lots of mechanisms for extending the basic functionality of the protocol without breaking backwards compatibility. You can add new functionality fairly easily, without having to jump through hoops to do it.

The best-known method for this is the use of version numbers, with IPv4 being a distinct entity separate and apart from IPv6. Another mechanism available within IPv4 is the use of up to 40 bytes of "options" in the packet's header, allowing new extensions to be added without affecting the rest of the packet. Meanwhile, IPv6 offers even more flexibility, providing support for "extension headers" that let the packet contain as many "options" as needed, without constraining them to the 40 bytes of excess header space provided in IPv4.
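
A short sketch shows how visible these extension points are to any parser: the version number sits in the first four bits of either header, IPv4's header-length field reveals whether options are present, and IPv6's "next header" byte chains to the extension headers (Python, with fabricated example packets):

    def describe_ip_header(packet):
        version = packet[0] >> 4
        if version == 4:
            ihl = packet[0] & 0x0F      # header length in 32-bit words
            options = (ihl - 5) * 4     # 0 to 40 bytes of options
            return "IPv4 with %d bytes of options" % options
        if version == 6:
            return "IPv6, next header %d" % packet[6]
        return "unknown version %d" % version

    print(describe_ip_header(bytes([0x45]) + bytes(19)))  # IPv4, no options
    print(describe_ip_header(bytes([0x60]) + bytes(39)))  # IPv6, next header 0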

Unfortunately, these compatibility services aren't always utilized. One example of this is the work being done by the Differentiated Services working group (diffserv), which is changing the defined usage of the IPv4 Type-of-Service (TOS) field, rather than using one of these extension mechanisms. Worse, nothing is being done to indicate that the IP header is being changed, providing no way for equipment to discern that a new header design is in use.

The diffserv working group leaders say that this will not be much of a problem, since the TOS byte has historically gone unused. While this was true at one time, there is a growing level of support for the TOS byte among the vendor and user communities, with many new products (and end-user networks) leveraging it fully. What we are seeing here is a repeat of the H.323 scenario, where people who need IP prioritization services today are turning to TOS, simply because it is the only working standard available now. By the time diffserv is widely deployed in routers and end-nodes alike, TOS will be the overwhelmingly predominant prioritization scheme in use, making support for it essential and inescapable. Co-existence is going to be a very big issue in three years' time.
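
The crux of the co-existence problem is that the very same byte decodes to two entirely different meanings depending on which spec you believe, as this sketch illustrates (using the RFC 791 bit layout and the 6-bit codepoint layout from the diffserv proposals):

    def read_as_rfc791(tos):
        """Interpret the byte per RFC 791: precedence plus TOS flags."""
        precedence = tos >> 5             # 0 (routine) to 7 (net control)
        low_delay = bool(tos & 0x10)
        high_throughput = bool(tos & 0x08)
        high_reliability = bool(tos & 0x04)
        return precedence, low_delay, high_throughput, high_reliability

    def read_as_diffserv(tos):
        """Interpret the same byte as a 6-bit diffserv codepoint."""
        return tos >> 2

    byte = 0xB8
    print(read_as_rfc791(byte))    # (5, True, True, False)
    print(read_as_diffserv(byte))  # 46 -- an unrelated meaning entirely

Nothing in the packet says which reading is the right one.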

And remember the interoperability problems with 802.1Q? Unless things change, we are going to see a host of interoperability problems that could have been avoided through some sort of indication that the packet wasn't RFC 791-compliant. Since diffserv devices will be replacing the contents of the TOS byte without marking the packets as being any different, there will be no way for protocol analyzers, routers and end-systems to tell what the packets contain. Maybe those urgent packets contain TOS data set by the originating system, or maybe they contain diffserv data set by an intermediate network. Who knows? At least with Ethernet we were given some sort of indicator as to the type of frame in use, but with diffserv we're left to guess that maybe something happened somewhere along the way. For organizations trying to use the Internet for WAN services, the lack of some sort of indicator is going to be simply unacceptable.

What's most interesting here is that there are so many choices for incorporating changes into IP without breaking the current specs, yet these services are being ignored. One such method for preserving interoperability would be to place the diffserv data into an IPv4 option or an IPv6 extension header. Another method would be to flip the last bit of the TOS byte (currently unused), so that newer equipment could tell which kind of packet it was looking at. Either of these would be a satisfactory solution, providing backwards compatibility while allowing us to move forward with necessary improvements. What is unacceptable is making changes to the header without providing some sort of clue that the packet is no longer compliant with the spec.
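
As a sketch of how cheap that last-bit indicator would be (hypothetical code for a hypothetical scheme, not any standard):

    DIFFSERV_FLAG = 0x01  # the currently unused low-order bit

    def mark_as_diffserv(tos):
        """Relabel a TOS byte as carrying diffserv data."""
        return tos | DIFFSERV_FLAG

    def is_diffserv(tos):
        return bool(tos & DIFFSERV_FLAG)

    print(is_diffserv(mark_as_diffserv(0xB8)))  # True  -- relabeled
    print(is_diffserv(0xB8))                    # False -- RFC 791 TOS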

Standards Never Die

Once a standard gains footing, it's here to stay. You can try your best to replace it with alternatives (a la IMAP and IPv6), or you can pull rabbits out of a hat to smooth the transition as much as possible (a la 802.1Q), but it'll be years before the older stuff goes away entirely. If you go back and look at the state-of-the-art equipment from twenty years ago—whether this be protocols or minicomputers or wiring rigs or anything else—I guarantee that you'll find somebody somewhere that's still using it today (such as the FAA).

This is why it's so important to plan for future enhancements in protocol design, and why it's even more important to utilize these services when a protocol is modified. Although it is feasible for somebody in a position of authority to mandate that everyone must upgrade by a certain date if they want to be supported by the corporate help desk, this kind of dictatorial approach just doesn't work outside of a single organization. In particular, it never flies with stuff like IPv4, where there are millions of nodes around the world, many of which won't be upgraded for years to come.

The good news here is that most of the more recent protocols are being designed with future expansion in mind. IMAP and LDAP are two such examples, providing mechanisms whereby clients and servers can negotiate the specific versions and features that they want to use, rather than forcing everything to be at the same revision (like POP2 and POP3, which require different port numbers to ensure compatibility). These forward-looking designs should make transitioning to the third- and fourth-generation protocols much smoother than the current migrations have been.
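
As an illustration of that negotiation style, here is a brief sketch using Python's standard imaplib module: the client inspects the server's advertised capabilities before relying on any particular feature (the hostname is a placeholder):

    import imaplib

    conn = imaplib.IMAP4("imap.example.com")  # placeholder server
    print(conn.capabilities)                  # e.g. ('IMAP4REV1', 'IDLE', ...)
    if "IMAP4REV1" in conn.capabilities:
        print("server speaks IMAP4rev1")
    conn.logout()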

-- 30 --
Copyright © 2010-2017 Eric A. Hall.
---------------------------------------------