21st Century TCP/IP
The Internet's success as a global networking platform is due partly to its maturity and stability: TCP/IP is ubiquitous and well-understood, making it a solid foundation for new networked applications. But stability doesn't mean stagnation. Internet technology continues to evolve, with new protocols under development and existing technologies being continuously tweaked to deal with the demands of tomorrow's applications.
Innovation can be seen throughout the TCP/IP stack, often as a result of improved hardware capability and emerging market opportunities. For example, the price of gigabit networking technology has dropped to the point where it's routinely installed on new workstations, while powerful handheld computers and pervasive wireless networks allow networked applications to move beyond the LAN. VoIP and iSCSI can require changes to the underlying network, as telephony and storage have very different requirements than traditional data traffic. And the broad adoption of Internet technologies, both within organizations and to the furthest reaches of the planet, is forcing end users to confront issues such as internationalization, security, and spam.
However, end users have relatively little input into the workings of the IETF, the standards body responsible for the core Internet protocols. Although the IETF is transparent and encourages participation by individuals, standards are driven by vendors, and a standard must be adopted across multiple product lines before the masses can use it successfully. The most important standards over the next two years will be those that are supported by the IETF, implemented by vendors, and ultimately adopted by users.
There are currently 130 active working groups within the IETF, cumulatively developing hundreds of standards-track specifications. Alongside these are thousands of independent proposals submitted by individuals, vendors, and consortiums. The ones receiving the most attention are those that seek to solve the Transport-layer problems caused by real-time applications and high-speed networks, and those that handle security, naming services, or collaboration.
A successful standard involves more than just an RFC that solves an important problem. It requires significant commitment of intellectual, political, and monetary capital, and the only parties that can justify this are vendors that believe that the standard will help them gain a competitive advantage.
Every user has a wish list of technologies that the IETF should be developing, such as platform-independent backup protocols, distributed database access standards, and even trivial services such as network gaming. But vendors are unwilling to devote sufficient resources to problems that don't have a perceived impact on their bottom lines.
At the same time, some standards have been widely adopted by the vendor community but subsequently rejected by end users, either because they didn't satisfy a real need or because another technology provided a better solution. In IETF politics, customers are the executive branch, exercising veto power but little more.
IPv6: Addressing a Problem?
IPv6 illustrates the gap between engineering and deployment. The standard itself is mostly complete, and enough products and services are available for it to be deployed by almost any organization. However, IPv4 still satisfies most networking demands, so very few organizations have any real need for IPv6.
IPv4's continued popularity has meant that IPv6 support lags in some important areas. It wasn't until July of this year that the Internet Corporation for Assigned Names and Numbers (ICANN) announced support for IPv6 queries to the DNS root servers. The result has been fewer products to choose from and more difficult administration. Living the IPv6 lifestyle carries a much higher cost than sticking with IPv4.
IPv6's main selling point has been rendered largely moot by external events. It was designed in part to ease a predicted shortage of IPv4 address space, but that crisis was averted by the use of NAT and stricter delegation policies. Without a shortage of IPv4 addresses, IPv6 has no single compelling benefit that can offset its higher costs.
Vendors are still working on the technology, but they aren't hyping it, nor have they switched over to it on their internal networks. Cisco Systems, Microsoft, and HP all like to demonstrate technology leadership, but even they can't justify the switch.
IPv6 will eventually reach critical mass, but it may take a decade or more. New networks and Internet-enabled devices are accelerating the increase in demand for IP addresses, which will put further pressure on IPv4.
If ISPs and small-office/home-office appliances started to support IPv6 seamlessly, even making it the default, its adoption would be certain. However, such a switchover would also require seamless interoperation with legacy IPv4 traffic. As things currently stand, there's no compelling reason to adopt the technology.
Transport Protocols
Of all the major technology areas under the IETF's purview, the transport protocols are under the most duress, squeezed by both next-generation network topologies and emerging applications. As a result, some existing transport protocols are undergoing radical reconstructive surgery, while others are forking into optimized variants that each attempt to address a specific problem.
For example, the Stream Control Transmission Protocol (SCTP) was originally introduced to provide a reliable transport for Signaling System 7 (SS7) telephony switching networks, but has since found uses in many other applications. It was necessary because SS7 requires support for framed messages and multihomed endpoints that can survive the loss of a specific path or network address, neither of which TCP can provide. (TCP doesn't preserve Application-layer message boundaries, and TCP sessions must be torn down when an endpoint address becomes unreachable.)
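The framing difference is easiest to see at the sockets layer. The sketch below is a minimal illustration, assuming a Linux kernel with SCTP support and a Python build that exposes IPPROTO_SCTP; the loopback address and port are arbitrary, and multihoming (which needs a dedicated SCTP library for calls such as sctp_bindx) isn't shown.

```python
import socket

# One-to-one style SCTP sockets via the standard library; IPPROTO_SCTP is only
# present where the platform supports it. Port 9999 is arbitrary for this demo.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
server.bind(("127.0.0.1", 9999))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
client.connect(("127.0.0.1", 9999))
conn, _ = server.accept()

# Each send() becomes one SCTP DATA message, so the receiver sees the same
# framing -- with TCP, these two sends could just as easily arrive as one blob.
client.send(b"first message")
client.send(b"second message")
print(conn.recv(64))   # b'first message'
print(conn.recv(64))   # b'second message'

for s in (conn, client, server):
    s.close()
```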
SCTP has since been adopted as a transport for the Session Initiation Protocol (SIP), sometimes seen as a replacement for SS7. It's also being considered for iSCSI because, like SS7, the technology requires a transport that can manage multiple sessions of framed messages.
In addition to repurposing existing protocols, the IETF is also designing new ones. The Datagram Congestion Control Protocol (DCCP) is intended as a congestion-aware replacement for UDP, adding mechanisms that let the transport throttle back its data rate whenever congestion is detected. UDP, by contrast, simply keeps sending packets at whatever rate the application chooses, making congestion worse.
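As a rough illustration, the fragment below opens a DCCP socket on Linux. Python's socket module doesn't define DCCP constants, so the kernel's values are spelled out by hand; the receiver address and service code are placeholders, and the whole thing assumes a kernel built with DCCP support.

```python
import socket
import struct

# Linux constants for DCCP; Python's socket module doesn't define these.
SOCK_DCCP = 6
IPPROTO_DCCP = 33
SOL_DCCP = 269
DCCP_SOCKOPT_SERVICE = 2

sock = socket.socket(socket.AF_INET, SOCK_DCCP, IPPROTO_DCCP)
# DCCP connections carry a service code alongside the port number.
sock.setsockopt(SOL_DCCP, DCCP_SOCKOPT_SERVICE, struct.pack("!I", 42))
sock.connect(("192.0.2.10", 5001))  # hypothetical receiver

# Datagrams remain unreliable, but the kernel's congestion controller paces
# them and backs off when loss signals congestion -- the behavior UDP lacks.
sock.send(b"one media frame")
```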
SCTP and DCCP both represent important lines of work, but their widespread use will eventually depend on applications having access to them. This in turn requires that they be supported by off-the-shelf OSs and APIs. There's no indication that this will happen anytime soon, as the problems that SCTP and DCCP solve aren't yet widespread enough to prompt OS developers to include them.
One problem with TCP that's received a lot of attention is its Additive Increase, Multiplicative Decrease (AIMD) algorithm, which governs the data rate of a TCP connection: The sending machine slowly increases the amount of network bandwidth it consumes, then cuts its utilization in half whenever any kind of packet loss is detected.
The algorithm helps to ensure fairness on networks in which packet loss is a result of congestion, but overly conservative gearing can make startup times and network-related errors painful. This is particularly true with burst-oriented applications such as iSCSI, where the endpoints may be quiet for extended periods of time, but suddenly need all the bandwidth at once when activated.
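A toy model makes the gearing concrete. The sketch below is purely illustrative (slow start and fast recovery are omitted): the congestion window grows by one segment per round trip and is halved whenever a loss occurs.

```python
def aimd(rounds, loss_rounds, cwnd=1.0):
    """Toy AIMD model: returns the congestion window (in segments) after each RTT."""
    history = []
    for rtt in range(rounds):
        if rtt in loss_rounds:
            cwnd = max(cwnd / 2.0, 1.0)   # multiplicative decrease on packet loss
        else:
            cwnd += 1.0                   # additive increase: one segment per RTT
        history.append(cwnd)
    return history

# A single loss at round 10 halves the window, and clawing the bandwidth back
# takes one round trip per segment. On a 100 ms RTT path, recovering from a
# loss at a 64-segment window takes 32 round trips, or more than three seconds.
print(aimd(rounds=20, loss_rounds={10}))
```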
Several organizations are working on technologies that promise to improve TCP's architecture, though most of these are still experimental and unusable with common hardware and software. Some problems can't be easily solved and may require new protocols or forks in TCP. The IETF hasn't yet begun any formal process for identifying and evaluating these, so widespread support for TCP fixes is probably a decade away. We'll probably need them sooner than that, so proprietary protocols may be the only short-term solution.
Decoding Security
Internet security receives a lot of attention from users and developers alike. But while a lot of energy is expended, there's little visible movement. Although some of the core technologies have been in development for several years, they aren't seeing broad-based implementation.
For example, some portions of the IPSec technology family are still in active development, even though it's already widely deployed. In particular, the Internet Key Exchange (IKE) protocol, used to negotiate keys and security associations, is currently being rebuilt with an eye toward simplification and easier deployment. Rather than wait, large parts of the encryption market have decided to deploy dynamic VPNs based on Transport Layer Security (TLS) or Secure Shell (SSH). Because these encrypt traffic for a single Transport-layer connection rather than every packet leaving a node, they're simpler to build and operate than IPSec.
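The contrast with IPSec shows up in how little the application has to do, and in how narrow the protection is. The following minimal TLS client, built on Python's standard ssl module and pointed at a placeholder host, encrypts just this one TCP connection rather than all traffic leaving the node:

```python
import socket
import ssl

# Standard-library TLS client; only this one connection is protected.
context = ssl.create_default_context()

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
        tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
        print(tls_sock.recv(200))
```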
Based on Netscape's SSL, TLS is already used in critical services such as HTTP and e-mail. TLS' ability to provide both VPN and application-specific encryption makes it particularly compelling. However, there are some holes in the TLS story, resulting in opportunities for competing technologies such as SSH.
In particular, TLS support for applications such as Telnet and FTP was slow in coming and still isn't available from most vendors in a standardized form. SSH fills this gap by providing encrypted terminal emulation and file transfer services in a way that's easy to deploy, and users have flocked to it accordingly. Like TLS, SSH began as a proprietary technology, but has now been adopted by the IETF, with standards-track proposals for both generalized encryption and point-to-point VPNs.
TLS and SSH provide similar functionality, but for different applications: TLS for the Web and e-mail, and SSH for terminal emulation and file transfer. Nobody wants to manage two different encryption systems forever, however. If the interests of the user base are to be satisfied, these technologies need to be made interoperable, a feat still several years away. Here, IPSec may have an advantage: Because it handles all IP traffic, it can be cheaper and easier than deploying multiple services.
Many security technologies rely on PKI, a technology that could be simplified by standardizing a way to locate and retrieve X.509 certificates across the Internet. The IETF's PKIX Working Group has been working on this for many years, but has made little progress toward an actual "infrastructure" that's widely accessible by Internet users.
The absence of a standard infrastructure has led some applications and services to define their own implementations. For example, Yahoo's DomainKeys anti-spam technology requires that application-specific public keys be stored in DNS. An Internet-wide PKI would require a globally distributed directory service, but this hasn't even been proposed yet.
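Retrieving one of these application-specific keys is an ordinary DNS lookup. The sketch below assumes the third-party dnspython package and a hypothetical selector and domain; DomainKeys publishes the key as a TXT record under <selector>._domainkey.<domain>.

```python
import dns.resolver  # third-party dnspython package

# "mail" and "example.com" are placeholders; a real lookup uses the selector
# and domain taken from the message's DomainKeys signature header.
answers = dns.resolver.resolve("mail._domainkey.example.com", "TXT")
for record in answers:
    print(record.to_text())   # e.g. "k=rsa; p=<base64-encoded public key>"
```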
Internet Naming Services
The Internet's growth has led to a lot of activity around naming services. These now go beyond DNS and its related services to cover secure and ad hoc networks.
The increased popularity of wireless devices allows the formation of dynamic TCP/IP networks. Devices need network names of some kind, but temporary ad hoc networks have no authoritative domain. The naming service developed for these environments is called Linklocal Multicast Name Resolution (LLMNR). Based on the same message format as DNS, it uses multicast lookups instead of the DNS hierarchy. It closely resembles the multicast DNS technique in the Rendezvous networking service that Apple uses in OS X, and it's expected to be standardized within the next year.
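Because LLMNR reuses the DNS message format, a query can be built by hand. This sketch, which assumes an IPv4 LAN and a made-up host name, sends a single A-record query to the LLMNR multicast group (224.0.0.252, UDP port 5355) and waits briefly for any on-link responder:

```python
import socket
import struct

def llmnr_query(hostname, timeout=2.0):
    """Send a single LLMNR A-record query and return the first raw reply, if any."""
    # LLMNR reuses the DNS message format: a 12-byte header, then one question.
    header = struct.pack("!HHHHHH", 0x1234, 0, 1, 0, 0, 0)  # ID, flags=0, QDCOUNT=1
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on-link
    sock.settimeout(timeout)
    # Unlike DNS, the query goes to the LLMNR multicast group, not a configured server.
    sock.sendto(header + question, ("224.0.0.252", 5355))
    try:
        return sock.recvfrom(512)  # (raw DNS-format reply, responder address)
    except socket.timeout:
        return None

print(llmnr_query("some-laptop"))  # "some-laptop" is a made-up on-link host name
```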
DNS itself has long been vulnerable to forgery: False domain name data can be inserted into a victim's resolver cache fairly easily, redirecting the victim to hosts under an attacker's control. The DNS Security (DNSSEC) protocol is intended to fix this, providing a chain of signed keys that can be used to verify a DNS record's authenticity. This technology is absolutely crucial, as DNS vulnerabilities are used to redirect traffic and to gather host and network information for later attacks.
However, DNSSEC has also been under active development since the early 1990s. A new set of RFCs is likely to be published within the next year, but nobody knows whether these will satisfy either users or domain registration bodies. Even after they're published, it will likely take another five years before OSs and applications fully support them.
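For a sense of what the chain of keys looks like in practice, the sketch below uses the third-party dnspython package (and its cryptography dependency) to fetch a signed A record plus the zone's DNSKEY RRset and check the signatures. The zone name and resolver address are placeholders, and the full chain of trust up to the root isn't walked here.

```python
import dns.dnssec, dns.message, dns.name, dns.query, dns.rdataclass, dns.rdatatype

zone = dns.name.from_text("example.com")   # placeholder zone
resolver = "8.8.8.8"                       # placeholder resolver address

# Ask for the A records and the zone's DNSKEY RRset, with the DO bit set so the
# server includes the RRSIG signature records.
a_resp = dns.query.udp(dns.message.make_query(zone, dns.rdatatype.A, want_dnssec=True), resolver, timeout=5)
key_resp = dns.query.udp(dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True), resolver, timeout=5)

a_rrset = a_resp.find_rrset(a_resp.answer, zone, dns.rdataclass.IN, dns.rdatatype.A)
a_rrsig = a_resp.find_rrset(a_resp.answer, zone, dns.rdataclass.IN, dns.rdatatype.RRSIG, dns.rdatatype.A)
keys = key_resp.find_rrset(key_resp.answer, zone, dns.rdataclass.IN, dns.rdatatype.DNSKEY)
keys_rrsig = key_resp.find_rrset(key_resp.answer, zone, dns.rdataclass.IN, dns.rdatatype.RRSIG, dns.rdatatype.DNSKEY)

# validate() raises an exception if a signature doesn't check out against the keys.
dns.dnssec.validate(keys, keys_rrsig, {zone: keys})
dns.dnssec.validate(a_rrset, a_rrsig, {zone: keys})
print("A record and DNSKEY signatures verify against the zone's published keys")
```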
The IETF is also modernizing the Whois service, currently vulnerable to spam address harvesters and often inaccurate. The new service, known as the Internet Registry Information Service (IRIS), will use XML to describe and reference domain name delegation information. XML should allow improved programmatic access to the delegation data, necessary for operational support devices such as anti-spam tools and log-file analyzers. The IRIS specification is nearly complete, but it will probably be a year or more before registries have actual IRIS servers up and running.
Collaboration Tools
Although the IETF is involved with numerous application protocols and services, some of the heaviest work is in the area of collaboration tools: e-mail, VoIP, and IM.
Internet e-mail is one of the IETF's most successful technologies, with most of the necessary work completed years ago. However, broader trends have exposed a need for improvement. In particular, a significant amount of effort is going into developing tools that can help fight spam (see "E-mail Authentication Via Sender ID," Technology Roadmap, page 50).
Most current anti-spam work is taking place within the IETF's research arm, not the engineering wing that produces the actual standards. One effort that has reached the standards side is MTA Authorization Records in DNS (MARID), which defines how a domain can publish its authorized senders. Although the effort is principally geared toward eliminating forgeries, it will also eliminate some percentage of spam and worm traffic, because many of those messages use forged sender addresses.
The hard truth is that the IETF alone can't fix the e-mail system. At best, it can standardize protocols for tasks such as transmitting spam-score data among filtering tools. Though some people dream of a replacement for SMTP itself, no technology can block spam while still allowing strangers to communicate with each other.
In the area of VoIP, SIP is emerging as a clear winner. OSs, applications, and phones are increasingly choosing it over the ITU's H.323 technology as a way to set up real-time communications sessions. However, there's still a significant amount of work going into SIP, particularly in the area of call-control services. Basic telephony features such as hold and transfer aren't yet standardized, meaning that most VoIP implementations are limited to a single vendor.
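A look at the wire format shows why SIP has been easy for vendors to adopt: session setup is plain text, much like HTTP. The sketch below hand-builds a SIP OPTIONS request (the hosts, tag, and Call-ID are placeholders) and sends it to a hypothetical proxy over UDP:

```python
import socket

# A hand-built SIP OPTIONS request; every identifier here is a placeholder.
request = (
    "OPTIONS sip:alice@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP client.example.net:5060;branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    "From: <sip:bob@example.net>;tag=1928301774\r\n"
    "To: <sip:alice@example.com>\r\n"
    "Call-ID: a84b4c76e66710@client.example.net\r\n"
    "CSeq: 1 OPTIONS\r\n"
    "Content-Length: 0\r\n\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(request.encode("ascii"), ("sip.example.com", 5060))  # hypothetical proxy
try:
    print(sock.recv(2048).decode("ascii", "replace"))  # expect a "SIP/2.0 200 OK"
except socket.timeout:
    print("no response")
```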
IM is the murkiest area of all. Most implementations are still proprietary, and the IETF has sanctioned two different technologies as possible standards. SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE) is designed to interoperate with SIP-based VoIP systems. This has earned it the support of the VoIP vendors, as well as IBM and Microsoft, making it the preferred choice for many internal corporate networks. On the wider Internet, the Extensible Messaging and Presence Protocol (XMPP) has a large installed base. Based on the open-source Jabber software, it's tailored specifically for IM. At this point, it's still too early to declare a winner between the two.
As the Internet expands, so does the effect of non-IETF standards bodies on IETF standards. For example, much VoIP work depends on ITU telephony specifications, and the practical technology needed to implement standards depends on the IEEE and vendor consortiums.
But the most important actors in the parade aren't vendors or standards bodies. They're the customers—the network managers who carry wallet-sized vetoes. Unfortunately, this arrangement, while providing balance, isn't necessarily efficient: Vendors have to spend more time and money on development, while users have to wait longer for deployable technology. To shorten the development cycle, customers need to move beyond a simple veto of vendors' actions and play an active role in the technology's development.