NOTE: This article is an archived copy for portfolio purposes only, and may refer to obsolete products or technologies. Old articles are not maintained for continued relevance and accuracy.
July 13, 1998

Internet Data

Although poets and idealists may disagree, the future of the Internet is already written: the high-tech industry is pushing it toward becoming a platform designed for and dominated by business-to-business information sharing.

In this model, the Internet is effectively a large public WAN, with organizations interconnecting their business systems into indistinguishable, cooperative facilities on an ad hoc basis, as their current missions require. When Company X goes into partnership with Corporation Z, the two firms won't just swap phone numbers; they will exchange user accounts and passwords, opening their systems to their new partners on a totally unprecedented scale.

This kind of thing has been going on for years, of course, but only for those organizations with the money and expertise to build and manage their own private WANs (automobile dealer networks are a good example). But with the Internet, these barriers to entry have been significantly reduced. The cost of building a virtual WAN over the Internet is considerably less than it was when you had to buy your own copper, and the number of administrators with the required expertise has also increased substantially, allowing these kinds of cooperative computing agreements to increase dramatically.

So in the future, we can expect to see a lot more small organizations taking advantage of these "virtual company" capabilities, building dynamic networks with highly-distributed access and authority. One example might be a small hardware vendor who provides tools for their resellers to enter sales and shipping orders directly into the vendor's databases, allowing for reduced time-to-ship windows. At the other end of the opportunity range, that same hardware vendor might open their systems to their outsourcing providers (such as accounting and other service firms), who would work on the local systems directly rather than batching data back and forth.

These scenarios illustrate problems that Internet technologies have already resolved, bringing this vision closer to reality. There's no need to ask which network protocol will be used; IP's pervasiveness makes the question moot. Similarly, HTTP and HTML have proven capable of distributing access to functionally-rich applications, allowing them to be exposed without much worry. As for security, there are many options to choose from there as well.

Wot? No Data?

In fact, there's only one component missing from this picture, although its absence is so significant that it threatens the entire vision. The missing piece is a vendor-independent, database-specific, application-layer protocol that would allow users at Company X to access the databases on Corporation Z's network, regardless of the database system in use at either location.

In my opinion, this hole is the most significant hurdle to implementing true inter-organizational communications. Oh sure, companies can open up their internal web-based applications, but that kind of environment requires duplicate data entry or batch transfers, neither of which provides much in the way of efficiency.

In order to truly reduce the time-to-ship window, a reseller from the example above must be able to integrate the hardware vendor's remote database into their own local applications, and they must be able to do so seamlessly. And in order for the vendor to minimize their time-to-payment window, they have to be able to incorporate EDI in the same manner. In other words, if the small players are to take advantage of the newly-levelled playing field, they have to be able to incorporate the same database-sharing technologies as their larger competitors.

Without these capabilities, the smaller firms will simply not be able to compete with the big boys and their WANs. This will, in turn, keep them from adopting Internet technologies as a competitive tool. If we are all to benefit from the lowered barriers to entry that Internet technologies provide, then we must collectively work to resolve this fundamental issue. That means developing an IETF-sanctioned application-layer standard for vendor-independent database connectivity.

Such a standard would allow for many things. At one end, it would let remote users access multiple remote databases without having to load vendor-specific agents or write vendor-specific application code. At the other end, it would allow organizations to interconnect their back-end servers directly, with updates and joins applied to all of them simultaneously. The latter example is the most compelling, I think, because it allows for true partnership at the corporate level.
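To make the first of those two capabilities concrete, here's a small sketch (in modern terms, and entirely hypothetical) of what vendor independence looks like from the client side: application code written once against a uniform interface, with Python's sqlite3 module standing in for any vendor's engine. The `run_query` helper and the table contents are invented for illustration.

```python
import sqlite3

def run_query(conn, sql, params=()):
    """Run a statement through any DB-API-style connection.

    The point: the application is written once, against a uniform
    interface, regardless of which vendor's engine sits behind `conn`.
    """
    cur = conn.cursor()
    cur.execute(sql, params)
    if cur.description is None:      # INSERT/UPDATE/DELETE/DDL: no result set
        conn.commit()
        return cur.rowcount
    return cur.fetchall()            # SELECT: rows back in a uniform shape

# sqlite3 stands in here for any engine; a real deployment would open
# connections to each partner's remote database instead.
local = sqlite3.connect(":memory:")
run_query(local, "CREATE TABLE orders (sku TEXT, qty INTEGER)")
run_query(local, "INSERT INTO orders VALUES (?, ?)", ("HW-100", 5))
rows = run_query(local, "SELECT sku, qty FROM orders")
```

The same `run_query` calls would work unchanged against any engine exposing that interface, which is exactly the property the wire protocol would need to guarantee across vendors.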

Current Solutions

But before we run off half-cocked and start writing a new Internet Data Protocol, we need to examine how some of the current implementations work (or don't work), and the issues that they present. In general, there are four technologies of interest here, although it is my belief that none of them is appropriate as the foundation for a new protocol:

After looking at these existing technologies, we can gain a pretty clear understanding of the features that an Internet Data Protocol would need to provide. It would need to offer a consistent implementation (TCP port number, character set, etc.), and a consistent set of verbs (such as INSERT, UPDATE and DELETE).

In fact, I would think that the first version of such a protocol could be something as simple as a listener that did nothing more than accept standard SQL input, returning data and errors in a predefined form. Such an implementation would likely be a five-page RFC: "support SQL 92 commands and object types on TCP port XX, returning data and errors using this format."
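To show how little machinery that first version implies, here is a sketch of such a listener, written in Python with sqlite3 standing in for the vendor's engine. The line-oriented "OK/ERR" reply format and the OS-assigned port are invented stand-ins for whatever the RFC would actually specify.

```python
import socket
import sqlite3
import threading

def serve_one_client(server_sock):
    """Accept a single client and run each line it sends as SQL against
    an in-memory sqlite3 database (a stand-in for any vendor's engine).
    Replies use an invented fixed format: 'OK <count>' followed by the
    rows tab-separated, or 'ERR <message>' on failure."""
    db = sqlite3.connect(":memory:")
    conn, _ = server_sock.accept()
    with conn, conn.makefile("rw", encoding="utf-8", newline="\n") as f:
        for line in f:
            sql = line.strip()
            if not sql:
                break
            try:
                rows = db.execute(sql).fetchall()
                db.commit()
                f.write("OK %d\n" % len(rows))
                for row in rows:
                    f.write("\t".join(str(v) for v in row) + "\n")
            except sqlite3.Error as exc:
                f.write("ERR %s\n" % exc)
            f.flush()

server = socket.socket()
server.bind(("127.0.0.1", 0))      # port 0: OS picks; the RFC would fix one
server.listen(1)
threading.Thread(target=serve_one_client, args=(server,), daemon=True).start()

# The client side needs nothing vendor-specific: plain SQL text over TCP.
client = socket.create_connection(server.getsockname())
cf = client.makefile("rw", encoding="utf-8", newline="\n")
for stmt in ("CREATE TABLE t (x INTEGER)", "INSERT INTO t VALUES (42)"):
    cf.write(stmt + "\n")
    cf.flush()
    status = cf.readline().strip()   # statements with no result set
cf.write("SELECT x FROM t\n")
cf.flush()
header = cf.readline().strip()
value = cf.readline().strip()
client.close()
```

Note that the client carries no driver, agent, or vendor library at all, which is the whole argument: the complexity lives behind the listener, not in front of it.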

Future versions would of course need to add support for arbitration, feature negotiation, version control, asynchronous operations and vendor-specific extensions. But the first version could be very, very simple. Simple enough to help drive the adoption of business-to-business information sharing in the small- and medium-enterprise markets, anyway.
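As a rough illustration of what the feature-negotiation piece might look like in a later version (the capability names and the intersection rule here are pure invention on my part):

```python
# Capabilities this hypothetical server advertises; names are invented.
SERVER_FEATURES = {"sql92", "async", "vendor-ext:acme"}

def negotiate(client_features):
    """Hypothetical capability negotiation: each side advertises what it
    supports, and the session proceeds with the intersection, provided
    the mandatory version-one baseline (SQL 92) is present."""
    agreed = SERVER_FEATURES & set(client_features)
    if "sql92" not in agreed:
        raise ValueError("no common baseline; cannot proceed")
    return sorted(agreed)

# A client that also speaks TLS, which this server doesn't advertise:
session_features = negotiate(["sql92", "async", "tls"])
```

The mandatory baseline is what keeps version-one implementations interoperable even after vendors begin layering extensions on top.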

Making It Happen

Writing new standards isn't easy, at least not for folks like me who have no real pull in the market. In order for this kind of thing to really fly, a variety of things need to happen:

The good news here is that I've already had some preliminary talks with the Area Directors of the IETF's Applications Area, and have received very positive responses so far. Taking this to the next step means finding somebody with influence to chair a working group and bring version one of the protocol together quickly.

As stated, I believe that this person should come from the vendor community. There are people on this newsletter's mailing list at Oracle, Sybase and Solid Technologies, among others, any one of whom I would think could lead this effort easily. If you are interested, please contact me and we'll get the ball rolling. I, as a user, am most anxious to see this technology come to life quickly.

-- 30 --
Copyright © 2010-2017 Eric A. Hall.