
William Heinzman is a software engineer at JNBridge (www.jnbridge.com). He has more than 20 years of experience developing manufacturing test, distributed job scheduling and IT software products. He currently develops interoperability solutions between J2EE and .NET. He has a degree in Geophysics and a Master of Engineering in Computer Science from the University of Colorado.

Why Interoperability Must Succeed for the Cloud to Survive

02.26.2011
During the past two years, not a week has gone by without several press releases, blogs, editorials or articles about cloud interoperability crossing my electronic desk, enough to qualify the term as an official buzzword. The problem is that ‘interoperability’ has several meanings, which is not surprising given that the term usually requires an asterisk next to it. It’s like Barry Bonds’ single-season home run record: it exists, but it’s not very satisfying, and someday somebody will do it right. While what’s “right” may be debatable, what’s certain is that the cloud represents an opportunity to finally achieve true interoperability. The reason is simple: the success of the cloud, its very business model, depends on providing interoperability that is secure, transactional and that fully automates the construction of the consuming side.

The implication is that current interoperability mechanisms, like Web Services, are essentially vendor-specific implementations that enable creating and integrating distributed solutions within the vendor’s product portfolio. Web Services may be based on agreed-upon standards, but those standards really only describe enabling mechanisms without requiring that everybody implement everything exactly the same way, right down to the least significant bit. The important thing to understand is that interoperability is a double-edged sword for the business model of enterprise platform vendors. How do you assure the customer that a heterogeneous enterprise can be built while still protecting your turf? It’s not for nothing that the standards have no teeth; after all, they’re written by committees of competitors.
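A toy illustration of the latitude those standards allow: XML Schema’s xs:boolean permits both "true" and "1" as valid encodings of the same value, so a client hard-coded to one vendor’s habit rejects another vendor’s equally standards-compliant output. A minimal sketch in Python (the element name and namespace are hypothetical):

```python
import xml.etree.ElementTree as ET

# Two standards-compliant encodings of the same payload: the xs:boolean
# lexical space is {"true", "false", "1", "0"}.
vendor_a = '<ack xmlns="urn:example">true</ack>'
vendor_b = '<ack xmlns="urn:example">1</ack>'

def naive_parse(doc):
    # A client pattern-matched to one vendor's output breaks on the other's.
    return ET.fromstring(doc).text == "true"

def conforming_parse(doc):
    # Accepts every lexical form the schema actually permits.
    return ET.fromstring(doc).text in ("true", "1")

print(naive_parse(vendor_a), naive_parse(vendor_b))            # True False
print(conforming_parse(vendor_a), conforming_parse(vendor_b))  # True True
```

Both documents are "standard", yet only the defensively written client handles both, which is exactly the wiggle room the committees left in.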

The premise that Web Service standards deliver interoperability between vendor implementations is belied by the gulf that separates promised expectation and actual experience. The promised expectation is that you can use any enterprise platform vendor’s development environment, choose New->Project->Consume Web Service, point to any Web Service implemented on any other enterprise vendor’s server and, automatically, out of the box and within minutes, generate the consuming endpoint and execute a remote call. This means no searching of blogs and forums, no posting appeals for help, and no hand-tooling code at some lower level in the stack. Sorry, interoperability fans, but when I take the latest GlassFish platform from Oracle, including the Metro WSIT stack, point its Eclipse-based development environment at the WSDL of the simplest .NET WCF “Hello World” Web Service and watch it throw an exception, not only are my expectations not met, but my productivity has decreased while my time-to-deployment has increased (not to mention my frustration level). I suspect more than a few managers and sales reps at both Oracle and Microsoft are perfectly fine with this result; after all, it’s not a bad business model to make interoperability easy among your own enterprise products and much harder with your competitors’ enterprise products. That’s precisely why the cloud has the opportunity to finally deliver universal interoperability: its business model must be fundamentally different from that of the enterprise platform vendor because its source of revenue is different.
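When the generated client throws, "hand-tooling code at some lower level in the stack" usually means building the SOAP envelope and parsing the response by hand. A self-contained sketch of that fallback, in Python with only the standard library (the namespace, operation and stub service are all hypothetical; a local stub stands in for the remote WCF endpoint):

```python
import http.server
import threading
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical namespace and operation; a real service defines its own in WSDL.
NS = "http://example.com/hello"

SOAP_RESPONSE = f"""<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Body>
    <GreetResponse xmlns="{NS}"><GreetResult>Hello, world</GreetResult></GreetResponse>
  </s:Body>
</s:Envelope>"""

class StubService(http.server.BaseHTTPRequestHandler):
    # Stand-in for the remote "Hello World" Web Service.
    def do_POST(self):
        self.rfile.read(int(self.headers["Content-Length"]))  # consume request
        body = SOAP_RESPONSE.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/xml; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

def call_greet(url):
    # The hand-tooled client: craft the envelope as a string, POST it,
    # and dig the result out of the response XML yourself.
    envelope = f"""<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Body><Greet xmlns="{NS}"><name>world</name></Greet></s:Body>
</s:Envelope>"""
    req = urllib.request.Request(
        url, data=envelope.encode(),
        headers={"Content-Type": "text/xml; charset=utf-8"})
    with urllib.request.urlopen(req) as resp:
        tree = ET.fromstring(resp.read())
    return tree.findtext(f".//{{{NS}}}GreetResult")

server = http.server.HTTPServer(("127.0.0.1", 0), StubService)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = call_greet(f"http://127.0.0.1:{server.server_port}/hello")
server.shutdown()
print(result)  # prints: Hello, world
```

This is precisely the productivity loss the promise was supposed to eliminate: every envelope detail that tooling should have generated becomes the developer’s problem.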

The cloud is about services, not software sales. Cloud service revenues are based on the clock, bandwidth consumed, megabytes used and users connected. Cloud vendors, particularly those providing infrastructure and platforms as a service, need to worry about maximizing their ability to securely host and integrate as many different platforms, technologies and languages as possible; they simply can’t worry about whether it’s their stuff or their competitors’. Simply put, there can’t be any protectionism in the cloud, because interoperability is critical to the financial success of cloud offerings. Hence the constant stream of links to cloud interoperability blogs, articles and PR copy that has been popping up on my displays these past two years.

The current cloud vendors seem to actually get it; however, there’s a small problem: most of what passes for cloud interoperability is between platform-specific code (the stuff running in the cloud written using a particular platform) and the cloud API, resources and storage services. From a business perspective, Amazon’s IaaS cloud must provide Microsoft Windows Server environments, and it therefore offers its EC2 API, resources and services in both Java and .NET. However, the .NET EC2 API isn’t going to enable better interoperability between a WCF Web Service running in the Amazon cloud and Oracle’s GlassFish running on the ground. The cloud APIs, resources and services do not enable interoperability between different platform-specific applications, regardless of where they’re running. Microsoft’s PaaS Azure cloud can host a JRE, but that’s mostly because current JVMs support the Windows API. While Microsoft provides a Java API, it only supports those Azure and AppFabric resources and services that use underlying Web Services; Microsoft doesn’t support a Java API for Azure Cloud Drives, for example. However, that’s moot, because the Java Azure and AppFabric APIs aren’t going to provide better interoperability between Web Services hosted by a JEE app server running in an Azure VM and a WCF client on the ground.

Why is this important? As long as the cloud is perceived as merely a place to host web sites, the current cross-platform cloud APIs will probably suffice, as the only required interoperability is with a browser. A company that moves its website to the cloud may lower operating costs, and the cloud vendor will realize revenue, but this is only half of the potential of the cloud. The real promise of the cloud is enabling companies to build their own SaaS solutions and expose service APIs to any customer. A company can expose its own secure, scalable cloud service and generate a revenue stream with minimal operating, development and sales costs. Everybody’s happy. The end customer receives value and can consume the services on the platform of its choice without the operating costs associated with traditional software licensing. The company providing the service and the cloud vendor can both enjoy an additional revenue stream. Sounds great, except that the cloud only provides interoperability internally, between the hosted code and its own APIs, resources and storage services. Interoperability between the service running in the cloud and the end customer is still based on existing platform-specific Web Services, complete with the built-in protectionism. If the customer cannot quickly and easily consume a cloud service within its preferred platform, the entire value proposition fails and nobody’s happy.

What’s the solution? Simple: the industry needs to provide interoperability based on the cloud business model. Interoperability must be a fundamental capability that resides in the cloud rather than just a mechanism that enables remote procedure calls. From the early days of RPC/IDL through to the present WSIT, interoperability has been about providing a mechanism for integrating distributed applications that drives software sales, not about providing a service. The very concept of the cloud connotes a level of abstraction, availability and automation that moves beyond the RPC mechanism of Web Services. The cloud’s motto should be “any object, on any platform, in any language, anywhere, at any time”. In other words, I think the cloud requires a common object paradigm that provides fine-grained interoperability, complete with asynchronous events, automatic discovery, automatic instantiation of the object on the consuming side, and simple, automatic pay-per-use. Of course, this isn’t a new idea; in the past, industry-wide consortia like the OMG have tried to solve the problem, but they have always been constrained by the protectionism inherent in the software sales business model.
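To make the shape of that common object paradigm concrete, here is a deliberately toy sketch in Python: a client-side proxy that discovers a named object, forwards arbitrary method calls without per-service code generation, and counts calls where pay-per-use metering would hook in. Everything here is hypothetical; a real system would marshal the calls across the network and across platforms.

```python
# Toy sketch of "any object, on any platform": the consuming side needs no
# generated stubs, only a name to discover and a proxy to forward through.

class CloudObjectProxy:
    def __init__(self, registry, name):
        # Stand-in for automatic discovery and instantiation on the
        # consuming side; a real registry would live in the cloud.
        self._impl = registry[name]
        self.calls = 0  # hook where pay-per-use metering would go

    def __getattr__(self, method):
        def invoke(*args, **kwargs):
            self.calls += 1  # each invocation is billable in this model
            return getattr(self._impl, method)(*args, **kwargs)
        return invoke

# A service object "in the cloud" (here, just an in-process registry).
class Greeter:
    def greet(self, name):
        return "Hello, " + name

registry = {"Greeter": Greeter()}
proxy = CloudObjectProxy(registry, "Greeter")
print(proxy.greet("world"), proxy.calls)  # prints: Hello, world 1
```

The point of the sketch is the shape, not the plumbing: the consumer names an object and calls methods on it, while discovery, instantiation and metering happen behind the proxy.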

The cloud will not be successful without true interoperability. For the end customer, the cloud must provide a level of abstraction and ease of use that enables automatic, platform-agnostic consumption of services with minimal support and development overhead, not to mention at lower overall cost based on pay-per-use. For the company providing the services, the cloud must embrace platform-agnostic interoperability that’s easy to implement, in addition to being scalable, secure and globally accessible. For the cloud vendor, true interoperability removes barriers to adoption, moving more third-party software into the cloud and increasing volume and revenue. If there’s “some assembly required” by the end customer, or endless workarounds, or only certain platforms are supported in the cloud or on the ground, the business opportunity among end customer, service provider and cloud vendor will fail. It’s time to remove the obstacles that permeate current interoperability implementations, obstacles based on an obsolete business model that is anathema to the success of the cloud. If ‘interoperability’ continues to sport an asterisk, then so will the cloud, and that will be the end of a pretty cool thing.
Published at DZone with permission of its author, Bill Heinzman.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)