The Protocol [was: Syntax] is API Fallacy

Jonathan Borden jborden at
Thu May 6 20:56:22 BST 1999


    Although IDL started life as a way to specify RPC interfaces, more
recently it has become a way to specify interfaces in general (e.g. the DOM
which few intend to access via RPC). I don't mean to suggest that
'integration' or layering of IDL onto web protocols need actually mean
implementing any specific RPC protocol via XML (e.g. XML-RPC), rather, my
intention is to discuss a mechanism to integrate the abstraction of IDL onto
the data of XML or SGML.

    Let me try to rephrase this. The current popular style of object
specification describes an object as being composed of interfaces which
contain properties and methods (a property comprises a getX()/setX()
method pair). Methods are intended to specify actions. There are benefits to
describing systems in this fashion. When we talk about an API in an object
system, we are typically talking about a defined set of interfaces. In the
object world, the API is the primary specification: the idea is that if the
API is properly specified, all the other details will fall into place.
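
To make the "property as getX()/setX() pair" convention concrete, here is a
minimal sketch of how an IDL-style interface with one property might map into
code. The class and property names are invented for illustration:

```python
class Element:
    """A minimal interface with one property, modeled as a get/set method pair."""

    def __init__(self) -> None:
        self._tag_name = ""

    def get_tag_name(self) -> str:
        # The "getter" half of the property pair.
        return self._tag_name

    def set_tag_name(self, value: str) -> None:
        # The "setter" half of the property pair.
        self._tag_name = value


e = Element()
e.set_tag_name("title")
```

An IDL compiler typically generates exactly this kind of get/set pair from a
single `attribute` declaration, which is why the API can serve as the primary
specification in the object world-view.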

    The document world has a sharply different world-view, the primary focus
being data. When the data is properly specified (e.g. property sets and
groves), all the details fall into place.

    What I am working on are methods to integrate these two world views. I
find merit in the arguments on both sides. We need to rigorously define
interfaces between components, and we need to rigorously define data
formats.

>> David Brownell wrote:
>> >There's a significant issue with the quality of the linking.  RPC systems
>> >hide significant parts of the network, which need to be surfaced.  They
>> >don't expose faults well, or recovery mechanisms; they complicate
>> >style messaging patterns unduly; they bury encoding issues, and impose
>> >specific protocol-to-API mappings even in environments where they're not
>> >at all appropriate.

    When I say:

>>     This isn't the problem with RPC systems at all (including CORBA, Java
>> RMI, DCOM, DCE-RPC etc),
>Which of those six points are you referring to?  I assure you I've seen
>at least half of those problems in each "RPC" system you mention.

    Perhaps I worded this incorrectly. We could argue the utility of RPC for
years. I find the abstraction of the RPC a very powerful one, regardless of
any problems with specific implementations. I think the abstraction of a
distributed object or remote method call will become integrated with the web
rather than replaced by the web.

My main problem with specific implementations is that firewalls are killers,
even for firewall-enabled RPC systems, for these reasons:

1) Network address translation.
2) Buggy firewalls that choke on binary data but work fine with text.
3) Most importantly, sysadmins already open the HTTP and SMTP ports but get
very nervous about, or refuse, opening selected ports for RPC protocols.

These problems are compounded when you have complex networks of firewalls.
In the Boston medical environment, for example, each department in a
hospital may have its own firewall, which links to the hospital's network,
which in turn links to corporate and academic networks and internet
connections, each passing through different firewalls. E-mail seems to
always work, HTTP usually works, otherwise all bets are off.

    I do believe that the abstraction of the RPC is important enough that
problems with specific implementations (e.g. callbacks and faults), and
generally difficult things like recovery mechanisms (as opposed to detection
of transaction failure), are worth solving.

>> and certainly the current defacto web 'protocol'
>> namely a form and www-form-encoding or a CGI query string is hardly a
>> way for programs to communicate.
>For three years now, I've advised folk to use HTTP "POST" with request
>bodies that encode data in some useful form ... e.g. XML documents.

    Then we agree :-) Why are we arguing?
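
As a concrete sketch of the approach both sides endorse here (an HTTP POST
whose request body is an XML document), the following builds such a request.
The endpoint URL and payload are hypothetical, and the request is only
constructed, not sent:

```python
import urllib.request

# A hypothetical XML query document as the POST body.
xml_body = b"<?xml version='1.0'?><query><term>groves</term></query>"

# Build the POST request; the server would parse the XML body itself,
# rather than relying on www-form-encoding or a CGI query string.
req = urllib.request.Request(
    "http://example.org/service",   # hypothetical endpoint
    data=xml_body,
    headers={"Content-Type": "text/xml"},
    method="POST",
)
```

Sending it would be `urllib.request.urlopen(req)`; the point is that the
payload is a well-formed document rather than an opaque encoded string.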

>Apples and oranges.  Exactly what do you think any RPC's IDL is doing, if
>not defining a new protocol?  (And causing problems by equating it to API?)
>Are you perhaps confusing lower level protocols with higher level ones?

    One person's low-level protocol is another's high-level protocol.
Perhaps here is the problem. There are a few terms (object, protocol, data,
etc.) which get used for a variety of purposes. My point about the need for
layering (and invoking the OSI model) is that an analysis at one level may
not be appropriate at another. Is the interface the protocol, or is DCE-RPC
NDR (or ONC-XDR) the protocol? (The point is only that one 'protocol' may
change with each interface while the other remains the same across
interfaces.)
    The protocols I am considering are HTTP and SMTP. If you are discussing
a need for protocols at a higher level than these, i.e. layered on top of
HTTP and SMTP, I have no argument.

>> >Consider that no RPC system in the world (CORBA, ONC, DCE, etc) has had
>> >the reach of some rather basic non-RPC systems like E-Mail (SMTP, POP,
>> >IMAP) or the web (HTTP, HTML, XML, etc).  For folk who have spent a lot
>> >of time working on architectural issues, this is telling:  it says that
>> >there's quite likely a problem with the RPC approach.
>>     That's exactly my point, there is no reason not to layer IDL on top
>> of perfectly good protocols such as HTTP and SMTP.
>You're then missing my point in its entirety.  The problem is the model,
>the notion that the system building block is an "RPC" of any kind.
>isn't the issue; after all, in an RPC system, it doesn't matter right?
>nobody sees it.

    No, my point is that you equate IDL, which is a specific way to define
interfaces (or APIs), with RPC, which defines a network protocol to effect
procedure calls across networks. I am saying IDL, not RPC. For example, is
it wrong to consider a web site as a type of 'distributed object'? Each
'page' in a subdirectory corresponds, on an abstract level, to a method in
an interface. This might allow integration of, for example, UML tools and
web tools. Communications between client and server under this model are
between an HTTP user agent/browser and an HTTP server.
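
The page-as-method idea above can be sketched as a small dispatcher that
treats URL paths under a "subdirectory" as methods on an interface. The
paths, method names, and record format here are all invented for the
example:

```python
# Hypothetical "methods" of the interface exposed by one subdirectory.
def get_record(patient_id: str) -> str:
    return f"<record id='{patient_id}'/>"

def list_records() -> str:
    return "<records/>"

# The "interface": each path corresponds, on an abstract level, to a method.
interface = {
    "/records/get": get_record,
    "/records/list": list_records,
}

def dispatch(path: str, *args: str) -> str:
    # An HTTP request for a path invokes the corresponding method;
    # the client sees only URLs and response documents.
    return interface[path](*args)
```

Under this view a UML tool could model the site as an object with two
methods, while the wire traffic remains ordinary HTTP.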

>>     This is my suggestion (feel free to propose another): Distributed
>> systems can communicate via HTTP and SMTP using XML documents as the
>> contents of their MIME messages
>So far so good; people have agreed on that one for some time.  Though
>I'm waiting to see details on how SMTP really fits in.  Store-and-forward
>messaging is usually done through a different API model than RPC.

    I am not proposing an RPC model, but for the sake of argument: in
Microsoft's COM+, async method calls operate via an MSMQ transport of
MS-RPC. MSMQ is a messaging protocol. I see no reason that COM+ async
method calls couldn't be implemented over SMTP if MS had the inclination.
    I see SMTP as useful for async messaging where HTTP is useful for sync
messaging. Both transmit MIME messages. A concrete benefit of SMTP is that
it can go anywhere, even where HTTP is not enabled; for example, I have used
this for ship-to-shore telemedicine links via bandwidth-challenged satellite
connections.
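
As a minimal sketch of the async-over-SMTP idea above, here is a MIME
message whose body is an XML document. The addresses and payload are
hypothetical, and the message is only constructed, not sent:

```python
from email.mime.text import MIMEText

# A hypothetical XML payload representing an asynchronous request.
xml_payload = "<?xml version='1.0'?><request><op>sync-records</op></request>"

# Wrap it as a text/xml MIME message, the same content type HTTP would carry.
msg = MIMEText(xml_payload, _subtype="xml")
msg["From"] = "client@example.org"      # hypothetical sender
msg["To"] = "service@example.org"       # hypothetical service mailbox
msg["Subject"] = "async method call"

# Handing it to the MTA would be: smtplib.SMTP("mailhost").send_message(msg)
```

The store-and-forward semantics come for free: the MTA retries delivery
across intermittent links, which is exactly what a synchronous HTTP
connection cannot do.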

xml-dev: A list for W3C XML Developers. To post, mailto:xml-dev at
Archived as: and on CD-ROM/ISBN 981-02-3594-1
To (un)subscribe, mailto:majordomo at the following message;
(un)subscribe xml-dev
To subscribe to the digests, mailto:majordomo at the following message;
subscribe xml-dev-digest
List coordinator, Henry Rzepa (mailto:rzepa at
