Benchmark of 6 XML parsers on Linux

Robb Shecter shecter at
Mon May 10 15:57:09 BST 1999

"Joshua E. Smith" wrote:

> The speed increase you saw...

Hmm... I read the article and checked out the Perl script, and it looks to me
like there's a problem with how the test was conducted.  Maybe I don't
understand what's going on, but this looks obvious:

The tests were apparently done with the unix "time" command, by shelling out and
starting a new process for each document.  This means the interpreter-based
languages suffer two disadvantages: 1) they're penalized for VM startup and
shutdown time, and 2) after parsing a document, all loaded objects, references,
and accumulated knowledge are thrown away and can't be reused for the next document.
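To illustrate the point, here is a rough modern sketch (not the original benchmark, and using Python's standard-library XML parser purely as a stand-in for whichever parsers were tested): the same small document is "parsed" once by spawning a fresh interpreter process, and once inside an already-running process.

```python
import subprocess
import sys
import time
import xml.etree.ElementTree as ET

# A small stand-in document; the original benchmark's documents are unknown.
DOC = "<root>" + "<item>x</item>" * 100 + "</root>"

def timed(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

# Shell-out style: each document pays full interpreter startup and shutdown,
# just like timing one process per document with the unix "time" command.
per_process = timed(lambda: subprocess.run(
    [sys.executable, "-c",
     "import xml.etree.ElementTree as ET; ET.fromstring(%r)" % DOC],
    check=True))

# In-process style: the interpreter and parser are already warm,
# so the measurement covers only the actual parse.
in_process = timed(lambda: ET.fromstring(DOC))

print("per-process: %.4fs, in-process: %.6fs" % (per_process, in_process))
```

For a document this small, the per-process figure is dominated by interpreter startup rather than parsing, which is exactly the distortion being described.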

To me, this is a valid issue because the test environment wasn't a good approximation of
real-world use:  The test most closely modeled a CGI environment, which is a dying
programming style.

My guess is that if the test had more closely modeled real-world use (a server
that, in its lifetime, parses many documents), the results would not have been so
exaggerated.  I'm thinking of something like Servlets in the Java world, mod_perl
in the Perl world, and so on.  The interpreters would still have lost, but
probably not by such a wide margin.
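A minimal sketch of that long-lived-server model (again using Python's standard-library parser as an assumed stand-in): one process parses many documents in a loop, so startup is paid once and amortized across all of them.

```python
import time
import xml.etree.ElementTree as ET

# A stand-in document; a real server would receive varied documents.
DOC = "<doc><a>1</a><b>2</b></doc>"
N = 1000  # number of "requests" served over the process's lifetime

# Interpreter startup, module loading, etc. happen once, before this loop,
# as in a Servlet or mod_perl environment.
start = time.perf_counter()
for _ in range(N):
    tree = ET.fromstring(DOC)  # parser and loaded classes stay warm
elapsed = time.perf_counter() - start

per_doc = elapsed / N
print("amortized cost per document: %.6fs" % per_doc)
```

Here the amortized per-document cost reflects only parsing work, not process creation, which is the scenario a fairer benchmark would model.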

- Robb

xml-dev: A list for W3C XML Developers. To post, mailto:xml-dev at
Archived as: and on CD-ROM/ISBN 981-02-3594-1
To (un)subscribe, mailto:majordomo at the following message;
(un)subscribe xml-dev
To subscribe to the digests, mailto:majordomo at the following message;
subscribe xml-dev-digest
List coordinator, Henry Rzepa (mailto:rzepa at
