Interesting Monday.

Matthew Sergeant (EML) Matthew.Sergeant at
Mon Feb 1 10:30:07 GMT 1999

Boy, you people sure can write when something stirs you up... It's 10:10am
and I've only just got through my backlog of XML-Dev mail...

Well, as the person who introduced the topic "Will XML eat the web?", I feel
I should just add some points of note. I thank everyone who has contributed
to this topic though.

Firstly, I think there is still an issue with processing power and XML,
although I can see that my system is poorly designed. Time for a rethink... 
The area where I can foresee potential problems is in e-commerce. Take an
e-commerce transaction processing company that's moved to an XML transaction
format. They don't have a shop web site, they just process credit card
transactions for other sites. I imagine they are going to need to process
hundreds of transactions per second. I don't for a second suggest that they
store the XML as the primary data format (store it as a backup as suggested
here) - it should immediately be put into an RDBMS. But to do that they have
to parse each transaction. There's no caching that can go on here.
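To make that per-transaction cost concrete, here is a minimal sketch of the parse-then-store step. The transaction format, element names, and table schema are all invented for illustration; a real processor would have its own DTD and database.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical transaction format -- element names invented for illustration.
SAMPLE = """<transaction id="t-1001">
  <card>4111111111111111</card>
  <amount currency="GBP">42.50</amount>
  <merchant>example-shop</merchant>
</transaction>"""

def store_transaction(conn, xml_text):
    # Every incoming transaction must be parsed before it reaches the RDBMS;
    # this per-message parse is the cost that caching cannot remove, because
    # each transaction is seen exactly once.
    root = ET.fromstring(xml_text)
    conn.execute(
        "INSERT INTO transactions (id, card, amount, currency, merchant) "
        "VALUES (?, ?, ?, ?, ?)",
        (root.get("id"),
         root.findtext("card"),
         float(root.findtext("amount")),
         root.find("amount").get("currency"),
         root.findtext("merchant")))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions "
             "(id TEXT PRIMARY KEY, card TEXT, amount REAL, "
             "currency TEXT, merchant TEXT)")
store_transaction(conn, SAMPLE)
```

At hundreds of transactions per second, that `fromstring` call is on the hot path for every single message.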

Luckily that's their problem and not mine <g>.

My problem was slightly different. I needed to be ready for the 5.0 browsers
(probably IE5, although I'd prefer NS5), and XML seemed ideal because we
would be displaying/editing documents that look like data (or data that
looks like a document if you like). We really needed an object database, but
I needed to get moving quickly (a typical web project: "Can we have it
yesterday"). Learning an object database wasn't a possibility. I already
knew XML. So I looked at it like this - we could do it one of two ways:

1) Store XML now, process into HTML now, Transmit XML in the future.

2) Store in RDBMS now, process into HTML now, process into XML in the future.

#1 looked like a nicer solution because it gives performance gains in the
future, which #2 doesn't really (except perhaps XML is a lighter weight
format to transmit than HTML). However, it appears this is not the right
way to go, because RDBMS->*ML is always faster than XML->HTML. That's a
lesson learned, and I thank you for it.
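For what option 1's XML->HTML step looks like in practice, here is a rough sketch. The document shape is invented (a "document that looks like data"), and the transform is deliberately naive; the point is only that every request re-parses the stored XML.

```python
import xml.etree.ElementTree as ET
from html import escape

# Invented document shape -- a "document that looks like data".
DOC = """<article>
  <title>Interesting Monday</title>
  <para>Store XML now, serve HTML now.</para>
  <para>Transmit XML when the 5.0 browsers arrive.</para>
</article>"""

def xml_to_html(xml_text):
    # Re-parsing the stored XML on every request is the cost that makes
    # XML->HTML slower than generating markup straight from rows that are
    # already sitting, pre-parsed, in an RDBMS.
    root = ET.fromstring(xml_text)
    parts = ["<h1>%s</h1>" % escape(root.findtext("title", ""))]
    parts += ["<p>%s</p>" % escape(p.text or "")
              for p in root.findall("para")]
    return "\n".join(parts)

html = xml_to_html(DOC)
```

Option 2 skips the parse entirely: the data is already in rows, and emitting either HTML or XML from a result set is just string formatting.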

Some of the points about caching are great when you're reading 1 XML file
multiple times, but we're talking about 400 - 1000 XML files being accessed
and constantly changed. A nicer solution would be an OODB. It's probably
time to go shopping...
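The caching trade-off can be sketched as a simple mtime-keyed parse cache (file layout and helper name are mine, not anything from this thread). It pays off handsomely when one file is read many times, but with 400-1000 files under constant change, most lookups miss and the parse cost comes straight back.

```python
import os
import xml.etree.ElementTree as ET

_cache = {}  # path -> (mtime at parse time, parsed tree)

def load(path):
    # Reuse the parsed tree only while the file's mtime is unchanged.
    # When the files are being edited constantly, the mtime check fails
    # on most requests and we re-parse anyway -- which is why caching
    # helps the read-mostly case far more than this one.
    mtime = os.path.getmtime(path)
    hit = _cache.get(path)
    if hit and hit[0] == mtime:
        return hit[1]
    tree = ET.parse(path)
    _cache[path] = (mtime, tree)
    return tree
```

An OODB sidesteps the whole problem by keeping the objects parsed and live, which is why it looks like the better long-term fit here.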

Perl on Win32, PerlScript, ASP, Database, XML
GCS(GAT) d+ s:+ a-- C++ UL++>UL+++$ P++++$ E- W+++ N++ w--@$ O- M-- !V 
!PS !PE Y+ PGP- t+ 5 R tv+ X++ b+ DI++ D G-- e++ h--->z+++ R+++

xml-dev: A list for W3C XML Developers. To post, mailto:xml-dev at
Archived as:
To (un)subscribe, mailto:majordomo at the following message;
(un)subscribe xml-dev
To subscribe to the digests, mailto:majordomo at the following message;
subscribe xml-dev-digest
List coordinator, Henry Rzepa (mailto:rzepa at
