Will XML eat the web?
paul at prescod.net
Fri Jan 29 14:17:13 GMT 1999
Pavel Velikhov wrote:
> I would like to view XML as a logical data model. I hope the actual
> storage model will be transparent in the future. I.e., the next
> generation XML "parser" that implements a DOM interface should be able
> to talk to an XML source that is, say, an OODB and fetch small pieces
> of the document as they are requested by the application.
This isn't really "next generation," and it also isn't a "parser."
Applications like this have existed for many years; examples include
GroveMinder, Astoria, and Texcel Information Manager. These things aren't
parsers because they don't parse anything. In ISO terminology they are
"grove providers." In Web terminology they are persistent DOM
implementations.
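The idea behind a persistent DOM can be sketched in a few lines: the tree
lives in a backing store, and nodes are materialized only when the
application touches them. This is a hypothetical illustration, not the API
of GroveMinder, Astoria, or any other product; the names NodeStore and
LazyNode are invented for the example.

```python
# Hypothetical sketch of a "persistent DOM": nodes are fetched from a
# backing store on first access instead of parsing a whole document.

class NodeStore:
    """Stands in for an OODB: maps node ids to (tag, text, child ids)."""
    def __init__(self, records):
        self.records = records
        self.fetches = 0  # count round-trips to the "database"

    def fetch(self, node_id):
        self.fetches += 1
        return self.records[node_id]

class LazyNode:
    """DOM-like node; record and children load only when accessed."""
    def __init__(self, store, node_id):
        self.store = store
        self.node_id = node_id
        self._record = None
        self._children = None

    def _load(self):
        if self._record is None:
            self._record = self.store.fetch(self.node_id)
        return self._record

    @property
    def tag(self):
        return self._load()[0]

    @property
    def text(self):
        return self._load()[1]

    @property
    def children(self):
        if self._children is None:
            self._children = [LazyNode(self.store, cid)
                              for cid in self._load()[2]]
        return self._children

store = NodeStore({
    1: ("doc", "", [2, 3]),
    2: ("chapter", "Intro", []),
    3: ("chapter", "Details", []),
})
root = LazyNode(store, 1)
print(root.tag)                         # touches only node 1
print([c.text for c in root.children])  # fetches 2 and 3 on demand
```

The point of the sketch is that reading `root.tag` costs one fetch, not a
parse of the entire document; a 20Mb tree stays in the store until the
application actually walks into it.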
> I agree, generating 20Mb XML files is bad. However, it will happen. If
> you make a lot of data available in XML by wrapping a relational
> database, users/applications will be able to request large XML files.
If you design your system correctly, then you can just send the user a
table of contents that *represents* the large data set, or a page of data
at a time.
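The table-of-contents approach can be sketched concretely. This is a
minimal, hypothetical example (the row data, page size, and element names
are all invented): instead of serializing the whole result set as one huge
file, the wrapper emits a small document that represents it, and serves one
page of rows on request.

```python
# Hypothetical sketch: page a relational query result instead of
# emitting one huge XML file. Element and attribute names are invented.

PAGE_SIZE = 2
rows = [("r%d" % i, "value %d" % i) for i in range(1, 8)]  # stands in for an RDBMS query

def toc_xml(rows, page_size):
    """A small document that *represents* the full result set."""
    pages = (len(rows) + page_size - 1) // page_size
    lines = ['<toc total-rows="%d">' % len(rows)]
    for p in range(pages):
        lines.append('  <page num="%d" href="?page=%d"/>' % (p, p))
    lines.append('</toc>')
    return "\n".join(lines)

def page_xml(rows, page, page_size):
    """One page of rows, generated only when the user asks for it."""
    chunk = rows[page * page_size:(page + 1) * page_size]
    lines = ['<page num="%d">' % page]
    for rid, val in chunk:
        lines.append('  <row id="%s">%s</row>' % (rid, val))
    lines.append('</page>')
    return "\n".join(lines)

print(toc_xml(rows, PAGE_SIZE))      # 7 rows -> a 4-entry table of contents
print(page_xml(rows, 1, PAGE_SIZE))  # only rows r3 and r4 are serialized
```

The user's first download is a few hundred bytes no matter how large the
underlying table is; the 20Mb transfer only happens if someone really asks
for every page.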
This thread is confusing to me because people keep sliding back and forth
among:
a) relational storage,
b) human-created document archiving, and
c) human-requested query results,
as if there were no architectural difference. But there is.
Paul Prescod - ISOGEN Consulting Engineer speaking for only himself
Don't you know that the smart bombs are so clever, they only kill
xml-dev: A list for W3C XML Developers. To post, mailto:xml-dev at ic.ac.uk
Archived as: http://www.lists.ic.ac.uk/hypermail/xml-dev/
To (un)subscribe, mailto:majordomo at ic.ac.uk the following message;
To subscribe to the digests, mailto:majordomo at ic.ac.uk the following message;
List coordinator, Henry Rzepa (mailto:rzepa at ic.ac.uk)