Re: [Qexo-general] Speed of execution
From: Per Bothner
Subject: Re: [Qexo-general] Speed of execution
Date: Mon, 20 Jan 2003 22:39:42 -0800
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2) Gecko/20021202
Eric Safern wrote:
> Per's suggestion is to pre-calculate the counts for the different types,
No, my suggestion is for each fortune type, create (in advance)
a separate cookie-file:
/var/www/html/fortunes-1.xml
/var/www/html/fortunes-2.xml
...
/var/www/html/fortunes-n.xml
and then do:
let $cookies := document(concat("/var/www/html/fortunes-", $tp, ".xml")),
    $min := 1,
    $max := count($cookies)
return $cookies[Rand($min, $max)]
> Note this test should run *faster* than the final code - I'm not loading
> the count data, or generating the random number.
But you're still reading the entire /var/www/html/fortunes.xml,
and creating a big data structure for it.
Of course we want Qexo to do this quickly - and quite likely
there are things we could relatively easily improve. Still,
if you have large data sets and need to improve performance,
you need to avoid processing the entire data set, typically
by using index files or by partitioning the data (as I
suggested here).
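The partitioning could be done once, offline, before the server starts. Here is a minimal Java sketch of that pre-processing step; the element and attribute names (a `fortunes` root holding `fortune` elements with a `type` attribute) are assumptions about the cookie-file layout, not the actual fortunes.xml schema:

```java
import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class SplitFortunes {
    // Group <fortune> elements by their "type" attribute,
    // building one new document per type.
    static Map<String, Document> splitByType(Document doc) throws Exception {
        Map<String, Document> byType = new LinkedHashMap<>();
        NodeList fortunes = doc.getElementsByTagName("fortune");
        for (int i = 0; i < fortunes.getLength(); i++) {
            Element f = (Element) fortunes.item(i);
            String type = f.getAttribute("type");
            Document out = byType.get(type);
            if (out == null) {
                out = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().newDocument();
                out.appendChild(out.createElement("fortunes"));
                byType.put(type, out);
            }
            // importNode copies the element into the per-type document.
            out.getDocumentElement().appendChild(out.importNode(f, true));
        }
        return byType;
    }

    public static void main(String[] args) throws Exception {
        Document all = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("/var/www/html/fortunes.xml"));
        TransformerFactory tf = TransformerFactory.newInstance();
        for (Map.Entry<String, Document> e : splitByType(all).entrySet()) {
            // Write /var/www/html/fortunes-<type>.xml for each type.
            tf.newTransformer().transform(
                    new DOMSource(e.getValue()),
                    new StreamResult(new File(
                            "/var/www/html/fortunes-" + e.getKey() + ".xml")));
        }
    }
}
```

After this runs, each query only parses the (much smaller) file for the one type it needs.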
> I'm not sure I understood your final suggestion. If I pre-read the
> entire document in the server init code, can I tell XQuery to run the
> query against the document *in memory*? How would that work?
For example something like this:
import gnu.kawa.xml.Document;
import gnu.lists.TreeList;

class Cookies
{
  // Parsed once, when the class is loaded; shared by every query.
  static final TreeList cookies = Document.parse("...");
  public static TreeList getCookies() { return cookies; }
}
Then in your Qexo program replace:
document("/var/www/html/fortunes.xml")
by:
Cookies:getCookies()
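The same parse-once pattern can be sketched with the standard DOM API instead of Kawa's TreeList, which keeps the idea self-contained; the class name `CachedCookies` and the inline XML string are illustrative only (the real servlet would parse /var/www/html/fortunes.xml):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class CachedCookies {
    // Parsed exactly once, in the static initializer; every query
    // afterwards gets the same in-memory tree.
    private static final Document cookies = load();

    private static Document load() {
        try {
            // A literal string keeps this sketch self-contained;
            // in the servlet this would read the real cookie file.
            String xml = "<fortunes><fortune>Hello</fortune></fortunes>";
            return DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // No re-parse per request: just hand out the cached tree.
    public static Document getCookies() { return cookies; }
}
```

Since a `static final` field is initialized once by the class loader, repeated calls to `getCookies()` return the identical object with no further parsing cost.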
--
--Per Bothner
address@hidden http://www.bothner.com/per/