Re: [DotGNU]Re: [Vrs-development] Re: SEE and Goldwater integration


From: Chris Smith
Subject: Re: [DotGNU]Re: [Vrs-development] Re: SEE and Goldwater integration
Date: Mon, 14 Oct 2002 13:04:05 +0100

On Sunday 13 October 2002 19:03, you wrote:
> Long email!!

Sorry, it was late and I wrote this over 2 hrs whilst watching the telly.

> I agree.  It sounds good to me.  My only response is that (eventually,
> at least), we're going to want to have the webservices loaded into
> the VM permanently, executing in the same process.  

Interesting idea.  I saw the caching being done at a higher level.  The VM 
will get quite big - especially when you take the heap of the running 
webservice into account.  It's something we can trial to get some metrics 
out of.

> Loading the webservice code and passing it to the VM every time a request
> comes in is, even with caching, going to be very slow.

Well, the webservice will be loaded; it's just the passing of it to another 
process that costs.  If it's quite big it gets spooled to disk and a 
reference to it is passed instead.  Yes, this is quite slow as it's doing IO, 
but if you scale your IPC resources enough the average case will go via IPC 
and not via disk.
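
To illustrate the sort of dispatch decision I mean, here's a rough C sketch.
SPOOL_THRESHOLD and gw_send() are names I've made up for the example - they
are not the real Goldwater calls:

/* Sketch only: small payloads go straight over IPC, big ones get spooled
 * to disk and only a path is passed.  SPOOL_THRESHOLD and gw_send() are
 * invented for the illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SPOOL_THRESHOLD (64 * 1024)    /* assumed IPC message size limit */

/* stub standing in for the real IPC send primitive */
static int gw_send(const char *queue, const void *buf, size_t len)
{
    printf("sent %lu bytes to queue %s\n", (unsigned long)len, queue);
    (void)buf;
    return 0;
}

int dispatch_webservice(const char *queue, const void *code, size_t len)
{
    if (len <= SPOOL_THRESHOLD)
        return gw_send(queue, code, len);      /* fast path: straight IPC */

    /* slow path: spool the code to disk, send only the file reference */
    char path[] = "/tmp/gw-spool-XXXXXX";
    int fd = mkstemp(path);
    if (fd < 0)
        return -1;

    FILE *fp = fdopen(fd, "wb");
    if (!fp || fwrite(code, 1, len, fp) != len) {
        if (fp)
            fclose(fp);
        return -1;
    }
    fclose(fp);

    return gw_send(queue, path, strlen(path) + 1);
}

int main(void)
{
    char tiny[64] = "pretend bytecode";
    return dispatch_webservice("vm_request_queue", tiny, sizeof(tiny));
}

The threshold is just a tunable; the point is that the common case never 
touches the disk.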


Your idea of caching in the VM is better, but the SM may need to know what is 
cached and what isn't, and it cannot guarantee that what it thinks the VM has 
cached actually _is_ cached.  The VM may have died and been restarted, plus 
there will be multiple instances of the VM running, so the first webservice 
that is executed will be cached by the running VM and not by the other 'n' 
VMs.

Oh bugger.  This messes up the whole caching thing for the VM.  If the SM 
asks the VMs whether a webservice is cached and then sends a request for that 
webservice to the VM, it is unlikely that the VM that said it has cached the 
webservice will be the one that gets to process it.

Hmm.  Need to think more.
Ah - No! Wait! An Idea!

The SM validates that a webservice exists and passes the request data to the 
VM instance collection.  The VM that receives the request checks its internal 
cache of webservices, and if it is not there it calls the SM to get the 
webservice.
From that point on this VM has a cached copy of the webservice.
All other VM instances will do the same the first time they receive a request.
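
To make it concrete, here's a rough C sketch of that pull-through cache.
webservice_t, sm_fetch_webservice() and CACHE_SLOTS are all names I've made
up for the illustration - not anything in Goldwater or pnet:

/* Sketch only: a per-VM pull-through cache along the lines described above. */
#include <stdlib.h>
#include <string.h>

#define CACHE_SLOTS 32                 /* finite cache size (see below) */

typedef struct {
    char          name[64];            /* webservice name */
    void         *code;                /* loaded bytecode */
    size_t        len;
    unsigned long hits;                /* request count, handy for eviction */
} webservice_t;

static webservice_t cache[CACHE_SLOTS];
static int cache_used;

/* Stub standing in for the IPC call back to the SM that fetches the code. */
static void *sm_fetch_webservice(const char *name, size_t *len)
{
    (void)name;
    *len = 1;
    return malloc(1);                  /* pretend we pulled some bytecode */
}

webservice_t *vm_get_webservice(const char *name)
{
    /* 1. Warm path: the VM checks its own cache first. */
    for (int i = 0; i < cache_used; i++) {
        if (strcmp(cache[i].name, name) == 0) {
            cache[i].hits++;
            return &cache[i];          /* no round trip to the SM */
        }
    }

    /* 2. Cold path: if there's room, pull the webservice from the SM and
     *    remember it.  (A full cache would evict an entry first - see the
     *    eviction sketch below.) */
    if (cache_used == CACHE_SLOTS)
        return NULL;

    size_t len;
    void *code = sm_fetch_webservice(name, &len);
    if (code == NULL)
        return NULL;

    webservice_t *ws = &cache[cache_used++];
    strncpy(ws->name, name, sizeof(ws->name) - 1);
    ws->name[sizeof(ws->name) - 1] = '\0';
    ws->code = code;
    ws->len  = len;
    ws->hits = 1;
    return ws;
}

The nice property is that the SM never has to track what each VM has cached; 
every VM just pulls on first use.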

This is basically how Apache/mod_perl works, except the CGI is loaded from 
disk instead of from the SM.

This is good, as we can have a finite VM cache size.  On reaching this limit, 
the least frequently requested webservice is dropped (or some other 
cache-cleaning algorithm is applied).
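
The eviction side might look something like this rough C sketch; again the 
cache_entry_t structure and the function names are invented for the 
illustration, and least-frequently-requested is only one possible policy:

/* Sketch only: drop the least frequently requested entry when the cache
 * is full.  cache_entry_t and its fields are made up for the example. */
#include <stdlib.h>

typedef struct {
    void          *code;               /* loaded bytecode */
    unsigned long  hits;               /* how often it has been requested */
} cache_entry_t;

/* Pick the entry with the fewest hits - the least frequently requested. */
static int pick_victim(const cache_entry_t *cache, int used)
{
    int victim = 0;
    for (int i = 1; i < used; i++)
        if (cache[i].hits < cache[victim].hits)
            victim = i;
    return victim;
}

/* Free the victim and compact the table; returns the new entry count. */
int evict_one(cache_entry_t *cache, int used)
{
    int v = pick_victim(cache, used);
    free(cache[v].code);
    cache[v] = cache[used - 1];        /* move the last entry into the hole */
    return used - 1;
}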

I like it.  Thanks for pointing the VM cache idea out, Eric!


> The code needs to
> be pre-loaded into the VM (possibly in the same process?  I'm not
> sure how fast IPC is for large blocks of data) or we're going to have
> some performance issues.  

Not bad.  Not great.  See above.

> Pre-loading allows for partial compilation,

Yes, I'm building a GWServer with ilrun embedded in it (got the necessary 
pnet libraries building as shared libs now - need to submit the patch back to 
Rhys/Weekend Warriors).  We'll then have full control over the decoding and 
execution of the code, plus be able to invoke different engines depending on 
the bytecode (C#/Java, etc.).

> optimization, etc (eg, Java HotSpot JIT compiler).  Note that in
> the long run we *definitely* can't start up a new VM process for each
> request. 

No, that's why there are multiple instances of the VM sitting on the queue.
Goldwater's dynamic booting of GWServers when under load isn't enabled, but 
the configuration controls are there (min/max/onload).

> That'll be CGI-bin inefficient. :-)  For now, sure, we can
> just load the code and start up a VM and run it -- good easy proof of
> concept.


Chris - getting excited.

-- 
Chris Smith
  Technical Architect - netFluid Technology Ltd.
  "Internet Technologies, Distributed Systems and Tuxedo Consultancy"
  E: address@hidden  W: http://www.nfluid.co.uk



