Re: [Qemu-devel] [PATCH 3 of 4] [UPDATE] single vnc server surface


From: Stefano Stabellini
Subject: Re: [Qemu-devel] [PATCH 3 of 4] [UPDATE] single vnc server surface
Date: Fri, 31 Jul 2009 17:51:11 +0100
User-agent: Alpine 2.00 (DEB 1167 2008-08-23)

On Fri, 31 Jul 2009, Gerd Hoffmann wrote:
> On 07/30/09 15:18, Stefano Stabellini wrote:
> > This patch removes the server surface from VncState and adds a single
> > server surface to VncDisplay, shared by all connected clients.
> > Each client maintains its own dirty bitmap in VncState.
> > The guest surface is moved to VncDisplay as well, because we don't need
> > to track guest updates in more than one place.
> >
> > This patch has been updated to handle copyrect correctly.
> 
> Well.  Sort of.  At least it has no screen corruption.  The patch kills
> a number of bandwidth-saving optimizations though.
> 
> Number one (the big one): vnc clients without copyrect support get a
> *huge* penalty.  Each vnc_copy call now sends a screen refresh to *all*
> clients, not only the ones with copyrect support.  And they get the
> *whole* destination rectangle, not only the screen areas which actually
> changed.  Given that vnc_copy can happen much more frequently than the
> refresh interval, I think this increases the bandwidth used *a lot*.

I admit I didn't pay too much attention to copyrect performance, because
anyone interested in performance should disable hw acceleration for
cirrus in the VM: software-emulated hw acceleration is between 5 and 10
times slower than plain old vesa.

That said, I am all for improving everything that can be improved, so I
made further enhancements to this patch to address these problems:
copyrect should now perform the same as before (thank you for the very
detailed description of the issues).
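
To make the copyrect point concrete, here is a minimal, self-contained
sketch of one way the per-client handling can look.  The names
(VncClient, vnc_copy_all, mark_dirty) and the tile-grid sizes are
hypothetical, not the actual QEMU identifiers: clients that advertise
copyrect get the cheap copyrect message, while the others only have the
destination rectangle marked dirty in their own bitmap, so the next
refresh resends just that area instead of forcing a full update on
every client.

/* Hypothetical sketch, not the actual QEMU code: per-client handling
 * of a copy operation with per-client dirty bitmaps. */
#include <stdbool.h>
#include <stdio.h>

#define TILES_X 16                     /* dirty bitmap width in tiles  */
#define TILES_Y 16                     /* dirty bitmap height in tiles */

typedef struct VncClient {
    const char *name;
    bool has_copyrect;                 /* client advertised copyrect   */
    bool dirty[TILES_Y][TILES_X];      /* this client's dirty bitmap   */
    struct VncClient *next;
} VncClient;

/* Mark a rectangle (in tile units) dirty in one client's bitmap. */
static void mark_dirty(VncClient *c, int x, int y, int w, int h)
{
    for (int ty = y; ty < y + h && ty < TILES_Y; ty++) {
        for (int tx = x; tx < x + w && tx < TILES_X; tx++) {
            c->dirty[ty][tx] = true;
        }
    }
}

/* Copy handler: copyrect-capable clients get the cheap copyrect
 * message right away; everyone else only gets the destination
 * rectangle marked dirty, so the next refresh resends just that. */
static void vnc_copy_all(VncClient *clients,
                         int src_x, int src_y,
                         int dst_x, int dst_y, int w, int h)
{
    for (VncClient *c = clients; c; c = c->next) {
        if (c->has_copyrect) {
            printf("%s: copyrect (%d,%d) -> (%d,%d) %dx%d\n",
                   c->name, src_x, src_y, dst_x, dst_y, w, h);
        } else {
            mark_dirty(c, dst_x, dst_y, w, h);
            printf("%s: destination marked dirty for next refresh\n",
                   c->name);
        }
    }
}

int main(void)
{
    VncClient plain = { "plain-client",    false, {{false}}, NULL };
    VncClient smart = { "copyrect-client", true,  {{false}}, &plain };

    vnc_copy_all(&smart, 0, 0, 4, 4, 8, 8);
    return 0;
}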

I am going to send an updated version of this patch, together with the
following one, which needs a rebase.


> Number two: If we skip frames because we have data buffered (i.e. the
> kernel output pipe is full) we might send more updates than we have to.
> With your patch the dirty bits of all screen updates are combined
> together, including the skipped screen updates.  So screen areas which
> changed in a skipped frame, then changed back for the frame we actually
> sent, are updated nevertheless.  This isn't the case without the patch.
> 

I don't think I can do much about this.
In any case this issue shouldn't affect performance significantly: the
benefits of the patch still outweigh the disadvantages.
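
For reference, a minimal sketch of the structure involved and of why the
skipped-frame case behaves this way; the names (VncDisplayState,
VncClientState, display_update, client_refresh) are hypothetical, not
the actual QEMU code.  A single display-level state is shared by all
clients while each client keeps its own dirty bitmap; guest updates OR
bits into every client's bitmap, and a bitmap only records *that* a tile
changed, not what it changed to, so a tile touched during a skipped
frame stays dirty and gets resent even if its contents reverted.

/* Hypothetical sketch, not the actual QEMU structures: one display
 * state shared by all clients, one dirty bitmap per client. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DIRTY_WORDS 4                  /* bitmap size, 32 tiles/word   */

typedef struct VncClientState {
    uint32_t dirty[DIRTY_WORDS];       /* this client's dirty bitmap   */
    bool output_blocked;               /* output pipe full, skip frame */
    struct VncClientState *next;
} VncClientState;

typedef struct VncDisplayState {
    /* single guest surface + server surface for all clients (omitted) */
    VncClientState *clients;
} VncDisplayState;

/* Guest touched some tiles: OR the bits into every client's bitmap. */
static void display_update(VncDisplayState *vd,
                           const uint32_t touched[DIRTY_WORDS])
{
    for (VncClientState *c = vd->clients; c; c = c->next) {
        for (int i = 0; i < DIRTY_WORDS; i++) {
            c->dirty[i] |= touched[i];
        }
    }
}

/* Refresh one client: send whatever is dirty, then clear its bitmap.
 * A blocked client is skipped and its dirty bits keep accumulating. */
static void client_refresh(VncClientState *c)
{
    if (c->output_blocked) {
        return;
    }
    for (int i = 0; i < DIRTY_WORDS; i++) {
        if (c->dirty[i]) {
            printf("send update for word %d: 0x%08x\n",
                   i, (unsigned)c->dirty[i]);
        }
        c->dirty[i] = 0;
    }
}

int main(void)
{
    VncClientState client = { {0}, false, NULL };
    VncDisplayState vd = { &client };

    const uint32_t frame1[DIRTY_WORDS] = { 0x1 };  /* area changes...   */
    const uint32_t frame2[DIRTY_WORDS] = { 0x1 };  /* ...changes back   */

    client.output_blocked = true;      /* frame 1 is skipped            */
    display_update(&vd, frame1);
    client_refresh(&client);

    client.output_blocked = false;     /* frame 2 goes out              */
    display_update(&vd, frame2);
    client_refresh(&client);           /* resent even if contents match */
    return 0;
}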