From: Yonit Halperin
Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
Date: Sun, 26 Jun 2011 12:59:06 -0400 (EDT)

Sorry for the late response; I wasn't available.
I'm afraid that (1) and (2) will indeed wake up the worker, but will not
ensure the command ring is emptied, as that depends on the client pipe size.
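
To make the stall concrete, here is a minimal standalone sketch (not
actual spice-server code; the ring backlog and pipe capacity are invented
numbers) of a worker that wakes on notify but stops draining the ring as
soon as the client pipe is full:

    #include <stdio.h>

    #define RING_LEN      8   /* commands pending in the ring (assumed) */
    #define MAX_PIPE_SIZE 4   /* client pipe capacity (assumed) */

    int main(void)
    {
        int ring_count = RING_LEN; /* commands still in the command ring */
        int pipe_count = 0;        /* commands queued on the client pipe */

        /* worker wakes up on notify() and starts draining the ring */
        while (ring_count > 0) {
            if (pipe_count >= MAX_PIPE_SIZE) {
                /* pipe is full: the worker stops reading the ring and
                 * waits for the client to consume data first */
                break;
            }
            ring_count--;
            pipe_count++;
        }

        printf("after notify: %d command(s) left in the ring\n", ring_count);
        return 0;
    }

With a pipe smaller than the ring backlog, the ring is left non-empty,
which is why (1)+(2) alone are not enough before entering S3.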


----- Original Message -----
From: "Alon Levy" <address@hidden>
To: "Gerd Hoffmann" <address@hidden>
Cc: address@hidden, address@hidden
Sent: Wednesday, June 22, 2011 11:57:54 AM
Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support

On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> >>worker call.  We can add an I/O command to ask qxl to push the
> >>release queue head to the release ring.
> >
> >So you suggest replacing QXL_IO_UPDATE_MEM with what, two I/O
> >commands instead of using the val parameter?
> 
> I'd like to (a) avoid updating the libspice-server API if possible
> and (b) have one I/O command for each logical step.  So going into
> S3 could look like this:
> 
>   (0) stop putting new commands into the rings
>   (1) QXL_IO_NOTIFY_CMD
>   (2) QXL_IO_NOTIFY_CURSOR
>       qxl calls notify(), to make the worker thread empty the command
>       rings before it processes the next dispatcher request.
>   (3) QXL_IO_FLUSH_SURFACES (to be implemented)
>       qxl calls stop()+start(), spice-server renders all surfaces,
>       thereby flushing state to device memory.
>   (4) QXL_IO_DESTROY_ALL_SURFACES
>       zap surfaces
>   (5) QXL_IO_FLUSH_RELEASE (to be implemented)
>       push release queue head into the release ring, so the guest
>       will see it and can release everything.
> 
> (1)+(2)+(4) exist already.
> (3)+(5) can be done without libspice-server changes.
> 
> Looks workable?

yeah. Yonit?
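
A rough sketch of how (3) and (5) could be wired into the qxl ioport
write handler in hw/qxl.c without touching the libspice-server API; the
QXL_IO_FLUSH_SURFACES / QXL_IO_FLUSH_RELEASE constants are the proposed
(not yet existing) ioports, and the QXLWorker stop/start calls and
qxl_push_free_res() follow the qemu code of the time, so treat the exact
names as assumptions:

    /* fragment of the qxl ioport write handler (sketch, not a patch) */
    case QXL_IO_FLUSH_SURFACES:
        /* stop() makes spice-server render all surfaces back into
         * device memory; start() resumes the worker afterwards */
        d->ssd.worker->stop(d->ssd.worker);
        d->ssd.worker->start(d->ssd.worker);
        break;
    case QXL_IO_FLUSH_RELEASE:
        /* push the release queue head into the release ring so the
         * guest sees it and can release everything */
        qxl_push_free_res(d, 1 /* flush */);
        break;

This keeps the whole change on the qemu side, which matches goal (a)
above: no libspice-server API update.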

> 
> cheers,
>   Gerd
> 
> 


