qemu-devel

RE: [PATCH] ui/gtk: Wait until the current guest frame is rendered before switching to RUN_STATE_SAVE_VM


From: Kim, Dongwon
Subject: RE: [PATCH] ui/gtk: Wait until the current guest frame is rendered before switching to RUN_STATE_SAVE_VM
Date: Wed, 12 Jun 2024 01:29:38 +0000

Hi, 

From: Marc-André Lureau <marcandre.lureau@gmail.com> 
Sent: Wednesday, June 5, 2024 12:56 AM
To: Kim, Dongwon <dongwon.kim@intel.com>
Cc: qemu-devel@nongnu.org; Peter Xu <peterx@redhat.com>
Subject: Re: [PATCH] ui/gtk: Wait until the current guest frame is rendered 
before switching to RUN_STATE_SAVE_VM

Hi

On Tue, Jun 4, 2024 at 9:49 PM Kim, Dongwon <dongwon.kim@intel.com> wrote:
On 6/4/2024 4:12 AM, Marc-André Lureau wrote:
> Hi
> 
> On Thu, May 30, 2024 at 2:44 AM <dongwon.kim@intel.com> wrote:
> 
>     From: Dongwon <dongwon.kim@intel.com>
> 
>     Make sure rendering of the current frame is finished before switching
>     the run state to RUN_STATE_SAVE_VM by waiting for egl-sync object to be
>     signaled.
> 
> 
> Can you expand on what this solves?

In the current scheme, the guest waits for the fence to be signaled for each 
frame it submits before moving on to the next frame. If the guest's state is 
saved while it is still waiting on the fence, then on restore the guest will 
keep waiting for a fence that was already signaled long ago in the original 
instance. One way to prevent this is to let the guest finish the current 
frame before changing the run state.

After the UI sets a fence, hw_ops->gl_block(true) gets called, which will block 
virtio-gpu/virgl from processing commands (until the fence is signaled and 
gl_block(false) is called again).

But this "blocking" state is not saved. So how does this affect save/restore? 
Please give more details, thanks

Yeah sure. The "blocking" state is not saved, but the guest's state is saved 
while it is still waiting for the response to its last resource-flush virtio 
message. That virtio response, by the way, is sent to the guest when the 
pipeline is unblocked (i.e. when the fence is signaled). Once the guest's 
state is saved, the current instance of the guest keeps running and receives 
the response as usual. The problem shows up when we restore the saved guest 
state, because the restored guest will be waiting for a response that was 
already delivered to the original instance.

> 
> 
>     Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
>     Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
>     Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
>     ---
>       ui/egl-helpers.c |  2 --
>       ui/gtk.c         | 19 +++++++++++++++++++
>       2 files changed, 19 insertions(+), 2 deletions(-)
> 
>     diff --git a/ui/egl-helpers.c b/ui/egl-helpers.c
>     index 99b2ebbe23..dafeb36074 100644
>     --- a/ui/egl-helpers.c
>     +++ b/ui/egl-helpers.c
>     @@ -396,8 +396,6 @@ void egl_dmabuf_create_fence(QemuDmaBuf *dmabuf)
>               fence_fd = eglDupNativeFenceFDANDROID(qemu_egl_display,
>                                                     sync);
>               qemu_dmabuf_set_fence_fd(dmabuf, fence_fd);
>     -        eglDestroySyncKHR(qemu_egl_display, sync);
>     -        qemu_dmabuf_set_sync(dmabuf, NULL);
> 
> 
> If this function is called multiple times, it will now set a new 
> fence_fd each time, and potentially leak older fd. Maybe it could first 
> check if a fence_fd exists instead.

We can make that change.

> 
>           }
>       }
> 
>     diff --git a/ui/gtk.c b/ui/gtk.c
>     index 93b13b7a30..cf0dd6abed 100644
>     --- a/ui/gtk.c
>     +++ b/ui/gtk.c
>     @@ -600,9 +600,12 @@ void gd_hw_gl_flushed(void *vcon)
> 
>           fence_fd = qemu_dmabuf_get_fence_fd(dmabuf);
>           if (fence_fd >= 0) {
>     +        void *sync = qemu_dmabuf_get_sync(dmabuf);
>               qemu_set_fd_handler(fence_fd, NULL, NULL, NULL);
>               close(fence_fd);
>               qemu_dmabuf_set_fence_fd(dmabuf, -1);
>     +        eglDestroySyncKHR(qemu_egl_display, sync);
>     +        qemu_dmabuf_set_sync(dmabuf, NULL);
>               graphic_hw_gl_block(vc->gfx.dcl.con, false);
>           }
>       }
>     @@ -682,6 +685,22 @@ static const DisplayGLCtxOps egl_ctx_ops = {
>       static void gd_change_runstate(void *opaque, bool running,
>     RunState state)
>       {
>           GtkDisplayState *s = opaque;
>     +    QemuDmaBuf *dmabuf;
>     +    int i;
>     +
>     +    if (state == RUN_STATE_SAVE_VM) {
>     +        for (i = 0; i < s->nb_vcs; i++) {
>     +            VirtualConsole *vc = &s->vc[i];
>     +            dmabuf = vc->gfx.guest_fb.dmabuf;
>     +            if (dmabuf && qemu_dmabuf_get_fence_fd(dmabuf) >= 0) {
>     +                /* wait for the rendering to be completed */
>     +                eglClientWaitSync(qemu_egl_display,
>     +                                  qemu_dmabuf_get_sync(dmabuf),
>     +                                  EGL_SYNC_FLUSH_COMMANDS_BIT_KHR,
>     +                                  1000000000);
> 
> 
>   I don't think adding waiting points in the migration path is 
> appropriate. Perhaps once you explain the actual issue, it will be 
> easier to help.
> 
>     +            }
>     +        }
>     +    }
> 
>           gd_update_caption(s);
>       }
>     -- 
>     2.34.1
> 
> 
> 
> 
> -- 
> Marc-André Lureau

