
Re: [Qemu-devel] qemu-ga: possible race while suspending the guest


From: Michael Roth
Subject: Re: [Qemu-devel] qemu-ga: possible race while suspending the guest
Date: Mon, 9 Apr 2012 17:29:51 -0500
User-agent: Mutt/1.5.21 (2010-09-15)

On Mon, Apr 09, 2012 at 04:57:48PM -0300, Luiz Capitulino wrote:
> Hi Michael,
> 
> There's a possible race condition in the bios_supports_mode() function used
> by all suspend commands in qemu-ga: if the parent process receives a SIGCHLD
> while executing:
> 
>     close(pipefds[1]);
>     g_free(pmutils_path);
> 
> Or:
> 
>     ret = read(pipefds[0], &status, sizeof(status));
> 
> Then bad things can happen, like reporting that the guest doesn't support
> suspend or a resource leak (a segfault may be possible too).
> 
> The easy & obvious way to fix this is just to block SIGCHLD while in close()
> and g_free() and to loop read() on EINTR.
> 
> However, the real problem here is that the double fork schema used in the
> suspend commands got complex. It's this way because I was trying to avoid
> causing a conflict with your series that adds support for running commands
> in the guest.
> 
> The ideal solution is to add an internal API to qemu-ga to create & wait for
> child processes, like we discussed during the suspend commands review.
> 
> Now, what's the best way to deal with this bug? Should I fix it the easy way
> or should I wait for the new API (which I believe you're going to work on)?

Was initially planning on using qemu_add_child_watch(), but this may end
up not being workable for w32, so I'd prefer to hold off on trying to
come up with a general solution till I start looking at that. I'm hoping
g_spawn* can save the day there but haven't looked at it in much detail.

In the meantime let's go with the "easy way". We should take care to
restore the default SIGCHLD in the 2nd child though, to avoid having
anything we exec() run with SIGCHLD blocked, since that may
hang us in certain situations.

> 
> Thanks.
> 
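For the record, a rough sketch (not the actual qemu-ga code; the helper names
parent_read_status() and second_child_exec() are made up, and error handling is
minimal) of what the above could look like: the parent blocks SIGCHLD across the
non-reentrant calls and retries read() on EINTR, and the second child resets
SIGCHLD to its default disposition and unblocks it before exec():

#include <errno.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

/* Retry read() if a signal (e.g. SIGCHLD) interrupts it. */
static ssize_t read_retry(int fd, void *buf, size_t count)
{
    ssize_t ret;

    do {
        ret = read(fd, buf, count);
    } while (ret < 0 && errno == EINTR);

    return ret;
}

/* Parent side: keep SIGCHLD from being delivered while we are inside
 * non-reentrant calls, then restore the old mask and read the child's
 * status with the EINTR-safe helper above. */
static int parent_read_status(int pipefds[2])
{
    sigset_t chld, oldmask;
    int status = -1;

    sigemptyset(&chld);
    sigaddset(&chld, SIGCHLD);
    sigprocmask(SIG_BLOCK, &chld, &oldmask);

    close(pipefds[1]);
    /* g_free(pmutils_path) would sit here in bios_supports_mode() */

    sigprocmask(SIG_SETMASK, &oldmask, NULL);

    if (read_retry(pipefds[0], &status, sizeof(status)) != (ssize_t)sizeof(status)) {
        return -1;
    }
    return status;
}

/* Second child: put SIGCHLD back to its default disposition and unblock
 * it before exec(), so the helper we run does not start life with
 * SIGCHLD blocked/ignored and hang waiting on its own children. */
static void second_child_exec(const char *helper_path)
{
    sigset_t chld;

    signal(SIGCHLD, SIG_DFL);
    sigemptyset(&chld);
    sigaddset(&chld, SIGCHLD);
    sigprocmask(SIG_UNBLOCK, &chld, NULL);

    execl(helper_path, helper_path, (char *)NULL);  /* arguments omitted */
    _exit(EXIT_FAILURE);
}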


