From: Alon Levy
Subject: Re: [Qemu-devel] [PATCH 3/3] hw/qxl: support client monitor configuration via device
Date: Tue, 11 Sep 2012 07:29:32 -0400 (EDT)

> Hi,
> 
> >> I don't think an explicit handshake via
> >> QXL_IO_CLIENT_MONITORS_CONFIG_DONE is a good idea.
> > 
> > Why? I don't see the below as being better - it just moves the
> > checking to the guest, and racily.
> 
> It is more robust.

I suggested a way for it to be just as robust - I take robust to mean "no lost 
messages except for intermediate ones", and both solutions drop intermediate ones anyway.

> We don't have to keep state in qxl for the
> handshake
You mean it's preferable to have state on the device 
(QXLRom::client_monitors_config_updating) rather than private qxl device state?
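(Just to illustrate how little private state the handshake needs - names below are made up for the sketch, this is not the actual qxl device code. The point is that only intermediate configs are dropped: whatever arrives while the guest is busy is re-delivered on DONE.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative private device state for the explicit DONE handshake. */
typedef struct {
    bool guest_busy;      /* waiting for QXL_IO_CLIENT_MONITORS_CONFIG_DONE */
    bool config_dirty;    /* a newer config arrived while the guest was busy */
    uint32_t latest[4];   /* stand-in for the real monitors config payload */
    uint32_t rom[4];      /* what the guest reads (QXLRom::client_monitors_config) */
} QXLHandshakeState;

static void push_to_guest(QXLHandshakeState *s)
{
    memcpy(s->rom, s->latest, sizeof(s->rom));
    s->guest_busy = true;
    /* raise QXL_INTERRUPT_CLIENT_MONITORS_CONFIG here */
}

/* new config arrived from the spice client */
static void client_monitors_config(QXLHandshakeState *s, const uint32_t *cfg)
{
    memcpy(s->latest, cfg, sizeof(s->latest));
    if (s->guest_busy)
        s->config_dirty = true;   /* intermediate config, deliver after DONE */
    else
        push_to_guest(s);
}

/* guest wrote QXL_IO_CLIENT_MONITORS_CONFIG_DONE */
static void io_monitors_config_done(QXLHandshakeState *s)
{
    s->guest_busy = false;
    if (s->config_dirty) {        /* re-deliver the newest config */
        s->config_dirty = false;
        push_to_guest(s);
    }
}
```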

> one less opportunity for a buggy guest driver to screw up
> things.
The logic you outline can be screwed up as well; the guest logic for writing this io 
is basically:

        /* parse QXLRom::client_monitors_config into driver state */
        qxl_display_copy_rom_client_monitors_config(qdev);
        /* apply it to the crtcs */
        qxl_crtc_set_from_monitors_config(qdev);
        /* tell qemu we are done reading */
        outb(0, qdev->io_base + QXL_IO_CLIENT_MONITORS_CONFIG_DONE);

> It is also closer to how real hardware handles this.

I don't know how it is in real hardware.

> 
> Yes, there is a race, but we detect and handle that case, so it is no
> problem.
> 
> >>
> >> How about this update protocol:
> >>
> >> qemu:
> >>   (1) set QXLRom::client_monitors_config_updating
> >>   (2) fill QXLRom::client_monitors_config
> >>   (3) raise QXL_INTERRUPT_CLIENT_MONITORS_CONFIG
> >>   (4) clear QXLRom::client_monitors_config_updating
> >>
> >> guest:
> >>   (1) clear QXL_INTERRUPT_CLIENT_MONITORS_CONFIG bit in irq status
> >>   (2) wait until QXLRom::client_monitors_config_updating is clear
> >>   (3) parse QXLRom::client_monitors_config
> >>   (4) check QXL_INTERRUPT_CLIENT_MONITORS_CONFIG bit in irq status
> >>       (a) when set, goto (1).
> >>       (b) when clear we are done.
> >>
> > 
> > Hmm, you are making the guest more complicated instead of the host
> > side, don't know if that's better.
> > 
> > Point (2) is a busy wait, no?
> 
> The guest doesn't have to spin, it can yield().  But yes, better
> don't
> do that in an IRQ handler.
> 
> > Also, guest will have to handle half old / half new configurations:
> 
> It should not.
> 
> > qemu(1)
> > qemu(2) start
> > guest(1)
> > guest(2)
> > guest(3) reads half old half new
> 
> No, guest will wait at (2).
> 
> But I just see it can happen the other way around though: guest
> starts
> reading and qemu starts updating in parallel.  We need one additional
> step (3a):  The guest needs to check
> QXLRom::client_monitors_config_updating is clear after parsing the
> data.
> 
> When qemu updated the data while the guest was reading it one of the
> two
> conditions will be true: either (3a) if qemu is still busy updating
> or
> (4) if qemu finished updating.

Making things more complicated in the host (qemu) means the kernel driver in the 
guest stays simpler, so even though you have a good solution for the race you 
discovered, I don't see why the alternative is worse. (Answered point by point 
above.)
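For reference, with your extra (3a) check the whole protocol would look roughly like this (a simplified single-threaded sketch; real code needs volatile accesses / memory barriers, and the guest would yield() instead of spinning):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define QXL_INTERRUPT_CLIENT_MONITORS_CONFIG (1 << 0)

/* Simplified QXLRom layout for the sketch - not the real header. */
typedef struct QXLRom {
    uint32_t client_monitors_config_updating;
    uint32_t client_monitors_config[4];   /* stand-in for the real struct */
} QXLRom;

static uint32_t irq_status;

/* qemu side, steps (1)-(4) */
static void qemu_update(QXLRom *rom, const uint32_t *cfg)
{
    rom->client_monitors_config_updating = 1;                 /* (1) */
    memcpy(rom->client_monitors_config, cfg,                  /* (2) */
           sizeof(rom->client_monitors_config));
    irq_status |= QXL_INTERRUPT_CLIENT_MONITORS_CONFIG;       /* (3) raise IRQ */
    rom->client_monitors_config_updating = 0;                 /* (4) */
}

/* guest side, steps (1)-(4) plus the (3a) re-check after parsing */
static void guest_read(QXLRom *rom, uint32_t *out)
{
    do {
        irq_status &= ~QXL_INTERRUPT_CLIENT_MONITORS_CONFIG;  /* (1) */
        while (rom->client_monitors_config_updating)          /* (2) */
            ;                                                 /* yield() here */
        memcpy(out, rom->client_monitors_config,              /* (3) parse */
               sizeof(rom->client_monitors_config));
    } while (rom->client_monitors_config_updating ||          /* (3a) */
             (irq_status & QXL_INTERRUPT_CLIENT_MONITORS_CONFIG)); /* (4) */
}
```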

> 
> >> We might want to notify spice-server when the guest flips the
> >> QXL_INTERRUPT_CLIENT_MONITORS_CONFIG bit in the irq mask, so we
> >> can
> >> route the event accordingly.
> > 
> > That would mean even more complex logic, to delay spice-server from
> > sending the monitors command to the guest while more data could be
> > incoming (which can be multiple chunks comprising one message, so
> > we
> > must wait for it to complete before pushing the
> > VDAgentMonitorsConfig
> > message).
> 
> I don't understand.  Why is the client capability bit fundamentally
> different from the irq mask?  I'd expect a guest driver which
> supports
> it would set the irq mask bit and the capability bit at the same
> time, no?

You are right, so we can do away with the guest capabilities field, but keep 
the guest capabilities API in spice, since spice depends on it for the 
routing decision.

> 
> >> Or we just route it unconditionally both ways and let the guest
> >> sort it (i.e. make vdagent ignore monitors config when the qxl kms
> >> driver is active).
> 
> > Routing it only one way is simpler in spice code. In other words, I
> > had a buggy version doing that and decided that I should just do it
> > one way or the other to not bang my head on it. But it's also
> > simpler
> > to handle - what order are the events going to happen in the guest?
> > Also, not only spice-vdagent plus our driver, but with, for
> > instance,
> > gnome-settings-daemon listening to the uevent from the kernel it
> > will
> > do modesetting by itself, racing with spice-vdagent.
> 
> I was thinking about spice-vdagent detecting the qxl kms driver is
> active, then just ignore all VDAgentMonitorsConfig messages
> unconditionally if this happens to be the case, so there will be no
> race. But if you think it is too complicated, no problem. 

This sounds like a good idea, I hadn't thought of it. But that leaves me fixing the 
vdagent code now :) Also, I still think doing just one message looks 
simpler.

> Was just an idea.  Notifying spice-server which way to route is fine with me 
> too.

OK, if it's all the same to you I'll stick with spice-server routing the 
message.

Overall, if you find this tedious I will switch to your suggestion since it 
isn't such a big deal.

> 
> cheers,
>   Gerd
> 
> 


