Re: [Qemu-devel] console muti-head some more design input


From: John Baboval
Subject: Re: [Qemu-devel] console muti-head some more design input
Date: Wed, 20 Nov 2013 10:49:17 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20120714 Thunderbird/14.0

On 11/20/2013 10:14 AM, Gerd Hoffmann wrote:
On Wed, 2013-11-20 at 09:32 -0500, John Baboval wrote:
On 11/20/2013 03:12 AM, Gerd Hoffmann wrote:
    Hi,

I think you are only considering output here, for input we definitely
need some idea of screen layout, and this needs to be stored
somewhere.
Oh yeah, input.  That needs quite some work for multihead / multiseat.

I think we should *not* try to hack that into the ui.  We should extend
the input layer instead.
This would be in contrast to how a real system works.
No.  We have to solve a problem here which doesn't exist on real hardware
in the first place.

IMO, the UI is the
appropriate place for this sort of thing. A basic UI is going to be
sending relative events anyway.

I think a "seat" should be a UI construct as well.
A seat on real hardware is a group of input (kbd, mouse, tablet, ...)
and output (display, speakers, ....) devices.

In qemu the displays are represented by QemuConsoles.  So to model real
hardware we should put the QemuConsoles and input devices for a seat
into a group.

The UI displays some QemuConsole.  If we tag input events with the
QemuConsole, the input layer can figure out which input device should
receive the event according to the seat grouping.
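
As a rough illustration of that grouping (QemuSeat, InputDevice, MAX_HEADS and
the routing helper below are made-up names for the sake of the sketch, not
existing QEMU API):

    #include <stddef.h>

    typedef struct QemuConsole QemuConsole;   /* QEMU's existing console type */
    typedef struct InputDevice InputDevice;   /* stand-in for kbd/mouse/tablet */

    #define MAX_HEADS 4

    typedef struct QemuSeat {
        QemuConsole *consoles[MAX_HEADS];  /* displays belonging to this seat */
        int nr_consoles;
        InputDevice *kbd;                  /* keyboard for this seat */
        InputDevice *ptr;                  /* pointing device for this seat */
    } QemuSeat;

    /* The UI tags the event with the console it came from; the input layer
     * looks up the seat owning that console and picks its pointer device. */
    static InputDevice *route_ptr_event(QemuSeat *seats, int nr_seats,
                                        QemuConsole *src)
    {
        for (int i = 0; i < nr_seats; i++) {
            for (int j = 0; j < seats[i].nr_consoles; j++) {
                if (seats[i].consoles[j] == src) {
                    return seats[i].ptr;
                }
            }
        }
        return NULL;   /* console not assigned to any seat */
    }

The UI then only needs to know which console an event came from; the seat
table does the rest.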

With absolute pointer events the whole thing becomes a bit more tricky
as we have to map input from multiple displays (QemuConsoles) to a
single absolute pointing device (usb tablet).  This is what Dave wants
the screen layout for.  I still think the input layer is the place to do
this transformation.

We solve this problem in our UI now. It's not enough to know the offsets. You also need to know all the resolutions - the display window, the guest, and the device coordinate system of the virtual pointing device (we use a PV event ring instead of a USB tablet).

If your UI can scale the guest output, that means you need to also store the UI's window geometry in the QemuConsole to get the math right.
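
As a rough illustration of the math, one axis of such a transform could look
like the sketch below; the names are hypothetical, and the 0..0x7fff range is
an assumption about the absolute device, not taken from any existing code:

    /* Hypothetical helper, x axis only.
     *   win_x, win_w          - pointer position and width of the UI window
     *                           showing this console (the UI may scale)
     *   guest_x_off, guest_w  - this console's offset and width in the
     *                           overall guest screen layout
     *   layout_w              - total width of the layout (all consoles)
     * Returns a value in the absolute device's coordinate space.           */
    #define ABS_DEV_MAX 0x7fff

    static int win_to_abs_x(int win_x, int win_w,
                            int guest_x_off, int guest_w, int layout_w)
    {
        /* undo UI scaling, then add the console's offset in the layout */
        int guest_x = guest_x_off + win_x * guest_w / win_w;
        /* rescale from guest layout coordinates to the device range */
        return guest_x * ABS_DEV_MAX / layout_w;
    }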

Incidentally, XenClient will eventually be moving back to relative coordinates from mice. We will handle seamless transitions by having the guest feed the pointer coordinates back down through an emulated hardware cursor channel. The reason for this is that operating systems like Windows 8 implement various types of "pointer friction" that don't work when you send absolute coordinates. We are still working out the latency kinks.





While thinking about this:  A completely different approach to tackle
this would be to implement touchscreen emulation.  So we don't have a
single usb-tablet, but multiple (one per display) touch input devices.
Then we can simply route absolute input events from each display as-is
to its touch device and be done with it.  No need to deal with
coordinate transformations in qemu; the guest will deal with it.
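
For contrast with the transform sketched earlier, the per-display routing
reduces to a plain pass-through; names again invented for illustration:

    typedef struct TouchDevice TouchDevice;
    void touch_send_abs(TouchDevice *dev, int x, int y);   /* hypothetical */

    /* One emulated touch device per console: the UI forwards the event in
     * the source display's own coordinate space, no offsets, no rescaling. */
    static void forward_touch(TouchDevice *touch_of_this_console, int x, int y)
    {
        touch_send_abs(touch_of_this_console, x, y);
    }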

cheers,
   Gerd





