
Re: [Qemu-devel] console multi-head some more design input


From: Dave Airlie
Subject: Re: [Qemu-devel] console multi-head some more design input
Date: Thu, 21 Nov 2013 10:45:14 +1000

On Thu, Nov 21, 2013 at 1:14 AM, Gerd Hoffmann <address@hidden> wrote:
> On Wed, 2013-11-20 at 09:32 -0500, John Baboval wrote:
>> On 11/20/2013 03:12 AM, Gerd Hoffmann wrote:
>> >    Hi,
>> >
>> >> I think you are only considering output here, for input we definitely
>> >> need some idea of screen layout, and this needs to be stored
>> >> somewhere.
>> > Oh yea, input.  That needs quite some work for multihead / multiseat.
>> >
>> > I think we should *not* try to hack that into the ui.  We should extend
>> > the input layer instead.
>>
>> This would contrast with how a real system works.
>
> No.  We have to solve a problem here which doesn't exist on real
> hardware in the first place.
>
>> IMO, the UI is the
>> appropriate place for this sort of thing. A basic UI is going to be
>> sending relative events anyway.
>>
>> I think a "seat" should be a UI construct as well.
>
> A seat on real hardware is a group of input (kbd, mouse, tablet, ...)
> and output (display, speakers, ....) devices.
>
> In qemu the displays are represented by QemuConsoles.  So to model real
> hardware we should put the QemuConsoles and input devices for a seat
> into a group.
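A minimal sketch of the grouping Gerd describes might look like the following. All of the struct layouts and helper names here are illustrative assumptions, not actual QEMU code; only the QemuConsole concept comes from QEMU itself.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins; not QEMU's real definitions. */
typedef struct QemuConsole { int index; } QemuConsole;
typedef struct InputDevice { const char *name; } InputDevice;

#define MAX_PER_SEAT 4

/* A seat groups the output side (QemuConsoles) with the input side
 * (keyboard, mouse, tablet, ...), mirroring a seat on real hardware. */
typedef struct Seat {
    QemuConsole *consoles[MAX_PER_SEAT];
    InputDevice *inputs[MAX_PER_SEAT];
    int nconsoles, ninputs;
} Seat;

void seat_add_console(Seat *s, QemuConsole *c)
{
    assert(s->nconsoles < MAX_PER_SEAT);
    s->consoles[s->nconsoles++] = c;
}

void seat_add_input(Seat *s, InputDevice *d)
{
    assert(s->ninputs < MAX_PER_SEAT);
    s->inputs[s->ninputs++] = d;
}
```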
>
> The ui displays some QemuConsole.  If we tag input events with the
> QemuConsole, the input layer can figure out the correct input device
> which should receive the event according to the seat grouping.
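The routing step described above could be sketched like this: an event arrives tagged with the QemuConsole it came in on, and the input layer finds the seat owning that console. Again, all types and names are hypothetical, not QEMU's real input API.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins; not QEMU's real definitions. */
typedef struct QemuConsole { int index; } QemuConsole;
typedef struct InputDevice { const char *name; } InputDevice;

typedef struct Seat {
    QemuConsole **consoles;   /* consoles grouped into this seat */
    int nconsoles;
    InputDevice *keyboard;    /* this seat's input device */
} Seat;

/* Given the QemuConsole an event was tagged with, return the input
 * device of the seat that console belongs to. */
InputDevice *route_key_event(Seat *seats, int nseats, QemuConsole *src)
{
    for (int i = 0; i < nseats; i++) {
        for (int j = 0; j < seats[i].nconsoles; j++) {
            if (seats[i].consoles[j] == src) {
                return seats[i].keyboard;
            }
        }
    }
    return NULL; /* console not grouped into any seat */
}
```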
>
> With absolute pointer events the whole thing becomes a bit more tricky
> as we have to map input from multiple displays (QemuConsoles) to a
> single absolute pointing device (usb tablet).  This is what Dave wants
> the screen layout for.  I still think the input layer is the place to do
> this transformation.
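The coordinate transformation for the absolute-pointer case could look roughly like this: each display occupies a rectangle inside one large virtual layout, and the single tablet reports positions over that whole layout in a fixed 0..32767 range (the range QEMU's usb-tablet uses). The struct and function names, and the layout numbers in the test, are made up for illustration.

```c
#include <assert.h>

/* Position and size of one display (QemuConsole) inside the
 * combined multi-head layout.  Illustrative, not QEMU code. */
typedef struct DisplayRect {
    int off_x, off_y;     /* top-left corner within the layout */
    int width, height;    /* size of this display in pixels */
} DisplayRect;

#define TABLET_MAX 32767  /* usb-tablet absolute axis maximum */

/* Map a pixel coordinate local to one display into the tablet's
 * absolute coordinate space spanning the whole layout. */
int local_to_tablet_x(const DisplayRect *d, int local_x, int layout_width)
{
    return (d->off_x + local_x) * TABLET_MAX / (layout_width - 1);
}

int local_to_tablet_y(const DisplayRect *d, int local_y, int layout_height)
{
    return (d->off_y + local_y) * TABLET_MAX / (layout_height - 1);
}
```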
>
>
> While thinking about this:  A completely different approach to tackle
> this would be to implement touchscreen emulation.  So we don't have a
> single usb-tablet, but multiple (one per display) touch input devices.
> Then we can simply route absolute input events from this display as-is
> to that touch device and be done with it.  No need to deal with
> coordinate transformations in qemu, the guest will deal with it.

This is a nice dream, except you'll find the guest won't deal with it
very well, and you'll have all kinds of guest-side scenarios where you
need to link up touchscreen A with monitor A, etc.

Dave.


