qemu-devel



From: Halil Pasic
Subject: Re: [Qemu-devel] [PATCH v2 4/7] s390x/css: add missing css state conditionally
Date: Thu, 8 Jun 2017 10:52:33 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.1.0


On 06/07/2017 08:03 PM, Juan Quintela wrote:
> Halil Pasic <address@hidden> wrote:
>> On 06/01/2017 01:32 PM, Cornelia Huck wrote:
>>> On Thu, 1 Jun 2017 11:35:27 +0200
>>> Halil Pasic <address@hidden> wrote:
>>>
> 
>>>>>
>>>>> I was about to suggest to move css from pointer to an embedded struct,
>>>>> and then noticed that MAX_CSSID is .... 65535.  I guess that this is
>>
>> #define MAX_CSSID 255 
>> ("include/hw/s390x/css.h" line 23) 
> 
> I have to grep better O:-)
> 
>>>>> going to be sparse, very sparse.  Perhaps there is an easier way to
>>>>> transmit it?
>>>>>
>>>>> You are transmitting:
>>>>>
>>>>> 65000 structs each of size MAX_CHPID (255) and each element is byte +
>>>>> byte + byte = 65000 * 255 * 3 = 47 MB.
>>>>>
>>>>> If it is really sparse, I think that sending something like a list
>>>>> of <index in array> <contents> could make sense, no?
>>>>>
>>>>> Or am I misunderstanding something?
>>>>>
>>>>
>>>> I think you are misunderstanding. Sparse arrays of pointers were not
>>>> supported in the first place. Support was added by me recently (patch
>>>> 07d4e69 "migration/vmstate: fix array of ptr with nullptrs"). For a null
>>>> pointer just a single byte is transmitted. So we should be at somewhere
>>>> around 64KB, given that it is really sparse. Bottom line, the in stream
>>
>> Given that, my calculation is also wrong. The overhead is below 255 bytes
>> (overhead compared to not transferring anything for null pointers, which
>> is clearly an unrealistic lower-bound case).
> 
> Yep, I grepped wrong.  255 x 255 makes no sense to optimize at all.
> Sorry for the noise.
> 
>>>> representation is at least as efficient as the in memory representation.
>>>>
>>>> Of course we could do something special for sparse arrays of pointers in
>>>> general, and indeed, one of the possibilities is a list/vector of
>>>> non-null indexes.
>>>
>>> Just as discussion fodder: For a run-of-the-mill qemu guest, I'd expect
>>> a low double-digit number of subchannels in css 0xfe, in subchannel set
>>> 0, which are clustered right at the beginning, and one chpid (0) in css
>>> 0xfe.
>>>
>>> As the subchannel id is autogenerated based upon the devno (if
>>> provided), it is easy to have subchannels in an arbitrary subchannel
>>> set (of which there are only four), but to get high subchannel ids with
>>> a lot of empty slots before, you'd have to create a lot (really a lot)
>>> of devices and then detach most of them again - not a very likely
>>> pattern.
>>>
>>> The 3270 support does not really change much: one additional subchannel
>>> (with the same characteristics) and one further chpid. But I don't
>>> expect many people to use 3270, and it is not (yet) migratable anyway.
>>>
>>> vfio-ccw devices will be in another css (possibly mapped), but they are
>>> not migratable either.
>>>
>>> So if we want to optimize, some "nothing of interest beyond that point"
>>> marker might be useful - but not very.
>>
>> Thus I do not think we need extra sparse array handling for this. 
>>
>> (Nevertheless, extra sparse array handling might make sense for the
>> vmstate core, provided there is a use case.)
> 
> So, I withdraw all my objections.
> 
> Again, sorry for the noise.

No problem. Actually, it's appreciated! Better to have some false
positives than to let something stupid slip in and have to live
with it.

Thanks for your review!


Regards,
Halil



