
Re: [Qemu-devel] [PATCH v2 4/7] s390x/css: add missing css state conditionally


From: Halil Pasic
Subject: Re: [Qemu-devel] [PATCH v2 4/7] s390x/css: add missing css state conditionally
Date: Thu, 1 Jun 2017 13:46:36 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0


On 06/01/2017 01:32 PM, Cornelia Huck wrote:
> On Thu, 1 Jun 2017 11:35:27 +0200
> Halil Pasic <address@hidden> wrote:
> 
>> On 05/31/2017 08:52 PM, Juan Quintela wrote:
>>> Halil Pasic <address@hidden> wrote:
>>>> Although we have recently vmstatified the migration of some css
>>>> infrastructure, for some css entities there is still state left to be
>>>> migrated, because the focus was on keeping migration stream
>>>> compatibility (that is, basically everything stays as-is).
>>>>
>>>> Let us add vmstate helpers and extend existing vmstate descriptions so
>>>> that we have everything we need. Let us guard the added state via
>>>> css_migration_enabled, so we keep the compatible behavior if css
>>>> migration is disabled.
>>>>
>>>> Let's also annotate the bits which do not need to be migrated for better
>>>> readability.
>>>>
>>>> Signed-off-by: Halil Pasic <address@hidden>
>>>> ---
>>>>  hw/intc/s390_flic.c | 20 +++++++++++++++
>>>>  hw/s390x/css.c      | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>>  2 files changed, 94 insertions(+)
>>>
>>> Reviewed-by: Juan Quintela <address@hidden>
>>>
>>
>> Thanks!
>>
>>>
>>> For the vmstate bits; I have exactly zero clue about s390
>>>
>>>> +static const VMStateDescription vmstate_chp_info = {
>>>> +    .name = "s390_chp_info",
>>>> +    .version_id = 1,
>>>> +    .minimum_version_id = 1,
>>>> +    .fields = (VMStateField[]) {
>>>> +        VMSTATE_UINT8(in_use, ChpInfo),
>>>> +        VMSTATE_UINT8(type, ChpInfo),
>>>> +        VMSTATE_UINT8(is_virtual, ChpInfo),
>>>> +        VMSTATE_END_OF_LIST()
>>>> +    }
>>>> +};
>>>> +
>>>>  typedef struct SubchSet {
>>>>      SubchDev *sch[MAX_SCHID + 1];
>>>>      unsigned long schids_used[BITS_TO_LONGS(MAX_SCHID + 1)];
>>>> @@ -215,6 +248,19 @@ typedef struct CssImage {
>>>>      ChpInfo chpids[MAX_CHPID + 1];
>>>>  } CssImage;
>>>>  
>>>> +static const VMStateDescription vmstate_css_img = {
>>>> +    .name = "s390_css_img",
>>>> +    .version_id = 1,
>>>> +    .minimum_version_id = 1,
>>>> +    .fields = (VMStateField[]) {
>>>> +        /* Subchannel sets have no relevant state. */
>>>> +        VMSTATE_STRUCT_ARRAY(chpids, CssImage, MAX_CHPID + 1, 0,
>>>> +                             vmstate_chp_info, ChpInfo),
>>>> +        VMSTATE_END_OF_LIST()
>>>> +    }
>>>> +
>>>> +};
>>>
>>>> +static const VMStateDescription vmstate_css = {
>>>> +    .name = "s390_css",
>>>> +    .version_id = 1,
>>>> +    .minimum_version_id = 1,
>>>> +    .fields = (VMStateField[]) {
>>>> +        VMSTATE_QTAILQ_V(pending_crws, ChannelSubSys, 1, vmstate_crw_container,
>>>> +                         CrwContainer, sibling),
>>>> +        VMSTATE_BOOL(sei_pending, ChannelSubSys),
>>>> +        VMSTATE_BOOL(do_crw_mchk, ChannelSubSys),
>>>> +        VMSTATE_BOOL(crws_lost, ChannelSubSys),
>>>> +        /* These were kind of migrated by virtio */
>>>> +        VMSTATE_UINT8(max_cssid, ChannelSubSys),
>>>> +        VMSTATE_UINT8(max_ssid, ChannelSubSys),
>>>> +        VMSTATE_BOOL(chnmon_active, ChannelSubSys),
>>>> +        VMSTATE_UINT64(chnmon_area, ChannelSubSys),
>>>> +        VMSTATE_ARRAY_OF_POINTER_TO_STRUCT(css, ChannelSubSys, MAX_CSSID + 1,
>>>> +                0, vmstate_css_img, CssImage),
>>>
>>> I was about to suggest moving css from a pointer to an embedded struct,
>>> and then noticed that MAX_CSSID is .... 65535.  I guess that this is

#define MAX_CSSID 255
("include/hw/s390x/css.h" line 23; so even a fully dense transfer would be
around 256 * 256 * 3 bytes, i.e. roughly 192 KiB, not 47 MB)

>>> going to be sparse, very sparse.  Perhaps there is an easier way to
>>> transmit it?
>>>
>>> You are transmitting:
>>>
>>> 65000 structs, each of size MAX_CHPID (255), and each element is byte +
>>> byte + byte = 65000 * 255 * 3 = 47 MB.
>>>
>>> If it is really sparse, I think that sending something like a list
>>> of <index in array> <contents> could make sense, no?
>>>
>>> Or am I misunderstanding something?
>>>
>>
>> I think you are misunderstanding. Sparse arrays of pointers were not
>> supported in the first place. Support was added by me recently (patch
>> 07d4e69 "migration/vmstate: fix array of ptr with nullptrs"). For a null
>> pointer just a single byte is transmitted. So we should be looking at
>> something more like 64 KB, given that it is really sparse. Bottom line,
>> the in-stream

That said, my calculation is also wrong: the overhead is below 255 bytes
(compared with not transferring anything at all for the null pointers, which
is clearly an unrealistic lower bound); a rough sketch of the encoding follows
below.

>> representation is at least as efficient as the in-memory representation.
>>
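For illustration, here is a rough, self-contained sketch of the encoding idea
behind that patch (the names and the tiny 3-byte payload are stand-ins, not
the actual vmstate code; a real CssImage would carry its whole chpids array):
a NULL slot costs a single marker byte, a populated slot costs its payload.

#include <stdint.h>
#include <stdio.h>

/* Stand-in element type; a real CssImage payload would be larger. */
typedef struct ElemExample {
    uint8_t in_use;
    uint8_t type;
    uint8_t is_virtual;
} ElemExample;

#define NULLPTR_MARKER 0x30   /* one byte sent in place of a NULL element */

/* One marker byte per NULL slot, the payload for each populated slot.
 * Illustrates the idea only, not the vmstate implementation. */
static size_t encode_ptr_array(ElemExample **arr, size_t n, uint8_t *out)
{
    size_t len = 0;

    for (size_t i = 0; i < n; i++) {
        if (!arr[i]) {
            out[len++] = NULLPTR_MARKER;
        } else {
            out[len++] = arr[i]->in_use;
            out[len++] = arr[i]->type;
            out[len++] = arr[i]->is_virtual;
        }
    }
    return len;
}

int main(void)
{
    enum { N = 256 };                 /* MAX_CSSID + 1 */
    ElemExample *css[N] = { NULL };
    ElemExample img = { 1, 0, 1 };

    css[0xfe] = &img;                 /* only the default css image is populated */

    uint8_t stream[N * sizeof(ElemExample)];
    size_t len = encode_ptr_array(css, N, stream);

    /* 255 one-byte markers plus one 3-byte payload: 258 bytes in total. */
    printf("encoded %zu bytes for %d slots\n", len, N);
    return 0;
}

So the per-NULL-slot cost is a single byte, which is what the overhead figure
above refers to.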
>> Of course we could do something special for sparse arrays of pointers in
>> general, and indeed, one of the possibilities is a list/vector of the
>> non-null indexes.
> 
> Just as discussion fodder: For a run-of-the-mill qemu guest, I'd expect
> a low double-digit number of subchannels in css 0xfe, in subchannel set
> 0, which are clustered right at the beginning, and one chpid (0) in css
> 0xfe.
> 
> As the subchannel id is autogenerated based upon the devno (if
> provided), it is easy to have subchannels in an arbitrary subchannel
> set (of which there are only four), but to get high subchannel ids with
> a lot of empty slots before them, you'd have to create a lot (really a lot)
> of devices and then detach most of them again - not a very likely
> pattern.
> 
> The 3270 support does not really change much: one additional subchannel
> (with the same characteristics) and one further chpid. But I don't
> expect many people to use 3270, and it is not (yet) migratable anyway.
> 
> vfio-ccw devices will be in another css (possibly mapped), but they are
> not migratable either.
> 
> So if we want to optimize, some "nothing of interest beyond that point"
> marker might be useful - but not very.

Thus I do not think we need extra sparse array handling for this. 

(Nevertheless, extra sparse array handling might make sense for the vmstate
core, provided there is a use case.)
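
Just to make that concrete, one conceivable sparse wire format (purely
hypothetical, sketched here outside the vmstate machinery with made-up names)
would be a count of populated slots followed by (index, payload) pairs:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in element type, same caveat as in the earlier sketch. */
typedef struct ElemExample {
    uint8_t data[3];
} ElemExample;

/* Hypothetical sparse encoding: a 16-bit count of populated slots, then one
 * (16-bit index, payload) pair per populated slot.  Discussion fodder only,
 * not a proposed QEMU interface. */
static size_t encode_sparse(ElemExample **arr, size_t n, uint8_t *out)
{
    size_t len = 2;                   /* reserve room for the count */
    uint16_t count = 0;

    for (size_t i = 0; i < n; i++) {
        if (arr[i]) {
            out[len++] = (uint8_t)(i >> 8);   /* big-endian index */
            out[len++] = (uint8_t)(i & 0xff);
            memcpy(out + len, arr[i]->data, sizeof(arr[i]->data));
            len += sizeof(arr[i]->data);
            count++;
        }
    }
    out[0] = count >> 8;
    out[1] = count & 0xff;
    return len;
}

int main(void)
{
    enum { N = 256 };
    ElemExample *arr[N] = { NULL };
    ElemExample e = { { 1, 2, 3 } };
    uint8_t buf[2 + N * (2 + sizeof(ElemExample))];

    arr[0xfe] = &e;                   /* a single populated slot */

    /* 2 bytes of count plus one 5-byte (index, payload) pair: 7 bytes. */
    printf("encoded %zu bytes\n", encode_sparse(arr, N, buf));
    return 0;
}

For the css arrays discussed here, the dense encoding with NULL markers is
already within a couple of hundred bytes of that, so it hardly seems worth
the extra code.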

Halil

> 
>>
>> I would like to ask the s390x maintainers (mainly Connie) about the
>> particular case, and the migration maintainers (I spontaneously think of
>> you, Juan, and Dave) about the general case (if we want to have something
>> like VMSTATE_ARRAY_OF_POINTER_TO_STRUCT_SPARSE).
>>
>> Regards,
>> Halil
>>
>>> Later, Juan.
>>>
> 
> 



