qemu-devel

From: Christian Borntraeger
Subject: Re: [Qemu-devel] [qemu-s390x] [PATCH v5 2/2] s390: diagnose 318 info reset and migration support
Date: Thu, 27 Jun 2019 12:29:57 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.7.0


On 26.06.19 16:22, Collin Walling wrote:
> On 6/26/19 8:14 AM, Cornelia Huck wrote:
>> On Wed, 26 Jun 2019 11:12:04 +0200
>> Christian Borntraeger <address@hidden> wrote:
>>
>>> On 25.06.19 17:17, Collin Walling wrote:
>>>> index a606547..4c26754 100644
>>>> --- a/target/s390x/cpu.h
>>>> +++ b/target/s390x/cpu.h
>>>> @@ -39,7 +39,13 @@
>>>>     #define MMU_USER_IDX 0
>>>>   -#define S390_MAX_CPUS 248
>>>> +/*
>>>> + * HACK: The introduction of additional facility bytes in the Read Info
>>>> + * struct consumes space used for CPU entries, thus we must reduce the
>>>> + * original maximum CPUs of 248 by one for each new byte or risk smashing
>>>> + * the stack.
>>>> + */
>>>> +#define S390_MAX_CPUS 247
>>>
>>> I think we decided not to change that. Only if the cpu model contains the
>>> diag318 feature are we limited to 247, and only for the sclp response.
>>> So we said:
>>> - we continue to allow 248 cpus
>>> - the sclp response will be limited to 247 CPUs if the feature is on
>>> - (optional) we print a warning that the guest might not see all CPUs
>>>
>>>
>>
>> Yes, that's what I remember as well... and printing/logging a warning
>> is a good idea.
>>
> 
> I recall this conversation, but I encountered a bit of a hangup when
> running some tests with the new changes.
> 
> Since we're adding a new field in the ReadInfo struct, we're permanently
> intruding on the space used for CPU entries. A machine with these changes
> and 248 CPUs resulted in a stack smash when the guest started up. This
> happened with diag318 on *and* off. It is a limitation of the 4k SCCB
> size right now :/

Yes, of course: you need to touch sclp_read_cpu_info so that it does not overwrite the buffer.

Since we are going to provide larger SCCBs in the future, I do not see a point
in limiting this to 247 now just to increase it back later on.



