Re: [Qemu-devel] [PATCH 0/8] [PATCH RFC v3] s390 cpu hotplug


From: Jason J. Herne
Subject: Re: [Qemu-devel] [PATCH 0/8] [PATCH RFC v3] s390 cpu hotplug
Date: Fri, 13 Sep 2013 11:01:57 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130623 Thunderbird/17.0.7

On 09/05/2013 10:06 AM, Andreas Färber wrote:
> Am 05.09.2013 15:10, schrieb Alexander Graf:
>> On 05.09.2013, at 15:05, Andreas Färber wrote:
>>> Am 05.09.2013 14:54, schrieb Alexander Graf:
>>>> Very simple and clean patch set. I don't think it deserves the RFC tag.
>>>
>>> Negative, see my review. If you want to fix up and queue patches 1-2
>>> that's fine with me, but the others need a respin. No major blocker
>>> though, just some more footwork mostly related to QOM and Jason's
>>> shifted focus on cpu-add rather than device_add.
>>
>> Yeah, that's what I'm referring to. I've seen a lot worse patch sets at
>> v8 than this RFC :).
>>
>> I don't think we should apply it as is, and I'm very happy to see your
>> review and comment on the modeling bits :). But I try to never apply or
>> cherry-pick RFC patches - and this set looks like he sent it with the
>> intent of getting it merged.
>
> Agreed, we can continue with "PATCH v4". I was more upset about the
> "very simple and clean" bit after I commented on a number of unclean
> things to improve - mostly about doing things in different places.
>
> If you could find some time to review my two model string patches then I
> could supply Jason with a branch or even a pull to base on:
>
> http://patchwork.ozlabs.org/patch/272511/
> http://patchwork.ozlabs.org/patch/272509/
>
> I would also volunteer to provide a base patch for the link<> issue if
> there is agreement. Apart from the QOM API question this depends on the
> contradictory modelling of whether we allow CPU addresses 0..max_cpus as
> seen in this series or 0..somemax with <= max_cpus non-NULL as discussed
> on #zkvm.

According to http://wiki.qemu.org/Features/CPUHotplug:

"adding CPUs should be done in successive order from lower to higher IDs in [0..max-cpus) range. It's possible to add arbitrary CPUs in random order, however that would cause migration to fail on its target side."

Considering that, in a virtual environment, it rarely (if ever) makes sense to define out-of-order cpu ids, maybe we should keep the patch as is and only allow consecutive cpu ids to be used.

By extension, hot-unplug would require that the highest id be unplugged. This is probably not acceptable in any kind of "mixed cpu" environment, because the highest id may not be of the cpu type you want to remove. I'm not sure whether s390 will ever implement mixed cpu types.
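A minimal sketch of that rule (a hypothetical model with made-up names, not the actual QEMU code): cpu-add only accepts the next consecutive id in [0, max_cpus), and hot-unplug only accepts the current highest id.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_CPUS 4  /* stand-in for max_cpus */

static void *cpu_slots[MAX_CPUS];  /* slot n holds the cpu with id n */
static unsigned n_cpus;

/* cpu-add succeeds only for the next consecutive id in [0, MAX_CPUS). */
static bool cpu_add(unsigned id, void *cpu)
{
    if (id != n_cpus || id >= MAX_CPUS) {
        return false;
    }
    cpu_slots[id] = cpu;
    n_cpus++;
    return true;
}

/* Hot-unplug, by extension, may only remove the current highest id. */
static bool cpu_del(unsigned id)
{
    if (n_cpus == 0 || id != n_cpus - 1) {
        return false;
    }
    cpu_slots[id] = NULL;
    n_cpus--;
    return true;
}
```

Under this model the migration problem from the wiki cannot arise, since the populated slots are always a dense prefix of the array.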

> (child<s390-cpu> properties would allow to model the latter sparse
> address space very well, but an object can only have one parent in the
> hot-add case. We could of course add cpu[n] link<s390-cpu> properties as
> CPUs get added, but that doesn't strike me as very clean. My underlying
> thought is to offload the error handling to QOM so that we don't start
> hardcoding s/smp_cpus/max_cpus/g (or some max_cpu_address) all around
> ipi_states.)


I'm not sure I understand. What is meant by: "an object can only have one parent in the hot-add case."

What is the difference between "child<s390-cpu>" and "cpu[n] link<s390-cpu>"? And why do you feel the link case would be unclean?
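For contrast, the sparse alternative Andreas describes (addresses 0..somemax with at most max_cpus slots non-NULL) could be sketched like this - again a hypothetical model with invented bounds, roughly analogous to pre-created cpu[n] link<> properties, not the QOM implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_CPU_ADDR 8  /* hypothetical "somemax": highest valid cpu address + 1 */
#define MAX_CPUS     4  /* at most this many slots may be non-NULL */

/* Sparse slot table: any address may be plugged, in any order,
 * as long as no more than MAX_CPUS slots are populated. */
static void *cpu_links[MAX_CPU_ADDR];
static unsigned n_present;

static bool cpu_plug(unsigned addr, void *cpu)
{
    if (addr >= MAX_CPU_ADDR || cpu_links[addr] != NULL
        || n_present >= MAX_CPUS) {
        return false;
    }
    cpu_links[addr] = cpu;
    n_present++;
    return true;
}
```

This is the model that would make out-of-order ids (and hence the migration ordering caveat) possible in the first place.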

> Btw an unanswered question: ipi_states is just pointers to CPUs
> currently, no further state. So what's "ipi" in the name? Will that
> array need to carry state beyond S390CPU someday?


Quoting Jens Freimann:
"
The ipi_states array holds all our S390CPU states. The position of a cpu
within this array equals its "cpu address". See section "CPU address identification"
in the Principles of Operation. This cpu address is used for
cpu signalling (inter-processor interrupts->ipi) via the sigp instruction.
The cpu address does not contain information about which book this cpu is
plugged into, nor does it hold any other topology information.

When we intercept a sigp instruction in qemu, the cpu address is passed
to us in a field of the SIE control block. We then use the cpu address
as an index into the ipi_states array, get hold of the S390CPU, and then,
for example, do a vcpu ioctl on the chosen cpu.
"

Based on this explanation I do not think this array will ever be anything more than what it is today.
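A rough model of the lookup Jens describes (simplified stand-ins for the real structures; the actual helper in QEMU's s390 code is along the lines of s390_cpu_addr2state()):

```c
#include <assert.h>
#include <stddef.h>

#define SMP_CPUS 2

/* Simplified stand-in for the real S390CPU structure. */
typedef struct S390CPU {
    int cpu_num;
} S390CPU;

/* Position in the array == the cpu address used by sigp. */
static S390CPU *ipi_states[SMP_CPUS];

/* On a sigp intercept, the cpu address read from the SIE control block
 * indexes directly into ipi_states to find the target S390CPU. */
static S390CPU *cpu_addr_to_state(unsigned cpu_addr)
{
    if (cpu_addr >= SMP_CPUS) {
        return NULL;  /* no cpu at this address */
    }
    return ipi_states[cpu_addr];
}
```

Since the array carries no topology or extra state, it really is just this address-to-pointer mapping.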

--
Jason J. Herne (address@hidden)



