Re: [Qemu-devel] [PATCH 11/19] target-i386: introduce apic-id property


From: Eduardo Habkost
Subject: Re: [Qemu-devel] [PATCH 11/19] target-i386: introduce apic-id property
Date: Fri, 12 Apr 2013 13:29:40 -0300
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Apr 12, 2013 at 05:46:42PM +0200, Igor Mammedov wrote:
> On Fri, 12 Apr 2013 10:13:13 -0300
> Eduardo Habkost <address@hidden> wrote:
[...]
> > > > > +static void x86_cpuid_set_apic_id(Object *obj, Visitor *v, void *opaque,
> > > > > +                                  const char *name, Error **errp)
> > > > > +{
> > > > > +    X86CPU *cpu = X86_CPU(obj);
> > > > > +    const int64_t min = 0;
> > > > > +    const int64_t max = UINT32_MAX;
> > > > > +    Error *error = NULL;
> > > > > +    int64_t value;
> > > > > +
> > > > > +    visit_type_int(v, &value, name, &error);
> > > > > +    if (error) {
> > > > > +        error_propagate(errp, error);
> > > > > +        return;
> > > > > +    }
> > > > > +    if (value < min || value > max) {
> > > > > +        error_setg(&error, "Property %s.%s doesn't take value %" PRId64
> > > > > +                   " (minimum: %" PRId64 ", maximum: %" PRId64 ")" ,
> > > > > +                   object_get_typename(obj), name, value, min, max);
> > > > > +        error_propagate(errp, error);
> > > > > +        return;
> > > > > +    }
> > > > > +
> > > > > +    if (cpu_exists(value)) {
> > > > > +        error_setg(&error, "CPU with APIC ID %" PRIi64 " exists", value);
> > > > > +        error_propagate(errp, error);
> > > > 
> > > > What about implementing this check after implementing the
> > > > /machine/icc-bridge links? Then we can simply check if
> > > > "/machine/icc-bridge/cpus[ID]" is already set.
> > > because it's more generic and won't break even if the link location changes
> > 
> > Well, if the link location changed, we could change the cpu_exists()
> > implementation. We don't even need to hardcode the icc-bridge path, we
> > could simply follow appropriate links if we set them up.
> > 
> > My problem with cpu_exists() is that it is based on global state,
> > instead of simply using the links between bus/devices to find out if the
> > IDs are being allocated correctly. We could make it work like PCI: devfn
> > conflicts are solved simply by looking at the list of devices in the
> > specific PCIBus where the device is being added. See
> > do_pci_register_device().
> > 
> > That said, I am not against the current approach if we don't have the
> > necessary bus/links modelled yet. But if we are going to add icc-bridge,
> > we could benefit from it to implement cpu_exists().
> Well, the check has to be done here anyway, regardless of how cpu_exists()
> is implemented.
> 
> BTW:
> This discussion and the one below belong more to patch 9/19, which
> implements cpu_exists(), than here.

OK.

> 
> > 
> > > 
> > > > 
> > > > (And then icc-bridge could be responsible for ensuring APIC ID
> > > > uniqueness, not the CPU object itself)
> > > Yep, with cpu-add the ID could be/is verified before setting the property,
> > > but if one considers device_add or a fictional -device cpuxxx on the
> > > command line, then there won't be an external check for this. This check
> > > takes care of these use cases.
> > 
> > That's why I believe the CPU has to calculate the APIC ID based on
> > signaling coming from other devices/buses, instead of allowing the APIC
> You mean by the board, not by the CPU, I suppose.

Actually I am not 100% sure. For example: we could simply provide the
socket/core/thread IDs (or links to socket/core objects?) to the CPU
object, and the CPU object could calculate the APIC ID itself.
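
Something along these lines, just to illustrate (a rough sketch from me, not
code from this series; the helper names and the power-of-two rounding of the
per-level ID widths are my assumptions):

#include <stdint.h>

/* Sketch only: smallest number of bits that can hold "count" distinct IDs. */
static unsigned topo_bits_for(unsigned count)
{
    unsigned bits = 0;

    while ((1u << bits) < count) {
        bits++;
    }
    return bits;
}

/* Build the APIC ID from socket/core/thread IDs: thread ID in the low
 * field, then the core ID, then the socket ID in the high bits. */
static uint32_t apicid_from_topo_ids(unsigned socket_id, unsigned core_id,
                                     unsigned thread_id,
                                     unsigned nr_cores, unsigned nr_threads)
{
    unsigned thread_bits = topo_bits_for(nr_threads);
    unsigned core_bits = topo_bits_for(nr_cores);

    return (socket_id << (core_bits + thread_bits)) |
           (core_id << thread_bits) |
           thread_id;
}

Whether this calculation lives in the CPU object or in the board, the
resulting IDs are the same; the question is only which object owns it.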

> 
> > ID to be directly specified in the device_add/-device options.  That's
> > how real CPUs work: as the CPU manufacturer doesn't know what will be
> > the package ID of the CPU, the APIC IDs are not hardcoded in the CPU;
> > they are calculated based on the CPU topology and some socket identifier
> > signal coming from the board.
> That is why apic_id has been made a property, to be set from outside.

True. I believe the conflict here is: we want other objects to set the
APIC ID (be it the board, or socket/core objects), but at the same time
it would be interesting to not expose the APIC ID outside QEMU. Being
too flexible regarding the APIC ID is more likely to cause problems
later.

That said, I don't mind having an "apic-id" property because it is easier
to simply expose it directly. But do you agree that: 1) we don't really
need to expose it to be set from outside QEMU; 2) we shouldn't require
it to be set from outside QEMU; 3) we should recommend that users not try
to fiddle with it?


> 
> > 
> > Comparing it with PCI again: devfn of PCI devices can be specified
> > manually, but if it is not present you just need to look at the PCI bus
> > to find out the next available devfn.
> 18/19 gives an option to look up available APIC IDs in a target-specific way.
> If we come up with a better target-agnostic way to look up IDs, we can always
> add it on top of the currently proposed implementation.
> 
> > Based on that, I thought that icc-bridge was going to be the component
> > responsible for the APIC ID allocation logic, considering that it
> > already contains APIC-ID-based links to the CPU objects.
> In 19/19, APIC IDs are allocated by the board for each possible CPU;
> icc-bridge is just a convenient place for storing the links now. If the
> machine were a QOM object, I'd probably place them there.

I was just wondering if we could solve both problems at the same time:
if we have a convenient place to store APIC-ID-based links, we get a more
efficient APIC ID allocation/check system for free, especially considering
how inefficient the method used by cpu_exists() is.
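
For instance, with per-APIC-ID links in place, the duplicate check could be a
single path lookup instead of a scan over all CPUs. A rough sketch, assuming
the bridge publishes one "cpu[N]" link per APIC ID (the path and link names
below are made up, not taken from this series):

#include <inttypes.h>
#include <glib.h>
#include "qom/object.h"

/* Sketch: duplicate-APIC-ID check as a QOM path lookup.  The
 * "/machine/icc-bridge/cpu[N]" path is illustrative only. */
static bool apic_id_in_use(int64_t apic_id)
{
    char *path = g_strdup_printf("/machine/icc-bridge/cpu[%" PRId64 "]",
                                 apic_id);
    Object *cpu = object_resolve_path(path, NULL);

    g_free(path);
    return cpu != NULL;
}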

It looks like the problem with my approach is that the QOM APIC-ID-based
links are target-specific, but cpu_exists() is target-independent. If
both were target-specific or both were target-independent, we could
easily reuse the same mechanism for both, but that's not the case now.

-- 
Eduardo


