
From: Auger Eric
Subject: Re: [Qemu-arm] [Qemu-devel] [RFC 3/4] hw/intc/arm_gicv3_its: Implement state save/restore
Date: Mon, 30 Jan 2017 11:45:38 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

Hi Juan,

On 30/01/2017 10:15, Juan Quintela wrote:
> Eric Auger <address@hidden> wrote:
>> We need to handle both registers and ITS tables. While
>> register handling is standard, ITS table handling is more
>> challenging since the kernel API is devised so that the
>> tables are flushed into guest RAM and not in vmstate buffers.
>>
>> Flushing the ITS tables on device pre_save() is too late
>> since the guest RAM had already been saved at this point.
>>
>> Table flushing needs to happen when we are sure the vcpus
>> are stopped and before the last dirty page saving. The
>> right point is RUN_STATE_FINISH_MIGRATE but sometimes the
>> VM gets stopped before migration launch so let's simply
>> flush the tables each time the VM gets stopped.
>>
>> For regular ITS registers we just can use vmstate pre_save
>> and post_load callbacks.
>>
>> Signed-off-by: Eric Auger <address@hidden>
> 
> Hi
> 
> 
>> + * vm_change_state_handler - VM change state callback aiming at flushing
>> + * ITS tables into guest RAM
>> + *
>> + * The tables get flushed to guest RAM whenever the VM gets stopped.
>> + */
>> +static void vm_change_state_handler(void *opaque, int running,
>> +                                    RunState state)
>> +{
>> +    GICv3ITSState *s = (GICv3ITSState *)opaque;
> 
> Cast is unneeded.
> 
>> +
>> +    if (running) {
>> +        return;
>> +    }
>> +    kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_TABLES,
>> +                      0, NULL, false);
> 
> As you are adding it to do everytime that we stop the guest, how
> expensive/slow is that?

This is highly dependent on the number of devices using MSIs and the
number of MSIs allocated by the guest. The number of bytes to transfer
basically is:

(#nb_vcpus + #nb_devices_using_MSI_on_guest + 2 * #nb_allocated_guest_MSIs) * 8 bytes

So I would say < 10 kB in a real-life case. In my virtio-pci test case it
is just 440 bytes.

For live migration I could hook a callback at RUN_STATE_FINISH_MIGRATE.
However, that does not work with the virsh save/restore use case, since
the notifier is not called (the VM is already paused), hence this choice.

Thanks

Eric

> 
> Thanks, Juan.
> 


