From: Kirti Wankhede
Subject: Re: [Qemu-devel] [PATCH v4 08/13] vfio: Add save state functions to SaveVMHandlers
Date: Sat, 22 Jun 2019 01:37:47 +0530


On 6/22/2019 1:32 AM, Alex Williamson wrote:
> On Sat, 22 Jun 2019 01:08:40 +0530
> Kirti Wankhede <address@hidden> wrote:
> 
>> On 6/21/2019 8:46 PM, Alex Williamson wrote:
>>> On Fri, 21 Jun 2019 12:08:26 +0530
>>> Kirti Wankhede <address@hidden> wrote:
>>>   
>>>> On 6/21/2019 12:55 AM, Alex Williamson wrote:  
>>>>> On Thu, 20 Jun 2019 20:07:36 +0530
>>>>> Kirti Wankhede <address@hidden> wrote:
>>>>>     
>>>>>> Added .save_live_pending, .save_live_iterate and
>>>>>> .save_live_complete_precopy functions. These functions handle the
>>>>>> pre-copy and stop-and-copy phases.
>>>>>>
>>>>>> In _SAVING|_RUNNING device state or pre-copy phase:
>>>>>> - read pending_bytes
>>>>>> - read data_offset - indicates to the kernel driver that it should write
>>>>>>   data to the staging buffer, which is mmapped.
>>>>>
>>>>> Why is data_offset the trigger rather than data_size?  It seems that
>>>>> data_offset can't really change dynamically since it might be mmap'd,
>>>>> so it seems unnatural to bother re-reading it.
>>>>>     
>>>>
>>>> The vendor driver can change data_offset; it can have a different
>>>> data_offset for device data and for the dirty pages bitmap.
>>>>  
>>>>>> - read data_size - amount of data in bytes written by vendor driver in
>>>>>>   migration region.
>>>>>> - if data section is trapped, pread() number of bytes in data_size, from
>>>>>>   data_offset.
>>>>>> - if data section is mmaped, read mmaped buffer of size data_size.
>>>>>> - Write data packet to file stream as below:
>>>>>> {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data,
>>>>>> VFIO_MIG_FLAG_END_OF_STATE }
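
(For illustration only: a minimal sketch of how such a data packet could be
written to the stream using QEMU's QEMUFile helpers; the helper name is
hypothetical and this is not the patch's actual code.)

    /* Hypothetical sketch: emit one device-data packet as described above. */
    static void vfio_put_data_packet(QEMUFile *f, const uint8_t *buf,
                                     uint64_t data_size)
    {
        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); /* packet marker  */
        qemu_put_be64(f, data_size);                    /* payload length */
        qemu_put_buffer(f, buf, data_size);             /* device data    */
        qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);   /* end of state   */
    }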
>>>>>>
>>>>>> In _SAVING device state or stop-and-copy phase:
>>>>>> a. read config space of device and save to migration file stream. This
>>>>>>    doesn't need to be from the vendor driver. Any other special config
>>>>>>    state from the driver can be saved as data in a following iteration.
>>>>>> b. read pending_bytes - indicates kernel driver to write data to staging
>>>>>>    buffer which is mmapped.    
>>>>>
>>>>> Is it pending_bytes or data_offset that triggers the write out of
>>>>> data?  Why pending_bytes vs data_size?  I was interpreting
>>>>> pending_bytes as the total data size while data_size is the size
>>>>> available to read now, so assumed data_size would be more closely
>>>>> aligned to making the data available.
>>>>>     
>>>>
>>>> Sorry, that's my mistake while editing; it's read data_offset, as in the
>>>> above case.
>>>>  
>>>>>> c. read data_size - amount of data in bytes written by vendor driver in
>>>>>>    migration region.
>>>>>> d. if data section is trapped, pread() from data_offset of size
>>>>>>    data_size.
>>>>>> e. if data section is mmaped, read mmaped buffer of size data_size.
>>>>>
>>>>> Should this read as "pread() from data_offset of data_size, or
>>>>> optionally if mmap is supported on the data area, read data_size from
>>>>> start of mapped buffer"?  IOW, pread should always work.  Same in
>>>>> previous section.
>>>>>     
>>>>
>>>> ok. I'll update.
>>>>  
>>>>>> f. Write data packet as below:
>>>>>>    {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
>>>>>> g. iterate through steps b to f until (pending_bytes > 0)    
>>>>>
>>>>> s/until/while/    
>>>>
>>>> Ok.
>>>>  
>>>>>     
>>>>>> h. Write {VFIO_MIG_FLAG_END_OF_STATE}
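
(For illustration only: a rough, non-authoritative sketch of the
stop-and-copy loop in steps b-g, assuming pending_bytes is a field of
struct vfio_device_migration_info and reusing the patch's
vfio_save_buffer(); the function name is hypothetical and error handling is
abbreviated.)

    static int vfio_save_device_state_sketch(QEMUFile *f, VFIODevice *vbasedev)
    {
        VFIORegion *region = &vbasedev->migration->region.buffer;
        uint64_t pending = 0;

        for (;;) {
            /* step b: read pending_bytes from the migration region */
            if (pread(vbasedev->fd, &pending, sizeof(pending),
                      region->fd_offset +
                      offsetof(struct vfio_device_migration_info,
                               pending_bytes)) != sizeof(pending)) {
                return -EINVAL;
            }
            if (pending == 0) {
                break;              /* step g: nothing left to transfer */
            }
            /* steps c-f: read data_offset/data_size, emit one data packet */
            vfio_save_buffer(f, vbasedev);
        }
        qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);   /* step h */
        return 0;
    }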
>>>>>>
>>>>>> .save_live_iterate runs outside the iothread lock in the migration case,
>>>>>> which could race with an asynchronous call to get the dirty page list,
>>>>>> causing data corruption in the mapped migration region. A mutex is added
>>>>>> here to serialize migration buffer read operations.
>>>>>
>>>>> Would we be ahead to use different offsets within the region for device
>>>>> data vs dirty bitmap to avoid this?
>>>>>    
>>>>
>>>> A lock will still be required to serialize the read/write operations on
>>>> the vfio_device_migration_info structure in the region.
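
(Purely for illustration, the locking pattern being discussed, assuming a
QemuMutex embedded in the migration state; the field name "lock" is
hypothetical.)

    /* Serialize all accesses to vfio_device_migration_info in the region. */
    qemu_mutex_lock(&migration->lock);
    ret = vfio_save_buffer(f, vbasedev);    /* reads data_offset/data_size */
    qemu_mutex_unlock(&migration->lock);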
>>>>
>>>>  
>>>>>> Signed-off-by: Kirti Wankhede <address@hidden>
>>>>>> Reviewed-by: Neo Jia <address@hidden>
>>>>>> ---
>>>>>>  hw/vfio/migration.c | 212 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>>  1 file changed, 212 insertions(+)
>>>>>>
>>>>>> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
>>>>>> index fe0887c27664..0a2f30872316 100644
>>>>>> --- a/hw/vfio/migration.c
>>>>>> +++ b/hw/vfio/migration.c
>>>>>> @@ -107,6 +107,111 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
>>>>>>      return 0;
>>>>>>  }
>>>>>>  
>>>>>> +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev)
>>>>>> +{
>>>>>> +    VFIOMigration *migration = vbasedev->migration;
>>>>>> +    VFIORegion *region = &migration->region.buffer;
>>>>>> +    uint64_t data_offset = 0, data_size = 0;
>>>>>> +    int ret;
>>>>>> +
>>>>>> +    ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
>>>>>> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
>>>>>> +                                             data_offset));
>>>>>> +    if (ret != sizeof(data_offset)) {
>>>>>> +        error_report("Failed to get migration buffer data offset %d",
>>>>>> +                     ret);
>>>>>> +        return -EINVAL;
>>>>>> +    }
>>>>>> +
>>>>>> +    ret = pread(vbasedev->fd, &data_size, sizeof(data_size),
>>>>>> +                region->fd_offset + offsetof(struct vfio_device_migration_info,
>>>>>> +                                             data_size));
>>>>>> +    if (ret != sizeof(data_size)) {
>>>>>> +        error_report("Failed to get migration buffer data size %d",
>>>>>> +                     ret);
>>>>>> +        return -EINVAL;
>>>>>> +    }
>>>>>> +
>>>>>> +    if (data_size > 0) {
>>>>>> +        void *buf = NULL;
>>>>>> +        bool buffer_mmaped = false;
>>>>>> +
>>>>>> +        if (region->mmaps) {
>>>>>> +            int i;
>>>>>> +
>>>>>> +            for (i = 0; i < region->nr_mmaps; i++) {
>>>>>> +                if ((data_offset >= region->mmaps[i].offset) &&
>>>>>> +                    (data_offset < region->mmaps[i].offset +
>>>>>> +                                   region->mmaps[i].size)) {
>>>>>> +                    buf = region->mmaps[i].mmap + (data_offset -
>>>>>> +                                                   region->mmaps[i].offset);
>>>>>
>>>>> So you're expecting that data_offset is somewhere within the data
>>>>> area.  Why doesn't the data always simply start at the beginning of the
>>>>> data area?  ie. data_offset would coincide with the beginning of the
>>>>> mmap'able area (if supported) and be static.  Does this enable some
>>>>> functionality in the vendor driver?    
>>>>
>>>> Do you want to enforce that on the vendor driver?
>>>> From the feedback on the previous version I thought the vendor driver
>>>> should define data_offset within the region:
>>>> "I'd suggest that the vendor driver expose a read-only
>>>> data_offset that matches a sparse mmap capability entry should the
>>>> driver support mmap.  The user should always read or write data from the
>>>> vendor defined data_offset"
>>>>
>>>> This also adds flexibility for the vendor driver: it can define a
>>>> different data_offset for device data and for the dirty page bitmap
>>>> within the same mmapped region.
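
(An illustrative, made-up layout showing how a vendor driver might expose
two different data_offset values inside one migration region; all offsets
are invented for the example.)

    /*
     *  region offset   contents
     *  0x0000          struct vfio_device_migration_info  (trapped)
     *  0x1000          device data        <- data_offset while saving state
     *  0x41000         dirty page bitmap  <- data_offset for dirty page reads
     */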
>>>
>>> I agree, it adds flexibility; the protocol was not evident to me until
>>> I got here, though.
>>>   
>>>>>  Does resume data need to be
>>>>> written from the same offset where it's read?    
>>>>
>>>> No, resume data should be written from the data_offset that the vendor
>>>> driver provided during resume.
> 
> A)
> 
>>> s/resume/save/?
> 
> B)
>  
>>> Or is this saying that on resume the vendor driver is requesting a
>>> specific block of data via data_offset?
>>
>> Correct.
> 
> Which one is correct?  Thanks,
> 

B is correct.

Thanks,
Kirti


> Alex
> 
>>> I think resume is going to be
>>> directed by the user, writing in the same order they received the
>>> data.  Thanks,


