From: Ming Lei
Subject: Re: [Qemu-devel] [PATCH 07/15] dataplane: use object pool to speed up allocation for virtio blk request
Date: Fri, 1 Aug 2014 15:42:05 +0800

On Thu, Jul 31, 2014 at 5:18 PM, Paolo Bonzini <address@hidden> wrote:
> On 31/07/2014 05:22, Ming Lei wrote:
>>>
>>> The problem is that g_slice here is not using the slab-style allocator
>>> because the object is larger than roughly 500 bytes.  One solution would
>>> be to make virtqueue_pop/vring_pop allocate a VirtQueueElement of the
>>> right size (and virtqueue_push/vring_push free it), as mentioned in the
>>> review of patch 8.
>> Unfortunately neither the iovec array nor the addr array fits into 500
>> bytes, :-( Not to mention that all users of VirtQueueElement would need
>> to be changed too; I'd hate to pull that work into this patchset, :-)
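
A minimal sketch of the right-sized allocation suggested above, assuming a
single allocation that places the address and iovec arrays directly after a
small header; the ElemSized struct and the elem_sized_new() helper are made
up for illustration and are not QEMU's actual VirtQueueElement or API:

    /* Sketch only: a simplified, right-sized stand-in for VirtQueueElement,
     * sized to the descriptor chain that was actually popped instead of
     * VIRTQUEUE_MAX_SIZE.  Alignment details are glossed over here. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/uio.h>

    typedef struct ElemSized {
        unsigned int index;
        unsigned int out_num;
        unsigned int in_num;
        uint64_t *addr;        /* out_num + in_num guest addresses */
        struct iovec *sg;      /* out_num + in_num iovecs */
    } ElemSized;

    static ElemSized *elem_sized_new(unsigned int out_num, unsigned int in_num)
    {
        unsigned int total = out_num + in_num;
        ElemSized *e = calloc(1, sizeof(*e)
                                 + total * sizeof(uint64_t)
                                 + total * sizeof(struct iovec));

        if (!e) {
            return NULL;
        }
        e->out_num = out_num;
        e->in_num = in_num;
        e->addr = (uint64_t *)(e + 1);              /* storage after the header */
        e->sg = (struct iovec *)(e->addr + total);
        return e;
    }

With something along these lines, virtqueue_pop could size the element to the
chain it actually pops, so typical requests would stay well under the ~500-byte
threshold where g_slice stops using its slab-style allocator.
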
>
> Well, the point of dataplane was not just to get maximum iops.  It was
> also to provide guidance in the work necessary to improve the code and
> get maximum iops without special-casing everything.  This can be a lot
> of work indeed.
>
>>>
>>> However, I now remembered that VirtQueueElement is a mess because it's
>>> serialized directly into the migration state. :(  So you basically
>>> cannot change it without mucking with migration.  Please leave out patch
>>> 8 for now.
>> The save_device code serializes elem in this way:
>>
>>     qemu_put_buffer(f, (unsigned char *)&req->elem,
>>                     sizeof(VirtQueueElement));
>>
>> so I am wondering why this patch would break migration.
>
> Because you change the on-wire format and break migration from 2.1 to
> 2.2.  Sorry, I wasn't clear enough.
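
A self-contained sketch of the problem Paolo describes: because the element is
written with qemu_put_buffer() as raw struct bytes, the stream format *is* the
in-memory layout, and any change to the struct changes what the destination
expects. The elem_v1/elem_v2 structs and the byte buffer below are hypothetical
stand-ins for VirtQueueElement and QEMUFile, not QEMU code:

    #include <stdio.h>
    #include <string.h>

    struct elem_v1 { unsigned int index; unsigned long addr[8]; };  /* old layout */
    struct elem_v2 { unsigned int index; unsigned long addr[4]; };  /* changed layout */

    int main(void)
    {
        unsigned char stream[sizeof(struct elem_v1)];
        struct elem_v1 src = { .index = 42 };

        /* source side: the equivalent of qemu_put_buffer(f, &elem, sizeof(elem)) */
        memcpy(stream, &src, sizeof(src));

        /* a destination built with the changed struct would read a different
         * number of bytes at different offsets, which is the 2.1 to 2.2
         * compatibility break described above */
        printf("source writes %zu bytes, changed destination expects %zu bytes\n",
               sizeof(struct elem_v1), sizeof(struct elem_v2));
        return 0;
    }
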

That is really a mess, but in the future we could still convert VirtQueueElement
into a smarter structure and keep the original layout only for save/load, with a
conversion between the two structures done in the save/load path.
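
A sketch of that conversion idea, assuming a compact runtime element is expanded
into the legacy fixed-size layout only in the save path (load would do the
reverse); LegacyElem, CompactElem, and compact_to_legacy() are hypothetical
names, not QEMU structures:

    #include <stdint.h>
    #include <string.h>

    #define LEGACY_MAX 1024                 /* stand-in for VIRTQUEUE_MAX_SIZE */

    typedef struct LegacyElem {             /* mirrors the on-wire (old) layout */
        unsigned int index;
        unsigned int out_num;
        unsigned int in_num;
        uint64_t addr[LEGACY_MAX];
    } LegacyElem;

    typedef struct CompactElem {            /* right-sized runtime representation */
        unsigned int index;
        unsigned int out_num;
        unsigned int in_num;
        uint64_t *addr;                     /* out_num + in_num entries */
    } CompactElem;

    static void compact_to_legacy(const CompactElem *c, LegacyElem *l)
    {
        memset(l, 0, sizeof(*l));
        l->index = c->index;
        l->out_num = c->out_num;
        l->in_num = c->in_num;
        memcpy(l->addr, c->addr, (c->out_num + c->in_num) * sizeof(uint64_t));
        /* the filled-in LegacyElem can then be written exactly as today:
         * qemu_put_buffer(f, (unsigned char *)l, sizeof(*l)); */
    }

This way the migration stream keeps the existing format while the runtime code
uses the smaller allocation.
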


Thanks,


