
Re: [Qemu-devel] [PATCH V4 00/19] Support more virtio queues


From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH V4 00/19] Support more virtio queues
Date: Fri, 20 Mar 2015 13:11:52 +0800



On Thu, Mar 19, 2015 at 5:23 PM, Michael S. Tsirkin <address@hidden> wrote:
On Thu, Mar 19, 2015 at 03:42:56PM +0800, Jason Wang wrote:
On Thu, Mar 19, 2015 at 3:32 PM, Michael S. Tsirkin <address@hidden> wrote:
 >On Thu, Mar 19, 2015 at 01:24:53PM +0800, Jason Wang wrote:
 >>     On Wed, Mar 18, 2015 at 8:58 PM, Michael S. Tsirkin
 >><address@hidden> wrote:
 >> >On Wed, Mar 18, 2015 at 05:34:50PM +0800, Jason Wang wrote:
 >> >> We current limit the max virtio queues to 64. This is not sufficient
 >> >> to support multiqueue devices (e.g recent Linux support up to 256
 >> >> tap queues). So this series tries to let virtio to support more
 >> >> queues.
 >> >>
 >> >> No much works need to be done except:
 >> >> - Introducing transport specific queue limitation.
 >> >> - Let each virtio transport to use specific limit.
 >> >> - Speedup the MSI-X masking and unmasking through per vector queue
 >> >>   list, and increase the maximum MSI-X vectors supported by qemu.
 >> >> - With the above enhancements, increase the maximum number of
 >> >>   virtqueues supported by PCI from 64 to 513.
 >> >> - Compat the changes for legacy machine types.
 >> >
 >> >What are the compatibility considerations here?
 >> Two considerations:
 >> 1) To keep msix bar size to 4K for legacy machine types
 >> 2) Limit the pci queue max to 64 for legacy machine types
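(A rough sketch of how these two limits could be pinned for pre-2.4 machine types with QEMU-style {.driver, .property, .value} compat entries; the struct below is a local stand-in for QEMU's GlobalProperty so the snippet stands alone, and the property names "x-msix-bar-4k" and "x-max-queues" are hypothetical, not the ones the series actually uses.)

    /* Local stand-in for QEMU's GlobalProperty, for a self-contained sketch. */
    typedef struct CompatProp {
        const char *driver;
        const char *property;
        const char *value;
    } CompatProp;

    static const CompatProp pc_compat_2_3_virtio[] = {
        /* 1) keep the MSI-X BAR size at 4K for legacy machine types */
        { "virtio-pci", "x-msix-bar-4k", "on" },
        /* 2) keep the PCI queue max at 64 for legacy machine types */
        { "virtio-pci", "x-max-queues", "64" },
    };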
 >
 >2 seems not relevant to me.
If we don't limit this, consider migration from 2.4 to 2.3.

Before migration:
 write 0 to queue_sel
 write 100 to queue_sel
 read queue_sel will get 100

but after migration:
 write 0 to queue_sel
 write 100 to queue_sel
 read queue_sel will get 0

The hardware behavior changes after migration.
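The difference comes from out-of-range selector writes being silently ignored. A small standalone sketch of that behaviour (simplified; not the actual QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified model of a QUEUE_SEL write: an out-of-range selector is
     * silently dropped, so the result depends on the device's queue limit. */
    static uint16_t write_queue_sel(uint16_t current, uint16_t val,
                                    uint16_t queue_max)
    {
        return val < queue_max ? val : current;
    }

    int main(void)
    {
        /* Guest writes 0 then 100, as in the scenario above. */
        uint16_t sel_24 = write_queue_sel(0, 100, 513);  /* 2.4 limit */
        uint16_t sel_23 = write_queue_sel(0, 100, 64);   /* 2.3 limit */

        printf("queue_max=513: read queue_sel -> %u\n", sel_24);  /* 100 */
        printf("queue_max=64:  read queue_sel -> %u\n", sel_23);  /* 0 */
        return 0;
    }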

But this driver is out of spec - drivers are not supposed to select
non-existent queues. So this doesn't matter.

Technically, we need to make sure there is no change in behavior after migration. The fix is not hard, and leaving things like this will make issues hard to debug after migration; it will also be too late to fix if we find a 'buggy' driver in the future.

As for the spec, we already have plenty of examples of drivers or devices being out of spec, so we cannot be sure that no driver will ever depend on undocumented behavior.


Another reason is to avoid wasting memory on the extra virtqueues allocated for
 legacy machine types.

If that's a significant amount of memory, we need to work
to reduce memory consumption for new machine types.

It will save about 38K if 513 is the queue max, and it will save more if we increase the limit in the future or if new members are added to VirtQueue.
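(Back-of-the-envelope check of that number, assuming the VirtQueue array is allocated per device and sizeof(VirtQueue) is on the order of 85-90 bytes; both are assumptions for the sketch, not values measured from the tree.)

    #include <stdio.h>

    /* 449 extra VirtQueue slots per device when the limit grows from 64 to 513.
     * ~86 bytes is an assumed sizeof(VirtQueue), only for a rough estimate. */
    #define OLD_QUEUE_MAX   64
    #define NEW_QUEUE_MAX   513
    #define ASSUMED_VQ_SIZE 86

    int main(void)
    {
        unsigned long extra =
            (unsigned long)(NEW_QUEUE_MAX - OLD_QUEUE_MAX) * ASSUMED_VQ_SIZE;

        printf("extra per-device allocation: %lu bytes (~%lu KB)\n",
               extra, extra / 1024);   /* 38614 bytes, roughly 38K */
        return 0;
    }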


Not many people use the compat machine types, especially
upstream.

--
MST

But there are still users, and we do a lot to maintain compatibility even upstream. On the other hand, this can also reduce downstream-specific code.



