On Thu, Mar 19, 2015 at 3:32 PM, Michael S. Tsirkin
<address@hidden> wrote:
>On Thu, Mar 19, 2015 at 01:24:53PM +0800, Jason Wang wrote:
>> On Wed, Mar 18, 2015 at 8:58 PM, Michael S. Tsirkin
>><address@hidden> wrote:
>> >On Wed, Mar 18, 2015 at 05:34:50PM +0800, Jason Wang wrote:
>> >> We currently limit the maximum number of virtio queues to 64. This is
>> >> not sufficient to support multiqueue devices (e.g. recent Linux
>> >> supports up to 256 tap queues), so this series lets virtio support
>> >> more queues. Not much work needs to be done except:
>> >> - Introducing a transport-specific queue limit.
>> >> - Letting each virtio transport use its specific limit.
>> >> - Speeding up MSI-X masking and unmasking through a per-vector queue
>> >>   list, and increasing the maximum number of MSI-X vectors supported
>> >>   by qemu.
>> >> - With the above enhancements, increasing the maximum number of
>> >>   virtqueues supported by PCI from 64 to 513.
>> >> - Keeping the changes compatible with legacy machine types.
>> >
>> >What are the compatibility considerations here?
>> Two considerations:
>> 1) To keep msix bar size to 4K for legacy machine types
>> 2) Limit the pci queue max to 64 for legacy machine types
>
>2 seems not relevant to me.
If we don't limit this, consider a migration from 2.4 to 2.3.

Before migration (queue limit 513):
  write 0 to queue_sel
  write 100 to queue_sel
  read queue_sel -> returns 100

But after migration (queue limit 64):
  write 0 to queue_sel
  write 100 to queue_sel
  read queue_sel -> returns 0

So the guest-visible hardware behavior changes across the migration.