Re: [Qemu-devel] MSI interrupt support with vioscsi.c miniport driver


From: Wangting (Kathy)
Subject: Re: [Qemu-devel] MSI interrupt support with vioscsi.c miniport driver
Date: Thu, 30 Oct 2014 18:28:44 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Thunderbird/24.6.0


On 2014-10-30 16:48, Vadim Rozenfeld wrote:
> On Thu, 2014-10-30 at 14:54 +0800, Wangting (Kathy) wrote:
>>> On Tue, 2014-02-18 at 13:11 -0800, Nicholas A. Bellinger wrote:
>>>> On Tue, 2014-02-18 at 13:00 -0800, Nicholas A. Bellinger wrote:
>>>>> On Mon, 2014-02-10 at 11:05 -0800, Nicholas A. Bellinger wrote:
>>>>>
>>>>> <SNIP>
>>>>>
>>>>>>>>> Hi Yan,
>>>>>>>>>
>>>>>>>>> So recently I've been doing some KVM guest performance comparisons
>>>>>>>>> between the scsi-mq prototype using virtio-scsi + vhost-scsi, and
>>>>>>>>> Windows Server 2012 with vioscsi.sys (virtio-win-0.1-74.iso) +
>>>>>>>>> vhost-scsi using PCIe flash backend devices.
>>>>>>>>>
>>>>>>>>> I've noticed that small block random performance for the MSFT guest is
>>>>>>>>> at around ~80K IOPs with multiple vioscsi LUNs + adapters, which ends up
>>>>>>>>> being well below what the Linux guest with scsi-mq + virtio-scsi is
>>>>>>>>> capable of (~500K).
>>>>>>>>>
>>>>>>>>> After searching through the various vioscsi registry settings, it
>>>>>>>>> appears that MSIEnabled is being explicitly disabled (0x00000000), which
>>>>>>>>> is different from what vioscsi.inx is currently defining:
>>>>>>>>>
>>>>>>>>> [pnpsafe_pci_addreg_msix]
>>>>>>>>> HKR, "Interrupt Management",, 0x00000010
>>>>>>>>> HKR, "Interrupt Management\MessageSignaledInterruptProperties",, 0x00000010
>>>>>>>>> HKR, "Interrupt Management\MessageSignaledInterruptProperties", MSISupported, 0x00010001, 0
>>>>>>>>> HKR, "Interrupt Management\MessageSignaledInterruptProperties", MessageNumberLimit, 0x00010001, 4
>>>>>>>>>
>>>>>>>>> Looking deeper at the vioscsi.c code, I've noticed that MSI_SUPPORTED=0
>>>>>>>>> is set explicitly at build time in SOURCES + vioscsi.vcxproj, and that the
>>>>>>>>> VioScsiFindAdapter() code always ends up setting msix_enabled = FALSE
>>>>>>>>> here, regardless of MSI_SUPPORTED:
>>>>>>>>>
>>>>>>>>>  
>>>>>>>>> https://github.com/YanVugenfirer/kvm-guest-drivers-windows/blob/master/vioscsi/vioscsi.c#L340
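
(As a purely illustrative aside, not the actual vioscsi.c source: the effect of
such a build-time switch can be sketched in standalone C. With MSI_SUPPORTED
left at 0 the MSI-X probe is compiled out and msix_enabled can never become
TRUE, so the MSISupported registry value has nothing to act on.
probe_msix_capability below is an invented stand-in.)

#include <stdio.h>

#ifndef MSI_SUPPORTED
#define MSI_SUPPORTED 0          /* matches the shipped build described above */
#endif

typedef int BOOLEAN;
#define TRUE  1
#define FALSE 0

/* Invented placeholder for the PCI capability walk a real miniport performs. */
static BOOLEAN probe_msix_capability(void)
{
    return TRUE;
}

int main(void)
{
    BOOLEAN msix_enabled = FALSE;

#if (MSI_SUPPORTED == 1)
    msix_enabled = probe_msix_capability();
#endif

    printf("msix_enabled = %s\n", msix_enabled ? "TRUE" : "FALSE");
    return 0;
}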
>>>>>>>>>
>>>>>>>>> Also, looking at virtio_stor.c for the raw block driver, MSI_SUPPORTED=1
>>>>>>>>> appears to be the default setting for the driver included in the official
>>>>>>>>> virtio-win iso builds, right..?
>>>>>>>>>
>>>>>>>>> Sooo, I'd like to try enabling MSI_SUPPORTED=1 in a test vioscsi.sys
>>>>>>>>> build of my own, but before going down the WDK development rabbit hole,
>>>>>>>>> I'd like to better understand why you've explicitly disabled this logic
>>>>>>>>> within the vioscsi.c code to start..?
>>>>>>>>>
>>>>>>>>> Is there anything that needs to be addressed / carried over from
>>>>>>>>> virtio_stor.c in order to get MSI_SUPPORTED=1 to work with the vioscsi.c
>>>>>>>>> miniport code..?
>>>>>>>
>>>>>>> Hi Nicholas,
>>>>>>>
>>>>>>> I was thinking about enabling MSI in RHEL 6.6 (build 74), but for some
>>>>>>> reason decided to keep it disabled until adding mq support.
>>>>>>>
>>>>>>>
>>>>>>> You definitely should be able to turn on MSI_SUPPORTED, rebuild the
>>>>>>> driver, and switch MSISupported to 1 to make the vioscsi driver work
>>>>>>> in MSI mode.
>>>>>>>    
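
(For reference, the corresponding change on the INF side would presumably be
to flip the last field of the MSISupported line quoted earlier, following the
vioscsi.inx syntax above; a sketch, not a tested change:)

HKR, "Interrupt Management\MessageSignaledInterruptProperties", MSISupported, 0x00010001, 1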
>>>>>>
>>>>>> Thanks for the quick response.  We'll give MSI_SUPPORTED=1 a shot over
>>>>>> the next few days with a test build on Server 2012 / Server 2008 R2 and
>>>>>> see how things go..
>>>>>>
>>>>>
>>>>> Just a quick update on progress.
>>>>>
>>>>> I've been able to successfully build + load an unsigned vioscsi.sys
>>>>> driver on Server 2012 with WDK 8.0.
>>>>>
>>>>> Running with MSI_SUPPORTED=1 against vhost-scsi results in a significant
>>>>> performance and efficiency gain, on the order of 100K to 225K IOPs for a
>>>>> 4K block random I/O workload, depending on the read/write mix.
>>>>>
>>>>
>>>> One other performance-related question..
>>>>
>>>> In vioscsi.c:VioScsiFindAdapter() code, the default setting for
>>>> adaptExt->queue_depth ends up getting set to 32 (pageNum / 4) when
>>>> indirect mode is enabled in the following bits:
>>>>
>>>>     if(adaptExt->indirect) {
>>>>         adaptExt->queue_depth = max(2, (pageNum / 4));
>>>>     } else {
>>>>         adaptExt->queue_depth = pageNum / ConfigInfo->NumberOfPhysicalBreaks - 1;
>>>>     }
>>>>
>>>> Looking at viostor/virtio_stor.c:VirtIoFindAdapter() code, the default
>>>> setting for ->queue_depth appears to be 128 (pageNum):
>>>>
>>>> #if (INDIRECT_SUPPORTED)
>>>>     if(!adaptExt->dump_mode) {
>>>>         adaptExt->indirect = CHECKBIT(adaptExt->features, VIRTIO_RING_F_INDIRECT_DESC);
>>>>     }
>>>>     if(adaptExt->indirect) {
>>>>         adaptExt->queue_depth = pageNum;
>>>>     }
>>>> #else
>>>>     adaptExt->indirect = 0;
>>>> #endif
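
(To make the difference concrete, here is a small standalone C sketch of the
sizing policies discussed in this thread; pageNum = 128 comes from the message
above, cmd_per_lun = 254 is an invented example value, and the min() variant
is the alternative suggested further down:)

#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
    unsigned int pageNum = 128;        /* from the discussion above */
    unsigned int cmd_per_lun = 254;    /* invented example value, not from the thread */

    unsigned int vioscsi_depth  = MAX(2, pageNum / 4);        /* vioscsi, indirect: 32  */
    unsigned int viostor_depth  = pageNum;                    /* viostor, indirect: 128 */
    unsigned int proposed_depth = MIN(cmd_per_lun, pageNum);  /* suggested below:   128 */

    printf("vioscsi indirect queue_depth      = %u\n", vioscsi_depth);
    printf("viostor indirect queue_depth      = %u\n", viostor_depth);
    printf("min(cmd_per_lun, pageNum) variant = %u\n", proposed_depth);
    return 0;
}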
>>>>
>>>> Is there a reason for the lower queue_depth for vioscsi vs. viostor..?
>>>
>>> It's a horrible workaround for the following bug:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1013443
>>>
>>> I'm going to remove it as soon as I find a better solution for it.
>>>
>>> Best regards,
>>> Vadim.
>>>
>>>
>> Hi Vadim,
>>
>> I have found that Bug 1013443 has been closed with a
>> resolution of ERRATA.
>>
>> The Windows device queue depth must be between 20 and 254
>> for StorPortSetDeviceQueueDepth to succeed.
>>
>> So my question is: why can't queue_depth be set to pageNum (128)?
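
(A tiny standalone sketch of the constraint being cited; the 20..254 window is
taken from the message above and depth_is_acceptable is an invented helper. It
simply confirms that 128 falls inside the range StorPortSetDeviceQueueDepth is
said to accept:)

#include <stdio.h>

/* Invented helper mirroring the stated 20..254 requirement. */
static int depth_is_acceptable(unsigned int depth)
{
    return depth >= 20 && depth <= 254;
}

int main(void)
{
    printf("pageNum (128) acceptable:     %s\n", depth_is_acceptable(128) ? "yes" : "no");
    printf("pageNum / 4 (32) acceptable:  %s\n", depth_is_acceptable(32)  ? "yes" : "no");
    return 0;
}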
> 
> It will create a problem in a multi-disk setup, when several
> disks are attached to the same virtio-scsi PCI controller.
> Adding some sort of manually managed SRB queue for storing and
> resubmitting pending requests can solve this problem.
> 
> Cheers,
> Vadim.
> 

Is there a patch for it?
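
(Purely as an illustration of the kind of manually managed SRB queue described
above, not an actual patch: all names below are invented, and a real miniport
would queue SCSI_REQUEST_BLOCK pointers and resubmit them from its completion
path. A minimal FIFO sketch:)

#include <stdio.h>
#include <stdlib.h>

/* Invented stand-in types for the pending-request queue. */
typedef struct PENDING_SRB {
    void               *srb;
    struct PENDING_SRB *next;
} PENDING_SRB;

typedef struct {
    PENDING_SRB *head;
    PENDING_SRB *tail;
} SRB_QUEUE;

/* Park a request that could not be placed into the virtqueue. */
static void srb_queue_push(SRB_QUEUE *q, void *srb)
{
    PENDING_SRB *node = malloc(sizeof(*node));
    if (!node)
        return; /* a real driver would fail the SRB instead of dropping it */
    node->srb  = srb;
    node->next = NULL;
    if (q->tail)
        q->tail->next = node;
    else
        q->head = node;
    q->tail = node;
}

/* Take the oldest parked request, e.g. when a completion frees a slot. */
static void *srb_queue_pop(SRB_QUEUE *q)
{
    PENDING_SRB *node = q->head;
    void *srb;

    if (!node)
        return NULL;
    q->head = node->next;
    if (!q->head)
        q->tail = NULL;
    srb = node->srb;
    free(node);
    return srb;
}

int main(void)
{
    SRB_QUEUE q = { NULL, NULL };
    int fake_srbs[3] = { 1, 2, 3 };
    void *srb;
    int i;

    /* Park three "requests" as if the virtqueue were full ... */
    for (i = 0; i < 3; i++)
        srb_queue_push(&q, &fake_srbs[i]);

    /* ... and resubmit them later, e.g. from the completion path. */
    while ((srb = srb_queue_pop(&q)) != NULL)
        printf("resubmitting SRB %d\n", *(int *)srb);

    return 0;
}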

>>
>> Best wishes,
>> Ting Wang
>>
>>>>
>>>> How about using min(adaptExt->scsi_config.cmd_per_lun, pageNum) instead..?
>>>>
>>>> Thanks!
>>>>
>>>> -nab
>>>>
>>>>



