Re: [Qemu-devel] [PATCH v1 2/2] intel-iommu: Extend address width to 48 bits


From: Prasad Singamsetty
Subject: Re: [Qemu-devel] [PATCH v1 2/2] intel-iommu: Extend address width to 48 bits
Date: Thu, 11 Jan 2018 08:19:07 -0800
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.5.2



On 1/10/2018 6:46 PM, Liu, Yi L wrote:
-----Original Message-----
From: Qemu-devel [mailto:address@hidden] On Behalf Of Prasad Singamsetty
Sent: Thursday, January 11, 2018 8:06 AM
To: Liu, Yi L <address@hidden>
Cc: address@hidden; address@hidden; address@hidden; qemu-address@hidden; address@hidden; address@hidden; address@hidden; address@hidden
Subject: Re: [Qemu-devel] [PATCH v1 2/2] intel-iommu: Extend address width to 48 bits


Hi Yi L,

On 12/1/2017 3:29 AM, Liu, Yi L wrote:
On Tue, Nov 14, 2017 at 06:13:50PM -0500, address@hidden wrote:
From: Prasad Singamsetty <address@hidden>

The current implementation of the Intel IOMMU code only supports a 39-bit
IOVA address width. This patch provides a new parameter (x-aw-bits) for
intel-iommu to extend its address width to 48 bits, while keeping the
default the same (39 bits). The reason for not changing the default is to
avoid potential compatibility problems with live migration of intel-iommu
enabled QEMU guests. The only valid values for the 'x-aw-bits' parameter
are 39 and 48.

After enabling the larger address width (48), we should be able to map
larger IOVA addresses in the guest, for example in a QEMU guest configured
with large memory (>= 1TB). To check whether the 48-bit address width is
enabled, grep the guest dmesg output for the line:
"DMAR: Host address width 48".

Signed-off-by: Prasad Singamsetty <address@hidden>
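
As a quick illustration of the intended usage (a sketch only, not taken
from the patch; everything other than the x-aw-bits property is a
placeholder for whatever machine/device configuration is in use):

    qemu-system-x86_64 -machine q35,accel=kvm,kernel-irqchip=split \
        -device intel-iommu,intremap=on,x-aw-bits=48 \
        ...

Inside the guest, the wider address width can then be confirmed with:

    dmesg | grep "DMAR: Host address width"
    # with x-aw-bits=48 this should report: DMAR: Host address width 48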

Prasad,

Have you tested the scenario with a physical device assigned to a guest?

Sorry for the long delay in following up on this.

I did some testing with vfio-pci devices assigned to the guest.
This was done on the latest QEMU code base (2.11.50).

Here are the test cases/results (an example invocation is sketched after
the list):

1. Booting the VM with one or two vfio-pci (network) devices
     and multiple memory size configs (up to 256G). The assigned PCI
     devices (network interfaces) worked fine and there were no issues
     in using these devices. This test was run for both address
     widths (39 and 48).
2. If the guest VM is configured with 512G and the address
     width is the default 39 bits, then the guest OS fails to
     boot due to DMA failures. The same is observed without
     the patch set applied. The guest OS ends up booting into the
     dracut shell. This problem is not seen if we set the address
     width to 48 bits. So, the patch set addresses a latent bug
     with large memory configs.
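
For reference, a test invocation of this shape can be used (a sketch only,
not the exact command from these tests; the host PCI address, memory size,
and disk image are placeholders):

    qemu-system-x86_64 -machine q35,accel=kvm,kernel-irqchip=split \
        -device intel-iommu,intremap=on,x-aw-bits=48 \
        -m 256G -smp 8 \
        -device vfio-pci,host=0000:3b:00.0 \
        -drive file=guest.img,format=raw,if=virtio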

ISSUE - VM could take a long time to boot with vfio-pci devices

The qemu process can take a long time to initialize the VM when a vfio-pci
device is configured, depending on the memory size. For small memory sizes
(less than 32G) it is not noticeable (<30s). For larger memory sizes, the
delay ranges from several minutes to much longer (2-40min). For more than
512G, the qemu process appears to hang, but can be interrupted. This
behavior is also observed without the patch set applied. The slowness is
due to the VFIO_IOMMU_MAP_DMA ioctl taking a long time to map the system
RAM assigned to the guest. This happens while the qemu process is
initializing the vfio device, where it maps all the assigned RAM memory
regions. Here is the stack trace from gdb:

#0  vfio_dma_map (container=0x5555582709d0, iova=4294967296,
                    size=547608330240, vaddr=0x7f7fd3e00000,
                    readonly=false)
      at /home/psingams/qemu-upstream-v2/hw/vfio/common.c:250
#1  0x000055555584f471 in vfio_listener_region_add(
                    listener=0x5555582709e0,
                    section=0x7fffffffc7f0)
      at /home/psingams/qemu-upstream-v2/hw/vfio/common.c:521
#2  0x00005555557f08fc in listener_add_address_space (
                    listener=0x5555582709e0, as=0x55555813b790)
      at /home/psingams/qemu-upstream-v2/memory.c:2600
#3  0x00005555557f0bbe in memory_listener_register (
                    listener=0x5555582709e0, as=0x55555813b790)
      at /home/psingams/qemu-upstream-v2/memory.c:2643
#4  0x00005555558511ef in vfio_connect_container (group=0x555558270960,
                    as=0x55555813b790, errp=0x7fffffffdae8)
      at /home/psingams/qemu-upstream-v2/hw/vfio/common.c:1130
****
(gdb) print/x size
$2 = 0x7f80000000

This is before the guest OS gets to boot. The host is running a 4.15.0-rc6
kernel with qemu version 2.11.50.

I am not sure if this is a known issue and whether someone is already
working on fixing the implementation of the VFIO_IOMMU_MAP_DMA ioctl.
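
For reference, the mapping in frame #0 above boils down to one VFIO type1
ioctl per guest RAM region. A simplified sketch, based on the linux/vfio.h
UAPI rather than the exact hw/vfio/common.c code:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Rough sketch of what vfio_dma_map() ends up asking the host for: a
 * single VFIO_IOMMU_MAP_DMA call that pins and maps the whole RAM
 * region.  For a ~510G region (size 0x7f80000000 above), pinning every
 * page is what makes the ioctl slow.
 */
static int map_region(int container_fd, uint64_t iova, uint64_t size,
                      void *vaddr, int readonly)
{
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ,
        .vaddr = (uint64_t)(uintptr_t)vaddr,
        .iova  = iova,
        .size  = size,
    };

    if (!readonly) {
        map.flags |= VFIO_DMA_MAP_FLAG_WRITE;
    }

    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}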

It seems to be the same issue as the one reported by Bob.
https://lists.gnu.org/archive/html/qemu-devel/2017-12/msg05098.html

From chatting with them, the reason looks to be not enough memory in the
host. How about the memory size in your host?

The host system has 1.2TB of memory and just one VM with one vfio-pci
device assigned to it. I don't think it is the same issue as not
enough memory.

Regards,
--Prasad


This issue is not related to this patch set and needs to be investigated
separately.

Please let me know if there are other comments on this patch set.


Regards,
Yi L



