Re: [Qemu-devel] [RFC/PATCH] Fix guest OS panic when 64bit BAR is present


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [RFC/PATCH] Fix guest OS panic when 64bit BAR is present
Date: Wed, 1 Feb 2012 09:04:33 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Wed, Feb 01, 2012 at 06:44:42PM +1300, Alexey Korolev wrote:
> On 31/01/12 22:43, Avi Kivity wrote:
> > On 01/31/2012 11:40 AM, Avi Kivity wrote:
> >> On 01/27/2012 06:42 AM, Alexey Korolev wrote:
> >>> On 27/01/12 04:12, Avi Kivity wrote:
> >>>> On 01/26/2012 04:36 PM, Michael S. Tsirkin wrote:
> >>>>> On Thu, Jan 26, 2012 at 03:52:27PM +0200, Avi Kivity wrote:
> >>>>>> On 01/26/2012 11:14 AM, Michael S. Tsirkin wrote:
> >>>>>>> On Wed, Jan 25, 2012 at 06:46:03PM +1300, Alexey Korolev wrote:
> >>>>>>>> Hi, 
> >>>>>>>> In this post
> >>>>>>>> http://lists.gnu.org/archive/html/qemu-devel/2011-12/msg03171.html 
> >>>>>>>> I mentioned the issues that occur when a 64bit PCI BAR is present
> >>>>>>>> and a 32bit address range is selected for it.
> >>>>>>>> The issue affects all recent qemu releases and all old and recent
> >>>>>>>> guest Linux kernel versions.
> >>>>>>>>
> >>>>>>>> We've done some investigations. Let me explain what happens.
> >>>>>>>> Assume we have a 64bit BAR of size 32MB mapped at [0xF0000000 -
> >>>>>>>> 0xF2000000].
> >>>>>>>>
> >>>>>>>> When the Linux guest starts it does PCI bus enumeration.
> >>>>>>>> During enumeration the OS sizes 64bit BARs using the following
> >>>>>>>> procedure:
> >>>>>>>> 1. Write all FF's to lower half of 64bit BAR
> >>>>>>>> 2. Write address back to lower half of 64bit BAR
> >>>>>>>> 3. Write all FF's to higher half of 64bit BAR
> >>>>>>>> 4. Write address back to higher half of 64bit BAR
> >>>>>>>>
> >>>>>>>> Linux code is here: 
> >>>>>>>> http://lxr.linux.no/#linux+v3.2.1/drivers/pci/probe.c#L149
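> >>>>>>>>
> >>>>>>>> In pseudo-C the sequence is roughly this (a minimal sketch, not the
> >>>>>>>> kernel's code; pci_cfg_read32()/pci_cfg_write32() are hypothetical
> >>>>>>>> config-space accessors, and bar is the BAR's config-space offset):
> >>>>>>>>
> >>>>>>>> uint32_t lo, hi, sz_lo, sz_hi;
> >>>>>>>> uint64_t size;
> >>>>>>>>
> >>>>>>>> lo = pci_cfg_read32(dev, bar);              /* save lower half */
> >>>>>>>> pci_cfg_write32(dev, bar, 0xFFFFFFFF);      /* 1. all FF's to lower */
> >>>>>>>> sz_lo = pci_cfg_read32(dev, bar);           /* read size mask back */
> >>>>>>>> pci_cfg_write32(dev, bar, lo);              /* 2. restore lower */
> >>>>>>>>
> >>>>>>>> hi = pci_cfg_read32(dev, bar + 4);          /* save upper half */
> >>>>>>>> pci_cfg_write32(dev, bar + 4, 0xFFFFFFFF);  /* 3. all FF's to upper */
> >>>>>>>> sz_hi = pci_cfg_read32(dev, bar + 4);       /* read size mask back */
> >>>>>>>> pci_cfg_write32(dev, bar + 4, hi);          /* 4. restore upper */
> >>>>>>>>
> >>>>>>>> /* strip the low flag bits, then size = ~mask + 1 */
> >>>>>>>> size = ((uint64_t)sz_hi << 32) | (sz_lo & ~0xfULL);
> >>>>>>>> size = ~size + 1;                  /* 0x02000000 = 32MB in this example */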
> >>>>>>>>
> >>>>>>>> What does it mean for qemu?
> >>>>>>>>
> >>>>>>>> At step 1, qemu's pci_default_write_config() receives all FF's for
> >>>>>>>> the lower half of the 64bit BAR. It applies the write mask, which
> >>>>>>>> converts the value to "all FF's - size + 1" (0xFE000000 if size is
> >>>>>>>> 32MB).
> >>>>>>>> Then pci_bar_address() checks whether the BAR address is valid.
> >>>>>>>> Since it is a 64bit BAR it reads 0x00000000FE000000 - this address
> >>>>>>>> looks valid. So qemu updates the topology and asks KVM to update the
> >>>>>>>> mappings with the new range 0xFE000000 - 0xFFFFFFFF for the 64bit
> >>>>>>>> BAR. This usually means a kernel panic on boot if there is another
> >>>>>>>> mapping in the 0xFE000000 - 0xFFFFFFFF range, which is quite common.
> >>>>>>> Do you know why it panics? As far as I can see
> >>>>>>> from the code at
> >>>>>>> http://lxr.linux.no/#linux+v2.6.35.9/drivers/pci/probe.c#L162
> >>>>>>>
> >>>>>>>  171        pci_read_config_dword(dev, pos, &l);
> >>>>>>>  172        pci_write_config_dword(dev, pos, l | mask);
> >>>>>>>  173        pci_read_config_dword(dev, pos, &sz);
> >>>>>>>  174        pci_write_config_dword(dev, pos, l);
> >>>>>>>
> >>>>>>> BAR is restored: what triggers an access between lines 172 and 174?
> >>>>>> Random interrupt reading the time, likely.
> >>>>> Weird, what the backtrace shows is init, unrelated
> >>>>> to interrupts.
> >>>>>
> >>>> It's a bug then.  qemu doesn't undo the mapping correctly.
> >>>>
> >>>> If you have clear instructions, I'll try to reproduce it.
> >>>>
> >>> Well the easiest way to reproduce this is:
> >>>
> >>>
> >>> 1. Get kernel bzImage (version < 2.6.36)
> >>> 2. Apply patch to ivshmem.c
> >>>
> >>> ---
> >>> diff --git a/hw/ivshmem.c b/hw/ivshmem.c
> >>> index 1aa9e3b..71f8c21 100644
> >>> --- a/hw/ivshmem.c
> >>> +++ b/hw/ivshmem.c
> >>> @@ -341,7 +341,7 @@ static void create_shared_memory_BAR(IVShmemState *s, int fd) {
> >>>      memory_region_add_subregion(&s->bar, 0, &s->ivshmem);
> >>>  
> >>>      /* region for shared memory */
> >>> -    pci_register_bar(&s->dev, 2, PCI_BASE_ADDRESS_SPACE_MEMORY, &s->bar);
> >>> +    pci_register_bar(&s->dev, 2, PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64, &s->bar);
> >>>  }
> >>>  
> >>>  static void close_guest_eventfds(IVShmemState *s, int posn)
> >>> ---
> >>>
> >>> 3. Launch qemu with a command like this:
> >>>
> >>> /usr/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm -m 2048 -smp 
> >>> 1,socket=1,cores=1,threads=1 -name centos54 -uuid
> >>> d37daefd-75bd-4387-cee1-7f0b153ee2af -nodefconfig -nodefaults -chardev
> >>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/centos54.monitor,server,nowait
> >>>  -mon chardev=charmonitor,id=monitor,mode=readline -rtc
> >>> base=utc -drive 
> >>> file=/dev/dock200-1/centos54,if=none,id=drive-ide0-0-0,format=raw -device
> >>> ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 
> >>> -drive
> >>> file=/data/CentOS-5.4-x86_64-bin-DVD.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
> >>>  -device
> >>> ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -chardev 
> >>> file,id=charserial0,path=/home/alexey/cent54.log -device
> >>> isa-serial,chardev=charserial0,id=serial0 -usb -vnc 127.0.0.1:0 -k en-us 
> >>> -vga cirrus -device
> >>> virtio-balloon-pci,id=balloon0,bus=pci.0,multifunction=on,addr=0x4.0x0 
> >>> --device ivshmem,size=32,shm="shm" -kernel bzImage -append
> >>> "root=/dev/hda1 console=ttyS0,115200n8 console=tty0"
> >>>
> >>> in other words add: --device ivshmem,size=32,shm="shm"
> >>>
> >>> That is all.
> >>>
> >>> Note: it won't necessarily cause a panic message; on some kernels it
> >>> just hangs or reboots.
> >>>
> >> In fact qemu segfaults for me, since registering a ram region not on a
> >> page boundary is broken.  This happens when the ivshmem BAR is split by
> >> the hpet region, which is less than a page long.
> >>
> > Happens only with qemu-kvm for some reason.  Two separate bugs.
> >
> Well it's quite possible that there are two separate problems.
> 
> 1. One is related to the page boundary.
> 2. The other is the invalid mapping that occurs when we request the
> region size of a 64bit BAR.
> The patch sent previously addresses this sizing behaviour, and so
> avoids the mapping error.

The patch catches what the specific guest is doing but it's a hack.  It's
completely OK to write random values into BARs as long as the claimed
range is not accessed.
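
The window is easy to see in the config-write path (a paraphrased sketch
of hw/pci.c, not verbatim source; bars_overlap() stands in for the real
range check):

void pci_default_write_config(PCIDevice *d, uint32_t addr,
                              uint32_t val, int len)
{
    /* ... merge val into d->config[] through d->wmask ... */

    if (bars_overlap(addr, len)) {
        /* runs on every BAR write, including the guest's 0xFFFFFFFF
         * sizing probe, so the transient intermediate range stays
         * mapped until the guest restores the BAR */
        pci_update_mappings(d);
    }
}

Mapping the transient value is allowed per se; the bug is in not
undoing or surviving that mapping correctly, as noted above.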

> Not sure if it is valid to temporarily occupy a completely wrong memory
> region while we request the size of a PCI BAR.
> 
> This issue needs to be addressed to allow 64-bit PCI allocations to work 
> correctly with older Linux guest kernels.
> 
> Will your core rewrite address the invalid mapping issue? 
> 
> Is it possible to have an early version of the new core so we could
> check the 64bit BAR issues before the release?


-- 
MST


