-------- Original message --------
From: address@hidden
Date: 27/03/2013 9:12 AM (GMT+03:00)
To: address@hidden
Subject: Qemu-devel Digest, Vol 120, Issue 887
Send Qemu-devel mailing list submissions to address@hidden
To subscribe or unsubscribe via the World Wide Web, visit https://lists.nongnu.org/mailman/listinfo/qemu-devel or, via email, send a message with subject or body 'help' to address@hidden
You can reach the person managing the list at address@hidden
When replying, please edit your Subject line so it is more specific than "Re: Contents of Qemu-devel digest..."
Today's Topics:
   1. Re: coroutine: hung when using gthread backend (Wenchao Xia)
   2. Re: [RFC PATCH v4 00/30] ACPI memory hotplug (li guang)
   3. Re: [RFC PATCH v4 00/30] ACPI memory hotplug (li guang)
   4. Re: [RFC] qmp interface for save vmstate to image (Wenchao Xia)
   5. [Bug 1158912] Re: QEMU Version 1.4.0 - SLIRP hangs VM (Kenneth Salerno)
   6. [PATCH] hw/i386/pc: reject to boot a wrong header magic kernel (liguang)
Message: 1
Date: Wed, 27 Mar 2013 10:11:57 +0800
From: Wenchao Xia <address@hidden>
To: Peter Maydell <address@hidden>
Cc: Stefan Hajnoczi <address@hidden>, qemu-devel <address@hidden>, Paolo Bonzini <address@hidden>
Subject: Re: [Qemu-devel] coroutine: hung when using gthread backend
Message-ID: <address@hidden>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 2013-3-26 17:56, Peter Maydell wrote:
> On 26 March 2013 09:54, Stefan Hajnoczi <address@hidden> wrote:
>> On Tue, Mar 26, 2013 at 08:03:50AM +0100, Paolo Bonzini wrote:
>>> coroutine backend gthread hardly works for qemu, only qemu-io and qemu-img.
>>
>> Do you know why it doesn't work?
>
> Because nobody tests it?
>
> -- PMM
>
It is not enabled by default in configure, so it is missed in the tests. I feel a full regression test suite covering the different configure cases is missing.
------------------------------

On Tue, 2013-03-26 at 17:58 +0100, Vasilis Liaskovitis wrote:
> Hi,
>
> On Tue, Mar 19, 2013 at 02:30:25PM +0800, li guang wrote:
> > On Thu, 2013-01-10 at 19:57 +0100, Vasilis Liaskovitis wrote:
> > > > > IIRC q35 supports memory hotplug natively (picked up in some
> > > > > discussion). Is that correct?
> > > >
> > > > From previous discussion I also understand that q35 supports native hotplug.
> > > > Sections 5.1 and 5.2 of the spec describe the MCH registers but the native
> > > > memory hotplug specifics are not yet clear to me. Any pointers from the
> > > > spec are welcome.
> > >
> > > Ping. Could anyone who's familiar with the q35 spec provide some pointers on
> > > native memory hotplug details in the spec? I see pcie hotplug registers but can't
> > > find memory hotplug interface details. If I am not mistaken, the spec is here:
> > > http://www.intel.com/design/chipsets/datashts/316966.htm
> > >
> > > Is the q35 memory hotplug support supposed to be an shpc-like interface geared
> > > towards memory slots instead of pci slots?
> >
> > seems there's no so-called q35-native support
>
> that was also my first impression when scanning the specification. Wasn't native
> memory hotplug capabilities one of the reasons that q35 got picked as the next
> pc chipset?
Um, I can't find the original statement about q35, but I think if we can't find it in Intel's official spec, then we have to say there is no q35-native support.
------------------------------

On Tue, 2013-03-26 at 17:43 +0100, Vasilis Liaskovitis wrote:
> Hi,
>
> On Tue, Mar 19, 2013 at 03:28:38PM +0800, li guang wrote:
> [...]
> > > > > This is v4 of the ACPI memory hotplug functionality. Only the x86_64 target is
> > > > > supported (both i440fx and q35). There are still several issues, but it's
> > > > > been a while since v3 and I wanted to get some more feedback on the current
> > > > > state of the patchseries.
> > > >
> > > > We are working on memory hotplug functionality for the pSeries machine. I'm
> > > > wondering whether and how we can better integrate things. Do you think the
> > > > DIMM abstraction is generic enough to be used in other machine types?
> > >
> > > I think the DimmDevice is generic enough but I am open to other suggestions.
> > >
> > > A related issue is that the patchseries uses a DimmBus to hot-add and hot-remove
> > > DimmDevice. Another approach that has been suggested is to use links<> between
> > > DimmDevices and the dram controller device (piix4 or mch for pc and q35-pc
> > > machines respectively). This would be more similar to the CPUState/qom
> > > patches - see Andreas Färber's earlier reply to this thread.
> > >
> > > I think we should get some consensus from the community/maintainers before we
> > > continue to integrate.
> > >
> > > I haven't updated the series for a while, but I can rework it if there is a
> > > clearer direction from the community.
> > >
> > > Another open issue is reference counting of memoryregions in the qemu memory
> > > model. In order to make memory hot-remove operations safe, we need to remove
> > > a memoryregion only after all users (e.g. both guest and block layer) have
> > > stopped using it,
> >
> > it seems it is mostly up to the user who wants to hot-(un)plug:
> > if the user wants to unplug memory that is the kernel's main memory, the kernel
> > will always be running on it (it never stops) unless powered off.
> > and if the guest stops, all DIMMs should be safe to hot-remove,
> > or else we should do something to let the user unlock all references.
>
> it's not only the guest side that needs to stop using it, we need to make sure
> that the qemu block layer is also not using the memory region anymore. See the 2
> links below for discussion:
>
Can't we simply track this (MemoryRegion) usage with a ref-count? E.g. increment the ref-count every time the mr is used, then decrement it when it is unused, even for cpu_physical_memory_map and other potential users.
> > > see the discussion at
> > > http://lists.gnu.org/archive/html/qemu-devel/2012-10/msg03986.html. There was a
> > > relevant ibm patchset
> > > https://lists.gnu.org/archive/html/qemu-devel/2012-11/msg02697.html
> > > but it was not merged.
>
> thanks,
> - Vasilis
------------------------------
Message: 4
Date: Wed, 27 Mar 2013 11:35:24 +0800
From: Wenchao Xia <address@hidden>
To: Eric Blake <address@hidden>
Cc: Kevin Wolf <address@hidden>, Carsten Otte <address@hidden>, Anthony Liguori <address@hidden>, Pavel Hrdina <address@hidden>, Heiko Carstens <address@hidden>, Juan Quintela <address@hidden>, Stefan Hajnoczi <address@hidden>, Marcelo Tosatti <address@hidden>, Sebastian Ott <address@hidden>, qemu-devel <address@hidden>, Alexander Graf <address@hidden>, Christian Borntraeger <address@hidden>, Cornelia Huck <address@hidden>, Paolo Bonzini <address@hidden>, Dietmar Maurer <address@hidden>, Martin Schwidefsky <address@hidden>
Subject: Re: [Qemu-devel] [RFC] qmp interface for save vmstate to image
Message-ID: <address@hidden>
Content-Type: text/plain; charset=GB2312
After some deeper thinking, I'd like to share more analysis. Saving vmstate is equivalent to snapshotting memory; in theory the methods for doing it come down to:

1. Get a mirror of the region at the moment the "snapshot" request is issued, with the kernel doing copy-on-write (COW) on that region.

2. Get a mirror of it by gradually copying the region out, completing when the clone is in sync with the original region; this is basically similar to migration.
Take a closer look:

1. COW the memory region:
Saving: block I/O and CPU, since no duplicated work is done.
Sacrifice: memory.
Industry improvement solution: NUMA; price: expensive.
Implementation: hard, needs quite some work. Qemu code maintenance: easy.
Detail: This method is the closest to the meaning of "snapshot", but it carries a hidden requirement: reserved memory. On a server in real use today, it is unlikely that a large chunk of memory is kept in reserve. For example, a 4G server will probably run a 3.5G guest, to get the benefits of easy deployment, hardware independence, and whole-machine backup/restore; in that case there is not enough memory left over. Take another, more likely example: a 4G server running two 1.5G guests. Here one guest would first need to be migrated away, which is obviously bad. So a much better solution is to add memory at snapshot time; to do that without hardware plugging, and economically, it needs NUMA plus memory sharing:
    Host1        Host2        Host3
      |            |            |
     mem          mem          mem
      |            |            |
      ------------------------------
                shared mem
Some hosts share a pool of memory for snapshots: they take from it when doing a snapshot and return it to the cluster manager after completion. This is possible on expensive architectures, but hard to do on the x86 architecture, which labels itself cheap. One unrelated question I thought of: does qemu support migrating to a host device? If not, it should support migrating to a block device of fixed size (different from a snapshot: the two mirrors need to sync); when shared memory is present, guests could be migrated to a RAM block device quickly.
Implementation detail: It should be done by adding an API to the kernel, say mem_snapshot(), with which the kernel can COW a region and (if this logic is added as an optimization) write the snapshotted pages out to the far slower shared memory. fork() can do it, but it brings many troubles and would not benefit from the NUMA architecture by moving snapshotted pages to slower memory.
2. Gradually copy out and sync the memory region; there are two ways to do it:

2.1 Migrate to a block device (migrate to fd, or migrate to image):
Saving: memory.
Sacrifice: CPU, block I/O.
Industry improvement solution: flash disk, cheap.
Implementation: easy, based on migration. Qemu code maintenance: easy.
Detail: This is the relatively easier case; we just need to make the size fixed. And flash disks are feasible on the x86 architecture.
2.2 Migrate to a stream, and use another process to receive and rearrange the data:
Saving: memory.
Sacrifice: CPU (very high), block I/O (unless a big buffer is used).
Industry improvement solution: have another host or CPU do it.
Implementation: hard, needs a new qemu tool. Qemu code maintenance: hard; the data must be encoded in qemu, then decoded and rearranged in another process, and every change or newly added device requires changes on both sides.
Detail: It spawns a process to receive the data, or invokes a fake qemu to receive and save it (which needs a lot of memory). Since the code would be hard to maintain, I personally think it is worse than 2.1.
Summary, suggestions:
1) Support both method 1 and method 2.1, treating 2.1 as an improvement of migrate-to-fd. Add a new qmp interface such as "vmstate snapshot" for method 1, to declare it a true snapshot. This allows it to work on different architectures.
2) Propose an API to Linux for method 1, instead of fork(). I'd like to send an RFC to the Linux memory mailing list to get feedback.
-- Best Regards
Wenchao Xia
------------------------------
Message: 5
Date: Wed, 27 Mar 2013 04:17:12 -0000
From: Kenneth Salerno <address@hidden>
To: address@hidden
Subject: [Qemu-devel] [Bug 1158912] Re: QEMU Version 1.4.0 - SLIRP hangs VM
Message-ID: <address@hidden>
Content-Type: text/plain; charset="utf-8"
Sorry for the confusion, I was impatient for the first bisect run to complete - this time I figured out how to automate the testing portion of the git bisect run script so I could walk away and let it run until full completion.
Here is the result:
acbb090b2400f627a801074c4e3e006c7501bb26 is the first bad commit
commit acbb090b2400f627a801074c4e3e006c7501bb26
Author: Andreas Färber <address@hidden>
Date:   Wed Aug 15 14:15:41 2012 +0200
prep: Include devices for ppc64 as well
Allows running qemu-system-ppc64 -M prep for consistency.
Reported-by: Markus Armbruster <address@hidden>
Signed-off-by: Andreas Färber <address@hidden>
Acked-by: Hervé Poussineau <address@hidden>
:040000 040000 efe2ed40eeef1863d210ab089033fdb0ce1eaea5 05c5174a00f99176a57398b32c5af659b8b0096c M default-configs
bisect run success
--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1158912
Title: QEMU Version 1.4.0 - SLIRP hangs VM
Status in QEMU: New
Bug description: (Note: problem is not present in version 1.3.0)
sed -i 's/--static-libs/--static --libs/' configure
CC=i686-pc-mingw32-gcc ./configure \
  --target-list=ppc64-softmmu \
  --enable-debug \
  --enable-sdl \
  --static \
  --enable-fdt && \
sed -i 's/ -Wl,--dynamicbase//g; s/-Wl,--nxcompat //g;' config-host.mak && \
make -j$THREADS && {
  echo "renaming binw.exe to bin.exe..."
  for i in `echo $TARGET_LIST | tr ',' ' '`; do
    BINARCH=`echo $i | sed 's/-softmmu//'`
    mv $i/qemu-system-${BINARCH}w.exe \
       $i/qemu-system-$BINARCH.exe
  done
}
3. From VM:
Command to hang VM: zypper dup
Last message before VM hang:
Retrieving repository 'openSUSE-12.2-12.2-0' metadata -----------------------[|]
To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1158912/+subscriptions
------------------------------
Message: 6
Date: Wed, 27 Mar 2013 14:10:31 +0800
From: liguang <address@hidden>
To: address@hidden, address@hidden
Cc: liguang <address@hidden>
Subject: [Qemu-devel] [PATCH] hw/i386/pc: reject to boot a wrong header magic kernel
Message-ID: <address@hidden>
If the header magic is missing or unexpectedly wrong, we'd better refuse to boot. E.g. I made the mistake of booting a vmlinuz built for MIPS (which I thought was for x86) like this:

qemu-system-x86_64 -kernel vmlinuz -initrd demord

then qemu reported: "qemu: linux kernel too old to load a ram disk", which is misleading.