Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
Date: Fri, 31 May 2013 10:18:54 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130514 Thunderbird/17.0.6

On 31/05/2013 07:58, Alexey Kardashevskiy wrote:
> On 05/27/2013 05:03 PM, Paolo Bonzini wrote:
>> On 27/05/2013 08:48, Alexey Kardashevskiy wrote:
>>>>>
>>>>> This is only true when the rerror and werror options have the values
>>>>> "ignore" or "report".  See virtio-scsi for an example of how to save the
>>>>> requests using the save_request and load_request callbacks in SCSIBusInfo.
>>>
>>> Sigh.
>>
>> ?
> 
> I thought the series was ready to go, but I was wrong. Furthermore, when I
> got to the point where I could actually test the save/restore for vscsi_req,
> migration was totally broken on PPC and it took some time to fix it :-/

It is ready.  I was just pointing out that it's not _production_ ready.

(Sorry, I'm unusually terse these days).
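For reference, the hooks in question are the save_request/load_request
callbacks in SCSIBusInfo.  Below is a minimal sketch, loosely modeled on
virtio-scsi's implementation; the vscsi_req fields (qtag, iu, sreq) and the
exact serialization are illustrative assumptions, not the actual patch:

/* Hypothetical sketch -- not the actual patch.  Headers and the rest of
 * the vscsi_req definition are elided. */
static void vscsi_save_request(QEMUFile *f, SCSIRequest *sreq)
{
    vscsi_req *req = sreq->hba_private;

    /* Save only the HBA-private state needed to rebuild the request. */
    qemu_put_be32(f, req->qtag);
    qemu_put_buffer(f, (uint8_t *)&req->iu, sizeof(req->iu));
}

static void *vscsi_load_request(QEMUFile *f, SCSIRequest *sreq)
{
    vscsi_req *req = g_malloc0(sizeof(*req));

    req->qtag = qemu_get_be32(f);
    qemu_get_buffer(f, (uint8_t *)&req->iu, sizeof(req->iu));
    req->sreq = sreq;

    /* The return value becomes sreq->hba_private on the destination. */
    return req;
}

static const struct SCSIBusInfo vscsi_scsi_info = {
    /* ... existing fields and callbacks ... */
    .save_request = vscsi_save_request,
    .load_request = vscsi_load_request,
};

The generic SCSI layer migrates the common SCSIRequest state; the callbacks
only (de)serialize whatever HBA-private data is needed to resume or re-issue
the request on the destination.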

> I run QEMU as (this is the destination, the source just does not have
> -incoming):
> ./qemu-system-ppc64 \
>  -L "qemu-ppc64-bios/" \
>  -device "spapr-vscsi,id=ibmvscsi0" \
>  -drive "file=virtimg/fc18guest,if=none,id=dddrive0,readonly=off,format=blkdebug,media=disk,werror=stop,rerror=stop" \
>  -device "scsi-disk,id=scsidisk0,bus=ibmvscsi0.0,channel=0,scsi-id=0,lun=0,drive=dddrive0,removable=off" \
>  -incoming "tcp:localhost:4000" \
>  -m "1024" \
>  -machine "pseries" \
>  -nographic \
>  -vga "none" \
>  -enable-kvm
> 
> Am I using werror/rerror correctly?

Yes.

> I did not really understand how to use blkdebug or what else to hack in
> raw-posix, but the point is that I cannot get QEMU into a state with at least
> one vscsi_req.active==1; they are always inactive no matter what I do. I ran
> 10 instances of "dd if=/dev/sda of=/dev/null bs=4K" (on an 8GB image with
> FC18) and increased the migration speed to 500MB/s, with no effect.

No, that doesn't help.

> How do you trigger the situation where there are active (in-flight) requests
> which have to be migrated?

You need to trigger an error.  For example, you could use a sparse image
on an almost-full partition and let "dd" fill your disk.  Then migrate to
another instance of QEMU on the same machine; the destination should complete
the migration but fail to start the machine.  When you free space on that
partition and "cont" on the destination, it should resume.
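Concretely, the sequence might look something like this (image name, sizes,
and mount points are made up for illustration):

# On the source host, put a sparse image on a nearly-full partition:
qemu-img create -f qcow2 /smallfs/fill-test.qcow2 20G
# Boot the guest from it with werror=stop,rerror=stop (as in the command
# line above) and, inside the guest, fill the disk:
dd if=/dev/zero of=/mnt/big bs=1M
# The host write eventually fails with ENOSPC and the VM stops with the
# request still pending.  Migrate now, free some space on /smallfs, then
# resume from the destination monitor:
(qemu) cont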

> And another question (sorry, I am not very familiar with the terminology,
> but cc:Ben is :) ) - what happens with indirect requests if migration happens
> in the middle of handling such a request? virtio-scsi does not seem to handle
> this situation in any special way; it just reconstructs the whole request and
> that's it.

What are indirect requests?

Paolo


