From: ronnie sahlberg
Subject: Re: [Qemu-block] RFC block/iscsi command timeout
Date: Tue, 2 Jun 2015 09:43:49 -0700
On 26.05.2015 at 12:21, Paolo Bonzini wrote:
> On 26/05/2015 12:06, Kevin Wolf wrote:
>> On 26.05.2015 at 11:44, Paolo Bonzini wrote:
>>> On 26/05/2015 11:37, Kevin Wolf wrote:
>>>>>> If we run into a timeout we theoretically have the following options:
>>>>>> - reconnect
>>>>>> - retry
>>>>>> - error
>>>>>> Just trying to reconnect indefinitely might not be the best option.
>>>>> I would reconnect as Ronnie proposed.
>>>>> Whenever the topic of timeouts is brought up, I'm worried that
>>>>> introducing timeouts (and doing anything except reconnecting) is the
>>>>> same as NFS's soft option, which can actually cause data corruption.
>>>>> So, why would it be safe?
>>>> How would it cause data corruption for qemu, i.e. which of the block
>>>> layer assumptions would be broken?
>>> Reordering of operations. Say you have:
>>>
>>>     guest -> QEMU    write A to sector 1
>>>     QEMU -> NFS      write A to sector 1
>>>     QEMU -> guest    write A to sector 1 timed out
>>>     guest -> QEMU    write B to sector 1
>>>
>>> At this point the two outstanding writes are for the same sector with
>>> different payloads, so it is undefined which one wins.
>>>
>>>     QEMU -> NFS      write B to sector 1
>>>     NFS -> QEMU      write B to sector 1 completed
>>>     QEMU -> guest    write B to sector 1 completed
>>>     NFS -> QEMU      write A to sector 1 completed
>>>                      (QEMU doesn't report this to the guest)
>>>
>>> The guest thinks it has written B, but it's possible that the storage
>>> has written A.
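To make the reordering above concrete, here is a toy C sketch (illustration only, not QEMU or libiscsi code; the single-byte sector and the InFlightWrite type are invented) that replays the two possible completion orders of the in-flight writes:

/*
 * Toy model of the reordering hazard described in the quoted example.
 * After the timeout, write A is still queued at the storage while the
 * guest has already resubmitted the sector as B; the storage may apply
 * the two writes in either order.
 */
#include <stdio.h>

typedef struct {
    char payload;   /* what this in-flight write stores in sector 1 */
} InFlightWrite;

static char replay(InFlightWrite first, InFlightWrite second)
{
    char sector = 0;             /* sector 1 of the toy storage */
    sector = first.payload;      /* storage applies one write ... */
    sector = second.payload;     /* ... then the other; last writer wins */
    return sector;
}

int main(void)
{
    InFlightWrite a = { 'A' }, b = { 'B' };

    printf("storage applies A then B -> sector 1 holds %c\n", replay(a, b));
    printf("storage applies B then A -> sector 1 holds %c\n", replay(b, a));

    /* The guest was only told that B completed, yet the second ordering
     * leaves A on disk: the corruption the quoted example describes. */
    return 0;
}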
>> Consider the situation where you're inside a bdrv_drain_all(), which
>> blocks qemu completely. Trying to reconnect once or twice is probably
>> fine, but if that doesn't work, eventually you want to return an error
>> so that qemu is unstuck.
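A minimal sketch of that bounded policy, assuming an invented try_reconnect() helper rather than any real libiscsi or QEMU function; note that the final return of -EIO is exactly the step the reordering example above warns about:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_RECONNECT_ATTEMPTS 2

/* Stand-in for re-establishing the iSCSI session; not a real libiscsi call. */
static bool try_reconnect(void)
{
    return false;   /* pretend the target stays unreachable */
}

/* Reconnect a bounded number of times, then fail with -EIO so that a
 * caller such as a drain operation is not blocked forever. */
static int handle_command_timeout(void)
{
    for (int attempt = 0; attempt < MAX_RECONNECT_ATTEMPTS; attempt++) {
        if (try_reconnect()) {
            return 0;   /* session restored, the command can be reissued */
        }
    }
    return -EIO;
}

int main(void)
{
    printf("timeout handling returned %d\n", handle_command_timeout());
    return 0;
}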
So you would go for infinite reconnecting? We can SIGKILL it then anyway.

As said before, my idea would be a default of 5000 ms for all sync calls and
no timeout for all async calls coming from the block layer. A user-settable
timeout can optionally be specified via command-line options to define a
timeout for both sync and async calls.
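A small sketch of what those defaults could look like, with invented names (IscsiTimeoutConfig, timeout_config()) rather than QEMU's actual option parsing: sync calls default to 5000 ms, async block-layer requests default to no timeout, and a single user-supplied value overrides both.

#include <stdio.h>

#define DEFAULT_SYNC_TIMEOUT_MS  5000
#define NO_TIMEOUT               0          /* 0 = wait forever */

typedef struct {
    unsigned sync_timeout_ms;    /* synchronous setup/housekeeping calls   */
    unsigned async_timeout_ms;   /* requests submitted by the block layer  */
} IscsiTimeoutConfig;

/* user_timeout_ms == 0 means "not specified on the command line" */
static IscsiTimeoutConfig timeout_config(unsigned user_timeout_ms)
{
    IscsiTimeoutConfig cfg = {
        .sync_timeout_ms  = DEFAULT_SYNC_TIMEOUT_MS,
        .async_timeout_ms = NO_TIMEOUT,
    };
    if (user_timeout_ms) {
        cfg.sync_timeout_ms  = user_timeout_ms;
        cfg.async_timeout_ms = user_timeout_ms;
    }
    return cfg;
}

int main(void)
{
    IscsiTimeoutConfig def = timeout_config(0);
    IscsiTimeoutConfig usr = timeout_config(30000);
    printf("defaults:  sync=%u ms, async=%u ms\n", def.sync_timeout_ms, def.async_timeout_ms);
    printf("user=30 s: sync=%u ms, async=%u ms\n", usr.sync_timeout_ms, usr.async_timeout_ms);
    return 0;
}

Keeping the async default at "no timeout" is presumably what keeps guest-visible requests out of the reordering problem discussed above.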