From: Peter Lieven
Subject: Re: [Qemu-devel] [PATCH 4/4] block: avoid creating oversized writes in multiwrite_merge
Date: Mon, 22 Sep 2014 11:43:22 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.1.1
On 19.09.2014 15:33, Paolo Bonzini wrote:
> On 19/09/2014 00:56, Peter Lieven wrote:
>>> So I think if we treat it just as a hint for multiwrite, we can avoid
>>> writing code to split oversized requests. They always worked so far,
>>> we can certainly wait until we have a real bug fix.
>> I would not treat this as a hint. I would use it in cases where we
>> definitely know an absolute hard limit for I/O request size. Otherwise
>> the value for bs->bl.max_transfer_length should be 0. If there comes in
>> an oversized request, we fail it as early as possible.
> That's the part that I'd rather not touch, at least not without doing
> request splitting.
This series does not aim at touching the default behaviour. The default for
max_transfer_length is 0 (no limit). max_transfer_length is a limit that MUST
be satisfied, otherwise the request will fail. Patch 2 aims at catching this
failure earlier in the stack. Currently, we only have such a limit for iSCSI.
Without Patch 2 the request would fail only after we have sent the command to
the target. And without Patch 4 it may happen that multiwrite_merge runs into
the limit.

Maybe I should adjust the description of max_transfer_length?

Peter
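PS: to make the difference between Patch 2 and Patch 4 concrete, here is a
rough C sketch of the two checks being discussed. The struct layouts and the
helper names (bdrv_check_request_size, multiwrite_would_exceed_limit) are
simplified illustrations under the assumption that bs->bl.max_transfer_length
is counted in sectors with 0 meaning "no limit"; this is not the code from the
actual patches.

/*
 * Sketch only: fail oversized requests early, and keep multiwrite_merge
 * from producing a merged request that exceeds the transfer limit.
 * Names and layouts are illustrative assumptions, not the real patches.
 */
#include <errno.h>
#include <stdbool.h>

typedef struct BlockLimits {
    int max_transfer_length;      /* in sectors, 0 means "no limit" */
} BlockLimits;

typedef struct BlockDriverState {
    BlockLimits bl;
} BlockDriverState;

typedef struct BlockRequest {
    long long sector;             /* first sector of the request */
    int nb_sectors;               /* request length in sectors */
} BlockRequest;

/* Patch 2 idea: reject an oversized request before it is sent out. */
static int bdrv_check_request_size(BlockDriverState *bs, int nb_sectors)
{
    if (bs->bl.max_transfer_length &&
        nb_sectors > bs->bl.max_transfer_length) {
        return -EINVAL;
    }
    return 0;
}

/* Patch 4 idea: do not merge two adjacent writes if the merged request
 * would exceed the transfer limit. */
static bool multiwrite_would_exceed_limit(BlockDriverState *bs,
                                          const BlockRequest *a,
                                          const BlockRequest *b)
{
    int merged = a->nb_sectors + b->nb_sectors;

    return bs->bl.max_transfer_length &&
           merged > bs->bl.max_transfer_length;
}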