From: Peter Lieven
Subject: Re: [Qemu-devel] [PATCHv2 10/11] iscsi: ignore aio_discard if unsupported
Date: Wed, 10 Jul 2013 16:49:22 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120827 Thunderbird/15.0

On 10.07.2013 16:28, Kevin Wolf wrote:
> On 10.07.2013 at 16:04, Peter Lieven wrote:
>> On 10.07.2013 13:33, Kevin Wolf wrote:
>>> On 27.06.2013 at 15:11, Peter Lieven wrote:
>>>> If the target does not support UNMAP or the request
>>>> is too big, silently ignore the discard request.
>>>>
>>>> Signed-off-by: Peter Lieven <address@hidden>
>>> Why not loop for the "too big" case? You can probably use the same logic
>>> for unmapping the whole device in .bdrv_create and here.
>> Right, but looping in an AIO function seemed not so trivial to me.
>> It seems more and more obvious to me that the best approach would be
>> to convert all the remaining AIO routines to coroutines.
> The pattern for AIO functions is that the real work of submitting
> requests is done in the AIO callback, and it submits new AIO requests
> calling back into the same callback as long as acb->remaining_secs > 0
> (or something like that).
>
> You can still see that kind of thing alive in qed_aio_next_io(); (most
> of?) the rest has been converted to coroutines because it makes the
> code look nicer.
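
For illustration, here is a simplified, self-contained sketch of that
callback-chaining pattern. All names are hypothetical; this is not
QEMU's actual code, just the shape of the idea:

#include <stdio.h>

#define MAX_CHUNK_SECTORS 1024

typedef struct DiscardACB DiscardACB;
typedef void AIOCallback(DiscardACB *acb, int ret);

struct DiscardACB {
    long long sector_num;        /* next sector to submit */
    long long remaining_secs;    /* sectors still to submit */
    AIOCallback *complete;       /* user completion callback */
};

/* Stand-in for an asynchronous UNMAP of one bounded chunk; here it
   "completes" immediately by invoking the callback. */
static void submit_one_chunk(long long start, long long num,
                             DiscardACB *acb, AIOCallback *cb)
{
    printf("UNMAP sectors %lld..%lld\n", start, start + num - 1);
    cb(acb, 0);
}

/* The AIO callback: re-submits the next chunk as long as sectors
   remain, calling back into itself; completes only when done. */
static void discard_cb(DiscardACB *acb, int ret)
{
    if (ret < 0 || acb->remaining_secs == 0) {
        acb->complete(acb, ret);
        return;
    }
    long long num = acb->remaining_secs < MAX_CHUNK_SECTORS
                        ? acb->remaining_secs : MAX_CHUNK_SECTORS;
    long long start = acb->sector_num;
    acb->sector_num += num;
    acb->remaining_secs -= num;
    submit_one_chunk(start, num, acb, discard_cb);
}

static void done(DiscardACB *acb, int ret)
{
    (void)acb;
    printf("discard finished, ret=%d\n", ret);
}

int main(void)
{
    DiscardACB acb = { 0, 2500, done };
    discard_cb(&acb, 0);    /* kick off the first submission */
    return 0;
}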
Would you agree if I leave the easy version in, just to fix the potential
problems when iscsi_aio_discard is called with too high an nb_sectors or
on storage where UNMAP is unsupported?

I will add a TODO noting that the limit of iscsi->max_unmap should be
replaced by a loop once the routine is converted to a coroutine.
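
Roughly like this (a minimal sketch; the IscsiLun fields are simplified
stand-ins for illustration, not necessarily the exact fields in
block/iscsi.c):

typedef struct IscsiLun {
    int lbpu;                 /* target advertises UNMAP (LBP VPD page) */
    unsigned long max_unmap;  /* max LBA count per UNMAP (Block Limits VPD) */
} IscsiLun;

/* Returns 1 if the discard should be silently ignored. */
static int iscsi_discard_should_ignore(const IscsiLun *iscsilun,
                                       unsigned long nb_blocks)
{
    if (!iscsilun->lbpu) {
        return 1;    /* target does not support UNMAP: ignore */
    }
    /* TODO: instead of ignoring requests larger than max_unmap,
       split them into a loop of UNMAP calls once this routine is
       converted to a coroutine. */
    if (iscsilun->max_unmap && nb_blocks > iscsilun->max_unmap) {
        return 1;    /* too big for a single UNMAP: ignore */
    }
    return 0;
}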
>
>> In this case I could add the too-big logic in iscsi_co_discard and
>> simply call it from iscsi_co_write_zeroes.
> I think that would be the nicest solution.
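
For what it's worth, a rough sketch of that coroutine version, reusing
the IscsiLun stand-in from the sketch above (plain functions stand in
for coroutines here; real code would yield while each request is in
flight):

/* One bounded UNMAP; in real code this would issue the SCSI command
   and yield until it completes. Dummy body so the sketch compiles. */
static int unmap_blocks(IscsiLun *iscsilun, unsigned long lba,
                        unsigned long nb)
{
    (void)iscsilun; (void)lba; (void)nb;
    return 0;
}

static int iscsi_co_discard_sketch(IscsiLun *iscsilun,
                                   unsigned long lba,
                                   unsigned long nb_blocks)
{
    if (!iscsilun->lbpu) {
        return 0;                     /* no UNMAP support: nothing to do */
    }
    while (nb_blocks > 0) {
        unsigned long nb = nb_blocks;
        if (iscsilun->max_unmap && nb > iscsilun->max_unmap) {
            nb = iscsilun->max_unmap; /* never exceed the target's limit */
        }
        int ret = unmap_blocks(iscsilun, lba, nb);
        if (ret < 0) {
            return ret;
        }
        lba += nb;
        nb_blocks -= nb;
    }
    return 0;
}

/* write_zeroes could then simply reuse the same loop (assuming the
   target reports LBPRZ, i.e. unmapped blocks read back as zero). */
static int iscsi_co_write_zeroes_sketch(IscsiLun *iscsilun,
                                        unsigned long lba,
                                        unsigned long nb_blocks)
{
    return iscsi_co_discard_sketch(iscsilun, lba, nb_blocks);
}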
I promised to take care of this by 1.7.0 at the latest.

Peter



