Re: [Qemu-devel] [PATCH 3/7] nbd: use BDS refcount
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 3/7] nbd: use BDS refcount
Date: Wed, 03 Jul 2013 09:28:06 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130514 Thunderbird/17.0.6
On 03/07/2013 08:30, Fam Zheng wrote:
>> The close notifier runs when the user invokes a drive_del or eject
>> command from the monitor. The drive_get_ref/drive_put_ref delays the
>> bdrv_delete until after nbd.c has cleaned up all the connections.
> But drive_put_ref is called by the close notifier.
Not necessarily. nbd_export_close calls nbd_client_close, which shuts
down the socket. However, if requests are still being processed, they
will complete only after nbd_export_close returns. Completing the
requests leads to the following call chain:
  nbd_request_put (from nbd_trip)
    calls nbd_client_put
      calls nbd_export_put
        calls exp->close (if the refcount drops to 0)
          calls drive_put_ref
Completion will happen as soon as the main loop runs again, because
after shutdown() the reads and writes will fail. Still, it is
asynchronous, hence the call to drive_put_ref is also asynchronous.
> I think it can be
> omitted, registering a close notifier is enough, and close the export
> when drive_del calls it. It doesn't make more sense w/ drive_get_ref,
> does it?
I think that would cause a dangling pointer if NBD requests are being
processed at the time drive_del runs.
Paolo