From: Chun Yan Liu
Subject: Re: [Qemu-devel] [PATCH V2] Add -f option to qemu-nbd
Date: Tue, 22 Nov 2011 23:51:59 -0700
I've had a look at the nbd driver code and the trace log, and it is now clear why the previously mentioned problem happens:

1st time: qemu-nbd -c /dev/nbd0 disk.img
    nbd_init: sends these ioctls in order: NBD_SET_BLKSIZE, NBD_SET_SIZE, NBD_CLEAR_SOCK, NBD_SET_SOCK
    nbd_client: sends NBD_DO_IT (the kernel then handles requests, using nbd_device->sock)

2nd time: qemu-nbd -c /dev/nbd0 disk1.img
    nbd_init: sends the same ioctls to the same nbd device, which resets nbd_device->sock
    nbd_client: still sends NBD_DO_IT; the kernel finds one client is already connected, returns EBUSY, and clears the socket. The result is that nbd_device->sock is cleared, so the 1st "qemu-nbd -c" can no longer handle any requests, including reading the partition table.
Given the code logic above, if taking the lock at an earlier point is not acceptable, then removing NBD_CLEAR_SOCK from the nbd_init phase also solves the problem. In fact, if the cleanup work is done properly, I think that ioctl is not needed at all. Any comments?

diff --git a/nbd.c b/nbd.c
index e6c931c..067a57b 100644
--- a/nbd.c
+++ b/nbd.c
@@ -386,15 +386,6 @@ int nbd_init(int fd, int csock, uint32_t flags, off_t size, size_t blocksize)
         return -1;
     }

-    TRACE("Clearing NBD socket");
-
-    if (ioctl(fd, NBD_CLEAR_SOCK) == -1) {
-        int serrno = errno;
-        LOG("Failed clearing NBD socket");
-        errno = serrno;
-        return -1;
-    }
-
     TRACE("Setting NBD socket");

     if (ioctl(fd, NBD_SET_SOCK, csock) == -1) {