[Qemu-devel] Re: [PATCH 3/3] Drop internal bdrv_pread()/bdrv_pwrite() APIs


From: Avi Kivity
Subject: [Qemu-devel] Re: [PATCH 3/3] Drop internal bdrv_pread()/bdrv_pwrite() APIs
Date: Sun, 08 Feb 2009 21:37:49 +0200
User-agent: Thunderbird 2.0.0.19 (X11/20090105)

Anthony Liguori wrote:
> Avi Kivity wrote:
>> Now that scsi generic no longer uses bdrv_pread() and bdrv_pwrite(),
>> we can drop the corresponding internal APIs, which overlap
>> bdrv_read()/bdrv_write() and, being byte oriented, are unnatural for
>> a block device.
>>
>> Signed-off-by: Avi Kivity <address@hidden>
>>
>>  int bdrv_truncate(BlockDriverState *bs, int64_t offset)
>> diff --git a/block_int.h b/block_int.h
>> index e4630f0..cc9966b 100644
>> --- a/block_int.h
>> +++ b/block_int.h
>> @@ -58,10 +58,6 @@ struct BlockDriver {
>>      int aiocb_size;
>>      const char *protocol_name;
>> -    int (*bdrv_pread)(BlockDriverState *bs, int64_t offset,
>> -                      uint8_t *buf, int count);
>> -    int (*bdrv_pwrite)(BlockDriverState *bs, int64_t offset,
>> -                       const uint8_t *buf, int count);

> $ grep -l bdrv_pwrite *.c hw/*.c
> block.c
> block-qcow2.c
> block-qcow.c
> block-raw-posix.c
> block-raw-win32.c
> block-vmdk.c
> block-vpc.c
> savevm.c
> hw/scsi-generic.c
>
> So there are a lot of users other than scsi-generic.  Usually, these
> callers are in the block layer, reading or writing metadata that isn't
> always block aligned.  savevm.c could be fixed with some buffer
> adjustment to ensure alignment.

It's more accurate to say that there are now no users that depend on the request size. The other users will happily allow expansion of a small request to a sector.
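
To illustrate what I mean by expansion, here is a rough sketch (not the
actual block.c code; emulated_pread() is a made-up name, and a 512-byte
sector size is assumed): a byte-granular read turns into whole-sector
reads plus a memcpy of the slice the caller asked for.

#include <stdint.h>
#include <string.h>
#include "block.h"   /* assumed declarations: BlockDriverState, bdrv_read() */

#define SECTOR_SIZE 512

static int emulated_pread(BlockDriverState *bs, int64_t offset,
                          uint8_t *buf, int count)
{
    uint8_t sector[SECTOR_SIZE];
    int64_t sector_num = offset / SECTOR_SIZE;
    int in_sector = offset % SECTOR_SIZE;

    while (count > 0) {
        int n = SECTOR_SIZE - in_sector;
        if (n > count) {
            n = count;
        }
        /* expand the byte request to the enclosing sector */
        if (bdrv_read(bs, sector_num, sector, 1) < 0) {
            return -1;
        }
        /* hand back only the bytes that were asked for */
        memcpy(buf, sector + in_sector, n);
        buf += n;
        count -= n;
        sector_num++;
        in_sector = 0;
    }
    return 0;
}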

> These users are now relegated to using emulated pread/pwrite?  Won't
> that have a noticeable impact on performance when updating small bits
> of metadata?  For instance, I think updating qcow2 refcounts would look
> bad, since you have to read/write a full block to update 4 bytes of
> data.  Granted, it'll be cached, but...

I haven't measured, but I'll bet the impact is unnoticeable.  We're
doubling the syscall count for writes, but the actual transfer (if
cached) or the I/O (if uncached) will swamp that.  For reads, we're
copying a bit more data, but that won't even tickle performance.
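
Again, just a sketch (emulated_pwrite() is a made-up name, same assumed
declarations as above) of where the extra operation comes from: a
sub-sector write becomes read the sector, patch in the caller's bytes,
and write the sector back, while a fully covered sector can skip the read.

#include <stdint.h>
#include <string.h>
#include "block.h"   /* assumed declarations: BlockDriverState, bdrv_read(), bdrv_write() */

#define SECTOR_SIZE 512

static int emulated_pwrite(BlockDriverState *bs, int64_t offset,
                           const uint8_t *buf, int count)
{
    uint8_t sector[SECTOR_SIZE];
    int64_t sector_num = offset / SECTOR_SIZE;
    int in_sector = offset % SECTOR_SIZE;

    while (count > 0) {
        int n = SECTOR_SIZE - in_sector;
        if (n > count) {
            n = count;
        }
        if (in_sector || n < SECTOR_SIZE) {
            /* partial sector: the extra read that doubles the op count */
            if (bdrv_read(bs, sector_num, sector, 1) < 0) {
                return -1;
            }
        }
        /* patch in the caller's bytes and write the full sector back */
        memcpy(sector + in_sector, buf, n);
        if (bdrv_write(bs, sector_num, sector, 1) < 0) {
            return -1;
        }
        buf += n;
        count -= n;
        sector_num++;
        in_sector = 0;
    }
    return 0;
}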

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.




