On 20.09.2010 16:56, Anthony Liguori wrote:
+void blkqueue_flush(BlockQueue *bq)
+{
+    qemu_mutex_lock(&bq->flush_lock);
+
+    /* Process any left over requests */
+    while (QTAILQ_FIRST(&bq->queue)) {
+        blkqueue_process_request(bq);
+    }
+
+    qemu_mutex_unlock(&bq->flush_lock);
+}
+
+static void *blkqueue_thread(void *_bq)
+{
+    BlockQueue *bq = _bq;
+#ifndef RUN_TESTS
+    BlockQueueRequest *req;
+#endif
+
+    qemu_mutex_lock(&bq->flush_lock);
+    while (!bq->thread_done) {
+        barrier();
A barrier shouldn't be needed here.
It was needed when I started with an empty thread because gcc would
"optimize" while(!bq->thread_done) into an endless loop. I guess there
is enough code added now that gcc won't try to be clever any more, so I
can remove that.
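For illustration, here is a minimal stand-alone pthread sketch (not the QEMU code; Worker and its fields are hypothetical stand-ins for the BlockQueue thread state) of why a mutex-protected flag needs no explicit barrier(): since thread_done is only ever accessed with the lock held, the mutex already acts as a compiler and CPU barrier, so gcc cannot hoist the load out of the loop the way it can with a plain unlocked `while (!done)`.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for the BlockQueue thread state. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    bool thread_done;
    int iterations;     /* work counter, just for demonstration */
} Worker;

static void *worker_thread(void *opaque)
{
    Worker *w = opaque;

    pthread_mutex_lock(&w->lock);
    /* thread_done is only read and written under the lock; the mutex
     * operations are barriers, so the load is reissued each iteration
     * and no explicit barrier() is required. */
    while (!w->thread_done) {
        w->iterations++;
        /* Waiting drops the lock, so the stopping thread can get in. */
        pthread_cond_wait(&w->cond, &w->lock);
    }
    pthread_mutex_unlock(&w->lock);
    return NULL;
}

static int run_worker_once(void)
{
    Worker w = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                 false, 0 };
    pthread_t tid;

    pthread_create(&tid, NULL, worker_thread, &w);

    pthread_mutex_lock(&w.lock);
    w.thread_done = true;
    pthread_cond_signal(&w.cond);
    pthread_mutex_unlock(&w.lock);

    pthread_join(tid, NULL);
    return w.iterations;
}
```

The same reasoning applies to the patch: as long as thread_done is only touched with flush_lock held, the barrier() can go.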
A less invasive way of doing this (assuming we're okay with it from a
correctness perspective) is to make use of qemu_aio_wait() as a
replacement for qemu_mutex_lock() and shift the pread/pwrite calls to
bdrv_aio_write/bdrv_aio_read.
IOW, blkqueue_pwrite stages a request via bdrv_aio_write().
blkqueue_pread() either returns a cached read or it does a
bdrv_pread(). The blkqueue_flush() call will then do qemu_aio_wait() to
wait for all pending I/Os to complete.
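A toy model of that scheme (plain pthreads with hypothetical names, not the QEMU API): each staged write bumps an in-flight counter, completions decrement it, and flush() blocks until everything has drained, which is the role the qemu_aio_wait() loop would play.

```c
#include <pthread.h>

/* Hypothetical model of the AIO-based scheme: submissions bump a
 * pending counter, completions decrement it, and flush() waits for
 * it to drain - the role a qemu_aio_wait() loop would play. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t done;
    int in_flight;
} AioModel;

/* Analogue of staging a request via bdrv_aio_write(). */
static void aio_submit(AioModel *a)
{
    pthread_mutex_lock(&a->lock);
    a->in_flight++;
    pthread_mutex_unlock(&a->lock);
}

/* Called from the completion path (in QEMU, the AIO callback). */
static void aio_complete(AioModel *a)
{
    pthread_mutex_lock(&a->lock);
    if (--a->in_flight == 0) {
        pthread_cond_broadcast(&a->done);
    }
    pthread_mutex_unlock(&a->lock);
}

/* Analogue of blkqueue_flush(): wait until no request is in flight. */
static void aio_flush(AioModel *a)
{
    pthread_mutex_lock(&a->lock);
    while (a->in_flight > 0) {
        pthread_cond_wait(&a->done, &a->lock);
    }
    pthread_mutex_unlock(&a->lock);
}
```

The attraction of this shape is that flush needs no dedicated worker thread at all; it simply drives the existing completion machinery until the pending set is empty.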
I was actually considering that, but it would have been a bit more
coding: keeping track of another queue of in-flight requests, juggling
some more AIOCBs, and implementing an emulation for the missing
bdrv_aio_pwrite. Nothing really dramatic, it just was easier to start
this way.
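The emulation mentioned here would essentially be the read-modify-write dance that the synchronous byte-granular path already does on top of a sector-granular backend. A minimal in-memory sketch of that RMW step (hypothetical helper names, 512-byte sectors assumed, not the QEMU implementation):

```c
#include <string.h>

#define SECTOR_SIZE 512

/* Hypothetical sector-granular backend, standing in for the fact that
 * the AIO interfaces operate on whole sectors - which is why a
 * byte-granular pwrite needs read-modify-write emulation. */
static void sector_read(const unsigned char *disk, long sector,
                        unsigned char *buf, int nb_sectors)
{
    memcpy(buf, disk + sector * SECTOR_SIZE,
           (size_t)nb_sectors * SECTOR_SIZE);
}

static void sector_write(unsigned char *disk, long sector,
                         const unsigned char *buf, int nb_sectors)
{
    memcpy(disk + sector * SECTOR_SIZE, buf,
           (size_t)nb_sectors * SECTOR_SIZE);
}

/* Byte-granular pwrite on top of the sector interface: read the
 * covered sectors, patch the byte range in, write them back. */
static void pwrite_emulated(unsigned char *disk, long offset,
                            const unsigned char *buf, int bytes)
{
    long first = offset / SECTOR_SIZE;
    long last = (offset + bytes - 1) / SECTOR_SIZE;
    int nb_sectors = (int)(last - first + 1);
    unsigned char tmp[16 * SECTOR_SIZE];    /* enough for this sketch */

    sector_read(disk, first, tmp, nb_sectors);
    memcpy(tmp + (offset - first * SECTOR_SIZE), buf, bytes);
    sector_write(disk, first, tmp, nb_sectors);
}
```

An asynchronous version would chain the same three steps through AIO callbacks, which is exactly the extra AIOCB juggling referred to above.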