[Qemu-devel] Re: [PATCH v2 6/7] qed: Read/write support


From: Stefan Hajnoczi
Subject: [Qemu-devel] Re: [PATCH v2 6/7] qed: Read/write support
Date: Mon, 11 Oct 2010 14:55:25 +0100
User-agent: Mutt/1.5.20 (2009-06-14)

On Mon, Oct 11, 2010 at 03:10:16PM +0200, Avi Kivity wrote:
>  On 10/11/2010 12:37 PM, Stefan Hajnoczi wrote:
> >On Sun, Oct 10, 2010 at 11:10:15AM +0200, Avi Kivity wrote:
> >>   On 10/08/2010 05:48 PM, Stefan Hajnoczi wrote:
> >>  >This patch implements the read/write state machine.  Operations are
> >>  >fully asynchronous and multiple operations may be active at any time.
> >>  >
> >>  >Allocating writes lock tables to ensure metadata updates do not
> >>  >interfere with each other.  If two allocating writes need to update the
> >>  >same L2 table they will run sequentially.  If two allocating writes need
> >>  >to update different L2 tables they will run in parallel.
> >>  >
> >>
> >>  Shouldn't there be a flush between an allocating write and an L2
> >>  update?  Otherwise a reuse of a cluster can move logical sectors
> >>  from one place to another, causing a data disclosure.
> >>
> >>  Can be skipped if the new cluster is beyond the physical image size.
> >
> >Currently clusters are never reused and new clusters are always beyond
> >physical image size.  The only exception I can think of is when the
> >image file size is not a multiple of the cluster size and we round down
> >to the start of cluster.
> >
> >In an implementation that supports TRIM or otherwise reuses clusters
> >this is a cost.
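>
To make the ordering concrete, here is a minimal standalone sketch of the
rule being discussed (hypothetical names, not the actual QED functions):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the real I/O operations. */
static void write_data(uint64_t cluster)
{
    printf("write data -> cluster %llu\n", (unsigned long long)cluster);
}

static void flush_disk(void)
{
    printf("flush (write barrier)\n");
}

static void update_l2(uint64_t cluster)
{
    printf("commit L2 entry -> cluster %llu\n", (unsigned long long)cluster);
}

/*
 * An allocating write commits its L2 entry only after the data is
 * stable.  If the cluster is recycled (e.g. after TRIM), a crash
 * between the L2 update and the data reaching the disk could leave
 * the entry pointing at another guest offset's stale sectors, so a
 * flush is needed in between.  A fresh cluster beyond the physical
 * image size can skip it, which is why the current driver, which
 * never reuses clusters, needs no flush here.
 */
static void allocating_write(uint64_t cluster, bool cluster_reused)
{
    write_data(cluster);
    if (cluster_reused) {
        flush_disk();
    }
    update_l2(cluster);
}

int main(void)
{
    allocating_write(42, true);
    return 0;
}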
> >
> >>  >+
> >>  >+/*
> >>  >+ * Table locking works as follows:
> >>  >+ *
> >>  >+ * Reads and non-allocating writes do not acquire locks because they do not
> >>  >+ * modify tables and only see committed L2 cache entries.
> >>
> >>  What about a non-allocating write that follows an allocating write?
> >>
> >>  1 Guest writes to sector 0
> >>  2 Host reads backing image (or supplies zeros), sectors 1-127
> >>  3 Host writes sectors 0-127
> >>  4 Guest writes sector 1
> >>  5 Host writes sector 1
> >>
> >>  There needs to be a barrier that prevents the host and the disk from
> >>  reordering operations 3 and 5, or guest operation 4 is lost.  As far
> >>  as the guest is concerned no overlapping writes were issued, so it
> >>  isn't required to provide any barriers.
> >>
> >>  (based on the comment only, haven't read the code)
> >
> >There is no barrier between operations 3 and 5.  However, operation 5
> >only starts after operation 3 has completed because of table locking.
> >It is my understanding that *independent* requests may be reordered but
> >two writes to the *same* sector will not be reordered if write A
> >completes before write B is issued.
> >
> >Imagine a test program that uses pwrite() to rewrite a counter many
> >times on disk.  When the program finishes it prints the counter
> >variable's last value.  This scenario is like operations 3 and 5 above.
> >If we read the counter back from disk it will be the final value, not
> >some intermediate value.  The writes will not be reordered.
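>
That test program might look like this (plain POSIX; the file name is
arbitrary):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("counter.bin", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Rewrite the same offset many times.  Each pwrite() completes
     * before the next one is issued, so program order is preserved. */
    for (int counter = 0; counter <= 999; counter++) {
        if (pwrite(fd, &counter, sizeof(counter), 0) != (ssize_t)sizeof(counter)) {
            perror("pwrite");
            return 1;
        }
    }

    int last = -1;
    if (pread(fd, &last, sizeof(last), 0) != (ssize_t)sizeof(last)) {
        perror("pread");
        return 1;
    }
    printf("final counter: %d\n", last);   /* always 999, never an
                                              intermediate value */
    close(fd);
    return 0;
}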
> 
> Yes, all that is needed is a barrier in program order.  So,
> operation 5 is an allocating write as long as 3 hasn't returned (at
> which point it becomes a non-allocating write)?

Yes, operation 5 waits until operation 3 completes.  After waking up on
the lock, the request looks up the cluster again because it may now be
allocated - operation 5 switches to a non-allocating write.
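
In other words, the per-request flow looks roughly like this (a
synchronous sketch with made-up names; the real driver does this
asynchronously with callbacks):

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool locked;          /* held by the allocating write in flight */
    bool allocated[256];  /* which clusters this L2 table maps */
} L2Table;

static void start_write(L2Table *l2, unsigned index)
{
    /* Re-done every time a request starts or wakes up on the lock. */
    if (l2->allocated[index]) {
        printf("index %u: non-allocating write, no lock taken\n", index);
        return;
    }

    /*
     * In the real, asynchronous driver the request would queue here
     * while another allocating write holds this L2 table, and after
     * waking it would repeat the lookup above - possibly finding the
     * cluster allocated and continuing as a non-allocating write.
     */
    l2->locked = true;
    printf("index %u: allocating write\n", index);
    l2->allocated[index] = true;   /* commit the new L2 entry */
    l2->locked = false;            /* wake the next queued request */
}

int main(void)
{
    L2Table l2 = {0};
    start_write(&l2, 0);   /* operation 3: allocates the cluster */
    start_write(&l2, 0);   /* operation 5: waited for 3, re-looked up,
                              now a plain non-allocating write */
    return 0;
}

Since operation 5 only reaches the disk after operation 3 has completed,
no explicit barrier between the two is required.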

Stefan


