From: Anthony Liguori
Subject: [Qemu-devel] Re: KVM call minutes for Sept 7
Date: Tue, 07 Sep 2010 09:39:26 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.11) Gecko/20100713 Lightning/1.0b1 Thunderbird/3.0.6

On 09/07/2010 09:30 AM, Chris Wright wrote:
0.13 schedule
- RSN
- rc1 uploaded, tagged in git (and tag should actually be there now)
- announcement once it propagates
- 0.13.0 should be 1 week after rc1 announcement
- please check rc1 for any missing critical patches

qed
- concession that qcow2 is complicated and hard to get right
- it's much more efficient than qcow2
- not had data integrity testing, but simple enough design to
   rationalize the format and meta-data updates
- formal spec planned...documented on wiki http://wiki.qemu.org/Features/QED
   - design doc written first, code written to design doc
- defragmentation supportable and important (not done now)

Just as an FYI, defragmentation is similar to the bdrv_aio_stream operation implemented in http://repo.or.cz/w/qemu/aliguori.git/shortlog/refs/heads/features/qed

For each defragmentation run, you look for a cluster whose file offset != its virtual location. You then look at the virtual location (file_offset - first_cluster * cluster_size) and see whether another cluster is present there. The best way to answer all of these questions is to limit yourself to the L2 cache, which means no disk I/O is involved.
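As a toy sketch of that candidate search (none of these names come from QEMU/QED; the cluster size and one-cluster header are invented assumptions):

```python
# Toy model of the candidate search: scan the cached L2 mapping for a
# cluster whose position in the file differs from where its virtual
# index says it should live.  This is a pure in-memory scan -- no disk
# I/O -- as long as the entries come from the L2 cache.

CLUSTER_SIZE = 65536
FIRST_CLUSTER = 1  # clusters occupied by the header before the data area

def virtual_index(file_offset):
    """Virtual cluster index implied by a cluster's file offset."""
    return file_offset // CLUSTER_SIZE - FIRST_CLUSTER

def find_candidate(l2_table):
    """Return a virtual index whose cluster is out of place, else None.

    l2_table: dict of virtual index -> file offset (None = unallocated).
    """
    for vindex, offset in l2_table.items():
        if offset is not None and virtual_index(offset) != vindex:
            return vindex
    return None
```

A defragmented image is one where find_candidate() returns None for every L2 table.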

Once you've found two candidates for swapping, you can schedule a swap operation in the background. To do the swap, you read the contents of the first cluster, dirty-track it, and write the contents to a newly allocated cluster. If the first cluster isn't dirty, you update the L2 metadata to point the cluster at its new location at EOF. You now have a free cluster in the desired location for the second cluster, and you can follow the same process to copy that cluster and update its L2 metadata.

The dirty tracking could be implemented by stalling write requests to these clusters. That's probably the easiest approach.
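One relocation step of that swap could be modeled like this (toy code, all names invented; real code would issue asynchronous I/O and implement the stall against concurrent writers):

```python
# Toy model of one background relocation step.  The image file is a
# dict of file_offset -> bytes; the L2 table maps virtual index ->
# file offset.  In a real implementation, step 1 would stall write
# requests to the cluster for the duration of the copy, which is the
# simple dirty-tracking approach described above.

def relocate_cluster(image, l2_table, vindex, new_offset):
    """Copy the cluster backing virtual index vindex to new_offset and
    repoint its L2 entry.  Returns the old file offset, now free."""
    old_offset = l2_table[vindex]
    # 1. Begin dirty tracking (stall writes to this cluster).  In this
    #    single-threaded model there are no concurrent writers.
    # 2. Read the cluster and write it to the newly allocated offset.
    image[new_offset] = image.pop(old_offset)
    # 3. The cluster was not dirtied during the copy, so commit the L2
    #    update pointing at the new location.
    l2_table[vindex] = new_offset
    return old_offset
```

A full swap is then two such relocations: move the first cluster to a newly allocated cluster at EOF, which frees its slot, then relocate the second cluster into that freed slot.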

Online defrag is then just a command that a management tool can run during idle I/O sequences until it reports that it can't find a candidate cluster to defrag.
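The loop such a management command would drive can be modeled as a toy pass (offsets in whole-cluster units, one header cluster assumed, and direct in-table swaps rather than relocating through a newly allocated cluster at EOF as the text describes; all names invented):

```python
# Toy end-to-end defrag pass: repeatedly find a misplaced cluster and
# move it to its desired position, swapping out whatever occupies that
# position, until no candidate remains -- the stopping condition the
# management tool would observe.

def defrag_pass(l2_table):
    """l2_table: dict of virtual index -> cluster number in the file.
    Returns the number of clusters moved into place."""
    def home(v):
        return v + 1  # one header cluster precedes the data area
    moved = 0
    while True:
        # Find a cluster that is not at its desired position.
        cand = next((v for v, c in l2_table.items() if c != home(v)), None)
        if cand is None:
            return moved  # no candidate left: image is defragmented
        # If another cluster occupies the desired slot, evict it to the
        # candidate's current slot (the swap).
        occupant = next((v for v, c in l2_table.items()
                         if c == home(cand)), None)
        if occupant is not None:
            l2_table[occupant] = l2_table[cand]
        l2_table[cand] = home(cand)
        moved += 1
```

Each iteration puts at least one cluster in its final position, so the pass terminates after at most one move per allocated cluster.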

This will never move metadata. Defragging metadata is a bit more complicated since metadata can span more than a single cluster. I'd think a second stage for metadata defrag, run after cluster defrag, would make the most sense (although the ROI for metadata defrag is low, since metadata is tiny in proportion to the disk size).

Regards,

Anthony Liguori

- defragmented image should be as fast as raw
- concern about splitting install base (doubles qa effort, etc)
   - should be possible to do an in-place qcow2->qed update
   - even live update could be doable
- what about vmdk or vhd?
   - controlled externally
   - specification license implications are unclear
   - too close to NIH?
- qed and async model could put pressure to improve other formats and
   push code out of qed to core
- another interest for qed...streaming images (fault in image extents
   via network)
   - want to design this as starting from mgmt interface discussion



