Re: [Qemu-devel] implement lvm-aware P2V to reduce time cost significantly for linux server


From: Richard W.M. Jones
Subject: Re: [Qemu-devel] implement lvm-aware P2V to reduce time cost significantly for linux server
Date: Sat, 3 Jan 2015 00:52:05 +0000
User-agent: Mutt/1.5.20 (2009-12-10)

On Sat, Jan 03, 2015 at 12:47:10AM +0000, Richard W.M. Jones wrote:
> On Fri, Jan 02, 2015 at 10:19:29AM +0000, Stefan Hajnoczi wrote:
> > On Sat, Dec 27, 2014 at 09:28:53AM +0800, Haoyu Zhang wrote:
> > > I want to P2V a Red Hat server to a KVM VM; LVM is used to manage the
> > > disks on that server.
> > > I want to migrate only the storage that is actually in use to the VM
> > > image, which can sometimes reduce the time cost significantly.
> > > For that I need the logical-volume-to-physical-disk mapping, so I can
> > > tell which physical sectors are really in use.  Any ideas?
> > > Is there an off-the-shelf tool that already does this?
> > 
> > Have you looked at virt-p2v(1)?
> > 
> > http://libguestfs.org/virt-p2v.1.html
> > 
> > I'm not sure if it sparsifies the disk image during conversion or
> > whether you would have to run virt-sparsify(1) afterwards
> > (http://libguestfs.org/virt-sparsify.1.html).  virt-sparsify(1) can
> > definitely unmap LVM's unused space.
> 
> It sparsifies automatically during conversion.  No need to run
> virt-sparsify afterwards :-)

I should note that this statement only applies to the new version of
virt-p2v/virt-v2v, >= 1.28.  The old 0.9.x version did not do this,
and it is no longer supported or maintained.
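
For what it's worth, on the original question of mapping logical
volumes to the physical extents they occupy: the standard LVM
reporting tools already expose this (see lvs(8), in particular the
seg_pe_ranges and devices output fields, or pvdisplay --maps).  Below
is a minimal Python sketch of the idea; it only shells out to lvs,
and the output parsing is an assumption rather than a tested tool:

  #!/usr/bin/env python
  # Illustrative sketch only (not part of virt-p2v): print which
  # physical extent ranges each logical volume occupies, using the
  # documented lvs(8) reporting fields.  Parsing details are an
  # assumption; check the lvs output format on your system.
  import subprocess

  out = subprocess.check_output(
      ["lvs", "--noheadings", "--separator", "|",
       "-o", "vg_name,lv_name,seg_pe_ranges"]).decode()

  for line in out.splitlines():
      if not line.strip():
          continue
      vg, lv, pe_ranges = (f.strip() for f in line.split("|", 2))
      # seg_pe_ranges looks like "/dev/sda2:0-1023": the PV device and
      # the physical extent range this LV segment occupies.  Multiply
      # by the VG extent size (vgs -o vg_extent_size) to get bytes.
      print("%s/%s -> %s" % (vg, lv, pe_ranges))

Note this only tells you which extents LVM has allocated; whether the
filesystem inside the LV actually uses those blocks is a separate
question, which is what virt-sparsify-style sparsification addresses.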

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/


