
From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH 2/2] qcow2: Avoid memory over-allocation on compressed images
Date: Wed, 21 Feb 2018 19:48:22 +0100
User-agent: Mutt/1.9.1 (2017-09-22)

On 21.02.2018 19:32, Eric Blake wrote:
> On 02/21/2018 11:39 AM, Kevin Wolf wrote:
> > > See my commit message comment - we have other spots in the code base that
> > > blindly g_malloc(2 * s->cluster_size).
> > 
> > Though is that a reason to do the same in new code or to phase out such
> > allocations whenever you touch them?
> 
> Touché.
> 
> > 
> > > And I intended (but sent the email without amending my commit) to use
> > > g_malloc().  But as Berto has convinced me that an externally produced
> > > image can force us to read up to 4M (even though we don't need that
> > > much to decompress), I suppose the try_ variant plus checking is
> > > reasonable (taking care to NULL out both pointers if one but not both
> > > allocations succeeds).
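
(A minimal sketch of that allocation pattern, for illustration only;
the struct and function names here are placeholders, not the actual
qcow2 fields:)

    #include <glib.h>
    #include <stdint.h>
    #include <errno.h>

    /* Simplified stand-in for the per-BDS state; illustrative only. */
    typedef struct {
        size_t cluster_size;
        uint8_t *cluster_data;   /* compressed input buffer */
        uint8_t *cluster_cache;  /* decompressed output buffer */
    } State;

    static int alloc_compressed_buffers(State *s)
    {
        /* An untrusted image can make these allocations large, so use
         * the try_ variant and handle failure instead of aborting. */
        s->cluster_data = g_try_malloc(2 * s->cluster_size);
        s->cluster_cache = g_try_malloc(s->cluster_size);
        if (!s->cluster_data || !s->cluster_cache) {
            /* Free both and NULL them out so later cleanup doesn't
             * double-free or use a half-initialized pair. */
            g_free(s->cluster_data);
            g_free(s->cluster_cache);
            s->cluster_data = NULL;
            s->cluster_cache = NULL;
            return -ENOMEM;
        }
        return 0;
    }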
> > 
> > Sounds good.
> > 
> > Another thought I had is whether we should do per-request allocation for
> > compressed clusters, too, instead of having per-BDS buffers.
> 
> The only benefit of a per-BDS buffer is that we cache things: multiple
> sub-cluster reads in a row from the same compressed cluster only pay for
> one decompression.

Oh, you're right. I missed that this is actually used as a cache.

I guess we want to leave it for now then. Maybe at some point we can
actually implement the data cache that I proposed a few years ago (using
Qcow2Cache for data clusters under some circumstances); then we could
probably make that hold the data instead of having a separate cache.
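
(For readers following the thread, a self-contained sketch of the
caching effect being discussed; simplified from the logic in
qcow2_decompress_cluster(), with a made-up helper name:)

    #include <glib.h>
    #include <stdint.h>
    #include <string.h>

    /* Simplified per-BDS state; field names approximate the real ones. */
    typedef struct {
        size_t cluster_size;
        uint8_t *cluster_cache;        /* last decompressed cluster */
        uint64_t cluster_cache_offset; /* which cluster it holds */
    } State;

    /* Placeholder for the real "read compressed data and inflate it
     * into s->cluster_cache" path. */
    static void read_and_decompress(State *s, uint64_t coffset)
    {
        (void)s;
        (void)coffset;
    }

    /* Only a cache miss pays for I/O and decompression, so N
     * sequential sub-cluster reads from one compressed cluster
     * decompress it exactly once. */
    static void read_compressed(State *s, uint64_t coffset,
                                size_t offset_in_cluster,
                                uint8_t *buf, size_t bytes)
    {
        if (coffset != s->cluster_cache_offset) {
            read_and_decompress(s, coffset);
            s->cluster_cache_offset = coffset;
        }
        memcpy(buf, s->cluster_cache + offset_in_cluster, bytes);
    }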

> The drawbacks of a per-BDS buffer: we can't do things in parallel
> (everything else in qcow2 drops the lock around bdrv_co_pread[v]), so
> the initial read prevents anything else in the qcow2 layer from
> progressing.

Yes, though there are probably other optimisations that could be made
for compression before this becomes relevant, like reading more than one
cluster at a time.
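
(A rough sketch of the pattern Eric refers to, assuming per-request
buffers; the mutex and read call below are stand-ins for QEMU's
qemu_co_mutex_lock()/unlock() and bdrv_co_pread[v](), not the real
prototypes:)

    #include <glib.h>
    #include <stdint.h>
    #include <errno.h>

    /* Stand-ins for QEMU's coroutine mutex and read call. */
    typedef struct { int dummy; } CoMutex;
    static void co_mutex_lock(CoMutex *m)   { (void)m; }
    static void co_mutex_unlock(CoMutex *m) { (void)m; }
    static int co_pread(uint64_t offset, size_t bytes, void *buf)
    {
        (void)offset; (void)bytes; (void)buf;
        return 0;
    }

    typedef struct {
        CoMutex lock;   /* held by the caller on entry */
    } State;

    /* With a per-request buffer there is no shared buffer to protect,
     * so the lock can be dropped around the actual I/O and other
     * requests can make progress in the meantime, the same way the
     * rest of qcow2 drops s->lock around bdrv_co_pread[v](). */
    static int read_compressed_parallel(State *s, uint64_t coffset,
                                        size_t csize)
    {
        g_autofree uint8_t *buf = g_try_malloc(csize);
        int ret;

        if (!buf) {
            return -ENOMEM;
        }
        co_mutex_unlock(&s->lock);
        ret = co_pread(coffset, csize, buf);
        co_mutex_lock(&s->lock);
        /* ... decompress from buf and copy to the caller here ... */
        return ret;
    }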

> I also wonder: since we ARE allowing multiple parallel readers in other
> parts of qcow2 (without a patch, decompression is not in this boat, but
> decryption and even bounce buffers due to lower-layer alignment constraints
> are), what sort of mechanisms do we have for using a pool of reusable
> buffers, rather than having each cluster access that needs a buffer
> malloc and free one on a per-access basis?  I don't know how much time
> the malloc/free overhead adds per transaction, or whether it is already
> much smaller than the actual I/O time.

I don't either. A while ago, we used g_slice_alloc() in some places (I
remember qemu_aio_get), but it was actually slower than just using
malloc/free each time.

So if we do want to pool buffers, we probably need to implement that
manually. I don't think we have a generic memory pool in qemu yet.
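
(What a minimal hand-rolled pool could look like, as a sketch only;
the names are invented, real code would need locking or
coroutine-awareness around pool access, and it would need measuring
to confirm it actually beats plain malloc/free, per the
g_slice_alloc() experience above:)

    #include <glib.h>

    /* A minimal pool of equal-sized buffers, kept on a free list. */
    typedef struct {
        GQueue free_list;   /* stack of idle buffers */
        size_t buf_size;
    } BufPool;

    static void buf_pool_init(BufPool *p, size_t buf_size)
    {
        g_queue_init(&p->free_list);
        p->buf_size = buf_size;
    }

    static void *buf_pool_get(BufPool *p)
    {
        void *buf = g_queue_pop_head(&p->free_list);
        /* Fall back to the allocator when the pool is empty. */
        return buf ? buf : g_try_malloc(p->buf_size);
    }

    static void buf_pool_put(BufPool *p, void *buf)
    {
        g_queue_push_head(&p->free_list, buf);
    }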

> But note that while reusable buffers from a pool would cut down on the
> per-I/O malloc/free overhead if we switch decompression away from the
> per-BDS buffer, it still would not preserve the caching effect, where
> multiple sub-cluster requests from the same compressed cluster require
> only one decompression, since that's only possible with a per-BDS
> cache.

Yes, as I said above, I hadn't noticed that it's a real cache. Unless
we can use Qcow2Cache instead, we'll want to keep it.

Kevin


