From: Ryan Harper
Subject: Re: [Qemu-devel] Re: [RFC] Disk integrity in QEMU
Date: Mon, 13 Oct 2008 16:05:09 -0500
User-agent: Mutt/1.5.6+20040907i

* Laurent Vivier <address@hidden> [2008-10-13 15:39]:
> >>
> >>as "cache=on" implies a factor (memory) shared by the whole system,
> >>you must take into account the size of the host memory and run some
> >>applications (several guests ?) to pollute the host cache, for
> >>instance you can run 4 guest and run bench in each of them
> >>concurrently, and you could reasonably limits the size of the host
> >>memory to 5 x the size of the guest memory.
> >>(for instance 4 guests with 128 MB on a host with 768 MB).
> >
> >I'm not following you here; the only assumption I see is that we
> >have 1 GB of host memory free for caching the write.
> 
> Is this a realistic use case?

Optimistic? I don't think it is unrealistic.  It is hard to know what
hardware and use-case any end user may have at their disposal.

> >>
> >>as O_DSYNC implies a journal commit, you should run a benchmark on the
> >>ext3 host file system concurrently with the benchmark in a guest to see
> >>the impact of the commits on each benchmark.
> >
> >I understand the goal here, but what sort of host ext3 journaling load
> >is appropriate?  Additionally, when we're exporting block devices, I
> >don't believe the ext3 journal is an issue.
> 
> Yes, it's a comment for the last test case.
> I think you can run the same benchmark as you do in the guest.

I'm not sure where to go with this.  If it turns out that scaling out on
top of ext3 stinks, then the deployment needs to change to deal with that
limitation in ext3: use a proper block device, something like LVM.

> >>According to the semantics, I don't understand how O_DSYNC can be
> >>better than cache=off in this case...
> >
> >I don't have a good answer either, but O_DIRECT and O_DSYNC are
> >different paths through the kernel.  This deserves a better reply, but
> >I don't have one off the top of my head.
> 
> The O_DIRECT kernel path should be more "direct" than the O_DSYNC one.
> Perhaps an oprofile run could help us understand?
> What is also strange is the CPU usage with cache=off.  It should be
> lower than the others; perhaps an alignment issue due to the LVM?

All possible; I don't have an oprofile of it.
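
For anyone who wants to dig into the alignment angle, here is a rough
sketch (not qemu code; the file names and the 512-byte block size are
assumptions on my part) of what the two open modes ask of the caller:

/* Sketch of the two I/O paths being compared -- not qemu code.
 * O_DSYNC: ordinary buffered write, but each write() commits the data
 * (and required metadata) to disk before returning.
 * O_DIRECT (cache=off): bypasses the host page cache, but the buffer,
 * offset and length must be aligned to the device's logical block
 * size (assumed 512 here) or the kernel rejects the request.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLK 512                     /* assumed logical block size */

int main(void)
{
    char *buf;
    int fd_dsync, fd_direct;

    /* No alignment requirement: data goes through the host page
     * cache and is flushed on every write(). */
    fd_dsync = open("/tmp/dsync-test.img",
                    O_RDWR | O_CREAT | O_DSYNC, 0600);

    /* O_DIRECT needs a block-aligned buffer, offset and length. */
    if (posix_memalign((void **)&buf, BLK, BLK))
        return 1;
    memset(buf, 0, BLK);
    fd_direct = open("/tmp/direct-test.img",
                     O_RDWR | O_CREAT | O_DIRECT, 0600);

    if (fd_dsync >= 0)
        write(fd_dsync, buf, BLK);  /* synchronous, but still cached */
    if (fd_direct >= 0)
        write(fd_direct, buf, BLK); /* uncached, must stay aligned */

    close(fd_dsync);
    close(fd_direct);
    free(buf);
    return 0;
}

The O_DSYNC write still goes through the host page cache but forces a
flush per write, while the O_DIRECT write bypasses the cache and puts
the alignment burden on the caller, which is where an LVM layout could
conceivably bite.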

> >>
> >>OK, but in this case the size of the cache for "cache=off" is the size
> >>of the guest cache, whereas in the other cases the size of the cache is
> >>the size of the guest cache + the size of the host cache; this is not
> >>fair...
> >
> >It isn't supposed to be fair: cache=off is O_DIRECT, so we're reading
> >from the device.  We *want* to be able to lean on the host cache to read
> >the data: pay once and benefit in the other guests if possible.
> 
> OK, but if you want to go down this path I think you must run several
> guests concurrently to see how the host cache helps each of them.
> If you want, I can try this tomorrow.  Is the O_DSYNC patch the one
> posted to the mailing list?

The patch used is the same as what is on the list; feel free to try it.

> 
> Moreover, you should run an endurance test to see how the cache
> evolves.

I'm not sure how interesting this is.  Either the data was in the cache or
it wasn't; depending on the workload you can devolve to a case where
nothing is in cache or one where everything is in cache.  The point is
that by using the cache where we can, we get the benefit.  If you use
cache=off you'll never get that boost when it would otherwise have been
available.
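
If someone wants to actually measure how much of an image ends up
resident in the host page cache during one of these runs, mincore() on
a mapping of the image is one way to check.  A quick sketch (the
default image path below is just an example):

/* Report how much of a disk image is resident in the host page
 * cache.  Rough sketch; the default path is only an example. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/var/lib/images/guest1.img";
    struct stat st;
    int fd = open(path, O_RDONLY);

    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
        return 1;

    long pagesz = sysconf(_SC_PAGESIZE);
    size_t pages = (st.st_size + pagesz - 1) / pagesz;

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    unsigned char *vec = malloc(pages);
    if (map == MAP_FAILED || !vec || mincore(map, st.st_size, vec) < 0)
        return 1;

    size_t resident = 0;
    for (size_t i = 0; i < pages; i++)
        resident += vec[i] & 1;

    printf("%zu of %zu pages of %s are in the host page cache\n",
           resident, pages, path);

    munmap(map, st.st_size);
    free(vec);
    close(fd);
    return 0;
}

Running that before and after a benchmark would at least show which end
of the "everything cached" / "nothing cached" spectrum a given workload
lands on.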


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
address@hidden



