Re: [Qemu-devel] [PATCH] rbd: hook up cache options
From: Josh Durgin
Subject: Re: [Qemu-devel] [PATCH] rbd: hook up cache options
Date: Tue, 22 May 2012 09:24:55 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120430 Thunderbird/12.0.1
On 05/22/2012 02:18 AM, Paolo Bonzini wrote:
On 17/05/2012 22:42, Josh Durgin wrote:
+     * Fall back to more conservative semantics if setting cache
+     * options fails. Ignore errors from setting rbd_cache because the
+     * only possible error is that the option does not exist, and
+     * librbd defaults to no caching. If write-through caching cannot
+     * be set up, fall back to no caching.
+     */
+    if (flags & BDRV_O_NOCACHE) {
+        rados_conf_set(s->cluster, "rbd_cache", "false");
+    } else {
+        rados_conf_set(s->cluster, "rbd_cache", "true");
+        if (!(flags & BDRV_O_CACHE_WB)) {
+            r = rados_conf_set(s->cluster, "rbd_cache_max_dirty", "0");
+            if (r < 0) {
+                rados_conf_set(s->cluster, "rbd_cache", "false");
+            }
+        }
+    }
Last time I looked at ceph, rbd_flush was not a full flush of the cache;
it only ensured that the pending requests were sent. So my questions are:
I'm not sure which version you were looking at, but this hasn't been
the case since caching was implemented. I don't think it was ever the
case, actually. rbd_flush has always waited for pending I/Os to complete
(be on disk on all replicas), not just be in flight.
If you're interested in the current implementation, you can see:
src/librbd.cc: librbd::flush()
which goes into:
src/osdc/ObjectCacher.cc: ObjectCacher::commit_set()
or
src/librados/IoCtxImpl.cc: IoCtxImpl::flush_aio_writes()
1) has this changed? does rbd_flush now flush dirty items when
rbd_cache_max_dirty > 0?
The rbd_cache_* options did not exist before 0.46.
2) should the usage of a cache be conditional on LIBRBD_VERSION_CODE >=
LIBRBD_VERSION(0, 1, 1)?
It doesn't matter if you use an older version because the non-existent
options don't have any effect.
Thanks,
Josh