
Re: [Qemu-devel] [Xen-devel] [PATCH RFC 1/3] xen_disk: handle disk files on ramfs/tmpfs


From: Roger Pau Monné
Subject: Re: [Qemu-devel] [Xen-devel] [PATCH RFC 1/3] xen_disk: handle disk files on ramfs/tmpfs
Date: Fri, 4 Jan 2013 16:05:41 +0100
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:17.0) Gecko/17.0 Thunderbird/17.0

On 04/01/13 15:54, Stefano Stabellini wrote:
> On Thu, 3 Jan 2013, Ian Campbell wrote:
>> On Mon, 2012-12-31 at 12:16 +0000, Roger Pau Monne wrote:
>>> Files that reside on ramfs or tmpfs cannot be opened with O_DIRECT,
>>> if first call to bdrv_open fails with errno = EINVAL, try a second
>>> call without BDRV_O_NOCACHE.
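>>>
>>> [For reference, the fallback being described looks roughly like the
>>> sketch below. It is illustrative, not the literal patch: the helper
>>> name is made up, and it assumes bdrv_open of this era, which reports
>>> failure as a negative errno value.]
>>>
>>>     /* Try O_DIRECT first; ramfs/tmpfs reject the flag with -EINVAL,
>>>      * in which case retry with cached I/O. Illustrative sketch only. */
>>>     static int blk_open_with_fallback(BlockDriverState *bs,
>>>                                       const char *filename,
>>>                                       int qflags, BlockDriver *drv)
>>>     {
>>>         int ret = bdrv_open(bs, filename, qflags | BDRV_O_NOCACHE, drv);
>>>         if (ret == -EINVAL) {
>>>             /* filesystem cannot do O_DIRECT; drop the flag and retry */
>>>             ret = bdrv_open(bs, filename, qflags & ~BDRV_O_NOCACHE, drv);
>>>         }
>>>         return ret;
>>>     }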
>>
>> Doesn't that risk spuriously turning off NOCACHE on other sorts of
>> devices as well, which (potentially) opens up a data loss issue?
> 
> I agree, we shouldn't make this kind of critical configuration change
> behind the user's back.
> 
> I would rather let the user set the cache attributes. QEMU already has a
> command line option for this, but we can't use it directly because
> xen_disk gets its configuration solely from xenstore at the moment.
> 
> I guess we could add a cache=foobar key/value pair to the xl disk
> configuration spec, which would get translated somehow into a key on
> xenstore. xen_disk would read the key and set qflags accordingly.
> We could use the same cache parameters supported by QEMU, see
> bdrv_parse_cache_flags.
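>
> [A sketch of what that could look like in xen_disk. The "cache" node
> name is an assumption for illustration; xenstore_read_be_str,
> bdrv_parse_cache_flags and xen_be_printf are the existing helpers, and
> qflags is the flags variable xen_disk already builds up before opening
> the image:]
>
>     /* Read a hypothetical "cache" node from the backend directory and
>      * let QEMU's own parser translate it into BDRV_O_* open flags. */
>     char *cache = xenstore_read_be_str(&blkdev->xendev, "cache");
>     if (cache != NULL) {
>         if (bdrv_parse_cache_flags(cache, &qflags) != 0) {
>             xen_be_printf(&blkdev->xendev, 0,
>                           "invalid cache mode: %s\n", cache);
>         }
>         g_free(cache);
>     }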
> 
> As an alternative, we could reuse the already defined "access" key, like
> this:
> 
> access=rw|nocache
> 
> or
> 
> access=rw|unsafe
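>
> [The access-based variant could be parsed along these lines; again just
> a sketch, and splitting the existing "access" node on '|' and reusing
> bdrv_parse_cache_flags for the suffix are assumptions:]
>
>     /* Treat everything after '|' in the existing "access" node as a
>      * QEMU cache mode, e.g. access=rw|nocache or access=rw|unsafe. */
>     char *mode = xenstore_read_be_str(&blkdev->xendev, "access");
>     if (mode != NULL) {
>         char *sep = strchr(mode, '|');
>         if (sep != NULL) {
>             *sep++ = '\0';      /* mode now holds just "rw" or "ro" */
>             if (bdrv_parse_cache_flags(sep, &qflags) != 0) {
>                 xen_be_printf(&blkdev->xendev, 0,
>                               "invalid cache mode: %s\n", sep);
>             }
>         }
>         /* mode ("rw"/"ro") would feed the existing read-only handling */
>         g_free(mode);
>     }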

I needed this patch to be able to perform the benchmarks for the
persistent grants implementation, but I realize this is not the best way
to solve this problem.

It might be worth thinking about a good way to pass more information to
the qdisk backend (not limited to whether O_DIRECT should be used or
not), so that in the future we can take advantage of all the file
backends that QEMU supports, like GlusterFS or Sheepdog.
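
[One hypothetical shape for this, with the node values invented purely
for illustration, would be for xl to write backend-specific parameters
under the qdisk backend directory next to the existing "params" node,
e.g.:

    /local/domain/0/backend/qdisk/<domid>/<devid>/params = "gluster://host/volume/image"
    /local/domain/0/backend/qdisk/<domid>/<devid>/cache  = "writeback"

xen_disk could then hand the params string to bdrv_open unmodified and
let QEMU pick the matching protocol driver.]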


