Re: [Qemu-devel] [RFC PATCH 2/2] block: gluster as block backend
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC PATCH 2/2] block: gluster as block backend
Date: Mon, 23 Jul 2012 10:06:40 +0100
On Mon, Jul 23, 2012 at 9:32 AM, Bharata B Rao
<address@hidden> wrote:
> On Sun, Jul 22, 2012 at 04:38:00PM +0100, Stefan Hajnoczi wrote:
>> On Sat, Jul 21, 2012 at 9:31 AM, Bharata B Rao
>> <address@hidden> wrote:
>> > +} GlusterAIOCB;
>> > +
>> > +typedef struct GlusterCBKData {
>> > +    GlusterAIOCB *acb;
>> > +    struct BDRVGlusterState *s;
>> > +    int64_t size;
>> > +    int ret;
>> > +} GlusterCBKData;
>>
>> I think GlusterCBKData could just be part of GlusterAIOCB. That would
>> simplify the code a little and avoid some malloc/free.
>
> Are you suggesting to put a field
>
> GlusterCBKData gcbk;
>
> inside GlusterAIOCB and use gcbk from there or
>
> Are you suggesting that I make the fields of GlusterCBKData part of
> GlusterAIOCB and get rid of GlusterCBKData altogether? That would mean
> passing the GlusterAIOCB to the gluster async calls and updating its
> fields from the gluster callback routine. I can do this, but I am not
> sure whether it is safe to touch the fields of GlusterAIOCB from
> non-QEMU threads (the gluster callback thread).
The fields in GlusterCBKData could become part of GlusterAIOCB.
Different threads can access fields in a struct; they just need to
ensure access is synchronized if they touch the same fields. In the
case of this code, I think nothing requires synchronization beyond the
pipe mechanism you already use to complete processing in a QEMU thread.
>> When the argument is too long we should probably report an error
>> instead of truncating.
>
> Or should we let the gluster APIs flag an error on the truncated
> server and volume names?
What if the truncated name is a valid but different object? For example:
Max chars = 5
Objects:
"helloworld"
"hello"
If "helloworld" is truncated to "hello" we get no error back because
it's a valid object!
We need to either check sizes explicitly without truncating or use a
g_strdup() approach without any size limits and let the gfapi
functions error out if the input string is too long.
>> > +static struct glfs *qemu_gluster_init(GlusterConf *c, const char *filename)
>> > +{
>> > +    struct glfs *glfs = NULL;
>> > +    int ret;
>> > +
>> > +    ret = qemu_gluster_parsename(c, filename);
>> > +    if (ret < 0) {
>> > +        errno = -ret;
>> > +        goto out;
>> > +    }
>> > +
>> > +    glfs = glfs_new(c->volname);
>> > +    if (!glfs) {
>> > +        goto out;
>> > +    }
>> > +
>> > +    ret = glfs_set_volfile_server(glfs, "socket", c->server, c->port);
>> > +    if (ret < 0) {
>> > +        goto out;
>> > +    }
>> > +
>> > +    /*
>> > +     * TODO: Logging is not necessary but instead nice to have.
>> > +     * Can QEMU optionally log into a standard place ?
>>
>> QEMU prints to stderr, can you do that here too? The global log file
>> is not okay, especially when multiple QEMU instances are running.
>
> Ok, I can do glfs_set_logging(glfs, "/dev/stderr", loglevel);
Yes. I think "-" is best since it is supported by gfapi
(libglusterfs/src/logging.c:gf_log_init). /dev/stderr is not POSIX.
>> > +     * Need to use defines like gf_loglevel_t:GF_LOG_INFO instead of
>> > +     * hard coded values like 7 here.
>> > +     */
>> > +    ret = glfs_set_logging(glfs, "/tmp/qemu-gluster.log", 7);
>> > +    if (ret < 0) {
>> > +        goto out;
>> > +    }
>> > +
>> > +    ret = glfs_init(glfs);
>> > +    if (ret < 0) {
>> > +        goto out;
>> > +    }
>> > +    return glfs;
>> > +
>> > +out:
>> > +    if (glfs) {
>> > +        (void)glfs_fini(glfs);
>> > +    }
>> > +    return NULL;
>> > +}
>> > +
>> > +static int qemu_gluster_open(BlockDriverState *bs, const char *filename,
>> > +                             int bdrv_flags)
>> > +{
>> > +    BDRVGlusterState *s = bs->opaque;
>> > +    GlusterConf *c = g_malloc(sizeof(GlusterConf));
>>
>> Can this be allocated on the stack?
>
> It consists of PATH_MAX (4096), HOST_NAME_MAX (255) and
> GLUSTERD_MAX_VOLUME_NAME (1000). A bit heavy to be on the stack?
This is userspace; stacks are big, but it's up to you.
Stefan