From: Peter Krempa
Subject: Re: [Qemu-devel] [PATCH v7 1/1] block/gluster: add support for multiple gluster backup volfile servers
Date: Thu, 15 Oct 2015 11:29:05 +0200
User-agent: Mutt/1.5.23 (2014-03-12)

On Wed, Oct 14, 2015 at 12:36:58 +0530, Prasanna Kumar Kalever wrote:
> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
> 
> Problem:
> 
> Currently a VM image on a gluster volume is specified like this:
> 
> file=gluster[+tcp]://host[:port]/testvol/a.img
> 
> Assuming we have three hosts in a trusted pool with a replica 3 volume
> in action, and the host mentioned in the command above unfortunately went
> down for some reason, since the volume is replica 3 we still have the
> other 2 hosts active, from which we can boot the VM.
> 
> But currently there is no mechanism to pass the other 2 gluster host
> addresses to qemu.
> 
> Solution:
> 
> New way of specifying VM Image on gluster volume with volfile servers:
> (We still support old syntax to maintain backward compatibility)
> 
> Basic command line syntax looks like:
> 
> Pattern I:
>  -drive driver=gluster,
>         volume=testvol,path=/path/a.raw,
>         servers.0.host=1.2.3.4,
>        [servers.0.port=24007,]
>        [servers.0.transport=tcp,]
>         servers.1.host=5.6.7.8,
>        [servers.1.port=24008,]
>        [servers.1.transport=rdma,] ...
> 
> Pattern II:
>  'json:{"driver":"qcow2","file":{"driver":"gluster",
>        "volume":"testvol","path":"/path/a.qcow2",
>        "servers":[{tuple0},{tuple1}, ...{tupleN}]}}'
> 
>    driver      => 'gluster' (protocol name)
>    volume      => name of gluster volume where our VM image resides
>    path        => absolute path of image in gluster volume
> 
>   {tuple}      => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}
> 
>    host        => host address (hostname/ipv4/ipv6 addresses)
>    port        => port number on which glusterd is listening. (default 24007)
>    tranport    => transport type used to connect to gluster management daemon,

s/tranport/transport/

>                    it can be tcp|rdma (default 'tcp')
> 
> Examples:
> 1.
>  -drive driver=qcow2,file.driver=gluster,
>         file.volume=testvol,file.path=/path/a.qcow2,
>         file.servers.0.host=1.2.3.4,
>         file.servers.0.port=24007,
>         file.servers.0.transport=tcp,
>         file.servers.1.host=5.6.7.8,
>         file.servers.1.port=24008,
>         file.servers.1.transport=rdma
> 2.
>  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
>          "path":"/path/a.qcow2","servers":
>          [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
>           {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'
> 
> This patch gives a mechanism to provide all the server addresses that are in
> the replica set, so that in case host1 is down the VM can still boot from
> any of the active hosts.
> 
> This is equivalent to the backup-volfile-servers option supported by
> mount.glusterfs (the FUSE way of mounting a gluster volume).
> 
> This patch depends on a recent fix in libgfapi raised as part of this work:
> http://review.gluster.org/#/c/12114/

According to the commit message of the gluster change, this is not really
required: the code below uses its own defaults if the user didn't provide
any, so libgfapi is never called with NULL/0.
> 
> Credits: Sincere thanks to Kevin Wolf <address@hidden> and
> "Deepak C Shetty" <address@hidden> for inputs and all their support
> 
> Signed-off-by: Prasanna Kumar Kalever <address@hidden>
> ---

[...]

> @@ -24,22 +35,108 @@ typedef struct BDRVGlusterState {
>      struct glfs_fd *fd;
>  } BDRVGlusterState;
>  
> -typedef struct GlusterConf {
> -    char *server;
> -    int port;
> -    char *volname;
> -    char *image;
> +typedef struct BDRVGlusterReopenState {
> +    struct glfs *glfs;
> +    struct glfs_fd *fd;
> +} BDRVGlusterReopenState;
> +
> +typedef struct GlusterServerConf {
> +    char *host;
> +    int   port;
>      char *transport;
> +} GlusterServerConf;
> +
> +typedef struct GlusterConf {
> +    char *volume;
> +    char *path;
> +    GlusterServerConf *gsconf;
>  } GlusterConf;

I'd suggest you split the renaming of the variables from the actual
addition of the multi-host support. As you will see later, the renames
clutter this patch with otherwise irrelevant changes.

>  
> +static QemuOptsList qemu_gluster_create_opts = {
> +    .name = "qemu-gluster-create-opts",
> +    .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
> +    .desc = {
> +        {
> +            .name = BLOCK_OPT_SIZE,
> +            .type = QEMU_OPT_SIZE,
> +            .help = "Virtual disk size"
> +        },
> +        {
> +            .name = BLOCK_OPT_PREALLOC,
> +            .type = QEMU_OPT_STRING,
> +            .help = "Preallocation mode (allowed values: off, full)"
> +        },
> +        { /* end of list */ }
> +    }
> +};
> +
> +static QemuOptsList runtime_opts = {
> +    .name = "gluster",
> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
> +    .desc = {
> +        {
> +            .name = GLUSTER_OPT_FILENAME,
> +            .type = QEMU_OPT_STRING,
> +            .help = "URL to the gluster image",
> +        },
> +        { /* end of list */ }
> +    },
> +};
> +
> +static QemuOptsList runtime_json_opts = {
> +    .name = "gluster_json",
> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
> +    .desc = {
> +        {
> +            .name = GLUSTER_OPT_VOLUME,
> +            .type = QEMU_OPT_STRING,
> +            .help = "name of gluster volume where our VM image resides",
> +        },
> +        {
> +            .name = GLUSTER_OPT_PATH,
> +            .type = QEMU_OPT_STRING,
> +            .help = "absolute path to image file in gluster volume",
> +        },
> +        { /* end of list */ }
> +    },
> +};
> +
> +static QemuOptsList runtime_tuple_opts = {
> +    .name = "gluster_tuple",
> +    .head = QTAILQ_HEAD_INITIALIZER(runtime_tuple_opts.head),
> +    .desc = {
> +        {
> +            .name = GLUSTER_OPT_HOST,
> +            .type = QEMU_OPT_STRING,
> +            .help = "host address (hostname/ipv4/ipv6 addresses)",
> +        },
> +        {
> +            .name = GLUSTER_OPT_PORT,
> +            .type = QEMU_OPT_NUMBER,
> +            .help = "port number on which glusterd is listening(default 
> 24007)",
> +        },
> +        {
> +            .name = GLUSTER_OPT_TRANSPORT,
> +            .type = QEMU_OPT_STRING,
> +            .help = "transport type used to connect to glusterd(default 
> tcp)",
> +        },
> +        { /* end of list */ }
> +    },
> +};
> +
>  static void qemu_gluster_gconf_free(GlusterConf *gconf)
>  {
>      if (gconf) {
> -        g_free(gconf->server);
> -        g_free(gconf->volname);
> -        g_free(gconf->image);
> -        g_free(gconf->transport);
> +        g_free(gconf->volume);
> +        g_free(gconf->path);
> +        if (gconf->gsconf) {
> +            g_free(gconf->gsconf[0].host);
> +            g_free(gconf->gsconf[0].transport);
> +            g_free(gconf->gsconf);
> +            gconf->gsconf = NULL;
> +        }

Renames interleaved with functional changes.

>          g_free(gconf);
> +        gconf = NULL;
>      }
>  }
>  
> @@ -57,19 +154,19 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>      if (*p == '\0') {
>          return -EINVAL;
>      }
> -    gconf->volname = g_strndup(q, p - q);
> +    gconf->volume = g_strndup(q, p - q);
>  
> -    /* image */
> +    /* path */
>      p += strspn(p, "/");
>      if (*p == '\0') {
>          return -EINVAL;
>      }
> -    gconf->image = g_strdup(p);
> +    gconf->path = g_strdup(p);
>      return 0;
>  }

All are just renames.

>  
>  /*
> - * file=gluster[+transport]://[server[:port]]/volname/image[?socket=...]
> + * file=gluster[+transport]://[host[:port]]/volume/path[?socket=...]
>   *
>   * 'gluster' is the protocol.
>   *
> @@ -78,10 +175,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>   * tcp, unix and rdma. If a transport type isn't specified, then tcp
>   * type is assumed.
>   *
> - * 'server' specifies the server where the volume file specification for
> + * 'host' specifies the host where the volume file specification for
>   * the given volume resides. This can be either hostname, ipv4 address
>   * or ipv6 address. ipv6 address needs to be within square brackets [ ].
> - * If transport type is 'unix', then 'server' field should not be specified.
> + * If transport type is 'unix', then 'host' field should not be specified.
>   * The 'socket' field needs to be populated with the path to unix domain
>   * socket.
>   *
> @@ -90,9 +187,9 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>   * default port. If the transport type is unix, then 'port' should not be
>   * specified.
>   *
> - * 'volname' is the name of the gluster volume which contains the VM image.
> + * 'volume' is the name of the gluster volume which contains the VM image.
>   *
> - * 'image' is the path to the actual VM image that resides on gluster volume.
> + * 'path' is the path to the actual VM image that resides on gluster volume.
>   *
>   * Examples:
>   *

All this too.

> @@ -105,28 +202,32 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
>   * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
>   * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
>   */
> -static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> +static int qemu_gluster_parseuri(GlusterConf **pgconf, const char *filename)
>  {
> +    GlusterConf *gconf;
>      URI *uri;
>      QueryParams *qp = NULL;
>      bool is_unix = false;
> -    int ret = 0;
> +    int ret;
>  
>      uri = uri_parse(filename);
>      if (!uri) {
>          return -EINVAL;
>      }
>  
> +    gconf = g_new0(GlusterConf, 1);
> +    gconf->gsconf = g_new0(GlusterServerConf, 1);

Here you allocate the config object into a local variable ...

> +
>      /* transport */
>      if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> -        gconf->transport = g_strdup("tcp");
> +        gconf->gsconf[0].transport = g_strdup("tcp");
>      } else if (!strcmp(uri->scheme, "gluster+tcp")) {
> -        gconf->transport = g_strdup("tcp");
> +        gconf->gsconf[0].transport = g_strdup("tcp");
>      } else if (!strcmp(uri->scheme, "gluster+unix")) {
> -        gconf->transport = g_strdup("unix");
> +        gconf->gsconf[0].transport = g_strdup("unix");
>          is_unix = true;
>      } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> -        gconf->transport = g_strdup("rdma");
> +        gconf->gsconf[0].transport = g_strdup("rdma");
>      } else {
>          ret = -EINVAL;

... but if it fails here ...

>          goto out;
> @@ -152,12 +253,19 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
>              ret = -EINVAL;
>              goto out;
>          }
> -        gconf->server = g_strdup(qp->p[0].value);
> +        gconf->gsconf[0].host = g_strdup(qp->p[0].value);
>      } else {
> -        gconf->server = g_strdup(uri->server ? uri->server : "localhost");
> -        gconf->port = uri->port;
> +        gconf->gsconf[0].host = g_strdup(uri->server ? uri->server : "localhost");
> +        if (uri->port) {
> +            gconf->gsconf[0].port = uri->port;
> +        } else {
> +            gconf->gsconf[0].port = GLUSTER_DEFAULT_PORT;
> +        }

This hunk interleaves renames with functional changes.

> +
>      }
>  
> +    *pgconf = gconf;
> +
>  out:

... you don't free it at this point thus leaking it.
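
A minimal sketch of one way to fix it (assuming 'ret' is negative on all
error paths that jump to the label):

out:
    if (qp) {
        query_params_free(qp);
    }
    uri_free(uri);
    if (ret < 0) {
        qemu_gluster_gconf_free(gconf);
    } else {
        *pgconf = gconf;
    }
    return ret;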

>      if (qp) {
>          query_params_free(qp);
> @@ -166,30 +274,35 @@ out:
>      return ret;
>  }
>  
> -static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
> -                                      Error **errp)
> +static struct glfs *qemu_gluster_glfs_init(GlusterConf *gconf, int num_servers,
> +                                           Error **errp)
>  {
> -    struct glfs *glfs = NULL;
> -    int ret;
> +    struct glfs *glfs;
> +    Error *local_err = NULL;
>      int old_errno;
> +    int ret;
> +    int i;
>  
> -    ret = qemu_gluster_parseuri(gconf, filename);
> -    if (ret < 0) {
> -        error_setg(errp, "Usage: file=gluster[+transport]://[server[:port]]/"
> -                   "volname/image[?socket=...]");
> -        errno = -ret;
> -        goto out;
> -    }
> -
> -    glfs = glfs_new(gconf->volname);
> +    glfs = glfs_new(gconf->volume);
>      if (!glfs) {
>          goto out;
>      }
>  
> -    ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->server,
> -            gconf->port);
> -    if (ret < 0) {
> -        goto out;
> +    for (i = 0; i < num_servers; i++) {
> +        ret = glfs_set_volfile_server(glfs, gconf->gsconf[i].transport,
> +                                      gconf->gsconf[i].host,
> +                                      gconf->gsconf[i].port);
> +        if (ret < 0) {
> +            goto out;
> +        } else {
> +            error_setg(&local_err, "INFO: Successfully set volume file 
> server, "
> +                                   "host:'%s' port:'%d' trasport:'%s'",
> +                                    gconf->gsconf[i].host,
> +                                    gconf->gsconf[i].port,
> +                                    gconf->gsconf[i].transport);

I'm not quite sure that using the error reporting functions is a good way
to do debug logging in qemu. I also don't really think it's necessary to
log this message here.
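
Simply dropping the 'else' branch would do (sketch):

    for (i = 0; i < num_servers; i++) {
        ret = glfs_set_volfile_server(glfs, gconf->gsconf[i].transport,
                                      gconf->gsconf[i].host,
                                      gconf->gsconf[i].port);
        if (ret < 0) {
            goto out;
        }
    }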

> +            error_report_err(local_err);
> +            local_err = NULL;
> +        }
>      }
>  
>      /*
> @@ -203,11 +316,10 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>  
>      ret = glfs_init(glfs);
>      if (ret) {
> -        error_setg_errno(errp, errno,
> -                         "Gluster connection failed for server=%s port=%d "
> -                         "volume=%s image=%s transport=%s", gconf->server,
> -                         gconf->port, gconf->volname, gconf->image,
> -                         gconf->transport);
> +        error_setg_errno(errp, errno, "Error: Gluster connection failed for "
> +                                      "given hosts volume:'%s' path:'%s' "
> +                                      "host1: %s", gconf->volume, 
> gconf->path,
> +                                      gconf->gsconf[0].host);
>  
> +        /* glfs_init sometimes doesn't set errno although docs suggest that */
>          if (errno == 0)
> @@ -215,6 +327,7 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>  
>          goto out;
>      }
> +

Spurious whitespace change.

>      return glfs;
>  
>  out:
> @@ -226,6 +339,284 @@ out:
>      return NULL;
>  }
>  
> +static int parse_transport_option(const char *opt)
> +{
> +    int i;
> +
> +    if (!opt) {
> +        /* Set tcp as default */
> +        return GLUSTER_TRANSPORT_TCP;

So this helper takes NULL and returns TCP, so ... [1]

> +    }
> +
> +    for (i = 0; i < GLUSTER_TRANSPORT_MAX; i++) {
> +        if (!strcmp(opt, GlusterTransport_lookup[i])) {
> +            return i;
> +        }
> +    }
> +
> +    return -EINVAL;
> +}
> +
> +/*
> +*
> +*  Basic command line syntax looks like:
> +*
> +* Pattern I:
> +* -drive driver=gluster,
> +*        volume=testvol,file.path=/path/a.raw,
> +*        servers.0.host=1.2.3.4,
> +*       [servers.0.port=24007,]
> +*       [servers.0.transport=tcp,]
> +*        servers.1.host=5.6.7.8,
> +*       [servers.1.port=24008,]
> +*       [servers.1.transport=rdma,] ...

I've only noticed this just now. Why didn't you implement the unix socket
transport in the new syntax? The unix transport could theoretically be
used in conjunction with the traditional remote transports in case the
local service is down. While I'm not really sure it would be useful, I'm
asking so that the complex interface you are adding is complete.

> +*
> +* Pattern II:
> +* 'json:{"driver":"qcow2","file":{"driver":"gluster",
> +*       "volume":"testvol","path":"/path/a.qcow2",
> +*       "servers":[{tuple0},{tuple1}, ...{tupleN}]}}'

[...]

> +static int qemu_gluster_parsejson(GlusterConf **pgconf, QDict *options)
> +{
> +    QemuOpts *opts;
> +    GlusterConf *gconf = NULL;
> +    QDict *backing_options = NULL;
> +    Error *local_err = NULL;
> +    const char *transport;
> +    char *str = NULL;
> +    int num_servers;
> +    int ret = 0;
> +    int i;
> +
> +    /* create opts info from runtime_json_opts list */
> +    opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
> +    qemu_opts_absorb_qdict(opts, options, &local_err);
> +    if (local_err) {
> +        goto out;
> +    }
> +    num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
> +    if (num_servers < 1) {
> +        error_setg(&local_err, "Error: qemu_gluster: please provide 'servers' "
> +                               "option with valid fields in array of tuples");
> +        goto out;
> +    }
> +
> +    gconf = g_new0(GlusterConf, 1);
> +    gconf->gsconf = g_new0(GlusterServerConf, num_servers);
> +
> +    gconf->volume = (char *) qemu_opt_get(opts, GLUSTER_OPT_VOLUME);

See below [4]

> +    if (!gconf->volume) {
> +        error_setg(&local_err, "Error: qemu_gluster: please provide 'volume' "
> +                               "option");
> +        goto out;
> +    }
> +
> +    gconf->path = (char *) qemu_opt_get(opts, GLUSTER_OPT_PATH);

See below [4]

> +    if (!gconf->path) {
> +        error_setg(&local_err, "Error: qemu_gluster: please provide 'path' "
> +                               "option");
> +        goto out;
> +    }
> +
> +    /* create opts info from runtime_tuple_opts list */
> +    opts = qemu_opts_create(&runtime_tuple_opts, NULL, 0, &error_abort);

Here you overwrite the 'opts' pointer, leaking the older qemu options.

[4]
This also makes the two pointer thefts above work (the two lines where
you cast const away), since the opts don't get freed then. I think you'll
need to copy the elements out properly and not mess with the underlying
data structures.
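
E.g. copying instead of stealing (sketch; g_strdup(NULL) safely returns
NULL, so the subsequent checks still work):

    gconf->volume = g_strdup(qemu_opt_get(opts, GLUSTER_OPT_VOLUME));
    ...
    gconf->path = g_strdup(qemu_opt_get(opts, GLUSTER_OPT_PATH));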


> +    str = g_malloc(40);
> +    for (i = 0; i < num_servers; i++) {
> +        snprintf(str, 40, GLUSTER_OPT_SERVER_PATTERN"%d.", i);
> +        qdict_extract_subqdict(options, &backing_options, str);

So if you don't flatten ([3]) the options before absorbing them, then you
can actually extract them via 'qdict_get_qdict' so that the code doesn't
need to convert them back and forth.

This also leaks 'backing_options' on every iteration, since
qdict_extract_subqdict allocates a new one each time it is called.
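
The leak itself could be plugged at the end of each iteration (sketch):

        QDECREF(backing_options);
        backing_options = NULL;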

> +        qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> +        if (local_err) {
> +            goto out;
> +        }
> +
> +        gconf->gsconf[i].host = (char *) qemu_opt_get(opts, GLUSTER_OPT_HOST);

See below [2]

> +        if (!gconf->gsconf[i].host) {
> +            error_setg(&local_err, "Error: qemu_gluster: servers.{tuple.%d} "
> +                                   "requires 'host' option", i);
> +            goto out;
> +        }
> +
> +        transport = qemu_opt_get(opts, GLUSTER_OPT_TRANSPORT);
> +        if (!transport) {
> +            transport = GLUSTER_DEFAULT_TRANSPORT;

Why is this needed? [1] The helper handles NULL just fine.

> +        }
> +        /* check whether transport type specified in json command is valid */
> +        ret = parse_transport_option(transport);
> +        if (ret < 0) {
> +            error_setg(&local_err, "Error: qemu_gluster: please set 'transport'"
> +                                   " type in tuple.%d as tcp or rdma", i);
> +            goto out;
> +        } else {
> +            /* only if valid transport i.e. either of tcp|rdma is specified */
> +            gconf->gsconf[i].transport = (char *) transport;

See below [2]

> +        }
> +
> +        gconf->gsconf[i].port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
> +                                                    GLUSTER_DEFAULT_PORT);
> +
> +        /*
> +         * reset current tuple opts to NULL/0, so that in case if the next tuple
> +         * misses any of (host, tranport, port) options there is no chance of
> +         * copying from current set.
> +         */

[2]
So the suspicious casting away of const is used for pointer theft in a
subtle way. The comment is rather unhelpful in this regard too, since it
doesn't state that 'opts' is merely re-filled rather than re-created.

You'll need to re-do this in a way where you properly copy out the fields
you need, so that the options can be freed cleanly afterwards.


> +        qemu_opt_set(opts, GLUSTER_OPT_HOST, NULL, &error_abort);
> +        qemu_opt_set_number(opts, GLUSTER_OPT_PORT,
> +                            GLUSTER_DEFAULT_PORT, &error_abort);
> +        qemu_opt_set(opts, GLUSTER_OPT_TRANSPORT, NULL, &error_abort);

So this could perhaps be avoided by re-creating 'opts' from
runtime_tuple_opts on every iteration here rather than just deleting
certain fields. If you decide against that, then the comment above should
explicitly state that further iterations merge their dicts into the same
'opts', which is why this reset is necessary. Also, in that case you'll
need to free the fields. See the sketch below.
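
E.g. a minimal sketch of that alternative:

    for (i = 0; i < num_servers; i++) {
        opts = qemu_opts_create(&runtime_tuple_opts, NULL, 0, &error_abort);
        /* ... absorb this tuple's subdict and copy out its fields ... */
        qemu_opts_del(opts);
    }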


> +    }
> +
> +    *pgconf = gconf;
> +    g_free(str);

This also leaks 'opts' on success. It's common practice to have a
'cleanup' section where all the stuff that needs to be freed on success
or on error gets accumulated, so that it does not need to be duplicated.
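
E.g. (sketch of the usual pattern, with all error paths setting 'ret'
and jumping to 'cleanup'):

    ret = num_servers;

cleanup:
    qemu_opts_del(opts);
    g_free(str);
    return ret;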

> +    return num_servers;
> +
> +out:
> +    error_report_err(local_err);
> +    g_free(str);
> +    g_free(gconf->gsconf);
> +    g_free(gconf);
> +    gconf->gsconf = NULL;
> +    gconf = NULL;
> +    errno = EINVAL;
> +    qemu_opts_del(opts);
> +    return -EINVAL;
> +}
> +
> +static struct glfs *qemu_gluster_init(GlusterConf **gconf, const char *filename,
> +                                      QDict *options, Error **errp)
> +{
> +    int ret;
> +    int num_servers = 1;
> +
> +    if (filename) {
> +        ret = qemu_gluster_parseuri(gconf, filename);
> +        if (ret < 0) {
> +            error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> +                             "volume/path[?socket=...]");
> +            errno = -ret;
> +            goto out;
> +        }
> +    } else {
> +        num_servers = qemu_gluster_parsejson(gconf, options);
> +        if (num_servers < 1) {

So this also triggers if num_servers == 0.

> +            error_setg(errp, "#Usage1: "

The '#' is not in the other message, nor in any other message in the
qemu repository.

> +                             "-drive driver=qcow2,file.driver=gluster,"
> +                             "file.volume=testvol,file.path=/path/a.qcow2,"
> +                             "file.servers.0.host=1.2.3.4,"
> +                             "[file.servers.0.port=24007,]"
> +                             "[file.servers.0.transport=tcp,]"
> +                             "file.servers.1.host=5.6.7.8,"
> +                             "[file.servers.1.port=24008,]"
> +                             "[file.servers.1.transport=rdma,] ..."
> +                             "\n#Usage2: "
> +                             "'json:{\"driver\":\"qcow2\",\"file\":"
> +                             "{\"driver\":\"gluster\",\"volume\":\""
> +                             "testvol\",\"path\":\"/path/a.qcow2\","
> +                             "\"servers\":[{\"host\":\"1.2.3.4\","
> +                             "\"port\":\"24007\",\"transport\":\"tcp\"},"
> +                             "{\"host\":\"4.5.6.7\",\"port\":\"24007\","
> +                             "\"transport\":\"rdma\"}, ...]}}'");
> +            errno = -num_servers;
> +            goto out;
> +        }
> +    }
> +
> +    return qemu_gluster_glfs_init(*gconf, num_servers, errp);
> +
> +out:

You can "return NULL" directly at the places that jump here rather than
doing the extra jump.

> +
> +    return NULL;
> +}
> +
>  static void qemu_gluster_complete_aio(void *opaque)
>  {
>      GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
> @@ -254,20 +645,6 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
>      qemu_bh_schedule(acb->bh);
>  }
>  
> -/* TODO Convert to fine grained options */
> -static QemuOptsList runtime_opts = {
> -    .name = "gluster",
> -    .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
> -    .desc = {
> -        {
> -            .name = "filename",
> -            .type = QEMU_OPT_STRING,
> -            .help = "URL to the gluster image",
> -        },
> -        { /* end of list */ }
> -    },
> -};
> -
>  static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
>  {
>      assert(open_flags != NULL);
> @@ -291,11 +668,12 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
>      BDRVGlusterState *s = bs->opaque;
>      int open_flags = 0;
>      int ret = 0;
> -    GlusterConf *gconf = g_new0(GlusterConf, 1);
> +    GlusterConf *gconf = NULL;
>      QemuOpts *opts;
>      Error *local_err = NULL;
>      const char *filename;
>  
> +    qdict_flatten(options);

[3] ... 


>      opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
>      qemu_opts_absorb_qdict(opts, options, &local_err);
>      if (local_err) {
> @@ -305,8 +683,7 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
>      }
>  
>      filename = qemu_opt_get(opts, "filename");
> -
> -    s->glfs = qemu_gluster_init(gconf, filename, errp);
> +    s->glfs = qemu_gluster_init(&gconf, filename, options, errp);
>      if (!s->glfs) {
>          ret = -errno;
>          goto out;
> @@ -314,14 +691,19 @@ static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
>  
>      qemu_gluster_parse_flags(bdrv_flags, &open_flags);
>  
> -    s->fd = glfs_open(s->glfs, gconf->image, open_flags);
> +    s->fd = glfs_open(s->glfs, gconf->path, open_flags);

Just rename ...

>      if (!s->fd) {
>          ret = -errno;
>      }
>  
>  out:
>      qemu_opts_del(opts);
> -    qemu_gluster_gconf_free(gconf);
> +    if (filename) {
> +        qemu_gluster_gconf_free(gconf);
> +    } else if (gconf) {
> +        g_free(gconf->gsconf);
> +        g_free(gconf);

This still looks weird: qemu_gluster_gconf_free frees the members while
the second branch does not. I think the pointer theft you are doing when
parsing the new syntax is unnecessarily complicating things here.
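
Once the fields are copied properly rather than stolen, both branches
could collapse into the single helper (sketch):

 out:
    qemu_opts_del(opts);
    qemu_gluster_gconf_free(gconf);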

> +    }
>      if (!ret) {
>          return ret;
>      }
> @@ -334,12 +716,6 @@ out:
>      return ret;
>  }
>  
> -typedef struct BDRVGlusterReopenState {
> -    struct glfs *glfs;
> -    struct glfs_fd *fd;
> -} BDRVGlusterReopenState;

Moving this typedef is not really relevant to the body of this patch.
You should split it out.

> -
> -
>  static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>                                         BlockReopenQueue *queue, Error **errp)
>  {
> @@ -356,15 +732,13 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>  
>      qemu_gluster_parse_flags(state->flags, &open_flags);
>  
> -    gconf = g_new0(GlusterConf, 1);
> -
> -    reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
> +    reop_s->glfs = qemu_gluster_init(&gconf, state->bs->filename, NULL, errp);

This doesn't look right. Here you are attempting to reopen an image along
with its connection, but you are not really using the parameters that
were possibly specified via the new syntax.

>      if (reop_s->glfs == NULL) {
>          ret = -errno;
>          goto exit;
>      }
>  
> -    reop_s->fd = glfs_open(reop_s->glfs, gconf->image, open_flags);
> +    reop_s->fd = glfs_open(reop_s->glfs, gconf->path, open_flags);
>      if (reop_s->fd == NULL) {
>          /* reops->glfs will be cleaned up in _abort */
>          ret = -errno;
> @@ -401,7 +775,6 @@ static void qemu_gluster_reopen_commit(BDRVReopenState *state)
>      return;
>  }
>  
> -

Spurious whitespace change.

>  static void qemu_gluster_reopen_abort(BDRVReopenState *state)
>  {
>      BDRVGlusterReopenState *reop_s = state->opaque;

[...]

> @@ -504,14 +877,14 @@ static int qemu_gluster_create(const char *filename,
>                 gluster_supports_zerofill()) {
>          prealloc = 1;
>      } else {
> -        error_setg(errp, "Invalid preallocation mode: '%s'"
> -            " or GlusterFS doesn't support zerofill API",
> -            tmp);
> +        error_setg(errp, "Error: Invalid preallocation mode: '%s'"
> +                         " or GlusterFS doesn't support zerofill API",
> +                         tmp);

This change is not related to stuff happening in this patch.

>          ret = -EINVAL;
>          goto out;
>      }
>  
> -    fd = glfs_creat(glfs, gconf->image,
> +    fd = glfs_creat(glfs, gconf->path,

Neither this.

>          O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
>      if (!fd) {
>          ret = -errno;
> @@ -696,29 +1069,11 @@ static int qemu_gluster_has_zero_init(BlockDriverState *bs)
>      return 0;
>  }
>  
> -static QemuOptsList qemu_gluster_create_opts = {
> -    .name = "qemu-gluster-create-opts",
> -    .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
> -    .desc = {
> -        {
> -            .name = BLOCK_OPT_SIZE,
> -            .type = QEMU_OPT_SIZE,
> -            .help = "Virtual disk size"
> -        },
> -        {
> -            .name = BLOCK_OPT_PREALLOC,
> -            .type = QEMU_OPT_STRING,
> -            .help = "Preallocation mode (allowed values: off, full)"
> -        },
> -        { /* end of list */ }
> -    }
> -};

Moving the above hunk doesn't seem relevant or necessary to this patch.

> -
>  static BlockDriver bdrv_gluster = {
>      .format_name                  = "gluster",
>      .protocol_name                = "gluster",
>      .instance_size                = sizeof(BDRVGlusterState),
> -    .bdrv_needs_filename          = true,
> +    .bdrv_needs_filename          = false,
>      .bdrv_file_open               = qemu_gluster_open,
>      .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
>      .bdrv_reopen_commit           = qemu_gluster_reopen_commit,

[...]

> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index bb2189e..8bc69e0 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json

[...]

> @@ -1794,6 +1794,56 @@
>              '*read-pattern': 'QuorumReadPattern' } }
>  
>  ##
> +# @GlusterTransport
> +#
> +# An enumeration of Gluster transport type
> +#
> +# @tcp:   TCP  - Transmission Control Protocol
> +#
> +# @rdma:  RDMA - Remote direct memory access
> +#
> +# Since: 2.5
> +##
> +{ 'enum': 'GlusterTransport', 'data': [ 'tcp', 'rdma' ] }

As said above, it'd be nice to have @unix as a transport for
completeness.
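
E.g. (a sketch; hypothetical until the unix transport is actually wired
up in the new syntax):

    { 'enum': 'GlusterTransport', 'data': [ 'tcp', 'unix', 'rdma' ] }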

So this patch was really hard to review due to the mixing of
non-functional changes (which were not described in the commit message)
with functional changes. I'd suggest you split the patch into a series
containing the cleanups/renames/moves first, followed by the functional
changes themselves.

Peter
