From: Prasanna Kalever
Subject: Re: [Qemu-devel] [PATCH v18 4/4] block/gluster: add support for multiple gluster servers
Date: Fri, 15 Jul 2016 19:17:07 +0530
On Thu, Jul 14, 2016 at 5:35 PM, Markus Armbruster <address@hidden> wrote:
> Interface and error message review only.
>
> Prasanna Kumar Kalever <address@hidden> writes:
>
>> This patch adds a way to specify multiple volfile servers to the gluster
>> block backend of QEMU with tcp|rdma transport types and their port numbers.
>>
>> Problem:
>>
>> Currently VM Image on gluster volume is specified like this:
>>
>> file=gluster[+tcp]://host[:port]/testvol/a.img
>>
>> Assuming we have three hosts in trusted pool with replica 3 volume
>> in action and unfortunately host (mentioned in the command above) went down
>> for some reason, since the volume is replica 3 we now have other 2 hosts
>> active from which we can boot the VM.
>>
>> But currently there is no mechanism to pass the other 2 gluster host
>> addresses to qemu.
>
> Awkward. Perhaps:
>
> Say we have three hosts in a trusted pool with replica 3 volume
> in action. When the host mentioned in the command above goes down
> for some reason, the other two hosts are still available. But there's
> currently no way to tell QEMU about them.
Will incorporate in v19
>
>> Solution:
>>
>> New way of specifying VM Image on gluster volume with volfile servers:
>> (We still support old syntax to maintain backward compatibility)
>>
>> Basic command line syntax looks like:
>>
>> Pattern I:
>> -drive driver=gluster,
>> volume=testvol,path=/path/a.raw,
>> server.0.host=1.2.3.4,
>> [server.0.port=24007,]
>> [server.0.transport=tcp,]
>> server.1.host=5.6.7.8,
>> [server.1.port=24008,]
>> [server.1.transport=rdma,]
>
> Don't forget to update this line when you drop transport 'rdma'. More
> of the same below.
>
>> server.2.host=/var/run/glusterd.socket,
>> server.2.transport=unix ...
>>
>> Pattern II:
>> 'json:{"driver":"qcow2","file":{"driver":"gluster",
>> "volume":"testvol","path":"/path/a.qcow2",
>> "server":[{tuple0},{tuple1}, ...{tupleN}]}}'
>
> Suggest to add -drive here, for symmetry with pattern I.
>
> JSON calls the things in { ... } objects, not tuples. Let's stick to
> JSON terminology: [{object0},{object1}, ...]. But I think spelling
> things out to a similar degree as in pattern I would be clearer:
>
> -drive 'json:{"driver": "qcow2",
> "file": { "driver": "gluster",
> "volume": "testvol",
> "path": "/path/a.qcow2",
> "server": [
> { "host": "1.2.3.4",
> "port": 24007,
> "transport": "tcp" },
> { "host": "5.6.7.8"
> ... },
> ... ] }
> ... }'
IMO, this doesn't work; I have given it a try.
>
>> driver => 'gluster' (protocol name)
>> volume => name of gluster volume where our VM image resides
>> path => absolute path of image in gluster volume
>>
>> {tuple} => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}
>>
>> host => host address (hostname/ipv4/ipv6 addresses/socket path)
>> port => port number on which glusterd is listening. (default 24007)
>> transport => transport type used to connect to gluster management
>> daemon,
>> it can be tcp|rdma|unix (default 'tcp')
>>
>> Examples:
>> 1.
>> -drive driver=qcow2,file.driver=gluster,
>> file.volume=testvol,file.path=/path/a.qcow2,
>> file.server.0.host=1.2.3.4,
>> file.server.0.port=24007,
>> file.server.0.transport=tcp,
>> file.server.1.host=5.6.7.8,
>> file.server.1.port=24008,
>> file.server.1.transport=rdma,
>> file.server.2.host=/var/run/glusterd.socket
>> file.server.2.transport=unix
>> 2.
>> 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
>> "path":"/path/a.qcow2","server":
>> [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
>> {"host":"4.5.6.7","port":"24008","transport":"rdma"},
>> {"host":"/var/run/glusterd.socket","transport":"unix"}] } }'
>
> Ah, working examples. Good! No need to expand your abbreviated
> description of pattern II then, just say object instead of tuple. But if
> you prefer to expand it, go ahead, your choice.
>
>> This patch gives a mechanism to provide all the server addresses, which are
>> in replica set, so in case host1 is down VM can still boot from any of the
>> active hosts.
>>
>> This is equivalent to the backup-volfile-servers option supported by
>> mount.glusterfs (FUSE way of mounting gluster volume)
>>
>> Credits: Sincere thanks to Kevin Wolf <address@hidden> and
>> "Deepak C Shetty" <address@hidden> for inputs and all their support
>>
>> Signed-off-by: Prasanna Kumar Kalever <address@hidden>
>> ---
>> block/gluster.c | 290 ++++++++++++++++++++++++++++++++++++++++++++-------
>> qapi/block-core.json | 4 +-
>> 2 files changed, 255 insertions(+), 39 deletions(-)
>>
>> diff --git a/block/gluster.c b/block/gluster.c
>> index 40dcbc1..41046f0 100644
>> --- a/block/gluster.c
>> +++ b/block/gluster.c
>> @@ -12,8 +12,15 @@
>> #include "block/block_int.h"
>> #include "qapi/error.h"
>> #include "qemu/uri.h"
>> +#include "qemu/error-report.h"
>>
>> #define GLUSTER_OPT_FILENAME "filename"
>> +#define GLUSTER_OPT_VOLUME "volume"
>> +#define GLUSTER_OPT_PATH "path"
>> +#define GLUSTER_OPT_HOST "host"
>> +#define GLUSTER_OPT_PORT "port"
>> +#define GLUSTER_OPT_TRANSPORT "transport"
>> +#define GLUSTER_OPT_SERVER_PATTERN "server."
>> #define GLUSTER_OPT_DEBUG "debug"
>> #define GLUSTER_DEFAULT_PORT 24007
>> #define GLUSTER_DEBUG_DEFAULT 4
>> @@ -82,6 +89,46 @@ static QemuOptsList runtime_opts = {
>> },
>> };
>>
>> +static QemuOptsList runtime_json_opts = {
>> + .name = "gluster_json",
>> + .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
>> + .desc = {
>> + {
>> + .name = GLUSTER_OPT_VOLUME,
>> + .type = QEMU_OPT_STRING,
>> + .help = "name of gluster volume where VM image resides",
>> + },
>> + {
>> + .name = GLUSTER_OPT_PATH,
>> + .type = QEMU_OPT_STRING,
>> + .help = "absolute path to image file in gluster volume",
>> + },
>> + { /* end of list */ }
>> + },
>> +};
>> +
>> +static QemuOptsList runtime_tuple_opts = {
>> + .name = "gluster_tuple",
>> + .head = QTAILQ_HEAD_INITIALIZER(runtime_tuple_opts.head),
>> + .desc = {
>> + {
>> + .name = GLUSTER_OPT_HOST,
>> + .type = QEMU_OPT_STRING,
>> + .help = "host address (hostname/ipv4/ipv6 addresses/socket path)",
>> + },
>> + {
>> + .name = GLUSTER_OPT_PORT,
>> + .type = QEMU_OPT_NUMBER,
>> + .help = "port number on which glusterd is listening (default 24007)",
>> + },
>> + {
>> + .name = GLUSTER_OPT_TRANSPORT,
>> + .type = QEMU_OPT_STRING,
>> + .help = "transport type 'tcp'|'rdma'|'unix' (default 'tcp')",
>> + },
>> + { /* end of list */ }
>> + },
>> +};
>>
>> static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>> {
>> @@ -148,6 +195,7 @@ static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
>> static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
>> const char *filename)
>> {
>> + GlusterServer *gsconf;
>> URI *uri;
>> QueryParams *qp = NULL;
>> bool is_unix = false;
>> @@ -158,23 +206,24 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
>> return -EINVAL;
>> }
>>
>> - gconf->server = g_new0(GlusterServer, 1);
>> + gconf->server = g_new0(GlusterServerList, 1);
>> + gconf->server->value = gsconf = g_new0(GlusterServer, 1);
>>
>> /* transport */
>> if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
>> - gconf->server->transport = GLUSTER_TRANSPORT_TCP;
>> + gsconf->transport = GLUSTER_TRANSPORT_TCP;
>> } else if (!strcmp(uri->scheme, "gluster+tcp")) {
>> - gconf->server->transport = GLUSTER_TRANSPORT_TCP;
>> + gsconf->transport = GLUSTER_TRANSPORT_TCP;
>> } else if (!strcmp(uri->scheme, "gluster+unix")) {
>> - gconf->server->transport = GLUSTER_TRANSPORT_UNIX;
>> + gsconf->transport = GLUSTER_TRANSPORT_UNIX;
>> is_unix = true;
>> } else if (!strcmp(uri->scheme, "gluster+rdma")) {
>> - gconf->server->transport = GLUSTER_TRANSPORT_RDMA;
>> + gsconf->transport = GLUSTER_TRANSPORT_RDMA;
>> } else {
>> ret = -EINVAL;
>> goto out;
>> }
>> - gconf->server->has_transport = true;
>> + gsconf->has_transport = true;
>>
>> ret = parse_volume_options(gconf, uri->path);
>> if (ret < 0) {
>> @@ -196,15 +245,15 @@ static int qemu_gluster_parseuri(BlockdevOptionsGluster *gconf,
>> ret = -EINVAL;
>> goto out;
>> }
>> - gconf->server->host = g_strdup(qp->p[0].value);
>> + gsconf->host = g_strdup(qp->p[0].value);
>> } else {
>> - gconf->server->host = g_strdup(uri->server ? uri->server : "localhost");
>> + gsconf->host = g_strdup(uri->server ? uri->server : "localhost");
>> if (uri->port) {
>> - gconf->server->port = uri->port;
>> + gsconf->port = uri->port;
>> } else {
>> - gconf->server->port = GLUSTER_DEFAULT_PORT;
>> + gsconf->port = GLUSTER_DEFAULT_PORT;
>> }
>> - gconf->server->has_port = true;
>> + gsconf->has_port = true;
>> }
>>
>> out:
>> @@ -215,31 +264,26 @@ out:
>> return ret;
>> }
>>
>> -static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>> - const char *filename, Error **errp)
>> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
>> + Error **errp)
>> {
>> - struct glfs *glfs = NULL;
>> + struct glfs *glfs;
>> int ret;
>> int old_errno;
>> -
>> - ret = qemu_gluster_parseuri(gconf, filename);
>> - if (ret < 0) {
>> - error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>> - "volume/path[?socket=...]");
>> - errno = -ret;
>> - goto out;
>> - }
>> + GlusterServerList *server = NULL;
>>
>> glfs = glfs_new(gconf->volume);
>> if (!glfs) {
>> goto out;
>> }
>>
>> - ret = glfs_set_volfile_server(glfs,
>> - GlusterTransport_lookup[gconf->server->transport],
>> - gconf->server->host, gconf->server->port);
>> - if (ret < 0) {
>> - goto out;
>> + for (server = gconf->server; server; server = server->next) {
>> + ret = glfs_set_volfile_server(glfs,
>> + GlusterTransport_lookup[server->value->transport],
>> + server->value->host, server->value->port);
>> + if (ret < 0) {
>> + goto out;
>> + }
>> }
>>
>> ret = glfs_set_logging(glfs, "-", gconf->debug_level);
>> @@ -249,11 +293,13 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>>
>> ret = glfs_init(glfs);
>> if (ret) {
>> + error_report("Gluster connection for 'volume: %s and path: %s': ",
>> + gconf->volume, gconf->path);
>> + for (server = gconf->server; server; server = server->next) {
>> + error_report("failed for host %s", server->value->host);
>> + }
>> error_setg_errno(errp, errno,
>> - "Gluster connection failed for host=%s port=%d "
>> - "volume=%s path=%s transport=%s", gconf->server->host,
>> - gconf->server->port, gconf->volume, gconf->path,
>> - GlusterTransport_lookup[gconf->server->transport]);
>> + "Please refer to gluster logs for more info");
>
> error_setg() and error_report() do not mix.
>
> error_report() reports to stderr or the current monitor. Okay in some
> contexts, wrong in others. Okay contexts include QEMU startup, and HMP
> commands. Wrong contexts include QMP commands, and functions with an
> Error **errp parameter.
>
> It's wrong in QMP commands, because the error must be sent via QMP.
>
> It's wrong in functions taking errp, because how to handle errors is the
> caller's decision. In particular, the caller may decide to ignore the
> error. The function mustn't babble about it with error_report().
>
> In particular, when this runs in QMP context, the error sent to QMP will
> just be
>
> "Please refer to gluster logs for more info: %s", strerror(errno)
>
> and the real error "Gluster connection for ..." gets barfed to
> stderr.
>
> Aside: when error_report() is correct, use exactly one per error. You
> may add information with error_printf().
>
> You should call just error_setg_errno() here, with the full error
> message. Since the error message consists of a fixed part followed by
> variable number of "failed for host" parts, you have to construct it
> dynamically. Perhaps with a series of g_strdup_printf(). Inefficient,
> but good enough for an error path.
>
> "Please refer to gluster logs for more info" is a hint. The function to
> add hints to an Error is error_append_hint(). Search the tree for
> examples.
>
> Finally, do test your error messages to make sure they come out
> readable!
That's awesome, thanks for the keen theory. I hope I will not
disappoint you in v19.
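For reference, the dynamic message construction Markus suggests could be sketched outside QEMU in plain C. The `Server` struct and `build_gluster_error()` below are hypothetical stand-ins for `GlusterServerList` and the real `g_strdup_printf()` calls; in the actual patch the resulting string would go to `error_setg_errno()`, with the "refer to gluster logs" part attached via `error_append_hint()`:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for QEMU's GlusterServerList. */
typedef struct Server {
    const char *host;
    struct Server *next;
} Server;

/*
 * Build the whole error message dynamically: a fixed prefix plus one
 * ", host=..." fragment per failed server.  Repeated reallocation is
 * inefficient, but good enough for an error path.
 */
static char *build_gluster_error(const char *volume, const char *path,
                                 const Server *servers)
{
    char *msg, *next;

    if (asprintf(&msg, "Gluster connection failed for volume=%s path=%s",
                 volume, path) < 0) {
        return NULL;
    }
    for (const Server *s = servers; s; s = s->next) {
        if (asprintf(&next, "%s, host=%s", msg, s->host) < 0) {
            free(msg);
            return NULL;
        }
        free(msg);   /* drop the shorter string, keep the extended one */
        msg = next;
    }
    return msg;
}
```

The single resulting string keeps the whole error in one place, so a QMP client sees the full message instead of finding half of it on stderr.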
>
>>
>> /* glfs_init sometimes doesn't set errno although docs suggest that */
>> if (errno == 0)
>> errno = EINVAL;
>
> Well, if that's the case, shouldn't you do this before you pass
> (possibly zero) errno to error_setg_errno()?
>
>> @@ -272,6 +318,176 @@ out:
>> return NULL;
>> }
>>
>> +static int qapi_enum_parse(const char *opt)
>> +{
>> + int i;
>> +
>> + if (!opt) {
>> + /* Set tcp as default */
>> + return GLUSTER_TRANSPORT_TCP;
>> + }
>> +
>> + for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
>> + if (!strcmp(opt, GlusterTransport_lookup[i])) {
>> + return i;
>> + }
>> + }
>> +
>> + return i;
>> +}
>> +
>> +/*
>> + * Convert the json formatted command line into qapi.
>> +*/
>> +static int qemu_gluster_parsejson(BlockdevOptionsGluster *gconf,
>
> "parsejson" isn't a word. "parse_json" would be two :)
taken care.
>
>> + QDict *options)
>> +{
>> + QemuOpts *opts;
>> + GlusterServer *gsconf;
>> + GlusterServerList *curr = NULL;
>> + QDict *backing_options = NULL;
>> + Error *local_err = NULL;
>> + char *str = NULL;
>> + const char *ptr;
>> + size_t num_servers;
>> + int i;
>> +
>> + /* create opts info from runtime_json_opts list */
>> + opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
>> + qemu_opts_absorb_qdict(opts, options, &local_err);
>> + if (local_err) {
>> + goto out;
>> + }
>> +
>> + num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
>> + if (num_servers < 1) {
>> + error_setg(&local_err, "qemu_gluster: please provide 'server' "
>> + "option with valid fields in array of tuples");
>
> This isn't an error message, it's instructions what to do. Such
> instructions can be useful when they're correct, but they can't replace
> an error message. The error message should state what's wrong. No
> less, no more.
>
> Moreover, avoid prefixes like "qemu_gluster:". Usually, the fact that
> this is about Gluster is obvious. When it isn't, providing context is
> the caller's job.
sure, I have modified accordingly.
>
>> + goto out;
>> + }
>> +
>> + ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
>> + if (!ptr) {
>> + error_setg(&local_err, "qemu_gluster: please provide 'volume' "
>> + "option");
>
> Not an error message.
>
>> + goto out;
>> + }
>> + gconf->volume = g_strdup(ptr);
>> +
>> + ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
>> + if (!ptr) {
>> + error_setg(&local_err, "qemu_gluster: please provide 'path' "
>> + "option");
>
> Not an error message.
>
>> + goto out;
>> + }
>> + gconf->path = g_strdup(ptr);
>> +
>> + qemu_opts_del(opts);
>> +
>> + /* create opts info from runtime_tuple_opts list */
>> + for (i = 0; i < num_servers; i++) {
>> + opts = qemu_opts_create(&runtime_tuple_opts, NULL, 0, &error_abort);
>> + str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
>> + qdict_extract_subqdict(options, &backing_options, str);
>> + qemu_opts_absorb_qdict(opts, backing_options, &local_err);
>> + if (local_err) {
>> + goto out;
>> + }
>> + qdict_del(backing_options, str);
>> + g_free(str);
>> + str = NULL;
>> +
>> + ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
>> + if (!ptr) {
>> + error_setg(&local_err, "qemu_gluster: server.{tuple.%d} "
>> + "requires 'host' option", i);
>> + goto out;
>> + }
>> +
>> + gsconf = g_new0(GlusterServer, 1);
>> +
>> + gsconf->host = g_strdup(ptr);
>> +
>> + ptr = qemu_opt_get(opts, GLUSTER_OPT_TRANSPORT);
>> + /* check whether transport type specified in json command is valid */
>> + gsconf->transport = qapi_enum_parse(ptr);
>> + if (gsconf->transport == GLUSTER_TRANSPORT__MAX) {
>> + error_setg(&local_err, "qemu_gluster: please set 'transport'"
>> + " type in tuple.%d as tcp|rdma|unix", i);
>
> Not an error message.
>
>> + qapi_free_GlusterServer(gsconf);
>> + goto out;
>> + }
>> + gsconf->has_transport = true;
>> +
>> + /* we don't need port number for unix transport */
>> + if (gsconf->transport != GLUSTER_TRANSPORT_UNIX) {
>> + gsconf->port = qemu_opt_get_number(opts, GLUSTER_OPT_PORT,
>> + GLUSTER_DEFAULT_PORT);
>> + gsconf->has_port = true;
>> + }
>> +
>> + if (gconf->server == NULL) {
>> + gconf->server = g_new0(GlusterServerList, 1);
>> + gconf->server->value = gsconf;
>> + curr = gconf->server;
>> + } else {
>> + curr->next = g_new0(GlusterServerList, 1);
>> + curr->next->value = gsconf;
>> + curr = curr->next;
>> + }
>> +
>> + qemu_opts_del(opts);
>> + }
>> +
>> + return 0;
>> +
>> +out:
>> + error_report_err(local_err);
>
> Aha, qemu_gluster_parsejson() reports errors with error_report_err().
oops...!
>
>> + qemu_opts_del(opts);
>> + if (str) {
>> + qdict_del(backing_options, str);
>> + g_free(str);
>> + }
>> + errno = EINVAL;
>> + return -errno;
>> +}
>> +
>> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>> + const char *filename,
>> + QDict *options, Error **errp)
>> +{
>> + int ret;
>> +
>> + if (filename) {
>> + ret = qemu_gluster_parseuri(gconf, filename);
>> + if (ret < 0) {
>> + error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
>> + "volume/path[?socket=...]");
>
> Not an error message.
>
>> + errno = -ret;
>> + return NULL;
>> + }
>> + } else {
>> + ret = qemu_gluster_parsejson(gconf, options);
>
> Anti-pattern: function that reports errors with error_report() or
> similar called from a function with an Error **errp parameter.
>
> In QMP context, qemu_gluster_parsejson() writes the real error to
> stderr, and QMP sends back only the useless "Usage: ..." error below.
>
> qemu_gluster_parsejson() needs to return the Error instead, via errp
> parameter. Then you can error_propagate() here, and...
>
>> + if (ret < 0) {
>> + error_setg(errp, "Usage: "
>> + "-drive driver=qcow2,file.driver=gluster,"
>> + "file.volume=testvol,file.path=/path/a.qcow2,"
>> + "file.server.0.host=1.2.3.4,"
>> + "[file.server.0.port=24007,]"
>> + "[file.server.0.transport=tcp,]"
>> + "file.server.1.host=5.6.7.8,"
>> + "[file.server.1.port=24008,]"
>> + "[file.server.1.transport=rdma,]"
>> + "file.server.2.host=/var/run/glusterd.socket,"
>> + "file.server.2.transport=unix ...");
>
> ... drop this usage hint.
I still think this is needed, in a slightly modified version.
Thanks Markus!
--
Prasanna
>
>> + errno = -ret;
>> + return NULL;
>> + }
>> + }
>> +
>> + return qemu_gluster_glfs_init(gconf, errp);
>> +}
>> +
>> static void qemu_gluster_complete_aio(void *opaque)
>> {
>> GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
>> @@ -371,7 +587,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>> gconf = g_new0(BlockdevOptionsGluster, 1);
>> gconf->debug_level = s->debug_level;
>> gconf->has_debug_level = true;
>> - s->glfs = qemu_gluster_init(gconf, filename, errp);
>> + s->glfs = qemu_gluster_init(gconf, filename, options, errp);
>> if (!s->glfs) {
>> ret = -errno;
>> goto out;
>> @@ -442,7 +658,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>> gconf = g_new0(BlockdevOptionsGluster, 1);
>> gconf->debug_level = s->debug_level;
>> gconf->has_debug_level = true;
>> - reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
>> + reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, NULL, errp);
>> if (reop_s->glfs == NULL) {
>> ret = -errno;
>> goto exit;
>> @@ -589,7 +805,7 @@ static int qemu_gluster_create(const char *filename,
>> }
>> gconf->has_debug_level = true;
>>
>> - glfs = qemu_gluster_init(gconf, filename, errp);
>> + glfs = qemu_gluster_init(gconf, filename, NULL, errp);
>> if (!glfs) {
>> ret = -errno;
>> goto out;
>> @@ -969,7 +1185,7 @@ static BlockDriver bdrv_gluster = {
>> .format_name = "gluster",
>> .protocol_name = "gluster",
>> .instance_size = sizeof(BDRVGlusterState),
>> - .bdrv_needs_filename = true,
>> + .bdrv_needs_filename = false,
>> .bdrv_file_open = qemu_gluster_open,
>> .bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
>> .bdrv_reopen_commit = qemu_gluster_reopen_commit,
>> @@ -997,7 +1213,7 @@ static BlockDriver bdrv_gluster_tcp = {
>> .format_name = "gluster",
>> .protocol_name = "gluster+tcp",
>> .instance_size = sizeof(BDRVGlusterState),
>> - .bdrv_needs_filename = true,
>> + .bdrv_needs_filename = false,
>> .bdrv_file_open = qemu_gluster_open,
>> .bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
>> .bdrv_reopen_commit = qemu_gluster_reopen_commit,
>> @@ -1053,7 +1269,7 @@ static BlockDriver bdrv_gluster_rdma = {
>> .format_name = "gluster",
>> .protocol_name = "gluster+rdma",
>> .instance_size = sizeof(BDRVGlusterState),
>> - .bdrv_needs_filename = true,
>> + .bdrv_needs_filename = false,
>> .bdrv_file_open = qemu_gluster_open,
>> .bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
>> .bdrv_reopen_commit = qemu_gluster_reopen_commit,
>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>> index f8b38bb..52cee7d 100644
>> --- a/qapi/block-core.json
>> +++ b/qapi/block-core.json
>> @@ -2077,7 +2077,7 @@
>> #
>> # @path: absolute path to image file in gluster volume
>> #
>> -# @server: gluster server description
>> +# @server: one or more gluster server descriptions
>> #
>> # @debug_level: #libgfapi log level (default '4' which is Error)
>> #
>> @@ -2086,7 +2086,7 @@
>> { 'struct': 'BlockdevOptionsGluster',
>> 'data': { 'volume': 'str',
>> 'path': 'str',
>> - 'server': 'GlusterServer',
>> + 'server': [ 'GlusterServer' ],
>> '*debug_level': 'int' } }
>>
>> ##
>
> Incompatible change, okay because the thing got added only in the
> previous patch.