
From: Prasanna Kumar Kalever
Subject: Re: [Qemu-devel] [PATCH v7 1/1] block/gluster: add support for multiple gluster backup volfile servers
Date: Fri, 16 Oct 2015 05:05:46 -0400 (EDT)

On 10/15/2015 02:59 PM, Peter Krempa wrote:
> On Wed, Oct 14, 2015 at 12:36:58 +0530, Prasanna Kumar Kalever wrote:
> > This patch adds a way to specify multiple volfile servers to the gluster
> > block backend of QEMU with tcp|rdma transport types and their port numbers.
> > 
> > Problem:
> > 
> > Currently a VM image on a gluster volume is specified like this:
> > 
> > file=gluster[+tcp]://host[:port]/testvol/a.img
> > 
> > Assume we have three hosts in a trusted pool with a replica 3 volume
> > in action, and the host mentioned in the command above unfortunately
> > goes down for some reason. Since the volume is replica 3, the other
> > two hosts are still active and the VM could boot from either of them.
> > 
> > But currently there is no mechanism to pass the addresses of the
> > other two gluster hosts to qemu.
> > 
> > Solution:
> > 
> > The new way of specifying a VM image on a gluster volume with volfile
> > servers (we still support the old syntax to maintain backward compatibility):
> > 
> > Basic command line syntax looks like:
> > 
> > Pattern I:
> >  -drive driver=gluster,
> >         volume=testvol,path=/path/a.raw,
> >         servers.0.host=1.2.3.4,
> >        [servers.0.port=24007,]
> >        [servers.0.transport=tcp,]
> >         servers.1.host=5.6.7.8,
> >        [servers.1.port=24008,]
> >        [servers.1.transport=rdma,] ...
> > 
> > Pattern II:
> >  'json:{"driver":"qcow2","file":{"driver":"gluster",
> >        "volume":"testvol","path":"/path/a.qcow2",
> >        "servers":[{tuple0},{tuple1}, ...{tupleN}]}}'
> > 
> >    driver      => 'gluster' (protocol name)
> >    volume      => name of gluster volume where our VM image resides
> >    path        => absolute path of image in gluster volume
> > 
> >   {tuple}      => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}
> > 
> >    host        => host address (hostname/ipv4/ipv6 addresses)
> >    port        => port number on which glusterd is listening (default 24007)
> >    tranport    => transport type used to connect to gluster management daemon,
> 
> s/tranport/transport/
> 
> >                    it can be tcp|rdma (default 'tcp')
> > 
> > Examples:
> > 1.
> >  -drive driver=qcow2,file.driver=gluster,
> >         file.volume=testvol,file.path=/path/a.qcow2,
> >         file.servers.0.host=1.2.3.4,
> >         file.servers.0.port=24007,
> >         file.servers.0.transport=tcp,
> >         file.servers.1.host=5.6.7.8,
> >         file.servers.1.port=24008,
> >         file.servers.1.transport=rdma
> > 2.
> >  'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
> >          "path":"/path/a.qcow2","servers":
> >          [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
> >           {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'
> > 
> > This patch gives a mechanism to provide all the server addresses that
> > are in the replica set, so in case host1 is down the VM can still boot
> > from any of the active hosts.
> > 
> > This is equivalent to the backup-volfile-servers option supported by
> > mount.glusterfs (the FUSE way of mounting a gluster volume).
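> > 
> > For example, an illustrative FUSE mount using backup servers (the host
> > addresses here are placeholders):
> > 
> >   mount -t glusterfs -o backup-volfile-servers=5.6.7.8:9.10.11.12 \
> >         1.2.3.4:/testvol /mnt/gluster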
> > 
> > This patch depends on a recent fix in libgfapi raised as part of this work:
> > http://review.gluster.org/#/c/12114/
> 
> According to the commit message of the gluster change, this is not really
> required, since the code below actually uses its own defaults if the
> user didn't provide any, so libgfapi is never called with NULL/0.
> 
Sorry for the short commit message description given there. The patch also 
fixes some other problems: if you look at 
http://review.gluster.org/#/c/12114/9/api/src/glfs.c, the correction at 
line 440 stops libgfapi from always taking only the first server; the patch 
makes it traverse the list for each server, which finally helps in 
initializing with the full list of servers.
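
To illustrate the QEMU side, here is a simplified sketch (not the exact
patch code; the GlusterServer struct is just a stand-in for the driver's
parsed servers.N.* options) of registering every server with libgfapi
before glfs_init():

#include <glusterfs/api/glfs.h>

/* Illustrative only: the real patch fills these in from the
 * servers.N.* options / JSON tuples. */
typedef struct GlusterServer {
    const char *host;       /* e.g. "1.2.3.4" */
    const char *transport;  /* "tcp" or "rdma" */
    int port;               /* glusterd port, default 24007 */
} GlusterServer;

static glfs_t *connect_with_servers(const char *volume,
                                    GlusterServer *srv, int nsrv)
{
    glfs_t *fs = glfs_new(volume);
    if (!fs) {
        return NULL;
    }

    /* With the libgfapi fix above, each call appends another entry to
     * the volfile server list instead of only the first call taking
     * effect. */
    for (int i = 0; i < nsrv; i++) {
        if (glfs_set_volfile_server(fs, srv[i].transport,
                                    srv[i].host, srv[i].port) < 0) {
            glfs_fini(fs);
            return NULL;
        }
    }

    if (glfs_init(fs) < 0) {
        glfs_fini(fs);
        return NULL;
    }
    return fs;
}

If the first host is unreachable, gfapi can then fall back to the next
entry in the list when fetching the volfile.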

> > 
> > Credits: Sincere thanks to Kevin Wolf <address@hidden> and
> > Deepak C Shetty <address@hidden> for their inputs and all their support
> > 
> > Signed-off-by: Prasanna Kumar Kalever <address@hidden>
> > ---
> 

[...]
> > +/*
> > +*
> > +*  Basic command line syntax looks like:
> > +*
> > +* Pattern I:
> > +* -drive driver=gluster,
> > +*        volume=testvol,file.path=/path/a.raw,
> > +*        servers.0.host=1.2.3.4,
> > +*       [servers.0.port=24007,]
> > +*       [servers.0.transport=tcp,]
> > +*        servers.1.host=5.6.7.8,
> > +*       [servers.1.port=24008,]
> > +*       [servers.1.transport=rdma,] ...
> 
> I've noticed this just now. Why didn't you implement the unix socket
> transport with the new syntax? The unix transport could theoretically be
> used in conjunction with the traditional remote transport in case the
> local service is down. While I'm not really sure that it would be useful,
> I'm asking so that the complex interface you are adding is complete.
> 
gluster currently supports only tcp and rdma for volfile fetching and 
notification management. The unix socket mechanism is deprecated; I shall 
send a patch later to remove the unix part completely.
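(For reference, the deprecated unix form looks like
gluster+unix:///testvol/a.img?socket=/path/to/glusterd.socket, where the
volume name and socket path are placeholders.)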


[...]
> > @@ -1794,6 +1794,56 @@
> >              '*read-pattern': 'QuorumReadPattern' } }
> >  
> >  ##
> > +# @GlusterTransport
> > +#
> > +# An enumeration of Gluster transport type
> > +#
> > +# @tcp:   TCP  - Transmission Control Protocol
> > +#
> > +# @rdma:  RDMA - Remote direct memory access
> > +#
> > +# Since: 2.5
> > +##
> > +{ 'enum': 'GlusterTransport', 'data': [ 'tcp', 'rdma' ] }
> 
> As said above, it'd be nice to have @unix as a transport for
> completeness.
> 
> So this patch was really hard to review due to the merge of
> non-functional changes (which were not described in the commit message)
> with functional changes. I'd suggest you split the patch up into a series
> containing the cleanups/renames/moves and then the functional changes
> themselves.
> 
Sure, I shall work on splitting this patch into a series of 3, containing 
the original change, the renames, and the cleanups & moves.

> Peter
> 

- Prasanna


