Re: [Qemu-block] [ovirt-users] Enabling libgfapi disk access with oVirt 4.2


From: Nir Soffer
Subject: Re: [Qemu-block] [ovirt-users] Enabling libgfapi disk access with oVirt 4.2
Date: Wed, 15 Nov 2017 22:05:30 +0000

On Wed, Nov 15, 2017 at 8:58 AM Misak Khachatryan <address@hidden> wrote:
Hi,

will there be a cleaner approach? I can't tolerate a full stop of all
VMs just to enable it; that seems too disruptive for a real production
environment. Will there be some migration mechanism in the future?

You can enable it per VM; you don't need to stop all of them. But I think
we do not support upgrading a host with running VMs, so upgrading
requires:

1. migrating VMs off the host you want to upgrade
2. upgrading the host
3. stopping the VM you want to switch to libgfapi
4. starting this VM on the upgraded host
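
To check which transport a given VM disk is currently using, you can inspect
the live domain XML on the host (a minimal sketch; "vm-name" is a placeholder
for the libvirt domain name):

    # fuse-based disks appear as type="file" with a /rhev/... path,
    # gfapi disks as type="network" with protocol="gluster"
    virsh -r dumpxml vm-name | grep -A 4 '<disk'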

Theoretically qemu can switch from one disk to another, but I'm not
sure this is supported when switching to the same disk over a different
transport. I do know that mirroring a network drive to another network
drive is not supported at the moment.
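
(For context, the mirroring I mean is qemu's drive-mirror, which libvirt
exposes as virsh blockcopy. A sketch of the QMP command for the supported
case with a file target; the device name and target path are illustrative:

    { "execute": "drive-mirror",
      "arguments": { "device": "drive-virtio-disk0",
                     "target": "/path/to/new-image.raw",
                     "format": "raw",
                     "sync": "full" } }

The unsupported case is pointing "target" at another network URL while the
source is already a network drive.)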

The old disk is using:

            <disk device="disk" snapshot="no" type="file">
                <source file="/rhev/data-center/mnt/server:_volname/sd_id/images/img_id/vol_id"/>
                <target bus="virtio" dev="vda"/>
                <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
            </disk>

The new disk should use:

            <disk device="disk" snapshot="no" type="network">
                <source name="volname/sd_id/images/img_id/vol_id" protocol="gluster">
                    <host name="1.2.3.4" port="0" transport="tcp"/>
                </source>
                <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
            </disk>
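
The port="0" here should just mean the default gluster port (24007) is used.
If the gluster cluster has several servers, a recent enough libvirt and qemu
also accept multiple <host> elements, so the VM can still connect when one
server is down (a sketch; the second address is a placeholder, and I am not
sure oVirt generates this yet):

                <source name="volname/sd_id/images/img_id/vol_id" protocol="gluster">
                    <host name="1.2.3.4" port="0" transport="tcp"/>
                    <host name="1.2.3.5" port="0" transport="tcp"/>
                </source>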

Adding qemu-block mailing list.

Nir
 

Best regards,
Misak Khachatryan


On Fri, Nov 10, 2017 at 12:35 AM, Darrell Budic <address@hidden> wrote:
> You do need to stop the VMs and restart them, not just issue a reboot. I
> haven't tried under 4.2 yet, but it works that way for me in 4.1.6.
>
> ________________________________
> From: Alessandro De Salvo <address@hidden>
> Subject: Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2
> Date: November 9, 2017 at 2:35:01 AM CST
> To: address@hidden
>
>
> Hi again,
>
> OK, I tried stopping all the VMs except the engine, setting engine-config -s
> LibgfApiSupported=true (for 4.2 only), and restarting the engine.
>
> When I restarted the VMs they were still not using gfapi, so this does not
> seem to help.
>
> Cheers,
>
>
>     Alessandro
>
>
>
> On 09/11/17 09:12, Alessandro De Salvo wrote:
>
> Hi,
> Where should I enable gfapi via the UI?
> The only command I tried was engine-config -s LibgfApiSupported=true, and the
> result is what is shown in my output below, so it's set to true for v4.2. Is
> that enough?
> I'll try restarting the engine. Is it really necessary to stop all the VMs and
> restart them? Of course this is a test setup and I can do it, but for
> production clusters in the future it may be a problem.
> Thanks,
>
>    Alessandro
>
> On 9 Nov 2017, at 07:23, Kasturi Narra <address@hidden> wrote:
>
> Hi,
>
>     The procedure to enable gfapi is below.
>
> 1) Stop all the running VMs
> 2) Enable gfapi via the UI or using the engine-config command
> 3) Restart the ovirt-engine service
> 4) Start the VMs again.
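>
> For steps 2 and 3 on the command line, a minimal sketch (the --cver flag
> pins the setting to a cluster compatibility version; adjust it to yours):
>
> # engine-config -s LibgfApiSupported=true --cver=4.2
> # systemctl restart ovirt-engine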
>
> Hope you have not missed any step!
>
> Thanks
> kasturi
>
> On Wed, Nov 8, 2017 at 11:58 PM, Alessandro De Salvo
> <address@hidden> wrote:
>>
>> Hi,
>>
>> I'm using the latest 4.2 beta release and want to try gfapi access, but I'm
>> currently failing to use it.
>>
>> My test setup has an external glusterfs cluster v3.12, not managed by
>> oVirt.
>>
>> The compatibility flag correctly shows that gfapi should be enabled with
>> 4.2:
>>
>> # engine-config -g LibgfApiSupported
>> LibgfApiSupported: false version: 3.6
>> LibgfApiSupported: false version: 4.0
>> LibgfApiSupported: false version: 4.1
>> LibgfApiSupported: true version: 4.2
>>
>> The data center and cluster have the 4.2 compatibility flags as well.
>>
>> However, when starting a VM with a disk on gluster I can still see that the
>> disk is mounted via fuse.
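>>
>> (One way to see this is the qemu command line: a fuse-backed disk shows a
>> file= path under /rhev/data-center/mnt/..., while a gfapi disk shows a
>> gluster:// URL; a quick sketch:
>>
>> # ps -ef | grep [q]emu | grep -o 'file=[^, ]*'
>> )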
>>
>> Any clue as to what I'm still missing?
>>
>> Thanks,
>>
>>
>>    Alessandro
>>
_______________________________________________
Users mailing list
address@hidden
http://lists.ovirt.org/mailman/listinfo/users
