From: Ketan Nilangekar
Subject: Re: [Qemu-devel] [PATCH v7 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support
Date: Wed, 26 Oct 2016 22:17:57 +0000
User-agent: Microsoft-MacOutlook/0.0.0.160109

Including the rest of the folks from the original thread.


Ketan.





On 10/26/16, 9:33 AM, "Paolo Bonzini" <address@hidden> wrote:

>
>
>On 26/10/2016 00:39, Ketan Nilangekar wrote:
>> 
>> 
>>> On Oct 26, 2016, at 12:00 AM, Paolo Bonzini <address@hidden> wrote:
>>>
>>>
>>>
>>>> On 25/10/2016 23:53, Ketan Nilangekar wrote:
>>>> We need to confirm the perf numbers, but it really depends on the way we do
>>>> failover outside QEMU.
>>>>
>>>> We are looking at a VIP-based failover implementation which may need
>>>> some handling code in qnio, but that overhead should be minimal (at least
>>>> no more than the current implementation in the QEMU driver).
>>>
>>> Then it's not outside QEMU's address space; it's only outside
>>> block/vxhs.c... I don't understand.
>>>
>>> Paolo
>>>
>> 
>> Yes, that is something we are considering but have not yet finalized in a
>> design. But even if some of the failover code is in the qnio library, is
>> that a problem?
>> As per my understanding, the original suggestions were around getting the
>> failover code out of the block driver and into the network library.
>> If an optimal design for this means that some of the failover handling needs
>> to be done in qnio, is that not acceptable?
>> The way we see it, the driver/qnio will talk to the storage service using a
>> single IP but may have some retry code for retransmitting failed I/Os in a
>> failover scenario.
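
As an illustration of that retry-on-failover idea, here is a minimal sketch; the entry points vxhs_submit_io()/vxhs_reconnect() and struct vxhs_io are hypothetical placeholders, not the actual qnio API:

/*
 * Illustrative only: retransmit an I/O that failed because the storage
 * service moved behind the single virtual IP the client talks to.
 */
#include <errno.h>
#include <stddef.h>

#define VXHS_MAX_RETRIES 3

struct vxhs_io {
    const char *vip;        /* single virtual IP the client always uses */
    void *buf;
    size_t len;
    long long offset;
};

int vxhs_submit_io(struct vxhs_io *io);   /* placeholder: returns 0 or -errno */
int vxhs_reconnect(const char *vip);      /* placeholder: re-dials the VIP    */

static int vxhs_submit_with_retry(struct vxhs_io *io)
{
    int err = 0;

    for (int attempt = 0; attempt <= VXHS_MAX_RETRIES; attempt++) {
        err = vxhs_submit_io(io);
        if (err == 0) {
            return 0;                     /* I/O acknowledged by the service */
        }
        if (err != -ECONNRESET && err != -EPIPE && err != -ETIMEDOUT) {
            break;                        /* a real I/O error, not a failover */
        }
        /* The VIP has presumably moved to a surviving node; reconnect to
         * the same address and retransmit the failed request. */
        if (vxhs_reconnect(io->vip) < 0) {
            break;
        }
    }
    return err;
}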
>
>Sure, that's fine.  It's just that it seemed different from the previous
>explanation.
>
>Paolo
>
>>>> IMO, the real performance benefit of QEMU + qnio comes from:
>>>> 1. The epoll-based I/O multiplexer
>>>> 2. 8 epoll threads
>>>> 3. Zero buffer copies in userland code
>>>> 4. Minimal locking
>>>>
>>>> We are also looking at replacing the existing qnio socket code with
>>>> the memory readv/writev calls available in recent kernels for even
>>>> better performance.
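
For readers unfamiliar with the design being described, a minimal sketch of an epoll-based multiplexer with a fixed pool of worker threads (points 1 and 2 above) might look like the following; this is illustrative only and is not the actual qnio code:

/*
 * Minimal sketch: each worker thread runs its own epoll loop over a set of
 * non-blocking sockets; connection setup and request handling are omitted.
 */
#include <sys/epoll.h>
#include <pthread.h>
#include <unistd.h>

#define MAX_EVENTS  64
#define NUM_WORKERS 8               /* the "8 epoll threads" mentioned above */

static void *epoll_worker(void *arg)
{
    int epfd = *(int *)arg;
    struct epoll_event events[MAX_EVENTS];

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        if (n < 0) {
            continue;                         /* interrupted; retry */
        }
        for (int i = 0; i < n; i++) {
            if (events[i].events & EPOLLIN) {
                /* read a reply from events[i].data.fd and complete the I/O */
            }
            if (events[i].events & EPOLLOUT) {
                /* flush queued requests for events[i].data.fd */
            }
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_WORKERS];
    int epfd[NUM_WORKERS];

    for (int i = 0; i < NUM_WORKERS; i++) {
        epfd[i] = epoll_create1(0);
        pthread_create(&tid[i], NULL, epoll_worker, &epfd[i]);
    }
    /* sockets would be registered with epoll_ctl(epfd[i], EPOLL_CTL_ADD, ...) */
    pause();
    return 0;
}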
>>>
>>>>
>>>> Ketan
>>>>
>>>>> On Oct 25, 2016, at 1:01 PM, Paolo Bonzini <address@hidden> wrote:
>>>>>
>>>>>
>>>>>
>>>>>> On 25/10/2016 07:07, Ketan Nilangekar wrote:
>>>>>> We are able to derive significant performance from the qemu block
>>>>>> driver as compared to nbd/iscsi/nfs. We have prototyped nfs and nbd
>>>>>> based io tap in the past and the performance of qemu block driver is
>>>>>> significantly better. Hence we would like to go with the vxhs driver
>>>>>> for now.
>>>>>
>>>>> Is this still true with failover implemented outside QEMU (which
>>>>> requires I/O to be proxied, if I'm not mistaken)?  If so, where does the
>>>>> benefit come from?  Is it the threaded backend and performing multiple
>>>>> connections to the same server?
>>>>>
>>>>> Paolo
>>>>>
>>>>>> Ketan
>>>>>>
>>>>>>
>>>>>>> On Oct 24, 2016, at 4:24 PM, Paolo Bonzini <address@hidden>
>>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On 20/10/2016 03:31, Ketan Nilangekar wrote: This way the
>>>>>>>> failover logic will be completely out of QEMU's address space. We
>>>>>>>> are considering using some of our proprietary
>>>>>>>> clustering/monitoring services to implement service failover.
>>>>>>>
>>>>>>> Are you implementing a different protocol just for the sake of
>>>>>>> QEMU, in other words, and forwarding from that protocol to your
>>>>>>> proprietary code?
>>>>>>>
>>>>>>> If that is what you are doing, you don't need a vxhs driver in
>>>>>>> QEMU at all.  Just implement NBD or iSCSI on your side; QEMU
>>>>>>> already has drivers for those.
>>>>>>>
>>>>>>> Paolo
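
For reference, attaching a disk through QEMU's existing NBD driver is a single command-line option; the host, port and export name below are placeholders:

qemu-system-x86_64 -drive file=nbd://storage-host:10809/vdisk0,format=raw,if=virtio

The iSCSI driver works the same way with an iscsi:// URL, so no new block driver is needed on the QEMU side for that approach.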
>>>>
>>>>
