Re: [Qemu-devel] [PATCH] spapr: Reduce advertised max LUNs for spapr_vscsi


From: Laurent Vivier
Subject: Re: [Qemu-devel] [PATCH] spapr: Reduce advertised max LUNs for spapr_vscsi
Date: Thu, 10 Sep 2015 12:31:57 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0

On 10/09/2015 08:48, David Gibson wrote:
> On Thu, Sep 10, 2015 at 08:12:47AM +0200, Thomas Huth wrote:
>> On 10/09/15 03:24, David Gibson wrote:
>>> On Wed, Sep 09, 2015 at 09:29:18AM +0200, Thomas Huth wrote:
>>>> On 09/09/15 09:19, David Gibson wrote:
>>>>> On Wed, Sep 09, 2015 at 08:25:34AM +0200, Thomas Huth
>>>>> wrote:
>>>>>> On 09/09/15 03:22, David Gibson wrote:
>>>>>>> The implementation of the PAPR paravirtual SCSI adapter
>>>>>>> currently allows up to 32 LUNs (max_lun == 31).
>>>>>>> However the adapter isn't really designed to support
>>>>>>> lots of devices - the PowerVM implementation only ever
>>>>>>> puts one disk per vSCSI controller.
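
[For context: the limit under discussion is the one spapr_vscsi registers
with QEMU's SCSI layer through its SCSIBusInfo. A minimal sketch of the
kind of one-line cap being proposed is below; the values of the fields
other than .max_lun are assumptions for illustration, not a verbatim copy
of the patch.]

    /* hw/scsi/spapr_vscsi.c -- sketch only, surrounding values assumed */
    static const struct SCSIBusInfo vscsi_scsi_info = {
        .tcq         = true,
        .max_channel = 7,
        .max_target  = 63,
        .max_lun     = 7,   /* previously 31: advertise only LUNs 0..7 so
                             * configurations the guest cannot see are
                             * rejected up front */
        /* .transfer_data, .complete and .cancel handlers elided */
    };
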
>>>>>> 
>>>>>> Do you know how many LUNs are advertised by PowerVM?
>>>>> 
>>>>> Well, what do you mean by "advertised".  AFAIK from the
>>>>> point of view of the guest, the number of LUNs is
>>>>> advertised per-target, not per controller.
>>>> 
>>>> I mean, what's the highest LUN number that can be seen by a
>>>> guest under PowerVM? Is it always using only one LUN per
>>>> controller, or is there a way to change the amount of LUNs?
>>>> (Sorry if I ask dumb questions ... I do not have much
>>>> experience with PowerVM yet)
>>> 
>>> Um.. I'm not sure, I have very little experience with PowerVM
>>> too.  I think with PowerVM it's usually real SCSI devices being
>>> passed through, rather than disk images, so presumably the SCSI
>>> target itself reports however many LUNs it has.  There may be a
>>> limitation in PowerVM, or in the AIX VIO server I think it
>>> typically backends onto, but I don't know what it is.
>>> 
>>> Since that limit has been in the guest side driver forever,
>>> presumably no-one has hit LUNs > 8 in practice.
>>> 
>>>>>>> More specifically, the Linux guest side vscsi driver
>>>>>>> (the only one we really care about) is hardcoded to
>>>>>>> allow a maximum of 8 LUNs.
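
[For context: on the Linux side this kind of cap is set on the Scsi_Host
when the driver registers it. The sketch below shows the general pattern
in a probe routine; apart from the standard Scsi_Host fields and
scsi_host_alloc(), the names and values are assumptions for illustration,
not a quote of the ibmvscsi driver.]

    /* sketch of a SCSI host driver probe path with a hard-coded LUN cap */
    struct Scsi_Host *host;

    host = scsi_host_alloc(&driver_template, sizeof(*hostdata));
    if (!host)
        return -ENOMEM;

    host->max_id      = 64;  /* assumed value, for illustration */
    host->max_channel = 3;   /* assumed value, for illustration */
    host->max_lun     = 8;   /* the hard-coded limit: the midlayer never
                              * scans beyond LUN 7 on this host */
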
>>>>>> 
>>>>>> So what about changing the vscsi driver in Linux instead
>>>>>> to support more LUNs?
>>>>> 
>>>>> Doesn't help for existing guests.  Basically what I'm
>>>>> trying to achieve is for qemu to reject up-front
>>>>> configurations that are unlikely to actually work in the
>>>>> guest.
>>>> 
>>>> I just wonder whether it makes sense to change the guest
>>>> instead. In the future, if we ever have guests that support
>>>> more LUNs than 8 (maybe some non-Linux guests like FreeBSD?),
>>>> we've got to change QEMU back again... OTOH, since this is
>>>> just a one-line fix, it's likely ok to limit this to 8 now -
>>>> it's easy to revert if we ever need to, so I'm fine with
>>>> that change, I just wanted to discuss the other
>>>> possibilites.
>>> 
>>> Remember that the spapr-vscsi device exists pretty much
>>> entirely to make transition simpler for existing PowerVM
>>> guests.  New guests (Linux or otherwise) intended to run under
>>> KVM should be using virtio-blk or virtio-scsi.
>> 
>> FWIW, I had a quick look at FreeBSD sources here:
>> 
>> https://svnweb.freebsd.org/base/stable/10/sys/powerpc/pseries/phyp_vscsi.c?revision=259204&view=markup
>>
>> ... and as far as I can see, they do not limit the LUNs to 8.
>> (I only spotted a "cpi->max_lun = ~(lun_id_t)(0);" in there). So
>> there indeed might also be older guests that support more than 8
>> LUNs.
> 
> Fair enough, you've convinced me.
> 
> I still think it makes sense as a downstream-only change, though.
> 

I agree with that. I've sent a patch series to the kernel mailing list to
display the current limits and to allow changing max_lun.

Laurent
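
[For context: the series itself is not reproduced here. A common way to
make such a limit tunable in a Linux driver is a module parameter that
feeds the Scsi_Host field at probe time; the sketch below only illustrates
that pattern, and the parameter name, default and permissions are
assumptions rather than details of the actual series.]

    /* illustrative only: making the LUN cap configurable */
    static unsigned int max_lun = 8;
    module_param(max_lun, uint, 0444);
    MODULE_PARM_DESC(max_lun, "Number of LUNs to allow per target (default: 8)");

    /* ... later, in the probe routine ... */
    host->max_lun = max_lun;   /* replaces the hard-coded 8 */
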