Re: [Qemu-devel] virtio-scsi spec, first public draft


From: Paolo Bonzini
Subject: Re: [Qemu-devel] virtio-scsi spec, first public draft
Date: Thu, 05 May 2011 16:50:46 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110307 Fedora/3.1.9-0.39.b3pre.fc14 Lightning/1.0b3pre Mnenhy/0.8.3 Thunderbird/3.1.9

On 05/05/2011 04:29 PM, Hannes Reinecke wrote:
>> I chose 1 requestq per target so that, with MSI-X support, each
>> target can be associated to one MSI-X vector.
>>
>> If you want a large number of units, you can subdivide targets into
>> logical units, or use multiple adapters if you prefer. We can have
>> 20-odd SCSI adapters, each with 65534 targets. I think we're way
>> beyond the practical limits even before LUN support is added to QEMU.
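
(As an aside, here is a rough C sketch of that layout; the queue numbering,
names and limits below are illustrative, not text from the draft:)

#include <stdint.h>

#define VIRTIO_SCSI_CTRL_REQQ   0      /* control requestq (assumed index) */
#define VIRTIO_SCSI_CTRL_RECVQ  1      /* control receiveq (assumed index) */
#define VIRTIO_SCSI_FIRST_TGTQ  2      /* first per-target request queue */
#define VIRTIO_SCSI_MAX_TARGETS 65534

/* One requestq per target: the queue index follows the target number. */
static inline uint16_t target_to_requestq(uint16_t target)
{
    return VIRTIO_SCSI_FIRST_TGTQ + target;
}

/* With one MSI-X vector per virtqueue, the vector can simply track the
 * queue index (vector 0 could stay reserved for configuration changes). */
static inline uint16_t target_to_msix_vector(uint16_t target)
{
    return target_to_requestq(target);
}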

> But this will make queue full tracking harder.
> If we have one queue per LUN the SCSI stack is able to track QUEUE FULL
> states and will adjust the queue depth accordingly.
> When we have only one queue per target we cannot track QUEUE FULL
> anymore and have to rely on the static per-host 'can_queue' setting,
> which doesn't work as well, especially in a virtualized environment
> where the queue full conditions might change at any time.
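
(The per-LUN tracking described above boils down to something like the
following rough sketch; the status value and structure are illustrative,
not the actual midlayer code. With only one queue per target or per host,
all that remains is a coarse can_queue-style limit:)

#include <stdint.h>

#define SAM_STAT_TASK_SET_FULL 0x28   /* the "QUEUE FULL" status */

struct lun_queue_state {
    uint32_t queue_depth;   /* commands the initiator allows in flight */
    uint32_t outstanding;   /* commands currently in flight on this LUN */
};

/* On QUEUE FULL, clamp the depth to what the device actually accepted. */
static void on_scsi_status(struct lun_queue_state *lun, uint8_t status)
{
    if (status == SAM_STAT_TASK_SET_FULL && lun->outstanding > 1)
        lun->queue_depth = lun->outstanding - 1;
}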

So you want one virtqueue per LUN? I had that in the first version, but then you had to manually associate a (target, 8-byte LUN) pair with each virtqueue. That was very hairy, so I changed it to one queue per target.

But read on:

>> For comparison, Windows supports up to 1024 targets per adapter
>> (split across 8 channels); IBM vSCSI provides up to 128; VMware
>> supports a maximum of 15 SCSI targets per adapter and 4 adapters per
>> VM.

> We don't have to impose any hard limits here. The virtio scsi transport
> would need to be able to detect the targets, and we would be using
> whatever targets have been found.

Yes, that's what I wrote above. Right now "detect the targets" means "send INQUIRY for LUN0 and/or REPORT LUNS to each virtqueue", thanks to the 1:1 relationship. In my first version it would mean:

- associate each target's LUN0 to a virtqueue
- if needed, send INQUIRY for LUN0 and/or REPORT LUNS
- if needed, deassociate the LUN0 and the virtqueue

Really, it was ugly. It also raised a lot more questions, such as what to do if a virtqueue has pending requests at deassociation time.
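
Roughly, probing a target in that first version would have looked like
this; the helper names are made up for the sketch:

#include <stdint.h>

struct virtio_scsi;                    /* driver state, elided */
struct virtqueue;

/* Hypothetical helpers, only to show the shape of the old scheme. */
extern struct virtqueue *vq_associate(struct virtio_scsi *vs,
                                      uint16_t target, uint64_t lun);
extern void vq_deassociate(struct virtio_scsi *vs, struct virtqueue *vq);
extern int probe_lun0(struct virtqueue *vq);   /* INQUIRY and/or REPORT LUNS */

static int probe_target_dynamic(struct virtio_scsi *vs, uint16_t target)
{
    struct virtqueue *vq = vq_associate(vs, target, 0);

    if (!vq)
        return -1;

    int present = probe_lun0(vq);

    /* ...and here is the open question: what about requests still
     * pending on the queue when the association is torn down? */
    vq_deassociate(vs, vq);
    return present;
}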

>> Yes, just add the first LUN to it (it will be LUN0 which must be
>> there anyway). The target's existence will be reported on the
>> control receiveq.

> ?? How is this supposed to work?
> How can I detect the existence of a virtqueue?

Config space tells you how many virtqueues exist, which gives the maximum number of targets you can address. If some of them are empty at the beginning of the guest's life, their LUN0 will fail to answer INQUIRY and REPORT LUNS.

(It is the same for vmw_pvscsi by the way, except simpler: the maximum # of targets is not configurable, and there is just one queue + one interrupt).
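
In rough C, the whole scan under the 1:1 scheme reduces to something like
this (the config field accessor and probe helper are illustrative names):

#include <stdbool.h>
#include <stdint.h>

struct virtio_scsi;   /* driver state, elided */

/* Illustrative helpers: read the queue count from config space, and
 * probe LUN 0 of the given target with INQUIRY and/or REPORT LUNS. */
extern uint16_t cfg_num_target_queues(struct virtio_scsi *vs);
extern bool probe_target_lun0(struct virtio_scsi *vs, uint16_t target);

static void scan_targets(struct virtio_scsi *vs)
{
    uint16_t max_targets = cfg_num_target_queues(vs);

    for (uint16_t tgt = 0; tgt < max_targets; tgt++) {
        if (probe_target_lun0(vs, tgt)) {
            /* Target present: let the SCSI midlayer scan its LUNs. */
        } else {
            /* Empty for now; a target appearing later will be reported
             * on the control receiveq. */
        }
    }
}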

> And to be consistent with the SCSI layer the virtqueues then in fact
> would need to map the SCSI targets; LUNs would be detected by the SCSI
> midlayer outside the control of the virtio-scsi HBA.

Exactly, that was my point! It seemed so clean compared to a dynamic assignment between LUNs and virtqueues.

>> VIRTIO_SCSI_T_TMF_LOGICAL_UNIT_DETACH asks the device to make the
>> logical unit (and the target as well if this is the last logical
>> unit) disappear. It takes an I_T_L nexus. This non-standard TMF
>> should be used in response to a host request to shutdown a target
>> or LUN, after having placed the LUN in a clean state.
>>
>> It is not really an initiator-driven detach, it is the initiator's
>> acknowledgement of a target-driven detach. The target needs to know
>> when the initiator is ready so that it can free resources attached
>> to the logical unit (this is particularly important if the LU is a
>> physical disk and it is opened with exclusive access).

> Not required. The target can detach any LUN at any time and can rely on
> the initiator to handle this situation. Multipath handles this just fine.

I didn't invent this, we had a customer request this feature for Xen guests in the past (a "soft" target detach where the filesystem is unmounted cleanly). But I guess I can drop it since KVM guests have agents like Matahari that will take care of this. They will use out-of-band channels to start an initiator-driven detach, and I guess it's better this way. :)

BTW, with barriers gone, I think I can also drop the per-target TMF command.
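
(For reference, the control-queue TMF request being discussed is
essentially a small structure along these lines; the field names and
sizes are a sketch, not normative text:)

#include <stdint.h>

struct virtio_scsi_tmf_req {
    uint32_t type;      /* task management function request */
    uint32_t subtype;   /* which TMF, e.g. the logical unit detach above */
    uint8_t  lun[8];    /* the addressed logical unit (I_T_L nexus) */
    uint64_t id;        /* identifier echoed back in the TMF response */
};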

Thanks for the review.

Paolo


