Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread


From: Stefan Hajnoczi
Subject: Re: [PULL 11/33] scsi: only access SCSIDevice->requests from one thread
Date: Thu, 1 Feb 2024 09:28:17 -0500

On Thu, Feb 01, 2024 at 03:10:12PM +0100, Hanna Czenczek wrote:
> On 31.01.24 21:35, Stefan Hajnoczi wrote:
> > On Fri, Jan 26, 2024 at 04:24:49PM +0100, Hanna Czenczek wrote:
> > > On 26.01.24 14:18, Kevin Wolf wrote:
> > > > On 25.01.2024 18:32, Hanna Czenczek wrote:
> > > > > On 23.01.24 18:10, Kevin Wolf wrote:
> > > > > > On 23.01.2024 17:40, Hanna Czenczek wrote:
> > > > > > > On 21.12.23 22:23, Kevin Wolf wrote:
> > > > > > > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > > 
> > > > > > > > Stop depending on the AioContext lock and instead access
> > > > > > > > SCSIDevice->requests from only one thread at a time:
> > > > > > > > - When the VM is running only the BlockBackend's AioContext may access
> > > > > > > >   the requests list.
> > > > > > > > - When the VM is stopped only the main loop may access the requests
> > > > > > > >   list.
> > > > > > > > 
> > > > > > > > These constraints protect the requests list without the need for
> > > > > > > > locking in the I/O code path.
> > > > > > > > 
> > > > > > > > Note that multiple IOThreads are not supported yet because the code
> > > > > > > > assumes all SCSIRequests are executed from a single AioContext. Leave
> > > > > > > > that as future work.
> > > > > > > > 
> > > > > > > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > > Reviewed-by: Eric Blake <eblake@redhat.com>
> > > > > > > > Message-ID: <20231204164259.1515217-2-stefanha@redhat.com>
> > > > > > > > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > > > > > > > ---
> > > > > > > >  include/hw/scsi/scsi.h |   7 +-
> > > > > > > >  hw/scsi/scsi-bus.c     | 181 ++++++++++++++++++++++++++++-------------
> > > > > > > >  2 files changed, 131 insertions(+), 57 deletions(-)
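(For concreteness, the access rule described in that commit message, written out as a hypothetical assertion helper; this is only an illustration of the constraint, not code from the patch:)

  /* Illustration only: the requests list may be touched from the
   * BlockBackend's AioContext while the VM is running, and from the main
   * loop while it is stopped. */
  static void scsi_requests_access_check(SCSIDevice *s)
  {
      AioContext *ctx = blk_get_aio_context(s->conf.blk);

      if (runstate_is_running()) {
          assert(qemu_get_current_aio_context() == ctx);
      } else {
          assert(qemu_get_current_aio_context() == qemu_get_aio_context());
      }
  }
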
> > > > > > > My reproducer for https://issues.redhat.com/browse/RHEL-3934 now breaks
> > > > > > > more often because of this commit than because of the original bug, i.e.
> > > > > > > when repeatedly hot-plugging and unplugging a virtio-scsi and a scsi-hd
> > > > > > > device, this tends to happen when unplugging the scsi-hd:
> > > Note: We (on issues.redhat.com) have a separate report that seems to
> > > concern this very problem: https://issues.redhat.com/browse/RHEL-19381
> > > 
> > > > > > > {"execute":"device_del","arguments":{"id":"stg0"}}
> > > > > > > {"return": {}}
> > > > > > > qemu-system-x86_64: ../block/block-backend.c:2429: blk_get_aio_context:
> > > > > > > Assertion `ctx == blk->ctx' failed.
> > > > > [...]
> > > > > 
> > > > > > > I don’t know anything about the problem yet, but as usual, I like
> > > > > > > speculation and discovering how wrong I was later on, so one thing I
> > > > > > > came across that’s funny about virtio-scsi is that requests can happen
> > > > > > > even while a disk is being attached or detached.  That is, Linux seems
> > > > > > > to probe all LUNs when a new virtio-scsi device is being attached, and
> > > > > > > it won’t stop just because a disk is being attached or removed.  So
> > > > > > > maybe that’s part of the problem, that we get a request while the BB is
> > > > > > > being detached, and temporarily in an inconsistent state (BDS context
> > > > > > > differs from BB context).
> > > > > > I don't know anything about the problem either, but since you already
> > > > > > speculated about the cause, let me speculate about the solution:
> > > > > > Can we hold the graph writer lock for the tran_commit() call in
> > > > > > bdrv_try_change_aio_context()? And of course take the reader lock for
> > > > > > blk_get_aio_context(), but that should be completely unproblematic.
> > > > > Actually, now that completely unproblematic part is giving me trouble.  I
> > > > > wanted to just put a graph lock into blk_get_aio_context() (making it a
> > > > > coroutine with a wrapper)
> > > > Which is the first thing I neglected and already not great. We have
> > > > calls of blk_get_aio_context() in the SCSI I/O path, and creating a
> > > > coroutine and doing at least two context switches simply for this call
> > > > is a lot of overhead...
> > > > 
> > > > > but callers of blk_get_aio_context() generally assume the context is
> > > > > going to stay the BB’s context for as long as their AioContext *
> > > > > variable is in scope.
> > > > I'm not so sure about that. And taking another step back, I'm actually
> > > > also not sure how much it still matters now that they can submit I/O
> > > > from any thread.
> > > That’s my impression, too, but “not sure” doesn’t feel great. :)
> > > scsi_device_for_each_req_async_bh() specifically double-checks whether it’s
> > > still in the right context before invoking the specified function, so it
> > > seems there was some intention to continue to run in the context associated
> > > with the BB.
> > > 
> > > (Not judging whether that intent makes sense or not, yet.)
> > > 
> > > > Maybe the correct solution is to remove the assertion from
> > > > blk_get_aio_context() and just always return blk->ctx. If it's in the
> > > > middle of a change, you'll either get the old one or the new one. Either
> > > > one is fine to submit I/O from, and if you care about changes for other
> > > > reasons (like SCSI does), then you need explicit code to protect it
> > > > anyway (which SCSI apparently has, but it doesn't work).
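(A minimal sketch of that suggestion, assuming nothing beyond what is said here; not an actual patch:)

  /* Sketch: return blk->ctx without cross-checking it against the root
   * node's context.  During a context change a caller may see either the
   * old or the new value; both are fine for submitting I/O, and callers
   * that care about changes need their own protection anyway. */
  AioContext *blk_get_aio_context(BlockBackend *blk)
  {
      return blk->ctx;
  }
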
> > > I think most callers do just assume the BB stays in the context they got
> > > (without any proof, admittedly), but I agree that under re-evaluation, it
> > > probably doesn’t actually matter to them, really. And yes, basically, if
> > > the caller doesn’t need to take a lock because it doesn’t really matter
> > > whether blk->ctx changes while it’s still using the old value,
> > > blk_get_aio_context() in turn doesn’t need to double-check blk->ctx
> > > against the root node’s context either, and nobody needs a lock.
> > >
> > > So I agree, it’s on the caller to protect against a potentially changing
> > > context, blk_get_aio_context() should just return blk->ctx and not check
> > > against the root node.
> > > 
> > > (On a tangent: blk_drain() is a caller of blk_get_aio_context(), and it
> > > polls that context.  Why does it need to poll that context specifically
> > > when requests may be in any context?  Is it because if there are requests
> > > in the main thread, we must poll that, but otherwise it’s fine to poll any
> > > thread, and we can only have requests in the main thread if that’s the
> > > BB’s context?)
> > > 
> > > > > I was tempted to think callers know what happens to the BB they pass
> > > > > to blk_get_aio_context(), and it won’t change contexts so easily, but
> > > > > then I remembered this is exactly what happens in this case; we run
> > > > > scsi_device_for_each_req_async_bh() in one thread (which calls
> > > > > blk_get_aio_context()), and in the other, we change the BB’s context.
> > > > Let's think a bit more about scsi_device_for_each_req_async()
> > > > specifically. This is a function that runs in the main thread. Nothing
> > > > will change any AioContext assignment if it doesn't call it. It wants to
> > > > make sure that scsi_device_for_each_req_async_bh() is called in the
> > > > same AioContext where the virtqueue is processed, so it schedules a BH
> > > > and waits for it.
> > > I don’t quite follow, it doesn’t wait for the BH.  It uses
> > > aio_bh_schedule_oneshot(), not aio_wait_bh_oneshot().  While you’re right
> > > that if it did wait, the BB context might still change, in practice we
> > > wouldn’t have the problem at hand because the caller is actually the one
> > > to change the context, concurrently while the BH is running.
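(For reference, the rough shape of the mechanism under discussion, reconstructed from this thread rather than copied from hw/scsi/scsi-bus.c; the bookkeeping struct name is made up:)

  /* The main-loop caller bounces the work into the BB's AioContext with a
   * oneshot BH; the BH double-checks that it is still running in the BB's
   * context and re-schedules itself there if not. */
  static void scsi_device_for_each_req_async_bh(void *opaque)
  {
      SCSIDeviceForEachReqAsyncData *data = opaque;   /* hypothetical name */
      SCSIDevice *s = data->s;
      AioContext *ctx = blk_get_aio_context(s->conf.blk);

      if (ctx != qemu_get_current_aio_context()) {
          aio_bh_schedule_oneshot(ctx, scsi_device_for_each_req_async_bh, data);
          return;
      }

      /* ... safe to walk s->requests and invoke the requested function ... */
  }
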
> > > 
> > > > Waiting for it means running a nested event loop that could do anything,
> > > > including changing AioContexts. So this is what needs the locking, not
> > > > the blk_get_aio_context() call in scsi_device_for_each_req_async_bh().
> > > > If we lock before the nested event loop and unlock in the BH, the check
> > > > in the BH can become an assertion. (It is important that we unlock in
> > > > the BH rather than after waiting because if something takes the writer
> > > > lock, we need to unlock during the nested event loop of bdrv_wrlock() to
> > > > avoid a deadlock.)
> > > > 
> > > > And spawning a coroutine for this looks a lot more acceptable because
> > > > it's on a slow path anyway.
> > > > 
> > > > In fact, we probably don't technically need a coroutine to take the
> > > > reader lock here. We can have a new graph lock function that asserts
> > > > that there is no writer (we know because we're running in the main loop)
> > > > and then atomically increments the reader count. But maybe that already
> > > > complicates things again...
> > > So as far as I understand we can’t just use aio_wait_bh_oneshot() and wrap
> > > it in bdrv_graph_rd{,un}lock_main_loop(), because that doesn’t actually
> > > lock the graph.  I feel like adding a new graph lock function for this
> > > highly specific case could be dangerous, because it seems easy to use the
> > > wrong way.
> > > 
> > > Just having a trampoline coroutine to call bdrv_graph_co_rd{,un}lock()
> > > seems simple enough and reasonable here (not a hot path).  Can we have
> > > that coroutine then use aio_wait_bh_oneshot() with the existing _bh
> > > function, or should that be made a coroutine, too?
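(Roughly what such a trampoline might look like; purely a sketch, the struct name is made up, and whether aio_wait_bh_oneshot() may be used from coroutine context at all is exactly the open question here:)

  /* Take the graph reader lock in a coroutine, then run the existing _bh
   * function in the BB's AioContext while the lock is held, so that the
   * context cannot change under it. */
  static void coroutine_fn scsi_device_for_each_req_async_co(void *opaque)
  {
      SCSIDeviceForEachReqAsyncData *data = opaque;   /* hypothetical name */
      AioContext *ctx = blk_get_aio_context(data->s->conf.blk);

      bdrv_graph_co_rdlock();
      aio_wait_bh_oneshot(ctx, scsi_device_for_each_req_async_bh, data);
      bdrv_graph_co_rdunlock();
  }
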
> > There is a reason for running in the context associated with the BB: the
> > virtio-scsi code assumes all request processing happens in the BB's
> > AioContext. The SCSI request list and other SCSI emulation code is not
> > thread-safe!
> 
> One peculiarity about virtio-scsi, as far as I understand, is that its
> context is not necessarily the BB’s context, because one virtio-scsi device
> may have many BBs.  While the BBs are being hot-plugged or un-plugged, their
> context may change (as is happening here), but that doesn’t stop SCSI
> request processing, because SCSI requests happen independently of whether
> there are devices on the SCSI bus.
> 
> If SCSI request processing is not thread-safe, doesn’t that mean it always
> must be done in the very same context, i.e. the context the virtio-scsi
> device was configured to use?  Just because a new scsi-hd BB is added or
> removed, and so we temporarily have a main context BB associated with the
> virtio-scsi device, I don’t think we should switch to processing requests in
> the main context.

This case is not supposed to happen because virtio_scsi_hotplug()
immediately places the BB into the virtio-scsi device's AioContext by
calling blk_set_aio_context().
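(Simplified, the relevant step inside virtio_scsi_hotplug() looks roughly like this; a sketch, not the exact code, with sd being the SCSIDevice that is being hot-plugged:)

  /* Move the new device's BlockBackend into the virtio-scsi device's
   * AioContext right away, so request processing never runs against a
   * main-context BB. */
  if (s->ctx) {
      /* blk_set_aio_context() is called from the main loop here. */
      if (blk_set_aio_context(sd->conf.blk, s->ctx, errp) < 0) {
          return;
      }
  }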

The AioContext invariant is checked at several points in the SCSI
request lifecycle by this function:

  static inline void virtio_scsi_ctx_check(VirtIOSCSI *s, SCSIDevice *d)
  {
      if (s->dataplane_started && d && blk_is_available(d->conf.blk)) {
          assert(blk_get_aio_context(d->conf.blk) == s->ctx);
      }
  }

Did you find a scenario where the virtio-scsi AioContext is different
from the scsi-hd BB's AioContext?

Stefan


