From: Richard Henderson
Subject: [Qemu-commits] [qemu/qemu] 8187f6: blockdev: refactor transaction to use Transaction API
Date: Mon, 22 May 2023 09:06:22 -0700

  Branch: refs/heads/master
  Home:   https://github.com/qemu/qemu
  Commit: 8187f63c9c64c2f2322fc6730c478a57bc0ce9eb
      https://github.com/qemu/qemu/commit/8187f63c9c64c2f2322fc6730c478a57bc0ce9eb
  Author: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: refactor transaction to use Transaction API

We are going to add more block-graph modifying transaction actions, and
the block-graph modifying functions are already based on the Transaction
API.

Next, we'll need to update permissions separately after several
graph-modifying actions, and this is simple with the help of the
Transaction API.

So, for now, let's just transform what we have into new-style transaction
actions.
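
For readers unfamiliar with the prepare/commit/abort pattern used here,
below is a minimal, self-contained C sketch of such a transaction
mechanism. All names (DemoTran, demo_tran_*) are hypothetical and only
mirror the shape of QEMU's Transaction API (include/qemu/transactions.h);
this is not the actual implementation.

    /* Hypothetical, self-contained sketch of a prepare/commit/abort
     * transaction; DemoTran and demo_tran_* are illustration-only names. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct DemoAction {
        void (*commit)(void *opaque);   /* make the prepared change permanent */
        void (*abort)(void *opaque);    /* roll the prepared change back */
        void *opaque;
        struct DemoAction *next;
    } DemoAction;

    typedef struct { DemoAction *head; } DemoTran;

    /* Called from an action's prepare step to register commit/abort handlers. */
    static void demo_tran_add(DemoTran *t, void (*commit)(void *),
                              void (*abort)(void *), void *opaque)
    {
        DemoAction *a = calloc(1, sizeof(*a));
        a->commit = commit;
        a->abort = abort;
        a->opaque = opaque;
        a->next = t->head;              /* newest first: rollback runs in reverse */
        t->head = a;
    }

    /* Run either all commit handlers or all abort handlers, then free them. */
    static void demo_tran_finalize(DemoTran *t, bool success)
    {
        DemoAction *a = t->head;
        while (a) {
            DemoAction *next = a->next;
            if (success && a->commit) {
                a->commit(a->opaque);
            } else if (!success && a->abort) {
                a->abort(a->opaque);
            }
            free(a);
            a = next;
        }
        t->head = NULL;
    }

    static void snap_commit(void *name) { printf("commit snapshot %s\n", (char *)name); }
    static void snap_abort(void *name)  { printf("abort snapshot %s\n", (char *)name); }

    int main(void)
    {
        DemoTran t = { NULL };
        demo_tran_add(&t, snap_commit, snap_abort, (void *)"disk0");
        demo_tran_add(&t, snap_commit, snap_abort, (void *)"disk1");
        demo_tran_finalize(&t, false);  /* pretend a later prepare step failed */
        return 0;
    }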

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20230510150624.310640-2-vsementsov@yandex-team.ru>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 240396965fc653756aed90edc2985f05783c5ad6
      https://github.com/qemu/qemu/commit/240396965fc653756aed90edc2985f05783c5ad6
  Author: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: transactions: rename some things

Look at qmp_transaction(): dev_list is not an obvious name for the list
of actions. In the QAPI schema this argument is called "actions", so
let's follow the common practice of using the same argument names in the
QAPI schema and in the code.

While at it, rename props to properties for the same reason.

Next, we have to rename the global map of actions so that it doesn't
conflict with the new name of the function argument.

Also rename the dev_entry loop variable to match the new name of the
list.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510150624.310640-3-vsementsov@yandex-team.ru>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 30c96b555974ed0341b3c374b20a26242f1de239
      https://github.com/qemu/qemu/commit/30c96b555974ed0341b3c374b20a26242f1de239
  Author: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: qmp_transaction: refactor loop to classic for

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510150624.310640-4-vsementsov@yandex-team.ru>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: c85f34cf89edb8e7bcdb35bc423da8a7e4b8c7ba
      https://github.com/qemu/qemu/commit/c85f34cf89edb8e7bcdb35bc423da8a7e4b8c7ba
  Author: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: transaction: refactor handling transaction properties

Only backup supports GROUPED mode. Make this logic clearer, and avoid
passing an extra argument to each action.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20230510150624.310640-5-vsementsov@yandex-team.ru>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: c85feafa98e3f7835407d39bf5bfadf13f32075f
      https://github.com/qemu/qemu/commit/c85feafa98e3f7835407d39bf5bfadf13f32075f
  Author: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: use state.bitmap in block-dirty-bitmap-add action

Other bitmap-related actions use the .bitmap pointer in their .abort
handler; let's do the same here:

1. It helps further refactoring, as bitmap-add is the only bitmap
   action that uses state.action in .abort.

2. It is safe: transaction actions rely on the fact that at .abort()
   time the state is the same as at the end of .prepare(), so that
   .abort() can precisely roll back the changes done by .prepare().
   The only way to remove the bitmap during the transaction is the
   block-dirty-bitmap-remove action, but it postpones the actual
   removal to .commit(), so we are fine on any rollback path. (Note
   also that bitmap-remove is the only bitmap action that has a
   .commit() phase beyond simply freeing the state in .clean().)

3. Again, other bitmap actions behave this way: they keep the bitmap
   pointer during the transaction.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20230510150624.310640-6-vsementsov@yandex-team.ru>
[kwolf: Also remove the now unused BlockDirtyBitmapState.prepared]
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: d53c89aed1d25b8a9d98b3904e9226bda699adf1
      https://github.com/qemu/qemu/commit/d53c89aed1d25b8a9d98b3904e9226bda699adf1
  Author: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: qmp_transaction: drop extra generic layer

Let's simplify things:

First, actions generally don't need access to the common BlkActionState
structure. The only exception are the backup actions, which need
block_job_txn.

Next, for Transaction API actions it is more natural to allocate the
state structure in the action itself.

So, do the following transformation (a sketch follows this list):

1. Represent every action by a function that takes its corresponding
   structure as an argument.

2. Instead of an array-map marshaller, add a function that calls the
   corresponding action directly.

3. The BlkActionOps and BlkActionState structures become unused.
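
As a rough illustration of step 2, here is a self-contained sketch of
dispatching directly to per-action functions, each taking its own
argument struct, instead of going through a table of generic callbacks.
The action names and types below are hypothetical, not QEMU's real
blockdev code.

    /* Hypothetical dispatcher replacing an ops-table marshaller. */
    #include <stdio.h>

    typedef enum { ACTION_SNAPSHOT, ACTION_BITMAP_ADD } ActionKind;

    typedef struct { const char *device; } SnapshotArgs;
    typedef struct { const char *node; const char *name; } BitmapAddArgs;

    typedef struct {
        ActionKind kind;
        union {
            SnapshotArgs snapshot;
            BitmapAddArgs bitmap_add;
        } u;
    } Action;

    /* Each action is a plain function with its own argument structure... */
    static void do_snapshot(const SnapshotArgs *a)
    {
        printf("snapshot on %s\n", a->device);
    }

    static void do_bitmap_add(const BitmapAddArgs *a)
    {
        printf("add bitmap %s on %s\n", a->name, a->node);
    }

    /* ...and one function calls the corresponding action directly instead
     * of marshalling through generic ops and a shared state structure. */
    static void transaction_action(const Action *act)
    {
        switch (act->kind) {
        case ACTION_SNAPSHOT:
            do_snapshot(&act->u.snapshot);
            break;
        case ACTION_BITMAP_ADD:
            do_bitmap_add(&act->u.bitmap_add);
            break;
        }
    }

    int main(void)
    {
        Action a = { .kind = ACTION_BITMAP_ADD,
                     .u.bitmap_add = { "drive0", "bitmap0" } };
        transaction_action(&a);
        return 0;
    }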

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20230510150624.310640-7-vsementsov@yandex-team.ru>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 41f8b633393021923fd555d8d94bded2f8f6f05d
      https://github.com/qemu/qemu/commit/41f8b633393021923fd555d8d94bded2f8f6f05d
  Author: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M docs/interop/qcow2.txt

  Log Message:
  -----------
  docs/interop/qcow2.txt: fix description about "zlib" clusters

"zlib" clusters are actually raw deflate (RFC1951) clusters without
zlib headers.
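
For illustration, the distinction matters when choosing zlib's
windowBits: negative values select raw DEFLATE without the zlib header
and trailer. The following self-contained round-trip sketch (not QEMU's
qcow2 code; error checking omitted for brevity) shows the idea.

    /* Raw DEFLATE round trip with zlib, i.e. no zlib header/trailer. */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        const char msg[] = "qcow2 compressed cluster payload";
        unsigned char comp[128], decomp[128];
        z_stream s = { 0 };

        /* Compress to raw deflate: windowBits = -15 means no zlib wrapper. */
        deflateInit2(&s, Z_DEFAULT_COMPRESSION, Z_DEFLATED, -15, 8,
                     Z_DEFAULT_STRATEGY);
        s.next_in = (unsigned char *)msg;
        s.avail_in = sizeof(msg);
        s.next_out = comp;
        s.avail_out = sizeof(comp);
        deflate(&s, Z_FINISH);
        size_t comp_len = sizeof(comp) - s.avail_out;
        deflateEnd(&s);

        /* Decompress: again windowBits = -15 for a headerless stream. */
        memset(&s, 0, sizeof(s));
        inflateInit2(&s, -15);
        s.next_in = comp;
        s.avail_in = comp_len;
        s.next_out = decomp;
        s.avail_out = sizeof(decomp);
        inflate(&s, Z_FINISH);
        inflateEnd(&s);

        printf("round trip: %s\n", decomp);
        return 0;
    }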

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Message-Id: <168424874322.11954.1340942046351859521-0@git.sr.ht>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 4db7ba3b87447fd06cd7e23dab69fdae6011496d
      https://github.com/qemu/qemu/commit/4db7ba3b87447fd06cd7e23dab69fdae6011496d
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M block.c
    M block/create.c
    M block/crypto.c
    M block/parallels.c
    M block/qcow.c
    M block/qcow2.c
    M block/qed.c
    M block/raw-format.c
    M block/vdi.c
    M block/vhdx.c
    M block/vmdk.c
    M block/vpc.c
    M include/block/block-global-state.h
    M include/block/block_int-common.h

  Log Message:
  -----------
  block: Call .bdrv_co_create(_opts) unlocked

These are functions that modify the graph, so they must be able to take
a writer lock. This is impossible if they already hold the reader lock.
If they need a reader lock for some of their operations, they should
take it internally.

Many of them go through blk_*(), which will always take the lock itself.
Direct calls of bdrv_*() need to take the reader lock. Note that while
locking for bdrv_co_*() calls is checked by TSA, this is not the case
for the mixed_coroutine_fns bdrv_*(). Holding the lock is still required
when they are called from coroutine context like here!

This effectively reverts 4ec8df0183, but adds some internal locking
instead.
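
As a rough analogy (plain pthread code for illustration, not QEMU's
graph-lock primitives), the rule is the usual one for non-recursive
reader/writer locks: a function that may need the writer lock must be
entered without the reader lock held, and should take the reader lock
internally only around the parts that need it.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t graph_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Reads some graph state; takes the reader lock internally. */
    static void query_graph(void)
    {
        pthread_rwlock_rdlock(&graph_lock);
        printf("reading graph state\n");
        pthread_rwlock_unlock(&graph_lock);
    }

    /* Modifies the graph; needs the writer lock. Entering this while the
     * caller still holds the reader lock would deadlock, which is why
     * create-like callbacks must be invoked unlocked. */
    static void modify_graph(void)
    {
        pthread_rwlock_wrlock(&graph_lock);
        printf("modifying graph\n");
        pthread_rwlock_unlock(&graph_lock);
    }

    int main(void)
    {
        query_graph();   /* reader lock taken and released internally */
        modify_graph();  /* safe: no reader lock is held at this point */
        return 0;
    }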

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510203601.418015-2-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: a184563778f2b8970eb93291f08108e66432a575
      https://github.com/qemu/qemu/commit/a184563778f2b8970eb93291f08108e66432a575
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M block/export/export.c

  Log Message:
  -----------
  block/export: Fix null pointer dereference in error path

There are some error paths in blk_exp_add() that jump to 'fail:' before
'exp' is even created. So we can't just unconditionally access exp->blk.

Add a NULL check, and switch from exp->blk to blk, which is available
earlier, just to be extra sure that we really cover all cases where
BlockDevOps could have been set for it (in practice, this only happens
in drv->create() today, so this part of the change isn't strictly
necessary).

Fixes: Coverity CID 1509238
Fixes: de79b52604e43fdeba6cee4f5af600b62169f2d2
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510203601.418015-3-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: e3e31dc87208007784b93a19f8efcdda90ea64f6
      https://github.com/qemu/qemu/commit/e3e31dc87208007784b93a19f8efcdda90ea64f6
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M block/qcow2.c

  Log Message:
  -----------
  qcow2: Unlock the graph in qcow2_do_open() where necessary

qcow2_do_open() calls a few no_co_wrappers that wrap functions taking
the graph lock internally as a writer. Therefore, it can't hold the
reader lock across these calls; doing so causes deadlocks. Drop the lock
temporarily around the calls.
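
Schematically (again plain pthread code rather than QEMU's graph-lock
primitives, with hypothetical function names), the pattern is to release
the reader lock around the call that needs writer access and re-acquire
it afterwards:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Stand-in for a wrapper that takes the lock as a writer internally. */
    static void writer_helper(void)
    {
        pthread_rwlock_wrlock(&lock);
        printf("graph updated\n");
        pthread_rwlock_unlock(&lock);
    }

    static void open_image(void)
    {
        pthread_rwlock_rdlock(&lock);
        printf("reading metadata\n");

        /* Drop the reader lock before calling something that takes the
         * writer lock, then re-acquire it afterwards. */
        pthread_rwlock_unlock(&lock);
        writer_helper();
        pthread_rwlock_rdlock(&lock);

        printf("reading more metadata\n");
        pthread_rwlock_unlock(&lock);
    }

    int main(void)
    {
        open_image();
        return 0;
    }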

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510203601.418015-4-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 3db0c8b25c452b53aabc8efa36e655c7c02abb8f
      https://github.com/qemu/qemu/commit/3db0c8b25c452b53aabc8efa36e655c7c02abb8f
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M qemu-img.c

  Log Message:
  -----------
  qemu-img: Take graph lock more selectively

If we take a reader lock, we can't call any functions that take a writer
lock internally without causing deadlocks once the reader lock is
actually enforced in the main thread, too. Take the reader lock only
where it is actually needed.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510203601.418015-5-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 87f130bdaad68baad4216dbc97dec73ab4a2c4ef
      https://github.com/qemu/qemu/commit/87f130bdaad68baad4216dbc97dec73ab4a2c4ef
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M tests/unit/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Take graph lock more selectively

If we take a reader lock, we can't call any functions that take a writer
lock internally without causing deadlocks once the reader lock is
actually enforced in the main thread, too. Take the reader lock only
where it is actually needed.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510203601.418015-6-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 01a10c243362e49afcb7acbd85a47eba64a6fc74
      https://github.com/qemu/qemu/commit/01a10c243362e49afcb7acbd85a47eba64a6fc74
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M tests/unit/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Call bdrv_co_unref() in coroutine context

bdrv_unref() is a no_coroutine_fn, so calling it from coroutine context
is invalid. Use bdrv_co_unref() instead.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510203601.418015-7-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 018e5987b57589e6e9089c2d2ef31db4e7519fd5
      https://github.com/qemu/qemu/commit/018e5987b57589e6e9089c2d2ef31db4e7519fd5
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M block/commit.c
    M block/mirror.c
    M block/stream.c
    M blockjob.c
    M include/block/blockjob_int.h

  Log Message:
  -----------
  blockjob: Adhere to rate limit even when reentered early

When jobs are sleeping, for example to enforce a given rate limit, they
can be reentered early, in particular in order to get paused, to update
the rate limit or to get cancelled.

Before this patch, they behave in this case as if they had fully
completed their rate limiting delay. This means that requests are sped
up beyond their limit, violating the constraints that the user gave us.

Change the block jobs to sleep in a loop until the necessary delay is
completed, while still allowing them to be cancelled immediately, to be
paused (handled by the pause point in job_sleep_ns()), and to have the
rate limit updated.
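
The loop idea can be sketched independently of QEMU's job
infrastructure: recompute the remaining delay against a deadline after
every wakeup instead of assuming the whole delay has elapsed. The sketch
below uses plain POSIX timekeeping and a hypothetical interruptible
sleep; QEMU's block jobs use job_sleep_ns() instead.

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static int64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
    }

    /* Pretend sleep that can return early, like a reentered block job. */
    static void interruptible_sleep_ns(int64_t ns)
    {
        struct timespec ts = { .tv_sec = 0,
                               .tv_nsec = ns > 1000000 ? 1000000 : ns };
        nanosleep(&ts, NULL);   /* wakes up after at most 1 ms */
    }

    static void ratelimited_delay(int64_t delay_ns)
    {
        int64_t deadline = now_ns() + delay_ns;

        /* Re-check the clock after every wakeup; an early wakeup no longer
         * counts as having completed the whole delay. */
        while (now_ns() < deadline) {
            interruptible_sleep_ns(deadline - now_ns());
        }
    }

    int main(void)
    {
        int64_t start = now_ns();
        ratelimited_delay(5 * 1000000);   /* 5 ms */
        printf("slept %lld ns\n", (long long)(now_ns() - start));
        return 0;
    }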

This change is also motivated by iotests cases being prone to fail
because drain operations pause and unpause them so often that block jobs
complete earlier than they are supposed to. In particular, the next
commit would fail iotests 030 without this change.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510203601.418015-8-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 71438d8dac07f28c01cf6d90fce14efe04c77824
      https://github.com/qemu/qemu/commit/71438d8dac07f28c01cf6d90fce14efe04c77824
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M block/graph-lock.c

  Log Message:
  -----------
  graph-lock: Honour read locks even in the main thread

There are some conditions under which we don't actually need to do
anything for taking a reader lock: Writing the graph is only possible
from the main context while holding the BQL. So if a reader is running
in the main context under the BQL and knows that it won't be interrupted
until the next writer runs, we don't actually need to do anything.

This is the case if the reader code neither has a nested event loop
(this is forbidden anyway while you hold the lock) nor is a coroutine
(because a writer could run when the coroutine has yielded).

These conditions are exactly what bdrv_graph_rdlock_main_loop() asserts.
They are not fulfilled in bdrv_graph_co_rdlock(), which always runs in a
coroutine.

This deletes the shortcuts in bdrv_graph_co_rdlock() that skip taking
the reader lock in the main thread.

Reported-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230510203601.418015-9-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 78935fcd88ec2d26d50e45043b262f0326e6d410
      https://github.com/qemu/qemu/commit/78935fcd88ec2d26d50e45043b262f0326e6d410
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M tests/qemu-iotests/245
    M tests/qemu-iotests/245.out

  Log Message:
  -----------
  iotests/245: Check if 'compress' driver is available

Skip TestBlockdevReopen.test_insert_compress_filter() if the 'compress'
driver isn't available.

In order to make the test succeed when the case is skipped, we also need
to remove any output from it (which would be missing in the case where
we skip it). This is done by replacing qemu_io_log() with qemu_io(). In
case of failure, qemu_io() raises an exception with the output of the
qemu-io binary in its message, so we don't actually lose anything.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230511143801.255021-1-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 6d740fb01b9f0f5ea7a82f4d5e458d91940a19ee
      https://github.com/qemu/qemu/commit/6d740fb01b9f0f5ea7a82f4d5e458d91940a19ee
  Author: Stefan Hajnoczi <stefanha@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M util/aio-posix.c

  Log Message:
  -----------
  aio-posix: do not nest poll handlers

QEMU's event loop supports nesting, which means that event handler
functions may themselves call aio_poll(). The condition that triggered a
handler must be reset before the nested aio_poll() call, otherwise the
same handler will be called and immediately re-enter aio_poll(). This
leads to an infinite loop and stack exhaustion.
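
A minimal stand-alone sketch of that rule (hypothetical names, not the
aio-posix code): clear the condition that triggered the handler before
entering the nested loop, so the nested iteration cannot invoke the same
handler again.

    #include <stdbool.h>
    #include <stdio.h>

    static bool work_pending;
    static int depth;

    static void handler(void);

    /* Minimal stand-in for one iteration of a nested event loop. */
    static void event_loop_once(void)
    {
        if (work_pending && depth < 5) {   /* depth cap only for the demo */
            handler();
        }
    }

    static void handler(void)
    {
        depth++;
        work_pending = false;   /* reset the condition BEFORE nesting */
        event_loop_once();      /* safe: the condition no longer triggers us */
        printf("handler finished at depth %d\n", depth);
        depth--;
    }

    int main(void)
    {
        work_pending = true;
        event_loop_once();
        return 0;
    }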

Poll handlers are especially prone to this issue, because they typically
reset their condition by finishing the processing of pending work.
Unfortunately it is during the processing of pending work that nested
aio_poll() calls typically occur and the condition has not yet been
reset.

Disable a poll handler during ->io_poll_ready() so that a nested
aio_poll() call cannot invoke ->io_poll_ready() again. As a result, the
disabled poll handler and its associated fd handler do not run during
the nested aio_poll(). Calling aio_set_fd_handler() from inside nested
aio_poll() could cause it to run again. If the fd handler is pending
inside nested aio_poll(), then it will also run again.

In theory fd handlers can be affected by the same issue, but they are
more likely to reset the condition before calling nested aio_poll().

This is a special case and it's somewhat complex, but I don't see a way
around it as long as nested aio_poll() is supported.

Cc: qemu-stable@nongnu.org
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2186181
Fixes: c38270692593 ("block: Mark bdrv_co_io_(un)plug() and callers GRAPH_RDLOCK")
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230502184134.534703-2-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 844a12a63e12b1235a8fc17f9b278929dc6eb00e
      https://github.com/qemu/qemu/commit/844a12a63e12b1235a8fc17f9b278929dc6eb00e
  Author: Stefan Hajnoczi <stefanha@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M tests/unit/meson.build
    A tests/unit/test-nested-aio-poll.c

  Log Message:
  -----------
  tests: add test for nested aio_poll() in poll handlers

Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230502184134.534703-3-stefanha@redhat.com>
[kwolf: Restrict to CONFIG_POSIX, Windows doesn't support polling]
Tested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 80fc5d260002432628710f8b0c7cfc7d9b97bb9d
      https://github.com/qemu/qemu/commit/80fc5d260002432628710f8b0c7cfc7d9b97bb9d
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M block/graph-lock.c

  Log Message:
  -----------
  graph-lock: Disable locking for now

In QEMU 8.0, we've been seeing deadlocks in bdrv_graph_wrlock(). They
come from callers that hold an AioContext lock, which is not allowed
during polling. In theory, we could temporarily release the lock, but
callers are inconsistent about whether they hold a lock, and if they do,
some are also confused about which one they hold. While all of this is
fixable, it's not trivial, and the best course of action for 8.0.1 is
probably just disabling the graph locking code temporarily.

We don't currently rely on graph locking yet. It is supposed to replace
the AioContext lock eventually to enable multiqueue support, but as long
as we still have the AioContext lock, it is sufficient without the graph
lock. Once the AioContext lock goes away, the deadlock doesn't exist any
more either and this commit can be reverted. (Of course, it can also be
reverted while the AioContext lock still exists if the callers have been
fixed.)

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230517152834.277483-2-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 7c1f51bf38de8cea4ed5030467646c37b46edeb7
      https://github.com/qemu/qemu/commit/7c1f51bf38de8cea4ed5030467646c37b46edeb7
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M include/io/channel.h
    M io/channel.c
    M nbd/server.c

  Log Message:
  -----------
  nbd/server: Fix drained_poll to wake coroutine in right AioContext

nbd_drained_poll() generally runs in the main thread, not whatever
iothread the NBD server coroutine is meant to run in, so it can't
directly reenter the coroutines to wake them up.

The code seems to have the right intention: it specifies the correct
AioContext when it calls qemu_aio_coroutine_enter(). However, this
function doesn't schedule the coroutine to run in that AioContext; it
assumes it is already called in the home thread of the AioContext.

To fix this, add a new thread-safe qio_channel_wake_read() that can be
called in the main thread to wake up the coroutine in its AioContext,
and use this in nbd_drained_poll().

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230517152834.277483-3-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: 95fdd8db61848d31fde1d9b32da7f3f76babfa25
      https://github.com/qemu/qemu/commit/95fdd8db61848d31fde1d9b32da7f3f76babfa25
  Author: Kevin Wolf <kwolf@redhat.com>
  Date:   2023-05-19 (Fri, 19 May 2023)

  Changed paths:
    M tests/qemu-iotests/iotests.py
    M tests/qemu-iotests/tests/graph-changes-while-io
    M tests/qemu-iotests/tests/graph-changes-while-io.out

  Log Message:
  -----------
  iotests: Test commit with iothreads and ongoing I/O

This test exercises graph locking, draining, and graph modifications
with AioContext switches a lot. Among other things, it serves as a
regression test for bdrv_graph_wrlock() deadlocking when called with a
locked AioContext, and for AioContext handling in the NBD server.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230517152834.277483-4-kwolf@redhat.com>
Tested-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>


  Commit: ad3387396a71416cacc0b394e5e440dd6e9ba19a
      https://github.com/qemu/qemu/commit/ad3387396a71416cacc0b394e5e440dd6e9ba19a
  Author: Richard Henderson <richard.henderson@linaro.org>
  Date:   2023-05-22 (Mon, 22 May 2023)

  Changed paths:
    M block.c
    M block/commit.c
    M block/create.c
    M block/crypto.c
    M block/export/export.c
    M block/graph-lock.c
    M block/mirror.c
    M block/parallels.c
    M block/qcow.c
    M block/qcow2.c
    M block/qed.c
    M block/raw-format.c
    M block/stream.c
    M block/vdi.c
    M block/vhdx.c
    M block/vmdk.c
    M block/vpc.c
    M blockdev.c
    M blockjob.c
    M docs/interop/qcow2.txt
    M include/block/block-global-state.h
    M include/block/block_int-common.h
    M include/block/blockjob_int.h
    M include/io/channel.h
    M io/channel.c
    M nbd/server.c
    M qemu-img.c
    M tests/qemu-iotests/245
    M tests/qemu-iotests/245.out
    M tests/qemu-iotests/iotests.py
    M tests/qemu-iotests/tests/graph-changes-while-io
    M tests/qemu-iotests/tests/graph-changes-while-io.out
    M tests/unit/meson.build
    M tests/unit/test-bdrv-drain.c
    A tests/unit/test-nested-aio-poll.c
    M util/aio-posix.c

  Log Message:
  -----------
  Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into staging

Block layer patches

- qcow2 spec: Rename "zlib" compression to "deflate"
- Honour graph read lock even in the main thread + prerequisite fixes
- aio-posix: do not nest poll handlers (fixes infinite recursion)
- Refactor QMP blockdev transactions
- graph-lock: Disable locking for now
- iotests/245: Check if 'compress' driver is available

# -----BEGIN PGP SIGNATURE-----
#
# iQJFBAABCAAvFiEE3D3rFZqa+V09dFb+fwmycsiPL9YFAmRnrxURHGt3b2xmQHJl
# ZGhhdC5jb20ACgkQfwmycsiPL9aHyw/9H0xpceVb0kcC5CStOWCcq4PJHzkl/8/m
# c6ABFe0fgEuN2FCiKiCKOt6+V7qaIAw0+YLgPr/LGIsbIBzdxF3Xgd2UyIH6o4dK
# bSaIAaes6ZLTcYGIYEVJtHuwNgvzhjyBlW5qqwTpN0YArKS411eHyQ3wlUkCEVwK
# ZNmDY/MC8jq8r1xfwpPi7CaH6k1I6HhDmyl1PdURW9hmoAKZQZMhEdA5reJrUwZ9
# EhfgbLIaK0kkLLsufJ9YIkd+b/P3mUbH30kekNMOiA0XlnhWm1Djol5pxlnNiflg
# CGh6CAyhJKdXzwV567cSF11NYCsFmiY+c/l0xRIGscujwvO4iD7wFT5xk2geUAKV
# yaox8JA7Le36g7lO2CRadlS24/Ekqnle6q09g2i8s2tZwB4fS286vaZz6QDPmf7W
# VSQp9vuDj6ZcVjMsuo2+LzF3yA2Vqvgd9s032iBAjRDSGLAoOdQZjBJrreypJ0Oi
# pVFwgK+9QNCZBsqVhwVOgElSoK/3Vbl1kqpi30Ikgc0epAn0suM1g2QQPJ2Zt/MJ
# xqMlTv+48OW3vq3ebr8GXqkhvG/u0ku6I1G6ZyCrjOce89osK8QUaovERyi1eOmo
# ouoZ8UJJa6VfEkkmdhq2vF6u/MP4PeZ8MW3pYQy6qEnSOPDKpLnR30Z/s/HZCZcm
# H4QIbfQnzic=
# =edNP
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 19 May 2023 10:17:09 AM PDT
# gpg:                using RSA key DC3DEB159A9AF95D3D7456FE7F09B272C88F2FD6
# gpg:                issuer "kwolf@redhat.com"
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]

* tag 'for-upstream' of https://repo.or.cz/qemu/kevin: (21 commits)
  iotests: Test commit with iothreads and ongoing I/O
  nbd/server: Fix drained_poll to wake coroutine in right AioContext
  graph-lock: Disable locking for now
  tests: add test for nested aio_poll() in poll handlers
  aio-posix: do not nest poll handlers
  iotests/245: Check if 'compress' driver is available
  graph-lock: Honour read locks even in the main thread
  blockjob: Adhere to rate limit even when reentered early
  test-bdrv-drain: Call bdrv_co_unref() in coroutine context
  test-bdrv-drain: Take graph lock more selectively
  qemu-img: Take graph lock more selectively
  qcow2: Unlock the graph in qcow2_do_open() where necessary
  block/export: Fix null pointer dereference in error path
  block: Call .bdrv_co_create(_opts) unlocked
  docs/interop/qcow2.txt: fix description about "zlib" clusters
  blockdev: qmp_transaction: drop extra generic layer
  blockdev: use state.bitmap in block-dirty-bitmap-add action
  blockdev: transaction: refactor handling transaction properties
  blockdev: qmp_transaction: refactor loop to classic for
  blockdev: transactions: rename some things
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>


Compare: https://github.com/qemu/qemu/compare/ffd9492f2a71...ad3387396a71


