From: zhenwei pi
Subject: Re: Re: [PATCH v11 02/10] block/raw: add persistent reservation in/out driver
Date: Tue, 10 Sep 2024 09:59:00 +0800
User-agent: Mozilla Thunderbird
On 9/10/24 04:18, Keith Busch wrote:
On Mon, Sep 09, 2024 at 07:34:45PM +0800, Changqi Lu wrote:
> +static int coroutine_fn GRAPH_RDLOCK
> +raw_co_pr_register(BlockDriverState *bs, uint64_t old_key,
> +                   uint64_t new_key, BlockPrType type,
> +                   bool ptpl, bool ignore_key)
> +{
> +    return bdrv_co_pr_register(bs->file->bs, old_key, new_key,
> +                               type, ptpl, ignore_key);
> +}

The nvme parts look fine, but could you help me understand how this
all works? I was looking for something utilizing ioctl's, like
IOC_PR_REGISTER for this one, chained through the file-posix block
driver. Is this only supposed to work with iscsi?
Hi Keith,

The IOC_PR_REGISTER ioctl family supports only the PR OUT direction in the Linux kernel, so the `blkpr` command (from util-linux since v2.39) likewise supports the PR OUT direction only.
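For reference, a minimal user-space sketch of that PR OUT ioctl path (roughly what `blkpr register` drives under the hood); the device path and key value are placeholders:

/* Minimal sketch: register a PR key on a block device via <linux/pr.h>.
 * Only PR OUT operations (register/reserve/release/preempt/clear) are
 * exposed by the kernel; there is no ioctl for PR IN (reading keys or
 * the reservation state). */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/pr.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
        return EXIT_FAILURE;
    }

    int fd = open(argv[1], O_RDWR);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    struct pr_registration reg = {
        .old_key = 0,       /* no key registered yet */
        .new_key = 0x1234,  /* placeholder key to register */
        .flags   = 0,       /* PR_FL_IGNORE_KEY would skip the old_key check */
    };

    if (ioctl(fd, IOC_PR_REGISTER, &reg) < 0) {
        perror("IOC_PR_REGISTER");
        close(fd);
        return EXIT_FAILURE;
    }

    printf("registered key 0x%llx\n", (unsigned long long)reg.new_key);
    close(fd);
    return EXIT_SUCCESS;
}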
- In a guest:
  * The `blkpr` command can exercise the PR OUT commands.
  * The `sg_persist` command (from sg3-utils) works fine on a SCSI device.
  * The `nvme` command (from nvme-cli) works fine on an NVMe device.

- On a host:
  * libiscsi supports PR, and LIO/SPDK support PR on the target side; tgt supports an incomplete PR (it lacks PTPL). So the QEMU libiscsi driver works fine.
  * The `iscsi-pr` command (from libiscsi-bin since v1.20.0) supports the full PR command family.
  * Because the Linux block layer lacks PR IN commands, the QEMU posix block driver can't support PR currently (see the sketch below). Once this series is merged into QEMU, we have a concrete use case for the PR IN family on posix block devices, which is a hint to promote it for the Linux block layer.
  * I wrote a user-space NVMe-oF library, `libnvmf` (https://github.com/bytedance/libnvmf); it does not support the PR command family, but I don't think it would be difficult to add if necessary.
  * As far as I know, several private vendor block drivers support the PR command family; this QEMU block framework will make those private drivers easy to integrate.
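A rough, hypothetical sketch (not part of this series; the helper name is illustrative only) of how the PR OUT half could be chained through the kernel ioctls from a file-posix style handler; the PR IN half has no counterpart in <linux/pr.h>, which is exactly the gap mentioned above:

/* Hypothetical: issue a PR OUT reservation through the kernel ioctl.
 * PR IN (read keys / read reservation) cannot be implemented this way
 * today because <linux/pr.h> defines no ioctl for it. */
#include <stdint.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/pr.h>

static int hdev_pr_reserve(int fd, uint64_t key, enum pr_type type)
{
    struct pr_reservation rsv = {
        .key   = key,
        .type  = type,   /* e.g. PR_WRITE_EXCLUSIVE */
        .flags = 0,
    };

    if (ioctl(fd, IOC_PR_RESERVE, &rsv) < 0) {
        return -errno;
    }
    return 0;
}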