From: Lan, Tianyu
Subject: Re: [Qemu-devel] [RFC Patch 01/12] PCI: Add virtfn_index for struct pci_device
Date: Sat, 24 Oct 2015 22:46:59 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0



On 10/22/2015 2:07 AM, Alexander Duyck wrote:
On 10/21/2015 09:37 AM, Lan Tianyu wrote:
Add "virtfn_index" member in the struct pci_device to record VF sequence
of PF. This will be used in the VF sysfs node handle.

Signed-off-by: Lan Tianyu <address@hidden>
---
  drivers/pci/iov.c   | 1 +
  include/linux/pci.h | 1 +
  2 files changed, 2 insertions(+)

diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
index ee0ebff..065b6bb 100644
--- a/drivers/pci/iov.c
+++ b/drivers/pci/iov.c
@@ -136,6 +136,7 @@ static int virtfn_add(struct pci_dev *dev, int id, int reset)
      virtfn->physfn = pci_dev_get(dev);
      virtfn->is_virtfn = 1;
      virtfn->multifunction = 0;
+    virtfn->virtfn_index = id;

      for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
          res = &dev->resource[i + PCI_IOV_RESOURCES];
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 353db8d..85c5531 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -356,6 +356,7 @@ struct pci_dev {
      unsigned int    io_window_1k:1;    /* Intel P2P bridge 1K I/O windows */
      unsigned int    irq_managed:1;
      pci_dev_flags_t dev_flags;
+    unsigned int    virtfn_index;
      atomic_t    enable_cnt;    /* pci_enable_device has been called */
      u32        saved_config_space[16]; /* config space saved at suspend time */

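The consumer of the new field is not part of this patch; the commit message only
says it is for VF sysfs node handling. A purely hypothetical sketch of a sysfs
read handler that reports the field (the attribute name and placement are
assumptions, not taken from this series):

    #include <linux/device.h>
    #include <linux/pci.h>

    /*
     * Hypothetical sketch only: a sysfs show callback that prints the
     * new virtfn_index field. The real consumer appears later in the
     * series, not in this patch.
     */
    static ssize_t virtfn_index_show(struct device *dev,
                                     struct device_attribute *attr, char *buf)
    {
            struct pci_dev *pdev = to_pci_dev(dev);

            return sprintf(buf, "%u\n", pdev->virtfn_index);
    }
    static DEVICE_ATTR_RO(virtfn_index);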

Can't you just calculate the VF index from the VF's BDF number combined with
the PF's BDF number and the VF offset/stride? It seems kind of pointless to
add a variable that is only used by one driver, and only in a slow path, when
you can calculate it pretty quickly.

Good suggestion. Will try it.


- Alex
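
For concreteness: SR-IOV assigns VF N (counting from 0) the routing ID of its
PF plus First VF Offset plus N times VF Stride, so the index can be recovered
by inverting that formula. A minimal sketch of the calculation Alex describes,
assuming it lives in drivers/pci/iov.c, where the PF's private struct pci_sriov
fields (offset, stride) are visible; the function name is made up here:

    /*
     * Sketch only: derive a VF's index from routing IDs instead of
     * storing it. Assumes drivers/pci scope, where struct pci_sriov
     * (pf->sriov->offset, pf->sriov->stride) is accessible.
     */
    static int vf_index_from_rid(struct pci_dev *virtfn)
    {
            struct pci_dev *pf = pci_physfn(virtfn);
            u16 vf_rid = PCI_DEVID(virtfn->bus->number, virtfn->devfn);
            u16 pf_rid = PCI_DEVID(pf->bus->number, pf->devfn);

            /* VF N's RID = PF RID + First VF Offset + N * VF Stride */
            return (vf_rid - (pf_rid + pf->sriov->offset)) /
                   pf->sriov->stride;
    }

For comparison, current mainline provides pci_iov_vf_id(), which performs
essentially this computation.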


