pci-v5.15-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmE3jjYUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vwrIA/8DYHYRQ6tR3lY0ZxVeBdnd/ryp/ag
 z35N8RFLPaFlifLWSldwDV/8dylXnRjS57WS9sppp5gKsLl6xYySvTeMpt5QHdXd
 gJw27sBqiBmecUGFHWVp9B3yF2LvgrtItjd9RadYaHhWEfWyB5AFK7qwxx02fzvo
 hoGA2XbpI/Hb1BvSOi1avmPYgly1BRu8RFvKMwB2cxQNv3TZOnekT/iFK5WVR1o2
 Z5BA+0nj9PrDO/axS0Vh+TqXhU+hOGox7bkOMcNmbDV7Yo8hgot5SsxddbZqJX+O
 BNNrRv72pbHGIwT/vOP7OQ49sRXledHYeyEGIixjLylBcROk9t8M1z1sfgJ6obVy
 1eM3TIx/+7OS5dxC+gTNMVgUiL1NQIdA1LVIBb0BrXm6yNqNxBlj3o/gQ+VGEiNI
 0lATmpe4P/N0/cOSI7tK9O2zsX3qzbLnJxsseGrwtK1L+GRYMUPhP4ciblhB0CIf
 BmK9j0ROmCBGN0Pz/5wIaQgkTro74dqO1BPX8n84M8KWByNZwTrJo/rCBdD4DGaJ
 eJvyt3hoYxhSxRQ1rp3zqZ9ytm4dJBGcZBKeO1IvKvJHEzfZBIqqq3M/hlNIaSDP
 v+8I9HaS1kI4SDB1Ia0LFRqKqvpN+WVLB+EoGkeDQozPO42tYSb43lYe83sEnZ+T
 KY0a/5feu975eLs=
 =g1WT
 -----END PGP SIGNATURE-----

Merge tag 'pci-v5.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:
   - Convert controller drivers to generic_handle_domain_irq() (Marc
     Zyngier)
   - Simplify VPD (Vital Product Data) access and search (Heiner
     Kallweit)
   - Update bnx2, bnx2x, bnxt, cxgb4, cxlflash, sfc, tg3 drivers to use
     simplified VPD interfaces (Heiner Kallweit)
   - Run Max Payload Size quirks before configuring MPS; work around
     ASMedia ASM1062 SATA MPS issue (Marek Behún)

  Resource management:
   - Refactor pci_ioremap_bar() and pci_ioremap_wc_bar() (Krzysztof
     Wilczyński)
   - Optimize pci_resource_len() to reduce kernel size (Zhen Lei)

  PCI device hotplug:
   - Fix a double unmap in ibmphp (Vishal Aslot)

  PCIe port driver:
   - Enable Bandwidth Notification only if port supports it (Stuart
     Hayes)

  Sysfs/proc/syscalls:
   - Add schedule point in proc_bus_pci_read() (Krzysztof Wilczyński)
   - Return ~0 data on pciconfig_read() CAP_SYS_ADMIN failure (Krzysztof
     Wilczyński)
   - Return "int" from pciconfig_read() syscall (Krzysztof Wilczyński)

  Virtualization:
   - Extend "pci=noats" to also turn on Translation Blocking to protect
     against some DMA attacks (Alex Williamson)
   - Add sysfs mechanism to control the type of reset used between
     device assignments to VMs (Amey Narkhede)
   - Add support for ACPI _RST reset method (Shanker Donthineni)
   - Add ACS quirks for Cavium multi-function devices (George Cherian)
   - Add ACS quirks for NXP LX2xx0 and LX2xx2 platforms (Wasim Khan)
   - Allow HiSilicon AMBA devices that appear as fake PCI devices to use
     PASID and SVA (Zhangfei Gao)

  Endpoint framework:
   - Add support for SR-IOV Endpoint devices (Kishon Vijay Abraham I)
   - Zero-initialize endpoint test tool parameters so we don't use
     random parameters (Shunyong Yang)

  APM X-Gene PCIe controller driver:
   - Remove redundant dev_err() call in xgene_msi_probe() (ErKun Yang)

  Broadcom iProc PCIe controller driver:
   - Don't fail devm_pci_alloc_host_bridge() on missing 'ranges' because
     it's optional on BCMA devices (Rob Herring)
   - Fix BCMA probe resource handling (Rob Herring)

  Cadence PCIe driver:
   - Work around J7200 Link training electrical issue by increasing
     delays in LTSSM (Nadeem Athani)

  Intel IXP4xx PCI controller driver:
   - Depend on ARCH_IXP4XX to avoid useless config questions (Geert
     Uytterhoeven)

  Intel Keem Bay PCIe controller driver:
   - Add Intel Keem Bay PCIe controller (Srikanth Thokala)

  Marvell Aardvark PCIe controller driver:
   - Work around config space completion handling issues (Evan Wang)
   - Increase timeout for config access completions (Pali Rohár)
   - Emulate CRS Software Visibility bit (Pali Rohár)
   - Configure resources from DT 'ranges' property to fix I/O space
     access (Pali Rohár)
   - Serialize INTx mask/unmask (Pali Rohár)

  MediaTek PCIe controller driver:
   - Add MT7629 support in DT (Chuanjia Liu)
   - Fix an MSI issue (Chuanjia Liu)
   - Get syscon regmap ("mediatek,generic-pciecfg"), IRQ number
     ("pci_irq"), PCI domain ("linux,pci-domain") from DT properties if
     present (Chuanjia Liu)

  Microsoft Hyper-V host bridge driver:
   - Add ARM64 support (Boqun Feng)
   - Support "Create Interrupt v3" message (Sunil Muthuswamy)

  NVIDIA Tegra PCIe controller driver:
   - Use seq_puts(), move err_msg from stack to static, fix OF node leak
     (Christophe JAILLET)

  NVIDIA Tegra194 PCIe driver:
   - Disable suspend when in Endpoint mode (Om Prakash Singh)
   - Fix MSI-X address programming error (Om Prakash Singh)
   - Disable interrupts during suspend to avoid spurious AER link down
     (Om Prakash Singh)

  Renesas R-Car PCIe controller driver:
   - Work around hardware issue that prevents Link L1->L0 transition
     (Marek Vasut)
   - Fix runtime PM refcount leak (Dinghao Liu)

  Rockchip DesignWare PCIe controller driver:
   - Add Rockchip RK356X host controller driver (Simon Xue)

  TI J721E PCIe driver:
   - Add support for J7200 and AM64 (Kishon Vijay Abraham I)

  Toshiba Visconti PCIe controller driver:
   - Add Toshiba Visconti PCIe host controller driver (Nobuhiro
     Iwamatsu)

  Xilinx NWL PCIe controller driver:
   - Enable PCIe reference clock via CCF (Hyun Kwon)

  Miscellaneous:
   - Convert sta2x11 from 'pci_' to 'dma_' API (Christophe JAILLET)
   - Fix pci_dev_str_match_path() alloc while atomic bug (used for
     kernel parameters that specify devices) (Dan Carpenter)
   - Remove pointless Precision Time Management warning when PTM is
     present but not enabled (Jakub Kicinski)
   - Remove surplus "break" statements (Krzysztof Wilczyński)"

* tag 'pci-v5.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (132 commits)
  PCI: ibmphp: Fix double unmap of io_mem
  x86/PCI: sta2x11: switch from 'pci_' to 'dma_' API
  PCI/VPD: Use unaligned access helpers
  PCI/VPD: Clean up public VPD defines and inline functions
  cxgb4: Use pci_vpd_find_id_string() to find VPD ID string
  PCI/VPD: Add pci_vpd_find_id_string()
  PCI/VPD: Include post-processing in pci_vpd_find_tag()
  PCI/VPD: Stop exporting pci_vpd_find_info_keyword()
  PCI/VPD: Stop exporting pci_vpd_find_tag()
  PCI: Set dma-can-stall for HiSilicon chips
  PCI: rockchip-dwc: Add Rockchip RK356X host controller driver
  PCI: dwc: Remove surplus break statement after return
  PCI: artpec6: Remove local code block from switch statement
  PCI: artpec6: Remove surplus break statement after return
  MAINTAINERS: Add entries for Toshiba Visconti PCIe controller
  PCI: visconti: Add Toshiba Visconti PCIe host controller driver
  PCI/portdrv: Enable Bandwidth Notification only if port supports it
  PCI: Allow PASID on fake PCIe devices without TLP prefixes
  PCI: mediatek: Use PCI domain to handle ports detection
  PCI: mediatek: Add new method to get irq number
  ...
Merged by Linus Torvalds, 2021-09-07 19:13:42 -07:00 (commit ac08b1c68d)
100 changed files with 3882 additions and 1577 deletions


@@ -121,6 +121,23 @@ Description:
child buses, and re-discover devices removed earlier
from this part of the device tree.
What: /sys/bus/pci/devices/.../reset_method
Date: August 2021
Contact: Amey Narkhede <ameynarkhede03@gmail.com>
Description:
Some devices allow an individual function to be reset
without affecting other functions in the same slot.
For devices that have this support, a file named
reset_method is present in sysfs. Reading this file
gives names of the supported and enabled reset methods and
their ordering. Writing a space-separated list of names of
reset methods sets the reset methods and ordering to be
used when resetting the device. Writing an empty string
disables the ability to reset the device. Writing
"default" enables all supported reset methods in the
default ordering.
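
A minimal user-space sketch of the interface described above; the BDF and
the method names written back are illustrative, and only names reported by
a prior read of the file are guaranteed to be accepted:

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		const char *path =
			"/sys/bus/pci/devices/0000:01:00.0/reset_method";
		char buf[128];
		ssize_t n;
		int fd = open(path, O_RDWR);

		if (fd < 0)
			return 1;

		n = read(fd, buf, sizeof(buf) - 1);	/* e.g. "flr pm bus" */
		if (n > 0) {
			buf[n] = '\0';
			printf("enabled reset methods: %s", buf);
		}

		lseek(fd, 0, SEEK_SET);
		/* prefer PM reset, fall back to bus reset */
		if (write(fd, "pm bus", strlen("pm bus")) < 0)
			perror("write reset_method");

		close(fd);
		return 0;
	}
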
What: /sys/bus/pci/devices/.../reset
Date: July 2009
Contact: Michael S. Tsirkin <mst@redhat.com>


@@ -43,6 +43,7 @@ entries corresponding to EPF driver will be created by the EPF core.
.. <EPF Driver1>/
... <EPF Device 11>/
... <EPF Device 21>/
... <EPF Device 31>/
.. <EPF Driver2>/
... <EPF Device 12>/
... <EPF Device 22>/
@@ -68,6 +69,7 @@ created)
... subsys_vendor_id
... subsys_id
... interrupt_pin
... <Symlink EPF Device 31>/
... primary/
... <Symlink EPC Device1>/
... secondary/
@@ -79,6 +81,13 @@ interface should be added in 'primary' directory and symlink of endpoint
controller connected to secondary interface should be added in 'secondary'
directory.
The <EPF Device> directory can have a list of symbolic links
(<Symlink EPF Device 31>) to other <EPF Device>. These symbolic links should
be created by the user to represent the virtual functions that are bound to
the physical function. In the above directory structure <EPF Device 11> is a
physical function and <EPF Device 31> is a virtual function. Once an EPF device
is linked to another EPF device, it cannot be linked to an EPC device.
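
A hedged sketch of that linking scheme, using mkdir(2)/symlink(2) on the
endpoint configfs; the pci_epf_test function and the func1/vfunc1 names
are illustrative only:

	#include <errno.h>
	#include <stdio.h>
	#include <sys/stat.h>
	#include <unistd.h>

	#define FUNCS "/sys/kernel/config/pci_ep/functions/pci_epf_test"

	int main(void)
	{
		/* physical function (<EPF Device 11> above) */
		if (mkdir(FUNCS "/func1", 0755) && errno != EEXIST)
			goto err;
		/* virtual function (<EPF Device 31> above) */
		if (mkdir(FUNCS "/vfunc1", 0755) && errno != EEXIST)
			goto err;
		/* bind the VF to the PF; only func1 may later be linked
		 * to an EPC device */
		if (symlink(FUNCS "/vfunc1", FUNCS "/func1/vfunc1") &&
		    errno != EEXIST)
			goto err;
		return 0;
	err:
		perror("pci_ep configfs");
		return 1;
	}
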
EPC Device
==========
@@ -98,7 +107,8 @@ entries corresponding to EPC device will be created by the EPC core.
The <EPC Device> directory will have a list of symbolic links to
<EPF Device>. These symbolic links should be created by the user to
represent the functions present in the endpoint device.
represent the functions present in the endpoint device. Only an <EPF Device>
that represents a physical function can be linked to an EPC device.
The <EPC Device> directory will also have a *start* field. Once
"1" is written to this field, the endpoint device will be ready to


@@ -0,0 +1,69 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/pci/intel,keembay-pcie-ep.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: Intel Keem Bay PCIe controller Endpoint mode
maintainers:
- Wan Ahmad Zainie <wan.ahmad.zainie.wan.mohamad@intel.com>
- Srikanth Thokala <srikanth.thokala@intel.com>
properties:
compatible:
const: intel,keembay-pcie-ep
reg:
maxItems: 5
reg-names:
items:
- const: dbi
- const: dbi2
- const: atu
- const: addr_space
- const: apb
interrupts:
maxItems: 4
interrupt-names:
items:
- const: pcie
- const: pcie_ev
- const: pcie_err
- const: pcie_mem_access
num-lanes:
description: Number of lanes to use.
enum: [ 1, 2 ]
required:
- compatible
- reg
- reg-names
- interrupts
- interrupt-names
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
pcie-ep@37000000 {
compatible = "intel,keembay-pcie-ep";
reg = <0x37000000 0x00001000>,
<0x37100000 0x00001000>,
<0x37300000 0x00001000>,
<0x36000000 0x01000000>,
<0x37800000 0x00000200>;
reg-names = "dbi", "dbi2", "atu", "addr_space", "apb";
interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 108 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "pcie", "pcie_ev", "pcie_err", "pcie_mem_access";
num-lanes = <2>;
};


@@ -0,0 +1,97 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/pci/intel,keembay-pcie.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: Intel Keem Bay PCIe controller Root Complex mode
maintainers:
- Wan Ahmad Zainie <wan.ahmad.zainie.wan.mohamad@intel.com>
- Srikanth Thokala <srikanth.thokala@intel.com>
allOf:
- $ref: /schemas/pci/pci-bus.yaml#
properties:
compatible:
const: intel,keembay-pcie
ranges:
maxItems: 1
reset-gpios:
maxItems: 1
reg:
maxItems: 4
reg-names:
items:
- const: dbi
- const: atu
- const: config
- const: apb
clocks:
maxItems: 2
clock-names:
items:
- const: master
- const: aux
interrupts:
maxItems: 3
interrupt-names:
items:
- const: pcie
- const: pcie_ev
- const: pcie_err
num-lanes:
description: Number of lanes to use.
enum: [ 1, 2 ]
required:
- compatible
- reg
- reg-names
- ranges
- clocks
- clock-names
- interrupts
- interrupt-names
- reset-gpios
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/gpio/gpio.h>
#define KEEM_BAY_A53_PCIE
#define KEEM_BAY_A53_AUX_PCIE
pcie@37000000 {
compatible = "intel,keembay-pcie";
reg = <0x37000000 0x00001000>,
<0x37300000 0x00001000>,
<0x36e00000 0x00200000>,
<0x37800000 0x00000200>;
reg-names = "dbi", "atu", "config", "apb";
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
ranges = <0x02000000 0 0x36000000 0x36000000 0 0x00e00000>;
interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "pcie", "pcie_ev", "pcie_err";
clocks = <&scmi_clk KEEM_BAY_A53_PCIE>,
<&scmi_clk KEEM_BAY_A53_AUX_PCIE>;
clock-names = "master", "aux";
reset-gpios = <&pca2 9 GPIO_ACTIVE_LOW>;
num-lanes = <2>;
};


@@ -0,0 +1,39 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/mediatek-pcie-cfg.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: MediaTek PCIECFG controller
maintainers:
- Chuanjia Liu <chuanjia.liu@mediatek.com>
- Jianjun Wang <jianjun.wang@mediatek.com>
description: |
The MediaTek PCIECFG controller controls PCIe subsystem features
such as LTSSM and ASPM.
properties:
compatible:
items:
- enum:
- mediatek,generic-pciecfg
- const: syscon
reg:
maxItems: 1
required:
- compatible
- reg
additionalProperties: false
examples:
- |
pciecfg: pciecfg@1a140000 {
compatible = "mediatek,generic-pciecfg", "syscon";
reg = <0x1a140000 0x1000>;
};
...


@@ -8,7 +8,7 @@ Required properties:
"mediatek,mt7623-pcie"
"mediatek,mt7629-pcie"
- device_type: Must be "pci"
- reg: Base addresses and lengths of the PCIe subsys and root ports.
- reg: Base addresses and lengths of the root ports.
- reg-names: Names of the above areas to use during resource lookup.
- #address-cells: Address representation for root ports (must be 3)
- #size-cells: Size representation for root ports (must be 2)
@@ -47,9 +47,12 @@ Required properties for MT7623/MT2701:
- reset-names: Must be "pcie-rst0", "pcie-rst1", "pcie-rstN".. based on the
number of root ports.
Required properties for MT2712/MT7622:
Required properties for MT2712/MT7622/MT7629:
- interrupts: A list of interrupt outputs of the controller, must have one
entry for each PCIe port
- interrupt-names: Must include the following entries:
- "pcie_irq": The interrupt that is asserted when an MSI/INTX is received
- linux,pci-domain: PCI domain ID. Should be unique for each host controller
In addition, the device tree node must have sub-nodes describing each
PCIe port interface, having the following mandatory properties:
@@ -143,130 +146,143 @@ Examples for MT7623:
Examples for MT2712:
pcie: pcie@11700000 {
pcie1: pcie@112ff000 {
compatible = "mediatek,mt2712-pcie";
device_type = "pci";
reg = <0 0x11700000 0 0x1000>,
<0 0x112ff000 0 0x1000>;
reg-names = "port0", "port1";
reg = <0 0x112ff000 0 0x1000>;
reg-names = "port1";
linux,pci-domain = <1>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&topckgen CLK_TOP_PE2_MAC_P0_SEL>,
<&topckgen CLK_TOP_PE2_MAC_P1_SEL>,
<&pericfg CLK_PERI_PCIE0>,
interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "pcie_irq";
clocks = <&topckgen CLK_TOP_PE2_MAC_P1_SEL>,
<&pericfg CLK_PERI_PCIE1>;
clock-names = "sys_ck0", "sys_ck1", "ahb_ck0", "ahb_ck1";
phys = <&pcie0_phy PHY_TYPE_PCIE>, <&pcie1_phy PHY_TYPE_PCIE>;
phy-names = "pcie-phy0", "pcie-phy1";
clock-names = "sys_ck1", "ahb_ck1";
phys = <&u3port1 PHY_TYPE_PCIE>;
phy-names = "pcie-phy1";
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x10000000>;
ranges = <0x82000000 0 0x11400000 0x0 0x11400000 0 0x300000>;
status = "disabled";
pcie0: pcie@0,0 {
reg = <0x0000 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc1 0>,
<0 0 0 2 &pcie_intc1 1>,
<0 0 0 3 &pcie_intc1 2>,
<0 0 0 4 &pcie_intc1 3>;
pcie_intc1: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
ranges;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc0 0>,
<0 0 0 2 &pcie_intc0 1>,
<0 0 0 3 &pcie_intc0 2>,
<0 0 0 4 &pcie_intc0 3>;
pcie_intc0: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
};
pcie1: pcie@1,0 {
reg = <0x0800 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
pcie0: pcie@11700000 {
compatible = "mediatek,mt2712-pcie";
device_type = "pci";
reg = <0 0x11700000 0 0x1000>;
reg-names = "port0";
linux,pci-domain = <0>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "pcie_irq";
clocks = <&topckgen CLK_TOP_PE2_MAC_P0_SEL>,
<&pericfg CLK_PERI_PCIE0>;
clock-names = "sys_ck0", "ahb_ck0";
phys = <&u3port0 PHY_TYPE_PCIE>;
phy-names = "pcie-phy0";
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x10000000>;
status = "disabled";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc0 0>,
<0 0 0 2 &pcie_intc0 1>,
<0 0 0 3 &pcie_intc0 2>,
<0 0 0 4 &pcie_intc0 3>;
pcie_intc0: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
ranges;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc1 0>,
<0 0 0 2 &pcie_intc1 1>,
<0 0 0 3 &pcie_intc1 2>,
<0 0 0 4 &pcie_intc1 3>;
pcie_intc1: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
};
Examples for MT7622:
pcie: pcie@1a140000 {
pcie0: pcie@1a143000 {
compatible = "mediatek,mt7622-pcie";
device_type = "pci";
reg = <0 0x1a140000 0 0x1000>,
<0 0x1a143000 0 0x1000>,
<0 0x1a145000 0 0x1000>;
reg-names = "subsys", "port0", "port1";
reg = <0 0x1a143000 0 0x1000>;
reg-names = "port0";
linux,pci-domain = <0>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 228 IRQ_TYPE_LEVEL_LOW>,
<GIC_SPI 229 IRQ_TYPE_LEVEL_LOW>;
interrupts = <GIC_SPI 228 IRQ_TYPE_LEVEL_LOW>;
interrupt-names = "pcie_irq";
clocks = <&pciesys CLK_PCIE_P0_MAC_EN>,
<&pciesys CLK_PCIE_P1_MAC_EN>,
<&pciesys CLK_PCIE_P0_AHB_EN>,
<&pciesys CLK_PCIE_P1_AHB_EN>,
<&pciesys CLK_PCIE_P0_AUX_EN>,
<&pciesys CLK_PCIE_P1_AUX_EN>,
<&pciesys CLK_PCIE_P0_AXI_EN>,
<&pciesys CLK_PCIE_P1_AXI_EN>,
<&pciesys CLK_PCIE_P0_OBFF_EN>,
<&pciesys CLK_PCIE_P1_OBFF_EN>,
<&pciesys CLK_PCIE_P0_PIPE_EN>,
<&pciesys CLK_PCIE_P1_PIPE_EN>;
clock-names = "sys_ck0", "sys_ck1", "ahb_ck0", "ahb_ck1",
"aux_ck0", "aux_ck1", "axi_ck0", "axi_ck1",
"obff_ck0", "obff_ck1", "pipe_ck0", "pipe_ck1";
phys = <&pcie0_phy PHY_TYPE_PCIE>, <&pcie1_phy PHY_TYPE_PCIE>;
phy-names = "pcie-phy0", "pcie-phy1";
<&pciesys CLK_PCIE_P0_PIPE_EN>;
clock-names = "sys_ck0", "ahb_ck0", "aux_ck0",
"axi_ck0", "obff_ck0", "pipe_ck0";
power-domains = <&scpsys MT7622_POWER_DOMAIN_HIF0>;
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x10000000>;
ranges = <0x82000000 0 0x20000000 0x0 0x20000000 0 0x8000000>;
status = "disabled";
pcie0: pcie@0,0 {
reg = <0x0000 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc0 0>,
<0 0 0 2 &pcie_intc0 1>,
<0 0 0 3 &pcie_intc0 2>,
<0 0 0 4 &pcie_intc0 3>;
pcie_intc0: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
pcie1: pcie@1a145000 {
compatible = "mediatek,mt7622-pcie";
device_type = "pci";
reg = <0 0x1a145000 0 0x1000>;
reg-names = "port1";
linux,pci-domain = <1>;
#address-cells = <3>;
#size-cells = <2>;
interrupts = <GIC_SPI 229 IRQ_TYPE_LEVEL_LOW>;
interrupt-names = "pcie_irq";
clocks = <&pciesys CLK_PCIE_P1_MAC_EN>,
/* the designer has connected RC1 to the p0_ahb clock */
<&pciesys CLK_PCIE_P0_AHB_EN>,
<&pciesys CLK_PCIE_P1_AUX_EN>,
<&pciesys CLK_PCIE_P1_AXI_EN>,
<&pciesys CLK_PCIE_P1_OBFF_EN>,
<&pciesys CLK_PCIE_P1_PIPE_EN>;
clock-names = "sys_ck1", "ahb_ck1", "aux_ck1",
"axi_ck1", "obff_ck1", "pipe_ck1";
power-domains = <&scpsys MT7622_POWER_DOMAIN_HIF0>;
bus-range = <0x00 0xff>;
ranges = <0x82000000 0 0x28000000 0x0 0x28000000 0 0x8000000>;
status = "disabled";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc1 0>,
<0 0 0 2 &pcie_intc1 1>,
<0 0 0 3 &pcie_intc1 2>,
<0 0 0 4 &pcie_intc1 3>;
pcie_intc1: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
ranges;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc0 0>,
<0 0 0 2 &pcie_intc0 1>,
<0 0 0 3 &pcie_intc0 2>,
<0 0 0 4 &pcie_intc0 3>;
pcie_intc0: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
pcie1: pcie@1,0 {
reg = <0x0800 0 0 0 0>;
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
ranges;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc1 0>,
<0 0 0 2 &pcie_intc1 1>,
<0 0 0 3 &pcie_intc1 2>,
<0 0 0 4 &pcie_intc1 3>;
pcie_intc1: interrupt-controller {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <1>;
};
};
};


@@ -23,6 +23,13 @@ properties:
default: 1
maximum: 255
max-virtual-functions:
description: Array representing the number of virtual functions corresponding to each physical
function
$ref: /schemas/types.yaml#/definitions/uint8-array
minItems: 1
maxItems: 255
max-link-speed:
$ref: /schemas/types.yaml#/definitions/uint32
enum: [ 1, 2, 3, 4 ]


@@ -35,6 +35,7 @@ Required properties:
Optional properties:
- dma-coherent: present if DMA operations are coherent
- clocks: Input clock specifier. Refer to common clock bindings
Example:
++++++++


@@ -2733,11 +2733,13 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/iwamatsu/linux-visconti.git
F: Documentation/devicetree/bindings/arm/toshiba.yaml
F: Documentation/devicetree/bindings/net/toshiba,visconti-dwmac.yaml
F: Documentation/devicetree/bindings/gpio/toshiba,gpio-visconti.yaml
F: Documentation/devicetree/bindings/pci/toshiba,visconti-pcie.yaml
F: Documentation/devicetree/bindings/pinctrl/toshiba,tmpv7700-pinctrl.yaml
F: Documentation/devicetree/bindings/watchdog/toshiba,visconti-wdt.yaml
F: arch/arm64/boot/dts/toshiba/
F: drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
F: drivers/gpio/gpio-visconti.c
F: drivers/pci/controller/dwc/pcie-visconti.c
F: drivers/pinctrl/visconti/
F: drivers/watchdog/visconti_wdt.c
N: visconti
@@ -14541,6 +14543,13 @@ S: Maintained
F: Documentation/devicetree/bindings/pci/hisilicon-histb-pcie.txt
F: drivers/pci/controller/dwc/pcie-histb.c
PCIE DRIVER FOR INTEL KEEM BAY
M: Srikanth Thokala <srikanth.thokala@intel.com>
L: linux-pci@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/pci/intel,keembay-pcie*
F: drivers/pci/controller/dwc/pcie-keembay.c
PCIE DRIVER FOR INTEL LGM GW SOC
M: Rahul Tanwar <rtanwar@maxlinear.com>
L: linux-pci@vger.kernel.org


@@ -82,14 +82,29 @@ int acpi_pci_bus_find_domain_nr(struct pci_bus *bus)
int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
{
if (!acpi_disabled) {
struct pci_config_window *cfg = bridge->bus->sysdata;
struct acpi_device *adev = to_acpi_device(cfg->parent);
struct device *bus_dev = &bridge->bus->dev;
struct pci_config_window *cfg;
struct acpi_device *adev;
struct device *bus_dev;
ACPI_COMPANION_SET(&bridge->dev, adev);
set_dev_node(bus_dev, acpi_get_node(acpi_device_handle(adev)));
}
if (acpi_disabled)
return 0;
cfg = bridge->bus->sysdata;
/*
* On Hyper-V there is no corresponding ACPI device for a root bridge,
* therefore ->parent is set as NULL by the driver. And set 'adev' as
* NULL in this case because there is no proper ACPI device.
*/
if (!cfg->parent)
adev = NULL;
else
adev = to_acpi_device(cfg->parent);
bus_dev = &bridge->bus->dev;
ACPI_COMPANION_SET(&bridge->dev, adev);
set_dev_node(bus_dev, acpi_get_node(acpi_device_handle(adev)));
return 0;
}


@@ -12,6 +12,7 @@
#include <linux/pci.h>
#include <asm/pci_x86.h>
#include <asm/numachip/numachip.h>
static u8 limit __read_mostly;


@@ -146,8 +146,7 @@ static void sta2x11_map_ep(struct pci_dev *pdev)
dev_err(dev, "sta2x11: could not set DMA offset\n");
dev->bus_dma_limit = max_amba_addr;
pci_set_consistent_dma_mask(pdev, max_amba_addr);
pci_set_dma_mask(pdev, max_amba_addr);
dma_set_mask_and_coherent(&pdev->dev, max_amba_addr);
/* Configure AHB mapping */
pci_write_config_dword(pdev, AHB_PEXLBASE(0), 0);


@@ -306,9 +306,7 @@ static int nitrox_device_flr(struct pci_dev *pdev)
return -ENOMEM;
}
/* check flr support */
if (pcie_has_flr(pdev))
pcie_flr(pdev);
pcie_reset_flr(pdev, PCI_RESET_DO_RESET);
pci_restore_state(pdev);


@@ -69,6 +69,8 @@
#define FLAG_USE_DMA BIT(0)
#define PCI_DEVICE_ID_TI_AM654 0xb00c
#define PCI_DEVICE_ID_TI_J7200 0xb00f
#define PCI_DEVICE_ID_TI_AM64 0xb010
#define PCI_DEVICE_ID_LS1088A 0x80c0
#define is_am654_pci_dev(pdev) \
@@ -970,6 +972,12 @@ static const struct pci_device_id pci_endpoint_test_tbl[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E),
.driver_data = (kernel_ulong_t)&j721e_data,
},
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J7200),
.driver_data = (kernel_ulong_t)&j721e_data,
},
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM64),
.driver_data = (kernel_ulong_t)&j721e_data,
},
{ }
};
MODULE_DEVICE_TABLE(pci, pci_endpoint_test_tbl);
@@ -979,6 +987,7 @@ static struct pci_driver pci_endpoint_test_driver = {
.id_table = pci_endpoint_test_tbl,
.probe = pci_endpoint_test_probe,
.remove = pci_endpoint_test_remove,
.sriov_configure = pci_sriov_configure_simple,
};
module_pci_driver(pci_endpoint_test_driver);
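
With .sriov_configure wired to pci_sriov_configure_simple(), virtual
functions of the test device can be enabled from the host through the
standard sysfs attribute; a small sketch (the BDF is made up):

	#include <stdio.h>

	int main(void)
	{
		const char *attr =
			"/sys/bus/pci/devices/0000:01:00.0/sriov_numvfs";
		FILE *f = fopen(attr, "w");

		if (!f) {
			perror(attr);
			return 1;
		}
		fprintf(f, "4\n");	/* request four VFs */
		return fclose(f) ? 1 : 0;
	}
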


@@ -8038,9 +8038,9 @@ bnx2_get_pci_speed(struct bnx2 *bp)
static void
bnx2_read_vpd_fw_ver(struct bnx2 *bp)
{
unsigned int len;
int rc, i, j;
u8 *data;
unsigned int block_end, rosize, len;
#define BNX2_VPD_NVRAM_OFFSET 0x300
#define BNX2_VPD_LEN 128
@@ -8057,38 +8057,21 @@ bnx2_read_vpd_fw_ver(struct bnx2 *bp)
for (i = 0; i < BNX2_VPD_LEN; i += 4)
swab32s((u32 *)&data[i]);
i = pci_vpd_find_tag(data, BNX2_VPD_LEN, PCI_VPD_LRDT_RO_DATA);
if (i < 0)
goto vpd_done;
rosize = pci_vpd_lrdt_size(&data[i]);
i += PCI_VPD_LRDT_TAG_SIZE;
block_end = i + rosize;
if (block_end > BNX2_VPD_LEN)
goto vpd_done;
j = pci_vpd_find_info_keyword(data, i, rosize,
PCI_VPD_RO_KEYWORD_MFR_ID);
j = pci_vpd_find_ro_info_keyword(data, BNX2_VPD_LEN,
PCI_VPD_RO_KEYWORD_MFR_ID, &len);
if (j < 0)
goto vpd_done;
len = pci_vpd_info_field_size(&data[j]);
j += PCI_VPD_INFO_FLD_HDR_SIZE;
if (j + len > block_end || len != 4 ||
memcmp(&data[j], "1028", 4))
if (len != 4 || memcmp(&data[j], "1028", 4))
goto vpd_done;
j = pci_vpd_find_info_keyword(data, i, rosize,
PCI_VPD_RO_KEYWORD_VENDOR0);
j = pci_vpd_find_ro_info_keyword(data, BNX2_VPD_LEN,
PCI_VPD_RO_KEYWORD_VENDOR0,
&len);
if (j < 0)
goto vpd_done;
len = pci_vpd_info_field_size(&data[j]);
j += PCI_VPD_INFO_FLD_HDR_SIZE;
if (j + len > block_end || len > BNX2_MAX_VER_SLEN)
if (len > BNX2_MAX_VER_SLEN)
goto vpd_done;
memcpy(bp->fw_version, &data[j], len);
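
The bnx2 conversion above, like the bnx2x/bnxt/tg3/cxgb4/sfc ones that
follow, reduces to the same pattern; a sketch of it (the helper name
demo_show_part_number is made up; the two pci_vpd_* calls are the new
interfaces this series exports):

	#include <linux/err.h>
	#include <linux/pci.h>
	#include <linux/slab.h>

	static void demo_show_part_number(struct pci_dev *pdev)
	{
		unsigned int vpd_len, kw_len;
		u8 *vpd;
		int off;

		/* reads the whole VPD image into a kmalloc'd buffer */
		vpd = pci_vpd_alloc(pdev, &vpd_len);
		if (IS_ERR(vpd))
			return;

		off = pci_vpd_find_ro_info_keyword(vpd, vpd_len,
						   PCI_VPD_RO_KEYWORD_PARTNO,
						   &kw_len);
		if (off >= 0)
			pci_info(pdev, "Part Number: %.*s\n", kw_len, vpd + off);

		kfree(vpd);
	}
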


@@ -2407,7 +2407,6 @@ void bnx2x_igu_clear_sb_gen(struct bnx2x *bp, u8 func, u8 idu_sb_id,
#define ETH_MAX_RX_CLIENTS_E2 ETH_MAX_RX_CLIENTS_E1H
#endif
#define BNX2X_VPD_LEN 128
#define VENDOR_ID_LEN 4
#define VF_ACQUIRE_THRESH 3


@@ -12189,86 +12189,35 @@ static int bnx2x_get_hwinfo(struct bnx2x *bp)
static void bnx2x_read_fwinfo(struct bnx2x *bp)
{
int cnt, i, block_end, rodi;
char vpd_start[BNX2X_VPD_LEN+1];
char str_id_reg[VENDOR_ID_LEN+1];
char str_id_cap[VENDOR_ID_LEN+1];
char *vpd_data;
char *vpd_extended_data = NULL;
u8 len;
char str_id[VENDOR_ID_LEN + 1];
unsigned int vpd_len, kw_len;
u8 *vpd_data;
int rodi;
cnt = pci_read_vpd(bp->pdev, 0, BNX2X_VPD_LEN, vpd_start);
memset(bp->fw_ver, 0, sizeof(bp->fw_ver));
if (cnt < BNX2X_VPD_LEN)
vpd_data = pci_vpd_alloc(bp->pdev, &vpd_len);
if (IS_ERR(vpd_data))
return;
rodi = pci_vpd_find_ro_info_keyword(vpd_data, vpd_len,
PCI_VPD_RO_KEYWORD_MFR_ID, &kw_len);
if (rodi < 0 || kw_len != VENDOR_ID_LEN)
goto out_not_found;
/* VPD RO tag should be first tag after identifier string, hence
* we should be able to find it in first BNX2X_VPD_LEN chars
*/
i = pci_vpd_find_tag(vpd_start, BNX2X_VPD_LEN, PCI_VPD_LRDT_RO_DATA);
if (i < 0)
goto out_not_found;
block_end = i + PCI_VPD_LRDT_TAG_SIZE +
pci_vpd_lrdt_size(&vpd_start[i]);
i += PCI_VPD_LRDT_TAG_SIZE;
if (block_end > BNX2X_VPD_LEN) {
vpd_extended_data = kmalloc(block_end, GFP_KERNEL);
if (vpd_extended_data == NULL)
goto out_not_found;
/* read rest of vpd image into vpd_extended_data */
memcpy(vpd_extended_data, vpd_start, BNX2X_VPD_LEN);
cnt = pci_read_vpd(bp->pdev, BNX2X_VPD_LEN,
block_end - BNX2X_VPD_LEN,
vpd_extended_data + BNX2X_VPD_LEN);
if (cnt < (block_end - BNX2X_VPD_LEN))
goto out_not_found;
vpd_data = vpd_extended_data;
} else
vpd_data = vpd_start;
/* now vpd_data holds full vpd content in both cases */
rodi = pci_vpd_find_info_keyword(vpd_data, i, block_end,
PCI_VPD_RO_KEYWORD_MFR_ID);
if (rodi < 0)
goto out_not_found;
len = pci_vpd_info_field_size(&vpd_data[rodi]);
if (len != VENDOR_ID_LEN)
goto out_not_found;
rodi += PCI_VPD_INFO_FLD_HDR_SIZE;
/* vendor specific info */
snprintf(str_id_reg, VENDOR_ID_LEN + 1, "%04x", PCI_VENDOR_ID_DELL);
snprintf(str_id_cap, VENDOR_ID_LEN + 1, "%04X", PCI_VENDOR_ID_DELL);
if (!strncmp(str_id_reg, &vpd_data[rodi], VENDOR_ID_LEN) ||
!strncmp(str_id_cap, &vpd_data[rodi], VENDOR_ID_LEN)) {
rodi = pci_vpd_find_info_keyword(vpd_data, i, block_end,
PCI_VPD_RO_KEYWORD_VENDOR0);
if (rodi >= 0) {
len = pci_vpd_info_field_size(&vpd_data[rodi]);
rodi += PCI_VPD_INFO_FLD_HDR_SIZE;
if (len < 32 && (len + rodi) <= BNX2X_VPD_LEN) {
memcpy(bp->fw_ver, &vpd_data[rodi], len);
bp->fw_ver[len] = ' ';
}
snprintf(str_id, VENDOR_ID_LEN + 1, "%04x", PCI_VENDOR_ID_DELL);
if (!strncasecmp(str_id, &vpd_data[rodi], VENDOR_ID_LEN)) {
rodi = pci_vpd_find_ro_info_keyword(vpd_data, vpd_len,
PCI_VPD_RO_KEYWORD_VENDOR0,
&kw_len);
if (rodi >= 0 && kw_len < sizeof(bp->fw_ver)) {
memcpy(bp->fw_ver, &vpd_data[rodi], kw_len);
bp->fw_ver[kw_len] = ' ';
}
kfree(vpd_extended_data);
return;
}
out_not_found:
kfree(vpd_extended_data);
return;
kfree(vpd_data);
}
static void bnx2x_set_modes_bitmap(struct bnx2x *bp)


@@ -13100,66 +13100,35 @@ static int bnxt_init_mac_addr(struct bnxt *bp)
return rc;
}
#define BNXT_VPD_LEN 512
static void bnxt_vpd_read_info(struct bnxt *bp)
{
struct pci_dev *pdev = bp->pdev;
int i, len, pos, ro_size, size;
ssize_t vpd_size;
unsigned int vpd_size, kw_len;
int pos, size;
u8 *vpd_data;
vpd_data = kmalloc(BNXT_VPD_LEN, GFP_KERNEL);
if (!vpd_data)
vpd_data = pci_vpd_alloc(pdev, &vpd_size);
if (IS_ERR(vpd_data)) {
pci_warn(pdev, "Unable to read VPD\n");
return;
vpd_size = pci_read_vpd(pdev, 0, BNXT_VPD_LEN, vpd_data);
if (vpd_size <= 0) {
netdev_err(bp->dev, "Unable to read VPD\n");
goto exit;
}
i = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
if (i < 0) {
netdev_err(bp->dev, "VPD READ-Only not found\n");
goto exit;
}
ro_size = pci_vpd_lrdt_size(&vpd_data[i]);
i += PCI_VPD_LRDT_TAG_SIZE;
if (i + ro_size > vpd_size)
goto exit;
pos = pci_vpd_find_info_keyword(vpd_data, i, ro_size,
PCI_VPD_RO_KEYWORD_PARTNO);
pos = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
PCI_VPD_RO_KEYWORD_PARTNO, &kw_len);
if (pos < 0)
goto read_sn;
len = pci_vpd_info_field_size(&vpd_data[pos]);
pos += PCI_VPD_INFO_FLD_HDR_SIZE;
if (len + pos > vpd_size)
goto read_sn;
size = min(len, BNXT_VPD_FLD_LEN - 1);
size = min_t(int, kw_len, BNXT_VPD_FLD_LEN - 1);
memcpy(bp->board_partno, &vpd_data[pos], size);
read_sn:
pos = pci_vpd_find_info_keyword(vpd_data, i, ro_size,
PCI_VPD_RO_KEYWORD_SERIALNO);
pos = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
PCI_VPD_RO_KEYWORD_SERIALNO,
&kw_len);
if (pos < 0)
goto exit;
len = pci_vpd_info_field_size(&vpd_data[pos]);
pos += PCI_VPD_INFO_FLD_HDR_SIZE;
if (len + pos > vpd_size)
goto exit;
size = min(len, BNXT_VPD_FLD_LEN - 1);
size = min_t(int, kw_len, BNXT_VPD_FLD_LEN - 1);
memcpy(bp->board_serialno, &vpd_data[pos], size);
exit:
kfree(vpd_data);


@@ -12788,7 +12788,7 @@ static void tg3_get_ethtool_stats(struct net_device *dev,
memset(tmp_stats, 0, sizeof(struct tg3_ethtool_stats));
}
static __be32 *tg3_vpd_readblock(struct tg3 *tp, u32 *vpdlen)
static __be32 *tg3_vpd_readblock(struct tg3 *tp, unsigned int *vpdlen)
{
int i;
__be32 *buf;
@@ -12822,15 +12822,11 @@ static __be32 *tg3_vpd_readblock(struct tg3 *tp, u32 *vpdlen)
offset = TG3_NVM_VPD_OFF;
len = TG3_NVM_VPD_LEN;
}
} else {
len = TG3_NVM_PCI_VPD_MAX_LEN;
}
buf = kmalloc(len, GFP_KERNEL);
if (buf == NULL)
return NULL;
buf = kmalloc(len, GFP_KERNEL);
if (!buf)
return NULL;
if (magic == TG3_EEPROM_MAGIC) {
for (i = 0; i < len; i += 4) {
/* The data is in little-endian format in NVRAM.
* Use the big-endian read routines to preserve
@@ -12841,12 +12837,9 @@ }
}
*vpdlen = len;
} else {
ssize_t cnt;
cnt = pci_read_vpd(tp->pdev, 0, len, (u8 *)buf);
if (cnt < 0)
goto error;
*vpdlen = cnt;
buf = pci_vpd_alloc(tp->pdev, vpdlen);
if (IS_ERR(buf))
return NULL;
}
return buf;
@@ -12868,9 +12861,10 @@ error:
static int tg3_test_nvram(struct tg3 *tp)
{
u32 csum, magic, len;
u32 csum, magic;
__be32 *buf;
int i, j, k, err = 0, size;
unsigned int len;
if (tg3_flag(tp, NO_NVRAM))
return 0;
@@ -13013,33 +13007,10 @@ static int tg3_test_nvram(struct tg3 *tp)
if (!buf)
return -ENOMEM;
i = pci_vpd_find_tag((u8 *)buf, len, PCI_VPD_LRDT_RO_DATA);
if (i > 0) {
j = pci_vpd_lrdt_size(&((u8 *)buf)[i]);
if (j < 0)
goto out;
if (i + PCI_VPD_LRDT_TAG_SIZE + j > len)
goto out;
i += PCI_VPD_LRDT_TAG_SIZE;
j = pci_vpd_find_info_keyword((u8 *)buf, i, j,
PCI_VPD_RO_KEYWORD_CHKSUM);
if (j > 0) {
u8 csum8 = 0;
j += PCI_VPD_INFO_FLD_HDR_SIZE;
for (i = 0; i <= j; i++)
csum8 += ((u8 *)buf)[i];
if (csum8)
goto out;
}
}
err = 0;
err = pci_vpd_check_csum(buf, len);
/* go on if no checksum found */
if (err == 1)
err = 0;
out:
kfree(buf);
return err;
@@ -15624,64 +15595,36 @@ skip_phy_reset:
static void tg3_read_vpd(struct tg3 *tp)
{
u8 *vpd_data;
unsigned int block_end, rosize, len;
u32 vpdlen;
int j, i = 0;
unsigned int len, vpdlen;
int i;
vpd_data = (u8 *)tg3_vpd_readblock(tp, &vpdlen);
if (!vpd_data)
goto out_no_vpd;
i = pci_vpd_find_tag(vpd_data, vpdlen, PCI_VPD_LRDT_RO_DATA);
i = pci_vpd_find_ro_info_keyword(vpd_data, vpdlen,
PCI_VPD_RO_KEYWORD_MFR_ID, &len);
if (i < 0)
goto out_not_found;
goto partno;
rosize = pci_vpd_lrdt_size(&vpd_data[i]);
block_end = i + PCI_VPD_LRDT_TAG_SIZE + rosize;
i += PCI_VPD_LRDT_TAG_SIZE;
if (len != 4 || memcmp(vpd_data + i, "1028", 4))
goto partno;
if (block_end > vpdlen)
goto out_not_found;
i = pci_vpd_find_ro_info_keyword(vpd_data, vpdlen,
PCI_VPD_RO_KEYWORD_VENDOR0, &len);
if (i < 0)
goto partno;
j = pci_vpd_find_info_keyword(vpd_data, i, rosize,
PCI_VPD_RO_KEYWORD_MFR_ID);
if (j > 0) {
len = pci_vpd_info_field_size(&vpd_data[j]);
j += PCI_VPD_INFO_FLD_HDR_SIZE;
if (j + len > block_end || len != 4 ||
memcmp(&vpd_data[j], "1028", 4))
goto partno;
j = pci_vpd_find_info_keyword(vpd_data, i, rosize,
PCI_VPD_RO_KEYWORD_VENDOR0);
if (j < 0)
goto partno;
len = pci_vpd_info_field_size(&vpd_data[j]);
j += PCI_VPD_INFO_FLD_HDR_SIZE;
if (j + len > block_end)
goto partno;
if (len >= sizeof(tp->fw_ver))
len = sizeof(tp->fw_ver) - 1;
memset(tp->fw_ver, 0, sizeof(tp->fw_ver));
snprintf(tp->fw_ver, sizeof(tp->fw_ver), "%.*s bc ", len,
&vpd_data[j]);
}
memset(tp->fw_ver, 0, sizeof(tp->fw_ver));
snprintf(tp->fw_ver, sizeof(tp->fw_ver), "%.*s bc ", len, vpd_data + i);
partno:
i = pci_vpd_find_info_keyword(vpd_data, i, rosize,
PCI_VPD_RO_KEYWORD_PARTNO);
i = pci_vpd_find_ro_info_keyword(vpd_data, vpdlen,
PCI_VPD_RO_KEYWORD_PARTNO, &len);
if (i < 0)
goto out_not_found;
len = pci_vpd_info_field_size(&vpd_data[i]);
i += PCI_VPD_INFO_FLD_HDR_SIZE;
if (len > TG3_BPN_SIZE ||
(len + i) > vpdlen)
if (len > TG3_BPN_SIZE)
goto out_not_found;
memcpy(tp->board_part_number, &vpd_data[i], len);


@@ -2101,7 +2101,6 @@
/* Hardware Legacy NVRAM layout */
#define TG3_NVM_VPD_OFF 0x100
#define TG3_NVM_VPD_LEN 256
#define TG3_NVM_PCI_VPD_MAX_LEN 512
/* Hardware Selfboot NVRAM layout */
#define TG3_NVM_HWSB_CFG1 0x00000004


@@ -526,7 +526,7 @@ static void octeon_destroy_resources(struct octeon_device *oct)
oct->irq_name_storage = NULL;
}
/* Soft reset the octeon device before exiting */
if (oct->pci_dev->reset_fn)
if (!pcie_reset_flr(oct->pci_dev, PCI_RESET_PROBE))
octeon_pci_flr(oct);
else
cn23xx_vf_ask_pf_to_do_flr(oct);


@@ -84,7 +84,6 @@ extern struct mutex uld_mutex;
enum {
MAX_NPORTS = 4, /* max # of ports */
SERNUM_LEN = 24, /* Serial # length */
EC_LEN = 16, /* E/C length */
ID_LEN = 16, /* ID length */
PN_LEN = 16, /* Part Number length */
MACADDR_LEN = 12, /* MAC Address length */
@@ -391,7 +390,6 @@ struct tp_params {
struct vpd_params {
unsigned int cclk;
u8 ec[EC_LEN + 1];
u8 sn[SERNUM_LEN + 1];
u8 id[ID_LEN + 1];
u8 pn[PN_LEN + 1];


@@ -2743,10 +2743,9 @@ int t4_seeprom_wp(struct adapter *adapter, bool enable)
*/
int t4_get_raw_vpd_params(struct adapter *adapter, struct vpd_params *p)
{
int i, ret = 0, addr;
int ec, sn, pn, na;
u8 *vpd, csum, base_val = 0;
unsigned int vpdr_len, kw_offset, id_len;
unsigned int id_len, pn_len, sn_len, na_len;
int id, sn, pn, na, addr, ret = 0;
u8 *vpd, base_val = 0;
vpd = vmalloc(VPD_LEN);
if (!vpd)
@@ -2765,74 +2764,52 @@ int t4_get_raw_vpd_params(struct adapter *adapter, struct vpd_params *p)
if (ret < 0)
goto out;
if (vpd[0] != PCI_VPD_LRDT_ID_STRING) {
dev_err(adapter->pdev_dev, "missing VPD ID string\n");
ret = pci_vpd_find_id_string(vpd, VPD_LEN, &id_len);
if (ret < 0)
goto out;
id = ret;
ret = pci_vpd_check_csum(vpd, VPD_LEN);
if (ret) {
dev_err(adapter->pdev_dev, "VPD checksum incorrect or missing\n");
ret = -EINVAL;
goto out;
}
id_len = pci_vpd_lrdt_size(vpd);
if (id_len > ID_LEN)
id_len = ID_LEN;
i = pci_vpd_find_tag(vpd, VPD_LEN, PCI_VPD_LRDT_RO_DATA);
if (i < 0) {
dev_err(adapter->pdev_dev, "missing VPD-R section\n");
ret = -EINVAL;
ret = pci_vpd_find_ro_info_keyword(vpd, VPD_LEN,
PCI_VPD_RO_KEYWORD_SERIALNO, &sn_len);
if (ret < 0)
goto out;
}
sn = ret;
vpdr_len = pci_vpd_lrdt_size(&vpd[i]);
kw_offset = i + PCI_VPD_LRDT_TAG_SIZE;
if (vpdr_len + kw_offset > VPD_LEN) {
dev_err(adapter->pdev_dev, "bad VPD-R length %u\n", vpdr_len);
ret = -EINVAL;
ret = pci_vpd_find_ro_info_keyword(vpd, VPD_LEN,
PCI_VPD_RO_KEYWORD_PARTNO, &pn_len);
if (ret < 0)
goto out;
}
pn = ret;
#define FIND_VPD_KW(var, name) do { \
var = pci_vpd_find_info_keyword(vpd, kw_offset, vpdr_len, name); \
if (var < 0) { \
dev_err(adapter->pdev_dev, "missing VPD keyword " name "\n"); \
ret = -EINVAL; \
goto out; \
} \
var += PCI_VPD_INFO_FLD_HDR_SIZE; \
} while (0)
FIND_VPD_KW(i, "RV");
for (csum = 0; i >= 0; i--)
csum += vpd[i];
if (csum) {
dev_err(adapter->pdev_dev,
"corrupted VPD EEPROM, actual csum %u\n", csum);
ret = -EINVAL;
ret = pci_vpd_find_ro_info_keyword(vpd, VPD_LEN, "NA", &na_len);
if (ret < 0)
goto out;
}
na = ret;
FIND_VPD_KW(ec, "EC");
FIND_VPD_KW(sn, "SN");
FIND_VPD_KW(pn, "PN");
FIND_VPD_KW(na, "NA");
#undef FIND_VPD_KW
memcpy(p->id, vpd + PCI_VPD_LRDT_TAG_SIZE, id_len);
memcpy(p->id, vpd + id, min_t(int, id_len, ID_LEN));
strim(p->id);
memcpy(p->ec, vpd + ec, EC_LEN);
strim(p->ec);
i = pci_vpd_info_field_size(vpd + sn - PCI_VPD_INFO_FLD_HDR_SIZE);
memcpy(p->sn, vpd + sn, min(i, SERNUM_LEN));
memcpy(p->sn, vpd + sn, min_t(int, sn_len, SERNUM_LEN));
strim(p->sn);
i = pci_vpd_info_field_size(vpd + pn - PCI_VPD_INFO_FLD_HDR_SIZE);
memcpy(p->pn, vpd + pn, min(i, PN_LEN));
memcpy(p->pn, vpd + pn, min_t(int, pn_len, PN_LEN));
strim(p->pn);
memcpy(p->na, vpd + na, min(i, MACADDR_LEN));
memcpy(p->na, vpd + na, min_t(int, na_len, MACADDR_LEN));
strim((char *)p->na);
out:
vfree(vpd);
return ret < 0 ? ret : 0;
if (ret < 0) {
dev_err(adapter->pdev_dev, "error reading VPD\n");
return ret;
}
return 0;
}
/**


@@ -900,74 +900,36 @@ static void efx_pci_remove(struct pci_dev *pci_dev)
/* NIC VPD information
* Called during probe to display the part number of the
* installed NIC. VPD is potentially very large but this should
* always appear within the first 512 bytes.
* installed NIC.
*/
#define SFC_VPD_LEN 512
static void efx_probe_vpd_strings(struct efx_nic *efx)
{
struct pci_dev *dev = efx->pci_dev;
char vpd_data[SFC_VPD_LEN];
ssize_t vpd_size;
int ro_start, ro_size, i, j;
unsigned int vpd_size, kw_len;
u8 *vpd_data;
int start;
/* Get the vpd data from the device */
vpd_size = pci_read_vpd(dev, 0, sizeof(vpd_data), vpd_data);
if (vpd_size <= 0) {
netif_err(efx, drv, efx->net_dev, "Unable to read VPD\n");
vpd_data = pci_vpd_alloc(dev, &vpd_size);
if (IS_ERR(vpd_data)) {
pci_warn(dev, "Unable to read VPD\n");
return;
}
/* Get the Read only section */
ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
if (ro_start < 0) {
netif_err(efx, drv, efx->net_dev, "VPD Read-only not found\n");
return;
}
start = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
PCI_VPD_RO_KEYWORD_PARTNO, &kw_len);
if (start < 0)
pci_err(dev, "Part number not found or incomplete\n");
else
pci_info(dev, "Part Number : %.*s\n", kw_len, vpd_data + start);
ro_size = pci_vpd_lrdt_size(&vpd_data[ro_start]);
j = ro_size;
i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
if (i + j > vpd_size)
j = vpd_size - i;
start = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
PCI_VPD_RO_KEYWORD_SERIALNO, &kw_len);
if (start < 0)
pci_err(dev, "Serial number not found or incomplete\n");
else
efx->vpd_sn = kmemdup_nul(vpd_data + start, kw_len, GFP_KERNEL);
/* Get the Part number */
i = pci_vpd_find_info_keyword(vpd_data, i, j, "PN");
if (i < 0) {
netif_err(efx, drv, efx->net_dev, "Part number not found\n");
return;
}
j = pci_vpd_info_field_size(&vpd_data[i]);
i += PCI_VPD_INFO_FLD_HDR_SIZE;
if (i + j > vpd_size) {
netif_err(efx, drv, efx->net_dev, "Incomplete part number\n");
return;
}
netif_info(efx, drv, efx->net_dev,
"Part Number : %.*s\n", j, &vpd_data[i]);
i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
j = ro_size;
i = pci_vpd_find_info_keyword(vpd_data, i, j, "SN");
if (i < 0) {
netif_err(efx, drv, efx->net_dev, "Serial number not found\n");
return;
}
j = pci_vpd_info_field_size(&vpd_data[i]);
i += PCI_VPD_INFO_FLD_HDR_SIZE;
if (i + j > vpd_size) {
netif_err(efx, drv, efx->net_dev, "Incomplete serial number\n");
return;
}
efx->vpd_sn = kmalloc(j + 1, GFP_KERNEL);
if (!efx->vpd_sn)
return;
snprintf(efx->vpd_sn, j + 1, "%s", &vpd_data[i]);
kfree(vpd_data);
}


@@ -2780,75 +2780,36 @@ static void ef4_pci_remove(struct pci_dev *pci_dev)
};
/* NIC VPD information
* Called during probe to display the part number of the
* installed NIC. VPD is potentially very large but this should
* always appear within the first 512 bytes.
* Called during probe to display the part number of the installed NIC.
*/
#define SFC_VPD_LEN 512
static void ef4_probe_vpd_strings(struct ef4_nic *efx)
{
struct pci_dev *dev = efx->pci_dev;
char vpd_data[SFC_VPD_LEN];
ssize_t vpd_size;
int ro_start, ro_size, i, j;
unsigned int vpd_size, kw_len;
u8 *vpd_data;
int start;
/* Get the vpd data from the device */
vpd_size = pci_read_vpd(dev, 0, sizeof(vpd_data), vpd_data);
if (vpd_size <= 0) {
netif_err(efx, drv, efx->net_dev, "Unable to read VPD\n");
vpd_data = pci_vpd_alloc(dev, &vpd_size);
if (IS_ERR(vpd_data)) {
pci_warn(dev, "Unable to read VPD\n");
return;
}
/* Get the Read only section */
ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
if (ro_start < 0) {
netif_err(efx, drv, efx->net_dev, "VPD Read-only not found\n");
return;
}
start = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
PCI_VPD_RO_KEYWORD_PARTNO, &kw_len);
if (start < 0)
pci_warn(dev, "Part number not found or incomplete\n");
else
pci_info(dev, "Part Number : %.*s\n", kw_len, vpd_data + start);
ro_size = pci_vpd_lrdt_size(&vpd_data[ro_start]);
j = ro_size;
i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
if (i + j > vpd_size)
j = vpd_size - i;
start = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
PCI_VPD_RO_KEYWORD_SERIALNO, &kw_len);
if (start < 0)
pci_warn(dev, "Serial number not found or incomplete\n");
else
efx->vpd_sn = kmemdup_nul(vpd_data + start, kw_len, GFP_KERNEL);
/* Get the Part number */
i = pci_vpd_find_info_keyword(vpd_data, i, j, "PN");
if (i < 0) {
netif_err(efx, drv, efx->net_dev, "Part number not found\n");
return;
}
j = pci_vpd_info_field_size(&vpd_data[i]);
i += PCI_VPD_INFO_FLD_HDR_SIZE;
if (i + j > vpd_size) {
netif_err(efx, drv, efx->net_dev, "Incomplete part number\n");
return;
}
netif_info(efx, drv, efx->net_dev,
"Part Number : %.*s\n", j, &vpd_data[i]);
i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
j = ro_size;
i = pci_vpd_find_info_keyword(vpd_data, i, j, "SN");
if (i < 0) {
netif_err(efx, drv, efx->net_dev, "Serial number not found\n");
return;
}
j = pci_vpd_info_field_size(&vpd_data[i]);
i += PCI_VPD_INFO_FLD_HDR_SIZE;
if (i + j > vpd_size) {
netif_err(efx, drv, efx->net_dev, "Incomplete serial number\n");
return;
}
efx->vpd_sn = kmalloc(j + 1, GFP_KERNEL);
if (!efx->vpd_sn)
return;
snprintf(efx->vpd_sn, j + 1, "%s", &vpd_data[i]);
kfree(vpd_data);
}


@@ -376,7 +376,7 @@ int pci_enable_pasid(struct pci_dev *pdev, int features)
if (WARN_ON(pdev->pasid_enabled))
return -EBUSY;
if (!pdev->eetlp_prefix_path)
if (!pdev->eetlp_prefix_path && !pdev->pasid_no_tlp)
return -EINVAL;
if (!pasid)


@@ -40,6 +40,7 @@ config PCI_FTPCI100
config PCI_IXP4XX
bool "Intel IXP4xx PCI controller"
depends on ARM && OF
depends on ARCH_IXP4XX || COMPILE_TEST
default ARCH_IXP4XX
help
Say Y here if you want support for the PCI host controller found


@@ -27,6 +27,7 @@
#define STATUS_REG_SYS_2 0x508
#define STATUS_CLR_REG_SYS_2 0x708
#define LINK_DOWN BIT(1)
#define J7200_LINK_DOWN BIT(10)
#define J721E_PCIE_USER_CMD_STATUS 0x4
#define LINK_TRAINING_ENABLE BIT(0)
@@ -57,6 +58,7 @@ struct j721e_pcie {
struct cdns_pcie *cdns_pcie;
void __iomem *user_cfg_base;
void __iomem *intd_cfg_base;
u32 linkdown_irq_regfield;
};
enum j721e_pcie_mode {
@@ -66,7 +68,10 @@ enum j721e_pcie_mode {
struct j721e_pcie_data {
enum j721e_pcie_mode mode;
bool quirk_retrain_flag;
unsigned int quirk_retrain_flag:1;
unsigned int quirk_detect_quiet_flag:1;
u32 linkdown_irq_regfield;
unsigned int byte_access_allowed:1;
};
static inline u32 j721e_pcie_user_readl(struct j721e_pcie *pcie, u32 offset)
@@ -98,12 +103,12 @@ static irqreturn_t j721e_pcie_link_irq_handler(int irq, void *priv)
u32 reg;
reg = j721e_pcie_intd_readl(pcie, STATUS_REG_SYS_2);
if (!(reg & LINK_DOWN))
if (!(reg & pcie->linkdown_irq_regfield))
return IRQ_NONE;
dev_err(dev, "LINK DOWN!\n");
j721e_pcie_intd_writel(pcie, STATUS_CLR_REG_SYS_2, LINK_DOWN);
j721e_pcie_intd_writel(pcie, STATUS_CLR_REG_SYS_2, pcie->linkdown_irq_regfield);
return IRQ_HANDLED;
}
@@ -112,7 +117,7 @@ static void j721e_pcie_config_link_irq(struct j721e_pcie *pcie)
u32 reg;
reg = j721e_pcie_intd_readl(pcie, ENABLE_REG_SYS_2);
reg |= LINK_DOWN;
reg |= pcie->linkdown_irq_regfield;
j721e_pcie_intd_writel(pcie, ENABLE_REG_SYS_2, reg);
}
@@ -284,10 +289,36 @@ static struct pci_ops cdns_ti_pcie_host_ops = {
static const struct j721e_pcie_data j721e_pcie_rc_data = {
.mode = PCI_MODE_RC,
.quirk_retrain_flag = true,
.byte_access_allowed = false,
.linkdown_irq_regfield = LINK_DOWN,
};
static const struct j721e_pcie_data j721e_pcie_ep_data = {
.mode = PCI_MODE_EP,
.linkdown_irq_regfield = LINK_DOWN,
};
static const struct j721e_pcie_data j7200_pcie_rc_data = {
.mode = PCI_MODE_RC,
.quirk_detect_quiet_flag = true,
.linkdown_irq_regfield = J7200_LINK_DOWN,
.byte_access_allowed = true,
};
static const struct j721e_pcie_data j7200_pcie_ep_data = {
.mode = PCI_MODE_EP,
.quirk_detect_quiet_flag = true,
};
static const struct j721e_pcie_data am64_pcie_rc_data = {
.mode = PCI_MODE_RC,
.linkdown_irq_regfield = J7200_LINK_DOWN,
.byte_access_allowed = true,
};
static const struct j721e_pcie_data am64_pcie_ep_data = {
.mode = PCI_MODE_EP,
.linkdown_irq_regfield = J7200_LINK_DOWN,
};
static const struct of_device_id of_j721e_pcie_match[] = {
@@ -299,6 +330,22 @@ static const struct of_device_id of_j721e_pcie_match[] = {
.compatible = "ti,j721e-pcie-ep",
.data = &j721e_pcie_ep_data,
},
{
.compatible = "ti,j7200-pcie-host",
.data = &j7200_pcie_rc_data,
},
{
.compatible = "ti,j7200-pcie-ep",
.data = &j7200_pcie_ep_data,
},
{
.compatible = "ti,am64-pcie-host",
.data = &am64_pcie_rc_data,
},
{
.compatible = "ti,am64-pcie-ep",
.data = &am64_pcie_ep_data,
},
{},
};
@@ -332,6 +379,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
pcie->dev = dev;
pcie->mode = mode;
pcie->linkdown_irq_regfield = data->linkdown_irq_regfield;
base = devm_platform_ioremap_resource_byname(pdev, "intd_cfg");
if (IS_ERR(base))
@@ -391,9 +439,11 @@ static int j721e_pcie_probe(struct platform_device *pdev)
goto err_get_sync;
}
bridge->ops = &cdns_ti_pcie_host_ops;
if (!data->byte_access_allowed)
bridge->ops = &cdns_ti_pcie_host_ops;
rc = pci_host_bridge_priv(bridge);
rc->quirk_retrain_flag = data->quirk_retrain_flag;
rc->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
cdns_pcie = &rc->pcie;
cdns_pcie->dev = dev;
@@ -459,6 +509,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
ret = -ENOMEM;
goto err_get_sync;
}
ep->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
cdns_pcie = &ep->pcie;
cdns_pcie->dev = dev;


@@ -16,11 +16,37 @@
#define CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE 0x1
#define CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY 0x3
static int cdns_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
static u8 cdns_pcie_get_fn_from_vfn(struct cdns_pcie *pcie, u8 fn, u8 vfn)
{
u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET;
u32 first_vf_offset, stride;
if (vfn == 0)
return fn;
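/*
 * Per the SR-IOV capability, VF n of this PF lives at function number
 * fn + First VF Offset + (n - 1) * VF Stride; e.g. with offset 4 and
 * stride 1, VF 2 of PF 0 is function 5.
 */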
first_vf_offset = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_OFFSET);
stride = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_STRIDE);
fn = fn + first_vf_offset + ((vfn - 1) * stride);
return fn;
}
static int cdns_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
struct pci_epf_header *hdr)
{
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET;
struct cdns_pcie *pcie = &ep->pcie;
u32 reg;
if (vfn > 1) {
dev_err(&epc->dev, "Only Virtual Function #1 has deviceID\n");
return -EINVAL;
} else if (vfn == 1) {
reg = cap + PCI_SRIOV_VF_DID;
cdns_pcie_ep_fn_writew(pcie, fn, reg, hdr->deviceid);
return 0;
}
cdns_pcie_ep_fn_writew(pcie, fn, PCI_DEVICE_ID, hdr->deviceid);
cdns_pcie_ep_fn_writeb(pcie, fn, PCI_REVISION_ID, hdr->revid);
@@ -47,7 +73,7 @@ static int cdns_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
return 0;
}
static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn,
static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn,
struct pci_epf_bar *epf_bar)
{
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
@@ -92,32 +118,36 @@ static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn,
addr0 = lower_32_bits(bar_phys);
addr1 = upper_32_bits(bar_phys);
if (vfn == 1)
reg = CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn);
else
reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn);
b = (bar < BAR_4) ? bar : bar - BAR_4;
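/*
 * All VFs of a PF share one BAR configuration, so the aperture and
 * control bits are programmed at most once (for vfn 0 or 1); only the
 * inbound address translation below is set up per function.
 */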
if (vfn == 0 || vfn == 1) {
cfg = cdns_pcie_readl(pcie, reg);
cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
cfg |= (CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
cdns_pcie_writel(pcie, reg, cfg);
}
fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar),
addr0);
cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar),
addr1);
if (bar < BAR_4) {
reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
b = bar;
} else {
reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
b = bar - BAR_4;
}
cfg = cdns_pcie_readl(pcie, reg);
cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
cfg |= (CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
cdns_pcie_writel(pcie, reg, cfg);
if (vfn > 0)
epf = &epf->epf[vfn - 1];
epf->epf_bar[bar] = epf_bar;
return 0;
}
static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn,
static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn,
struct pci_epf_bar *epf_bar)
{
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
@@ -126,29 +156,32 @@ static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn,
enum pci_barno bar = epf_bar->barno;
u32 reg, cfg, b, ctrl;
if (bar < BAR_4) {
reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn);
b = bar;
} else {
reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn);
b = bar - BAR_4;
if (vfn == 1)
reg = CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn);
else
reg = CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn);
b = (bar < BAR_4) ? bar : bar - BAR_4;
if (vfn == 0 || vfn == 1) {
ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
cfg = cdns_pcie_readl(pcie, reg);
cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
cfg |= CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl);
cdns_pcie_writel(pcie, reg, cfg);
}
ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED;
cfg = cdns_pcie_readl(pcie, reg);
cfg &= ~(CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
cfg |= CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl);
cdns_pcie_writel(pcie, reg, cfg);
fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar), 0);
cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar), 0);
if (vfn > 0)
epf = &epf->epf[vfn - 1];
epf->epf_bar[bar] = NULL;
}
static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, phys_addr_t addr,
u64 pci_addr, size_t size)
static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
phys_addr_t addr, u64 pci_addr, size_t size)
{
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
struct cdns_pcie *pcie = &ep->pcie;
@@ -161,6 +194,7 @@ static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, phys_addr_t addr,
return -EINVAL;
}
fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
cdns_pcie_set_outbound_region(pcie, 0, fn, r, false, addr, pci_addr, size);
set_bit(r, &ep->ob_region_map);
@@ -169,7 +203,7 @@ static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, phys_addr_t addr,
return 0;
}
static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn,
static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn,
phys_addr_t addr)
{
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
@@ -189,13 +223,15 @@ static void cdns_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn,
clear_bit(r, &ep->ob_region_map);
}
static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 mmc)
static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, u8 mmc)
{
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
struct cdns_pcie *pcie = &ep->pcie;
u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
u16 flags;
fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
/*
* Set the Multiple Message Capable bitfield into the Message Control
* register.
@@ -209,13 +245,15 @@ static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 mmc)
return 0;
}
static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn)
{
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
struct cdns_pcie *pcie = &ep->pcie;
u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
u16 flags, mme;
fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
/* Validate that the MSI feature is actually enabled. */
flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
if (!(flags & PCI_MSI_FLAGS_ENABLE))
@@ -230,13 +268,15 @@ static int cdns_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
return mme;
}
static int cdns_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no)
static int cdns_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
{
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
struct cdns_pcie *pcie = &ep->pcie;
u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
u32 val, reg;
func_no = cdns_pcie_get_fn_from_vfn(pcie, func_no, vfunc_no);
reg = cap + PCI_MSIX_FLAGS;
val = cdns_pcie_ep_fn_readw(pcie, func_no, reg);
if (!(val & PCI_MSIX_FLAGS_ENABLE))
@@ -247,14 +287,17 @@ static int cdns_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no)
return val;
}
static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u16 interrupts,
enum pci_barno bir, u32 offset)
static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u8 vfn,
u16 interrupts, enum pci_barno bir,
u32 offset)
{
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
struct cdns_pcie *pcie = &ep->pcie;
u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
u32 val, reg;
fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
reg = cap + PCI_MSIX_FLAGS;
val = cdns_pcie_ep_fn_readw(pcie, fn, reg);
val &= ~PCI_MSIX_FLAGS_QSIZE;
@@ -274,8 +317,8 @@ static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u16 interrupts,
return 0;
}
static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
u8 intx, bool is_asserted)
static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx,
bool is_asserted)
{
struct cdns_pcie *pcie = &ep->pcie;
unsigned long flags;
@@ -317,7 +360,8 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
writel(0, ep->irq_cpu_addr + offset);
}
static int cdns_pcie_ep_send_legacy_irq(struct cdns_pcie_ep *ep, u8 fn, u8 intx)
static int cdns_pcie_ep_send_legacy_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
u8 intx)
{
u16 cmd;
@@ -334,7 +378,7 @@ static int cdns_pcie_ep_send_legacy_irq(struct cdns_pcie_ep *ep, u8 fn, u8 intx)
return 0;
}
static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
u8 interrupt_num)
{
struct cdns_pcie *pcie = &ep->pcie;
@@ -343,6 +387,8 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
u8 msi_count;
u64 pci_addr, pci_addr_mask = 0xff;
fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
/* Check whether the MSI feature has been enabled by the PCI host. */
flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
if (!(flags & PCI_MSI_FLAGS_ENABLE))
@@ -382,7 +428,7 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
return 0;
}
static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn,
static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn, u8 vfn,
phys_addr_t addr, u8 interrupt_num,
u32 entry_size, u32 *msi_data,
u32 *msi_addr_offset)
@@ -396,6 +442,8 @@ static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn,
int ret;
int i;
fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
/* Check whether the MSI feature has been enabled by the PCI host. */
flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
if (!(flags & PCI_MSI_FLAGS_ENABLE))
@@ -419,7 +467,7 @@ static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn,
pci_addr &= GENMASK_ULL(63, 2);
for (i = 0; i < interrupt_num; i++) {
ret = cdns_pcie_ep_map_addr(epc, fn, addr,
ret = cdns_pcie_ep_map_addr(epc, fn, vfn, addr,
pci_addr & ~pci_addr_mask,
entry_size);
if (ret)
@@ -433,7 +481,7 @@ static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn,
return 0;
}
static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn,
static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn,
u16 interrupt_num)
{
u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
@@ -446,6 +494,12 @@ static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn,
u16 flags;
u8 bir;
epf = &ep->epf[fn];
if (vfn > 0)
epf = &epf->epf[vfn - 1];
fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
/* Check whether the MSI-X feature has been enabled by the PCI host. */
flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSIX_FLAGS);
if (!(flags & PCI_MSIX_FLAGS_ENABLE))
@@ -456,7 +510,6 @@ static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn,
bir = tbl_offset & PCI_MSIX_TABLE_BIR;
tbl_offset &= PCI_MSIX_TABLE_OFFSET;
epf = &ep->epf[fn];
msix_tbl = epf->epf_bar[bir]->addr + tbl_offset;
msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr;
msg_data = msix_tbl[(interrupt_num - 1)].msg_data;
@@ -478,21 +531,27 @@ static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn,
return 0;
}
static int cdns_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn,
static int cdns_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn, u8 vfn,
enum pci_epc_irq_type type,
u16 interrupt_num)
{
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
struct cdns_pcie *pcie = &ep->pcie;
struct device *dev = pcie->dev;
switch (type) {
case PCI_EPC_IRQ_LEGACY:
return cdns_pcie_ep_send_legacy_irq(ep, fn, 0);
if (vfn > 0) {
dev_err(dev, "Cannot raise legacy interrupts for VF\n");
return -EINVAL;
}
return cdns_pcie_ep_send_legacy_irq(ep, fn, vfn, 0);
case PCI_EPC_IRQ_MSI:
return cdns_pcie_ep_send_msi_irq(ep, fn, interrupt_num);
return cdns_pcie_ep_send_msi_irq(ep, fn, vfn, interrupt_num);
case PCI_EPC_IRQ_MSIX:
return cdns_pcie_ep_send_msix_irq(ep, fn, interrupt_num);
return cdns_pcie_ep_send_msix_irq(ep, fn, vfn, interrupt_num);
default:
break;
@@ -523,6 +582,13 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
return 0;
}
static const struct pci_epc_features cdns_pcie_epc_vf_features = {
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = true,
.align = 65536,
};
static const struct pci_epc_features cdns_pcie_epc_features = {
.linkup_notifier = false,
.msi_capable = true,
@@ -531,9 +597,12 @@ static const struct pci_epc_features cdns_pcie_epc_features = {
};
static const struct pci_epc_features*
cdns_pcie_ep_get_features(struct pci_epc *epc, u8 func_no)
cdns_pcie_ep_get_features(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
{
return &cdns_pcie_epc_features;
if (!vfunc_no)
return &cdns_pcie_epc_features;
return &cdns_pcie_epc_vf_features;
}
static const struct pci_epc_ops cdns_pcie_epc_ops = {
@@ -559,9 +628,11 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
struct platform_device *pdev = to_platform_device(dev);
struct device_node *np = dev->of_node;
struct cdns_pcie *pcie = &ep->pcie;
struct cdns_pcie_epf *epf;
struct resource *res;
struct pci_epc *epc;
int ret;
int i;
pcie->is_rc = false;
@@ -606,6 +677,25 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
if (!ep->epf)
return -ENOMEM;
epc->max_vfs = devm_kcalloc(dev, epc->max_functions,
sizeof(*epc->max_vfs), GFP_KERNEL);
if (!epc->max_vfs)
return -ENOMEM;
ret = of_property_read_u8_array(np, "max-virtual-functions",
epc->max_vfs, epc->max_functions);
if (ret == 0) {
for (i = 0; i < epc->max_functions; i++) {
epf = &ep->epf[i];
if (epc->max_vfs[i] == 0)
continue;
epf->epf = devm_kcalloc(dev, epc->max_vfs[i],
sizeof(*ep->epf), GFP_KERNEL);
if (!epf->epf)
return -ENOMEM;
}
}
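/*
 * Example (hypothetical values) of the DT property parsed above, one
 * u8 cell per physical function:
 *
 *     max-virtual-functions = /bits/ 8 <4 0 8>;
 *
 * would yield epc->max_vfs = {4, 0, 8}: PF0 gets a 4-entry epf->epf
 * array, PF1 gets none, PF2 gets an 8-entry one.
 */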
ret = pci_epc_mem_init(epc, pcie->mem_res->start,
resource_size(pcie->mem_res), PAGE_SIZE);
if (ret < 0) {
@@ -623,6 +713,10 @@ int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
/* Reserve region 0 for IRQs */
set_bit(0, &ep->ob_region_map);
if (ep->quirk_detect_quiet_flag)
cdns_pcie_detect_quiet_min_delay_set(&ep->pcie);
spin_lock_init(&ep->lock);
return 0;


@@ -498,6 +498,9 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
return PTR_ERR(rc->cfg_base);
rc->cfg_res = res;
if (rc->quirk_detect_quiet_flag)
cdns_pcie_detect_quiet_min_delay_set(&rc->pcie);
ret = cdns_pcie_start_link(pcie);
if (ret) {
dev_err(dev, "Failed to start link\n");


@@ -7,6 +7,22 @@
#include "pcie-cadence.h"
void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie)
{
u32 delay = 0x3;
u32 ltssm_control_cap;
/*
* Set the LTSSM Detect Quiet state min. delay to 2ms.
*/
ltssm_control_cap = cdns_pcie_readl(pcie, CDNS_PCIE_LTSSM_CONTROL_CAP);
ltssm_control_cap = ((ltssm_control_cap &
~CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK) |
CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay));
cdns_pcie_writel(pcie, CDNS_PCIE_LTSSM_CONTROL_CAP, ltssm_control_cap);
}
void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
u32 r, bool is_io,
u64 cpu_addr, u64 pci_addr, size_t size)


@@ -8,6 +8,7 @@
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/pci-epf.h>
#include <linux/phy/phy.h>
/* Parameters for the waiting for link up routine */
@@ -46,10 +47,18 @@
#define CDNS_PCIE_LM_EP_ID_BUS_SHIFT 8
/* Endpoint Function f BAR b Configuration Registers */
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG(bar, fn) \
(((bar) < BAR_4) ? CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn))
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG0(fn) \
(CDNS_PCIE_LM_BASE + 0x0240 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG1(fn) \
(CDNS_PCIE_LM_BASE + 0x0244 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG(bar, fn) \
(((bar) < BAR_4) ? CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) : CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn))
#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG0(fn) \
(CDNS_PCIE_LM_BASE + 0x0280 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_VFUNC_BAR_CFG1(fn) \
(CDNS_PCIE_LM_BASE + 0x0284 + (fn) * 0x0008)
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \
(GENMASK(4, 0) << ((b) * 8))
#define CDNS_PCIE_LM_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
@@ -114,6 +123,7 @@
#define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET 0x90
#define CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET 0xb0
#define CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET 0x200
/*
* Root Port Registers (PCI configuration space for the root port function)
@@ -189,6 +199,14 @@
/* AXI link down register */
#define CDNS_PCIE_AT_LINKDOWN (CDNS_PCIE_AT_BASE + 0x0824)
/* LTSSM Capabilities register */
#define CDNS_PCIE_LTSSM_CONTROL_CAP (CDNS_PCIE_LM_BASE + 0x0054)
#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK GENMASK(2, 1)
#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT 1
#define CDNS_PCIE_DETECT_QUIET_MIN_DELAY(delay) \
(((delay) << CDNS_PCIE_DETECT_QUIET_MIN_DELAY_SHIFT) & \
CDNS_PCIE_DETECT_QUIET_MIN_DELAY_MASK)
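/*
 * Illustration: with the delay value 0x3 programmed by
 * cdns_pcie_detect_quiet_min_delay_set() above,
 * CDNS_PCIE_DETECT_QUIET_MIN_DELAY(0x3) evaluates to
 * (0x3 << 1) & GENMASK(2, 1) == 0x6, i.e. both field bits set.
 */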
enum cdns_pcie_rp_bar {
RP_BAR_UNDEFINED = -1,
RP_BAR0,
@@ -295,6 +313,7 @@ struct cdns_pcie {
* @avail_ib_bar: Status of RP_BAR0, RP_BAR1 and RP_NO_BAR, i.e. whether each
* is free or available
* @quirk_retrain_flag: Retrain link as quirk for PCIe Gen2
* @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk
*/
struct cdns_pcie_rc {
struct cdns_pcie pcie;
@@ -303,14 +322,17 @@ struct cdns_pcie_rc {
u32 vendor_id;
u32 device_id;
bool avail_ib_bar[CDNS_PCIE_RP_MAX_IB];
bool quirk_retrain_flag;
unsigned int quirk_retrain_flag:1;
unsigned int quirk_detect_quiet_flag:1;
};
/**
* struct cdns_pcie_epf - Structure to hold info about endpoint function
* @epf: Info about virtual functions attached to the physical function
* @epf_bar: reference to the pci_epf_bar for the six Base Address Registers
*/
struct cdns_pcie_epf {
struct cdns_pcie_epf *epf;
struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS];
};
@@ -334,6 +356,7 @@ struct cdns_pcie_epf {
* registers fields (RMW) accessible by both remote RC and EP to
* minimize time between read and write
* @epf: Structure to hold info about endpoint function
* @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk
*/
struct cdns_pcie_ep {
struct cdns_pcie pcie;
@@ -348,6 +371,7 @@ struct cdns_pcie_ep {
/* protect writing to PCI_STATUS while raising legacy interrupts */
spinlock_t lock;
struct cdns_pcie_epf *epf;
unsigned int quirk_detect_quiet_flag:1;
};
@@ -508,6 +532,9 @@ static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
return 0;
}
#endif
void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie);
void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn,
u32 r, bool is_io,
u64 cpu_addr, u64 pci_addr, size_t size);


@@ -214,6 +214,17 @@ config PCIE_ARTPEC6_EP
Enables support for the PCIe controller in the ARTPEC-6 SoC to work in
endpoint mode. This uses the DesignWare core.
config PCIE_ROCKCHIP_DW_HOST
bool "Rockchip DesignWare PCIe controller"
select PCIE_DW
select PCIE_DW_HOST
depends on PCI_MSI_IRQ_DOMAIN
depends on ARCH_ROCKCHIP || COMPILE_TEST
depends on OF
help
Enables support for the DesignWare PCIe controller in the
Rockchip SoC except RK3399.
config PCIE_INTEL_GW
bool "Intel Gateway PCIe host controller support"
depends on OF && (X86 || COMPILE_TEST)
@@ -225,6 +236,34 @@ config PCIE_INTEL_GW
The PCIe controller uses the DesignWare core plus Intel-specific
hardware wrappers.
config PCIE_KEEMBAY
bool
config PCIE_KEEMBAY_HOST
bool "Intel Keem Bay PCIe controller - Host mode"
depends on ARCH_KEEMBAY || COMPILE_TEST
depends on PCI && PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
select PCIE_KEEMBAY
help
Say 'Y' here to enable support for the PCIe controller in Keem Bay
to work in host mode.
The PCIe controller is based on DesignWare Hardware and uses
DesignWare core functions.
config PCIE_KEEMBAY_EP
bool "Intel Keem Bay PCIe controller - Endpoint mode"
depends on ARCH_KEEMBAY || COMPILE_TEST
depends on PCI && PCI_MSI_IRQ_DOMAIN
depends on PCI_ENDPOINT
select PCIE_DW_EP
select PCIE_KEEMBAY
help
Say 'Y' here to enable support for the PCIe controller in Keem Bay
to work in endpoint mode.
The PCIe controller is based on DesignWare Hardware and uses
DesignWare core functions.
config PCIE_KIRIN
depends on OF && (ARM64 || COMPILE_TEST)
bool "HiSilicon Kirin series SoCs PCIe controllers"
@@ -286,6 +325,15 @@ config PCIE_TEGRA194_EP
in order to enable device-specific features PCIE_TEGRA194_EP must be
selected. This uses the DesignWare core.
config PCIE_VISCONTI_HOST
bool "Toshiba Visconti PCIe controllers"
depends on ARCH_VISCONTI || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
help
Say Y here if you want PCIe controller support on Toshiba Visconti SoC.
This driver supports TMPV7708 SoC.
config PCIE_UNIPHIER
bool "Socionext UniPhier PCIe host controllers"
depends on ARCH_UNIPHIER || COMPILE_TEST


@@ -14,13 +14,16 @@ obj-$(CONFIG_PCI_LAYERSCAPE_EP) += pci-layerscape-ep.o
obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
obj-$(CONFIG_PCIE_ROCKCHIP_DW_HOST) += pcie-dw-rockchip.o
obj-$(CONFIG_PCIE_INTEL_GW) += pcie-intel-gw.o
obj-$(CONFIG_PCIE_KEEMBAY) += pcie-keembay.o
obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o
obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o
obj-$(CONFIG_PCI_MESON) += pci-meson.o
obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o
obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o
obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o
obj-$(CONFIG_PCIE_VISCONTI_HOST) += pcie-visconti.o
# The following drivers are for devices that use the generic ACPI
# pci_root.c driver but don't support standard ECAM config access.


@@ -204,7 +204,7 @@ static int dra7xx_pcie_handle_msi(struct pcie_port *pp, int index)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
unsigned long val;
int pos, irq;
int pos;
val = dw_pcie_readl_dbi(pci, PCIE_MSI_INTR0_STATUS +
(index * MSI_REG_CTRL_BLOCK_SIZE));
@@ -213,9 +213,8 @@ static int dra7xx_pcie_handle_msi(struct pcie_port *pp, int index)
pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL, 0);
while (pos != MAX_MSI_IRQS_PER_CTRL) {
irq = irq_find_mapping(pp->irq_domain,
(index * MAX_MSI_IRQS_PER_CTRL) + pos);
generic_handle_irq(irq);
generic_handle_domain_irq(pp->irq_domain,
(index * MAX_MSI_IRQS_PER_CTRL) + pos);
pos++;
pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL, pos);
}
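All of the irq_find_mapping() + generic_handle_irq() conversions in this series follow the same shape; a minimal sketch of the before and after (hypothetical dispatch helper, not from these patches), assuming a valid struct irq_domain:

#include <linux/irq.h>
#include <linux/irqdomain.h>

static void demo_dispatch(struct irq_domain *domain, unsigned int hwirq)
{
	/* Before: resolve the Linux IRQ number, then dispatch it. */
	unsigned int virq = irq_find_mapping(domain, hwirq);

	if (virq)
		generic_handle_irq(virq);

	/* After: one call; returns -EINVAL if hwirq has no mapping. */
	generic_handle_domain_irq(domain, hwirq);
}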
@@ -257,7 +256,7 @@ static void dra7xx_pcie_msi_irq_handler(struct irq_desc *desc)
struct dw_pcie *pci;
struct pcie_port *pp;
unsigned long reg;
u32 virq, bit;
u32 bit;
chained_irq_enter(chip, desc);
@@ -276,11 +275,8 @@ static void dra7xx_pcie_msi_irq_handler(struct irq_desc *desc)
case INTB:
case INTC:
case INTD:
for_each_set_bit(bit, &reg, PCI_NUM_INTX) {
virq = irq_find_mapping(dra7xx->irq_domain, bit);
if (virq)
generic_handle_irq(virq);
}
for_each_set_bit(bit, &reg, PCI_NUM_INTX)
generic_handle_domain_irq(dra7xx->irq_domain, bit);
break;
}


@@ -259,14 +259,12 @@ static void ks_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie,
struct dw_pcie *pci = ks_pcie->pci;
struct device *dev = pci->dev;
u32 pending;
int virq;
pending = ks_pcie_app_readl(ks_pcie, IRQ_STATUS(offset));
if (BIT(0) & pending) {
virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset);
dev_dbg(dev, ": irq: irq_offset %d, virq %d\n", offset, virq);
generic_handle_irq(virq);
dev_dbg(dev, ": irq: irq_offset %d\n", offset);
generic_handle_domain_irq(ks_pcie->legacy_irq_domain, offset);
}
/* EOI the INTx interrupt */
@@ -579,7 +577,7 @@ static void ks_pcie_msi_irq_handler(struct irq_desc *desc)
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
struct irq_chip *chip = irq_desc_get_chip(desc);
u32 vector, virq, reg, pos;
u32 vector, reg, pos;
dev_dbg(dev, "%s, irq %d\n", __func__, irq);
@@ -600,10 +598,8 @@ static void ks_pcie_msi_irq_handler(struct irq_desc *desc)
continue;
vector = offset + (pos << 3);
virq = irq_linear_revmap(pp->irq_domain, vector);
dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n", pos, vector,
virq);
generic_handle_irq(virq);
dev_dbg(dev, "irq: bit %d, vector %d\n", pos, vector);
generic_handle_domain_irq(pp->irq_domain, vector);
}
chained_irq_exit(chip, desc);


@@ -384,6 +384,7 @@ static int artpec6_pcie_probe(struct platform_device *pdev)
const struct artpec_pcie_of_data *data;
enum artpec_pcie_variants variant;
enum dw_pcie_device_mode mode;
u32 val;
match = of_match_device(artpec6_pcie_of_match, dev);
if (!match)
@@ -432,9 +433,7 @@ static int artpec6_pcie_probe(struct platform_device *pdev)
if (ret < 0)
return ret;
break;
case DW_PCIE_EP_TYPE: {
u32 val;
case DW_PCIE_EP_TYPE:
if (!IS_ENABLED(CONFIG_PCIE_ARTPEC6_EP))
return -ENODEV;
@@ -445,8 +444,6 @@ static int artpec6_pcie_probe(struct platform_device *pdev)
pci->ep.ops = &pcie_ep_ops;
return dw_pcie_ep_init(&pci->ep);
break;
}
default:
dev_err(dev, "INVALID device type %d\n", artpec6_pcie->mode);
}


@@ -125,7 +125,7 @@ static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap)
return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap);
}
static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no,
static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_header *hdr)
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
@@ -202,7 +202,7 @@ static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, u8 func_no,
return 0;
}
static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no,
static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar)
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
@@ -217,7 +217,7 @@ static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no,
ep->epf_bar[bar] = NULL;
}
static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no,
static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar)
{
int ret;
@@ -276,7 +276,7 @@ static int dw_pcie_find_index(struct dw_pcie_ep *ep, phys_addr_t addr,
return -EINVAL;
}
static void dw_pcie_ep_unmap_addr(struct pci_epc *epc, u8 func_no,
static void dw_pcie_ep_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t addr)
{
int ret;
@@ -292,9 +292,8 @@ static void dw_pcie_ep_unmap_addr(struct pci_epc *epc, u8 func_no,
clear_bit(atu_index, ep->ob_window_map);
}
static int dw_pcie_ep_map_addr(struct pci_epc *epc, u8 func_no,
phys_addr_t addr,
u64 pci_addr, size_t size)
static int dw_pcie_ep_map_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t addr, u64 pci_addr, size_t size)
{
int ret;
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
@@ -309,7 +308,7 @@ static int dw_pcie_ep_map_addr(struct pci_epc *epc, u8 func_no,
return 0;
}
static int dw_pcie_ep_get_msi(struct pci_epc *epc, u8 func_no)
static int dw_pcie_ep_get_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@@ -333,7 +332,8 @@ static int dw_pcie_ep_get_msi(struct pci_epc *epc, u8 func_no)
return val;
}
static int dw_pcie_ep_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts)
static int dw_pcie_ep_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
u8 interrupts)
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@@ -358,7 +358,7 @@ static int dw_pcie_ep_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts)
return 0;
}
static int dw_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no)
static int dw_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@@ -382,8 +382,8 @@ static int dw_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no)
return val;
}
static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
enum pci_barno bir, u32 offset)
static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
u16 interrupts, enum pci_barno bir, u32 offset)
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@@ -418,7 +418,7 @@ static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
return 0;
}
static int dw_pcie_ep_raise_irq(struct pci_epc *epc, u8 func_no,
static int dw_pcie_ep_raise_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
enum pci_epc_irq_type type, u16 interrupt_num)
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
@@ -450,7 +450,7 @@ static int dw_pcie_ep_start(struct pci_epc *epc)
}
static const struct pci_epc_features*
dw_pcie_ep_get_features(struct pci_epc *epc, u8 func_no)
dw_pcie_ep_get_features(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
@@ -525,14 +525,14 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
aligned_offset = msg_addr_lower & (epc->mem->window.page_size - 1);
msg_addr = ((u64)msg_addr_upper) << 32 |
(msg_addr_lower & ~aligned_offset);
ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr,
epc->mem->window.page_size);
if (ret)
return ret;
writel(msg_data | (interrupt_num - 1), ep->msi_mem + aligned_offset);
dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys);
dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys);
return 0;
}
@@ -593,14 +593,14 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
}
aligned_offset = msg_addr & (epc->mem->window.page_size - 1);
ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr,
epc->mem->window.page_size);
if (ret)
return ret;
writel(msg_data, ep->msi_mem + aligned_offset);
dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys);
dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys);
return 0;
}


@@ -55,7 +55,7 @@ static struct msi_domain_info dw_pcie_msi_domain_info = {
/* MSI int handler */
irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
{
int i, pos, irq;
int i, pos;
unsigned long val;
u32 status, num_ctrls;
irqreturn_t ret = IRQ_NONE;
@@ -74,10 +74,9 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
pos = 0;
while ((pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL,
pos)) != MAX_MSI_IRQS_PER_CTRL) {
irq = irq_find_mapping(pp->irq_domain,
(i * MAX_MSI_IRQS_PER_CTRL) +
pos);
generic_handle_irq(irq);
generic_handle_domain_irq(pp->irq_domain,
(i * MAX_MSI_IRQS_PER_CTRL) +
pos);
pos++;
}
}


@@ -164,7 +164,6 @@ static int dw_plat_pcie_probe(struct platform_device *pdev)
pci->ep.ops = &pcie_ep_ops;
return dw_pcie_ep_init(&pci->ep);
break;
default:
dev_err(dev, "INVALID device type %d\n", dw_plat_pcie->mode);
}


@@ -0,0 +1,279 @@
// SPDX-License-Identifier: GPL-2.0
/*
* PCIe host controller driver for Rockchip SoCs.
*
* Copyright (C) 2021 Rockchip Electronics Co., Ltd.
* http://www.rock-chips.com
*
* Author: Simon Xue <xxm@rock-chips.com>
*/
#include <linux/clk.h>
#include <linux/gpio/consumer.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include "pcie-designware.h"
/*
* The upper 16 bits of PCIE_CLIENT_CONFIG are a write
* mask for the lower 16 bits.
*/
#define HIWORD_UPDATE(mask, val) (((mask) << 16) | (val))
#define HIWORD_UPDATE_BIT(val) HIWORD_UPDATE(val, val)
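/*
 * Illustration (not part of the driver): HIWORD_UPDATE_BIT(0x40)
 * expands to (0x40 << 16) | 0x40 == 0x00400040, so a single write
 * both selects bit 6 via the upper-half write mask and sets it in
 * the lower half, with no read-modify-write needed.
 */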
#define to_rockchip_pcie(x) dev_get_drvdata((x)->dev)
#define PCIE_CLIENT_RC_MODE HIWORD_UPDATE_BIT(0x40)
#define PCIE_CLIENT_ENABLE_LTSSM HIWORD_UPDATE_BIT(0xc)
#define PCIE_SMLH_LINKUP BIT(16)
#define PCIE_RDLH_LINKUP BIT(17)
#define PCIE_LINKUP (PCIE_SMLH_LINKUP | PCIE_RDLH_LINKUP)
#define PCIE_L0S_ENTRY 0x11
#define PCIE_CLIENT_GENERAL_CONTROL 0x0
#define PCIE_CLIENT_GENERAL_DEBUG 0x104
#define PCIE_CLIENT_HOT_RESET_CTRL 0x180
#define PCIE_CLIENT_LTSSM_STATUS 0x300
#define PCIE_LTSSM_ENABLE_ENHANCE BIT(4)
#define PCIE_LTSSM_STATUS_MASK GENMASK(5, 0)
struct rockchip_pcie {
struct dw_pcie pci;
void __iomem *apb_base;
struct phy *phy;
struct clk_bulk_data *clks;
unsigned int clk_cnt;
struct reset_control *rst;
struct gpio_desc *rst_gpio;
struct regulator *vpcie3v3;
};
static int rockchip_pcie_readl_apb(struct rockchip_pcie *rockchip,
u32 reg)
{
return readl_relaxed(rockchip->apb_base + reg);
}
static void rockchip_pcie_writel_apb(struct rockchip_pcie *rockchip,
u32 val, u32 reg)
{
writel_relaxed(val, rockchip->apb_base + reg);
}
static void rockchip_pcie_enable_ltssm(struct rockchip_pcie *rockchip)
{
rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_ENABLE_LTSSM,
PCIE_CLIENT_GENERAL_CONTROL);
}
static int rockchip_pcie_link_up(struct dw_pcie *pci)
{
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
u32 val = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_LTSSM_STATUS);
if ((val & PCIE_LINKUP) == PCIE_LINKUP &&
(val & PCIE_LTSSM_STATUS_MASK) == PCIE_L0S_ENTRY)
return 1;
return 0;
}
static int rockchip_pcie_start_link(struct dw_pcie *pci)
{
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
/* Reset device */
gpiod_set_value_cansleep(rockchip->rst_gpio, 0);
rockchip_pcie_enable_ltssm(rockchip);
/*
 * PCIe requires the refclk to be stable for 100µs prior to releasing
 * PERST#. See table 2-4 in section 2.6.2 AC Specifications of the PCI
 * Express Card Electromechanical Specification, 1.1. However, we don't
 * know whether the refclk comes from the RC's PHY or an external
 * oscillator; if it comes from the RC, enabling LTSSM is exactly the
 * right point to release PERST#. Wait well beyond the bare 100µs,
 * since we don't know how long the device actually needs to reset.
 */
msleep(100);
gpiod_set_value_cansleep(rockchip->rst_gpio, 1);
return 0;
}
static int rockchip_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
u32 val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE);
/* LTSSM enable control mode */
rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL);
rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_RC_MODE,
PCIE_CLIENT_GENERAL_CONTROL);
return 0;
}
static const struct dw_pcie_host_ops rockchip_pcie_host_ops = {
.host_init = rockchip_pcie_host_init,
};
static int rockchip_pcie_clk_init(struct rockchip_pcie *rockchip)
{
struct device *dev = rockchip->pci.dev;
int ret;
ret = devm_clk_bulk_get_all(dev, &rockchip->clks);
if (ret < 0)
return ret;
rockchip->clk_cnt = ret;
return clk_bulk_prepare_enable(rockchip->clk_cnt, rockchip->clks);
}
static int rockchip_pcie_resource_get(struct platform_device *pdev,
struct rockchip_pcie *rockchip)
{
rockchip->apb_base = devm_platform_ioremap_resource_byname(pdev, "apb");
if (IS_ERR(rockchip->apb_base))
return PTR_ERR(rockchip->apb_base);
rockchip->rst_gpio = devm_gpiod_get_optional(&pdev->dev, "reset",
GPIOD_OUT_HIGH);
if (IS_ERR(rockchip->rst_gpio))
return PTR_ERR(rockchip->rst_gpio);
return 0;
}
static int rockchip_pcie_phy_init(struct rockchip_pcie *rockchip)
{
struct device *dev = rockchip->pci.dev;
int ret;
rockchip->phy = devm_phy_get(dev, "pcie-phy");
if (IS_ERR(rockchip->phy))
return dev_err_probe(dev, PTR_ERR(rockchip->phy),
"missing PHY\n");
ret = phy_init(rockchip->phy);
if (ret < 0)
return ret;
ret = phy_power_on(rockchip->phy);
if (ret)
phy_exit(rockchip->phy);
return ret;
}
static void rockchip_pcie_phy_deinit(struct rockchip_pcie *rockchip)
{
phy_exit(rockchip->phy);
phy_power_off(rockchip->phy);
}
static int rockchip_pcie_reset_control_release(struct rockchip_pcie *rockchip)
{
struct device *dev = rockchip->pci.dev;
rockchip->rst = devm_reset_control_array_get_exclusive(dev);
if (IS_ERR(rockchip->rst))
return dev_err_probe(dev, PTR_ERR(rockchip->rst),
"failed to get reset lines\n");
return reset_control_deassert(rockchip->rst);
}
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = rockchip_pcie_link_up,
.start_link = rockchip_pcie_start_link,
};
static int rockchip_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct rockchip_pcie *rockchip;
struct pcie_port *pp;
int ret;
rockchip = devm_kzalloc(dev, sizeof(*rockchip), GFP_KERNEL);
if (!rockchip)
return -ENOMEM;
platform_set_drvdata(pdev, rockchip);
rockchip->pci.dev = dev;
rockchip->pci.ops = &dw_pcie_ops;
pp = &rockchip->pci.pp;
pp->ops = &rockchip_pcie_host_ops;
ret = rockchip_pcie_resource_get(pdev, rockchip);
if (ret)
return ret;
/* DON'T MOVE ME: must be enabled before PHY init */
rockchip->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3");
if (IS_ERR(rockchip->vpcie3v3)) {
if (PTR_ERR(rockchip->vpcie3v3) != -ENODEV)
return dev_err_probe(dev, PTR_ERR(rockchip->vpcie3v3),
"failed to get vpcie3v3 regulator\n");
rockchip->vpcie3v3 = NULL;
} else {
ret = regulator_enable(rockchip->vpcie3v3);
if (ret) {
dev_err(dev, "failed to enable vpcie3v3 regulator\n");
return ret;
}
}
ret = rockchip_pcie_phy_init(rockchip);
if (ret)
goto disable_regulator;
ret = rockchip_pcie_reset_control_release(rockchip);
if (ret)
goto deinit_phy;
ret = rockchip_pcie_clk_init(rockchip);
if (ret)
goto deinit_phy;
ret = dw_pcie_host_init(pp);
if (!ret)
return 0;
clk_bulk_disable_unprepare(rockchip->clk_cnt, rockchip->clks);
deinit_phy:
rockchip_pcie_phy_deinit(rockchip);
disable_regulator:
if (rockchip->vpcie3v3)
regulator_disable(rockchip->vpcie3v3);
return ret;
}
static const struct of_device_id rockchip_pcie_of_match[] = {
{ .compatible = "rockchip,rk3568-pcie", },
{},
};
static struct platform_driver rockchip_pcie_driver = {
.driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
},
.probe = rockchip_pcie_probe,
};
builtin_platform_driver(rockchip_pcie_driver);


@@ -0,0 +1,460 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* PCIe controller driver for Intel Keem Bay
* Copyright (C) 2020 Intel Corporation
*/
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>
#include <linux/init.h>
#include <linux/iopoll.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/kernel.h>
#include <linux/mod_devicetable.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/property.h>
#include "pcie-designware.h"
/* PCIE_REGS_APB_SLV Registers */
#define PCIE_REGS_PCIE_CFG 0x0004
#define PCIE_DEVICE_TYPE BIT(8)
#define PCIE_RSTN BIT(0)
#define PCIE_REGS_PCIE_APP_CNTRL 0x0008
#define APP_LTSSM_ENABLE BIT(0)
#define PCIE_REGS_INTERRUPT_ENABLE 0x0028
#define MSI_CTRL_INT_EN BIT(8)
#define EDMA_INT_EN GENMASK(7, 0)
#define PCIE_REGS_INTERRUPT_STATUS 0x002c
#define MSI_CTRL_INT BIT(8)
#define PCIE_REGS_PCIE_SII_PM_STATE 0x00b0
#define SMLH_LINK_UP BIT(19)
#define RDLH_LINK_UP BIT(8)
#define PCIE_REGS_PCIE_SII_LINK_UP (SMLH_LINK_UP | RDLH_LINK_UP)
#define PCIE_REGS_PCIE_PHY_CNTL 0x0164
#define PHY0_SRAM_BYPASS BIT(8)
#define PCIE_REGS_PCIE_PHY_STAT 0x0168
#define PHY0_MPLLA_STATE BIT(1)
#define PCIE_REGS_LJPLL_STA 0x016c
#define LJPLL_LOCK BIT(0)
#define PCIE_REGS_LJPLL_CNTRL_0 0x0170
#define LJPLL_EN BIT(29)
#define LJPLL_FOUT_EN GENMASK(24, 21)
#define PCIE_REGS_LJPLL_CNTRL_2 0x0178
#define LJPLL_REF_DIV GENMASK(17, 12)
#define LJPLL_FB_DIV GENMASK(11, 0)
#define PCIE_REGS_LJPLL_CNTRL_3 0x017c
#define LJPLL_POST_DIV3A GENMASK(24, 22)
#define LJPLL_POST_DIV2A GENMASK(18, 16)
#define PERST_DELAY_US 1000
#define AUX_CLK_RATE_HZ 24000000
struct keembay_pcie {
struct dw_pcie pci;
void __iomem *apb_base;
enum dw_pcie_device_mode mode;
struct clk *clk_master;
struct clk *clk_aux;
struct gpio_desc *reset;
};
struct keembay_pcie_of_data {
enum dw_pcie_device_mode mode;
};
static void keembay_ep_reset_assert(struct keembay_pcie *pcie)
{
gpiod_set_value_cansleep(pcie->reset, 1);
usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
}
static void keembay_ep_reset_deassert(struct keembay_pcie *pcie)
{
/*
* Ensure that PERST# is asserted for a minimum of 100ms.
*
* For more details, refer to PCI Express Card Electromechanical
* Specification Revision 1.1, Table-2.4.
*/
msleep(100);
gpiod_set_value_cansleep(pcie->reset, 0);
usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
}
static void keembay_pcie_ltssm_set(struct keembay_pcie *pcie, bool enable)
{
u32 val;
val = readl(pcie->apb_base + PCIE_REGS_PCIE_APP_CNTRL);
if (enable)
val |= APP_LTSSM_ENABLE;
else
val &= ~APP_LTSSM_ENABLE;
writel(val, pcie->apb_base + PCIE_REGS_PCIE_APP_CNTRL);
}
static int keembay_pcie_link_up(struct dw_pcie *pci)
{
struct keembay_pcie *pcie = dev_get_drvdata(pci->dev);
u32 val;
val = readl(pcie->apb_base + PCIE_REGS_PCIE_SII_PM_STATE);
return (val & PCIE_REGS_PCIE_SII_LINK_UP) == PCIE_REGS_PCIE_SII_LINK_UP;
}
static int keembay_pcie_start_link(struct dw_pcie *pci)
{
struct keembay_pcie *pcie = dev_get_drvdata(pci->dev);
u32 val;
int ret;
if (pcie->mode == DW_PCIE_EP_TYPE)
return 0;
keembay_pcie_ltssm_set(pcie, false);
ret = readl_poll_timeout(pcie->apb_base + PCIE_REGS_PCIE_PHY_STAT,
val, val & PHY0_MPLLA_STATE, 20,
500 * USEC_PER_MSEC);
if (ret) {
dev_err(pci->dev, "MPLLA is not locked\n");
return ret;
}
keembay_pcie_ltssm_set(pcie, true);
return 0;
}
static void keembay_pcie_stop_link(struct dw_pcie *pci)
{
struct keembay_pcie *pcie = dev_get_drvdata(pci->dev);
keembay_pcie_ltssm_set(pcie, false);
}
static const struct dw_pcie_ops keembay_pcie_ops = {
.link_up = keembay_pcie_link_up,
.start_link = keembay_pcie_start_link,
.stop_link = keembay_pcie_stop_link,
};
static inline struct clk *keembay_pcie_probe_clock(struct device *dev,
const char *id, u64 rate)
{
struct clk *clk;
int ret;
clk = devm_clk_get(dev, id);
if (IS_ERR(clk))
return clk;
if (rate) {
ret = clk_set_rate(clk, rate);
if (ret)
return ERR_PTR(ret);
}
ret = clk_prepare_enable(clk);
if (ret)
return ERR_PTR(ret);
ret = devm_add_action_or_reset(dev,
(void(*)(void *))clk_disable_unprepare,
clk);
if (ret)
return ERR_PTR(ret);
return clk;
}
static int keembay_pcie_probe_clocks(struct keembay_pcie *pcie)
{
struct dw_pcie *pci = &pcie->pci;
struct device *dev = pci->dev;
pcie->clk_master = keembay_pcie_probe_clock(dev, "master", 0);
if (IS_ERR(pcie->clk_master))
return dev_err_probe(dev, PTR_ERR(pcie->clk_master),
"Failed to enable master clock");
pcie->clk_aux = keembay_pcie_probe_clock(dev, "aux", AUX_CLK_RATE_HZ);
if (IS_ERR(pcie->clk_aux))
return dev_err_probe(dev, PTR_ERR(pcie->clk_aux),
"Failed to enable auxiliary clock");
return 0;
}
/*
* Initialize the internal PCIe PLL in Host mode.
* See the following sections in Keem Bay data book,
* (1) 6.4.6.1 PCIe Subsystem Example Initialization,
* (2) 6.8 PCIe Low Jitter PLL for Ref Clk Generation.
*/
static int keembay_pcie_pll_init(struct keembay_pcie *pcie)
{
struct dw_pcie *pci = &pcie->pci;
u32 val;
int ret;
val = FIELD_PREP(LJPLL_REF_DIV, 0) | FIELD_PREP(LJPLL_FB_DIV, 0x32);
writel(val, pcie->apb_base + PCIE_REGS_LJPLL_CNTRL_2);
val = FIELD_PREP(LJPLL_POST_DIV3A, 0x2) |
FIELD_PREP(LJPLL_POST_DIV2A, 0x2);
writel(val, pcie->apb_base + PCIE_REGS_LJPLL_CNTRL_3);
val = FIELD_PREP(LJPLL_EN, 0x1) | FIELD_PREP(LJPLL_FOUT_EN, 0xc);
writel(val, pcie->apb_base + PCIE_REGS_LJPLL_CNTRL_0);
ret = readl_poll_timeout(pcie->apb_base + PCIE_REGS_LJPLL_STA,
val, val & LJPLL_LOCK, 20,
500 * USEC_PER_MSEC);
if (ret)
dev_err(pci->dev, "Low jitter PLL is not locked\n");
return ret;
}
static void keembay_pcie_msi_irq_handler(struct irq_desc *desc)
{
struct keembay_pcie *pcie = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
u32 val, mask, status;
struct pcie_port *pp;
/*
 * The Keem Bay PCIe controller provides additional IP logic on top of
 * the standard DWC IP to clear an MSI IRQ by writing '1' to the
 * respective bit of the status register.
 *
 * A chained IRQ handler is therefore used to drive this additional
 * IP logic.
 */
chained_irq_enter(chip, desc);
pp = &pcie->pci.pp;
val = readl(pcie->apb_base + PCIE_REGS_INTERRUPT_STATUS);
mask = readl(pcie->apb_base + PCIE_REGS_INTERRUPT_ENABLE);
status = val & mask;
if (status & MSI_CTRL_INT) {
dw_handle_msi_irq(pp);
writel(status, pcie->apb_base + PCIE_REGS_INTERRUPT_STATUS);
}
chained_irq_exit(chip, desc);
}
static int keembay_pcie_setup_msi_irq(struct keembay_pcie *pcie)
{
struct dw_pcie *pci = &pcie->pci;
struct device *dev = pci->dev;
struct platform_device *pdev = to_platform_device(dev);
int irq;
irq = platform_get_irq_byname(pdev, "pcie");
if (irq < 0)
return irq;
irq_set_chained_handler_and_data(irq, keembay_pcie_msi_irq_handler,
pcie);
return 0;
}
static void keembay_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct keembay_pcie *pcie = dev_get_drvdata(pci->dev);
writel(EDMA_INT_EN, pcie->apb_base + PCIE_REGS_INTERRUPT_ENABLE);
}
static int keembay_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
enum pci_epc_irq_type type,
u16 interrupt_num)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
switch (type) {
case PCI_EPC_IRQ_LEGACY:
/* Legacy interrupts are not supported in Keem Bay */
dev_err(pci->dev, "Legacy IRQ is not supported\n");
return -EINVAL;
case PCI_EPC_IRQ_MSI:
return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
case PCI_EPC_IRQ_MSIX:
return dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num);
default:
dev_err(pci->dev, "Unknown IRQ type %d\n", type);
return -EINVAL;
}
}
static const struct pci_epc_features keembay_pcie_epc_features = {
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = true,
.reserved_bar = BIT(BAR_1) | BIT(BAR_3) | BIT(BAR_5),
.bar_fixed_64bit = BIT(BAR_0) | BIT(BAR_2) | BIT(BAR_4),
.align = SZ_16K,
};
static const struct pci_epc_features *
keembay_pcie_get_features(struct dw_pcie_ep *ep)
{
return &keembay_pcie_epc_features;
}
static const struct dw_pcie_ep_ops keembay_pcie_ep_ops = {
.ep_init = keembay_pcie_ep_init,
.raise_irq = keembay_pcie_ep_raise_irq,
.get_features = keembay_pcie_get_features,
};
static const struct dw_pcie_host_ops keembay_pcie_host_ops = {
};
static int keembay_pcie_add_pcie_port(struct keembay_pcie *pcie,
struct platform_device *pdev)
{
struct dw_pcie *pci = &pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = &pdev->dev;
u32 val;
int ret;
pp->ops = &keembay_pcie_host_ops;
pp->msi_irq = -ENODEV;
ret = keembay_pcie_setup_msi_irq(pcie);
if (ret)
return ret;
pcie->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
if (IS_ERR(pcie->reset))
return PTR_ERR(pcie->reset);
ret = keembay_pcie_probe_clocks(pcie);
if (ret)
return ret;
val = readl(pcie->apb_base + PCIE_REGS_PCIE_PHY_CNTL);
val |= PHY0_SRAM_BYPASS;
writel(val, pcie->apb_base + PCIE_REGS_PCIE_PHY_CNTL);
writel(PCIE_DEVICE_TYPE, pcie->apb_base + PCIE_REGS_PCIE_CFG);
ret = keembay_pcie_pll_init(pcie);
if (ret)
return ret;
val = readl(pcie->apb_base + PCIE_REGS_PCIE_CFG);
writel(val | PCIE_RSTN, pcie->apb_base + PCIE_REGS_PCIE_CFG);
keembay_ep_reset_deassert(pcie);
ret = dw_pcie_host_init(pp);
if (ret) {
keembay_ep_reset_assert(pcie);
dev_err(dev, "Failed to initialize host: %d\n", ret);
return ret;
}
val = readl(pcie->apb_base + PCIE_REGS_INTERRUPT_ENABLE);
if (IS_ENABLED(CONFIG_PCI_MSI))
val |= MSI_CTRL_INT_EN;
writel(val, pcie->apb_base + PCIE_REGS_INTERRUPT_ENABLE);
return 0;
}
static int keembay_pcie_probe(struct platform_device *pdev)
{
const struct keembay_pcie_of_data *data;
struct device *dev = &pdev->dev;
struct keembay_pcie *pcie;
struct dw_pcie *pci;
enum dw_pcie_device_mode mode;
data = device_get_match_data(dev);
if (!data)
return -ENODEV;
mode = (enum dw_pcie_device_mode)data->mode;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
pci = &pcie->pci;
pci->dev = dev;
pci->ops = &keembay_pcie_ops;
pcie->mode = mode;
pcie->apb_base = devm_platform_ioremap_resource_byname(pdev, "apb");
if (IS_ERR(pcie->apb_base))
return PTR_ERR(pcie->apb_base);
platform_set_drvdata(pdev, pcie);
switch (pcie->mode) {
case DW_PCIE_RC_TYPE:
if (!IS_ENABLED(CONFIG_PCIE_KEEMBAY_HOST))
return -ENODEV;
return keembay_pcie_add_pcie_port(pcie, pdev);
case DW_PCIE_EP_TYPE:
if (!IS_ENABLED(CONFIG_PCIE_KEEMBAY_EP))
return -ENODEV;
pci->ep.ops = &keembay_pcie_ep_ops;
return dw_pcie_ep_init(&pci->ep);
default:
dev_err(dev, "Invalid device type %d\n", pcie->mode);
return -ENODEV;
}
}
static const struct keembay_pcie_of_data keembay_pcie_rc_of_data = {
.mode = DW_PCIE_RC_TYPE,
};
static const struct keembay_pcie_of_data keembay_pcie_ep_of_data = {
.mode = DW_PCIE_EP_TYPE,
};
static const struct of_device_id keembay_pcie_of_match[] = {
{
.compatible = "intel,keembay-pcie",
.data = &keembay_pcie_rc_of_data,
},
{
.compatible = "intel,keembay-pcie-ep",
.data = &keembay_pcie_ep_of_data,
},
{}
};
static struct platform_driver keembay_pcie_driver = {
.driver = {
.name = "keembay-pcie",
.of_match_table = keembay_pcie_of_match,
.suppress_bind_attrs = true,
},
.probe = keembay_pcie_probe,
};
builtin_platform_driver(keembay_pcie_driver);


@@ -497,19 +497,19 @@ static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
struct tegra_pcie_dw *pcie = arg;
struct dw_pcie_ep *ep = &pcie->pci.ep;
int spurious = 1;
u32 val, tmp;
u32 status_l0, status_l1, link_status;
val = appl_readl(pcie, APPL_INTR_STATUS_L0);
if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0);
status_l0 = appl_readl(pcie, APPL_INTR_STATUS_L0);
if (status_l0 & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
appl_writel(pcie, status_l1, APPL_INTR_STATUS_L1_0_0);
if (val & APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE)
if (status_l1 & APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE)
pex_ep_event_hot_rst_done(pcie);
if (val & APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED) {
tmp = appl_readl(pcie, APPL_LINK_STATUS);
if (tmp & APPL_LINK_STATUS_RDLH_LINK_UP) {
if (status_l1 & APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED) {
link_status = appl_readl(pcie, APPL_LINK_STATUS);
if (link_status & APPL_LINK_STATUS_RDLH_LINK_UP) {
dev_dbg(pcie->dev, "Link is up with Host\n");
dw_pcie_ep_linkup(ep);
}
@@ -518,11 +518,11 @@ static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
spurious = 0;
}
if (val & APPL_INTR_STATUS_L0_PCI_CMD_EN_INT) {
val = appl_readl(pcie, APPL_INTR_STATUS_L1_15);
appl_writel(pcie, val, APPL_INTR_STATUS_L1_15);
if (status_l0 & APPL_INTR_STATUS_L0_PCI_CMD_EN_INT) {
status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_15);
appl_writel(pcie, status_l1, APPL_INTR_STATUS_L1_15);
if (val & APPL_INTR_STATUS_L1_15_CFG_BME_CHGED)
if (status_l1 & APPL_INTR_STATUS_L1_15_CFG_BME_CHGED)
return IRQ_WAKE_THREAD;
spurious = 0;
@@ -530,8 +530,8 @@ static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
if (spurious) {
dev_warn(pcie->dev, "Random interrupt (STATUS = 0x%08X)\n",
val);
appl_writel(pcie, val, APPL_INTR_STATUS_L0);
status_l0);
appl_writel(pcie, status_l0, APPL_INTR_STATUS_L0);
}
return IRQ_HANDLED;
@@ -1493,6 +1493,16 @@ static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie)
return;
}
/*
 * The PCIe controller exits L2 only when a reset is applied, so it
 * does not handle interrupts afterwards. But when L2 entry fails,
 * the subsequent PERST# assertion can trigger a surprise link down
 * AER. Since this function is called from suspend_noirq() context,
 * that AER interrupt would never be processed. Disable all
 * interrupts to avoid such a scenario.
 */
appl_writel(pcie, 0x0, APPL_INTR_EN_L0_0);
if (tegra_pcie_try_link_l2(pcie)) {
dev_info(pcie->dev, "Link didn't transition to L2 state\n");
/*
@@ -1763,7 +1773,7 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
val = (ep->msi_mem_phys & MSIX_ADDR_MATCH_LOW_OFF_MASK);
val |= MSIX_ADDR_MATCH_LOW_OFF_EN;
dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_LOW_OFF, val);
val = (lower_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK);
val = (upper_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK);
dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_HIGH_OFF, val);
ret = dw_pcie_ep_init_complete(ep);
@@ -1935,13 +1945,6 @@ static int tegra_pcie_config_ep(struct tegra_pcie_dw *pcie,
return ret;
}
name = devm_kasprintf(dev, GFP_KERNEL, "tegra_pcie_%u_ep_work",
pcie->cid);
if (!name) {
dev_err(dev, "Failed to create PCIe EP work thread string\n");
return -ENOMEM;
}
pm_runtime_enable(dev);
ret = dw_pcie_ep_init(ep);
@@ -2236,6 +2239,11 @@ static int tegra_pcie_dw_resume_early(struct device *dev)
struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
u32 val;
if (pcie->mode == DW_PCIE_EP_TYPE) {
dev_err(dev, "Suspend is not supported in EP mode");
return -ENOTSUPP;
}
if (!pcie->link_state)
return 0;


@@ -235,7 +235,7 @@ static void uniphier_pcie_irq_handler(struct irq_desc *desc)
struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned long reg;
u32 val, bit, virq;
u32 val, bit;
/* INT for debug */
val = readl(priv->base + PCL_RCV_INT);
@@ -257,10 +257,8 @@ static void uniphier_pcie_irq_handler(struct irq_desc *desc)
val = readl(priv->base + PCL_RCV_INTX);
reg = FIELD_GET(PCL_RCV_INTX_ALL_STATUS, val);
for_each_set_bit(bit, &reg, PCI_NUM_INTX) {
virq = irq_linear_revmap(priv->legacy_irq_domain, bit);
generic_handle_irq(virq);
}
for_each_set_bit(bit, &reg, PCI_NUM_INTX)
generic_handle_domain_irq(priv->legacy_irq_domain, bit);
chained_irq_exit(chip, desc);
}


@@ -0,0 +1,332 @@
// SPDX-License-Identifier: GPL-2.0
/*
* DWC PCIe RC driver for Toshiba Visconti ARM SoC
*
* Copyright (C) 2021 Toshiba Electronic Device & Storage Corporation
* Copyright (C) 2021 TOSHIBA CORPORATION
*
* Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp>
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/iopoll.h>
#include <linux/kernel.h>
#include <linux/of_platform.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/resource.h>
#include <linux/types.h>
#include "pcie-designware.h"
#include "../../pci.h"
struct visconti_pcie {
struct dw_pcie pci;
void __iomem *ulreg_base;
void __iomem *smu_base;
void __iomem *mpu_base;
struct clk *refclk;
struct clk *coreclk;
struct clk *auxclk;
};
#define PCIE_UL_REG_S_PCIE_MODE 0x00F4
#define PCIE_UL_REG_S_PCIE_MODE_EP 0x00
#define PCIE_UL_REG_S_PCIE_MODE_RC 0x04
#define PCIE_UL_REG_S_PERSTN_CTRL 0x00F8
#define PCIE_UL_IOM_PCIE_PERSTN_I_EN BIT(3)
#define PCIE_UL_DIRECT_PERSTN_EN BIT(2)
#define PCIE_UL_PERSTN_OUT BIT(1)
#define PCIE_UL_DIRECT_PERSTN BIT(0)
#define PCIE_UL_REG_S_PERSTN_CTRL_INIT (PCIE_UL_IOM_PCIE_PERSTN_I_EN | \
PCIE_UL_DIRECT_PERSTN_EN | \
PCIE_UL_DIRECT_PERSTN)
#define PCIE_UL_REG_S_PHY_INIT_02 0x0104
#define PCIE_UL_PHY0_SRAM_EXT_LD_DONE BIT(0)
#define PCIE_UL_REG_S_PHY_INIT_03 0x0108
#define PCIE_UL_PHY0_SRAM_INIT_DONE BIT(0)
#define PCIE_UL_REG_S_INT_EVENT_MASK1 0x0138
#define PCIE_UL_CFG_PME_INT BIT(0)
#define PCIE_UL_CFG_LINK_EQ_REQ_INT BIT(1)
#define PCIE_UL_EDMA_INT0 BIT(2)
#define PCIE_UL_EDMA_INT1 BIT(3)
#define PCIE_UL_EDMA_INT2 BIT(4)
#define PCIE_UL_EDMA_INT3 BIT(5)
#define PCIE_UL_S_INT_EVENT_MASK1_ALL (PCIE_UL_CFG_PME_INT | \
PCIE_UL_CFG_LINK_EQ_REQ_INT | \
PCIE_UL_EDMA_INT0 | \
PCIE_UL_EDMA_INT1 | \
PCIE_UL_EDMA_INT2 | \
PCIE_UL_EDMA_INT3)
#define PCIE_UL_REG_S_SB_MON 0x0198
#define PCIE_UL_REG_S_SIG_MON 0x019C
#define PCIE_UL_CORE_RST_N_MON BIT(0)
#define PCIE_UL_REG_V_SII_DBG_00 0x0844
#define PCIE_UL_REG_V_SII_GEN_CTRL_01 0x0860
#define PCIE_UL_APP_LTSSM_ENABLE BIT(0)
#define PCIE_UL_REG_V_PHY_ST_00 0x0864
#define PCIE_UL_SMLH_LINK_UP BIT(0)
#define PCIE_UL_REG_V_PHY_ST_02 0x0868
#define PCIE_UL_S_DETECT_ACT 0x01
#define PCIE_UL_S_L0 0x11
#define PISMU_CKON_PCIE 0x0038
#define PISMU_CKON_PCIE_AUX_CLK BIT(1)
#define PISMU_CKON_PCIE_MSTR_ACLK BIT(0)
#define PISMU_RSOFF_PCIE 0x0538
#define PISMU_RSOFF_PCIE_ULREG_RST_N BIT(1)
#define PISMU_RSOFF_PCIE_PWR_UP_RST_N BIT(0)
#define PCIE_MPU_REG_MP_EN 0x0
#define MPU_MP_EN_DISABLE BIT(0)
/* Access registers in PCIe ulreg */
static void visconti_ulreg_writel(struct visconti_pcie *pcie, u32 val, u32 reg)
{
writel_relaxed(val, pcie->ulreg_base + reg);
}
static u32 visconti_ulreg_readl(struct visconti_pcie *pcie, u32 reg)
{
return readl_relaxed(pcie->ulreg_base + reg);
}
/* Access registers in PCIe smu */
static void visconti_smu_writel(struct visconti_pcie *pcie, u32 val, u32 reg)
{
writel_relaxed(val, pcie->smu_base + reg);
}
/* Access registers in PCIe mpu */
static void visconti_mpu_writel(struct visconti_pcie *pcie, u32 val, u32 reg)
{
writel_relaxed(val, pcie->mpu_base + reg);
}
static u32 visconti_mpu_readl(struct visconti_pcie *pcie, u32 reg)
{
return readl_relaxed(pcie->mpu_base + reg);
}
static int visconti_pcie_link_up(struct dw_pcie *pci)
{
struct visconti_pcie *pcie = dev_get_drvdata(pci->dev);
void __iomem *addr = pcie->ulreg_base;
u32 val = readl_relaxed(addr + PCIE_UL_REG_V_PHY_ST_02);
return !!(val & PCIE_UL_S_L0);
}
static int visconti_pcie_start_link(struct dw_pcie *pci)
{
struct visconti_pcie *pcie = dev_get_drvdata(pci->dev);
void __iomem *addr = pcie->ulreg_base;
u32 val;
int ret;
visconti_ulreg_writel(pcie, PCIE_UL_APP_LTSSM_ENABLE,
PCIE_UL_REG_V_SII_GEN_CTRL_01);
ret = readl_relaxed_poll_timeout(addr + PCIE_UL_REG_V_PHY_ST_02,
val, (val & PCIE_UL_S_L0),
90000, 100000);
if (ret)
return ret;
visconti_ulreg_writel(pcie, PCIE_UL_S_INT_EVENT_MASK1_ALL,
PCIE_UL_REG_S_INT_EVENT_MASK1);
if (dw_pcie_link_up(pci)) {
val = visconti_mpu_readl(pcie, PCIE_MPU_REG_MP_EN);
visconti_mpu_writel(pcie, val & ~MPU_MP_EN_DISABLE,
PCIE_MPU_REG_MP_EN);
}
return 0;
}
static void visconti_pcie_stop_link(struct dw_pcie *pci)
{
struct visconti_pcie *pcie = dev_get_drvdata(pci->dev);
u32 val;
val = visconti_ulreg_readl(pcie, PCIE_UL_REG_V_SII_GEN_CTRL_01);
val &= ~PCIE_UL_APP_LTSSM_ENABLE;
visconti_ulreg_writel(pcie, val, PCIE_UL_REG_V_SII_GEN_CTRL_01);
val = visconti_mpu_readl(pcie, PCIE_MPU_REG_MP_EN);
visconti_mpu_writel(pcie, val | MPU_MP_EN_DISABLE, PCIE_MPU_REG_MP_EN);
}
/*
 * Per this SoC's specification, the CPU bus presents addresses to the
 * PCIe bus as offsets from 0x40000000, so 0x40000000 is subtracted
 * from the CPU bus address. This 0x40000000 base also corresponds to
 * io_base from the DT.
 */
static u64 visconti_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 cpu_addr)
{
struct pcie_port *pp = &pci->pp;
return cpu_addr & ~pp->io_base;
}
static const struct dw_pcie_ops dw_pcie_ops = {
.cpu_addr_fixup = visconti_pcie_cpu_addr_fixup,
.link_up = visconti_pcie_link_up,
.start_link = visconti_pcie_start_link,
.stop_link = visconti_pcie_stop_link,
};
static int visconti_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct visconti_pcie *pcie = dev_get_drvdata(pci->dev);
void __iomem *addr;
int err;
u32 val;
visconti_smu_writel(pcie,
PISMU_CKON_PCIE_AUX_CLK | PISMU_CKON_PCIE_MSTR_ACLK,
PISMU_CKON_PCIE);
ndelay(250);
visconti_smu_writel(pcie, PISMU_RSOFF_PCIE_ULREG_RST_N,
PISMU_RSOFF_PCIE);
visconti_ulreg_writel(pcie, PCIE_UL_REG_S_PCIE_MODE_RC,
PCIE_UL_REG_S_PCIE_MODE);
val = PCIE_UL_REG_S_PERSTN_CTRL_INIT;
visconti_ulreg_writel(pcie, val, PCIE_UL_REG_S_PERSTN_CTRL);
udelay(100);
val |= PCIE_UL_PERSTN_OUT;
visconti_ulreg_writel(pcie, val, PCIE_UL_REG_S_PERSTN_CTRL);
udelay(100);
visconti_smu_writel(pcie, PISMU_RSOFF_PCIE_PWR_UP_RST_N,
PISMU_RSOFF_PCIE);
addr = pcie->ulreg_base + PCIE_UL_REG_S_PHY_INIT_03;
err = readl_relaxed_poll_timeout(addr, val,
(val & PCIE_UL_PHY0_SRAM_INIT_DONE),
100, 1000);
if (err)
return err;
visconti_ulreg_writel(pcie, PCIE_UL_PHY0_SRAM_EXT_LD_DONE,
PCIE_UL_REG_S_PHY_INIT_02);
addr = pcie->ulreg_base + PCIE_UL_REG_S_SIG_MON;
return readl_relaxed_poll_timeout(addr, val,
(val & PCIE_UL_CORE_RST_N_MON), 100,
1000);
}
static const struct dw_pcie_host_ops visconti_pcie_host_ops = {
.host_init = visconti_pcie_host_init,
};
static int visconti_get_resources(struct platform_device *pdev,
struct visconti_pcie *pcie)
{
struct device *dev = &pdev->dev;
pcie->ulreg_base = devm_platform_ioremap_resource_byname(pdev, "ulreg");
if (IS_ERR(pcie->ulreg_base))
return PTR_ERR(pcie->ulreg_base);
pcie->smu_base = devm_platform_ioremap_resource_byname(pdev, "smu");
if (IS_ERR(pcie->smu_base))
return PTR_ERR(pcie->smu_base);
pcie->mpu_base = devm_platform_ioremap_resource_byname(pdev, "mpu");
if (IS_ERR(pcie->mpu_base))
return PTR_ERR(pcie->mpu_base);
pcie->refclk = devm_clk_get(dev, "ref");
if (IS_ERR(pcie->refclk))
return dev_err_probe(dev, PTR_ERR(pcie->refclk),
"Failed to get ref clock\n");
pcie->coreclk = devm_clk_get(dev, "core");
if (IS_ERR(pcie->coreclk))
return dev_err_probe(dev, PTR_ERR(pcie->coreclk),
"Failed to get core clock\n");
pcie->auxclk = devm_clk_get(dev, "aux");
if (IS_ERR(pcie->auxclk))
return dev_err_probe(dev, PTR_ERR(pcie->auxclk),
"Failed to get aux clock\n");
return 0;
}
static int visconti_add_pcie_port(struct visconti_pcie *pcie,
struct platform_device *pdev)
{
struct dw_pcie *pci = &pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = &pdev->dev;
pp->irq = platform_get_irq_byname(pdev, "intr");
if (pp->irq < 0) {
dev_err(dev, "Interrupt intr is missing");
return pp->irq;
}
pp->ops = &visconti_pcie_host_ops;
return dw_pcie_host_init(pp);
}
static int visconti_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct visconti_pcie *pcie;
struct dw_pcie *pci;
int ret;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
pci = &pcie->pci;
pci->dev = dev;
pci->ops = &dw_pcie_ops;
ret = visconti_get_resources(pdev, pcie);
if (ret)
return ret;
platform_set_drvdata(pdev, pcie);
return visconti_add_pcie_port(pcie, pdev);
}
static const struct of_device_id visconti_pcie_match[] = {
{ .compatible = "toshiba,visconti-pcie" },
{},
};
static struct platform_driver visconti_pcie_driver = {
.probe = visconti_pcie_probe,
.driver = {
.name = "visconti-pcie",
.of_match_table = visconti_pcie_match,
.suppress_bind_attrs = true,
},
};
builtin_platform_driver(visconti_pcie_driver);


@@ -92,7 +92,7 @@ static void mobiveil_pcie_isr(struct irq_desc *desc)
u32 msi_data, msi_addr_lo, msi_addr_hi;
u32 intr_status, msi_status;
unsigned long shifted_status;
u32 bit, virq, val, mask;
u32 bit, val, mask;
/*
* The core provides a single interrupt for both INTx/MSI messages.
@@ -114,11 +114,10 @@ static void mobiveil_pcie_isr(struct irq_desc *desc)
shifted_status >>= PAB_INTX_START;
do {
for_each_set_bit(bit, &shifted_status, PCI_NUM_INTX) {
virq = irq_find_mapping(rp->intx_domain,
bit + 1);
if (virq)
generic_handle_irq(virq);
else
int ret;
ret = generic_handle_domain_irq(rp->intx_domain,
bit + 1);
if (ret)
dev_err_ratelimited(dev, "unexpected IRQ, INT%d\n",
bit);
@@ -155,9 +154,7 @@ static void mobiveil_pcie_isr(struct irq_desc *desc)
dev_dbg(dev, "MSI registers, data: %08x, addr: %08x:%08x\n",
msi_data, msi_addr_hi, msi_addr_lo);
virq = irq_find_mapping(msi->dev_domain, msi_data);
if (virq)
generic_handle_irq(virq);
generic_handle_domain_irq(msi->dev_domain, msi_data);
msi_status = readl_relaxed(pcie->apb_csr_base +
MSI_STATUS_OFFSET);


@@ -58,6 +58,7 @@
#define PIO_COMPLETION_STATUS_CRS 2
#define PIO_COMPLETION_STATUS_CA 4
#define PIO_NON_POSTED_REQ BIT(10)
#define PIO_ERR_STATUS BIT(11)
#define PIO_ADDR_LS (PIO_BASE_ADDR + 0x8)
#define PIO_ADDR_MS (PIO_BASE_ADDR + 0xc)
#define PIO_WR_DATA (PIO_BASE_ADDR + 0x10)
@@ -118,6 +119,46 @@
#define PCIE_MSI_MASK_REG (CONTROL_BASE_ADDR + 0x5C)
#define PCIE_MSI_PAYLOAD_REG (CONTROL_BASE_ADDR + 0x9C)
/* PCIe window configuration */
#define OB_WIN_BASE_ADDR 0x4c00
#define OB_WIN_BLOCK_SIZE 0x20
#define OB_WIN_COUNT 8
#define OB_WIN_REG_ADDR(win, offset) (OB_WIN_BASE_ADDR + \
OB_WIN_BLOCK_SIZE * (win) + \
(offset))
#define OB_WIN_MATCH_LS(win) OB_WIN_REG_ADDR(win, 0x00)
#define OB_WIN_ENABLE BIT(0)
#define OB_WIN_MATCH_MS(win) OB_WIN_REG_ADDR(win, 0x04)
#define OB_WIN_REMAP_LS(win) OB_WIN_REG_ADDR(win, 0x08)
#define OB_WIN_REMAP_MS(win) OB_WIN_REG_ADDR(win, 0x0c)
#define OB_WIN_MASK_LS(win) OB_WIN_REG_ADDR(win, 0x10)
#define OB_WIN_MASK_MS(win) OB_WIN_REG_ADDR(win, 0x14)
#define OB_WIN_ACTIONS(win) OB_WIN_REG_ADDR(win, 0x18)
#define OB_WIN_DEFAULT_ACTIONS (OB_WIN_ACTIONS(OB_WIN_COUNT-1) + 0x4)
#define OB_WIN_FUNC_NUM_MASK GENMASK(31, 24)
#define OB_WIN_FUNC_NUM_SHIFT 24
#define OB_WIN_FUNC_NUM_ENABLE BIT(23)
#define OB_WIN_BUS_NUM_BITS_MASK GENMASK(22, 20)
#define OB_WIN_BUS_NUM_BITS_SHIFT 20
#define OB_WIN_MSG_CODE_ENABLE BIT(22)
#define OB_WIN_MSG_CODE_MASK GENMASK(21, 14)
#define OB_WIN_MSG_CODE_SHIFT 14
#define OB_WIN_MSG_PAYLOAD_LEN BIT(12)
#define OB_WIN_ATTR_ENABLE BIT(11)
#define OB_WIN_ATTR_TC_MASK GENMASK(10, 8)
#define OB_WIN_ATTR_TC_SHIFT 8
#define OB_WIN_ATTR_RELAXED BIT(7)
#define OB_WIN_ATTR_NOSNOOP BIT(6)
#define OB_WIN_ATTR_POISON BIT(5)
#define OB_WIN_ATTR_IDO BIT(4)
#define OB_WIN_TYPE_MASK GENMASK(3, 0)
#define OB_WIN_TYPE_SHIFT 0
#define OB_WIN_TYPE_MEM 0x0
#define OB_WIN_TYPE_IO 0x4
#define OB_WIN_TYPE_CONFIG_TYPE0 0x8
#define OB_WIN_TYPE_CONFIG_TYPE1 0x9
#define OB_WIN_TYPE_MSG 0xc
/* LMI registers base address and register offsets */
#define LMI_BASE_ADDR 0x6000
#define CFG_REG (LMI_BASE_ADDR + 0x0)
@ -166,7 +207,7 @@
#define PCIE_CONFIG_WR_TYPE0 0xa
#define PCIE_CONFIG_WR_TYPE1 0xb
#define PIO_RETRY_CNT 500
#define PIO_RETRY_CNT 750000 /* 1.5 s */
#define PIO_RETRY_DELAY 2 /* 2 us */
#define LINK_WAIT_MAX_RETRIES 10
@ -177,11 +218,21 @@
#define MSI_IRQ_NUM 32
#define CFG_RD_CRS_VAL 0xffff0001
struct advk_pcie {
struct platform_device *pdev;
void __iomem *base;
struct {
phys_addr_t match;
phys_addr_t remap;
phys_addr_t mask;
u32 actions;
} wins[OB_WIN_COUNT];
u8 wins_count;
struct irq_domain *irq_domain;
struct irq_chip irq_chip;
raw_spinlock_t irq_lock;
struct irq_domain *msi_domain;
struct irq_domain *msi_inner_domain;
struct irq_chip msi_bottom_irq_chip;
@ -366,9 +417,39 @@ err:
dev_err(dev, "link never came up\n");
}
/*
* Set a PCIe address window register, which can be used for
* memory mapping.
*/
static void advk_pcie_set_ob_win(struct advk_pcie *pcie, u8 win_num,
phys_addr_t match, phys_addr_t remap,
phys_addr_t mask, u32 actions)
{
advk_writel(pcie, OB_WIN_ENABLE |
lower_32_bits(match), OB_WIN_MATCH_LS(win_num));
advk_writel(pcie, upper_32_bits(match), OB_WIN_MATCH_MS(win_num));
advk_writel(pcie, lower_32_bits(remap), OB_WIN_REMAP_LS(win_num));
advk_writel(pcie, upper_32_bits(remap), OB_WIN_REMAP_MS(win_num));
advk_writel(pcie, lower_32_bits(mask), OB_WIN_MASK_LS(win_num));
advk_writel(pcie, upper_32_bits(mask), OB_WIN_MASK_MS(win_num));
advk_writel(pcie, actions, OB_WIN_ACTIONS(win_num));
}
static void advk_pcie_disable_ob_win(struct advk_pcie *pcie, u8 win_num)
{
advk_writel(pcie, 0, OB_WIN_MATCH_LS(win_num));
advk_writel(pcie, 0, OB_WIN_MATCH_MS(win_num));
advk_writel(pcie, 0, OB_WIN_REMAP_LS(win_num));
advk_writel(pcie, 0, OB_WIN_REMAP_MS(win_num));
advk_writel(pcie, 0, OB_WIN_MASK_LS(win_num));
advk_writel(pcie, 0, OB_WIN_MASK_MS(win_num));
advk_writel(pcie, 0, OB_WIN_ACTIONS(win_num));
}
static void advk_pcie_setup_hw(struct advk_pcie *pcie)
{
u32 reg;
int i;
/* Enable TX */
reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG);
@ -447,15 +528,51 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK);
advk_writel(pcie, reg, HOST_CTRL_INT_MASK_REG);
/*
* Enable AXI address window location generation:
* When enabled, the default outbound window
* configuration (Default User Field: 0xD0074CFC)
* is used for transparent address translation of
* outbound transactions, so no PCIe address
* windows need to be configured for transparent
* memory access.
*/
reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
reg |= PCIE_CORE_CTRL2_OB_WIN_ENABLE;
advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG);
/* Bypass the address window mapping for PIO */
/*
* Set memory access in the Default User Field so that
* no PCIe address window needs to be configured for
* transparent memory access.
*/
advk_writel(pcie, OB_WIN_TYPE_MEM, OB_WIN_DEFAULT_ACTIONS);
/*
* Bypass the address window mapping for PIO:
* The PIO registers already carry all required
* information over the AXI interface, so the
* address window is not needed for PIO access.
*/
reg = advk_readl(pcie, PIO_CTRL);
reg |= PIO_CTRL_ADDR_WIN_DISABLE;
advk_writel(pcie, reg, PIO_CTRL);
/*
* Configure PCIe address windows for non-memory or
* non-transparent access, since by default PCIe
* uses transparent memory access.
*/
for (i = 0; i < pcie->wins_count; i++)
advk_pcie_set_ob_win(pcie, i,
pcie->wins[i].match, pcie->wins[i].remap,
pcie->wins[i].mask, pcie->wins[i].actions);
/* Disable remaining PCIe outbound windows */
for (i = pcie->wins_count; i < OB_WIN_COUNT; i++)
advk_pcie_disable_ob_win(pcie, i);
advk_pcie_train_link(pcie);
/*
@ -472,7 +589,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
}
static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val)
{
struct device *dev = &pcie->pdev->dev;
u32 reg;
@ -483,14 +600,70 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
status = (reg & PIO_COMPLETION_STATUS_MASK) >>
PIO_COMPLETION_STATUS_SHIFT;
if (!status)
return;
/*
* According to the HW spec, the PIO status check sequence is:
* 1) even if COMPLETION_STATUS (bits 9:7) indicates success,
* Error Status (bit 11) must still be checked; the operation
* succeeded only if that bit reports no error.
* 2) the Unsupported Request (1) value of COMPLETION_STATUS only
* means an error for a PIO write; a PIO read is successful and
* returns a value of 0xFFFFFFFF.
* 3) the Completion Retry Status (CRS) value of COMPLETION_STATUS
* only means an error for a PIO write; a PIO read is successful
* and returns a value of 0xFFFF0001.
* 4) the Completer Abort (CA) value of COMPLETION_STATUS means an
* error for both PIO reads and PIO writes.
* 5) other values are reported as 'unknown'.
*/
switch (status) {
case PIO_COMPLETION_STATUS_OK:
if (reg & PIO_ERR_STATUS) {
strcomp_status = "COMP_ERR";
break;
}
/* Get the read result */
if (val)
*val = advk_readl(pcie, PIO_RD_DATA);
/* No error */
strcomp_status = NULL;
break;
case PIO_COMPLETION_STATUS_UR:
strcomp_status = "UR";
break;
case PIO_COMPLETION_STATUS_CRS:
if (allow_crs && val) {
/* PCIe r4.0, sec 2.3.2, says:
* If CRS Software Visibility is enabled:
* For a Configuration Read Request that includes both
* bytes of the Vendor ID field of a device Function's
* Configuration Space Header, the Root Complex must
* complete the Request to the host by returning a
* read-data value of 0001h for the Vendor ID field and
* all '1's for any additional bytes included in the
* request.
*
* So CRS in this case is not an error status.
*/
*val = CFG_RD_CRS_VAL;
strcomp_status = NULL;
break;
}
/* PCIe r4.0, sec 2.3.2, says:
* If CRS Software Visibility is not enabled, the Root Complex
* must re-issue the Configuration Request as a new Request.
* If CRS Software Visibility is enabled: For a Configuration
* Write Request or for any other Configuration Read Request,
* the Root Complex must re-issue the Configuration Request as
* a new Request.
* A Root Complex implementation may choose to limit the number
* of Configuration Request/CRS Completion Status loops before
* determining that something is wrong with the target of the
* Request and taking appropriate action, e.g., complete the
* Request to the host as a failed transaction.
*
* To simplify the implementation, do not re-issue the Configuration
* Request; complete the Request as a failed transaction.
*/
strcomp_status = "CRS";
break;
case PIO_COMPLETION_STATUS_CA:
@ -501,6 +674,9 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
break;
}
if (!strcomp_status)
return 0;
if (reg & PIO_NON_POSTED_REQ)
str_posted = "Non-posted";
else
@ -508,6 +684,8 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
dev_err(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS));
return -EFAULT;
}
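
For context on why CFG_RD_CRS_VAL is 0xffff0001: with CRS Software Visibility enabled, a config read of the Vendor ID must complete with 0001h in the Vendor ID field and all 1's in the remaining bytes, and the PCI core's probe path recognizes that pattern and retries. A simplified, hedged sketch of such a caller-side loop (the real logic lives in pci_bus_read_dev_vendor_id(); the timeout budget here is an assumption):

#include <linux/delay.h>
#include <linux/pci.h>

static bool demo_read_vendor_id(struct pci_bus *bus, int devfn, u32 *l)
{
	int timeout_ms = 60 * 1000;	/* assumed retry budget */

	if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, l))
		return false;

	/* CRS pattern: Vendor ID reads back as 0001h */
	while ((*l & 0xffff) == 0x0001) {
		if (timeout_ms <= 0)
			return false;
		msleep(20);
		timeout_ms -= 20;
		pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, l);
	}
	return true;
}
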
static int advk_pcie_wait_pio(struct advk_pcie *pcie)
@ -545,6 +723,7 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
case PCI_EXP_RTCTL: {
u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
*value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE;
*value |= PCI_EXP_RTCAP_CRSVIS << 16;
return PCI_BRIDGE_EMUL_HANDLED;
}
@ -626,6 +805,7 @@ static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
{
struct pci_bridge_emul *bridge = &pcie->bridge;
int ret;
bridge->conf.vendor =
cpu_to_le16(advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff);
@ -649,7 +829,15 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
bridge->data = pcie;
bridge->ops = &advk_pci_bridge_emul_ops;
return pci_bridge_emul_init(bridge, 0);
/* PCIe config space can be initialized after pci_bridge_emul_init() */
ret = pci_bridge_emul_init(bridge, 0);
if (ret < 0)
return ret;
/* Indicate support for Completion Retry Status */
bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
return 0;
}
static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
@ -701,6 +889,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
int where, int size, u32 *val)
{
struct advk_pcie *pcie = bus->sysdata;
bool allow_crs;
u32 reg;
int ret;
@ -713,7 +902,24 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
return pci_bridge_emul_conf_read(&pcie->bridge, where,
size, val);
/*
* Completion Retry Status can be returned only when reading all
* 4 bytes of the PCI_VENDOR_ID and PCI_DEVICE_ID registers at once
* and the CRSSVE flag on the Root Bridge is enabled.
*/
allow_crs = (where == PCI_VENDOR_ID) && (size == 4) &&
(le16_to_cpu(pcie->bridge.pcie_conf.rootctl) &
PCI_EXP_RTCTL_CRSSVE);
if (advk_pcie_pio_is_running(pcie)) {
/*
* If possible, return Completion Retry Status so the caller
* retries the request instead of failing.
*/
if (allow_crs) {
*val = CFG_RD_CRS_VAL;
return PCIBIOS_SUCCESSFUL;
}
*val = 0xffffffff;
return PCIBIOS_SET_FAILED;
}
@ -741,14 +947,25 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
ret = advk_pcie_wait_pio(pcie);
if (ret < 0) {
/*
* If possible, return Completion Retry Status so the caller
* retries the request instead of failing.
*/
if (allow_crs) {
*val = CFG_RD_CRS_VAL;
return PCIBIOS_SUCCESSFUL;
}
*val = 0xffffffff;
return PCIBIOS_SET_FAILED;
}
advk_pcie_check_pio_status(pcie);
/* Check PIO status and get the read result */
ret = advk_pcie_check_pio_status(pcie, allow_crs, val);
if (ret < 0) {
*val = 0xffffffff;
return PCIBIOS_SET_FAILED;
}
/* Get the read result */
*val = advk_readl(pcie, PIO_RD_DATA);
if (size == 1)
*val = (*val >> (8 * (where & 3))) & 0xff;
else if (size == 2)
@ -812,7 +1029,9 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
if (ret < 0)
return PCIBIOS_SET_FAILED;
advk_pcie_check_pio_status(pcie);
ret = advk_pcie_check_pio_status(pcie, false, NULL);
if (ret < 0)
return PCIBIOS_SET_FAILED;
return PCIBIOS_SUCCESSFUL;
}
@ -886,22 +1105,28 @@ static void advk_pcie_irq_mask(struct irq_data *d)
{
struct advk_pcie *pcie = d->domain->host_data;
irq_hw_number_t hwirq = irqd_to_hwirq(d);
unsigned long flags;
u32 mask;
raw_spin_lock_irqsave(&pcie->irq_lock, flags);
mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
mask |= PCIE_ISR1_INTX_ASSERT(hwirq);
advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
}
static void advk_pcie_irq_unmask(struct irq_data *d)
{
struct advk_pcie *pcie = d->domain->host_data;
irq_hw_number_t hwirq = irqd_to_hwirq(d);
unsigned long flags;
u32 mask;
raw_spin_lock_irqsave(&pcie->irq_lock, flags);
mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
mask &= ~PCIE_ISR1_INTX_ASSERT(hwirq);
advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
}
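
The irq_lock added above closes a lost-update race: mask and unmask both read-modify-write the same ISR1 mask register, so two CPUs interleaving their readl/writel pairs could silently drop one update. A minimal sketch of the protected pattern (register offset and helper are placeholders, not Aardvark's):

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/spinlock.h>

#define DEMO_MASK_REG	0x48	/* placeholder RMW mask register */

static DEFINE_RAW_SPINLOCK(demo_lock);

static void demo_mask_bit(void __iomem *base, unsigned int bit)
{
	unsigned long flags;
	u32 mask;

	/* the lock makes readl+writel one atomic update against
	 * a concurrent unmask on another CPU */
	raw_spin_lock_irqsave(&demo_lock, flags);
	mask = readl(base + DEMO_MASK_REG);
	mask |= BIT(bit);
	writel(mask, base + DEMO_MASK_REG);
	raw_spin_unlock_irqrestore(&demo_lock, flags);
}
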
static int advk_pcie_irq_map(struct irq_domain *h,
@ -985,6 +1210,8 @@ static int advk_pcie_init_irq_domain(struct advk_pcie *pcie)
struct irq_chip *irq_chip;
int ret = 0;
raw_spin_lock_init(&pcie->irq_lock);
pcie_intc_node = of_get_next_child(node, NULL);
if (!pcie_intc_node) {
dev_err(dev, "No PCIe Intc node found\n");
@ -1049,7 +1276,7 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie)
{
u32 isr0_val, isr0_mask, isr0_status;
u32 isr1_val, isr1_mask, isr1_status;
int i, virq;
int i;
isr0_val = advk_readl(pcie, PCIE_ISR0_REG);
isr0_mask = advk_readl(pcie, PCIE_ISR0_MASK_REG);
@ -1077,8 +1304,7 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie)
advk_writel(pcie, PCIE_ISR1_INTX_ASSERT(i),
PCIE_ISR1_REG);
virq = irq_find_mapping(pcie->irq_domain, i);
generic_handle_irq(virq);
generic_handle_domain_irq(pcie->irq_domain, i);
}
}
@ -1162,6 +1388,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct advk_pcie *pcie;
struct pci_host_bridge *bridge;
struct resource_entry *entry;
int ret, irq;
bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct advk_pcie));
@ -1172,6 +1399,80 @@ static int advk_pcie_probe(struct platform_device *pdev)
pcie->pdev = pdev;
platform_set_drvdata(pdev, pcie);
resource_list_for_each_entry(entry, &bridge->windows) {
resource_size_t start = entry->res->start;
resource_size_t size = resource_size(entry->res);
unsigned long type = resource_type(entry->res);
u64 win_size;
/*
* Aardvark hardware can also map PCIe windows for config
* type 0 and type 1 accesses, but the driver issues
* configuration transfers only via PIO, which does not
* use PCIe window configuration.
*/
if (type != IORESOURCE_MEM && type != IORESOURCE_MEM_64 &&
type != IORESOURCE_IO)
continue;
/*
* Skip transparent memory resources. Default outbound access
* configuration is set to transparent memory access so it
* does not need window configuration.
*/
if ((type == IORESOURCE_MEM || type == IORESOURCE_MEM_64) &&
entry->offset == 0)
continue;
/*
* The n-th PCIe window is configured by the tuple (match, remap, mask):
* an access to address A uses this window when (A & mask) == match.
* So every PCIe window size must be a power of two and every start
* address must be aligned to the window size. The minimal size is
* 64 KiB because the lower 16 bits of the mask must be zero. The
* remapped address may only have bits set that are covered by the mask.
*/
while (pcie->wins_count < OB_WIN_COUNT && size > 0) {
/* Calculate the largest aligned window size */
win_size = (1ULL << (fls64(size)-1)) |
(start ? (1ULL << __ffs64(start)) : 0);
win_size = 1ULL << __ffs64(win_size);
if (win_size < 0x10000)
break;
dev_dbg(dev,
"Configuring PCIe window %d: [0x%llx-0x%llx] as %lu\n",
pcie->wins_count, (unsigned long long)start,
(unsigned long long)start + win_size, type);
if (type == IORESOURCE_IO) {
pcie->wins[pcie->wins_count].actions = OB_WIN_TYPE_IO;
pcie->wins[pcie->wins_count].match = pci_pio_to_address(start);
} else {
pcie->wins[pcie->wins_count].actions = OB_WIN_TYPE_MEM;
pcie->wins[pcie->wins_count].match = start;
}
pcie->wins[pcie->wins_count].remap = start - entry->offset;
pcie->wins[pcie->wins_count].mask = ~(win_size - 1);
if (pcie->wins[pcie->wins_count].remap & (win_size - 1))
break;
start += win_size;
size -= win_size;
pcie->wins_count++;
}
if (size > 0) {
dev_err(&pcie->pdev->dev,
"Invalid PCIe region [0x%llx-0x%llx]\n",
(unsigned long long)entry->res->start,
(unsigned long long)entry->res->end + 1);
return -EINVAL;
}
}
pcie->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(pcie->base))
return PTR_ERR(pcie->base);
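
The win_size computation in the loop above greedily carves each bridge window into power-of-two, naturally aligned chunks: fls64(size) - 1 caps a chunk at the largest power of two that fits, while __ffs64(start) caps it at the current start's alignment. The same arithmetic in a standalone, userspace-style C sketch (using compiler builtins in place of the kernel's fls64/__ffs64):

#include <stdint.h>
#include <stdio.h>

/* Split [start, start + size) into power-of-two windows, each
 * aligned to its own size -- the driver's greedy rule. */
static void demo_split(uint64_t start, uint64_t size)
{
	while (size > 0) {
		/* largest power of two that still fits in size */
		uint64_t by_size = 1ULL << (63 - __builtin_clzll(size));
		/* largest power-of-two alignment of start */
		uint64_t by_align = start ? (start & -start) : by_size;
		uint64_t win = by_size < by_align ? by_size : by_align;

		printf("window 0x%llx + 0x%llx\n",
		       (unsigned long long)start, (unsigned long long)win);
		start += win;
		size -= win;
	}
}

/* demo_split(0xe8000000, 0x3000000) yields two windows:
 * 0xe8000000 + 0x2000000 and 0xea000000 + 0x1000000 */
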
@ -1252,6 +1553,7 @@ static int advk_pcie_remove(struct platform_device *pdev)
{
struct advk_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
int i;
pci_lock_rescan_remove();
pci_stop_root_bus(bridge->bus);
@ -1261,6 +1563,10 @@ static int advk_pcie_remove(struct platform_device *pdev)
advk_pcie_remove_msi_irq_domain(pcie);
advk_pcie_remove_irq_domain(pcie);
/* Disable outbound address windows mapping */
for (i = 0; i < OB_WIN_COUNT; i++)
advk_pcie_disable_ob_win(pcie, i);
return 0;
}


@ -314,7 +314,7 @@ static void faraday_pci_irq_handler(struct irq_desc *desc)
for (i = 0; i < 4; i++) {
if ((irq_stat & BIT(i)) == 0)
continue;
generic_handle_irq(irq_find_mapping(p->irqdomain, i));
generic_handle_domain_irq(p->irqdomain, i);
}
chained_irq_exit(irqchip, desc);


@ -40,6 +40,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pci-ecam.h>
#include <linux/delay.h>
#include <linux/semaphore.h>
#include <linux/irqdomain.h>
@ -64,6 +65,7 @@ enum pci_protocol_version_t {
PCI_PROTOCOL_VERSION_1_1 = PCI_MAKE_VERSION(1, 1), /* Win10 */
PCI_PROTOCOL_VERSION_1_2 = PCI_MAKE_VERSION(1, 2), /* RS1 */
PCI_PROTOCOL_VERSION_1_3 = PCI_MAKE_VERSION(1, 3), /* Vibranium */
PCI_PROTOCOL_VERSION_1_4 = PCI_MAKE_VERSION(1, 4), /* WS2022 */
};
#define CPU_AFFINITY_ALL -1ULL
@ -73,6 +75,7 @@ enum pci_protocol_version_t {
* first.
*/
static enum pci_protocol_version_t pci_protocol_versions[] = {
PCI_PROTOCOL_VERSION_1_4,
PCI_PROTOCOL_VERSION_1_3,
PCI_PROTOCOL_VERSION_1_2,
PCI_PROTOCOL_VERSION_1_1,
@ -122,6 +125,8 @@ enum pci_message_type {
PCI_CREATE_INTERRUPT_MESSAGE2 = PCI_MESSAGE_BASE + 0x17,
PCI_DELETE_INTERRUPT_MESSAGE2 = PCI_MESSAGE_BASE + 0x18, /* unused */
PCI_BUS_RELATIONS2 = PCI_MESSAGE_BASE + 0x19,
PCI_RESOURCES_ASSIGNED3 = PCI_MESSAGE_BASE + 0x1A,
PCI_CREATE_INTERRUPT_MESSAGE3 = PCI_MESSAGE_BASE + 0x1B,
PCI_MESSAGE_MAXIMUM
};
@ -235,6 +240,21 @@ struct hv_msi_desc2 {
u16 processor_array[32];
} __packed;
/*
* struct hv_msi_desc3 - 1.3 version of hv_msi_desc
* Everything is the same as in 'hv_msi_desc2' except that the
* 'vector' field is wider, to support larger vector values such as
* LPI vectors on ARM.
*/
struct hv_msi_desc3 {
u32 vector;
u8 delivery_mode;
u8 reserved;
u16 vector_count;
u16 processor_count;
u16 processor_array[32];
} __packed;
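
The only layout change from hv_msi_desc2 is the width of 'vector': a u8 tops out at 255, while GIC LPIs on ARM64 start at INTID 8192, so any LPI number would be truncated by the v2 descriptor. A tiny illustration (the demo structs mirror just the field in question; 8192 is the architectural LPI base):

#include <linux/types.h>

struct demo_desc2 { u8 vector; };	/* v2: 8-bit vector field */
struct demo_desc3 { u32 vector; };	/* v3: 32-bit vector field */

static void demo_vector_width(u32 intid)
{
	struct demo_desc2 v2 = { .vector = (u8)intid };
	struct demo_desc3 v3 = { .vector = intid };

	/* for intid == 8192: v2.vector == 0 (truncated),
	 * v3.vector == 8192 (preserved) */
	(void)v2;
	(void)v3;
}
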
/**
* struct tran_int_desc
* @reserved: unused, padding
@ -383,6 +403,12 @@ struct pci_create_interrupt2 {
struct hv_msi_desc2 int_desc;
} __packed;
struct pci_create_interrupt3 {
struct pci_message message_type;
union win_slot_encoding wslot;
struct hv_msi_desc3 int_desc;
} __packed;
struct pci_delete_interrupt {
struct pci_message message_type;
union win_slot_encoding wslot;
@ -448,7 +474,13 @@ enum hv_pcibus_state {
};
struct hv_pcibus_device {
#ifdef CONFIG_X86
struct pci_sysdata sysdata;
#elif defined(CONFIG_ARM64)
struct pci_config_window sysdata;
#endif
struct pci_host_bridge *bridge;
struct fwnode_handle *fwnode;
/* Protocol version negotiated with the host */
enum pci_protocol_version_t protocol_version;
enum hv_pcibus_state state;
@ -464,8 +496,6 @@ struct hv_pcibus_device {
spinlock_t device_list_lock; /* Protect lists below */
void __iomem *cfg_addr;
struct list_head resources_for_children;
struct list_head children;
struct list_head dr_list;
@ -1328,6 +1358,15 @@ static u32 hv_compose_msi_req_v1(
return sizeof(*int_pkt);
}
/*
* Create MSI w/ dummy vCPU set targeting just one vCPU, overwritten
* by subsequent retarget in hv_irq_unmask().
*/
static int hv_compose_msi_req_get_cpu(struct cpumask *affinity)
{
return cpumask_first_and(affinity, cpu_online_mask);
}
static u32 hv_compose_msi_req_v2(
struct pci_create_interrupt2 *int_pkt, struct cpumask *affinity,
u32 slot, u8 vector)
@ -1339,12 +1378,27 @@ static u32 hv_compose_msi_req_v2(
int_pkt->int_desc.vector = vector;
int_pkt->int_desc.vector_count = 1;
int_pkt->int_desc.delivery_mode = APIC_DELIVERY_MODE_FIXED;
cpu = hv_compose_msi_req_get_cpu(affinity);
int_pkt->int_desc.processor_array[0] =
hv_cpu_number_to_vp_number(cpu);
int_pkt->int_desc.processor_count = 1;
/*
* Create MSI w/ dummy vCPU set targeting just one vCPU, overwritten
* by subsequent retarget in hv_irq_unmask().
*/
cpu = cpumask_first_and(affinity, cpu_online_mask);
return sizeof(*int_pkt);
}
static u32 hv_compose_msi_req_v3(
struct pci_create_interrupt3 *int_pkt, struct cpumask *affinity,
u32 slot, u32 vector)
{
int cpu;
int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE3;
int_pkt->wslot.slot = slot;
int_pkt->int_desc.vector = vector;
int_pkt->int_desc.reserved = 0;
int_pkt->int_desc.vector_count = 1;
int_pkt->int_desc.delivery_mode = APIC_DELIVERY_MODE_FIXED;
cpu = hv_compose_msi_req_get_cpu(affinity);
int_pkt->int_desc.processor_array[0] =
hv_cpu_number_to_vp_number(cpu);
int_pkt->int_desc.processor_count = 1;
@ -1379,6 +1433,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
union {
struct pci_create_interrupt v1;
struct pci_create_interrupt2 v2;
struct pci_create_interrupt3 v3;
} int_pkts;
} __packed ctxt;
@ -1426,6 +1481,13 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
cfg->vector);
break;
case PCI_PROTOCOL_VERSION_1_4:
size = hv_compose_msi_req_v3(&ctxt.int_pkts.v3,
dest,
hpdev->desc.win_slot.slot,
cfg->vector);
break;
default:
/* As we only negotiate protocol versions known to this driver,
* this path should never hit. However, this is not a hot
@ -1566,7 +1628,7 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus)
hbus->msi_info.handler = handle_edge_irq;
hbus->msi_info.handler_name = "edge";
hbus->msi_info.data = hbus;
hbus->irq_domain = pci_msi_create_irq_domain(hbus->sysdata.fwnode,
hbus->irq_domain = pci_msi_create_irq_domain(hbus->fwnode,
&hbus->msi_info,
x86_vector_domain);
if (!hbus->irq_domain) {
@ -1575,6 +1637,8 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus)
return -ENODEV;
}
dev_set_msi_domain(&hbus->bridge->dev, hbus->irq_domain);
return 0;
}
@ -1797,7 +1861,7 @@ static void hv_pci_assign_slots(struct hv_pcibus_device *hbus)
slot_nr = PCI_SLOT(wslot_to_devfn(hpdev->desc.win_slot.slot));
snprintf(name, SLOT_NAME_SIZE, "%u", hpdev->desc.ser);
hpdev->pci_slot = pci_create_slot(hbus->pci_bus, slot_nr,
hpdev->pci_slot = pci_create_slot(hbus->bridge->bus, slot_nr,
name, NULL);
if (IS_ERR(hpdev->pci_slot)) {
pr_warn("pci_create slot %s failed\n", name);
@ -1827,7 +1891,7 @@ static void hv_pci_remove_slots(struct hv_pcibus_device *hbus)
static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
{
struct pci_dev *dev;
struct pci_bus *bus = hbus->pci_bus;
struct pci_bus *bus = hbus->bridge->bus;
struct hv_pci_dev *hv_dev;
list_for_each_entry(dev, &bus->devices, bus_list) {
@ -1850,21 +1914,22 @@ static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus)
*/
static int create_root_hv_pci_bus(struct hv_pcibus_device *hbus)
{
/* Register the device */
hbus->pci_bus = pci_create_root_bus(&hbus->hdev->device,
0, /* bus number is always zero */
&hv_pcifront_ops,
&hbus->sysdata,
&hbus->resources_for_children);
if (!hbus->pci_bus)
return -ENODEV;
int error;
struct pci_host_bridge *bridge = hbus->bridge;
bridge->dev.parent = &hbus->hdev->device;
bridge->sysdata = &hbus->sysdata;
bridge->ops = &hv_pcifront_ops;
error = pci_scan_root_bus_bridge(bridge);
if (error)
return error;
pci_lock_rescan_remove();
pci_scan_child_bus(hbus->pci_bus);
hv_pci_assign_numa_node(hbus);
pci_bus_assign_resources(hbus->pci_bus);
pci_bus_assign_resources(bridge->bus);
hv_pci_assign_slots(hbus);
pci_bus_add_devices(hbus->pci_bus);
pci_bus_add_devices(bridge->bus);
pci_unlock_rescan_remove();
hbus->state = hv_pcibus_installed;
return 0;
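
create_root_hv_pci_bus() now follows the generic host-bridge flow most native controller drivers use: allocate a struct pci_host_bridge, fill in parent/sysdata/ops (with resources already on bridge->windows), and let pci_scan_root_bus_bridge() create and scan the root bus, replacing the older pci_create_root_bus() + pci_scan_child_bus() pairing. A condensed sketch of that flow (error handling trimmed; not Hyper-V specific):

#include <linux/pci.h>

static int demo_host_probe(struct device *dev, void *sysdata,
			   struct pci_ops *ops)
{
	struct pci_host_bridge *bridge;
	int ret;

	bridge = devm_pci_alloc_host_bridge(dev, 0);
	if (!bridge)
		return -ENOMEM;

	/* bridge->windows is populated before scanning */
	bridge->sysdata = sysdata;
	bridge->ops = ops;

	ret = pci_scan_root_bus_bridge(bridge);
	if (ret)
		return ret;

	pci_bus_add_devices(bridge->bus);
	return 0;
}
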
@ -2127,7 +2192,7 @@ static void pci_devices_present_work(struct work_struct *work)
* because there may have been changes.
*/
pci_lock_rescan_remove();
pci_scan_child_bus(hbus->pci_bus);
pci_scan_child_bus(hbus->bridge->bus);
hv_pci_assign_numa_node(hbus);
hv_pci_assign_slots(hbus);
pci_unlock_rescan_remove();
@ -2295,11 +2360,11 @@ static void hv_eject_device_work(struct work_struct *work)
/*
* Ejection can come before or after the PCI bus has been set up, so
* attempt to find it and tear down the bus state, if it exists. This
* must be done without constructs like pci_domain_nr(hbus->pci_bus)
* because hbus->pci_bus may not exist yet.
* must be done without constructs like pci_domain_nr(hbus->bridge->bus)
* because hbus->bridge->bus may not exist yet.
*/
wslot = wslot_to_devfn(hpdev->desc.win_slot.slot);
pdev = pci_get_domain_bus_and_slot(hbus->sysdata.domain, 0, wslot);
pdev = pci_get_domain_bus_and_slot(hbus->bridge->domain_nr, 0, wslot);
if (pdev) {
pci_lock_rescan_remove();
pci_stop_and_remove_bus_device(pdev);
@ -2662,8 +2727,7 @@ static int hv_pci_allocate_bridge_windows(struct hv_pcibus_device *hbus)
/* Modify this resource to become a bridge window. */
hbus->low_mmio_res->flags |= IORESOURCE_WINDOW;
hbus->low_mmio_res->flags &= ~IORESOURCE_BUSY;
pci_add_resource(&hbus->resources_for_children,
hbus->low_mmio_res);
pci_add_resource(&hbus->bridge->windows, hbus->low_mmio_res);
}
if (hbus->high_mmio_space) {
@ -2682,8 +2746,7 @@ static int hv_pci_allocate_bridge_windows(struct hv_pcibus_device *hbus)
/* Modify this resource to become a bridge window. */
hbus->high_mmio_res->flags |= IORESOURCE_WINDOW;
hbus->high_mmio_res->flags &= ~IORESOURCE_BUSY;
pci_add_resource(&hbus->resources_for_children,
hbus->high_mmio_res);
pci_add_resource(&hbus->bridge->windows, hbus->high_mmio_res);
}
return 0;
@ -3002,6 +3065,7 @@ static void hv_put_dom_num(u16 dom)
static int hv_pci_probe(struct hv_device *hdev,
const struct hv_vmbus_device_id *dev_id)
{
struct pci_host_bridge *bridge;
struct hv_pcibus_device *hbus;
u16 dom_req, dom;
char *name;
@ -3014,6 +3078,10 @@ static int hv_pci_probe(struct hv_device *hdev,
*/
BUILD_BUG_ON(sizeof(*hbus) > HV_HYP_PAGE_SIZE);
bridge = devm_pci_alloc_host_bridge(&hdev->device, 0);
if (!bridge)
return -ENOMEM;
/*
* With the recent 59bb47985c1d ("mm, sl[aou]b: guarantee natural
* alignment for kmalloc(power-of-two)"), kzalloc() is able to allocate
@ -3035,6 +3103,8 @@ static int hv_pci_probe(struct hv_device *hdev,
hbus = kzalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL);
if (!hbus)
return -ENOMEM;
hbus->bridge = bridge;
hbus->state = hv_pcibus_init;
hbus->wslot_res_allocated = -1;
@ -3066,17 +3136,19 @@ static int hv_pci_probe(struct hv_device *hdev,
"PCI dom# 0x%hx has collision, using 0x%hx",
dom_req, dom);
hbus->bridge->domain_nr = dom;
#ifdef CONFIG_X86
hbus->sysdata.domain = dom;
#endif
hbus->hdev = hdev;
INIT_LIST_HEAD(&hbus->children);
INIT_LIST_HEAD(&hbus->dr_list);
INIT_LIST_HEAD(&hbus->resources_for_children);
spin_lock_init(&hbus->config_lock);
spin_lock_init(&hbus->device_list_lock);
spin_lock_init(&hbus->retarget_msi_interrupt_lock);
hbus->wq = alloc_ordered_workqueue("hv_pci_%x", 0,
hbus->sysdata.domain);
hbus->bridge->domain_nr);
if (!hbus->wq) {
ret = -ENOMEM;
goto free_dom;
@ -3113,9 +3185,9 @@ static int hv_pci_probe(struct hv_device *hdev,
goto unmap;
}
hbus->sysdata.fwnode = irq_domain_alloc_named_fwnode(name);
hbus->fwnode = irq_domain_alloc_named_fwnode(name);
kfree(name);
if (!hbus->sysdata.fwnode) {
if (!hbus->fwnode) {
ret = -ENOMEM;
goto unmap;
}
@ -3193,7 +3265,7 @@ exit_d0:
free_irq_domain:
irq_domain_remove(hbus->irq_domain);
free_fwnode:
irq_domain_free_fwnode(hbus->sysdata.fwnode);
irq_domain_free_fwnode(hbus->fwnode);
unmap:
iounmap(hbus->cfg_addr);
free_config:
@ -3203,7 +3275,7 @@ close:
destroy_wq:
destroy_workqueue(hbus->wq);
free_dom:
hv_put_dom_num(hbus->sysdata.domain);
hv_put_dom_num(hbus->bridge->domain_nr);
free_bus:
kfree(hbus);
return ret;
@ -3295,9 +3367,9 @@ static int hv_pci_remove(struct hv_device *hdev)
/* Remove the bus from PCI's point of view. */
pci_lock_rescan_remove();
pci_stop_root_bus(hbus->pci_bus);
pci_stop_root_bus(hbus->bridge->bus);
hv_pci_remove_slots(hbus);
pci_remove_root_bus(hbus->pci_bus);
pci_remove_root_bus(hbus->bridge->bus);
pci_unlock_rescan_remove();
}
@ -3307,12 +3379,11 @@ static int hv_pci_remove(struct hv_device *hdev)
iounmap(hbus->cfg_addr);
hv_free_config_window(hbus);
pci_free_resource_list(&hbus->resources_for_children);
hv_pci_free_bridge_windows(hbus);
irq_domain_remove(hbus->irq_domain);
irq_domain_free_fwnode(hbus->sysdata.fwnode);
irq_domain_free_fwnode(hbus->fwnode);
hv_put_dom_num(hbus->sysdata.domain);
hv_put_dom_num(hbus->bridge->domain_nr);
kfree(hbus);
return ret;
@ -3390,7 +3461,7 @@ static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg)
*/
static void hv_pci_restore_msi_state(struct hv_pcibus_device *hbus)
{
pci_walk_bus(hbus->pci_bus, hv_pci_restore_msi_msg, NULL);
pci_walk_bus(hbus->bridge->bus, hv_pci_restore_msi_msg, NULL);
}
static int hv_pci_resume(struct hv_device *hdev)


@ -372,11 +372,6 @@ struct tegra_pcie_port {
struct gpio_desc *reset_gpio;
};
struct tegra_pcie_bus {
struct list_head list;
unsigned int nr;
};
static inline void afi_writel(struct tegra_pcie *pcie, u32 value,
unsigned long offset)
{
@ -764,7 +759,7 @@ static int tegra_pcie_map_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
static irqreturn_t tegra_pcie_isr(int irq, void *arg)
{
const char *err_msg[] = {
static const char * const err_msg[] = {
"Unknown",
"AXI slave error",
"AXI decode error",
@ -1553,12 +1548,10 @@ static void tegra_pcie_msi_irq(struct irq_desc *desc)
while (reg) {
unsigned int offset = find_first_bit(&reg, 32);
unsigned int index = i * 32 + offset;
unsigned int irq;
int ret;
irq = irq_find_mapping(msi->domain->parent, index);
if (irq) {
generic_handle_irq(irq);
} else {
ret = generic_handle_domain_irq(msi->domain->parent, index);
if (ret) {
/*
* that's weird: who triggered this?
* just clear it
@ -2193,13 +2186,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
rp->np = port;
rp->base = devm_pci_remap_cfg_resource(dev, &rp->regs);
if (IS_ERR(rp->base))
return PTR_ERR(rp->base);
if (IS_ERR(rp->base)) {
err = PTR_ERR(rp->base);
goto err_node_put;
}
label = devm_kasprintf(dev, GFP_KERNEL, "pex-reset-%u", index);
if (!label) {
dev_err(dev, "failed to create reset GPIO label\n");
return -ENOMEM;
err = -ENOMEM;
goto err_node_put;
}
/*
@ -2217,7 +2212,8 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
} else {
dev_err(dev, "failed to get reset GPIO: %ld\n",
PTR_ERR(rp->reset_gpio));
return PTR_ERR(rp->reset_gpio);
err = PTR_ERR(rp->reset_gpio);
goto err_node_put;
}
}
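
The goto err_node_put conversions above matter because for_each_available_child_of_node() holds a reference on the child it is currently visiting; a bare return from inside the loop leaks that reference, so every error path has to funnel through an of_node_put(). The shape being enforced, as a small sketch (demo_parse_one() is a stand-in for the per-port parsing):

#include <linux/of.h>

static int demo_parse_one(struct device_node *child)
{
	return 0;	/* stub for the sketch */
}

static int demo_parse_children(struct device_node *node)
{
	struct device_node *child;
	int err;

	for_each_available_child_of_node(node, child) {
		err = demo_parse_one(child);
		if (err)
			goto err_node_put;	/* not: return err; */
	}
	return 0;

err_node_put:
	of_node_put(child);	/* drop the ref the iterator still holds */
	return err;
}
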
@ -2548,7 +2544,7 @@ static void *tegra_pcie_ports_seq_start(struct seq_file *s, loff_t *pos)
if (list_empty(&pcie->ports))
return NULL;
seq_printf(s, "Index Status\n");
seq_puts(s, "Index Status\n");
return seq_list_start(&pcie->ports, *pos);
}
@ -2585,16 +2581,16 @@ static int tegra_pcie_ports_seq_show(struct seq_file *s, void *v)
seq_printf(s, "%2u ", port->index);
if (up)
seq_printf(s, "up");
seq_puts(s, "up");
if (active) {
if (up)
seq_printf(s, ", ");
seq_puts(s, ", ");
seq_printf(s, "active");
seq_puts(s, "active");
}
seq_printf(s, "\n");
seq_puts(s, "\n");
return 0;
}


@ -291,8 +291,7 @@ static void xgene_msi_isr(struct irq_desc *desc)
struct irq_chip *chip = irq_desc_get_chip(desc);
struct xgene_msi_group *msi_groups;
struct xgene_msi *xgene_msi;
unsigned int virq;
int msir_index, msir_val, hw_irq;
int msir_index, msir_val, hw_irq, ret;
u32 intr_index, grp_select, msi_grp;
chained_irq_enter(chip, desc);
@ -330,10 +329,8 @@ static void xgene_msi_isr(struct irq_desc *desc)
* CPU0
*/
hw_irq = hwirq_to_canonical_hwirq(hw_irq);
virq = irq_find_mapping(xgene_msi->inner_domain, hw_irq);
WARN_ON(!virq);
if (virq != 0)
generic_handle_irq(virq);
ret = generic_handle_domain_irq(xgene_msi->inner_domain, hw_irq);
WARN_ON_ONCE(ret);
msir_val &= ~(1 << intr_index);
}
grp_select &= ~(1 << msir_index);
@ -451,7 +448,6 @@ static int xgene_msi_probe(struct platform_device *pdev)
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
xgene_msi->msi_regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(xgene_msi->msi_regs)) {
dev_err(&pdev->dev, "no reg space\n");
rc = PTR_ERR(xgene_msi->msi_regs);
goto error;
}


@ -55,7 +55,7 @@ static void altera_msi_isr(struct irq_desc *desc)
struct altera_msi *msi;
unsigned long status;
u32 bit;
u32 virq;
int ret;
chained_irq_enter(chip, desc);
msi = irq_desc_get_handler_data(desc);
@ -65,11 +65,9 @@ static void altera_msi_isr(struct irq_desc *desc)
/* Dummy read from vector to clear the interrupt */
readl_relaxed(msi->vector_base + (bit * sizeof(u32)));
virq = irq_find_mapping(msi->inner_domain, bit);
if (virq)
generic_handle_irq(virq);
else
dev_err(&msi->pdev->dev, "unexpected MSI\n");
ret = generic_handle_domain_irq(msi->inner_domain, bit);
if (ret)
dev_err_ratelimited(&msi->pdev->dev, "unexpected MSI\n");
}
}


@ -646,7 +646,7 @@ static void altera_pcie_isr(struct irq_desc *desc)
struct device *dev;
unsigned long status;
u32 bit;
u32 virq;
int ret;
chained_irq_enter(chip, desc);
pcie = irq_desc_get_handler_data(desc);
@ -658,11 +658,9 @@ static void altera_pcie_isr(struct irq_desc *desc)
/* clear interrupts */
cra_writel(pcie, 1 << bit, P2A_INT_STATUS);
virq = irq_find_mapping(pcie->irq_domain, bit);
if (virq)
generic_handle_irq(virq);
else
dev_err(dev, "unexpected IRQ, INT%d\n", bit);
ret = generic_handle_domain_irq(pcie->irq_domain, bit);
if (ret)
dev_err_ratelimited(dev, "unexpected IRQ, INT%d\n", bit);
}
}


@ -476,7 +476,7 @@ static struct msi_domain_info brcm_msi_domain_info = {
static void brcm_pcie_msi_isr(struct irq_desc *desc)
{
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned long status, virq;
unsigned long status;
struct brcm_msi *msi;
struct device *dev;
u32 bit;
@ -489,10 +489,9 @@ static void brcm_pcie_msi_isr(struct irq_desc *desc)
status >>= msi->legacy_shift;
for_each_set_bit(bit, &status, msi->nr) {
virq = irq_find_mapping(msi->inner_domain, bit);
if (virq)
generic_handle_irq(virq);
else
int ret;
ret = generic_handle_domain_irq(msi->inner_domain, bit);
if (ret)
dev_dbg(dev, "unexpected MSI\n");
}


@ -35,7 +35,6 @@ static int iproc_pcie_bcma_probe(struct bcma_device *bdev)
{
struct device *dev = &bdev->dev;
struct iproc_pcie *pcie;
LIST_HEAD(resources);
struct pci_host_bridge *bridge;
int ret;
@ -60,19 +59,16 @@ static int iproc_pcie_bcma_probe(struct bcma_device *bdev)
pcie->mem.end = bdev->addr_s[0] + SZ_128M - 1;
pcie->mem.name = "PCIe MEM space";
pcie->mem.flags = IORESOURCE_MEM;
pci_add_resource(&resources, &pcie->mem);
pci_add_resource(&bridge->windows, &pcie->mem);
ret = devm_request_pci_bus_resources(dev, &bridge->windows);
if (ret)
return ret;
pcie->map_irq = iproc_pcie_bcma_map_irq;
ret = iproc_pcie_setup(pcie, &resources);
if (ret) {
dev_err(dev, "PCIe controller setup failed\n");
pci_free_resource_list(&resources);
return ret;
}
bcma_set_drvdata(bdev, pcie);
return 0;
return iproc_pcie_setup(pcie, &bridge->windows);
}
static void iproc_pcie_bcma_remove(struct bcma_device *bdev)


@ -326,7 +326,6 @@ static void iproc_msi_handler(struct irq_desc *desc)
struct iproc_msi *msi;
u32 eq, head, tail, nr_events;
unsigned long hwirq;
int virq;
chained_irq_enter(chip, desc);
@ -362,8 +361,7 @@ static void iproc_msi_handler(struct irq_desc *desc)
/* process all outstanding events */
while (nr_events--) {
hwirq = decode_msi_hwirq(msi, eq, head);
virq = irq_find_mapping(msi->inner_domain, hwirq);
generic_handle_irq(virq);
generic_handle_domain_irq(msi->inner_domain, hwirq);
head++;
head %= EQ_LEN;


@ -645,7 +645,6 @@ static void mtk_pcie_msi_handler(struct mtk_pcie_port *port, int set_idx)
{
struct mtk_msi_set *msi_set = &port->msi_sets[set_idx];
unsigned long msi_enable, msi_status;
unsigned int virq;
irq_hw_number_t bit, hwirq;
msi_enable = readl_relaxed(msi_set->base + PCIE_MSI_SET_ENABLE_OFFSET);
@ -659,8 +658,7 @@ static void mtk_pcie_msi_handler(struct mtk_pcie_port *port, int set_idx)
for_each_set_bit(bit, &msi_status, PCIE_MSI_IRQS_PER_SET) {
hwirq = bit + set_idx * PCIE_MSI_IRQS_PER_SET;
virq = irq_find_mapping(port->msi_bottom_domain, hwirq);
generic_handle_irq(virq);
generic_handle_domain_irq(port->msi_bottom_domain, hwirq);
}
} while (true);
}
@ -670,18 +668,15 @@ static void mtk_pcie_irq_handler(struct irq_desc *desc)
struct mtk_pcie_port *port = irq_desc_get_handler_data(desc);
struct irq_chip *irqchip = irq_desc_get_chip(desc);
unsigned long status;
unsigned int virq;
irq_hw_number_t irq_bit = PCIE_INTX_SHIFT;
chained_irq_enter(irqchip, desc);
status = readl_relaxed(port->base + PCIE_INT_STATUS_REG);
for_each_set_bit_from(irq_bit, &status, PCI_NUM_INTX +
PCIE_INTX_SHIFT) {
virq = irq_find_mapping(port->intx_domain,
irq_bit - PCIE_INTX_SHIFT);
generic_handle_irq(virq);
}
PCIE_INTX_SHIFT)
generic_handle_domain_irq(port->intx_domain,
irq_bit - PCIE_INTX_SHIFT);
irq_bit = PCIE_MSI_SHIFT;
for_each_set_bit_from(irq_bit, &status, PCIE_MSI_SET_NUM +


@ -14,6 +14,7 @@
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/msi.h>
#include <linux/module.h>
#include <linux/of_address.h>
@ -23,6 +24,7 @@
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include "../pci.h"
@ -207,6 +209,7 @@ struct mtk_pcie_port {
* struct mtk_pcie - PCIe host information
* @dev: pointer to PCIe device
* @base: IO mapped register base
* @cfg: IO mapped register map for PCIe config
* @free_ck: free-run reference clock
* @mem: non-prefetchable memory resource
* @ports: pointer to PCIe port information
@ -215,6 +218,7 @@ struct mtk_pcie_port {
struct mtk_pcie {
struct device *dev;
void __iomem *base;
struct regmap *cfg;
struct clk *free_ck;
struct list_head ports;
@ -602,7 +606,6 @@ static void mtk_pcie_intr_handler(struct irq_desc *desc)
struct mtk_pcie_port *port = irq_desc_get_handler_data(desc);
struct irq_chip *irqchip = irq_desc_get_chip(desc);
unsigned long status;
u32 virq;
u32 bit = INTX_SHIFT;
chained_irq_enter(irqchip, desc);
@ -612,9 +615,8 @@ static void mtk_pcie_intr_handler(struct irq_desc *desc)
for_each_set_bit_from(bit, &status, PCI_NUM_INTX + INTX_SHIFT) {
/* Clear the INTx */
writel(1 << bit, port->base + PCIE_INT_STATUS);
virq = irq_find_mapping(port->irq_domain,
bit - INTX_SHIFT);
generic_handle_irq(virq);
generic_handle_domain_irq(port->irq_domain,
bit - INTX_SHIFT);
}
}
@ -623,10 +625,8 @@ static void mtk_pcie_intr_handler(struct irq_desc *desc)
unsigned long imsi_status;
while ((imsi_status = readl(port->base + PCIE_IMSI_STATUS))) {
for_each_set_bit(bit, &imsi_status, MTK_MSI_IRQS_NUM) {
virq = irq_find_mapping(port->inner_domain, bit);
generic_handle_irq(virq);
}
for_each_set_bit(bit, &imsi_status, MTK_MSI_IRQS_NUM)
generic_handle_domain_irq(port->inner_domain, bit);
}
/* Clear MSI interrupt status */
writel(MSI_STATUS, port->base + PCIE_INT_STATUS);
@ -650,7 +650,11 @@ static int mtk_pcie_setup_irq(struct mtk_pcie_port *port,
return err;
}
port->irq = platform_get_irq(pdev, port->slot);
if (of_find_property(dev->of_node, "interrupt-names", NULL))
port->irq = platform_get_irq_byname(pdev, "pcie_irq");
else
port->irq = platform_get_irq(pdev, port->slot);
if (port->irq < 0)
return port->irq;
@ -682,6 +686,10 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
val |= PCIE_CSR_LTSSM_EN(port->slot) |
PCIE_CSR_ASPM_L1_EN(port->slot);
writel(val, pcie->base + PCIE_SYS_CFG_V2);
} else if (pcie->cfg) {
val = PCIE_CSR_LTSSM_EN(port->slot) |
PCIE_CSR_ASPM_L1_EN(port->slot);
regmap_update_bits(pcie->cfg, PCIE_SYS_CFG_V2, val, val);
}
/* Assert all reset signals */
@ -985,6 +993,7 @@ static int mtk_pcie_subsys_powerup(struct mtk_pcie *pcie)
struct device *dev = pcie->dev;
struct platform_device *pdev = to_platform_device(dev);
struct resource *regs;
struct device_node *cfg_node;
int err;
/* get shared registers, which are optional */
@ -995,6 +1004,14 @@ static int mtk_pcie_subsys_powerup(struct mtk_pcie *pcie)
return PTR_ERR(pcie->base);
}
cfg_node = of_find_compatible_node(NULL, NULL,
"mediatek,generic-pciecfg");
if (cfg_node) {
pcie->cfg = syscon_node_to_regmap(cfg_node);
if (IS_ERR(pcie->cfg))
return PTR_ERR(pcie->cfg);
}
pcie->free_ck = devm_clk_get(dev, "free_ck");
if (IS_ERR(pcie->free_ck)) {
if (PTR_ERR(pcie->free_ck) == -EPROBE_DEFER)
@ -1027,22 +1044,27 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
struct device *dev = pcie->dev;
struct device_node *node = dev->of_node, *child;
struct mtk_pcie_port *port, *tmp;
int err;
int err, slot;
for_each_available_child_of_node(node, child) {
int slot;
slot = of_get_pci_domain_nr(dev->of_node);
if (slot < 0) {
for_each_available_child_of_node(node, child) {
err = of_pci_get_devfn(child);
if (err < 0) {
dev_err(dev, "failed to get devfn: %d\n", err);
goto error_put_node;
}
err = of_pci_get_devfn(child);
if (err < 0) {
dev_err(dev, "failed to parse devfn: %d\n", err);
goto error_put_node;
slot = PCI_SLOT(err);
err = mtk_pcie_parse_port(pcie, child, slot);
if (err)
goto error_put_node;
}
slot = PCI_SLOT(err);
err = mtk_pcie_parse_port(pcie, child, slot);
} else {
err = mtk_pcie_parse_port(pcie, node, slot);
if (err)
goto error_put_node;
return err;
}
err = mtk_pcie_subsys_powerup(pcie);
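
The new 'mediatek,generic-pciecfg' handling is the usual syscon pattern: a shared configuration block is found by compatible string and accessed through a regmap so multiple drivers can share it safely. A hedged sketch of that lookup-and-update sequence (compatible string from the hunk above; the register offset is a placeholder):

#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/regmap.h>

static int demo_set_cfg_bits(u32 bits)
{
	struct device_node *cfg_node;
	struct regmap *cfg;

	cfg_node = of_find_compatible_node(NULL, NULL,
					   "mediatek,generic-pciecfg");
	if (!cfg_node)
		return -ENODEV;	/* the block is optional */

	cfg = syscon_node_to_regmap(cfg_node);
	of_node_put(cfg_node);
	if (IS_ERR(cfg))
		return PTR_ERR(cfg);

	/* read-modify-write through the shared regmap */
	return regmap_update_bits(cfg, 0x0, bits, bits);
}
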


@ -412,16 +412,14 @@ static void mc_handle_msi(struct irq_desc *desc)
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
unsigned long status;
u32 bit;
u32 virq;
int ret;
status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
if (status & PM_MSI_INT_MSI_MASK) {
status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
for_each_set_bit(bit, &status, msi->num_vectors) {
virq = irq_find_mapping(msi->dev_domain, bit);
if (virq)
generic_handle_irq(virq);
else
ret = generic_handle_domain_irq(msi->dev_domain, bit);
if (ret)
dev_err_ratelimited(dev, "bad MSI IRQ %d\n",
bit);
}
@ -570,17 +568,15 @@ static void mc_handle_intx(struct irq_desc *desc)
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
unsigned long status;
u32 bit;
u32 virq;
int ret;
status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
if (status & PM_MSI_INT_INTX_MASK) {
status &= PM_MSI_INT_INTX_MASK;
status >>= PM_MSI_INT_INTX_SHIFT;
for_each_set_bit(bit, &status, PCI_NUM_INTX) {
virq = irq_find_mapping(port->intx_domain, bit);
if (virq)
generic_handle_irq(virq);
else
ret = generic_handle_domain_irq(port->intx_domain, bit);
if (ret)
dev_err_ratelimited(dev, "bad INTx IRQ %d\n",
bit);
}
@ -745,7 +741,7 @@ static void mc_handle_event(struct irq_desc *desc)
events = get_events(port);
for_each_set_bit(bit, &events, NUM_EVENTS)
generic_handle_irq(irq_find_mapping(port->event_domain, bit));
generic_handle_domain_irq(port->event_domain, bit);
chained_irq_exit(chip, desc);
}


@ -159,7 +159,7 @@ static int rcar_pcie_ep_get_pdata(struct rcar_pcie_endpoint *ep,
return 0;
}
static int rcar_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
static int rcar_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
struct pci_epf_header *hdr)
{
struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc);
@ -195,7 +195,7 @@ static int rcar_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
return 0;
}
static int rcar_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no,
static int rcar_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar)
{
int flags = epf_bar->flags | LAR_ENABLE | LAM_64BIT;
@ -246,7 +246,7 @@ static int rcar_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no,
return 0;
}
static void rcar_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn,
static void rcar_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn,
struct pci_epf_bar *epf_bar)
{
struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc);
@ -259,7 +259,8 @@ static void rcar_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn,
clear_bit(atu_index + 1, ep->ib_window_map);
}
static int rcar_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 interrupts)
static int rcar_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn,
u8 interrupts)
{
struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc);
struct rcar_pcie *pcie = &ep->pcie;
@ -272,7 +273,7 @@ static int rcar_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 interrupts)
return 0;
}
static int rcar_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
static int rcar_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn)
{
struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc);
struct rcar_pcie *pcie = &ep->pcie;
@ -285,7 +286,7 @@ static int rcar_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
return ((flags & MSICAP0_MMESE_MASK) >> MSICAP0_MMESE_OFFSET);
}
static int rcar_pcie_ep_map_addr(struct pci_epc *epc, u8 fn,
static int rcar_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
phys_addr_t addr, u64 pci_addr, size_t size)
{
struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc);
@ -322,7 +323,7 @@ static int rcar_pcie_ep_map_addr(struct pci_epc *epc, u8 fn,
return 0;
}
static void rcar_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn,
static void rcar_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn,
phys_addr_t addr)
{
struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc);
@ -403,7 +404,7 @@ static int rcar_pcie_ep_assert_msi(struct rcar_pcie *pcie,
return 0;
}
static int rcar_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn,
static int rcar_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn, u8 vfn,
enum pci_epc_irq_type type,
u16 interrupt_num)
{
@ -451,7 +452,7 @@ static const struct pci_epc_features rcar_pcie_epc_features = {
};
static const struct pci_epc_features*
rcar_pcie_ep_get_features(struct pci_epc *epc, u8 func_no)
rcar_pcie_ep_get_features(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
{
return &rcar_pcie_epc_features;
}
@ -492,9 +493,9 @@ static int rcar_pcie_ep_probe(struct platform_device *pdev)
pcie->dev = dev;
pm_runtime_enable(dev);
err = pm_runtime_get_sync(dev);
err = pm_runtime_resume_and_get(dev);
if (err < 0) {
dev_err(dev, "pm_runtime_get_sync failed\n");
dev_err(dev, "pm_runtime_resume_and_get failed\n");
goto err_pm_disable;
}
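
pm_runtime_resume_and_get() exists because pm_runtime_get_sync() bumps the usage count even when the resume fails, so every failing caller previously had to remember a pm_runtime_put_noidle(). The helper folds both steps together; roughly (a sketch of the upstream helper's documented behavior):

#include <linux/pm_runtime.h>

static int demo_get_active(struct device *dev)
{
	int ret;

	ret = pm_runtime_get_sync(dev);	/* count is raised even on error */
	if (ret < 0) {
		pm_runtime_put_noidle(dev);	/* undo the count */
		return ret;
	}
	return 0;
}
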


@ -13,12 +13,14 @@
#include <linux/bitops.h>
#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/iopoll.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
@ -41,6 +43,21 @@ struct rcar_msi {
int irq2;
};
#ifdef CONFIG_ARM
/*
* Here we keep a static copy of the remapped PCIe controller address.
* This is only used on aarch32 systems, all of which have a single
* PCIe controller, to provide quick access to the controller in
* the L1 link state fixup function, called from the ARM fault handler.
*/
static void __iomem *pcie_base;
/*
* Static copy of bus clock pointer, so we can check whether the clock
* is enabled or not.
*/
static struct clk *pcie_bus_clk;
#endif
/* Structure representing the PCIe interface */
struct rcar_pcie_host {
struct rcar_pcie pcie;
@ -486,12 +503,10 @@ static irqreturn_t rcar_pcie_msi_irq(int irq, void *data)
while (reg) {
unsigned int index = find_first_bit(&reg, 32);
unsigned int msi_irq;
int ret;
msi_irq = irq_find_mapping(msi->domain->parent, index);
if (msi_irq) {
generic_handle_irq(msi_irq);
} else {
ret = generic_handle_domain_irq(msi->domain->parent, index);
if (ret) {
/* Unknown MSI, just clear it */
dev_dbg(dev, "unexpected MSI\n");
rcar_pci_write_reg(pcie, BIT(index), PCIEMSIFR);
@ -776,6 +791,12 @@ static int rcar_pcie_get_resources(struct rcar_pcie_host *host)
}
host->msi.irq2 = i;
#ifdef CONFIG_ARM
/* Cache static copy for L1 link state fixup hook on aarch32 */
pcie_base = pcie->base;
pcie_bus_clk = host->bus_clk;
#endif
return 0;
err_irq2:
@ -1031,4 +1052,67 @@ static struct platform_driver rcar_pcie_driver = {
},
.probe = rcar_pcie_probe,
};
#ifdef CONFIG_ARM
static DEFINE_SPINLOCK(pmsr_lock);
static int rcar_pcie_aarch32_abort_handler(unsigned long addr,
unsigned int fsr, struct pt_regs *regs)
{
unsigned long flags;
u32 pmsr, val;
int ret = 0;
spin_lock_irqsave(&pmsr_lock, flags);
if (!pcie_base || !__clk_is_enabled(pcie_bus_clk)) {
ret = 1;
goto unlock_exit;
}
pmsr = readl(pcie_base + PMSR);
/*
* Test whether the PCIe controller received a PM_ENTER_L1 DLLP and
* is not yet in the L1 link state. If so, apply the fix, which puts
* the controller into the L1 link state, from which it can return
* to L0s/L0 on its own.
*/
if ((pmsr & PMEL1RX) && ((pmsr & PMSTATE) != PMSTATE_L1)) {
writel(L1IATN, pcie_base + PMCTLR);
ret = readl_poll_timeout_atomic(pcie_base + PMSR, val,
val & L1FAEG, 10, 1000);
WARN(ret, "Timeout waiting for L1 link state, ret=%d\n", ret);
writel(L1FAEG | PMEL1RX, pcie_base + PMSR);
}
unlock_exit:
spin_unlock_irqrestore(&pmsr_lock, flags);
return ret;
}
static const struct of_device_id rcar_pcie_abort_handler_of_match[] __initconst = {
{ .compatible = "renesas,pcie-r8a7779" },
{ .compatible = "renesas,pcie-r8a7790" },
{ .compatible = "renesas,pcie-r8a7791" },
{ .compatible = "renesas,pcie-rcar-gen2" },
{},
};
static int __init rcar_pcie_init(void)
{
if (of_find_matching_node(NULL, rcar_pcie_abort_handler_of_match)) {
#ifdef CONFIG_ARM_LPAE
hook_fault_code(17, rcar_pcie_aarch32_abort_handler, SIGBUS, 0,
"asynchronous external abort");
#else
hook_fault_code(22, rcar_pcie_aarch32_abort_handler, SIGBUS, 0,
"imprecise external abort");
#endif
}
return platform_driver_register(&rcar_pcie_driver);
}
device_initcall(rcar_pcie_init);
#else
builtin_platform_driver(rcar_pcie_driver);
#endif
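
The fixup leans on readl_poll_timeout_atomic() from <linux/iopoll.h>, which re-reads a register until a condition holds or the timeout expires and returns 0 or -ETIMEDOUT; the _atomic variant is required here because the hook runs from the fault handler. A usage sketch (register offset and bit are placeholders):

#include <linux/bits.h>
#include <linux/iopoll.h>

#define DEMO_STATUS	0x5c		/* placeholder register */
#define DEMO_DONE	BIT(31)		/* placeholder completion bit */

static int demo_wait_done(void __iomem *base)
{
	u32 val;

	/* poll every 10 us, give up after 1000 us total */
	return readl_poll_timeout_atomic(base + DEMO_STATUS, val,
					 val & DEMO_DONE, 10, 1000);
}
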


@ -85,6 +85,13 @@
#define LTSMDIS BIT(31)
#define MACCTLR_INIT_VAL (LTSMDIS | MACCTLR_NFTS_MASK)
#define PMSR 0x01105c
#define L1FAEG BIT(31)
#define PMEL1RX BIT(23)
#define PMSTATE GENMASK(18, 16)
#define PMSTATE_L1 (3 << 16)
#define PMCTLR 0x011060
#define L1IATN BIT(31)
#define MACS2R 0x011078
#define MACCGSPSETR 0x011084
#define SPCNGRSN BIT(31)


@ -122,7 +122,7 @@ static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn,
ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r));
}
static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
struct pci_epf_header *hdr)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
@ -159,7 +159,7 @@ static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
return 0;
}
static int rockchip_pcie_ep_set_bar(struct pci_epc *epc, u8 fn,
static int rockchip_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, u8 vfn,
struct pci_epf_bar *epf_bar)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
@ -227,7 +227,7 @@ static int rockchip_pcie_ep_set_bar(struct pci_epc *epc, u8 fn,
return 0;
}
static void rockchip_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn,
static void rockchip_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn,
struct pci_epf_bar *epf_bar)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
@ -256,7 +256,7 @@ static void rockchip_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn,
ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar));
}
static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn,
static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
phys_addr_t addr, u64 pci_addr,
size_t size)
{
@ -284,7 +284,7 @@ static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn,
return 0;
}
static void rockchip_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn,
static void rockchip_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn,
phys_addr_t addr)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
@ -308,7 +308,7 @@ static void rockchip_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn,
clear_bit(r, &ep->ob_region_map);
}
static int rockchip_pcie_ep_set_msi(struct pci_epc *epc, u8 fn,
static int rockchip_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn,
u8 multi_msg_cap)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
@ -329,7 +329,7 @@ static int rockchip_pcie_ep_set_msi(struct pci_epc *epc, u8 fn,
return 0;
}
static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
@ -471,7 +471,7 @@ static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn,
return 0;
}
static int rockchip_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn,
static int rockchip_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn, u8 vfn,
enum pci_epc_irq_type type,
u16 interrupt_num)
{
@ -510,7 +510,7 @@ static const struct pci_epc_features rockchip_pcie_epc_features = {
};
static const struct pci_epc_features*
rockchip_pcie_ep_get_features(struct pci_epc *epc, u8 func_no)
rockchip_pcie_ep_get_features(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
{
return &rockchip_pcie_epc_features;
}
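
Every pci_epc_* entry point in these endpoint hunks gains a vfn/vfunc_no argument so the EPC core and SR-IOV-capable controllers can address a virtual function inside a physical function; by convention vfunc_no is 0 when the call targets the PF itself. A hedged sketch of an endpoint function driver using the widened API (function numbers are illustrative):

#include <linux/pci-epc.h>

static int demo_epf_map(struct pci_epc *epc, phys_addr_t phys,
			u64 pci_addr, size_t size)
{
	u8 func_no = 0;		/* physical function 0 */
	u8 vfunc_no = 1;	/* assumption: its first virtual function */
	int ret;

	ret = pci_epc_map_addr(epc, func_no, vfunc_no, phys, pci_addr, size);
	if (ret)
		return ret;

	/* ... use the mapping ... */

	pci_epc_unmap_addr(epc, func_no, vfunc_no, phys);
	return 0;
}
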


@ -517,7 +517,7 @@ static void rockchip_pcie_legacy_int_handler(struct irq_desc *desc)
struct device *dev = rockchip->dev;
u32 reg;
u32 hwirq;
u32 virq;
int ret;
chained_irq_enter(chip, desc);
@ -528,10 +528,8 @@ static void rockchip_pcie_legacy_int_handler(struct irq_desc *desc)
hwirq = ffs(reg) - 1;
reg &= ~BIT(hwirq);
virq = irq_find_mapping(rockchip->irq_domain, hwirq);
if (virq)
generic_handle_irq(virq);
else
ret = generic_handle_domain_irq(rockchip->irq_domain, hwirq);
if (ret)
dev_err(dev, "unexpected IRQ, INT%d\n", hwirq);
}


@ -222,7 +222,7 @@ static void xilinx_cpm_pcie_intx_flow(struct irq_desc *desc)
pcie_read(port, XILINX_CPM_PCIE_REG_IDRN));
for_each_set_bit(i, &val, PCI_NUM_INTX)
generic_handle_irq(irq_find_mapping(port->intx_domain, i));
generic_handle_domain_irq(port->intx_domain, i);
chained_irq_exit(chip, desc);
}
@ -282,7 +282,7 @@ static void xilinx_cpm_pcie_event_flow(struct irq_desc *desc)
val = pcie_read(port, XILINX_CPM_PCIE_REG_IDR);
val &= pcie_read(port, XILINX_CPM_PCIE_REG_IMR);
for_each_set_bit(i, &val, 32)
generic_handle_irq(irq_find_mapping(port->cpm_domain, i));
generic_handle_domain_irq(port->cpm_domain, i);
pcie_write(port, val, XILINX_CPM_PCIE_REG_IDR);
/*


@ -6,6 +6,7 @@
* (C) Copyright 2014 - 2015, Xilinx, Inc.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
@ -169,6 +170,7 @@ struct nwl_pcie {
u8 last_busno;
struct nwl_msi msi;
struct irq_domain *legacy_irq_domain;
struct clk *clk;
raw_spinlock_t leg_mask_lock;
};
@ -318,18 +320,14 @@ static void nwl_pcie_leg_handler(struct irq_desc *desc)
struct nwl_pcie *pcie;
unsigned long status;
u32 bit;
u32 virq;
chained_irq_enter(chip, desc);
pcie = irq_desc_get_handler_data(desc);
while ((status = nwl_bridge_readl(pcie, MSGF_LEG_STATUS) &
MSGF_LEG_SR_MASKALL) != 0) {
for_each_set_bit(bit, &status, PCI_NUM_INTX) {
virq = irq_find_mapping(pcie->legacy_irq_domain, bit);
if (virq)
generic_handle_irq(virq);
}
for_each_set_bit(bit, &status, PCI_NUM_INTX)
generic_handle_domain_irq(pcie->legacy_irq_domain, bit);
}
chained_irq_exit(chip, desc);
@ -340,16 +338,13 @@ static void nwl_pcie_handle_msi_irq(struct nwl_pcie *pcie, u32 status_reg)
struct nwl_msi *msi;
unsigned long status;
u32 bit;
u32 virq;
msi = &pcie->msi;
while ((status = nwl_bridge_readl(pcie, status_reg)) != 0) {
for_each_set_bit(bit, &status, 32) {
nwl_bridge_writel(pcie, 1 << bit, status_reg);
virq = irq_find_mapping(msi->dev_domain, bit);
if (virq)
generic_handle_irq(virq);
generic_handle_domain_irq(msi->dev_domain, bit);
}
}
}
@ -823,6 +818,16 @@ static int nwl_pcie_probe(struct platform_device *pdev)
return err;
}
pcie->clk = devm_clk_get(dev, NULL);
if (IS_ERR(pcie->clk))
return PTR_ERR(pcie->clk);
err = clk_prepare_enable(pcie->clk);
if (err) {
dev_err(dev, "can't enable PCIe ref clock\n");
return err;
}
err = nwl_pcie_bridge_init(pcie);
if (err) {
dev_err(dev, "HW Initialization failed\n");


@ -385,7 +385,7 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
}
if (status & (XILINX_PCIE_INTR_INTX | XILINX_PCIE_INTR_MSI)) {
unsigned int irq;
struct irq_domain *domain;
val = pcie_read(port, XILINX_PCIE_REG_RPIFR1);
@ -399,19 +399,18 @@ static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
if (val & XILINX_PCIE_RPIFR1_MSI_INTR) {
val = pcie_read(port, XILINX_PCIE_REG_RPIFR2) &
XILINX_PCIE_RPIFR2_MSG_DATA;
irq = irq_find_mapping(port->msi_domain->parent, val);
domain = port->msi_domain->parent;
} else {
val = (val & XILINX_PCIE_RPIFR1_INTR_MASK) >>
XILINX_PCIE_RPIFR1_INTR_SHIFT;
irq = irq_find_mapping(port->leg_domain, val);
domain = port->leg_domain;
}
/* Clear interrupt FIFO register 1 */
pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK,
XILINX_PCIE_REG_RPIFR1);
if (irq)
generic_handle_irq(irq);
generic_handle_domain_irq(domain, val);
}
if (status & XILINX_PCIE_INTR_SLV_UNSUPP)


@ -87,6 +87,7 @@ struct epf_ntb {
struct epf_ntb_epc {
u8 func_no;
u8 vfunc_no;
bool linkup;
bool is_msix;
int msix_bar;
@ -143,14 +144,15 @@ static int epf_ntb_link_up(struct epf_ntb *ntb, bool link_up)
struct epf_ntb_epc *ntb_epc;
struct epf_ntb_ctrl *ctrl;
struct pci_epc *epc;
u8 func_no, vfunc_no;
bool is_msix;
u8 func_no;
int ret;
for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) {
ntb_epc = ntb->epc[type];
epc = ntb_epc->epc;
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
is_msix = ntb_epc->is_msix;
ctrl = ntb_epc->reg;
if (link_up)
@ -158,7 +160,7 @@ static int epf_ntb_link_up(struct epf_ntb *ntb, bool link_up)
else
ctrl->link_status &= ~LINK_STATUS_UP;
irq_type = is_msix ? PCI_EPC_IRQ_MSIX : PCI_EPC_IRQ_MSI;
ret = pci_epc_raise_irq(epc, func_no, irq_type, 1);
ret = pci_epc_raise_irq(epc, func_no, vfunc_no, irq_type, 1);
if (ret) {
dev_err(&epc->dev,
"%s intf: Failed to raise Link Up IRQ\n",
@ -238,10 +240,10 @@ static int epf_ntb_configure_mw(struct epf_ntb *ntb,
enum pci_barno peer_barno;
struct epf_ntb_ctrl *ctrl;
phys_addr_t phys_addr;
u8 func_no, vfunc_no;
struct pci_epc *epc;
u64 addr, size;
int ret = 0;
u8 func_no;
ntb_epc = ntb->epc[type];
epc = ntb_epc->epc;
@ -267,8 +269,9 @@ static int epf_ntb_configure_mw(struct epf_ntb *ntb,
}
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
ret = pci_epc_map_addr(epc, func_no, phys_addr, addr, size);
ret = pci_epc_map_addr(epc, func_no, vfunc_no, phys_addr, addr, size);
if (ret)
dev_err(&epc->dev,
"%s intf: Failed to map memory window %d address\n",
@ -296,8 +299,8 @@ static void epf_ntb_teardown_mw(struct epf_ntb *ntb,
enum pci_barno peer_barno;
struct epf_ntb_ctrl *ctrl;
phys_addr_t phys_addr;
u8 func_no, vfunc_no;
struct pci_epc *epc;
u8 func_no;
ntb_epc = ntb->epc[type];
epc = ntb_epc->epc;
@ -311,8 +314,9 @@ static void epf_ntb_teardown_mw(struct epf_ntb *ntb,
if (mw + NTB_MW_OFFSET == BAR_DB_MW1)
phys_addr += ctrl->mw1_offset;
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
pci_epc_unmap_addr(epc, func_no, phys_addr);
pci_epc_unmap_addr(epc, func_no, vfunc_no, phys_addr);
}
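
Every pci_epc_*() helper in this series gains the same extra argument, so here is a minimal sketch of the new calling convention (demo_* is illustrative; a vfunc_no of 0 addresses the physical function itself):

#include <linux/pci-epc.h>

static int demo_use_window(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
                           phys_addr_t phys_addr, u64 pci_addr, size_t size)
{
        int ret;

        /* vfunc_no now follows func_no in every pci_epc_*() call */
        ret = pci_epc_map_addr(epc, func_no, vfunc_no, phys_addr,
                               pci_addr, size);
        if (ret)
                return ret;

        /* ... access host memory through the mapped window ... */

        pci_epc_unmap_addr(epc, func_no, vfunc_no, phys_addr);
        return 0;
}
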
/**
@ -385,8 +389,8 @@ static int epf_ntb_configure_msi(struct epf_ntb *ntb,
struct epf_ntb_ctrl *peer_ctrl;
enum pci_barno peer_barno;
phys_addr_t phys_addr;
u8 func_no, vfunc_no;
struct pci_epc *epc;
u8 func_no;
int ret, i;
ntb_epc = ntb->epc[type];
@ -400,8 +404,9 @@ static int epf_ntb_configure_msi(struct epf_ntb *ntb,
phys_addr = peer_epf_bar->phys_addr;
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
ret = pci_epc_map_msi_irq(epc, func_no, phys_addr, db_count,
ret = pci_epc_map_msi_irq(epc, func_no, vfunc_no, phys_addr, db_count,
db_entry_size, &db_data, &db_offset);
if (ret) {
dev_err(&epc->dev, "%s intf: Failed to map MSI IRQ\n",
@ -491,10 +496,10 @@ static int epf_ntb_configure_msix(struct epf_ntb *ntb,
u32 db_entry_size, msg_data;
enum pci_barno peer_barno;
phys_addr_t phys_addr;
u8 func_no, vfunc_no;
struct pci_epc *epc;
size_t align;
u64 msg_addr;
u8 func_no;
int ret, i;
ntb_epc = ntb->epc[type];
@ -512,12 +517,13 @@ static int epf_ntb_configure_msix(struct epf_ntb *ntb,
align = epc_features->align;
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
db_entry_size = peer_ctrl->db_entry_size;
for (i = 0; i < db_count; i++) {
msg_addr = ALIGN_DOWN(msix_tbl[i].msg_addr, align);
msg_data = msix_tbl[i].msg_data;
ret = pci_epc_map_addr(epc, func_no, phys_addr, msg_addr,
ret = pci_epc_map_addr(epc, func_no, vfunc_no, phys_addr, msg_addr,
db_entry_size);
if (ret) {
dev_err(&epc->dev,
@ -586,8 +592,8 @@ epf_ntb_teardown_db(struct epf_ntb *ntb, enum pci_epc_interface_type type)
struct pci_epf_bar *peer_epf_bar;
enum pci_barno peer_barno;
phys_addr_t phys_addr;
u8 func_no, vfunc_no;
struct pci_epc *epc;
u8 func_no;
ntb_epc = ntb->epc[type];
epc = ntb_epc->epc;
@ -597,8 +603,9 @@ epf_ntb_teardown_db(struct epf_ntb *ntb, enum pci_epc_interface_type type)
peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
phys_addr = peer_epf_bar->phys_addr;
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
pci_epc_unmap_addr(epc, func_no, phys_addr);
pci_epc_unmap_addr(epc, func_no, vfunc_no, phys_addr);
}
/**
@ -728,14 +735,15 @@ static void epf_ntb_peer_spad_bar_clear(struct epf_ntb_epc *ntb_epc)
{
struct pci_epf_bar *epf_bar;
enum pci_barno barno;
u8 func_no, vfunc_no;
struct pci_epc *epc;
u8 func_no;
epc = ntb_epc->epc;
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
barno = ntb_epc->epf_ntb_bar[BAR_PEER_SPAD];
epf_bar = &ntb_epc->epf_bar[barno];
pci_epc_clear_bar(epc, func_no, epf_bar);
pci_epc_clear_bar(epc, func_no, vfunc_no, epf_bar);
}
/**
@ -775,9 +783,9 @@ static int epf_ntb_peer_spad_bar_set(struct epf_ntb *ntb,
struct pci_epf_bar *peer_epf_bar, *epf_bar;
enum pci_barno peer_barno, barno;
u32 peer_spad_offset;
u8 func_no, vfunc_no;
struct pci_epc *epc;
struct device *dev;
u8 func_no;
int ret;
dev = &ntb->epf->dev;
@ -790,6 +798,7 @@ static int epf_ntb_peer_spad_bar_set(struct epf_ntb *ntb,
barno = ntb_epc->epf_ntb_bar[BAR_PEER_SPAD];
epf_bar = &ntb_epc->epf_bar[barno];
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
epc = ntb_epc->epc;
peer_spad_offset = peer_ntb_epc->reg->spad_offset;
@ -798,7 +807,7 @@ static int epf_ntb_peer_spad_bar_set(struct epf_ntb *ntb,
epf_bar->barno = barno;
epf_bar->flags = PCI_BASE_ADDRESS_MEM_TYPE_32;
ret = pci_epc_set_bar(epc, func_no, epf_bar);
ret = pci_epc_set_bar(epc, func_no, vfunc_no, epf_bar);
if (ret) {
dev_err(dev, "%s intf: peer SPAD BAR set failed\n",
pci_epc_interface_string(type));
@ -842,14 +851,15 @@ static void epf_ntb_config_sspad_bar_clear(struct epf_ntb_epc *ntb_epc)
{
struct pci_epf_bar *epf_bar;
enum pci_barno barno;
u8 func_no, vfunc_no;
struct pci_epc *epc;
u8 func_no;
epc = ntb_epc->epc;
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
barno = ntb_epc->epf_ntb_bar[BAR_CONFIG];
epf_bar = &ntb_epc->epf_bar[barno];
pci_epc_clear_bar(epc, func_no, epf_bar);
pci_epc_clear_bar(epc, func_no, vfunc_no, epf_bar);
}
/**
@ -886,10 +896,10 @@ static int epf_ntb_config_sspad_bar_set(struct epf_ntb_epc *ntb_epc)
{
struct pci_epf_bar *epf_bar;
enum pci_barno barno;
u8 func_no, vfunc_no;
struct epf_ntb *ntb;
struct pci_epc *epc;
struct device *dev;
u8 func_no;
int ret;
ntb = ntb_epc->epf_ntb;
@ -897,10 +907,11 @@ static int epf_ntb_config_sspad_bar_set(struct epf_ntb_epc *ntb_epc)
epc = ntb_epc->epc;
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
barno = ntb_epc->epf_ntb_bar[BAR_CONFIG];
epf_bar = &ntb_epc->epf_bar[barno];
ret = pci_epc_set_bar(epc, func_no, epf_bar);
ret = pci_epc_set_bar(epc, func_no, vfunc_no, epf_bar);
if (ret) {
dev_err(dev, "%s inft: Config/Status/SPAD BAR set failed\n",
pci_epc_interface_string(ntb_epc->type));
@ -1214,17 +1225,18 @@ static void epf_ntb_db_mw_bar_clear(struct epf_ntb_epc *ntb_epc)
struct pci_epf_bar *epf_bar;
enum epf_ntb_bar bar;
enum pci_barno barno;
u8 func_no, vfunc_no;
struct pci_epc *epc;
u8 func_no;
epc = ntb_epc->epc;
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
for (bar = BAR_DB_MW1; bar < BAR_MW4; bar++) {
barno = ntb_epc->epf_ntb_bar[bar];
epf_bar = &ntb_epc->epf_bar[barno];
pci_epc_clear_bar(epc, func_no, epf_bar);
pci_epc_clear_bar(epc, func_no, vfunc_no, epf_bar);
}
}
@ -1263,10 +1275,10 @@ static int epf_ntb_configure_interrupt(struct epf_ntb *ntb,
const struct pci_epc_features *epc_features;
bool msix_capable, msi_capable;
struct epf_ntb_epc *ntb_epc;
u8 func_no, vfunc_no;
struct pci_epc *epc;
struct device *dev;
u32 db_count;
u8 func_no;
int ret;
ntb_epc = ntb->epc[type];
@ -1282,6 +1294,7 @@ static int epf_ntb_configure_interrupt(struct epf_ntb *ntb,
}
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
db_count = ntb->db_count;
if (db_count > MAX_DB_COUNT) {
@ -1293,7 +1306,7 @@ static int epf_ntb_configure_interrupt(struct epf_ntb *ntb,
epc = ntb_epc->epc;
if (msi_capable) {
ret = pci_epc_set_msi(epc, func_no, db_count);
ret = pci_epc_set_msi(epc, func_no, vfunc_no, db_count);
if (ret) {
dev_err(dev, "%s intf: MSI configuration failed\n",
pci_epc_interface_string(type));
@ -1302,7 +1315,7 @@ static int epf_ntb_configure_interrupt(struct epf_ntb *ntb,
}
if (msix_capable) {
ret = pci_epc_set_msix(epc, func_no, db_count,
ret = pci_epc_set_msix(epc, func_no, vfunc_no, db_count,
ntb_epc->msix_bar,
ntb_epc->msix_table_offset);
if (ret) {
@ -1423,11 +1436,11 @@ static int epf_ntb_db_mw_bar_init(struct epf_ntb *ntb,
u32 num_mws, db_count;
enum epf_ntb_bar bar;
enum pci_barno barno;
u8 func_no, vfunc_no;
struct pci_epc *epc;
struct device *dev;
size_t align;
int ret, i;
u8 func_no;
u64 size;
ntb_epc = ntb->epc[type];
@ -1437,6 +1450,7 @@ static int epf_ntb_db_mw_bar_init(struct epf_ntb *ntb,
epc_features = ntb_epc->epc_features;
align = epc_features->align;
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
epc = ntb_epc->epc;
num_mws = ntb->num_mws;
db_count = ntb->db_count;
@ -1464,7 +1478,7 @@ static int epf_ntb_db_mw_bar_init(struct epf_ntb *ntb,
barno = ntb_epc->epf_ntb_bar[bar];
epf_bar = &ntb_epc->epf_bar[barno];
ret = pci_epc_set_bar(epc, func_no, epf_bar);
ret = pci_epc_set_bar(epc, func_no, vfunc_no, epf_bar);
if (ret) {
dev_err(dev, "%s intf: DoorBell BAR set failed\n",
pci_epc_interface_string(type));
@ -1536,9 +1550,9 @@ static int epf_ntb_epc_create_interface(struct epf_ntb *ntb,
const struct pci_epc_features *epc_features;
struct pci_epf_bar *epf_bar;
struct epf_ntb_epc *ntb_epc;
u8 func_no, vfunc_no;
struct pci_epf *epf;
struct device *dev;
u8 func_no;
dev = &ntb->epf->dev;
@ -1547,6 +1561,7 @@ static int epf_ntb_epc_create_interface(struct epf_ntb *ntb,
return -ENOMEM;
epf = ntb->epf;
vfunc_no = epf->vfunc_no;
if (type == PRIMARY_INTERFACE) {
func_no = epf->func_no;
epf_bar = epf->bar;
@ -1558,11 +1573,12 @@ static int epf_ntb_epc_create_interface(struct epf_ntb *ntb,
ntb_epc->linkup = false;
ntb_epc->epc = epc;
ntb_epc->func_no = func_no;
ntb_epc->vfunc_no = vfunc_no;
ntb_epc->type = type;
ntb_epc->epf_bar = epf_bar;
ntb_epc->epf_ntb = ntb;
epc_features = pci_epc_get_features(epc, func_no);
epc_features = pci_epc_get_features(epc, func_no, vfunc_no);
if (!epc_features)
return -EINVAL;
ntb_epc->epc_features = epc_features;
@ -1702,10 +1718,10 @@ static int epf_ntb_epc_init_interface(struct epf_ntb *ntb,
enum pci_epc_interface_type type)
{
struct epf_ntb_epc *ntb_epc;
u8 func_no, vfunc_no;
struct pci_epc *epc;
struct pci_epf *epf;
struct device *dev;
u8 func_no;
int ret;
ntb_epc = ntb->epc[type];
@ -1713,6 +1729,7 @@ static int epf_ntb_epc_init_interface(struct epf_ntb *ntb,
dev = &epf->dev;
epc = ntb_epc->epc;
func_no = ntb_epc->func_no;
vfunc_no = ntb_epc->vfunc_no;
ret = epf_ntb_config_sspad_bar_set(ntb->epc[type]);
if (ret) {
@ -1742,11 +1759,13 @@ static int epf_ntb_epc_init_interface(struct epf_ntb *ntb,
goto err_db_mw_bar_init;
}
ret = pci_epc_write_header(epc, func_no, epf->header);
if (ret) {
dev_err(dev, "%s intf: Configuration header write failed\n",
pci_epc_interface_string(type));
goto err_write_header;
if (vfunc_no <= 1) {
ret = pci_epc_write_header(epc, func_no, vfunc_no, epf->header);
if (ret) {
dev_err(dev, "%s intf: Configuration header write failed\n",
pci_epc_interface_string(type));
goto err_write_header;
}
}
INIT_DELAYED_WORK(&ntb->epc[type]->cmd_handler, epf_ntb_cmd_handler);


@ -247,8 +247,8 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
goto err;
}
ret = pci_epc_map_addr(epc, epf->func_no, src_phys_addr, reg->src_addr,
reg->size);
ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, src_phys_addr,
reg->src_addr, reg->size);
if (ret) {
dev_err(dev, "Failed to map source address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
@ -263,8 +263,8 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
goto err_src_map_addr;
}
ret = pci_epc_map_addr(epc, epf->func_no, dst_phys_addr, reg->dst_addr,
reg->size);
ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, dst_phys_addr,
reg->dst_addr, reg->size);
if (ret) {
dev_err(dev, "Failed to map destination address\n");
reg->status = STATUS_DST_ADDR_INVALID;
@ -291,13 +291,13 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma);
err_map_addr:
pci_epc_unmap_addr(epc, epf->func_no, dst_phys_addr);
pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, dst_phys_addr);
err_dst_addr:
pci_epc_mem_free_addr(epc, dst_phys_addr, dst_addr, reg->size);
err_src_map_addr:
pci_epc_unmap_addr(epc, epf->func_no, src_phys_addr);
pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, src_phys_addr);
err_src_addr:
pci_epc_mem_free_addr(epc, src_phys_addr, src_addr, reg->size);
@ -331,8 +331,8 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test)
goto err;
}
ret = pci_epc_map_addr(epc, epf->func_no, phys_addr, reg->src_addr,
reg->size);
ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, phys_addr,
reg->src_addr, reg->size);
if (ret) {
dev_err(dev, "Failed to map address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
@ -386,7 +386,7 @@ err_dma_map:
kfree(buf);
err_map_addr:
pci_epc_unmap_addr(epc, epf->func_no, phys_addr);
pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, phys_addr);
err_addr:
pci_epc_mem_free_addr(epc, phys_addr, src_addr, reg->size);
@ -419,8 +419,8 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
goto err;
}
ret = pci_epc_map_addr(epc, epf->func_no, phys_addr, reg->dst_addr,
reg->size);
ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, phys_addr,
reg->dst_addr, reg->size);
if (ret) {
dev_err(dev, "Failed to map address\n");
reg->status = STATUS_DST_ADDR_INVALID;
@ -479,7 +479,7 @@ err_dma_map:
kfree(buf);
err_map_addr:
pci_epc_unmap_addr(epc, epf->func_no, phys_addr);
pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, phys_addr);
err_addr:
pci_epc_mem_free_addr(epc, phys_addr, dst_addr, reg->size);
@ -501,13 +501,16 @@ static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test, u8 irq_type,
switch (irq_type) {
case IRQ_TYPE_LEGACY:
pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_LEGACY, 0);
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_LEGACY, 0);
break;
case IRQ_TYPE_MSI:
pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSI, irq);
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_MSI, irq);
break;
case IRQ_TYPE_MSIX:
pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSIX, irq);
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_MSIX, irq);
break;
default:
dev_err(dev, "Failed to raise IRQ, unknown type\n");
@ -542,7 +545,8 @@ static void pci_epf_test_cmd_handler(struct work_struct *work)
if (command & COMMAND_RAISE_LEGACY_IRQ) {
reg->status = STATUS_IRQ_RAISED;
pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_LEGACY, 0);
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_LEGACY, 0);
goto reset_handler;
}
@ -580,22 +584,22 @@ static void pci_epf_test_cmd_handler(struct work_struct *work)
}
if (command & COMMAND_RAISE_MSI_IRQ) {
count = pci_epc_get_msi(epc, epf->func_no);
count = pci_epc_get_msi(epc, epf->func_no, epf->vfunc_no);
if (reg->irq_number > count || count <= 0)
goto reset_handler;
reg->status = STATUS_IRQ_RAISED;
pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSI,
reg->irq_number);
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_MSI, reg->irq_number);
goto reset_handler;
}
if (command & COMMAND_RAISE_MSIX_IRQ) {
count = pci_epc_get_msix(epc, epf->func_no);
count = pci_epc_get_msix(epc, epf->func_no, epf->vfunc_no);
if (reg->irq_number > count || count <= 0)
goto reset_handler;
reg->status = STATUS_IRQ_RAISED;
pci_epc_raise_irq(epc, epf->func_no, PCI_EPC_IRQ_MSIX,
reg->irq_number);
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_MSIX, reg->irq_number);
goto reset_handler;
}
@ -618,7 +622,8 @@ static void pci_epf_test_unbind(struct pci_epf *epf)
epf_bar = &epf->bar[bar];
if (epf_test->reg[bar]) {
pci_epc_clear_bar(epc, epf->func_no, epf_bar);
pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no,
epf_bar);
pci_epf_free_space(epf, epf_test->reg[bar], bar,
PRIMARY_INTERFACE);
}
@ -650,7 +655,8 @@ static int pci_epf_test_set_bar(struct pci_epf *epf)
if (!!(epc_features->reserved_bar & (1 << bar)))
continue;
ret = pci_epc_set_bar(epc, epf->func_no, epf_bar);
ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no,
epf_bar);
if (ret) {
pci_epf_free_space(epf, epf_test->reg[bar], bar,
PRIMARY_INTERFACE);
@ -674,16 +680,18 @@ static int pci_epf_test_core_init(struct pci_epf *epf)
bool msi_capable = true;
int ret;
epc_features = pci_epc_get_features(epc, epf->func_no);
epc_features = pci_epc_get_features(epc, epf->func_no, epf->vfunc_no);
if (epc_features) {
msix_capable = epc_features->msix_capable;
msi_capable = epc_features->msi_capable;
}
ret = pci_epc_write_header(epc, epf->func_no, header);
if (ret) {
dev_err(dev, "Configuration header write failed\n");
return ret;
if (epf->vfunc_no <= 1) {
ret = pci_epc_write_header(epc, epf->func_no, epf->vfunc_no, header);
if (ret) {
dev_err(dev, "Configuration header write failed\n");
return ret;
}
}
ret = pci_epf_test_set_bar(epf);
@ -691,7 +699,8 @@ static int pci_epf_test_core_init(struct pci_epf *epf)
return ret;
if (msi_capable) {
ret = pci_epc_set_msi(epc, epf->func_no, epf->msi_interrupts);
ret = pci_epc_set_msi(epc, epf->func_no, epf->vfunc_no,
epf->msi_interrupts);
if (ret) {
dev_err(dev, "MSI configuration failed\n");
return ret;
@ -699,7 +708,8 @@ static int pci_epf_test_core_init(struct pci_epf *epf)
}
if (msix_capable) {
ret = pci_epc_set_msix(epc, epf->func_no, epf->msix_interrupts,
ret = pci_epc_set_msix(epc, epf->func_no, epf->vfunc_no,
epf->msix_interrupts,
epf_test->test_reg_bar,
epf_test->msix_table_offset);
if (ret) {
@ -832,7 +842,7 @@ static int pci_epf_test_bind(struct pci_epf *epf)
if (WARN_ON_ONCE(!epc))
return -EINVAL;
epc_features = pci_epc_get_features(epc, epf->func_no);
epc_features = pci_epc_get_features(epc, epf->func_no, epf->vfunc_no);
if (!epc_features) {
dev_err(&epf->dev, "epc_features not implemented\n");
return -EOPNOTSUPP;


@ -475,6 +475,28 @@ static struct configfs_attribute *pci_epf_attrs[] = {
NULL,
};
static int pci_epf_vepf_link(struct config_item *epf_pf_item,
struct config_item *epf_vf_item)
{
struct pci_epf_group *epf_vf_group = to_pci_epf_group(epf_vf_item);
struct pci_epf_group *epf_pf_group = to_pci_epf_group(epf_pf_item);
struct pci_epf *epf_pf = epf_pf_group->epf;
struct pci_epf *epf_vf = epf_vf_group->epf;
return pci_epf_add_vepf(epf_pf, epf_vf);
}
static void pci_epf_vepf_unlink(struct config_item *epf_pf_item,
struct config_item *epf_vf_item)
{
struct pci_epf_group *epf_vf_group = to_pci_epf_group(epf_vf_item);
struct pci_epf_group *epf_pf_group = to_pci_epf_group(epf_pf_item);
struct pci_epf *epf_pf = epf_pf_group->epf;
struct pci_epf *epf_vf = epf_vf_group->epf;
pci_epf_remove_vepf(epf_pf, epf_vf);
}
static void pci_epf_release(struct config_item *item)
{
struct pci_epf_group *epf_group = to_pci_epf_group(item);
@ -487,6 +509,8 @@ static void pci_epf_release(struct config_item *item)
}
static struct configfs_item_operations pci_epf_ops = {
.allow_link = pci_epf_vepf_link,
.drop_link = pci_epf_vepf_unlink,
.release = pci_epf_release,
};


@ -137,24 +137,29 @@ EXPORT_SYMBOL_GPL(pci_epc_get_next_free_bar);
* @epc: the features supported by *this* EPC device will be returned
* @func_no: the features supported by the EPC device specific to the
* endpoint function with func_no will be returned
* @vfunc_no: the features supported by the EPC device specific to the
* virtual endpoint function with vfunc_no will be returned
*
* Invoke to get the features provided by the EPC which may be
* specific to an endpoint function. Returns pci_epc_features on success
* and NULL for any failures.
*/
const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc,
u8 func_no)
u8 func_no, u8 vfunc_no)
{
const struct pci_epc_features *epc_features;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return NULL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return NULL;
if (!epc->ops->get_features)
return NULL;
mutex_lock(&epc->lock);
epc_features = epc->ops->get_features(epc, func_no);
epc_features = epc->ops->get_features(epc, func_no, vfunc_no);
mutex_unlock(&epc->lock);
return epc_features;
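
A short sketch of how a function driver consumes this, mirroring the pci-epf-test hunks earlier in this series (epf is assumed to be a bound struct pci_epf):

#include <linux/pci-epc.h>
#include <linux/pci-epf.h>

static bool demo_epf_msix_capable(struct pci_epf *epf)
{
        const struct pci_epc_features *features;

        features = pci_epc_get_features(epf->epc, epf->func_no,
                                        epf->vfunc_no);
        if (!features)
                return false;   /* EPC driver advertises no features */

        return features->msix_capable;
}
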
@ -205,13 +210,14 @@ EXPORT_SYMBOL_GPL(pci_epc_start);
/**
* pci_epc_raise_irq() - interrupt the host system
* @epc: the EPC device which has to interrupt the host
* @func_no: the endpoint function number in the EPC device
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @type: specify the type of interrupt; legacy, MSI or MSI-X
* @interrupt_num: the MSI or MSI-X interrupt number
*
* Invoke to raise a legacy, MSI or MSI-X interrupt
*/
int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
enum pci_epc_irq_type type, u16 interrupt_num)
{
int ret;
@ -219,11 +225,14 @@ int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return -EINVAL;
if (!epc->ops->raise_irq)
return 0;
mutex_lock(&epc->lock);
ret = epc->ops->raise_irq(epc, func_no, type, interrupt_num);
ret = epc->ops->raise_irq(epc, func_no, vfunc_no, type, interrupt_num);
mutex_unlock(&epc->lock);
return ret;
@ -235,6 +244,7 @@ EXPORT_SYMBOL_GPL(pci_epc_raise_irq);
* MSI data
* @epc: the EPC device which has the MSI capability
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @phys_addr: the physical address of the outbound region
* @interrupt_num: the MSI interrupt number
* @entry_size: Size of Outbound address region for each interrupt
@ -250,21 +260,25 @@ EXPORT_SYMBOL_GPL(pci_epc_raise_irq);
* physical address (in outbound region) of the other interface to ring
* doorbell.
*/
int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, phys_addr_t phys_addr,
u8 interrupt_num, u32 entry_size, u32 *msi_data,
u32 *msi_addr_offset)
int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t phys_addr, u8 interrupt_num, u32 entry_size,
u32 *msi_data, u32 *msi_addr_offset)
{
int ret;
if (IS_ERR_OR_NULL(epc))
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return -EINVAL;
if (!epc->ops->map_msi_irq)
return -EINVAL;
mutex_lock(&epc->lock);
ret = epc->ops->map_msi_irq(epc, func_no, phys_addr, interrupt_num,
entry_size, msi_data, msi_addr_offset);
ret = epc->ops->map_msi_irq(epc, func_no, vfunc_no, phys_addr,
interrupt_num, entry_size, msi_data,
msi_addr_offset);
mutex_unlock(&epc->lock);
return ret;
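
As a concrete illustration of the doorbell flow described in the comment above (all names and the assumed window layout are hypothetical):

#include <linux/pci-epc.h>

/* Map the peer's MSI vectors into a local outbound window so that a
 * write of db_data at db_offset + i * entry_size into the window
 * rings doorbell i on the peer. */
static int demo_map_doorbells(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
                              phys_addr_t db_base, u8 db_count,
                              u32 entry_size)
{
        u32 db_data, db_offset;
        int ret;

        ret = pci_epc_map_msi_irq(epc, func_no, vfunc_no, db_base,
                                  db_count, entry_size, &db_data,
                                  &db_offset);
        if (ret)
                return ret;

        /* db_data is what to write; db_offset locates vector 0 */
        return 0;
}
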
@ -274,22 +288,26 @@ EXPORT_SYMBOL_GPL(pci_epc_map_msi_irq);
/**
* pci_epc_get_msi() - get the number of MSI interrupt numbers allocated
* @epc: the EPC device to which MSI interrupts was requested
* @func_no: the endpoint function number in the EPC device
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
*
* Invoke to get the number of MSI interrupts allocated by the RC
*/
int pci_epc_get_msi(struct pci_epc *epc, u8 func_no)
int pci_epc_get_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
{
int interrupt;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return 0;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return 0;
if (!epc->ops->get_msi)
return 0;
mutex_lock(&epc->lock);
interrupt = epc->ops->get_msi(epc, func_no);
interrupt = epc->ops->get_msi(epc, func_no, vfunc_no);
mutex_unlock(&epc->lock);
if (interrupt < 0)
@ -304,12 +322,13 @@ EXPORT_SYMBOL_GPL(pci_epc_get_msi);
/**
* pci_epc_set_msi() - set the number of MSI interrupt numbers required
* @epc: the EPC device on which MSI has to be configured
* @func_no: the endpoint function number in the EPC device
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @interrupts: number of MSI interrupts required by the EPF
*
* Invoke to set the required number of MSI interrupts.
*/
int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts)
int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no, u8 interrupts)
{
int ret;
u8 encode_int;
@ -318,13 +337,16 @@ int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts)
interrupts > 32)
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return -EINVAL;
if (!epc->ops->set_msi)
return 0;
encode_int = order_base_2(interrupts);
mutex_lock(&epc->lock);
ret = epc->ops->set_msi(epc, func_no, encode_int);
ret = epc->ops->set_msi(epc, func_no, vfunc_no, encode_int);
mutex_unlock(&epc->lock);
return ret;
@ -334,22 +356,26 @@ EXPORT_SYMBOL_GPL(pci_epc_set_msi);
/**
* pci_epc_get_msix() - get the number of MSI-X interrupt numbers allocated
* @epc: the EPC device to which MSI-X interrupts was requested
* @func_no: the endpoint function number in the EPC device
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
*
* Invoke to get the number of MSI-X interrupts allocated by the RC
*/
int pci_epc_get_msix(struct pci_epc *epc, u8 func_no)
int pci_epc_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
{
int interrupt;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return 0;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return 0;
if (!epc->ops->get_msix)
return 0;
mutex_lock(&epc->lock);
interrupt = epc->ops->get_msix(epc, func_no);
interrupt = epc->ops->get_msix(epc, func_no, vfunc_no);
mutex_unlock(&epc->lock);
if (interrupt < 0)
@ -362,15 +388,16 @@ EXPORT_SYMBOL_GPL(pci_epc_get_msix);
/**
* pci_epc_set_msix() - set the number of MSI-X interrupt numbers required
* @epc: the EPC device on which MSI-X has to be configured
* @func_no: the endpoint function number in the EPC device
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @interrupts: number of MSI-X interrupts required by the EPF
* @bir: BAR where the MSI-X table resides
* @offset: Offset pointing to the start of MSI-X table
*
* Invoke to set the required number of MSI-X interrupts.
*/
int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
enum pci_barno bir, u32 offset)
int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
u16 interrupts, enum pci_barno bir, u32 offset)
{
int ret;
@ -378,11 +405,15 @@ int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
interrupts < 1 || interrupts > 2048)
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return -EINVAL;
if (!epc->ops->set_msix)
return 0;
mutex_lock(&epc->lock);
ret = epc->ops->set_msix(epc, func_no, interrupts - 1, bir, offset);
ret = epc->ops->set_msix(epc, func_no, vfunc_no, interrupts - 1, bir,
offset);
mutex_unlock(&epc->lock);
return ret;
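
A combined sketch of configuring both interrupt types under the new signatures (the BAR and offset values are caller-supplied placeholders):

#include <linux/pci-epc.h>
#include <linux/pci-epf.h>

static int demo_configure_irqs(struct pci_epf *epf, u8 msi_count,
                               u16 msix_count, enum pci_barno msix_bar,
                               u32 msix_tbl_offset)
{
        struct pci_epc *epc = epf->epc;
        int ret;

        ret = pci_epc_set_msi(epc, epf->func_no, epf->vfunc_no, msi_count);
        if (ret)
                return ret;

        /* MSI-X additionally needs the BAR and offset of its table */
        return pci_epc_set_msix(epc, epf->func_no, epf->vfunc_no,
                                msix_count, msix_bar, msix_tbl_offset);
}
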
@ -392,22 +423,26 @@ EXPORT_SYMBOL_GPL(pci_epc_set_msix);
/**
* pci_epc_unmap_addr() - unmap CPU address from PCI address
* @epc: the EPC device on which address is allocated
* @func_no: the endpoint function number in the EPC device
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @phys_addr: physical address of the local system
*
* Invoke to unmap the CPU address from PCI address.
*/
void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no,
void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t phys_addr)
{
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return;
if (!epc->ops->unmap_addr)
return;
mutex_lock(&epc->lock);
epc->ops->unmap_addr(epc, func_no, phys_addr);
epc->ops->unmap_addr(epc, func_no, vfunc_no, phys_addr);
mutex_unlock(&epc->lock);
}
EXPORT_SYMBOL_GPL(pci_epc_unmap_addr);
@ -415,14 +450,15 @@ EXPORT_SYMBOL_GPL(pci_epc_unmap_addr);
/**
* pci_epc_map_addr() - map CPU address to PCI address
* @epc: the EPC device on which address is allocated
* @func_no: the endpoint function number in the EPC device
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @phys_addr: physical address of the local system
* @pci_addr: PCI address to which the physical address should be mapped
* @size: the size of the allocation
*
* Invoke to map CPU address with PCI address.
*/
int pci_epc_map_addr(struct pci_epc *epc, u8 func_no,
int pci_epc_map_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t phys_addr, u64 pci_addr, size_t size)
{
int ret;
@ -430,11 +466,15 @@ int pci_epc_map_addr(struct pci_epc *epc, u8 func_no,
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return -EINVAL;
if (!epc->ops->map_addr)
return 0;
mutex_lock(&epc->lock);
ret = epc->ops->map_addr(epc, func_no, phys_addr, pci_addr, size);
ret = epc->ops->map_addr(epc, func_no, vfunc_no, phys_addr, pci_addr,
size);
mutex_unlock(&epc->lock);
return ret;
@ -444,12 +484,13 @@ EXPORT_SYMBOL_GPL(pci_epc_map_addr);
/**
* pci_epc_clear_bar() - reset the BAR
* @epc: the EPC device for which the BAR has to be cleared
* @func_no: the endpoint function number in the EPC device
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @epf_bar: the struct epf_bar that contains the BAR information
*
* Invoke to reset the BAR of the endpoint device.
*/
void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no,
void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar)
{
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
@ -457,11 +498,14 @@ void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no,
epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64))
return;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return;
if (!epc->ops->clear_bar)
return;
mutex_lock(&epc->lock);
epc->ops->clear_bar(epc, func_no, epf_bar);
epc->ops->clear_bar(epc, func_no, vfunc_no, epf_bar);
mutex_unlock(&epc->lock);
}
EXPORT_SYMBOL_GPL(pci_epc_clear_bar);
@ -469,12 +513,13 @@ EXPORT_SYMBOL_GPL(pci_epc_clear_bar);
/**
* pci_epc_set_bar() - configure BAR in order for host to assign PCI addr space
* @epc: the EPC device on which BAR has to be configured
* @func_no: the endpoint function number in the EPC device
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @epf_bar: the struct epf_bar that contains the BAR information
*
* Invoke to configure the BAR of the endpoint device.
*/
int pci_epc_set_bar(struct pci_epc *epc, u8 func_no,
int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar)
{
int ret;
@ -489,11 +534,14 @@ int pci_epc_set_bar(struct pci_epc *epc, u8 func_no,
!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64)))
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return -EINVAL;
if (!epc->ops->set_bar)
return 0;
mutex_lock(&epc->lock);
ret = epc->ops->set_bar(epc, func_no, epf_bar);
ret = epc->ops->set_bar(epc, func_no, vfunc_no, epf_bar);
mutex_unlock(&epc->lock);
return ret;
@ -503,7 +551,8 @@ EXPORT_SYMBOL_GPL(pci_epc_set_bar);
/**
* pci_epc_write_header() - write standard configuration header
* @epc: the EPC device to which the configuration header should be written
* @func_no: the endpoint function number in the EPC device
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @header: standard configuration header fields
*
* Invoke to write the configuration header to the endpoint controller. Every
@ -511,7 +560,7 @@ EXPORT_SYMBOL_GPL(pci_epc_set_bar);
* configuration header would be written. The callback function should write
* the header fields to this dedicated location.
*/
int pci_epc_write_header(struct pci_epc *epc, u8 func_no,
int pci_epc_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_header *header)
{
int ret;
@ -519,11 +568,18 @@ int pci_epc_write_header(struct pci_epc *epc, u8 func_no,
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return -EINVAL;
/* Only Virtual Function #1 has deviceID */
if (vfunc_no > 1)
return -EINVAL;
if (!epc->ops->write_header)
return 0;
mutex_lock(&epc->lock);
ret = epc->ops->write_header(epc, func_no, header);
ret = epc->ops->write_header(epc, func_no, vfunc_no, header);
mutex_unlock(&epc->lock);
return ret;
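
The "only Virtual Function #1 has deviceID" rule is what motivates the vfunc_no <= 1 guards added to the function drivers above; a sketch of the caller side:

#include <linux/pci-epc.h>
#include <linux/pci-epf.h>

static int demo_write_header(struct pci_epf *epf)
{
        /* VFs 2..N share VF1's Device ID, so only the physical
         * function (vfunc_no == 0) and VF1 write a header. */
        if (epf->vfunc_no > 1)
                return 0;

        return pci_epc_write_header(epf->epc, epf->func_no, epf->vfunc_no,
                                    epf->header);
}
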
@ -548,7 +604,7 @@ int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf,
u32 func_no;
int ret = 0;
if (IS_ERR_OR_NULL(epc))
if (IS_ERR_OR_NULL(epc) || epf->is_vf)
return -EINVAL;
if (type == PRIMARY_INTERFACE && epf->epc)


@ -62,13 +62,20 @@ EXPORT_SYMBOL_GPL(pci_epf_type_add_cfs);
*/
void pci_epf_unbind(struct pci_epf *epf)
{
struct pci_epf *epf_vf;
if (!epf->driver) {
dev_WARN(&epf->dev, "epf device not bound to driver\n");
return;
}
mutex_lock(&epf->lock);
epf->driver->ops->unbind(epf);
list_for_each_entry(epf_vf, &epf->pci_vepf, list) {
if (epf_vf->is_bound)
epf_vf->driver->ops->unbind(epf_vf);
}
if (epf->is_bound)
epf->driver->ops->unbind(epf);
mutex_unlock(&epf->lock);
module_put(epf->driver->owner);
}
@ -83,10 +90,14 @@ EXPORT_SYMBOL_GPL(pci_epf_unbind);
*/
int pci_epf_bind(struct pci_epf *epf)
{
struct device *dev = &epf->dev;
struct pci_epf *epf_vf;
u8 func_no, vfunc_no;
struct pci_epc *epc;
int ret;
if (!epf->driver) {
dev_WARN(&epf->dev, "epf device not bound to driver\n");
dev_WARN(dev, "epf device not bound to driver\n");
return -EINVAL;
}
@ -94,13 +105,140 @@ int pci_epf_bind(struct pci_epf *epf)
return -EAGAIN;
mutex_lock(&epf->lock);
list_for_each_entry(epf_vf, &epf->pci_vepf, list) {
vfunc_no = epf_vf->vfunc_no;
if (vfunc_no < 1) {
dev_err(dev, "Invalid virtual function number\n");
ret = -EINVAL;
goto ret;
}
epc = epf->epc;
func_no = epf->func_no;
if (!IS_ERR_OR_NULL(epc)) {
if (!epc->max_vfs) {
dev_err(dev, "No support for virt function\n");
ret = -EINVAL;
goto ret;
}
if (vfunc_no > epc->max_vfs[func_no]) {
dev_err(dev, "PF%d: Exceeds max vfunc number\n",
func_no);
ret = -EINVAL;
goto ret;
}
}
epc = epf->sec_epc;
func_no = epf->sec_epc_func_no;
if (!IS_ERR_OR_NULL(epc)) {
if (!epc->max_vfs) {
dev_err(dev, "No support for virt function\n");
ret = -EINVAL;
goto ret;
}
if (vfunc_no > epc->max_vfs[func_no]) {
dev_err(dev, "PF%d: Exceeds max vfunc number\n",
func_no);
ret = -EINVAL;
goto ret;
}
}
epf_vf->func_no = epf->func_no;
epf_vf->sec_epc_func_no = epf->sec_epc_func_no;
epf_vf->epc = epf->epc;
epf_vf->sec_epc = epf->sec_epc;
ret = epf_vf->driver->ops->bind(epf_vf);
if (ret)
goto ret;
epf_vf->is_bound = true;
}
ret = epf->driver->ops->bind(epf);
if (ret)
goto ret;
epf->is_bound = true;
mutex_unlock(&epf->lock);
return 0;
ret:
mutex_unlock(&epf->lock);
pci_epf_unbind(epf);
return ret;
}
EXPORT_SYMBOL_GPL(pci_epf_bind);
/**
* pci_epf_add_vepf() - associate virtual EP function to physical EP function
* @epf_pf: the physical EP function to which the virtual EP function should be
* associated
* @epf_vf: the virtual EP function to be added
*
* A physical endpoint function can be associated with multiple virtual
* endpoint functions. Invoke pci_epf_add_vepf() to add a virtual PCI endpoint
* function to a physical PCI endpoint function.
*/
int pci_epf_add_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf)
{
u32 vfunc_no;
if (IS_ERR_OR_NULL(epf_pf) || IS_ERR_OR_NULL(epf_vf))
return -EINVAL;
if (epf_pf->epc || epf_vf->epc || epf_vf->epf_pf)
return -EBUSY;
if (epf_pf->sec_epc || epf_vf->sec_epc)
return -EBUSY;
mutex_lock(&epf_pf->lock);
vfunc_no = find_first_zero_bit(&epf_pf->vfunction_num_map,
BITS_PER_LONG);
if (vfunc_no >= BITS_PER_LONG) {
mutex_unlock(&epf_pf->lock);
return -EINVAL;
}
set_bit(vfunc_no, &epf_pf->vfunction_num_map);
epf_vf->vfunc_no = vfunc_no;
epf_vf->epf_pf = epf_pf;
epf_vf->is_vf = true;
list_add_tail(&epf_vf->list, &epf_pf->pci_vepf);
mutex_unlock(&epf_pf->lock);
return 0;
}
EXPORT_SYMBOL_GPL(pci_epf_add_vepf);
/**
* pci_epf_remove_vepf() - remove virtual EP function from physical EP function
* @epf_pf: the physical EP function from which the virtual EP function should
* be removed
* @epf_vf: the virtual EP function to be removed
*
* Invoke to remove a virtual endpoint function from the physical endpoint
* function.
*/
void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf)
{
if (IS_ERR_OR_NULL(epf_pf) || IS_ERR_OR_NULL(epf_vf))
return;
mutex_lock(&epf_pf->lock);
clear_bit(epf_vf->vfunc_no, &epf_pf->vfunction_num_map);
list_del(&epf_vf->list);
mutex_unlock(&epf_pf->lock);
}
EXPORT_SYMBOL_GPL(pci_epf_remove_vepf);
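
A sketch of the pairing these two helpers establish; in practice they are driven from the configfs allow_link/drop_link callbacks shown earlier rather than called directly:

#include <linux/pci-epf.h>

static int demo_attach_vf(struct pci_epf *pf, struct pci_epf *vf)
{
        int ret;

        /* Allocates vf->vfunc_no (numbering starts at 1) and queues vf
         * on pf->pci_vepf; -EBUSY once either side is tied to an EPC. */
        ret = pci_epf_add_vepf(pf, vf);
        if (ret)
                return ret;

        /* ... later, before tearing the functions down ... */
        pci_epf_remove_vepf(pf, vf);
        return 0;
}
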
/**
* pci_epf_free_space() - free the allocated PCI EPF register space
* @epf: the EPF device from whom to free the memory
@ -317,6 +455,10 @@ struct pci_epf *pci_epf_create(const char *name)
return ERR_PTR(-ENOMEM);
}
/* VFs are numbered starting with 1. So set BIT(0) by default */
epf->vfunction_num_map = 1;
INIT_LIST_HEAD(&epf->pci_vepf);
dev = &epf->dev;
device_initialize(dev);
dev->bus = &pci_epf_bus_type;


@ -40,9 +40,6 @@ ibmphp:
* The return value of pci_hp_register() is not checked.
* iounmap(io_mem) is called in the error path of ebda_rsrc_controller()
and once more in the error path of its caller ibmphp_access_ebda().
* The various slot data structures are difficult to follow and need to be
simplified. A lot of functions are too large and too complex, they need
to be broken up into smaller, manageable pieces. Negative examples are


@ -714,8 +714,7 @@ static int __init ebda_rsrc_controller(void)
/* init hpc structure */
hpc_ptr = alloc_ebda_hpc(slot_num, bus_num);
if (!hpc_ptr) {
rc = -ENOMEM;
goto error_no_hpc;
return -ENOMEM;
}
hpc_ptr->ctlr_id = ctlr_id;
hpc_ptr->ctlr_relative_id = ctlr;
@ -910,8 +909,6 @@ error:
kfree(tmp_slot);
error_no_slot:
free_ebda_hpc(hpc_ptr);
error_no_hpc:
iounmap(io_mem);
return rc;
}


@ -184,7 +184,7 @@ void pciehp_release_ctrl(struct controller *ctrl);
int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot);
int pciehp_sysfs_disable_slot(struct hotplug_slot *hotplug_slot);
int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe);
int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe);
int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status);
int pciehp_set_raw_indicator_status(struct hotplug_slot *h_slot, u8 status);
int pciehp_get_raw_indicator_status(struct hotplug_slot *h_slot, u8 *status);


@ -870,7 +870,7 @@ void pcie_disable_interrupt(struct controller *ctrl)
* momentarily, if we see that they could interfere. Also, clear any spurious
* events after.
*/
int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe)
int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, bool probe)
{
struct controller *ctrl = to_ctrl(hotplug_slot);
struct pci_dev *pdev = ctrl_dev(ctrl);


@ -526,7 +526,7 @@ scan:
return 0;
}
static int pnv_php_reset_slot(struct hotplug_slot *slot, int probe)
static int pnv_php_reset_slot(struct hotplug_slot *slot, bool probe)
{
struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
struct pci_dev *bridge = php_slot->pdev;


@ -310,7 +310,7 @@ static int devm_of_pci_get_host_bridge_resources(struct device *dev,
/* Check for ranges property */
err = of_pci_range_parser_init(&parser, dev_node);
if (err)
goto failed;
return 0;
dev_dbg(dev, "Parsing ranges property...\n");
for_each_of_pci_range(&parser, &range) {


@ -934,58 +934,77 @@ static pci_power_t acpi_pci_choose_state(struct pci_dev *pdev)
static struct acpi_device *acpi_pci_find_companion(struct device *dev);
static bool acpi_pci_bridge_d3(struct pci_dev *dev)
void pci_set_acpi_fwnode(struct pci_dev *dev)
{
const struct fwnode_handle *fwnode;
struct acpi_device *adev;
struct pci_dev *root;
u8 val;
if (!ACPI_COMPANION(&dev->dev) && !pci_dev_is_added(dev))
ACPI_COMPANION_SET(&dev->dev,
acpi_pci_find_companion(&dev->dev));
}
if (!dev->is_hotplug_bridge)
return false;
/**
* pci_dev_acpi_reset - do a function level reset using _RST method
* @dev: device to reset
* @probe: if true, return 0 if device supports _RST
*/
int pci_dev_acpi_reset(struct pci_dev *dev, bool probe)
{
acpi_handle handle = ACPI_HANDLE(&dev->dev);
/* Assume D3 support if the bridge is power-manageable by ACPI. */
adev = ACPI_COMPANION(&dev->dev);
if (!adev && !pci_dev_is_added(dev)) {
adev = acpi_pci_find_companion(&dev->dev);
ACPI_COMPANION_SET(&dev->dev, adev);
if (!handle || !acpi_has_method(handle, "_RST"))
return -ENOTTY;
if (probe)
return 0;
if (ACPI_FAILURE(acpi_evaluate_object(handle, "_RST", NULL, NULL))) {
pci_warn(dev, "ACPI _RST failed\n");
return -ENOTTY;
}
if (adev && acpi_device_power_manageable(adev))
return true;
/*
* Look for a special _DSD property for the root port and if it
* is set we know the hierarchy behind it supports D3 just fine.
*/
root = pcie_find_root_port(dev);
if (!root)
return false;
adev = ACPI_COMPANION(&root->dev);
if (root == dev) {
/*
* It is possible that the ACPI companion is not yet bound
* for the root port so look it up manually here.
*/
if (!adev && !pci_dev_is_added(root))
adev = acpi_pci_find_companion(&root->dev);
}
if (!adev)
return false;
fwnode = acpi_fwnode_handle(adev);
if (fwnode_property_read_u8(fwnode, "HotPlugSupportInD3", &val))
return false;
return val == 1;
return 0;
}
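
pci_dev_acpi_reset() follows the probe convention used by every entry in the new reset-method table: called with probe true it only reports whether the method is available. A sketch (the helper is drivers/pci internal, so this would live alongside it in the PCI core):

static int demo_acpi_reset(struct pci_dev *pdev)
{
        /* probe == true: only ask whether _RST is available */
        if (pci_dev_acpi_reset(pdev, true))
                return -ENOTTY;

        /* probe == false: actually evaluate _RST */
        return pci_dev_acpi_reset(pdev, false);
}
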
static bool acpi_pci_power_manageable(struct pci_dev *dev)
{
struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
return adev ? acpi_device_power_manageable(adev) : false;
if (!adev)
return false;
return acpi_device_power_manageable(adev);
}
static bool acpi_pci_bridge_d3(struct pci_dev *dev)
{
const union acpi_object *obj;
struct acpi_device *adev;
struct pci_dev *rpdev;
if (!dev->is_hotplug_bridge)
return false;
/* Assume D3 support if the bridge is power-manageable by ACPI. */
if (acpi_pci_power_manageable(dev))
return true;
/*
* The ACPI firmware will provide the device-specific properties through
* _DSD configuration object. Look for the 'HotPlugSupportInD3' property
* for the root port and if it is set we know the hierarchy behind it
* supports D3 just fine.
*/
rpdev = pcie_find_root_port(dev);
if (!rpdev)
return false;
adev = ACPI_COMPANION(&rpdev->dev);
if (!adev)
return false;
if (acpi_dev_get_property(adev, "HotPlugSupportInD3",
ACPI_TYPE_INTEGER, &obj) < 0)
return false;
return obj->integer.value == 1;
}
static int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)


@ -54,7 +54,7 @@ struct pci_bridge_emul_pcie_conf {
__le16 slotctl;
__le16 slotsta;
__le16 rootctl;
__le16 rsvd;
__le16 rootcap;
__le32 rootsta;
__le32 devcap2;
__le16 devctl2;


@ -1367,7 +1367,7 @@ static umode_t pci_dev_reset_attr_is_visible(struct kobject *kobj,
{
struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
if (!pdev->reset_fn)
if (!pci_reset_supported(pdev))
return 0;
return a->mode;
@ -1491,6 +1491,7 @@ const struct attribute_group *pci_dev_groups[] = {
&pci_dev_config_attr_group,
&pci_dev_rom_attr_group,
&pci_dev_reset_attr_group,
&pci_dev_reset_method_attr_group,
&pci_dev_vpd_attr_group,
#ifdef CONFIG_DMI
&pci_dev_smbios_attr_group,


@ -31,6 +31,7 @@
#include <linux/vmalloc.h>
#include <asm/dma.h>
#include <linux/aer.h>
#include <linux/bitfield.h>
#include "pci.h"
DEFINE_MUTEX(pci_slot_mutex);
@ -72,6 +73,11 @@ static void pci_dev_d3_sleep(struct pci_dev *dev)
msleep(delay);
}
bool pci_reset_supported(struct pci_dev *dev)
{
return dev->reset_methods[0] != 0;
}
#ifdef CONFIG_PCI_DOMAINS
int pci_domains_supported = 1;
#endif
@ -206,32 +212,36 @@ int pci_status_get_and_clear_errors(struct pci_dev *pdev)
EXPORT_SYMBOL_GPL(pci_status_get_and_clear_errors);
#ifdef CONFIG_HAS_IOMEM
void __iomem *pci_ioremap_bar(struct pci_dev *pdev, int bar)
static void __iomem *__pci_ioremap_resource(struct pci_dev *pdev, int bar,
bool write_combine)
{
struct resource *res = &pdev->resource[bar];
resource_size_t start = res->start;
resource_size_t size = resource_size(res);
/*
* Make sure the BAR is actually a memory resource, not an IO resource
*/
if (res->flags & IORESOURCE_UNSET || !(res->flags & IORESOURCE_MEM)) {
pci_warn(pdev, "can't ioremap BAR %d: %pR\n", bar, res);
pci_err(pdev, "can't ioremap BAR %d: %pR\n", bar, res);
return NULL;
}
return ioremap(res->start, resource_size(res));
if (write_combine)
return ioremap_wc(start, size);
return ioremap(start, size);
}
void __iomem *pci_ioremap_bar(struct pci_dev *pdev, int bar)
{
return __pci_ioremap_resource(pdev, bar, false);
}
EXPORT_SYMBOL_GPL(pci_ioremap_bar);
void __iomem *pci_ioremap_wc_bar(struct pci_dev *pdev, int bar)
{
/*
* Make sure the BAR is actually a memory resource, not an IO resource
*/
if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM)) {
WARN_ON(1);
return NULL;
}
return ioremap_wc(pci_resource_start(pdev, bar),
pci_resource_len(pdev, bar));
return __pci_ioremap_resource(pdev, bar, true);
}
EXPORT_SYMBOL_GPL(pci_ioremap_wc_bar);
#endif
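
Driver-facing behavior is unchanged by this refactor; the usual call still looks like this:

#include <linux/pci.h>

static void __iomem *demo_map_bar0(struct pci_dev *pdev)
{
        void __iomem *regs;

        regs = pci_ioremap_bar(pdev, 0);        /* NULL if BAR 0 is not MEM */
        if (!regs)
                return NULL;

        /* ... readl()/writel() against regs, iounmap() when done ... */
        return regs;
}
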
@ -265,7 +275,7 @@ static int pci_dev_str_match_path(struct pci_dev *dev, const char *path,
*endptr = strchrnul(path, ';');
wpath = kmemdup_nul(path, *endptr - path, GFP_KERNEL);
wpath = kmemdup_nul(path, *endptr - path, GFP_ATOMIC);
if (!wpath)
return -ENOMEM;
@ -915,8 +925,8 @@ static void pci_std_enable_acs(struct pci_dev *dev)
/* Upstream Forwarding */
ctrl |= (cap & PCI_ACS_UF);
/* Enable Translation Blocking for external devices */
if (dev->external_facing || dev->untrusted)
/* Enable Translation Blocking for external devices and noats */
if (pci_ats_disabled() || dev->external_facing || dev->untrusted)
ctrl |= (cap & PCI_ACS_TB);
pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl);
@ -4628,32 +4638,12 @@ int pci_wait_for_pending_transaction(struct pci_dev *dev)
}
EXPORT_SYMBOL(pci_wait_for_pending_transaction);
/**
* pcie_has_flr - check if a device supports function level resets
* @dev: device to check
*
* Returns true if the device advertises support for PCIe function level
* resets.
*/
bool pcie_has_flr(struct pci_dev *dev)
{
u32 cap;
if (dev->dev_flags & PCI_DEV_FLAGS_NO_FLR_RESET)
return false;
pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap);
return cap & PCI_EXP_DEVCAP_FLR;
}
EXPORT_SYMBOL_GPL(pcie_has_flr);
/**
* pcie_flr - initiate a PCIe function level reset
* @dev: device to reset
*
* Initiate a function level reset on @dev. The caller should ensure the
* device supports FLR before calling this function, e.g. by using the
* pcie_has_flr() helper.
* Initiate a function level reset unconditionally on @dev, without first
* checking device flags or the DEVCAP FLR bit.
*/
int pcie_flr(struct pci_dev *dev)
{
@ -4676,7 +4666,29 @@ int pcie_flr(struct pci_dev *dev)
}
EXPORT_SYMBOL_GPL(pcie_flr);
static int pci_af_flr(struct pci_dev *dev, int probe)
/**
* pcie_reset_flr - initiate a PCIe function level reset
* @dev: device to reset
* @probe: if true, return 0 if device can be reset this way
*
* Initiate a function level reset on @dev.
*/
int pcie_reset_flr(struct pci_dev *dev, bool probe)
{
if (dev->dev_flags & PCI_DEV_FLAGS_NO_FLR_RESET)
return -ENOTTY;
if (!(dev->devcap & PCI_EXP_DEVCAP_FLR))
return -ENOTTY;
if (probe)
return 0;
return pcie_flr(dev);
}
EXPORT_SYMBOL_GPL(pcie_reset_flr);
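
Callers convert as in the aer.c hunk later in this series; a minimal sketch (PCI_RESET_DO_RESET and PCI_RESET_PROBE are the drivers/pci-internal spellings of the bool probe argument):

static int demo_flr(struct pci_dev *pdev)
{
        /*
         * Old pattern:
         *      if (pcie_has_flr(pdev))
         *              rc = pcie_flr(pdev);
         *
         * New pattern: one call; -ENOTTY means FLR is unsupported.
         */
        return pcie_reset_flr(pdev, PCI_RESET_DO_RESET);
}
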
static int pci_af_flr(struct pci_dev *dev, bool probe)
{
int pos;
u8 cap;
@ -4723,7 +4735,7 @@ static int pci_af_flr(struct pci_dev *dev, int probe)
/**
* pci_pm_reset - Put device into PCI_D3 and back into PCI_D0.
* @dev: Device to reset.
* @probe: If set, only check if the device can be reset this way.
* @probe: if true, return 0 if the device can be reset this way.
*
* If @dev supports native PCI PM and its PCI_PM_CTRL_NO_SOFT_RESET flag is
* unset, it will be reinitialized internally when going from PCI_D3hot to
@ -4735,7 +4747,7 @@ static int pci_af_flr(struct pci_dev *dev, int probe)
* by default (i.e. unless the @dev's d3hot_delay field has a different value).
* Moreover, only devices in D0 can be reset by this function.
*/
static int pci_pm_reset(struct pci_dev *dev, int probe)
static int pci_pm_reset(struct pci_dev *dev, bool probe)
{
u16 csr;
@ -4995,7 +5007,7 @@ int pci_bridge_secondary_bus_reset(struct pci_dev *dev)
}
EXPORT_SYMBOL_GPL(pci_bridge_secondary_bus_reset);
static int pci_parent_bus_reset(struct pci_dev *dev, int probe)
static int pci_parent_bus_reset(struct pci_dev *dev, bool probe)
{
struct pci_dev *pdev;
@ -5013,7 +5025,7 @@ static int pci_parent_bus_reset(struct pci_dev *dev, int probe)
return pci_bridge_secondary_bus_reset(dev->bus->self);
}
static int pci_reset_hotplug_slot(struct hotplug_slot *hotplug, int probe)
static int pci_reset_hotplug_slot(struct hotplug_slot *hotplug, bool probe)
{
int rc = -ENOTTY;
@ -5028,7 +5040,7 @@ static int pci_reset_hotplug_slot(struct hotplug_slot *hotplug, int probe)
return rc;
}
static int pci_dev_reset_slot_function(struct pci_dev *dev, int probe)
static int pci_dev_reset_slot_function(struct pci_dev *dev, bool probe)
{
if (dev->multifunction || dev->subordinate || !dev->slot ||
dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET)
@ -5037,7 +5049,7 @@ static int pci_dev_reset_slot_function(struct pci_dev *dev, int probe)
return pci_reset_hotplug_slot(dev->slot->hotplug, probe);
}
static int pci_reset_bus_function(struct pci_dev *dev, int probe)
static int pci_reset_bus_function(struct pci_dev *dev, bool probe)
{
int rc;
@ -5121,6 +5133,139 @@ static void pci_dev_restore(struct pci_dev *dev)
err_handler->reset_done(dev);
}
/* dev->reset_methods[] is a 0-terminated list of indices into this array */
static const struct pci_reset_fn_method pci_reset_fn_methods[] = {
{ },
{ pci_dev_specific_reset, .name = "device_specific" },
{ pci_dev_acpi_reset, .name = "acpi" },
{ pcie_reset_flr, .name = "flr" },
{ pci_af_flr, .name = "af_flr" },
{ pci_pm_reset, .name = "pm" },
{ pci_reset_bus_function, .name = "bus" },
};
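
To make the encoding concrete: reset_methods[] holds indices into this table, in priority order, 0-terminated. A hypothetical dump helper (it would have to live in pci.c, since the table is static):

static void demo_dump_reset_methods(struct pci_dev *pdev)
{
        int i;

        /* e.g. { 3, 5, 6, 0, ... } prints "flr", "pm", "bus" */
        for (i = 0; i < PCI_NUM_RESET_METHODS && pdev->reset_methods[i]; i++)
                pci_info(pdev, "method %d: %s\n", i,
                         pci_reset_fn_methods[pdev->reset_methods[i]].name);
}
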
static ssize_t reset_method_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct pci_dev *pdev = to_pci_dev(dev);
ssize_t len = 0;
int i, m;
for (i = 0; i < PCI_NUM_RESET_METHODS; i++) {
m = pdev->reset_methods[i];
if (!m)
break;
len += sysfs_emit_at(buf, len, "%s%s", len ? " " : "",
pci_reset_fn_methods[m].name);
}
if (len)
len += sysfs_emit_at(buf, len, "\n");
return len;
}
static int reset_method_lookup(const char *name)
{
int m;
for (m = 1; m < PCI_NUM_RESET_METHODS; m++) {
if (sysfs_streq(name, pci_reset_fn_methods[m].name))
return m;
}
return 0; /* not found */
}
static ssize_t reset_method_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct pci_dev *pdev = to_pci_dev(dev);
char *options, *name;
int m, n;
u8 reset_methods[PCI_NUM_RESET_METHODS] = { 0 };
if (sysfs_streq(buf, "")) {
pdev->reset_methods[0] = 0;
pci_warn(pdev, "All device reset methods disabled by user");
return count;
}
if (sysfs_streq(buf, "default")) {
pci_init_reset_methods(pdev);
return count;
}
options = kstrndup(buf, count, GFP_KERNEL);
if (!options)
return -ENOMEM;
n = 0;
while ((name = strsep(&options, " ")) != NULL) {
if (sysfs_streq(name, ""))
continue;
name = strim(name);
m = reset_method_lookup(name);
if (!m) {
pci_err(pdev, "Invalid reset method '%s'", name);
goto error;
}
if (pci_reset_fn_methods[m].reset_fn(pdev, PCI_RESET_PROBE)) {
pci_err(pdev, "Unsupported reset method '%s'", name);
goto error;
}
if (n == PCI_NUM_RESET_METHODS - 1) {
pci_err(pdev, "Too many reset methods\n");
goto error;
}
reset_methods[n++] = m;
}
reset_methods[n] = 0;
/* Warn if dev-specific supported but not highest priority */
if (pci_reset_fn_methods[1].reset_fn(pdev, PCI_RESET_PROBE) == 0 &&
reset_methods[0] != 1)
pci_warn(pdev, "Device-specific reset disabled/de-prioritized by user");
memcpy(pdev->reset_methods, reset_methods, sizeof(pdev->reset_methods));
kfree(options);
return count;
error:
/* Leave previous methods unchanged */
kfree(options);
return -EINVAL;
}
static DEVICE_ATTR_RW(reset_method);
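
From user space the knob is an ordinary sysfs attribute. A hedged sketch that pins a device to FLR only (the BDF is made up):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *path =
                "/sys/bus/pci/devices/0000:01:00.0/reset_method";
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* "default" restores the probed order; an empty write
         * disables all reset methods for the device */
        if (write(fd, "flr", strlen("flr")) < 0)
                perror("write");
        close(fd);
        return 0;
}
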
static struct attribute *pci_dev_reset_method_attrs[] = {
&dev_attr_reset_method.attr,
NULL,
};
static umode_t pci_dev_reset_method_attr_is_visible(struct kobject *kobj,
struct attribute *a, int n)
{
struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
if (!pci_reset_supported(pdev))
return 0;
return a->mode;
}
const struct attribute_group pci_dev_reset_method_attr_group = {
.attrs = pci_dev_reset_method_attrs,
.is_visible = pci_dev_reset_method_attr_is_visible,
};
/**
* __pci_reset_function_locked - reset a PCI device function while holding
* the @dev mutex lock.
@ -5143,66 +5288,64 @@ static void pci_dev_restore(struct pci_dev *dev)
*/
int __pci_reset_function_locked(struct pci_dev *dev)
{
int rc;
int i, m, rc = -ENOTTY;
might_sleep();
/*
* A reset method returns -ENOTTY if it doesn't support this device
* and we should try the next method.
* A reset method returns -ENOTTY if it doesn't support this device and
* we should try the next method.
*
* If it returns 0 (success), we're finished. If it returns any
* other error, we're also finished: this indicates that further
* reset mechanisms might be broken on the device.
* If it returns 0 (success), we're finished. If it returns any other
* error, we're also finished: this indicates that further reset
* mechanisms might be broken on the device.
*/
rc = pci_dev_specific_reset(dev, 0);
if (rc != -ENOTTY)
return rc;
if (pcie_has_flr(dev)) {
rc = pcie_flr(dev);
for (i = 0; i < PCI_NUM_RESET_METHODS; i++) {
m = dev->reset_methods[i];
if (!m)
return -ENOTTY;
rc = pci_reset_fn_methods[m].reset_fn(dev, PCI_RESET_DO_RESET);
if (!rc)
return 0;
if (rc != -ENOTTY)
return rc;
}
rc = pci_af_flr(dev, 0);
if (rc != -ENOTTY)
return rc;
rc = pci_pm_reset(dev, 0);
if (rc != -ENOTTY)
return rc;
return pci_reset_bus_function(dev, 0);
return -ENOTTY;
}
EXPORT_SYMBOL_GPL(__pci_reset_function_locked);
/**
* pci_probe_reset_function - check whether the device can be safely reset
* @dev: PCI device to reset
* pci_init_reset_methods - check whether device can be safely reset
* and store supported reset mechanisms.
* @dev: PCI device to check for reset mechanisms
*
* Some devices allow an individual function to be reset without affecting
* other functions in the same device. The PCI device must be responsive
* to PCI config space in order to use this function.
* other functions in the same device. The PCI device must be in D0-D3hot
* state.
*
* Returns 0 if the device function can be reset or negative if the
* device doesn't support resetting a single function.
* Stores the reset mechanisms supported by the device in the reset_methods
* byte array, which is a member of struct pci_dev.
*/
int pci_probe_reset_function(struct pci_dev *dev)
void pci_init_reset_methods(struct pci_dev *dev)
{
int rc;
int m, i, rc;
BUILD_BUG_ON(ARRAY_SIZE(pci_reset_fn_methods) != PCI_NUM_RESET_METHODS);
might_sleep();
rc = pci_dev_specific_reset(dev, 1);
if (rc != -ENOTTY)
return rc;
if (pcie_has_flr(dev))
return 0;
rc = pci_af_flr(dev, 1);
if (rc != -ENOTTY)
return rc;
rc = pci_pm_reset(dev, 1);
if (rc != -ENOTTY)
return rc;
i = 0;
for (m = 1; m < PCI_NUM_RESET_METHODS; m++) {
rc = pci_reset_fn_methods[m].reset_fn(dev, PCI_RESET_PROBE);
if (!rc)
dev->reset_methods[i++] = m;
else if (rc != -ENOTTY)
break;
}
return pci_reset_bus_function(dev, 1);
dev->reset_methods[i] = 0;
}
/**
@ -5225,7 +5368,7 @@ int pci_reset_function(struct pci_dev *dev)
{
int rc;
if (!dev->reset_fn)
if (!pci_reset_supported(dev))
return -ENOTTY;
pci_dev_lock(dev);
@ -5261,7 +5404,7 @@ int pci_reset_function_locked(struct pci_dev *dev)
{
int rc;
if (!dev->reset_fn)
if (!pci_reset_supported(dev))
return -ENOTTY;
pci_dev_save_and_disable(dev);
@ -5284,7 +5427,7 @@ int pci_try_reset_function(struct pci_dev *dev)
{
int rc;
if (!dev->reset_fn)
if (!pci_reset_supported(dev))
return -ENOTTY;
if (!pci_dev_trylock(dev))
@ -5512,7 +5655,7 @@ static void pci_slot_restore_locked(struct pci_slot *slot)
}
}
static int pci_slot_reset(struct pci_slot *slot, int probe)
static int pci_slot_reset(struct pci_slot *slot, bool probe)
{
int rc;
@ -5540,7 +5683,7 @@ static int pci_slot_reset(struct pci_slot *slot, int probe)
*/
int pci_probe_reset_slot(struct pci_slot *slot)
{
return pci_slot_reset(slot, 1);
return pci_slot_reset(slot, PCI_RESET_PROBE);
}
EXPORT_SYMBOL_GPL(pci_probe_reset_slot);
@ -5563,14 +5706,14 @@ static int __pci_reset_slot(struct pci_slot *slot)
{
int rc;
rc = pci_slot_reset(slot, 1);
rc = pci_slot_reset(slot, PCI_RESET_PROBE);
if (rc)
return rc;
if (pci_slot_trylock(slot)) {
pci_slot_save_and_disable_locked(slot);
might_sleep();
rc = pci_reset_hotplug_slot(slot->hotplug, 0);
rc = pci_reset_hotplug_slot(slot->hotplug, PCI_RESET_DO_RESET);
pci_slot_restore_locked(slot);
pci_slot_unlock(slot);
} else
@ -5579,7 +5722,7 @@ static int __pci_reset_slot(struct pci_slot *slot)
return rc;
}
static int pci_bus_reset(struct pci_bus *bus, int probe)
static int pci_bus_reset(struct pci_bus *bus, bool probe)
{
int ret;
@ -5625,14 +5768,14 @@ int pci_bus_error_reset(struct pci_dev *bridge)
goto bus_reset;
list_for_each_entry(slot, &bus->slots, list)
if (pci_slot_reset(slot, 0))
if (pci_slot_reset(slot, PCI_RESET_DO_RESET))
goto bus_reset;
mutex_unlock(&pci_slot_mutex);
return 0;
bus_reset:
mutex_unlock(&pci_slot_mutex);
return pci_bus_reset(bridge->subordinate, 0);
return pci_bus_reset(bridge->subordinate, PCI_RESET_DO_RESET);
}
/**
@ -5643,7 +5786,7 @@ bus_reset:
*/
int pci_probe_reset_bus(struct pci_bus *bus)
{
return pci_bus_reset(bus, 1);
return pci_bus_reset(bus, PCI_RESET_PROBE);
}
EXPORT_SYMBOL_GPL(pci_probe_reset_bus);
@ -5657,7 +5800,7 @@ static int __pci_reset_bus(struct pci_bus *bus)
{
int rc;
rc = pci_bus_reset(bus, 1);
rc = pci_bus_reset(bus, PCI_RESET_PROBE);
if (rc)
return rc;


@ -33,10 +33,32 @@ enum pci_mmap_api {
int pci_mmap_fits(struct pci_dev *pdev, int resno, struct vm_area_struct *vmai,
enum pci_mmap_api mmap_api);
int pci_probe_reset_function(struct pci_dev *dev);
bool pci_reset_supported(struct pci_dev *dev);
void pci_init_reset_methods(struct pci_dev *dev);
int pci_bridge_secondary_bus_reset(struct pci_dev *dev);
int pci_bus_error_reset(struct pci_dev *dev);
struct pci_cap_saved_data {
u16 cap_nr;
bool cap_extended;
unsigned int size;
u32 data[];
};
struct pci_cap_saved_state {
struct hlist_node next;
struct pci_cap_saved_data cap;
};
void pci_allocate_cap_save_buffers(struct pci_dev *dev);
void pci_free_cap_save_buffers(struct pci_dev *dev);
int pci_add_cap_save_buffer(struct pci_dev *dev, char cap, unsigned int size);
int pci_add_ext_cap_save_buffer(struct pci_dev *dev,
u16 cap, unsigned int size);
struct pci_cap_saved_state *pci_find_saved_cap(struct pci_dev *dev, char cap);
struct pci_cap_saved_state *pci_find_saved_ext_cap(struct pci_dev *dev,
u16 cap);
#define PCI_PM_D2_DELAY 200 /* usec; see PCIe r4.0, sec 5.9.1 */
#define PCI_PM_D3HOT_WAIT 10 /* msec */
#define PCI_PM_D3COLD_WAIT 100 /* msec */
@@ -100,8 +122,6 @@ void pci_pm_init(struct pci_dev *dev);
void pci_ea_init(struct pci_dev *dev);
void pci_msi_init(struct pci_dev *dev);
void pci_msix_init(struct pci_dev *dev);
void pci_allocate_cap_save_buffers(struct pci_dev *dev);
void pci_free_cap_save_buffers(struct pci_dev *dev);
bool pci_bridge_d3_possible(struct pci_dev *dev);
void pci_bridge_d3_update(struct pci_dev *dev);
void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev);
@@ -604,13 +624,18 @@ static inline void pci_ptm_init(struct pci_dev *dev) { }
struct pci_dev_reset_methods {
u16 vendor;
u16 device;
int (*reset)(struct pci_dev *dev, int probe);
int (*reset)(struct pci_dev *dev, bool probe);
};
struct pci_reset_fn_method {
int (*reset_fn)(struct pci_dev *pdev, bool probe);
char *name;
};
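
For context, pci.c pairs this struct with a pci_reset_fn_methods[] table of
PCI_NUM_RESET_METHODS entries; index 0 is a sentinel so that a zero byte in
dev->reset_methods[] can terminate the list. A sketch reconstructed from
the probe loop earlier in this diff (ordering assumed, not confirmed here):

	const struct pci_reset_fn_method pci_reset_fn_methods[] = {
		{ },						/* 0: sentinel */
		{ pci_dev_specific_reset, .name = "device_specific" },
		{ pci_dev_acpi_reset, .name = "acpi" },		/* ACPI _RST */
		{ pcie_reset_flr, .name = "flr" },
		{ pci_af_flr, .name = "af_flr" },
		{ pci_pm_reset, .name = "pm" },
		{ pci_reset_bus_function, .name = "bus" },
	};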
#ifdef CONFIG_PCI_QUIRKS
int pci_dev_specific_reset(struct pci_dev *dev, int probe);
int pci_dev_specific_reset(struct pci_dev *dev, bool probe);
#else
static inline int pci_dev_specific_reset(struct pci_dev *dev, int probe)
static inline int pci_dev_specific_reset(struct pci_dev *dev, bool probe)
{
return -ENOTTY;
}
@@ -698,7 +723,15 @@ static inline int pci_aer_raw_clear_status(struct pci_dev *dev) { return -EINVAL
#ifdef CONFIG_ACPI
int pci_acpi_program_hp_params(struct pci_dev *dev);
extern const struct attribute_group pci_dev_acpi_attr_group;
void pci_set_acpi_fwnode(struct pci_dev *dev);
int pci_dev_acpi_reset(struct pci_dev *dev, bool probe);
#else
static inline int pci_dev_acpi_reset(struct pci_dev *dev, bool probe)
{
return -ENOTTY;
}
static inline void pci_set_acpi_fwnode(struct pci_dev *dev) {}
static inline int pci_acpi_program_hp_params(struct pci_dev *dev)
{
return -ENODEV;
@@ -709,4 +742,6 @@ static inline int pci_acpi_program_hp_params(struct pci_dev *dev)
extern const struct attribute_group aspm_ctrl_attr_group;
#endif
extern const struct attribute_group pci_dev_reset_method_attr_group;
#endif /* DRIVERS_PCI_H */


@@ -1407,13 +1407,11 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
}
if (type == PCI_EXP_TYPE_RC_EC || type == PCI_EXP_TYPE_RC_END) {
if (pcie_has_flr(dev)) {
rc = pcie_flr(dev);
pci_info(dev, "has been reset (%d)\n", rc);
} else {
pci_info(dev, "not reset (no FLR support)\n");
rc = -ENOTTY;
}
rc = pcie_reset_flr(dev, PCI_RESET_DO_RESET);
if (!rc)
pci_info(dev, "has been reset\n");
else
pci_info(dev, "not reset (no FLR support: %d)\n", rc);
} else {
rc = pci_bus_error_reset(dev);
pci_info(dev, "%s Port link has been reset (%d)\n",


@@ -257,8 +257,13 @@ static int get_port_device_capability(struct pci_dev *dev)
services |= PCIE_PORT_SERVICE_DPC;
if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
services |= PCIE_PORT_SERVICE_BWNOTIF;
pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
u32 linkcap;
pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &linkcap);
if (linkcap & PCI_EXP_LNKCAP_LBNC)
services |= PCIE_PORT_SERVICE_BWNOTIF;
}
return services;
}


@@ -60,10 +60,8 @@ void pci_save_ptm_state(struct pci_dev *dev)
return;
save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_PTM);
if (!save_state) {
pci_err(dev, "no suspend buffer for PTM\n");
if (!save_state)
return;
}
cap = (u16 *)&save_state->cap.data[0];
pci_read_config_word(dev, ptm + PCI_PTM_CTRL, cap);


@@ -19,6 +19,7 @@
#include <linux/hypervisor.h>
#include <linux/irqdomain.h>
#include <linux/pm_runtime.h>
#include <linux/bitfield.h>
#include "pci.h"
#define CARDBUS_LATENCY_TIMER 176 /* secondary latency timer */
@@ -594,6 +595,7 @@ static void pci_init_host_bridge(struct pci_host_bridge *bridge)
bridge->native_pme = 1;
bridge->native_ltr = 1;
bridge->native_dpc = 1;
bridge->domain_nr = PCI_DOMAIN_NR_NOT_SET;
device_initialize(&bridge->dev);
}
@@ -828,11 +830,15 @@ static struct irq_domain *pci_host_bridge_msi_domain(struct pci_bus *bus)
{
struct irq_domain *d;
/* If the host bridge driver sets a MSI domain of the bridge, use it */
d = dev_get_msi_domain(bus->bridge);
/*
* Any firmware interface that can resolve the msi_domain
* should be called from here.
*/
d = pci_host_bridge_of_msi_domain(bus);
if (!d)
d = pci_host_bridge_of_msi_domain(bus);
if (!d)
d = pci_host_bridge_acpi_msi_domain(bus);
@@ -898,7 +904,10 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
bus->ops = bridge->ops;
bus->number = bus->busn_res.start = bridge->busnr;
#ifdef CONFIG_PCI_DOMAINS_GENERIC
bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
if (bridge->domain_nr == PCI_DOMAIN_NR_NOT_SET)
bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
else
bus->domain_nr = bridge->domain_nr;
#endif
b = pci_find_bus(pci_domain_nr(bus), bridge->busnr);
@@ -1498,8 +1507,8 @@ void set_pcie_port_type(struct pci_dev *pdev)
pdev->pcie_cap = pos;
pci_read_config_word(pdev, pos + PCI_EXP_FLAGS, &reg16);
pdev->pcie_flags_reg = reg16;
pci_read_config_word(pdev, pos + PCI_EXP_DEVCAP, &reg16);
pdev->pcie_mpss = reg16 & PCI_EXP_DEVCAP_PAYLOAD;
pci_read_config_dword(pdev, pos + PCI_EXP_DEVCAP, &pdev->devcap);
pdev->pcie_mpss = FIELD_GET(PCI_EXP_DEVCAP_PAYLOAD, pdev->devcap);
parent = pci_upstream_bridge(pdev);
if (!parent)
@@ -1809,6 +1818,9 @@ int pci_setup_device(struct pci_dev *dev)
dev->error_state = pci_channel_io_normal;
set_pcie_port_type(dev);
pci_set_of_node(dev);
pci_set_acpi_fwnode(dev);
pci_dev_assign_slot(dev);
/*
@@ -1946,6 +1958,7 @@ int pci_setup_device(struct pci_dev *dev)
default: /* unknown header */
pci_err(dev, "unknown header type %02x, ignoring device\n",
dev->hdr_type);
pci_release_of_node(dev);
return -EIO;
bad:
@@ -2225,7 +2238,6 @@ static void pci_release_capabilities(struct pci_dev *dev)
{
pci_aer_exit(dev);
pci_rcec_exit(dev);
pci_vpd_release(dev);
pci_iov_release(dev);
pci_free_cap_save_buffers(dev);
}
@@ -2374,10 +2386,7 @@ static struct pci_dev *pci_scan_device(struct pci_bus *bus, int devfn)
dev->vendor = l & 0xffff;
dev->device = (l >> 16) & 0xffff;
pci_set_of_node(dev);
if (pci_setup_device(dev)) {
pci_release_of_node(dev);
pci_bus_put(dev->bus);
kfree(dev);
return NULL;
@@ -2428,9 +2437,7 @@ static void pci_init_capabilities(struct pci_dev *dev)
pci_rcec_init(dev); /* Root Complex Event Collector */
pcie_report_downtraining(dev);
if (pci_probe_reset_function(dev) == 0)
dev->reset_fn = 1;
pci_init_reset_methods(dev);
}
/*


@@ -83,6 +83,7 @@ static ssize_t proc_bus_pci_read(struct file *file, char __user *buf,
buf += 4;
pos += 4;
cnt -= 4;
cond_resched();
}
if (cnt >= 2) {


@@ -1821,6 +1821,45 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7525_MCH, quir
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_HUAWEI, 0x1610, PCI_CLASS_BRIDGE_PCI, 8, quirk_pcie_mch);
/*
* HiSilicon KunPeng920 and KunPeng930 have devices appear as PCI but are
* actually on the AMBA bus. These fake PCI devices can support SVA via
* SMMU stall feature, by setting dma-can-stall for ACPI platforms.
*
* Normally stalling must not be enabled for PCI devices, since it would
* break the PCI requirement for free-flowing writes and may lead to
* deadlock. We expect PCI devices to support ATS and PRI if they want to
* be fault-tolerant, so there's no ACPI binding to describe anything else,
* even when a "PCI" device turns out to be a regular old SoC device
* dressed up as a RCiEP and normal rules don't apply.
*/
static void quirk_huawei_pcie_sva(struct pci_dev *pdev)
{
struct property_entry properties[] = {
PROPERTY_ENTRY_BOOL("dma-can-stall"),
{},
};
if (pdev->revision != 0x21 && pdev->revision != 0x30)
return;
pdev->pasid_no_tlp = 1;
/*
* Set the dma-can-stall property on ACPI platforms. Device tree
* can set it directly.
*/
if (!pdev->dev.of_node &&
device_add_properties(&pdev->dev, properties))
pci_warn(pdev, "could not add stall property");
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa250, quirk_huawei_pcie_sva);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa251, quirk_huawei_pcie_sva);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa255, quirk_huawei_pcie_sva);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa256, quirk_huawei_pcie_sva);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa258, quirk_huawei_pcie_sva);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0xa259, quirk_huawei_pcie_sva);
/*
* It's possible for the MSI to get corrupted if SHPC and ACPI are used
* together on certain PXH-based systems.
@@ -3235,12 +3274,13 @@ static void fixup_mpss_256(struct pci_dev *dev)
{
dev->pcie_mpss = 1; /* 256 bytes */
}
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
PCI_DEVICE_ID_SOLARFLARE_SFC4000A_0, fixup_mpss_256);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
PCI_DEVICE_ID_SOLARFLARE_SFC4000A_0, fixup_mpss_256);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_ASMEDIA, 0x0612, fixup_mpss_256);
/*
* Intel 5000 and 5100 Memory controllers have an erratum with read completion
@@ -3703,7 +3743,7 @@ DECLARE_PCI_FIXUP_SUSPEND_LATE(PCI_VENDOR_ID_INTEL,
* reset a single function if other methods (e.g. FLR, PM D0->D3) are
* not available.
*/
static int reset_intel_82599_sfp_virtfn(struct pci_dev *dev, int probe)
static int reset_intel_82599_sfp_virtfn(struct pci_dev *dev, bool probe)
{
/*
* http://www.intel.com/content/dam/doc/datasheet/82599-10-gbe-controller-datasheet.pdf
@@ -3725,7 +3765,7 @@ static int reset_intel_82599_sfp_virtfn(struct pci_dev *dev, int probe)
#define NSDE_PWR_STATE 0xd0100
#define IGD_OPERATION_TIMEOUT 10000 /* set timeout 10 seconds */
static int reset_ivb_igd(struct pci_dev *dev, int probe)
static int reset_ivb_igd(struct pci_dev *dev, bool probe)
{
void __iomem *mmio_base;
unsigned long timeout;
@@ -3768,7 +3808,7 @@ reset_complete:
}
/* Device-specific reset method for Chelsio T4-based adapters */
static int reset_chelsio_generic_dev(struct pci_dev *dev, int probe)
static int reset_chelsio_generic_dev(struct pci_dev *dev, bool probe)
{
u16 old_command;
u16 msix_flags;
@@ -3846,14 +3886,14 @@ static int reset_chelsio_generic_dev(struct pci_dev *dev, int probe)
* Chapter 3: NVMe control registers
* Chapter 7.3: Reset behavior
*/
static int nvme_disable_and_flr(struct pci_dev *dev, int probe)
static int nvme_disable_and_flr(struct pci_dev *dev, bool probe)
{
void __iomem *bar;
u16 cmd;
u32 cfg;
if (dev->class != PCI_CLASS_STORAGE_EXPRESS ||
!pcie_has_flr(dev) || !pci_resource_start(dev, 0))
pcie_reset_flr(dev, PCI_RESET_PROBE) || !pci_resource_start(dev, 0))
return -ENOTTY;
if (probe)
@@ -3920,15 +3960,12 @@ static int nvme_disable_and_flr(struct pci_dev *dev, int probe)
* device too soon after FLR. A 250ms delay after FLR has heuristically
* proven to produce reliably working results for device assignment cases.
*/
static int delay_250ms_after_flr(struct pci_dev *dev, int probe)
static int delay_250ms_after_flr(struct pci_dev *dev, bool probe)
{
if (!pcie_has_flr(dev))
return -ENOTTY;
if (probe)
return 0;
return pcie_reset_flr(dev, PCI_RESET_PROBE);
pcie_flr(dev);
pcie_reset_flr(dev, PCI_RESET_DO_RESET);
msleep(250);
@@ -3943,7 +3980,7 @@ static int delay_250ms_after_flr(struct pci_dev *dev, int probe)
#define HINIC_OPERATION_TIMEOUT 15000 /* 15 seconds */
/* Device-specific reset method for Huawei Intelligent NIC virtual functions */
static int reset_hinic_vf_dev(struct pci_dev *pdev, int probe)
static int reset_hinic_vf_dev(struct pci_dev *pdev, bool probe)
{
unsigned long timeout;
void __iomem *bar;
@@ -4020,7 +4057,7 @@ static const struct pci_dev_reset_methods pci_dev_reset_methods[] = {
* because when a host assigns a device to a guest VM, the host may need
* to reset the device but probably doesn't have a driver for it.
*/
int pci_dev_specific_reset(struct pci_dev *dev, int probe)
int pci_dev_specific_reset(struct pci_dev *dev, bool probe)
{
const struct pci_dev_reset_methods *i;
@@ -4615,6 +4652,18 @@ static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags)
PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
}
/*
* Each of these NXP Root Ports is in a Root Complex with a unique segment
* number and does provide isolation features to disable peer transactions
* and validate bus numbers in requests, but does not provide an ACS
* capability.
*/
static int pci_quirk_nxp_rp_acs(struct pci_dev *dev, u16 acs_flags)
{
return pci_acs_ctrl_enabled(acs_flags,
PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
}
static int pci_quirk_al_acs(struct pci_dev *dev, u16 acs_flags)
{
if (pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT)
@@ -4841,6 +4890,10 @@ static const struct pci_dev_acs_enabled {
{ 0x10df, 0x720, pci_quirk_mf_endpoint_acs }, /* Emulex Skyhawk-R */
/* Cavium ThunderX */
{ PCI_VENDOR_ID_CAVIUM, PCI_ANY_ID, pci_quirk_cavium_acs },
/* Cavium multi-function devices */
{ PCI_VENDOR_ID_CAVIUM, 0xA026, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_CAVIUM, 0xA059, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_CAVIUM, 0xA060, pci_quirk_mf_endpoint_acs },
/* APM X-Gene */
{ PCI_VENDOR_ID_AMCC, 0xE004, pci_quirk_xgene_acs },
/* Ampere Computing */
@@ -4861,6 +4914,39 @@ static const struct pci_dev_acs_enabled {
{ PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs },
/* NXP root ports, xx=16, 12, or 08 cores */
/* LX2xx0A : without security features + CAN-FD */
{ PCI_VENDOR_ID_NXP, 0x8d81, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8da1, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8d83, pci_quirk_nxp_rp_acs },
/* LX2xx0C : security features + CAN-FD */
{ PCI_VENDOR_ID_NXP, 0x8d80, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8da0, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8d82, pci_quirk_nxp_rp_acs },
/* LX2xx0E : security features + CAN */
{ PCI_VENDOR_ID_NXP, 0x8d90, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8db0, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8d92, pci_quirk_nxp_rp_acs },
/* LX2xx0N : without security features + CAN */
{ PCI_VENDOR_ID_NXP, 0x8d91, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8db1, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8d93, pci_quirk_nxp_rp_acs },
/* LX2xx2A : without security features + CAN-FD */
{ PCI_VENDOR_ID_NXP, 0x8d89, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8da9, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8d8b, pci_quirk_nxp_rp_acs },
/* LX2xx2C : security features + CAN-FD */
{ PCI_VENDOR_ID_NXP, 0x8d88, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8da8, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8d8a, pci_quirk_nxp_rp_acs },
/* LX2xx2E : security features + CAN */
{ PCI_VENDOR_ID_NXP, 0x8d98, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8db8, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8d9a, pci_quirk_nxp_rp_acs },
/* LX2xx2N : without security features + CAN */
{ PCI_VENDOR_ID_NXP, 0x8d99, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8db9, pci_quirk_nxp_rp_acs },
{ PCI_VENDOR_ID_NXP, 0x8d9b, pci_quirk_nxp_rp_acs },
/* Zhaoxin Root/Downstream Ports */
{ PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },
{ 0 }
@@ -5032,7 +5118,7 @@ static int pci_quirk_enable_intel_spt_pch_acs(struct pci_dev *dev)
ctrl |= (cap & PCI_ACS_CR);
ctrl |= (cap & PCI_ACS_UF);
if (dev->external_facing || dev->untrusted)
if (pci_ats_disabled() || dev->external_facing || dev->untrusted)
ctrl |= (cap & PCI_ACS_TB);
pci_write_config_dword(dev, pos + INTEL_SPT_ACS_CTRL, ctrl);
@@ -5630,7 +5716,7 @@ static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
if (pdev->subsystem_vendor != PCI_VENDOR_ID_LENOVO ||
pdev->subsystem_device != 0x222e ||
!pdev->reset_fn)
!pci_reset_supported(pdev))
return;
if (pci_enable_device_mem(pdev))


@@ -19,7 +19,6 @@ static void pci_stop_dev(struct pci_dev *dev)
pci_pme_active(dev, false);
if (pci_dev_is_added(dev)) {
dev->reset_fn = 0;
device_release_driver(&dev->dev);
pci_proc_detach_device(dev);


@@ -19,11 +19,12 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
u8 byte;
u16 word;
u32 dword;
long err;
int cfg_ret;
int err, cfg_ret;
err = -EPERM;
dev = NULL;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
goto error;
err = -ENODEV;
dev = pci_get_domain_bus_and_slot(0, bus, dfn);


@@ -9,116 +9,94 @@
#include <linux/delay.h>
#include <linux/export.h>
#include <linux/sched/signal.h>
#include <asm/unaligned.h>
#include "pci.h"
#define PCI_VPD_LRDT_TAG_SIZE 3
#define PCI_VPD_SRDT_LEN_MASK 0x07
#define PCI_VPD_SRDT_TAG_SIZE 1
#define PCI_VPD_STIN_END 0x0f
#define PCI_VPD_INFO_FLD_HDR_SIZE 3
static u16 pci_vpd_lrdt_size(const u8 *lrdt)
{
return get_unaligned_le16(lrdt + 1);
}
static u8 pci_vpd_srdt_tag(const u8 *srdt)
{
return *srdt >> 3;
}
static u8 pci_vpd_srdt_size(const u8 *srdt)
{
return *srdt & PCI_VPD_SRDT_LEN_MASK;
}
static u8 pci_vpd_info_field_size(const u8 *info_field)
{
return info_field[2];
}
/* VPD access through PCI 2.2+ VPD capability */
struct pci_vpd_ops {
ssize_t (*read)(struct pci_dev *dev, loff_t pos, size_t count, void *buf);
ssize_t (*write)(struct pci_dev *dev, loff_t pos, size_t count, const void *buf);
};
struct pci_vpd {
const struct pci_vpd_ops *ops;
struct mutex lock;
unsigned int len;
u16 flag;
u8 cap;
unsigned int busy:1;
unsigned int valid:1;
};
static struct pci_dev *pci_get_func0_dev(struct pci_dev *dev)
{
return pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
}
/**
* pci_read_vpd - Read one entry from Vital Product Data
* @dev: pci device struct
* @pos: offset in vpd space
* @count: number of bytes to read
* @buf: pointer to where to store result
*/
ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf)
{
if (!dev->vpd || !dev->vpd->ops)
return -ENODEV;
return dev->vpd->ops->read(dev, pos, count, buf);
}
EXPORT_SYMBOL(pci_read_vpd);
/**
* pci_write_vpd - Write entry to Vital Product Data
* @dev: pci device struct
* @pos: offset in vpd space
* @count: number of bytes to write
* @buf: buffer containing write data
*/
ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf)
{
if (!dev->vpd || !dev->vpd->ops)
return -ENODEV;
return dev->vpd->ops->write(dev, pos, count, buf);
}
EXPORT_SYMBOL(pci_write_vpd);
#define PCI_VPD_MAX_SIZE (PCI_VPD_ADDR_MASK + 1)
#define PCI_VPD_SZ_INVALID UINT_MAX
/**
* pci_vpd_size - determine actual size of Vital Product Data
* @dev: pci device struct
* @old_size: current assumed size, also maximum allowed size
*/
static size_t pci_vpd_size(struct pci_dev *dev, size_t old_size)
static size_t pci_vpd_size(struct pci_dev *dev)
{
size_t off = 0;
unsigned char header[1+2]; /* 1 byte tag, 2 bytes length */
size_t off = 0, size;
unsigned char tag, header[1+2]; /* 1 byte tag, 2 bytes length */
while (off < old_size && pci_read_vpd(dev, off, 1, header) == 1) {
unsigned char tag;
/* Otherwise the following reads would fail. */
dev->vpd.len = PCI_VPD_MAX_SIZE;
if (!header[0] && !off) {
pci_info(dev, "Invalid VPD tag 00, assume missing optional VPD EPROM\n");
return 0;
}
while (pci_read_vpd(dev, off, 1, header) == 1) {
size = 0;
if (off == 0 && (header[0] == 0x00 || header[0] == 0xff))
goto error;
if (header[0] & PCI_VPD_LRDT) {
/* Large Resource Data Type Tag */
tag = pci_vpd_lrdt_tag(header);
/* Only read length from known tag items */
if ((tag == PCI_VPD_LTIN_ID_STRING) ||
(tag == PCI_VPD_LTIN_RO_DATA) ||
(tag == PCI_VPD_LTIN_RW_DATA)) {
if (pci_read_vpd(dev, off+1, 2,
&header[1]) != 2) {
pci_warn(dev, "invalid large VPD tag %02x size at offset %zu",
tag, off + 1);
return 0;
}
off += PCI_VPD_LRDT_TAG_SIZE +
pci_vpd_lrdt_size(header);
if (pci_read_vpd(dev, off + 1, 2, &header[1]) != 2) {
pci_warn(dev, "failed VPD read at offset %zu\n",
off + 1);
return off ?: PCI_VPD_SZ_INVALID;
}
size = pci_vpd_lrdt_size(header);
if (off + size > PCI_VPD_MAX_SIZE)
goto error;
off += PCI_VPD_LRDT_TAG_SIZE + size;
} else {
/* Short Resource Data Type Tag */
off += PCI_VPD_SRDT_TAG_SIZE +
pci_vpd_srdt_size(header);
tag = pci_vpd_srdt_tag(header);
}
size = pci_vpd_srdt_size(header);
if (off + size > PCI_VPD_MAX_SIZE)
goto error;
if (tag == PCI_VPD_STIN_END) /* End tag descriptor */
return off;
if ((tag != PCI_VPD_LTIN_ID_STRING) &&
(tag != PCI_VPD_LTIN_RO_DATA) &&
(tag != PCI_VPD_LTIN_RW_DATA)) {
pci_warn(dev, "invalid %s VPD tag %02x at offset %zu",
(header[0] & PCI_VPD_LRDT) ? "large" : "short",
tag, off);
return 0;
off += PCI_VPD_SRDT_TAG_SIZE + size;
if (tag == PCI_VPD_STIN_END) /* End tag descriptor */
return off;
}
}
return 0;
return off;
error:
pci_info(dev, "invalid VPD tag %#04x (size %zu) at offset %zu%s\n",
header[0], size, off, off == 0 ?
"; assume missing optional EEPROM" : "");
return off ?: PCI_VPD_SZ_INVALID;
}
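
To make the tag walk concrete, a hand-built VPD image that the new
pci_vpd_size() would size at 11 bytes (illustrative only, not from the
patch):

	static const u8 vpd_example[] = {
		0x82, 0x04, 0x00,	/* large tag: ID string, length 4 */
		'T', 'E', 'S', 'T',
		0x90, 0x00, 0x00,	/* large tag: RO data, length 0 */
		0x78,			/* small tag: end (0x0f << 3, len 0) */
	};

The walk consumes 3 + 4 bytes for the ID string and 3 + 0 bytes for the RO
section, then returns on the end tag: off = 7 + 3 + 1 = 11.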
/*
@@ -126,33 +104,26 @@ static size_t pci_vpd_size(struct pci_dev *dev, size_t old_size)
* This code has to spin since there is no other notification from the PCI
* hardware. Since the VPD is often implemented by serial attachment to an
* EEPROM, it may take many milliseconds to complete.
* @set: if true wait for flag to be set, else wait for it to be cleared
*
* Returns 0 on success, negative values indicate error.
*/
static int pci_vpd_wait(struct pci_dev *dev)
static int pci_vpd_wait(struct pci_dev *dev, bool set)
{
struct pci_vpd *vpd = dev->vpd;
struct pci_vpd *vpd = &dev->vpd;
unsigned long timeout = jiffies + msecs_to_jiffies(125);
unsigned long max_sleep = 16;
u16 status;
int ret;
if (!vpd->busy)
return 0;
do {
ret = pci_user_read_config_word(dev, vpd->cap + PCI_VPD_ADDR,
&status);
if (ret < 0)
return ret;
if ((status & PCI_VPD_ADDR_F) == vpd->flag) {
vpd->busy = 0;
if (!!(status & PCI_VPD_ADDR_F) == set)
return 0;
}
if (fatal_signal_pending(current))
return -EINTR;
if (time_after(jiffies, timeout))
break;
@@ -169,22 +140,17 @@ static int pci_vpd_wait(struct pci_dev *dev)
static ssize_t pci_vpd_read(struct pci_dev *dev, loff_t pos, size_t count,
void *arg)
{
struct pci_vpd *vpd = dev->vpd;
int ret;
struct pci_vpd *vpd = &dev->vpd;
int ret = 0;
loff_t end = pos + count;
u8 *buf = arg;
if (!vpd->cap)
return -ENODEV;
if (pos < 0)
return -EINVAL;
if (!vpd->valid) {
vpd->valid = 1;
vpd->len = pci_vpd_size(dev, vpd->len);
}
if (vpd->len == 0)
return -EIO;
if (pos > vpd->len)
return 0;
@@ -196,21 +162,20 @@ static ssize_t pci_vpd_read(struct pci_dev *dev, loff_t pos, size_t count,
if (mutex_lock_killable(&vpd->lock))
return -EINTR;
ret = pci_vpd_wait(dev);
if (ret < 0)
goto out;
while (pos < end) {
u32 val;
unsigned int i, skip;
if (fatal_signal_pending(current)) {
ret = -EINTR;
break;
}
ret = pci_user_write_config_word(dev, vpd->cap + PCI_VPD_ADDR,
pos & ~3);
if (ret < 0)
break;
vpd->busy = 1;
vpd->flag = PCI_VPD_ADDR_F;
ret = pci_vpd_wait(dev);
ret = pci_vpd_wait(dev, true);
if (ret < 0)
break;
@@ -228,7 +193,7 @@ static ssize_t pci_vpd_read(struct pci_dev *dev, loff_t pos, size_t count,
val >>= 8;
}
}
out:
mutex_unlock(&vpd->lock);
return ret ? ret : count;
}
@@ -236,41 +201,26 @@ out:
static ssize_t pci_vpd_write(struct pci_dev *dev, loff_t pos, size_t count,
const void *arg)
{
struct pci_vpd *vpd = dev->vpd;
struct pci_vpd *vpd = &dev->vpd;
const u8 *buf = arg;
loff_t end = pos + count;
int ret = 0;
if (!vpd->cap)
return -ENODEV;
if (pos < 0 || (pos & 3) || (count & 3))
return -EINVAL;
if (!vpd->valid) {
vpd->valid = 1;
vpd->len = pci_vpd_size(dev, vpd->len);
}
if (vpd->len == 0)
return -EIO;
if (end > vpd->len)
return -EINVAL;
if (mutex_lock_killable(&vpd->lock))
return -EINTR;
ret = pci_vpd_wait(dev);
if (ret < 0)
goto out;
while (pos < end) {
u32 val;
val = *buf++;
val |= *buf++ << 8;
val |= *buf++ << 16;
val |= *buf++ << 24;
ret = pci_user_write_config_dword(dev, vpd->cap + PCI_VPD_DATA, val);
ret = pci_user_write_config_dword(dev, vpd->cap + PCI_VPD_DATA,
get_unaligned_le32(buf));
if (ret < 0)
break;
ret = pci_user_write_config_word(dev, vpd->cap + PCI_VPD_ADDR,
@@ -278,85 +228,28 @@ static ssize_t pci_vpd_write(struct pci_dev *dev, loff_t pos, size_t count,
if (ret < 0)
break;
vpd->busy = 1;
vpd->flag = 0;
ret = pci_vpd_wait(dev);
ret = pci_vpd_wait(dev, false);
if (ret < 0)
break;
buf += sizeof(u32);
pos += sizeof(u32);
}
out:
mutex_unlock(&vpd->lock);
return ret ? ret : count;
}
static const struct pci_vpd_ops pci_vpd_ops = {
.read = pci_vpd_read,
.write = pci_vpd_write,
};
static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count,
void *arg)
{
struct pci_dev *tdev = pci_get_func0_dev(dev);
ssize_t ret;
if (!tdev)
return -ENODEV;
ret = pci_read_vpd(tdev, pos, count, arg);
pci_dev_put(tdev);
return ret;
}
static ssize_t pci_vpd_f0_write(struct pci_dev *dev, loff_t pos, size_t count,
const void *arg)
{
struct pci_dev *tdev = pci_get_func0_dev(dev);
ssize_t ret;
if (!tdev)
return -ENODEV;
ret = pci_write_vpd(tdev, pos, count, arg);
pci_dev_put(tdev);
return ret;
}
static const struct pci_vpd_ops pci_vpd_f0_ops = {
.read = pci_vpd_f0_read,
.write = pci_vpd_f0_write,
};
void pci_vpd_init(struct pci_dev *dev)
{
struct pci_vpd *vpd;
u8 cap;
dev->vpd.cap = pci_find_capability(dev, PCI_CAP_ID_VPD);
mutex_init(&dev->vpd.lock);
cap = pci_find_capability(dev, PCI_CAP_ID_VPD);
if (!cap)
return;
if (!dev->vpd.len)
dev->vpd.len = pci_vpd_size(dev);
vpd = kzalloc(sizeof(*vpd), GFP_ATOMIC);
if (!vpd)
return;
vpd->len = PCI_VPD_MAX_SIZE;
if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0)
vpd->ops = &pci_vpd_f0_ops;
else
vpd->ops = &pci_vpd_ops;
mutex_init(&vpd->lock);
vpd->cap = cap;
vpd->busy = 0;
vpd->valid = 0;
dev->vpd = vpd;
}
void pci_vpd_release(struct pci_dev *dev)
{
kfree(dev->vpd);
if (dev->vpd.len == PCI_VPD_SZ_INVALID)
dev->vpd.cap = 0;
}
static ssize_t vpd_read(struct file *filp, struct kobject *kobj,
@@ -388,7 +281,7 @@ static umode_t vpd_attr_is_visible(struct kobject *kobj,
{
struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
if (!pdev->vpd)
if (!pdev->vpd.cap)
return 0;
return a->attr.mode;
@@ -399,23 +292,63 @@ const struct attribute_group pci_dev_vpd_attr_group = {
.is_bin_visible = vpd_attr_is_visible,
};
int pci_vpd_find_tag(const u8 *buf, unsigned int len, u8 rdt)
void *pci_vpd_alloc(struct pci_dev *dev, unsigned int *size)
{
unsigned int len = dev->vpd.len;
void *buf;
int cnt;
if (!dev->vpd.cap)
return ERR_PTR(-ENODEV);
buf = kmalloc(len, GFP_KERNEL);
if (!buf)
return ERR_PTR(-ENOMEM);
cnt = pci_read_vpd(dev, 0, len, buf);
if (cnt != len) {
kfree(buf);
return ERR_PTR(-EIO);
}
if (size)
*size = len;
return buf;
}
EXPORT_SYMBOL_GPL(pci_vpd_alloc);
static int pci_vpd_find_tag(const u8 *buf, unsigned int len, u8 rdt, unsigned int *size)
{
int i = 0;
/* look for LRDT tags only, end tag is the only SRDT tag */
while (i + PCI_VPD_LRDT_TAG_SIZE <= len && buf[i] & PCI_VPD_LRDT) {
if (buf[i] == rdt)
return i;
unsigned int lrdt_len = pci_vpd_lrdt_size(buf + i);
u8 tag = buf[i];
i += PCI_VPD_LRDT_TAG_SIZE + pci_vpd_lrdt_size(buf + i);
i += PCI_VPD_LRDT_TAG_SIZE;
if (tag == rdt) {
if (i + lrdt_len > len)
lrdt_len = len - i;
if (size)
*size = lrdt_len;
return i;
}
i += lrdt_len;
}
return -ENOENT;
}
EXPORT_SYMBOL_GPL(pci_vpd_find_tag);
int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off,
int pci_vpd_find_id_string(const u8 *buf, unsigned int len, unsigned int *size)
{
return pci_vpd_find_tag(buf, len, PCI_VPD_LRDT_ID_STRING, size);
}
EXPORT_SYMBOL_GPL(pci_vpd_find_id_string);
static int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off,
unsigned int len, const char *kw)
{
int i;
@@ -431,7 +364,106 @@ int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off,
return -ENOENT;
}
EXPORT_SYMBOL_GPL(pci_vpd_find_info_keyword);
/**
* pci_read_vpd - Read one entry from Vital Product Data
* @dev: PCI device struct
* @pos: offset in VPD space
* @count: number of bytes to read
* @buf: pointer to where to store result
*/
ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf)
{
ssize_t ret;
if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) {
dev = pci_get_func0_dev(dev);
if (!dev)
return -ENODEV;
ret = pci_vpd_read(dev, pos, count, buf);
pci_dev_put(dev);
return ret;
}
return pci_vpd_read(dev, pos, count, buf);
}
EXPORT_SYMBOL(pci_read_vpd);
/**
* pci_write_vpd - Write entry to Vital Product Data
* @dev: PCI device struct
* @pos: offset in VPD space
* @count: number of bytes to write
* @buf: buffer containing write data
*/
ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf)
{
ssize_t ret;
if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) {
dev = pci_get_func0_dev(dev);
if (!dev)
return -ENODEV;
ret = pci_vpd_write(dev, pos, count, buf);
pci_dev_put(dev);
return ret;
}
return pci_vpd_write(dev, pos, count, buf);
}
EXPORT_SYMBOL(pci_write_vpd);
int pci_vpd_find_ro_info_keyword(const void *buf, unsigned int len,
const char *kw, unsigned int *size)
{
int ro_start, infokw_start;
unsigned int ro_len, infokw_size;
ro_start = pci_vpd_find_tag(buf, len, PCI_VPD_LRDT_RO_DATA, &ro_len);
if (ro_start < 0)
return ro_start;
infokw_start = pci_vpd_find_info_keyword(buf, ro_start, ro_len, kw);
if (infokw_start < 0)
return infokw_start;
infokw_size = pci_vpd_info_field_size(buf + infokw_start);
infokw_start += PCI_VPD_INFO_FLD_HDR_SIZE;
if (infokw_start + infokw_size > len)
return -EINVAL;
if (size)
*size = infokw_size;
return infokw_start;
}
EXPORT_SYMBOL_GPL(pci_vpd_find_ro_info_keyword);
int pci_vpd_check_csum(const void *buf, unsigned int len)
{
const u8 *vpd = buf;
unsigned int size;
u8 csum = 0;
int rv_start;
rv_start = pci_vpd_find_ro_info_keyword(buf, len, PCI_VPD_RO_KEYWORD_CHKSUM, &size);
if (rv_start == -ENOENT) /* no checksum in VPD */
return 1;
else if (rv_start < 0)
return rv_start;
if (!size)
return -EINVAL;
while (rv_start >= 0)
csum += vpd[rv_start--];
return csum ? -EILSEQ : 0;
}
EXPORT_SYMBOL_GPL(pci_vpd_check_csum);
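
Taken together, the exported helpers let a driver replace hand-rolled tag
walking with a few calls. A usage sketch (error handling trimmed; "pdev" is
the caller's device):

	unsigned int vpd_len, kw_len;
	int off;
	void *vpd = pci_vpd_alloc(pdev, &vpd_len);

	if (IS_ERR(vpd))
		return PTR_ERR(vpd);
	if (pci_vpd_check_csum(vpd, vpd_len))
		pci_warn(pdev, "VPD checksum missing or invalid\n");
	off = pci_vpd_find_ro_info_keyword(vpd, vpd_len,
					   PCI_VPD_RO_KEYWORD_SERIALNO, &kw_len);
	if (off >= 0)
		pci_info(pdev, "serial: %.*s\n", (int)kw_len, (char *)vpd + off);
	kfree(vpd);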
#ifdef CONFIG_PCI_QUIRKS
/*
@@ -450,7 +482,7 @@ static void quirk_f0_vpd_link(struct pci_dev *dev)
if (!f0)
return;
if (f0->vpd && dev->class == f0->class &&
if (f0->vpd.cap && dev->class == f0->class &&
dev->vendor == f0->vendor && dev->device == f0->device)
dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0;
@@ -468,41 +500,27 @@ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
*/
static void quirk_blacklist_vpd(struct pci_dev *dev)
{
if (dev->vpd) {
dev->vpd->len = 0;
pci_warn(dev, FW_BUG "disabling VPD access (can't determine size of non-standard VPD format)\n");
}
dev->vpd.len = PCI_VPD_SZ_INVALID;
pci_warn(dev, FW_BUG "disabling VPD access (can't determine size of non-standard VPD format)\n");
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0060, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x007c, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0413, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0078, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0079, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0073, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0071, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005b, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x002f, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID,
quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0060, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x007c, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0413, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0078, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0079, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0073, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x0071, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x005b, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x002f, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID, quirk_blacklist_vpd);
/*
* The Amazon Annapurna Labs 0x0031 device id is reused for other non Root Port
* device types, so the quirk is registered for the PCI_CLASS_BRIDGE_PCI class.
*/
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031,
PCI_CLASS_BRIDGE_PCI, 8, quirk_blacklist_vpd);
static void pci_vpd_set_size(struct pci_dev *dev, size_t len)
{
struct pci_vpd *vpd = dev->vpd;
if (!vpd || len == 0 || len > PCI_VPD_MAX_SIZE)
return;
vpd->valid = 1;
vpd->len = len;
}
DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031,
PCI_CLASS_BRIDGE_PCI, 8, quirk_blacklist_vpd);
static void quirk_chelsio_extend_vpd(struct pci_dev *dev)
{
@@ -522,12 +540,12 @@ static void quirk_chelsio_extend_vpd(struct pci_dev *dev)
* limits.
*/
if (chip == 0x0 && prod >= 0x20)
pci_vpd_set_size(dev, 8192);
dev->vpd.len = 8192;
else if (chip >= 0x4 && func < 0x8)
pci_vpd_set_size(dev, 2048);
dev->vpd.len = 2048;
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
quirk_chelsio_extend_vpd);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
quirk_chelsio_extend_vpd);
#endif


@@ -1629,8 +1629,8 @@ static int read_vpd(struct cxlflash_cfg *cfg, u64 wwpn[])
{
struct device *dev = &cfg->dev->dev;
struct pci_dev *pdev = cfg->dev;
int rc = 0;
int ro_start, ro_size, i, j, k;
int i, k, rc = 0;
unsigned int kw_size;
ssize_t vpd_size;
char vpd_data[CXLFLASH_VPD_LEN];
char tmp_buf[WWPN_BUF_LEN] = { 0 };
@@ -1648,24 +1648,6 @@ static int read_vpd(struct cxlflash_cfg *cfg, u64 wwpn[])
goto out;
}
/* Get the read only section offset */
ro_start = pci_vpd_find_tag(vpd_data, vpd_size, PCI_VPD_LRDT_RO_DATA);
if (unlikely(ro_start < 0)) {
dev_err(dev, "%s: VPD Read-only data not found\n", __func__);
rc = -ENODEV;
goto out;
}
/* Get the read only section size, cap when extends beyond read VPD */
ro_size = pci_vpd_lrdt_size(&vpd_data[ro_start]);
j = ro_size;
i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
if (unlikely((i + j) > vpd_size)) {
dev_dbg(dev, "%s: Might need to read more VPD (%d > %ld)\n",
__func__, (i + j), vpd_size);
ro_size = vpd_size - i;
}
/*
* Find the offset of the WWPN tag within the read only
* VPD data and validate the found field (partials are
@@ -1681,11 +1663,9 @@ static int read_vpd(struct cxlflash_cfg *cfg, u64 wwpn[])
* ports programmed and operate in an undefined state.
*/
for (k = 0; k < cfg->num_fc_ports; k++) {
j = ro_size;
i = ro_start + PCI_VPD_LRDT_TAG_SIZE;
i = pci_vpd_find_info_keyword(vpd_data, i, j, wwpn_vpd_tags[k]);
if (i < 0) {
i = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
wwpn_vpd_tags[k], &kw_size);
if (i == -ENOENT) {
if (wwpn_vpd_required)
dev_err(dev, "%s: Port %d WWPN not found\n",
__func__, k);
@@ -1693,9 +1673,7 @@ static int read_vpd(struct cxlflash_cfg *cfg, u64 wwpn[])
continue;
}
j = pci_vpd_info_field_size(&vpd_data[i]);
i += PCI_VPD_INFO_FLD_HDR_SIZE;
if (unlikely((i + j > vpd_size) || (j != WWPN_LEN))) {
if (i < 0 || kw_size != WWPN_LEN) {
dev_err(dev, "%s: Port %d WWPN incomplete or bad VPD\n",
__func__, k);
rc = -ENODEV;


@@ -52,4 +52,4 @@ static inline void __iomem *pci_iomap_wc_range(struct pci_dev *dev, int bar,
}
#endif
#endif /* __ASM_GENERIC_IO_H */
#endif /* __ASM_GENERIC_PCI_IOMAP_H */


@@ -62,31 +62,32 @@ pci_epc_interface_string(enum pci_epc_interface_type type)
* @owner: the module owner containing the ops
*/
struct pci_epc_ops {
int (*write_header)(struct pci_epc *epc, u8 func_no,
int (*write_header)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_header *hdr);
int (*set_bar)(struct pci_epc *epc, u8 func_no,
int (*set_bar)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar);
void (*clear_bar)(struct pci_epc *epc, u8 func_no,
void (*clear_bar)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar);
int (*map_addr)(struct pci_epc *epc, u8 func_no,
int (*map_addr)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t addr, u64 pci_addr, size_t size);
void (*unmap_addr)(struct pci_epc *epc, u8 func_no,
void (*unmap_addr)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t addr);
int (*set_msi)(struct pci_epc *epc, u8 func_no, u8 interrupts);
int (*get_msi)(struct pci_epc *epc, u8 func_no);
int (*set_msix)(struct pci_epc *epc, u8 func_no, u16 interrupts,
enum pci_barno, u32 offset);
int (*get_msix)(struct pci_epc *epc, u8 func_no);
int (*raise_irq)(struct pci_epc *epc, u8 func_no,
int (*set_msi)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
u8 interrupts);
int (*get_msi)(struct pci_epc *epc, u8 func_no, u8 vfunc_no);
int (*set_msix)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
u16 interrupts, enum pci_barno, u32 offset);
int (*get_msix)(struct pci_epc *epc, u8 func_no, u8 vfunc_no);
int (*raise_irq)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
enum pci_epc_irq_type type, u16 interrupt_num);
int (*map_msi_irq)(struct pci_epc *epc, u8 func_no,
int (*map_msi_irq)(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t phys_addr, u8 interrupt_num,
u32 entry_size, u32 *msi_data,
u32 *msi_addr_offset);
int (*start)(struct pci_epc *epc);
void (*stop)(struct pci_epc *epc);
const struct pci_epc_features* (*get_features)(struct pci_epc *epc,
u8 func_no);
u8 func_no, u8 vfunc_no);
struct module *owner;
};
@@ -128,6 +129,8 @@ struct pci_epc_mem {
* single window.
* @num_windows: number of windows supported by device
* @max_functions: max number of functions that can be configured in this EPC
* @max_vfs: Array indicating the maximum number of virtual functions that can
* be associated with each physical function
* @group: configfs group representing the PCI EPC device
* @lock: mutex to protect pci_epc ops
* @function_num_map: bitmap to manage physical function number
@@ -141,6 +144,7 @@ struct pci_epc {
struct pci_epc_mem *mem;
unsigned int num_windows;
u8 max_functions;
u8 *max_vfs;
struct config_group *group;
/* mutex to protect against concurrent access of EP controller */
struct mutex lock;
@@ -208,31 +212,32 @@ void pci_epc_linkup(struct pci_epc *epc);
void pci_epc_init_notify(struct pci_epc *epc);
void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
enum pci_epc_interface_type type);
int pci_epc_write_header(struct pci_epc *epc, u8 func_no,
int pci_epc_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_header *hdr);
int pci_epc_set_bar(struct pci_epc *epc, u8 func_no,
int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar);
void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no,
void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar);
int pci_epc_map_addr(struct pci_epc *epc, u8 func_no,
int pci_epc_map_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t phys_addr,
u64 pci_addr, size_t size);
void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no,
void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t phys_addr);
int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts);
int pci_epc_get_msi(struct pci_epc *epc, u8 func_no);
int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
enum pci_barno, u32 offset);
int pci_epc_get_msix(struct pci_epc *epc, u8 func_no);
int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no,
int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
u8 interrupts);
int pci_epc_get_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no);
int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
u16 interrupts, enum pci_barno, u32 offset);
int pci_epc_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no);
int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t phys_addr, u8 interrupt_num,
u32 entry_size, u32 *msi_data, u32 *msi_addr_offset);
int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
enum pci_epc_irq_type type, u16 interrupt_num);
int pci_epc_start(struct pci_epc *epc);
void pci_epc_stop(struct pci_epc *epc);
const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc,
u8 func_no);
u8 func_no, u8 vfunc_no);
enum pci_barno
pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features);
enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features


@@ -121,8 +121,10 @@ struct pci_epf_bar {
* @bar: represents the BAR of EPF device
* @msi_interrupts: number of MSI interrupts required by this function
* @msix_interrupts: number of MSI-X interrupts required by this function
* @func_no: unique function number within this endpoint device
* @func_no: unique (physical) function number within this endpoint device
* @vfunc_no: unique virtual function number within a physical function
* @epc: the EPC device to which this EPF device is bound
* @epf_pf: the physical EPF device to which this virtual EPF device is bound
* @driver: the EPF driver to which this EPF device is bound
* @list: to add pci_epf as a list of PCI endpoint functions to pci_epc
* @nb: notifier block to notify EPF of any EPC events (like linkup)
@@ -133,6 +135,10 @@ struct pci_epf_bar {
* @sec_epc_bar: represents the BAR of EPF device associated with secondary EPC
* @sec_epc_func_no: unique (physical) function number within the secondary EPC
* @group: configfs group associated with the EPF device
* @is_bound: indicates if bind notification to function driver has been invoked
* @is_vf: true - virtual function, false - physical function
* @vfunction_num_map: bitmap to manage virtual function number
* @pci_vepf: list of virtual endpoint functions associated with this function
*/
struct pci_epf {
struct device dev;
@@ -142,8 +148,10 @@ struct pci_epf {
u8 msi_interrupts;
u16 msix_interrupts;
u8 func_no;
u8 vfunc_no;
struct pci_epc *epc;
struct pci_epf *epf_pf;
struct pci_epf_driver *driver;
struct list_head list;
struct notifier_block nb;
@@ -156,6 +164,10 @@ struct pci_epf {
struct pci_epf_bar sec_epc_bar[6];
u8 sec_epc_func_no;
struct config_group *group;
unsigned int is_bound;
unsigned int is_vf;
unsigned long vfunction_num_map;
struct list_head pci_vepf;
};
/**
@@ -199,4 +211,6 @@ int pci_epf_bind(struct pci_epf *epf);
void pci_epf_unbind(struct pci_epf *epf);
struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
struct config_group *group);
int pci_epf_add_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf);
void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf);
#endif /* __LINUX_PCI_EPF_H */
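
To illustrate the flow these additions enable, a sketch under assumed
driver context ("epf_pf" and "epf_vf" are two EPF devices, "hdr" a
pci_epf_header the caller filled in):

	/* attach a virtual function to its parent physical function;
	 * this allocates epf_vf->vfunc_no from vfunction_num_map */
	ret = pci_epf_add_vepf(epf_pf, epf_vf);

	/* EPC ops are then addressed by a (func_no, vfunc_no) pair */
	ret = pci_epc_write_header(epc, epf_pf->func_no, epf_vf->vfunc_no,
				   &hdr);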


@@ -49,6 +49,12 @@
PCI_STATUS_SIG_TARGET_ABORT | \
PCI_STATUS_PARITY)
/* Number of reset methods used in pci_reset_fn_methods array in pci.c */
#define PCI_NUM_RESET_METHODS 7
#define PCI_RESET_PROBE true
#define PCI_RESET_DO_RESET false
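
Under the new convention every reset callback takes "bool probe": called
with PCI_RESET_PROBE it only reports whether the method is usable, called
with PCI_RESET_DO_RESET it performs the reset. The shape of a callback
(hypothetical device, not from the patch):

	static int example_reset(struct pci_dev *dev, bool probe)
	{
		if (probe)		/* PCI_RESET_PROBE: report availability */
			return 0;	/* or -ENOTTY if unsupported */

		/* PCI_RESET_DO_RESET: actually reset the device here */
		return 0;
	}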
/*
* The PCI interface treats multi-function devices as independent
* devices. The slot/function address of each device is encoded
@@ -288,21 +294,14 @@ enum pci_bus_speed {
enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev);
struct pci_cap_saved_data {
u16 cap_nr;
bool cap_extended;
unsigned int size;
u32 data[];
};
struct pci_cap_saved_state {
struct hlist_node next;
struct pci_cap_saved_data cap;
struct pci_vpd {
struct mutex lock;
unsigned int len;
u8 cap;
};
struct irq_affinity;
struct pcie_link_state;
struct pci_vpd;
struct pci_sriov;
struct pci_p2pdma;
struct rcec_ea;
@@ -333,6 +332,7 @@ struct pci_dev {
struct rcec_ea *rcec_ea; /* RCEC cached endpoint association */
struct pci_dev *rcec; /* Associated RCEC device */
#endif
u32 devcap; /* PCIe Device Capabilities */
u8 pcie_cap; /* PCIe capability offset */
u8 msi_cap; /* MSI capability offset */
u8 msix_cap; /* MSI-X capability offset */
@@ -388,6 +388,7 @@ struct pci_dev {
supported from root to here */
u16 l1ss; /* L1SS Capability pointer */
#endif
unsigned int pasid_no_tlp:1; /* PASID works without TLP Prefix */
unsigned int eetlp_prefix_path:1; /* End-to-End TLP Prefix */
pci_channel_state_t error_state; /* Current connectivity state */
@@ -427,7 +428,6 @@ struct pci_dev {
unsigned int state_saved:1;
unsigned int is_physfn:1;
unsigned int is_virtfn:1;
unsigned int reset_fn:1;
unsigned int is_hotplug_bridge:1;
unsigned int shpc_managed:1; /* SHPC owned by shpchp */
unsigned int is_thunderbolt:1; /* Thunderbolt controller */
@@ -473,7 +473,7 @@ struct pci_dev {
#ifdef CONFIG_PCI_MSI
const struct attribute_group **msi_irq_groups;
#endif
struct pci_vpd *vpd;
struct pci_vpd vpd;
#ifdef CONFIG_PCIE_DPC
u16 dpc_cap;
unsigned int dpc_rp_extensions:1;
@@ -505,6 +505,9 @@ struct pci_dev {
char *driver_override; /* Driver name to force a match */
unsigned long priv_flags; /* Private flags for the PCI driver */
/* These methods index pci_reset_fn_methods[] */
u8 reset_methods[PCI_NUM_RESET_METHODS]; /* In priority order */
};
static inline struct pci_dev *pci_physfn(struct pci_dev *dev)
@@ -526,6 +529,16 @@ static inline int pci_channel_offline(struct pci_dev *pdev)
return (pdev->error_state != pci_channel_io_normal);
}
/*
* Currently in ACPI spec, for each PCI host bridge, PCI Segment
* Group number is limited to a 16-bit value, therefore (int)-1 is
* not a valid PCI domain number, and can be used as a sentinel
* value indicating ->domain_nr is not set by the driver (and
* CONFIG_PCI_DOMAINS_GENERIC=y archs will set it with
* pci_bus_find_domain_nr()).
*/
#define PCI_DOMAIN_NR_NOT_SET (-1)
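
A host bridge driver that learns its segment number from firmware can now
publish it directly rather than leaving it to pci_bus_find_domain_nr(); a
sketch where "segment" is assumed to come from the driver's firmware
parsing:

	bridge->domain_nr = segment;	/* default is PCI_DOMAIN_NR_NOT_SET */
	return pci_host_probe(bridge);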
struct pci_host_bridge {
struct device dev;
struct pci_bus *bus; /* Root bus */
@@ -533,6 +546,7 @@ struct pci_host_bridge {
struct pci_ops *child_ops;
void *sysdata;
int busnr;
int domain_nr;
struct list_head windows; /* resource_entry */
struct list_head dma_ranges; /* dma ranges resource list */
u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
@@ -1257,7 +1271,7 @@ u32 pcie_bandwidth_available(struct pci_dev *dev, struct pci_dev **limiting_dev,
enum pci_bus_speed *speed,
enum pcie_link_width *width);
void pcie_print_link_status(struct pci_dev *dev);
bool pcie_has_flr(struct pci_dev *dev);
int pcie_reset_flr(struct pci_dev *dev, bool probe);
int pcie_flr(struct pci_dev *dev);
int __pci_reset_function_locked(struct pci_dev *dev);
int pci_reset_function(struct pci_dev *dev);
@@ -1307,12 +1321,6 @@ int pci_load_saved_state(struct pci_dev *dev,
struct pci_saved_state *state);
int pci_load_and_free_saved_state(struct pci_dev *dev,
struct pci_saved_state **state);
struct pci_cap_saved_state *pci_find_saved_cap(struct pci_dev *dev, char cap);
struct pci_cap_saved_state *pci_find_saved_ext_cap(struct pci_dev *dev,
u16 cap);
int pci_add_cap_save_buffer(struct pci_dev *dev, char cap, unsigned int size);
int pci_add_ext_cap_save_buffer(struct pci_dev *dev,
u16 cap, unsigned int size);
int pci_platform_power_transition(struct pci_dev *dev, pci_power_t state);
int pci_set_power_state(struct pci_dev *dev, pci_power_t state);
pci_power_t pci_choose_state(struct pci_dev *dev, pm_message_t state);
@@ -1779,8 +1787,9 @@ static inline void pci_disable_device(struct pci_dev *dev) { }
static inline int pcim_enable_device(struct pci_dev *pdev) { return -EIO; }
static inline int pci_assign_resource(struct pci_dev *dev, int i)
{ return -EBUSY; }
static inline int __pci_register_driver(struct pci_driver *drv,
struct module *owner)
static inline int __must_check __pci_register_driver(struct pci_driver *drv,
struct module *owner,
const char *mod_name)
{ return 0; }
static inline int pci_register_driver(struct pci_driver *drv)
{ return 0; }
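
For reference, the extra parameters mirror the CONFIG_PCI=y path, which is
reached through a macro along these lines (reconstructed, not part of this
hunk):

	#define pci_register_driver(driver) \
		__pci_register_driver(driver, THIS_MODULE, KBUILD_MODNAME)

so the !CONFIG_PCI stub has to carry the same three-argument signature.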
@@ -1920,9 +1929,7 @@ int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma);
#define pci_resource_end(dev, bar) ((dev)->resource[(bar)].end)
#define pci_resource_flags(dev, bar) ((dev)->resource[(bar)].flags)
#define pci_resource_len(dev,bar) \
((pci_resource_start((dev), (bar)) == 0 && \
pci_resource_end((dev), (bar)) == \
pci_resource_start((dev), (bar))) ? 0 : \
((pci_resource_end((dev), (bar)) == 0) ? 0 : \
\
(pci_resource_end((dev), (bar)) - \
pci_resource_start((dev), (bar)) + 1))
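
A worked example of the simplified test: an unused resource is all zeroes,
so its end address is 0 and the macro yields 0; a 4 KiB BAR at 0xe0000000
has end 0xe0000fff, giving 0xe0000fff - 0xe0000000 + 1 = 0x1000. The single
end == 0 comparison suffices on the assumption that no valid PCI resource
ends at address 0.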
@@ -2289,20 +2296,6 @@ int pci_enable_atomic_ops_to_root(struct pci_dev *dev, u32 cap_mask);
#define PCI_VPD_LRDT_RO_DATA PCI_VPD_LRDT_ID(PCI_VPD_LTIN_RO_DATA)
#define PCI_VPD_LRDT_RW_DATA PCI_VPD_LRDT_ID(PCI_VPD_LTIN_RW_DATA)
/* Small Resource Data Type Tag Item Names */
#define PCI_VPD_STIN_END 0x0f /* End */
#define PCI_VPD_SRDT_END (PCI_VPD_STIN_END << 3)
#define PCI_VPD_SRDT_TIN_MASK 0x78
#define PCI_VPD_SRDT_LEN_MASK 0x07
#define PCI_VPD_LRDT_TIN_MASK 0x7f
#define PCI_VPD_LRDT_TAG_SIZE 3
#define PCI_VPD_SRDT_TAG_SIZE 1
#define PCI_VPD_INFO_FLD_HDR_SIZE 3
#define PCI_VPD_RO_KEYWORD_PARTNO "PN"
#define PCI_VPD_RO_KEYWORD_SERIALNO "SN"
#define PCI_VPD_RO_KEYWORD_MFR_ID "MN"
@@ -2310,83 +2303,45 @@ int pci_enable_atomic_ops_to_root(struct pci_dev *dev, u32 cap_mask);
#define PCI_VPD_RO_KEYWORD_CHKSUM "RV"
/**
* pci_vpd_lrdt_size - Extracts the Large Resource Data Type length
* @lrdt: Pointer to the beginning of the Large Resource Data Type tag
* pci_vpd_alloc - Allocate buffer and read VPD into it
* @dev: PCI device
* @size: pointer to field where VPD length is returned
*
* Returns the extracted Large Resource Data Type length.
* Returns pointer to allocated buffer or an ERR_PTR in case of failure
*/
static inline u16 pci_vpd_lrdt_size(const u8 *lrdt)
{
return (u16)lrdt[1] + ((u16)lrdt[2] << 8);
}
void *pci_vpd_alloc(struct pci_dev *dev, unsigned int *size);
/**
* pci_vpd_lrdt_tag - Extracts the Large Resource Data Type Tag Item
* @lrdt: Pointer to the beginning of the Large Resource Data Type tag
* pci_vpd_find_id_string - Locate id string in VPD
* @buf: Pointer to buffered VPD data
* @len: The length of the buffer area in which to search
* @size: Pointer to field where length of id string is returned
*
* Returns the extracted Large Resource Data Type Tag item.
* Returns the index of the id string or -ENOENT if not found.
*/
static inline u16 pci_vpd_lrdt_tag(const u8 *lrdt)
{
return (u16)(lrdt[0] & PCI_VPD_LRDT_TIN_MASK);
}
int pci_vpd_find_id_string(const u8 *buf, unsigned int len, unsigned int *size);
/**
* pci_vpd_srdt_size - Extracts the Small Resource Data Type length
* @srdt: Pointer to the beginning of the Small Resource Data Type tag
*
* Returns the extracted Small Resource Data Type length.
*/
static inline u8 pci_vpd_srdt_size(const u8 *srdt)
{
return (*srdt) & PCI_VPD_SRDT_LEN_MASK;
}
/**
* pci_vpd_srdt_tag - Extracts the Small Resource Data Type Tag Item
* @srdt: Pointer to the beginning of the Small Resource Data Type tag
*
* Returns the extracted Small Resource Data Type Tag Item.
*/
static inline u8 pci_vpd_srdt_tag(const u8 *srdt)
{
return ((*srdt) & PCI_VPD_SRDT_TIN_MASK) >> 3;
}
/**
* pci_vpd_info_field_size - Extracts the information field length
* @info_field: Pointer to the beginning of an information field header
*
* Returns the extracted information field length.
*/
static inline u8 pci_vpd_info_field_size(const u8 *info_field)
{
return info_field[2];
}
/**
* pci_vpd_find_tag - Locates the Resource Data Type tag provided
* @buf: Pointer to buffered vpd data
* @len: The length of the vpd buffer
* @rdt: The Resource Data Type to search for
*
* Returns the index where the Resource Data Type was found or
* -ENOENT otherwise.
*/
int pci_vpd_find_tag(const u8 *buf, unsigned int len, u8 rdt);
/**
* pci_vpd_find_info_keyword - Locates an information field keyword in the VPD
* @buf: Pointer to buffered vpd data
* @off: The offset into the buffer at which to begin the search
* @len: The length of the buffer area, relative to off, in which to search
* pci_vpd_find_ro_info_keyword - Locate info field keyword in VPD RO section
* @buf: Pointer to buffered VPD data
* @len: The length of the buffer area in which to search
* @kw: The keyword to search for
* @size: Pointer to field where length of found keyword data is returned
*
* Returns the index where the information field keyword was found or
* -ENOENT otherwise.
* Returns the index of the information field keyword data or -ENOENT if
* not found.
*/
int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off,
unsigned int len, const char *kw);
int pci_vpd_find_ro_info_keyword(const void *buf, unsigned int len,
const char *kw, unsigned int *size);
/**
* pci_vpd_check_csum - Check VPD checksum
* @buf: Pointer to buffered VPD data
* @len: VPD size
*
* Returns 1 if VPD has no checksum, otherwise 0 or an errno
*/
int pci_vpd_check_csum(const void *buf, unsigned int len);
/* PCI <-> OF binding helpers */
#ifdef CONFIG_OF


@@ -44,7 +44,7 @@ struct hotplug_slot_ops {
int (*get_attention_status) (struct hotplug_slot *slot, u8 *value);
int (*get_latch_status) (struct hotplug_slot *slot, u8 *value);
int (*get_adapter_status) (struct hotplug_slot *slot, u8 *value);
int (*reset_slot) (struct hotplug_slot *slot, int probe);
int (*reset_slot) (struct hotplug_slot *slot, bool probe);
};
/**


@@ -2453,7 +2453,8 @@
#define PCI_VENDOR_ID_TDI 0x192E
#define PCI_DEVICE_ID_TDI_EHCI 0x0101
#define PCI_VENDOR_ID_FREESCALE 0x1957
#define PCI_VENDOR_ID_FREESCALE 0x1957 /* duplicate: NXP */
#define PCI_VENDOR_ID_NXP 0x1957 /* duplicate: FREESCALE */
#define PCI_DEVICE_ID_MPC8308 0xc006
#define PCI_DEVICE_ID_MPC8315E 0x00b4
#define PCI_DEVICE_ID_MPC8315 0x00b5


@@ -40,7 +40,7 @@ struct pci_test {
static int run_test(struct pci_test *test)
{
struct pci_endpoint_test_xfer_param param;
struct pci_endpoint_test_xfer_param param = {};
int ret = -EINVAL;
int fd;