pci-v5.16-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmGFXBkUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vx6Tg/7BsGWm8f+uw/mr9lLm47q2mc4XyoO
 7bR9KDp5NM84W/8ZOU7dqqqsnY0ddrSOLBRyhJJYMW3SwJd1y1ajTBsL1Ujqv+eN
 z+JUFmhq4Laqm4k6Spc9CEJE+Ol5P6gGUtxLYo6PM2R0VxnSs/rDxctT5i7YOpCi
 COJ+NVT/mc/by2loz1kLTSR9GgtBBgd+Y8UA33GFbHKssROw02L0OI3wffp81Oba
 EhMGPoD+0FndAniDw+vaOSoO+YaBuTfbM92T/O00mND69Fj1PWgmNWZz7gAVgsXb
 3RrNENUFxgw6CDt7LZWB8OyT04iXe0R2kJs+PA9gigFCGbypwbd/Nbz5M7e9HUTR
 ray+1EpZib6+nIksQBL2mX8nmtyHMcLiM57TOEhq0+ECDO640MiRm8t0FIG/1E8v
 3ZYd9w20o/NxlFNXHxxpZ3D/osGH5ocyF5c5m1rfB4RGRwztZGL172LWCB0Ezz9r
 eHB8sWxylxuhrH+hp2BzQjyddg7rbF+RA4AVfcQSxUpyV01hoRocKqknoDATVeLH
 664nJIINFxKJFwfuL3E6OhrInNe1LnAhCZsHHqbS+NNQFgvPRznbixBeLkI9dMf5
 Yf6vpsWO7ur8lHHbRndZubVu8nxklXTU7B/w+C11sq6k9LLRJSHzanr3Fn9WA80x
 sznCxwUvbTCu1r0=
 =nsMh
 -----END PGP SIGNATURE-----

Merge tag 'pci-v5.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:
   - Conserve IRQs by setting up portdrv IRQs only when there are users
     (Jan Kiszka)
   - Rework and simplify _OSC negotiation for control of PCIe features
     (Joerg Roedel)
   - Remove struct pci_dev.driver pointer since it's redundant with the
     struct device.driver pointer (Uwe Kleine-König)
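
     The conversion replaces uses of the removed pointer with the kernel's
     to_pci_driver() helper, which is a container_of() over the embedded
     struct device_driver. A minimal user-space sketch of that pointer
     arithmetic (simplified struct layouts, illustrative only):

     ```c
     #include <assert.h>
     #include <stddef.h>

     /* Simplified stand-ins for the kernel structs; the names mirror the
      * kernel's but the layouts here are illustrative only. */
     struct device_driver { const char *name; };
     struct device { struct device_driver *driver; };
     struct pci_driver {
         struct device_driver driver;   /* embedded, as in the kernel */
     };
     struct pci_dev { struct device dev; };

     /* The kernel's to_pci_driver() is container_of() over the embedded
      * device_driver; this shows the equivalent pointer arithmetic. */
     static struct pci_driver *to_pci_driver(struct device_driver *drv)
     {
         return (struct pci_driver *)((char *)drv -
                                      offsetof(struct pci_driver, driver));
     }

     int main(void)
     {
         struct pci_driver drv = { .driver = { .name = "example" } };
         struct pci_dev pdev = { .dev = { .driver = &drv.driver } };

         /* Instead of a redundant pdev->driver pointer, recover the
          * pci_driver from the generic device.driver pointer. */
         assert(to_pci_driver(pdev.dev.driver) == &drv);
         return 0;
     }
     ```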

  Resource management:
   - Coalesce contiguous host bridge apertures from _CRS to accommodate
     BARs that cover more than one aperture (Kai-Heng Feng)
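
     The idea can be sketched as merging sorted windows whose bounds touch;
     this is a hypothetical helper, not the kernel's implementation:

     ```c
     #include <assert.h>
     #include <stdint.h>

     struct range { uint64_t start, end; };  /* inclusive bounds */

     /* Coalesce contiguous apertures in a sorted list in place and
      * return the new count (illustrative sketch). */
     static int coalesce(struct range *r, int n)
     {
         int out = 0;

         for (int i = 1; i < n; i++) {
             if (r[i].start <= r[out].end + 1) {
                 if (r[i].end > r[out].end)
                     r[out].end = r[i].end;  /* extend current window */
             } else {
                 r[++out] = r[i];            /* gap: keep separate */
             }
         }
         return n ? out + 1 : 0;
     }

     int main(void)
     {
         /* Two contiguous host bridge windows; a 96 MB BAR could only be
          * placed if they are treated as one aperture. */
         struct range w[] = {
             { 0xa0000000, 0xa3ffffff },  /* 64 MB */
             { 0xa4000000, 0xa5ffffff },  /* 32 MB, contiguous */
             { 0xb0000000, 0xb0ffffff },  /* separate window */
         };
         int n = coalesce(w, 3);

         assert(n == 2);
         assert(w[0].start == 0xa0000000 && w[0].end == 0xa5ffffff);
         return 0;
     }
     ```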

  Sysfs:
   - Check CAP_SYS_ADMIN before parsing user input (Krzysztof
     Wilczyński)
   - Return -EINVAL consistently from "store" functions (Krzysztof
     Wilczyński)
   - Use sysfs_emit() in endpoint "show" functions to avoid buffer
     overruns (Kunihiko Hayashi)
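
     sysfs_emit() formats into the page-sized sysfs buffer with an explicit
     bound, unlike a bare sprintf(). A user-space approximation of that
     behavior (the real helper also warns on non-page-aligned buffers):

     ```c
     #include <assert.h>
     #include <stdarg.h>
     #include <stdio.h>
     #include <string.h>

     #define PAGE_SIZE 4096

     /* Sketch of sysfs_emit(): bounded formatting into a PAGE_SIZE
      * buffer, so a "show" function can never overrun it. */
     static int sysfs_emit(char *buf, const char *fmt, ...)
     {
         va_list args;
         int len;

         va_start(args, fmt);
         len = vsnprintf(buf, PAGE_SIZE, fmt, args);
         va_end(args);

         return len >= PAGE_SIZE ? PAGE_SIZE - 1 : len;
     }

     int main(void)
     {
         char buf[PAGE_SIZE];
         int len = sysfs_emit(buf, "%d\n", 42);

         assert(len == 3);
         assert(strcmp(buf, "42\n") == 0);
         return 0;
     }
     ```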

  PCIe native device hotplug:
   - Ignore Link Down/Up caused by resets during error recovery so
     endpoint drivers can remain bound to the device (Lukas Wunner)

  Virtualization:
   - Avoid bus resets on Atheros QCA6174, where they hang the device
     (Ingmar Klein)
   - Work around Pericom PI7C9X2G switch packet drop erratum by using
     store and forward mode instead of cut-through (Nathan Rossi)
   - Avoid trying to enable AtomicOps on VFs; the PF setting applies to
     all VFs (Selvin Xavier)

  MSI:
   - Document that /sys/bus/pci/devices/.../irq contains the legacy INTx
     interrupt or the IRQ of the first MSI (not MSI-X) vector (Barry
     Song)

  VPD:
   - Add pci_read_vpd_any() and pci_write_vpd_any() to access anywhere
     in the possible VPD space; use these to simplify the cxgb3 driver
     (Heiner Kallweit)

  Peer-to-peer DMA:
   - Add (not subtract) the bus offset when calculating DMA address
     (Wang Lu)
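
     With the bus offset defined as (bus window start - CPU physical window
     start), the peer's DMA address is the physical address plus the offset.
     A minimal sketch of the corrected arithmetic (illustrative, not the
     kernel's code):

     ```c
     #include <assert.h>
     #include <stdint.h>

     /* bus_offset = bus address - CPU physical address for the window
      * (may be negative); the DMA (bus) address is phys + offset. */
     static uint64_t phys_to_dma_addr(uint64_t paddr, int64_t bus_offset)
     {
         return paddr + bus_offset;  /* add, not subtract */
     }

     int main(void)
     {
         /* Host bridge maps bus 0x0 to CPU physical 0x40000000,
          * so the offset is -0x40000000. */
         int64_t offset = -0x40000000LL;

         assert(phys_to_dma_addr(0x40001000ULL, offset) == 0x1000);
         return 0;
     }
     ```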

  ASPM:
   - Re-enable LTR at Downstream Ports so they don't report Unsupported
     Requests when reset or hot-added devices send LTR messages
     (Mingchuang Qiao)

  Apple PCIe controller driver:
   - Add driver for Apple M1 PCIe controller (Alyssa Rosenzweig, Marc
     Zyngier)

  Cadence PCIe controller driver:
   - Return success when probe succeeds instead of falling into error
     path (Li Chen)

  HiSilicon Kirin PCIe controller driver:
   - Reorganize PHY logic and add support for external PHY drivers
     (Mauro Carvalho Chehab)
   - Support PERST# GPIOs for HiKey970 external PEX 8606 bridge (Mauro
     Carvalho Chehab)
   - Add Kirin 970 support (Mauro Carvalho Chehab)
   - Make driver removable (Mauro Carvalho Chehab)

  Intel VMD host bridge driver:
   - If IOMMU supports interrupt remapping, leave VMD MSI-X remapping
     enabled (Adrian Huang)
   - Number each controller so we can tell them apart in
     /proc/interrupts (Chunguang Xu)
   - Avoid building on UML because VMD depends on x86 bare metal APIs
     (Johannes Berg)

  Marvell Aardvark PCIe controller driver:
   - Define macros for PCI_EXP_DEVCTL_PAYLOAD_* (Pali Rohár)
   - Set Max Payload Size to 512 bytes per Marvell spec (Pali Rohár)
   - Downgrade PIO Response Status messages to debug level (Marek Behún)
   - Preserve CRS SV (Config Request Retry Software Visibility) bit in
     emulated Root Control register (Pali Rohár)
   - Fix issue in configuring reference clock (Pali Rohár)
   - Don't clear status bits for masked interrupts (Pali Rohár)
   - Don't mask unused interrupts (Pali Rohár)
   - Avoid code repetition in advk_pcie_rd_conf() (Marek Behún)
   - Retry config accesses on CRS response (Pali Rohár)
   - Simplify emulated Root Capabilities initialization (Pali Rohár)
   - Fix several link training issues (Pali Rohár)
   - Fix link-up checking via LTSSM (Pali Rohár)
   - Fix reporting of Data Link Layer Link Active (Pali Rohár)
   - Fix emulation of W1C bits (Marek Behún)
   - Fix MSI domain .alloc() method to return zero on success (Marek
     Behún)
   - Read entire 16-bit MSI vector in MSI handler, not just low 8 bits
     (Marek Behún)
   - Clear Root Port I/O Space, Memory Space, and Bus Master Enable bits
     at startup; PCI core will set those as necessary (Pali Rohár)
   - When operating as a Root Port, set class code to "PCI Bridge"
     instead of the default "Mass Storage Controller" (Pali Rohár)
   - Add emulation for PCI_BRIDGE_CTL_BUS_RESET since aardvark doesn't
     implement this per spec (Pali Rohár)
   - Add emulation of option ROM BAR since aardvark doesn't implement
     this per spec (Pali Rohár)
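
     W1C (write-1-to-clear) emulation, as in the last few fixes, follows a
     standard pattern: 1s written to status bits clear them, 0s leave them
     untouched, and plain read-write bits take the written value. A generic
     sketch of that idea (not the pci-bridge-emul code itself):

     ```c
     #include <assert.h>
     #include <stdint.h>

     /* Emulate a config register write: w1c_mask marks write-1-to-clear
      * status bits, rw_mask marks ordinary read-write bits. */
     static uint16_t emul_write(uint16_t old, uint16_t new,
                                uint16_t w1c_mask, uint16_t rw_mask)
     {
         uint16_t val = old;

         val &= ~(new & w1c_mask);                  /* 1s clear W1C bits */
         val = (val & ~rw_mask) | (new & rw_mask);  /* RW bits take new value */
         return val;
     }

     int main(void)
     {
         uint16_t w1c = 0xff00, rw = 0x00ff;

         /* Writing 1 to one set W1C bit clears only that bit... */
         assert(emul_write(0x8100, 0x8000, w1c, rw) == 0x0100);
         /* ...while writing 0 must not clear it (the bug class fixed
          * by the W1C emulation patch above). */
         assert(emul_write(0x8100, 0x0005, w1c, rw) == 0x8105);
         return 0;
     }
     ```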

  MediaTek MT7621 PCIe controller driver:
   - Add MediaTek MT7621 PCIe host controller driver and DT binding
     (Sergio Paracuellos)

  Qualcomm PCIe controller driver:
   - Add SC8180x compatible string (Bjorn Andersson)
   - Add endpoint controller driver and DT binding (Manivannan
     Sadhasivam)
   - Restructure to use of_device_get_match_data() (Prasad Malisetty)
   - Add SC7280-specific pcie_1_pipe_clk_src handling (Prasad Malisetty)

  Renesas R-Car PCIe controller driver:
   - Remove unnecessary includes (Geert Uytterhoeven)

  Rockchip DesignWare PCIe controller driver:
   - Add DT binding (Simon Xue)

  Socionext UniPhier Pro5 controller driver:
   - Serialize INTx masking/unmasking (Kunihiko Hayashi)

  Synopsys DesignWare PCIe controller driver:
   - Run dwc .host_init() method before registering MSI interrupt
     handler so we can deal with pending interrupts left by bootloader
     (Bjorn Andersson)
   - Clean up Kconfig dependencies (Andy Shevchenko)
   - Export symbols to allow more modular drivers (Luca Ceresoli)

  TI DRA7xx PCIe controller driver:
   - Allow host and endpoint drivers to be modules (Luca Ceresoli)
   - Enable external clock if present (Luca Ceresoli)

  TI J721E PCIe driver:
   - Disable PHY when probe fails after initializing it (Christophe
     JAILLET)

  MicroSemi Switchtec management driver:
   - Return error to application when command execution fails because an
     out-of-band reset has cleared the device BARs, Memory Space Enable,
     etc (Kelvin Cao)
   - Fix MRPC error status handling issue (Kelvin Cao)
   - Mask out other bits when reading the management VEP instance ID
     (Kelvin Cao)
   - Return EOPNOTSUPP instead of ENOTSUPP from sysfs show functions
     (Kelvin Cao)
   - Add check of event support (Logan Gunthorpe)

  Miscellaneous:
   - Remove unused pci_pool wrappers, which have been replaced by
     dma_pool (Cai Huoqing)
   - Use 'unsigned int' instead of bare 'unsigned' (Krzysztof
     Wilczyński)
   - Use kstrtobool() directly, sans strtobool() wrapper (Krzysztof
     Wilczyński)
   - Fix some sscanf(), sprintf() format mismatches (Krzysztof
     Wilczyński)
   - Update PCI subsystem information in MAINTAINERS (Krzysztof
     Wilczyński)
   - Correct some misspellings (Krzysztof Wilczyński)"

* tag 'pci-v5.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (137 commits)
  PCI: Add ACS quirk for Pericom PI7C9X2G switches
  PCI: apple: Configure RID to SID mapper on device addition
  iommu/dart: Exclude MSI doorbell from PCIe device IOVA range
  PCI: apple: Implement MSI support
  PCI: apple: Add INTx and per-port interrupt support
  PCI: kirin: Allow removing the driver
  PCI: kirin: De-init the dwc driver
  PCI: kirin: Disable clkreq during poweroff sequence
  PCI: kirin: Move the power-off code to a common routine
  PCI: kirin: Add power_off support for Kirin 960 PHY
  PCI: kirin: Allow building it as a module
  PCI: kirin: Add MODULE_* macros
  PCI: kirin: Add Kirin 970 compatible
  PCI: kirin: Support PERST# GPIOs for HiKey970 external PEX 8606 bridge
  PCI: apple: Set up reference clocks when probing
  PCI: apple: Add initial hardware bring-up
  PCI: of: Allow matching of an interrupt-map local to a PCI device
  of/irq: Allow matching of an interrupt-map local to an interrupt controller
  irqdomain: Make of_phandle_args_to_fwspec() generally available
  PCI: Do not enable AtomicOps on VFs
  ...
Merged by: Linus Torvalds, 2021-11-06 14:36:12 -07:00
Commit: 0c5c62ddf8
121 changed files with 3943 additions and 1190 deletions


@@ -100,6 +100,17 @@ Description:
 		This attribute indicates the mode that the irq vector named by
 		the file is in (msi vs. msix)
 
+What:		/sys/bus/pci/devices/.../irq
+Date:		August 2021
+Contact:	Linux PCI developers <linux-pci@vger.kernel.org>
+Description:
+		If a driver has enabled MSI (not MSI-X), "irq" contains the
+		IRQ of the first MSI vector. Otherwise "irq" contains the
+		IRQ of the legacy INTx interrupt.
+
+		"irq" being set to 0 indicates that the device isn't
+		capable of generating legacy INTx interrupts.
+
 What:		/sys/bus/pci/devices/.../remove
 Date:		January 2009
 Contact:	Linux PCI developers <linux-pci@vger.kernel.org>


@@ -0,0 +1,142 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/mediatek,mt7621-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: MediaTek MT7621 PCIe controller

maintainers:
  - Sergio Paracuellos <sergio.paracuellos@gmail.com>

description: |+
  MediaTek MT7621 PCIe subsys supports a single Root Complex (RC)
  with 3 Root Ports. Each Root Port supports a Gen1 1-lane Link

allOf:
  - $ref: /schemas/pci/pci-bus.yaml#

properties:
  compatible:
    const: mediatek,mt7621-pci

  reg:
    items:
      - description: host-pci bridge registers
      - description: pcie port 0 RC control registers
      - description: pcie port 1 RC control registers
      - description: pcie port 2 RC control registers

  ranges:
    maxItems: 2

patternProperties:
  'pcie@[0-2],0':
    type: object
    $ref: /schemas/pci/pci-bus.yaml#

    properties:
      resets:
        maxItems: 1

      clocks:
        maxItems: 1

      phys:
        maxItems: 1

    required:
      - "#interrupt-cells"
      - interrupt-map-mask
      - interrupt-map
      - resets
      - clocks
      - phys
      - phy-names
      - ranges

    unevaluatedProperties: false

required:
  - compatible
  - reg
  - ranges
  - "#interrupt-cells"
  - interrupt-map-mask
  - interrupt-map
  - reset-gpios

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/gpio/gpio.h>
    #include <dt-bindings/interrupt-controller/mips-gic.h>

    pcie: pcie@1e140000 {
        compatible = "mediatek,mt7621-pci";
        reg = <0x1e140000 0x100>,
              <0x1e142000 0x100>,
              <0x1e143000 0x100>,
              <0x1e144000 0x100>;

        #address-cells = <3>;
        #size-cells = <2>;
        pinctrl-names = "default";
        pinctrl-0 = <&pcie_pins>;
        device_type = "pci";
        ranges = <0x02000000 0 0x60000000 0x60000000 0 0x10000000>, /* pci memory */
                 <0x01000000 0 0x1e160000 0x1e160000 0 0x00010000>; /* io space */
        #interrupt-cells = <1>;
        interrupt-map-mask = <0xF800 0 0 0>;
        interrupt-map = <0x0000 0 0 0 &gic GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH>,
                        <0x0800 0 0 0 &gic GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH>,
                        <0x1000 0 0 0 &gic GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
        reset-gpios = <&gpio 19 GPIO_ACTIVE_LOW>;

        pcie@0,0 {
            reg = <0x0000 0 0 0 0>;
            #address-cells = <3>;
            #size-cells = <2>;
            device_type = "pci";
            #interrupt-cells = <1>;
            interrupt-map-mask = <0 0 0 0>;
            interrupt-map = <0 0 0 0 &gic GIC_SHARED 4 IRQ_TYPE_LEVEL_HIGH>;
            resets = <&rstctrl 24>;
            clocks = <&clkctrl 24>;
            phys = <&pcie0_phy 1>;
            phy-names = "pcie-phy0";
            ranges;
        };

        pcie@1,0 {
            reg = <0x0800 0 0 0 0>;
            #address-cells = <3>;
            #size-cells = <2>;
            device_type = "pci";
            #interrupt-cells = <1>;
            interrupt-map-mask = <0 0 0 0>;
            interrupt-map = <0 0 0 0 &gic GIC_SHARED 24 IRQ_TYPE_LEVEL_HIGH>;
            resets = <&rstctrl 25>;
            clocks = <&clkctrl 25>;
            phys = <&pcie0_phy 1>;
            phy-names = "pcie-phy1";
            ranges;
        };

        pcie@2,0 {
            reg = <0x1000 0 0 0 0>;
            #address-cells = <3>;
            #size-cells = <2>;
            device_type = "pci";
            #interrupt-cells = <1>;
            interrupt-map-mask = <0 0 0 0>;
            interrupt-map = <0 0 0 0 &gic GIC_SHARED 25 IRQ_TYPE_LEVEL_HIGH>;
            resets = <&rstctrl 26>;
            clocks = <&clkctrl 26>;
            phys = <&pcie2_phy 0>;
            phy-names = "pcie-phy2";
            ranges;
        };
    };
...


@@ -0,0 +1,158 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/qcom,pcie-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Qualcomm PCIe Endpoint Controller binding

maintainers:
  - Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

allOf:
  - $ref: "pci-ep.yaml#"

properties:
  compatible:
    const: qcom,sdx55-pcie-ep

  reg:
    items:
      - description: Qualcomm-specific PARF configuration registers
      - description: DesignWare PCIe registers
      - description: External local bus interface registers
      - description: Address Translation Unit (ATU) registers
      - description: Memory region used to map remote RC address space
      - description: BAR memory region

  reg-names:
    items:
      - const: parf
      - const: dbi
      - const: elbi
      - const: atu
      - const: addr_space
      - const: mmio

  clocks:
    items:
      - description: PCIe Auxiliary clock
      - description: PCIe CFG AHB clock
      - description: PCIe Master AXI clock
      - description: PCIe Slave AXI clock
      - description: PCIe Slave Q2A AXI clock
      - description: PCIe Sleep clock
      - description: PCIe Reference clock

  clock-names:
    items:
      - const: aux
      - const: cfg
      - const: bus_master
      - const: bus_slave
      - const: slave_q2a
      - const: sleep
      - const: ref

  qcom,perst-regs:
    description: Reference to a syscon representing TCSR followed by the two
                 offsets within syscon for Perst enable and Perst separation
                 enable registers
    $ref: "/schemas/types.yaml#/definitions/phandle-array"
    items:
      minItems: 3
      maxItems: 3

  interrupts:
    items:
      - description: PCIe Global interrupt
      - description: PCIe Doorbell interrupt

  interrupt-names:
    items:
      - const: global
      - const: doorbell

  reset-gpios:
    description: GPIO used as PERST# input signal
    maxItems: 1

  wake-gpios:
    description: GPIO used as WAKE# output signal
    maxItems: 1

  resets:
    maxItems: 1

  reset-names:
    const: core

  power-domains:
    maxItems: 1

  phys:
    maxItems: 1

  phy-names:
    const: pciephy

  num-lanes:
    default: 2

required:
  - compatible
  - reg
  - reg-names
  - clocks
  - clock-names
  - qcom,perst-regs
  - interrupts
  - interrupt-names
  - reset-gpios
  - resets
  - reset-names
  - power-domains

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/clock/qcom,gcc-sdx55.h>
    #include <dt-bindings/gpio/gpio.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>

    pcie_ep: pcie-ep@40000000 {
        compatible = "qcom,sdx55-pcie-ep";
        reg = <0x01c00000 0x3000>,
              <0x40000000 0xf1d>,
              <0x40000f20 0xc8>,
              <0x40001000 0x1000>,
              <0x40002000 0x1000>,
              <0x01c03000 0x3000>;
        reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
                    "mmio";

        clocks = <&gcc GCC_PCIE_AUX_CLK>,
                 <&gcc GCC_PCIE_CFG_AHB_CLK>,
                 <&gcc GCC_PCIE_MSTR_AXI_CLK>,
                 <&gcc GCC_PCIE_SLV_AXI_CLK>,
                 <&gcc GCC_PCIE_SLV_Q2A_AXI_CLK>,
                 <&gcc GCC_PCIE_SLEEP_CLK>,
                 <&gcc GCC_PCIE_0_CLKREF_CLK>;
        clock-names = "aux", "cfg", "bus_master", "bus_slave",
                      "slave_q2a", "sleep", "ref";

        qcom,perst-regs = <&tcsr 0xb258 0xb270>;

        interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>;
        interrupt-names = "global", "doorbell";
        reset-gpios = <&tlmm 57 GPIO_ACTIVE_LOW>;
        wake-gpios = <&tlmm 53 GPIO_ACTIVE_LOW>;
        resets = <&gcc GCC_PCIE_BCR>;
        reset-names = "core";
        power-domains = <&gcc PCIE_GDSC>;
        phys = <&pcie0_lane>;
        phy-names = "pciephy";
        max-link-speed = <3>;
        num-lanes = <2>;
    };


@@ -12,6 +12,7 @@
 			- "qcom,pcie-ipq4019" for ipq4019
 			- "qcom,pcie-ipq8074" for ipq8074
 			- "qcom,pcie-qcs404" for qcs404
+			- "qcom,pcie-sc8180x" for sc8180x
 			- "qcom,pcie-sdm845" for sdm845
 			- "qcom,pcie-sm8250" for sm8250
 			- "qcom,pcie-ipq6018" for ipq6018
@@ -156,7 +157,7 @@
 			- "pipe"	PIPE clock
 
 - clock-names:
-	Usage: required for sm8250
+	Usage: required for sc8180x and sm8250
 	Value type: <stringlist>
 	Definition: Should contain the following entries
 			- "aux"		Auxiliary clock
@@ -245,7 +246,7 @@
 			- "ahb"			AHB reset
 
 - reset-names:
-	Usage: required for sdm845 and sm8250
+	Usage: required for sc8180x, sdm845 and sm8250
 	Value type: <stringlist>
 	Definition: Should contain the following entries
 			- "pci"			PCIe core reset


@@ -0,0 +1,141 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/rockchip-dw-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: DesignWare based PCIe controller on Rockchip SoCs

maintainers:
  - Shawn Lin <shawn.lin@rock-chips.com>
  - Simon Xue <xxm@rock-chips.com>
  - Heiko Stuebner <heiko@sntech.de>

description: |+
  RK3568 SoC PCIe host controller is based on the Synopsys DesignWare
  PCIe IP and thus inherits all the common properties defined in
  designware-pcie.txt.

allOf:
  - $ref: /schemas/pci/pci-bus.yaml#

# We need a select here so we don't match all nodes with 'snps,dw-pcie'
select:
  properties:
    compatible:
      contains:
        const: rockchip,rk3568-pcie
  required:
    - compatible

properties:
  compatible:
    items:
      - const: rockchip,rk3568-pcie
      - const: snps,dw-pcie

  reg:
    items:
      - description: Data Bus Interface (DBI) registers
      - description: Rockchip designed configuration registers
      - description: Config registers

  reg-names:
    items:
      - const: dbi
      - const: apb
      - const: config

  clocks:
    items:
      - description: AHB clock for PCIe master
      - description: AHB clock for PCIe slave
      - description: AHB clock for PCIe dbi
      - description: APB clock for PCIe
      - description: Auxiliary clock for PCIe

  clock-names:
    items:
      - const: aclk_mst
      - const: aclk_slv
      - const: aclk_dbi
      - const: pclk
      - const: aux

  msi-map: true

  num-lanes: true

  phys:
    maxItems: 1

  phy-names:
    const: pcie-phy

  power-domains:
    maxItems: 1

  ranges:
    maxItems: 2

  resets:
    maxItems: 1

  reset-names:
    const: pipe

  vpcie3v3-supply: true

required:
  - compatible
  - reg
  - reg-names
  - clocks
  - clock-names
  - msi-map
  - num-lanes
  - phys
  - phy-names
  - power-domains
  - resets
  - reset-names

unevaluatedProperties: false

examples:
  - |
    bus {
        #address-cells = <2>;
        #size-cells = <2>;

        pcie3x2: pcie@fe280000 {
            compatible = "rockchip,rk3568-pcie", "snps,dw-pcie";
            reg = <0x3 0xc0800000 0x0 0x390000>,
                  <0x0 0xfe280000 0x0 0x10000>,
                  <0x3 0x80000000 0x0 0x100000>;
            reg-names = "dbi", "apb", "config";
            bus-range = <0x20 0x2f>;
            clocks = <&cru 143>, <&cru 144>,
                     <&cru 145>, <&cru 146>,
                     <&cru 147>;
            clock-names = "aclk_mst", "aclk_slv",
                          "aclk_dbi", "pclk",
                          "aux";
            device_type = "pci";
            linux,pci-domain = <2>;
            max-link-speed = <2>;
            msi-map = <0x2000 &its 0x2000 0x1000>;
            num-lanes = <2>;
            phys = <&pcie30phy>;
            phy-names = "pcie-phy";
            power-domains = <&power 15>;
            ranges = <0x81000000 0x0 0x80800000 0x3 0x80800000 0x0 0x100000>,
                     <0x83000000 0x0 0x80900000 0x3 0x80900000 0x0 0x3f700000>;
            resets = <&cru 193>;
            reset-names = "pipe";
            #address-cells = <3>;
            #size-cells = <2>;
        };
    };
...


@@ -1297,6 +1297,13 @@ S:	Maintained
 F:	Documentation/devicetree/bindings/iommu/apple,dart.yaml
 F:	drivers/iommu/apple-dart.c
 
+APPLE PCIE CONTROLLER DRIVER
+M:	Alyssa Rosenzweig <alyssa@rosenzweig.io>
+M:	Marc Zyngier <maz@kernel.org>
+L:	linux-pci@vger.kernel.org
+S:	Maintained
+F:	drivers/pci/controller/pcie-apple.c
+
 APPLE SMC DRIVER
 M:	Henrik Rydberg <rydberg@bitmath.org>
 L:	linux-hwmon@vger.kernel.org
@@ -12005,6 +12012,12 @@ S:	Maintained
 F:	Documentation/devicetree/bindings/i2c/i2c-mt7621.txt
 F:	drivers/i2c/busses/i2c-mt7621.c
 
+MEDIATEK MT7621 PCIE CONTROLLER DRIVER
+M:	Sergio Paracuellos <sergio.paracuellos@gmail.com>
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/mediatek,mt7621-pcie.yaml
+F:	drivers/pci/controller/pcie-mt7621.c
+
 MEDIATEK MT7621 PHY PCI DRIVER
 M:	Sergio Paracuellos <sergio.paracuellos@gmail.com>
 S:	Maintained
@@ -14647,9 +14660,12 @@ M:	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
 R:	Krzysztof Wilczyński <kw@linux.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
+Q:	https://patchwork.kernel.org/project/linux-pci/list/
+B:	https://bugzilla.kernel.org
+C:	irc://irc.oftc.net/linux-pci
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git
 F:	Documentation/PCI/endpoint/*
 F:	Documentation/misc-devices/pci-endpoint-test.rst
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kishon/pci-endpoint.git
 F:	drivers/misc/pci_endpoint_test.c
 F:	drivers/pci/endpoint/
 F:	tools/pci/
@@ -14695,15 +14711,21 @@ R:	Rob Herring <robh@kernel.org>
 R:	Krzysztof Wilczyński <kw@linux.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
-Q:	http://patchwork.ozlabs.org/project/linux-pci/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git/
+Q:	https://patchwork.kernel.org/project/linux-pci/list/
+B:	https://bugzilla.kernel.org
+C:	irc://irc.oftc.net/linux-pci
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git
 F:	drivers/pci/controller/
+F:	drivers/pci/pci-bridge-emul.c
+F:	drivers/pci/pci-bridge-emul.h
 
 PCI SUBSYSTEM
 M:	Bjorn Helgaas <bhelgaas@google.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
-Q:	http://patchwork.ozlabs.org/project/linux-pci/list/
+Q:	https://patchwork.kernel.org/project/linux-pci/list/
+B:	https://bugzilla.kernel.org
+C:	irc://irc.oftc.net/linux-pci
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git
 F:	Documentation/PCI/
 F:	Documentation/devicetree/bindings/pci/
@@ -14803,7 +14825,15 @@ M:	Stanimir Varbanov <svarbanov@mm-sol.com>
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
-F:	drivers/pci/controller/dwc/*qcom*
+F:	drivers/pci/controller/dwc/pcie-qcom.c
+
+PCIE ENDPOINT DRIVER FOR QUALCOMM
+M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+L:	linux-pci@vger.kernel.org
+L:	linux-arm-msm@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
+F:	drivers/pci/controller/dwc/pcie-qcom-ep.c
 
 PCIE DRIVER FOR ROCKCHIP
 M:	Shawn Lin <shawn.lin@rock-chips.com>


@@ -587,13 +587,12 @@ static void pcibios_fixup_resources(struct pci_dev *dev)
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, pcibios_fixup_resources);
 
-int pcibios_add_device(struct pci_dev *dev)
+int pcibios_device_add(struct pci_dev *dev)
 {
 	dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
 
 	return 0;
 }
-EXPORT_SYMBOL(pcibios_add_device);
 
 /*
  * Reparent resource children of pr that conflict with res


@@ -51,7 +51,8 @@ choice
 		select SYS_SUPPORTS_HIGHMEM
 		select MIPS_GIC
 		select CLKSRC_MIPS_GIC
-		select HAVE_PCI if PCI_MT7621
+		select HAVE_PCI
+		select PCI_DRIVERS_GENERIC
 		select SOC_BUS
 endchoice


@@ -55,11 +55,6 @@ void eeh_pe_dev_mode_mark(struct eeh_pe *pe, int mode);
 void eeh_sysfs_add_device(struct pci_dev *pdev);
 void eeh_sysfs_remove_device(struct pci_dev *pdev);
 
-static inline const char *eeh_driver_name(struct pci_dev *pdev)
-{
-	return (pdev && pdev->driver) ? pdev->driver->name : "<null>";
-}
-
 #endif /* CONFIG_EEH */
 
 #define PCI_BUSNO(bdfn)	((bdfn >> 8) & 0xff)


@@ -399,6 +399,14 @@ out:
 	return ret;
 }
 
+static inline const char *eeh_driver_name(struct pci_dev *pdev)
+{
+	if (pdev)
+		return dev_driver_string(&pdev->dev);
+
+	return "<null>";
+}
+
 /**
  * eeh_dev_check_failure - Check if all 1's data is due to EEH slot freeze
  * @edev: eeh device


@@ -104,13 +104,13 @@ static bool eeh_edev_actionable(struct eeh_dev *edev)
  */
 static inline struct pci_driver *eeh_pcid_get(struct pci_dev *pdev)
 {
-	if (!pdev || !pdev->driver)
+	if (!pdev || !pdev->dev.driver)
 		return NULL;
 
-	if (!try_module_get(pdev->driver->driver.owner))
+	if (!try_module_get(pdev->dev.driver->owner))
 		return NULL;
 
-	return pdev->driver;
+	return to_pci_driver(pdev->dev.driver);
 }
 
 /**
@@ -122,10 +122,10 @@ static inline struct pci_driver *eeh_pcid_get(struct pci_dev *pdev)
  */
 static inline void eeh_pcid_put(struct pci_dev *pdev)
 {
-	if (!pdev || !pdev->driver)
+	if (!pdev || !pdev->dev.driver)
 		return;
 
-	module_put(pdev->driver->driver.owner);
+	module_put(pdev->dev.driver->owner);
 }
 
 /**


@@ -1059,7 +1059,7 @@ void pcibios_bus_add_device(struct pci_dev *dev)
 		ppc_md.pcibios_bus_add_device(dev);
 }
 
-int pcibios_add_device(struct pci_dev *dev)
+int pcibios_device_add(struct pci_dev *dev)
 {
 	struct irq_domain *d;


@@ -51,7 +51,7 @@
  * to "new_size", calculated above. Implementing this is a convoluted process
  * which requires several hooks in the PCI core:
  *
- * 1. In pcibios_add_device() we call pnv_pci_ioda_fixup_iov().
+ * 1. In pcibios_device_add() we call pnv_pci_ioda_fixup_iov().
  *
  *    At this point the device has been probed and the device's BARs are sized,
  *    but no resource allocations have been done. The SR-IOV BARs are sized


@@ -561,7 +561,7 @@ static void zpci_cleanup_bus_resources(struct zpci_dev *zdev)
 	zdev->has_resources = 0;
 }
 
-int pcibios_add_device(struct pci_dev *pdev)
+int pcibios_device_add(struct pci_dev *pdev)
 {
 	struct zpci_dev *zdev = to_zpci(pdev);
 	struct resource *res;


@@ -1010,7 +1010,7 @@ void pcibios_set_master(struct pci_dev *dev)
 }
 
 #ifdef CONFIG_PCI_IOV
-int pcibios_add_device(struct pci_dev *dev)
+int pcibios_device_add(struct pci_dev *dev)
 {
 	struct pci_dev *pdev;


@@ -1187,7 +1187,7 @@ static int uncore_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id
 	 * PCI slot and func to indicate the uncore box.
 	 */
 	if (id->driver_data & ~0xffff) {
-		struct pci_driver *pci_drv = pdev->driver;
+		struct pci_driver *pci_drv = to_pci_driver(pdev->dev.driver);
 
 		pmu = uncore_pci_find_dev_pmu(pdev, pci_drv->id_table);
 		if (pmu == NULL)


@@ -80,7 +80,7 @@ static struct resource video_rom_resource = {
  */
 static bool match_id(struct pci_dev *pdev, unsigned short vendor, unsigned short device)
 {
-	struct pci_driver *drv = pdev->driver;
+	struct pci_driver *drv = to_pci_driver(pdev->dev.driver);
 	const struct pci_device_id *id;
 
 	if (pdev->vendor == vendor && pdev->device == device)


@@ -632,7 +632,7 @@ static void set_dev_domain_options(struct pci_dev *pdev)
 		pdev->hotplug_user_indicators = 1;
 }
 
-int pcibios_add_device(struct pci_dev *dev)
+int pcibios_device_add(struct pci_dev *dev)
 {
 	struct pci_setup_rom *rom;
 	struct irq_domain *msidom;


@@ -199,33 +199,20 @@ static acpi_status acpi_pci_query_osc(struct acpi_pci_root *root,
 	acpi_status status;
 	u32 result, capbuf[3];
 
-	support &= OSC_PCI_SUPPORT_MASKS;
 	support |= root->osc_support_set;
 	capbuf[OSC_QUERY_DWORD] = OSC_QUERY_ENABLE;
 	capbuf[OSC_SUPPORT_DWORD] = support;
-	if (control) {
-		*control &= OSC_PCI_CONTROL_MASKS;
-		capbuf[OSC_CONTROL_DWORD] = *control | root->osc_control_set;
-	} else {
-		/* Run _OSC query only with existing controls. */
-		capbuf[OSC_CONTROL_DWORD] = root->osc_control_set;
-	}
+	capbuf[OSC_CONTROL_DWORD] = *control | root->osc_control_set;
 
 	status = acpi_pci_run_osc(root->device->handle, capbuf, &result);
 	if (ACPI_SUCCESS(status)) {
 		root->osc_support_set = support;
-		if (control)
-			*control = result;
+		*control = result;
 	}
 	return status;
 }
 
-static acpi_status acpi_pci_osc_support(struct acpi_pci_root *root, u32 flags)
-{
-	return acpi_pci_query_osc(root, flags, NULL);
-}
-
 struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle)
 {
 	struct acpi_pci_root *root;
@@ -348,8 +335,9 @@ EXPORT_SYMBOL_GPL(acpi_get_pci_dev);
  * _OSC bits the BIOS has granted control of, but its contents are meaningless
  * on failure.
  **/
-static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
+static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 support)
 {
+	u32 req = OSC_PCI_EXPRESS_CAPABILITY_CONTROL;
 	struct acpi_pci_root *root;
 	acpi_status status;
 	u32 ctrl, capbuf[3];
@@ -357,22 +345,16 @@ static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 r
 	if (!mask)
 		return AE_BAD_PARAMETER;
 
-	ctrl = *mask & OSC_PCI_CONTROL_MASKS;
-	if ((ctrl & req) != req)
-		return AE_TYPE;
-
 	root = acpi_pci_find_root(handle);
 	if (!root)
 		return AE_NOT_EXIST;
 
-	*mask = ctrl | root->osc_control_set;
-	/* No need to evaluate _OSC if the control was already granted. */
-	if ((root->osc_control_set & ctrl) == ctrl)
-		return AE_OK;
+	ctrl = *mask;
+	*mask |= root->osc_control_set;
 
 	/* Need to check the available controls bits before requesting them. */
-	while (*mask) {
-		status = acpi_pci_query_osc(root, root->osc_support_set, mask);
+	do {
+		status = acpi_pci_query_osc(root, support, mask);
 		if (ACPI_FAILURE(status))
 			return status;
 		if (ctrl == *mask)
@@ -380,7 +362,11 @@ static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 r
 		decode_osc_control(root, "platform does not support",
 				   ctrl & ~(*mask));
 		ctrl = *mask;
-	}
+	} while (*mask);
+
+	/* No need to request _OSC if the control was already granted. */
+	if ((root->osc_control_set & ctrl) == ctrl)
+		return AE_OK;
 
 	if ((ctrl & req) != req) {
 		decode_osc_control(root, "not requesting control; platform does not support",
@@ -399,25 +385,9 @@ static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 r
 	return AE_OK;
 }
 
-static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
-				 bool is_pcie)
+static u32 calculate_support(void)
 {
-	u32 support, control, requested;
-	acpi_status status;
-	struct acpi_device *device = root->device;
-	acpi_handle handle = device->handle;
-
-	/*
-	 * Apple always return failure on _OSC calls when _OSI("Darwin") has
-	 * been called successfully. We know the feature set supported by the
-	 * platform, so avoid calling _OSC at all
-	 */
-	if (x86_apple_machine) {
-		root->osc_control_set = ~OSC_PCI_EXPRESS_PME_CONTROL;
-		decode_osc_control(root, "OS assumes control of",
-				   root->osc_control_set);
-		return;
-	}
+	u32 support;
 
 	/*
 	 * All supported architectures that use ACPI have support for
@ -434,30 +404,12 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
if (IS_ENABLED(CONFIG_PCIE_EDR)) if (IS_ENABLED(CONFIG_PCIE_EDR))
support |= OSC_PCI_EDR_SUPPORT; support |= OSC_PCI_EDR_SUPPORT;
decode_osc_support(root, "OS supports", support); return support;
status = acpi_pci_osc_support(root, support); }
if (ACPI_FAILURE(status)) {
*no_aspm = 1;
/* _OSC is optional for PCI host bridges */ static u32 calculate_control(void)
if ((status == AE_NOT_FOUND) && !is_pcie) {
return; u32 control;
dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n",
acpi_format_exception(status));
return;
}
if (pcie_ports_disabled) {
dev_info(&device->dev, "PCIe port services disabled; not requesting _OSC control\n");
return;
}
if ((support & ACPI_PCIE_REQ_SUPPORT) != ACPI_PCIE_REQ_SUPPORT) {
decode_osc_support(root, "not requesting OS control; OS requires",
ACPI_PCIE_REQ_SUPPORT);
return;
}
control = OSC_PCI_EXPRESS_CAPABILITY_CONTROL control = OSC_PCI_EXPRESS_CAPABILITY_CONTROL
| OSC_PCI_EXPRESS_PME_CONTROL; | OSC_PCI_EXPRESS_PME_CONTROL;
@ -483,11 +435,59 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
if (IS_ENABLED(CONFIG_PCIE_DPC) && IS_ENABLED(CONFIG_PCIE_EDR)) if (IS_ENABLED(CONFIG_PCIE_DPC) && IS_ENABLED(CONFIG_PCIE_EDR))
control |= OSC_PCI_EXPRESS_DPC_CONTROL; control |= OSC_PCI_EXPRESS_DPC_CONTROL;
requested = control; return control;
status = acpi_pci_osc_control_set(handle, &control, }
OSC_PCI_EXPRESS_CAPABILITY_CONTROL);
static bool os_control_query_checks(struct acpi_pci_root *root, u32 support)
{
struct acpi_device *device = root->device;
if (pcie_ports_disabled) {
dev_info(&device->dev, "PCIe port services disabled; not requesting _OSC control\n");
return false;
}
if ((support & ACPI_PCIE_REQ_SUPPORT) != ACPI_PCIE_REQ_SUPPORT) {
decode_osc_support(root, "not requesting OS control; OS requires",
ACPI_PCIE_REQ_SUPPORT);
return false;
}
return true;
}
static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
bool is_pcie)
{
u32 support, control = 0, requested = 0;
acpi_status status;
struct acpi_device *device = root->device;
acpi_handle handle = device->handle;
/*
* Apple always return failure on _OSC calls when _OSI("Darwin") has
* been called successfully. We know the feature set supported by the
* platform, so avoid calling _OSC at all
*/
if (x86_apple_machine) {
root->osc_control_set = ~OSC_PCI_EXPRESS_PME_CONTROL;
decode_osc_control(root, "OS assumes control of",
root->osc_control_set);
return;
}
support = calculate_support();
decode_osc_support(root, "OS supports", support);
if (os_control_query_checks(root, support))
requested = control = calculate_control();
status = acpi_pci_osc_control_set(handle, &control, support);
if (ACPI_SUCCESS(status)) { if (ACPI_SUCCESS(status)) {
decode_osc_control(root, "OS now controls", control); if (control)
decode_osc_control(root, "OS now controls", control);
if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM) { if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_ASPM) {
/* /*
* We have ASPM control, but the FADT indicates that * We have ASPM control, but the FADT indicates that
@ -498,11 +498,6 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
*no_aspm = 1; *no_aspm = 1;
} }
} else { } else {
decode_osc_control(root, "OS requested", requested);
decode_osc_control(root, "platform willing to grant", control);
dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n",
acpi_format_exception(status));
/* /*
* We want to disable ASPM here, but aspm_disabled * We want to disable ASPM here, but aspm_disabled
* needs to remain in its state from boot so that we * needs to remain in its state from boot so that we
@ -511,6 +506,18 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
* root scan. * root scan.
*/ */
*no_aspm = 1; *no_aspm = 1;
/* _OSC is optional for PCI host bridges */
if ((status == AE_NOT_FOUND) && !is_pcie)
return;
if (control) {
decode_osc_control(root, "OS requested", requested);
decode_osc_control(root, "platform willing to grant", control);
}
dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n",
acpi_format_exception(status));
} }
} }


@@ -162,7 +162,6 @@ static int bcma_host_pci_probe(struct pci_dev *dev,
 {
 	struct bcma_bus *bus;
 	int err = -ENOMEM;
-	const char *name;
 	u32 val;
 
 	/* Alloc */
@@ -175,10 +174,7 @@ static int bcma_host_pci_probe(struct pci_dev *dev,
 	if (err)
 		goto err_kfree_bus;
 
-	name = dev_name(&dev->dev);
-	if (dev->driver && dev->driver->name)
-		name = dev->driver->name;
-	err = pci_request_regions(dev, name);
+	err = pci_request_regions(dev, "bcma-pci-bridge");
 	if (err)
 		goto err_pci_disable;
 
 	pci_set_master(dev);


@@ -3118,7 +3118,7 @@ static int qm_alloc_uacce(struct hisi_qm *qm)
 	};
 	int ret;
 
-	ret = strscpy(interface.name, pdev->driver->name,
+	ret = strscpy(interface.name, dev_driver_string(&pdev->dev),
 		      sizeof(interface.name));
 	if (ret < 0)
 		return -ENAMETOOLONG;


@@ -247,11 +247,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	pci_set_master(pdev);
 
-	if (adf_enable_aer(accel_dev)) {
-		dev_err(&pdev->dev, "Failed to enable aer.\n");
-		ret = -EFAULT;
-		goto out_err;
-	}
+	adf_enable_aer(accel_dev);
 
 	if (pci_save_state(pdev)) {
 		dev_err(&pdev->dev, "Failed to save pci state.\n");
@@ -304,6 +300,7 @@ static struct pci_driver adf_driver = {
 	.probe = adf_probe,
 	.remove = adf_remove,
 	.sriov_configure = adf_sriov_configure,
+	.err_handler = &adf_err_handler,
 };
 
 module_pci_driver(adf_driver);


@@ -33,6 +33,7 @@ static struct pci_driver adf_driver = {
 	.probe = adf_probe,
 	.remove = adf_remove,
 	.sriov_configure = adf_sriov_configure,
+	.err_handler = &adf_err_handler,
 };
 
 static void adf_cleanup_pci_dev(struct adf_accel_dev *accel_dev)
@@ -192,11 +193,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	}
 
 	pci_set_master(pdev);
 
-	if (adf_enable_aer(accel_dev)) {
-		dev_err(&pdev->dev, "Failed to enable aer\n");
-		ret = -EFAULT;
-		goto out_err_free_reg;
-	}
+	adf_enable_aer(accel_dev);
 
 	if (pci_save_state(pdev)) {
 		dev_err(&pdev->dev, "Failed to save pci state\n");


@@ -33,6 +33,7 @@ static struct pci_driver adf_driver = {
 	.probe = adf_probe,
 	.remove = adf_remove,
 	.sriov_configure = adf_sriov_configure,
+	.err_handler = &adf_err_handler,
 };
 
 static void adf_cleanup_pci_dev(struct adf_accel_dev *accel_dev)
@@ -192,11 +193,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	}
 
 	pci_set_master(pdev);
 
-	if (adf_enable_aer(accel_dev)) {
-		dev_err(&pdev->dev, "Failed to enable aer\n");
-		ret = -EFAULT;
-		goto out_err_free_reg;
-	}
+	adf_enable_aer(accel_dev);
 
 	if (pci_save_state(pdev)) {
 		dev_err(&pdev->dev, "Failed to save pci state\n");


@@ -166,11 +166,12 @@ static void adf_resume(struct pci_dev *pdev)
 	dev_info(&pdev->dev, "Device is up and running\n");
 }
 
-static const struct pci_error_handlers adf_err_handler = {
+const struct pci_error_handlers adf_err_handler = {
 	.error_detected = adf_error_detected,
 	.slot_reset = adf_slot_reset,
 	.resume = adf_resume,
 };
+EXPORT_SYMBOL_GPL(adf_err_handler);
 
 /**
  * adf_enable_aer() - Enable Advance Error Reporting for acceleration device
@@ -179,17 +180,12 @@ static const struct pci_error_handlers adf_err_handler = {
  * Function enables PCI Advance Error Reporting for the
  * QAT acceleration device accel_dev.
  * To be used by QAT device specific drivers.
- *
- * Return: 0 on success, error code otherwise.
  */
-int adf_enable_aer(struct adf_accel_dev *accel_dev)
+void adf_enable_aer(struct adf_accel_dev *accel_dev)
 {
 	struct pci_dev *pdev = accel_to_pci_dev(accel_dev);
-	struct pci_driver *pdrv = pdev->driver;
 
-	pdrv->err_handler = &adf_err_handler;
 	pci_enable_pcie_error_reporting(pdev);
-	return 0;
 }
 EXPORT_SYMBOL_GPL(adf_enable_aer);


@@ -94,7 +94,8 @@ void adf_ae_fw_release(struct adf_accel_dev *accel_dev);
 int adf_ae_start(struct adf_accel_dev *accel_dev);
 int adf_ae_stop(struct adf_accel_dev *accel_dev);
 
-int adf_enable_aer(struct adf_accel_dev *accel_dev);
+extern const struct pci_error_handlers adf_err_handler;
+void adf_enable_aer(struct adf_accel_dev *accel_dev);
 void adf_disable_aer(struct adf_accel_dev *accel_dev);
 void adf_reset_sbr(struct adf_accel_dev *accel_dev);
 void adf_reset_flr(struct adf_accel_dev *accel_dev);


@@ -33,6 +33,7 @@ static struct pci_driver adf_driver = {
 	.probe = adf_probe,
 	.remove = adf_remove,
 	.sriov_configure = adf_sriov_configure,
+	.err_handler = &adf_err_handler,
 };
 
 static void adf_cleanup_pci_dev(struct adf_accel_dev *accel_dev)
@@ -192,11 +193,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	}
 
 	pci_set_master(pdev);
 
-	if (adf_enable_aer(accel_dev)) {
-		dev_err(&pdev->dev, "Failed to enable aer\n");
-		ret = -EFAULT;
-		goto out_err_free_reg;
-	}
+	adf_enable_aer(accel_dev);
 
 	if (pci_save_state(pdev)) {
 		dev_err(&pdev->dev, "Failed to save pci state\n");


@@ -15,6 +15,7 @@
 #include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/dev_printk.h>
+#include <linux/dma-iommu.h>
 #include <linux/dma-mapping.h>
 #include <linux/err.h>
 #include <linux/interrupt.h>
@@ -737,6 +738,31 @@ static int apple_dart_def_domain_type(struct device *dev)
 	return 0;
 }
 
+#ifndef CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR
+/* Keep things compiling when CONFIG_PCI_APPLE isn't selected */
+#define CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR	0
+#endif
+#define DOORBELL_ADDR	(CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR & PAGE_MASK)
+
+static void apple_dart_get_resv_regions(struct device *dev,
+					struct list_head *head)
+{
+	if (IS_ENABLED(CONFIG_PCIE_APPLE) && dev_is_pci(dev)) {
+		struct iommu_resv_region *region;
+		int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+		region = iommu_alloc_resv_region(DOORBELL_ADDR,
+						 PAGE_SIZE, prot,
+						 IOMMU_RESV_MSI);
+		if (!region)
+			return;
+
+		list_add_tail(&region->list, head);
+	}
+
+	iommu_dma_get_resv_regions(dev, head);
+}
+
 static const struct iommu_ops apple_dart_iommu_ops = {
 	.domain_alloc = apple_dart_domain_alloc,
 	.domain_free = apple_dart_domain_free,
@@ -753,6 +779,8 @@ static const struct iommu_ops apple_dart_iommu_ops = {
 	.device_group = apple_dart_device_group,
 	.of_xlate = apple_dart_of_xlate,
 	.def_domain_type = apple_dart_def_domain_type,
+	.get_resv_regions = apple_dart_get_resv_regions,
+	.put_resv_regions = generic_iommu_put_resv_regions,
 	.pgsize_bitmap = -1UL, /* Restricted during dart probe */
 };


@@ -829,7 +829,6 @@ int
 mpt_device_driver_register(struct mpt_pci_driver * dd_cbfunc, u8 cb_idx)
 {
 	MPT_ADAPTER	*ioc;
-	const struct pci_device_id *id;
 
 	if (!cb_idx || cb_idx >= MPT_MAX_PROTOCOL_DRIVERS)
 		return -EINVAL;
@@ -838,10 +837,8 @@ mpt_device_driver_register(struct mpt_pci_driver * dd_cbfunc, u8 cb_idx)
 
 	/* call per pci device probe entry point */
 	list_for_each_entry(ioc, &ioc_list, list) {
-		id = ioc->pcidev->driver ?
-		    ioc->pcidev->driver->id_table : NULL;
 		if (dd_cbfunc->probe)
-			dd_cbfunc->probe(ioc->pcidev, id);
+			dd_cbfunc->probe(ioc->pcidev);
 	}
 
 	return 0;
@@ -2032,7 +2029,7 @@ mpt_attach(struct pci_dev *pdev, const struct pci_device_id *id)
 	for(cb_idx = 0; cb_idx < MPT_MAX_PROTOCOL_DRIVERS; cb_idx++) {
 		if(MptDeviceDriverHandlers[cb_idx] &&
 		  MptDeviceDriverHandlers[cb_idx]->probe) {
-			MptDeviceDriverHandlers[cb_idx]->probe(pdev,id);
+			MptDeviceDriverHandlers[cb_idx]->probe(pdev);
 		}
 	}


@@ -257,7 +257,7 @@ typedef enum {
 } MPT_DRIVER_CLASS;
 
 struct mpt_pci_driver{
-	int  (*probe) (struct pci_dev *dev, const struct pci_device_id *id);
+	int  (*probe) (struct pci_dev *dev);
 	void (*remove) (struct pci_dev *dev);
 };


@@ -114,7 +114,7 @@ static int mptctl_do_reset(MPT_ADAPTER *iocp, unsigned long arg);
 static int mptctl_hp_hostinfo(MPT_ADAPTER *iocp, unsigned long arg, unsigned int cmd);
 static int mptctl_hp_targetinfo(MPT_ADAPTER *iocp, unsigned long arg);
 
-static int  mptctl_probe(struct pci_dev *, const struct pci_device_id *);
+static int  mptctl_probe(struct pci_dev *);
 static void mptctl_remove(struct pci_dev *);
 
 #ifdef CONFIG_COMPAT
@@ -2838,7 +2838,7 @@ static long compat_mpctl_ioctl(struct file *f, unsigned int cmd, unsigned long a
  */
 static int
-mptctl_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+mptctl_probe(struct pci_dev *pdev)
 {
 	MPT_ADAPTER *ioc = pci_get_drvdata(pdev);


@@ -1377,7 +1377,7 @@ mpt_register_lan_device (MPT_ADAPTER *mpt_dev, int pnum)
 }
 
 static int
-mptlan_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+mptlan_probe(struct pci_dev *pdev)
 {
 	MPT_ADAPTER		*ioc = pci_get_drvdata(pdev);
 	struct net_device	*dev;


@@ -20,34 +20,38 @@ static void pci_error_handlers(struct cxl_afu *afu,
 				pci_channel_state_t state)
 {
 	struct pci_dev *afu_dev;
+	struct pci_driver *afu_drv;
+	const struct pci_error_handlers *err_handler;
 
 	if (afu->phb == NULL)
 		return;
 
 	list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
-		if (!afu_dev->driver)
+		afu_drv = to_pci_driver(afu_dev->dev.driver);
+		if (!afu_drv)
 			continue;
 
+		err_handler = afu_drv->err_handler;
 		switch (bus_error_event) {
 		case CXL_ERROR_DETECTED_EVENT:
 			afu_dev->error_state = state;
 
-			if (afu_dev->driver->err_handler &&
-			    afu_dev->driver->err_handler->error_detected)
-				afu_dev->driver->err_handler->error_detected(afu_dev, state);
+			if (err_handler &&
+			    err_handler->error_detected)
+				err_handler->error_detected(afu_dev, state);
 			break;
 		case CXL_SLOT_RESET_EVENT:
 			afu_dev->error_state = state;
 
-			if (afu_dev->driver->err_handler &&
-			    afu_dev->driver->err_handler->slot_reset)
-				afu_dev->driver->err_handler->slot_reset(afu_dev);
+			if (err_handler &&
+			    err_handler->slot_reset)
+				err_handler->slot_reset(afu_dev);
 			break;
 		case CXL_RESUME_EVENT:
-			if (afu_dev->driver->err_handler &&
-			    afu_dev->driver->err_handler->resume)
-				afu_dev->driver->err_handler->resume(afu_dev);
+			if (err_handler &&
+			    err_handler->resume)
+				err_handler->resume(afu_dev);
 			break;
 		}
 	}
 }


@@ -1795,6 +1795,8 @@ static pci_ers_result_t cxl_vphb_error_detected(struct cxl_afu *afu,
 					pci_channel_state_t state)
 {
 	struct pci_dev *afu_dev;
+	struct pci_driver *afu_drv;
+	const struct pci_error_handlers *err_handler;
 	pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET;
 	pci_ers_result_t afu_result = PCI_ERS_RESULT_NEED_RESET;
@@ -1805,14 +1807,16 @@ static pci_ers_result_t cxl_vphb_error_detected(struct cxl_afu *afu,
 		return result;
 
 	list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
-		if (!afu_dev->driver)
+		afu_drv = to_pci_driver(afu_dev->dev.driver);
+		if (!afu_drv)
 			continue;
 
 		afu_dev->error_state = state;
 
-		if (afu_dev->driver->err_handler)
-			afu_result = afu_dev->driver->err_handler->error_detected(afu_dev,
-										  state);
+		err_handler = afu_drv->err_handler;
+		if (err_handler)
+			afu_result = err_handler->error_detected(afu_dev,
+								 state);
 		/* Disconnect trumps all, NONE trumps NEED_RESET */
 		if (afu_result == PCI_ERS_RESULT_DISCONNECT)
 			result = PCI_ERS_RESULT_DISCONNECT;
@@ -1972,6 +1976,8 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
 	struct cxl_afu *afu;
 	struct cxl_context *ctx;
 	struct pci_dev *afu_dev;
+	struct pci_driver *afu_drv;
+	const struct pci_error_handlers *err_handler;
 	pci_ers_result_t afu_result = PCI_ERS_RESULT_RECOVERED;
 	pci_ers_result_t result = PCI_ERS_RESULT_RECOVERED;
 	int i;
@@ -2028,12 +2034,13 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
 			 * shouldn't start new work until we call
 			 * their resume function.
 			 */
-			if (!afu_dev->driver)
+			afu_drv = to_pci_driver(afu_dev->dev.driver);
+			if (!afu_drv)
 				continue;
 
-			if (afu_dev->driver->err_handler &&
-			    afu_dev->driver->err_handler->slot_reset)
-				afu_result = afu_dev->driver->err_handler->slot_reset(afu_dev);
+			err_handler = afu_drv->err_handler;
+			if (err_handler && err_handler->slot_reset)
+				afu_result = err_handler->slot_reset(afu_dev);
 
 			if (afu_result == PCI_ERS_RESULT_DISCONNECT)
 				result = PCI_ERS_RESULT_DISCONNECT;
@@ -2060,6 +2067,8 @@ static void cxl_pci_resume(struct pci_dev *pdev)
 	struct cxl *adapter = pci_get_drvdata(pdev);
 	struct cxl_afu *afu;
 	struct pci_dev *afu_dev;
+	struct pci_driver *afu_drv;
+	const struct pci_error_handlers *err_handler;
 	int i;
 
 	/* Everything is back now. Drivers should restart work now.
@@ -2074,9 +2083,13 @@ static void cxl_pci_resume(struct pci_dev *pdev)
 			continue;
 
 		list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
-			if (afu_dev->driver && afu_dev->driver->err_handler &&
-			    afu_dev->driver->err_handler->resume)
-				afu_dev->driver->err_handler->resume(afu_dev);
+			afu_drv = to_pci_driver(afu_dev->dev.driver);
+			if (!afu_drv)
+				continue;
+
+			err_handler = afu_drv->err_handler;
+			if (err_handler && err_handler->resume)
+				err_handler->resume(afu_dev);
 		}
 	}
 	spin_unlock(&adapter->afu_list_lock);


@@ -676,8 +676,6 @@ void t3_link_changed(struct adapter *adapter, int port_id);
 void t3_link_fault(struct adapter *adapter, int port_id);
 int t3_link_start(struct cphy *phy, struct cmac *mac, struct link_config *lc);
 const struct adapter_info *t3_get_adapter_info(unsigned int board_id);
-int t3_seeprom_read(struct adapter *adapter, u32 addr, __le32 *data);
-int t3_seeprom_write(struct adapter *adapter, u32 addr, __le32 data);
 int t3_seeprom_wp(struct adapter *adapter, int enable);
 int t3_get_tp_version(struct adapter *adapter, u32 *vers);
 int t3_check_tpsram_version(struct adapter *adapter);


@@ -2036,20 +2036,16 @@ static int get_eeprom(struct net_device *dev, struct ethtool_eeprom *e,
 {
 	struct port_info *pi = netdev_priv(dev);
 	struct adapter *adapter = pi->adapter;
-	int i, err = 0;
-
-	u8 *buf = kmalloc(EEPROMSIZE, GFP_KERNEL);
-	if (!buf)
-		return -ENOMEM;
+	int cnt;
 
 	e->magic = EEPROM_MAGIC;
-	for (i = e->offset & ~3; !err && i < e->offset + e->len; i += 4)
-		err = t3_seeprom_read(adapter, i, (__le32 *) & buf[i]);
-
-	if (!err)
-		memcpy(data, buf + e->offset, e->len);
-	kfree(buf);
-	return err;
+	cnt = pci_read_vpd(adapter->pdev, e->offset, e->len, data);
+	if (cnt < 0)
+		return cnt;
+
+	e->len = cnt;
+
+	return 0;
 }
 
 static int set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
@@ -2058,7 +2054,6 @@ static int set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
 	struct port_info *pi = netdev_priv(dev);
 	struct adapter *adapter = pi->adapter;
 	u32 aligned_offset, aligned_len;
-	__le32 *p;
 	u8 *buf;
 	int err;
 
@@ -2072,12 +2067,9 @@ static int set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
 		buf = kmalloc(aligned_len, GFP_KERNEL);
 		if (!buf)
 			return -ENOMEM;
-		err = t3_seeprom_read(adapter, aligned_offset, (__le32 *) buf);
-		if (!err && aligned_len > 4)
-			err = t3_seeprom_read(adapter,
-					      aligned_offset + aligned_len - 4,
-					      (__le32 *) & buf[aligned_len - 4]);
-		if (err)
+		err = pci_read_vpd(adapter->pdev, aligned_offset, aligned_len,
+				   buf);
+		if (err < 0)
 			goto out;
 		memcpy(buf + (eeprom->offset & 3), data, eeprom->len);
 	} else
@@ -2087,17 +2079,13 @@ static int set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
 	if (err)
 		goto out;
 
-	for (p = (__le32 *) buf; !err && aligned_len; aligned_len -= 4, p++) {
-		err = t3_seeprom_write(adapter, aligned_offset, *p);
-		aligned_offset += 4;
-	}
-
-	if (!err)
+	err = pci_write_vpd(adapter->pdev, aligned_offset, aligned_len, buf);
+	if (err >= 0)
 		err = t3_seeprom_wp(adapter, 1);
 out:
 	if (buf != data)
 		kfree(buf);
-	return err;
+	return err < 0 ? err : 0;
 }
 
 static void get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)


@@ -596,80 +596,9 @@ struct t3_vpd {
 	u32 pad;		/* for multiple-of-4 sizing and alignment */
 };
 
-#define EEPROM_MAX_POLL		40
 #define EEPROM_STAT_ADDR	0x4000
 #define VPD_BASE		0xc00
 
-/**
- * t3_seeprom_read - read a VPD EEPROM location
- * @adapter: adapter to read
- * @addr: EEPROM address
- * @data: where to store the read data
- *
- * Read a 32-bit word from a location in VPD EEPROM using the card's PCI
- * VPD ROM capability.  A zero is written to the flag bit when the
- * address is written to the control register.  The hardware device will
- * set the flag to 1 when 4 bytes have been read into the data register.
- */
-int t3_seeprom_read(struct adapter *adapter, u32 addr, __le32 *data)
-{
-	u16 val;
-	int attempts = EEPROM_MAX_POLL;
-	u32 v;
-	unsigned int base = adapter->params.pci.vpd_cap_addr;
-
-	if ((addr >= EEPROMSIZE && addr != EEPROM_STAT_ADDR) || (addr & 3))
-		return -EINVAL;
-
-	pci_write_config_word(adapter->pdev, base + PCI_VPD_ADDR, addr);
-	do {
-		udelay(10);
-		pci_read_config_word(adapter->pdev, base + PCI_VPD_ADDR, &val);
-	} while (!(val & PCI_VPD_ADDR_F) && --attempts);
-
-	if (!(val & PCI_VPD_ADDR_F)) {
-		CH_ERR(adapter, "reading EEPROM address 0x%x failed\n", addr);
-		return -EIO;
-	}
-	pci_read_config_dword(adapter->pdev, base + PCI_VPD_DATA, &v);
-	*data = cpu_to_le32(v);
-	return 0;
-}
-
-/**
- * t3_seeprom_write - write a VPD EEPROM location
- * @adapter: adapter to write
- * @addr: EEPROM address
- * @data: value to write
- *
- * Write a 32-bit word to a location in VPD EEPROM using the card's PCI
- * VPD ROM capability.
- */
-int t3_seeprom_write(struct adapter *adapter, u32 addr, __le32 data)
-{
-	u16 val;
-	int attempts = EEPROM_MAX_POLL;
-	unsigned int base = adapter->params.pci.vpd_cap_addr;
-
-	if ((addr >= EEPROMSIZE && addr != EEPROM_STAT_ADDR) || (addr & 3))
-		return -EINVAL;
-
-	pci_write_config_dword(adapter->pdev, base + PCI_VPD_DATA,
-			       le32_to_cpu(data));
-	pci_write_config_word(adapter->pdev, base + PCI_VPD_ADDR,
-			      addr | PCI_VPD_ADDR_F);
-	do {
-		msleep(1);
-		pci_read_config_word(adapter->pdev, base + PCI_VPD_ADDR, &val);
-	} while ((val & PCI_VPD_ADDR_F) && --attempts);
-
-	if (val & PCI_VPD_ADDR_F) {
-		CH_ERR(adapter, "write to EEPROM address 0x%x failed\n", addr);
-		return -EIO;
-	}
-	return 0;
-}
-
 /**
  * t3_seeprom_wp - enable/disable EEPROM write protection
  * @adapter: the adapter
@@ -679,7 +608,14 @@ int t3_seeprom_write(struct adapter *adapter, u32 addr, __le32 data)
  */
 int t3_seeprom_wp(struct adapter *adapter, int enable)
 {
-	return t3_seeprom_write(adapter, EEPROM_STAT_ADDR, enable ? 0xc : 0);
+	u32 data = enable ? 0xc : 0;
+	int ret;
+
+	/* EEPROM_STAT_ADDR is outside VPD area, use pci_write_vpd_any() */
+	ret = pci_write_vpd_any(adapter->pdev, EEPROM_STAT_ADDR, sizeof(u32),
+				&data);
+
+	return ret < 0 ? ret : 0;
 }
 
 static int vpdstrtouint(char *s, u8 len, unsigned int base, unsigned int *val)
@@ -709,24 +645,22 @@ static int vpdstrtou16(char *s, u8 len, unsigned int base, u16 *val)
  */
 static int get_vpd_params(struct adapter *adapter, struct vpd_params *p)
 {
-	int i, addr, ret;
 	struct t3_vpd vpd;
+	u8 base_val = 0;
+	int addr, ret;
 
 	/*
	 * Card information is normally at VPD_BASE but some early cards had
 	 * it at 0.
 	 */
-	ret = t3_seeprom_read(adapter, VPD_BASE, (__le32 *)&vpd);
-	if (ret)
+	ret = pci_read_vpd(adapter->pdev, VPD_BASE, 1, &base_val);
+	if (ret < 0)
 		return ret;
-	addr = vpd.id_tag == 0x82 ? VPD_BASE : 0;
+	addr = base_val == PCI_VPD_LRDT_ID_STRING ? VPD_BASE : 0;
 
-	for (i = 0; i < sizeof(vpd); i += 4) {
-		ret = t3_seeprom_read(adapter, addr + i,
-				      (__le32 *)((u8 *)&vpd + i));
-		if (ret)
-			return ret;
-	}
+	ret = pci_read_vpd(adapter->pdev, addr, sizeof(vpd), &vpd);
+	if (ret < 0)
		return ret;
 
 	ret = vpdstrtouint(vpd.cclk_data, vpd.cclk_len, 10, &p->cclk);
 	if (ret)

@@ -608,7 +608,7 @@ static void hns3_get_drvinfo(struct net_device *netdev,
 		return;
 	}
 
-	strncpy(drvinfo->driver, h->pdev->driver->name,
+	strncpy(drvinfo->driver, dev_driver_string(&h->pdev->dev),
 		sizeof(drvinfo->driver));
 	drvinfo->driver[sizeof(drvinfo->driver) - 1] = '\0';


@@ -776,7 +776,7 @@ out_release:
 static int prestera_pci_probe(struct pci_dev *pdev,
 			      const struct pci_device_id *id)
 {
-	const char *driver_name = pdev->driver->name;
+	const char *driver_name = dev_driver_string(&pdev->dev);
 	struct prestera_fw *fw;
 	int err;


@@ -1875,7 +1875,7 @@ static void mlxsw_pci_cmd_fini(struct mlxsw_pci *mlxsw_pci)
 
 static int mlxsw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
-	const char *driver_name = pdev->driver->name;
+	const char *driver_name = dev_driver_string(&pdev->dev);
 	struct mlxsw_pci *mlxsw_pci;
 	int err;


@@ -202,7 +202,8 @@ nfp_get_drvinfo(struct nfp_app *app, struct pci_dev *pdev,
 {
 	char nsp_version[ETHTOOL_FWVERS_LEN] = {};

-	strlcpy(drvinfo->driver, pdev->driver->name, sizeof(drvinfo->driver));
+	strlcpy(drvinfo->driver, dev_driver_string(&pdev->dev),
+		sizeof(drvinfo->driver));
 	nfp_net_get_nspinfo(app, nsp_version);
 	snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
 		 "%s %s %s %s", vnic_version, nsp_version,


@@ -156,10 +156,14 @@ int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq)
 	/* Now start the actual "proper" walk of the interrupt tree */
 	while (ipar != NULL) {
-		/* Now check if cursor is an interrupt-controller and if it is
-		 * then we are done
+		/*
+		 * Now check if cursor is an interrupt-controller and
+		 * if it is then we are done, unless there is an
+		 * interrupt-map which takes precedence.
 		 */
-		if (of_property_read_bool(ipar, "interrupt-controller")) {
+		imap = of_get_property(ipar, "interrupt-map", &imaplen);
+		if (imap == NULL &&
+		    of_property_read_bool(ipar, "interrupt-controller")) {
 			pr_debug(" -> got it !\n");
 			return 0;
 		}
@@ -173,8 +177,6 @@ int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq)
 			goto fail;
 		}

-		/* Now look for an interrupt-map */
-		imap = of_get_property(ipar, "interrupt-map", &imaplen);
-
 		/* No interrupt map, check for an interrupt parent */
 		if (imap == NULL) {
 			pr_debug(" -> no map, getting parent\n");
@@ -255,6 +257,11 @@ int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq)
 		out_irq->args_count = intsize = newintsize;
 		addrsize = newaddrsize;

+		if (ipar == newpar) {
+			pr_debug("%pOF interrupt-map entry to self\n", ipar);
+			return 0;
+		}
+
 	skiplevel:
 		/* Iterate again with new parent */
 		out_irq->np = newpar;


@@ -254,7 +254,7 @@ config PCIE_MEDIATEK_GEN3
 	  MediaTek SoCs.

 config VMD
-	depends on PCI_MSI && X86_64 && SRCU
+	depends on PCI_MSI && X86_64 && SRCU && !UML
 	tristate "Intel Volume Management Device Driver"
 	help
 	  Adds support for the Intel Volume Management Device (VMD). VMD is a
@@ -312,6 +312,32 @@ config PCIE_HISI_ERR
 	  Say Y here if you want error handling support
 	  for the PCIe controller's errors on HiSilicon HIP SoCs

+config PCIE_APPLE_MSI_DOORBELL_ADDR
+	hex
+	default 0xfffff000
+	depends on PCIE_APPLE
+
+config PCIE_APPLE
+	tristate "Apple PCIe controller"
+	depends on ARCH_APPLE || COMPILE_TEST
+	depends on OF
+	depends on PCI_MSI_IRQ_DOMAIN
+	select PCI_HOST_COMMON
+	help
+	  Say Y here if you want to enable PCIe controller support on Apple
+	  system-on-chips, like the Apple M1. This is required for the USB
+	  type-A ports, Ethernet, Wi-Fi, and Bluetooth.
+
+	  If unsure, say Y if you have an Apple Silicon system.
+
+config PCIE_MT7621
+	tristate "MediaTek MT7621 PCIe Controller"
+	depends on (RALINK && SOC_MT7621) || (MIPS && COMPILE_TEST)
+	select PHY_MT7621_PCI
+	default SOC_MT7621
+	help
+	  This selects a driver for the MediaTek MT7621 PCIe Controller.
+
 source "drivers/pci/controller/dwc/Kconfig"
 source "drivers/pci/controller/mobiveil/Kconfig"
 source "drivers/pci/controller/cadence/Kconfig"


@@ -37,6 +37,9 @@ obj-$(CONFIG_VMD) += vmd.o
 obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
 obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
 obj-$(CONFIG_PCIE_HISI_ERR) += pcie-hisi-error.o
+obj-$(CONFIG_PCIE_APPLE) += pcie-apple.o
+obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o

 # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
 obj-y += dwc/
 obj-y += mobiveil/


@@ -474,7 +474,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
 	ret = clk_prepare_enable(clk);
 	if (ret) {
 		dev_err(dev, "failed to enable pcie_refclk\n");
-		goto err_get_sync;
+		goto err_pcie_setup;
 	}
 	pcie->refclk = clk;


@@ -127,6 +127,8 @@ static int cdns_plat_pcie_probe(struct platform_device *pdev)
 		goto err_init;
 	}

+	return 0;
+
 err_init:
 err_get_sync:
	pm_runtime_put_sync(dev);


@@ -8,22 +8,20 @@ config PCIE_DW
 config PCIE_DW_HOST
 	bool
-	depends on PCI_MSI_IRQ_DOMAIN
 	select PCIE_DW

 config PCIE_DW_EP
 	bool
-	depends on PCI_ENDPOINT
 	select PCIE_DW

 config PCI_DRA7XX
-	bool
+	tristate

 config PCI_DRA7XX_HOST
-	bool "TI DRA7xx PCIe controller Host Mode"
+	tristate "TI DRA7xx PCIe controller Host Mode"
 	depends on SOC_DRA7XX || COMPILE_TEST
-	depends on PCI_MSI_IRQ_DOMAIN
 	depends on OF && HAS_IOMEM && TI_PIPE3
+	depends on PCI_MSI_IRQ_DOMAIN
 	select PCIE_DW_HOST
 	select PCI_DRA7XX
 	default y if SOC_DRA7XX
@@ -36,10 +34,10 @@ config PCI_DRA7XX_HOST
 	  This uses the DesignWare core.

 config PCI_DRA7XX_EP
-	bool "TI DRA7xx PCIe controller Endpoint Mode"
+	tristate "TI DRA7xx PCIe controller Endpoint Mode"
 	depends on SOC_DRA7XX || COMPILE_TEST
-	depends on PCI_ENDPOINT
 	depends on OF && HAS_IOMEM && TI_PIPE3
+	depends on PCI_ENDPOINT
 	select PCIE_DW_EP
 	select PCI_DRA7XX
 	help
@@ -55,7 +53,7 @@ config PCIE_DW_PLAT

 config PCIE_DW_PLAT_HOST
 	bool "Platform bus based DesignWare PCIe Controller - Host mode"
-	depends on PCI && PCI_MSI_IRQ_DOMAIN
+	depends on PCI_MSI_IRQ_DOMAIN
 	select PCIE_DW_HOST
 	select PCIE_DW_PLAT
 	help
@@ -138,8 +136,8 @@ config PCI_LAYERSCAPE
 	bool "Freescale Layerscape PCIe controller - Host mode"
 	depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST)
 	depends on PCI_MSI_IRQ_DOMAIN
-	select MFD_SYSCON
 	select PCIE_DW_HOST
+	select MFD_SYSCON
 	help
 	  Say Y here if you want to enable PCIe controller support on Layerscape
 	  SoCs to work in Host mode.
@@ -180,6 +178,16 @@ config PCIE_QCOM
 	  PCIe controller uses the DesignWare core plus Qualcomm-specific
 	  hardware wrappers.

+config PCIE_QCOM_EP
+	tristate "Qualcomm PCIe controller - Endpoint mode"
+	depends on OF && (ARCH_QCOM || COMPILE_TEST)
+	depends on PCI_ENDPOINT
+	select PCIE_DW_EP
+	help
+	  Say Y here to enable support for the PCIe controllers on Qualcomm SoCs
+	  to work in endpoint mode. The PCIe controller uses the DesignWare core
+	  plus Qualcomm-specific hardware wrappers.
+
 config PCIE_ARMADA_8K
 	bool "Marvell Armada-8K PCIe controller"
 	depends on ARCH_MVEBU || COMPILE_TEST
@@ -266,7 +274,7 @@ config PCIE_KEEMBAY_EP
 config PCIE_KIRIN
 	depends on OF && (ARM64 || COMPILE_TEST)
-	bool "HiSilicon Kirin series SoCs PCIe controllers"
+	tristate "HiSilicon Kirin series SoCs PCIe controllers"
 	depends on PCI_MSI_IRQ_DOMAIN
 	select PCIE_DW_HOST
 	help
@@ -283,8 +291,8 @@ config PCIE_HISI_STB
 config PCI_MESON
 	tristate "MESON PCIe controller"
-	depends on PCI_MSI_IRQ_DOMAIN
 	default m if ARCH_MESON
+	depends on PCI_MSI_IRQ_DOMAIN
 	select PCIE_DW_HOST
 	help
 	  Say Y here if you want to enable PCI controller support on Amlogic


@@ -12,6 +12,7 @@ obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o
 obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
 obj-$(CONFIG_PCI_LAYERSCAPE_EP) += pci-layerscape-ep.o
 obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
+obj-$(CONFIG_PCIE_QCOM_EP) += pcie-qcom-ep.o
 obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
 obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
 obj-$(CONFIG_PCIE_ROCKCHIP_DW_HOST) += pcie-dw-rockchip.o


@@ -7,6 +7,7 @@
  * Authors: Kishon Vijay Abraham I <kishon@ti.com>
  */

+#include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/device.h>
 #include <linux/err.h>
@@ -14,7 +15,7 @@
 #include <linux/irq.h>
 #include <linux/irqdomain.h>
 #include <linux/kernel.h>
-#include <linux/init.h>
+#include <linux/module.h>
 #include <linux/of_device.h>
 #include <linux/of_gpio.h>
 #include <linux/of_pci.h>
@@ -90,6 +91,7 @@ struct dra7xx_pcie {
 	int			phy_count;	/* DT phy-names count */
 	struct phy		**phy;
 	struct irq_domain	*irq_domain;
+	struct clk		*clk;
 	enum dw_pcie_device_mode mode;
 };
@@ -607,6 +609,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
 	},
 	{},
 };
+MODULE_DEVICE_TABLE(of, of_dra7xx_pcie_match);

 /*
  * dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
@@ -740,6 +743,15 @@ static int dra7xx_pcie_probe(struct platform_device *pdev)
 	if (!link)
 		return -ENOMEM;

+	dra7xx->clk = devm_clk_get_optional(dev, NULL);
+	if (IS_ERR(dra7xx->clk))
+		return dev_err_probe(dev, PTR_ERR(dra7xx->clk),
+				     "clock request failed");
+
+	ret = clk_prepare_enable(dra7xx->clk);
+	if (ret)
+		return ret;
+
 	for (i = 0; i < phy_count; i++) {
 		snprintf(name, sizeof(name), "pcie-phy%d", i);
 		phy[i] = devm_phy_get(dev, name);
@@ -925,6 +937,8 @@ static void dra7xx_pcie_shutdown(struct platform_device *pdev)
 		pm_runtime_disable(dev);

 	dra7xx_pcie_disable_phy(dra7xx);
+
+	clk_disable_unprepare(dra7xx->clk);
 }

 static const struct dev_pm_ops dra7xx_pcie_pm_ops = {
@@ -943,4 +957,8 @@ static struct platform_driver dra7xx_pcie_driver = {
 	},
 	.shutdown = dra7xx_pcie_shutdown,
 };
-builtin_platform_driver(dra7xx_pcie_driver);
+module_platform_driver(dra7xx_pcie_driver);
+
+MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>");
+MODULE_DESCRIPTION("PCIe controller driver for TI DRA7xx SoCs");
+MODULE_LICENSE("GPL v2");


@@ -1132,7 +1132,7 @@ static int imx6_pcie_probe(struct platform_device *pdev)

 	/* Limit link speed */
 	pci->link_gen = 1;
-	ret = of_property_read_u32(node, "fsl,max-link-speed", &pci->link_gen);
+	of_property_read_u32(node, "fsl,max-link-speed", &pci->link_gen);

 	imx6_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie");
 	if (IS_ERR(imx6_pcie->vpcie)) {


@@ -83,6 +83,7 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
 	for (func_no = 0; func_no < funcs; func_no++)
 		__dw_pcie_ep_reset_bar(pci, func_no, bar, 0);
 }
+EXPORT_SYMBOL_GPL(dw_pcie_ep_reset_bar);

 static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie_ep *ep, u8 func_no,
 				     u8 cap_ptr, u8 cap)
@@ -485,6 +486,7 @@ int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no)

 	return -EINVAL;
 }
+EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_legacy_irq);

 int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
 			     u8 interrupt_num)
@@ -536,6 +538,7 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,

 	return 0;
 }
+EXPORT_SYMBOL_GPL(dw_pcie_ep_raise_msi_irq);

 int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no,
 				       u16 interrupt_num)


@@ -335,6 +335,16 @@ int dw_pcie_host_init(struct pcie_port *pp)
 	if (pci->link_gen < 1)
 		pci->link_gen = of_pci_get_max_link_speed(np);

+	/* Set default bus ops */
+	bridge->ops = &dw_pcie_ops;
+	bridge->child_ops = &dw_child_pcie_ops;
+
+	if (pp->ops->host_init) {
+		ret = pp->ops->host_init(pp);
+		if (ret)
+			return ret;
+	}
+
 	if (pci_msi_enabled()) {
 		pp->has_msi_ctrl = !(pp->ops->msi_host_init ||
 				     of_property_read_bool(np, "msi-parent") ||
@@ -388,15 +398,6 @@ int dw_pcie_host_init(struct pcie_port *pp)
 		}
 	}

-	/* Set default bus ops */
-	bridge->ops = &dw_pcie_ops;
-	bridge->child_ops = &dw_child_pcie_ops;
-
-	if (pp->ops->host_init) {
-		ret = pp->ops->host_init(pp);
-		if (ret)
-			goto err_free_msi;
-	}
-
 	dw_pcie_iatu_detect(pci);

 	dw_pcie_setup_rc(pp);


@@ -538,6 +538,7 @@ int dw_pcie_link_up(struct dw_pcie *pci)
 	return ((val & PCIE_PORT_DEBUG1_LINK_UP) &&
 		(!(val & PCIE_PORT_DEBUG1_LINK_IN_TRAINING)));
 }
+EXPORT_SYMBOL_GPL(dw_pcie_link_up);

 void dw_pcie_upconfig_setup(struct dw_pcie *pci)
 {


@@ -8,16 +8,18 @@
  * Author: Xiaowei Song <songxiaowei@huawei.com>
  */

-#include <linux/compiler.h>
 #include <linux/clk.h>
+#include <linux/compiler.h>
 #include <linux/delay.h>
 #include <linux/err.h>
 #include <linux/gpio.h>
 #include <linux/interrupt.h>
 #include <linux/mfd/syscon.h>
 #include <linux/of_address.h>
+#include <linux/of_device.h>
 #include <linux/of_gpio.h>
 #include <linux/of_pci.h>
+#include <linux/phy/phy.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/platform_device.h>
@@ -28,26 +30,16 @@

 #define to_kirin_pcie(x) dev_get_drvdata((x)->dev)

-#define REF_CLK_FREQ			100000000
-
 /* PCIe ELBI registers */
 #define SOC_PCIECTRL_CTRL0_ADDR		0x000
 #define SOC_PCIECTRL_CTRL1_ADDR		0x004
-#define SOC_PCIEPHY_CTRL2_ADDR		0x008
-#define SOC_PCIEPHY_CTRL3_ADDR		0x00c
 #define PCIE_ELBI_SLV_DBI_ENABLE	(0x1 << 21)

 /* info located in APB */
 #define PCIE_APP_LTSSM_ENABLE	0x01c
-#define PCIE_APB_PHY_CTRL0	0x0
-#define PCIE_APB_PHY_CTRL1	0x4
 #define PCIE_APB_PHY_STATUS0	0x400
 #define PCIE_LINKUP_ENABLE	(0x8020)
 #define PCIE_LTSSM_ENABLE_BIT	(0x1 << 11)
-#define PIPE_CLK_STABLE		(0x1 << 19)
-#define PHY_REF_PAD_BIT		(0x1 << 8)
-#define PHY_PWR_DOWN_BIT	(0x1 << 22)
-#define PHY_RST_ACK_BIT		(0x1 << 16)

 /* info located in sysctrl */
 #define SCTRL_PCIE_CMOS_OFFSET	0x60
@@ -60,17 +52,70 @@
 #define PCIE_DEBOUNCE_PARAM	0xF0F400
 #define PCIE_OE_BYPASS		(0x3 << 28)

+/*
+ * Max number of connected PCI slots at an external PCI bridge
+ *
+ * This is used on HiKey 970, which has a PEX 8606 bridge with 4 connected
+ * lanes (lane 0 upstream, and the other three lanes, one connected to an
+ * in-board Ethernet adapter and the other two connected to M.2 and mini
+ * PCI slots.
+ *
+ * Each slot has a different clock source and uses a separate PERST# pin.
+ */
+#define MAX_PCI_SLOTS		3
+
+enum pcie_kirin_phy_type {
+	PCIE_KIRIN_INTERNAL_PHY,
+	PCIE_KIRIN_EXTERNAL_PHY
+};
+
+struct kirin_pcie {
+	enum pcie_kirin_phy_type	type;
+
+	struct dw_pcie	*pci;
+	struct regmap	*apb;
+	struct phy	*phy;
+	void		*phy_priv;	/* only for PCIE_KIRIN_INTERNAL_PHY */
+
+	/* DWC PERST# */
+	int		gpio_id_dwc_perst;
+
+	/* Per-slot PERST# */
+	int		num_slots;
+	int		gpio_id_reset[MAX_PCI_SLOTS];
+	const char	*reset_names[MAX_PCI_SLOTS];
+
+	/* Per-slot clkreq */
+	int		n_gpio_clkreq;
+	int		gpio_id_clkreq[MAX_PCI_SLOTS];
+	const char	*clkreq_names[MAX_PCI_SLOTS];
+};
+
+/*
+ * Kirin 960 PHY. Can't be split into a PHY driver without changing the
+ * DT schema.
+ */
+
+#define REF_CLK_FREQ			100000000
+
+/* PHY info located in APB */
+#define PCIE_APB_PHY_CTRL0	0x0
+#define PCIE_APB_PHY_CTRL1	0x4
+#define PCIE_APB_PHY_STATUS0	0x400
+#define PIPE_CLK_STABLE		BIT(19)
+#define PHY_REF_PAD_BIT		BIT(8)
+#define PHY_PWR_DOWN_BIT	BIT(22)
+#define PHY_RST_ACK_BIT		BIT(16)
+
 /* peri_crg ctrl */
 #define CRGCTRL_PCIE_ASSERT_OFFSET	0x88
 #define CRGCTRL_PCIE_ASSERT_BIT		0x8c000000

 /* Time for delay */
-#define REF_2_PERST_MIN		20000
+#define REF_2_PERST_MIN		21000
 #define REF_2_PERST_MAX		25000
 #define PERST_2_ACCESS_MIN	10000
 #define PERST_2_ACCESS_MAX	12000
+#define LINK_WAIT_MIN		900
+#define LINK_WAIT_MAX		1000
 #define PIPE_CLK_WAIT_MIN	550
 #define PIPE_CLK_WAIT_MAX	600
 #define TIME_CMOS_MIN		100
@@ -78,118 +123,101 @@
 #define TIME_PHY_PD_MIN		10
 #define TIME_PHY_PD_MAX		11

-struct kirin_pcie {
-	struct dw_pcie	*pci;
-	void __iomem	*apb_base;
-	void __iomem	*phy_base;
+struct hi3660_pcie_phy {
+	struct device	*dev;
+	void __iomem	*base;
 	struct regmap	*crgctrl;
 	struct regmap	*sysctrl;
 	struct clk	*apb_sys_clk;
 	struct clk	*apb_phy_clk;
 	struct clk	*phy_ref_clk;
-	struct clk	*pcie_aclk;
-	struct clk	*pcie_aux_clk;
-	int		gpio_id_reset;
+	struct clk	*aclk;
+	struct clk	*aux_clk;
 };

-/* Registers in PCIeCTRL */
-static inline void kirin_apb_ctrl_writel(struct kirin_pcie *kirin_pcie,
-					 u32 val, u32 reg)
-{
-	writel(val, kirin_pcie->apb_base + reg);
-}
-
-static inline u32 kirin_apb_ctrl_readl(struct kirin_pcie *kirin_pcie, u32 reg)
-{
-	return readl(kirin_pcie->apb_base + reg);
-}
-
 /* Registers in PCIePHY */
-static inline void kirin_apb_phy_writel(struct kirin_pcie *kirin_pcie,
+static inline void kirin_apb_phy_writel(struct hi3660_pcie_phy *hi3660_pcie_phy,
 					u32 val, u32 reg)
 {
-	writel(val, kirin_pcie->phy_base + reg);
+	writel(val, hi3660_pcie_phy->base + reg);
 }

-static inline u32 kirin_apb_phy_readl(struct kirin_pcie *kirin_pcie, u32 reg)
+static inline u32 kirin_apb_phy_readl(struct hi3660_pcie_phy *hi3660_pcie_phy,
+				      u32 reg)
 {
-	return readl(kirin_pcie->phy_base + reg);
+	return readl(hi3660_pcie_phy->base + reg);
 }

-static long kirin_pcie_get_clk(struct kirin_pcie *kirin_pcie,
-			       struct platform_device *pdev)
+static int hi3660_pcie_phy_get_clk(struct hi3660_pcie_phy *phy)
 {
-	struct device *dev = &pdev->dev;
+	struct device *dev = phy->dev;

-	kirin_pcie->phy_ref_clk = devm_clk_get(dev, "pcie_phy_ref");
-	if (IS_ERR(kirin_pcie->phy_ref_clk))
-		return PTR_ERR(kirin_pcie->phy_ref_clk);
+	phy->phy_ref_clk = devm_clk_get(dev, "pcie_phy_ref");
+	if (IS_ERR(phy->phy_ref_clk))
+		return PTR_ERR(phy->phy_ref_clk);

-	kirin_pcie->pcie_aux_clk = devm_clk_get(dev, "pcie_aux");
-	if (IS_ERR(kirin_pcie->pcie_aux_clk))
-		return PTR_ERR(kirin_pcie->pcie_aux_clk);
+	phy->aux_clk = devm_clk_get(dev, "pcie_aux");
+	if (IS_ERR(phy->aux_clk))
+		return PTR_ERR(phy->aux_clk);

-	kirin_pcie->apb_phy_clk = devm_clk_get(dev, "pcie_apb_phy");
-	if (IS_ERR(kirin_pcie->apb_phy_clk))
-		return PTR_ERR(kirin_pcie->apb_phy_clk);
+	phy->apb_phy_clk = devm_clk_get(dev, "pcie_apb_phy");
+	if (IS_ERR(phy->apb_phy_clk))
+		return PTR_ERR(phy->apb_phy_clk);

-	kirin_pcie->apb_sys_clk = devm_clk_get(dev, "pcie_apb_sys");
-	if (IS_ERR(kirin_pcie->apb_sys_clk))
-		return PTR_ERR(kirin_pcie->apb_sys_clk);
+	phy->apb_sys_clk = devm_clk_get(dev, "pcie_apb_sys");
+	if (IS_ERR(phy->apb_sys_clk))
+		return PTR_ERR(phy->apb_sys_clk);

-	kirin_pcie->pcie_aclk = devm_clk_get(dev, "pcie_aclk");
-	if (IS_ERR(kirin_pcie->pcie_aclk))
-		return PTR_ERR(kirin_pcie->pcie_aclk);
+	phy->aclk = devm_clk_get(dev, "pcie_aclk");
+	if (IS_ERR(phy->aclk))
+		return PTR_ERR(phy->aclk);

 	return 0;
 }

-static long kirin_pcie_get_resource(struct kirin_pcie *kirin_pcie,
-				    struct platform_device *pdev)
+static int hi3660_pcie_phy_get_resource(struct hi3660_pcie_phy *phy)
 {
-	kirin_pcie->apb_base =
-		devm_platform_ioremap_resource_byname(pdev, "apb");
-	if (IS_ERR(kirin_pcie->apb_base))
-		return PTR_ERR(kirin_pcie->apb_base);
-
-	kirin_pcie->phy_base =
-		devm_platform_ioremap_resource_byname(pdev, "phy");
-	if (IS_ERR(kirin_pcie->phy_base))
-		return PTR_ERR(kirin_pcie->phy_base);
-
-	kirin_pcie->crgctrl =
-		syscon_regmap_lookup_by_compatible("hisilicon,hi3660-crgctrl");
-	if (IS_ERR(kirin_pcie->crgctrl))
-		return PTR_ERR(kirin_pcie->crgctrl);
-
-	kirin_pcie->sysctrl =
-		syscon_regmap_lookup_by_compatible("hisilicon,hi3660-sctrl");
-	if (IS_ERR(kirin_pcie->sysctrl))
-		return PTR_ERR(kirin_pcie->sysctrl);
+	struct device *dev = phy->dev;
+	struct platform_device *pdev;
+
+	/* registers */
+	pdev = container_of(dev, struct platform_device, dev);
+
+	phy->base = devm_platform_ioremap_resource_byname(pdev, "phy");
+	if (IS_ERR(phy->base))
+		return PTR_ERR(phy->base);
+
+	phy->crgctrl = syscon_regmap_lookup_by_compatible("hisilicon,hi3660-crgctrl");
+	if (IS_ERR(phy->crgctrl))
+		return PTR_ERR(phy->crgctrl);
+
+	phy->sysctrl = syscon_regmap_lookup_by_compatible("hisilicon,hi3660-sctrl");
+	if (IS_ERR(phy->sysctrl))
+		return PTR_ERR(phy->sysctrl);

 	return 0;
 }

-static int kirin_pcie_phy_init(struct kirin_pcie *kirin_pcie)
+static int hi3660_pcie_phy_start(struct hi3660_pcie_phy *phy)
 {
-	struct device *dev = kirin_pcie->pci->dev;
+	struct device *dev = phy->dev;
 	u32 reg_val;

-	reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL1);
+	reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_CTRL1);
 	reg_val &= ~PHY_REF_PAD_BIT;
-	kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL1);
+	kirin_apb_phy_writel(phy, reg_val, PCIE_APB_PHY_CTRL1);

-	reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL0);
+	reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_CTRL0);
 	reg_val &= ~PHY_PWR_DOWN_BIT;
-	kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL0);
+	kirin_apb_phy_writel(phy, reg_val, PCIE_APB_PHY_CTRL0);
 	usleep_range(TIME_PHY_PD_MIN, TIME_PHY_PD_MAX);

-	reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL1);
+	reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_CTRL1);
 	reg_val &= ~PHY_RST_ACK_BIT;
-	kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL1);
+	kirin_apb_phy_writel(phy, reg_val, PCIE_APB_PHY_CTRL1);

 	usleep_range(PIPE_CLK_WAIT_MIN, PIPE_CLK_WAIT_MAX);
-	reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_STATUS0);
+	reg_val = kirin_apb_phy_readl(phy, PCIE_APB_PHY_STATUS0);
 	if (reg_val & PIPE_CLK_STABLE) {
 		dev_err(dev, "PIPE clk is not stable\n");
 		return -EINVAL;
@ -198,102 +226,274 @@ static int kirin_pcie_phy_init(struct kirin_pcie *kirin_pcie)
return 0; return 0;
} }
static void kirin_pcie_oe_enable(struct kirin_pcie *kirin_pcie) static void hi3660_pcie_phy_oe_enable(struct hi3660_pcie_phy *phy)
{ {
u32 val; u32 val;
regmap_read(kirin_pcie->sysctrl, SCTRL_PCIE_OE_OFFSET, &val); regmap_read(phy->sysctrl, SCTRL_PCIE_OE_OFFSET, &val);
val |= PCIE_DEBOUNCE_PARAM; val |= PCIE_DEBOUNCE_PARAM;
val &= ~PCIE_OE_BYPASS; val &= ~PCIE_OE_BYPASS;
regmap_write(kirin_pcie->sysctrl, SCTRL_PCIE_OE_OFFSET, val); regmap_write(phy->sysctrl, SCTRL_PCIE_OE_OFFSET, val);
} }
static int kirin_pcie_clk_ctrl(struct kirin_pcie *kirin_pcie, bool enable) static int hi3660_pcie_phy_clk_ctrl(struct hi3660_pcie_phy *phy, bool enable)
{ {
int ret = 0; int ret = 0;
if (!enable) if (!enable)
goto close_clk; goto close_clk;
ret = clk_set_rate(kirin_pcie->phy_ref_clk, REF_CLK_FREQ); ret = clk_set_rate(phy->phy_ref_clk, REF_CLK_FREQ);
if (ret) if (ret)
return ret; return ret;
ret = clk_prepare_enable(kirin_pcie->phy_ref_clk); ret = clk_prepare_enable(phy->phy_ref_clk);
if (ret) if (ret)
return ret; return ret;
ret = clk_prepare_enable(kirin_pcie->apb_sys_clk); ret = clk_prepare_enable(phy->apb_sys_clk);
if (ret) if (ret)
goto apb_sys_fail; goto apb_sys_fail;
ret = clk_prepare_enable(kirin_pcie->apb_phy_clk); ret = clk_prepare_enable(phy->apb_phy_clk);
if (ret) if (ret)
goto apb_phy_fail; goto apb_phy_fail;
ret = clk_prepare_enable(kirin_pcie->pcie_aclk); ret = clk_prepare_enable(phy->aclk);
if (ret) if (ret)
goto aclk_fail; goto aclk_fail;
ret = clk_prepare_enable(kirin_pcie->pcie_aux_clk); ret = clk_prepare_enable(phy->aux_clk);
if (ret) if (ret)
goto aux_clk_fail; goto aux_clk_fail;
return 0; return 0;
close_clk: close_clk:
clk_disable_unprepare(kirin_pcie->pcie_aux_clk); clk_disable_unprepare(phy->aux_clk);
aux_clk_fail: aux_clk_fail:
clk_disable_unprepare(kirin_pcie->pcie_aclk); clk_disable_unprepare(phy->aclk);
aclk_fail: aclk_fail:
clk_disable_unprepare(kirin_pcie->apb_phy_clk); clk_disable_unprepare(phy->apb_phy_clk);
apb_phy_fail: apb_phy_fail:
clk_disable_unprepare(kirin_pcie->apb_sys_clk); clk_disable_unprepare(phy->apb_sys_clk);
apb_sys_fail: apb_sys_fail:
clk_disable_unprepare(kirin_pcie->phy_ref_clk); clk_disable_unprepare(phy->phy_ref_clk);
return ret; return ret;
} }
static int kirin_pcie_power_on(struct kirin_pcie *kirin_pcie) static int hi3660_pcie_phy_power_on(struct kirin_pcie *pcie)
{ {
struct hi3660_pcie_phy *phy = pcie->phy_priv;
int ret; int ret;
/* Power supply for Host */ /* Power supply for Host */
regmap_write(kirin_pcie->sysctrl, regmap_write(phy->sysctrl,
SCTRL_PCIE_CMOS_OFFSET, SCTRL_PCIE_CMOS_BIT); SCTRL_PCIE_CMOS_OFFSET, SCTRL_PCIE_CMOS_BIT);
usleep_range(TIME_CMOS_MIN, TIME_CMOS_MAX); usleep_range(TIME_CMOS_MIN, TIME_CMOS_MAX);
kirin_pcie_oe_enable(kirin_pcie);
ret = kirin_pcie_clk_ctrl(kirin_pcie, true); hi3660_pcie_phy_oe_enable(phy);
ret = hi3660_pcie_phy_clk_ctrl(phy, true);
if (ret) if (ret)
return ret; return ret;
/* ISO disable, PCIeCtrl, PHY assert and clk gate clear */ /* ISO disable, PCIeCtrl, PHY assert and clk gate clear */
regmap_write(kirin_pcie->sysctrl, regmap_write(phy->sysctrl,
SCTRL_PCIE_ISO_OFFSET, SCTRL_PCIE_ISO_BIT); SCTRL_PCIE_ISO_OFFSET, SCTRL_PCIE_ISO_BIT);
regmap_write(kirin_pcie->crgctrl, regmap_write(phy->crgctrl,
CRGCTRL_PCIE_ASSERT_OFFSET, CRGCTRL_PCIE_ASSERT_BIT); CRGCTRL_PCIE_ASSERT_OFFSET, CRGCTRL_PCIE_ASSERT_BIT);
regmap_write(kirin_pcie->sysctrl, regmap_write(phy->sysctrl,
SCTRL_PCIE_HPCLK_OFFSET, SCTRL_PCIE_HPCLK_BIT); SCTRL_PCIE_HPCLK_OFFSET, SCTRL_PCIE_HPCLK_BIT);
ret = kirin_pcie_phy_init(kirin_pcie); ret = hi3660_pcie_phy_start(phy);
if (ret) if (ret)
goto close_clk; goto disable_clks;
/* perst assert Endpoint */ return 0;
if (!gpio_request(kirin_pcie->gpio_id_reset, "pcie_perst")) {
usleep_range(REF_2_PERST_MIN, REF_2_PERST_MAX);
ret = gpio_direction_output(kirin_pcie->gpio_id_reset, 1);
if (ret)
goto close_clk;
usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX);
disable_clks:
hi3660_pcie_phy_clk_ctrl(phy, false);
return ret;
}
static int hi3660_pcie_phy_init(struct platform_device *pdev,
struct kirin_pcie *pcie)
{
struct device *dev = &pdev->dev;
struct hi3660_pcie_phy *phy;
int ret;
phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL);
if (!phy)
return -ENOMEM;
pcie->phy_priv = phy;
phy->dev = dev;
/* registers */
pdev = container_of(dev, struct platform_device, dev);
ret = hi3660_pcie_phy_get_clk(phy);
if (ret)
return ret;
return hi3660_pcie_phy_get_resource(phy);
}
static int hi3660_pcie_phy_power_off(struct kirin_pcie *pcie)
{
struct hi3660_pcie_phy *phy = pcie->phy_priv;
/* Drop power supply for Host */
regmap_write(phy->sysctrl, SCTRL_PCIE_CMOS_OFFSET, 0x00);
hi3660_pcie_phy_clk_ctrl(phy, false);
return 0;
}
/*
* The non-PHY part starts here
*/
static const struct regmap_config pcie_kirin_regmap_conf = {
.name = "kirin_pcie_apb",
.reg_bits = 32,
.val_bits = 32,
.reg_stride = 4,
};
static int kirin_pcie_get_gpio_enable(struct kirin_pcie *pcie,
struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
char name[32];
int ret, i;
/* This is an optional property */
ret = of_gpio_named_count(np, "hisilicon,clken-gpios");
if (ret < 0)
return 0; return 0;
if (ret > MAX_PCI_SLOTS) {
dev_err(dev, "Too many GPIO clock requests!\n");
return -EINVAL;
} }
close_clk: pcie->n_gpio_clkreq = ret;
kirin_pcie_clk_ctrl(kirin_pcie, false);
for (i = 0; i < pcie->n_gpio_clkreq; i++) {
pcie->gpio_id_clkreq[i] = of_get_named_gpio(dev->of_node,
"hisilicon,clken-gpios", i);
if (pcie->gpio_id_clkreq[i] < 0)
return pcie->gpio_id_clkreq[i];
sprintf(name, "pcie_clkreq_%d", i);
pcie->clkreq_names[i] = devm_kstrdup_const(dev, name,
GFP_KERNEL);
if (!pcie->clkreq_names[i])
return -ENOMEM;
}
return 0;
}
static int kirin_pcie_parse_port(struct kirin_pcie *pcie,
struct platform_device *pdev,
struct device_node *node)
{
struct device *dev = &pdev->dev;
struct device_node *parent, *child;
int ret, slot, i;
char name[32];
for_each_available_child_of_node(node, parent) {
for_each_available_child_of_node(parent, child) {
i = pcie->num_slots;
pcie->gpio_id_reset[i] = of_get_named_gpio(child,
"reset-gpios", 0);
if (pcie->gpio_id_reset[i] < 0)
continue;
pcie->num_slots++;
if (pcie->num_slots > MAX_PCI_SLOTS) {
dev_err(dev, "Too many PCI slots!\n");
ret = -EINVAL;
goto put_node;
}
ret = of_pci_get_devfn(child);
if (ret < 0) {
dev_err(dev, "failed to parse devfn: %d\n", ret);
goto put_node;
}
slot = PCI_SLOT(ret);
sprintf(name, "pcie_perst_%d", slot);
pcie->reset_names[i] = devm_kstrdup_const(dev, name,
GFP_KERNEL);
if (!pcie->reset_names[i]) {
ret = -ENOMEM;
goto put_node;
}
}
}
return 0;
put_node:
of_node_put(child);
of_node_put(parent);
return ret;
}
static long kirin_pcie_get_resource(struct kirin_pcie *kirin_pcie,
struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *child, *node = dev->of_node;
void __iomem *apb_base;
int ret;
apb_base = devm_platform_ioremap_resource_byname(pdev, "apb");
if (IS_ERR(apb_base))
return PTR_ERR(apb_base);
kirin_pcie->apb = devm_regmap_init_mmio(dev, apb_base,
&pcie_kirin_regmap_conf);
if (IS_ERR(kirin_pcie->apb))
return PTR_ERR(kirin_pcie->apb);
/* pcie internal PERST# gpio */
kirin_pcie->gpio_id_dwc_perst = of_get_named_gpio(dev->of_node,
"reset-gpios", 0);
if (kirin_pcie->gpio_id_dwc_perst == -EPROBE_DEFER) {
return -EPROBE_DEFER;
} else if (!gpio_is_valid(kirin_pcie->gpio_id_dwc_perst)) {
dev_err(dev, "unable to get a valid gpio pin\n");
return -ENODEV;
}
ret = kirin_pcie_get_gpio_enable(kirin_pcie, pdev);
if (ret)
return ret;
/* Parse OF children */
for_each_available_child_of_node(node, child) {
ret = kirin_pcie_parse_port(kirin_pcie, pdev, child);
if (ret)
goto put_node;
}
return 0;
put_node:
of_node_put(child);
return ret;
}
@@ -302,13 +502,13 @@ static void kirin_pcie_sideband_dbi_w_mode(struct kirin_pcie *kirin_pcie,
{
u32 val;
regmap_read(kirin_pcie->apb, SOC_PCIECTRL_CTRL0_ADDR, &val);
if (on)
val = val | PCIE_ELBI_SLV_DBI_ENABLE;
else
val = val & ~PCIE_ELBI_SLV_DBI_ENABLE;
regmap_write(kirin_pcie->apb, SOC_PCIECTRL_CTRL0_ADDR, val);
}
static void kirin_pcie_sideband_dbi_r_mode(struct kirin_pcie *kirin_pcie,
@@ -316,13 +516,13 @@ static void kirin_pcie_sideband_dbi_r_mode(struct kirin_pcie *kirin_pcie,
{
u32 val;
regmap_read(kirin_pcie->apb, SOC_PCIECTRL_CTRL1_ADDR, &val);
if (on)
val = val | PCIE_ELBI_SLV_DBI_ENABLE;
else
val = val & ~PCIE_ELBI_SLV_DBI_ENABLE;
regmap_write(kirin_pcie->apb, SOC_PCIECTRL_CTRL1_ADDR, val);
}
static int kirin_pcie_rd_own_conf(struct pci_bus *bus, unsigned int devfn,
@@ -351,9 +551,32 @@ static int kirin_pcie_wr_own_conf(struct pci_bus *bus, unsigned int devfn,
return PCIBIOS_SUCCESSFUL;
}
static int kirin_pcie_add_bus(struct pci_bus *bus)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata);
struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci);
int i, ret;
if (!kirin_pcie->num_slots)
return 0;
/* Send PERST# to each slot */
for (i = 0; i < kirin_pcie->num_slots; i++) {
ret = gpio_direction_output(kirin_pcie->gpio_id_reset[i], 1);
if (ret) {
dev_err(pci->dev, "PERST# %s error: %d\n",
kirin_pcie->reset_names[i], ret);
}
}
usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX);
return 0;
}
static struct pci_ops kirin_pci_ops = {
.read = kirin_pcie_rd_own_conf,
.write = kirin_pcie_wr_own_conf,
.add_bus = kirin_pcie_add_bus,
};
static u32 kirin_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base,
@@ -382,8 +605,9 @@ static void kirin_pcie_write_dbi(struct dw_pcie *pci, void __iomem *base,
static int kirin_pcie_link_up(struct dw_pcie *pci)
{
struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci);
u32 val;
regmap_read(kirin_pcie->apb, PCIE_APB_PHY_STATUS0, &val);
if ((val & PCIE_LINKUP_ENABLE) == PCIE_LINKUP_ENABLE)
return 1;
@@ -395,8 +619,8 @@ static int kirin_pcie_start_link(struct dw_pcie *pci)
struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci);
/* assert LTSSM enable */
regmap_write(kirin_pcie->apb, PCIE_APP_LTSSM_ENABLE,
PCIE_LTSSM_ENABLE_BIT);
return 0;
}
@@ -408,6 +632,44 @@ static int kirin_pcie_host_init(struct pcie_port *pp)
return 0;
}
static int kirin_pcie_gpio_request(struct kirin_pcie *kirin_pcie,
struct device *dev)
{
int ret, i;
for (i = 0; i < kirin_pcie->num_slots; i++) {
if (!gpio_is_valid(kirin_pcie->gpio_id_reset[i])) {
dev_err(dev, "unable to get a valid %s gpio\n",
kirin_pcie->reset_names[i]);
return -ENODEV;
}
ret = devm_gpio_request(dev, kirin_pcie->gpio_id_reset[i],
kirin_pcie->reset_names[i]);
if (ret)
return ret;
}
for (i = 0; i < kirin_pcie->n_gpio_clkreq; i++) {
if (!gpio_is_valid(kirin_pcie->gpio_id_clkreq[i])) {
dev_err(dev, "unable to get a valid %s gpio\n",
kirin_pcie->clkreq_names[i]);
return -ENODEV;
}
ret = devm_gpio_request(dev, kirin_pcie->gpio_id_clkreq[i],
kirin_pcie->clkreq_names[i]);
if (ret)
return ret;
ret = gpio_direction_output(kirin_pcie->gpio_id_clkreq[i], 0);
if (ret)
return ret;
}
return 0;
}
static const struct dw_pcie_ops kirin_dw_pcie_ops = {
.read_dbi = kirin_pcie_read_dbi,
.write_dbi = kirin_pcie_write_dbi,
@@ -419,8 +681,99 @@ static const struct dw_pcie_host_ops kirin_pcie_host_ops = {
.host_init = kirin_pcie_host_init,
};
static int kirin_pcie_power_off(struct kirin_pcie *kirin_pcie)
{
int i;
if (kirin_pcie->type == PCIE_KIRIN_INTERNAL_PHY)
return hi3660_pcie_phy_power_off(kirin_pcie);
for (i = 0; i < kirin_pcie->n_gpio_clkreq; i++)
gpio_direction_output(kirin_pcie->gpio_id_clkreq[i], 1);
phy_power_off(kirin_pcie->phy);
phy_exit(kirin_pcie->phy);
return 0;
}
static int kirin_pcie_power_on(struct platform_device *pdev,
struct kirin_pcie *kirin_pcie)
{
struct device *dev = &pdev->dev;
int ret;
if (kirin_pcie->type == PCIE_KIRIN_INTERNAL_PHY) {
ret = hi3660_pcie_phy_init(pdev, kirin_pcie);
if (ret)
return ret;
ret = hi3660_pcie_phy_power_on(kirin_pcie);
if (ret)
return ret;
} else {
kirin_pcie->phy = devm_of_phy_get(dev, dev->of_node, NULL);
if (IS_ERR(kirin_pcie->phy))
return PTR_ERR(kirin_pcie->phy);
ret = kirin_pcie_gpio_request(kirin_pcie, dev);
if (ret)
return ret;
ret = phy_init(kirin_pcie->phy);
if (ret)
goto err;
ret = phy_power_on(kirin_pcie->phy);
if (ret)
goto err;
}
/* perst assert Endpoint */
usleep_range(REF_2_PERST_MIN, REF_2_PERST_MAX);
if (!gpio_request(kirin_pcie->gpio_id_dwc_perst, "pcie_perst_bridge")) {
ret = gpio_direction_output(kirin_pcie->gpio_id_dwc_perst, 1);
if (ret)
goto err;
}
usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX);
return 0;
err:
kirin_pcie_power_off(kirin_pcie);
return ret;
}
static int __exit kirin_pcie_remove(struct platform_device *pdev)
{
struct kirin_pcie *kirin_pcie = platform_get_drvdata(pdev);
dw_pcie_host_deinit(&kirin_pcie->pci->pp);
kirin_pcie_power_off(kirin_pcie);
return 0;
}
static const struct of_device_id kirin_pcie_match[] = {
{
.compatible = "hisilicon,kirin960-pcie",
.data = (void *)PCIE_KIRIN_INTERNAL_PHY
},
{
.compatible = "hisilicon,kirin970-pcie",
.data = (void *)PCIE_KIRIN_EXTERNAL_PHY
},
{},
};
static int kirin_pcie_probe(struct platform_device *pdev)
{
enum pcie_kirin_phy_type phy_type;
const struct of_device_id *of_id;
struct device *dev = &pdev->dev;
struct kirin_pcie *kirin_pcie;
struct dw_pcie *pci;
@@ -431,6 +784,14 @@ static int kirin_pcie_probe(struct platform_device *pdev)
return -EINVAL;
}
of_id = of_match_device(kirin_pcie_match, dev);
if (!of_id) {
dev_err(dev, "OF data missing\n");
return -EINVAL;
}
phy_type = (long)of_id->data;
kirin_pcie = devm_kzalloc(dev, sizeof(struct kirin_pcie), GFP_KERNEL);
if (!kirin_pcie)
return -ENOMEM;
@@ -443,44 +804,33 @@ static int kirin_pcie_probe(struct platform_device *pdev)
pci->ops = &kirin_dw_pcie_ops;
pci->pp.ops = &kirin_pcie_host_ops;
kirin_pcie->pci = pci;
kirin_pcie->type = phy_type;
ret = kirin_pcie_get_clk(kirin_pcie, pdev);
if (ret)
return ret;
ret = kirin_pcie_get_resource(kirin_pcie, pdev);
if (ret)
return ret;
platform_set_drvdata(pdev, kirin_pcie);
ret = kirin_pcie_power_on(pdev, kirin_pcie);
if (ret)
return ret;
return dw_pcie_host_init(&pci->pp);
}
static struct platform_driver kirin_pcie_driver = {
.probe = kirin_pcie_probe,
.remove = __exit_p(kirin_pcie_remove),
.driver = {
.name = "kirin-pcie",
.of_match_table = kirin_pcie_match,
.suppress_bind_attrs = true,
},
};
module_platform_driver(kirin_pcie_driver);
MODULE_DEVICE_TABLE(of, kirin_pcie_match);
MODULE_DESCRIPTION("PCIe host controller driver for Kirin Phone SoCs");
MODULE_AUTHOR("Xiaowei Song <songxiaowei@huawei.com>");
MODULE_LICENSE("GPL v2");


@@ -0,0 +1,721 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Qualcomm PCIe Endpoint controller driver
*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
* Author: Siddartha Mohanadoss <smohanad@codeaurora.org>
*
* Copyright (c) 2021, Linaro Ltd.
* Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/mfd/syscon.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include "pcie-designware.h"
/* PARF registers */
#define PARF_SYS_CTRL 0x00
#define PARF_DB_CTRL 0x10
#define PARF_PM_CTRL 0x20
#define PARF_MHI_BASE_ADDR_LOWER 0x178
#define PARF_MHI_BASE_ADDR_UPPER 0x17c
#define PARF_DEBUG_INT_EN 0x190
#define PARF_AXI_MSTR_RD_HALT_NO_WRITES 0x1a4
#define PARF_AXI_MSTR_WR_ADDR_HALT 0x1a8
#define PARF_Q2A_FLUSH 0x1ac
#define PARF_LTSSM 0x1b0
#define PARF_CFG_BITS 0x210
#define PARF_INT_ALL_STATUS 0x224
#define PARF_INT_ALL_CLEAR 0x228
#define PARF_INT_ALL_MASK 0x22c
#define PARF_SLV_ADDR_MSB_CTRL 0x2c0
#define PARF_DBI_BASE_ADDR 0x350
#define PARF_DBI_BASE_ADDR_HI 0x354
#define PARF_SLV_ADDR_SPACE_SIZE 0x358
#define PARF_SLV_ADDR_SPACE_SIZE_HI 0x35c
#define PARF_ATU_BASE_ADDR 0x634
#define PARF_ATU_BASE_ADDR_HI 0x638
#define PARF_SRIS_MODE 0x644
#define PARF_DEVICE_TYPE 0x1000
#define PARF_BDF_TO_SID_CFG 0x2c00
/* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */
#define PARF_INT_ALL_LINK_DOWN BIT(1)
#define PARF_INT_ALL_BME BIT(2)
#define PARF_INT_ALL_PM_TURNOFF BIT(3)
#define PARF_INT_ALL_DEBUG BIT(4)
#define PARF_INT_ALL_LTR BIT(5)
#define PARF_INT_ALL_MHI_Q6 BIT(6)
#define PARF_INT_ALL_MHI_A7 BIT(7)
#define PARF_INT_ALL_DSTATE_CHANGE BIT(8)
#define PARF_INT_ALL_L1SUB_TIMEOUT BIT(9)
#define PARF_INT_ALL_MMIO_WRITE BIT(10)
#define PARF_INT_ALL_CFG_WRITE BIT(11)
#define PARF_INT_ALL_BRIDGE_FLUSH_N BIT(12)
#define PARF_INT_ALL_LINK_UP BIT(13)
#define PARF_INT_ALL_AER_LEGACY BIT(14)
#define PARF_INT_ALL_PLS_ERR BIT(15)
#define PARF_INT_ALL_PME_LEGACY BIT(16)
#define PARF_INT_ALL_PLS_PME BIT(17)
/* PARF_BDF_TO_SID_CFG register fields */
#define PARF_BDF_TO_SID_BYPASS BIT(0)
/* PARF_DEBUG_INT_EN register fields */
#define PARF_DEBUG_INT_PM_DSTATE_CHANGE BIT(1)
#define PARF_DEBUG_INT_CFG_BUS_MASTER_EN BIT(2)
#define PARF_DEBUG_INT_RADM_PM_TURNOFF BIT(3)
/* PARF_DEVICE_TYPE register fields */
#define PARF_DEVICE_TYPE_EP 0x0
/* PARF_PM_CTRL register fields */
#define PARF_PM_CTRL_REQ_EXIT_L1 BIT(1)
#define PARF_PM_CTRL_READY_ENTR_L23 BIT(2)
#define PARF_PM_CTRL_REQ_NOT_ENTR_L1 BIT(5)
/* PARF_AXI_MSTR_RD_HALT_NO_WRITES register fields */
#define PARF_AXI_MSTR_RD_HALT_NO_WRITE_EN BIT(0)
/* PARF_AXI_MSTR_WR_ADDR_HALT register fields */
#define PARF_AXI_MSTR_WR_ADDR_HALT_EN BIT(31)
/* PARF_Q2A_FLUSH register fields */
#define PARF_Q2A_FLUSH_EN BIT(16)
/* PARF_SYS_CTRL register fields */
#define PARF_SYS_CTRL_AUX_PWR_DET BIT(4)
#define PARF_SYS_CTRL_CORE_CLK_CGC_DIS BIT(6)
#define PARF_SYS_CTRL_SLV_DBI_WAKE_DISABLE BIT(11)
/* PARF_DB_CTRL register fields */
#define PARF_DB_CTRL_INSR_DBNCR_BLOCK BIT(0)
#define PARF_DB_CTRL_RMVL_DBNCR_BLOCK BIT(1)
#define PARF_DB_CTRL_DBI_WKP_BLOCK BIT(4)
#define PARF_DB_CTRL_SLV_WKP_BLOCK BIT(5)
#define PARF_DB_CTRL_MST_WKP_BLOCK BIT(6)
/* PARF_CFG_BITS register fields */
#define PARF_CFG_BITS_REQ_EXIT_L1SS_MSI_LTR_EN BIT(1)
/* ELBI registers */
#define ELBI_SYS_STTS 0x08
/* DBI registers */
#define DBI_CON_STATUS 0x44
/* DBI register fields */
#define DBI_CON_STATUS_POWER_STATE_MASK GENMASK(1, 0)
#define XMLH_LINK_UP 0x400
#define CORE_RESET_TIME_US_MIN 1000
#define CORE_RESET_TIME_US_MAX 1005
#define WAKE_DELAY_US 2000 /* 2 ms */
#define to_pcie_ep(x) dev_get_drvdata((x)->dev)
enum qcom_pcie_ep_link_status {
QCOM_PCIE_EP_LINK_DISABLED,
QCOM_PCIE_EP_LINK_ENABLED,
QCOM_PCIE_EP_LINK_UP,
QCOM_PCIE_EP_LINK_DOWN,
};
static struct clk_bulk_data qcom_pcie_ep_clks[] = {
{ .id = "cfg" },
{ .id = "aux" },
{ .id = "bus_master" },
{ .id = "bus_slave" },
{ .id = "ref" },
{ .id = "sleep" },
{ .id = "slave_q2a" },
};
struct qcom_pcie_ep {
struct dw_pcie pci;
void __iomem *parf;
void __iomem *elbi;
struct regmap *perst_map;
struct resource *mmio_res;
struct reset_control *core_reset;
struct gpio_desc *reset;
struct gpio_desc *wake;
struct phy *phy;
u32 perst_en;
u32 perst_sep_en;
enum qcom_pcie_ep_link_status link_status;
int global_irq;
int perst_irq;
};
static int qcom_pcie_ep_core_reset(struct qcom_pcie_ep *pcie_ep)
{
struct dw_pcie *pci = &pcie_ep->pci;
struct device *dev = pci->dev;
int ret;
ret = reset_control_assert(pcie_ep->core_reset);
if (ret) {
dev_err(dev, "Cannot assert core reset\n");
return ret;
}
usleep_range(CORE_RESET_TIME_US_MIN, CORE_RESET_TIME_US_MAX);
ret = reset_control_deassert(pcie_ep->core_reset);
if (ret) {
dev_err(dev, "Cannot de-assert core reset\n");
return ret;
}
usleep_range(CORE_RESET_TIME_US_MIN, CORE_RESET_TIME_US_MAX);
return 0;
}
/*
* Delatch PERST_EN and PERST_SEPARATION_ENABLE with TCSR to avoid
* device reset during host reboot and hibernation. The driver is
* expected to handle this situation.
*/
static void qcom_pcie_ep_configure_tcsr(struct qcom_pcie_ep *pcie_ep)
{
regmap_write(pcie_ep->perst_map, pcie_ep->perst_en, 0);
regmap_write(pcie_ep->perst_map, pcie_ep->perst_sep_en, 0);
}
static int qcom_pcie_dw_link_up(struct dw_pcie *pci)
{
struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
u32 reg;
reg = readl_relaxed(pcie_ep->elbi + ELBI_SYS_STTS);
return reg & XMLH_LINK_UP;
}
static int qcom_pcie_dw_start_link(struct dw_pcie *pci)
{
struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
enable_irq(pcie_ep->perst_irq);
return 0;
}
static void qcom_pcie_dw_stop_link(struct dw_pcie *pci)
{
struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
disable_irq(pcie_ep->perst_irq);
}
static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
{
struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
struct device *dev = pci->dev;
u32 val, offset;
int ret;
ret = clk_bulk_prepare_enable(ARRAY_SIZE(qcom_pcie_ep_clks),
qcom_pcie_ep_clks);
if (ret)
return ret;
ret = qcom_pcie_ep_core_reset(pcie_ep);
if (ret)
goto err_disable_clk;
ret = phy_init(pcie_ep->phy);
if (ret)
goto err_disable_clk;
ret = phy_power_on(pcie_ep->phy);
if (ret)
goto err_phy_exit;
/* Assert WAKE# to RC to indicate device is ready */
gpiod_set_value_cansleep(pcie_ep->wake, 1);
usleep_range(WAKE_DELAY_US, WAKE_DELAY_US + 500);
gpiod_set_value_cansleep(pcie_ep->wake, 0);
qcom_pcie_ep_configure_tcsr(pcie_ep);
/* Disable BDF to SID mapping */
val = readl_relaxed(pcie_ep->parf + PARF_BDF_TO_SID_CFG);
val |= PARF_BDF_TO_SID_BYPASS;
writel_relaxed(val, pcie_ep->parf + PARF_BDF_TO_SID_CFG);
/* Enable debug IRQ */
val = readl_relaxed(pcie_ep->parf + PARF_DEBUG_INT_EN);
val |= PARF_DEBUG_INT_RADM_PM_TURNOFF |
PARF_DEBUG_INT_CFG_BUS_MASTER_EN |
PARF_DEBUG_INT_PM_DSTATE_CHANGE;
writel_relaxed(val, pcie_ep->parf + PARF_DEBUG_INT_EN);
/* Configure PCIe to endpoint mode */
writel_relaxed(PARF_DEVICE_TYPE_EP, pcie_ep->parf + PARF_DEVICE_TYPE);
/* Allow entering L1 state */
val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL);
val &= ~PARF_PM_CTRL_REQ_NOT_ENTR_L1;
writel_relaxed(val, pcie_ep->parf + PARF_PM_CTRL);
/* Read halts write */
val = readl_relaxed(pcie_ep->parf + PARF_AXI_MSTR_RD_HALT_NO_WRITES);
val &= ~PARF_AXI_MSTR_RD_HALT_NO_WRITE_EN;
writel_relaxed(val, pcie_ep->parf + PARF_AXI_MSTR_RD_HALT_NO_WRITES);
/* Write after write halt */
val = readl_relaxed(pcie_ep->parf + PARF_AXI_MSTR_WR_ADDR_HALT);
val |= PARF_AXI_MSTR_WR_ADDR_HALT_EN;
writel_relaxed(val, pcie_ep->parf + PARF_AXI_MSTR_WR_ADDR_HALT);
/* Q2A flush disable */
val = readl_relaxed(pcie_ep->parf + PARF_Q2A_FLUSH);
val &= ~PARF_Q2A_FLUSH_EN;
writel_relaxed(val, pcie_ep->parf + PARF_Q2A_FLUSH);
/* Disable DBI Wakeup, core clock CGC and enable AUX power */
val = readl_relaxed(pcie_ep->parf + PARF_SYS_CTRL);
val |= PARF_SYS_CTRL_SLV_DBI_WAKE_DISABLE |
PARF_SYS_CTRL_CORE_CLK_CGC_DIS |
PARF_SYS_CTRL_AUX_PWR_DET;
writel_relaxed(val, pcie_ep->parf + PARF_SYS_CTRL);
/* Disable the debouncers */
val = readl_relaxed(pcie_ep->parf + PARF_DB_CTRL);
val |= PARF_DB_CTRL_INSR_DBNCR_BLOCK | PARF_DB_CTRL_RMVL_DBNCR_BLOCK |
PARF_DB_CTRL_DBI_WKP_BLOCK | PARF_DB_CTRL_SLV_WKP_BLOCK |
PARF_DB_CTRL_MST_WKP_BLOCK;
writel_relaxed(val, pcie_ep->parf + PARF_DB_CTRL);
/* Request to exit from L1SS for MSI and LTR MSG */
val = readl_relaxed(pcie_ep->parf + PARF_CFG_BITS);
val |= PARF_CFG_BITS_REQ_EXIT_L1SS_MSI_LTR_EN;
writel_relaxed(val, pcie_ep->parf + PARF_CFG_BITS);
dw_pcie_dbi_ro_wr_en(pci);
/* Set the L0s Exit Latency to 2us-4us = 0x6 */
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
val &= ~PCI_EXP_LNKCAP_L0SEL;
val |= FIELD_PREP(PCI_EXP_LNKCAP_L0SEL, 0x6);
dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, val);
/* Set the L1 Exit Latency to be 32us-64 us = 0x6 */
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
val &= ~PCI_EXP_LNKCAP_L1EL;
val |= FIELD_PREP(PCI_EXP_LNKCAP_L1EL, 0x6);
dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, val);
dw_pcie_dbi_ro_wr_dis(pci);
writel_relaxed(0, pcie_ep->parf + PARF_INT_ALL_MASK);
val = PARF_INT_ALL_LINK_DOWN | PARF_INT_ALL_BME |
PARF_INT_ALL_PM_TURNOFF | PARF_INT_ALL_DSTATE_CHANGE |
PARF_INT_ALL_LINK_UP;
writel_relaxed(val, pcie_ep->parf + PARF_INT_ALL_MASK);
ret = dw_pcie_ep_init_complete(&pcie_ep->pci.ep);
if (ret) {
dev_err(dev, "Failed to complete initialization: %d\n", ret);
goto err_phy_power_off;
}
/*
* The physical address of the MMIO region which is exposed as the BAR
* should be written to MHI BASE registers.
*/
writel_relaxed(pcie_ep->mmio_res->start,
pcie_ep->parf + PARF_MHI_BASE_ADDR_LOWER);
writel_relaxed(0, pcie_ep->parf + PARF_MHI_BASE_ADDR_UPPER);
dw_pcie_ep_init_notify(&pcie_ep->pci.ep);
/* Enable LTSSM */
val = readl_relaxed(pcie_ep->parf + PARF_LTSSM);
val |= BIT(8);
writel_relaxed(val, pcie_ep->parf + PARF_LTSSM);
return 0;
err_phy_power_off:
phy_power_off(pcie_ep->phy);
err_phy_exit:
phy_exit(pcie_ep->phy);
err_disable_clk:
clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks),
qcom_pcie_ep_clks);
return ret;
}
static void qcom_pcie_perst_assert(struct dw_pcie *pci)
{
struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
struct device *dev = pci->dev;
if (pcie_ep->link_status == QCOM_PCIE_EP_LINK_DISABLED) {
dev_dbg(dev, "Link is already disabled\n");
return;
}
phy_power_off(pcie_ep->phy);
phy_exit(pcie_ep->phy);
clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks),
qcom_pcie_ep_clks);
pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED;
}
/* Common DWC controller ops */
static const struct dw_pcie_ops pci_ops = {
.link_up = qcom_pcie_dw_link_up,
.start_link = qcom_pcie_dw_start_link,
.stop_link = qcom_pcie_dw_stop_link,
};
static int qcom_pcie_ep_get_io_resources(struct platform_device *pdev,
struct qcom_pcie_ep *pcie_ep)
{
struct device *dev = &pdev->dev;
struct dw_pcie *pci = &pcie_ep->pci;
struct device_node *syscon;
struct resource *res;
int ret;
pcie_ep->parf = devm_platform_ioremap_resource_byname(pdev, "parf");
if (IS_ERR(pcie_ep->parf))
return PTR_ERR(pcie_ep->parf);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
pci->dbi_base2 = pci->dbi_base;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
pcie_ep->elbi = devm_pci_remap_cfg_resource(dev, res);
if (IS_ERR(pcie_ep->elbi))
return PTR_ERR(pcie_ep->elbi);
pcie_ep->mmio_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"mmio");
syscon = of_parse_phandle(dev->of_node, "qcom,perst-regs", 0);
if (!syscon) {
dev_err(dev, "Failed to parse qcom,perst-regs\n");
return -EINVAL;
}
pcie_ep->perst_map = syscon_node_to_regmap(syscon);
of_node_put(syscon);
if (IS_ERR(pcie_ep->perst_map))
return PTR_ERR(pcie_ep->perst_map);
ret = of_property_read_u32_index(dev->of_node, "qcom,perst-regs",
1, &pcie_ep->perst_en);
if (ret < 0) {
dev_err(dev, "No Perst Enable offset in syscon\n");
return ret;
}
ret = of_property_read_u32_index(dev->of_node, "qcom,perst-regs",
2, &pcie_ep->perst_sep_en);
if (ret < 0) {
dev_err(dev, "No Perst Separation Enable offset in syscon\n");
return ret;
}
return 0;
}
static int qcom_pcie_ep_get_resources(struct platform_device *pdev,
struct qcom_pcie_ep *pcie_ep)
{
struct device *dev = &pdev->dev;
int ret;
ret = qcom_pcie_ep_get_io_resources(pdev, pcie_ep);
if (ret) {
dev_err(&pdev->dev, "Failed to get io resources %d\n", ret);
return ret;
}
ret = devm_clk_bulk_get(dev, ARRAY_SIZE(qcom_pcie_ep_clks),
qcom_pcie_ep_clks);
if (ret)
return ret;
pcie_ep->core_reset = devm_reset_control_get_exclusive(dev, "core");
if (IS_ERR(pcie_ep->core_reset))
return PTR_ERR(pcie_ep->core_reset);
pcie_ep->reset = devm_gpiod_get(dev, "reset", GPIOD_IN);
if (IS_ERR(pcie_ep->reset))
return PTR_ERR(pcie_ep->reset);
pcie_ep->wake = devm_gpiod_get_optional(dev, "wake", GPIOD_OUT_LOW);
if (IS_ERR(pcie_ep->wake))
return PTR_ERR(pcie_ep->wake);
pcie_ep->phy = devm_phy_optional_get(&pdev->dev, "pciephy");
if (IS_ERR(pcie_ep->phy))
ret = PTR_ERR(pcie_ep->phy);
return ret;
}
/* TODO: Notify clients about PCIe state change */
static irqreturn_t qcom_pcie_ep_global_irq_thread(int irq, void *data)
{
struct qcom_pcie_ep *pcie_ep = data;
struct dw_pcie *pci = &pcie_ep->pci;
struct device *dev = pci->dev;
u32 status = readl_relaxed(pcie_ep->parf + PARF_INT_ALL_STATUS);
u32 mask = readl_relaxed(pcie_ep->parf + PARF_INT_ALL_MASK);
u32 dstate, val;
writel_relaxed(status, pcie_ep->parf + PARF_INT_ALL_CLEAR);
status &= mask;
if (FIELD_GET(PARF_INT_ALL_LINK_DOWN, status)) {
dev_dbg(dev, "Received Linkdown event\n");
pcie_ep->link_status = QCOM_PCIE_EP_LINK_DOWN;
} else if (FIELD_GET(PARF_INT_ALL_BME, status)) {
dev_dbg(dev, "Received BME event. Link is enabled!\n");
pcie_ep->link_status = QCOM_PCIE_EP_LINK_ENABLED;
} else if (FIELD_GET(PARF_INT_ALL_PM_TURNOFF, status)) {
dev_dbg(dev, "Received PM Turn-off event! Entering L23\n");
val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL);
val |= PARF_PM_CTRL_READY_ENTR_L23;
writel_relaxed(val, pcie_ep->parf + PARF_PM_CTRL);
} else if (FIELD_GET(PARF_INT_ALL_DSTATE_CHANGE, status)) {
dstate = dw_pcie_readl_dbi(pci, DBI_CON_STATUS) &
DBI_CON_STATUS_POWER_STATE_MASK;
dev_dbg(dev, "Received D%d state event\n", dstate);
if (dstate == 3) {
val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL);
val |= PARF_PM_CTRL_REQ_EXIT_L1;
writel_relaxed(val, pcie_ep->parf + PARF_PM_CTRL);
}
} else if (FIELD_GET(PARF_INT_ALL_LINK_UP, status)) {
dev_dbg(dev, "Received Linkup event. Enumeration complete!\n");
dw_pcie_ep_linkup(&pci->ep);
pcie_ep->link_status = QCOM_PCIE_EP_LINK_UP;
} else {
dev_dbg(dev, "Received unknown event: %d\n", status);
}
return IRQ_HANDLED;
}
static irqreturn_t qcom_pcie_ep_perst_irq_thread(int irq, void *data)
{
struct qcom_pcie_ep *pcie_ep = data;
struct dw_pcie *pci = &pcie_ep->pci;
struct device *dev = pci->dev;
u32 perst;
perst = gpiod_get_value(pcie_ep->reset);
if (perst) {
dev_dbg(dev, "PERST asserted by host. Shutting down the PCIe link!\n");
qcom_pcie_perst_assert(pci);
} else {
dev_dbg(dev, "PERST de-asserted by host. Starting link training!\n");
qcom_pcie_perst_deassert(pci);
}
irq_set_irq_type(gpiod_to_irq(pcie_ep->reset),
(perst ? IRQF_TRIGGER_HIGH : IRQF_TRIGGER_LOW));
return IRQ_HANDLED;
}
static int qcom_pcie_ep_enable_irq_resources(struct platform_device *pdev,
struct qcom_pcie_ep *pcie_ep)
{
int irq, ret;
irq = platform_get_irq_byname(pdev, "global");
if (irq < 0) {
dev_err(&pdev->dev, "Failed to get Global IRQ\n");
return irq;
}
ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
qcom_pcie_ep_global_irq_thread,
IRQF_ONESHOT,
"global_irq", pcie_ep);
if (ret) {
dev_err(&pdev->dev, "Failed to request Global IRQ\n");
return ret;
}
pcie_ep->perst_irq = gpiod_to_irq(pcie_ep->reset);
irq_set_status_flags(pcie_ep->perst_irq, IRQ_NOAUTOEN);
ret = devm_request_threaded_irq(&pdev->dev, pcie_ep->perst_irq, NULL,
qcom_pcie_ep_perst_irq_thread,
IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
"perst_irq", pcie_ep);
if (ret) {
dev_err(&pdev->dev, "Failed to request PERST IRQ\n");
disable_irq(irq);
return ret;
}
return 0;
}
static int qcom_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
enum pci_epc_irq_type type, u16 interrupt_num)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
switch (type) {
case PCI_EPC_IRQ_LEGACY:
return dw_pcie_ep_raise_legacy_irq(ep, func_no);
case PCI_EPC_IRQ_MSI:
return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
default:
dev_err(pci->dev, "Unknown IRQ type\n");
return -EINVAL;
}
}
static const struct pci_epc_features qcom_pcie_epc_features = {
.linkup_notifier = true,
.core_init_notifier = true,
.msi_capable = true,
.msix_capable = false,
};
static const struct pci_epc_features *
qcom_pcie_epc_get_features(struct dw_pcie_ep *pci_ep)
{
return &qcom_pcie_epc_features;
}
static void qcom_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
for (bar = BAR_0; bar <= BAR_5; bar++)
dw_pcie_ep_reset_bar(pci, bar);
}
static struct dw_pcie_ep_ops pci_ep_ops = {
.ep_init = qcom_pcie_ep_init,
.raise_irq = qcom_pcie_ep_raise_irq,
.get_features = qcom_pcie_epc_get_features,
};
static int qcom_pcie_ep_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct qcom_pcie_ep *pcie_ep;
int ret;
pcie_ep = devm_kzalloc(dev, sizeof(*pcie_ep), GFP_KERNEL);
if (!pcie_ep)
return -ENOMEM;
pcie_ep->pci.dev = dev;
pcie_ep->pci.ops = &pci_ops;
pcie_ep->pci.ep.ops = &pci_ep_ops;
platform_set_drvdata(pdev, pcie_ep);
ret = qcom_pcie_ep_get_resources(pdev, pcie_ep);
if (ret)
return ret;
ret = clk_bulk_prepare_enable(ARRAY_SIZE(qcom_pcie_ep_clks),
qcom_pcie_ep_clks);
if (ret)
return ret;
ret = qcom_pcie_ep_core_reset(pcie_ep);
if (ret)
goto err_disable_clk;
ret = phy_init(pcie_ep->phy);
if (ret)
goto err_disable_clk;
/* PHY needs to be powered on for dw_pcie_ep_init() */
ret = phy_power_on(pcie_ep->phy);
if (ret)
goto err_phy_exit;
ret = dw_pcie_ep_init(&pcie_ep->pci.ep);
if (ret) {
dev_err(dev, "Failed to initialize endpoint: %d\n", ret);
goto err_phy_power_off;
}
ret = qcom_pcie_ep_enable_irq_resources(pdev, pcie_ep);
if (ret)
goto err_phy_power_off;
return 0;
err_phy_power_off:
phy_power_off(pcie_ep->phy);
err_phy_exit:
phy_exit(pcie_ep->phy);
err_disable_clk:
clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks),
qcom_pcie_ep_clks);
return ret;
}
static int qcom_pcie_ep_remove(struct platform_device *pdev)
{
struct qcom_pcie_ep *pcie_ep = platform_get_drvdata(pdev);
if (pcie_ep->link_status == QCOM_PCIE_EP_LINK_DISABLED)
return 0;
phy_power_off(pcie_ep->phy);
phy_exit(pcie_ep->phy);
clk_bulk_disable_unprepare(ARRAY_SIZE(qcom_pcie_ep_clks),
qcom_pcie_ep_clks);
return 0;
}
static const struct of_device_id qcom_pcie_ep_match[] = {
{ .compatible = "qcom,sdx55-pcie-ep", },
{ }
};
static struct platform_driver qcom_pcie_ep_driver = {
.probe = qcom_pcie_ep_probe,
.remove = qcom_pcie_ep_remove,
.driver = {
.name = "qcom-pcie-ep",
.of_match_table = qcom_pcie_ep_match,
},
};
builtin_platform_driver(qcom_pcie_ep_driver);
MODULE_AUTHOR("Siddartha Mohanadoss <smohanad@codeaurora.org>");
MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>");
MODULE_DESCRIPTION("Qualcomm PCIe Endpoint controller driver");
MODULE_LICENSE("GPL v2");


@@ -166,6 +166,9 @@ struct qcom_pcie_resources_2_7_0 {
struct regulator_bulk_data supplies[2];
struct reset_control *pci_reset;
struct clk *pipe_clk;
struct clk *pipe_clk_src;
struct clk *phy_pipe_clk;
struct clk *ref_clk_src;
};
union qcom_pcie_resources {
@@ -189,6 +192,11 @@ struct qcom_pcie_ops {
int (*config_sid)(struct qcom_pcie *pcie);
};
struct qcom_pcie_cfg {
const struct qcom_pcie_ops *ops;
unsigned int pipe_clk_need_muxing:1;
};
struct qcom_pcie {
struct dw_pcie *pci;
void __iomem *parf; /* DT parf */
@@ -197,6 +205,7 @@ struct qcom_pcie {
struct phy *phy;
struct gpio_desc *reset;
const struct qcom_pcie_ops *ops;
unsigned int pipe_clk_need_muxing:1;
};
#define to_qcom_pcie(x) dev_get_drvdata((x)->dev)
@@ -1167,6 +1176,20 @@ static int qcom_pcie_get_resources_2_7_0(struct qcom_pcie *pcie)
if (ret < 0)
return ret;
if (pcie->pipe_clk_need_muxing) {
res->pipe_clk_src = devm_clk_get(dev, "pipe_mux");
if (IS_ERR(res->pipe_clk_src))
return PTR_ERR(res->pipe_clk_src);
res->phy_pipe_clk = devm_clk_get(dev, "phy_pipe");
if (IS_ERR(res->phy_pipe_clk))
return PTR_ERR(res->phy_pipe_clk);
res->ref_clk_src = devm_clk_get(dev, "ref");
if (IS_ERR(res->ref_clk_src))
return PTR_ERR(res->ref_clk_src);
}
res->pipe_clk = devm_clk_get(dev, "pipe");
return PTR_ERR_OR_ZERO(res->pipe_clk);
}
@@ -1185,6 +1208,10 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
return ret;
}
/* Set TCXO as clock source for pcie_pipe_clk_src */
if (pcie->pipe_clk_need_muxing)
clk_set_parent(res->pipe_clk_src, res->ref_clk_src);
ret = clk_bulk_prepare_enable(res->num_clks, res->clks); ret = clk_bulk_prepare_enable(res->num_clks, res->clks);
if (ret < 0) if (ret < 0)
goto err_disable_regulators; goto err_disable_regulators;
@ -1256,6 +1283,10 @@ static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
{ {
struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0; struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
/* Set pipe clock as clock source for pcie_pipe_clk_src */
if (pcie->pipe_clk_need_muxing)
clk_set_parent(res->pipe_clk_src, res->phy_pipe_clk);
return clk_prepare_enable(res->pipe_clk); return clk_prepare_enable(res->pipe_clk);
} }
@ -1456,6 +1487,39 @@ static const struct qcom_pcie_ops ops_1_9_0 = {
.config_sid = qcom_pcie_config_sid_sm8250, .config_sid = qcom_pcie_config_sid_sm8250,
}; };
static const struct qcom_pcie_cfg apq8084_cfg = {
.ops = &ops_1_0_0,
};
static const struct qcom_pcie_cfg ipq8064_cfg = {
.ops = &ops_2_1_0,
};
static const struct qcom_pcie_cfg msm8996_cfg = {
.ops = &ops_2_3_2,
};
static const struct qcom_pcie_cfg ipq8074_cfg = {
.ops = &ops_2_3_3,
};
static const struct qcom_pcie_cfg ipq4019_cfg = {
.ops = &ops_2_4_0,
};
static const struct qcom_pcie_cfg sdm845_cfg = {
.ops = &ops_2_7_0,
};
static const struct qcom_pcie_cfg sm8250_cfg = {
.ops = &ops_1_9_0,
};
static const struct qcom_pcie_cfg sc7280_cfg = {
.ops = &ops_1_9_0,
.pipe_clk_need_muxing = true,
};
static const struct dw_pcie_ops dw_pcie_ops = { static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = qcom_pcie_link_up, .link_up = qcom_pcie_link_up,
.start_link = qcom_pcie_start_link, .start_link = qcom_pcie_start_link,
@ -1467,6 +1531,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
struct pcie_port *pp; struct pcie_port *pp;
struct dw_pcie *pci; struct dw_pcie *pci;
struct qcom_pcie *pcie; struct qcom_pcie *pcie;
const struct qcom_pcie_cfg *pcie_cfg;
int ret; int ret;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
@ -1488,7 +1553,14 @@ static int qcom_pcie_probe(struct platform_device *pdev)
pcie->pci = pci; pcie->pci = pci;
pcie->ops = of_device_get_match_data(dev); pcie_cfg = of_device_get_match_data(dev);
if (!pcie_cfg || !pcie_cfg->ops) {
dev_err(dev, "Invalid platform data\n");
return -EINVAL;
}
pcie->ops = pcie_cfg->ops;
pcie->pipe_clk_need_muxing = pcie_cfg->pipe_clk_need_muxing;
pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH); pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
if (IS_ERR(pcie->reset)) { if (IS_ERR(pcie->reset)) {
@ -1545,16 +1617,18 @@ err_pm_runtime_put:
} }
static const struct of_device_id qcom_pcie_match[] = { static const struct of_device_id qcom_pcie_match[] = {
{ .compatible = "qcom,pcie-apq8084", .data = &ops_1_0_0 }, { .compatible = "qcom,pcie-apq8084", .data = &apq8084_cfg },
{ .compatible = "qcom,pcie-ipq8064", .data = &ops_2_1_0 }, { .compatible = "qcom,pcie-ipq8064", .data = &ipq8064_cfg },
{ .compatible = "qcom,pcie-ipq8064-v2", .data = &ops_2_1_0 }, { .compatible = "qcom,pcie-ipq8064-v2", .data = &ipq8064_cfg },
{ .compatible = "qcom,pcie-apq8064", .data = &ops_2_1_0 }, { .compatible = "qcom,pcie-apq8064", .data = &ipq8064_cfg },
{ .compatible = "qcom,pcie-msm8996", .data = &ops_2_3_2 }, { .compatible = "qcom,pcie-msm8996", .data = &msm8996_cfg },
{ .compatible = "qcom,pcie-ipq8074", .data = &ops_2_3_3 }, { .compatible = "qcom,pcie-ipq8074", .data = &ipq8074_cfg },
{ .compatible = "qcom,pcie-ipq4019", .data = &ops_2_4_0 }, { .compatible = "qcom,pcie-ipq4019", .data = &ipq4019_cfg },
{ .compatible = "qcom,pcie-qcs404", .data = &ops_2_4_0 }, { .compatible = "qcom,pcie-qcs404", .data = &ipq4019_cfg },
{ .compatible = "qcom,pcie-sdm845", .data = &ops_2_7_0 }, { .compatible = "qcom,pcie-sdm845", .data = &sdm845_cfg },
{ .compatible = "qcom,pcie-sm8250", .data = &ops_1_9_0 }, { .compatible = "qcom,pcie-sm8250", .data = &sm8250_cfg },
{ .compatible = "qcom,pcie-sc8180x", .data = &sm8250_cfg },
{ .compatible = "qcom,pcie-sc7280", .data = &sc7280_cfg },
{ } { }
}; };

View File

@@ -168,30 +168,21 @@ static void uniphier_pcie_irq_enable(struct uniphier_pcie_priv *priv)
 	writel(PCL_RCV_INTX_ALL_ENABLE, priv->base + PCL_RCV_INTX);
 }
 
-static void uniphier_pcie_irq_ack(struct irq_data *d)
-{
-	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
-	u32 val;
-
-	val = readl(priv->base + PCL_RCV_INTX);
-	val &= ~PCL_RCV_INTX_ALL_STATUS;
-	val |= BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_STATUS_SHIFT);
-	writel(val, priv->base + PCL_RCV_INTX);
-}
-
 static void uniphier_pcie_irq_mask(struct irq_data *d)
 {
 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
+	unsigned long flags;
 	u32 val;
 
+	raw_spin_lock_irqsave(&pp->lock, flags);
+
 	val = readl(priv->base + PCL_RCV_INTX);
-	val &= ~PCL_RCV_INTX_ALL_MASK;
 	val |= BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT);
 	writel(val, priv->base + PCL_RCV_INTX);
+
+	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
 
 static void uniphier_pcie_irq_unmask(struct irq_data *d)
@@ -199,17 +190,20 @@ static void uniphier_pcie_irq_unmask(struct irq_data *d)
 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
+	unsigned long flags;
 	u32 val;
 
+	raw_spin_lock_irqsave(&pp->lock, flags);
+
 	val = readl(priv->base + PCL_RCV_INTX);
-	val &= ~PCL_RCV_INTX_ALL_MASK;
 	val &= ~BIT(irqd_to_hwirq(d) + PCL_RCV_INTX_MASK_SHIFT);
 	writel(val, priv->base + PCL_RCV_INTX);
+
+	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
 
 static struct irq_chip uniphier_pcie_irq_chip = {
 	.name = "PCI",
-	.irq_ack = uniphier_pcie_irq_ack,
 	.irq_mask = uniphier_pcie_irq_mask,
 	.irq_unmask = uniphier_pcie_irq_unmask,
 };

View File

@@ -279,13 +279,10 @@ static int visconti_add_pcie_port(struct visconti_pcie *pcie,
 {
 	struct dw_pcie *pci = &pcie->pci;
 	struct pcie_port *pp = &pci->pp;
-	struct device *dev = &pdev->dev;
 
 	pp->irq = platform_get_irq_byname(pdev, "intr");
-	if (pp->irq < 0) {
-		dev_err(dev, "Interrupt intr is missing");
+	if (pp->irq < 0)
 		return pp->irq;
-	}
 
 	pp->ops = &visconti_pcie_host_ops;

View File

@@ -31,10 +31,8 @@
 /* PCIe core registers */
 #define PCIE_CORE_DEV_ID_REG				0x0
 #define PCIE_CORE_CMD_STATUS_REG			0x4
-#define     PCIE_CORE_CMD_IO_ACCESS_EN			BIT(0)
-#define     PCIE_CORE_CMD_MEM_ACCESS_EN			BIT(1)
-#define     PCIE_CORE_CMD_MEM_IO_REQ_EN			BIT(2)
 #define PCIE_CORE_DEV_REV_REG				0x8
+#define PCIE_CORE_EXP_ROM_BAR_REG			0x30
 #define PCIE_CORE_PCIEXP_CAP				0xc0
 #define PCIE_CORE_ERR_CAPCTL_REG			0x118
 #define     PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX		BIT(5)
@@ -99,6 +97,7 @@
 #define     PCIE_CORE_CTRL2_MSI_ENABLE		BIT(10)
 #define PCIE_CORE_REF_CLK_REG			(CONTROL_BASE_ADDR + 0x14)
 #define     PCIE_CORE_REF_CLK_TX_ENABLE		BIT(1)
+#define     PCIE_CORE_REF_CLK_RX_ENABLE		BIT(2)
 #define PCIE_MSG_LOG_REG			(CONTROL_BASE_ADDR + 0x30)
 #define PCIE_ISR0_REG				(CONTROL_BASE_ADDR + 0x40)
 #define     PCIE_MSG_PM_PME_MASK		BIT(7)
@@ -106,18 +105,19 @@
 #define     PCIE_ISR0_MSI_INT_PENDING		BIT(24)
 #define     PCIE_ISR0_INTX_ASSERT(val)		BIT(16 + (val))
 #define     PCIE_ISR0_INTX_DEASSERT(val)	BIT(20 + (val))
-#define     PCIE_ISR0_ALL_MASK			GENMASK(26, 0)
+#define     PCIE_ISR0_ALL_MASK			GENMASK(31, 0)
 #define PCIE_ISR1_REG				(CONTROL_BASE_ADDR + 0x48)
 #define PCIE_ISR1_MASK_REG			(CONTROL_BASE_ADDR + 0x4C)
 #define     PCIE_ISR1_POWER_STATE_CHANGE	BIT(4)
 #define     PCIE_ISR1_FLUSH			BIT(5)
 #define     PCIE_ISR1_INTX_ASSERT(val)		BIT(8 + (val))
-#define     PCIE_ISR1_ALL_MASK			GENMASK(11, 4)
+#define     PCIE_ISR1_ALL_MASK			GENMASK(31, 0)
 #define PCIE_MSI_ADDR_LOW_REG			(CONTROL_BASE_ADDR + 0x50)
 #define PCIE_MSI_ADDR_HIGH_REG			(CONTROL_BASE_ADDR + 0x54)
 #define PCIE_MSI_STATUS_REG			(CONTROL_BASE_ADDR + 0x58)
 #define PCIE_MSI_MASK_REG			(CONTROL_BASE_ADDR + 0x5C)
 #define PCIE_MSI_PAYLOAD_REG			(CONTROL_BASE_ADDR + 0x9C)
+#define     PCIE_MSI_DATA_MASK			GENMASK(15, 0)
 
 /* PCIe window configuration */
 #define OB_WIN_BASE_ADDR			0x4c00
@@ -164,8 +164,50 @@
 #define CFG_REG					(LMI_BASE_ADDR + 0x0)
 #define     LTSSM_SHIFT				24
 #define     LTSSM_MASK				0x3f
-#define     LTSSM_L0				0x10
 #define     RC_BAR_CONFIG			0x300
+
+/* LTSSM values in CFG_REG */
+enum {
+	LTSSM_DETECT_QUIET			= 0x0,
+	LTSSM_DETECT_ACTIVE			= 0x1,
+	LTSSM_POLLING_ACTIVE			= 0x2,
+	LTSSM_POLLING_COMPLIANCE		= 0x3,
+	LTSSM_POLLING_CONFIGURATION		= 0x4,
+	LTSSM_CONFIG_LINKWIDTH_START		= 0x5,
+	LTSSM_CONFIG_LINKWIDTH_ACCEPT		= 0x6,
+	LTSSM_CONFIG_LANENUM_ACCEPT		= 0x7,
+	LTSSM_CONFIG_LANENUM_WAIT		= 0x8,
+	LTSSM_CONFIG_COMPLETE			= 0x9,
+	LTSSM_CONFIG_IDLE			= 0xa,
+	LTSSM_RECOVERY_RCVR_LOCK		= 0xb,
+	LTSSM_RECOVERY_SPEED			= 0xc,
+	LTSSM_RECOVERY_RCVR_CFG			= 0xd,
+	LTSSM_RECOVERY_IDLE			= 0xe,
+	LTSSM_L0				= 0x10,
+	LTSSM_RX_L0S_ENTRY			= 0x11,
+	LTSSM_RX_L0S_IDLE			= 0x12,
+	LTSSM_RX_L0S_FTS			= 0x13,
+	LTSSM_TX_L0S_ENTRY			= 0x14,
+	LTSSM_TX_L0S_IDLE			= 0x15,
+	LTSSM_TX_L0S_FTS			= 0x16,
+	LTSSM_L1_ENTRY				= 0x17,
+	LTSSM_L1_IDLE				= 0x18,
+	LTSSM_L2_IDLE				= 0x19,
+	LTSSM_L2_TRANSMIT_WAKE			= 0x1a,
+	LTSSM_DISABLED				= 0x20,
+	LTSSM_LOOPBACK_ENTRY_MASTER		= 0x21,
+	LTSSM_LOOPBACK_ACTIVE_MASTER		= 0x22,
+	LTSSM_LOOPBACK_EXIT_MASTER		= 0x23,
+	LTSSM_LOOPBACK_ENTRY_SLAVE		= 0x24,
+	LTSSM_LOOPBACK_ACTIVE_SLAVE		= 0x25,
+	LTSSM_LOOPBACK_EXIT_SLAVE		= 0x26,
+	LTSSM_HOT_RESET				= 0x27,
+	LTSSM_RECOVERY_EQUALIZATION_PHASE0	= 0x28,
+	LTSSM_RECOVERY_EQUALIZATION_PHASE1	= 0x29,
+	LTSSM_RECOVERY_EQUALIZATION_PHASE2	= 0x2a,
+	LTSSM_RECOVERY_EQUALIZATION_PHASE3	= 0x2b,
+};
+
 #define VENDOR_ID_REG				(LMI_BASE_ADDR + 0x44)
 
 /* PCIe core controller registers */
@@ -198,7 +240,7 @@
 #define     PCIE_IRQ_MSI_INT2_DET		BIT(21)
 #define     PCIE_IRQ_RC_DBELL_DET		BIT(22)
 #define     PCIE_IRQ_EP_STATUS			BIT(23)
-#define     PCIE_IRQ_ALL_MASK			0xfff0fb
+#define     PCIE_IRQ_ALL_MASK			GENMASK(31, 0)
 #define     PCIE_IRQ_ENABLE_INTS_MASK		PCIE_IRQ_CORE_INT
 
 /* Transaction types */
@@ -257,18 +299,49 @@ static inline u32 advk_readl(struct advk_pcie *pcie, u64 reg)
 	return readl(pcie->base + reg);
 }
 
-static inline u16 advk_read16(struct advk_pcie *pcie, u64 reg)
+static u8 advk_pcie_ltssm_state(struct advk_pcie *pcie)
 {
-	return advk_readl(pcie, (reg & ~0x3)) >> ((reg & 0x3) * 8);
-}
-
-static int advk_pcie_link_up(struct advk_pcie *pcie)
-{
-	u32 val, ltssm_state;
+	u32 val;
+	u8 ltssm_state;
 
 	val = advk_readl(pcie, CFG_REG);
 	ltssm_state = (val >> LTSSM_SHIFT) & LTSSM_MASK;
-	return ltssm_state >= LTSSM_L0;
+	return ltssm_state;
+}
+
+static inline bool advk_pcie_link_up(struct advk_pcie *pcie)
+{
+	/* check if LTSSM is in normal operation - some L* state */
+	u8 ltssm_state = advk_pcie_ltssm_state(pcie);
+	return ltssm_state >= LTSSM_L0 && ltssm_state < LTSSM_DISABLED;
+}
+
+static inline bool advk_pcie_link_active(struct advk_pcie *pcie)
+{
+	/*
+	 * According to PCIe Base specification 3.0, Table 4-14: Link
+	 * Status Mapped to the LTSSM, and 4.2.6.3.6 Configuration.Idle
+	 * is Link Up mapped to LTSSM Configuration.Idle, Recovery, L0,
+	 * L0s, L1 and L2 states. And according to 3.2.1. Data Link
+	 * Control and Management State Machine Rules is DL Up status
+	 * reported in DL Active state.
+	 */
+	u8 ltssm_state = advk_pcie_ltssm_state(pcie);
+	return ltssm_state >= LTSSM_CONFIG_IDLE && ltssm_state < LTSSM_DISABLED;
+}
+
+static inline bool advk_pcie_link_training(struct advk_pcie *pcie)
+{
+	/*
+	 * According to PCIe Base specification 3.0, Table 4-14: Link
+	 * Status Mapped to the LTSSM is Link Training mapped to LTSSM
+	 * Configuration and Recovery states.
+	 */
+	u8 ltssm_state = advk_pcie_ltssm_state(pcie);
+	return ((ltssm_state >= LTSSM_CONFIG_LINKWIDTH_START &&
+		 ltssm_state < LTSSM_L0) ||
+		(ltssm_state >= LTSSM_RECOVERY_EQUALIZATION_PHASE0 &&
+		 ltssm_state <= LTSSM_RECOVERY_EQUALIZATION_PHASE3));
 }
 
 static int advk_pcie_wait_for_link(struct advk_pcie *pcie)
@@ -291,7 +364,7 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie)
 	size_t retries;
 
 	for (retries = 0; retries < RETRAIN_WAIT_MAX_RETRIES; ++retries) {
-		if (!advk_pcie_link_up(pcie))
+		if (advk_pcie_link_training(pcie))
 			break;
 		udelay(RETRAIN_WAIT_USLEEP_US);
 	}
@@ -299,23 +372,9 @@ static void advk_pcie_wait_for_retrain(struct advk_pcie *pcie)
 
 static void advk_pcie_issue_perst(struct advk_pcie *pcie)
 {
-	u32 reg;
-
 	if (!pcie->reset_gpio)
 		return;
 
-	/*
-	 * As required by PCI Express spec (PCI Express Base Specification, REV.
-	 * 4.0 PCI Express, February 19 2014, 6.6.1 Conventional Reset) a delay
-	 * for at least 100ms after de-asserting PERST# signal is needed before
-	 * link training is enabled. So ensure that link training is disabled
-	 * prior de-asserting PERST# signal to fulfill that PCI Express spec
-	 * requirement.
-	 */
-	reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
-	reg &= ~LINK_TRAINING_EN;
-	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
-
 	/* 10ms delay is needed for some cards */
 	dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n");
 	gpiod_set_value_cansleep(pcie->reset_gpio, 1);
@@ -323,53 +382,46 @@ static void advk_pcie_issue_perst(struct advk_pcie *pcie)
 	gpiod_set_value_cansleep(pcie->reset_gpio, 0);
 }
 
-static int advk_pcie_train_at_gen(struct advk_pcie *pcie, int gen)
+static void advk_pcie_train_link(struct advk_pcie *pcie)
 {
-	int ret, neg_gen;
+	struct device *dev = &pcie->pdev->dev;
 	u32 reg;
+	int ret;
 
-	/* Setup link speed */
+	/*
+	 * Setup PCIe rev / gen compliance based on device tree property
+	 * 'max-link-speed' which also forces maximal link speed.
+	 */
 	reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
 	reg &= ~PCIE_GEN_SEL_MSK;
-	if (gen == 3)
+	if (pcie->link_gen == 3)
 		reg |= SPEED_GEN_3;
-	else if (gen == 2)
+	else if (pcie->link_gen == 2)
 		reg |= SPEED_GEN_2;
 	else
 		reg |= SPEED_GEN_1;
 	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
 
 	/*
-	 * Enable link training. This is not needed in every call to this
-	 * function, just once suffices, but it does not break anything either.
+	 * Set maximal link speed value also into PCIe Link Control 2 register.
+	 * Armada 3700 Functional Specification says that default value is based
+	 * on SPEED_GEN but tests showed that default value is always 8.0 GT/s.
 	 */
+	reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL2);
+	reg &= ~PCI_EXP_LNKCTL2_TLS;
+	if (pcie->link_gen == 3)
+		reg |= PCI_EXP_LNKCTL2_TLS_8_0GT;
+	else if (pcie->link_gen == 2)
+		reg |= PCI_EXP_LNKCTL2_TLS_5_0GT;
+	else
+		reg |= PCI_EXP_LNKCTL2_TLS_2_5GT;
+	advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL2);
+
+	/* Enable link training after selecting PCIe generation */
 	reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
 	reg |= LINK_TRAINING_EN;
 	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
 
-	/*
-	 * Start link training immediately after enabling it.
-	 * This solves problems for some buggy cards.
-	 */
-	reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL);
-	reg |= PCI_EXP_LNKCTL_RL;
-	advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL);
-
-	ret = advk_pcie_wait_for_link(pcie);
-	if (ret)
-		return ret;
-
-	reg = advk_read16(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKSTA);
-	neg_gen = reg & PCI_EXP_LNKSTA_CLS;
-
-	return neg_gen;
-}
-
-static void advk_pcie_train_link(struct advk_pcie *pcie)
-{
-	struct device *dev = &pcie->pdev->dev;
-	int neg_gen = -1, gen;
-
 	/*
 	 * Reset PCIe card via PERST# signal. Some cards are not detected
 	 * during link training when they are in some non-initial state.
@@ -380,41 +432,18 @@ static void advk_pcie_train_link(struct advk_pcie *pcie)
 	 * PERST# signal could have been asserted by pinctrl subsystem before
 	 * probe() callback has been called or issued explicitly by reset gpio
 	 * function advk_pcie_issue_perst(), making the endpoint going into
-	 * fundamental reset. As required by PCI Express spec a delay for at
-	 * least 100ms after such a reset before link training is needed.
+	 * fundamental reset. As required by PCI Express spec (PCI Express
+	 * Base Specification, REV. 4.0 PCI Express, February 19 2014, 6.6.1
+	 * Conventional Reset) a delay for at least 100ms after such a reset
+	 * before sending a Configuration Request to the device is needed.
+	 * So wait until PCIe link is up. Function advk_pcie_wait_for_link()
+	 * waits for link at least 900ms.
 	 */
-	msleep(PCI_PM_D3COLD_WAIT);
-
-	/*
-	 * Try link training at link gen specified by device tree property
-	 * 'max-link-speed'. If this fails, iteratively train at lower gen.
-	 */
-	for (gen = pcie->link_gen; gen > 0; --gen) {
-		neg_gen = advk_pcie_train_at_gen(pcie, gen);
-		if (neg_gen > 0)
-			break;
-	}
-
-	if (neg_gen < 0)
-		goto err;
-
-	/*
-	 * After successful training if negotiated gen is lower than requested,
-	 * train again on negotiated gen. This solves some stability issues for
-	 * some buggy gen1 cards.
-	 */
-	if (neg_gen < gen) {
-		gen = neg_gen;
-		neg_gen = advk_pcie_train_at_gen(pcie, gen);
-	}
-
-	if (neg_gen == gen) {
-		dev_info(dev, "link up at gen %i\n", gen);
-		return;
-	}
-
-err:
-	dev_err(dev, "link never came up\n");
+	ret = advk_pcie_wait_for_link(pcie);
+	if (ret < 0)
+		dev_err(dev, "link never came up\n");
+	else
+		dev_info(dev, "link up\n");
 }
 
 /*
@@ -451,9 +480,15 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
 	u32 reg;
 	int i;
 
-	/* Enable TX */
+	/*
+	 * Configure PCIe Reference clock. Direction is from the PCIe
+	 * controller to the endpoint card, so enable transmitting of
+	 * Reference clock differential signal off-chip and disable
+	 * receiving off-chip differential signal.
+	 */
 	reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG);
 	reg |= PCIE_CORE_REF_CLK_TX_ENABLE;
+	reg &= ~PCIE_CORE_REF_CLK_RX_ENABLE;
 	advk_writel(pcie, reg, PCIE_CORE_REF_CLK_REG);
 
 	/* Set to Direct mode */
@@ -477,6 +512,31 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
 	reg = (PCI_VENDOR_ID_MARVELL << 16) | PCI_VENDOR_ID_MARVELL;
 	advk_writel(pcie, reg, VENDOR_ID_REG);
 
+	/*
+	 * Change Class Code of PCI Bridge device to PCI Bridge (0x600400),
+	 * because the default value is Mass storage controller (0x010400).
+	 *
+	 * Note that this Aardvark PCI Bridge does not have compliant Type 1
+	 * Configuration Space and it even cannot be accessed via Aardvark's
+	 * PCI config space access method. Something like config space is
+	 * available in internal Aardvark registers starting at offset 0x0
+	 * and is reported as Type 0. In range 0x10 - 0x34 it has totally
+	 * different registers.
+	 *
+	 * Therefore driver uses emulation of PCI Bridge which emulates
+	 * access to configuration space via internal Aardvark registers or
+	 * emulated configuration buffer.
+	 */
+	reg = advk_readl(pcie, PCIE_CORE_DEV_REV_REG);
+	reg &= ~0xffffff00;
+	reg |= (PCI_CLASS_BRIDGE_PCI << 8) << 8;
+	advk_writel(pcie, reg, PCIE_CORE_DEV_REV_REG);
+
+	/* Disable Root Bridge I/O space, memory space and bus mastering */
+	reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
+	reg &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
+	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
+
 	/* Set Advanced Error Capabilities and Control PF0 register */
 	reg = PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX |
 	      PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN |
@@ -488,8 +548,9 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
 	reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL);
 	reg &= ~PCI_EXP_DEVCTL_RELAX_EN;
 	reg &= ~PCI_EXP_DEVCTL_NOSNOOP_EN;
+	reg &= ~PCI_EXP_DEVCTL_PAYLOAD;
 	reg &= ~PCI_EXP_DEVCTL_READRQ;
-	reg |= PCI_EXP_DEVCTL_PAYLOAD; /* Set max payload size */
+	reg |= PCI_EXP_DEVCTL_PAYLOAD_512B;
 	reg |= PCI_EXP_DEVCTL_READRQ_512B;
 	advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL);
@@ -574,19 +635,6 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
 		advk_pcie_disable_ob_win(pcie, i);
 
 	advk_pcie_train_link(pcie);
-
-	/*
-	 * FIXME: The following register update is suspicious. This register is
-	 * applicable only when the PCI controller is configured for Endpoint
-	 * mode, not as a Root Complex. But apparently when this code is
-	 * removed, some cards stop working. This should be investigated and
-	 * a comment explaining this should be put here.
-	 */
-	reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
-	reg |= PCIE_CORE_CMD_MEM_ACCESS_EN |
-	       PCIE_CORE_CMD_IO_ACCESS_EN |
-	       PCIE_CORE_CMD_MEM_IO_REQ_EN;
-	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
 }
@@ -595,6 +643,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val)
 	u32 reg;
 	unsigned int status;
 	char *strcomp_status, *str_posted;
+	int ret;
 
 	reg = advk_readl(pcie, PIO_STAT);
 	status = (reg & PIO_COMPLETION_STATUS_MASK) >>
@@ -619,6 +668,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val)
 	case PIO_COMPLETION_STATUS_OK:
 		if (reg & PIO_ERR_STATUS) {
 			strcomp_status = "COMP_ERR";
+			ret = -EFAULT;
 			break;
 		}
 		/* Get the read result */
@@ -626,9 +676,11 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val)
 			*val = advk_readl(pcie, PIO_RD_DATA);
 		/* No error */
 		strcomp_status = NULL;
+		ret = 0;
 		break;
 	case PIO_COMPLETION_STATUS_UR:
 		strcomp_status = "UR";
+		ret = -EOPNOTSUPP;
 		break;
 	case PIO_COMPLETION_STATUS_CRS:
 		if (allow_crs && val) {
@@ -646,6 +698,7 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val)
 			 */
 			*val = CFG_RD_CRS_VAL;
 			strcomp_status = NULL;
+			ret = 0;
 			break;
 		}
 		/* PCIe r4.0, sec 2.3.2, says:
@@ -661,31 +714,34 @@ static int advk_pcie_check_pio_status(struct advk_pcie *pcie, bool allow_crs, u32 *val)
 		 * Request and taking appropriate action, e.g., complete the
 		 * Request to the host as a failed transaction.
 		 *
-		 * To simplify implementation do not re-issue the Configuration
-		 * Request and complete the Request as a failed transaction.
+		 * So return -EAGAIN and caller (pci-aardvark.c driver) will
+		 * re-issue request again up to the PIO_RETRY_CNT retries.
 		 */
 		strcomp_status = "CRS";
+		ret = -EAGAIN;
 		break;
 	case PIO_COMPLETION_STATUS_CA:
 		strcomp_status = "CA";
+		ret = -ECANCELED;
 		break;
 	default:
 		strcomp_status = "Unknown";
+		ret = -EINVAL;
 		break;
 	}
 
 	if (!strcomp_status)
-		return 0;
+		return ret;
 
 	if (reg & PIO_NON_POSTED_REQ)
 		str_posted = "Non-posted";
 	else
 		str_posted = "Posted";
 
-	dev_err(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
+	dev_dbg(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
 		str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS));
 
-	return -EFAULT;
+	return ret;
 }
@@ -693,13 +749,13 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
 	struct device *dev = &pcie->pdev->dev;
 	int i;
 
-	for (i = 0; i < PIO_RETRY_CNT; i++) {
+	for (i = 1; i <= PIO_RETRY_CNT; i++) {
 		u32 start, isr;
 
 		start = advk_readl(pcie, PIO_START);
 		isr = advk_readl(pcie, PIO_ISR);
 		if (!start && isr)
-			return 0;
+			return i;
 		udelay(PIO_RETRY_DELAY);
 	}
@@ -707,6 +763,72 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
 	return -ETIMEDOUT;
 }
 
+static pci_bridge_emul_read_status_t
+advk_pci_bridge_emul_base_conf_read(struct pci_bridge_emul *bridge,
+				    int reg, u32 *value)
+{
+	struct advk_pcie *pcie = bridge->data;
+
+	switch (reg) {
+	case PCI_COMMAND:
+		*value = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG);
+		return PCI_BRIDGE_EMUL_HANDLED;
+
+	case PCI_ROM_ADDRESS1:
+		*value = advk_readl(pcie, PCIE_CORE_EXP_ROM_BAR_REG);
+		return PCI_BRIDGE_EMUL_HANDLED;
+
+	case PCI_INTERRUPT_LINE: {
+		/*
+		 * From the whole 32bit register we support reading from HW only
+		 * one bit: PCI_BRIDGE_CTL_BUS_RESET.
+		 * Other bits are retrieved only from emulated config buffer.
+		 */
+		__le32 *cfgspace = (__le32 *)&bridge->conf;
+		u32 val = le32_to_cpu(cfgspace[PCI_INTERRUPT_LINE / 4]);
+		if (advk_readl(pcie, PCIE_CORE_CTRL1_REG) & HOT_RESET_GEN)
+			val |= PCI_BRIDGE_CTL_BUS_RESET << 16;
+		else
+			val &= ~(PCI_BRIDGE_CTL_BUS_RESET << 16);
+		*value = val;
+		return PCI_BRIDGE_EMUL_HANDLED;
+	}
+
+	default:
+		return PCI_BRIDGE_EMUL_NOT_HANDLED;
+	}
+}
+
+static void
+advk_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
+				     int reg, u32 old, u32 new, u32 mask)
+{
+	struct advk_pcie *pcie = bridge->data;
+
+	switch (reg) {
+	case PCI_COMMAND:
+		advk_writel(pcie, new, PCIE_CORE_CMD_STATUS_REG);
+		break;
+
+	case PCI_ROM_ADDRESS1:
+		advk_writel(pcie, new, PCIE_CORE_EXP_ROM_BAR_REG);
+		break;
+
+	case PCI_INTERRUPT_LINE:
+		if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) {
+			u32 val = advk_readl(pcie, PCIE_CORE_CTRL1_REG);
+			if (new & (PCI_BRIDGE_CTL_BUS_RESET << 16))
+				val |= HOT_RESET_GEN;
+			else
+				val &= ~HOT_RESET_GEN;
+			advk_writel(pcie, val, PCIE_CORE_CTRL1_REG);
+		}
+		break;
+
+	default:
+		break;
+	}
+}
+
 static pci_bridge_emul_read_status_t
 advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
@@ -723,6 +845,7 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
 	case PCI_EXP_RTCTL: {
 		u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
 		*value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE;
+		*value |= le16_to_cpu(bridge->pcie_conf.rootctl) & PCI_EXP_RTCTL_CRSSVE;
 		*value |= PCI_EXP_RTCAP_CRSVIS << 16;
 		return PCI_BRIDGE_EMUL_HANDLED;
 	}
@@ -734,12 +857,26 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
 		return PCI_BRIDGE_EMUL_HANDLED;
 	}
 
+	case PCI_EXP_LNKCAP: {
+		u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
+		/*
+		 * PCI_EXP_LNKCAP_DLLLARC bit is hardwired in aardvark HW to 0.
+		 * But support for PCI_EXP_LNKSTA_DLLLA is emulated via ltssm
+		 * state so explicitly enable PCI_EXP_LNKCAP_DLLLARC flag.
+		 */
+		val |= PCI_EXP_LNKCAP_DLLLARC;
+		*value = val;
+		return PCI_BRIDGE_EMUL_HANDLED;
+	}
+
 	case PCI_EXP_LNKCTL: {
 		/* u32 contains both PCI_EXP_LNKCTL and PCI_EXP_LNKSTA */
 		u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg) &
 			~(PCI_EXP_LNKSTA_LT << 16);
-		if (!advk_pcie_link_up(pcie))
+		if (advk_pcie_link_training(pcie))
 			val |= (PCI_EXP_LNKSTA_LT << 16);
+		if (advk_pcie_link_active(pcie))
+			val |= (PCI_EXP_LNKSTA_DLLLA << 16);
 		*value = val;
 		return PCI_BRIDGE_EMUL_HANDLED;
 	}
@@ -747,7 +884,6 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
 	case PCI_CAP_LIST_ID:
 	case PCI_EXP_DEVCAP:
 	case PCI_EXP_DEVCTL:
-	case PCI_EXP_LNKCAP:
 		*value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
 		return PCI_BRIDGE_EMUL_HANDLED;
 	default:
@@ -794,6 +930,8 @@ advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
 }

 static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
+	.read_base = advk_pci_bridge_emul_base_conf_read,
+	.write_base = advk_pci_bridge_emul_base_conf_write,
 	.read_pcie = advk_pci_bridge_emul_pcie_conf_read,
 	.write_pcie = advk_pci_bridge_emul_pcie_conf_write,
 };
@@ -805,7 +943,6 @@ static struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
 static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
 {
 	struct pci_bridge_emul *bridge = &pcie->bridge;
-	int ret;

 	bridge->conf.vendor =
 		cpu_to_le16(advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff);
@@ -825,19 +962,14 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
 	/* Support interrupt A for MSI feature */
 	bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;

+	/* Indicates supports for Completion Retry Status */
+	bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
+
 	bridge->has_pcie = true;
 	bridge->data = pcie;
 	bridge->ops = &advk_pci_bridge_emul_ops;

-	/* PCIe config space can be initialized after pci_bridge_emul_init() */
-	ret = pci_bridge_emul_init(bridge, 0);
-	if (ret < 0)
-		return ret;
-
-	/* Indicates supports for Completion Retry Status */
-	bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
-
-	return 0;
+	return pci_bridge_emul_init(bridge, 0);
 }

 static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
@@ -889,6 +1021,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
 			     int where, int size, u32 *val)
 {
 	struct advk_pcie *pcie = bus->sysdata;
+	int retry_count;
 	bool allow_crs;
 	u32 reg;
 	int ret;
@@ -911,18 +1044,8 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
 		    (le16_to_cpu(pcie->bridge.pcie_conf.rootctl) &
 		     PCI_EXP_RTCTL_CRSSVE);

-	if (advk_pcie_pio_is_running(pcie)) {
-		/*
-		 * If it is possible return Completion Retry Status so caller
-		 * tries to issue the request again instead of failing.
-		 */
-		if (allow_crs) {
-			*val = CFG_RD_CRS_VAL;
-			return PCIBIOS_SUCCESSFUL;
-		}
-		*val = 0xffffffff;
-		return PCIBIOS_SET_FAILED;
-	}
+	if (advk_pcie_pio_is_running(pcie))
+		goto try_crs;

 	/* Program the control register */
 	reg = advk_readl(pcie, PIO_CTRL);
@@ -941,30 +1064,24 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
 	/* Program the data strobe */
 	advk_writel(pcie, 0xf, PIO_WR_DATA_STRB);

-	/* Clear PIO DONE ISR and start the transfer */
-	advk_writel(pcie, 1, PIO_ISR);
-	advk_writel(pcie, 1, PIO_START);
-
-	ret = advk_pcie_wait_pio(pcie);
-	if (ret < 0) {
-		/*
-		 * If it is possible return Completion Retry Status so caller
-		 * tries to issue the request again instead of failing.
-		 */
-		if (allow_crs) {
-			*val = CFG_RD_CRS_VAL;
-			return PCIBIOS_SUCCESSFUL;
-		}
-		*val = 0xffffffff;
-		return PCIBIOS_SET_FAILED;
-	}
-
-	/* Check PIO status and get the read result */
-	ret = advk_pcie_check_pio_status(pcie, allow_crs, val);
-	if (ret < 0) {
-		*val = 0xffffffff;
-		return PCIBIOS_SET_FAILED;
-	}
+	retry_count = 0;
+	do {
+		/* Clear PIO DONE ISR and start the transfer */
+		advk_writel(pcie, 1, PIO_ISR);
+		advk_writel(pcie, 1, PIO_START);
+
+		ret = advk_pcie_wait_pio(pcie);
+		if (ret < 0)
+			goto try_crs;
+
+		retry_count += ret;
+
+		/* Check PIO status and get the read result */
+		ret = advk_pcie_check_pio_status(pcie, allow_crs, val);
+	} while (ret == -EAGAIN && retry_count < PIO_RETRY_CNT);
+
+	if (ret < 0)
+		goto fail;

 	if (size == 1)
 		*val = (*val >> (8 * (where & 3))) & 0xff;
@@ -972,6 +1089,20 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
 		*val = (*val >> (8 * (where & 3))) & 0xffff;

 	return PCIBIOS_SUCCESSFUL;
+
+try_crs:
+	/*
+	 * If it is possible, return Completion Retry Status so that caller
+	 * tries to issue the request again instead of failing.
+	 */
+	if (allow_crs) {
+		*val = CFG_RD_CRS_VAL;
+		return PCIBIOS_SUCCESSFUL;
+	}
+
+fail:
+	*val = 0xffffffff;
+	return PCIBIOS_SET_FAILED;
 }
@@ -980,6 +1111,7 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
 	struct advk_pcie *pcie = bus->sysdata;
 	u32 reg;
 	u32 data_strobe = 0x0;
+	int retry_count;
 	int offset;
 	int ret;
@@ -1021,19 +1153,22 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
 	/* Program the data strobe */
 	advk_writel(pcie, data_strobe, PIO_WR_DATA_STRB);

-	/* Clear PIO DONE ISR and start the transfer */
-	advk_writel(pcie, 1, PIO_ISR);
-	advk_writel(pcie, 1, PIO_START);
-
-	ret = advk_pcie_wait_pio(pcie);
-	if (ret < 0)
-		return PCIBIOS_SET_FAILED;
-
-	ret = advk_pcie_check_pio_status(pcie, false, NULL);
-	if (ret < 0)
-		return PCIBIOS_SET_FAILED;
-
-	return PCIBIOS_SUCCESSFUL;
+	retry_count = 0;
+	do {
+		/* Clear PIO DONE ISR and start the transfer */
+		advk_writel(pcie, 1, PIO_ISR);
+		advk_writel(pcie, 1, PIO_START);
+
+		ret = advk_pcie_wait_pio(pcie);
+		if (ret < 0)
+			return PCIBIOS_SET_FAILED;
+
+		retry_count += ret;
+
+		ret = advk_pcie_check_pio_status(pcie, false, NULL);
+	} while (ret == -EAGAIN && retry_count < PIO_RETRY_CNT);
+
+	return ret < 0 ? PCIBIOS_SET_FAILED : PCIBIOS_SUCCESSFUL;
 }

 static struct pci_ops advk_pcie_ops = {
@@ -1082,7 +1217,7 @@ static int advk_msi_irq_domain_alloc(struct irq_domain *domain,
 				    domain->host_data, handle_simple_irq,
 				    NULL, NULL);

-	return hwirq;
+	return 0;
 }

 static void advk_msi_irq_domain_free(struct irq_domain *domain,
@@ -1263,8 +1398,12 @@ static void advk_pcie_handle_msi(struct advk_pcie *pcie)
 		if (!(BIT(msi_idx) & msi_status))
 			continue;

+		/*
+		 * msi_idx contains bits [4:0] of the msi_data and msi_data
+		 * contains 16bit MSI interrupt number
+		 */
 		advk_writel(pcie, BIT(msi_idx), PCIE_MSI_STATUS_REG);
-		msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & 0xFF;
+		msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & PCIE_MSI_DATA_MASK;
 		generic_handle_irq(msi_data);
 	}
@@ -1286,12 +1425,6 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie)
 	isr1_mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
 	isr1_status = isr1_val & ((~isr1_mask) & PCIE_ISR1_ALL_MASK);

-	if (!isr0_status && !isr1_status) {
-		advk_writel(pcie, isr0_val, PCIE_ISR0_REG);
-		advk_writel(pcie, isr1_val, PCIE_ISR1_REG);
-		return;
-	}
-
 	/* Process MSI interrupts */
 	if (isr0_status & PCIE_ISR0_MSI_INT_PENDING)
 		advk_pcie_handle_msi(pcie);


@@ -3126,14 +3126,14 @@ static int hv_pci_probe(struct hv_device *hdev,
 	if (dom == HVPCI_DOM_INVALID) {
 		dev_err(&hdev->device,
-			"Unable to use dom# 0x%hx or other numbers", dom_req);
+			"Unable to use dom# 0x%x or other numbers", dom_req);
 		ret = -EINVAL;
 		goto free_bus;
 	}

 	if (dom != dom_req)
 		dev_info(&hdev->device,
-			 "PCI dom# 0x%hx has collision, using 0x%hx",
+			 "PCI dom# 0x%x has collision, using 0x%x",
 			 dom_req, dom);

 	hbus->bridge->domain_nr = dom;


@@ -17,7 +17,7 @@ static void set_val(u32 v, int where, int size, u32 *val)
 {
 	int shift = (where & 3) * 8;

-	pr_debug("set_val %04x: %08x\n", (unsigned)(where & ~3), v);
+	pr_debug("set_val %04x: %08x\n", (unsigned int)(where & ~3), v);
 	v >>= shift;
 	if (size == 1)
 		v &= 0xff;
@@ -187,7 +187,7 @@ static int thunder_ecam_config_read(struct pci_bus *bus, unsigned int devfn,
 		pr_debug("%04x:%04x - Fix pass#: %08x, where: %03x, devfn: %03x\n",
 			 vendor_device & 0xffff, vendor_device >> 16, class_rev,
-			 (unsigned) where, devfn);
+			 (unsigned int)where, devfn);

 		/* Check for non type-00 header */
 		if (cfg_type == 0) {


@@ -302,7 +302,7 @@ static void xgene_msi_isr(struct irq_desc *desc)
 	/*
 	 * MSIINTn (n is 0..F) indicates if there is a pending MSI interrupt
-	 * If bit x of this register is set (x is 0..7), one or more interupts
+	 * If bit x of this register is set (x is 0..7), one or more interrupts
 	 * corresponding to MSInIRx is set.
 	 */
 	grp_select = xgene_msi_int_read(xgene_msi, msi_grp);


@@ -48,7 +48,6 @@
 #define EN_COHERENCY		0xF0000000
 #define EN_REG			0x00000001
 #define OB_LO_IO		0x00000002
-#define XGENE_PCIE_VENDORID	0x10E8
 #define XGENE_PCIE_DEVICEID	0xE004
 #define SZ_1T			(SZ_1G*1024ULL)
 #define PIPE_PHY_RATE_RD(src)	((0xc000 & (u32)(src)) >> 0xe)
@@ -560,7 +559,7 @@ static int xgene_pcie_setup(struct xgene_pcie_port *port)
 	xgene_pcie_clear_config(port);

 	/* setup the vendor and device IDs correctly */
-	val = (XGENE_PCIE_DEVICEID << 16) | XGENE_PCIE_VENDORID;
+	val = (XGENE_PCIE_DEVICEID << 16) | PCI_VENDOR_ID_AMCC;
 	xgene_pcie_writel(port, BRIDGE_CFG_0, val);

 	ret = xgene_pcie_map_ranges(port);


@@ -0,0 +1,824 @@
// SPDX-License-Identifier: GPL-2.0
/*
* PCIe host bridge driver for Apple system-on-chips.
*
* The HW is ECAM compliant, so once the controller is initialized,
* the driver mostly deals with MSI mapping and handling of per-port
* interrupts (INTx, management and error signals).
*
* Initialization requires enabling power and clocks, along with a
* number of register pokes.
*
* Copyright (C) 2021 Alyssa Rosenzweig <alyssa@rosenzweig.io>
* Copyright (C) 2021 Google LLC
* Copyright (C) 2021 Corellium LLC
* Copyright (C) 2021 Mark Kettenis <kettenis@openbsd.org>
*
* Author: Alyssa Rosenzweig <alyssa@rosenzweig.io>
* Author: Marc Zyngier <maz@kernel.org>
*/
#include <linux/gpio/consumer.h>
#include <linux/kernel.h>
#include <linux/iopoll.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/notifier.h>
#include <linux/of_irq.h>
#include <linux/pci-ecam.h>
#define CORE_RC_PHYIF_CTL 0x00024
#define CORE_RC_PHYIF_CTL_RUN BIT(0)
#define CORE_RC_PHYIF_STAT 0x00028
#define CORE_RC_PHYIF_STAT_REFCLK BIT(4)
#define CORE_RC_CTL 0x00050
#define CORE_RC_CTL_RUN BIT(0)
#define CORE_RC_STAT 0x00058
#define CORE_RC_STAT_READY BIT(0)
#define CORE_FABRIC_STAT 0x04000
#define CORE_FABRIC_STAT_MASK 0x001F001F
#define CORE_LANE_CFG(port) (0x84000 + 0x4000 * (port))
#define CORE_LANE_CFG_REFCLK0REQ BIT(0)
#define CORE_LANE_CFG_REFCLK1 BIT(1)
#define CORE_LANE_CFG_REFCLK0ACK BIT(2)
#define CORE_LANE_CFG_REFCLKEN (BIT(9) | BIT(10))
#define CORE_LANE_CTL(port) (0x84004 + 0x4000 * (port))
#define CORE_LANE_CTL_CFGACC BIT(15)
#define PORT_LTSSMCTL 0x00080
#define PORT_LTSSMCTL_START BIT(0)
#define PORT_INTSTAT 0x00100
#define PORT_INT_TUNNEL_ERR 31
#define PORT_INT_CPL_TIMEOUT 23
#define PORT_INT_RID2SID_MAPERR 22
#define PORT_INT_CPL_ABORT 21
#define PORT_INT_MSI_BAD_DATA 19
#define PORT_INT_MSI_ERR 18
#define PORT_INT_REQADDR_GT32 17
#define PORT_INT_AF_TIMEOUT 15
#define PORT_INT_LINK_DOWN 14
#define PORT_INT_LINK_UP 12
#define PORT_INT_LINK_BWMGMT 11
#define PORT_INT_AER_MASK (15 << 4)
#define PORT_INT_PORT_ERR 4
#define PORT_INT_INTx(i) i
#define PORT_INT_INTx_MASK 15
#define PORT_INTMSK 0x00104
#define PORT_INTMSKSET 0x00108
#define PORT_INTMSKCLR 0x0010c
#define PORT_MSICFG 0x00124
#define PORT_MSICFG_EN BIT(0)
#define PORT_MSICFG_L2MSINUM_SHIFT 4
#define PORT_MSIBASE 0x00128
#define PORT_MSIBASE_1_SHIFT 16
#define PORT_MSIADDR 0x00168
#define PORT_LINKSTS 0x00208
#define PORT_LINKSTS_UP BIT(0)
#define PORT_LINKSTS_BUSY BIT(2)
#define PORT_LINKCMDSTS 0x00210
#define PORT_OUTS_NPREQS 0x00284
#define PORT_OUTS_NPREQS_REQ BIT(24)
#define PORT_OUTS_NPREQS_CPL BIT(16)
#define PORT_RXWR_FIFO 0x00288
#define PORT_RXWR_FIFO_HDR GENMASK(15, 10)
#define PORT_RXWR_FIFO_DATA GENMASK(9, 0)
#define PORT_RXRD_FIFO 0x0028C
#define PORT_RXRD_FIFO_REQ GENMASK(6, 0)
#define PORT_OUTS_CPLS 0x00290
#define PORT_OUTS_CPLS_SHRD GENMASK(14, 8)
#define PORT_OUTS_CPLS_WAIT GENMASK(6, 0)
#define PORT_APPCLK 0x00800
#define PORT_APPCLK_EN BIT(0)
#define PORT_APPCLK_CGDIS BIT(8)
#define PORT_STATUS 0x00804
#define PORT_STATUS_READY BIT(0)
#define PORT_REFCLK 0x00810
#define PORT_REFCLK_EN BIT(0)
#define PORT_REFCLK_CGDIS BIT(8)
#define PORT_PERST 0x00814
#define PORT_PERST_OFF BIT(0)
#define PORT_RID2SID(i16) (0x00828 + 4 * (i16))
#define PORT_RID2SID_VALID BIT(31)
#define PORT_RID2SID_SID_SHIFT 16
#define PORT_RID2SID_BUS_SHIFT 8
#define PORT_RID2SID_DEV_SHIFT 3
#define PORT_RID2SID_FUNC_SHIFT 0
#define PORT_OUTS_PREQS_HDR 0x00980
#define PORT_OUTS_PREQS_HDR_MASK GENMASK(9, 0)
#define PORT_OUTS_PREQS_DATA 0x00984
#define PORT_OUTS_PREQS_DATA_MASK GENMASK(15, 0)
#define PORT_TUNCTRL 0x00988
#define PORT_TUNCTRL_PERST_ON BIT(0)
#define PORT_TUNCTRL_PERST_ACK_REQ BIT(1)
#define PORT_TUNSTAT 0x0098c
#define PORT_TUNSTAT_PERST_ON BIT(0)
#define PORT_TUNSTAT_PERST_ACK_PEND BIT(1)
#define PORT_PREFMEM_ENABLE 0x00994
#define MAX_RID2SID 64
/*
* The doorbell address is set to 0xfffff000, which by convention
* matches what MacOS does, and it is possible to use any other
* address (in the bottom 4GB, as the base register is only 32bit).
* However, it has to be excluded from the IOVA range, and the DART
* driver has to know about it.
*/
#define DOORBELL_ADDR CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR
struct apple_pcie {
struct mutex lock;
struct device *dev;
void __iomem *base;
struct irq_domain *domain;
unsigned long *bitmap;
struct list_head ports;
struct completion event;
struct irq_fwspec fwspec;
u32 nvecs;
};
struct apple_pcie_port {
struct apple_pcie *pcie;
struct device_node *np;
void __iomem *base;
struct irq_domain *domain;
struct list_head entry;
DECLARE_BITMAP(sid_map, MAX_RID2SID);
int sid_map_sz;
int idx;
};
static void rmw_set(u32 set, void __iomem *addr)
{
writel_relaxed(readl_relaxed(addr) | set, addr);
}
static void rmw_clear(u32 clr, void __iomem *addr)
{
writel_relaxed(readl_relaxed(addr) & ~clr, addr);
}
static void apple_msi_top_irq_mask(struct irq_data *d)
{
pci_msi_mask_irq(d);
irq_chip_mask_parent(d);
}
static void apple_msi_top_irq_unmask(struct irq_data *d)
{
pci_msi_unmask_irq(d);
irq_chip_unmask_parent(d);
}
static struct irq_chip apple_msi_top_chip = {
.name = "PCIe MSI",
.irq_mask = apple_msi_top_irq_mask,
.irq_unmask = apple_msi_top_irq_unmask,
.irq_eoi = irq_chip_eoi_parent,
.irq_set_affinity = irq_chip_set_affinity_parent,
.irq_set_type = irq_chip_set_type_parent,
};
static void apple_msi_compose_msg(struct irq_data *data, struct msi_msg *msg)
{
msg->address_hi = upper_32_bits(DOORBELL_ADDR);
msg->address_lo = lower_32_bits(DOORBELL_ADDR);
msg->data = data->hwirq;
}
static struct irq_chip apple_msi_bottom_chip = {
.name = "MSI",
.irq_mask = irq_chip_mask_parent,
.irq_unmask = irq_chip_unmask_parent,
.irq_eoi = irq_chip_eoi_parent,
.irq_set_affinity = irq_chip_set_affinity_parent,
.irq_set_type = irq_chip_set_type_parent,
.irq_compose_msi_msg = apple_msi_compose_msg,
};
static int apple_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *args)
{
struct apple_pcie *pcie = domain->host_data;
struct irq_fwspec fwspec = pcie->fwspec;
unsigned int i;
int ret, hwirq;
mutex_lock(&pcie->lock);
hwirq = bitmap_find_free_region(pcie->bitmap, pcie->nvecs,
order_base_2(nr_irqs));
mutex_unlock(&pcie->lock);
if (hwirq < 0)
return -ENOSPC;
fwspec.param[1] += hwirq;
ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &fwspec);
if (ret)
return ret;
for (i = 0; i < nr_irqs; i++) {
irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i,
&apple_msi_bottom_chip,
domain->host_data);
}
return 0;
}
static void apple_msi_domain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs)
{
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
struct apple_pcie *pcie = domain->host_data;
mutex_lock(&pcie->lock);
bitmap_release_region(pcie->bitmap, d->hwirq, order_base_2(nr_irqs));
mutex_unlock(&pcie->lock);
}
static const struct irq_domain_ops apple_msi_domain_ops = {
.alloc = apple_msi_domain_alloc,
.free = apple_msi_domain_free,
};
static struct msi_domain_info apple_msi_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX),
.chip = &apple_msi_top_chip,
};
static void apple_port_irq_mask(struct irq_data *data)
{
struct apple_pcie_port *port = irq_data_get_irq_chip_data(data);
writel_relaxed(BIT(data->hwirq), port->base + PORT_INTMSKSET);
}
static void apple_port_irq_unmask(struct irq_data *data)
{
struct apple_pcie_port *port = irq_data_get_irq_chip_data(data);
writel_relaxed(BIT(data->hwirq), port->base + PORT_INTMSKCLR);
}
static bool hwirq_is_intx(unsigned int hwirq)
{
return BIT(hwirq) & PORT_INT_INTx_MASK;
}
static void apple_port_irq_ack(struct irq_data *data)
{
struct apple_pcie_port *port = irq_data_get_irq_chip_data(data);
if (!hwirq_is_intx(data->hwirq))
writel_relaxed(BIT(data->hwirq), port->base + PORT_INTSTAT);
}
static int apple_port_irq_set_type(struct irq_data *data, unsigned int type)
{
/*
* It doesn't seem that there is any way to configure the
* trigger, so assume INTx have to be level (as per the spec),
* and the rest is edge (which looks likely).
*/
if (hwirq_is_intx(data->hwirq) ^ !!(type & IRQ_TYPE_LEVEL_MASK))
return -EINVAL;
irqd_set_trigger_type(data, type);
return 0;
}
static struct irq_chip apple_port_irqchip = {
.name = "PCIe",
.irq_ack = apple_port_irq_ack,
.irq_mask = apple_port_irq_mask,
.irq_unmask = apple_port_irq_unmask,
.irq_set_type = apple_port_irq_set_type,
};
static int apple_port_irq_domain_alloc(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs,
void *args)
{
struct apple_pcie_port *port = domain->host_data;
struct irq_fwspec *fwspec = args;
int i;
for (i = 0; i < nr_irqs; i++) {
irq_flow_handler_t flow = handle_edge_irq;
unsigned int type = IRQ_TYPE_EDGE_RISING;
if (hwirq_is_intx(fwspec->param[0] + i)) {
flow = handle_level_irq;
type = IRQ_TYPE_LEVEL_HIGH;
}
irq_domain_set_info(domain, virq + i, fwspec->param[0] + i,
&apple_port_irqchip, port, flow,
NULL, NULL);
irq_set_irq_type(virq + i, type);
}
return 0;
}
static void apple_port_irq_domain_free(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs)
{
int i;
for (i = 0; i < nr_irqs; i++) {
struct irq_data *d = irq_domain_get_irq_data(domain, virq + i);
irq_set_handler(virq + i, NULL);
irq_domain_reset_irq_data(d);
}
}
static const struct irq_domain_ops apple_port_irq_domain_ops = {
.translate = irq_domain_translate_onecell,
.alloc = apple_port_irq_domain_alloc,
.free = apple_port_irq_domain_free,
};
static void apple_port_irq_handler(struct irq_desc *desc)
{
struct apple_pcie_port *port = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned long stat;
int i;
chained_irq_enter(chip, desc);
stat = readl_relaxed(port->base + PORT_INTSTAT);
for_each_set_bit(i, &stat, 32)
generic_handle_domain_irq(port->domain, i);
chained_irq_exit(chip, desc);
}
static int apple_pcie_port_setup_irq(struct apple_pcie_port *port)
{
struct fwnode_handle *fwnode = &port->np->fwnode;
unsigned int irq;
/* FIXME: consider moving each interrupt under each port */
irq = irq_of_parse_and_map(to_of_node(dev_fwnode(port->pcie->dev)),
port->idx);
if (!irq)
return -ENXIO;
port->domain = irq_domain_create_linear(fwnode, 32,
&apple_port_irq_domain_ops,
port);
if (!port->domain)
return -ENOMEM;
/* Disable all interrupts */
writel_relaxed(~0, port->base + PORT_INTMSKSET);
writel_relaxed(~0, port->base + PORT_INTSTAT);
irq_set_chained_handler_and_data(irq, apple_port_irq_handler, port);
/* Configure MSI base address */
BUILD_BUG_ON(upper_32_bits(DOORBELL_ADDR));
writel_relaxed(lower_32_bits(DOORBELL_ADDR), port->base + PORT_MSIADDR);
/* Enable MSIs, shared between all ports */
writel_relaxed(0, port->base + PORT_MSIBASE);
writel_relaxed((ilog2(port->pcie->nvecs) << PORT_MSICFG_L2MSINUM_SHIFT) |
PORT_MSICFG_EN, port->base + PORT_MSICFG);
return 0;
}
static irqreturn_t apple_pcie_port_irq(int irq, void *data)
{
struct apple_pcie_port *port = data;
unsigned int hwirq = irq_domain_get_irq_data(port->domain, irq)->hwirq;
switch (hwirq) {
case PORT_INT_LINK_UP:
dev_info_ratelimited(port->pcie->dev, "Link up on %pOF\n",
port->np);
complete_all(&port->pcie->event);
break;
case PORT_INT_LINK_DOWN:
dev_info_ratelimited(port->pcie->dev, "Link down on %pOF\n",
port->np);
break;
default:
return IRQ_NONE;
}
return IRQ_HANDLED;
}
static int apple_pcie_port_register_irqs(struct apple_pcie_port *port)
{
static struct {
unsigned int hwirq;
const char *name;
} port_irqs[] = {
{ PORT_INT_LINK_UP, "Link up", },
{ PORT_INT_LINK_DOWN, "Link down", },
};
int i;
for (i = 0; i < ARRAY_SIZE(port_irqs); i++) {
struct irq_fwspec fwspec = {
.fwnode = &port->np->fwnode,
.param_count = 1,
.param = {
[0] = port_irqs[i].hwirq,
},
};
unsigned int irq;
int ret;
irq = irq_domain_alloc_irqs(port->domain, 1, NUMA_NO_NODE,
&fwspec);
if (WARN_ON(!irq))
continue;
ret = request_irq(irq, apple_pcie_port_irq, 0,
port_irqs[i].name, port);
WARN_ON(ret);
}
return 0;
}
static int apple_pcie_setup_refclk(struct apple_pcie *pcie,
struct apple_pcie_port *port)
{
u32 stat;
int res;
res = readl_relaxed_poll_timeout(pcie->base + CORE_RC_PHYIF_STAT, stat,
stat & CORE_RC_PHYIF_STAT_REFCLK,
100, 50000);
if (res < 0)
return res;
rmw_set(CORE_LANE_CTL_CFGACC, pcie->base + CORE_LANE_CTL(port->idx));
rmw_set(CORE_LANE_CFG_REFCLK0REQ, pcie->base + CORE_LANE_CFG(port->idx));
res = readl_relaxed_poll_timeout(pcie->base + CORE_LANE_CFG(port->idx),
stat, stat & CORE_LANE_CFG_REFCLK0ACK,
100, 50000);
if (res < 0)
return res;
rmw_set(CORE_LANE_CFG_REFCLK1, pcie->base + CORE_LANE_CFG(port->idx));
res = readl_relaxed_poll_timeout(pcie->base + CORE_LANE_CFG(port->idx),
stat, stat & CORE_LANE_CFG_REFCLK1,
100, 50000);
if (res < 0)
return res;
rmw_clear(CORE_LANE_CTL_CFGACC, pcie->base + CORE_LANE_CTL(port->idx));
rmw_set(CORE_LANE_CFG_REFCLKEN, pcie->base + CORE_LANE_CFG(port->idx));
rmw_set(PORT_REFCLK_EN, port->base + PORT_REFCLK);
return 0;
}
static u32 apple_pcie_rid2sid_write(struct apple_pcie_port *port,
int idx, u32 val)
{
writel_relaxed(val, port->base + PORT_RID2SID(idx));
/* Read back to ensure completion of the write */
return readl_relaxed(port->base + PORT_RID2SID(idx));
}
static int apple_pcie_setup_port(struct apple_pcie *pcie,
struct device_node *np)
{
struct platform_device *platform = to_platform_device(pcie->dev);
struct apple_pcie_port *port;
struct gpio_desc *reset;
u32 stat, idx;
int ret, i;
reset = gpiod_get_from_of_node(np, "reset-gpios", 0,
GPIOD_OUT_LOW, "#PERST");
if (IS_ERR(reset))
return PTR_ERR(reset);
port = devm_kzalloc(pcie->dev, sizeof(*port), GFP_KERNEL);
if (!port)
return -ENOMEM;
ret = of_property_read_u32_index(np, "reg", 0, &idx);
if (ret)
return ret;
/* Use the first reg entry to work out the port index */
port->idx = idx >> 11;
port->pcie = pcie;
port->np = np;
port->base = devm_platform_ioremap_resource(platform, port->idx + 2);
if (IS_ERR(port->base))
return PTR_ERR(port->base);
rmw_set(PORT_APPCLK_EN, port->base + PORT_APPCLK);
ret = apple_pcie_setup_refclk(pcie, port);
if (ret < 0)
return ret;
rmw_set(PORT_PERST_OFF, port->base + PORT_PERST);
gpiod_set_value(reset, 1);
ret = readl_relaxed_poll_timeout(port->base + PORT_STATUS, stat,
stat & PORT_STATUS_READY, 100, 250000);
if (ret < 0) {
dev_err(pcie->dev, "port %pOF ready wait timeout\n", np);
return ret;
}
ret = apple_pcie_port_setup_irq(port);
if (ret)
return ret;
/* Reset all RID/SID mappings, and check for RAZ/WI registers */
for (i = 0; i < MAX_RID2SID; i++) {
if (apple_pcie_rid2sid_write(port, i, 0xbad1d) != 0xbad1d)
break;
apple_pcie_rid2sid_write(port, i, 0);
}
dev_dbg(pcie->dev, "%pOF: %d RID/SID mapping entries\n", np, i);
port->sid_map_sz = i;
list_add_tail(&port->entry, &pcie->ports);
init_completion(&pcie->event);
ret = apple_pcie_port_register_irqs(port);
WARN_ON(ret);
writel_relaxed(PORT_LTSSMCTL_START, port->base + PORT_LTSSMCTL);
if (!wait_for_completion_timeout(&pcie->event, HZ / 10))
dev_warn(pcie->dev, "%pOF link didn't come up\n", np);
return 0;
}
static int apple_msi_init(struct apple_pcie *pcie)
{
struct fwnode_handle *fwnode = dev_fwnode(pcie->dev);
struct of_phandle_args args = {};
struct irq_domain *parent;
int ret;
ret = of_parse_phandle_with_args(to_of_node(fwnode), "msi-ranges",
"#interrupt-cells", 0, &args);
if (ret)
return ret;
ret = of_property_read_u32_index(to_of_node(fwnode), "msi-ranges",
args.args_count + 1, &pcie->nvecs);
if (ret)
return ret;
of_phandle_args_to_fwspec(args.np, args.args, args.args_count,
&pcie->fwspec);
pcie->bitmap = devm_bitmap_zalloc(pcie->dev, pcie->nvecs, GFP_KERNEL);
if (!pcie->bitmap)
return -ENOMEM;
parent = irq_find_matching_fwspec(&pcie->fwspec, DOMAIN_BUS_WIRED);
if (!parent) {
dev_err(pcie->dev, "failed to find parent domain\n");
return -ENXIO;
}
parent = irq_domain_create_hierarchy(parent, 0, pcie->nvecs, fwnode,
&apple_msi_domain_ops, pcie);
if (!parent) {
dev_err(pcie->dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
irq_domain_update_bus_token(parent, DOMAIN_BUS_NEXUS);
pcie->domain = pci_msi_create_irq_domain(fwnode, &apple_msi_info,
parent);
if (!pcie->domain) {
dev_err(pcie->dev, "failed to create MSI domain\n");
irq_domain_remove(parent);
return -ENOMEM;
}
return 0;
}
static struct apple_pcie_port *apple_pcie_get_port(struct pci_dev *pdev)
{
struct pci_config_window *cfg = pdev->sysdata;
struct apple_pcie *pcie = cfg->priv;
struct pci_dev *port_pdev;
struct apple_pcie_port *port;
/* Find the root port this device is on */
port_pdev = pcie_find_root_port(pdev);
/* If finding the port itself, nothing to do */
if (WARN_ON(!port_pdev) || pdev == port_pdev)
return NULL;
list_for_each_entry(port, &pcie->ports, entry) {
if (port->idx == PCI_SLOT(port_pdev->devfn))
return port;
}
return NULL;
}
static int apple_pcie_add_device(struct apple_pcie_port *port,
struct pci_dev *pdev)
{
u32 sid, rid = PCI_DEVID(pdev->bus->number, pdev->devfn);
int idx, err;
dev_dbg(&pdev->dev, "added to bus %s, index %d\n",
pci_name(pdev->bus->self), port->idx);
err = of_map_id(port->pcie->dev->of_node, rid, "iommu-map",
"iommu-map-mask", NULL, &sid);
if (err)
return err;
mutex_lock(&port->pcie->lock);
idx = bitmap_find_free_region(port->sid_map, port->sid_map_sz, 0);
if (idx >= 0) {
apple_pcie_rid2sid_write(port, idx,
PORT_RID2SID_VALID |
(sid << PORT_RID2SID_SID_SHIFT) | rid);
dev_dbg(&pdev->dev, "mapping RID%x to SID%x (index %d)\n",
rid, sid, idx);
}
mutex_unlock(&port->pcie->lock);
return idx >= 0 ? 0 : -ENOSPC;
}
static void apple_pcie_release_device(struct apple_pcie_port *port,
struct pci_dev *pdev)
{
u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn);
int idx;
mutex_lock(&port->pcie->lock);
for_each_set_bit(idx, port->sid_map, port->sid_map_sz) {
u32 val;
val = readl_relaxed(port->base + PORT_RID2SID(idx));
if ((val & 0xffff) == rid) {
apple_pcie_rid2sid_write(port, idx, 0);
bitmap_release_region(port->sid_map, idx, 0);
dev_dbg(&pdev->dev, "Released %x (%d)\n", val, idx);
break;
}
}
mutex_unlock(&port->pcie->lock);
}
static int apple_pcie_bus_notifier(struct notifier_block *nb,
unsigned long action,
void *data)
{
struct device *dev = data;
struct pci_dev *pdev = to_pci_dev(dev);
struct apple_pcie_port *port;
int err;
/*
* This is a bit ugly. We assume that if we get notified for
* any PCI device, we must be in charge of it, and that there
* is no other PCI controller in the whole system. It probably
* holds for now, but who knows for how long?
*/
port = apple_pcie_get_port(pdev);
if (!port)
return NOTIFY_DONE;
switch (action) {
case BUS_NOTIFY_ADD_DEVICE:
err = apple_pcie_add_device(port, pdev);
if (err)
return notifier_from_errno(err);
break;
case BUS_NOTIFY_DEL_DEVICE:
apple_pcie_release_device(port, pdev);
break;
default:
return NOTIFY_DONE;
}
return NOTIFY_OK;
}
static struct notifier_block apple_pcie_nb = {
.notifier_call = apple_pcie_bus_notifier,
};
static int apple_pcie_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct platform_device *platform = to_platform_device(dev);
struct device_node *of_port;
struct apple_pcie *pcie;
int ret;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
pcie->dev = dev;
mutex_init(&pcie->lock);
pcie->base = devm_platform_ioremap_resource(platform, 1);
if (IS_ERR(pcie->base))
return PTR_ERR(pcie->base);
cfg->priv = pcie;
INIT_LIST_HEAD(&pcie->ports);
for_each_child_of_node(dev->of_node, of_port) {
ret = apple_pcie_setup_port(pcie, of_port);
if (ret) {
dev_err(pcie->dev, "Port %pOF setup fail: %d\n", of_port, ret);
of_node_put(of_port);
return ret;
}
}
return apple_msi_init(pcie);
}
static int apple_pcie_probe(struct platform_device *pdev)
{
int ret;
ret = bus_register_notifier(&pci_bus_type, &apple_pcie_nb);
if (ret)
return ret;
ret = pci_host_common_probe(pdev);
if (ret)
bus_unregister_notifier(&pci_bus_type, &apple_pcie_nb);
return ret;
}
static const struct pci_ecam_ops apple_pcie_cfg_ecam_ops = {
.init = apple_pcie_init,
.pci_ops = {
.map_bus = pci_ecam_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
}
};
static const struct of_device_id apple_pcie_of_match[] = {
{ .compatible = "apple,pcie", .data = &apple_pcie_cfg_ecam_ops },
{ }
};
MODULE_DEVICE_TABLE(of, apple_pcie_of_match);
static struct platform_driver apple_pcie_driver = {
.probe = apple_pcie_probe,
.driver = {
.name = "pcie-apple",
.of_match_table = apple_pcie_of_match,
.suppress_bind_attrs = true,
},
};
module_platform_driver(apple_pcie_driver);
MODULE_LICENSE("GPL v2");


@@ -145,7 +145,7 @@
 #define BRCM_INT_PCI_MSI_LEGACY_NR	8
 #define BRCM_INT_PCI_MSI_SHIFT		0

-/* MSI target adresses */
+/* MSI target addresses */
 #define BRCM_MSI_TARGET_ADDR_LT_4GB	0x0fffffffcULL
 #define BRCM_MSI_TARGET_ADDR_GT_4GB	0xffffffffcULL


@@ -249,7 +249,7 @@ enum iproc_pcie_reg {
 	/*
 	 * To hold the address of the register where the MSI writes are
-	 * programed. When ARM GICv3 ITS is used, this should be programmed
+	 * programmed. When ARM GICv3 ITS is used, this should be programmed
 	 * with the address of the GITS_TRANSLATER register.
 	 */
 	IPROC_PCIE_MSI_ADDR_LO,


@@ -30,18 +30,18 @@
 #include <linux/reset.h>
 #include <linux/sys_soc.h>
 
-/* MediaTek specific configuration registers */
+/* MediaTek-specific configuration registers */
 #define PCIE_FTS_NUM			0x70c
 #define PCIE_FTS_NUM_MASK		GENMASK(15, 8)
 #define PCIE_FTS_NUM_L0(x)		(((x) & 0xff) << 8)
 
 /* Host-PCI bridge registers */
 #define RALINK_PCI_PCICFG_ADDR		0x0000
-#define RALINK_PCI_PCIMSK_ADDR		0x000C
+#define RALINK_PCI_PCIMSK_ADDR		0x000c
 #define RALINK_PCI_CONFIG_ADDR		0x0020
 #define RALINK_PCI_CONFIG_DATA		0x0024
 #define RALINK_PCI_MEMBASE		0x0028
-#define RALINK_PCI_IOBASE		0x002C
+#define RALINK_PCI_IOBASE		0x002c
 
 /* PCIe RC control registers */
 #define RALINK_PCI_ID			0x0030
@@ -132,7 +132,7 @@ static inline void pcie_port_write(struct mt7621_pcie_port *port,
 static inline u32 mt7621_pci_get_cfgaddr(unsigned int bus, unsigned int slot,
 					 unsigned int func, unsigned int where)
 {
-	return (((where & 0xF00) >> 8) << 24) | (bus << 16) | (slot << 11) |
+	return (((where & 0xf00) >> 8) << 24) | (bus << 16) | (slot << 11) |
 	       (func << 8) | (where & 0xfc) | 0x80000000;
 }
@@ -217,7 +217,7 @@ static int setup_cm_memory_region(struct pci_host_bridge *host)
 	entry = resource_list_first_type(&host->windows, IORESOURCE_MEM);
 	if (!entry) {
-		dev_err(dev, "Cannot get memory resource\n");
+		dev_err(dev, "cannot get memory resource\n");
 		return -EINVAL;
 	}
@@ -280,7 +280,7 @@ static int mt7621_pcie_parse_port(struct mt7621_pcie *pcie,
 	port->gpio_rst = devm_gpiod_get_index_optional(dev, "reset", slot,
 						       GPIOD_OUT_LOW);
 	if (IS_ERR(port->gpio_rst)) {
-		dev_err(dev, "Failed to get GPIO for PCIe%d\n", slot);
+		dev_err(dev, "failed to get GPIO for PCIe%d\n", slot);
 		err = PTR_ERR(port->gpio_rst);
 		goto remove_reset;
 	}
@@ -409,7 +409,7 @@ static int mt7621_pcie_init_ports(struct mt7621_pcie *pcie)
 		err = mt7621_pcie_init_port(port);
 		if (err) {
-			dev_err(dev, "Initiating port %d failed\n", slot);
+			dev_err(dev, "initializing port %d failed\n", slot);
 			list_del(&port->list);
 		}
 	}
@@ -476,7 +476,7 @@ static int mt7621_pcie_enable_ports(struct pci_host_bridge *host)
 	entry = resource_list_first_type(&host->windows, IORESOURCE_IO);
 	if (!entry) {
-		dev_err(dev, "Cannot get io resource\n");
+		dev_err(dev, "cannot get io resource\n");
 		return -EINVAL;
 	}
@@ -541,25 +541,25 @@ static int mt7621_pci_probe(struct platform_device *pdev)
 	err = mt7621_pcie_parse_dt(pcie);
 	if (err) {
-		dev_err(dev, "Parsing DT failed\n");
+		dev_err(dev, "parsing DT failed\n");
 		return err;
 	}
 
 	err = mt7621_pcie_init_ports(pcie);
 	if (err) {
-		dev_err(dev, "Nothing connected in virtual bridges\n");
+		dev_err(dev, "nothing connected in virtual bridges\n");
 		return 0;
 	}
 
 	err = mt7621_pcie_enable_ports(bridge);
 	if (err) {
-		dev_err(dev, "Error enabling pcie ports\n");
+		dev_err(dev, "error enabling pcie ports\n");
 		goto remove_resets;
 	}
 
 	err = setup_cm_memory_region(bridge);
 	if (err) {
-		dev_err(dev, "Error setting up iocu mem regions\n");
+		dev_err(dev, "error setting up iocu mem regions\n");
 		goto remove_resets;
 	}


@@ -6,16 +6,13 @@
  * Author: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
  */
 
-#include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/of_address.h>
-#include <linux/of_irq.h>
-#include <linux/of_pci.h>
 #include <linux/of_platform.h>
 #include <linux/pci.h>
 #include <linux/pci-epc.h>
-#include <linux/phy/phy.h>
 #include <linux/platform_device.h>
-#include <linux/pm_runtime.h>
 
 #include "pcie-rcar.h"


@@ -24,13 +24,11 @@
 #include <linux/msi.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
-#include <linux/of_pci.h>
 #include <linux/of_platform.h>
 #include <linux/pci.h>
 #include <linux/phy/phy.h>
 #include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
-#include <linux/slab.h>
 
 #include "pcie-rcar.h"


@@ -6,6 +6,7 @@
 #include <linux/device.h>
 #include <linux/interrupt.h>
+#include <linux/iommu.h>
 #include <linux/irq.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -18,8 +19,6 @@
 #include <linux/rcupdate.h>
 
 #include <asm/irqdomain.h>
-#include <asm/device.h>
-#include <asm/msi.h>
 
 #define VMD_CFGBAR	0
 #define VMD_MEMBAR1	2
@@ -70,6 +69,8 @@ enum vmd_features {
 	VMD_FEAT_CAN_BYPASS_MSI_REMAP		= (1 << 4),
 };
 
+static DEFINE_IDA(vmd_instance_ida);
+
 /*
  * Lock for manipulating VMD IRQ lists.
  */
@@ -120,6 +121,8 @@ struct vmd_dev {
 	struct pci_bus		*bus;
 	u8			busn_start;
 	u8			first_vec;
+	char			*name;
+	int			instance;
 };
 
 static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
@@ -650,7 +653,7 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd)
 		INIT_LIST_HEAD(&vmd->irqs[i].irq_list);
 		err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i),
 				       vmd_irq, IRQF_NO_THREAD,
-				       "vmd", &vmd->irqs[i]);
+				       vmd->name, &vmd->irqs[i]);
 		if (err)
 			return err;
 	}
@@ -761,7 +764,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 	 * acceptable because the guest is usually CPU-limited and MSI
 	 * remapping doesn't become a performance bottleneck.
 	 */
-	if (!(features & VMD_FEAT_CAN_BYPASS_MSI_REMAP) ||
+	if (iommu_capable(vmd->dev->dev.bus, IOMMU_CAP_INTR_REMAP) ||
+	    !(features & VMD_FEAT_CAN_BYPASS_MSI_REMAP) ||
 	    offset[0] || offset[1]) {
 		ret = vmd_alloc_irqs(vmd);
 		if (ret)
@@ -834,18 +838,32 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
 		return -ENOMEM;
 
 	vmd->dev = dev;
+	vmd->instance = ida_simple_get(&vmd_instance_ida, 0, 0, GFP_KERNEL);
+	if (vmd->instance < 0)
+		return vmd->instance;
+
+	vmd->name = kasprintf(GFP_KERNEL, "vmd%d", vmd->instance);
+	if (!vmd->name) {
+		err = -ENOMEM;
+		goto out_release_instance;
+	}
+
 	err = pcim_enable_device(dev);
 	if (err < 0)
-		return err;
+		goto out_release_instance;
 
 	vmd->cfgbar = pcim_iomap(dev, VMD_CFGBAR, 0);
-	if (!vmd->cfgbar)
-		return -ENOMEM;
+	if (!vmd->cfgbar) {
+		err = -ENOMEM;
+		goto out_release_instance;
+	}
 
 	pci_set_master(dev);
 	if (dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(64)) &&
-	    dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32)))
-		return -ENODEV;
+	    dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32))) {
+		err = -ENODEV;
+		goto out_release_instance;
+	}
 
 	if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
 		vmd->first_vec = 1;
@@ -854,11 +872,16 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	pci_set_drvdata(dev, vmd);
 	err = vmd_enable_domain(vmd, features);
 	if (err)
-		return err;
+		goto out_release_instance;
 
 	dev_info(&vmd->dev->dev, "Bound to PCI domain %04x\n",
 		 vmd->sysdata.domain);
 	return 0;
+
+out_release_instance:
+	ida_simple_remove(&vmd_instance_ida, vmd->instance);
+	kfree(vmd->name);
+	return err;
 }
 
 static void vmd_cleanup_srcu(struct vmd_dev *vmd)
@@ -879,6 +902,8 @@ static void vmd_remove(struct pci_dev *dev)
 	vmd_cleanup_srcu(vmd);
 	vmd_detach_resources(vmd);
 	vmd_remove_irq_domain(vmd);
+	ida_simple_remove(&vmd_instance_ida, vmd->instance);
+	kfree(vmd->name);
 }
 
 #ifdef CONFIG_PM_SLEEP
@@ -903,7 +928,7 @@ static int vmd_resume(struct device *dev)
 	for (i = 0; i < vmd->msix_count; i++) {
 		err = devm_request_irq(dev, pci_irq_vector(pdev, i),
 				       vmd_irq, IRQF_NO_THREAD,
-				       "vmd", &vmd->irqs[i]);
+				       vmd->name, &vmd->irqs[i]);
 		if (err)
 			return err;
 	}


@@ -1937,7 +1937,7 @@ static ssize_t epf_ntb_##_name##_show(struct config_item *item, \
 	struct config_group *group = to_config_group(item); \
 	struct epf_ntb *ntb = to_epf_ntb(group); \
 	\
-	return sprintf(page, "%d\n", ntb->_name); \
+	return sysfs_emit(page, "%d\n", ntb->_name); \
 }
 
 #define EPF_NTB_W(_name) \
@@ -1947,11 +1947,9 @@ static ssize_t epf_ntb_##_name##_store(struct config_item *item, \
 	struct config_group *group = to_config_group(item); \
 	struct epf_ntb *ntb = to_epf_ntb(group); \
 	u32 val; \
-	int ret; \
 	\
-	ret = kstrtou32(page, 0, &val); \
-	if (ret) \
-		return ret; \
+	if (kstrtou32(page, 0, &val) < 0) \
+		return -EINVAL; \
 	\
 	ntb->_name = val; \
 	\
@@ -1968,7 +1966,7 @@ static ssize_t epf_ntb_##_name##_show(struct config_item *item, \
 	\
 	sscanf(#_name, "mw%d", &win_no); \
 	\
-	return sprintf(page, "%lld\n", ntb->mws_size[win_no - 1]); \
+	return sysfs_emit(page, "%lld\n", ntb->mws_size[win_no - 1]); \
 }
 
 #define EPF_NTB_MW_W(_name) \
@@ -1980,11 +1978,9 @@ static ssize_t epf_ntb_##_name##_store(struct config_item *item, \
 	struct device *dev = &ntb->epf->dev; \
 	int win_no; \
 	u64 val; \
-	int ret; \
 	\
-	ret = kstrtou64(page, 0, &val); \
-	if (ret) \
-		return ret; \
+	if (kstrtou64(page, 0, &val) < 0) \
+		return -EINVAL; \
 	\
 	if (sscanf(#_name, "mw%d", &win_no) != 1) \
 		return -EINVAL; \
@@ -2005,11 +2001,9 @@ static ssize_t epf_ntb_num_mws_store(struct config_item *item,
 	struct config_group *group = to_config_group(item);
 	struct epf_ntb *ntb = to_epf_ntb(group);
 	u32 val;
-	int ret;
 
-	ret = kstrtou32(page, 0, &val);
-	if (ret)
-		return ret;
+	if (kstrtou32(page, 0, &val) < 0)
+		return -EINVAL;
 
 	if (val > MAX_MW)
 		return -EINVAL;


@@ -175,9 +175,8 @@ static ssize_t pci_epc_start_store(struct config_item *item, const char *page,
 	epc = epc_group->epc;
 
-	ret = kstrtobool(page, &start);
-	if (ret)
-		return ret;
+	if (kstrtobool(page, &start) < 0)
+		return -EINVAL;
 
 	if (!start) {
 		pci_epc_stop(epc);
@@ -198,8 +197,7 @@ static ssize_t pci_epc_start_store(struct config_item *item, const char *page,
 
 static ssize_t pci_epc_start_show(struct config_item *item, char *page)
 {
-	return sprintf(page, "%d\n",
-		       to_pci_epc_group(item)->start);
+	return sysfs_emit(page, "%d\n", to_pci_epc_group(item)->start);
 }
 
 CONFIGFS_ATTR(pci_epc_, start);
@@ -321,7 +319,7 @@ static ssize_t pci_epf_##_name##_show(struct config_item *item, char *page) \
 	struct pci_epf *epf = to_pci_epf_group(item)->epf; \
 	if (WARN_ON_ONCE(!epf->header)) \
 		return -EINVAL; \
-	return sprintf(page, "0x%04x\n", epf->header->_name); \
+	return sysfs_emit(page, "0x%04x\n", epf->header->_name); \
 }
 
 #define PCI_EPF_HEADER_W_u32(_name) \
@@ -329,13 +327,11 @@ static ssize_t pci_epf_##_name##_store(struct config_item *item, \
 				       const char *page, size_t len) \
 { \
 	u32 val; \
-	int ret; \
 	struct pci_epf *epf = to_pci_epf_group(item)->epf; \
 	if (WARN_ON_ONCE(!epf->header)) \
 		return -EINVAL; \
-	ret = kstrtou32(page, 0, &val); \
-	if (ret) \
-		return ret; \
+	if (kstrtou32(page, 0, &val) < 0) \
+		return -EINVAL; \
 	epf->header->_name = val; \
 	return len; \
 }
@@ -345,13 +341,11 @@ static ssize_t pci_epf_##_name##_store(struct config_item *item, \
 				       const char *page, size_t len) \
 { \
 	u16 val; \
-	int ret; \
 	struct pci_epf *epf = to_pci_epf_group(item)->epf; \
 	if (WARN_ON_ONCE(!epf->header)) \
 		return -EINVAL; \
-	ret = kstrtou16(page, 0, &val); \
-	if (ret) \
-		return ret; \
+	if (kstrtou16(page, 0, &val) < 0) \
+		return -EINVAL; \
 	epf->header->_name = val; \
 	return len; \
 }
@@ -361,13 +355,11 @@ static ssize_t pci_epf_##_name##_store(struct config_item *item, \
 				       const char *page, size_t len) \
 { \
 	u8 val; \
-	int ret; \
 	struct pci_epf *epf = to_pci_epf_group(item)->epf; \
 	if (WARN_ON_ONCE(!epf->header)) \
 		return -EINVAL; \
-	ret = kstrtou8(page, 0, &val); \
-	if (ret) \
-		return ret; \
+	if (kstrtou8(page, 0, &val) < 0) \
+		return -EINVAL; \
 	epf->header->_name = val; \
 	return len; \
 }
@@ -376,11 +368,9 @@ static ssize_t pci_epf_msi_interrupts_store(struct config_item *item,
 					    const char *page, size_t len)
 {
 	u8 val;
-	int ret;
 
-	ret = kstrtou8(page, 0, &val);
-	if (ret)
-		return ret;
+	if (kstrtou8(page, 0, &val) < 0)
+		return -EINVAL;
 
 	to_pci_epf_group(item)->epf->msi_interrupts = val;
@@ -390,19 +380,17 @@ static ssize_t pci_epf_msi_interrupts_store(struct config_item *item,
 static ssize_t pci_epf_msi_interrupts_show(struct config_item *item,
 					   char *page)
 {
-	return sprintf(page, "%d\n",
-		       to_pci_epf_group(item)->epf->msi_interrupts);
+	return sysfs_emit(page, "%d\n",
+			  to_pci_epf_group(item)->epf->msi_interrupts);
 }
 
 static ssize_t pci_epf_msix_interrupts_store(struct config_item *item,
 					     const char *page, size_t len)
 {
 	u16 val;
-	int ret;
 
-	ret = kstrtou16(page, 0, &val);
-	if (ret)
-		return ret;
+	if (kstrtou16(page, 0, &val) < 0)
+		return -EINVAL;
 
 	to_pci_epf_group(item)->epf->msix_interrupts = val;
@@ -412,8 +400,8 @@ static ssize_t pci_epf_msix_interrupts_store(struct config_item *item,
 static ssize_t pci_epf_msix_interrupts_show(struct config_item *item,
 					    char *page)
 {
-	return sprintf(page, "%d\n",
-		       to_pci_epf_group(item)->epf->msix_interrupts);
+	return sysfs_emit(page, "%d\n",
+			  to_pci_epf_group(item)->epf->msix_interrupts);
 }
 
 PCI_EPF_HEADER_R(vendorid)


@@ -700,7 +700,7 @@ EXPORT_SYMBOL_GPL(pci_epc_linkup);
 /**
  * pci_epc_init_notify() - Notify the EPF device that EPC device's core
  *                         initialization is completed.
- * @epc: the EPC device whose core initialization is completeds
+ * @epc: the EPC device whose core initialization is completed
  *
  * Invoke to Notify the EPF device that the EPC device's initialization
  * is completed.


@@ -224,7 +224,7 @@ EXPORT_SYMBOL_GPL(pci_epf_add_vepf);
  *		be removed
  * @epf_vf: the virtual EP function to be removed
  *
- * Invoke to remove a virtual endpoint function from the physcial endpoint
+ * Invoke to remove a virtual endpoint function from the physical endpoint
  * function.
  */
 void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf)
@@ -432,7 +432,7 @@ EXPORT_SYMBOL_GPL(pci_epf_destroy);
 /**
  * pci_epf_create() - create a new PCI EPF device
  * @name: the name of the PCI EPF device. This name will be used to bind the
- *	  the EPF device to a EPF driver
+ *	  EPF device to a EPF driver
  *
  * Invoke to create a new PCI EPF device by providing the name of the function
  * device.


@@ -22,7 +22,7 @@
  *   when the bridge is scanned and it loses a refcount when the bridge
  *   is removed.
  * - When a P2P bridge is present, we elevate the refcount on the subordinate
- *   bus. It loses the refcount when the the driver unloads.
+ *   bus. It loses the refcount when the driver unloads.
  */
 
 #define pr_fmt(fmt) "acpiphp_glue: " fmt


@@ -15,7 +15,7 @@
 #define _CPQPHP_H
 
 #include <linux/interrupt.h>
-#include <asm/io.h>		/* for read? and write? functions */
+#include <linux/io.h>		/* for read? and write? functions */
 #include <linux/delay.h>	/* for delays */
 #include <linux/mutex.h>
 #include <linux/sched/signal.h>	/* for signal_pending() */


@@ -519,7 +519,7 @@ error:
  * @head: list to search
  * @size: size of node to find, must be a power of two.
  *
- * Description: This function sorts the resource list by size and then returns
+ * Description: This function sorts the resource list by size and then
  * returns the first node of "size" length that is not in the ISA aliasing
  * window.  If it finds a node larger than "size" it will split it up.
  */
@@ -1202,7 +1202,7 @@ static u8 set_controller_speed(struct controller *ctrl, u8 adapter_speed, u8 hp_
 	mdelay(5);
 
-	/* Reenable interrupts */
+	/* Re-enable interrupts */
 	writel(0, ctrl->hpc_reg + INT_MASK);
 
 	pci_write_config_byte(ctrl->pci_dev, 0x41, reg);


@@ -189,8 +189,10 @@ int cpqhp_set_irq(u8 bus_num, u8 dev_num, u8 int_pin, u8 irq_num)
 		/* This should only be for x86 as it sets the Edge Level
 		 * Control Register
 		 */
-		outb((u8) (temp_word & 0xFF), 0x4d0); outb((u8) ((temp_word &
-			0xFF00) >> 8), 0x4d1); rc = 0; }
+		outb((u8)(temp_word & 0xFF), 0x4d0);
+		outb((u8)((temp_word & 0xFF00) >> 8), 0x4d1);
+		rc = 0;
+	}
 
 	return rc;
 }


@@ -352,7 +352,7 @@ struct resource_node {
 	u32 len;
 	int type;		/* MEM, IO, PFMEM */
 	u8 fromMem;		/* this is to indicate that the range is from
-				 * from the Memory bucket rather than from PFMem */
+				 * the Memory bucket rather than from PFMem */
 	struct resource_node *next;
 	struct resource_node *nextRange;	/* for the other mem range on bus */
 };
@@ -736,7 +736,7 @@ struct controller {
 int ibmphp_init_devno(struct slot **);	/* This function is called from EBDA, so we need it not be static */
 int ibmphp_do_disable_slot(struct slot *slot_cur);
-int ibmphp_update_slot_info(struct slot *);	/* This function is called from HPC, so we need it to not be be static */
+int ibmphp_update_slot_info(struct slot *);	/* This function is called from HPC, so we need it to not be static */
 int ibmphp_configure_card(struct pci_func *, u8);
 int ibmphp_unconfigure_card(struct slot **, int);
 extern const struct hotplug_slot_ops ibmphp_hotplug_slot_ops;


@@ -189,6 +189,8 @@ int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status);
 int pciehp_set_raw_indicator_status(struct hotplug_slot *h_slot, u8 status);
 int pciehp_get_raw_indicator_status(struct hotplug_slot *h_slot, u8 *status);
 
+int pciehp_slot_reset(struct pcie_device *dev);
+
 static inline const char *slot_name(struct controller *ctrl)
 {
 	return hotplug_slot_name(&ctrl->hotplug_slot);


@@ -351,6 +351,8 @@ static struct pcie_port_service_driver hpdriver_portdrv = {
 	.runtime_suspend = pciehp_runtime_suspend,
 	.runtime_resume	= pciehp_runtime_resume,
 #endif	/* PM */
+
+	.slot_reset	= pciehp_slot_reset,
 };
 
 int __init pcie_hp_init(void)


@@ -862,6 +862,32 @@ void pcie_disable_interrupt(struct controller *ctrl)
 	pcie_write_cmd(ctrl, 0, mask);
 }
 
+/**
+ * pciehp_slot_reset() - ignore link event caused by error-induced hot reset
+ * @dev: PCI Express port service device
+ *
+ * Called from pcie_portdrv_slot_reset() after AER or DPC initiated a reset
+ * further up in the hierarchy to recover from an error.  The reset was
+ * propagated down to this hotplug port.  Ignore the resulting link flap.
+ * If the link failed to retrain successfully, synthesize the ignored event.
+ * Surprise removal during reset is detected through Presence Detect Changed.
+ */
+int pciehp_slot_reset(struct pcie_device *dev)
+{
+	struct controller *ctrl = get_service_data(dev);
+
+	if (ctrl->state != ON_STATE)
+		return 0;
+
+	pcie_capability_write_word(dev->port, PCI_EXP_SLTSTA,
+				   PCI_EXP_SLTSTA_DLLSC);
+
+	if (!pciehp_check_link_active(ctrl))
+		pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
+
+	return 0;
+}
+
 /*
  * pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary
  * bus reset of the bridge, but at the same time we want to ensure that it is


@@ -295,7 +295,7 @@ static int shpc_write_cmd(struct slot *slot, u8 t_slot, u8 cmd)
 	mutex_lock(&slot->ctrl->cmd_lock);
 
 	if (!shpc_poll_ctrl_busy(ctrl)) {
-		/* After 1 sec and and the controller is still busy */
+		/* After 1 sec and the controller is still busy */
 		ctrl_err(ctrl, "Controller is still busy after 1 sec\n");
 		retval = -EBUSY;
 		goto out;


@@ -164,13 +164,15 @@ static ssize_t sriov_vf_total_msix_show(struct device *dev,
 					char *buf)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
+	struct pci_driver *pdrv;
 	u32 vf_total_msix = 0;
 
 	device_lock(dev);
-	if (!pdev->driver || !pdev->driver->sriov_get_vf_total_msix)
+	pdrv = to_pci_driver(dev->driver);
+	if (!pdrv || !pdrv->sriov_get_vf_total_msix)
 		goto unlock;
 
-	vf_total_msix = pdev->driver->sriov_get_vf_total_msix(pdev);
+	vf_total_msix = pdrv->sriov_get_vf_total_msix(pdev);
 unlock:
 	device_unlock(dev);
 	return sysfs_emit(buf, "%u\n", vf_total_msix);
@@ -183,23 +185,24 @@ static ssize_t sriov_vf_msix_count_store(struct device *dev,
 {
 	struct pci_dev *vf_dev = to_pci_dev(dev);
 	struct pci_dev *pdev = pci_physfn(vf_dev);
-	int val, ret;
+	struct pci_driver *pdrv;
+	int val, ret = 0;
 
-	ret = kstrtoint(buf, 0, &val);
-	if (ret)
-		return ret;
+	if (kstrtoint(buf, 0, &val) < 0)
+		return -EINVAL;
 
 	if (val < 0)
 		return -EINVAL;
 
 	device_lock(&pdev->dev);
-	if (!pdev->driver || !pdev->driver->sriov_set_msix_vec_count) {
+	pdrv = to_pci_driver(dev->driver);
+	if (!pdrv || !pdrv->sriov_set_msix_vec_count) {
 		ret = -EOPNOTSUPP;
 		goto err_pdev;
 	}
 
 	device_lock(&vf_dev->dev);
-	if (vf_dev->driver) {
+	if (to_pci_driver(vf_dev->dev.driver)) {
 		/*
 		 * A driver is already attached to this VF and has configured
 		 * itself based on the current MSI-X vector count.  Changing
@@ -209,7 +212,7 @@ static ssize_t sriov_vf_msix_count_store(struct device *dev,
 		goto err_dev;
 	}
 
-	ret = pdev->driver->sriov_set_msix_vec_count(vf_dev, val);
+	ret = pdrv->sriov_set_msix_vec_count(vf_dev, val);
 
 err_dev:
 	device_unlock(&vf_dev->dev);
@@ -376,12 +379,12 @@ static ssize_t sriov_numvfs_store(struct device *dev,
 				  const char *buf, size_t count)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
-	int ret;
+	struct pci_driver *pdrv;
+	int ret = 0;
 	u16 num_vfs;
 
-	ret = kstrtou16(buf, 0, &num_vfs);
-	if (ret < 0)
-		return ret;
+	if (kstrtou16(buf, 0, &num_vfs) < 0)
+		return -EINVAL;
 
 	if (num_vfs > pci_sriov_get_totalvfs(pdev))
 		return -ERANGE;
@@ -392,14 +395,15 @@ static ssize_t sriov_numvfs_store(struct device *dev,
 		goto exit;
 
 	/* is PF driver loaded */
-	if (!pdev->driver) {
+	pdrv = to_pci_driver(dev->driver);
+	if (!pdrv) {
 		pci_info(pdev, "no driver bound to device; cannot configure SR-IOV\n");
 		ret = -ENOENT;
 		goto exit;
 	}
 
 	/* is PF driver loaded w/callback */
-	if (!pdev->driver->sriov_configure) {
+	if (!pdrv->sriov_configure) {
 		pci_info(pdev, "driver does not support SR-IOV configuration via sysfs\n");
 		ret = -ENOENT;
 		goto exit;
@@ -407,7 +411,7 @@ static ssize_t sriov_numvfs_store(struct device *dev,
 
 	if (num_vfs == 0) {
 		/* disable VFs */
-		ret = pdev->driver->sriov_configure(pdev, 0);
+		ret = pdrv->sriov_configure(pdev, 0);
 		goto exit;
 	}
 
@@ -419,7 +423,7 @@ static ssize_t sriov_numvfs_store(struct device *dev,
 		goto exit;
 	}
 
-	ret = pdev->driver->sriov_configure(pdev, num_vfs);
+	ret = pdrv->sriov_configure(pdev, num_vfs);
 	if (ret < 0)
 		goto exit;


@@ -582,7 +582,8 @@ err:
 	return ret;
 }
 
-static void __iomem *msix_map_region(struct pci_dev *dev, unsigned nr_entries)
+static void __iomem *msix_map_region(struct pci_dev *dev,
+				     unsigned int nr_entries)
 {
 	resource_size_t phys_addr;
 	u32 table_offset;


@@ -423,7 +423,7 @@ failed:
  */
 static int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)
 {
-	struct device_node *dn, *ppnode;
+	struct device_node *dn, *ppnode = NULL;
 	struct pci_dev *ppdev;
 	__be32 laddr[3];
 	u8 pin;
@@ -452,8 +452,14 @@ static int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *
 	if (pin == 0)
 		return -ENODEV;
 
+	/* Local interrupt-map in the device node? Use it! */
+	if (of_get_property(dn, "interrupt-map", NULL)) {
+		pin = pci_swizzle_interrupt_pin(pdev, pin);
+		ppnode = dn;
+	}
+
 	/* Now we walk up the PCI tree */
-	for (;;) {
+	while (!ppnode) {
 		/* Get the pci_dev of our parent */
 		ppdev = pdev->bus->self;


@@ -874,7 +874,7 @@ static int __pci_p2pdma_map_sg(struct pci_p2pdma_pagemap *p2p_pgmap,
 	int i;
 
 	for_each_sg(sg, s, nents, i) {
-		s->dma_address = sg_phys(s) - p2p_pgmap->bus_offset;
+		s->dma_address = sg_phys(s) + p2p_pgmap->bus_offset;
 		sg_dma_len(s) = s->length;
 	}
 
@@ -943,7 +943,7 @@ EXPORT_SYMBOL_GPL(pci_p2pdma_unmap_sg_attrs);
  *
  * Parses an attribute value to decide whether to enable p2pdma.
  * The value can select a PCI device (using its full BDF device
- * name) or a boolean (in any format strtobool() accepts). A false
+ * name) or a boolean (in any format kstrtobool() accepts). A false
 * value disables p2pdma, a true value expects the caller
 * to automatically find a compatible device and specifying a PCI device
 * expects the caller to use the specific provider.
@@ -975,11 +975,11 @@ int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev,
 	} else if ((page[0] == '0' || page[0] == '1') && !iscntrl(page[1])) {
 		/*
 		 * If the user enters a PCI device that doesn't exist
-		 * like "0000:01:00.1", we don't want strtobool to think
+		 * like "0000:01:00.1", we don't want kstrtobool to think
 		 * it's a '0' when it's clearly not what the user wanted.
 		 * So we require 0's and 1's to be exactly one character.
 		 */
-	} else if (!strtobool(page, use_p2pdma)) {
+	} else if (!kstrtobool(page, use_p2pdma)) {
 		return 0;
 	}


@@ -431,8 +431,21 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
 	/* Clear the W1C bits */
 	new &= ~((value << shift) & (behavior[reg / 4].w1c & mask));
 
+	/* Save the new value with the cleared W1C bits into the cfgspace */
 	cfgspace[reg / 4] = cpu_to_le32(new);
 
+	/*
+	 * Clear the W1C bits not specified by the write mask, so that the
+	 * write_op() does not clear them.
+	 */
+	new &= ~(behavior[reg / 4].w1c & ~mask);
+
+	/*
+	 * Set the W1C bits specified by the write mask, so that write_op()
+	 * knows that they are to be cleared.
+	 */
+	new |= (value << shift) & (behavior[reg / 4].w1c & mask);
+
 	if (write_op)
 		write_op(bridge, reg, old, new, mask);


@@ -319,12 +319,10 @@ static long local_pci_probe(void *_ddi)
 	 * its remove routine.
 	 */
 	pm_runtime_get_sync(dev);
-	pci_dev->driver = pci_drv;
 	rc = pci_drv->probe(pci_dev, ddi->id);
 	if (!rc)
 		return rc;
 
 	if (rc < 0) {
-		pci_dev->driver = NULL;
 		pm_runtime_put_sync(dev);
 		return rc;
 	}
@@ -390,14 +388,13 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
  * @pci_dev: PCI device being probed
  *
  * returns 0 on success, else error.
- * side-effect: pci_dev->driver is set to drv when drv claims pci_dev.
  */
 static int __pci_device_probe(struct pci_driver *drv, struct pci_dev *pci_dev)
 {
 	const struct pci_device_id *id;
 	int error = 0;
 
-	if (!pci_dev->driver && drv->probe) {
+	if (drv->probe) {
 		error = -ENODEV;
 
 		id = pci_match_device(drv, pci_dev);
@@ -457,18 +454,15 @@ static int pci_device_probe(struct device *dev)
 static void pci_device_remove(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
-	struct pci_driver *drv = pci_dev->driver;
+	struct pci_driver *drv = to_pci_driver(dev->driver);
 
-	if (drv) {
-		if (drv->remove) {
-			pm_runtime_get_sync(dev);
-			drv->remove(pci_dev);
-			pm_runtime_put_noidle(dev);
-		}
-		pcibios_free_irq(pci_dev);
-		pci_dev->driver = NULL;
-		pci_iov_remove(pci_dev);
+	if (drv->remove) {
+		pm_runtime_get_sync(dev);
+		drv->remove(pci_dev);
+		pm_runtime_put_noidle(dev);
 	}
 
+	pcibios_free_irq(pci_dev);
+	pci_iov_remove(pci_dev);
 	/* Undo the runtime PM settings in local_pci_probe() */
 	pm_runtime_put_sync(dev);
@@ -495,7 +489,7 @@ static void pci_device_remove(struct device *dev)
 static void pci_device_shutdown(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
-	struct pci_driver *drv = pci_dev->driver;
+	struct pci_driver *drv = to_pci_driver(dev->driver);
 
 	pm_runtime_resume(dev);
@@ -576,7 +570,7 @@ static int pci_pm_reenable_device(struct pci_dev *pci_dev)
 {
 	int retval;
 
-	/* if the device was enabled before suspend, reenable */
+	/* if the device was enabled before suspend, re-enable */
 	retval = pci_reenable_device(pci_dev);
 
 	/*
 	 * if the device was busmaster before the suspend, make it busmaster
@@ -591,7 +585,7 @@ static int pci_pm_reenable_device(struct pci_dev *pci_dev)
 static int pci_legacy_suspend(struct device *dev, pm_message_t state)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
-	struct pci_driver *drv = pci_dev->driver;
+	struct pci_driver *drv = to_pci_driver(dev->driver);
 
 	if (drv && drv->suspend) {
 		pci_power_t prev = pci_dev->current_state;
@@ -632,7 +626,7 @@ static int pci_legacy_suspend_late(struct device *dev, pm_message_t state)
 static int pci_legacy_resume(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
-	struct pci_driver *drv = pci_dev->driver;
+	struct pci_driver *drv = to_pci_driver(dev->driver);
 
 	pci_fixup_device(pci_fixup_resume, pci_dev);
@@ -651,7 +645,7 @@ static void pci_pm_default_suspend(struct pci_dev *pci_dev)
 static bool pci_has_legacy_pm_support(struct pci_dev *pci_dev)
 {
-	struct pci_driver *drv = pci_dev->driver;
+	struct pci_driver *drv = to_pci_driver(pci_dev->dev.driver);
 	bool ret = drv && (drv->suspend || drv->resume);
 
 	/*
@@ -1244,11 +1238,11 @@ static int pci_pm_runtime_suspend(struct device *dev)
 	int error;
 
 	/*
-	 * If pci_dev->driver is not set (unbound), we leave the device in D0,
-	 * but it may go to D3cold when the bridge above it runtime suspends.
-	 * Save its config space in case that happens.
+	 * If the device has no driver, we leave it in D0, but it may go to
+	 * D3cold when the bridge above it runtime suspends.  Save its
+	 * config space in case that happens.
 	 */
-	if (!pci_dev->driver) {
+	if (!to_pci_driver(dev->driver)) {
 		pci_save_state(pci_dev);
 		return 0;
 	}
@@ -1305,7 +1299,7 @@ static int pci_pm_runtime_resume(struct device *dev)
 	 */
 	pci_restore_standard_config(pci_dev);
 
-	if (!pci_dev->driver)
+	if (!to_pci_driver(dev->driver))
 		return 0;
 
 	pci_fixup_device(pci_fixup_resume_early, pci_dev);
@@ -1324,14 +1318,13 @@ static int pci_pm_runtime_resume(struct device *dev)
 static int pci_pm_runtime_idle(struct device *dev)
 {
-	struct pci_dev *pci_dev = to_pci_dev(dev);
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 
 	/*
-	 * If pci_dev->driver is not set (unbound), the device should
-	 * always remain in D0 regardless of the runtime PM status
+	 * If the device has no driver, it should always remain in D0
+	 * regardless of the runtime PM status
 	 */
-	if (!pci_dev->driver)
+	if (!to_pci_driver(dev->driver))
 		return 0;
 
 	if (!pm)
@@ -1438,8 +1431,10 @@ static struct pci_driver pci_compat_driver = {
 */
 struct pci_driver *pci_dev_driver(const struct pci_dev *dev)
 {
-	if (dev->driver)
-		return dev->driver;
+	struct pci_driver *drv = to_pci_driver(dev->dev.driver);
+
+	if (drv)
+		return drv;
 	else {
 		int i;
 		for (i = 0; i <= PCI_ROM_RESOURCE; i++)
@@ -1542,7 +1537,7 @@ static int pci_uevent(struct device *dev, struct kobj_uevent_env *env)
 	return 0;
 }
 
-#if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH)
+#if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH)
 /**
  * pci_uevent_ers - emit a uevent during recovery path of PCI device
  * @pdev: PCI device undergoing error recovery


@@ -26,6 +26,7 @@
 #include <linux/slab.h>
 #include <linux/vgaarb.h>
 #include <linux/pm_runtime.h>
+#include <linux/msi.h>
 #include <linux/of.h>
 #include "pci.h"
 
@@ -49,7 +50,28 @@ pci_config_attr(subsystem_vendor, "0x%04x\n");
 pci_config_attr(subsystem_device, "0x%04x\n");
 pci_config_attr(revision, "0x%02x\n");
 pci_config_attr(class, "0x%06x\n");
-pci_config_attr(irq, "%u\n");
+
+static ssize_t irq_show(struct device *dev,
+			struct device_attribute *attr,
+			char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+#ifdef CONFIG_PCI_MSI
+	/*
+	 * For MSI, show the first MSI IRQ; for all other cases including
+	 * MSI-X, show the legacy INTx IRQ.
+	 */
+	if (pdev->msi_enabled) {
+		struct msi_desc *desc = first_pci_msi_entry(pdev);
+
+		return sysfs_emit(buf, "%u\n", desc->irq);
+	}
+#endif
+
+	return sysfs_emit(buf, "%u\n", pdev->irq);
+}
+static DEVICE_ATTR_RO(irq);
 
 static ssize_t broken_parity_status_show(struct device *dev,
 					 struct device_attribute *attr,
@@ -275,15 +297,15 @@ static ssize_t enable_store(struct device *dev, struct device_attribute *attr,
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
 	unsigned long val;
-	ssize_t result = kstrtoul(buf, 0, &val);
-
-	if (result < 0)
-		return result;
+	ssize_t result = 0;
 
 	/* this can crash the machine when done on the "wrong" device */
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
+	if (kstrtoul(buf, 0, &val) < 0)
+		return -EINVAL;
+
 	device_lock(dev);
 	if (dev->driver)
 		result = -EBUSY;
@@ -314,14 +336,13 @@ static ssize_t numa_node_store(struct device *dev,
 			       size_t count)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
-	int node, ret;
+	int node;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
-	ret = kstrtoint(buf, 0, &node);
-	if (ret)
-		return ret;
+	if (kstrtoint(buf, 0, &node) < 0)
+		return -EINVAL;
 
 	if ((node < 0 && node != NUMA_NO_NODE) || node >= MAX_NUMNODES)
 		return -EINVAL;
@@ -380,12 +401,12 @@ static ssize_t msi_bus_store(struct device *dev, struct device_attribute *attr,
 	struct pci_bus *subordinate = pdev->subordinate;
 	unsigned long val;
 
-	if (kstrtoul(buf, 0, &val) < 0)
-		return -EINVAL;
-
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
+	if (kstrtoul(buf, 0, &val) < 0)
+		return -EINVAL;
+
 	/*
 	 * "no_msi" and "bus_flags" only affect what happens when a driver
 	 * requests MSI or MSI-X. They don't affect any drivers that have
@@ -1341,10 +1362,10 @@ static ssize_t reset_store(struct device *dev, struct device_attribute *attr,
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
 	unsigned long val;
-	ssize_t result = kstrtoul(buf, 0, &val);
+	ssize_t result;
 
-	if (result < 0)
-		return result;
+	if (kstrtoul(buf, 0, &val) < 0)
+		return -EINVAL;
 
 	if (val != 1)
 		return -EINVAL;


@@ -269,7 +269,7 @@ static int pci_dev_str_match_path(struct pci_dev *dev, const char *path,
 				  const char **endptr)
 {
 	int ret;
-	int seg, bus, slot, func;
+	unsigned int seg, bus, slot, func;
 	char *wpath, *p;
 	char end;
 
@@ -1439,6 +1439,24 @@
 	return 0;
 }
 
+void pci_bridge_reconfigure_ltr(struct pci_dev *dev)
+{
+#ifdef CONFIG_PCIEASPM
+	struct pci_dev *bridge;
+	u32 ctl;
+
+	bridge = pci_upstream_bridge(dev);
+	if (bridge && bridge->ltr_path) {
+		pcie_capability_read_dword(bridge, PCI_EXP_DEVCTL2, &ctl);
+		if (!(ctl & PCI_EXP_DEVCTL2_LTR_EN)) {
+			pci_dbg(bridge, "re-enabling LTR\n");
+			pcie_capability_set_word(bridge, PCI_EXP_DEVCTL2,
+						 PCI_EXP_DEVCTL2_LTR_EN);
+		}
+	}
+#endif
+}
+
 static void pci_restore_pcie_state(struct pci_dev *dev)
 {
 	int i = 0;
@@ -1449,6 +1467,13 @@ static void pci_restore_pcie_state(struct pci_dev *dev)
 	if (!save_state)
 		return;
 
+	/*
+	 * Downstream ports reset the LTR enable bit when link goes down.
+	 * Check and re-configure the bit here before restoring device.
+	 * PCIe r5.0, sec 7.5.3.16.
+	 */
+	pci_bridge_reconfigure_ltr(dev);
+
 	cap = (u16 *)&save_state->cap.data[0];
 	pcie_capability_write_word(dev, PCI_EXP_DEVCTL, cap[i++]);
 	pcie_capability_write_word(dev, PCI_EXP_LNKCTL, cap[i++]);
@@ -2053,14 +2078,14 @@ void pcim_pin_device(struct pci_dev *pdev)
 EXPORT_SYMBOL(pcim_pin_device);
 
 /*
- * pcibios_add_device - provide arch specific hooks when adding device dev
+ * pcibios_device_add - provide arch specific hooks when adding device dev
  * @dev: the PCI device being added
  *
  * Permits the platform to provide architecture specific functionality when
  * devices are added. This is the default implementation. Architecture
  * implementations can override this.
  */
-int __weak pcibios_add_device(struct pci_dev *dev)
+int __weak pcibios_device_add(struct pci_dev *dev)
 {
 	return 0;
 }
@@ -2180,6 +2205,7 @@ int pci_set_pcie_reset_state(struct pci_dev *dev, enum pcie_reset_state state)
 }
 EXPORT_SYMBOL_GPL(pci_set_pcie_reset_state);
 
+#ifdef CONFIG_PCIEAER
 void pcie_clear_device_status(struct pci_dev *dev)
 {
 	u16 sta;
@@ -2187,6 +2213,7 @@ void pcie_clear_device_status(struct pci_dev *dev)
 	pcie_capability_read_word(dev, PCI_EXP_DEVSTA, &sta);
 	pcie_capability_write_word(dev, PCI_EXP_DEVSTA, sta);
 }
+#endif
 
 /**
  * pcie_clear_root_pme_status - Clear root port PME interrupt status.
@@ -3697,6 +3724,14 @@ int pci_enable_atomic_ops_to_root(struct pci_dev *dev, u32 cap_mask)
 	struct pci_dev *bridge;
 	u32 cap, ctl2;
 
+	/*
+	 * Per PCIe r5.0, sec 9.3.5.10, the AtomicOp Requester Enable bit
+	 * in Device Control 2 is reserved in VFs and the PF value applies
+	 * to all associated VFs.
+	 */
+	if (dev->is_virtfn)
+		return -EINVAL;
+
 	if (!pci_is_pcie(dev))
 		return -EINVAL;
 
@@ -5068,13 +5103,14 @@ EXPORT_SYMBOL_GPL(pci_dev_unlock);
 
 static void pci_dev_save_and_disable(struct pci_dev *dev)
 {
+	struct pci_driver *drv = to_pci_driver(dev->dev.driver);
 	const struct pci_error_handlers *err_handler =
-			dev->driver ? dev->driver->err_handler : NULL;
+			drv ? drv->err_handler : NULL;
 
 	/*
-	 * dev->driver->err_handler->reset_prepare() is protected against
-	 * races with ->remove() by the device lock, which must be held by
-	 * the caller.
+	 * drv->err_handler->reset_prepare() is protected against races
+	 * with ->remove() by the device lock, which must be held by the
+	 * caller.
 	 */
 	if (err_handler && err_handler->reset_prepare)
 		err_handler->reset_prepare(dev);
@@ -5099,15 +5135,15 @@ static void pci_dev_save_and_disable(struct pci_dev *dev)
 
 static void pci_dev_restore(struct pci_dev *dev)
 {
+	struct pci_driver *drv = to_pci_driver(dev->dev.driver);
 	const struct pci_error_handlers *err_handler =
-			dev->driver ? dev->driver->err_handler : NULL;
+			drv ? drv->err_handler : NULL;
 
 	pci_restore_state(dev);
 
 	/*
-	 * dev->driver->err_handler->reset_done() is protected against
-	 * races with ->remove() by the device lock, which must be held by
-	 * the caller.
+	 * drv->err_handler->reset_done() is protected against races with
+	 * ->remove() by the device lock, which must be held by the caller.
 	 */
 	if (err_handler && err_handler->reset_done)
 		err_handler->reset_done(dev);
@@ -5268,7 +5304,7 @@ const struct attribute_group pci_dev_reset_method_attr_group = {
 */
 int __pci_reset_function_locked(struct pci_dev *dev)
 {
-	int i, m, rc = -ENOTTY;
+	int i, m, rc;
 
 	might_sleep();
 
@@ -6304,11 +6340,12 @@ EXPORT_SYMBOL_GPL(pci_pr3_present);
 * cannot be left as a userspace activity). DMA aliases should therefore
 * be configured via quirks, such as the PCI fixup header quirk.
 */
-void pci_add_dma_alias(struct pci_dev *dev, u8 devfn_from, unsigned nr_devfns)
+void pci_add_dma_alias(struct pci_dev *dev, u8 devfn_from,
+		       unsigned int nr_devfns)
 {
 	int devfn_to;
 
-	nr_devfns = min(nr_devfns, (unsigned) MAX_NR_DEVFNS - devfn_from);
+	nr_devfns = min(nr_devfns, (unsigned int)MAX_NR_DEVFNS - devfn_from);
 	devfn_to = devfn_from + nr_devfns - 1;
 
 	if (!dev->dma_alias_mask)


@@ -86,6 +86,7 @@ void pci_msix_init(struct pci_dev *dev);
 bool pci_bridge_d3_possible(struct pci_dev *dev);
 void pci_bridge_d3_update(struct pci_dev *dev);
 void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev);
+void pci_bridge_reconfigure_ltr(struct pci_dev *dev);
 
 static inline void pci_wakeup_event(struct pci_dev *dev)
 {


@@ -2,12 +2,12 @@
 #
 # Makefile for PCI Express features and port driver
 
-pcieportdrv-y			:= portdrv_core.o portdrv_pci.o err.o rcec.o
+pcieportdrv-y			:= portdrv_core.o portdrv_pci.o rcec.o
 
 obj-$(CONFIG_PCIEPORTBUS)	+= pcieportdrv.o
 
 obj-$(CONFIG_PCIEASPM)		+= aspm.o
-obj-$(CONFIG_PCIEAER)		+= aer.o
+obj-$(CONFIG_PCIEAER)		+= aer.o err.o
 obj-$(CONFIG_PCIEAER_INJECT)	+= aer_inject.o
 obj-$(CONFIG_PCIE_PME)		+= pme.o
 obj-$(CONFIG_PCIE_DPC)		+= dpc.o


@@ -57,7 +57,7 @@ struct aer_stats {
 	 * "as seen by this device". Note that this may mean that if an
 	 * end point is causing problems, the AER counters may increment
 	 * at its link partner (e.g. root port) because the errors will be
-	 * "seen" by the link partner and not the the problematic end point
+	 * "seen" by the link partner and not the problematic end point
 	 * itself (which may report all counters as 0 as it never saw any
 	 * problems).
 	 */


@@ -1219,7 +1219,7 @@ static ssize_t aspm_attr_store_common(struct device *dev,
 	struct pcie_link_state *link = pcie_aspm_get_link(pdev);
 	bool state_enable;
 
-	if (strtobool(buf, &state_enable) < 0)
+	if (kstrtobool(buf, &state_enable) < 0)
 		return -EINVAL;
 
 	down_read(&pci_bus_sem);
@@ -1276,7 +1276,7 @@ static ssize_t clkpm_store(struct device *dev,
 	struct pcie_link_state *link = pcie_aspm_get_link(pdev);
 	bool state_enable;
 
-	if (strtobool(buf, &state_enable) < 0)
+	if (kstrtobool(buf, &state_enable) < 0)
 		return -EINVAL;
 
 	down_read(&pci_bus_sem);


@@ -49,14 +49,16 @@ static int report_error_detected(struct pci_dev *dev,
 				 pci_channel_state_t state,
 				 enum pci_ers_result *result)
 {
+	struct pci_driver *pdrv;
 	pci_ers_result_t vote;
 	const struct pci_error_handlers *err_handler;
 
 	device_lock(&dev->dev);
+	pdrv = to_pci_driver(dev->dev.driver);
 	if (!pci_dev_set_io_state(dev, state) ||
-	    !dev->driver ||
-	    !dev->driver->err_handler ||
-	    !dev->driver->err_handler->error_detected) {
+	    !pdrv ||
+	    !pdrv->err_handler ||
+	    !pdrv->err_handler->error_detected) {
 		/*
 		 * If any device in the subtree does not have an error_detected
 		 * callback, PCI_ERS_RESULT_NO_AER_DRIVER prevents subsequent
@@ -70,7 +72,7 @@ static int report_error_detected(struct pci_dev *dev,
 			vote = PCI_ERS_RESULT_NONE;
 		}
 	} else {
-		err_handler = dev->driver->err_handler;
+		err_handler = pdrv->err_handler;
 		vote = err_handler->error_detected(dev, state);
 	}
 	pci_uevent_ers(dev, vote);
@@ -91,16 +93,18 @@ static int report_normal_detected(struct pci_dev *dev, void *data)
 static int report_mmio_enabled(struct pci_dev *dev, void *data)
 {
+	struct pci_driver *pdrv;
 	pci_ers_result_t vote, *result = data;
 	const struct pci_error_handlers *err_handler;
 
 	device_lock(&dev->dev);
-	if (!dev->driver ||
-	    !dev->driver->err_handler ||
-	    !dev->driver->err_handler->mmio_enabled)
+	pdrv = to_pci_driver(dev->dev.driver);
+	if (!pdrv ||
+	    !pdrv->err_handler ||
+	    !pdrv->err_handler->mmio_enabled)
 		goto out;
 
-	err_handler = dev->driver->err_handler;
+	err_handler = pdrv->err_handler;
 	vote = err_handler->mmio_enabled(dev);
 	*result = merge_result(*result, vote);
 out:
@@ -110,16 +114,18 @@ out:
 static int report_slot_reset(struct pci_dev *dev, void *data)
 {
+	struct pci_driver *pdrv;
 	pci_ers_result_t vote, *result = data;
 	const struct pci_error_handlers *err_handler;
 
 	device_lock(&dev->dev);
-	if (!dev->driver ||
-	    !dev->driver->err_handler ||
-	    !dev->driver->err_handler->slot_reset)
+	pdrv = to_pci_driver(dev->dev.driver);
+	if (!pdrv ||
+	    !pdrv->err_handler ||
+	    !pdrv->err_handler->slot_reset)
 		goto out;
 
-	err_handler = dev->driver->err_handler;
+	err_handler = pdrv->err_handler;
 	vote = err_handler->slot_reset(dev);
 	*result = merge_result(*result, vote);
 out:
@@ -129,16 +135,18 @@ out:
 static int report_resume(struct pci_dev *dev, void *data)
 {
+	struct pci_driver *pdrv;
 	const struct pci_error_handlers *err_handler;
 
 	device_lock(&dev->dev);
+	pdrv = to_pci_driver(dev->dev.driver);
 	if (!pci_dev_set_io_state(dev, pci_channel_io_normal) ||
-	    !dev->driver ||
-	    !dev->driver->err_handler ||
-	    !dev->driver->err_handler->resume)
+	    !pdrv ||
+	    !pdrv->err_handler ||
+	    !pdrv->err_handler->resume)
 		goto out;
 
-	err_handler = dev->driver->err_handler;
+	err_handler = pdrv->err_handler;
 	err_handler->resume(dev);
 out:
 	pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED);


@@ -85,8 +85,7 @@ struct pcie_port_service_driver {
 	int (*runtime_suspend)(struct pcie_device *dev);
 	int (*runtime_resume)(struct pcie_device *dev);
 
-	/* Device driver may resume normal operations */
-	void (*error_resume)(struct pci_dev *dev);
+	int (*slot_reset)(struct pcie_device *dev);
 
 	int port_type;  /* Type of the port this driver can handle */
 	u32 service;    /* Port service this device represents */
@@ -110,6 +109,7 @@ void pcie_port_service_unregister(struct pcie_port_service_driver *new);
 extern struct bus_type pcie_port_bus_type;
 int pcie_port_device_register(struct pci_dev *dev);
+int pcie_port_device_iter(struct device *dev, void *data);
 #ifdef CONFIG_PM
 int pcie_port_device_suspend(struct device *dev);
 int pcie_port_device_resume_noirq(struct device *dev);
@@ -118,8 +118,6 @@ int pcie_port_device_runtime_suspend(struct device *dev);
 int pcie_port_device_runtime_resume(struct device *dev);
 #endif
 void pcie_port_device_remove(struct pci_dev *dev);
-int __must_check pcie_port_bus_register(void);
-void pcie_port_bus_unregister(void);
 
 struct pci_dev;


@ -166,9 +166,6 @@ static int pcie_init_service_irqs(struct pci_dev *dev, int *irqs, int mask)
{ {
int ret, i; int ret, i;
for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++)
irqs[i] = -1;
/* /*
* If we support PME but can't use MSI/MSI-X for it, we have to * If we support PME but can't use MSI/MSI-X for it, we have to
* fall back to INTx or other interrupts, e.g., a system shared * fall back to INTx or other interrupts, e.g., a system shared
@ -317,8 +314,10 @@ static int pcie_device_init(struct pci_dev *pdev, int service, int irq)
*/ */
int pcie_port_device_register(struct pci_dev *dev) int pcie_port_device_register(struct pci_dev *dev)
{ {
int status, capabilities, i, nr_service; int status, capabilities, irq_services, i, nr_service;
int irqs[PCIE_PORT_DEVICE_MAXSERVICES]; int irqs[PCIE_PORT_DEVICE_MAXSERVICES] = {
[0 ... PCIE_PORT_DEVICE_MAXSERVICES-1] = -1
};
/* Enable PCI Express port device */ /* Enable PCI Express port device */
status = pci_enable_device(dev); status = pci_enable_device(dev);
@ -331,18 +330,32 @@ int pcie_port_device_register(struct pci_dev *dev)
return 0; return 0;
pci_set_master(dev); pci_set_master(dev);
/*
* Initialize service irqs. Don't use service devices that irq_services = 0;
* require interrupts if there is no way to generate them. if (IS_ENABLED(CONFIG_PCIE_PME))
* However, some drivers may have a polling mode (e.g. pciehp_poll_mode) irq_services |= PCIE_PORT_SERVICE_PME;
* that can be used in the absence of irqs. Allow them to determine if (IS_ENABLED(CONFIG_PCIEAER))
* if that is to be used. irq_services |= PCIE_PORT_SERVICE_AER;
*/ if (IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
status = pcie_init_service_irqs(dev, irqs, capabilities); irq_services |= PCIE_PORT_SERVICE_HP;
if (status) { if (IS_ENABLED(CONFIG_PCIE_DPC))
capabilities &= PCIE_PORT_SERVICE_HP; irq_services |= PCIE_PORT_SERVICE_DPC;
if (!capabilities) irq_services &= capabilities;
goto error_disable;
if (irq_services) {
/*
* Initialize service IRQs. Don't use service devices that
* require interrupts if there is no way to generate them.
* However, some drivers may have a polling mode (e.g.
* pciehp_poll_mode) that can be used in the absence of IRQs.
* Allow them to determine if that is to be used.
*/
status = pcie_init_service_irqs(dev, irqs, irq_services);
if (status) {
irq_services &= PCIE_PORT_SERVICE_HP;
if (!irq_services)
goto error_disable;
}
} }
/* Allocate child services if any */ /* Allocate child services if any */
@ -367,24 +380,24 @@ error_disable:
return status; return status;
} }
#ifdef CONFIG_PM typedef int (*pcie_callback_t)(struct pcie_device *);
typedef int (*pcie_pm_callback_t)(struct pcie_device *);
static int pm_iter(struct device *dev, void *data) int pcie_port_device_iter(struct device *dev, void *data)
{ {
struct pcie_port_service_driver *service_driver; struct pcie_port_service_driver *service_driver;
size_t offset = *(size_t *)data; size_t offset = *(size_t *)data;
pcie_pm_callback_t cb; pcie_callback_t cb;
if ((dev->bus == &pcie_port_bus_type) && dev->driver) { if ((dev->bus == &pcie_port_bus_type) && dev->driver) {
service_driver = to_service_driver(dev->driver); service_driver = to_service_driver(dev->driver);
cb = *(pcie_pm_callback_t *)((void *)service_driver + offset); cb = *(pcie_callback_t *)((void *)service_driver + offset);
if (cb) if (cb)
return cb(to_pcie_device(dev)); return cb(to_pcie_device(dev));
} }
return 0; return 0;
} }
#ifdef CONFIG_PM
/** /**
* pcie_port_device_suspend - suspend port services associated with a PCIe port * pcie_port_device_suspend - suspend port services associated with a PCIe port
* @dev: PCI Express port to handle * @dev: PCI Express port to handle
@ -392,13 +405,13 @@ static int pm_iter(struct device *dev, void *data)
int pcie_port_device_suspend(struct device *dev) int pcie_port_device_suspend(struct device *dev)
{ {
size_t off = offsetof(struct pcie_port_service_driver, suspend); size_t off = offsetof(struct pcie_port_service_driver, suspend);
return device_for_each_child(dev, &off, pm_iter); return device_for_each_child(dev, &off, pcie_port_device_iter);
} }
int pcie_port_device_resume_noirq(struct device *dev) int pcie_port_device_resume_noirq(struct device *dev)
{ {
size_t off = offsetof(struct pcie_port_service_driver, resume_noirq); size_t off = offsetof(struct pcie_port_service_driver, resume_noirq);
return device_for_each_child(dev, &off, pm_iter); return device_for_each_child(dev, &off, pcie_port_device_iter);
} }
 /**
@@ -408,7 +421,7 @@ int pcie_port_device_resume_noirq(struct device *dev)
 int pcie_port_device_resume(struct device *dev)
 {
 	size_t off = offsetof(struct pcie_port_service_driver, resume);
-	return device_for_each_child(dev, &off, pm_iter);
+	return device_for_each_child(dev, &off, pcie_port_device_iter);
 }
 /**
@@ -418,7 +431,7 @@ int pcie_port_device_resume(struct device *dev)
 int pcie_port_device_runtime_suspend(struct device *dev)
 {
 	size_t off = offsetof(struct pcie_port_service_driver, runtime_suspend);
-	return device_for_each_child(dev, &off, pm_iter);
+	return device_for_each_child(dev, &off, pcie_port_device_iter);
 }
 /**
@@ -428,7 +441,7 @@ int pcie_port_device_runtime_suspend(struct device *dev)
 int pcie_port_device_runtime_resume(struct device *dev)
 {
 	size_t off = offsetof(struct pcie_port_service_driver, runtime_resume);
-	return device_for_each_child(dev, &off, pm_iter);
+	return device_for_each_child(dev, &off, pcie_port_device_iter);
 }
 #endif /* PM */


@@ -160,6 +160,9 @@ static pci_ers_result_t pcie_portdrv_error_detected(struct pci_dev *dev,
 static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev)
 {
+	size_t off = offsetof(struct pcie_port_service_driver, slot_reset);
+
+	device_for_each_child(&dev->dev, &off, pcie_port_device_iter);
+
 	pci_restore_state(dev);
 	pci_save_state(dev);
 	return PCI_ERS_RESULT_RECOVERED;
@@ -170,29 +173,6 @@ static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev)
 	return PCI_ERS_RESULT_RECOVERED;
 }
-static int resume_iter(struct device *device, void *data)
-{
-	struct pcie_device *pcie_device;
-	struct pcie_port_service_driver *driver;
-
-	if (device->bus == &pcie_port_bus_type && device->driver) {
-		driver = to_service_driver(device->driver);
-		if (driver && driver->error_resume) {
-			pcie_device = to_pcie_device(device);
-
-			/* Forward error message to service drivers */
-			driver->error_resume(pcie_device->port);
-		}
-	}
-
-	return 0;
-}
-
-static void pcie_portdrv_err_resume(struct pci_dev *dev)
-{
-	device_for_each_child(&dev->dev, NULL, resume_iter);
-}
-
 /*
  * LINUX Device Driver Model
  */
@@ -210,7 +190,6 @@ static const struct pci_error_handlers pcie_portdrv_err_handler = {
 	.error_detected = pcie_portdrv_error_detected,
 	.slot_reset = pcie_portdrv_slot_reset,
 	.mmio_enabled = pcie_portdrv_mmio_enabled,
-	.resume = pcie_portdrv_err_resume,
 };

 static struct pci_driver pcie_portdriver = {


@@ -883,11 +883,11 @@ static void pci_set_bus_msi_domain(struct pci_bus *bus)
 static int pci_register_host_bridge(struct pci_host_bridge *bridge)
 {
 	struct device *parent = bridge->dev.parent;
-	struct resource_entry *window, *n;
+	struct resource_entry *window, *next, *n;
 	struct pci_bus *bus, *b;
-	resource_size_t offset;
+	resource_size_t offset, next_offset;
 	LIST_HEAD(resources);
-	struct resource *res;
+	struct resource *res, *next_res;
 	char addr[64], *fmt;
 	const char *name;
 	int err;
@@ -970,11 +970,34 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
 	if (nr_node_ids > 1 && pcibus_to_node(bus) == NUMA_NO_NODE)
 		dev_warn(&bus->dev, "Unknown NUMA node; performance will be reduced\n");

-	/* Add initial resources to the bus */
+	/* Coalesce contiguous windows */
 	resource_list_for_each_entry_safe(window, n, &resources) {
-		list_move_tail(&window->node, &bridge->windows);
+		if (list_is_last(&window->node, &resources))
+			break;
+
+		next = list_next_entry(window, node);
 		offset = window->offset;
 		res = window->res;
+		next_offset = next->offset;
+		next_res = next->res;
+
+		if (res->flags != next_res->flags || offset != next_offset)
+			continue;
+
+		if (res->end + 1 == next_res->start) {
+			next_res->start = res->start;
+			res->flags = res->start = res->end = 0;
+		}
+	}
+
+	/* Add initial resources to the bus */
+	resource_list_for_each_entry_safe(window, n, &resources) {
+		offset = window->offset;
+		res = window->res;
+		if (!res->end)
+			continue;
+
+		list_move_tail(&window->node, &bridge->windows);

 		if (res->flags & IORESOURCE_BUS)
 			pci_bus_insert_busn_res(bus, bus->number, res->end);
@@ -2168,9 +2191,21 @@ static void pci_configure_ltr(struct pci_dev *dev)
 	 * Complex and all intermediate Switches indicate support for LTR.
 	 * PCIe r4.0, sec 6.18.
 	 */
-	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
-	    ((bridge = pci_upstream_bridge(dev)) &&
-	      bridge->ltr_path)) {
+	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
+		pcie_capability_set_word(dev, PCI_EXP_DEVCTL2,
+					 PCI_EXP_DEVCTL2_LTR_EN);
+		dev->ltr_path = 1;
+		return;
+	}
+
+	/*
+	 * If we're configuring a hot-added device, LTR was likely
+	 * disabled in the upstream bridge, so re-enable it before enabling
+	 * it in the new device.
+	 */
+	bridge = pci_upstream_bridge(dev);
+	if (bridge && bridge->ltr_path) {
+		pci_bridge_reconfigure_ltr(dev);
 		pcie_capability_set_word(dev, PCI_EXP_DEVCTL2,
 					 PCI_EXP_DEVCTL2_LTR_EN);
 		dev->ltr_path = 1;
@@ -2450,7 +2485,7 @@ static struct irq_domain *pci_dev_msi_domain(struct pci_dev *dev)
 	struct irq_domain *d;

 	/*
-	 * If a domain has been set through the pcibios_add_device()
+	 * If a domain has been set through the pcibios_device_add()
 	 * callback, then this is the one (platform code knows best).
 	 */
 	d = dev_get_msi_domain(&dev->dev);
@@ -2518,7 +2553,7 @@ void pci_device_add(struct pci_dev *dev, struct pci_bus *bus)
 	list_add_tail(&dev->bus_list, &bus->devices);
 	up_write(&pci_bus_sem);

-	ret = pcibios_add_device(dev);
+	ret = pcibios_device_add(dev);
 	WARN_ON(ret < 0);

 	/* Set up MSI IRQ domain */
@@ -2550,11 +2585,12 @@ struct pci_dev *pci_scan_single_device(struct pci_bus *bus, int devfn)
 }
 EXPORT_SYMBOL(pci_scan_single_device);

-static unsigned next_fn(struct pci_bus *bus, struct pci_dev *dev, unsigned fn)
+static unsigned int next_fn(struct pci_bus *bus, struct pci_dev *dev,
+			    unsigned int fn)
 {
 	int pos;
 	u16 cap = 0;
-	unsigned next_fn;
+	unsigned int next_fn;

 	if (pci_ari_enabled(bus)) {
 		if (!dev)
@@ -2613,7 +2649,7 @@ static int only_one_child(struct pci_bus *bus)
  */
 int pci_scan_slot(struct pci_bus *bus, int devfn)
 {
-	unsigned fn, nr = 0;
+	unsigned int fn, nr = 0;
 	struct pci_dev *dev;

 	if (only_one_child(bus) && (devfn > 0))
