From 1c9fc3d11b84fbd0c4f4aa7855702c2a1f098ebb Mon Sep 17 00:00:00 2001
From: Chris Wright <chrisw@sous-sol.org>
Date: Sat, 28 May 2011 13:15:04 -0500
Subject: intel-iommu: Don't cache iova above 32bit

From: Chris Wright <chrisw@sous-sol.org>

commit 1c9fc3d11b84fbd0c4f4aa7855702c2a1f098ebb upstream.

Mike Travis and Mike Habeck reported an issue where iova allocation
would return a range that was larger than a device's dma mask.

https://lkml.org/lkml/2011/3/29/423

The dmar initialization code will reserve all PCI MMIO regions and copy
those reservations into a domain specific iova tree. It is possible for
one of those regions to be above the dma mask of a device. It is typical
to allocate iovas with a 32bit mask (despite device's dma mask possibly
being larger) and cache the result until it exhausts the lower 32bit
address space. Freeing the iova range that is >= the last iova in the
lower 32bit range when there is still an iova above the 32bit range will
corrupt the cached iova by pointing it to a region that is above 32bit.
If that region is also larger than the device's dma mask, a subsequent
allocation will return an unusable iova and cause dma failure.

Simply don't cache an iova that is above the 32bit caching boundary.

Reported-by: Mike Travis <travis@sgi.com>
Reported-by: Mike Habeck <habeck@sgi.com>
Acked-by: Mike Travis <travis@sgi.com>
Tested-by: Mike Habeck <habeck@sgi.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

---
 drivers/pci/iova.c |   12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

--- a/drivers/pci/iova.c
+++ b/drivers/pci/iova.c
@@ -63,8 +63,16 @@ __cached_rbnode_delete_update(struct iov
 	curr = iovad->cached32_node;
 	cached_iova = container_of(curr, struct iova, node);
 
-	if (free->pfn_lo >= cached_iova->pfn_lo)
-		iovad->cached32_node = rb_next(&free->node);
+	if (free->pfn_lo >= cached_iova->pfn_lo) {
+		struct rb_node *node = rb_next(&free->node);
+		struct iova *iova = container_of(node, struct iova, node);
+
+		/* only cache if it's below 32bit pfn */
+		if (node && iova->pfn_lo < iovad->dma_32bit_pfn)
+			iovad->cached32_node = node;
+		else
+			iovad->cached32_node = NULL;
+	}
 }
 
 /* Computes the padding size required, to make the
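
The hunk above is easier to follow with the caching rule pulled out of kernel context. Below is a minimal user-space sketch of the idea from the commit message; it is not the kernel's rbtree-based allocator, and the names (struct pool, insert_range, free_range, BOUNDARY_32BIT) are invented for illustration. A sorted singly linked list stands in for the rbtree, and free_range() applies the same rule as the patched code: after freeing a range at or above the cached one, the successor is only cached if it still lies below the 32bit boundary.

/*
 * Toy model of the cached32_node handling described above.
 * All names here are illustrative, not kernel identifiers.
 */
#include <stdio.h>
#include <stdlib.h>

#define BOUNDARY_32BIT 0x100000ULL	/* stand-in for iovad->dma_32bit_pfn */

struct range {
	unsigned long long lo, hi;
	struct range *next;		/* list kept sorted, ascending by lo */
};

struct pool {
	struct range *head;
	struct range *cached32;		/* most recent allocation below the boundary */
};

/* Insert a range in sorted order; remember it if it is below the boundary. */
static struct range *insert_range(struct pool *p, unsigned long long lo,
				  unsigned long long hi)
{
	struct range **pp = &p->head;
	struct range *r = malloc(sizeof(*r));

	if (!r)
		exit(1);
	r->lo = lo;
	r->hi = hi;
	while (*pp && (*pp)->lo < lo)
		pp = &(*pp)->next;
	r->next = *pp;
	*pp = r;
	if (lo < BOUNDARY_32BIT)
		p->cached32 = r;	/* loose analogue of the insert-side cache update */
	return r;
}

/* Remove @victim and update the cache the way the patched hunk does. */
static void free_range(struct pool *p, struct range *victim)
{
	struct range **pp = &p->head;

	if (p->cached32 && victim->lo >= p->cached32->lo) {
		struct range *next = victim->next;

		/* only cache the successor if it is still below the boundary */
		if (next && next->lo < BOUNDARY_32BIT)
			p->cached32 = next;
		else
			p->cached32 = NULL;	/* pre-patch code kept 'next' unconditionally */
	}

	while (*pp && *pp != victim)
		pp = &(*pp)->next;
	if (*pp) {
		*pp = victim->next;
		free(victim);
	}
}

int main(void)
{
	struct pool p = { 0 };
	struct range *low;

	/* one allocation below the boundary, one reserved region above it */
	low = insert_range(&p, 0x1000, 0x1fff);
	insert_range(&p, 0x200000, 0x200fff);

	free_range(&p, low);
	/* NULL here: nothing below the boundary is left to cache */
	printf("cached32 after free: %p\n", (void *)p.cached32);
	return 0;
}

Freeing 'low' walks the cache forward; with the old unconditional update the cache would land on the 0x200000 range, which a 32bit-limited allocation could never use, and that is exactly the corruption the report describes.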