kernel-ark/kernel/irq/migration.c
Mike Travis 7f7ace0cda cpumask: update irq_desc to use cpumask_var_t
Impact: reduce memory usage, use new cpumask API.

Replace the affinity and pending_mask cpumask_t fields with
cpumask_var_t's.  This adds to the significant size reduction already
achieved by the SPARSE_IRQ changes.
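
For context on why this shrinks irq_desc: cpumask_var_t changes its
representation depending on the config.  A simplified sketch of the idea
(the real definitions live in <linux/cpumask.h>):

#ifdef CONFIG_CPUMASK_OFFSTACK
/* Off-stack: the field is only a pointer; storage for nr_cpu_ids bits
 * is allocated separately, e.g. via alloc_cpumask_var(). */
typedef struct cpumask *cpumask_var_t;
#else
/* On-stack: the field embeds a full NR_CPUS-bit mask, as before. */
typedef struct cpumask cpumask_var_t[1];
#endif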

The added functions (init_alloc_desc_masks & init_copy_desc_masks) are
in the include file so they can be inlined (and optimized out for the
!CONFIG_CPUMASK_OFFSTACK case).  [The naming was chosen to be consistent
with the other init*irq functions, as was the backwards argument order
of "from, to" instead of the more common "to, from" standard.]

Also includes a slight change to the declaration of struct irq_desc:
pending_mask is now embedded within #ifdef CONFIG_SMP, for consistency
with its other references, plus some small changes to the Xen code.
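
Abridged, the relevant part of the struct then reads roughly as follows
(a sketch; the exact guards in the tree may differ, as pending_mask is
also tied to CONFIG_GENERIC_PENDING_IRQ):

struct irq_desc {
	/* ... */
#ifdef CONFIG_SMP
	cpumask_var_t		affinity;
#ifdef CONFIG_GENERIC_PENDING_IRQ
	cpumask_var_t		pending_mask;
#endif
#endif
	/* ... */
};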

Tested: sparse/non-sparse/cpumask_offstack/non-cpumask_offstack/nonuma/nosmp on x86_64

Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: virtualization@lists.osdl.org
Cc: xen-devel@lists.xensource.com
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
2009-01-11 19:12:46 +01:00

#include <linux/irq.h>

void move_masked_irq(int irq)
{
	struct irq_desc *desc = irq_to_desc(irq);

	if (likely(!(desc->status & IRQ_MOVE_PENDING)))
		return;

	/*
	 * Paranoia: cpu-local interrupts shouldn't be calling in here anyway.
	 */
	if (CHECK_IRQ_PER_CPU(desc->status)) {
		WARN_ON(1);
		return;
	}

	desc->status &= ~IRQ_MOVE_PENDING;

	if (unlikely(cpumask_empty(desc->pending_mask)))
		return;

	if (!desc->chip->set_affinity)
		return;

	assert_spin_locked(&desc->lock);
	/*
	 * If there was a valid mask to work with, please
	 * do the disable, re-program, enable sequence.
	 * This is *not* particularly important for level-triggered
	 * interrupts, but in an edge-triggered case we might be
	 * setting the RTE while an active trigger is coming in.
	 * This could cause some ioapics to malfunction.
	 * Being paranoid, I guess!
	 *
	 * For correct operation this depends on the caller
	 * masking the irqs.
	 */
	if (likely(cpumask_any_and(desc->pending_mask, cpu_online_mask)
		   < nr_cpu_ids)) {
		cpumask_and(desc->affinity,
			    desc->pending_mask, cpu_online_mask);
		desc->chip->set_affinity(irq, desc->affinity);
	}
	cpumask_clear(desc->pending_mask);
}

void move_native_irq(int irq)
{
	struct irq_desc *desc = irq_to_desc(irq);

	if (likely(!(desc->status & IRQ_MOVE_PENDING)))
		return;

	if (unlikely(desc->status & IRQ_DISABLED))
		return;

	desc->chip->mask(irq);
	move_masked_irq(irq);
	desc->chip->unmask(irq);
}