[PATCH] lockdep: core, fix rq-lock handling on __ARCH_WANT_UNLOCKED_CTXSW
On platforms that have __ARCH_WANT_UNLOCKED_CTXSW set and want to implement lock validator support there's a bug in rq->lock handling: in this case we don't 'carry over' the runqueue lock into another task - but still we did a spinlock_release() of it.

Fix this by making the spinlock_release() in context_switch() dependent on !__ARCH_WANT_UNLOCKED_CTXSW.

(Reported by Ralf Baechle on MIPS, which has __ARCH_WANT_UNLOCKED_CTXSW. This fixes a lockdep-internal BUG message on such platforms.)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 2e8f7a3128
commit 3a5f5e488c
1 changed file with 8 additions and 0 deletions
@@ -1788,7 +1788,15 @@ context_switch(struct rq *rq, struct task_struct *prev,
 		WARN_ON(rq->prev_mm);
 		rq->prev_mm = oldmm;
 	}
+	/*
+	 * Since the runqueue lock will be released by the next
+	 * task (which is an invalid locking op but in the case
+	 * of the scheduler it's an obvious special-case), so we
+	 * do an early lockdep release here:
+	 */
+#ifndef __ARCH_WANT_UNLOCKED_CTXSW
 	spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
+#endif
 
 	/* Here we just switch the register state and the stack. */
 	switch_to(prev, next, prev);
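
For readers less familiar with the lockdep side of this, below is a minimal, self-contained C sketch of the compile-time pattern the patch relies on. It is not kernel code: struct demo_rq, demo_spin_release() and the DEMO_ARCH_WANT_UNLOCKED_CTXSW macro are simplified stand-ins, invented here for illustration, for rq->lock with its dep_map, spin_release() and __ARCH_WANT_UNLOCKED_CTXSW.

#include <stdio.h>

/*
 * Uncomment to model an architecture such as MIPS that defines
 * __ARCH_WANT_UNLOCKED_CTXSW: there the runqueue lock is dropped
 * before the switch, so there is no lock to hand over to the next task.
 */
/* #define DEMO_ARCH_WANT_UNLOCKED_CTXSW */

struct demo_rq {
	int lock_held;	/* stand-in for rq->lock and its lockdep dep_map */
};

/* Stand-in for spin_release(): tell "lockdep" we no longer own the lock. */
static void demo_spin_release(struct demo_rq *rq)
{
	if (!rq->lock_held) {
		/* The situation lockdep flagged on MIPS before the fix:
		 * releasing a lock this context does not actually hold. */
		printf("lockdep BUG: releasing rq->lock we do not hold\n");
		return;
	}
	rq->lock_held = 0;
	printf("lockdep: rq->lock will be unlocked by the next task\n");
}

static void demo_context_switch(struct demo_rq *rq)
{
#ifdef DEMO_ARCH_WANT_UNLOCKED_CTXSW
	/*
	 * Unlocked-ctxsw case: the scheduler already dropped rq->lock
	 * before reaching this point, so skip the early lockdep release;
	 * this is what the "#ifndef __ARCH_WANT_UNLOCKED_CTXSW" guard in
	 * the patch accomplishes.
	 */
	rq->lock_held = 0;
#else
	/*
	 * Normal case: the next task will unlock rq->lock for us, so do an
	 * early lockdep release here, mirroring the patched context_switch().
	 */
	demo_spin_release(rq);
#endif
	printf("switch_to(prev, next)\n");
}

int main(void)
{
	struct demo_rq rq = { .lock_held = 1 };

	demo_context_switch(&rq);
	return 0;
}

With the macro left undefined the program prints the early-release message; defining it models the unlocked-ctxsw behaviour, where the release is correctly skipped instead of tripping the "BUG" path.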