Optimize two cases in the MP locking code. First, it is not necessary
to use a locked cmpxchg when unlocking a lock that we already hold,
since nobody else can touch the lock while we hold it. Second, it is
not necessary to use a locked cmpxchg when locking a lock that we
already hold, for the same reason. These changes allow MP locks to be
used recursively without impacting performance.
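
To make the optimization concrete, here is a minimal C sketch of the
idea (not the actual mplock.s assembly; the names and lock layout are
illustrative) in which only the contended acquire needs the locked
compare-and-swap, with C11 atomics standing in for the lock-prefixed
cmpxchg:

    #include <stdatomic.h>

    #define MPLOCK_FREE 0xffffffffu

    struct mp_lock {
        _Atomic unsigned int owner; /* CPU id of holder, or MPLOCK_FREE */
        unsigned int depth;         /* recursion count; holder-only */
    };

    static void
    mp_lock_acquire(struct mp_lock *lk, unsigned int cpuid)
    {
        /*
         * Recursive case: we already hold the lock, so no other CPU
         * can change it out from under us; a plain increment suffices
         * and the locked cmpxchg is skipped entirely.
         */
        if (atomic_load_explicit(&lk->owner, memory_order_relaxed) == cpuid) {
            lk->depth++;
            return;
        }
        /* Contended case: spin on a locked compare-and-swap. */
        unsigned int free_val = MPLOCK_FREE;
        while (!atomic_compare_exchange_weak_explicit(&lk->owner, &free_val,
            cpuid, memory_order_acquire, memory_order_relaxed))
            free_val = MPLOCK_FREE; /* a failed CAS overwrites free_val */
        lk->depth = 1;
    }

    static void
    mp_lock_release(struct mp_lock *lk)
    {
        /*
         * We hold the lock, so nobody else can touch it: drop the
         * count without a locked op; only the final release publishes
         * the lock with an atomic store.
         */
        if (--lk->depth == 0)
            atomic_store_explicit(&lk->owner, MPLOCK_FREE,
                memory_order_release);
    }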

Modify two procedures that are called only by assembly and are already
NOPROF entries to pass a critical argument in %edx instead of on the
stack, removing a significant amount of code from the critical path
as a consequence.
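
The calling-convention change itself is visible in the diff hunk
below. As a rough C-level analogue (the prototypes are assumptions;
the real entry points are hand-written assembly, and GCC's regparm
attribute uses %eax for the first argument where the commit hand-codes
%edx, though the call-site effect is the same), passing the argument
in a register lets each caller drop both the push and the stack
cleanup:

    /* Stack convention: each call site costs pushl + call + addl. */
    void MPrellock(unsigned int *mtx);

    /* Register convention: a single movl loads the argument. */
    void MPrellock_edx(unsigned int *mtx) __attribute__((regparm(1)));

    /*
     * Call-site code, before and after:
     *
     *     pushl $_mp_lock          movl  $_mp_lock,%edx
     *     call  _MPrellock         call  _MPrellock_edx
     *     add   $4, %esp
     */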

Reviewed by:	Alfred Perlstein <bright@wintelcom.net>, Peter Wemm <peter@netplex.com.au>
Matthew Dillon 1999-11-19 22:47:19 +00:00
parent 2996376ab5
commit f94f8efc61
Notes: svn2git 2020-12-20 02:59:44 +00:00
svn path=/head/; revision=53435


@@ -55,9 +55,8 @@
 	add	$4, %esp
 
 #define ISR_RELLOCK \
-	pushl	$_mp_lock ;			/* GIANT_LOCK */	\
-	call	_MPrellock ;			\
-	add	$4, %esp
+	movl	$_mp_lock,%edx ;		/* GIANT_LOCK */	\
+	call	_MPrellock_edx
 
 /*
  * Protects the IO APIC and apic_imen as a critical region.