freebsd-dev/libexec/rtld-elf/alpha/lockdflt.c


/*-
* Copyright 1999, 2000 John D. Polstra.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* $FreeBSD$
*/
/*
* Thread locking implementation for the dynamic linker.
*
* We use the "simple, non-scalable reader-preference lock" from:
*
* J. M. Mellor-Crummey and M. L. Scott. "Scalable Reader-Writer
* Synchronization for Shared-Memory Multiprocessors." 3rd ACM Symp. on
* Principles and Practice of Parallel Programming, April 1991.
*
* In this algorithm the lock is a single word. Its low-order bit is
* set when a writer holds the lock. The remaining high-order bits
* contain a count of readers desiring the lock. The algorithm requires
* atomic "compare_and_store" and "add" operations, which we implement
* using assembly language sequences in "rtld_start.S".
*
* These are spinlocks. When spinning we call nanosleep() for 1
* microsecond each time around the loop. This will most likely yield
* the CPU to other threads (including, we hope, the lockholder) allowing
* them to make some progress.
*/
#include <signal.h>
#include <stdlib.h>
#include <time.h>
#include "debug.h"
#include "rtld.h"
/*
* This value of CACHE_LINE_SIZE is conservative. The actual size
* is 32 on the 21064, 21064A, 21066, 21066A, and 21164. It is 64
* on the 21264. Compaq recommends sequestering each lock in its own
* 128-byte block to allow for future implementations with larger
* cache lines.
*/
#define CACHE_LINE_SIZE 128
#define WAFLAG 0x1 /* A writer holds the lock */
#define RC_INCR 0x2 /* Adjusts count of readers desiring lock */

typedef struct Struct_Lock {
    volatile int lock;
    void *base;
} Lock;

static const struct timespec usec = { 0, 1000 }; /* 1 usec. */
static sigset_t fullsigmask, oldsigmask;
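
/*
 * Allocate a new lock object, placed in its own cache line, and return
 * it in the unlocked state.  The "base" member records the address that
 * must be passed to free() when the lock is destroyed.
 */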
static void *
lock_create(void *context)
{
    void *base;
    char *p;
    uintptr_t r;
    Lock *l;

    /*
     * Arrange for the lock to occupy its own cache line.  First, we
     * optimistically allocate just a cache line, hoping that malloc
     * will give us a well-aligned block of memory.  If that doesn't
     * work, we allocate a larger block and take a well-aligned cache
     * line from it.
     */
    base = xmalloc(CACHE_LINE_SIZE);
    p = (char *)base;
    if ((uintptr_t)p % CACHE_LINE_SIZE != 0) {
        free(base);
        base = xmalloc(2 * CACHE_LINE_SIZE);
        p = (char *)base;
        if ((r = (uintptr_t)p % CACHE_LINE_SIZE) != 0)
            p += CACHE_LINE_SIZE - r;
    }
    l = (Lock *)p;
    l->base = base;
    l->lock = 0;
    return l;
}
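
/*
 * Free the memory occupied by a lock previously created by lock_create().
 */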
static void
lock_destroy(void *lock)
{
    Lock *l = (Lock *)lock;

    free(l->base);
}
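
/*
 * Acquire a lock for reading.  Register our interest by adding RC_INCR
 * to the reader count, then spin, sleeping 1 microsecond per iteration,
 * until no writer holds the lock.
 */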
static void
rlock_acquire(void *lock)
{
    Lock *l = (Lock *)lock;

    atomic_add_int(&l->lock, RC_INCR);
    while (l->lock & WAFLAG)
        nanosleep(&usec, NULL);
}
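
/*
 * Acquire a lock for writing.  The lock is granted only when the lock
 * word is 0, i.e., when there are no readers and no writer.  Most
 * signals are blocked while the write lock is held (the mask is built
 * in lockdflt_init) so that a signal handler cannot re-enter the
 * dynamic linker while it is locked exclusively.
 */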
static void
wlock_acquire(void *lock)
{
    Lock *l = (Lock *)lock;
    sigset_t tmp_oldsigmask;

    for ( ; ; ) {
        sigprocmask(SIG_BLOCK, &fullsigmask, &tmp_oldsigmask);
        if (cmp0_and_store_int(&l->lock, WAFLAG) == 0)
            break;
        sigprocmask(SIG_SETMASK, &tmp_oldsigmask, NULL);
        nanosleep(&usec, NULL);
    }
    oldsigmask = tmp_oldsigmask;
}
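
/*
 * Release a read lock by subtracting our contribution to the reader count.
 */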
static void
rlock_release(void *lock)
{
    Lock *l = (Lock *)lock;

    atomic_add_int(&l->lock, -RC_INCR);
}
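
/*
 * Release the write lock and restore the signal mask that was in effect
 * when the lock was acquired.
 */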
static void
wlock_release(void *lock)
{
    Lock *l = (Lock *)lock;

    atomic_add_int(&l->lock, -WAFLAG);
    sigprocmask(SIG_SETMASK, &oldsigmask, NULL);
}
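
/*
 * Fill in the LockInfo structure with the default locking methods and
 * construct the signal mask that is blocked while a write lock is held.
 */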
void
lockdflt_init(LockInfo *li)
{
    li->context = NULL;
    li->lock_create = lock_create;
    li->rlock_acquire = rlock_acquire;
    li->wlock_acquire = wlock_acquire;
    li->rlock_release = rlock_release;
    li->wlock_release = wlock_release;
    li->lock_destroy = lock_destroy;
    li->context_destroy = NULL;

    /*
     * Construct a mask to block all signals except traps which might
     * conceivably be generated within the dynamic linker itself.
     */
    sigfillset(&fullsigmask);
    sigdelset(&fullsigmask, SIGILL);
    sigdelset(&fullsigmask, SIGTRAP);
    sigdelset(&fullsigmask, SIGABRT);
    sigdelset(&fullsigmask, SIGEMT);
    sigdelset(&fullsigmask, SIGFPE);
    sigdelset(&fullsigmask, SIGBUS);
    sigdelset(&fullsigmask, SIGSEGV);
    sigdelset(&fullsigmask, SIGSYS);
}