branch targets that are too far apart for the BRADDR relocation.
This is caused by the branch prediction optimization in the atomic
inlines here, because they jump across sections.
The workaround is to suppress jumping to a different section when
compiling LINT. To generate correct code in that case, the section
directives are replaced by a branch and a label to deal with the
fall-through case. Reasonably good C compilers will optimize this
away anyway, so the end result isn't really that bad.
This provides a 30% reduction in system time and a 6% reduction in wallclock time
for a make buildworld on my xp1000 (one 21264).
FWIW, I've been running this for nearly 2 months without problems.
Portions submitted by: ticso, jhb
Tested by: jhb (ds20 dual 21264)
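
A minimal sketch of the two code sequences (the COMPILING_LINT guard and
the .text3 section name are assumptions for illustration, not the exact
committed code):

    #include <sys/types.h>

    static __inline void
    atomic_set_32(volatile u_int32_t *p, u_int32_t v)
    {
            u_int32_t temp;

    #ifdef COMPILING_LINT
            /* Stay within one section so BRADDR targets remain in
             * range; the extra 'br 3f' handles the fall-through
             * (success) case. */
            __asm __volatile (
                    "1:\tldl_l %0, %3\n\t"  /* load-locked old value */
                    "bis %0, %2, %0\n\t"    /* set the requested bits */
                    "stl_c %0, %1\n\t"      /* store-conditional */
                    "beq %0, 2f\n\t"        /* store failed: retry */
                    "br 3f\n"               /* success: skip retry */
                    "2:\tbr 1b\n"
                    "3:\n"
                    : "=&r" (temp), "=m" (*p)
                    : "r" (v), "m" (*p)
                    : "memory");
    #else
            /* Normal build: the retry branch lives out of line in a
             * different text section, so the predicted fall-through
             * path is the success path. */
            __asm __volatile (
                    "1:\tldl_l %0, %3\n\t"
                    "bis %0, %2, %0\n\t"
                    "stl_c %0, %1\n\t"
                    "beq %0, 2f\n"
                    ".section .text3,\"ax\"\n"
                    "2:\tbr 1b\n"
                    ".previous\n"
                    : "=&r" (temp), "=m" (*p)
                    : "r" (v), "m" (*p)
                    : "memory");
    #endif
    }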
regardless of whether they are signed or unsigned, since it is easier to work
with sign-extended values. Thus, remove the disabled zapnot to
zero-extend the sign-extended value we read from *p in atomic_cmpset_32()
since the cmpval we are comparing against should already be
sign-extended.
- To ensure that the compiler knows to sign-extend the upper 32 bits of
cmpval rather than leaving garbage in there, cast it appropriately in
the constraints section.
Help from: Richard Henderson <rth@redhat.com>
value we load from memory. gcc3.1 passes in the u_int32_t old value to
compare against as a _sign_-extended 64-bit value for some reason (bug?).
This is a temporary workaround so kernels work again on alpha.
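
A sketch of the resulting compare-and-set (details are illustrative):
ldl_l yields a sign-extended longword, and the (long)(int32_t) cast in
the input constraint forces cmpval to be sign-extended the same way, so
the 64-bit cmpeq compares like with like.

    #include <sys/types.h>

    static __inline u_int32_t
    atomic_cmpset_32(volatile u_int32_t *p, u_int32_t cmpval, u_int32_t newval)
    {
            u_int32_t ret;

            __asm __volatile (
                    "1:\tldl_l %0, %4\n\t"      /* load old, sign-extended */
                    "cmpeq %0, %2, %0\n\t"      /* equal to cmpval? */
                    "beq %0, 2f\n\t"            /* no: fail, return 0 */
                    "mov %3, %0\n\t"            /* value to store */
                    "stl_c %0, %1\n\t"          /* attempt the store */
                    "beq %0, 3f\n\t"            /* lost reservation: retry */
                    "2:\n"
                    ".section .text3,\"ax\"\n"  /* out-of-line retry path */
                    "3:\tbr 1b\n"
                    ".previous\n"
                    : "=&r" (ret), "=m" (*p)
                    : "r" ((long)(int32_t)cmpval), "r" (newval), "m" (*p)
                    : "memory");

            return (ret);
    }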
"inside" of locked regions. That is, an acquire atomic operation will
always enforce a memory barrier after the atomic operation and a release
operation will always enforce a memory barrier before the atomic
operation.
- Explicitly use 'mb' instead of 'wmb' in release atomic operations. The
'wmb' memory barrier is not strong enough to guarantee coherence with
other processors. This is effectively a nop since alpha_wmb() actually
performs a 'mb' and not a 'wmb', but I wanted the code to be more
correct since at some point in the future alpha_wmb()'s implementation
may switch to being a real 'wmb'.
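
Sketched, with alpha_mb() as the 'mb' wrapper and atomic_set_32() assumed
from elsewhere in the file: acquire runs the operation and then the
barrier, release runs the barrier and then the operation.

    #include <sys/types.h>

    #define alpha_mb()      __asm __volatile("mb" : : : "memory")

    /* atomic_set_32() as sketched earlier in this log. */

    static __inline void
    atomic_set_acq_32(volatile u_int32_t *p, u_int32_t v)
    {
            atomic_set_32(p, v);
            alpha_mb();     /* barrier after: later accesses stay inside */
    }

    static __inline void
    atomic_set_rel_32(volatile u_int32_t *p, u_int32_t v)
    {
            alpha_mb();     /* full 'mb', not the weaker 'wmb' */
            atomic_set_32(p, v);
    }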
in most of the atomic operations. Now for these operations, you can
use the normal atomic operation, you can use the operation with a read
barrier, or you can use the operation with a write barrier. The function
names follow the same semantics used in the ia64 instruction set. An
atomic operation with a read barrier has the extra suffix 'acq', since it
has "acquire" semantics. An atomic operation with a write barrier
has the extra suffix 'rel'. These suffixes are inserted between the
name of the operation to perform and the typename. For example, the
atomic_add_int() function now has 3 variants:
- atomic_add_int() - this is the same as the previous function
- atomic_add_acq_int() - this function combines the add operation with a
read memory barrier
- atomic_add_rel_int() - this function combines the add operation with a
write memory barrier
- Add 'ptr' to the list of types that we can perform atomic operations
on. This allows one to do atomic operations on uintptr_t's. This is
useful in the mutex code, for example, because the actual mutex lock is
a pointer.
- Add two new operations for doing loads and stores with memory barriers.
The new load operations use a read barrier before the load, and the
new store operations use a write barrier after the store. For example,
atomic_load_acq_int() will atomically load an integer as well as
enforcing a read barrier.
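
A usage sketch tying the new names together (the counter and lock word
here are illustrative):

    #include <sys/types.h>
    #include <machine/atomic.h>

    volatile u_int counter;
    volatile uintptr_t lockword;

    static void
    example(void)
    {
            u_int v;

            atomic_add_int(&counter, 1);        /* plain add, no barrier */
            atomic_add_acq_int(&counter, 1);    /* add + read barrier */
            atomic_add_rel_int(&counter, 1);    /* add + write barrier */

            /* 'ptr' operations act on uintptr_t, e.g. a mutex lock word. */
            atomic_set_ptr(&lockword, 1);

            /* Load with a read barrier; store with a write barrier. */
            v = atomic_load_acq_int(&counter);
            atomic_store_rel_int(&counter, v);
    }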
fixes a serious problem with the previous version, where an input could
have been placed in the same register as an output, which would stop
the inline from working properly (see the sketch below).
* Redo atomic_{set,clear,add,subtract}_{32,64} as inlines since the code
sequence is shorter than the call sequence to the code in atomic.s.
I will remove the functions from atomic.s after a grace period to allow
people to rebuild kernel modules.
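
A sketch of the fixed constraints, assuming the '&' earlyclobber
modifier is the mechanism involved (an illustration, not the committed
code): ldq_l writes the output while the inputs are still live, so
without '&' gcc could assign 'v' to the same register as 'temp' and the
load would clobber it.

    #include <sys/types.h>

    static __inline void
    atomic_add_64(volatile u_int64_t *p, u_int64_t v)
    {
            u_int64_t temp;

            __asm __volatile (
                    "1:\tldq_l %0, %3\n\t"  /* writes %0 while %2 is live */
                    "addq %0, %2, %0\n\t"
                    "stq_c %0, %1\n\t"
                    "beq %0, 1b\n"
                    : "=&r" (temp), "=m" (*p)   /* '&' keeps inputs out of %0 */
                    : "r" (v), "m" (*p)
                    : "memory");
    }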
Add some overflow checks to read/write (from bde).
Change all modifications to vm_page::flags, vm_page::busy, vm_object::flags
and vm_object::paging_in_progress to use operations which are not
interruptible.
Reviewed by: Bruce Evans <bde@zeta.org.au>
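
A sketch of the conversion (the struct, field width, flag value, and use
of atomic_set_short() are assumptions for illustration):

    #include <sys/types.h>
    #include <machine/atomic.h>

    struct vm_page_sketch {
            volatile u_short flags;
    };
    #define PG_BUSY_SKETCH  0x0010

    static void
    mark_busy(struct vm_page_sketch *m)
    {
            /*
             * m->flags |= PG_BUSY is a separate load, modify, and store;
             * an interrupt between the load and the store can lose an
             * update.  The atomic operation performs the read-modify-
             * write as one uninterruptible unit.
             */
            atomic_set_short(&m->flags, PG_BUSY_SKETCH);
    }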