consequently the exponent is only 11 bits. Testing whether the
exponent equals 32767 in that case only leads to compiler warnings
and thus build breakage.
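For illustration, a minimal standalone rendering of the failure mode,
using an assumed bit-field layout (not the real ieee.h definition):

    #include <stdio.h>

    /*
     * Illustrative layout only: an 11-bit exponent field holds at most
     * 2047, so comparing it against 32767 is always false, and compilers
     * typically warn ("comparison is always false due to limited range
     * of data type"), which -Werror turns into a build failure.
     */
    struct dbl_bits {
        unsigned long long mant : 52;
        unsigned int exp : 11;
        unsigned int sign : 1;
    };

    int
    main(void)
    {
        struct dbl_bits b = { 0, 2047, 0 };

        if (b.exp == 32767)     /* always false: exp fits in 11 bits */
            printf("inf or nan\n");
        return (0);
    }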
Add copyiniov(), which copies a struct iovec array in from userland into
a malloc'ed struct iovec. Caller frees.
Change uiofromiov() to malloc the uio (caller frees) and rename it
copyinuio(), which is more appropriate.
Add cloneuio(), which returns a malloc'ed copy of the uio. Caller frees.
Use them throughout.
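A sketch of the copyiniov() pattern, using kernel malloc(9) and
copyin(9); details of the committed version may differ:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>
    #include <sys/uio.h>

    int
    copyiniov(struct iovec *iovp, u_int iovcnt, struct iovec **iov, int error)
    {
        u_int iovlen;

        *iov = NULL;
        /* 'error' is the errno to return for an oversized array,
         * since the right value varies by caller. */
        if (iovcnt > UIO_MAXIOV)
            return (error);
        iovlen = iovcnt * sizeof(struct iovec);
        *iov = malloc(iovlen, M_IOV, M_WAITOK);
        error = copyin(iovp, *iov, iovlen);
        if (error != 0) {
            free(*iov, M_IOV);
            *iov = NULL;
        }
        return (error);
    }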
assigning a pointer to the list and then dereferencing the pointer as a
second step. When the first spin lock is acquired, curthread is not in
a critical section so it may be preempted and would end up using another
CPU's lock list instead of its own.
When this code was in witness_lock() this sequence was safe as curthread
was in a critical section already since witness_lock() is called after the
lock is acquired.
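A hypothetical rendering of the two access patterns (schematic only):

    /* Unsafe outside a critical section: preemption between the two
     * steps can migrate the thread to another CPU. */
    lock_list = PCPU_PTR(spinlocks);  /* step 1: pointer to the list */
    lle = *lock_list;                 /* step 2: may now be another CPU's */

    /* Safe once the thread cannot be preempted: */
    critical_enter();
    lle = PCPU_GET(spinlocks);
    /* ... use lle while still in the critical section ... */
    critical_exit();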
Tested by: Daniel Lang dl at leo.org
In this mode you can set up even a very small stripe size and be sure
that only one I/O request will be sent to each disk in the stripe.
It consumes somewhat more memory, but if an allocation fails, it falls
back to "ECONOMIC" mode.
It is about 10 times faster for small stripe sizes than "ECONOMIC" mode
and other RAID0 implementations. It is even recommended to use this
mode with a small stripe size, so our requests are always split.
One can still use "ECONOMIC" mode by setting kern.geom.stripe.fast to 0.
It is also possible to set the maximum amount of memory that "FAST" mode
can consume by setting kern.geom.stripe.maxmem in /boot/loader.conf.
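For example, in /boot/loader.conf (values illustrative; the unit of
maxmem is assumed to be bytes):

    kern.geom.stripe.fast="0"          # force "ECONOMIC" mode
    kern.geom.stripe.maxmem="1048576"  # cap "FAST" mode memory use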
one go before returning. This avoids calling uiomove() while holding
allproc_lock.
Don't adjust uio->uio_offset manually; uiomove() does that for us.
Don't drop allproc_lock before calling panic().
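Schematically, the buffering pattern above, with illustrative names:

    buf = malloc(buflen, M_TEMP, M_WAITOK);
    sx_slock(&allproc_lock);
    len = fill_output(buf, buflen);   /* hypothetical helper, runs locked */
    sx_sunlock(&allproc_lock);
    error = uiomove(buf, len, uio);   /* one copyout; advances uio_offset */
    free(buf, M_TEMP);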
Suggested by: alfred
so setfault would return correctly when a page fault was invalid
(e.g. a syscall with a bad parameter).
This caused an endless DSI loop, seen when running sendmail, which
does a setlogin() call with a NULL pointer.
- introduce KTR_SYSC tracing. Expose the syscallnames[] array to
  make the tracing more readable.
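An illustrative trace point (the format string here is an assumption):

    CTR3(KTR_SYSC, "syscall enter: pid %d, %s (%d)",
        p->p_pid, syscallnames[code], code);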
isnormal() the hard way, rather than relying on fpclassify(). This is
a loss in the sense that we need a total of 12 functions, but it is
necessary for binary compatibility because we have never bumped libm's
major version number. In particular, isinf(), isnan(), and isnanf()
were BSD libc functions before they were C99 macros, so we can't
reimplement them in terms of fpclassify() without adding a dependency
on libc.so.5. I have tried to arrange things so that programs that
could be compiled in FreeBSD 4.X will generate the same external
references when compiled in 5.X. At the same time, the new macros
should remain C99-compliant.
The isinf() and isnan() functions remain in libc for historical
reasons; however, I have moved the functions that implement the macros
isfinite() and isnormal() to libm where they belong. Moreover,
half a dozen MD versions of isinf() and isnan() have been replaced
with MI versions that work equally well.
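One plausible rendering of doing it "the hard way" for float: classify
by inspecting the biased exponent field directly instead of calling
fpclassify(). The function name follows the convention the macro
implies; the committed version may use fpmath.h bit-field unions
instead of memcpy():

    #include <stdint.h>
    #include <string.h>

    int
    __isnormalf(float f)
    {
        uint32_t bits;
        int exp;

        memcpy(&bits, &f, sizeof(bits));
        exp = (bits >> 23) & 0xff;
        /* normal iff the biased exponent is neither all zeros (zero
         * or subnormal) nor all ones (infinity or NaN) */
        return (exp != 0 && exp != 0xff);
    }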
Prodded by: kris
builtins are available: HUGE_VAL, HUGE_VALF, HUGE_VALL, INFINITY,
and NAN. These macros now expand to floating-point constant
expressions rather than external references, as required by C99.
Other compilers will retain the historical behavior. Note that
it is not possible to say, e.g.
#define HUGE_VAL 1.0e9999
because the above may result in diagnostics at translation time
and spurious exceptions at runtime. Hence the need for compiler
support for these features.
Also use builtins to implement the macros isgreater(),
isgreaterequal(), isless(), islessequal(), islessgreater(),
and isunordered() when such builtins are available.
Although the old macros are correct, the builtin versions
are much faster, and they avoid double-expansion problems.
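A sketch of the <math.h> approach (the guard is an assumption; the
exact compiler-version checks in the real header may differ):

    #if __GNUC__ >= 3
    #define HUGE_VAL            __builtin_huge_val()
    #define HUGE_VALF           __builtin_huge_valf()
    #define HUGE_VALL           __builtin_huge_vall()
    #define INFINITY            __builtin_inff()
    #define NAN                 __builtin_nanf("")
    #define isgreater(x, y)     __builtin_isgreater((x), (y))
    #define isunordered(x, y)   __builtin_isunordered((x), (y))
    #else
    /* retain the historical definitions for other compilers */
    #endif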
- Avoid an additional lock acquire/release when leaving xl_intr(), by
changing xl_start*() to xl_start*_locked(), and calling the appropriate
routine by chip revision (as the DMA descriptors are different).
- Simplify the appropriate routines now that they are called with the
lock held.
This should save a significant number of CPU cycles spent on servicing
each interrupt for both UP and SMP whilst remaining MPSAFE.
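A simplified sketch of the _locked pattern above: the if_start entry
point becomes a thin wrapper, while xl_intr(), which already holds the
driver lock, calls the locked variant directly.

    static void
    xl_start(struct ifnet *ifp)
    {
        struct xl_softc *sc = ifp->if_softc;

        XL_LOCK(sc);
        xl_start_locked(ifp);
        XL_UNLOCK(sc);
    }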
Tested by: rwatson
data structures that scale better with large character sets, instead of
arrays indexed by character value:
- Sets of characters to delete/squeeze are stored in a new "cset" structure,
which is implemented as a splay tree of extents. This structure has the
ability to store character classes (a la wctype(3)), but this is not
currently fully utilized.
- Mappings between characters are stored in a new "cmap" structure, which
is also a splay tree.
- The parser no longer builds arrays containing all the characters in a
particular class; instead, next() determines them on-the-fly using
nextwctype(3).
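For illustration, an extent node for such a splay tree might look like
this (field names are a guess at the new cset structure; splaying is
omitted for brevity):

    #include <stddef.h>
    #include <wchar.h>

    /* Each node covers a run of consecutive characters
     * [csn_min, csn_max], so a large contiguous range costs one node
     * instead of one array slot per character. */
    struct csnode {
        wchar_t         csn_min;    /* first character in extent */
        wchar_t         csn_max;    /* last character in extent */
        struct csnode   *csn_left;
        struct csnode   *csn_right;
    };

    /* Membership test by plain binary-tree search. */
    static int
    cset_contains(const struct csnode *t, wchar_t ch)
    {
        while (t != NULL) {
            if (ch < t->csn_min)
                t = t->csn_left;
            else if (ch > t->csn_max)
                t = t->csn_right;
            else
                return (1);
        }
        return (0);
    }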