prefix later, but doing so with @cwd %%OLDPREFIX%% (having
PLIST_SUB+="OLDPREFIX=${PREFIX}") hardcodes the value in the packing
list. That's not really a problem when dealing with ports, but it is
a problem with packages, since pkg_add's -p option only overrides the
first @cwd occurrence.
This patch allows us to use @cwd without any argument. If no
directory argument is given, it resets the current working directory
to the first prefix given by the @cwd command.
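For example, in a packing list such as (paths purely illustrative):

    @cwd /usr/local
    bin/foo
    @cwd /etc
    foo.conf
    @cwd
    share/foo/README

the bare @cwd resets the current working directory to /usr/local, the
first prefix, so nothing after it is hardcoded and pkg_add -p can
still relocate it.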
PR: bin/77212
Submitted by: flz
better, I discovered sn doing too many pointer dereferences. This
driver would do silly things like:
sn_foo(struct ifnet *ifp)
{
        struct sn_softc *sc = ifp->if_softc;

        sc->ifp->mumble
        /* Other stuff */
}
While /* Other stuff */ usually needs sc, the extra dereference
through sc->ifp isn't needed, since sc->ifp is just ifp again.
Eliminate a few dozen of them.
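The simplified form just uses the ifp argument directly (sketch;
sn_foo and mumble are the placeholder names from the example above):

    sn_foo(struct ifnet *ifp)
    {
            struct sn_softc *sc = ifp->if_softc;

            ifp->mumble             /* same object, one deref fewer */
            /* Other stuff that needs sc */
    }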
and subsequently broke the build. This change is supposed to fix the
case where calling mtx_destroy() on a spin mutex while you hold it fails.
If it had been tested I would just leave it in, but it hasn't been tested
yet, so it will have to wait until later.
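The failing case is, roughly, the following sequence (minimal sketch;
mtx_destroy(9) explicitly permits destroying a held mutex):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static void
    demo(void)
    {
            static struct mtx m;

            mtx_init(&m, "demo", NULL, MTX_SPIN);
            mtx_lock_spin(&m);
            mtx_destroy(&m);        /* destroying a held spin mutex:
                                     * the case that fails */
    }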
to old-style signals, to be the DAR register for DSI miss exceptions.
This gives the address of the access rather than the instruction
address. The behaviour is now the same as on i386.
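The difference is visible from a handler. A minimal sketch using the
POSIX SA_SIGINFO interface (which is how the libsigsegv tests observe
the address; the constant is just an arbitrary bad pointer):

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void
    handler(int sig, siginfo_t *si, void *uap)
    {
            /* si_addr: the address of the access, not the PC. */
            printf("fault at %p\n", si->si_addr);
            exit(0);
    }

    int
    main(void)
    {
            struct sigaction sa;

            sa.sa_sigaction = handler;
            sa.sa_flags = SA_SIGINFO;
            sigemptyset(&sa.sa_mask);
            sigaction(SIGSEGV, &sa, NULL);
            *(volatile int *)0x1000 = 0;    /* provoke a DSI exception */
            return (1);
    }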
Found by: libsigsegv tests
defined to return an int, but on LP64 platforms the return value of
FD_ISSET() for file descriptors with a bit index larger than 31 would
not fit in an int (due to __fd_mask being defined as an unsigned long).
The fix is to explicitly test against 0.
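The truncation is easy to reproduce on LP64 outside of <sys/select.h>
(illustrative of the bug's shape, not the header's exact macro):

    #include <stdio.h>

    int
    main(void)
    {
            /* The bit a descriptor with index 32 occupies in an
             * unsigned long __fd_mask on LP64. */
            unsigned long bit = 1UL << 32;

            int broken = (int)bit;  /* old shape: narrows to int -> 0 */
            int fixed = (bit != 0); /* fixed shape: test against 0 -> 1 */

            printf("broken=%d fixed=%d\n", broken, fixed);
            return (0);
    }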
PR: ia64/91421
Submitted by: Tanaka Akira (akr at m17n dot org)
MFC after: 1 week
modules would have overlapping names.
- Only create /dev/si_control for unit 0.
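  i.e. roughly (sketch; the minor constant and permissions are
  illustrative):

    if (device_get_unit(dev) == 0)
            make_dev(&si_cdevsw, SI_CONTROLDEV_MINOR, /* illustrative */
                UID_ROOT, GID_WHEEL, 0600, "si_control");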
Tested by: Joerg Lehners Joerg dot Lehners at informatik dot
uni-oldenburg dot de (on 6.x)
MFC after: 1 week
While we don't use the NC_BROADCAST value of nc_flag anywhere in the
RPC code, it is parseable by getnetconfigent(3) from /etc/netconfig
(see the example below).
o Clean up some "see below"'s that were cut and pasted from netconfig.h.
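For the record, an /etc/netconfig entry whose flag field includes 'b'
looks like this (illustrative line; getnetconfigent(3) maps the 'b'
to NC_BROADCAST in nc_flag):

    udp        tpi_clts      vb    inet     udp     -       -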
various pcib drivers to use their own private devclass_t variables for
their modules.
- Use the DEFINE_CLASS_0() macro to declare drivers for the various pcib
drivers while I'm here.
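For reference, DEFINE_CLASS_0() wraps the method table and softc size
into the driver declaration, and each module keeps its own devclass_t;
typical usage looks like this (sketch; the method list is abbreviated):

    static device_method_t pcib_methods[] = {
            DEVMETHOD(device_probe,         pcib_probe),
            DEVMETHOD(device_attach,        pcib_attach),
            /* ... bus and pcib interface methods ... */
            { 0, 0 }
    };

    static devclass_t pcib_devclass;

    DEFINE_CLASS_0(pcib, pcib_driver, pcib_methods,
        sizeof(struct pcib_softc));
    DRIVER_MODULE(pcib, pci, pcib_driver, pcib_devclass, 0, 0);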
struct sx). Instead of storing a direct pointer to our lock_class
struct in lock_object, reserve 4 bits in the lo_flags field to serve as an
index into a global lock_classes array that contains pointers to the lock
classes. Only debugging code such as WITNESS or INVARIANTS checks and KTR
logging need to access the lock_class member, so this shouldn't add any
overhead to production kernels. It might add some slight overhead to
kernels using those debug options however.
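The scheme looks roughly like this (sketch; the shift width and macro
names illustrate the approach rather than quoting sys/lock.h):

    /* 4 bits of lo_flags index a global table of lock classes. */
    #define LO_CLASSSHIFT   28
    #define LO_CLASSMASK    (0xfU << LO_CLASSSHIFT)

    extern struct lock_class *lock_classes[1 << 4];

    #define LOCK_CLASS(lock)                                        \
            (lock_classes[((lock)->lo_flags & LO_CLASSMASK) >>      \
                LO_CLASSSHIFT])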
As with the previous set of changes to lock_object, this is going to
completely obliterate the kernel ABI, so be sure to recompile all your
modules.
doesn't have any actual interrupts is listed in a _PRT entry, only print
a warning rather than panicking when we walk the _PRTs to build up a
count of entries that reference a given link (the counts are used as
weights so that we can attempt to balance the load across the IRQs used
by link devices).
Instead, only panic if we attempt to use the _PRT entry to route an
interrupt for a device.
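In other words, the counting pass now degrades gracefully (sketch;
names hypothetical):

    /* While walking the _PRTs to weight each link device: */
    if (link_irq_count(link) == 0) {
            device_printf(dev, "link has no IRQs; ignoring _PRT entry\n");
            return;     /* warn and skip; panic only at routing time */
    }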
PR: i386/89545
Tested by: anders
- MPSAFE
- Fix / reorganize the attach routine. Device-specific initialization
  must be done after generic bus / DMA setup (see the sketch below).
  Virtual Channels (vchan) finally work as expected.
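The required ordering, in outline (sketch; the foo_* names are
placeholders):

    static int
    foo_attach(device_t dev)
    {
            struct foo_softc *sc = device_get_softc(dev);

            /* Generic bus / DMA setup must come first... */
            if (foo_alloc_resources(dev, sc) != 0)
                    return (ENXIO);
            /* ...and only then the device-specific initialization. */
            foo_chip_init(sc);
            return (0);
    }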
Note: Recent commits / fixes against this driver prove that major
      enhancements to the generic sound layer do indeed help expose
      flaws within device-specific code. There are probably other
      drivers that need to be addressed as well.
Tested by: barner
MFC after: 1 week
that NetBSD implemented it independently of them (don't know which one
was actually first). This saves about 24k for those times you don't
need snapshot support (like when running off a ram disk, or in an
embedded environment where size matters).
returns EBADF. That errno is correct and is mandated by POSIX. It also
goes back to revision 1.1 of our CVS history (i.e. 4.4BSD).
The _fget() function should probably also be updated, as it currently
returns EINVAL in that case rather than EBADF. (Oddly enough, it does
return EBADF for reads on a write-only descriptor, without any XXX
comments.)
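A minimal user-space check of the mandated behaviour (illustrative;
any readable file will do):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            int fd = open("/etc/motd", O_RDONLY);

            if (fd >= 0 && write(fd, "x", 1) == -1 && errno == EBADF)
                    printf("write on read-only fd: EBADF, per POSIX\n");
            close(fd);
            return (0);
    }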
Discussed with: scottl, grog, mjacob, bde
operation, the caller is blocked until the target threads are really
suspended; also avoid suspending a thread when it is holding a
critical lock.
Fix a bug in _thr_ref_delete which tests a never-set flag.
commit broke the 2**24 cases where |x| > DBL_MAX/2. There are exponent
range problems not just for denormals (underflow) but for large values
(overflow). Doubles have more than enough exponent range to avoid the
problems, but I forgot to convert enough terms to double, so there was
an x+x term which was sometimes evaluated in float precision.
Unfortunately, this is a pessimization with some combinations of systems
and compilers (it makes no difference on Athlon XP's, but on Athlon64's
it gives a 5% pessimization with gcc-3.4 but not with gcc-3.3).
Explain the problem better in comments.
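The failure mode is generic. A minimal user-space illustration (not
the libm code) of a sum term overflowing when evaluated in float
precision but not in double:

    #include <float.h>
    #include <stdio.h>

    int
    main(void)
    {
            float x = FLT_MAX;

            float f = x + x;            /* float precision: +Inf */
            double d = (double)x + x;   /* double has the range: finite */

            printf("float: %g  double: %g\n", f, d);
            return (0);
    }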
algorithm for the second step significantly to also get a perfectly
rounded result in round-to-nearest mode. The resulting optimization
is about 25% on Athlon64's and 30% on Athlon XP's (about 25 cycles
out of 100 on the former).
Using extra precision, we don't need to do anything special to avoid
large rounding errors in the third step (Newton's method), so we can
regroup terms to avoid a division, increase clarity, and increase
opportunities for parallelism. Rearrangement for parallelism loses
the increase in clarity. We end up with the same number of operations
but with a division reduced to a multiplication.
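For concreteness, with r = t*t*t the regrouped step has this shape,
needing a single division (sketch; variable and function names are
illustrative, not the committed code):

    /* One double-precision iteration toward cbrt(x): converges from
     * ~16 correct bits to well beyond float precision. */
    static double
    cbrt_step(double x, double t)
    {
            double r = t * t * t;

            return (t * (x + x + r) / (x + r + r));
    }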
Using specifically double precision, there is enough extra precision
for the third step to give enough precision for perfect rounding to
float precision provided the previous steps are accurate to 16 bits.
(They were accurate to 12 bits, which was almost minimal for imperfect
rounding in the old version but would be more than enough for imperfect
rounding in this version (9 bits would be enough now).) I couldn't
find any significant time optimizations from optimizing the previous
steps, so I decided to optimize for accuracy instead. The second step
needed a division although a previous commit optimized it to use a
polynomial approximation for its main detail, and this division dominated
the time for the second step. Use the same Newton's method for the
second step as for the third step since this is insignificantly slower
than the division plus the polynomial (now that Newton's method only
needs 1 division), significantly more accurate, and simpler. Single
precision would be precise enough for the second step, but doesn't
have enough exponent range to handle denormals without the special
grouping of terms (as in previous versions) that requires another
division, so we use double precision for both the second and third
steps.