The RAS implementation would set the end address, then the start
address. These were used by the kernel to restart a RAS sequence if
it was interrupted: when the thread-switching code ran, it would check
these values, move the PC back to the start of the sequence if the PC
fell within it, and clear the values if it had done so.
However, there is a small flaw in this scheme. Thread T1 sets the end
address and gets preempted. Thread T2 runs and also performs a RAS
operation; this resets the end address to zero. Thread T1 now runs
again, sets the start address, and begins the RAS sequence, but is
preempted before the sequence executes its last instruction. The kernel
code that would ordinarily restart the RAS sequence doesn't, because
the PC isn't between the start address and 0, so the PC is not set back
to the start of the sequence. When T1 is resumed it is therefore at the
wrong location for the RAS to take effect, and the atomic sequence
produces the wrong result.
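
To make the failure concrete, here is a rough sketch (illustrative
names, not the actual kernel code) of the restart check done at
context-switch time, and of why a cleared end address defeats it:

    #include <stdint.h>

    /*
     * Illustrative sketch of the RAS restart check performed when a
     * thread is switched back in.  ras_start, ras_end and pc are
     * stand-in names, not the real kernel symbols.
     */
    static void
    ras_restart_check(uintptr_t *pc, uintptr_t ras_start, uintptr_t ras_end)
    {
            /*
             * Restart only if the interrupted PC lies inside the
             * registered sequence.  In the race above, T2 has already
             * cleared the end address, so ras_end == 0, the test never
             * succeeds, and T1 resumes mid-sequence instead of at
             * ras_start.
             */
            if (*pc > ras_start && *pc < ras_end)
                    *pc = ras_start;
    }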
The window for the first race is 3 instructions. The window for the
second race is 5-10 instructions depending on the atomic operation.
This makes this failure fairly rare and hard to reproduce.
Mutexes are implemented in libthr using atomic operations. When the
above race occurred, a lock could get stuck locked, causing many
downstream problems, as you might expect.
Also, make sure to reset the start and end addresses when performing a
syscall; otherwise a malicious process could set them before making the
syscall.
Reviewed by: imp, ups (thanks guys)
Pointy hat to: cognet
MFC After: 3 days
its -f and -v arguments:
kern.proc.filedesc - dump file descriptor information for a process, if
debugging is permitted, including socket addresses, open flags, file
offsets, file paths, etc.
kern.proc.vmmap - dump virtual memory mapping information for a process,
if debugging is permitted, including layout and information on
underlying objects, such as the type of object and path.
These provide a superset of the information historically available
through the now-deprecated procfs(4), and are intended to be exported
in an ABI-robust form.
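
As an illustration of how a userland consumer (such as procstat(1))
might read kern.proc.filedesc, here is a minimal sketch using the
sysctl(3) MIB form; buffer sizing and error handling are simplified,
and the iteration over variable-sized kinfo_file records reflects my
reading of the interface rather than any committed consumer code:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <sys/user.h>           /* struct kinfo_file */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Sketch: print the fd number and path of each open file of a process. */
    static void
    dump_filedesc(pid_t pid)
    {
            int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_FILEDESC, pid };
            size_t len = 0;
            char *buf, *p;

            /* First call sizes the buffer, second call fills it. */
            if (sysctl(mib, 4, NULL, &len, NULL, 0) == -1)
                    return;
            if ((buf = malloc(len)) == NULL)
                    return;
            if (sysctl(mib, 4, buf, &len, NULL, 0) == -1) {
                    free(buf);
                    return;
            }
            for (p = buf; p < buf + len;) {
                    struct kinfo_file *kf = (struct kinfo_file *)(void *)p;

                    printf("fd %d: %s\n", kf->kf_fd, kf->kf_path);
                    p += kf->kf_structsize; /* records are variable-sized */
            }
            free(buf);
    }

    int
    main(void)
    {
            dump_filedesc(getpid());
            return (0);
    }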
January 1, 1601. The 1601 - 1970 period was in seconds rather than 100ns
units.
Remove duplication by having NdisGetCurrentSystemTime call ntoskrnl_time.
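
For reference, a sketch of the conversion being fixed: Windows system
time counts 100ns intervals since January 1, 1601, so the 1601 - 1970
offset (11644473600 seconds) has to be added in 100ns units as well,
not in seconds:

    #include <stdint.h>
    #include <time.h>

    /* Seconds between the Windows epoch (1601-01-01) and the Unix epoch. */
    #define EPOCH_DELTA_SECS        11644473600ULL

    /*
     * Sketch of converting a Unix timespec to Windows system time
     * (100ns intervals since 1601).  The bug was adding the 1601-1970
     * delta in seconds instead of converting it to 100ns units.
     */
    static uint64_t
    unix_to_windows_time(const struct timespec *ts)
    {
            uint64_t t;

            t = (uint64_t)ts->tv_sec + EPOCH_DELTA_SECS;    /* seconds since 1601 */
            return (t * 10000000ULL + ts->tv_nsec / 100);   /* 100ns units */
    }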
linker interfaces for looking up function names and offsets from
instruction pointers. Create two variants of each call: one that is
"DDB-safe" and avoids locking in the linker, and one that is safe for
use in live kernels, by virtue of observing locking, and in particular
safe when kernel modules are being loaded and unloaded simultaneous to
their use. This will allow them to be used outside of debugging
contexts.
Modify two of three current stack(9) consumers to use the DDB-safe
interfaces, as they run in low-level debugging contexts, such as inside
lockmgr(9) and the kernel memory allocator.
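
For illustration, a rough sketch of what the split means to a stack(9)
consumer (treat the call pattern as an assumption based on the
description above, not the committed diff):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/stack.h>

    /*
     * Illustrative only: capture the current kernel stack and print it.
     * A low-level context (e.g. inside the debugger or the allocator)
     * must use the DDB-safe variant, which avoids linker locking; other
     * kernel code uses the locked variant, which stays safe while
     * modules are loaded and unloaded.
     */
    static void
    show_trace(int in_debugger)
    {
            struct stack st;

            stack_zero(&st);
            stack_save(&st);
            if (in_debugger)
                    stack_print_ddb(&st);   /* no linker locks taken */
            else
                    stack_print(&st);       /* observes linker locking */
    }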
Update man page.
sx driver), change a magic value in the PLX bridge chip. Apparently later
builds of the PCI cards had corrected values in the configuration EEPROM.
This change supposedly fixes some PCI bus problems.
information in support of DDB(4); these functions bypass normal linker
locking as they may run in contexts where locking is unsafe (such as the
kernel debugger).
Add a new interface linker_ddb_search_symbol_name(), which looks up a
symbol name and offset given an address, and also
linker_search_symbol_name() which does the same but *does* follow the
locking conventions of the linker.
Unlike existing functions, these functions place the name in a
caller-provided buffer, which is stable even after linker locks have been
released. These functions will be used in upcoming revisions to stack(9)
to support kernel stack trace generation in the context of a live,
rather than a suspended, kernel.
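
A sketch of the intended use, resolving an instruction pointer to
"name+offset" in a caller-provided buffer (the exact signature shown
here is an assumption based on the description above):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/linker.h>

    /*
     * Sketch: resolve an instruction pointer with the DDB-safe lookup.
     * The name lands in a caller-provided buffer, so it remains usable
     * even though no linker locks are held.  Signature assumed.
     */
    static void
    print_symbol(caddr_t ip)
    {
            char name[64];
            long offset;

            if (linker_ddb_search_symbol_name(ip, name, sizeof(name),
                &offset) == 0)
                    printf("%s+%#lx\n", name, offset);
            else
                    printf("%p\n", (void *)ip);
    }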
sparc64, use ANSI function headers and specifically indicate the lack of
arguments with 'void'. Otherwise, warnings are generated at WARNS=3 for
libkse, leading to a compile failure with -Werror.
ia64, powerpc, and sparc64, use ANSI function headers and specifically
indicate the lack of arguments with 'void'. Otherwise, warnings are
generated at WARNS=3, leading to a compile failure with -Werror.
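
For reference, the difference is just in the function header (generic
example with a made-up function name, not the libkse code):

    /*
     * Before: old-style header.  The empty parameter list leaves the
     * arguments unspecified, which the strict-prototype warnings
     * enabled at higher WARNS levels reject, and -Werror turns the
     * warning into a build failure.
     *
     * static void
     * wakeup_handler()
     * {
     * }
     */

    /* After: ANSI header, explicitly stating that there are no arguments. */
    static void
    wakeup_handler(void)
    {
    }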
gets enabled when INVARIANTS is on instead of DIAGNOSTIC (which apparently
nobody uses). From Tor's description:
This happens when the block range spans two block maps, the first in the
inode (mapping up to NDADDR direct blocks) and the second being the first
indirect block. The current check assumes that both block maps are
indirect blocks.
Work done by: tegge
Tested by: kris, kensmith
in the TCP header. Because relevant parts of the TCP header change after
the 'signature' has been computed, the signature becomes invalid.
Reviewed by: tools/regression/netinet/tcpconnect
MFC after: 3 days
Tested by: Nick Hilliard (see net@)
is required by the X.Org PCI domains code and additionally needs
a workaround for Hummingbird and Sabre bridges as these don't
allow their config headers to be read at any width, which is an
unusual behavior.
- In psycho(4) take advantage of DEFINE_CLASS_0 and use more
appropriate types for some softc members.
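
For reference, DEFINE_CLASS_0() declares the driver class in one step
instead of filling in a driver_t by hand; a minimal sketch with a
made-up driver name:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/bus.h>
    #include <sys/module.h>

    struct mybridge_softc {
            device_t        sc_dev;
    };

    static device_method_t mybridge_methods[] = {
            /* device_if methods would go here */
            { 0, 0 }
    };

    /*
     * Equivalent to spelling out:
     *      static driver_t mybridge_driver =
     *          { "mybridge", mybridge_methods, sizeof(struct mybridge_softc) };
     */
    DEFINE_CLASS_0(mybridge, mybridge_driver, mybridge_methods,
        sizeof(struct mybridge_softc));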
MFC after: 3 days
hack means you can get the units and flags to match up more easily with
serial consoles on machines with acpi tables that cause the com ports
to be probed in the wrong order (and hence get the wrong sio unit number).
This replaces the common alternative hack of editing the code to comment
out the acpi attachment. This could go away entirely when device wiring
patches are committed.
stomping on the units intended for the motherboard sio ports. This is
no real substitute for the not-yet-committed device wiring enhancements.
Code taken from sio's pci attachment.
allocation fails and pv entries are reclaimed, there may be an unused pv
entry in a pv chunk that survived the reclamation. However, previously,
after reclamation, get_pv_entry() did not look for an unused pv entry in
a surviving pv chunk; it simply retried the page allocation. Now, it
does look for an unused pv entry before retrying the page allocation.
Note: This only applies to RELENG_7. Earlier branches use a different
pv entry allocator.
MFC after: 6 weeks
libkse in FreeBSD 8.0, do not build or install static versions of libkse
(i.e. libkse*.a) in the default case. Static versions will be built and
installed if libthr is not built or if libkse is the default threading
library.
Discussed on: freebsd-arch
MFC after: 3 days
Intel CPUs with family 0x6, model 0xE and later (i.e., Intel Core(TM))
have a PMC architecture that differs somewhat from previous CPUs in
family 0x6. Even though the basic programming model is similar, the
documented set of legal values that may be loaded into their PMC MSRs
differs from that of the previous PMCs in family 0x6 and reusing bit
values valid for the older PMCs could result in undefined behaviour in
the general case.
per-cpu area. cp_time[] goes away and a new function creates a merged
cp_time-like array for things like linprocfs, sysctl etc. The
atomic ops for updating cp_time[] in statclock go away, and the scope
of the thread lock is reduced.
sysctl kern.cp_time returns a backwards compatible cp_time[] array.
A new kern.cp_times sysctl returns the individual per-cpu stats.
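
For example, the new per-cpu array could be read from userland roughly
like this (sketch; the layout is CPUSTATES long counters per CPU):

    #include <sys/types.h>
    #include <sys/resource.h>       /* CPUSTATES, CP_USER, ... */
    #include <sys/sysctl.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: print selected per-CPU tick counters from kern.cp_times. */
    int
    main(void)
    {
            long *times;
            size_t len = 0;
            u_int cpu, ncpu;

            if (sysctlbyname("kern.cp_times", NULL, &len, NULL, 0) == -1)
                    return (1);
            if ((times = malloc(len)) == NULL ||
                sysctlbyname("kern.cp_times", times, &len, NULL, 0) == -1)
                    return (1);
            ncpu = len / (sizeof(long) * CPUSTATES);
            for (cpu = 0; cpu < ncpu; cpu++)
                    printf("cpu%u: user %ld sys %ld idle %ld\n", cpu,
                        times[cpu * CPUSTATES + CP_USER],
                        times[cpu * CPUSTATES + CP_SYS],
                        times[cpu * CPUSTATES + CP_IDLE]);
            free(times);
            return (0);
    }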
I have pending changes to make top and vmstat optionally show per-cpu
stats.
I'm very aware that there are something like 5 or 6 other versions "out
there" for doing this - but none were handy when I needed them.
I did merge my changes with John Baldwin's, and ended up replacing a
few chunks of my stuff with his, and stealing some other code.
Reviewed by: jhb
Partly obtained from: jhb
since the branch caches on at least Athlon XP through Athlon 64 CPUs
don't understand such instructions and guarantee a cache miss taking
at least 10 cycles. Use the documented workaround "ret $0" instead
("nop; ret" also works, but "ret $0" is probably faster on old CPUs).
Normal code (even asm code) doesn't branch to "ret", since there is
usually some cleanup to do, but the __mcount, .mcount and .mexitcount
entry points were optimized too well to have the minimum number of
instructions (3 instructions each if profiling is not enabled) and
they did this. I didn't see a significant number of cache misses for
.mexitcount, but for the shared "ret" for __mcount and .mcount I
observed cache misses costing 26 cycles each. For a send(2) syscall
that makes about 70 function calls, the cost of these cache misses
alone increased the syscall time from about 4000 cycles to about 7000
cycles. 4000 is for a profiling (GUPROF) kernel with profiling disabled;
after this fix, configuring profiling only costs about 600 cycles in the
4000, which is consistent with almost perfect branch prediction in the
mcounting calls.
unused except to obfuscate disassemblies. -mprofiler-epilogue is
currently broken with gcc-4 (it does too little), but -finstrument-functions
is broken in a different way (it does too much).
amd64 version: merge whitespace fixes from i386 version.
Call uma_sel_align() there as well.
Set CPU_CONTROL_VECRELOC if we're using the high vectors page.
Submitted by: Rafal Jaworowski <raj AT semihalf DOT com>
MFC After: 1 week