This has been an (incorrect) copy-paste duplicate of debug.kassert.warn_only
since it was originally committed in r243980.
Sponsored by: Dell EMC Isilon
To diagnose and fix secondary panics, it is useful to have a stack trace.
When panic tracing is enabled, optionally trace secondary panics as well.
The option is configured with the tunable/sysctl debug.trace_all_panics.
(The original concern that inspired only tracing the primary panic was
likely that the secondary trace may scroll the original panic message or trace
off the screen. This is less of a concern for serial consoles with logging.
Not everything has a serial console, though, so the behavior is optional.)
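A minimal sketch of the gating this adds in the panic path, assuming the
existing trace_on_panic knob and a newpanic flag that is false for secondary
panics (the committed code differs in detail):

if (trace_on_panic && (newpanic || trace_all_panics))
        kdb_backtrace();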
Discussed with: jhb
Sponsored by: Dell EMC Isilon
assertions to not suppress post-panic assertions.
There are some post-panic assertions that are valuable and we shouldn't
default to disabling them. However, when a user trips over them, the
user can still adjust the tunable/sysctl to suppress them temporarily in
order to conduct troubleshooting (e.g. get a core dump).
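For illustration, the gating is of this general shape (the suppression
variable name here is an assumption, not the actual symbol):

/* in kassert_panic()-like code */
if (panicstr != NULL && kassert_suppress_in_panic)
        return;         /* user opted to suppress post-panic assertions */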
Reported by: cem, markj
r313683 introduced new lockmgr APIs that missed the panic-time neutering
present in the rest of our locks. Correct that by adding the usual check.
Additionally, move the __lockmgr_args neutering above the assertions at the
top of the function. Drop the interlock unlock because we shouldn't have
an unneutered interlock either. No point trying to unlock it.
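The "usual check" is the panic-time short circuit used by the other lock
primitives; a minimal sketch:

if (SCHEDULER_STOPPED())
        return (0);     /* locking is a no-op once the scheduler is stopped */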
PR: 227749
Reported by: jtl
Sponsored by: Dell EMC Isilon
Note that GDB at least implements single stepping for MIPS using software
breakpoints explicitly rather than using PT_STEP, so this has only been
tested via tests in ptrace_test which now pass rather than fail.
- Fix several places to use uintptr_t instead of int for virtual addresses.
- Check for errors from ptrace_read_int() when setting a breakpoint for a
step.
- Properly check for errors from ptrace_write_int(), as it returns non-zero,
not negative, values on failure (see the sketch after this list).
- Change the error returns for ptrace_read_int() and ptrace_write_int() from
ENOMEM to EFAULT.
- Clear a single step breakpoint when it traps rather than waiting for it
to be cleared from ptrace(). This matches the behavior of the arm port
and in general seems a bit more reliable than waiting for ptrace() to
clear it via FIX_SSTEP.
- Drop the PROC_LOCK around ptrace_write_int() in ptrace_clear_single_step()
since it can sleep.
- Reorder the breakpoint handler in trap() to only read the instruction if
the address matches the current thread's breakpoint address.
- Replace various #if 0'd debugging printfs with KTR_PTRACE traces.
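A hedged sketch of the corrected error handling described above (the exact
code differs):

error = ptrace_write_int(td, va, instr);
if (error != 0)         /* non-zero, not negative, on failure */
        return (error); /* now EFAULT rather than ENOMEM */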
Tested on: mips64
This is necessary to make sure that functions that can have stack
protection are not used to update the stack guard. If not, the stack
guard check would fail when it shouldn't.
guard_setup() calls elf_aux_info(), which, in turn, calls memcpy() to
update __stack_chk_guard. If either elf_aux_info() or memcpy() has
stack protection enabled, __stack_chk_guard will be modified before
returning from them, causing the stack protection check to fail.
This change uses a temporary buffer to delay changing
__stack_chk_guard until elf_aux_info() returns.
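A simplified sketch of the approach (sizes and names approximate; the
committed code differs in detail):

long tmp_guard[8];      /* assumed to mirror __stack_chk_guard's size */
int i;

/* A possibly-instrumented elf_aux_info()/memcpy() only ever writes the
 * temporary; the real guard is then updated with plain stores. */
if (elf_aux_info(AT_CANARY, tmp_guard, sizeof(tmp_guard)) == 0) {
        for (i = 0; i < 8; i++)
                __stack_chk_guard[i] = tmp_guard[i];
}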
Submitted by: Luis Pires
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D15173
We must ensure that the accesses occur, but they do not have any other
compiler-visible effects. Bruce found some situations where
optimization could remove an access, and provided a patch to use
the volatile qualifier for the state variables. Since the behaviour of
volatile there is a compiler-specific interpretation of the keyword, use
relaxed atomics instead, which give exactly the desired semantics.
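For illustration, the kind of substitution involved (the variables here are
hypothetical, not the actual state variables):

u_int v, state;

v = atomic_load_int(&state);            /* instead of: v = state; */
atomic_store_int(&state, v | 0x1);      /* instead of: state |= 0x1; */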
Noted by and discussed with: bde
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Use proper method to access userspace. For now, only the slow copyout
path is implemented.
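A minimal example of the copyout(9) pattern referred to (buffer and
destination are illustrative):

int error;

error = copyout(&kbuf, uaddr, sizeof(kbuf));
if (error != 0)
        return (error);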
Reported and tested by: tijl (previous version)
Sponsored by: The FreeBSD Foundation
This sysctl allows a deeper dive into the sleep abyss compared to
debug.acpi.suspend_bounce. When the new sysctl is set, the system will
execute the suspend sequence up to the call to AcpiEnterSleepState().
That includes saving processor contexts and parking APs. Then, instead
of actually entering the sleep state, the BSP will call resumectx() to
emulate the wakeup. The APs should get restarted by the sequence of
Init and Startup IPIs that the BSP sends to them.
MFC after: 8 days
The MIPS ptrace_single_step() unlocks the PROC_LOCK while reading and
writing instructions from userland. One failure case was not reacquiring
the lock before returning.
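A sketch of the corrected error path (names approximate; the real function
handles more cases):

int error;

PROC_UNLOCK(p);
error = ptrace_read_int(td, va, &instr);
PROC_LOCK(p);           /* reacquire even when the read failed */
if (error != 0)
        return (EFAULT);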
- Use TRAP_TRACE for traps after stepping via PT_STEP.
- Use TRAP_BRKPT for software breakpoint traps and watchpoint traps.
This was tested via the recently added siginfo ptrace() tests. PT_STEP on
MIPS has several bugs that prevent it from working yet, but this does fix
the ptrace__breakpoint_siginfo test on MIPS.
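A hedged sketch of how the si_code is chosen when posting the trap signal
(the 'stepping' condition is illustrative):

ksiginfo_t ksi;

ksiginfo_init_trap(&ksi);
ksi.ksi_signo = SIGTRAP;
ksi.ksi_code = stepping ? TRAP_TRACE : TRAP_BRKPT;
ksi.ksi_addr = (void *)(intptr_t)trapframe->pc;
trapsignal(td, &ksi);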
- ptrace__breakpoint_siginfo tests that a SIGTRAP for a software breakpoint
in userland triggers a SIGTRAP with a signal code of TRAP_BRKPT.
- ptrace__step_siginfo tests that a SIGTRAP reported for a step after
stepping via PT_STEP or PT_SETSTEP has a signal code of TRAP_TRACE.
- Use a single list of platforms to define HAVE_BREAKPOINT for platforms
that expose a functional breakpoint() inline to userland. Replace
existing lists of platform tests with HAVE_BREAKPOINT instead.
- Add support for advancing PC past a breakpoint inserted via breakpoint()
to support the existing ptrace__PT_CONTINUE_different_thread test on
non-x86 platforms (x86 advances the PC past the breakpoint instruction,
but other platforms do not). This is implemented by defining a new
SKIP_BREAK macro which accepts a pointer to a 'struct reg' as its sole
argument and modifies the contents to advance the PC. The intention is
to use it in between PT_GETREGS and PT_SETREGS.
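As an illustration, the intended call pattern in the tests looks roughly like
this (sketch):

struct reg regs;

ATF_REQUIRE(ptrace(PT_GETREGS, wpid, (caddr_t)&regs, 0) == 0);
SKIP_BREAK(&regs);
ATF_REQUIRE(ptrace(PT_SETREGS, wpid, (caddr_t)&regs, 0) == 0);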
Tested on: amd64, i386, mips (after adding a breakpoint() to mips)
MFC after: 1 month
For cross-architecture reproducibility. The db(3) functions work with
hashes of either endianness, and the current (v4) version password db
entries already store integers in network order. Do so with the hash as
well so that identical password databases can be created on big- and
little-endian hosts.
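A sketch of the idea in db(3) terms (parameters illustrative; pwd_mkdb's
actual HASHINFO differs):

HASHINFO openinfo = {
        .bsize = 4096,
        .lorder = BIG_ENDIAN,   /* hash metadata in network order */
};
DB *dp;

dp = dbopen(tname, O_RDWR | O_CREAT | O_EXCL, 0600, DB_HASH, &openinfo);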
The -B and -L flags exist to set the endianness for legacy (v3) entries
when the -l flag is used, and they will still control hash endianness
(at least until the backwards compatibility infrastructure is removed).
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
Each malloc/free was testing dtrace_malloc_enabled and forcing
extra reads from the malloc type struct to see if perhaps a
dtmalloc probe was on.
Treat it like lockstat and sdt: have a global boolean.
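As a rough illustration of the resulting fast path (the helper name is an
assumption, not the actual symbol):

if (__predict_false(dtrace_malloc_enabled))
        dtrace_malloc_probe(mtp, va, size);     /* only then read the type struct */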
[X86] In X86FlagsCopyLowering, when rewriting a memory setcc we need
to emit an explicit MOV8mr instruction.
Previously the code only knew how to handle setcc to a register.
This should fix a crash in the chromium build.
This fixes various assertion failures while building ports targeting
i386:
* www/firefox: isReg() && "This is not a register operand!"
* www/iridium, www/qt5-webengine: (I.atEnd() || std::next(I) ==
def_instr_end()) && "getVRegDef assumes a single definition or no
definition"
* devel/powerpc64-gcc: FromReg != ToReg && "Cannot replace a reg with
itself"
Reported by: jbeich
PR: 225330, 227686, 227698, 227699
MFC after: 1 week
X-MFC-With: r332833
malloc was showing up at the top of the profile while running microbenchmarks.
#define DTMALLOC_PROBE_MAX 2
struct malloc_type_internal {
        uint32_t mti_probes[DTMALLOC_PROBE_MAX];
        u_char mti_zone;
        struct malloc_type_stats mti_stats[MAXCPU];
};
Reading mti_zone wastes a cacheline to hold mti_probes + mti_zone
(which we know is 0) + part of the malloc stats of the first cpu, which on top
of that induces false-sharing.
In particular will-it-scale lock1_processes -t 128 -s 10:
before: average:45879692
after: average:51655596
Note the counters can be padded but the right fix is to move them to
counter(9), leaving the struct read-only after creation (modulo dtrace
probes).
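A hedged sketch of what the counter(9) direction could look like (hypothetical
layout, not committed code):

struct malloc_type_internal {
        uint32_t mti_probes[DTMALLOC_PROBE_MAX];
        u_char mti_zone;
        counter_u64_t mti_allocs;       /* per-CPU under the hood */
        counter_u64_t mti_bytes;        /* struct stays read-only after creation */
};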
This fixes media display for 802.11 wireless devices.
Software outside the base system that uses these media types and
defines should use #ifdef IFM_FDDI or IFM_TOKEN to include or remove
support.
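For example, out-of-tree code can guard its FDDI handling like this (the
called function is hypothetical):

#ifdef IFM_FDDI
        report_fddi_media(ifmr);        /* FDDI support only where still provided */
#endif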
Reported by: zeising
Reviewed by: emaste, kib, zeising
Tested by: zeising
Sponsored by: DARPA, AFRL
Differential Revision: https://reviews.freebsd.org/D15170
This patch adds a new socket option, SO_REUSEPORT_LB, which allows multiple
programs or threads to bind to the same port, with incoming connections
load balanced using a hash function.
Most of the code was copied from a similar patch for DragonflyBSD.
However, in DragonflyBSD, load balancing is a global on/off setting and cannot
be set per socket. This patch allows simultaneous use of both the current
SO_REUSEPORT and the new SO_REUSEPORT_LB options on the same system.
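For example, a server joining a load-balancing group would do something along
these lines (socket and address setup omitted):

int one = 1;

if (setsockopt(s, SOL_SOCKET, SO_REUSEPORT_LB, &one, sizeof(one)) == -1)
        err(1, "setsockopt(SO_REUSEPORT_LB)");
if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
        err(1, "bind");
if (listen(s, -1) == -1)
        err(1, "listen");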
Required changes to structures:
Globally change so_options from a 16-bit to a 32-bit value to allow for more
options.
Add a hashtable in pcbinfo to hold all SO_REUSEPORT_LB sockets.
Limitations:
As in DragonflyBSD, a load balance group is limited to 256 pcbs
(256 programs or threads sharing the same socket).
Submitted by: Johannes Lundberg <johanlun0@gmail.com>
Sponsored by: Limelight Networks
Differential Revision: https://reviews.freebsd.org/D11003
- Add an implementation of atomic_fcmpset_32() using RAS for armv4/v5.
This fixes recent world breakage due to use of atomic_fcmpset() in
userland.
- While here, be more careful to not expose wrapper macros for 64-bit
atomic_*cmpset to userland for armv4/v5 as only 32-bit cmpset is
implemented.
This has been reviewed but not runtime-tested; it should fix the arm.arm
and arm.armeb worlds that have been broken for a while.
Reviewed by: imp
MFC after: 1 month
Differential Revision: https://reviews.freebsd.org/D15147
The return value of atomic_cmpset() and atomic_fcmpset() is an int (which
is really a bool) that has the values 0 or 1. Some of the inlines were
using the type being operated on (e.g. uint32_t) as either the return type
of the function, or the type of a local 'ret' variable used to hold the
return value. Fix all of these to just use plain 'int'. Due to C promotion
rules and the fact that the value can only be 0 or 1, these should all be
harmless.
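For illustration, the change amounts to this kind of declaration fix (sketch):

/* before: return type mirrored the operand type */
static __inline uint32_t atomic_cmpset_32(volatile uint32_t *, uint32_t, uint32_t);

/* after: the 0-or-1 result is a plain int */
static __inline int atomic_cmpset_32(volatile uint32_t *, uint32_t, uint32_t);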
Reviewed by: imp (only the v4 ones)
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D15147
considered as originated by our host packet. Thus rcvif should be
NULL, since it is used by ipfw(4) to determine whether a packet originated
from this host. Some icmp6_reflect() consumers reuse the mbuf and m_pkthdr
without resetting the rcvif pointer. To avoid this, always reset the
m_pkthdr.rcvif pointer to NULL in icmp6_reflect(). Also remove such a line and
the comment describing it from icmp6_error(), since it no longer matters.
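The change itself boils down to clearing the field early in icmp6_reflect()
(sketch):

m->m_pkthdr.rcvif = NULL;       /* the reply originates from this host */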
PR: 227674
Reported by: eugen
MFC after: 1 week
Intel® Arria® 10 SoC.
The Cadence Quad SPI controller is not a generic SPI controller but a SPI
flash controller, so don't use spibus here; instead, provide a quad SPI flash
interface.
Since it is not on spibus, the mx25l flash device driver is not usable
here, so provide a new n25q flash device driver that uses the quad SPI flash
interface.
Sponsored by: DARPA, AFRL
Differential Revision: https://reviews.freebsd.org/D10245
This, combined with previous changes, significantly depessimizes the behaviour
under contention.
In particular the lock1_processes test (locking/unlocking separate files)
from the will-it-scale suite was executed with 128 concurrency on a
4-socket Broadwell with 128 hardware threads.
Operations/second (lock+unlock) go from ~750000 to ~45000000 (6000%).
For reference, single-process throughput is ~1680000 (i.e. on the stock kernel
the resulting perf is less than *half* of the single-threaded run).
Note this still does not really scale all that well, as the locks were just
bolted on top of the current implementation. There is still significant room
for improvement. In particular, the top performance fluctuates depending on
the extent of false sharing in a given run (which extends beyond the file).
The added chain+lock pairs were not padded w.r.t. cacheline size.
One big ticket item is the hash used for spreading threads: it used to be the
process pid (which basically serialized all threaded ops). Temporarily the
vnode addr was slapped in instead.
Tested by: pho
I have changed my given name from Bruce to Rebecca, and my FreeBSD account
from brucec to bcran.
Update committers-src.dot and calendar.freebsd to show these changes.
Reviewed by: rrs
Differential Revision: https://reviews.freebsd.org/D15125
kproc_suspend_check. In r329612 bufspacedaemon was turned into a thread
of the bufdaemon process, causing both to call kproc_suspend_check with the
same proc argument; that function contains the following while loop:
while (SIGISMEMBER(p->p_siglist, SIGSTOP)) {
        wakeup(&p->p_siglist);
        msleep(&p->p_siglist, &p->p_mtx, PPAUSE, "kpsusp", 0);
}
So one thread wakes up the other and the other wakes up the first again,
locking up UP machines on shutdown.
Also register the shutdown handlers with SHUTDOWN_PRI_LAST + 100 so they
run after the syncer has shut down, because the syncer can create a
situation where the bufdaemon's help is needed to proceed.
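A sketch of the registration with the adjusted priority (handler and argument
names approximate):

EVENTHANDLER_REGISTER(shutdown_pre_sync, kproc_shutdown, bufdaemonproc,
    SHUTDOWN_PRI_LAST + 100);   /* after the syncer's handler */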
PR: 227404
Reviewed by: kib
Tested by: cy, rmacklem
1. Check if P_ADVLOCK is already set and, if so, don't take the lock to set it
(stolen from DragonFly); see the sketch below.
2. When trying the fast path unlock, first check that we are indeed doing an
unlock, instead of taking the interlock for no reason (e.g. if we want
to *lock*). While here, make it more likely that a failing fast path will
not take the interlock either, by checking the state first.
Note the code is severely pessimized both single- and multithreaded.
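A sketch of the check-before-lock pattern from item 1:

if ((p->p_flag & P_ADVLOCK) == 0) {
        PROC_LOCK(p);
        p->p_flag |= P_ADVLOCK;
        PROC_UNLOCK(p);
}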