For all the INT13 calls, use symbolic names instead of magic numbers. This makes
it easier to understand what the code is doing without a trip to Google to find
out what these numbers mean.
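For example (a minimal sketch with hypothetical macro names; the names actually
used in the tree may differ):

    #define INT13_READ_CHS      0x02    /* read sectors, CHS addressing */
    #define INT13_DRIVE_PARAMS  0x08    /* get drive parameters */
    #define INT13_EXT_CHECK     0x41    /* EDD installation check */
    #define INT13_EXT_READ      0x42    /* extended (LBA) read */

    v86.eax = INT13_EXT_READ << 8;      /* was: v86.eax = 0x4200; */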
array with a singleton.
Also, the pccbb ISA attachment is never going to happen, so disconnect it from
the build (this will be deleted in a future commit). It would need to be updated
as well, but since this code is effectively dead, remove it from the build
instead.
Fix an assert violation introduced in r355784. Without this, spinlock_exit()
may see owepreempt and switch before reducing the spinlock count. amd64
had been optimized to do a single critical enter/exit regardless of the
number of spinlocks, which avoided the problem, but this optimization had
not been applied elsewhere.
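Roughly, the fixed ordering looks like this (a sketch only; field names vary
per platform and this is not the exact MD code):

    void
    spinlock_exit(void)
    {
            struct thread *td;
            register_t flags;

            td = curthread;
            flags = td->td_md.md_saved_flags;
            /* Drop the count first; only the outermost exit leaves the
             * critical section, so any preemption triggered by
             * critical_exit() happens with the count already at zero. */
            td->td_md.md_spinlock_count--;
            if (td->td_md.md_spinlock_count == 0) {
                    critical_exit();
                    intr_restore(flags);
            }
    }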
Reported by: emaste
Suggested by: rlibby
Discussed with: jhb, rlibby
Tested by: manu (arm64)
than "/compat/linux". Useful when you have several compat directories
with different Linux versions and you don't want to clash with files
installed by linux-c7 packages.
Reviewed by: bcr (manpages)
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D22574
Excessively large TRIMs can result in timeouts, which cause big
problems. Limit TRIMs to 1GB to mitigate these issues.
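A minimal sketch of the clamping (the constant name is hypothetical; the
remainder is issued as follow-up TRIMs):

    #define MAX_TRIM_BYTES  (1024ULL * 1024 * 1024) /* 1GB per request */

    length = bp->bio_bcount;
    if (length > MAX_TRIM_BYTES)
            length = MAX_TRIM_BYTES;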
Reviewed by: scottl
Differential Revision: https://reviews.freebsd.org/D22809
Keyboard drivers are generally registered via linker set. In these cases,
they're also available as kmods which use the KPI for registering/unregistering
keyboard drivers outside of the linker set.
For built-in modules, we still fire off MOD_LOAD (and maybe even MOD_UNLOAD
if an error occurs), leading to registration both via the linker set and at
MOD_LOAD time.
This is a minor optimization at best, but it keeps the internal kbd driver
tidy as a future change will merge the linker set driver list into its
internal keyboard_drivers list via SYSINIT and simplify driver lookup by
removing the need to consult the linker set.
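To illustrate the two paths that can currently overlap for a built-in module
(a sketch using a hypothetical "foo" driver; the macro arguments are
simplified):

    /* Linker set path: picked up automatically for built-in drivers. */
    KEYBOARD_DRIVER(foo, foo_sw, foo_configure);

    /* KPI path: a kmod registers/unregisters at module events. */
    static int
    foo_kbd_modevent(module_t mod, int type, void *data)
    {
            switch (type) {
            case MOD_LOAD:
                    return (kbd_add_driver(&foo_kbd_driver));
            case MOD_UNLOAD:
                    return (kbd_delete_driver(&foo_kbd_driver));
            default:
                    return (EOPNOTSUPP);
            }
    }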
In the original GNU libgcc, _Unwind_Backtrace is published with the GCC_3.3
version for all architectures but ARM; for ARM it should be published with the
GCC_4.3.0 version. This was originally omitted in r255095, fixed in r318024,
and omitted again in the LLVM libunwind implementation in r354347.
For ARM, _Unwind_Backtrace should be published as the default with the
GCC_4.3.0 version (because this is the correct original version) and again as
a normal (non-default) symbol with the GCC_3.3 version (to maintain ABI
compatibility with binaries compiled/linked against the wrong, pre-r318024
libgcc).
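Expressed with GNU symbol-versioning directives, the intent is roughly the
following (illustrative only; the helper symbol names are made up and the real
change lives in the library's version map):

    #include <unwind.h>

    _Unwind_Reason_Code __unw_backtrace43(_Unwind_Trace_Fn, void *);
    _Unwind_Reason_Code __unw_backtrace33(_Unwind_Trace_Fn, void *);

    /* "@@" publishes the default version, "@" a non-default compat alias. */
    __asm__(".symver __unw_backtrace43, _Unwind_Backtrace@@GCC_4.3.0");
    __asm__(".symver __unw_backtrace33, _Unwind_Backtrace@GCC_3.3");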
PR: 233664
On PowerPC, this is needed in order for the debugger to find out
the memory offset where the kernel image was loaded on the remote
target.
This fixes symbol resolution when remote debugging a PowerPC kernel.
Reviewed by: cem
Differential Revision: https://reviews.freebsd.org/D22767
This is needed after r355796. Some double-registration of kbd drivers needs
to be sorted out, then this sysinit will simply add these drivers into the
normal list and kill off any other bits in the driver that are aware of the
linker set, for simplicity.
The previous implementation relied on a kbdsw array that mirrored the global
keyboards array. This is fine, but it also requires extra locking consideration
when accessing it, to ensure that it's not being resized as new keyboards are
added.
The extra pointer costs little in a struct that there are relatively few of
on any given system, and simplifies locking requirements ever-so-slightly as
we only need to consider the locking requirements of whichever method is
being invoked.
__FreeBSD_version is bumped, as any kbd modules will need to be rebuilt
following this change.
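Sketched with a hypothetical field name, the difference is:

    /* Before: a parallel array mirrored the global keyboard list and had
     * to be protected against resizes of that list. */
    sw = kbdsw[kbd->kb_index];

    /* After: each keyboard carries a pointer to its own switch, so only
     * that keyboard's locking matters to the caller. */
    sw = kbd->kb_drv_sw;        /* hypothetical field name */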
Most keyboard drivers are using the genkbd implementations as it is;
formally use them for any that aren't set and make
genkbd_get_fkeystr/genkbd_diag private.
A future change will provide default implementations for some of these where
it makes sense and most of them are already using the genkbd
implementation (e.g. get_fkeystr, diag).
These invocations were directly calling genkbd_diag(), rather than
indirecting back through kbdd_diag/kbdsw. While they're functionally
equivalent, invoking kbdd_diag where feasible (i.e. not in a diag
implementation) makes it easier to visually identify locking needs in these
other drivers.
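That is, roughly:

    genkbd_diag(kbd, bootverbose);  /* before: bypasses the switch */
    kbdd_diag(kbd, bootverbose);    /* after: dispatches via kbdsw */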
Most frequently used vops boil down to checking SDT probes, doing the call and
checking again. There is no vop_post/pre in their case but the check after the
call prevents tail call optimisation from taking place. Instead, check once
upfront. Kernels with debug or vops with non-empty vop_post still don't short
circuit.
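A simplified sketch of the resulting wrapper shape (not the literal
vnode_if.awk output; VOP_FOO and its probe names are placeholders, and debug
kernels or a non-empty vop_post still take the slow path as noted above):

    int
    VOP_FOO(struct vnode *vp)
    {
            int rc;

            if (!SDT_PROBES_ENABLED())
                    return (vp->v_op->vop_foo(vp)); /* tail call possible */

            SDT_PROBE1(vfs, vop, vop_foo, entry, vp);
            rc = vp->v_op->vop_foo(vp);
            SDT_PROBE1(vfs, vop, vop_foo, return, rc);
            return (rc);
    }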
Reviewed by: kib
Tested by: pho
Differential Revision: https://reviews.freebsd.org/D22739
Now that it is not used after the schedlock changes got merged.
Note the unlock routine temporarily still checks for it, on account of just
using the regular spin unlock.
This is a prelude to a general cleanup.
It seems I did not read the specifications carefully enough. There are devices
that do not set the successful completion bit, causing the previous code to
report a false error.
MFC after: 1 week
PHYS_TO_VM_PAGE() operation that always returns the same vm_page_t out of
the loop. (Since arm64 is configured as VM_PHYSSEG_SPARSE, the
implementation of PHYS_TO_VM_PAGE() is more costly than that of
VM_PHYSSEG_DENSE platforms, like amd64.)
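That is (sketch, helper name hypothetical):

    /* Before: the same physical address translated on every iteration. */
    for (va = sva; va < eva; va += PAGE_SIZE)
            update(PHYS_TO_VM_PAGE(pa), va);

    /* After: hoist the loop-invariant lookup. */
    m = PHYS_TO_VM_PAGE(pa);
    for (va = sva; va < eva; va += PAGE_SIZE)
            update(m, va);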
MFC after: 1 week
In some cases the pool discovery will get stuck in an infinite loop while
setting up the vdev children.
To fix this, we split the vdev setup into two parts: first we create vdevs
based on the configuration we get from the pool label; then we process the pool
config from the MOS and update the pool config if needed.
Testing done: confirmed that a previously hung loader no longer hangs.
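In outline (helper names are hypothetical):

    /* Pass 1: build the vdev tree from the config in the pool label,
     * which is always self-contained. */
    rc = vdev_init_from_label(spa, label_config);

    /* Pass 2: read the config stored in the MOS and only update the
     * already-created vdevs where it differs. */
    if (rc == 0)
            rc = vdev_update_from_mos(spa, mos_config);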
MFC after: 1 week
Don't hold the scheduler lock while doing context switches. Instead we
unlock after selecting the new thread and switch within a spinlock
section leaving interrupts and preemption disabled to prevent local
concurrency. This means that mi_switch() is entered with the thread
locked but returns without. This dramatically simplifies scheduler
locking because we will not hold the schedlock while spinning on a
blocked lock in switch.
This change has not been made to 4BSD but in principle it would be
more straightforward.
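The resulting calling convention, sketched (the flag combination is only an
example):

    thread_lock(td);
    /* ... decide to block or yield ... */
    mi_switch(SW_VOL | SWT_RELINQUISH);
    /* mi_switch() dropped the thread lock; no thread_unlock() here. */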
Discussed with: markj
Reviewed by: kib
Tested by: pho
Differential Revision: https://reviews.freebsd.org/D22778
The Nest MMU manages address translation for accelerators on the POWER9. To
do so, it needs a page table, so export the system page table to the Nest
MMU. This will quietly fail on pre-POWER9 systems that do not have an NMMU.
The NMMU is currently unused, so this change is effectively a NOP for now,
but the NMMU and VAS will eventually be used.
Eliminate lock recursion from turnstiles. This was simply used to avoid
tracking the top-level turnstile lock. Explicitly check for it before
picking up and dropping locks.
Reviewed by: kib
Tested by: pho
Differential Revision: https://reviews.freebsd.org/D22746
Do all sleepqueue post-processing in sleepq_remove_thread() so that we
do not require the thread lock after a context switch.
Reviewed by: jhb, kib
Differential Revision: https://reviews.freebsd.org/D22745
The arm kernel stack unwinder has apparently never been able to unwind when
the path of execution leads through a kernel module. There was code that
tried to handle modules by looking for the unwind data in them, but it did
so by trying to find symbols which have never existed in arm kernel
modules. That caused the unwind code to panic, and because part of panic
handling calls into the unwind code, that just created a recursion loop.
Locating the unwind data in a loaded module requires accessing the Elf
section headers to find the SHT_ARM_EXIDX section. For preloaded modules
those headers are present in a metadata blob. For dynamically loaded
modules, the headers are present only while the loading is in progress; the
memory is freed once the module is ready to use. For that reason, there is
new code in kern/link_elf.c, wrapped in #ifdef __arm__, to extract the
unwind info while the headers are loaded. The values are saved into new
fields in the linker_file structure which are also conditional on __arm__.
In arm/unwind.c there is new code to locally cache the per-module info
needed to find the unwind tables. The local cache is crafted for lockless
read access, because the unwind code often needs to run in context where
sleeping is not allowed. A large comment block describes the local cache
list, so I won't repeat it all here.
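The section-header walk amounts to roughly the following (simplified; the
linker_file field names shown are illustrative):

    for (i = 0; i < ehdr->e_shnum; i++) {
            if (shdr[i].sh_type == SHT_ARM_EXIDX) {
                    /* Remember where the unwind index landed so unwind.c
                     * can find it after the headers are freed. */
                    lf->exidx_addr = shdr[i].sh_addr;
                    lf->exidx_size = shdr[i].sh_size;
                    break;
            }
    }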
Eliminate recursion from most thread_lock consumers. Return from
sched_add() without the thread_lock held. This eliminates unnecessary
atomics and lock word loads as well as reducing the hold time for
scheduler locks. This will eventually allow for lockless remote adds.
Discussed with: kib
Reviewed by: jhb
Tested by: pho
Differential Revision: https://reviews.freebsd.org/D22626
This adds a new -D/--all-repeats option to uniq(1), which outputs each copy
of any repeated lines (as opposed to a single copy of a repeated line). You
can specify a separator option to output a blank line before or after each
group of repeated lines. This adds compatibility with the GNU coreutils
version of uniq(1).
This change also re-groups the -c, -d, -D, -u options in the usage display
and man page to indicate that they are mutually exclusive of each other. This
matches the POSIX/Open Group definition of the uniq(1) command line args. Note
that this change does NOT actually enforce the mutual exclusion in the code;
for now, it simply documents that the arguments should be considered
exclusive with each other.
Differential Revision: https://reviews.freebsd.org/D22262
And remove the inline/deprecated attribute use entirely in stdlib.h, from
r355747. The intent was to provide a buildable API transition period, but
clearly that was counter-productive.
Reported by: delphij, imp, others
Within command completion processing the callback function may access the
DMAed data buffer. Synchronize it before use, not after.
This allows NVMe disks to be used on non-DMA-coherent arm64 systems.
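In the completion path this amounts to (sketch; the tag/map names are
hypothetical):

    /* Make the CPU's view of the DMAed buffer coherent before the
     * callback inspects it, not after. */
    bus_dmamap_sync(payload_dma_tag, payload_dma_map,
        BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
    (*cb_fn)(cb_arg, &cpl);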
MFC after: 3 weeks
This #ifdef is misleading as there are actually no user-serviceable parts
inside and, as far as I can tell, there is no pollution leading from
userland to this header. Furthermore, it becomes a slight nuisance when
attempting to move things around in this header.
that if fault fails to progress and needs to restart the loop it must free
the page it is working on and allocate again on restart. Resolve the few
places that need to be modified to support this condition and simply
deactivate the page. Presently, we only permit this when fault restarts
for busy contention. This has an added benefit of removing some object
trylocking in this case.
While here consolidate some page cleanup logic into fault_page_free() and
fault_page_release() to reduce redundant code and automate some teardown.
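For instance, the release helper boils down to something like this (sketch,
page-queue locking elided):

    static void
    fault_page_release(vm_page_t *mp)
    {
            vm_page_t m;

            m = *mp;
            if (m == NULL)
                    return;
            /* The fault will restart and re-busy a page; deactivate this
             * one so it can be reclaimed if it is never used again. */
            vm_page_deactivate(m);
            vm_page_xunbusy(m);
            *mp = NULL;
    }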
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D22653
an exclusive object lock.
Previously swap space was freed on a best effort basis when a page that
had valid swap was dirtied, thus invalidating the swap copy. This may be
done inconsistently and requires the object lock which is not always
convenient.
Instead, track when swap space is present. The first dirty is responsible
for deleting space or setting PGA_SWAP_FREE which will trigger background
scans to free the swap space.
Simplify the locking in vm_fault_dirty() now that we can reliably identify
the first dirty.
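The first-dirty handling is roughly (sketch; the predicate helpers are
hypothetical):

    if (becoming_dirty && page_has_swap_space(m)) {
            if (VM_OBJECT_WOWNED(m->object))
                    swap_pager_unswapped(m);        /* free stale copy now */
            else
                    vm_page_aflag_set(m, PGA_SWAP_FREE); /* background scan */
    }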
Discussed with: alc, kib, markj
Differential Revision: https://reviews.freebsd.org/D22654
require the object lock to synchronize collapse. Other swap objects such
as tmpfs do not.
Reported by: mjg
Reviewed by: kib, markj
Differential Revision: https://reviews.freebsd.org/D22747
exec_map_first_page(). This will also enable pagein clustering for other
interested consumers (tmpfs, md, etc).
Discussed with: alc
Approved by: kib
Differential Revision: https://reviews.freebsd.org/D22731