64bit and 32bit ABIs. As a side-effect, it enables AVX on capable
CPUs.
In particular:
- Query the CPU for XSAVE support, the list of supported extensions,
and the required size of the FPU save area (see the sketch after this
list). The hw.use_xsave tunable is provided for disabling XSAVE, and
hw.xsave_mask may be used to select the enabled extensions.
- Remove the FPU save area from the PCB and dynamically allocate the
(run-time sized) user save area on top of the kernel stack, right
above the PCB. Reorganize the thread0 PCB initialization to postpone
it until after the BSP is queried for the save area size.
- The dumppcb, stoppcbs and susppcbs no longer carry the FPU state
either. The FPU state is only needed for suspend, where it is saved
in the dynamically allocated suspfpusave area.
- Use XSAVE and XRSTOR to save/restore FPU state, if supported and
enabled.
- Define a new mcontext_t flag _MC_HASFPXSTATE, indicating that the
mcontext_t has a valid pointer to out-of-struct extended FPU state.
Signal handlers are supplied with a stack-allocated FPU state. The
sigreturn(2) and setcontext(2) syscalls honour the flag, allowing
signal handlers to inspect and manipulate the extended state in the
interrupted context.
- getcontext(2) never returns the extended state, since there is no
place in the fixed-size mcontext_t for the variable-sized save area,
and, since mcontext_t is embedded into ucontext_t, this is impossible
to fix in a reasonable way. Instead of extending the getcontext(2)
syscall, provide a sysarch(2) facility to query the extended FPU
state.
- Add ptrace(2) support for getting and setting the extended state;
while there, implement the missing PT_I386_{GET,SET}XMMREGS for 32bit
binaries.
- Change fpu_kern KPI to not expose struct fpu_kern_ctx layout to
consumers, making it opaque. Internally, struct fpu_kern_ctx now
contains a space for the extended state. Convert in-kernel consumers
of fpu_kern KPI both on i386 and amd64.
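As an illustration of the first point above (a sketch only, not the code
from this change; query_xsave(), xsave_mask and xsave_area_size are
made-up names), the supported extension mask and the save area size can
be read from CPUID leaf 0xD with the cpuid_count() helper:

    #include <sys/param.h>
    #include <machine/cpufunc.h>

    static uint64_t xsave_mask;     /* supported XCR0 bits, cf. hw.xsave_mask */
    static u_int    xsave_area_size;

    static void
    query_xsave(void)
    {
            u_int regs[4];

            /* Assumes CPUID.1:ECX.XSAVE has already been checked. */
            cpuid_count(0xd, 0x0, regs);
            xsave_mask = ((uint64_t)regs[3] << 32) | regs[0];  /* EDX:EAX */
            xsave_area_size = regs[2];  /* ECX: size for all supported state */
    }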
First version of the support for AVX was submitted by Tim Bird
<tim.bird am sony com> on behalf of Sony. This version was written
from scratch.
Tested by: pho (previous version), Yamagi Burmeister <lists yamagi org>
MFC after: 1 month
context from in-kernel execution of padlock instructions and to handle
spurious FPUDNA exceptions that are sometimes raised when doing padlock
calculations.
Globally mark crypto(9) kthread as using FPU.
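For illustration, a consumer of the fpu_kern KPI brackets its FPU-using
code roughly as follows; the function name is invented and the context
allocation is left out:

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <machine/fpu.h>        /* <machine/npx.h> on i386 */

    static void
    example_fpu_section(struct thread *td, struct fpu_kern_ctx *ctx)
    {
            fpu_kern_enter(td, ctx, FPU_KERN_NORMAL);
            /* ... padlock/FPU instructions run safely here ... */
            fpu_kern_leave(td, ctx);
    }

A kernel thread that uses the FPU for its whole lifetime can instead
call fpu_kern_thread() once, which is what the crypto(9) kthread change
above amounts to.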
Reviewed by: pjd
Hardware provided by: Sentex Communications
Tested by: pho
PR: amd64/135014
MFC after: 1 month
to kproc_xxx as they actually make whole processes.
This makes way for us to add REAL kthread_create() and friends
that actually make threads. It turns out that most of these
calls actually end up being moved back to the thread version
when it is added, but we need to make this cosmetic change first.
I'd LOVE to do this rename in 7.0 so that we can eventually MFC the
new kthread_xxx() calls.
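After the rename, a typical call site looks roughly like this (the
consumer names here are hypothetical):

    #include <sys/param.h>
    #include <sys/kthread.h>
    #include <sys/proc.h>

    static struct proc *example_proc;

    static void
    example_main(void *arg)
    {
            /* ... worker loop ... */
            kproc_exit(0);
    }

    static void
    example_start(void)
    {
            /* Formerly kthread_create(); it really creates a process. */
            kproc_create(example_main, NULL, &example_proc, 0, 0,
                "example");
    }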
specific privilege names to a broad range of privileges. These may
require some future tweaking.
Sponsored by: nCircle Network Security, Inc.
Obtained from: TrustedBSD Project
Discussed on: arch@
Reviewed (at least in part) by: mlaier, jmg, pjd, bde, ceri,
Alex Lyashkov <umka at sevcity dot net>,
Skip Ford <skip dot ford at verizon dot net>,
Antoine Brodin <antoine dot brodin at laposte dot net>
if the specified priority is zero. This avoids a race where the calling
thread could read a snapshot of its current priority, then a different
thread could change the first thread's priority, and then the original
thread would call sched_prio() inside msleep(), undoing the change made
by the second thread. I used a priority of zero since no thread that
calls msleep() or tsleep() should be specifying a priority of zero
anyway.
The various places that passed 'curthread->td_priority' or some variant
as the priority now pass 0.
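For example, a caller that used to pass curthread->td_priority now does
something like the following sketch (names hypothetical):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static int
    example_wait(void *chan, struct mtx *m)
    {
            /* A priority of 0 leaves the calling thread's priority alone. */
            return (msleep(chan, m, 0, "exwait", 0));
    }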
only allow proper values. ENTROPYSOURCE is maxval+1, not an
allowable value.
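In other words, a valid source number must be strictly below
ENTROPYSOURCE; a hypothetical sanity check would read:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/random.h>

    static void
    example_check_origin(int origin)
    {
            /* ENTROPYSOURCE is maxval+1 and is itself out of range. */
            KASSERT(origin >= 0 && origin < ENTROPYSOURCE,
                ("invalid entropy source %d", origin));
    }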
Suggested loose protons in the solution: phk
Prefers to keep the pH close to seven: markm
place.
This moves the dependency on GCC's and other compilers' features into
the central sys/cdefs.h file, while the individual source files can
then refer to #ifdef __COMPILER_FEATURE_FOO where they previously
referred to #if __GNUC__ > 3.1415 && __BARC__ <= 42.
So far, GCC and ICC (the Intel compiler) have been actively tested on
IA32 platforms by netchild. Extension to other compilers is expected
to be possible, of course.
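A hypothetical feature macro following this scheme (FOO is just the
placeholder used above, and the version test is illustrative) would
look like:

    /* sys/cdefs.h: the compiler-version test lives in one central place. */
    #if (defined(__GNUC__) && __GNUC__ >= 3) || defined(__INTEL_COMPILER)
    #define __COMPILER_FEATURE_FOO  1
    #endif

    /* foo.c: individual sources test the feature, not the compiler. */
    #ifdef __COMPILER_FEATURE_FOO
    /* ... use the feature ... */
    #endif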
Submitted by: netchild
Reviewed by: various developers on arch@, some time ago
supported for STAILQ via STAILQ_CONCAT().
(2) Maintain a count of the number of entries in the thread-local entropy
fifo so that we can keep the other fifo counts in synch.
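STAILQ_CONCAT() appends the second queue to the first and leaves the
second one empty, which is why the counts have to be adjusted in the
same step; roughly (structure and names are made up):

    #include <sys/queue.h>

    struct harvest;                 /* queued event, details elided */

    struct fifo {
            STAILQ_HEAD(, harvest) head;
            int count;
    };

    /* Drain src (e.g. the thread-local fifo) into dst in O(1). */
    static void
    fifo_drain(struct fifo *dst, struct fifo *src)
    {
            STAILQ_CONCAT(&dst->head, &src->head);  /* src is left empty */
            dst->count += src->count;
            src->count = 0;
    }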
MFC after: 3 weeks
MFC with: randomdev_soft.c revisions 1.5 and 1.6
Suggested by: jhb (1)
- Trade off granularity to reduce overhead, since the current model
doesn't appear to reduce contention substantially: move to a single
harvest mutex protecting harvesting queues, rather than one mutex
per source plus a mutex for the free list.
- Reduce mutex operations in a harvesting event to 2 from 4, and
maintain lockless read to avoid mutex operations if the queue is
full.
- When reaping harvested entries from the queue, move all entries from
the queue at once, and when done with them, insert them all into a
thread-local queue for processing; then insert them all into the
empty fifo at once. This reduces roughly 4n mutex operations per
wakeup to 2 (see the sketch after this list).
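The reaping pattern described in the last item amounts to roughly the
following sketch; every name here is illustrative, not the actual
randomdev code:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/queue.h>

    struct hv {                             /* hypothetical queued event */
            STAILQ_ENTRY(hv) next;
    };
    static STAILQ_HEAD(hvq, hv) harvestfifo = STAILQ_HEAD_INITIALIZER(harvestfifo);
    static struct hvq emptyfifo = STAILQ_HEAD_INITIALIZER(emptyfifo);
    static struct mtx harvest_mtx;          /* the single harvest mutex */
    MTX_SYSINIT(harvest_mtx, &harvest_mtx, "harvest", MTX_DEF);

    static void
    process_event(struct hv *e)
    {
            /* Hash the event into the entropy pool, etc. (elided). */
            (void)e;
    }

    static void
    reap_and_process(void)
    {
            struct hvq local = STAILQ_HEAD_INITIALIZER(local);
            struct hv *e;

            mtx_lock(&harvest_mtx);
            STAILQ_CONCAT(&local, &harvestfifo);    /* 1st lock/unlock pair */
            mtx_unlock(&harvest_mtx);

            STAILQ_FOREACH(e, &local, next)         /* no lock held here */
                    process_event(e);

            mtx_lock(&harvest_mtx);
            STAILQ_CONCAT(&emptyfifo, &local);      /* 2nd lock/unlock pair */
            mtx_unlock(&harvest_mtx);
    }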
In the future, we may want to look at re-introducing granularity,
although perhaps at the granularity of the source rather than the
source class; both the new and old strategies would cause contention
between different instances of the same source (i.e., multiple
network interfaces).
Reviewed by: markm
full, avoiding the cost of mutex operations if it is. We re-test
once the mutex is acquired to make sure it's still true before doing
the -modify-write part of the read-modify-write. Note that due to
the maximum fifo depth being pretty deep, this is unlikely to improve
harvesting performance yet.
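The check-then-recheck looks roughly like the sketch below; the names,
the depth constant and the declarations are all made up for
illustration:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/queue.h>

    #define HARVEST_FIFO_MAX 256            /* hypothetical depth limit */

    struct hv {
            STAILQ_ENTRY(hv) next;
    };
    static STAILQ_HEAD(, hv) harvestfifo = STAILQ_HEAD_INITIALIZER(harvestfifo);
    static int harvestfifo_count;
    static struct mtx harvest_mtx;
    MTX_SYSINIT(harvest_mtx, &harvest_mtx, "harvest", MTX_DEF);

    static void
    harvest_enqueue(struct hv *e)
    {
            /* Lockless read: skip the mutex if the fifo already looks full. */
            if (harvestfifo_count >= HARVEST_FIFO_MAX)
                    return;

            mtx_lock(&harvest_mtx);
            /* Re-test under the lock before the -modify-write part. */
            if (harvestfifo_count < HARVEST_FIFO_MAX) {
                    STAILQ_INSERT_TAIL(&harvestfifo, e, next);
                    harvestfifo_count++;
            }
            mtx_unlock(&harvest_mtx);
    }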
Approved by: markm
for unknown events.
A number of modules return EINVAL in this instance, and I have left
those alone for now and instead taught MOD_QUIESCE to accept this
as "didn't do anything".
Nehemiah chip, but the work is all done in hardware.
There are three opportunities to add other entropy; the Data
Buffer, the Cipher's IV and the Cipher's key. A future commit
will exploit these opportunities.
uiomove(9) is not properly locked. So, return to NEEDGIANT
mode. Later, when uiomove(9) is more finely locked, I'll revisit.
While I'm here, provide some temporary debugging output to
help catch blocking startups.
can more easily be used INSTEAD OF the hard-working Yarrow.
The only hardware source used at this point is the one inside
the VIA C3 Nehemiah (Stepping 3 and above) CPU. More sources will
be added in due course. Contributions welcome!
Introduce d_version field in struct cdevsw, this must always be
initialized to D_VERSION.
Flip sense of D_NOGIANT flag to D_NEEDGIANT, this involves removing
four D_NOGIANT flags and adding 145 D_NEEDGIANT flags.
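A driver declaration now looks roughly like this (the driver names are
invented for illustration, and the method definitions are omitted):

    #include <sys/param.h>
    #include <sys/conf.h>

    static d_open_t example_open;   /* defined elsewhere in the driver */
    static d_read_t example_read;

    static struct cdevsw example_cdevsw = {
            .d_version =    D_VERSION,      /* now mandatory */
            .d_flags =      D_NEEDGIANT,    /* still relies on Giant */
            .d_open =       example_open,
            .d_read =       example_read,
            .d_name =       "example",
    };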
happen in interrupt context: 1) sleep locks, and 2) malloc/free
calls.
1) is fixed by using spin locks instead.
2) is fixed by preallocating a FIFO (implemented with a STAILQ)
and using elements from this FIFO instead. This turns out
to be rather fast.
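A minimal sketch of the second fix, with invented names: entries are
taken from a preallocated free list under a spin mutex, so neither
malloc(9) nor a sleepable lock is needed in interrupt context.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/queue.h>

    struct event {
            STAILQ_ENTRY(event) next;
    };

    static STAILQ_HEAD(, event) freelist = STAILQ_HEAD_INITIALIZER(freelist);
    static struct mtx event_mtx;
    MTX_SYSINIT(event_mtx, &event_mtx, "event freelist", MTX_SPIN);

    static struct event *
    event_get(void)
    {
            struct event *e;

            mtx_lock_spin(&event_mtx);
            e = STAILQ_FIRST(&freelist);
            if (e != NULL)
                    STAILQ_REMOVE_HEAD(&freelist, next);
            mtx_unlock_spin(&event_mtx);
            return (e);             /* NULL means the FIFO is exhausted */
    }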
OK'ed by: re (scottl)
Thanks to: peter, jhb, rwatson, jake
Apologies to: *
o Each source gets its own queue, which is a FIFO, not a ring buffer.
The FIFOs are implemented with the sys/queue.h macros. The separation
is so that a low entropy/high rate source can't swamp the harvester
with low-grade entropy and destroy the reseeds.
o Each FIFO is limited to 256 (set as a macro, so adjustable) events
queueable. Full FIFOs are ignored by the harvester. This is to
prevent memory wastage, and helps to keep the kernel thread CPU
usage within reasonable limits (see the sketch after this list).
o There is no need to break up the event harvesting into ${burst}
sized chunks, so retire that feature.
o Break the device away from its roots with the memory device, and
allow it to get its major number automagically.
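The per-source separation and the depth cap amount to roughly the
following declaration-only sketch; the type and macro names are
illustrative, and locking is omitted:

    #include <sys/queue.h>
    #include <sys/random.h>                 /* ENTROPYSOURCE */

    #define HARVEST_FIFO_MAX 256            /* per-source depth, adjustable */

    struct harvest {                        /* hypothetical queued event */
            STAILQ_ENTRY(harvest) next;
    };

    /* One independent FIFO per entropy source, so a chatty low-grade
     * source cannot crowd out the others before a reseed. */
    static struct harvest_fifo {
            STAILQ_HEAD(, harvest) head;
            int count;                      /* capped at HARVEST_FIFO_MAX;
                                             * full FIFOs are ignored */
    } harvestfifo[ENTROPYSOURCE];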