Commit Graph

27 Commits

Author SHA1 Message Date
David Schultz
a7eaecefba Generate a warning if the kernel's arc4random() is seeded with bogus entropy. 2012-01-16 20:18:10 +00:00
Julian Elischer
3745c395ec Rename the kthread_xxx (e.g. kthread_create()) calls
to kproc_xxx as they actually make whole processes.
This makes way for us to add REAL kthread_create() and friends
that actually make threads. It turns out that most of these
calls actually end up being moved back to the thread version
when it's added, but we need to make this cosmetic change first.

I'd LOVE to do this rename in 7.0 so that we can eventually MFC the
new kthread_xxx() calls.
2007-10-20 23:23:23 +00:00
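
A minimal sketch of the renamed interface in use, assuming the post-rename
kproc_create()/kproc_exit() names and a hypothetical harvester worker; the
real callers in the tree differ:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/kthread.h>
    #include <sys/proc.h>

    static struct proc *harvester_proc;	/* hypothetical */

    static void
    harvester_main(void *arg)
    {
    	for (;;) {
    		/* ... periodic work ... */
    		tsleep(&harvester_proc, PWAIT, "hrvst", hz);
    	}
    	/* kproc_exit(0); -- if the loop ever terminated */
    }

    static void
    harvester_start(void *dummy)
    {
    	/* Formerly kthread_create(); renamed because it creates a whole process. */
    	if (kproc_create(harvester_main, NULL, &harvester_proc,
    	    0, 0, "harvester") != 0)
    		printf("harvester: cannot start kernel process\n");
    }
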
Robert Watson
8f30dde5fe Annotate that get_cyclecount() can be expensive on some platforms,
which juxtaposes nicely with the comment just above on how the
harvest function must be cheap.
2004-10-18 19:29:13 +00:00
Mark Murray
2a8b87d883 Default to harvesting everything. This is to help give a faster
startup. Harvesting can be turned OFF in etc/rc.d/* if it is a
burden.
2004-04-16 17:07:11 +00:00
Mark Murray
e7806b4c0e Reorganise the entropy device so that high-yield entropy sources
can more easily be used INSTEAD OF the hard-working Yarrow.
The only hardware source used at this point is the one inside
the VIA C3 Nehemiah (Stepping 3 and above) CPU. More sources will
be added in due course. Contributions welcome!
2004-04-09 15:47:10 +00:00
John Baldwin
6074439965 kthread_exit() no longer requires Giant, so don't force callers to acquire
Giant just to call kthread_exit().

Requested by:	many
2004-03-05 22:42:17 +00:00
Mark Murray
0887c8c110 Overhaul the entropy device:
o Each source gets its own queue, which is a FIFO, not a ring buffer.
  The FIFOs are implemented with the sys/queue.h macros. The separation
  is so that a low entropy/high rate source can't swamp the harvester
  with low-grade entropy and destroy the reseeds.

o Each FIFO is limited to 256 queueable events (set as a macro, so
  adjustable). Full FIFOs are ignored by the harvester. This is to
  prevent memory wastage, and helps to keep the kernel thread's CPU
  usage within reasonable limits.

o There is no need to break up the event harvesting into ${burst}
  sized chunks, so retire that feature.

o Break the device away from its roots with the memory device, and
  allow it to get its major number automagically.
2003-11-17 23:02:21 +00:00
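
A rough illustration of the per-source queueing described above, using the
sys/queue.h STAILQ macros and a hypothetical event structure; the driver's
real types and limit macro are named differently:

    #include <sys/queue.h>

    #define HARVEST_FIFO_MAX	256	/* hypothetical name for the per-source cap */

    struct harvest_event {
    	unsigned int			he_entropy;	/* hypothetical payload */
    	STAILQ_ENTRY(harvest_event)	he_next;
    };

    struct source_fifo {
    	STAILQ_HEAD(, harvest_event)	sf_head;
    	int				sf_count;
    };

    static void
    fifo_init(struct source_fifo *sf)
    {
    	STAILQ_INIT(&sf->sf_head);
    	sf->sf_count = 0;
    }

    /* A full FIFO simply drops the event, so one noisy, low-grade
       source cannot swamp the harvester. */
    static int
    fifo_enqueue(struct source_fifo *sf, struct harvest_event *ev)
    {
    	if (sf->sf_count >= HARVEST_FIFO_MAX)
    		return (-1);
    	STAILQ_INSERT_TAIL(&sf->sf_head, ev, he_next);
    	sf->sf_count++;
    	return (0);
    }
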
David E. O'Brien
aad970f1fe Use __FBSDID().
Also some minor style cleanups.
2003-08-24 17:55:58 +00:00
Andrey A. Chernov
952b6153de Remove srandom():
1) It is already called in init_main.c:proc0_post()
2) It is called each time read_random_phony() is called, because the
"initialized" variable is never set to 1.

Approved by:    markm
2003-02-05 15:56:04 +00:00
Mark Murray
815eb79cb8 Fix a really dumb braino of mine; cast a sizeof() to the int it is
being compared to, not to size_t, which it already is.
2002-04-21 11:02:36 +00:00
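
For illustration only, the shape of the fix being described, with hypothetical
names; sizeof() already yields a size_t, so it is the operand that gets cast
to match the int it is compared against:

    struct sample { char buf[16]; };	/* hypothetical */

    static int
    big_enough(int resid)
    {
    	/* Cast the sizeof() to int, rather than casting resid to size_t. */
    	return (resid >= (int)sizeof(struct sample));
    }
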
Mark Murray
e119960112 Massive lint-inspired cleanup.
Remove unneeded includes.
Deal with unused function arguments.
Resolve a boatload of signed/unsigned incompatibilities.
Etc.
2002-03-03 19:44:22 +00:00
Mark Murray
87242f3724 Fix type warnings.
PR: 29101
2001-07-20 08:58:04 +00:00
John Baldwin
f34fa851e0 Catch up to header include changes:
- <sys/mutex.h> now requires <sys/systm.h>
- <sys/mutex.h> and <sys/sx.h> now require <sys/lock.h>
2001-03-28 09:17:56 +00:00
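
A minimal sketch of the include order this change implies for a file that uses
<sys/mutex.h> and <sys/sx.h>, assuming nothing else pulls in the prerequisites:

    #include <sys/param.h>
    #include <sys/systm.h>	/* now required before <sys/mutex.h> */
    #include <sys/lock.h>	/* now required by <sys/mutex.h> and <sys/sx.h> */
    #include <sys/mutex.h>
    #include <sys/sx.h>
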
Mark Murray
02880f27f4 Silence (harmless) warnings. 2001-03-24 08:38:42 +00:00
Mark Murray
02c986ab54 Very large makeover of the /dev/random driver.
o Separate the kernel stuff from the Yarrow algorithm. Yarrow is now
  well contained in one source file and one header.

o Replace the Blowfish-based crypto routines with Rijndael-based ones.
  (Rijndael is the new AES algorithm.) Rijndael's much better
  key-agility compared to Blowfish makes for a dramatic improvement
  in speed, and a heck of a difference in its (lack of) CPU load.

o Clean up the sysctls. At BDE's prompting, I have gone back to
  static sysctls.

o Bug fixes. The streamlining of the crypto stuff enabled me to
  find and fix some bugs. DES also found a bug in the reseed routine
  which is fixed.

o Change the way reseeds clear "used" entropy. Previously, only the
  source(s) that caused a reseed were cleared. Now all sources in the
  relevant pool(s) are cleared.

o Code tidy-up. Mostly to make it (nearly) 80-column compliant.
2001-03-10 12:51:55 +00:00
Mark Murray
14636c3b51 Provide the infrastructure for sysadmins to select the broad classes
of entropy harvesting they wish to perform: "ethernet" (LAN),
point-to-point, and interrupt.
2001-02-18 17:40:47 +00:00
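
A hedged sketch of how one such selection knob might be declared on the kernel
side; the variable and OID names here are hypothetical, not the driver's real
ones:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int harvest_ethernet = 1;	/* hypothetical toggle */

    SYSCTL_INT(_kern, OID_AUTO, harvest_ethernet, CTLFLAG_RW,
        &harvest_ethernet, 0, "Harvest entropy from LAN traffic");
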
Bosko Milekic
9ed346bab0 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

Similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
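
A short before/after sketch of the renamed interface described above; the
locks are hypothetical and assumed to have been initialised elsewhere with
mtx_init():

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static struct mtx data_mtx;	/* an MTX_DEF (sleep) lock, hypothetical */
    static struct mtx intr_mtx;	/* an MTX_SPIN lock, hypothetical */

    static void
    example(void)
    {
    	/* Old: mtx_enter(&data_mtx, MTX_DEF); */
    	mtx_lock(&data_mtx);
    	/* ... touch the protected data ... */
    	mtx_unlock(&data_mtx);
    	/* Old: mtx_exit(&data_mtx, MTX_DEF); */

    	/* Spin locks get their own entry points. */
    	mtx_lock_spin(&intr_mtx);
    	mtx_unlock_spin(&intr_mtx);

    	/* The remaining flags go through the explicit _flags wrappers. */
    	mtx_lock_flags(&data_mtx, MTX_QUIET);
    	mtx_unlock_flags(&data_mtx, MTX_QUIET);
    }
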
Garrett Wollman
0a2c3d48c6 select() DKI is now in <sys/selinfo.h>. 2001-01-09 04:33:49 +00:00
Mark Murray
e73a42f8fb Stop explicitly using nanotime(9) and use the new get_cyclecount(9)
call instead.

This makes a pretty dramatic difference to the amount of work that
the harvester needs to do - it is much friendlier on the system.
(80386 and 80486 class machines will notice little, as the new
get_cyclecount() call is a wrapper around nanotime(9) for them).
2000-11-25 17:09:01 +00:00
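
A sketch of the cheaper timestamping this change describes, with a
hypothetical event structure; on 386/486-class machines get_cyclecount()
is just a wrapper around nanotime(9), as the commit notes:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <machine/cpu.h>	/* get_cyclecount() */

    struct harvest_stamp {
    	uint64_t	hs_cycles;	/* hypothetical field */
    };

    static void
    stamp_event(struct harvest_stamp *hs)
    {
    	/* Far cheaper than nanotime(9) on CPUs with a cycle counter. */
    	hs->hs_cycles = get_cyclecount();
    }
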
John Baldwin
35e0e5b311 Catch up to moving headers:
- machine/ipl.h -> sys/ipl.h
- machine/mutex.h -> sys/mutex.h
2000-10-20 07:58:15 +00:00
Mark Murray
a6278a2a42 After some complaints about the dir names, the random device is
now in dirs called sys/*/random/ instead of sys/*/randomdev/*.

Introduce blocking, but only at startup; the random device will
block until the first reseed happens to prevent clients from
using untrustworthy output.

Provide a read_random() call for the rest of the kernel so that
the entropy device does not need to be present. This means that
things like IPX no longer need to have "device random" hardcoded
into their kernel config. The downside is that read_random() will
provide very poor output until the entropy device is loaded and
reseeded. It is recommended that developers do NOT use the
read_random() call; instead, they should use arc4random() which
internally uses read_random().

Clean up the mutex and locking code a bit; this makes it possible
to unload the module again.
2000-10-14 10:59:56 +00:00
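
A minimal sketch of the recommended pattern, with a hypothetical consumer;
arc4random() reseeds itself from read_random() internally, which is why it is
the preferred entry point for general kernel code:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/libkern.h>	/* arc4random() */
    #include <sys/random.h>	/* read_random() */

    static uint16_t
    pick_ephemeral_port(void)	/* hypothetical consumer */
    {
    	/* Preferred: arc4random(). */
    	return (1024 + (arc4random() % 64512));
    }

    static void
    raw_bytes(void *buf, int len)
    {
    	/* Discouraged for general use: output is weak until the entropy
    	   device has been loaded and has reseeded. */
    	(void)read_random(buf, len);
    }
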
Mark Murray
4a8612fd41 Remove unneeded includes.
Submitted by:	phk
2000-09-21 06:23:16 +00:00
Mark Murray
4d87a031c0 Large upgrade to the entropy device; mainly inspired by feedback
from many folk.

o The reseed process is now a kthread. With SMPng, kthreads are
  pre-emptive, so the annoying jerkiness of the mouse is gone.

o The data structures are protected by mutexes now, not splfoo()/splx().

o The cryptographic routines are broken out into their own subroutines.
  This facilitates review, and possible replacement if that is ever
  found necessary.

Thanks to:		kris, green, peter, jasone, grog, jhb
Forgotten to thank:	You know who you are; no offense intended.
2000-09-10 13:52:19 +00:00
Mark Murray
7aa4389a6c o Fix a horrible bug where small reads (< 8 bytes) would return the
wrong bytes.

o Improve the public interface; use void* instead of char* or u_int64_t
  to pass arbitrary data around.
Submitted by:	kris ("horrible bug")
2000-07-25 21:18:47 +00:00
Mark Murray
c90a8fc9a5 Clean this up with some BDE-inspired fixes.
o Make the comments KNF-compliant.
o Use nanotime instead of getnanotime; the manpage lies about the
  kern.timecounter.method - it has been removed.
o Fix the ENTROPYSOURCE const permanently.
o Make variable names more consistent.
o Make function prototypes more consistent.

Some more needs to be done; to follow.
2000-07-23 11:08:16 +00:00
Mark Murray
4d0e6f79d6 Storing to a pointer is (effectively) atomic; no need to protect this
with splhigh(). However, the entropy-harvesting routine needs pretty
serious irq-protection, as it is called out of irq handlers etc.

Clues given by:	bde
2000-07-11 19:37:25 +00:00
Mark Murray
c9ec235ca1 Add entropy gathering code. This will work whether the module is
compiled in or loaded.
2000-07-07 09:03:59 +00:00