o Separate the kernel stuff from the Yarrow algorithm. Yarrow is now
well contained in one source file and one header.
o Replace the Blowfish-based crypto routines with Rijndael-based ones
(Rijndael is the new AES algorithm). Rijndael's vastly better key-agility
makes this a dramatic speed improvement over Blowfish and greatly reduces
the CPU load (a toy sketch of why follows this list).
o Clean up the sysctls. At BDE's prompting, I have gone back to
static sysctls.
o Bug fixes. Streamlining the crypto code enabled me to find and fix
some bugs. DES also found a bug in the reseed routine, which is now
fixed.
o Change the way reseeds clear "used" entropy. Previously, only the
source(s) that caused a reseed were cleared. Now all sources in the
relevant pool(s) are cleared.
o Code tidy-up. Mostly to make it (nearly) 80-column compliant.
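
A toy standalone sketch (invented names, not the committed Rijndael code)
of why key-agility matters so much here: a Yarrow-style generator rekeys
its block cipher at every "gate", so the cost of the key schedule, which is
where Blowfish is hundreds of times slower than Rijndael, dominates the
generator's CPU load.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /*
     * Illustration only: the "cipher" is a stand-in, not Rijndael.  The
     * point is the structure: the key schedule runs every GATE_INTERVAL
     * output blocks, so a cheap key setup keeps the generator fast.
     */
    #define BLOCKSIZE       16
    #define GATE_INTERVAL   10      /* blocks generated between rekeys */

    struct toy_generator {
            uint8_t key[BLOCKSIZE];
            uint8_t counter[BLOCKSIZE];
            int     blocks_since_rekey;
    };

    /* Stand-in for the key schedule: expensive in Blowfish, cheap in Rijndael. */
    static void toy_key_schedule(struct toy_generator *g, const uint8_t *newkey)
    {
            memcpy(g->key, newkey, BLOCKSIZE);
    }

    /* Stand-in for encrypting the counter with the current key. */
    static void toy_encrypt_counter(struct toy_generator *g, uint8_t *out)
    {
            for (int i = 0; i < BLOCKSIZE; i++)
                    out[i] = g->key[i] ^ g->counter[i];
            g->counter[0]++;
    }

    static void toy_generate_block(struct toy_generator *g, uint8_t *out)
    {
            toy_encrypt_counter(g, out);
            if (++g->blocks_since_rekey >= GATE_INTERVAL) {
                    /* Generator gate: feed output back in as the next key. */
                    toy_key_schedule(g, out);
                    g->blocks_since_rekey = 0;
            }
    }

    int main(void)
    {
            struct toy_generator g = { .key = {1}, .counter = {2} };
            uint8_t block[BLOCKSIZE];

            for (int i = 0; i < 100; i++)
                    toy_generate_block(&g, block);
            printf("first output byte: %u\n", block[0]);
            return 0;
    }
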
Use mchain API to work with mbuf chains.
Do not depend on INET and IPX options.
Allocate the ncp_rq structure dynamically to prevent possible stack
overflows (see the sketch below).
Let ncp_request() dispose of the control structure if the request failed.
Move all NCP wrappers to ncp_ncp.c and all NCP request processing
functions to ncp_rq.c.
Improve reconnection logic.
Misc style fixes.
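
A hedged sketch of the allocation pattern only (toy structure and helper
names, not the real ncp_rq layout or the kernel malloc(9) interface): the
request structure comes from the heap rather than the caller's stack, and
the request path itself disposes of it when the request fails.

    #include <stdlib.h>
    #include <stdio.h>

    struct toy_rq {
            unsigned char   buf[4096];      /* big enough to smash a kernel stack */
            size_t          len;
    };

    static struct toy_rq *
    toy_rq_alloc(void)
    {
            return calloc(1, sizeof(struct toy_rq));
    }

    static void
    toy_rq_done(struct toy_rq *rqp)
    {
            free(rqp);
    }

    /* On failure, the request routine itself disposes of the structure. */
    static int
    toy_request(struct toy_rq *rqp)
    {
            if (rqp->len > sizeof(rqp->buf)) {
                    toy_rq_done(rqp);
                    return -1;
            }
            return 0;
    }

    int main(void)
    {
            struct toy_rq *rqp = toy_rq_alloc();

            if (rqp == NULL)
                    return 1;
            rqp->len = 16;
            if (toy_request(rqp) == 0)
                    toy_rq_done(rqp);
            printf("done\n");
            return 0;
    }
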
then knocked the extra digits off). Blegh. Update the comment and the
adjustment method for reading the chip clock's year register to note that
any value less than 70 means we're past the year 2000.
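
A small standalone illustration of that rule (not the driver code itself):
two-digit years below 70 are taken as 20xx, everything else as 19xx.

    #include <stdio.h>

    /* Interpret the chip's two-digit year register. */
    static int
    rtc_full_year(int chipyear)         /* 0..99 from the clock chip */
    {
            return (chipyear < 70 ? chipyear + 2000 : chipyear + 1900);
    }

    int
    main(void)
    {
            printf("%d %d\n", rtc_full_year(1), rtc_full_year(99)); /* 2001 1999 */
            return (0);
    }
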
reference count was transferred to the new object, but both the
new and the old map entries had pointers to the new object.
Correct this by transferring the second reference.
This fixes a panic that can occur when mmap(2) is used with the
MAP_INHERIT flag.
PR: i386/25603
Reviewed by: dillon, alc
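
A toy refcounting model of the fix (invented names, nothing like the real
vm_object code): once both the old and the new map entry point at the newly
created object, the object has to carry one reference for each of them.

    #include <assert.h>
    #include <stdio.h>

    struct toy_object { int ref_count; };
    struct toy_entry  { struct toy_object *object; };

    static void
    toy_object_reference(struct toy_object *obj)
    {
            obj->ref_count++;
    }

    int main(void)
    {
            struct toy_object new_obj = { .ref_count = 1 };  /* from creation */
            struct toy_entry old_entry = { &new_obj };
            struct toy_entry new_entry = { &new_obj };

            /* The fix: take the second reference for the second entry. */
            toy_object_reference(&new_obj);

            assert(new_obj.ref_count == 2);
            printf("%p %p refs=%d\n", (void *)old_entry.object,
                (void *)new_entry.object, new_obj.ref_count);
            return 0;
    }
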
on certain types of SOCK_RAW sockets. Also, use the ip.ttl MIB
variable instead of the MAXTTL constant as the default time-to-live
value for outgoing IP packets throughout, as we already do for
TCP and UDP.
Reviewed by: wollman
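
A minimal sketch of the substitution, with a stand-in variable for the
ip.ttl MIB value (ip_defttl is the backing variable in the kernel; the
header structure here is simplified):

    #include <stdio.h>

    #define MAXTTL          255     /* maximum TTL, from <netinet/ip.h> */
    #define IPDEFTTL        64      /* default TTL, from <netinet/ip.h> */

    /* Stand-in for the ip.ttl MIB variable, tunable at run time. */
    static int ip_defttl = IPDEFTTL;

    struct toy_iphdr { unsigned char ip_ttl; };

    static void
    toy_fill_header(struct toy_iphdr *ip)
    {
            /* Before: ip->ip_ttl = MAXTTL;  After: honor the MIB variable. */
            ip->ip_ttl = ip_defttl;
    }

    int main(void)
    {
            struct toy_iphdr ip;

            toy_fill_header(&ip);
            printf("ttl=%d (max would be %d)\n", ip.ip_ttl, MAXTTL);
            return 0;
    }
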
if we hold a spin mutex, since we can trivially get into deadlocks if we
start switching out of processes that hold spinlocks. Checking to see if
interrupts were disabled was a cheap way of approximating this, since most
of the time interrupts were only disabled while holding a spin lock, at
least on the i386. To fix this properly, use a per-process counter
p_spinlocks that counts the number of spin locks currently held, and
instead of checking to see if interrupts are disabled in the witness code,
check to see if we hold any spin locks. Since child processes always
start up with the sched lock magically held in fork_exit(), we initialize
p_spinlocks to 1 for child processes. Note that proc0 doesn't go through
fork_exit(), so it starts with no spin locks held.
Consulting from: cp
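
A toy model of the new bookkeeping (standalone, not the real struct proc or
witness code): the per-process count of held spin locks is what decides
whether switching is safe, and a forked child starts at one because
fork_exit() runs with sched_lock held.

    #include <assert.h>

    struct toy_proc { int p_spinlocks; };

    static void toy_spin_acquire(struct toy_proc *p) { p->p_spinlocks++; }
    static void toy_spin_release(struct toy_proc *p) { p->p_spinlocks--; }

    static int
    toy_switch_is_safe(struct toy_proc *p)
    {
            return (p->p_spinlocks == 0);
    }

    int main(void)
    {
            /* A forked child starts with sched_lock held in fork_exit()... */
            struct toy_proc child = { .p_spinlocks = 1 };
            /* ...while proc0 never passes through fork_exit(). */
            struct toy_proc proc0 = { .p_spinlocks = 0 };

            assert(!toy_switch_is_safe(&child));
            toy_spin_release(&child);       /* fork_exit() drops sched_lock */
            assert(toy_switch_is_safe(&child));

            toy_spin_acquire(&proc0);
            assert(!toy_switch_is_safe(&proc0));
            toy_spin_release(&proc0);
            return 0;
    }
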
into an interruptible sleep and we increment a sleep count, we make sure
that we are the thread that will decrement the count when we wake up.
Otherwise, the following can happen: we get interrupted (by a signal) and
have to wake up, but before we reacquire our mutex, some thread that wants
to wake us up sees that the count is non-zero and enters wakeup_one(),
finds nothing on the sleep queue (so nobody is actually woken up), and
still decrements the sleep count. That is bad because we will decrement
it again later ourselves (as we were interrupted) and are already off
the sleep queue, so the count ends up decremented twice.
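
A toy model of that accounting rule (invented structure, not the real
sleep-queue code): only the sleeping thread decrements the count it
incremented, so a waker that finds the queue empty cannot cause a second
decrement.

    #include <assert.h>
    #include <stdbool.h>

    struct toy_wchan {
            int     sleepers;       /* threads accounted as asleep */
            int     on_queue;       /* threads actually on the sleep queue */
    };

    static void
    toy_sleep_enter(struct toy_wchan *wc)
    {
            wc->sleepers++;
            wc->on_queue++;
    }

    /* The waker no longer touches the sleeper count when the queue is empty. */
    static bool
    toy_wakeup_one(struct toy_wchan *wc)
    {
            if (wc->on_queue == 0)
                    return false;
            wc->on_queue--;
            return true;
    }

    /* Only the thread that incremented the count decrements it. */
    static void
    toy_sleep_leave(struct toy_wchan *wc)
    {
            wc->sleepers--;
    }

    int main(void)
    {
            struct toy_wchan wc = { 0, 0 };

            toy_sleep_enter(&wc);
            wc.on_queue--;                  /* a signal pulls us off the queue */
            assert(!toy_wakeup_one(&wc));   /* waker: queue empty, count untouched */
            toy_sleep_leave(&wc);           /* we decrement our own count, once */
            assert(wc.sleepers == 0);
            return 0;
    }
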
an IP header with ip_len in network byte order. For certain
values of ip_len, this could cause icmp_error() to write
beyond the end of an mbuf, causing mbuf free-list corruption.
This problem was observed during generation of ICMP redirects.
We now make quite sure that the copy of the IP header kept
for icmp_error() is stored in a non-shared mbuf header so
that it will not be modified by ip_output().
Also:
- Calculate the correct number of bytes that need to be
retained for icmp_error(), instead of assuming that 64
is enough (it's not).
- In icmp_error(), use m_copydata instead of bcopy() to
copy from the supplied mbuf chain, in case the first 8
bytes of IP payload are not stored directly after the IP
header.
- Sanity-check ip_len in icmp_error(), and panic if it is
less than sizeof(struct ip). Incoming packets with bad
ip_len values are discarded in ip_input(), so this should
only be triggered by bugs in the code, not by bad packets.
This patch results from code and suggestions from Ruslan, Bosko,
Jonathan Lemon and Matt Dillon, with important testing by Mike
Tancsa, who could reproduce this problem at will.
Reported by: Mike Tancsa <mike@sentex.net>
Reviewed by: ru, bmilekic, jlemon, dillon
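
The length calculation and the sanity check above can be illustrated with
plain arithmetic (simplified field handling, not the actual icmp_error()
code): an ICMP error quotes the offending packet's IP header plus the first
8 bytes of its payload, so the amount to retain depends on the real header
length, and ip_len must be at least the size of a bare IP header.

    #include <assert.h>
    #include <stddef.h>

    #define TOY_IPHDR_MIN   20      /* sizeof(struct ip) without options */

    static size_t
    toy_icmp_quote_len(size_t ip_hlen, size_t ip_len)
    {
            size_t want;

            assert(ip_len >= TOY_IPHDR_MIN);        /* kernel would panic here */
            want = ip_hlen + 8;                     /* header + 8 payload bytes */
            return (want < ip_len ? want : ip_len);
    }

    int main(void)
    {
            /* 60-byte header (maximum options) + 8 > 64, the old assumption. */
            assert(toy_icmp_quote_len(60, 1500) == 68);
            assert(toy_icmp_quote_len(20, 21) == 21);       /* short packet */
            return 0;
    }
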
Since the compiler lays out the struct so that pointers are naturally
(8-byte) aligned, adding the int ki_layout didn't change the size of the
struct; it just converted the alignment padding into a usable struct
field.
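
A standalone illustration of the layout argument with a toy structure (not
the real kinfo_proc): on LP64, a pointer after a lone int forces 4 bytes of
padding, so adding a second int reuses that padding and the size is
unchanged.

    #include <stdio.h>

    struct before { int a;                 void *p; };
    struct after  { int a; int ki_layout;  void *p; };

    int main(void)
    {
            printf("before=%zu after=%zu\n",
                sizeof(struct before), sizeof(struct after));
            /* On a typical 64-bit ABI both print 16. */
            return 0;
    }
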
more robust. They would correctly return ENOMEM the first time the
buffer was exhausted, but subsequent calls in this case could cause
writes outside of the buffer bounds.
Approved by: rwatson
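
A generic sketch of the defensive pattern (invented toy buffer type, not
the original functions): every append re-checks the remaining space, so
calls made after the buffer has filled keep returning ENOMEM instead of
writing past the end.

    #include <errno.h>
    #include <string.h>
    #include <stdio.h>

    struct toy_sbuf {
            char    *buf;
            size_t  size;
            size_t  len;
    };

    static int
    toy_sbuf_cat(struct toy_sbuf *sb, const char *s)
    {
            size_t n = strlen(s);

            if (sb->len + n > sb->size)
                    return ENOMEM;  /* every call checks, not just the first */
            memcpy(sb->buf + sb->len, s, n);
            sb->len += n;
            return 0;
    }

    int main(void)
    {
            char space[8];
            struct toy_sbuf sb = { space, sizeof(space), 0 };

            printf("%d\n", toy_sbuf_cat(&sb, "12345"));     /* 0 */
            printf("%d\n", toy_sbuf_cat(&sb, "6789"));      /* ENOMEM */
            printf("%d\n", toy_sbuf_cat(&sb, "abcd"));      /* ENOMEM, no overflow */
            return 0;
    }
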
in vr_init(). The VIA Rhine chip happens to be able to read its
station address from the EEPROM automatically when reset, so you don't
need to program the filter if you want to keep using the factory
default address, but if you want to change it with "ifconfig vr0
ether xx:xx:xx:xx:xx:xx" then we need to set it manually in the init
routine.
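
A hedged sketch of the idea with invented register-access helpers and an
assumed register offset (the real driver uses its own CSR macros and the
Rhine's station address registers): the init routine writes the possibly
overridden lladdr into the chip byte by byte.

    #include <stdio.h>

    #define ETHER_ADDR_LEN  6

    static unsigned char fake_registers[32];        /* stand-in for the chip */

    static void
    toy_csr_write_1(int reg, unsigned char val)
    {
            fake_registers[reg] = val;
    }

    static void
    toy_vr_setaddr(const unsigned char *lladdr)
    {
            int i;

            /* Write the station address, starting at an assumed offset 0. */
            for (i = 0; i < ETHER_ADDR_LEN; i++)
                    toy_csr_write_1(0x00 + i, lladdr[i]);
    }

    int main(void)
    {
            unsigned char mac[ETHER_ADDR_LEN] =
                { 0x02, 0x00, 0x00, 0x12, 0x34, 0x56 };

            toy_vr_setaddr(mac);
            printf("%02x:%02x\n", fake_registers[0], fake_registers[5]);
            return 0;
    }
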
sys/contrib/dev/acpica/Subsystem/Hardware/Attic/hwxface.c to the proper
location after AcpiEnterSleepState().
- Wait for the WAK_STS bit
- Evaluate the _WAK method and check result code
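
A toy model of the ordering only, with stand-in functions in place of the
real ACPICA calls (the actual code runs after AcpiEnterSleepState()
returns): first poll the wake status bit, then run _WAK and check what it
returns.

    #include <stdio.h>
    #include <stdbool.h>

    static int fake_wak_sts;                /* stand-in for the WAK_STS bit */

    static bool toy_read_wak_sts(void)        { return fake_wak_sts != 0; }
    static int  toy_evaluate_wak(int state)   { (void)state; return 0; }

    static int
    toy_wake_sequence(int sleep_state)
    {
            /* 1. Wait for the wake status bit before doing anything else. */
            while (!toy_read_wak_sts())
                    ;       /* the real code would delay/poll here */

            /* 2. Run _WAK and propagate a failure to the caller. */
            if (toy_evaluate_wak(sleep_state) != 0)
                    return -1;
            return 0;
    }

    int main(void)
    {
            fake_wak_sts = 1;       /* pretend the hardware woke us */
            printf("wake: %d\n", toy_wake_sequence(3));
            return 0;
    }
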
structure rather than assuming that the device vnode would reside
in the FFS filesystem (which is obviously a broken assumption with
the device filesystem).
in the hopes that they will actually *read* the comment above
it and *follow* the instructions so as to cause all the rest
of us a lot less grief.
- Don't try to grab Giant before postsig() in userret() as it is no longer
needed.
- Don't grab Giant before psignal() in ast() but get the proc lock instead.
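
A hedged kernel-style sketch of the psignal() locking pattern (not the
actual ast() body; the wrapper function and the signal argument are
illustrative): the target process's proc lock, not Giant, is what
psignal() needs here.

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/signalvar.h>
    #include <sys/mutex.h>

    static void
    toy_post_signal(struct proc *p, int sig)
    {
            /* Old pattern: mtx_lock(&Giant); psignal(p, sig); mtx_unlock(&Giant); */
            PROC_LOCK(p);
            psignal(p, sig);
            PROC_UNLOCK(p);
    }
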