entry points that will be inserted over the lifetime of the 6.x branch,
including for:
- New struct file labeling (void * already added to struct file), events,
access control checks.
- Additional struct mount access control checks, internalization/
externalization.
- mac_check_cap()
- System call enter/exit check and event.
- Socket and vnode ioctl entry points.
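As a hedged illustration of the convention such entry points follow (a
check returns 0 to permit the operation or an errno value to deny it);
the policy and function names below are made up for the sketch, not
actual kernel identifiers:

    #include <errno.h>

    struct ucred;                   /* kernel credential, opaque here */

    /*
     * Hypothetical check entry point: return 0 to permit or an errno
     * value to deny. A real policy would consult its label slot on
     * the credential before deciding.
     */
    static int
    examplepolicy_check_cap(struct ucred *cred, int capability)
    {
            (void)cred;
            (void)capability;
            return (0);             /* permit by default */
    }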
MFC after: 3 days
o separate configured beacon interval from listen interval; this
avoids potential use of one value for the other (e.g. setting
powersavesleep to 0 clobbers the beacon interval used in hostap
or ibss mode)
o bounds check the beacon interval received in probe response and
beacon frames and drop frames with bogus settings (a sketch of the
check follows this list); it's not clear whether we should instead
clamp the value, as any alteration would result in a mismatched
sta+ap configuration and probably be more confusing (we don't want
to log to the console, but rate-limited logging is perhaps ok)
o while here, up the max beacon interval to reflect the WiFi standard
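A minimal sketch of the bounds check, assuming illustrative limits in
802.11 time units (the exact net80211 constants may differ):

    #include <stdint.h>

    #define BINTVAL_MIN     25      /* assumed lower bound, in TU */
    #define BINTVAL_MAX     1000    /* assumed upper bound, in TU */

    /*
     * Return nonzero if the beacon interval taken from a received
     * beacon or probe response frame is plausible; the caller drops
     * the frame otherwise.
     */
    static int
    beacon_interval_ok(uint16_t bintval)
    {
            return (bintval >= BINTVAL_MIN && bintval <= BINTVAL_MAX);
    }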
Noticed by: Martin <nakal@nurfuerspam.de>
MFC after: 1 week
We check for the /var/run/jail_<name>.id file and, if it exists, don't
start the jail. This should also be safe across reboot(8), because the
rc.d/cleanvar script removes the /var/run/jail_* files.
It helps to avoid the potential mess when the same jail is started
twice because of an administrator mistake (been there, done that).
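The real check lives in an sh(1) rc.d script; a C rendering of the
same logic, for illustration only:

    #include <stdio.h>
    #include <unistd.h>

    /* Nonzero if the jail's id file already exists, meaning the jail
     * was started and we should refuse to start it again. */
    static int
    jail_already_started(const char *name)
    {
            char path[128];

            snprintf(path, sizeof(path), "/var/run/jail_%s.id", name);
            return (access(path, F_OK) == 0);
    }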
MFC after: 1 week
made in pmap_protect(): The pmap's resident count should not be reduced
unless mappings are removed.
The errant change to the pmap's resident count could result in a later
pmap_remove() failing to remove any mappings, if that change had
reduced the pmap's resident count to zero.
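A simplified model of why the stale count matters (the names are
illustrative, not the real pmap code): pmap_remove()-style code returns
early when the pmap claims zero resident pages, so an undercounted pmap
turns the call into a no-op while mappings still exist.

    struct pmap_model {
            long    resident_count;         /* resident page count */
            /* ... page tables ... */
    };

    static void
    pmap_remove_model(struct pmap_model *pm)
    {
            if (pm->resident_count == 0)
                    return;         /* believed empty: nothing to do */
            /* ... walk and remove mappings, decrementing the count ... */
    }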
tokenizer.c:1.3). Contrary to the commit log there were no memory leaks,
but the change introduced a bug: the freed pointer was not zeroed, so
calling the appropriate _end() function would call free() a second time.
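A minimal sketch of the bug pattern and the fix, with illustrative
names rather than the actual tokenizer.c identifiers:

    #include <stdlib.h>

    struct tokenizer {
            char    *buffer;
    };

    static void
    tok_reset(struct tokenizer *t)
    {
            free(t->buffer);
            /* The fix: zero the pointer so a later tok_end() is safe. */
            t->buffer = NULL;
    }

    static void
    tok_end(struct tokenizer *t)
    {
            free(t->buffer);        /* free(NULL) is a harmless no-op */
    }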
o Allocate a VHPT per CPU. The VHPT is a hash table that the CPU
uses to look up translations it can't find in the TLB. As such,
the VHPT serves as a level 1 cache (the TLB being a level 0 cache)
and best results are obtained when it's not shared between CPUs.
The collision chain (i.e. the hash bucket) is shared between CPUs,
as all buckets together constitute our collection of PTEs. To
achieve this, the collision chain no longer points to the first PTE
in the list, but to a hash bucket head structure. The
head structure contains the pointer to the first PTE in the list,
as well as a mutex to lock the bucket. Thus, each bucket is locked
independently of the others (see the sketch after this list). With
at least 1024 buckets in the VHPT, this provides sufficiently
fine-grained locking to make the solution scalable to large SMP
machines.
o Add synchronisation to the lazy FP context switching. We do this
with a separate per-thread lock. On SMP machines the lazy high FP
context switching without synchronisation caused inconsistent
state, which resulted in a panic. Since the use of the high FP
registers is not common, it's possible that races exist. The ia64
package build has proven to be a good stress test, so this will
get plenty of exercise in the near future.
o Don't use the local ID of the processor we want to send the IPI to
as the argument to ipi_send(). Use the struct pcpu pointer instead.
The reason for this is that IPI delivery is unreliable. It has been
observed that sending an IPI to a CPU causes it to receive a stray
external interrupt. As such, we need a way to make the delivery
reliable. The intended solution is to queue requests in the target
CPU's per-CPU structure and use a single IPI to inform the CPU that
there's a new entry in the queue. If that IPI gets lost, the CPU
can check its queue at any convenient time (such as on each
clock interrupt). This also allows us to send requests to a CPU
without interrupting it, if such would be beneficial.
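A hedged sketch of the bucket head layout from the first item above;
the type and field names are illustrative, not the actual ia64 pmap
identifiers:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    struct pte;                             /* chained PTE, opaque here */

    /* One head per hash bucket: two CPUs contend only when their
     * translations hash to the same bucket. */
    struct vhpt_bucket {
            struct pte      *chain;         /* first PTE in the chain */
            struct mtx       mutex;         /* locks this bucket only */
    };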
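And a sketch of the intended per-CPU request queue from the last item,
using kernel-style queue macros and locking; the names are
illustrative. The IPI only means "look at your queue", so a lost IPI
merely delays processing until the next clock interrupt drains the
queue:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/queue.h>

    struct ipi_req {
            STAILQ_ENTRY(ipi_req)   link;
            void                    (*func)(void *);
            void                    *arg;
    };

    struct pcpu_ipi {
            struct mtx              lock;
            STAILQ_HEAD(, ipi_req)  queue;
    };

    static void
    ipi_post(struct pcpu_ipi *pc, struct ipi_req *req)
    {
            mtx_lock(&pc->lock);
            STAILQ_INSERT_TAIL(&pc->queue, req, link);
            mtx_unlock(&pc->lock);
            /* ipi_send(pc) would go here; delivery is best effort. */
    }

    /* Called from the IPI handler and from every clock interrupt. */
    static void
    ipi_drain(struct pcpu_ipi *pc)
    {
            struct ipi_req *req;

            mtx_lock(&pc->lock);
            while ((req = STAILQ_FIRST(&pc->queue)) != NULL) {
                    STAILQ_REMOVE_HEAD(&pc->queue, link);
                    req->func(req->arg);    /* lock held, for brevity */
            }
            mtx_unlock(&pc->lock);
    }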
With these changes SMP is almost working. There are still some random
process crashes, and the machine can hang when the IPI that deals with
the high FP context switch is lost.
The overhead of introducing the hash bucket head structure results
in a performance degradation of about 1% for UP (extra pointer
indirection). This is surprisingly small and is offset by gaining
reasonably good, scalable SMP support.
allocating a VHPT per CPU. Since we don't yet know how many CPUs
are actually in the system at the time we need to allocate the
VHPTs, we allocate for MAXCPU processors. This can result in a
lot of wasted space for 2-way machines. So, for now, limit MAXCPU
to something smaller until we have something more dynamic.
command does, but worse.
o Remove the obscure proc command, because it does what the thread
command does, but not unambiguously.
o Move the PID to the extra thread info, where it makes sense and
where it doesn't confuse users. The extra thread info holds some
process information, to which the PID belongs.
o Implement the to_find_new_threads target method by having it call
the target beneath us if we're not using KVM. This makes sure that
new threads are found when using the remote target.
o Fix various core dump scenarios:
- Implement the to_files_info target method. Previously the
'info target' command would cause a NULL pointer dereference.
- Don't assume there's a current thread. We're not initialized
in all cases. This prevents a NULL pointer dereference.
- When we're not using KVM, have the to_xfer_memory target
method call the target beneath us. This avoids calling into
KVM with a NULL pointer.
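A self-contained model of the "call the target beneath" delegation;
the types here are simplified stand-ins, not gdb's actual target
vector:

    #include <stddef.h>

    struct target_ops {
            struct target_ops *beneath;     /* next target on the stack */
            int (*xfer_memory)(struct target_ops *, unsigned long addr,
                void *buf, size_t len);
    };

    static void *kvm_handle;        /* NULL unless a kernel core is open */

    static int
    kgdb_xfer_memory(struct target_ops *self, unsigned long addr,
        void *buf, size_t len)
    {
            if (kvm_handle == NULL)
                    /* Not using KVM: let the target beneath do the work. */
                    return (self->beneath->xfer_memory(self->beneath,
                        addr, buf, len));
            /* ... otherwise satisfy the request through libkvm ... */
            return (0);
    }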
MFC after: 1 week
static.
o Register a function with atexit(3) to close the KVM object if
we have one open (see the sketch after this list).
o Show the unread portion of the kernel's message buffer before
presenting the prompt. It's bound to provide some useful info.
o Don't call kgdb_target() twice. It results in having all threads
listed twice.
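A sketch of the atexit(3) hook, assuming the usual libkvm API
(kvm_close(3)); the variable names are illustrative:

    #include <kvm.h>
    #include <stdlib.h>

    static kvm_t *kvm;              /* the open KVM object, if any */

    /* atexit(3) handler: close the KVM object if we have one open. */
    static void
    kgdb_cleanup(void)
    {
            if (kvm != NULL)
                    kvm_close(kvm);
    }

    /* Registered once during startup: atexit(kgdb_cleanup); */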
MFC after: 1 week