a parameter instead of using the level of a given witness. When
recursing, pass an indent level of indent + 1.
- Make use of the information witness_levelall() provides in
witness_display_list() to use an O(n) algorithm instead of an O(n^2)
algorithm to decide which witnesses to display hierarchies from. Basically,
we only display a hierarchy for witnesses with a level of 0.
- Add a new per-witness flag that is reset at the start of
witness_display() for all witnesses and is set the first time a witness
is displayed in witness_displaydescendants(). If a witness is
encountered more than once in the lock order tree (which happens often),
witness_displaydescendants() marks the later occurrences with the string
"(already displayed)" and doesn't display the subtree under that
witness. This avoids duplicating large amounts of the lock order tree
in the 'show witness' output in DDB.
All these changes serve to make 'show witness' a lot more readable and
useful than it was previously.
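A rough sketch of how the display pass described above fits together (a
minimal sketch only; the list head and field names here are assumptions,
not the committed code):

  static void witness_displaydescendants(void (*)(const char *, ...),
      struct witness *, int);

  static void
  witness_display(void (*prnt)(const char *, ...))
  {
          struct witness *w;

          witness_levelall();
          /* Reset the new per-witness flag on every witness. */
          STAILQ_FOREACH(w, &w_all, w_list)
                  w->w_displayed = 0;
          /* O(n): only level-0 witnesses root a displayed hierarchy. */
          STAILQ_FOREACH(w, &w_all, w_list)
                  if (w->w_level == 0)
                          witness_displaydescendants(prnt, w, 0);
  }

  static void
  witness_displaydescendants(void (*prnt)(const char *, ...),
      struct witness *w, int indent)
  {
          int i;

          for (i = 0; i < indent; i++)
                  prnt(" ");
          prnt("%s", w->w_name);
          if (w->w_displayed) {
                  /* Seen already; don't duplicate the subtree. */
                  prnt(" (already displayed)\n");
                  return;
          }
          w->w_displayed = 1;
          prnt("\n");
          for (i = 0; i < w->w_nchildren; i++)
                  witness_displaydescendants(prnt, w->w_children[i],
                      indent + 1);
  }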
adds a witness to the child list of a parent witness. rebalancetree()
runs through the entire tree removing direct descendants of witnesses
who already have said child witness as an indirect descendant through
another direct descendant. itismychild() now calls insertchild()
followed by rebalancetree() and no longer needs the evil hack of
having a static recursed variable (see the sketch below).
- Add a function reparentchildren() that adds all the direct descendants
of one witness as direct descendants of another witness.
- Change the return value of itismychild() and similar functions so that
they return 0 in the case of failure due to lack of resources instead
of 1. This makes the return value more intuitive.
- Check the return value of itismychild() when defining the static lock
order in witness_initialize().
- Don't try to set up a lock instance in witness_lock() if itismychild()
fails. Witness is hosed anyway, so there is no need to do any more witness
related activity at that point. It also makes the code flow easier to
understand.
- Add a new depart() function as the opposite of enroll(). When the
reference count of a witness drops to 0 in witness_destroy(), this
function is called on that witness. First, it runs through the
lock order tree using reparentchildren() to reparent direct descendants
of the departing witness to each of the witness' parents in the tree.
Next, it releases its own child list and other associated resources.
Finally, it calls rebalancetree() to rebalance the lock order tree.
- Sort function prototypes into something closer to alphabetical order.
As a result of these changes, there should no longer be 'dead' witnesses
in the order tree, and repeatedly loading and unloading a module should no
longer exhaust witness of its internal resources.
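The sketch referenced above, a minimal shape for the new itismychild()
(per the description; insertchild() and rebalancetree() bodies omitted,
details assumed):

  static int
  itismychild(struct witness *parent, struct witness *child)
  {

          if (insertchild(parent, child) == 0)
                  return (0);     /* failure: out of resources */
          rebalancetree();
          return (1);
  }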
Inspired by: gallatin
recursing on a lock instead of before. This fixes a bug where WITNESS
could get a little confused if you did an sx_try_slock() on an sx lock that
you already had an slock on. WITNESS would still function correctly but
it could result in weirdness in the output of 'show locks'. This also
makes it possible for mtx_trylock() to recurse on a lock.
used popped into my head during my morning commute a few weeks ago, but
it is also very similar (though a bit simpler) to a patch that mini@
developed a while ago. Basically, each eventhandler list has a mutex and
a run count. During an eventhandler invocation, the mutex is held while
we traverse the list but is dropped while we execute actual handlers. Also,
a runcount counter is incremented at the start of an invocation and
decremented at the end of an invocation. Adding to the list is not a big
deal since the reference of a thread currently executing the handlers
remains valid across an add operation. Whether or not new handlers are
executed by threads currently executing the handlers for a given list is
indeterminate however. The harder case is when a handler is removed from
the list. If the runcount is zero, the handler is simply removed from the
list directly. If the runcount is not zero, then another thread is
currently executing the handlers of this list, so the priority of this
handler is set to a magic value (currently -1) to mark it as dead. Dead
handlers are not executed during an invocation. If the runcount is zero
after it is decremented at the end of an invocation, then a new
eventhandler_prune_list() function is called to remove dead handlers from
the list.
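A sketch of that removal logic (the helper name here is hypothetical;
EHE_DEAD_PRIORITY is the magic -1 and the fields follow the stock
eventhandler structures):

  static void
  _eventhandler_remove(struct eventhandler_list *list,
      struct eventhandler_entry *ep)
  {

          /* The caller holds the per-list mutex. */
          if (list->el_runcount == 0) {
                  /* No invocation in progress: unlink directly. */
                  TAILQ_REMOVE(&list->el_entries, ep, ee_link);
                  free(ep, M_EVENTHANDLER);
          } else {
                  /*
                   * Another thread is running this list's handlers;
                   * mark the entry dead and let
                   * eventhandler_prune_list() reap it when the
                   * runcount drops back to zero.
                   */
                  ep->ee_priority = EHE_DEAD_PRIORITY;
          }
  }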
Additional minor notes:
- All the common parts of EVENTHANDLER_INVOKE() and
EVENTHANDLER_FAST_INVOKE() have been merged into a common
_EVENTHANDLER_INVOKE() macro to reduce duplication and ease maintenance.
- KTR logging for eventhandlers is now available via the KTR_EVH mask.
- The global eventhandler_mutex is no longer recursive.
Tested by: scottl (SMP i386)
monitors the entropy data harvested by crypto drivers to verify it complies
with FIPS 140-2. If data fails any test then the driver discards it and
commences continuous testing of harvested data until it is deemed ok.
Results are collected in a statistics block and, optionally, reported on
the console. In normal use the overhead associated with this driver is
not noticeable.
Note that drivers must (currently) be compiled specially to enable use.
Obtained from: original code by Jason L. Wright
attach routine, calling WIUNLOCK in the error case of one of the ifs
for that routine is now bogus. This should have been removed when the
WILOCK() was removed, but wasn't.
Submitted by: "Harti Brandt" <brandt@fokus.fraunhofer.de>
it is expected that they will not be enabled at the time that it
is called. This is reported to work around a problem in RELENG_4
where the kernel panics on boot if FAST_IPSEC and crypto support
are enabled.
Tested by: Scott Johnson <scottj@insane.com>
- Issue the I/O that we will later block on prior to doing cluster read ahead
so that it is more likely to be ready when we block.
- Loop issuing clustered reads until we've exhausted the seq count supplied
by the file system.
- Use a sysctl tunable "vfs.read_max" to determine the maximum number of
blocks that we'll read ahead.
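For example, to allow read ahead of up to 16 blocks (the value here is
arbitrary):

  # sysctl vfs.read_max=16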
use the underlying AsahiOptical USB chip and thus this quirk may need to
be generalized in the future.
PR: kern/46369
Submitted by: Tim Vanderhoek <vanderh@ecf.utoronto.ca>
MFC After: 3 days
I had commented the #ifdef INVARIANTS checks out to make sure I ran this
code in all kernels and forgot to comment the #ifdefs back in before I
committed.
Spotted by: bmilekic
[1] PHCC = Pointy Hat Correction Commit
ddb 'show locks' command. Thus, move witness_list() to the #ifdef DDB
section and remove extra checks for calling this function outside of
DDB. Also, witness_list() now returns void instead of returning an int.
Reported by: Steve Ames <steve@energistic.com>
Prodded by: davidxu
like secure level but which restricts changes to the keymap. Its
values impose the following restrictions:
0: No restriction - this is the default.
1: Only root can change restricted keys (like boot, panic, ...)
2: Only root can change restricted keys and regular keys.
Other users still can change accents and function keys.
3: Only root can change restricted keys, regular keys and accents.
4: Only root can change any of the keymap (restricted keys, regular
keys, accents and function keys).
Unfortunately, the keyboard's accent map is cleared when a new keymap
is loaded, which makes the distinction between level 3 and level 4
less useful.
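For example (the sysctl name here is an assumption; the commit text doesn't
show it), restricting regular keys to root:

  # sysctl hw.kbd.keymap_restrict_change=2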
The MAC guys might like to make this a policy?
No objections from: -audit about 6 months ago
Remove an incorrect comment. (Incrementing an object's reference count
does not prevent a process from exiting. The real concern here is that the
physical page must not be deleted until transmission is complete. That is
already handled by the VM system and sf_buf_free().)
Tested by: ken
is more robust and prevents the hijacking of /dev/console for the typical
mistake.
Remove unneeded MAJOR_AUTO uses; it is only needed explicitly now if the
driver source has cross-branch compatibility to old releases.
have to examine the stats structure to tell if we have outstanding I/O
requests.
Making them u_int improves the chance of atomic updates to them,
but risks roll-over. Since the only interesting property is if
they are equal or not, this is not an issue.
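A minimal sketch of the idea (names hypothetical):

  static u_int g_bio_started;     /* bumped when a request is issued */
  static u_int g_bio_completed;   /* bumped when a request completes */

  static int
  g_is_idle(void)
  {

          /*
           * Only equality is interesting, so unsigned roll-over is
           * harmless: both counters wrap the same way.
           */
          return (g_bio_started == g_bio_completed);
  }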
outstanding requests to return before we unravel the mesh.
It is very important that the stuff below us plays nice and doesn't
overlook any outstanding bio's, because until those complete, the
geom event thread is blocked. At some expense in code, this
could be made more robust, but I actually _want_ a robust failure
in this case so any offending drivers can be fixed.
included in XFree86 4.3, but includes some fixes. Notable changes include
Radeon 8500-9100 support, PCI Radeon/Rage 128 support, transform & lighting
support for Radeons, and vblank syncing support for r128, radeon, and mga.
The gamma driver was removed due to lack of any users.
the device statistics structures into userland instead of using sysctl.
Introduce new devstat_new_entry() function which allocates the devstat
structure and calls devstat_add_entry() on it.
Two fields are sequence numbers for integrity checking when we switch devstat
to use mmap to export data rather than sysctl; the last field is to mark
this as an allocated devstat entry.
in geom_disk.c.
As a side effect this makes a lot of #include <sys/devicestat.h>
lines not needed and some biofinish() calls can be reduced to
biodone() again.
- On receive, vm_map_lookup() needs to trigger the creation of a shadow
object. To make that happen, call vm_map_lookup() with PROT_WRITE
instead of PROT_READ in vm_pgmoveco().
- On send, a shadow object will be created by the vm_map_lookup() in
vm_fault(), but vm_page_cowfault() will delete the original page from
the backing object rather than simply letting the legacy COW mechanism
take over. In other words, the new page should be added to the shadow
object rather than replacing the old page in the backing object. (i.e.
vm_page_cowfault() should not be called in this case.) We accomplish
this by making sure fs.object == fs.first_object before calling
vm_page_cowfault() in vm_fault().
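A sketch of the guard described in the last bullet (the surrounding
vm_fault() code is assumed):

  /*
   * Only take the zero-copy COW shortcut when no shadow object was
   * interposed; otherwise fall through and let the legacy COW
   * mechanism install the new page in the shadow object.
   */
  if (fs.object == fs.first_object)
          vm_page_cowfault(fs.m);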
Submitted by: gallatin, alc
Tested by: ken
The random value sometimes causes the CLKF_USERMODE macro to return true
because the PSL_VM bit is set when it really shouldn't be. This causes
statclock() to execute the wrong path, which breaks the KSE code and
crashes the kernel when running a threaded program.
of a snapshot's copy of a superblock. This patch fixes a panic
when taking a snapshot of a 4096/512 filesystem.
Reported by: Ian Freislich <ianf@za.uu.net>
Sponsored by: DARPA & NAI Labs.
A minimal test case (save as test.c):

  #include <sys/types.h>
  #include <sys/stat.h>

  int
  main(void)
  {
          return (0);
  }

  $ cc -D_POSIX_C_SOURCE test.c
  /usr/include/sys/stat.h:127: syntax error before "u_int"
  /usr/include/sys/stat.h:158: syntax error before "u_int"

(u_int becomes invisible for _POSIX_C_SOURCE and some other *_SOURCE modes)
memory-allocation purposes. Right now it is also a very good idea
because we hit a Giant assertion in the free(9) processing if we
free something larger than 64k.
Include read streaming in the PPR flags we display in diagnostics.
In ahd_reset(), set the known mode after our initial pause prior to
setting the mode. We can't just set the mode directly because the
current mode, after the pause, is most likely unknown and setting the
mode when the saved mode is unknown will trigger an assertion in
the mode debug code.
Complete an audit for SCB RAM reads. These reads must be performed
via the special ahd_in?_scbram() methods so we can perform a
Rev A. PCI-X workaround.
Remove a superfluous mode save operation that was performed just
prior to a call to ahd_clear_critical_section(). The saved mode
was never restored and wouldn't have been valid anyway since the
mode could change while single stepping out of a critical section.
aic79xx.h:
Add new BUG definition AHD_PCIX_SCBRAM_RD_BUG.
aic79xx_inline.h:
Update ahd_inb_scbram routine to check for AHD_PCIX_SCBRAM_RD_BUG
and only apply the workaround if this bug is active. The old code
applied the workaround in all cases.
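Roughly, the gated workaround looks like this (only the
AHD_PCIX_SCBRAM_RD_BUG check comes from the change itself; the dummy read
shown is an assumption):

  uint8_t
  ahd_inb_scbram(struct ahd_softc *ahd, u_int offset)
  {

          /*
           * Apply the SCB RAM read workaround only on parts with the
           * Rev A. PCI-X bug; everyone else takes the straight read.
           */
          if ((ahd->bugs & AHD_PCIX_SCBRAM_RD_BUG) != 0)
                  (void)ahd_inb(ahd, offset);     /* assumed dummy read */
          return (ahd_inb(ahd, offset));
  }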
aic79xx_pci.c:
Set AHD_PCIX_SCBRAM_RD_BUG for the A4.
Remove an attempted saved_modes call in ahd_pci_test_register_access().
Saving the modes can only occur when we are paused, but the call was
happening before the chip was known to be paused. Restoring the
modes doesn't make sense either since the code makes no assumptions
about the state of the sequencer until the first time the mode is set
by the driver. This happens after the registers are successfully
mapped.
requests whether or not the lock is available. To avoid "unlocked
buffer" panics after a crash, we just claim that all buffers
are locked when cleaning up after a system panic.
Reported by: Attila Nagy <bra@fsn.hu>
Sponsored by: DARPA & NAI Labs.
witness. Sleepable locks such as sx locks always come before all mutexes
including Giant. However, the static lock order list placed Giant before
the proctree and allproc sx locks. This resulted in witness creating a
cycle in its lock order "tree" (real trees don't have cycles) leading to
infinite recursion and eventually a double fault. To fix, put Giant after
sx locks in the lock order list.
occurs when mounting the filesystem. The problem is that venus issues
the mount() syscall, which calls vfs_mount(), which calls coda_root(),
which attempts to communicate with venus.
check, mac_check_sysarch_ioperm(), permitting MAC security policy
modules to control access to these interfaces. Currently, they
protect access to IOPL on i386, and setting HAE on Alpha.
Additional checks might be required on other platforms to prevent
bypass of kernel security protections by unauthorized processes.
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
modules to authorize disabling of swap against a particular vnode.
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
before the MAC check so that we pass the flags field into the MAC
check properly initialized. This didn't affect any current MAC
modules since they didn't care what the flags argument was (as
they were primarily interested in the fact that it was a meta-data
write, not the contents of the write), but would be relevant to
future modules relying on that field.
Submitted by: Mike Halderman <mrh@spawar.navy.mil>
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
Submitted by: Hiroyuki Aizu <eyes@navi.org>
(refer to [FreeBSD-users-jp 65061])
Tested by: Hiroharu Tamaru <tamaru@myn.rcast.u-tokyo.ac.jp>
(refer to [bsd-usb:689])
an administrative limit on the size of tty/pty input buffers, this is
mostly an inconsequential change. (slti(4) will allocate an 8 kB
static buffer instead of a 1 kB buffer due to a hack in the driver.)
The increase happens to kludge around a lame limitation of syscons,
which does not allow one to paste more than TTYHOG bytes.
PR: 42031
Reviewed by: mike (mentor)
not save (restore) the global pointer (GP) in the jmpbuf in setjmp
(longjmp) because it's not needed in general. GP is considered a
scratch register at callsites and hence is always restored after a
call (when it's possible that the call resolves to a symbol in a
different loadmodule; otherwise GP does not have to be saved and
restored at all), including calls to setjmp/longjmp. There's just
one problem with this now that we use setjmp/longjmp for context
switching: A new context must have GP defined properly for the
thread's entry point. This means that we need to put GP in the
jmpbuf and consequently that we have to restore it in longjmp.
This automatically requires us to save it as well.
When setjmp/longjmp isn't used for context switching, this can be
reverted again.
the J_SIG0 field. While here, rename J_SIG0 to J_SIGSET and
remove J_SIG1. The main reason for this change is that the
128-bit sigset_t is now aligned on a 16-byte boundary, which
allows us to use 16-byte atomic loads and stores on CPUs that
support it. The removal of J_SIG1 is done to avoid confusion:
it is never accessed and should not be. Renaming J_SIG0 to
J_SIGSET is the icing on the cake that's better done now than
later.
drain routines are done by swi_net, which allows for better queue control
at some future point. Packets may also be directly dispatched to a netisr
instead of queued; this may be of interest at some installations, but
currently defaults to off.
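For example (the sysctl name is an assumption; the text doesn't show it),
direct dispatch could be switched on with:

  # sysctl net.isr.enable=1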
Reviewed by: hsu, silby, jayanth, sam
Sponsored by: DARPA, NAI Labs
- Use gbincore() and not incore() so that we can drop the vnode interlock
as we acquire the buflock.
- Use GB_LOCK_NOWAIT when getting bufs for read ahead clusters so that we
don't block on locked bufs.
- Convert a while loop to a howmany() that will most likely be faster on
modern processors (see the sketch after this list). Another while-loop
divide nearby was left alone because it operates on a 64-bit int and the
loop is most likely faster there.
- Cleanup the cluster_read() code a little to get rid of a goto and make
the logic clearer.
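The while-loop-to-howmany() conversion looks roughly like this (howmany()
comes from <sys/param.h>; the variable names are hypothetical):

  /* Before: count blocks one subtraction at a time. */
  nblks = 0;
  while (resid > 0) {
          resid -= blksize;
          nblks++;
  }

  /* After: a single divide. */
  nblks = howmany(resid, blksize);  /* (resid + blksize - 1) / blksize */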
Tested on: x86, alpha
Tested by: Steve Kargl <sgk@troutmask.apl.washington.edu>
Reviewed by: arch
already own. The mtx_trylock() will fail however. Enhance the comment
at the top of the try lock function to explain this.
Requested by: jlemon and his evil netisr locking
- Add a comment about special lock order rules and Giant near the top of
subr_witness.c. Specifically, this documents and explains the real lock
order relationship between Giant and sleepable locks (i.e. lockmgr locks
and sx locks). Basically, Giant can be safely acquired either before or
after sleepable locks and the case of Giant before a sleepable lock is
exempted as a special case.
- Add a new static function 'witness_list_lock()' that displays a single
line of information about a struct lock_instance. This is used to
make the output of witness messages more consistent and reduce some code
duplication.
- Fixup a few comments in witness_lock().
- Properly handle the Giant-before-sleepable-lock lock order exception in
a more general fashion and remove the no longer needed LI_SLEPT flag.
- Break up the last condition checked before assuming a reversal to make
the logic in witness_lock() less confusing.
- Axe WITNESS_SLEEP() now that LI_SLEPT is no longer needed and replace it
with a more general WITNESS_WARN() macro/function combination.
WITNESS_WARN() allows you to output a customized message out to the
console along with a list of held locks. It will optionally drop into
the debugger as well. You can exempt a single lock from the check by
passing it in as the second argument. You can also use flags to specify
if Giant should be exempt from the check, if all sleepable locks should
be exempt from the check, and if witness should panic if any non-exempt
locks are found. (An example use appears after this list.)
- Make the witness_list() function static. Other areas of the kernel
should use the new WITNESS_WARN() instead.
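Example use of WITNESS_WARN() (the call site is hypothetical; the flags are
assumed to match the new API):

  /*
   * Warn, listing held locks, if anything non-sleepable other than
   * Giant is held here; WARN_PANIC could be added to panic instead.
   */
  WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
      "about to sleep in %s", __func__);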
- Declare some local variables at the top of the function instead of in a
nested block.
- Use mtx_owned() instead of masking off bits from mtx_lock manually.
- Read the value of mtx_lock into 'v' as a separate line rather than inside
an if statement for clarity. This code is hairy enough as it is.
owned. Previously the KASSERT would only trigger if we successfully
acquired a lock that we already held. However, _obtain_lock() fails to
acquire locks that we already hold, so the KASSERT was never checked in
the case it was supposed to fail.
interactivity of a kseg and assigns it a value of 0 through 100.
- Use sched_interact_score() to determine the dynamic priority.
- Define SCHED_CURR() in terms of sched_interact_score().
- Adjust the maximum slice back down to 100ms.
- Remove redundant clearing of ke_runq in sched_wakeup()
- Clean up #defines and comment them.
- Define one flag GB_LOCK_NOWAIT that tells getblk() to pass the LK_NOWAIT
flag to the initial BUF_LOCK(). This will eventually be used in cases
where we want to use a buffer only if it is not currently in use.
- Convert all consumers of the getblk() api to use this extra parameter.
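A usage sketch (everything but the new flags argument is hypothetical):

  struct buf *bp;

  /*
   * Take the buffer only if nobody holds it; GB_LOCK_NOWAIT makes
   * getblk() pass LK_NOWAIT to the initial BUF_LOCK(), so it fails
   * instead of sleeping on a locked buf.
   */
  bp = getblk(vp, blkno, size, 0, 0, GB_LOCK_NOWAIT);
  if (bp == NULL)
          return;         /* busy; skip this one */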
Reviewed by: arch
Not objected to by: mckusick
Remove extraneous uses of vop_null, instead deferring to the default op.
Rename vnode type "vfs" to the more descriptive "syncer".
Fix formatting for various filesystems that use vop_print.
Initialize struct cdevsw using C99 sparse initialization and remove
all initializations to default values.
This patch is automatically generated and has been tested by compiling
LINT with all the fields in struct cdevsw in reverse order on alpha,
sparc64 and i386.
Approved by: re(scottl)
calculations. Keep these changes local to the function so the tick count
is in its natural form otherwise. Previously 1000 was added each time
a tick fired and we divided by 1000 when it was reported. This is done
to reduce rounding errors.
This should fix some problems with SBP2 device probing.
Prior to rev 1.41, we kept writing the register during the bus reset phase.
But in rev 1.41, we ignore successive bus reset events and some chips seem to
clear the register after we write to it.
Tested by: Michael Reifenberger <root@nihil.reifenberger.com>
permit users and groups to bind ports for TCP or UDP, and is intended
to be combined with the recently committed support for
net.inet.ip.portrange.reservedhigh. The policy is twiddled using
sysctl(8). To use this module, you will need to compile in MAC
support, and probably set reservedhigh to 0, then twiddle
security.mac.portacl.rules to set things as desired. This policy
module only restricts ports explicitly bound using bind(), not
implicitly bound ports where the port number is selected by the
IP stack. It appears to work properly in my local configuration,
but needs broader testing.
A sample policy might be:
# sysctl security.mac.portacl.rules="uid:425:tcp:80,uid:425:tcp:79"
This permits uid 425 to bind TCP sockets to ports 79 and 80. Currently
no distinction is made for incoming vs. outgoing ports with TCP,
although that would probably be easy to add.
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
queue items that can be allocated by netgraph and the number of free queue
items that are cached on a private list.
Netgraph places an upper limit on the number of queue items it may allocate.
When there is a large number of netgraph messages travelling through the
system (100k/sec and more) there is a high probability that messages get
queued at the nodes and netgraph runs out of queue items. In this case the data
flow through netgraph gets blocked. The tunable for the number of free
items lets one trade memory for performance.
The tunables are also available as read-only sysctls.
PR: kern/47393
Reviewed by: julian
Approved by: jake (mentor)