doesn't give them enough stack to do much before blowing away the pcb.
This adds MI and MD code to allow the allocation of an alternate kstack
whose size can be specified when calling kthread_create(). Passing the
value 0 prevents the alternate kstack from being created. Note that the
ia64 MD code is missing for now, and PowerPC was only partially written
because pmap.c is incomplete there.
Though this patch does not modify anything to make use of the alternate
kstack, acpi and usb are good candidates.
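As a rough sketch of how such a consumer could opt in once converted (the thread function, softc argument, and flag below are illustrative, not part of this change):

    /*
     * Hedged sketch: request a four-page alternate kstack; passing 0
     * for the page count skips creating one.
     */
    struct proc *p;
    int error;

    error = kthread_create(acpi_task_thread, sc, &p, RFHIGHPID,
        4 /* alternate kstack pages */, "acpi_task");
    if (error != 0)
            return (error);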
Reviewed by: jake, peter, jhb
modules to perform MAC-related events when a thread returns to user
space. This is required for policies that have floating process labels,
as it's not always possible to acquire the process lock at arbitrary
points in the stack during system call processing; process labels might
represent traditional authentication data, process history information,
or other data.
LOMAC will use this entry point to perform the process label update
prior to the thread returning to userspace, when plugged into the MAC
framework.
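A minimal sketch of what such an entry point might look like for a policy with a floating process label (the function and helper names are hypothetical):

    /*
     * Hedged sketch: refresh the floating label just before the thread
     * returns to user space; lomac_update_proc_label() is a made-up
     * policy-internal helper.
     */
    static void
    lomac_thread_userret(struct thread *td)
    {
            struct proc *p = td->td_proc;

            PROC_LOCK(p);
            lomac_update_proc_label(p);
            PROC_UNLOCK(p);
    }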
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
ast().
- Actually set KEF_ASTPENDING so ast() is called. I think this is buggy
for a process with multiple KSEs in that PS_XCPU is not a KSE event,
it's a process-wide event. IMO there really should probably be two
ASTPENDING flags, one per-process and one per-KSE.
Submitted by: bde
Peter had repocopied sys/disklabel.h to sys/diskpc98.h and sys/diskmbr.h.
These two new copies are still intact copies of disklabel.h and
therefore protected by #ifndef _SYS_DISKLABEL_H_ so #including them
in programs which already include <sys/disklabel.h> is currently a
no-op.
This commit adds a number of such #includes.
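For example, until the copies diverge the extra include is harmless:

    #include <sys/disklabel.h>  /* defines _SYS_DISKLABEL_H_ */
    #include <sys/diskmbr.h>    /* still a copy of disklabel.h, so the
                                   guard makes this a no-op for now */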
Once I have verified that I have fixed all the places which need fixing,
I will commit the updated versions of the three #include files.
Sponsored by: DARPA & NAI Labs.
(1) Where previously the pipe mutex was selectively grabbed during
pipe_ioctl(), now always grab it and then release it if not
needed (a condensed sketch follows this list). This protects the
call to mac_check_pipe_ioctl() to
make sure the label remains consistent. (Note: it looks
like sigio locking may be incorrect for fgetown() since we
call it not-by-reference and sigio locking assumes call by
reference).
(2) In pipe_stat(), lock the pipe if MAC is compiled in so that
the call to mac_check_pipe_stat() gets a locked pipe to
protect label consistency. We still release the lock before
returning actual stat() data, risking inconsistency, but
apparently our pipe locking model accepts that risk.
(3) In various pipe MAC authorization checks, assert that the pipe
lock is held.
(4) Grab the lock when performing a pipe relabel operation, and
assert it a little deeper in the stack.
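A condensed sketch of item (1), with error handling and the other ioctl cases trimmed (variable names are illustrative):

    PIPE_LOCK(mpipe);
    #ifdef MAC
    error = mac_check_pipe_ioctl(active_cred, mpipe, cmd, data);
    if (error != 0) {
            PIPE_UNLOCK(mpipe);
            return (error);
    }
    #endif
    switch (cmd) {
    case FIONBIO:
            /* This case never needed the mutex; drop it up front. */
            PIPE_UNLOCK(mpipe);
            return (0);
    /* ... remaining cases unlock where appropriate ... */
    }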
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
__mac_get_pid Retrieve MAC label of a process by pid
Similar to __mac_get_proc() except that the target process of
the operation is explicitly specified rather than assuming
curthread.
__mac_get_link Retrieve MAC label of a path with NOFOLLOW
__mac_set_link Set MAC label of a path with NOFOLLOW
extattr_set_link Set EAs on a path with NOFOLLOW
extattr_get_link Retrieve EAs on a path with NOFOLLOW
extattr_delete_link Delete EAs on a path with NOFOLLOW
These calls are similar to __mac_get_file(), __mac_set_file(),
extattr_set_file(), extattr_get_file(), and extattr_delete_file(),
except that they do not follow symlinks. The distinction between
these calls is similar to lchown() vs chown().
Implementations to follow.
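Once the implementations land, usage should mirror the existing *_file() calls; a hedged example of reading an EA from the symlink itself rather than its target (path and attribute name are arbitrary):

    #include <sys/types.h>
    #include <sys/extattr.h>

    #include <stdio.h>

    int
    main(void)
    {
            char buf[64];
            ssize_t len;

            /* Reads the EA on the link itself, like lchown() vs chown(). */
            len = extattr_get_link("/tmp/symlink", EXTATTR_NAMESPACE_USER,
                "test", buf, sizeof(buf));
            if (len == -1)
                    perror("extattr_get_link");
            else
                    printf("got %zd bytes from the link's EA\n", len);
            return (0);
    }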
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
I've added a structure, kernel-private, to represent a pending or in-delivery
signal, called `ksiginfo'. It is roughly analogous to the information
exported by the POSIX interface 'siginfo_t', but more basic. I've
added functions to allocate these structures, and further to wrap all signal
operations using them.
Once the operations are wrapped, I've added a TailQ (see queue(3)) of these
structures to 'struct proc', and all pending signals are in that TailQ. When
a signal is being delivered, it is dequeued from the list. Once I finish
spreading ksiginfo throughout the tree, the dequeued structure will be
delivered to the process in question, whereas currently and normally, the
signal number is what is used.
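A hedged sketch of the shape this takes; field and member names are illustrative and the in-tree layout may differ:

    struct ksiginfo {
            TAILQ_ENTRY(ksiginfo)   ksi_link;       /* pending-signal queue */
            int                     ksi_signo;      /* signal number */
            siginfo_t               ksi_info;       /* POSIX-style payload */
    };

    /* In struct proc (illustrative member name): */
    TAILQ_HEAD(, ksiginfo)  p_sigq;                 /* pending signals */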
has exceeded its CPU time limit.
- In mi_switch(), set PS_XCPU when the CPU time limit is exceeded.
- Perform actual CPU time limit exceeded work in ast() when PS_XCPU is set.
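Roughly, the split looks like this (locking and the actual limit check are elided; the helper shown is hypothetical):

    /* In mi_switch(), after charging the accumulated runtime: */
    if (proc_over_cpu_limit(p)) {           /* hypothetical check */
            p->p_sflag |= PS_XCPU;
            ke->ke_flags |= KEF_ASTPENDING; /* make sure ast() runs */
    }

    /* In ast(), where it is safe to do the heavier work: */
    if (p->p_sflag & PS_XCPU) {
            p->p_sflag &= ~PS_XCPU;
            psignal(p, SIGXCPU);
    }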
Requested by: many
interlock in getnewvnode() to avoid possible sleeps while holding
the mutex. Note that the warning from Witness is a slight false
positive since we know there will be no contention on the interlock
because we haven't made the vnode available for use yet, but the theory
is not a bad one.
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories
gets signals operating based on a TailQ, and is good enough to run X11,
GNOME, and do job control. There are some intricate parts which could be
more refined to match the sigset_t versions, but those require further
evaluation of directions in which our signal system can expand and contract
to fit our needs.
After this has been in the tree for a while, I will make in-kernel API
changes, most notably to trapsignal(9) and sendsig(9), to use ksiginfo
more robustly, such that we can actually pass information with our
(queued) signals to the userland. That will also result in using a
struct ksiginfo pointer, rather than a signal number, in a lot of
kern_sig.c, to refer to an individual pending signal queue member, but
right now there is no defined behaviour for such.
CODAFS is unfinished in this regard because the logic is unclear in
some places.
Sponsored by: New Gold Technology
Reviewed by: bde, tjr, jake [an older version, logic similar]
from stopping another thread from completing a syscall, and this allows it to
release its resources etc. Probably more related commits to follow (at least
one I know of).
Initial concept by: julian, dillon
Submitted by: davidxu
device_t the same throughout kernel.
This is a very fine point of C which fortunately does not make any
difference in normal circumstances, but which, due to the pervasiveness
of device_t in the kernel, can make lint barf a lot.
sparc v9 ABI. The Elf_Rela records for local symbols appear to already
have the symbol's value added into the addend field, even though the ABI
specifies we need to lookup the symbol and add its value too. This breaks
text relocations in klds because the symbol's value is added twice, and
the resulting address points off into nowhere land, so for now just use
the addend.
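In relocation terms the workaround amounts to the following (sketch only, in the style of an elf_reloc() routine):

    /* What the ABI specifies for an Elf_Rela record: */
    value = symval + rela->r_addend;

    /* What we do for local symbols for now, since symval is already
     * folded into the addend and adding it again points off into
     * nowhere land: */
    value = rela->r_addend;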
Tested by: rwatson
if they are not going to cross over themselves. Also change how the list of
completed user threads is tracked and passed to the KSE. This is not
a change in design but rather the implementation of what was originally
envisioned.
- Make the VI asserts more orthogonal to the rest of the asserts by using a
new, common vfs_badlock() function and adding a 'str' arg.
- Adjust generated ASSERTS to match the new prototype.
- Adjust explicit ASSERTS to match the new prototype.
- Enable vfs_badlock_mutex by default.
- Assert that the vp is locked in VOP_UNLOCK.
- Use standard interlock macros in remaining code.
- Correct a race in getnewvnode().
- Lock access to v_numoutput with interlock.
- Lock access to buf lists and splay tree with interlock.
- Add VOP and VI asserts.
- Lock b_vnbufs with the vnode interlock.
- Add vrefcnt() for callers who want to retrieve the vnode reference count
without holding a lock. Add a comment that describes when this is safe.
- Add vholdl() and vdropl() so that callers who already own the interlock
can avoid race conditions and unnecessary unlocking (usage sketched below).
- Move the VOP_GETATTR() in vflush() into the WRITECLOSE conditional case.
- Hold the interlock before dropping the mntlist_mtx in vflush() to avoid
a race.
- Fix locking in vfs_msync().
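A hedged usage sketch for the interlock-aware helpers mentioned above (the surrounding code and vprint() message are illustrative; vdropl() is the analogous drop-side variant for callers keeping the interlock):

    VI_LOCK(vp);
    vholdl(vp);     /* take a hold without giving up the interlock */
    /* ... inspect fields or lists protected by the interlock ... */
    VI_UNLOCK(vp);
    /* ... work that may sleep, with the vnode held ... */
    vdrop(vp);      /* release the hold when done */

    /* vrefcnt() returns an unlocked snapshot of the use count only. */
    if (vrefcnt(vp) > 0)
            vprint("still referenced", vp);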