`struct xucred` with the credentials of the connected peer.
Obviously this only works (and makes sense) on SOCK_STREAM
sockets. This works for both the connect(2) and listen(2)
callers.
There is precise documentation of the semantics in unix(4).
Reviewed by: dwmalone (eyeballed)
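
For illustration only (not part of the commit), a userland caller might fetch the peer credentials roughly like this; "s" is assumed to be a connected unix-domain SOCK_STREAM descriptor:

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/ucred.h>
#include <sys/un.h>
#include <stdio.h>

static int
print_peer_cred(int s)
{
	struct xucred xuc;
	socklen_t len = sizeof(xuc);

	/* Level 0 is the unix-domain ("local") socket option level. */
	if (getsockopt(s, 0, LOCAL_PEERCRED, &xuc, &len) == -1)
		return (-1);
	if (xuc.cr_version != XUCRED_VERSION)
		return (-1);
	printf("peer uid %d, %d groups\n", (int)xuc.cr_uid, xuc.cr_ngroups);
	return (0);
}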
unnecessary breakage.
While here, use explicit sizes for the string fields so that we don't
have unintentional changes again in the future when key tunables change.
This still is not quite right, but a June userland is happy with
a -current kernel with these tweaks.
label if the dump device overlaps the label (which is a slight
misconfiguration). Dump routines don't use dscheck(), so the normal
write protection of the label doesn't help.
Fixed some nearby overflow bugs. In disk_dumpcheck(), there was
(fatal but fail-safe) overflow on i386's with 4GB of memory, at least
if Maxmem was the top page (can this happen?). The fix assumes that
the sector size divides PAGE_SIZE (dump routines already assume this).
In setdumpdev(), the corresponding overflow occurred with only about
2GB of memory on all machines with 32-bit ints. This allowed setdumpdev()
to succeed when it shouldn't have, but then disk_dumpcheck() failed
safe later. Except in old versions of FreeBSD like RELENG_3 where
there is no disk_dumpcheck().
PR: 28164 (label clobbering part)
MFC after: 1 week
structure is always free()ed yet only sometimes malloc()ed. In particular,
it was simply set to point to l_filename from a linker_file_t in
link_elf_link_preload_finish(). The l_filename had been malloc()ed inside
the kern_linker.c module and was being free()ed twice: once by
link_elf_unload_file() and again by linker_file_unload(), leading to
a panic.
How to duplicate the problem:
- Pre-load a kernel module from the loader, i.e. if_sis.ko
- Boot system
- Attempt to unload module with kldunload if_sis
- Bewm
The problem here is that the case where the module was loaded with kldload
after system boot would work correctly, so this bug went unnoticed until
I stubbed my toe on it just now. (Also, you can only trip this bug if
you compile a kernel with options DDB, but that's the default now.)
Fix: remember to malloc() a separate copy of the module name for the
l_name member of the gdb linkage structure in three places where the
linkage structure can be initialized.
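
A minimal sketch of the pattern described above (the helper name is made up; the commit touches the ELF linker code directly): give the gdb linkage structure its own heap copy of the name so it can be free()d independently of the linker_file_t's l_filename.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>

MALLOC_DECLARE(M_LINKER);

/* Return a private, independently free()able copy of a module name. */
static char *
copy_module_name(const char *filename)
{
	char *copy;

	copy = malloc(strlen(filename) + 1, M_LINKER, M_WAITOK);
	strcpy(copy, filename);
	return (copy);
}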
the process of exiting the kernel. The ast() function now loops as long
as the PS_ASTPENDING or PS_NEEDRESCHED flags are set. It returns with
preemption disabled so that any further AST's that arrive via an
interrupt will be delayed until the low-level MD code returns to user
mode.
- Use u_int's to store the tick counts for profiling purposes so that we
do not need sched_lock just to read p_sticks. This also closes a
problem where the call to addupc_task() could screw up the arithmetic
due to non-atomic reads of p_sticks.
- Axe need_proftick(), aston(), astoff(), astpending(), need_resched(),
clear_resched(), and resched_wanted() in favor of direct bit operations
on p_sflag.
- Fix up locking with sched_lock some. In addupc_intr(), use sched_lock
to ensure pr_addr and pr_ticks are updated atomically with setting
PS_OWEUPC. In ast() we clear pr_ticks atomically with clearing
PS_OWEUPC. We also do not grab the lock just to test a flag.
- Simplify the handling of Giant in ast() slightly.
Reviewed by: bde (mostly)
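
A rough sketch of the loop shape this describes (not the committed ast(); locking and the return-with-preemption-disabled detail are simplified):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>

static void
ast_loop(struct proc *p)
{
	mtx_lock_spin(&sched_lock);
	while ((p->p_sflag & (PS_ASTPENDING | PS_NEEDRESCHED)) != 0) {
		p->p_sflag &= ~(PS_ASTPENDING | PS_NEEDRESCHED);
		mtx_unlock_spin(&sched_lock);
		/* ... addupc_task(), postsig(), mi_switch(), etc. ... */
		mtx_lock_spin(&sched_lock);
	}
	mtx_unlock_spin(&sched_lock);
}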
a time using the ogetdirentries() compatibility syscall. This is a
hack to ensure that ridiculous values don't get passed to MALLOC().
Reviewed by: kris
for endtsleep() to be executing when msleep() resumed, for endtsleep()
to spin on sched_lock long enough for the other process to loop on
msleep() and sleep again resulting in endtsleep() waking up the "wrong"
msleep.
Obtained from: BSD/OS
removing the callout entry, return 1. If callout_stop() fails to remove
the callout entry because it is currently executing or has already been
executed, then the function returns 0. The idea was obtained from BSD/OS,
however, BSD/OS changed untimeout(), and I've just changed callout_stop()
to be more conservative.
Obtained from: BSD/OS
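
An illustrative caller (driver and function names are hypothetical) showing how the new return value would be used:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/callout.h>

/*
 * Returns 1 if the pending callout was removed (we still own its state),
 * 0 if it is currently executing or has already run.
 */
static int
foo_stop_timer(struct callout *c)
{
	if (callout_stop(c) != 0)
		return (1);	/* removed before firing */
	return (0);		/* handler is running or already ran */
}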
- Callers of asleep() and await() have been converted to calling tsleep().
The only caller outside of M_ASLEEP was the ata driver, which called both
asleep() and await() with spl-raised, so there was no need for the
asleep() and await() pair. M_ASLEEP was unused.
Reviewed by: jasone, peter
are a really nasty interface that should have been killed long ago
when 'ptrace(PT_[SG]ETREGS)' etc. came along. The entity that they
operate on (struct user) will not be around much longer since it
is part-per-process and part-per-thread in a post-KSE world.
gdb does not actually use this except for the obscure 'info udot'
command which does a hexdump of as much of the child's 'struct user'
as it can get. It carries its own #defines so it doesn't break
compiles.
filename passed in via the module loader functions in the GDB
"sharedlibrary" support structures. This isn't good, since the pointer
would become stale in almost every case (not the pre-loaded case, of
course).
Change this to a malloc()ed copy of the string and finally fix the reason
that gdb -k's "sharedlibrary" command stopped working.
Obtained from: LOMAC/FreeBSD (cf. NAI Labs)
It didn't implement the proper /dev/fd functionality (which would be to
include in the directory listing /dev/fd/n if the process has fd n open)
anyway.
Anything needing access to /dev/fd/n where n > 2 can use the optional
fdescfs module, which implements this properly and does not cause any
trouble with devfs.
Discussed with: phk
bind() call on IPv4 sockets:
Currently, if one tries to bind a socket using INADDR_LOOPBACK inside a
jail, it will fail because prison_ip() does not take this possibility
into account. On the other hand, when one tries to connect(), for
example, to localhost, prison_remote_ip() will silently convert
INADDR_LOOPBACK to the jail's IP address. Therefore, it is desirable to
make bind() do this implicit conversion as well.
Apart from this, the patch also replaces 0x7f000001 in
prison_remote_ip() with the more correct INADDR_LOOPBACK.
This is a 4.4-RELEASE "during the freeze, thanks" MFC candidate.
Submitted by: Anton Berezin <tobez@FreeBSD.org>
Discussed with at some point: phk
MFC after: 3 days
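
The conversion being described, in sketch form (helper and variable names are approximations, not the committed prison_ip() diff):

#include <sys/types.h>
#include <netinet/in.h>

/* Rewrite a jailed bind()/connect() to INADDR_LOOPBACK to the jail's IP. */
static void
jail_rewrite_loopback(struct in_addr *ia, u_int32_t jail_ip /* host order */)
{
	if (ntohl(ia->s_addr) == INADDR_LOOPBACK)
		ia->s_addr = htonl(jail_ip);
}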
order to avoid namespace collision with subr_mchain.c's mb_init(). This
wasn't "fatal" as the mbuf initialization routine mb_init() was local to
subr_mbuf.c which in turn didn't pull in subr_mchain.c's mb_init()
declaration, but it should definitely be changed now before it creates
headache.
This panicked one of my machines one time too many :-( and there is
no sign of a solution in the pipeline. The deltas are still easily
available in cvs. The problem is that if the parent has been swapped
out, the child process cannot grope around in the parent's UPAGES to
see the sigact[] array or it will fault. This probably is a showstopper
for this implementation anyway.
defined to 0 in the non-SMP case, which very much makes sense as it
permits its usage in per-CPU initialization loops (for an example, check
out subr_mbuf.c).
Further, on a UP system, make mb_alloc always use the first per-CPU
container, regardless of cpuid (i.e. remove reliance on cpuid in the
UP case).
Requested by: alfred
asleep() and await() functions split the functionality of msleep() up into
two halves. Only the asleep() half (which is what puts the process on the
sleep queue) actually needs the lock usually passed to msleep() held to
prevent lost wakeups. await() does not need the lock held, so the lock
can be released prior to calling await() and does not need to be passed in
to the await() function. Typical usage of these functions would be as
follows:
mtx_lock(&foo_mtx);
... do stuff ...
asleep(&foo_cond, PRIxx, "foowt", hz);
...
mtx_unlock(&foo_mtx);
...
await(-1, -1);
Inspired by: dillon on the couch at Usenix
of debugging the current process when that is in conflict with other
restrictions (such as jail, unprivileged_procdebug_permitted, etc).
o This corrects anomalies in the behavior of
kern.security.unprivileged_procdebug_permitted when using truss and
ktrace. The theory goes that this is now safe to use.
Obtained from: TrustedBSD Project
MIB entries.
o Relocate kern.suser_permitted to kern.security.suser_permitted.
o Introduce new kern.security.unprivileged_procdebug_permitted, which
(when set to 0) prevents processes without privilege from performing
a variety of inter-process debugging activities. The default is 1,
to provide current behavior.
This feature allows "hardened" systems to disable access to debugging
facilities, which have been associated with a number of past security
vulnerabilities. Previously, while procfs could be unmounted, other
in-kernel facilities (such as ptrace()) were still available. This
setting should not be modified on normal development systems, as it
will result in frustration. Some utilities respond poorly to
failing to get the debugging access they require, and error response
by these utilities may be improved in the future in the name of
beautification.
Note that there are currently some odd interactions with some
facilities, which will need to be resolved before this should be used
in production, including odd interactions with truss and ktrace.
Note also that currently, tracing is permitted on the current process
regardless of this flag, for compatibility with previous
authorization code in various facilities, but that will probably
change (and resolve the odd interactions).
Obtained from: TrustedBSD Project
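
The new knob would look roughly like this (a sketch; the exact declaration and description string in the commit may differ):

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

SYSCTL_DECL(_kern_security);

static int unprivileged_procdebug_permitted = 1;
SYSCTL_INT(_kern_security, OID_AUTO, unprivileged_procdebug_permitted,
    CTLFLAG_RW, &unprivileged_procdebug_permitted, 0,
    "Unprivileged processes may use process debugging facilities");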
dynamic symbol table buckets and chains. The sparc64 toolchain uses 32
bit .hash entries, unlike other 64-bit architectures (alpha), which use
64 bit entries.
Discussed with: dfr, jdp
FreeBSD _does_ define ENOMSG, so there is no need to check whether we support it.
Inspired by PR: 22470
Which was submitted by: Bjorn Tornqvist <bjorn@west.se>
MFC after: 1 week
VM caching of disks through mmap() and stopping syncing of open files
that had their last reference in the fs removed (ie: their unsync'ed
pages get discarded on close already, so I made it stop syncing too).
initialize in the right order to make derivative settings work right.
eg: at compile time, nmbufs was double nmbclusters. For POLA this should
work the same at runtime.
Tunables are now derived at boot time from maxusers. ie: change maxusers
via a tunable and all the derivative settings change. You can change
the other tunables individually as well. Even hz etc is tunable.
were indices in a dense array. The cpuids are a sparse set, so treat
them as such, setting up containers only for CPUs activated during
mb_init().
- Fix netstat(1) and systat(1) to treat the per-CPU stats area as a sparse
map, in accordance with the above.
This allows us to properly boot with certain CPUs deactivated. However, if
we later decide to re-activate said CPUs, we will barf until we decide to
implement CPU spinon/spinoff callback hooks to allow for said CPUs' per-CPU
containers to get configured on their activation.
Reported by: mjacob
Partially (sys/ diffs) Submitted by: mjacob
static entries with oid's over 0x100, and defining enough dynamic entries
causes an overlap.
Move the "magic" value 0x100 into <sys/sysctl.h> where it belongs.
PR: 29131
Submitted by: "Alexander N. Kabaev" <kabaev@mail.ru>
Reviewed by: -arch, -audit
MFC after: 2 weeks
an unexpected user-visible side effect with the sigaction flags. Also cleanup
a minor union issue.
Submitted by: Rudolf Cejka <cejkar@dcse.fee.vutbr.cz>
MFC addendum: MFC will be combined w/ original commit
MFC after: 3 days
support. Trying to fix the merged set where dynamic overrode
static was getting more and more complicated by the day.
This should fix the duplicate atkbd, psm, fd*, etc. in GENERIC (which
panicked the alpha, but not the i386).
The p_can(...) construct was a premature (and, it turns out,
awkward) abstraction. The individual calls to p_canxxx() better
reflect differences between the inter-process authorization checks,
such as differing checks based on the type of signal. This has
a side effect of improving code readability.
o Replace direct credential authorization checks in ktrace() with
invocation of p_candebug(), while maintaining the special case
check of KTR_ROOT. This allows ktrace() to "play more nicely"
with new mandatory access control schemes, as well as making its
authorization checks consistent with other "debugging class"
checks.
o Eliminate "privused" construct for p_can*() calls which allowed the
caller to determine if privilege was required for successful
evaluation of the access control check. This primitive is currently
unused, and as such, serves only to complicate the API.
Approved by: ({procfs,linprocfs} changes) des
Obtained from: TrustedBSD Project
(this commit is just the first stage). Also add various GIANT_ macros to
formalize the removal of Giant, making it easy to test in a more piecemeal
fashion. These macros will allow us to test fine-grained locks to a degree
before removing Giant, and also after, and to remove Giant in a piecemeal
fashion via sysctl's on those subsystems which the authors believe can
operate without Giant.
These take an additional mutex argument, which is dropped before any
processes are made runnable. This can avoid contention on the mutex
if the processes would immediately acquire it, and is done in such a
way that wakeups will not be lost.
Reviewed by: jhb
We already did this in the SMP case, and it is now maintained in the UP
case as well, and makes the code slightly more readable. Note that
curproc is always executing, thus the p != curproc test does not need to
be performed if the p_oncpu check is made.
We don't actually need to force a context switch of the current process.
The act of firing the event triggers a context switch to softclock() and
then switching back out again which is equivalent to a preemption, thus
no further work is needed on the local CPU.
allow call-specific authorization.
o Modify the authorization model so that p_can() is used to check
scheduling get/set events, using P_CAN_SEE for gets, and P_CAN_SCHED
for sets. This brings the checks in line with get/setpriority().
Obtained from: TrustedBSD Project
- The sx assertions don't actually need the internal sx mutex lock, so
don't bother doing so.
- Add a new assertion SX_ASSERT_LOCKED() that asserts that either a
shared or exclusive lock should be held. This assertion should be used
instead of SX_ASSERT_SLOCKED() in almost all cases.
- Adjust some KASSERT()'s to include file and line information.
- Use the new witness_assert() function in the WITNESS case for sx slock
asserts to verify that the current thread actually owns a slock.
- Clean up the KTR tracepoints to be slightly more consistent and useful.
- Fix a bug in WITNESS where we would recurse indefinitely and blow the
stack when acquiring Giant after sleeping with a sleepable lock held.
Reported by: tanimura (3)
processes.
- Don't construct fake call args and then call kill(). psignal() is no
more complicated, is quicker, and is not prone to locking problems.
Calling psignal() avoids having to do a pfind() since we already have a
proc pointer and also allows us to keep the task leader locked while we
kill all the peer processes so the list is kept coherent.
- When a kthread exits, do a wakeup() on its proc pointers. This can be
used by kernel modules that have kthreads and want to ensure they have
safely exited before completing the MOD_UNLOAD event.
Connectivity provided by: Usenix wireless
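
A sketch of how a module's MOD_UNLOAD path could use the new wakeup() (the softc layout and names here are hypothetical):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/proc.h>

struct foo_softc {
	struct proc *sc_kproc;	/* the module's kthread */
	int sc_shutdown;	/* set to ask the thread to exit */
};

static void
foo_wait_for_kthread(struct foo_softc *sc)
{
	struct proc *p = sc->sc_kproc;

	sc->sc_shutdown = 1;
	wakeup(sc);			/* nudge the thread out of its own sleep */
	while (sc->sc_kproc != NULL)	/* thread clears this before exiting */
		tsleep(p, PWAIT, "fooexit", hz);
}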
may need the clock lock for nanotime().
- Add KTR trace events for lock list manipulations and other witness
operations.
- Use a temporary variable instead of setting the lock list head directly
and then setting up the links to add a new lock list entry to the lock
list. This small race could result in witness "forgetting" about all
the locks held by this process temporarily during an interrupt.
- Close a more fatal race condition when removing a lock from a list.
Removing a lock from the list entails both decrementing the count of
items in this bucket as well as shuffling items in the current bucket up
a notch to replace the gap left by the removed item. Wrap these
operations in a critical section.
class to trace witness events.
- Make the ktr_cpu field of ktr_entry be a standard field rather than one
present only in the KTR_EXTEND case.
- Move the default definition of KTR_ENTRIES from sys/ktr.h to
kern/kern_ktr.c. It has not been needed in the header file since KTR
was un-inlined.
- Minor include cleanup in kern/kern_ktr.c.
- Fiddle with the ktr_cpumask in ktr_tracepoint() to disable KTR events
on the current CPU while we are processing an event.
- Set the current CPU inside of the critical section to ensure we don't
migrate CPU's after the critical section but before we set the CPU.
switch. Count the context switch when preempting the current thread to let
a higher priority thread blocked on a mutex we just released run as an
involuntary context switch.
Reported by: bde
people are on track with the cause and effect of this, and although
fixing this severely degenerate case appears to violate the letter of
POSIX.1-200x, Bruce and I (and enough others) agree that it should be
committed.
So, this patch generates an ENOENT error for any attempt to do a path lookup
through an empty symlink (e.g. open(), stat()).
Submitted by: "Andrey A. Chernov" <ache@nagual.pp.ru>
Reviewed by: bde
Discussed exhaustively on: freebsd-current
Previously committed to: NetBSD 4 years ago
- Grab Giant around ktrace points.
- Clean up KTR_PROC tracepoints to not display the value of
sched_lock.mtx_lock as it isn't really needed anymore and just obfuscates
the messages.
- Add a few if conditions to replace gotos.
- Ensure that every msleep KTR event ends up with a matching msleep resume
KTR event (this was broken when we didn't do a mi_switch()).
- Only note via ktrace that we resumed from a switch once rather than twice
in several places in msleep().
- Remove spl's from asleep and await as the proc lock and sched_lock provide
all the needed locking.
- In mawait() add in a needed ktrace point for noting that we are about to
switch out.
lock until after grabbing the sched_lock to avoid CURSIG racing with
psignal.
- Don't grab Giant for addupc_task() as it isn't needed.
Reported by: tegge (signal race), bde (addupc_task a while back)
rather than grabbing it and releasing it themselves. This allows callers
of these functions to get the lock to close race conditions.
- Grab Giant around ktrace in postsig.
- Count the switches performed on SIGSTOP's as involuntary context switches
in the resource usage stats.
Reported by: tegge (signal race), bde (missing csw stats)
introduce a modified allocation mechanism for mbufs and mbuf clusters; one
which can scale under SMP and which offers the possibility of resource
reclamation to be implemented in the future. Notable advantages:
o Reduce contention for SMP by offering per-CPU pools and locks.
o Better use of data cache due to per-CPU pools.
o Much less code cache pollution due to excessively large allocation macros.
o Framework for `grouping' objects from same page together so as to be able
to possibly free wired-down pages back to the system if they are no longer
needed by the network stacks.
Additional things changed with this addition:
- Moved some mbuf specific declarations and initializations from
sys/conf/param.c into mbuf-specific code where they belong.
- m_getclr() has been renamed to m_get_clrd() because the old name is really
confusing. m_getclr() HAS been preserved though and is defined to the new
name. No tree sweep has been done "to change the interface," as the old
name will continue to be supported and is not deprecated. The change was
merely done because m_getclr() sounds too much like "m_get a cluster."
- TEMPORARILY disabled mbtypes statistics displaying in netstat(1) and
systat(1) (see TODO below).
- Fixed systat(1) to display number of "free mbufs" based on new per-CPU
stat structures.
- Fixed netstat(1) to display new per-CPU stats based on sysctl-exported
per-CPU stat structures. All infos are fetched via sysctl.
TODO (in order of priority):
- Re-enable mbtypes statistics in both netstat(1) and systat(1) after
introducing an SMP friendly way to collect the mbtypes stats under the
already introduced per-CPU locks (i.e. hopefully don't use atomic() - it
seems too costly for a mere stat update, especially when other locks are
already present).
- Optionally have systat(1) display not only "total free mbufs" but also
"total free mbufs per CPU pool."
- Fix minor length-fetching issues in netstat(1) related to recently
re-enabled option to read mbuf stats from a core file.
- Move reference counters at least for mbuf clusters into an unused portion
of the cluster itself, to save space and avoid the need to allocate a counter.
- Look into introducing resource freeing possibly from a kproc.
Reviewed by (in parts): jlemon, jake, silby, terry
Tested by: jlemon (Intel & Alpha), mjacob (Intel & Alpha)
Preliminary performance measurements: jlemon (and me, obviously)
URL: http://people.freebsd.org/~bmilekic/mb_alloc/
lock. We now use temporary variables to save the process argument pointer
and just update the pointer while holding the lock. We then perform the
free on the cached pointer after releasing the lock.
something: offset into the first mbuf of the target chain before copying
the source data over.
Make drivers that call m_devget() with a first argument of "data - ETHER_ALIGN"
use the offset argument to pass ETHER_ALIGN in instead. The way it was previously
done is potentially dangerous if the source data was at the top of a page
and the offset caused the previous page to be copied (if the
previous page has not yet been appropriately mapped).
The old `offset' argument in m_devget() is not used anywhere (it's always
0) and dates back to ~1995 (and earlier?) when support for ethernet trailers
existed. With that support gone, it was merely collecting dust.
Tested on alpha by: jlemon
Partially submitted by: jlemon
Reviewed by: jlemon
MFC after: 3 weeks
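
The driver-side idiom this describes, in sketch form ("data", "len" and "ifp" are placeholders for a driver's receive state):

#include <sys/param.h>
#include <sys/socket.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <net/ethernet.h>

static struct mbuf *
foo_rxcopy(char *data, int len, struct ifnet *ifp)
{
	/*
	 * Old idiom: m_devget(data - ETHER_ALIGN, ..., 0, ...), which can
	 * touch the previous page.  New idiom: let m_devget() apply the
	 * offset inside the destination chain instead.
	 */
	return (m_devget(data, len, ETHER_ALIGN, ifp, NULL));
}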
and its associated constants. Implement _SC_IOV_MAX in the usual way.
Be a bit sloppy about the namespace question; this should get cleared up
in time for 5.0.
MFC after: 1 month
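
Userland consumers can then size iovec arrays at run time, e.g.:

#include <limits.h>
#include <unistd.h>

long
get_iov_max(void)
{
	long iov_max;

	if ((iov_max = sysconf(_SC_IOV_MAX)) == -1)
		iov_max = _XOPEN_IOV_MAX;	/* conservative fallback */
	return (iov_max);
}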
take a const 'name', since they don't modify anything.
159: warning: passing arg 1 of `getenv_int' discards qualifiers...
167: warning: passing arg 1 of `getenv' discards qualifiers from pointer..
Replace the a.out emulation of 'struct linker_set' with something
a little more flexible. <sys/linker_set.h> now provides macros for
accessing elements and completely hides the implementation.
The linker_set.h macros have been on the back burner in various
forms since 1998 and have ideas and code from Mike Smith (SET_FOREACH()),
John Polstra (ELF clue) and myself (cleaned up API and the conversion
of the rest of the kernel to use it).
The macros declare a strongly typed set. They return elements with the
type that you declare the set with, rather than a generic void *.
For ELF, we use the magic ld symbols (__start_<setname> and
__stop_<setname>). Thanks to Richard Henderson <rth@redhat.com> for the
trick about how to force ld to provide them for kld's.
For a.out, we use the old linker_set struct.
NOTE: the item lists are no longer null terminated. This is why
the code impact is high in certain areas.
The runtime linker has a new method to find the linker set
boundaries depending on which backend format is in use.
linker sets are still module/kld unfriendly and should never be used
for anything that may be modular one day.
Reviewed by: eivind
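
Illustrative use of the strongly typed macros (the set and structure names here are invented; macro names as provided by <sys/linker_set.h>):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/linker_set.h>

struct frob {
	const char *f_name;
};

SET_DECLARE(frob_set, struct frob);

static struct frob frob_one = { "one" };
DATA_SET(frob_set, frob_one);

static void
frob_walk(void)
{
	struct frob **fpp;

	/* Entries come back typed (struct frob **), not as void *. */
	SET_FOREACH(fpp, frob_set)
		printf("%s\n", (*fpp)->f_name);
}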
- Replace some very poorly thought out API hacks that should have been
fixed a long while ago.
- Provide some much more flexible search functions (resource_find_*())
- Use strings for storage instead of an outgrowth of the rather
inconvenient temporary ioconf table from config(). We already had a
fallback to using strings before malloc/vm was running anyway.
This work was based on kame-20010528-freebsd43-snap.tgz and some
critical problems after the snap was out were fixed.
There are many many changes since last KAME merge.
TODO:
- The definitions of SADB_* in sys/net/pfkeyv2.h are still different
from RFC2407/IANA assignment because of binary compatibility
issue. It should be fixed under 5-CURRENT.
- ip6po_m member of struct ip6_pktopts is no longer used. But, it
is still there because of binary compatibility issue. It should
be removed under 5-CURRENT.
Reviewed by: itojun
Obtained from: KAME
MFC after: 3 weeks
around, use a common function for looking up and extracting the tunables
from the kernel environment. This saves duplicating the same function
over and over again. This way typically has an overhead of 8 bytes + the
path string, versus about 26 bytes + the path string.
compliant. All the variable definitions and function names are
reasonably consistent, and the functions which should be static (i.e.,
all of them) are. Other assorted fixes were made. The majority of
the delta is indentation fixes.
Partially reviewed by: bde
certain cases, and a close() by another process could potentially rip the
pipe out from under the (blocked) locking operation.
Reported by: Alexander Viro <viro@math.psu.edu>
- move the sysctl code to kern_intr.c
- do not use INTRCNT_COUNT, but rather eintrcnt - intrcnt to determine
the length of the intrcnt array
- move the declarations of intrnames, eintrnames, intrcnt and eintrcnt
from machine-dependent include files to sys/interrupt.h
- remove the hw.nintr sysctl, it is not needed.
- fix various style bugs
Requested by: bde
Reviewed by: bde (some time ago)
* all members of msginfo from sysv_msg.c;
* msqids from sysv_msg.c;
* sema from sysv_sem.c; and
* shmsegs from sysv_shm.c;
These will be used by ipcs(1) in non-kvm mode.
Reviewed by: tmm
attempting to remove nonexistent exports with MNT_DELEXPORT returns
an error; before this change it always succeeded. This caused
mountd(8) to log "can't delete exports for /whatever" warnings.
Change the error code from EINVAL to a more specific ENOENT, and
make mountd ignore this error when deleting the export list. I
could have just restored the previous behaviour of returning success,
but I think an error return is a useful diagnostic.
Reviewed by: phk
it becomes possible to trap in ptsstop() in kern/tty_pty.c
if the slave side has never been opened during the life of a kernel.
What happens is that calls to ttyflush() done from ptyioctl() for the
controlling side end up calling ptsstop() [via (*tp->t_stop)(tp, <X>)]
which evaluates the following:
struct pt_ioctl *pti = tp->t_dev->si_drv1;
In order for tp->t_dev to be set, the slave device must first be
opened in ttyopen() [kern/tty.c].
It appears that the only problem is calls to (*tp->t_stop)(tp, <n>),
so this could also happen with other ioctls initiated by the
controlling side before the slave has been opened.
PR: 27698
Submitted by: David Bein <bein@netapp.com>
MFC after: 6 days
the saved uid and gid during execve(). Unfortunately, the optimizations
were incorrect in the case where the credential was updated, skipping
the setting of the saved uid and gid when new credentials were generated.
This change corrects that problem by handling the newcred!=NULL case
correctly.
Reported/tested by: David Malone <dwmalone@maths.tcd.ie>
Obtained from: TrustedBSD Project
dev_t. The dev_depends(dev_t, dev_t) function is for tying them
to each other.
When destroy_dev() is called on a dev_t, all dev_t's depending
on it will also be destroyed (depth first order).
Rewrite the make_dev_alias() to use this dependency facility.
kern/subr_disk.c:
Make the disk mini-layer use dependencies to make sure all
relevant dev_t's are removed when the disk disappears.
Make the disk mini-layer precreate some magic sub devices
which the disk/slice/label code expects to be there.
kern/subr_disklabel.c:
Remove some now unneeded variables.
kern/subr_diskmbr.c:
Remove some ancient, commented out code.
kern/subr_diskslice.c:
Minor cleanup. Use name from dev_t instead of dsname()
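
Illustrative use of the new facility (the driver name is made up): tie an alias to its parent so a single destroy_dev() takes both away.

#include <sys/param.h>
#include <sys/conf.h>

static dev_t foo_dev, foo_alias;

static void
foo_make_devs(struct cdevsw *csw, int unit)
{
	foo_dev = make_dev(csw, unit, UID_ROOT, GID_WHEEL, 0600,
	    "foo%d", unit);
	foo_alias = make_dev(csw, unit, UID_ROOT, GID_WHEEL, 0600,
	    "foo_compat%d", unit);
	/* destroy_dev(foo_dev) will now destroy foo_alias as well. */
	dev_depends(foo_dev, foo_alias);
}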
real uid, saved uid, real gid, and saved gid to ucred, as well as the
pcred->pc_uidinfo, which was associated with the real uid, only rename
it to cr_ruidinfo so as not to conflict with cr_uidinfo, which
corresponds to the effective uid.
o Remove p_cred from struct proc; add p_ucred to struct proc, replacing
the original macro that pointed p->p_ucred to p->p_cred->pc_ucred.
o Universally update code so that it makes use of ucred instead of pcred,
p->p_ucred instead of p->p_pcred, cr_ruidinfo instead of p_uidinfo,
cr_{r,sv}{u,g}id instead of p_*, etc.
o Remove pcred0 and its initialization from init_main.c; initialize
cr_ruidinfo there.
o Restructure many credential modification chunks to always crdup while
we figure out locking and optimizations; generally speaking, this
means moving to a structure like this:
newcred = crdup(oldcred);
...
p->p_ucred = newcred;
crfree(oldcred);
It's not race-free, but better than nothing. There are also races
in sys_process.c, all inter-process authorization, fork, exec, and
exit.
o Remove sigio->sio_ruid since sigio->sio_ucred now contains the ruid;
remove comments indicating that the old arrangement was a problem.
o Restructure exec1() a little to use newcred/oldcred arrangement, and
use improved uid management primitives.
o Clean up exit1() so as to do less work in credential cleanup due to
pcred removal.
o Clean up fork1() so as to do less work in credential cleanup and
allocation.
o Clean up ktrcanset() to take into account changes, and move to using
suser_xxx() instead of performing a direct uid==0 comparison.
o Improve commenting in various kern_prot.c credential modification
calls to better document current behavior. In a couple of places,
current behavior is a little questionable and we need to check
POSIX.1 to make sure it's "right". More commenting work still
remains to be done.
o Update credential management calls, such as crfree(), to take into
account new ruidinfo reference.
o Modify or add the following uid and gid helper routines:
change_euid()
change_egid()
change_ruid()
change_rgid()
change_svuid()
change_svgid()
In each case, the call now acts on a credential not a process, and as
such no longer requires more complicated process locking/etc. They
now assume the caller will do any necessary allocation of an
exclusive credential reference. Each is commented to document its
reference requirements.
o CANSIGIO() is simplified to require only credentials, not processes
and pcreds.
o Remove lots of (p_pcred==NULL) checks.
o Add an XXX to authorization code in nfs_lock.c, since it's
questionable, and needs to be considered carefully.
o Simplify posix4 authorization code to require only credentials, not
processes and pcreds. Note that this authorization, as well as
CANSIGIO(), needs to be updated to use the p_cansignal() and
p_cansched() centralized authorization routines, as they currently
do not take into account some desirable restrictions that are handled
by the centralized routines, as well as being inconsistent with other
similar authorization instances.
o Update libkvm to take these changes into account.
Obtained from: TrustedBSD Project
Reviewed by: green, bde, jhb, freebsd-arch, freebsd-audit
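
The modification pattern described above, in sketch form (helper signatures approximated; not a specific committed hunk):

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/ucred.h>

static void
set_effective_uid(struct proc *p, uid_t euid)
{
	struct ucred *newcred, *oldcred;

	oldcred = p->p_ucred;
	newcred = crdup(oldcred);	/* always work on a fresh credential */
	change_euid(newcred, euid);	/* acts on the ucred, not the proc */
	p->p_ucred = newcred;
	crfree(oldcred);
}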
Tor created a while ago, removes the raw I/O piece (that has cache coherency
problems), and adds a buffer cache / VM freeing piece.
Essentially this patch causes O_DIRECT I/O to not be left in the cache, but
does not prevent it from going through the cache, hence the 80%. For
the last 20% we need a method by which the I/O can be issued directly to
the buffer supplied by the user process and bypass the buffer cache entirely,
but still maintain cache coherency.
I also have the code working under -stable but the changes made to sys/file.h
may not be MFCable, so an MFC is not on the table yet.
Submitted by: tegge, dillon
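
From userland the feature is simply requested at open time (illustrative):

#include <fcntl.h>

int
open_uncached(const char *path)
{
	/* Data moved through this descriptor is not left in the cache. */
	return (open(path, O_RDWR | O_DIRECT));
}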
- Always call vfs_setdirty() with vm_mtx held.
- Fix an old comment: vm_hold_unload_pages is called vm_hold_free_pages()
nowadays.
- Always call vm_hold_free_pages() w/o vm_mtx held.
- Don't bother releasing Giant while doing a lookup on the vm_map of
initproc while starting up init. We have to grab it again right after
the lookup anyways.
at the top of the minute, whichever comes first. It seems
logtimeout() is only called once after the kernel log is opened
and then never again after that. So I guess syslogd only gets
kernel log messages by virtue of syncer(4)'s flushes ...?
PR: 27361
Submitted by: pkern@utcc.utoronto.ca
MFC after: 1 week
systems were repo-copied from sys/miscfs to sys/fs.
- Renamed the following file systems and their modules:
fdesc -> fdescfs, portal -> portalfs, union -> unionfs.
- Renamed corresponding kernel options:
FDESC -> FDESCFS, PORTAL -> PORTALFS, UNION -> UNIONFS.
- Install header files for the above file systems.
- Removed bogus -I${.CURDIR}/../../sys CFLAGS from userland
Makefiles.
needs instead of relying on idiosyncratic hacks in the tty subsystem.
Also add module code since this can now be compiled as a module.
Silence by: -hackers, -audit
simpler for npx exceptions that start as traps (no assembly required...)
and works better for npx exceptions that start as interrupts (there is
no longer a problem for nested interrupts).
Submitted by: original (pre-SMPng) version by luoqi
shm_deallocate_segment because shmexit_myhook calls it, and the latter
should always be called with it already held.
Submitted by: dwmalone, dd
Approved by: alfred
- Don't release the vm mutex early in pipespace() but instead hold it
across vm_object_deallocate() if vm_map_find() returns an error and
across pipe_free_kmem() if vm_map_find() succeeds.
- Add a XXX above a zfree() since zalloc already has its own locking,
one would hope that zfree() wouldn't need the vm lock.
vm_mtx does not recurse and is required for most low level
vm operations.
faults can not be taken without holding Giant.
Memory subsystems can now call the base page allocators safely.
Almost all atomic ops were removed as they are covered under the
vm mutex.
Alpha and ia64 now need to catch up to i386's trap handlers.
FFS and NFS have been tested, other filesystems will need minor
changes (grabbing the vm lock when twiddling page properties).
Reviewed (partially) by: jake, jhb
lock. Since we won't actually block on a try lock operation, it's not
a problem. Add a comment explaining why it is safe to skip lock order
checking with try locks.
- Remove the ithread list lock spin lock from the order list.
sleep locks.
- Delay returning from ithread_remove_handler() until we are certain that
the interrupt handler being removed has in fact been removed from the
ithread.
- XXX: There is still a problem in that nothing protects the kernel from
adding a new handler while the ithread is running, though with our
current architectures this is not a problem.
Requested by: gibbs (2)
- Attach a writable sysctl to bootverbose (debug.bootverbose) so it can be
toggled after boot.
- Move the printf of the version string to a SI_SUB_COPYRIGHT SYSINIT just
after the display of the copyright message instead of doing it by hand in
three MD places.
follows: the effective uid of p1 (subject) must equal the real, saved,
and effective uids of p2 (object), p2 must not have undergone a
credential downgrade. A subject with appropriate privilege may override
these protections.
In the future, we will extend these checks to require that p1 effective
group membership must be a superset of p2 effective group membership.
Obtained from: TrustedBSD Project
Remove comment about setting error for reads on EOF, read returns 0 on
EOF so the code should be ok.
Remove non-effective priority boost, PRIO+1 doesn't do anything
(according to McKusick), if a real priority boost is needed it should
have been +4.
Style fixes:
.) return foo -> return (foo)
.) FLAG1|FlAG2 -> FLAG1 | FlAG2
.) wrap long lines
.) unwrap short lines
.) for(i=0;i=foo;i++) -> for (i = 0; i=foo; i++)
.) remove braces for some conditionals with a single statement
.) fix continuation lines.
md5 couldn't verify the binary because some code had to
be shuffled around to address the style issues.
the number of references on the filesystem root vnode to be both
expected and released. Many filesystems hold an extra reference on
the filesystem root vnode, which must be accounted for when
determining if the filesystem is busy and then released if it isn't
busy. The old `skipvp' approach required individual filesystem
xxx_unmount functions to re-implement much of vflush()'s logic to
deal with the root vnode.
All 9 filesystems that hold an extra reference on the root vnode
got the logic wrong in the case of forced unmounts, so `umount -f'
would always fail if there were any extra root vnode references.
Fix this issue centrally in vflush(), now that we can.
This commit also fixes a vnode reference leak in devfs, which could
result in idle devfs filesystems that refuse to unmount.
Reviewed by: phk, bp
- Require the proc lock be held for killproc() to allow for the vmdaemon to
kill a process when memory is exhausted while holding the lock of the
process to kill.
When people access /dev/tty, locate their controlling tty and return
the dev_t of it to them. This basically makes /dev/tty act like
a variant symlink sort of thing which is much simpler than all the
mucking about with vnodes.
- Since polling should not involve sleeping, keep holding a
process lock upon scanning file descriptors.
- Hold a reference to every file descriptor prior to entering
polling loop in order to avoid lock order reversal between
lockmgr and p_mtx upon calling fdrop() in fo_poll().
(NOTE: this work has not been done for netncp and netsmb
yet because a socket itself has no reference counts.)
Reviewed by: jhb
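
The core of the scheme in sketch form (a simplified fragment, not the committed sys_generic.c diff):

#include <sys/param.h>
#include <sys/file.h>
#include <sys/proc.h>

static int
poll_one(struct file *fp, int events, struct proc *p)
{
	int revents;

	fhold(fp);		/* reference taken before polling */
	revents = fo_poll(fp, events, fp->f_cred, p);
	fdrop(fp, p);		/* dropped only after fo_poll() returns */
	return (revents);
}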
KASSERT when vp->v_usecount is zero or negative. In this case, the
"v*: negative ref cnt" panic that follows is much more appropriate.
Reviewed by: mckusick
fail due to witness exhausting its internal resources and shutting down.
Reported by: Szilveszter Adam <sziszi@petra.hos.u-szeged.hu>
Tested by: David Wolfskill <david@catwhisker.org>
The pipe code could not handle running out of kva, it would panic
if that happened. Instead return ENFILE to the application which
is an acceptable error return from pipe(2).
There were some slightly tricky things that needed to be worked on,
namely that the pipe code can 'realloc' the size of the buffer if
it detects that the pipe could use a bit more room. However if it
failed the reallocation it could not cope and would panic. Fix
this by attempting to grow the pipe while holding onto our old
resources. If all goes well free the old resources and use the
new ones, otherwise continue to use the smaller buffer already
allocated.
While I'm here add a few blank lines for style(9) and remove
'register'.
process on fork(2).
It is the supposed behavior stated in the manpage of sigaction(2), and
Solaris, NetBSD and FreeBSD 3-STABLE correctly do so.
The previous fix against libc_r/uthread/uthread_fork.c fixed the
problem only for the programs linked with libc_r, so back it out and
fix fork(2) itself to help those not linked with libc_r as well.
PR: kern/26705
Submitted by: KUROSAWA Takahiro <fwkg7679@mb.infoweb.ne.jp>
Tested by: knu, GOTOU Yuuzou <gotoyuzo@notwork.org>,
and some other people
Not objected by: hackers
MFC in: 3 days
implementation. Move from direct uid 0 comparison to using suser_xxx()
call with the same semantics. Simplify CAN_AFFECT() macro as passed
pcred was redundant. The checks here still aren't "right", but they
are probably "better".
Obtained from: TrustedBSD Project
struct lock_instance that is stored in the per-process and per-CPU lock
lists. Previously, the lock lists just kept a pointer to each lock held.
That pointer is now replaced by a lock instance which contains a pointer
to the lock object, the file and line of the last acquisition of a lock,
and various flags about a lock including its recursion count.
- If we sleep while holding a sleepable lock, then mark that lock instance
as having slept and ignore any lock order violations that occur while
acquiring Giant when we wake up with slept locks. This is ok because of
Giant's special nature.
- Allow witness to differentiate between shared and exclusive locks and
unlocks of a lock. Witness will now detect the case when a lock is
acquired first in one mode and then in another. Mutexes are always
locked and unlocked exclusively. Witness will also now detect the case
where a process attempts to unlock a shared lock while holding an
exclusive lock and vice versa.
- Fix a bug in the lock list implementation where we used the wrong
constant to detect the case where a lock list entry was full.
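
The per-held-lock record would look roughly like this (field names are a guess from the description above, not copied from the commit):

struct lock_instance {
	struct lock_object *li_lock;	/* the lock object itself */
	const char	*li_file;	/* file of the last acquisition */
	int		li_line;	/* line of the last acquisition */
	u_int		li_flags;	/* exclusive/slept bits, recursion count */
};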
uses lockmgr locks and this leads to a lock order reversal. At this point
in wait1() the process is not on any process lists or in the process tree,
so no other process should be able to find it or have a reference to it
anyways, so the locking is not needed.
other "system" header files.
Also help the deprecation of lockmgr.h by making it a sub-include of
sys/lock.h and removing sys/lockmgr.h from kernel .c files.
Sort sys/*.h includes where possible in affected files.
OK'ed by: bde (with reservations)
- add a missing break which caused RTP_SET to always return EINVAL
- break instead of returning if p_can fails so proc_lock is always
dropped correctly
- only copyin data that is actually needed
- use break instead of goto
- make rtp_to_pri return EINVAL instead of -1 if the values are out
of range so we don't have to translate
and gid in the ACL, vaccess_acl_posix1e() was changed to accept
explicit file_uid and file_gid as arguments. However, in making the
change, I explicitly checked file_gid against cr->cr_groups[0], rather
than using groupmember, resulting in ACL_GROUP_OBJ entries being
compared to the caller's effective gid only, not the remainder of
its groups. This was recently corrected for the version of the
group call without privilege, but the second test (when privilege is
added) was missed. This change replaces an additional cr->cr_groups[0]
check with groupmember().
Pointed out by: jedgar
Reviewed by: jedgar
Obtained from: TrustedBSD Project
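
The corrected check in sketch form, using the file_gid argument mentioned above:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/ucred.h>

/* Match ACL_GROUP_OBJ against all of the caller's groups. */
static int
acl_group_obj_matches(gid_t file_gid, struct ucred *cred)
{
	return (groupmember(file_gid, cred));
}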
Make 7 filesystems which don't really know about VOP_BMAP rely
on the default vector, rather than more or less complete local
vop_nopbmap() implementations.
been made machine independent and various other adjustments have been made
to support Alpha SMP.
- It splits the per-process portions of hardclock() and statclock() off
into hardclock_process() and statclock_process() respectively. hardclock()
and statclock() call the *_process() functions for the current process so
that UP systems will run as before. For SMP systems, it is simply necessary
to ensure that all other processors execute the *_process() functions when the
main clock functions are triggered on one CPU by an interrupt. For the alpha
4100, clock interrupts are delivered in a staggered broadcast fashion, so
we simply call hardclock/statclock on the boot CPU and call the *_process()
functions on the secondaries. For x86, we call statclock and hardclock as
usual and then call forward_hardclock/statclock in the MD code to send an IPI
to cause the AP's to execute forward_hardclock/statclock which then call the
*_process() functions.
- forward_signal() and forward_roundrobin() have been reworked to be MI and to
involve less hackery. Now the cpu doing the forward sets any flags, etc. and
sends a very simple IPI_AST to the other cpu(s). AST IPIs now just basically
return so that they can execute ast() and don't bother with setting the
astpending or needresched flags themselves. This also removes the loop in
forward_signal() as sched_lock closes the race condition that the loop worked
around.
- need_resched(), resched_wanted() and clear_resched() have been changed to take
a process to act on rather than assuming curproc so that they can be used to
implement forward_roundrobin() as described above.
- Various other SMP variables have been moved to a MI subr_smp.c and a new
header sys/smp.h declares MI SMP variables and API's. The IPI API's from
machine/ipl.h have moved to machine/smp.h which is included by sys/smp.h.
- The globaldata_register() and globaldata_find() functions as well as the
SLIST of globaldata structures has become MI and moved into subr_smp.c.
Also, the globaldata list is only available if SMP support is compiled in.
Reviewed by: jake, peter
Looked over by: eivind