This allows obtaining crash dumps from panics that occur during the late
stages of kernel initialisation, before the system enters single-user mode.
MFC after: 2 weeks
Replace mutex_lock() calls on uidinfo with macro calls:
mtx_lock(&uidp->ui_mtx) -> UIDINFO_LOCK(uidp)
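For reference, the macros would be defined along these lines (a sketch;
the real definitions belong in <sys/resourcevar.h>):

	#define	UIDINFO_LOCK(uip)	mtx_lock(&(uip)->ui_mtx)
	#define	UIDINFO_UNLOCK(uip)	mtx_unlock(&(uip)->ui_mtx)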
Terry Lambert <tlambert2@mindspring.com> helped with this.
sleeping on a process object but changed the corresponding
wakeup()s to the thread object. The result was that non-raw
aio ops waited for an aio daemon to time out before action
was taken. Now, we sleep on the thread object.
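Illustratively (identifiers here are assumptions, not the committed code),
the invariant is that tsleep() and wakeup() must name the same wait channel:

	/* Sketch: the sleeper and the waker must use the same channel. */
	tsleep(td, PRIBIO, "aiowait", 0);	/* sleep on the thread object */
	/* ... */
	wakeup(td);				/* wake that same address */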
PR: kern/34016
operation. The vgonel() code has always called vclean() but until we
started proactively freeing vnodes it would never actually be called with
a dirty vnode, so this situation did not occur prior to the vnlru() code.
Now that we proactively free vnodes when kern.maxvnodes is hit, however,
vclean() winds up with work to do and improperly generates the warnings.
Reviewed by: peter
Approved by: re (for MFC)
MFC after: 1 day
seem to be too short for the 500 MHz DS20 I'm testing on. These rather
arbitrary numbers are bogus anyway. We should probably have
variables for these limits that are calibrated in the MD startup code
somehow.
involving file removal or file update were not always being fully
committed to disk. The result was lost files or corrupted file data.
This change ensures that the filesystem is properly synced to disk
before the filesystem is down-graded.
This delta also fixes a long-standing bug in which a file open for
reading had been unlinked. When the last open reference to the file
is closed, the inode is reclaimed by the filesystem. Previously,
if the filesystem had been down-graded to read-only, the inode could
not be reclaimed, and thus was lost and had to be later recovered
by fsck. With this change, such files are found at the time of the
down-grade. Normally they will result in the filesystem down-grade
failing with `device busy'. If a forcible down-grade is done, then
the affected files will be revoked causing the inode to be released
and the open file descriptors to begin failing on attempts to read.
Submitted by: "Sam Leffler" <sam@errno.com>
Back out revisions 1.56 and 1.57 of fifo_vnops.c.
Introduce a new poll op "POLLINIGNEOF" that can be used to ignore
EOF on a fifo; POLLIN/POLLRDNORM are converted to POLLINIGNEOF within
the FIFO implementation to effect the correct behavior.
This should allow one to view a fifo pretty much as a data source
rather than worrying about connections coming and going.
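A sketch of userland usage (fifo_fd and handle_data() are assumptions
for illustration):

	#include <poll.h>

	struct pollfd pfd;

	pfd.fd = fifo_fd;		/* a descriptor open on the fifo */
	pfd.events = POLLINIGNEOF;	/* wake for data; writer EOF alone won't */
	if (poll(&pfd, 1, INFTIM) > 0 && (pfd.revents & POLLINIGNEOF))
		handle_data(fifo_fd);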
Reviewed by: bde
than necessary.
o Move a rarely-used goto label inside a critical section so that we don't
perform an splnet() for which there is no corresponding splx().
o Remove unnecessary splnet()/splx() around accesses to kaioinfo::kaio_jobdone
in aio_return().
o Use TAILQ_FOREACH for simple cases of iteration over kaioinfo::kaio_jobdone.
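For illustration, such an iteration now reads (ki is an assumed
struct kaioinfo pointer; the entry linkage name follows the old vfs_aio.c):

	struct aiocblist *cb;

	TAILQ_FOREACH(cb, &ki->kaio_jobdone, plist) {
		/* inspect each completed job; no splnet()/splx() needed */
	}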
Seigo Tanimura (tanimura) posted the initial delta.
I've polished it quite a bit reducing the need for locking and
adapting it for KSE.
Locks:
1 mutex in each filedesc
protects all the fields.
protects "struct file" initialization, while a struct file
is being changed from &badfileops -> &pipeops or something
the filedesc should be locked.
1 mutex in each struct file
protects the refcount fields.
doesn't protect anything else.
the flags used for garbage collection have been moved to
f_gcflag, which was the FILLER short; this doesn't need
locking because the garbage collection is a single-threaded
container.
could likely be made to use a pool mutex.
1 sx lock for the global filelist.
struct file * fhold(struct file *fp);
/* increments reference count on a file */
struct file * fhold_locked(struct file *fp);
/* like fhold but expects the file to be locked */
struct file * ffind_hold(struct thread *, int fd);
/* finds the struct file in thread, adds one reference and
returns it unlocked */
struct file * ffind_lock(struct thread *, int fd);
/* ffind_hold, but returns file locked */
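A sketch of the intended calling pattern (error handling abbreviated;
fdrop() is the assumed release counterpart):

	struct file *fp;

	if ((fp = ffind_hold(td, fd)) == NULL)
		return (EBADF);		/* no such descriptor */
	/* ... use fp; the held reference keeps it alive ... */
	fdrop(fp, td);			/* release the reference */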
I still have to make the fget cruft SMP-safe; I'll get to that ASAP.
We calculate a trigger point that both guarantees we will find a
sufficient number of vnodes to recycle and prevents us from recycling
vnodes with lots of resident pages. This particular section of
code is designed to recycle vnodes, not do unnecessary frees of
cached VM pages.
can't acquire the mnt_lock without blocking. Normally non-forced
unmount attempts return EBUSY quickly if any vnodes are active, so
this just extends that behaviour to cover the per-mount mnt_lock
too.
automatically extended to prevent overflow.
* Added sbuf_vprintf(); sbuf_printf() is now just a wrapper around
sbuf_vprintf(), as sketched below.
* Include <stdio.h> and <string.h> when building libsbuf to silence
WARNS=4 warnings.
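The wrapper is the standard va_list pattern; a sketch (matching
subr_sbuf.c in spirit, not verbatim):

	int
	sbuf_printf(struct sbuf *s, const char *fmt, ...)
	{
		va_list ap;
		int result;

		va_start(ap, fmt);
		result = sbuf_vprintf(s, fmt, ap);
		va_end(ap);
		return (result);
	}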
Reviewed by: des
macro. As a result, mandatory signal delivery policies will be
applied consistently across the kernel.
- Note that this subtly changes the protection semantics, and we should
watch out for any resulting breakage. Previously, delivery of SIGIO
in this circumstance was limited to situations where the subject was
privileged, or where one of the subject's (ruid, euid) matched one
of the object's (ruid, euid). In the new scenario, subject (ruid, euid)
are matched against the object's (ruid, svuid), and the object uids
must be a subset of the subject uids. Likewise, jail now affects
delivery, and special handling for P_SUGID of the object is present.
This change can always be reversed or tweaked if it proves to disrupt
application behavior substantially.
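A hedged sketch of the uid matching described above (uid_subset_ok() is
a hypothetical helper, not the committed code):

	/* Hypothetical: every object uid must equal one of the subject's. */
	static int
	uid_subset_ok(struct ucred *subj, struct ucred *obj)
	{
		if (obj->cr_ruid != subj->cr_ruid && obj->cr_ruid != subj->cr_uid)
			return (EPERM);
		if (obj->cr_svuid != subj->cr_ruid && obj->cr_svuid != subj->cr_uid)
			return (EPERM);
		return (0);
	}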
Obtained from: TrustedBSD Project
Sponsored by: DARPA, NAI Labs
authorized based on a subject credential rather than a subject process.
This will permit the same logic to be reused in situations where only
the credential generating the signal is available, such as in the
delivery of SIGIO.
- Because of two clauses (the automatic success against curproc
and the session semantics for SIGCONT), not all logic can be pushed
into cr_cansignal(), but those cases should not apply for most other
consumers of cr_cansignal().
- This brings the base system inter-process authorization code more
into line with the MAC implementation.
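A sketch of the resulting layering (signatures are illustrative; the
real ones live in kern_prot.c):

	int
	p_cansignal(struct proc *subj, struct proc *obj, int signum)
	{
		if (subj == obj)
			return (0);	/* automatic success against curproc */
		if (signum == SIGCONT && subj->p_session == obj->p_session)
			return (0);	/* SIGCONT within the same session */
		return (cr_cansignal(subj->p_ucred, obj, signum));
	}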
Obtained from: TrustedBSD Project
Sponsored by: DARPA, NAI Labs
filesystem problems could prevent the release from completing, and
this could result in init being blocked indefinitely.
This was looked over by Matt ages ago.
Approved by: dillon
SMTX in utils such as ps and top. The KI_CTTY flag was assigned to
kinfo_proc->ki_kiflag rather than OR'd into the field, thus clobbering
any flags set earlier, including KI_MTXBLOCK.
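The fix is of the one-character kind (kp is an assumed struct kinfo_proc
pointer):

	kp->ki_kiflag = KI_CTTY;	/* before: assignment clobbers the field */
	kp->ki_kiflag |= KI_CTTY;	/* after: OR the flag in, keeping KI_MTXBLOCK */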
Prodding by: peter
mutex releases to not require flags for the cases when preemption is
not allowed:
The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
switching to a higher priority thread on mutex release and swi schedule,
respectively, when that switch is not safe. Now that the critical section
API maintains a per-thread nesting count, the kernel can easily check
whether or not it should switch without relying on flags from the
programmer. This fixes a few bugs in that all current callers of
swi_sched() used SWI_NOSWITCH when, in fact, only the ones called from
fast interrupt handlers and the swi_sched of softclock needed this flag.
Note that to ensure that swi_sched()'s in clock and fast interrupt
handlers do not switch, these handlers have to be explicitly wrapped
in critical_enter/exit pairs. Presently, just wrapping the handlers is
sufficient, but in the future with the fully preemptive kernel, the
interrupt must be EOI'd before critical_exit() is called. (critical_exit()
can switch due to a deferred preemption in a fully preemptive kernel.)
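Illustratively, the wrapping amounts to (ih_cookie is an assumed SWI
handle):

	critical_enter();		/* defer any preemption */
	/* ... fast interrupt handler body ... */
	swi_sched(ih_cookie, 0);	/* schedule the SWI; no switch occurs */
	critical_exit();		/* nesting count drops; switch may occur */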
I've tested the changes to the interrupt code on i386 and alpha. I have
not tested ia64, but the interrupt code is almost identical to the alpha
code, so I expect it will work fine. PowerPC and ARM do not yet have
interrupt code in the tree so they shouldn't be broken. Sparc64 is
broken, but that's been ok'd by jake and tmm who will be fixing the
interrupt code for sparc64 shortly.
Reviewed by: peter
Tested on: i386, alpha