Commit Graph

5761 Commits

Author SHA1 Message Date
Ian Dowse
6a1b2a22ef Add a new vnode flag VI_DOINGINACT to indicate that a VOP_INACTIVE
call is in progress on the vnode. When vput() or vrele() sees a
1->0 reference count transition, it now returns without any further
action if this flag is set. This flag is necessary to avoid recursion
into VOP_INACTIVE if the filesystem inactive routine causes the
reference count to increase and then drop back to zero. It is also
used to guarantee that an unlocked vnode will not be recycled while
blocked in VOP_INACTIVE().

There are at least two cases where the recursion can occur: one is
that the softupdates code called by ufs_inactive() via ffs_truncate()
can call vput() on the vnode. This has been reported by many people
as "lockmgr: draining against myself" panics. The other case is
that nfs_inactive() can call vget() and then vrele() on the vnode
to clean up a sillyrename file.

Reviewed by:	mckusick (an older version of the patch)
2002-12-29 18:30:49 +00:00
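
A minimal userland sketch of the recursion guard described in the commit
above; the flag and function names echo the commit, but the struct and
logic here are simplified assumptions rather than the real kernel code.

    #include <stdio.h>

    #define VI_DOINGINACT 0x1           /* inactive processing in progress */

    struct vnode {
            int v_usecount;
            int v_iflag;
    };

    static void vop_inactive(struct vnode *vp);

    /* Drop a reference; run the inactive routine on the 1->0 transition. */
    static void
    vrele(struct vnode *vp)
    {
            if (--vp->v_usecount > 0)
                    return;
            if (vp->v_iflag & VI_DOINGINACT)
                    return;             /* already inside VOP_INACTIVE */
            vp->v_iflag |= VI_DOINGINACT;
            vop_inactive(vp);
            vp->v_iflag &= ~VI_DOINGINACT;
    }

    /* An inactive routine that bumps and drops the count, as softupdates can. */
    static void
    vop_inactive(struct vnode *vp)
    {
            vp->v_usecount++;
            vrele(vp);                  /* guarded: no recursion back here */
            printf("inactive done, usecount=%d\n", vp->v_usecount);
    }

    int
    main(void)
    {
            struct vnode vn = { 1, 0 };

            vrele(&vn);
            return (0);
    }

Without the flag, the inner vrele() would see another 1->0 transition and
re-enter the inactive routine, which is exactly the recursion the commit
prevents.
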
Poul-Henning Kamp
371400cf2e Use a timeout of one second while we wait for the vnode washer;
this prevents a potential race and makes the system a little bit
less jerky under extreme loads.
2002-12-29 11:18:25 +00:00
Poul-Henning Kamp
851a87ea1a Vnodes pull in 800-900 bytes these days, all things counted, so we need
to treat desiredvnodes much more like a limit than as a vague concept.

On a 2GB RAM machine where desiredvnodes is 130k, we run out of
kmem_map space when we hit about 190k vnodes.

If we wake up the vnode washer in getnewvnode(), sleep until it is done,
so that it has a chance to offer us a washed vnode.  If we don't sleep
here we'll just race ahead and allocate yet another vnode which will never
get freed.

In the vnodewasher, instead of doing 10 vnodes per mountpoint per
rotation, do 10% of the vnodes distributed evenly across the
mountpoints.
2002-12-29 10:39:05 +00:00
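
A small sketch of the arithmetic implied by the vnodewasher change above:
wash roughly 10% of the vnodes per rotation, spread evenly over the
mountpoints. The variable names and values are illustrative only.

    #include <stdio.h>

    int
    main(void)
    {
            int numvnodes = 130000;             /* current vnode count (example) */
            int nmounts = 13;                   /* mounted filesystems (example) */

            int target = numvnodes / 10;        /* ~10% per rotation */
            int per_mount = target / nmounts;   /* even share per mountpoint */
            if (per_mount == 0)
                    per_mount = 1;              /* always make some progress */

            printf("wash %d vnodes, %d per mountpoint\n", target, per_mount);
            return (0);
    }
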
Alan Cox
a28cc55e5b Reduce the number of times that we acquire and release the page queues
lock by making vm_page_rename()'s caller, rather than vm_page_rename(),
responsible for acquiring it.
2002-12-29 07:17:06 +00:00
Jake Burkholder
24fbeaf9c3 Don't put a newline in KTR traces. 2002-12-28 23:22:22 +00:00
Jake Burkholder
dcc4093c7a Add a tunable kern.smp.disabled for explicitly disabling SMP on an SMP
kernel.
2002-12-28 23:21:13 +00:00
Poul-Henning Kamp
9f16282798 KASSERT that vop_revoke() gets a VCHR. 2002-12-28 22:27:14 +00:00
Poul-Henning Kamp
f53c6e5c9a Remove unused cdevsw_ALLOCSTART macro. 2002-12-28 21:47:43 +00:00
Poul-Henning Kamp
7068a01c6f Remove cdevsw_add calls, they are deprecated. 2002-12-28 21:39:46 +00:00
Matthew Dillon
45587e2514 Abstract-out the constants for the sequential heuristic.
No operational changes.

MFC after:	1 day
2002-12-28 20:28:10 +00:00
Julian Elischer
93a7aa79d6 Add code to ddb to allow backtracing an arbitrary thread.
(show thread {address})

Remove the IDLE kse state and replace it with a change in
the way threads share KSEs. Every KSE now has a thread, which is
considered its "owner"; however, a KSE may also be lent to other
threads in the same group to allow completion of in-kernel work.
In this case the owner remains the same and the KSE will revert to the
owner when the other work has been completed.

All creation of upcalls etc. is now done from
kse_reassign() which in turn is called from mi_switch or
thread_exit(). This means that special code can be removed from
msleep() and cv_wait().

kse_release() does not leave a KSE with no thread any more but
converts the existing thread into the KSE's owner, and sets it up
for doing an upcall. It is just inhibited from being scheduled until
there is some reason to do an upcall.

Remove all trace of the kse_idle queue since it is no longer needed.
"Idle" KSEs are now on the loanable queue.
2002-12-28 01:23:07 +00:00
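
A hypothetical userland model of the owner/loan relationship described in
the commit above: every KSE has an owning thread, can be lent out, and
reverts to its owner when the borrowed work completes. The struct layout
and function names are invented for illustration and are not the kernel's.

    #include <assert.h>
    #include <stdio.h>

    struct thread {
            const char *name;
    };

    struct kse {
            struct thread *owner;       /* the thread that owns this KSE */
            struct thread *running;     /* the thread currently using it */
    };

    /* Lend the KSE to another thread in the same group. */
    static void
    kse_loan(struct kse *k, struct thread *borrower)
    {
            assert(k->running == k->owner);
            k->running = borrower;
    }

    /* In-kernel work finished: the KSE reverts to its owner. */
    static void
    kse_revert(struct kse *k)
    {
            k->running = k->owner;
    }

    int
    main(void)
    {
            struct thread a = { "owner" }, b = { "borrower" };
            struct kse k = { &a, &a };

            kse_loan(&k, &b);
            printf("running: %s\n", k.running->name);   /* borrower */
            kse_revert(&k);
            printf("running: %s\n", k.running->name);   /* owner */
            return (0);
    }
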
Robert Watson
f0bc12ee8d Improve consistency between devfs and MAKEDEV: use UID_ROOT and
GID_WHEEL instead of UID_BIN and GID_BIN for /dev/fd/* entries.

Submitted by:	kris
2002-12-27 16:54:44 +00:00
Alfred Perlstein
5590e7fdf0 Lock filedesc while performing a range check on the file descriptor.
Reviewed by: alc
2002-12-27 08:39:42 +00:00
Alan Cox
d746789347 Hold the page queues lock when calling vm_page_flag_clear(). 2002-12-27 06:52:32 +00:00
Jeffrey Hsu
6f782c4636 Ensure that the made-up inode number for a Unix domain socket is persistent. 2002-12-25 07:59:39 +00:00
Robert Watson
79191eca57 Flush vop_refreshlabel() definition, since it is no longer used.
Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2002-12-24 19:47:13 +00:00
Poul-Henning Kamp
a7010ee2f4 White-space changes. 2002-12-24 09:44:51 +00:00
Jeffrey Hsu
956b0b653c SMP locking for radix nodes. 2002-12-24 03:03:39 +00:00
Poul-Henning Kamp
08c7670a8b Move the declaration of the socket fileops from socketvar.h to file.h.
This allows us to use the new typedefs and removes the need for a number
of forward struct declarations in socketvar.h
2002-12-23 22:46:47 +00:00
Poul-Henning Kamp
f3a682116c Detediousficate declaration of fileops array members by introducing
typedefs for them.
2002-12-23 21:53:20 +00:00
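
A sketch of the typedef approach the commit above describes, using invented
member and function names; the real typedefs and struct fileops live in
sys/file.h.

    #include <stdio.h>

    /* Typedefs for the member function pointers (illustrative signatures). */
    typedef int fo_rdwr_t(void *fp, char *buf, int len);
    typedef int fo_close_t(void *fp);

    /* With the typedefs, the ops table declaration stays short and readable. */
    struct fileops {
            fo_rdwr_t       *fo_read;
            fo_rdwr_t       *fo_write;
            fo_close_t      *fo_close;
    };

    static int
    dummy_rdwr(void *fp, char *buf, int len)
    {
            (void)fp; (void)buf;
            return (len);
    }

    static int
    dummy_close(void *fp)
    {
            (void)fp;
            return (0);
    }

    static struct fileops dummyops = {
            .fo_read  = dummy_rdwr,
            .fo_write = dummy_rdwr,
            .fo_close = dummy_close,
    };

    int
    main(void)
    {
            printf("read returned %d\n", dummyops.fo_read(NULL, NULL, 42));
            return (dummyops.fo_close(NULL));
    }
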
Poul-Henning Kamp
6ce9c72c30 s/sokqfilter/soo_kqfilter/ for consistency with the naming of all
other socket/file operations.
2002-12-23 21:37:28 +00:00
Alan Cox
0cb6c00463 - Hold the kernel_object's lock around vm_page_alloc(kernel_object,...).
- Hold the page queues lock around vm_page_wakeup().
2002-12-23 20:10:47 +00:00
Jake Burkholder
c3c2862df4 - Add a spin lock to single-thread cache invalidation and TLB flush IPIs,
which allows IPIs to be sent outside of Giant.
- Remove the ap boot mutex, which is unused.
2002-12-22 20:50:23 +00:00
Kris Kennaway
4ef3d7a27b Enforce correct ordering of the filedesc structure and pipe mutex, because
WITNESS can get the order wrong if it guesses based on first use.

Reviewed by:	jhb, alfred
2002-12-22 16:32:34 +00:00
Jeffrey Hsu
b30a244c34 SMP locking for ifnet list. 2002-12-22 05:35:03 +00:00
Marcel Moolenaar
551d79e177 Fix multiple registration of the elf_legacy_coredump sysctl variable.
The duplication is caused by the fact that imgact_elf.c is included
by both imgact_elf32.c and imgact_elf64.c and both are compiled by
default on ia64. Consequently, we have two separate copies of the
elf_legacy_coredump variable due to them being declared static, and
two entries for the same sysctl in the linker set, both referencing
the unique copy of the elf_legacy_coredump variable. Since the second
sysctl cannot be registered, one of the elf_legacy_coredump variables
cannot be tuned (if ordering still holds, it's the ELF64-related one).

The only solution is to create two different sysctl variables, just
like the elf<32|64>_trace sysctl variables. This unfortunately is a
(user) interface change, but unavoidable. Thus, on ELF32 platforms
the sysctl variable is called elf32_legacy_coredump and on ELF64
platforms it is called elf64_legacy_coredump. Platforms that have
both ELF formats have both sysctl variables.

These variables should probably be retired sooner rather than later.
2002-12-21 01:15:39 +00:00
Sam Leffler
91974ce10b add generic rate limiting support from NetBSD: ratelimit is purely time based,
while ppsratecheck is for controlling packets/second

Obtained from:	netbsd
2002-12-20 23:54:47 +00:00
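
A standalone approximation of the packets-per-second idea mentioned above;
the real ppsratecheck(9) is a kernel routine, so this userland version only
models the behaviour and invents its own names.

    #include <stdio.h>
    #include <time.h>

    /* Return 1 if another event is allowed this second, 0 if over the limit. */
    static int
    pps_check(time_t *last, int *curpps, int maxpps)
    {
            time_t now = time(NULL);

            if (now != *last) {         /* new second: reset the counter */
                    *last = now;
                    *curpps = 0;
            }
            if (*curpps >= maxpps)
                    return (0);
            (*curpps)++;
            return (1);
    }

    int
    main(void)
    {
            time_t last = 0;
            int curpps = 0, i, allowed = 0;

            for (i = 0; i < 100; i++)
                    allowed += pps_check(&last, &curpps, 10);
            printf("%d of 100 events allowed\n", allowed);
            return (0);
    }
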
Alan Cox
2952e1fb58 Extend the scope of the page queues lock in vm_pgmoveco(). 2002-12-20 21:18:29 +00:00
Maxime Henrion
894db7b01f Don't forget to destroy the mutex if an error occurs
in the jail() system call.

Submitted by:	Pawel Jakub Dawidek <nick@garage.freebsd.pl>
2002-12-20 14:32:20 +00:00
Alan Cox
ee113343eb Hold the page queues lock when performing vm_page_busy(). 2002-12-18 20:16:22 +00:00
Poul-Henning Kamp
4d99ef8d55 Indent properly. 2002-12-17 19:31:26 +00:00
Poul-Henning Kamp
126c7e29fe Remove unused variable cn_devfsdev. 2002-12-17 19:30:50 +00:00
Poul-Henning Kamp
d321df47c3 Don't cast a pointer to (intptr_t) and then on to (int) when we cannot
be sure that (int) is large enough.  Instead cast only to (intptr_t) and
cast the switch/case values to (intptr_t) as well.
2002-12-17 19:13:03 +00:00
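
A short example of the cast pattern the commit above describes: compare the
pointer-derived value as an intptr_t and cast the case values to intptr_t
as well, instead of narrowing to int. The command names are made up.

    #include <stdint.h>
    #include <stdio.h>

    enum { CMD_FOO = 1, CMD_BAR = 2 };  /* commands smuggled through a void * */

    static void
    dispatch(void *arg)
    {
            /* Cast only as far as intptr_t; never narrow to int. */
            switch ((intptr_t)arg) {
            case (intptr_t)CMD_FOO:
                    printf("foo\n");
                    break;
            case (intptr_t)CMD_BAR:
                    printf("bar\n");
                    break;
            default:
                    printf("unknown\n");
                    break;
            }
    }

    int
    main(void)
    {
            dispatch((void *)(intptr_t)CMD_BAR);
            return (0);
    }
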
Matthew Dillon
fa7dd9c5bc Change the way ELF coredumps are handled. Instead of unconditionally
skipping read-only pages, which can result in valuable non-text-related
data not getting dumped, the ELF loader and the dynamic loader now mark
read-only text pages NOCORE and the coredump code only checks (primarily) for
complete inaccessibility of the page or NOCORE being set.

Certain applications which map large amounts of read-only data will
produce much larger cores.  A new sysctl has been added,
debug.elf_legacy_coredump, which will revert to the old behavior.

This commit represents collaborative work by all parties involved.
The PR contains a program demonstrating the problem.

PR:		kern/45994
Submitted by:	"Peter Edwards" <pmedwards@eircom.net>, Archie Cobbs <archie@dellroad.org>
Reviewed by:	jdp, dillon
MFC after:	7 days
2002-12-16 19:24:43 +00:00
Robert Drehmel
0adb6d7a49 Remove the hto(be|le)[slq] and (be|le)toh[slq] macros defined in
_KERNEL scope from "src/sys/sys/mchain.h".

Replace each occurrence of the above in _KERNEL scope with the
appropriate macro from the set of hto(be|le)(16|32|64) and
(be|le)toh(16|32|64) from "src/sys/sys/endian.h".

Tested by:		tjr
Requested by:		comment marked with XXX
2002-12-16 16:20:06 +00:00
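
A small demonstration of the replacement macros named above, assuming a
platform that provides <sys/endian.h> with the hto(be|le)* and (be|le)toh*
families (FreeBSD does):

    #include <sys/endian.h>

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            uint32_t host = 0x11223344;
            uint32_t wire = htobe32(host);      /* host to big-endian */

            printf("host 0x%08x, big-endian 0x%08x, back 0x%08x\n",
                host, wire, be32toh(wire));
            return (0);
    }
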
Matthew Dillon
72e7f3ddc2 Regenerate system calls (swapoff added) 2002-12-15 19:19:15 +00:00
Matthew Dillon
92da00bb24 This is David Schultz's swapoff code which I am finally able to commit.
This should be considered highly experimental for the moment.

Submitted by:	David Schultz <dschultz@uclink.Berkeley.EDU>
MFC after:	3 weeks
2002-12-15 19:17:57 +00:00
Matthew Dillon
389d2b6e21 Fix a refcount race with the vmspace structure. In order to prevent
resource starvation we clean up as much of the vmspace structure as we
can when the last process using it exits.  The rest of the structure
is cleaned up when it is reaped.  But since exit1() decrements the ref
count it is possible for a double-free to occur if someone else, such as
the process swapout code, references and then dereferences the structure.
Additionally, the final cleanup of the structure should not occur until
the last process referencing it is reaped.

This commit solves the problem by introducing a secondary reference count,
called 'vm_exitingcnt'.  The normal reference count is decremented on exit
and vm_exitingcnt is incremented.  vm_exitingcnt is decremented when the
process is reaped.  When both vm_exitingcnt and vm_refcnt are 0, the
structure is freed for real.

MFC after:	3 weeks
2002-12-15 18:50:04 +00:00
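
A simplified, single-threaded model of the two-counter scheme described in
the commit above; the field and function names follow the commit, but the
struct and call sites are invented for illustration.

    #include <stdio.h>
    #include <stdlib.h>

    struct vmspace {
            int vm_refcnt;              /* normal references                   */
            int vm_exitingcnt;          /* processes exited but not yet reaped */
    };

    /* Free only when both counters have reached zero. */
    static void
    vmspace_tryfree(struct vmspace *vm)
    {
            if (vm->vm_refcnt == 0 && vm->vm_exitingcnt == 0) {
                    printf("vmspace freed for real\n");
                    free(vm);
            }
    }

    static void
    vmspace_exit(struct vmspace *vm)    /* modeled on the exit1() path */
    {
            vm->vm_refcnt--;
            vm->vm_exitingcnt++;
            vmspace_tryfree(vm);        /* never frees here: exitingcnt > 0 */
    }

    static void
    vmspace_reap(struct vmspace *vm)    /* modeled on process reaping */
    {
            vm->vm_exitingcnt--;
            vmspace_tryfree(vm);
    }

    int
    main(void)
    {
            struct vmspace *vm = malloc(sizeof(*vm));

            vm->vm_refcnt = 1;
            vm->vm_exitingcnt = 0;
            vmspace_exit(vm);   /* structure survives the exit path    */
            vmspace_reap(vm);   /* only the reaper does the final free */
            return (0);
    }
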
Maxim Konovalov
9f59c468f3 o Clear a high bit of ipc_perm.seq so msgget(3) never returns a
negative message queue id.

PR:		kern/46122
Submitted by:	Vladimir B.Grebenschikov <vova@sw.ru>
MFC after:	2 weeks
2002-12-15 09:41:46 +00:00
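
A short arithmetic illustration of why the high bit matters in the commit
above: SysV IPC ids are conventionally built by combining a slot's sequence
number with its index, so a sequence value with its top bit set can push
the resulting int negative. The shift and mask below are an assumption for
illustration, not the kernel's exact macro.

    #include <stdio.h>

    int
    main(void)
    {
            unsigned short seq = 0x8001;        /* high bit set */
            unsigned index = 3;

            int bad_id  = (int)(((unsigned)seq << 16) | index);
            int good_id = (int)(((unsigned)(seq & 0x7fff) << 16) | index);

            printf("unmasked id %d, masked id %d\n", bad_id, good_id);
            return (0);
    }
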
Alan Cox
475e8011ab Perform vm_object_lock() and vm_object_unlock() around
vm_object_page_remove().
2002-12-15 05:41:56 +00:00
Alfred Perlstein
f97182acf8 Unwrap lines made short enough by the SCARGS removal. 2002-12-14 08:18:06 +00:00
Alfred Perlstein
b80521fee5 Remove syscallarg().
Suggested by: peter
2002-12-14 02:07:32 +00:00
Alfred Perlstein
d1e405c5ce SCARGS removal take II. 2002-12-14 01:56:26 +00:00
Kirk McKusick
0f5f789c0d The buffer daemon cannot skip over buffers owned by locked inodes as
they may be the only viable ones to flush. Thus it will now wait for
an inode lock if the other alternatives will result in rollbacks (and
immediate redirtying of the buffer). If only buffers with rollbacks
are available, one will be flushed, but then the buffer daemon will
wait briefly before proceeding. Failing to wait briefly effectively
deadlocks a uniprocessor since every other process writing to that
filesystem will wait for the buffer daemon to clean up which takes
close enough to forever to feel like a deadlock.

Reported by:	Archie Cobbs <archie@dellroad.org>
Sponsored by:   DARPA & NAI Labs.
Approved by:	re
2002-12-14 01:35:30 +00:00
Alfred Perlstein
bc9e75d7ca Back out the SCARGS removal; the code freeze is only "selectively" over. 2002-12-13 22:41:47 +00:00
Alfred Perlstein
0bbe7292e1 Remove SCARGS.
Reviewed by: md5
2002-12-13 22:27:25 +00:00
Tim J. Robbins
9d0fffd3ca Drop filedesc lock and acquire Giant around calls to malloc() and free().
These call uma_large_malloc() and uma_large_free() which require Giant.
Fixes panic when descriptor table is larger than KMEM_ZMAX bytes
noticed by kkenn.

Reviewed by:	jhb
2002-12-13 09:59:40 +00:00
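
A generic pthread sketch of the lock-juggling pattern described above: drop
the fine-grained lock, take the lock the allocator needs, allocate, then
reacquire. The mutex names are placeholders, not the kernel's locks.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_mutex_t fd_lock = PTHREAD_MUTEX_INITIALIZER;  /* "filedesc" */
    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER; /* "Giant"    */

    /* Called with fd_lock held; returns with fd_lock held again. */
    static void *
    grow_table(size_t nbytes)
    {
            void *p;

            pthread_mutex_unlock(&fd_lock);     /* drop before allocating   */
            pthread_mutex_lock(&big_lock);      /* allocator wants this one */
            p = malloc(nbytes);
            pthread_mutex_unlock(&big_lock);
            pthread_mutex_lock(&fd_lock);       /* reacquire; caller must
                                                   revalidate its state     */
            return (p);
    }

    int
    main(void)
    {
            void *p;

            pthread_mutex_lock(&fd_lock);
            p = grow_table(1 << 20);
            pthread_mutex_unlock(&fd_lock);
            printf("allocated %p\n", p);
            free(p);
            return (0);
    }
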
Julian Elischer
696058c3c5 Unbreak the KSE code. Keep track of zombie threads using per-CPU storage
during the context switch. Rearrange thread cleanups
to avoid problems with Giant. Clean threads when freed or
when recycled.

Approved by:	re (jhb)
2002-12-10 02:33:45 +00:00
Robert Watson
990b4b2dc5 Remove dm_root entry from struct devfs_mount. It's never set, and is
unused.  Replace it with a dm_mount back-pointer to the struct mount
that the devfs_mount is associated with.  Export that pointer to MAC
Framework entry points, where all current policies don't use the
pointer.  This permits the SEBSD port of SELinux's FLASK/TE to compile
out-of-the-box on 5.0-CURRENT with full file system labeling support.

Approved by:	re (murray)
Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2002-12-09 03:44:28 +00:00
Alan Cox
2e29a1f21f To avoid lock order reversals in getnewvnode(), the call to uma_zfree()
must be delayed until the vnode interlock is released.

Reported by:	kris@
Approved by:	re (jhb)
2002-12-08 05:06:50 +00:00
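
A minimal sketch of the deferral pattern described above, with placeholder
names: remember the object while the lock is held and free it only after
the lock has been released, so the free can never run in the wrong lock
order.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_mutex_t interlock = PTHREAD_MUTEX_INITIALIZER;

    int
    main(void)
    {
            void *deferred_free = NULL;
            void *obj = malloc(64);

            pthread_mutex_lock(&interlock);
            /* ... decide under the lock that obj is no longer needed ... */
            deferred_free = obj;                /* do not free it here */
            pthread_mutex_unlock(&interlock);

            if (deferred_free != NULL) {
                    free(deferred_free);        /* safe: no locks held */
                    printf("freed after unlock\n");
            }
            return (0);
    }
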