Commit Graph

8309 Commits

Jeff Roberson
92e251caf7 - Complete the implementation of td_locks. Track the number of outstanding
lockmgr locks that this thread owns.  This is complicated due to
   LK_KERNPROC and because lockmgr tolerates unlocking an unlocked lock.

Sponsored by:	Isilon Systems, Inc.
2005-03-24 09:35:06 +00:00
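A minimal sketch of the bookkeeping idea described above, using hypothetical helper names; the real accounting lives inside lockmgr() itself and must also handle LK_KERNPROC handing ownership to the kernel:

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>

/* Hypothetical helpers illustrating per-thread lockmgr accounting. */
static void
lockmgr_note_acquire(struct thread *td)
{
	td->td_locks++;		/* one more outstanding lockmgr lock */
}

static void
lockmgr_note_release(struct thread *td)
{
	/* lockmgr tolerates unlocking an already-unlocked lock,
	 * so never let the counter go negative. */
	if (td->td_locks > 0)
		td->td_locks--;
}
```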
Jeff Roberson
d830f82824 - Pass LK_EXCLUSIVE to VFS_ROOT() to satisfy the new flags argument. For
now, all calls to VFS_ROOT() should still acquire exclusive locks.

Sponsored by:	Isilon Systems, Inc.
2005-03-24 07:31:38 +00:00
Jeff Roberson
aabb175391 - Fixup the default vfs_root function to match the new prototype.
Sponsored by:	Isilon Systems, Inc.
2005-03-24 07:30:00 +00:00
Jeff Roberson
5d14d29912 - Grab the lock type that the caller requests in vfs_hash_insert().
Sponsored by:	Isilon Systems, Inc.
2005-03-24 06:16:27 +00:00
Jeff Roberson
c167961e27 - If vput() is called with a shared lock it must upgrade to an exclusive
before it can call VOP_INACTIVE().  This must use the EXCLUPGRADE path
   because we may violate some lock order with another locked vnode if
   we drop and reacquire the lock.  If EXCLUPGRADE fails, we mark the
   vnode with VI_OWEINACT.  This case should be very rare.
 - Clear VI_OWEINACT in vinactive() and vbusy().
 - If VI_OWEINACT is set in vgone() do the VOP_INACTIVE call here as well.

Sponsored by:	Isilon Systems, Inc.
2005-03-24 06:08:58 +00:00
Jeff Roberson
3e6bcad375 - Remove some long dead LOOKUP_SHARED code that tracked the lock state.
- Always pass LOCKSHARED and rely on namei() to ignore it when
   LOOKUP_SHARED is not set.

Sponsored by:	Isilon Systems, Inc.
2005-03-24 06:04:35 +00:00
Jeff Roberson
ae88db8a72 - Remove the #ifdef LOOKUP_SHARED from some calls to NDINIT. The
LOCKSHARED flag is simply ignored in namei() if LOOKUP_SHARED is not
   enabled.

Sponsored by:	Isilon Systems, Inc.
2005-03-24 06:03:31 +00:00
Jeff Roberson
ad09e57f41 - Clear LOCKSHARED if LOOKUP_SHARED is not enabled. This is not strictly
necessary since we disable the shared locks in vfs_cache, but it is
   preferred that the option not leak out into filesystems when it is
   disabled.

Sponsored by:	Isilon Systems, Inc.
2005-03-24 06:02:37 +00:00
Jeff Roberson
fdd6a3ff3c - All of the bugs which led to the complication of the LOOKUP_SHARED
config option have now been fixed.  All filesystems are properly locked
   and checked via DEBUG_VFS_LOCKS.  Remove the workaround code.

Sponsored by:	Isilon Systems, Inc.
2005-03-24 06:00:45 +00:00
Julian Elischer
b75b03116f Fix code freeing wrong cred pointer.
Submitted by:	das
Noticed by: Coverity tool
MFC after:	3 days

Note: usually the two pointers point to the same
thing but it was still a bug.
2005-03-21 22:55:38 +00:00
Robert Watson
6220dcba84 Add a read-only kern.sched.preemption sysctl so that user space can tell
if "options PREEMPTION" is compiled into the kernel.
2005-03-20 17:05:12 +00:00
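A sketch of how such a read-only sysctl is typically declared; the variable name and description are illustrative, and the kern.sched node is assumed to be declared elsewhere:

```c
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

SYSCTL_DECL(_kern_sched);	/* kern.sched node, defined in the scheduler */

/* Value fixed at compile time by "options PREEMPTION". */
#ifdef PREEMPTION
static int kern_sched_preemption = 1;
#else
static int kern_sched_preemption = 0;
#endif
SYSCTL_INT(_kern_sched, OID_AUTO, preemption, CTLFLAG_RD,
    &kern_sched_preemption, 0, "Kernel preemption enabled");
```

From user space the value can then be read with `sysctl kern.sched.preemption`.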
Pawel Jakub Dawidek
c78941e69e Add ki_jid field to the kinfo_proc structure and store jail ID there.
Reviewed by:	gad
MFC after:	3 days
2005-03-20 10:35:23 +00:00
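A small user-space sketch of how the new field can be consumed via the kern.proc sysctl; the function name is illustrative and error handling is minimal:

```c
#include <sys/param.h>
#include <sys/sysctl.h>
#include <sys/user.h>
#include <stdio.h>

/* Print the jail ID of a given process; returns -1 on failure. */
static int
print_jid(pid_t pid)
{
	struct kinfo_proc kp;
	size_t len = sizeof(kp);
	int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_PID, (int)pid };

	if (sysctl(mib, 4, &kp, &len, NULL, 0) == -1)
		return (-1);
	printf("pid %d jid %d\n", (int)pid, kp.ki_jid);
	return (0);
}
```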
Poul-Henning Kamp
773eff9d97 Sleeping is not allowed in uma->fini
2005-03-19 08:22:13 +00:00
Sam Leffler
b53d6ac575 check copyin return value
Noticed by:	Coverity Prevent analysis tool
2005-03-19 04:34:23 +00:00
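The class of bug being fixed is an ignored return value; a hedged sketch of the correct pattern, with illustrative names:

```c
#include <sys/param.h>
#include <sys/systm.h>

/* Copy a user-supplied buffer into the kernel and propagate failure. */
static int
fetch_user_args(const void *udata, void *kbuf, size_t len)
{
	int error;

	error = copyin(udata, kbuf, len);
	if (error != 0)
		return (error);		/* typically EFAULT */
	return (0);
}
```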
David Schultz
f7fdcd45f0 Add missing cases for PT_SYSCALL.
Found by:	Coverity Prevent analysis tool
2005-03-18 21:22:28 +00:00
Maxim Sobolev
2322a0a77d Impose an upper limit on the signals that are allowed between kernel threads
in set[ug]id programs, for compatibility with Linux.  Linuxthreads uses
4 signals, from SIGRTMIN to SIGRTMIN+3.

Pointed out by:		rwatson
2005-03-18 13:33:18 +00:00
Poul-Henning Kamp
1ea7a6f806 Use subr_unit to allocate thread ID's with.
Tested by:	davidxu
2005-03-18 12:34:14 +00:00
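A sketch of the subr_unit(9) allocator pattern, with illustrative names; the actual thread-ID code may differ in range and locking:

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/limits.h>

static struct unrhdr *tid_unrhdr;	/* illustrative name */

static void
tid_init(void)
{
	/* Hand out IDs above PID_MAX so they never collide with pids;
	 * passing NULL lets subr_unit use its internal locking. */
	tid_unrhdr = new_unrhdr(PID_MAX + 1, INT_MAX, NULL);
}

static int
tid_alloc(void)
{
	return (alloc_unr(tid_unrhdr));
}

static void
tid_free(int tid)
{
	free_unr(tid_unrhdr, tid);
}
```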
Maxim Sobolev
f9cd63d436 Linuxthreads uses not only signal 32 but several signals >= 32.
PR:		kern/72922
Submitted by:	Andriy Gapon <avg@icyb.net.ua>
2005-03-18 11:08:55 +00:00
Poul-Henning Kamp
a1e1d551d8 Fix a bad copy&paste mistake I made.
Spotted by:	truckman
2005-03-18 06:01:21 +00:00
Warner Losh
36fed96550 Use STAILQ in preference to SLIST for the resources. Insert new resources
last in the list rather than first.

This makes the resources print in the 4.x order rather than the 5.x order
(e.g. fdc0 at 0x3f0-0x3f5,0x3f7 is 4.x, but 0x3f7,0x3f0-0x3f5 is 5.x).  This
also means that the pci code will once again print the resources in BAR
ascending order.
2005-03-18 05:19:50 +00:00
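A short sketch of the difference, using illustrative structure names: with an STAILQ the new entry can be appended, so resources keep the order in which they were added:

```c
#include <sys/queue.h>

struct res_entry {
	long			start;
	long			end;
	STAILQ_ENTRY(res_entry)	link;
};
STAILQ_HEAD(res_list, res_entry);

/* Append rather than prepend, preserving insertion (4.x-style) order. */
static void
res_add(struct res_list *rl, struct res_entry *re)
{
	STAILQ_INSERT_TAIL(rl, re, link);
}
```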
John-Mark Gurney
c4c44d2935 fix aio+kq... I've been running ambrisko's test program for much longer
w/o problems than I was before...  This simply brings back the knote_delete
as knlist_delete, which will also drop the knotes, instead of just clearing
the list and seeing _ONESHOT...

Fix a race where if a note was _INFLUX and _DETACHED, it could end up being
modified... whoops.

MFC after:	1 week
Prodded by:	ambrisko and dwhite
2005-03-18 01:11:39 +00:00
John-Mark Gurney
7ac139a904 Add an m_copyup function.  This can be used to help make our IP stack less
alignment-restrictive, and to help performance on some Ethernet cards that
currently copy the entire packet by a couple of bytes to get the packet
aligned properly.

Wordsmithing by:	dwhite
Obtained from:	NetBSD (code only)
I'll clean it up later:	rwatson
2005-03-17 19:34:57 +00:00
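A hedged usage sketch of the new helper, assuming the NetBSD-style signature struct mbuf *m_copyup(struct mbuf *m, int len, int dstoff); the wrapper name is illustrative:

```c
#include <sys/param.h>
#include <sys/mbuf.h>
#include <netinet/in.h>
#include <netinet/in_systm.h>
#include <netinet/ip.h>

/* Return an aligned, contiguous pointer to the IP header, or NULL. */
static struct ip *
ip_header_contig(struct mbuf **mp)
{
	if ((*mp)->m_len < sizeof(struct ip)) {
		/* m_copyup() relocates the data into aligned storage;
		 * on failure the chain is freed and NULL is returned. */
		*mp = m_copyup(*mp, sizeof(struct ip), 0);
		if (*mp == NULL)
			return (NULL);
	}
	return (mtod(*mp, struct ip *));
}
```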
Robert Watson
bc60830675 A further step on the journey of making panics and debugging more reliable:
in the window between the beginning of panic() and entering the debugger,
it's possible to receive interrupts.  If we receive an interrupt, don't
preempt if panicstr != NULL, as the system is in the process of failing, and
the preempting thread is likely to stumble over the failure.  The typical
scenario is during the printf() in panic() prior to entering the debugger,
especially when running with a slower console type such as a serial console.

It could be that the panic string should be passed to the debugger to print,
so that it can run from the debugger's environment rather than a regular
kernel printf.

Glanced at by:	jhb
2005-03-17 15:18:01 +00:00
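A sketch of the check being described, with a hypothetical function name standing in for the scheduler's preemption decision:

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>

/* Return non-zero if preempting to td is acceptable right now. */
static int
preemption_allowed(struct thread *td)
{
	/* Once panic() has set panicstr, switching threads would only
	 * let another thread trip over the failing system state. */
	if (panicstr != NULL)
		return (0);
	/* ...the normal priority comparison would follow here... */
	return (1);
}
```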
Poul-Henning Kamp
bde1a9c98b Kill MAJOR_AUTO
2005-03-17 13:37:28 +00:00
Poul-Henning Kamp
800b42bde0 Prepare for the final onslaught on devices:
Move uid/gid/mode from cdev to cdevsw.

Add kind field to use for devd(8) later.

Bump both D_VERSION and __FreeBSD_version
2005-03-17 12:07:00 +00:00
Poul-Henning Kamp
572b4402d1 In strange circumstances we may end up being the last reference to a
session in tprintf().   SESSRELE() needs to properly dispose of the
session's mutex.

Add sessrele() which does the proper cleanup and have SESSRELE() call it.

Use SESSRELE also in pgdelete().

Found by:	Coverity (ID:526)
2005-03-17 08:44:41 +00:00
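A sketch of what such a helper looks like, assuming the session fields and malloc type of that era; the real function may differ in locking detail:

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/malloc.h>

MALLOC_DECLARE(M_SESSION);	/* assumed to match the kernel's definition */

/* Drop one reference; on the last one, tear down the mutex too. */
static void
sessrele_sketch(struct session *s)
{
	if (--s->s_count == 0) {
		mtx_destroy(&s->s_mtx);
		free(s, M_SESSION);
	}
}
```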
Poul-Henning Kamp
51f5ce0c8c Add two arguments to the vfs_hash() KPI so that filesystems which do
not have unique hashes (NFS) can also use it.
2005-03-16 11:20:51 +00:00
Poul-Henning Kamp
9068e77689 Fix a memory leak in case of a failed root filesystem mount.
Spotted by:     Coverity via sam
2005-03-16 11:06:49 +00:00
John-Mark Gurney
2a77000b75 MFp4: print a more useful error when we don't have a /dev to mount devfs
on..
2005-03-16 08:04:39 +00:00
Poul-Henning Kamp
78bb3c21ed Add mnt_hashseed to struct mount and initialize it with PRNG bits; use
it to get better hashing in vfs_hash.

In case of an insert collision in vfs_hash_insert(), put the losing vnode
on a special list so that vfs_hash_remove() can just assume that it is on
a list.

Drop the VI_HASHED flag.
2005-03-16 07:35:06 +00:00
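A sketch of how a per-mount seed improves bucket distribution; vfs_hash_size and the function name are illustrative:

```c
#include <sys/param.h>
#include <sys/mount.h>

static u_long vfs_hash_size;	/* illustrative: number of buckets, set at init */

/* Mix the per-mount random seed into the inode hash so two mounts with
 * the same inode numbers land in different buckets. */
static u_long
vfs_hash_index_sketch(struct mount *mp, u_int hash)
{
	return ((hash + mp->mnt_hashseed) % vfs_hash_size);
}
```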
Warner Losh
358fef538f Sometimes, when asked to return region A..C, we'd return A+N..C+N
instead of failing.

When looking for a region to allocate, we used to check to see if the
start address was < end.  In the case where A..B is allocated already,
and one wants to allocate A..C (B < C), then this test would
improperly fail (which means we'd examine that region as a possible
one), and we'd return the region B+1..C+(B-A+1) rather than NULL.
Since C+(B-A+1) is necessarily larger than C (end argument), this is
incorrect behavior for rman_reserve_resource_bound().

The fix is to exclude those regions where r->r_start + count - 1 > end
rather than r->r_start > end.  This bug has been in this code for a
very long time.  I believe that all other tests against end are
correctly done.

This is why sio0 generated a message about interrupts not being
enabled properly for the device.  When fdc had a bug that allocated
from 0x3f7 to 0x3fb, sio0 was then given 0x3fc-0x404 rather than the
0x3f8-0x3ff that it wanted.  Now when fdc has the same bug, sio0 fails
to allocate its ports, which is the proper behavior.  Since the probe
failed, we never saw the messed up resources reported.

I suspect that there are other places in the tree that have weird
looping or other odd workarounds to try to cope with the observed
weirdness this bug can introduce.  These workarounds should be located
and eliminated.

A minor debug write fix was done as well, to match the above test.

'nice' by: mdodd
Sponsored by: timing solutions (http://www.timing.com/)
2005-03-15 20:28:51 +00:00
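The fix reduces to a different candidate test; a sketch with illustrative names, paraphrasing the condition quoted above:

```c
/* A candidate region only qualifies if 'count' units fit at or before 'end'.
 * The old test (r_start > end) missed regions that start within range but
 * would run past 'end' once 'count' units are carved out of them. */
static int
region_can_fit(u_long r_start, u_long end, u_long count)
{
	return (r_start + count - 1 <= end);
}
```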
Warner Losh
a33ab77447 Fix a debugging printf. The order of start/end was inconsistent with
all the other start/end debugs, causing momentary confusion when the
output was examined.
2005-03-15 20:15:15 +00:00
Poul-Henning Kamp
45c26fa2b6 Improve the vfs_hash() API: vput() the unneeded vnode centrally to
avoid replicating the vput in all the filesystems.
2005-03-15 20:00:03 +00:00
Jeff Roberson
b172f6c5f9 - Now that there are no external users of vfree() make it static.
- Move VSHOULDBUSY, VSHOULDFREE, and VTRYRECYCLE into vfs_subr.c so
   no one else attempts to grow a dependency on them.
 - Now that objects with pages hold the vnode we don't have to do unlocked
   checks for the page count in the vm object in VSHOULDFREE.  These three
   macros could simply check for holdcnt state transitions to determine
   whether the vnode is on the free list already, but the extra safety
   the flag affords us is probably worth the minimal cost.
 - The leafonly sysctl and code have been dead for several years now,
   remove the sysctl and the code that employed it from vtryrecycle().
 - vtryrecycle() also no longer has to check the object's page count as
   the object holds the vnode until it reaches 0.

Sponsored by:	Isilon Systems, Inc.
2005-03-15 14:38:16 +00:00
Poul-Henning Kamp
7933351a28 Fix a debug message to print a usable device name rather than a useless
major+minor tuple.
2005-03-15 14:08:10 +00:00
Jeff Roberson
c178628d6e - Expose vholdl() so it may be used outside of vfs_subr.c
2005-03-15 13:43:10 +00:00
Poul-Henning Kamp
4ba679d6d0 Remove findcdev().
2005-03-15 12:58:08 +00:00
Poul-Henning Kamp
0a2e49f1f8 Rename cdev->si_udev to cdev->si_drv0 to reflect the new nature of
the field.
2005-03-15 11:33:28 +00:00
Jeff Roberson
f5f0da0a0e - transferlockers() requires the interlock to be SMP safe.
Sponsored by:	Isilon Systems, Inc.
2005-03-15 09:27:45 +00:00
Poul-Henning Kamp
e82ef95c11 Simplify the vfs_hash calling convention.
2005-03-15 08:07:07 +00:00
Poul-Henning Kamp
ee148e2606 Clean up an accidentally included #if 0 section.
2005-03-14 10:25:09 +00:00
Poul-Henning Kamp
6c325a2a21 Currently (almost) all filesystems maintain a local inode hash table
to get from (mount + inode) to vnode.  These tables are mostly
copy&pasted from UFS, sized based on desiredvnodes and therefore
quite large (128K-512K).  Several filesystems are buggy enough that
they allocate the hash table even before they know whether it will
ever be used.

Add "vfs_hash", a system wide hash table, which will replace all
the per-filesystem hash-tables.

The fields we add to struct vnode will more or less be saved in
the respective filesystems' inodes.

Having one central implementation will save code and will allow us
to justify the complexity of code to dynamically (re)size the hash
at a later point.
2005-03-14 10:01:29 +00:00
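A hedged sketch of the lookup-or-insert pattern the central table enables; the signatures follow the later, settled vfs_hash(9) interface (which this snapshot of the tree was still converging on), and the helper myfs_new_vnode() is hypothetical:

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/mount.h>
#include <sys/vnode.h>

/* Hypothetical filesystem-specific allocator of a new, locked vnode. */
static int myfs_new_vnode(struct mount *mp, ino_t ino, struct vnode **vpp);

static int
myfs_vget_sketch(struct mount *mp, ino_t ino, int flags, struct vnode **vpp)
{
	struct vnode *vp;
	int error;

	/* Look up (mount, inode) in the system-wide hash. */
	error = vfs_hash_get(mp, ino, flags, curthread, vpp, NULL, NULL);
	if (error != 0 || *vpp != NULL)
		return (error);

	/* Miss: create the vnode, then publish it.  If another thread
	 * raced us, vfs_hash_insert() releases the new vnode and returns
	 * the winner in *vpp. */
	error = myfs_new_vnode(mp, ino, &vp);
	if (error != 0)
		return (error);
	error = vfs_hash_insert(vp, ino, flags, curthread, vpp, NULL, NULL);
	if (error != 0 || *vpp != NULL)
		return (error);
	*vpp = vp;
	return (0);
}
```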
Jeff Roberson
8045557f2b - Increment the holdcnt once for each usecount reference. This allows us
to use only the holdcnt to determine whether a vnode may be recycled,
   simplifying the V* macros as well as vtryrecycle(), etc.

Sponsored by:	Isilon Systems, Inc.
2005-03-14 09:25:19 +00:00
Jeff Roberson
159b454819 - We do not have to check the object's ref_count in VSHOULDFREE or
vtryrecycle().  All obj refs also ref the vnode.
 - Consistently use v_incr_usecount() to increment the usecount.  This will
   be more important later.

Sponsored by:	Isilon Systems, Inc.
2005-03-14 08:30:31 +00:00
Jeff Roberson
8f13a540ed - Slightly rearrange vrele() to move the common case in one indentation
level.

Sponsored by:	Isilon Systems, Inc.
2005-03-14 07:16:55 +00:00
Jeff Roberson
6fc16a838c - Rework vget() so we drop the usecount in two failure cases that were
missed by my last commit.

Sponsored by:	Isilon Systems, Inc.
2005-03-14 07:11:19 +00:00
Poul-Henning Kamp
93f6c81e25 Remove debugging printfs.
2005-03-14 06:51:29 +00:00
Jeff Roberson
0463dc9ef1 - Do a vn_start_write in vn_close, we may write if this is the last ref
on an unlinked file.  We can't know if this is the case until after we
   have the lock.
 - Lock the vnode in vn_close, many filesystems had code which was unsafe
   without the lock held, and holding it greatly simplifies vgone().
 - Adjust vn_lock() to check for the VI_DOOMED flag where appropriate.

Sponsored by:	Isilon Systems, Inc.
2005-03-13 11:56:28 +00:00
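A sketch of the suspension-plus-lock bracket described above, using the interfaces of that era; the real vn_close() has additional error handling:

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mount.h>
#include <sys/vnode.h>

/* Close a vnode that may do a final write (last ref on an unlinked file). */
static int
vn_close_sketch(struct vnode *vp, int flags, struct ucred *cred,
    struct thread *td)
{
	struct mount *mp;
	int error;

	(void)vn_start_write(vp, &mp, V_WAIT);	/* may wait on a suspended fs */
	vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, td);
	error = VOP_CLOSE(vp, flags, cred, td);
	vput(vp);				/* drops both lock and reference */
	vn_finished_write(mp);
	return (error);
}
```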
Jeff Roberson
6703c30bb5 - Remove vx_lock, vx_unlock, vx_wait, etc.
- Add a vn_start_write/vn_finished_write around vlrureclaim so we don't do
   writing ops without suspending.  This could suspend the vlruproc which
   should not be a problem under normal circumstances.
 - Manually implement VMIGHTFREE in vlrureclaim as this was the only instance
   where it was used.
 - Acquire a lock before calling vgone() as it now requires it.
 - Move the acquisition of the vnode interlock from vtryrecycle() to
   getnewvnode() so that if it fails we don't drop and reacquire the
   vnode_free_list_mtx.
 - Check for a usecount or holdcount at the end of vtryrecycle() in case
   someone grabbed a ref while we were recycling.  Abort the recycle, and
   on the final ref drop this vnode will be placed on the head of the free
   list.
 - Move the redundant VOP_INACTIVE protection code into the local
   vinactive() routine to avoid code bloat.
 - Keep the vnode lock held across calls to vgone() in several places.
 - vgonel() no longer uses XLOCK, instead callers must hold an exclusive
   vnode lock.  The VI_DOOMED flag is set to allow other threads to detect
   a vnode which is no longer valid.  This flag is set until the last
   reference is gone, and there are no chances for a new ref.  vgonel()
   holds this lock across the entire function, which greatly simplifies
   logic.
 - Only vfree() in one place in vgone(), not three.
 - Adjust vget() to check the VI_DOOMED flag prior to waiting on the lock
   in the LK_NOWAIT case.  In other cases, check after we have slept and
   acquired an exclusive lock.  This will simulate the old vx_wait()
   behavior.

Sponsored by:	Isilon Systems, Inc.
2005-03-13 11:54:28 +00:00
Jeff Roberson
2b3183a8b7 - A lock is required before calling VOP_REVOKE. Our reference protects us
from accessing another vnode, so a naked VOP_LOCK is sufficient.

Sponsored by:	Isilon Systems, Inc.
2005-03-13 11:47:04 +00:00