94999 Commits

Author SHA1 Message Date
imp
3d151e1a1b 2662W-AR Wireless Adapter
Submitted by: Stuart Walsh <stu@ipng.org.uk>
2003-10-05 18:58:37 +00:00
bms
3a6a851fce Correct .Xr's in kiconv.3.
Submitted by:	osa
2003-10-05 13:39:28 +00:00
bms
3960d861d4 Bring back sysctl_wire_old_buffer(). Fix a bug in sysctl_handle_opaque()
whereby the pointers would not get reset on a retried SYSCTL_OUT() call.

Noticed by:	bde
2003-10-05 13:31:33 +00:00
iedowse
4357db17b4 Since the addition of the VI_DOINGINACT flag some time ago,
VOP_INACTIVE routines need not worry about their vnode getting
recycled if they block. Remove the code from nfs_inactive() that
used vget() to get an extra vnode reference that was held during
the nfs_vinvalbuf() call.
2003-10-05 12:41:35 +00:00
bms
3a1ffe0880 Revert previous commit. Come back vslock(), all is forgiven.
Pointy hat to:	bms
2003-10-05 12:41:08 +00:00
blackend
8911cdd373 Fix typos:
paramters -> parameters
assoicated -> associated
2003-10-05 12:09:31 +00:00
nyan
9f426ad282 Merged from sys/dev/sio/sio.c revisions 1.405 to 1.414. 2003-10-05 11:55:14 +00:00
bms
40395a5b73 Use the term 'physical memory' consistently. 2003-10-05 11:47:51 +00:00
bms
d95a8c6f96 Mark td_generation as volatile. 2003-10-05 11:15:18 +00:00
bms
6cbb5c3a4f Retire vslock() and vsunlock() with extreme prejudice.
Discussed with:	pete
2003-10-05 09:47:54 +00:00
jeff
e6698ba2f7 - Further simplify ffs_sync(). The vnode lock is required for UFS_UPDATE(),
   so make the code slightly more uniform.  The vnode lock is acquired in
   all cases and now the only difference between VCHR and other vnodes is that
   we call UFS_UPDATE() instead of VOP_FSYNC().
2003-10-05 09:42:24 +00:00
jeff
9765ac56e1 - In ffs_update() assert that either the vnode lock or the XLOCK is held. 2003-10-05 09:39:02 +00:00
bms
44fa0fe4a2 Fix a security problem in sysctl() the long way round.
Use pre-emption detection to avoid the need for wiring a userland buffer
when copying opaque data structures.

sysctl_wire_old_buffer() is now a no-op. Other consumers of this
API should use pre-emption detection to notice update collisions.

vslock() and vsunlock() should no longer be called by any code
and should be retired in subsequent commits.

Discussed with:	pete, phk
MFC after:	1 week
2003-10-05 09:37:47 +00:00
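
As a minimal sketch of the pre-emption-detection scheme described above (the
helper name is hypothetical; curthread, td_generation and SYSCTL_OUT() are the
interfaces the commit refers to), a handler can snapshot the counter, copy out,
and rewind and retry if it notices it was switched out while the copy slept:

    static int
    sysctl_out_opaque_retry(struct sysctl_req *req, void *src, size_t len)
    {
        struct sysctl_req saved;
        int error, gen, tries;

        saved = *req;                       /* so a retry restarts the copy */
        for (tries = 0; tries < 3; tries++) {
            gen = curthread->td_generation;
            error = SYSCTL_OUT(req, src, len);  /* may fault and sleep */
            if (error != 0)
                return (error);
            if (gen == curthread->td_generation)
                return (0);                 /* not pre-empted; data consistent */
            *req = saved;                   /* pre-empted: rewind and retry */
        }
        return (0);                         /* give up after a few collisions */
    }
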
bms
0f917818e6 Add a pre-emption counter, td_generation, so that threads can notice
when they have been pre-empted by other threads. This is bumped from
within mi_switch() every time a context switch takes place.

Discussed with:	pete
2003-10-05 09:35:08 +00:00
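
The producer side of the counter is a single increment in the context-switch
path; roughly (a sketch of the idea, not the committed diff):

    /* In mi_switch(), every time a context switch takes place: */
    td->td_generation++;    /* lets the thread detect it was switched out */
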
hrs
1ba528a121 Add a sentence forgotten in the previous commit. 2003-10-05 09:17:25 +00:00
nyan
b60346b21b MFi386: revisions 1.572, 1.573 and 1.574. 2003-10-05 09:05:45 +00:00
nyan
937fbbdfd4 MFi386: revision 1.205 2003-10-05 08:56:49 +00:00
bms
b463ed3eb2 Fold the vslock() and vsunlock() calls in this file with #if 0's; they will
go away in due course. Involuntary pre-emption means that we can't count
on wiring of pages alone for consistency when performing a SYSCTL_OUT()
bigger than PAGE_SIZE.

Discussed with:	pete, phk
2003-10-05 08:38:22 +00:00
hrs
b069000f10 New release note: SA-03:18. 2003-10-05 08:17:53 +00:00
hrs
85a7ee1518 New errata: SA-03:14, SA-03:17, SA-03:18. 2003-10-05 08:15:54 +00:00
bde
871953665f Include <sys/mutex.h>. Don't depend on namespace pollution in <sys/vnode.h>.
Fixed a nearby style bug.  The include of vcoda.h used angle brackets and
was not used.
2003-10-05 07:44:45 +00:00
jeff
48e5ecf491 - Check the XLOCK before inspecting v_data.
- Slightly rewrite the fsync loop to be more lock friendly.  We must
   acquire the vnode interlock before dropping the mnt lock.  We must
   also check XLOCK to prevent vclean() races.
 - Use LK_INTERLOCK in the vget() in ffs_sync to further prevent vclean()
   races.
 - Use a local variable to store the results of the nvp == TAILQ_NEXT
   test so that we do not access the vp after we've vrele()d it.
 - Add an XXX comment about UFS_UPDATE() not being protected by any lock
   here.  I suspect that it should need the VOP lock.
2003-10-05 07:16:45 +00:00
jeff
d22d3e1bfb - Apply a big giant lock around the namecache. This has been sitting in
   my tree since BSDcon.
2003-10-05 07:13:50 +00:00
jeff
2c3fea92c8 - Fix an XXX. Check the error of vn_lock() in vflush(). Don't specify
   LK_RETRY either; we don't want this vnode if it turns into another.
 - Remove the code that checks the mount point after acquiring the lock;
   we are guaranteed to either fail or get the vnode that we wanted.
2003-10-05 07:12:38 +00:00
alc
57538c344f Assert that the containing vm object's lock is held in
vm_page_set_invalid().
2003-10-05 06:58:07 +00:00
jeff
f92fddc187 - Skip over xvp if XLOCK is set. 2003-10-05 06:48:37 +00:00
jeff
f61f6f6aa8 - Remove an incorrect XXX comment. This code does respect the XLOCK since
   it uses vget(), which will fail if the identity changes.
2003-10-05 06:47:56 +00:00
jeff
2b4a2d9fbe - Check the XLOCK before we inspect the vnode. 2003-10-05 06:46:45 +00:00
jeff
5b01a09002 - We don't need to cache_purge() in nfs_reclaim(), vclean() does it for us. 2003-10-05 06:46:02 +00:00
jeff
f2649d6926 - Check the XLOCK prior to inspecting v_data. 2003-10-05 06:44:53 +00:00
jeff
9f28715adc - Check XLOCK prior to accessing v_data. 2003-10-05 06:43:30 +00:00
jeff
29b3b781c2 - File systems that wish to inspect the vnode contents or their private
   v_data field before calling vget/vn_lock must check VI_XLOCK manually to
   be sure that v_data is still valid.  Implement this check in two places
   here.
2003-10-05 06:43:03 +00:00
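
A hypothetical fragment showing the kind of check described above, as it might
appear in a per-mount vnode walk (the vnode interlock macros and LK_INTERLOCK
are the existing API; the UFS inode inspection is only an example):

    VI_LOCK(vp);
    if (vp->v_iflag & VI_XLOCK) {
        /* vclean() is underway; v_data may already be stale. */
        VI_UNLOCK(vp);
        continue;
    }
    ip = VTOI(vp);      /* v_data is still valid under the interlock */
    if ((ip->i_flag & (IN_ACCESS | IN_CHANGE | IN_MODIFIED | IN_UPDATE)) == 0) {
        VI_UNLOCK(vp);
        continue;
    }
    /* vget() with LK_INTERLOCK fails if the vnode's identity changes. */
    error = vget(vp, LK_EXCLUSIVE | LK_NOWAIT | LK_INTERLOCK, td);
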
bms
2afc542330 Correct a typo on line 552 of revision 1.92, which was breaking GENERIC:
_FreeBSD_version should be __FreeBSD_version.
2003-10-05 06:06:09 +00:00
bms
c600018661 Remove magic numbers surrounding locking state in the sysctl module, and
replace them with more meaningful defines.
2003-10-05 05:38:30 +00:00
jeff
6b8324e841 - Rename vcanrecycle() to vtryrecycle() to reflect its new role.
- In vtryrecycle() try to vgonel the vnode if all of the previous checks
   passed.  We won't vgonel if someone has either acquired a hold or usecount
   or started the vgone process elsewhere.  This is because we may have been
   removed from the free list while we were inspecting the vnode for
   recycling.
 - The VI_TRYLOCK stops two threads from entering getnewvnode() and recycling
   the same vnode.  To further reduce the likelihood of this event, requeue
   the vnode on the tail of the list prior to calling vtryrecycle().  We can
   not actually remove the vnode from the list until we know that it's
   going to be recycled because other interlock holders may see the VI_FREE
   flag and try to remove it from the free list.
 - Kill a bogus XXX comment.  If XLOCK is set we shouldn't wait for it
   regardless of MNT_WAIT because the vnode does not actually belong to
   this filesystem.
2003-10-05 05:35:41 +00:00
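
A simplified sketch of the vtryrecycle() logic described above; the real
function does more bookkeeping, and the bodies here only illustrate the checks
named in the commit:

    static int
    vtryrecycle(struct vnode *vp)
    {
        /* Only one thread at a time may examine this vnode for reuse. */
        if (VI_TRYLOCK(vp) == 0)
            return (EBUSY);
        /*
         * While it sat on the free list someone may have taken a hold or
         * use reference, or started vgone elsewhere; if so, back off.
         */
        if (vp->v_usecount != 0 || vp->v_holdcnt != 0 ||
            (vp->v_iflag & VI_XLOCK) != 0) {
            VI_UNLOCK(vp);
            return (EBUSY);
        }
        vgonel(vp, curthread);  /* hands the interlock to vgonel() */
        return (0);
    }
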
jeff
e15704d590 - Don't cache_purge() in getnewvnode. It's done in vclean(). With this
   purge, the purge in vclean(), and the filesystem's purge, we had 3 purges
   per vnode.
 - Move the insmntque(vp, 0) to vclean() so that we may remove it from the
   two vgone() functions and reduce the number of lock operations required.
2003-10-05 02:48:04 +00:00
jeff
c38cbc3847 - Don't cache_purge() in cd9660_reclaim. vclean() does it for us so
   this is redundant.
2003-10-05 02:45:36 +00:00
jeff
584caed26f - Don't cache_purge() in ufs_reclaim. vclean() does it for us so
   this is redundant.
2003-10-05 02:45:00 +00:00
jeff
5556647d90 - Don't cache_purge() in ext2_reclaim. vclean() does it for us so
   this is redundant.
2003-10-05 02:44:22 +00:00
jeff
177916fba6 - Don't cache_purge() in *_reclaim routines. vclean() does it for us so
   this is redundant.
2003-10-05 02:43:30 +00:00
bms
4e6b30a14e Update the page_req classes VM_ALLOC_NOOBJ and VM_ALLOC_ZERO.
Suggested by:	alc
2003-10-05 01:31:51 +00:00
jeff
8d0f78003f - Solve a LOR with the sync_mtx by using the VI_ONWORKLST flag to determine
   whether or not the sync failed.  This could potentially get set between
   the time that we VOP_UNLOCK() and VI_LOCK() but the race would harmlessly
   lead to the sync being delayed by an extra 30 seconds.  If we do not move
   the vnode it could cause an endless loop if it continues to fail to sync.
 - Use vhold and vdrop to stop the vnode from changing identities while we
   have it unlocked.  Other internal vfs lists are likely to follow this
   scheme.
2003-10-05 00:35:41 +00:00
alc
7e17f7f78e Don't bother setting a page table page's valid field. It is unused and
not setting it is consistent with other uses of VM_ALLOC_NOOBJ pages.
2003-10-05 00:12:16 +00:00
jeff
8d72c43763 - Move the xlock 'locking' code into vx_lock() and vx_unlock().
- Create a new function, vgonechrl(), which performs vgone for an in-use
   character device.  Move the code from vflush() that did this into
   vgonechrl().
 - Hold the xlock across the entirety of vgonel() and vgonechrl() so that
   at no point will an invalid vnode exist on any list without XLOCK set.
 - Move the xlock code out of vclean() now that it is in the vgone*()
   functions.
2003-10-05 00:02:41 +00:00
alc
44ee8e2211 Synchronize access to a vm page's valid field using the containing
vm object's lock.
2003-10-04 23:37:38 +00:00
alc
74f19ddd0e Eliminate some unnecessary uses of the vm page queues lock around the
vm page's valid field.  This field is being synchronized using the
containing vm object's lock.
2003-10-04 22:47:20 +00:00
joe
1cffeebc9c Make it easier to run this code on RELENG_4.
Submitted by:	luoqi
2003-10-04 22:13:21 +00:00
peter
6e1ae7d559 Fix the apm problem for real. We leave the first 4K page for the bios to
work in, but we had it mapped read-only.  While this has always been the
case, the PG_PS enable hack hid it and the apm bios code ended up taking
advantage of it.
2003-10-04 22:04:54 +00:00
alc
632febaf7e Assert that the containing vm object's lock is held in
vm_page_zero_invalid().
2003-10-04 21:56:27 +00:00
joe
7241c9b300 Make it easier to run this code on RELENG_4.
Submitted by:	luoqi
2003-10-04 21:41:01 +00:00