Commit Graph

6734 Commits

Author SHA1 Message Date
Robert Watson
90e6b5447f Add a new cn_flags field to struct consdev, the low-level console
definition structure.  Define one flag, CN_FLAG_NODEBUG, which
indicates the console driver cannot be used in the context of the
debugger.  This may be used, for example, if the console device
interacts with kernel services that cannot be used from the
debugger context, such as the network stack.  These drivers are
skipped over for calls to cn_checkc() and cn_putc(), and the
calling function simply moves on to the next available console.
2003-10-18 02:13:39 +00:00
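A minimal sketch of how a caller is expected to honour the new flag. The array walk, the debugger_active argument, and the cn_checkc_one() helper are illustrative assumptions, not the committed cn_checkc() code; only struct consdev, cn_flags, and CN_FLAG_NODEBUG come from the commit.

    #include <sys/param.h>
    #include <sys/cons.h>

    static int cn_checkc_one(struct consdev *);     /* hypothetical per-device helper */

    /* Poll the registered consoles, skipping drivers that cannot be used
     * while the debugger is active. */
    static int
    console_checkc(struct consdev **cn_list, int ncons, int debugger_active)
    {
        int c, i;

        for (i = 0; i < ncons; i++) {
            if (debugger_active &&
                (cn_list[i]->cn_flags & CN_FLAG_NODEBUG))
                continue;               /* driver unusable in debugger context */
            c = cn_checkc_one(cn_list[i]);
            if (c != -1)
                return (c);             /* got a character from this console */
        }
        return (-1);                    /* nothing available; caller moves on */
    }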
Jeff Roberson
94816f6d52 - Remove the correct thread from the run queue in setrunqueue(). This
fixes ULE + KSE.
2003-10-17 20:53:04 +00:00
Poul-Henning Kamp
3da2d6a453 Simplify count_dev() 2003-10-17 11:56:48 +00:00
Peter Wemm
c9c373b093 Halt the cpu on amd64 as well. For some strange reason, this makes
a fair bit of difference to the power consumption and lets my cpu cool
down enough for the temperature-sensitive fan controller to completely
stop the cpu fan at times.
2003-10-17 03:49:03 +00:00
Marcel Moolenaar
b0f865c1f3 Implement cpu_idle() on ia64. We put the processor in a lightweight
halt state that minimizes power consumption while still preserving
cache and TLB coherency. Halting the processor is not conditional at
this time. Tested with UP and SMP kernels.
2003-10-17 02:24:59 +00:00
Jeff Roberson
55f2099a70 - The kse may be null in sched_pctcpu().
Reported by:	kris
2003-10-16 21:13:14 +00:00
Jeff Roberson
0e0f626628 - Only kse_reassign() in the !running case.
Reported by:	kris
2003-10-16 20:32:57 +00:00
Jeff Roberson
0c7da3a43d - Call sched_add() with the correct argument on SMP.
Reported by:	Valentin Chopov <valentin@valcho.net>
2003-10-16 20:06:19 +00:00
Jeff Roberson
b72f347bdb - Fix a minor problem with my last commit: we don't want to return from
sched_switch() if the thread is running; we want to fall through and pick
   a new thread because we have been preempted.
2003-10-16 10:04:54 +00:00
Doug Rabson
46ba7a35f2 * Add multiple inheritance to kobj. Each class can have zero or more base
classes and if a method is not found in a given class, its base classes
  are searched (in the order they were declared). This search is recursive,
  i.e. a method may be defined in a base class of a base class.
* Change the kobj method lookup algorithm to one which is SMP-safe. This
  relies only on the constraint that an observer of a sequence of writes
  of pointer-sized values will see exactly one of those values, not a
  mixture of two or more values. This assumption holds for all processors
  which FreeBSD supports.
* Add locking to kobj class initialisation.
* Add a simpler form of 'inheritance' for devclasses. Each devclass can
  have a parent devclass. Searches for drivers continue up the chain of
  devclasses until either a matching driver is found or a devclass is
  reached which has no parent. This can allow, for instance, pci drivers
  to match cardbus devices (assuming that cardbus declares pci as its
  parent devclass).
* Increment __FreeBSD_version.

This preserves the driver API entirely except for one minor feature used
by the ISA compatibility shims. A workaround for ISA compatibility will
be committed separately. The kobj and newbus ABI has changed - all modules
must be recompiled.
2003-10-16 09:16:28 +00:00
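A sketch of the devclass-parent search described above. devclass_get_parent() is the accessor for the new parent link; lookup_in_one_devclass() is a hypothetical stand-in for the per-devclass driver lookup newbus already performs.

    #include <sys/param.h>
    #include <sys/bus.h>

    static driver_t *lookup_in_one_devclass(devclass_t, const char *);  /* hypothetical */

    /* Walk up the devclass chain (e.g. cardbus -> pci) until a driver
     * matches or a devclass with no parent is reached. */
    static driver_t *
    find_driver_up_chain(devclass_t dc, const char *drivername)
    {
        driver_t *drv;

        for (; dc != NULL; dc = devclass_get_parent(dc)) {
            drv = lookup_in_one_devclass(dc, drivername);
            if (drv != NULL)
                return (drv);
        }
        return (NULL);
    }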
Jeff Roberson
ae53b483cc - Collapse sched_switchin() and sched_switchout() into sched_switch(). Now
mi_switch() calls sched_switch() which calls cpu_switch().  This is
   actually one less function call than it had been.
2003-10-16 08:53:46 +00:00
Jeff Roberson
7cf90fb376 - Update the sched api. sched_{add,rem,clock,pctcpu} now all accept a td
argument rather than a kse.
2003-10-16 08:39:15 +00:00
Jeff Roberson
4c9612c622 - The non-iterative algorithm for interact_update was broken due to
rounding errors.  This was the source of the majority of the
   interactivity problems.  Reintroduce the old algorithm and its XXX.
 - Up the interactivity threshold to 30.  It really could stand to be even
   a tiny bit higher.
 - Let the sleep and run time accumulate up to 5 seconds of history rather
   than two.  This helps stop XFree86 from becoming non-interactive during
   bursts of activity.
2003-10-16 08:17:43 +00:00
Jeff Roberson
08fd6713b2 - If our user_pri doesn't match our actual priority, our priority has been
elevated either due to priority propagation or because we're in the
   kernel; in either case, put us on the current queue so that we don't
   stop others from using important resources.  At some point the priority
   elevations from sleeping in the kernel should go away.
 - Remove an optimization in sched_userret().  Before we would only set
   NEEDRESCHED if there was something of a higher priority available.  This
   is a trivial optimization and it breaks priority propagation because it
   doesn't take threads which we may be blocking into account.  Notice that
   the thread which is blocking others gets up to one tick of cpu time before
   we honor this NEEDRESCHED in sched_clock().
2003-10-15 07:47:06 +00:00
Peter Wemm
25e247af44 The KERN_PROC_PROC sysctl took 4 args in 5.0-REL and 5.1-REL. We need to
accept this for a bit longer.  Requiring the new order of 3 args only
was not very helpful.
2003-10-15 03:11:46 +00:00
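For reference, a small userland sketch of the 3-name form that newer code is expected to use; 5.0/5.1 binaries pass a fourth, now-ignored name, which this change keeps accepting.

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <sys/user.h>
    #include <stdio.h>

    int
    main(void)
    {
        int mib[3] = { CTL_KERN, KERN_PROC, KERN_PROC_PROC };
        size_t len;

        /* Size the buffer only; a second call with a buffer would fetch
         * the actual kinfo_proc array. */
        if (sysctl(mib, 3, NULL, &len, NULL, 0) == -1) {
            perror("sysctl");
            return (1);
        }
        printf("process table needs %zu bytes (~%zu entries)\n",
            len, len / sizeof(struct kinfo_proc));
        return (0);
    }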
Sam Leffler
bd19669855 Change default for kern.polling.idle_poll back to 1. This was set to 0
because Luigi observed livelock but in recent testing it did not occur
so I'm re-enabling it by default.

Reviewed by:	luigi
2003-10-14 18:39:36 +00:00
Poul-Henning Kamp
b84044731d Made use of 'error' argument, which was unused (by mistake) before.
Submitted by:    Pawel Jakub Dawidek <nick@garage.freebsd.pl>
2003-10-14 08:09:43 +00:00
Warner Losh
d29516dd82 With DIAGNOSTICS, sometimes we get weird crashes when some driver
accesses softc after it is freed.  Use a different malloc type for
softc than the rest of the bus code to make it more clear when these
things happen that it is the driver that's at fault, not the bus code.

Suggested by: sam and/or phk (I think)
2003-10-14 06:22:07 +00:00
Jeff Roberson
85b9831dfa - Add a missing vn_finished_write()
Pointy hat:     jeff
Found by:       robert
Obtained from:  kirk
2003-10-14 00:38:34 +00:00
David Xu
3a2e2a0ec8 Don't clear the signal mask in execsig(). RELENG_4 does not clear it, and
POSIX requires the signal mask to be inherited across execv.
2003-10-13 14:03:08 +00:00
Jeff Roberson
736c97c7b3 - In SCHED_CURR() add holding Giant to the list of criteria that will keep
you on the current queue.  In the future, it would be nice if priority
   propagation could deterministically pluck a thread off of the next queue
   and put it on the current queue.  Until then this hack stops us from
   holding up our entire current queue, including interrupt handlers, while
   a thread on the next queue is blocked while holding Giant.
 - Inherit our pctcpu information from our parent.
2003-10-12 21:07:31 +00:00
Alan Cox
d58e70a08d In vfs_bio_clrbuf(), ignore the state of the object lock if the page is the
"bogus" page.

Found by:	tegge
2003-10-12 18:26:48 +00:00
Poul-Henning Kamp
5108cd3652 Simplify vn_isdisk() a bit. 2003-10-12 14:04:39 +00:00
John-Mark Gurney
9e5de980c6 fix a problem referencing freed memory. This is only a problem for
kqueue write events on a socket when you regularly create tons of pipes,
which overwrites the structure and causes a panic when removing the knote
from the list.  If the peer has gone away (and it's a write knote), then
don't bother trying to remove the knote from the list.

Submitted by:	Brian Buchanan and myself
Obtained from:	nCircle
2003-10-12 07:06:02 +00:00
Jeff Roberson
7dd1328c13 - Fix a typo: I meant & and not |. This was causing lockups from the syncer
looping forever due to list corruption.

Solved by:	tegge
2003-10-11 21:50:45 +00:00
Alan Cox
08814d66d5 - Synchronize access to a page's valid field in vfs_bio_clrbuf()
by using the lock from its containing object.
 - Remove GIANT_REQUIRED from vm_hold_load_pages().
2003-10-10 07:26:21 +00:00
Robert Drehmel
ea924c4cd3 Implement preliminary support for the PT_SYSCALL command to ptrace(2). 2003-10-09 10:17:16 +00:00
Tim J. Robbins
a50f62fd9f Remove support for the unused 4th component of the KERN_PROC_PROC sysctl. 2003-10-06 01:26:11 +00:00
Jeff Roberson
d1cf0fc7fc - Add a missing vn_start_write() to flushbufqueues(). This could have
caused snapshot related problems.
 - The vp cannot be NULL here or we would panic in vfs_bio_awrite().  Stop
   confusing the logic by checking for it in several places.

Submitted by:	kirk and then rototilled by me to remove vp == NULL checks.
2003-10-05 22:16:08 +00:00
Bruce M Simpson
f05970242b Bring back sysctl_wire_old_buffer(). Fix a bug in sysctl_handle_opaque()
whereby the pointers would not get reset on a retried SYSCTL_OUT() call.

Noticed by:	bde
2003-10-05 13:31:33 +00:00
Bruce M Simpson
dcf59a59fc Fix a security problem in sysctl() the long way round.
Use pre-emption detection to avoid the need for wiring a userland buffer
when copying opaque data structures.

sysctl_wire_old_buffer() is now a no-op. Other consumers of this
API should use pre-emption detection to notice update collisions.

vslock() and vsunlock() should no longer be called by any code
and should be retired in subsequent commits.

Discussed with:	pete, phk
MFC after:	1 week
2003-10-05 09:37:47 +00:00
Bruce M Simpson
0c9601bc6b Add a pre-emption counter, td_generation, so that threads can notice
when they have been pre-empted by other threads. This is bumped from
within mi_switch() every time a context switch takes place.

Discussed with:	pete
2003-10-05 09:35:08 +00:00
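Taken together with the sysctl changes above, the intended pattern looks roughly like this sketch. The handler itself is made up; td_generation, SYSCTL_OUT(), and the oldidx reset (the sysctl_handle_opaque() fix above) are the pieces the commits describe.

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/sysctl.h>

    /* Snapshot td_generation, copy out, and retry from the start if we
     * were preempted mid-copy.  Resetting req->oldidx makes the retried
     * SYSCTL_OUT() overwrite instead of appending. */
    static int
    copyout_with_preemption_check(struct sysctl_req *req, void *src,
        size_t len)
    {
        int error, gen;

        do {
            gen = curthread->td_generation;     /* bumped in mi_switch() */
            req->oldidx = 0;
            error = SYSCTL_OUT(req, src, len);
            if (error != 0)
                return (error);
        } while (gen != curthread->td_generation);
        return (0);
    }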
Bruce M Simpson
51830edcc5 Fold the vslock() and vsunlock() calls in this file with #if 0's; they will
go away in due course. Involuntary pre-emption means that we can't count
on wiring of pages alone for consistency when performing a SYSCTL_OUT()
bigger than PAGE_SIZE.

Discussed with:	pete, phk
2003-10-05 08:38:22 +00:00
Jeff Roberson
98d7d155c1 - Apply a big giant lock around the namecache. This has been sitting in
my tree since BSDcon.
2003-10-05 07:13:50 +00:00
Jeff Roberson
bdcfcdecea - Fix an XXX. Check the error of vn_lock() in vflush(). Don't specify
LK_RETRY either, we don't want this vnode if it turns into another.
 - Remove the code that checks the mount point after acquiring the lock;
   we are guaranteed to either fail or get the vnode that we wanted.
2003-10-05 07:12:38 +00:00
Bruce M Simpson
5be99846fc Remove magic numbers surrounding locking state in the sysctl module, and
replace them with more meaningful defines.
2003-10-05 05:38:30 +00:00
Jeff Roberson
45503a37dd - Rename vcanrecycle() to vtryrecycle() to reflect its new role.
- In vtryrecycle() try to vgonel the vnode if all of the previous checks
   passed.  We won't vgonel if someone has either acquired a hold or usecount
   or started the vgone process elsewhere.  This is because we may have been
   removed from the free list while we were inspecting the vnode for
   recycling.
 - The VI_TRYLOCK stops two threads from entering getnewvnode() and recycling
   the same vnode.  To further reduce the likelihood of this event, requeue
   the vnode on the tail of the list prior to calling vtryrecycle().  We
   cannot actually remove the vnode from the list until we know that it's
   going to be recycled because other interlock holders may see the VI_FREE
   flag and try to remove it from the free list.
 - Kill a bogus XXX comment.  If XLOCK is set we shouldn't wait for it
   regardless of MNT_WAIT because the vnode does not actually belong to
   this filesystem.
2003-10-05 05:35:41 +00:00
Jeff Roberson
85311d4b59 - Don't cache_purge() in getnewvnode. It's done in vclean(). With this
purge, the purge in vclean, and the filesystem's purge, we had 3 purges
   per vnode.
 - Move the insmntque(vp, 0) to vclean() so that we may remove it from the
   two vgone() functions and reduce the number of lock operations required.
2003-10-05 02:48:04 +00:00
Jeff Roberson
ce13b187e7 - Solve a LOR with the sync_mtx by using the VI_ONWORKLST flag to determine
whether or not the sync failed.  This could potentially get set between
   the time that we VOP_UNLOCK and VI_LOCK() but the race would harmlessly
   lead to the sync being delayed by an extra 30 seconds.  If we do not move
   the vnode it could cause an endless loop if it continues to fail to sync.
 - Use vhold and vdrop to stop the vnode from changing identities while we
   have it unlocked.  Other internal vfs lists are likely to follow this
   scheme.
2003-10-05 00:35:41 +00:00
Jeff Roberson
894fbf9769 - Move the xlock 'locking' code into vx_lock() and vx_unlock().
- Create a new function, vgonechrl(), which performs vgone for an in-use
   character device.  Move the code from vflush() that did this into
   vgonechrl().
 - Hold the xlock across the entirety of vgonel() and vgonechrl() so that
   at no point will an invalid vnode exist on any list without XLOCK set.
 - Move the xlock code out of vclean() now that it is in the vgone*()
   functions.
2003-10-05 00:02:41 +00:00
Alan Cox
6ec2fca505 Eliminate some unnecessary uses of the vm page queues lock around the
vm page's valid field.  This field is being synchronized using the
containing vm object's lock.
2003-10-04 22:47:20 +00:00
Alan Cox
bf0da100d6 - Extend the scope of the vm object lock to cover calls to
vm_page_is_valid().
 - Assert that the lock on the containing vm object is held in
   vm_page_is_valid().
2003-10-04 19:23:29 +00:00
Jeff Roberson
6f4b0863e0 - In sched_sync() test our preconditions prior to dropping the sync_mtx.
This is so that we may grab the interlock while still holding the
   sync_mtx.  We have to VI_TRYLOCK() because in all other cases the lock
   order runs the other way.
 - If we don't meet any of the preconditions, reinsert the vp into the
   list for the next second.
 - We don't need to panic if we fail to sync here because each FSYNC
   function handles this case.  Removing this redundant code also
   simplifies locking.
2003-10-04 18:03:53 +00:00
Jeff Roberson
8ec82641d8 - Change a lame iterative algorithm to a constant time algorithm. Remove
the XXX that complains about it as well.

Submitted by:	ThomasWuerfl@gmx.de
2003-10-04 17:41:13 +00:00
Jeff Roberson
e4c49d2b50 - In a Giantless world, the vn_lock() in vcanrecycle() could legitimately
fail.  Remove the panic from that case and document why it might fail.
 - Document the reason for calling cache_purge() on a newly created vnode.
 - In insmntque() order the operations so that we can call mtx_unlock()
   one fewer times.  This makes the code somewhat clearer as well.
 - Add XXX comments in sched_sync() and vflush().
 - In vget(), do not sleep while waiting for XLOCK to clear if LK_NOWAIT is
   set.
 - In vclean() we don't need to acquire a lock around a single TAILQ_FIRST
   call.  It's ok if we race here, the vinvalbuf will just do nothing.
 - Increase the scope of the lock in vgonel() to reduce the number of lock
   operations that are performed.
2003-10-04 15:10:40 +00:00
Jeff Roberson
1de1f935f2 - If we are called with LK_NOWAIT in vn_lock() we may be holding a mutex
and should not sleep while waiting for XLOCK to clear.  Care needs to be
   taken in functions that use this capability to avoid spinning.
2003-10-04 14:35:22 +00:00
Jacques Vidrine
8b7358ca43 Introduce a uiomove_frombuf helper routine that handles computing and
validating the offset within a given memory buffer before handing the
real work off to uiomove(9).

Use uiomove_frombuf in procfs to correct several issues with
integer arithmetic that could result in underflows/overflows.  As a
side-effect, the code is significantly simplified.

Add additional sanity checks when computing a memory allocation size
in pfs_read.

Submitted by:	rwatson  (original uiomove_frombuf -- bugs are mine :-)
Reported by:	Joost Pol <joost@pine.nl>  (integer underflows/overflows)
2003-10-02 15:00:55 +00:00
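The essence of the helper, as a hedged sketch; the exact truncation and overflow checks here are approximations of the committed uiomove_frombuf(), not a copy of it.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/errno.h>
    #include <sys/uio.h>

    /* Validate uio_offset against the buffer, then let uiomove(9) do the
     * copy; negative or out-of-range offsets never reach the pointer
     * arithmetic that procfs used to get wrong. */
    int
    uiomove_frombuf(void *buf, int buflen, struct uio *uio)
    {
        unsigned int offset;

        if (uio->uio_offset < 0 || uio->uio_resid < 0 ||
            (offset = uio->uio_offset) != uio->uio_offset)
            return (EINVAL);                    /* offset does not fit */
        if (buflen <= 0 || offset >= (unsigned int)buflen)
            return (0);                         /* nothing to copy */
        return (uiomove((char *)buf + offset, buflen - offset, uio));
    }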
Robert Watson
c142b0fcfe Remove the global variable 'cmask', which was used to initialize the
fd_cmask field in the file descriptor structure for the first process
indirectly from CMASK, and when an fd structure is initialized before
being filled in, and instead just use CMASK.  This appears to be an
artifact left over from the initial integration of quotas into BSD.

Suggested by:	peter
2003-10-02 03:57:59 +00:00
Jeff Roberson
fa3f9daae5 - On my Pentium4-M laptop, invalpg takes ~1100 cycles if the page is found in
the TLB and ~1600 if it is not.  Therefore, it is more efficient to
   invalidate the TLB after operations that use CMAP rather than before.
 - So that the tlb is invalidated prior to switching off of a processor, we
   must change the switchin functions to switchout functions.
 - Remove td_switchout from the thread and move it to the x86 pcb.
 - Move the code that calls switchout into swtch.s.  These changes make this
   optimization truly x86 specific.
2003-09-30 08:11:36 +00:00
Robert Watson
cc7b13bfe0 If the struct mac copied into the kernel has a negative length, return
EINVAL rather than failing the following malloc due to the value being
too large.
2003-09-29 18:35:17 +00:00
Poul-Henning Kamp
431021789f Retire revoke_and_destroy_dev() with extreme prejudice. 2003-09-28 20:50:36 +00:00
Marcel Moolenaar
c31f2280ed Remove the regstkpages sysctl variable. We have a growable register
stack now.
2003-09-27 23:07:47 +00:00
Marcel Moolenaar
fd75d71049 Part 2 of implementing rstacks: add the ability to create rstacks and
use the ability on ia64 to map the register stack. The orientation of
the stack (i.e. its grow direction) is passed to vm_map_stack() in the
overloaded cow argument. Since the grow direction is represented by
bits, it is possible and allowed to create bi-directional stacks.
This is not an advertised feature, more of a side-effect.

Fix a bug in vm_map_growstack() that's specific to rstacks and which
we could only find by having the ability to create rstacks: when
the mapped stack ends at the faulting address, we have not actually
mapped the faulting address. We need to include or cover the faulting
address.

Note that at this time mmap(2) has not been extended to allow the
creation of rstacks by processes. If such a need arises, this can
be done.

Tested on: alpha, i386, ia64, sparc64
2003-09-27 22:28:14 +00:00
Poul-Henning Kamp
98c469d484 Make life a little bit easier for cloning device drivers. 2003-09-27 21:50:00 +00:00
Poul-Henning Kamp
b294143142 Introduce no_poll() default method for device drivers. Have it
do exactly the same as vop_nopoll() for consistency and put a
comment in the two pointing at each other.

Retire seltrue() in favour of no_poll().

Create private default functions in kern_conf.c instead of public
ones.

Change the default strategy to return the bio with ENODEV instead of
doing nothing, which would leave the bio stranded.

Retire public nullopen() and nullclose() as well as the entire band
of public no{read,write,ioctl,mmap,kqfilter,strategy,poll,dump}
functions; they are the default actions now.

Move the final two trivial functions from subr_xxx.c to kern_conf.c
and retire the now-empty subr_xxx.c.
2003-09-27 12:53:33 +00:00
Poul-Henning Kamp
41cbb0b237 Don't use seltrue when that is not really what we mean. 2003-09-27 12:44:06 +00:00
Poul-Henning Kamp
70cd771337 The present defaults for open and close for device drivers which
provide no methods do not make any sense, and are not used by any
driver.

It is pretty hard to come up with even a theoretical concept of
a device driver which would always fail open and close with ENODEV.

Change the defaults to be nullopen() and nullclose(), which simply
do nothing.

Remove explicit initializations to these from the drivers which
already used them.
2003-09-27 12:01:01 +00:00
Poul-Henning Kamp
3f99f14bf1 OK, I messed up /dev/console with what I had hoped would be compat
code.  Convert remaining console drivers and hope for the best.
2003-09-26 19:35:50 +00:00
Robert Drehmel
4cc9f52f78 Move some tracing related code into its own function as it will
be needed for system call related ptrace functionality I plan
to commit soon.
2003-09-26 15:09:46 +00:00
Poul-Henning Kamp
3d4274a52b Update the list of CDROM device names to try for booting with RB_CDROM
flag set.
2003-09-26 09:07:27 +00:00
Poul-Henning Kamp
0d44087987 Remove the wrongly sized cnd_name field; we now store the name in the
consdev structure.

If the consdev name is not set and we have a cn_dev, set the name
from there.  Try to issue a printf about this, even though it may
not have a place to go.

Modify the sysctl related code to pick up the name from the consdev
instead.
2003-09-26 07:26:54 +00:00
Peter Wemm
c460ac3a00 Add sysentvec->sv_fixlimits() hook so that we can catch cases on 64 bit
systems where the data/stack/etc limits are too big for a 32 bit process.

Move the 5 or so identical instances of ELF_RTLD_ADDR() into imgact_elf.c.

Supply an ia32_fixlimits function.  Export the clip/default values to
sysctl under the compat.ia32 hierarchy.

Have mmap(0, ...) respect the current p->p_limits[RLIMIT_DATA].rlim_max
value rather than the sysctl tweakable variable.  This allows mmap to
place mappings at sensible locations when limits have been reduced.

Have the imgact_elf.c ld-elf.so.1 placement algorithm use the same
method as mmap(0, ...) now does.

Note that we cannot remove all references to the sysctl tweakable
maxdsiz etc variables because /etc/login.conf specifies a datasize
of 'unlimited'.  And that causes exec etc to fail since it can no
longer find space to mmap things.
2003-09-25 01:10:26 +00:00
Max Khon
b15572e3fc Avoid NULL pointer dereferencing in modlist_lookup2().
PR:		56570
Submitted by:	Thomas Wintergerst <Thomas.Wintergerst@nord-com.net>
2003-09-23 14:42:38 +00:00
Alan Cox
c76789caa6 - vm_hold_free_pages() should lock the kernel object. (The pages being
freed belong to the kernel object.)
 - Increase the granularity of the vm object locking in vm_hold_load_pages()
   in order to reduce the number of times that we acquire and release the
   same lock.
2003-09-22 04:58:09 +00:00
Doug Rabson
ab7a2646e0 The method link_preload_finish is not static. 2003-09-20 17:39:32 +00:00
Jeff Roberson
81de51bf1d - Somewhere along the line I stupidly removed critical logic from
sched_pctcpu_update().  This caused erroneous cpu times in TOP for
   processes that were asleep.  Replace the code that was removed.
2003-09-20 02:05:58 +00:00
Jeff Roberson
51b575490c - In reassignbuf() don't unlock vp and lock newvp if they are the same.
Doing so creates a race where the buf is on neither list.
 - Only vfree() in an error case in vclean() if VSHOULDFREE() thinks we
   should.
 - Convert the error case in vclean() to INVARIANTS from DIAGNOSTIC as this
   really should not happen and is fast to check.
2003-09-20 00:21:48 +00:00
Jeff Roberson
6b6c163a37 - Remove spls(). The locking that has replaced them is in place and they
no longer serve as guidelines for future work.
2003-09-19 23:52:06 +00:00
Alexander Kabaev
aebbeee812 Eliminate one case of VI_UNLOCK followed by an immediate
VI_LOCK.
2003-09-19 19:13:54 +00:00
Tim J. Robbins
3ddaef4034 Allow the KERN_PROC_PROC sysctl to be used without the useless 4th
name component, for consistency with KERN_PROC_ALL. Support for the
4-argument form will be removed some time before 5.2-R.
2003-09-19 14:16:50 +00:00
Jeff Roberson
9fb535dec5 - Only use UMA to cache malloc requests up to PAGE_SIZE. Values larger than
this are requested very infrequently and waste memory when we cache
   spares.
2003-09-19 04:39:08 +00:00
Alan Cox
35b86dc8de Correct a typo in the previous revision. 2003-09-15 02:56:48 +00:00
Robert Watson
62c45ef40a Add a new sysctl, security.bsd.conservative_signals, to disable
special signal-delivery protections for setugid processes.  In the
event that a system is relying on "unusual" signal delivery to
processes that change their credentials, this can be used to work
around application problems.

Also, add SIGALRM to the set of signals permitted to be delivered to
setugid processes by unprivileged subjects.

Reported by:	Joe Greco <jgreco@ns.sol.net>
2003-09-14 07:22:38 +00:00
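A trivial userland sketch for inspecting the new knob with sysctlbyname(3); setting it to 0 is the workaround described above.

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        int val;
        size_t len = sizeof(val);

        if (sysctlbyname("security.bsd.conservative_signals", &val, &len,
            NULL, 0) == -1) {
            perror("sysctlbyname");
            return (1);
        }
        printf("security.bsd.conservative_signals = %d\n", val);
        return (0);
    }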
Jacques Vidrine
5949ba2136 sched_setscheduler: Return EINVAL when an invalid policy is specified,
thus complying with POLA and the man page.  (Previously, no error was
returned for this case.)
2003-09-13 18:46:24 +00:00
Jacques Vidrine
b5e80ae344 Correct mostly harmless off-by-one error in getdomainname().
Reviewed by:	imp
2003-09-13 17:12:22 +00:00
Alan Cox
58abfe0051 Convert vmapbuf() from using pmap_extract() to using
pmap_extract_and_hold().  Note, however, that GIANT_REQUIRED should not be
removed until all platforms fully implement the "prot" parameter to
pmap_extract_and_hold().

Reviewed by:	tegge
2003-09-13 04:29:55 +00:00
Alan Cox
27d203eab3 pipe_build_write_buffer() only requires read access of the page that it
obtains from pmap_extract_and_hold().
2003-09-12 07:13:15 +00:00
Marcel Moolenaar
da13b8f9fe Introduce BUS_CONFIG_INTR(). The method allows devices to tell parents
about interrupt trigger mode and interrupt polarity. This allows ACPI
for example to pass interrupt resource information up the hierarchy.
The default implementation of the method therefore is to pass the
request to the parent.

Reviewed by: jhb, njl
2003-09-10 21:37:10 +00:00
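A sketch of the default behaviour: a bus that has nothing to add simply forwards the hint to its parent. The driver name is illustrative; bus_generic_config_intr() provides this as the stock method, and the BUS_CONFIG_INTR() wrapper comes from the generated bus_if.h.

    #include <sys/param.h>
    #include <sys/bus.h>
    #include "bus_if.h"

    static int
    mybus_config_intr(device_t dev, int irq, enum intr_trigger trig,
        enum intr_polarity pol)
    {
        /* Pass the trigger/polarity request up the hierarchy. */
        return (BUS_CONFIG_INTR(device_get_parent(dev), irq, trig, pol));
    }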
Hidetoshi Shimokawa
8edbaf859d Fix asynchronous physio breakage introduced in rev 1.163.
We cannot use bp->b_caller2 because DEV_STRATEGY will overwrite it.
2003-09-10 15:48:51 +00:00
John Baldwin
2b3c42a9e9 Update the license on this file to be a bit more sane. 2003-09-10 01:09:32 +00:00
Ian Dowse
ffe40c80ea In the !MNT_BYFSID case, return EINVAL from unmount(2) when the
specified directory is not found in the mount list. Before the
MNT_BYFSID changes, unmount(2) used to return ENOENT for a nonexistent
path and EINVAL for a non-mountpoint, but we can no longer distinguish
between these cases. Of the two error codes, EINVAL was more likely
to occur in practice, and it was the only one of the two that was
documented.

Update the manual page to match the current behaviour.

Suggested by:	tjr
Reviewed by:	tjr
2003-09-08 16:23:21 +00:00
Alan Cox
03be99d20c Use pmap_extract_and_hold() in pipe_build_write_buffer(). Consequently,
pipe_build_write_buffer() no longer requires Giant on entry.

Reviewed by:	tegge
2003-09-08 04:58:32 +00:00
Tim J. Robbins
f05a427aa6 Return EINVAL if the contested bit is not set on the umtx passed to
_umtx_unlock() instead of firing a KASSERT.
2003-09-07 11:14:52 +00:00
Alan Cox
ffe5125eac msync(2) should be declared MP-safe. 2003-09-07 05:42:07 +00:00
Sam Leffler
6c024e8ef6 add fast swi taskqueue spinlock to the order_list so witness doesn't complain
Submitted by:	Tor Egge <Tor.Egge@cvsup.no.freebsd.org>
2003-09-06 21:06:08 +00:00
Sam Leffler
7e2282a5a6 correct fast swi taskqueue spinlock name to be different from the sleep lock
Submitted by:	Tor Egge <Tor.Egge@cvsup.no.freebsd.org>
2003-09-06 21:05:18 +00:00
Alan Cox
603d3d4a44 Giant is no longer required by pipe_destroy_write_buffer(). Remove
unnecessary white space from pipe_destroy_write_buffer().
2003-09-06 21:02:10 +00:00
Sam Leffler
f82c9e70f9 "fast swi" taskqueue support. This is a taskqueue that uses spinlocks
making it useful for dispatching swi tasks from fast interrupt handlers.

Sponsored by:	FreeBSD Foundation
2003-09-05 23:09:22 +00:00
Sam Leffler
7c00e355a2 Print a message at boot for interrupt handlers created with INTR_MPSAFE
and/or INTR_FAST.  This belongs elsewhere and perhaps under bootverbose;
I'm committing it for now as it's useful to know which drivers have
been converted and which have not.
2003-09-05 22:51:18 +00:00
Peter Wemm
917cf8d2a3 Log involuntary context switches correctly. 2003-09-05 22:15:26 +00:00
Poul-Henning Kamp
ce914a08b0 Put the message about msgbuf cksum mismatch under bootverbose and tell
people what the consequence is.
2003-09-05 11:12:00 +00:00
Poul-Henning Kamp
c679c73452 Use the quality to disable timecounters for which we deem Hz too low. 2003-09-03 08:14:16 +00:00
Kenneth D. Merry
cb32189e23 Move dynamic sysctl(8) variable creation for the cd(4) and da(4) drivers
out of cdregister() and daregister(), which are run from interrupt context.

The sysctl code does blocking mallocs (M_WAITOK), which causes problems
if malloc(9) actually needs to sleep.

The eventual fix for this issue will involve moving the CAM probe process
inside a kernel thread.  For now, though, I have fixed the issue by moving
dynamic sysctl variable creation for these two drivers to a task queue
running in a kernel thread.

The existing task queues (taskqueue_swi and taskqueue_swi_giant) run in
software interrupt handlers, which wouldn't fix the problem at hand.  So I
have created a new task queue, taskqueue_thread, that runs inside a kernel
thread.  (It also runs outside of Giant -- clients must explicitly acquire
and release Giant in their taskqueue functions.)

scsi_cd.c:	Remove sysctl variable creation code from cdregister(), and
		move it to a new function, cdsysctlinit().  Queue
		cdsysctlinit() to the taskqueue_thread taskqueue once we
		have fully registered the cd(4) driver instance.

scsi_da.c:	Remove sysctl variable creation code from daregister(), and
		move it to a new function, dasysctlinit().
		Queue dasysctlinit() to the taskqueue_thread taskqueue once
		we have fully registered the da(4) instance.

taskqueue.h:	Declare the new taskqueue_thread taskqueue, update some
		comments.

subr_taskqueue.c:
		Create the new kernel thread taskqueue.  This taskqueue
		runs outside of Giant, so any functions queued to it would
		need to explicitly acquire/release Giant if they need it.

cd.4:		Update the cd(4) man page to talk about the minimum command
		size sysctl/loader tunable.  Also note that the changer
		variables are available as loader tunables as well.

da.4:		Update the da(4) man page to cover the retry_count,
		default_timeout and minimum_cmd_size sysctl variables/loader
		tunables.  Remove references to /dev/r???, they aren't used
		any longer.

cd.9:		Update the cd(9) man page to describe the CD_Q_10_BYTE_ONLY
		quirk.

taskqueue.9:	Update the taskqueue(9) man page to describe the new thread
		task queue, and the taskqueue_swi_giant queue.

MFC after:	3 days
2003-09-03 04:46:28 +00:00
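A sketch of the deferral pattern the two drivers now use. The task variable and helper are illustrative (in the drivers the task lives in the softc); TASK_INIT(), taskqueue_enqueue(), and taskqueue_thread are the taskqueue(9) pieces named above.

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/taskqueue.h>

    static struct task cd_sysctl_task;

    static void
    cdsysctlinit(void *context, int pending)
    {
        /* Runs in the taskqueue_thread kernel thread, not in interrupt
         * context, so blocking M_WAITOK sysctl allocations are safe here.
         * Giant is not held; take it explicitly if needed. */
    }

    static void
    cd_defer_sysctl_init(void *softc)
    {
        TASK_INIT(&cd_sysctl_task, 0, cdsysctlinit, softc);
        taskqueue_enqueue(taskqueue_thread, &cd_sysctl_task);
    }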
Sam Leffler
28ace1bf60 move domain list mutex initialization to earlier in the boot sequence so
statically configured modules like netgraph can call net_init_domain

Noticed by:	D.Rock@t-online.de (D. Rock)
2003-09-02 20:59:23 +00:00
Mike Silbersack
3390d47670 Implement MBUF_STRESS_TEST mark II.
Changes from the original implementation:

- Fragmentation is handled by the function m_fragment, which can
be called from wherever fragmentation is needed.  Note that this
function is wrapped in #ifdef MBUF_STRESS_TEST to discourage non-testing
use.

- m_fragment works slightly differently from the old fragmentation
code in that it allocates a separate mbuf cluster for each fragment.
This defeats dma_map_load_mbuf/buffer's feature of coalescing adjacent
fragments.  While that is a nice feature in practice, it nerfed the
usefulness of mbuf_stress_test.

- Add two modes of random fragmentation.  Chains with fragments all of
the same random length and chains with fragments that are each uniquely
random in length may now be requested.
2003-09-01 05:55:37 +00:00
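A simplified sketch of the per-cluster idea (not the committed m_fragment(), which also implements the two random modes): copy the chain into a new chain in which every fragment gets its own cluster mbuf, so busdma cannot coalesce them. M_NOWAIT is used here as the no-sleep allocation flag.

    #include <sys/param.h>
    #include <sys/mbuf.h>

    static struct mbuf *
    fragment_chain(struct mbuf *m0, int fraglen)
    {
        struct mbuf *head = NULL, *tail = NULL, *m, *n;
        int len, off = 0, total = 0;

        for (m = m0; m != NULL; m = m->m_next)
            total += m->m_len;
        if (fraglen < 1 || fraglen > MCLBYTES)
            fraglen = MCLBYTES;                 /* one cluster per fragment */

        while (off < total) {
            len = MIN(fraglen, total - off);
            n = m_getcl(M_NOWAIT, MT_DATA, 0);  /* separate cluster each time */
            if (n == NULL) {
                m_freem(head);
                return (NULL);
            }
            m_copydata(m0, off, len, mtod(n, caddr_t));
            n->m_len = len;
            if (head == NULL)
                head = n;
            else
                tail->m_next = n;
            tail = n;
            off += len;
        }
        return (head);
    }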
Sam Leffler
b9651df42c o interlock domain list when adding domains
o remove irrelevant spl

Notes:

1. We don't lock domain list traversals as this is safe until we start
   removing domains.
2. The calculation of max_datalen in net_init_domain appears safe as
   no one depends on max_hdr and max_datalen having consistent values.
3. Giant is still held for fast and slow timeouts; this must stay until
   each timeout routine is properly locked (coming soon).

Sponsored by:	FreeBSD Foundation
2003-09-01 05:01:55 +00:00
Jeff Roberson
d919a11d06 - Define a new flag for getblk(): GB_NOCREAT. This flag causes getblk() to
bail out if the buffer is not already present.
 - The buffer returned by incore() is not locked and should not be sent to
   brelse().  Use getblk() with the new GB_NOCREAT flag to preserve the
   desired semantics.
2003-08-31 08:50:11 +00:00
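A sketch of the intended usage; the helper is illustrative, but looking the buffer up with getblk() and GB_NOCREAT instead of trusting the unlocked buf returned by incore() is the point of the commit.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/bio.h>
    #include <sys/buf.h>
    #include <sys/vnode.h>

    /* Drop a block's buffer only if it is already cached; GB_NOCREAT makes
     * getblk() return NULL instead of instantiating a new buffer. */
    static void
    discard_if_cached(struct vnode *vp, daddr_t blkno, int size)
    {
        struct buf *bp;

        bp = getblk(vp, blkno, size, 0, 0, GB_NOCREAT);
        if (bp == NULL)
            return;                     /* not resident; nothing to do */
        bp->b_flags |= B_INVAL | B_RELBUF;
        brelse(bp);
    }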
Jeff Roberson
a7db559087 - If there is no vp assume that BKGRDINPROG is not set and set RELPBUF in
brelse().
2003-08-31 01:07:45 +00:00
Jeff Roberson
b5c61abd82 - In some cases bp->b_vp can be NULL in brelse, don't try to lock the
interlock in that case.

Found by:	alc
2003-08-31 00:06:07 +00:00
Alan Cox
411d10a600 Migrate the sf_buf allocator that is used by sendfile(2) and zero-copy
sockets into machine-dependent files.  The rationale for this
migration is illustrated by the modified amd64 allocator.  It uses the
amd64's direct map to avoid ephemeral mappings in the kernel's
address space.  On an SMP, the ephemeral mappings result in an IPI
for TLB shootdown for each transmitted page.  Yuck.

Maintainers of other 64-bit platforms with direct maps should be able
to use the amd64 allocator as a reference implementation.
2003-08-29 20:04:10 +00:00
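A sketch of why the direct map helps: on a platform like amd64 the "mapping" of a page is pure address arithmetic, so there is no ephemeral KVA to set up and no TLB-shootdown IPI per page. The helper name is made up; PHYS_TO_DMAP() is the amd64 direct-map macro.

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>
    #include <machine/vmparam.h>

    /* amd64-style page-to-KVA translation via the direct map; other 64-bit
     * platforms with a direct map can implement their sf_buf allocator
     * the same way. */
    static vm_offset_t
    page_kva(vm_page_t m)
    {
        return (PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)));
    }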