6526 Commits

Author SHA1 Message Date
alc
7199d3e24f Remove Giant from writev(2). Eliminate trivial style differences between
writev(2) and readv(2).
2003-08-01 02:21:54 +00:00
jhb
937519b3ea If a spin lock is held for too long and WITNESS is enabled, then call
witness_display_spinlock() to see if we can find out where the current
owner of the spin lock last acquired the lock.
2003-07-31 18:52:18 +00:00
jhb
a3b9c0d553 Add a new function to look for a spinlock's instance when it is held by
another thread.  We use the td_oncpu member of the other thread to locate
its associated CPU and then search that CPU's list of spin locks
contained in its per-CPU data.  This is not always safe and may in fact
panic or just not work, but it is useful in at least one case.
2003-07-31 18:50:58 +00:00
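A simplified C sketch of the lookup described in the commit above; apart from
td_oncpu, the structure and field names here are hypothetical stand-ins for
the per-CPU held-spinlock list, not the kernel's actual definitions:

    /* Hypothetical illustration: given the owner's CPU index (its td_oncpu
       value), walk that CPU's list of held spin locks looking for the one
       of interest.  The scan is inherently racy, as the commit notes. */
    struct lock_instance_sketch {
            const void *li_lock;    /* lock object this instance records */
            const char *li_file;    /* file of the last acquisition */
            int         li_line;    /* line of the last acquisition */
    };

    struct percpu_sketch {
            struct lock_instance_sketch held_spinlocks[32];
            int nheld;
    };

    static struct lock_instance_sketch *
    find_spin_instance(struct percpu_sketch *cpus, int td_oncpu,
        const void *lock)
    {
            struct percpu_sketch *cd = &cpus[td_oncpu];

            for (int i = 0; i < cd->nheld; i++)
                    if (cd->held_spinlocks[i].li_lock == lock)
                            return (&cd->held_spinlocks[i]);
            return (NULL);
    }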
jhb
bc9db472d8 Update the 'ps', 'show pci', and 'show ktr' ddb commands to use the new
pager callout instead of hand-rolling their own paging facility.
2003-07-31 17:29:42 +00:00
peter
8dd9d4012a When ktracing context switches, make sure we record involuntary switches.
Otherwise, when we get evicted from the CPU, there is no record of it.
This is not a default ktrace flag.
2003-07-31 01:36:24 +00:00
davidxu
176657958f Use correct signal when calling sigexit. 2003-07-30 23:11:37 +00:00
pb
edb5fbc5cc Remove test in pipe_write() which causes write(2) to return EAGAIN
on a non-blocking pipe in cases where select(2) returns the file
descriptor as ready for write. This in turn causes libc_r, for
one, to busy-wait in such cases.

Note: this is a quick performance fix; a more complex fix might be
required if this turns out to have unexpected side effects.

Reviewed by:	silby
MFC after:	3 days
2003-07-30 22:50:37 +00:00
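A minimal userland illustration of the case the fix addresses: once select(2)
reports the write end of a non-blocking pipe as ready, a small write(2)
should succeed rather than fail with EAGAIN (which is what made libc_r spin):

    #include <sys/select.h>

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            int fds[2];
            fd_set wset;
            char byte = 'x';

            if (pipe(fds) == -1)
                    return (1);
            fcntl(fds[1], F_SETFL, O_NONBLOCK);     /* non-blocking writer */

            FD_ZERO(&wset);
            FD_SET(fds[1], &wset);
            if (select(fds[1] + 1, NULL, &wset, NULL, NULL) > 0 &&
                FD_ISSET(fds[1], &wset)) {
                    if (write(fds[1], &byte, 1) == -1 && errno == EAGAIN)
                            printf("EAGAIN despite select: the busy-wait case\n");
                    else
                            printf("write accepted after select, as expected\n");
            }
            return (0);
    }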
jhb
97e378fb00 When complaining about a sleeping thread owning a mutex, display the
thread's pid to make debugging easier for people who don't want to have to
use the intended tool for these panics (witness).

Indirectly prodded by:	kris
2003-07-30 20:42:15 +00:00
alc
fc6d1980cc The introduction of vm object locking has caused witness to reveal
a long-standing mistake in the way a portion of a pipe's KVA is
allocated.  Specifically, kmem_alloc_pageable() is inappropriate
for use in the "direct" case because it allows a preceding vm map entry
and vm object to be extended to support the new KVA allocation.
However, the direct case KVA allocation should not have a backing
vm object.  This is corrected by using kmem_alloc_nofault().

Submitted by:	tegge (with the above explanation by me)
2003-07-30 18:55:04 +00:00
alc
bbf702f5b5 Revision 1.51 of vm/uma_core.c modified uma_large_free() to acquire Giant
when needed.  So, don't do it here.
2003-07-29 05:23:19 +00:00
rwatson
d2f7ae9f88 Rename VOP_RMEXTATTR() to VOP_DELETEEXTATTR() for consistency with the
kernel ACL interfaces and system call names.

Break out UFS2 and FFS extattr delete and list vnode operations from
setextattr and getextattr to deleteextattr and listextattr, which
cleans up the implementations, makes the results more readable,
and makes the APIs clearer.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, Network Associates Laboratories
2003-07-28 18:53:29 +00:00
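For reference, the userland face of these interfaces is the extattr family of
system calls; a small hedged example (the file name and attribute are
placeholders):

    #include <sys/types.h>
    #include <sys/extattr.h>

    #include <stdio.h>

    int
    main(void)
    {
            char list[1024];
            ssize_t n;

            /* Set, list, and delete a user-namespace extended attribute. */
            extattr_set_file("example.txt", EXTATTR_NAMESPACE_USER,
                "comment", "hello", 5);
            n = extattr_list_file("example.txt", EXTATTR_NAMESPACE_USER,
                list, sizeof(list));
            if (n >= 0)
                    printf("attribute list occupies %zd bytes\n", n);
            extattr_delete_file("example.txt", EXTATTR_NAMESPACE_USER,
                "comment");
            return (0);
    }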
rwatson
9bfbf98f8a When exporting file descriptor data for threads invoking the
kern.file sysctl, don't return information about processes that
fail p_cansee(td, p).  This prevents sockstat and related
programs from seeing file descriptors owned by processes not
in the same jail as the thread, as well as having implications
for MAC, etc.

This is a partial solution: it permits an information leak about
the number of descriptors in the sizing calculation (but this is
not new information, you can also get it from kern.openfiles),
and doesn't attempt to mask file descriptors based on the
properties of the descriptor, only the process referencing it.
However, it provides most of what you want under most
circumstances, without complicating the locking.

PR:	54211
Based on a patch submitted by:	Pawel Jakub Dawidek <nick@garage.freebsd.pl>
2003-07-28 16:03:53 +00:00
phk
e457974b5d Pass the file descriptor index down to vn_open.
If the method vector was replaced and we got the "special return code",
smile and trust that whatever happened below DTRT.
2003-07-27 20:09:13 +00:00
phk
b80d7fd8a0 Pass the fdidx argument from vn_open{_cred}() onto VOP_OPEN() 2003-07-27 20:05:36 +00:00
phk
d4d7ca154a Add fdidx argument to vn_open() and vn_open_cred() and pass -1 throughout. 2003-07-27 17:04:56 +00:00
phk
b40be37a17 Call the new argument "fdidx" that is more precise than "fd". 2003-07-27 17:03:20 +00:00
dwmalone
41c671f8e6 Now that we can call kmem_malloc without Giant, it should be safe
to do mbuf allocation without Giant, so remove the GIANT_REQUIRED
from mb_alloc in the M_TRYWAIT case.
2003-07-27 14:19:23 +00:00
phk
6221ef9078 Add a "int fd" argument to VOP_OPEN() which in the future will
contain the file descriptor number on opens from userland.

The index is used rather than a "struct file *" since it conveys a bit
more information, which may be useful in particular to fdescfs and /dev/fd/*

For now pass -1 all over the place.
2003-07-26 07:32:23 +00:00
scottl
d7b9b07222 Guard against MLEN growing larger than a uint8_t due to MSIZE growing to a
value of 512 in LINT.  This keeps gcc from complaining.
2003-07-26 07:23:24 +00:00
alc
90dbdc0a7e Revision 1.51 of vm/uma_core.c modified uma_large_malloc() to acquire
Giant when needed.
2003-07-25 22:26:43 +00:00
mtm
e2309e4ab4 The POSIX spec also requires that kern_sigtimedwait return
EINVAL if tv_nsec of the timeout is less than zero.
2003-07-24 17:07:17 +00:00
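POSIX requires tv_nsec to lie in [0, 1000000000); the commit adds the negative
half of that check.  A minimal sketch of the validation (not the kernel code
itself):

    #include <errno.h>
    #include <time.h>

    /* Reject a timeout whose tv_nsec is negative or >= one second. */
    static int
    validate_timeout(const struct timespec *ts)
    {
            if (ts != NULL &&
                (ts->tv_nsec < 0 || ts->tv_nsec >= 1000000000L))
                    return (EINVAL);
            return (0);
    }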
peter
c25984e00e Initialize 'blocked' to NULL. I think this was a real problem, but I
am not sure about that.  The lack of -Werror and the inline noise hid
this for a while.
2003-07-23 20:29:13 +00:00
phk
eb30c92e49 Revert stuff which accidentally ended up in the previous commit. 2003-07-22 10:36:36 +00:00
phk
c4a9334fa6 Don't attempt to inline the large functions mb_alloc() and mb_free();
doing so more than doubles the text size of this file.

GCC has wisely ignored us on this previously.
2003-07-22 10:24:41 +00:00
davidxu
ae38138034 Always deliver synchronous signal to UTS for SA threads. 2003-07-21 00:26:52 +00:00
mtm
c986a2c5cf Turn a KASSERT back into an EINVAL return value. So, next time someone
comes across it, it will turn into a core dump in userland instead of
a kernel panic. I had also inverted the sense of the test; fix that as well.

Double pointy hat to:	mtm
2003-07-19 11:32:48 +00:00
silby
4eaad33c82 Three fixes:
- Make m_prepend use m_gethdr instead of m_get where
  appropriate

- Make m_copym use m_gethdr instead of m_get where
  appropriate

- Add a call to m_fixhdr in m_defrag; m_defrag can't
  deal with corrupted pkthdr.len counts.

MFC after:	3 days
2003-07-19 06:03:48 +00:00
mtm
390cdc2c27 Remove a lock held across casuptr() that snuck in last commit. 2003-07-18 21:26:45 +00:00
mtm
514148f083 Move the decision on whether to unset the contested
bit or not from lock to unlock time.

Suggested by:	jhb
2003-07-18 17:58:37 +00:00
robert
ab57004058 To avoid a kernel panic provoked by a NULL pointer dereference,
do not clear the `sb_sel' member of the sockbuf structure
while invalidating the receive sockbuf in sorflush(), called
from soshutdown().

The panic was reproducible from userland by attaching a knote
with EVFILT_READ filters to a socket, disabling further reads
from it using shutdown(2), and then closing it.  knote_remove()
was called to remove all knotes from the socket file descriptor
by detaching each using its associated filterops' detach callback
function, sordetach() in this case, which tried to remove
itself from the invalidated sockbuf's klist (sb_sel.si_note).

PR:	kern/54331
2003-07-17 23:49:10 +00:00
davidxu
97d2d9dfed Fix sigwait to conform to POSIX.
When a signal is being delivered to a process, first look for a sigwait
thread to deliver it to; POSIX's rationale is that delivering a signal
to a sigwait thread is faster than the other delivery paths.  A signal
in the thread's wait set causes sigwait to return that signal's number;
a signal not in the wait set but also not blocked by the thread causes
sigwait to return as well, but with EINTR.  sigwait is a one-shot
operation: only one signal can be delivered to its wait set, and once a
signal is delivered to the sigwait thread, its sigwait state is canceled.
2003-07-17 22:52:55 +00:00
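Seen from userland, the semantics described above boil down to: block the
signal, include it in the wait set, and sigwait(2) returns its number once it
is delivered.  A minimal example:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            sigset_t set;
            int sig;

            /* Block SIGUSR1 so it is consumed by sigwait, not a handler. */
            sigemptyset(&set);
            sigaddset(&set, SIGUSR1);
            sigprocmask(SIG_BLOCK, &set, NULL);

            kill(getpid(), SIGUSR1);        /* post a signal in the wait set */
            if (sigwait(&set, &sig) == 0)
                    printf("sigwait returned signal %d\n", sig);
            return (0);
    }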
davidxu
fd1c90fb6f o Refine kse_thr_interrupt to allow it to handle different commands.
o Remove TDF_NOSIGPOST.
o Add a member td_waitset to the proc structure; it will be used for sigwait.

Tested by: deischen
2003-07-17 22:45:33 +00:00
robert
5f9ce489aa Correct six return statements which returned zero instead of
an appropriate error number after a failure condition.

In particular, three of the changed statements return ESRCH for a
failed pfind(), and in the other three places a non-zero return
from p_cansee() is now passed back.

Also noticed by:	rwatson
2003-07-17 22:44:41 +00:00
mtm
ed2409013e Fix umtx locking, for libthr, in the kernel.
1. There was a race condition between a thread unlocking
   a umtx and the thread contesting it. If the unlocking
   thread won the race it may try to wakeup a thread that
   was not yet in msleep(). The contesting thread would then
   go to sleep to await a wakeup that would never come. It's
   not possible to close the race by using a lock because
   calls to casuptr() may have to fault a page in from swap.
   Instead, the race was closed by introducing a flag that
   the unlocking thread will set when waking up a thread.
   The contesting thread will check for this flag before
   going to sleep. For now the flag is kept in td_flags,
   but it may be better to use some other member or create
   a new one because of the possible performance/contention
   issues of having to own sched_lock. Thanks to jhb for
   pointing me in the right direction on this one.

2. Once a umtx was contested, all future locks and unlocks
   were happening in the kernel, regardless of whether it
   was contested or not. To prevent this from happening,
   when a thread locks a umtx it checks the queue for that
   umtx and unsets the contested bit if there are no other
   threads waiting on it. Again, this is slightly more
   complicated than it needs to be because we can't hold
   a lock across casuptr(). So, the thread has to check
   the queue again after unsetting the bit, and reset the
   contested bit if it finds that another thread has put
   itself on the queue in the meantime.

3. Remove the if... block for unlocking an uncontested
   umtx, and replace it with a KASSERT. The _only_ time
   a thread should be unlocking a umtx in the kernel is
   if it is contested.
2003-07-17 11:06:40 +00:00
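An illustrative userland sketch of the pattern in item 2: clear the contested
bit only when the wait queue looks empty, then re-check the queue and restore
the bit if a waiter slipped in, since no lock can be held across the userland
store.  The types and the UMTX_CONTESTED constant are stand-ins, not the
kernel's definitions:

    #include <stdatomic.h>
    #include <stdint.h>

    #define UMTX_CONTESTED  0x1UL           /* stand-in for the real flag */

    struct umtx_sketch {
            _Atomic uintptr_t owner;        /* owner id; low bit = contested */
            _Atomic int       nwaiters;     /* threads queued on this umtx */
    };

    /* Called by a thread that just took the umtx with the contested bit set. */
    static void
    maybe_clear_contested(struct umtx_sketch *u, uintptr_t self)
    {
            if (atomic_load(&u->nwaiters) == 0) {
                    atomic_store(&u->owner, self);  /* drop the contested bit */
                    if (atomic_load(&u->nwaiters) != 0)
                            /* a waiter queued itself meanwhile; restore it */
                            atomic_store(&u->owner, self | UMTX_CONTESTED);
            }
    }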
bmilekic
23022fa940 Change the style of the English used to print accounting enabled
and disabled.  This means no period at the end and changing
"Process accounting <foo>" to "Accounting <foo>".

Pointed out by: bde
2003-07-16 13:20:10 +00:00
bmilekic
6304495090 Log process accounting activation/deactivation.
Useful for some auditing purposes.

Submitted by: Christian S.J. Peron <maneo@bsdpro.com>
PR: kern/54529
2003-07-16 03:59:50 +00:00
truckman
68ed1d12ac Rearrange the SYSINIT order to call lockmgr_init() earlier so that
the runtime lockmgr initialization code in lockinit() can be eliminated.

Reviewed by:	jhb
2003-07-16 01:00:39 +00:00
davidxu
a2f4cb184e If initial thread is still a bound thread, don't change its signal mask. 2003-07-15 14:04:38 +00:00
harti
55e0e26eb4 Add a facility for devices, specifically network interfaces, that require
large to huge amounts of small or medium-sized receive buffers. The problem
with these situations is that they eat up the available DMA address space
very quickly when using mbufs or even mbuf clusters. Additionally this
facility provides a direct mapping between 32-bit integers and these buffers.
This is needed for devices originally designed for 32-bit systems. Usually
the virtual address of the buffer is used as a handle to find the buffer as
soon as it is returned by the card. This does not work for 64-bit machines
and hence this mapping is needed.
2003-07-15 08:59:38 +00:00
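The 32-bit handle idea can be illustrated with a table keyed by a small
integer; everything below is a hypothetical sketch of the concept, not the
facility's actual interface:

    #include <stddef.h>
    #include <stdint.h>

    #define NBUFS   1024

    /* Table mapping compact 32-bit handles to buffer virtual addresses. */
    static void *buf_table[NBUFS];

    /* Hand the device a 32-bit handle instead of a 64-bit pointer. */
    static uint32_t
    buf_to_handle(void *buf, uint32_t idx)
    {
            buf_table[idx] = buf;
            return (idx);
    }

    /* When the card returns the handle, recover the buffer's address. */
    static void *
    handle_to_buf(uint32_t handle)
    {
            return (handle < NBUFS ? buf_table[handle] : NULL);
    }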
davidxu
15825cd99f Rename thread_siginfo to cpu_thread_siginfo 2003-07-15 04:26:26 +00:00
hsu
ae22e1e030 Rev 1.121 meant to pass the value 1 to soalloc() to indicate waitok.
Reported by:	arr
2003-07-14 20:39:22 +00:00
truckman
f985965120 Extend the mutex pool implementation to permit the creation and use of
multiple mutex pools with different options and sizes.  Mutex pools can
be created with either the default sleep mutexes or with spin mutexes.
A dynamically created mutex pool can now be destroyed if it is no longer
needed.

Create two pools by default, one that matches the existing pool that
uses the MTX_NOWITNESS option that should be used for building higher
level locks, and a new pool with witness checking enabled.

Modify the users of the existing mutex pool to use the appropriate pool
in the new implementation.

Reviewed by:	jhb
2003-07-13 01:22:21 +00:00
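The general idea of a mutex pool, sketched in userland with pthreads for
illustration only (this is not the kernel's mtx_pool interface): a fixed set
of pre-initialized mutexes that objects borrow by hashing their address, so
short-lived structures never pay for mutex setup and teardown.

    #include <pthread.h>
    #include <stdint.h>

    #define POOL_SIZE 128                   /* mutexes in the pool */

    static pthread_mutex_t pool[POOL_SIZE];
    static pthread_once_t  pool_once = PTHREAD_ONCE_INIT;

    static void
    pool_init(void)
    {
            for (int i = 0; i < POOL_SIZE; i++)
                    pthread_mutex_init(&pool[i], NULL);
    }

    /* Map an object's address to one of the pooled mutexes. */
    static pthread_mutex_t *
    pool_find(const void *obj)
    {
            uintptr_t h = (uintptr_t)obj;

            pthread_once(&pool_once, pool_init);
            h = (h >> 6) ^ (h >> 14);       /* cheap pointer hash */
            return (&pool[h % POOL_SIZE]);
    }

    /* Usage: pthread_mutex_lock(pool_find(obj)); ...
       pthread_mutex_unlock(pool_find(obj)); */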
robert
580d762f5f Make the system call vector name of a process accessible to userland
applications by introducing the KERN_PROC_SV_NAME sysctl node,
which is searchable by PID.
2003-07-12 02:00:16 +00:00
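A small userland example of querying the new node for the current process
(assuming the usual {CTL_KERN, KERN_PROC, KERN_PROC_SV_NAME, pid} MIB layout
and a string result):

    #include <sys/types.h>
    #include <sys/sysctl.h>

    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_SV_NAME, getpid() };
            char name[64];
            size_t len = sizeof(name);

            if (sysctl(mib, 4, name, &len, NULL, 0) == 0)
                    printf("system call vector: %s\n", name);
            else
                    perror("sysctl");
            return (0);
    }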
davidxu
59f688ef90 If a thread is sending a signal to its process and the thread can handle
the signal itself, it should get it without looking for other threads.
2003-07-11 13:42:23 +00:00
silby
22ad6d5be5 Add init_param3() to subr_param. This function is called
immediately after the kernel map has been sized, and is
the optimal place to autosize memory allocations
that occur within the kernel map.

Suggested by:	bde
2003-07-11 00:01:03 +00:00
peter
fb79192cce unifdef -DLAZY_SWITCH and start to tidy up the associated glue. 2003-07-10 01:02:59 +00:00
silby
eb0aaa7a47 A few minor changes:
- Use atomic ops to update the bigpipe count
- Make the bigpipe count sysctl readable
- Remove a duplicate comparison in an if statement
- Comment two SYSCTLs.
2003-07-09 21:59:48 +00:00
silby
fa9cd99702 Pull in the entire kmem_map size calculation from kern_malloc, rather
than the short-circuited version I had been using, which only worked
properly on i386 & amd64.

Also, change an autoscale constant to account for the more correct
kmem_map size.

Problem noticed by:     mux
2003-07-08 18:59:21 +00:00
jeff
f90fe69bb8 - When stealing a kse in kseq_move(), ignore the current kseq's min nice
value.  We want to steal any thread, even one that is not given a slice
   on its current queue.
2003-07-08 06:19:40 +00:00
silby
bba10d998e Put some concrete limits on pipe memory consumption:
- Limit the total number of pipes so that we do not
  exhaust all vm objects in the kernel map.  When
  this limit is reached, a ratelimited message will
  be printed to the console.

- Put a soft limit on the amount of memory consumable
  by pipes.  Once the limit has been reached, all new
  pipes will be limited to 4K in size, rather than the
  default of 16K.

- Put a limit on the number of pages that may be used
for high-speed page flipping in order to reduce the
  amount of wired memory.  Pipe writes that occur
  while this limit is exceeded will fall back to
  non-page flipping mode.

The above values are auto-tuned in subr_param.c and
are scaled to take into account both the size of
physical memory and the size of the kernel map.

These limits help to reduce the "kernel resources exhausted"
panics that could be caused by opening a large
number of pipes.  (Pipes alone are no longer able
to exhaust all resources, but other kernel memory hogs
in league with pipes may still be able to do so.)

PR:			53627
Ideas / comments from:	hsu, tjr, dillon@apollo.backplane.com
MFC after:		1 week
2003-07-08 04:02:31 +00:00
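A trivial userland stress pattern of the kind these limits are aimed at; note
that on a default system the per-process descriptor limit is usually hit
first, so raise it (or run several copies) to approach the new pipe limits:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
            int fds[2], n = 0;

            /* Open pipes until some limit pushes back; with this commit the
               kernel caps pipe count and memory instead of panicking. */
            while (pipe(fds) == 0)
                    n++;
            printf("created %d pipes before hitting a limit: %s\n",
                n, strerror(errno));
            return (0);
    }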