Commit Graph

10536 Commits

Author SHA1 Message Date
Doug Rabson
a365ea5fba Fix compilation with LOCKF_DEBUG. 2008-04-16 14:08:12 +00:00
Konstantin Belousov
eab626f110 Move the head of byte-level advisory lock list from the
filesystem-specific vnode data to the struct vnode. Provide the
default implementation for the vop_advlock and vop_advlockasync.
Purge the locks on the vnode reclaim by using the lf_purgelocks().
The default implementation is augmented for the nfs and smbfs.
In the nfs_advlock, push the Giant inside the nfs_dolock.

Before the change, the vop_advlock and vop_advlockasync have taken the
unlocked vnode and dereferenced the fs-private inode data, racing
with the vnode reclamation due to forced unmount. Now, the vop_getattr
under the shared vnode lock is used to obtain the inode size, and
later, in the lf_advlockasync, after locking the vnode interlock, the
VI_DOOMED flag is checked to prevent an operation on the doomed vnode.

The implementation of the lf_purgelocks() is submitted by dfr.

Reported by:	kris
Tested by:	kris, pho
Discussed with:	jeff, dfr
MFC after:	2 weeks
2008-04-16 11:33:32 +00:00
David Xu
d61f3de656 Implement the POSIX function tcgetsid(), which returns the session id.
PR: stand/107561
2008-04-15 08:33:32 +00:00
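
For illustration, a minimal userland use of the new call could look like
the sketch below (hypothetical example, not part of the commit):

	#include <sys/types.h>
	#include <stdio.h>
	#include <termios.h>
	#include <unistd.h>

	int
	main(void)
	{
		pid_t sid;

		/* Session id of the session the terminal on stdin belongs to. */
		sid = tcgetsid(STDIN_FILENO);
		if (sid == -1) {
			perror("tcgetsid");
			return (1);
		}
		printf("session id: %d\n", (int)sid);
		return (0);
	}
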
Marcel Moolenaar
495168ba8d Support and switch to the ULE scheduler:
o  Implement IPI_PREEMPT,
o  Set td_lock for the thread being switched out,
o  For ULE & SMP, loop while td_lock points to blocked_lock for
   the thread being switched in,
o  Enable ULE by default in GENERIC and SKI,
2008-04-15 05:02:42 +00:00
Randall Stewart
cf71e4381a Add a pru_flush routine so a transport can
flush itself during shutdown

MFC after:	1 week
2008-04-14 18:06:04 +00:00
Alan Cox
e384d8a89b Initialize the vm object's flags to include OBJ_NOSPLIT, just like the
vm objects that are used by System V shared memory segments.
2008-04-13 21:08:34 +00:00
Attilio Rao
22dd228d5d Use a "rel" memory barrier for disowning the lock as it comes from an
exclusive locking operation.
2008-04-13 01:21:56 +00:00
Attilio Rao
0b0100db88 struct lock_instance and struct lock_list_entry don't need to be in the
public namespace for WITNESS as they are only used internally so just
move them into the private namespace for the subsystem (with all related
supporting definitions).
2008-04-13 01:20:47 +00:00
Poul-Henning Kamp
8d24f82310 fix printf type confusion on amd64 2008-04-12 21:51:54 +00:00
Poul-Henning Kamp
c9ad6040dd Emit summaries of struct c(alendar)t(ime) <-> struct timespec conversions
under bootverbose.

Struct ct is used for setting/reading real time clocks and I'm about
to Do Things to some of those, so a bit of preemptive debugging is
in order.

Remove a pointless __inline.
2008-04-12 20:35:56 +00:00
Attilio Rao
e5f94314ad - Re-introduce WITNESS support for lockmgr. The only difference from the
  old implementation is that the lockmgr*() functions now accept the
  LK_NOWITNESS flag, which skips order checking for that particular call.
- Remove a useless stub in witness_checkorder() (the above check prevents
  it from ever being reached) and allow witness_upgrade() to accept
  non-try operations too.
2008-04-12 19:57:30 +00:00
Attilio Rao
872b7289fd - Remove a stale comment.
- Add an extra assertion in order to catch malformed requested operations.
2008-04-12 13:56:17 +00:00
Attilio Rao
1859cffaef Add missing stubs for spinlocks cpuset and intrcnt.
Submitted by:	kris
2008-04-12 13:51:18 +00:00
Xin LI
31c50f53da Instead of rolling our own jail number allocation procedure, use
alloc_unr() to do it.

Submitted by:	Ed Schouten <ed 80386 nl>
PR:		kern/122270
MFC after:	1 month
2008-04-11 21:31:15 +00:00
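
For reference, the unr(9) allocator used above works roughly as in this
hedged sketch (jail_unr and the bounds are illustrative, not from the
commit):

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/limits.h>

	static struct unrhdr *jail_unr;

	static int
	jail_id_alloc(void)
	{
		if (jail_unr == NULL)
			/* NULL mutex: the allocator supplies its own lock. */
			jail_unr = new_unrhdr(1, INT_MAX, NULL);
		return (alloc_unr(jail_unr));	/* -1 once the space is full */
	}

	static void
	jail_id_free(int id)
	{
		free_unr(jail_unr, id);		/* number becomes reusable */
	}
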
John Baldwin
03c7442d75 Use kthread_exit() to terminate a taskqueue thread rather than kproc_exit()
now that the taskqueue threads are kthreads rather than kprocs.

Reported by:	kris
2008-04-11 17:35:54 +00:00
Jeff Roberson
9b33b154b5 - Add the interrupt vector number to intr_event_create so MI code can
lookup hard interrupt events by number.  Ignore the irq# for soft intrs.
 - Add support to cpuset for binding hardware interrupts.  This has the
   side effect of binding any ithread associated with the hard interrupt.
   As per restrictions imposed by MD code we can only bind interrupts to
   a single cpu presently.  Interrupts can be 'unbound' by binding them
   to all cpus.

Reviewed by:	jhb
Sponsored by:	Nokia
2008-04-11 03:26:41 +00:00
Pawel Jakub Dawidek
b03d720760 - Use LK_TYPE_MASK where needed. Actually after sys/sys/lockmgr.h:1.69 it is
no longer needed, but for now we still want to be consistent with other
  similar checks in the tree.
- Call ASSERT_VOP_ELOCKED() only when vget() returns 0.

Reviewed by:	jeff
2008-04-09 20:19:55 +00:00
Sam Leffler
6c6eaea6dd Do image loading in a context known to have a root directory:
o create a private task queue thread that sets up root and current
  directories (hooking mountroot event as needed); this is necessary
  because task queue threads are parented from proc0 and it does not
  have a reference to rootvnode (lost when / mounting moved to init)
o bounce image load + unload requests through the private task q so
  we can load images even when the request is made from a thread that
  does not have sufficient context (e.g. task q thread)
o add a check in the task q thread to fail requests before root is
  mounted (just in case)

Reviewed by:	jhb, mlaier, luigi (glance)
MFC after:	1 month
2008-04-09 19:07:48 +00:00
Sam Leffler
00c71fb7c3 o add a mountroot event handler that fires when / is mounted; this information
was lost when root started being mounted by init
o remove SI_SUB_MOUNT_ROOT since it's no longer meaningful

MFC after:	2 weeks
2008-04-08 17:53:33 +00:00
Sam Leffler
175611b668 change taskqueue_start_threads to create threads instead of procs
Reviewed by:	jhb
2008-04-08 17:48:02 +00:00
Konstantin Belousov
48b05c3f82 Implement the linux syscalls
openat, mkdirat, mknodat, fchownat, futimesat, fstatat, unlinkat,
    renameat, linkat, symlinkat, readlinkat, fchmodat, faccessat.

Submitted by:	rdivacky
Sponsored by:	Google Summer of Code 2007
Tested by:	pho
2008-04-08 09:45:49 +00:00
Attilio Rao
e0f62984c1 - Use a different encoding for lockmgr options: encode them by bit
in order to allow per-bit checks on the options flag, in particular
  in the consumer code [1]
- Re-enable the check against TDP_DEADLKTREAT as the anti-waiters
  starvation patch allows exclusive waiters to override new shared
  requests.

[1] Requested by:	pjd, jeff
2008-04-07 14:46:38 +00:00
Don Lewis
8a3724388b vfs_syscalls.c 1.452 mistakenly swapped the behavior of chown() and lchown(). 2008-04-07 00:29:32 +00:00
Attilio Rao
047dd67e96 Optimize lockmgr in order to get rid of the pool mutex interlock, of the
state transitioning flags and of msleep(9) callings.
Use, instead, an algorithm very similar to what sx(9) and rwlock(9)
already do, with direct access to the sleepqueue(9) primitive.

In order to avoid writer starvation, a mechanism very similar to the
one rwlock(9) now uses is implemented, with a corresponding per-thread
shared-lockmgr counter.

This patch also adds 2 new functions to lockmgr KPI: lockmgr_rw() and
lockmgr_args_rw().  These two are like the 2 "normal" versions, but they
both accept a rwlock as interlock.  In order to realize this, the general
lockmgr manager function "__lockmgr_args()" has been implemented through
the generic lock layer. It supports all the blocking primitives, but
currently only these 2 mappers live.

The patch drops the support for WITNESS for the moment, but it will
probably be added back soon. Also, there is a small race in the draining
code which is also present in the current CVS stock implementation: if
some sharers, once they wake up, are on the runqueue, they can contend
for the lock with the exclusive drainer.  This is hard to fix, but the
now-committed code mitigates the issue much better than the (past) CVS
version.  In addition, the KA_HELD and KA_UNHELD assertions have been
made mute assertions because they are dangerous and will no longer be
supported soon.

In order to avoid namespace pollution, stack.h is split into two parts:
one which includes only the "struct stack" definition (_stack.h) and one
defining the KPI.  In this way, the newly added _lockmgr.h can just
include _stack.h.

This commit heavily changes the kernel ABI (the now-committed version
of "struct lock" is a lot smaller than the previous one) and breaks the
KPI with the lockmgr_rw() / lockmgr_args_rw() introduction, so manpages
and __FreeBSD_version will be updated accordingly.

Tested by:      kris, pho, jeff, danger
Reviewed by:    jeff
Sponsored by:   Google, Summer of Code program 2007
2008-04-06 20:08:51 +00:00
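
A rough sketch of the new rwlock-interlocked variant (hypothetical
consumer; the authoritative interface is in sys/lockmgr.h):

	#include <sys/param.h>
	#include <sys/lock.h>
	#include <sys/lockmgr.h>
	#include <sys/rwlock.h>

	static struct lock lk;
	static struct rwlock ilk;

	static void
	example(void)
	{
		lockinit(&lk, PVFS, "explk", 0, 0);
		rw_init(&ilk, "expilk");

		/* Acquire exclusively; LK_INTERLOCK drops the held rwlock. */
		rw_wlock(&ilk);
		lockmgr_rw(&lk, LK_EXCLUSIVE | LK_INTERLOCK, &ilk);
		/* ... */
		lockmgr_rw(&lk, LK_RELEASE, NULL);
	}
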
Jeff Roberson
ce62b59c88 - Correct a major error introduced in the per-cpu timeout commit. Sleep
and wakeup require the same wait channel to function properly.

Found by:	kris
Pointy hat:	me
2008-04-06 11:08:49 +00:00
John Baldwin
8aa9e82e67 Move INTR_FILTER from opt_global.h to its own header. 2008-04-05 20:13:15 +00:00
John Baldwin
1ee1b68792 Add a MI intr_event_handle() routine for the non-INTR_FILTER case. This
allows all the INTR_FILTER #ifdef's to be removed from the MD interrupt
code.
- Rename the intr_event 'eoi', 'disable', and 'enable' hooks to
  'post_filter', 'pre_ithread', and 'post_ithread' to be less x86-centric.
  Also, add a comment describing what the MI code expects them to do.
- On amd64, i386, and powerpc this is effectively a NOP.
- On arm, don't bother masking the interrupt unless the ithread is
  scheduled in the non-INTR_FILTER case to match what INTR_FILTER did.
  Also, don't bother unmasking the interrupt in the post_filter case if
  we never masked it.  The INTR_FILTER case had been doing this by having
  arm_unmask_irq for the post_filter (formerly 'eoi') hook.
- On ia64, stray interrupts are now masked for the non-INTR_FILTER case.
  They were already masked in the INTR_FILTER case.
- On sparc64, use a NULL pre_ithread hook and use intr_enable_eoi() for
  both the 'post_filter' and 'post_ithread' hooks to match what the
  non-INTR_FILTER code did.
- On sun4v, retire the ithread wrapper hack by using an appropriate
  'post_ithread' hook instead (it's what 'post_ithread'/'enable' was
  designed to do even in 5.x).

Glanced at by:	piso
Reviewed by:	marius
Requested by:	marius [1], [5]
Tested on:	amd64, i386, arm, sparc64
2008-04-05 19:58:30 +00:00
Alan Cox
7630c26507 Reintroduce UMA_SLAB_KMAP; however, change its spelling to
UMA_SLAB_KERNEL for consistency with its sibling UMA_SLAB_KMEM.
(UMA_SLAB_KMAP met its original demise in revision 1.30 of
vm/uma_core.c.)  UMA_SLAB_KERNEL is now required by the jumbo frame
allocators.  Without it, UMA cannot correctly return pages from the
jumbo frame zones to the VM system because it resets the pages' object
field to NULL instead of the kernel object.  In more detail, the jumbo
frame zones are created with the option UMA_ZONE_REFCNT.  This causes
UMA to overwrite the pages' object field with the address of the slab.
However, when UMA wants to release these pages, it doesn't know how to
restore the object field, so it sets it to NULL.  This change teaches
UMA how to reset the object field to the kernel object.

Crashes reported by: kris
Fix tested by: kris
Fix discussed with: jeff
MFC after: 6 weeks
2008-04-04 18:41:12 +00:00
Jeff Roberson
00ca09449d - Add sysctls at debug.rwlock to control the behavior of the speculative
spinning when readers hold a lock.  This spinning is speculative because,
   unlike the write case, we can not test whether the owners are running.
 - Add speculative read spinning for readers who are blocked by pending
   writers while a read lock is still held.  This allows the thread to
   spin until the write lock succeeds after which it may spin until the
   writer has released the lock.  This prevents excessive context switches
   when readers and writers both hold the lock for brief periods.

Sponsored by:	Nokia
2008-04-04 10:00:46 +00:00
Jeff Roberson
3bc8c68d9f - Add a Nokia copyright to cpuset to reflect their generous
contribution to this work.
2008-04-04 01:22:04 +00:00
Jeff Roberson
0502fe2e43 - Allow static_boost to specify no boost with '0', traditional kernel
fixed pri boost with '1' or any priority less than the current thread's
   priority with a value greater than two.  Default the boost to
   PRI_MIN_TIMESHARE to prevent regular user-space threads from starving
   threads in the kernel.  This prevents these user-threads from also
   being scheduled as if they are high fixed-priority kernel threads.
 - Restore the setting of lowpri in tdq_choose().  It has to be either here
   or in sched_switch().  I accidentally removed it from both places.

Tested by:	kris
2008-04-04 01:16:18 +00:00
Jeff Roberson
03d17db7d5 - Don't check for the ITHD pri class in tdq_load_add and rem. 4BSD doesn't
do this either.  Simply check P_NOLOAD.  It'd be nice if this was
   in a thread flag so we didn't have an extra cache miss every time we
   add and remove a thread from the run-queue.
2008-04-04 01:04:43 +00:00
Jeff Roberson
e4b1aa6210 - Fix a mis-merge that crept in during the softclock changes.
Spotted by:	jhb
2008-04-04 01:03:23 +00:00
David Xu
44253336b6 Let umtxq_busy() spin only on MP machines.  Rename the function to
do_rwlock_unlock to be consistent with the others.
2008-04-03 11:49:20 +00:00
Jeff Roberson
e8245292a7 - Convert two timeout users to the new callout_reset_curcpu() api.
Sponsored by:	Nokia
2008-04-02 11:21:42 +00:00
Jeff Roberson
8d809d5061 Implement per-cpu callout threads, wheels, and locks.
- Move callout thread creation from kern_intr.c to kern_timeout.c
 - Call callout_tick() on every processor via hardclock_cpu() rather than
   inspecting callout internal details in kern_clock.c.
 - Remove callout implementation details from callout.h
 - Package up all of the global variables into a per-cpu callout structure.
 - Start one thread per-cpu.  Threads are not strictly bound.  They prefer
   to execute on the native cpu but may migrate temporarily if interrupts
   are starving callout processing.
 - Run all callouts by default in the thread for cpu0 to maintain current
   ordering and concurrency guarantees.  Many consumers may not properly
   handle concurrent execution.
 - The new callout_reset_on() api allows specifying a particular cpu to
   execute the callout on.  This may migrate a callout to a new cpu.
   callout_reset() schedules on the last assigned cpu while
   callout_reset_curcpu() schedules on the current cpu.

Reviewed by:	phk
Sponsored by:	Nokia
2008-04-02 11:20:30 +00:00
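
A hedged sketch of the new KPI (tick_co/tick_fn are hypothetical):

	#include <sys/param.h>
	#include <sys/kernel.h>
	#include <sys/callout.h>

	static struct callout tick_co;

	static void
	tick_fn(void *arg)
	{
		/* Re-arm on whichever cpu we are executing on now. */
		callout_reset_curcpu(&tick_co, hz, tick_fn, arg);
	}

	static void
	tick_start(void)
	{
		callout_init(&tick_co, CALLOUT_MPSAFE);
		/* Fire in one second on cpu 1; may migrate the callout there. */
		callout_reset_on(&tick_co, hz, tick_fn, NULL, 1);
	}
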
Konstantin Belousov
35b450291a Add two missed chunks from the rev. 1.210, for the giant_read() and
giant_ioctl().

PR:	kern/122287
MFC after:	3 days
2008-04-02 11:11:58 +00:00
Jeff Roberson
1fd9b6a577 - Destroy the bo mtx when the vnode is destroyed. 2008-04-02 10:40:03 +00:00
David Xu
fadd84c58f Fix compiling problem for amd64. 2008-04-02 05:54:41 +00:00
David Xu
11b1023b7d Er, don't restart a timeout version. 2008-04-02 04:26:59 +00:00
David Xu
1a30511c61 Introduce a kernel-based userland rwlock.  Each umtx chain now has two
lists, one for readers and one for writers; other types of synchronization
objects just use the first list.

Asked by: jeff
2008-04-02 04:08:37 +00:00
Attilio Rao
b31a149bbb Add rw_try_rlock() and rw_try_wlock() to rwlocks.
These functions try the specified operation (rlocking or wlocking) and
return true if the operation completes, false otherwise.

The KPI is enriched by this commit, so __FreeBSD_version bumping and
manpage updating will happen soon.

Requested by:	jeff, kris
2008-04-01 20:31:55 +00:00
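
A minimal sketch of the new primitives (hypothetical fast path; the lock
is assumed to have been rw_init()'ed at attach time):

	#include <sys/param.h>
	#include <sys/lock.h>
	#include <sys/rwlock.h>

	static struct rwlock ex_lock;

	static void
	ex_poll(void)
	{
		/* Take the write lock only if it is immediately available. */
		if (!rw_try_wlock(&ex_lock))
			return;			/* contended; retry later */
		/* ... critical section ... */
		rw_wunlock(&ex_lock);
	}
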
Doug Rabson
60cdfde09f Don't try to use an SX lock while holding the vnode interlock.
Sponsored by:	Isilon Systems
2008-04-01 16:07:01 +00:00
Konstantin Belousov
f2296b585e Regen 2008-03-31 12:12:27 +00:00
Konstantin Belousov
7104518b07 Add the openat(), fexecve() and other *at() syscalls to the table.
Based on the submission by rdivacky,
	sponsored by Google Summer of Code 2007
Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 12:06:55 +00:00
Konstantin Belousov
632dbc19e2 Implement the fexecve(2) syscall.
Based on the submission by rdivacky,
	sponsored by Google Summer of Code 2007
Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 12:05:52 +00:00
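
Combined with the O_EXEC open mode added below, a hypothetical userland
use might look like this (sketch, not from the commit):

	#include <fcntl.h>
	#include <unistd.h>

	static int
	run_ls(char *const argv[], char *const envp[])
	{
		int fd;

		/* O_EXEC: open for execution only, no read/write access. */
		fd = open("/bin/ls", O_EXEC);
		if (fd == -1)
			return (-1);
		return (fexecve(fd, argv, envp));	/* returns only on error */
	}
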
Konstantin Belousov
e4193f25cb Implement the
openat(2), faccessat(2), fchmodat(2), fchownat(2), fstatat(2),
	futimesat(2), linkat(2), mkdirat(2), mkfifoat(2), mknodat(2),
	readlinkat(2), renameat(2), symlinkat(2)
syscalls.

Based on the submission by rdivacky,
	sponsored by Google Summer of Code 2007
Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 12:04:20 +00:00
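
A hypothetical userland sketch of the fd-relative style (paths and the
descriptor are illustrative):

	#include <sys/stat.h>
	#include <fcntl.h>

	static int
	make_log(int dirfd)
	{
		/* Resolve "log/out" relative to dirfd instead of the cwd. */
		int fd = openat(dirfd, "log/out", O_WRONLY | O_CREAT, 0644);

		/* AT_FDCWD makes an *at() call behave like its classic form. */
		(void)mkdirat(AT_FDCWD, "tmpdir", 0755);
		return (fd);
	}
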
Konstantin Belousov
57b4252e45 Add the support for the AT_FDCWD and fd-relative name lookups to the
namei(9).

Based on the submission by rdivacky,
	sponsored by Google Summer of Code 2007
Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 12:01:21 +00:00
Konstantin Belousov
e314f69fff Add the support for the O_EXEC open(2) mode, as specified by the
POSIX Extended API Set Part 2 extension specification.

Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 11:57:18 +00:00
Konstantin Belousov
0a3af16a75 Add the utility function vn_commname() to retrieve the command name
from the vfs namecache, when available.

Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 11:53:03 +00:00
Jeff Roberson
a03ee0000e - Consistently return EDEADLK when presented with a new set that is
incompatible with existing bindings.
 - Try to copyout the setid in cpuset() before migrating the proc to the
   setid in case the user has supplied a bad buffer.
 - Rename cpuset_root() and cpuset_base() to cpuset_ref{root,base} to
   be more descriptive and free cpuset_root to be used as a different
   type of symbol.
 - Make cpuset_root the cpuset_t set of all cpus in the system.  This
   should contain the same bitmask as all_cpus presently.
 - Add a CPU_CMP() macro to compare two sets.
2008-03-30 11:31:14 +00:00
Jeff Roberson
5634d48667 - Don't allow calls to vn_lock() with no lock type requested. Callers
which simply want a reference should use vref().  Callers which want
   to check validity need to hold a lock while performing any action
   based on that validity.  vn_lock() would always release the interlock
   before returning making any action synchronous with the validity check
   impossible.
2008-03-29 23:36:26 +00:00
Jeff Roberson
069c6953a0 - Use vget() to lock the vnode rather than refing without a lock and
locking in separate steps.
2008-03-29 23:30:40 +00:00
Attilio Rao
71072af500 b_waiters cannot be adequately protected by the interlock because it is
dropped after the call to lockmgr() so just revert this approach using
something similar to the previous one:
BUF_LOCKWAITERS() just checks if there are waiters (not the actual number
of them) and it is based on the newly introduced lockmgr_waiters() which
returns whether the lockmgr has waiters or not. The name was chosen to
differ from the old lockwaiters() in order to avoid confusing the two.

This commit enriches the KPI, so a __FreeBSD_version bump and manpage
updates will happen soon.
'struct buf' also changes, so the kernel ABI is disturbed.

Bug found by:	jeff
Approved by:	jeff, kib
2008-03-28 12:30:12 +00:00
John Birrell
fc70c0bdd8 Regen after makesyscalls.sh change. 2008-03-27 01:55:06 +00:00
John Birrell
e994ea8e55 Generate another function for the DTrace syscall provider to specify
the syscall argument types.

This code is only compiled into the systrace kernel module and has no
effect otherwise.
2008-03-27 01:53:44 +00:00
Poul-Henning Kamp
e465985885 The "free-lance" timer in the i8254 is only used for the speaker
these days, so de-generalize the acquire_timer/release_timer api
to just deal with speakers.

The new (optional) MD functions are:
	timer_spkr_acquire()
	timer_spkr_release()
and
	timer_spkr_setfreq()

the last of which configures the timer to generate a tone of a given
frequency, in Hz instead of 1/1193182ths of a second.

Drop timer2 on pc98 entirely; it is not used anywhere at all.

Move sysbeep() to kern/tty_cons.c and use the timer_spkr*() if
they exist, and do nothing otherwise.

Remove prototypes and empty acquire-/release-timer() and sysbeep()
functions from the non-beeping archs.

This eliminates the need for the speaker driver to know about the
i8254 frequency at all.  In theory this makes the speaker driver MI,
contingent on the timer_spkr_*() functions existing but the driver
does not know this yet and still attaches to the ISA bus.

Syscons is more tricky: in one function, sc_tone(), it knows the hz
and things are just fine.

In the other function, sc_bell(), it seems to get the period from
the KDMKTONE ioctl in terms of 1/1193182ths of a second, so we hardcode
the 1193182 and leave it at that.  It's probably not important.

Change a few other sysbeep() uses which obviously knew that the
argument was in terms of i8254 frequency, and leave alone those
that look like people thought sysbeep() took frequency in hertz.

This eliminates the knowledge of i8254_freq from all but the actual
clock.c code and the prof_machdep.c on amd64 and i386, where I think
it would be smart to ask for help from the timecounters anyway [TBD].
2008-03-26 20:09:21 +00:00
Doug Rabson
a7ac0db6cb Regen. 2008-03-26 15:24:02 +00:00
Doug Rabson
dfdcada31e Add the new kernel-mode NFS Lock Manager. To use it instead of the
user-mode lock manager, build a kernel with the NFSLOCKD option and
add '-k' to 'rpc_lockd_flags' in rc.conf.

Highlights include:

* Thread-safe kernel RPC client - many threads can use the same RPC
  client handle safely with replies being de-multiplexed at the socket
  upcall (typically driven directly by the NIC interrupt) and handed
  off to whichever thread matches the reply. For UDP sockets, many RPC
  clients can share the same socket. This allows the use of a single
  privileged UDP port number to talk to an arbitrary number of remote
  hosts.

* Single-threaded kernel RPC server. Adding support for multi-threaded
  server would be relatively straightforward and would follow
  approximately the Solaris KPI. A single thread should be sufficient
  for the NLM since it should rarely block in normal operation.

* Kernel mode NLM server supporting cancel requests and granted
  callbacks. I've tested the NLM server reasonably extensively - it
  passes both my own tests and the NFS Connectathon locking tests
  running on Solaris, Mac OS X and Ubuntu Linux.

* Userland NLM client supported. While the NLM server doesn't have
  support for the local NFS client's locking needs, it does have to
  field async replies and granted callbacks from remote NLMs that the
  local client has contacted. We relay these replies to the userland
  rpc.lockd over a local domain RPC socket.

* Robust deadlock detection for the local lock manager. In particular
  it will detect deadlocks caused by a lock request that covers more
  than one blocking request. As required by the NLM protocol, all
  deadlock detection happens synchronously - a user is guaranteed that
  if a lock request isn't rejected immediately, the lock will
  eventually be granted. The old system allowed for a 'deferred
  deadlock' condition where a blocked lock request could wake up and
  find that some other deadlock-causing lock owner had beaten them to
  the lock.

* Since both local and remote locks are managed by the same kernel
  locking code, local and remote processes can safely use file locks
  for mutual exclusion. Local processes have no fairness advantage
  compared to remote processes when contending to lock a region that
  has just been unlocked - the local lock manager enforces a strict
  first-come first-served model for both local and remote lockers.

Sponsored by:	Isilon Systems
PR:		95247 107555 115524 116679
MFC after:	2 weeks
2008-03-26 15:23:12 +00:00
Scott Long
478cfc7300 Implement taskqueue_block() and taskqueue_unblock(). These functions allow
the owner of a queue to block and unblock execution of the tasks in the
queue while allowing tasks to continue to be added to the queue.
Combining this with taskqueue_drain() allows a queue to be safely
disabled.  The unblock
function may run (or schedule to run) the queue when it is called, just as
calling taskqueue_enqueue() would.

Reviewed by: jhb, sam
2008-03-25 22:38:45 +00:00
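
A hedged sketch of the disable pattern described above (the softc layout
is hypothetical):

	#include <sys/param.h>
	#include <sys/taskqueue.h>

	struct mydrv_softc {
		struct taskqueue *sc_tq;
		struct task	sc_task;
	};

	static void
	mydrv_quiesce(struct mydrv_softc *sc)
	{
		taskqueue_block(sc->sc_tq);	/* new tasks queue, don't run */
		taskqueue_drain(sc->sc_tq, &sc->sc_task); /* let a runner finish */
		/* ... the task is quiesced; safe to reconfigure ... */
		taskqueue_unblock(sc->sc_tq);	/* may run pending tasks now */
	}
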
Ruslan Ermilov
ea26d58729 Replaced the misleading uses of a historical artefact M_TRYWAIT with M_WAIT.
Removed dead code that assumed that M_TRYWAIT can return NULL; it's not true
since the advent of MBUMA.

Reviewed by:	arch

There are ongoing disputes as to whether we want to switch to directly using
UMA flags M_WAITOK/M_NOWAIT for mbuf(9) allocation.
2008-03-25 09:39:02 +00:00
Ruslan Ermilov
b2798e2573 Regen after changing prototypes of cpuset_{get,set}affinity(). 2008-03-25 09:14:17 +00:00
Ruslan Ermilov
7f64829a5e Fixed type of the fourth argument of cpuset_{get,set}affinity(2) to be size_t.
Prodded by:	davidxu
2008-03-25 09:11:53 +00:00
Jeff Roberson
0ee6cecc9d - Greatly simplify vget() by removing the guarantee that any new
references to a vnode with VI_OWEINACT set will force the vinactive()
   call.  The kernel makes no guarantees about which reference was the
   last to close a file or when the actual inactive processing will
   happen.  The previous code was designed to preserve existing semantics
   in the face of shared locks, however, this was unnecessary.

Discussed with:	mckusick
2008-03-24 04:22:58 +00:00
Jeff Roberson
804e60d4cf - Don't acquire the vnode interlock in _vn_lock() unless no lock type
is requested.  Handle this case specially before the while loop.
 - Use the held vnode lock to check for VI_DOOMED.  The vnode lock and
   interlock must both be held to set VI_DOOMED so either one held, even
   shared, is sufficient to check it.

No objection by:	kib
2008-03-24 04:17:35 +00:00
Konstantin Belousov
1be222e9df Yield the cpu in the kernel while iterating the list of the
vnodes belonging to the mountpoint. Also, yield when in the
softdep_process_worklist() even when we are not going to sleep due to
buffer drain.

It is believed that the ULE fixed the problem [1], but the yielding
seems to be needed at least for the 4BSD case.

Discussed:	on stable@, with bde
Reviewed by:	tegge, jeff [1]
MFC after:	2 weeks
2008-03-23 13:45:24 +00:00
David Xu
34d05d83f6 Remove commented out code, thread suspension is done in thread library. 2008-03-23 02:03:06 +00:00
Jeff Roberson
e6b2545b3b - Only return 1 from sync_vnode() in cases where the vnode is still
at the head of the sync list.  This prevents sched_sync() from
   re-queueing a vnode which may have been freed already.

Discussed with:	kib
2008-03-23 01:44:28 +00:00
Jeff Roberson
f6a8cecfc6 - Pass BO_MTX(bo) to lockmgr in vtruncbuf, we don't own the vnode
interlock here anymore.

Reported by:	kris
2008-03-23 01:42:19 +00:00
Poul-Henning Kamp
4218a7310b In abort2(2): Accept a NULL arg pointer if nargs == 0 2008-03-22 16:32:52 +00:00
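
So a minimal call can now look like this (hypothetical example):

	#include <stdlib.h>

	static void
	die(void)
	{
		/* With nargs == 0 the args pointer may now be NULL. */
		abort2("state corrupt", 0, NULL);
	}
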
Jeff Roberson
698b1a6643 - Complete part of the unfinished bufobj work by consistently using
BO_LOCK/UNLOCK/MTX when manipulating the bufobj.
 - Create a new lock in the bufobj to lock bufobj fields independently.
   This leaves the vnode interlock as an 'identity' lock while the bufobj
   is an io lock.  The bufobj lock is ordered before the vnode interlock
   and also before the mnt ilock.
 - Exploit this new lock order to simplify softdep_check_suspend().
 - A few sync related functions are marked with a new XXX to note that
   we may not properly interlock against a non-zero bv_cnt when
   attempting to sync all vnodes on a mountlist.  I do not believe this
   race is important.  If I'm wrong this will make these locations easier
   to find.

Reviewed by:	kib (earlier diff)
Tested by:	kris, pho (earlier diff)
2008-03-22 09:15:16 +00:00
Alfred Perlstein
435cdf88ea Fix a race where timeout/untimeout could cause crashes for Giant locked
code.

The bug:

There exists a race condition for timeout/untimeout(9) due to the
way that the softclock thread dequeues timeouts.

The softclock thread sets the c_func and c_arg of the callout to
NULL while holding the callout lock but not Giant.  It then drops
the callout lock and acquires Giant.

It is at this point where untimeout(9) on another cpu/thread could
be called.

Since c_arg and c_func are cleared, untimeout(9) does not touch the
callout and returns as if the callout is canceled.

The softclock then tries to acquire Giant and likely blocks due to
the other cpu/thread holding it.

The other cpu/thread then likely deallocates the backing store that
c_arg points to and finishes working and hence drops Giant.

Softclock resumes, acquires Giant, and calls the function with
the now-freed c_arg, and we have corruption/crash.

The fix:

We need to track curr_callout even for timeout(9) (LOCAL_ALLOC)
callouts.  We need to free the callout after the softclock processes
it to deal with the race here.

Obtained from: Juniper Networks, iedowse
Reviewed by: jhb, iedowse
MFC After: 2 weeks.
2008-03-22 07:29:45 +00:00
Konstantin Belousov
e7ffdf423a Reduce contention on the vnode interlock by not acquiring the BO_LOCK
around the check for the BV_BKGRDINPROG in the brelse() and bqrelse().
See the comment for the explanation why it is safe.

Tested by:	pho
Submitted by:	jeff
2008-03-21 12:38:44 +00:00
Jeff Roberson
0169d126a6 - Reduce contention on the global bdonelock and bpinlock by using
a pool mutex to protect these sleep/wakeup/counter races.  This
   still is preferable to bloating each bio with a mtx.
2008-03-21 10:00:05 +00:00
Jeff Roberson
b7edba7704 - Add a new td flag TDF_NEEDSUSPCHK that is set whenever a thread needs
to enter thread_suspend_check().
 - Set TDF_ASTPENDING along with TDF_NEEDSUSPCHK so we can move the
   thread_suspend_check() to ast() rather than userret().
 - Check TDF_NEEDSUSPCHK in the sleepq_catch_signals() optimization so
   that we don't miss a suspend request.  If this is set use the
   expensive signal path.
- Set NEEDSUSPCHK when creating a new thread in thr in case the
   creating thread is due to be suspended as well but has not been yet.

Reviewed by:	davidxu (Authored original patch)
2008-03-21 08:23:25 +00:00
John Baldwin
dcc8106854 Implement a BUS_BIND_INTR() method in the bus interface to bind an IRQ
resource to a CPU.  The default method is to pass the request up to the
parent similar to BUS_CONFIG_INTR() so that all busses don't have to
explicitly implement bus_bind_intr.  A bus_bind_intr(9) wrapper routine
similar to bus_setup/teardown_intr() is added for device drivers to use.
Unbinding an interrupt is done by binding it to NOCPU.  The IRQ resource
must be allocated, but it can happen in any order with respect to
bus_setup_intr().  Currently it is only supported on amd64 and i386 via
nexus(4) methods that simply call the intr_bind() routine.

Tested by:	gallatin
2008-03-20 21:24:32 +00:00
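
A hedged driver-side sketch (my_intr and the softc members are
hypothetical):

	#include <sys/param.h>
	#include <sys/bus.h>

	struct mydrv_softc {
		struct resource	*irq_res;
		void		*intr_cookie;
	};

	static void
	my_intr(void *arg)
	{
		/* ... handle the interrupt ... */
	}

	static int
	mydrv_bind(device_t dev, struct mydrv_softc *sc)
	{
		int error;

		error = bus_setup_intr(dev, sc->irq_res,
		    INTR_TYPE_NET | INTR_MPSAFE, NULL, my_intr, sc,
		    &sc->intr_cookie);
		if (error != 0)
			return (error);
		/* Bind the IRQ to cpu 2; bind to NOCPU later to unbind. */
		return (bus_bind_intr(dev, sc->irq_res, 2));
	}
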
Konstantin Belousov
69aa768aef Fix the leak of the vmspace on the fork when the process limits
are exceeded.

Pointy hat to:	me
MFC after:	3 days
2008-03-20 15:24:49 +00:00
Jeff Roberson
9727e63745 - Restore runq to manipulating threads directly by putting runq links and
rqindex back in struct thread.
 - Compile kern_switch.c independently again and stop #include'ing it from
   schedulers.
 - Remove the ts_thread backpointers and convert most code to go from
   struct thread to struct td_sched.
 - Cleanup the ts_flags #define garbage that was causing us to sometimes
   do things that expanded to td->td_sched->ts_thread->td_flags in 4BSD.
 - Export the kern.sched sysctl node in sysctl.h
2008-03-20 05:51:16 +00:00
Jeff Roberson
52e95411f8 - Remove the unused and redundant sched_newproc() function.
- Remove the unused and redundant sched_newthread() which peeks into scheduler
   private structures.
2008-03-20 03:09:15 +00:00
Jeff Roberson
79813875ab - There is no sense in calling sched_newthread() at thread_init() and
thread_fini().  The schedulers initialize themselves properly during
   sched_fork_thread() anyhow.  fini is only called when we're returning
   the memory to the allocator which surely doesn't care what state the
   memory is in.
2008-03-20 03:07:57 +00:00
Jeff Roberson
8b16c208e6 - ULE and 4BSD share only one line of code from sched_newthread() so implement
the required pieces in sched_fork_thread().  The td_sched pointer is already
   set up by thread_init anyway.
2008-03-20 03:06:33 +00:00
Jeff Roberson
0ac213ef80 - Don't call the empty sched_newproc() function. sched_newproc() already
existed as sched_fork(), which is a non-empty function in both schedulers.
2008-03-20 03:05:17 +00:00
Jeff Roberson
a90f3f2547 - Move maybe_preempt() from kern_switch.c to sched_4bsd.c.  This function
is only used by 4BSD.
 - Create a new runq_choose_fuzz() function rather than polluting runq_choose()
   with 4BSD specific code.
 - Move the fuzz sysctl into sched_4bsd.c
 - Remove some dead code from kern_switch.c
2008-03-20 02:14:02 +00:00
Jeff Roberson
a564bfc7fa - Directly include opt_sched.h in sched_4bsd. 2008-03-20 01:32:48 +00:00
Maxim Sobolev
073d8ba485 Revert previous change - it appears that the limit I was hitting was a
maxsockets limit, not maxfiles limit. The question remains why those
limits are handled differently (with error code for maxfiles but with
sleep for maxsockets), but those would be addressed in a separate commit
if necessary.

Requested by:   rwatson, jeff
2008-03-19 09:58:25 +00:00
Pawel Jakub Dawidek
4682cd0b7d Remove an extra uihold() call that accidentally sneaked in during perforce
change @125544.
2008-03-19 07:52:07 +00:00
Jeff Roberson
6d55b3ec9c - Remove some dead code and comments related to KSE.
- Don't set tdq_lowpri on every switch, it should be precisely maintained now.
 - Add some comments to sched_thread_priority().
2008-03-19 07:36:37 +00:00
Jeff Roberson
241fbd3d13 - At the top of sleepq_catch_signals() lock the thread and check TDF_NEEDSIGCHK
before doing the very expensive cursig() and related locking.  NEEDSIGCHK
   is updated whenever our signal mask changes or when a signal is delivered and
   should be sufficient to avoid the more expensive tests.  This eliminates
   another source of PROC_LOCK contention in multithreaded programs.
2008-03-19 07:35:14 +00:00
Jeff Roberson
bd4e153568 - Remove stale comment.
- In the last revision the code was changed to use maxfilesperproc rather than
   the per-process file limit to restrict the size of the poll array.  This
   eliminates a significant source of process lock contention in multithreaded
   programs and is cheaper.  This had been committed with the wrong batch of
   changes.
2008-03-19 07:33:16 +00:00
Jeff Roberson
afc5854dbc - Add a facility similar to LOCK_PROFILING under SLEEPQUEUE_PROFILING. Keep
a simple (wmesg, count) tuple in a hash to keep track of how many times
   we sleep at each wait message.  We hash on message and not channel.  No
   line number information is given as typically wait messages are not used in
   more than one place.  Identical strings defined at different addresses will
   show up with separate counters.
 - Use debug.sleepq.enable to enable, .reset to reset, and .stats to dump stats.
 - Do an unsynchronized check in sleepq_switch() prior to switching before
   calling sleepq_profile() which uses a global lock to synchronize the hash.
   Only sleeps which actually cause a context switch are counted.
2008-03-19 07:22:07 +00:00
Jeff Roberson
fbd762f197 - Fix the last of the threading bugs that were introduced as far back as
1.38 in 2001.  Break out of the FOREACH_THREAD_IN_PROC loop when we've
   discovered a new proc in the chain.
 - Increment i and check for maxlockdepth once per matching process not
   once per thread.  This didn't properly terminate the loop before.
 - Fix a bug which has existed potentially since rev 1.1.  waitblock->lf_next
   can be NULL when a thread has been woken up but not yet scheduled.  Check
   for this condition rather than blindly dereferencing.

Found by:	libMicro
2008-03-19 07:13:24 +00:00
Jeff Roberson
45aea8de6e - Restore the NULL check for td_cpuset. This can happen if a partially
constructed thread was torn down as is the case when we fail to allocate
   a kernel stack.
2008-03-19 06:20:21 +00:00
Jeff Roberson
374ae2a393 - Relax requirements for p_numthreads, p_threads, p_swtick, and p_nice from
requiring the per-process spinlock to only requiring the process lock.
 - Reflect these changes in the proc.h documentation and consumers throughout
   the kernel.  This is a substantial reduction in locking cost for these
   fields and was made possible by recent changes to threading support.
2008-03-19 06:19:01 +00:00
John Baldwin
6d2d1c044f Simplify the interrupt code a bit:
- Always include the ie_disable and ie_eoi methods in 'struct intr_event'
  and collapse down to one intr_event_create() routine.  The disable and
  eoi hooks simply aren't used currently in the !INTR_FILTER case.
- Expand 'disab' to 'disable' in a few places.
- Use function casts for arm and i386:intr_eoi_src() instead of wrapper
  routines to trim one extra indirection.

Compiled on:	{arm,amd64,i386,ia64,ppc,sparc64} x {FILTER, !FILTER}
Tested on:	{amd64,i386} x {FILTER, !FILTER}
2008-03-17 22:42:01 +00:00
Konstantin Belousov
aeeb4202df Fix two races in the handling of the d_gianttrick for the D_NEEDGIANT
drivers.

In the giant_XXX wrappers for the device methods of the D_NEEDGIANT
drivers, do not dereference the cdev->si_devsw. It is racing with
the destroy_devl() clearing of the si_devsw. Instead, use the
dev_refthread() and return ENXIO for the destroyed device. [1]

The check for the D_INIT in the prep_cdevsw() was not synchronized with
the call of the fini_cdevsw() in destroy_devl(), that under rapid device
creation/destruction may result in the use of uninitialized cdevsw [2].
Change the protocol for the prep_cdevsw(), requiring it to be called
under dev_mtx, where the check for D_INIT is done.

Do not free the memory allocated for the gianttrick cdevsw while holding
the dev_mtx, put it into the free list to be freed later. Reuse the
d_gianttrick pointer to keep the size and layout of the struct cdevsw
(requested by phk). Free the memory in the dev_unlock_and_free(), and do
all the free after the dev_mtx is dropped (suggested by jhb).

Reported by:	bsdimp + many [1], pho [2]
Reviewed by:	phk, jhb
Tested by:	pho
MFC after:	1 week
2008-03-17 13:17:10 +00:00
Pawel Jakub Dawidek
4582cb68b1 - There is no more "uidinfo struct" mutex.
- The "uidinfo hash" lock is now a rwlock.

Reminded by:	kib
2008-03-17 11:48:40 +00:00
Pawel Jakub Dawidek
709446e782 Whitespace cleanups. 2008-03-16 21:32:20 +00:00
Pawel Jakub Dawidek
1b072fbcab - Use wait-free method to manage ui_sbsize and ui_proccnt fields in the
uidinfo structure. This entirely removes contention observed on the
  ui_mtxp mutex (as it is now gone).
- Convert the uihashtbl_mtx mutex to a rwlock, as most of the time we just
  need to read-lock it.

Reviewed by:	jhb, jeff, kris & others
Tested by:	kris
2008-03-16 21:29:02 +00:00
Robert Watson
45fa2c8a87 Consistently use ANSI C declarations for all functions in kern_synch.c. 2008-03-16 18:59:21 +00:00
Pawel Jakub Dawidek
e056770745 Style fixes. 2008-03-16 18:26:59 +00:00
Pawel Jakub Dawidek
67e83b07c6 Fix information leak. We can find PIDs of running processes from within
a jail, etc. by simply calling setpriority(PRIO_PROCESS, <PID>, 0) and
checking the return value: 0 means that the process exists and -1 that
it doesn't exist.

Reviewed by:	rwatson
MFC after:	1 week
2008-03-16 17:55:06 +00:00
Robert Watson
237fdd787b In keeping with style(9)'s recommendations on macros, use a ';'
after each SYSINIT() macro invocation.  This makes a number of
lightweight C parsers much happier with the FreeBSD kernel
source, including cflow's prcc and lxr.

MFC after:	1 month
Discussed with:	imp, rink
2008-03-16 10:58:09 +00:00
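
I.e., invocations now look like this (hypothetical subsystem/function
names):

	#include <sys/param.h>
	#include <sys/kernel.h>

	static void
	mydrv_init(void *arg __unused)
	{
		/* ... one-time initialization ... */
	}
	/* Note the trailing ';' after the macro invocation. */
	SYSINIT(mydrv_init_id, SI_SUB_DRIVERS, SI_ORDER_ANY, mydrv_init, NULL);
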
Maxim Sobolev
c9370ff4d0 Properly set size of the file_zone to match kern.maxfiles parameter.
Otherwise the parameter is a no-op, since the zone by default limits the
number of descriptors to some 12K entries.  Attempts to allocate more end
up sleeping on zonelimit.

MFC after:	2 weeks
2008-03-16 06:21:30 +00:00
Ruslan Ermilov
1f49b573e1 Fix panic on e.g. "kldload /dev/null".
PR:		kern/121427
Reviewed by:	sem
MFC after:	3 days
2008-03-15 17:40:18 +00:00
John Baldwin
eaf86d1678 Add preliminary support for binding interrupts to CPUs:
- Add a new intr_event method ie_assign_cpu() that is invoked when the MI
  code wishes to bind an interrupt source to an individual CPU.  The MD
  code may reject the binding with an error.  If an assign_cpu function
  is not provided, then the kernel assumes the platform does not support
  binding interrupts to CPUs and fails all requests to do so.
- Bind ithreads to CPUs on their next execution loop once an interrupt
  event is bound to a CPU.  Only shared ithreads are bound.  We currently
  leave private ithreads for drivers using filters + ithreads in the
  INTR_FILTER case unbound.
- A new intr_event_bind() routine is used to bind an interrupt event to
  a CPU.
- Implement binding on amd64 and i386 by way of the existing pic_assign_cpu
  PIC method.
- For x86, provide a 'intr_bind(IRQ, cpu)' wrapper routine that looks up
  an interrupt source and binds its interrupt event to the specified CPU.
  MI code can currently (ab)use this by doing:

	intr_bind(rman_get_start(irq_res), cpu);

  however, I plan to add a truly MI interface (probably a bus_bind_intr(9))
  where the implementation in the x86 nexus(4) driver would end up calling
  intr_bind() internally.

Requested by:	kmacy, gallatin, jeff
Tested on:	{amd64, i386} x {regular, INTR_FILTER}
2008-03-14 19:41:48 +00:00
John Baldwin
d628fbfa98 Make the function prototype for cpu_search() match the declaration so that
this still compiles with gcc3.
2008-03-14 15:22:38 +00:00
Jeff Roberson
f4d77e9e54 PR 117603
- Close a sleepqueue signal race by interlocking with the per-process
   spinlock.  This was mistakenly omitted from the thread_lock patch and
   has been a race since.

MFC After:	1 week
PR:		bin/117603
Reported by:	Danny Braniss <danny@cs.huji.ac.il>
2008-03-13 00:46:12 +00:00
Jeff Roberson
6617724c5f Remove kernel support for M:N threading.
While the KSE project was quite successful in bringing threading to
FreeBSD, the M:N approach taken by the kse library was never developed
to its full potential.  Backwards compatibility will be provided via
libmap.conf for dynamically linked binaries and static binaries will
be broken.
2008-03-12 10:12:01 +00:00
Jeff Roberson
c5aa6b581d - Pass the priority argument from *sleep() into sleepq and down into
sched_sleep().  This removes extra thread_lock() acquisition and
   allows the scheduler to decide what to do with the static boost.
 - Change the priority arguments to cv_* to match sleepq/msleep/etc.
   where 0 means no priority change.  Catch -1 in cv_broadcastpri() and
   convert it to 0 for now.
 - Set a flag when sleeping in a way that is compatible with swapping
   since direct priority comparisons are meaningless now.
 - Add a sysctl to ule, kern.sched.static_boost, that defaults to on which
   controls the boost behavior.  Turning it off gives better performance
   in some workloads but needs more investigation.
 - While we're modifying sleepq, change signal and broadcast to both
   return with the lock held as the lock was held on enter.

Reviewed by:	jhb, peter
2008-03-12 06:31:06 +00:00
Jeff Roberson
bdb5bdf0b7 - KSE may free a thread that was never actually forked. This will leave
td_cpuset NULL.  Check for this condition before dereferencing the
   cpuset.

Reported by:	david@catwhisker.org, miwi@freebsd.org
Sponsored by:	Nokia
2008-03-12 05:01:14 +00:00
Jeff Roberson
c143ac21af - Fix the invalid priority panics people are seeing by forcing
tdq_runq_add to select the runq rather than hoping we set it properly
   when we adjusted the priority.  This involves the same number of
   branches as before so should perform identically without the extra
   fragility.

Tested by:	bz
Reviewed by:	bz
2008-03-10 22:48:27 +00:00
Jeff Roberson
7217d8d1ee - Don't rely on a side effect of sched_prio() to set the initial ts_runq
for thread0.  Set it directly in sched_setup().  This fixes traps on boot
   seen on some machines.

Reported by:	phk
2008-03-10 09:50:29 +00:00
Jeff Roberson
8f93d79d05 - Handle kdb switch panics outside of mi_switch() to remove some instructions
from the common path and make the code more clear.  Whether this has any
   impact on performance may depend on optimization levels.

Sponsored by:	Nokia
2008-03-10 03:16:51 +00:00
Jeff Roberson
73daf66f41 Reduce ULE context switch time by over 25%.
- Only calculate timeshare priorities once per tick or when a thread is woken
   from sleeping.
 - Keep the ts_runq pointer valid after all priority changes.
 - Call tdq_runq_add() directly from sched_switch() without passing in via
   tdq_add().  We don't need to adjust loads or runqs anymore.
 - Sort tdq and ts_sched according to utilization to improve cache behavior.

Sponsored by:	Nokia
2008-03-10 03:15:19 +00:00
Warner Losh
9ab8f3544a Tiny bit of KNF to make bus_setup_intr() look like the rest of this
function.
2008-03-10 01:48:25 +00:00
Jeff Roberson
1bf6461e98 - Add the missing '2' case to the switch table for kern.smp.topology and
assign it to create the flat 'none' topology where all cpus are scheduled
   as if they are equal and unrelated.
2008-03-10 01:38:53 +00:00
Jeff Roberson
ff256d9c47 - Add an implementation of sched_preempt() that avoids excessive IPIs.
- Normalize the preemption/ipi setting code by introducing sched_shouldpreempt()
   so the logic is identical and not repeated between tdq_notify() and
   sched_setpreempt().
 - In tdq_notify() don't set NEEDRESCHED as we may not actually own the thread
   lock; this could have caused us to lose td_flags settings.
 - Garbage collect some tunables that are no longer relevant.
2008-03-10 01:32:01 +00:00
Jeff Roberson
1e24c28f46 - Add a sched_preempt() routine to be called by md code after IPI_PREEMPT is
delivered.
 - Add a simple implementation to 4bsd.
2008-03-10 01:30:35 +00:00
Warner Losh
908e1e5df5 Any driver that relies on its parent to set the devclass has no way to
know if it has siblings that need an actual probe.  Introduce a special
return value called BUS_PROBE_NOWILDCARD.  If the driver returns
this, the probe is only successful for devices that have had a
specific devclass set for them.

Reviewed by: current@, jhb@, grehan@
2008-03-09 05:10:22 +00:00
Antoine Brodin
e3ad7f6626 Introduce a new F_DUP2FD command to fcntl(2), for compatibility with
Solaris and AIX.
fcntl(fd, F_DUP2FD, arg) and dup2(fd, arg) are functionally equivalent.
Document it.
Add some regression tests (identical to the dup2(2) regression tests).

PR:		120233
Submitted by:	Jukka Ukkonen
Approved by:	rwatson (mentor)
MFC after:	1 month
2008-03-08 22:02:21 +00:00
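
I.e., the two calls below now do the same thing (hypothetical sketch):

	#include <fcntl.h>
	#include <unistd.h>

	static void
	dup_to_ten(int fd)
	{
		/* These two calls are functionally equivalent. */
		(void)fcntl(fd, F_DUP2FD, 10);
		(void)dup2(fd, 10);
	}
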
Robert Watson
36b208e008 Use sbuf routines to construct core dump filenames rather than custom
string buffer handling, making the code both easier to read and more
robust against string-handling bugs.

MFC after:	1 week
2008-03-08 16:31:29 +00:00
Robert Watson
eeccc36738 Unlock the process lock when expand_name() fails, or we may leak the
process lock leading to a hang.  This bug was introduced in
kern_sig.c:1.351, when the call to expand_name() was moved earlier
but this particular error case was not updated.
2008-03-08 15:48:06 +00:00
Robert Watson
b916b56b5a Add __FBSDID() tag.
MFC after:	3 days
Pointed out by:	antoine
2008-03-07 15:27:08 +00:00
Jeff Roberson
c6440f72b6 - Add a missing unlock to cpuset_setaffinity(CPU_LEVEL_CPUSET, CPU_WHICH_PID)
Found by:	gallatin
2008-03-06 20:11:24 +00:00
Jeff Roberson
8bd75bdde4 - Don't overwrite the recently allocated 'nset' in cpuset_setthread() by
passing it to cpuset_which().  Pass in 'set' instead.  This argument
   is not used but for convenience cpuset_which() nulls all incoming
   parameters.

Submitted by:	davidxu
2008-03-05 08:08:32 +00:00
Jeff Roberson
73c40187fd - Verify that when a user supplies a mask that is bigger than the kernel
mask none of the upper bits are set.
 - Be more careful about enforcing the boundaries of masks and child sets.
 - Introduce a few more CPU_* macros for implementing these tests.
 - Change the cpusetsize argument to be bytes rather than bits to match
   other apis.

Sponsored by:	Nokia
2008-03-05 01:49:20 +00:00
Ruslan Ermilov
9e47336389 Make it possible to continue working after calling doadump()
manually from debugger.  (This got broken in rev. 1.122.)
2008-03-04 07:39:31 +00:00
Rafal Jaworowski
6b7ba54456 Initial support for Freescale PowerQUICC III MPC85xx system-on-chip family.
The PQ3 is a high performance integrated communications processing system
based on the e500 core, which is an embedded RISC processor that implements
the 32-bit Book E definition of the PowerPC architecture. For details refer
to: http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=MPC8555E

This port was tested and successfully run on the following members of the PQ3
family: MPC8533, MPC8541, MPC8548, MPC8555.

The following major integrated peripherals are supported:

  * On-chip peripherals bus
  * OpenPIC interrupt controller
  * UART
  * Ethernet (TSEC)
  * Host/PCI bridge
  * QUICC engine (SCC functionality)

This commit brings the main functionality and will be followed by individual
drivers that are logically separate from this base.

Approved by:	cognet (mentor)
Obtained from:	Juniper, Semihalf
MFp4:		e500
2008-03-03 17:17:00 +00:00
Marcel Moolenaar
f5a3ef99c2 Unbreak after cpuset: initialize td_cpuset in sched_fork_thread(). 2008-03-02 21:34:57 +00:00
Jeff Roberson
62fa74d95a Add support for the new cpu topology api:
- When searching for affinity search backwards in the tree from the last
   cpu we ran on while the thread still has affinity for the group.   This
   can take advantage of knowledge of shared L2 or L3 caches among a
   group of cores.
 - When searching for the least loaded cpu find the least loaded cpu via
   the least loaded path through the tree.  This load balances system bus
   links, individual cache levels, and hyper-threaded/SMT cores.
 - Make the periodic balancer recursively balance the highest and lowest
   loaded cpu across each link.

Add support for cpusets:
 - Convert the cpuset to a simple native cpumask_t while the kernel still
   only supports cpumask.
 - Pass the derived cpumask down through the cpu_search functions to
   restrict the result cpus.
 - Make the various steal functions resilient to failure since all threads
   can not run on all cpus any longer.

General improvements:
 - Precisely track the lowest priority thread on every runq with
   tdq_setlowpri().  Before it was more advisory but this ended up having
   pathological behaviors.
 - Remove many #ifdef SMP conditions to simplify the code.
 - Get rid of the old cumbersome tdq_group.  This is more naturally
   expressed via the cpu_group tree.

Sponsored by:	Nokia
Testing by:	kris
2008-03-02 08:20:59 +00:00
Jeff Roberson
81aa71755b - Remove the old smp cpu topology specification with a new, more flexible
tree structure that encodes the level of cache sharing and other
   properties.
 - Provide several convenience functions for creating one and two level
   cpu trees as well as a default flat topology.  The system now always
   has some topology.
 - On i386 and amd64 create a separate level in the hierarchy for HTT
   and multi-core cpus.  This will allow the scheduler to intelligently
   load balance non-uniform cores.  Presently we don't detect what level
   of the cache hierarchy is shared at each level in the topology.
 - Add a mechanism for testing common topologies that have more information
   than the MD code is able to provide via the kern.smp.topology tunable.
   This should be considered a debugging tool only and not a stable api.

Sponsored by:	Nokia
2008-03-02 07:58:42 +00:00
Jeff Roberson
4da2b9d42f - Regen for cpuset
Sponsored by:	Nokia
2008-03-02 07:41:10 +00:00
Jeff Roberson
d7f687fc9b Add cpuset, an api for thread to cpu binding and cpu resource grouping
and assignment.
 - Add a reference to a struct cpuset in each thread that is inherited from
   the thread that created it.
 - Release the reference when the thread is destroyed.
 - Add prototypes for syscalls and macros for manipulating cpusets in
   sys/cpuset.h
 - Add syscalls to create, get, and set new numbered cpusets:
   cpuset(), cpuset_{get,set}id()
 - Add syscalls for getting and setting affinity masks for cpusets or
   individual threads: cpuid_{get,set}affinity()
 - Add types for the 'level' and 'which' parameters for the cpuset.  This
   will permit expansion of the api to cover cpu masks for other objects
   identifiable with an id_t integer.  For example, IRQs and Jails may be
   coming soon.
 - The root set 0 contains all valid cpus.  All threads initially belong to
   cpuset 1.  This permits migrating all threads off of certain cpus to
   reserve them for special applications.

Sponsored by:	Nokia
Discussed with:	arch, rwatson, brooks, davidxu, deischen
Reviewed by:	antoine
2008-03-02 07:39:22 +00:00
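
A hypothetical userland sketch of the new affinity api (pins the calling
thread to cpus 0 and 1):

	#include <sys/param.h>
	#include <sys/cpuset.h>
	#include <err.h>

	int
	main(void)
	{
		cpuset_t mask;

		CPU_ZERO(&mask);
		CPU_SET(0, &mask);
		CPU_SET(1, &mask);
		/* id -1 with CPU_WHICH_TID means "the current thread". */
		if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID, -1,
		    sizeof(mask), &mask) != 0)
			err(1, "cpuset_setaffinity");
		return (0);
	}
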
Jeff Roberson
885d51a38a - Add a new sched_affinity() api to be used in the upcoming cpuset
implementation.
 - Add empty implementations of sched_affinity() to 4BSD and ULE.

Sponsored by:	Nokia
2008-03-02 07:19:35 +00:00
Attilio Rao
7fbfba7bf8 - Handle the buffer lock waiters count directly in the buffer cache instead
of relying on the lockmgr support [1]:
  * bump the waiters only if the interlock is held
  * let brelvp() return the waiters count
  * rely on brelvp() instead of BUF_LOCKWAITERS() in order to check
    for the waiters number
- Remove a namespace pollution introduced recently with lockmgr.h
  including lock.h by including lock.h directly in the consumers and
  making it mandatory for using lockmgr.
- Modify flags accepted by lockinit():
  * introduce LK_NOPROFILE which disables lock profiling for the
    specified lockmgr
  * introduce LK_QUIET which disables ktr tracing for the specified
    lockmgr [2]
  * disallow LK_SLEEPFAIL and LK_NOWAIT to be passed there so that it
    can only be used on a per-instance basis
- Remove BUF_LOCKWAITERS() and lockwaiters() as they are no longer
  used

This patch breaks the KPI, so __FreeBSD_version will be bumped and manpages
updated by further commits.  Additionally, the 'struct buf' change results
in a disturbed ABI as well.

[2] Really, currently there is no ktr tracing in the lockmgr, but it
will be added soon.

[1] Submitted by:	kib
Tested by:	pho, Andrea Barberio <insomniac at slackware dot it>
2008-03-01 19:47:50 +00:00
Konstantin Belousov
e30cf87ba1 Do not assert any locks for VOP_PRINT. In particular, do not assert that
the vnode interlock is not held. vn_printf() already correctly handles
locked and unlocked vnode interlocks, and all the in-tree vop_print
methods are interlock-agnostic.

Some code calls vprintf() with the vnode interlock held, which causes
unjustified panics with INVARIANTS (ffs_syncvnode(), for example).

Reported by:	Peter Holm
2008-02-26 12:16:35 +00:00
Attilio Rao
81c794f998 Axe the 'thread' argument from VOP_ISLOCKED() and lockstatus() as it is
always curthread.

As KPI gets broken by this patch, manpages and __FreeBSD_version will be
updated by further commits.

Tested by:	Andrea Barberio <insomniac at slackware dot it>
2008-02-25 18:45:57 +00:00
Attilio Rao
628f51d275 Introduce some functions in the vnode locks namespace and in the ffs
namespace in order to handle lockmgr fields in a controlled way instead
of spreading bogus stubs all around:
- VN_LOCK_AREC() allows lock recursion for a specified vnode
- VN_LOCK_ASHARE() allows lock sharing for a specified vnode

In FFS land:
- BUF_AREC() allows lock recursion for a specified buffer lock
- BUF_NOREC() disallows recursion for a specified buffer lock

Side note: union_subr.c::unionfs_node_update() is the only other function
directly handling lockmgr fields. As this is not simple to fix, it has
been left behind as the "sole" exception.
2008-02-24 16:38:58 +00:00
Colin Percival
491869163b After finishing sending file data in sendfile(2), don't forget to send
the provided trailers.  This has been broken since revision 1.240.

Submitted by:	Dan Nelson
PR:		kern/120948
"sounds ok to me" from:	phk
MFC after:	3 days
2008-02-24 00:07:00 +00:00
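
For reference, trailers travel in the sf_hdtr argument; a hypothetical
sketch (descriptors assumed open):

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <sys/uio.h>
	#include <err.h>

	static void
	send_with_trailer(int filefd, int sock)
	{
		struct iovec trl = {
			.iov_base = (void *)"\r\n.\r\n",
			.iov_len = 5,
		};
		struct sf_hdtr hdtr = { NULL, 0, &trl, 1 };
		off_t sent;

		/* nbytes == 0 sends the whole file, then the trailer. */
		if (sendfile(filefd, sock, 0, 0, &hdtr, &sent, 0) == -1)
			err(1, "sendfile");
	}
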
Dag-Erling Smørgrav
60e15db992 This patch adds a new ktrace(2) record type, KTR_STRUCT, whose payload
consists of the null-terminated name and the contents of any structure
you wish to record.  A new ktrstruct() function constructs and emits a
KTR_STRUCT record.  It is accompanied by convenience macros for struct
stat and struct sockaddr.

In kdump(1), KTR_STRUCT records are handled by a dispatcher function
that runs stringent sanity checks on its contents before handing it
over to individual decoding functions for each type of structure.
Currently supported structures are struct stat and struct sockaddr for
the AF_INET, AF_INET6 and AF_UNIX families; support for AF_APPLETALK
and AF_IPX is present but disabled, as I am unable to test it properly.

Since 's' was already taken, the letter 't' is used by ktrace(1) to
enable KTR_STRUCT trace points, and in kdump(1) to enable their
decoding.

Derived from patches by Andrew Li <andrew2.li@citi.com>.

PR:		kern/117836
MFC after:	3 weeks
2008-02-23 01:01:49 +00:00
Yaroslav Tykhiy
c6446de05d Undo the damage I did in sys/kern/vfs_mount.c #1.274 and
sbin/mount_nfs/mount_nfs.c #1.76.  Let the dragons sleep.

Requested by:	rodrigc, des
PR:		kern/120319 (welcome the bug back)
2008-02-18 20:58:57 +00:00
Yaroslav Tykhiy
37ed722f78 Add a remark on a questionable property of vfs_mergeopts(). 2008-02-18 10:10:42 +00:00
Antoine Brodin
370f990d30 Make sysctl_kern_arnd return a random buffer instead of a random long,
as expected by userland (stack protector guard setup, for example).

PR:		119129
Approved by:	rwatson (mentor)
MFC after:	1 month
2008-02-17 16:44:48 +00:00
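
The consumer-visible effect, as a hypothetical sketch:

	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <err.h>

	static void
	get_guard(char *buf, size_t len)
	{
		/* kern.arandom now fills the whole buffer with random bytes. */
		if (sysctlbyname("kern.arandom", buf, &len, NULL, 0) == -1)
			err(1, "sysctlbyname(kern.arandom)");
	}
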
Kris Kennaway
e17660e79c Switch from conditionally dropping Giant in exit1() to asserting it is
not held, which appears to be always true.
2008-02-17 15:28:28 +00:00
Warner Losh
6b4d690c62 Fix typo in comment. 2008-02-17 02:46:54 +00:00
Antoine Brodin
74727f1209 Remove a superfluous line in run_interrupt_driven_config_hooks(),
next_entry is already initialized during TAILQ_FOREACH_SAFE().

PR:		kern/119604
Approved by:	rwatson (mentor)
MFC after:	1 month
2008-02-15 21:54:21 +00:00
Attilio Rao
24463dbbee - Introduce lockmgr_args() in the lockmgr space. This function performs
the same operation of lockmgr() but accepting a custom wmesg, prio and
  timo for the particular lock instance, overriding default values
  lkp->lk_wmesg, lkp->lk_prio and lkp->lk_timo.
- Use lockmgr_args() in order to implement BUF_TIMELOCK()
- Cleanup BUF_LOCK()
- Remove LK_INTERNAL as it is no longer used in the lockmgr namespace

Tested by:	Andrea Barberio <insomniac at slackware dot it>
2008-02-15 21:04:36 +00:00
Yaroslav Tykhiy
38a7fd05f7 In the new order of things dictated by nmount(2), a read-only mount
is to be requested via a "ro" option.  At the same time, MNT_RDONLY
is gradually becoming an indicator of the current state of the FS
instead of a command flag.  Today passing MNT_RDONLY alone to the
kernel's mount machinery will lead to various glitches.  (See the
PRs for examples.)

Therefore mount the root FS with a "ro" option instead of the
MNT_RDONLY flag.  (Note that MNT_RDONLY still is added to the mount
flags internally, by vfs_donmount(), if "ro" was specified.)

To be able to pass "ro" cleanly to kernel_vmount(), teach the latter
function to accept options with NULL values.

Also correct the comment explaining how mount_arg() handles length
of -1.

PR:		bin/106636 kern/120319
Submitted by:	Jaakko Heinonen <see PR kern/120319 for email> (originally)
2008-02-14 17:04:31 +00:00
Simon L. B. Nielsen
1b7089994c Fix sendfile(2) write-only file permission bypass.
Security:	FreeBSD-SA-08:03.sendfile
Submitted by:	kib
2008-02-14 11:44:31 +00:00
John Baldwin
ad69e26b69 Add KASSERT()'s to catch attempts to recurse on spin mutexes that aren't
marked recursable either via mtx_lock_spin() or thread_lock().

MFC after:	1 week
2008-02-13 23:39:05 +00:00