Commit Graph

rafan
5cebee462b - Update for 2007/05/01 import
Approved by:	delphij (mentor)
Nodded by:	ru
Tested by:	make universe
2007-06-05 15:35:05 +00:00
rafan
1095f96bda - Update for 2007/05/01 import.
Approved by:	delphij (mentor)
Nodded by:	ru
2007-06-05 15:34:40 +00:00
rafan
202f9402cb This commit was generated by cvs2svn to compensate for changes in r170331,
which included commits to RCS files with non-trunk default branches.
2007-06-05 15:33:51 +00:00
rafan
d187e809f9 Vendor import of bwk's 01-May-2007 release.
Approved by:	delphij (mentor)
Nodded by:	ru
Tested by:	make universe
2007-06-05 15:33:51 +00:00
gallatin
e2b0046ba1 Use pmap_change_attr() to set up a write-combining attribute for our
device memory, rather than relying on the less reliable MTRR method
used by mem_range_attr_set().

Glanced at by: jhb
2007-06-05 15:02:14 +00:00
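
For context, a minimal sketch of the pattern this commit describes: switching an existing kernel mapping of device memory to write-combining via the PAT, which keeps working where the small number of MTRRs can make mem_range_attr_set() fail. The helper name is hypothetical and this is x86-specific:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <machine/specialreg.h>

    /* Sketch only: kva/len would come from the driver's BAR mapping. */
    static int
    mydev_enable_wc(vm_offset_t kva, vm_size_t len)
    {
            return (pmap_change_attr(kva, len, PAT_WRITE_COMBINING));
    }
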
delphij
0ad1e85cd4 Updated release notes: less was updated to v403. 2007-06-05 14:55:15 +00:00
bmah
a831a31e91 Updated release notes: BIND 9.4.1, IPFilter 4.1.23.
Removed release note:  CORE scheduler.
2007-06-05 14:30:36 +00:00
kib
e8f1c1e028 Restore non-SMP build.
Reviewed by:	attilio
2007-06-05 14:20:13 +00:00
simokawa
b5a10ff918 Remove GIANT_REQUIRED for upcoming changes in FireWire stack. 2007-06-05 14:15:45 +00:00
nyan
a1102c4ba8 MFi386: revision 1.656
Add the machine-specific definitions for configuring the new physical
  memory allocator.

  Set the size of phys_avail[] and dump_avail[] using one of these
  definitions.
2007-06-05 11:49:56 +00:00
phk
65f1a6c8ba Default to R/O root filesystem 2007-06-05 11:17:45 +00:00
kib
46af5f9d0c Update man page for VOP_OPEN() after fdidx->fp conversion.
Reminded by: ru
2007-06-05 10:48:29 +00:00
phk
a241b1520e Fix the fstab on the second image, just like updatep2.sh does. 2007-06-05 10:21:15 +00:00
des
96c25f467e Expose __stack_chk_fail_local() so -fstack-protector-all works. 2007-06-05 08:24:34 +00:00
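
For context, a hedged sketch of what such a symbol typically looks like: with -fstack-protector-all, gcc emits calls to the local alias from position-independent code, and the alias simply forwards to the real failure handler. This is an illustration, not the libc diff itself:

    /* Sketch: forward the PIC-local alias to the real handler. */
    void __stack_chk_fail(void);

    void
    __stack_chk_fail_local(void)
    {
            __stack_chk_fail();
    }
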
delphij
ee31f13424 Write to slice name instead of directly to the disk device.
This fixes writing boot code upon upgrade.

PR:		bin/61587
Submitted by:	Nobuyuki Koganemaru <n-kogane syd.odn.ne.jp>
MFC after:	1 month
2007-06-05 05:44:41 +00:00
alc
414dfd6eee Add the machine-specific definitions for configuring the new physical
memory allocator.

Set the size of phys_avail[] and dump_avail[] using one of these
definitions.

Approved by:	re
2007-06-05 05:17:20 +00:00
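
For context, these definitions live in <machine/vmparam.h> and configure the new vm_phys physical-memory allocator. The sketch below shows the flavor of the definitions; the values and the exact array-sizing expression are illustrative assumptions, not the committed ones:

    /* Illustrative values only; see <machine/vmparam.h> for the real ones. */
    #define VM_PHYSSEG_MAX          17      /* max physical memory segments */
    #define VM_PHYSSEG_DENSE                /* dense vm_page_array mapping */
    #define VM_NFREEPOOL            3       /* free-page pools */
    #define VM_NFREELIST            2       /* free-page lists */
    #define VM_NFREEORDER           13      /* max buddy-queue order */

    /* phys_avail[]/dump_avail[] sized from the segment count (assumed): */
    vm_paddr_t phys_avail[VM_PHYSSEG_MAX * 2 + 2];
    vm_paddr_t dump_avail[VM_PHYSSEG_MAX * 2 + 2];
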
scottl
1c9a6e5327 Satisfy witness during shutdown 2007-06-05 05:03:13 +00:00
jeff
dcba3b4fa8 - Better fix for previous error; use DEVOLATILE on the td_lock pointer
   as it can actually sometimes be something other than sched_lock, even on
   schedulers which rely on a global scheduler lock.

Tested by:	kan
2007-06-05 04:12:46 +00:00
jeff
bec3346df4 - Pass &sched_lock as the third argument to cpu_switch() as this will
always be the correct lock and we don't get volatile warnings this
   way.

Pointed out by:	kan
2007-06-05 03:46:54 +00:00
jeff
c5295bd51b - Define TDQ_ID() for the !SMP case.
- Default pick_pri to off.  It is not faster in most cases.
2007-06-05 02:53:51 +00:00
jeff
f2eba6cb68 - ULE is no longer buggy or experimental. 2007-06-05 01:31:04 +00:00
delphij
a640564bf1 sched_core(4) removed. 2007-06-05 01:10:47 +00:00
davidch
63aa14efe0 - Added a new Ethernet media type (2500BaseSX) to support BCM5708 controllers
which support a 2.5Gbps mode over fiber using next page extensions during
  autonegotiation.  Typically only found in blade systems which also include
  a Broadcom 2.5Gbps capable switch.

MFC after:	2 weeks
2007-06-05 00:32:01 +00:00
jeff
f042fda7ca - Add a new argument to cpu_switch. This is a pointer to a mutex that
oldthread should point at before we return.
 - When cpu_switch() is called the td_lock pointer in the old thread may
   point at the blocked lock.  This prevents other processors from
   switching into this thread while we're still switching out.  Wait
   until we're done deactivating the vmspace before we release the
   thread by assigning to td_lock.
 - Before we can activate the new vmspace we must make sure that the new
   thread is not assigned to the blocked lock.  It may be in the process
   of switching out on another cpu.  Spin until the new thread is
   available.
2007-06-05 00:16:43 +00:00
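
A rough C rendering of the spin described in the last bullet; the real logic lives in cpu_switch()'s assembly, and the helper here is hypothetical:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>
    #include <machine/cpu.h>

    /* Sketch: wait until another CPU finishes switching newtd out. */
    static void
    wait_for_thread_release(struct thread *newtd)
    {
            while (newtd->td_lock == &blocked_lock)
                    cpu_spinwait();
    }
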
jeff
ebaff18384 - Expose td_lock to assembly so it may be used in cpu_switch(). 2007-06-05 00:13:49 +00:00
jeff
832988698f - Remove sched_core.c. The maintainer has lost interest in pursuing this
and it has been neglected in the recent ksegrp removal as well as
   the thread_lock() changes.

Discussed with:	davidxu
2007-06-05 00:12:37 +00:00
jeff
91d1501790 Commit 14/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-05 00:00:57 +00:00
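
The conversion pattern the series applies, as a hedged sketch (the function and its caller are illustrative; thread_lock()/thread_unlock() are the interfaces named above):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>
    #include <sys/sched.h>

    /* Sketch: per-thread state now takes the thread's container lock. */
    static void
    bump_priority(struct thread *td, u_char prio)
    {
            thread_lock(td);        /* was: mtx_lock_spin(&sched_lock) */
            sched_prio(td, prio);
            thread_unlock(td);      /* was: mtx_unlock_spin(&sched_lock) */
    }
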
jeff
8297f778b9 Commit 13/14 of sched_lock decomposition.
- Add a new parameter to cpu_switch() that is used to release the lock on
   the outgoing thread and properly acquire the lock on the incoming
   thread.  This parameter is not required for schedulers that don't do
   per-cpu locking and architectures which do not support it may continue
   to use the 4BSD scheduler.  This feature is presently not supported
   on ia64.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:58:47 +00:00
jeff
be3241715a - Change comments and asserts to reflect the removal of the global
scheduler lock.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:57:32 +00:00
jeff
b980923789 Commit 11/14 of sched_lock decomposition.
- There is no globally visible scheduler lock any longer.  For now the
   watchdog can only check Giant.  This model of checking particular locks
   is flawed and should be revisited.  Other metrics should be considered.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:56:33 +00:00
jeff
20e0f793e8 Commit 10/14 of sched_lock decomposition.
- Use sched_throw() rather than replicating the same cpu_throw() code for
   each architecture.  This also allows the scheduler to use any locking it
   may want to.
 - Use the thread_lock() rather than sched_lock when preempting.
 - The scheduler lock is not required to synchronize release_aps.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:56:08 +00:00
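
A short sketch of the sched_throw() call that replaces the replicated cpu_throw() code; the AP-startup caller shown here is illustrative:

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/sched.h>

    /* Sketch: hand a freshly started CPU to the scheduler. */
    static void
    ap_enter_scheduler(void)
    {
            /* No old thread to release; sched_throw() never returns. */
            sched_throw(NULL);
    }
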
jeff
3a4cdc52dc Commit 10/14 of sched_lock decomposition.
- Add new spinlocks to support thread_lock() and adjust ordering.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:55:45 +00:00
jeff
0e873af7bd Commit 9/14 of sched_lock decomposition.
- Attempt to return the ttyinfo() selection algorithm to something sane
   as it has been broken and disabled for some time.  Adapt this algorithm
   in such a way that it does not conflict with per-cpu scheduler locking.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:55:32 +00:00
jeff
b8fc17c4ae Commit 8/14 of sched_lock decomposition.
- Use a global umtx spinlock to protect the sleep queues now that there
   is no global scheduler lock.
 - Use thread_lock() to protect thread state.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:50 +00:00
jeff
b5c8e4a113 Commit 7/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Use a global kse spinlock to protect upcall and thread assignment.  The
   per-process spinlock can not be used because this lock must be acquired
   via mi_switch() where we already hold a thread lock.  The kse spinlock
   is a leaf lock ordered after the process and thread spinlocks.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:27 +00:00
jeff
2fc4033486 Commit 6/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Replace the tail-end of fork_exit() with a scheduler specific routine
   which can do the appropriate lock manipulations.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:53:34 +00:00
jeff
d72d912582 Commit 5/14 of sched_lock decomposition.
- Protect the cp_time tick counts with atomics instead of a global lock.
   There will only be one atomic per tick and this allows all processors
   to execute softclock concurrently.
 - In softclock, protect access to rusage and td_*tick data with the
   thread_lock(), expanding the scope of the thread lock over the whole
   function.
 - Do some creative re-arranging in hardclock() to avoid excess locking.
 - Protect the p_timer fields with the per-process spinlock.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:53:06 +00:00
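
A sketch of the first bullet, using a stand-in counter array rather than the real cp_time[] declaration:

    #include <sys/param.h>
    #include <sys/resource.h>
    #include <machine/atomic.h>

    static u_long tick_counts[CPUSTATES];   /* stand-in for cp_time[] */

    /* One atomic add per tick; softclock may run on all CPUs at once. */
    static void
    count_tick(int state)
    {
            atomic_add_long(&tick_counts[state], 1);
    }
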
jeff
8931208c40 Commit 4/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Move some common code into thread_suspend_switch() to handle the
   mechanics of suspending a thread.  The locking here is incredibly
   convoluted and should be simplified.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:52:24 +00:00
jeff
15c2dd7a1f Commit 3/14 of sched_lock decomposition.
- Add a per-turnstile spinlock to solve potential priority propagation
   deadlocks that are possible with thread_lock().
 - The turnstile lock order is defined as the exact opposite of the
   lock order used with the sleep locks they represent.  This allows us
   to walk in reverse order in priority_propagate and this is the only
   place we wish to multiply acquire turnstile locks.
 - Use the turnstile_chain lock to protect assigning mutexes to turnstiles.
 - Change the turnstile interface to pass back turnstile pointers to the
   consumers.  This allows us to reduce some locking and makes it easier
   to cancel turnstile assignment while the turnstile chain lock is held.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:51:44 +00:00
jeff
ea7c909871 Commit 2/14 of sched_lock decomposition.
- Adapt sleepqueues to the new thread_lock() mechanism.
 - Delay assigning the sleep queue spinlock as the thread lock until after
   we've checked for signals.  It is illegal for a thread to return in
   mi_switch() with any lock assigned to td_lock other than the scheduler
   locks.
 - Change sleepq_catch_signals() to do the switch if necessary to simplify
   the callers.
 - Simplify timeout handling now that locking a sleeping thread has the
   side-effect of locking the sleepqueue.  Some previous races are no
   longer possible.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:56 +00:00
jeff
186ae07cb6 Commit 1/14 of sched_lock decomposition.
- Move all scheduler locking into the schedulers utilizing a technique
   similar to Solaris's container locking.
 - A per-process spinlock is now used to protect the queue of threads,
   thread count, suspension count, p_sflags, and other process
   related scheduling fields.
 - The new thread lock is actually a pointer to a spinlock for the
   container that the thread is currently owned by.  The container may
   be a turnstile, sleepqueue, or run queue.
 - thread_lock() is now used to protect access to thread related scheduling
   fields.  thread_unlock() unlocks the lock and thread_set_lock()
   implements the transition from one lock to another.
 - A new "blocked_lock" is used in cases where it is not safe to hold the
   actual thread's lock yet we must prevent access to the thread.
 - sched_throw() and sched_fork_exit() are introduced to allow the
   schedulers to fix-up locking at these points.
 - Add some minor infrastructure for optionally exporting scheduler
   statistics that were invaluable in solving performance problems with
   this patch.  Generally these statistics allow you to differentiate
   between different causes of context switches.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:30 +00:00
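
A hedged sketch of the container-lock transition described above. The commit message names thread_set_lock() for the transition; the helper and its locking discipline here are illustrative:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>

    /*
     * Sketch: hand td from its current container to a new one whose
     * spinlock the caller already holds.
     */
    static void
    hand_off_thread(struct thread *td, struct mtx *newlock)
    {
            thread_lock(td);                /* lock the current container */
            thread_set_lock(td, newlock);   /* re-point td_lock, drop old */
    }
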
attilio
9bd4fdf7ce Do proper "locking" for the missing vmmeter parts.
Now, we no longer assume sched_lock protection for some of them and use the
distributed-loads method for vmmeter (distributed across CPUs).

Reviewed by: alc, bde
Approved by: jeff (mentor)
2007-06-04 21:45:18 +00:00
attilio
e333d0ff0e Rework the PCPU_* (MD) interface:
- Rename PCPU_LAZY_INC to PCPU_INC
- Add the PCPU_ADD interface, which simply adds a given value to the
  pcpu member.

Note that for most architectures PCPU_INC and PCPU_ADD are not safe.
This is a point that needs some discussion and work in the coming days.

Reviewed by: alc, bde
Approved by: jeff (mentor)
2007-06-04 21:38:48 +00:00
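
A sketch of the reworked interface against the per-CPU vmmeter fields mentioned in the previous commit; the wrapper function is hypothetical:

    #include <sys/param.h>
    #include <sys/pcpu.h>
    #include <sys/vmmeter.h>

    /* Sketch: each CPU updates its own copy, so no lock is taken. */
    static void
    note_switch_and_faults(int nfaults)
    {
            PCPU_INC(cnt.v_swtch);              /* add one */
            PCPU_ADD(cnt.v_vm_faults, nfaults); /* add a given value */
    }
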
dwmalone
771efb08f5 Despite several examples in the kernel, the third argument of
sysctl_handle_int is not sizeof the int type you want to export.
The type must always be an int or an unsigned int.

Remove the instances where a sizeof(variable) is passed, to stop
people accidentally cutting and pasting these examples.

In a few places sysctl_handle_int was being used on 64 bit
types, which would truncate the value to be exported.  In these
cases use sysctl_handle_quad to export them and change the format
to Q so that sysctl(1) can still print them.
2007-06-04 18:25:08 +00:00
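
A sketch of the fixed pattern with a hypothetical 64-bit statistic, using the sysctl_handle_quad() referenced here (added in the following commit):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static uint64_t total_bytes;    /* hypothetical 64-bit statistic */

    /* Sketch: a quad handler and format "Q" instead of sysctl_handle_int. */
    static int
    sysctl_total_bytes(SYSCTL_HANDLER_ARGS)
    {
            return (sysctl_handle_quad(oidp, &total_bytes, 0, req));
    }
    SYSCTL_PROC(_kern, OID_AUTO, total_bytes,
        CTLTYPE_QUAD | CTLFLAG_RD, NULL, 0,
        sysctl_total_bytes, "Q", "Illustrative 64-bit counter");
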
dwmalone
360dc5b21c Add a function for exporting 64 bit types. 2007-06-04 18:14:28 +00:00
dwmalone
79a4f2f433 Use common code for printing ints and longs by copying the sysctl
value into a variable of the right type and then printing it via
an intmax_t. This avoids some duplication and makes it easy
to add a new integer format Q for printing things of type CTLTYPE_QUAD.
2007-06-04 18:02:23 +00:00
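
A userland sketch of the copy-then-widen pattern described above; the function and format letters mirror sysctl(1)'s usage but are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: widen any integer sysctl value and print it one way. */
    static void
    print_int_value(const void *p, char fmt)
    {
            intmax_t v;

            switch (fmt) {
            case 'I': v = *(const int *)p; break;
            case 'L': v = *(const long *)p; break;
            case 'Q': v = *(const int64_t *)p; break;
            default: return;
            }
            printf("%jd\n", v);
    }
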
marcel
c118c8a64b Revert to the previous version where the return value of uart_getenv()
is being ignored. It's optional and the lack of environment variable
is not an error condition.
2007-06-04 17:53:42 +00:00
brueffer
3f6ea62ed6 Xref altq(4). 2007-06-04 16:59:11 +00:00
ambrisko
7261dfb656 Add in a couple of things:
-	In the ioctl path, let the command get queued up and return
	when complete _without_ blocking the driver waiting for
	the response.  This way the driver doesn't "lock up" for
	~30s during a flash command.  Submitted by scottl.
-	Add a guard so that if a DCMD of 0 is sent down the ioctl
	path we don't send it to the controller, and return with a
	status of OK.  This is a little strange since MegaCli
	doesn't seem to like something and will issue some DCMD
	of 0.  This doesn't happen under Linux, so the emulation
	needs to be improved, but I'm not sure how.  Another strange
	thing is that when a DCMD of 0 gets issued under i386 the
	controller returns OK, but on amd64 the context is messed
	up.
-	Add a guard so the context has to be within the legal
	limit, so we get a reasonable error assertion rather than a
	random panic.

It's going to be a challenge to figure out why MegaCli is not totally
happy and then sends some bogus commands.  This means that flashing
firmware via the Linux tool won't work since it generates a DCMD of
0 when it should be opening the firmware for a flash update.  Without
this problem flashing works fine.  This means there is no publicly
available tool to upgrade the RAID firmware under FreeBSD right now.

I plan to MFC all of the mfi changes to 6.X shortly.  This might not
include the SCSI pass-through changes.

Submitted by:	scottl
Reviewed by:	scottl
MFC after:	3 days
2007-06-04 16:39:22 +00:00
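
A hypothetical sketch of the two guards described above; the names are invented for illustration and do not match the mfi(4) diff:

    #include <sys/types.h>
    #include <sys/errno.h>

    /*
     * Sketch: returns an errno, and sets *handledp when the bogus
     * DCMD 0 was completed as OK without touching the controller.
     */
    static int
    guard_user_dcmd(uint32_t opcode, uint32_t context, uint32_t max_cmds,
        int *handledp)
    {
            *handledp = 0;
            if (opcode == 0) {
                    *handledp = 1;          /* report MFI_STAT_OK upstream */
                    return (0);
            }
            if (context >= max_cmds)
                    return (EINVAL);        /* reject cleanly, don't panic */
            return (0);
    }
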
mav
2f12bfb53b No need to update link queue stats when the round-robin algorithm is enabled.
Approved by:	glebius (mentor)
2007-06-04 13:50:09 +00:00