Commit Graph

64068 Commits

Author SHA1 Message Date
glebius
27c6e3f25e Do not leak the lock in the case of an EEXIST error.
PR:		kern/92776
Submitted by:	Ed Schouten <Ed.Schouten tunix.nl>
2007-06-06 14:21:49 +00:00
nyan
b9ae4d9a66 MFi386: revision 1.657
Back out the experimental adaptive-spin umtx code.
2007-06-06 13:04:15 +00:00
davidxu
3a1a57d0eb Back out the experimental adaptive-spin umtx code. 2007-06-06 07:35:08 +00:00
yongari
3eb2cf672b Add support for the IC Plus IP101 10/100 PHY that is found on nVidia network
adapters.

Submitted by:	Shigeaki Tagashira <shigeaki AT se DOT hiroshima-u DOT ac DOT jp>
2007-06-06 07:07:23 +00:00
yongari
279288948f Add IC Plus IP101 PHY 2007-06-06 07:05:02 +00:00
yongari
808e1dd382 Add support for the Vitesse VSC8601 PHY that is found on nVidia network
adapters.

Submitted by:	Shigeaki Tagashira <shigeaki AT se DOT hiroshima-u DOT ac DOT jp>
Tested by:	Yuri Pankov <yuri.pankov AT gmail DOT com>,
		Rainer Hurling <rhurlin AT gwdg DOT de>
2007-06-06 06:55:49 +00:00
yongari
00eff901d2 Add OUI for Vitesse Semiconductor.
Add Vitesse VSC8601 PHY.
2007-06-06 06:53:40 +00:00
grehan
806e204058 Fix the compile. Band-aid until it is worked out how to use the context
switch API on ppc.
2007-06-06 06:01:56 +00:00
marcel
ed1819a480 Prefix unknown (i.e. un-aliased) partition types with '!'. This is
how they had to be given with ctlreq.
2007-06-06 05:06:14 +00:00
marcel
1094a6916b Call sbuf_finish() before sbuf_data() and sbuf_len(). 2007-06-06 05:01:41 +00:00
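The ordering matters because sbuf_data() and sbuf_len() are only valid on a finished buffer. A minimal sbuf(9) sketch of the intended call order (the surrounding function is illustrative, not the code this commit touched):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/sbuf.h>

    static void
    sbuf_order_example(void)
    {
        struct sbuf *sb;

        sb = sbuf_new(NULL, NULL, 0, SBUF_AUTOEXTEND);
        sbuf_printf(sb, "example %d", 42);
        sbuf_finish(sb);        /* must come first: terminates the buffer */
        printf("%s (%d bytes)\n", sbuf_data(sb), (int)sbuf_len(sb));
        sbuf_delete(sb);
    }
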
sam
d82da159d3 copyright updates:
o update to include 2007
o switch back to a 2-clause bsd-only license

Reviewed by:	onoe
2007-06-06 04:56:04 +00:00
marcel
11092d3888 Include <sys/sched.h> for sched_throw(). 2007-06-06 04:44:19 +00:00
jeff
7a9c95c100 - Placing the 'volatile' on the right side of the * in the td_lock
declaration removes the need for __DEVOLATILE().

Pointed out by:	tegge
2007-06-06 03:40:47 +00:00
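A small illustration of the point (hypothetical declarations, not the committed struct thread field):

    struct mtx;                        /* forward declaration for the example */

    volatile struct mtx *td_lock_a;    /* pointer to volatile data: passing it to
                                          code expecting 'struct mtx *' needs the
                                          __DEVOLATILE() cast */
    struct mtx * volatile td_lock_b;   /* the pointer itself is volatile; it still
                                          yields a plain 'struct mtx *', so no
                                          cast is needed */
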
rrs
f084c2f975 - Fixes a case where doing a sysctl would leave locks held
when copying out association data.
- Fixes a small bug that prevented the SCTP_UNORDERED indication
  from going up to the app on a recv in the sinfo_flags field.
2007-06-06 00:40:41 +00:00
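A userland sketch, assuming the standard sctp_recvmsg(3) interface, of where that indication lands; the function and buffer sizes are purely illustrative:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/sctp.h>
    #include <stdio.h>
    #include <string.h>

    static void
    report_one_message(int sd)
    {
        char buf[2048];
        struct sctp_sndrcvinfo sinfo;
        int flags = 0;
        ssize_t n;

        memset(&sinfo, 0, sizeof(sinfo));
        n = sctp_recvmsg(sd, buf, sizeof(buf), NULL, NULL, &sinfo, &flags);
        if (n > 0 && (sinfo.sinfo_flags & SCTP_UNORDERED) != 0)
            printf("got %zd bytes delivered unordered\n", n);
    }
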
imp
0a9c3b889c Add more IDs for the uftdi driver. Slight tweaks to the patch by me.
Submitted by:  Thorsten Trampisch
PR: 113384
2007-06-05 21:06:17 +00:00
ariff
fa73dc8565 - Do triple reads on the reset register to detect the read-register bug;
two reads seem not enough to verify its consistency.
- Define AC97_MIXER_SIZE as SOUND_MIXER_NRDEVICES (25), since we
  don't need more than that. Stop making wild and random guesses about
  its size since we're strictly bound to it.
2007-06-05 20:30:16 +00:00
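A hedged sketch of the triple-read check described in the first item; read_reg() is a hypothetical accessor standing in for the driver's real codec-register read routine:

    #include <sys/types.h>

    /* Read the same register three times; trust the value only if all
     * three reads agree, since two matching reads proved insufficient. */
    static int
    read_register_consistent(uint32_t (*read_reg)(void *, int), void *sc,
        int reg, uint32_t *val)
    {
        uint32_t r1, r2, r3;

        r1 = read_reg(sc, reg);
        r2 = read_reg(sc, reg);
        r3 = read_reg(sc, reg);
        if (r1 != r2 || r2 != r3)
            return (0);     /* inconsistent: the read-register bug is present */
        *val = r1;
        return (1);
    }
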
ariff
719ee4cc9c Fix (enable) phone out for laptops with ALC655, specifically
for Amilo Pro V2055.

PR:		kern/113101
Submitted by:	konrad@egipt-medytacje.pl
MFC after:	3 days
2007-06-05 20:12:40 +00:00
jhb
7dc94bf582 Move a warning under bootverbose as no machines that trigger it have ended
up being broken.
2007-06-05 18:57:48 +00:00
attilio
8e2bea078c Fix a problem with non-preemptive kernels coming from a mis-merge of
existing code with the new thread_lock patch.
This also cleans up the unlock operation for mutexes a bit.

Approved by: jhb, jeff(mentor)
2007-06-05 18:57:09 +00:00
imp
3cd4cdf93d MFp4: When querying the operating condition of SD cards (using the
application-specific SEND_OP_COND (CMD55 + ACMD41)), go ahead and allow
100 tries.  This gives a timeout of about a second rather than the ~100ms
the old style produced.

I've had one old 16MB SD card which needs the extra time.  I've now
had reports from the field that other cards need this too.

Originally done at BSDCan 2007 while waiting to give my embedding
madness minitalk.
2007-06-05 17:04:44 +00:00
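A sketch of the longer retry loop, assuming roughly 10 ms per attempt; the helper name mmc_send_app_op_cond(), its prototype, and the MMC_OCR_CARD_BUSY check are illustrative rather than the exact committed code:

    #define SD_OP_COND_RETRIES  100     /* ~10 ms per try => roughly 1 second */

    static int
    sd_wait_for_ready(struct mmc_softc *sc, uint32_t ocr)
    {
        uint32_t rocr;
        int err, i;

        for (i = 0; i < SD_OP_COND_RETRIES; i++) {
            /* CMD55 + ACMD41: ask the card for its operating condition. */
            err = mmc_send_app_op_cond(sc, ocr, &rocr);
            if (err != 0)
                return (err);
            if ((rocr & MMC_OCR_CARD_BUSY) != 0)
                return (0);             /* power-up complete, card is ready */
            DELAY(10000);               /* wait ~10 ms before the next try */
        }
        return (ETIMEDOUT);
    }
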
gallatin
e2b0046ba1 Use pmap_change_attr() to set up a write-combining attribute for our
device memory, rather than relying on the less reliable MTRR method
used by mem_range_attr_set().

Glanced at by: jhb
2007-06-05 15:02:14 +00:00
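A hedged sketch of the approach; pmap_change_attr() is the exported pmap routine, while the write-combining constant and the surrounding helper are assumptions for illustration:

    #include <sys/param.h>
    #include <sys/bus.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <machine/pmap.h>

    static void
    map_wc(device_t dev, vm_offset_t kva, vm_size_t len)
    {
        int error;

        /* Ask the pmap layer to retag the already-mapped range as
         * write-combining instead of programming an MTRR for it. */
        error = pmap_change_attr(kva, len, PAT_WRITE_COMBINING);
        if (error != 0)
            device_printf(dev, "write combining unavailable: %d\n", error);
    }
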
kib
e8f1c1e028 Restore non-SMP build.
Reviewed by:	attilio
2007-06-05 14:20:13 +00:00
simokawa
b5a10ff918 Remove GIANT_REQUIRED for upcoming changes in the FireWire stack. 2007-06-05 14:15:45 +00:00
nyan
a1102c4ba8 MFi386: revision 1.656
Add the machine-specific definitions for configuring the new physical
  memory allocator.

  Set the size of phys_avail[] and dump_avail[] using one of these
  definitions.
2007-06-05 11:49:56 +00:00
alc
414dfd6eee Add the machine-specific definitions for configuring the new physical
memory allocator.

Set the size of phys_avail[] and dump_avail[] using one of these
definitions.

Approved by:	re
2007-06-05 05:17:20 +00:00
scottl
1c9a6e5327 Satisfy witness during shutdown 2007-06-05 05:03:13 +00:00
jeff
dcba3b4fa8 - Better fix for the previous error; use __DEVOLATILE() on the td_lock pointer,
as it can actually sometimes be something other than sched_lock, even on
   schedulers which rely on a global scheduler lock.

Tested by:	kan
2007-06-05 04:12:46 +00:00
jeff
bec3346df4 - Pass &sched_lock as the third argument to cpu_switch() as this will
always be the correct lock and we don't get volatile warnings this
   way.

Pointed out by:	kan
2007-06-05 03:46:54 +00:00
jeff
c5295bd51b - Define TDQ_ID() for the !SMP case.
- Default pick_pri to off.  It is not faster in most cases.
2007-06-05 02:53:51 +00:00
davidch
63aa14efe0 - Added a new Ethernet media type (2500BaseSX) to support BCM5708 controllers
which support a 2.5Gbps mode over fiber using next page extensions during
  autonegotiation.  Typically only found in blade systems which also include
  a Broadcom 2.5Gbps capable switch.

MFC after:	2 weeks
2007-06-05 00:32:01 +00:00
jeff
f042fda7ca - Add a new argument to cpu_switch().  This is a pointer to a mutex that
   the old thread's td_lock should point at before we return.
 - When cpu_switch() is called the td_lock pointer in the old thread may
   point at the blocked lock.  This prevents other processors from
   switching into this thread while we're still switching out.  Wait
   until we're done deactivating the vmspace before we release the
   thread by assigning to td_lock.
 - Before we can activate the new vmspace we must make sure that the new
   thread is not assigned to the blocked lock.  It may be in the process
   of switching out on another cpu.  Spin until the new thread is
   available.
2007-06-05 00:16:43 +00:00
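In rough terms the interface described above looks like the following; the prototype matches the message's description, with the comment summarizing the td_lock hand-off (treat the exact header placement as an assumption):

    struct mtx;
    struct thread;

    /*
     * Switch from 'old' to 'new'.  'mtx' is the lock that old->td_lock is
     * set to once the outgoing thread may safely be picked up by another
     * CPU; until then old->td_lock points at the global blocked_lock.
     */
    void    cpu_switch(struct thread *old, struct thread *new, struct mtx *mtx);
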
jeff
ebaff18384 - Expose td_lock to assembly so it may be used in cpu_switch(). 2007-06-05 00:13:49 +00:00
jeff
832988698f - Remove sched_core.c. The maintainer has lost interest in pursuing this
and it has been neglected in the recent ksegrp removal as well as
   the thread_lock() changes.

Discussed with:	davidxu
2007-06-05 00:12:37 +00:00
jeff
91d1501790 Commit 14/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-05 00:00:57 +00:00
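The per-thread pattern that replaces taking sched_lock around thread-state updates looks like the sketch below; the field being touched is only an illustration:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>

    static void
    request_resched(struct thread *td)
    {
        thread_lock(td);                     /* locks whatever td_lock points at */
        td->td_flags |= TDF_NEEDRESCHED;     /* illustrative per-thread state change */
        thread_unlock(td);
    }
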
jeff
8297f778b9 Commit 13/14 of sched_lock decomposition.
- Add a new parameter to cpu_switch() that is used to release the lock on
   the outgoing thread and properly acquire the lock on the incoming
   thread.  This parameter is not required for schedulers that don't do
   per-cpu locking and architectures which do not support it may continue
   to use the 4BSD scheduler.  This feature is presently not supported
   on ia64.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:58:47 +00:00
jeff
be3241715a - Change comments and asserts to reflect the removal of the global
scheduler lock.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:57:32 +00:00
jeff
b980923789 Commit 11/14 of sched_lock decomposition.
- There is no globally visible scheduler lock any longer.  For now the
   watchdog can only check Giant.  This model of checking particular locks
   is flawed and should be revisited.  Other metrics should be considered.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:56:33 +00:00
jeff
20e0f793e8 Commit 10/14 of sched_lock decomposition.
- Use sched_throw() rather than replicating the same cpu_throw() code for
   each architecture.  This also allows the scheduler to use any locking it
   may want to.
 - Use the thread_lock() rather than sched_lock when preempting.
 - The scheduler lock is not required to synchronize release_aps.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:56:08 +00:00
jeff
3a4cdc52dc Commit 10/14 of sched_lock decomposition.
- Add new spinlocks to support thread_lock() and adjust ordering.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:55:45 +00:00
jeff
0e873af7bd Commit 9/14 of sched_lock decomposition.
- Attempt to return the ttyinfo() selection algorithm to something sane
   as it has been broken and disabled for some time.  Adapt this algorithm
   in such a way that it does not conflict with per-cpu scheduler locking.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:55:32 +00:00
jeff
b8fc17c4ae Commit 8/14 of sched_lock decomposition.
- Use a global umtx spinlock to protect the sleep queues now that there
   is no global scheduler lock.
 - Use thread_lock() to protect thread state.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:50 +00:00
jeff
b5c8e4a113 Commit 7/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Use a global kse spinlock to protect upcall and thread assignment.  The
   per-process spinlock can not be used because this lock must be acquired
   via mi_switch() where we already hold a thread lock.  The kse spinlock
   is a leaf lock ordered after the process and thread spinlocks.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:27 +00:00
jeff
2fc4033486 Commit 6/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Replace the tail-end of fork_exit() with a scheduler specific routine
   which can do the appropriate lock manipulations.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:53:34 +00:00
jeff
d72d912582 Commit 5/14 of sched_lock decomposition.
- Protect the cp_time tick counts with atomics instead of a global lock.
   There will only be one atomic per tick and this allows all processors
   to execute softclock concurrently.
 - In softclock, protect access to rusage and td_*tick data with the
   thread_lock(), expanding the scope of the thread lock over the whole
   function.
 - Do some creative re-arranging in hardclock() to avoid excess locking.
 - Protect the p_timer fields with the per-process spinlock.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:53:06 +00:00
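A minimal sketch of the cp_time change in the first item: one atomic add per tick replaces a global lock around the counter update. The array here is a stand-in for the real cp_time[], and the index shown is illustrative:

    #include <sys/param.h>
    #include <sys/resource.h>
    #include <machine/atomic.h>

    static u_long example_cp_time[CPUSTATES];   /* stand-in for the real cp_time[] */

    static void
    charge_user_tick(void)
    {
        /* One atomic add per tick; softclock paths may run concurrently. */
        atomic_add_long(&example_cp_time[CP_USER], 1);
    }
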
jeff
8931208c40 Commit 4/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Move some common code into thread_suspend_switch() to handle the
   mechanics of suspending a thread.  The locking here is incredibly
   convoluted and should be simplified.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:52:24 +00:00
jeff
15c2dd7a1f Commit 3/14 of sched_lock decomposition.
- Add a per-turnstile spinlock to solve potential priority propagation
   deadlocks that are possible with thread_lock().
 - The turnstile lock order is defined as the exact opposite of the
   lock order used with the sleep locks they represent.  This allows us
   to walk in reverse order in priority_propagate and this is the only
   place we wish to multiply acquire turnstile locks.
 - Use the turnstile_chain lock to protect assigning mutexes to turnstiles.
 - Change the turnstile interface to pass back turnstile pointers to the
   consumers.  This allows us to reduce some locking and makes it easier
   to cancel turnstile assignment while the turnstile chain lock is held.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:51:44 +00:00
jeff
ea7c909871 Commit 2/14 of sched_lock decomposition.
- Adapt sleepqueues to the new thread_lock() mechanism.
 - Delay assigning the sleep queue spinlock as the thread lock until after
   we've checked for signals.  It is illegal for a thread to return in
   mi_switch() with any lock assigned to td_lock other than the scheduler
   locks.
 - Change sleepq_catch_signals() to do the switch if necessary to simplify
   the callers.
 - Simplify timeout handling now that locking a sleeping thread has the
   side-effect of locking the sleepqueue.  Some previous races are no
   longer possible.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:56 +00:00
jeff
186ae07cb6 Commit 1/14 of sched_lock decomposition.
- Move all scheduler locking into the schedulers utilizing a technique
   similar to solaris's container locking.
 - A per-process spinlock is now used to protect the queue of threads,
   thread count, suspension count, p_sflags, and other process
   related scheduling fields.
 - The new thread lock is actually a pointer to a spinlock for the
   container that the thread is currently owned by.  The container may
   be a turnstile, sleepqueue, or run queue.
 - thread_lock() is now used to protect access to thread related scheduling
   fields.  thread_unlock() unlocks the lock and thread_set_lock()
   implements the transition from one lock to another.
 - A new "blocked_lock" is used in cases where it is not safe to hold the
   actual thread's lock yet we must prevent access to the thread.
 - sched_throw() and sched_fork_exit() are introduced to allow the
   schedulers to fix-up locking at these points.
 - Add some minor infrastructure for optionally exporting scheduler
   statistics that were invaluable in solving performance problems with
   this patch.  Generally these statistics allow you to differentiate
   between different causes of context switches.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:30 +00:00
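A conceptual sketch (not the committed implementation) of how locking a thread through a pointer to its current container's spinlock can be made safe against the container changing underneath us:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>

    static void
    thread_lock_sketch(struct thread *td)
    {
        struct mtx *m;

        for (;;) {
            m = td->td_lock;            /* container lock it points at now */
            mtx_lock_spin(m);
            if (m == td->td_lock)       /* still the owning container? done */
                return;
            mtx_unlock_spin(m);         /* it moved (or was blocked_lock): retry */
        }
    }
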
attilio
9bd4fdf7ce Do proper "locking" for the missing vmmeter parts.
Now, we no longer assume sched_lock protection for some of them and use the
distributed-loads method for vmmeter (distributed across CPUs).

Reviewed by: alc, bde
Approved by: jeff (mentor)
2007-06-04 21:45:18 +00:00
attilio
e333d0ff0e Rework the PCPU_* (MD) interface:
- Rename PCPU_LAZY_INC to PCPU_INC.
- Add the PCPU_ADD interface, which just adds a specific value to the
  given pcpu member.

Note that for most architectures PCPU_INC and PCPU_ADD are not safe.
This is a point that needs some discussion/work in the coming days.

Reviewed by: alc, bde
Approved by: jeff (mentor)
2007-06-04 21:38:48 +00:00
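A usage sketch of the reworked interface; both macros act on a member of the current CPU's pcpu structure, and the members shown are illustrative only:

    #include <sys/param.h>
    #include <sys/pcpu.h>
    #include <sys/vmmeter.h>

    static void
    count_events(void)
    {
        PCPU_INC(cnt.v_syscall);     /* was PCPU_LAZY_INC(): increment by one */
        PCPU_ADD(cnt.v_intr, 2);     /* new interface: add a specific value */
    }
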