Commit Graph

Xin LI
4a95c0e83f Write to the slice name instead of directly to the disk device.
This fixes writing boot code upon upgrade.

PR:		bin/61587
Submitted by:	Nobuyuki Koganemaru <n-kogane syd.odn.ne.jp>
MFC after:	1 month
2007-06-05 05:44:41 +00:00
Alan Cox
e5c45405f0 Add the machine-specific definitions for configuring the new physical
memory allocator.

Set the size of phys_avail[] and dump_avail[] using one of these
definitions.

Approved by:	re
2007-06-05 05:17:20 +00:00
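For readers unfamiliar with what such "machine-specific definitions" look like, here is a hypothetical sketch of a <machine/vmparam.h> fragment; the names and values below are illustrative assumptions, not the exact ones committed.

    /* Hypothetical sketch; names and values are assumptions. */
    #define VM_PHYSSEG_MAX      31      /* max physical memory segments */
    #define VM_NFREEPOOL        3       /* number of free-page pools */
    #define VM_NFREELIST        2       /* number of free-page lists */
    #define VM_NFREEORDER       13      /* max buddy-allocator order */

    /* phys_avail[] and dump_avail[] sized from one of these definitions. */
    vm_paddr_t phys_avail[VM_PHYSSEG_MAX * 2 + 2];
    vm_paddr_t dump_avail[VM_PHYSSEG_MAX * 2 + 2];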
Scott Long
d4a4ddc6ba Satisfy witness during shutdown 2007-06-05 05:03:13 +00:00
Jeff Roberson
95e3a0bca3 - Better fix for the previous error; use DEVOLATILE on the td_lock pointer
since it can actually sometimes be something other than sched_lock, even on
   schedulers which rely on a global scheduler lock.

Tested by:	kan
2007-06-05 04:12:46 +00:00
Jeff Roberson
c219b097af - Pass &sched_lock as the third argument to cpu_switch() as this will
always be the correct lock and we don't get volatile warnings this
   way.

Pointed out by:	kan
2007-06-05 03:46:54 +00:00
Jeff Roberson
36b369163b - Define TDQ_ID() for the !SMP case.
- Default pick_pri to off.  It is not faster in most cases.
2007-06-05 02:53:51 +00:00
Jeff Roberson
d6cc759d20 - ULE is no longer buggy or experimental. 2007-06-05 01:31:04 +00:00
Xin LI
8bdbc89cd8 sched_core(4) removed. 2007-06-05 01:10:47 +00:00
David Christensen
051e756190 - Added a new Ethernet media type (2500BaseSX) to support BCM5708 controllers
which support a 2.5Gbps mode over fiber using next page extensions during
  autonegotiation.  Typically only found in blade systems which also include
  a Broadcom 2.5Gbps capable switch.

MFC after:	2 weeks
2007-06-05 00:32:01 +00:00
Jeff Roberson
5d68dad329 - Add a new argument to cpu_switch. This is a pointer to a mutex that
oldthread should point at before we return.
 - When cpu_switch() is called the td_lock pointer in the old thread may
   point at the blocked lock.  This prevents other processors from
   switching into this thread while we're still switching out.  Wait
   until we're done deactivating the vmspace before we release the
   thread by assigning to td_lock.
 - Before we can activate the new vmspace we must make sure that the new
   thread is not assigned to the blocked lock.  It may be in the process
   of switching out on another cpu.  Spin until the new thread is
   available.
2007-06-05 00:16:43 +00:00
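The handshake described above is easier to see in C than in the machine-dependent assembly where it actually lives. The following is a minimal, hedged sketch of the logic only; the helper names are assumptions and cpu_switch() itself is per-architecture assembly.

    /* Illustrative sketch of the cpu_switch() handshake; not the real code. */
    void
    cpu_switch_sketch(struct thread *old, struct thread *new, struct mtx *mtx)
    {
            save_context(old);              /* hypothetical MD helper */
            deactivate_vmspace(old);        /* hypothetical MD helper */
            /* Only now hand the old thread its new lock, releasing it. */
            old->td_lock = mtx;
            /* The new thread may still be switching out on another CPU. */
            while (new->td_lock == &blocked_lock)
                    cpu_spinwait();
            activate_vmspace(new);          /* hypothetical MD helper */
            restore_context(new);           /* hypothetical MD helper */
    }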
Jeff Roberson
ebb6b0c0ec - Expose td_lock to assembly so it may be used in cpu_switch(). 2007-06-05 00:13:49 +00:00
Jeff Roberson
8e0185f604 - Remove sched_core.c. The maintainer has lost interest in pursuing this
and it has been neglected in the recent ksegrp removal as well as
   the thread_lock() changes.

Discussed with:	davidxu
2007-06-05 00:12:37 +00:00
Jeff Roberson
982d11f836 Commit 14/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-05 00:00:57 +00:00
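As a rough before/after illustration of the conversion this series performs (the surrounding statements are invented for the example and assume the usual kernel headers; they are not quoted from the diff):

    /* Before: all scheduling state serialized by the global spin lock. */
    mtx_lock_spin(&sched_lock);
    td->td_priority = pri;
    p->p_suspcount++;
    mtx_unlock_spin(&sched_lock);

    /* After: per-thread state under thread_lock(), per-process scheduling
     * state under the per-process spin lock. */
    thread_lock(td);
    td->td_priority = pri;
    thread_unlock(td);

    PROC_SLOCK(p);
    p->p_suspcount++;
    PROC_SUNLOCK(p);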
Jeff Roberson
a8cdbf449a Commit 13/14 of sched_lock decomposition.
- Add a new parameter to cpu_switch() that is used to release the lock on
   the outgoing thread and properly acquire the lock on the incoming
   thread.  This parameter is not required for schedulers that don't do
   per-cpu locking and architectures which do not support it may continue
   to use the 4BSD scheduler.  This feature is presently not supported
   on ia64.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:58:47 +00:00
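A hedged sketch of how callers use the new parameter (the local names are assumptions):

    /* A 4BSD-style scheduler with one global lock always hands over sched_lock. */
    cpu_switch(td, newtd, &sched_lock);

    /* A per-CPU-locking scheduler (ULE-style) hands over its run-queue lock,
     * which cpu_switch() assigns to the outgoing thread's td_lock and uses
     * when picking up the incoming thread. */
    cpu_switch(td, newtd, TDQ_LOCKPTR(tdq));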
Jeff Roberson
1b1618fb12 - Change comments and asserts to reflect the removal of the global
scheduler lock.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:57:32 +00:00
Jeff Roberson
74aaec43e8 Commit 11/14 of sched_lock decomposition.
- There is no globally visible scheduler lock any longer.  For now the
   watchdog can only check Giant.  This model of checking particular locks
   is flawed and should be revisited.  Other metrics should be considered.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:56:33 +00:00
Jeff Roberson
e4b5aee3a8 Commit 10/14 of sched_lock decomposition.
- Use sched_throw() rather than replicating the same cpu_throw() code for
   each architecture.  This also allows the scheduler to use any locking it
   may want to.
 - Use the thread_lock() rather than sched_lock when preempting.
 - The scheduler lock is not required to synchronize release_aps.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:56:08 +00:00
Jeff Roberson
bd43e47156 Commit 10/14 of sched_lock decomposition.
- Add new spinlocks to support thread_lock() and adjust ordering.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:55:45 +00:00
Jeff Roberson
07a61420ff Commit 9/14 of sched_lock decomposition.
- Attempt to return the ttyinfo() selection algorithm to something sane
   as it has been broken and disabled for some time.  Adapt this algorithm
   in such a way that it does not conflict with per-cpu scheduler locking.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:55:32 +00:00
Jeff Roberson
3c2e44364e Commit 8/14 of sched_lock decomposition.
- Use a global umtx spinlock to protect the sleep queues now that there
   is no global scheduler lock.
 - Use thread_lock() to protect thread state.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:50 +00:00
Jeff Roberson
765b2891e8 Commit 7/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Use a global kse spinlock to protect upcall and thread assignment.  The
   per-process spinlock can not be used because this lock must be acquired
   via mi_switch() where we already hold a thread lock.  The kse spinlock
   is a leaf lock ordered after the process and thread spinlocks.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:54:27 +00:00
Jeff Roberson
11bda9b8d5 Commit 6/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Replace the tail-end of fork_exit() with a scheduler specific routine
   which can do the appropriate lock manipulations.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:53:34 +00:00
Jeff Roberson
40acdeabab Commit 5/14 of sched_lock decomposition.
- Protect the cp_time tick counts with atomics instead of a global lock.
   There will only be one atomic per tick and this allows all processors
   to execute softclock concurrently.
 - In softclock, protect access to rusage and td_*tick data with the
   thread_lock(), expanding the scope of the thread lock over the whole
   function.
 - Do some creative re-arranging in hardclock() to avoid excess locking.
 - Protect the p_timer fields with the per-process spinlock.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:53:06 +00:00
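The cp_time part of this change amounts to replacing a lock-protected increment with a single atomic per tick; a minimal sketch, assuming the counters keep their traditional long type:

    /* Before: counters bumped while holding the global spin lock. */
    /*   cp_time[CP_USER]++;   (under sched_lock) */

    /* After: one lock-free atomic per tick, so every CPU can run
     * softclock concurrently. */
    atomic_add_long(&cp_time[CP_USER], 1);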
Jeff Roberson
a54e85fdbf Commit 4/14 of sched_lock decomposition.
- Use thread_lock() rather than sched_lock for per-thread scheduling
   synchronization.
 - Use the per-process spinlock rather than the sched_lock for per-process
   scheduling synchronization.
 - Move some common code into thread_suspend_switch() to handle the
   mechanics of suspending a thread.  The locking here is incredibly
   convoluted and should be simplified.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:52:24 +00:00
Jeff Roberson
2502c107ba Commit 3/14 of sched_lock decomposition.
- Add a per-turnstile spinlock to solve potential priority propagation
   deadlocks that are possible with thread_lock().
 - The turnstile lock order is defined as the exact opposite of the
   lock order used with the sleep locks they represent.  This allows us
   to walk in reverse order in priority_propagate and this is the only
   place we wish to multiply acquire turnstile locks.
 - Use the turnstile_chain lock to protect assigning mutexes to turnstiles.
 - Change the turnstile interface to pass back turnstile pointers to the
   consumers.  This allows us to reduce some locking and makes it easier
   to cancel turnstile assignment while the turnstile chain lock is held.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:51:44 +00:00
Jeff Roberson
d72e80f09a Commit 2/14 of sched_lock decomposition.
- Adapt sleepqueues to the new thread_lock() mechanism.
 - Delay assigning the sleep queue spinlock as the thread lock until after
   we've checked for signals.  It is illegal for a thread to return in
   mi_switch() with any lock assigned to td_lock other than the scheduler
   locks.
 - Change sleepq_catch_signals() to do the switch if necessary to simplify
   the callers.
 - Simplify timeout handling now that locking a sleeping thread has the
   side-effect of locking the sleepqueue.  Some previous races are no
   longer possible.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:56 +00:00
Jeff Roberson
7b20fb19fb Commit 1/14 of sched_lock decomposition.
- Move all scheduler locking into the schedulers utilizing a technique
   similar to Solaris's container locking.
 - A per-process spinlock is now used to protect the queue of threads,
   thread count, suspension count, p_sflags, and other process
   related scheduling fields.
 - The new thread lock is actually a pointer to a spinlock for the
   container that the thread is currently owned by.  The container may
   be a turnstile, sleepqueue, or run queue.
 - thread_lock() is now used to protect access to thread related scheduling
   fields.  thread_unlock() unlocks the lock and thread_set_lock()
   implements the transition from one lock to another.
 - A new "blocked_lock" is used in cases where it is not safe to hold the
   actual thread's lock yet we must prevent access to the thread.
 - sched_throw() and sched_fork_exit() are introduced to allow the
   schedulers to fix-up locking at these points.
 - Add some minor infrastructure for optionally exporting scheduler
   statistics that were invaluable in solving performance problems with
   this patch.  Generally these statistics allow you to differentiate
   between different causes of context switches.

Tested by:      kris, current@
Tested on:      i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
2007-06-04 23:50:30 +00:00
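A minimal C sketch of the container-lock idea described above: td_lock points at the spin lock of whatever container currently owns the thread, and thread_lock() chases that pointer until it is stable. This is an illustration of the mechanism, not the committed implementation.

    void
    thread_lock_sketch(struct thread *td)
    {
            struct mtx *m;

            for (;;) {
                    m = td->td_lock;
                    mtx_lock_spin(m);
                    if (m == td->td_lock)
                            return;         /* still owned by that container */
                    mtx_unlock_spin(m);     /* ownership moved; retry */
            }
    }

    void
    thread_unlock_sketch(struct thread *td)
    {
            mtx_unlock_spin(td->td_lock);
    }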
Attilio Rao
b4b7081961 Do proper "locking" for the missing vmmeter parts.
Now we no longer assume sched_lock protection for some of them and use the
distributed loads method for vmmeter (distributed across CPUs).

Reviewed by: alc, bde
Approved by: jeff (mentor)
2007-06-04 21:45:18 +00:00
Attilio Rao
6759608248 Rework the PCPU_* (MD) interface:
- Rename PCPU_LAZY_INC into PCPU_INC
- Add the PCPU_ADD interface which just does an add on the pcpu member
  given a specific value.

Note that for most architectures PCPU_INC and PCPU_ADD are not safe.
This is a point that needs some discussion/work in the coming days.

Reviewed by: alc, bde
Approved by: jeff (mentor)
2007-06-04 21:38:48 +00:00
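A hedged sketch of how the reworked interface reads at call sites (the counter fields are illustrative), plus a naive machine-independent fallback that shows why the commit warns the operations are not safe on most architectures:

    PCPU_INC(cnt.v_syscall);        /* bump a per-CPU vmmeter counter */
    PCPU_ADD(cnt.v_intr, n);        /* add an arbitrary value to one */

    /* A generic fallback would be a plain, non-atomic read-modify-write on
     * the current CPU's copy, which is only correct if preemption and
     * migration are prevented around it. */
    #define PCPU_ADD_SKETCH(member, val)    (*PCPU_PTR(member) += (val))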
David Malone
041b706b2f Despite several examples in the kernel, the third argument of
sysctl_handle_int is not the size of the int type you want to export.
The type must always be an int or an unsigned int.

Remove the instances where a sizeof(variable) is passed, to stop
people accidentally cutting and pasting these examples.

In a few places sysctl_handle_int was being used on 64-bit
types, which would truncate the value being exported.  In these
cases use sysctl_handle_quad to export them, and change the format
to Q so that sysctl(1) can still print them.
2007-06-04 18:25:08 +00:00
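A minimal sketch of the rule being enforced, using a hypothetical counter: hand sysctl_handle_int() an int/u_int and pass 0 (not a sizeof) as the third argument, and switch 64-bit values over to sysctl_handle_quad() with the Q format.

    static int
    foo_count_sysctl(SYSCTL_HANDLER_ARGS)
    {
            u_int val;

            val = foo_count;        /* hypothetical counter, always int-sized */
            return (sysctl_handle_int(oidp, &val, 0, req));
    }

    static int
    foo_bytes_sysctl(SYSCTL_HANDLER_ARGS)
    {
            uint64_t val;

            val = foo_bytes;        /* hypothetical 64-bit counter */
            return (sysctl_handle_quad(oidp, &val, 0, req));
    }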
David Malone
df82ff50ed Add a function for exporting 64 bit types. 2007-06-04 18:14:28 +00:00
David Malone
41e419cb61 Use common code for printing ints and longs by copying the sysctl
value into a variable of the right type and then printing it via
an intmax_t.  This avoids some duplication and makes it easy
to add a new integer format, Q, for printing things of type CTLTYPE_QUAD.
2007-06-04 18:02:23 +00:00
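Illustratively (this is not the sbin/sysctl code itself, and buf/len are assumed to be the raw value and size returned by sysctl(3)), the common print path copies the raw bytes into a correctly typed variable and then prints everything through intmax_t:

    int iv;
    long lv;
    int64_t qv;             /* CTLTYPE_QUAD, printed via the new "Q" format */
    intmax_t v;

    if (len == sizeof(iv)) {
            memcpy(&iv, buf, sizeof(iv));
            v = iv;
    } else if (len == sizeof(qv)) {
            memcpy(&qv, buf, sizeof(qv));
            v = qv;
    } else {
            memcpy(&lv, buf, sizeof(lv));
            v = lv;
    }
    printf("%jd\n", v);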
Marcel Moolenaar
e59febd747 Revert to the previous version where the return value of uart_getenv()
is ignored.  It's optional, and the lack of an environment variable
is not an error condition.
2007-06-04 17:53:42 +00:00
Christian Brueffer
22e1a9d50e Xref altq(4). 2007-06-04 16:59:11 +00:00
Doug Ambrisko
5be25877a1 Add in a couple of things:
-	In the ioctl path, let the command get queued up and return
	when complete _without_ blocking the driver waiting for
	the response.  This way the driver doesn't "lock up" for
	~30s during a flash command.  Submitted by scottl.
      -	Add a guard so that if a DCMD of 0 is sent down the ioctl
	path we don't send it to the controller and instead return
	with a status of OK.  This is a little strange since MegaCli
	doesn't seem to like something and will issue some DCMD
	of 0.  This doesn't happen under Linux, so the emulation
	needs to be improved, but I'm not sure how.  Another strange
	thing is that when a DCMD of 0 gets issued under i386 the
	controller returns OK, but on amd64 the context is messed
	up.
      -	Add a guard so the context has to be within the legal
	limit, so we get a reasonable error assertion versus a random
	panic.

It's going to be a challenge to figure out why MegaCli is not totally
happy and then sends some bogus commands.  This means that flashing
firmware via the Linux tool won't work since it generates a DCMD of
0 when it should be opening the firmware for a flash update.  Without
this problem flashing works fine.  This means there is no publicly
available tool to upgrade the RAID firmware under FreeBSD right now.

I plan to MFC all of the mfi changes to 6.X shortly.  This might not
include the SCSI pass-through changes.

Submitted by:	scottl
Reviewed by:	scottl
MFC after:	3 days
2007-06-04 16:39:22 +00:00
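The two guards boil down to something like the following sketch; the variable and constant names here are assumptions, not the driver's actual identifiers.

    /* Guard 1: MegaCli sometimes issues a DCMD opcode of 0; don't pass it
     * to the controller, just report success. */
    if (dcmd_opcode == 0) {
            ioc_status = MFI_STAT_OK;
            return (0);
    }

    /* Guard 2: reject an out-of-range context up front so we get a clear
     * error instead of a random panic later. */
    if (context >= max_fw_cmds)
            return (EINVAL);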
Alexander Motin
c35e19c430 No need to update link queue stats when the round-robin algorithm is enabled.
Approved by:	glebius (mentor)
2007-06-04 13:50:09 +00:00
Yaroslav Tykhiy
039e8df3cf Be robust to a bogus script specification or contents
when figuring out what the real interpreter is for an
interpreted command.  That is, check whether we can read
the script file in the first place and, if so, make sure
we got a valid shebang line from it.
2007-06-04 11:39:35 +00:00
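Conceptually the check is just: make sure the script can be read at all, then make sure the first bytes form a "#!" line. A self-contained userland-style sketch (none of these names come from the commit):

    #include <fcntl.h>
    #include <unistd.h>

    static int
    script_has_shebang(const char *path)
    {
            char buf[128];
            ssize_t n;
            int fd;

            fd = open(path, O_RDONLY);
            if (fd < 0)
                    return (0);             /* can't even read the script */
            n = read(fd, buf, sizeof(buf));
            close(fd);
            /* A valid shebang line starts with "#!". */
            return (n >= 2 && buf[0] == '#' && buf[1] == '!');
    }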
Pawel Jakub Dawidek
b166b92692 Reimplement traverse() helper function:
1. Pass locking flags to VFS_ROOT().
2. Check v_mountedhere while the vnode is locked.
3. Always return locked vnode on success.

Change 1 fixes a problem reported by Stephen M. Rumble - after the
zfs_vfsops.c,1.9 change, zfs_root() no longer locks the vnode
unconditionally and traverse() didn't pass the right lock type to
VFS_ROOT().  The result was that the kernel panicked when the .zfs/
directory was accessed via NFS.
2007-06-04 11:31:46 +00:00
Brian Somers
7344a290dc Now that tone & delay times are correct (independent of hz), adjust
playtone() so that it uses times of 1/100ths of a second.

Now 'time echo T60ABC >/dev/speaker' takes ~3 seconds.

MFC after:		2 weeks
Problem noted by:	dwmalone
2007-06-04 09:27:13 +00:00
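The unit handling behind this fix (and the spkr(4) duration fix in the next entry) is a one-line conversion: a duration given in 1/100ths of a second becomes a tick count via hz, so the wall-clock time no longer depends on the kernel's hz setting. A sketch:

    int duration_cs = 50;                       /* half a second, in 1/100 s */
    int duration_ticks = duration_cs * hz / 100;
    /* hz = 100  -> 50 ticks;  hz = 1000 -> 500 ticks;  both are 0.5 s */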
Brian Somers
b2424ac045 Speaker durations are specified in 1/100ths of a second according to
spkr(4).

PR:		70610, 67995
Submitted by:	dada at sbox dot tugraz dot at (modulo one fix)
MFC after:	2 weeks
2007-06-04 08:33:18 +00:00
Alan Cox
9211deca08 Add the machine-specific definitions for configuring the new physical
memory allocator.

Approved by:	re
2007-06-04 08:02:22 +00:00
Scott Long
a490742913 Track an update in the MPI headers that was missed earlier. 2007-06-04 06:18:07 +00:00
JINMEI Tatuya
5e9510e3b6 clean up the reassembly structures and routine:
- removed unused structure members
  - fixed a minor bug where the ECN code point might not be restored correctly

Approved by:	ume (mentor)
MFC after:	1 week
2007-06-04 06:06:35 +00:00
Pyun YongHyeon
185d3fe9e8 gem(4) supports altq(4) now. 2007-06-04 06:01:57 +00:00
Pyun YongHyeon
12fb0330d8 o Implemented Rx/Tx checksum offload. The simple checksum logic in
GEMs is unable to discriminate UDP from TCP packets, so it can
  generate a 0x0000 checksum value for a UDP datagram.  Because of that,
  UDP checksum offload is disabled by default.  You can enable it
  by setting the link0 flag with ifconfig(8).
o bus_dma(9) cleanup. It now correctly sets the number of required DMA
  segments and their sizes, and removes the incorrect use of the
  BUS_DMA_ALLOCNOW flag in static allocations done via bus_dmamem_alloc(9).
o Implemented ALTQ(9) support.
o Implemented Tx side bus_dmamap_load_mbuf_sg(9), which removes
  several bookkeeping chores originating from the call-back mechanism.
  Therefore gem_txdma_callback() was removed and its functionality
  was reimplemented in gem_load_txmbuf().
o Don't set the GEM_TD_START_OF_PACKET flag until all remaining mbuf
  chains are set.  I think it was a long-standing bug and it caused
  fluctuating interrupt/CPU usage patterns while a netperf test
  was in progress.  Previously it seems that we raced with the device.
  Because I don't have documentation for GEM I'm not sure this is
  correct, but almost all other documentation I have states these
  implications of setting the SOP mark in the descriptor ring (e.g. hme(4)).
o Borrowed gem_defrag() from ath(4), which is supposed to be much
  faster than m_defrag(9) since it doesn't need to defragment all
  mbuf chains.
o gem_load_txmbuf() was changed to allow the passed mbuf chains to be
  freed.  Callers of gem_load_txmbuf() correctly handle freed mbuf chains.
o In gem_start_locked(), added checks for the availability of Tx
  descriptors before trying to load DMA maps, which can save CPU
  cycles when the number of available descriptors is low.  Also, simplify
  the IFF_DRV_OACTIVE detection logic.
o Removed hard-coded function names in CTR macros and replaced them
  with __func__.
o Moved statistics counter register accesses to gem_tick() to reduce
  the number of PCI bus accesses.  There is no reason to update statistics
  counters in the interrupt handler.
o Removed an unnecessary call of gem_start_locked() in gem_ioctl().

Reviewed by:	grehan (initial version), marius (with improvements and suggestions)
Tested by:	grehan (ppc), marius(sparc64)
2007-06-04 06:01:04 +00:00
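The Tx-side bus_dmamap_load_mbuf_sg(9) pattern mentioned above typically looks like the following hedged sketch; the softc/txsoft field names and GEM_NTXSEGS are assumptions.

    bus_dma_segment_t segs[GEM_NTXSEGS];
    int error, nsegs;

    error = bus_dmamap_load_mbuf_sg(sc->sc_tdmatag, txs->txs_dmamap,
        *m_head, segs, &nsegs, BUS_DMA_NOWAIT);
    if (error == EFBIG) {
            /* Too many segments: defragment only this chain and retry. */
            struct mbuf *m = gem_defrag(*m_head, M_DONTWAIT, GEM_NTXSEGS);
            if (m == NULL) {
                    m_freem(*m_head);       /* caller sees the chain freed */
                    *m_head = NULL;
                    return (ENOBUFS);
            }
            *m_head = m;
            error = bus_dmamap_load_mbuf_sg(sc->sc_tdmatag, txs->txs_dmamap,
                *m_head, segs, &nsegs, BUS_DMA_NOWAIT);
    }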
Warner Losh
6e878bc765 Migrate from setting a CARD_OK flag in a shared word to setting its
own entry in the softc.  This should allow more of cbb_pci_intr() to
migrate to a new cbb_pci_filt() so that we don't have to run cbb's ISR
in almost every case we get an interrupt.  We can't just move
cbb_pci_intr into cbb_pci_filt because it does things that aren't safe
to do from a fast interrupt handler, err I mean from a filter.  This is
an important first step.

# I wonder if I need to make cardok volatile or not.
2007-06-04 05:59:44 +00:00
Scott Long
a54876ea20 Free the portinfo object on unload. 2007-06-04 04:35:04 +00:00
Warner Losh
16f89cb420 Don't register cb_func_filt if the client driver doesn't have a filter.
Ditto for the ISR.

Reviewed/Suggested by: simokawa-san
2007-06-04 03:13:24 +00:00
Darren Reed
c485ab2d8d Remove files no longer required to build IPFilter 2007-06-04 03:07:34 +00:00
Darren Reed
d7eeb25225 Merge IPFilter 4.1.23 back to HEAD
See src/contrib/ipfilter/HISTORY for details of changes since 4.1.13
2007-06-04 02:54:36 +00:00