138565 Commits

Author SHA1 Message Date
rpaulo
be69e8bcc6 Actually, I was looking at the wrong Linux .c file. Set INIT2 to its
previous value.
While there, lower the delay for the mysterious key.
2008-04-07 12:58:43 +00:00
rwatson
cb53c63d17 Add further TCP inpcb locking assertions to some TCP input code paths.
MFC after:	1 month
2008-04-07 12:41:45 +00:00
rpaulo
9c842efae3 * Add missing #else in the #ifdef DEBUG section.
* Fix the logic in asmc_init().
* Change the INIT2 constant to reflect the same change in the Linux driver.
2008-04-07 12:09:59 +00:00
rpaulo
aef67c0d53 "Prettyfy" numbers in hexadecimal. No functional change. 2008-04-07 11:38:42 +00:00
rpaulo
e998ecd567 Change the EXAMPLE section to reflect reality (ISA -> ACPI). 2008-04-07 11:27:16 +00:00
rpaulo
44e0ecaea4 Remove isa_if.h. 2008-04-07 11:26:13 +00:00
rpaulo
e654fd2681 The SMC is represented in the ACPI tables, so we can completely remove
the dependency on isa. We are now an acpi child.

Also:
	* Add compile time debugging activation
	* Increase the delay for the SMS init flag.
2008-04-07 11:22:12 +00:00
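
For illustration, a minimal sketch of what attaching a newbus driver as an
acpi child (rather than an isa child) looks like; the softc, method names and
stub bodies below are hypothetical, not the actual asmc(4) code.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/module.h>
    #include <sys/bus.h>

    struct sketch_softc {               /* hypothetical softc */
        device_t    sc_dev;
    };

    static int
    sketch_probe(device_t dev)
    {
        /* Stub: a real probe would match the device's ACPI ID. */
        return (ENXIO);
    }

    static int
    sketch_attach(device_t dev)
    {
        /* Stub: map the SMC resources, set up sysctls, etc. */
        return (0);
    }

    static device_method_t sketch_methods[] = {
        DEVMETHOD(device_probe,     sketch_probe),
        DEVMETHOD(device_attach,    sketch_attach),
        { 0, 0 }
    };

    static driver_t sketch_driver = {
        "asmc",
        sketch_methods,
        sizeof(struct sketch_softc)
    };

    static devclass_t sketch_devclass;

    /* The parent bus is now "acpi" instead of "isa". */
    DRIVER_MODULE(asmc, acpi, sketch_driver, sketch_devclass, 0, 0);
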
rpaulo
0db35b0777 Add opt_intr_filter.h. 2008-04-07 11:08:45 +00:00
alc
f2ea5d4883 Update pmap_page_wired_mappings() so that it counts 2/4MB page mappings. 2008-04-07 07:38:02 +00:00
rwatson
6d3db5778b Maintain and observe a ZBUF_FLAG_IMMUTABLE flag on zero-copy BPF
buffer kernel descriptors, which is used to allow the buffer
currently in the BPF "store" position to be assigned to userspace
when it fills, even if userspace hasn't acknowledged the buffer
in the "hold" position yet.  To implement this, notify the buffer
model when a buffer becomes full, and check that the store buffer
is writable, not just for it being full, before trying to append
new packet data.  Shared memory buffers will be assigned to
userspace at most once per fill, be it in the store or in the
hold position.

This removes the restriction that at most one shared memory buffer can
be owned by userspace, reducing the chances that userspace will
need to call select() after acknowledging one buffer in order to
wait for the next buffer when under high load.  This more fully
realizes the goal of zero system calls in order to process a
high-speed packet stream from BPF.

Update bpf.4 to reflect that both buffers may be owned by userspace
at once; caution against assuming this.
2008-04-07 02:51:00 +00:00
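
For context, a rough userland sketch of how a zero-copy consumer checks
buffer ownership and acknowledges a buffer; the field names follow the
bpf_zbuf_header given in bpf(4), but the ownership test and the omission of
memory barriers are simplifying assumptions, not the canonical protocol.

    #include <sys/types.h>
    #include <sys/time.h>
    #include <net/bpf.h>

    /* Buffer belongs to userspace while the generation numbers differ. */
    static int
    zbuf_owned_by_user(struct bpf_zbuf_header *bzh)
    {
        return (bzh->bzh_kernel_gen != bzh->bzh_user_gen);
    }

    /* Acknowledge the buffer so the kernel may reuse it. */
    static void
    zbuf_ack(struct bpf_zbuf_header *bzh)
    {
        bzh->bzh_user_gen = bzh->bzh_kernel_gen;
    }
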
rwatson
d566093a40 Coerce if_loop.c in the general direction of style(9):
- Use ANSI function declarations
- Remove use of 'register' keyword
- Prefer style(9) return parens, white space

MFC after:	1 month
2008-04-07 01:43:30 +00:00
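
As a before/after sketch of the three items above (the function is made up):

    /* Before: K&R declaration, 'register', bare return expression. */
    int
    old_sum(a, b)
        register int a, b;
    {
        return a + b;
    }

    /* After: ANSI declaration, no 'register', style(9) return parens. */
    int
    new_sum(int a, int b)
    {

        return (a + b);
    }
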
truckman
3ab6955dbc vfs_syscalls.c 1.452 mistakenly swapped the behavior of chown() and lchown(). 2008-04-07 00:29:32 +00:00
kan
1e160a4b21 Fix apparent mis-paste in previous check-in by author. 2008-04-06 22:08:17 +00:00
attilio
da8539bd62 Commit manpages for lockmgr_args_rw(9) and lockmgr_rw(9). 2008-04-06 21:22:12 +00:00
rwatson
00684a83a1 In in_pcbnotifyall() and in6_pcbnotify(), use LIST_FOREACH_SAFE() and
eliminate unnecessary local variable caching of the list head pointer,
making the code a bit easier to read.

MFC after:	3 weeks
2008-04-06 21:20:56 +00:00
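
A minimal sketch of the queue(3) idiom referred to above; the structure and
list are hypothetical, not the inpcb code itself.

    #include <sys/queue.h>
    #include <stdlib.h>

    struct entry {
        int                 value;
        LIST_ENTRY(entry)   link;
    };

    LIST_HEAD(entry_head, entry);

    static void
    drop_all(struct entry_head *head)
    {
        struct entry *e, *tmp;

        /* Safe traversal: 'e' may be freed without breaking the walk. */
        LIST_FOREACH_SAFE(e, head, link, tmp) {
            LIST_REMOVE(e, link);
            free(e);
        }
    }
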
brooks
98cf7d12e7 Fix a stupid typo.
Reviewed by:	bz
2008-04-06 20:39:33 +00:00
attilio
b841937e98 Bump __FreeBSD_version in order to reflect lockmgr_rw() and
lockmgr_args_rw() introduction.
2008-04-06 20:27:54 +00:00
attilio
07441f19e1 Optimize lockmgr in order to get rid of the pool mutex interlock, the
state-transitioning flags and the msleep(9) calls.
Instead, use an algorithm very similar to what sx(9) and rwlock(9)
already do, with direct access to the sleepqueue(9) primitive.

In order to avoid writer starvation, a mechanism very similar to what
rwlock(9) now uses is implemented, with the corresponding per-thread
counter of shared lockmgr locks.

This patch also adds two new functions to the lockmgr KPI: lockmgr_rw()
and lockmgr_args_rw().  These are like the two "normal" versions, but
they both accept an rwlock as the interlock.  In order to implement
this, the general lockmgr function "__lockmgr_args()" has been built on
top of the generic lock layer.  It supports all the blocking primitives,
but currently only these two wrappers exist.

The patch drops support for WITNESS for the moment, but it will probably
be added soon.  Also, there is a small race in the draining code which
is also present in the current CVS stock implementation: if some
sharers, once they wake up, are on the runqueue, they can contend for
the lock with the exclusive drainer.  This is hard to fix, but the code
committed now mitigates this issue much better than the (past) CVS
version.  In addition, the KA_HELD and KA_UNHELD assertions have been
made mute assertions because they are dangerous and will soon no longer
be supported.

In order to avoid namespace pollution, stack.h is split into two
parts: one which includes only the "struct stack" definition (_stack.h)
and one defining the KPI.  In this way, the newly added _lockmgr.h can
just include _stack.h.

The kernel ABI is heavily changed by this commit (the committed version
of "struct lock" is a lot smaller than the previous one) and the KPI is
broken by the introduction of lockmgr_rw() / lockmgr_args_rw(), so
manpages and __FreeBSD_version will be updated accordingly.

Tested by:      kris, pho, jeff, danger
Reviewed by:    jeff
Sponsored by:   Google, Summer of Code program 2007
2008-04-06 20:08:51 +00:00
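
A hedged sketch of what the new interlock variant looks like to a caller,
assuming a lockmgr_rw(9) signature of (struct lock *, u_int flags,
struct rwlock *); the surrounding structure and protocol are hypothetical,
and initialization via lockinit(9)/rw_init(9) is omitted.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/lockmgr.h>
    #include <sys/rwlock.h>

    struct obj {
        struct rwlock   o_ilk;      /* interlock for lookup state */
        struct lock     o_lock;     /* long-term lockmgr lock */
    };

    static void
    obj_lock_exclusive(struct obj *o)
    {

        rw_wlock(&o->o_ilk);
        /* ... revalidate lookup state under the interlock ... */
        /* LK_INTERLOCK: lockmgr drops the rwlock before returning. */
        lockmgr_rw(&o->o_lock, LK_EXCLUSIVE | LK_INTERLOCK, &o->o_ilk);
    }
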
alc
2f4904816f Introduce vm_reserv_reclaim_contig(). This function is used by
contigmalloc(9) as a last resort to steal pages from an inactive,
partially-used superpage reservation.

Rename vm_reserv_reclaim() to vm_reserv_reclaim_inactive() and
refactor it so that a separate subroutine is responsible for breaking
the selected reservation.  This subroutine is also used by
vm_reserv_reclaim_contig().
2008-04-06 18:09:28 +00:00
mav
254de061de Rewrite the node's r/w/q-lock semantics using only atomics instead of a
combination of a mutex and atomics. The mutex is now used only for queue
protection. Also avoid unneeded extra swi scheduling calls.
2008-04-06 15:26:32 +00:00
dfr
dd48f773da Call listen(2) on bound tcp sockets before passing them to svc_tli_create. 2008-04-06 13:52:17 +00:00
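
A sketch of the ordering this implies for callers: create, bind, and
listen(2) on the TCP socket before handing it to svc_tli_create(); the helper
below is illustrative only and trims error reporting.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netconfig.h>
    #include <rpc/rpc.h>
    #include <unistd.h>

    static SVCXPRT *
    make_tcp_transport(struct netconfig *nconf, struct sockaddr_in *sin)
    {
        SVCXPRT *xprt;
        int fd;

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return (NULL);
        if (bind(fd, (struct sockaddr *)sin, sizeof(*sin)) < 0 ||
            listen(fd, SOMAXCONN) < 0) {
            close(fd);
            return (NULL);
        }
        /* The transport now sees a socket that is already listening. */
        xprt = svc_tli_create(fd, nconf, NULL, 0, 0);
        return (xprt);
    }
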
jeff
d1c199f415 - Correct a major error introduced in the per-cpu timeout commit. Sleep
and wakeup require the same wait channel to function properly.

Found by:	kris
Pointy hat:	me
2008-04-06 11:08:49 +00:00
cognet
9fe523332b Remove bus_space_generic.c from the per-platform files. Having it in the
per-cpu files should be enough.
2008-04-05 21:57:11 +00:00
cognet
7bb418c25a Add bus_space_generic.c for the i81342 as well. 2008-04-05 21:51:11 +00:00
imp
095f4ce1af Turn a tab into a space. This fixes a misalignment for ls -l.
Tabs Noticed by: Antoine Brodin
2008-04-05 21:26:25 +00:00
jhb
68917b32fc Move INTR_FILTER from opt_global.h to its own header. 2008-04-05 20:13:15 +00:00
jhb
79918c45a6 Add a MI intr_event_handle() routine for the non-INTR_FILTER case. This
allows all the INTR_FILTER #ifdef's to be removed from the MD interrupt
code.
- Rename the intr_event 'eoi', 'disable', and 'enable' hooks to
  'post_filter', 'pre_ithread', and 'post_ithread' to be less x86-centric.
  Also, add a comment describing what the MI code expects them to do.
- On amd64, i386, and powerpc this is effectively a NOP.
- On arm, don't bother masking the interrupt unless the ithread is
  scheduled in the non-INTR_FILTER case to match what INTR_FILTER did.
  Also, don't bother unmasking the interrupt in the post_filter case if
  we never masked it.  The INTR_FILTER case had been doing this by using
  arm_unmask_irq as the post_filter (formerly 'eoi') hook.
- On ia64, stray interrupts are now masked for the non-INTR_FILTER case.
  They were already masked in the INTR_FILTER case.
- On sparc64, use a NULL pre_ithread hook and use intr_enable_eoi() for
  both the 'post_filter' and 'post_ithread' hooks to match what the
  non-INTR_FILTER code did.
- On sun4v, retire the ithread wrapper hack by using an appropriate
  'post_ithread' hook instead (it's what 'post_ithread'/'enable' was
  designed to do even in 5.x).

Glanced at by:	piso
Reviewed by:	marius
Requested by:	marius [1], [5]
Tested on:	amd64, i386, arm, sparc64
2008-04-05 19:58:30 +00:00
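
For orientation, a sketch of the three renamed hook roles as the MI code
expects them; the PIC helpers are hypothetical stand-ins and the MD
registration call is omitted.

    /* Hypothetical PIC helpers standing in for MD interrupt controller code. */
    static void my_pic_mask_and_eoi(void *src) { (void)src; }
    static void my_pic_unmask(void *src)       { (void)src; }
    static void my_pic_eoi(void *src)          { (void)src; }

    /* pre_ithread: an ithread was scheduled; mask and EOI the source. */
    static void
    example_pre_ithread(void *arg)
    {
        my_pic_mask_and_eoi(arg);
    }

    /* post_ithread: the ithread has finished; unmask the source. */
    static void
    example_post_ithread(void *arg)
    {
        my_pic_unmask(arg);
    }

    /* post_filter: the filter handled everything; just EOI. */
    static void
    example_post_filter(void *arg)
    {
        my_pic_eoi(arg);
    }
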
bmah
bcf7984652 New release notes: ULE default (+MFC), aac(4) >2TB volume support
(+MFC), kernel-mode NLM, adduser(8) -M (+MFC), ls(1) -D, nc(1) -O,
pkg_sign/pkg_check gone.

MFCs noted:  textdump(4), cmx(4), uslcom(4),
2008-04-05 18:11:39 +00:00
jhb
5279835925 During attach on some de(4) adapters the driver sends out a test packet as
part of detecting the media.  Explicitly ensure that we don't send it to
bpf(4) as bpf(4) isn't set up yet.  This worked by accident before the bpf
interface stuff was reworked to avoid other races (bpf_peers_present, etc.)
but now it needs an explicit check to avoid a panic.

MFC after:	3 days
PR:		kern/120915
2008-04-05 17:24:44 +00:00
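
The check amounts to the usual driver-side tap idiom sketched below; the
function and its context are illustrative, not the de(4) code itself.

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/if_var.h>
    #include <net/bpf.h>

    static void
    tap_test_packet(struct ifnet *ifp, struct mbuf *m)
    {

        /* Only hand the frame to bpf(4) if a listener is attached. */
        if (bpf_peers_present(ifp->if_bpf))
            bpf_mtap(ifp->if_bpf, m);
    }
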
ceri
7dac1581a7 Correct typo. 2008-04-05 15:51:14 +00:00
dfr
a38ef19667 On i386, don't try to do network-type stuff if the device name isn't pxeN. 2008-04-05 15:03:29 +00:00
takawata
4d340bc45f GPE lock may recurse on resume path. 2008-04-05 14:21:01 +00:00
dfr
b7878b140b Allow for a zero length 'loader'. 2008-04-05 10:26:20 +00:00
alc
871b77b7f6 Eliminate an unnecessary test from vm_phys_unfree_page(). 2008-04-05 05:02:53 +00:00
yongari
47012d1725 Add support for IC Plus IP1001 PHY.
Tested by:	Stuart Fraser < stuart AT stuartfraser DOT net >
2008-04-05 00:52:07 +00:00
imp
98be5d42cb MFp4(mips2-jnpr):
Add mips support.
2008-04-04 21:35:13 +00:00
imp
6ee39a67b3 Add mips support. 2008-04-04 21:33:41 +00:00
imp
347f099cb1 MFp4 (mips2-jnpr):
o Default to -O on mips as well as arm.  -O2 has been strongly implicated
  in many problems in the past, so we're taking a conservative approach
  until the problems are well understood.
2008-04-04 21:12:40 +00:00
imp
ab840fe874 MFp4: Add mips support for dynamic linking.
This code came from the merged mips2 and Juniper mips repositories.
Warner Losh, Randall Seager, Oleksandr Tymoshenko and Olivier Houchard
worked to merge, debug and integrate this code.  This code may also
contain code derived from NetBSD.
2008-04-04 20:59:26 +00:00
imp
64685606b6 If you build a compiler with TARGET_BIG_ENDIAN, and then try to build
a little endian kernel, things break.  Be explicit about the endian
choice by setting it in the little endian case as well.
2008-04-04 19:33:09 +00:00
alc
927cf7f228 Update a comment to vm_map_pmap_enter(). 2008-04-04 19:14:58 +00:00
alc
067dba5f97 Reintroduce UMA_SLAB_KMAP; however, change its spelling to
UMA_SLAB_KERNEL for consistency with its sibling UMA_SLAB_KMEM.
(UMA_SLAB_KMAP met its original demise in revision 1.30 of
vm/uma_core.c.)  UMA_SLAB_KERNEL is now required by the jumbo frame
allocators.  Without it, UMA cannot correctly return pages from the
jumbo frame zones to the VM system because it resets the pages' object
field to NULL instead of the kernel object.  In more detail, the jumbo
frame zones are created with the option UMA_ZONE_REFCNT.  This causes
UMA to overwrite the pages' object field with the address of the slab.
However, when UMA wants to release these pages, it doesn't know how to
restore the object field, so it sets it to NULL.  This change teaches
UMA how to reset the object field to the kernel object.

Crashes reported by: kris
Fix tested by: kris
Fix discussed with: jeff
MFC after: 6 weeks
2008-04-04 18:41:12 +00:00
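
For context, a sketch of how a zone ends up with the refcount behaviour
described above; the zone name, size and setup hook are hypothetical rather
than the real jumbo frame zones.

    #include <sys/param.h>
    #include <vm/uma.h>

    static uma_zone_t example_zone;

    static void
    example_zone_setup(void)
    {

        /*
         * UMA_ZONE_REFCNT is what makes UMA repurpose the pages' object
         * field, and hence why UMA_SLAB_KERNEL is needed to restore it
         * when the pages go back to the VM system.
         */
        example_zone = uma_zcreate("example_jumbo", 9 * 1024,
            NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_REFCNT);
    }
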
imp
8353654e96 Fix stupid typo 2008-04-04 18:22:16 +00:00
alc
b1ed68eb9c Eliminate an unnecessary test and its misleading comment from pmap_enter(). 2008-04-04 18:00:22 +00:00
raj
cd6e8c4dc8 Make kernel.tramp build properly on ARM9E.
Reviewed by:	imp
Approved by:	cognet (mentor)
2008-04-04 17:35:24 +00:00
imp
30afe603df Add note about PZERO being obsolete, because so much code uses it.
Feel free to improve the verbiage, since this was a compromise between
conflicting feedback I got on my original version.
2008-04-04 16:59:58 +00:00
jeff
d6dabc3153 - Add sysctls at debug.rwlock to control the behavior of the speculative
   spinning when readers hold a lock.  This spinning is speculative because,
   unlike the write case, we cannot test whether the owners are running.
 - Add speculative read spinning for readers who are blocked by pending
   writers while a read lock is still held.  This allows the thread to
   spin until the write lock succeeds, after which it may spin until the
   writer has released the lock.  This prevents excessive context switches
   when readers and writers both hold the lock for brief periods.

Sponsored by:	Nokia
2008-04-04 10:00:46 +00:00
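
A sketch of how knobs under debug.rwlock are typically declared; the knob
names and variables below are hypothetical, not necessarily the ones this
commit added.

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int example_retries = 10;
    static int example_loops = 10000;

    SYSCTL_DECL(_debug);
    SYSCTL_NODE(_debug, OID_AUTO, rwlock, CTLFLAG_RD, NULL,
        "rwlock debugging");
    SYSCTL_INT(_debug_rwlock, OID_AUTO, retries, CTLFLAG_RW,
        &example_retries, 0, "retries before blocking when read-locked");
    SYSCTL_INT(_debug_rwlock, OID_AUTO, loops, CTLFLAG_RW,
        &example_loops, 0, "spins per retry");
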
dfr
fd80ed9056 Add some compatibility code so that software which is built to use the new
struct flock with the l_sysid member can work properly on an old kernel which
doesn't support l_sysid.

Sponsored by:	Isilon Systems
2008-04-04 09:43:03 +00:00
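
A sketch of the kind of translation such compatibility code has to do: copy
the fields the old layout knows about and drop l_sysid.  The old-layout
structure below is written out by hand for illustration, not taken from the
headers.

    #include <sys/types.h>
    #include <fcntl.h>
    #include <string.h>

    struct flock_old_layout {       /* hypothetical stand-in */
        off_t   l_start;
        off_t   l_len;
        pid_t   l_pid;
        short   l_type;
        short   l_whence;
    };

    static void
    flock_new_to_old(const struct flock *fl, struct flock_old_layout *ofl)
    {

        memset(ofl, 0, sizeof(*ofl));
        ofl->l_start = fl->l_start;
        ofl->l_len = fl->l_len;
        ofl->l_pid = fl->l_pid;
        ofl->l_type = fl->l_type;
        ofl->l_whence = fl->l_whence;
        /* fl->l_sysid has no counterpart in the old layout; dropped here. */
    }
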
kib
dae02901d4 A temporary workaround for calling vget() without a lock type in
fdesc_allocvp(). The caller of fdesc_allocvp() expects that the
returned vnode is not reclaimed. Lock the vnode exclusively and drop
the lock afterwards.

Reported by:	pho
Reviewed by:	jeff
2008-04-04 09:37:57 +00:00
ru
15c3f17df0 - Normalize usage(), add "ddb pathname" syntax.
- Revise the manpage.
2008-04-04 07:31:43 +00:00