80442 Commits

Author SHA1 Message Date
Konstantin Belousov
50cfe7fa50 Remove the OBJ_CLEANING flag. vfs_setdirty_locked_object() is the only
consumer of the flag, and it used the flag because OBJ_MIGHTBEDIRTY
was cleared early in vm_object_page_clean, before the cleaning pass
was done. This is no longer true after r216799.

 Moreover, since OBJ_CLEANING is a flag and not a counter, it could
be reset prematurely when parallel vm_object_page_clean() calls are
performed.

Reviewed by:	alc (as a part of the bigger patch)
MFC after:	1 month (after r216799 is merged)
2010-12-29 22:26:49 +00:00
Alan Cox
fef87167c9 There is no point in vm_contig_launder{,_page}() flushing held pages;
instead, skip over them.  As long as a page is held, it can't be reclaimed by
contigmalloc(M_WAITOK).  Moreover, a held page may be undergoing
modification, e.g., vmapbuf(), so even if the hold were released before the
completion of contigmalloc(), the page might have to be flushed again.
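
For illustration, a condensed sketch of the resulting scan behavior (the
surrounding loop follows the vm_page queue conventions of this era):

    /* condensed sketch: skip held pages instead of flushing them */
    TAILQ_FOREACH(m, &vm_page_queues[queue].pl, pageq) {
            if (m->hold_count != 0)
                    continue;       /* can't be reclaimed while held */
            /* ... otherwise attempt to launder the page as before ... */
    }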

MFC after:	3 weeks
2010-12-29 20:35:36 +00:00
Attilio Rao
3d7acbbabf Fix several callout migration races:
- Problem1:
   Hypothesis: thread1 is doing a callout_reset_on() within its
   callout handler, intending to implicitly or explicitly migrate
   the callout.  thread2 is draining the callout.

   Thesis:
   * thread1 calls callout_lock() and locks the old callout cpu
   * thread1 performs the checks in the first path of the
     callout_reset_on()
   * thread1 hits this codepiece:
       /*
        * If the lock must migrate we have to check the state again as
        * we can't hold both the new and old locks simultaneously.
        */
       if (c->c_cpu != cpu) {
               c->c_cpu = cpu;
               CC_UNLOCK(cc);
               goto retry;
       }

     which means it will drop the lock and 'retry'
   * thread2 calls callout_lock() and locks the new callout cpu.
     thread1 spins on the new lock and will not make progress for
     the moment.
   * thread2 checks that the callout is not pending (as callout is
     currently running) and that it is not on cc->cc_curr (because cc
     now refers to the new callout and the callout is running on the
     old callout cpu) thus it thinks it is done and returns.
   * thread1 will now acquire the lock and then add the callout
     to the new callout cpu queue

   That is an obvious race: callout_stop() falsely reports the
   callout stopped or, worse, callout_drain() falsely returns
   while the callout is still in use.
 - Solution1:
   Fixing this problem would require, in general, locking both
   callout cpus at once while switching the c_cpu field and avoiding
   cyclic deadlocks between the callout cpu locks.
   The concept of CPUBLOCK is then introduced (working more or less
   like blocked_lock for the thread_lock() function), meaning:
   "in callout_lock(), spin while c->c_cpu equals CPUBLOCK".  That
   way the "original" callout cpu, referred to in the above mentioned
   code snippet, will remain blocked until the lock handover is over
   and the critical path will remain covered.
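
   A condensed sketch of the resulting callout_lock() behavior
   (identifiers approximate the real code and should be treated as
   assumptions):

       static struct callout_cpu *
       callout_lock(struct callout *c)
       {
               struct callout_cpu *cc;
               int cpu;

               for (;;) {
                       cpu = c->c_cpu;
                       if (cpu == CPUBLOCK) {
                               /* Migration in progress; wait for handover. */
                               cpu_spinwait();
                               continue;
                       }
                       cc = CC_CPU(cpu);
                       CC_LOCK(cc);
                       if (cpu == c->c_cpu)
                               break;  /* c_cpu is stable while locked */
                       CC_UNLOCK(cc);  /* moved under us; retry */
               }
               return (cc);
       }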

 - Problem2:
   Having the callout currently executing on a specific callout cpu
   while simultaneously pending on another callout cpu (as can happen
   with the current code) breaks, at least, the assumption that
   callout_drain() returns only once the callout can no longer be
   referenced.
 - Solution2:
   Callout migration is deferred if the current callout is already
   under execution.
   The best place to do that is in softclock() and new members are
   added to the callout cpu structure in order to record that a
   pending migration has been requested. That is necessary because
   the callout cannot be trusted (i.e., assumed not freed) 100% of
   the time after the execution of the callout handler.
   In the "deferred migration" case, CPUBLOCK prevents the callout
   from being freed, stopping any possible callout_stop() and
   callout_drain() activity until the migration is actually
   performed.
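
   A sketch of the deferral in callout_reset_on(); the cc_migration_*
   member names are assumptions:

       /* The handler is currently running: defer to softclock(). */
       if (cc->cc_curr == c) {
               cc->cc_migration_cpu = cpu;
               cc->cc_migration_ticks = to_ticks;
               cc->cc_migration_func = ftn;
               cc->cc_migration_arg = arg;
               CC_UNLOCK(cc);
               return (cancelled);
       }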

 - Problem3:
   There is a further race in callout_drain().
   In order to avoid a race between sleepqueue lock and callout cpu
   spinlock, in _callout_stop_safe(), the callout cpu lock is dropped,
   the sleepqueue lock is acquired and a new callout cpu lookup is
   performed.  Note that the channel used for locking the sleepqueue is
   obtained from the "current" callout cpu (&cc->cc_waiting).
   If the callout migrated in the meanwhile, callout_drain() will end up
   using the wrong wchan for the sleepqueue (the locked one will be the
   old one, while the new one will not really be locked), leading to a
   lock leak and racy access to the sleepqueue.
 - Solution3:
   It is enough to check whether a migration happened between
   acquiring the sleepqueue lock and the new callout cpu lock and,
   if so, unwind all of those and try again.
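
   A condensed sketch of the unwind-and-retry check in
   _callout_stop_safe() (control flow heavily simplified):

       struct callout_cpu *cc, *new_cc;

       for (;;) {
               cc = callout_lock(c);           /* old callout cpu, locked */
               CC_UNLOCK(cc);
               sleepq_lock(&cc->cc_waiting);
               new_cc = callout_lock(c);       /* look the cpu up again */
               if (new_cc == cc)
                       break;                  /* no migration; wchan valid */
               /* Migrated in the meanwhile: unwind both and try again. */
               CC_UNLOCK(new_cc);
               sleepq_release(&cc->cc_waiting);
       }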

These problems can lead to deadly races on moderate (4-way) SMP
environments, leading to easy panics or deadlocks.
The reporter's 24-way machine could easily panic, with a completely
normal workload, almost daily.
gianni@ kindly wrote the following proof-of-concept, which can
panic a FreeBSD machine in less than one hour on smaller SMP systems:
http://www.freebsd.org/~attilio/callout/test.c

Reported by:	Nicholas Esborn <nick at desert dot net>, DesertNet
In collaboration with:	gianni, pho, Nicholas Esborn
Reviewed by:	jhb
MFC after:	1 week (*)

* Usually, I would aim for a larger MFC timeout, but I really want this
  in before 8.2-RELEASE, thus re@ accepted a shorter timeout as a special
  case for this patch
2010-12-29 18:17:36 +00:00
Marius Strobl
4d05e7b184 On UltraSPARC-III+ and greater take advantage of ASI_ATOMIC_QUAD_LDD_PHYS,
which takes a physical address instead of a virtual one, for loading TTEs
of the kernel TSB so we no longer need to lock the kernel TSB into the dTLB,
which only has a very limited number of lockable dTLB slots. The net result
is that we now basically can handle a kernel TSB of any size and no longer
need to limit the kernel address space based on the number of dTLB slots
available for locked entries. Consequently, other parts of the trap handlers
now also only access the kernel TSB via its physical address in order
to avoid nested traps, as does the PMAP bootstrap code as we haven't taken
over the trap table at that point yet. Apart from that, the kernel TSB now
is accessed via a direct mapping when we are otherwise taking advantage of
ASI_ATOMIC_QUAD_LDD_PHYS so no further code changes are needed. Most of this
is implemented by extending the patching of the TSB addresses and mask as
well as the ASIs used to load it into the trap table so the runtime overhead
of this change is rather low. Currently the use of ASI_ATOMIC_QUAD_LDD_PHYS
is not yet enabled on SPARC64 CPUs due to lack of testing and due to the
fact it might require minor adjustments there.
Theoretically it should be possible to use the same approach also for the
user TSB, which already is not locked into the dTLB, avoiding nested traps.
However, for reasons I don't understand yet OpenSolaris only does that with
SPARC64 CPUs. On the other hand I think that also addressing the user TSB
physically and thus avoiding nested traps would get us closer to sharing
this code with sun4v, which only supports trap level 0 and 1, so eventually
we could have a single kernel which runs on both sun4u and sun4v (as does
Linux and OpenBSD).

Developed at and committed from:	27C3
2010-12-29 16:59:33 +00:00
Marius Strobl
62cf53e2ea - Move the macros for generating load and store instructions to asmacros.h
so they can be shared by different source files, and extend them with
  a variant for atomic compare and swap.
- Consistently use EMPTY.
2010-12-29 14:14:50 +00:00
Marius Strobl
b5b0068b4b Rename the "xor" parameter to "xorval" as the former is a reserved keyword
in C++.

Submitted by:	gahr
2010-12-29 14:11:46 +00:00
Konstantin Belousov
3280870dca Move the increment of vm object generation count into
vm_object_set_writeable_dirty().

Fix an issue where a restart of the scan in vm_object_page_clean() did
not remove write permissions for newly added pages, or for already
scanned pages whose mappings became writeable due to a fault.
Merge the two loops in vm_object_page_clean(), doing the removal of
write permission and the cleaning in the same loop. The restart of the
loop then correctly downgrades writeable mappings.

Fix an issue where a second caller to msync() might actually return
before the first caller had actually completed flushing the
pages. Clear the OBJ_MIGHTBEDIRTY flag after the cleaning loop, not
before.

Calls to pmap_is_modified() are not needed after pmap_remove_write()
there.
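
A heavily condensed sketch of the merged loop (helper names are
assumptions; the real function also handles clustering and busy pages):

    rescan:
            TAILQ_FOREACH(p, &object->memq, listq) {
                    pmap_remove_write(p);   /* downgrade writeable mappings */
                    vm_page_test_dirty(p);
                    if (p->dirty == 0)
                            continue;
                    /* A raced flush restarts the whole scan, so pages that
                       became writeable again are downgraded once more. */
                    if (!vm_object_page_collect_flush(object, p, pagerflags))
                            goto rescan;
            }
            vm_object_clear_flag(object, OBJ_MIGHTBEDIRTY); /* after the loop */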

Proposed, reviewed and tested by:	alc
MFC after:	1 week
2010-12-29 12:53:53 +00:00
Konstantin Belousov
8c2a54de80 Add kernel side support for BIO_DELETE/TRIM on UFS.
The FS_TRIM fs flag indicates that the administrator requested the
issuing of TRIM commands for the volume. UFS will only send the command
to disk if the disk reports the GEOM::candelete attribute.

Since the disk queue may be reordered, the data block is marked as free
in the bitmap only after the TRIM command has completed. Due to the need
to sleep waiting for the i/o to finish, the TRIM bio_done routine
schedules a taskqueue to set the bitmap bit.
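
A sketch of the completion path; the function and structure names here
are assumptions:

    static void blkfree_task(void *arg, int pending);   /* bitmap update,
                                                           not shown */

    struct trim_done_ctx {
            struct task     task;
            /* identity of the block(s) to free goes here */
    };

    static void
    blkfree_trim_done(struct bio *bp)       /* bio_done for the TRIM bio */
    {
            struct trim_done_ctx *t = bp->bio_caller1;

            /* bio_done context may not sleep, so defer the (possibly
               sleeping) bitmap update to a taskqueue thread. */
            TASK_INIT(&t->task, 0, blkfree_task, t);
            taskqueue_enqueue(taskqueue_thread, &t->task);
            g_destroy_bio(bp);
    }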

Based on the patch by:	mckusick
Reviewed by:	mckusick, pjd
Tested by:	pho
MFC after:	1 month
2010-12-29 12:25:28 +00:00
Konstantin Belousov
d2d6c59245 Move the definition of mkdirlisthd from header to C file.
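
The pattern, generically (the actual softdep list type may differ):

    /* header: declaration only */
    extern struct mkdirlist mkdirlisthd;

    /* C file: the single definition */
    struct mkdirlist mkdirlisthd;
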
Reviewed by:	mckusick
Tested by:	pho
2010-12-29 12:16:06 +00:00
Konstantin Belousov
d91e813c7b Add reporting of GEOM::candelete BIO_GETATTR for md(4) and geom_disk(4).
A non-zero attribute value means that the device supports BIO_DELETE.
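
A sketch of answering the query in a GEOM start method;
g_handleattr_int() completes the bio itself when the attribute name
matches, and the provider name here is hypothetical:

    static void
    foo_start(struct bio *bp)
    {
            int candelete = 1;      /* non-zero: BIO_DELETE is supported */

            if (bp->bio_cmd == BIO_GETATTR &&
                g_handleattr_int(bp, "GEOM::candelete", candelete))
                    return;         /* attribute answered and delivered */
            /* handle the other commands and attributes here */
            g_io_deliver(bp, EOPNOTSUPP);
    }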

Suggested and reviewed by:	pjd
Tested by:	pho
MFC after:	1 week
2010-12-29 12:11:07 +00:00
Konstantin Belousov
c44d423ed8 Add a sysctl, vm.md_malloc_wait; a non-zero value switches malloc-backed
md(4) to using M_WAITOK malloc calls.

M_NOWAIT allocations may fail even when enough memory could be freed,
just not immediately. E.g., SU UFS becomes quite unhappy when a metadata
write returns an error, which would happen for a failed malloc() call.
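
A sketch of the knob and its use; the md(4) internals are condensed and
the malloc type is illustrative:

    static int md_malloc_wait = 0;
    SYSCTL_INT(_vm, OID_AUTO, md_malloc_wait, CTLFLAG_RW, &md_malloc_wait, 0,
        "Use M_WAITOK for malloc-backed md(4) allocations");

    /* at an allocation site: */
    p = malloc(sc->sectorsize, M_MD, md_malloc_wait ? M_WAITOK : M_NOWAIT);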

Reported and tested by:	pho
MFC after:	1 week
2010-12-29 11:39:15 +00:00
Konstantin Belousov
abf6c181e4 Use a proper type for the variable holding the summary size of the inode
data. Otherwise, on 32-bit systems, an unlinked inode whose size was a
multiple of 4GB was not truncated, causing corruption.

Reported by:	brucec
Reviewed by:	mckusick
Tested by:	pho
2010-12-29 11:19:39 +00:00
David Xu
c8e368a933 - Following r216313, sched_unlend_user_prio() is no longer needed; always
use sched_lend_user_prio() to set the lent priority.
- Improve the pthread priority-inherit mutex: when a contender's priority
  is lowered, repropagate priorities. This may cause the mutex owner's
  priority to be lowered; in the old code, the mutex owner's priority was
  raise-only.
2010-12-29 09:26:46 +00:00
Colin Percival
eb68ccd676 A lack of console input is not the same thing as a byte of \0 input.
Correctly return -1 from cngetc when no input is available to be read.

This fixes the '(CTRL-C to abort)' spam while dumping.
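
The distinction, sketched for a console driver's getc method (the driver
and poll helper names are hypothetical):

    static int
    foo_cngetc(struct consdev *cp)
    {
            int c;

            if ((c = foo_poll_char()) == 0) /* nothing buffered */
                    return (-1);            /* no input; NOT a '\0' byte */
            return (c);
    }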

MFC after:	3 days
2010-12-29 05:13:21 +00:00
Rick Macklem
bd2fa726e0 Delete the nfsvno_localconflict() function in the experimental
NFS server since it is no longer used and is broken.

MFC after:	2 weeks
2010-12-28 23:50:13 +00:00
Warner Losh
6e6abcfd24 MIPS has lots of flavors as well 2010-12-28 22:49:28 +00:00
Warner Losh
714cf6c0df Revert r216777, per jhb@ 2010-12-28 22:45:29 +00:00
Warner Losh
e83e229d01 Revert r216775, per jhb@ 2010-12-28 22:44:32 +00:00
Warner Losh
1977f3f168 Comment out npx and isa from NOTES file. We don't need them here
since DEFAULTS already pulls them in.
2010-12-28 21:22:08 +00:00
Warner Losh
78b92d19e0 Remove mem, io, isa and npx since they are duplicative of the entries
in DEFAULTS.  Saves 8 lines of warnings when we build XBOX.
2010-12-28 21:20:58 +00:00
Warner Losh
092a687dc6 Due to the automatic inclusion of DEFAULTS everywhere, and since it
has device mem in it almost everywhere, we get warnings about
duplicated device almost everywhere.  Comment it out, with a note
about why, so that we don't get those warnings.
2010-12-28 21:18:58 +00:00
Pawel Jakub Dawidek
8449945a4d ZFS might not return monotonically increasing directory offset cookies,
so turn off the UFS-specific hack that assumes so in the ZFS case.
Before this change we could miss returning some directory entries to an
NFS client.

I believe that the hack should be moved to ufs_readdir(), but until we
find somebody who will do it, turn it off for ZFS in the NFS server code.
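
A sketch of the check, assuming the filesystem is identified by its vfs
type name and placed inside the readdir reply loop (variable names
illustrative):

    /* UFS guarantees monotonically increasing cookies; ZFS does not. */
    int not_zfs = strcmp(vp->v_mount->mnt_stat.f_fstypename, "zfs") != 0;

    if (not_zfs && cookie < requested_offset)
            continue;       /* UFS-only skip; unsafe for ZFS cookies */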

Submitted by:	rmacklem
Discussed with:	rmacklem, mckusick
MFC after:	3 days
2010-12-28 21:12:15 +00:00
Juli Mallett
1dadcedcfc When allocating memory from bootmem for the kernel to use, try to leave about
2MB of memory in the bootmem allocator for the SDK to use internally at a later
point.  It'd be nice if there were some functions we could call before
allocating memory to let various facilities reserve some memory, but for now
this seems sufficient.  Previously some unfortunate systems could give up all
(or at least most) of their memory to the kernel from bootmem, and then
allocating command queues for packet output and the like would fail later in
the boot process (which in turn would lead to crashes even later).
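
The idea, reduced to a sketch (the Octeon SDK allocator calls are not
shown and the symbols below are hypothetical):

    #define BOOTMEM_SDK_HEADROOM    (2 * 1024 * 1024)   /* ~2MB */

    uint64_t avail = bootmem_bytes_free();  /* hypothetical query */

    if (avail > BOOTMEM_SDK_HEADROOM)
            avail -= BOOTMEM_SDK_HEADROOM;  /* leave room for later
                                               SDK allocations */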

Reported by:	kan
2010-12-28 20:11:54 +00:00
Alan Cox
a5dbab5444 Correct a typo in vm_fault_quick_hold_pages().
Reported by:	Bartosz Stec
2010-12-28 20:02:30 +00:00
Pyun YongHyeon
deb4ef8291 Add device id for RDC M3010 which is found on Vortex86 SoC.
Reviewed by:	mav
2010-12-28 17:45:43 +00:00
Nathan Whitehorn
52a190480f Only keep track of PTE validity statistics for pages not locked in the
table. The 'locked' attribute is used to circumvent the regular page table
locking for some special pages, with the result that including locked pages
here causes races when updating the stats.
2010-12-28 17:02:15 +00:00
John Baldwin
e58ed10207 Use bus_alloc_resource_any().
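
The two calls below are equivalent; bus_alloc_resource_any() is a
convenience wrapper that supplies the "any range" defaults:

    res = bus_alloc_resource(dev, SYS_RES_IRQ, &rid, 0ul, ~0ul, 1, RF_ACTIVE);
    res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid, RF_ACTIVE);
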
MFC after:	2 weeks
2010-12-28 16:57:29 +00:00
Colin Percival
4a416f8375 Remove a "not strictly correct" (and panic-inducing) workaround for a bug
which doesn't seem to exist.

PR:		kern/141328
MFC after:	3 days
2010-12-28 14:36:32 +00:00
Lawrence Stewart
e29f3cc76d Add a comment for the ccv member of struct tcpcb.
Sponsored by:	FreeBSD Foundation
MFC after:	5 weeks
X-MFC with:	r215166
2010-12-28 12:37:57 +00:00
Lawrence Stewart
39bc9de532 - Add some helper hook points to the TCP stack. The hooks allow Khelp modules to
access inbound/outbound events and associated data for established TCP
  connections. The hooks only run if at least one hook function is registered
  for the hook point, ensuring the impact on the stack is effectively nil when
  no TCP Khelp modules are loaded. struct tcp_hhook_data is passed as
  contextual data to any registered Khelp module hook functions (see
  the sketch after this list).

- Add an OSD (Object Specific Data) pointer to struct tcpcb to allow Khelp
  modules to associate per-connection data with the TCP control block.

- Bump __FreeBSD_version and add a note to UPDATING regarding the ABI changes
  introduced by this commit and r216753.
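
A sketch of a guarded hook invocation in the TCP input path; the hook
point and the tcpcb member names are assumptions based on the
description above:

    struct tcp_hhook_data hhook_data;

    /* The body only runs if at least one hook function is registered. */
    if (V_tcp_hhh[HHOOK_TCP_EST_IN]->hhh_nhooks > 0) {
            hhook_data.tp = tp;
            hhook_data.th = th;
            hhook_run_hooks(V_tcp_hhh[HHOOK_TCP_EST_IN], &hhook_data,
                tp->osd);
    }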

In collaboration with:	David Hayes <dahayes at swin edu au> and
				Grenville Armitage <garmitage at swin edu au>
Sponsored by:	FreeBSD Foundation
Reviewed by:	bz, others along the way
MFC after:	3 months
2010-12-28 12:13:30 +00:00
Andrey V. Elsukov
f25481193e Allow destroying EBR in COMPAT (default) mode.
MFC after:	2 weeks
2010-12-28 08:42:12 +00:00
Andrey V. Elsukov
d3507dff37 Make the EBR probe method less strict, to be able to detect EBRs with
small, non-fatal inconsistencies. An EBR may contain a boot loader, and
sometimes it just has some garbage data. Now this does not prevent
FreeBSD from using extended partitions. But since we do not support
bootcode for EBR, we mark tables which have a non-empty boot area as
corrupt. This makes them read-only, so we cannot damage this data.

PR:		kern/141235
MFC after:	1 month
2010-12-28 08:36:44 +00:00
Lawrence Stewart
bee9ab2bc5 Add a new sack hint to track the most recent and highest sacked sequence number.
This will be used by the incoming Enhanced RTT Khelp module.
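
A sketch of where the hint lives; the field name is an assumption:

    struct sackhint {
            /* existing members elided */
            tcp_seq last_sack_ack;  /* most recent/highest SACKed seq */
    };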

Sponsored by:	FreeBSD Foundation
Submitted by:	David Hayes <dahayes at swin edu au>
Reviewed by:	bz and others (as part of a larger patch)
MFC after:	3 months
2010-12-28 03:27:20 +00:00
Lawrence Stewart
22968a7d56 Fix a whitespace nit introduced in r215166.
Sponsored by:	FreeBSD Foundation
Spotted by:	bz
MFC after:	5 weeks
X-MFC with:	r215166
2010-12-28 01:38:52 +00:00
Colin Percival
76c9650713 Build the modules which can be built. The excluded modules fall into two
categories: Those which can't build with PAE because they attempt to cast
a pointer to a bus_addr_t (mostly scsi drivers); and those which can't be
built with XEN because they conflict with something in xen-os.h (e.g., in
cxgb there is a conflicting definition of test_and_clear_bit).

MFC after:	1 week
2010-12-27 23:59:27 +00:00
Colin Percival
2b3e1fb085 Make it possible to specify WITHOUT_MODULES in a kernel config file.
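
For example, a kernel config could now carry a line like the following
(the module names are placeholders):

    makeoptions     WITHOUT_MODULES="cxgb aha"
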
MFC after:	1 week
2010-12-27 23:52:40 +00:00
Robert Watson
eab54f6a13 Remove comment bemoaning the lack of an INP_INHASHLIST above in_pcbdrop();
I fixed this in r189657 in early 2009, so the comment is OBE.

Reviewed by:	bz
MFC after:	3 days
2010-12-27 19:38:25 +00:00
Konstantin Belousov
7dbb59c7ce Teach ddb "show mount" about the MNTK_SUJ flag. 2010-12-27 12:06:38 +00:00
Alan Cox
4de2261903 Move vm_object_print()'s prototype to the expected place. 2010-12-27 07:12:22 +00:00
Colin Percival
8ea0b3bb2f Lock the vm page queue mutex in pmap_pte_release around the call
to PMAP_SET_VA; this fixes a mutex-not-held panic when a process
which called mlock(2) exits, and parallels a change made in
pmap_pte 10 months ago (svn r204160).

Note: The locking in this code is utterly broken.  We should not
be using the VM page queue mutex to protect the queue of pending
Xen page mapping hypervisor calls.  Even if it made sense to do
so, this commit and r204160 introduce LORs between the vm page
queue mutex and PMAP2mutex.

(However, a possible deadlock is better than a guaranteed panic,
and this change will hopefully make life easier for whoever fixes
the Xen pmap locking in the future.)

PR:		kern/140313
MFC after:	3 days
2010-12-26 13:05:43 +00:00
Alan Cox
5b2d228c44 Correct the order of the arguments to vm_fault_quick_hold_pages(). 2010-12-26 01:42:52 +00:00
Alan Cox
0b47b37621 Retire vm_fault_quick(). It's no longer used.
Reviewed by:	kib@
2010-12-25 23:54:50 +00:00
Rick Macklem
17891d0082 Modify the experimental NFS server so that it uses LK_SHARED
for RPC operations when it can. Since VFS_FHTOVP() currently
always gets an exclusively locked vnode and is usually called
at the beginning of each RPC, the RPCs for a given vnode will
still be serialized. As such, passing a lock type argument to
VFS_FHTOVP() would be preferable to doing the vn_lock() with
LK_DOWNGRADE after the VFS_FHTOVP() call.
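
A condensed sketch of the interim approach described above:

    error = VFS_FHTOVP(mp, &fhp->fh_fid, &vp);
    if (error == 0 && (flags & LK_SHARED) != 0)
            /* downgrade the exclusive lock VFS_FHTOVP() returned */
            vn_lock(vp, LK_DOWNGRADE | LK_RETRY);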

Reviewed by:	kib
MFC after:	2 weeks
2010-12-25 21:56:25 +00:00
Alan Cox
82de724fe1 Introduce and use a new VM interface for temporarily pinning pages. This
new interface replaces the combined use of vm_fault_quick() and
pmap_extract_and_hold() throughout the kernel.
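
A sketch of typical use of the new interface:

    vm_page_t ma[1];
    int count;

    /* fault in and hold the page(s) backing [uaddr, uaddr + len) */
    count = vm_fault_quick_hold_pages(&curproc->p_vmspace->vm_map,
        uaddr, len, VM_PROT_READ | VM_PROT_WRITE, ma, 1);
    if (count == -1)
            return (EFAULT);
    /* ... access the pinned page(s) ... */
    vm_page_unhold_pages(ma, count);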

In collaboration with:	kib@
2010-12-25 21:26:56 +00:00
Rick Macklem
0cf42b622b Add an argument to nfsvno_getattr() in the experimental
NFS server, so that it can avoid calling VOP_ISLOCKED()
when the vnode is known to be locked. This will allow
LK_SHARED to be used for these cases, which happen to
be all the cases that can use LK_SHARED. This does not
fix any bug, but it reduces the number of calls to
VOP_ISLOCKED() and prepares the code so that it can be
switched to using LK_SHARED in a future patch.

Reviewed by:	kib
MFC after:	2 weeks
2010-12-24 21:31:18 +00:00
Rick Macklem
a852f40b7a Simplify vnode locking in the experimental NFS server's
readdir functions. In particular, get rid of two bogus
VOP_ISLOCKED() calls. Removing the VOP_ISLOCKED() calls
is the only actual bug fixed by this patch.

Reviewed by:	kib
MFC after:	2 weeks
2010-12-24 20:24:07 +00:00
Rick Macklem
63e1cb4308 Since VOP_READDIR() for ZFS does not return monotonically
increasing directory offset cookies, disable the UFS-related
loop that skips over directory entries at the beginning of
the block for the experimental NFS server. This loop is
required for UFS since it always returns directory entries
starting at the beginning of the block that
the requested directory offset is in. In discussion with pjd@
and mckusick@ it seems that this behaviour of UFS should maybe
change, with this fix being an interim patch until then.
This patch only fixes the experimental server, since pjd@ is
working on a patch for the regular server.

Discussed with:	pjd, mckusick
MFC after:	5 days
2010-12-24 18:46:44 +00:00
Warner Losh
72497ecf7e IXP4XX_GPIO_{,UN}LOCK() don't take args. Remove the sc here to make
this compile again.
2010-12-23 19:28:50 +00:00
John Baldwin
0d81cf1227 Don't try to reserve a resource that is already allocated. If the ECDT
table is present, then the acpi_ec(4) driver will allocate its resources
from nexus0 before the acpi0 device reserves resources for child devices.
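
A sketch of the guard, using the resource-list helpers of this era
(treat the exact call details as assumptions):

    /* skip rids that something (e.g. acpi_ec via the ECDT) already owns */
    if (resource_list_busy(rl, type, rid))
            continue;
    resource_list_reserve(rl, bus, child, type, &rid, start, end, count, 0);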

Reviewed by:	jkim
2010-12-23 18:50:14 +00:00
John Baldwin
e1070bf509 Drop the icu_lock spinlock while pausing briefly after masking the
interrupt in the I/O APIC before moving it to a different CPU.  If the
interrupt had been triggered by the I/O APIC after locking icu_lock but
before we masked the pin in the I/O APIC, then this could cause the
interrupt to be pending on the "old" CPU and it would finally trigger
after we had moved the interrupt to the new CPU.  This could cause us to
panic as there was no interrupt source associated with the old IDT vector
on the old CPU.  Dropping the lock after the interrupt is masked but
before it is moved allows the interrupt to fire and be handled in this
case before it is moved.
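
The new ordering, sketched with hypothetical helper names (the real code
manipulates the I/O APIC redirection entries directly):

    mtx_lock_spin(&icu_lock);
    ioapic_mask_pin(io, pin);       /* hypothetical: mask the interrupt */
    mtx_unlock_spin(&icu_lock);     /* window: a just-triggered interrupt
                                       can now fire and be handled on the
                                       old CPU */
    DELAY(100);                     /* brief pause while masked */
    mtx_lock_spin(&icu_lock);
    ioapic_retarget_pin(io, pin, apic_id);  /* hypothetical: move to new CPU */
    mtx_unlock_spin(&icu_lock);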

Tested by:	Daniel Braniss  danny of cs huji ac il
MFC after:	1 week
2010-12-23 15:17:28 +00:00