Commit Graph

811 Commits

Author SHA1 Message Date
Stanislav Sedov
1e6774a44e Don't return 0xffff if the PHY id isn't equal to 0. This allows PHYs with
non-zero addresses to be used.

Approved by:	cognet
MFC after:	2 weeks
2007-12-16 12:57:12 +00:00
Olivier Houchard
c8ffd860a5 There's no need to call pmap_vac_me_harder() in pmap_protect(), as it
already happened in pmap_modify_pv().

Submitted by:	Mark Tinguely <tinguely AT casselton DOT net>
2007-12-11 20:35:44 +00:00
Joseph Koshy
0da7aa7a7d Add stubs to unbreak LINT. 2007-12-07 13:45:47 +00:00
Olivier Houchard
b358d3906a Fix style in previous commit.
Pointed out by:	njl
2007-12-07 10:42:11 +00:00
Olivier Houchard
91f2b6797a Erm, add a missing else; we do not want to increase the mapping counters for
both the kernel and userland when we create a pv for pmap_kernel.

Reported by:	Mark Tinguely <tinguely AT casselton DOT net>
MFC After:	3 days
2007-12-06 23:17:24 +00:00
Robert Watson
3c90d1ea74 Break out stack(9) from ddb(4):
- Introduce per-architecture stack_machdep.c to hold stack_save(9).
- Introduce per-architecture machine/stack.h to capture any common
  definitions required between db_trace.c and stack_machdep.c.
- Add new kernel option "options STACK"; stack(9) will be built in if it is
  defined, and also if "options DDB" is defined, to provide compatibility
  with existing users of stack(9).

Add a new stack_save_td(9) function, which allows the capture of a stack trace
of a thread other than the current one; the existing stack_save(9) is limited
to the current thread.  It requires that the thread be neither
swapped out nor running, which is the responsibility of the consumer to
enforce.

Update stack(9) man page.

Build tested:	amd64, arm, i386, ia64, powerpc, sparc64, sun4v
Runtime tested:	amd64 (rwatson), arm (cognet), i386 (rwatson)
2007-12-02 20:40:35 +00:00
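
As a rough illustration of the stack(9) interface this commit describes, here is
a minimal sketch assuming the 2007-era API (stack_create() with no arguments,
stack_save()/stack_save_td()/stack_print()/stack_destroy()); it is not code from
the tree, and the caller of stack_save_td() is responsible for ensuring the
target thread is neither running nor swapped out.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/stack.h>

/* Capture and print the current thread's stack. */
static void
dump_my_stack(void)
{
	struct stack *st;

	st = stack_create();		/* assumed 2007-era signature: no args */
	stack_save(st);			/* capture the calling thread's stack */
	stack_print(st);		/* print the trace to the console */
	stack_destroy(st);
}

/* Capture another thread's stack; td must be neither running nor swapped out. */
static void
dump_thread_stack(struct thread *td)
{
	struct stack *st;

	st = stack_create();
	stack_save_td(st, td);
	stack_print(st);
	stack_destroy(st);
}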
Olivier Houchard
18836eac48 Fix a potential bug in pmap:
We used to allocate domains 0-14 for userland and leave domain 15 for the
kernel. Now supersections require the use of domain 0, so we switched the
kernel domain to 0 and use 1-15 for userland.
The way it's done currently, the kernel domain could be allocated to a
userland process.
So switch back to the previous way of doing things: set the first available
domain to 0, and just add 1 to get the real domain number in the struct pmap.

Reported by:	Mark Tinguely <tinguely AT casselton DOT net>
MFC After:	3 days
2007-12-02 15:26:30 +00:00
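
A minimal sketch of the numbering scheme described above; the helper and
variable names here (pmap_alloc_domain, domain_bitmap) are made up for
illustration and are not the actual sys/arm/arm/pmap.c code. The point is only
that indices are allocated starting at 0 and offset by 1, so userland pmaps get
domains 1-15 while domain 0 stays reserved for the kernel.

#include <sys/types.h>

#define	USER_DOMAINS	15		/* real domains 1..15 for userland */

static uint32_t domain_bitmap;		/* bit i set => domain (i + 1) in use */

static int
pmap_alloc_domain(void)
{
	int i;

	for (i = 0; i < USER_DOMAINS; i++) {
		if ((domain_bitmap & (1u << i)) == 0) {
			domain_bitmap |= (1u << i);
			return (i + 1);	/* skip domain 0, the kernel's */
		}
	}
	return (-1);			/* all userland domains in use */
}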
Olivier Houchard
35af41b0a6 Move the strongarm-specific files from conf/files.arm to sa11x0/files.sa11x0.
Submitted by:	Rafal Jaworowski <raj AT semihalf DOT com>
2007-12-02 13:12:21 +00:00
Olivier Houchard
f9af595fc3 Cleanup: make nexus standard, as it is mandatory anyway.
Garbage-collect the unused nexus_io.c and nexus_io_asm.S.

Submitted by:	Rafal Jaworowski <raj AT semihalf DOT com>
2007-12-02 13:10:42 +00:00
Olivier Houchard
b21a1da537 Close a race.
The RAS implementation would set the end address, then the start
address.  These were used by the kernel to restart a RAS sequence if
it was interrupted.  When the thread switching code ran, it would
check these values, adjust the PC if needed, and clear them when it did so.

However, there's a small flaw in this scheme.  Thread T1, sets the end
address and gets preempted.  Thread T2 runs and also does a RAS
operation.  This resets end to zero.  Thread T1 now runs again and
sets start and then begins the RAS sequence, but is preempted before
the RAS sequence executes its last instruction.  The kernel code that
would ordinarily restart the RAS sequence doesn't because the PC isn't
between start and 0, so the PC isn't set to the start of the sequence.
So when T1 is resumed again, it is at the wrong location for RAS to
produce the correct results.  This causes the wrong results for the
atomic sequence.

The window for the first race is 3 instructions.  The window for the
second race is 5-10 instructions depending on the atomic operation.
This makes this failure fairly rare and hard to reproduce.

Mutexes are implemented in libthr using atomic operations.  When the
above race would occur, a lock could get stuck locked, causing many
downstream problems, as you might expect.

Also, make sure to reset the start and end addresses when doing a syscall, or
a malicious process could set them before doing a syscall.

Reviewed by: imp, ups (thanks guys)
Pointy hat to:	cognet
MFC After:	3 days
2007-12-02 12:49:28 +00:00
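
For readers unfamiliar with restartable atomic sequences, the following hedged
sketch (plain C with GCC label addresses, not the actual arm libc/kernel code)
shows the general shape: userland publishes the bounds of a short sequence, and
the kernel, when it switches away from a thread whose PC lies inside those
bounds, rewinds the PC to the start. The race above comes entirely from the
order in which the two bounds are published.

#include <stdint.h>

/*
 * Illustrative sketch only.  "ras_start"/"ras_end" stand in for whatever
 * registration the kernel actually checks: if it preempts a thread whose
 * PC satisfies ras_start <= PC < ras_end, it restarts the thread at
 * ras_start.
 */
extern volatile uintptr_t ras_start, ras_end;

uint32_t
ras_fetchadd(volatile uint32_t *p, uint32_t v)
{
	uint32_t old;

	/*
	 * Publish the sequence bounds.  The race described in the commit
	 * comes from another thread clearing one bound between these two
	 * stores, leaving the kernel comparing the PC against 0.
	 */
	ras_start = (uintptr_t)&&seq_begin;
	ras_end = (uintptr_t)&&seq_end;
seq_begin:
	old = *p;			/* restartable from here ...          */
	*p = old + v;			/* ... up to (not including) seq_end  */
seq_end:
	ras_start = ras_end = 0;
	return (old);
}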
Olivier Houchard
43e23d1b4c Fixes for ARM9/ARM10:
Call uma_set_align() there as well.
Set CPU_CONTROL_VECRELOC if we're using the high vectors page.

Submitted by:	Rafal Jaworowski <raj AT semihalf DOT com>
MFC After:	1 week
2007-11-28 22:55:55 +00:00
Olivier Houchard
85d18774de Correct the logic: we can just invalidate the cache lines, without
writing them back, only if PREWRITE is not set and the buffer is
cache-line aligned.

MFC After:	1 week
2007-11-28 22:21:17 +00:00
Olivier Houchard
9acb0e651b In atomic_fetchadd_32(), do not blindly increase the value of %3.
It should just contain the value we want to add, since if we're interrupted
between the add and the str we will restart from the beginning. Just use
a scratch register instead.

MFC After:	1 week
2007-11-27 22:12:05 +00:00
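
A C-level restatement of that constraint, as a hedged sketch rather than the
inline assembly in sys/arm/include/atomic.h: inside a restartable sequence the
caller's addend must stay untouched, because a restart re-executes the whole
sequence and would otherwise add it twice.

#include <stdint.h>

/*
 * Illustrative sketch only.  Assume the body can be restarted from the top
 * at any point before the final store.  Accumulating into the addend 'v'
 * itself would make a restart add it twice; a separate scratch variable
 * keeps the sequence idempotent until the store.
 */
uint32_t
fetchadd_sketch(volatile uint32_t *p, uint32_t v)
{
	uint32_t old, scratch;

	old = *p;			/* restart point */
	scratch = old + v;		/* 'v' is never modified */
	*p = scratch;
	return (old);
}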
John Baldwin
23d34db956 Remove the 'needbounce' variable from the _bus_dmamap_load_buffer()
routine.  It is not needed as the existing tests for segment coalescing
already handle bounced addresses and it prevents legal segment coalescing
in certain edge cases.

MFC after:	1 week
Reviewed by:	scottl
2007-11-27 17:28:12 +00:00
Alan Cox
59677d3c0e Prevent the leakage of wired pages in the following circumstances:
First, a file is mmap(2)ed and then mlock(2)ed.  Later, it is truncated.
Under "normal" circumstances, i.e., when the file is not mlock(2)ed, the
pages beyond the EOF are unmapped and freed.  However, when the file is
mlock(2)ed, the pages beyond the EOF are unmapped but not freed because
they have a non-zero wire count.  This can be a mistake.  Specifically,
it is a mistake if the sole reason why the pages are wired is because of
wired, managed mappings.  Previously, unmapping the pages destroys these
wired, managed mappings, but does not reduce the pages' wire count.
Consequently, when the file is unmapped, the pages are not unwired
because the wired mapping has been destroyed.  Moreover, when the vm
object is finally destroyed, the pages are leaked because they are still
wired.  The fix is to reduce the pages' wired count by the number of
wired, managed mappings destroyed.  To do this, I introduce a new pmap
function pmap_page_wired_mappings() that returns the number of managed
mappings to the given physical page that are wired, and I use this
function in vm_object_page_remove().

Reviewed by: tegge
MFC after: 6 weeks
2007-11-17 22:52:29 +00:00
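
A minimal sketch of what such a pmap_page_wired_mappings() could look like on a
pv-list based pmap; the field names (md.pv_list, pv_pmap, pv_va) and the PG_W
wired bit follow i386-style conventions and are assumptions here, not a copy of
the committed code.

#include <sys/param.h>
#include <sys/queue.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <vm/vm.h>
#include <vm/vm_page.h>
#include <vm/pmap.h>
#include <machine/pmap.h>

/*
 * Hedged sketch: count the wired, managed mappings of a physical page by
 * walking its pv list and testing the wired bit in each mapping's PTE.
 */
int
pmap_page_wired_mappings(vm_page_t m)
{
	pv_entry_t pv;
	pt_entry_t *pte;
	int count;

	count = 0;
	if ((m->flags & PG_FICTITIOUS) != 0)
		return (count);
	mtx_assert(&vm_page_queue_mtx, MA_OWNED);	/* pv lists are protected
							   by the page queues lock */
	TAILQ_FOREACH(pv, &m->md.pv_list, pv_list) {
		pte = pmap_pte(pv->pv_pmap, pv->pv_va);
		if ((*pte & PG_W) != 0)
			count++;
	}
	return (count);
}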
Olivier Houchard
49ec6888e2 Add a kernel config file for the Hot-e HL200 (AT91RM92 based).
Many thanks to John Nicholls from Thinklinx for sending sample hardware.
2007-11-17 17:25:22 +00:00
Marcel Moolenaar
0c3967e7fe o  Rename cpu_thread_setup() to cpu_thread_alloc() to better
   communicate that it relates to (is called by) thread_alloc().
o  Add cpu_thread_free(), which is called from thread_free()
   to counteract cpu_thread_alloc().

i386:	Have cpu_thread_free() call cpu_thread_clean() to
	preserve behaviour.
ia64:	Have cpu_thread_free() call mtx_destroy() for the
	mutex initialized in cpu_thread_alloc().

PR: ia64/118024
2007-11-14 20:21:54 +00:00
Julian Elischer
431f890614 Generally we are interested in what thread did something, as
opposed to what process. Since threads by default have the name of the
process unless overwritten with more useful information, just print the
thread name instead.
2007-11-14 06:21:24 +00:00
Olivier Houchard
0559b904bc Add entries for the L2 cache-related functions for armv5.
Spotted out by: Rafal Jaworowski
2007-11-08 13:19:08 +00:00
Konstantin Belousov
89b57fcf01 Fix for the panic("vm_thread_new: kstack allocation failed") and
silent NULL pointer dereference in the i386 and sparc64 pmap_pinit()
when kmem_alloc_nofault() failed to allocate address space. Both
functions now return an error instead of panicking or dereferencing NULL.

As a consequence, vmspace_exec() and vmspace_unshare() return an errno
int. A struct vmspace arg was added to vm_forkproc() to avoid dealing
with failed allocation when most of the fork1() job is already done.

The kernel stack for the thread is now set up in thread_alloc(),
which itself may return NULL. Also, allocation of the first process
thread is performed in fork1() to properly deal with stack
allocation failure. proc_linkup() is separated into proc_linkup(),
called from fork1(), and proc_linkup0(), which is used to set up the
kernel process (formerly known as the swapper).

In collaboration with:	Peter Holm
Reviewed by:	jhb
2007-11-05 11:36:16 +00:00
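
A hedged sketch of the calling convention this change moves to: allocation paths
that used to panic now report failure, so callers check for NULL (or a non-zero
errno) and unwind. The wrapper below is illustrative and assumes the 2007-era
thread_alloc(void) signature; it is not the fork1() code itself.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/proc.h>

/*
 * Illustrative sketch only: allocate a thread whose kernel stack setup may
 * fail, and propagate ENOMEM instead of panicking.
 */
static int
alloc_first_thread(struct thread **tdp)
{
	struct thread *td;

	td = thread_alloc();	/* may now return NULL if the kstack
				   allocation fails */
	if (td == NULL)
		return (ENOMEM);
	*tdp = td;
	return (0);
}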
Olivier Houchard
64a2135deb Remove a stale comment; NPE-C should work fine.
Reviewed by:	sam
2007-11-04 21:54:52 +00:00
Kevin Lo
92e7748daf __CPU_XSCALE_PXA2XX -> CPU_XSCALE_PXA2X0 2007-11-01 10:01:15 +00:00
Kevin Lo
0c6faf446d Don't define get_cachetype() for CPU_ARM9E unless it's going to be used. 2007-10-31 07:27:31 +00:00
Warner Losh
855f957fc1 kill commented out line of code. 2007-10-29 21:01:50 +00:00
Olivier Houchard
ed0b604f1f Add an option to be able to override the value of the AT91 master clock
frequency. It'd be better to be able to calculate it at runtime, but we need
the information very early, to set up the uart.
2007-10-25 23:02:42 +00:00
Olivier Houchard
2b953358ed Move some KB920x-specific options into the KB920x file. 2007-10-25 22:57:19 +00:00
Olivier Houchard
9e753c174f Oooops, get the end of the memory right. 2007-10-25 22:43:17 +00:00
Olivier Houchard
cb3d8b2510 KERNBASE should really be KERNVIRTADDR there too.
MFC after:	1 week
2007-10-24 23:41:46 +00:00
Olivier Houchard
b7630a1145 In ate_get_mac(), try to get the MAC address in the right order, at least
in the same order as it's set in ate_set_mac().
I remember a discussion about this on -arm, but apparently nothing was done.
Warner, is this wrong?

X-MFC After:	proper review
2007-10-24 23:12:19 +00:00
Olivier Houchard
12e12ab1a8 Handle the case where PHYSADDR != KERNPHYSADDR (i.e., we do not load the
kernel at the beginning of the RAM).

MFC After:	1 week
2007-10-24 22:26:54 +00:00
Olivier Houchard
b2c9a0439a Correct a comment; it was no longer true. 2007-10-24 22:24:32 +00:00
Warner Losh
5a4eb2d84b correct guard variable names. 2007-10-18 05:43:44 +00:00
Warner Losh
63b2597849 Merge support from p4 (originally from NetBSD) for the arm9e, arm10 and arm11
cores. Not yet connected to the build, but this reduces diffs to the p4 repo.

Obtained from: NetBSD
2007-10-18 05:33:06 +00:00
Warner Losh
dfb7d4cdef Merge definitions for ARM9E, ARM10 and ARM11 processors from p4 (which
got them from NetBSD).
2007-10-18 05:06:58 +00:00
Olivier Houchard
f60a7dc355 Use the direct mapping, if available, for pmap_zero_page_xscale() as well. 2007-10-16 20:40:04 +00:00
Olivier Houchard
0f7432f516 Do not use __XSCALE__ to detect whether pld/strd/ldrd are available; use
_ARM_ARCH_5E instead.

MFC After:	3 days
2007-10-13 12:05:03 +00:00
Olivier Houchard
258f866cbf Define _ARM_ARCH_5E too, so that we know if pld/strd/ldrd are available.
MFC After:	3 days
2007-10-13 12:04:10 +00:00
Kevin Lo
976b010645 Spelling fix for interupt -> interrupt 2007-10-12 06:03:46 +00:00
Marius Strobl
55aaf894e8 Make the PCI code aware of PCI domains (aka PCI segments) so we can
support machines having multiple independently numbered PCI domains
that don't support reenumeration, without ambiguity amongst the
devices as seen by the OS and represented by PCI location strings.
This includes introducing a function pci_find_dbsf(9) which works
like pci_find_bsf(9) but additionally takes a domain number argument
and limiting pci_find_bsf(9) to only search devices in domain 0 (the
only domain in single-domain systems). Bge(4) and ofw_pcibus(4) are
changed to use pci_find_dbsf(9) instead of pci_find_bsf(9) in order
to no longer report false positives when searching for siblings and
dupe devices in the same domain respectively.
Along with this change the sole host-PCI bridge driver converted to
actually make use of PCI domain support is uninorth(4), the others
continue to use domain 0 only for now and need to be converted as
appropriate later on.
Note that this means that the format of the location strings as used
by pciconf(8) has been changed and that consumers of <sys/pciio.h>
potentially need to be recompiled.

Suggested by:	jhb
Reviewed by:	grehan, jhb, marcel
Approved by:	re (kensmith), jhb (PCI maintainer hat)
2007-09-30 11:05:18 +00:00
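
A hedged usage sketch of the new lookup routine: pci_find_dbsf(9) takes the PCI
domain in addition to bus/slot/function, while pci_find_bsf(9) now only matches
devices in domain 0. The wrapper function is made up for illustration.

#include <sys/param.h>
#include <sys/bus.h>
#include <dev/pci/pcivar.h>

/*
 * Illustrative sketch: locate a PCI device by its full location.  On a
 * single-domain machine, pci_find_dbsf(0, bus, slot, func) behaves like
 * the older pci_find_bsf(bus, slot, func).
 */
static device_t
lookup_pci_device(uint32_t domain, uint8_t bus, uint8_t slot, uint8_t func)
{
	return (pci_find_dbsf(domain, bus, slot, func));
}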
Olivier Houchard
f530d4f06d Ok, I hope I got it right this time.
After discussion with Sam, switch back to using firmware(9) instead of
having the firmware in hex format.
Put the binary firmware uuencoded into sys/contrib/dev/npe, and slap a
LICENSE file, as found on the Intel website.

Approved by:	re (blanket), mux (mentor)
MFC After:	1 week
2007-09-27 22:39:49 +00:00
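
A hedged sketch of the firmware(9) pattern the driver switches to; the image
name "npe_fw" is an assumption for illustration only, not necessarily the name
the NPE driver actually registers.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/firmware.h>

/*
 * Illustrative sketch: fetch a registered firmware image, use its contents,
 * then drop the reference and allow the image module to be unloaded.
 */
static int
load_npe_firmware(void)
{
	const struct firmware *fw;

	fw = firmware_get("npe_fw");		/* image name is an assumption */
	if (fw == NULL)
		return (ENOENT);
	/* ... copy fw->data (fw->datasize bytes) into the NPE here ... */
	firmware_put(fw, FIRMWARE_UNLOAD);
	return (0);
}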
Olivier Houchard
88af309a0b Now that Intel changed the license for the NPE firmware, import it directly
hexed into our tree, instead of requiring the user to download it.

Approved by:	re (blanket)
MFC after:	1 week
2007-09-27 21:18:34 +00:00
Olivier Houchard
857539e578 Fix a comment to reflect the truth.
Spotted out by:	Marius Nuennerich <marius.nuennerich AT gmx D0T de>
Approved by:	re (blanket)
2007-09-27 20:52:17 +00:00
Alan Cox
7bfda801a8 Change the management of cached pages (PQ_CACHE) in two fundamental
ways:

(1) Cached pages are no longer kept in the object's resident page
splay tree and memq.  Instead, they are kept in a separate per-object
splay tree of cached pages.  However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock.  Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.

This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE).  The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held.  Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.

Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case.  Cached pages
are reclaimed far, far more often than they are reactivated.  Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.

(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.

Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated.  Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page.  Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.

Discussed with: many over the course of the summer, including jeff@,
   Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
2007-09-25 06:25:06 +00:00
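
As a small, hedged illustration of the last point: after this change even the
most constrained allocation class can be satisfied by reclaiming a formerly
cached page. The helper below is a sketch using the 2007-era vm_page_alloc(9)
signature, not code from the tree.

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/vm_page.h>

/*
 * Illustrative sketch: an interrupt-time page allocation with no backing
 * object.  With cached pages now kept alongside free pages in the physical
 * memory allocator, this request can be satisfied by reclaiming a cached
 * page rather than failing outright.
 */
static vm_page_t
grab_interrupt_page(void)
{
	return (vm_page_alloc(NULL, 0, VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ));
}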
Olivier Houchard
afecb69ae1 Make sure we do not call _arm_bzero() or _arm_memcpy() if the size is not at
least the minimum asked by the driver.

Approved by:	re (blanket)
2007-09-22 22:47:48 +00:00
Olivier Houchard
4c865ababe Add various macros for the ADMA unit.
Approved by:	re (blanket)
2007-09-22 22:25:24 +00:00
Olivier Houchard
16dcd342a9 Add a driver for the 7seg found on the CRB board, largely based on the
IQ31244 version.

Approved by:	re (blanket)
2007-09-22 16:25:43 +00:00
Olivier Houchard
75f66155bf Twist the RAS logic a bit to avoid branching.
MFC After:	1 week
Approved by:	re (blanket)
2007-09-22 14:23:52 +00:00
Olivier Houchard
ea8979747e Remove dead code.
Approved by:	re (blanket)
Beer from:	jadawin
2007-09-19 15:30:25 +00:00
Warner Losh
94ab036295 Kill bogus printf debugs.
Approved by: re@ (blanket)
2007-09-16 07:51:02 +00:00
Warner Losh
f672b4aee5 Kill overly verbose messages about setting bus width.
Approved by: re@ (blanket)
2007-09-16 07:48:58 +00:00