- Pull all the code to deal with the trampoline stuff into one
centralized place and use it from everywhere.
- Some minor style tidiness
Reviewed by: tinguely
platform, so use the latter in preference to the former. This makes
the fake_preload setup the same between kb920x_machdep.c and
avila_machdep.c....
to detect (or load) kernel NLM support in rpc.lockd. Remove the '-k'
option to rpc.lockd and make kernel NLM the default. A user can still
force the use of the old user NLM by building a kernel without NFSLOCKD
and/or removing the nfslockd.ko module.
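For illustration, forcing the old user-level NLM back on might look like
this (a sketch; the module path is assumed):
    # remove this line from the kernel configuration file
    options NFSLOCKD
    # and/or make sure the module cannot be loaded
    rm /boot/kernel/nfslockd.ko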
layouts different from the defaults:
o hint.npe.0.mac="A", "B", etc. specifies the window for MAC register accesses
o hint.npe.0.mii="A", "B", etc. specifies PHY registers
o hint.npe.1.phy=%d specifies the PHY to map to a port
This allows devices like NSLU to be set up w/o code changes and will
also be used for forthcoming support for more Avila boards.
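For example, an NSLU-style board might be described entirely in
/boot/device.hints (values illustrative, not taken from a real config):
    hint.npe.0.mac="B"
    hint.npe.0.mii="B"
    hint.npe.0.phy=1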
Reviewed by: imp
MFC after: 1 week
- Always include the ie_disable and ie_eoi methods in 'struct intr_event'
and collapse down to one intr_event_create() routine. The disable and
eoi hooks simply aren't used currently in the !INTR_FILTER case.
- Expand 'disab' to 'disable' in a few places.
- Use function casts for arm and i386:intr_eoi_src() instead of wrapper
routines to trim one extra indirection.
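The cast idiom, roughly (a sketch; the hook type is assumed to take a
void * argument):
    /* instead of a one-line wrapper around intr_eoi_src()... */
    ie_eoi = (void (*)(void *))intr_eoi_src;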
Compiled on: {arm,amd64,i386,ia64,ppc,sparc64} x {FILTER, !FILTER}
Tested on: {amd64,i386} x {FILTER, !FILTER}
after each SYSINIT() macro invocation. This makes a number of
lightweight C parsers much happier with the FreeBSD kernel
source, including cflow's prcc and lxr.
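That is, a SYSINIT() invocation now reads like an ordinary statement
(names made up for illustration):
    SYSINIT(foo_init, SI_SUB_DRIVERS, SI_ORDER_ANY, foo_init_func, NULL);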
MFC after: 1 month
Discussed with: imp, rink
- Add a new intr_event method ie_assign_cpu() that is invoked when the MI
code wishes to bind an interrupt source to an individual CPU. The MD
code may reject the binding with an error. If an assign_cpu function
is not provided, then the kernel assumes the platform does not support
binding interrupts to CPUs and fails all requests to do so.
- Bind ithreads to CPUs on their next execution loop once an interrupt
event is bound to a CPU. Only shared ithreads are bound. We currently
leave private ithreads for drivers using filters + ithreads in the
INTR_FILTER case unbound.
- A new intr_event_bind() routine is used to bind an interrupt event to
a CPU.
- Implement binding on amd64 and i386 by way of the existing pic_assign_cpu
PIC method.
- For x86, provide a 'intr_bind(IRQ, cpu)' wrapper routine that looks up
an interrupt source and binds its interrupt event to the specified CPU.
MI code can currently (ab)use this by doing:
intr_bind(rman_get_start(irq_res), cpu);
however, I plan to add a truly MI interface (probably a bus_bind_intr(9))
where the implementation in the x86 nexus(4) driver would end up calling
intr_bind() internally.
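A fuller driver-side sketch of that (ab)use (resource handling
abbreviated; the CPU number is arbitrary):
    struct resource *irq_res;
    int rid = 0;

    irq_res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid,
        RF_ACTIVE | RF_SHAREABLE);
    if (irq_res != NULL)
        (void)intr_bind(rman_get_start(irq_res), 1);  /* bind to CPU 1 */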
Requested by: kmacy, gallatin, jeff
Tested on: {amd64, i386} x {regular, INTR_FILTER}
While the KSE project was quite successful in bringing threading to
FreeBSD, the M:N approach taken by the kse library was never developed
to its full potential. Backwards compatibility will be provided via
libmap.conf for dynamically linked binaries and static binaries will
be broken.
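The libmap.conf(5) mapping would look something like this (library
version numbers assumed):
    # redirect old KSE binaries to libthr
    libkse.so.3     libthr.so.3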
revision 1.6
date: 2004/08/21 18:50:34; author: alc; state: Exp; lines: +3 -1
Properly free the temporary sf_buf in uiomove_fromphys() if a copyin or
copyout fails.
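The fixed pattern, roughly (a sketch; variable names assumed, not the
literal diff):
    error = copyout((char *)sf_buf_kva(sf) + page_offset,
        iov->iov_base, cnt);
    sf_buf_free(sf);    /* free the temporary sf_buf even on failure */
    if (error != 0)
        return (error);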
Obtained from: DragonFlyBSD
Spotted by: Mark Tinguely
MFC after: 3 days
De-hardcode usage of ARM_TP_ADDRESS and RAS local storage, and move this
special purpose page to a more convenient place i.e. after the vectors high
page, more towards the end of address space. Previous location (0xe000_0000)
caused grief if KVA was to go beyond the default limit.
Note that ARM world rebuilding is required after this change since the
location of ARM_TP_ADDRESS is shared between kernel and userland.
Submitted by: Grzegorz Bernacki (gjb AT semihalf dot com)
Reviewed by: imp
Approved by: cognet (mentor)
The only downside is that it renames pmap_vac_me_harder() to pmap_fix_cache().
From Mark's email on -arm :
pmap_get_vac_flags(), pmap_vac_me_harder(), pmap_vac_me_kpmap(), and
pmap_vac_me_user() have been rewritten as pmap_fix_cache() to be more
efficient in the kernel map case. I also removed the reference to
the md.kro_mappings, md.krw_mappings, md.uro_mappings, and md.urw_mappings
counts.
In pmap_clearbit(), we can also skip over tests and writeback/invalidations
in the PVF_MOD and PVF_REF cases if those bits are not set in the pv_flag.
PVF_WRITE will turn caching back on and remove the PVF_MOD bit.
In pmap_nuke_pv(), the vm_page_flag_clear(pg, PG_WRITEABLE) has been moved
into pmap_fix_cache().
We can be more aggressive in attempting to turn caching back on by calling
pmap_fix_cache() at times that may be appropriate to turn cache on
(a kernel mapping has been removed, a write has been removed or a read
has been removed and we know the mapping does not have multiple write
mappings to a page).
In pmap_remove_pages() the cpu_idcache_wbinv_all() is moved to happen
before the page tables are NULLed because the caches are virtually
indexed and virtually tagged.
In pmap_remove_all(), the pmap_remove_write(m) is added before the
page tables are NULLed because the caches are virtually indexed and
virtually tagged. This also removes the need for the caches fixing routine
(whichever is being used pmap_vac_me_harder() or pmap_fix_cache()) to be
called on any of these mappings.
In pmap_remove(), I simplified the cache cleaning process and removed
extra TLB removals. Basically if more than PMAP_REMOVE_CLEAN_LIST_SIZE
are removed, then just flush the entire cache.
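A sketch of that heuristic (counter name assumed; the constant and the
flush routine are from the text above):
    if (cleanlist_idx > PMAP_REMOVE_CLEAN_LIST_SIZE) {
        /* too many mappings removed: flush the whole cache once */
        cpu_idcache_wbinv_all();
    } else {
        /* clean/invalidate each removed mapping individually */
    }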
memcpy/memset/memcmp and friends from libkern/arm to arm/arm/support.S, and so
I did, but in the process, I didn't add the appropriate copyrights.
This is a major oversight on my part, and I apologize to the NetBSD people for it.
MFC after: 1 day
for that argument. This will allow DDB to detect the broad category of
reason why the debugger has been entered, which it can use for the
purposes of deciding which DDB script to run.
Assign approximate why values to all current consumers of the
kdb_enter() interface.
- Introduce per-architecture stack_machdep.c to hold stack_save(9).
- Introduce per-architecture machine/stack.h to capture any common
definitions required between db_trace.c and stack_machdep.c.
- Add new kernel option "options STACK"; we will build in stack(9) if it is
defined, or also if "options DDB" is defined to provide compatibility
with existing users of stack(9).
Add new stack_save_td(9) function, which allows the capture of a stacktrace
of another thread rather than the current thread, which the existing
stack_save(9) was limited to. It requires that the thread be neither
swapped out nor running, which is the responsibility of the consumer to
enforce.
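Usage is analogous to stack_save(9); a minimal sketch:
    struct stack st;

    stack_zero(&st);
    stack_save_td(&st, td);   /* td must not be running or swapped out */
    stack_print(&st);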
Update stack(9) man page.
Build tested: amd64, arm, i386, ia64, powerpc, sparc64, sun4v
Runtime tested: amd64 (rwatson), arm (cognet), i386 (rwatson)
We used to allocate the domains 0-14 for userland, and leave the domain 15
for the kernel. Now supersections require the use of domain 0, so we
switched the kernel domain to 0, and use 1-15 for userland.
The way it was done, the kernel domain could end up allocated to a
userland process.
So switch back to the previous way we did things, set the first available
domain to 0, and just add 1 to get the real domain number in the struct pmap.
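In other words, roughly (a sketch; pm_domain is the field in the ARM
struct pmap):
    pm->pm_domain = domain + 1;  /* domain 0 stays reserved for the kernel */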
Reported by: Mark Tinguely <tinguely AT casselton DOT net>
MFC after: 3 days
The RAS implementation would set the end address, then the start
address. These were used by the kernel to restart a RAS sequence if
it was interrupted. When the thread switching code ran, it would
check these values, reset the PC if it fell within the sequence, and
clear them.
However, there's a small flaw in this scheme. Thread T1, sets the end
address and gets preempted. Thread T2 runs and also does a RAS
operation. This resets end to zero. Thread T1 now runs again and
sets start and then begins the RAS sequence, but is preempted before
the RAS sequence executes its last instruction. The kernel code that
would ordinarily restart the RAS sequence doesn't because the PC isn't
between start and 0, so the PC isn't set to the start of the sequence.
So when T1 is resumed again, it is at the wrong location for the RAS
restart logic to help, and the atomic sequence produces incorrect
results.
The window for the first race is 3 instructions. The window for the
second race is 5-10 instructions depending on the atomic operation.
This makes this failure fairly rare and hard to reproduce.
Mutexes are implemented in libthr using atomic operations. When the
above race would occur, a lock could get stuck locked, causing many
downstream problems, as you might expect.
Also, make sure to reset the start and end addresses when doing a syscall,
or a malicious process could set them before making the syscall.
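The losing interleaving, as a sketch (names assumed):
    /* T1 */  ras_end = T1_end;      /* preempted before setting start */
    /* T2 */  /* runs a full RAS op; on completion end is reset to 0  */
    /* T1 */  ras_start = T1_start;  /* resumes; preempted mid-sequence */
    /* kernel: PC is not between T1_start and 0, so no restart */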
Reviewed by: imp, ups (thanks guys)
Pointy hat to: cognet
MFC after: 3 days
Call uma_sel_align() there as well.
Set CPU_CONTROL_VECRELOC if we're using the high vectors page.
Submitted by: Rafal Jaworowski <raj AT semihalf DOT com>
MFC after: 1 week
It should just contain the value we want to add, since if we're interrupted
between the add and the str, we will restart from the beginning. Just use
a register we can scratch instead.
MFC after: 1 week
routine. It is not needed as the existing tests for segment coalescing
already handle bounced addresses and it prevents legal segment coalescing
in certain edge cases.
MFC after: 1 week
Reviewed by: scottl
First, a file is mmap(2)ed and then mlock(2)ed. Later, it is truncated.
Under "normal" circumstances, i.e., when the file is not mlock(2)ed, the
pages beyond the EOF are unmapped and freed. However, when the file is
mlock(2)ed, the pages beyond the EOF are unmapped but not freed because
they have a non-zero wire count. This can be a mistake. Specifically,
it is a mistake if the pages are wired solely because of wired, managed
mappings. Previously, unmapping the pages destroyed these wired, managed
mappings, but did not reduce the pages' wire count.
Consequently, when the file is unmapped, the pages are not unwired
because the wired mapping has been destroyed. Moreover, when the vm
object is finally destroyed, the pages are leaked because they are still
wired. The fix is to reduce the pages' wire count by the number of
wired, managed mappings destroyed. To do this, I introduce a new pmap
function pmap_page_wired_mappings() that returns the number of managed
mappings to the given physical page that are wired, and I use this
function in vm_object_page_remove().
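Schematically, the fix is (a sketch, not the literal code):
    wirings = pmap_page_wired_mappings(p);  /* wired, managed mappings */
    pmap_remove_all(p);                     /* destroys those mappings */
    p->wire_count -= wirings;               /* keep the count honest */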
Reviewed by: tegge
MFC after: 6 weeks
communicate that it relates to (is called by) thread_alloc()
o Add cpu_thread_free() which is called from thread_free()
to counter-act cpu_thread_alloc().
i386: Have cpu_thread_free() call cpu_thread_clean() to
preserve behaviour.
ia64: Have cpu_thread_free() call mtx_destroy() for the
mutex initialized in cpu_thread_alloc().
PR: ia64/118024
opposed to what process. Since threads by default have the name of the
process unless overwritten with more useful information, just print the
thread name instead.
silent NULL pointer dereference in the i386 and sparc64 pmap_pinit()
when the kmem_alloc_nofault() failed to allocate address space. Both
functions now return an error instead of panicking or dereferencing NULL.
As a consequence, vmspace_exec() and vmspace_unshare() return an errno
value. A struct vmspace arg was added to vm_forkproc() to avoid dealing
with failed allocation when most of the fork1() job is already done.
The kernel stack for the thread is now set up in thread_alloc(), which
itself may return NULL. Also, allocation of the first process
thread is performed in fork1() to properly deal with stack
allocation failure. proc_linkup() is separated into proc_linkup(),
called from fork1(), and proc_linkup0(), which is used to set up the
kernel process (formerly known as swapper).
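Callers of thread_alloc() therefore look roughly like (a sketch):
    td = thread_alloc();
    if (td == NULL)
        return (ENOMEM);    /* stack allocation failed */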
In collaboration with: Peter Holm
Reviewed by: jhb
in the same order as it's set in ate_set_mac.
I remember a discussion about this on -arm, but apparently nothing was done.
Warner, is this wrong?
X-MFC After: proper review
support machines having multiple independently numbered PCI domains
and don't support reenumeration without ambiguity amongst the
devices as seen by the OS and represented by PCI location strings.
This includes introducing a function pci_find_dbsf(9) which works
like pci_find_bsf(9) but additionally takes a domain number argument
and limiting pci_find_bsf(9) to only search devices in domain 0 (the
only domain in single-domain systems). Bge(4) and ofw_pcibus(4) are
changed to use pci_find_dbsf(9) instead of pci_find_bsf(9) in order
to no longer report false positives when searching for siblings and
dupe devices in the same domain, respectively.
Along with this change the sole host-PCI bridge driver converted to
actually make use of PCI domain support is uninorth(4); the others
continue to use domain 0 only for now and need to be converted as
appropriate later on.
Note that this means that the format of the location strings as used
by pciconf(8) has been changed and that consumers of <sys/pciio.h>
potentially need to be recompiled.
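A lookup sketch (argument order follows the description above):
    /* the device at domain 1, bus 0, slot 4, function 0, if present */
    device_t dev = pci_find_dbsf(1, 0, 4, 0);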
Suggested by: jhb
Reviewed by: grehan, jhb, marcel
Approved by: re (kensmith), jhb (PCI maintainer hat)
After discussion with Sam, switch back to use firmware(9) instead of
having the firmware in hex format.
Put the binary firmware uuencoded into sys/contrib/dev/npe, and add a
LICENSE file, as found on the Intel website.
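On the driver side the firmware(9) flow is roughly (image name assumed):
    const struct firmware *fw;

    fw = firmware_get("npe_fw");
    if (fw == NULL)
        return (ENOENT);
    /* ... download fw->data (fw->datasize bytes) to the NPE ... */
    firmware_put(fw, FIRMWARE_UNLOAD);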
Approved by: re (blanket), mux (mentor)
MFC after: 1 week
ways:
(1) Cached pages are no longer kept in the object's resident page
splay tree and memq. Instead, they are kept in a separate per-object
splay tree of cached pages. However, access to this new per-object
splay tree is synchronized by the _free_ page queues lock, not to be
confused with the heavily contended page queues lock. Consequently, a
cached page can be reclaimed by vm_page_alloc(9) without acquiring the
object's lock or the page queues lock.
This solves a problem independently reported by tegge@ and Isilon.
Specifically, they observed the page daemon consuming a great deal of
CPU time because of pages bouncing back and forth between the cache
queue (PQ_CACHE) and the inactive queue (PQ_INACTIVE). The source of
this problem turned out to be a deadlock avoidance strategy employed
when selecting a cached page to reclaim in vm_page_select_cache().
However, the root cause was really that reclaiming a cached page
required the acquisition of an object lock while the page queues lock
was already held. Thus, this change addresses the problem at its
root, by eliminating the need to acquire the object's lock.
Moreover, keeping cached pages in the object's primary splay tree and
memq was, in effect, optimizing for the uncommon case. Cached pages
are reclaimed far, far more often than they are reactivated. Instead,
this change makes reclamation cheaper, especially in terms of
synchronization overhead, and reactivation more expensive, because
reactivated pages will have to be reentered into the object's primary
splay tree and memq.
(2) Cached pages are now stored alongside free pages in the physical
memory allocator's buddy queues, increasing the likelihood that large
allocations of contiguous physical memory (i.e., superpages) will
succeed.
Finally, as a result of this change long-standing restrictions on when
and where a cached page can be reclaimed and returned by
vm_page_alloc(9) are eliminated. Specifically, calls to
vm_page_alloc(9) specifying VM_ALLOC_INTERRUPT can now reclaim and
return a formerly cached page. Consequently, a call to malloc(9)
specifying M_NOWAIT is less likely to fail.
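Conceptually, each object now carries a second, much smaller splay tree
(a sketch; the real field layout may differ):
    struct vm_object {
        ...
        struct vm_page *root;    /* resident pages; object lock */
        struct vm_page *cache;   /* cached pages; free page queues lock */
        ...
    };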
Discussed with: many over the course of the summer, including jeff@,
Justin Husted @ Isilon, peter@, tegge@
Tested by: an earlier version by kris@
Approved by: re (kensmith)
of pages don't sum to anywhere near the total number of pages on amd64.
This is for the most part because uma_small_alloc() pages have never been
counted as wired pages, like their kmem_malloc() brethren. They should
be. This change fixes that.
It is no longer necessary for the page queues lock to be held to free
pages allocated by uma_small_alloc(). I removed the acquisition and
release of the page queues lock from uma_small_free() on amd64 and ia64
weeks ago. This patch updates the other architectures that have
uma_small_alloc() and uma_small_free().
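The accounting change amounts to something like this (a sketch):
    /* uma_small_alloc(): */
    m->wire_count = 1;
    atomic_add_int(&cnt.v_wire_count, 1);

    /* uma_small_free(): */
    m->wire_count = 0;
    atomic_subtract_int(&cnt.v_wire_count, 1);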
Approved by: re (kensmith)
Fix a few while (!uart_getreg() & SR1_TNF) when
while (!(uart_getreg() & SR1_TNF)) was really meant.
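The two differ because '!' binds tighter than '&' (argument names
assumed):
    while (!uart_getreg(bas, reg) & SR1_TNF)    /* (!x) & flag: wrong   */
        ;
    while (!(uart_getreg(bas, reg) & SR1_TNF))  /* !(x & flag): correct */
        ;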
This driver should die anyway, it's awful, and uart_ns8250 should be fine
for the StrongArm 1110. I'll kill it later.
Submitted by: Mikhael Skvorts
Approved by: re (blanket)
only USB 1.1 speeds available, but this shouldn't hurt. Now that we have
working usb support for this board, this is a natural followup.
Approved by: re (kensmith)
7 months. You must have JP6 in the 1-2 position to supply power to the
USB devices, but I've used uftdi, uplcom and umass successfully. If you
have it in 2-3, then nothing will show up. Also, if you have the FQPA
packaging for the AT91RM9200 (like the KN9202 boards have), you will get
the following message:
uhub0: device problem (IOERROR), disabling port 2
due to a hardware erratum. It is safe to ignore as it is about pins that
aren't brought out on the FQPA package and aren't properly terminated either.
Alas, there's no register to read to tell the FQPA from the BGA versions.
Submitted by: Daan Vreeken
Approved by: re (kensmith)
revision 1.66
date: 2007/07/31 06:23:26; author: marcel; state: Exp; lines: +2 -2
Fix backward compatibility of the "old" (i.e. FreeBSD6) lseek
syscall. It was broken when a new lseek syscall was introduced.
The problem is that we need to swap the 32-bit td_retval values
for the __syscall indirect syscall when the actual syscall has
a 32-bit return value. Hence, we need to exclude lseek(2). And
this means the "old" lseek(2) as well -- which we didn't.
Based on a patch from: grehan@
Approved by: re (blanket)