wait for those cpus, instead of all of them by using a count. Oops.
Make the pointer to the mask that the primary cpu spins on volatile, so
gcc doesn't optimize out an important load. Oops again.
Activate tlb shootdown ipi synchronization now that it works. We have
all involved cpus wait until all the others are done. This may not be
necessary, it is mostly for sanity.
Make the trigger level interrupt ipi handler work.
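A minimal sketch of the two fixes, with hypothetical names (the real code
differs); the point is that the spin loads go through a volatile pointer,
so gcc must re-read the mask on each iteration:

    static volatile u_int tlb_ipi_mask;     /* cpus still working */

    void
    ipi_wait(u_int mask)
    {
            volatile u_int *m = &tlb_ipi_mask;

            atomic_set_int(m, mask);
            /* ... deliver the ipi to the cpus in mask ... */
            while ((*m & mask) != 0)
                    ;       /* volatile load; not optimized out */
    }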
Submitted by: tmm
the number of physical pages per KVA page allocated scales properly with
memory size. This fixes problems with kmem_map being too small.
Noticed by: mike, wollman
Submitted by: tmm
o In i386's <machine/endian.h>, macros have some advantages over
inlines, so change some inlines to macros.
o In i386's <machine/endian.h>, ungarbage collect word_swap_int()
(previously __uint16_swap_uint32); it has some uses on i386's with
PDP endianness.
Submitted by: bde
o Move a comment up in <machine/endian.h> that was accidentally moved
down a few revisions ago.
o Reenable userland's use of optimized inline-asm versions of
byteorder(3) functions.
o Fix ordering of prototypes vs. redefinition of byteorder(3)
functions, so that the non-GCC (libc asm) case has proper
prototypes.
o Add proper prototypes for byteorder(3) functions in <sys/param.h>.
o Prevent redundant duplicate prototypes by making use of the
_BYTEORDER_PROTOTYPED define (sketched after this list).
o Move the bswap16(), bswap32(), bswap64() C functions into MD space
for platforms in which asm versions don't exist. This significantly
reduces the complexity of some things at the cost of duplicate code.
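The _BYTEORDER_PROTOTYPED guard works roughly like this (a sketch; the
exact types and declarations vary by platform):

    #ifndef _BYTEORDER_PROTOTYPED
    #define _BYTEORDER_PROTOTYPED
    __BEGIN_DECLS
    __uint32_t      htonl(__uint32_t);
    __uint16_t      htons(__uint16_t);
    __uint32_t      ntohl(__uint32_t);
    __uint16_t      ntohs(__uint16_t);
    __END_DECLS
    #endif

Each header that needs the prototypes repeats this block; the first one
seen wins and the rest compile away.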
Reviewed by: bde
than the other implementations; we have complete control over the tlb, so we
only demap specific pages. We take advantage of the ranged tlb flush api
to send one ipi for a range of pages, and due to the pm_active optimization
we rarely send ipis for demaps from user pmaps.
Remove now unused routines to load the tlb; this is only done once outside
of the tlb fault handlers.
Minor cleanups to the smp startup code.
This boots multi user with both cpus active on a dual ultra 60 and on a
dual ultra 2.
Due to allocating tlb contexts on the fly, we only ever need to demap the
primary context; non-primary contexts have already been implicitly flushed
by context switching. All we really need to know is whether it is a kernel demap
or not, and it's easier just to compare against kernel_pmap, which is a
constant.
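In outline (the demap helper and its flags are hypothetical):

    if (pm == kernel_pmap)
            tlb_page_demap(TLB_DEMAP_NUCLEUS, va);  /* kernel demap */
    else
            tlb_page_demap(TLB_DEMAP_PRIMARY, va);  /* user: always primary ctx */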
the context is not actually stolen, as it would be for i386. Instead of
deactivating a user vmspace immediately when switching out, and recycling
its tlb context, wait until the next context switch to a different user
vmspace. In this way we can switch from a user process to any number of
kernel threads and back to the same user process again, without losing any
of its mappings in the tlb that would not already be knocked out by the automatic
replacement algorithm. This is not expected to have a measurable performance
improvement on the machines we currently run on, but it sounds cool and makes
the sparc64 port SMPng buzz word compliant.
on the loader to do it. Improve smp startup code to be less racy and to
defer certain things until the right time. This almost boots single user
on my dual ultra 60; it is still very fragile:
SMP: AP CPU #1 Launched!
Enter full pathname of shell or RETURN for /bin/sh:
# ls
Debugger("trapsig")
Stopped at Debugger+0x1c: ta %xcc, 1
db> heh
No such command
db>
with pmaps. When the context numbers wrap around we flush all user mappings
from the tlb. This makes use of the array indexed by cpuid to allow a pmap
to have a different context number on each cpu. If the context numbers
are then divided evenly among cpus such that none are shared, we can avoid
sending tlb shootdown ipis in an smp system for non-shared pmaps. This also
removes a limit of 8192 processes (pmaps) that could be active at any given
time due to running out of tlb contexts.
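A hedged sketch of the allocation scheme (constant and helper names are
illustrative; pm_context is the per-cpu array mentioned above):

    static u_int context_next[MAXCPU];

    u_int
    pmap_context_alloc(pmap_t pm)
    {
            u_int cpuid = PCPU_GET(cpuid);

            if (context_next[cpuid] == CTX_LAST) {
                    tlb_flush_user();       /* wrapped: flush all user mappings */
                    context_next[cpuid] = CTX_FIRST;
            }
            pm->pm_context[cpuid] = context_next[cpuid]++;
            return (pm->pm_context[cpuid]);
    }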
Inspired by: the brown book
Crucial bugfix from: tmm
clobbered by the child. This is more complicated than usual because the
window that could get clobbered is pushed in kernel mode, so a lot of
registers would have to be saved in other registers in userland and we
don't have enough. What we do have is space in the pcb to temporarily
store user windows that were spilled in kernel mode, but could not be
immediately stored to the user stack. So we copy in the parent's topmost
window and save it in the pcb, and arrange for it to be copied back out
when the child is done frobbing the stack.
Reviewed by: tmm
Previously, the UPAGES/KSTACK area of processes/threads would leak memory
at the time that a previously swapped process was terminated. Luckily, the
leak was only 12K/proc, so it was unlikely to be a major problem unless you
had an undersized swap partition.
Submitted by: dillon
Reviewed by: silby
MFC after: 1 week
In order to determine what to page out, the vm_daemon checks
reference bits on all pages belonging to all processes. Unfortunately,
the algorithm used interacted badly with shared pages; each shared page
would be checked once per process sharing it; this caused an O(N^2)
growth of tlb invalidations. The algorithm has been changed so that
each page will be checked only 16 times.
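The shape of the fix, as a hedged sketch only (the per-page counter and
its field name are illustrative, not the actual implementation):

    #define MAX_REF_CHECKS  16

    if (m->md_ref_checks++ >= MAX_REF_CHECKS)       /* hypothetical counter */
            continue;               /* shared page sampled enough this pass */
    refd = pmap_ts_referenced(m);   /* the part that costs tlb invalidations */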
Prior to this change, a fork/sleepbomb of 1300 processes could cause
the vm_daemon to take over 60 seconds to complete, effectively
freezing the system for that time period. With this change
in place, the vm_daemon completes in less than a second. Any system
with hundreds of processes sharing pages should benefit from this change.
Note that the vm_daemon is only run when the system is under extreme
memory pressure. It is likely that many people with loaded systems saw
no symptoms of this problem until they reached the point where swapping
began.
Special thanks go to dillon, peter, and Chuck Cranor, who helped me
get up to speed with vm internals.
PR: 33542, 20393
Reviewed by: dillon
MFC after: 1 week
device drivers for bus systems with an endianness other than the CPU's (using
interfaces compatible with NetBSD):
- bswap16() and bswap32(). These have optimized implementations on some
architectures; for those that don't, there exist generic implementations.
- macros to convert from a certain byte order to host byte order and vice
versa, using a naming scheme like le16toh(), htole16().
These are implemented using the bswap functions (sketched below).
- stream bus space access functions, which do not perform a byte order
conversion (while the normal access functions would if the bus endianness
differs from the CPU endianness).
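The conversion macros reduce to either an identity or a bswap depending
on the host byte order, roughly:

    #if _BYTE_ORDER == _LITTLE_ENDIAN
    #define htole16(x)      ((uint16_t)(x))         /* identity */
    #define le16toh(x)      ((uint16_t)(x))
    #else
    #define htole16(x)      bswap16(x)
    #define le16toh(x)      bswap16(x)
    #endif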
htons(), htonl(), ntohs() and ntohl() are implemented using the new
functions above for kernel usage. None of the above interfaces is currently
exported to user land.
Make use of the new functions in a few places where local implementations
of the same functionality existed.
Reviewed by: mike, bde
Tested on alpha by: mike
work loads. It tapers off after that as gcc's working set generally just fits.
compiling bin/csh:
TSB_PAGES = 2
213.33 real 77.59 user 110.01 sys
TSB_PAGES = 4
116.43 real 75.78 user 19.16 sys
TSB_PAGES = 8
119.27 real 76.38 user 18.12 sys
Testing by: tmm
due to them being faster in certain cases. Therefore we need to save
and restore the v8 %y register around traps in kernel mode as well as
traps in usermode.
Tested by: obrien, tmm
until we do some testing to see what's best. This gives a massive reduction
in system time for processes with a relatively large working set. The size
of the tsb directly affects the rss size that a user process can keep mapped.
When it starts to get full, replacements occur and the process takes a lot of
soft vm faults. Increasing the default from 1 page to 2 gives the following
before and after numbers for compiling vfs_bio.c:
before:
14.27 real 6.56 user 5.69 sys
after:
8.57 real 6.11 user 1.62 sys
This should make self hosted builds more tolerable.
I don't believe anyone is quite using the sparc64 kernel sources in CVS
yet -- things just aren't quite ready (but almost). So this commit should
be OK to make.
window to the user stack while in a nested kernel trap. We do this for
entry to the kernel from user mode, but if we get an interrupt in kernel
mode while there are still user windows in the cpu, and we attempt to spill
to the user stack, we may take too many nested traps and overflow the trap
stack, causing a red state exception. This is needed by upcoming changes
to allow the user tsb to not be locked in the tlb.
Reviewed by: tmm
virtual page number in a much more convenient way; all in one piece. This
greatly simplifies the comparison for a matching tte, and allows the fault
handlers to be much simpler due to not having to load weird masks.
Rewrite the tlb fault handlers to account for the new format. These are also
written to allow faults on the user tsb inside of the fault handlers; the
kernel fault handler must be aware of this and not clobber the other's
registers. The faults do not yet occur due to other support that is needed
(and still under my desk).
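With the whole virtual page number contiguous in the tag, the match test
in a tsb lookup collapses to a single comparison, roughly (macro and
field names illustrative):

    if ((tp->tte_tag & TT_VALID) != 0 &&
        TT_GET_VPN(tp->tte_tag) == (va >> PAGE_SHIFT))
            return (tp);    /* a match; no weird masks to load */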
Bug fixes from: tmm
Most of the contents are commented out as they are as-yet untested.
However, I wanted the contents to match our other arches, so that when
people make changes to {i386,alpha,ia64}, they will also make the same
changes here.
pmap_qenter and pmap_qremove in preference to pmap_kenter/pmap_kremove.
The former maps in multiple pages at a time, and so can do a ranged
flush. Don't assume that pmap_kenter and pmap_kremove will flush the tlb,
even though they still do. It will not once the MI code is updated to use
pmap_qenter and pmap_qremove.
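Once pmap_kenter() stops flushing, pmap_qenter() takes roughly this shape
(a sketch; tlb_range_demap stands in for the ranged flush api):

    void
    pmap_qenter(vm_offset_t va, vm_page_t *m, int count)
    {
            int i;

            for (i = 0; i < count; i++)
                    pmap_kenter(va + i * PAGE_SIZE, VM_PAGE_TO_PHYS(m[i]));
            tlb_range_demap(kernel_pmap, va, va + count * PAGE_SIZE);
    }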
will be used to reduce the number of tlb shootdown ipis in an smp system
by sending one ipi for a whole range of pages, instead of one per page.
Munge the context demap operations slightly to support demapping a non-primary
context.
If we don't do this here there's a 1 instruction race where an interrupt
could come in and crash the user process due to having no stack.
2. Pass %fsr to the user trap handler in %l4. Since %fsr can only be loaded
from or stored to memory, we need to do some contortions and temporarily
save it to the alternate global stack.
3. Reload the pcb and pcpu registers for traps in kernel mode, for sanity.
Submitted by: tmm (1, 2)
While in userland, keep the thread's ucred reference in a shadow
field so that the usual place to store it is NULL.
If DIAGNOSTIC is not set, the thread ucred is kept valid until the next
kernel entry, at which time it is checked against the process cred
and possibly corrected. Produces a BIG speedup in
kernels with INVARIANTS set. (A previous commit corrected it
for the non INVARIANTS case already)
Reviewed by: dillon@freebsd.org
deprecated in favor of the POSIX-defined lowercase variants.
o Change all occurrences of NTOHL() and associated macros in the
source tree to use the lowercase function variants.
o Add missing license bits to sparc64's <machine/endian.h>.
Approved by: jake
o Clean up <machine/endian.h> files.
o Remove unused __uint16_swap_uint32() from i386's <machine/endian.h>.
o Remove prototypes for non-existent bswapXX() functions.
o Include <machine/endian.h> in <arpa/inet.h> to define the
POSIX-required ntohl() family of functions.
o Do similar things to expose the ntohl() family in libstand, <netinet/in.h>,
and <sys/param.h>.
o Prepend underscores to the ntohl() family to help deal with
complexities associated with having MD (asm and inline) versions, and
having to prevent exposure of these functions in other headers that
happen to make use of endian-specific defines.
o Create weak aliases to the canonical function name to help deal with
third-party software forgetting to include an appropriate header.
o Remove some now unneeded pollution from <sys/types.h>.
o Add missing <arpa/inet.h> includes in userland.
Tested on: alpha, i386
Reviewed by: bde, jake, tmm
patch from a year ago: give file flags their own type. This does not
(yet) change the type used by system calls or library functions.
The underlying type was chosen to match what is returned by stat().
support for managing both streaming caches on psycho pairs).
Use explicit bus space accesses instead of mapping the device memory into
kva.
Move DVMA allocation to the map creation/dma memory allocation functions.
disable interrupts completely, and stxa_sync(), which performs a store
immediately followed by a membar #Sync with interrupts disabled (this
is needed for writes to diagnostic registers).
this is a low-functionality change that changes the kernel to access the main
thread of a process via the linked list of threads rather than
assuming that it is embedded in the process. It IS still embedded there,
but remove all the code that assumes that in preparation for the next commit
which will actually move it out.
Reviewed by: peter@freebsd.org, gallatin@cs.duke.edu, benno rice,
some arches and the syscall table is machine-independent. It was
(bogusly) conditional on COMPAT_43, so this usually makes no difference.
ia64: in addition:
- replace the bogus cloned comment before osigreturn() by a correct one.
osigreturn() is just a stub for ia64.
- fix the formatting of cloned comment before sigreturn().
- fix the return code. use nosys() instead of returning ENOSYS to get
the same semantics as if the syscall is not in the syscall table.
Generating SIGSYS is actually correct here.
- fix style bugs.
powerpc: copy the cleaned up ia64 stub. This mainly fixes a bogus comment.
sparc64: copy the cleaned-up ia64 stub, since there was no stub before.
cpu(s) into the kernel, and sync-ing them up to "kernel" mode so we can
send them ipis, which also work.
Thanks to John Baldwin for providing me with access to the hardware
that made this possible.
Parts obtained from: bsd/os
Call critical_enter/critical_exit around (fast) interrupt handlers. All
non-threaded interrupts are fast, and the threaded interrupt scheduler is
itself a fast interrupt.
Assert that an interrupt handler we are about to call is non-zero.
Be paranoid about restoring the users global registers. Do it as the
last thing before switching to alternate globals (when we magically get
our preloaded registers back), and do it with interrupts disabled. Any
kind of kernel trap when the globals are not setup properly is bad news.
Don't save and restore the kernel g6, it invariably points to the current
pcb now.
data word in an interrupt packet is non-zero, it points to code to execute
to handle the ipi, so jump to it instead of enqueueing the packet. It
is unclear if we will need queued ipis.
Interrupt g7 now points to pcpu, instead of to the per-cpu interrupt queue
itself, so use that instead. Interrupt g6 is no longer reserved.
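A sketch of the dispatch described above (structure and field names
illustrative):

    if (iv->iv_func != 0)   /* first data word non-zero: it is code */
            ((void (*)(void *))iv->iv_func)((void *)iv->iv_arg);
    else
            intr_enqueue(iv);       /* ordinary interrupt packet */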
parameters needed for smp support.
If we are not the boot processor, jump to the smp startup code instead.
Implement a per-cpu panic stack, which is used for bootstrapping both
primary and secondary processors and during faults on the kernel stack.
Arrange the per-cpu page like the pcb, with the struct pcpu at the end
of the page and the panic stack before it.
Use the boot processor's panic stack for calling sparc64_init.
Split the code to set preloaded global registers and to map the kernel
tsb out into functions, which non-boot processors can call.
Allocate the kstack for thread0 dynamically in pmap_bootstrap, and give
it a guard page too.
to the current pcb.
Remove interrupt global defines; they use PCPU_REG now.
Move ATOMIC_INC_INT here from exception.s, add ATOMIC_DEC_INT.
Add a KASSERT macro for use in assembler.
substantial fraction of the tte entries in the tsb
would need to be looked up, traverse the tsb instead. This is crucial
in some places, e.g. when swapping out a process, where a certain
pmap_remove() call would take a very long time to complete without this.
2. Implement pmap_qenter_flags(), which will be used later.
3. Reactivate the instruction cache flush done when mapping as executable.
This is required e.g. when executing files via NFS, but is known to
cause problems on UltraSPARC-IIe CPU's. If you have such a CPU, you
will need to comment this call out for now.
Submitted by: jake (3)
struct ofw_nexus_reg. Implement UPA device memory management in the
nexus driver.
Adapt the psycho driver to these changes, and do some minor cleanup work
while at it.
for certain user pages, stores to kernel pages would not update the
affected cache lines, which would sometimes cause the wrong data to be
returned for loads from kernel pages. This was especially fatal when
the addresses affected held the kernel stack pointer, and a random
value was loaded into it.
Fix a harmless off by one error in a dcache_inval_phys call.
Fix a potential race in setting up the per-cpu pointer if the special
restore on return to user mode fails and we need to trap back
into the kernel to fault in more stack.
Remove debug code.
an efficient way for the kernel to bounce certain mundane traps back to
userland for handling there. A user trap handler returns directly to the
trapping user code, rather than going through the kernel again. Only a
handful of instructions are actually executed in kernel mode.
Implement sysarch(SPARC_UTRAP_INSTALL).
Add code to handle sharing of the user trap table across forks and unsharing
at exec.
This can be used to implement efficient tracking of floating point register
usage in userland, e.g. by a thread library, and to handle alignment fault
fixups and instruction emulation in userland, for which the code may need
to be different for 32bit and 64bit binaries.
something wrong with the kernel stack.
Add code to check the kernel stack pointer in various important places
and try hard not to go down in flames if it's wrong.
Add fields to md_page for tracking virtual page color, and pv entry
lists.
Fix pmap_track_modified to work for non-kernel pmaps. This is due to
kernel virtual addresses potentially overlapping with userland addresses.
much less magic, fragile, and broken. Use ttes rather than sttes.
We still use the replacement scheme used by the original code, which
is pretty cool.
Many crucial bug fixes from: tmm
of sttes. Also removes many differences between this and the other pmaps.
Reserve the kva space used by the openfirmware translations.
Use physical addresses directly in pmap_zero_page and pmap_copy_page, now
that we have the cache line shooting support.
Add code to track the virtual cacheability of mapped pages. The dmmu
requires that multiple mappings of the same physical address have the
same virtual address bits up to a colour boundary. Violating this
requires all mappings to be mapped uncacheable. We do not yet handle
the case of a badly aliased mapping becoming cacheable again.
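A hedged sketch of the colour check (macro, constant, and field names
illustrative):

    #define DCACHE_COLOR(va)        (((va) >> PAGE_SHIFT) & (DCACHE_COLORS - 1))

    if (DCACHE_COLOR(va) != m->md.color)
            pmap_cache_remove(m);   /* badly aliased: map all uncacheable */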
Many crucial bug fixes from: tmm
the registers so we don't uselessly save them over and over again for
each context switch until another floating point instruction is executed.
Use a non-specific tlb slot for the tsb, which needs to have a locked
entry.
Remove overly verbose traces.
Add macros to atomically increment an integer variable in the data
section and to atomically set a bit in a tte. Note that the latter
does not return the new value.
Rewrite RESUME_SPILLFILL_MAGIC to use more sensible calculations, and
to preserve all alternate globals religiously. Must now be called on
alternate globals.
Defer switching to the kernel stack until inside the syscall, trap,
interrupt wrappers. Splitting the windows is all that's really urgent.
Adapt to new trap types.
Add %xcc where appropriate in order to not use v8 opcodes inadvertently
(which work fine).
Modify the low level tlb fault handlers to operate on a tsb made up of
ttes, not sttes. This effectively makes the tsb twice as large.
After atomically updating tte bits in memory, also set the bit in the
register that holds the data which will be loaded into the tlb. The
macro returns the old value.
Use the preloaded mmu global which holds the address of the current
user tsb.
Add back a low level protection fault handler instead of just punting
into the vm system. This effectively saves a soft fault per COW fault.
Add a trace to intr_enqueue.
Pass arguments to the trap, interrupt, syscall wrappers in the out
registers instead of some on the stack, some in registers.
Use the preloaded alternate global pcb register.
2. Make trap_pfault more like it is on other architectures.
3. Fix a bug in syscall() which caused system calls with more than
six arguments that are called through the wild card syscall to
have their arguments scrambled. This affected mmap due to the
(bogus) wrapper in libc.
Submitted by: tmm (3)
Add some traces that can be useful but are also very loud.
Use defines for offsets into jmpbuf instead of magic numbers.
Fix a style bug.
Fixup comments.
specified by the sparc abi. We use numerically higher values for all
internal kernel types.
Remove soft trap types which need to be exposed to userland. They will
move to utrap.h.
their duration. This is still only effective as long as they are
only used in the static kernel. Code in modules may cause instruction
faults which makes these break in different ways anyway.
2. Add a load bearing membar #Sync.
3. Add an inline for demapping an entire context.
Submitted by: tmm (1, 2)
Bloat trapframe with many extra fields so we don't need extra structures.
Use small data types where possible.
Remove second copy of TF_DONE.
Remove mmuframe.
- The MD functions critical_enter/exit are renamed to start with a cpu_
prefix.
- MI wrapper functions critical_enter/exit maintain a per-thread nesting
count and a per-thread critical section saved state set when entering
a critical section while at nesting level 0 and restored when exiting
to nesting level 0. This moves the saved state out of spin mutexes so
that interlocking spin mutexes works properly (sketched after this list).
- Most low-level MD code that used critical_enter/exit now use
cpu_critical_enter/exit. MI code such as device drivers and spin
mutexes use the MI wrappers. Note that since the MI wrappers store
the state in the current thread, they do not have any return values or
arguments.
- mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
assigned to curthread->td_savecrit during fork_exit().
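A simplified sketch of the MI wrappers described above (the MD
cpu_critical_* hooks save and restore the interrupt state):

    void
    critical_enter(void)
    {
            struct thread *td = curthread;

            if (td->td_critnest == 0)
                    td->td_savecrit = cpu_critical_enter();
            td->td_critnest++;
    }

    void
    critical_exit(void)
    {
            struct thread *td = curthread;

            if (td->td_critnest == 1)
                    cpu_critical_exit(td->td_savecrit);
            td->td_critnest--;
    }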
Tested on: i386, alpha
- The MI portions of struct globaldata have been consolidated into a MI
struct pcpu. The MD per-CPU data are specified via a macro defined in
machine/pcpu.h. A macro was chosen over a struct mdpcpu so that the
interface would be cleaner (PCPU_GET(my_md_field) vs.
PCPU_GET(md.md_my_md_field)). A sketch follows this list.
- All references to globaldata are changed to pcpu instead. In a UP kernel,
this data was stored as global variables which is where the original name
came from. In an SMP world this data is per-CPU and ideally private to each
CPU outside of the context of debuggers. This also included combining
machine/globaldata.h and machine/globals.h into machine/pcpu.h.
- The pointer to the thread using the FPU on i386 was renamed from
npxthread to fpcurthread to be identical with other architectures.
- Make the show pcpu ddb command MI with a MD callout to display MD
fields.
- The globaldata_register() function was renamed to pcpu_init() and now
init's MI fields of a struct pcpu in addition to registering it with
the internal array and list.
- A pcpu_destroy() function was added to remove a struct pcpu from the
internal array and list.
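A sketch of the interface (the MD fields shown are illustrative):

    /* machine/pcpu.h */
    #define PCPU_MD_FIELDS                                          \
            struct  intr_queue pc_iq;       /* MD per-cpu data */   \
            u_int   pc_mid;

    /* MI header */
    struct pcpu {
            struct  thread *pc_curthread;   /* MI fields ... */
            u_int   pc_cpuid;
            PCPU_MD_FIELDS;
    };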
Tested on: alpha, i386
Reviewed by: peter, jake
o Hide nonstandard functions and types in <netinet/in.h> when
_POSIX_SOURCE is defined.
o Add some missing types (required by POSIX.1-200x) to <netinet/in.h>.
o Restore vendor ID from Rev 1.1 in <netinet/in.h> and make use of new
__FBSDID() macro.
o Fix some miscellaneous issues in <arpa/inet.h>.
o Correct final argument for the inet_ntop() function (POSIX.1-200x).
o Get rid of the namespace pollution from <sys/types.h> in
<arpa/inet.h>.
Reviewed by: fenner
Partially submitted by: bde
in asm files.
2. Temporarily cause subnormal operands in floating point operations
to be treated as zeros so that completion of the operation does not
need to be emulated.
3. Catch fp_exception_other and correctly skip over the unfinished
instruction, but basically ignore them. Emulating the instruction
is not yet supported.
4. Zero td_retval[1] as well in syscall().
Submitted by: tmm (2, 3)
2. Add a TF_DONE macro, which fiddles a trapframe to make the retry on
return from traps act like a done (advance past the trapping
instruction instead of re-executing; sketched after this list).
3. Flush the windows before entering the debugger, since it is no
longer done in the breakpoint trap vector.
4. Print a warning if trace <pid> is attempted, it is not yet implemented.
5. Print traps better and decode system calls in traces.
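The fiddling in (2) amounts to advancing the saved pc/npc pair so that
the retry behaves like a done, roughly (trapframe field names simplified):

    #define TF_DONE(tf) do {                                        \
            (tf)->tf_tpc = (tf)->tf_tnpc;                           \
            (tf)->tf_tnpc += 4;                                     \
    } while (0)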
Submitted by: rwatson (4)
to determine if a process is using floating point, in order to avoid
sign extending a 13 bit immediate.
2. We don't need to context switch cwp anymore, it is better to just
fiddle the save tstate on return from traps. See exception.s 1.10
and 1.12.
3. Completely remove pcb_cwp.
4. Implement vmapbuf, vunmapbuf and vm_fault_quick. Completely remove
TODOs from vm_machdep.c (yay!).
Submitted by: tmm (1, 3, 4)
Obtained from: existing archs (4)
space from kernel space and from an alternate address space to kernel
space.
2. Remove the unused and unprototyped physcopy() and physzero() and replace
with the more versatile ascopy() and aszero(), inspired by the above.
These can be used to copy and zero physical pages of memory without mapping
them into kernel space first.
3. Use magic numbers for the offsets in the jmpbuf structure like other
platforms.
4. Use SET.
Submitted by: tmm (1, 4)
in the window trap vectors were mixed up. All this did was cause unnecessary
traps and look weird in traces. Superfluous traps happen a lot in normal
operation, so we are rather good at recovering from them.
2. Store the arguments for a ktr trace in the right place.
3. Use a generic trap vector for breakpoints. It should not be special.
4. Save the frame pointer in the trap frame for kernel traps if DDB is compiled
in, otherwise we don't save the out registers for kernel traps and stack
traces can't go through nested traps.
5. Apply the same fix to the return from kernel mode trap code as for user
mode traps. Ensure that the window we're returning to is the same one
that we restore to by fiddling the cwp in the saved tstate. This requires
that we transfer the values loaded from the trap frame into alternate
globals before restore-ing, but doing so is not very expensive and not
worth worrying about. Not changing the saved cwp can result in the register
values magically changing on return from traps if we happen to have slept
and the windows don't work out exactly the same. Fix the trace just before
the retry to account for different register usage.
6. Use a SET macro for loading address constants rather than a variation of
set and setx. set only works for 32 bit constants, while setx works for
64 bit constants as well, but produces bloated code when unnecessary.
Gas always generates the canonical 2 register, 6 instruction form, even
when it could be optimized; set uses 1 register and 2 instructions. At
the moment we assume that the kernel binary is below 4GB so set is
always sufficient, but the macro allows it to be configured. Note that
this has nothing to do with 32 vs. 64 bit address space, it only applies
to addresses of symbols which are known at compile/link time.
Submitted by: tmm (6)
this case, the firmware trap table needs to be restored). Make use of
it in cpu_halt() and cpu_reset(), and make cpu_reset() reboot the kernel
that was used previously instead of behaving like cpu_halt().
Add a shutdown_final event handler that turns the power off if requested.
unaligned accesses, and instr.h, which contains definitions for the
sparc64 instruction set (partly from NetBSD).
Make use of some definitions from instr.h in db_disasm.c.
o Make <stdint.h> a symbolic link to <sys/stdint.h>.
o Move most of <sys/inttypes.h> into <sys/stdint.h>, as per C99.
o Remove <sys/inttypes.h>.
o Adjust includes in sys/types.h and boot/efi/include/ia64/efibind.h
to reflect new location of integer types in <sys/stdint.h>.
o Remove the previously symbolically linked <inttypes.h>; instead create a
new file.
o Add MD headers <machine/_inttypes.h> from NetBSD.
o Include <sys/stdint.h> in <inttypes.h>, as required by C99; and
include <machine/_inttypes.h> in <inttypes.h>, to fill in the
remaining requirements for <inttypes.h>.
o Add additional integer types in <machine/ansi.h> and
<machine/limits.h> which are included via <sys/stdint.h>.
Partially obtained from: NetBSD
Tested on: alpha, i386
Discussed on: freebsd-standards@bostonradio.org
Reviewed by: bde, fenner, obrien, wollman
userland. The per thread ucred reference is immutable and thus needs no
locks to be read. However, until all the proc locking associated with
writes to p_ucred are completed, it is still not safe to use the per-thread
reference.
Tested on: x86 (SMP), alpha, sparc64
{set,fill}_{,fp,db}regs() fixup:
- Add dummy {set,fill}_dbregs() on architectures that don't have them.
- KSEfy the powerpc versions (struct proc -> struct thread).
- Some architectures had the prototypes in md_var.h, some in reg.h, and
some in both; for consistency, move them to reg.h on all platforms.
These functions aren't really MD (the implementation is MD, but the interface
is MI), so they should move to an MI header, but I haven't figured out which
one yet.
Run-tested on i386, build-tested on Alpha, untested on other platforms.
was used. This resulted in bogus bad window traps (invalid wstate).
Add a trace to sfsr traps (alignment among other things).
Use KTR_TRAP instead of KTR_CT1.
Use the right registers when storing the values of various
mmu registers into the trap frame. This fixes a bug where sometimes
the context number reported by a fault would be garbage. Sometimes
it would be zero for faults on user address space so the kernel would
wrongly think that it was a fault on kernel address space and fail.
Use the preloaded registers in the vectored interrupt trap instead
of reading pointers from memory. Remove traces due to register
pressure and excess verbosity. We can probably still sneak in one
trace. Remove some debug code.
Go back to using the tsb register during kernel page table lookups.
This is the best way to not have to have the address of the kernel tsb be
a compile time constant. We lie and say we have a 1 page tsb when really
it's much larger. This way the hardware provides bits 13-22 of the
virtual address (the lower 9 bits of the virtual page number) in the
form of the address of the tte corresponding to the fault address in
the (1 page) kernel tsb. With some clever arithmetic we can then get
bits 22 and up from the tte tag and add them to the tte address in
order to index massive tsbs (basically unlimited).
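A hedged sketch of the arithmetic (names and shifts illustrative). An 8K
page holds 512 16-byte ttes, so the hardware, believing the tsb is one
page, computes:

    tp = tsb_base + ((vpn & 511) << TTE_SHIFT);     /* TTE_SHIFT = 4 */

The remaining vpn bits come from the tte tag, so the slot in a larger
tsb costs one more shift and add:

    tp += ((vpn >> 9) & (TSB_PAGES - 1)) << (TTE_SHIFT + 9);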
Add traps for physical address hardware watchpoints.
Don't try to pass the window state from the trap table entry point
all the way down to the common trap code. It's too easy to clobber
and reading it again doesn't cost much.
Fixup some traces.
Fiddle the cwp bits on return from the kernel to user mode so that
the window we are returning to is always the same as the one we
restore to in the trap code. Strictly speaking this is not necessary,
it only affects return from fork and exec, but setting up the windows
right would require hard coding the right cwp values in cpu_fork and
setregs, basically hard coding the number of frames between syscall and
tl0_ret. The result of getting it wrong is usually a spill to an invalid
stack pointer; either 0 or pointing into kernel space. This should also
alleviate the need to context switch the cwp.
Transfer the trap state from locals to alternate globals in the trap
return code so that we can do a restore and rotate the windows before
reloading the trap registers. If the restore fails we'll trap back
into the kernel, so there's no point in loading the trap registers
beforehand. It is crucial that the window trap recovery code not
clobber the alternate globals.
boundary. It must be on at least an 8 byte boundary so that the length
of the signal code is a multiple of 8 (well aligned). The size is used
in the calculation of the address of the argument and environment vectors
on the user stack; getting it wrong results in the string pointers being
misaligned and causes alignment faults in getenv() among other things.
Allocate a regular stack frame below the signal frame on the user stack
and join up the frame pointer to the previous frame. This fixes longjmp-ing
out of signal handlers. Longjmp traverses the stack upwards in order to
find the right frame to return to, so the frame pointers must join up
seamlessly. I thought this would just work, but obviously the frame
needs to be below the signal frame, not above it like before. Account
for the extra space in the signal code.
Preload pointers to interrupt data structures in interrupt globals.
This avoids the need to load the pointers from memory in the vectored
interrupt trap handler.
Transfer the first 2 out registers into td_retval in setregs. We use
the same registers for system call arguments as return values, so these
registers got clobbered by the system call return values on return from
execve. They now get clobbered by the right values. We must put the values
in both the out registers in the trapframe and in td_retval because init
calls exec but fails to transfer the return value into the out registers.
This fixes a bug where the first exec after init would pass junk to the
c runtime, instead of a pointer to the argument strings. A better solution
would be to return EJUSTRETURN on success from execve.
Adjust for change in pmap_bootstraps prototype.
Map the message buffer after the trap table is setup. We will fault
on it immediately.
Don't use a hard coded address constant for the virtual address of the
kernel tsb. Allocate kernel virtual address space for the kernel tsb
at runtime.
Remove unused parameter to pmap_bootstrap.
Adapt pmap.c to use KVA_PAGES.
Map the message buffer too.
Add some traces.
Implement pmap_protect.
be used to index tables of counters.
Remove intr_dispatch() inline, it is implemented directly in tl*_intr now.
Count stray interrupts in a table of counters like intrcnt.
Disable interrupts briefly when setting up the interrupt vector table.
We must disable interrupts completely, not just raise the pil.
Pass pointers to the intr_vector structures rather than a vector number
to sched_ithd and intr_stray.
the existence of the __gnuc_va_list type[*] because our compiler is GCC.
[*] __gnuc_va_list is defined in the GCC ginclude/stdarg.h replacement
header, which we don't use.
Until now, the ptrace syscall was implemented as a wrapper that called
various functions in procfs depending on which ptrace operation was
requested. Most of these functions were themselves wrappers around
procfs_{read,write}_{,db,fp}regs(), with only some extra error checks,
which weren't necessary in the ptrace case anyway.
This commit moves procfs_rwmem() from procfs_mem.c into sys_process.c
(renaming it to proc_rwmem() in the process), and implements ptrace()
directly in terms of procfs_{read,write}_{,db,fp}regs() instead of
having it fake up a struct uio and then call procfs_do{,db,fp}regs().
It also moves the prototypes for procfs_{read,write}_{,db,fp}regs()
and proc_rwmem() from proc.h to ptrace.h, and marks all procfs files
except procfs_machdep.c as "optional procfs" instead of "standard".
Handle overlap in bcopy.
Add routines for copying and zeroing pages using physical addresses
directly.
Remove all the hacks to account for calling the firmware on its own
trap table, we use the kernel trap table. There is still a problem
with OF_exit().
easier and hopefully this code is done changing radically.
Don't use the mmu tlb register to address the kernel page table, nor
the 8k pointer register. The hardware will do some of the page table
lookup by storing the base address in an internal register and
calculating the address of the tte in the table. However it is limited
to a 1 meg tsb, which only maps 512 megs. The kernel page table only
has one level, so it's easy to just do it by hand, which has the advantage
of supporting arbitrary amounts of kvm and only costs a few more instructions.
Increase kvm to 1 gig now that it's easy to do so and so we don't waste
most of a 4 meg page.
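The 512 meg limit above falls straight out of the sizes involved:

    1MB tsb / 16 bytes per tte = 64K ttes
    64K ttes * 8K per page     = 512MB mapped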
Fix some traces. Fix more proc locking.
Call tsb_stte_promote if we get a soft fault on a mapping in the upper
levels of the tsb. If there is an invalid or unreferenced mapping
in the primary tsb, it will be replaced.
Immediately fail for faults occurring in {f,s}uswintr.
one 4 meg page can map both the kernel and the openfirmware mappings.
Add the openfirmware mappings to the kernel tsb so we can call the firmware
on the kernel trap table and access kernel memory normally.
Implement pmap_swapout_proc, pmap_swapin_proc, pmap_swapout_thread,
pmap_swapin_thread, pmap_activate, pmap_page_exists, and pmap_phys_address.
Add a guard page at the bottom of the kernel stack. It's unclear how easy
it will be to detect these faults and do something useful.
Setup the registers on exec how the c runtime expects.
Implement various {fill,set}_*regs.
Fix proc locking.
will be private to each CPU.
- Re-style(9) the globaldata structures. There really needs to be a MI
struct pcpu that has a MD struct mdpcpu member at some point.
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.
Sorry john! (your next MFC will be a doozy!)
Reviewed by: peter@freebsd.org, dillon@freebsd.org
X-MFC after: ha ha ha ha
it to the MI area. KSE touched cpu_wait() which had the same change
replicated five ways for each platform. Now it can just do it once.
The only MD parts seemed to be dealing with fpu state cleanup and things
like vm86 cleanup on x86. The rest was identical.
XXX: ia64 and powerpc did not have cpu_throw(), so I've put a functional
stub in place.
Reviewed by: jake, tmm, dillon
with user windows in kernel mode. We split the windows using %otherwin,
but instead of spilling user window directly to the pcb, we attempt to
spill to user space. If this fails because a stack page is not resident
(or the stack is smashed), the fault handler at tl 2 will detect the
situation and resume at tl 1 again where recovery code can spill to the
pcb. Any windows that have been saved to the pcb will be copied out to
the user stack on return from kernel mode.
Add a first stab at 32 bit window handling. This uses much of the same
recovery code as above because the alignment of the stack pointer is used
to detect 32 bit code. Attempting to spill a 32 bit window to a 64 bit
stack, or vice versa, will cause an alignment fault. The recovery code
then changes the window state to vector to a 32 bit spill/fill handler
and retries the faulting instruction.
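The detection relies on the v9 stack bias: 64 bit frames use %sp + 2047,
so the low bit of a 64 bit stack pointer is set, while 32 bit frames are
even. Sketched (helper names hypothetical):

    if ((sp & 1) != 0)
            spill64(sp + SPOFF);    /* 64 bit frame: undo the stack bias */
    else
            spill32(sp);            /* even sp: 32 bit frame */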
Add ktr traces in useful places during trap processing.
Adjust comments to reflect new code and add many more.
Remove the modified tte bit and add a softwrite bit. Mappings are only
writeable if they have been written to, thus in general modify just
duplicates the write bit. The softwrite bit makes it easier to distinguish
mappings which should be writeable but are not yet modified.
Move the exec bit down one; it was being sign extended when used as an
immediate operand.
Use the lock bit to mean tsb page and remove the tsb bit. These are the
only form of locked (tsb) entries we support and we need to conserve bits
where possible.
Implement pmap_copy_page and pmap_is_modified and friends.
Detect mappings that are being upgraded from read-only to read-write
due to copy-on-write and update the write bit appropriately.
Make trap_mmu_fault do the right thing for protection faults, which is
necessary to implement copy on write correctly. Also handle a bunch
more userland trap types and add ktr traces.
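A hedged sketch of the protection fault path (helper signature
illustrative; TD_SW is the softwrite bit, TD_W the hardware write bit):

    if ((tp->tte_data & TD_SW) != 0) {
            tp->tte_data |= TD_W;           /* upgrade: now really writeable */
            tlb_page_demap(pm, va);         /* flush the stale read-only entry */
            return (0);                     /* no call into the vm system */
    }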