deprecated in favor of the POSIX-defined lowercase variants.
o Change all occurrences of NTOHL() and associated macros in the
source tree to use the lowercase function variants.
o Add missing license bits to sparc64's <machine/endian.h>.
Approved by: jake
o Clean up <machine/endian.h> files.
o Remove unused __uint16_swap_uint32() from i386's <machine/endian.h>.
o Remove prototypes for non-existent bswapXX() functions.
o Include <machine/endian.h> in <arpa/inet.h> to define the
POSIX-required ntohl() family of functions.
o Do similar things to expose the ntohl() family in libstand, <netinet/in.h>,
and <sys/param.h>.
o Prepend underscores to the ntohl() family to help deal with
complexities associated with having MD (asm and inline) versions, and
having to prevent exposure of these functions in other headers that
happen to make use of endian-specific defines.
o Create weak aliases to the canonical function name to help deal with
third-party software forgetting to include an appropriate header.
o Remove some now unneeded pollution from <sys/types.h>.
o Add missing <arpa/inet.h> includes in userland.
Tested on: alpha, i386
Reviewed by: bde, jake, tmm
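A rough sketch of the underscore-plus-weak-alias arrangement described
above, assuming the usual __weak_reference() macro from <sys/cdefs.h>;
the file layout and implementation details are simplified:

#include <sys/cdefs.h>
#include <sys/types.h>

/*
 * Sketch: the implementation lives under a reserved (underscored) name,
 * which keeps the MD asm/inline variants and header-visibility rules
 * manageable, and a weak alias exposes the canonical POSIX name for
 * third-party code that forgot the proper header.  (Little-endian
 * version shown; on big-endian machines ntohl() is the identity.)
 */
u_int32_t
__ntohl(u_int32_t x)
{

        return (((x & 0xff000000) >> 24) | ((x & 0x00ff0000) >> 8) |
            ((x & 0x0000ff00) << 8) | ((x & 0x000000ff) << 24));
}

__weak_reference(__ntohl, ntohl);
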
patch from a year ago: give file flags their own type. This does not
(yet) change the type used by system calls or library functions.
The underlying type was chosen to match what is returned by stat().
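A minimal sketch of such a typedef, assuming the new type is called
fflags_t; the name and exact underlying type are shown for illustration
only:

#include <sys/types.h>

/*
 * Illustration: file flags get a dedicated type whose width matches
 * the 32-bit st_flags member returned by stat(2).
 */
typedef u_int32_t       fflags_t;
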
slower, and may be impeding adoption of -CURRENT by developers. We
recommend turning on WITNESS by default on crash boxes, and when doing
locking development. It will probably get turned on by default for a week
or two following any major locking commits, also.
Approved by: all and sundry (jhb, phk, ...)
this is a low-functionality change that makes the kernel access the main
thread of a process via the linked list of threads rather than
assuming that it is embedded in the process. It IS still embedded there,
but all the code that assumed so has been removed, in preparation for
the next commit, which will actually move it out.
Reviewed by: peter@freebsd.org, gallatin@cs.duke.edu, benno rice,
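A hedged sketch of the access pattern this moves toward (the helper
function is invented; p_threads is the thread list head in struct proc):

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/queue.h>

/*
 * Illustration: reach the main thread through the process's thread
 * list rather than through a member embedded in struct proc.
 */
static __inline struct thread *
proc_main_thread(struct proc *p)
{

        return (TAILQ_FIRST(&p->p_threads));
}
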
some arches and the syscall table is machine-independent. It was
(bogusly) conditional on COMPAT_43, so this usually makes no difference.
ia64: in addition:
- replace the bogus cloned comment before osigreturn() by a correct one.
osigreturn() is just a stub for ia64.
- fix the formatting of cloned comment before sigreturn().
- fix the return code: use nosys() instead of returning ENOSYS to get
the same semantics as if the syscall were not in the syscall table.
Generating SIGSYS is actually correct here.
- fix style bugs.
powerpc: copy the cleaned up ia64 stub. This mainly fixes a bogus comment.
sparc64: copy the cleaned-up ia64 stub, since there was no stub before.
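For reference, the kind of stub described above, sketched with the
syscall-handler signature of the time assumed:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/sysproto.h>

/*
 * Sketch of a do-nothing osigreturn() stub: defer to nosys() so the
 * process gets SIGSYS, exactly as if osigreturn() were missing from
 * the syscall table, instead of silently returning ENOSYS.
 */
int
osigreturn(struct thread *td, struct osigreturn_args *uap)
{

        return (nosys(td, (struct nosys_args *)uap));
}
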
db_machdep.h to fix the link failure (multiple definitions)
caused by disabling the emission of common symbols. As a result,
there were no definitions at all. While here, remove useless
declarations.
mutex releases to not require flags for the cases when preemption is
not allowed:
The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
switching to a higher priority thread on mutex release and swi schedule,
respectively, when that switch is not safe. Now that the critical section
API maintains a per-thread nesting count, the kernel can easily check
whether or not it should switch without relying on flags from the
programmer. This fixes a few bugs in that all current callers of
swi_sched() used SWI_NOSWITCH when, in fact, only the ones called from
fast interrupt handlers and the swi_sched() of softclock needed this flag.
Note that to ensure that the swi_sched() calls in clock and fast interrupt
handlers do not switch, these handlers have to be explicitly wrapped
in critical_enter/exit pairs. Presently, just wrapping the handlers is
sufficient, but in the future with the fully preemptive kernel, the
interrupt must be EOI'd before critical_exit() is called. (critical_exit()
can switch due to a deferred preemption in a fully preemptive kernel.)
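A rough sketch of the wrapping pattern just described, with invented
handler and cookie names; the point is only that the critical section
brackets the swi_sched() call so no switch can happen inside the fast
handler:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/interrupt.h>

static void *example_swi_cookie;        /* from swi_add(), assumed set up */

/*
 * Hypothetical fast interrupt handler: the critical section defers the
 * switch to the scheduled swi thread until critical_exit().
 */
static void
example_fast_intr(void *arg)
{

        critical_enter();
        /* ... acknowledge/clear the hardware interrupt here ... */
        swi_sched(example_swi_cookie, 0);
        critical_exit();
}
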
I've tested the changes to the interrupt code on i386 and alpha. I have
not tested ia64, but the interrupt code is almost identical to the alpha
code, so I expect it will work fine. PowerPC and ARM do not yet have
interrupt code in the tree so they shouldn't be broken. Sparc64 is
broken, but that's been ok'd by jake and tmm who will be fixing the
interrupt code for sparc64 shortly.
Reviewed by: peter
Tested on: i386, alpha
o Do not use a special struct to keep track of CPUs we found;
instead, use struct pcpu. This handles all the magic WRT
thread creation (yay!).
o Respect MAXCPU.
o Use the vhpt_base and vhpt_size values to initialize the AP.
o Style fixes.
Note that this commit temporarily breaks SMP configurations.
Previously APs didn't do anything, but they now enter the
scheduler. They hold sched_lock for more than 5 secs though
and cause a panic. That's what I call progress :-)
ia64_pal_base instead of scanning the EFI tables. This way
AP startup code can more easily use the function.
o Initialize ia64_pal_base in ia64_init(). When the PAL code
doesn't need an explicit mapping, or no PAL code has been found,
ia64_pal_base will be 0.
o Remove some unused global variables.
o Also in ia64_init(), allocate only 1 page for struct pcpu
and remove some Alpha leftovers.
o Initialize pc_pcb in cpu_pcpu_init().
- The MD functions critical_enter/exit are renamed to start with a cpu_
prefix.
- MI wrapper functions critical_enter/exit maintain a per-thread nesting
count and a per-thread critical section saved state set when entering
a critical section while at nesting level 0 and restored when exiting
to nesting level 0. This moves the saved state out of spin mutexes so
that interlocking spin mutexes works properly.
- Most low-level MD code that used critical_enter/exit now use
cpu_critical_enter/exit. MI code such as device drivers and spin
mutexes use the MI wrappers. Note that since the MI wrappers store
the state in the current thread, they do not have any return values or
arguments.
- mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
assigned to curthread->td_savecrit during fork_exit().
Tested on: i386, alpha
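A hedged sketch of the MI wrappers described above, assuming
td_critnest/td_savecrit fields roughly as named and that the MD
cpu_critical_*() routines live in <machine/cpufunc.h>:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <machine/cpufunc.h>    /* cpu_critical_enter/exit(); location assumed */

/*
 * Sketch of the MI wrappers: the MD state saved on entry to the
 * outermost critical section lives in the thread, and td_critnest
 * tracks nesting, so no flags, arguments or return values are needed
 * and interlocking spin mutexes work.
 */
void
critical_enter(void)
{
        struct thread *td = curthread;

        if (td->td_critnest == 0)
                td->td_savecrit = cpu_critical_enter();
        td->td_critnest++;
}

void
critical_exit(void)
{
        struct thread *td = curthread;

        if (td->td_critnest == 1) {
                td->td_critnest = 0;
                cpu_critical_exit(td->td_savecrit);
        } else
                td->td_critnest--;
}
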
- The MI portions of struct globaldata have been consolidated into a MI
struct pcpu. The MD per-CPU data are specified via a macro defined in
machine/pcpu.h. A macro was chosen over a struct mdpcpu so that the
interface would be cleaner (PCPU_GET(my_md_field) vs.
PCPU_GET(md.md_my_md_field)).
- All references to globaldata are changed to pcpu instead. In a UP kernel,
this data was stored as global variables which is where the original name
came from. In an SMP world this data is per-CPU and ideally private to each
CPU outside of the context of debuggers. This also included combining
machine/globaldata.h and machine/globals.h into machine/pcpu.h.
- The pointer to the thread using the FPU on i386 was renamed from
npxthread to fpcurthread to be identical with other architectures.
- Make the show pcpu ddb command MI with a MD callout to display MD
fields.
- The globaldata_register() function was renamed to pcpu_init() and now
init's MI fields of a struct pcpu in addition to registering it with
the internal array and list.
- A pcpu_destroy() function was added to remove a struct pcpu from the
internal array and list.
Tested on: alpha, i386
Reviewed by: peter, jake
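For illustration, the flatter accessor style this yields (cpuid is an
MI member; an MD member declared via the machine/pcpu.h macro would be
read the same way):

#include <sys/param.h>
#include <sys/pcpu.h>

/*
 * Illustration: with MD members folded directly into struct pcpu,
 * consumers write PCPU_GET(field) instead of PCPU_GET(md.md_field).
 */
static __inline u_int
current_cpu_id(void)
{

        return (PCPU_GET(cpuid));       /* the MI pc_cpuid member */
}
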
o Hide nonstandard functions and types in <netinet/in.h> when
_POSIX_SOURCE is defined.
o Add some missing types (required by POSIX.1-200x) to <netinet/in.h>.
o Restore vendor ID from Rev 1.1 in <netinet/in.h> and make use of new
__FBSDID() macro.
o Fix some miscellaneous issues in <arpa/inet.h>.
o Correct final argument for the inet_ntop() function (POSIX.1-200x).
o Get rid of the namespace pollution from <sys/types.h> in
<arpa/inet.h>.
Reviewed by: fenner
Partially submitted by: bde
pointless and would be inadequate for SMP systems. We will rely on the
VM system's locks to serialise this for now.
* Change pmap_remove() so that if the range being removed is larger than
the number of pages mapped by the pmap, we iterate over the currently
mapped pages instead of over the virtual address range. This should
make a difference when removing large virtual address ranges from an
address space.
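A rough sketch of that iteration-strategy choice (helper functions are
hypothetical stand-ins for the two loops, and the resident page count
is passed explicitly only to keep the sketch self-contained; the real
pmap keeps it in its statistics):

#include <sys/param.h>
#include <vm/vm.h>

/* Hypothetical helpers standing in for the two removal loops. */
void remove_by_resident_pages(pmap_t, vm_offset_t, vm_offset_t);
void remove_by_virtual_range(pmap_t, vm_offset_t, vm_offset_t);

/*
 * Sketch of the choice in pmap_remove(): when the request spans more
 * pages than the pmap has resident, walking the mapped pages (and
 * skipping the holes) beats stepping through every page-sized slot of
 * the virtual address range.
 */
void
example_pmap_remove(pmap_t pmap, vm_offset_t sva, vm_offset_t eva,
    long resident_count)
{

        if (atop(eva - sva) > (vm_size_t)resident_count)
                remove_by_resident_pages(pmap, sva, eva);
        else
                remove_by_virtual_range(pmap, sva, eva);
}
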
Protect against an infinite loop when prefaulting pages. This can
happen when the vm system maps past the end of an object or tries
to map a zero-length object: the pmap layer misses the fact that
offsets wrap into negative numbers, and we get stuck.
Make flushing dirty pages work correctly on filesystems that
unexpectedly do not complete writes even with sync I/O requests.
This should help the behavior of mmapped files when using
softupdates (and perhaps in other circumstances also).
reading cr.ivr, as well as writing to cr.eoi.
o Use global variables to pass information to os_boot_rendez
so that it doesn't have to jump through hoops to find it. This
avoids traps on the AP before it has even been initialized.
This fixes SMP configurations.
o Move the probing of the MADT to the end of cpu_startup,
instead of at the start of cpu_mp_probe. We need to probe
the MADT for non-SMP configurations as well. This fixes
uniprocessor configurations.
o Serialize AP wake-up by waiting for the AP. We need to do
this since we use global variables to pass information to the AP.
As a side-effect, we can use printf() more easily to see
what's going on.
It doesn't help us catch overflowing vector entries at compile time.
Instead use the .org directive. The last entry in the IVT doesn't
strictly need to be limited to 256 bytes, but doing so allows the
VHPT to be placed immediately following the IVT without wasting
any space due to alignment.
* Re-organise RID allocation so that we don't accidentally give a RID
to two different processes. Also randomise the order to try to reduce
collisions in VHPT and TLB. Don't allocate RIDs for regions which are
unused.
* Allocate space for VHPT based on the size of physical memory. More
tuning is needed here.
* Add sysctl instrumentation for VHPT - see sysctl vm.stats.vhpt
* Fix a bug in pmap_prefault() which prevented it from actually adding
pages to the pmap.
* Remove ancient dead debugging code.
* Add DDB commands for examining translation registers and region
registers.
The first change fixes the 'free/cache page %p was dirty' panic which I
have been seeing when the system is put under moderate load. It also
fixes the negative RSS values in ps which have been confusing me for a
while.
With this set of changes the ia64 port is reliable enough to build its
own kernels, even with a 20-way parallel build. Next stop buildworld.
actually the address of the function descriptor. The fdesc has both
the address of the function and its corresponding gp value. Now
that we have a gp value, use it instead of passing 0.
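For context, an ia64 function descriptor pairs the entry address with
its gp value; a minimal sketch of that layout (struct and field names
are assumptions):

#include <sys/types.h>

/*
 * Sketch of an ia64 function descriptor: a "function pointer" really
 * points at this pair, so callers can load both the entry address and
 * the matching gp instead of passing 0 for gp.
 */
struct ia64_fdesc {
        u_int64_t       func;   /* entry point of the function */
        u_int64_t       gp;     /* global pointer for its module */
};
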
o Make <stdint.h> a symbolic link to <sys/stdint.h>.
o Move most of <sys/inttypes.h> into <sys/stdint.h>, as per C99.
o Remove <sys/inttypes.h>.
o Adjust includes in sys/types.h and boot/efi/include/ia64/efibind.h
to reflect new location of integer types in <sys/stdint.h>.
o Remove the previously symbolically linked <inttypes.h>; instead, create
a new file.
o Add MD headers <machine/_inttypes.h> from NetBSD.
o Include <sys/stdint.h> in <inttypes.h>, as required by C99; and
include <machine/_inttypes.h> in <inttypes.h>, to fill in the
remaining requirements for <inttypes.h>.
o Add additional integer types in <machine/ansi.h> and
<machine/limits.h> which are included via <sys/stdint.h>.
Partially obtained from: NetBSD
Tested on: alpha, i386
Discussed on: freebsd-standards@bostonradio.org
Reviewed by: bde, fenner, obrien, wollman
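Schematically, the resulting <inttypes.h> layering looks roughly like
this (abridged sketch, not the full header):

/*
 * Sketch of the new <inttypes.h>: pull the C99 integer types from
 * <sys/stdint.h> and the MD format/conversion macros from
 * <machine/_inttypes.h>.
 */
#ifndef _INTTYPES_H_
#define _INTTYPES_H_

#include <sys/stdint.h>         /* intN_t, uintN_t, intmax_t, ... */
#include <machine/_inttypes.h>  /* PRId32, SCNu64, and friends */

#endif /* !_INTTYPES_H_ */
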
When we truncate the msgbuf size because the last chunk is too small,
correctly terminate the phys_avail[] array - the VM system tests
the *end* for zero, not the start. Otherwise, VM startup attempts
to create a duplicate set of pages for all physical memory.
XXX the msgbuf handling is suspiciously different on i386 vs
alpha/ia64...
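A hedged sketch of the termination fix (helper name invented;
phys_avail[] is the usual array of {start, end} pairs):

#include <sys/types.h>

/*
 * phys_avail[] holds {start, end} pairs and the VM startup loop stops
 * on a zero *end* entry, so both slots of the terminating pair must be
 * cleared; zeroing only the start would not stop the loop.
 */
void
terminate_phys_avail(vm_offset_t *phys_avail, int last_pair)
{

        phys_avail[last_pair] = 0;      /* start of the terminator */
        phys_avail[last_pair + 1] = 0;  /* end: this is what VM tests */
}
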
* Implement a fairly simplistic parser for unwinding stack frames.
* Use unwind records for DDB's 'trace' command. Also add support for
tracing past exceptions to the context which generated the exception.
The stack unwind code requires a toolchain based on binutils-2.11.2 or
later and gcc-3.0.1 or later.
starts at offset 8, not 6. Hence the structure is 12 bytes and
not 10 bytes. Adjust the definition so that the ProcessorEnabled
flag is moved from bit 15 to bit 31 in the Flags field.
The definition now matches ACPI 2.0 Errata 1.5.
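For illustration, the corrected entry layout implied above (field names
are assumptions; offsets follow ACPI 2.0 Errata 1.5):

#include <sys/cdefs.h>
#include <sys/types.h>

/*
 * Sketch of the Local SAPIC MADT entry: the Flags word starts at
 * offset 8 and is 32 bits wide, making the whole entry 12 bytes.
 * The ProcessorEnabled bit lives in this 32-bit Flags field.
 */
struct local_sapic_entry {
        u_int8_t        type;           /* offset 0 */
        u_int8_t        length;         /* offset 1: 12, not 10 */
        u_int8_t        acpi_cpu_id;    /* offset 2 */
        u_int8_t        sapic_id;       /* offset 3 */
        u_int8_t        sapic_eid;      /* offset 4 */
        u_int8_t        reserved[3];    /* offsets 5-7 */
        u_int32_t       flags;          /* offset 8: ProcessorEnabled here */
} __packed;
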
do it as a side-effect of probing for MP hardware. This allows
us to scan for local SAPICs early (especially before MBUF
initialization).
o Fix the Local SAPIC structure so that it matches the Local SAPIC
table entry. Now that the Local SAPIC info is the same as the
Local APIC info, stop dumping the Local APIC entries.
o For every Local SAPIC entry in the MADT that's not disabled,
let the SMP code know about it. They represent actual CPUs.
o Register the OS_BOOT_RENDEZ entry point and provide a (bogus)
implementation for the entry point.
o Provide a mapping for internal IPI numbers to ExtINT vectors.
o In a MP system, announce the CPUs and start them by sending
IPI_AP_WAKEUP to each of them. Not that it makes a difference
at this time :-)
o Miscellaneous style fixes and other adjustments.
This emulates APM device node interface APIs (mainly ioctl) and
provides APM services to applications. The goal is to support
most APM applications without any changes.
Implemented ioctls in this commit are:
- APMIO_SUSPEND (mapped to ACPI S3 by default, changeable via sysctl)
- APMIO_STANDBY (mapped to ACPI S1 by default, changeable via sysctl)
- APMIO_GETINFO and APMIO_GETINFO_OLD
- APMIO_GETPWSTATUS
With the above, many APM applications which get battery and AC-line
info and transition the system into suspend/standby mode (such as
wmapm, xbatt) should work with an ACPI-enabled kernel (if ACPI works well :-)
Reviewed by: arch@, audit@ and some guys
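A hedged userland sketch of the interface being emulated (header
location and field names as in the i386 APM driver; the ACPI-backed
device is assumed to appear as /dev/apm):

#include <sys/ioctl.h>
#include <machine/apm_bios.h>

#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Sketch of an APM client: an ACPI-backed /dev/apm answers the same
 * APMIO_GETINFO ioctl, so tools like wmapm and xbatt keep working.
 */
int
main(void)
{
        struct apm_info info;
        int fd;

        if ((fd = open("/dev/apm", O_RDONLY)) == -1)
                err(1, "open(/dev/apm)");
        if (ioctl(fd, APMIO_GETINFO, &info) == -1)
                err(1, "APMIO_GETINFO");
        printf("AC line: %u, battery life: %u%%\n",
            info.ai_acline, info.ai_batt_life);
        close(fd);
        return (0);
}
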
userland. The per-thread ucred reference is immutable and thus needs no
locks to be read. However, until all the proc locking associated with
writes to p_ucred are completed, it is still not safe to use the per-thread
reference.
Tested on: x86 (SMP), alpha, sparc64
pcb_onfault is not set) or arrange to restart at the location in
pcb_onfault.
This ought to help the stability of a system under moderate load. It
certainly stops DDB from hanging the kernel when it tries to access a
non-present page.
{set,fill}_{,fp,db}regs() fixup:
- Add dummy {set,fill}_dbregs() on architectures that don't have them.
- KSEfy the powerpc versions (struct proc -> struct thread).
- Some architectures had the prototypes in md_var.h, some in reg.h, and
some in both; for consistency, move them to reg.h on all platforms.
These functions aren't really MD (the implementation is MD, but the interface
is MI), so they should move to an MI header, but I haven't figured out which
one yet.
Run-tested on i386, build-tested on Alpha, untested on other platforms.
structure. This makes it possible to pre-allocate PTEs for the kernel,
which is necessary for a reliable implementation of pmap_kenter(). This
also avoids wasting space (about 48 bytes per page) for kernel mappings
and user mappings of memory-mapped devices.
This also fixes a bug with the previous version where the implementation
required the pv_entry structure to be physically contiguous but did not
enforce this (the structure size was not a power of two). This meant
that the pv_entry free list was quickly corrupted as soon as the system
was even mildly loaded.
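As an aside, a compile-time guard of the following sort would have
caught that size assumption; the structure layout here is purely
illustrative:

#include <sys/param.h>
#include <sys/systm.h>

/*
 * Illustrative pv_entry-like structure padded to a power-of-two size:
 * a power-of-two size that divides PAGE_SIZE guarantees an entry never
 * straddles a page boundary, so it stays physically contiguous.
 */
struct example_pv_entry {
        u_int64_t       pv_va;          /* mapped virtual address */
        u_int64_t       pv_pmap;        /* owning pmap, as an address */
        u_int64_t       pv_next;        /* free-list / chain link */
        u_int64_t       pv_pad;         /* pad up to a power of two */
};
CTASSERT((sizeof(struct example_pv_entry) &
    (sizeof(struct example_pv_entry) - 1)) == 0);
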
the existence of the __gnuc_va_list type[*] because our compiler is GCC.
[*] __gnuc_va_list is defined in the GCC ginclude/stdarg.h replacement
header, which we don't use.
- Only release Giant in trap() if we locked it, otherwise we could release
Giant in a kernel trap if we didn't get it for a page fault and the
previous frame had grabbed the lock.
- Only get Giant for !MP safe syscalls.
be set. We need to check isr.w before isr.r so that we can correctly
handle a cmpxchg to a copy-on-write page.
This fixes the hang-after-fork problem for dynamically linked programs.
C calling conventions. This allows crt1.c to be written nearly without
any inline assembler.
* Initialise cpu_model[] so that the hw.model sysctl works properly.
Until now, the ptrace syscall was implemented as a wrapper that called
various functions in procfs depending on which ptrace operation was
requested. Most of these functions were themselves wrappers around
procfs_{read,write}_{,db,fp}regs(), with only some extra error checks,
which weren't necessary in the ptrace case anyway.
This commit moves procfs_rwmem() from procfs_mem.c into sys_process.c
(renaming it to proc_rwmem() in the process), and implements ptrace()
directly in terms of procfs_{read,write}_{,db,fp}regs() instead of
having it fake up a struct uio and then call procfs_do{,db,fp}regs().
It also moves the prototypes for procfs_{read,write}_{,db,fp}regs()
and proc_rwmem() from proc.h to ptrace.h, and marks all procfs files
except procfs_machdep.c as "optional procfs" instead of "standard".
* Don't get confused when memory regions don't lie on page boundaries -
remember our page size is typically larger than the firmware's page size.
* Add a function ia64_running_in_simulator() which is intended to detect
whether the kernel is running in SKI or on real hardware.