Commit Graph

1278 Commits

Author SHA1 Message Date
Mike Silbersack
5caf2b00f0 Move the declaration of sfbufspeak and sfbufsused to mbuf.h,
and use imax instead of max, as sfbufspeak and sfbufsused
are signed.

Submitted by:   bde
2003-12-28 01:43:22 +00:00
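
A minimal sketch of the point being made (not the committed diff; the helper name is made up): with the counters declared as plain signed ints, the peak has to be tracked with imax() from <sys/libkern.h>, since max() compares u_ints.

    extern int sfbufspeak;          /* peak sf_bufs in use */
    extern int sfbufsused;          /* current sf_bufs in use */

    static __inline void
    sf_buf_track_peak(void)         /* hypothetical helper */
    {
            sfbufspeak = imax(sfbufspeak, sfbufsused);
    }
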
Mike Silbersack
5eda9873e9 Track current and peak sfbuf usage, export the values via sysctl. 2003-12-27 07:52:47 +00:00
Marcel Moolenaar
b3378ed911 Don't use NULL with integral types. 2003-12-24 19:55:07 +00:00
Peter Wemm
7655ebdaaa Return AE_OK for stub functions returning ACPI_STATUS, not NULL 2003-12-24 05:26:26 +00:00
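
A hedged illustration of the fix pattern (the stub name is invented): ACPI_STATUS is an integral type, so the success value is AE_OK rather than the pointer constant NULL.

    /* Hypothetical stub; the point is the return value. */
    static ACPI_STATUS
    acpi_machdep_stub(void)
    {
            return (AE_OK);         /* not NULL: ACPI_STATUS is integral */
    }
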
Peter Wemm
c15e347e22 GC the unused <machine/kse.h> file. 2003-12-24 00:51:30 +00:00
Peter Wemm
9b68618df0 Add an additional field to the elf brandinfo structure to support
quicker exec-time replacement of the elf interpreter in an emulation
environment where an entire /compat/* tree isn't really warranted.
2003-12-23 02:42:39 +00:00
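
A hedged sketch of the idea; the field names are illustrative rather than a quote of the committed structure. The brand entry carries an alternate interpreter path that the ELF image activator substitutes at exec time, so a branded binary can run without a populated /compat/* tree.

    typedef struct {
            int          brand;
            const char  *interp_path;       /* interpreter recorded in the binary */
            const char  *interp_newpath;    /* optional exec-time replacement */
            /* ... remaining brand fields ... */
    } Elf_Brandinfo_sketch;
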
Peter Wemm
59553d37df Add missing #include "opt_compat.h" so that the compatibility function
freebsd4_freebsd32_sigreturn() is defined when expected.  This should
unbreak the tinderbox. Sorry.
2003-12-18 06:59:18 +00:00
Marcel Moolenaar
a8434283fc In set_mcontext(), take into account that kse_switchin(2) will
eventually be passed an async. context as well as a syscall
context.
While here, fix a serious bug in that if the trapframe is a
syscall frame, but we're restoring an async context, we need
to clear the FRAME_SYSCALL flag so that we leave the kernel
via exception_restore.
2003-12-14 01:59:31 +00:00
Peter Wemm
64d85faa1c Assimilate ia64 back into the fold with the common freebsd32/ia32 code.
The split-up code is derived from the ia64 code originally.

Note that I have only compile-tested this, not actually run-tested it.
The ia64 side of the force is missing some significant chunks of signal
delivery code.
2003-12-11 01:05:09 +00:00
Peter Wemm
ba3fa22e8c Fix last second typo. 2003-12-10 22:59:03 +00:00
Peter Wemm
419d43c635 Use gcc's superior ffs() builtin. 2003-12-10 22:51:40 +00:00
Peter Wemm
a80f5f272c Use ffs(x) == popcnt(x ^ (x - 1)) to implement 64 bit ffsl(). gcc's
ffs() builtin uses this already but truncates the upper 32 bits.
2003-12-10 22:47:02 +00:00
Marcel Moolenaar
b291daba6f Don't panic for misalignment traps when the onfault handler is set.
Not all transfers between kernel and user space are byte oriented
and thus alignment safe. Especially fuword*() and suword*() are
sensitive to alignment but in general more optimal than block copies.
By catching the misalignment trap we avoid pessimizing the common
case of properly aligned memory accesses, which we would do if we
were to use byte copies or add tests for proper alignment.

Note that the expectation that the kernel produces aligned pointers
is unchanged. This change therefore relates to possible unaligned
pointers generated in userland.
2003-12-09 09:52:14 +00:00
Nate Lawson
cac6460cfe Use the ACPI-CA definitions for the various APIC tables instead of our
own.
2003-12-09 03:04:19 +00:00
David E. O'Brien
a5b5101f5e Move the bktr(4) <arch>/include/ioctl_{bt848,meteor}.h files to dev/bktr
as these ioctl's aren't MD.  This also means they are installed in
/usr/include/dev/bktr now.  Also provide compatibility wrappers for
where these headers lived in 4.x.
2003-12-08 07:22:42 +00:00
Marcel Moolenaar
47eb01b822 Simplify the contexts created by the kernel and remove the related
flags. We now create asynchronous contexts or syscall contexts only.
Syscall contexts differ from the minimal ABI dictated contexts by
having the scratch registers saved and restored because that's where
we keep the syscall arguments and syscall return values.
Since this change affects KSE, have it use kse_switchin(2) for the
"new" syscall context.
2003-12-07 20:47:33 +00:00
Warner Losh
05a463a03d Ooops. These are still used by the bktr driver. David O'Brien has
plans for dealing, but I'll let him deal.

Pointy hat to: imp@
2003-12-07 06:37:32 +00:00
Warner Losh
65b4a1b917 Remove meteor driver. It hasn't compiled in over 3 years. If someone
makes it compile again, and can test it, we can restore the driver to
the tree.
2003-12-07 04:41:11 +00:00
John Baldwin
798a45964d - Split cpu_mp_probe() into two parts. cpu_mp_setmaxid() is still called
very early (SI_SUB_TUNABLES - 1) and is responsible for setting mp_maxid.
  cpu_mp_probe() is now called at SI_SUB_CPU and determines if SMP is
  actually present and sets mp_ncpus and all_cpus.  Splitting these up
  allows an architecture to probe CPUs later than SI_SUB_TUNABLES by just
  setting mp_maxid to MAXCPU in cpu_mp_setmaxid().  This could allow the
  CPU probing code to live in a module, for example, since sysinits
  in modules cannot be invoked prior to SI_SUB_KLD.  This is
  needed to re-enable the ACPI module on i386.
- For the alpha SMP probing code, use LOCATE_PCS() instead of duplicating
  its contents in a few places.  Also, add a smp_cpu_enabled() function
  to avoid duplicating some code.  There is room for further code
  reduction later since much of this code is also present in cpu_mp_start().
- All archs besides i386 still set mp_maxid to the same values they set it
  to before this change.  i386 now sets mp_maxid to MAXCPU.

Tested on:	alpha, amd64, i386, ia64, sparc64
Approved by:	re (scottl)
2003-11-21 22:23:26 +00:00
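
A hedged sketch of the division of labor described above (the probe helper is hypothetical; details differ per architecture):

    /* SI_SUB_TUNABLES - 1: only establish the upper bound. */
    void
    cpu_mp_setmaxid(void)
    {
            mp_maxid = MAXCPU;      /* defer real probing, as i386 now does */
    }

    /* SI_SUB_CPU: decide whether SMP is actually present. */
    int
    cpu_mp_probe(void)
    {
            mp_ncpus = platform_count_cpus();       /* hypothetical helper */
            all_cpus = (1 << mp_ncpus) - 1;
            return (mp_ncpus > 1);
    }
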
Marcel Moolenaar
1fc7ca0fb1 Set the ACPI processor Id in the PCPU structure so that CPU idling
on SMP systems has a chance of working. This was a loose end of the
implementation of the ACPI Cx idle states. Since our logical CPU Id
is the ACPI processor Id, we do not need to jump through hoops to
obtain it.

Approved: re@ (jhb)
2003-11-20 16:42:39 +00:00
Peter Wemm
0bfbe7b935 Widen the enable/disable helper function's argument in line with the
ithread_create() changes etc.  This should be mostly a NOP.
2003-11-17 06:10:15 +00:00
Bruce Evans
81bbee5996 Fixed a pedantic syntax error (a stray semicolon at the end of
PCPU_MD_FIELDS).
2003-11-17 03:40:41 +00:00
Alan Cox
0ec3db3072 - Remove unnecessary synchronization from sf_buf_init(). (There is only
one active CPU when sf_buf_init() is performed.)
2003-11-16 23:40:06 +00:00
Alan Cox
e45db9b837 - Modify alpha's sf_buf implementation to use the direct virtual-to-
physical mapping.
 - Move the sf_buf API to its own header file; make struct sf_buf's
   definition machine dependent.  In this commit, we remove an
   unnecessary field from struct sf_buf on the alpha, amd64, and ia64.
   Ultimately, we may eliminate struct sf_buf on those architectures
   except as an opaque pointer that references a vm page.
2003-11-16 06:11:26 +00:00
Nate Lawson
b72e9cf526 Add the pc_acpi_id PCPU member. The new acpi_cpu driver uses this to
dereference the softc.
2003-11-15 18:58:29 +00:00
Marcel Moolenaar
eea3bbdff8 Remove ia64_highfp_load() now that it's unused. 2003-11-12 03:24:34 +00:00
Marcel Moolenaar
0d9ae4e24e Further work-out the handling of the high FP registers. The most
important change is in cpu_switch() where we disable the high FP
registers for the thread that we switch-out if the CPU currently
has its high FP registers. This prevents the high FP registers
from remaining enabled for the thread even when the CPU has unloaded
them or the thread has migrated to another processor.
Likewise, when we switch-in a thread that has its high FP
registers on the CPU, we enable them. This avoids an otherwise
harmless, but unnecessary, trap to have them enabled.

The code that handles the disabled high FP trap (in trap()) has
been turned into a critical section for the most part to avoid
being preempted. If there's a race, we bail out and have the
processor trap again if necessary.

Avoid using the generic ia64_highfp_save() function when the
context is predictable. The function adds unnecessary overhead.
Don't use ia64_highfp_load() for the same reason. The function
is now unused and can be removed.

These changes make the lazy context switching of the high FP
registers in a UP kernel functional.
2003-11-12 01:26:02 +00:00
Marcel Moolenaar
a5ba2b5cc4 Save and restore the high FP registers in {g|s}_mcontext(). Note
that we currently do not keep track of whether the thread has
actually used the high FP registers before. If not, we should
not save them in the context, which automatically means that we
also would not restore them from the context. For now, do it
unconditionally so that we can reach functional completeness.
2003-11-11 09:53:37 +00:00
Marcel Moolenaar
9d52656a5a Fix a nasty bug that got exposed when the sendsig() and sigreturn()
functions switched to using {g|s}et_mcontext(). The problem is that
sigreturn(), being a syscall, can be given an async. context (i.e.
one corresponding to an interrupt or trap). When this happens, we
try to return to user mode via epc_syscall_return with a trapframe
that can only be used to return to user mode via exception_restore.

To fix this, we check the frame's flags immediately prior to
epc_syscall_return and branch to exception_restore for non-syscall
frames. Modify the assertion in set_mcontext() to check that if
there's a mismatch, it's because of sigreturn().
2003-11-11 09:25:19 +00:00
Marcel Moolenaar
9422d61a1f In get_mcontext(), do not update bspstore and ndirty in the trapframe.
Only update them in the newly created context to reflect the state
after copying the dirty registers onto the user stack. If we were to
update the trapframe, we lose the state at entry into the kernel. We
may need that after we create the context, such as for KSE upcalls.

We have to update the trapframe after writing the dirty registers to
the user stack for signal delivery to work. But this is best done in
sendsig() itself where it applies, not in get_mcontext() where it's
done unconditionally.
2003-11-10 05:28:05 +00:00
Marcel Moolenaar
3534a08109 When a thread is being swapped-out, save the high FP registers. We
have a pointer in the PCPU to the PCB of the thread that currently
has its high FP registers loaded.
2003-11-09 23:13:23 +00:00
Marcel Moolenaar
ac8c7680a6 Use get_mcontext() to construct the signal context in sendsig() and
use set_mcontext() to restore the context in sigreturn(). Since we
put the syscall number and the syscall arguments in the trapframe
(we don't save the scratch registers for syscalls, which allows us
to reuse the space to our advantage), create a MD specific flag so
that we save the scratch registers even for syscalls. We would not
be able to restart a syscall otherwise.

The signal trampoline does not need to flush the registers anymore,
because get_mcontext() already handles that. In fact, if we set up
the context correctly, we do not need to have a trampoline at all.
This change however only minimally changes the trampoline code. In
follow-up commits this can be further optimized.

Note that normally we preserve cfm and iip in the trapframe created
by the EPC syscall path when we restore a context in set_mcontext()
because those fields are not normally set for a synchronous context.
The kernel puts the return address and frame info of the syscall
stub in there. By preserving these fields we hide this detail from
userland which allows us to use setcontext(2) for user created
contexts. However, sigreturn() is commonly called from the trampoline,
which means that if we preserve cfm and iip in all cases, we would
return to the trampoline after the sigreturn(), which means we hit
the safety net: we call exit(2). So, we do not preserve cfm and iip
when we have a synchronous context that also has scratch registers
(the uncommon context created by sendsig() only), under the assumption
that if such a context is created in userland, something special is
going on and the use of cfm and iip is then just another quirk. All
this is invisible in the common case.
2003-11-09 22:17:36 +00:00
Marcel Moolenaar
fcaa2925a9 Change the clear_ret argument of get_mcontext() to be a flags argument.
Since all callers either passed 0 or 1 for clear_ret, define bit 0 in
the flags for use as clear_ret. Reserve bits 1, 2 and 3 for use by MI
code for possible (but unlikely) future use. The remaining bits are for
use by MD code.

This change is triggered by a need on ia64 to have another knob for
get_mcontext().
2003-11-09 20:31:04 +00:00
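
A hedged sketch of the resulting interface, assuming the bit 0 flag is spelled GET_MC_CLEAR_RET; the reserved-bit comments paraphrase the log.

    #define GET_MC_CLEAR_RET        0x0001  /* was the clear_ret argument */
    /* bits 1-3: reserved for machine-independent code */
    /* bits 4 and up: available to machine-dependent code (e.g. ia64) */

    int     get_mcontext(struct thread *td, mcontext_t *mcp, int flags);
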
Marcel Moolenaar
00bd917263 Remove the atkbd, psm, sc and vga devices. Most ia64 boxes out there
are zx1 based machines and they don't particularly like it when we
poke at them with PC legacy code. The atkbd and psm devices were
disabled in the hints file so that one could enable them on machines
that support legacy devices, but that's not really something you can
expect from a first-time installer. This still leaves syscons (sc)
and the vga device, which were enabled by default and wreaking havoc
anyway. We could disable them by default like the atkbd and psm
devices, but there's really no point in pretending we're in a better
shape that way.
2003-11-08 23:19:13 +00:00
Scott Long
eb3b7bf69f Document the lockfunc and lockfuncarg arguments to bus_dma_tag_create() in
the busdma headers.
2003-11-07 23:29:42 +00:00
John Baldwin
dac33f12cc Regen. 2003-11-07 20:30:30 +00:00
John Baldwin
a060e9b7ef Sync with global syscalls.master. ptrace(), dup(), pipe(), ktrace(),
ia32_sigaltstack(), sysarch(), issetugid(), utrace(), and ia32_sigaction()
are MP safe.
2003-11-07 20:27:16 +00:00
Marcel Moolenaar
51e25af386 Add support for unaligned ld2, st2, st4 and st8. While here, make
sure we handle stacked registers properly by taking into account
that:
1. bspstore points after the frame (due to cover),
2. we need to adjust for intermediate NaT collections.
2003-11-06 04:26:40 +00:00
Marcel Moolenaar
2642a8845b Handle unaligned 4-byte loads. While in the neighborhood, remove the
cr.isr sanity check. We actually encounter insanities, which very
likely means that the insanity check itself is insane. Remove an empty
comment while I'm at it.
2003-11-03 08:04:04 +00:00
Marcel Moolenaar
6537124772 Add a bogus definition of __va_list for use by lint. Make it visible
only when lint is defined to protect builds with non-GNU compilers.
2003-11-03 05:04:09 +00:00
Marcel Moolenaar
fcca8c1dde Remove headers copied from i386 and either useless or wrong on ia64.
An example of useless is bios.h. An example of wrong is msdos.h (due
to the use of long for 32-bit fields).

display.h cannot be removed because it's used by syscons. That header
however has no platform dependency and shouldn't really be here.

Removal of these headers may cause build failures in the ports tree.
It's the ports that need fixing in that case.

Tested with: buildworld, LINT
2003-11-02 09:19:07 +00:00
Marcel Moolenaar
3bdfa17c6c When switching the RSE to use the kernel stack as backing store, keep
the RNAT bit index constant. The net effect of this is that there's
no discontinuity WRT NaT collections which greatly simplifies certain
operations. The cost of this is that there can be up to 504 bytes of
unused stack between the true base of the kernel stack and the start
of the RSE backing store. The cost of adjusting the backing store
pointer to keep the RNAT bit index constant, for each kernel entry,
is negligible.

The primary reasons for this change are:
1. Asynchronous contexts in KSE processes have the disadvantage of
   having to copy the dirty registers from the kernel stack onto the
   user stack. The implementation we had so far copied the registers
   one at a time without calculating NaT collection values. A process
   that used speculation would not work. Now that the RNAT bit index
   is constant, we can block-copy the registers from the kernel stack
   to the user stack without having to worry about NaT collections.
   They will be in the right place on the user stack.
2. The ndirty field in the trapframe is now also usable in userland.
   This was previously not the case because ndirty also includes the
   space occupied by NaT collections. The value could be off by 8,
   depending on the discontinuity. Now that the RNAT bit index is
   constant, we have exactly the same number of NaT collection points
   on the kernel stack as we would have had on the user stack if we
   didn't switch backing stores.
3. Debuggers and other applications that use ptrace(2) can now copy
   the dirty registers from the kernel stack (using ptrace(2)) and
   copy them wherever they want them (onto the user stack of the
   inferior as might be the case for gdb) without having to worry
   about NaT collections in the same way the kernel doesn't have to
   worry about them.

There's a second order effect caused by the randomization of the
base of the backing store, for it depends on the number of dirty
registers the processor happened to have at the time of entry into
the kernel. The second order effect is that the RSE will have a
better cache utilization as compared to having the backing store
always aligned at page boundaries. This has not been measured and
may be in practice only minimally beneficial, if at all measurable.
2003-10-28 19:38:26 +00:00
Marcel Moolenaar
95b0df9df2 The previous commit removed both clause 3 and clause 4 from the UCB
license. Only clause 3 has been revoked. Restore the fourth clause
as clause 3.

Pointed out by: das@

Remove my name as a copyright holder since I don't use a BSD license
compatible or comparable to the UCB license. I choose not to add a
complete second license for my work for aesthetic reasons, nor to
replace the UCB license on grounds of rewriting more than 90% of the
source files. The rewrite can also be seen as an enhancement and since
the files were practically empty, it's rather trivial to have changed
90% of the files.
2003-10-27 22:54:34 +00:00
Marcel Moolenaar
f74fae21b8 Add support for userland to access I/O port space. This is primarily
added for XFree86. There are 2 reasons for doing this with sysarch():
1. The memory mapped I/O space is not at a fixed physical address. An
   application has to use some interface to get the base address. It
   gets worse if the machine has multiple memory mapped I/O spaces.
2. Access to the memory mapped I/O space needs to happen through a
   translation that is flagged as uncachable. There's no interface
   that allows a process to do uncached memory I/O, other than through
   /dev/mem (possibly).

So, until we either disallow direct access to I/O or bus space from
userland or have a better way of doing this, sysarch() has the least
negative impact on existing interfaces.
2003-10-27 05:45:35 +00:00
Marcel Moolenaar
3a988c5c87 Remove unused header. See also ia64/disasm/disasm.h. 2003-10-24 06:53:43 +00:00
Marcel Moolenaar
2a0a749f39 Remove ia64_pack_bundle() and ia64_unpack_bundle(). They are not
used anymore.
2003-10-24 06:52:21 +00:00
Marcel Moolenaar
4d85274d1a Remove unused file. db_disasm() has been implemented in db_interface.c
now.
2003-10-24 06:48:41 +00:00
Marcel Moolenaar
5664617492 Implement db_disasm() by using the new disassembler. Temporarily
unimplement db_write_breakpoint() and db_clear_breakpoint().
2003-10-24 06:42:03 +00:00
Arun Sharma
f47392f4c2 Use a TR of size 1 << IA64_ID_PAGE_SHIFT instead of 16M to avoid
overlapping TR/TC entries (which results in a machine check). Note
that we don't look at the size of the memory descriptor, because
it doesn't guarantee non-overlap.

With this change, a UP kernel could boot on an Intel Tiger4 machine
with the following options:

options         LOG2_ID_PAGE_SIZE=26		# 64M
options         LOG2_PAGE_SIZE=14               # 16K

Approved by: marcel
2003-10-24 04:56:58 +00:00
Marcel Moolenaar
5c03a7c7f9 Don't use fuword() or suword() unconditionally. They explicitly
disallow reading or writing.
2003-10-24 02:33:26 +00:00
Marcel Moolenaar
3fc58f92dc Remove two unused fields in the operand structure (o_read & o_write). 2003-10-24 02:05:53 +00:00
Marcel Moolenaar
764015afda Cleanup. Remove the md_flags for threads. It's not used. The flags
we had were bogus.
While here, reassign the copyright to the Project. There's nothing
in this file that originates from NetBSD, especially now that the
FreeBSD/alpha bits have been removed, but even then the amount of
inherited code that we actually used was nil.
2003-10-23 06:41:59 +00:00
Marcel Moolenaar
32efda28bf Reimplement unaligned_fixup() using the new disassembler and a
mcontext_t for the register values. Currently only ld8 and ldfd
instructions are handled as those are the ones we need now (a
misaligned ld8 occurs 4 times in ntpd(8) and a misaligned ldfd
occurs once in mozilla 1.4 and 1.5). Other instructions are added
when needed.
2003-10-23 06:32:34 +00:00
Marcel Moolenaar
49e4ce1f63 Remove unused include of <machine/inst.h> 2003-10-23 06:23:55 +00:00
Marcel Moolenaar
5a931213f0 Remove prototype of unaligned_fixup() and fix a nearby style(9)
bug.
2003-10-23 06:21:44 +00:00
Marcel Moolenaar
26c41f9dd1 Add prototypes for spillfd() and unaligned_fixup(). 2003-10-23 06:20:38 +00:00
Marcel Moolenaar
075f7fe484 Add spillfd(). This function loads a double-precision FP register
from the first address and spills it to the second address. This
allows unaligned_fixup() to update the context of the process in
a way that assures proper rounding.
Similar functions for single- and extended-precision are added when
needed.
2003-10-23 06:19:06 +00:00
Marcel Moolenaar
b9eabb421b Add a new disassembler that improves over the previous disassembler
in that it provides an abstract (intermediate) representation for
instructions. This significantly improves working with instructions
such as emulation of instructions that are not implemented by the
hardware (e.g. long branch) or enhancing implemented instructions
(e.g. handling of misaligned memory accesses). Not to mention that
it's much easier to print instructions.

Functions are included that provide a textual representation for
opcodes, completers and operands.

The disassembler supports all ia64 instructions defined by revision
2.1 of the SDM (Oct 2002).
2003-10-23 06:01:52 +00:00
Marcel Moolenaar
9ee99eb496 Remove md_bspstore from the MD fields of struct thread. Now that
the backing store is at a fixed address, there's no need for a
per-thread variable.
2003-10-21 01:13:49 +00:00
Marcel Moolenaar
bab1f05277 Put the RSE backing store at a fixed address. This change is triggered
by libguile that needs to know the base of the RSE backing store. We
currently do not export the fixed address to userland by means of a
sysctl so user code needs to hardcode it for now. This will be revisited
later.

The RSE backing store is now at the bottom of region 4. The memory stack
is at the top of region 4. This means that the whole region is usable
for the stacks, giving a 61-bit stack space.

Port: lang/guile (dependency of x11/gnome2)
2003-10-20 05:34:10 +00:00
Nate Lawson
4c3655b418 Add the cpu_idle_hook() function pointer so that other idlers can be
hooked at runtime.  Make C1 sleep (e.g., HLT) be the default.  This
prepares the way for further ACPI sleep states.
2003-10-18 22:25:07 +00:00
Marcel Moolenaar
b0f865c1f3 Implement cpu_idle() on ia64. We put the processor in a lightweight
halt state that minimizes power consumption while still preserving
cache and TLB coherency. Halting the processor is not conditional at
this time. Tested with UP and SMP kernels.
2003-10-17 02:24:59 +00:00
Robert Drehmel
ea924c4cd3 Implement preliminary support for the PT_SYSCALL command to ptrace(2). 2003-10-09 10:17:16 +00:00
Marcel Moolenaar
c3f4e4fbb5 With BETA 5 of libuwx some of the application registers are renamed
from UWX_REG_MUMBLE to UWX_REG_AR_MUMBLE. Compatibility defines are
present in libuwx. Change the names here so that we don't depend on
compatibility defines.

Note that there's now an UWX_REG_PFS and an UWX_REG_AR_PFS and the
former is not a compatibility define for the latter AFAICT. Change
to UWX_REG_AR_PFS as that seems to be the one we need to handle.
2003-10-09 03:11:37 +00:00
Marcel Moolenaar
f3e533d270 Include <sys/smp.h> for the prototype of smp_rendezvous(). 2003-10-08 19:55:45 +00:00
Bruce M Simpson
2bc7dd5661 Move pmap_resident_count() from the MD pmap.h to the MI pmap.h.
Add a definition of pmap_wired_count().
Add a definition of vmspace_wired_count().

Reviewed by:	truckman
Discussed with:	peter
2003-10-06 01:47:12 +00:00
Alan Cox
566526a957 Migrate pmap_prefault() into the machine-independent virtual memory layer.
A small helper function pmap_is_prefaultable() is added.  This function
encapsulates the few lines of pmap_prefault() that actually vary from
machine to machine.  Note: pmap_is_prefaultable() and pmap_mincore() have
much in common.  Going forward, it's worth considering their merger.
2003-10-03 22:46:53 +00:00
Marcel Moolenaar
5bf2d2b6b4 Swap the syscall caller frame info (i.e. the return pointer and
frame marker) and the syscall stub frame info in the trap frame.
Previously we stored the stub frame info in (rp,pfs) and the
caller frame info in (iip,cfm). This ends up being suboptimal
for the following reasons:
1. When we create a new context, such as for an execve(2), we had
   to set the (rp,pfs) pair for the entry point when using the
   syscall path out of the kernel but we need to set the (iip,cfm)
   pair when we take the interrupt way out. This is mostly just
   an inconsistency from the kernel's point of view, but an ugly
   irregularity from gdb(1)'s point of view.
2. The getcontext(2) and setcontext(2) syscalls had to swap the
   (rp,pfs) and (iip,cfm) pairs to make the context compatible
   with one created purely in userland.

Swapping the (rp,pfs) and (iip,cfm) pairs is visible to signal
handlers that actually peek at the mcontext_t and to gdb(1).
Since this change is made for gdb(1) and we don't care about
signal handlers that peek at the mcontext_t because we're still
a tier 2 platform, this ABI breakage is academic at this moment
in time.

Note that there was no real reason to save the caller frame info
in (iip,cfm) and the stub frame info in (rp,pfs).
2003-10-03 03:50:29 +00:00
Marcel Moolenaar
c0e56dc2c3 Drop any and all support for varargs. There's no history to worry
about because we're still tier 2 and our current compiler, as well
as future compilers, will not support varargs. This is mostly a
no-op in practice, because <sys/varargs.h> should already cause
compile failures.
2003-09-28 05:34:07 +00:00
Poul-Henning Kamp
26b0e90ca2 Set cn_name, not cn_dev 2003-09-26 10:37:16 +00:00
Peter Wemm
c460ac3a00 Add sysentvec->sv_fixlimits() hook so that we can catch cases on 64 bit
systems where the data/stack/etc limits are too big for a 32 bit process.

Move the 5 or so identical instances of ELF_RTLD_ADDR() into imgact_elf.c.

Supply an ia32_fixlimits function.  Export the clip/default values to
sysctl under the compat.ia32 hierarchy.

Have mmap(0, ...) respect the current p->p_limits[RLIMIT_DATA].rlim_max
value rather than the sysctl tweakable variable.  This allows mmap to
place mappings at sensible locations when limits have been reduced.

Have the imgact_elf.c ld-elf.so.1 placement algorithm use the same
method as mmap(0, ...) now does.

Note that we cannot remove all references to the sysctl tweakable
maxdsiz etc variables because /etc/login.conf specifies a datasize
of 'unlimited'.  And that causes exec etc to fail since it can no
longer find space to mmap things.
2003-09-25 01:10:26 +00:00
Yoshihiro Takahashi
33e38a2cc8 Implement the bus_space_map() function to allocate resources and initialize
a bus_handle, but currently it only initializes a bus_handle.
2003-09-23 08:22:34 +00:00
Marcel Moolenaar
719325db8e Fix the last remaining problem encountered by KSE: apparently it is
not guaranteed that the RSE writes the NaT collection immediately,
sort of atomically, to the backing store when it writes the register
immediately prior to the NaT collection point. This means that we
cannot assume that the low 9 bits of the backingstore pointer do not
point to the NaT collection. This is rather a surprise and I don't
know at this time if it's a bug in the Merced or that it's actually
a valid condition of the architecture. A quick scan over the sources
does not indicate that we depend on the false assumption elsewhere,
but it's something to keep in mind.

The fix is to write the saved contents of the ar.rnat register to
the backingstore prior to entering the loop that copies the dirty
registers from the kernel stack to the user stack.
2003-09-20 20:34:58 +00:00
Marcel Moolenaar
b8d941f010 Move uma_small_alloc() and uma_small_free() to uma_machdep.c. These
functions reference UMA internals from <vm/uma_int.h>, which makes
them highly unwanted in non-UMA specific files.

While here, prune the includes in pmap.c and use __FBSDID(). Move
the includes above the descriptive comment.

The copyright of uma_machdep.c is assigned to the project and can
be reassigned to the foundation if and when such is preferable.
2003-09-20 19:27:48 +00:00
Marcel Moolenaar
fe4723c884 Fix the most significant KSE breakage caused by not restoring the
restart instruction bits in the PSR. As such, we were returning
from interrupt to the instruction in the bundle that caused us
to enter the kernel, only now we're returning to a completely
different bundle.

While close here: add two KASSERTs to make sure that we restore
sync contexts only when we entered the kernel through a syscall and
restore an async context only when we entered the kernel through an
interrupt, trap or fault.

While not exactly here, but close enough: use suword64() when we
copy the dirty registers from the kernel stack to the user stack.
The code was intended to be replaced shortly after being added,
but that was a couple of weeks ago. I might as well avoid that it
is a source for panics until it's replaced.
2003-09-19 22:51:26 +00:00
Marcel Moolenaar
ebe42add33 Revamp trap(): make it more explicit which kinds of traps/faults we
can get (or not) and what we do with them. This fixes the behaviour
for NaT consumption and speculation faults in that we now don't panic
for user faults.

Remove the dopanic label and move the code to a function. This makes
it easier in the simulator to set a breakpoint.

While here, remove the special handling of the old break-based syscall
path and move it to where we handle the break vector. While here,
reserve a new break immediate for KSE. We currently use the old break-
based syscall to deal with restoring async contexts. However, it has
the side-effect of also setting the signal mask and calling ast() on
the way out. The new break immediate simply restores the context and
returns without calling ast().
2003-09-19 22:41:52 +00:00
Marcel Moolenaar
d6c3e38bb2 Change TRAPF_USERMODE and CLOCKF_USERMODE to not test for CPL == 3,
but for CPL != 0. For some reason yet unknown it is possible for the
CPL to be 2. This would previously be counted as kernel mode, which
resulted in nasty panics. By changing the test it is now treated as
user mode, which is more correct. We still need to figure out how it
is possible that the privilege level can be 2 (or 1 for that matter),
because it's not used by us. We only use 3 (user mode) and 0 (kernel
mode).
2003-09-19 07:48:22 +00:00
Marcel Moolenaar
549ab7a654 Include "opt_kstack_pages.h". We export KSTACK_PAGES to assembly and
better have the right value.
2003-09-19 00:37:41 +00:00
Alan Cox
b9850eb224 Add a new parameter to pmap_extract_and_hold() that is needed to eliminate
Giant from vmapbuf().

Idea from:	tegge
2003-09-12 07:07:49 +00:00
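
A hedged sketch of the resulting interface and a vmapbuf()-style caller; the prot argument is the parameter the log refers to, and the surrounding variables are illustrative.

    vm_page_t pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot);

    /* A device read stores into the page, so it must already be writable. */
    m = pmap_extract_and_hold(vmspace_pmap(p->p_vmspace), va,
        bp->b_iocmd == BIO_READ ? VM_PROT_WRITE : VM_PROT_READ);
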
Marcel Moolenaar
87ad0260ff Rewrite the SAPIC initialization to always program the RTEs with what
we think is the correct trigger mode and polarity. This allows us to
implement BUS_CONFIG_INTR() as an update of the RTE in question.
Consequently, we can trust the RTE when we enable an interrupt and
avoid having to know about the trigger mode and polarity at
that time.
2003-09-10 22:49:38 +00:00
John Baldwin
42a12b2bd8 Move the definitions for ACPI MADT table entries not present in the ACPICA
distribution to a MI header so it can be shared with other architectures.
2003-09-10 06:32:27 +00:00
Marcel Moolenaar
e6882c3469 Introduce IA64_ID_PAGE_{MASK|SHIFT|SIZE} and LOG2_ID_PAGE_SIZE. The
latter is a kernel option for IA64_ID_PAGE_SHIFT, which in turn
determines IA64_ID_PAGE_MASK and IA64_ID_PAGE_SIZE.

The constants are used instead of the literal hardcoding (in its
various forms) of the size of the direct mappings created in region
6 and 7. The default and probably only workable size is still 256M,
but for kicks we use 128M for LINT.
2003-09-09 05:59:09 +00:00
Alan Cox
ba2157f218 Introduce a new pmap function, pmap_extract_and_hold(). This function
atomically extracts and holds the physical page that is associated with the
given pmap and virtual address.  Such a function is needed to make the
memory mapping optimizations used by, for example, pipes and raw disk I/O
MP-safe.

Reviewed by:	tegge
2003-09-08 02:45:03 +00:00
Bill Paul
a94100fa9b Take the support for the 8139C+/8169/8169S/8110S chips out of the
rl(4) driver and put it in a new re(4) driver. The re(4) driver shares
the if_rlreg.h file with rl(4) but is a separate module. (Ultimately
I may change this. For now, it's convenient.)

rl(4) has been modified so that it will never attach to an 8139C+
chip, leaving it to re(4) instead. Only re(4) has the PCI IDs to
match the 8169/8169S/8110S gigE chips. if_re.c contains the same
basic code that was originally bolted onto if_rl.c, with the
following updates:

- Added support for jumbo frames. Currently, there seems to be
  a limit of approximately 6200 bytes for jumbo frames on transmit.
  (This was determined via experimentation.) The 8169S/8110S chips
  apparently are limited to 7.5K frames on transmit. This may require
  some more work, though the framework to handle jumbo frames on RX
  is in place: the re_rxeof() routine will gather up frames that span
  multiple 2K clusters into a single mbuf list.

- Fixed bug in re_txeof(): if we reap some of the TX buffers,
  but there are still some pending, re-arm the timer before exiting
  re_txeof() so that another timeout interrupt will be generated, just
  in case re_start() doesn't do it for us.

- Handle the 'link state changed' interrupt

- Fix a detach bug. If re(4) is loaded as a module, and you do
  tcpdump -i re0, then you do 'kldunload if_re,' the system will
  panic after a few seconds. This happens because ether_ifdetach()
  ends up calling the BPF detach code, which notices the interface
  is in promiscuous mode and tries to switch promisc mode off while
  detaching the BPF listener. This ultimately results in a call
  to re_ioctl() (due to SIOCSIFFLAGS), which in turn calls re_init()
  to handle the IFF_PROMISC flag change. Unfortunately, calling re_init()
  here turns the chip back on and restarts the 1-second timeout loop
  that drives re_tick(). By the time the timeout fires, if_re.ko
  has been unloaded, which results in a call to invalid code and
  blows up the system.

  To fix this, I cleared the IFF_UP flag before calling ether_ifdetach(),
  which stops the ioctl routine from trying to reset the chip.

- Modified comments in re_rxeof() relating to the difference in
  RX descriptor status bit layout between the 8139C+ and the gigE
  chips. The layout is different because the frame length field
  was expanded from 12 bits to 13, and they got rid of one of the
  status bits to make room.

- Add diagnostic code (re_diag()) to test for the case where a user
  has installed a broken 32-bit 8169 PCI NIC in a 64-bit slot. Some
  NICs have the REQ64# and ACK64# lines connected even though the
  board is 32-bit only (in this case, they should be pulled high).
  This fools the chip into doing 64-bit DMA transfers even though
  there is no 64-bit data path. To detect this, re_diag() puts the
  chip into digital loopback mode and sets the receiver to promiscuous
  mode, then initiates a single 64-byte packet transmission. The
  frame is echoed back to the host, and if the frame contents are
  intact, we know DMA is working correctly, otherwise we complain
  loudly on the console and abort the device attach. (At the moment,
  I don't know of any way to work around the problem other than
  physically modifying the board, so until/unless I can think of a
  software workaround, this will have to do.)

- Created re(4) man page

- Modified rlphy.c to allow re(4) to attach as well as rl(4).

Note that this code works for the sample 8169/Marvell 88E1000 NIC
that I have, but probably won't work for the 8169S/8110S chips.
RealTek has sent me some sample NICs, but they haven't arrived yet.
I will probably need to add an rlgphy driver to handle the on-board
PHY in the 8169S/8110S (it needs special DSP initialization).
2003-09-08 02:11:25 +00:00
Marcel Moolenaar
5e3cb29a6b Untangle the code in this file to improve understandability. Both
ia64_count_cpus() and ia64_probe_sapics() called a single function
to do the actual work. The difference in behaviour was handled
in that function and was further complicated by adding bootverbose
related code. As such, even the simplest of changes was hard to
comprehend.

Untangling has been done by increasing code duplication and using
a more naive style of coding. FWIW, the object file is slightly
smaller than before, so things aren't as bad as it may seem.

Triggered by: a simple fix on the P4 branch that never got merged.
2003-09-07 23:09:08 +00:00
Alan Cox
5d314346f5 MFamd64/i386
Add necessary page locking to pmap_mincore().
2003-09-07 20:02:38 +00:00
Marcel Moolenaar
10a686623d MFp4: Revamped GENERIC (and hints). This is so much more pleasant
to look at...
2003-09-07 06:39:51 +00:00
Marcel Moolenaar
f1220bfe41 Replace sio(4) with uart(4). Remove the sio(4) hints and only add
those hints used by uart(4) for the determination of the serial
console in the absence of the HCDP table.
2003-09-07 05:47:10 +00:00
Marcel Moolenaar
8d8d970db1 Fix a place where I forgot to change the code that checks whether
we return to kernel or userland. This triggered a panic in a KSE
application when TDF_USTATCLOCK was set in the case userland was
interrupted, but we never called ast() on our way out. As such,
we called ast() at some other time. Unfortunately, TDF_USTATCLOCK
handling assumes running in the interrupt thread. This was not
the case anymore.

To avoid making the same mistake later, interrupt() now returns
to its caller whether we interrupted userland or not. This avoids
that we have to duplicate the check in assembly, where it's bound
to fall off the scope. Now we simply check the return value and
call ast() if appropriate.

Run into this: davidxu
2003-09-05 22:50:10 +00:00
Marcel Moolenaar
f02e8e8122 Use pmap_steal_memory() for the msgbuf instead of trying to squeeze
it in the last chunk (phys_avail block). The last chunk very often is
not larger than one or two pages, resulting in a msgbuf that's too
small to hold a complete verbose boot.
Note that pmap_steal_memory() will bzero the memory it "allocates".
Consequently, ia64 will never preserve previous msgbufs. This is not
a noticeable difference in practice. If the msgbuf could be reused,
it was invariably too small to have anything preserved anyway.
2003-09-01 07:06:57 +00:00
Marcel Moolenaar
47f756866a Use direct mapped KVA for the sf_buf allocator, as made possible
by the previous commit. While here, fix a typo, reformat comments
and fix a long line.

Tested with: ftpd
2003-09-01 00:12:27 +00:00
Alan Cox
411d10a600 Migrate the sf_buf allocator that is used by sendfile(2) and zero-copy
sockets into machine-dependent files.  The rationale for this
migration is illustrated by the modified amd64 allocator.  It uses the
amd64's direct map to avoid ephemeral mappings in the kernel's
address space.  On an SMP, the ephemeral mappings result in an IPI
for TLB shootdown for each transmitted page.  Yuck.

Maintainers of other 64-bit platforms with direct maps should be able
to use the amd64 allocator as a reference implementation.
2003-08-29 20:04:10 +00:00
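
A hedged sketch of the direct-map flavor in amd64 terms (not the committed code): with a direct map there is no ephemeral KVA to set up or shoot down, so an sf_buf can collapse to the page itself.

    static __inline struct sf_buf *
    sf_buf_alloc(struct vm_page *m)
    {
            return ((struct sf_buf *)m);    /* no mapping, hence no IPIs */
    }

    static __inline vm_offset_t
    sf_buf_kva(struct sf_buf *sf)
    {
            return (PHYS_TO_DMAP(VM_PAGE_TO_PHYS((vm_page_t)sf)));
    }
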
Nate Lawson
5a4d072c93 Minor style cleanups. 2003-08-28 16:30:31 +00:00
Marcel Moolenaar
d0adfaea93 Change LOG2_PAGE_SIZE from 14 to 15 bits. This will cause the CTASSERT
in vm_page.h to be reached and thus slightly increases the overall
coverage of LINT on ia64.
2003-08-25 20:02:18 +00:00
Marcel Moolenaar
5b6a41bddf Add the bits for a LINT kernel. It has been verified to compile. We
may need to polish this.
2003-08-23 21:47:33 +00:00
Marcel Moolenaar
9539d5b4f6 Remove PAGE_SIZE_4K, PAGE_SIZE_8K and PAGE_SIZE_16K and replace them
with LOG2_PAGE_SIZE. A single option is better for LINT than multiple
mutually exclusive ones.
2003-08-23 03:39:55 +00:00
Marcel Moolenaar
ca668eda45 Remove unused inclusion of opt_acpi.h 2003-08-23 00:07:52 +00:00
John Baldwin
e7411b9d71 Regen. 2003-08-21 14:16:41 +00:00
John Baldwin
daf54a1e05 Swap sigaction/sigreturn since they are in the wrong order.
Noticed indirectly by:	peter
2003-08-21 14:16:00 +00:00
Marcel Moolenaar
4a98d8b095 Undo the mistake made in revision 1.77 of trap.c, which was the
ultimate trigger for the follow-up fixes in revisions 1.78, 1.80,
1.81 and 1.82 of trap.c. I was simply so pre-occupied with the
gateway page and how it blurs kernel space with user space and
vice versa that I couldn't see that it was all a load of bollocks.

It's not the IP address that matters, it's the privilege level that
counts. We never run in user space with lifted permissions and we
sure can not run in kernel space without it. Sure, the gateway page
is the exception, but not if you look at the privilege level. It's
user space if you run with user permissions and kernel space otherwise.

So, we're back to looking at the privilege level like it should be.
There's no other way.

Pointy hat: marcel
2003-08-20 05:30:35 +00:00
Gordon Tetlow
df3d69c217 Fixup the ELF branding information to point to the new home of rtld. 2003-08-17 08:08:38 +00:00
Marcel Moolenaar
710338e94f In vm_thread_swap{in|out}(), remove the alpha specific conditional
compilation and replace it with a call to cpu_thread_swap{in|out}().
This allows us to add similar code on ia64 without cluttering the
code even more.
2003-08-16 23:15:15 +00:00
Marcel Moolenaar
26502503e5 Further cleanup <machine/cpu.h> and <machine/md_var.h>: move the MI
prototypes of cpu_halt(), cpu_reset() and swi_vm() from md_var.h to
cpu.h. This affects db_command.c and kern_shutdown.c.

ia64: move all MD prototypes from cpu.h to md_var.h. This affects
madt.c, interrupt.c and mp_machdep.c. Remove is_physical_memory().
It's not used (vm_machdep.c).

alpha: the MD prototypes have been left in cpu.h with a comment
that they should be there. Moving them is left for later. It was
expected that the impact would be significant enough to be done in
a separate commit.

powerpc: MD prototypes left in cpu.h. Comment added.

Suggested by: bde
Tested with: make universe (pc98 incomplete)
2003-08-16 16:57:57 +00:00
Marcel Moolenaar
c6d402d3f2 Fix a range check bug. Don't left-shift the integer argument 'data'.
Sign extension happens after the shift, not before, so boundary
cases like 0x40000000 will not be caught properly.
Instead, right shift ndirty. It is guaranteed to be a multiple of 8.
While here, do some manual code motion and code commoning.

Range check bug pointed out by: iedowse
2003-08-16 01:49:38 +00:00
Marcel Moolenaar
1fdb0ba9bb Fix the generation of coredumps. We did not take the dirty registers
that were on the kernel stack into account. For now we write them
out to the register stack of the process before creating the dump.
This however is not the final solution. The problem is that we may
invalidate the coredump by overwriting vital information due to an
invalid backing store pointer. Instead we need to write the dirty
registers to an unused region of VM which will result in a separate
segment in the coredump. For now we can at least get to all the
registers from a coredump.
2003-08-15 05:52:48 +00:00
Marcel Moolenaar
b00555136c Add an instruction group break after the move to application register
and the move to control register to avoid dependency violations when
these functions are used. Note that explicit data and instruction
serialization also need to be in a subsequent instruction group.
This too requires that we have an igrp break here.
2003-08-15 05:46:33 +00:00
Marcel Moolenaar
60518ee41c Introduce two machine specific ptrace(2) requests: PT_GETKSTACK and
PT_SETKSTACK. These requests allow the tracing process to access the
dirty registers of the traced process that are on the kernel stack.

Note that there's currently no way to access the rnat register for
those dirty registers that are not (yet) covered by a nat collection
point. The interface for this is still being slept on.

Also note that implied by these requests is the division of work:
The tracing process has to keep track of where registers are spilled
and is responsible to figure out where the NaT bit of the stacked
registers are at any time during the execution of the traced process.
The kernel provides the interfaces but will not abstract the fact
that the register stack can be split. This model does not follow
the approach taken in Linux where PT_PEEK and PT_POKE deals with
this automagically.
2003-08-15 05:40:59 +00:00
Marcel Moolenaar
6e1f209af1 Don't use VM_MIN_KERNEL_ADDRESS to check if the faulting address is
in user space or kernel space. VM_MIN_KERNEL_ADDRESS starts after the
gateway page, which means that improper memory accesses to the gateway
page while in user mode would panic the kernel. Use VM_MAX_ADDRESS
instead. It ends before the gateway page. The difference between
VM_MIN_KERNEL_ADDRESS and VM_MAX_ADDRESS is exactly the gateway page.
2003-08-13 03:20:10 +00:00
Marcel Moolenaar
dfcba5aae3 Put an instruction group break between the move to ar.rnat and the
move to ar.rsc. The RSE must be in enforced lazy mode when writing
to RSE modifiable registers. In this case we restore the RSE NaT
collection register ar.rnat. I have seen 2 general exception faults
on pluto1 now that indicate that the move to ar.rsc has already
happened prior to the move to ar.rnat, meaning that the RSE is not
in enforced lazy mode anymore. The ia64 dependency and instruction
ordering rules seem to allow having both registers written to in
the same instruction group, provided ar.rsc is written to later than
ar.rnat (based on the ordering semantics). It appears that we may
be pushing our luck. For now, put them in separate cycles (by means
of the instruction group break). If we ever get a general exception
fault on the move to ar.rnat again, we have definite proof that
something else is fishy.
2003-08-13 02:49:50 +00:00
Warner Losh
06b4bf3e55 Expand inline the relevant parts of src/COPYRIGHT for Matt Dillon's
copyrighted files.

Approved by: Matt Dillon
2003-08-12 23:24:05 +00:00
Marcel Moolenaar
75cf31a016 Extend identifycpu():
o  Differentiate between CPU family and CPU model. There are multiple
   Itanium 2 models and it's nice to differentiate between them.
o  Separately export the CPU family and CPU model with sysctl.
o  Merced is the only model in the Itanium family.
o  Add Madison to the Itanium 2 family. We already knew about McKinley.
o  Print the CPU family in parentheses, like we do with the i386
   CPU class.

My prototype now identifies itself as:
	CPU: Merced (800.03-Mhz Itanium)

pluto1 and pluto2 will eventually identify themselves as:
	CPU: McKinley (900.00-Mhz Itanium 2)
2003-08-12 08:10:16 +00:00
Marcel Moolenaar
e57196b3db Cleanup prototypes in cpu.h, including fswintrberr and any references
to it. Sort the remaining prototypes in cpu.h.

No functional change.
2003-08-12 03:51:53 +00:00
Marcel Moolenaar
322d6e0236 Cleanup and style(9) fixes. No functional change. 2003-08-11 21:25:19 +00:00
Marcel Moolenaar
425963bb80 o move cpu_reset() from vm_machdep.c to machdep.c.
o reorder cpu_boot(), cpu_halt() and identifycpu().

No functional change.
2003-08-10 21:33:07 +00:00
Marcel Moolenaar
29952636d3 Now that we can ignore up to 8KB of dirty registers, remove the RSE
magic from exec_setregs(). In set_mcontext() we now also don't have
to worry that we entered the kernel with more than 512 bytes of
dirty registers on the kernel stack. Note that we cannot make any
assumptions anymore WRT to NaT collection points in exec_setregs(),
so we have to deal with them now.
2003-08-10 08:04:21 +00:00
Marcel Moolenaar
f8e1f6d036 MFi386 1.422 & 1.423: lock page queues in pmap_insert_entry(). 2003-08-08 00:30:26 +00:00
John Baldwin
8b149b5131 Consistently use the BSD u_int and u_short instead of the SYSV uint and
ushort.  In most of these files, there was a mixture of both styles and
this change just makes them self-consistent.

Requested by:	bde (kern_ktrace.c)
2003-08-07 15:04:27 +00:00
Marcel Moolenaar
1634f50b1b Better define the flags in the mcontext_t and properly set the flags
when we create contexts. The meaning of the flags are documented in
<machine/ucontext.h>. I only list them here to help browsing the
commit logs:
	_MC_FLAGS_ASYNC_CONTEXT
	_MC_FLAGS_HIGHFP_VALID
	_MC_FLAGS_KSE_SET_MBOX
	_MC_FLAGS_RETURN_VALID
	_MC_FLAGS_SCRATCH_VALID

Yes, _MC_FLAGS_KSE_SET_MBOX is a hack and I'm proud of it :-)
2003-08-07 07:52:39 +00:00
Marcel Moolenaar
a50bc30203 o Fix cut-n-paste whitespace corruption in previous commit
o  For trap-based upcalls the argument (the kse_mailbox) to
   the UTS must be written onto the kernel stack, not the
   user stack. While here, deal with the fact that we may
   be at a NaT collection point.
2003-08-07 07:40:19 +00:00
Marcel Moolenaar
bee4e73025 In cpu_set_upcall_kse(), create the upcall according to the entry
path into the kernel. Normally it's due to a syscall, but one can
also be created as the result of a clock interrupt (for example).
This now even more looks like exec_setregs().

While here, add an assert that we don't expect more than 8KB of
dirty registers on the kernel stack.
2003-08-06 23:28:19 +00:00
Marcel Moolenaar
5f20d75a5f o In revision 1.45 of exception.S we changed exception_restore to
unconditionally restore ar.k7 (kernel memory stack) and ar.k6
   (kernel register stack). I don't know what I was smoking then,
   but if you unconditionally restore ar.k6, you also want to
   compute its value unconditionally. By having the computation
   predicated and dependent on whether we return to user mode, we
   would end up writing junk (= invalid value for ar.bspstore) if
   we would return to kernel mode. But the whole point of the
   unconditional restoration was that there is a grey area where
   we still need to have ar.k6 restored. If we restore with a junk
   value, we would end up wedging the machine on the next interrupt.
   So, unconditionally calculate the value we unconditionally write
   to ar.k6.

o  The previous braino was found while making the following change:
   We used to clear the lower 9 bits of the value we write to ar.k6.
   The meaning being that we know that the kernel register stack is
   at least 512 byte aligned and simply clearing the lower 9 bits
   allows us to return to a context of which we don't have dirty
   registers on the kernel stack, even though the context that
   entered the kernel does have dirty registers on the kernel stack.
   By masking-off the lower bits, we correctly obtain the base of
   the register stack without having to worry that we didn't actually
   reach the base while unwinding it.
   The change is to mask off the lower 13 bits, knowing that the
   kernel register stack is always 8KB aligned. The advantage is that
   we don't have to worry anymore if there's more than 512 bytes of
   dirty registers on the kernel stack. A situation that frequently
   occurs. In exec_setregs() in machdep.c:1.147 or older, we had to
   deal with that situation by copying the active portion of the
   register stack down in multiples of 512 bytes. Now that we mask off
   the lower 13 bits we don't have to do that at all. Contemporary
   IPF processors have a register file that can hold up to 96 stacked
   registers (=784 bytes [incl. 2 NaT collections]). With no indication
   that register files grow beyond a couple of hundred registers, we
   should not have to worry about it anymore... and yes, 640KB is
   enough for everybody :-)
   This change helps setcontext(2) and cpu_set_upcall_kse() in that
   they can return to completely different contexts without having to
   mess with the kernel stack. Of course exec_setregs() doesn't need
   to do that anymore as well.
2003-08-06 21:32:38 +00:00
Marcel Moolenaar
7f36189f8a o Put the syscall return registers in the context. Not only do we
need this for swapcontext(), KSE upcalls initiated from ast()
   also need to save them so that we properly return the syscall
   results after having had a context switch. Note that we don't
   use r11 in the kernel. However, the runtime specification has
   defined r8-r11 as return registers, so we put r11 in the context
   as well. I think deischen@ was trying to tell me that we should
   save the return registers before. I just wasn't ready for it :-)

o  The EPC syscall code has 2 return registers and 2 frame markers
   to save. The first (rp/pfs) belongs to the syscall stub itself.
   The second (iip/cfm) belongs to the caller of the syscall stub.
   We want to put the second in the context (note that iip and cfm
   relate to interrupts. They are only being misused by the syscall
   code, but are not part of a regular context).
   This way, when the context is switched to again, we return to
   the caller of setcontext(2) as one would expect.

o  Deal with dirty registers on the kernel stack. The getcontext()
   syscall will flush the RSE, so we don't expect any dirty registers
   in that case. However, in thread_userret() we also need to save
   the context in certain cases. When that happens, we are sure that
   there are dirty registers on the kernel stack.
   This implementation simply copies the registers, one at a time,
   from the kernel stack to the user stack. NAT collections are not
   dealt with. Hence we don't preserve NaT bits. A better solution
   needs to be found at some later time.
   We also don't deal with this in all cases in set_mcontext. No
   temporary solution is implemented because it's not a showstopper.
   The problem is that we need to ignore the dirty registers and we
   automatically do that for at most 62 registers. When there are more
   than 62 dirty registers we have a memory "leak".

This commit is fundamental for KSE support.
2003-08-05 18:52:02 +00:00
Marcel Moolenaar
02cc6a6f35 Fix logic bug in the previous commit. Any region less than 5 is a
user space region. Hence, we need to test if 5 is greater than the
region; not greater equal.
This bug caused us to call ast() while interrupting kernel mode.
2003-08-04 22:00:48 +00:00
John Baldwin
3bdbd658f1 - Since td_critnest is now initialized in MI code, it doesn't have to be
set in cpu_critical_fork_exit() anymore.
- As far as I can tell, cpu_thread_link() has never been used, not even
  when it was originally added, so remove it.
2003-08-04 20:32:45 +00:00
Marcel Moolenaar
46e31b2612 Cleanup the clock code. This includes:
o  Remove alpha specific timer code (mc146818A) and compiled-out
   calibration of said timer.
o  Remove i386 inherited timer code (i8253) and related acquire and
   release functions.
o  Move sysbeep() from clock.c to machdep.c and have it return
   ENODEV. Console beeps should be implemented using ACPI or if no
   such device is described, using the sound driver.
o  Move the sysctls related to adjkerntz, disable_rtc_set and
   wall_cmos_clock from machdep.c to clock.c, where the variables
   are.
o  Don't hardcode a hz value of 1024 in cpu_initclocks() and don't
   bother faking a stathz that's 1/8 of that. Keep it simple: hz
   defaults to HZ and stathz equals hz. This is also how it's done
   for sparc64.
o  Keep a per-CPU ITC counter (pc_clock) and adjustment (pc_clockadj)
   to calculate ITC skew and corrections. On average, we adjust the
   ITC match register once every ~1500 interrupts for a duration of
   2 consecutive interrupts. This is to correct the non-deterministic
   behaviour of the ITC interrupt (there's a delay between the match
   and the raising of the interrupt).
o  Add 4 debugging sysctls to monitor clock behaviour. Those are
   debug.clock_adjust_edges, debug.clock_adjust_excess,
   debug.clock_adjust_lost and debug.clock_adjust_ticks. The first
   counts the individual adjustment cycles (when the skew first
   crosses the threshold), the second counts the number of times the
   adjustment was excessive (any non-zero value is to be considered
   a bug), the third counts lost clock interrupts and the last counts
   the number of interrupts for which we applied an adjustment
   (debug.clock_adjust_ticks / debug.clock_adjust_edges gives the
   average duration of an individual adjustment -- should be ~2).

While here, remove some nearby (trivial) left-overs from alpha and
other cleanups.
2003-08-04 05:13:18 +00:00
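As a usage illustration for the counters described above, a small userland sketch (a hypothetical monitoring tool; the counter width is assumed to be long) that reads two of the sysctls with sysctlbyname(3) and prints the average adjustment duration:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            long edges, ticks;      /* assumed counter width */
            size_t len;

            len = sizeof(edges);
            if (sysctlbyname("debug.clock_adjust_edges", &edges, &len,
                NULL, 0) == -1)
                    return (1);
            len = sizeof(ticks);
            if (sysctlbyname("debug.clock_adjust_ticks", &ticks, &len,
                NULL, 0) == -1)
                    return (1);
            /* Average duration of an adjustment; ~2 is expected. */
            if (edges != 0)
                    printf("average adjustment: %.2f interrupts\n",
                        (double)ticks / edges);
            return (0);
    }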
Marcel Moolenaar
5192a6fc07 Fix handling of external interrupts: we weren't calling ast() when
interrupting user mode. The net effect of this bug is that a clock
interrupt does not cause rescheduling and processes are not
preempted. It only takes a "while (1);" to render the machine
useless.

This bug was introduced by the context changes and EPC syscall code.
Handling of ASTs was moved to C for clarity and ease of maintenance,
but was not added for the external interrupt case.

This needs to be revisited. We now have calls to do_ast() in trap(),
break_syscall() and ivt_External_Interrupt(). A single call in
exception_restore covers these 3 places without duplication. This
is where we handled ASTs prior to the overhaul, except that the
meat has been moved to do_ast(), a C function. This was the goal
to begin with.

Pointy hat: marcel
2003-08-04 00:08:39 +00:00
David E. O'Brien
a98a5f06d3 Style sync. 2003-08-03 07:50:19 +00:00
Marcel Moolenaar
6a1909919b Don't use uint64_t. Use unsigned long instead. One is supposed to use
ucontext_t without having to include headers other than <ucontext.h>.
2003-08-02 01:12:31 +00:00
Marcel Moolenaar
b65384050e Write the preserved registers to (and read them from) struct reg and
struct fpreg.
2003-08-01 07:21:34 +00:00
Bosko Milekic
b053bc8407 Make sure that when the PV ENTRY zone is created in pmap, that it's
created not only with UMA_ZONE_VM but also with UMA_ZONE_NOFREE.  In
the i386 case in particular, the pmap code would hook a special
page allocation routine that allocated from kernel_map and not kmem_map,
and so when/if the pageout daemon drained the zones, it could actually
push out slabs from the PV ENTRY zone but call UMA's default page_free,
which resulted in pages allocated from kernel_map being freed to
kmem_map; bad.  kmem_free() ignores the return value of the
vm_map_delete and just returns.  I'm not sure what the exact
repercussions could be, but it doesn't look good.

In the PAE case on i386, we also set-up a zone in pmap, so be
conservative for now and make that zone also ZONE_NOFREE and
ZONE_VM.  Do this for the pmap zones for the other archs too,
although in some cases it may not be entirely necessary.  We'd
rather be safe than sorry at this point.

Perhaps all UMA_ZONE_VM zones should by default be also
UMA_ZONE_NOFREE?

May fix some of silby's crashes on the PV ENTRY zone.
2003-07-31 03:39:51 +00:00
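A hedged kernel-side sketch of the kind of zone creation described above; the ctor/dtor arguments and the pv_entry type are placeholders for whatever the real pmap code uses:

    #include <sys/param.h>
    #include <vm/uma.h>

    static uma_zone_t pvzone;

    static void
    pmap_init_pvzone(void)
    {
            /*
             * UMA_ZONE_VM keeps the zone out of the regular VM paths and
             * UMA_ZONE_NOFREE keeps the page daemon from draining its
             * slabs, which is what this change makes mandatory for the
             * PV ENTRY zone.
             */
            pvzone = uma_zcreate("PV ENTRY", sizeof(struct pv_entry),
                NULL, NULL, NULL, NULL, UMA_ALIGN_PTR,
                UMA_ZONE_VM | UMA_ZONE_NOFREE);
    }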
Peter Wemm
ad7a226f9d Deal with 'options KSTACK_PAGES' being a global option. 2003-07-31 01:31:32 +00:00
Peter Wemm
aac6412bcd Cosmetic: fix some disorder of #include "opt_...." files 2003-07-31 01:29:09 +00:00
Peter Wemm
edc367db34 Remove leftover relic of pmap_new_thread() etc. 2003-07-31 01:28:41 +00:00
Maxime Henrion
d5afecd068 - Introduce a new busdma flag BUS_DMA_ZERO to request zeroed
  memory in bus_dmamem_alloc().  This is possible now that
  contigmalloc() supports the M_ZERO flag.
- Remove the locking of Giant around calls to contigmalloc() since
  contigmalloc() now grabs Giant itself.
2003-07-27 13:52:10 +00:00
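A sketch of how a driver might use the new flag; dmat is assumed to come from an earlier bus_dma_tag_create(), and bus_dmamem_alloc() fills in the map itself:

    void *vaddr;
    bus_dmamap_t map;
    int error;

    /* Ask busdma for DMA-able memory that is handed back already zeroed. */
    error = bus_dmamem_alloc(dmat, &vaddr, BUS_DMA_NOWAIT | BUS_DMA_ZERO,
        &map);
    if (error != 0)
            return (error);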
Marcel Moolenaar
e2fe99a2e0 Remove prototype of ia64_pa_access(). The function has been moved to
mem.c where it's been made static.
2003-07-26 10:13:30 +00:00
Marcel Moolenaar
4f373ec187 Avoid using __aligned(16). Instead define the jmp_buf in terms of
long doubles. This gives us 16-byte alignment. Add a CTASSERT for
the size of the jmp_buf to detect ABI breakages.
2003-07-26 08:03:43 +00:00
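Roughly the shape of the change, with a placeholder element count standing in for the real ia64 value:

    /* Sketch only: _JBLEN here is a placeholder, not the real ia64 value. */
    #define _JBLEN  19

    /* An array of long doubles is 16-byte aligned by the LP64 ABI. */
    typedef long double jmp_buf[_JBLEN];

    /* Catch ABI-visible size changes at compile time. */
    CTASSERT(sizeof(jmp_buf) == _JBLEN * 16);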
Marcel Moolenaar
dc539f3ee0 Unbreak ia64 builds now -Werror is enabled again. Avoid obsolete
memory operand construct.
2003-07-26 07:23:25 +00:00
Marcel Moolenaar
938b878e45 Revert previous commit. We don't use setjmp()/longjmp() for context
switching anymore, so there's no need to save and restore GP. This
change breaks threaded applications linked against libc_r. Pull the
tier 2 card again: relink. This will link against libthr instead.
2003-07-25 22:36:48 +00:00
Alan Cox
059358675e MFi386 revision 1.416
Add vm object locking to pmap_prefault().

Note: powerpc and sparc64 do not implement this function.
2003-07-25 18:58:39 +00:00
Marcel Moolenaar
c1e97bb458 Remove __aligned(16) from the definition of struct _ia64_fpreg. It's
a non-standard construct. Instead, redefine struct _ia64_fpreg as a
union and put a long double in it. On ia64 and for LP64, this is
defined by the ABI to have 16-byte alignment. For ILP32 a long double
has 4-byte alignment, but we don't support ILP32.

Note that the in-memory image of a long double does not match the in-
memory image of spilled FP registers. This means that one cannot use
the fpr_flt field to interpret the bits. For this reason we continue
to use an aggregate type.
2003-07-25 08:02:24 +00:00
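The resulting definition is roughly the following sketch; fpr_flt exists only to force the alignment, not to interpret the register image:

    union _ia64_fpreg {
            unsigned char   fpr_bits[16];   /* spilled FP register image */
            long double     fpr_flt;        /* forces 16-byte alignment on LP64 */
    };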
Marcel Moolenaar
cd7e5d6eb4 Remove INVARIANT* and WITNESS. This makes the simulator much more
pleasant to use.
2003-07-25 07:52:20 +00:00
Marcel Moolenaar
076f523998 Move ia64_pa_access() from machdep.c to mem.c and declare it static.
It's only used in mem.c and cannot accidentally be used elsewhere
this way.
2003-07-25 05:37:13 +00:00
Marcel Moolenaar
c5262d75ed Disable the single-step trap on a debug related trap, including of
course the single-step trap itself.
2003-07-25 00:11:14 +00:00
Marcel Moolenaar
793e17ba11 We sloppily created an array for the high FP registers (f32-f127),
but this just created a weird inconsistency when porting gdb(1).
Instead, we name each high FP register separately, like we do for
all the other registers.
2003-07-23 03:08:34 +00:00
Marcel Moolenaar
e90153536e Rename thread_siginfo to cpu_thread_siginfo. 2003-07-15 04:43:33 +00:00
Marcel Moolenaar
ed4ee6b2af Enable the high FP registers when we call the FPSWA handler and disable
them again afterwards. This fixes a disabled FP fault while in the FPSWA
handler.
While here, merge the FP fault and FP trap handling code to reduce code
duplication. Where the code differed, it was not clear that it should be.

Trigger case: ports/math/atlas
2003-07-13 04:08:16 +00:00
Marcel Moolenaar
480d3dd2ea Add logic to trace across/over a trapframe. We have ABI markers in
our unwind information for functions that are entry points into the
kernel. When stepping to the next frame, the unwinder will let us
know when such a marker was encountered. We use this to stop the
current unwind session, query the trapframe and restart a new
unwind session based on the new trapframe.

The implementation is a bit sloppy, but at this time there are
bigger fish to fry.
2003-07-12 04:35:09 +00:00
Marcel Moolenaar
2c75b47793 Add a body directive before the first instruction in epc_syscall().
This results in a zero length prologue and a body that covers the
whole function. This is more correct.
2003-07-11 08:52:48 +00:00
Marcel Moolenaar
67f79f5a15 Remove a gratuitous align directive after the endp directive for
IVT entries.
2003-07-11 08:49:26 +00:00
Marcel Moolenaar
290245ea4c Don't call malloc() and free() while in the debugger and unwinding
to get a stacktrace. This does not work even with M_NOWAIT when we
have WITNESS and is generally a bad idea (pointed out by bde@). We
allocate an 8K heap for use by the unwinder when ddb is active. A
stack trace roughly takes up half of that in any case, so we have
some room for complex unwind situations. We don't want to waste too
much space though. Due to the nature of unwinding, we don't worry
too much about fragmentation or performance of unwinding while in
the debugger. For now we have our own heap management, but we may
be able to leverage from existing code at some later time.

While here:
o  Make sure we actually free the unwind environment after unwinding.
   This fixes a memory leak.
o  Replace Doug's license with mine in unwind.c and unwind.h. Both
   files don't have much, if any, of Doug's code left since the EPC
   syscall overhaul and the import of the unwinder.
o  Remove dead code.
o  Replace M_NOWAIT with M_WAITOK for all remaining malloc() calls.
2003-07-05 23:21:58 +00:00
Alan Cox
1f78f902a8 Background: pmap_object_init_pt() premaps the pages of a object in
order to avoid the overhead of later page faults.  In general, it
implements two cases: one for vnode-backed objects and one for
device-backed objects.  Only the device-backed case is really
machine-dependent, belonging in the pmap.

This commit moves the vnode-backed case into the (relatively) new
function vm_map_pmap_enter().  On amd64 and i386, this commit only
amounts to code rearrangement.  On alpha and ia64, the new machine
independent (MI) implementation of the vnode case is smaller and more
efficient than their pmap-based implementations.  (The MI
implementation takes advantage of the fact that objects in -CURRENT
are ordered collections of pages.)  On sparc64, pmap_object_init_pt()
hadn't (yet) been implemented.
2003-07-03 20:18:02 +00:00
Ruslan Ermilov
0be33d3321 The .s files were repo-copied to .S files.
Approved by:	marcel
Repocopied by:	joe
2003-07-02 12:57:07 +00:00
Marcel Moolenaar
c55b999c72 The use of SYSINIT requires the inclusion of <sys/kernel.h> 2003-07-02 01:22:29 +00:00
Maxime Henrion
75f9bf73ec Make this even closer to other busdma backends. 2003-07-01 21:21:45 +00:00
Maxime Henrion
02681c8bc2 Sync bounce pages support with the alpha backend. More precisely:
o  use a mutex to protect the bounce pages structure.
o  use a SYSINIT function to initialize the bounce pages structures
   and thus avoid a race condition in alloc_bounce_pages().
o  add support for the BUS_DMA_NOWAIT flag in bus_dmamap_load().
o  remove obsolete splhigh()/splx() calls.
o  remove printf() about incorrect locking in busdma_swi() and sync
   busdma_swi() with the one of the alpha backend.
o  use __FBSDID.
2003-07-01 18:08:05 +00:00
Maxime Henrion
4813f72a9b Honor the boundary of the busdma tag when allocating bounce pages.
This was fixed in revision 1.5 of alpha/alpha/busdma_machdep.c and
was never fixed in other busdma backends using bounce pages.
2003-07-01 16:54:54 +00:00
Scott Long
f6b1c44d1f Mega busdma API commit.
Add two new arguments to bus_dma_tag_create(): lockfunc and lockfuncarg.
Lockfunc allows a driver to provide a function for managing its locking
semantics while using busdma.  At the moment, this is used for the
asynchronous busdma_swi and callback mechanism.  Two lockfunc implementations
are provided: busdma_lock_mutex() performs standard mutex operations on the
mutex that is specified from lockfuncarg.  dflt_lock() is a panic
implementation and is defaulted to when NULL, NULL are passed to
bus_dma_tag_create().  The only time that NULL, NULL should ever be used is
when the driver ensures that bus_dmamap_load() will not be deferred.
Drivers that do not provide their own locking can pass
busdma_lock_mutex,&Giant args in order to preserve the former behaviour.

sparc64 and powerpc do not provide real busdma_swi functions, so this is
largely a noop on those platforms.  The busdma_swi on ia64 is not properly
locked yet, so warnings will be emitted on this platform when busdma
callback deferrals happen.

If anyone gets panics or warnings from dflt_lock() being called, please
let me know right away.

Reviewed by:	tmm, gibbs
2003-07-01 15:52:06 +00:00
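A hedged sketch of the new tag creation call for a driver that relies on Giant for its deferred-callback locking; everything except the two new trailing arguments is a placeholder value:

    bus_dma_tag_t tag;
    int error;

    error = bus_dma_tag_create(
        NULL,                       /* parent */
        1, 0,                       /* alignment, boundary */
        BUS_SPACE_MAXADDR_32BIT,    /* lowaddr */
        BUS_SPACE_MAXADDR,          /* highaddr */
        NULL, NULL,                 /* filter, filterarg */
        MAXBSIZE, 1,                /* maxsize, nsegments */
        MAXBSIZE,                   /* maxsegsz */
        0,                          /* flags */
        busdma_lock_mutex, &Giant,  /* new: lockfunc, lockfuncarg */
        &tag);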
Alan Cox
dca96f1adc - Export pmap_enter_quick() to the MI VM. This will permit the
   implementation of a largely MI pmap_object_init_pt() for vnode-backed
   objects.  pmap_enter_quick() is implemented via pmap_enter() on sparc64
   and powerpc.
 - Correct a mismatch between pmap_object_init_pt()'s prototype and its
   various implementations.  (I plan to keep pmap_object_init_pt() as
   the MD hook for device-backed objects on i386 and amd64.)
 - Correct an error in ia64's pmap_enter_quick() and adjust its interface
   to match the other versions.  Discussed with: marcel
2003-06-29 21:20:04 +00:00
Alan Cox
269acda954 - Remove the calls to pmap_install() from pmap_object_init_pt(); they are
   redundant.  Discussed with: marcel
 - MFi386: Add vm object locking to pmap_object_init_pt().
2003-06-29 06:10:32 +00:00
Marcel Moolenaar
d9a4740f18 Implement cpu_set_upcall_kse(). Elementary testing shows that this
function behaves correctly in principle, but is not expected to be
100% complete. In any case, with this commit we have KSE ported
enough to start runtime testing with threaded applications and fix
whatever bugs or omissions we encounter. Yay!
2003-06-28 09:22:25 +00:00
David Xu
b8f480ab94 Add a machine-dependent function thread_siginfo; the SA signal code
will use the function to construct a siginfo structure and use
the result to export to userland.

Reviewed by: julian
2003-06-28 06:34:08 +00:00
Scott Long
3eaffdf7e0 Do the first and mostly mechanical step of adding mutex support to the
bus_dma async callback scheme.  Note that sparc64 does not seem to do
async callbacks.  Note that ia64 callbacks might not be MPSAFE at the
moment.  Note that powerpc doesn't seem to do async callbacks due to
the implementation being incomplete.

Reviewed by:	mostly silence on arch@
2003-06-27 08:31:48 +00:00
Marcel Moolenaar
e2905ce3a0 Add TLS related relocation. 2003-06-19 06:51:43 +00:00
Alan Cox
40ebf3e43a Fix a performance bug in all of the various implementations of
uma_small_alloc(): They always zeroed the page regardless of what the
caller requested.
2003-06-18 02:57:38 +00:00
David Xu
0e2a4d3aeb Rename P_THREADED to P_SA. P_SA means a process is using scheduler
activations.
2003-06-15 00:31:24 +00:00
Alan Cox
49a2507bd1 Migrate the thread stack management functions from the machine-dependent
to the machine-independent parts of the VM.  At the same time, this
introduces vm object locking for the non-i386 platforms.

Two details:

1. KSTACK_GUARD has been removed in favor of KSTACK_GUARD_PAGES.  The
different machine-dependent implementations used various combinations
of KSTACK_GUARD and KSTACK_GUARD_PAGES.  To disable the guard page, set
KSTACK_GUARD_PAGES to 0.

2. Remove the (unnecessary) clearing of PG_ZERO in vm_thread_new.  In
5.x, (but not 4.x,) PG_ZERO can only be set if VM_ALLOC_ZERO is passed
to vm_page_alloc() or vm_page_grab().
2003-06-14 23:23:55 +00:00
Alan Cox
89f4fca265 Move the *_new_altkstack() and *_dispose_altkstack() functions out of the
various pmap implementations into the machine-independent vm.  They were
all identical.
2003-06-14 06:20:25 +00:00
Marcel Moolenaar
222a7e518c Remove kernel event tracing. The overhead is significant when running
under ski.
2003-06-14 00:01:24 +00:00
Marcel Moolenaar
58f2d986a6 Make sure pcpu->pc_pcb is pointing to a 16-byte aligned address. The
PCB contains FP registers, whose alignment must be at least 16 bytes.
Since the PCB pointed to by pc_pcb is immediately after the PCPU
itself, round up the size of the PCPU to a multiple of 16 bytes. The
PCPU is page aligned.

This fixes a misalignment trap caused by stopping a CPU in a SMP
kernel, such as been done when entering the debugger.

Reported by: Alan Robinson <alan.robinson@fujitsu-siemens.com>
2003-06-12 00:15:18 +00:00
Peter Wemm
77e2a274d0 GC unused cpu_wait() function 2003-06-11 05:20:33 +00:00
Juli Mallett
d196a10856 Note that scbus is required for SCSI, not just "required" in general.
Submitted by:	Edward Kaplan (tmbg37 on IRC)
Reviewed by:	rwatson (in principle)
2003-06-08 02:03:02 +00:00
Marcel Moolenaar
6f2071769f pmap_find_vhpt() has been observed to return a NULL pointer even
though callers assume this cannot happen and dereference the result
without checking the return value. Add KASSERTs to
force a kernel with INVARIANTS to panic. This is a short-term
measure. The pmap code is scheduled to be overhauled.
2003-06-07 04:17:39 +00:00
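The pattern being added is essentially the following sketch; the variable names and the exact panic message are illustrative:

    struct ia64_lpte *pte;

    pte = pmap_find_vhpt(va);
    KASSERT(pte != NULL,
        ("%s: unmatched VHPT entry for va %#lx", __func__, va));
    /* pte is only dereferenced after the INVARIANTS check above. */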
Marcel Moolenaar
f09b81f8be If we get a fault in the gateway page, which would happen if we try
to deliver a signal and the RSE backing store has been exhausted or
the backing store pointer has been clobbered, we need to make sure
we call userret() and do_ast() when we exit from trap(). Not adjusting
the local variable 'user' in this case will prevent the faulty process
from being terminated and we end up in an infinite fault repetition.

Faulty process provided by: bento
2003-06-07 04:10:07 +00:00
Marcel Moolenaar
0785ee125b Use TRAPF_USERMODE() to replace an equivalent check in trap(). While
here, amend the related comment.
2003-06-06 23:44:05 +00:00
Marcel Moolenaar
eaa7bda4a5 Have TRAPF_USERMODE() take into account that the gateway page is not
always kernel space. It should be treated as user space when run with
user privileges (which is the case for the signal trampolines). This
fixes its only use in a KASSERT in subr_trap.c.
2003-06-06 23:27:18 +00:00
Marcel Moolenaar
3f52f44add Fix the dreaded double counting that was present on alpha as well and
got fixed two weeks after the ia64 version was copied from the alpha
version (see rev 1.32 of sys/alpha/alpha/mem.c). As such, we were
missing the same continue as on alpha.

While here, add a default case for the device minor switch and do
some general style(9) cleanups.

WARNING: this file still has bugs. When reading from region 6 or
region 7, we don't validate the physical address. One can trivially
cause a machine check by trying to read from address 0xFFFFFFFFFFFFFFF0
or something that uses the unimplemented physical address bits.

Reported by: Alan Robinson <alan.robinson@fujitsu-siemens.com>
2003-06-04 21:56:10 +00:00
Marcel Moolenaar
11e0f8e16d Change the second (and last) argument of cpu_set_upcall(). Previously
we were passing in a void* representing the PCB of the parent thread.
Now we pass a pointer to the parent thread itself.
The prime reason for this change is to allow cpu_set_upcall() to copy
(parts of) the trapframe instead of having it done in MI code in each
caller of cpu_set_upcall(). Copying the trapframe cannot always be
done with a simple bcopy() or may not always be optimal that way. On
ia64 specifically the trapframe contains information that is specific
to an entry into the kernel and can only be used by the corresponding
exit from the kernel. A trapframe copied verbatim from another frame
is in most cases useless without some additional normalization.

Note that this change removes the assignment to td->td_frame in some
implementations of cpu_set_upcall(). The assignment is redundant.
A previous call to cpu_thread_setup() already did the exact same
assignment. An added benefit of removing the redundant assignment is
that we can now change td_pcb without nasty side-effects.

This change officially marks the ability on ia64 for 1:1 threading.

Not tested on: amd64, powerpc
Compile & boot tested on: alpha, sparc64
Functionally tested on: i386, ia64
2003-06-04 21:13:21 +00:00
Marcel Moolenaar
0fa2b83829 Improve set_mcontext:
o  Don't copy psr verbatim from the user supplied context. Only allow
   userland to change the processor settings that are part of the user
   mask.
2003-06-01 23:22:56 +00:00
Marcel Moolenaar
86f4f6f7b8 Improve on cpu_set_upcall:
o  Use pcb and tf for the new pcb and the new trapframe and use pcb0
   for the old (current) pcb. The mix of pcb, pcb2 and tf was slightly
   confusing.
o  Don't define td->td_frame here. It has already been set previously
   by cpu_thread_setup. Add a KASSERT to make sure pcb and tf are both
   non-NULL.
o  Make sure the number of dirty registers is 0 for the new thread.
   There are no user registers on the backing store because we haven't
   entered userland yet.
2003-06-01 23:19:21 +00:00
Marcel Moolenaar
798c9e50b0 Implement cpu_thread_setup(). This is mostly the same as on i386,
except for the fact that trapframes have a size recorded in them
that we set here too. We need this for proper thread setup.

Pointed out by: mtm
2003-06-01 08:29:43 +00:00
Marcel Moolenaar
5e7019bf32 Now that we have the signal trampolines in the gateway page and the
gateway page is considered kernel space, we can panic when we should
only SIGSEGV. Hence, add the additional constraint that for page
faults we also require running with kernel privileges. The gateway
page is the only kernel code running with user privileges, so this
is a correct way to exclude the gateway page from kernel land.

We do not currently exclude the gateway page for other faults as it
is not always the right way to do it. Further tuning will happen on
a case-by-case basis.
2003-05-31 21:21:35 +00:00
Marcel Moolenaar
480a728dee Implement cpu_set_upcall(). Required by libthr and used by
thr_create(2). This implementation is so far only compile tested.
But since this is also the last of the functions required to
support libthr, we're now functionally complete (for some weird
definition of functionally; and complete). Runtime testing can
commence.
2003-05-31 21:14:25 +00:00
Marcel Moolenaar
c01da18e22 Implement set_mcontext() and get_mcontext(). Just as for sendsig() and
sigreturn(), we cheat and assume the preserved registers are still
on-chip and unmodified. This is actually the case, but more by accident
than by design. We need to use unwinding eventually or explicitly
compile the kernel in a way that the compiler steers clear from using
the preserved registers completely.
2003-05-31 21:07:08 +00:00
Marcel Moolenaar
8ef6f226da Make the regset pointers const pointers for the context restore functions.
This works better with set_mcontext() and is more precise in general.
2003-05-31 21:02:18 +00:00
Marcel Moolenaar
3893a77138 Some ia32 related finetuning for the EPC syscall path:
o  The SDM states that flushing the RSE in the cycle prior to the
   call to ia32 code yields the best performance. We don't really
   care too much about performance here, but we do the same anyway.
   I'm being paranoid and conservative here.
o  Only initialize the ia32 state registers, not the registers used
   as scratch by the ia32 engine. This saves a couple of loads from
   the trapframe, but also helps debugging: we don't clobber useful
   debugging data (engineering hints :-)
o  Make sure all general registers constituting ia32 state have been
   initialized. If there's nothing useful to be loaded from the trapframe,
   clear the register. This avoids accidentally leaking NaT bits.
o  Make sure we set ar.k6 prior to clobbering ar.bspstore and also
   set ar.k7 prior to setting sp. This fixes a race seen for ia64
   native code as well (and previously fixed too).
2003-05-31 20:57:26 +00:00
Marcel Moolenaar
62931aa266 Make sure we have all the dirty registers in user frames on the
backing store before we discard them. It is possible that we
enter the kernel (due to an execve in this case) with a lot of
dirty user registers and that the RSE has only partially spilled
them (to make room for new frames). We cannot move the backing
store pointer down (to discard user registers) when not all of
the user registers are on the backing store.
So, we flush the register stack IFF this happens. Unconditionally
doing the flush is too costly, because the condition in which we
need to flush is very rare.

This change appears to fix the SIGSEGV that sometimes happen for
newly executed processes and so far also appears to fix the last
of the corruption. It is possible, although not likely, that this
change prevents some other bug from happening, even though it is
itself not a fix. Hence the uncertainty. We'll know in a couple
of months I guess :-)
2003-05-31 20:42:35 +00:00
Hiten Pandya
b77c32a07e Rename BUS_DMAMEM_NOSYNC to BUS_DMA_COHERENT.
The current name is confusing, because it indicates to
the client that a bus_dmamap_sync() operation is not
necessary when the flag is specified, which is wrong.

The main purpose of this flag is to hint the underlying
architecture that DMA memory should be mapped in a coherent
way, but the architecture can ignore it.  But if the
architecture does support coherent mapping of memory, then
it makes bus_dmamap_sync() calls cheap.

This flag is the same as the one in NetBSD's Bus DMA.

Reviewed by: gibbs, scottl, des (implicitly)
Approved by: re@ (jhb)
2003-05-30 20:40:33 +00:00
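In driver terms the rename amounts to the following short sketch; as explained above, the flag is only a mapping hint and sync operations remain necessary:

    /* Hint that the memory should be mapped cache-coherent; the
       architecture may ignore this, and bus_dmamap_sync() calls are
       still required either way. */
    error = bus_dmamem_alloc(dmat, &vaddr,
        BUS_DMA_NOWAIT | BUS_DMA_COHERENT, &map);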
Marcel Moolenaar
3a8c4f9f9c Move the sysctls of the misalignment handler to where they belong
and use OID_AUTO instead of fixed IDs.

Approved by: re@ (blanket)
2003-05-29 06:30:36 +00:00
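A sketch of a sysctl registered with OID_AUTO rather than a fixed id; the variable name and description here are illustrative, not the actual ones:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int unaligned_print = 1;
    SYSCTL_INT(_debug, OID_AUTO, unaligned_print, CTLFLAG_RW,
        &unaligned_print, 0, "print a warning on misaligned userland access");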
Marcel Moolenaar
12cd60b726 Fix what I think is a cut-n-paste bug: use OID_AUTO for the
print_usertrap sysctl instead of CPU_UNALIGNED_PRINT. The
latter is used already.

Approved by: re@ (blanket)
2003-05-29 05:09:15 +00:00
Marcel Moolenaar
81d77e2eed A flushrs must be the first in an instruction group.
Approved by: re@ (blanket)
2003-05-27 07:10:58 +00:00
Scott Long
7e71df9339 Bring back bus_dmasync_op_t. It is now a typedef to an int, though the
BUS_DMASYNC_ definitions remain as before.  The does not change the ABI,
and reverts the API to be a bit more compatible and flexible.  This has
survived a full 'make universe'.

Approved by:	re (bmah)
2003-05-27 04:59:59 +00:00
Marcel Moolenaar
1093ceb088 Have the unwinder allocate memory with M_NOWAIT. The unwinder is
used by DDB and we cannot know in advance whether it's safe to
sleep. It often enough isn't. We may want to pre-allocate space
to cover the most common cases without having to use malloc at
all, but that requires some analysis. We leave that for later.

Approved by: re@ (blanket)
2003-05-27 01:15:16 +00:00
Marcel Moolenaar
a47e5d473b Fix fu{byte|word*} and su{byte|word*}:
o  If the address was not within user space we jumped to fusufault
   where we would clear pcb_onfault and return 0. There are two
   bugs here:
   1. We never got to the point where we assigned the address of
      pcb_onfault to r15, which means that we would clobber some
      random memory location, including I/O space or ROM.
   2. We're supposed to return -1 on error.
o  Make sure we have proper memory ordering for setting pcb_onfault,
   doing the memory access to user space and clearing pcb_onfault.
   For the fu* family of functions this means that we need a mf
   instruction, because we don't have acquire semantics on stores
   and release semantics on loads (hence st;ld cannot be ordered
   without intermediate mf).

While here, implement casuptr() so that we are a (small) step
closer to supporting libthr and deobfuscate the non-implementation
of {f|s}uswintr.

Approved by: re@ (blanket)
2003-05-27 01:00:12 +00:00
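The contract being repaired is the usual -1-on-fault convention, e.g. in a hypothetical caller:

    long val;

    /* uaddr is an untrusted userland address (hypothetical). */
    val = fuword((void *)uaddr);
    if (val == -1)
            return (EFAULT);    /* fault while reading user memory */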
Marcel Moolenaar
941a057663 Revision 1.99 of this file changed the allocation request from
VM_ALLOC_INTERRUPT to VM_ALLOC_SYSTEM. There was no mention of
this in the commit log as it was considered harmless. Guess what:
it does harm. WITNESS showed that we can not safely grab the
page queue lock in vm_page_alloc() in all cases as we may have
to sleep on it. Revert the request to VM_ALLOC_INTERRUPT to
circumvent this. We panic if vm_page_alloc returns 0. I'm not
entirely happy about this, but we have bigger fish to fry.

Approved by: re@ (blanket)
2003-05-26 22:54:18 +00:00
Marcel Moolenaar
dc0545462e Now that we define user mode as any IP address that isn't in the
kernel's VA regions, we cannot limit the use of break-based
syscalls to user mode only. The signal trampolines are in the
gateway page, which is mapped into the process address space in
region 5 and thus is kernel space.

We don't special case the gateway page here. Allow break-based
syscalls from anywhere in the kernel VA space.

Approved by: re@ (blanket)
2003-05-25 01:01:28 +00:00
Marcel Moolenaar
d7f827116f Fix a source of instability specific to an EPC userland. We return
to userland with interrupts disabled until we restore PSR. However,
it has been observed that interrupts do actually happen before they
are enabled again. This is a bit surprising and I don't know yet
what's going on exactly. Nevertheless, the code was not crafted
carefully enough to allow interrupts to happen and we could
clobber the kernel stack of another thread when interrupts did
happen.

This is what happens: we restore the (memory) stack pointer (sp)
and the register stack base prior to restoring ar.k6 and ar.k7.
This is not a problem if interrupts don't happen between setting
sp/ar.bspstore and ar.k6/ar.k7. Alas, interrupts can happen.
Since sp/ar.bspstore already point to the userland stacks, we
need to switch to the kernel stack in interrupt. However, ar.k6
and ar.k7 have not been set, which means that we were switching
to some unrelated kstack and happily clobbered the trapframe
present there if the thread to which the kstack belonged was
in kernel mode or otherwise we could have our trapframe clobbered
if that other thread enters the kernel. Nasty either way.

We now carefully restore ar.k6 prior to restoring ar.bspstore and
likewise for ar.k7 and sp. All we need is the guarantee that an
interrupt does not clobber ar.k6 or ar.k7 before we're back in
userland. That has been achieved by restoring ar.k6/ar.k7
unconditionally (see exception.s)

While here, remove the disabling of interrupts on EPC entry. It
was added as a way to "resolve" the crashes until it was understood
what was going on. I think I achieved the latter, so we can remove
the patch. Note that setting up a trapframe with interrupts
enabled has its own share of corner cases, but it's better to
properly fix those than to keep a mostly wrong patch around
because we're afraid to remove it...

Approved by: re@ (blanket)
2003-05-24 22:53:10 +00:00
Marcel Moolenaar
a7b90d80fc Be more careful how we restore interrupts. Don't rewrite most of the
PSR only to achieve setting PSR.i back to its previous value. It
makes it impossible to change any of the 30+ other unrelated bits
when done between intr_disable() and intr_restore(). That's bad.

Instead have intr_disable() return 1 when interrupts were previously
enabled and 0 otherwise and only enable interrupts in intr_restore()
when given a non-0 value.

This change specifically disallows using intr_restore() to disable
interrupts. The reason is simple: interrupts only need to be restored
after they are being disabled, which means that intr_restore() is
called with interrupts disabled and we only need to enable them if
they were previously enabled.

This change does not fix any bugs, other than that it bugged me...

Approved by: re@ (blanket)
2003-05-24 21:44:24 +00:00
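The intended usage pattern after this change, sketched:

    int intr;

    intr = intr_disable();      /* returns 1 iff interrupts were enabled */
    /* ... short section that must run with interrupts disabled ... */
    intr_restore(intr);         /* re-enables them only if they were enabled */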
Marcel Moolenaar
95f2dbba40 Consistently use the same metric to differentiate between kernel mode
and user mode. We need to take into account that the EPC syscall path
introduces a grey area in which one can argue either way, including a
third: neither.

We now use the region in which the IP address lies. Regions 5, 6 and 7
are kernel VA regions and if the IP lies in any of those regions we
assume we're in kernel mode. Hence, we can be in kernel mode even if
we're not on the kernel stack and/or have user privileges. There're
gremlins living in the twilight zone :-)

For the EPC syscall path this particularly means that the process
leaves user mode the moment it calls into the gateway page. This
makes the most sense because from a process' point of view the call
represents a request to the kernel for some service and that service
has been performed if the call returns. With the metric we picked,
this also means that we're back in user mode IFF the call returns.

Approved by: re@ (blanket)
2003-05-24 21:16:19 +00:00
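The metric boils down to a check on the region bits of the instruction pointer; a simplified sketch (the real macro lives in the ia64 headers and may differ in detail):

    /*
     * An ia64 virtual address selects one of 8 regions with its top
     * 3 bits.  Regions 5, 6 and 7 are kernel regions, so an IP below
     * region 5 means user mode.
     */
    static __inline int
    ia64_ip_in_kernel(unsigned long ip)
    {
            return ((ip >> 61) >= 5);
    }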
Marcel Moolenaar
fb4aa34f3b Unconditionally restore ar.k7 (memory stack) and ar.k6 (register stack)
when returning from an interrupt. Both registers are used on interrupt
to switch to the right kernel stack, but other than that they are not
used. This means we only have to make sure they contain proper values
while in user mode. As such, we conditionally restored these registers
based on whether we returned to userland or not. A nice property of
conditionally restoring ar.k6 and ar.k7 is that it introduces two
invariants: ar.k6 always points to the bottom of the kernel stack and
ar.k7 always points to the top of the kernel stack (immediately below
the PCB we have there).

However, the EPC syscall path introduces an irregularity: there's no
"thin red line" between user and kernel. There's a grey area that's a
couple of instructions wide. Any interruption in that grey area is
bound to see an inconsistent state. One such state is that we're in
kernel space for all practical purposes, but we still need to have
ar.k6 and ar.k7 restored as if we're in userland.

Thus: restore ar.k6 and ar.k7 unconditionally at the cost of losing
a valuable invariant. Both registers now hold the extent of the
usable portion of the kernel stack at any interrupt nesting, which
when in userland mean the bottom and the top of the kstack.
2003-05-24 20:51:55 +00:00
Marcel Moolenaar
d1d7df1905 Fix an alpha inheritance bug:
On alpha, PAL is involved in context management and after wiring
the CPU (in alpha_init()) a context switch was performed to tell
PAL about the context. This was bogusly brought over to ia64
where it introduced bugs, because we restored the context from
a mostly uninitialized PCB.

The cleanup consists of:
o  Remove the unused arguments from ia64_init().
o  Don't return from ia64_init(), but instead call mi_startup()
   directly. This reduces the amount of muckery in assembly and
   also allows for the next bullet:
o  Save our current context prior to calling mi_startup(). The
   reason for this is that many threads are created from thread0
   by cloning the PCB. By saving our context in the PCB, we have
   something sane to clone. It also ensures that a cloned thread
   that does not alter the context in any way will return to
   the saved context, where we're ready for the eventuality with
   a nice, user unfriendly panic().

The cleanup fixes at least the following bugs:
o  Entering mi_startup() with the RSE in enforced lazy mode.
o  Re-execution of ia64_init() in certain "lab" conditions.

While here, add proper unwind directives to __start() so that
the unwinder knows it has reached the bottom of the (call) stack.

Approved by: re@ (blanket)
2003-05-24 00:17:34 +00:00