Commit Graph

1136 Commits

Author SHA1 Message Date
Jeff Roberson
048ac395be - Recruit some new ULE users by making it the default scheduler in GENERIC.
ULE will be in a probationary period to determine whether it will be left
   as the default in 5.3, which would likely mean the rest of the 5.x series.
2004-01-24 21:38:52 +00:00
Jacques Vidrine
5864cda7c6 Add PFIL_HOOKS to the GENERIC kernel configuration, primarily so
that one can load the IPFilter module (which requires PFIL_HOOKS).

Requested by:	Many, for over a year
2004-01-24 14:59:51 +00:00
Marcel Moolenaar
28466ae036 Fix handling of FP traps:
o  For traps, the cr.iip register points to the next instruction to
   execute on interrupt return (modulo slot). Since we need to get
   the bundle of the instruction that caused the FP fault/trap, make
   sure we fetch the previous bundle if the next instruction is in
   fact the first in a bundle.
o  When we call the FPSWA handler, we need to tell it whether it's
   a trap or a fault (first argument). This was hardcoded to mean a
   fault.

Also, for FP faults, when a fault is converted to a trap, adjust the
cr.iip and cr.ipsr registers to point to the next instruction. This
makes sure that the SIGFPE handler gets a consistent state.
2004-01-20 03:29:24 +00:00
Marcel Moolenaar
dd45fd9791 s/framep/tf/g -- this normalizes on the use of tf to point to the
trapframe and improves grep-ability.
2004-01-20 02:35:46 +00:00
Dag-Erling Smørgrav
2d6853a650 Whitespace nit. 2004-01-13 15:30:36 +00:00
Jacques Vidrine
e4dc8baa84 Provide sysarch(2) prototypes in the MD sysarch.h headers. While I'm
at it, use the ANSI C generic pointer type for the second argument,
thus matching the documentation.

Remove the now extraneous (and now conflicting) function declarations
in various libc sources.  Remove now unnecessary casts.

Reviewed by:	bde
2004-01-09 16:52:09 +00:00
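A minimal illustration (not from the commit) of the prototype shape this normalizes on; the request number and argument struct below are hypothetical:

    /* <machine/sysarch.h> now declares the prototype with the generic pointer type. */
    int sysarch(int number, void *args);

    struct md_args { int dummy; };          /* hypothetical MD argument block */

    static int
    do_md_call(struct md_args *ap)
    {
            return (sysarch(0 /* hypothetical MD request */, ap));  /* no cast needed */
    }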
David Xu
a30ec4b99c Make sigaltstack per-thread: per-process sigaltstack state
is useless for threaded programs because multiple threads cannot share the
same stack.
The alternate signal stack is private to each thread, so no lock is needed;
the original P_ALTSTACK flag is moved into td_pflags and renamed to
TDP_ALTSTACK.
For single-threaded programs and Linux clone()-based threaded programs there
is no semantic change, because those programs only have one kernel thread
per process.

Reviewed by: deischen, dfr
2004-01-03 02:02:26 +00:00
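A userland sketch (not part of the commit) of what the per-thread semantics allow: each thread installs its own alternate stack with sigaltstack(2) and the setting no longer clobbers other threads:

    #include <signal.h>
    #include <stdlib.h>

    static void *
    thread_main(void *arg)
    {
            stack_t ss;

            ss.ss_sp = malloc(SIGSTKSZ);
            ss.ss_size = SIGSTKSZ;
            ss.ss_flags = 0;
            if (ss.ss_sp != NULL)
                    sigaltstack(&ss, NULL);   /* affects only the calling thread now */
            /* ... handle SA_ONSTACK signals on this thread's own stack ... */
            return (NULL);
    }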
Mike Silbersack
ddeb5b242e Track three new sendfile-related statistics:
- The number of times sendfile had to do disk I/O
- The number of times sfbuf allocation failed
- The number of times sfbuf allocation had to wait
2003-12-28 08:57:09 +00:00
Mike Silbersack
5caf2b00f0 Move the declaration of sfbufspeak and sfbufsused to mbuf.h,
and use imax instead of max, as sfbufspeak and sfbufsused
are signed.

Submitted by:   bde
2003-12-28 01:43:22 +00:00
Mike Silbersack
5eda9873e9 Track current and peak sfbuf usage, export the values via sysctl. 2003-12-27 07:52:47 +00:00
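A userland sketch of reading the exported counters with sysctl(3); the MIB names used here are assumptions, not taken from the commits:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int used = 0, peak = 0;
            size_t ulen = sizeof(used), plen = sizeof(peak);

            sysctlbyname("kern.ipc.nsfbufsused", &used, &ulen, NULL, 0);  /* assumed name */
            sysctlbyname("kern.ipc.nsfbufspeak", &peak, &plen, NULL, 0);  /* assumed name */
            printf("sf_bufs in use: %d, peak: %d\n", used, peak);
            return (0);
    }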
Marcel Moolenaar
b3378ed911 Don't use NULL with integral types. 2003-12-24 19:55:07 +00:00
Peter Wemm
7655ebdaaa Return AE_OK for stub functions returning ACPI_STATUS, not NULL 2003-12-24 05:26:26 +00:00
Peter Wemm
c15e347e22 GC the unused <machine/kse.h> file. 2003-12-24 00:51:30 +00:00
Peter Wemm
9b68618df0 Add an additional field to the elf brandinfo structure to support
quicker exec-time replacement of the elf interpreter in an emulation
environment where an entire /compat/* tree isn't really warranted.
2003-12-23 02:42:39 +00:00
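A sketch of the kind of brand entry the new field enables; the member name interp_newpath and both paths are assumptions for illustration only:

    #include <sys/param.h>
    #include <sys/imgact_elf.h>

    static Elf32_Brandinfo hypothetical_brand = {
            .brand          = ELFOSABI_FREEBSD,
            .machine        = EM_386,
            .interp_path    = "/libexec/ld-elf.so.1",    /* path recorded in the binary */
            .interp_newpath = "/libexec/ld-elf32.so.1",  /* substituted at exec time (assumed field) */
            /* .sysvec and friends omitted in this sketch */
    };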
Peter Wemm
59553d37df Add missing #include "opt_compat.h" so that the compatibility function
freebsd4_freebsd32_sigreturn() is defined when expected.  This should
unbreak the tinderbox. Sorry.
2003-12-18 06:59:18 +00:00
Marcel Moolenaar
a8434283fc In set_mcontext(), take into account that kse_switchin(2) will
eventually be passed an async. context as well as a syscall
context.
While here, fix a serious bug: if the trapframe is a
syscall frame but we're restoring an async context, we need
to clear the FRAME_SYSCALL flag so that we leave the kernel
via exception_restore.
2003-12-14 01:59:31 +00:00
Peter Wemm
64d85faa1c Assimilate ia64 back into the fold with the common freebsd32/ia32 code.
The split-up code is derived from the ia64 code originally.

Note that I have only compile-tested this, not actually run-tested it.
The ia64 side of the force is missing some significant chunks of signal
delivery code.
2003-12-11 01:05:09 +00:00
Peter Wemm
ba3fa22e8c Fix last second typo. 2003-12-10 22:59:03 +00:00
Peter Wemm
419d43c635 Use gcc's superior ffs() builtin. 2003-12-10 22:51:40 +00:00
Peter Wemm
a80f5f272c Use ffs(x) == popcnt(x ^ (x - 1)) to implement 64 bit ffsl(). gcc's
ffs() builtin uses this already but truncates the upper 32 bits.
2003-12-10 22:47:02 +00:00
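A sketch of the identity itself, assuming a compiler with a popcount builtin: for nonzero x, x ^ (x - 1) sets every bit from bit 0 through the least significant set bit of x, so its population count is exactly ffs(x):

    static __inline int
    ffsl_sketch(long x)
    {
            if (x == 0)
                    return (0);              /* ffs(0) is defined as 0 */
            return (__builtin_popcountl(x ^ (x - 1)));
    }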
Marcel Moolenaar
b291daba6f Don't panic for misalignment traps when the onfault handler is set.
Not all transfers between kernel and user space are byte oriented
and thus alignment safe. Especially fuword*() and suword*() are
sensitive to alignment but in general more optimal than block copies.
By catching the misalignment trap we avoid pessimizing the common
case of properly aligned memory accesses, which we would do if we
were to use byte copies or add tests for proper alignment.

Note that the expectation that the kernel produces aligned pointers
is unchanged. This change therefore relates to possible unaligned
pointers generated in userland.
2003-12-09 09:52:14 +00:00
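A self-contained sketch of the onfault pattern described above; the structures are simplified stand-ins, not the real ia64 trapframe or PCB:

    #include <stdint.h>

    struct sketch_trapframe { uint64_t iip; uint64_t ret0; };
    struct sketch_pcb       { uint64_t pcb_onfault; };

    static int
    misaligned_kernel_access(struct sketch_trapframe *tf, struct sketch_pcb *pcb)
    {
            if (pcb->pcb_onfault != 0) {
                    tf->iip = pcb->pcb_onfault;   /* resume at the fault handler */
                    tf->ret0 = -1;                /* fuword()-style failure return */
                    pcb->pcb_onfault = 0;
                    return (1);                   /* handled: no panic */
            }
            return (0);                           /* unhandled: trap() panics as before */
    }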
Nate Lawson
cac6460cfe Use the ACPI-CA definitions for the various APIC tables instead of our
own.
2003-12-09 03:04:19 +00:00
David E. O'Brien
a5b5101f5e Move the bktr(4) <arch>/include/ioctl_{bt848,meteor}.h files to dev/bktr
as these ioctls aren't MD.  This also means they are installed in
/usr/include/dev/bktr now.  Also provide compatibility wrappers for
where these headers lived in 4.x.
2003-12-08 07:22:42 +00:00
Marcel Moolenaar
47eb01b822 Simplify the contexts created by the kernel and remove the related
flags. We now create asynchronous contexts or syscall contexts only.
Syscall contexts differ from the minimal ABI dictated contexts by
having the scratch registers saved and restored because that's where
we keep the syscall arguments and syscall return values.
Since this change affects KSE, have it use kse_switchin(2) for the
"new" syscall context.
2003-12-07 20:47:33 +00:00
Warner Losh
05a463a03d Ooops. These are still used by the bktr driver. David O'Brien has
plans for dealing, but I'll let him deal.

Pointy hat to: imp@
2003-12-07 06:37:32 +00:00
Warner Losh
65b4a1b917 Remove meteor driver. It hasn't compiled in over 3 years. If someone
makes it compile again, and can test it, we can restore the driver to
the tree.
2003-12-07 04:41:11 +00:00
John Baldwin
798a45964d - Split cpu_mp_probe() into two parts. cpu_mp_setmaxid() is still called
very early (SI_SUB_TUNABLES - 1) and is responsible for setting mp_maxid.
  cpu_mp_probe() is now called at SI_SUB_CPU and determines if SMP is
  actually present and sets mp_ncpus and all_cpus.  Splitting these up
  allows an architecture to probe CPUs later than SI_SUB_TUNABLES by just
  setting mp_maxid to MAXCPU in cpu_mp_setmaxid().  This could allow the
  CPU probing code to live in a module, for example, since SYSINITs
  in modules cannot be invoked prior to SI_SUB_KLD.  This is
  needed to re-enable the ACPI module on i386.
- For the alpha SMP probing code, use LOCATE_PCS() instead of duplicating
  its contents in a few places.  Also, add a smp_cpu_enabled() function
  to avoid duplicating some code.  There is room for further code
  reduction later since much of this code is also present in cpu_mp_start().
- All archs besides i386 still set mp_maxid to the same values they set it
  to before this change.  i386 now sets mp_maxid to MAXCPU.

Tested on:	alpha, amd64, i386, ia64, sparc64
Approved by:	re (scottl)
2003-11-21 22:23:26 +00:00
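A sketch of the split itself; the probing helper is hypothetical and the real MD implementations differ per architecture:

    #include <sys/param.h>
    #include <sys/smp.h>

    static int count_available_cpus(void);   /* hypothetical firmware/ACPI query */

    void
    cpu_mp_setmaxid(void)
    {
            /* SI_SUB_TUNABLES - 1: commit to a worst-case CPU count only. */
            mp_maxid = MAXCPU;
    }

    int
    cpu_mp_probe(void)
    {
            /* SI_SUB_CPU: the real CPU count can be determined by now. */
            mp_ncpus = count_available_cpus();
            all_cpus = (1 << mp_ncpus) - 1;
            return (mp_ncpus > 1);
    }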
Marcel Moolenaar
1fc7ca0fb1 Set the ACPI processor Id in the PCPU structure so that CPU idling
on SMP systems has a chance of working. This was a loose end of the
implementation of the ACPI Cx idle states. Since our logical CPU Id
is the ACPI processor Id, we do not need to jump through hoops to
obtain it.

Approved: re@ (jhb)
2003-11-20 16:42:39 +00:00
Peter Wemm
0bfbe7b935 Widen the enable/disable helper function's argument in line with the
ithread_create() changes etc.  This should be mostly a NOP.
2003-11-17 06:10:15 +00:00
Bruce Evans
81bbee5996 Fixed a pedantic syntax error (a stray semicolon at the end of
PCPU_MD_FIELDS).
2003-11-17 03:40:41 +00:00
Alan Cox
0ec3db3072 - Remove unnecessary synchronization from sf_buf_init(). (There is only
one active CPU when sf_buf_init() is performed.)
2003-11-16 23:40:06 +00:00
Alan Cox
e45db9b837 - Modify alpha's sf_buf implementation to use the direct virtual-to-
physical mapping.
 - Move the sf_buf API to its own header file; make struct sf_buf's
   definition machine dependent.  In this commit, we remove an
   unnecessary field from struct sf_buf on the alpha, amd64, and ia64.
   Ultimately, we may eliminate struct sf_buf on those architectures
   except as an opaque pointer that references a vm page.
2003-11-16 06:11:26 +00:00
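A sketch of what the direct map buys on those architectures: struct sf_buf can degenerate into the page pointer itself, with the KVA computed rather than allocated. PHYS_TO_DMAP() is the amd64 spelling; the other direct-map architectures use their own macro:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>
    #include <machine/vmparam.h>

    struct sf_buf;                            /* opaque: really just the vm_page */

    static __inline struct sf_buf *
    sf_buf_alloc_sketch(struct vm_page *m)
    {
            return ((struct sf_buf *)m);      /* no KVA allocation, no mapping */
    }

    static __inline vm_offset_t
    sf_buf_kva_sketch(struct sf_buf *sf)
    {
            return (PHYS_TO_DMAP(VM_PAGE_TO_PHYS((struct vm_page *)sf)));
    }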
Nate Lawson
b72e9cf526 Add the pc_acpi_id PCPU member. The new acpi_cpu driver uses this to
dereference the softc.
2003-11-15 18:58:29 +00:00
Marcel Moolenaar
eea3bbdff8 Remove ia64_highfp_load() now that it's unused. 2003-11-12 03:24:34 +00:00
Marcel Moolenaar
0d9ae4e24e Further work out the handling of the high FP registers. The most
important change is in cpu_switch(), where we disable the high FP
registers for the thread that we switch out if the CPU currently
has its high FP registers. This prevents the high FP registers from
remaining enabled for the thread even when the CPU has unloaded them
or the thread has migrated to another processor.
Likewise, when we switch in a thread that has its high FP
registers on the CPU, we enable them. This avoids an otherwise
harmless, but unnecessary, trap to have them enabled.

The code that handles the disabled high FP trap (in trap()) has
been turned into a critical section for the most part to avoid
being preempted. If there's a race, we bail out and have the
processor trap again if necessary.

Avoid using the generic ia64_highfp_save() function when the
context is predictable. The function adds unnecessary overhead.
Don't use ia64_highfp_load() for the same reason. The function
is now unused and can be removed.

These changes make the lazy context switching of the high FP
registers functional in a UP kernel.
2003-11-12 01:26:02 +00:00
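A self-contained sketch of the switch-time policy described above; the field names are simplified stand-ins for the real PCPU and PCB members:

    #include <stdint.h>

    struct sketch_pcb  { uint64_t psr_dfh; };              /* 1 = high FP disabled */
    struct sketch_pcpu { struct sketch_pcb *fpcurpcb; };   /* whose high FP regs this CPU holds */

    static void
    cpu_switch_fp_sketch(struct sketch_pcpu *pc, struct sketch_pcb *oldpcb,
        struct sketch_pcb *newpcb)
    {
            /* Outgoing thread: force a trap on its next high FP use, in case
               its registers are unloaded or it migrates to another CPU. */
            if (pc->fpcurpcb == oldpcb)
                    oldpcb->psr_dfh = 1;
            /* Incoming thread: its registers are still here, so skip the
               otherwise harmless disabled-FP trap. */
            if (pc->fpcurpcb == newpcb)
                    newpcb->psr_dfh = 0;
    }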
Marcel Moolenaar
a5ba2b5cc4 Save and restore the high FP registers in {g|s}_mcontext(). Note
that we currently do not keep track of whether the thread has
actually used the high FP registers before. If not, we should
not save them in the context, which automatically means that we
also would not restore them from the context. For now, do it
unconditionally so that we can reach functional completeness.
2003-11-11 09:53:37 +00:00
Marcel Moolenaar
9d52656a5a Fix a nasty bug that got exposed when the sendsig() and sigreturn()
functions switched to using {g|s}et_mcontext(). The problem is that
sigreturn(), being a syscall, can be given an async. context (i.e.
one corresponding to an interrupt or trap). When this happens, we
try to return to user mode via epc_syscall_return with a trapframe
that can only be used to return to user mode via exception_restore.

To fix this, we check the frame's flags immediately prior to
epc_syscall_return and branch to exception_restore for non-syscall
frames. Modify the assertion in set_mcontext() to check that if
there's a mismatch, it's because of sigreturn().
2003-11-11 09:25:19 +00:00
Marcel Moolenaar
9422d61a1f In get_mcontext(), do not update bspstore and ndirty in the trapframe.
Only update them in the newly created context to reflect the state
after copying the dirty registers onto the user stack. If we were to
update the trapframe, we lose the state at entry into the kernel. We
may need that after we create the context, such as for KSE upcalls.

We have to update the trapframe after writing the dirty registers to
the user stack for signal delivery to work. But this is best done in
sendsig() itself where it applies, not in get_mcontext() where it's
done unconditionally.
2003-11-10 05:28:05 +00:00
Marcel Moolenaar
3534a08109 When a thread is being swapped-out, save the high FP registers. We
have a pointer in the PCPU to the PCB of the thread that currently
has its high FP registers loaded.
2003-11-09 23:13:23 +00:00
Marcel Moolenaar
ac8c7680a6 Use get_mcontext() to construct the signal context in sendsig() and
use set_mcontext() to restore the context in sigreturn(). Since we
put the syscall number and the syscall arguments in the trapframe
(we don't save the scratch registers for syscalls, which allows us
to reuse the space to our advantage), create a MD specific flag so
that we save the scratch registers even for syscalls. We would not
be able to restart a syscall otherwise.

The signal trampoline does not need to flush the registers anymore,
because get_mcontext() already handles that. In fact, if we set up
the context correctly, we do not need to have a trampoline at all.
This change however only minimally changes the trampoline code. In
follow-up commits this can be further optimized.

Note that normally we preserve cfm and iip in the trapframe created
by the EPC syscall path when we restore a context in set_mcontext()
because those fields are not normally set for a synchronous context.
The kernel puts the return address and frame info of the syscall
stub in there. By preserving these fields we hide this detail from
userland which allows us to use setcontext(2) for user created
contexts. However, sigreturn() is commonly called from the trampoline,
which means that if we preserve cfm and iip in all cases, we would
return to the trampoline after the sigreturn(), which means we hit
the safety net: we call exit(2). So, we do not preserve cfm and iip
when we have a synchronous context that also has scratch registers
(the uncommon context created by sendsig() only), under the assumption
that if such a context is created in userland, something special is
going on and the use of cfm and iip is then just another quirk. All
this is invisible in the common case.
2003-11-09 22:17:36 +00:00
Marcel Moolenaar
fcaa2925a9 Change the clear_ret argument of get_mcontext() to be a flags argument.
Since all callers either passed 0 or 1 for clear_ret, define bit 0 in
the flags for use as clear_ret. Reserve bits 1, 2 and 3 for use by MI
code for possible (but unlikely) future use. The remaining bits are for
use by MD code.

This change is triggered by a need on ia64 to have another knob for
get_mcontext().
2003-11-09 20:31:04 +00:00
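A sketch of the resulting flag layout; GET_MC_CLEAR_RET is the obvious spelling for bit 0 but is written here as an assumption, and the caller shown is illustrative:

    #define GET_MC_CLEAR_RET        0x0001  /* bit 0: zero the syscall return values */
                                            /* bits 1-3: reserved for MI code */
                                            /* bits 4 and up: available to MD code */

    /* Typical MI caller, sketched:
     *      get_mcontext(td, &uc.uc_mcontext, GET_MC_CLEAR_RET);
     */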
Marcel Moolenaar
00bd917263 Remove the atkbd, psm, sc and vga devices. Most ia64 boxes out there
are zx1 based machines and they don't particularly like it when we
poke at them with PC legacy code. The atkbd and psm devices were
disabled in the hints file so that one could enable them on machines
that support legacy devices, but that's not really something you can
expect from a first-time installer. This still leaves syscons (sc)
and the vga device, which were enabled by default and wreaking havoc
anyway. We could disable them by default like the atkbd and psm
devices, but there's really no point in pretending we're in a better
shape that way.
2003-11-08 23:19:13 +00:00
Scott Long
eb3b7bf69f Document the lockfunc and lockfuncarg arguments to bus_dma_tag_create() in
the busdma headers.
2003-11-07 23:29:42 +00:00
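A sketch of a tag creation call showing where the two documented arguments go; the sizes and the softc layout are made up, and busdma_lock_mutex is the stock mutex-based lock helper:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <machine/bus.h>

    struct mydev_softc {                     /* hypothetical driver softc */
            struct mtx      sc_mtx;
            bus_dma_tag_t   sc_dmat;
    };

    static int
    mydev_dma_setup(struct mydev_softc *sc, bus_dma_tag_t parent)
    {
            return (bus_dma_tag_create(parent,
                1, 0,                        /* alignment, boundary */
                BUS_SPACE_MAXADDR_32BIT,     /* lowaddr */
                BUS_SPACE_MAXADDR,           /* highaddr */
                NULL, NULL,                  /* filter, filterarg */
                MAXBSIZE, 1, MAXBSIZE,       /* maxsize, nsegments, maxsegsz */
                0,                           /* flags */
                busdma_lock_mutex,           /* lockfunc: serializes deferred callbacks */
                &sc->sc_mtx,                 /* lockfuncarg: mutex handed to lockfunc */
                &sc->sc_dmat));
    }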
John Baldwin
dac33f12cc Regen. 2003-11-07 20:30:30 +00:00
John Baldwin
a060e9b7ef Sync with global syscalls.master. ptrace(), dup(), pipe(), ktrace(),
ia32_sigaltstack(), sysarch(), issetugid(), utrace(), and ia32_sigaction()
are MP safe.
2003-11-07 20:27:16 +00:00
Marcel Moolenaar
51e25af386 Add support for unaligned ld2, st2, st4 and st8. While here, make
sure we handle stacked registers properly by taking into account
that:
1. bspstore points after the frame (due to cover),
2. we need to adjust for intermediate NaT collections.
2003-11-06 04:26:40 +00:00
Marcel Moolenaar
2642a8845b Handle unaligned 4-byte loads. While in the neighborhood, remove the
cr.isr sanity check. We actually encounter insanities, which very
likely means that the insanity check itself is insane. Remove an empty
comment while I'm at it.
2003-11-03 08:04:04 +00:00
Marcel Moolenaar
6537124772 Add a bogus definition of __va_list for use by lint. Make it visible
only when lint is defined to protect builds with non-GNU compilers.
2003-11-03 05:04:09 +00:00
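A sketch of the kind of guard meant; the exact typedef is an assumption, the point is that only lint(1) ever sees it:

    #ifdef lint
    typedef void *__va_list;        /* bogus definition, but it lets lint parse the headers */
    #endif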
Marcel Moolenaar
fcca8c1dde Remove headers copied from i386 and either useless or wrong on ia64.
An example of useless is bios.h. An example of wrong is msdos.h (due
to the use of long for 32-bit fields).

display.h cannot be removed because it's used by syscons. That header
however has no platform dependency and shouldn't really be here.

Removal of these headers may cause build failures in the ports tree.
It's the ports that need fixing in that case.

Tested with: buildworld, LINT
2003-11-02 09:19:07 +00:00
Marcel Moolenaar
3bdfa17c6c When switching the RSE to use the kernel stack as backing store, keep
the RNAT bit index constant. The net effect of this is that there's
no discontinuity WRT NaT collections which greatly simplifies certain
operations. The cost of this is that there can be up to 504 bytes of
unused stack between the true base of the kernel stack and the start
of the RSE backing store. The cost of adjusting the backing store
pointer to keep the RNAT bit index constant, for each kernel entry,
is negligible.

The primary reasons for this change are:
1. Asynchronous contexts in KSE processes have the disadvantage of
   having to copy the dirty registers from the kernel stack onto the
   user stack. The implementation we had so far copied the registers
   one at a time without calculating NaT collection values. A process
   that used speculation would not work. Now that the RNAT bit index
   is constant, we can block-copy the registers from the kernel stack
   to the user stack without having to worry about NaT collections.
   They will be in the right place on the user stack.
2. The ndirty field in the trapframe is now also usable in userland.
   This was previously not the case because ndirty also includes the
   space occupied by NaT collections. The value could be off by 8,
   depending on the discontinuity. Now that the RNAT bit index is
   constant, we have exactly the same number of NaT collection points
   on the kernel stack as we would have had on the user stack if we
   didn't switch backing stores.
3. Debuggers and other applications that use ptrace(2) can now copy
   the dirty registers from the kernel stack (using ptrace(2)) and
   copy them wherever they want them (onto the user stack of the
   inferior as might be the case for gdb) without having to worry
   about NaT collections in the same way the kernel doesn't have to
   worry about them.

There's a second order effect caused by the randomization of the
base of the backing store, for it depends on the number of dirty
registers the processor happened to have at the time of entry into
the kernel. The second order effect is that the RSE will have a
better cache utilization as compared to having the backing store
always aligned at page boundaries. This has not been measured and
may be in practice only minimally beneficial, if at all measurable.
2003-10-28 19:38:26 +00:00
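A sketch of the arithmetic behind the constant RNAT bit index, with invented macro and function names. The RSE stores a NaT-collection word whenever bits 8:3 of the backing store pointer are all ones, i.e. once per 63 register slots, so keeping those bits identical across the backing store switch preserves the collection layout; at most 0x1f8 (504) bytes at the base of the kernel stack go unused:

    #include <stdint.h>

    #define RNAT_INDEX_BITS 0x1f8           /* bits 8:3 of a backing store pointer */

    static inline uint64_t
    kernel_bspstore_sketch(uint64_t kstack_base, uint64_t user_bspstore)
    {
            /* Start the kernel backing store at the same RNAT bit index. */
            return (kstack_base + (user_bspstore & RNAT_INDEX_BITS));
    }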