108 Commits

Author SHA1 Message Date
davidxu
bc8b519d0f Validate that the value written into {FS,GS}.base is a canonical
address; writing a non-canonical address can cause a kernel panic.
Restrict base values to 0..VM_MAXUSER_ADDRESS, ensuring that
only canonical values get written to the registers.
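
A minimal sketch of the range check being described; the function and field
names here are illustrative, not the actual sysarch(2) code:

/* Reject non-canonical {FS,GS}.base values before they reach the hardware. */
static int
set_fsgs_base_checked(struct thread *td, uint64_t base)
{
	/*
	 * Any address in 0..VM_MAXUSER_ADDRESS is canonical, so a value
	 * that passes this check can be written to the register safely.
	 */
	if (base >= VM_MAXUSER_ADDRESS)
		return (EINVAL);
	td->td_pcb->pcb_fsbase = base;		/* assumed pcb field name */
	return (0);
}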

Reviewed by: peter, Joseph Koshy < joseph.koshy at gmail dot com >
Approved by: re (scottl)
2005-07-10 23:31:11 +00:00
davidxu
2155a04472 Change cpu_set_kse_upcall to a more generic style so we can reuse it
in other code. Add cpu_set_user_tls and use it to tweak user registers
and set up user TLS. I originally wanted to merge it into cpu_set_kse_upcall,
but since cpu_set_kse_upcall is also used by M:N threads, which may
not need this feature, I wrote a separate cpu_set_user_tls.
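
As a rough illustration of the resulting split (these prototypes are an
assumption based on the description, not the committed declarations):

/* Reusable upcall setup, still shared with M:N threading. */
void	cpu_set_kse_upcall(struct thread *td, void (*entry)(void *),
	    void *arg, stack_t *stack);

/* New, narrower hook: only install the initial user TLS pointer. */
void	cpu_set_user_tls(struct thread *td, void *tls_base);
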
2005-04-23 02:32:32 +00:00
jhb
41cadaa11e Divorce critical sections from spinlocks. Critical sections as denoted by
critical_enter() and critical_exit() are now solely a mechanism for
deferring kernel preemptions.  They no longer have any effect on
interrupts.  This means that standalone critical sections are now very
cheap as they are simply unlocked integer increments and decrements for the
common case.

Spin mutexes now use a separate KPI implemented in MD code: spinlock_enter()
and spinlock_exit().  This KPI is responsible for providing whatever MD
guarantees are needed to ensure that a thread holding a spin lock won't
be preempted by any other code that will try to lock the same lock.  For
now all archs continue to block interrupts in a "spinlock section" as they
did formerly in all critical sections.  Note that I've also taken this
opportunity to push a few things into MD code rather than MI.  For example,
critical_fork_exit() no longer exists.  Instead, MD code ensures that new
threads have the correct state when they are created.  Also, we no longer
try to fixup the idlethreads for APs in MI code.  Instead, each arch sets
the initial curthread and adjusts the state of the idle thread it borrows
in order to perform the initial context switch.

This change is largely a big NOP, but the cleaner separation it provides
will allow for more efficient alternative locking schemes in other parts
of the kernel (bare critical sections rather than per-CPU spin mutexes
for per-CPU data for example).
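
A sketch of the split, with assumed field and helper names rather than the
exact FreeBSD code:

/* A bare critical section is now just a per-thread nesting counter. */
void
critical_enter(void)
{
	curthread->td_critnest++;
}

void
critical_exit(void)
{
	if (--curthread->td_critnest == 0 && curthread->td_owepreempt)
		maybe_preempt();	/* assumed deferred-preemption hook */
}

/* Spin locks use the separate MD KPI, which for now still blocks interrupts. */
void
spinlock_enter(void)
{
	register_t s = intr_disable();

	if (curthread->td_md.md_spinlock_count++ == 0)
		curthread->td_md.md_saved_intr = s;	/* assumed MD fields */
}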

Reviewed by:	grehan, cognet, arch@, others
Tested on:	i386, alpha, sparc64, powerpc, arm, possibly more
2005-04-04 21:53:56 +00:00
jhb
a8f8001b38 - Remove some OBE comments regarding cpu_exit(). cpu_exit() is no longer
the last action of kern_exit().  Instead, it is an MD callout to clean up
  per-process state during exit.
- Add notes of concern to Alpha and ia64 about the possible need to drop
  fp state in cpu_thread_exit() rather than in cpu_exit() since it is
  per-thread state rather than per-process.
2005-01-14 20:13:04 +00:00
imp
f0bf889d0d /* -> /*- for license, minor formatting changes 2005-01-07 02:29:27 +00:00
das
130bed6547 Don't include sys/user.h merely for its side-effect of recursively
including other headers.
2004-11-27 06:51:39 +00:00
grehan
1248963e47 Update the callframe structure to leave space for the frame pointer
and saved link register as per the ABI call sequence. Update code
that uses this (fork_trampoline etc) to use the correct genassym'd
offsets.

 This fixes the 'invalid LR' message when backtracing kernel
threads in DDB.
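
The updated structure is roughly shaped like this (field names are an
assumption based on the description):

struct callframe {
	register_t	cf_dummy_fp;	/* back chain / frame pointer slot */
	register_t	cf_lr;		/* saved link register slot */
	register_t	cf_func;	/* used by fork_trampoline */
	register_t	cf_arg0;
	register_t	cf_arg1;
};
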
2004-07-22 01:28:51 +00:00
tmm
7b769ce88f Retire cpu_sched_exit(); it is not used any more. 2004-05-26 12:09:39 +00:00
alc
3edd6abfa6 MFamd64
Simplify the sf_buf implementation.  In short, make it a veneer
 over the direct virtual-to-physical mapping.
2004-04-18 08:10:04 +00:00
alc
1ec4d75266 In some cases, sf_buf_alloc() should sleep with pri PCATCH; in others, it
should not.  Add a new parameter so that the caller can specify which is
the case.
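
A sketch of the resulting interface; PCATCH comes from the caller instead of
being hard-coded in the allocator:

struct sf_buf	*sf_buf_alloc(struct vm_page *m, int pri);

/* e.g. sendfile(2) can pass PCATCH so the sleep is interruptible,
 * while callers that must not catch signals pass 0. */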

Reported by:	dillon
2004-04-03 09:16:27 +00:00
benno
79b30192e5 Replace td2 with td on the assumption that this was a typo. This should at
least unbreak the build.

Pointy hat to: peter
Not tested either by: benno
2004-03-30 13:57:34 +00:00
peter
09ff2f9fe5 Finish tidying up a couple of leftovers from the KSTACK_PAGES stuff. Some
files still #included the opt_ file.  powerpc hadn't been updated yet.
2004-03-29 19:38:05 +00:00
alc
a2e820d27b Refactor the existing machine-dependent sf_buf_free() into a machine-
dependent function by the same name and a machine-independent function,
sf_buf_mext().  Aside from the virtue of making more of the code machine-
independent, this change also makes the interface more logical.  Before,
sf_buf_free() did more than simply undo an sf_buf_alloc(); it also
unwired and if necessary freed the page.  That is now the purpose of
sf_buf_mext().  Thus, sf_buf_alloc() and sf_buf_free() can now be used
as a general-purpose ephemeral map cache.
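
Roughly, the new MI half looks like this; a sketch based on the description,
not the exact committed code:

/*
 * Undo sf_buf_alloc() via the MD sf_buf_free(), then unwire the page and
 * free it if nothing else references it.
 */
void
sf_buf_mext(void *addr, void *args)
{
	struct sf_buf *sf = args;
	vm_page_t m = sf_buf_page(sf);

	sf_buf_free(sf);
	vm_page_lock_queues();
	vm_page_unwire(m, 0);
	if (m->wire_count == 0 && m->object == NULL)
		vm_page_free(m);
	vm_page_unlock_queues();
}
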
2004-03-16 19:04:28 +00:00
grehan
256bcf5eae Kernel changes for libthr (and probably libpthread).
include/ucontext.h
 - remove trapframe and switch over to 'generic' description of machine
   state. Include version field to help with future modifications.
   Include floating point and altivec state, and hopefully align
   correctly

powerpc/copyinout.c
 - fill out casuptr() sync primitive, required by kern_umtx.c (sketched below)

powerpc/machdep.c
 - shifted proc0/thread0/pcpu setup to before cninit, since
   syscons -> make_dev -> devlock requires a valid curthread
 - implemented get_mcontext/set_mcontext
 - recast sendsig/sigreturn to use get/set_mcontext and new
   ucontext struct. floating point now saved
 - TODO: save/restore altivec state

powerpc/vm_machdep.c
 - implemented cpu_thread_setup/cpu_set_upcall/cpu_set_upcall_kse
 - eliminated trailing whitespace
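
A sketch of the casuptr() semantics (atomicity and the real powerpc
implementation details are glossed over here):

/*
 * Compare-and-set a pointer-sized word in user space: if *p == old,
 * store new; return the value previously found, or -1 on a fault.
 */
intptr_t
casuptr(intptr_t *p, intptr_t old, intptr_t new)
{
	intptr_t cur;

	if (copyin(p, &cur, sizeof(cur)) != 0)
		return (-1);
	if (cur == old && copyout(&new, p, sizeof(new)) != 0)
		return (-1);
	return (cur);
}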

Submitted by:  Suleiman Souhlal <refugee@segfaulted.com>, ucontext by grehan
2004-03-02 06:13:09 +00:00
silby
a7d8091ae5 Track three new sendfile-related statistics:
- The number of times sendfile had to do disk I/O
- The number of times sfbuf allocation failed
- The number of times sfbuf allocation had to wait
2003-12-28 08:57:09 +00:00
silby
a58bddbe36 Move the declaration of sfbufspeak and sfbufsused to mbuf.h,
and use imax instead of max, as sfbufspeak and sfbufsused
are signed.

Submitted by:   bde
2003-12-28 01:43:22 +00:00
silby
5c5418dd6e Track current and peak sfbuf usage, export the values via sysctl. 2003-12-27 07:52:47 +00:00
alc
aea6af995e - Remove unnecessary synchronization from sf_buf_init(). (There is only
one active CPU when sf_buf_init() is performed.)
2003-11-16 23:40:06 +00:00
alc
74614e7f63 - Modify alpha's sf_buf implementation to use the direct virtual-to-
physical mapping.
 - Move the sf_buf API to its own header file; make struct sf_buf's
   definition machine dependent.  In this commit, we remove an
   unnecessary field from struct sf_buf on the alpha, amd64, and ia64.
   Ultimately, we may eliminate struct sf_buf on those architectures
   except as an opaque pointer that references a vm page.
2003-11-16 06:11:26 +00:00
alc
8b0114def1 Migrate the sf_buf allocator that is used by sendfile(2) and zero-copy
sockets into machine-dependent files.  The rationale for this
migration is illustrated by the modified amd64 allocator.  It uses the
amd64's direct map to avoid ephemeral mappings in the kernel's
address space.  On an SMP, the ephemeral mappings result in an IPI
for TLB shootdown for each transmitted page.  Yuck.

Maintainers of other 64-bit platforms with direct maps should be able
to use the amd64 allocator as a reference implementation.
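
On a direct-map machine the allocator can collapse to little more than the
following (a sketch assuming an amd64-style PHYS_TO_DMAP() macro):

/* No KVA allocation, so no TLB shootdown IPIs on free. */
static __inline struct sf_buf *
sf_buf_alloc(struct vm_page *m)
{
	return ((struct sf_buf *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)));
}

static __inline void
sf_buf_free(struct sf_buf *sf)
{
	/* Nothing to tear down; the direct map is permanent. */
}
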
2003-08-29 20:04:10 +00:00
marcel
4194d813c1 In vm_thread_swap{in|out}(), remove the alpha specific conditional
compilation and replace it with a call to cpu_thread_swap{in|out}().
This allows us to add similar code on ia64 without cluttering the
code even more.
2003-08-16 23:15:15 +00:00
grehan
e6a3a6744e Update powerpc to use the (old thread,new thread) calling convention
for cpu_throw() and cpu_switch().
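
The convention being adopted is, roughly (a sketch, not the exact MD
prototypes):

void	cpu_switch(struct thread *old, struct thread *new);	/* save old, resume new */
void	cpu_throw(struct thread *old, struct thread *new);	/* discard old, resume new */
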
2003-08-14 03:56:24 +00:00
peter
1c887bc40f Deal with 'options KSTACK_PAGES' being a global option. 2003-07-31 01:31:32 +00:00
peter
fda03b7cfc GC unused cpu_wait() function 2003-06-11 05:20:33 +00:00
marcel
482a35058c Change the second (and last) argument of cpu_set_upcall(). Previously
we were passing in a void* representing the PCB of the parent thread.
Now we pass a pointer to the parent thread itself.
The prime reason for this change is to allow cpu_set_upcall() to copy
(parts of) the trapframe instead of having it done in MI code in each
caller of cpu_set_upcall(). Copying the trapframe cannot always be
done with a simple bcopy() or may not always be optimal that way. On
ia64 specifically the trapframe contains information that is specific
to an entry into the kernel and can only be used by the corresponding
exit from the kernel. A trapframe copied verbatim from another frame
is in most cases useless without some additional normalization.

Note that this change removes the assignment to td->td_frame in some
implementations of cpu_set_upcall(). The assignment is redundant.
A previous call to cpu_thread_setup() already did the exact same
assignment. An added benefit of removing the redundant assignment is
that we can now change td_pcb without nasty side-effects.

This change officially marks ia64 as capable of 1:1 threading.
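
In terms of prototypes, the change amounts to the following (parameter names
are assumptions):

/* Before: only the parent's PCB was visible to MD code. */
void	cpu_set_upcall(struct thread *td, void *pcb);

/* After: the whole parent thread, so MD code can copy and normalize
 * (parts of) the trapframe itself. */
void	cpu_set_upcall(struct thread *td, struct thread *td0);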

Not tested on: amd64, powerpc
Compile & boot tested on: alpha, sparc64
Functionally tested on: i386, ia64
2003-06-04 21:13:21 +00:00
jeff
590a39e29b - Split the struct kse into struct upcall and struct kse. struct kse will
soon be visible only to schedulers.  This greatly simplifies much of the
   KSE code.

Submitted by:	davidxu
2003-02-17 05:14:26 +00:00
julian
e8efa7328e Reversion of commit by Davidxu plus fixes since applied.
I'm not convinced there is anything major wrong with the patch but
them's the rules..

I am using my "David's mentor" hat to revert this as he's
offline for a while.
2003-02-01 12:17:09 +00:00
davidxu
4b9b549ca2 Move the UPCALL-related data structures out of the kse and introduce a new
data structure called kse_upcall to manage UPCALLs. All KSE binding
and loaning code is gone.

A thread that owns an upcall can collect all completed syscall contexts in
its ksegrp, turn itself into UPCALL mode, and take those contexts back
to userland. Any thread without an upcall structure has to export its
contexts and exit at the user boundary.

Any thread running in user mode owns an upcall structure. When it enters the
kernel, if the kse mailbox's current thread pointer is not NULL, then when
the thread blocks in the kernel, a new UPCALL thread is created and the
upcall structure is transferred to the new UPCALL thread. If the kse
mailbox's current thread pointer is NULL, then no UPCALL thread is created
when a thread blocks in the kernel.

Each upcall always has an owner thread. Userland can remove an upcall by
calling kse_exit; when all upcalls in a ksegrp are removed, the group is
automatically shut down. An upcall owner thread also exits when the process
is in the exiting state; when an owner thread exits, the upcall it owns is
also removed.

A KSE is a pure scheduler entity: it represents a virtual CPU. When a thread
is running, it always has a KSE associated with it. The scheduler is free to
assign a KSE to a thread according to thread priority; if a thread's priority
changes, the KSE can be moved from one thread to another.

When a ksegrp is created, N KSEs are always created in the group, where N is
the number of physical CPUs in the current system. This makes it possible
that, even if a userland UTS is only single-CPU safe, threads in the kernel
can still execute on different CPUs in parallel. Userland calls kse_create to
add more upcall structures to the ksegrp to increase concurrency in userland
itself; the kernel is not restricted by the number of upcalls userland
provides.
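
A hypothetical sketch of the new per-upcall bookkeeping; the field names are
invented for illustration and are not the actual kse_upcall definition:

struct kse_upcall {
	TAILQ_ENTRY(kse_upcall)	ku_link;	/* upcalls in this ksegrp */
	struct ksegrp		*ku_ksegrp;	/* owning group */
	struct thread		*ku_owner;	/* owner thread */
	struct kse_mailbox	*ku_mailbox;	/* userland mailbox */
	int			ku_flags;
};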

The code hasn't been tested under SMP by the author due to lack of hardware.

Reviewed by: julian
2003-01-26 11:41:35 +00:00
dillon
aba727244c Merge all the various copies of vm_fault_quick() into a single
portable copy.
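
The portable version is essentially a probe through fubyte()/subyte() (shown
here as a sketch of the merged code):

/* Touch one byte at v, optionally rewriting it, to force the page in. */
int
vm_fault_quick(caddr_t v, int prot)
{
	int r;

	if (prot & VM_PROT_WRITE)
		r = subyte(v, fubyte(v));
	else
		r = fubyte(v);
	return (r);
}
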
2003-01-16 00:02:21 +00:00
dillon
bd6fdb8977 Merge all the various copies of vmapbuf() and vunmapbuf() into a single
portable copy.  Note that pmap_extract() must be used instead of
pmap_kextract().

This is precursor work to a reorganization of vmapbuf() to close remaining
user/kernel races (which can lead to a panic).
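
The distinction matters inside the merged vmapbuf(): user pages have to be
translated through the user pmap. A sketch of the lookup (variable names
assumed):

/* pmap_kextract() only walks the kernel pmap and would be wrong here. */
pa = pmap_extract(vmspace_pmap(curproc->p_vmspace), (vm_offset_t)uva);
m = PHYS_TO_VM_PAGE(pa);
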
2003-01-15 23:54:35 +00:00
grehan
8e6417022b Add page queues locking to vunmapbuf().
Obtained from: sparc64
Approved by:  benno
2003-01-08 12:29:59 +00:00
julian
9868d96f1f Unbreak the KSE code. Keep track of zombie threads using per-CPU storage
during the context switch. Rearrange thread cleanups
to avoid problems with Giant. Clean threads when freed or
when recycled.

Approved by:	re (jhb)
2002-12-10 02:33:45 +00:00
mux
8169a213d9 Under certain circumstances, we were calling kmem_free() from
i386 cpu_thread_exit().  This resulted in a panic with WITNESS
since we need to hold Giant to call kmem_free(), and we weren't
holding it anymore in cpu_thread_exit().  We now do this from a
new MD function, cpu_thread_dtor(), called by thread_dtor().

Approved by:	re@
Suggested by:	jhb
2002-11-22 23:57:02 +00:00
grehan
0ef7fc1c4a Add the USER_SR segment register to pcb state. Initialize correctly,
and save/restore during a context switch.

The USER_SR could be overwritten when the current thread was switched
out with a faulting copyin/copyout.

Approved by: Benno
2002-10-21 05:27:41 +00:00
grehan
d85d437918 - bring vmapbuf/vunmapbuf in line with other archs
- update for recent KSE changes

Approved by: benno
2002-09-19 04:39:28 +00:00
peter
e3b1e6d8fa Zap the implementations of the i386-aout specific cpu_coredump function.
Most of the non-i386 platforms had rather broken implementations anyway.
2002-09-07 01:26:34 +00:00
rwatson
44404e4547 In order to better support flexible and extensible access control,
make a series of modifications to the credential arguments relating
to file read and write operations to clarify which credential is
used for what:

- Change fo_read() and fo_write() to accept "active_cred" instead of
  "cred", and change the semantics of consumers of fo_read() and
  fo_write() to pass the active credential of the thread requesting
  an operation rather than the cached file cred.  The cached file
  cred is still available in fo_read() and fo_write() consumers
  via fp->f_cred.  These changes are largely in sys_generic.c.

For each implementation of fo_read() and fo_write(), update cred
usage to reflect this change and maintain current semantics:

- badfo_readwrite() unchanged
- kqueue_read/write() unchanged
- pipe_read/write() now authorize MAC using active_cred rather
  than td->td_ucred
- soo_read/write() unchanged
- vn_read/write() now authorize MAC using active_cred but
  VOP_READ/WRITE() with fp->f_cred

Modify vn_rdwr() to accept two credential arguments instead of a
single credential: active_cred and file_cred.  Use active_cred
for MAC authorization, and select a credential for use in
VOP_READ/WRITE() based on whether file_cred is NULL or not.  If
file_cred is provided, authorize the VOP using that cred,
otherwise the active credential, matching current semantics.
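
The selection logic amounts to something like this illustrative helper (not
the literal vn_rdwr() code); MAC authorization always sees active_cred:

static struct ucred *
vn_io_cred(struct ucred *active_cred, struct ucred *file_cred)
{
	return (file_cred != NOCRED ? file_cred : active_cred);	/* NOCRED == NULL */
}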

Modify current vn_rdwr() consumers to pass a file_cred if used
in the context of a struct file, and to always pass active_cred.
When vn_rdwr() is used without a file_cred, pass NOCRED.

These changes should maintain current semantics for read/write,
but avoid a redundant passing of fp->f_cred, as well as making
it more clear what the origin of each credential is in file
descriptor read/write operations.

Follow-up commits will make similar changes to other file descriptor
operations, and modify the MAC framework to pass both credentials
to MAC policy modules so they can implement either semantic for
revocation.

Obtained from:	TrustedBSD Project
Sponsored by:	DARPA, NAI Labs
2002-08-15 20:55:08 +00:00
benno
3b1e5e1a74 Changes for KSE3.
Submitted by:	Peter Grehan <peterg@ptree32.com.au>
2002-07-09 12:57:23 +00:00
jake
e102a9b6dd Add an MD callout like cpu_exit, but which is called after sched_lock is
obtained, when all other scheduling activity is suspended.  This is needed
on sparc64 to deactivate the vmspace of the exiting process on all cpus.
Otherwise if another unrelated process gets the exact same vmspace structure
allocated to it (same address), its address space will not be activated
properly.  This seems to fix some spontaneous signal 11 problems with smp
on sparc64.
2002-06-24 15:48:02 +00:00
benno
4fc9c21665 - Move macros that represent where syscall args are kept in a trapframe from
trap.c to frame.h
- Use the macros in vm_machdep.c:cpu_fork() to set up the trap frame of the
  new thread.
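
The macros in question look something like the following (the names FIRSTARG
and NARGREG are assumptions here, per the powerpc ABI's use of r3..r10 for
arguments):

#define	FIRSTARG	3	/* first syscall argument register (%r3) */
#define	NARGREG		8	/* %r3..%r10 carry syscall arguments */
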
2002-05-28 12:24:29 +00:00
alc
34c6c68a2d Use the MI vm_map_growstack() instead of the MD grow_stack() in trap(). Remove
the MD grow_stack().
2002-03-30 20:44:31 +00:00
alfred
f1b2b9896d Remove __P.
Reviewed by: benno
2002-03-20 23:17:50 +00:00
benno
77973ba896 GC an unused variable in cpu_fork(). 2002-02-28 08:48:58 +00:00
benno
a0268a0622 Make fork work, at least for kthreads. Switching still has some issues. 2002-02-28 03:24:07 +00:00
benno
8c67ca76f7 Complete rework of the PowerPC pmap and a number of other bits in the early
boot sequence.

The new pmap.c is based on NetBSD's newer pmap.c (for the mpc6xx processors)
which is 70% faster than the older code that the original pmap.c was based
on.  It has also been based on the framework established by jake's initial
sparc64 pmap.c.

There is no change to how far the kernel gets (it makes it to the mountroot
prompt in psim) but the new pmap code is a lot cleaner.

Obtained from:	NetBSD (pmap code)
2002-02-14 01:39:11 +00:00
julian
b5eb64d6f0 Pre-KSE/M3 commit.
This is a low-functionality change that changes the kernel to access the main
thread of a process via the linked list of threads rather than
assuming that it is embedded in the process. It IS still embedded there,
but this removes all the code that assumes that, in preparation for the next
commit which will actually move it out.
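
In other words, code that used to reach into struct proc for an embedded
thread now goes through the list, along the lines of (a sketch):

struct thread *td = TAILQ_FIRST(&p->p_threads);	/* main (first) thread */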

Reviewed by: peter@freebsd.org, gallatin@cs.duke.edu, benno rice,
2002-02-07 20:58:47 +00:00
mp
b83678939d Fix includes based on recent changes to lock.h, mutex.h and ktr.h. 2001-10-19 22:45:46 +00:00
benno
2fe7725b13 Flesh out cpu_fork() and cpu_set_fork_handler(). This is a work in progress. 2001-10-15 12:24:43 +00:00
mjacob
37494cc800 Fix a problem where a user buffer outside of the area being tested
could be corrupted.

PR:		29194
Obtained from:	Tor.Egge@fast.no
MFC after:	2 weeks
2001-10-02 18:34:20 +00:00
mp
a2e5cb9c1e Update PowerPC MD code to compile and do initial bootstrap based on
recent changes (KSE and VM requiring physmem to be setup).

Reviewed by:	benno, jhb, julian
2001-09-20 00:47:17 +00:00