/*-
 * Copyright (c) 1991 Regents of the University of California.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * from: @(#)proc.h 7.1 (Berkeley) 5/15/91
 * $FreeBSD$
 */

#ifndef _MACHINE_PROC_H_
#define _MACHINE_PROC_H_

#include <machine/segments.h>

struct proc_ldt {
	caddr_t	ldt_base;
	int	ldt_len;
	int	ldt_refcnt;
	u_long	ldt_active;
	struct	segment_descriptor ldt_sd;
};

/*
 * Machine-dependent part of the proc structure for i386.
 * Table of MD locks:
 *	t - Descriptor tables lock
 */
struct mdthread {
	int	md_spinlock_count;	/* (k) spinlock_enter() nesting level */
	register_t md_saved_flags;	/* (k) interrupt flags saved by spinlock_enter() */
	register_t md_spurflt_addr;	/* (k) Spurious page fault address. */
};
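
/*
 * Illustrative sketch (not part of this header): the machine-dependent
 * spinlock_enter()/spinlock_exit() KPI is expected to use the two fields
 * above to block interrupts for the outermost spinlock section only,
 * roughly as follows.  The real implementation lives in the i386 machdep
 * code; treat this as an assumption-level outline, not a copy of it.
 *
 *	void
 *	spinlock_enter(void)
 *	{
 *		struct thread *td = curthread;
 *
 *		if (td->td_md.md_spinlock_count == 0) {
 *			td->td_md.md_saved_flags = intr_disable();
 *			td->td_md.md_spinlock_count = 1;
 *		} else
 *			td->td_md.md_spinlock_count++;
 *	}
 *
 *	void
 *	spinlock_exit(void)
 *	{
 *		struct thread *td = curthread;
 *
 *		td->td_md.md_spinlock_count--;
 *		if (td->td_md.md_spinlock_count == 0)
 *			intr_restore(td->td_md.md_saved_flags);
 *	}
 */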

struct mdproc {
	struct proc_ldt *md_ldt;	/* (t) per-process ldt */
};

#define KINFO_PROC_SIZE 768	/* expected sizeof(struct kinfo_proc) */

#ifdef _KERNEL

/* Get the current kernel thread stack usage. */
#define GET_STACK_USAGE(total, used) do {				\
	struct thread	*td = curthread;				\
	(total) = td->td_kstack_pages * PAGE_SIZE;			\
	(used) = (char *)td->td_kstack +				\
	    td->td_kstack_pages * PAGE_SIZE -				\
	    (char *)&td;						\
} while (0)
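
/*
 * Usage example (illustrative): a caller that wants to avoid recursing
 * when the kernel stack is nearly full might check the headroom first.
 * The 3/4 threshold and the EAGAIN return are arbitrary choices for the
 * example, not limits defined by this header.
 *
 *	int total, used;
 *
 *	GET_STACK_USAGE(total, used);
 *	if (used > total * 3 / 4)
 *		return (EAGAIN);
 */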

void	set_user_ldt(struct mdproc *);
struct	proc_ldt *user_ldt_alloc(struct mdproc *, int);
void	user_ldt_free(struct thread *);
void	user_ldt_deref(struct proc_ldt *pldt);
extern struct mtx dt_lock;
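
/*
 * Illustrative sketch: dt_lock protects the (t)-annotated descriptor
 * table state, including md_ldt and ldt_refcnt.  A consumer pinning a
 * process' LDT might take a reference roughly as follows.  The exact
 * discipline (spin vs. sleep mutex, whether user_ldt_deref() expects
 * dt_lock held) belongs to the MD implementation; the lock calls below
 * are assumptions made for the sake of the example.
 *
 *	struct proc_ldt *pldt;
 *
 *	mtx_lock_spin(&dt_lock);
 *	if ((pldt = p->p_md.md_ldt) != NULL)
 *		pldt->ldt_refcnt++;
 *	mtx_unlock_spin(&dt_lock);
 */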

/*
 * Per-syscall argument block: filled from user mode by the sysentvec
 * sv_fetch_syscall_args method and consumed by the MI syscallenter()
 * and syscallret() code.
 */
struct syscall_args {
	u_int	code;
	struct	sysent *callp;
	register_t args[8];
	int	narg;
};
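
/*
 * Illustrative sketch: with the MI syscallenter()/syscallret() split,
 * the machine-dependent syscall trap handler reduces to roughly the
 * shape below.  Trap-frame bookkeeping, signal delivery and vm86
 * handling are omitted, so treat this as an outline under assumptions
 * rather than the actual i386 handler.
 *
 *	void
 *	syscall(struct trapframe *frame)
 *	{
 *		struct thread *td = curthread;
 *		struct syscall_args sa;
 *		int error;
 *
 *		td->td_frame = frame;
 *		error = syscallenter(td, &sa);
 *		syscallret(td, error, &sa);
 *	}
 */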
#endif /* _KERNEL */
#endif /* !_MACHINE_PROC_H_ */