57b097e18c
Allow (but do not require) interrupts to remain enabled in critical sections and streamline critical_enter() and critical_exit().

This commit allows an architecture to leave interrupts enabled inside critical sections if it so wishes. Architectures that do not wish to do this are not affected by this change.

This commit implements the feature for the I386 architecture and provides a sysctl, debug.critical_mode, which defaults to 1 (use the feature). For now you can turn the sysctl on and off at any time in order to test the architectural changes or track down bugs.

This commit is just the first stage. Some areas of the code, specifically the MACHINE_CRITICAL_ENTER #ifdef'd code, are strictly temporary and will be cleaned up in the STAGE-2 commit when the critical_*() functions are moved entirely into MD files.

The following changes have been made:

* critical_enter() and critical_exit() for I386 now simply increment and decrement curthread->td_critnest. They no longer disable hard interrupts. When critical_exit() decrements the counter to 0 it effectively calls a routine to deal with whatever interrupts were deferred during the time the code was operating in a critical section. Other architectures are unaffected.

* fork_exit() has been conditionalized to remove MD assumptions for the new code. Old code will still use the old MD assumptions in regards to hard interrupt disablement. In STAGE-2 this will be turned into a subroutine call into MD code rather than being hardcoded in MI code. The new code places the burden of entering the critical section in the trampoline code where it belongs.

* I386: interrupts are now enabled while we are in a critical section. The interrupt vector code has been adjusted to deal with this fact. If it detects that we are in a critical section it currently defers the interrupt by adding the appropriate bit to an interrupt mask.

* In order to accomplish the deferral, icu_lock is required. This is i386-specific. Thus icu_lock can only be obtained by mainline i386 code while interrupts are hard disabled. This change has been made.

* Because interrupts may or may not be hard disabled during a context switch, cpu_switch() can no longer simply assume that PSL_I will be in a consistent state. Therefore, it now saves and restores eflags.

* FAST INTERRUPT PROVISION. Fast interrupts are currently deferred. The intention is to eventually allow them to operate either while we are in a critical section or, if we are able to restrict the use of sched_lock, while we are not holding the sched_lock.

* ICU and APIC vector assembly for I386 has been cleaned up. The ICU code has been cleaned up to match the APIC code in regards to format and macro availability. Additionally, the code has been adjusted to deal with deferred interrupts.

* Deferred interrupts use a per-cpu boolean int_pending, and masks ipending, spending, and fpending. Being per-cpu variables, it is not currently necessary to use locked bus cycles when modifying them. Note that the same mechanism will enable preemption to be incorporated as a true software interrupt without having to further hack up the critical nesting code.

* Note: the old critical_enter() code in kern/kern_switch.c is currently #ifdef'd to be compatible with both the old and new methodology. In STAGE-2 it will be moved entirely to MD code.

Performance issues:

One of the purposes of this commit is to enhance critical section performance, specifically to greatly reduce bus overhead so that the critical section code can be used to protect per-cpu caches. These caches, such as Jeff's slab allocator work, can potentially operate very quickly, making the effective savings of the new critical section code very significant.

The second purpose of this commit is to allow architectures to enable certain interrupts while in a critical section. Specifically, the intention is to eventually allow certain FAST interrupts to operate rather than defer.

The third purpose of this commit is to begin to clean up the critical_enter()/critical_exit()/cpu_critical_enter()/cpu_critical_exit() API, which currently has serious cross-pollution in MI code (in fork_exit() and ast(), for example).

The fourth purpose of this commit is to provide a framework that allows kernel-preempting software interrupts to be implemented cleanly. This is currently used for two forward interrupts in I386. Other architectures will have the choice of using this infrastructure or building the functionality directly into critical_enter()/critical_exit().

Finally, this commit is designed to greatly improve the flexibility of various architectures to manage critical section handling, software interrupts, preemption, and other highly integrated architecture-specific details.
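To make the mechanism concrete before the header below, here is a rough sketch of what the deferral logic amounts to. This is an illustrative approximation only, not the committed code: the per-cpu names (int_pending, ipending, fpending, spending) and td_critnest come from the commit message above, but the dispatch helpers run_fast_intr(), run_ithd(), and run_soft_intr() are hypothetical stand-ins, and the sketch assumes the kernel environment of this header (u_int, the inline ffs()). The real STAGE-1 logic lives in MD trap and vector code and must also guard against interrupts arriving while the pending masks are being drained.

/*
 * Illustrative sketch only -- not the committed FreeBSD code.
 */
struct thread {
        int     td_critnest;    /* critical section nesting count */
};

extern struct thread *curthread;

/* Per-cpu deferral state; being per-cpu, no locked bus cycles needed. */
extern int      int_pending;    /* any interrupt deferred? */
extern u_int    fpending;       /* deferred FAST interrupts */
extern u_int    ipending;       /* deferred ICU/APIC interrupts */
extern u_int    spending;       /* deferred software interrupts */

void    run_fast_intr(int irq); /* hypothetical dispatch stubs */
void    run_ithd(int irq);
void    run_soft_intr(int which);

/* Entering a critical section is now just a counter increment. */
static __inline void
critical_enter(void)
{
        curthread->td_critnest++;
}

/*
 * Leaving the outermost critical section replays whatever the
 * interrupt vectors deferred while td_critnest was nonzero.
 */
static __inline void
critical_exit(void)
{
        int irq;

        if (--curthread->td_critnest == 0 && int_pending) {
                int_pending = 0;
                while (fpending) {
                        irq = ffs(fpending) - 1;
                        fpending &= ~(1 << irq);
                        run_fast_intr(irq);
                }
                while (ipending) {
                        irq = ffs(ipending) - 1;
                        ipending &= ~(1 << irq);
                        run_ithd(irq);
                }
                while (spending) {
                        irq = ffs(spending) - 1;
                        spending &= ~(1 << irq);
                        run_soft_intr(irq);
                }
        }
}

The vector side of the bargain (not shown) is what the commit describes: with icu_lock held and interrupts hard disabled, a vector that finds td_critnest nonzero sets its bit in the appropriate pending mask, sets int_pending, EOIs the ICU/APIC, and returns without running the handler; otherwise it dispatches normally.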
/*-
 * Copyright (c) 1993 The Regents of the University of California.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *      This product includes software developed by the University of
 *      California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * $FreeBSD$
 */

/*
 * Functions to provide access to special i386 instructions.
 */

#ifndef _MACHINE_CPUFUNC_H_
#define _MACHINE_CPUFUNC_H_

#include <sys/cdefs.h>
#include <machine/psl.h>

__BEGIN_DECLS
#define readb(va)       (*(volatile u_int8_t *) (va))
#define readw(va)       (*(volatile u_int16_t *) (va))
#define readl(va)       (*(volatile u_int32_t *) (va))

#define writeb(va, d)   (*(volatile u_int8_t *) (va) = (d))
#define writew(va, d)   (*(volatile u_int16_t *) (va) = (d))
#define writel(va, d)   (*(volatile u_int32_t *) (va) = (d))

#define MACHINE_CRITICAL_ENTER  /* MD code defines critical_enter/exit/fork */

#ifdef  __GNUC__

#ifdef SWTCH_OPTIM_STATS
extern  int     tlb_flush_count;        /* XXX */
#endif

static __inline void
breakpoint(void)
{
        __asm __volatile("int $3");
}

static __inline u_int
bsfl(u_int mask)
{
        u_int   result;

        __asm __volatile("bsfl %1,%0" : "=r" (result) : "rm" (mask));
        return (result);
}

static __inline u_int
bsrl(u_int mask)
{
        u_int   result;

        __asm __volatile("bsrl %1,%0" : "=r" (result) : "rm" (mask));
        return (result);
}

static __inline void
disable_intr(void)
{
        __asm __volatile("cli" : : : "memory");
}

static __inline void
enable_intr(void)
{
        __asm __volatile("sti");
}

#define HAVE_INLINE_FFS

static __inline int
ffs(int mask)
{
        /*
         * Note that gcc-2's builtin ffs would be used if we didn't declare
         * this inline or turn off the builtin.  The builtin is faster but
         * broken in gcc-2.4.5 and slower but working in gcc-2.5 and later
         * versions.
         */
        return (mask == 0 ? mask : bsfl((u_int)mask) + 1);
}

#define HAVE_INLINE_FLS

static __inline int
fls(int mask)
{
        return (mask == 0 ? mask : bsrl((u_int)mask) + 1);
}

#if __GNUC__ < 2

#define inb(port)               inbv(port)
#define outb(port, data)        outbv(port, data)

#else /* __GNUC >= 2 */

/*
 * The following complications are to get around gcc not having a
 * constraint letter for the range 0..255.  We still put "d" in the
 * constraint because "i" isn't a valid constraint when the port
 * isn't constant.  This only matters for -O0 because otherwise
 * the non-working version gets optimized away.
 *
 * Use an expression-statement instead of a conditional expression
 * because gcc-2.6.0 would promote the operands of the conditional
 * and produce poor code for "if ((inb(var) & const1) == const2)".
 *
 * The unnecessary test `(port) < 0x10000' is to generate a warning if
 * the `port' has type u_short or smaller.  Such types are pessimal.
 * This actually only works for signed types.  The range check is
 * careful to avoid generating warnings.
 */
#define inb(port) __extension__ ({                                      \
        u_char  _data;                                                  \
        if (__builtin_constant_p(port) && ((port) & 0xffff) < 0x100    \
            && (port) < 0x10000)                                        \
                _data = inbc(port);                                     \
        else                                                            \
                _data = inbv(port);                                     \
        _data; })

#define outb(port, data) (                                              \
        __builtin_constant_p(port) && ((port) & 0xffff) < 0x100        \
        && (port) < 0x10000                                             \
        ? outbc(port, data) : outbv(port, data))

static __inline u_char
inbc(u_int port)
{
        u_char  data;

        __asm __volatile("inb %1,%0" : "=a" (data) : "id" ((u_short)(port)));
        return (data);
}

static __inline void
outbc(u_int port, u_char data)
{
        __asm __volatile("outb %0,%1" : : "a" (data), "id" ((u_short)(port)));
}

#endif /* __GNUC <= 2 */

static __inline u_char
inbv(u_int port)
{
        u_char  data;
        /*
         * We use %%dx and not %1 here because i/o is done at %dx and not at
         * %edx, while gcc generates inferior code (movw instead of movl)
         * if we tell it to load (u_short) port.
         */
        __asm __volatile("inb %%dx,%0" : "=a" (data) : "d" (port));
        return (data);
}

static __inline u_int
inl(u_int port)
{
        u_int   data;

        __asm __volatile("inl %%dx,%0" : "=a" (data) : "d" (port));
        return (data);
}

static __inline void
insb(u_int port, void *addr, size_t cnt)
{
        __asm __volatile("cld; rep; insb"
                         : "+D" (addr), "+c" (cnt)
                         : "d" (port)
                         : "memory");
}

static __inline void
insw(u_int port, void *addr, size_t cnt)
{
        __asm __volatile("cld; rep; insw"
                         : "+D" (addr), "+c" (cnt)
                         : "d" (port)
                         : "memory");
}

static __inline void
insl(u_int port, void *addr, size_t cnt)
{
        __asm __volatile("cld; rep; insl"
                         : "+D" (addr), "+c" (cnt)
                         : "d" (port)
                         : "memory");
}

static __inline void
invd(void)
{
        __asm __volatile("invd");
}

static __inline u_short
inw(u_int port)
{
        u_short data;

        __asm __volatile("inw %%dx,%0" : "=a" (data) : "d" (port));
        return (data);
}

static __inline void
outbv(u_int port, u_char data)
{
        u_char  al;
        /*
         * Use an unnecessary assignment to help gcc's register allocator.
         * This makes a large difference for gcc-1.40 and a tiny difference
         * for gcc-2.6.0.  For gcc-1.40, al had to be ``asm("ax")'' for
         * best results.  gcc-2.6.0 can't handle this.
         */
        al = data;
        __asm __volatile("outb %0,%%dx" : : "a" (al), "d" (port));
}

static __inline void
outl(u_int port, u_int data)
{
        /*
         * outl() and outw() aren't used much so we haven't looked at
         * possible micro-optimizations such as the unnecessary
         * assignment for them.
         */
        __asm __volatile("outl %0,%%dx" : : "a" (data), "d" (port));
}

static __inline void
outsb(u_int port, const void *addr, size_t cnt)
{
        __asm __volatile("cld; rep; outsb"
                         : "+S" (addr), "+c" (cnt)
                         : "d" (port));
}

static __inline void
outsw(u_int port, const void *addr, size_t cnt)
{
        __asm __volatile("cld; rep; outsw"
                         : "+S" (addr), "+c" (cnt)
                         : "d" (port));
}

static __inline void
outsl(u_int port, const void *addr, size_t cnt)
{
        __asm __volatile("cld; rep; outsl"
                         : "+S" (addr), "+c" (cnt)
                         : "d" (port));
}

static __inline void
outw(u_int port, u_short data)
{
        __asm __volatile("outw %0,%%dx" : : "a" (data), "d" (port));
}

static __inline u_int
read_eflags(void)
{
        u_int   ef;

        __asm __volatile("pushfl; popl %0" : "=r" (ef));
        return (ef);
}

static __inline void
do_cpuid(u_int ax, u_int *p)
{
        __asm __volatile(
        "cpuid"
        : "=a" (p[0]), "=b" (p[1]), "=c" (p[2]), "=d" (p[3])
        :  "0" (ax)
        );
}

static __inline u_int64_t
rdmsr(u_int msr)
{
        u_int64_t rv;

        __asm __volatile("rdmsr" : "=A" (rv) : "c" (msr));
        return (rv);
}

static __inline u_int64_t
rdpmc(u_int pmc)
{
        u_int64_t rv;

        __asm __volatile("rdpmc" : "=A" (rv) : "c" (pmc));
        return (rv);
}

static __inline u_int64_t
rdtsc(void)
{
        u_int64_t rv;

        __asm __volatile("rdtsc" : "=A" (rv));
        return (rv);
}

static __inline void
wbinvd(void)
{
        __asm __volatile("wbinvd");
}

static __inline void
write_eflags(u_int ef)
{
        __asm __volatile("pushl %0; popfl" : : "r" (ef));
}

static __inline void
wrmsr(u_int msr, u_int64_t newval)
{
        __asm __volatile("wrmsr" : : "A" (newval), "c" (msr));
}

static __inline void
load_cr0(u_int data)
{

        __asm __volatile("movl %0,%%cr0" : : "r" (data));
}

static __inline u_int
rcr0(void)
{
        u_int   data;

        __asm __volatile("movl %%cr0,%0" : "=r" (data));
        return (data);
}

static __inline u_int
rcr2(void)
{
        u_int   data;

        __asm __volatile("movl %%cr2,%0" : "=r" (data));
        return (data);
}

static __inline void
load_cr3(u_int data)
{

        __asm __volatile("movl %0,%%cr3" : : "r" (data) : "memory");
#if defined(SWTCH_OPTIM_STATS)
        ++tlb_flush_count;
#endif
}

static __inline u_int
rcr3(void)
{
        u_int   data;

        __asm __volatile("movl %%cr3,%0" : "=r" (data));
        return (data);
}

static __inline void
load_cr4(u_int data)
{
        __asm __volatile("movl %0,%%cr4" : : "r" (data));
}

static __inline u_int
rcr4(void)
{
        u_int   data;

        __asm __volatile("movl %%cr4,%0" : "=r" (data));
        return (data);
}

/*
 * Global TLB flush (except for those for pages marked PG_G)
 */
static __inline void
cpu_invltlb(void)
{

        load_cr3(rcr3());
}

/*
 * TLB flush for an individual page (even if it has PG_G).
 * Only works on 486+ CPUs (i386 does not have PG_G).
 */
static __inline void
cpu_invlpg(u_int addr)
{

#ifndef I386_CPU
        __asm __volatile("invlpg %0" : : "m" (*(char *)addr) : "memory");
#else
        cpu_invltlb();
#endif
}

#ifdef PAGE_SIZE        /* Avoid this file depending on sys/param.h */
/*
 * Same as above but for a range of pages.
 */
static __inline void
cpu_invlpg_range(u_int startva, u_int endva)
{
#ifndef I386_CPU
        u_int addr;

        for (addr = startva; addr < endva; addr += PAGE_SIZE)
                __asm __volatile("invlpg %0" : : "m" (*(char *)addr));
        __asm __volatile("" : : : "memory");
#else
        cpu_invltlb();
#endif
}
#endif

#ifdef SMP
extern void smp_invlpg(u_int addr);
extern void smp_masked_invlpg(u_int mask, u_int addr);
#ifdef PAGE_SIZE        /* Avoid this file depending on sys/param.h */
extern void smp_invlpg_range(u_int startva, u_int endva);
extern void smp_masked_invlpg_range(u_int mask, u_int startva, u_int endva);
#endif
extern void smp_invltlb(void);
extern void smp_masked_invltlb(u_int mask);
#endif

/*
 * Generic page TLB flush.  Takes care of SMP.
 */
static __inline void
invlpg(u_int addr)
{

        cpu_invlpg(addr);
#ifdef SMP
        smp_invlpg(addr);
#endif
}

#ifdef PAGE_SIZE        /* Avoid this file depending on sys/param.h */
/*
 * Generic TLB flush for a range of pages.  Takes care of SMP.
 * Saves many IPIs for SMP mode.
 */
static __inline void
invlpg_range(u_int startva, u_int endva)
{

        cpu_invlpg_range(startva, endva);
#ifdef SMP
        smp_invlpg_range(startva, endva);
#endif
}
#endif

/*
 * Generic global TLB flush (except for those for pages marked PG_G)
 */
static __inline void
invltlb(void)
{

        cpu_invltlb();
#ifdef SMP
        smp_invltlb();
#endif
}

static __inline u_int
rfs(void)
{
        u_int sel;
        __asm __volatile("movl %%fs,%0" : "=rm" (sel));
        return (sel);
}

static __inline u_int
rgs(void)
{
        u_int sel;
        __asm __volatile("movl %%gs,%0" : "=rm" (sel));
        return (sel);
}

static __inline void
load_fs(u_int sel)
{
        __asm __volatile("movl %0,%%fs" : : "rm" (sel));
}

static __inline void
load_gs(u_int sel)
{
        __asm __volatile("movl %0,%%gs" : : "rm" (sel));
}

static __inline u_int
rdr0(void)
{
        u_int   data;
        __asm __volatile("movl %%dr0,%0" : "=r" (data));
        return (data);
}

static __inline void
load_dr0(u_int sel)
{
        __asm __volatile("movl %0,%%dr0" : : "r" (sel));
}

static __inline u_int
rdr1(void)
{
        u_int   data;
        __asm __volatile("movl %%dr1,%0" : "=r" (data));
        return (data);
}

static __inline void
load_dr1(u_int sel)
{
        __asm __volatile("movl %0,%%dr1" : : "r" (sel));
}

static __inline u_int
rdr2(void)
{
        u_int   data;
        __asm __volatile("movl %%dr2,%0" : "=r" (data));
        return (data);
}

static __inline void
load_dr2(u_int sel)
{
        __asm __volatile("movl %0,%%dr2" : : "r" (sel));
}

static __inline u_int
rdr3(void)
{
        u_int   data;
        __asm __volatile("movl %%dr3,%0" : "=r" (data));
        return (data);
}

static __inline void
load_dr3(u_int sel)
{
        __asm __volatile("movl %0,%%dr3" : : "r" (sel));
}

static __inline u_int
rdr4(void)
{
        u_int   data;
        __asm __volatile("movl %%dr4,%0" : "=r" (data));
        return (data);
}

static __inline void
load_dr4(u_int sel)
{
        __asm __volatile("movl %0,%%dr4" : : "r" (sel));
}

static __inline u_int
rdr5(void)
{
        u_int   data;
        __asm __volatile("movl %%dr5,%0" : "=r" (data));
        return (data);
}

static __inline void
load_dr5(u_int sel)
{
        __asm __volatile("movl %0,%%dr5" : : "r" (sel));
}

static __inline u_int
rdr6(void)
{
        u_int   data;
        __asm __volatile("movl %%dr6,%0" : "=r" (data));
        return (data);
}

static __inline void
load_dr6(u_int sel)
{
        __asm __volatile("movl %0,%%dr6" : : "r" (sel));
}

static __inline u_int
rdr7(void)
{
        u_int   data;
        __asm __volatile("movl %%dr7,%0" : "=r" (data));
        return (data);
}

static __inline void
load_dr7(u_int sel)
{
        __asm __volatile("movl %0,%%dr7" : : "r" (sel));
}

static __inline critical_t
cpu_critical_enter(void)
{
        critical_t eflags;

        eflags = read_eflags();
        disable_intr();
        return (eflags);
}

static __inline void
cpu_critical_exit(critical_t eflags)
{
        write_eflags(eflags);
}

#else /* !__GNUC__ */

int     breakpoint      __P((void));
u_int   bsfl            __P((u_int mask));
u_int   bsrl            __P((u_int mask));
void    cpu_invlpg      __P((u_int addr));
void    cpu_invlpg_range        __P((u_int start, u_int end));
void    disable_intr    __P((void));
void    do_cpuid        __P((u_int ax, u_int *p));
void    enable_intr     __P((void));
u_char  inb             __P((u_int port));
u_int   inl             __P((u_int port));
void    insb            __P((u_int port, void *addr, size_t cnt));
void    insl            __P((u_int port, void *addr, size_t cnt));
void    insw            __P((u_int port, void *addr, size_t cnt));
void    invd            __P((void));
void    invlpg          __P((u_int addr));
void    invlpg_range    __P((u_int start, u_int end));
void    invltlb         __P((void));
u_short inw             __P((u_int port));
void    load_cr0        __P((u_int cr0));
void    load_cr3        __P((u_int cr3));
void    load_cr4        __P((u_int cr4));
void    load_fs         __P((u_int sel));
void    load_gs         __P((u_int sel));
void    outb            __P((u_int port, u_char data));
void    outl            __P((u_int port, u_int data));
void    outsb           __P((u_int port, void *addr, size_t cnt));
void    outsl           __P((u_int port, void *addr, size_t cnt));
void    outsw           __P((u_int port, void *addr, size_t cnt));
void    outw            __P((u_int port, u_short data));
u_int   rcr0            __P((void));
u_int   rcr2            __P((void));
u_int   rcr3            __P((void));
u_int   rcr4            __P((void));
u_int   rfs             __P((void));
u_int   rgs             __P((void));
u_int64_t       rdmsr   __P((u_int msr));
u_int64_t       rdpmc   __P((u_int pmc));
u_int64_t       rdtsc   __P((void));
u_int   read_eflags     __P((void));
void    wbinvd          __P((void));
void    write_eflags    __P((u_int ef));
void    wrmsr           __P((u_int msr, u_int64_t newval));
critical_t      cpu_critical_enter      __P((void));
void    cpu_critical_exit       __P((critical_t eflags));

#endif  /* __GNUC__ */

void    ltr             __P((u_short sel));
void    reset_dbregs    __P((void));
__END_DECLS

#endif /* !_MACHINE_CPUFUNC_H_ */