Commit Graph

963 Commits

Author SHA1 Message Date
John Baldwin
e59ae32f18 - Document the thermal and performance counter LVT entries in the local
APIC.
- Add a lvt_thermal member to the LAPIC struct.
- Add constants for the SMI and INIT LVT delivery modes.
2003-06-06 17:22:15 +00:00
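
An illustrative fragment of what this commit describes, not the committed header (the authoritative names and layout live in the local APIC register header): the LVT delivery-mode field occupies bits 8-10 of an LVT entry, SMI and INIT are two of its encodings, and the thermal-sensor LVT is one register in the memory-mapped LAPIC block.

    #include <stdint.h>

    /* Two of the LVT delivery-mode encodings (bits 8-10). */
    #define APIC_LVT_DM_SMI_SKETCH   0x00000200   /* 010b: System Management Interrupt */
    #define APIC_LVT_DM_INIT_SKETCH  0x00000500   /* 101b: INIT */

    struct lapic_sketch {             /* illustrative only; ignores the padding the
                                         real header places between registers */
        /* ... timer LVT (LAPIC offset 0x320) and earlier registers ... */
        uint32_t lvt_thermal;         /* thermal sensor LVT, LAPIC offset 0x330 */
        uint32_t lvt_pcint;           /* performance counter LVT, offset 0x340 */
        /* ... LINT0/LINT1/error LVTs follow ... */
    };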
Peter Wemm
7fc03ef474 Fix ALIGNED_POINTER(). sizeof((u_int32_t)) is not legal C. 2003-06-04 02:15:13 +00:00
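
A hedged sketch of an ALIGNED_POINTER-style macro to show the class of bug being fixed; the authoritative definition lives in the machine headers and may differ in detail. The broken form applied sizeof to a doubly parenthesized type name, sizeof((u_int32_t)), which is neither a valid expression nor a valid sizeof(type-name); sizeof(t) on the macro's type argument is.

    #include <stdint.h>

    /* Sketch: true when pointer p is aligned well enough to hold an object of type t. */
    #define ALIGNED_POINTER_SKETCH(p, t) \
        ((((uintptr_t)(p)) & (sizeof(t) - 1)) == 0)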
Peter Wemm
c35518b4ed Make this more compatible with libc_r. Make the internal types for storing
registers an array of longs rather than ints.
2003-06-02 21:49:35 +00:00
David E. O'Brien
9676a785e7 Use __FBSDID(). 2003-06-02 06:43:15 +00:00
Peter Wemm
193b147c05 MFi386: i386/include/asm.h rev 1.11: Do not abuse ##. 2003-06-02 05:59:35 +00:00
David E. O'Brien
69bb404192 Use C99-compatible asm statements. 2003-06-02 00:29:35 +00:00
Peter Wemm
cc71eb5e10 With the help of jhb, fix the ACPI_ACQUIRE_GLOBAL_LOCK() macros and
port to amd64 after repocopy.

Approved by: re (amd64/*)
2003-05-31 06:43:55 +00:00
Hiten Pandya
b77c32a07e Rename BUS_DMAMEM_NOSYNC to BUS_DMA_COHERENT.
The old name is confusing because it suggests to
the client that a bus_dmamap_sync() operation is not
necessary when the flag is specified, which is wrong.

The main purpose of this flag is to hint to the underlying
architecture that DMA memory should be mapped in a coherent
way; the architecture is free to ignore it.  But if the
architecture does support coherent mapping of memory, then
it makes bus_dmamap_sync() calls cheap.

This flag is the same as the one in NetBSD's Bus DMA.

Reviewed by: gibbs, scottl, des (implicitly)
Approved by: re@ (jhb)
2003-05-30 20:40:33 +00:00
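
A minimal usage sketch of the renamed flag, assuming a DMA tag was already created with bus_dma_tag_create() and leaving error handling to the caller; the helper name is invented. As the message stresses, the flag is only a hint, so the driver still issues bus_dmamap_sync() calls -- they merely become cheap when a coherent mapping is actually provided.

    #include <sys/param.h>
    #include <sys/bus.h>
    #include <machine/bus.h>

    static int
    alloc_coherent_dma_buf(bus_dma_tag_t tag, void **vaddrp, bus_dmamap_t *mapp)
    {
        /* Ask for a coherent mapping; the architecture may ignore the hint. */
        return (bus_dmamem_alloc(tag, vaddrp,
            BUS_DMA_NOWAIT | BUS_DMA_COHERENT, mapp));
    }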
Peter Wemm
5feb2148ba Initial port to amd64 after repocopy from i386. Note that the
disassembler has not been updated yet, and will do some very strange
things.  It does tracebacks (without function arguments due to regparm
calling conventions) if -fno-omit-frame-pointer is used (to come later).
This achieves basic functionality.

Approved by:	re (amd64/* blanket)
2003-05-30 01:01:07 +00:00
Scott Long
7e71df9339 Bring back bus_dmasync_op_t. It is now a typedef to an int, though the
BUS_DMASYNC_ definitions remain as before.  This does not change the ABI,
and reverts the API to be a bit more compatible and flexible.  This has
survived a full 'make universe'.

Approved by:	re (bmah)
2003-05-27 04:59:59 +00:00
Scott Long
c87d464f28 De-orbit bus_dmamem_alloc_size(). It's a hack and was never used anyway.
No need for it to pollute the 5.x API any further.

Approved by:	re (bmah)
2003-05-26 04:00:52 +00:00
Peter Wemm
3ebd9b48ce Stop profiled libc from exploding, matching gcc's generated code.
Approved by: re (amd64/* blanket)
2003-05-24 18:24:03 +00:00
Peter Wemm
d9cd1af4aa Typo fix. oops.
Submitted by:  jmallett
Approved by:   re (blanket amd64/*)
2003-05-23 06:36:46 +00:00
Peter Wemm
cbd667fa2f Update comments. Note that the kernel is at -1GB, not -2GB as erroneously
implied by the previous commit.  KVM is still only 1GB until
pmap_growkernel() learns about the extra page table level.

Approved by:  re (blanket)
2003-05-23 06:35:45 +00:00
Peter Wemm
f229f5cf85 As suggested by the gdb folks, pad the 'struct fpreg' to a full 512 bytes
to match the native fxsave/fxrstor object size, since that's apparently what
the Linux/NetBSD folks do.
2003-05-23 06:31:56 +00:00
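
A hedged sketch of the padding idea (the authoritative struct fpreg is in the machine reg.h header): fxsave/fxrstor operate on a 512-byte area, so the register structure is sized to match, and the assumption can be checked at compile time.

    #include <assert.h>
    #include <stdint.h>

    struct fpreg_sketch {              /* illustrative layout */
        uint64_t fpr_env[4];           /* control/status/tag words, instruction pointers */
        uint8_t  fpr_acc[8][16];       /* x87 st(0)-st(7) */
        uint8_t  fpr_xacc[16][16];     /* SSE xmm0-xmm15 */
        uint64_t fpr_spare[12];        /* pad out to the fxsave size */
    };

    static_assert(sizeof(struct fpreg_sketch) == 512,
        "must match the 512-byte fxsave/fxrstor area");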
Peter Wemm
3c9a3c9ca3 Major pmap rework to take advantage of the larger address space on amd64
systems.  Of note:
- Implement a direct mapped region using 2MB pages.  This eliminates the
  need for temporary mappings when getting ptes.  This supports up to
  512GB of physical memory for now.  This should be enough for a while.
- Implement a 4-tier page table system.  Most of the infrastructure is
  there for 128TB of userland virtual address space, but only 512GB is
  presently enabled due to a mystery bug somewhere.  The design of this
  was heavily inspired by the alpha pmap.c.
- The kernel is moved into the negative address space(!).
- The kernel has 2GB of KVM available.
- Provide a uma memory allocator to use the direct map region to take
  advantage of the 2MB TLBs.
- Fixed some assumptions in the bus_space macros about the ability
  to fit virtual addresses in an 'int'.

Notable missing things:
- pmap_growkernel() should be able to grow to 512GB of KVM by expanding
  downwards below kernbase.  The kernel must be at the top 2GB of the
  negative address space because of gcc code generation strategies.
- need to fix the >512GB user vm code.

Approved by:	re (blanket)
2003-05-23 05:04:54 +00:00
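
A sketch of how the direct-mapped region removes the need for temporary mappings when touching PTEs; the base constant here is invented for illustration, while the real kernel has its own direct-map window and conversion macros. With all physical memory mapped at a fixed kernel virtual offset (backed by 2MB pages), a physical address turns into a usable pointer by plain arithmetic.

    #include <stdint.h>

    #define DMAP_BASE_SKETCH 0xfffff80000000000UL   /* invented base address */

    /* Physical-to-virtual and back, by adding or subtracting the offset. */
    #define PHYS_TO_DMAP_SKETCH(pa) ((void *)(DMAP_BASE_SKETCH + (uintptr_t)(pa)))
    #define DMAP_TO_PHYS_SKETCH(va) ((uintptr_t)(va) - DMAP_BASE_SKETCH)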
Alexander Kabaev
980ded9a7d sys/sys/limits.h:
- Fix visibility test for LONG_BIT and WORD_BIT.  `#if defined(__FOO_VISIBLE)'
   is always wrong because __FOO_VISIBLE is always defined (to 0 for
   invisibility).

sys/<arch>/include/limits.h
sys/<arch>/include/_limits.h:

 - Style fixes.

Submitted by:	bde
Reviewed by:	bsdmike
Approved by:	re (scottl)
2003-05-19 20:29:07 +00:00
Alan Cox
4a0d6dfd2c Initialize logical_cpus_mask when the logical CPUs are enumerated in
the mptable.  (Previously, logical_cpus_mask was only initialized if
the hyperthreading fixup was executed.)

Approved by:	re (jhb)
Reviewed by:	ps
2003-05-15 05:12:24 +00:00
Peter Wemm
c0a54ff621 Collect the nastiness for preserving the kernel MSR_GSBASE around the
load_gs() calls into a single place that is less likely to go wrong.

Eliminate the per-process context switching of MSR_GSBASE, because it
should be constant for a single cpu.  Instead, save/restore it during
the loading of the new %gs selector for the new process.

Approved by:	re (amd64/* blanket)
2003-05-15 00:23:40 +00:00
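
A sketch of the "single place" the message describes; the wrapper name is invented, while rdmsr(), wrmsr(), load_gs(), intr_disable()/intr_restore() and MSR_GSBASE are existing amd64 kernel primitives. Loading %gs clobbers GS.base, so the kernel value is saved and restored around the selector load with interrupts disabled.

    #include <sys/types.h>
    #include <machine/cpufunc.h>
    #include <machine/specialreg.h>

    /* Load a new %gs selector without losing the kernel's MSR_GSBASE. */
    static __inline void
    load_gs_preserving_gsbase(u_int sel)
    {
        register_t saveflags;
        uint64_t gsbase;

        saveflags = intr_disable();   /* nothing may observe the clobbered base */
        gsbase = rdmsr(MSR_GSBASE);
        load_gs(sel);                 /* this wipes out GS.base */
        wrmsr(MSR_GSBASE, gsbase);
        intr_restore(saveflags);
    }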
Peter Wemm
be52ef1399 Use compile-time constants for things like PTmap[] etc. because they're
about to move outside of the +/- 2GB range.

Suggested by:	jake
Approved by:	re (amd64/* blanket)
2003-05-15 00:20:17 +00:00
Peter Wemm
d85631c4ac Add BASIC i386 binary support for the amd64 kernel. This is largely
stolen from the ia64/ia32 code (indeed there was a repocopy), but I've
redone the MD parts and added and fixed a few essential syscalls.  It
is sufficient to run i386 binaries like /bin/ls, /usr/bin/id (dynamic)
and p4.  The ia64 code has not implemented signal delivery, so I had
to do that.

Before you say it, yes, this does need to go in a common place.  But
we're in a freeze at the moment and I didn't want to risk breaking ia64.
I will sort this out after the freeze so that the common code is in a
common place.

On the AMD64 side, this required adding segment selector context switch
support and some other support infrastructure.  The %fs/%gs etc code
is hairy because loading %gs will clobber the kernel's current MSR_GSBASE
setting.  The segment selectors are not used by the kernel, so they're only
changed at context switch time or when changing modes.  This still needs
to be optimized.

Approved by:	re (amd64/* blanket)
2003-05-14 04:10:49 +00:00
Peter Wemm
0fe93e7480 For the page fault handler, save %cr2 in the outer trap handler so that
we do not have to run so long with interrupts disabled.  This involved
creating tf_addr in the trapframe.  Reorganize the trap stubs so that
they consistently reserve the stack space and initialize any missing
bits.

Approved by:	re (amd64 stuff)
2003-05-12 18:33:19 +00:00
Peter Wemm
0f6241620b Sync ucontext with reality. The struct trapframe changes need to be
reflected here.

Approved by:	re (blanket amd64/*)
2003-05-12 18:23:04 +00:00
Peter Wemm
e9b193dc33 AMD64 physical space is much larger than i386's, so de-i386 the bus_space and
bus_dma MD code for AMD64.  (And a trivial ifdef update in dev/kbd because
of this.)  More updates are needed here to take advantage of the 64-bit
instructions.

Approved by:	re (blanket amd64/*)
2003-05-12 02:44:37 +00:00
Peter Wemm
bf1e897425 Give a %fs and %gs to userland. Use swapgs to obtain the kernel %GS.base
value on entry and exit.  This isn't as easy as it sounds because when
we recursively trap or interrupt, we have to avoid duplicating the
swapgs instruction or we end up back with the userland %gs.  I implemented
this by testing TF_CS to see if we're coming from supervisor mode
already, and checking for a return to supervisor mode.  To avoid a race with
interrupts in the brief period after the handler begins executing and
before the swapgs, convert all trap gates to interrupt gates, and reenable
interrupts immediately after the swapgs.  I am not happy with this.
There are other possible ways to do this that should be investigated.
(eg: storing the GS.base MSR value in the trapframe)

Add some sysarch functions to let the userland code get to this.

Approved by:	re (blanket amd64/*)
2003-05-12 02:37:29 +00:00
Peter Wemm
eeee69d45c Make atdevbase a long for the KERNBASE > 4GB case.
Approved by: re (amd64/* blanket)
2003-05-11 22:53:43 +00:00
Peter Wemm
0fe0f2515b Provide a fake varargs implementation for lint's benefit. This way
it can see the intent of the va_* macros, even though it cannot work.

Approved by:	re (blanket amd64/*)
2003-05-10 00:55:15 +00:00
Peter Wemm
e1ef71de2b Remove _ARCH_INDIRECT ifdefs. They existed for lib/msun/* on i386, which
could use different versions of the math code depending on whether there
was real floating point hardware or math emulation.  Since the fpu is
part of the core specification on amd64, there is no need for this here.

Approved by:	re (blanket amd64/*)
2003-05-10 00:53:34 +00:00
Peter Wemm
2e4f687a1d bcopyb() isn't used in the amd64 kernel (it only exists for i386/pcvt).
Approved by:	re (blanket amd64/*)
2003-05-10 00:51:29 +00:00
Peter Wemm
395e65aa29 Include the MXCSR initial values, based on the AMD docs. This file
should really be renamed to fpu.h and npx.c to fpu.c, since it's part of
the core architecture on amd64 systems, not an ISA 'numeric processor
extension'.
2003-05-09 18:28:05 +00:00
Alexander Kabaev
0eda4c08a5 Style fixes.
Remove DBL_DIG, DBL_MIN, DBL_MAX and their FLT_ counterparts; they
have been marked for deprecation since at least SUSv1.
Only define ULLONG_MIN/MAX and LLONG_MAX if the long long type is
supported.
Restore a lost comment in MI _limits.h file and remove it from
sys/limits.h where it does not belong.
2003-05-04 22:13:04 +00:00
Peter Wemm
7f47668191 Slight reorg and added AMD64 support. A couple of the MODINFOMD_* values
that were added to sparc64 and later powerpc really should have been in
the MI area.  But changing that now with insufficient preparation will
just cause too much pain.

Move MD_FETCH() to the MI sys/linker.h file to avoid another two copies
of it.
2003-05-01 03:31:18 +00:00
Peter Wemm
afa8862328 Commit MD parts of a loosely functional AMD64 port. This is based on
a heavily stripped down FreeBSD/i386 (brutally stripped down actually) to
attempt to get a stable base to start from.  There is a lot missing still.
Worth noting:
- The kernel runs at 1GB in order to cheat with the pmap code.  pmap uses
  a variation of the PAE code in order to avoid having to worry about 4
  levels of page tables yet.
- It boots in 64 bit "long mode" with a tiny trampoline embedded in the
  i386 loader.  This simplifies locore.s greatly.
- There are still quite a few fragments of i386-specific code that have
  not been translated yet, and some that I cheated and wrote dumb C
  versions of (bcopy etc).
- It has both int 0x80 for syscalls (but using registers for argument
  passing, as is native on the amd64 ABI), and the 'syscall' instruction
  for syscalls.  int 0x80 preserves all registers; 'syscall' does not.
- I have tried to minimize looking at the NetBSD code, except in a couple
  of places (eg: to find which register they use to replace the trashed
  %rcx register in the syscall instruction).  As a result, there is not a
  lot of similarity.  I did look at NetBSD a few times while debugging to
  get some ideas about what I might have done wrong in my first attempt.
2003-05-01 01:05:25 +00:00
Peter Wemm
1e57e9eba3 Repocopy from x86_64/... to amd64/...
Rename visible x86_64 references to amd64.
Kill MID_MACHINE; it's a.out-specific, and the only platform that supports it
is i386.  All of the other platforms should remove it too.
2003-04-30 22:51:59 +00:00
Alexander Kabaev
104a9b7e3e Deprecate machine/limits.h in favor of new sys/limits.h.
Change all in-tree consumers to include <sys/limits.h>

Discussed on:	standards@
Partially submitted by: Craig Rodrigues <rodrigc@attbi.com>
2003-04-29 13:36:06 +00:00
Jake Burkholder
14ce5bd49b Use inlines for loading and storing page table entries. Use cmpxchg8b for
the PAE case to ensure idempotent 64 bit loads and stores.

Sponsored by:	DARPA, Network Associates Laboratories
2003-04-28 20:35:36 +00:00
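
A sketch of the cmpxchg8b technique for an atomic 64-bit PTE load on i386/PAE, modeled on the general approach rather than the committed inlines; the function name is invented. cmpxchg8b compares %edx:%eax with the memory operand and, on a mismatch, loads the memory value into %edx:%eax, so seeding %edx:%eax from %ecx:%ebx and issuing one locked cmpxchg8b yields a consistent 64-bit read.

    #include <stdint.h>

    /* i386-only sketch of an indivisible 64-bit load. */
    static inline uint64_t
    pte_load_sketch(volatile uint64_t *ptep)
    {
        uint64_t pte;

        __asm__ __volatile__(
            "movl %%ebx,%%eax\n\t"    /* seed edx:eax from ecx:ebx, so a matching */
            "movl %%ecx,%%edx\n\t"    /* compare stores the same value back       */
            "lock; cmpxchg8b %1"
            : "=&A" (pte), "+m" (*ptep)
            : : "memory", "cc");
        return (pte);
    }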
Alexander Kabaev
6fd839f9c7 Add a new sys/limits.h file which in turn depends on machine/_limits.h
to get actual constant values. This is in preparation for machine/limits.h
retirement.

Discussed on:	standards@
Submitted by:	Craig Rodrigues <rodrigc@attbi.com>  (*)
Modified by:	kan
2003-04-23 21:41:59 +00:00
David Xu
d1fc2022c3 Back out my last commit.
Requested by: bde
2003-04-20 01:35:21 +00:00
David Xu
2bdf11638e Don't return garbage in the high 16 bits. 2003-04-19 02:40:39 +00:00
Maxime Henrion
141bacb048 Change the operation parameter of bus_dmamap_sync() from an
enum to an int and redefine the BUS_DMASYNC_* constants as
flags.  This allows us to specify several operations in one
call to bus_dmamap_sync() as in NetBSD.
2003-04-10 23:03:33 +00:00
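
A brief usage sketch of what the flag-style constants allow, assuming the tag and map were set up elsewhere: both pre-transfer operations go into a single call instead of two.

    #include <sys/param.h>
    #include <sys/bus.h>
    #include <machine/bus.h>

    static void
    sync_before_transfer(bus_dma_tag_t tag, bus_dmamap_t map)
    {
        /* Several BUS_DMASYNC_ operations OR'ed together in one call, as in NetBSD. */
        bus_dmamap_sync(tag, map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
    }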
Jake Burkholder
ac00210525 Remove invalid cast to vm_offset_t to avoid truncating a physical address
when doing pmap_kextract on a 2MB page.

Spotted by:	peter
Sponsored by:	DARPA, Network Associates Laboratories
2003-04-08 18:22:41 +00:00
Jake Burkholder
46ea68dd10 Better fix for the previous commit which still allows the 4 megs of kva at
the top of the address space to be reclaimed.  The problem is that with
the APTD gone, the mappable kernel address space runs right to the end of
the 32-bit address space.  At most this is 0x100000000, which can't be
represented in 32 bits, so we have to use ptd entry n-1 and pte offset
n-1, instead of ptd entry n and pte offset 0.  There's still 1 page we
can't use, but we gain just under 4 megs of kva (8 megs with PAE).

Sponsored by:	DARPA, Network Associates Laboratories
2003-04-07 14:27:19 +00:00
Dag-Erling Smørgrav
9f45b2da8f Define ovbcopy() as a macro which expands to the equivalent bcopy() call,
to take care of the KAME IPv6 code which needs ovbcopy() because NetBSD's
bcopy() doesn't handle overlap like ours.

Remove all implementations of ovbcopy().

Previously, bzero was a function pointer on i386, to save a jmp to
bzero_vector.  Get rid of this microoptimization as it only confuses
things, adds machine-dependent code to an MD header, and doesn't really
save all that much.

This commit does not add my pagezero() / pagecopy() code.
2003-04-04 17:29:55 +00:00
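
A minimal sketch of the macro described above, assuming bcopy()'s (src, dst, len) argument order and its documented handling of overlapping regions; the committed definition may differ cosmetically.

    #include <strings.h>    /* bcopy(); the kernel gets it from <sys/systm.h> */

    /* ovbcopy ("overlapping bcopy") simply becomes bcopy on FreeBSD. */
    #define ovbcopy_sketch(f, t, l) bcopy((f), (t), (l))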
Jake Burkholder
d1d03c2b72 Bandaid fix for previous commit while I figure out why it broke. This
caused crashes early in boot on i386 UP machines.

Reported by:	phk
Pointy hat to:	jake
2003-04-04 10:09:44 +00:00
Jake Burkholder
163529c2b3 - Removed the APTD and its associated macros; they are no longer used.
BANG BANG BANG etc.

Sponsored by:	DARPA, Network Associates Laboratories
2003-04-03 23:44:35 +00:00
Peter Wemm
cc66ebe2a9 Commit a partial lazy thread switch mechanism for i386. It isn't as lazy
as it could be and could do with some more cleanup.  Currently it's under
options LAZY_SWITCH.  What this does is avoid %cr3 reloads for short
context switches that do not involve another user process.  I.e., we can
take an interrupt, switch to a kthread and return to the user without
explicitly flushing the tlb.  However, this isn't as exciting as it could
be, the interrupt overhead is still high and too much blocks on Giant
still.  There are some debug sysctls, for stats and for an on/off switch.

The main problem with doing this has been "what if the process that you're
running on exits while we're borrowing its address space?" - in this case
we use an IPI to give it a kick when we're about to reclaim the pmap.

It's not compiled in unless you add the LAZY_SWITCH option.  I want to fix a
few more things and get some more feedback before turning it on by default.

This is NOT a replacement for Bosko's lazy interrupt stuff.  This was more
meant for the kthread case, while his was for interrupts.  Mine helps a
little for interrupts, but his helps a lot more.

The stats are enabled with options SWTCH_OPTIM_STATS - this has been a
pseudo-option for years, I just added a bunch of stuff to it.

One non-trivial change was to select a new thread before calling
cpu_switch() in the first place.  This allows us to catch the silly
case of doing a cpu_switch() to the current process.  This happens
uncomfortably often.  This simplifies a bit of the asm code in cpu_switch
(no longer have to call choosethread() in the middle).  This has been
implemented on i386 and (thanks to jake) sparc64.  The others will come
soon.  This is actually separate from the lazy switch stuff.

Glanced at by:  jake, jhb
2003-04-02 23:53:30 +00:00
Jake Burkholder
7ab9b220d9 - Add support for PAE and more than 4 gigs of RAM on x86, dependent on the
kernel option 'options PAE'.  This will only work with device drivers which
  either use busdma or are able to handle 64-bit physical addresses.

Thanks to Lanny Baron from FreeBSD Systems for the loan of a test machine
with 6 gigs of ram.

Sponsored by:	DARPA, Network Associates Laboratories, FreeBSD Systems
2003-03-30 05:24:52 +00:00
Jake Burkholder
de54353fb8 - Remove invalid casts.
Sponsored by:	DARPA, Network Associates Laboratories
2003-03-30 01:44:16 +00:00
Jake Burkholder
aea57872f0 - Convert all uses of pmap_pte and get_ptbase to pmap_pte_quick. When
accessing an alternate address space this causes 1 page table page at
  a time to be mapped in, rather than using the recursive mapping technique
  to map in an entire alternate address space.  The recursive mapping
  technique changes large portions of the address space and requires global
  tlb flushes, which seem to cause problems when PAE is enabled.  This will
  also allow IPIs to be avoided when mapping in new page table pages using
  the same technique as is used for pmap_copy_page and pmap_zero_page.

Sponsored by:	DARPA, Network Associates Laboratories
2003-03-30 01:16:19 +00:00
Paul Saab
87437b0b89 Nuke options HTT in favor of the machdep.hlt_logical_cpus tunable/sysctl.
This keeps the logical CPUs halted in the idle loop.  By default
the logical CPUs are halted at startup.  It is also possible to
halt any CPU in the idle loop now using machdep.hlt_cpus.

Examples of how to use this:
machdep.hlt_cpus=1	halt cpu0
machdep.hlt_cpus=2	halt cpu1
machdep.hlt_cpus=4	halt cpu2
machdep.hlt_cpus=3	halt cpu0,cpu1

Reviewed by:	jhb, peter
2003-03-26 19:49:34 +00:00
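
A worked example of the bitmask arithmetic shown in the table above (a user-space illustration only; the kernel consumes the value as a tunable/sysctl): each bit position names a CPU, so halting cpu0 and cpu1 means setting bits 0 and 1, which is the value 3.

    #include <stdio.h>

    int
    main(void)
    {
        unsigned int hlt_cpus;

        hlt_cpus = (1u << 0) | (1u << 1);           /* cpu0 and cpu1 */
        printf("machdep.hlt_cpus=%u\n", hlt_cpus);  /* prints 3 */
        return (0);
    }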