Commit Graph

3683 Commits

Author SHA1 Message Date
David Xu
20a2d71332 Rename thread_siginfo to cpu_thread_siginfo.
Suggested by: jhb
2003-07-15 00:11:04 +00:00
Mark Murray
c7b132c974 Protect lint(1) from a #error. 2003-07-10 18:05:02 +00:00
Peter Wemm
e95babf3a8 unifdef -DLAZY_SWITCH and start to tidy up the associated glue. 2003-07-10 01:02:59 +00:00
Peter Wemm
bf8ca114e2 Fix the VADDR() macros to use either KVADDR() or UVADDR(), depending
on the implied sign extension.  The single unified VADDR() macro was
not able to avoid sign extending the VM_MAXUSER_ADDRESS/USRSTACK values.
Be explicit about UVADDR() (positive address space) and KVADDR()
(kernel negative address space) to make mistakes show up more
spectacularly.

Increase user VM space from 1/2TB (512GB) to 128TB.
2003-07-09 23:04:23 +00:00
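[Editor's sketch] For illustration, a minimal sketch of what the split macros could look like, assuming the usual amd64 shift constants (PML4SHIFT, PDPSHIFT, PDRSHIFT, PAGE_SHIFT); this approximates the idea rather than reproducing the committed diff. KVADDR() forces the sign-extended upper bits on, while UVADDR() leaves them clear:

/* Kernel (negative, sign-extended) virtual address from table indices. */
#define	KVADDR(l4, l3, l2, l1) ( \
	((unsigned long)-1 << 47) | \
	((unsigned long)(l4) << PML4SHIFT) | \
	((unsigned long)(l3) << PDPSHIFT) | \
	((unsigned long)(l2) << PDRSHIFT) | \
	((unsigned long)(l1) << PAGE_SHIFT))

/* User (positive) virtual address; no sign extension can creep in. */
#define	UVADDR(l4, l3, l2, l1) ( \
	((unsigned long)(l4) << PML4SHIFT) | \
	((unsigned long)(l3) << PDPSHIFT) | \
	((unsigned long)(l2) << PDRSHIFT) | \
	((unsigned long)(l1) << PAGE_SHIFT))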
Peter Wemm
6486c09935 Fix up bogus index/offset/mask calculations in the allocpte and the
corresponding release code.  This was preventing the use of more than
1/2TB of user VM.  I also spent a week staring at this code only to
eventually find that I'd mistakenly typed a P as an R.
2003-07-09 22:59:45 +00:00
Peter Wemm
4afd44c16a Turn the 2MB page mappings that cover the kernel text+data+bss area back
on now that pmap_pte() can handle it.  I never actually ran into anything
that broke that I know of, but this was turned off as a precaution.
2003-07-09 22:55:00 +00:00
Peter Wemm
436e1f203f Have pmap_pte() on a 2MB mapped address return the 2MB pde itself
rather than a non-existing pte.  There is code elsewhere in i386/amd64
pmap that neglects to handle the large page cases because it knows that
it will see PG_PS in the returned "pte".
2003-07-09 22:53:45 +00:00
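[Editor's sketch] An illustrative caller-side fragment (only pmap_pte() and the PG_* bits come from the commit; the rest is assumed): once pmap_pte() returns the 2MB pde itself, code walking mappings can detect a superpage by testing PG_PS on whatever it gets back:

pt_entry_t *pte;

pte = pmap_pte(pmap, va);
if (pte != NULL && (*pte & PG_V) != 0) {
	if (*pte & PG_PS) {
		/* 'pte' is really the 2MB pde; treat as a superpage. */
	} else {
		/* Ordinary 4KB mapping. */
	}
}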
Alan Cox
90a7c7b671 In pmap_object_init_pt(), the pmap_invalidate_all() should be performed on
the caller-provided pmap, not the kernel_pmap.  Using the kernel_pmap
results in an unnecessary IPI for TLB shootdown on SMPs.

Reviewed by:	jake, peter
2003-07-08 19:40:35 +00:00

Alan Cox
1f78f902a8 Background: pmap_object_init_pt() premaps the pages of an object in
order to avoid the overhead of later page faults.  In general, it
implements two cases: one for vnode-backed objects and one for
device-backed objects.  Only the device-backed case is really
machine-dependent, belonging in the pmap.

This commit moves the vnode-backed case into the (relatively) new
function vm_map_pmap_enter().  On amd64 and i386, this commit only
amounts to code rearrangement.  On alpha and ia64, the new machine
independent (MI) implementation of the vnode case is smaller and more
efficient than their pmap-based implementations.  (The MI
implementation takes advantage of the fact that objects in -CURRENT
are ordered collections of pages.)  On sparc64, pmap_object_init_pt()
hadn't (yet) been implemented.
2003-07-03 20:18:02 +00:00
Maxime Henrion
331e012396 Sync more things with other backends. 2003-07-01 19:16:48 +00:00
Maxime Henrion
4813f72a9b Honor the boundary of the busdma tag when allocating bounce pages.
This was fixed in revision 1.5 of alpha/alpha/busdma_machdep.c and
was never fixed in other busdma backends using bounce pages.
2003-07-01 16:54:54 +00:00
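[Editor's sketch] A hedged sketch of the idea (the field names follow the common busdma_machdep.c layout and are assumptions here): the tag's boundary is passed through to the contiguous allocator rather than a hard-coded 0, so a bounce page can never straddle a boundary the device cannot cross:

bpage->vaddr = (vm_offset_t)contigmalloc(PAGE_SIZE, M_DEVBUF, M_NOWAIT,
    0ul, dmat->lowaddr, PAGE_SIZE, dmat->boundary);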
Scott Long
f6b1c44d1f Mega busdma API commit.
Add two new arguments to bus_dma_tag_create(): lockfunc and lockfuncarg.
Lockfunc allows a driver to provide a function for managing its locking
semantics while using busdma.  At the moment, this is used for the
asynchronous busdma_swi and callback mechanism.  Two lockfunc implementations
are provided: busdma_lock_mutex() performs standard mutex operations on the
mutex that is specified via lockfuncarg.  dflt_lock() is a panic
implementation and is used by default when NULL, NULL are passed to
bus_dma_tag_create().  The only time that NULL, NULL should ever be used is
when the driver ensures that bus_dmamap_load() will not be deferred.
Drivers that do not provide their own locking can pass
busdma_lock_mutex and &Giant as args in order to preserve the former behaviour.

sparc64 and powerpc do not provide real busdma_swi functions, so this is
largely a no-op on those platforms.  The busdma_swi on ia64 is not properly
locked yet, so warnings will be emitted on this platform when busdma
callback deferrals happen.

If anyone gets panics or warnings from dflt_lock() being called, please
let me know right away.

Reviewed by:	tmm, gibbs
2003-07-01 15:52:06 +00:00
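[Editor's sketch] For drivers picking up the new API, a hedged usage sketch (the sizes, limits, and the sc_dmat name are placeholders): passing busdma_lock_mutex and &Giant as the two new trailing arguments preserves the former behaviour for a driver without its own locking:

bus_dma_tag_t sc_dmat;
int error;

error = bus_dma_tag_create(NULL,	/* parent */
    1, 0,				/* alignment, boundary */
    BUS_SPACE_MAXADDR_32BIT,		/* lowaddr */
    BUS_SPACE_MAXADDR,			/* highaddr */
    NULL, NULL,				/* filter, filterarg */
    MAXBSIZE, 1,			/* maxsize, nsegments */
    MAXBSIZE,				/* maxsegsz */
    0,					/* flags */
    busdma_lock_mutex, &Giant,		/* lockfunc, lockfuncarg */
    &sc_dmat);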
Alan Cox
dca96f1adc - Export pmap_enter_quick() to the MI VM. This will permit the
   implementation of a largely MI pmap_object_init_pt() for vnode-backed
   objects.  pmap_enter_quick() is implemented via pmap_enter() on sparc64
   and powerpc.
 - Correct a mismatch between pmap_object_init_pt()'s prototype and its
   various implementations.  (I plan to keep pmap_object_init_pt() as
   the MD hook for device-backed objects on i386 and amd64.)
 - Correct an error in ia64's pmap_enter_quick() and adjust its interface
   to match the other versions.

Discussed with: marcel
2003-06-29 21:20:04 +00:00
Jeff Roberson
ab875ef896 - Construct a cpu topology map for Hyper Threading systems so that ULE may
take advantage of them.
2003-06-28 22:07:42 +00:00
David Xu
b8f480ab94 Add a machine-dependent function, thread_siginfo.  The SA signal code
will use the function to construct a siginfo structure and export
the result to userland.

Reviewed by: julian
2003-06-28 06:34:08 +00:00
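[Editor's sketch] A loose sketch of what such a machine-dependent helper might look like on amd64; the signature and the trapframe field used are assumptions based on the message, not the committed code:

void
thread_siginfo(int sig, u_long code, siginfo_t *si)
{
	struct thread *td = curthread;

	bzero(si, sizeof(*si));
	si->si_signo = sig;
	si->si_code = code;
	/* Illustrative choice: report the interrupted program counter. */
	si->si_addr = (void *)td->td_frame->tf_rip;
}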
Scott Long
7f95801188 Catch amd64 up with the pending busdma async callback locking. Though this
mechanism might change in the near future, it's best to keep everything in
sync right now.

Reminded by:	peter
2003-06-28 06:07:06 +00:00
Peter Wemm
b6a5f89b4d Turn ips back on. 2003-06-27 23:11:22 +00:00
Peter Wemm
1e5d8b3b66 Oops, I only added a comment about why ips doesn't compile. Actually
comment it out for real.
2003-06-26 04:01:59 +00:00
Peter Wemm
ba1cabf4b9 Sync with i386 - add everything that compiles.  There are a few drivers
that are trivially easy to fix (e.g. ips) for which I've not committed fixes.
2003-06-26 03:49:54 +00:00
Peter Wemm
2d29639ebb Add back in the ability for pmap_mapdev() to use KVM if the region
being requested is outside the range of the direct map region, e.g.
for PCI windows.  While here, increase the minimum size of the direct
map region to be 4GB instead of 1GB.
2003-06-26 01:04:31 +00:00
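[Editor's sketch] A hedged sketch of the resulting behaviour ('dmaplimit' and the KVA fallback helper are illustrative, not from the commit): regions below the top of the direct map come straight out of it, while anything above it, such as a PCI window, is mapped into dedicated KVA:

void *
pmap_mapdev(vm_paddr_t pa, vm_size_t size)
{
	if (pa + size <= dmaplimit)
		return ((void *)PHYS_TO_DMAP(pa));
	/* Outside the direct map (e.g. PCI windows): fall back to KVA. */
	return (pmap_mapdev_kva(pa, size));	/* hypothetical helper */
}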
Alan Cox
0183359659 MFi386
Add vm object locking to pmap_object_init_pt().
2003-06-23 06:10:52 +00:00
Hidetoshi Shimokawa
e07324646e Move KERNBASE to -2GB.
Currently, we cannot increase KVA beyond 2GB.
2003-06-22 13:02:45 +00:00
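[Editor's note] Concretely, and assuming the usual amd64 encoding of a negative kernel base, -2GB corresponds to:

#define	KERNBASE	0xffffffff80000000UL	/* -2GB */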
Hidetoshi Shimokawa
bfcd2ec739 - Allow access to direct mapped region via /dev/kmem. This makes
'netstat -r' work.
- Use direct map for /dev/mem.
2003-06-22 12:59:43 +00:00
Hidetoshi Shimokawa
c1c1cc9c19 - Allocate a new PD table if the kernel grows beyond the 1GB boundary.
- Use the direct map in pmap_mapdev().

Reviewed by: peter
2003-06-22 12:55:20 +00:00
Hidetoshi Shimokawa
e14720d614 Use direct map in pmap_map().
This saves a large amount of KVA for vm_pages, and you no longer need to
increase NKPT for large physical memory.

Suggested by: dfr
2003-06-20 14:09:33 +00:00
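[Editor's sketch] A minimal sketch of the change's effect, assuming the direct map covers the physical range being asked for: pmap_map() can simply hand back a direct-map address instead of consuming KVA for the vm_page array:

vm_offset_t
pmap_map(vm_offset_t *virt, vm_paddr_t start, vm_paddr_t end, int prot)
{
	return (PHYS_TO_DMAP(start));
}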
Hidetoshi Shimokawa
d25ac2fa68 Fix direct map page table for 2GB+ physical memory.
You may still need to increase NKPT for larger memory.
I have successfully booted an 8GB system with NKPT=256.
2003-06-19 12:14:37 +00:00
Alan Cox
40ebf3e43a Fix a performance bug in all of the various implementations of
uma_small_alloc(): They always zeroed the page regardless of what the
caller requested.
2003-06-18 02:57:38 +00:00
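[Editor's sketch] A hedged sketch of the corrected behaviour (allocation details simplified; this is not the committed diff): the page is only zeroed when the caller actually requested M_ZERO, and even then only if the allocator did not already hand back a pre-zeroed page:

void *
uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags, int wait)
{
	vm_page_t m;
	void *va;

	*flags = UMA_SLAB_PRIV;
	m = vm_page_alloc(NULL, 0,
	    VM_ALLOC_SYSTEM | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED);
	if (m == NULL)
		return (NULL);
	va = (void *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m));
	if ((wait & M_ZERO) && (m->flags & PG_ZERO) == 0)
		pagezero(va);
	return (va);
}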
David Xu
0e2a4d3aeb Rename P_THREADED to P_SA. P_SA means a process is using scheduler
activations.
2003-06-15 00:31:24 +00:00
Alan Cox
49a2507bd1 Migrate the thread stack management functions from the machine-dependent
to the machine-independent parts of the VM.  At the same time, this
introduces vm object locking for the non-i386 platforms.

Two details:

1. KSTACK_GUARD has been removed in favor of KSTACK_GUARD_PAGES.  The
different machine-dependent implementations used various combinations
of KSTACK_GUARD and KSTACK_GUARD_PAGES.  To disable the guard page, set
KSTACK_GUARD_PAGES to 0.

2. Remove the (unnecessary) clearing of PG_ZERO in vm_thread_new.  In
5.x (but not 4.x), PG_ZERO can only be set if VM_ALLOC_ZERO is passed
to vm_page_alloc() or vm_page_grab().
2003-06-14 23:23:55 +00:00
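[Editor's sketch] For point 1, a small illustration of the resulting convention (the default value shown is an assumption): a single page count controls the guard, and a kernel configured with it set to 0 gets no guard page at all:

#ifndef KSTACK_GUARD_PAGES
#define	KSTACK_GUARD_PAGES	1	/* assumed default; 0 disables the guard */
#endif

The MI stack code can then reserve KSTACK_PAGES + KSTACK_GUARD_PAGES worth of KVA and leave the guard pages unmapped, rather than each pmap doing its own variant.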
Alan Cox
89f4fca265 Move the *_new_altkstack() and *_dispose_altkstack() functions out of the
various pmap implementations into the machine-independent vm.  They were
all identical.
2003-06-14 06:20:25 +00:00
Peter Wemm
77e2a274d0 GC unused cpu_wait() function 2003-06-11 05:20:33 +00:00
John Baldwin
6b9691f103 - Use IDTVEC() to declare IPI handlers since they are also IDT vectors.
- Make handlers for IPI's used by SMP kernels #ifdef SMP.
2003-06-06 17:45:25 +00:00
John Baldwin
e59ae32f18 - Document the thermal and performance counter LVT entries in the local
APIC.
- Add a lvt_thermal member to the LAPIC struct.
- Add constants for the SMI and INIT LVT delivery modes.
2003-06-06 17:22:15 +00:00
Marcel Moolenaar
c2e4eb969f Change the second (and last) argument of cpu_set_upcall(). Previously
we were passing in a void* representing the PCB of the parent thread.
Now we pass a pointer to the parent thread itself.
The prime reason for this change is to allow cpu_set_upcall() to copy
(parts of) the trapframe instead of having it done in MI code in each
caller of cpu_set_upcall(). Copying the trapframe cannot always be
done with a simple bcopy(), or may not always be optimal that way.  On
ia64 specifically the trapframe contains information that is specific
to an entry into the kernel and can only be used by the corresponding
exit from the kernel. A trapframe copied verbatim from another frame
is in most cases useless without some additional normalization.

Note that this change removes the assignment to td->td_frame in some
implementations of cpu_set_upcall(). The assignment is redundant.
A previous call to cpu_thread_setup() already did the exact same
assignment. An added benefit of removing the redundant assignment is
that we can now change td_pcb without nasty side-effects.

This change officially marks ia64 as ready for 1:1 threading.

Not tested on: amd64, powerpc
Compile & boot tested on: alpha, sparc64
Functionally tested on: i386, ia64
2003-06-04 22:46:27 +00:00
Peter Wemm
7fc03ef474 Fix ALIGNED_POINTER(). sizeof((u_int32_t)) is not legal C. 2003-06-04 02:15:13 +00:00
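[Editor's note] One plausible corrected form (the exact committed macro may differ): sizeof applied to a type name takes exactly one pair of parentheses, so sizeof((u_int32_t)) makes the operand look like an expression and fails to compile; parameterizing on the type also avoids hard-coding u_int32_t:

#define	ALIGNED_POINTER(p, t)	((((u_long)(p)) & (sizeof(t) - 1)) == 0)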
Peter Wemm
babc58fd74 Fix restarted syscalls. When we rewind %rip, we also need to restore
all the argument registers etc. since we have almost certainly trashed
them by now.  Take particular care of %r10 since it held the original value
of %rcx (which we saved in tf_rcx on entry and doreti doesn't know this).
2003-06-02 21:56:08 +00:00
Peter Wemm
c35518b4ed Make this more compatible with libc_r.  Make the internal types for storing
registers an array of longs rather than ints.
2003-06-02 21:49:35 +00:00
David E. O'Brien
006124d811 Use __FBSDID(). 2003-06-02 16:32:55 +00:00
David E. O'Brien
9676a785e7 Use __FBSDID(). 2003-06-02 06:43:15 +00:00
Peter Wemm
193b147c05 MFi386: i386/include/asm.h rev 1.11: Do not abuse ##. 2003-06-02 05:59:35 +00:00
David E. O'Brien
69bb404192 Use C99-compatible asm statements. 2003-06-02 00:29:35 +00:00
David E. O'Brien
713c939103 Sync with i386/GENERIC ordering. 2003-06-01 20:26:38 +00:00
Peter Wemm
395aac85f8 MFi386: rev 1.56: remove break after return 2003-05-31 22:02:11 +00:00
Peter Wemm
0c5b3efcb0 MFi386: rev 1.23: use gdb_strlen()/gdb_strcpy() directly. 2003-05-31 22:00:57 +00:00
Peter Wemm
fbbfc4c335 MFi386: rev 1.50: remove unused variable 2003-05-31 21:58:55 +00:00
Poul-Henning Kamp
618b80ddcf Avoid unbalancing the { } count in the source file with #ifdef by
putting the opening { after the #ifdef ... #endif sequence.

Found by:       FlexeLint
2003-05-31 20:25:53 +00:00
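[Editor's sketch] A hypothetical before/after of the pattern being described (SOME_OPTION, cond, and do_work() are made-up names). Before, the opening brace lives inside one #ifdef arm, so tools that count braces over the raw source see them unbalanced; after, the brace follows the whole #ifdef ... #endif sequence and every variant stays balanced:

/* Before: */
#ifdef SOME_OPTION
	if (cond) {
#else
	{
#endif
		do_work();
	}

/* After: */
#ifdef SOME_OPTION
	if (cond)
#endif
	{
		do_work();
	}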
Peter Wemm
4af5a3de60 Add acpi to the build. Remove the hack from machdep.c that lies to the
loader to shut it up.
2003-05-31 07:00:08 +00:00
Peter Wemm
b043c80645 Have hammer_time() return the proc0 stack location, and have locore
switch to it before calling mi_startup().  The bootstack is WAY too small
for running acpica during probe/attach.  While here, pass modulep/physfree
to the startup routine, rather than writing to the global variables in
locore.S.

Approved by:  re (amd64/*)
2003-05-31 06:54:29 +00:00
Peter Wemm
5681a6f60d Regenerate. 2003-05-31 06:51:04 +00:00
Peter Wemm
1f5b79bc16 Make this compile with WITNESS enabled. It wants the syscall names. 2003-05-31 06:49:53 +00:00