Commit Graph

279 Commits

Author SHA1 Message Date
Alan Cox
566526a957 Migrate pmap_prefault() into the machine-independent virtual memory layer.
A small helper function pmap_is_prefaultable() is added.  This function
encapsulates the few lines of pmap_prefault() that actually vary from
machine to machine.  Note: pmap_is_prefaultable() and pmap_mincore() have
much in common.  Going forward, it's worth considering their merger.
2003-10-03 22:46:53 +00:00
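
A minimal sketch of the split described above, with stand-in types rather
than the tree's headers; the caller shown is illustrative, not the
committed fault code:

    /* Stand-in types; in the tree these come from the VM/pmap headers. */
    typedef unsigned long vm_offset_t;
    typedef struct pmap *pmap_t;
    typedef int boolean_t;

    /* MD hook: is it safe and worthwhile to pre-enter a mapping at va? */
    boolean_t pmap_is_prefaultable(pmap_t pmap, vm_offset_t va);

    /* MI prefault path (simplified): skip addresses the pmap rejects. */
    static void
    prefault_one(pmap_t pmap, vm_offset_t va)
    {
            if (!pmap_is_prefaultable(pmap, va))
                    return;
            /* ... look up the resident page and enter it cheaply ... */
    }
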
Peter Grehan
02b63ceaff DELAY must be a routine, not a macro definition. 2003-09-26 09:02:24 +00:00
Peter Grehan
84792e72d6 Soften assert in pmap_remove_all.
Introduce pmap_extract_and_hold.

Stolen from: sparc64
2003-09-22 11:59:05 +00:00
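
A sketch of the interface introduced above, again with stand-in types:

    /* Stand-in types. */
    typedef unsigned long vm_offset_t;
    typedef unsigned char vm_prot_t;
    typedef struct pmap *pmap_t;
    typedef struct vm_page *vm_page_t;

    /*
     * Like pmap_extract(), but also takes a hold on the page mapped at va,
     * provided the mapping permits 'prot' access; returns NULL otherwise.
     * The hold keeps the page from being freed while the caller uses it.
     */
    vm_page_t pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot);
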
Alan Cox
411d10a600 Migrate the sf_buf allocator that is used by sendfile(2) and zero-copy
sockets into machine-dependent files.  The rationale for this
migration is illustrated by the modified amd64 allocator.  It uses the
amd64's direct map to avoid ephemeral mappings in the kernel's
address space.  On an SMP, the ephemeral mappings result in an IPI
for TLB shootdown for each transmitted page.  Yuck.

Maintainers of other 64-bit platforms with direct maps should be able
to use the amd64 allocator as a reference implementation.
2003-08-29 20:04:10 +00:00
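
A rough sketch of the direct-map idea the amd64 allocator relies on; the
names below are stand-ins (amd64 itself uses PHYS_TO_DMAP() and
VM_PAGE_TO_PHYS()):

    typedef unsigned long vm_offset_t;
    typedef unsigned long vm_paddr_t;
    struct vm_page;

    vm_paddr_t vm_page_to_phys(struct vm_page *m); /* stand-in for VM_PAGE_TO_PHYS() */
    extern vm_offset_t dmap_base;                  /* base of the direct map */

    /*
     * With a direct map, every physical page already has a permanent kernel
     * virtual address, so sendfile(2) needs no ephemeral per-page mapping
     * and therefore no TLB-shootdown IPIs on SMP.
     */
    static vm_offset_t
    sf_buf_kva_sketch(struct vm_page *m)
    {
            return (dmap_base + vm_page_to_phys(m));
    }
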
Marcel Moolenaar
710338e94f In vm_thread_swap{in|out}(), remove the alpha-specific conditional
compilation and replace it with a call to cpu_thread_swap{in|out}().
This allows us to add similar code on ia64 without cluttering the
code even more.
2003-08-16 23:15:15 +00:00
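
The shape of the new hooks, with a stand-in thread type; the MI swap code
calls these instead of carrying per-architecture #ifdef blocks:

    struct thread;

    /* MD fixups before a thread's kernel stack is swapped out ... */
    void cpu_thread_swapout(struct thread *td);
    /* ... and after it is swapped back in. */
    void cpu_thread_swapin(struct thread *td);
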
Peter Grehan
eac100658a Update powerpc to use the (old thread,new thread) calling convention
for cpu_throw() and cpu_switch().
2003-08-14 03:56:24 +00:00
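
The calling convention referred to above, with a stand-in thread type:

    struct thread;

    /* Save context into 'old', then resume 'new'. */
    void cpu_switch(struct thread *old, struct thread *new);
    /* Resume 'new' without saving 'old' (e.g. when a thread exits). */
    void cpu_throw(struct thread *old, struct thread *new);
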
Alan Cox
e53f32ace5 Use kmem_alloc_nofault() rather than kmem_alloc_pageable() in pmap_mapdev().
See revision 1.140 of kern/sys_pipe.c for a detailed rationale.

Submitted by:	tegge
2003-08-02 19:26:09 +00:00
Bosko Milekic
b053bc8407 Make sure that when the PV ENTRY zone is created in pmap, it's
created not only with UMA_ZONE_VM but also with UMA_ZONE_NOFREE.  In
the i386 case in particular, the pmap code would hook a special
page allocation routine that allocated from kernel_map and not kmem_map,
and so when/if the pageout daemon drained the zones, it could actually
push out slabs from the PV ENTRY zone but call UMA's default page_free,
which resulted in pages allocated from kernel_map being freed to
kmem_map; bad.  kmem_free() ignores the return value of the
vm_map_delete and just returns.  I'm not sure what the exact
repercussions could be, but it doesn't look good.

In the PAE case on i386, we also set up a zone in pmap, so be
conservative for now and make that zone also ZONE_NOFREE and
ZONE_VM.  Do this for the pmap zones for the other archs too,
although in some cases it may not be entirely necessary.  We'd
rather be safe than sorry at this point.

Perhaps all UMA_ZONE_VM zones should also be UMA_ZONE_NOFREE
by default?

May fix some of silby's crashes on the PV ENTRY zone.
2003-07-31 03:39:51 +00:00
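
For reference, a hedged sketch of the kind of zone creation this describes
(kernel context assumed; the variable and struct names follow the i386
pmap's conventions but are not quoted from the commit):

    /*
     * UMA_ZONE_VM keeps the zone's internal allocations on the VM-internal
     * path; UMA_ZONE_NOFREE tells UMA never to return slabs to the page
     * allocator, so the pageout daemon cannot push pages that came from a
     * custom back end into the wrong free routine.
     */
    pvzone = uma_zcreate("PV ENTRY", sizeof(struct pv_entry),
        NULL, NULL, NULL, NULL, UMA_ALIGN_PTR,
        UMA_ZONE_VM | UMA_ZONE_NOFREE);
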
Peter Wemm
ad7a226f9d Deal with 'options KSTACK_PAGES' being a global option. 2003-07-31 01:31:32 +00:00
Alan Cox
cdedf48666 Make pmap_pvo_allocf() callable without Giant. 2003-07-27 20:57:53 +00:00
David Xu
20a2d71332 Rename thread_siginfo to cpu_thread_siginfo.
Suggested by: jhb
2003-07-15 00:11:04 +00:00
Alan Cox
1f78f902a8 Background: pmap_object_init_pt() premaps the pages of an object in
order to avoid the overhead of later page faults.  In general, it
implements two cases: one for vnode-backed objects and one for
device-backed objects.  Only the device-backed case is really
machine-dependent, belonging in the pmap.

This commit moves the vnode-backed case into the (relatively) new
function vm_map_pmap_enter().  On amd64 and i386, this commit only
amounts to code rearrangement.  On alpha and ia64, the new machine
independent (MI) implementation of the vnode case is smaller and more
efficient than their pmap-based implementations.  (The MI
implementation takes advantage of the fact that objects in -CURRENT
are ordered collections of pages.)  On sparc64, pmap_object_init_pt()
hadn't (yet) been implemented.
2003-07-03 20:18:02 +00:00
Alan Cox
dca96f1adc - Export pmap_enter_quick() to the MI VM. This will permit the
implementation of a largely MI pmap_object_init_pt() for vnode-backed
   objects.  pmap_enter_quick() is implemented via pmap_enter() on sparc64
   and powerpc.
 - Correct a mismatch between pmap_object_init_pt()'s prototype and its
   various implementations.  (I plan to keep pmap_object_init_pt() as
   the MD hook for device-backed objects on i386 and amd64.)
 - Correct an error in ia64's pmap_enter_quick() and adjust its interface
   to match the other versions.  Discussed with: marcel
2003-06-29 21:20:04 +00:00
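
A rough illustration of the division of labor described in the two entries
above; the loop and the simplified pmap_enter_quick() signature are
illustrative stand-ins, not the committed interfaces:

    /* Illustrative stand-ins only. */
    typedef unsigned long vm_offset_t;
    typedef struct pmap *pmap_t;
    struct vm_page {
            unsigned long pindex;           /* page's offset within its object */
            struct vm_page *next_resident;  /* next resident page, in order */
    };
    #define PAGE_SIZE_SKETCH 4096UL

    /* MD primitive: enter one cheap, read-only, unwired mapping. */
    void pmap_enter_quick_sketch(pmap_t pmap, vm_offset_t va, struct vm_page *m);

    /*
     * MI vnode-backed case: because objects keep their resident pages in
     * order, one walk over the range suffices.  Device-backed objects still
     * go through the MD pmap_object_init_pt().
     */
    static void
    vm_map_pmap_enter_sketch(pmap_t pmap, vm_offset_t start,
        struct vm_page *first, unsigned long first_pindex, unsigned long npages)
    {
            struct vm_page *m;

            for (m = first; m != NULL && m->pindex < first_pindex + npages;
                m = m->next_resident)
                    pmap_enter_quick_sketch(pmap, start +
                        (m->pindex - first_pindex) * PAGE_SIZE_SKETCH, m);
    }
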
David Xu
b8f480ab94 Add a machine-dependent function, thread_siginfo.  The SA signal code
will use it to construct a siginfo structure and export the result
to userland.

Reviewed by: julian
2003-06-28 06:34:08 +00:00
Peter Grehan
f209c40211 Remove unused bootpath[] variable. It conflicted with a declaration
in the sunlabel utility, causing build problems.
2003-06-25 08:11:29 +00:00
Alan Cox
49a2507bd1 Migrate the thread stack management functions from the machine-dependent
to the machine-independent parts of the VM.  At the same time, this
introduces vm object locking for the non-i386 platforms.

Two details:

1. KSTACK_GUARD has been removed in favor of KSTACK_GUARD_PAGES.  The
different machine-dependent implementations used various combinations
of KSTACK_GUARD and KSTACK_GUARD_PAGES.  To disable the guard page, set
KSTACK_GUARD_PAGES to 0.

2. Remove the (unnecessary) clearing of PG_ZERO in vm_thread_new.  In
5.x (but not 4.x), PG_ZERO can only be set if VM_ALLOC_ZERO is passed
to vm_page_alloc() or vm_page_grab().
2003-06-14 23:23:55 +00:00
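
A userland analog of the guard-page arrangement (illustrative only; the
kernel builds its stacks with kmem and pmap primitives, not mmap(2)):

    #include <sys/mman.h>
    #include <err.h>
    #include <unistd.h>

    int
    main(void)
    {
            long psz = sysconf(_SC_PAGESIZE);
            int stack_pages = 4, guard_pages = 1;   /* illustrative sizes */
            size_t len = (size_t)(stack_pages + guard_pages) * (size_t)psz;

            char *base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);
            if (base == MAP_FAILED)
                    err(1, "mmap");

            /*
             * Revoke access to the low pages: overflowing the stack now
             * faults instead of silently corrupting whatever sits below it.
             * Setting guard_pages to 0 disables the guard, just as
             * KSTACK_GUARD_PAGES=0 does in the kernel.
             */
            if (mprotect(base, (size_t)guard_pages * (size_t)psz, PROT_NONE) == -1)
                    err(1, "mprotect");

            /* usable stack: [base + guard_pages * psz, base + len) */
            return (0);
    }
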
Alan Cox
89f4fca265 Move the *_new_altkstack() and *_dispose_altkstack() functions out of the
various pmap implementations into the machine-independent vm.  They were
all identical.
2003-06-14 06:20:25 +00:00
Peter Wemm
77e2a274d0 GC unused cpu_wait() function 2003-06-11 05:20:33 +00:00
Marcel Moolenaar
11e0f8e16d Change the second (and last) argument of cpu_set_upcall(). Previously
we were passing in a void* representing the PCB of the parent thread.
Now we pass a pointer to the parent thread itself.
The prime reason for this change is to allow cpu_set_upcall() to copy
(parts of) the trapframe instead of having it done in MI code in each
caller of cpu_set_upcall(). Copying the trapframe cannot always be
done with a simple bcopy() or may not always be optimal that way. On
ia64 specifically the trapframe contains information that is specific
to an entry into the kernel and can only be used by the corresponding
exit from the kernel. A trapframe copied verbatim from another frame
is in most cases useless without some additional normalization.

Note that this change removes the assignment to td->td_frame in some
implementations of cpu_set_upcall(). The assignment is redundant.
A previous call to cpu_thread_setup() already did the exact same
assignment. An added benefit of removing the redundant assignment is
that we can now change td_pcb without nasty side-effects.

This change officially marks ia64 as capable of 1:1 threading.

Not tested on: amd64, powerpc
Compile & boot tested on: alpha, sparc64
Functionally tested on: i386, ia64
2003-06-04 21:13:21 +00:00
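
The interface change in prototype form, with a stand-in thread type:

    struct thread;

    /*
     * Old: void cpu_set_upcall(struct thread *td, void *pcb);
     * New: pass the parent thread itself, so the MD code can copy and
     * normalize the parent's trapframe as it sees fit.
     */
    void cpu_set_upcall(struct thread *td, struct thread *td0);
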
Poul-Henning Kamp
961de0b761 Remove #include <sys/disklabel.h> 2003-06-01 09:25:17 +00:00
John Baldwin
90af4afacb - Merge struct procsig with struct sigacts.
- Move struct sigacts out of the u-area and malloc() it using the
  M_SUBPROC malloc bucket.
- Add a small sigacts_*() API for managing sigacts structures: sigacts_alloc(),
  sigacts_free(), sigacts_copy(), sigacts_share(), and sigacts_shared().
- Remove the p_sigignore, p_sigacts, and p_sigcatch macros.
- Add a mutex to struct sigacts that protects all the members of the struct.
- Add sigacts locking.
- Remove Giant from nosys(), kill(), killpg(), and kern_sigaction() now
  that sigacts is locked.
- Several in-kernel functions such as psignal(), tdsignal(), trapsignal(),
  and thread_stopped() are now MP safe.

Reviewed by:	arch@
Approved by:	re (rwatson)
2003-05-13 20:36:02 +00:00
David E. O'Brien
28e257139a Things run thru the C preprocessor must use C-style comments. 2003-05-05 10:01:10 +00:00
Peter Wemm
161af19be7 Back out last commits. The elf64/elf32 kernel name thing was more pain
than it was worth.
2003-05-01 03:33:28 +00:00
Peter Wemm
1de0385cfc Fix transcription error. Use == NULL, not != NULL. Fortunately this
was harmless.
2003-04-30 22:09:26 +00:00
Peter Wemm
dae0bca875 Look for an elf32 kernel (powerpc) and elf64 kernel (sparc64) as well
as a plain "elf kernel".
2003-04-30 22:05:48 +00:00
John Baldwin
d90e753aa8 Range check the syscall number before looking it up in the syscallnames[]
array.

Submitted by:	pho
2003-04-30 17:59:27 +00:00
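
A userland illustration of the bounds check; the real table is the
kernel's syscallnames[], the small table here is invented:

    #include <stdio.h>

    static const char *names[] = { "syscall", "exit", "fork", "read", "write" };
    #define NNAMES  (sizeof(names) / sizeof(names[0]))

    static const char *
    name_for(unsigned int code)
    {
            /* An unsigned compare rejects both negative and too-large codes. */
            return (code < NNAMES ? names[code] : "unknown");
    }

    int
    main(void)
    {
            printf("%s %s\n", name_for(3), name_for(1000));  /* read unknown */
            return (0);
    }
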
Daniel Eischen
1328e1c4be Add an argument to get_mcontext() which specifies whether the
syscall return values should be cleared.  The system calls
getcontext() and swapcontext() want to return 0 on success
but these contexts can be switched to at a later time so
the return values need to be cleared in the saved register
sets.  Other callers of get_mcontext() would normally want
the context without clearing the return values.

Remove the i386-specific context saving from the KSE code.
get_mcontext() is not i386-specific any more.

Fix a bad pointer in the alpha get_mcontext() code.  The
context was being bcopy()'d from &td->tf_frame, but tf_frame
is itself a pointer, so the thread was being copied instead.
Spotted by jake.

Glanced at by:  jake
Reviewed by:    bde (months ago)
2003-04-25 01:50:30 +00:00
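
A hedged sketch of the new argument (stand-in context type; the flag value
is illustrative):

    struct thread;
    struct mcontext_sketch;                 /* stand-in for mcontext_t */
    #define GET_MC_CLEAR_RET  0x0001        /* value illustrative */

    /*
     * With GET_MC_CLEAR_RET, the syscall return registers in the saved set
     * are zeroed, so that resuming this context later looks like a
     * successful return from getcontext()/swapcontext().  Other callers
     * pass 0 and get the registers exactly as they were.
     */
    int get_mcontext(struct thread *td, struct mcontext_sketch *mcp, int flags);
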
David E. O'Brien
8368cf8f75 Use __FBSDID rather than rcsid[]. 2003-04-03 21:36:33 +00:00
Jeff Roberson
b8db34d280 - Define a new md function 'casuptr'. This atomically compares and sets
a pointer that is in user space.  It will be used as the basic primitive
   for a kernel supported user space lock implementation.
 - Implement this function in x86's support.s
 - Provide stubs that return -1 on all other architectures.  Implementations
   will follow along shortly.

Reviewed by:	jake
2003-04-01 00:18:55 +00:00
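
A userland analog of the primitive's contract using C11 atomics (no
user-address semantics and no -1 fault return; purely illustrative):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Compare *p with 'old'; install 'new' if they match; report what was seen. */
    static intptr_t
    cas_ptr(_Atomic intptr_t *p, intptr_t old, intptr_t new)
    {
            intptr_t expected = old;

            atomic_compare_exchange_strong(p, &expected, new);
            return (expected);
    }

    int
    main(void)
    {
            _Atomic intptr_t lockword = 0;

            printf("%ld\n", (long)cas_ptr(&lockword, 0, 1)); /* 0: lock acquired */
            printf("%ld\n", (long)cas_ptr(&lockword, 0, 1)); /* 1: already held */
            return (0);
    }
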
Jeff Roberson
4093529dee - Move p->p_sigmask to td->td_sigmask. Signal masks will be per thread with
a follow on commit to kern_sig.c
 - signotify() now operates on a thread since unmasked pending signals are
   stored in the thread.
 - PS_NEEDSIGCHK moves to TDF_NEEDSIGCHK.
2003-03-31 22:49:17 +00:00
Jeff Roberson
1bf4700bff - Change trapsignal() to accept a thread and not a proc.
- Change all consumers to pass in a thread.

Right now this does not cause any functional changes but it will be important
later when signals can be delivered to specific threads.
2003-03-31 22:02:38 +00:00
Peter Grehan
a29355a07b Enable the FPU on first use per-thread and save state across context
switches. Not as lazy as it could be. Changing FPU state with sigcontext
still TODO.

fpu.c - convert some asm to inline C, and macroize fpu loads/stores
swtch.S - call out to save/restore fpu routines
trap.c - always call enable_fpu, since this shouldn't be called once
         the FPU has been enabled for a thread
genassym.c - define for pcb fpu flag
2003-03-20 10:28:20 +00:00
Peter Grehan
9b6595a08d Add a machine check handler. While generally useful, it's required when
issuing PCI config cycles on MPC106-based PowerMacs, where accesses to
non-existent/empty slots cause machine checks.
2003-03-19 08:33:21 +00:00
John Baldwin
263067951a Replace calls to WITNESS_SLEEP() and witness_list() with equivalent calls
to WITNESS_WARN().
2003-03-04 21:03:05 +00:00
Peter Grehan
4b67de3419 A register typo and an incorrect 32-bit constant load in the previous
commit resulted in AST delivery not working.
2003-02-26 14:41:39 +00:00
Maxime Henrion
07159f9c56 Cleanup of the d_mmap_t interface.
- Get rid of the useless atop() / pmap_phys_address() detour.  The
  device mmap handlers must now give back the physical address
  without atop()'ing it.
- Don't borrow the physical address of the mapping in the returned
  int.  Now we properly pass a vm_offset_t * and expect it to be
  filled by the mmap handler when the mapping was successful.  The
  mmap handler must now return 0 when successful, any other value
  is considered as an error.  Previously, returning -1 was the only
  way to fail.  This change thus accidentally fixes some devices
  which were bogusly returning errno constants which would have been
  considered as addresses by the device pager.
- Garbage collect the poorly named pmap_phys_address() now that it's
  no longer used.
- Convert all the d_mmap_t consumers to the new API.

I'm still not sure whether we need a __FreeBSD_version bump for this,
since we didn't guarantee API/ABI stability until 5.1-RELEASE.

Discussed with:		alc, phk, jake
Reviewed by:		peter
Compile-tested on:	LINT (i386), GENERIC (alpha and sparc64)
Runtime-tested on:	i386
2003-02-25 03:21:22 +00:00
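
A hedged sketch of a driver mmap handler under the new convention; the
types and the device address range are stand-ins:

    typedef unsigned long vm_offset_t;
    struct cdev_sketch;                        /* stand-in for the dev argument */

    #define MYDEV_PHYS_BASE  0x80000000UL      /* invented physical base */
    #define MYDEV_SIZE       0x1000UL

    static int
    mydev_mmap(struct cdev_sketch *dev, vm_offset_t offset, vm_offset_t *paddr,
        int nprot)
    {
            (void)dev; (void)nprot;
            if (offset >= MYDEV_SIZE)
                    return (-1);               /* any non-zero value now means failure */
            *paddr = MYDEV_PHYS_BASE + offset; /* raw physical address, no atop() */
            return (0);
    }
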
Peter Grehan
c0bf6c1f26 Catch up to latest KSE changes 2003-02-20 01:57:49 +00:00
Warner Losh
a163d034fa Back out M_* changes, per decision of the TRB.
Approved by: trb
2003-02-19 05:47:46 +00:00
Jeff Roberson
5215b1872f - Split the struct kse into struct upcall and struct kse. struct kse will
soon be visible only to schedulers.  This greatly simplifies much of the
   KSE code.

Submitted by:	davidxu
2003-02-17 05:14:26 +00:00
Jeff Roberson
e4625663c9 - Move ke_sticks, ke_iticks, ke_uticks, ke_uu, ke_su, and ke_iu back into
the proc.  These counters are only examined through calcru.

Submitted by:	davidxu
Tested on:	x86, alpha, UP/SMP
2003-02-17 02:19:58 +00:00
Benno Rice
d417ef4a71 GC an unused variable. 2003-02-05 12:34:10 +00:00
Benno Rice
d8e3618615 Export the ns_per_tick variable through md_var.h rather than by declaring
it extern in cpu.c.
2003-02-05 12:33:49 +00:00
Benno Rice
bdfe5c9146 - Use cpu_setup() instead of identifycpu().
- Remove identifycpu().
2003-02-05 12:10:46 +00:00
Benno Rice
4a95ae9a68 Replace the inline asm in delay() with a while loop. This may not be as
efficient but it appears to actually work.  Some investigation may be
required.
2003-02-05 11:26:14 +00:00
Benno Rice
8bc7bc1ea0 - Rename the "powerpc" timecounter to the "decrementer" timecounter.
- Initialise it earlier.
2003-02-05 11:16:36 +00:00
Jake Burkholder
238dd3209a Split statclock into statclock and profclock, and made the method for driving
statclock from profhz when profiling is enabled machine-dependent (MD), since most
platforms don't use this anyway.  This removes the need for statclock_process, whose
only purpose was to subdivide profhz, and gets the profiling clock running
outside of sched_lock on platforms that implement suswintr.
Also changed the interface for starting and stopping the profiling clock to
do just that, instead of changing the rate of statclock, since they can now
be separate.

Reviewed by:	jhb, tmm
Tested on:	i386, sparc64
2003-02-03 17:53:15 +00:00
Julian Elischer
6f8132a867 Reversion of a commit by Davidxu, plus the fixes applied since.
I'm not convinced there is anything major wrong with the patch but
them's the rules..

I am using my "David's mentor" hat to revert this as he's
offline for a while.
2003-02-01 12:17:09 +00:00
Peter Grehan
03b6e02551 - add pmap_pagedaemon_waken variable
- remove dead code and fix warnings in pmap_zero_page/zero_page_area
- implement
    pmap_clear_reference
    pmap_ts_referenced
    pmap_page_exists_quick
    pmap_remove_all
- align pmap_qenter/qremove closer with i386 code
- fix vm_page locking in pmap_new_thread (from benno)
- add new parameter to pmap_clear_bit to return original
  pte value

Approved by:  benno
2003-02-01 02:56:48 +00:00
Benno Rice
b3c02b110a Back out some changes that snuck in with the last commit.
Pointy hat to:	benno
2003-01-27 04:32:10 +00:00
Benno Rice
622cfbd033 Flesh out bus_dmamap_sync. 2003-01-27 04:27:01 +00:00