Commit Graph

113 Commits

Author SHA1 Message Date
Peter Grehan
bd8e6f87c8 Remove bogus increment of re-hashed PTEG index. This snuck in with r1.12 of
pmap.c, and is potentially the cause of hangs reported on machines with a
small amount of memory. On machines with sufficient RAM, and without a lot
of processes running, this situation would probably never occur.

Testing is still incomplete, but the increment is obviously wrong, so
remove the offending code now.

The issue of what to do when both the primary and secondary hash overflow
is still open.

Reported by:	Dan Kresja at windriver dot com, via alc
2006-12-20 01:10:21 +00:00
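
For context, a sketch of how the primary and secondary PTEG indices relate
on the 32-bit OEA MMU; names and masks here are illustrative, not the exact
pmap.c code, but they show why no increment belongs in the re-hash path:

    /*
     * Illustrative sketch: the secondary PTEG is reached by taking the
     * one's complement of the primary hash, nothing more.  pteg_mask is
     * assumed to be (number of PTEGs - 1).
     */
    static u_int
    pteg_index(u_int vsid, vm_offset_t va, int secondary)
    {
            u_int hash;

            hash = (vsid & 0x7ffff) ^ ((va >> 12) & 0xffff);
            if (secondary)
                    hash = ~hash;
            return (hash & pteg_mask);
    }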
Peter Grehan
6e4f008cbb Fix gdb issue where the i-cache was not being updated when a breakpoint
was written into a user's address space. The fix is to modify uiomove_fromphys
to sync the icache when an executable user-space page is written into.

Alan Cox suggested that there should probably be a higher-level interface
to this in the ptrace code, but agreed that this is an OK short-term solution.

Files changed:

pmap.h - declaration of pmap_page_executable()
pmap_dispatch.c - pass through the page_executable call to the mmu object
mmu_oea.c - implement the page_executable method by examining the PTE_EXEC
 field in the vm_page_t
uio_machdep.c - in uiomove_fromphys(), if the op was a UIO_WRITE to user-space,
 and if the page is executable, sync the icache since this is at the least
 a breakpoint-write from gdb.

Reported by:	marcel
Tested by:	marcel, grehan on g3+g4
Discussed with:	alc
MFC after:	2 weeks
2006-12-05 04:01:52 +00:00
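
A hedged sketch of the uiomove_fromphys() hook described above;
pmap_page_executable() is named in the commit, while the loop variables
(uio, m, va, cnt) are assumed from the surrounding copy loop:

    /* Illustrative fragment of the UIO_WRITE path in uiomove_fromphys(). */
    if (uio->uio_rw == UIO_WRITE && uio->uio_segflg == UIO_USERSPACE &&
        pmap_page_executable(m)) {
            /*
             * Code was just written into an executable user page
             * (at the least a gdb breakpoint): sync the i-cache.
             */
            __syncicache((void *)va, cnt);
    }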
Peter Grehan
9955cf96f6 Don't use vm_page_flag_set() when installing bootstrap page-table entries,
since the vm page mutexes aren't yet initialized. Fixes a boot-time panic.

Reported by:	Dario Freni  saturnero at freesbie dot org
2006-11-30 08:13:06 +00:00
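
A sketch of the workaround, assuming a pmap_initialized-style bootstrap
flag (illustrative, not the literal diff):

    /*
     * vm_page_flag_set() asserts a page mutex that does not exist yet
     * while bootstrap PTEs are being installed, so in that window the
     * flag bits are poked directly.
     */
    if (pmap_initialized)
            vm_page_flag_set(m, PG_WRITEABLE);
    else
            m->flags |= PG_WRITEABLE;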
Alan Cox
44b8bd66f9 Make pmap_enter() responsible for setting PG_WRITEABLE instead
of its caller.  (As a beneficial side-effect, a high-contention
acquisition of the page queues lock in vm_fault() is eliminated.)
2006-11-12 21:48:34 +00:00
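
The shape of the change, sketched with assumed variable names:

    /* In pmap_enter(): mark the page once a writable mapping is created. */
    if ((prot & VM_PROT_WRITE) != 0)
            vm_page_flag_set(m, PG_WRITEABLE);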
Alan Cox
78985e424a Complete the transition from pmap_page_protect() to pmap_remove_write().
Originally, I had adopted sparc64's name, pmap_clear_write(), for the
function that is now pmap_remove_write().  However, this function is more
like pmap_remove_all() than like pmap_clear_modify() or
pmap_clear_reference(), hence, the name change.

The higher-level rationale behind this change is described in
src/sys/amd64/amd64/pmap.c revision 1.567.  The short version is that I'm
trying to clean up and fix our support for execute access.

Reviewed by: marcel@ (ia64)
2006-08-01 19:06:06 +00:00
Alan Cox
2d96c2b1b6 Add synchronization to moea_zero_page() and moea_zero_page_area().
Remove the acquisition and release of Giant from moea_zero_page_idle().

Tested by: grehan@
2006-07-10 07:03:37 +00:00
Alan Cox
d2809f533b Eliminate the acquisition and release of Giant from moea_extract_and_hold()
and moea_protect().

Tested by: grehan@ and rink@
2006-07-01 23:24:32 +00:00
Alan Cox
d644a0b78a Synchronize accesses to the PTEG table.
Add many lock assertions.

Tested by: grehan@
2006-06-25 19:07:01 +00:00
Rink Springer
5ce609a3e1 Prevent 'mutex not owned' panic on boot if INVARIANTS is in the kernel. This
makes the GENERIC kernel boot on ppc.

Reviewed by:	grehan
Approved by:	imp (mentor)
MFC after:	1 week

2006-06-17 20:10:32 +00:00
Stephan Uphoff
2053c12705 Remove mpte optimization from pmap_enter_quick().
There is a race with the current locking scheme and removing
it should have no measurable performance impact.
This fixes page faults leading to panics in pmap_enter_quick_locked()
on amd64/i386.

Reviewed by: alc,jhb,peter,ps
2006-06-15 01:01:06 +00:00
Alan Cox
67c867ee8d Correct a typo in the previous revision. 2006-06-06 02:02:10 +00:00
Alan Cox
ce142d9ec0 Introduce the function pmap_enter_object(). It maps a sequence of resident
pages from the same object.  Use it in vm_map_pmap_enter() to reduce the
locking overhead of premapping objects.

Reviewed by: tegge@
2006-06-05 20:35:27 +00:00
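
A hedged sketch of the pmap_enter_object() pattern, modeled loosely on the
in-tree implementations; pmap_enter_locked() stands in for whatever
per-page helper a given pmap uses:

    /*
     * Map every resident page of the object falling in [start, end)
     * with a single round of locking, instead of locking per page.
     */
    void
    pmap_enter_object(pmap_t pm, vm_offset_t start, vm_offset_t end,
        vm_page_t m_start, vm_prot_t prot)
    {
            vm_page_t m;
            vm_pindex_t diff, psize;

            psize = atop(end - start);
            m = m_start;
            PMAP_LOCK(pm);
            while (m != NULL && (diff = m->pindex - m_start->pindex) < psize) {
                    pmap_enter_locked(pm, start + ptoa(diff), m, prot);
                    m = TAILQ_NEXT(m, listq);
            }
            PMAP_UNLOCK(pm);
    }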
Peter Grehan
5927693733 Name change from pmap_* to moea_* to fit into the new order of
mmu implementation.

This code handles the 32-bit 'OEA' MMU found on G2/G3/G4 PPC cores.
2005-11-08 06:49:45 +00:00
Peter Grehan
72ed31087b Fix boot-time hang/panic on G3 systems when modifying IBAT0 in
pmap_bootstrap by using the sync;isync big hammer to make sure
all prior operations have completed.

Reported by:	Nathan Whitehorn <nathan at uchicago edu>
MFC after:	2 days
2005-09-10 21:03:10 +00:00
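
The "big hammer" is roughly the following sequence; battable[] and the
choice of BAT register are illustrative:

    /* Drain all prior storage ops, then rewrite IBAT0 and resync. */
    __asm __volatile("sync; isync");
    __asm __volatile("mtibatu 0,%0" :: "r"(battable[0].batu));
    __asm __volatile("mtibatl 0,%0" :: "r"(battable[0].batl));
    __asm __volatile("isync");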
Alan Cox
ba8bca610c Pass a value of type vm_prot_t to pmap_enter_quick() so that it can
determine whether the mapping should permit execute access.
2005-09-03 18:20:20 +00:00
Alan Cox
1c245ae7d1 Introduce a procedure, pmap_page_init(), that initializes the
vm_page's machine-dependent fields.  Use this function in
vm_pageq_add_new_page() so that the vm_page's machine-dependent and
machine-independent fields are initialized at the same time.

Remove code from pmap_init() for initializing the vm_page's
machine-dependent fields.

Remove stale comments from pmap_init().

Eliminate the Boolean variable pmap_initialized from the alpha, amd64,
i386, and ia64 pmap implementations.  Its use is no longer required
because of the above changes and earlier changes that result in physical
memory that is being mapped at initialization time being mapped without
pv entries.

Tested by: cognet, kensmith, marcel
2005-06-10 03:33:36 +00:00
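
On powerpc the machine-dependent part is small; a sketch with md_page
field names that should be treated as approximate:

    /* Initialize the vm_page's MD fields as the page enters the system. */
    void
    pmap_page_init(vm_page_t m)
    {

            LIST_INIT(&m->md.mdpg_pvoh);    /* no pvo entries yet */
            m->md.mdpg_attrs = 0;           /* no cached ref/mod bits */
    }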
Peter Grehan
b0c2130963 Replaced previous hw.physmem extraction with des's mods to
getenv_ulong() - much simpler.

Pointed out by:	des
2005-03-07 07:31:20 +00:00
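
A sketch of the simplified extraction; only getenv_ulong() is from the
commit, the clamping detail is illustrative:

    /* Honour an hw.physmem cap supplied via the loader. */
    u_long physmem_limit;

    if (getenv_ulong("hw.physmem", &physmem_limit) && physmem_limit != 0)
            physmem = min(physmem, atop(physmem_limit));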
Peter Grehan
e2f6d6e28a Allow user to undersize memory with hw.physmem loader variable.
Obtained from:  i386/machdep.c:getmemsize()
2005-03-07 01:46:06 +00:00
Peter Grehan
4dba5df19d Add PVO_FAKE flag to pvo entries for PG_FICTITIOUS mappings, to
avoid trying to reverse-map a device physical address to the
vm_page array and walking into non-existent vm weeds.

found by:  Xorg server exiting
2005-02-25 02:42:15 +00:00
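
A sketch of the guard this adds; PVO_FAKE is from the commit, the
surrounding names follow the period's pmap.c but are approximate:

    /*
     * Fictitious (device) mappings have no backing vm_page, so never
     * push their physical address through PHYS_TO_VM_PAGE().
     */
    if ((pvo->pvo_vaddr & PVO_FAKE) == 0) {
            m = PHYS_TO_VM_PAGE(pvo->pvo_pte.pte_lo & PTE_RPGN);
            pmap_attr_save(m, ptebit);
    }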
Peter Grehan
94aa7aecdf Fix (accidental?) lock order reversal in pmap_remove. Found when
a process that has mmap'd device mem exits.
2005-01-21 01:02:38 +00:00
Warner Losh
60727d8b86 /* -> /*- for license, minor formatting changes 2005-01-07 02:29:27 +00:00
Peter Grehan
22f2fe59b9 Correctly initialise the 2nd kernel segment, and don't
forget to actually install it in the segment register.
This may fix some of the weird panics seen when kernel VM
is heavily used.
2004-12-29 09:41:40 +00:00
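
Installing a segment register on the 32-bit MMU is a single mtsrin; the
constants here are illustrative:

    /* Write the 2nd kernel segment's VSID into its segment register. */
    __asm __volatile("mtsrin %0,%1"
        :: "r"(KERNEL2_SEGMENT), "r"(KERNEL2_SR << ADDR_SR_SHFT));
    __asm __volatile("sync; isync");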
Alan Cox
1f70d62298 Modify pmap_enter_quick() so that it expects the page queues to be locked
on entry and it assumes the responsibility for releasing the page queues
lock if it must sleep.

Remove a bogus comment from pmap_enter_quick().

Using the first change, modify vm_map_pmap_enter() so that the page queues
lock is acquired and released once, rather than each time that a page
is mapped.
2004-12-23 20:16:11 +00:00
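
The caller-side effect in vm_map_pmap_enter(), sketched with assumed loop
variables and the later vm_prot_t-taking pmap_enter_quick() signature:

    /* One lock/unlock pair now brackets the whole premapping loop. */
    vm_page_lock_queues();
    for (m = m_start; m != NULL && va < end; m = TAILQ_NEXT(m, listq)) {
            pmap_enter_quick(pmap, va, m, prot);
            va += PAGE_SIZE;
    }
    vm_page_unlock_queues();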
Alan Cox
85f5b24573 In the common case, pmap_enter_quick() completes without sleeping.
In such cases, the busying of the page and the unlocking of the
containing object by vm_map_pmap_enter() and vm_fault_prefault() is
unnecessary overhead.  To eliminate this overhead, this change
modifies pmap_enter_quick() so that it expects the object to be locked
on entry and it assumes the responsibility for busying the page and
unlocking the object if it must sleep.  Note: alpha, amd64, i386 and
ia64 are the only implementations optimized by this change; arm,
powerpc, and sparc64 still conservatively busy the page and unlock the
object within every pmap_enter_quick() call.

Additionally, this change is the first case where we synchronize
access to the page's PG_BUSY flag and busy field using the containing
object's lock rather than the global page queues lock.  (Modifications
to the page's PG_BUSY flag and busy field have asserted both locks for
several weeks, enabling an incremental transition.)
2004-12-15 19:55:05 +00:00
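
On powerpc, one of the conservative cases noted above, pmap_enter_quick()
keeps roughly the old shape (sketch, names assumed):

    /* Conservative path: busy the page, drop the object lock, map. */
    vm_page_busy(m);
    VM_OBJECT_UNLOCK(object);
    pmap_enter(pm, va, m, prot & (VM_PROT_READ | VM_PROT_EXECUTE), FALSE);
    VM_OBJECT_LOCK(object);
    vm_page_wakeup(m);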
Alan Cox
4711f8d71d Lock the kernel pmap in pmap_kenter().
Tested by: gallatin@
2004-09-13 20:36:01 +00:00
Alan Cox
f489bf21ec - Introduce a lock for synchronizing access to the pvo and pteg tables.
- In pmap_enter(), only the acquisition and release of the page queues
   lock needs to check the bootstrap flag.

Tested by: gallatin@
2004-08-30 21:39:22 +00:00
Alan Cox
c3d11d22be Eliminate unnecessary indirection. 2004-08-28 20:27:12 +00:00
Alan Cox
48d0b1a0dc Add pmap locking to many of the functions.
Many thanks to Andrew Gallatin for resolving a powerpc-specific
initialization problem in my original patch.

Tested by: gallatin@
2004-08-26 04:15:36 +00:00
Marius Strobl
39513fa664 Instead of "OpenFirmware", "openfirmware", etc. use the official spelling
"Open Firmware" from IEEE 1275 and OpenFirmware.org (no pun intended).

Ok'ed by:	tmm
2004-08-16 15:45:27 +00:00
Suleiman Souhlal
c0763d3763 Add /dev/mem and /dev/kmem to powerpc.
Approved by:	grehan (mentor)
2004-08-16 13:07:40 +00:00
Peter Grehan
2184ddd1f7 In pmap_page_protect, clear the vm page's PG_WRITEABLE flag if
downgrading to read-only. Found by triggering the KASSERT in
vm_pageout_flush().
2004-08-05 12:44:12 +00:00
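
Sketched, with assumed names:

    /* After write access has been revoked from every mapping. */
    if ((prot & VM_PROT_WRITE) == 0)
            vm_page_flag_clear(m, PG_WRITEABLE);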
Alan Cox
684a62b7bf - Push down the acquisition and release of Giant into pmap_enter_quick()
on those architectures without pmap locking.
 - Eliminate the acquisition and release of Giant in vm_map_pmap_enter().
2004-08-04 22:03:16 +00:00
Alan Cox
9bb0e06861 - Push down the acquisition and release of Giant into pmap_protect() on
those architectures without pmap locking.
 - Eliminate the acquisition and release of Giant from vm_map_protect().

(Translation: mprotect(2) runs to completion without touching Giant on
alpha, amd64, i386 and ia64.)
2004-07-30 20:38:30 +00:00
Alan Cox
ab50a26230 Implement the protection check required by the pmap_extract_and_hold()
specification.

Reviewed and tested by:	grehan@
2004-07-26 18:10:10 +00:00
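
A hedged sketch of the required check; the pvo/PTE names match the
period's pmap.c, while the locking shown is that of the later revisions:

    /*
     * Only hold the page when the mapping actually grants the access
     * the caller asked for (here: write requires PTE_RW protection).
     */
    vm_page_t
    pmap_extract_and_hold(pmap_t pm, vm_offset_t va, vm_prot_t prot)
    {
            struct pvo_entry *pvo;
            vm_page_t m;

            m = NULL;
            PMAP_LOCK(pm);
            pvo = pmap_pvo_find_va(pm, va & ~ADDR_POFF, NULL);
            if (pvo != NULL && ((prot & VM_PROT_WRITE) == 0 ||
                (pvo->pvo_pte.pte_lo & PTE_PP) == PTE_RW)) {
                    m = PHYS_TO_VM_PAGE(pvo->pvo_pte.pte_lo & PTE_RPGN);
                    vm_page_hold(m);
            }
            PMAP_UNLOCK(pm);
            return (m);
    }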
Alan Cox
3d2e54c317 Push down the acquisition and release of the page queues lock into
pmap_protect() and pmap_remove().  In general, they require the lock in
order to modify a page's pv list or flags.  In some cases, however,
pmap_protect() can avoid acquiring the lock.
2004-07-15 18:00:43 +00:00
Alan Cox
eba90ac75f pmap_remove_pages() must not remove wired mappings. Since
pmap_remove_pages() is an optimization, its implementation is optional.

Discussed with:	grehan
2004-07-12 04:40:26 +00:00
Peter Grehan
5d64cf91fb G4 requires an isync after a 256Mb ibat/dbat update, while G3 requires
an isync after each bat update. Otherwise, pmap_bootstrap causes an ISI
exception. This is a fall-out of the loader BAT removal.
2004-07-08 12:47:36 +00:00
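
Sketched for one DBAT pair (battable[] illustrative):

    /* G3-safe sequence: an isync after every individual BAT update. */
    __asm __volatile("mtdbatu 1,%0; isync" :: "r"(battable[1].batu));
    __asm __volatile("mtdbatl 1,%0; isync" :: "r"(battable[1].batl));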
Alan Cox
56b093883a Correct pmap_extract()'s return type. It should be vm_paddr_t, not
vm_offset_t.
2004-07-05 23:08:27 +00:00
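
That is:

    /* vm_paddr_t may be wider than vm_offset_t, so the old return type
     * could truncate physical addresses beyond the virtual range. */
    vm_paddr_t pmap_extract(pmap_t pmap, vm_offset_t va);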
Peter Grehan
6cc1cdf47b Modify loop test when cycling through phys_avail array. It's possible
for an OpenFirmware implementation to have a single memory region
(hello PearPC).
2004-07-01 08:01:49 +00:00
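
The walk, sketched as the usual zero-terminated (start, end) pair idiom;
the exact test that changed is not reproduced here:

    /* phys_avail[] holds (start, end) pairs, terminated by zeroes. */
    for (i = 0; phys_avail[i + 1] != 0; i += 2)
            physmem += atop(phys_avail[i + 1] - phys_avail[i]);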
Alan Cox
1f51408ade Remove avail_end. It is not used. 2004-04-11 06:02:24 +00:00
Alan Cox
c8607538c8 Remove avail_start on those platforms that no longer use it. (Only amd64
does anything with it beyond simple initialization.)
2004-04-05 04:08:00 +00:00
Alan Cox
bdb93eb248 Remove unused arguments from pmap_init(). 2004-04-05 00:37:50 +00:00
Alan Cox
fcffa790e9 Retire pmap_pinit2(). Alpha was the last platform that used it. However,
ever since alpha/alpha/pmap.c revision 1.81 introduced the list allpmaps,
there has been no reason for having this function on Alpha.  Briefly,
when pmap_growkernel() relied upon the list of all processes to find and
update the various pmaps to reflect a growth in the kernel's valid
address space, pmap_pinit2() served to avoid a race between pmap
initialization and pmap_growkernel().  Specifically, pmap_pinit2() was
responsible for initializing the kernel portions of the pmap and
pmap_pinit2() was called after the process structure contained a pointer
to the new pmap for use by pmap_growkernel().  Thus, an update to the
kernel's address space might be applied to the new pmap unnecessarily,
but an update would never be lost.
2004-03-07 21:06:48 +00:00
Peter Grehan
4daf20b2f1 Increase kernel VA from 256Mb to 512Mb by shifting the segment used
for user copyinout down to 12, and keeping segments 13/14 for
kernel VA.

It would be nice to have more available, but segments lower than
this are reserved for either memory or 1:1 mapped device i/o,
and seg 15 is OpenFirmware ROM. Also, the effort to keep OpenFirmware
available for callbacks limits the use of VA-mapped segments.
Fortunately UMA_MD_SMALL_ALLOC takes away a lot of VM pressure.

Obtained from:  NetBSD
2004-03-02 06:49:21 +00:00
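
The resulting carve-up, as constants (layout taken from the commit text):

    #define USER_SR         12      /* user copyin/copyout window */
    #define KERNEL_SR       13      /* kernel VA, first 256Mb */
    #define KERNEL2_SR      14      /* kernel VA, second 256Mb */
    /* segments below 12: memory / 1:1 device i/o; segment 15: OFW ROM */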
Peter Grehan
7c2779715c Cleaned up param.h:
- culled long-dead #define's
- segment register defs moved to sr.h
- NPMAPS moved to pmap.h
- KERNBASE moved to vmparam.h
- removed include of <machine/cpu.h> and fixed src files that
  relied on this.

Modifying segment register code no longer causes gcc rebuilds :-)
2004-02-11 07:27:34 +00:00
Peter Grehan
0ee6dbd789 Remove pmap_pvo_allocf zone alloc function. It was a way of
using the direct-mapping of physmem to force PTE data structures
to be physically addressable so the interrupt-time real-mode
DSI trap handler could perform PTE spills. However, the memory
may have been > 256Mb, which would have caused a BAT spill and
double-interrupt.

The new trap code no longer handles PTE spills, so the requirement
that these pages be direct-mapped no longer applies. The irony is
UMA_MD_SMALL_ALLOC will return direct mappings for these structs :-)
2004-02-04 13:16:21 +00:00
Peter Grehan
0efd0097cb When UMA_MD_SMALL_ALLOC is defined, pmap_kextract will be called
for direct-mapped addresses. Assume that any address less than KVA
is one of these and return it. Also assert that an address in KVA
does have a valid mapping - callers of pmap_kextract don't check
the return value, since they assume that they have a valid virtual
address.
2004-01-29 00:45:41 +00:00
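
A hedged sketch of the resulting behaviour; pmap_pvo_find_va() and the
PTE field names are approximate:

    /*
     * Addresses below the kernel VA range are direct-mapped
     * (UMA_MD_SMALL_ALLOC), so physical == virtual; anything else
     * must already have a kernel mapping, which is asserted.
     */
    vm_paddr_t
    pmap_kextract(vm_offset_t va)
    {
            struct pvo_entry *pvo;

            if (va < VM_MIN_KERNEL_ADDRESS)
                    return (va);
            pvo = pmap_pvo_find_va(kernel_pmap, va & ~ADDR_POFF, NULL);
            KASSERT(pvo != NULL, ("pmap_kextract: va %#x unmapped", va));
            return ((pvo->pvo_pte.pte_lo & PTE_RPGN) | (va & ADDR_POFF));
    }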
Peter Grehan
7b33c6ef7a Disable the per-vm_page PTE cache. This was not being invalidated
correctly, resulting in the dreaded "vm_pageout_flush: partially
invalid page" panic. The caching issue will be revisited in the
future, but opt for safety over performance in the meantime.

Tested by: gallatin
2003-12-16 03:55:57 +00:00
Andrew Gallatin
4f7daed045 pmap_query_bit() should return false if the bit is not set.
Reviewed by: grehan
2003-12-09 14:47:33 +00:00
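
The corrected shape, simplified (the real function also syncs the
attribute bits back from the hardware PTEs):

    boolean_t
    pmap_query_bit(vm_page_t m, int ptebit)
    {
            struct pvo_entry *pvo;

            LIST_FOREACH(pvo, vm_page_to_pvoh(m), pvo_vlink) {
                    if (pvo->pvo_pte.pte_lo & ptebit)
                            return (TRUE);
            }
            /* Bit not set in any mapping: report FALSE (the fix). */
            return (FALSE);
    }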
Alan Cox
566526a957 Migrate pmap_prefault() into the machine-independent virtual memory layer.
A small helper function pmap_is_prefaultable() is added.  This function
encapsulates the few lines of pmap_prefault() that actually vary from
machine to machine.  Note: pmap_is_prefaultable() and pmap_mincore() have
much in common.  Going forward, it's worth considering their merger.
2003-10-03 22:46:53 +00:00
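
A sketch of the helper's contract on powerpc (names approximate):
prefaulting is only worthwhile where no mapping exists yet.

    boolean_t
    pmap_is_prefaultable(pmap_t pmap, vm_offset_t va)
    {
            struct pvo_entry *pvo;

            pvo = pmap_pvo_find_va(pmap, va & ~ADDR_POFF, NULL);
            return (pvo == NULL);
    }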