Commit Graph

137 Commits

Author SHA1 Message Date
Konstantin Belousov
897d81a020 Do not call vm_object_deallocate() from vm_map_delete(), because we
hold the map lock there, and might need the vnode lock for OBJT_VNODE
objects. Postpone object deallocation until caller of vm_map_delete()
drops the map lock. Link the map entries to be freed into a freelist
that is released by the new helper function vm_map_entry_free_freelist().

Reviewed by:	tegge, alc
Tested by:	pho
2009-02-08 20:39:17 +00:00
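A minimal user-space sketch of the deferred-free pattern this commit describes: entries are unlinked while the map lock is held, and the deallocation that may require another lock is postponed until after the lock is dropped. All names below are illustrative, not the kernel's.

```c
#include <pthread.h>
#include <stdlib.h>

struct entry {
	struct entry *next;
	void *object;		/* freeing this may need another lock */
};

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
static struct entry *map_head;

/* Unlink all entries under the map lock; free nothing yet. */
static struct entry *
map_delete_collect(void)
{
	struct entry *freelist;

	pthread_mutex_lock(&map_lock);
	freelist = map_head;
	map_head = NULL;
	pthread_mutex_unlock(&map_lock);
	return (freelist);
}

/* Analogue of vm_map_entry_free_freelist(): runs without the map lock,
 * so taking the object's own lock cannot deadlock against it. */
static void
entry_free_freelist(struct entry *freelist)
{
	struct entry *e;

	while ((e = freelist) != NULL) {
		freelist = e->next;
		free(e->object);
		free(e);
	}
}
```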
Alan Cox
fb272dc841 Eliminate stale comments from kmem_malloc(). 2008-07-18 17:41:31 +00:00
Alan Cox
5cfa90e902 Make preparations for increasing the size of the kernel virtual address space
on the amd64 architecture.  The amd64 architecture requires kernel code and
global variables to reside in the highest 2GB of the 64-bit virtual address
space.  Thus, the memory allocated during bootstrap, before the call to
kmem_init(), starts at KERNBASE, which is not necessarily the same as
VM_MIN_KERNEL_ADDRESS on amd64.
2008-06-22 04:54:27 +00:00
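For orientation, a hedged sketch of the two constants involved; the values shown match later FreeBSD amd64 headers and are examples only, not what this commit set.

```c
/* Kernel code and globals must live in the highest 2GB of the 64-bit
 * address space, but kernel VA as a whole begins much lower, so the
 * two constants must be kept distinct on amd64. */
#define KERNBASE		0xffffffff80000000UL	/* top 2GB */
#define VM_MIN_KERNEL_ADDRESS	0xffff800000000000UL	/* start of KVA */
```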
Alan Cox
3202ed7523 Introduce a new parameter "superpage_align" to kmem_suballoc() that is
used to request superpage alignment for the submap.

Request superpage alignment for the kmem_map.

Pass VMFS_ANY_SPACE instead of TRUE to vm_map_find().  (They are currently
equivalent but VMFS_ANY_SPACE is the new preferred spelling.)

Remove a stale comment from kmem_malloc().
2008-05-10 21:46:20 +00:00
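The alignment itself is plain power-of-two arithmetic; a self-contained sketch, with the superpage size and names being illustrative:

```c
#include <stdint.h>
#include <stdio.h>

#define SUPERPAGE_SIZE	(2UL * 1024 * 1024)	/* e.g. 2MB on amd64 */

/* Round a candidate base address up to a superpage boundary. */
static uintptr_t
superpage_align(uintptr_t addr)
{
	return ((addr + SUPERPAGE_SIZE - 1) & ~(SUPERPAGE_SIZE - 1));
}

int
main(void)
{
	uintptr_t base = 0xc0123456UL;

	printf("0x%lx -> 0x%lx\n", (unsigned long)base,
	    (unsigned long)superpage_align(base));
	return (0);
}
```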
Alan Cox
2bc24aa956 Eliminate pointless casts from kmem_suballoc(). 2008-04-28 17:25:27 +00:00
Alan Cox
24dedba9f5 Eliminate an unnecessary printf() from kmem_suballoc(). The subsequent
panic() can be extended to convey the same information.
2008-03-30 20:08:59 +00:00
Pawel Jakub Dawidek
79c2840d1d When one tries to allocate memory with the M_WAITOK flag and we are short
of address space in the kmem map, fire the vm_lowmem event in a loop and wait a
bit for subsystems to reclaim some memory, which in turn will reclaim address
space as well (see the sketch after this entry).

Note, this is a work-around.

Reviewed by:	alc
Approved by:	alc
MFC after:	3 days
2008-01-10 08:36:38 +00:00
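A user-space sketch of the retry loop described above; lowmem_event() and try_alloc() are illustrative stand-ins for the kernel's vm_lowmem event and the kmem allocation attempt.

```c
#include <stddef.h>
#include <stdlib.h>
#include <unistd.h>

static void lowmem_event(void) { /* ask subsystems to reclaim memory */ }
static void *try_alloc(size_t sz) { return (malloc(sz)); }

/* M_WAITOK-style allocation: on shortage, fire the low-memory event,
 * wait a bit for reclamation, and retry instead of failing. */
static void *
alloc_waitok(size_t sz)
{
	void *p;

	while ((p = try_alloc(sz)) == NULL) {
		lowmem_event();
		usleep(100 * 1000);	/* stand-in for msleep(9) */
	}
	return (p);
}
```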
Alan Cox
eb2a051720 Add an access type parameter to pmap_enter(). It will be used to implement
superpage promotion.

Correct a style error in kmem_malloc(): pmap_enter()'s last parameter is
a Boolean.
2008-01-03 07:34:34 +00:00
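A sketch of the widened interface, with stub types so it stands alone; the parameter order is as recalled from this era of FreeBSD and should be treated as illustrative.

```c
/* Stub types; the real ones live in the pmap headers. */
typedef struct pmap *pmap_t;
typedef unsigned long vm_offset_t;
typedef unsigned char vm_prot_t;
typedef struct vm_page *vm_page_t;
typedef int boolean_t;

/* The new "access" argument records which access type caused the
 * mapping, giving pmap the information it needs to decide on
 * superpage promotion later. */
void pmap_enter(pmap_t pmap, vm_offset_t va, vm_prot_t access,
    vm_page_t m, vm_prot_t prot, boolean_t wired);
```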
Pawel Jakub Dawidek
8ce2d00a04 Change the unused 'user_wait' argument to a 'timo' argument, which will be
used to specify a timeout for msleep(9).

Discussed with:	alc
Reviewed by:	alc
2007-11-07 21:56:58 +00:00
Pawel Jakub Dawidek
0f2c2ce0a3 When KVA is exhausted, try the vm_lowmem event one last time before
panicking. This helps a lot with ZFS stability.
2007-04-05 20:52:51 +00:00
Alan Cox
9f5c801b94 Change the way that unmanaged pages are created. Specifically,
immediately flag any page that is allocated to an OBJT_PHYS object as
unmanaged in vm_page_alloc() rather than waiting for a later call to
vm_page_unmanage().  This allows for the elimination of some uses of
the page queues lock.

Change the type of the kernel and kmem objects from OBJT_DEFAULT to
OBJT_PHYS.  This allows us to take advantage of the above change to
simplify the allocation of unmanaged pages in kmem_alloc() and
kmem_malloc().

Remove vm_page_unmanage().  It is no longer used.
2007-02-25 06:14:58 +00:00
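A toy model of the change: the unmanaged flag is applied at allocation time, based on the object's type, so no later vm_page_unmanage() pass is needed. All names are illustrative.

```c
#include <stdlib.h>

#define PG_UNMANAGED	0x1

enum obj_type { OBJT_DEFAULT, OBJT_PHYS };

struct page { int flags; };

static struct page *
page_alloc(enum obj_type type)
{
	struct page *p = calloc(1, sizeof(*p));

	/* Flag the page immediately; no separate unmanage step. */
	if (p != NULL && type == OBJT_PHYS)
		p->flags |= PG_UNMANAGED;
	return (p);
}
```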
Alan Cox
e6eaadba43 Declare the map entry created by kmem_init() for the range from
VM_MIN_KERNEL_ADDRESS to the end of the kernel's bootstrap data as
MAP_NOFAULT.
2007-01-07 07:32:04 +00:00
Alan Cox
0f3b612a06 There is no point in setting PG_REFERENCED on kmem_object pages because
they are "unmanaged", i.e., non-pageable, pages.

Remove a stale comment.
2006-11-13 00:27:02 +00:00
Alan Cox
44b8bd66f9 Make pmap_enter() responsible for setting PG_WRITEABLE instead
of its caller.  (As a beneficial side-effect, a high-contention
acquisition of the page queues lock in vm_fault() is eliminated.)
2006-11-12 21:48:34 +00:00
Alan Cox
66bdd5d619 The page queues lock is no longer required by vm_page_wakeup(). 2006-10-23 05:27:31 +00:00
Warner Losh
60727d8b86 /* -> /*- for license, minor formatting changes 2005-01-07 02:29:27 +00:00
Alan Cox
ddf4bb37c8 Use VM_ALLOC_NOBUSY instead of calling vm_page_wakeup(). 2004-10-24 18:46:32 +00:00
Brian Feldman
0ada205ee6 Back out all behavioral changes. 2004-08-10 14:42:48 +00:00
Brian Feldman
9689d5e5ee Revamp VM map wiring.
* Allow no-fault wiring/unwiring to succeed for consistency;
  however, the wired count remains at zero, so it's a special case.

* Fix issues inside vm_map_wire() and vm_map_unwire() where the
  exact state of user wiring (one or zero) and system wiring
  (zero or more) could be confused; for example, system unwiring
  could succeed in removing a user wire, instead of being an
  error.

* Require all mappings to be unwired before they are deleted.
  When VM space is still wired upon deletion, the deletion waits
  for the subsequent unwire (sketched after this entry).  This makes
  vslock(9) work rather than allowing kernel-locked memory to be
  deleted out from underneath its consumer as it could before.
2004-08-09 19:52:29 +00:00
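A user-space sketch of the delete-waits-for-unwire rule from the last bullet; the condition variable plays the role of the kernel's sleep/wakeup, and all names are illustrative.

```c
#include <pthread.h>

struct mapping {
	pthread_mutex_t	lock;
	pthread_cond_t	unwired;
	int		wired_count;
};

static void
mapping_unwire(struct mapping *m)
{
	pthread_mutex_lock(&m->lock);
	if (--m->wired_count == 0)
		pthread_cond_broadcast(&m->unwired);
	pthread_mutex_unlock(&m->lock);
}

/* Deletion blocks until the mapping is fully unwired, so wired
 * (e.g. vslock'd) memory is never torn down under its consumer. */
static void
mapping_delete(struct mapping *m)
{
	pthread_mutex_lock(&m->lock);
	while (m->wired_count > 0)
		pthread_cond_wait(&m->unwired, &m->lock);
	/* ...safe to tear the mapping down here... */
	pthread_mutex_unlock(&m->lock);
}
```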
Alan Cox
5122b74809 For years, kmem_alloc_pageable() has been misused. Now that the last of
these misuses has been corrected, remove it before new ones appear, such as
arm/arm/pmap.c revision 1.8.
2004-07-25 20:08:59 +00:00
Bosko Milekic
099a0e588c Bring in mbuma to replace mballoc.
mbuma is an Mbuf & Cluster allocator built on top of a number of
extensions to the UMA framework, all included herein.

Extensions to UMA worth noting:
  - Better layering between slab <-> zone caches; introduce
    Keg structure which splits off slab cache away from the
    zone structure and allows multiple zones to be stacked
    on top of a single Keg (single type of slab cache);
    perhaps we should look into defining a subset API on
    top of the Keg for special use by malloc(9),
    for example.
  - UMA_ZONE_REFCNT zones can now be added, and reference
    counters automagically allocated for them within the end
    of the associated slab structures.  uma_find_refcnt()
    does a kextract to fetch the slab struct reference from
    the underlying page and looks up the corresponding refcnt.

mbuma things worth noting:
  - integrates mbuf & cluster allocations with extended UMA
    and provides caches for commonly-allocated items; defines
    several zones (two primary, one secondary) and two kegs.
  - change certain code paths that used to do m_get() + m_clget()
    to instead just use m_getcl(), taking advantage of the newly
    defined secondary Packet zone (modeled in the sketch after
    this entry).
  - netstat(1) and systat(1) quickly hacked up to do basic
    stat reporting but additional stats work needs to be
    done once some other details within UMA have been taken
    care of and it becomes clearer to how stats will work
    within the modified framework.

From the user perspective, one implication is that the
NMBCLUSTERS compile-time option is no longer used.  The
maximum number of clusters is still capped off according
to maxusers, but it can be made unlimited by setting
the kern.ipc.nmbclusters boot-time tunable to zero.
Work should be done to write an appropriate sysctl
handler allowing dynamic tuning of kern.ipc.nmbclusters
at runtime.

Additional things worth noting/known issues (READ):
   - One report of the 'ips' (ServeRAID) driver acting really
     slowly in conjunction with mbuma.  Need more data.
     The latest report is that ips performs equally poorly with
     and without mbuma.
   - A Giant leak in the NFS code sometimes occurs; I can't
     reproduce it but am currently analyzing.  brueffer is
     able to reproduce it, but THIS IS NOT an mbuma-specific
     problem and it currently occurs even WITHOUT mbuma.
   - Issues in network locking: there is at least one
     code path in the rip code where one or more locks
     are acquired and we end up in m_prepend() with
     M_WAITOK, which causes WITNESS to whine from within
     UMA.  Current temporary solution: force all UMA
     allocations to be M_NOWAIT from within UMA for now
     to avoid deadlocks unless WITNESS is defined and we
     can determine with certainty that we're not holding
     any locks when we're M_WAITOK.
   - I've seen at least one weird socketbuffer empty-but-
     mbuf-still-attached panic.  I don't believe this
     to be related to mbuma but please keep your eyes
     open, turn on debugging, and capture crash dumps.

This change removes more code than it adds.

A paper detailing the change and considering various performance
issues is available; it was presented at BSDCan2004:
http://www.unixdaemons.com/~bmilekic/netbuf_bmilekic.pdf
Please read the paper for Future Work and implementation
details, as well as credits.

Testing and Debugging:
    rwatson,
    brueffer,
    Ketrien I. Saihr-Kesenchedra,
    ...
Reviewed by: Lots of people (for different parts)
2004-05-31 21:46:06 +00:00
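A user-space model of the m_get() + m_clget() to m_getcl() call-site change mentioned above; the stub implementations are illustrative, and only the call pattern mirrors the commit.

```c
#include <stdlib.h>

struct mbuf { void *cluster; };

static struct mbuf *
m_get(void)				/* mbuf header only */
{
	return (calloc(1, sizeof(struct mbuf)));
}

static void
m_clget(struct mbuf *m)			/* attach a 2K cluster */
{
	m->cluster = malloc(2048);
}

/* Combined allocation: one trip, which in mbuma is serviced by the
 * secondary "Packet" zone stacked on the mbuf and cluster kegs. */
static struct mbuf *
m_getcl(void)
{
	struct mbuf *m = m_get();

	if (m != NULL)
		m_clget(m);
	return (m);
}
```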
Alan Cox
7ef6ba5d27 Push down the responsibility for zeroing a physical page from the
caller to vm_page_grab().  Although this gives VM_ALLOC_ZERO a
different meaning for vm_page_grab() than for vm_page_alloc(), I feel
such a change is necessary to accomplish other goals.  Specifically, I
want to make the PG_ZERO flag immutable between the time it is
allocated by vm_page_alloc() and freed by vm_page_free() or
vm_page_free_zero() to avoid locking overheads.  Once we gave up on
the ability to automatically recognize a zeroed page upon entry to
vm_page_free(), the ability to mutate the PG_ZERO flag became useless.
Instead, I would like to say that "Once a page becomes valid, its
PG_ZERO flag must be ignored."
2004-04-24 20:53:55 +00:00
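A toy illustration of the new responsibility split: the vm_page_grab() caller zeroes the page itself when it did not come back pre-zeroed, and never mutates PG_ZERO. Names and sizes are illustrative.

```c
#include <string.h>

#define PG_ZERO	0x1

struct page {
	int	flags;
	char	data[4096];
};

/* Caller-side zeroing after a grab with "want zeroed" semantics. */
static void
grab_zeroed(struct page *p)
{
	if ((p->flags & PG_ZERO) == 0)
		memset(p->data, 0, sizeof(p->data));
	/* PG_ZERO is left untouched: once the page is valid, the
	 * flag is immutable and must simply be ignored. */
}
```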
Warner Losh
05eb3785e7 Remove advertising clause from University of California Regent's license,
per letter dated July 22, 1999.

Approved by: core
2004-04-06 20:15:37 +00:00
Dag-Erling Smørgrav
497ddd5807 Back out previous commit due to objections. 2004-02-16 21:36:59 +00:00
Dag-Erling Smørgrav
cbea5fb98f Don't panic if we fail to satisfy an M_WAITOK request; return 0 instead.
The calling code will either handle that gracefully or cause a page fault.
2004-02-16 18:41:58 +00:00
Alan Cox
37d44833d5 Unmanage pages allocated by kmem_alloc(). (There is no point in having PV
entries for these pages.)
2004-01-10 00:22:33 +00:00
Alan Cox
c020e821c7 Don't bother clearing PG_ZERO in contigmalloc1(), kmem_alloc(), or
kmem_malloc().  It serves no purpose.
2004-01-06 20:52:55 +00:00
Alan Cox
ff5dcf2546 - Increase the scope of the kmem_object's lock in kmem_malloc(). Add a
comment explaining why a further increase is not possible.
2004-01-01 19:48:56 +00:00
Alan Cox
53d0a98878 Remove GIANT_REQUIRED from kmem_suballoc(). 2003-12-28 00:10:48 +00:00
Jonathan Mini
8f101a2f31 NFC: Update stale comments.
Reviewed by:	alc
2003-11-10 00:44:00 +00:00
Alan Cox
49c06616ae Synchronize access to a vm page's valid field using the containing
vm object's lock.
2003-10-04 19:13:27 +00:00
Alan Cox
30bb12a4e8 Call vm_page_unmanage() on pages belonging to the kmem_object. This
eliminates the unnecessary overhead of managing "PV" entries for these
pages.
2003-09-14 02:37:59 +00:00
Eivind Eklund
2ae51145e8 Change clean_map from a global to an auto variable 2003-09-01 16:46:47 +00:00
Bruce M Simpson
abd498aa71 Add the mlockall() and munlockall() system calls.
- All those diffs to syscalls.master for each architecture *are*
   necessary. This needed clarification; the stub code generation for
   mlockall() was disabled, which would prevent applications from
   linking to this API (suggested by mux)
 - Giant has been quashed. It is no longer held by the code, as
   the required locking has been pushed down within vm_map.c.
 - Callers must specify VM_MAP_WIRE_HOLESOK or VM_MAP_WIRE_NOHOLES
   to express their intention explicitly.
 - Inspected at the vmstat, top and vm pager sysctl stats level.
   Paging-in activity is occurring correctly, using a test harness.
 - The RES size for a process may appear to be greater than its SIZE.
   This is believed to be due to mappings of the same shared library
   page being wired twice. Further exploration is needed.
 - Believed to back out of allocations and locks correctly
   (tested with WITNESS, MUTEX_PROFILING, INVARIANTS and DIAGNOSTIC).

PR:             kern/43426, standards/54223
Reviewed by:    jake, alc
Approved by:    jake (mentor)
MFC after:	2 weeks
2003-08-11 07:14:08 +00:00
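Userland usage of the new system calls follows the standard POSIX pattern (this may require privilege or a raised memorylocked resource limit):

```c
#include <sys/mman.h>

#include <err.h>

int
main(void)
{
	/* Wire all current and future mappings of this process. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
		err(1, "mlockall");

	/* ...latency-sensitive work, with no page faults... */

	if (munlockall() != 0)
		err(1, "munlockall");
	return (0);
}
```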
Mike Silbersack
cebde06978 More pipe changes:
From alc:
Move pageable pipe memory to a separate kernel submap to avoid awkward
vm map interlocking issues.  (Bad explanation provided by me.)

From me:
Rework pipespace accounting code to handle this new layout, and adjust
our default values to account for the fact that we now have a solid
limit on allocations.

Also, remove the "maxpipes" limit, as it no longer has a purpose.
(The limit on kva usage solves the problem of having too many pipes.)
2003-08-11 05:51:51 +00:00
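A user-space analogue of the size-capped submap: a sub-arena with a hard budget, so one consumer (here, pipes) cannot exhaust the parent map. All names are illustrative.

```c
#include <stddef.h>
#include <stdlib.h>

struct subarena {
	size_t	budget;		/* hard cap, like the pipe submap size */
	size_t	used;
};

static void *
subarena_alloc(struct subarena *a, size_t sz)
{
	if (a->used + sz > a->budget)
		return (NULL);	/* cap reached: fail instead of growing */
	a->used += sz;
	return (malloc(sz));
}
```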
Alan Cox
b77c2bcd98 Update the comment at the head of kmem_alloc_nofault() to describe its
purpose and use.
2003-08-01 19:51:43 +00:00
Alan Cox
f50ab15dff Remove GIANT_REQUIRED from kmem_alloc(). 2003-07-27 18:31:32 +00:00
Alan Cox
8e1e7b93b3 Remove GIANT_REQUIRED from kmem_malloc(). 2003-06-28 22:04:52 +00:00
David E. O'Brien
874651b13c Use __FBSDID(). 2003-06-11 23:50:51 +00:00
Alan Cox
984a95d563 Lock the kernel object in kmem_alloc(). 2003-06-07 23:24:10 +00:00
Alan Cox
acbff226fc Update locking on the kmem_object to use the new macros. 2003-04-15 01:16:05 +00:00
Alan Cox
f31c239da1 Eliminate unnecessary gotos from kmem_malloc(). 2003-04-13 00:23:42 +00:00
Alan Cox
469c4ba59e Allow kmem_malloc() without Giant if M_NOWAIT is specified. 2003-01-04 19:26:35 +00:00
Alan Cox
c9267356b7 - Mark the kernel_map as a system map immediately after its creation.
- Correct a cast.
2002-12-30 05:55:41 +00:00
Alan Cox
a623fedef7 Two changes to kmem_malloc():
- Use VM_ALLOC_WIRED.
 - Perform vm_page_wakeup() after pmap_enter(), like we do everywhere else.
2002-12-28 19:03:54 +00:00
Alan Cox
82ea080d88 - Hold the page queues lock around calls to vm_page_flag_clear(). 2002-12-24 19:02:03 +00:00
Alan Cox
dc907f6632 - Hold the page queues lock around vm_page_wakeup(). 2002-12-24 04:24:58 +00:00
Alan Cox
671e427ce9 Increase the scope of the kmem_object locking in kmem_malloc(). 2002-12-20 18:59:23 +00:00
Alan Cox
d8e7c54e1e Hold the page queues lock when performing vm_page_flag_set(). 2002-12-17 19:55:28 +00:00
Alan Cox
4b36fe0cbd Perform vm_object_lock() and vm_object_unlock() on kmem_object
around vm_page_lookup() and vm_page_free().
2002-12-15 21:09:09 +00:00