Commit Graph

103 Commits

Mike Silbersack
cebde06978 More pipe changes:
From alc:
Move pageable pipe memory to a separate kernel submap to avoid awkward
vm map interlocking issues.  (Bad explanation provided by me.)

From me:
Rework pipespace accounting code to handle this new layout, and adjust
our default values to account for the fact that we now have a solid
limit on allocations.

Also, remove the "maxpipes" limit, as it no longer has a purpose.
(The limit on kva usage solves the problem of having too many pipes.)
2003-08-11 05:51:51 +00:00
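A minimal sketch of how such a dedicated submap is carved out at boot; the pipe_map and maxpipekva names are assumed from the commit's intent, not verified against the tree:

    vm_offset_t minaddr, maxaddr;

    /*
     * Reserve a submap of the kernel map for pageable pipe buffers.
     * Once maxpipekva is consumed, pipe allocations fail instead of
     * eating general kernel VA -- the "solid limit" described above.
     */
    pipe_map = kmem_suballoc(kernel_map, &minaddr, &maxaddr, maxpipekva);
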
Alan Cox
b77c2bcd98 Update the comment at the head of kmem_alloc_nofault() to describe its
purpose and use.
2003-08-01 19:51:43 +00:00
Alan Cox
f50ab15dff Remove GIANT_REQUIRED from kmem_alloc(). 2003-07-27 18:31:32 +00:00
Alan Cox
8e1e7b93b3 Remove GIANT_REQUIRED from kmem_malloc(). 2003-06-28 22:04:52 +00:00
David E. O'Brien
874651b13c Use __FBSDID(). 2003-06-11 23:50:51 +00:00
Alan Cox
984a95d563 Lock the kernel object in kmem_alloc(). 2003-06-07 23:24:10 +00:00
Alan Cox
acbff226fc Update locking on the kmem_object to use the new macros. 2003-04-15 01:16:05 +00:00
Alan Cox
f31c239da1 Eliminate unnecessary gotos from kmem_malloc(). 2003-04-13 00:23:42 +00:00
Alan Cox
469c4ba59e Allow kmem_malloc() without Giant if M_NOWAIT is specified. 2003-01-04 19:26:35 +00:00
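The shape of the check this implies, as a sketch (exact placement inside kmem_malloc() assumed):

    /*
     * Sleeping allocations still need Giant; M_NOWAIT callers never
     * block, so they may enter without it.
     */
    if ((flags & M_NOWAIT) == 0)
        GIANT_REQUIRED;
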
Alan Cox
c9267356b7 - Mark the kernel_map as a system map immediately after its creation.
- Correct a cast.
2002-12-30 05:55:41 +00:00
Alan Cox
a623fedef7 Two changes to kmem_malloc():
- Use VM_ALLOC_WIRED.
 - Perform vm_page_wakeup() after pmap_enter(), like we do everywhere else.
2002-12-28 19:03:54 +00:00
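A sketch of the resulting allocation loop, with identifiers borrowed from kmem_malloc() of that era and error handling omitted:

    vm_page_t m;
    vm_offset_t i;

    for (i = 0; i < size; i += PAGE_SIZE) {
        /*
         * VM_ALLOC_WIRED returns an already-wired page, so no
         * separate vm_page_wire() call is needed.
         */
        m = vm_page_alloc(kmem_object, OFF_TO_IDX(offset + i),
            VM_ALLOC_SYSTEM | VM_ALLOC_WIRED);
        pmap_enter(kernel_pmap, addr + i, m, VM_PROT_ALL, 1);
        /* Clear PG_BUSY only once the mapping actually exists. */
        vm_page_lock_queues();
        vm_page_wakeup(m);
        vm_page_unlock_queues();
    }
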
Alan Cox
82ea080d88 - Hold the page queues lock around calls to vm_page_flag_clear(). 2002-12-24 19:02:03 +00:00
Alan Cox
dc907f6632 - Hold the page queues lock around vm_page_wakeup(). 2002-12-24 04:24:58 +00:00
Alan Cox
671e427ce9 Increase the scope of the kmem_object locking in kmem_malloc(). 2002-12-20 18:59:23 +00:00
Alan Cox
d8e7c54e1e Hold the page queues lock when performing vm_page_flag_set(). 2002-12-17 19:55:28 +00:00
Alan Cox
4b36fe0cbd Perform vm_object_lock() and vm_object_unlock() on kmem_object
around vm_page_lookup() and vm_page_free().
2002-12-15 21:09:09 +00:00
Alan Cox
fff6062ab6 o Retire vm_page_zero_fill() and vm_page_zero_fill_area(). Ever since
pmap_zero_page() and pmap_zero_page_area() were modified to accept
   a struct vm_page * instead of a physical address, vm_page_zero_fill()
   and vm_page_zero_fill_area() have served no purpose.
2002-08-25 00:22:31 +00:00
Alan Cox
db44450b11 o Remove the setting and clearing of the PG_MAPPED flag. (This flag is
obsolete.)
2002-08-10 07:11:16 +00:00
Alan Cox
57123de641 o Lock page queue accesses by vm_page_free(). 2002-07-28 04:23:03 +00:00
Alan Cox
e16cfdbea4 o Lock page queue accesses by vm_page_wire(). 2002-07-14 19:36:15 +00:00
Alan Cox
93bc4879e6 o Assert GIANT_REQUIRED on system maps in _vm_map_lock(),
_vm_map_lock_read(), and _vm_map_trylock().  Submitted by: tegge
 o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup().
   (This clears the way for exec_map accesses to move outside of Giant.
   The exec_map is not a system map.)
 o Remove some premature MPSAFE comments.

Reviewed by:	tegge
2002-07-12 23:20:06 +00:00
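A sketch of the assertion added to the lock routines; the underlying lock primitive and the function signature are assumptions:

    void
    _vm_map_lock(vm_map_t map, const char *file, int line)
    {
        /*
         * Only maps marked system_map still rely on Giant here;
         * exec_map is not a system map, so its users may eventually
         * run Giant-free.
         */
        if (map->system_map)
            GIANT_REQUIRED;
        lockmgr(&map->lock, LK_EXCLUSIVE, NULL, curthread);
    }
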
Alan Cox
9688f93163 o Add a "needs wakeup" flag to the vm_map for use by kmem_alloc_wait()
and kmem_free_wakeup().  Previously, kmem_free_wakeup() always
   called wakeup().  In general, no one was sleeping.
 o Export vm_map_unlock_and_wait() and vm_map_wakeup() from vm_map.c
   for use in vm_kern.c.
2002-07-11 02:39:24 +00:00
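This is close to what the resulting kmem_free_wakeup() looks like; the needs_wakeup field name is taken from the commit message and the rest is a sketch:

    void
    kmem_free_wakeup(vm_map_t map, vm_offset_t addr, vm_size_t size)
    {
        vm_map_lock(map);
        (void) vm_map_delete(map, trunc_page(addr),
            round_page(addr + size));
        /*
         * Wake sleepers only if kmem_alloc_wait() recorded one;
         * previously this was an unconditional wakeup().
         */
        if (map->needs_wakeup) {
            map->needs_wakeup = FALSE;
            vm_map_wakeup(map);
        }
        vm_map_unlock(map);
    }
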
Alan Cox
848d14193d o Remove GIANT_REQUIRED from kmem_alloc_pageable(), kmem_alloc_nofault(),
and kmem_free().  (Annotate as MPSAFE.)
 o Remove incorrect casts from kmem_alloc_pageable() and kmem_alloc_nofault().
2002-06-23 18:07:40 +00:00
Jeff Roberson
1e081f889b - Move the computation of pflags out of the page allocation loop in
kmem_malloc()
- Zero-fill pages if the PG_ZERO bit is not set after allocation in kmem_malloc()

Suggested by: alc, jake
2002-06-19 23:49:57 +00:00
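A sketch of the two changes together (flag names from the commit; the surrounding kmem_malloc() code is assumed):

    vm_page_t m;
    vm_offset_t i;
    int pflags;

    /* Computed once, instead of on every loop iteration. */
    if ((flags & (M_NOWAIT | M_USE_RESERVE)) == M_NOWAIT)
        pflags = VM_ALLOC_INTERRUPT;
    else
        pflags = VM_ALLOC_SYSTEM;
    if (flags & M_ZERO)
        pflags |= VM_ALLOC_ZERO;

    for (i = 0; i < size; i += PAGE_SIZE) {
        m = vm_page_alloc(kmem_object, OFF_TO_IDX(offset + i), pflags);
        /*
         * VM_ALLOC_ZERO is only a hint: zero by hand when the page
         * did not come from the pre-zeroed pool.
         */
        if ((flags & M_ZERO) && (m->flags & PG_ZERO) == 0)
            pmap_zero_page(m);
    }
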
Jeff Roberson
95f24639b7 Teach kmem_malloc about M_ZERO. 2002-06-19 20:47:18 +00:00
Alan Cox
1d7cf06c8c o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and
vm_map_user_pageable().
 o Remove vm_map_pageable() and vm_map_user_pageable().
 o Remove vm_map_clear_recursive() and vm_map_set_recursive().  (They were
   only used by vm_map_pageable() and vm_map_user_pageable().)

Reviewed by:	tegge
2002-06-14 18:21:01 +00:00
Peter Wemm
db17c6fc07 Tidy up some loose ends.
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
  (this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace level)
2002-04-29 07:43:16 +00:00
Eivind Eklund
a128794977 - Remove a number of extra newlines that do not belong here according to
style(9)
- Minor space adjustment in cases where we have "( ", " )", if(), return(),
  while(), for(), etc.
- Add /* SYMBOL */ after a few #endifs.

Reviewed by:	alc
2002-03-10 21:52:48 +00:00
Tor Egge
ff91d7800f Revert change in revision 1.53 and add a small comment to protect
the revived code.

vm pages newly allocated are marked busy (PG_BUSY), thus calling
vm_map_delete before the pages have been freed or unbusied will
cause a deadlock since vm_object_page_remove will wait for the
busy flag to be cleared.  This can be triggered by calling malloc
with size > PAGE_SIZE and the M_NOWAIT flag on systems low on
physical free memory.

A kernel module that reproduces the problem, written by Logan Gabriel
<logan@mail.2cactus.com>, can be found in the freebsd-hackers mail
archive (12 Apr 2001).  The problem was recently noticed again by
Archie Cobbs <archie@dellroad.org>.

Reviewed by:	dillon
2002-03-09 16:24:27 +00:00
Luigi Rizzo
60363fb9f7 vm/vm_kern.c: rate limit (to once per second) the diagnostic printf when
you run out of mbuf address space.

kern/subr_mbuf.c: print a warning message when mb_alloc fails, again
	rate-limited to at most once per second. This covers other
	cases of mbuf allocation failures. Probably it also overlaps the
	one handled in vm/vm_kern.c, so maybe the latter should go away.

This warning will let us gradually remove the printfs that are scattered
across most network drivers to report mbuf allocation failures.
Those are potentially dangerous, in that they are not rate-limited and
can easily cause systems to panic.

Unless there is disagreement (which does not seem to be the case
judging from the discussion on -net so far), and because this is
sort of a safety bugfix, I plan to commit a similar change to STABLE
during the weekend (it affects kern/uipc_mbuf.c there).

Discussed-with: jlemon, silby and -net
2001-12-01 00:21:30 +00:00
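One way to express the rate limit is with the kernel's ppsratecheck() helper; whether the commit used this exact helper, and the message text, are assumptions:

    static struct timeval lastfail;
    static int curfail;

    /*
     * At most one diagnostic per second, so a burst of allocation
     * failures cannot itself flood the console.
     */
    if (ppsratecheck(&lastfail, &curfail, 1))
        printf("Out of mbuf address space!\n");
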
John Baldwin
02cd7c3cf2 - Remove asleep(), await(), and M_ASLEEP.
- Callers of asleep() and await() have been converted to calling tsleep().
  The only caller outside of M_ASLEEP was the ata driver, which called both
  asleep() and await() with spl-raised, so there was no need for the
  asleep() and await() pair.  M_ASLEEP was unused.

Reviewed by:	jasone, peter
2001-08-10 06:56:12 +00:00
Matthew Dillon
0cddd8f023 With Alfred's permission, remove vm_mtx in favor of a fine-grained approach
(this commit is just the first stage).  Also add various GIANT_ macros to
formalize the removal of Giant, making it easy to test in a more piecemeal
fashion. These macros will allow us to test fine-grained locks to a degree
before removing Giant, and also after, and to remove Giant in a piecemeal
fashion via sysctl's on those subsystems which the authors believe can
operate without Giant.
2001-07-04 16:20:28 +00:00
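The macro boils down to an assertion (exact spelling in sys/systm.h assumed):

    /*
     * Assert the invariant now; delete the annotation later, once the
     * subsystem is shown to run correctly without Giant.
     */
    #define GIANT_REQUIRED  mtx_assert(&Giant, MA_OWNED)
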
Bosko Milekic
08442f8a82 Introduce numerous SMP friendly changes to the mbuf allocator. Namely,
introduce a modified allocation mechanism for mbufs and mbuf clusters; one
which can scale under SMP and which offers the possibility of resource
reclamation to be implemented in the future. Notable advantages:

 o Reduce contention for SMP by offering per-CPU pools and locks.
 o Better use of data cache due to per-CPU pools.
 o Much less code cache pollution due to excessively large allocation macros.
 o Framework for `grouping' objects from same page together so as to be able
   to possibly free wired-down pages back to the system if they are no longer
   needed by the network stacks.

 Additional things changed with this addition:

  - Moved some mbuf specific declarations and initializations from
    sys/conf/param.c into mbuf-specific code where they belong.
  - m_getclr() has been renamed to m_get_clrd() because the old name is really
    confusing. m_getclr() HAS been preserved though and is defined to the new
    name. No tree sweep has been done "to change the interface," as the old
    name will continue to be supported and is not depracated. The change was
    merely done because m_getclr() sounds too much like "m_get a cluster."
- TEMPORARILY disabled mbtypes statistics display in netstat(1) and
    systat(1) (see TODO below).
  - Fixed systat(1) to display number of "free mbufs" based on new per-CPU
    stat structures.
  - Fixed netstat(1) to display new per-CPU stats based on sysctl-exported
per-CPU stat structures. All information is fetched via sysctl.

 TODO (in order of priority):

  - Re-enable mbtypes statistics in both netstat(1) and systat(1) after
    introducing an SMP friendly way to collect the mbtypes stats under the
    already introduced per-CPU locks (i.e. hopefully don't use atomic() - it
    seems too costly for a mere stat update, especially when other locks are
    already present).
  - Optionally have systat(1) display not only "total free mbufs" but also
    "total free mbufs per CPU pool."
  - Fix minor length-fetching issues in netstat(1) related to recently
    re-enabled option to read mbuf stats from a core file.
  - Move reference counters at least for mbuf clusters into an unused portion
of the cluster itself, to save space and avoid allocating a separate counter.
  - Look into introducing resource freeing possibly from a kproc.

Reviewed by (in parts): jlemon, jake, silby, terry
Tested by: jlemon (Intel & Alpha), mjacob (Intel & Alpha)
Preliminary performance measurements: jlemon (and me, obviously)
URL: http://people.freebsd.org/~bmilekic/mb_alloc/
2001-06-22 06:35:32 +00:00
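A generic sketch of the per-CPU pool idea only; this is not the committed mb_alloc code, and every identifier except struct mbuf's m_next is hypothetical:

    struct mb_pcpu_pool {
        struct mtx   pool_mtx;   /* contention confined to one CPU's pool */
        struct mbuf *pool_head;  /* this CPU's free list */
    };
    static struct mb_pcpu_pool mb_pools[MAXCPU];

    static struct mbuf *
    mb_pool_alloc(void)
    {
        struct mb_pcpu_pool *p;
        struct mbuf *m;

        p = &mb_pools[PCPU_GET(cpuid)];
        mtx_lock(&p->pool_mtx);
        if ((m = p->pool_head) != NULL)
            p->pool_head = m->m_next;
        mtx_unlock(&p->pool_mtx);
        return (m);     /* NULL: fall back to a global pool */
    }
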
Alfred Perlstein
2395531439 Introduce a global lock for the vm subsystem (vm_mtx).
vm_mtx does not recurse and is required for most low level
vm operations.

Faults cannot be taken without holding Giant.

Memory subsystems can now call the base page allocators safely.

Almost all atomic ops were removed as they are covered under the
vm mutex.

Alpha and ia64 now need to catch up to i386's trap handlers.

FFS and NFS have been tested, other filesystems will need minor
changes (grabbing the vm lock when twiddling page properties).

Reviewed (partially) by: jake, jhb
2001-05-19 01:28:09 +00:00
Mark Murray
fb919e4d5a Undo part of the tangle of having sys/lock.h and sys/mutex.h included in
other "system" header files.

Also help the deprecation of lockmgr.h by making it a sub-include of
sys/lock.h and removing sys/lockmgr.h from kernel .c files.

Sort sys/*.h includes where possible in affected files.

OK'ed by:	bde (with reservations)
2001-05-01 08:13:21 +00:00
John Baldwin
e2181d41d0 Add mtx_assert()'s to verify that kmem_alloc() and kmem_free() are called
with Giant held.
2001-01-24 11:27:29 +00:00
Alfred Perlstein
030f23696c fix comment which was outdated 3 years ago
remove useless assignment
purge entire file of 'register' keyword
2000-12-29 13:49:05 +00:00
Alfred Perlstein
6e4f51d1ac clean up kmem_suballoc():
remove useless assignment
remove 'register' variables
2000-12-29 13:05:22 +00:00
Seigo Tanimura
21cd6e6232 - If swap metadata does not fit into the KVM, reduce the number of
struct swblock entries by dividing the number of entries by 2
until the swap metadata fits.

- Reject swapon(2) upon failure of swap_zone allocation.

This is just a temporary fix. Better solutions include:
(suggested by:	dillon)

o reserving swap in SWAP_META_PAGES chunks, and
o swapping the swblock structures themselves.

Reviewed by:	alfred, dillon
2000-12-13 10:01:00 +00:00
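The shape of the temporary fix, assuming the vm_zone API of the era:

    /*
     * Halve the entry count until the metadata zone fits in KVM;
     * swapon(2) later fails cleanly if swap_zone is still NULL.
     */
    do {
        swap_zone = zinit("SWAPMETA", sizeof(struct swblock), n,
            ZONE_INTERRUPT, 1);
        if (swap_zone != NULL)
            break;
        n /= 2;
    } while (n > 0);
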
Peter Wemm
0385347c1a Implement an optimization of the VM<->pmap API. Pass vm_page_t's directly
to various pmap_*() functions instead of looking up the physical address
and passing that.  In many cases, the first thing the pmap code was doing
was going to a lot of trouble to get back the original vm_page_t, or
its shadow pv_table entry.

Inspired by: John Dyson's 1998 patches.

Also:
Eliminate pv_table as a separate thing and build it into a machine
dependent part of vm_page_t.  This eliminates having a separate set of
structures that shadow each other in a 1:1 fashion that we often went to
a lot of trouble to translate from one to the other. (see above)
This happens to save 4 bytes of physical memory for each page in the
system.  (8 bytes on the Alpha).

Eliminate the use of the phys_avail[] array to determine if a page is
managed (ie: it has pv_entries etc).  Store this information in a flag.
Things like device_pager set it because they create vm_page_t's on the
fly that do not have pv_entries.  This makes it easier to "unmanage" a
page of physical memory (this will be taken advantage of in subsequent
commits).

Add a function to add a new page to the freelist.  This could be used
for reclaiming the previously wasted pages left over from preloaded
loader(8) files.

Reviewed by:	dillon
2000-05-21 12:50:18 +00:00
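With pmap_zero_page() as a representative example, the before/after of the API change:

    /*
     * Before: the caller resolved a physical address, and the pmap
     * layer often worked its way back to the vm_page_t.
     */
    pmap_zero_page(VM_PAGE_TO_PHYS(m));

    /*
     * After: pass the page itself; the physical address (and the
     * machine-dependent pv data now embedded in vm_page_t) is one
     * dereference away.
     */
    pmap_zero_page(m);
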
Philippe Charnier
5929bcfaba Revert spelling mistake I made in the previous commit
Requested by: Alan and Bruce
2000-03-27 20:41:17 +00:00
Philippe Charnier
956f31353c Spelling 2000-03-26 15:20:23 +00:00
Poul-Henning Kamp
923502ff91 useracc() the prequel:
Merge the contents (less some trivial comments bordering on the silly)
of <vm/vm_prot.h> and <vm/vm_inherit.h> into <vm/vm.h>.  This puts
the #defines for the vm_inherit_t and vm_prot_t types next to their
typedefs.

This paves the road for the commit to follow shortly: change
useracc() to use VM_PROT_{READ|WRITE} rather than B_{READ|WRITE}
as argument.
1999-10-29 18:09:36 +00:00
Alan Cox
02577fa23e Remove the last vestiges of "vm_map_t phys_map". It's been unused
since i386/i386/machdep.c rev 1.45 (or 1994 :-) ).
1999-10-29 05:17:20 +00:00
Peter Wemm
c3aac50f28 $Id$ -> $FreeBSD$ 1999-08-28 01:08:13 +00:00
Alan Cox
76782487f3 Remove the declarations for "vm_map_t io_map". It's been unused
since i386/i386/machdep rev 1.310, i.e., the demise of BOUNCE_BUFFERS.
1999-08-15 23:55:46 +00:00
Alan Cox
aecb0ebbac Remove the declarations for "vm_map_t u_map". It's been unused
since i386/i386/pmap rev 1.190.  (The alpha never used it.)
1999-08-15 21:55:20 +00:00
Peter Wemm
3efc015bae Fix some int/long printf problems for the Alpha 1999-07-01 19:53:43 +00:00
Dmitrij Tejblum
a839bdc8af Add a function kmem_alloc_nofault() - same as kmem_alloc_pageable(), but
create a nofault entry. It will be used to allocate kmem for upages.

(I am not too happy with all this, but it's better than nothing).
1999-06-08 17:03:28 +00:00
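A sketch of the new function, mirroring kmem_alloc_pageable() except for the MAP_NOFAULT flag (vm_map_find() signature as of that era assumed):

    vm_offset_t
    kmem_alloc_nofault(vm_map_t map, vm_size_t size)
    {
        vm_offset_t addr;
        int result;

        size = round_page(size);
        addr = vm_map_min(map);
        /* MAP_NOFAULT: the entry may never be backed by faulted-in pages. */
        result = vm_map_find(map, NULL, (vm_ooffset_t)0, &addr, size,
            TRUE, VM_PROT_ALL, VM_PROT_ALL, MAP_NOFAULT);
        if (result != KERN_SUCCESS)
            return (0);
        return (addr);
    }
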
Alan Cox
c7003c6991 Correct a problem in kmem_malloc: A kmem_malloc allowing "wait" may
block (VM_WAIT) holding the map lock.  This is bad.  For example, a subsequent
kmem_malloc by an interrupt handler on the same map may find the lock held
and panic in the lockmgr.
1999-03-16 07:39:07 +00:00
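The shape of the fix: drop the map lock before sleeping, then retry (identifiers assumed):

    retry:
        m = vm_page_alloc(kmem_object, OFF_TO_IDX(offset + i),
            VM_ALLOC_SYSTEM);
        if (m == NULL && waitflag == M_WAITOK) {
            /*
             * Never sleep in VM_WAIT with the map lock held: an
             * interrupt-time kmem_malloc() against the same map would
             * find the lock held and panic in lockmgr.
             */
            vm_map_unlock(map);
            VM_WAIT;
            vm_map_lock(map);
            goto retry;
        }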