Gleb Smirnoff 756a541279 Allocate pager bufs from UMA instead of 80-ish mutex-protected linked list.
o In vm_pager_bufferinit() create pbuf_zone and start accounting for how many
  pbufs we are going to need overall.
  In the various subsystems that are going to use pbufs, create private zones
  via a call to pbuf_zsecond_create() (a usage sketch follows at the end of
  this item). The latter calls uma_zsecond_create() and sets a limit on the
  created zone. After startup, preallocate pbufs according to the requirements
  of all pbuf zones.

  Subsystems that used to have a private limit with the old allocator now have
  private pbuf zones: md(4), fusefs, NFS client, smbfs, VFS cluster, FFS,
  swap, vnode pager.

  The following subsystems use the shared pbuf zone: cam(4), nvme(4),
  physio(9), aio(4). They should have their own private limits, but changing
  that is out of scope for this commit.
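
  A minimal consumer sketch; the subsystem "foo", the zone name "foopbuf"
  and the nswbuf / 4 cap are illustrative assumptions, not code from this
  commit:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/bio.h>
    #include <sys/buf.h>        /* struct buf, nswbuf */
    #include <sys/malloc.h>     /* M_WAITOK */
    #include <vm/vm.h>
    #include <vm/vm_extern.h>
    #include <vm/vm_pager.h>    /* pbuf_zsecond_create() */
    #include <vm/uma.h>

    static uma_zone_t foo_pbuf_zone;

    static void
    foo_init(void)
    {
            /* Private zone on top of pbuf_zone, capped at nswbuf / 4. */
            foo_pbuf_zone = pbuf_zsecond_create("foopbuf", nswbuf / 4);
    }

    static void
    foo_io(void)
    {
            struct buf *bp;

            /* Sleeps until a pbuf is free; no global mutex contention. */
            bp = uma_zalloc(foo_pbuf_zone, M_WAITOK);
            /* ... set up bp and issue the I/O ... */
            uma_zfree(foo_pbuf_zone, bp);
    }

  A private zone created this way shares pbuf_zone's backing storage, so the
  per-subsystem cap plays the role of the old private limit while drawing
  from the common preallocation.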

o Fetch the tunable value of kern.nswbuf in init_param2() and, while here,
  move NSWBUF_MIN to opt_param.h and eliminate opt_swap.h, which held only
  this option (see the sketch below).
  Default values aren't touched by this commit, but they should probably be
  reviewed with respect to modern hardware.
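
  A sketch of the tunable fetch; the fallback formula, the NSWBUF_MIN value
  of 16 and the function name are illustrative assumptions, not defaults
  changed here:

    #include "opt_param.h"      /* now carries NSWBUF_MIN (was opt_swap.h) */

    #include <sys/param.h>
    #include <sys/systm.h>      /* min() */
    #include <sys/kernel.h>     /* TUNABLE_INT_FETCH() */
    #include <sys/buf.h>        /* nbuf, nswbuf */

    #ifndef NSWBUF_MIN
    #define NSWBUF_MIN      16  /* illustrative fallback */
    #endif

    static void
    nswbuf_tunable_sketch(void)
    {
            nswbuf = 0;
            TUNABLE_INT_FETCH("kern.nswbuf", &nswbuf);
            if (nswbuf == 0)
                    nswbuf = min(nbuf / 4, 256);    /* illustrative default */
            if (nswbuf < NSWBUF_MIN)
                    nswbuf = NSWBUF_MIN;
    }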

This change removes a tight bottleneck from the sendfile(2) operation, which
uses pbufs in the vnode pager. Other pagers would also benefit from faster
allocation.

Together with:	gallatin
Tested by:	pho
2019-01-15 01:02:16 +00:00