For 32-bit machines, roll back the default number of vnode pager pbufs

to the level before r343030.  For 64-bit machines, reduce it slightly,
too.  In r343030 I bumped the limit up to the value we use at
Netflix to serve 100 Gbit/s of sendfile traffic, and it probably isn't a
good default.

Provide a loader tunable to change the vnode pager pbuf count, and document it.
commit 66fb0b1ad7 (parent 3c324b9465)
Gleb Smirnoff 2019-02-15 23:36:22 +00:00
Notes: svn2git 2020-12-20 02:59:44 +00:00
svn path=/head/; revision=344188
2 changed files with 33 additions and 5 deletions

lib/libc/sys/sendfile.2

@@ -25,7 +25,7 @@
.\"
.\" $FreeBSD$
.\"
-.Dd January 25, 2019
+.Dd February 15, 2019
.Dt SENDFILE 2
.Os
.Sh NAME
@@ -48,6 +48,7 @@ The
system call
sends a regular file or shared memory object specified by descriptor
.Fa fd
out a stream socket specified by descriptor
.Fa s .
.Pp
@@ -224,6 +225,19 @@ implementation of
.Fn sendfile
is "zero-copy", meaning that it has been optimized so that copying of the file data is avoided.
.Sh TUNING
.Ss physical paging buffers
.Fn sendfile
uses the vnode pager to read file pages into memory.
The pager uses a pool of physical buffers (pbufs) to run its I/O operations.
When the system runs out of pbufs,
.Fn sendfile
will block and report the state
.Dq Li zonelimit .
The size of the pool can be tuned with the
.Va vm.vnode_pbufs
.Xr loader.conf 5
tunable, and can be checked at runtime with the
.Xr sysctl 8
OID of the same name.
.Ss sendfile(2) buffers
On some architectures, this system call internally uses a special
.Fn sendfile
buffer
@@ -279,9 +293,11 @@ buffers usage respectively.
These values may also be viewed through
.Nm netstat Fl m .
.Pp
-If a value of zero is reported for
-.Va kern.ipc.nsfbufs ,
-your architecture does not need to use
+If
+.Xr sysctl 8
+OID
+.Va kern.ipc.nsfbufs
+doesn't exist, your architecture does not need to use
.Fn sendfile
buffers because their task can be efficiently performed
by the generic virtual memory structures.
@@ -363,11 +379,13 @@ does not support
The socket peer has closed the connection.
.El
.Sh SEE ALSO
.Xr netstat 1 ,
.Xr open 2 ,
.Xr send 2 ,
.Xr socket 2 ,
.Xr writev 2 ,
.Xr loader.conf 5 ,
.Xr tuning 7 ,
.Xr sysctl 8
.Rs
.%A K. Elmeleegy

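The tunable documented above is set at boot time; here is a minimal loader.conf(5) fragment as an illustration (the value 2048 is an arbitrary example, not a recommendation):

```
# /boot/loader.conf
# Size of the vnode pager pbuf pool (example value only).
vm.vnode_pbufs="2048"
```

Because the sysctl is CTLFLAG_RDTUN, the value can only be set at boot; at runtime the effective value can be read back with sysctl vm.vnode_pbufs.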
sys/vm/vnode_pager.c

@@ -115,13 +115,23 @@ SYSCTL_PROC(_debug, OID_AUTO, vnode_domainset, CTLTYPE_STRING | CTLFLAG_RW,
&vnode_domainset, 0, sysctl_handle_domainset, "A",
"Default vnode NUMA policy");
+static int nvnpbufs;
+SYSCTL_INT(_vm, OID_AUTO, vnode_pbufs, CTLFLAG_RDTUN | CTLFLAG_NOFETCH,
+    &nvnpbufs, 0, "number of physical buffers allocated for vnode pager");
static uma_zone_t vnode_pbuf_zone;
static void
vnode_pager_init(void *dummy)
{
-	vnode_pbuf_zone = pbuf_zsecond_create("vnpbuf", nswbuf * 8);
+#ifdef __LP64__
+	nvnpbufs = nswbuf * 2;
+#else
+	nvnpbufs = nswbuf / 2;
+#endif
+	TUNABLE_INT_FETCH("vm.vnode_pbufs", &nvnpbufs);
+	vnode_pbuf_zone = pbuf_zsecond_create("vnpbuf", nvnpbufs);
}
SYSINIT(vnode_pager, SI_SUB_CPU, SI_ORDER_ANY, vnode_pager_init, NULL);