Also make kern.maxfilesperproc a boot-time tunable.

Auto-tuning threshold discussions aside, it turns out that if you want
to lower this on, say, large-memory machines, your options are to set
maxusers or kern.maxfiles, or to set kern.maxfilesperproc via sysctl.
The former two are inexact ways to tune this; the latter takes effect
too late to matter for anything in the startup scripts.
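
For example, before this change the available knobs were roughly the
following (the values are purely illustrative):

	# /boot/loader.conf - indirect: maxusers rescales a pile of other
	# limits, and kern.maxfiles only caps the global table
	kern.maxusers="384"
	kern.maxfiles="200000"

	# sysctl(8) / /etc/sysctl.conf - the right knob, but applied too
	# late to matter for the startup scripts
	kern.maxfilesperproc=100000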

This first came up because I wondered why the hell screen would take upwards
of 10 seconds to spawn a new screen.  I then found python doing the same
thing during fork/exec of child processes - it calls close() on every FD
up to the current openfiles limit.  On a 1TB machine that is something like
26 million FDs per process.  Ugh.
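
The pattern in question is roughly the sketch below - a simplified
stand-in for the "close everything before exec" idiom, not the actual
screen or python source (the function name is made up).  FreeBSD has
closefrom(2) to avoid exactly this, but portable code often falls back
to the brute-force loop:

	#include <sys/types.h>
	#include <sys/resource.h>

	#include <unistd.h>

	/*
	 * Close every descriptor from lowfd up to the openfiles soft
	 * limit.  With maxfilesperproc in the tens of millions this
	 * issues millions of close() calls even when only a handful of
	 * descriptors are actually open.
	 */
	static void
	close_all_from(int lowfd)
	{
		struct rlimit rl;
		long fd, maxfd;

		if (getrlimit(RLIMIT_NOFILE, &rl) == 0 &&
		    rl.rlim_cur != RLIM_INFINITY)
			maxfd = (long)rl.rlim_cur;
		else
			maxfd = sysconf(_SC_OPEN_MAX);

		for (fd = lowfd; fd < maxfd; fd++)
			(void)close((int)fd);
	}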

So:

* This allows kern.maxfilesperproc to be set early via /boot/loader.conf
  (see the example after this list);
* It can be used to work around the ridiculous situation of
  screen, python, etc. doing a close() on potentially millions of FDs
  even though you only have four open.
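
With this in place, a single line like the following in /boot/loader.conf
(the value is just an example) caps the per-process limit before anything
in userland starts:

	kern.maxfilesperproc="1048576"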

Tested:

* 4GB, 32GB, 64GB, 128GB, 384GB, 1TB systems with autotune, ensuring
  screen and python forking doesn't result in some pretty hilariously
  bad behaviour.

TODO:

* Note that the default login.conf sets openfiles-cur to unlimited,
  so processes effectively inherit kern.maxfilesperproc as their
  soft limit.  Perhaps we should fix this; a sketch follows after
  this list.

* .. and even if we do, we also need to ensure that daemons get a
  reasonable, capped soft limit - they can request more FDs
  themselves.
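
As a sketch of what the login.conf side of that might look like - the
8192 cap is an arbitrary example, not a proposed default:

	# /etc/login.conf (fragment of the default class)
	default:\
		:openfiles-cur=8192:\
		:openfiles-max=unlimited:

	# then rebuild the login capability database
	cap_mkdb /etc/login.conf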

MFC after:	1 week
Sponsored by:	Norse Corp, Inc.
commit 73868b5389 (parent 7c47a4f9ce)
Author:	adrian
Date:	2015-09-10 04:05:58 +00:00

@@ -265,7 +265,8 @@ init_param2(long physpages)
 	if (maxfiles > (physpages / 4))
 		maxfiles = physpages / 4;
 	maxfilesperproc = (maxfiles / 10) * 9;
+	TUNABLE_INT_FETCH("kern.maxfilesperproc", &maxfilesperproc);
 
 	/*
 	 * Cannot be changed after boot.
 	 */