/*-
 * SPDX-License-Identifier: BSD-3-Clause
 *
 * Copyright (c) 1989, 1993
 *	The Regents of the University of California.  All rights reserved.
 *
 * This code is derived from software contributed to Berkeley by
 * Rick Macklem at The University of Guelph.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_inet6.h"
#include "opt_kgssapi.h"

#include <fs/nfs/nfsport.h>

#include <rpc/rpc.h>
#include <rpc/rpcsec_gss.h>

#include <nfs/nfs_fha.h>
/*
 * Revamp the old NFS server's File Handle Affinity (FHA) code so that
 * it will work with either the old or new server.
 *
 * The FHA code keeps a cache of currently active file handles for
 * NFSv2 and v3 requests, so that read and write requests for the same
 * file are directed to the same group of threads (reads) or thread
 * (writes).  It does not currently work for NFSv4 requests.  They are
 * more complex, and will take more work to support.
 *
 * This improves read-ahead performance, especially with ZFS, if the
 * FHA tuning parameters are configured appropriately.  Without the
 * FHA code, concurrent reads that are part of a sequential read from
 * a file will be directed to separate NFS threads.  This has the
 * effect of confusing the ZFS zfetch (prefetch) code and makes
 * sequential reads significantly slower with clients like Linux that
 * do a lot of prefetching.
 *
 * The FHA code has also been updated to direct write requests to nearby
 * file offsets to the same thread in the same way it batches reads,
 * and the FHA code will now also send writes to multiple threads when
 * needed.
 *
 * This improves sequential write performance in ZFS, because writes
 * to a file are now more ordered.  Since NFS writes (generally
 * less than 64K) are smaller than the typical ZFS record size
 * (usually 128K), out of order NFS writes to the same block can
 * trigger a read in ZFS.  Sending them down the same thread increases
 * the odds of their being in order.
 *
 * In order for multiple write threads per file in the FHA code to be
 * useful, writes in the NFS server have been changed to use a LK_SHARED
 * vnode lock, and upgrade that to LK_EXCLUSIVE if the filesystem
 * doesn't allow multiple writers to a file at once.  ZFS is currently
 * the only filesystem that allows multiple writers to a file, because
 * it has internal file range locking.  This change does not affect the
 * NFSv4 code.
 *
 * This improves random write performance to a single file in ZFS, since
 * we can now have multiple writers inside ZFS at one time.
 *
 * I have changed the default tuning parameters to a 22 bit (4MB)
 * window size (from 256K) and unlimited commands per thread as a
 * result of my benchmarking with ZFS.
 *
 * The FHA code has been updated to allow configuring the tuning
 * parameters from loader tunable variables in addition to sysctl
 * variables.  The read offset window calculation has been slightly
 * modified as well.  Instead of having separate bins, each file
 * handle has a rolling window of bin_shift size.  This minimizes
 * glitches in throughput when shifting from one bin to another.
 *
 * sys/conf/files:
 *	Add nfs_fha_new.c and nfs_fha_old.c.  Compile nfs_fha.c
 *	when either the old or the new NFS server is built.
 *
 * sys/fs/nfs/nfsport.h,
 * sys/fs/nfs/nfs_commonport.c:
 *	Bring in changes from Rick Macklem to newnfs_realign() that
 *	allow it to operate in blocking (M_WAITOK) or non-blocking
 *	(M_NOWAIT) mode.
 *
 * sys/fs/nfs/nfs_commonsubs.c,
 * sys/fs/nfs/nfs_var.h:
 *	Bring in a change from Rick Macklem to allow telling
 *	nfsm_dissect() whether or not to wait for mallocs.
 *
 * sys/fs/nfs/nfsm_subs.h:
 *	Bring in changes from Rick Macklem to create a new
 *	nfsm_dissect_nonblock() inline function and
 *	NFSM_DISSECT_NONBLOCK() macro.
 *
 * sys/fs/nfs/nfs_commonkrpc.c,
 * sys/fs/nfsclient/nfs_clkrpc.c:
 *	Add the malloc wait flag to a newnfs_realign() call.
 *
 * sys/fs/nfsserver/nfs_nfsdkrpc.c:
 *	Set up the new NFS server's RPC thread pool so that it will
 *	call the FHA code.  Add the malloc flag argument to
 *	newnfs_realign().  Unstaticize newnfs_nfsv3_procid[] so that
 *	we can use it in the FHA code.
 *
 * sys/fs/nfsserver/nfs_nfsdsocket.c:
 *	In nfsrvd_dorpc(), add NFSPROC_WRITE to the list of RPC types
 *	that use the LK_SHARED lock type.
 *
 * sys/fs/nfsserver/nfs_nfsdport.c:
 *	In nfsd_fhtovp(), if we're starting a write, check to see
 *	whether the underlying filesystem supports shared writes.
 *	If not, upgrade the lock type from LK_SHARED to LK_EXCLUSIVE.
 *
 * sys/nfsserver/nfs_fha.c:
 *	Remove all code that is specific to the NFS server
 *	implementation.  Anything that is server-specific is now
 *	accessed through a callback supplied by that server's FHA
 *	shim in the new softc.
 *
 *	There are now separate sysctls and tunables for the FHA
 *	implementations for the old and new NFS servers.  The new
 *	NFS server has its tunables under vfs.nfsd.fha; the old
 *	NFS server's tunables are under vfs.nfsrv.fha as before.
 *
 *	In fha_extract_info(), use callouts for all server-specific
 *	code.  Getting file handles and offsets is now done in the
 *	individual server's shim module.
 *
 *	In fha_hash_entry_choose_thread(), change the way we decide
 *	whether two reads are in proximity to each other.
 *	Previously, the calculation was a simple shift operation to
 *	see whether the offsets were in the same power of 2 bucket.
 *	The issue was that there would be a bucket (and therefore
 *	thread) transition, even if the reads were in close
 *	proximity.  When there is a thread transition, reads wind
 *	up going somewhat out of order, and ZFS gets confused.
 *	The new calculation simply tries to see whether the offsets
 *	are within 1 << bin_shift of each other.  If they are, the
 *	reads will be sent to the same thread.
 *
 *	The effect of this change is that for sequential reads, if
 *	the client doesn't exceed the max_reqs_per_nfsd parameter
 *	and the bin_shift is set to a reasonable value (22, or
 *	4MB, works well in my tests), the reads in any sequential
 *	stream will largely be confined to a single thread.
 *
 *	Change fha_assign() so that it takes a softc argument.  It
 *	is now called from the individual server's shim code, which
 *	will pass in the softc.
 *
 *	Change fhe_stats_sysctl() so that it takes a softc
 *	parameter.  It is now called from the individual server's
 *	shim code.  Add the current offset to the list of things
 *	printed out about each active thread.
 *
 *	Change the num_reads and num_writes counters in the
 *	fha_hash_entry structure to 32-bit values, and rename them
 *	num_rw and num_exclusive, respectively, to reflect their
 *	changed usage.
 *
 *	Add an enable sysctl and tunable that allows the user to
 *	disable the FHA code (when vfs.XXX.fha.enable = 0).  This
 *	is useful for before/after performance comparisons.
 *
 * nfs_fha.h:
 *	Move most structure definitions out of nfs_fha.c and into
 *	the header file, so that the individual server shims can
 *	see them.
 *
 *	Change the default bin_shift to 22 (4MB) instead of 18
 *	(256K).  Allow unlimited commands per thread.
 *
 * sys/nfsserver/nfs_fha_old.c,
 * sys/nfsserver/nfs_fha_old.h,
 * sys/fs/nfsserver/nfs_fha_new.c,
 * sys/fs/nfsserver/nfs_fha_new.h:
 *	Add shims for the old and new NFS servers to interface with
 *	the FHA code.  The shims contain all of the code and
 *	definitions that are specific to the NFS servers.  They set
 *	up the server-specific callbacks and set the server name
 *	for the sysctl and loader tunable variables.
 *
 * sys/nfsserver/nfs_srvkrpc.c:
 *	Configure the RPC code to call fhaold_assign() instead of
 *	fha_assign().
 *
 * sys/modules/nfsd/Makefile:
 *	Add nfs_fha.c and nfs_fha_new.c.
 *
 * sys/modules/nfsserver/Makefile:
 *	Add nfs_fha_old.c.
 *
 * Reviewed by:	rmacklem
 * Sponsored by:	Spectra Logic
 * MFC after:	2 weeks
 */
#include <fs/nfsserver/nfs_fha_new.h>

#include <security/mac/mac_framework.h>

NFSDLOCKMUTEX;
NFSV4ROOTLOCKMUTEX;
struct nfsv4lock nfsd_suspend_lock;

/*
 * Mapping of old NFS Version 2 RPC numbers to generic numbers.
 */
int newnfs_nfsv3_procid[NFS_V3NPROCS] = {
	NFSPROC_NULL,
	NFSPROC_GETATTR,
	NFSPROC_SETATTR,
	NFSPROC_NOOP,
	NFSPROC_LOOKUP,
	NFSPROC_READLINK,
	NFSPROC_READ,
	NFSPROC_NOOP,
	NFSPROC_WRITE,
	NFSPROC_CREATE,
	NFSPROC_REMOVE,
	NFSPROC_RENAME,
	NFSPROC_LINK,
	NFSPROC_SYMLINK,
	NFSPROC_MKDIR,
	NFSPROC_RMDIR,
	NFSPROC_READDIR,
	NFSPROC_FSSTAT,
	NFSPROC_NOOP,
	NFSPROC_NOOP,
	NFSPROC_NOOP,
	NFSPROC_NOOP,
};

SYSCTL_DECL(_vfs_nfsd);

SVCPOOL *nfsrvd_pool;

static int nfs_privport = 0;
SYSCTL_INT(_vfs_nfsd, OID_AUTO, nfs_privport, CTLFLAG_RWTUN,
    &nfs_privport, 0,
    "Only allow clients using a privileged port for NFSv2 and 3");

static int nfs_minvers = NFS_VER2;
SYSCTL_INT(_vfs_nfsd, OID_AUTO, server_min_nfsvers, CTLFLAG_RWTUN,
    &nfs_minvers, 0, "The lowest version of NFS handled by the server");

static int nfs_maxvers = NFS_VER4;
SYSCTL_INT(_vfs_nfsd, OID_AUTO, server_max_nfsvers, CTLFLAG_RWTUN,
    &nfs_maxvers, 0, "The highest version of NFS handled by the server");

static int nfs_proc(struct nfsrv_descript *, u_int32_t, SVCXPRT *xprt,
    struct nfsrvcache **);

extern u_long sb_max_adj;
extern int newnfs_numnfsd;
extern struct proc *nfsd_master_proc;
/*
 * NFS server system calls
 */

static void
nfssvc_program(struct svc_req *rqst, SVCXPRT *xprt)
{
	struct nfsrv_descript nd;
	struct nfsrvcache *rp = NULL;
	int cacherep, credflavor;

	memset(&nd, 0, sizeof(nd));
	if (rqst->rq_vers == NFS_VER2) {
		if (rqst->rq_proc > NFSV2PROC_STATFS ||
		    newnfs_nfsv3_procid[rqst->rq_proc] == NFSPROC_NOOP) {
			svcerr_noproc(rqst);
			svc_freereq(rqst);
			goto out;
		}
		nd.nd_procnum = newnfs_nfsv3_procid[rqst->rq_proc];
		nd.nd_flag = ND_NFSV2;
	} else if (rqst->rq_vers == NFS_VER3) {
		if (rqst->rq_proc >= NFS_V3NPROCS) {
			svcerr_noproc(rqst);
			svc_freereq(rqst);
			goto out;
		}
		nd.nd_procnum = rqst->rq_proc;
		nd.nd_flag = ND_NFSV3;
	} else {
		if (rqst->rq_proc != NFSPROC_NULL &&
		    rqst->rq_proc != NFSV4PROC_COMPOUND) {
			svcerr_noproc(rqst);
			svc_freereq(rqst);
			goto out;
		}
		nd.nd_procnum = rqst->rq_proc;
		nd.nd_flag = ND_NFSV4;
	}

	/*
	 * Note: we want rq_addr, not svc_getrpccaller for nd_nam2 -
	 * NFS_SRVMAXDATA uses a NULL value for nd_nam2 to detect TCP
	 * mounts.
	 */
	nd.nd_mrep = rqst->rq_args;
	rqst->rq_args = NULL;
	newnfs_realign(&nd.nd_mrep, M_WAITOK);
	nd.nd_md = nd.nd_mrep;
	nd.nd_dpos = mtod(nd.nd_md, caddr_t);
	nd.nd_nam = svc_getrpccaller(rqst);
	nd.nd_nam2 = rqst->rq_addr;
	nd.nd_mreq = NULL;
	nd.nd_cred = NULL;

	if (nfs_privport && (nd.nd_flag & ND_NFSV4) == 0) {
		/* Check if source port is privileged */
		u_short port;
		struct sockaddr *nam = nd.nd_nam;
		struct sockaddr_in *sin;

		sin = (struct sockaddr_in *)nam;
		/*
		 * INET/INET6 - same code:
		 * sin_port and sin6_port are at same offset
		 */
		port = ntohs(sin->sin_port);
		if (port >= IPPORT_RESERVED &&
		    nd.nd_procnum != NFSPROC_NULL) {
#ifdef INET6
			char buf[INET6_ADDRSTRLEN];
#else
			char buf[INET_ADDRSTRLEN];
#endif
#ifdef INET6
#if defined(KLD_MODULE)
/* Do not use ip6_sprintf: the nfs module should work without INET6. */
#define	ip6_sprintf(buf, a)						\
	(sprintf((buf), "%x:%x:%x:%x:%x:%x:%x:%x",			\
	    (a)->s6_addr16[0], (a)->s6_addr16[1],			\
	    (a)->s6_addr16[2], (a)->s6_addr16[3],			\
	    (a)->s6_addr16[4], (a)->s6_addr16[5],			\
	    (a)->s6_addr16[6], (a)->s6_addr16[7]),			\
	    (buf))
#endif
#endif
			printf("NFS request from unprivileged port (%s:%d)\n",
#ifdef INET6
			    sin->sin_family == AF_INET6 ?
			    ip6_sprintf(buf, &satosin6(sin)->sin6_addr) :
#if defined(KLD_MODULE)
#undef ip6_sprintf
#endif
#endif
			    inet_ntoa_r(sin->sin_addr, buf), port);
			svcerr_weakauth(rqst);
			svc_freereq(rqst);
			m_freem(nd.nd_mrep);
			goto out;
		}
	}

	if (nd.nd_procnum != NFSPROC_NULL) {
		if (!svc_getcred(rqst, &nd.nd_cred, &credflavor)) {
			svcerr_weakauth(rqst);
			svc_freereq(rqst);
			m_freem(nd.nd_mrep);
			goto out;
		}

		/* Set the flag based on credflavor */
		if (credflavor == RPCSEC_GSS_KRB5) {
			nd.nd_flag |= ND_GSS;
		} else if (credflavor == RPCSEC_GSS_KRB5I) {
			nd.nd_flag |= (ND_GSS | ND_GSSINTEGRITY);
		} else if (credflavor == RPCSEC_GSS_KRB5P) {
			nd.nd_flag |= (ND_GSS | ND_GSSPRIVACY);
		} else if (credflavor != AUTH_SYS) {
			svcerr_weakauth(rqst);
			svc_freereq(rqst);
			m_freem(nd.nd_mrep);
			goto out;
		}

#ifdef MAC
		mac_cred_associate_nfsd(nd.nd_cred);
#endif
		/*
		 * Get a refcnt (shared lock) on nfsd_suspend_lock.
		 * NFSSVC_SUSPENDNFSD will take an exclusive lock on
		 * nfsd_suspend_lock to suspend these threads.
		 * The call to nfsv4_lock() that precedes nfsv4_getref()
		 * ensures that the acquisition of the exclusive lock
		 * takes priority over acquisition of the shared lock by
		 * waiting for any exclusive lock request to complete.
		 * This must be done here, before the check of
		 * nfsv4root exports by nfsvno_v4rootexport().
		 */
		NFSLOCKV4ROOTMUTEX();
		nfsv4_lock(&nfsd_suspend_lock, 0, NULL, NFSV4ROOTLOCKMUTEXPTR,
		    NULL);
		nfsv4_getref(&nfsd_suspend_lock, NULL, NFSV4ROOTLOCKMUTEXPTR,
		    NULL);
		NFSUNLOCKV4ROOTMUTEX();

		if ((nd.nd_flag & ND_NFSV4) != 0) {
			nd.nd_repstat = nfsvno_v4rootexport(&nd);
			if (nd.nd_repstat != 0) {
				NFSLOCKV4ROOTMUTEX();
				nfsv4_relref(&nfsd_suspend_lock);
				NFSUNLOCKV4ROOTMUTEX();
				svcerr_weakauth(rqst);
				svc_freereq(rqst);
				m_freem(nd.nd_mrep);
				goto out;
			}
		}

		cacherep = nfs_proc(&nd, rqst->rq_xid, xprt, &rp);
		NFSLOCKV4ROOTMUTEX();
		nfsv4_relref(&nfsd_suspend_lock);
		NFSUNLOCKV4ROOTMUTEX();
	} else {
		NFSMGET(nd.nd_mreq);
		nd.nd_mreq->m_len = 0;
		cacherep = RC_REPLY;
	}
	if (nd.nd_mrep != NULL)
		m_freem(nd.nd_mrep);

	if (nd.nd_cred != NULL)
		crfree(nd.nd_cred);

	if (cacherep == RC_DROPIT) {
		if (nd.nd_mreq != NULL)
			m_freem(nd.nd_mreq);
		svc_freereq(rqst);
		goto out;
	}

	if (nd.nd_mreq == NULL) {
		svcerr_decode(rqst);
		svc_freereq(rqst);
		goto out;
	}

	if (nd.nd_repstat & NFSERR_AUTHERR) {
		svcerr_auth(rqst, nd.nd_repstat & ~NFSERR_AUTHERR);
		if (nd.nd_mreq != NULL)
			m_freem(nd.nd_mreq);
	} else if (!svc_sendreply_mbuf(rqst, nd.nd_mreq)) {
		svcerr_systemerr(rqst);
	}
	if (rp != NULL) {
		nfsrvd_sentcache(rp, (rqst->rq_reply_seq != 0 ||
		    SVC_ACK(xprt, NULL)), rqst->rq_reply_seq);
	}
	svc_freereq(rqst);

out:
	td_softdep_cleanup(curthread);
	NFSEXITCODE(0);
}

/*
 * Check the cache and, optionally, do the RPC.
 * Return the appropriate cache response.
 */
static int
nfs_proc(struct nfsrv_descript *nd, u_int32_t xid, SVCXPRT *xprt,
    struct nfsrvcache **rpp)
{
	struct thread *td = curthread;
	int cacherep = RC_DOIT, isdgram, taglen = -1;
	struct mbuf *m;
	u_char tag[NFSV4_SMALLSTR + 1], *tagstr = NULL;
	u_int32_t minorvers = 0;
	uint32_t ack;

	*rpp = NULL;
	if (nd->nd_nam2 == NULL) {
		nd->nd_flag |= ND_STREAMSOCK;
		isdgram = 0;
	} else {
		isdgram = 1;
	}

	/*
	 * Two cases:
	 * 1 - For NFSv2 over UDP, if we are near our malloc/mget
	 *     limit, just drop the request. There is no
	 *     NFSERR_RESOURCE or NFSERR_DELAY for NFSv2 and the
	 *     client will timeout/retry over UDP in a little while.
	 * 2 - nd_repstat == 0 && nd_mreq == NULL, which
	 *     means a normal nfs rpc, so check the cache
	 */
	if ((nd->nd_flag & ND_NFSV2) && nd->nd_nam2 != NULL &&
	    nfsrv_mallocmget_limit()) {
		cacherep = RC_DROPIT;
	} else {
		/*
		 * For NFSv3, play it safe and assume that the client is
		 * doing retries on the same TCP connection.
		 */
		if ((nd->nd_flag & (ND_NFSV4 | ND_STREAMSOCK)) ==
		    ND_STREAMSOCK)
			nd->nd_flag |= ND_SAMETCPCONN;
		nd->nd_retxid = xid;
		nd->nd_tcpconntime = NFSD_MONOSEC;
		nd->nd_sockref = xprt->xp_sockref;
		if ((nd->nd_flag & ND_NFSV4) != 0)
			nfsd_getminorvers(nd, tag, &tagstr, &taglen,
			    &minorvers);
		if ((nd->nd_flag & ND_NFSV41) != 0)
			/* NFSv4.1 caches replies in the session slots. */
			cacherep = RC_DOIT;
		else {
			cacherep = nfsrvd_getcache(nd);
			ack = 0;
			SVC_ACK(xprt, &ack);
			nfsrc_trimcache(xprt->xp_sockref, ack, 0);
		}
	}

	/*
	 * Handle the request. There are three cases.
	 * RC_DOIT - do the RPC
	 * RC_REPLY - return the reply already created
	 * RC_DROPIT - just throw the request away
	 */
	if (cacherep == RC_DOIT) {
		if ((nd->nd_flag & ND_NFSV41) != 0)
			nd->nd_xprt = xprt;
		nfsrvd_dorpc(nd, isdgram, tagstr, taglen, minorvers, td);
		if ((nd->nd_flag & ND_NFSV41) != 0) {
			if (nd->nd_repstat != NFSERR_REPLYFROMCACHE &&
			    (nd->nd_flag & ND_SAVEREPLY) != 0) {
				/* Cache a copy of the reply. */
				m = m_copym(nd->nd_mreq, 0, M_COPYALL,
				    M_WAITOK);
			} else
				m = NULL;
			if ((nd->nd_flag & ND_HASSEQUENCE) != 0)
				nfsrv_cache_session(nd->nd_sessionid,
				    nd->nd_slotid, nd->nd_repstat, &m);
			if (nd->nd_repstat == NFSERR_REPLYFROMCACHE)
				nd->nd_repstat = 0;
			cacherep = RC_REPLY;
		} else {
			if (nd->nd_repstat == NFSERR_DONTREPLY)
				cacherep = RC_DROPIT;
			else
				cacherep = RC_REPLY;
			*rpp = nfsrvd_updatecache(nd);
		}
	}
	if (tagstr != NULL && taglen > NFSV4_SMALLSTR)
		free(tagstr, M_TEMP);

	NFSEXITCODE2(0, nd);
	return (cacherep);
}

static void
nfssvc_loss(SVCXPRT *xprt)
{
	uint32_t ack;

	ack = 0;
	SVC_ACK(xprt, &ack);
	nfsrc_trimcache(xprt->xp_sockref, ack, 1);
}

/*
 * Adds a socket to the list for servicing by nfsds.
 */
int
nfsrvd_addsock(struct file *fp)
{
	int siz;
	struct socket *so;
	int error = 0;
	SVCXPRT *xprt;
	static u_int64_t sockref = 0;

	so = fp->f_data;

	siz = sb_max_adj;
	error = soreserve(so, siz, siz);
	if (error)
		goto out;

	/*
	 * Steal the socket from userland so that it doesn't close
	 * unexpectedly.
	 */
	if (so->so_type == SOCK_DGRAM)
		xprt = svc_dg_create(nfsrvd_pool, so, 0, 0);
	else
		xprt = svc_vc_create(nfsrvd_pool, so, 0, 0);
	if (xprt) {
		fp->f_ops = &badfileops;
		fp->f_data = NULL;
		xprt->xp_sockref = ++sockref;
		if (nfs_minvers == NFS_VER2)
			svc_reg(xprt, NFS_PROG, NFS_VER2, nfssvc_program,
			    NULL);
		if (nfs_minvers <= NFS_VER3 && nfs_maxvers >= NFS_VER3)
			svc_reg(xprt, NFS_PROG, NFS_VER3, nfssvc_program,
			    NULL);
		if (nfs_maxvers >= NFS_VER4)
			svc_reg(xprt, NFS_PROG, NFS_VER4, nfssvc_program,
			    NULL);
		if (so->so_type == SOCK_STREAM)
			svc_loss_reg(xprt, nfssvc_loss);
		SVC_RELEASE(xprt);
	}

out:
	NFSEXITCODE(error);
	return (error);
}

/*
 * Called by nfssvc() for nfsds. Just loops around servicing rpc requests
 * until it is killed by a signal.
 */
int
nfsrvd_nfsd(struct thread *td, struct nfsd_nfsd_args *args)
{
	char principal[MAXHOSTNAMELEN + 5];
	struct proc *p;
	int error = 0;
	bool_t ret2, ret3, ret4;

	error = copyinstr(args->principal, principal, sizeof (principal),
	    NULL);
	if (error)
		goto out;

	/*
	 * Only the first nfsd actually does any work. The RPC code
	 * adds threads to it as needed. Any extra processes offered
	 * by nfsd just exit. If nfsd is new enough, it will call us
	 * once with a structure that specifies how many threads to
	 * use.
	 */
	NFSD_LOCK();
	if (newnfs_numnfsd == 0) {
		p = td->td_proc;
		PROC_LOCK(p);
		p->p_flag2 |= P2_AST_SU;
		PROC_UNLOCK(p);
		newnfs_numnfsd++;

		NFSD_UNLOCK();

		/* An empty string implies AUTH_SYS only. */
		if (principal[0] != '\0') {
			ret2 = rpc_gss_set_svc_name_call(principal,
			    "kerberosv5", GSS_C_INDEFINITE, NFS_PROG, NFS_VER2);
			ret3 = rpc_gss_set_svc_name_call(principal,
			    "kerberosv5", GSS_C_INDEFINITE, NFS_PROG, NFS_VER3);
			ret4 = rpc_gss_set_svc_name_call(principal,
			    "kerberosv5", GSS_C_INDEFINITE, NFS_PROG, NFS_VER4);

			if (!ret2 || !ret3 || !ret4)
				printf("nfsd: can't register svc name\n");
		}

		nfsrvd_pool->sp_minthreads = args->minthreads;
		nfsrvd_pool->sp_maxthreads = args->maxthreads;

		svc_run(nfsrvd_pool);

		if (principal[0] != '\0') {
			rpc_gss_clear_svc_name_call(NFS_PROG, NFS_VER2);
			rpc_gss_clear_svc_name_call(NFS_PROG, NFS_VER3);
			rpc_gss_clear_svc_name_call(NFS_PROG, NFS_VER4);
		}

		NFSD_LOCK();
		newnfs_numnfsd--;
		nfsrvd_init(1);
		PROC_LOCK(p);
		p->p_flag2 &= ~P2_AST_SU;
		PROC_UNLOCK(p);
	}
	NFSD_UNLOCK();

out:
	NFSEXITCODE(error);
	return (error);
}

/*
 * Initialize the data structures for the server.
 * Handshake with any new nfsds starting up to avoid any chance of
 * corruption.
 */
void
nfsrvd_init(int terminating)
{

	NFSD_LOCK_ASSERT();

	if (terminating) {
		nfsd_master_proc = NULL;
		NFSD_UNLOCK();
		nfsrv_freeallbackchannel_xprts();
		svcpool_close(nfsrvd_pool);
		NFSD_LOCK();
	} else {
		NFSD_UNLOCK();
		nfsrvd_pool = svcpool_create("nfsd",
		    SYSCTL_STATIC_CHILDREN(_vfs_nfsd));
		nfsrvd_pool->sp_rcache = NULL;
		nfsrvd_pool->sp_assign = fhanew_assign;
		nfsrvd_pool->sp_done = fha_nd_complete;
		NFSD_LOCK();
	}
}