/*-
 * SPDX-License-Identifier: BSD-3-Clause
 *
 * Copyright (c) 1989, 1993
 *	The Regents of the University of California.  All rights reserved.
 *
 * This code is derived from software contributed to Berkeley by
 * Rick Macklem at The University of Guelph.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

/*
 * Socket operations for use by the nfs server.
 */

#ifndef APPLEKEXT
#include <fs/nfs/nfsport.h>

extern struct nfsstatsv1 nfsstatsv1;
extern struct nfsrvfh nfs_pubfh, nfs_rootfh;
extern int nfs_pubfhset, nfs_rootfhset;
extern struct nfsv4lock nfsv4rootfs_lock;
extern struct nfsrv_stablefirst nfsrv_stablefirst;
extern struct nfsclienthashhead *nfsclienthash;
extern int nfsrv_clienthashsize;
extern int nfsrc_floodlevel, nfsrc_tcpsavedreplies;
extern int nfsd_debuglevel;
extern int nfsrv_layouthighwater;
extern volatile int nfsrv_layoutcnt;
NFSV4ROOTLOCKMUTEX;
NFSSTATESPINLOCK;

int (*nfsrv3_procs0[NFS_V3NPROCS])(struct nfsrv_descript *,
    int, vnode_t , NFSPROC_T *, struct nfsexstuff *) = {
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_getattr,
	nfsrvd_setattr,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_access,
	nfsrvd_readlink,
	nfsrvd_read,
	nfsrvd_write,
	nfsrvd_create,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_remove,
	nfsrvd_remove,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_readdir,
	nfsrvd_readdirplus,
	nfsrvd_statfs,
	nfsrvd_fsinfo,
	nfsrvd_pathconf,
	nfsrvd_commit,
};

int (*nfsrv3_procs1[NFS_V3NPROCS])(struct nfsrv_descript *,
    int, vnode_t , vnode_t *, fhandle_t *,
    NFSPROC_T *, struct nfsexstuff *) = {
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_lookup,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_mkdir,
	nfsrvd_symlink,
	nfsrvd_mknod,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
};

int (*nfsrv3_procs2[NFS_V3NPROCS])(struct nfsrv_descript *,
    int, vnode_t , vnode_t , NFSPROC_T *,
    struct nfsexstuff *, struct nfsexstuff *) = {
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	nfsrvd_rename,
	nfsrvd_link,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
};
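
/*
 * Illustrative note (not part of the original source): nd->nd_procnum
 * indexes one of the three tables above, and nfs_retfh[] (defined below)
 * selects which table is used.  A slot holding a cast null pointer means
 * that procedure is dispatched through one of the other two tables, so
 * the null entries are never called.  A minimal sketch of the dispatch,
 * following the NFSv2/3 path in nfsrvd_dorpc():
 *
 *	switch (nfs_retfh[nd->nd_procnum]) {
 *	case 1:
 *		error = (*(nfsrv3_procs1[nd->nd_procnum]))(nd, isdgram,
 *		    vp, NULL, (fhandle_t *)fh.nfsrvfh_data, p, &nes);
 *		break;
 *	case 2:
 *		error = (*(nfsrv3_procs2[nd->nd_procnum]))(nd, isdgram,
 *		    vp, NULL, p, &nes, NULL);
 *		break;
 *	default:
 *		error = (*(nfsrv3_procs0[nd->nd_procnum]))(nd, isdgram,
 *		    vp, p, &nes);
 *	}
 */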

int (*nfsrv4_ops0[NFSV41_NOPS])(struct nfsrv_descript *,
    int, vnode_t , NFSPROC_T *, struct nfsexstuff *) = {
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_access,
	nfsrvd_close,
	nfsrvd_commit,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_delegpurge,
	nfsrvd_delegreturn,
	nfsrvd_getattr,
	nfsrvd_getfh,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_lock,
	nfsrvd_lockt,
	nfsrvd_locku,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_verify,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_openconfirm,
	nfsrvd_opendowngrade,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_read,
	nfsrvd_readdirplus,
	nfsrvd_readlink,
	nfsrvd_remove,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_renew,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_secinfo,
	nfsrvd_setattr,
	nfsrvd_setclientid,
	nfsrvd_setclientidcfrm,
	nfsrvd_verify,
	nfsrvd_write,
	nfsrvd_releaselckown,
	nfsrvd_notsupp,
	nfsrvd_bindconnsess,
	nfsrvd_exchangeid,
	nfsrvd_createsession,
	nfsrvd_destroysession,
	nfsrvd_freestateid,
	nfsrvd_notsupp,
	nfsrvd_getdevinfo,
	nfsrvd_notsupp,
	nfsrvd_layoutcommit,
	nfsrvd_layoutget,
	nfsrvd_layoutreturn,
	nfsrvd_notsupp,
	nfsrvd_sequence,
	nfsrvd_notsupp,
	nfsrvd_teststateid,
	nfsrvd_notsupp,
	nfsrvd_destroyclientid,
	nfsrvd_reclaimcomplete,
};

int (*nfsrv4_ops1[NFSV41_NOPS])(struct nfsrv_descript *,
    int, vnode_t , vnode_t *, fhandle_t *,
    NFSPROC_T *, struct nfsexstuff *) = {
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_mknod,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_lookup,
	nfsrvd_lookup,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	nfsrvd_open,
	nfsrvd_openattr,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t *, fhandle_t *, NFSPROC_T *, struct nfsexstuff *))0,
};

int (*nfsrv4_ops2[NFSV41_NOPS])(struct nfsrv_descript *,
    int, vnode_t , vnode_t , NFSPROC_T *,
    struct nfsexstuff *, struct nfsexstuff *) = {
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	nfsrvd_link,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	nfsrvd_rename,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
	(int (*)(struct nfsrv_descript *, int, vnode_t , vnode_t , NFSPROC_T *, struct nfsexstuff *, struct nfsexstuff *))0,
};
#endif	/* !APPLEKEXT */

/*
 * Static array that defines which nfs rpc's are nonidempotent
 */
static int nfsrv_nonidempotent[NFS_V3NPROCS] = {
	FALSE,
	FALSE,
	TRUE,
	FALSE,
	FALSE,
	FALSE,
	FALSE,
	TRUE,
	TRUE,
	TRUE,
	TRUE,
	TRUE,
	TRUE,
	TRUE,
	TRUE,
	TRUE,
	FALSE,
	FALSE,
	FALSE,
	FALSE,
	FALSE,
	FALSE,
};
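
/*
 * Illustrative note (not part of the original source): nonidempotent
 * requests (e.g. SETATTR, CREATE, REMOVE, RENAME) cannot safely be
 * re-executed when a client retransmits, so their replies are saved in
 * the recent request cache.  nfsrvd_dorpc() below does this for NFSv2/3
 * with, in essence:
 *
 *	if (!(nd->nd_flag & ND_NFSV4) &&
 *	    nfsrv_nonidempotent[nd->nd_procnum])
 *		nd->nd_flag |= ND_SAVEREPLY;
 */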

/*
 * This static array indicates whether or not the RPC modifies the
 * file system.
 */
int nfsrv_writerpc[NFS_NPROCS] = { 0, 0, 1, 0, 0, 0, 0,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 };
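
/*
 * Illustrative note (an assumption drawn from how this array is used in
 * this file): a non-zero nfsrv_writerpc[] entry marks an RPC that modifies
 * the file system, so nfsd_fhtovp() is passed this flag when translating
 * the file handle and nfsrvd_dorpc() later pairs that with, in essence:
 *
 *	if (mp != NULL && nfsrv_writerpc[nd->nd_procnum] != 0)
 *		vn_finished_write(mp);
 */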

/* local functions */
static void nfsrvd_compound(struct nfsrv_descript *nd, int isdgram,
    u_char *tag, int taglen, u_int32_t minorvers, NFSPROC_T *p);

/*
 * This static array indicates which server procedures require the extra
 * arguments to return the current file handle for V2, 3.
 */
static int nfs_retfh[NFS_V3NPROCS] = { 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1,
	1, 0, 0, 2, 2, 0, 0, 0, 0, 0, 0 };
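
/*
 * Illustrative note (not part of the original source): nfs_retfh[] selects
 * the argument group for each NFSv2/3 procedure: 0 - nfsrv3_procs0 (a
 * single locked vnode), 1 - nfsrv3_procs1 (vnode plus a file handle to
 * return, e.g. LOOKUP, MKDIR, SYMLINK, MKNOD), 2 - nfsrv3_procs2 (two
 * vnodes, e.g. RENAME, LINK).
 */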

extern struct nfsv4_opflag nfsv4_opflag[NFSV41_NOPS];

static int nfsv3to4op[NFS_V3NPROCS] = {
	NFSPROC_NULL,
	NFSV4OP_GETATTR,
	NFSV4OP_SETATTR,
	NFSV4OP_LOOKUP,
	NFSV4OP_ACCESS,
	NFSV4OP_READLINK,
	NFSV4OP_READ,
	NFSV4OP_WRITE,
	NFSV4OP_V3CREATE,
	NFSV4OP_MKDIR,
	NFSV4OP_SYMLINK,
	NFSV4OP_MKNOD,
	NFSV4OP_REMOVE,
	NFSV4OP_RMDIR,
	NFSV4OP_RENAME,
	NFSV4OP_LINK,
	NFSV4OP_READDIR,
	NFSV4OP_READDIRPLUS,
	NFSV4OP_FSSTAT,
	NFSV4OP_FSINFO,
	NFSV4OP_PATHCONF,
	NFSV4OP_COMMIT,
};
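
/*
 * Illustrative note (not part of the original source): nfsv3to4op[] maps an
 * NFSv2/3 procedure number to the equivalent NFSv4 operation number so that
 * v2/3 requests are accounted in the same per-operation statistics slots as
 * v4 operations, e.g.:
 *
 *	nfsrvd_statstart(nfsv3to4op[nd->nd_procnum], &start_time);
 */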

static struct mtx nfsrvd_statmtx;
MTX_SYSINIT(nfsst, &nfsrvd_statmtx, "NFSstat", MTX_DEF);

static void
nfsrvd_statstart(int op, struct bintime *now)
{
	if (op > (NFSV42_NOPS + NFSV4OP_FAKENOPS)) {
		printf("%s: op %d invalid\n", __func__, op);
		return;
	}

	mtx_lock(&nfsrvd_statmtx);
	if (nfsstatsv1.srvstartcnt == nfsstatsv1.srvdonecnt) {
		if (now != NULL)
			nfsstatsv1.busyfrom = *now;
		else
			binuptime(&nfsstatsv1.busyfrom);
	}
	nfsstatsv1.srvrpccnt[op]++;
	nfsstatsv1.srvstartcnt++;
	mtx_unlock(&nfsrvd_statmtx);
}

static void
nfsrvd_statend(int op, uint64_t bytes, struct bintime *now,
    struct bintime *then)
{
	struct bintime dt, lnow;

	if (op > (NFSV42_NOPS + NFSV4OP_FAKENOPS)) {
		printf("%s: op %d invalid\n", __func__, op);
		return;
	}

	if (now == NULL) {
		now = &lnow;
		binuptime(now);
	}

	mtx_lock(&nfsrvd_statmtx);

	nfsstatsv1.srvbytes[op] += bytes;
	nfsstatsv1.srvops[op]++;

	if (then != NULL) {
		dt = *now;
		bintime_sub(&dt, then);
		bintime_add(&nfsstatsv1.srvduration[op], &dt);
	}

	dt = *now;
	bintime_sub(&dt, &nfsstatsv1.busyfrom);
	bintime_add(&nfsstatsv1.busytime, &dt);
	nfsstatsv1.busyfrom = *now;

	nfsstatsv1.srvdonecnt++;

	mtx_unlock(&nfsrvd_statmtx);
}
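
/*
 * Illustrative sketch of the expected call pattern (taken from the NFSv2/3
 * path in nfsrvd_dorpc() below): record the start time, perform the
 * operation, then charge the elapsed time and byte count to that op:
 *
 *	struct bintime start_time;
 *
 *	binuptime(&start_time);
 *	nfsrvd_statstart(op, &start_time);
 *	... perform the operation ...
 *	nfsrvd_statend(op, bytes, NULL, &start_time);
 */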

/*
 * Do an RPC. Basically, get the file handles translated to vnode pointers
 * and then call the appropriate server routine. The server routines are
 * split into groups, based on whether they use a file handle or file
 * handle plus name or ...
 * The NFS V4 Compound RPC is performed separately by nfsrvd_compound().
 */
APPLESTATIC void
nfsrvd_dorpc(struct nfsrv_descript *nd, int isdgram, u_char *tag, int taglen,
    u_int32_t minorvers, NFSPROC_T *p)
{
	int error = 0, lktype;
	vnode_t vp;
	mount_t mp = NULL;
	struct nfsrvfh fh;
	struct nfsexstuff nes;

	/*
	 * Get a locked vnode for the first file handle
	 */
	if (!(nd->nd_flag & ND_NFSV4)) {
		KASSERT(nd->nd_repstat == 0, ("nfsrvd_dorpc"));
		/*
		 * For NFSv3, if the malloc/mget allocation is near limits,
		 * return NFSERR_DELAY.
		 */
		if ((nd->nd_flag & ND_NFSV3) && nfsrv_mallocmget_limit()) {
			nd->nd_repstat = NFSERR_DELAY;
			vp = NULL;
		} else {
			error = nfsrv_mtofh(nd, &fh);
			if (error) {
				if (error != EBADRPC)
					printf("nfs dorpc err1=%d\n", error);
				nd->nd_repstat = NFSERR_GARBAGE;
				goto out;
			}
			if (nd->nd_procnum == NFSPROC_READ ||
|
|
|
nd->nd_procnum == NFSPROC_WRITE ||
|
2010-12-25 21:56:25 +00:00
|
|
|
nd->nd_procnum == NFSPROC_READDIR ||
|
2015-05-29 20:22:53 +00:00
|
|
|
nd->nd_procnum == NFSPROC_READDIRPLUS ||
|
2010-12-25 21:56:25 +00:00
|
|
|
nd->nd_procnum == NFSPROC_READLINK ||
|
|
|
|
nd->nd_procnum == NFSPROC_GETATTR ||
|
2015-05-29 20:22:53 +00:00
|
|
|
nd->nd_procnum == NFSPROC_ACCESS ||
|
|
|
|
nd->nd_procnum == NFSPROC_FSSTAT ||
|
|
|
|
nd->nd_procnum == NFSPROC_FSINFO)
|
2010-12-25 21:56:25 +00:00
|
|
|
lktype = LK_SHARED;
|
|
|
|
else
|
|
|
|
lktype = LK_EXCLUSIVE;
|
2009-05-04 15:23:58 +00:00
|
|
|
if (nd->nd_flag & ND_PUBLOOKUP)
|
2010-12-25 21:56:25 +00:00
|
|
|
nfsd_fhtovp(nd, &nfs_pubfh, lktype, &vp, &nes,
|
2018-08-17 21:12:16 +00:00
|
|
|
&mp, nfsrv_writerpc[nd->nd_procnum], p);
|
2009-05-04 15:23:58 +00:00
|
|
|
else
|
2010-12-25 21:56:25 +00:00
|
|
|
nfsd_fhtovp(nd, &fh, lktype, &vp, &nes,
|
2018-08-17 21:12:16 +00:00
|
|
|
&mp, nfsrv_writerpc[nd->nd_procnum], p);
|
2009-05-04 15:23:58 +00:00
|
|
|
if (nd->nd_repstat == NFSERR_PROGNOTV4)
|
2011-07-16 08:51:09 +00:00
|
|
|
goto out;
|
2009-05-04 15:23:58 +00:00
|
|
|
}
|
|
|
|
}

	/*
	 * For V2 and 3, set the ND_SAVEREPLY flag for the recent request
	 * cache, as required.
	 * For V4, nfsrvd_compound() does this.
	 */
	if (!(nd->nd_flag & ND_NFSV4) && nfsrv_nonidempotent[nd->nd_procnum])
		nd->nd_flag |= ND_SAVEREPLY;

	nfsrvd_rephead(nd);
	/*
	 * If nd_repstat is non-zero, just fill in the reply status
	 * to complete the RPC reply for V2. Otherwise, you must do
	 * the RPC.
	 */
	if (nd->nd_repstat && (nd->nd_flag & ND_NFSV2)) {
		*nd->nd_errp = nfsd_errmap(nd);
		nfsrvd_statstart(nfsv3to4op[nd->nd_procnum], /*now*/ NULL);
		nfsrvd_statend(nfsv3to4op[nd->nd_procnum], /*bytes*/ 0,
		    /*now*/ NULL, /*then*/ NULL);
		if (mp != NULL && nfsrv_writerpc[nd->nd_procnum] != 0)
			vn_finished_write(mp);
		goto out;
	}

	/*
	 * Now the procedure can be performed. For V4, nfsrvd_compound()
	 * works through the sub-rpcs, otherwise just call the procedure.
	 * The procedures are in three groups with different arguments.
	 * The group is indicated by the value in nfs_retfh[].
	 */
	if (nd->nd_flag & ND_NFSV4) {
		nfsrvd_compound(nd, isdgram, tag, taglen, minorvers, p);
	} else {
		struct bintime start_time;

		binuptime(&start_time);
		nfsrvd_statstart(nfsv3to4op[nd->nd_procnum], &start_time);

		if (nfs_retfh[nd->nd_procnum] == 1) {
			if (vp)
				NFSVOPUNLOCK(vp, 0);
			error = (*(nfsrv3_procs1[nd->nd_procnum]))(nd, isdgram,
			    vp, NULL, (fhandle_t *)fh.nfsrvfh_data, p, &nes);
		} else if (nfs_retfh[nd->nd_procnum] == 2) {
			error = (*(nfsrv3_procs2[nd->nd_procnum]))(nd, isdgram,
			    vp, NULL, p, &nes, NULL);
		} else {
			error = (*(nfsrv3_procs0[nd->nd_procnum]))(nd, isdgram,
			    vp, p, &nes);
		}
		if (mp != NULL && nfsrv_writerpc[nd->nd_procnum] != 0)
			vn_finished_write(mp);

		nfsrvd_statend(nfsv3to4op[nd->nd_procnum], /*bytes*/ 0,
		    /*now*/ NULL, /*then*/ &start_time);
	}
	if (error) {
		if (error != EBADRPC)
			printf("nfs dorpc err2=%d\n", error);
		nd->nd_repstat = NFSERR_GARBAGE;
	}
	*nd->nd_errp = nfsd_errmap(nd);

	/*
	 * Don't cache certain reply status values.
	 */
	if (nd->nd_repstat && (nd->nd_flag & ND_SAVEREPLY) &&
	    (nd->nd_repstat == NFSERR_GARBAGE ||
	     nd->nd_repstat == NFSERR_BADXDR ||
	     nd->nd_repstat == NFSERR_MOVED ||
	     nd->nd_repstat == NFSERR_DELAY ||
	     nd->nd_repstat == NFSERR_BADSEQID ||
	     nd->nd_repstat == NFSERR_RESOURCE ||
	     nd->nd_repstat == NFSERR_SERVERFAULT ||
	     nd->nd_repstat == NFSERR_STALECLIENTID ||
	     nd->nd_repstat == NFSERR_STALESTATEID ||
	     nd->nd_repstat == NFSERR_OLDSTATEID ||
	     nd->nd_repstat == NFSERR_BADSTATEID ||
	     nd->nd_repstat == NFSERR_GRACE ||
	     nd->nd_repstat == NFSERR_NOGRACE))
		nd->nd_flag &= ~ND_SAVEREPLY;

out:
	NFSEXITCODE2(0, nd);
}

/*
 * Breaks down a compound RPC request and calls the server routines for
 * the subprocedures.
 * Some suboperations are performed directly here to simplify file handle<-->
 * vnode pointer handling.
 */
static void
nfsrvd_compound(struct nfsrv_descript *nd, int isdgram, u_char *tag,
    int taglen, u_int32_t minorvers, NFSPROC_T *p)
{
	int i, lktype, op, op0 = 0, statsinprog = 0;
	u_int32_t *tl;
	struct nfsclient *clp, *nclp;
	int numops, error = 0, igotlock;
	u_int32_t retops = 0, *retopsp = NULL, *repp;
	vnode_t vp, nvp, savevp;
	struct nfsrvfh fh;
	mount_t new_mp, temp_mp = NULL;
	struct ucred *credanon;
	struct nfsexstuff nes, vpnes, savevpnes;
	fsid_t cur_fsid, save_fsid;
	static u_int64_t compref = 0;
	struct bintime start_time;

	NFSVNO_EXINIT(&vpnes);
	NFSVNO_EXINIT(&savevpnes);
	/*
	 * Put the seq# of the current compound RPC in nfsrv_descript.
	 * (This is used by nfsrv_checkgetattr(), to see if the write
	 * delegation was created by the same compound RPC as the one
	 * with that Getattr in it.)
	 * Don't worry about the 64bit number wrapping around. It ain't
	 * gonna happen before this server gets shut down/rebooted.
	 */
	nd->nd_compref = compref++;

	/*
	 * Check for and optionally get a lock on the root. This lock means that
	 * no nfsd will be fiddling with the V4 file system and state stuff. It
	 * is required when the V4 root is being changed, the stable storage
	 * restart file is being updated, or callbacks are being done.
	 * When any of the nfsd are processing an NFSv4 compound RPC, they must
	 * either hold a reference count (nfs_usecnt) or the lock. When
	 * nfsrv_unlock() is called to release the lock, it can optionally
	 * also get a reference count, which saves the need for a call to
	 * nfsrv_getref() after nfsrv_unlock().
	 */
	/*
	 * First, check to see if we need to wait for an update lock.
	 */
	igotlock = 0;
	NFSLOCKV4ROOTMUTEX();
	if (nfsrv_stablefirst.nsf_flags & NFSNSF_NEEDLOCK)
		igotlock = nfsv4_lock(&nfsv4rootfs_lock, 1, NULL,
		    NFSV4ROOTLOCKMUTEXPTR, NULL);
	else
		igotlock = nfsv4_lock(&nfsv4rootfs_lock, 0, NULL,
		    NFSV4ROOTLOCKMUTEXPTR, NULL);
	NFSUNLOCKV4ROOTMUTEX();
	if (igotlock) {
		/*
		 * If I got the lock, I can update the stable storage file.
		 * Done when the grace period is over or a client has long
		 * since expired.
		 */
		nfsrv_stablefirst.nsf_flags &= ~NFSNSF_NEEDLOCK;
		if ((nfsrv_stablefirst.nsf_flags &
		    (NFSNSF_GRACEOVER | NFSNSF_UPDATEDONE)) == NFSNSF_GRACEOVER)
			nfsrv_updatestable(p);

		/*
		 * If at least one client has long since expired, search
		 * the client list for them, write a REVOKE record on the
		 * stable storage file and then remove them from the client
		 * list.
		 */
		if (nfsrv_stablefirst.nsf_flags & NFSNSF_EXPIREDCLIENT) {
			nfsrv_stablefirst.nsf_flags &= ~NFSNSF_EXPIREDCLIENT;
			for (i = 0; i < nfsrv_clienthashsize; i++) {
				LIST_FOREACH_SAFE(clp, &nfsclienthash[i], lc_hash,
				    nclp) {
					if (clp->lc_flags & LCL_EXPIREIT) {
						if (!LIST_EMPTY(&clp->lc_open) ||
						    !LIST_EMPTY(&clp->lc_deleg))
							nfsrv_writestable(clp->lc_id,
							    clp->lc_idlen, NFSNST_REVOKE, p);
						nfsrv_cleanclient(clp, p);
						nfsrv_freedeleglist(&clp->lc_deleg);
						nfsrv_freedeleglist(&clp->lc_olddeleg);
						LIST_REMOVE(clp, lc_hash);
						nfsrv_zapclient(clp, p);
					}
				}
			}
		}
		NFSLOCKV4ROOTMUTEX();
		nfsv4_unlock(&nfsv4rootfs_lock, 1);
		NFSUNLOCKV4ROOTMUTEX();
	} else {
		/*
		 * If we didn't get the lock, we need to get a refcnt,
		 * which also checks for and waits for the lock.
		 */
		NFSLOCKV4ROOTMUTEX();
		nfsv4_getref(&nfsv4rootfs_lock, NULL,
		    NFSV4ROOTLOCKMUTEXPTR, NULL);
		NFSUNLOCKV4ROOTMUTEX();
	}

	/*
	 * If flagged, search for open owners that haven't had any opens
	 * for a long time.
	 */
	if (nfsrv_stablefirst.nsf_flags & NFSNSF_NOOPENS) {
		nfsrv_throwawayopens(p);
	}

Merge the pNFS server code from projects/pnfs-planb-server into head.
This code merge adds a pNFS service to the NFSv4.1 server. Although it is
a large commit it should not affect behaviour for a non-pNFS NFS server.
Some documentation on how this works can be found at:
http://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt
and will hopefully be turned into a proper document soon.
This is a merge of the kernel code. Userland and man page changes will
come soon, once the dust settles on this merge.
It has passed a "make universe", so I hope it will not cause build problems.
It also adds NFSv4.1 server support for the "current stateid".
Here is a brief overview of the pNFS service:
A pNFS service separates the Read/Write operations from all the other NFSv4.1
Metadata operations. It is hoped that this separation allows a pNFS service
to be configured that exceeds the limits of a single NFS server for
storage capacity and/or I/O bandwidth.
It is possible to configure mirroring within the data servers (DSs) so that
the data storage file for an MDS file will be mirrored on two or more of
the DSs.
When this is used, failure of a DS will not stop the pNFS service and a
failed DS can be recovered once repaired while the pNFS service continues
to operate. Although two way mirroring would be the norm, it is possible
to set a mirroring level of up to four or the number of DSs, whichever is
less.
The Metadata server will always be a single point of failure,
just as a single NFS server is.
A Plan B pNFS service consists of a single MetaData Server (MDS) and K
Data Servers (DS), all of which are recent FreeBSD systems.
Clients will mount the MDS as they would a single NFS server.
When files are created, the MDS creates a file tree identical to what a
single NFS server creates, except that all the regular (VREG) files will
be empty. As such, if you look at the exported tree on the MDS directly
on the MDS server (not via an NFS mount), the files will all be of size 0.
Each of these files will also have two extended attributes in the system
attribute name space:
pnfsd.dsfile - This extended attribute stores the information that
the MDS needs to find the data storage file(s) on DS(s) for this file.
pnfsd.dsattr - This extended attribute stores the Size, AccessTime, ModifyTime
and Change attributes for the file, so that the MDS doesn't need to
acquire the attributes from the DS for every Getattr operation.
(A small sketch of reading these attributes follows this log message.)
For each regular (VREG) file, the MDS creates a data storage file on one
(or more if mirroring is enabled) of the DSs in one of the "dsNN"
subdirectories. The name of this file is the file handle
of the file on the MDS in hexadecimal so that the name is unique.
The DSs use subdirectories named "ds0" to "dsN" so that no one directory
gets too large. The value of "N" is set via the sysctl vfs.nfsd.dsdirsize
on the MDS, with the default being 20.
For production servers that will store a lot of files, this value should
probably be much larger.
It can be increased when the "nfsd" daemon is not running on the MDS,
once the "dsK" directories are created.
For pNFS aware NFSv4.1 clients, the FreeBSD server will return two pieces
of information to the client that allows it to do I/O directly to the DS.
DeviceInfo - This is relatively static information that defines what a DS
is. The critical bits of information returned by the FreeBSD
server are the IP address of the DS and, for the Flexible
File layout, that NFSv4.1 is to be used and that it is
"tightly coupled".
There is a "deviceid" which identifies the DeviceInfo.
Layout - This is per file and can be recalled by the server when it
is no longer valid. For the FreeBSD server, there is support
for two types of layout, called File and Flexible File layout.
Both allow the client to do I/O on the DS via NFSv4.1 I/O
operations. The Flexible File layout is a more recent variant
that allows specification of mirrors, where the client is
expected to do writes to all mirrors to maintain them in a
consistent state. The Flexible File layout also allows the
client to report I/O errors for a DS back to the MDS.
The Flexible File layout supports two variants referred to as
"tightly coupled" vs "loosely coupled". The FreeBSD server always
uses the "tightly coupled" variant where the client uses the
same credentials to do I/O on the DS as it would on the MDS.
For the "loosely coupled" variant, the layout specifies a
synthetic user/group that the client uses to do I/O on the DS.
The FreeBSD server does not do striping and always returns
layouts for the entire file. The critical information in a layout
is Read vs Read/Write and DeviceID(s) that identify which
DS(s) the data is stored on.
At this time, the MDS generates File Layout layouts to NFSv4.1 clients
that know how to do pNFS for the non-mirrored DS case unless the sysctl
vfs.nfsd.default_flexfile is set non-zero, in which case Flexible File
layouts are generated.
The mirrored DS configuration always generates Flexible File layouts.
For NFS clients that do not support NFSv4.1 pNFS, all I/O operations
are done against the MDS which acts as a proxy for the appropriate DS(s).
When the MDS receives an I/O RPC, it will do the RPC on the DS as a proxy.
If the DS is on the same machine, the MDS/DS will do the RPC on the DS as
a proxy and so on, until the machine runs out of some resource, such as
session slots or mbufs.
As such, DSs must be separate systems from the MDS.
Tested by: james.rose@framestore.com
Relnotes: yes
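The MDS-side layout described above can be inspected with the
extattr_get_file(2) syscall. Below is a minimal userland sketch (assumed to
run as root on the MDS against some exported regular file; the buffer size
and output format are arbitrary choices for the example), not part of the
pNFS service itself:

#include <sys/types.h>
#include <sys/extattr.h>
#include <err.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
	char buf[1024];
	ssize_t len;

	if (argc != 2)
		errx(1, "usage: %s <regular-file-on-MDS>", argv[0]);
	/* pnfsd.dsfile locates the data storage file(s) on the DS(s). */
	len = extattr_get_file(argv[1], EXTATTR_NAMESPACE_SYSTEM,
	    "pnfsd.dsfile", buf, sizeof(buf));
	if (len == -1)
		err(1, "pnfsd.dsfile");
	printf("pnfsd.dsfile: %zd bytes\n", len);
	/* pnfsd.dsattr caches Size/AccessTime/ModifyTime/Change on the MDS. */
	len = extattr_get_file(argv[1], EXTATTR_NAMESPACE_SYSTEM,
	    "pnfsd.dsattr", buf, sizeof(buf));
	if (len == -1)
		err(1, "pnfsd.dsattr");
	printf("pnfsd.dsattr: %zd bytes\n", len);
	return (0);
}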
	/* Do a CBLAYOUTRECALL callback if over the high water mark. */
	if (nfsrv_layoutcnt > nfsrv_layouthighwater)
		nfsrv_recalloldlayout(p);

	savevp = vp = NULL;
	save_fsid.val[0] = save_fsid.val[1] = 0;
	cur_fsid.val[0] = cur_fsid.val[1] = 0;

	/* If taglen < 0, there was a parsing error in nfsd_getminorvers(). */
	if (taglen < 0) {
		error = EBADRPC;
		goto nfsmout;
	}

	(void) nfsm_strtom(nd, tag, taglen);
	NFSM_BUILD(retopsp, u_int32_t *, NFSX_UNSIGNED);
	NFSM_DISSECT(tl, u_int32_t *, NFSX_UNSIGNED);
	if (minorvers != NFSV4_MINORVERSION && minorvers != NFSV41_MINORVERSION)
		nd->nd_repstat = NFSERR_MINORVERMISMATCH;
	if (nd->nd_repstat)
		numops = 0;
	else
		numops = fxdr_unsigned(int, *tl);
	/*
	 * Loop around doing the sub ops.
	 * vp - is an unlocked vnode pointer for the CFH
	 * savevp - is an unlocked vnode pointer for the SAVEDFH
	 * (at some future date, it might turn out to be more appropriate
	 *  to keep the file handles instead of vnode pointers?)
	 * savevpnes and vpnes - are the export flags for the above.
	 */
	for (i = 0; i < numops; i++) {
		NFSM_DISSECT(tl, u_int32_t *, NFSX_UNSIGNED);
		NFSM_BUILD(repp, u_int32_t *, 2 * NFSX_UNSIGNED);
		*repp = *tl;
		op = fxdr_unsigned(int, *tl);
		NFSD_DEBUG(4, "op=%d\n", op);

		binuptime(&start_time);
		nfsrvd_statstart(op, &start_time);
		statsinprog = 1;

		if (op < NFSV4OP_ACCESS ||
		    (op >= NFSV4OP_NOPS && (nd->nd_flag & ND_NFSV41) == 0) ||
		    (op >= NFSV41_NOPS && (nd->nd_flag & ND_NFSV41) != 0)) {
			nd->nd_repstat = NFSERR_OPILLEGAL;
			*repp++ = txdr_unsigned(NFSV4OP_OPILLEGAL);
			*repp = nfsd_errmap(nd);
			retops++;
			break;
		} else {
			repp++;
		}
		if (i == 0)
			op0 = op;
		if (i == numops - 1)
			nd->nd_flag |= ND_LASTOP;

		/*
		 * Check for a referral on the current FH and, if so, return
		 * NFSERR_MOVED for all ops that allow it, except Getattr.
		 */
		if (vp != NULL && op != NFSV4OP_GETATTR &&
		    nfsv4root_getreferral(vp, NULL, 0) != NULL &&
		    nfsrv_errmoved(op)) {
			nd->nd_repstat = NFSERR_MOVED;
			*repp = nfsd_errmap(nd);
			retops++;
			break;
		}

		/*
		 * For NFSv4.1, check for a Sequence Operation being first
		 * or one of the other allowed operations by itself.
		 */
		if ((nd->nd_flag & ND_NFSV41) != 0) {
			if (i != 0 && op == NFSV4OP_SEQUENCE)
				nd->nd_repstat = NFSERR_SEQUENCEPOS;
			else if (i == 0 && op != NFSV4OP_SEQUENCE &&
			    op != NFSV4OP_EXCHANGEID &&
			    op != NFSV4OP_CREATESESSION &&
			    op != NFSV4OP_BINDCONNTOSESS &&
			    op != NFSV4OP_DESTROYCLIENTID &&
			    op != NFSV4OP_DESTROYSESSION)
				nd->nd_repstat = NFSERR_OPNOTINSESS;
			else if (i != 0 && op0 != NFSV4OP_SEQUENCE)
				nd->nd_repstat = NFSERR_NOTONLYOP;
			if (nd->nd_repstat != 0) {
				*repp = nfsd_errmap(nd);
				retops++;
				break;
			}
		}

		nd->nd_procnum = op;
		/*
		 * If over flood level, reply NFSERR_RESOURCE, if at the first
		 * Op. (Since a client recovery from NFSERR_RESOURCE can get
		 * really nasty for certain Op sequences, I'll play it safe
		 * and only return the error at the beginning.) The cache
		 * will still function over flood level, but uses lots of
		 * mbufs.)
		 * If nfsrv_mallocmget_limit() returns True, the system is near
		 * to its limit for memory that malloc()/mget() can allocate.
		 */
		if (i == 0 && (nd->nd_rp == NULL ||
		    nd->nd_rp->rc_refcnt == 0) &&
		    (nfsrv_mallocmget_limit() ||
		     nfsrc_tcpsavedreplies > nfsrc_floodlevel)) {
			if (nfsrc_tcpsavedreplies > nfsrc_floodlevel)
				printf("nfsd server cache flooded, try "
				    "increasing vfs.nfsd.tcphighwater\n");
			nd->nd_repstat = NFSERR_RESOURCE;
			*repp = nfsd_errmap(nd);
			if (op == NFSV4OP_SETATTR) {
				/*
				 * Setattr replies require a bitmap
				 * even for errors like these.
				 */
				NFSM_BUILD(tl, u_int32_t *, NFSX_UNSIGNED);
				*tl = 0;
			}
			retops++;
			break;
		}
		if (nfsv4_opflag[op].savereply)
			nd->nd_flag |= ND_SAVEREPLY;
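		/*
		 * Ops that only manipulate the current and saved file handles
		 * (PutFH, PutPubFH, PutRootFH, SaveFH, RestoreFH) are handled
		 * inline below; everything else is dispatched through the
		 * nfsrv4_ops0/1/2 tables in the default case.
		 */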
		switch (op) {
		case NFSV4OP_PUTFH:
			error = nfsrv_mtofh(nd, &fh);
			if (error)
				goto nfsmout;
			if (!nd->nd_repstat)
				nfsd_fhtovp(nd, &fh, LK_SHARED, &nvp, &nes,
				    NULL, 0, p);
			/* For now, allow this for non-export FHs */
			if (!nd->nd_repstat) {
				if (vp)
					vrele(vp);
				vp = nvp;
				cur_fsid = vp->v_mount->mnt_stat.f_fsid;
				NFSVOPUNLOCK(vp, 0);
				vpnes = nes;
			}
			break;
		case NFSV4OP_PUTPUBFH:
			if (nfs_pubfhset)
				nfsd_fhtovp(nd, &nfs_pubfh, LK_SHARED, &nvp,
				    &nes, NULL, 0, p);
			else
				nd->nd_repstat = NFSERR_NOFILEHANDLE;
			if (!nd->nd_repstat) {
				if (vp)
					vrele(vp);
				vp = nvp;
				cur_fsid = vp->v_mount->mnt_stat.f_fsid;
				NFSVOPUNLOCK(vp, 0);
				vpnes = nes;
			}
			break;
		case NFSV4OP_PUTROOTFH:
			if (nfs_rootfhset) {
				nfsd_fhtovp(nd, &nfs_rootfh, LK_SHARED, &nvp,
				    &nes, NULL, 0, p);
				if (!nd->nd_repstat) {
					if (vp)
						vrele(vp);
					vp = nvp;
					cur_fsid = vp->v_mount->mnt_stat.f_fsid;
					NFSVOPUNLOCK(vp, 0);
					vpnes = nes;
				}
			} else
				nd->nd_repstat = NFSERR_NOFILEHANDLE;
			break;
		case NFSV4OP_SAVEFH:
			if (vp && NFSVNO_EXPORTED(&vpnes)) {
				nd->nd_repstat = 0;
				/* If vp == savevp, a no-op */
				if (vp != savevp) {
					if (savevp)
						vrele(savevp);
					VREF(vp);
					savevp = vp;
					savevpnes = vpnes;
					save_fsid = cur_fsid;
				}
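				/*
				 * NFSv4.1 "current stateid": save the current
				 * stateid along with the FH so that a later
				 * RestoreFH can bring it back.
				 */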
				if ((nd->nd_flag & ND_CURSTATEID) != 0) {
					nd->nd_savedcurstateid =
					    nd->nd_curstateid;
					nd->nd_flag |= ND_SAVEDCURSTATEID;
				}
			} else {
				nd->nd_repstat = NFSERR_NOFILEHANDLE;
			}
			break;
		case NFSV4OP_RESTOREFH:
			if (savevp) {
				nd->nd_repstat = 0;
				/* If vp == savevp, a no-op */
				if (vp != savevp) {
					VREF(savevp);
					vrele(vp);
					vp = savevp;
					vpnes = savevpnes;
					cur_fsid = save_fsid;
				}
				if ((nd->nd_flag & ND_SAVEDCURSTATEID) != 0) {
					nd->nd_curstateid =
					    nd->nd_savedcurstateid;
					nd->nd_flag |= ND_CURSTATEID;
				}
			} else {
				nd->nd_repstat = NFSERR_RESTOREFH;
			}
			break;
		default:
			/*
			 * Allow a Lookup, Getattr, GetFH, Secinfo on an
			 * non-exported directory if
			 * nfs_rootfhset. Do I need to allow any other Ops?
			 * (You can only have a non-exported vpnes if
			 *  nfs_rootfhset is true. See nfsd_fhtovp())
			 * Allow AUTH_SYS to be used for file systems
			 * exported GSS only for certain Ops, to allow
			 * clients to do mounts more easily.
			 */
			if (nfsv4_opflag[op].needscfh && vp) {
				if (!NFSVNO_EXPORTED(&vpnes) &&
				    op != NFSV4OP_LOOKUP &&
				    op != NFSV4OP_GETATTR &&
				    op != NFSV4OP_GETFH &&
				    op != NFSV4OP_ACCESS &&
				    op != NFSV4OP_READLINK &&
				    op != NFSV4OP_SECINFO)
					nd->nd_repstat = NFSERR_NOFILEHANDLE;
				else if (nfsvno_testexp(nd, &vpnes) &&
				    op != NFSV4OP_LOOKUP &&
				    op != NFSV4OP_GETFH &&
				    op != NFSV4OP_GETATTR &&
				    op != NFSV4OP_SECINFO)
					nd->nd_repstat = NFSERR_WRONGSEC;
				if (nd->nd_repstat) {
					if (op == NFSV4OP_SETATTR) {
						/*
						 * Setattr reply requires a bitmap
						 * even for errors like these.
						 */
						NFSM_BUILD(tl, u_int32_t *,
						    NFSX_UNSIGNED);
						*tl = 0;
					}
					break;
				}
			}
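			/*
			 * Dispatch by argument group (nfsv4_opflag[op].retfh):
			 * retfh == 1 ops may return a new current FH (Lookup
			 * and friends), retfh == 2 ops operate on both the
			 * saved and current FHs, and the remaining ops take
			 * at most the current FH via nfsrv4_ops0[].
			 */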
			if (nfsv4_opflag[op].retfh == 1) {
				if (!vp) {
					nd->nd_repstat = NFSERR_NOFILEHANDLE;
					break;
				}
				VREF(vp);
				if (nfsv4_opflag[op].modifyfs)
					vn_start_write(vp, &temp_mp, V_WAIT);
				error = (*(nfsrv4_ops1[op]))(nd, isdgram, vp,
				    &nvp, (fhandle_t *)fh.nfsrvfh_data, p, &vpnes);
				if (!error && !nd->nd_repstat) {
					if (op == NFSV4OP_LOOKUP || op == NFSV4OP_LOOKUPP) {
						new_mp = nvp->v_mount;
						if (cur_fsid.val[0] !=
						    new_mp->mnt_stat.f_fsid.val[0] ||
						    cur_fsid.val[1] !=
						    new_mp->mnt_stat.f_fsid.val[1]) {
							/* crossed a server mount point */
							nd->nd_repstat = nfsvno_checkexp(new_mp,
							    nd->nd_nam, &nes, &credanon);
							if (!nd->nd_repstat)
								nd->nd_repstat = nfsd_excred(nd,
								    &nes, credanon);
							if (credanon != NULL)
								crfree(credanon);
							if (!nd->nd_repstat) {
								vpnes = nes;
								cur_fsid = new_mp->mnt_stat.f_fsid;
							}
						}
						/* Lookup ops return a locked vnode */
						NFSVOPUNLOCK(nvp, 0);
					}
					if (!nd->nd_repstat) {
						vrele(vp);
						vp = nvp;
					} else
						vrele(nvp);
				}
				if (nfsv4_opflag[op].modifyfs)
					vn_finished_write(temp_mp);
			} else if (nfsv4_opflag[op].retfh == 2) {
				if (vp == NULL || savevp == NULL) {
					nd->nd_repstat = NFSERR_NOFILEHANDLE;
					break;
				} else if (cur_fsid.val[0] != save_fsid.val[0] ||
				    cur_fsid.val[1] != save_fsid.val[1]) {
					nd->nd_repstat = NFSERR_XDEV;
					break;
				}
				if (nfsv4_opflag[op].modifyfs)
					vn_start_write(savevp, &temp_mp, V_WAIT);
				if (NFSVOPLOCK(savevp, LK_EXCLUSIVE) == 0) {
					VREF(vp);
					VREF(savevp);
					error = (*(nfsrv4_ops2[op]))(nd, isdgram,
					    savevp, vp, p, &savevpnes, &vpnes);
				} else
					nd->nd_repstat = NFSERR_PERM;
				if (nfsv4_opflag[op].modifyfs)
					vn_finished_write(temp_mp);
			} else {
				if (nfsv4_opflag[op].retfh != 0)
					panic("nfsrvd_compound");
				if (nfsv4_opflag[op].needscfh) {
					if (vp != NULL) {
						lktype = nfsv4_opflag[op].lktype;
						if (nfsv4_opflag[op].modifyfs) {
							vn_start_write(vp, &temp_mp,
							    V_WAIT);
							if (op == NFSV4OP_WRITE &&
							    MNT_SHARED_WRITES(temp_mp))
								lktype = LK_SHARED;
						}
						if (NFSVOPLOCK(vp, lktype) == 0)
							VREF(vp);
						else
							nd->nd_repstat = NFSERR_PERM;
					} else {
						nd->nd_repstat = NFSERR_NOFILEHANDLE;
						if (op == NFSV4OP_SETATTR) {
							/*
							 * Setattr reply requires a
							 * bitmap even for errors like
							 * these.
							 */
							NFSM_BUILD(tl, u_int32_t *,
							    NFSX_UNSIGNED);
							*tl = 0;
						}
						break;
					}
					if (nd->nd_repstat == 0)
						error = (*(nfsrv4_ops0[op]))(nd,
						    isdgram, vp, p, &vpnes);
					if (nfsv4_opflag[op].modifyfs)
						vn_finished_write(temp_mp);
				} else {
					error = (*(nfsrv4_ops0[op]))(nd, isdgram,
					    NULL, p, &vpnes);
				}
			}
		}
		if (error) {
			if (error == EBADRPC || error == NFSERR_BADXDR) {
				nd->nd_repstat = NFSERR_BADXDR;
			} else {
				nd->nd_repstat = error;
				printf("nfsv4 comperr0=%d\n", error);
			}
			error = 0;
		}

		if (statsinprog != 0) {
			nfsrvd_statend(op, /*bytes*/ 0, /*now*/ NULL,
			    /*then*/ &start_time);
			statsinprog = 0;
		}

		retops++;
		if (nd->nd_repstat) {
			*repp = nfsd_errmap(nd);
			break;
		} else {
			*repp = 0; /* NFS4_OK */
		}
	}
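
	/*
	 * nfsmout: reached when XDR parsing of the request fails (EBADRPC
	 * from the NFSM dissect macros or nfsrv_mtofh()); wind up any
	 * in-progress stats and the reply as best we can.
	 */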
nfsmout:
	if (statsinprog != 0) {
		nfsrvd_statend(op, /*bytes*/ 0, /*now*/ NULL,
		    /*then*/ &start_time);
		statsinprog = 0;
	}
	if (error) {
		if (error == EBADRPC || error == NFSERR_BADXDR)
			nd->nd_repstat = NFSERR_BADXDR;
		else
			printf("nfsv4 comperr1=%d\n", error);
	}
	if (taglen == -1) {
		NFSM_BUILD(tl, u_int32_t *, 2 * NFSX_UNSIGNED);
		*tl++ = 0;
		*tl = 0;
	} else {
		*retopsp = txdr_unsigned(retops);
	}
	if (vp)
		vrele(vp);
	if (savevp)
		vrele(savevp);
	NFSLOCKV4ROOTMUTEX();
	nfsv4_relref(&nfsv4rootfs_lock);
	NFSUNLOCKV4ROOTMUTEX();

	NFSEXITCODE2(0, nd);
}