to detect (or load) kernel NLM support in rpc.lockd. Remove the '-k'
option to rpc.lockd and make kernel NLM the default. A user can still
force the use of the old user NLM by building a kernel without NFSLOCKD
and/or removing the nfslockd.ko module.
user-mode lock manager, build a kernel with the NFSLOCKD option and
add '-k' to 'rpc_lockd_flags' in rc.conf.
Highlights include:
* Thread-safe kernel RPC client - many threads can use the same RPC
client handle safely with replies being de-multiplexed at the socket
upcall (typically driven directly by the NIC interrupt) and handed
off to whichever thread matches the reply. For UDP sockets, many RPC
clients can share the same socket. This allows the use of a single
privileged UDP port number to talk to an arbitrary number of remote
hosts.
* Single-threaded kernel RPC server. Adding support for a multi-threaded
server would be relatively straightforward and would follow
approximately the Solaris KPI. A single thread should be sufficient
for the NLM since it should rarely block in normal operation.
* Kernel mode NLM server supporting cancel requests and granted
callbacks. I've tested the NLM server reasonably extensively - it
passes both my own tests and the NFS Connectathon locking tests
running on Solaris, Mac OS X and Ubuntu Linux.
* Userland NLM client supported. While the NLM server doesn't have
support for the local NFS client's locking needs, it does have to
field async replies and granted callbacks from remote NLMs that the
local client has contacted. We relay these replies to the userland
rpc.lockd over a local domain RPC socket.
* Robust deadlock detection for the local lock manager. In particular
it will detect deadlocks caused by a lock request that covers more
than one blocking request. As required by the NLM protocol, all
deadlock detection happens synchronously - a user is guaranteed that
if a lock request isn't rejected immediately, the lock will
eventually be granted. The old system allowed for a 'deferred
deadlock' condition where a blocked lock request could wake up and
find that some other deadlock-causing lock owner had beaten them to
the lock.
* Since both local and remote locks are managed by the same kernel
locking code, local and remote processes can safely use file locks
for mutual exclusion. Local processes have no fairness advantage
compared to remote processes when contending to lock a region that
has just been unlocked - the local lock manager enforces a strict
first-come first-served model for both local and remote lockers.
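As an illustration of the last point, a process on an NFS client and a
process on the server (or on another client) can serialize on the same
file with nothing more than an ordinary advisory lock. A minimal
userland sketch (the path is only an example):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Any file on an NFS mount (or a local filesystem) will do. */
        int fd = open("/mnt/nfs/shared.lock", O_RDWR | O_CREAT, 0644);

        if (fd == -1) {
            perror("open");
            return (1);
        }

        struct flock fl = {
            .l_type = F_WRLCK,          /* exclusive lock */
            .l_whence = SEEK_SET,
            .l_start = 0,
            .l_len = 0,                 /* whole file */
        };

        /* Blocks until granted; local and remote waiters are served by
         * the same kernel lock manager, first-come first-served. */
        if (fcntl(fd, F_SETLKW, &fl) == -1) {
            perror("fcntl(F_SETLKW)");
            return (1);
        }

        /* ... critical section shared with processes on other hosts ... */

        fl.l_type = F_UNLCK;
        (void)fcntl(fd, F_SETLK, &fl);
        close(fd);
        return (0);
    }
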
Sponsored by: Isilon Systems
PR: 95247 107555 115524 116679
MFC after: 2 weeks
nfsd(8), in mountd(8), and in rpc.statd(8)
-h bindip
Specify specific IP addresses to bind to for TCP and UDP requests.
This option may be specified multiple times. If no -h option is
specified, rpc.lockd will bind to INADDR_ANY. Note that when specifying
IP addresses with -h, rpc.lockd will automatically add 127.0.0.1 and
if IPv6 is enabled, ::1 to the list.
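The semantics above can be pictured with a rough sketch (function and
variable names here are invented for illustration; this is not the
rpc.lockd source):

    #include <stdio.h>
    #include <unistd.h>

    #define MAX_BINDHOSTS   64

    static const char *bindhosts[MAX_BINDHOSTS];
    static int nbindhosts;

    static void
    add_bindhost(const char *addr)
    {
        if (nbindhosts < MAX_BINDHOSTS)
            bindhosts[nbindhosts++] = addr;
    }

    int
    main(int argc, char **argv)
    {
        int ch, i;

        while ((ch = getopt(argc, argv, "h:")) != -1) {
            if (ch == 'h')                  /* -h may be repeated */
                add_bindhost(optarg);
        }

        if (nbindhosts == 0) {
            /* No -h given: bind the wildcard address (INADDR_ANY). */
            add_bindhost("*");
        } else {
            /* Loopback is always appended so local callers keep working. */
            add_bindhost("127.0.0.1");
    #ifdef INET6
            add_bindhost("::1");            /* only when IPv6 is enabled */
    #endif
        }

        for (i = 0; i < nbindhosts; i++)
            printf("would bind: %s\n", bindhosts[i]);
        return (0);
    }

For example, 'rpc.lockd -h 192.0.2.10 -h 2001:db8::10' (placeholder
addresses) would listen on those two addresses plus the loopbacks.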
PR: bin/98500
MFC after: 1 week
so that both parent and child processes ignore this signal.
PR: bin/97768
Submitted by: Gea-Suan Lin <gslin at csie dot nctu dot edu dot tw>
MFC after: 3 days
We already check for write() failures and handle EPIPE.
Failure to handle SIGPIPE was resulting in rpc.lockd terminating.
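For illustration, the pattern (not the actual rpc.lockd code) is to
ignore SIGPIPE up front so that writing to a vanished peer comes back
as -1/EPIPE, which the existing error handling can absorb, instead of
terminating the whole daemon:

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static int
    send_reply(int fd, const void *buf, size_t len)
    {
        if (write(fd, buf, len) == -1) {
            if (errno == EPIPE)
                return (0);             /* peer went away; drop the reply */
            perror("write");
            return (-1);
        }
        return (0);
    }

    int
    main(void)
    {
        int fds[2];
        char byte = 'x';

        signal(SIGPIPE, SIG_IGN);       /* do this before daemon(3)/fork */
        if (pipe(fds) == -1)
            return (1);
        close(fds[0]);                  /* simulate a vanished peer */
        if (send_reply(fds[1], &byte, 1) == 0)
            printf("EPIPE handled; the daemon keeps running\n");
        return (0);
    }
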
PR: bin/97768
Reported by: Gea-Suan Lin <gslin at csie dot nctu dot edu dot tw>
MFC after: 1 day
result in abort() being called. This is because the RPC credentials
are limited to 16 groups. When the actual number of
groups is too large it results in xdr_array() returning an error which,
in turn, authunix_create() handles by just calling abort().
Fix this by passing only the first 16 groups to authunix_create().
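A sketch of the idea (the FreeBSD <rpc/rpc.h> declarations are
assumed; make_authunix() is an invented name, not the function touched
here):

    #include <sys/param.h>              /* MAXHOSTNAMELEN */
    #include <rpc/rpc.h>                /* AUTH, authunix_create() */
    #include <limits.h>                 /* NGROUPS_MAX */
    #include <stdio.h>
    #include <unistd.h>

    #define AUTHUNIX_MAXGRPS    16      /* wire-format limit on groups */

    static AUTH *
    make_authunix(void)
    {
        char host[MAXHOSTNAMELEN];
        gid_t groups[NGROUPS_MAX];
        int ngroups;

        if (gethostname(host, sizeof(host)) == -1)
            return (NULL);
        ngroups = getgroups(NGROUPS_MAX, groups);
        if (ngroups == -1)
            return (NULL);
        if (ngroups > AUTHUNIX_MAXGRPS)     /* the fix: clamp to 16 */
            ngroups = AUTHUNIX_MAXGRPS;
        return (authunix_create(host, getuid(), getgid(), ngroups, groups));
    }

    int
    main(void)
    {
        AUTH *auth = make_authunix();

        printf(auth != NULL ? "AUTH_UNIX handle created\n" :
            "authunix_create failed\n");
        return (auth != NULL ? 0 : 1);
    }
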
xp_rtaddr member of the SVCXPRT structure. This allows using an IPv6
address stored in "struct sockaddr_storage" in "struct netbuf".
- Output the reason for the getnameinfo() error.
Reviewed by: alfred
apply the patch of bin/61718 (which should include/eliminate kern/61122 also).
It seems to fix a few annoying bugs.
PR: bin/61718, kern/61122
Submitted by: bg@sics.se, ohartman@mail.physik.uni-mainz.de
server, map it to EAGAIN locally rather than EACCES. The NLM spec
indicates that DENIED corresponds to lock contention, not a permission
failure. This fixes O_EXLOCK/O_SHLOCK with O_NONBLOCK, which would
previously give a permission error, which in turn fixes things
like mailq(8) and lockf(1) over NFS.
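The effect can be pictured as a small mapping (the status values follow
the NLM protocol; the names and function are made up for illustration,
and only the DENIED line reflects this change):

    #include <errno.h>
    #include <stdio.h>

    enum nlm_stats {
        NLM_GRANTED = 0,
        NLM_DENIED = 1,
        NLM_DENIED_NOLOCKS = 2,
        NLM_BLOCKED = 3,
        NLM_DENIED_GRACE_PERIOD = 4,
        NLM_DEADLCK = 5
    };

    static int
    nlm_stat_to_errno(enum nlm_stats stat)
    {
        switch (stat) {
        case NLM_GRANTED:
            return (0);
        case NLM_DENIED:
            /* Contention, not a permission problem: EAGAIN lets a
             * non-blocking O_EXLOCK/O_SHLOCK open fail with "would
             * block" rather than EACCES. */
            return (EAGAIN);
        case NLM_DEADLCK:
            return (EDEADLK);
        default:
            return (ENOLCK);
        }
    }

    int
    main(void)
    {
        printf("NLM_DENIED -> errno %d (EAGAIN is %d)\n",
            nlm_stat_to_errno(NLM_DENIED), EAGAIN);
        return (0);
    }
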
Approved by: scottl (re)
Reviewed by: truckman, Andrew P Lentvorski, Jr. <bsder@allcaps.org>
Idea from: truckman
has requested the lock in a non-blocking form, instead returning an
immediate failure. This appears to help reduce one of my "locks get
lost" symptoms involving lockf(1), which attempts a non-blocking lock
attempt before actually blocking on the lock. At this point the client
still gets back EACCES, which is an issue we're still working on.
Approved by: re (scottl)
Submitted by: Andrew P. Lentvorski, Jr. <bsder@allcaps.org>
from the NFS server, following contention on a lock by this or another
client, immediately notify the waiting process that the lock has been
granted via a wakeup. Without this change, the client rpc.lockd will
not wake up the waiting process until it next re-polls the lock (sometime
in the next ten seconds), which can lead to marked latency across all
potential lockers, as the lock is held by the client for the duration.
Approved by: re (scottl)
Submitted by: truckman
Reviewed by: Andrew P. Lentvorski, Jr <bsder@allcaps.org>
print out the correct transport it failed on rather than always
spitting out 'udp'; also call nc_sperror() to give a more verbose
error message detailing the problem.
rpcgen can't really make those fields const because the remote side might
want to munge them, so we need to pass non-const in. Hackish, but should
work.
Alfred, I took a look at retry_blockingfilelocklist() and the
solution seemed simple enough. Please correct me if I am wrong.
It seems said routine doesn't take into account boundary conditions
when putting back file_lock entries into the blocked lock-list.
Specifically, it fails when the file_lock being put back is the
last element in the list, and when it is the only element in the
list. I've included a patch below.
Basically, it introduces another variable: pfl, which keeps track
of the list item before ifl. That way if nfl is NULL, ifl gets
inserted after pfl. If pfl is also NULL, then it gets inserted
at the head of the list (since it was the only element in the
list).
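A self-contained toy version of that pattern using the <sys/queue.h>
LIST macros (names are simplified stand-ins, not the actual rpc.lockd
code or the patch referred to above):

    #include <sys/queue.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct file_lock {
        int id;
        LIST_ENTRY(file_lock) entries;
    };

    static LIST_HEAD(, file_lock) blockedlocklist =
        LIST_HEAD_INITIALIZER(blockedlocklist);

    static int
    try_grant(struct file_lock *fl)
    {
        return (fl->id % 2 == 0);       /* pretend even ids can be granted */
    }

    static void
    retry_blocked(void)
    {
        struct file_lock *ifl, *nfl, *pfl;

        pfl = NULL;                     /* last entry left on the list */
        for (ifl = LIST_FIRST(&blockedlocklist); ifl != NULL; ifl = nfl) {
            nfl = LIST_NEXT(ifl, entries);
            LIST_REMOVE(ifl, entries);
            if (try_grant(ifl)) {
                printf("granted %d\n", ifl->id);
                free(ifl);
                continue;               /* pfl is unchanged */
            }
            /*
             * Still blocked: put the entry back where it was.  If pfl
             * is NULL the entry was the first (or only) one left, so
             * it goes back at the head; otherwise it goes after pfl.
             * Inserting "before nfl" instead would fail whenever nfl
             * is NULL, i.e. for the last or only element.
             */
            if (pfl != NULL)
                LIST_INSERT_AFTER(pfl, ifl, entries);
            else
                LIST_INSERT_HEAD(&blockedlocklist, ifl, entries);
            pfl = ifl;
        }
    }

    int
    main(void)
    {
        struct file_lock *fl;
        int i;

        for (i = 3; i >= 0; i--) {      /* list becomes 0, 1, 2, 3 */
            fl = malloc(sizeof(*fl));
            if (fl == NULL)
                return (1);
            fl->id = i;
            LIST_INSERT_HEAD(&blockedlocklist, fl, entries);
        }
        retry_blocked();
        LIST_FOREACH(fl, &blockedlocklist, entries)
            printf("still blocked: %d\n", fl->id);
        return (0);
    }
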
Submitted by: Mike Makonnen <mike_makonnen@yahoo.com>
Tested by: Thomas Quinot <thomas@cuivre.fr.eu.org>