freebsd-skq/sys/nfs4client
Doug Rabson dfdcada31e Add the new kernel-mode NFS Lock Manager. To use it instead of the
user-mode lock manager, build a kernel with the NFSLOCKD option and
add '-k' to 'rpc_lockd_flags' in rc.conf.
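
For reference, a minimal setup might look like the lines below. The
NFSLOCKD option and the '-k' flag are the ones named above, and
rpc_lockd_enable is the standard rc.conf knob for starting rpc.lockd.

    # kernel configuration file
    options         NFSLOCKD

    # /etc/rc.conf
    rpc_lockd_enable="YES"
    rpc_lockd_flags="-k"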

Highlights include:

* Thread-safe kernel RPC client - many threads can use the same RPC
  client handle safely, with replies being de-multiplexed at the socket
  upcall (typically driven directly by the NIC interrupt) and handed
  off to whichever thread matches the reply. For UDP sockets, many RPC
  clients can share the same socket. This allows the use of a single
  privileged UDP port number to talk to an arbitrary number of remote
  hosts. A sketch of the reply-matching scheme follows this list.

* Single-threaded kernel RPC server. Adding support for a
  multi-threaded server would be relatively straightforward and would
  follow approximately the Solaris KPI. A single thread should be
  sufficient for the NLM since it should rarely block in normal
  operation.

* Kernel mode NLM server supporting cancel requests and granted
  callbacks. I've tested the NLM server reasonably extensively - it
  passes both my own tests and the NFS Connectathon locking tests
  running on Solaris, Mac OS X and Ubuntu Linux.

* Userland NLM client supported. While the NLM server doesn't have
  support for the local NFS client's locking needs, it does have to
  field async replies and granted callbacks from remote NLMs that the
  local client has contacted. We relay these replies to the userland
  rpc.lockd over a local domain RPC socket.

* Robust deadlock detection for the local lock manager. In particular
  it will detect deadlocks caused by a lock request that covers more
  than one blocking request. As required by the NLM protocol, all
  deadlock detection happens synchronously - a user is guaranteed that
  if a lock request isn't rejected immediately, the lock will
  eventually be granted. The old system allowed for a 'deferred
  deadlock' condition where a blocked lock request could wake up and
  find that some other deadlock-causing lock owner had beaten it to
  the lock. A sketch of this waits-for check follows this list.

* Since both local and remote locks are managed by the same kernel
  locking code, local and remote processes can safely use file locks
  for mutual exclusion. Local processes have no fairness advantage
  compared to remote processes when contending to lock a region that
  has just been unlocked - the local lock manager enforces a strict
  first-come first-served model for both local and remote lockers. A
  short fcntl(2) usage example follows this list.
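
The reply matching described in the first highlight can be pictured
with a small userland analogue. This is only a sketch: the names
(pending_call, rpc_wait_for_reply, rpc_socket_upcall) are invented and
pthreads stand in for the kernel's own synchronization primitives - it
is not the kernel RPC code itself.

/*
 * Sketch of XID-based reply de-multiplexing: each caller parks on a
 * per-call condition variable; the receive-path "upcall" matches the
 * reply's XID against the list of outstanding calls and wakes exactly
 * the thread that sent the request.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct pending_call {
	uint32_t		 xid;		/* transaction id of the request */
	void			*reply;		/* filled in by the upcall */
	size_t			 replylen;
	int			 done;
	pthread_cond_t		 cv;
	struct pending_call	*next;
};

static pthread_mutex_t		 pc_lock = PTHREAD_MUTEX_INITIALIZER;
static struct pending_call	*pc_list;	/* outstanding requests */

/* Register a request, then sleep until the upcall hands us the reply. */
void *
rpc_wait_for_reply(uint32_t xid, size_t *lenp)
{
	struct pending_call pc, **pp;

	memset(&pc, 0, sizeof(pc));
	pc.xid = xid;
	pthread_cond_init(&pc.cv, NULL);

	pthread_mutex_lock(&pc_lock);
	pc.next = pc_list;
	pc_list = &pc;
	while (!pc.done)
		pthread_cond_wait(&pc.cv, &pc_lock);
	for (pp = &pc_list; *pp != &pc; pp = &(*pp)->next)
		;				/* unlink our entry */
	*pp = pc.next;
	pthread_mutex_unlock(&pc_lock);

	pthread_cond_destroy(&pc.cv);
	*lenp = pc.replylen;
	return (pc.reply);
}

/* Receive path: called with the decoded reply XID and payload. */
void
rpc_socket_upcall(uint32_t xid, const void *buf, size_t len)
{
	struct pending_call *pc;

	pthread_mutex_lock(&pc_lock);
	for (pc = pc_list; pc != NULL; pc = pc->next) {
		if (pc->xid != xid)
			continue;
		pc->reply = malloc(len);
		if (pc->reply != NULL) {
			memcpy(pc->reply, buf, len);
			pc->replylen = len;
		}
		pc->done = 1;
		pthread_cond_signal(&pc->cv);
		break;
	}
	pthread_mutex_unlock(&pc_lock);
	/* A reply with no matching XID is simply dropped. */
}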
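
The synchronous deadlock detection can be pictured as a cycle check
over a waits-for graph of lock owners, performed before a request is
allowed to block. The types and function names below are invented for
illustration; this is a generic sketch, not the local lock manager's
actual code.

/*
 * Each blocked owner records the owners whose locks it is waiting
 * behind.  A new request is rejected up front with EDEADLK if
 * following those edges from any of its blockers leads back to the
 * requesting owner; otherwise it may safely block.
 */
#include <errno.h>

#define MAX_EDGES	8

struct lock_owner {
	struct lock_owner	*waits_for[MAX_EDGES];	/* owners blocking us */
	int			 nwaits;
	unsigned		 visited_gen;		/* DFS mark */
};

static unsigned	check_gen;	/* bumped once per deadlock check */

static int
owner_reaches(struct lock_owner *from, struct lock_owner *target)
{
	int i;

	if (from == target)
		return (1);
	if (from->visited_gen == check_gen)
		return (0);			/* already searched */
	from->visited_gen = check_gen;
	for (i = 0; i < from->nwaits; i++)
		if (owner_reaches(from->waits_for[i], target))
			return (1);
	return (0);
}

/*
 * 'blockers' are the owners of the locks the new request would wait
 * behind; there may be several, since one request can overlap more
 * than one held lock.  Returns 0 if it is safe to block.
 */
int
deadlock_check(struct lock_owner *requester,
    struct lock_owner **blockers, int nblockers)
{
	int i;

	check_gen++;				/* invalidate old DFS marks */
	for (i = 0; i < nblockers; i++)
		if (owner_reaches(blockers[i], requester))
			return (EDEADLK);
	return (0);
}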
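
Because local and remote lockers go through the same lock manager, an
ordinary POSIX advisory lock is enough for mutual exclusion between
processes on the server and on its NFS clients. The snippet below is
plain fcntl(2) usage, not code from this change, and the path name is
made up.

/*
 * Run the same program on the server and on an NFS client: whichever
 * starts second blocks in F_SETLKW and is granted the lock, in
 * first-come first-served order, once the holder releases it.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	struct flock fl = {
		.l_type = F_WRLCK,	/* exclusive lock */
		.l_whence = SEEK_SET,
		.l_start = 0,
		.l_len = 0,		/* zero length means whole file */
	};
	int fd;

	fd = open("/mnt/shared/counter", O_RDWR);
	if (fd == -1) {
		perror("open");
		return (1);
	}
	if (fcntl(fd, F_SETLKW, &fl) == -1) {	/* block until granted */
		perror("fcntl");
		return (1);
	}
	printf("lock held; updating shared state\n");
	sleep(5);			/* stand-in for real work */

	fl.l_type = F_UNLCK;		/* release the lock */
	fcntl(fd, F_SETLK, &fl);
	close(fd);
	return (0);
}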

Sponsored by:	Isilon Systems
PR:		95247 107555 115524 116679
MFC after:	2 weeks
2008-03-26 15:23:12 +00:00
nfs4_dev.c Add better sanity checking to the logic that handles ioctl processing 2006-05-13 00:16:35 +00:00
nfs4_dev.h
nfs4_idmap.c - Handle buffer lock waiters count directly in the buffer cache instead 2008-03-01 19:47:50 +00:00
nfs4_idmap.h
nfs4_socket.c
nfs4_subs.c NFSv4 client: 2006-11-28 19:33:28 +00:00
nfs4_vfs_subs.c Unstaticize nfs_iosize() in nfsclient and use it in nfs4client instead 2007-01-25 13:07:25 +00:00
nfs4_vfs.h
nfs4_vfsops.c - Complete part of the unfinished bufobj work by consistently using 2008-03-22 09:15:16 +00:00
nfs4_vn_subs.c NFSv4 client: 2006-11-28 19:33:28 +00:00
nfs4_vn.h
nfs4_vnops.c Add the new kernel-mode NFS Lock Manager. To use it instead of the 2008-03-26 15:23:12 +00:00
nfs4.h
nfs4m_subs.h