Commit Graph

11 Commits

Author SHA1 Message Date
23397956aa Fixing NO_INET6 build. 2008-06-27 15:29:48 +00:00
dfr
41cea6d5ca Re-implement the client side of rpc.lockd in the kernel. This implementation
provides the correct semantics for flock(2) style locks which are used by the
lockf(1) command line tool and the pidfile(3) library. It also implements
recovery from server restarts and ensures that dirty cache blocks are written
to the server before obtaining locks (allowing multiple clients to use file
locking to safely share data).

Sponsored by:	Isilon Systems
PR:		94256
MFC after:	2 weeks
2008-06-26 10:21:54 +00:00
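
    As an illustration of the flock(2)-style whole-file locking the commit above
    refers to (the style used by lockf(1) and pidfile(3)), here is a minimal
    userland sketch; the file path is hypothetical and this is not code from the
    commit itself, just the client-visible usage pattern.

        #include <sys/file.h>	/* flock(), LOCK_EX, LOCK_UN */
        #include <err.h>
        #include <fcntl.h>
        #include <unistd.h>

        int
        main(void)
        {
        	/* Hypothetical file on an NFS mount. */
        	int fd = open("/mnt/nfs/shared.dat", O_RDWR | O_CREAT, 0644);
        	if (fd == -1)
        		err(1, "open");

        	/*
        	 * Whole-file exclusive lock; with the kernel NLM client the
        	 * request is forwarded to the NFS server's lock manager, so
        	 * multiple clients can safely share the file.
        	 */
        	if (flock(fd, LOCK_EX) == -1)
        		err(1, "flock");

        	/* ... read or modify the shared file here ... */

        	flock(fd, LOCK_UN);
        	close(fd);
        	return (0);
        }
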
dfr
79e7f22e28 Back out the nlm_global_lock part of the last change - I forgot that it
only exists in my Perforce branch :(

Pointy hat:	dfr
2008-06-03 08:10:58 +00:00
dfr
dc4eba5b74 When attempting to use the NSM state number in a lock request to detect
a client reboot, do this check before performing the lock; otherwise we
will trash the new lock along with any other old locks the client held
before rebooting.

Make sure nlm_check_idle always returns with nlm_global_lock held.

MFC after:	1 week
2008-06-02 15:59:10 +00:00
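
    The ordering fix described above can be pictured with a small sketch. None
    of these names are the real sys/nlm identifiers; they are hypothetical
    stand-ins showing only the order of operations.

        /*
         * Hypothetical sketch of the ordering fix; not the real nlm code.
         */
        struct fake_host {
        	int	state;			/* last NSM state number seen */
        };

        static void
        drop_stale_locks(struct fake_host *host)
        {
        	/* Stand-in for discarding the locks a rebooted client left. */
        	(void)host;
        }

        static int
        handle_lock_request(struct fake_host *host, int nsm_state)
        {
        	/*
        	 * Compare the NSM state number before installing the new lock.
        	 * Doing it afterwards would discard the new lock together with
        	 * the client's pre-reboot locks.
        	 */
        	if (nsm_state != host->state) {
        		drop_stale_locks(host);
        		host->state = nsm_state;
        	}
        	return (0);			/* now safe to perform the lock */
        }
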
dfr
ca246d7eed Don't rely on NSM to help us forget about RPC client handles for
clients that have rebooted (or otherwise changed port numbers). If the
client is broken or has no active locks, it won't notify us. Fall back
on the two minute timeout logic used by the userland rpc.lockd code.

MFC after:	1 week
2008-05-30 09:34:08 +00:00
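
    A rough sketch of the two-minute idle-timeout idea mentioned above, using
    hypothetical names rather than the actual kernel structures: a cached RPC
    client handle that has sat unused for two minutes is dropped and re-created
    on next use, picking up any new port number a rebooted client may have.

        #include <stddef.h>
        #include <time.h>

        #define IDLE_LIMIT	120	/* the two-minute timeout above */

        /* Hypothetical cached-handle record; not the real kernel structure. */
        struct cached_handle {
        	void	*client;	/* opaque RPC client handle */
        	time_t	 lastuse;	/* last time the handle was used */
        };

        static void	destroy_client(void *c) { (void)c; }	/* stand-in */
        static void    *create_client(void) { return (NULL); }	/* stand-in */

        static void *
        get_client(struct cached_handle *ch)
        {
        	time_t now = time(NULL);

        	/* Discard a handle that has been idle too long. */
        	if (ch->client != NULL && now - ch->lastuse > IDLE_LIMIT) {
        		destroy_client(ch->client);
        		ch->client = NULL;
        	}
        	if (ch->client == NULL)
        		ch->client = create_client();
        	ch->lastuse = now;
        	return (ch->client);
        }
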
dfr
0e16b52aec Tighten up the error-handling in nlm_get_rpc. While I'm here, fix a
couple of spelling mistakes in comments.
2008-04-16 09:09:50 +00:00
dfr
5dd963773a Fix some issues that showed up during Kris' testing.
Reported by:	kris
MFC after:	3 days
2008-04-11 10:34:59 +00:00
dfr
59c9f770cb Fix a problem which stopped this from starting up on a kernel compiled
without the INET6 option.
2008-04-09 15:43:19 +00:00
dfr
3d7f9d1b14 Minor changes to improve compatibility with older FreeBSD releases. 2008-03-28 09:50:32 +00:00
dfr
dc98ee4196 Add kernel module support for nfslockd and krpc. Use the module system
to detect (or load) kernel NLM support in rpc.lockd. Remove the '-k'
option to rpc.lockd and make kernel NLM the default. A user can still
force the use of the old user NLM by building a kernel without NFSLOCKD
and/or removing the nfslockd.ko module.
2008-03-27 11:54:20 +00:00
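
    The detect-or-load step described above can be sketched in userland with the
    standard modfind(2) and kldload(2) interfaces. The module name string is
    assumed from the nfslockd.ko file mentioned in the commit, and whether
    rpc.lockd does it exactly this way is not shown here, so treat this as
    illustrative only.

        #include <sys/param.h>
        #include <sys/module.h>
        #include <sys/linker.h>

        #include <err.h>
        #include <stdio.h>

        int
        main(void)
        {
        	/*
        	 * Look for the in-kernel NLM; if it is not present, try to
        	 * load nfslockd.ko before falling back to the user NLM.
        	 */
        	if (modfind("nfslockd") < 0) {
        		if (kldload("nfslockd") < 0) {
        			warn("cannot load nfslockd.ko");
        			printf("falling back to user-mode NLM\n");
        			return (1);
        		}
        	}
        	printf("kernel NLM available\n");
        	return (0);
        }
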
dfr
79d2dfdaa6 Add the new kernel-mode NFS Lock Manager. To use it instead of the
user-mode lock manager, build a kernel with the NFSLOCKD option and
add '-k' to 'rpc_lockd_flags' in rc.conf.

Highlights include:

* Thread-safe kernel RPC client - many threads can use the same RPC
  client handle safely with replies being de-multiplexed at the socket
  upcall (typically driven directly by the NIC interrupt) and handed
  off to whichever thread matches the reply. For UDP sockets, many RPC
  clients can share the same socket. This allows the use of a single
  privileged UDP port number to talk to an arbitrary number of remote
  hosts.

* Single-threaded kernel RPC server. Adding support for a multi-threaded
  server would be relatively straightforward and would follow
  approximately the Solaris KPI. A single thread should be sufficient
  for the NLM since it should rarely block in normal operation.

* Kernel mode NLM server supporting cancel requests and granted
  callbacks. I've tested the NLM server reasonably extensively - it
  passes both my own tests and the NFS Connectathon locking tests
  running on Solaris, Mac OS X and Ubuntu Linux.

* Userland NLM client supported. While the NLM server doesn't have
  support for the local NFS client's locking needs, it does have to
  field async replies and granted callbacks from remote NLMs that the
  local client has contacted. We relay these replies to the userland
  rpc.lockd over a local domain RPC socket.

* Robust deadlock detection for the local lock manager. In particular
  it will detect deadlocks caused by a lock request that covers more
  than one blocking request. As required by the NLM protocol, all
  deadlock detection happens synchronously - a user is guaranteed that
  if a lock request isn't rejected immediately, the lock will
  eventually be granted. The old system allowed for a 'deferred
  deadlock' condition where a blocked lock request could wake up and
  find that some other deadlock-causing lock owner had beaten it to
  the lock.

* Since both local and remote locks are managed by the same kernel
  locking code, local and remote processes can safely use file locks
  for mutual exclusion. Local processes have no fairness advantage
  compared to remote processes when contending to lock a region that
  has just been unlocked - the local lock manager enforces a strict
  first-come first-served model for both local and remote lockers.

Sponsored by:	Isilon Systems
PR:		95247 107555 115524 116679
MFC after:	2 weeks
2008-03-26 15:23:12 +00:00
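
    The synchronous deadlock-detection guarantee described in the highlights
    above is visible to applications through ordinary POSIX record locks: a
    blocking fcntl(2) request that would complete a deadlock fails immediately
    with EDEADLK instead of hanging. A minimal, hypothetical illustration:

        #include <sys/types.h>

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /*
         * Try to take a blocking write lock on one byte of fd.  A request
         * that would create a deadlock is rejected synchronously with
         * EDEADLK rather than blocking forever.
         */
        static int
        lock_byte(int fd, off_t offset)
        {
        	struct flock fl;

        	memset(&fl, 0, sizeof(fl));
        	fl.l_type = F_WRLCK;
        	fl.l_whence = SEEK_SET;
        	fl.l_start = offset;
        	fl.l_len = 1;

        	if (fcntl(fd, F_SETLKW, &fl) == -1) {
        		if (errno == EDEADLK)
        			fprintf(stderr, "deadlock detected, lock refused\n");
        		return (-1);
        	}
        	return (0);
        }
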