the NFS subsystems use five of the rpcsec_gss/kgssapi entry points,
but since it was not obvious which others might be useful, all
nineteen were included. Basically, the nineteen entry points are
set in a structure called rpc_gss_entries, and inline functions
defined in sys/rpc/rpcsec_gss.h check that an entry point is
non-NULL before calling it, returning a default value otherwise.
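As a rough sketch of the pattern (the real structure and inline
wrappers live in sys/rpc/rpcsec_gss.h; what follows is simplified to
one of the nineteen entry points):

    /* Sketch; see sys/rpc/rpcsec_gss.h for the real definitions. */
    struct rpc_gss_entries {
            bool_t  (*rpc_gss_refresh_auth)(AUTH *auth);
            /* ... the other eighteen entry points ... */
    };
    extern struct rpc_gss_entries rpc_gss_entries;

    static __inline bool_t
    rpc_gss_refresh_auth_call(AUTH *auth)
    {

            /* Call the entry point if the module has set it;
             * otherwise return a default value. */
            if (rpc_gss_entries.rpc_gss_refresh_auth != NULL)
                    return ((*rpc_gss_entries.rpc_gss_refresh_auth)(auth));
            return (FALSE);
    }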
Requested by: rwatson
Reviewed by: jhb
MFC after: 2 weeks
non-interruptible NFS mounts, where a kernel thread will seem
to be stuck sleeping on "rpccon". The msleep() in clnt_vc_create()
that was waiting for a TCP connect to complete would return ERESTART,
since PCATCH was specified. Then the tsleep() in clnt_reconnect_call()
would sleep for 1 second and try again, and again, indefinitely.
The patch changes the msleep() in clnt_vc_create() so it only sets
the PCATCH flag for interruptible cases.
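In outline (variable names are illustrative of the client handle's
private data):

    /* Only let signals interrupt the wait for the TCP connect when
     * the mount is interruptible; otherwise an ERESTART return just
     * restarts the attempt forever. */
    error = msleep(ct, &ct->ct_lock,
        (interruptible ? PCATCH : 0) | PSOCK, "rpccon", hz);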
Tested by: pho
Reviewed by: jhb
MFC after: 2 weeks
not believe that these leaks had a practical impact,
since the situations in which they would have occurred
would have been extremely rare.
MFC after: 2 weeks
VNET socket push back:
try to minimize the number of places where we have to switch vnets
and narrow down the time we stay switched. Add assertions to the
socket code to catch possibly unset vnets as seen in r204147.
While this reduces the number of vnet recursions, in some places
like NFS, POSIX local sockets and some netgraph nodes the
recursions are impossible to fix.
The current expectations are documented at the beginning of
uipc_socket.c along with the other information there.
Sponsored by: The FreeBSD Foundation
Sponsored by: CK Software GmbH
Reviewed by: jhb
Tested by: zec
Tested by: Mikolaj Golub (to.my.trociny gmail.com)
MFC after: 2 weeks
erroneously, assumed that 4 bytes of data were in the first
mbuf of a list by replacing the bcopy() with m_copydata().
Also, replace the uses of m_pullup(), which can fail for
reasons other than not enough data, with m_copydata().
For the cases where it isn't known that there is enough
data in the mbuf list, check first via m_len and m_length().
This is believed to fix a problem reported by dpd at dpdtech.com
and george+freebsd at m5p.com.
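The safe pattern is roughly (a sketch; the real code checks the length
of each field it is about to copy):

    uint32_t val;

    /* m_copydata() walks the mbuf chain, so it is safe even when the
     * 4 bytes span mbufs; just verify the chain is long enough first. */
    if (m_length(m, NULL) >= sizeof(val))
            m_copydata(m, 0, sizeof(val), (caddr_t)&val);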
Reviewed by: jhb
MFC after: 8 days
data size greater than 8192. Since soreserve(so, 256*1024, 256*1024)
would always fail for the default value of sb_max, modify clnt_dg.c
so that it uses the calculated values and checks for an error return
from soreserve(). Also, add a check for error return from soreserve()
to clnt_vc.c and change __rpc_get_t_size() to use sb_max_adj instead of
the bogus maxsize == 256*1024.
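A sketch of the corrected pattern (sendsz/recvsz stand for the
calculated sizes):

    error = soreserve(so, (u_long)sendsz, (u_long)recvsz);
    if (error != 0)
            goto err;       /* fail the create instead of continuing
                             * with unreserved socket buffer space */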
PR: kern/150910
Reviewed by: jhb
MFC after: 2 weeks
in the kernel (just as inet_ntoa() and inet_aton()) are and sync their
prototype accordingly with already mentioned functions.
Sponsored by: Sandvine Incorporated
Reviewed by: emaste, rstone
Approved by: dfr
MFC after: 2 weeks
(replay_alloc()) knows how to handle replay_alloc() failure.
- Eliminate the 'freed_one' variable; it is not needed, since when no entry
is found rce will be NULL.
- Add locking assertions where we expect rc_lock to be held.
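The assertions are the usual mutex-ownership checks, i.e.:

    mtx_assert(&rc->rc_lock, MA_OWNED);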
Reviewed by: rmacklem
MFC after: 2 weeks
succeeded and a subsequent iteration failed to find an
entry to prune, it could loop infinitely, since the
"freed" variable wasn't reset to FALSE. This patch moves
the setting of freed to FALSE inside the loop to fix the problem.
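In outline, the fix is (a simplified skeleton; the helper names are
hypothetical):

    /* Reset 'freed' on every pass, so one failed prune attempt ends
     * the loop instead of spinning forever on stale state. */
    freed = TRUE;
    while (cache_is_over_limit(rc) && freed) {
            freed = FALSE;
            if (prune_one_entry(rc))
                    freed = TRUE;
    }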
Tested by: alan.bryan at yahoo.com
MFC after: 2 weeks
cache, it did not free the request argument mbuf list, resulting in a leak.
This patch fixes that leak.
Tested by: danny AT cs.huji.ac.il
PR: kern/144330
Submitted by: to.my.trociny AT gmail.com (earlier version)
Reviewed by: dfr
MFC after: 2 weeks
kern.ngroups+1. kern.ngroups can range from NGROUPS_MAX=1023 to
INT_MAX-1. Given that the Windows group limit is 1024, this range
should be sufficient for most applications.
MFC after: 1 month
Fix some incorrect usages.
Note: this does not affect generated binaries, as this argument is not used.
PR: 137213
Submitted by: Eygene Ryabinkin (initial version)
MFC after: 1 month
client just before queuing a request for the connection. The
code already had a check for the connection being shut down
while the request was queued, but not one for a shutdown
initiated by the server before the request entered the
queue. This appears to fix the problem of slow reconnects
against an NFS server that drops inactive connections reported
by Olaf Seibert, but does not fix the case
where the FreeBSD client generates RST segments at about the
same time as ACKs. This is still a problem that is being
investigated. This patch does not cause a regression for this
case.
Tested by: Olaf Seibert, Daniel Braniss
Reviewed by: dfr
MFC after: 5 days
Who knew that "svn export" was an actual command, or that I would have
vfs_export.c stuck in my mind deep enough to type "export" instead of
"commit"?
Pointy Hat to: jamie
context inside the RPC code.
Temporarily set td's cred to mount's cred before calling socreate() via
__rpc_nconf2socket().
Submitted by: rmacklem (in part)
Reviewed by: rmacklem, rwatson
Discussed with: dfr, bz
Approved by: re (rwatson), julian (mentor)
MFC after: 3 days
kernel resources that block other threads, like vnode locks. A SIGSTOP
sent to such a thread (process, rather) shall not stop it until the
thread releases the resources.
Tested by: pho
Reviewed by: jhb
Approved by: re (kensmith)
call could get hung sleeping on "gsssta" if the credentials for a user
that had been accessing the mount point have expired. This happened
because rpc_gss_destroy_context() would end up calling itself when the
"destroy context" RPC was attempted, trying to refresh the credentials.
This patch just checks for this case in rpc_gss_refresh() and returns
without attempting the refresh, which avoids the recursive call to
rpc_gss_destroy_context() and the subsequent hang.
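Conceptually, the added check looks like this (the flag name is
hypothetical):

    /* In rpc_gss_refresh(): refreshing credentials while destroying
     * the context would recurse into rpc_gss_destroy_context(), so
     * just give up instead. */
    if (gd->gd_destroying)
            return (FALSE);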
Reviewed by: dfr
Approved by: re (Ken Smith), kib (mentor)
This is normally done by a loop in clnt_dg_close(), but requests that aren't
in the pending queue at the time of closing don't get set that way. This
avoids a panic in xdrmbuf_create(), which would otherwise be called with a
NULL cr_mrep when cr_error wasn't set to ESHUTDOWN while closing.
Reviewed by: dfr
Approved by: re (Ken Smith), kib (mentor)
during reading of the code. Change the code so that it never accesses
rc_connecting, rc_closed or rc_client when the rc_lock mutex is not held.
Also, it now performs the CLNT_CLOSE(client) and CLNT_RELEASE(client)
calls after the rc_lock mutex has been released, since those calls do
msleep() while holding another mutex. Change clnt_reconnect_call() so that
releasing the reference count is delayed until after the
"if (rc->rc_client == client)" check, so that rc_client cannot have been
recycled.
Tested by: pho
Reviewed by: dfr
Approved by: kib (mentor)
side fails, the entry in the cache is left with no valid context
(gd_ctx == GSS_C_NO_CONTEXT). As such, subsequent hits on the cache
will result in persistent authentication failure, even after the user has
done a kinit or similar and acquired a new valid TGT. This patch adds a test
for that case upon a cache hit and calls rpc_gss_init() to make another
attempt at getting valid credentials. It also moves the setting of gc_proc
to before the import of the principal name to ensure that, if that case
fails, it will be detected as a failure after going to "out:".
Reviewed by: dfr
Approved by: kib (mentor)
NGROUPS_MAX, eliminate ABI dependencies on them, and raise them to 1024
and 1023 respectively. (Previously they were equal, but under a close
reading of POSIX, NGROUPS_MAX was defined to be too large by 1, since it
is the number of supplemental groups, not the total number of groups.)
The bulk of the change consists of converting the struct ucred member
cr_groups from a static array to a pointer. Do the equivalent in
kinfo_proc.
Introduce new interfaces crcopysafe() and crsetgroups() for duplicating
a process credential before modifying it and for setting group lists
respectively. Both interfaces take care for the details of allocating
groups array. crsetgroups() takes care of truncating the group list
to the current maximum (NGROUPS) if necessary. In the future,
crsetgroups() may be responsible for ensuring invariants such as sorting
the supplemental groups to allow groupmember() to be implemented as a
binary search.
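A typical caller is expected to look something like this (a sketch,
with error handling elided):

    struct ucred *newcred, *oldcred;

    newcred = crget();
    crextend(newcred, ngrp);            /* pre-extend: no allocation
                                         * needed under the lock */
    PROC_LOCK(p);
    oldcred = crcopysafe(p, newcred);   /* safe copy of p's current cred */
    crsetgroups(newcred, ngrp, groups); /* installs, truncates to NGROUPS */
    p->p_ucred = newcred;
    PROC_UNLOCK(p);
    crfree(oldcred);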
Because we cannot change struct xucred without breaking application
ABIs, we leave it alone and introduce a new XU_NGROUPS value which is
always 16; it (or NGRPS, as appropriate) is to be used for things such
as NFS which must use no more than 16 groups. When feasible, truncate
the group list rather than generating an error.
Minor changes:
- Reduce the number of hand rolled versions of groupmember().
- Do not assign to both cr_gid and cr_groups[0].
- Modify ipfw to cache ucreds instead of part of their contents since
they are immutable once referenced by more than one entity.
Submitted by: Isilon Systems (initial implementation)
X-MFC after: never
PR: bin/113398 kern/133867
SVCXPRT structure returned by them, it was possible for the structure
to be free'd before svc_reg() had been completed using the structure.
This patch acquires a reference count on the newly created structure
that is returned by svc_[dg|vc|tli|tp]_create(). It also
adds the appropriate SVC_RELEASE() calls to the callers, except the
experimental nfs subsystem. The latter will be committed separately.
Submitted by: dfr
Tested by: pho
Approved by: kib (mentor)
variables set via the getcredhostid() function. I also changed the type
of ci_hostid to "unsigned long" so that it matches what is returned by
getcredhostid(). Although "struct svc_rpc_gss_clientid" goes on the wire
during RPCSEC_GSS, it is just a variable number of opaque bytes to the client,
so it doesn't matter how much storage ci_hostid uses.
Approved by: kib (mentor)
server would crash because the Solaris10 client would attempt to use
Sun's NFSACL protocol, which FreeBSD doesn't support. When the server
generated the error reply via svcerr_noprog(), it would cause a crash
because it would try to wrap a NULL reply. According to RFC 2203, no
wrapping is required for error cases. This one line change avoids
wrapping of NULL replies.
Reviewed by: dfr
Approved by: kib (mentor)
connect failed, the thread would be left stuck in msleep()
indefinitely, since it would call msleep() again for the case
where rc_client == NULL. Change the loop criteria and the if statement just
after the loop, so that this case is handled correctly.
Reviewed by: dfr
Approved by: kib (mentor)
where an improperly initialized prison field could lead to a panic. This
is not the correct solution, since it fails to address similar problems
for both AUDIT and MAC, which also rely on properly initialized
credentials, but should reduce panic reports while we work that out.
Reported by: ps, kan, others
thread has already unregistered the structure. Also add a KASSERT()
to xprt_unregister_locked() to check that the structure hasn't already
been unregistered.
Reviewed by: jhb
Tested by: pho
Approved by: kib (mentor)
mtx_destroy() of the pool mutex to after SVC_RELEASE(), because
the pool mutex was still locked when soclose() was called by svc_dg_destroy().
To fix this, an mtx_unlock() was added where mtx_destroy() was before
r193436.
Reviewed by: jhb
Tested by: pho
Approved by: rwatson (mentor)
holding SOCKBUF_LOCK() isn't sufficient to guarantee that there is
no upcall in progress, since SOCKBUF_LOCK() is released/re-acquired
in the upcall. An upcall reference counter was added to the upcall
structure that is incremented at the beginning of the upcall and
decremented at the end of the upcall. As such, a reference count == 0
when holding the SOCKBUF_LOCK() guarantees there is no upcall in
progress. Add a function that is called just after soupcall_clear(),
which waits until the reference count == 0.
Also, move the mtx_destroy() down to after soupcall_clear(), so that
the mutex is not destroyed before upcalls are done.
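The added wait looks roughly like this (field names illustrative;
this assumes the upcall side does a wakeup() when it drops the last
reference):

    SOCKBUF_LOCK(&so->so_rcv);
    soupcall_clear(so, SO_RCV);
    /* Sleep until any in-progress upcall drops its reference. */
    while (xprt->xp_upcallrefs > 0)
            (void)msleep(&xprt->xp_upcallrefs,
                SOCKBUF_MTX(&so->so_rcv), 0, "svcupcl", 0);
    SOCKBUF_UNLOCK(&so->so_rcv);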
Reviewed by: dfr, jhb
Tested by: pho
Approved by: kib (mentor)
Add a flag so that soupcall_clear() is only called once to cancel
an upcall.
Move the test for xprt_registered in the upcall down to after the
mtx_lock() of the pool mutex, to catch the case where it is
unregistered while the upcall is waiting for the mutex.
Also, move the mtx_destroy() of the pool mutex to after SVC_RELEASE(),
so that it isn't destroyed before the upcalls are disabled.
Reviewed by: dfr, jhb
Tested by: pho
Approved by: kib (mentor)
count of the number of registered policies.
Rather than unconditionally locking sockets before passing them into MAC,
lock them in the MAC entry points only if mac_policy_count is non-zero.
This avoids locking overhead for a number of socket system calls when no
policies are registered, eliminating measurable overhead for the MAC
Framework for the socket subsystem when there are no active policies.
Possibly socket locks should be acquired by policies if they are required
for socket labels, which would further avoid locking overhead when there
are policies but they don't require labeling of sockets, or possibly
don't even implement socket controls.
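The entry points now follow a pattern roughly like this (a sketch;
MAC_CHECK is the framework's internal policy-iteration macro, which
assigns to 'error'):

    int
    mac_socket_check_send(struct ucred *cred, struct socket *so)
    {
            int error;

            if (mac_policy_count == 0)
                    return (0);     /* no policies: skip the lock */
            SOCK_LOCK(so);
            MAC_CHECK(socket_check_send, cred, so, so->so_label);
            SOCK_UNLOCK(so);
            return (error);
    }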
Obtained from: TrustedBSD Project
- Each socket upcall is now invoked with the appropriate socket buffer
locked. It is not permissible to call soisconnected() with this lock
held, however, so socket upcalls now return an integer value. The two
possible values are SU_OK and SU_ISCONNECTED. If an upcall returns
SU_ISCONNECTED, soisconnected() will be invoked on the
socket after the socket buffer lock is dropped.
- A new API is provided for setting and clearing socket upcalls. The
API consists of soupcall_set() and soupcall_clear().
- To simplify locking, each socket buffer now has a separate upcall.
- When a socket upcall returns SU_ISCONNECTED, the upcall is cleared from
the receive socket buffer automatically. Note that a SO_SND upcall
should never return SU_ISCONNECTED.
- All this means that accept filters should now return SU_ISCONNECTED
instead of calling soisconnected() directly. They also no longer need
to explicitly clear the upcall on the new socket. (A sketch of the
new-style upcall follows this list.)
- The HTTP accept filter still uses soupcall_set() to manage its internal
state machine, but other accept filters no longer have any explicit
knowledge of socket upcall internals aside from their return value.
- The various RPC client upcalls currently drop the socket buffer lock
while invoking soreceive() as a temporary band-aid. The plan for
the future is to add a new flag to allow soreceive() to be called with
the socket buffer locked.
- The AIO callback for socket I/O is now also invoked with the socket
buffer locked. Previously sowakeup() would drop the socket buffer
lock only to call aio_swake() which immediately re-acquired the socket
buffer lock for the duration of the function call.
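A minimal sketch of an upcall under the new API (the readiness test is
a stand-in for real filter logic):

    static int
    example_rcv_upcall(struct socket *so, void *arg, int waitflag)
    {
            /* Invoked with the receive socket buffer lock held. */
            if (soreadable(so))
                    return (SU_ISCONNECTED);  /* soisconnected() runs
                                               * after the lock is
                                               * dropped, and the upcall
                                               * is cleared */
            return (SU_OK);
    }

    /* Installed (and later cleared) with the socket buffer lock held: */
    SOCKBUF_LOCK(&so->so_rcv);
    soupcall_set(so, SO_RCV, example_rcv_upcall, NULL);
    SOCKBUF_UNLOCK(&so->so_rcv);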
Discussed with: rwatson, rmacklem
The system hostname is now stored in prison0, and the global variable
"hostname" has been removed, as has the hostname_mtx mutex. Jails may
have their own host information, or they may inherit it from the
parent/system. The proper way to read the hostname is via
getcredhostname(), which will copy either the hostname associated with
the passed cred, or the system hostname if you pass NULL. The system
hostname can still be accessed directly (and without locking) at
prison0.pr_host, but that should be avoided where possible.
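For example, an in-kernel consumer reads the hostname as:

    char hostname[MAXHOSTNAMELEN];

    /* Copies the hostname associated with the cred's jail, or the
     * system hostname for the base system. */
    getcredhostname(td->td_ucred, hostname, sizeof(hostname));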
The "similar information" referred to is domainname, hostid, and
hostuuid, which have also become prison parameters and had their
associated global variables removed.
Approved by: bz (mentor)
- add FreeBSD implementation of xdrmem_control needed by zfs
- have zfs define xdr_ops using FreeBSD's definition
- remove the Solaris xdr files from the zfs compile
use to identify if the socket is the same one that a cached request
came in on. It is set by nfsrvd_addsock() to a unique value generated
by incrementing an unsigned 64-bit static variable for each assignment,
and then the value of xp_sockref is tested to see if it is equal to
the value that was saved with the cached reply.
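In outline (a sketch of the described scheme; rc_sockref stands for
wherever the value is saved with the cached reply):

    static uint64_t sockref_gen;        /* incremented per added socket */

    /* nfsrvd_addsock(): tag the new connection. */
    xprt->xp_sockref = ++sockref_gen;

    /* On a cache hit, only use the cached reply if the request came
     * in on the same socket the reply was cached from. */
    if (rp->rc_sockref == xprt->xp_sockref) {
            /* ... send the cached reply ... */
    }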
Submitted by: rmacklem
Reviewed by: dfr
Approved by: kib (mentor)
and server. This replaces the RPC implementation of the NFS client and
server with the newer RPC implementation originally developed
(actually ported from the userland sunrpc code) to support the NFS
Lock Manager. I have tested this code extensively and I believe it is
stable and that performance is at least equal to the legacy RPC
implementation.
The NFS code currently contains support for both the new RPC
implementation and the older legacy implementation inherited from the
original NFS codebase. The default is to use the new implementation -
add the NFS_LEGACYRPC option to fall back to the old code. When I
merge this support back to RELENG_7, I will probably change this so
that users have to 'opt in' to get the new code.
To use RPCSEC_GSS on either client or server, you must build a kernel
which includes the KGSSAPI option and the crypto device. On the
userland side, you must build at least a new libc, mountd, mount_nfs
and gssd. You must install new versions of /etc/rc.d/gssd and
/etc/rc.d/nfsd and add 'gssd_enable=YES' to /etc/rc.conf.
As long as gssd is running, you should be able to mount an NFS
filesystem from a server that requires RPCSEC_GSS authentication. The
mount itself can happen without any kerberos credentials but all
access to the filesystem will be denied unless the accessing user has
a valid ticket file in the standard place (/tmp/krb5cc_<uid>). There
is currently no support for situations where the ticket file is in a
different place, such as when the user logged in via SSH and has
delegated credentials from that login. This restriction is also
present in Solaris and Linux. In theory, we could improve this in
future, possibly using Brooks Davis' implementation of variant
symlinks.
Supporting RPCSEC_GSS on a server is nearly as simple. You must create
service creds for the server in the form 'nfs/<fqdn>@<REALM>' and
install them in /etc/krb5.keytab. The standard Heimdal utility ktutil
makes this fairly easy. After the service creds have been created, you
can add a '-sec=krb5' option to /etc/exports and restart both mountd
and nfsd.
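Pulling the pieces together, a server-side setup might look like this
(host, realm and export path are illustrative):

    /etc/rc.conf:
        gssd_enable="YES"
        nfs_server_enable="YES"
        mountd_enable="YES"

    /etc/krb5.keytab:
        contains the key for nfs/server.example.com@EXAMPLE.COM

    /etc/exports:
        /export -sec=krb5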
The only other difference an administrator should notice is that nfsd
doesn't fork to create service threads any more. In normal operation,
there will be two nfsd processes, one in userland waiting for TCP
connections and one in the kernel handling requests. The latter
process will create as many kthreads as required - these should be
visible via 'top -H'. The code has some support for varying the number
of service threads according to load but initially at least, nfsd uses
a fixed number of threads according to the value supplied to its '-n'
option.
Sponsored by: Isilon Systems
MFC after: 1 month