Commit Graph

161 Commits

Author SHA1 Message Date
Konstantin Belousov
5673e3cb08 The cache_enter(9) function shall not be called for a doomed dvp.
Assert this.

In the reported panic, vdestroy() fired the assertion "vp has namecache
for ..", because pseudofs may end up doing cache_enter() with a reclaimed
dvp after a dotdot lookup temporarily unlocked dvp.
A similar problem exists in ufs_lookup() for "." lookup, when the vnode
lock needs to be upgraded.

Verify that dvp is not reclaimed before calling cache_enter().

Reported and tested by:	pho
Reviewed by:	kan
MFC after:	2 weeks
2010-04-20 10:19:27 +00:00
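
A minimal userspace sketch of the rule above: once the directory vnode may have been unlocked (for example across a dotdot lookup), re-check that it was not reclaimed before touching the name cache. The struct and helper names (vn_model, cache_enter_checked) are invented for this sketch and are not the kernel API.

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy stand-in for a vnode; "doomed" models the reclaimed state. */
    struct vn_model {
        bool doomed;
    };

    /*
     * Re-check the directory vnode after it may have been unlocked and
     * skip the cache update entirely if it was reclaimed in the meantime.
     */
    static bool
    cache_enter_checked(struct vn_model *dvp)
    {
        if (dvp->doomed)
            return (false);    /* dvp was reclaimed; do not cache */
        /* The real code would call cache_enter(dvp, vp, cnp) here. */
        return (true);
    }

    int
    main(void)
    {
        struct vn_model live = { .doomed = false };
        struct vn_model dead = { .doomed = true };

        printf("live dvp entered: %d\n", cache_enter_checked(&live));
        printf("doomed dvp entered: %d\n", cache_enter_checked(&dead));
        return (0);
    }
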
Konstantin Belousov
3e22320c43 Fix typo.
MFC after:	3 days
2010-04-15 17:17:02 +00:00
Konstantin Belousov
8f40845151 Correctly handle unlock for the !MAKEENTRY case: after a successful
lock upgrade attempt, the cache shall be unlocked from write.

Reported by:	Lucius Windschuh <lwindschuh googlemail com>
Reviewed by:	kan
Approved by:	re (rwatson)
2009-08-14 10:57:28 +00:00
Konstantin Belousov
c808c9632d Add an explicit struct ucred * argument for VOP_VPTOCNP, to be used by
vn_open_cred in the default implementation. A valid struct ucred is needed
for audit and MAC, and curthread credentials may be wrong.

This further requires modifying the interface of vn_fullpath(9), but that
is out of scope for this change.

Reviewed by:	rwatson
2009-06-21 19:21:01 +00:00
Joe Marcus Clarke
8a4444049e Unlock the cache lock before returning when we run out of buffer space
trying to fill in the full path name.

Reported by:	David Naylor <naylor.b.david@gmail.com>
Approved by:	kib
2009-06-05 16:44:42 +00:00
Konstantin Belousov
1358a7957d Unbreak the build. Add missed probes.
Reviewed by:	rwatson
Pointy hat to:	me
2009-05-31 20:16:06 +00:00
Konstantin Belousov
0449e6e1eb Eliminate code duplication in vn_fullpath1() around the cache lookups
and calls to vn_vptocnp() by moving more of the common code to
vn_vptocnp(). Rename vn_vptocnp() to vn_vptocnp_locked() to signify that
the cache is locked around the call.

Do not track buffer position by both the pointer and offset, use only
buflen to record the start of the free space.

Export vn_vptocnp() for external consumers as a wrapper around
vn_vptocnp_locked() that locks the cache and handles hold counts.

Tested by:	pho
2009-05-31 14:57:43 +00:00
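
A small userspace sketch of the buffer convention described above: path components are prepended in front of buf[*buflen] at the end of a fixed buffer, and buflen alone records the start of the free space. prepend_component() is an invented name for this sketch, not the kernel helper.

    #include <stdio.h>
    #include <string.h>

    /*
     * Prepend "name" (with a leading '/') in front of buf[*buflen],
     * moving *buflen down.  Returns 0 on success, -1 if the buffer
     * would underflow -- checked before writing.
     */
    static int
    prepend_component(char *buf, size_t *buflen, const char *name)
    {
        size_t len = strlen(name);

        if (*buflen < len + 1)
            return (-1);
        *buflen -= len + 1;
        buf[*buflen] = '/';
        memcpy(buf + *buflen + 1, name, len);
        return (0);
    }

    int
    main(void)
    {
        char buf[64];
        size_t buflen = sizeof(buf) - 1;

        buf[buflen] = '\0';        /* terminate once, at the very end */
        /* Resolving child to parent, components arrive leaf-first. */
        prepend_component(buf, &buflen, "vfs_cache.c");
        prepend_component(buf, &buflen, "kern");
        prepend_component(buf, &buflen, "sys");
        printf("%s\n", buf + buflen);    /* /sys/kern/vfs_cache.c */
        return (0);
    }
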
Alexander Kabaev
348496ad39 More fallout from negative dotdot caching. Negative entries should
be removed from, and reinserted into, the proper ncneg list.

Reported by:  pho
Submitted by: kib
2009-04-17 18:11:11 +00:00
Alexander Kabaev
9cf6772211 Redo the previous change using a simpler patch that also happens
to be more correct.

Submitted by: tor
2009-04-14 23:56:48 +00:00
Alexander Kabaev
eed8a9edba Fix yet another negative dotdot entry fallout.
Reported by: pho
2009-04-14 23:46:57 +00:00
Alexander Kabaev
9d75482f99 Fix v_cache_dd handling for negative entries. The v_cache_dd pointer
was not populated in the parent directory if a negative entry was being
created, yet the entry itself was added to the nc_neg list. It was
possible for the parent vnode to be discarded later, leaving the negative
entry pointing to a now-unused memory block.

Reported by:	dho
Reviewed by:	kib
2009-04-11 20:23:08 +00:00
Konstantin Belousov
fd409594c6 When zapping v_cache_dd for the !MAKEENTRY case in cache_lookup(), we
shall lock the cache as a writer.

Reviewed by:	kan
2009-04-11 16:12:20 +00:00
Konstantin Belousov
3f54086eba Cache_lookup() for DOTDOT drops the dvp vnode lock, allowing dvp to be
reclaimed. Check for this condition and return ENOENT in that case.

In nfs_lookup(), respect an ENOENT return from cache_lookup() when it is
caused by a dvp reclaim.

Reported and tested by:	pho
2009-04-10 10:22:44 +00:00
Robert Watson
5d5c174869 Nul-terminate strings in the VFS name cache, which negligibly
increases the size and cost of name cache entries but makes adding
debugging and tracing easier.

Add SDT DTrace probes for various namecache events:

  vfs:namecache:enter:done - new entry in the name cache, passed parent
    directory vnode pointer, name added to the cache, and child vnode
    pointer.

  vfs:namecache:enter_negative:done - new negative entry in the name cache,
    passed parent vnode pointer, name added to the cache.

  vfs:namecache:fullpath:enter - call to vn_fullpath1() is made, passed
    the vnode to resolve to a name.

  vfs:namecache:fullpath:hit - vn_fullpath1() successfully resolved a
    search for the parent of an object using the namecache, passed the
    discovered parent directory vnode pointer, name, and child vnode
    pointer.

  vfs:namecache:fullpath:miss - vn_fullpath1() failed to resolve a search
    for the parent of an object using the namecache, passed the child
    vnode pointer.

  vfs:namecache:fullpath:return - vn_fullpath1() has completed, passed the
    error number, and if that is zero, the vnode to resolve, and the
    returned path.

  vfs:namecache:lookup:hit - positive name cache entry hit, passed the
    parent directory vnode pointer, name, and child vnode pointer.

  vfs:namecache:lookup:hit_negative - negative name cache entry hit,
    passed the parent directory vnode pointer and name.

  vfs:namecache:lookup:miss - name cache miss, passed the parent directory
    pointer and the full remaining component name (not terminated after the
    cache miss component).

  vfs:namecache:purge:done - name cache purge for a vnode, passed the vnode
    pointer to purge.

  vfs:namecache:purge_negative:done - name cache purge of negative entries
    for children of a vnode, passed the vnode pointer to purge.

  vfs:namecache:purgevfs - name cache purge for a mountpoint, passed the
    mount pointer.  Separate probes will also be invoked for each cache
    entry zapped.

  vfs:namecache:zap:done - name cache entry zapped, passed the parent
    directory vnode pointer, name, and child vnode pointer.

  vfs:namecache:zap_negative:done - negative name cache entry zapped,
    passed the parent directory vnode pointer and name.

For any probes involving an extant name cache entry (enter, hit, zap),
we use the nul-terminated string for the name component.  For misses,
the remainder of the path, including later components, is provided as
an argument instead since there is no handy nul-terminated version of
the string around.  This is arguably a bug.

MFC after:      1 month
Sponsored by:   Google, Inc.
Reviewed by:	jhb, kan, kib (earlier version)
2009-04-07 20:58:56 +00:00
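
For reference, this is roughly how such a probe is declared and fired with the sys/sdt.h macros in current FreeBSD kernel code; it is a kernel-side fragment rather than a standalone program, the helper function name is invented, and the macro spelling has changed slightly since this 2009 commit, so treat the exact form as an assumption.

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sdt.h>

    struct vnode;        /* a forward declaration suffices for pointers */

    SDT_PROVIDER_DECLARE(vfs);

    /* Probe carrying the parent dvp, the looked-up name, and the child vp. */
    SDT_PROBE_DEFINE3(vfs, namecache, lookup, hit, "struct vnode *", "char *",
        "struct vnode *");

    /* Called (hypothetically) at the point where a positive hit is found. */
    void
    vfs_namecache_report_hit(struct vnode *dvp, const char *name,
        struct vnode *vp)
    {
        SDT_PROBE3(vfs, namecache, lookup, hit, dvp, name, vp);
    }
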
Alexander Kabaev
bb6418cbe3 Revert change 190655 temporarily. It breaks many setups where nullfs is
used and needs to be revisited.
2009-04-04 17:48:38 +00:00
Peter Wemm
0e875ecafe vn_vptocnp() unlocks the name cache and forgets to re-lock it before
returning in one error case, and mistakenly unlocks it for the
umount -f case.
2009-04-02 21:16:20 +00:00
Alexander Kabaev
607fc40b04 Replace the v_dd vnode pointer with a v_cache_dd pointer to struct
namecache in directory vnodes. Allow a namecache dotdot entry to be created
pointing from the child vnode to the parent vnode if no links in the
opposite direction exist. Otherwise, use the direct link from parent to
child for dotdot lookups.

This restores more efficient dotdot caching in NFS filesystems, which
was lost when vnodes stopped being type-stable.

Reviewed by:	kib
2009-03-29 21:25:40 +00:00
John Baldwin
049ce0934f When a file lookup fails due to encountering a doomed vnode from a forced
unmount, consistently return ENOENT rather than EBADF.

Reviewed by:	kib
MFC after:	1 month
2009-03-24 18:16:42 +00:00
Konstantin Belousov
15fb32c07d Do not underflow the buffer and then report the problem. Check for the
condition before the buffer write.
Also, since buflen is unsigned, the previous check was never triggered.

Reviewed by:	marcus
Tested by:	pho
2009-03-20 11:08:57 +00:00
Konstantin Belousov
83817ce3b1 Remove unneeded braces to reduce used vertical screen space.
The location was missed in r190140.
2009-03-20 11:03:55 +00:00
Konstantin Belousov
9194007261 Do not forget to adjust buflen for the first resolution of the path
from the namecache.
While there, compare pointers for equality.

Reviewed by:	marcus
Tested by:	pho
2009-03-20 11:00:39 +00:00
Konstantin Belousov
065fc451f8 The nc_nlen member of the struct namecache contains the length of the cached
name, not the length + 1.

PR:	132620, 132542
Reported by:	bf2006a yahoo com
Tested by:	bf2006a, pho
Reviewed by:	marcus
2009-03-20 10:59:06 +00:00
Konstantin Belousov
c4a8c2ee24 When ktracing namei operations, log the result of __getcwd().
MFC after:	1 week
2009-03-20 10:47:16 +00:00
Konstantin Belousov
bf5c835e1c Remove unneeded braces to reduce used vertical screen space. 2009-03-20 10:04:00 +00:00
John Baldwin
4ab2a9a022 Move the debug.hashstat sysctl tree under DIAGNOSTIC. I measured the
debug.hashstat.rawnchash sysctl in particular as taking 7 milliseconds on
a 3GHz Intel Xeon (4x2) running 7.1.  It accounted for almost a quarter of
the total runtime of 'sysctl -a'.  It also performs lots of copyout's while
holding the namecache lock (this does not attempt to fix that).

MFC after:	2 weeks
2009-03-09 19:04:53 +00:00
John Baldwin
03964c8e09 Enable caching of negative pathname lookups in the NFS client. To avoid
stale entries, we save a copy of the directory's modification time in the
directory's NFS node when the first negative cache entry is added.
When a negative cache entry is hit during a pathname lookup, the parent
directory's modification time is checked.  If it has changed, all of the
negative cache entries for that parent are purged and the lookup falls
back to using the RPC.  This required adding a new cache_purge_negative()
method to the name cache to purge only negative cache entries for a given
directory.

Submitted by:	mohans, Rick Macklem, Ricardo Labiaga @ NetApp
Reviewed by:	mohans
2009-02-19 22:28:48 +00:00
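
A userspace model of the invalidation rule described above: remember the directory's modification time when its first negative entry is cached, and on a negative hit purge all negative entries for that directory if the modification time has changed, falling back to the RPC. All names here (dir_ncache, neg_cache_add, neg_cache_hit) are invented; the real logic lives in the NFS client and cache_purge_negative().

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Per-directory state, modeling the fields kept in the NFS node. */
    struct dir_ncache {
        time_t neg_mtime;    /* dir mtime when first negative entry added */
        int    neg_count;    /* number of cached negative entries */
    };

    static void
    neg_cache_add(struct dir_ncache *d, time_t dir_mtime)
    {
        if (d->neg_count == 0)
            d->neg_mtime = dir_mtime;    /* remember mtime on first entry */
        d->neg_count++;
    }

    /*
     * On a negative-entry hit: if the directory changed since the first
     * negative entry was added, the entries may be stale -- purge them
     * and tell the caller to fall back to the lookup RPC.
     */
    static bool
    neg_cache_hit(struct dir_ncache *d, time_t cur_dir_mtime)
    {
        if (cur_dir_mtime != d->neg_mtime) {
            d->neg_count = 0;        /* models cache_purge_negative() */
            return (false);          /* redo the lookup over RPC */
        }
        return (true);               /* trust the cached ENOENT */
    }

    int
    main(void)
    {
        struct dir_ncache d = { 0, 0 };

        neg_cache_add(&d, 1000);
        printf("unchanged dir, trust hit: %d\n", neg_cache_hit(&d, 1000));
        printf("modified dir, trust hit:  %d\n", neg_cache_hit(&d, 1005));
        return (0);
    }
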
John Baldwin
9078981ab1 Convert the global mutex protecting the directory lookup name cache from a
mutex to a reader/writer lock.  Lookup operations first grab a read lock and
perform the lookup.  If the operation results in a need to modify the cache,
then it tries to do an upgrade.  If that fails, it drops the read lock,
obtains a write lock, and redoes the lookup.
2009-01-28 19:05:18 +00:00
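
The locking discipline described above can be modeled in userspace with POSIX rwlocks. POSIX has no try-upgrade, so the sketch below shows only the fallback path the commit describes (drop the read lock, take the write lock, redo the lookup); in the kernel the rw_try_upgrade(9) fast path is attempted first. The cache and function names are invented for the sketch.

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static pthread_rwlock_t cache_lock = PTHREAD_RWLOCK_INITIALIZER;
    static char cached_name[32];        /* a stand-in for the name cache */

    /* Look up under the read lock; if we must modify, retake as writer and redo. */
    static void
    lookup_and_maybe_enter(const char *name)
    {
        pthread_rwlock_rdlock(&cache_lock);
        if (strcmp(cached_name, name) == 0) {    /* read-only fast path */
            pthread_rwlock_unlock(&cache_lock);
            return;
        }
        /* Need to modify the cache: drop the read lock, take the write lock. */
        pthread_rwlock_unlock(&cache_lock);
        pthread_rwlock_wrlock(&cache_lock);
        /* Redo the lookup: the cache may have changed while unlocked. */
        if (strcmp(cached_name, name) != 0)
            snprintf(cached_name, sizeof(cached_name), "%s", name);
        pthread_rwlock_unlock(&cache_lock);
    }

    int
    main(void)
    {
        lookup_and_maybe_enter("vfs_cache.c");    /* miss: inserts */
        lookup_and_maybe_enter("vfs_cache.c");    /* hit: read lock only */
        printf("%s\n", cached_name);
        return (0);
    }
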
John Baldwin
8a7ef10b71 - Mark all standalone INT/LONG/QUAD sysctl's MPSAFE. This is done
inside the SYSCTL() macros and thus does not need to be done for
  all of the nodes scattered across the source tree.
- Mark the name-cache related sysctl's (including debug.hashstat.*) MPSAFE.
- Mark vm.loadavg MPSAFE.
- Remove GIANT_REQUIRED from vmtotal() (everything in this routine already
  has sufficient locking) and mark vm.vmtotal MPSAFE.
- Mark the vm.stats.(sys|vm).* sysctls MPSAFE.
2009-01-23 22:49:23 +00:00
Stephen McKay
58c1607e03 Add a limit on namecache entries.
In normal operation, the number of cache entries is roughly equal to the
number of active vnodes.  However, when most of the recently accessed
vnodes have many hard links, the number of cache entries can be 32000
times as large, exhausting kernel memory and provoking a panic in
kmem_malloc().

MFC after: 2 weeks
2009-01-20 04:21:21 +00:00
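
A toy model of the cap described above: before inserting a new entry, compare the current entry count against a limit and zap an existing entry first when the limit has been reached. The identifiers below (NC_LIMIT, numcache, zap_oldest) are placeholders for this sketch, not the kernel tunables.

    #include <stdio.h>

    #define NC_LIMIT 4              /* stand-in for the tunable limit */

    static int numcache;            /* current number of cache entries */

    static void
    zap_oldest(void)
    {
        /* The real code zaps an existing namecache entry here. */
        numcache--;
    }

    /* Enforce the cap before adding a new entry. */
    static void
    cache_enter_limited(void)
    {
        if (numcache >= NC_LIMIT)
            zap_oldest();
        numcache++;
    }

    int
    main(void)
    {
        /* Many hard links to few vnodes: entries would otherwise grow unbounded. */
        for (int i = 0; i < 100; i++)
            cache_enter_limited();
        printf("entries: %d (limit %d)\n", numcache, NC_LIMIT);
        return (0);
    }
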
Konstantin Belousov
83e73926ad In r185557, the check for an existing negative entry for the given name
did not compare nc_dvp with the supplied parent directory vnode pointer.
Add the check and note that the branches for vp != NULL and vp == NULL
are now the same and thus can be merged.

Reported and reviewed by:	kan
Tested by:	pho
MFC after:	2 weeks
2008-12-30 12:51:14 +00:00
Joe Marcus Clarke
4769218f4b Do not KASSERT when vp->v_dd is NULL. Only directories which have had ".."
looked up would have v_dd set to a non-NULL value.  This fixes a panic
seen when running installworld on a diskless system with a separate /usr
file system.

Submitted by:	cracauer
Approved by:	kib
2008-12-23 20:43:42 +00:00
Konstantin Belousov
86dcb537c9 Keep the hold on the vnode during the VOP_VPTOCNP() call, allowing the
VOP implementation to drop the vnode lock if needed.

Reported and tested by:	pho
2008-12-23 20:04:31 +00:00
Joe Marcus Clarke
b9022449b3 Add a new VOP, VOP_VPTOCNP, which translates a vnode to its component name
on a best-effort basis.  Teach vn_fullpath to use this new VOP if a
regular VFS cache lookup fails.  This VOP is designed to supplement the
VFS cache to provide a better chance that a vnode-to-name lookup will
succeed.

Currently, an implementation for devfs is being committed.  The default
implementation is to return ENOENT.

A big thanks to kib for the mentorship on this, and to pho for running it
through his stress test suite.

Reviewed by:	arch
Approved by:	kib
2008-12-12 00:57:38 +00:00
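
A userspace sketch of the "best effort, default to ENOENT" idea behind the new VOP: a per-filesystem operations table carries an optional vnode-to-name method, the generic default simply returns ENOENT, and a cooperating filesystem (devfs-like here) prepends the component name into the caller's buffer. The structures and names are invented for this sketch and do not mirror the real vop_vptocnp_args layout.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    struct vnode_model;

    /* A minimal "vnode operations" table with one optional method. */
    struct vops_model {
        int (*vop_vptocnp)(struct vnode_model *vp, char *buf, size_t *buflen);
    };

    struct vnode_model {
        const struct vops_model *ops;
        const char *name;        /* what a cooperating fs would report */
    };

    /* Default implementation: the filesystem cannot help, report ENOENT. */
    static int
    default_vptocnp(struct vnode_model *vp, char *buf, size_t *buflen)
    {
        (void)vp; (void)buf; (void)buflen;
        return (ENOENT);
    }

    /* devfs-like implementation: prepend the component name into the buffer. */
    static int
    devfs_like_vptocnp(struct vnode_model *vp, char *buf, size_t *buflen)
    {
        size_t len = strlen(vp->name);

        if (*buflen < len)
            return (ENOMEM);
        *buflen -= len;
        memcpy(buf + *buflen, vp->name, len);
        return (0);
    }

    int
    main(void)
    {
        const struct vops_model defops = { default_vptocnp };
        const struct vops_model devops = { devfs_like_vptocnp };
        struct vnode_model plain = { &defops, NULL };
        struct vnode_model dev = { &devops, "ttyv0" };
        char buf[32];
        size_t buflen = sizeof(buf) - 1;

        buf[buflen] = '\0';
        printf("default: %d\n", plain.ops->vop_vptocnp(&plain, buf, &buflen));
        printf("devfs-like: %d -> %s\n",
            dev.ops->vop_vptocnp(&dev, buf, &buflen), buf + buflen);
        return (0);
    }
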
Konstantin Belousov
d6568724e1 Shared lookup makes it possible to create several negative cache
entries for one name. Then, creating an inode with that name would remove
one entry, leaving the others dormant. Reclaiming the vnode would uncover
the dormant negative entries, causing a false return of ENOENT from calls
like stat that do not create an inode.

Prevent creation of duplicate negative entries.

Reported and debugged with:	pho
Reviewed by:	jhb
X-MFC:	after shared lookup changes
2008-12-02 11:14:16 +00:00
Joe Marcus Clarke
ef61995ebd Move vn_fullpath1() outside of FILEDESC locking. This is being done in
advance of teaching vn_fullpath1() how to query file systems for
vnode-to-name mappings when cache lookups fail.

Thanks to kib for guidance and patience on this process.

Reviewed by:	kib
Approved by:	kib
2008-11-25 15:36:15 +00:00
John Baldwin
d2722d704c Part 1 of making shared lookups more resilient with respect to forced
unmounts.  When we upgrade a vnode lock from shared to exclusive during
a name cache lookup, fail the lookup with EBADF if the vnode is invalidated
while we are waiting for the exclusive lock.

Also, for correctness (though I'm not sure it can occur in practice),
downgrade an exclusively locked vnode if it should be share locked.

Tested by:	pho
2008-09-24 18:51:33 +00:00
John Baldwin
cbb598af66 Sort includes. 2008-09-18 20:04:22 +00:00
John Baldwin
969bf150df Fix a race condition with concurrent LOOKUP namecache operations for a vnode
not in the namecache when shared lookups are enabled (vfs.lookup_shared=1,
it is currently off by default) and the filesystem supports shared lookups
(e.g. NFS client).  Specifically, if multiple concurrent LOOKUPs both miss
in the name cache in parallel, each of the lookups may each end up adding an
entry to the namecache resulting in duplicate entries in the namecache
for the same pathname.  A subsequent removal of the mapping of that
pathname to that vnode (via remove or rename) would only evict one of the
entries from the name cache.  As a result, subseqent lookups for that
pathname would still return the old vnode.

This race was observed with shared lookups over NFS where a file was updated
by writing a new file out to a temporary file name and then renaming that
temporary file to the "real" file to effect atomic updates of a file.  Other
processes on the same client that were periodically reading the file would
occasionally receive an ESTALE error from open(2) because the VOP_GETATTR()
in nfs_open() would receive that error when given the stale vnode.

The fix here is to check for duplicates in cache_enter() and just return
if an entry for this same directory and leaf file name for this vnode is
already in the cache.  The check for duplicates is done by walking the
per-vnode list of name cache entries.  It is expected that this list should
be very small in the common case (usually 0 or 1 entries during a
cache_enter() since most files only have 1 "leaf" name).

Reviewed by:	ups, scottl
MFC after:	2 months
2008-08-23 15:13:39 +00:00
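
A userspace sketch of the duplicate check described above, using the sys/queue.h LIST macros to walk a per-vnode list of entries and return early if an entry with the same directory and leaf name already exists. The structure is simplified and only loosely follows struct namecache.

    #include <sys/queue.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct ncentry {
        LIST_ENTRY(ncentry) nc_link;    /* linkage on the vnode's list */
        const char *nc_dvp;             /* parent directory (stand-in pointer) */
        const char *nc_name;            /* leaf name */
    };

    LIST_HEAD(nclist, ncentry);

    /* Add an entry unless an identical (dvp, name) entry already exists. */
    static void
    cache_enter_dedup(struct nclist *head, const char *dvp, const char *name)
    {
        struct ncentry *ncp;

        LIST_FOREACH(ncp, head, nc_link) {
            if (ncp->nc_dvp == dvp && strcmp(ncp->nc_name, name) == 0)
                return;        /* duplicate: nothing to do */
        }
        if ((ncp = malloc(sizeof(*ncp))) == NULL)
            return;
        ncp->nc_dvp = dvp;
        ncp->nc_name = name;
        LIST_INSERT_HEAD(head, ncp, nc_link);
    }

    int
    main(void)
    {
        struct nclist head = LIST_HEAD_INITIALIZER(head);
        struct ncentry *ncp;
        const char *dir = "dir1";      /* same "parent vnode" for both calls */
        int n = 0;

        /* Two racing lookups trying to enter the same mapping. */
        cache_enter_dedup(&head, dir, "data.tmp");
        cache_enter_dedup(&head, dir, "data.tmp");
        LIST_FOREACH(ncp, &head, nc_link)
            n++;
        printf("entries: %d\n", n);    /* 1, not 2 */
        return (0);
    }
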
Alfred Perlstein
cbd3ba3edf Prevent crashes due to unlocked access to hash buckets in two sysctls.
Use CACHE_LOCK to prevent crashes.

Sysctls fixed: debug.hashstat.nchash and debug.hashstat.rawnchash.

Obtained from: Juniper Networks
MFC After: 1 week
2008-08-16 21:48:10 +00:00
Christian S.J. Peron
dfc714fba1 Currently, BSM audit pathname token generation for chrooted or jailed
processes is not producing absolute pathname tokens.  It is required
that audited pathnames are generated relative to the global root mount
point.  This modification changes our implementation of audit_canon_path(9)
and introduces a new function: vn_fullpath_global(9) which performs a
vnode -> pathname translation relative to the global mount point based
on the contents of the name cache.  Much like vn_fullpath,
vn_fullpath_global is a wrapper function which calls vn_fullpath1.

Further, the string parsing routines have been converted to use the
sbuf(9) framework.  This change also removes the conditional acquisition
of Giant, since the vn_fullpath1 method will not dip into file system
dependent code.

The vnode locking was modified to use vhold()/vdrop() instead of vref()
and vrele().  This modifies the hold count instead of the user count,
which makes more sense since it is the kernel that requires the
reference to the vnode.  This also makes sure that the vnode does not
get recycled while we hold the reference to it. [1]

Discussed with:	rwatson
Reviewed by:	kib [1]
MFC after:	2 weeks
2008-07-31 16:57:41 +00:00
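
The string assembly mentioned above uses the sbuf(9) framework; on FreeBSD the same API is available to userland via libsbuf (link with -lsbuf), so a minimal model of building a path with an auto-extending sbuf looks like the sketch below. The components are illustrative only, and the vhold()/vdrop() part of the commit concerns vnode reference semantics (hold count versus use count), which has no userland analogue here.

    #include <sys/sbuf.h>
    #include <stdio.h>

    int
    main(void)
    {
        struct sbuf *sb;

        /* Auto-extending sbuf, as used when assembling pathnames. */
        sb = sbuf_new(NULL, NULL, 64, SBUF_AUTOEXTEND);
        if (sb == NULL)
            return (1);
        /* Components discovered relative to the global root, root-first. */
        sbuf_cat(sb, "/");
        sbuf_cat(sb, "usr");
        sbuf_cat(sb, "/");
        sbuf_cat(sb, "bin");
        sbuf_finish(sb);
        printf("%s\n", sbuf_data(sb));    /* /usr/bin */
        sbuf_delete(sb);
        return (0);
    }
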
Pawel Jakub Dawidek
b03d720760 - Use LK_TYPE_MASK where needed. Actually after sys/sys/lockmgr.h:1.69 it is
no longer needed, but for now we still want to be consistent with other
  similar checks in the tree.
- Call ASSERT_VOP_ELOCKED() only when vget() returns 0.

Reviewed by:	jeff
2008-04-09 20:19:55 +00:00
Konstantin Belousov
0a3af16a75 Add the utility function vn_commname() to retrieve the command name
from the vfs namecache, when available.

Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 11:53:03 +00:00
Robert Watson
237fdd787b In keeping with style(9)'s recommendations on macros, use a ';'
after each SYSINIT() macro invocation.  This makes a number of
lightweight C parsers much happier with the FreeBSD kernel
source, including cflow's prcc and lxr.

MFC after:	1 month
Discussed with:	imp, rink
2008-03-16 10:58:09 +00:00
Attilio Rao
81c794f998 Axe the 'thread' argument from VOP_ISLOCKED() and lockstatus() as it is
always curthread.

Since this patch breaks the KPI, manpages and __FreeBSD_version will be
updated in further commits.

Tested by:	Andrea Barberio <insomniac at slackware dot it>
2008-02-25 18:45:57 +00:00
Attilio Rao
22db15c06f VOP_LOCK1() (and so VOP_LOCK()) and VOP_UNLOCK() are only used in
conjunction with a 'thread' argument that is always curthread.
Remove the useless extra argument and explicitly pass curthread to
lower-layer functions, when necessary.

The KPI is broken by this change, which should affect several ports, so
a version bump and manpage update will be committed later.

Tested by: kris, pho, Diego Sardina <siarodx at gmail dot com>
2008-01-13 14:44:15 +00:00
Attilio Rao
cb05b60a89 vn_lock() is currently only used with 'curthread' passed as the argument.
Remove this argument and pass curthread directly to the underlying
VOP_LOCK1() VFS method. This change makes the code cleaner and in
particular removes an annoying dependency, helping the upcoming lockmgr()
cleanup. The KPI, obviously, changes.

Manpages and __FreeBSD_version will be updated through further commits.

As a side note, it is worth mentioning that subsequent commits will
address a similar cleanup of the VFS methods, in particular vop_lock1
and vop_unlock.

Tested by:	Diego Sardina <siarodx at gmail dot com>,
		Andrea Di Pasquale <whyx dot it at gmail dot com>
2008-01-10 01:10:58 +00:00
Kris Kennaway
e6d64a0f15 Remove remaining Giant acquisition around vn_fullpath1. This was missed
in r1.106 and has not been required for some years now.

Reviewed by:  jeff
MFC After:    1 week
2007-11-22 21:26:25 +00:00
Pawel Jakub Dawidek
b4d7e2983c Fix some locking cases where we ask for an exclusively locked vnode but
get a share-locked vnode instead when vfs.lookup_shared is set to 1.

Discussed with:	kib, kris
Tested by:	kris
Approved by:	re (kensmith)
2007-09-21 10:16:56 +00:00
Pawel Jakub Dawidek
dfe97ff4a5 We only flush entries related to the given file system. Currently there
are no 'invalid' cache entries - the file system is responsible for keeping
it that way. The comment should have been updated in rev. 1.25.
2007-06-18 09:28:24 +00:00
Pawel Jakub Dawidek
6e042171bd To avoid a deadlock when handling the ".." directory during a lookup, we
unlock the parent vnode and relock it after locking the child vnode. The
problem was that we always relocked it exclusively, even when it was
share-locked.

Discussed with:	jeff
2007-05-25 22:23:38 +00:00
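
A sketch of the fix described above: when the parent must be unlocked so the child can be locked first, relock the parent with whatever lock type it originally held rather than always exclusively. The locking primitives here are a toy model, not vn_lock(9).

    #include <stdio.h>

    enum lktype { LK_SHARED, LK_EXCLUSIVE };

    struct vn_model {
        enum lktype locktype;    /* lock type we currently hold */
    };

    /* Toy locking primitives: just record which type was taken. */
    static void lk_lock(struct vn_model *vp, enum lktype t) { vp->locktype = t; }
    static void lk_unlock(struct vn_model *vp) { (void)vp; }

    /*
     * ".." handling: dvp is unlocked before locking the ".." vnode to
     * avoid a lock-order deadlock, then relocked.  The fix is to relock
     * dvp with the type it originally held instead of unconditionally
     * LK_EXCLUSIVE.
     */
    static void
    lookup_dotdot(struct vn_model *dvp, struct vn_model *dotdot)
    {
        enum lktype saved = dvp->locktype;    /* remember original type */

        lk_unlock(dvp);
        lk_lock(dotdot, LK_SHARED);
        lk_lock(dvp, saved);                  /* was: always LK_EXCLUSIVE */
    }

    int
    main(void)
    {
        struct vn_model dvp = { LK_SHARED };
        struct vn_model dotdot = { LK_SHARED };

        lookup_dotdot(&dvp, &dotdot);
        printf("dvp relocked %s\n",
            dvp.locktype == LK_SHARED ? "shared" : "exclusive");
        return (0);
    }
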