Mateusz Guzik
c97c8746c0
cache: neglist -> nl; negstate -> ns
...
No functional changes.
2020-10-16 00:55:09 +00:00
Mateusz Guzik
43777a207d
cache: split hotlist between existing negative lists
...
This simplifies the code while allowing for concurrent negative eviction
down the road.
Cache misses increased slightly due to the higher rate of evictions allowed
by the change.
The current algorithm remains too aggressive.
2020-10-15 17:44:17 +00:00
Mateusz Guzik
430dc4518d
cache: make neglist an array given the static size
2020-10-15 17:42:22 +00:00
Mateusz Guzik
dd28b379cb
vfs: support lockless dirfd lookups
2020-10-10 03:48:17 +00:00
Mateusz Guzik
eb88fed446
cache: fix vexec panic when racing against vgone
...
Use of dead_vnodeops would result in a panic instead of returning the intended
EOPNOTSUPP error.
While here make sure to abort rather than merely return a partial result.
The former allows the regular lookup to restart from scratch, while the latter
leaves it stuck with an unusable vnode.
Reported by: kevans
2020-10-09 19:10:00 +00:00
Mateusz Guzik
4e2266100d
cache: fix pwd use-after-free in setting up fallback
...
Since the code exits smr section prior to calling pwd_hold, the used
pwd can be freed and a new one allocated with the same address, making
the comparison erroneously true.
Note it is very unlikely anyone ran into it.
2020-10-05 19:38:51 +00:00
Mateusz Guzik
aa34e791fa
cache: update the commentary for path parsing
2020-10-02 14:50:03 +00:00
Mateusz Guzik
b5ab177a99
cache: properly report ENOTDIR on foo/bar lookups where foo is a file
...
Reported by: fernape
2020-10-01 08:46:21 +00:00
Mateusz Guzik
4301a5a794
cache: push the lock into cache_purge_impl
2020-09-30 17:08:34 +00:00
Mateusz Guzik
d4cac59429
cache: use cache_has_entries where appropriate instead of opencoding it
2020-09-30 04:27:38 +00:00
Mateusz Guzik
1b2edd6e2b
cache: eliminate cache_zap_locked_vnode
...
It is only ever called for negative entries and for those it is
just a wrapper around cache_zap_negative_locked_vnode_kl which
always succeeds.
This also fixes a bug where cache_lookup_fallback should have been
calling cache_zap_locked_bucket instead. Note that in order to trigger
the bug NOCACHE must not be set, which currently only happens when
creating a new coredump (and then the coredump-to-be has to have a
negative entry).
2020-09-24 03:38:32 +00:00
Mateusz Guzik
a3d9bf49b5
cache: drop the force flag from purgevfs
...
The optional scan is wasteful, thus it is removed altogether from unmount.
Callers which always want it anyway remain unaffected.
2020-09-23 10:46:07 +00:00
Mateusz Guzik
a952fefff2
cache: reimplement purgevfs to iterate vnodes instead of the entire hash
...
The entire cache scan was a leftover from the old implementation.
It is incredibly wasteful in the presence of several mount points and does
not win much even for single ones.
2020-09-23 10:44:49 +00:00
Mateusz Guzik
efeec5f0c6
cache: clean up atomic ops on numneg and numcache
...
- use subtract instead of adding -1
- drop the useless _rel fence
Note this should be converted to a scalable scheme.
2020-09-23 10:42:41 +00:00
Mateusz Guzik
da62ed4f1a
cache: drop write-only tvp_seqc vars
2020-09-08 16:06:46 +00:00
Mateusz Guzik
84ecea90b7
cache: don't update timestamps on found entry
2020-08-27 06:31:55 +00:00
Mateusz Guzik
5f08d440b0
cache: assorted clean ups
...
In particular remove spurious comments, duplicate assertions and the
inconsistently done KTR support.
2020-08-27 06:31:27 +00:00
Mateusz Guzik
12441fcbe2
cache: ncp = NULL early to account for sdt probes in failure path
...
CID: 1432106
2020-08-27 06:30:40 +00:00
Mateusz Guzik
1e9a0b391d
cache: relock on failure in cache_zap_locked_vnode
...
This gets rid of the bogus scheme of yielding in hopes that the blocking
thread will make progress.
2020-08-26 12:54:18 +00:00
Mateusz Guzik
075f58f231
cache: stop null checking in cache_free
2020-08-26 12:53:16 +00:00
Mateusz Guzik
66fa11c898
cache: make it mandatory to request both timestamps or neither
2020-08-26 12:52:54 +00:00
Mateusz Guzik
eef63775b6
cache: convert bucketlocks to a mutex
...
By now, bucket locks are almost never taken for anything but writing, and
converting to a mutex simplifies the code.
2020-08-26 12:52:17 +00:00
Mateusz Guzik
32f3d0821c
cache: only evict negative entries on CREATE when ISLASTCN is set
2020-08-26 12:50:57 +00:00
Mateusz Guzik
935e15187c
cache: decouple smr and locked lookup in the slowpath
...
Tested by: pho
2020-08-26 12:50:10 +00:00
Mateusz Guzik
d3476daddc
cache: factor dotdot lookup out of cache_lookup
...
Tested by: pho
2020-08-26 12:49:39 +00:00
Mateusz Guzik
f9cdb0775e
cache: remove leftover assert in vn_fullpath_any_smr
...
It is only valid when !slash_prefixed. For slash_prefixed the length
is properly accounted for later.
Reported by: markj (syzkaller)
2020-08-24 18:23:58 +00:00
Mateusz Guzik
e35406c8f7
cache: lockless reverse lookup
...
This enables fully scalable operation for getcwd and significantly improves
realpath.
For example:
PATH_CUSTOM=/usr/src ./getcwd_processes -t 104
before: 1550851
after: 380135380
Tested by: pho
2020-08-24 09:00:57 +00:00
Mateusz Guzik
feabaaf995
cache: drop the always curthread argument from reverse lookup routines
...
Note VOP_VPTOCNP keeps getting it as temporary compatibility for zfs.
Tested by: pho
2020-08-24 08:57:02 +00:00
Mateusz Guzik
f0696c5e4b
cache: perform reverse lookup using v_cache_dd if possible
...
Tested by: pho
2020-08-24 08:55:55 +00:00
Mateusz Guzik
ce575cd0e2
cache: populate v_cache_dd for non-VDIR entries
...
This makes v_cache_dd a bit of a misnomer; it may be addressed later.
Tested by: pho
2020-08-24 08:55:04 +00:00
Mateusz Guzik
1e448a1558
cache: stronger vnode asserts in cache_enter_time
2020-08-22 16:58:34 +00:00
Mateusz Guzik
760a430bb3
vfs: add a work around for vp_crossmp bug to realpath
...
The actual bug is not yet addressed as it will get much easier after other
problems are addressed (most notably rename contract).
The only affected in-tree consumer is realpath. Everyone else happens to be
performing lookups within a mount point, with the side effect of ni_dvp being
set to the mount point's root vnode in the worst case.
Reported by: pho
2020-08-22 06:56:04 +00:00
Mateusz Guzik
17838b5869
cache: don't use cache_purge_negative when renaming
...
It avoidably scans (and evicts) unrelated entries. Instead, take
advantage of the passed componentname and perform a hash lookup
for the exact entry.
Sample data from buildworld, probed on cache_purge_negative extended
to count both scanned and evicted entries on each call, are below.
At most one entry needs to be evicted.
evicted
value ------------- Distribution ------------- count
-1 | 0
0 |@@@@@@@@@@@@@@@ 19506
1 |@@@@@ 5820
2 |@@@@@@ 7751
4 |@@@@@ 6506
8 |@@@@@ 5996
16 |@@@ 4029
32 |@ 1489
64 | 193
128 | 109
256 | 56
512 | 16
1024 | 7
2048 | 3
4096 | 1
8192 | 1
16384 | 0
scanned
value ------------- Distribution ------------- count
-1 | 0
0 |@@ 2456
1 |@ 1496
2 |@@ 2728
4 |@@@ 4171
8 |@@@@ 5122
16 |@@@@ 5335
32 |@@@@@ 6279
64 |@@@@ 5671
128 |@@@@ 4558
256 |@@ 3123
512 |@@ 2790
1024 |@@ 2449
2048 |@@ 3021
4096 |@ 1398
8192 |@ 886
16384 | 0
2020-08-20 10:06:50 +00:00
Mateusz Guzik
39f8815070
cache: add cache_rename, a dedicated helper to use for renames
...
While here make both tmpfs and ufs use it.
No functional changes.
2020-08-20 10:05:46 +00:00
Mateusz Guzik
16be9f9956
cache: reimplement cache_lookup_nomakeentry as cache_remove_cnp
...
This in particular removes unused arguments.
2020-08-20 10:05:19 +00:00
Mateusz Guzik
6c55d6e030
cache: when adding an already existing entry assert on a complete match
2020-08-19 15:08:14 +00:00
Mateusz Guzik
7c75f14f5b
cache: tidy up the comment above cache_prehash
2020-08-19 15:07:28 +00:00
Mateusz Guzik
3c5d2ed71f
cache: add NOCAPCHECK to the list of supported flags for lockless lookup
...
It is de facto supported in that lockless lookup does not do any capability
checks.
2020-08-16 18:33:24 +00:00
Mateusz Guzik
8ab4becab0
vfs: use namei_zone for getcwd allocations
...
instead of malloc.
Note that this should probably be wrapped with a dedicated API; other
vn_getcwd callers were not converted.
2020-08-16 18:21:21 +00:00
Mateusz Guzik
5e79447d60
cache: let SAVESTART passthrough
...
The flag is only passed for non-LOOKUP ops and those fall back to the slowpath.
2020-08-10 12:28:56 +00:00
Mateusz Guzik
bb48255cf5
cache: resize struct namecache to a multiple of alignment
...
For example, struct namecache on amd64 is 100 bytes, but it has to occupy
104. Use the extra bytes to support longer names.
2020-08-10 12:05:55 +00:00
Mateusz Guzik
8b62cebea7
cache: remove unused variables from cache_fplookup_parse
2020-08-10 11:51:56 +00:00
Mateusz Guzik
03337743db
vfs: clean MNTK_FPLOOKUP if MNT_UNION is set
...
This elides checking it during lookup.
2020-08-10 11:51:21 +00:00
Mateusz Guzik
c571b99545
cache: strlcpy -> memcpy
2020-08-10 10:40:14 +00:00
Mateusz Guzik
3ba0e51703
vfs: partially support file create/delete/rename in lockless lookup
...
Perform the lookup until the last 2 elements and fall back to the slowpath.
Tested by: pho
Sponsored by: The FreeBSD Foundation
2020-08-10 10:35:18 +00:00
Mateusz Guzik
21d5af2b30
vfs: drop the thread argument from vfs_fplookup_vexec
...
It is guaranteed to be curthread.
Tested by: pho
Sponsored by: The FreeBSD Foundation
2020-08-10 10:34:22 +00:00
Mateusz Guzik
e910c93eea
cache: add more predicts for failing conditions
2020-08-06 04:20:14 +00:00
Mateusz Guzik
95888901f7
cache: plug uninitialized variable use
...
CID: 1431128
2020-08-06 04:19:47 +00:00
Mateusz Guzik
e1b1971c05
cache: don't ignore size passed to nchinittbl
2020-08-05 09:38:02 +00:00
Mateusz Guzik
2b86f9d6d0
cache: convert the hash from LIST to SLIST
...
This reduces struct namecache by sizeof(void *).
The negative side is that we have to find the previous element (if any) when
removing an entry, but since collisions are normally not expected it should be
fine.
Note this adds cache_get_hash calls which can be eliminated.
2020-08-05 09:25:59 +00:00