that they set v->v_vnlock. This is true for all filesystems in the
tree.
- Remove all uses of LK_THISLAYER. If the lower layer is locked, the
null layer is locked. We only use vget() to get a reference now.
null essentially does no locking. This fixes LOOKUP_SHARED with
nullfs.
- Remove the special LK_DRAIN considerations; I do not believe these are
needed now, as LK_DRAIN doesn't destroy the lower vnode's lock and it is
hardly used anymore.
- Add one well-commented hack to prevent the lowervp from going away
while we're in its VOP_LOCK routine (see the sketch below). This can
only happen if we're forcibly unmounted while some callers are waiting
in the lock. In that case the lowervp could be recycled after we drop
our last ref in null_reclaim(); prevent this with a vhold(), as
suggested by Matt's comment. Also fix some style and paranoia issues.
The entire function could benefit from review by a VFS guru.
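For illustration, a minimal sketch of the hack, assuming the usual nullfs
macros (NULLVPTOLOWERVP()) and the 6.x-era VOP_LOCK(vp, flags, td)
signature; the committed null_lock() is more involved.

static int
null_lock_sketch(struct vop_lock_args *ap)
{
    struct vnode *vp = ap->a_vp;
    struct vnode *lvp = NULLVPTOLOWERVP(vp);
    int error;

    /*
     * Hold the lower vnode so a forced unmount cannot recycle it
     * while we sleep inside its lock routine.
     */
    vhold(lvp);
    error = VOP_LOCK(lvp, ap->a_flags, ap->a_td);
    vdrop(lvp);
    return (error);
}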
MFC after: 6 weeks
Since we used an sbuf of size resid to accumulate dirents, we would end
up returning one byte short when we had enough dirents to fill or exceed
the size of the sbuf (the last byte being lost to bogus NUL termination),
causing the next call to return EINVAL due to an unaligned offset. This
went undetected for a long time because I did most of my testing in
single-user mode, where there are rarely enough processes to fill the
4096-byte buffer ls(1) uses. The most common symptom of this bug is that
tab completion of /proc or /compat/linux/proc does not work properly when
many processes are running.
Also, a check near the top would return EINVAL if resid was smaller than
PFS_DELEN, even if it was 0, which is frequently the case and perfectly
allowable. Change the test so that it returns 0 if resid is 0.
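As an illustration (not the committed diff), the shape of both fixes might
look like the following; PFS_DELEN is pseudofs's fixed per-entry record
length, and the extra byte keeps sbuf's terminating NUL from stealing the
last dirent byte.

static int
pfs_readdir_setup_sketch(struct uio *uio, struct sbuf **sbp)
{
    int resid = uio->uio_resid;

    *sbp = NULL;
    if (resid == 0)
        return (0);         /* zero-length read: success, nothing to do */
    if (resid < PFS_DELEN)
        return (EINVAL);    /* buffer cannot hold even one entry */

    /* One byte of slack for sbuf's implicit NUL terminator. */
    *sbp = sbuf_new(NULL, NULL, resid + 1, 0);
    return (*sbp != NULL ? 0 : ENOMEM);
}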
MFC after: 2 weeks
the filesystem. Check that rather than VI_XLOCK.
- VOP_INACTIVE should no longer drop the vnode lock.
- The vnode lock is required around calls to vrecycle() and vgone().
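A hypothetical caller showing the rule; function signatures follow the
2005-era tree and have changed since.

static void
drop_vnode_sketch(struct vnode *vp, struct thread *td)
{
    /* The vnode lock must be held around vgone()/vrecycle(). */
    vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, td);
    vgone(vp);
    VOP_UNLOCK(vp, 0, td);
}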
Sponsored by: Isilon Systems, Inc.
vnode lock. Remove the c_lock and use the vn lock in its place.
- Keep the coda lock functions so that the debugging information is
preserved, but call directly to the vop_std*lock routines for the real
functionality.
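Roughly, the kept wrappers reduce to something like this (a sketch, using
coda's codadebug/myprintf convention and the 2005-era vop_lock argument
structure):

static int
coda_lock_sketch(struct vop_lock_args *ap)
{
    if (codadebug)
        myprintf(("coda_lock: locking vnode %p\n", ap->a_vp));
    /* All real locking is now done by the generic vnode lock code. */
    return (vop_stdlock(ap));
}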
Sponsored by: Isilon Systems, Inc.
long filename. Each substring is indexed by the window ID, a
sequential one-based value. The previous code was extremely slow,
doing a malloc/strcpy/free for each substring.
This code optimizes these routines with that in mind, using the ID
to index into a single array and copying each WIN_CHARS-sized chunk
into place at once. (The last chunk is variable-length.)
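The core of the idea, as a hedged sketch rather than the committed code
(WIN_CHARS is 13, the number of characters carried per Win95 long-filename
directory entry; names and bounds checks are simplified):

#define WIN_CHARS   13          /* chars per Win95 LFN slot */
#define WIN_MAXLEN  255         /* longest Win95 filename */

static char   nambuf[WIN_MAXLEN + 1];
static size_t nambuf_len;

static void
mbnambuf_write_sketch(const char *name, int id)
{
    size_t off, len;

    len = strlen(name);                     /* last chunk may be short */
    off = (size_t)(id - 1) * WIN_CHARS;     /* ID is one-based */
    if (id < 1 || off + len > WIN_MAXLEN)
        return;                             /* ignore malformed entries */
    memcpy(nambuf + off, name, len);        /* one copy, no malloc/free */
    if (off + len > nambuf_len)
        nambuf_len = off + len;
}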
This code has been tested as working on an FS with difficult filename
sizes (255, 13, 26, etc.) It gives a 77.1% decrease in profiled
time (total across all functions) and a 73.7% decrease in wall time.
Test was "ls -laR > /dev/null".
Per-function time savings:
mbnambuf_init: -90.7%
mbnambuf_write: -18.7%
mbnambuf_flush: -67.1%
MFC after: 1 month
List devfs_dirents rather than vnodes off their shared struct cdev; this
saves a pointer field in the vnode at the expense of a field in the
devfs_dirent. There are often 100 times more vnodes, so this is a bargain.
In addition, it makes it harder for people to try to do stupid things like
"finding the vnode from the cdev".
Since DEVFS handles all VCHR nodes now, we can do the vnode related
cleanup in devfs_reclaim() instead of in dev_rel() and vgonel().
Similarly, we can do the struct cdev related cleanup in dev_rel()
instead of devfs_reclaim().
Rename idestroy_dev() to destroy_devl() for consistency.
Add LIST_ENTRY de_alias to struct devfs_dirent.
Remove v_specnext from struct vnode.
Change si_hlist to si_alist in struct cdev.
String new devfs vnodes' devfs_dirent on si_alist when
we create them and take them off in devfs_reclaim().
Fix devfs_revoke() accordingly. Also don't clear fields
devfs_reclaim() will clear when called from vgone().
Let devfs_reclaim() call dev_rel() instead of vgonel().
Move the usecount tracking from dev_rel() to devfs_reclaim(),
and let dev_rel() take a struct cdev argument instead of vnode.
Destroy SI_CHEAPCLONE devices in dev_rel() (instead of
devfs_reclaim()) when they are no longer used. (This
should maybe happen in devfs_close() instead.)
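Schematically, the new linkage looks something like this (struct names and
fields abbreviated; the real definitions carry many more members):

struct devfs_dirent_sketch {
    struct vnode *de_vnode;                     /* vnode for this dirent, if any */
    LIST_ENTRY(devfs_dirent_sketch) de_alias;   /* all dirents of one cdev */
};

struct cdev_sketch {
    LIST_HEAD(, devfs_dirent_sketch) si_alist;  /* replaces si_hlist */
};

/* When a devfs vnode is created, its dirent is strung onto the cdev ... */
static void
devfs_link_sketch(struct cdev_sketch *dev, struct devfs_dirent_sketch *de)
{
    LIST_INSERT_HEAD(&dev->si_alist, de, de_alias);
}

/* ... and devfs_reclaim() takes it off again. */
static void
devfs_unlink_sketch(struct devfs_dirent_sketch *de)
{
    LIST_REMOVE(de, de_alias);
    de->de_vnode = NULL;
}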
mounted (is it Joliet, RockRidge, High Sierra) based on bootverbose.
Most file systems don't generate log messages based on details of the
file system superblock, and these log messages disrupt sysinstall output
during a new install from CD. We may want to explore exposing this
status information using nmount() at some point.
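In other words, the mount-time chatter is now gated roughly like this
(message text hypothetical):

    /* Superblock details are interesting only to the curious. */
    if (bootverbose)
        printf("cd9660: RockRidge Extension\n");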
MFC after: 3 days
on my P3, microbenchmarks show the unrolled version is 78x faster. In
actual use (recursive ls), this gives an average of 9% improvement in
system time and 2% improvement in wall time.
called in "open", causing mmap() to fail.
Where possible, pass the size of the file to vnode_create_vobject() rather
than having it find it out the hard way via VOP_GETATTR().
Reviewed by: phk
mmap() on NTFS files was hosed, returning pages offset from the
start of the disk rather than the start of the file. (i.e., "cp" of
a 1-block file would get you a copy of the boot sector, not the
data in the file.) The solution isn't ideal, but gives a functioning
filesystem.
Cached vnode lookup was also broken, resulting in vnode haemorrhage.
A lookup on the same file twice would give you two vnodes, and the
resulting cached pages.
Just recently, mmap() was broken due to a lack of a call to
vnode_create_vobject() in ntfs_open().
Discussed with: phk@
with NFS.
We are moving responsibility for creating the vnode_pager object into
the filesystems which own the vnode, and this is one of the places
we have to cover.
We call vnode_create_vobject() directly because we own the vnode.
If we can get the size easily, pass it as an argument to save the
call to VOP_GETATTR() in vnode_create_vobject().
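A sketch of the pattern for a hypothetical filesystem foofs (the per-fs
node type and its size field are made up; the vop_open argument list and
vnode_create_vobject()'s size-argument type follow the 2005-era tree):

struct foofs_node {
    off_t fn_size;              /* file size we track anyway */
};

static int
foofs_open_sketch(struct vop_open_args *ap)
{
    struct vnode *vp = ap->a_vp;
    struct foofs_node *fn = vp->v_data;   /* hypothetical per-fs node */

    /*
     * We own the vnode, so we create its VM object ourselves and
     * pass the size we already know, saving a VOP_GETATTR().
     */
    vnode_create_vobject(vp, fn->fn_size, ap->a_td);
    return (0);
}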
and KASSERT coverage.
After this check there is only one "nasty" cast in this code, but there
is a KASSERT to protect against the wrong argument structure behind
that cast.
Un-inlining the meat of VOP_FOO() saves 35kB of text segment on a typical
kernel with no change in performance.
We also now run the checking and tracing on VOPs which have been layered
by nullfs, umapfs, deadfs or unionfs.
Add new (non-inline) VOP_FOO_AP() functions which take a "struct
foo_args" argument and do everything the VOP_FOO() macros
used to do with checks and debugging code.
Add a KASSERT to VOP_FOO_AP() to check that the argument type is
correct.
Slim down the VOP_FOO() inline functions to just stuff arguments
into the struct foo_args and call VOP_FOO_AP() (sketched below).
Put a function pointer to VOP_FOO_AP() into the vop_foo_desc structure
and make VCALL() use it instead of the current offsetof() hack.
Retire vcall(), which implemented the offsetof() hack.
Make deadfs and unionfs use VOP_FOO_AP() calls instead of
VCALL(); we already know which specific call we want.
Remove unneeded arguments to VCALL() in nullfs and umapfs bypass
functions.
Remove unused vdesc_offset and VOFFSET().
Generally improve style/readability of the generated code.
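For a concrete feel, the generated code for one VOP now has roughly this
shape (hand-written sketch; the real version comes out of vnode_if.awk and
carries the debugging and KASSERT scaffolding described above):

struct vop_fsync_args {
    struct vop_generic_args a_gen;  /* carries a_desc for VCALL/KASSERT */
    struct vnode *a_vp;
    int a_waitfor;
    struct thread *a_td;
};

int VOP_FSYNC_AP(struct vop_fsync_args *a); /* non-inline: checks, tracing, dispatch */

static __inline int
VOP_FSYNC(struct vnode *vp, int waitfor, struct thread *td)
{
    struct vop_fsync_args a;

    a.a_gen.a_desc = &vop_fsync_desc;   /* lets VOP_FSYNC_AP() verify its argument */
    a.a_vp = vp;
    a.a_waitfor = waitfor;
    a.a_td = td;
    return (VOP_FSYNC_AP(&a));
}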
I'm not sure why a credential was added to these in the first place; it is
not used anywhere and it doesn't make much sense:
The credentials for syncing a file (ability to write to the
file) should be checked at the system call level.
Credentials for syncing one or more filesystems ("none")
should be checked at the system call level as well.
If the filesystem implementation needs a particular credential
to carry out the syncing, it would logically have to be the
cached mount credential, or a credential cached along with
any delayed write data.
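So a hypothetical in-kernel caller now looks like this (2005-era argument
lists; they have changed again since):

static int
flush_vnode_sketch(struct vnode *vp, struct thread *td)
{
    int error;

    vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, td);
    error = VOP_FSYNC(vp, MNT_WAIT, td);    /* no struct ucred * argument anymore */
    VOP_UNLOCK(vp, 0, td);
    return (error);
}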
Discussed with: rwatson
After discussing things I have decided to take the easy and
consistent 90% solution instead of aiming for the very involved 99%
solution.
If we allow forceful unmounts of DEVFS we need to decide how to handle
the devices which are in use through this filesystem at the time.
We cannot just readopt the open devices in the main /dev instance since
that would open us to security issues.
For the majority of the devices, this is relatively straightforward
as we can just pretend they got revoke(2)'ed.
Some devices get tricky: /dev/console and /dev/tty, for instance,
do a sort of recursive open of the real console device. Other devices
may be mmap'ed (kill the processes?).
And then there are disk devices which are mounted.
The correct thing here would be to recursively unmount the filesystems
mounted from devices in our DEVFS instance (forcefully) and, if
this succeeds, complete the forceful unmount of DEVFS. But if
one of the forceful unmounts fails we cannot complete the forceful
unmount of DEVFS, yet we are likely to already have severed a lot
of stuff in the process of trying.
Even attempting this would be a lot of code for a very far-out
corner case which most people would never see or run into.
It's just not worth it.
methods:
Read can see O_NONBLOCK and O_DIRECT.
Write can see O_NONBLOCK, O_DIRECT and O_FSYNC.
In addition O_DIRECT is shadowed as IO_DIRECT for now for backwards
compatibility.
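A hypothetical driver read routine showing what a d_read method can now
expect in its ioflag argument:

static int
foo_read_sketch(struct cdev *dev, struct uio *uio, int ioflag)
{
    if (ioflag & O_NONBLOCK)
        return (EWOULDBLOCK);   /* caller asked not to sleep */
    if (ioflag & O_DIRECT) {
        /* Skip any driver-side buffering; IO_DIRECT still works too. */
    }
    return (0);                 /* no data in this sketch */
}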
fcntl.h.
This is in preparation for making the flags passed to device drivers
come consistently from fcntl.h for all entrypoints.
Today open, close and ioctl use fcntl.h flags, while read and write
use vnode.h flags.