Commit Graph

14399 Commits

Ed Schouten
8328babdd0 Make pipes in CloudABI work.
Summary:
Pipes in CloudABI are unidirectional. The reason for this is that
CloudABI attempts to provide a uniform runtime environment across
different flavours of UNIX.

Instead of implementing a custom pipe that is unidirectional, we can
simply reuse Capsicum permission bits to support this. This is nice,
because CloudABI already attempts to restrict permission bits to
correspond with the operations that apply to a certain file descriptor.

Replace kern_pipe() and kern_pipe2() by a single kern_pipe() that takes
a pair of filecaps. These filecaps are passed to the newly introduced
falloc_caps() function that creates the descriptors with rights in
place.
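
A rough sketch of how a caller such as CloudABI might drive the unified
kern_pipe() (the prototype and the exact rights are assumptions here, not
copied from the tree):

    struct filecaps fcaps1, fcaps2;
    cap_rights_t rights;
    int fildes[2], error;

    /* Read end: restrict to read-style operations. */
    filecaps_init(&fcaps1);
    cap_rights_init(&rights, CAP_READ, CAP_EVENT, CAP_FSTAT);
    fcaps1.fc_rights = rights;

    /* Write end: restrict to write-style operations. */
    filecaps_init(&fcaps2);
    cap_rights_init(&rights, CAP_WRITE, CAP_EVENT, CAP_FSTAT);
    fcaps2.fc_rights = rights;

    /* Assumed shape: kern_pipe(td, fildes, flags, read caps, write caps). */
    error = kern_pipe(td, fildes, 0, &fcaps1, &fcaps2);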

Test Plan:
CloudABI pipes seem to be created with proper rights in place:

https://github.com/NuxiNL/cloudlibc/blob/master/src/libc/unistd/pipe_test.c#L44

Reviewers: jilles, mjg

Reviewed By: mjg

Subscribers: imp

Differential Revision: https://reviews.freebsd.org/D3236
2015-07-29 17:18:27 +00:00
Ed Schouten
e555b4309c Introduce falloc_caps() to create descriptors with capabilities in place.
falloc_noinstall() followed by finstall() allows you to create and
install file descriptors with custom capabilities. Add falloc_caps()
that can do both of these actions in one go.

This will be used by CloudABI to create pipes with custom capabilities.
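
A sketch of the assumed prototype and its two-step equivalent (illustrative,
not copied from kern_descrip.c):

    int falloc_caps(struct thread *td, struct file **resultfp, int *resultfd,
        int flags, struct filecaps *fcaps);

    /* Roughly equivalent to the existing two-step sequence: */
    error = falloc_noinstall(td, &fp);
    if (error == 0)
            error = finstall(td, fp, &fd, flags, fcaps);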

Reviewed by:	mjg
2015-07-29 17:16:53 +00:00
Konstantin Belousov
6cebf7e2be Move bufshutdown() out of the #ifdef INVARIANTS block. 2015-07-29 09:57:34 +00:00
Jeff Roberson
98082691bb - Make 'struct buf *buf' private to vfs_bio.c. Having a global variable
'buf' is inconvenient and has led me to some irritating-to-discover
   bugs over the years.  It also makes it more challenging to refactor
   the buf allocation system.
 - Move swbuf and declare it as an extern in vfs_bio.c.  This is still
   not perfect but better than it was before.
 - Eliminate the unused ffs function that relied on knowledge of the buf
   array.
 - Move the shutdown code that iterates over the buf array into vfs_bio.c.

Reviewed by:	kib
Sponsored by:	EMC / Isilon Storage Division
2015-07-29 02:26:57 +00:00
Jeff Roberson
38750ada8f - Eliminate the EMPTYKVA queue. It served as a cache of KVA allocations
attached to bufs to avoid the overhead of the vm.  This purpose is now
   better served by vmem.  Freeing the kva immediately when a buf is
   destroyed leads to lower fragmentation and a much simpler scan algorithm.

Reviewed by:	kib
Sponsored by:	EMC / Isilon Storage Division
2015-07-28 20:24:09 +00:00
Ed Schouten
b114aa7959 Make shutdown() return ENOTCONN as required by POSIX, part deux.
Summary:
Back in 2005, maxim@ attempted to fix shutdown() to return ENOTCONN in case the socket was not connected (r150152). This had to be rolled back (r150155), as it broke some of the existing programs that depend on this behavior. I reapplied this change on my system and indeed, syslogd failed to start up. I fixed this back in February (r279016) and MFC'ed it to the supported stable branches. Apart from that, things seem to work out all right.

Since at least Linux and Mac OS X do the right thing, I'd like to go ahead and give this another try. To keep old copies of syslogd working, only start returning ENOTCONN for recent binaries.

I took a look at the XNU sources and they seem to test against SS_ISCONNECTED, SS_ISCONNECTING and SS_ISDISCONNECTING, instead of just SS_ISCONNECTED. That seems reasonable, so let's do the same.
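
A minimal sketch of the resulting check (illustrative, not the exact
soshutdown() code; the compatibility test for old binaries is omitted):

    if ((so->so_state & (SS_ISCONNECTED | SS_ISCONNECTING |
        SS_ISDISCONNECTING)) == 0)
            return (ENOTCONN);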

Test Plan:
This issue was uncovered while writing tests for shutdown() in CloudABI:

https://github.com/NuxiNL/cloudlibc/blob/master/src/libc/sys/socket/shutdown_test.c#L26

Reviewers: glebius, rwatson, #manpages, gnn, #network

Reviewed By: gnn, #network

Subscribers: bms, mjg, imp

Differential Revision: https://reviews.freebsd.org/D3039
2015-07-27 13:17:57 +00:00
Andrey V. Elsukov
41f5f69f96 Build debug version of rmlock's methods only when LOCK_DEBUG > 0.
Currently LOCK_DEBUG is always defined in sys/lock.h (as 0 or 1).
This means that the debugging code is always built. In addition, kernel
modules always define LOCK_DEBUG as 1, so the debugging rmlock code
is always used by kernel modules.
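
A sketch of the selection pattern this enables (illustrative; the real macros
live in sys/rmlock.h):

    #if LOCK_DEBUG > 0
    #define rm_wlock(rm)    _rm_wlock_debug((rm), LOCK_FILE, LOCK_LINE)
    #else
    #define rm_wlock(rm)    _rm_wlock((rm))
    #endif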

MFC after:	1 week
2015-07-26 10:53:32 +00:00
Konstantin Belousov
6fd04eff66 With the removal of b_saveaddr in the r285819, b_data must be reset to
b_kvabase when the buffer is reclaimed.  Otherwise, if b_data for the
mapped buffer was adjusted with the page-offset portion of b_offset,
nothing would re-adjust the b_data, which breaks buffer management
code which expects page-aligned b_data (see e.g. bpmap_qenter(), which
skips partial pages).

Fix a minor issue with the GB_KVAALLOC requests, which could result in
returning the mapped buffer if the reused buffer is mapped and has
the right amount of KVA reserved.

Improve assertion in the vfs_buf_check_mapped() to catch unmapped
buffers which have their b_data incorrectly adjusted with offset.

Reported and tested by:	pho (previous version)
Reviewed by:	jeff (previous version)
Sponsored by:	The FreeBSD Foundation
2015-07-25 15:00:14 +00:00
Xin LI
1a7c14aec7 Fix a typo in comment.
Submitted by:	Yanhui Shen via twitter
MFC after:	3 days
2015-07-24 22:13:39 +00:00
Marius Strobl
7815d3948c o Revert the other functional half of r239864, i. e. the merge of r134227
from x86 to use the smp_ipi_mtx spin lock not only for smp_rendezvous_cpus()
  but also for the MD cache invalidation, TLB demapping and remote register
  reading IPIs due to the following reasons:
  - The cross-IPI SMP deadlock that x86 otherwise is subject to can't happen on
    sparc64. That's because on sparc64, spin locks don't disable interrupts
    completely but only raise the processor interrupt level to PIL_TICK. This
    means that IPIs still get delivered and direct dispatch IPIs such as the
    cache invalidation etc. IPIs in question are still executed.
  - In smp_rendezvous_cpus(), smp_ipi_mtx is held not only while sending an
    IPI_RENDEZVOUS, but until all CPUs have processed smp_rendezvous_action().
    Consequently, smp_ipi_mtx may be locked for an extended amount of time as
    queued IPIs (as opposed to the direct ones) such as IPI_RENDEZVOUS are
    scheduled via a soft interrupt. Moreover, given that this soft interrupt
    is only delivered at PIL_RENDEZVOUS, processing of smp_rendezvous_action()
    on a target may be interrupted by f. e. a tick interrupt at PIL_TICK, in
    turn leading to the target in question trying to send an IPI by itself
    while IPI_RENDEZVOUS isn't fully handled, yet, and, thus, resulting in a
    deadlock.
o As mentioned in the commit message of r245850, on at least some sun4u platforms
  concurrent sending of IPIs by different CPUs is fatal. Therefore, hold the
  reintroduced MD ipi_mtx also while delivering cross-traps via MI helpers,
  i. e. ipi_{all_but_self,cpu,selected}().
o Akin to x86, let the last CPU to process cpu_mp_bootstrap() set smp_started
  instead of the BSP in cpu_mp_unleash(). This ensures that all APs actually
  are started when smp_started is no longer 0.
o In all MD and MI IPI helpers, check for smp_started == 1 rather than for
  smp_cpus > 1 or nothing at all. This avoids races during boot causing IPIs
  trying to be delivered to APs that in fact aren't up and running, yet.
  While at it, move setting of the cpu_ipi_{selected,single}() pointers to
  the appropriate delivery functions from mp_init() to cpu_mp_start() where
  it's better suited and allows getting rid of the global isjbus variable.
o Given that now concurrent IPI delivery no longer is possible, also nuke
  the delays before completely disabling interrupts again in the CPU-specific
  cross-trap delivery functions, previously giving other CPUs a window for
  sending IPIs on their part. Actually, we now should be able to entirely get
  rid of completely disabling interrupts in these functions. Such a change
  needs more testing, though.
o In {s,}tick_get_timecount_mp(), make the {s,}tick variable static. While not
  necessary for correctness, this avoids page faults when accessing the stack
  of a foreign CPU as {s,}tick now is locked into the TLBs as part of static
  kernel data. Hence, {s,}tick_get_timecount_mp() always execute as fast as
  possible, avoiding jitter.

PR:		201245
MFC after:	3 days
2015-07-24 15:13:21 +00:00
Sergey Kandaurov
ef88ae77ea Call ksem_get() with initialized 'rights'.
ksem_get() passes 'rights' down to fget(), where they are mandatory.

Reported by:	truckman
Reviewed by:	mjg
2015-07-23 23:18:03 +00:00
Jeff Roberson
fade8dd714 Refactor unmapped buffer address handling.
- Use pointer assignment rather than a combination of pointers and
   flags to switch buffers between unmapped and mapped.  This eliminates
   multiple flags and generally simplifies the logic.
 - Eliminate b_saveaddr since it is only used with pager bufs which have
   their b_data re-initialized on each allocation.
 - Gather up some convenience routines in the buffer cache for
   manipulating buf space and buf malloc space.
 - Add an inline, buf_mapped(), to standardize checks around unmapped
   buffers.

In collaboration with: mlaier
Reviewed by:	kib
Tested by:	pho (many small revisions ago)
Sponsored by:	EMC / Isilon Storage Division
2015-07-23 19:13:41 +00:00
Jeff Roberson
1c1ddc0351 - Don't defeat the FIFO nature of the buffer cache by eliminating the
most recently used buffer when we are under paging pressure.  This is
   a perversion of the buffer and page replacement algorithms and recent
   improvements to the page daemon have rendered it unnecessary.  In the
   event that low-memory deadlocks become an issue it would be possible
   to make a daemon or event handler that performs a similar action on
   the oldest buffers rather than the newest.  Since the buf cache
   is analogous to the page cache and some minimum working set is desired,
   another possibility is to simply shrink the minimum working set, which
   has less downside now that file pages are not directly mapped.

Sponsored by:	EMC / Isilon
Reviewed by:	alc, kib (with some minor objection)
Tested by:	pho
2015-07-23 02:20:41 +00:00
Konstantin Belousov
e637a6e3f9 The smp_rendezvous_cpus() function should ensure that all accesses
done by the functions called on other CPUs are visible to the caller.
Pair the otherwise useless acquire on smp_rv_waiters[3] with a release add
to establish a synchronizes-with relation, which guarantees visibility.
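
A generic sketch of the pairing (variable names are illustrative, not the
actual smp_rendezvous_cpus() code):

    /* Target CPU, after running the rendezvous callbacks: */
    atomic_add_rel_int(&smp_rv_waiters[3], 1);

    /* Caller, waiting for all targets; the acquire pairs with the release: */
    while (atomic_load_acq_int(&smp_rv_waiters[3]) < ncpus)
            cpu_spinwait();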

Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	3 weeks
2015-07-21 22:56:46 +00:00
Konstantin Belousov
01f5e0866b The part of r285680 which removed release semantic for two stores to
it_need was wrong [*].  Restore the releases and add a comment
explaining why they are needed.

Noted by:	alc [*]
Reviewed by:	bde [*]
Sponsored by:	The FreeBSD Foundation
2015-07-21 14:39:34 +00:00
Sergey Kandaurov
94df6fad1d Fix sb_state constant names as used e.g. in the DDB ``show sockbuf'' display.
MFC after:	1 week
2015-07-21 09:57:13 +00:00
Ed Schouten
5a170c1b0e Add an API for easily creating userspace threads in kernelspace.
This change refactors the existing create_thread() function to be more
generic. It replaces almost all of its arguments by a callback that can
be used to extract the thread ID and copy it out to the right place, but
also to perform additional initialization steps, such as setting the
trapframe. This also makes the difference between thr_new() and
thr_create() more clear in my opinion.

This function is going to be used by the CloudABI compatibility layer.

It looks like the OpenSolaris compatibility framework already provides a
function called thread_create(). Rename this function to
do_thread_create() and use a macro to deal with the namespacing
conflict. A similar approach is already used for thread_exit().

MFC after:	1 month
2015-07-20 10:20:04 +00:00
Alexander Motin
d3e2e28e74 Fix typo in comment.
Submitted by:	Masao Uebayashi
2015-07-20 09:37:42 +00:00
Mark Johnston
97cc6870f6 Don't increment the spin count until after the first attempt to acquire a
rwlock read lock. Otherwise the lockstat:::rw-spin probe will fire
spuriously.

MFC after:	1 week
2015-07-19 22:26:02 +00:00
Kirk McKusick
1b79b9498b Restructure code for readability improvement. No functional change.
Reviewed by: kib
2015-07-19 22:25:16 +00:00
Mark Johnston
de2c95cc00 Consistently use a reader/writer flag for lockstat probes in rwlock(9) and
sx(9), rather than using the probe function name to determine whether a
given lock is a read lock or a write lock. Update lockstat(1) accordingly.
2015-07-19 22:24:33 +00:00
Mark Johnston
32cd0147fa Implement the lockstat provider using SDT(9) instead of the custom provider
in lockstat.ko. This means that lockstat probes now have typed arguments and
will utilize SDT probe hot-patching support when it arrives.
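
A sketch of what a typed SDT(9) probe definition looks like (probe name
illustrative):

    SDT_PROVIDER_DEFINE(lockstat);
    SDT_PROBE_DEFINE2(lockstat, , , rw__acquire, "struct rwlock *", "int");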

Reviewed by:	gnn
Differential Revision:	https://reviews.freebsd.org/D2993
2015-07-19 22:14:09 +00:00
Marcelo Araujo
f19e47d691 Add support to the jail framework to be able to mount linsysfs(5) and
linprocfs(5).

Differential Revision:	D2846
Submitted by:		Nikolai Lifanov <lifanov@mail.lifanov.com>
Reviewed by:		jamie
2015-07-19 08:52:35 +00:00
Konstantin Belousov
283dfee925 Further cleanup after r285607.
Remove useless release semantic for some stores to it_need.  For
stores where the release is needed, add a comment explaining why.

The fence after the atomic_cmpset() op on it_need should be acquire
only; release is not needed (see above).  The combination of
atomic_cmpset() + fence_acq() is better expressed there as
atomic_cmpset_acq().

Use atomic_cmpset() for the swi's ih_need read and clear.

Discussed with:	alc, bde
Reviewed by:	bde
Comments wording provided by:	bde
Sponsored by:	The FreeBSD Foundation
MFC after:	2 weeks
2015-07-18 19:59:29 +00:00
Konstantin Belousov
b4490c6e93 The si_status field of the siginfo_t, provided by waitid(2) and the
SIGCHLD signal, should keep the full 32 bits of the status passed to
_exit(2).

Split the combined p_xstat of the struct proc into the separate exit
status p_xexit for normal process exit, and signalled termination
information p_xsig.  Kernel-visible macro KW_EXITCODE() reconstructs
old p_xstat from p_xexit and p_xsig.  p_xexit contains the complete status
and is copied out into si_status.
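
A sketch of the reconstruction (the macro body is an assumption here,
mirroring the userlevel W_EXITCODE() layout):

    #define KW_EXITCODE(ret, sig)   ((ret) << 8 | (sig))

    xstat = KW_EXITCODE(p->p_xexit, p->p_xsig);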

Requested by:	Joerg Schilling
Reviewed by:	jilles (previous version), pho
Tested by:	pho
Sponsored by:	The FreeBSD Foundation
2015-07-18 09:02:50 +00:00
Mark Johnston
c6d48c8752 Fix the !KDTRACE_HOOKS build.
X-MFC-With:	r285664
2015-07-18 04:38:11 +00:00
Mark Johnston
e2b25737ee Pass the lock object to lockstat_nsecs() and return immediately if
LO_NOPROFILE is set. Some timecounter handlers acquire a spin mutex, and
we don't want to recurse if lockstat probes are enabled.
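
A sketch of the shape of the check (illustrative; the "probes enabled"
short-circuit added by the commit below is omitted):

    uint64_t
    lockstat_nsecs(struct lock_object *lo)
    {
            struct bintime bt;
            uint64_t ns;

            if (lo->lo_flags & LO_NOPROFILE)
                    return (0);
            binuptime(&bt);
            ns = bt.sec * (uint64_t)1000000000;
            ns += ((uint64_t)1000000000 * (uint32_t)(bt.frac >> 32)) >> 32;
            return (ns);
    }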

PR:		201642
Reviewed by:	avg
MFC after:	3 days
2015-07-18 00:57:30 +00:00
Mark Johnston
efe8b26b82 Modify lockstat_nsecs() to just return unless lockstat probes are actually
enabled. The cost of a timecounter read can be quite significant, and the
problem became more apparent after r284297, since that change resulted in
a call to lockstat_nsecs() for each acquisition of an rwlock read lock.

PR:		201642
Reviewed by:	avg
Tested by:	Jason Unovitch
MFC after:	3 days
Differential Revision:	https://reviews.freebsd.org/D3073
2015-07-18 00:22:00 +00:00
Ed Schouten
fd054c2df9 Undo r285656.
It turns out that the CDDL sources already introduce a function called
thread_create(). I'll investigate what we can do to make these functions
coexist.

Reported by: Ivan Klymenko
2015-07-17 22:26:45 +00:00
Ed Schouten
82a3d2cbfc Add an API for easily creating userspace threads in kernelspace.
This change refactors the existing create_thread() function to be more
generic. It replaces almost all of its arguments by a callback that can
be used to extract the thread ID and copy it out to the right place, but
also to perform additional initialization steps, such as setting the
trapframe. This also makes the difference between thr_new() and
thr_create() more clear in my opinion.

This function is going to be used by the CloudABI compatibility layer.

Reviewed by:	kib
MFC after:	1 month
2015-07-17 16:34:01 +00:00
Mateusz Guzik
2919a0c5c1 fd: partially deduplicate fdescfree and fdescfree_remapped
This also moves vrele of cdir/rdir/jdir vnodes earlier, which should not
matter.
2015-07-16 15:26:37 +00:00
Mateusz Guzik
cd672ca60f Get rid of lim_update_thread and cred_update_thread.
Their primary use was in thread_cow_update to free up old resources.
Freeing had to be done with the proc lock held, and the _cow_ funcs already knew
how to free old structs.
2015-07-16 14:30:11 +00:00
Mateusz Guzik
752fc07d33 vfs: implement v_holdcnt/v_usecount manipulation using atomic ops
Transitions 0->1 and 1->0 (which decide e.g. on putting the vnode on the free
list) of either counter are still guarded by the vnode interlock.
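
A sketch of the fast-path idea (illustrative, not the actual vget()/vref()
code):

    u_int old;

    old = vp->v_usecount;
    while (old > 0) {
            /* Already referenced: bump with a single atomic, no interlock. */
            if (atomic_cmpset_int(&vp->v_usecount, old, old + 1))
                    return;
            old = vp->v_usecount;
    }
    /* 0->1 transition: fall back to the vnode interlock as before. */
    VI_LOCK(vp);
    /* ... bump v_holdcnt and v_usecount, leave the free list, etc. ... */
    VI_UNLOCK(vp);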

Reviewed by:	kib (earlier version)
Tested by:	pho
2015-07-16 13:57:05 +00:00
Ed Schouten
457f7e23b1 Implement CloudABI's exec() call.
Summary:
In a runtime that is purely based on capability-based security, there is
a strong emphasis on how programs start their execution. We need to make
sure that we execute a new program with an exact set of file
descriptors, ensuring that credentials are not leaked into the process
accidentally.

Providing the right file descriptors is just half the problem. There
also needs to be a framework in place that gives meaning to these file
descriptors. How does a CloudABI mail server know which of the file
descriptors corresponds to the socket that receives incoming emails?
Furthermore, how will this mail server acquire its configuration
parameters, as it cannot open a configuration file from a global path on
disk?

CloudABI solves this problem by replacing traditional string command
line arguments with a tree-like data structure consisting of scalars,
sequences and mappings (similar to YAML/JSON). In this structure, file
descriptors are treated as a first-class citizen. When calling exec(),
file descriptors are passed on to the new executable if and only if they
are referenced from this tree structure. See the cloudabi-run(1) man
page for more details and examples (sysutils/cloudabi-utils).

Fortunately, the kernel does not need to care about this tree structure
at all. The C library is responsible for serializing and deserializing,
but also for extracting the list of referenced file descriptors. The
system call only receives a copy of the serialized data and a layout of
what the new file descriptor table should look like:

    int proc_exec(int execfd, const void *data, size_t datalen, const int *fds,
              size_t fdslen);

This change introduces a set of fd*_remapped() functions:

- fdcopy_remapped() pulls a copy of a file descriptor table, remapping
  all of the file descriptors according to the provided mapping table.
- fdinstall_remapped() replaces the file descriptor table of the process
  by the copy created by fdcopy_remapped().
- fdescfree_remapped() frees the table in case we aborted before
  fdinstall_remapped().

We then add a function exec_copyin_data_fds() that builds on top of these
functions. It copies in the data and constructs a new remapped file
descriptor table. This is used by cloudabi_sys_proc_exec().
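
A sketch of how the three functions fit together (prototypes and the helper
are assumptions based on the description above):

    struct filedesc *newfdp;
    int error;

    error = fdcopy_remapped(td->td_proc->p_fd, fds, fdslen, &newfdp);
    if (error != 0)
            return (error);
    error = setup_new_image(td);            /* hypothetical stand-in */
    if (error != 0) {
            fdescfree_remapped(newfdp);     /* aborted before install */
            return (error);
    }
    fdinstall_remapped(td, newfdp);         /* replace the process's table */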

Test Plan:
cloudabi-run(1) is capable of spawning processes successfully, providing
it data and file descriptors. procstat -f seems to confirm all is good.
Regular FreeBSD processes also work properly.

Reviewers: kib, mjg

Reviewed By: mjg

Subscribers: imp

Differential Revision: https://reviews.freebsd.org/D3079
2015-07-16 07:05:42 +00:00
Konstantin Belousov
70a3efc14f Do not use atomic_swap_int(9), as it is not available on all
architectures.  Atomic_cmpset_int(9), used in a loop, is a direct
replacement.  The change fixes arm, arm64, mips and sparc64, which lack
atomic_swap().
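
A sketch of the replacement pattern (illustrative variable, not the actual
intr code):

    u_int old;

    do {
            old = *valuep;
    } while (atomic_cmpset_acq_int(valuep, old, 0) == 0);
    /* 'old' now holds the value that was atomically exchanged for zero. */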

Suggested and reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	2 weeks
2015-07-15 21:44:16 +00:00
Konstantin Belousov
615b6ea2c8 Reset the non-zero it_need indicator to zero atomically with fetching
its current value.  It is believed that this change is the real fix for the
issue which was merely covered up by r252683.

With the current code, if the interrupt handler sets it_need between the
read and the subsequent reset, the update could be lost and
ithread_execute_handlers() would not be called in response to the lost
update.

r252683 could have hidden the issue since, at the moment of commit,
atomic_load_acq_int() did a locked cmpxchg on the variable, which puts
the cache line into the exclusively owned state and clears the store
buffers.  Then the immediate store of zero has a very high chance of
reusing the exclusive state of the cache line, making the load and
store sequence operate as an atomic swap.

For now, add the acq+rel fence immediately after the swap, to not
disturb the current (but excessive) ordering.  Acquire is needed for the
ih_need reads after the load, while release does not serve a useful
purpose [*].
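
A sketch contrasting the two sequences (illustrative, not the actual ithread
loop):

    /* Before: an it_need update between the read and the store is lost. */
    needed = ithd->it_need;
    atomic_store_rel_int(&ithd->it_need, 0);

    /* After: fetch and clear in one atomic step, then fence. */
    needed = atomic_swap_int(&ithd->it_need, 0);
    atomic_thread_fence_acq_rel();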

Reviewed by:	alc
Noted by:	alc [*]
Discussed with:	bde
Tested by:	pho
Sponsored by:	The FreeBSD Foundation
MFC after:	2 weeks
2015-07-15 17:36:35 +00:00
Konstantin Belousov
03bbcb2f0c Style. Remove excessive brackets. Compare non-boolean with zero.
Sponsored by:	The FreeBSD Foundation
MFC after:	2 weeks
2015-07-15 17:14:05 +00:00
Mark Johnston
02d131ad11 Fix some error-handling bugs when core dump compression is enabled:
- Ensure that core dump parameters are initialized in the error path.
- Don't call gzio_fini() on a NULL stream.

Reported by:	rpaulo
2015-07-14 18:24:05 +00:00
Conrad Meyer
0c40f3532d Fix cleanup race between unp_dispose and unp_gc
unp_dispose and unp_gc could race to tear down the same mbuf chains, which
can lead to dereferencing freed filedesc pointers.

This patch adds an IGNORE_RIGHTS flag on unpcbs marking the unpcb's RIGHTS
as invalid/freed. The flag is protected by UNP_LIST_LOCK.

To serialize against unp_gc, unp_dispose needs the socket object. Change the
dom_dispose() KPI to take a socket object instead of an mbuf chain directly.
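
A sketch of the KPI change (surrounding struct domain members omitted):

    struct domain {
            /* ... */
            void    (*dom_dispose)(struct socket *so); /* was: struct mbuf * */
            /* ... */
    };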

PR:		194264
Differential Revision:	https://reviews.freebsd.org/D3044
Reviewed by:	mjg (earlier version)
Approved by:	markj (mentor)
Obtained from:	mjg
MFC after:	1 month
Sponsored by:	EMC / Isilon Storage Division
2015-07-14 02:00:50 +00:00
Mateusz Guzik
6161705823 exec: textvp -> oldtextvp; binvp -> newtextvp
This makes it consistent with the rest of the naming in do_execve.

No functional changes.
2015-07-14 01:13:37 +00:00
Mateusz Guzik
853be5ffef exec plug a redundant vref + vrele of the image vnode 2015-07-14 00:43:08 +00:00
Mateusz Guzik
e94e50af1d racct: perform a lockless check for p_throttled
This reduces proc lock contention.
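
A sketch of the lockless check (illustrative, not the actual kern_racct.c
code):

    if (p->p_throttled == 0)
            return;
    PROC_LOCK(p);
    if (p->p_throttled != 0) {
            /* Re-checked under the lock; act on it here. */
    }
    PROC_UNLOCK(p);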

Reviewed by:	trasz
2015-07-13 22:52:11 +00:00
Conrad Meyer
c578e0fb48 pipe_direct_write: Fix mismatched pipelock/unlock
If a signal is caught in pipelock, causing it to fail, pipe_direct_write
should not try to pipeunlock.
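
A sketch of the corrected control flow (illustrative, not the actual
pipe_direct_write() code):

    error = pipelock(wpipe, 1);
    if (error != 0) {
            /*
             * pipelock() failed because a signal was caught; the lock
             * is not held, so do not call pipeunlock() here.
             */
            PIPE_UNLOCK(wpipe);
            return (error);
    }
    /* ... perform the direct write ... */
    pipeunlock(wpipe);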

Reported by:	pho
Differential Revision:	https://reviews.freebsd.org/D3069
Reviewed by:	kib
Approved by:	markj (mentor)
MFC after:	1 week
Sponsored by:	EMC / Isilon Storage Division
2015-07-13 17:45:22 +00:00
Ian Lepore
969fc29e0b Use the monotonic (uptime) counter rather than time-of-day to measure elapsed
time between ntp_adjtime() clock offset adjustments.  This eliminates spurious
frequency steering after a large clock step (such as a 1970->2015 step on a
system with no battery-backed clock hardware).

This problem was discovered after the import of ntpd 4.2.8, which does things
in a slightly different (but still correct) order than the 4.2.4 we had
previously.  In particular, 4.2.4 would step the clock then immediately after
use ntp_adjtime() to set the frequency and offset to zero, which captured the
post-step time-of-day as a side effect.  In 4.2.8, ntpd sets frequency and
offset to zero before any initial clock step, capturing the time as 1970-ish,
then when it next calls ntp_adjtime() it's with a non-zero offset measurement.
This non-zero value gets multiplied by the apparent 45-year interval, which
blows up into a completely bogus frequency steer.  That gets clamped to
500ppm, but that's still enough to make the clock drift so fast that ntpd has
to keep stepping it every few minutes to compensate.
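
A generic sketch of the idea (illustrative, not the actual kern_ntptime.c
code): measure the interval with the monotonic uptime clock so a step of the
time-of-day clock cannot inflate it.

    static struct timeval last_adj;         /* illustrative state */
    struct timeval now, delta;

    getmicrouptime(&now);                   /* monotonic, unaffected by steps */
    delta = now;
    timevalsub(&delta, &last_adj);
    last_adj = now;
    /* Scale the frequency adjustment by delta, not by time-of-day deltas. */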
2015-07-12 18:38:17 +00:00
Bjoern A. Zeeb
97fc027722 Try to unbreak the build after r285390 by removing the obsolete static
declaration.
2015-07-12 00:26:22 +00:00
Mateusz Guzik
c634b75204 vfs: always clear VI_OWEINACT in consumers bumping v_usecount
Previously vputx would detect the condition and clear the flag.

With this change it is invalid to have both v_usecount > 0 and the flag
set. Assert the condition is met in all relevant places.

Reviewed by:	kib
2015-07-11 16:28:55 +00:00
Mateusz Guzik
2d1ca3cdff vfs: move si_usecount manipulation to dedicated functions
Reviewed by:	kib
2015-07-11 16:28:12 +00:00
Mateusz Guzik
8a08cec166 Create a dedicated function for ensuring that cdir and rdir are populated.
Previously several places were doing it on their own, partially
incorrectly (e.g. without the filedesc locked) or even actively harmfully
by populating jdir or assigning rootvnode without vref'ing it.

Reviewed by:	kib
2015-07-11 16:22:48 +00:00
Mateusz Guzik
f0725a8e1e Move chdir/chroot-related fdp manipulation to kern_descrip.c
Prefix exported functions with pwd_.

Deduplicate some code by adding a helper for setting fd_cdir.

Reviewed by:	kib
2015-07-11 16:19:11 +00:00
Adrian Chadd
871ef8b0d8 Regenerate syscalls. 2015-07-11 15:22:11 +00:00