The old buffer was not being initialized, so a later str*() op on
it could crash unless a previous call to setproctitle(3) with an
actual string had already initialized it.
Noticed by: Ashley Penney <ashp@unloved.org>
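A minimal sketch of the class of fix, with hypothetical names rather
than the real libc internals: a lazily allocated buffer must hold a
valid string before any str*() operation can see it.

    #include <stdlib.h>

    static char *oldtitle;

    static int
    oldtitle_init(size_t len)
    {
        if (oldtitle == NULL) {
            oldtitle = malloc(len);
            if (oldtitle == NULL)
                return (-1);
            /* This was the missing step: without it, strlen()/strcat()
             * ran over uninitialized memory. */
            oldtitle[0] = '\0';
        }
        return (0);
    }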
to clarify which system call accepts which arguments. Previously
the manual page gave the impression that calling unmount() with
flags of (MNT_FORCE | MNT_UPDATE | MNT_RDONLY) would downgrade a
read-write mount to read-only, which is clearly untrue; to do that,
these flags should be passed to mount() instead.
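For illustration only (the filesystem type, mount point and the
per-filesystem data argument are placeholders), the distinction the
page now documents looks like this:

    #include <sys/param.h>
    #include <sys/mount.h>
    #include <err.h>

    int
    downgrade_to_ro(const char *fstype, const char *mntpoint, void *fsdata)
    {
        /*
         * unmount(mntpoint, MNT_FORCE | MNT_UPDATE | MNT_RDONLY) would
         * NOT do this; unmount(2) only detaches a filesystem.
         */
        if (mount(fstype, mntpoint,
            MNT_UPDATE | MNT_RDONLY | MNT_FORCE, fsdata) == -1) {
            warn("mount");
            return (-1);
        }
        return (0);
    }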
not spinlock_t. The spinlock_t type and the associated functions and
macros may require blocking signals in order for async-safe libc
functions to behave appropriately in libthr. This is undesirable for
libthr internal locking.
So, this is the first step in completely separating libthr from libc's
locking primitives.
Three new macros should be used for internal libthr locking from now on:
THR_LOCK, THR_TRYLOCK, THR_UNLOCK.
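A hedged sketch of the intended call pattern only; the header, the lock
object and its type (struct umtx) are assumptions, not part of the
commit:

    #include "thr_private.h"            /* assumed to provide the macros */

    static struct umtx some_internal_lock;  /* hypothetical lock object */

    void
    update_internal_state(void)
    {
        THR_LOCK(&some_internal_lock);      /* no signal blocking implied */
        /* ... touch libthr-internal data ... */
        THR_UNLOCK(&some_internal_lock);
    }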
and the disabling of signals. What we are really interested in is
keeping track of recursive disabling of signals. We should not
be recursively acquiring thread locks. Any such situations should
be reorganized to not require a recursive lock.
Separating the two also allows us to block signals independently of
acquiring thread locks. This will be needed in libthr in the near future when
we put the pieces together to protect libc functions that use pthread mutexes
and low level locks.
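A minimal sketch of the separation: signal deferral nests via a
per-thread counter, while thread locks are never taken recursively.
The names below (struct thread_sketch, _thr_signal_block(), the
sigblock and savedmask fields) are assumptions for illustration:

    #include <signal.h>
    #include <stddef.h>

    struct thread_sketch {          /* stand-in for the real pthread */
        int      sigblock;          /* recursion count for deferral  */
        sigset_t savedmask;         /* mask saved on first deferral  */
    };

    void
    _thr_signal_block(struct thread_sketch *t)
    {
        sigset_t set;

        if (t->sigblock++ > 0)
            return;                 /* already deferred: just count */
        sigfillset(&set);
        sigprocmask(SIG_BLOCK, &set, &t->savedmask);
    }

    void
    _thr_signal_unblock(struct thread_sketch *t)
    {
        if (--t->sigblock > 0)
            return;                 /* still nested */
        sigprocmask(SIG_SETMASK, &t->savedmask, NULL);
    }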
Change execvp to be a wrapper around execvP. This is necessary for some
of the /rescue pieces. It may also be more generally applicable.
Submitted by: Tim Kientzle <kientzle@acm.org>
Approved by: Silence on arch@
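Roughly, the wrapper has this shape (the default-path fallback and the
lack of error handling are simplifications, not the committed code):

    #include <paths.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    execvp(const char *name, char * const argv[])
    {
        const char *path;

        if ((path = getenv("PATH")) == NULL)
            path = _PATH_DEFPATH;
        return (execvP(name, path, argv));
    }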
implementation and the new improved one. We now precompute the
signal set passed to sigtimedwait, using an inverted set when
necessary for compatibility with older kernels.
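As an illustrative helper only (not taken from the commit): POSIX has
no primitive for complementing a signal set, so an "inverted" set has
to be built signal by signal, which is exactly the kind of work worth
precomputing once.

    #include <signal.h>

    static void
    sigset_invert(sigset_t *dst, const sigset_t *src)
    {
        int sig;

        sigemptyset(dst);
        for (sig = 1; sig < NSIG; sig++)
            if (!sigismember(src, sig))
                sigaddset(dst, sig);
    }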
exit function has obviated the need for _spin[un]lock_pthread().
The _spin[un]lock() functions can now dereference curthread without
the danger that the ldtentry containing the pointer to the thread
has been cleared out from under them.
signals were changed in the kernel, it will retrieve the pending set and
try to find a thread to dispatch the signal to. The dispatching can be
rolled back if the signal is no longer pending in the kernel.
o Create two functions, _thr_signal_init() and _thr_signal_deinit().
All signal action settings are retrieved from the kernel when threading
mode is turned on; after a fork(), the child process resets them to the
user settings by calling _thr_signal_deinit(). When threading mode is
not turned on, all signal operations are passed directly to the kernel.
o When a thread generates a synchronous signal and its context is
returned from the completed list, the UTS retrieves the signal from the
thread's mailbox and tries to deliver it to the thread.
o The context signal mask is now only used when delivering signals; the
thread's current signal mask is always the one in the pthread structure.
o Remove the have_signals field in the pthread structure and replace it
with psf_valid in pthread_signal_frame. When psf_valid is true, at
context switch time the thread backs itself out of any mutex/condition
internal queues and then begins to process signals. When a thread is
running rather than blocked, check_pending indicates that there are
signals for the thread; after it is preempted and later resumed, the
UTS will try to deliver the signals to it.
o At signal delivery time, not only the thread's pending signals but
also the process's pending signals are scanned.
o Change the sigwait code a bit: remove the sigwait field in
pthread_wait_data and replace it with oldsigmask in the pthread
structure. When a thread calls sigwait(), its current signal mask is
backed up to oldsigmask and waitset is copied into its signal mask;
when the thread gets a signal in the waitset range, its current signal
mask is restored from oldsigmask. These steps are done atomically
(see the sketch after this list).
o Two additional POSIX APIs are implemented, sigwaitinfo() and sigtimedwait().
o Signal code locking is better than before; there are fewer race
conditions.
o Temporarily disable most of the code in _kse_single_thread() as it is
not safe after fork().
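Condensed, self-contained sketch of the sigwait() flow from the sigwait
bullet above; struct sigwait_thread and wait_for_signal() stand in for
libpthread internals and are not the real code:

    #include <signal.h>

    struct sigwait_thread {
        sigset_t sigmask;       /* the current mask lives in the thread, */
        sigset_t oldsigmask;    /* not in pthread_wait_data any longer   */
    };

    extern int wait_for_signal(struct sigwait_thread *);    /* placeholder */

    int
    sigwait_sketch(struct sigwait_thread *t, const sigset_t *waitset, int *sig)
    {
        /* Done atomically (under the thread's lock) in the real code: */
        t->oldsigmask = t->sigmask;     /* back up the current mask       */
        t->sigmask = *waitset;          /* accept only the waited signals */

        *sig = wait_for_signal(t);      /* UTS delivers a waitset signal  */

        t->sigmask = t->oldsigmask;     /* restore, also atomically */
        return (*sig > 0 ? 0 : -1);
    }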
[+|-]Inf, [+|-]NaN, nan(...), and hexadecimal FP constants.
While here, add %a and %A, which are aliases for %e, and
add support for long doubles.
Reviewed by: standards@
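A quick illustration (not from the commit) of inputs the *scanf()
floating-point conversions now accept:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double d;
        long double ld;

        sscanf("0x1.8p+1", "%la", &d);      /* hex FP constant: d == 3.0 */
        sscanf("-Inf", "%le", &d);          /* d is now -infinity */
        sscanf("nan(0x123)", "%lf", &d);    /* isnan(d) != 0 */
        sscanf("1.5e300", "%Le", &ld);      /* long double support */
        printf("%Lg %d\n", ld, isnan(d));
        return (0);
    }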
breakages. Note that runtime compatibility is not guaranteed. Future
changes to setjmp/longjmp in libc will break threaded applications
linked against libc_r.so.5 on ia64. We pull our "tier 2" card once
more...
Reviewed by: ru
obsolete. The intent is to add glue to either libthr or
libpthread to create the necessary compat links.
o Hook libpthread to the build on ia64. This is slightly out of
order, because the kernel still doesn't have all the support,
but that's not a problem in this case.
functions are derived from swapctx() and restorectx() (respectively)
in sys/ia64/ia64/context.s. The code is expected to be 99%
correct, but has not yet been tested.
Note that with these functions operating on mcontext_t, we also
created the foundation upon which we can implement getcontext(2)
and setcontext(2) replacements. It's not guaranteed that the use
of these syscalls and _ia64_{save|restore}_context() on the same
ucontext_t is actually going to work. Replacing the syscalls is
now trivially achieved.
This commit completes the ia64 port of libpthread itself (modulo
testing and bugfixes).
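A hedged sketch of how the replacements could be shaped; the prototypes
of _ia64_save_context() and _ia64_restore_context() and the _thr_*
wrapper names below are assumptions, not the committed interfaces:

    #include <ucontext.h>

    int  _ia64_save_context(mcontext_t *);      /* assumed prototype */
    void _ia64_restore_context(mcontext_t *);   /* assumed prototype */

    int
    _thr_getcontext(ucontext_t *ucp)
    {
        /* Returns twice, like getcontext(2). */
        return (_ia64_save_context(&ucp->uc_mcontext));
    }

    int
    _thr_setcontext(ucontext_t *ucp)
    {
        _ia64_restore_context(&ucp->uc_mcontext);
        return (0);     /* not reached when the restore succeeds */
    }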
the register stack and memory stack and call the function given to it.
While here, provide empty, non-working stubs for the context functions
(_ia64_save_context() and _ia64_restore_context()) so that anyone can at
least compile libkse from CVS sources. Real implementations will follow
soon.
minimize the amount and complexity of assembly code that needs to be
written. This way the core functionality is spread over 3 elementary
functions that don't have to do anything that can more easily and
more safely be done in C. As such, assembly code will only have to
know about the definition of mcontext_t.
The runtime cost of not having these functions being inlined is less
important than the cleanliness and maintainability of the code at
this stage of the implementation.
and sigsuspend(2), all three of which operate or depend on the
process signal mask.
Add a missing xref to sigsetops(3), without which the above three
syscalls would be useless.
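A small usage note on why the xref matters (illustrative only): the
mask handed to a call such as sigsuspend(2) has to be built with the
sigsetops(3) functions first.

    #include <signal.h>

    void
    wait_allowing_only_sigint(void)
    {
        sigset_t mask;

        sigfillset(&mask);          /* sigsetops(3): start from "all"  */
        sigdelset(&mask, SIGINT);   /* ...then let SIGINT be delivered */
        (void)sigsuspend(&mask);    /* returns after a handler runs    */
    }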
platforms the compiler warns about incompatible integer/pointer casts
and on ia64 this generally is bad news. We know that what we're doing
here is valid/correct, so suppress the warning. No functional change.
Sleeps better: marcel
by moving the definition of struct ksd to pthread_md.h and removing
the inclusion of ksd.h from thr_private.h (which has the definition
of struct kse and kse_critical_t). This allows ksd.h to have inline
functions that use struct kse and kse_critical_t and generally
yields a cleaner implementation at the cost of not having all ksd
related types/definitions in one header.
Implement the ksd functionality on ia64 by using inline functions
and permanently remove ksd.c from the ia64 specific makefile.
This change does not clean up the i386 specific version of ksd.h.
NOTE: The ksd code on ia64 abuses the tp register in the same way
as it is abused in libthr in that it is incompatible with the
runtime specification. This will be addressed when support for TLS
hits the tree.
_ksd_readandclear_tmbx to be function-like. That way we
can define them as inline functions or create prototypes
for them.
This change allows the ksd interface on ia64 to be fully
inlined.
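Generic illustration of the point (the macro bodies and the ia64
function body shown are made up, not the real ksd.h contents): only a
function-like macro can later be replaced by an inline function or an
extern prototype without touching its call sites.

    struct kse_thr_mailbox;                 /* opaque here */

    /*
     * Object-like: call sites read "_ksd_readandclear_tmbx" without
     * parentheses and are forever tied to a macro expansion:
     *     #define _ksd_readandclear_tmbx    <arch-specific expression>
     *
     * Function-like: call sites read "_ksd_readandclear_tmbx()", so an
     * architecture may provide a macro, an extern prototype, or an
     * inline function, as ia64 now can:
     */
    static __inline struct kse_thr_mailbox *
    _ksd_readandclear_tmbx(void)
    {
        struct kse_thr_mailbox *tmbx = 0;   /* placeholder body */
        /* The real code swaps out the per-KSE thread mailbox pointer. */
        return (tmbx);
    }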