Add taskqueue_start_threads_cpuset(). This is a more generic version
of taskqueue_start_threads_pinned(), which only supports a single
cpuid.
This originally came from John Baldwin <jhb@> who implemented it
as part of a push towards NUMA awareness in drivers. I started implementing
something similar for RSS and NUMA, then found he already did it.
I'd like to axe taskqueue_start_threads_pinned() so it doesn't become
part of a longer-term API. (Read: hps@ wants to MFC things, and
if I don't do this soon, he'll MFC what's here. :-)
I have a follow-up commit which converts the Intel drivers over
to using the cpuset version of this function, so we can eventually
nuke the pinned version.
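As a rough usage sketch (the queue name, thread count, priority and CPU
numbers below are made up, and the signature is assumed to mirror
taskqueue_start_threads() with an added cpuset_t mask):

    #include <sys/param.h>
    #include <sys/cpuset.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>
    #include <sys/priority.h>
    #include <sys/taskqueue.h>

    static struct taskqueue *foo_tq;

    static void
    foo_taskq_setup(void)
    {
        cpuset_t mask;

        foo_tq = taskqueue_create("foo_tq", M_WAITOK,
            taskqueue_thread_enqueue, &foo_tq);

        /* Restrict the worker threads to CPUs 2 and 3. */
        CPU_ZERO(&mask);
        CPU_SET(2, &mask);
        CPU_SET(3, &mask);
        taskqueue_start_threads_cpuset(&foo_tq, 2, PI_NET, &mask,
            "%s taskq", "foo");
    }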
Tested:
* igb, ixgbe
Obtained from: jhbbsd
Phabric: https://reviews.freebsd.org/D1247
Reviewed by: jhb, avg
Sponsored by: Spectra Logic Corporation
sys/kern_subr_taskqueue.c:
Modify taskqueue_drain_all() processing to use a temporary
"barrier task", rather than rely on a user task that may
be destroyed during taskqueue_drain_all()'s execution. The
barrier task is queued behind all previously queued tasks
and then has its priority elevated so that future tasks
cannot pass it in the queue.
Use a similar barrier scheme to drain threads processing
current tasks. This requires taskqueue_run_locked() to
insert and remove the taskqueue_busy object for the running
thread for every task processed.
share/man/man9/taskqueue.9:
Remove warning about live-lock issues with taskqueue_drain_all()
and indicate that it does not wait for tasks queued after
it begins processing.
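Reduced to a sketch, the barrier idea looks roughly like the following
(illustrative only, not the actual implementation; in particular, the
real code also elevates the barrier task's priority once it is queued
so that later tasks cannot pass it):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/priority.h>
    #include <sys/systm.h>
    #include <sys/taskqueue.h>

    struct drain_barrier {
        struct mtx db_lock;
        int        db_done;
    };

    static void
    drain_barrier_fn(void *arg, int pending __unused)
    {
        struct drain_barrier *db = arg;

        mtx_lock(&db->db_lock);
        db->db_done = 1;
        wakeup(db);
        mtx_unlock(&db->db_lock);
    }

    static void
    drain_all_sketch(struct taskqueue *tq)
    {
        struct drain_barrier db;
        struct task bt;

        mtx_init(&db.db_lock, "drainbar", NULL, MTX_DEF);
        db.db_done = 0;
        TASK_INIT(&bt, 0, drain_barrier_fn, &db);
        taskqueue_enqueue(tq, &bt);     /* lands behind all queued tasks */

        mtx_lock(&db.db_lock);
        while (!db.db_done)
            mtx_sleep(&db, &db.db_lock, PWAIT, "drainbar", 0);
        mtx_unlock(&db.db_lock);
        mtx_destroy(&db.db_lock);
    }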
This _was_ right; a last-minute suggestion and not enough testing
make Adrian a bad boy.
Tested:
* igb(4) with RSS patches, by hand verifying each igb(4) taskqueue
tid from procstat -ka using cpuset -g -t <tid>.
Add taskqueue_start_threads_pinned(), which pins the taskqueue
worker thread(s) to a given cpuid.
For now it isn't a taskqueue/taskthread error to fail to pin
to the given cpuid.
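A minimal usage sketch (foo_tq is a hypothetical queue created with
taskqueue_create()/taskqueue_thread_enqueue, and the argument order is
assumed to be tqp, count, pri, cpuid, fmt, ...):

    /* One worker thread, pinned to CPU 0; a failed pin is non-fatal. */
    taskqueue_start_threads_pinned(&foo_tq, 1, PI_NET, 0,
        "%s taskq", "foo");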
Thanks to rpaulo@, kib@ and jhb@ for feedback.
Tested:
* igb(4), with local RSS patches to pin taskqueues.
TODO:
* ask the doc team for help in documenting the new API call.
* add a taskqueue_start_threads_cpuset() method which takes
a cpuset_t - but this may require a bunch of surgery to
bring cpuset_t into scope.
Add taskqueue_drain_all(). This API has semantics similar to those of
taskqueue_drain(), but acts on all tasks that might be queued or
running on a taskqueue.
A caller must ensure that no new tasks are being enqueued, otherwise
this call would be totally meaningless. For example, if the tasks are
enqueued by an interrupt filter, then its interrupt must be disabled.
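As a sketch (struct foo_softc, foo_disable_intr() and sc_tq are
hypothetical), a detach path would then look like:

    static void
    foo_detach(struct foo_softc *sc)
    {
        /* Stop the enqueue source first (e.g. mask the interrupt). */
        foo_disable_intr(sc);

        /* Wait for every queued or running task, then tear down. */
        taskqueue_drain_all(sc->sc_tq);
        taskqueue_free(sc->sc_tq);
    }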
MFC after: 10 days
Move the tq_enqueue() call out of the queue lock for known handlers
(in fact, I found no others in the base system). This reduces queue
lock hold time and contention spinning under active multithreaded
enqueueing.
Remove locking from taskqueue_member(). The list of threads is static
during the taskqueue life cycle, so there is no need to protect it by
taking the already quite congested lock several more times for each
ZFS I/O.
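In sketch form, the now lockless membership check is just a walk over
the static thread array (the member names below are assumed, not
authoritative):

    static int
    taskqueue_member_sketch(struct taskqueue *queue, struct thread *td)
    {
        int i;

        /*
         * tq_threads[]/tq_tcount are written only while the queue is
         * created or torn down, so readers do not need the queue lock.
         */
        for (i = 0; i < queue->tq_tcount; i++)
            if (queue->tq_threads[i] == td)
                return (1);
        return (0);
    }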
Add support for callbacks to taskqueue(9). The scope of these
callbacks is primarily to support actions that affect the taskqueue's
thread environments. They are entirely optional and are therefore
introduced as a new API: taskqueue_set_callback().
This interface allows the caller to specify that a taskqueue requires a
callback and optional context pointer for a given callback type.
The callback types included in this commit can be used to register a
constructor and destructor for thread-local storage using osd(9). This
allows a particular taskqueue to declare that its threads require a
specific type of TLS, without needing a specially orchestrated
task-based mechanism for startup and shutdown.
Two callback types are supported at this point:
- TASKQUEUE_CALLBACK_TYPE_INIT, called by every thread when it starts, prior
to processing any tasks.
- TASKQUEUE_CALLBACK_TYPE_SHUTDOWN, called by every thread when it exits,
after it has processed its last task but before the taskqueue is
reclaimed.
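A usage sketch (the foo_* names, including the TLS helpers, are
hypothetical; a real consumer would typically use osd(9) in these
callbacks). Registering the callbacks before taskqueue_start_threads()
ensures the INIT callback runs in every worker:

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <sys/priority.h>
    #include <sys/taskqueue.h>

    void foo_thread_ctx_alloc(void *);  /* hypothetical TLS setup */
    void foo_thread_ctx_free(void *);   /* hypothetical TLS teardown */

    static struct taskqueue *foo_tq;

    static void
    foo_tq_init_cb(void *arg)
    {
        /* Runs in each worker thread before it handles any task. */
        foo_thread_ctx_alloc(arg);
    }

    static void
    foo_tq_shutdown_cb(void *arg)
    {
        /* Runs in each worker thread after its final task. */
        foo_thread_ctx_free(arg);
    }

    static void
    foo_tq_setup(void *sc)
    {
        foo_tq = taskqueue_create("foo_tq", M_WAITOK,
            taskqueue_thread_enqueue, &foo_tq);
        taskqueue_set_callback(foo_tq, TASKQUEUE_CALLBACK_TYPE_INIT,
            foo_tq_init_cb, sc);
        taskqueue_set_callback(foo_tq, TASKQUEUE_CALLBACK_TYPE_SHUTDOWN,
            foo_tq_shutdown_cb, sc);
        /* Callbacks are registered before the threads are started. */
        taskqueue_start_threads(&foo_tq, 4, PI_DISK, "foo taskq");
    }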
While I'm here:
- Add two new macros, TQ_ASSERT_LOCKED and TQ_ASSERT_UNLOCKED, and use them
in appropriate locations.
- Fix taskqueue.9 to mention taskqueue_start_threads(), which is a required
interface for all consumers of taskqueue(9).
Reviewed by: kib (all), eadler (taskqueue.9), brd (taskqueue.9)
Approved by: ken (mentor)
Sponsored by: Spectra Logic
MFC after: 1 month
Give special meaning to a negative ticks argument for
taskqueue_enqueue_timeout(): do not rearm the callout if it is
already armed and ticks is negative; otherwise rearm it to fire in
abs(ticks) ticks in the future.
The intended use is to call taskqueue_enqueue_timeout() for the given
timeout_task with the same negative ticks argument. As a result, the
task is scheduled to execute no later than abs(ticks) ticks in the
future, and subsequent enqueues are coalesced until the already
scheduled task has finished.
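For instance (sketch; foo_tq, the handler and the hz/10 interval are
illustrative):

    static struct taskqueue *foo_tq;    /* created elsewhere */
    static struct timeout_task foo_flush_tt;

    static void
    foo_flush(void *arg __unused, int pending __unused)
    {
        /* Process whatever work accumulated since the last run. */
    }

    static void
    foo_setup_flush(void)
    {
        TIMEOUT_TASK_INIT(foo_tq, &foo_flush_tt, 0, foo_flush, NULL);
    }

    static void
    foo_event(void)
    {
        /*
         * Run foo_flush() within hz/10 ticks at the latest; while a
         * run is already scheduled, repeated calls here are coalesced.
         */
        taskqueue_enqueue_timeout(foo_tq, &foo_flush_tt, -hz / 10);
    }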
Reviewed by: rwatson
Tested by: Markus Gebert <markus.gebert@hostpoint.ch>
MFC after: 2 weeks
If it overflows before the taskqueue can run, the task will be
re-added to the taskqueue and cause a loop in the task list.
Reported by: Arnaud Lacombe <lacombar@gmail.com>
Submitted by: Ryan Stone <rysto32@gmail.com>
Reviewed by: jhb
Approved by: re (kib)
MFC after: 1 day
Implement a delayed task execution extension to the taskqueue
mechanism. The caller may specify a timeout in ticks after which the
task will be scheduled.
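A sketch of the simple case, using the system taskqueue_thread queue
(the handler and the one-second delay are illustrative):

    static struct timeout_task foo_delayed_tt;

    static void
    foo_delayed_fn(void *arg __unused, int pending __unused)
    {
        /* Runs on the taskqueue roughly hz ticks after being enqueued. */
    }

    static void
    foo_attach_delayed(void)
    {
        TIMEOUT_TASK_INIT(taskqueue_thread, &foo_delayed_tt, 0,
            foo_delayed_fn, NULL);
    }

    static void
    foo_kick_delayed(void)
    {
        /* Schedule foo_delayed_fn() to run about one second from now. */
        taskqueue_enqueue_timeout(taskqueue_thread, &foo_delayed_tt, hz);
    }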
Sponsored by: The FreeBSD Foundation
Reviewed by: jeff, jhb
MFC after: 1 month
Drop the tq_name member of struct taskqueue: it was used write-only,
and besides it was just a pointer, so it could point to garbage in a
temporary buffer that is long gone.
This change shouldn't change KPI/KBI as struct taskqueue is private to
subr_taskqueue.c.
If we find a need for tq_name, it can be resurrected at any moment;
the taskqueue_create() interface is preserved for this purpose.
Suggested by: jhb
MFC after: 10 days
With multithreaded taskqueues, more than one task can be running
simultaneously.
Also make taskqueue_run(9) static to the file, since there are no
consumers in the base kernel and the function signature needs to change
with this fix.
Remove mention of taskqueue_run(9) and taskqueue_run_fast(9) from the
taskqueue(9) man page.
Reviewed by: jhb
Approved by: zml (mentor)
Pass strings to printf-like functions via %s instead of using them
directly as the format string.
Most of the cases looked harmless, but this is done for the sake of
correctness. In one case it even allowed an intermediate buffer to be
dropped.
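A typical instance of the change (illustrative):

    static void
    foo_report(const char *msg)
    {
        /* Was: printf(msg); the string itself was used as the format. */
        printf("%s", msg);
    }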
Found by: clang
MFC after: 2 weeks
ta_func may free the task structure, so no references to its members
are valid after the handler has been called. Using a per-queue member,
at the cost of waits that may be longer than strictly necessary, was
suggested by jhb.
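To illustrate the hazard (hypothetical driver fragment): a handler may
free the memory that embeds its own struct task, so the queue must not
touch the task once ta_func has returned.

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <sys/taskqueue.h>

    struct foo_job {
        struct task fj_task;
        /* per-job data */
    };

    static void
    foo_job_fn(void *arg, int pending __unused)
    {
        struct foo_job *fj = arg;

        /* ... do the work ... */
        free(fj, M_DEVBUF);     /* fj_task no longer exists after this */
    }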
Submitted by: Matthew Fleming <matthew.fleming@isilon.com>
Reviewed by: zml, jhb
Before this change, taskqueue_drain(9) could not correctly detect
whether a task is currently running: the check was against a single
field in the taskqueue struct, but for a threaded queue with more
than one thread, multiple threads can simultaneously be running
tasks, thus stomping over the tq_running field.
Submitted by: Matthew Fleming <matthew.fleming@isilon.com>
Reviewed by: jhb
Approved by: dfr (mentor)
- The td_name[] arrays are actually MAXCOMLEN + 1 in size, and a few
  places that created shadow copies of these arrays were just using
  MAXCOMLEN.
- Prefer using sizeof() of an array type to explicit constants for the
array length in a few places.
- Ensure that all of p_comm[] and td_name[] are always zeroed during
  execve() to guard against any possible information leaks. Previously,
trailing garbage in p_comm[] could be leaked to userland in ktrace
record headers via td_name[].
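For example (sketch), a shadow copy of a thread name would be sized
and copied like this:

    static void
    foo_log_thread_name(struct thread *td)
    {
        char comm[MAXCOMLEN + 1];

        /* sizeof(comm) is MAXCOMLEN + 1, the same as td_name[]. */
        strlcpy(comm, td->td_name, sizeof(comm));
        printf("thread: %s\n", comm);
    }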
Reviewed by: bde
replace it with wrappers around our taskqueue(9).
To make this possible, implement a taskqueue_member() function, which
returns 1 if the given thread was created by the given taskqueue.
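Usage sketch (foo_tq is a hypothetical queue):

    static struct taskqueue *foo_tq;    /* created elsewhere */

    static void
    foo_assert_on_queue(void)
    {
        KASSERT(taskqueue_member(foo_tq, curthread) != 0,
            ("%s: not running on a foo_tq thread", __func__));
    }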
Approved by: re (kib)
Add taskqueue_block() and taskqueue_unblock(). These functions allow
the owner of a queue to block and unblock execution of the tasks in
the queue while allowing tasks to continue to be added to the queue.
Combining this
with taskqueue_drain() allows a queue to be safely disabled. The unblock
function may run (or schedule to run) the queue when it is called, just as
calling taskqueue_enqueue() would.
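The disable pattern above, as a sketch (foo_tq and foo_task are
hypothetical):

    static struct taskqueue *foo_tq;    /* created elsewhere */
    static struct task foo_task;        /* initialized elsewhere */

    static void
    foo_quiesce_and_resume(void)
    {
        taskqueue_block(foo_tq);             /* queue still accepts tasks */
        taskqueue_drain(foo_tq, &foo_task);  /* wait out the running task */

        /* foo_task is guaranteed not to be executing here. */

        taskqueue_unblock(foo_tq);           /* may run the queue at once */
    }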
Reviewed by: jhb, sam
Rename the kthread_xxx calls to kproc_xxx, as they actually make
whole processes.
This makes way for us to add REAL kthread_create() and friends that
actually make threads. It turns out that most of these calls actually
end up being moved back to the thread version when it's added, but we
need to make this cosmetic change first.
I'd LOVE to do this rename in 7.0 so that we can eventually MFC the
new kthread_xxx() calls.
- Use thread_lock() rather than sched_lock for per-thread scheduling
synchronization.
- Use the per-process spinlock rather than the sched_lock for per-process
scheduling synchronization.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
- Remove setrunqueue() and replace it with direct calls to
  sched_add(); setrunqueue() was mostly empty. The few asserts and
  thread state setting were moved to the individual schedulers.
  sched_add() was chosen to displace it for naming consistency
  reasons.
- Remove adjustrunqueue, it was 4 lines of code that was ifdef'd to be
different on all three schedulers where it was only called in one place
each.
- Remove the long ifdef'd out remrunqueue code.
- Remove the now redundant ts_state. Inspect the thread state directly.
- Don't set TSF_* flags from kern_switch.c; we were only doing this
  to support a feature in one scheduler.
- Change sched_choose() to return a thread rather than a td_sched. Also,
rely on the schedulers to return the idlethread. This simplifies the
logic in choosethread(). Aside from the run queue links kern_switch.c
mostly does not care about the contents of td_sched.
Discussed with: julian
- Move the idle thread loop into the per-scheduler area. ULE wants
  to do something different from the other schedulers.
Suggested by: jhb
Tested on: x86/amd64 sched_{4BSD, ULE, CORE}.