thread: Update main threading documentation

Change-Id: I47b69efb0e3794bfc6150ae0c8457c637233fe28
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/470521
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Author: Ben Walker <benjamin.walker@intel.com>, 2019-10-04 14:02:20 -07:00 (committed by Jim Harris)
Parent: 9a650e34ad
Commit: 00d692d0df

@ -3,60 +3,58 @@

# Theory

One of the primary aims of SPDK is to scale linearly with the addition of
hardware. This can mean many things in practice. For instance, moving from one
SSD to two should double the number of I/O's per second. Or doubling the number
of CPU cores should double the amount of computation possible. Or even doubling
the number of NICs should double the network throughput. To achieve this, the
software's threads of execution must be independent from one another as much as
possible. In practice, that means avoiding software locks and even atomic
instructions.

Traditionally, software achieves concurrency by placing some shared data onto
the heap, protecting it with a lock, and then having all threads of execution
acquire the lock only when accessing the data. This model has many great
properties:

* It's easy to convert single-threaded programs to multi-threaded programs
  because you don't have to change the data model from the single-threaded
  version. You add a lock around the data.
* You can write your program as a synchronous, imperative list of statements
  that you read from top to bottom.
* The scheduler can interrupt threads, allowing for efficient time-sharing
  of CPU resources.

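As an illustration only (this is not SPDK code), the traditional model looks
something like the following in C with a pthread mutex:

```c
#include <pthread.h>
#include <stdint.h>

/* Illustrative only: shared state lives on the heap and every thread
 * takes the lock before touching it. */
struct shared_counter {
	pthread_mutex_t lock;
	uint64_t        value;
};

static void
counter_increment(struct shared_counter *c)
{
	/* Every thread serializes here; contention grows with thread count. */
	pthread_mutex_lock(&c->lock);
	c->value++;
	pthread_mutex_unlock(&c->lock);
}
```
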
Unfortunately, as the number of threads scales up, contention on the lock around
the shared data does too. More granular locking helps, but then also increases
the complexity of the program. Even then, beyond a certain number of contended
locks, threads will spend most of their time attempting to acquire the locks and
the program will not benefit from more CPU cores.

SPDK takes a different approach altogether. Instead of placing shared data in a
global location that all threads access after acquiring a lock, SPDK will often
assign that data to a single thread. When other threads want to access the data,
they pass a message to the owning thread to perform the operation on their
behalf. This strategy, of course, is not at all new. For instance, it is one of
the core design principles of
[Erlang](http://erlang.org/download/armstrong_thesis_2003.pdf) and is the main
concurrency mechanism in [Go](https://tour.golang.org/concurrency/2). A message
in SPDK consists of a function pointer and a pointer to some context. Messages
are passed between threads using a
[lockless ring](http://dpdk.org/doc/guides/prog_guide/ring_lib.html). Message
passing is often much faster than most software developers' intuition leads them
to believe due to caching effects. If a single core is accessing the same data
(on behalf of all of the other cores), then that data is far more likely to be
in a cache closer to that core. It's often most efficient to have each core work
on a small set of data sitting in its local cache and then hand off a small
message to the next core when done.

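As a rough sketch of that shape, a message is just a function pointer plus a
context pointer. The `lockless_ring` type and the `ring_enqueue`/`ring_dequeue`
helpers below are hypothetical stand-ins (not real SPDK or DPDK calls) for a
lockless ring such as DPDK's rte_ring:

```c
#include <stdlib.h>

typedef void (*msg_fn)(void *ctx);

/* A message: "run this function with this context on the owning core". */
struct msg {
	msg_fn  fn;
	void   *ctx;
};

struct lockless_ring;                                       /* hypothetical */
void ring_enqueue(struct lockless_ring *r, struct msg *m);  /* hypothetical */
struct msg *ring_dequeue(struct lockless_ring *r);          /* hypothetical */

/* Sender: ask the core that owns the data to run fn(ctx) on our behalf. */
static void
send_msg(struct lockless_ring *ring, msg_fn fn, void *ctx)
{
	struct msg *m = malloc(sizeof(*m));

	m->fn = fn;
	m->ctx = ctx;
	ring_enqueue(ring, m);
}

/* Receiver: the owning core drains its ring from its event loop. */
static void
poll_msgs(struct lockless_ring *ring)
{
	struct msg *m;

	while ((m = ring_dequeue(ring)) != NULL) {
		m->fn(m->ctx);
		free(m);
	}
}
```

In practice, implementations typically pre-allocate messages from a pool rather
than calling malloc on every send, keeping the hot path allocation-free.
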
In more extreme cases where even message passing may be too costly, each thread
may make a local copy of the data. The thread will then only reference its local
copy. To mutate the data, threads will send a message to each other thread
telling them to perform the update on their local copy. This is great when the
data isn't mutated very often, but is read very frequently, and is often
employed in the I/O path. This of course trades memory size for computational
efficiency, so it is used in only the most critical code paths.

# Message Passing Infrastructure

@ -68,48 +66,65 @@

their documentation (e.g. @ref nvme). Most libraries, however, depend on SPDK's
abstraction, located in `libspdk_thread.a`. The thread abstraction provides a
basic message passing framework and defines a few key primitives.

First, `spdk_thread` is an abstraction for a lightweight, stackless thread of
execution. A lower level framework can execute an `spdk_thread` for a single
timeslice by calling `spdk_thread_poll()`. A lower level framework is allowed to
move an `spdk_thread` between system threads at any time, as long as there is
only a single system thread executing `spdk_thread_poll()` on that
`spdk_thread` at any given time. New lightweight threads may be created at any
time by calling `spdk_thread_create()` and destroyed by calling
`spdk_thread_destroy()`. The lightweight thread is the foundational abstraction
for threading in SPDK.

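For concreteness, here is a minimal sketch of a lower level framework driving a
single lightweight thread. It is illustrative rather than production code:
error handling is simplified, shutdown is omitted, and the exact signatures
(e.g. the `spdk_thread_poll()` arguments) may differ between SPDK releases.

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/thread.h"

int
main(int argc, char **argv)
{
	struct spdk_env_opts env_opts;
	struct spdk_thread *thread;

	/* Bring up the SPDK environment (hugepages, memory pools, etc.). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "poll_example";
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Initialize the thread library; 0 = no extra per-thread context. */
	spdk_thread_lib_init(NULL, 0);

	/* Create one lightweight thread. NULL = no CPU affinity hint. */
	thread = spdk_thread_create("lw_thread", NULL);

	/* The event loop: give the lightweight thread one timeslice per
	 * iteration. A real framework would interleave many spdk_threads
	 * here, possibly migrating them between system threads. */
	for (;;) {
		spdk_thread_poll(thread, 0, 0);
	}

	return 0;
}
```
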
There are then a few additional abstractions layered on top of the
`spdk_thread`. One is the `spdk_poller`, which is an abstraction for a
function that should be repeatedly called on the given thread. Another is an
`spdk_msg_fn`, which is a function pointer and a context pointer, that can
be sent to a thread for execution via `spdk_thread_send_msg()`.

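A short sketch of both primitives, assumed to run from within an `spdk_thread`
that some framework is polling (the poller function and message below are
made-up examples, not part of the SPDK API):

```c
#include "spdk/stdinc.h"
#include "spdk/thread.h"

static int
heartbeat_poller(void *arg)
{
	/* Called repeatedly on the owning thread; a real poller would do
	 * work such as checking a hardware completion queue. The return
	 * value reports whether any work was done (0 = idle). */
	return 0;
}

static void
say_hello(void *ctx)
{
	/* Runs later, on the thread the message was sent to. */
	printf("hello from %s\n", spdk_thread_get_name(spdk_get_thread()));
}

static void
setup(struct spdk_thread *target)
{
	struct spdk_poller *poller;

	/* Run heartbeat_poller on the current thread every 1000 microseconds. */
	poller = spdk_poller_register(heartbeat_poller, NULL, 1000);

	/* Ask `target` to execute say_hello(NULL) on our behalf. */
	spdk_thread_send_msg(target, say_hello, NULL);

	/* Later: spdk_poller_unregister(&poller); */
	(void)poller;
}
```
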
The library also defines two additional abstractions: `spdk_io_device` and
`spdk_io_channel`. In the course of implementing SPDK we noticed the same
pattern emerging in a number of different libraries. In order to implement a
message passing strategy, the code would describe some object with global state
and also some per-thread context associated with that object that was accessed
in the I/O path to avoid locking on the global state. The pattern was clearest
in the lowest layers where I/O was being submitted to block devices. These
devices often expose multiple queues that can be assigned to threads and then
accessed without a lock to submit I/O. To abstract that, we generalized the
device to `spdk_io_device` and the thread-specific queue to `spdk_io_channel`.
Over time, however, the pattern has appeared in a huge number of places that
don't fit quite so nicely with the names we originally chose. In today's code
`spdk_io_device` is any pointer, whose uniqueness is predicated only on its
memory address, and `spdk_io_channel` is the per-thread context associated with
a particular `spdk_io_device`.

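A sketch of the pattern, with a hypothetical `my_device` and its per-thread
channel context (error handling and `spdk_io_device_unregister()` are omitted
for brevity):

```c
#include "spdk/stdinc.h"
#include "spdk/thread.h"

struct my_device {            /* global state; any unique pointer works */
	int id;
};

struct my_channel_ctx {       /* per-thread context, accessed without locks */
	uint64_t submitted_ios;
};

static int
my_channel_create(void *io_device, void *ctx_buf)
{
	struct my_channel_ctx *ctx = ctx_buf;

	ctx->submitted_ios = 0;
	return 0;
}

static void
my_channel_destroy(void *io_device, void *ctx_buf)
{
}

static void
example(struct my_device *dev)
{
	struct spdk_io_channel *ch;
	struct my_channel_ctx *ctx;

	/* Register the device; its memory address is its identity. */
	spdk_io_device_register(dev, my_channel_create, my_channel_destroy,
				sizeof(struct my_channel_ctx), "my_device");

	/* On each spdk_thread doing I/O, get (or create) that thread's
	 * channel and use its context with no locking. */
	ch = spdk_get_io_channel(dev);
	ctx = spdk_io_channel_get_ctx(ch);
	ctx->submitted_ios++;

	spdk_put_io_channel(ch);
}
```
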
The threading abstraction provides functions to send a message to any other
thread, to send a message to all threads one by one, and to send a message to
all threads for which there is an io_channel for a given io_device.

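For instance, a sketch of the broadcast-to-all-channels case, reusing the
hypothetical `my_device`/`my_channel_ctx` types from the previous sketch to
reset a per-thread counter without any locks:

```c
#include "spdk/thread.h"

static void
reset_one_channel(struct spdk_io_channel_iter *i)
{
	struct spdk_io_channel *ch = spdk_io_channel_iter_get_channel(i);
	struct my_channel_ctx *ctx = spdk_io_channel_get_ctx(ch);

	/* Runs on the thread that owns this channel, so no lock is needed. */
	ctx->submitted_ios = 0;
	spdk_for_each_channel_continue(i, 0);
}

static void
reset_done(struct spdk_io_channel_iter *i, int status)
{
	/* Runs once, after every channel has been visited. */
}

static void
reset_all_counters(struct my_device *dev)
{
	/* Visit each thread holding an io_channel for dev, one at a time. */
	spdk_for_each_channel(dev, reset_one_channel, NULL, reset_done);
}
```
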
Most critically, the thread abstraction does not actually spawn any system level
threads of its own. Instead, it relies on the existence of some lower level
framework that spawns system threads and sets up event loops. Inside those event
loops, the threading abstraction simply requires the lower level framework to
repeatedly call `spdk_thread_poll()` on each `spdk_thread` that exists. This
makes SPDK very portable to a wide variety of asynchronous, event-based
frameworks such as [Seastar](https://www.seastar.io) or [libuv](https://libuv.org/).

# The event Framework

The SPDK project didn't want to officially pick an asynchronous, event-based
framework for all of the example applications it shipped with, in the interest
of supporting the widest variety of frameworks possible. But the applications do
of course require something that implements an asynchronous event loop in order
to run, so enter the `event` framework located in `lib/event`. This framework
includes things like spawning one thread per core, pinning each thread to a
unique core, polling and scheduling the lightweight threads, installing signal
handlers to cleanly shut down, and basic command line option parsing. When
started through `spdk_app_start()`, the library automatically spawns all of the
threads requested, pins them, and is ready for lightweight threads to be
created. This makes it much easier to implement a brand new SPDK application and
is the recommended method for those starting out. Only established applications
should consider directly integrating the lower level libraries.

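A minimal event framework application might look like the sketch below. Note
that `spdk_app_start()`'s exact signature has varied across SPDK releases, so
treat this as illustrative:

```c
#include "spdk/stdinc.h"
#include "spdk/event.h"

static void
start_fn(void *ctx)
{
	/* Called on the first lightweight thread once the framework is up;
	 * create pollers, io_devices, more threads, etc. from here. */
	printf("app started\n");
	spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	spdk_app_opts_init(&opts);
	opts.name = "hello_app";

	/* Spawns one pinned thread per core and runs the event loops
	 * until spdk_app_stop() is called. */
	rc = spdk_app_start(&opts, start_fn, NULL);

	spdk_app_fini();
	return rc;
}
```
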
# Limitations of the C Language