The performance of the bucket search logic is one of the main factors influencing the performance of the key lookup operation.
The data structures and algorithm are designed to make the best use of Intel CPU architecture resources like:
cache memory space, cache memory bandwidth, external memory bandwidth, multiple execution units working in parallel,
out-of-order instruction execution, special CPU instructions, etc.
The bucket search logic handles multiple input packets in parallel.
It is built as a pipeline of several stages (3 or 4), with each pipeline stage handling two different packets from the burst of input packets.
On each pipeline iteration, the packets are pushed to the next pipeline stage: for the 4-stage pipeline,
two packets (that just completed stage 3) exit the pipeline,
two packets (that just completed stage 2) are now executing stage 3, two packets (that just completed stage 1) are now executing stage 2,
two packets (that just completed stage 0) are now executing stage 1 and two packets (next two packets to read from the burst of input packets)
are entering the pipeline to execute stage 0.
The pipeline iterations continue until all packets from the burst of input packets execute the last stage of the pipeline.
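To make the stage rotation concrete, below is a minimal C sketch of the steady state of the 4-stage pipeline. It is not the actual library code: the stage functions are hypothetical placeholders for the real bucket search stages, and the pipeline fill and drain phases are omitted.

.. code-block:: c

    #include <stdint.h>

    /* Hypothetical stage functions standing in for the real bucket search
     * stages; each stage only touches data prefetched by the previous stage
     * and ends by prefetching what the next stage needs. */
    static void stage0(uint32_t a, uint32_t b) { (void)a; (void)b; }
    static void stage1(uint32_t a, uint32_t b) { (void)a; (void)b; }
    static void stage2(uint32_t a, uint32_t b) { (void)a; (void)b; }
    static void stage3(uint32_t a, uint32_t b) { (void)a; (void)b; }

    /* Steady-state iterations: on each iteration two new packets enter
     * stage 0 while the three older packet pairs each advance one stage.
     * Assumes n_pkts is even and >= 8; fill/drain handling is omitted. */
    static void
    bucket_search_burst(uint32_t n_pkts)
    {
        uint32_t p;

        for (p = 6; p + 1 < n_pkts; p += 2) {
            stage3(p - 6, p - 5); /* oldest pair exits the pipeline */
            stage2(p - 4, p - 3);
            stage1(p - 2, p - 1);
            stage0(p, p + 1);     /* newest pair enters the pipeline */
        }
    }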
The bucket search logic is broken into pipeline stages at the boundary of the next memory access.
Each pipeline stage uses data structures that are stored (with high probability) in the L1 or L2 cache memory of the current CPU core and
breaks just before the next memory access required by the algorithm.
The current pipeline stage finalizes by prefetching the data structures required by the next pipeline stage,
so given enough time for the prefetch to complete,
when the next pipeline stage eventually gets executed for the same packets,
it will read the data structures it needs from L1 or L2 cache memory and thus avoid the significant penalty incurred by an L2 or L3 cache memory miss.
By prefetching the data structures required by the next pipeline stage in advance (before they are used)
and switching to executing another pipeline stage for different packets,
the number of L2 or L3 cache memory misses is greatly reduced, which is one of the main reasons for the improved performance.
This matters because the cost of an L2/L3 cache miss on a memory read access is high: due to data dependencies between instructions,
the CPU execution units usually have to stall until the read operation is completed from the L3 cache memory or external DRAM memory.
By using prefetch instructions, the latency of memory read accesses is hidden,
provided that the prefetch is performed early enough before the respective data structure is actually used.
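In DPDK, the prefetch hint itself is typically issued with ``rte_prefetch0()`` from rte_prefetch.h, which requests that a cache line be brought into all cache levels. The bucket structure below is a hypothetical stand-in used only to illustrate the end-of-stage pattern:

.. code-block:: c

    #include <stdint.h>
    #include <rte_prefetch.h>

    /* Hypothetical bucket layout, for illustration only. */
    struct bucket {
        uint64_t sig[4];
        void *next;
    };

    /* End of stage N for one packet: issue the prefetch for the bucket
     * that stage N+1 will read. By the time stage N+1 runs for this
     * packet, the stages executed for the other packets have given the
     * prefetch enough time to complete. */
    static inline void
    stage_end(struct bucket *bkt)
    {
        rte_prefetch0(bkt);
    }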
By splitting the processing into several stages that are executed on different packets (the packets from the input burst are interleaved),
enough work is created to allow the prefetch instructions to complete successfully (before the prefetched data structures are actually accessed) and
also the data dependency between instructions is loosened.
For example, for the 4-stage pipeline, stage 0 is executed on packets 0 and 1 and then,
before the same packets 0 and 1 are used again (i.e. before stage 1 is executed on packets 0 and 1),
different packets are used: packets 2 and 3 (executing stage 1), packets 4 and 5 (executing stage 2) and packets 6 and 7 (executing stage 3).
By executing useful work while the data structures are brought into the L1 or L2 cache memory, the latency of the memory read accesses is hidden.
By increasing the gap between two consecutive accesses to the same data structure, the data dependency between instructions is loosened;
this allows making the best use of the super-scalar and out-of-order execution CPU architecture,
as the number of CPU core execution units that are active (rather than idle or stalled due to data dependency constraints between instructions) is maximized.
The bucket search logic is also implemented without using any branch instructions.
This avoids the significant cost associated with flushing the CPU core execution pipeline on every instance of branch misprediction.
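As an illustration of the branch-free style (this is a generic sketch, not code from the library), a lookup result can be selected with masks instead of an ``if`` statement:

.. code-block:: c

    #include <stdint.h>

    /* Branch-free select: returns hit_value when match is 1 and miss_value
     * when match is 0. The mask is either all-ones or all-zeros, so the
     * compiler emits no branch and no misprediction penalty is possible. */
    static inline uint64_t
    select_result(uint64_t match, uint64_t hit_value, uint64_t miss_value)
    {
        uint64_t mask = (uint64_t)0 - match; /* 1 -> 0xFF..F, 0 -> 0x0 */

        return (hit_value & mask) | (miss_value & ~mask);
    }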
:numref:`figure_figure34`, :numref:`table_qos_25` and :numref:`table_qos_26` detail the main data structures used to implement configurable key size hash tables (either LRU or extendable bucket).
As displayed in :numref:`table_qos_29`, the lookup tables for *match* and *match_many* can be collapsed into a single 32-bit value and the lookup table for
*match_pos* can be collapsed into a single 64-bit value that can be easily stored and used within the code.
:numref:`figure_figure37`, :numref:`figure_figure38`, :numref:`table_qos_30` and :numref:`table_qos_31` detail the main data structures used to implement 8-byte and 16-byte key hash tables (either LRU or extendable bucket).
Once the pipelined version of the bucket search algorithm has been executed for all the packets in the burst of input packets,
the non-optimized implementation of the bucket search algorithm is also executed for any packets that did not produce a lookup hit
but have their bucket in the extended state.
As a result of executing the non-optimized version, some of these packets may produce a lookup hit or a lookup miss.
This does not impact the performance of the key lookup operation,
as the probability of having the bucket in the extended state is relatively small.
Pipeline Library Design
-----------------------
A pipeline is defined by:
#. The set of input ports;
#. The set of output ports;
#. The set of tables;
#. The set of actions.
The input ports are connected with the output ports through tree-like topologies of interconnected tables.
The table entries contain the actions defining the operations to be executed on the input packets and the packet flow within the pipeline.
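A minimal sketch of pipeline creation with librte_pipeline is shown below; it assumes the ``struct rte_pipeline_params`` fields and the ``rte_pipeline_create()`` prototype from rte_pipeline.h, and omits everything except the NULL check:

.. code-block:: c

    #include <stddef.h>
    #include <rte_pipeline.h>

    static struct rte_pipeline *
    create_example_pipeline(void)
    {
        struct rte_pipeline_params params = {
            .name = "example_pipeline",
            .socket_id = 0, /* NUMA node hosting the pipeline memory */
        };
        struct rte_pipeline *p = rte_pipeline_create(&params);

        if (p == NULL)
            return NULL;

        /* The input ports, output ports and tables are created next and
         * only then connected (see Connectivity of Ports and Tables). */
        return p;
    }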
Connectivity of Ports and Tables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To avoid any dependencies on the order in which pipeline elements are created,
the connectivity of pipeline elements is defined after all the pipeline input ports,
output ports and tables have been created.
General connectivity rules (a minimal sketch of the corresponding connectivity API call follows this list):

#. Each input port is connected to a single table. No input port should be left unconnected;

#. The table connectivity to other tables or to output ports is regulated by the next hop actions of each table entry and the default table entry.
   The table connectivity is fluid, as the table entries and the default table entry can be updated during run-time.

   * A table can have multiple entries (including the default entry) connected to the same output port.
     A table can have different entries connected to different output ports.
     Different tables can have entries (including the default table entry) connected to the same output port.

   * A table can have multiple entries (including the default entry) connected to another table,
     in which case all these entries have to point to the same table.
     This constraint is enforced by the API and prevents tree-like topologies from being created (allowing table chaining only),
     with the purpose of simplifying the implementation of the pipeline run-time execution engine.
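The sketch referenced above, assuming the ``rte_pipeline_port_in_connect_to_table()`` prototype from rte_pipeline.h and port/table IDs already returned by the creation calls:

.. code-block:: c

    #include <rte_pipeline.h>

    /* Wire one input port to its first table. Table-to-table and
     * table-to-output-port connectivity is then expressed through next hop
     * actions (e.g. RTE_PIPELINE_ACTION_TABLE, RTE_PIPELINE_ACTION_PORT)
     * stored in the table entries and the default table entry. */
    static int
    connect_port_to_table(struct rte_pipeline *p, uint32_t port_in_id,
        uint32_t table_id)
    {
        return rte_pipeline_port_in_connect_to_table(p, port_in_id, table_id);
    }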
Port Actions
~~~~~~~~~~~~
Port Action Handler
^^^^^^^^^^^^^^^^^^^
An action handler can be assigned to each input/output port to define actions to be executed on each input packet that is received by the port.
Defining the action handler for a specific input/output port is optional (i.e. the action handler can be disabled).
For input ports, the action handler is executed after the RX function. For output ports, the action handler is executed before the TX function.
The action handler can decide to drop packets.
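A sketch of an input port action handler follows; it assumes the ``rte_pipeline_port_in_action_handler`` prototype and the ``rte_pipeline_ah_packet_drop()`` helper from rte_pipeline.h, and the 64-byte minimum length is an arbitrary example policy:

.. code-block:: c

    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_pipeline.h>

    /* Executed right after the port RX function: flag and drop any packet
     * shorter than 64 bytes. */
    static int
    port_in_ah(struct rte_pipeline *p, struct rte_mbuf **pkts,
        uint32_t n_pkts, void *arg)
    {
        uint64_t drop_mask = 0;
        uint32_t i;

        (void)arg;

        for (i = 0; i < n_pkts; i++)
            if (rte_pktmbuf_pkt_len(pkts[i]) < 64)
                drop_mask |= 1LLU << i;

        /* Ask the Packet Framework to drop the flagged packets. */
        return rte_pipeline_ah_packet_drop(p, drop_mask);
    }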
Table Actions
~~~~~~~~~~~~~
Table Action Handler
^^^^^^^^^^^^^^^^^^^^
An action handler to be executed on each input packet can be assigned to each table.
Defining the action handler for a specific table is optional (i.e. the action handler can be disabled).
The action handler is executed after the table lookup operation is performed and the table entry associated with each input packet is identified.
The action handler can only handle the user-defined actions, while the reserved actions (e.g. the next hop actions) are handled by the Packet Framework.
The action handler can decide to drop the input packet.
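In the same spirit, a table action handler that updates per-entry action meta-data might look as follows. It assumes the ``rte_pipeline_table_action_handler_hit`` prototype from rte_pipeline.h, and it assumes (for illustration) that a user-defined hit counter is stored at the start of the ``action_data[]`` area of each table entry:

.. code-block:: c

    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_pipeline.h>

    /* Executed after the lookup: increment the counter of every entry that
     * was hit by a packet from the current burst. */
    static int
    table_ah_hit(struct rte_pipeline *p, struct rte_mbuf **pkts,
        uint64_t pkts_mask, struct rte_pipeline_table_entry **entries,
        void *arg)
    {
        uint32_t i;

        (void)p; (void)pkts; (void)arg;

        for (i = 0; pkts_mask != 0; i++, pkts_mask >>= 1)
            if (pkts_mask & 1) {
                /* Assumption: a uint64_t hit counter is the first field
                 * of the user action meta-data of this table. */
                uint64_t *hit_count = (uint64_t *)&entries[i]->action_data[0];
                (*hit_count)++;
            }

        return 0;
    }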
Reserved Actions
^^^^^^^^^^^^^^^^
The reserved actions are handled directly by the Packet Framework without the user being able to change their meaning
through the table action handler configuration.
A special category of the reserved actions is represented by the next hop actions, which regulate the packet flow between input ports, tables and output ports.
Multicore Scaling
~~~~~~~~~~~~~~~~~

A complex application is typically split across multiple cores, with cores communicating through SW queues.
There is usually a performance limit on the number of table lookups
and actions that can be fitted on the same CPU core due to HW constraints like:
available CPU cycles, cache memory size, cache transfer BW, memory transfer BW, etc.
As the application is split across multiple CPU cores, the Packet Framework facilitates the creation of several pipelines,
the assignment of each such pipeline to a different CPU core
and the interconnection of all CPU core-level pipelines into a single application-level complex pipeline.
For example, if CPU core A is assigned to run pipeline P1 and CPU core B pipeline P2,
then the interconnection of P1 with P2 could be achieved by having the same set of SW queues act like output ports
for P1 and input ports for P2.
This approach enables the application development using the pipeline, run-to-completion (clustered) or hybrid (mixed) models.
It is allowed for the same core to run several pipelines, but it is not allowed for several cores to run the same pipeline.
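A sketch of such an interconnect is shown below, assuming the ring ports from librte_port (rte_port_ring.h): the same ``rte_ring`` is registered as an output port of P1 (running on core A) and as an input port of P2 (running on core B). The burst sizes are arbitrary and error handling is reduced to the return codes:

.. code-block:: c

    #include <rte_ring.h>
    #include <rte_port_ring.h>
    #include <rte_pipeline.h>

    /* One SW queue connecting pipeline p1 (core A) to pipeline p2 (core B).
     * The ring q is assumed to have been created with rte_ring_create(). */
    static int
    connect_pipelines(struct rte_pipeline *p1, struct rte_pipeline *p2,
        struct rte_ring *q, uint32_t *out_id, uint32_t *in_id)
    {
        struct rte_port_ring_writer_params wr = {
            .ring = q,
            .tx_burst_sz = 32,
        };
        struct rte_pipeline_port_out_params pout = {
            .ops = &rte_port_ring_writer_ops,
            .arg_create = &wr,
        };
        struct rte_port_ring_reader_params rd = { .ring = q };
        struct rte_pipeline_port_in_params pin = {
            .ops = &rte_port_ring_reader_ops,
            .arg_create = &rd,
            .burst_size = 32,
        };

        if (rte_pipeline_port_out_create(p1, &pout, out_id))
            return -1;
        return rte_pipeline_port_in_create(p2, &pin, in_id);
    }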
Shared Data Structures
~~~~~~~~~~~~~~~~~~~~~~
The threads performing table lookup are actually table writers rather than just readers.
Even if the specific table lookup algorithm is thread-safe for multiple readers
(e.g. read-only access to the search algorithm data structures is enough to conduct the lookup operation),
once the table entry for the current packet is identified, the thread is typically expected to update the action meta-data stored in the table entry
(e.g. increment the counter tracking the number of packets that hit this table entry), and thus modify the table entry.
During the time this thread is accessing this table entry (either writing or reading; duration is application specific),
for data consistency reasons, no other threads (threads performing table lookup or entry add/delete operations) are allowed to modify this table entry.
Mechanisms to share the same table between multiple threads:
#. **Multiple writer threads.**
   Threads need to use synchronization primitives like semaphores (distinct semaphore per table entry) or atomic instructions.
   The cost of semaphores is usually high, even when the semaphore is free.
   The cost of atomic instructions is normally higher than the cost of regular instructions.

#. **Multiple writer threads, with single thread performing table lookup operations and multiple threads performing table entry add/delete operations.**
   The threads performing table entry add/delete operations send table update requests to the lookup thread (typically through message passing queues),
   which performs the actual table updates and then sends the response back to the request initiator.
#. **Single writer thread performing table entry add/delete operations and multiple reader threads that perform table lookup operations with read-only access to the table entries.**
   The reader threads use the main table copy while the writer is updating the mirror copy.
   Once the writer update is done, the writer signals the readers and busy waits until all readers have swapped to the mirror copy (which now becomes the main copy);
   the previous main copy then becomes the new mirror copy. A generic sketch of this scheme is shown below.
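The double-buffer sketch below uses C11 atomics. It is not the Packet Framework implementation: ``table_copy_t``, the fixed reader count and the publication protocol are simplified illustrations, and a production version would also need to handle readers that are temporarily inactive:

.. code-block:: c

    #include <stdatomic.h>
    #include <stdint.h>

    #define N_READERS 4

    typedef struct { /* search data structures */ int dummy; } table_copy_t;

    static table_copy_t copies[2];
    static _Atomic uint32_t active;                 /* index of main copy  */
    static _Atomic uint32_t reader_copy[N_READERS]; /* copy used by reader */

    /* Reader: before each lookup burst, publish which copy is in use and
     * re-check that the writer did not flip copies in the meantime. */
    static const table_copy_t *
    reader_acquire(uint32_t reader_id)
    {
        uint32_t idx;

        do {
            idx = atomic_load(&active);
            atomic_store(&reader_copy[reader_id], idx);
        } while (idx != atomic_load(&active));

        return &copies[idx];
    }

    /* Writer: update the mirror copy, make it the new main copy, then busy
     * wait until no reader still works on the old main copy. Finally apply
     * the same update to the old copy, which becomes the new mirror. */
    static void
    writer_update(void (*apply)(table_copy_t *))
    {
        uint32_t mirror = 1 - atomic_load(&active);
        uint32_t i;

        apply(&copies[mirror]);         /* update the mirror copy       */
        atomic_store(&active, mirror);  /* mirror becomes the main copy */

        for (i = 0; i < N_READERS; i++) /* wait for all readers to swap */
            while (atomic_load(&reader_copy[i]) != mirror)
                ;

        apply(&copies[1 - mirror]);     /* old main is now the mirror   */
    }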
Interfacing with Accelerators
-----------------------------
The presence of accelerators is usually detected during the initialization phase by inspecting the HW devices that are part of the system (e.g. by PCI bus enumeration).
Typical devices with acceleration capabilities are:

* Inline accelerators: NICs, switches, FPGAs, etc.;
* Look-aside accelerators: chipsets, FPGAs, etc.
Usually, to support a specific functional block, a specific implementation of Packet Framework tables and/or ports and/or actions has to be provided for each accelerator,
with all the implementations sharing the same API: pure SW implementation (no acceleration), implementation using accelerator A, implementation using accelerator B, etc.
The selection between these implementations can be done at build time or at run-time (recommended), based on which accelerators are present in the system.
Software Switch (SWX) Pipeline
------------------------------

The Software Switch (SWX) pipeline is designed to combine the DPDK performance with the flexibility of the P4-16 language [1]. It can be used either by itself
to code a complete software switch or data plane application, or in combination with the open-source P4 compiler P4C [2], acting as a P4C back-end that allows
the P4 programs to be translated to the DPDK SWX API and run on multi-core CPUs.
The main features of the SWX pipeline are:
* Nothing is hard-wired, everything is dynamically defined: The packet headers (i.e. the network protocols), the packet meta-data, the actions, the tables
and the pipeline itself are dynamically defined instead of selected from a predefined set.
* Instructions: The actions and the life of the packet through the pipeline are defined with instructions that manipulate the pipeline objects mentioned
above. The pipeline is the main function of the packet program, with actions as subroutines triggered by the tables.
* Call external plugins: Extern objects and functions can be defined to call functionality that cannot be efficiently implemented with the existing
pipeline-oriented instruction set, such as: error detecting/correcting codes, cryptographic operations, meters, statistics counter arrays, heuristics, etc.
* Better control plane interaction: Transaction-oriented table update mechanism that supports multi-table atomic updates. Multiple tables can be updated in a
single step with only the before-update and the after-update table entries visible to the packets. Alignment with the P4Runtime [3] protocol.
* Performance: Multiple packets are in-flight within the pipeline at any moment. Each packet is owned by a different time-sharing thread in
run-to-completion, with the thread pausing before memory access operations such as packet I/O and table lookup to allow the memory prefetch to complete.
The instructions are verified and translated at initialization time with no run-time impact. The instructions are also optimized to detect and "fuse"
frequently used patterns into vector-like instructions transparently to the user.
The main SWX pipeline components are:
* Input and output ports: Each port instantiates a port type that defines the port operations, e.g. Ethernet device port, PCAP port, etc. The RX interface
of the input ports and the TX interface of the output ports are single packet based, with packet batching typically implemented internally by each port for
performance reasons.
* Structure types: Each structure type is used to define the logical layout of a memory block, such as: packet headers, packet meta-data, action data stored
in a table entry, mailboxes of extern objects and functions. Similar to C language structs, each structure type is a well-defined sequence of fields,
with each field having a unique name and a constant size (a C struct analogy is sketched after this list).
* Packet headers: Each packet typically has one or multiple headers. The headers are extracted from the input packet as part of the packet parsing operation,
which is likely executed immediately after the packet reception. As a result of the extract operation, each header is logically removed from the packet, so
once the packet parsing operation is completed, the input packet is reduced to opaque payload. Just before transmission, one or several headers are pushed
in front of each output packet through the emit operation; these headers can be part of the set of headers that were previously extracted from the input
packet (and potentially modified afterwards) or some new headers whose content is generated by the pipeline (e.g. by reading them from tables). The format
of each packet header is defined by instantiating a structure type.
* Packet meta-data: The packet meta-data is filled in by the pipeline (e.g. by reading it from tables) or computed by the pipeline. It is not sent out unless
some of the meta-data fields are explicitly written into the headers emitted into the output packet. The format of the packet meta-data is defined by
instantiating a structure type.
* Extern objects and functions: Used to plug into the pipeline any functionality that cannot be efficiently implemented with the existing pipeline instruction
set. Each extern object and extern function has its own mailbox, which is used to pass the input arguments to and retrieve the output arguments from the
extern object member functions or the extern function. The mailbox format is defined by instantiating a structure type.
* Instructions: The pipeline and its actions are defined with instructions from a predefined instruction set. The instructions are used to receive and
transmit the current packet, extract and emit headers from/into the packet, read/write the packet headers, packet meta-data and mailboxes, start table
lookup operations, read the action arguments from the table entry, call extern object member functions or extern functions. See the rte_swx_pipeline.h file
for the complete list of instructions.
* Actions: The pipeline actions are dynamically defined through instructions as opposed to predefined. Essentially, the actions are subroutines of the
pipeline program and their execution is triggered by the table lookup. The input arguments of each action are read from the table entry (in case of table
lookup hit) or the default table action (in case of table lookup miss) and are read-only; their format is defined by instantiating a structure type. The
actions have read-write access to the packet headers and meta-data.
* Tables: Each pipeline typically has one or more lookup tables. The match fields of each table are flexibly selected from the packet headers and meta-data
defined for the current pipeline. The set of table actions is flexibly selected for each table from the set of actions defined for the current pipeline. The
tables can be looked at as special pipeline operators that result in one of the table actions being called, depending on the result of the table lookup
operation.
* Pipeline: The pipeline represents the main program that defines the life of the packet, with subroutines (actions) executed on table lookup. As packets
go through the pipeline, the packet headers and meta-data are transformed along the way.
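As referenced in the structure types item above, the logical layout that a structure type describes corresponds directly to a C struct. The Ethernet header below is a standard layout shown purely for illustration; the SWX structure type itself is defined dynamically through the API rather than in C:

.. code-block:: c

    #include <stdint.h>

    /* The memory layout an SWX structure type would describe for the
     * Ethernet header: a sequence of named, constant-size fields. */
    struct ethernet_hdr {
        uint8_t  dst_addr[6]; /* destination MAC address */
        uint8_t  src_addr[6]; /* source MAC address      */
        uint16_t ether_type;  /* EtherType (big endian)  */
    };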