Replace the BSD license header with the SPDX tag for files
with only an Intel copyright on them.
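For reference, the replacement tag has this shape (license identifier and copyright line are per-file; the years here are placeholders):

/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright(c) 2010-2018 Intel Corporation
 */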
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Rework the variable size key extendible bucket (EXT) hash
table to use the mask-based hash function and the unified
parameter structure.
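As a rough sketch of what creation looks like after the rework (field values are placeholders; my_hash stands for any user-supplied mask-based hash function and is assumed to be defined elsewhere):

#include <stdint.h>
#include <rte_table_hash.h>

/* Assumed user-supplied mask-based hash function (see the prototype
 * sketched in the API commit below); definition not shown. */
extern uint64_t my_hash(void *key, void *key_mask, uint32_t key_size,
	uint64_t seed);

static void *create_ext_table(void)
{
	struct rte_table_hash_params params = {
		.name = "EXT",		/* placeholder values throughout */
		.key_size = 16,
		.key_offset = 0,	/* key location within packet meta-data */
		.key_mask = NULL,	/* NULL: all key bits are significant */
		.n_keys = 1 << 16,
		.n_buckets = 1 << 14,
		.f_hash = my_hash,	/* mask-based hash function */
		.seed = 0,
	};

	/* socket 0 and 4-byte entries are both arbitrary choices here */
	return rte_table_hash_ext_ops.f_create(&params, 0, sizeof(uint32_t));
}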
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Add a unified parameter structure for all hash tables in librte_table.
Add the mask-based hash function prototype, which is an input parameter
for all hash tables.
Rename the non-mask-based hash function prototype and all calls
to it (to be removed later).
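A sketch of the mask-based prototype as it appears in rte_table_hash.h (parameter names approximate):

/* Mask-based hash function: key_mask selects the key bits that take
 * part in the hash computation (tables treat a NULL mask as "all
 * bits relevant"). */
typedef uint64_t (*rte_table_hash_op_hash)(
	void *key,
	void *key_mask,
	uint32_t key_size,
	uint64_t seed);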
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
The non-dosig versions of the variable size key extendible bucket
hash tables are removed. The remaining hash tables are renamed to
drop the dosig token from their names.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
New function prototypes for bulk add/delete are added to the table
API. The new functions allow adding or deleting multiple records with
a single function call. For now, these functions are implemented only
for the ACL table; for the other tables, the function pointers are set
to NULL.
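A sketch of the new prototypes, following the style of the existing single-entry ops in rte_table.h (parameter names approximate):

/* Add n_keys entries in one call; key_found[i] and entries_ptr[i]
 * report the per-key outcome, mirroring the single-entry variant. */
typedef int (*rte_table_op_entry_add_bulk)(
	void *table,
	void **keys,
	void **entries,
	uint32_t n_keys,
	int *key_found,
	void **entries_ptr);

/* Delete n_keys entries in one call. */
typedef int (*rte_table_op_entry_delete_bulk)(
	void *table,
	void **keys,
	uint32_t n_keys,
	int *key_found,
	void **entries);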
Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Fix the RTE_MBUF_METADATA macros to allow unaligned accesses to
meta-data fields.
Forcing aligned accesses is not actually required, so this removes an
unneeded constraint.
The issue was encountered during testing of the new version of the
ip_pipeline application. There is no performance impact.
This change has no ABI impact: previous code that uses aligned
accesses continues to run without any issues.
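The shape of the fix, sketched below: index into the mbuf in bytes first, then cast to the field type, so an offset no longer has to be a multiple of the field size (see rte_port.h for the actual definitions; only the 32-bit accessor is shown):

#include <stdint.h>

/* Byte-granular base pointer: any byte offset is valid. */
#define RTE_MBUF_METADATA_UINT8_PTR(mbuf, offset)	\
	(&((uint8_t *)(mbuf))[offset])

/* Typed accessors are built on top of the byte pointer, without
 * rounding or dividing the offset. */
#define RTE_MBUF_METADATA_UINT32_PTR(mbuf, offset)	\
	((uint32_t *)RTE_MBUF_METADATA_UINT8_PTR(mbuf, offset))
#define RTE_MBUF_METADATA_UINT32(mbuf, offset)		\
	(*RTE_MBUF_METADATA_UINT32_PTR(mbuf, offset))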
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Include rte_memory.h in lib files that use the __rte_cache_aligned
attribute.
Consider the following code:
struct per_core_foo {
	...
} __rte_cache_aligned;

struct global_foo {
	struct per_core_foo foo[RTE_MAX_CORE];
};
If __rte_cache_aligned is not defined (because rte_memory.h is not
included), the code still compiles, but the structure is not aligned:
the declaration defines the structure and then creates a global
variable named __rte_cache_aligned. This can lead to very
hard-to-debug problems if the code sits in a header that is included
by files that may or may not include rte_memory.h.
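A minimal reproduction of the misparse (struct and field are hypothetical):

/* rte_memory.h NOT included here, so __rte_cache_aligned is just an
 * unknown identifier. The compiler reads this as a struct definition
 * followed by a global variable of that type named
 * __rte_cache_aligned: it compiles cleanly, but nothing is aligned. */
struct per_core_foo {
	int x;
} __rte_cache_aligned;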
Signed-off-by: Jia Yu <jyu@vmware.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
CACHE_LINE_SIZE is a macro defined in machine/param.h on FreeBSD and
conflicts with the DPDK macro of the same name.
Add the RTE_ prefix to avoid the conflict.
CACHE_LINE_MASK and CACHE_LINE_ROUNDUP are also prefixed.
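In code terms the rename is mechanical; a sketch (the rounded value depends on the platform's cache line size):

#include <stddef.h>
#include <rte_memory.h>

/* Old names (clash with FreeBSD's machine/param.h):
 *	CACHE_LINE_SIZE, CACHE_LINE_MASK, CACHE_LINE_ROUNDUP(size)
 * New names: */
size_t rounded = RTE_CACHE_LINE_ROUNDUP(100);	/* 128 with 64B lines */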
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
[Thomas: updated on HEAD, including PPC]
During initialization of rte_table_hash_ext and rte_table_hash_lru, a
contiguous region of memory is allocated to store meta data, buckets,
extended buckets, keys, stack of keys, stack of extended buckets and
data entries. The size of each region depends on the hash table
configuration.
The address of each region is calculated using offsets relative to the
beginning of the memory region. Without this patch, the offsets
contain the size of the table meta data (sizeof(struct
rte_table_hash)). These addresses are stored in pointers which are
used when entries are added or deleted and lookups are performed.
Instead of adding these offsets to the address of the beginning of the
memory region, they are added to the address of the end of the meta
data (= address of the beginning of the memory region + sizeof(struct
rte_table_hash)). The resulting addresses are off by sizeof(struct
rte_table_hash) bytes. As a consequence, memory past the allocated
region can be accessed by the add, delete and lookup operations.
This patch corrects the address calculation by not including the size
of the meta data in the offsets.
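A compilable sketch of the bug and the fix, with heavily simplified stand-in types (the real code carves out many more regions than just the buckets):

#include <stdint.h>
#include <stdlib.h>

struct bucket { uint64_t sig[4]; };	/* stand-in for the real bucket */

struct table {
	/* ... meta data fields ... */
	struct bucket *buckets;
	uint8_t memory[];	/* regions carved out after the meta data */
};

int main(void)
{
	size_t meta_sz = sizeof(struct table);
	size_t buckets_sz = 16 * sizeof(struct bucket);
	struct table *t = calloc(1, meta_sz + buckets_sz);

	/* Buggy: the offset still includes the meta data size, but it is
	 * applied relative to t->memory, which already starts after the
	 * meta data, so the pointer overshoots by sizeof(struct table)
	 * and later add/delete/lookup touch memory past the allocation. */
	t->buckets = (struct bucket *)&t->memory[meta_sz];

	/* Fixed: offsets are relative to t->memory, so the first region
	 * starts at offset 0. */
	t->buckets = (struct bucket *)&t->memory[0];

	free(t);
	return 0;
}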
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
During initialization of rte_table_hash_ext and rte_table_hash_lru,
t->data_size_shl is calculated. This member holds the number of
bits to shift left when calculating the location of an entry in
the hash table. To determine the number of bits to shift left, the
size of the entry (as provided to rte_table_hash_ext_create and
rte_table_hash_lru_create) has to be used instead of the size of the
key.
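The essence of the fix, sketched with simplified names (the real computation lives in the _create functions and pads the entry size to a power of 2 first):

#include <stdint.h>

/* data_size_shl turns an entry index into a byte offset:
 *	entry_i = &data_mem[i << data_size_shl];
 * so it must be derived from the entry size, not the key size. */
static uint32_t compute_data_size_shl(uint32_t entry_size_pow2)
{
	return (uint32_t)__builtin_ctz(entry_size_pow2);	/* log2 of entry size */
}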
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
When an entry is deleted from an extensible rte_table_hash, the bucket
that stored the entry can become empty. If this is the case, the
bucket needs to be removed from the chain of buckets.
During removal of the bucket, the chain should be updated first. If
the bucket that will be removed is cleared first, the chain is broken
and the information to update the chain is lost.
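Order of operations for the fix, sketched with assumed names:

#include <string.h>

struct bucket {
	struct bucket *next;
	/* signatures, keys, ... */
};

/* Unlink an emptied extended bucket from its chain. The predecessor
 * must be re-pointed BEFORE the bucket is cleared: clearing first
 * wipes bkt->next, which is exactly the information needed to keep
 * the chain intact. */
static void bucket_unlink(struct bucket *bkt_prev, struct bucket *bkt)
{
	bkt_prev->next = bkt->next;	/* 1. update the chain first */
	memset(bkt, 0, sizeof(*bkt));	/* 2. only then clear/recycle bkt */
}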
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Various types of hash tables are presented under the Packet Framework toolbox.
Hash table types:
1. Extendible bucket (ext): when a bucket is full, the bucket is extended with
more keys
2. Least Recently Used (LRU): when a bucket is full, the LRU entry is discarded
3. Pre-computed key signature: the RX core extracts the key n-tuple from the
packet, computes the key signature and saves the key and key signature
within the packet meta-data; the flow classification core performs the actual
lookup (the bucket search stage) after reading the key and key signature
from the packet meta-data
4. Signature computed on the fly (do-sig version): the same CPU core extracts
the key n-tuple from the packet, computes the key signature and performs the
table lookup
5. Configurable key size or optimized for a single key size (8-byte, 16-byte
and 32-byte key sizes)
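All of these plug into the Packet Framework through the common table ops vector, so choosing a table type means choosing its ops; a sketch with a few of the exported symbols (treat the exact list as approximate):

#include <rte_table_hash.h>

static struct rte_table_ops *const hash_ops_examples[] = {
	&rte_table_hash_ext_ops,		/* 1. extendible bucket */
	&rte_table_hash_lru_ops,		/* 2. LRU */
	&rte_table_hash_key8_lru_ops,		/* 3+5: pre-computed sig, 8-byte key */
	&rte_table_hash_key8_lru_dosig_ops,	/* 4+5: do-sig, 8-byte key */
};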
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Pablo de Lara Guarch <pablo.de.lara.guarch@intel.com>
Acked-by: Ivan Boule <ivan.boule@6wind.com>