Commit Graph

17 Commits

Bruce Richardson
369991d997 lib: use SPDX tag for Intel copyright files
Replace the BSD license header with the SPDX tag for files
with only an Intel copyright on them.
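
For illustration, the replacement header is a one-line tag plus the copyright notice (the exact years vary per file), roughly:

    /* SPDX-License-Identifier: BSD-3-Clause
     * Copyright(c) 2017 Intel Corporation
     */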

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2018-01-04 22:41:39 +01:00
Cristian Dumitrescu
4b08cd8927 table: update copyright
Removed incorrect whitespace and updated the year in copyright headers.

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2017-10-24 13:11:32 +02:00
Cristian Dumitrescu
6907931413 table: rework variable size key ext hash tables
Rework for the variable size key extendible bucket (EXT) hash
table to use the mask-based hash function and the unified
parameter structure.

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2017-10-24 13:08:08 +02:00
Cristian Dumitrescu
2dc5198907 table: add unified params struct and mask-based hash func
Add unified parameter structure for all hash tables in librte_table.

Add mask-based hash function prototype, which is input parameter for
all hash tables.

Rename the non-mask-based hash function prototype and all the calls
to it (to be removed later).
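
As a rough sketch of the two additions (treat the exact field list and
names as illustrative rather than a verbatim copy of the header), the
mask-based prototype takes the key mask alongside the key, and the
unified structure carries the parameters shared by all hash table
variants:

    #include <stdint.h>

    /* Mask-based hash function: the mask selects the key bytes that
     * participate in the hash computation. */
    typedef uint64_t (*rte_table_hash_op_hash)(
        void *key,
        void *key_mask,
        uint32_t key_size,
        uint64_t seed);

    /* Unified creation parameters used by all hash table variants. */
    struct rte_table_hash_params {
        const char *name;
        uint32_t key_size;
        uint32_t key_offset;
        uint8_t *key_mask;   /* NULL means all key bytes are relevant */
        uint32_t n_keys;
        uint32_t n_buckets;
        rte_table_hash_op_hash f_hash;
        uint64_t seed;
    };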

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2017-10-24 13:07:43 +02:00
Cristian Dumitrescu
14c1477ca6 table: remove deprecated variable size key ext hash tables
The non-dosig versions of the variable size key extendible bucket
hash tables are removed. The remaining hash tables are renamed to
drop the dosig suffix from their names.

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2017-10-24 13:06:42 +02:00
Aleksey Katargin
c1e07f036d table: fix stats update
Fixed a double update of the table statistics.

Signed-off-by: Aleksey Katargin <gureedo@gmail.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2017-04-21 01:29:21 +02:00
Stephen Hemminger
c5ba278876 lib: remove unnecessary void cast
Remove unnecessary casts of void * pointers to a specific type.
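
In C, a void * converts implicitly to any object pointer type, so such
casts add nothing; a minimal illustration (struct and function names
are hypothetical):

    struct ctx { int value; };

    static void
    handler(void *arg)
    {
        struct ctx *before = (struct ctx *) arg; /* redundant cast */
        struct ctx *after = arg;                 /* implicit conversion */

        (void) before;
        (void) after;
    }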

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2017-04-11 18:05:10 +02:00
Fan Zhang
e37d9d7655 table: improve lookup performance with prefetch offset
This patch modifies rte_prefetch offsets to improve hash/lru
table lookup performance.
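
A minimal sketch of the technique (loop shape and prefetch distance are
illustrative, not the patched code): while key i is being processed,
the key a fixed number of slots ahead is prefetched so it is already in
cache when its turn comes.

    #include <stdint.h>
    #include <rte_prefetch.h>

    #define LOOKUP_PREFETCH_OFFSET 2 /* illustrative prefetch distance */

    static void
    lookup_burst(uint8_t **keys, uint32_t n_keys)
    {
        uint32_t i;

        for (i = 0; i < n_keys; i++) {
            if (i + LOOKUP_PREFETCH_OFFSET < n_keys)
                rte_prefetch0(keys[i + LOOKUP_PREFETCH_OFFSET]);

            /* ... bucket search and key compare for keys[i] ... */
        }
    }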

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2015-11-26 00:45:50 +01:00
Marcin Kerlin
5217261c6a table: add bulk adding and deleting
New function prototypes for bulk add/delete were added to the table API.
The new functions allow adding/deleting multiple records with a single
function call. For now these functions are implemented only for the ACL
table; for the other tables the function pointers are set to NULL.
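
A hedged sketch of how a caller can cope with the NULL pointers (op
names mirror the existing single-entry ops; exact signatures are
treated as illustrative): use the bulk op when the table provides one,
otherwise fall back to per-entry calls.

    #include <stdint.h>
    #include <rte_table.h>

    /* Add n_keys entries, using the bulk op if the table implements it. */
    static int
    table_entry_add_bulk(struct rte_table_ops *ops, void *table,
        void **keys, void **entries, uint32_t n_keys,
        int *key_found, void **entries_ptr)
    {
        uint32_t i;

        if (ops->f_add_bulk != NULL)
            return ops->f_add_bulk(table, keys, entries, n_keys,
                key_found, entries_ptr);

        for (i = 0; i < n_keys; i++) {
            int status = ops->f_add(table, keys[i], entries[i],
                &key_found[i], &entries_ptr[i]);

            if (status != 0)
                return status;
        }

        return 0;
    }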

Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2015-11-26 00:32:12 +01:00
Maciej Gajdzica
cbca3f635b table: add hash stats
Added statistics for hash ext table.
Added statistics for hash key8 table.
Added statistics for hash key16 table.
Added statistics for hash key32 table.
Added statistics for hash_lru table.
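
A short sketch of how the counters could be read through the table ops
interface (field names follow the librte_table stats structure; treat
them as illustrative):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_table.h>

    static void
    dump_table_stats(struct rte_table_ops *ops, void *table)
    {
        struct rte_table_stats stats;

        if (ops->f_stats == NULL)
            return;

        /* Read the counters and clear them (last argument non-zero). */
        ops->f_stats(table, &stats, 1);

        printf("pkts in: %" PRIu64 ", lookup misses: %" PRIu64 "\n",
            stats.n_pkts_in, stats.n_pkts_lookup_miss);
    }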

Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2015-06-23 23:31:14 +02:00
Daniel Mrzyglod
1129992baa port: fix unaligned access to metadata
Fix RTE_MBUF_METADATA macros to allow for unaligned accesses to
meta-data fields.
Forcing aligned accesses is not really required, so this removes an
unneeded constraint.
This issue was encountered during testing of the new version of the
ip_pipeline application. There is no performance impact.
This change has no ABI impact, as the previous code that uses aligned
accesses continues to run without any issues.
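
The underlying issue, illustrated with hypothetical accessors rather
than the actual macro definitions: a plain pointer cast is only valid
when the offset keeps the field naturally aligned, while a memcpy-based
access works for any offset.

    #include <stdint.h>
    #include <string.h>

    /* Assumes (base + offset) is 4-byte aligned; undefined otherwise. */
    static uint32_t
    meta_read_aligned(uint8_t *base, uint32_t offset)
    {
        return *(uint32_t *) (base + offset);
    }

    /* Valid for any offset; compilers turn this into an unaligned-safe
     * load on the target architecture. */
    static uint32_t
    meta_read_unaligned(uint8_t *base, uint32_t offset)
    {
        uint32_t v;

        memcpy(&v, base + offset, sizeof(v));
        return v;
    }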

Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2015-06-22 22:10:46 +02:00
Jia Yu
3a52e64742 lib: fix cache alignment of structures
Include rte_memory.h in lib files that use the __rte_cache_aligned
attribute.

Consider the following code:

	struct per_core_foo {
		...
	} __rte_cache_aligned;

	struct global_foo {
		struct per_core_foo foo[RTE_MAX_CORE];
	};

If __rte_cache_aligned is not defined (rte_memory.h is not included),
the code still compiles but the structure is not aligned: the line
instead defines the structure and creates a global variable named
__rte_cache_aligned. This can lead to subtle, hard-to-debug problems
if the code is in a header that is included by files that may or may
not include rte_memory.h.
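
A small sketch of the fixed pattern (field and array sizes are
illustrative; __rte_cache_aligned is assumed to expand to an alignment
attribute once rte_memory.h is included):

    #include <stdint.h>
    #include <rte_memory.h>   /* defines __rte_cache_aligned */

    struct per_core_foo {
        uint64_t cnt;
    } __rte_cache_aligned;

    /* Each element now starts on its own cache line. Without the
     * include, __rte_cache_aligned would instead name a stray global
     * variable and the array elements would be tightly packed. */
    struct global_foo {
        struct per_core_foo foo[64]; /* array size is illustrative */
    };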

Signed-off-by: Jia Yu <jyu@vmware.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
2014-12-11 01:42:02 +01:00
Sergio Gonzalez Monroy
fdf20fa7be add prefix to cache line macros
CACHE_LINE_SIZE is a macro defined in machine/param.h on FreeBSD and
conflicts with the DPDK version of the macro.
Add the RTE_ prefix to avoid the conflict.
CACHE_LINE_MASK and CACHE_LINE_ROUNDUP are also prefixed.

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
[Thomas: updated on HEAD, including PPC]
2014-11-27 16:21:11 +01:00
Balazs Nemeth
5bc3c265cb table: fix pointer calculations at initialization
During initialization of rte_table_hash_ext and rte_table_hash_lru, a
contiguous region of memory is allocated to store meta data, buckets,
extended buckets, keys, stack of keys, stack of extended buckets and
data entries. The size of each region depends on the hash table
configuration.

The address of each region is calculated using offsets relative to the
beginning of the memory region. Without this patch, the offsets
contain the size of the table meta data (sizeof(struct
rte_table_hash)). These addresses are stored in pointers which are
used when entries are added or deleted and lookups are performed.

Instead of adding these offsets to the address of the beginning of the
memory region, they are added to the address of the end of the meta
data (= address of the beginning of the memory region + sizeof(struct
rte_table_hash)). The resulting addresses are off by sizeof(struct
rte_table_hash) bytes. As a consequence, memory past the allocated
region can be accessed by the add, delete and lookup operations.

This patch corrects the address calculation by not including the size
of the meta data in the offsets.
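
A sketch of the arithmetic (structure members and variable names are
illustrative): since the precomputed offsets already account for
sizeof(struct rte_table_hash), they must be added to the start of the
allocation, not to the end of the meta data.

    #include <stddef.h>
    #include <stdint.h>

    struct bucket;                      /* illustrative */
    struct rte_table_hash {
        struct bucket *buckets;
        /* ... more meta data ... */
    };

    static void
    init_pointers(struct rte_table_hash *t, size_t bucket_offset)
    {
        uint8_t *base = (uint8_t *) t;  /* start of the allocation */

        /* Buggy: bucket_offset already includes the meta-data size, so
         * this lands sizeof(struct rte_table_hash) bytes too far. */
        t->buckets = (struct bucket *)
            (base + sizeof(struct rte_table_hash) + bucket_offset);

        /* Fixed: add the offset to the start of the memory region. */
        t->buckets = (struct bucket *) (base + bucket_offset);
    }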

Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2014-11-24 13:17:49 +01:00
Balazs Nemeth
8595428e50 table: fix incorrect initialization
During initialization of rte_table_hash_ext and rte_table_hash_lru,
t->data_size_shl is calculated. This member contains the number of
bits to shift left when calculating the location of entries in the
hash table. To determine the number of bits to shift left, the size
of the entry (as provided to rte_table_hash_ext_create and
rte_table_hash_lru_create) has to be used instead of the size of the
key.
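
Roughly (member and helper names are illustrative), the shift positions
entry i at data_mem + (i << data_size_shl), so it must be derived from
the entry size passed at create time rather than from the key size:

    #include <stddef.h>
    #include <stdint.h>
    #include <rte_common.h> /* rte_align32pow2() */

    struct tbl { /* illustrative subset of the table meta data */
        uint32_t data_size_shl;
        uint8_t *data_mem;
    };

    static uint8_t *
    entry_ptr(struct tbl *t, uint32_t entry_size, uint32_t i)
    {
        /* Derive the shift from the entry size (not the key size),
         * because it indexes whole entries in the data area. */
        t->data_size_shl = __builtin_ctz(rte_align32pow2(entry_size));

        return &t->data_mem[(size_t) i << t->data_size_shl];
    }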

Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2014-11-24 13:17:49 +01:00
Balazs Nemeth
fb20a4bd0f table: fix empty bucket removal during entry deletion
When an entry is deleted from an extensible rte_table_hash, the bucket
that stored the entry can become empty. If this is the case, the
bucket needs to be removed from the chain of buckets.

During removal of the bucket, the chain should be updated first. If
the bucket that will be removed is cleared first, the chain is broken
and the information to update the chain is lost.
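
A minimal sketch of the ordering (the bucket layout is illustrative):
the predecessor's next pointer has to be updated while the bucket's own
next pointer is still intact, and only then can the bucket be cleared.

    #include <string.h>

    struct bucket {
        struct bucket *next;
        /* keys, signatures, entry data ... */
    };

    static void
    bucket_unlink(struct bucket *prev, struct bucket *bkt)
    {
        /* 1. Update the chain first, while bkt->next is still valid. */
        prev->next = bkt->next;

        /* 2. Only then clear the empty bucket for reuse. */
        memset(bkt, 0, sizeof(*bkt));
    }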

Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
2014-11-24 13:17:49 +01:00
Cristian Dumitrescu
8aa327214c table: hash
Various types of hash tables are provided as part of the Packet Framework toolbox.

Hash table types:
1. Extendible bucket (ext): when bucket is full, bucket is extended with
   more keys
2. Least Recently Used (LRU): when bucket is full, the LRU entry is discarded
3. Pre-computed key signature: RX core extracts the key n-tuple from the
   packet, computes the key signature and saves the key and key signature
   within the packet meta-data; flow classification core performs the actual
   lookup (the bucket search stage) after reading the key and key signature
   from packet meta-data
4. Signature computed on the fly (do-sig version): the same CPU core extracts
   the key n-tuple from the packet, computes the key signature and performs
   the table lookup
5. Configurable key size or optimized for single key size (8-byte, 16-byte
   and 32-byte key sizes)
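
All of these variants sit behind the same table ops interface, so a
pipeline stage can drive them uniformly; a hedged sketch of a burst
lookup (signatures follow the librte_table ops API but are treated as
illustrative here):

    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_table.h>

    /* Classify up to 64 packets; entries[i] is meaningful only where
     * the bit for packet i is set in lookup_hit_mask. */
    static void
    classify_burst(struct rte_table_ops *ops, void *table,
        struct rte_mbuf **pkts, uint64_t pkts_mask)
    {
        uint64_t lookup_hit_mask = 0;
        void *entries[64];

        ops->f_lookup(table, pkts, pkts_mask, &lookup_hit_mask, entries);

        /* Hit packets carry a pointer to their table entry; the rest
         * take the miss/default path. */
    }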

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Pablo de Lara Guarch <pablo.de.lara.guarch@intel.com>
Acked-by: Ivan Boule <ivan.boule@6wind.com>
2014-06-17 03:34:10 +02:00