Commit Graph

13 Commits

Stephen Hemminger
c5ba278876 lib: remove unnecessary void cast
Remove unnecessary casts of void * pointers to a specific type.
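
A minimal illustration of the kind of change (not the actual diff): in C a
void * converts implicitly, so the explicit cast adds nothing.

        #include <stdlib.h>

        struct foo { int x; };

        int main(void)
        {
                void *blob = malloc(sizeof(struct foo));
                struct foo *before = (struct foo *)blob; /* explicit cast: unnecessary */
                struct foo *after = blob;                /* void * converts implicitly */

                (void)before;
                (void)after;
                free(blob);
                return 0;
        }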

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2017-04-11 18:05:10 +02:00
Gowrishankar Muthukrishnan
43f15e2837 table: fix verification on hash bucket header alignment
On powerpc systems, the rte_table hash structs rte_bucket_4_8, rte_bucket_4_16 and
rte_bucket_4_32 are not cache-line aligned (the cache line is 128 bytes there), so
the verification against the cache line fails. Instead of checking alignment against
the CPU cache line, it can equally be tested as a multiple of 64 bytes.
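
A hedged sketch of the shape of the fix, simplified from the real create-time
check:

        /* Verify the bucket header size as a multiple of 64 bytes rather than
         * of RTE_CACHE_LINE_SIZE, which is 128 bytes on POWER, so the 64-byte
         * buckets failed the old test. */
        if ((sizeof(struct rte_bucket_4_8) % 64) != 0)
                return NULL;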

Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2016-09-09 17:56:25 +02:00
Fan Zhang
e37d9d7655 table: improve lookup performance with prefetch offset
This patch modifies rte_prefetch offsets to improve hash/lru
table lookup performance.
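
A hedged sketch of the pattern being tuned (not the actual lookup code);
lookup_one() is a hypothetical helper and PREFETCH_OFFSET stands in for the
kind of constant the patch adjusts:

        #include <rte_prefetch.h>

        #define PREFETCH_OFFSET 2

        /* Prefetch the bucket a few packets ahead of the one currently being
         * processed, so its cache line is in flight by the time it is used. */
        for (i = 0; i < n_pkts; i++) {
                if (i + PREFETCH_OFFSET < n_pkts)
                        rte_prefetch0(buckets[i + PREFETCH_OFFSET]);
                lookup_one(table, pkts[i], buckets[i]);
        }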

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2015-11-26 00:45:50 +01:00
Fan Zhang
68866e2417 table: add 16-byte hash operations computed on lookup
This patch adds hash table operations for key signature
computed on lookup ("do-sig") for LRU hash tables and extendible buckets.
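
A hedged sketch of the difference, with illustrative names rather than the
exact librte_table internals:

        sig = f_hash(pkt_key, key_size, seed); /* "do-sig": hash computed during the lookup */
        /* vs. pre-computed: sig = pkt_meta->sig; (stored earlier by the RX core) */
        bucket = &buckets[sig & bucket_mask];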

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2015-11-26 00:45:50 +01:00
Fan Zhang
fc6bcc6fee table: add key mask to 8 and 16-byte hash parameters
This patch relates to the ABI change proposed for librte_table.
The key_mask parameter is added for 8-byte and 16-byte
key extendible bucket and LRU tables. The release notes
are updated and the deprecation notice is removed.
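
A hedged sketch of what the new parameter does; the field name follows this
patch, the surrounding code is illustrative, not the real implementation:

        /* Only the key bytes selected by key_mask take part in hashing and
         * comparison (8-byte key shown). */
        uint8_t masked[8];

        for (i = 0; i < 8; i++)
                masked[i] = key[i] & key_mask[i];
        sig = f_hash(masked, sizeof(masked), seed);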

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2015-11-26 00:45:50 +01:00
Marcin Kerlin
5217261c6a table: add bulk adding and deleting
New function prototypes for bulk add/delete are added to the table API. The
new functions allow adding/deleting multiple records with a single function
call. For now these functions are implemented only for the ACL table; for
the other tables the function pointers are set to NULL.
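
A hedged sketch of how a caller can use the new ops; the op names approximate
the table ops structure, and the bulk pointers may be NULL for table types
that do not implement them:

        if (ops->f_add_bulk != NULL) {
                ops->f_add_bulk(h_table, keys, entries, n_keys,
                                key_found, entries_ptr);
        } else {
                for (i = 0; i < n_keys; i++)
                        ops->f_add(h_table, keys[i], entries[i],
                                   &key_found[i], &entries_ptr[i]);
        }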

Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2015-11-26 00:32:12 +01:00
Maciej Gajdzica
cbca3f635b table: add hash stats
Added statistics for hash ext table.
Added statistics for hash key8 table.
Added statistics for hash key16 table.
Added statistics for hash key32 table.
Added statistics for hash_lru table.
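
A hedged sketch of reading the new counters; the field and op names are from
memory of the table stats API and may be approximate:

        #include <stdio.h>
        #include <inttypes.h>

        struct rte_table_stats stats;

        ops->f_stats(h_table, &stats, 1 /* clear after reading */);
        printf("pkts in: %" PRIu64 ", lookup misses: %" PRIu64 "\n",
               stats.n_pkts_in, stats.n_pkts_lookup_miss);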

Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2015-06-23 23:31:14 +02:00
Daniel Mrzyglod
1129992baa port: fix unaligned access to metadata
Fix RTE_MBUF_METADATA macros to allow for unaligned accesses to
meta-data fields.
Forcing aligned accesses is not really required, so this is removing an
unneeded constraint.
This issue was met during testing of the new version of the ip_pipeline
application. There is no performance impact.
This change has no ABI impact, as the previous code that uses aligned
accesses continues to run without any issues.
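
A hedged, generic illustration of the constraint being removed (not the
macros themselves): a 32-bit metadata field can live at any byte offset, so
the read must not assume 4-byte alignment.

        #include <stdint.h>
        #include <string.h>

        /* Illustrative helper, not the RTE_MBUF_METADATA macro: read a 32-bit
         * field at an arbitrary byte offset without forcing alignment. */
        static inline uint32_t
        meta_read_u32(const uint8_t *meta, uint32_t offset)
        {
                uint32_t v;

                memcpy(&v, meta + offset, sizeof(v));
                return v;
        }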

Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2015-06-22 22:10:46 +02:00
Miroslaw Walukiewicz
2f95a470b8 table: fix crash during key overload
hash_key8_ext, hash_key16_ext and hash_key32_ext tables allocate cache
entries to support table overload cases. A crash can occur when a cache
entry is freed after use.
The problem is in computing the index of the freed cache entry.
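
A hedged sketch of the kind of computation involved, with illustrative names:

        /* The slot index returned to the free stack is recovered by pointer
         * arithmetic against the extension array's own base; computing it
         * against any other pointer yields a bogus index. */
        bkt_index = bkt - t->buckets_ext;
        t->stack[t->stack_pos++] = bkt_index;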

Signed-off-by: Mirek Walukiewicz <miroslaw.walukiewicz@intel.com>
Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2015-03-26 22:33:41 +01:00
Jia Yu
3a52e64742 lib: fix cache alignment of structures
Include rte_memory.h in lib files that use the __rte_cache_aligned
attribute.

Consider the following code:

	struct per_core_foo {
		...
	} __rte_cache_aligned;

	struct global_foo {
		struct per_core_foo foo[RTE_MAX_CORE];
	};

If __rte_cache_aligned is not defined (rte_memory.h is not included),
the code compiles but the structure is not aligned: the declaration defines
the structure and creates a global variable called __rte_cache_aligned.
This can lead to really bad things if this code is in a .h that
is included by files that may or may not include rte_memory.h.
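
The fix, in sketch form: any header that uses the attribute pulls in its
definition explicitly, so the declaration above expands to an alignment
attribute instead of silently declaring a variable (the member shown is
illustrative).

        #include <rte_memory.h>        /* defines __rte_cache_aligned */

        struct per_core_foo {
                uint64_t counter;      /* illustrative member */
        } __rte_cache_aligned;         /* now a real alignment attribute */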

Signed-off-by: Jia Yu <jyu@vmware.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
2014-12-11 01:42:02 +01:00
Sergio Gonzalez Monroy
fdf20fa7be add prefix to cache line macros
CACHE_LINE_SIZE is a macro defined in machine/param.h on FreeBSD and
conflicts with the DPDK macro of the same name.
Add the RTE_ prefix to avoid the conflict.
CACHE_LINE_MASK and CACHE_LINE_ROUNDUP are also prefixed.
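
In sketch form, the rename as seen by users of the macros (values unchanged,
only the prefix is new):

        /* CACHE_LINE_SIZE    -> RTE_CACHE_LINE_SIZE    */
        /* CACHE_LINE_MASK    -> RTE_CACHE_LINE_MASK    */
        /* CACHE_LINE_ROUNDUP -> RTE_CACHE_LINE_ROUNDUP */
        size_t len = 100;
        size_t sz = RTE_CACHE_LINE_ROUNDUP(len); /* rounds up to a cache line multiple */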

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
[Thomas: updated on HEAD, including PPC]
2014-11-27 16:21:11 +01:00
Balazs Nemeth
14f2544cda table: fix checking extended buckets in unoptimized case
If a key is not found in a bucket and the bucket has been extended,
the extended buckets also have to be checked for potentially matching
keys. The extended buckets are checked at the end of the lookup. In
most cases this logic is skipped, as it is uncommon for buckets to be
in an extended state.

In case the lookup is performed with fewer than 5 packets, an
unoptimized version is run instead (the optimized version requires at
least 5 packets). The extended buckets should also be checked in this
case instead of simply being ignored.
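
A hedged sketch of the lookup shape being fixed, with illustrative names: on
the small-burst path, a bucket in the extended state must also have its
chained extension buckets scanned.

        for (bkt = bkt0; bkt != NULL; bkt = bkt->next) { /* walk extension chain */
                if (lookup_key_in_bucket(bkt, key, &entry))
                        return entry;
        }
        return NULL; /* miss: key not in the first bucket or any extension */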

Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2014-11-24 13:17:49 +01:00
Cristian Dumitrescu
8aa327214c table: hash
Various types of hash tables are presented under the Packet Framework toolbox.

Hash table types:
1. Extendible bucket (ext): when a bucket is full, it is extended with
   more keys
2. Least Recently Used (LRU): when a bucket is full, the LRU entry is discarded
3. Pre-computed key signature: the RX core extracts the key n-tuple from the
   packet, computes the key signature and saves the key and key signature
   within the packet meta-data; the flow classification core performs the
   actual lookup (the bucket search stage) after reading the key and key
   signature from the packet meta-data (see the sketch after this list)
4. Signature computed on-the-fly (do-sig version): the same CPU core extracts
   the key n-tuple from the packet, computes the key signature and performs
   the table lookup
5. Configurable key size or optimized for single key size (8-byte, 16-byte
   and 32-byte key sizes)
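
A hedged sketch of option 3 above, with illustrative names: the RX core
stores the key and its signature in the packet meta-data, and the flow
classification core only performs the bucket search; option 4 ("do-sig")
collapses both steps onto the same core.

        /* RX core */
        meta->sig = f_hash(key, key_size, seed);
        memcpy(meta->key, key, key_size);

        /* flow classification core */
        bucket = &buckets[meta->sig & bucket_mask];
        entry = search_bucket(bucket, meta->key);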

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Pablo de Lara Guarch <pablo.de.lara.guarch@intel.com>
Acked-by: Ivan Boule <ivan.boule@6wind.com>
2014-06-17 03:34:10 +02:00