Commit Graph

2195 Commits

Author SHA1 Message Date
Helin Zhang
a0454b5d2e i40e: update device ids
Add new device IDs for backplane and QSFP+ adapters, and delete the
deprecated one for backplane.

Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
2016-03-16 17:25:25 +01:00
Wenzhuo Lu
a7740dc130 ixgbe: support new devices and MAC types
Add support for new devices and MAC types, as supported by the base
code update.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2016-03-16 17:09:27 +01:00
Ravi Kerur
1da352d62e e1000: support I217 and I218 devices
Modified the driver and EAL code to support Intel I217 and I218 NICs.

Compiled and tested (via testpmd) on Ubuntu 14.04 for target
	x86_64-native-linuxapp-gcc
Compiled for target x86_64-native-linuxapp-clang

Signed-off-by: Ravi Kerur <rkerur@gmail.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2016-03-16 16:57:48 +01:00
Tetsuya Mukawa
fb871d0a4d vhost: fix default value of kickfd and callfd
Currently, the default values of kickfd and callfd are -1.
If the values are -1, the current code assumes kickfd and callfd haven't
been initialized yet, and the vhost library then assumes the virtqueue isn't
ready for processing.

But callfd and kickfd will also be set to -1 when "--enable-kvm"
isn't specified on the QEMU command line. This means we cannot treat -1 as
an uninitialized state.

The patch defines -1 and -2 as VIRTIO_INVALID_EVENTFD and
VIRTIO_UNINITIALIZED_EVENTFD, and uses VIRTIO_UNINITIALIZED_EVENTFD for
the default values of kickfd and callfd.
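
As a rough sketch of what that looks like (the macro values follow this
message; the struct and helper names are illustrative, not necessarily the
exact vhost code):

	#define VIRTIO_INVALID_EVENTFD		(-1)	/* set by QEMU without --enable-kvm */
	#define VIRTIO_UNINITIALIZED_EVENTFD	(-2)	/* never set up at all */

	static int
	vq_is_ready(struct vhost_virtqueue *vq)
	{
		return vq != NULL &&
			vq->kickfd != VIRTIO_UNINITIALIZED_EVENTFD &&
			vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD;
	}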

Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
2016-03-15 00:20:29 +01:00
Yuanhan Liu
a436f53ebf vhost: avoid dead loop chain
If a malicious guest forges a dead loop chain, it could lead to a dead
loop of copying the desc buf to mbufs, which results in all mbufs being
exhausted.

Add a variable, nr_desc, to guard against such a case.
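
A sketch of the guard, assuming the copy loop walks the chain via desc->next
(variable names are illustrative):

	uint32_t nr_desc = 1;

	while (desc->flags & VRING_DESC_F_NEXT) {
		if (unlikely(++nr_desc > vq->size))
			return -1;	/* chain longer than the ring: must be a loop */
		desc = &vq->desc[desc->next];
		/* copy this desc buf into the mbuf ... */
	}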

Suggested-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
2016-03-15 00:07:32 +01:00
Yuanhan Liu
c687b0b635 vhost: check for ring descriptors overflow
A malicious guest may easily forge some illegal vring desc buf.
To make our vhost robust, we need to make sure desc->next does not
go beyond the vq->desc[] array.
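
A minimal sketch of the bounds check, done before following the chain (exact
placement in the real code may differ):

	if (unlikely(desc->next >= vq->size))
		return -1;	/* out-of-range index forged by the guest */
	desc = &vq->desc[desc->next];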

Suggested-by: Rich Lane <rich.lane@bigswitch.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
2016-03-15 00:05:59 +01:00
Yuanhan Liu
623bc47054 vhost: do sanity check for ring descriptor length
We need to make sure that desc->len is bigger than the size of the virtio net
header; otherwise "desc_avail" would underflow and become a huge number in
the following code:

	desc_avail  = desc->len - vq->vhost_hlen;

On the dequeue code path, it will try to allocate enough mbufs to hold a
desc buf of that size, which ends up consuming all mbufs and leaving no
free mbuf available. Therefore, you might see an error message:

	Failed to allocate memory for mbuf.

Also, on both the dequeue and enqueue code paths, while copying data from/to
the desc buf, the huge "desc_avail" would result in accessing memory that
does not belong to the desc buf, which could lead to memory access errors.

A malicious guest could easily forge such a malformed vring desc buf.
Restarting an interrupted DPDK application inside the guest would also
trigger this issue, as all huge pages are reset to 0 during DPDK re-init,
leaving desc->len at 0.

Therefore, this patch does a sanity check for desc->len, to make vhost
robust.
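
A minimal sketch of the check, matching the snippet above (shown for the
dequeue path; the enqueue path needs the same guard):

	if (unlikely(desc->len < vq->vhost_hlen))
		return -1;	/* header does not even fit: reject this desc */

	desc_avail  = desc->len - vq->vhost_hlen;
	desc_offset = vq->vhost_hlen;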

Reported-by: Rich Lane <rich.lane@bigswitch.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
2016-03-15 00:03:46 +01:00
Yuanhan Liu
c252bcf9ec vhost: remove wrong unlikely prediction in Rx
VIRTIO_NET_F_MRG_RXBUF is a default feature supported by vhost.
Adding unlikely for VIRTIO_NET_F_MRG_RXBUF detection doesn't
make sense to me at all.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
2016-03-14 23:59:47 +01:00
Yuanhan Liu
a98240621d vhost: remove rte_memcpy from header copy
First of all, rte_memcpy() is mostly useful for copying big packets
by leveraging advanced hardware instructions like AVX. But for the virtio
net hdr, which is 12 bytes at most, invoking rte_memcpy() will not
bring any performance boost.

And, to my surprise, rte_memcpy() is VERY huge. Since rte_memcpy()
is inlined, it increases the binary code size every time
we call it at a different place. Replacing the two rte_memcpy() calls
with direct copies saves nearly 12K bytes of code size!
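
Illustrative only (variable names are mine, not the actual patch hunk): for a
fixed-size header a plain struct assignment compiles to a couple of stores,
whereas the inlined rte_memcpy() expands to a large size-dispatching SSE/AVX
body at every call site.

	struct virtio_net_hdr_mrg_rxbuf *hdr =
		(struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)desc_addr;

	*hdr = virtio_hdr;	/* direct copy of the small header */
	/* instead of: rte_memcpy(hdr, &virtio_hdr, vq->vhost_hlen); */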

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
2016-03-14 23:58:11 +01:00
Yuanhan Liu
932a00b85a vhost: refactor mergeable Rx
The current virtio_dev_merge_rx() implementation looks just like the
old rte_vhost_dequeue_burst(): full of twisted logic, with the same
code block appearing in quite a few different places.

However, the logic of virtio_dev_merge_rx() is quite similar to
virtio_dev_rx().  The big difference is that the mergeable one
can allocate more than one available entry to hold the data.
Fetching all available entries into vec_buf at once makes the
difference a bit bigger.

The refactored code looks like the following:

	while (mbuf_has_not_drained_totally || mbuf_has_next) {
		if (this_desc_has_no_room) {
			this_desc = fetch_next_from_vec_buf();

			if (it is the last of a desc chain)
				update_used_ring();
		}

		if (this_mbuf_has_drained_totally)
			mbuf = fetch_next_mbuf();

		COPY(this_desc, this_mbuf);
	}

This patch removes quite a few lines of code and therefore makes it much
more readable.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
2016-03-14 23:56:41 +01:00
Yuanhan Liu
282a94ba99 vhost: refactor Rx
This is a simple refactor, as there isn't any twisted logic in the old
code. Here I just split the code up and introduced two helper functions,
reserve_avail_buf() and copy_mbuf_to_desc(), to make the code more
readable.

Also, it saves nearly 1K bytes of binary code size.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
2016-03-14 23:55:06 +01:00
Yuanhan Liu
bc7f87a2c1 vhost: refactor dequeueing
The current rte_vhost_dequeue_burst() implementation is a bit messy,
with twisted logic, and you can see repeated code here and there.

However, rte_vhost_dequeue_burst() actually does a simple job: copy
the packet data from the vring desc to the mbuf. What's tricky here is:

- The desc buf could be chained (via the desc->next field), so you need to
  fetch the next one when the current one is wholly drained.

- One mbuf might not be big enough to hold the whole desc buf, hence you
  need to chain the mbufs as well, via the mbuf->next field.

The simplified code looks like the following:

	while (this_desc_is_not_drained_totally || has_next_desc) {
		if (this_desc_has_drained_totally) {
			this_desc = next_desc();
		}

		if (mbuf_has_no_room) {
			mbuf = allocate_a_new_mbuf();
		}

		COPY(mbuf, desc);
	}

Note that the old code did special handling to skip the virtio
header. However, that can be done simply by adjusting the desc_avail
and desc_offset vars:

	desc_avail  = desc->len - vq->vhost_hlen;
	desc_offset = vq->vhost_hlen;

This refactor makes the code much more readable (IMO), and it also reduces
the binary code size.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
2016-03-14 23:49:36 +01:00
Keith Wiles
6b5a857fb0 eal: decrease log level of some debug messages
When the log level is set to 7 (INFO), these messages are still displayed;
they should be logged at the DEBUG level instead.

Signed-off-by: Keith Wiles <keith.wiles@intel.com>
2016-03-13 23:44:35 +01:00
Stephen Hemminger
03d00293ca sched: eliminate floating point in calculating byte clock
The old code was doing a floating point divide for each rte_dequeue(),
which is very expensive. Change to using a fixed point scaled inverse
multiply; the scaling maintains equivalent precision.
The application ABI is the same.

This improved performance from 5 Gbit/s to 10 Gbit/s when configured
for a 10 Gbit/s rate.
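
A conceptual sketch of the technique (illustrative names, not the actual
librte_sched fields): the divide happens once at configuration time, so the
per-packet path needs only a multiply and a shift.

	#define INV_SHIFT 32

	uint32_t divisor;	/* fixed at configuration time */
	uint32_t dividend;	/* varies per packet */

	/* configuration time: precompute the scaled inverse of the divisor */
	uint64_t inv = (((uint64_t)1 << INV_SHIFT) + divisor - 1) / divisor;

	/* per packet: approximately (dividend / divisor) with no division */
	uint32_t q = (uint32_t)(((uint64_t)dividend * inv) >> INV_SHIFT);

The reciprocal divide introduced in the next commit follows the Linux
algorithm, which refines this basic idea to keep the result accurate over the
full 32-bit range.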

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2016-03-13 23:31:59 +01:00
Stephen Hemminger
ffe3ec811e sched: introduce reciprocal divide
This adds (with permission of the original author)
a reciprocal divide based on the algorithm used in Linux.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
2016-03-13 23:31:59 +01:00
Stephen Hemminger
4d51afb5cd sched: keep track of RED drops
Add a new statistic to keep track of drops due to RED.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2016-03-13 23:28:00 +01:00
Panu Matilainen
4758404a30 mk: fix eal shared library dependencies
Add DT_NEEDED entries for librte_eal external dependencies.
Details between the platforms differ somewhat, and for static
builds they need to be handled from mk/exec-env still.

Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
2016-03-13 20:27:38 +01:00
Panu Matilainen
0d822b8047 mk: fix vhost shared library dependencies
Add DT_NEEDED entries for external library dependencies which
are the most critical ones for sane operation.
Clean up vhost_cuse CFLAGS/LDFLAGS confusion while at it.

Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
2016-03-13 20:27:26 +01:00
Panu Matilainen
e86a699cf6 mk: fix shared library dependencies on libm and librt
There are two places that need -lm (test app and librte_sched) and
exactly one that needs -lrt (librte_sched). Add the relevant
DT_NEEDED entries to both, and eliminate the bogus discrepancy
between Linux and BSD EXECENV_LDLIBS wrt these libs.

Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
2016-03-13 20:27:07 +01:00
Fan Zhang
eb5f4119b2 port: add pcap file dump
Originally, sink ports in librte_port release received mbufs back to the
mempool. This patch adds an optional packet-dumping-to-PCAP feature to the
sink port: the packets are dumped to a user-defined PCAP file for storage or
debugging. The user may also choose the sink port's behaviour: either it
continuously dumps packets to the file, or it stops after dumping a certain
number of packets.

This feature shares the same CONFIG_RTE_PORT_PCAP compiler option as the
source port PCAP file support feature. Users can enable or disable this
feature by setting the CONFIG_RTE_PORT_PCAP compiler option to "y" or "n".

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2016-03-13 16:04:11 +01:00
Fan Zhang
d4b42133d8 port: add pcap file source
Originally, a source port in librte_port is an input port used as a packet
generator. Similar to the Linux kernel's /dev/zero character device, it
generates null packets. This patch adds optional PCAP file support to the
source port: instead of sending null packets, the source port generates
packets copied from a PCAP file. To increase performance, the packets in the
file are loaded into memory initially, and copied to mbufs in a circular
manner. Users can enable or disable this feature by setting the
CONFIG_RTE_PORT_PCAP compiler option to "y" or "n".

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2016-03-13 16:04:02 +01:00
Xutao Sun
7b1312891b ethdev: add IP in GRE tunnel
Signed-off-by: Xutao Sun <xutao.sun@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
2016-03-13 15:27:20 +01:00
Xutao Sun
dd76f93c2d ethdev: rework tunnel filtering structure
Change the outer_mac and inner_mac fields in struct
rte_eth_tunnel_filter_conf from pointers to embedded structs in order to
improve the code's readability.
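
A sketch of the change to the affected fields (assuming struct ether_addr as
the MAC address type; the remaining fields are unchanged):

	struct rte_eth_tunnel_filter_conf {
		struct ether_addr outer_mac;	/* was: struct ether_addr *outer_mac */
		struct ether_addr inner_mac;	/* was: struct ether_addr *inner_mac */
		/* ... other fields unchanged ... */
	};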

Signed-off-by: Xutao Sun <xutao.sun@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
2016-03-13 15:26:55 +01:00
Wenzhuo Lu
d909af8f72 ixgbe: offload VxLAN and NVGRE Rx checksum on X550
X550 will do VxLAN & NVGRE RX checksum off-load automatically.
This patch exposes the result of the checksum off-load.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2016-03-13 11:52:52 +01:00
Wenzhuo Lu
1cbe755fef ethdev: rename UDP tunnel port functions
The names of the functions for tunnel port configuration are not
accurate. They are tunnel_add/del; it is better to change them to
tunnel_port_add/del.
The old functions are directly replaced because API and ABI
compatibility of ethdev is already broken in 16.04.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2016-03-13 11:44:33 +01:00
Wenzhuo Lu
c49b2bad6a ethdev: support L2 tunnel operations
Add functions to support L2 tunnel configuration and operations:
1. L2 tunnel ether type modification.
   This means modifying the ether type of a specific type of tunnel,
   so that packets with this ether type are parsed as that type of
   tunnel.
2. Enabling/disabling L2 tunnel support.
   This means enabling/disabling the ability to parse the specific
   type of tunnel. This ability should be enabled before filtering,
   forwarding or offloading is enabled for that type of tunnel.
3. Insertion and stripping of the L2 tunnel tag.
4. Forwarding packets to a pool based on the L2 tunnel tag.

Only the E-tag tunnel is supported for now.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
2016-03-11 23:25:57 +01:00
Helin Zhang
19b16e2f64 ethdev: add vlan type when setting ether type
In order to set the VLAN ether type for single VLAN, inner VLAN
and outer VLAN, a VLAN type input parameter is added
to 'rte_eth_dev_set_vlan_ether_type()'.
In addition, the corresponding changes in e1000, ixgbe and i40e
are also added.

It is an ABI break, but the ethdev library is already bumped for 16.04.
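
A hedged usage sketch (the VLAN type enum values are my recollection of the
rte_vlan_type introduced here; treat them as assumptions):

	/* outer (S-tag, 0x88A8) and inner (C-tag, 0x8100) VLAN ether types */
	rte_eth_dev_set_vlan_ether_type(port_id, ETH_VLAN_TYPE_OUTER, 0x88A8);
	rte_eth_dev_set_vlan_ether_type(port_id, ETH_VLAN_TYPE_INNER, 0x8100);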

Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2016-03-11 22:21:06 +01:00
Tomasz Kulasek
d6c99e62c8 ethdev: add buffered Tx
Many sample apps include internal buffering for single-packet-at-a-time
operation. Since this is such a common paradigm, this functionality is
better suited to being implemented in the ethdev API.

The new APIs in the ethdev library are:
* rte_eth_tx_buffer_init - initialize buffer
* rte_eth_tx_buffer - buffer up a single packet for future transmission
* rte_eth_tx_buffer_flush - flush any unsent buffered packets
* rte_eth_tx_buffer_set_err_callback - set up a callback to be called in
  case transmitting a buffered burst fails. By default, we just free the
  unsent packets.

As well as these, additional reference callbacks are provided, which free the
packets (a usage sketch of the buffering flow follows the list):

* rte_eth_tx_buffer_drop_callback - silently drop packets (default
  behavior)
* rte_eth_tx_buffer_count_callback - drop and update user-provided counter
  to track the number of dropped packets
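
A hedged usage sketch of the flow described above (the buffer-size macro and
exact signatures are as I recall them for this release; verify against
rte_ethdev.h; port_id, queue_id, mbuf and BURST_SIZE come from the
surrounding application):

	struct rte_eth_dev_tx_buffer *buffer;
	uint16_t sent;

	buffer = rte_zmalloc_socket("tx_buffer",
			RTE_ETH_TX_BUFFER_SIZE(BURST_SIZE), 0,
			rte_eth_dev_socket_id(port_id));
	rte_eth_tx_buffer_init(buffer, BURST_SIZE);

	/* per packet: buffered, transmitted once BURST_SIZE packets accumulate */
	sent = rte_eth_tx_buffer(port_id, queue_id, buffer, mbuf);

	/* periodically (e.g. on a drain timer): push out anything still pending */
	sent += rte_eth_tx_buffer_flush(port_id, queue_id, buffer);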

Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
2016-03-11 18:05:55 +01:00
Yuanhan Liu
fd2fca6f6b vhost: fix queue pair reallocation
vq is allocated in pairs, hence we should do pair reallocation
in numa_realloc() as well; otherwise an error like the following
occurs while doing NUMA reallocation:

    VHOST_CONFIG: reallocate vq from 0 to 1 node
    PANIC in rte_free():
    Fatal error: Invalid memory

The reason we didn't catch it is that numa_realloc() does not
take effect when RTE_LIBRTE_VHOST_NUMA is not enabled,
which is the default case.

Fixes: e049ca6d10 ("vhost-user: prepare multiple queue setup")

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
2016-03-11 16:49:20 +01:00
Yuanhan Liu
78ffdaff06 vhost: simplify numa reallocation
We can first check whether we need to reallocate the vq and, if so,
reallocate it. We then do the same for the vhost dev realloc.

This gets rid of the tons of repeated "if (realloc_dev)"
and "if (realloc_vq)" statements and therefore makes the code
a bit more readable.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
2016-03-11 16:49:17 +01:00
Yuanhan Liu
45ca9c6f7b vhost: get rid of linked list for devices
While we use a single linked list to maintain all devices, we could
use a static array to achieve the same goal, just like what we do
to maintain the eth devices with the rte_eth_devices array. This
simplifies the code a bit.
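
A minimal sketch of the approach (array name and size are illustrative):

	#define MAX_VHOST_DEVICE 1024
	static struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];

	/* a device's index in the array then serves as its handle */
	dev = vhost_devices[device_idx];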

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
2016-03-11 16:46:18 +01:00
Yuanhan Liu
6fe390eed0 vhost: fix build with kernel < 3.5
VIRTIO_NET_F_GUEST_ANNOUNCE is a new feature introduced in kernel
v3.5. For older kernels (or, more precisely, old distributions), we
can simply define it manually to fix the "macro not defined" error.
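
A minimal sketch of such a fallback (the bit value 21 comes from the virtio
spec; the guard style is an assumption, not necessarily the exact patch):

	#ifndef VIRTIO_NET_F_GUEST_ANNOUNCE
	#define VIRTIO_NET_F_GUEST_ANNOUNCE 21
	#endif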

Fixes: d293dac8f3 ("vhost: claim support of guest announce")

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
2016-03-11 16:46:18 +01:00
Jerin Jacob
cbc2f1dccf lpm/arm: support NEON
Enabled the CONFIG_RTE_LIBRTE_LPM, CONFIG_RTE_LIBRTE_TABLE and
CONFIG_RTE_LIBRTE_PIPELINE libraries for arm and arm64.

The TABLE and PIPELINE libraries had been disabled due to their LPM library
dependency.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
2016-03-11 15:56:07 +01:00
Jerin Jacob
15d8dfc05b lpm/x86: move SSE implementation to be architecture agnostic
- Used the architecture-agnostic xmm_t type to represent a 128-bit SIMD
  variable.

- Introduced a vect_* API abstraction in app/test to test the
  rte_lpm_lookupx4 API in an architecture-agnostic way.

- Moved the rte_lpm_lookupx4 SSE implementation to the architecture-specific
  rte_lpm_sse.h file to accommodate new rte_lpm_lookupx4 implementations for
  other architectures.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2016-03-11 15:50:11 +01:00
Declan Doherty
26c2e4ad5a cryptodev: add capabilities discovery
This patch adds a mechanism for discovery of crypto device features and supported
crypto operations and algorithms. It also provides a method for a crypto PMD to
publish any data range limitations it may have for the operations and algorithms
it supports.

The feature_flags parameter added to the rte_cryptodev struct is used to capture
features such as the operations supported (symmetric crypto, operation chaining,
etc.) as well as properties such as whether the device is hardware accelerated
or uses SIMD instructions.

The capabilities parameter allows a PMD to define an array of supported
operations along with any limitations that its implementation may have.

Finally, the rte_cryptodev_info struct has been extended to allow retrieval of
these parameters using the existing rte_cryptodev_info_get() API.
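
A hedged usage sketch (the feature flag name and the capabilities list
terminator are my recollection of this API; treat them as assumptions to
verify against rte_cryptodev.h; dev_id comes from the application):

	struct rte_cryptodev_info info;
	const struct rte_cryptodev_capabilities *cap;

	rte_cryptodev_info_get(dev_id, &info);

	if (info.feature_flags & RTE_CRYPTODEV_FF_HW_ACCELERATED)
		printf("device %u is hardware accelerated\n", dev_id);

	for (cap = info.capabilities;
			cap->op != RTE_CRYPTO_OP_TYPE_UNDEFINED; cap++)
		; /* inspect each supported operation/algorithm entry */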

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
2016-03-11 10:43:09 +01:00
Declan Doherty
eec136f3c5 aesni_gcm: add driver for AES-GCM crypto operations
This patch provides the implementation of an AES-NI accelerated crypto PMD
which is dependent on Intel's multi-buffer library, see the white paper
"Fast Multi-buffer IPsec Implementations on Intel®  Architecture  Processors"

This PMD supports AES_GCM authenticated encryption and authenticated
decryption using 128-bit AES keys.

The patch also contains the related unit test functions.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
2016-03-11 01:01:42 +01:00
Deepak Kumar Jain
a59ffe7eb9 cryptodev: add bit-wise handling for SNOW 3G
Wireless algorithms like SNOW 3G need input in bits.
In this patch, changes have been made to incorporate this requirement
in both the QAT and SW PMDs.

Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2016-03-11 00:18:01 +01:00
Pablo de Lara
3aafc423cf snow3g: add driver for SNOW 3G library
Added new SW PMD which makes use of the libsso SW library,
which provides wireless algorithms SNOW 3G UEA2 and UIA2
in software.

This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_SNOW3G_UEA2
- RTE_CRYPTO_SYM_AUTH_SNOW3G_UIA2

The SNOW 3G hash and cipher algorithms, which are enabled
by this crypto PMD, are implemented by Intel's libsso software
library. For library download and build instructions,
see the included documentation (doc/guides/cryptodevs/snow3g.rst).

The patch also contains the related unit test functions to test the PMD's
supported operations.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
2016-03-11 00:14:47 +01:00
Declan Doherty
67f64f2e12 mbuf_offload: remove library
As the cryptodev library no longer depends on the mbuf_offload library,
this patch removes it.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
2016-03-10 21:08:28 +01:00
Declan Doherty
c0f87eb525 cryptodev: change burst API to be crypto op oriented
This patch modifies the crypto burst enqueue/dequeue APIs to operate on bursts
of rte_crypto_op's rather than on rte_mbuf bursts as in the current
implementation. This simplifies the burst processing in the crypto PMDs and the
use of crypto operations in general, and it adds new functions for managing
rte_crypto_op pools.

These changes continue the separation of the symmetric operation parameters
from the more general operation parameters, which will simplify the integration
of asymmetric crypto operations in the future.

PMDs, unit tests and sample applications are also modified to work with the
modified and new API.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
2016-03-10 17:12:45 +01:00
Fiona Trahe
a7f4562b09 cryptodev: remove unused field
Remove unused phys_addr field from key in crypto_xform,
simplify struct and fix knock-on impacts in l2fwd-crypto app

Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
2016-03-10 17:12:41 +01:00
Fiona Trahe
1bd407fac8 cryptodev: extract symmetric operations
This patch splits symmetric-specific definitions and
functions away from the common crypto APIs to facilitate the future extension
and expansion of the cryptodev framework, allowing asymmetric crypto
operations to be introduced at a later date, and to clean up the
logical structure of the public includes. The patch also introduces the _sym
prefix for symmetric-specific structures and functions to improve clarity in
the API.

Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
2016-03-10 17:12:41 +01:00
Fiona Trahe
a0b4c5b8d4 cryptodev: clean up
- Fixed >80char lines in test file
- Removed unused elements from stats struct
- Removed unused objects in rte_cryptodev_pmd.h
- Renamed variables
- Replaced leading spaces with tabs
- Improved performance results display in test

Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
2016-03-10 17:12:40 +01:00
Jasvinder Singh
4c387fcdf7 pipeline: add new functions for action handlers
Two new pipeline API functions have been added to the library. The packet
hijack API function can be called by any input/output port or table action
handler to remove selected packets from the burst of packets read from one
of the pipeline input ports and then either send these packets out through
any pipeline output port or drop them.

Another packet drop API function can be used by the pipeline action
handlers (port in/out, table) to drop the packets selected using a packet
mask. This function updates the drop statistics counters correctly.

Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2016-03-10 01:29:42 +01:00
Jasvinder Singh
88ac2fd99f pipeline: support packet redirection at action handlers
Currently, there is no mechanism that allows the pipeline ports (in/out)
and table action handlers to override the default forwarding decision
(as previously configured per input port or in the table entry). The port
(in/out) and table action handler prototypes have been changed to allow
pipeline action handlers (port in/out, table) to remove the selected
packets from further pipeline processing and to take full ownership of
these packets. This feature is helpful for implementing functions such as
exception handling (e.g. TTL = 0), load balancing, etc.

Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2016-03-10 01:28:29 +01:00
Huawei Xie
b8eb345378 pci: ignore devices already managed in Linux when mapping x86 ioport
Call pci_ioport_map() (on x86) only if the PCI device is not bound
to a kernel driver.
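
A minimal sketch of the condition (the kdrv field and RTE_KDRV_NONE value are
per the related commits in this series; the call signature is illustrative):

	if (dev->kdrv == RTE_KDRV_NONE)
		ret = pci_ioport_map(dev, bar, p);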

Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
2016-03-10 00:36:51 +01:00
Huawei Xie
bd80d4730a pci: rework ioport map error handling
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
2016-03-10 00:36:51 +01:00
Huawei Xie
ee4e23ada8 pci: identify devices not managed by any kernel driver
Use RTE_KDRV_NONE to indicate that no kernel driver (other than VFIO/UIO) is
managing the device.

Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
2016-03-10 00:36:51 +01:00
Huawei Xie
ddc5d282fb pci: fix error code comment
A positive return value from the devinit callback of a PCI driver means the
driver doesn't support this device.

Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
2016-03-10 00:36:51 +01:00
Huawei Xie
b00eca4e5c pci: use new compiler flag for x86
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
2016-03-10 00:36:51 +01:00