Add new device IDs for backplane and QSFP+ adapters, and delete
the deprecated one for backplane.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Modified driver and EAL code to support the I217 and I218 Intel NICs.
Compiled and tested (via testpmd) on Ubuntu 14.04 for target
x86_64-native-linuxapp-gcc.
Compiled for target x86_64-native-linuxapp-clang.
Signed-off-by: Ravi Kerur <rkerur@gmail.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Currently, the default values of kickfd and callfd are -1.
If the values are -1, the current code assumes kickfd and callfd haven't
been initialized yet, and the vhost library then assumes the virtqueue
isn't ready for processing.
But callfd and kickfd are also set to -1 when "--enable-kvm"
isn't specified in the QEMU command line. This means we cannot treat -1
as the uninitialized state.
The patch defines -1 and -2 as VIRTIO_INVALID_EVENTFD and
VIRTIO_UNINITIALIZED_EVENTFD, and uses VIRTIO_UNINITIALIZED_EVENTFD for
the default values of kickfd and callfd.
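A minimal sketch of the resulting states and readiness check (the helper
name is illustrative; the vq fields are as in the vhost library's
virtqueue struct):

    #define VIRTIO_INVALID_EVENTFD       (-1)
    #define VIRTIO_UNINITIALIZED_EVENTFD (-2)

    /* Illustrative readiness check: -1 now means "no eventfd given by
     * QEMU", which is a valid state; only -2 marks an unset fd. */
    static int
    vq_is_ready(struct vhost_virtqueue *vq)
    {
        return vq->kickfd != VIRTIO_UNINITIALIZED_EVENTFD &&
               vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD;
    }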
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
If a malicious guest forges a dead loop chain, it could lead to a dead
loop of copying the desc buf to mbuf, which results in all mbufs being
exhausted.
Add a variable nr_desc to avoid such a case.
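A minimal sketch of such a guard, assuming a loop that walks the desc
chain (names and placement are illustrative):

    /* A legal chain can never contain more than vq->size descriptors,
     * so a counter exceeding that bound reveals a forged loop. */
    uint32_t nr_desc = 1;

    while (desc->flags & VRING_DESC_F_NEXT) {
        if (unlikely(++nr_desc > vq->size))
            return -1;    /* malicious ring: bail out */
        desc = &vq->desc[desc->next];
        /* ... copy this desc buf into the mbuf ... */
    }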
Suggested-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
A malicious guest may easily forge some illegal vring desc bufs.
To make our vhost robust, we need to make sure desc->next will not
go beyond the vq->desc[] array.
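A minimal sketch of the bounds check (placement illustrative):

    /* Validate a possibly forged desc->next before following the
     * chain, so it cannot index outside the vq->desc[] array. */
    if (unlikely(desc->next >= vq->size))
        return -1;
    desc = &vq->desc[desc->next];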
Suggested-by: Rich Lane <rich.lane@bigswitch.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
We need to make sure that desc->len is bigger than the size of the
virtio net header; otherwise, unexpected behaviour might happen, as
"desc_avail" would become a huge number with the following code:
    desc_avail = desc->len - vq->vhost_hlen;
For the dequeue code path, it will try to allocate enough mbufs to hold
a desc buf of that size, which ends up consuming all mbufs, leading to
no free mbuf being available. Therefore, you might see an error message:
    Failed to allocate memory for mbuf.
Also, for both the dequeue and enqueue code paths, while copying data
from/to the desc buf, the big "desc_avail" would result in accessing
memory not belonging to the desc buf, which could lead to some potential
memory access errors.
A malicious guest could easily forge such a malformed vring desc buf.
Restarting an interrupted DPDK application inside the guest would also
trigger this issue, as all huge pages are reset to 0 during DPDK
re-initialization, leading to desc->len being 0.
Therefore, this patch does a sanity check for desc->len, to make vhost
robust.
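A minimal sketch of the check, placed before the subtraction quoted
above (placement illustrative):

    /* A desc buf shorter than the virtio net header would make the
     * unsigned subtraction below wrap to a huge number. */
    if (unlikely(desc->len < vq->vhost_hlen))
        return -1;

    desc_avail  = desc->len - vq->vhost_hlen;
    desc_offset = vq->vhost_hlen;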
Reported-by: Rich Lane <rich.lane@bigswitch.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
VIRTIO_NET_F_MRG_RXBUF is a default feature supported by vhost.
Adding unlikely for VIRTIO_NET_F_MRG_RXBUF detection doesn't
make sense to me at all.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
First of all, rte_memcpy() is mostly useful for copying big packets
by leveraging advanced hardware instructions like AVX. But for the
virtio net hdr, which is 12 bytes at most, invoking rte_memcpy() will
not introduce any performance boost.
And, to my surprise, rte_memcpy() is VERY huge. Since rte_memcpy()
is inlined, it increases the binary code size linearly every time
we call it at a different place. Replacing the two rte_memcpy() calls
with direct copy saves nearly 12K bytes of code size!
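A minimal sketch of the direct copy, assuming the header is stored
straight into the guest buffer (names illustrative):

    /* For a header of at most 12 bytes, a plain struct assignment
     * compiles to a few stores, instead of pulling in the big inlined
     * rte_memcpy() AVX code paths. */
    struct virtio_net_hdr_mrg_rxbuf hdr = { .num_buffers = 1 };

    *(struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)desc_addr = hdr;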
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The current virtio_dev_merge_rx() implementation looks just like the
old rte_vhost_dequeue_burst(): full of twisted logic, where you can
see the same code block in quite a few different places.
However, the logic of virtio_dev_merge_rx() is quite similar to
virtio_dev_rx(). The big difference is that the mergeable one can
allocate more than one available entry to hold the data. Fetching all
available entries into vec_buf at once then makes the difference a bit
bigger.
The refactored code looks like below:
    while (mbuf_has_not_drained_totally || mbuf_has_next) {
        if (this_desc_has_no_room) {
            this_desc = fetch_next_from_vec_buf();
            if (it is the last of a desc chain)
                update_used_ring();
        }
        if (this_mbuf_has_drained_totally)
            mbuf = fetch_next_mbuf();
        COPY(this_desc, this_mbuf);
    }
This patch removes quite a few lines of code, therefore making it much
more readable.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This is a simple refactoring, as there isn't any twisted logic in the
old code. Here I just broke the code up and introduced two helper
functions, reserve_avail_buf() and copy_mbuf_to_desc(), to make the
code more readable.
Also, it saves nearly 1K bytes of binary code size.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The current rte_vhost_dequeue_burst() implementation is a bit messy
and its logic twisted, and you can see repeated code here and there.
However, rte_vhost_dequeue_burst() actually does a simple job: copy
the packet data from the vring desc to the mbuf. What's tricky here is:
- the desc buf could be chained (by the desc->next field), so you need
  to fetch the next one if the current one is wholly drained.
- one mbuf might not be big enough to hold all the desc buf data, hence
  you need to chain the mbufs as well, by the mbuf->next field.
The simplified code looks like following:
    while (this_desc_is_not_drained_totally || has_next_desc) {
        if (this_desc_has_drained_totally) {
            this_desc = next_desc();
        }
        if (mbuf_has_no_room) {
            mbuf = allocate_a_new_mbuf();
        }
        COPY(mbuf, desc);
    }
Note that the old code does special handling to skip the virtio
header. However, that can simply be done by adjusting the desc_avail
and desc_offset vars:
    desc_avail  = desc->len - vq->vhost_hlen;
    desc_offset = vq->vhost_hlen;
This refactoring makes the code much more readable (IMO), while also
reducing binary code size.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The old code was doing a floating point divide for each rte_dequeue(),
which is very expensive. Change to using a fixed point, scaled-inverse
multiply; to maintain equivalent precision, scaled math is used.
The application ABI is the same.
This improved performance from 5 Gbit/sec to 10 Gbit/sec when
configured for a 10 Gbit/sec rate.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
This adds (with permission of the original author) a reciprocal
divide based on the algorithm in Linux.
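A minimal sketch of the Linux-style algorithm (the helper names here
are illustrative; the committed DPDK names may differ). The inverse is
precomputed once, then each divide on the hot path becomes one multiply
and two shifts:

    #include <stdint.h>

    struct reciprocal {
        uint32_t m;   /* scaled multiplier */
        uint8_t sh1;
        uint8_t sh2;
    };

    /* Precompute the scaled inverse of divisor d (requires d > 1). */
    static struct reciprocal
    reciprocal_value(uint32_t d)
    {
        struct reciprocal R;
        int l = 32 - __builtin_clz(d - 1);    /* ceil(log2(d)) */
        uint64_t m = ((1ULL << 32) * ((1ULL << l) - d)) / d + 1;

        R.m = (uint32_t)m;
        R.sh1 = l > 1 ? 1 : l;
        R.sh2 = l > 0 ? l - 1 : 0;
        return R;
    }

    /* a / d computed without any divide instruction. */
    static inline uint32_t
    reciprocal_divide(uint32_t a, struct reciprocal R)
    {
        uint32_t t = (uint32_t)(((uint64_t)a * R.m) >> 32);

        return (t + ((a - t) >> R.sh1)) >> R.sh2;
    }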
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Add DT_NEEDED entries for librte_eal external dependencies.
Details between the platforms differ somewhat, and for static
builds they still need to be handled from mk/exec-env.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Add DT_NEEDED entries for external library dependencies which
are the most critical ones for sane operation.
Clean up vhost_cuse CFLAGS/LDFLAGS confusion while at it.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
There are two places that need -lm (test app and librte_sched) and
exactly one that needs -lrt (librte_sched). Add the relevant
DT_NEEDED entries to both, and eliminate the bogus discrepancy
between Linux and BSD EXECENV_LDLIBS wrt these libs.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Originally, sink ports in librte_port release received mbufs back to the
mempool. This patch adds an optional packet dumping to PCAP feature to
the sink port: the packets will be dumped to a user-defined PCAP file
for storage or debugging. The user may also choose the sink port's
activity: either it continuously dumps the packets to the file, or it
stops after a certain number of packets have been dumped.
This feature shares the same CONFIG_RTE_PORT_PCAP compiler option as the
source port PCAP file support feature. Users can enable or disable this
feature by setting the CONFIG_RTE_PORT_PCAP compiler option to "y" or
"n".
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Originally, a source port in librte_port is an input port used as a
packet generator. Similar to the Linux kernel's /dev/zero character
device, it generates null packets. This patch adds optional PCAP file
support to the source port: instead of sending null packets, the source
port generates packets copied from a PCAP file. To increase performance,
the packets in the file are initially loaded into memory, and are then
copied to mbufs in a circular manner. Users can enable or disable this
feature by setting the CONFIG_RTE_PORT_PCAP compiler option to "y" or
"n".
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Change the outer_mac and inner_mac fields in struct
rte_eth_tunnel_filter_conf from pointers to structs in order to
improve the code's readability.
Signed-off-by: Xutao Sun <xutao.sun@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The X550 does VxLAN & NVGRE RX checksum offload automatically.
This patch exposes the result of the checksum offload.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The names of the functions for tunnel port configuration are not
accurate. They are tunnel_add/del; better to change them to
tunnel_port_add/del.
The old functions are directly replaced, because the API and ABI
compatibility of ethdev is already broken in 16.04.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add functions to support l2 tunnel configuration and operations.
1. L2 tunnel ether type modification.
   This means modifying the ether type of a specific type of tunnel,
   so that a packet with this ether type will be parsed as this type
   of tunnel.
2. Enabling/disabling l2 tunnel support.
   This means enabling/disabling the ability to parse the specific
   type of tunnel. This ability should be enabled before we enable
   filtering, forwarding, or offloading for this specific type of
   tunnel.
3. Insertion and stripping of the l2 tunnel tag.
4. Forwarding packets to a pool based on the l2 tunnel tag.
Only the e-tag tunnel is supported now.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
In order to set the ether type of VLAN for a single VLAN, or for inner
and outer VLAN, a VLAN type is added as an input parameter to
'rte_eth_dev_set_vlan_ether_type()'.
In addition, corresponding changes in e1000, ixgbe and i40e
are also added.
It is an ABI break, but the ethdev library is already bumped for 16.04.
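A hedged usage sketch (the port id and tag value below are assumptions
for illustration):

    #include <rte_ethdev.h>

    /* Illustrative: set the outer VLAN ether type on port 0 to the
     * QinQ S-tag value 0x88A8. */
    static int
    set_outer_vlan_type(void)
    {
        return rte_eth_dev_set_vlan_ether_type(0, ETH_VLAN_TYPE_OUTER,
                                               0x88A8);
    }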
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Many sample apps include internal buffering for single-packet-at-a-time
operation. Since this is such a common paradigm, this functionality is
better suited to being implemented in the ethdev API.
The new APIs in the ethdev library are:
* rte_eth_tx_buffer_init - initialize buffer
* rte_eth_tx_buffer - buffer up a single packet for future transmission
* rte_eth_tx_buffer_flush - flush any unsent buffered packets
* rte_eth_tx_buffer_set_err_callback - set up a callback to be called in
case transmitting a buffered burst fails. By default, we just free the
unsent packets.
As well as these, two additional reference callbacks are provided,
which free the packets:
* rte_eth_tx_buffer_drop_callback - silently drop packets (default
behavior)
* rte_eth_tx_buffer_count_callback - drop and update user-provided counter
to track the number of dropped packets
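A hedged usage sketch of the new calls (port 0, queue 0, and the buffer
size below are assumptions):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_malloc.h>

    #define BURST_SIZE 32

    /* allocate and initialize a buffer holding up to BURST_SIZE pkts */
    static struct rte_eth_dev_tx_buffer *
    tx_buffer_setup(void)
    {
        struct rte_eth_dev_tx_buffer *buffer;

        buffer = rte_zmalloc_socket("tx_buffer",
                RTE_ETH_TX_BUFFER_SIZE(BURST_SIZE), 0, rte_socket_id());
        rte_eth_tx_buffer_init(buffer, BURST_SIZE);
        return buffer;
    }

    /* fast path: queue one packet; a full buffer goes out as a burst */
    static void
    tx_one(struct rte_eth_dev_tx_buffer *buffer, struct rte_mbuf *pkt)
    {
        rte_eth_tx_buffer(0, 0, buffer, pkt);
    }

    /* end of each polling iteration: push out anything still pending */
    static void
    tx_flush(struct rte_eth_dev_tx_buffer *buffer)
    {
        rte_eth_tx_buffer_flush(0, 0, buffer);
    }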
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
vq is allocated in pairs, hence we should do pair reallocation
at numa_realloc() as well; otherwise, an error like the following
occurs while doing NUMA reallocation:
    VHOST_CONFIG: reallocate vq from 0 to 1 node
    PANIC in rte_free():
    Fatal error: Invalid memory
The reason we don't catch it is that numa_realloc() takes no effect
when RTE_LIBRTE_VHOST_NUMA is not enabled, which is the default case.
Fixes: e049ca6d10 ("vhost-user: prepare multiple queue setup")
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
We can first check whether we need to reallocate the vq and, if so,
reallocate it. We then do the same for the vhost dev reallocation.
This gets rid of the tons of repeated "if (realloc_dev)"
and "if (realloc_vq)" statements, therefore making the code
a bit more readable.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
While we currently use a singly linked list to maintain all devices, we
could use a static array to achieve the same goal, just like what we
did to maintain the eth devices with the rte_eth_devices array. This
simplifies the code a bit.
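A minimal sketch of the change (the array bound is an assumption):

    /* Illustrative static device table replacing the linked list; a
     * free slot is simply a NULL entry, indexed by device id. */
    #define MAX_VHOST_DEVICE 1024
    static struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];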
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
VIRTIO_NET_F_GUEST_ANNOUNCE is a new feature introduced in kernel
v3.5. For older kernels (or, more precisely, old distributions), we
can simply define it manually, to fix the "macro not defined" error.
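A minimal sketch of the compatibility define (the feature bit is 21 per
the virtio spec):

    /* Not defined by pre-3.5 kernel headers (old distributions). */
    #ifndef VIRTIO_NET_F_GUEST_ANNOUNCE
     #define VIRTIO_NET_F_GUEST_ANNOUNCE 21
    #endif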
Fixes: d293dac8f3 ("vhost: claim support of guest announce")
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Enabled the CONFIG_RTE_LIBRTE_LPM, CONFIG_RTE_LIBRTE_TABLE and
CONFIG_RTE_LIBRTE_PIPELINE libraries for arm and arm64.
The TABLE and PIPELINE libraries were disabled due to the LPM library
dependency.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
- Used the architecture-agnostic xmm_t type to represent a 128-bit SIMD
  variable.
- Introduced a vect_* API abstraction in app/test to test the
  rte_lpm_lookupx4 API in an architecture-agnostic way.
- Moved the rte_lpm_lookupx4 SSE implementation to the
  architecture-specific rte_lpm_sse.h file, to accommodate a new
  rte_lpm_lookupx4 implementation for a different architecture.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch adds a mechanism for the discovery of crypto device features
and supported crypto operations and algorithms. It also provides a
method for a crypto PMD to publish any data range limitations it may
have for the operations and algorithms it supports.
The feature_flags parameter added to the rte_cryptodev struct is used
to capture features such as the operations supported (symmetric crypto,
operation chaining, etc.) as well as parameters such as whether the
device is hardware accelerated or uses SIMD instructions.
The capabilities parameter allows a PMD to define an array of supported
operations with any limitations which that implementation may have.
Finally, the rte_cryptodev_info struct has been extended to allow
retrieval of these parameters using the existing
rte_cryptodev_info_get() API.
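A hedged usage sketch (the checked feature flag is one example among
those defined):

    #include <stdio.h>
    #include <rte_cryptodev.h>

    /* Illustrative query of a device's feature flags. */
    static void
    show_crypto_features(uint8_t dev_id)
    {
        struct rte_cryptodev_info info;

        rte_cryptodev_info_get(dev_id, &info);

        if (info.feature_flags & RTE_CRYPTODEV_FF_HW_ACCELERATED)
            printf("crypto device %u is hardware accelerated\n",
                   dev_id);
    }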
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
This patch provides the implementation of an AES-NI accelerated crypto
PMD which is dependent on Intel's multi-buffer library; see the white
paper "Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors".
This PMD supports AES_GCM authenticated encryption and authenticated
decryption using 128-bit AES keys.
The patch also contains the related unit test functions.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
Wireless algorithms like SNOW 3G need input in bits.
In this patch, changes have been made to incorporate this requirement
into both the QAT and SW PMDs.
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added a new SW PMD which makes use of the libsso SW library,
which provides the wireless algorithms SNOW 3G UEA2 and UIA2
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_SNOW3G_UEA2
- RTE_CRYPTO_SYM_AUTH_SNOW3G_UIA2
The SNOW 3G hash and cipher algorithms enabled by this crypto PMD
are implemented by Intel's libsso software library. For library
download and build instructions, see the included documentation
(doc/guides/cryptodevs/snow3g.rst).
The patch also contains the related unit test functions to test the
operations supported by the PMD.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
As the cryptodev library does not depend on the mbuf_offload library
any longer, this patch removes it.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
This patch modifies the crypto burst enqueue/dequeue APIs to operate on
bursts of rte_crypto_op's rather than the current implementation, which
operates on rte_mbuf bursts. This simplifies the burst processing in
the crypto PMDs and the use of crypto operations in general, and
includes new functions for managing rte_crypto_op pools.
These changes continue the separation of the symmetric operation
parameters from the more general operation parameters, which will
simplify the integration of asymmetric crypto operations in the future.
PMDs, unit tests and sample applications are also modified to work with the
modified and new API.
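A hedged sketch of the reworked fast path (device 0, queue pair 0, the
pool sizes, and the elided op setup are assumptions):

    #include <rte_crypto.h>
    #include <rte_cryptodev.h>
    #include <rte_lcore.h>

    #define NB_OPS 1024
    #define BURST  32

    static void
    crypto_burst_example(void)
    {
        struct rte_mempool *op_pool;
        struct rte_crypto_op *ops[BURST];
        uint16_t nb_enq, nb_deq;

        /* op pool management is part of the new API */
        op_pool = rte_crypto_op_pool_create("op_pool",
                RTE_CRYPTO_OP_TYPE_SYMMETRIC, NB_OPS, 128, 0,
                rte_socket_id());

        /* ... allocate ops from op_pool, attach sessions/mbufs ... */

        /* bursts now carry rte_crypto_op's, not rte_mbuf's */
        nb_enq = rte_cryptodev_enqueue_burst(0, 0, ops, BURST);
        nb_deq = rte_cryptodev_dequeue_burst(0, 0, ops, nb_enq);
        (void)nb_deq;
    }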
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Remove the unused phys_addr field from the key in crypto_xform,
simplify the struct, and fix knock-on impacts in the l2fwd-crypto app.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
This patch splits the symmetric-specific definitions and functions away
from the common crypto APIs, to facilitate the future extension and
expansion of the cryptodev framework and to allow asymmetric crypto
operations to be introduced at a later date, as well as to clean up the
logical structure of the public includes. The patch also introduces the
_sym prefix to symmetric-specific structures and functions to improve
clarity in the API.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
- Fixed >80-char lines in the test file
- Removed unused elements from the stats struct
- Removed unused objects in rte_cryptodev_pmd.h
- Renamed variables
- Replaced leading spaces with tabs
- Improved the display of performance results in the test
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Two new pipeline API functions have been added to the library. The packet
hijack API function can be called by any input/output port or table action
handler to remove selected packets from the burst of packets read from one
of the pipeline input ports and then either send these packets out through
any pipeline output port or drop them.
Another packet drop API function can be used by the pipeline action
handlers (port in/out, table) to drop the packets selected using a
packet mask. This function updates the drop statistics counters
correctly.
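A hedged sketch of a table action handler using the drop function (the
handler signature follows the 16.04 pipeline API; the mask choice is
arbitrary):

    #include <rte_pipeline.h>

    /* Illustrative: drop the first packet of every burst. */
    static int
    my_table_ah(struct rte_pipeline *p, struct rte_mbuf **pkts,
        uint64_t pkts_mask, struct rte_pipeline_table_entry **entries,
        void *arg)
    {
        uint64_t drop_mask = pkts_mask & 1;    /* select packet 0 */

        /* removes the packets from the burst and updates the drop
         * statistics counters */
        rte_pipeline_ah_packet_drop(p, drop_mask);

        (void)pkts; (void)entries; (void)arg;
        return 0;
    }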
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Currently, there is no mechanism that allows the pipeline ports (in/out)
and table action handlers to override the default forwarding decision
(as previously configured per input port or in the table entry). The port
(in/out) and table action handler prototypes have been changed to allow
pipeline action handlers (port in/out, table) to remove the selected
packets from further pipeline processing and to take full ownership of
these packets. This feature will be helpful in implementing functions
such as exception handling (e.g. TTL = 0), load balancing, etc.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Call pci_ioport_map() (on x86) only if the PCI device is not bound
to a kernel driver.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Use RTE_KDRV_NONE to indicate that a kernel driver (other than
VFIO/UIO) isn't managing the device.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
A positive return value from the devinit function of a PCI driver means
that the driver doesn't support this device.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>