The original purpose of this deprecation is to make sure PCI devices
are reset whenever DPDK apps crash.
Since commit b58eedfc7d from Shijith fixes this problem without
deprecating anything, there is no longer any need to deprecate iomem
and ioport mapping in igb_uio.
Fixes: 3bac1dbc1e ("doc: announce iomem and ioport removal from igb_uio")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
It is very difficult to list the operating systems which are really supported.
Instead of continuing this unrealistic effort, a more reliable list
of tested platforms and OS is updated in the release notes.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: John McNamara <john.mcnamara@intel.com>
Add tested Intel platforms with Intel NICs to the release note.
Signed-off-by: Yulong Pei <yulong.pei@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Fixes: 0667319395 ("doc: add libnuma as dependency")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: John McNamara <john.mcnamara@intel.com>
The work on ethdev Rx and Tx offloads has not been completed for 17.08.
It will be completed in 17.11.
The notice is kept and postponed until the end of this work.
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
The function rte_cryptodev_create_vdev is an alias
for rte_vdev_init() which is scheduled to move out of the
rte_eal library. Let's deprecate this function to be able to
remove it from the cryptodev library in 17.11.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
The _rte_eth_dev_callback_process function has been modified.
The return value has been changed from void to int and an
extra parameter "void *ret_param" has been added.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Crypto operation status RTE_CRYPTO_OP_STATUS_ENQUEUED is removed
from rte_crypto.h as it is not needed for crypto operation processing.
This status value is redundant with the RTE_CRYPTO_OP_STATUS_NOT_PROCESSED value
and it was not intended to be part of public API.
Signed-off-by: Kirill Rybalchenko <kirill.rybalchenko@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Crypto keys and digests are not expected
to be big, so using a uint16_t to store
their lengths should be enough.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Additional Authenticated Data (AAD) was removed from the
authentication parameters, but the supported size was still
part of the authentication capabilities of a PMD.
Fixes: 4428eda8bb ("cryptodev: remove AAD from authentication structure")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Add header files, update .map files with new service
functions, and add the service header to the doxygen
for building.
This service header API allows DPDK to use services as
a concept of something that requires CPU cycles. An example
is a PMD that runs in software to schedule events, where a
hardware version exists that does not require a CPU.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Added API change description - moved bypass functions
from the rte_ethdev library to ixgbe PMD
Fixes: e261265e42 ("ethdev: move bypass functions to ixgbe PMD")
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
In this patch, we introduce five APIs to support TCP/IPv4 GRO.
- gro_tcp4_reassemble: reassemble an incoming TCP/IPv4 packet.
- gro_tcp4_tbl_create: create a TCP/IPv4 reassembly table, which is used
to merge packets.
- gro_tcp4_tbl_destroy: free memory space of a TCP/IPv4 reassembly table.
- gro_tcp4_tbl_pkt_count: return the number of packets in a TCP/IPv4
reassembly table.
- gro_tcp4_tbl_timeout_flush: flush timeout packets from a TCP/IPv4
reassembly table.
The TCP/IPv4 GRO API assumes all input packets have correct IPv4
and TCP checksums, and it doesn't update IPv4 and TCP checksums
for merged packets. If input packets are IP fragmented, the
TCP/IPv4 GRO API assumes they are complete packets (i.e. with L4
headers).
In TCP/IPv4 GRO, we use a table structure, called the TCP/IPv4 reassembly
table, to reassemble packets. A TCP/IPv4 reassembly table includes a key
array and an item array, where the key array keeps the criteria to merge
packets and the item array keeps packet information.
One key in the key array points to an item group, which consists of
packets which have the same criteria value. If two packets are able to
merge, they must be in the same item group. Each key in the key array
includes two parts:
- criteria: the criteria of merging packets. If two packets can be
merged, they must have the same criteria value.
- start_index: the index of the first incoming packet of the item group.
Each element in the item array keeps the information of one packet. It
mainly includes three parts:
- firstseg: the address of the first segment of the packet
- lastseg: the address of the last segment of the packet
- next_pkt_index: the index of the next packet in the same item group.
All packets in the same item group are chained by next_pkt_index.
With next_pkt_index, we can locate all packets in the same item
group one by one.
Processing an incoming packet takes three steps (a sketch follows this
list):
a. check if the packet should be processed. Packets with one of the
following properties won't be processed:
- FIN, SYN, RST, URG, PSH, ECE or CWR bit is set;
- packet payload length is 0.
b. traverse the key array to find a key which has the same criteria
value as the incoming packet. If one is found, go to step c. Otherwise,
insert a new key and insert the packet into the item array.
c. locate the first packet in the item group via the start_index in the
key, then traverse all packets in the item group via next_pkt_index.
If a packet which can be merged with the incoming one is found, merge
them together. Otherwise, insert the packet into this item group.
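The three steps can be summarized with the following illustrative
pseudocode; the helper names (tcp_has_flags, find_key, try_merge, ...)
and the table layout are simplified placeholders, not the library's
internal API:

    /* Illustrative pseudocode only; helpers and structures are hypothetical. */
    static int
    tcp4_gro_process(struct tcp4_tbl *tbl, struct rte_mbuf *pkt)
    {
            /* a. skip packets GRO must not merge */
            if (tcp_has_flags(pkt, FIN | SYN | RST | URG | PSH | ECE | CWR) ||
                tcp_payload_length(pkt) == 0)
                    return -1;                      /* not processed */

            /* b. look up a key with the same merge criteria */
            int key_idx = find_key(tbl, pkt);
            if (key_idx < 0)
                    return insert_new_key_and_item(tbl, pkt);

            /* c. walk the item group and try to merge */
            uint32_t idx = tbl->keys[key_idx].start_index;
            while (idx != INVALID_ARRAY_INDEX) {
                    if (try_merge(&tbl->items[idx], pkt))
                            return 0;               /* merged */
                    idx = tbl->items[idx].next_pkt_index;
            }
            return insert_item_in_group(tbl, key_idx, pkt);
    }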
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Jianfeng Tan <jianfeng.tan@intel.com>
Generic Receive Offload (GRO) is a widely used SW-based offloading
technique to reduce per-packet processing overhead. It gains
performance by reassembling small packets into large ones. This
patchset is to support GRO in DPDK. To support GRO, this patch
implements a GRO API framework.
To enable more flexibility to applications, DPDK GRO is implemented as
a user library. Applications explicitly use the GRO library to merge
small packets into large ones. DPDK GRO provides two reassembly modes.
One is called lightweight mode, the other is called heavyweight mode.
If applications want to merge packets in a simple way and the number
of packets is relatively small, they can use the lightweight mode.
If applications need more fine-grained controls, they can choose the
heavyweight mode.
rte_gro_reassemble_burst is the main reassembly API which is used in
lightweight mode and processes N packets at a time. For applications,
performing GRO in lightweight mode is simple. They just need to invoke
rte_gro_reassemble_burst. Applications can get GROed packets as soon as
rte_gro_reassemble_burst returns.
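A minimal sketch of lightweight-mode usage inside an Rx loop is shown
below; it assumes the port and queue are already set up, and the
rte_gro_param field names are taken from the GRO header (they may
differ slightly between releases):

    #include <rte_ethdev.h>
    #include <rte_gro.h>

    struct rte_gro_param gro_param = {
            .gro_types = RTE_GRO_TCP_IPV4,
            .max_flow_num = 64,
            .max_item_per_flow = 32,
    };
    struct rte_mbuf *pkts[32];
    uint16_t port_id = 0;           /* example port */
    uint16_t nb_rx, nb_pkts;

    nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);
    if (nb_rx > 0)
            /* pkts[] now holds the (possibly merged) packets */
            nb_pkts = rte_gro_reassemble_burst(pkts, nb_rx, &gro_param);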
rte_gro_reassemble is the main reassembly API which is used in
heavyweight mode and tries to merge N input packets with the packets
in GRO reassembly tables. For applications, performing GRO in heavyweight
mode is relatively complicated. Before performing GRO, applications need
to create a GRO context object, which keeps reassembly tables of the
desired GRO types, via rte_gro_ctx_create. Then applications can use
rte_gro_reassemble to merge packets. The GROed packets are kept in the
reassembly tables of the GRO context object. If applications want to get
them, they need to flush them manually via the flush API.
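A minimal sketch of heavyweight-mode usage; the exact parameters of the
flush call (e.g. whether a timeout argument is taken) may differ between
releases, so treat this as illustrative only:

    #include <rte_gro.h>
    #include <rte_lcore.h>

    struct rte_gro_param param = {
            .gro_types = RTE_GRO_TCP_IPV4,
            .max_flow_num = 1024,
            .max_item_per_flow = 32,
            .socket_id = rte_socket_id(),
    };
    void *gro_ctx = rte_gro_ctx_create(&param);
    struct rte_mbuf *flushed[64];
    uint16_t nb_unmerged, nb_flushed;

    /* packets not held in the reassembly tables are returned in pkts[] */
    nb_unmerged = rte_gro_reassemble(pkts, nb_rx, gro_ctx);

    /* later, explicitly flush merged packets out of the context;
     * the timeout argument may not be present in all releases */
    nb_flushed = rte_gro_timeout_flush(gro_ctx, 0, RTE_GRO_TCP_IPV4,
                                       flushed, 64);

    rte_gro_ctx_destroy(gro_ctx);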
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Jianfeng Tan <jianfeng.tan@intel.com>
Introduce a more versatile helper to parse device strings. This
helper expects a generic rte_devargs structure as storage in order not
to require API changes in the future, should this structure be
updated.
The old equivalent function is thus being deprecated, as its API cannot
follow rte_devargs evolutions.
A deprecation notice is issued.
This new helper will parse bus information as well as device name and
device parameters. It does not allocate an rte_devargs structure and
expects one to be given as input.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
The dpdk-test-eventdev tool is a Data Plane Development Kit (DPDK)
application that allows exercising various eventdev use cases. This
application has a generic framework to add new eventdev based test cases
to verify functionality and measure the performance parameters of DPDK
eventdev devices.
This patch adds the skeleton of the dpdk-test-eventdev application.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
The session mempool pointer is needed in each queue pair,
if session-less operations are being handled.
Therefore, the API is changed to accept this parameter,
as the session mempool is created outside the
device configuration function, similar to what ethdev
does with the rx queues.
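A sketch of how an application might pass the session mempool at queue
pair setup under this change, assuming the mempool is taken as the last
argument as described above; variable names are illustrative:

    #include <rte_cryptodev.h>
    #include <rte_lcore.h>
    #include <rte_debug.h>

    /* session_pool is created by the application beforehand */
    struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 2048 };

    if (rte_cryptodev_queue_pair_setup(dev_id, qp_id, &qp_conf,
                                       rte_socket_id(), session_pool) < 0)
            rte_exit(EXIT_FAILURE, "queue pair setup failed\n");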
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Change crypto device's session management to make it
device independent and simplify architecture when session
is intended to be used on more than one device.
The session private data is made agnostic to the underlying device
by adding an indirection in the session private data,
keyed by the crypto driver identifier.
A single session can thus hold private data for multiple device types.
New function rte_cryptodev_sym_session_init has been created,
to initialize the driver private session data per driver to be
used on a same session, and rte_cryptodev_sym_session_clear
to clear this data before calling rte_cryptodev_sym_session_free.
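A minimal sketch of the resulting session lifecycle (create once, init
per driver, clear before free); it assumes dev_id, xform and the session
mempool are already set up by the application:

    #include <rte_cryptodev.h>
    #include <rte_debug.h>

    struct rte_cryptodev_sym_session *sess;

    /* allocate a device-independent session from the session mempool */
    sess = rte_cryptodev_sym_session_create(sess_mp);

    /* initialize the driver-specific private data for this device */
    if (rte_cryptodev_sym_session_init(dev_id, sess, &xform, sess_mp) < 0)
            rte_exit(EXIT_FAILURE, "session init failed\n");

    /* ... enqueue/dequeue crypto operations using sess ... */

    /* tear down: clear the per-driver data, then free the session */
    rte_cryptodev_sym_session_clear(dev_id, sess);
    rte_cryptodev_sym_session_free(sess);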
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
The mempool pointer can be obtained from the object itself,
which means that it is not required to store the pointer
in the session.
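For reference, a minimal sketch of recovering the mempool from the
object itself (e.g. when freeing it), using the existing mempool helper;
the variable names are illustrative:

    #include <rte_mempool.h>

    /* sess points to an object that was allocated from a mempool */
    struct rte_mempool *mp = rte_mempool_from_obj(sess);
    rte_mempool_put(mp, sess);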
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Since crypto session will not be attached to a specific
device or driver, the field driver_id is not required
anymore (only used to check that a session was being
handled by the right device).
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
The device id was only needed in the crypto session by the functions
that attach/detach a session to a queue pair.
Since the session is not going to be attached to a device
anymore, this field is no longer necessary.
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Device id is going to be removed from session,
as the session will be device independent.
Therefore, the functions that attach/detach a session
to a queue pair need to be updated, to accept the device id
as a parameter, apart from the queue pair id and the session.
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Instead of creating the session mempool while configuring
the crypto device, apps will create the mempool themselves.
This way, it gives flexibility to the user to have a single
mempool for all devices (as long as the objects are big
enough to contain the biggest private session size) or
separate mempools for different drivers.
Also, since the mempool is now created outside the
device configuration function, it needs to be passed to
this function; eventually it will be passed when setting up
the queue pairs, as ethernet devices do.
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Multi-core scheduling mode is a mode where the scheduler distributes
crypto operations in a round-robin manner between several cores
assigned as workers.
Signed-off-by: Kirill Rybalchenko <kirill.rybalchenko@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Remove crypto device driver name string definitions from librte_cryptodev,
which avoids library changes every time a new crypto driver is added.
The driver name is predefined internally in each PMD.
Applications can refer to crypto device drivers via the driver name
string provided on the command line.
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Changes device type identification to be based on a unique
driver id replacing the current device type enumeration, which needed
library changes every time a new crypto driver was added.
The driver id is assigned dynamically during driver registration using
the new macro RTE_PMD_REGISTER_CRYPTO_DRIVER which returns a unique
uint8_t identifier for that driver. New APIs are also introduced
to allow retrieval of the driver id using the driver name.
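A small sketch of looking up a driver id by name with the new API; the
PMD name string used here is only an example:

    #include <stdio.h>
    #include <rte_cryptodev.h>

    /* "crypto_aesni_mb" is just an example driver name string */
    int driver_id = rte_cryptodev_driver_id_get("crypto_aesni_mb");
    if (driver_id < 0)
            printf("driver not registered\n");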
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Since Intel Multi Buffer library for IPSec has been updated to
support Scatter Gather List, the AESNI GCM PMD can link
to this library, instead of the ISA-L library.
This move eases the maintenance of the driver, as it will
use the same library as the AESNI MB PMD.
It also adds support for 192-bit keys.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
IPSec Multi-buffer library v0.46 has been released,
which includes, among other features, support for a 12-byte IV
for AES-CTR, keeping also the previous 16-byte IV
for backward compatibility reasons.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
AES-GCM support is added, using the AEAD type of crypto
operations. Support for AES-CTR is also added.
test/crypto and documentation are also updated for
dpaa2_sec to add the supported algorithms.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Now that AAD is only used in AEAD algorithms,
there is no need to keep AAD in the authentication
structure.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
AEAD operation parameters can be set in the new
aead structure, in the crypto operation.
This structure is within a union with the cipher
and authentication parameters, since operations can be:
- AEAD: using the aead structure
- Cipher only: using only the cipher structure
- Auth only: using only the authentication structure
- Cipher-then-auth/Auth-then-cipher: using both cipher
and authentication structures
Therefore, all three cannot be used at the same time.
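A sketch of filling the new aead structure in a symmetric crypto
operation; the offsets, lengths and pointer variables shown are example
values chosen by the application:

    #include <rte_crypto.h>

    /* op is an rte_crypto_op with a symmetric op attached */
    op->sym->aead.data.offset = 0;
    op->sym->aead.data.length = plaintext_len;
    op->sym->aead.digest.data = digest_ptr;
    op->sym->aead.digest.phys_addr = digest_phys;
    op->sym->aead.aad.data = aad_ptr;
    op->sym->aead.aad.phys_addr = aad_phys;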
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
AEAD algorithms such as AES-GCM previously had to be
used as a concatenation of a cipher transform and
an authentication transform.
Instead, a new transform and functions to handle it
are created to support this kind of algorithm,
making its use easier.
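A minimal sketch of an AEAD transform for AES-GCM; key and IV sizes are
example values, and IV_OFFSET is an application-chosen constant (not
part of the API):

    #include <rte_crypto_sym.h>

    struct rte_crypto_sym_xform aead_xform = {
            .type = RTE_CRYPTO_SYM_XFORM_AEAD,
            .next = NULL,
            .aead = {
                    .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
                    .algo = RTE_CRYPTO_AEAD_AES_GCM,
                    .key = { .data = key, .length = 16 },
                    .iv = { .offset = IV_OFFSET, .length = 12 },
                    .digest_length = 16,
                    .aad_length = 16,
            },
    };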
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Digest length was duplicated in the authentication transform
and the crypto operation structures.
Since the digest length is not expected to change within a
session, it is removed from the crypto operation.
Also, the length has been shrunk to 16 bits,
which should be sufficient for any digest.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Additional authenticated data (AAD) information was duplicated
in the authentication transform and in the crypto
operation structures.
Since AAD length is not meant to change within a session,
it is removed from the crypto operation structure.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Authentication algorithms, such as AES-GMAC or the wireless
algorithms (like SNOW3G), use an IV, like cipher algorithms do.
So far, AES-GMAC has used the IV from the cipher structure,
and the wireless algorithms have used the AAD field,
which is not technically correct.
Therefore, authentication IV parameters have been added,
so the API is more correct. Like the cipher IV, the auth IV is expected
to be copied after the crypto operation.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Since IV parameters (offset and length) should not
change for operations in the same session, these parameters
are moved to the crypto transform structure, so they will
be stored in the sessions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Since the IV is now copied after the crypto operation, in
its private data area, the IV can be passed with just an offset
and length.
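A sketch of writing the IV into the op's private area and referencing it
only by offset; IV_OFFSET is an application-chosen value (typically right
after the symmetric op), not something defined by the library:

    #include <string.h>
    #include <rte_crypto.h>

    /* IV_OFFSET is chosen by the application when sizing the op pool */
    #define IV_OFFSET (sizeof(struct rte_crypto_op) + \
                       sizeof(struct rte_crypto_sym_op))

    uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET);
    memcpy(iv_ptr, iv, iv_len);     /* copy the IV into the op private area */
    /* the transform then carries iv.offset = IV_OFFSET, iv.length = iv_len */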
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Instead of storing a pointer to operation specific parameters,
such as symmetric crypto parameters, use a zero-length array,
to mark that these parameters will be stored after the
generic crypto operation structure, which was already assumed
in the code, reducing the memory footprint of the crypto operation.
Besides, it is always expected to have rte_crypto_op
and rte_crypto_sym_op (the only operation specific parameters
structure right now) to be together, as they are initialized
as a single object in the crypto operation pool.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Storing a pointer to the user data is unnecessary,
since the user can store additional data after the crypto operation.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Instead of storing some crypto operation flags,
such as operation status, as enumerations,
store them as uint8_t, for memory efficiency.
Also, an extra 5 bytes are reserved in the crypto operation
for future additions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Session type (operation with or without session) is not
something specific to symmetric operations.
Therefore, the variable is moved to the generic crypto operation
structure.
Since this is an ABI change, the cryptodev library version
gets bumped.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Add a release notes update to announce support of rte_flow on igb NICs,
and update the NIC features document for this feature.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Change the rte_eth_dev_callback_process function to return int,
and add a void *ret_param parameter.
The new parameter is used by ixgbe and i40e instead of abusing
the user data of the callback.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
This patch removes the deprecated functions, as well as the notice for
the scheduler mode set/get API changes.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Move all bypass functions to ixgbe pmd and remove function
pointers from the eth_dev_ops struct.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
TX coalescing waits for ETH_COALESCE_PKT_NUM packets to be coalesced
across bursts before transmitting them. For slow traffic, such as
100 PPS, this approach increases latency since packets are received
one at a time and Tx coalescing has to wait for ETH_COALESCE_PKT_NUM
packets to arrive before transmitting.
To fix this:
- Update rx path to use status page instead and only receive packets
when either the ingress interrupt timer threshold (5 us) or
the ingress interrupt packet count threshold (32 packets) fires.
(i.e. whichever happens first).
- If the number of packets coalesced is <= the number of packets sent
by the Tx burst function, stop coalescing and transmit these packets
immediately.
Also add a compile time option to favor throughput over latency by
default.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Add code to detect and run T6 devices. Update PCI ID Device table
with Chelsio T6 device ids and update documentation.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Update enic NIC guide, release notes and add flow API to the
supported features list.
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Nelson Escobar <neescoba@cisco.com>
Add template release notes for DPDK 17.08 with inline
comments and explanations of the various sections.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
The current crypto operation and symmetric crypto operation
structures will be reworked for correctness and improvement,
reducing also their sizes, to fit into less cache lines,
as stated in the following RFC:
http://dpdk.org/dev/patchwork/patch/24011/
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
API changes are planned for 17.08 to make sessions agnostic to the
underlying devices, removing the coupling with crypto PMDs, so a single
session can be used on multiple devices.
This requires changing "struct rte_cryptodev_sym_session" to store
private data for more than one device, as well as removing the redundant
dev_id and dev_type fields.
Affected public functions:
- rte_cryptodev_sym_session_pool_create
- rte_cryptodev_sym_session_create
- rte_cryptodev_sym_session_free
While a session will not be directly associated with a device, the
following APIs will be changed, adding uint8_t dev_id to the argument list:
- rte_cryptodev_queue_pair_attach_sym_session
- rte_cryptodev_queue_pair_detach_sym_session
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Refer to the RFC patch - cryptodev: remove crypto device type enumeration.
It is planned to remove the device type enumeration rte_cryptodev_type from
the library, to remove the coupling between crypto PMDs and librte_cryptodev.
In this case the following structures will be changed: rte_cryptodev_session,
rte_cryptodev_sym_session, rte_cryptodev_info, rte_cryptodev.
It is also planned to change the function rte_cryptodev_count_devtype(),
as its prototype doesn't clearly show the operation.
From the next release, 17.08, the dev_type will be changed to driver_id,
so the function name will change to rte_cryptodev_device_count_by_driver().
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
The following PMD names definitions will be moved to the individual PMDs
to remove the coupling between crypto PMDs and the librte_cryptodev:
CRYPTODEV_NAME_NULL_PMD
CRYPTODEV_NAME_AESNI_MB_PMD
CRYPTODEV_NAME_AESNI_GCM_PMD
CRYPTODEV_NAME_OPENSSL_PMD
CRYPTODEV_NAME_QAT_SYM_PMD
CRYPTODEV_NAME_SNOW3G_PMD
CRYPTODEV_NAME_KASUMI_PMD
CRYPTODEV_NAME_ZUC_PMD
CRYPTODEV_NAME_ARMV8_PMD
CRYPTODEV_NAME_SCHEDULER_PMD
CRYPTODEV_NAME_DPAA2_SEC_PMD
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Some work remains for the VDEV bus move.
It is not yet certain whether there will be API or ABI changes.
The notice is kept and postponed until the end of this rework.
The PCI fields must be removed from cryptodev and the newly
introduced eventdev, in order to complete the bus rework.
The VLAN flags rework should be completed in 17.08.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Some VFIO functions have been exported without any rename,
thus no breakage.
The struct eth_driver has been removed, but rte_pci_driver
is still used in rte_cryptodev_driver.
Fixes: a016873eb3 ("vfio: export utility functions in map file")
Fixes: 9dca21fb80 ("ethdev: remove ethdev driver")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The deprecation of the bypass functions in the ethdev has not been
done in 17.05. Let's postpone to 17.08.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The change of _rte_eth_dev_callback_process has not been done in 17.05.
Let's postpone to 17.08.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This is an ABI change notice for DPDK 17.08 in ethdev
about changes in rte_eth_txmode structure.
Currently Tx offloads are enabled by default, and can be disabled
using ETH_TXQ_FLAGS_NO* flags. This behaviour is not consistent with
the Rx side where the Rx offloads are disabled by default and enabled
according to bit field in rte_eth_rxmode structure.
The proposal is to disable the Tx offloads by default, and provide
a way for the application to enable them in rte_eth_txmode structure.
Besides making the Tx configuration API more consistent for
applications, PMDs will be able to provide better out-of-the-box
performance.
Finally, as part of the work, the ETH_TXQ_FLAGS_NO* will
be superseded as well.
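A hedged sketch of how an application might enable Tx offloads under the
proposed scheme, assuming an offloads bit-field is added to rte_eth_txmode
as described; the flag names follow the existing DEV_TX_OFFLOAD_*
convention and the set of offloads shown is an example only:

    struct rte_eth_conf port_conf = {
            .txmode = {
                    /* offloads disabled by default, enabled explicitly */
                    .offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
                                DEV_TX_OFFLOAD_TCP_CKSUM,
            },
    };
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);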
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The PCI and virtual bus are planned to be moved to the generic
drivers/bus directory in v17.08. For this change to be possible, the EAL
must be made completely independent.
The rte_devargs structure currently holds device representation internal
to those two busses. It must be made generic before this work can be
completed.
Instead of using either a driver name for a vdev or a PCI address for a
PCI device, a devargs structure will have to be able to describe any
possible device on all busses, without introducing dependencies on
any bus-specific device representation. This will break the ABI for this
structure.
Additionally, an evolution will occur regarding the device parsing
from the command-line. A user must be able to set which bus will handle
which device, and this setting is integral to the definition of a
device.
The format has not yet been formally defined, but a proposition will
follow soon for a new command line parameter format for all devices.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Add tested Intel platforms with Intel NICs to the release note.
Signed-off-by: Yulong Pei <yulong.pei@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Because UIO supports only one interrupt, when the ``igb_uio``
module is loaded and the l3fwd-power app is running, link status
retrieval does not work properly.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Revert patches to provide a clear view for the
upcoming changes. The reverted patches are listed below:
commit ea85e7d711 ("ethdev: retrieve xstats by ID")
commit a954495245 ("ethdev: get xstats ID by name")
commit 1223608adb ("app/proc-info: support xstats by ID")
commit 25e38f09af ("net/e1000: support xstats by ID")
commit 923419333f ("net/ixgbe: support xstats by ID")
Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
The l2fwd-keepalive example has infinite processing loops and as a
result the only way to exit it is via SIGINT/SIGTERM (e.g. Control-C).
The resulting shutdown is unclean, which is fixed by adding a signal
handler that causes the processing loops to break.
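The fix follows the usual DPDK example pattern; a minimal sketch (the
variable and function names are illustrative, not necessarily those used
in the patch):

    #include <signal.h>
    #include <stdbool.h>

    static volatile bool force_quit;

    static void
    signal_handler(int signum)
    {
            if (signum == SIGINT || signum == SIGTERM)
                    force_quit = true;      /* main loops test this flag */
    }

    /* in main(): */
    signal(SIGINT, signal_handler);
    signal(SIGTERM, signal_handler);
    /* ... and the processing loops use: while (!force_quit) { ... } */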
Fixes: e64833f227 ("examples/l2fwd-keepalive: add sample application")
Signed-off-by: Remy Horton <remy.horton@intel.com>
Tested-by: Roman Korynkevych <romanx.korynkevych@intel.com>
Introduced a new function, rte_eth_xstats_get_id_by_name,
to retrieve xstats ids by their names.
doc: added release note
Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Extended xstats API in ethdev library to allow grouping of stats
logically so they can be retrieved per logical grouping managed
by the application.
Changed existing functions rte_eth_xstats_get_names and
rte_eth_xstats_get to use a new list of arguments: array of ids
and array of values. ABI versioning mechanism was used to
support backward compatibility.
Introduced two new functions rte_eth_xstats_get_all and
rte_eth_xstats_get_names_all which keeps functionality of the
previous ones (respectively rte_eth_xstats_get and
rte_eth_xstats_get_names) but use new API inside.
test-pmd: add support for the new xstats API retrieving by id in the
testpmd application: xstats_get() and
xstats_get_names() are called with modified parameters.
doc: add description for the modified xstats API
Documentation change for the modified extended statistics API functions.
The old API only allows retrieval of *all* of the NIC statistics
at once. Given this requires an MMIO read PCI transaction per statistic,
it is an inefficient way of retrieving just a few key statistics.
Often a monitoring agent only has an interest in a few key statistics,
and the old API forces wasting CPU time and PCIe bandwidth in retrieving
*all* statistics; even those that the application didn't explicitly
show an interest in.
The new, more flexible API allows retrieval of statistics per ID.
If a PMD wishes, it can be implemented to read just the required
NIC registers. As a result, the monitoring application no longer wastes
PCIe bandwidth and CPU time.
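A minimal sketch of retrieving a single statistic by ID; the exact
function names and signatures of the by-ID calls have varied between
releases, so this is illustrative only, and the statistic name is just
an example:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    uint64_t id, value;

    /* look up the ID of one statistic by its name */
    if (rte_eth_xstats_get_id_by_name(port_id, "rx_good_packets", &id) == 0) {
            /* retrieve only that statistic, instead of the full set */
            if (rte_eth_xstats_get_by_id(port_id, &id, &value, 1) == 1)
                    printf("rx_good_packets = %" PRIu64 "\n", value);
    }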
Signed-off-by: Jacek Piasecki <jacekx.piasecki@intel.com>
Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Some scheduling modes may need extra options to be configured;
this patch adds the function prototypes for setting/getting
options.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
When ``igb_uio`` is loaded with ``intr_mode=legacy`` and the link
status interrupt is tested, an Input/Output error occurs when reading
the file descriptor, since the INTx interrupt is not supported by
X710/XL710/XXV710.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch adds the NXP dpaa2 architecture and pmd details
in the Network interfaces section.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch deprecates the following functions in 17.05,
which will be removed in 17.08.
- rte_crpytodev_scheduler_mode_get()
- rte_crpytodev_scheduler_mode_set()
These two new functions replace them, fixing the typo in their names.
- rte_cryptodev_scheduler_mode_get()
- rte_cryptodev_scheduler_mode_set()
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
DOCSIS BPI mode is handled in the QAT PMD by sending full blocks to the
hardware device for encryption and using OpenSSL libcrypto for pre- or
post-processing of any partial blocks.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Adds support in the OpenSSL PMD for an algorithm following the DOCSIS
specification, which combines DES-CBC for full DES blocks (8 bytes)
and DES-CFB for the last runt block (less than 8 bytes).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Tested-by: Yang Gang <gangx.yang@intel.com>
The underlying IPSec Multi-buffer library implements the
DOCSIS specification, so this commit adds support
for this new feature, which combines AES-CBC for full
AES blocks (16 bytes) and AES-CFB for the last runt block
(less than 16 bytes).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Fail-over mode works with 2 slaves, a primary slave and a secondary slave.
In this mode, the scheduler will enqueue the incoming crypto op burst
to the primary slave. When one or more crypto ops fail to be
enqueued, they will then be enqueued to the secondary slave.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Packet-size based distribution mode is a scheduling mode that works with 2
slaves, a primary slave and a secondary slave, and distributes the enqueued
crypto ops to them based on their data lengths. A crypto op will be
distributed to the primary slave if its data length is equal to or bigger
than the designated threshold; otherwise it will be handled by the
secondary slave.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
HW based crypto drivers may only support a limited number of
sessions per queue pair. This requires support for attaching
sessions to a specific queue pair. New APIs are introduced to
attach/detach a session to/from a particular queue pair.
These are optional APIs.
An application can call the attach API after creating a session
and can call the detach API before deleting a session.
The application needs to check whether max_nb_sessions_per_qp > 0;
if so, it should call the attach API.
max_nb_sessions_per_qp = 0 means an unlimited number of sessions per qp.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Remove the DPDK QAT sample app, in favour of the newer applications
that use the cryptodev library: ipsec-gw and l2fwd-crypto,
which have support for Intel QuickAssist devices.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch changes the device configuration API in the rte_cryptodev_ops
function prototypes, and updates all cryptodev PMDs for this change.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Add a library designed to calculate latency statistics and report them
to the application when queried. The library measures minimum, average and
maximum latencies, and jitter, in nanoseconds. The current implementation
supports global latency stats, i.e. per application stats.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Signed-off-by: Remy Horton <remy.horton@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch adds a library that calculates peak and average data-rate
statistics for ethernet devices. These statistics are reported using
the metrics library.
Signed-off-by: Remy Horton <remy.horton@intel.com>
This patch adds a new information metrics library. This Metrics
library implements a mechanism by which producers can publish
numeric information for later querying by consumers. Metrics
themselves are statistics that are not generated by PMDs, and
hence are not reported via ethdev extended statistics.
Metric information is populated using a push model, where
producers update the values contained within the metric
library by calling an update function on the relevant metrics.
Consumers receive metric information by querying the central
metric data, which is held in shared memory.
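A minimal sketch of the push model from a producer's point of view; the
metric name is an example and RTE_METRICS_GLOBAL is used for a metric
not tied to a specific port:

    #include <rte_metrics.h>
    #include <rte_lcore.h>

    int key;

    rte_metrics_init(rte_socket_id());

    /* producer: register a metric name once, then push updates */
    key = rte_metrics_reg_name("example_metric");
    if (key >= 0)
            rte_metrics_update_value(RTE_METRICS_GLOBAL, key, 42);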
Signed-off-by: Remy Horton <remy.horton@intel.com>
The ring and distributor reworks are done.
Fixes: a6619414e0 ("ring: make struct and macros type agnostic")
Fixes: 775003ad2f ("distributor: add new burst-capable library")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Deprecate the following functions:
- rte_set_log_level(), replaced by rte_log_set_global_level()
- rte_get_log_level(), replaced by rte_log_get_global_level()
- rte_set_log_type(), replaced by rte_log_set_level()
- rte_get_log_type(), replaced by rte_log_get_level()
The new functions provide a better control of the per-type log level,
and have a better name prefix (rte_log_).
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Introduce 2 new functions to support dynamic log types:
- rte_log_register(): register a log name, and return a log type id
- rte_log_set_level(): set the log level of a given log type
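A small sketch of registering and using a dynamic log type; the log type
name is an example:

    #include <rte_log.h>

    int my_logtype = rte_log_register("user.myapp");
    if (my_logtype >= 0) {
            rte_log_set_level(my_logtype, RTE_LOG_DEBUG);
            rte_log(RTE_LOG_INFO, my_logtype, "dynamic log type registered\n");
    }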
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The reorganization of the mbuf structure induces an ABI breakage.
Bump the library version, and update the documentation accordingly.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Rename "rte_virtio_net.h" to "rte_vhost.h", to not let it be virtio
net specific.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
We used to use rte_vhost_driver_session_start() to trigger the vhost-user
session. It takes no argument, thus it's a global trigger, and it could
be problematic.
The issue is, currently, rte_vhost_driver_register(path, flags) actually
tries to put it into the session loop (by fdset_add). However, it needs
a set of APIs to set a vhost-user driver properly:
* rte_vhost_driver_register(path, flags);
* rte_vhost_driver_set_features(path, features);
* rte_vhost_driver_callback_register(path, vhost_device_ops);
If a new vhost-user driver is registered after the trigger (think of
OVS-DPDK, which could add a port dynamically from the cmdline), the current
code effectively starts the session for the new driver just after the first
API, rte_vhost_driver_register(), is invoked, leaving later calls taking
no effect at all.
To handle the case properly, this patch introduces a new API,
rte_vhost_driver_start(path), to trigger a specific vhost-user driver.
To do that, rte_vhost_driver_register(path, flags) is simplified
to create the socket only, and rte_vhost_driver_start(path)
actually puts it into the session loop.
Meanwhile, rte_vhost_driver_session_start is removed: we can hide
the session thread internally (create the thread if it has not been
created). This also simplifies the application.
NOTE: the API order in prog guide is slightly adjusted for showing the
correct invoke order.
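A minimal sketch of the resulting per-driver setup sequence; the socket
path, features value and callback names are illustrative:

    #include <rte_vhost.h>

    static const struct vhost_device_ops ops = {
            .new_device = new_device_cb,        /* illustrative callbacks */
            .destroy_device = destroy_device_cb,
    };

    rte_vhost_driver_register("/tmp/vhost-user.sock", 0);
    rte_vhost_driver_set_features("/tmp/vhost-user.sock", features);
    rte_vhost_driver_callback_register("/tmp/vhost-user.sock", &ops);
    rte_vhost_driver_start("/tmp/vhost-user.sock");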
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
rename "virtio_net_device_ops" to "vhost_device_ops", to not let it
be virtio-net specific.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
They are virtio-net specific and should be defined inside the virtio-net
driver.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
We used to use rte_vhost_get_queue_num() to tell how many vrings there are.
However, the return value is the number of "queue pairs", which is
very virtio-net specific. To make it generic, we should return the
number of vrings instead, and let the driver do the proper translation.
For example, the virtio-net driver could turn it into the number of queue
pairs by dividing by 2.
Meanwhile, mark rte_vhost_get_queue_num as deprecated.
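A small sketch of the generic replacement; the queue-pair translation
shown is what a virtio-net driver would do:

    #include <rte_vhost.h>

    uint16_t nr_vrings = rte_vhost_get_vring_num(vid);
    uint16_t nr_queue_pairs = nr_vrings / 2;  /* virtio-net specific translation */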
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Assume there is an application that supports both vhost-user net and
vhost-user scsi; the callbacks should be different. Making the notify
ops per vhost driver allows the application to define different sets of
callbacks for different drivers.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The rte_eth_vhost_feature_disable/enable/get APIs are just wrappers of
rte_vhost_feature_disable/enable/get. However, the latter are going to
be refactored; they are going to take an extra parameter (the socket_file
path), to make them per-device.
Instead of changing those vhost-pmd APIs to adapt to the new vhost APIs,
we can simply remove them, and let vdev options serve this purpose. After
all, vdev options are better for disabling/enabling some features.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
So far, virtio-user with vhost-user as the backend can only support
client mode. So when the vhost-user backend is down, i.e., the unix socket
connection is broken, the connection cannot be re-established. We will
forcibly set the link state to down.
Note: virtio-user with vhost-kernel as the backend still cannot
support lsc now, as we have not found a way to monitor the backend
(tap device) up/down events.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
For rxq interrupts, the device (backend driver) notifies the driver
through callfd; each virtqueue has a callfd. To keep compatible
with the existing framework, we give these callfds to the
interrupt thread for listening for interrupts.
Before that, we need to allocate an intr_handle and fill the callfds
into it, so that the driver can use it to set up rxq interrupt mode.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
This patch implements support for the Virtio MTU feature.
When negotiated, the host shares its maximum supported MTU,
which is used as initial MTU and as maximum MTU the application
can set.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Updates the documentation and feature lists for the AVP PMD device.
Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Vincent Jardin <vincent.jardin@6wind.com>
Set some TCs to strict priority mode.
It's a global setting on a physical port.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Set a specific TC's max bandwidth on a VF.
User can call the API to set VF TC's max bandwidth
from PF.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Allocate all TCs' relative bandwidth (percentage) on
a specific VF.
It can be taken as relative min bandwidth setting.
User can call the API to set VF TC's min bandwidth
from PF.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Support setting VF max bandwidth from PF.
User can call the API on PF to set a specific VF's
max bandwidth.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Only the pattern items VOID, ETH and the actions VOID, QUEUE are now
supported.
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
With all vmxnet3 version 3 changes incorporated in the vmxnet3 driver,
the driver can configure emulation to run at vmxnet3 version 3, provided
the emulation advertises support for version 3.
This patch also updates release notes.
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Acked-by: Jin Heo <heoj@vmware.com>
* Add link block check for KR.
* Complete HW initialization even if SFP is not present.
* Add VF xcast promiscuous mode.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch enables the i40e driver on PowerPC along with its AltiVec
intrinsic support.
Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Acked-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Currently device hotplug is only supported for UIO managed devices.
This patch adds the same functionality with VFIO.
It has been validated through tests using IOMMU and also with
VFIO in no-iommu mode.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Introduce a new API to get the status of a descriptor.
For Rx, it is almost similar to the rx_descriptor_done API, except it
differentiates "used" descriptors (which are held by the driver and not
returned to the hardware).
For Tx, it is a new API.
The descriptor_done() API, and probably the rx_queue_count() API, could
be replaced by this new API as soon as it is implemented on all PMDs.
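A minimal sketch of polling the new status API; the constant names follow
the ethdev RTE_ETH_RX_DESC_*/RTE_ETH_TX_DESC_* convention and the offset
values are examples:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* check the descriptor 16 entries ahead of the next Rx position */
    if (rte_eth_rx_descriptor_status(port_id, queue_id, 16) ==
                    RTE_ETH_RX_DESC_DONE)
            printf("at least 16 packets are waiting in the Rx queue\n");

    /* check whether the Tx ring still has room 32 entries ahead */
    if (rte_eth_tx_descriptor_status(port_id, queue_id, 32) ==
                    RTE_ETH_TX_DESC_FULL)
            printf("Tx queue is backed up\n");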
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add an extra parameter to the ring dequeue burst/bulk functions so that
those functions can optionally return the number of remaining objects in
the ring. This information can be used by applications in a number of ways;
for instance, with single-consumer queues, it provides a max
dequeue size which is guaranteed to work.
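A small sketch of the new dequeue form; passing NULL when the remaining
count is not needed is also possible:

    #include <rte_ring.h>

    void *objs[32];
    unsigned int avail;

    unsigned int n = rte_ring_dequeue_burst(ring, objs, 32, &avail);
    /* n objects were dequeued; avail objects remain in the ring */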
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Add an extra parameter to the ring enqueue burst/bulk functions so that
those functions can optionally return the amount of free space in the
ring. This information can be used by applications in a number of ways,
for instance, with single-producer queues, it provides a max
enqueue size which is guaranteed to work. It can also be used to
implement watermark functionality in apps, replacing the older
functionality with a more flexible version, which enables apps to
implement multiple watermark thresholds, rather than just one.
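A small sketch of an application-level watermark implemented with the
returned free space; the threshold constant and the callback are
hypothetical, application-defined names:

    #include <rte_ring.h>

    unsigned int free_space;
    unsigned int n = rte_ring_enqueue_burst(ring, objs, nb_objs, &free_space);

    /* app-defined watermark check, replacing the old built-in one */
    if (free_space < WATERMARK_THRESHOLD)
            handle_ring_nearly_full();      /* illustrative callback */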
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The bulk functions for rings return 0 for all elements enqueued and
negative for no space. Change that to make them consistent with the burst
functions in returning the number of elements enqueued/dequeued, i.e. 0 or N.
This change also allows the return value from enq/deq to be used directly
without a branch for error checking.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Remove the watermark support. A future commit will add support for having
enqueue functions return the amount of free space in the ring, which will
allow applications to implement their own watermark checks, while also
being more useful to the app.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
There was a compile time setting to enable a ring to yield when
it entered a loop in mp or mc rings waiting for the tail pointer update.
Build time settings are not recommended for enabling/disabling features,
and since this was off by default, remove it completely. If needed, a
runtime enabled equivalent can be used.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The debug option only provided statistics to the user, most of
which could be tracked by the application itself. Remove this as a
compile time option, and feature, simplifying the code.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Users compiling DPDK should not need to know or care about the arrangement
of cachelines in the rte_ring structure. Therefore just remove the build
option and set the structures to be always split. On platforms with 64B
cachelines, for improved performance use 128B rather than 64B alignment
since it stops the producer and consumer data being on adjacent cachelines.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Add a new API to force freeing of consumed buffers on a Tx ring. The API
returns the number of packets freed (0-n), or an error code if the feature
is not supported (-ENOTSUP) or the input is invalid (-ENODEV).
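A small sketch of the new call; the queue id and count are example values:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* ask the driver to free up to 64 already-transmitted mbufs on queue 0 */
    int freed = rte_eth_tx_done_cleanup(port_id, 0, 64);
    if (freed < 0)
            printf("cleanup not supported or invalid input (%d)\n", freed);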
Signed-off-by: Billy McFall <bmcfall@redhat.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
This patch extends the next_hop field from 8 bits to 21 bits in the LPM
library for IPv6.
Added versioning symbols to functions and updated
library and applications that have a dependency on LPM library.
Signed-off-by: Vladyslav Buslov <vladyslav.buslov@harmonicinc.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
As announced in the deprecation notice, remove the functions for
single/multi producer/consumer enqueue/dequeue.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Add template release notes for DPDK 17.05 with inline
comments and explanations of the various sections.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Bruce Richardson <bruce.ricahrdson@intel.com>
A new parameter is planned to be added in the 17.05 release to
rte_cryptodev_info.sym - max_nb_sessions_per_qp.
This will allow applications to know the maximum number of sessions
which can be attached to the queue pairs of a device.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
I made a vhost ABI/API refactoring at v16.04, meant to avoid such issues
forever. Well, apparently, I lied.
People are looking for more vhost-user options nowadays, other than
vhost-user net only. For example, SPDK (Storage Performance Development
Kit) is looking at vhost-user SCSI and vhost-user block.
They also need a vhost-user backend; since DPDK already
has a (mature enough) backend, they don't want to implement it again
from scratch, and want to leverage the one DPDK provides.
However, the last refactoring hasn't done that right, at least it's
not friendly for extending vhost-user to add more devices support.
For example, each virtio device has its own feature set, while
APIs like rte_vhost_feature_disable(feature_mask) have no option to
tell the device type. Thus, a more proper API should look like:
rte_vhost_feature_disable(device_type, feature_mask);
Besides that, few public files and structures should be renamed, to
not let it bind to virtio-net. Specifically, they are:
- virtio_net_device_ops --> vhost_device_ops
- rte_virtio_net.h --> rte_vhost.h
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In 17.05, nine rte_eth_dev_* functions for bypass control,
implemented only in ixgbe, will be removed from ethdev,
renamed and moved to the ixgbe PMD-specific API.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
The change of _rte_eth_dev_callback_process has not been done in 17.02.
Let's postpone to 17.05.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Some vfio symbols need to be exported outside librte_eal.
For that, they need to be renamed to rte_* naming convention.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The new bus model has been proposed in 17.02 without being used.
The big rework should happen in 17.05.
Suggested-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Jan Blunck <jblunck@infradead.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Document proposed changes for the rings code in the next release.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add some text and rearrange lists to make sure it is clear that the
tested platforms listed in the release notes are some combinations
of the items in each group.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Add tested Intel platforms with Intel NICs to the release note.
Signed-off-by: Yulong Pei <yulong.pei@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
These sections do not provide the exact tests that were done nor whether
specific NICs are supported by all platforms.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Given that the packet distributor library improvements (1) will
not be in 17.02, I plan on doing some consolidation of the
API for burst operation for 17.05, merging the two APIs into
one, with options for single or burst operation.
(1) http://dpdk.org/dev/patchwork/patch/19911/
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The feature is part of 17.02, so the ABI changes notice can be removed.
Fixes: 4fb7e803eb ("ethdev: add Tx preparation")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This ABI change removes iomem and ioport mapping in igb_uio. The
purpose of this change was to fix a bug: when a DPDK app crashes,
the devices bound to igb_uio are not stopped by either the DPDK PMD
or the igb_uio driver.
It was then pointed out by Stephen Hemminger that this has a
backward compatibility issue: an old version of DPDK cannot run on
the modified igb_uio.
However, we have not yet figured out a way to fix this bug
without this change. Let's postpone this deprecation announcement
in case this change cannot be avoided.
Fixes: 3bac1dbc1e ("doc: announce iomem and ioport removal from igb_uio")
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Suggested-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Binding X710/XL710/XXV710 devices to the "uio_pci_generic" module
fails; the "uio_pci_generic" module is not supported by
X710/XL710/XXV710.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Add documentation to describe using the new performance test application.
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Add documentation about the driver and update
release notes.
Signed-off-by: Zbigniew Bodek <zbigniew.bodek@caviumnetworks.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The current cryptodev AES-NI GCM PMD is implemented using the Multi Buffer
Crypto library. This patch reimplements the device using the ISA-L Crypto
library: https://github.com/01org/isa-l_crypto.
The migration adds support for:
* GMAC algorithm.
* 256-bit cipher keys.
* Session-less mode.
* Out-of-place processing.
* Scatter-gather support for chained mbufs (out-of-place only; the
destination mbuf must be contiguous), sketched below.
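A minimal sketch of what the scatter-gather input permits, assuming the
application has already created the mempool and the crypto operation; the
function and variable names are illustrative:

#include <rte_mbuf.h>
#include <rte_crypto.h>

/* Build a chained (two-segment) source mbuf and a contiguous destination
 * mbuf for out-of-place processing. 'pool' and 'op' are assumed to be
 * set up elsewhere by the application. */
static int
attach_sgl_source(struct rte_crypto_op *op, struct rte_mempool *pool)
{
	struct rte_mbuf *seg1 = rte_pktmbuf_alloc(pool);
	struct rte_mbuf *seg2 = rte_pktmbuf_alloc(pool);
	struct rte_mbuf *dst = rte_pktmbuf_alloc(pool); /* must be contiguous */

	if (seg1 == NULL || seg2 == NULL || dst == NULL)
		return -1;

	if (rte_pktmbuf_chain(seg1, seg2) < 0) /* link the source segments */
		return -1;

	op->sym->m_src = seg1; /* chained source is now accepted */
	op->sym->m_dst = dst;  /* out-of-place destination stays contiguous */
	return 0;
}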
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Update driver to use new AESNI Multibuffer IPSec library single
operation functionality (cipher only and authentication only).
This patch also adds tests for this new feature.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The Intel(R) Multi Buffer Crypto library used in the AESNI MB PMD
has been moved to a new repository, in github.
This patch updates the link where it can be downloaded.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Elastic Flow Distributor (EFD) is a distributor library that uses
perfect hashing to determine a target/value for a given incoming flow key.
It has the following advantages:
- First, because it uses perfect hashing, it does not store
the key itself and hence lookup performance is not dependent
on the key size.
- Second, the target/value can be any arbitrary value, so the
system designer and/or operator can better optimize service rates
and the placement of inter-cluster network traffic.
- Third, since the storage requirement is much smaller than a hash-based
flow table (i.e. a better fit for the CPU cache), EFD can scale to
millions of flow keys.
Finally, with the current optimized library implementation, performance
is fully scalable with the number of CPU cores.
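As a minimal usage sketch (parameter values are illustrative; real code
should check the return codes documented in rte_efd.h):

#include <rte_efd.h>

static void
efd_example(unsigned int socket_id)
{
	struct rte_efd_table *table;
	uint32_t key = 0x0a000001;   /* e.g. an IPv4 address used as the flow key */
	efd_value_t value = 3;       /* target chosen by the system designer */
	efd_value_t looked_up;

	/* 1M flows, 4-byte keys, online lookups on this socket only. */
	table = rte_efd_create("flow_table", 1 << 20, sizeof(key),
			1ULL << socket_id, socket_id);
	if (table == NULL)
		return;

	/* Insert/update the key -> value mapping; non-zero return codes
	 * (see rte_efd.h) signal warnings or failure. */
	rte_efd_update(table, socket_id, &key, value);

	/* Lookup returns the stored target; the key itself is never stored. */
	looked_up = rte_efd_lookup(table, socket_id, &key);
	(void)looked_up;

	rte_efd_free(table);
}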
Signed-off-by: Byron Marohn <byron.marohn@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Signed-off-by: Saikrishna Edupuganti <saikrishna.edupuganti@intel.com>
Acked-by: Christian Maciocco <christian.maciocco@intel.com>
Update the doc and release note.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Vincent Jardin <vincent.jardin@6wind.com>
Add PCI device ID for ConnectX-5 and enable multi-packet send for PF and VF
along with changing documentation and release note.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Remove the following APIs:
- rte_eth_dev_set_vf_rxmode
- rte_eth_dev_set_vf_rx
- rte_eth_dev_set_vf_tx
- rte_eth_dev_set_vf_vlan_filter
- rte_eth_dev_set_vf_rate_limit
Increment LIBABIVER in the Makefile.
Remove the deprecation notice for removing the rte_eth_dev_set_vf_* APIs.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Enable the PMD by default on supported configurations.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds a new API, rte_eth_dev_fw_version_get(), for
fetching the firmware version of a given device.
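A minimal usage sketch (the buffer size is illustrative; in this release
the port id is a uint8_t):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_fw_version(uint8_t port_id)
{
	char fw_version[64];

	/* The driver fills a human-readable firmware version string;
	 * 0 means success, other return values are documented in rte_ethdev.h. */
	if (rte_eth_dev_fw_version_get(port_id, fw_version,
			sizeof(fw_version)) == 0)
		printf("port %u firmware: %s\n", (unsigned int)port_id,
				fw_version);
}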
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch mainly allocates a structure to store the queue/irq mapping,
and configures the queue/irq mapping down through PCI ops. It also creates
an eventfd for each Rx queue and tells the kernel about the eventfd/interrupt
binding.
Note: So far, we hard-code a 1:1 queue/irq mapping (each Rx queue has
one exclusive interrupt), like this:
vec 0 -> config irq
vec 1 -> rxq0
vec 2 -> rxq1
...
which means the "vectors" option of QEMU should be configured with
a value >= N+1 (where N is the number of queue pairs).
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Tested-by: Lei Yao <lei.a.yao@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch adds support for vhost-kernel as the backend for virtio_user.
Three main hook functions are added:
- vhost_kernel_setup() to open the char device; each vq pair needs one
vhostfd;
- vhost_kernel_ioctl() to exchange control messages with the vhost
kernel module;
- vhost_kernel_enable_queue_pair() to open the tap device and set it
as the backend of the corresponding vhost fd (that is to say, the vq pair).
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The introduction of virtio 1.0 support brings yet another set of ops which,
unfortunately, is not handled correctly and breaks multi-process support.
The issue is that the data/function pointers may differ between processes,
while the old code set them only once (for the primary process only). As a
result, the function pointer a secondary process sees actually points into
the primary process's address space, and accessing it is likely to result
in a crash.
Thanks to the previous patches, we are now able to keep such per-process
information locally, meaning every process has its own copy of each item,
with the correct values set. This is what this patch does:
- remap the PCI resources (IO port for legacy devices and memory map for
modern devices)
- set vtpci_ops correctly
After that, multi-process works like a charm. (At least, it passed my
fuzz testing.)
Fixes: b8f04520ad ("virtio: use PCI ioport API")
Fixes: d5bbeefca8 ("virtio: introduce PCI implementation structure")
Fixes: 6ba1f63b5a ("virtio: support specification 1.0")
Cc: stable@dpdk.org
Reported-by: Juho Snellman <jsnell@iki.fi>
Reported-by: Yaron Illouz <yaroni@radcom.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
MACsec (or LinkSec) is a MAC-level encryption/authentication
scheme defined in IEEE 802.1AE that uses symmetric cryptography.
This commit adds the MACsec offload support for ixgbe.
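A minimal sketch of turning the offload on through the PMD-specific API
added by this series (parameter values are illustrative):

#include <rte_pmd_ixgbe.h>

/* Enable MACsec offload on an ixgbe port: encryption on, replay
 * protection off. */
static int
enable_macsec(uint8_t port_id)
{
	return rte_pmd_ixgbe_macsec_enable(port_id, 1 /* encrypt */,
			0 /* replay protection */);
}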
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
They are superseded by the generic flow API (rte_flow). Target release is
not defined yet.
Suggested-by: Kevin Traynor <ktraynor@redhat.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Olga Shern <olgas@mellanox.com>
Doing a device information query on a non-PCI device such as
vhost resulted in dereferencing a NULL pointer
(the absent PCI data), causing a segmentation fault.
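A minimal sketch of the kind of guard involved, using the rte_eth_dev_info
fields of this era (function and buffer names are illustrative):

#include <stdio.h>
#include <rte_ethdev.h>

static void
fill_bus_info(uint8_t port_id, char *bus_info, size_t len)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);

	/* Non-PCI ports (e.g. vhost) leave pci_dev unset, so it must be
	 * checked before the PCI address is formatted. */
	if (dev_info.pci_dev != NULL)
		snprintf(bus_info, len, "%04x:%02x:%02x.%x",
				dev_info.pci_dev->addr.domain,
				dev_info.pci_dev->addr.bus,
				dev_info.pci_dev->addr.devid,
				dev_info.pci_dev->addr.function);
	else
		snprintf(bus_info, len, "N/A");
}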
Fixes: bda68ab9d1 ("examples/ethtool: add user-space ethtool sample application")
Signed-off-by: Remy Horton <remy.horton@intel.com>