Meter statistics currently use one counter per policer action,
four counters per meter in total. This causes cache misses
and degrades data forwarding performance.
To optimize this, support only a pass counter for green
and a drop counter for red, i.e. two counters per meter.
Also use the global drop statistics for all meter drop actions.
The limitations are as follows:
1. The yellow counter is not supported and returns 0.
2. All meter colors with the drop action are counted only by the
global drop statistics.
3. The red color must be configured with the drop action.
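For reference, a minimal sketch of reading the remaining counters
through the generic rte_mtr API (port and meter IDs are placeholders;
which fields are actually valid is reported via stats_mask):

#include <inttypes.h>
#include <stdio.h>
#include <rte_mtr.h>

/* Hypothetical helper: read the counters kept for one meter. */
static void
print_meter_stats(uint16_t port_id, uint32_t mtr_id)
{
    struct rte_mtr_stats stats;
    struct rte_mtr_error error;
    uint64_t stats_mask = 0;

    /* Non-clearing read; stats_mask reports which fields are valid. */
    if (rte_mtr_stats_read(port_id, mtr_id, &stats, &stats_mask,
                           0, &error) != 0)
        return;

    /* The green pass counter is kept per meter. */
    printf("green pkts: %" PRIu64 "\n", stats.n_pkts[RTE_COLOR_GREEN]);
    /* Yellow is not counted with this optimization and reads as 0. */
    printf("yellow pkts: %" PRIu64 "\n", stats.n_pkts[RTE_COLOR_YELLOW]);
    /* Drops are accounted in the global drop statistics. */
    printf("dropped pkts: %" PRIu64 "\n", stats.n_pkts_dropped);
}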
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The patch 'net/hns3: rename Rx burst function' changed the 'simple'
Rx function name from 'scalar' to 'scalar simple', but the doc was
not updated accordingly.
This patch fixes it.
Fixes: aa5baf47e1 ("net/hns3: rename Rx burst function")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Command lines for testing connection tracking are added. To create
a conntrack object, three parts are needed:
set conntrack com peer ...
set conntrack orig scale ...
set conntrack rply scale ...
This will create a full conntrack action structure for the indirect
action. After the indirect action handle of "conntrack" is created,
it can be used in flow creation. Before updating, the same structure
is also needed, together with the update command "conntrack_update",
to update the "dir" or "ctx".
After the flow with the conntrack action is created, the packet
should jump to the next flow for result checking with the conntrack
item. The state is defined with bits and a valid combination can be
supported.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
This commit introduces the conntrack action and item.
Usually HW offloading is stateless. For some stateful offloading,
like a TCP connection, the HW module helps provide the ability of
full offloading without SW participation after the connection is
established.
The basic usage is that in the first flow rule the application should
add the conntrack action and jump to the next flow table. In the
following flow rule(s) of the next table, the application should use
the conntrack item to match on the result.
A TCP connection has traffic in two directions. To set a conntrack
action context correctly, information from packets of both
directions is required.
The conntrack action should be created on one ethdev port, with the
peer ethdev port supplied as a parameter to the action. After the
context is created, it can only be used between these two ethdev
ports (dual-port mode) or on a single port. The application should
modify the action via the API "rte_flow_action_handle_update" only
right before using it to create a flow rule with conntrack for the
opposite direction.
This helps the driver to recognize the direction of the flow to
be created, especially in single-port mode, in which case the
traffic from both directions goes through the same ethdev port
if the application works as a "forwarding engine" and not an end
point. There is no need to call the update interface if the
subsequent flow rules have nothing to be changed.
Query will be supported via the "rte_flow_action_handle_query"
interface, reporting the current packet information and connection
status. Which fields can be queried depends on the HW.
For the packets received during the conntrack setup, it is suggested
to re-inject them in order to make sure the conntrack module works
correctly without missing any packet. Only valid packets should pass
the conntrack; packets with invalid TCP information, like out of
window, or with an invalid header, like malformed, should not pass.
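Putting the above together, a minimal sketch (illustrative only, not
part of the patch) could look as follows; the IDs, the ingress-only
attribute and matching only the valid-packet bit are assumptions, and
most of the conntrack context (TCP state, sequence numbers, windows)
must be filled from the observed handshake packets before creation:

#include <rte_flow.h>

/* Create the conntrack context as an indirect action on one port. */
static struct rte_flow_action_handle *
create_ct_handle(uint16_t port_id, uint16_t peer_port_id)
{
    struct rte_flow_action_conntrack ct = {
        .peer_port = peer_port_id,
        .enable = 1,
        /* remaining fields taken from the observed connection */
    };
    struct rte_flow_indir_action_conf conf = { .ingress = 1 };
    struct rte_flow_action action = {
        .type = RTE_FLOW_ACTION_TYPE_CONNTRACK,
        .conf = &ct,
    };
    struct rte_flow_error err;

    return rte_flow_action_handle_create(port_id, &conf, &action, &err);
}

/* In the next flow table, match on the conntrack verdict. */
static const struct rte_flow_item_conntrack ct_ok = {
    .flags = RTE_FLOW_CONNTRACK_PKT_STATE_VALID,
};
static const struct rte_flow_item ct_pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_ETH },
    { .type = RTE_FLOW_ITEM_TYPE_CONNTRACK,
      .spec = &ct_ok, .mask = &ct_ok },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};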
Naming and definition:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
Other reference:
https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The i40evf PMD will be deprecated; iavf will be the only VF driver
for the Intel 700 series (i40e) NIC family.
To reach this, there will be two steps:
Step 1: iavf becomes the default VF driver, while i40evf can still
be selected by the devarg "driver=i40evf".
This is covered by this patch, which includes:
1) add all 700 series NIC VF device IDs into the iavf PMD
2) skip probe if devargs contain "driver=i40evf" in iavf
3) continue probe if devargs contain "driver=i40evf" in i40evf
Step 2: i40evf and the related devarg are removed; this will happen
in DPDK 21.11.
Between step 1 and step 2, no new feature will be added into i40evf
except bug fixes.
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
This patch supports runtime configuration of a device capability
mask, used to mask capabilities queried from the firmware.
The device argument key is "dev_caps_mask", which takes a hexadecimal
bitmask where each bit indicates whether to mask the corresponding
capability.
Its main purpose is debugging and working around problems.
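For example, assuming a hypothetical device address, the mask can be
passed as a device argument in the EAL allow list:
-a 0000:7d:00.0,dev_caps_mask=0xF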
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The integrity item allows the application to match
on the integrity of a packet.
Usage example:
Match that the packet integrity checks are OK. The checks depend on
the packet layers; for example, an ICMP packet is not checked at the
L4 level.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
Match that L4 packet is OK - check L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Currently, a DPDK application can offload the checksum check,
and have the result reported in the mbuf.
However, as more and more applications offload some or all of
their logic and actions to the HW, there is a need to check packet
integrity so the right decision can be taken.
The application logic can be positive, meaning if the packet is
valid then jump / do actions, or negative, meaning if the packet is
not valid then jump to SW / do actions (like drop), and add a
default flow (match all in low priority) that directs the missed
packets to the miss path.
Since rte_flow currently works in the positive way, the assumption
is that the positive way will be the common way in this case as well.
When considering the best API to implement such a feature,
we need to take into account the following (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexibility.
First option: Add integrity flags to each of the items.
For example add checksum_ok to IPv4 item.
Pros:
1. No new rte_flow item.
2. Simple in the way that on each item the app can see
what checks are available.
Cons:
1. API breakage.
2. Increases the number of flows, since the app can't add a global rule
and must have a dedicated flow for each flow combination; for example,
matching on ICMP traffic or UDP/TCP traffic with IPv4 / IPv6 will
result in 5 flows.
Second option: dedicated item
Pros:
1. No API breakage, and there will be none for some time due to having
extra space (by using bits).
2. Just one flow to support ICMP or UDP/TCP traffic with IPv4 /
IPv6.
3. Simplicity: the application can just look at one place to see all
possible checks.
4. Allows future support for more tests.
Cons:
1. New item that holds a number of fields from different items.
For a start, the following bits are suggested:
1. packet_ok - means that all HW checks depending on packet layer have
passed. This may mean that in some HW such a flow should be split into
a number of flows, or fail.
2. l2_ok - all checks for layer 2 have passed.
3. l3_ok - all checks for layer 3 have passed. If the packet doesn't
have an L3 layer, this check should fail.
4. l4_ok - all checks for layer 4 have passed. If the packet doesn't
have an L4 layer, this check should fail.
5. l2_crc_ok - the layer 2 CRC is O.K.
6. ipv4_csum_ok - the IPv4 checksum is O.K. It is possible that the
IPv4 checksum is O.K. but l3_ok is 0. It is not possible that the
checksum is 0 and l3_ok is 1.
7. l4_csum_ok - the layer 4 checksum is O.K.
8. l3_len_ok - checks that the reported layer 3 length is smaller than
the frame length.
Example of usage:
1. Check packets from all possible layers for integrity.
flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
2. Check only packets with layer 4 (UDP / TCP)
flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1
l4_ok = 1
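For the programmatic equivalent of example 2, a minimal sketch of
building such a pattern with the new item is shown below (only the
pattern is built; the surrounding rule attributes and actions are
omitted as assumptions):

#include <rte_flow.h>

/* Match packets whose L3 and L4 checks have passed. */
static const struct rte_flow_item_integrity ok_bits = {
    .l3_ok = 1,
    .l4_ok = 1,
};
static const struct rte_flow_item pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
      .spec = &ok_bits, .mask = &ok_bits },
    { .type = RTE_FLOW_ITEM_TYPE_ETH },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};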
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Right now, the rte_flow_shared_action_* APIs are used for some shared
actions, like RSS and count. The shared action should be created
before using it inside a flow. These shared actions sometimes are not
really shared but are just indirect actions decoupled from a flow.
The new functions rte_flow_action_handle_* are added to replace
the current shared functions rte_flow_shared_action_*.
There are two types of flow actions:
1. The direct (normal) actions that can be created and stored
within a flow rule. Such an action is tied to its flow rule and
cannot be reused.
2. The indirect action, in the past named shared action. It is
created from a direct action, like count or RSS, and then used
in flow rules via an object handle. The PMD will take care
of resolving the indirect action to the direct action
when it is referenced.
The indirect action is accessed (update / query) without any flow
rule, just via the action object handle. For example, when querying
or resetting a counter, it can be done outside of any flow using this
counter; only the handle of the counter action object is required.
The indirect action object could be shared by different flows or
used by a single flow, depending on the direct action type and
the real-life requirements.
The handle of an indirect action object is opaque and defined in
each driver and possibly different per direct action type.
The old name "shared" is improper in a sense and should be replaced.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*", the testpmd application code and command
line interfaces also need to be updated to adapt.
The testpmd application user guide is also updated. All the "shared
action" related parts are replaced with "indirect action" to have a
correct explanation.
The parameter of the "update" interface is also changed. A generic
pointer replaces the rte_flow_action struct pointer, due to the
following facts:
1. Some actions may not support field updates. In the example of a
counter, the only "update" supported should be the reset. So
passing an rte_flow_action struct pointer is meaningless and
there is even no such corresponding action struct. What's more,
if more than one operation should be supported for some other
action, such a pointer parameter may not meet the need.
2. Some actions may need conditional or partial updates; the current
parameter does not provide the ability to indicate which part(s)
to update.
For different types of indirect action objects, the pointer could
either be the same rte_flow_action* struct - in order not to
break the current driver implementations - or some wrapper
structure with bits as masks to indicate which parts are to be
updated, depending on the real needs of the corresponding direct
action. For different direct actions, the structures for updating
the indirect action objects will be different.
All the underlying PMD callbacks will be moved to these new APIs.
The RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to
break the ABI. All the implementations are changed by using
RTE_FLOW_ACTION_TYPE_INDIRECT.
Since the APIs are changed from "rte_flow_shared_action*" to the new
"rte_flow_action_handle*" and the "update" interface's 3rd input
parameter is changed to a generic pointer, the mlx5 PMD that uses
these APIs needs to adapt to the new APIs as well.
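As a rough usage sketch (not taken from the patch itself), the
lifecycle of an indirect count action with the renamed APIs could
look like this; the port ID is a placeholder and error handling is
omitted:

#include <rte_flow.h>

static void
indirect_count_example(uint16_t port_id)
{
    struct rte_flow_error err;
    struct rte_flow_indir_action_conf conf = { .ingress = 1 };
    struct rte_flow_action count_action = {
        .type = RTE_FLOW_ACTION_TYPE_COUNT,
    };
    struct rte_flow_action_handle *handle;
    struct rte_flow_query_count counters = { .reset = 0 };

    /* Create the indirect action object; the handle is opaque. */
    handle = rte_flow_action_handle_create(port_id, &conf,
                                           &count_action, &err);
    /* Flow rules then reference it via RTE_FLOW_ACTION_TYPE_INDIRECT
     * with the action conf pointing to the handle. */

    /* Query it at any time without going through a flow rule. */
    rte_flow_action_handle_query(port_id, handle, &counters, &err);

    /* Destroy it once no flow rule references it any more. */
    rte_flow_action_handle_destroy(port_id, handle, &err);
}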
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Currently, an upper-layer application can get the queue state only
through pointers such as dev->data->tx_queue_state[queue_id], which
is not the recommended way to access it. So this patch adds the
queue state to the information returned by the
rte_eth_rx_queue_info_get and rte_eth_tx_queue_info_get APIs.
Note: after adding the queue_state field, the 'struct rte_eth_rxq_info'
size remains 128B and the 'struct rte_eth_txq_info' size remains 64B,
so the change is ABI compatible.
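A minimal sketch of how an application could now read the Rx queue
state (assuming the RTE_ETH_QUEUE_STATE_* values exposed for this
purpose; IDs are placeholders):

#include <rte_ethdev.h>

static int
rx_queue_is_started(uint16_t port_id, uint16_t queue_id)
{
    struct rte_eth_rxq_info rxq_info;

    if (rte_eth_rx_queue_info_get(port_id, queue_id, &rxq_info) != 0)
        return -1;

    /* queue_state is one of the RTE_ETH_QUEUE_STATE_* values. */
    return rxq_info.queue_state == RTE_ETH_QUEUE_STATE_STARTED;
}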
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch fixes the HiSilicon copyright syntax.
According to the suggestion of our legal department,
to standardize the copyright license of our code and
avoid potential copyright risks, we make a unified
modification to the nonstandard "Hisilicon" spelling
in the main modules we maintain.
We change it to "HiSilicon", which is consistent with
the terms used on the following official website:
https://www.hisilicon.com/en/terms-of-use.
Fixes: 565829db8b ("net/hns3: add build and doc infrastructure")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The "--tx-queue-stats-mapping" and "--rx-queue-stats-mapping"
options, as well as the display and clear of "stats_map", have been
removed from testpmd.
This patch deletes the related descriptions of queue stats mapping
in the testpmd doc.
Fixes: 08dcd18706 ("app/testpmd: fix queue stats mapping configuration")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
There is no reason for the DPDK libraries to all have the 'librte_'
prefix on their directory names. This prefix makes the directory names
longer and also makes it awkward to add features referring to
individual libraries in the build - should the lib names be specified
with or without the prefix?
Therefore, we can just remove the library prefix and use the library's
unique name as the directory name, i.e. 'eal' rather than 'librte_eal'.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
To help with consistency across all files, add a section to the
contributors guide on meson coding style. Although short, this covers
the basics for now, and can be extended in future as we see the need.
The Meson style guide recommends four-space indents, as for Python,
so add this to the editorconfig file.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Switch from using tabs to 4 spaces for meson.build indentation, for the
basic infrastructure and tooling files, as well as doc and kernel
directories.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This patch adds predictable RSS API.
It is based on the idea of searching partial Toeplitz hash collisions.
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Add documentation for the Toeplitz hash library.
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Reviewed-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: John McNamara <john.mcnamara@intel.com>
This patch implements the Forwarding Information Base (FIB) library
in l3fwd using the function calls and infrastructure introduced in
the previous patch.
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
The purpose of this commit is to add the necessary function calls
and supporting infrastructure to allow the Forwarding Information Base
(FIB) library to be integrated into the l3fwd sample app.
Instead of adding an individual flag for FIB, a new flag '--lookup' has
been added that allows the user to select their desired lookup method.
The flags '-E' and '-L' have been retained for backwards compatibility.
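For example, a hypothetical invocation selecting the FIB lookup
method could be:
./dpdk-l3fwd -l 1,2 -n 4 -- -p 0x3 --lookup=fib --config="(0,0,1),(1,0,2)"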
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
In case an event from a previous stage is required to be forwarded
to a crypto adapter and the PMD supports an internal event port in
the crypto adapter, exposed via the capability
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we do not have
a way to check in the rte_event_enqueue_burst() API whether the
event is meant for the crypto adapter or for the eth Tx adapter.
Hence we need a new API similar to rte_event_eth_tx_adapter_enqueue(),
which can send to a crypto adapter.
Note that RTE_EVENT_TYPE_* cannot be used to make that decision,
as it is meant for event source and not event destination.
And event port designated for crypto adapter is designed to be used
for OP_NEW mode.
Hence, in order to support an event PMD which has an internal event
port in the crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode),
exposed via the capability
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the application
should use the rte_event_crypto_adapter_enqueue() API to enqueue
events.
When an internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW
mode), the application can use rte_event_enqueue_burst() as it was
doing earlier, i.e. retrieve the event port used by the crypto adapter,
bind its event queues to that port and enqueue events using
rte_event_enqueue_burst().
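A rough sketch of the resulting dispatch decision is shown below;
the capability lookup would normally be done once at setup time and
the device and port IDs are placeholders:

#include <rte_eventdev.h>
#include <rte_event_crypto_adapter.h>

static uint16_t
enqueue_to_crypto(uint8_t evdev_id, uint8_t ev_port_id, uint8_t cdev_id,
                  struct rte_event ev[], uint16_t nb_events)
{
    uint32_t caps = 0;

    rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &caps);

    if (caps & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)
        /* FORWARD mode with internal event port: use the new API. */
        return rte_event_crypto_adapter_enqueue(evdev_id, ev_port_id,
                                                ev, nb_events);

    /* NEW mode: enqueue to the adapter's event port as before. */
    return rte_event_enqueue_burst(evdev_id, ev_port_id, ev, nb_events);
}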
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Support the rte_flow priority attribute for the DCF switch filter.
When a packet is matched by two rules, its behavior is not defined.
This patch supports flow priority to create different recipes for
this situation. Only priorities 0 and 1 are supported, and a higher
value denotes a higher priority.
For example:
1. flow create 0 priority 0 ingress pattern eth / vlan tci is 2 / vlan
tci is 2 / end actions vf id 2 / end
2. flow create 0 priority 1 ingress pattern eth / vlan / vlan / ipv4 dst
is 192.168.0.1 / end actions vf id 1 / end
These two rules can be created at the same time in the DCF switch
filter, and the priority of rule 2 is higher. A packet hits rule 2
when the conditions of both rules are satisfied.
Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The speed capability of the device can be reported to the upper-layer
app by the rte_eth_dev_info_get API. In this API, the speed capability
is derived from the 'supported_speed' field, which is the speed
capability actually supported by the NIC. The value of
'supported_speed' is obtained once in the probe stage and may be
updated in the scheduled task to deal with changes of the
transmission interface.
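On the application side, the reported capability is read as usual; a
minimal sketch (the 25G bit checked here is only an example):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_25g_support(uint16_t port_id)
{
    struct rte_eth_dev_info dev_info;

    if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
        return;

    /* speed_capa is a bitmap of ETH_LINK_SPEED_* flags. */
    if (dev_info.speed_capa & ETH_LINK_SPEED_25G)
        printf("port %u supports 25G\n", port_id);
}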
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Add a specific Rx path for AVX512 (flexible descriptor).
In this path, support the HW offload features, like
checksum, VLAN stripping and RSS hash.
This path is chosen automatically according to the
configuration.
'inline' is used, so the duplicated code is generated
by the compiler.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The 'qfi' field is 8 bits, which comprise a single bit for
PPP (Paging Policy Presence), a single bit for RQI
(Reflective QoS Indicator) and 6 bits for QFI
(QoS Flow Identifier).
This is based on 3GPP TS 38.415 (38415-g30):
https://www.3gpp.org/ftp/Specs/archive/38_series/38.415/38415-g30.zip
Updated the doxygen comment and the mask for 'qfi'
to properly identify the full 8 bits of the field.
Note: changing the default mask would cause different
patterns to be generated by testpmd.
Fixes: 346553db5b ("ethdev: add GTP extension header to flow API")
Cc: stable@dpdk.org
Signed-off-by: Raslan Darawsheh <rasland@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for single flow dump.
The CLI to dump one rule: flow dump PORT rule ID
The CLI to dump all rules: flow dump PORT all
Examples:
testpmd> flow dump 0 all
testpmd> flow dump 0 rule 0
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Previous implementations support dumping all the flows. Add a new
arg rte_flow in rte_flow_dev_dump to dump one flow.
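A short sketch of the updated call (the flow pointer and the output
stream are examples):

#include <stdio.h>
#include <rte_flow.h>

static void
dump_flows(uint16_t port_id, struct rte_flow *flow)
{
    struct rte_flow_error err;

    /* Dump a single rule... */
    rte_flow_dev_dump(port_id, flow, stdout, &err);
    /* ...or pass NULL to keep the old behavior and dump all rules. */
    rte_flow_dev_dump(port_id, NULL, stdout, &err);
}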
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add meter profile packet_mode to the ethernet device.
One example:
add port meter profile rfc2697 (port_id) (profile_id)
(cir) (cbs) (ebs) (packet_mode)
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Currently the meter algorithms only support rates specified in bytes
per second (BPS).
Add a packet_mode flag in the meter profile parameters data structure
so that traffic can be metered in packets per second.
When packet_mode is 0, the profile rates and bucket sizes are
specified in bytes per second and bytes;
when packet_mode is not 0, the profile rates and bucket sizes are
specified in packets per second and packets.
The below structures will be extended:
rte_mtr_meter_profile
rte_mtr_capabilities
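Below is a sketch of how an application could request a packet-based
srTCM profile with the new flag; the IDs and rates are placeholders
and error handling is omitted:

#include <rte_mtr.h>

static void
add_pps_profile(uint16_t port_id, uint32_t profile_id)
{
    struct rte_mtr_meter_profile profile = {
        .alg = RTE_MTR_SRTCM_RFC2697,
        .packet_mode = 1,          /* rates and sizes in packets */
        .srtcm_rfc2697 = {
            .cir = 1000,           /* packets per second */
            .cbs = 64,             /* packets */
            .ebs = 64,             /* packets */
        },
    };
    struct rte_mtr_error error;

    rte_mtr_meter_profile_add(port_id, profile_id, &profile, &error);
}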
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Update the documentation for push/pop VLAN support. In E-Switch
mode, push VLAN on ingress traffic and pop VLAN on egress traffic
are both supported.
Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Reviewed-by: Asaf Penso <asafp@nvidia.com>
Currently, the PF driver reports lsc when it detects a link status
change; this is not a generic implementation.
We refactor the PF lsc event report with the following scheme:
1. PF driver marks RTE_PCI_DRV_INTR_LSC in rte_pci_driver by default.
2. In the init stage, PF driver will detect whether firmware supports
lsc interrupt or not, driver will clear RTE_ETH_DEV_INTR_LSC flag if
firmware doesn't support lsc interrupt.
3. PF driver will report lsc event only when dev_conf.intr_conf.lsc is
set.
Note: if the firmware supports the lsc interrupt, we also keep
periodic polling to deal with interrupt loss.
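From the application's point of view, nothing new is required beyond
the usual LSC setup; a minimal reminder sketch (queue counts and the
callback body are placeholders):

#include <stdio.h>
#include <rte_ethdev.h>

static int
link_event_cb(uint16_t port_id, enum rte_eth_event_type type,
              void *param, void *ret_param)
{
    (void)param;
    (void)ret_param;
    /* Called when the PMD reports RTE_ETH_EVENT_INTR_LSC. */
    printf("port %u: link status changed (event %d)\n", port_id, type);
    return 0;
}

static void
enable_lsc(uint16_t port_id)
{
    struct rte_eth_conf conf = { .intr_conf = { .lsc = 1 } };

    /* Request LSC only if the port advertises RTE_ETH_DEV_INTR_LSC. */
    rte_eth_dev_configure(port_id, 1, 1, &conf);
    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
                                  link_event_cb, NULL);
}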
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, the VF driver periodically obtains the link status from
the PF kernel driver and reports an lsc event when it detects a link
status change. Because the period is 1 second, the report is probably
too late, especially in scenarios such as bonding.
To solve this problem, we use the following scheme:
1. The PF kernel driver supports immediately pushing the link status
to all VFs when it detects a link status change.
2. The VF driver detects whether the PF kernel driver supports pushing
the link status in the device init stage, by sending a request link
info mailbox message to the PF; the PF then tells the VF the push
capability via the extended HNS3_MBX_LINK_STAT_CHANGE mailbox message.
3. The VF driver marks RTE_PCI_DRV_INTR_LSC in rte_pci_driver by
default; when it detects that the PF doesn't support pushing the link
status, it clears the RTE_ETH_DEV_INTR_LSC flag.
So if the PF kernel driver supports pushing the link status to the VF,
then the VF driver will have the RTE_ETH_DEV_INTR_LSC capability.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
To compile with meson, some dependencies should be installed.
The "Getting the Tools" section describes what is needed, but per
OS there are additional steps to do.
Add links to the Linux, FreeBSD, and Windows guides for more info.
Signed-off-by: Asaf Penso <asafp@nvidia.com>
Meson with Windows clang generates the incorrect linker flag
"--subsystem,console" instead of "/subsystem:console", which
will fail the DPDK build. This was discovered while porting testpmd.
Meson 0.57.0 has the fix and should be used for DPDK Windows builds.
Update the Windows GSG DPDK build document with the proper meson
version.
Signed-off-by: Jie Zhou <jizh@microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Bump Meson required version to 0.49.2 which is chosen so as
to be provided by both redhat-8 and debian-10.
Update documentation and travis setup script accordingly.
This fixes the following warning:
WARNING: Project targeting '>= 0.47.1' but tried to use feature introduced
in '0.48.0': console arg in custom_target
'console' argument is used within kernel/linux/kni/meson.build
Signed-off-by: Gabriel Ganne <gabriel.ganne@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The Key Wrap approach is used by applications in order to protect keys
located in untrusted storage or transmitted over untrusted
communications networks. The constructions are typically built from
standard primitives such as block ciphers and cryptographic hash
functions.
The Key Wrap method and its parameters are a secret between the key
provider and the device, meaning that the device is preconfigured for
this method in a very secure way.
The key wrap method may change the key length and layout.
Add a description for the cipher transformation key to allow a
wrapped key to be forwarded by the same API.
Add a new feature flag RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY to be
enabled by PMDs supporting wrapped keys in the cipher transformation.
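For illustration, an application would gate its use of wrapped keys
on the new capability flag, e.g. (device ID is a placeholder):

#include <rte_cryptodev.h>

static int
device_accepts_wrapped_key(uint8_t cdev_id)
{
    struct rte_cryptodev_info info;

    rte_cryptodev_info_get(cdev_id, &info);
    /* The wrap method itself is pre-agreed with the key provider. */
    return (info.feature_flags & RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY) != 0;
}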
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch changes the experimental raw data path dequeue burst API.
Originally the API forced the user to provide a callback function
to get the maximum dequeue count. This change gives the user one more
option: to pass the expected dequeue count directly.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add lookaside IPsec UDP encapsulation support
for NAT traversal.
The application has to add the udp-encap option to the SA config file
to enable UDP encapsulation on the SA.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
In cryptography, a block cipher is a deterministic algorithm operating
on fixed-length groups of bits, called blocks.
A block cipher consists of two paired algorithms, one for encryption
and the other for decryption. Both algorithms accept two inputs:
an input block of size n bits and a key of size k bits; and both yield
an n-bit output block. The decryption algorithm is defined to be the
inverse function of the encryption.
For AES standard the block size is 16 bytes.
For AES in XTS mode, the data to be encrypted/decrypted does not have
to be a multiple of the 16B block size; the unit of data is called a
data-unit.
The data-unit size can be any size in the range [16B, 2^24B], so, in
this case, a data stream is divided into N equal data-units and must
be encrypted/decrypted at the same data-unit resolution.
For ABI compatibility reasons, the size is limited to 64K (16-bit
field).
The new field dataunit_len is inserted in a struct padding hole,
which is only 2 bytes long in 32-bit build.
It could be moved and extended later during an ABI-breakage window.
The current cryptodev API doesn't allow the user to select a specific
data-unit length supported by the devices.
In addition, there is no definition of how the IV is detected per
data-unit when a single operation includes more than one data-unit.
That causes applications to use a single operation per data-unit even
though all the data is contiguous in memory, which reduces datapath
performance.
Add a new feature flag to support multiple data-unit sizes, called
RTE_CRYPTODEV_FF_CIPHER_MULTIPLE_DATA_UNITS.
Add a new field in cipher capability, called dataunit_set,
where the devices can report the range of the supported data-unit sizes.
Add a new cipher transformation field, called dataunit_len, where the user
can select the data-unit length for all the operations.
All the new fields do not change the size of their structures,
by filling some struct padding holes.
They are added as exceptions in the ABI check file libabigail.abignore.
Using a bitmap to report the supported data-unit sizes capability
allows the devices to report a range simply, and likewise allows the
user to read it simply. Also, these sizes are usually common and will
probably be shared among different devices.
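As an illustration, selecting a data-unit length in the cipher
transform could look as follows; the 512B value and the omitted
key/IV setup are placeholders, and the device is assumed to report
RTE_CRYPTODEV_FF_CIPHER_MULTIPLE_DATA_UNITS:

#include <rte_crypto_sym.h>

/* AES-XTS transform processing 512B data-units. */
static const struct rte_crypto_sym_xform xts_xform = {
    .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
    .cipher = {
        .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
        .algo = RTE_CRYPTO_CIPHER_AES_XTS,
        .dataunit_len = 512,    /* same length for all operations */
        /* key and iv fields filled as usual */
    },
};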
Signed-off-by: Matan Azrad <matan@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Added support for DIGEST_ENCRYPTED mode for octeontx
and octeontx2 platforms.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The script dependencies list was incomplete,
this patch adds missing modules and removes an unnecessary entry.
The installation command was also added.
Fixes: f400e0b82b ("app/crypto-perf: add script to graph perf results")
Cc: stable@dpdk.org
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add Arm SoC configuration sets to Arm meson.build and add an arch
agnostic meson option, 'platform', to select from these SoC
configurations for meson native builds. This is preferable to
specifying a cross file when doing aarch64 -> aarch64 builds, since the
cross file specifies the toolchain as well.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Tested-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add support for enabling or disabling drivers for Arm cross build. Do
not implement any enable/disable lists yet.
Enabling drivers is useful when building for an SoC where we only want
to build a few drivers. That way the list won't be too long.
Similarly, disabling drivers is useful when we want to disable only a
few drivers.
Both of these are advantageous mainly in aarch64 -> aarch64 (or arch ->
same arch) builds, where the build machine may have the required driver
dependencies, yet we don't want to build drivers for a specific SoC.
If enable_drivers is a non-empty list, build only those drivers,
otherwise build all drivers and add them to enable_drivers. If
disable_drivers is a non-empty list, build all drivers specified in
enable_drivers except those in disable_drivers.
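For example (the driver list here is purely illustrative):
meson setup build -Denable_drivers=bus/pci,bus/vdev,net/ixgbe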
There are two drivers, bus/pci and bus/vdev, which break the build if
not enabled. Address this by always enabling these if the user disables
them or doesn't specify in their allowlist.
Also remove the old Makefile arm configuration options which don't do
anything in Meson.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This is a new type of reader-writer lock that provides better fairness
guarantees, which makes it better suited for typical DPDK applications.
A pflock has two ticket pools, one for readers and one
for writers.
Phase-fair reader-writer locks ensure that neither readers nor writers
will be starved.
Neither readers nor writers are preferred; they execute in alternating
phases.
All operations of the same type (reader or writer) that acquire the lock
are handled in FIFO order.
Write operations are exclusive, and multiple read operations can be run
together (until a write arrives).
A similar implementation is in the Concurrency Kit package in FreeBSD.
For more information see:
"Reader-Writer Synchronization for Shared-Memory Multiprocessor
Real-Time Systems",
http://www.cs.unc.edu/~anderson/papers/ecrts09b.pdf
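A minimal usage sketch with the new API (function and macro names as
introduced by rte_pflock.h; the shared counter is just an example):

#include <rte_pflock.h>

static rte_pflock_t lock = RTE_PFLOCK_INITIALIZER;
static uint64_t shared_counter;

static uint64_t
read_counter(void)
{
    uint64_t v;

    /* Multiple readers may hold the lock together. */
    rte_pflock_read_lock(&lock);
    v = shared_counter;
    rte_pflock_read_unlock(&lock);
    return v;
}

static void
bump_counter(void)
{
    /* Writers are exclusive and cannot be starved by readers. */
    rte_pflock_write_lock(&lock);
    shared_counter++;
    rte_pflock_write_unlock(&lock);
}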
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>