Add config support to cross-compile for the Marvell CN10K SoC.
The Marvell CN10K SoC is based on ARM Neoverse N2 cores.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
In case of an EAL initialization failure, rte_eal_memory_detach() may be
called before the memory configuration has been mapped, in which case it
still points to the static structure. Attempting to unmap it yields the
error:
EAL: Could not unmap shared memory config: Invalid argument
Skip unmapping the memory configuration if it is not yet shared.
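For illustration, a minimal stand-alone sketch of the pattern (the names
below are made up and do not reflect the actual EAL internals): only call
munmap() on the configuration when it was actually mapped, since unmapping
the static fallback fails with EINVAL.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Illustrative only: a config pointer that starts on a static fallback
 * and may later be switched to a shared mapping during initialization. */
struct mem_config { int dummy; };
static struct mem_config early_cfg;          /* static fallback */
static struct mem_config *cfg = &early_cfg;  /* repointed once shared */
static int cfg_is_shared;                    /* set after the config is mmap'd */

static void
detach_config(void)
{
    /* Skip the unmap when the config was never shared: munmap() on the
     * static structure fails with EINVAL ("Invalid argument"). */
    if (!cfg_is_shared)
        return;
    if (munmap(cfg, sizeof(*cfg)) != 0)
        fprintf(stderr, "Could not unmap shared memory config: %s\n",
            strerror(errno));
}

int
main(void)
{
    detach_config(); /* early-failure path: nothing to unmap, no error */
    return 0;
}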
Fixes: dfbc61a2f9 ("mem: detach memsegs on cleanup")
Reported-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
This patch adds a predictable RSS API.
It is based on the idea of searching for partial Toeplitz hash collisions.
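For reference, a minimal stand-alone Toeplitz hash sketch (illustrative
only, not the rte_thash implementation): every set bit of the input tuple
XORs a sliding 32-bit window of the RSS key into the result. A partial
collision search then looks for tuple modifications that force a chosen
pattern into the least significant bits of this value.

#include <stddef.h>
#include <stdint.h>

/* Minimal Toeplitz hash; 'key' must be at least len + 4 bytes long. */
static uint32_t
toeplitz_hash(const uint8_t *tuple, size_t len, const uint8_t *key)
{
    /* 32-bit window over the key, initially key bits [0, 31] */
    uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
              ((uint32_t)key[2] << 8) | key[3];
    uint32_t hash = 0;
    size_t i;
    int b;

    for (i = 0; i < len; i++) {
        for (b = 0; b < 8; b++) {
            if (tuple[i] & (0x80u >> b))
                hash ^= window;
            /* slide the window by one key bit */
            window <<= 1;
            if (key[i + 4] & (0x80u >> b))
                window |= 1;
        }
    }
    return hash;
}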
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Add documentation for the Toeplitz hash library.
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Reviewed-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: John McNamara <john.mcnamara@intel.com>
Each table entry is made up of match fields and action data, with the
latter made up of the action ID and the action arguments. The approach
of having the user explicitly specify the endianness of the action
arguments is difficult for P4 compilers to pick up, as the P4 compiler
is generally unaware of this aspect.
This commit introduces auto-detection of the endianness of the action
arguments by examining the endianness of their destination:
network byte order (NBO) when they get copied to headers and host byte
order (HBO) when they get copied to packet meta-data or mailboxes.
The endianness specification of each action argument as part of the
rule specification, e.g. H(...) and N(...) is removed from the rule
file and auto-detected based on their destination. The DMA instruction
is made internal-only, so mov instructions must be used instead. To
preserve performance, the pattern of transferring complete headers from
table entry action arguments to headers (a set of mov instructions plus
a header validate) is detected internally and replaced with the
internal-only DMA instruction.
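For illustration, a sketch of the detection rule (helper and enum names
are made up, not the actual SWX code): the argument endianness simply
follows the type of the destination operand.

/* Illustrative only: infer the endianness of an action argument from the
 * destination of the instruction that consumes it. */
enum arg_endianness {
    ARG_ENDIANNESS_HBO, /* host byte order */
    ARG_ENDIANNESS_NBO, /* network byte order */
};

static enum arg_endianness
action_arg_endianness(char dst_operand_prefix)
{
    /* 'h' = header field -> NBO; 'm' = meta-data and 'e' = extern object
     * mailbox -> HBO. */
    return (dst_operand_prefix == 'h') ?
        ARG_ENDIANNESS_NBO : ARG_ENDIANNESS_HBO;
}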
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Decouple the different instruction optimizers. Allow each optimization
to run as a separate pass over the entire instruction stream.
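A minimal sketch of the resulting structure (types and pass names are
made up for illustration): each optimizer is an independent pass invoked
over the whole instruction array.

#include <stddef.h>
#include <stdint.h>

struct instruction { int type; };

typedef void (*optimizer_pass_t)(struct instruction *instr, uint32_t n_instr);

/* Dummy passes standing in for the real optimizations. */
static void pass_mov_fold(struct instruction *instr, uint32_t n) { (void)instr; (void)n; }
static void pass_dma_replace(struct instruction *instr, uint32_t n) { (void)instr; (void)n; }

static void
instr_optimize(struct instruction *instr, uint32_t n_instr)
{
    const optimizer_pass_t passes[] = { pass_mov_fold, pass_dma_replace };
    size_t i;

    /* Each optimization runs as its own iteration over the full stream. */
    for (i = 0; i < sizeof(passes) / sizeof(passes[0]); i++)
        passes[i](instr, n_instr);
}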
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
This patch implements the Forwarding Information Base (FIB) library
in l3fwd using the function calls and infrastructure introduced in
the previous patch.
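For context, a hedged sketch of the rte_fib calls involved (configuration
values are illustrative and error handling is omitted):

#include <rte_fib.h>
#include <rte_ip.h>

/* Illustrative only: create a FIB, add one route, look up a burst of IPs. */
static void
fib_example(uint32_t *ips, uint64_t *next_hops, int n)
{
    struct rte_fib_conf conf = {
        .type = RTE_FIB_DIR24_8,
        .default_nh = 0,        /* next hop for unmatched packets */
        .max_routes = 1 << 16,
        .dir24_8 = {
            .nh_sz = RTE_FIB_DIR24_8_4B,
            .num_tbl8 = 1 << 8,
        },
    };
    struct rte_fib *fib = rte_fib_create("l3fwd_fib", 0 /* socket */, &conf);

    /* 198.18.0.0/24 -> next hop 1 (addresses in host byte order) */
    rte_fib_add(fib, RTE_IPV4(198, 18, 0, 0), 24, 1);

    rte_fib_lookup_bulk(fib, ips, next_hops, n);
}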
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
The purpose of this commit is to add the necessary function calls
and supporting infrastructure to allow the Forwarding Information Base
(FIB) library to be integrated into the l3fwd sample app.
Instead of adding an individual flag for FIB, a new flag '--lookup' has
been added that allows the user to select their desired lookup method.
The flags '-E' and '-L' have been retained for backwards compatibility.
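A small sketch of how the new option can be handled (enum and function
names here are illustrative, not the exact l3fwd code):

#include <string.h>

enum lookup_method {
    LOOKUP_DEFAULT,
    LOOKUP_EM,  /* exact match, also selected by -E */
    LOOKUP_LPM, /* longest prefix match, also selected by -L */
    LOOKUP_FIB, /* forwarding information base */
};

static enum lookup_method
parse_lookup_method(const char *arg)
{
    if (strcmp(arg, "em") == 0)
        return LOOKUP_EM;
    if (strcmp(arg, "lpm") == 0)
        return LOOKUP_LPM;
    if (strcmp(arg, "fib") == 0)
        return LOOKUP_FIB;
    return LOOKUP_DEFAULT; /* unknown value: report an error */
}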
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
To prevent code duplication when adding new lookup methods,
the routes specified in the LPM code are moved to a common header.
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Any IP address within the 2001:200::/48 subnet matches all of the given
routes rather than one individual route, so the application cannot
differentiate between them.
The change in this patch allows the ports to be individually matched using
smaller /64 ranges for each port. These smaller subnet ranges are still
within the 2001:200::/48 subnet range set aside for benchmarking
in RFC5180.
l3fwd will now use 2001:200:0:{0-7}::/64 where 0-7 is the port ID for IPv6.
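For illustration, the per-port routes described above expressed as a
route array (structure name and fields are simplified):

#include <stdint.h>

struct ipv6_route {
    uint8_t ip[16]; /* network prefix */
    uint8_t depth;  /* prefix length */
    uint8_t if_out; /* output port */
};

/* 2001:200:0:<port>::/64, one /64 per port, all within 2001:200::/48 */
static const struct ipv6_route ipv6_routes[] = {
    { {0x20, 0x01, 0x02, 0x00, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, 64, 0 },
    { {0x20, 0x01, 0x02, 0x00, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0}, 64, 1 },
    { {0x20, 0x01, 0x02, 0x00, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0}, 64, 2 },
    { {0x20, 0x01, 0x02, 0x00, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0}, 64, 3 },
    /* ... and so on up to 2001:200:0:7::/64 for port 7 */
};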
Fixes: 37afe381bd ("examples/l3fwd: use reserved IP addresses")
Cc: stable@dpdk.org
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Initialize prev_tsc to cur_tsc. This avoids running the TX queue drain
in the first iteration of the packet processing loop.
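A sketch of the change in the usual drain-loop pattern (surrounding names
and constants are illustrative):

#include <rte_cycles.h>

static void
processing_loop(uint64_t drain_tsc)
{
    uint64_t prev_tsc, cur_tsc;

    prev_tsc = rte_rdtsc(); /* was: prev_tsc = 0 */
    for (;;) {
        cur_tsc = rte_rdtsc();
        if (cur_tsc - prev_tsc > drain_tsc) {
            /* drain the TX queue here; with prev_tsc = 0 this branch
             * would also be taken on the very first iteration, before
             * anything was enqueued. */
            prev_tsc = cur_tsc;
        }
        /* receive and process packets here */
    }
}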
Signed-off-by: Kathleen Capella <kathleen.capella@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
This patch deletes comments that are incorrect and unnecessary.
Fixes: ab129e9065 ("examples/ptpclient: add minimal PTP client")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The SWX pipeline instructions work with operands of different types:
header fields (h.header.field), packet meta-data (m.field), extern
object mailbox field (e.obj.field), extern function (f.field), action
data read from table entries (t.field), or immediate values; hence the
HMEFTI acronym. The H operands are stored in network byte order (NBO),
while the MEFT operands are stored in host byte order (HBO), hence the
need to perform endianness conversions.
Some of the endianness conversion macros did not work correctly in some
cases, such as operands of different sizes; they are now fixed.
Affected instructions: mov, and, or, xor, jmpeq, jmpneq.
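An illustration of the kind of case that was mishandled (plain C, not the
actual macros): moving a 16-bit header field (NBO) into a 64-bit meta-data
field (HBO) needs both a byte swap and a width change.

#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only: h.field is 16 bits in NBO, m.field is 64 bits in HBO. */
static uint64_t
mov_h16_to_m64(const uint8_t *hdr_field)
{
    uint16_t nbo;

    memcpy(&nbo, hdr_field, sizeof(nbo));
    /* Swap to host order, then zero-extend to the destination width; a
     * macro that assumes equal operand sizes gets this wrong. */
    return (uint64_t)ntohs(nbo);
}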
Fixes: 7210349d5b ("pipeline: add SWX move instruction")
Fixes: 650195cf96 ("pipeline: introduce SWX and instruction")
Fixes: 8f796198dc ("pipeline: introduce SWX or instruction")
Fixes: b4e607f9fd ("pipeline: introduce SWX XOR instruction")
Fixes: b3947e25be ("pipeline: introduce SWX jump and return instructions")
Cc: stable@dpdk.org
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Adjust the error code of the internal function instruction_config to
match the rest of the code, which returns a negative value on error.
Cosmetic change.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Enhance the behavior of the emit instruction to ignore invalid
headers, as mandated by the P4 language specification.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Add support for table statistics for the SWX pipeline. For each table,
we maintain a counter for lookup hit packets, one for lookup miss
packets and one packet counter for each table action.
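A sketch of the counters kept per table (field names follow the
description above; treat the exact structure as illustrative):

#include <stdint.h>

struct table_statistics {
    uint64_t n_pkts_hit;      /* packets with lookup hit */
    uint64_t n_pkts_miss;     /* packets with lookup miss */
    uint64_t n_pkts_action[]; /* one packet counter per table action */
};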
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Yogesh Jangra <yogesh.jangra@intel.com>
Enable the TX instruction to accept an immediate value for the output
port argument. The drop instruction is simply an alias for the TX
instruction targeting the last output port of the pipeline.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
The match fields for a given table have to be part of the same header
or the metadata structure. This commit removes the requirement that
the list of match fields must observe the order of fields within their
structure. For example, the h.ipv4.dst_addr field can now be listed
before the h.ipv4.src_addr field in a table match field list, even
though within the IPv4 header the dst_addr field is present after the
src_addr field.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Currently, the table entry action data is required to be NULL when the
action data size is zero. Now the action data is simply ignored when
the action data size is zero. This allows a table entry
instance to be allocated once with max action data size for the table
and reused repeatedly for actions of different sizes, including zero.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Churchill Khangar <churchill.khangar@intel.com>
When inserting a devargs entry that is already in the list, the
existing entry was reset and completely replaced by the new one, and
the entry info was lost during the copy.
This patch backs up the entry info before the copy.
Fixes: 64051bb1f1 ("devargs: unify scratch buffer storage")
Reported-by: Jim Harris <james.r.harris@intel.com>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Currently, new user mem maps are checked for adjacency to an existing
mem map and, if adjacent, the mem map entries are merged. There was no
check for duplicate mem maps, so if the API is called with the same mem
map multiple times, each call occupies a separate mem map entry. This
reduces the number of entries available for unique mem maps.
So check for duplicate mem maps and merge them into a single mem map
entry if any are found.
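A hedged sketch of the duplicate test (structure and helper names are
illustrative): a new map that lies entirely inside an existing entry with
the same address-to-IOVA offset is treated as already mapped.

#include <stdint.h>

struct user_mem_map { uint64_t addr, iova, len; };

/* Illustrative only: [addr, addr + len) duplicates an existing entry if
 * it is fully contained in it and maps to the same IOVA range. */
static int
map_is_duplicate(const struct user_mem_map *e,
         uint64_t addr, uint64_t iova, uint64_t len)
{
    return addr >= e->addr && addr + len <= e->addr + e->len &&
           iova >= e->iova && iova + len <= e->iova + e->len &&
           addr - e->addr == iova - e->iova;
}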
Fixes: 0cbce3a167 ("vfio: skip DMA map failure if already mapped")
Cc: stable@dpdk.org
Suggested-by: Kevin Traynor <ktraynor@redhat.com>
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Use UINT64_MAX instead of -1ULL.
Some compilers generate a warning when applying a '-' to an unsigned
literal, so avoid this by initializing with the unsigned preprocessor
definition.
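For example (illustrative variable name):

#include <stdint.h>

static const uint64_t invalid_handle = UINT64_MAX; /* was: -1ULL */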
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Use UINT64_MAX and UINT32_MAX instead of -1 or ~0 literal variations
of different explicit widths when creating masks and sentinel values.
Some compilers generate a warning when applying a '-' to an unsigned
literal, so avoid this by initializing with unsigned preprocessor
definitions where appropriate.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Check for failure. While here, increment len just once after the
failure check instead of duplicating the len + 1 math in two different
argument lists.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Add build configs for the Kunpeng server.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Initiate the software crypto adapter service only if the hardware
capabilities are not reported. In OP_FORWARD mode, the software service
is not required to enqueue events if the OP_FORWARD capability is
supported by the PMD.
Fixes: 7901eac340 ("eventdev: add crypto adapter implementation")
Cc: stable@dpdk.org
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto
adapter if forward mode is supported in driver.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
When an event from a previous stage needs to be forwarded to a crypto
adapter and the PMD supports an internal event port in the crypto
adapter, exposed via the capability
RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, there is no way for
the API rte_event_enqueue_burst() to tell whether the event is meant
for the crypto adapter or for the eth Tx adapter.
Hence we need a new API similar to rte_event_eth_tx_adapter_enqueue(),
which can send to a crypto adapter.
Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it
describes the event source, not the event destination. Also, the event
port designated for the crypto adapter is designed to be used for
OP_NEW mode.
Hence, to support an event PMD which has an internal event port in the
crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed via
the capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the
application should use the rte_event_crypto_adapter_enqueue() API to
enqueue events.
When the internal port is not available (RTE_EVENT_CRYPTO_ADAPTER_OP_NEW
mode), the application can use rte_event_enqueue_burst() as before,
i.e. retrieve the event port used by the crypto adapter, bind its event
queues to that port, and enqueue events using rte_event_enqueue_burst().
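A hedged sketch of the application-side decision (variable handling and
error checks are illustrative; the capability is queried beforehand with
rte_event_crypto_adapter_caps_get()):

#include <rte_event_crypto_adapter.h>
#include <rte_eventdev.h>

static void
enqueue_to_crypto_stage(uint8_t dev_id, uint8_t port_id, uint32_t caps,
            struct rte_event *ev, uint16_t nb_events)
{
    if (caps & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)
        /* OP_FORWARD with internal event port: use the new API. */
        rte_event_crypto_adapter_enqueue(dev_id, port_id, ev, nb_events);
    else
        /* OP_NEW mode: enqueue to the adapter's event port/queue with
         * the regular API, as before. */
        rte_event_enqueue_burst(dev_id, port_id, ev, nb_events);
}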
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Split the AVX512 Rx data path into two: one is the basic path, and the
other supports additional Rx offload features, including Rx checksum
offload, Rx VLAN offload, and RSS offload.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: Qin Sun <qinx.sun@intel.com>
Add an alternative AVX512 Tx data path which supports partial Tx
offload features, including Tx checksum offload and VLAN/QinQ insertion
offload.
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: Qin Sun <qinx.sun@intel.com>
When setting the default MAC address, use type
VIRTCHNL_ETHER_ADDR_PRIMARY, as this case changes the device/primary
unicast MAC. For other cases, such as adding or deleting extra unicast
addresses and multicast addresses, use type VIRTCHNL_ETHER_ADDR_EXTRA.
Fixes: cb25d4323f ("net/avf: enable MAC VLAN and promisc ops")
Cc: stable@dpdk.org
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Tested-by: Yan Xia <yanx.xia@intel.com>
Currently, there is no way for a VF driver to specify that it wants to
change its device/primary unicast MAC address. This makes it
difficult/impossible for the PF driver to track the VF's device/primary
unicast MAC address, which is used for VM/VF reboot and displaying on
the host. Fix this by using 2 bits of a pad byte in the
virtchnl_ether_addr structure so the VF can specify what type of MAC
it's adding/deleting.
Below are the values that should be used by all VF drivers going
forward.
VIRTCHNL_ETHER_ADDR_LEGACY(0):
- The type should only ever be 0 for legacy AVF drivers (i.e.
drivers that don't support the new type bits). The PF drivers
will track the VF's device/primary unicast MAC on a best-effort
basis.
VIRTCHNL_ETHER_ADDR_PRIMARY(1):
- This type should only be used when the VF is changing its
device/primary unicast MAC. It should be used for both delete
and add cases related to the device/primary unicast MAC.
VIRTCHNL_ETHER_ADDR_EXTRA(2):
- This type should be used when the VF is adding and/or deleting
MAC addresses that are not the device/primary unicast MAC. For
example, extra unicast addresses and multicast addresses, assuming the
PF supports "extra" addresses at all.
If a PF is parsing the type field of the virtchnl_ether_addr, then it
should use the VIRTCHNL_ETHER_ADDR_TYPE_MASK to mask the first two bits
of the type field since 0, 1, and 2 are the only valid values.
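For illustration, a sketch of the resulting layout and values (the
authoritative definition lives in the shared virtchnl header; treat this
as a sketch):

#include <stdint.h>

#define VIRTCHNL_ETHER_ADDR_LEGACY    0
#define VIRTCHNL_ETHER_ADDR_PRIMARY   1
#define VIRTCHNL_ETHER_ADDR_EXTRA     2
#define VIRTCHNL_ETHER_ADDR_TYPE_MASK 3 /* first two bits of 'type' */

struct virtchnl_ether_addr {
    uint8_t addr[6];
    uint8_t type; /* bits 0-1: address type, remaining bits: reserved */
    uint8_t pad;
};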
For the i40evf PMD, when setting the default MAC address, use type
VIRTCHNL_ETHER_ADDR_PRIMARY, as this case changes the device/primary
unicast MAC. For other cases, such as adding or deleting extra unicast
addresses and multicast addresses, use type VIRTCHNL_ETHER_ADDR_EXTRA.
Fixes: 6d13ea8e8e ("net: add rte prefix to ether structures")
Fixes: caccf8b318 ("ethdev: return diagnostic when setting MAC address")
Cc: stable@dpdk.org
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Tested-by: Yan Xia <yanx.xia@intel.com>
Support the rte_flow priority attribute for the DCF switch filter.
When a packet is matched by two rules, the behavior is undefined. This
patch supports flow priority to create different recipes for this
situation. Only priorities 0 and 1 are supported, and a higher value
denotes a higher priority.
For example:
1. flow create 0 priority 0 ingress pattern eth / vlan tci is 2 / vlan
tci is 2 / end actions vf id 2 / end
2. flow create 0 priority 1 ingress pattern eth / vlan / vlan / ipv4 dst
is 192.168.0.1 / end actions vf id 1 / end
These two rules can be created at the same time in the DCF switch
filter, and rule 2 has the higher priority. A packet hits rule 2 when
the conditions of both rules are satisfied.
Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Enable additional PHY types (25G-AOC and 25G-ACC) for the set PHY
config command.
Signed-off-by: Yury Kylulin <yury.kylulin@intel.com>
Tested-by: Ashish Paul <apaul@juniper.net>
This patch separates the Tx burst function implementations into
different source files, thus allowing them to be compiled in parallel.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch separates the Tx function implementations into a separate
source file as an optional preparation step for Tx cleanup.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch moves the Tx burst function and its inline helper
declarations to a header file to allow their use from several separate
source files, and as a possible preparation for Tx cleanup.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch separates the Tx function declarations into a separate
header file in preparation for removing their implementations from the
source file
and as an optional preparation for Tx cleanup.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch separates the Rx function implementations into a separate
source file as an optional preparation step for further consolidation of Rx
burst functions.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The mlx5_rxtx.c file contains a lot of Tx burst functions, each of
them performance-optimized for a specific set of requested offloads.
They are generated from a template function and take significant time
to compile, simply because of the large number of huge functions
generated in the same file, and this compilation is not done in
parallel using multithreading.
Therefore we can split the mlx5_rxtx.c file into several separate files
to allow different functions to be compiled simultaneously.
In this patch, we separate the Rx function declarations into a separate
header file in preparation for removing them from the source file and as an
optional preparation step for further consolidation of Rx burst
functions.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Currently, the user can use the runtime config "rx_func_hint=simple"
to select the hns3_recv_pkts API, but the API name returned by
rte_eth_rx_burst_mode_get is "Scalar", which does not reflect "simple".
So this patch renames hns3_recv_pkts to hns3_recv_pkts_simple, and also
changes the name returned by rte_eth_rx_burst_mode_get to "Scalar
Simple" to maintain conceptual consistency.
Fixes: 521ab3e933 ("net/hns3: add simple Rx path")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Restore the original code, where VHOST_SET_OWNER is applied to
all vhostfds of the device.
Fixes: 06856cabb8 ("net/virtio: add virtio-user ops to set owner")
Cc: stable@dpdk.org
Signed-off-by: Thierry Herbelot <thierry.herbelot@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch adds the configuration of fixed speed for the PF device.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
If flow control auto-negotiation is not supported and the flow control
modes of the local end and the link partner are asymmetric, flow
control on the NIC does not take effect. Support for the
auto-negotiation capability requires the cooperation of the firmware
and the driver.
This patch supports flow control auto-negotiation only for copper
ports. For optical ports, the forced flow control mode is still used.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>