In DPDK, 'rte_socket_id' returns the NUMA socket of the running lcore,
while 'rte_eth_dev_socket_id' returns the NUMA socket of the device.
For better performance, the memory used for queue setup and the device
should be on the same NUMA socket.
This patch makes sure the rte_eth_dev_socket_id API is called to get
the device socket_id when setting ringparam.
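For illustration only (not part of the patch), a minimal sketch of a
queue setup that follows this rule could look as below; the helper
name and the sizing values are hypothetical:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* Hypothetical helper: put the mbuf pool and the Rx queue on the
     * NUMA socket of the device rather than of the running lcore.
     */
    static int
    setup_rxq_on_dev_socket(uint16_t port_id)
    {
        int socket_id = rte_eth_dev_socket_id(port_id);

        if (socket_id == SOCKET_ID_ANY) /* socket indeterminate */
            socket_id = (int)rte_socket_id();

        struct rte_mempool *mp = rte_pktmbuf_pool_create("rxq_pool",
                8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
        if (mp == NULL)
            return -1;

        return rte_eth_rx_queue_setup(port_id, 0, 1024,
                (unsigned int)socket_id, NULL, mp);
    }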
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch adds a comment for RTE_HASH_BUCKET_ENTRIES
explaining why a particular value was chosen.
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
This library can be made optional.
drivers/gpu and app/test-gpudev depend on this library,
so they are automatically disabled if the lib is disabled.
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
If the modify field action requested a field copy/set transaction
from another field, the destination field hardware bit offset was
assigned incorrectly for non-zero byte offsets, causing wrong
translations for fields with sizes larger than 32 bits.
Fixes: 40c8fb1fd3 ("net/mlx5: update modify field action")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The "mlx5_devx_cmd_mkey_create" DevX cmd allocates DevX object using
mlx5_malloc and then creates MKey using glue function.
Compatibly, "mlx5_devx_cmd_destroy" cmd releases first the MKey using
glue function, and then free the DevX object using mlx5_free.
On Windows OS, the reg_mr function creates Mkey using
"mlx5_devx_cmd_mkey_create" cmd, but dereg_mr function using directly
glue function without freeing the object.
This behavior causes memory leak at each MR release.
In addition, the dereg_mr function makes sure that the MR pointer is
valid before destroying its fields, but always calls the memset function
that makes a difference to it.
This patch moves the dereg_mr function to use "mlx5_devx_cmd_destroy"
instead of glue function, and extends the validity test to the whole
function.
Fixes: ba42071982 ("common/mlx5: add reg/dereg MR on Windows")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
To detect the timestamp mode configured on the NIC, the mlx5 PMD uses
the firmware command ACCESS_REGISTER_USER.
The HCA capability command has an attribute flag checking whether
firmware supports the command.
However, the HCA capability query command read the flag from the
wrong place in the PRM structure.
This patch moves the flag to the correct place.
Fixes: 972a1bf812 ("common/mlx5: fix user mode register access command")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
mlx5 PMD incorrectly overwrote outer layer fields in MPLS tunnel
rte_flow patterns using defaults for MPLS tunnels. This included
overwriting UDP destination port in MPLSoUDP and GRE protocol field in
MPLSoGRE.
This patch fixes this behavior. If the application provides values in
flow pattern items preceding the MPLS flow item, the provided values
will be used; otherwise the defaults will be applied.
Fixes: d1abe664dd ("net/mlx5: add MPLS to Direct Verbs flow engine")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
When a user tried to send multi-segment packets with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set, using a device with
minimum inlining requirements (such as ConnectX-4 Lx, or when the user
specified them explicitly), sending such packets caused a segfault.
The segfault was caused by failed invariants in the
mlx5_tx_packet_multi_inline function.
This patch introduces logic for multi-segment packets with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set that omits mbuf scanning
for filling the inline buffer and inlines only the minimal amount of
data required.
Fixes: ec837ad0fc ("net/mlx5: fix multi-segment inline for the first segments")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Users are confused by features marked with "P"; add the necessary
explanation for this.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
These edits affect the outermost header in the current processing state
of the packet, which might have been decapsulated by prior action DECAP.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
In a tunnel packet, these actions affect the inner header if
action DECAP is set; otherwise, they affect the outer header.
Adding these actions is done in two steps: add the action to
the action mask and indicate the MAC address entry ID to use.
This allows the user to check the order of actions first and
allocate resources when time comes to enable the action rule.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Add a context structure to simplify handling of action sets.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
The number of counters being non-zero can be detected before
the action set registry traversal, so move the check outside.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
The tunnel offload API allows an application to restore a packet to
its original form if the chain of flows misses after a DECAP action.
The main idea of the tunnel offload API is to query the port PMD to
provide flow elements - actions or items.
Flow elements supplied by the PMD are merged with the original flow
rule elements provided by the testpmd operator to create a new flow
rule, optimal for the PMD, to implement the tunnel offload API.
That flow rule transformation is hidden from the testpmd operator and
uses internal testpmd resources.
Current testpmd did not release tunnel offload resources if flow rule
validation failed.
The patch always releases tunnel offload resources after flow rule
validation returns.
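As an illustration (not the actual testpmd code), the enforced
pattern can be sketched as follows; names and error handling are
simplified:

    #include <rte_flow.h>

    /* Query PMD-supplied tunnel offload actions, validate, and
     * release the PMD resources regardless of the validation result.
     */
    static int
    validate_with_tunnel_decap(uint16_t port_id,
            struct rte_flow_tunnel *tunnel,
            const struct rte_flow_attr *attr,
            const struct rte_flow_item pattern[],
            const struct rte_flow_action app_actions[])
    {
        struct rte_flow_action *pmd_actions;
        uint32_t num_pmd_actions;
        struct rte_flow_error error;
        int ret;

        ret = rte_flow_tunnel_decap_set(port_id, tunnel, &pmd_actions,
                &num_pmd_actions, &error);
        if (ret != 0)
            return ret;

        /* ... merge pmd_actions with app_actions, then validate ... */
        ret = rte_flow_validate(port_id, attr, pattern, app_actions,
                &error);

        /* Release on both success and failure paths. */
        rte_flow_tunnel_action_decap_release(port_id, pmd_actions,
                num_pmd_actions, &error);
        return ret;
    }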
Fixes: 1b9f274623 ("app/testpmd: add commands for tunnel offload")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
Remove the vdev args check for secondary process which prevents the
secondary from attaching to the device created by the primary process
via the hotplug framework. This check was removed for other vdevs but
was missed for failsafe.
Fixes: 4852aa8f6e ("drivers/net: enable hotplug on secondary process")
Cc: stable@dpdk.org
Signed-off-by: Kumara Parameshwaran <kumaraparamesh92@gmail.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
DMA on SN1022 SoC requires extra mapping of the memory via MCDI.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
The NIC DMA memory regions API allows establishing a mapping of the
DMA addresses used by the NIC to IOVA understood by the host, when the
IOMMU is absent and the NIC cannot address the entire host IOVA space
because of a too-small DMA mask.
The API does not allow addressing the entire host IOVA space, but it
allows mapping arbitrary regions of the space really used for NIC DMA.
A DMA region needs to be mapped in order to perform MCDI
initialization. Since the NIC has not been probed at that point, its
configuration cannot be accessed, and an UNKNOWN mapping type is
assumed.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Fixes: 60e53c078d ("net/sfc: support decrement IP TTL actions in transfer flows")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Fixes: 96fd2bd69b ("net/sfc: support flow action count in transfer rules")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
The driver's internal variable to track the next consumer index on
the Rx ring was not being set if there was an mbuf allocation failure.
In that scenario, it would eventually fall out of sync with the actual
consumer index and raise a false alarm on Thor, needlessly causing a
segmentation fault with testpmd.
Fixes: 03c8f2fe11 ("net/bnxt: detect bad opaque in Rx completion")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
The ULP context list was not updated when the high availability
feature was deinitialized. This caused the ULP context list to acquire
the lock when it was not supposed to, causing a deadlock. The fix is
to correctly clear the list.
Fixes: 3184b1ef66 ("net/bnxt: add HA support in ULP")
Cc: stable@dpdk.org
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
1. Removed the global flag for TruFlow global config initialization.
2. Modified the TruFlow context lock to be a global lock instead of a
   per-context lock.
3. Modified the ULP context list to check the ULP configuration data
   so alarm handlers can operate on the correct ULP context.
These changes help support multiple network cards using a single DPDK
application.
Fixes: d75b55121b ("net/bnxt: add context list for timers")
Cc: stable@dpdk.org
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The SRAM resource free did not reset the next block to be used when
the block was not empty. This caused flows not to be created when the
max flow limit had been reached, one flow was deleted, and a new flow
was added. The fix updates the next free block even when the block is
not empty.
Fixes: 37ff91c158 ("net/bnxt: add SRAM manager model")
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Move wc_tcam_slices_per_row and the database structures of global_cfg
and if_tbl to the session structure to support multiple TruFlow
sessions with different card types under a single DPDK application
instance.
Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Testing has shown no performance benefit from software prefetching
of receive completion descriptors in the AVX2 burst receive path,
and slightly better performance without them on some CPU families,
so this patch removes them.
Fixes: c4e4c18963 ("net/bnxt: add AVX2 RX/Tx")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Each call to the AVX2 vector burst receive function makes at
least one pass through the function's inner loop, loading
256 bytes of completion descriptors and copying 8 rte_mbuf
pointers regardless of whether there are any packets to be
received.
Unidirectional forwarding performance is improved by about
3-4% if we ensure that at least one packet can be received
before entering the inner loop.
Fixes: c4e4c18963 ("net/bnxt: add AVX2 RX/Tx")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The current approach detects the proxy port on each port (re-)plug and
may spam the log with error messages if the PMD does not support flows.
As testpmd is a debug tool, it must not do such implicit port handling.
Instead, the new API should be called only when the user requests that.
Revoke the existing code. Implement an explicit command-line primitive
to let the user find the proxy port themselves. Provide relevant hints.
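For reference (the exact testpmd command syntax is not reproduced
here), the underlying lookup boils down to the following sketch:

    #include <stdio.h>
    #include <rte_flow.h>

    /* Let the user find the transfer proxy port explicitly. */
    static void
    show_transfer_proxy(uint16_t port_id)
    {
        struct rte_flow_error error;
        uint16_t proxy_port_id;

        if (rte_flow_pick_transfer_proxy(port_id, &proxy_port_id,
                &error) != 0) {
            printf("Failed to pick transfer proxy for port %u: %s\n",
                   port_id,
                   error.message ? error.message : "(no info)");
            return;
        }
        printf("Transfer proxy for port %u is port %u\n",
               port_id, proxy_port_id);
    }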
Fixes: 1179f05cc9 ("ethdev: query proxy port to manage transfer flows")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Mempool registration code had a wrong assumption that it was always
dealing with packet mempools and always called
rte_pktmbuf_priv_flags(), which returned a random value for other
types of mempools.
In particular, it could consider MPRQ mempools as having externally
pinned buffers, which is wrong.
Packet mempools cannot be reliably recognized, but it is sufficient to
check that a mempool is not a packet one, because then it cannot have
externally pinned buffers.
Compare mempool private data size to that of packet mempools to check.
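A hedged sketch of that check (the helper name is hypothetical):

    #include <stdbool.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* A mempool whose private data size differs from that of packet
     * mempools cannot be a packet mempool, hence cannot carry the
     * pinned-external-buffer flag.
     */
    static bool
    mempool_has_pinned_ext_buf(struct rte_mempool *mp)
    {
        if (mp->private_data_size !=
            sizeof(struct rte_pktmbuf_pool_private))
            return false; /* not a packet mempool */
        return (rte_pktmbuf_priv_flags(mp) &
                RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF) != 0;
    }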
Fixes: 690b2a88c2 ("common/mlx5: add mempool registration facilities")
Fixes: fec28ca0e3 ("net/mlx5: support mempool registration")
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The lock sh->txpp.mutex was not correctly released on one return path
of the cleanup function, potentially causing a deadlock.
Fixes: d133f4cdb7 ("net/mlx5: create clock queue for packet pacing")
Cc: stable@dpdk.org
Signed-off-by: Chengfeng Ye <cyeaa@connect.ust.hk>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch uses tx_free_thresh to control the freeing of mbufs when
the common xmit algorithm is used.
This patch also modifies the implementation of the PMD's
tx_done_cleanup because the mbuf free algorithm changed.
In the testpmd single-core MAC forwarding scenario, performance is
improved by 10% at 64B on the Kunpeng920 platform.
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently the vector and simple xmit algorithms don't support
multi-segment packets, so if the Tx offload supports MBUF_FAST_FREE,
the driver can invoke rte_mempool_put_bulk() to free Tx mbufs in this
situation.
In the testpmd single-core MAC forwarding scenario, performance is
improved by 8% at 64B on the Kunpeng920 platform.
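A minimal sketch of the fast-free path (with "txep" and "nb_free"
standing in for the driver's software ring state):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* With MBUF_FAST_FREE, the application guarantees all Tx mbufs
     * come from the same mempool, are non-segmented and have a
     * reference count of 1, so one bulk call replaces per-mbuf frees.
     */
    static inline void
    tx_fast_free_bulk(struct rte_mbuf **txep, uint16_t nb_free)
    {
        if (nb_free > 0)
            rte_mempool_put_bulk(txep[0]->pool, (void **)txep,
                    nb_free);
    }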
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The current implementation for raw encap sets the length in bytes,
but the GTP 'extension' header length is an 8-bit field counted in
4-octet units.
This fixes the calculation of the header length.
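To illustrate the unit conversion (the helper name is hypothetical):
an 8-byte extension header must encode a length of 2, not 8.

    #include <stddef.h>
    #include <stdint.h>

    /* Convert a byte count to the GTP PSC 4-octet length units. */
    static inline uint8_t
    gtp_psc_ext_len(size_t size_in_bytes)
    {
        return (uint8_t)(size_in_bytes / 4);
    }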
Fixes: 9213c50e36 ("app/testpmd: support GTP PSC option in raw sets")
Cc: stable@dpdk.org
Signed-off-by: Raslan Darawsheh <rasland@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
When a port starts in non-isolated mode,
an internal indirect RSS is created that includes all configured queues
and a flow rule is created that references this indirect RSS.
If before switching to non-isolated mode an indirect RSS was created
that includes the same set of queues, it would be reused at this point.
However, because the port had been stopped (or not yet started),
the TIR for this indirect RSS had been destroyed (or not yet created).
The flow rule could not be created and the port start failed.
Creation of TIRs is moved before configuring non-isolated mode flows,
but it is not enough because of the following issue.
Commit 0cedf34da7 ("net/mlx5: move Rx queue reference count")
changed mlx5_rxq_get() not to increment the RxQ control structure
reference count; mlx5_rxq_ref() was introduced for this purpose.
mlx5_ind_table_obj_attach() was not updated to use the new function,
so when the port was stopped, the control structure reference count
of an RxQ used in RSS reached zero and the structure was destroyed.
Use mlx5_rxq_ref() to keep RxQ control structure
needed for indirect RSS persistence across port restart.
Fixes: ec4e11d41d ("net/mlx5: preserve indirect actions on restart")
Fixes: 0cedf34da7 ("net/mlx5: move Rx queue reference count")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
RSS can be one of the fate actions when creating a meter with a
policy. In the previous implementation, the RSS validation was missed
when creating a flow rule with such a meter, because the policy meter
was created first and then used in the rule. At the stage of meter
creation, no rte_flow_item* information was provided.
An unnecessary RSS expansion might be called since the validation was
missed, causing an unexpected error during rule creation. Even though
the rule should have been rejected from the very beginning, this
caused confusion, and other errors could also occur while the
validation was missed.
Adding the RSS validation inside the meter action validation will
prevent the code from continuing when there is a conflict between the
items, other actions and the policy meter RSS action.
Fixes: 4443201863 ("net/mlx5: support meter creation with policy")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Reviewed-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When an application creates several flows to match on a GRE tunnel
without explicitly specifying the GRE protocol type value in the flow
rules, the PMD translates that to a zero mask.
RDMA-CORE cannot distinguish between different inner flow types and
produces identical matchers for each zero mask.
The patch extracts the inner header type from the flow rule and forces
it in the GRE protocol type, if the application did not explicitly
specify any protocol type value.
Fixes: fc2c498ccb ("net/mlx5: add Direct Verbs translate items")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
When an application creates several flows to match on a GENEVE tunnel
without explicitly specifying the GENEVE protocol type value in the
flow rules, the PMD translates that to a zero mask.
RDMA-CORE cannot distinguish between different inner flow types and
produces identical matchers for each zero mask.
The patch extracts the inner header type from the flow rule and forces
it in the GENEVE protocol type, if the application did not explicitly
specify any protocol type value.
Fixes: e59a5dbcfd ("net/mlx5: add flow match on GENEVE item")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
RFC-2784 allows any valid Ethernet type in GRE protocol type field.
Add Ethernet to GRE RSS expansion.
Fixes: f4b901a46a ("net/mlx5: add flow GRE item")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
VXLAN-GPE extends the VXLAN protocol and provides a next protocol
field specifying the first inner header type.
The application can assign some explicit value to the
VXLAN-GPE::next_protocol field or set it to the default one. In the
latter case, the rdma-core library cannot recognize the matcher built
by the PMD correctly, and it results in the hardware configuration
missing the inner header match.
The patch forces the VXLAN-GPE::next_protocol assignment if the
application did not explicitly assign it a non-default value.
Fixes: 90456726eb ("net/mlx5: fix VXLAN-GPE item translation")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The secondary process depends on the vector offload flag to select
the right Rx offload path. This patch adds a variable in shared memory
to store the vector offload flag, which can be read directly by the
secondary process.
Fixes: 808a17b3c1 ("net/ice: add Rx AVX512 offload path")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Tested-by: Qin Sun <qinx.sun@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The ice DCF device reset is now supported. The release notes are
updated to reflect the feature.
Fixes: 1a86f4dbdf ("net/ice: support DCF device reset")
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Fix the wrong defines of the GTPU UL and DL flags; the two defines
were misplaced with respect to each other.
Fixes: 8ebb93942b ("net/ice/base: add function to set HW profile for raw flow")
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
If a packet uses multiple descriptors and its descriptor indices
wrap, the first descriptor flag is not updated last, which may cause
virtio to read an incomplete packet. For example, given a packet that
uses 64 descriptors in a virtio ring of size 256, with descriptor
indices 224~255 and 0~31, the current implementation updates the flags
of descriptors 224~255 earlier than those of descriptors 0~31.
This patch fixes this issue by updating descriptor flags in one loop,
so that the first descriptor flag is always updated last.
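A simplified, hypothetical sketch of the ordering rule (not the
actual vhost code; wrap counters are elided):

    #include <stdint.h>
    #include <rte_atomic.h>

    struct pkd_desc { uint16_t flags; /* other fields elided */ };

    /* Walk the descriptors from last to first in a single loop, with
     * a write barrier before the head flag, so the head descriptor
     * flag becomes visible last even when the indices wrap
     * (e.g. 224~255 followed by 0~31).
     */
    static void
    commit_desc_flags(struct pkd_desc *ring, uint16_t ring_size,
            uint16_t head, uint16_t n, const uint16_t *flags)
    {
        int i;

        for (i = (int)n - 1; i >= 0; i--) {
            uint16_t idx = (head + i) % ring_size;

            if (i == 0)
                rte_smp_wmb(); /* other flags visible first */
            ring[idx].flags = flags[i];
        }
    }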
Fixes: 873e8dad6f ("vhost: support packed ring in async datapath")
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The return value of "mlx5_os_wrapped_mkey_create" is checked in the
caller. A zero return value means success without any error.
The typo in the if-condition is fixed to prevent a misjudgment.
Fixes: 398ea8450c ("vdpa/mlx5: workaround dirty bitmap MR creation")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
When the example starts in mergeable mode with an i40e port, it fails
to launch because the example uses the default MTU MAX_MTU to
configure the ethdev. The root cause is that some devices have
Ethernet frame overhead, so MAX_MTU ends up larger than the device's
max MTU and the ethdev configuration fails.
This patch checks the device's max MTU before setting the ethdev
configuration. If the device reports a max MTU, that value is used for
the configuration.
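A minimal sketch of the check (MAX_MTU stands for the example's
default; the helper name is hypothetical):

    #include <rte_common.h>
    #include <rte_ethdev.h>

    /* Never ask for more than the device can take. */
    static uint16_t
    pick_mtu(uint16_t port_id, uint16_t max_mtu_default)
    {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            return max_mtu_default;
        return RTE_MIN(max_mtu_default, dev_info.max_mtu);
    }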
Fixes: 1bb4a528c4 ("ethdev: fix max Rx packet length")
Reported-by: Xingguang He <xingguang.he@intel.com>
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This library can be made optional.
The dumpcap and pdump applications depend on this library; check for
dependencies as is done for the examples.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
metrics, bitratestats, jobstats and latencystats libraries can be made
optional as they provide standalone features.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
GRO and GSO integration in testpmd is relatively self-contained and
easy to extract.
Those libraries can be made optional as they provide standalone
features.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>