The PMD sets up and clears the slow path interrupt status block in the
dev_start and dev_stop flows, while the DMA memory for the device's slow
path interrupt status block is allocated in the dev_configure flow.
This creates a state where, if a slow path interrupt arrives from the
device after dev_stop is called, the PMD sees a stale status block
consumer value in the dev_start flow: the DMA memory for the status
block still belongs to the old configuration, while dev_start programs a
new slow path interrupt status block configuration.
Since the PMD then fails to acknowledge the new slow path interrupt with
the correct status block consumer value, the device keeps triggering the
interrupt, causing an interrupt flood.
Fix this by creating and destroying the status block DMA memory in the
dev_start and dev_stop flows instead of the dev_configure and dev_close
flows.
Fixes: 540a211084 ("bnx2x: driver core")
Cc: stable@dpdk.org
Signed-off-by: Shahed Shaikh <shshaikh@marvell.com>
Acked-by: Rasesh Mody <rmody@marvell.com>
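As an editorial illustration of the new lifecycle (a minimal, generic
sketch with hypothetical helper and zone names, not the actual bnx2x
code): the status block DMA memory is reserved in dev_start and freed in
dev_stop, so every start begins from a clean consumer index.

    #include <errno.h>
    #include <rte_memzone.h>

    static const struct rte_memzone *sb_mz; /* status block DMA memory */

    static int
    sb_dma_alloc(int socket_id) /* called from the dev_start flow */
    {
        sb_mz = rte_memzone_reserve_aligned("sb_dma", 4096, socket_id,
                RTE_MEMZONE_IOVA_CONTIG, RTE_CACHE_LINE_SIZE);
        return sb_mz == NULL ? -ENOMEM : 0;
    }

    static void
    sb_dma_free(void) /* called from the dev_stop flow */
    {
        rte_memzone_free(sb_mz);
        sb_mz = NULL;
    }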
Patch "8bd31421c593 ("net/bnx2x: fix ramrod timeout")"
introduced a regression where sc->scan_fp flags is
set for unexpectedly long time. So the slow path completion
handler flow is run unnecessarily which walks over receive
descriptor ring of fast path and drops the data packets while looking
for slow path completion descriptor out of fast path ring.
This issue is seen under heavy traffic with link events happening
in background.
Fixes: 8bd31421c5 ("net/bnx2x: fix ramrod timeout")
Cc: stable@dpdk.org
Signed-off-by: Shahed Shaikh <shshaikh@marvell.com>
Acked-by: Rasesh Mody <rmody@marvell.com>
The previous solution used memzones in an invalid way in the hope of
assigning IO queues to the appropriate NUMA zone.
The right way is to use the socket_id from the Rx/Tx queue setup
functions and pass it on to the IO queue.
Fixes: 3d3edc265f ("net/ena: make coherent memory allocation NUMA-aware")
Cc: stable@dpdk.org
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
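As a hedged sketch of the principle (the helper name is hypothetical):
the socket_id argument received in the Rx/Tx queue setup devop is the
right NUMA hint for the queue's memory.

    #include <rte_malloc.h>

    /* Allocate queue memory on the socket the application asked for,
     * instead of inferring NUMA placement from a memzone.
     */
    static void *
    alloc_io_queue_mem(size_t size, int socket_id)
    {
        return rte_zmalloc_socket("io_queue", size,
                RTE_CACHE_LINE_SIZE, socket_id);
    }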
Configure the buffer size based on the following parameters:
- max-pkt-len
- maximum supported segments per MTU
The buffer size is configured as follows:
- If the platform supports an unlimited number of segments per packet,
  the default buffer size is used.
- If the platform supports nb_mtu_seg_max segments, the buffer size is
  configured as (max-pkt-len / nb_mtu_seg_max) + headroom.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
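A hedged application-side sketch of the sizing rule described above
(port_id and max_pkt_len are assumed inputs, not part of the patch):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static uint16_t
    calc_buf_size(uint16_t port_id, uint32_t max_pkt_len)
    {
        struct rte_eth_dev_info info;

        rte_eth_dev_info_get(port_id, &info);

        /* UINT16_MAX advertises unlimited segments per packet. */
        if (info.tx_desc_lim.nb_mtu_seg_max == UINT16_MAX)
            return RTE_MBUF_DEFAULT_BUF_SIZE;

        /* Split the packet across the supported segment count. */
        return max_pkt_len / info.tx_desc_lim.nb_mtu_seg_max +
                RTE_PKTMBUF_HEADROOM;
    }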
The rte_eth_dev_info structure exposes nb_seg_max and nb_mtu_seg_max to
report the maximum number of segments supported by a given platform.
Define UINT16_MAX as the default value of the above-mentioned fields to
indicate support for an unlimited/maximum number of segments.
Based on these values, an application can decide the best buffer size to
use when creating the mbuf pool.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
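For example, a hedged sketch of feeding such a computed size into an
application's pool (pool name and counts are illustrative):

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    static struct rte_mempool *
    create_rx_pool(uint16_t buf_size)
    {
        /* 8192 mbufs, 256 per-core cache, no private area. */
        return rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
                buf_size, rte_socket_id());
    }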
This patch adds two parameters, `start_queue` and `queue_count`, to
specify the range of netdev queues used by the AF_XDP PMD.
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
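A hedged usage sketch (the interface name is illustrative): attach the
PMD to queues 0-3 of an existing netdev.

    #include <rte_bus_vdev.h>

    static int
    create_af_xdp_port(void)
    {
        /* Use eth0's queues 0..3 for this AF_XDP port. */
        return rte_vdev_init("net_af_xdp0",
                "iface=eth0,start_queue=0,queue_count=4");
    }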
Implement zero copy in the AF_XDP PMD through the mbuf external memory
mechanism to achieve high performance.
This patch also provides a new parameter, "pmd_zero_copy", so users can
choose whether to enable zero copy in the AF_XDP PMD.
To be clear, "zero copy" here is different from the "zero copy mode" of
AF_XDP: it refers to zero copy between the AF_XDP umem and the mbufs
used in the DPDK application.
Suggested-by: Vipin Varghese <vipin.varghese@intel.com>
Suggested-by: Tummala Sivaprasad <sivaprasad.tummala@intel.com>
Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
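A hedged usage sketch of the new devarg (interface name illustrative):

    #include <rte_bus_vdev.h>

    static int
    create_zc_port(void)
    {
        /* Opt in to umem<->mbuf zero copy on this port. */
        return rte_vdev_init("net_af_xdp0",
                "iface=eth0,pmd_zero_copy=1");
    }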
The failsafe driver had several issues in the info it reported in
dev_info_get:
- it cleared dev_info->device, which rte_eth_dev_info_get had already set
- many fields (for example, max_rx_queues) should be the minimum across
  all sub-devices
- it reported Tx capabilities for the currently active transmit device,
  but that device may change.
Enough was broken that the info_get handler ended up being reworked.
There is no need to save current values or keep a template for defaults.
Fixes: 4e31ee26ed ("net/failsafe: report actual device capabilities")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
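As an editorial sketch of the aggregation rule (the helper and its
parameters are hypothetical, not the failsafe code): each limit is the
minimum over the sub-devices, and only offloads supported by every
sub-device are advertised.

    #include <stdint.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>

    static void
    aggregate_infos(const uint16_t *sub_ports, unsigned int nb_subs,
            struct rte_eth_dev_info *info)
    {
        unsigned int i;

        info->max_rx_queues = UINT16_MAX;
        info->tx_offload_capa = UINT64_MAX;
        for (i = 0; i < nb_subs; i++) {
            struct rte_eth_dev_info sub;

            rte_eth_dev_info_get(sub_ports[i], &sub);
            info->max_rx_queues = RTE_MIN(info->max_rx_queues,
                    sub.max_rx_queues);
            /* Only advertise what every sub-device supports. */
            info->tx_offload_capa &= sub.tx_offload_capa;
        }
    }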
Initialize the mark_spec pointer to NULL.
Coverity issue: 341075
Fixes: 0bbcfc706a ("net/i40e: support MARK and RSS flow action")
Signed-off-by: Mesut Ali Ergin <mesut.a.ergin@intel.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Make the changes needed to support RSS for Thor-based controllers.
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This commit adds support to the bnxt PMD for devices
based on the BCM57508 "thor" Ethernet controller.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reduce code duplication and prepare for supporting hardware with
different ring allocation requirements by refactoring ring
allocation code.
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reduce code duplication and prepare for newer controllers that
use different doorbell protocols by refactoring doorbell handling
code.
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Use consistent values for vnic->rss_rule. No functional change; these
all equate to the uint16_t value 0xffff.
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Use consistent macro names for ring type values. (There is no
functional change, the "alloc" and "free" values are identical.)
Fixes: 6371b91fb6 ("net/bnxt: add ring alloc/free")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Descriptor fields in the CP (completion) ring are in little-endian form;
convert them to CPU endianness before performing arithmetic operations.
Also use a more general comparison when checking for ring index wrap.
Fixes: f2a768d4d1 ("net/bnxt: add completion ring")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
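A hedged, generic sketch of the pattern (the descriptor layout and field
names are hypothetical): convert the little-endian index before doing
arithmetic, and use ">=" rather than "==" so an index that jumped past
the ring boundary is still caught.

    #include <stdint.h>
    #include <rte_byteorder.h>

    struct cmpl_desc {          /* hypothetical descriptor layout */
        rte_le16_t cons_idx;    /* little-endian on the wire */
    };

    static inline uint16_t
    cons_to_ring_idx(const struct cmpl_desc *d, uint16_t ring_size)
    {
        uint16_t idx = rte_le_to_cpu_16(d->cons_idx);

        return idx >= ring_size ? idx - ring_size : idx;
    }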
Tx datapath feature bits were useful during the migration from the old
offload API to the new one. However, right now they just add
indirection, which complicates reading and understanding the code. Also,
adding a new offload requires adding a new feature bit, making patches
longer and harder to understand. So, remove the feature bits that
correspond to Tx offloads and simply advertise device and per-queue
offloads directly.
Generic code can still mask some offloads if the running HW or FW does
not support them.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Rx datapath feature bits were useful during the migration from the old
offload API to the new one. However, right now they just add
indirection, which complicates reading and understanding the code. Also,
adding a new offload requires adding a new feature bit, making patches
longer and harder to understand. So, remove the feature bits that
correspond to Rx offloads and simply advertise device and per-queue
offloads directly.
Generic code can still mask some offloads if the running HW or FW does
not support them.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
We currently have no check on the mempool pointer passed to
rte_eth_rx_queue_setup.
Let's avoid a plain crash when dereferencing it.
Suggested-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
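The added validation is small; a hedged sketch of its shape inside
rte_eth_rx_queue_setup() (the exact log wording may differ):

    if (mp == NULL) {
        RTE_ETHDEV_LOG(ERR, "Invalid null mempool pointer\n");
        return -EINVAL;
    }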
Rx checksum flags and input errors shouldn't be updated on Tx, as that
would only work for packet forwarding.
The ierrors statistic should be updated on Rx, right after checking the
Rx checksum flags, if the Rx checksum offload is enabled.
Fixes: 1173fca25a ("ena: add polling-mode driver")
Cc: stable@dpdk.org
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
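A hedged sketch of the idea (the counter name is hypothetical): account
the error where it is detected, on the Rx path.

    /* After parsing the descriptor's checksum status into ol_flags: */
    if (mbuf->ol_flags & (PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD))
        ++rx_ring->rx_stats.bad_csum;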
Instead of counting the number of used NIC Tx buffers, just count the
number of Tx packets.
Fixes: 45b6d86184 ("net/ena: add per-queue software counters stats")
Cc: stable@dpdk.org
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
The shared memory packet interface (memif) PMD allows DPDK and any other
client using memif (DPDK, VPP, libmemif) to communicate over shared
memory. The created device transmits packets in a raw format. It can be
used in Ethernet mode, IP mode, or Punt/Inject. At the moment, only
Ethernet mode is supported in the DPDK memif implementation. Memif is
Linux only.
Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
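A hedged usage sketch (devarg names follow the initial memif
documentation and should be treated as illustrative):

    #include <rte_bus_vdev.h>

    static int
    create_memif_port(void)
    {
        /* Connect as a memif slave over a UNIX socket. */
        return rte_vdev_init("net_memif0",
                "role=slave,socket=/run/memif.sock");
    }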
This patch removes useless checks on the 'prev' pointer, as it is always
set to a valid value beforehand.
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The head segment's data_len field wrongly accumulates the length of all
the segments of the chain, whereas it should hold the length of the
first segment only.
Fixes: a76290c8f1 ("net/virtio: implement Rx path for packed queues")
Cc: stable@dpdk.org
Reported-by: Yaroslav Brustinov <ybrustin@cisco.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
After having dequeued a burst of descriptors, there may be a
need to dequeue a few more if the last packet was segmented
and not complete. When it happens, the extra segments were
not properly attached to the mbuf chain, and so were lost.
Also, the head segment's data_len field wrongly accumulates the length
of all the segments of the chain.
This patch fixes both the mbuf chaining and the head segment's data_len
field.
Fixes: bcac5aa207 ("net/virtio: improve batching in mergeable path")
Cc: stable@dpdk.org
Reported-by: Yaroslav Brustinov <ybrustin@cisco.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
After having dequeued a burst of descriptors, there may be a
need to dequeue a few more if the last packet was segmented
and not complete. When it happens, the extra segments were
not properly attached to the mbuf chain, and so were lost.
Also, the head segment's data_len field wrongly accumulates the length
of all the segments of the chain.
This patch fixes both the mbuf chaining and the head segment's data_len
field.
Fixes: e5f456a98d ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Reported-by: Yaroslav Brustinov <ybrustin@cisco.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
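A hedged, generic sketch of the correct chaining and accounting the two
fixes above restore (the helper is illustrative, not the virtio code):
the head's data_len covers only the first segment, while pkt_len and
nb_segs account for the whole chain.

    #include <rte_mbuf.h>

    static void
    append_seg(struct rte_mbuf *head, struct rte_mbuf **tail,
            struct rte_mbuf *seg, uint16_t seg_len)
    {
        seg->data_len = seg_len;
        seg->next = NULL;

        (*tail)->next = seg;        /* attach, don't just count */
        *tail = seg;

        head->nb_segs += 1;
        head->pkt_len += seg_len;   /* head->data_len unchanged */
    }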
Set RTE_ETH_DEV_CLOSE_REMOVE upon probe so all the private
resources for the port can be freed by rte_eth_dev_close().
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
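The probe-time change boils down to setting one flag; a minimal sketch,
assuming eth_dev is the port being probed (RTE_ETH_DEV_CLOSE_REMOVE is
the opt-in flag of this API era):

    eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;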
Some helpers in the header file are force-inlined while others are only
inlined; this patch forces inlining for all of them.
This avoids them being emitted as out-of-line functions when called
multiple times in the same object file. For example, when packed ring
support was added to the vhost-user library, rte_memcpy_generic was no
longer being inlined.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
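A minimal sketch of the technique (the helper is illustrative):
__rte_always_inline forces the compiler to inline even when the function
is called from many sites in the same object file.

    #include <rte_common.h>

    static __rte_always_inline uint32_t
    desc_avail(uint32_t head, uint32_t tail, uint32_t size)
    {
        return (tail - head) & (size - 1); /* size is a power of 2 */
    }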
Now that we have a single function to map the descriptor buffers, let's
prefetch them there, as it is the earliest place we can do it.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
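A sketch of the pattern, with pkts[i] standing in for the just-mapped
descriptor buffer (names illustrative): issue the prefetch as soon as
the buffer address is known, giving the cache line time to arrive before
the payload is read.

    rte_prefetch0(rte_pktmbuf_mtod(pkts[i], void *));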
Handling of fragmented virtio-net headers and indirect descriptor tables
was implemented to fix CVE-2018-1059. It should never happen with
healthy guests and so is already considered an unlikely code path.
This patch moves these bits into non-inline dedicated functions
to reduce the I-cache pressure.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
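A hedged sketch of the pattern (the handler is hypothetical): keeping
the rare path out of line shrinks the hot path's I-cache footprint.

    #include <rte_common.h>

    static __rte_noinline void
    handle_fragmented_hdr(void)
    {
        /* slow, rarely executed recovery work lives here */
    }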
At runtime either packed Tx/Rx functions will always be called,
or split Tx/Rx functions will always be called.
This patch removes the forced inlining in order to reduce
the I-cache pressure.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
In order to reduce the I-cache pressure, this patch removes
the inlining of the dirty pages logging functions, that we
can consider as cold path.
Indeed, these functions are only called while doing live
migration, so not called most of the time.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
This .rx_queue_setup devop is called after ethdev has already
dereferenced the mempool pointer.
There is no need to check it, and we can remove this rte_exit.
Fixes: 48cec290a3 ("net/virtio: move queue configure code to proper place")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Use of weak symbols can hide makefile errors, especially when custom
makefiles are used. Remove the use of weak symbols to avoid a stub
function being linked into production code.
Signed-off-by: David Harton <dharton@cisco.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
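A hedged illustration of the pitfall (the function name is
hypothetical): if a custom makefile drops the real implementation's
object file, a weak stub like this links silently instead of failing the
build.

    #include <stdint.h>

    uint16_t __attribute__((weak))
    xmit_pkts_vec(void *txq, void **pkts, uint16_t nb_pkts)
    {
        (void)txq;
        (void)pkts;
        (void)nb_pkts;
        return 0; /* stub silently "sends" nothing */
    }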
When users call rte_eth_dev_close() and rte_dev_remove(), the af_xdp
PMD returns -1 (EPERM) because eth_dev == NULL.
Since the af_xdp PMD advertises RTE_ETH_DEV_CLOSE_REMOVE, all the
resources are freed on rte_eth_dev_close(). rte_dev_remove() tries to
detach the device and subsequently calls rte_pmd_af_xdp_remove(), which
tries to free already freed resources and fails.
Fix it by returning success.
Fixes: f1debd77ef ("net/af_xdp: introduce AF_XDP PMD")
Cc: stable@dpdk.org
Reported-at: https://patchwork.ozlabs.org/patch/1106528/
Suggested-by: Ilya Maximets <i.maximets@samsung.com>
Signed-off-by: William Tu <u9012063@gmail.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
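A minimal sketch of the fix inside the remove path, assuming the
surrounding variables of rte_pmd_af_xdp_remove():

    eth_dev = rte_eth_dev_allocated(rte_vdev_device_name(dev));
    if (eth_dev == NULL)
        return 0; /* port already closed; nothing left to free */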
The device private pointer (dev_private) is of type void *
therefore no cast is necessary in C.
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
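The pattern applied throughout this series of commits, as a sketch with
an illustrative struct name: dev_private is void *, so assigning it
needs no cast in C.

    struct pmd_internals *internals;

    internals = (struct pmd_internals *)dev->data->dev_private; /* before */
    internals = dev->data->dev_private;                         /* after  */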
The device private pointer (dev_private) is of type void *
therefore no cast is necessary in C.
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The device private pointer (dev_private) is of type void *
therefore no cast is necessary in C.
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The device private pointer (dev_private) is of type void *
therefore no cast is necessary in C.
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The device private pointer (dev_private) is of type void *
therefore no cast is necessary in C.
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The device private pointer (dev_private) is of type void *
therefore no cast is necessary in C.
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The device private pointer (dev_private) is of type void *
therefore no cast is necessary in C.
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The device private pointer (dev_private) is of type void *
therefore no cast is necessary in C.
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The device private pointer (dev_private) is of type void *
therefore no cast is necessary in C.
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Lance Richardson <lance.richardson@broadcom.com>
The device private pointer (dev_private) is of type void *
therefore no cast is necessary in C.
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>