When VIRTIO_F_ORDER_PLATFORM (36) is not negotiated, the frontend
and backend are assumed to be implemented in software, that is, they can
run on identical CPUs in an SMP configuration.
Thus a weak form of memory barriers, like rte_smp_r/wmb rather than
rte_cio_r/wmb, is sufficient for this case (vq->hw->weak_barriers == 1)
and yields better performance.
For this case, this patch yields even better performance by replacing
the two-way barriers with C11 one-way barriers for the used index in
the split ring.
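As an illustration, a minimal, self-contained sketch of the idea (not
the actual driver code; the structure and names are simplified): the
driver reads the used index written by the device with a C11
load-acquire instead of a plain load followed by a two-way read barrier.

    #include <stdint.h>

    struct sk_vring_used {
        uint16_t flags;
        uint16_t idx;    /* written by the device, read by the driver */
    };

    /* Two-way variant: a read barrier (rte_smp_rmb() in DPDK) keeps loads
     * on both sides from crossing it. */
    static inline uint16_t
    sk_used_idx_rmb(const struct sk_vring_used *used)
    {
        uint16_t idx = used->idx;
        __atomic_thread_fence(__ATOMIC_ACQUIRE);
        return idx;
    }

    /* One-way variant: load-acquire only prevents later accesses from
     * being hoisted above this load, which is sufficient when
     * vq->hw->weak_barriers == 1 and is cheaper on weakly ordered CPUs. */
    static inline uint16_t
    sk_used_idx_acquire(const struct sk_vring_used *used)
    {
        return __atomic_load_n(&used->idx, __ATOMIC_ACQUIRE);
    }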
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Optimize the packed ring Tx path like the Rx path: split the Tx path
into batch and single Tx functions. The batch function is further
optimized with AVX512 instructions.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Optimize the packed ring Rx path with SIMD instructions. The
optimization approach is similar to vhost: split the path into batch
and single functions. The batch function is further optimized with
AVX512 instructions.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Move the offload, xmit cleanup and packed xmit enqueue functions to a
header file. These functions will be reused by the packed ring
vectorized path.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Previously, the virtio split ring vectorized path was enabled by
default. This is not suitable for everyone because that path does not
follow the virtio spec. Add a new devarg for virtio vectorized path
selection. By default, the vectorized path is disabled.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Ring initialization is different when the in-order feature is
negotiated. This action should depend on the negotiated feature bits.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Introduce a free threshold setting for the Rx queue; its default value
is 32. Limit the threshold to a multiple of four, as only the
vectorized packed Rx function will utilize it. The virtio driver will
rearm the Rx queue when more than rx_free_thresh descriptors have been
dequeued.
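For reference, a usage sketch of setting the threshold from an
application at queue setup time (the queue size, port/queue ids and the
value 64 are illustrative):

    #include <rte_ethdev.h>

    static int
    sk_setup_rxq(uint16_t port_id, uint16_t queue_id, struct rte_mempool *mp)
    {
        struct rte_eth_rxconf rxconf;
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        rxconf = dev_info.default_rxconf;
        rxconf.rx_free_thresh = 64;  /* multiple of 4, as required by the
                                      * vectorized packed Rx function */

        return rte_eth_rx_queue_setup(port_id, queue_id, 512,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, mp);
    }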
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The virtio driver has its own logtype and should not use the legacy
PMD logtype.
Fixes: 32c118fd0059 ("virtio: free mbuf's with threshold")
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Fixes: 1c8489da561b ("net/virtio-user: fix multi-process support")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
A missing parenthesis around the expression before the cast to a
struct virtio_net_hdr pointer makes the arithmetic be done in
sizeof(struct virtio_net_hdr) units.
Use rte_pktmbuf_mtod_offset() to fix the problem.
The type of head_size is changed to signed since some compilers
complain about unary minus applied to an unsigned value.
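A minimal sketch of the pitfall and the fix (struct virtio_net_hdr is
stood in by a trimmed definition; head_size is its size):

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* Trimmed stand-in for the real struct virtio_net_hdr. */
    struct sk_virtio_net_hdr {
        uint8_t  flags;
        uint8_t  gso_type;
        uint16_t hdr_len;
    };

    static inline struct sk_virtio_net_hdr *
    sk_hdr_before_data(struct rte_mbuf *m, int head_size)
    {
        /* Buggy form: subtracting from the already-casted pointer scales
         * the offset by sizeof(struct sk_virtio_net_hdr):
         *
         *   (struct sk_virtio_net_hdr *)rte_pktmbuf_mtod(m, char *) - head_size
         *
         * Fixed form: apply the byte offset first, then cast. */
        return rte_pktmbuf_mtod_offset(m, struct sk_virtio_net_hdr *,
                                       -head_size);
    }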
Fixes: 1ae55ad38e5e ("net/virtio: fix mbuf data and packet length mismatch")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Deferred start of a Tx queue is not supported by the driver.
Fixes: 0748be2cf9a2 ("ethdev: queue start and stop")
Cc: stable@dpdk.org
Signed-off-by: Dilshod Urazov <dilshod.urazov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Deferred start of an Rx queue is not supported by the driver.
Fixes: 0748be2cf9a2 ("ethdev: queue start and stop")
Cc: stable@dpdk.org
Signed-off-by: Dilshod Urazov <dilshod.urazov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The previous fix removed the usage of rte_pktmbuf_prepend() to get the
pointer to the virtio net header, since prepending changes the mbuf's
data_off and data_len.
The size of the virtio net header is added to the segment length when
the Tx descriptor is composed, but the segment address (calculated
using data_off) is not adjusted to take the size of the virtio net
header into account.
Fixes: 1ae55ad38e5e ("net/virtio: fix mbuf data and packet length mismatch")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
When VIRTIO_F_ORDER_PLATFORM (36) is not negotiated, the frontend
and backend are assumed to be implemented in software, that is, they can
run on identical CPUs in an SMP configuration.
Thus a weak form of memory barriers, like rte_smp_r/wmb rather than
rte_cio_r/wmb, is sufficient for this case (vq->hw->weak_barriers == 1)
and yields better performance.
For this case, this patch yields even better performance by replacing
the two-way barriers with C11 one-way barriers for the used flags in
the packed ring.
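A minimal sketch of the read side (simplified structures and names, not
the actual driver code): the used check loads the descriptor flags with
acquire semantics instead of a plain load plus a two-way read barrier.

    #include <stdbool.h>
    #include <stdint.h>

    #define SK_DESC_F_AVAIL  (1 << 7)
    #define SK_DESC_F_USED   (1 << 15)

    struct sk_packed_desc {
        uint64_t addr;
        uint32_t len;
        uint16_t id;
        uint16_t flags;   /* written by the device, read by the driver */
    };

    static inline bool
    sk_desc_is_used(const struct sk_packed_desc *desc, bool used_wrap_counter)
    {
        /* Load-acquire: later reads of addr/len/id cannot be hoisted above
         * this load, which is all that is needed when weak_barriers == 1. */
        uint16_t flags = __atomic_load_n(&desc->flags, __ATOMIC_ACQUIRE);
        bool avail = !!(flags & SK_DESC_F_AVAIL);
        bool used  = !!(flags & SK_DESC_F_USED);

        return avail == used && used == used_wrap_counter;
    }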
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
When VIRTIO_F_ORDER_PLATFORM (36) is not negotiated, the frontend
and backend are assumed to be implemented in software, that is, they can
run on identical CPUs in an SMP configuration.
Thus a weak form of memory barriers, like rte_smp_r/wmb rather than
rte_cio_r/wmb, is sufficient for this case (vq->hw->weak_barriers == 1)
and yields better performance.
For this case, this patch yields even better performance by replacing
the two-way barriers with C11 one-way barriers for the avail flags in
the packed ring.
Meanwhile, a read barrier is required to ensure ordering between the
descriptor's flags and content reads [1]. With C11, a load-acquire can
enforce this ordering instead of an rmb barrier.
[1] https://patchwork.dpdk.org/patch/49109/
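A minimal sketch of the write side (simplified structure, not the
actual driver code): the descriptor body is filled first, then the
avail flags are stored with release semantics so the device cannot
observe the flags before the content.

    #include <stdint.h>

    struct sk_packed_desc {
        uint64_t addr;
        uint32_t len;
        uint16_t id;
        uint16_t flags;
    };

    static inline void
    sk_desc_publish(struct sk_packed_desc *desc, uint64_t addr, uint32_t len,
                    uint16_t id, uint16_t avail_used_flags)
    {
        desc->addr = addr;
        desc->len  = len;
        desc->id   = id;
        /* Store-release: the writes above cannot be reordered past the
         * flags store that makes the descriptor visible to the device. */
        __atomic_store_n(&desc->flags, avail_used_flags, __ATOMIC_RELEASE);
    }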
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
If the virtio header room is reserved with rte_pktmbuf_prepend(), both
the segment data length and the packet length of the mbuf are
increased. The data length will equal the descriptor length, while the
packet length should not include the virtio-net header, as the header
is not part of the packet. This causes a mismatch in the mbuf
structure. Fix this issue by accessing the mbuf data directly and
increasing the descriptor length when needed.
Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
Fixes: 4905ed3a523f ("net/virtio: optimize Tx enqueue for packed ring")
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Reported-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
With the vectorized functions, only the Rx stat for the number of
packets is incremented.
Also update the other statistics.
The performance impact is about 2%.
Fixes: fc3d66212fed ("virtio: add vector Rx")
Cc: stable@dpdk.org
Signed-off-by: Thibaut Collet <thibaut.collet@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Check whether there is enough space before the burst enqueue
operation. If more space is needed, try to clean up used descriptors
on demand. This gives more chances to free used descriptors and thus
helps RFC2544 performance. Also deduct failed xmit packets from the
total xmit count.
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
When doing an in-order xmit enqueue, packets are buffered and then
flushed into the avail ring. Buffered packets can be dropped due to
insufficient space. Moving the stats update to just after a successful
avail ring update guarantees correctness.
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
We should check the descriptor state instead of the vq's internal
free count (i.e. the number of descriptors that we haven't made
available) for the remaining mergeable packets.
Fixes: a76290c8f1cf ("net/virtio: implement Rx path for packed queues")
Cc: stable@dpdk.org
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
When there are not enough segments for a packet in the mergeable
packed Rx path, we should free the whole mbuf chain instead of just
recycling the last segment.
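For reference, the difference in a minimal sketch (the helper name is
illustrative):

    #include <rte_mbuf.h>

    static void
    sk_drop_incomplete_packet(struct rte_mbuf *first_seg)
    {
        /* Wrong: rte_pktmbuf_free_seg() releases only the one segment it is
         * given, leaking the rest of the chain:
         *
         *     rte_pktmbuf_free_seg(last_seg);
         *
         * Right: rte_pktmbuf_free() walks the ->next chain and releases
         * every segment of the packet. */
        rte_pktmbuf_free(first_seg);
    }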
Fixes: a76290c8f1cf ("net/virtio: implement Rx path for packed queues")
Cc: stable@dpdk.org
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
When there are not enough segments for a packet in the mergeable
Rx path, we should free the whole mbuf chain instead of just
recycling the last segment.
Fixes: bcac5aa207f8 ("net/virtio: improve batching in mergeable path")
Cc: stable@dpdk.org
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
When there are not enough segments for a packet in the in-order
mergeable Rx path, we should free the whole mbuf chain instead of
just recycling the last segment.
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
VLAN tag insertion should be done in Tx prepare, not in the Tx burst
functions. One of the goals of Tx prepare is to be able to do the
preparations in advance (possibly on a different CPU core) and then
transmit fast.
Also, Tx prepare can report that a packet does not pass the Tx offloads
check, e.g. that it does not have enough headroom to insert the VLAN
header.
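A minimal sketch of what such a tx_prepare loop can look like (the
function itself is illustrative, not the driver's code; the flag and
helper names follow the mbuf/ether API of that time):

    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_errno.h>

    static uint16_t
    sk_tx_prepare_vlan(struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
    {
        uint16_t nb;

        for (nb = 0; nb < nb_pkts; nb++) {
            struct rte_mbuf *m = tx_pkts[nb];

            if (m->ol_flags & PKT_TX_VLAN_PKT) {
                /* May fail, e.g. if there is not enough headroom; report
                 * the error instead of mis-transmitting later. */
                int err = rte_vlan_insert(&m);
                if (err != 0) {
                    rte_errno = -err;
                    break;
                }
                tx_pkts[nb] = m;  /* rte_vlan_insert() may change the mbuf */
            }
        }
        return nb;   /* number of packets that passed preparation */
    }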
Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation")
Cc: stable@dpdk.org
Signed-off-by: Dilshod Urazov <dilshod.urazov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Virtio requires the pseudo-header checksum in the TCP/UDP checksum
field to do the offload, but this was lost when Tx prepare was
introduced. Also, rte_validate_tx_offload() should be used to validate
the Tx offloads.
It is also incorrect to do virtio_tso_fix_cksum() after prepending to
the mbuf without taking the prepended size into account, since the
layer 2/3/4 lengths then provide incorrect offsets.
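A minimal sketch of the validation part in a tx_prepare callback
(illustrative only; PMDs often compile this check only in debug builds,
it is shown unconditionally here):

    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_errno.h>

    static uint16_t
    sk_tx_prepare_offload(struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
    {
        uint16_t nb;

        for (nb = 0; nb < nb_pkts; nb++) {
            int err = rte_validate_tx_offload(tx_pkts[nb]);
            if (err != 0) {
                rte_errno = -err;
                break;            /* caller sees how many packets passed */
            }
            /* The pseudo-header checksum would be filled in here, before
             * any virtio-net header is prepended, so that the l2/l3/l4
             * lengths still give correct offsets. */
        }
        return nb;
    }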
Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation")
Cc: stable@dpdk.org
Signed-off-by: Dilshod Urazov <dilshod.urazov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
This patch removes useless checks on the 'prev' pointer, as it
is always set beforehand to a valid value.
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The head segment's data_len field is wrongly summed with the lengths
of all the segments of the chain, whereas it should be the length of
the first segment only.
Fixes: a76290c8f1cf ("net/virtio: implement Rx path for packed queues")
Cc: stable@dpdk.org
Reported-by: Yaroslav Brustinov <ybrustin@cisco.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
After having dequeued a burst of descriptors, there may be a
need to dequeue a few more if the last packet was segmented
and not complete. When that happens, the extra segments were
not properly attached to the mbuf chain, and so were lost.
Also, the head segment's data_len field was wrongly summed with
the lengths of all the segments of the chain.
This patch fixes both the mbuf chaining and the head segment's
data_len field.
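A minimal sketch of the intended bookkeeping when attaching a late
segment to an existing chain (the helper name is illustrative):

    #include <rte_mbuf.h>

    static void
    sk_append_segment(struct rte_mbuf *head, struct rte_mbuf **prev,
                      struct rte_mbuf *seg, uint16_t len)
    {
        seg->data_len = len;         /* length of this segment only */
        seg->next = NULL;

        (*prev)->next = seg;         /* attach to the chain */
        *prev = seg;

        head->pkt_len += len;        /* total length lives in the head */
        head->nb_segs++;
    }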
Fixes: bcac5aa207f8 ("net/virtio: improve batching in mergeable path")
Cc: stable@dpdk.org
Reported-by: Yaroslav Brustinov <ybrustin@cisco.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
After having dequeued a burst of descriptors, there may be a
need to dequeue a few more if the last packet was segmented
and not complete. When that happens, the extra segments were
not properly attached to the mbuf chain, and so were lost.
Also, the head segment's data_len field was wrongly summed with
the lengths of all the segments of the chain.
This patch fixes both the mbuf chaining and the head segment's
data_len field.
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Reported-by: Yaroslav Brustinov <ybrustin@cisco.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The .rx_queue_setup devop is called after ethdev has already
dereferenced the mempool pointer.
There is no need to check it, and we can remove this rte_exit.
Fixes: 48cec290a3d2 ("net/virtio: move queue configure code to proper place")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add 'RTE_' prefix to defines:
- rename ETHER_ADDR_LEN as RTE_ETHER_ADDR_LEN.
- rename ETHER_TYPE_LEN as RTE_ETHER_TYPE_LEN.
- rename ETHER_CRC_LEN as RTE_ETHER_CRC_LEN.
- rename ETHER_HDR_LEN as RTE_ETHER_HDR_LEN.
- rename ETHER_MIN_LEN as RTE_ETHER_MIN_LEN.
- rename ETHER_MAX_LEN as RTE_ETHER_MAX_LEN.
- rename ETHER_MTU as RTE_ETHER_MTU.
- rename ETHER_MAX_VLAN_FRAME_LEN as RTE_ETHER_MAX_VLAN_FRAME_LEN.
- rename ETHER_MAX_VLAN_ID as RTE_ETHER_MAX_VLAN_ID.
- rename ETHER_MAX_JUMBO_FRAME_LEN as RTE_ETHER_MAX_JUMBO_FRAME_LEN.
- rename ETHER_MIN_MTU as RTE_ETHER_MIN_MTU.
- rename ETHER_LOCAL_ADMIN_ADDR as RTE_ETHER_LOCAL_ADMIN_ADDR.
- rename ETHER_GROUP_ADDR as RTE_ETHER_GROUP_ADDR.
- rename ETHER_TYPE_IPv4 as RTE_ETHER_TYPE_IPv4.
- rename ETHER_TYPE_IPv6 as RTE_ETHER_TYPE_IPv6.
- rename ETHER_TYPE_ARP as RTE_ETHER_TYPE_ARP.
- rename ETHER_TYPE_VLAN as RTE_ETHER_TYPE_VLAN.
- rename ETHER_TYPE_RARP as RTE_ETHER_TYPE_RARP.
- rename ETHER_TYPE_QINQ as RTE_ETHER_TYPE_QINQ.
- rename ETHER_TYPE_ETAG as RTE_ETHER_TYPE_ETAG.
- rename ETHER_TYPE_1588 as RTE_ETHER_TYPE_1588.
- rename ETHER_TYPE_SLOW as RTE_ETHER_TYPE_SLOW.
- rename ETHER_TYPE_TEB as RTE_ETHER_TYPE_TEB.
- rename ETHER_TYPE_LLDP as RTE_ETHER_TYPE_LLDP.
- rename ETHER_TYPE_MPLS as RTE_ETHER_TYPE_MPLS.
- rename ETHER_TYPE_MPLSM as RTE_ETHER_TYPE_MPLSM.
- rename ETHER_VXLAN_HLEN as RTE_ETHER_VXLAN_HLEN.
- rename ETHER_ADDR_FMT_SIZE as RTE_ETHER_ADDR_FMT_SIZE.
- rename VXLAN_GPE_TYPE_IPV4 as RTE_VXLAN_GPE_TYPE_IPV4.
- rename VXLAN_GPE_TYPE_IPV6 as RTE_VXLAN_GPE_TYPE_IPV6.
- rename VXLAN_GPE_TYPE_ETH as RTE_VXLAN_GPE_TYPE_ETH.
- rename VXLAN_GPE_TYPE_NSH as RTE_VXLAN_GPE_TYPE_NSH.
- rename VXLAN_GPE_TYPE_MPLS as RTE_VXLAN_GPE_TYPE_MPLS.
- rename VXLAN_GPE_TYPE_GBP as RTE_VXLAN_GPE_TYPE_GBP.
- rename VXLAN_GPE_TYPE_VBNG as RTE_VXLAN_GPE_TYPE_VBNG.
- rename ETHER_VXLAN_GPE_HLEN as RTE_ETHER_VXLAN_GPE_HLEN.
Do not update the command line library, to avoid adding a dependency
on librte_net.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add 'rte_' prefix to structures:
- rename struct ether_addr as struct rte_ether_addr.
- rename struct ether_hdr as struct rte_ether_hdr.
- rename struct vlan_hdr as struct rte_vlan_hdr.
- rename struct vxlan_hdr as struct rte_vxlan_hdr.
- rename struct vxlan_gpe_hdr as struct rte_vxlan_gpe_hdr.
Do not update the command line library, to avoid adding a dependency
on librte_net.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The function rte_vlan_insert() may allocate a new buffer for the VLAN
header and return a different mbuf than the one originally passed.
In this case, the mbuf stored in the txm[] array could point to the
wrong buffer.
Fixes: dd856dfcb9e7 ("virtio: use any layout on Tx")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Since the previous test is for mtu < 1519, the next else if is always
true. This causes the lgtm static analysis tool to complain.
Not a real issue, just cosmetic.
Fixes: 76d4c652e07d ("virtio: add extended stats")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Rami Rosen <ramirose@gmail.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
We consistently pass 1 as the argument in the data path, so there is
no need to define the avail/used flags as function-like macros anymore.
This patch changes the avail and used flags to constants. A frequently
used combination is also introduced.
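For reference, a sketch of such constants (bit positions per the virtio
1.1 packed ring spec; the exact macro names in the driver may differ):

    #define VRING_PACKED_DESC_F_AVAIL       (1 << 7)
    #define VRING_PACKED_DESC_F_USED        (1 << 15)
    /* Frequently used combination: both bits are flipped together when
     * the avail index wraps around. */
    #define VRING_PACKED_DESC_F_AVAIL_USED  (VRING_PACKED_DESC_F_AVAIL | \
                                             VRING_PACKED_DESC_F_USED)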
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch improves descriptor refill by using the same batching
strategy as done in the in-order and mergeable paths.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Drop the redundant suffixes (_packed and _event) from the fields in
the packed ring structure.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Put the split ring and packed ring specific fields into separate
sub-structures, and also union them, as they won't be used at the
same time.
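A minimal sketch of the layout idea (the field names are illustrative,
not the actual virtqueue definition):

    #include <stdint.h>

    struct sk_virtqueue {
        uint16_t vq_nentries;
        uint16_t vq_free_cnt;
        union {
            struct {                       /* split ring only */
                uint16_t used_cons_idx;
                uint16_t avail_idx;
            } vq_split;
            struct {                       /* packed ring only */
                uint16_t used_wrap_counter;
                uint16_t cached_flags;
            } vq_packed;
        };
    };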
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Cache the AVAIL, USED and WRITE bits to avoid recalculating them as
much as possible. Note that the WRITE bit isn't cached for the control
queue.
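A minimal sketch of the caching idea (simplified structure, not the
driver's actual fields):

    #include <stdint.h>

    #define SK_DESC_F_AVAIL_USED  ((1 << 7) | (1 << 15))

    struct sk_avail_state {
        uint16_t vq_nentries;
        uint16_t vq_avail_idx;
        uint16_t cached_flags;   /* pre-computed AVAIL/USED (+ WRITE) bits */
    };

    /* Advance the avail index; on wrap-around, flip the cached AVAIL/USED
     * bits once instead of recomputing the flags for every descriptor. */
    static inline void
    sk_inc_avail(struct sk_avail_state *vq)
    {
        if (++vq->vq_avail_idx >= vq->vq_nentries) {
            vq->vq_avail_idx -= vq->vq_nentries;
            vq->cached_flags ^= SK_DESC_F_AVAIL_USED;
        }
    }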
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch introduces an optimized enqueue function for the packed
ring, for the case where the virtio net header can be prepended to an
unchained mbuf.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch introduces a helper for clearing the virtio net header, to
avoid code duplication. A macro is used as it shows slightly better
performance.
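A sketch of the idea behind such a helper (the macro and struct names
are illustrative, not the driver's exact definitions): only write a
field when it is not already zero, keeping the cache line clean.

    #include <stdint.h>

    /* Trimmed stand-in for struct virtio_net_hdr. */
    struct sk_virtio_net_hdr {
        uint8_t  flags;
        uint8_t  gso_type;
        uint16_t hdr_len;
        uint16_t gso_size;
        uint16_t csum_start;
        uint16_t csum_offset;
    };

    #define SK_ASSIGN_UNLESS_EQUAL(var, val) do { \
        if ((var) != (val))                       \
            (var) = (val);                        \
    } while (0)

    #define SK_CLEAR_NET_HDR(hdr) do {                 \
        SK_ASSIGN_UNLESS_EQUAL((hdr)->csum_start, 0);  \
        SK_ASSIGN_UNLESS_EQUAL((hdr)->csum_offset, 0); \
        SK_ASSIGN_UNLESS_EQUAL((hdr)->flags, 0);       \
        SK_ASSIGN_UNLESS_EQUAL((hdr)->gso_type, 0);    \
        SK_ASSIGN_UNLESS_EQUAL((hdr)->gso_size, 0);    \
        SK_ASSIGN_UNLESS_EQUAL((hdr)->hdr_len, 0);     \
    } while (0)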
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
When the IN_ORDER feature is negotiated, the device may write out just
a single used descriptor for a batch of buffers:
"""
Some devices always use descriptors in the same order in which they
have been made available. These devices can offer the VIRTIO_F_IN_ORDER
feature. If negotiated, this knowledge allows devices to notify the
use of a batch of buffers to the driver by only writing out a single
used descriptor with the Buffer ID corresponding to the last descriptor
in the batch.
The device then skips forward in the ring according to the size of the
batch. The driver needs to look up the used Buffer ID and calculate the
batch size to be able to advance to where the next used descriptor will
be written by the device.
"""
But the Tx path of the packed ring can't handle this. With this patch,
when IN_ORDER is negotiated, the driver will manage the IDs linearly,
look up the used buffer ID and advance to where the next used
descriptor will be written by the device.
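A simplified sketch of the in-order cleanup idea (structures and names
are illustrative, not the driver's code): the single used descriptor
carries the id of the last buffer in the batch, and the driver walks
its per-buffer bookkeeping until that id is reached.

    #include <stdint.h>
    #include <rte_mbuf.h>

    struct sk_desc_extra {
        struct rte_mbuf *cookie;   /* Tx mbuf to free once its descs are used */
        uint16_t ndescs;           /* descriptors consumed by this buffer */
    };

    struct sk_txq {
        uint16_t size;             /* ring size, at most 256 in this sketch */
        uint16_t used_cons_idx;    /* next used slot the driver will process */
        uint16_t free_cnt;
        struct sk_desc_extra descx[256];
    };

    /* 'last_used_id' is the buffer id reported by the device; ids were
     * assigned linearly at enqueue time. */
    static void
    sk_cleanup_inorder(struct sk_txq *txq, uint16_t last_used_id)
    {
        uint16_t idx = txq->used_cons_idx;
        uint16_t curr;

        do {
            struct sk_desc_extra *dxp = &txq->descx[idx];

            curr = idx;
            idx = (uint16_t)(idx + dxp->ndescs);
            if (idx >= txq->size)
                idx -= txq->size;

            txq->free_cnt += dxp->ndescs;
            if (dxp->cookie != NULL) {
                rte_pktmbuf_free(dxp->cookie);
                dxp->cookie = NULL;
            }
        } while (curr != last_used_id);   /* stop after the batch's last id */

        txq->used_cons_idx = idx;
    }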
Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
When the IN_ORDER feature is negotiated, the device may write out just
a single used ring entry for a batch of buffers:
"""
Some devices always use descriptors in the same order in which they
have been made available. These devices can offer the VIRTIO_F_IN_ORDER
feature. If negotiated, this knowledge allows devices to notify the
use of a batch of buffers to the driver by only writing out a single
used ring entry with the id corresponding to the head entry of the
descriptor chain describing the last buffer in the batch.
The device then skips forward in the ring according to the size of
the batch. Accordingly, it increments the used idx by the size of
the batch.
The driver needs to look up the used id and calculate the batch size
to be able to advance to where the next used ring entry will be written
by the device.
"""
Currently, the in-order Tx path in the split ring can't handle this.
With this patch, the driver will allocate desc_extra[] based on the
index in the avail/used ring instead of the index in the descriptor
table. The driver can then just rely on the used->idx written by the
device to reclaim the descriptors and Tx buffers.
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
We should try to clean up at least the 'need' number of descriptors.
Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
A read barrier is required between reading the flags (desc_is_used)
and the content of the descriptor to ensure ordering.
Otherwise, a speculative read of desc.id could be reordered with the
read of desc.flags.
Fixes: a76290c8f1cf ("net/virtio: implement Rx path for packed queues")
Cc: stable@dpdk.org
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
There should be a read barrier between checking VIRTQUEUE_NUSED
(reading used->idx) and reading these descriptors. This is done for
the first checks at the beginning of these functions but was missed
when checking for the extra required descriptors.
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Fixes: 13ce5e7eb94f ("virtio: mergeable buffers")
Cc: stable@dpdk.org
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
A read barrier must be enforced between reading the descriptor flags
and the descriptor id. Otherwise, in case of reordering, we could read
a wrong descriptor id.
For reference, the similar barrier for split rings is the read barrier
between VIRTQUEUE_NUSED (reading used->idx) and the call to
virtio_xmit_cleanup().
Additionally, the double update of 'used_idx' is removed; it is enough
to set it at the end of the loop.
Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
Cc: stable@dpdk.org
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>