16291 Commits

Author SHA1 Message Date
Michal Krawczyk
45b6d86184 net/ena: add per-queue software counters stats
Those counters provide information regarding sent/received bytes and
packets per queue.

Signed-off-by: Solganik Alexander <sashas@lightbitslabs.com>
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Rafal Kozik
709b1dcb73 net/ena: fix cleanup for out of order packets
When a wrong req_id is detected, some of the previously posted mbufs could be
used for receiving different segments of the received packets. In such cases
the chained mbufs would be returned to the pool twice.

To prevent that, the chained mbuf is now freed right after the error is
detected.

To simplify cleanup, the pointers taken from the Rx ring are set to NULL.

As the queues are not used after ena_rx_queue_release_bufs and
ena_tx_queue_release_bufs, updating the next_to_clean pointer is not necessary.

Fixes: c2034976673d ("net/ena: add Rx out of order completion")
Cc: stable@dpdk.org

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
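
A minimal sketch of the double-free protection described above, assuming a
generic Rx-ring slot layout rather than the actual ena ring structures;
rte_pktmbuf_free() releases every segment of a chained mbuf:

    #include <rte_mbuf.h>

    /* Free the whole mbuf chain once and clear the ring slot so that a later
     * queue release cannot return the same mbufs to the pool a second time. */
    static void
    drop_bad_rx_chain(struct rte_mbuf **rx_ring_slot)
    {
        if (*rx_ring_slot != NULL) {
            rte_pktmbuf_free(*rx_ring_slot); /* frees every segment of the chain */
            *rx_ring_slot = NULL;            /* mark the slot as already released */
        }
    }
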
Rafal Kozik
eccbe2ffdc net/ena: fix invalid reference to variable in union
Use empty_rx_reqs instead of empty_tx_reqs.
As those two variables are part of a union this does not cause
any failure, but it should be changed for consistency.

Fixes: c2034976673d ("net/ena: add Rx out of order completion")
Cc: stable@dpdk.org

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Rafal Kozik
b01ead202b net/ena: add supported RSS offloads types
The PMD was not passing the RSS offloads values although it supports
RSS. To allow applications to probe the PMD for RSS support, the
missing information was added.

Fixes: 1173fca25af9 ("ena: add polling-mode driver")
Cc: stable@dpdk.org

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Rafal Kozik
498c687aee net/ena: adjust new line in log messages
Only PMD_*_LOG appends a newline character to the log message.
All printouts were adjusted for consistency.

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Rafal Kozik
e457bc70e5 net/ena: do not reconfigure queues on reset
The reset function should return the port to its initial state, in which no
Tx and Rx queues are set up. The application should then reconfigure the
queues.

According to the DPDK documentation, rte_eth_dev_reset() itself is a
generic function which only does some hardware reset operations by
calling dev_uninit() and dev_init().

ena_com_dev_reset, which performs the NIC register reset, should be called
during stop.

Fixes: 2081d5e2e92d ("net/ena: add reset routine")
Cc: stable@dpdk.org

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Rafal Kozik
26e5543dc8 net/ena: destroy queues if start failed
If the start function fails, previously created queues have to be removed.

ena_queue_restart_all() and ena_queue_restart() are renamed to
ena_queue_start_all() and ena_queue_start().

ena_free_io_queues_all() is renamed to ena_queue_stop_all().

Fixes: df238f84c0a2 ("net/ena: recreate HW IO rings on start and stop")
Cc: stable@dpdk.org

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Rafal Kozik
c7519ea5eb net/ena: call additional doorbells if needed
Before sending the next packet, check whether ringing the doorbell is needed.

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
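
A rough sketch of the pattern, with hypothetical names (doorbell_reg,
pushed_since_doorbell and threshold are illustrative and not the ena_com API):

    #include <stdint.h>

    /* Hypothetical Tx-burst fragment: if the number of descriptors pushed
     * since the last doorbell reaches a threshold, ring the doorbell now
     * instead of waiting for the end of the burst. */
    static inline void
    maybe_ring_doorbell(volatile uint32_t *doorbell_reg, uint16_t tail,
                        uint16_t *pushed_since_doorbell, uint16_t threshold)
    {
        if (*pushed_since_doorbell >= threshold) {
            *doorbell_reg = tail;          /* publish the new tail to the device */
            *pushed_since_doorbell = 0;
        }
    }
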
Rafal Kozik
353907507b net/ena: increase maximum Rx ring size
Some ENA devices support 8k Rx rings. The maximum supported size is
received upon device initialization.
As the ENA_DEFAULT_RING_SIZE_RX macro is the upper limit, it needs to be
adjusted.

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Michal Krawczyk
2fca2a98c0 net/ena: support LLQv2
LLQ (Low Latency Queue) is a feature that allows pushing the packet header
directly to the device through PCI before the DMA is even triggered.
It reduces latency, because the device can start preparing the packet before
the payload is sent through DMA.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Rafal Kozik
f00930d929 net/ena: skip packet with wrong request id
When an invalid req_id is received, the reset should be handled by the
application, as it indicates an invalid ring state, so further Rx does
not make any sense.

Fixes: c2034976673d ("net/ena: add Rx out of order completion")
Cc: stable@dpdk.org

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Rafal Kozik
ea93d37eb4 net/ena: add HW queues depth setup
The device now allows the driver to reconfigure the Tx and Rx queue depths
independently. Moreover, the maximum sizes for Tx and Rx can be different.
Those maximum values are received from the device.

After reset, the previous ring configuration is restored.

If the number of descriptors is set to RTE_ETH_DEV_FALLBACK_RX_RINGSIZE
or RTE_ETH_DEV_FALLBACK_TX_RINGSIZE, the maximum value is restored.

The checks whether the provided number is too big were removed, as this is
done in the generic functions (rte_eth_rx_queue_setup and
rte_eth_tx_queue_setup).

The maximum number of segments is now set for Rx packets and provided to
ena_com_rx_pkt() for validation.

Unused definitions were removed.

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Rafal Kozik
9b260dbf74 net/ena: add reset reason in Rx error
Whenever the driver receives too many descriptors from the device,
it should trigger a device reset with the reset reason set to
ENA_REGS_RESET_TOO_MANY_RX_DESCS.

Fixes: 241da076b1f7 ("net/ena: adjust error checking and cleaning")
Cc: stable@dpdk.org

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Rafal Kozik
b9302eb93a net/ena: pass number of CPUs to the host info structure
The new ena_com allows the number of CPUs to be passed to the device in
the host info structure.

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Rafal Kozik
b68309be44 net/ena/base: update communication layer for the ENAv2
ena_com is the communication layer provided by the vendor and common to
all ENA drivers.
This patch updates it to the version from 2018.09.26.

It adds support for the ENAv2 device together with the LLQ feature, a
doorbell optimization, and independent reconfiguration of the HW queue
depths.

The driver was adjusted to the new changes in the HAL.

Signed-off-by: Rafal Kozik <rk@semihalf.com>
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
2018-12-21 16:22:40 +01:00
Qi Zhang
62b52877ad app/testpmd: batch MAC swap for performance on x86
Do the MAC swap for four packets in the same loop iteration to squeeze
out more CPU cycles.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
2018-12-21 16:22:40 +01:00
Qi Zhang
a68b61687a app/testpmd: improve MAC swap performance for x86
The patch optimizes the MAC swap operation by taking advantage
of SSE instructions; it only impacts the x86 platform.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
2018-12-21 16:22:40 +01:00
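
A self-contained sketch of the SSE idea: a single SSSE3 byte shuffle swaps the
6-byte destination and source MAC addresses. This is an illustration, not the
testpmd code itself, and it assumes reading/writing 16 bytes at the start of
the frame is safe, as it is inside an mbuf data buffer:

    #include <stdint.h>
    #include <stdio.h>
    #include <tmmintrin.h>   /* SSSE3: _mm_shuffle_epi8 */

    /* Swap dst MAC (bytes 0-5) and src MAC (bytes 6-11) in one shuffle;
     * bytes 12-15 (EtherType + first payload bytes) stay in place. */
    static void
    macswap_sse(uint8_t *eth_hdr)
    {
        const __m128i mask = _mm_set_epi8(15, 14, 13, 12,
                                          5, 4, 3, 2, 1, 0,
                                          11, 10, 9, 8, 7, 6);
        __m128i data = _mm_loadu_si128((const __m128i *)eth_hdr);
        _mm_storeu_si128((__m128i *)eth_hdr, _mm_shuffle_epi8(data, mask));
    }

    int main(void)
    {
        uint8_t frame[16] = {
            0x00, 0x11, 0x22, 0x33, 0x44, 0x55,   /* dst MAC */
            0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff,   /* src MAC */
            0x08, 0x00, 0x45, 0x00 };             /* EtherType + payload */
        macswap_sse(frame);                        /* build with -mssse3 */
        for (int i = 0; i < 12; i++)
            printf("%02x%c", frame[i], i == 5 ? '|' : ' ');
        printf("\n");                              /* prints the two MACs swapped */
        return 0;
    }
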
Qi Zhang
a825afdbb9 app/testpmd: move MAC swap functions
Move the MAC swap workload to a dedicated function, so we can later enable
platform-specific optimized versions.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
2018-12-21 16:22:40 +01:00
Xiao Wang
f42cc9a81b doc: update ifc NIC document
Add the SW assisted VDPA live migration feature to the NIC doc.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Xiao Wang
4bb531e152 net/ifc: support SW assisted VDPA live migration
In SW assisted live migration mode, the driver will stop the device and
set up a mediated virtio ring to relay the communication between the
virtio driver and the VDPA device.

This data path intervention allows SW to help with guest dirty page
logging for live migration.

This SW fallback is an event-driven relay thread, so when the network
throughput is low, the SW fallback takes little CPU resource, but
when the throughput goes up, the relay thread's CPU usage goes up
accordingly.

Users need to take all the factors, including CPU usage, guest performance
degradation, etc., into consideration when selecting the live migration
support mode.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Xiao Wang
b95cbe7f1b net/ifc: use vhost lib function for used ring logging
The vhost lib already provides a helper for used ring logging; the driver
can use it to reduce code.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Xiao Wang
4c3f55cc23 net/ifc: add LM mode parameter
This patch series enables a new method for live migration, i.e. software
assisted live migration. This patch provides a device argument for the user
to choose the method.

When "sw-live-migration=1", the driver/device will do live migration with a
relay thread dealing with dirty page logging. Without this parameter, the
device will do dirty page logging and there is no relay thread consuming
CPU resources.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Xiao Wang
40ef35f4a5 net/ifc: detect if VDPA mode is specified
If the user wants the VF to be used in VDPA (vhost data path acceleration)
mode, the user can add a "vdpa=1" parameter for the device.

So if the driver does not find this option, it should quit and let the bus
continue the probe.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Xiao Wang
7b47ea59f2 net/ifc: store only registered device instance
If the driver fails to register the ifc VF device into the vhost lib, then
this device should not be stored.

Fixes: a3f8150eac6d ("net/ifcvf: add ifcvf vDPA driver")
Cc: stable@dpdk.org

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Xiao Wang
c6ce525dd0 net/ifc: add probing error logs
Driver probe may fail for different causes; a debug message is helpful for
debugging the issue.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Xiao Wang
b13ad2decc vhost: provide helpers for virtio ring relay
This patch provides two helpers for a vdpa device driver to perform a
relay between the guest virtio ring and a mediated virtio ring.

The available ring relay will synchronize the available entries and
help with descriptor validity checking.

The used ring relay will synchronize the used entries from the mediated
ring to the guest ring and help with dirty page logging for live migration.

A later patch will leverage these two helpers.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Xiao Wang
43f34e3566 vhost: provide helper for host notifier ctrl
The VDPA driver can decide whether it needs to enable/disable the host
notifier mapping, so exposing an API allows flexibility. A later patch will
build on this.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Xiao Wang
02e3b285d4 vhost: remove unused function
vhost_detach_vdpa_device() is internally defined but not used; remove
it in this patch.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Jens Freimann
aea29aa5d3 net/virtio: enable packed virtqueues by default
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Jens Freimann
07dd7e250d net/virtio-user: fail if cq used with packed vq
Until we have support for control virtqueues, let's disable it and
fail device initialization if it is specified as a parameter.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Yuanhan Liu
34f3966c7f net/virtio-user: add option to use packed queues
Add option to enable packed queue support for virtio-user
devices.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Jens Freimann
ec194c2f18 net/virtio: support packed queue in send command
Use packed virtqueue format when reading and writing descriptors
to/from the ring.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Jens Freimann
a76290c8f1 net/virtio: implement Rx path for packed queues
Implement the receive part.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Jens Freimann
892dc798fa net/virtio: implement Tx path for packed queues
This implements the transmit path for devices with
support for packed virtqueues.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Jens Freimann
56785a2d6f net/virtio: dump packed virtqueue data
Add support to dump packed virtqueue data to the
VIRTQUEUE_DUMP() macro.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Jens Freimann
f803734b0f net/virtio: vring init for packed queues
Add and initialize descriptor data structures.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Jens Freimann
e9f4feb7e6 net/virtio: add packed virtqueue helpers
Add helper functions to set/clear and check descriptor flags.

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
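
For context, a sketch of the kind of flag check such helpers perform, modeled
on the virtio 1.1 packed ring layout; the names and exact shape of the DPDK
helpers may differ:

    #include <stdbool.h>
    #include <stdint.h>

    #define VRING_PACKED_DESC_F_AVAIL  (1 << 7)
    #define VRING_PACKED_DESC_F_USED   (1 << 15)

    /* A packed-ring descriptor has been used by the device when its AVAIL and
     * USED flag bits are equal and both match the driver's used wrap counter. */
    static bool
    desc_is_used(uint16_t flags, bool used_wrap_counter)
    {
        bool avail = !!(flags & VRING_PACKED_DESC_F_AVAIL);
        bool used  = !!(flags & VRING_PACKED_DESC_F_USED);

        return avail == used && used == used_wrap_counter;
    }
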
Jens Freimann
4c3f5822eb net/virtio: add packed virtqueue defines
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
Matthias Gatto
276d63505b vhost: fix race condition when adding fd in the fdset
fdset_add can call fdset_shrink_nolock, which calls fdset_move
concurrently with the poll that is called in fdset_event_dispatch.

This patch adds a mutex to protect poll from being called at the same time
that fdset_add calls fdset_shrink_nolock.

Fixes: 1b815b89599c ("vhost: try to shrink pfdset when fdset_add fails")
Cc: stable@dpdk.org

Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2018-12-21 16:22:40 +01:00
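
A minimal sketch of the locking pattern, with hypothetical names (the real
fdset code differs): one mutex makes poll() on the pollfd array mutually
exclusive with any code path that compacts or moves entries:

    #include <poll.h>
    #include <pthread.h>

    struct fdset {
        struct pollfd fds[64];
        int num;
        pthread_mutex_t fd_polling_mutex;   /* serializes poll() vs. entry moves */
    };

    static void
    fdset_shrink(struct fdset *s)
    {
        pthread_mutex_lock(&s->fd_polling_mutex);
        /* ... move valid entries to the front and update s->num ... */
        pthread_mutex_unlock(&s->fd_polling_mutex);
    }

    static void
    fdset_event_dispatch(struct fdset *s)
    {
        for (;;) {
            pthread_mutex_lock(&s->fd_polling_mutex);
            int ready = poll(s->fds, s->num, 1000 /* ms */);
            pthread_mutex_unlock(&s->fd_polling_mutex);
            if (ready <= 0)
                continue;
            /* ... handle the ready fds ... */
        }
    }
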
Tiago Lam
6ef75e405d doc: add af_packet PMD guide
As of commit 364e08f2bbc0, DPDK allows an application to send and
receive raw packets using AF_PACKET sockets and PACKET_MMAP when running
on the Linux kernel. This complements it by adding a simple guide with the
following information:
- An introduction, where a brief explanation of this driver is given,
  pointing out the dependency on PACKET_MMAP;
- Which options are supported at configuration time, while setting up an
  interface, and its inherent limitations;
- What the prerequisites are;
- A command line example of how to set up a DPDK port using the
  af_packet driver.

Since there's a dependency on PACKET_MMAP, the guide also points to the
original kernel documentation, so the reader can get more details.

Signed-off-by: Tiago Lam <tiago.lam@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2018-12-21 16:22:40 +01:00
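
As an illustration of the kind of setup the guide documents, a minimal EAL
initialization that creates an af_packet port via --vdev devargs; the
parameter values are examples only and assume a tap0 interface exists:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int main(void)
    {
        /* iface names the kernel interface to bind; the remaining keys size
         * the PACKET_MMAP rings. Values are illustrative, not recommendations. */
        char *argv[] = {
            "af_packet_example", "--no-pci",
            "--vdev=eth_af_packet0,iface=tap0,blocksz=4096,framesz=2048,"
            "framecnt=512,qpairs=1,qdisc_bypass=0",
        };
        int argc = (int)(sizeof(argv) / sizeof(argv[0]));

        if (rte_eal_init(argc, argv) < 0)
            return 1;

        printf("%u port(s) available\n", (unsigned)rte_eth_dev_count_avail());
        return 0;
    }
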
Stephen Hemminger
f2b76d22f8 net/netvsc: fix probe when VF not found
It is possible that the VF device exists but DPDK doesn't know
about it. This could happen if the device was blacklisted or, more
likely, the necessary device (Mellanox) was not part of the DPDK
configuration.

In either case, the right thing to do is to just keep working,
but only with the slower para-virtual device.

Fixes: dc7680e8597c ("net/netvsc: support integrated VF")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
2018-12-21 16:22:40 +01:00
Stephen Hemminger
c578d8507b net/netvsc: fix transmit descriptor pool cleanup
On device close or startup errors, the transmit descriptor pool
was being left behind.

Fixes: 4e9c73e96e83 ("net/netvsc: add Hyper-V network device")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
2018-12-21 16:22:40 +01:00
Stephen Hemminger
3c32fdf432 net/netvsc: support receive without VLAN strip
In some cases, VLAN stripping is not desirable. If necessary,
re-insert the stripped VLAN tag.

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
2018-12-21 16:22:40 +01:00
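
A sketch of the re-insertion step using the generic mbuf helpers; this is an
illustration of the approach, not the netvsc code, and strip_requested is a
hypothetical flag reflecting the port's Rx offload configuration:

    #include <rte_ether.h>
    #include <rte_mbuf.h>

    /* If the application did not ask for VLAN stripping but the packet arrived
     * with the tag stripped, rebuild the 802.1Q header from mbuf->vlan_tci. */
    static int
    maybe_reinsert_vlan(struct rte_mbuf **m, int strip_requested)
    {
        if (!strip_requested && ((*m)->ol_flags & PKT_RX_VLAN_STRIPPED))
            return rte_vlan_insert(m);
        return 0;
    }
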
Xiaoyun Li
63056c1920 net/i40e: fix statistics inconsistency
While calculating the input packet count per port, discarded packets
should be subtracted; right now only the PF VSI discarded packets are
subtracted. But while calculating the input byte count per port, the Rx
byte count is used, which should take all discarded packets into account,
including the VF VSI ones.
This causes an inconsistency in the stats counters in some cases.

This patch takes all VSI stats as the packet and byte counts to address
the issue.

Fixes: 763de290cbd1 ("net/i40e: fix packet count for PF")
Cc: stable@dpdk.org

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-12-21 16:22:40 +01:00
Zhirun Yan
0eaa1f8c75 net/i40e: clear VF reset flags after reset
The reset flags vf->vf_reset and vf->pend_msg are set when the VF receives
VIRTCHNL_EVENT_RESET_IMPENDING. So after the reset is done, these flags
should be cleared.

Fixes: 8cacf78469a7 ("net/i40e: fix VF initialization error")
Cc: stable@dpdk.org

Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-12-21 16:22:40 +01:00
Anatoly Burakov
b2cb71e95e test/mem: check non-heap external memory API
Currently, extmem autotest only covers the external malloc heap
API. Extend it to also cover the non-heap, register/unregister
external memory API.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
2018-12-21 15:32:46 +01:00
Anatoly Burakov
f4e24a403c test/mem: check external memseg list
Extend the extmem autotest to check whether the memseg lists for
externally allocated memory are always marked as external.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
2018-12-21 15:29:22 +01:00
Anatoly Burakov
483c40f572 test/mem: check external memory without IOVA table
Currently, only the scenario with a valid IOVA table is tested. Fix this
by also testing without an IOVA table - in that case, EAL should
always return RTE_BAD_IOVA for all memsegs, and contiguous memzone
allocation should fail.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
2018-12-21 15:27:43 +01:00
Anatoly Burakov
6324cc5cf7 test/mem: refactor and rename functions
We will be adding a new extmem test that will behave roughly similarly
to the already existing one, so clarify the function names to distinguish
between these tests, as well as factor out the common parts.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
2018-12-21 15:26:44 +01:00
Anatoly Burakov
ba731ea1dd malloc: fix deadlock when reading stats
Currently, the malloc statistics and external heap creation code
use the memory hotplug lock as a way to synchronize accesses to
heaps (as in, locking the hotplug lock to prevent the list of heaps
from changing under our feet). At the same time, the malloc
statistics code will also lock the heap, because it needs to
access heap data and does not want any other thread to allocate
anything from that heap.

In such scheme, it is possible to enter a deadlock with the
following sequence of events:

thread 1		thread 2
rte_malloc()
			rte_malloc_dump_stats()
take heap lock
			take hotplug lock
failed to allocate,
attempt to take
hotplug lock
			attempt to take heap lock

Neither thread will be able to continue, as both of them are
waiting for the other one to drop the lock. Adding an
additional lock would require an ABI change, so instead,
make the malloc statistics calls thread-unsafe with
respect to creating/destroying heaps.

Fixes: 72cf92b31855 ("malloc: index heaps using heap ID rather than NUMA node")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
2018-12-21 15:26:43 +01:00