Add the structures for the control queues.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add all the device IDs that represent the NIC.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add the commands, error codes, and structures
for the sideband queue.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add the commands, error codes, and structures for
the admin queue.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add the structures required by the NIC.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add the registers that make up the Intel(R) 800
Series NIC. There is no functionality in this patch.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Skip parsing pattern match items that have no spec. This fixes a NULL
dereference when accessing their non-existent spec.
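A minimal sketch of the guard added to the item parsing loop (the loop
shape and variable names are illustrative, not the exact driver code):

    const struct rte_flow_item *item;

    for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
        /* An item may legally come without a spec; skip it instead of
         * dereferencing a NULL pointer. */
        if (!item->spec)
            continue;
        /* ... parse item->spec / item->mask as before ... */
    }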
Fixes: ee61f5113b ("net/cxgbe: parse and validate flows")
Cc: stable@dpdk.org
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
The filter TID table should be located after the active TID table memory,
not at the beginning of the TID table memory. This fixes memory
corruption due to the overlapping regions.
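A one-line sketch of the corrected layout (the identifiers here are
hypothetical, for illustration only):

    /* the filter TID region follows the active TID region */
    ftid_base = tid_base + num_active_tids;   /* previously: ftid_base = tid_base */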
Fixes: 3a381a4116 ("net/cxgbe: query firmware for HASH filter resources")
Cc: stable@dpdk.org
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
ENAv2 introduces many new features, mainly the LLQ (Low Latency Queue)
feature, which allows the device to process packets faster and, as a
result, noticeably lowers the latency.
The second major feature is the configurable depth of HW queues: Rx and
Tx can be reconfigured independently, and the maximum depth of an Rx
queue is 8k.
The release also includes many bug fixes and minor new features, like
improved statistics counters and extended statistics.
The driver is still compatible with the ENAv1 device.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Previously, the features list indicated unsupported ENA PMD features
and was missing a few that were actually supported.
The features file was updated, so it now reflects the current driver
state.
The documentation was updated with a more up-to-date example and features,
especially ENA-specific ones that are not listed in the features file.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
After Rx or Tx cleanup, update the completion queue head by calling
ena_com_update_dev_comp_head().
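A minimal sketch of the intended call site, assuming the ring keeps its
completion-queue handle in a field named ena_com_io_cq:

    /* after the cleanup loop has processed 'cleaned' completions */
    if (cleaned > 0)
        ena_com_update_dev_comp_head(ring->ena_com_io_cq);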
Fixes: 1daff5260f ("net/ena: use unmasked head and tail")
Cc: stable@dpdk.org
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
The Rx drops cannot be acquired using the older API. Now, they must be
read from the keep alive message.
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
The ENA PMD has its own custom statistics counters. They are exposed
to the application using the xstats DPDK API.
The deprecated and unused statistics are removed, together with the old API.
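From the application side the counters are read with the generic xstats
calls; a minimal sketch (port 0 assumed, 128 entries assumed to be enough,
error handling omitted):

    struct rte_eth_xstat_name names[128];
    struct rte_eth_xstat vals[128];
    int i, n;

    n = rte_eth_xstats_get_names(0, names, RTE_DIM(names));
    n = rte_eth_xstats_get(0, vals, n);
    for (i = 0; i < n; i++)
        printf("%s: %" PRIu64 "\n", names[vals[i].id].name, vals[i].value);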
Signed-off-by: Solganik Alexander <sashas@lightbitslabs.com>
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Those counters provide information regarding sent/received bytes and
packets per queue.
Signed-off-by: Solganik Alexander <sashas@lightbitslabs.com>
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
When a wrong req_id is detected, some previous mbufs could be used for
receiving different segments of the received packets. In such cases the
chained mbufs would be returned to the pool twice.
To prevent that, the chained mbuf is now freed right after the error is
detected.
To simplify cleanup, the pointers taken from the Rx ring are set to NULL.
As the queues are not used after ena_rx_queue_release_bufs and
ena_tx_queue_release_bufs, updating the next_to_clean pointer is not
necessary.
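A minimal sketch of the Rx error path after this change (structure and
field names are simplified assumptions, not the exact driver code):

    if (unlikely(validate_rx_req_id(rx_ring, req_id) != 0)) {
        /* free the whole chain once, right here, so none of its
         * segments can be returned to the mempool a second time */
        rte_pktmbuf_free(mbuf_head);
        rx_ring->rx_buffer_info[req_id] = NULL;  /* ease later cleanup */
        break;
    }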
Fixes: c203497667 ("net/ena: add Rx out of order completion")
Cc: stable@dpdk.org
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
Use empty_rx_reqs instead of empty_tx_reqs.
As those two variables are part of a union, this does not cause
any failure, but it should be changed for consistency.
Fixes: c203497667 ("net/ena: add Rx out of order completion")
Cc: stable@dpdk.org
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
The PMD was not passing the RSS offloads values although it supports
RSS. To allow the application to probe the PMD for RSS support, the
missing information was added.
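A minimal sketch of the addition in the dev_infos_get callback (the exact
offload set advertised here is an assumption):

    /* advertise the supported RSS hash functions so applications can
     * probe for RSS support */
    dev_info->flow_type_rss_offloads = ETH_RSS_IP | ETH_RSS_TCP |
                                       ETH_RSS_UDP;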
Fixes: 1173fca25a ("ena: add polling-mode driver")
Cc: stable@dpdk.org
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
Only PMD_*_LOG adds a new line character to the log message.
All printouts were adjusted for consistency.
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
The reset function should return the port to its initial state, in which
no Tx and Rx queues are set up. The application should then reconfigure
the queues.
According to the DPDK documentation, rte_eth_dev_reset() itself is a
generic function which only does some hardware reset operations by
calling dev_uninit() and dev_init().
ena_com_dev_reset, which performs the NIC register reset, should be
called during stop.
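For reference, the application-side sequence this change enables looks
roughly like the following (port/queue variables and error handling
omitted):

    rte_eth_dev_reset(port_id);          /* generic dev_uninit() + dev_init() */
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
    rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id, NULL, mbuf_pool);
    rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, NULL);
    rte_eth_dev_start(port_id);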
Fixes: 2081d5e2e9 ("net/ena: add reset routine")
Cc: stable@dpdk.org
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
If the start function fails, the previously created queues have to be removed.
ena_queue_restart_all() and ena_queue_restart() are renamed to
ena_queue_start_all() and ena_queue_start().
ena_free_io_queues_all() is renamed to ena_queue_stop_all().
Fixes: df238f84c0 ("net/ena: recreate HW IO rings on start and stop")
Cc: stable@dpdk.org
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
Before sending the next packet, check whether writing the doorbell is needed.
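A minimal sketch of the check in the Tx burst loop, assuming the HAL
helpers ena_com_is_doorbell_needed() and ena_com_write_sq_doorbell():

    /* LLQ may limit the burst size; flush what is already queued before
     * preparing the next packet if the HAL asks for it */
    if (ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq))
        ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq);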
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
Some ENA devices support 8k Rx rings. The maximum supported size is
received upon device initialization.
As the ENA_DEFAULT_RING_SIZE_RX macro is the upper limit, it needs to be
adjusted.
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
LLQ (Low Latency Queue) is a feature that allows pushing the packet header
directly to the device through PCI before the DMA is even triggered.
It reduces latency, because the device can start preparing the packet
before the payload is sent through DMA.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
When an invalid req_id is received, the reset should be handled by the
application, as it indicates an invalid ring state, so further Rx does
not make any sense.
Fixes: c203497667 ("net/ena: add Rx out of order completion")
Cc: stable@dpdk.org
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
The device now allows the driver to reconfigure Tx and Rx queue depths
independently. Moreover, the maximum size for Tx and Rx can be different.
Those maximum values are received from the device.
After a reset, the previous ring configuration is restored.
If the number of descriptors is set to RTE_ETH_DEV_FALLBACK_RX_RINGSIZE
or RTE_ETH_DEV_FALLBACK_TX_RINGSIZE, the maximum value is used.
The checks that the provided number is not too big are removed, as this
is done in the generic functions (rte_eth_rx_queue_setup and
rte_eth_tx_queue_setup).
The maximum number of segments is set for Rx packets and provided to
ena_com_rx_pkt() for validation.
Unused definitions were removed.
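A minimal sketch of the fallback handling in Rx queue setup (the adapter
field name is an assumption):

    if (nb_desc == RTE_ETH_DEV_FALLBACK_RX_RINGSIZE)
        nb_desc = adapter->max_rx_ring_size;  /* maximum reported by the device */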
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
Whenever the driver receives too many descriptors from the device,
it should trigger a device reset, with the reset reason set to
ENA_REGS_RESET_TOO_MANY_RX_DESCS.
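A minimal sketch of the intended handling in the Rx burst path, assuming
the HAL reports the condition as -ENOSPC and the adapter keeps
reset_reason/trigger_reset fields:

    if (unlikely(rc == -ENOSPC)) {
        /* the packet spans more descriptors than the ring can handle */
        rx_ring->adapter->reset_reason = ENA_REGS_RESET_TOO_MANY_RX_DESCS;
        rx_ring->adapter->trigger_reset = true;
        return 0;   /* stop the burst; the application performs the reset */
    }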
Fixes: 241da076b1 ("net/ena: adjust error checking and cleaning")
Cc: stable@dpdk.org
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
The new ena_com allows the number of CPUs to be passed to the device in
the host info structure.
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
ena_com is the communication layer provided by the vendor and common to
all ENA drivers.
This patch updates it to the version from 2018.09.26.
It adds support for the ENAv2 device together with the LLQ feature, adds
a doorbell optimization, and allows the depth of HW queues to be
reconfigured independently.
The driver was adjusted to the new changes in the HAL.
Signed-off-by: Rafal Kozik <rk@semihalf.com>
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Do the MAC swap for four packets in the same loop iteration to squeeze
out more CPU cycles.
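The idea, as a simplified sketch (swap_mac() stands for the per-packet
MAC swap; names are illustrative, not the exact testpmd code):

    uint16_t i;

    for (i = 0; i + 4 <= nb_rx; i += 4) {
        swap_mac(pkts_burst[i]);
        swap_mac(pkts_burst[i + 1]);
        swap_mac(pkts_burst[i + 2]);
        swap_mac(pkts_burst[i + 3]);
    }
    for (; i < nb_rx; i++)          /* handle the remaining packets */
        swap_mac(pkts_burst[i]);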
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
The patch optimizes the MAC swap operation by taking advantage
of SSE instructions; it only impacts the x86 platform.
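The core of the optimization is a single 16-byte load, byte shuffle, and
store per Ethernet header; a minimal sketch of the idea (not the exact
testpmd code, and it assumes at least 16 valid bytes at the header):

    #include <tmmintrin.h>   /* SSSE3 _mm_shuffle_epi8 */

    static inline void
    macswap_sse(void *eth_hdr)
    {
        /* bytes 0-5 <-> bytes 6-11 (dst/src MAC), bytes 12-15 unchanged */
        const __m128i shfl_msk = _mm_set_epi8(15, 14, 13, 12,
                                              5, 4, 3, 2, 1, 0,
                                              11, 10, 9, 8, 7, 6);
        __m128i data = _mm_loadu_si128((const __m128i *)eth_hdr);

        data = _mm_shuffle_epi8(data, shfl_msk);
        _mm_storeu_si128((__m128i *)eth_hdr, data);
    }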
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Move the macswap workload to a dedicated function, so we can further
enable platform-specific optimized versions.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Add the SW assisted vDPA live migration feature to the NIC doc.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In SW assisted live migration mode, the driver will stop the device and
set up a mediated virtio ring to relay the communication between the
virtio driver and the vDPA device.
This data path intervention allows SW to help with guest dirty page
logging for live migration.
This SW fallback is an event-driven relay thread, so when the network
throughput is low, the fallback takes little CPU resource, but when the
throughput goes up, the relay thread's CPU usage goes up accordingly.
Users need to take all the factors, including CPU usage, guest performance
degradation, etc., into consideration when selecting the live migration
support mode.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The vhost lib already provides a helper for used ring logging; the driver
can use it to reduce code.
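A minimal sketch of the intended use in the relay path (vid, qid and
used_idx names are assumptions; the offsets follow the split ring layout):

    /* log the guest-physical area just written in the used ring */
    rte_vhost_log_used_vring(vid, qid,
            offsetof(struct vring_used, ring[used_idx]),
            sizeof(struct vring_used_elem));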
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch series enables a new method for live migration, i.e. software
assisted live migration. This patch provides a device argument for the
user to choose the method.
When "sw-live-migration=1", the driver/device will do live migration with
a relay thread dealing with dirty page logging. Without this parameter,
the device will do dirty page logging and there is no relay thread
consuming CPU resources.
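A minimal sketch of the devarg handling with librte_kvargs (the argument
name follows this patch; open_int and sw_fallback_lm are illustrative):

    #define IFCVF_SW_FALLBACK_LM  "sw-live-migration"

    static const char * const valid_args[] = {
        IFCVF_SW_FALLBACK_LM,
        NULL,
    };

    /* pci_dev->device.devargs assumed to be non-NULL here */
    kvlist = rte_kvargs_parse(pci_dev->device.devargs->args, valid_args);
    if (kvlist != NULL && rte_kvargs_count(kvlist, IFCVF_SW_FALLBACK_LM))
        rte_kvargs_process(kvlist, IFCVF_SW_FALLBACK_LM,
                           &open_int, &sw_fallback_lm);
    rte_kvargs_free(kvlist);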
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
If the user wants the VF to be used in vDPA (vhost data path acceleration)
mode, the user can add a "vdpa=1" parameter for the device.
So if the driver does not find this option, it should quit and let the bus
continue the probe.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
If the driver fails to register the ifc VF device into the vhost lib, then
this device should not be stored.
Fixes: a3f8150eac ("net/ifcvf: add ifcvf vDPA driver")
Cc: stable@dpdk.org
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Driver probe may fail for different causes; a debug message is helpful for
debugging the issue.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch provides two helpers for a vDPA device driver to perform a
relay between the guest virtio ring and a mediated virtio ring.
The available ring relay synchronizes the available entries and helps to
do descriptor validity checking.
The used ring relay synchronizes the used entries from the mediated ring
to the guest ring and helps to do dirty page logging for live migration.
A later patch will leverage these two helpers.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The vDPA driver can decide whether it needs to enable/disable the host
notifier mapping, so exposing an API allows flexibility. A later patch
will build on this.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
vhost_detach_vdpa_device() is internally defined but not used; remove
it in this patch.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Until we have support for control virtqueues, let's disable it and
fail device initialization if it is specified as a parameter.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add option to enable packed queue support for virtio-user
devices.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Use packed virtqueue format when reading and writing descriptors
to/from the ring.
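For reference, the packed descriptor layout and availability check defined
by the VIRTIO 1.1 spec look like this (a sketch with local definitions,
not the driver's exact code):

    struct vring_packed_desc {
        uint64_t addr;
        uint32_t len;
        uint16_t id;
        uint16_t flags;
    };

    #define VRING_DESC_F_AVAIL  (1 << 7)
    #define VRING_DESC_F_USED   (1 << 15)

    /* a descriptor is available to the consumer when its AVAIL bit
     * matches the expected wrap counter (0 or 1) and its USED bit
     * does not */
    static inline int
    desc_is_avail(const struct vring_packed_desc *desc, int wrap_counter)
    {
        uint16_t flags = desc->flags;

        return !!(flags & VRING_DESC_F_AVAIL) == wrap_counter &&
               !!(flags & VRING_DESC_F_USED) != wrap_counter;
    }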
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This implements the transmit path for devices with
support for packed virtqueues.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add support for dumping packed virtqueue data with the
VIRTQUEUE_DUMP() macro.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>