
Rahul Lakkireddy
b84bcf4019 net/cxgbe: free resources during uninit
Move the freeing of resources from dev_close() to dev_uninit(). This fixes
a NULL pointer dereference when accessing the adapter context that is
needed by other ports under the same PF but has already been freed by the
first port. This can happen if only the first port is started and the
check to free all resources is still satisfied. When dev_close() is then
called for the other ports, the adapter context is NULL because the first
port freed it.

Thus, by moving the cleanup to dev_uninit(), all ports can be torn down
safely without the need for extra checks.

Fixes: 2195df6d11bd ("net/cxgbe: rework ethdev device allocation")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
2018-05-14 22:32:23 +01:00
Ophir Munk
97b2217ae5 net/mlx4: advertise supported RSS hash functions
Advertise the RSS hash functions supported by mlx4 as part of the
dev_infos_get callback.
Prior to this commit, RSS support was reported as none. Since the
introduction of [1], all RSS configurations must be verified.

[1] commit 8863a1fbfc66 ("ethdev: add supported hash function check")

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
2018-05-14 22:32:23 +01:00
Ophir Munk
cbd737416c net/mlx4: avoid constant recreations in function
Function mlx4_conv_rss_types() contains constant array variables which are
re-created on every call to the function. By changing the array
definitions from "const" to "static const", these re-creations are
avoided.
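
A minimal illustration of the difference (not the actual mlx4 code): with
"const" alone the array is rebuilt on the stack at every call, while
"static const" places it in read-only static storage once:

  #include <stdint.h>

  /* Rebuilt on the stack at every call. */
  uint64_t
  conv_slow(unsigned int idx)
  {
          const uint64_t table[4] = { 1, 2, 4, 8 };

          return table[idx & 3];
  }

  /* Built once, placed in read-only static storage. */
  uint64_t
  conv_fast(unsigned int idx)
  {
          static const uint64_t table[4] = { 1, 2, 4, 8 };

          return table[idx & 3];
  }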

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
2018-05-14 22:32:23 +01:00
Yongseok Koh
c9ec2192ff net/mlx5: use correct field in a union structure
This is not a bug, but it is better to use the semantically correct field.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
2018-05-14 22:32:23 +01:00
Yongseok Koh
0cfdc1808d net/mlx5: use coherent I/O memory barrier
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
2018-05-14 22:32:22 +01:00
Yongseok Koh
5f44cfd011 net/mlx5: fix inlining segmented TSO packet
When a multi-segment packet is inlined, data can be further inlined even
after the first segment. In the case of a TSO packet, extra inline data
after the TSO header must be carried by an inline DSEG, which has a 4B
inline header recording the length of the inline data. If more than one
segment is inlined, the length does not account for data from the second
segment onward. This causes a fault in HW and the CQE carries an error,
which is ignored by the PMD.

Fixes: f895536be4fa ("net/mlx5: enable inlining data from multiple segments")
Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
2018-05-14 22:32:22 +01:00
Ophir Munk
907252079a net/failsafe: add an RSS hash update callback
Add an RSS hash update callback to eth_dev_ops.

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
2018-05-14 22:32:22 +01:00
Remy Horton
266c467f60 net/ixgbe: fix missing port representor data-path
This patch adds Rx and Tx burst functions to the ixgbe
Port Representors, so that the implementation within
ixgbe PMD can be tested using applications such as
testpmd which require data-path functionality.

Fixes: cf80ba6e2038 ("net/ixgbe: add support for representor ports")

Signed-off-by: Remy Horton <remy.horton@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
2018-05-14 22:32:17 +01:00
Remy Horton
8973508a0c net/i40e: fix missing port representor data-path
This patch adds Rx and Tx burst functions to the i40e Port
Representors, so that the implementation within this PMD
can be tested using applications such as testpmd which
require data-path functionality.

Fixes: e0cb96204b71 ("net/i40e: add support for representor ports")

Signed-off-by: Remy Horton <remy.horton@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
2018-05-14 22:32:11 +01:00
Beilei Xing
3d4faec985 net/i40e: print original value for global register change
Currently, only the new value is printed when a global register changes.
Also print the original value to help debugging.

Fixes: bc66b9717c50 ("net/i40e: add debug logs when writing global registers")
Cc: stable@dpdk.org

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-05-14 22:31:53 +01:00
Zhiyong Yang
201a416517 net/virtio-user: fix multiple queues fail in server mode
This patch fixes a multiple-queue failure when virtio-user works in
server mode.

It adds feature negotiation to the processing of the virtio-user
connection and enables multiple queue pairs.

Fixes: bd8f50a45d0f ("net/virtio-user: support server mode")
Cc: stable@dpdk.org

Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
2018-05-14 22:31:53 +01:00
Yanglong Wu
4fa9cd4338 net/i40e: fix missing VLAN offload capability
VLAN offload capability should be exposed in VF
since i40e does support it.

Fixes: c3ac7c5b0b8a ("net/i40e: convert to new Rx offloads API")

Signed-off-by: Yanglong Wu <yanglong.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2018-05-14 22:31:53 +01:00
Qi Zhang
399421100e net/i40e: fix missing mbuf fast free offload
Expose the missing mbuf fast free capability since i40e does
support it.

Fixes: 7497d3e2f777 ("net/i40e: convert to new Tx offloads API")

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
2018-05-14 22:31:53 +01:00
Matan Azrad
7fda13d3a5 net/failsafe: fix sub-device ownership race
There is a window between the sub-device port being probed by the
sub-device PMD and its ownership being taken by the fail-safe port.

During this window the port is available for application use. For
example, the port is exposed to applications which use the
RTE_ETH_FOREACH_DEV iterator.

Thus, ownership-unaware applications may manage the port during this
window, which can cause many problematic behaviors in the fail-safe
sub-device initialization.

Register for the ethdev NEW event to take the sub-device port ownership
before the port becomes exposed to the application.
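
A hedged sketch of the mechanism (assuming the 18.05 ethdev event API;
fs_new_port_cb and fs_register_new_event are hypothetical names, and the
real fix also matches the new port against its configured sub-devices):

  #include <rte_common.h>
  #include <rte_ethdev.h>

  static int
  fs_new_port_cb(uint16_t port_id, enum rte_eth_event_type event,
                 void *cb_arg, void *ret_param)
  {
          const struct rte_eth_dev_owner *owner = cb_arg;

          RTE_SET_USED(event);
          RTE_SET_USED(ret_param);
          /* The real code first matches port_id against its sub-devices. */
          return rte_eth_dev_owner_set(port_id, owner);
  }

  /* Called from the probe path, with "owner" holding the fail-safe id. */
  static int
  fs_register_new_event(struct rte_eth_dev_owner *owner)
  {
          /* Claim sub-device ports as soon as ethdev announces them. */
          return rte_eth_dev_callback_register(RTE_ETH_ALL,
                                               RTE_ETH_EVENT_NEW,
                                               fs_new_port_cb, owner);
  }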

Fixes: a46f8d584eb8 ("net/failsafe: add fail-safe PMD")
Cc: stable@dpdk.org

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
2018-05-14 22:31:53 +01:00
Thomas Monjalon
fbe90cdd77 ethdev: add probing finish function
A new hook function is added and called inside the PMDs at the end
of the device probing:
	- in primary process, after allocating, init and config
	- in secondary process, after attaching and local init

This new function is almost empty for now.
It will be used later to add some post-initialization processing.

For the PMDs calling the helpers rte_eth_dev_create() or
rte_eth_dev_pci_generic_probe(), the hook rte_eth_dev_probing_finish()
is called from here, and not in the PMD itself.

Note that the helper rte_eth_dev_create() could be used more,
especially for vdevs, avoiding some code duplication in PMDs.
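
A hedged sketch (hypothetical vdev driver; my_vdev_probe, my_priv and
my_dev_ops are made up, not from this patch) of where the hook sits in a
PMD that does not use the rte_eth_dev_create() helper:

  #include <errno.h>
  #include <rte_ethdev_driver.h>
  #include <rte_ethdev_vdev.h>
  #include <rte_bus_vdev.h>

  struct my_priv { int dummy; };              /* hypothetical private data */
  static const struct eth_dev_ops my_dev_ops; /* hypothetical, ops omitted */

  static int
  my_vdev_probe(struct rte_vdev_device *vdev)
  {
          struct rte_eth_dev *eth_dev;

          eth_dev = rte_eth_vdev_allocate(vdev, sizeof(struct my_priv));
          if (eth_dev == NULL)
                  return -ENOMEM;
          /* ... allocate, init and configure the port ... */
          eth_dev->dev_ops = &my_dev_ops;

          /* Last step of probing: the port is now fully set up. */
          rte_eth_dev_probing_finish(eth_dev);
          return 0;
  }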

Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
2018-05-14 22:31:53 +01:00
Thomas Monjalon
01a98fdd08 drivers/net: use higher level of probing helper for PCI
The drivers avp, bnx2x and liquidio were using the helper function
rte_eth_dev_pci_allocate(); this usage can be replaced by
rte_eth_dev_pci_generic_probe(), which calls the former.

Fixes: dcd5c8112bc3 ("ethdev: add PCI driver helpers")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
2018-05-14 22:31:53 +01:00
Thomas Monjalon
1958142640 net/failsafe: fix sub-device visibility
The iterator function rte_eth_find_next_owned_by(), used by the
iterator macro RTE_ETH_FOREACH_DEV_OWNED_BY, ignores devices
which are neither ATTACHED nor REMOVED. Thus sub-devices, which have
the state DEFERRED, cannot be seen with the ethdev iterator.
The state RTE_ETH_DEV_DEFERRED can be replaced by
RTE_ETH_DEV_ATTACHED + owner.

Fixes: dcd0c9c32b8d ("net/failsafe: use ownership mechanism for slaves")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
2018-05-14 22:31:52 +01:00
Beilei Xing
9153411e44 net/i40e: print global register change info
Global register change info is not printed when enabling
flexible payload.
This patch changes the macro so that the global register
change info is printed.

Fixes: d2f9fe8ae309 ("net/i40e: turn off flexible payload on driver init")
Cc: stable@dpdk.org

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
2018-05-14 22:31:52 +01:00
Andrew Rybchenko
4436cf988b net/sfc: fix inner TCP/UDP checksum offload control
If an application uses the Tx offload API and sets the
ETH_TXQ_FLAGS_IGNORE flag, inner TCP/UDP checksum offload should still be
enabled when it is supported and TCP/UDP checksum offload is requested.

Fixes: c78d280e88ef ("net/sfc: convert to new Tx offload API")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-05-14 22:31:52 +01:00
Hyong Youb Kim
9713ece25c net/enic: fix flow drop action
Drop is a fate-deciding action, so mark it as FATE. It was missing in
a previous commit.

Fixes: cc17feb90413 ("ethdev: alter behavior of flow API actions")

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
2018-05-14 22:31:52 +01:00
Wei Dai
acca1807e7 net/e1000: report Tx multi segment offload
This feature has been confirmed with testpmd:
testpmd> set fwd txonly
testpmd> port stop all
testpmd> port config all txd 1024
testpmd> set txsplit on
testpmd> set txpkts 70,80,90,100
testpmd> start
UDP packets with a UDP data length of 298 bytes can then be observed
at the peer port.

Signed-off-by: Wei Dai <wei.dai@intel.com>
2018-05-14 22:31:52 +01:00
Beilei Xing
b5f6272c24 net/i40e: fix link status update
Link status is not updated correctly: link speed is 0
when the link is up, and link speed is not 0 when the link
is down. This patch fixes the issue.

Fixes: eef2daf2e199 ("net/i40e: fix link update no wait")
Cc: stable@dpdk.org

Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
2018-05-14 22:31:52 +01:00
Yongseok Koh
7d6bf6b866 net/mlx5: add Multi-Packet Rx support
Multi-Packet Rx Queue (MPRQ, a.k.a. Striding RQ) can further save PCIe
bandwidth by posting a single large buffer for multiple packets. Instead of
posting one buffer per packet, one large buffer is posted in order to
receive multiple packets on that buffer. An MPRQ buffer consists of
multiple fixed-size strides and each stride receives one packet.

An Rx packet is memcpy'd to a user-provided mbuf if the packet is
comparatively small; otherwise the PMD attaches the Rx packet to the mbuf
by external buffer attachment - rte_pktmbuf_attach_extbuf(). A mempool for
the external buffers is allocated and managed by the PMD.
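
A hedged sketch of the copy-vs-attach decision (hypothetical helper,
threshold and caller-provided shared info; shinfo setup and mempool
handling are omitted):

  #include <rte_mbuf.h>
  #include <rte_memcpy.h>

  /* Deliver one packet received in a stride of an MPRQ buffer. */
  static void
  mprq_deliver(struct rte_mbuf *pkt, void *addr, rte_iova_t iova,
               uint16_t stride_len, uint16_t pkt_len, uint16_t copy_max,
               struct rte_mbuf_ext_shared_info *shinfo)
  {
          if (pkt_len <= copy_max)
                  /* Small packet: copy it into the mbuf's own buffer. */
                  rte_memcpy(rte_pktmbuf_mtod(pkt, void *), addr, pkt_len);
          else
                  /* Large packet: attach the stride as an external buffer. */
                  rte_pktmbuf_attach_extbuf(pkt, addr, iova, stride_len,
                                            shinfo);
          pkt->data_len = pkt_len;
          pkt->pkt_len = pkt_len;
  }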

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2018-05-14 22:31:52 +01:00
Yongseok Koh
18bee13096 net/mlx5: add a function to rdma-core glue
mlx5dv_create_wq() is added for the Multi-Packet RQ (a.k.a Striding RQ).

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2018-05-14 22:31:52 +01:00
Yongseok Koh
3e1f82a1f1 net/mlx5: separate filling Rx flags
Filling in the fields of the mbuf becomes a separate inline function so
that it can be reused.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2018-05-14 22:31:52 +01:00
Yongseok Koh
9797bfcce1 net/mlx4: add new memory region support
This is the new design of Memory Region (MR) for mlx PMD, in order to:
- Accommodate the new memory hotplug model.
- Support non-contiguous Mempool.

There are multiple layers for MR search.

L0 looks up the last-hit entry, which is pointed to by mr_ctrl->mru (Most
Recently Used). If L0 misses, L1 looks up the address in a fixed-size
array by linear search. L0/L1 are handled in an inline function -
mlx4_mr_lookup_cache().

If L1 misses, the bottom-half function is called to look up the address
in the bigger local cache of the queue. This is L2 - mlx4_mr_addr2mr_bh() -
and it is not an inline function. The data structure for L2 is a binary tree.

If L2 misses, the search falls into the slowest path, which takes locks in
order to access the global device cache (priv->mr.cache), also a B-tree,
which caches the original MR list (priv->mr.mr_list) of the device. Unless
the global cache overflows, it is all-inclusive of the MR list. This is
L3 - mlx4_mr_lookup_dev(). The size of the L3 cache table is limited and
can't be expanded on the fly due to deadlock. Refer to the comments in the
code for the details - mr_lookup_dev(). If L3 overflows, the list has to
be searched directly, bypassing the cache, although that is slower.
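
A simplified sketch of the per-queue fast path described above
(hypothetical types and names, not the actual driver code): check the
most-recently-used entry, then linear-search the small array, and only
then fall back to the bottom-half:

  #include <stdint.h>

  struct mr_cache_entry {
          uintptr_t start; /* first address covered by the MR */
          uintptr_t end;   /* end address (exclusive) */
          uint32_t lkey;   /* key to put into the WQE */
  };

  struct mr_ctrl {
          uint16_t mru;                   /* index of the last hit (L0) */
          struct mr_cache_entry cache[8]; /* small per-queue array (L1) */
  };

  /* Slow path (L2/L3): per-queue B-tree, then the global device cache. */
  static uint32_t mr_addr2key_bh(struct mr_ctrl *ctrl, uintptr_t addr);

  static inline uint32_t
  mr_addr2key(struct mr_ctrl *ctrl, uintptr_t addr)
  {
          struct mr_cache_entry *e = &ctrl->cache[ctrl->mru];
          unsigned int i;

          /* L0: most-recently-used entry. */
          if (addr >= e->start && addr < e->end)
                  return e->lkey;
          /* L1: linear search of the fixed-size array. */
          for (i = 0; i < 8; i++) {
                  e = &ctrl->cache[i];
                  if (addr >= e->start && addr < e->end) {
                          ctrl->mru = i;
                          return e->lkey;
                  }
          }
          /* L2/L3: not inlined; may take locks and create a new MR. */
          return mr_addr2key_bh(ctrl, addr);
  }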

If L3 misses, a new MR for the address should be created -
mlx4_mr_create(). When it creates a new MR, it tries to register as many
adjacent, virtually contiguous memsegs around the address as possible.
This must take two locks - memory_hotplug_lock and priv->mr.rwlock. Due to
memory_hotplug_lock, no memory can be allocated or freed inside.

In the free callback of the memory hotplug event, the freed space is
looked up in the MR list and the corresponding bits are cleared from the
bitmap of MRs. This can fragment an MR, and the MR will then have multiple
search entries in the caches. Once there is a change triggered by the
event, the global cache must be rebuilt and all the per-queue caches are
flushed as well. If memory is frequently freed at run time, this may in
the worst case cause jitter in data-plane processing by incurring MR cache
flushes and rebuilds, but that is the least probable scenario.

To guarantee optimal performance, it is highly recommended to use the EAL
option '--socket-mem'; the reserved memory will then be pinned and won't
be freed dynamically. It is also recommended to configure a per-lcore
cache for the Mempool. Even if there are many MRs for a device or the MRs
are highly fragmented, the Mempool cache will greatly help to reduce
misses in the per-queue caches.

'--legacy-mem' is also supported.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
2018-05-14 22:31:52 +01:00
Yongseok Koh
2d684b911d net/mlx4: remove memory region support
This patch removes the current Memory Region (MR) support in order to
accommodate the dynamic memory hotplug patch. The patch compiles, but
traffic can't flow and HW will raise faults. Subsequent patches will
add the new MR support.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
2018-05-14 22:31:51 +01:00
Yongseok Koh
974f1e7ef1 net/mlx5: add new memory region support
This is the new design of Memory Region (MR) for mlx PMD, in order to:
- Accommodate the new memory hotplug model.
- Support non-contiguous Mempool.

There are multiple layers for MR search.

L0 looks up the last-hit entry, which is pointed to by mr_ctrl->mru (Most
Recently Used). If L0 misses, L1 looks up the address in a fixed-size
array by linear search. L0/L1 are handled in an inline function -
mlx5_mr_lookup_cache().

If L1 misses, the bottom-half function is called to look up the address
in the bigger local cache of the queue. This is L2 - mlx5_mr_addr2mr_bh() -
and it is not an inline function. The data structure for L2 is a binary tree.

If L2 misses, the search falls into the slowest path, which takes locks in
order to access the global device cache (priv->mr.cache), also a B-tree,
which caches the original MR list (priv->mr.mr_list) of the device. Unless
the global cache overflows, it is all-inclusive of the MR list. This is
L3 - mlx5_mr_lookup_dev(). The size of the L3 cache table is limited and
can't be expanded on the fly due to deadlock. Refer to the comments in the
code for the details - mr_lookup_dev(). If L3 overflows, the list has to
be searched directly, bypassing the cache, although that is slower.

If L3 misses, a new MR for the address should be created -
mlx5_mr_create(). When it creates a new MR, it tries to register as many
adjacent, virtually contiguous memsegs around the address as possible.
This must take two locks - memory_hotplug_lock and priv->mr.rwlock. Due to
memory_hotplug_lock, no memory can be allocated or freed inside.

In the free callback of the memory hotplug event, the freed space is
looked up in the MR list and the corresponding bits are cleared from the
bitmap of MRs. This can fragment an MR, and the MR will then have multiple
search entries in the caches. Once there is a change triggered by the
event, the global cache must be rebuilt and all the per-queue caches are
flushed as well. If memory is frequently freed at run time, this may in
the worst case cause jitter in data-plane processing by incurring MR cache
flushes and rebuilds, but that is the least probable scenario.

To guarantee optimal performance, it is highly recommended to use the EAL
option '--socket-mem'; the reserved memory will then be pinned and won't
be freed dynamically. It is also recommended to configure a per-lcore
cache for the Mempool. Even if there are many MRs for a device or the MRs
are highly fragmented, the Mempool cache will greatly help to reduce
misses in the per-queue caches.

'--legacy-mem' is also supported.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
2018-05-14 22:31:51 +01:00
Yongseok Koh
d561b5dc13 net/mlx5: remove memory region support
This patch removes the current Memory Region (MR) support in order to
accommodate the dynamic memory hotplug patch. The patch compiles, but
traffic can't flow and HW will raise faults. Subsequent patches will
add the new MR support.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
2018-05-14 22:31:51 +01:00
Wei Dai
a4996bd89c ethdev: new Rx/Tx offloads API
This patch checks whether a requested offload is valid or not.
Any requested offload must be supported in the device capabilities.
Any offload is disabled by default if it is not set in the parameter
dev_conf->[rt]xmode.offloads to rte_eth_dev_configure() and
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If any offload is enabled in rte_eth_dev_configure() by the application,
it is enabled on all queues, no matter whether it is per-queue or
per-port type and no matter whether it is set or cleared in
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If a per-queue offload hasn't been enabled in rte_eth_dev_configure(),
it can be enabled or disabled for an individual queue in
rte_eth_[rt]x_queue_setup().
A newly added offload is one which hasn't been enabled in
rte_eth_dev_configure() and is requested to be enabled in
rte_eth_[rt]x_queue_setup(); it must be of per-queue type,
otherwise an error log is triggered.
The underlying PMD must be aware that the offloads requested from
the PMD-specific queue_setup() function only carry the newly added
offloads of per-queue type.

This patch performs the above checks in a common way in the rte_ethdev
layer to avoid duplicating the same checks in the underlying PMDs.

This patch assumes that all PMDs in 18.05-rc2 have already been
converted to the offload API defined in 17.11. It also assumes
that all PMDs return correct offload capabilities
in rte_eth_dev_infos_get().

At the beginning of the [rt]x_queue_setup() of the underlying PMD, add
  offloads = [rt]xconf->offloads | dev->data->dev_conf.[rt]xmode.offloads;
to keep the same behavior as the offload API defined in 17.11 and avoid
breaking upper applications due to the offload API change.
On top of this patch, a PMD can use the fact that the input
[rt]xconf->offloads only carries the newly added per-queue offloads to do
some optimization or code changes.
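
A minimal sketch of that line in context (hypothetical PMD callback; only
the rte_eth_rxconf/rxmode field names are taken from the ethdev API):

  #include <rte_ethdev_driver.h>

  static int
  my_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
                    uint16_t nb_desc, unsigned int socket_id,
                    const struct rte_eth_rxconf *rx_conf,
                    struct rte_mempool *mp)
  {
          /* Per-queue request plus per-port configuration, as in 17.11. */
          uint64_t offloads = rx_conf->offloads |
                              dev->data->dev_conf.rxmode.offloads;

          /* ... queue_idx, nb_desc, socket_id and mp drive the real setup,
           * using "offloads" to decide which features to enable ... */
          return 0;
  }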

Signed-off-by: Wei Dai <wei.dai@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
2018-05-14 22:31:51 +01:00
Yongseok Koh
df428ceef4 net/mlx5: change device reference for secondary process
rte_eth_devices[] is not shared between the primary and secondary
processes; it is a static array local to each process. The reverse pointer
to the device (priv->dev) is therefore invalid. Instead, priv holds a
pointer to the shared data of the device,
  struct rte_eth_dev_data *dev_data;

Two macros are added,
  #define PORT_ID(priv) ((priv)->dev_data->port_id)
  #define ETH_DEV(priv) (&rte_eth_devices[PORT_ID(priv)])
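
A hypothetical usage sketch: code shared with the secondary process
resolves the device through the shared data instead of a stale
per-process pointer:

  uint16_t port_id = PORT_ID(priv);        /* valid in any process */
  struct rte_eth_dev *dev = ETH_DEV(priv); /* local rte_eth_devices[] entry */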

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
2018-05-14 22:31:51 +01:00
Ophir Munk
ce07b1514d net/mlx4: fix CRC stripping capability report
There are two capabilities related to CRC stripping:
1. mlx4 HW capability to perform CRC stripping on a received packet.
This capability is built in mlx4 HW. It should be returned by the API
call mlx4_get_rx_queue_offloads().
2. mlx4 driver capability to enable/disable HW CRC stripping. This
capability is dependent on the driver version.

Before this commit, the second capability was wrongly returned by
the mentioned API. This commit fixes it by returning the first
capability.
mlx4 HW performs CRC stripping by default and this capability is
always reported as "true".

The ability to enable/disable CRC stripping is supported since this
commit and requires OFED version 4.3-1.5.0.0 or rdma-core version v18.
With earlier OFED or rdma-core versions, or before this commit, CRC
stripping is done by default regardless of its configuration.

Fixes: de1df14e6e6ec ("net/mlx4: support CRC strip toggling")
Cc: stable@dpdk.org

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
2018-05-14 22:31:51 +01:00
Raslan Darawsheh
95e7a72f9d net/failsafe: fix probe cleanup
The hot-plug alarm mechanism is responsible for actually executing both
plug-in and plug-out operations. It periodically tries to detect missed
sub-devices that need to be reconfigured and cleans up the resources of
removed sub-devices.

The hot-plug alarm is started by the failsafe probe function, but it is
wrongly not stopped if the failsafe instance hits an error. For example,
when starting failsafe with a MAC option and giving it an invalid MAC
address, the dev private field is left as a NULL pointer. When the
hot-plug alarm is then called, it tries to access this pointer, which
leads to a segmentation fault.

Uninstall the hot-plug alarm in case of an error in the probe function.

Fixes: ebea83f8 ("net/failsafe: add plug-in support")
Cc: stable@dpdk.org

Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
2018-05-14 22:31:51 +01:00
Ophir Munk
807dd827b3 net/failsafe: advertise supported RSS functions
Advertise the RSS hash functions supported by failsafe as part of the
dev_infos_get callback. Set the failsafe default RSS hash functions to
ETH_RSS_IP, ETH_RSS_UDP and ETH_RSS_TCP.
The resulting failsafe RSS hash functions are the logical AND of the
RSS hash functions across all failsafe sub-devices and failsafe's own
defaults.

Prior to this commit, RSS support was reported as none. Since the
introduction of [1], all RSS configurations must be verified.

[1] commit 8863a1fbfc66 ("ethdev: add supported hash function check")
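
A minimal sketch of the logical-AND aggregation described above
(hypothetical helper, not the actual fail-safe code):

  #include <rte_ethdev.h>

  static uint64_t
  combine_rss_offloads(const uint16_t *sub_ports, unsigned int n)
  {
          /* Fail-safe's own defaults: IP, UDP and TCP hashing. */
          uint64_t rss = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP;
          unsigned int i;

          for (i = 0; i < n; i++) {
                  struct rte_eth_dev_info info;

                  rte_eth_dev_info_get(sub_ports[i], &info);
                  rss &= info.flow_type_rss_offloads;
          }
          return rss;
  }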

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
2018-05-14 22:31:51 +01:00
Shreyansh Jain
2c01a48a50 net/dpaa: update optimal burst size in device info
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
2018-05-14 22:31:51 +01:00
Shreyansh Jain
0b5deefbe6 net/dpaa: fix max push mode queue
Split default and max push mode queues to 4 and 8, respectively.

Fixes: 0c504f6950b6 ("net/dpaa: support push mode")
Cc: stable@dpdk.org

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
2018-05-14 22:31:51 +01:00
Ophir Munk
8d54ede738 net/tap: report on supported RSS hash functions
Report the RSS hash functions supported by TAP as part of the
dev_infos_get callback: ETH_RSS_IP, ETH_RSS_UDP and ETH_RSS_TCP.
Known limitation: TAP supports all of the above hash functions together
and not in partial combinations.
Prior to this commit, RSS support was reported as none. Since the
introduction of [1], all RSS configurations must be verified.

[1] commit 8863a1fbfc66 ("ethdev: add supported hash function check")

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2018-05-14 22:31:51 +01:00
Ajit Khaparde
6f7169f4eb net/bnxt: add NVM specific HWRM commands
This patch adds new NVM specific HWRM commands.
Code using these commands will be added in future patches.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2018-05-14 22:31:50 +01:00
Ajit Khaparde
034703ad57 net/bnxt: add HWRM commands for more filtering support
Add HWRM structures to support more filtering features,
such as metering and tunnel filters.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2018-05-14 22:31:50 +01:00
Ajit Khaparde
40cc43bf4b net/bnxt: add async event HWRM commands
Add structures for async_event support.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2018-05-14 22:31:50 +01:00
Ajit Khaparde
9a891c1764 net/bnxt: update HWRM to version 1.9.2
Update HWRM structures to version 1.9.2

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2018-05-14 22:31:50 +01:00
Hyong Youb Kim
94c3518958 net/enic: update UDP RSS controls
Current adapters which support UDP RSS piggyback on TCP RSS. Change
the controls to be forward compatible with future adapters, which will
have independent control of UDP and TCP.

Fixes: 9bd04182bb01 ("net/enic: support UDP RSS on 1400 series adapters")

Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
2018-05-14 22:31:50 +01:00
Hyong Youb Kim
3656c30e48 net/enic: fix RSS hash type advertisement
The NIC can hash these RSS packet types, but they are not advertised
via flow_type_rss_offloads. So add them.
- Part of the IPv4 hash:
  ETH_RSS_FRAG_IPV4
  ETH_RSS_NONFRAG_IPV4_OTHER
- Part of the IPv6 hash:
  ETH_RSS_FRAG_IPV6
  ETH_RSS_NONFRAG_IPV6_OTHER
- Part of the UDP hash:
  ETH_RSS_IPV6_UDP_EX

Also, do not use NIC_CFG_RSS_HASH_TYPE_IPV6_EX and
NIC_CFG_RSS_HASH_TYPE_TCP_IPV6_EX, as they are not needed to enable
RSS over IPv6 with extension headers.

Fixes: c2fec27b5cb0 ("net/enic: allow to change RSS settings")

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
2018-05-14 22:31:50 +01:00
John Daley
579cc855af net/enic: set rte errno to positive value
Related to d9fff8a31, where rte_errno should always have positive
errno values.

Technically this is an ABI change since it fixes an error code
introduced in 18.02, but is minor and inconsequential.

Fixes: 1e81dbb5321b ("net/enic: add Tx prepare handler")
Cc: stable@dpdk.org

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
2018-05-14 22:31:50 +01:00
Hyong Youb Kim
95faa2a9df net/enic: fix the MTU handler to rely on max packet length
The RQ setup functions (enic_alloc_rq and enic_alloc_rx_queue_mbufs)
have changed to rely on max_rx_pkt_len to determine the use of scatter
and the buffer size. But the MTU handler only updates ethdev's MTU
value, so make it update max_rx_pkt_len as well. Other PMDs also
update both mtu and max_rx_pkt_len in their MTU handlers.

Also, the condition for taking a shortcut (scatter is disabled) in the
MTU handler is wrong. Even when scatter is disabled, a change in
max_rx_pkt_len may affect the buffer size posted to the NIC. So remove
that condition.

Finally, fix a comment and a warning message condition.
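
A hedged sketch (hypothetical driver and frame-size limit, not the actual
enic code) of an MTU handler that also refreshes max_rx_pkt_len so that
buffer sizing and scatter decisions follow the new MTU:

  #include <errno.h>
  #include <rte_ethdev_driver.h>
  #include <rte_ether.h>

  #define MY_MAX_FRAME_SIZE 9208 /* hypothetical device limit */

  static int
  my_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
  {
          /* Assumed L2 overhead; a real driver uses its own constants. */
          uint32_t frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;

          if (frame_size > MY_MAX_FRAME_SIZE)
                  return -EINVAL;

          /* Keep the value used by Rx queue setup in sync with the MTU;
           * the ethdev layer updates dev->data->mtu after this succeeds. */
          dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
          return 0;
  }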

Fixes: 422ba91716a7 ("net/enic: heed the requested max Rx packet size")

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
2018-05-14 22:31:50 +01:00
Hyong Youb Kim
a74629cfa3 net/enic: enable RQ first and then post Rx buffers
Future VIC adapters may require that the driver enable the RQ before
posting new buffers to the NIC. So split enic_alloc_rx_queue_mbufs()
into two functions: one that allocates buffers and fills the RQ, and
one that posts them (i.e. a PIO write to a doorbell). Call the post
function only after enabling the RQ.

Currently released models are not affected by this change, as they
work fine whether the driver posts buffers before or after enabling RQ.

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
2018-05-14 22:31:50 +01:00
Maxime Coquelin
12ecb2f63b net/virtio-user: support memory hotplug
When memory is hot-added or hot-removed, the virtio-user driver has to
notify the vhost-user backend by sending a VHOST_USER_SET_MEM_TABLE
request with the new memory map as payload.

This patch implements and registers a mem_event callback which pauses the
datapath and updates the memory regions in vhost on a hot-add or
hot-remove event. This memory region update only has to be done when the
device is already started, so a new status flag is added to the device to
keep track of its status.

As the device can now be managed by different threads, a mutex is
introduced to protect against concurrent device configuration.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
2018-05-14 22:31:50 +01:00
Yongseok Koh
95d7e115be net/mlx5: fix calculation of Tx TSO inline room size
rdma-core doesn't add the max_tso_header size to the max_inline_data size;
the library takes the bigger of the two.

Fixes: 43e9d9794cde ("net/mlx5: support upstream rdma-core")
Cc: stable@dpdk.org

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
2018-05-14 22:31:50 +01:00
Raslan Darawsheh
690de2850b net/mlx5: fix resource leak in case of error
If something goes wrong in mlx5_pci_probe(), the allocated eth dev
causes a memory leak.

This commit releases the eth dev that was previously allocated.

Fixes: 771fa900b73a ("mlx5: introduce new driver for Mellanox ConnectX-4 adapters")
Cc: stable@dpdk.org

Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
2018-05-14 22:31:50 +01:00
Raslan Darawsheh
e9f4166014 net/mlx5: fix double free on error handling
When attr_ctx is NULL, the code attempts to free the list of devices twice.
Avoid double-freeing the list by going directly to error handling.

Fixes: 771fa900b73a ("mlx5: introduce new driver for Mellanox ConnectX-4 adapters")
Cc: stable@dpdk.org

Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
2018-05-14 22:31:49 +01:00