The following commit added a number of existing offload flags, such as
PKT_TX_IPV4 and PKT_TX_IPV6, to PKT_TX_OFFLOAD_MASK defined in
rte_mbuf.h. That change breaks the enic driver's Tx prepare handler.
commit ef28cfa73822 ("mbuf: fix Tx offload mask")
The enic driver keeps the supported offload flags in a local variable
(tx_offload_mask), which is strictly a subset of
PKT_TX_OFFLOAD_MASK. This variable is then used to compute the
unsupported flags (tx_offload_notsup_mask), and the Tx prepare handler
(tx_pkt_prepare) uses it to reject packets with unsupported offload
flags.
As is, tx_offload_notsup_mask ends up containing flags like
PKT_TX_IPV4 that are actually supported by the driver, which then
breaks any application that uses checksum offloads and calls the Tx
prepare handler. So add to tx_offload_mask the flags that the driver
supports but that were previously missing from PKT_TX_OFFLOAD_MASK.
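For illustration only, a minimal sketch of how such a mask pair is
typically derived and used in a prepare handler; the macro and function
names are hypothetical and the flag list is just an example, not the
driver's exact set.

    #include <errno.h>
    #include <rte_mbuf.h>
    #include <rte_errno.h>

    /* Hypothetical flag set; a real driver lists exactly what it supports. */
    #define SKETCH_TX_OFFLOAD_MASK (PKT_TX_VLAN_PKT | PKT_TX_IP_CKSUM | \
                                    PKT_TX_L4_MASK | PKT_TX_TCP_SEG | \
                                    PKT_TX_IPV4 | PKT_TX_IPV6)
    /* Everything else in PKT_TX_OFFLOAD_MASK is treated as unsupported. */
    #define SKETCH_TX_OFFLOAD_NOTSUP_MASK \
            (PKT_TX_OFFLOAD_MASK ^ SKETCH_TX_OFFLOAD_MASK)

    static uint16_t
    sketch_tx_pkt_prepare(struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
    {
            uint16_t i;

            for (i = 0; i < nb_pkts; i++) {
                    if (tx_pkts[i]->ol_flags & SKETCH_TX_OFFLOAD_NOTSUP_MASK) {
                            rte_errno = ENOTSUP;
                            break; /* the first i packets are fine to send */
                    }
            }
            return i;
    }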
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The handler for dev_supported_ptypes_get currently returns null when
the vectorized Rx handler is used. It is also missing tunnel packet
types. Add the missing packet types to the supported list, and return
the right list for the vectorized Rx handler.
Fixes: 8a6ff33d6d ("net/enic: add AVX2 based vectorized Rx handler")
Fixes: 93fb21fdbe ("net/enic: enable overlay offload for VXLAN and GENEVE")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
- track whether counter DMAs are active or not so they can be stopped if
needed before DMA memory is freed
- fix counter DMA shut-down by changing vnic_dev_counter_dma_cfg() to
take the number of counters to DMA instead of the highest counter
index, and use a count of 0 to shut off DMAs
- remove unnecessary checks that DMA counter memory is valid and that
counter DMAs are in use
- change the minimum DMA period to match what 1400 series adapters are
capable of
- fix comments and change a couple of variable names to make more sense
Fixes: 86df6c4e2f ("net/enic: support flow counter action")
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Add the vectorized version of the no-scatter Rx handler. It aims to
process 8 descriptors per loop using AVX2 SIMD instructions. This
handler is in its own file enic_rxtx_vec_avx2.c, and the Makefile and
meson.build are modified to compile it when the compiler supports
AVX2. Under ideal conditions, the vectorized handler reduces
cycles/packet by more than 30% compared with the no-scatter
Rx handler. Most implementation ideas come from i40e's AVX2-based
handler, so credit goes to its authors.
At this point, the new handler is meant for field trials, and is not
selected by default. So add a new devarg enable-avx2-rx to allow the
user to request the use of the new handler. When enable-avx2-rx=1, the
driver will consider using the new handler.
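For context, a hedged sketch of how such a devarg is usually parsed
with the rte_kvargs API; the function and macro names below are made up
for illustration and are not the driver's actual code.

    #include <stdbool.h>
    #include <stdlib.h>
    #include <rte_kvargs.h>

    #define SKETCH_DEVARG_ENABLE_AVX2_RX "enable-avx2-rx"

    static int
    sketch_avx2_rx_handler(const char *key, const char *value, void *opaque)
    {
            bool *enabled = opaque;

            (void)key;
            *enabled = (strtoul(value, NULL, 0) != 0); /* enable-avx2-rx=1 */
            return 0;
    }

    static void
    sketch_parse_devargs(const char *args, bool *avx2_rx_requested)
    {
            const char *const valid_keys[] = { SKETCH_DEVARG_ENABLE_AVX2_RX, NULL };
            struct rte_kvargs *kvlist = rte_kvargs_parse(args, valid_keys);

            if (kvlist == NULL)
                    return;
            rte_kvargs_process(kvlist, SKETCH_DEVARG_ENABLE_AVX2_RX,
                               sketch_avx2_rx_handler, avx2_rx_requested);
            rte_kvargs_free(kvlist);
    }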
Also update the guide doc to introduce the vectorized handler.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Move a number of Rx functions to the header file so that the AVX2-based
Rx handler can use them.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Support counter action for 1400 series adapters.
The adapter API for allocating and freeing counters is independent of
the adapter match/action API. If the count action is requested, a
counter is first allocated and then assigned to the filter, and when
the filter is deleted, the counter must also be deleted.
Counters are periodically DMA'd to pre-allocated consistent memory;
the period is controlled by the define VNIC_FLOW_COUNTER_UPDATE_MSECS
and defaults to 100 milliseconds.
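From the application side, the counter is exercised through the
standard rte_flow count action; a hedged sketch (pattern omitted,
error handling trimmed, call shapes per current rte_flow headers):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_flow.h>

    static void
    sketch_flow_counter(uint16_t port_id, const struct rte_flow_attr *attr,
                        const struct rte_flow_item pattern[])
    {
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_COUNT },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };
            struct rte_flow_query_count counters = { .reset = 0 };
            struct rte_flow_error err;
            struct rte_flow *flow;

            flow = rte_flow_create(port_id, attr, pattern, actions, &err);
            if (flow == NULL)
                    return;
            /* The values come from the memory the NIC DMAs counters into. */
            if (rte_flow_query(port_id, flow, &actions[0], &counters, &err) == 0)
                    printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n",
                           counters.hits, counters.bytes);
            rte_flow_destroy(port_id, flow, &err);
    }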
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
rte_flow structures were not being freed when destroyed or flushed.
Fixes: 6ced137607 ("net/enic: flow API for NICs with advanced filters enabled")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Signed-off-by: John Daley <johndale@cisco.com>
Reopening the vNIC does not automatically disable overlay offload. If
it was previously enabled, it remains enabled even when the user
restarts DPDK and requests that overlay offload be disabled via the
devarg disable-overlay=1. So explicitly disable overlay offload when
requested.
Fixes: 93fb21fdbe ("net/enic: enable overlay offload for VXLAN and GENEVE")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Currently the simple Tx handler supports no offloads, which makes it
usable only for a small number of benchmarks. Add VLAN and checksum
offloads to the handler: they increase cycles/packet by only about 3,
and applications commonly use those offloads.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The NIC indicates VLAN TCI to the driver even when VLAN stripping is
disabled. The driver sets mbuf's vlan_tci but not PKT_RX_VLAN. Set
PKT_RX_VLAN to indicate that vlan_tci is valid.
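From an application's perspective, vlan_tci should therefore only be
read when the flag is set; a minimal sketch:

    #include <rte_mbuf.h>

    /* Sketch: trust vlan_tci only when the PMD marked it valid. */
    static uint16_t
    sketch_rx_vlan_id(const struct rte_mbuf *mb)
    {
            if (!(mb->ol_flags & PKT_RX_VLAN))
                    return 0;               /* no VLAN information reported */
            /* PKT_RX_VLAN_STRIPPED would additionally mean the tag was
             * removed from the packet data; here stripping is disabled. */
            return mb->vlan_tci & 0x0fff;   /* low 12 bits: VLAN ID */
    }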
Fixes: c6f4555074 ("net/enic: add ethernet VLAN packet type")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Cisco VIC models support RTE_IOVA_VA, so enable it. This change allows
the driver to work properly when --no-huge is used in combination
with VFIO and an IOMMU.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Bugzilla ID: 39
Fixes: 9913fbb91d ("enic/base: common code")
Fixes: 322b355f21 ("net/enic/base: bring NIC interface functions up to date")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Removed the DEV_RX_OFFLOAD_CRC_STRIP offload flag.
Without any specific Rx offload flag, the default behavior of PMDs is
to strip the CRC.
PMDs that support keeping the CRC should advertise the
DEV_RX_OFFLOAD_KEEP_CRC Rx offload capability.
Applications that require keeping the CRC should check the PMD
capability first and, if it is supported, enable the feature by setting
DEV_RX_OFFLOAD_KEEP_CRC in the Rx offload flags passed to
rte_eth_dev_configure().
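A hedged sketch of that application-side check (error handling trimmed,
function name made up for illustration):

    #include <rte_ethdev.h>

    static int
    sketch_configure_keep_crc(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_conf conf = { 0 };

            rte_eth_dev_info_get(port_id, &dev_info);
            /* Enable KEEP_CRC only if the PMD advertises it. */
            if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_KEEP_CRC)
                    conf.rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
            return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }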
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tomasz Duszynski <tdu@semihalf.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Jan Remes <remes@netcope.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
The NIC persists the VXLAN port number across vNIC init/de-init
(e.g. a testpmd restart). So, explicitly reset the setting to the
default value (4789) as part of the initialization.
Fixes: 8a4efd1741 ("net/enic: add handlers to add/delete vxlan port number")
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
This reverts the patch that enabled mbuf fast free.
There are two main reasons.
First, enic_fast_free_wq_bufs is broken. When
DEV_TX_OFFLOAD_MBUF_FAST_FREE is enabled, the driver calls this
function to free transmitted mbufs. This function currently does not
reset next and nb_segs. This is simply wrong as the fast-free flag
does not imply anything about next and nb_segs.
We could fix enic_fast_free_wq_bufs by making it call
rte_pktmbuf_prefree_seg to reset the required fields, but that negates
most of the cycle savings.
Second, there are customer applications that blindly enable all Tx
offloads supported by the device. Some of these applications do not
satisfy the requirements of mbuf fast free (i.e. a single pool per
queue and refcnt = 1), and end up crashing or behaving badly.
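For reference, a hedged sketch of the two free paths (names are
illustrative, not the actual enic code): the fast-free path returns
mbufs straight to a single pool, while the safe path must go through
rte_pktmbuf_prefree_seg().

    #include <rte_mbuf.h>

    /* Fast-free path: valid only if every mbuf comes from the same pool, is
     * not chained, and has refcnt == 1. Note it never resets next/nb_segs. */
    static void
    sketch_fast_free(struct rte_mbuf **bufs, unsigned int n,
                     struct rte_mempool *pool)
    {
            rte_mempool_put_bulk(pool, (void **)bufs, n);
    }

    /* Safe path: prefree resets the fields and honors refcnt and per-mbuf pools. */
    static void
    sketch_safe_free(struct rte_mbuf **bufs, unsigned int n)
    {
            unsigned int i;

            for (i = 0; i < n; i++) {
                    struct rte_mbuf *m = rte_pktmbuf_prefree_seg(bufs[i]);

                    if (m != NULL)
                            rte_mempool_put(m->pool, m);
            }
    }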
Fixes: bcaa54c1a1 ("net/enic: support mbuf fast free offload")
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
enic_set_mtu always reverts to the default Rx handler after changing
MTU. Try to use the simpler, non-scatter handler in this case as well.
Fixes: 35e2cb6a17 ("net/enic: add simple Rx handler")
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
A constructor is usually declared with the RTE_INIT* macros.
As it is a static function, there is no need to declare it before its
definition. The macro is used directly in the function definition.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
In the default Rx handler, stop processing packets at the end of
the completion ring so that wrapping does not have to be checked
in the inner while loop.
Also, check the color bit in the completion without using a conditional.
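The ring-end idea, as a generic hedged sketch (names are hypothetical,
not enic's): cap the inner loop at the physical end of the ring so the
index never needs a wrap check inside it.

    /* Sketch: process completions without a wrap check in the inner loop. */
    static unsigned int
    sketch_process_completions(unsigned int idx, unsigned int ring_size,
                               unsigned int avail)
    {
            while (avail != 0) {
                    unsigned int n = ring_size - idx; /* entries before the wrap */

                    if (n > avail)
                            n = avail;
                    /* ... handle n consecutive completions starting at idx ... */
                    avail -= n;
                    idx += n;
                    if (idx == ring_size)
                            idx = 0; /* wrap handled here, outside the hot loop */
            }
            return idx;
    }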
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
The default Tx handler checks the maximum packet size. Check it in the
prepare handler too. The WQ stops working if the app/driver tries to
send oversized packets, so these checks are unavoidable.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Add a much-simplified handler that works when all offloads are
disabled, except mbuf fast free. Compared with the default handler
under ideal conditions, cycles per packet drop by more than 60%.
The driver tries to use the simple handler first.
The idea of using specialized/simplified handlers is from the Intel
and Mellanox drivers.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Request one completion update per roughly 32 buffers. This saves DMA
resources on the NIC and reduces PCIe utilization and cache misses.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The WQ currently uses vnic_wq_buf to store mbuf pointers for Tx
packets, but it contains an unused mempool pointer and the mbuf is
unnecessarily cast to a void pointer. Remove vnic_wq_buf entirely and
use an array of mbuf pointers instead.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The NIC has one configurable VXLAN port, which is set to the default
4789 upon vNIC reset. Adding a non-default port replaces this single
VXLAN port. Deleting the previously added non-default port restores
the VXLAN port to the hardware default.
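From the application side this maps onto the standard UDP tunnel port
calls; roughly:

    #include <rte_ethdev.h>

    /* Sketch: replace the NIC's single VXLAN port, then restore the default. */
    static void
    sketch_vxlan_port(uint16_t port_id)
    {
            struct rte_eth_udp_tunnel tunnel = {
                    .udp_port = 4999, /* an arbitrary non-default port */
                    .prot_type = RTE_TUNNEL_TYPE_VXLAN,
            };

            rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);    /* now 4999 */
            rte_eth_dev_udp_tunnel_port_delete(port_id, &tunnel); /* back to 4789 */
    }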
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Add a new devarg "ig-vlan-rewrite" to allow the user to set a
non-default rewrite mode. The UCS VIC may add/remove/modify the VLAN
header of an ingress packet depending on the ingress VLAN rewrite
mode.
By default, the driver sets the pass-through mode, which tells the NIC
"do not touch the VLAN header; preserve it as is". This mode is usually
sufficient, but can complicate deployments in certain environments.
For example, OVS-DPDK in UCS blade environments may want to use "untag
default VLAN mode", which removes the VLAN header from an ingress
packet if it matches vNIC's default VLAN.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Report min/max ring sizes, alignments, and so on, and rely on the
common checks implemented in the rte_ethdev layer.
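In ethdev terms this means filling the rte_eth_desc_lim fields in the
infos-get path; a minimal hedged sketch (the numeric limits below are
placeholders, not enic's real values):

    #include <rte_ethdev.h>

    static void
    sketch_fill_desc_lim(struct rte_eth_dev_info *info)
    {
            /* rte_ethdev validates ring sizes against these limits. */
            info->rx_desc_lim = (struct rte_eth_desc_lim) {
                    .nb_max = 4096, /* placeholder values */
                    .nb_min = 64,
                    .nb_align = 32,
            };
            info->tx_desc_lim = info->rx_desc_lim;
    }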
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The fetch index must be initialized only when the RQ is
disabled. Otherwise, it may lead to stale entries in the IG descriptor
cache on the VIC.
Fixes: a74629cfa3 ("net/enic: enable RQ first and then post Rx buffers")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Currently, enic_alloc_wq (via rte_eth_tx_queue_setup) may overwrite
the admin limit with a lower value. This is wrong as seen in the
following sequence.
1. UCS admin sets the Tx queue limit (config.wq_desc_count) = 4096.
2. Set up a Tx queue with 512 descriptors.
   The admin limit (config.wq_desc_count) becomes 512.
3. Stop the ports and now set up a Tx queue with 1024 descriptors.
   This fails because 1024 is greater than the admin limit (512).
Do not modify the admin limit, and when queried, report the current
number of descriptors instead of the admin limit. The Rx queue setup
(enic_alloc_rq) does not have this problem.
Fixes: fefed3d1e6 ("enic: new driver")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The UDP RSS interface has changed in the release firmware for 100G VIC
adapters. The capability bit is now in NIC_CFG. Also, the driver is
supposed to use CMD_NIC_CFG_CHK and check whether the RSS config is
successful. No more changes are expected with respect to the UDP RSS
API.
Fixes: 94c3518958 ("net/enic: update UDP RSS controls")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Fix missing or incorrect packet types discovered by DTS.
- Non-IP inner packets
Set the tunnel flag.
- Inner Ethernet packets
All supported tunnel packets have Ethernet as inner packets. So, set
INNER_L2_ETHER for all tunnel types.
- IPv4 fragments carrying TCP/UDP
The NIC indicates TCP/UDP based on the protocol in the IP header. For
fragments, ignore that bit and always set L4_FRAG.
- IPv6 fragments
The NIC does recognize fragments (IPv6 packets with fragment extension
headers). Set packet types for these.
Fixes: 93fb21fdbe ("net/enic: enable overlay offload for VXLAN and GENEVE")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Add the following missing flags to the advertised offloads.
- DEV_RX_OFFLOAD_CRC_STRIP
CRC is always stripped.
- DEV_RX_OFFLOAD_JUMBO_FRAME
Jumbo support is always enabled on the NIC.
- DEV_RX_OFFLOAD_SCATTER
Scatter Rx is currently supported.
- DEV_TX_OFFLOAD_MULTI_SEGS
Multiple-segment transmit has always been supported.
Fixes: 93fb21fdbe ("net/enic: enable overlay offload for VXLAN and GENEVE")
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Drop is a fate-deciding action, so mark it as FATE. It was missing in
a previous commit.
Fixes: cc17feb904 ("ethdev: alter behavior of flow API actions")
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Current adapters that support UDP RSS piggyback on TCP RSS. Change
the controls to be forward compatible with future adapters, which will
have independent control of UDP and TCP.
Fixes: 9bd04182bb ("net/enic: support UDP RSS on 1400 series adapters")
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
The NIC can hash these RSS packet types, but they are not advertised
via flow_type_rss_offloads. So add them.
- Part of the IPv4 hash:
ETH_RSS_FRAG_IPV4
ETH_RSS_NONFRAG_IPV4_OTHER
- Part of the IPv6 hash:
ETH_RSS_FRAG_IPV6
ETH_RSS_NONFRAG_IPV6_OTHER
- Part of the UDP hash:
ETH_RSS_IPV6_UDP_EX
Also, do not use NIC_CFG_RSS_HASH_TYPE_IPV6_EX and
NIC_CFG_RSS_HASH_TYPE_TCP_IPV6_EX, as they are not needed to enable
RSS over IPv6 with extension headers.
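On the application side, requesting the newly advertised types is just
a matter of including them in rss_hf (subject to
dev_info.flow_type_rss_offloads); a hedged sketch:

    #include <rte_ethdev.h>

    static int
    sketch_rss_update(uint16_t port_id)
    {
            struct rte_eth_rss_conf rss_conf = {
                    .rss_key = NULL, /* keep the current key */
                    .rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
                              ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER |
                              ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
                              ETH_RSS_IPV6_UDP_EX,
            };

            return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
    }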
Fixes: c2fec27b5c ("net/enic: allow to change RSS settings")
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
Related to commit d9fff8a31, which requires rte_errno to always hold
positive errno values.
Technically this is an ABI change since it fixes an error code
introduced in 18.02, but it is minor and inconsequential.
Fixes: 1e81dbb532 ("net/enic: add Tx prepare handler")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
The RQ setup functions (enic_alloc_rq and enic_alloc_rx_queue_mbufs)
have changed to rely on max_rx_pkt_len to determine whether to use
scatter and which buffer size to post. But, the MTU handler only
updates ethdev's MTU value. So make it update max_rx_pkt_len as
well. Other PMDs also update both mtu and max_rx_pkt_len in their MTU
handlers.
Also, the condition for taking a shortcut (scatter is disabled) in the
MTU handler is wrong. Even when scatter is disabled, a change in
max_rx_pkt_len may affect the buffer size posted to the NIC. So remove
that condition.
Finally, fix a comment and a warning message condition.
Fixes: 422ba91716 ("net/enic: heed the requested max Rx packet size")
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
Future VIC adapters may require that the driver enable the RQ before
posting new buffers to the NIC. So split enic_alloc_rx_queue_mbufs()
into two functions: one that allocates buffers and fills the RQ, and
one that posts them (i.e. a PIO write to a doorbell). Call the post
function only after enabling the RQ.
Currently released models are not affected by this change, as they
work fine whether the driver posts buffers before or after enabling RQ.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
This new attribute ("transfer") enables applications to create flow
rules that do not simply match traffic whose origin is specified in
the pattern (e.g. some
non-default physical port or VF), but actively affect it by applying the
flow rule at the lowest possible level in the underlying device.
It breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_validate()
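For reference, the attribute is a new bit in struct rte_flow_attr; a
hedged usage sketch (pattern and actions omitted):

    #include <rte_flow.h>

    /* Sketch: request that the rule be applied at the lowest possible level. */
    static struct rte_flow *
    sketch_create_transfer_rule(uint16_t port_id,
                                const struct rte_flow_item pattern[],
                                const struct rte_flow_action actions[],
                                struct rte_flow_error *err)
    {
            const struct rte_flow_attr attr = {
                    .ingress = 1,
                    .transfer = 1, /* the new attribute */
            };

            return rte_flow_create(port_id, &attr, pattern, actions, err);
    }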
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
TPID handling in rte_flow VLAN and E_TAG pattern item definitions is not
consistent with the normal stacking order of pattern items, which is
confusing to applications.
The problem is that, when followed by one of these layers, the
EtherType field of the preceding layer keeps its "inner" definition,
and the "outer" TPID is provided by the subsequent layer, the reverse
of how a packet looks on the wire:
Wire: [ ETH TPID = A | VLAN EtherType = B | B DATA ]
rte_flow: [ ETH EtherType = B | VLAN TPID = A | B DATA ]
Worse, when QinQ is involved, the stacking order of VLAN layers is
unspecified. It is unclear whether it should be reversed (innermost to
outermost) as well given TPID applies to the previous layer:
Wire: [ ETH TPID = A | VLAN TPID = B | VLAN EtherType = C | C DATA ]
rte_flow 1: [ ETH EtherType = C | VLAN TPID = B | VLAN TPID = A | C DATA ]
rte_flow 2: [ ETH EtherType = C | VLAN TPID = A | VLAN TPID = B | C DATA ]
While specifying EtherType/TPID is hopefully rarely necessary, the
stacking order in the case of QinQ and the lack of documentation remain
an issue.
This patch replaces TPID in the VLAN pattern item with an inner
EtherType/TPID, as is usually done everywhere else (e.g. struct
vlan_hdr), clarifies the documentation, and updates all relevant code.
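For example, after this change a VLAN-tagged IPv4 match puts the TPID
in the ETH item and the inner EtherType in the VLAN item, roughly as
follows (masks omitted for brevity, so the default item masks apply):

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Sketch: ETH carries the TPID (0x8100), VLAN carries the inner EtherType. */
    static const struct rte_flow_item_eth sketch_eth_spec = {
            .type = RTE_BE16(0x8100), /* TPID of the VLAN header */
    };
    static const struct rte_flow_item_vlan sketch_vlan_spec = {
            .inner_type = RTE_BE16(0x0800), /* encapsulated IPv4 */
    };
    static const struct rte_flow_item sketch_pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &sketch_eth_spec },
            { .type = RTE_FLOW_ITEM_TYPE_VLAN, .spec = &sketch_vlan_spec },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };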
It breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Summary of changes for PMDs that implement ETH, VLAN or E_TAG pattern
items:
- bnxt: EtherType matching is supported with and without VLAN, but TPID
matching is not and triggers an error.
- e1000: EtherType matching is only supported with the ETHERTYPE filter,
which does not support VLAN matching, therefore no impact.
- enic: same as bnxt.
- i40e: same as bnxt with existing FDIR limitations on allowed EtherType
values. The remaining filter types (VXLAN, NVGRE, QINQ) do not support
EtherType matching.
- ixgbe: same as e1000, with additional minor change to rely on the new
E-Tag macro definition.
- mlx4: EtherType/TPID matching is not supported, no impact.
- mlx5: same as bnxt.
- mvpp2: same as bnxt.
- sfc: same as bnxt.
- tap: same as bnxt.
Fixes: b1a4b4cbc0 ("ethdev: introduce generic flow API")
Fixes: 99e7003831 ("net/ixgbe: parse L2 tunnel filter")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch makes the following changes to flow rule actions:
- List order now matters: actions are redefined as performed first to
last instead of "all simultaneously".
- Repeated actions are now supported (e.g. specifying QUEUE multiple times
now duplicates traffic among them). Previously only the last action of
any given kind was taken into account.
- No more distinction between terminating/non-terminating/meta actions.
Flow rules themselves are now defined as always terminating unless a
PASSTHRU action is specified.
These changes alter the behavior of flow rules in corner cases in order to
prepare the flow API for actions that modify traffic contents or properties
(e.g. encapsulation, compression) and for which order matters when
combined.
Previously one would have to do so through multiple flow rules by
combining PASSTHRU with priority levels; however, this proved overly
complex to implement at the PMD level, hence this simpler approach.
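As an illustration of the new semantics, a hedged sketch of an action
list where both repetition and ordering matter:

    #include <rte_flow.h>

    /* Sketch: under the new semantics this duplicates matching traffic to
     * queues 0 and 1, then terminates (no PASSTHRU present). */
    static const struct rte_flow_action_queue sketch_q0 = { .index = 0 };
    static const struct rte_flow_action_queue sketch_q1 = { .index = 1 };
    static const struct rte_flow_action sketch_actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &sketch_q0 },
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &sketch_q1 },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };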
This breaks ABI compatibility for the following public functions:
- rte_flow_create()
- rte_flow_validate()
PMDs with rte_flow support are modified accordingly:
- bnxt: no change, implementation already forbids multiple actions and does
not support PASSTHRU.
- e1000: no change, same as bnxt.
- enic: modified to forbid redundant actions, no support for default drop.
- failsafe: no change needed.
- i40e: no change, implementation already forbids multiple actions.
- ixgbe: same as i40e.
- mlx4: modified to forbid multiple fate-deciding actions and drop when
unspecified.
- mlx5: same as mlx4, with other redundant actions also forbidden.
- sfc: same as mlx4.
- tap: implementation already complies with the new behavior except for
the default pass-through, which is modified into a default drop.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
A local variable was used without initialization and triggered a
Coverity issue.
It is fixed here, but there is no ill effect from not initializing
the variable in this case: 'rxq_interrupt_offset' is irrelevant
if 'rxq_interrupt_enable' is not set (the condition caught by
Coverity).
Coverity issue: 268314
Fixes: fc2c8c0668 ("net/enic: use Tx completion index instead of messages")
Cc: stable@dpdk.org
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Recent NIC models support overlay offload. The overlay offload
feature enables the following on the NIC.
- Rx/Tx checksum offloads for both inner and outer packets.
- Rx inner packet type classification.
- TSO.
- Inner RSS.
Tx descriptors do not require any changes, except the header length
for TSO. The NIC parses outer/inner packets and performs offloads on
them as necessary. The header length for tunneled TSO includes both
inner and outer headers.
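For a tunneled TSO packet that header length comes from the usual mbuf
length fields; a hedged sketch of what an application sets for a
VXLAN/IPv4/TCP packet (lengths and MSS are illustrative):

    #include <rte_mbuf.h>

    /* Sketch: the NIC needs the full header length, outer + encap + inner. */
    static void
    sketch_set_vxlan_tso(struct rte_mbuf *m)
    {
            m->outer_l2_len = 14;   /* outer Ethernet */
            m->outer_l3_len = 20;   /* outer IPv4 */
            m->l2_len = 8 + 8 + 14; /* outer UDP + VXLAN + inner Ethernet */
            m->l3_len = 20;         /* inner IPv4 */
            m->l4_len = 20;         /* inner TCP */
            m->tso_segsz = 1400;    /* illustrative MSS */
            m->ol_flags |= PKT_TX_TUNNEL_VXLAN | PKT_TX_OUTER_IPV4 |
                           PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_SEG;
    }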
The NIC actually parses and performs the above for NVGRE as well. DPDK
currently has no offload flags for NVGRE, and the hardware has no
controls to individually enable tunnel types either. So do nothing for
now.
The driver enables overlay offload by default. Add a devarg
'disable-overlay=<0|1>' to allow the app to disable it.
Also update the enic guide doc.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Modified enic_del_mac_address() to get a return value from the vnic
layer.
Reused the .mac_addr_add and .mac_addr_del callback code to implement
the primary MAC address handler.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
The public struct rte_eth_dev_info has a "struct rte_pci_device" field
even though it is common to all ethdevs on all buses.
Replace the PCI-specific struct with the generic device struct and
update the places that use the PCI device so they get this information
from the generic device.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: David Marchand <david.marchand@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
1330 and 1400 series adapters support the drop action. Check for its
availability and set the necessary flag when creating NIC filters.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The enic code called from rte_eth_dev_set_mtu() was assuming that the
Rx queues were already set up via a call to rte_eth_rx_queue_setup().
OVS calls rte_eth_dev_set_mtu() before rte_eth_rx_queue_setup(), and
a null pointer was dereferenced.
Fixes: c3e09182bc ("net/enic: support scatter Rx in MTU update")
Cc: stable@dpdk.org
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Recent models support IPv4/IPv6 UDP RSS. There is no control bit to
enable UDP RSS alone. Instead, the NIC enables/disables TCP and UDP
RSS together.
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
The firmware on new hardware models flushes the global descriptor
cache by default. Use CMD_OPENF_IG_DESCCACHE to avoid cache
flushing. This flag has no effect on older models.
Suggested-by: Govindarajulu Varadarajan <gvaradar@cisco.com>
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>