Notify user otherwise. A similar check has already been added to mlx5 in
commit "mlx5: check port is configured as ethernet device".
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Currently, the maximum number of Rx/Tx queues is kept by the EAL, but the
vhost PMD uses that value with two different meanings:
- the maximum number of currently enabled queues;
- the maximum number of currently supported queues.
This double meaning causes an issue with the following steps.
* Invoke the application with the option below.
  --vdev 'eth_vhost0,iface=<socket path>,queues=4'
* Configure the queues.
  rte_eth_dev_configure(portid, 2, 2, ...);
* Configure the queues again.
  rte_eth_dev_configure(portid, 4, 4, ...);
The second rte_eth_dev_configure() fails because, after the first call,
both the maximum number of currently enabled queues and the maximum number
of supported queues are '2'.
To fix the issue, this patch adds another variable that keeps the maximum
number of supported queues in the vhost PMD.
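A minimal sketch of the idea (the field and function names here are
illustrative, not the actual ones in the vhost PMD):

    #include <stdint.h>

    struct pmd_internal_sketch {
            uint16_t max_queues;    /* supported maximum, fixed at vdev
                                     * creation from the "queues=N" option */
    };

    static int
    check_queue_counts(struct pmd_internal_sketch *internal,
                       uint16_t nb_rxq, uint16_t nb_txq)
    {
            /* Validate against the fixed supported maximum, not against the
             * count enabled by a previous rte_eth_dev_configure() call. */
            if (nb_rxq > internal->max_queues || nb_txq > internal->max_queues)
                    return -1;
            return 0;
    }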
Fixes: 23981fb0d78b ("vhost: Add vhost PMD")
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ciara Loftus <ciara.loftus@intel.com>
Issue:
When CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=n is set in the config
file, there is a build error:
'i40e_recv_pkts_bulk_alloc' undeclared
The DPDK i40e PMD uses the preprocessor to choose whether or not to define
the bulk receive functions, but the selection of the RX function depends
only on a C variable. This inconsistency leads to the build error because
the bulk receive function may not be defined.
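The inconsistency can be pictured roughly as follows (a simplified sketch
with a shortened signature, not the exact driver code):

    #include <stdint.h>

    #ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
    /* The bulk receive function is only compiled in when the macro is set. */
    uint16_t i40e_recv_pkts_bulk_alloc(void *rxq, void **rx_pkts, uint16_t n);
    #endif

    typedef uint16_t (*rx_burst_t)(void *rxq, void **rx_pkts, uint16_t n);

    void
    select_rx_function(int bulk_alloc_allowed, rx_burst_t *rx_burst)
    {
            /* Selection depends only on a runtime flag; with the macro
             * disabled, the symbol below is undeclared and the build fails. */
            if (bulk_alloc_allowed)
                    *rx_burst = i40e_recv_pkts_bulk_alloc;
    }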
Fixes: 8e109464c0 ("i40e: allow vector Rx and Tx usage")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch extends the commands for changing flow director filter's input
set. It adds vlan as a possible filter input field.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch extends flow director to select vlan id as part of
filter's input set and program the filter rule with vlan id.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds missing VLAN bitmask for inner frame in case of
tunneling and fixes VLAN tags bitmasks for single or outer frame
in case of tunneling.
Fixes: 98f0557076 ("i40e: configure input fields for RSS or flow director")
Signed-off-by: Andrey Chilikin <andrey.chilikin@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch extends the commands for changing a flow director filter's
input set. It adds tos, protocol and ttl as filter input fields, and
removes word selection from flex payloads.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch extends flow director to select more IP Header fields
as filter input set.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds a new function that sets the fdir input set to its
default at initialization time.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
In this patch, flex payload is removed from the valid fdir input set
values. This is because all flex payload configuration can be set in
struct rte_fdir_conf during the device configuration phase, which is a
more flexible way of setting this up.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
For input set selection, the hash filter and the flow director share the
same function, i.e. i40e_filter_inset_select().
For code readability, this patch replaces i40e_filter_inset_select() with
two new functions: i40e_hash_filter_inset_select() and
i40e_fdir_filter_inset_select(), for the hash filter and the flow director
respectively.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds RTE_ETH_INPUT_SET_L3_IP4_TTL,
RTE_ETH_INPUT_SET_L3_IP6_HOP_LIMITS input field types and extends
struct rte_eth_ipv4_flow and rte_eth_ipv6_flow to support filtering
by tos, protocol and ttl.
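For example, with these additions an application could select TTL as part
of a flow type's input set roughly as follows (a sketch based on the
rte_eth_ctrl.h definitions; names may differ slightly between releases):

    #include <rte_ethdev.h>
    #include <rte_eth_ctrl.h>

    static int
    select_ipv4_udp_ttl_inset(uint8_t port_id)
    {
            struct rte_eth_input_set_conf conf = {
                    .flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP,
                    .inset_size = 3,
                    .field = {
                            RTE_ETH_INPUT_SET_L3_SRC_IP4,
                            RTE_ETH_INPUT_SET_L3_DST_IP4,
                            RTE_ETH_INPUT_SET_L3_IP4_TTL,
                    },
                    .op = RTE_ETH_INPUT_SET_SELECT,
            };

            return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
                                           RTE_ETH_FILTER_SET, &conf);
    }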
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Virtio has an mbuf descriptor ring containing mbufs to be used for
receiving traffic. When the host queues traffic to be sent to the guest, it
consumes these descriptors. If none exist, it discards the packet.
The virtio pmd allocates mbufs to the descriptor ring every time it
successfully receives a packet. However, it never does it if it does not
receive a valid packet. If the descriptor ring is exhausted, and the mbuf
mempool does not have any mbufs free (which can happen for various reasons,
such as queueing along the processing pipeline), then the receive call will
not allocate any mbufs to the descriptor ring, and when it finishes, the
descriptor ring will be empty. The ring being empty means that we will
never receive a packet again, which means we will never allocate mbufs to
the ring: we are stuck.
Ultimately, the problem arises because there is a dependency between
receiving packets and making the descriptor ring not be empty, and a
dependency between the descriptor ring not being empty, and receiving
packets.
To fix the problem, this patch makes virtio always try to allocate mbufs
to the descriptor ring, if necessary, when polling for packets. This is
done by removing the early exit taken when no packets were received. Since
the packet loop later does nothing if there are no packets, this is safe.
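In outline, the change looks like this (a sketch; the helper names are
illustrative, not the driver's):

    #include <stdint.h>

    struct rxq;
    /* Illustrative helpers, not actual virtio PMD functions. */
    uint16_t dequeue_used_descs(struct rxq *q, void **pkts, uint16_t n);
    void refill_desc_ring(struct rxq *q);   /* allocate mbufs into the ring */

    uint16_t
    recv_burst(struct rxq *q, void **pkts, uint16_t nb_pkts)
    {
            uint16_t nb_rx = dequeue_used_descs(q, pkts, nb_pkts);

            /* Before: if (nb_rx == 0) return 0; -- this skipped the refill
             * below, so an empty descriptor ring could never recover. */

            refill_desc_ring(q);    /* always try to replenish descriptors */

            return nb_rx;           /* the packet loop is a no-op when 0 */
    }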
I reproduced the problem by pushing packets through a pipelined system
(such as the client_server sample application) after artificially
decreasing the size of the mbuf pool and introducing a delay in a secondary
stage.
Without the fix, the process stops receiving packets fairly quickly. With
the fix, it continues to receive packets.
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Kyle Larose <klarose@sandvine.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
This structure has immutable function pointers.
Also fix indentation.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
On initialization, the RQ descriptor count was set to the limit of the
VIC. When the requested number of Rx descriptors was less than this count,
enic_alloc_rq() incorrectly set the count to the lower value. This results
in later calls to enic_alloc_rq() incorrectly using the lower value as the
adapter limit.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The update function can be called without a key, to enable or disable an
RSS protocol, or with a key to be applied to the desired protocols.
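For example (a sketch using the standard ethdev calls; the 40-byte key is
an example length only, and other hash flags are possible):

    #include <stdint.h>
    #include <rte_ethdev.h>

    static void
    rss_update_example(uint8_t port_id)
    {
            struct rte_eth_rss_conf conf = {
                    .rss_key = NULL,        /* NULL: keep the current key */
                    .rss_hf = ETH_RSS_IPV4 | ETH_RSS_UDP,
            };
            uint8_t key[40] = { 0 };        /* example key length only */

            /* Enable/disable RSS protocols without touching the key. */
            rte_eth_dev_rss_hash_update(port_id, &conf);

            /* Apply a key to the selected protocols. */
            conf.rss_key = key;
            conf.rss_key_len = sizeof(key);
            rte_eth_dev_rss_hash_update(port_id, &conf);
    }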
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
RSS configuration provided by the application should not be used as storage
by the PMD.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
For x550 devices, the RETA table has 512 entries, but the functions
ixgbe_dev_rss_reta_query() and ixgbe_dev_rss_reta_update() traverse the
entries with a "uint8_t i", which leads to an endless loop.
This patch changes the data type from uint8_t to uint16_t to fix
the issue.
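The failure mode is easy to reproduce in isolation (illustrative snippet):

    #include <stdint.h>

    void
    endless_loop(void)
    {
            uint16_t reta_size = 512;       /* x550: 512 RETA entries */
            uint8_t i;

            for (i = 0; i < reta_size; i++) {
                    /* i wraps from 255 back to 0, so "i < 512" is always
                     * true and the loop never terminates; declaring i as
                     * uint16_t fixes this. */
            }
    }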
Fixes: 4bee94a6c2 ("ixgbe: support 512 RSS entries on x550")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
If the packet_error bit in the completion descriptor is set, the
remainder of the descriptor and data are invalid. PKT_RX_MAC_ERR
was set in the mbuf->ol_flags if packet_error was set and used
later to indicate an error packet. But since PKT_RX_MAC_ERR is
defined as 0, mbuf flags and packet types and length were being
misinterpreted.
Make the function enic_cq_rx_to_pkt_err_flags() return true for error
packets and use the return value instead of mbuf->ol_flags to indicate
error packets. Also remove warning for error packets and rely on
rx_error stats.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
In the receive path, the function to set mbuf ol_flags used the
mbuf packet_type before it was set.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
Add checks to make sure we don't try to allocate more tx or rx queues
than we support.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Add the missing '\n' character to the end of a few print statements.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Acked-by: John Daley <johndale@cisco.com>
VLAN insertion can be done in hardware when supported in Verbs. A software
fallback is provided otherwise. The software implementation is also used
when multi-packet send is enabled on a queue, as both features are mutually
exclusive.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Environment variable MLX5_PMD_ENABLE_PADDING enables HW packet padding
in PCI bus transactions.
When packet size is cache aligned and CRC stripping is enabled, 4 fewer
bytes are written to the PCI bus. Enabling padding makes such packets
aligned again.
In cases where PCI bandwidth is the bottleneck, padding can improve
performance by 10%.
This is disabled by default since this can also decrease performance for
unaligned packet sizes.
Signed-off-by: Olga Shern <olgas@mellanox.com>
Fix packet padding macro check:
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Secondary processes are expected to use queues and other resources
allocated by the primary, however Verbs resources can only be shared
between processes when inherited through fork().
This limitation can be worked around for TX by configuring separate queues
from secondary processes.
Signed-off-by: Or Ami <ora@mellanox.com>
Add driver functions to set link state up or down.
Burst functions are updated to make sure applications cannot attempt to
send/receive after link is brought down.
Signed-off-by: Or Ami <ora@mellanox.com>
When a Linux PF is used with a DPDK VF on the i40e PMD and a PF reset
occurs, an interrupt is delivered via an adminq event to inform the VF of
the reset.
A callback mechanism is introduced so the VF can invoke a registered
callback when a PF reset happens.
Users can register a callback for this interrupt event using:
    rte_eth_dev_callback_register(portid,
                                  RTE_ETH_EVENT_INTR_RESET,
                                  reset_event_callback,
                                  arg);
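A callback could then look like this (a sketch; the function must match
the rte_eth_dev_cb_fn prototype of the release in use):

    static void
    reset_event_callback(uint8_t port_id, enum rte_eth_event_type event,
                         void *arg)
    {
            if (event != RTE_ETH_EVENT_INTR_RESET)
                    return;
            /* Stop traffic on port_id, re-initialize the VF and restart
             * the queues once the PF reset has completed. */
    }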
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Currently, the i40evf PMD uses a global static buffer, shared by multiple
VFs, to send virtchnl commands to the host driver.
This patch changes it to allocate a virtchnl command buffer for each VF.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch introduces a new PMD, implemented as a thin wrapper around
librte_vhost; librte_vhost is therefore also needed to compile the PMD.
Vhost messages are handled only while a port is started, so start the
port first, then invoke QEMU.
The PMD has 2 parameters.
- iface: The parameter is used to specify a path to connect to a
virtio-net device.
- queues: The parameter is used to specify the number of the queues
virtio-net device has.
(Default: 1)
Here is an example.
$ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i
To connect to the above testpmd instance, here is an example QEMU command.
$ qemu-system-x86_64 \
<snip>
-chardev socket,id=chr0,path=/tmp/sock0 \
-netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=1 \
-device virtio-net-pci,netdev=net0,mq=on
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Update for queue state event name:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This patch adds the event type below:
- RTE_ETH_EVENT_QUEUE_STATE
This event occurs when queues are enabled or disabled. So far only the
vhost PMD supports the event, where it indicates that queues have been
enabled or disabled by the virtio-net device. Such an event is needed
because the virtio-net device may not enable all the queues the vhost PMD
prepares.
Because only the vhost PMD uses the event so far, it is not an actual
hardware interrupt but a simple software event.
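Applications can subscribe to it in the usual way (a sketch; the callback
must match the rte_eth_dev_cb_fn prototype):

    static void
    queue_state_callback(uint8_t port_id, enum rte_eth_event_type event,
                         void *arg)
    {
            if (event != RTE_ETH_EVENT_QUEUE_STATE)
                    return;
            /* Re-query the device to learn which queues are now enabled or
             * disabled and adjust the polling threads accordingly. */
    }

    /* Registered with:
     * rte_eth_dev_callback_register(portid, RTE_ETH_EVENT_QUEUE_STATE,
     *                               queue_state_callback, NULL);
     */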
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Minor modification to event name and comment:
Suggested-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This is a PMD for the Amazon ENA (Elastic Network Adapter) family of
ethernet devices.
The driver operates a variety of ENA adapters through feature negotiation
with the adapter and an upgradable command set.
The ENA driver handles both physical and virtual PCI ENA functions.
Signed-off-by: Evgeny Schemeilin <evgenys@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Release Note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Implementation of platform-specific code for the ENA communication layer.
Signed-off-by: Evgeny Schemeilin <evgenys@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Low-level common abstraction for ENA device communication.
Signed-off-by: Netanel Belgazal <netanel@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Add a new API, rte_eth_dev_get_supported_ptypes(), to query which packet
types can be filled by a given device. The device should already be
started, or its PMD RX burst function already decided, since the supported
packet types may vary depending on the RX function.
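Example usage (a sketch; return-value handling follows the API doxygen):

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    show_l4_ptypes(uint8_t port_id)
    {
            uint32_t ptypes[16];
            int i, n;

            /* Query which L4 packet types the chosen RX path can report. */
            n = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L4_MASK,
                                                 ptypes, 16);
            for (i = 0; i < n && i < 16; i++)
                    printf("supported ptype: 0x%x\n",
                           (unsigned int)ptypes[i]);
    }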
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The option --no-summary will remove this line in quiet mode:
total: 1 errors, 0 warnings, 7 lines checked
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Probe for the kernel module's existence through /sys/module/ so that the
check works with both loadable and built-in kernel modules.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
flake8 checks were run for both Python 2.7 and 3.4.
There were some style issues, such as:
- lines wider than 79 characters
- missing double blank line before function definitions
- missing double space before inline comments
- some other minor issues
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Acked-by: John McNamara <john.mcnamara@intel.com>
The output for the core list included an extra linefeed, making the
number of lines displayed much larger than required.
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
This test expects that a vdev is instantiated on the command line.
If that is not the case, just skip this part.
Fixes: 4ea3801b32 ("app/test: fix ring unit test")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
NULL crypto operation is now supported, but l2fwd-crypto was missing an
update to the list of supported algorithms that can be passed from the
command line.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
AES GCM is an algorithm for both ciphering and authentication, but the
authentication algorithm was missing from the list of supported algorithms
that can be passed from the command line.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Since SNOW3G UEA2/UIA2 are now supported by both HW and SW, l2fwd-crypto
may use them, extending the list of algorithms parsed from the command
line.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The previous cdev parameter was changed to cdev_type, to select a crypto
device type preference (HW/SW/ANY), instead of the device itself
(QAT/AESNI...).
Also delete the duplicated cdev parameter from the help text.
Fixes: 27cf2d1b18 ("examples/l2fwd-crypto: discover capabilities")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>