Provide APIs to enable allmulticast and promiscuous mode in the Netvsc
PMD with a VF. This keeps the VF and PV paths in sync.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
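For illustration only (not part of the patch): a minimal sketch of how an
application exercises these settings through the generic ethdev API, assuming
a started netvsc port identified by port_id; with this change the PMD is
expected to mirror the setting onto the VF as well.

    #include <rte_ethdev.h>

    static void
    enable_rx_all(uint16_t port_id)
    {
        /* Applied to the synthetic (PV) path and propagated to the VF. */
        rte_eth_promiscuous_enable(port_id);
        rte_eth_allmulticast_enable(port_id);
    }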
The i40e driver required users to configure the exact fdir mode before
creating rte_flow rules, but it shouldn't. This patch allows users to create
rte_flow rules without configuring the fdir mode and lets the driver
configure fdir automatically. It also removes the workaround in the flow
filtering example.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
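As a hedged sketch (not taken from the patch), the kind of rule that can now
be created directly, without touching the fdir configuration first; the port
id, queue index and addresses are illustrative.

    #include <rte_ethdev.h>
    #include <rte_flow.h>
    #include <rte_byteorder.h>

    static struct rte_flow *
    create_ipv4_queue_rule(uint16_t port_id)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_ipv4 ip_spec = {
            .hdr.dst_addr = rte_cpu_to_be_32(0xC0A80101), /* 192.168.1.1 */
        };
        struct rte_flow_item_ipv4 ip_mask = {
            .hdr.dst_addr = rte_cpu_to_be_32(0xFFFFFFFF),
        };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4,
              .spec = &ip_spec, .mask = &ip_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 1 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;

        /* Steer IPv4 packets destined to 192.168.1.1 into Rx queue 1. */
        return rte_flow_create(port_id, &attr, pattern, actions, &error);
    }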
After setting up the link on a fiber port, the maximum wait time for
the link to come up is 500 ms in ixgbe_setup_mac_link_multispeed_fiber().
On an x550 SFP+ port, this is often not sufficiently long for the link
to come up. This can result in never being able to retrieve accurate
link status for the port using rte_eth_link_get_nowait().
Increase the maximum wait time in ixgbe_setup_mac_link_multispeed_fiber()
to 1 s.
Bugzilla ID: 69
Fixes: f3430431abaf ("ixgbe/base: add SFP+ dual-speed support")
Cc: stable@dpdk.org
Signed-off-by: Matthew Smith <mgsmith@netgate.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
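An illustrative, application-side view of why the longer in-driver wait
matters (this code is not from the patch; the 50 ms step and timeout budget
are arbitrary): polling rte_eth_link_get_nowait() only reports an accurate
status once the driver has given the link enough time to come up.

    #include <rte_ethdev.h>
    #include <rte_cycles.h>

    static int
    wait_for_link_up(uint16_t port_id, unsigned int timeout_ms)
    {
        struct rte_eth_link link;
        unsigned int waited = 0;

        do {
            rte_eth_link_get_nowait(port_id, &link);
            if (link.link_status == ETH_LINK_UP)
                return 0;
            rte_delay_ms(50);
            waited += 50;
        } while (waited < timeout_ms);

        return -1; /* link still down after timeout_ms */
    }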
Since rte_intr_enable is called at init and start time, remove it from the
interrupt_action function to avoid excessive system calls.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Since rte_intr_enable is called at init and start time, remove it from the
interrupt_action function to avoid excessive system calls.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
There is a new set of TR bits that can be used when replacing cloud filters,
so add them. Also add a check to make sure that the replace cloud filters AQ
command does not get executed on an X722, since it is not supported there.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Andrey Chilikin <andrey.chilikin@intel.com>
Signed-off-by: Kirill Rybalchenko <kirill.rybalchenko@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Add a HW capability flag to indicate that the firmware supports stopping the
LLDP agent. This feature was added in FW API 1.7 for XL710 devices and 1.6
for X722. Also raise the expected minor version number for the X722 FW API
to 6.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
These two functions are currently only used in the LED get/set functions,
which are not a part of the VF driver, so the i40e_aq_set/get_phy_register
functions should be wrapped so that they can be removed from the VF driver.
It was brought up in the Linux community that these functions had no in-tree
callers in the VF driver, so they should be removed.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The Carlsville device uses a 10GBASE-T/1GBASE-T PHY with additional support
for 5GBASE-T/2.5GBASE-T.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
This patch adds the default value for the Flow Control Refresh Threshold to
the set_mac_config AdminQ command. Previously, calling this AdminQ command
would overwrite the default value with 0.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
When switching between the old NVM structure approach (called structured
NVM) and the new one (called flat NVM), or back, the flash needs to be
rearranged to the required NVM structure. This is part of the transition
from one NVM structure to the other. The function is introduced to command
the firmware to start the rearrangement process.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Firmware can return a busy state, in which case i40e_asq_send_command will
return I40E_ERR_NOT_READY.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Add a new field, command flags, with only one flag for now. The added flag
tells the FW that it should not change the page while accessing the QSFP
module, as it was set manually.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The wait time for the Global Reset Ready steady state is calculated based on
the GLGEN_RSTCTL.GRSTDEL value. However, the current implementation
multiplied that value by 20 as a workaround for an issue on SOC platforms.
This resulted in the maximum GLGEN_RSTCTL.GRSTDEL timeout of 6.5 seconds
becoming 130 seconds, which is so long that the VMkernel watchdog thinks
the kernel is frozen and triggers a PSOD.
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Allocated resources were not freed in the event of a failure in the
i40e_init_asq function. This patch gracefully handles all failures.
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Allocated resources were not freed in the event of a failure in the
i40e_init_lan_hmc function. This patch gracefully handles the failure case
after initializing the LAN HMC.
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The NVM is in little endian, so when we read from it we need to convert the
data correctly for the endianness of the machine.
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
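A minimal sketch of the idea, not the driver code itself: a raw word read
from the little-endian NVM must be converted to host byte order before use.

    #include <stdint.h>
    #include <rte_byteorder.h>

    static uint16_t
    nvm_word_to_host(uint16_t raw_le_word)
    {
        /* No-op on little-endian hosts, byte swap on big-endian ones. */
        return rte_le_to_cpu_16(raw_le_word);
    }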
This patch fixes the polling of GLGEN_RSTAT.DEVSTATE in the PF Reset path
when a Global Reset is in progress. If a Global Reset is triggered while the
driver is polling for the end of the PF Reset, abandon the PF Reset path and
prepare for the upcoming Global Reset.
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
This patch enables tc-flower based hardware offloads. A tc flower filter
provided by the kernel is configured as a driver-specific cloud filter. The
patch implements the functions and admin queue commands needed to support
cloud filters in the driver and adds cloud filters to configure these
tc-flower filters.
It also covers the API renaming below for code cleanup:
- i40e_aq_add_cloud_filters_big_buffer to i40e_aq_add_cloud_filters_bb
- i40e_aq_remove_cloud_filters_big_buffer to i40e_aq_rem_cloud_filters_bb
- i40e_aq_remove_cloud_filters to i40e_aq_rem_cloud_filters
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Add new admin queue definitions and extended fields for cloud filter
support. Define a big buffer for the extended general fields in the
Add/Remove Cloud filters command.
Also rename i40e_aqc_add_remove_cloud_filters_element_data to
i40e_aqc_cloud_filters_element_data.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Add definitions for L4 filters and switch modes based on cloud filter modes
and extend the set switch config command to include the additional cloud
filter mode.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
This patch overwrites the number of ports for X722 devices with support for
an OCP PHY mezzanine. The old method of checking whether a port is disabled
in the PRTGEN_CNF register cannot be used in this case: when the OCP was
removed, ports were seen as disabled, which resulted in a wrong calculation
of the partition id and caused WoL to be disabled on certain ports.
Fixes: 3c89193a36fd ("i40e/base: support WOL config for X722")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
We should not issue an Admin Queue command before the Admin Queue is
initialized, but this happened in i40e_hw_init and
i40e_filter_input_set_init. The patch fixes the issue by proper reordering.
Fixes: b6a0ec418274 ("i40e: use AQ for Rx control register read/write")
Cc: stable@dpdk.org
Reported-by: Anand Rawat <anand.rawat@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
All information about a device to probe can be grouped
in a common string, which is what we usually call devargs.
An application should not have to parse this string before
calling the EAL probe function.
And the syntax could evolve to be more complex and support
matching multiple devices in one string.
That's why the bus name and device name should be removed from
rte_eal_hotplug_add().
Instead of changing this function, a simpler one is added
and used in the old one, which may be deprecated later.
When removing a device, we already know its rte_device handle, which can be
directly passed as a parameter to rte_eal_hotplug_remove().
If the rte_device is not known, it can be retrieved via the devargs, by
iterating over the device list (future RTE_DEV_FOREACH()).
Similarly to the probing case, a new function is added
and used in the old one, which may be deprecated later.
The new function is used in failsafe, because the replacement is easy.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
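A usage sketch under the assumption that the new, simpler calls are
rte_dev_probe() and rte_dev_remove(); the PCI address is purely
illustrative.

    #include <rte_dev.h>

    /* Probe whatever the devargs string describes, no separate bus name. */
    static int
    attach_device(void)
    {
        return rte_dev_probe("0000:03:00.0");
    }

    /* Remove a device whose rte_device handle is already known. */
    static int
    detach_device(struct rte_device *dev)
    {
        return rte_dev_remove(dev);
    }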
When a device is added with a devargs (hotplug or whitelist), the bus
pointer can be retrieved via its devargs. But there is no such devargs.bus
in the case of a standard scan.
A pointer to the rte_bus handle is added to rte_device. When a device is
allocated (during a scan), the pointer to its bus is assigned.
This makes it possible to remove an rte_device using the function pointer
from its bus.
The function rte_bus_find_by_device() becomes useless,
and may be removed later.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
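A brief sketch of what the back-pointer enables, assuming the field and
callback names rte_device.bus and rte_bus.unplug from the bus API: a device
can now be removed through its own bus handle, with no prior lookup via
rte_bus_find_by_device().

    #include <errno.h>
    #include <rte_bus.h>
    #include <rte_dev.h>

    static int
    remove_via_bus(struct rte_device *dev)
    {
        /* The bus pointer is assigned when the device is allocated. */
        if (dev->bus == NULL || dev->bus->unplug == NULL)
            return -ENOTSUP;
        return dev->bus->unplug(dev);
    }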
The function rte_devargs_remove(), which is intended to be internal, can
take a devargs structure as an argument.
The matching is still done by string comparison of the bus name and device
name. It is simpler and may allow a different devargs matching in the
future.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
The enum names are *_params (plural form), and the items also use the
plural form: *_PARAMS_*.
It looks more natural to use the singular form *_PARAM_* for the items.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
We could match devices by their PCI id (vendor id, device id, etc.), but
for now only matching by PCI address is implemented.
The devargs parameter "id" is renamed to "addr" to reflect its real meaning.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
When adding or removing external memory from the memory map, there
may be actions that need to be taken on account of this memory (e.g.
DMA mapping). Add support for triggering callbacks when adding,
removing, attaching or detaching external memory.
Some memory event callback handlers will need additional logic to handle
external memory regions. For example, the virtio callback has to completely
ignore externally allocated memory, because there is no generic way to find
the file descriptors backing the memory address. All other callbacks have
also been adjusted to handle RTE_BAD_IOVA as the IOVA address, as this is
one of the expected use cases for external memory support.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
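A hedged sketch of the pattern described above, assuming the 18.11-era
callback signature (event type, address, length, user argument): a memory
event callback that skips regions whose IOVA is unknown (RTE_BAD_IOVA),
which is the expected situation for externally allocated memory.

    #include <rte_common.h>
    #include <rte_memory.h>

    static void
    mem_event_cb(enum rte_mem_event event, const void *addr,
                 size_t len __rte_unused, void *arg __rte_unused)
    {
        const struct rte_memseg *ms = rte_mem_virt2memseg(addr, NULL);

        /* Externally allocated memory may carry no usable IOVA. */
        if (ms == NULL || ms->iova == RTE_BAD_IOVA)
            return;

        if (event == RTE_MEM_EVENT_ALLOC) {
            /* e.g. set up DMA mappings for the new region */
        } else {
            /* RTE_MEM_EVENT_FREE: tear the mappings down */
        }
    }

    /* Registration (the callback name string is illustrative):
     * rte_mem_event_callback_register("example-cb", mem_event_cb, NULL);
     */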
When we allocate and use DPDK memory, we need to be able to
differentiate between DPDK hugepage segments and segments that
were made part of DPDK but are externally allocated. Add such
a property to memseg lists.
This breaks the ABI, so document the change in release notes.
This also breaks a few internal assumptions about memory
contiguousness, so adjust malloc code in a few places.
All current callers of the memseg walk functions were adjusted to ignore
external segments where it made sense.
Mempools are a special case, because we may be asked to allocate a mempool
on a specific socket, and we need to ignore all page sizes on other heaps
or other sockets. Previously, this assumption of knowing all page sizes was
not a problem, but it will be now, so we have to match the socket ID with
the page size when calculating the minimum page size for a mempool.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Previously, to calculate the length of the memory area covered by a memseg
list, we would have needed to multiply the page size by the length of the
fbarray backing that memseg list. This is not obvious and unnecessarily low
level, so store the length in the memseg list itself.
This breaks ABI, so bump the EAL ABI version and document the
change. Also, while we're breaking ABI, pack the members a little
better.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
The "__virtio16" typedef was introduced into the Linux kernel in v3.19.
To prevent a build error on older kernels, this patch replaces the
"__virtio16" usage with "uint16_t".
Fixes: d7fe5a2861e7 ("net/ifc: support live migration")
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds support for using the select call with the qman portal fd
for timeout-based dequeue requests in eventdev.
If an event is available, the qman portal fd will be set and the function
will be woken up. If no event is available, it will wait only until the
given timeout expires.
In the interrupt case, the timeout ticks are used as microseconds.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
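A generic sketch of the mechanism, with the portal fd handling simplified
and the names chosen for illustration: block in select() on the portal fd
until an event arrives or the timeout expires.

    #include <sys/select.h>
    #include <sys/time.h>

    static int
    wait_for_event(int portal_fd, unsigned int timeout_us)
    {
        fd_set readset;
        struct timeval tv = {
            .tv_sec = timeout_us / 1000000,
            .tv_usec = timeout_us % 1000000,
        };

        FD_ZERO(&readset);
        FD_SET(portal_fd, &readset);

        /* > 0 if the fd is readable (event available), 0 on timeout. */
        return select(portal_fd + 1, &readset, NULL, NULL, &tv);
    }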
Make the -Wno-format-nonliteral flag conditional and set it only in clang
and gcc builds, since this flag is neither supported nor needed when
building dsw with icc.
Fixes: 46a186b1f0c5 ("event/dsw: add device registration and build system")
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Currently, DPDK will skip mapping some areas (or even an entire BAR) if the
MSI-X table happens to be in them but is smaller than the page size.
Kernels 4.16+ allow mapping MSI-X BARs [1] and report this as a capability
flag. Capability flags themselves are also only supported since kernel
4.6 [2].
This commit introduces support for checking VFIO capabilities and uses it
to check whether we are allowed to map BARs with MSI-X tables in them,
along with backwards compatibility for older kernels, including a
workaround for a variable rename in the VFIO region info structure [3].
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/
linux.git/commit/?id=a32295c612c57990d17fb0f41e7134394b2f35f6
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/
linux.git/commit/?id=c84982adb23bcf3b99b79ca33527cd2625fbe279
[3] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/
linux.git/commit/?id=ff63eb638d63b95e489f976428f1df01391e15e4
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
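A condensed, hedged sketch of the capability check (error handling and the
fallback for old kernel headers are omitted; VFIO_REGION_INFO_FLAG_CAPS
requires kernel headers >= 4.6):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    static int
    region_reports_caps(int dev_fd, unsigned int region_index)
    {
        struct vfio_region_info info;

        memset(&info, 0, sizeof(info));
        info.argsz = sizeof(info);
        info.index = region_index;

        if (ioctl(dev_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0)
            return 0;

        /* Kernels without capability support never set this flag. */
        if (!(info.flags & VFIO_REGION_INFO_FLAG_CAPS))
            return 0;

        /*
         * A full implementation re-issues the ioctl with an info.argsz-sized
         * buffer and walks the capability chain from info.cap_offset,
         * looking for the MSI-X mappable capability.
         */
        return 1;
    }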
The PCI bus can now parse a matching field "id" as follows:
"bus=pci,id=0000:00:00.0"
or
"bus=pci,id=00:00.0"
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Add Tx adapter support and move a few routines around to avoid code
duplication.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
This commit adds a unit test that checks the behaviour of the
unlinks_in_progress() function, ensuring that the returned value is the
number of unlinks requested until the scheduler runs and "acks" the
requests, after which the count should be zero again.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds a counter to each port, which counts the number of unlinks
that have been performed. When the scheduler thread starts its scheduling
routine, it "acks" all unlinks that have been requested, and the application
is guaranteed that no more events will be scheduled to the port from the
unlinked queue.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
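A sketch of the intended usage pattern (the dev/port/queue ids are
illustrative): request an unlink, then wait until the scheduler has acked it
before tearing the port down.

    #include <rte_eventdev.h>
    #include <rte_pause.h>

    static void
    unlink_and_wait(uint8_t dev_id, uint8_t port_id, uint8_t queue_id)
    {
        rte_event_port_unlink(dev_id, port_id, &queue_id, 1);

        /* Returns the number of unlinks not yet acked by the scheduler. */
        while (rte_event_port_unlinks_in_progress(dev_id, port_id) > 0)
            rte_pause();
    }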
This commit fixes the cq index checks when unlinking ports/queues while the
scheduler core is running.
Previously, the == comparison could be "skipped" in particular corner cases.
With the check changed to >=, this is resolved, as the cq idx gets reset to
zero.
Bugzilla ID: 60
Fixes: 617995dfc5b2 ("event/sw: add scheduling logic")
Cc: stable@dpdk.org
Suggested-by: Matias Elo <matias.elo@nokia.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch restructures the code to have the QBMAN portal affiliated at run
time on a per-lcore basis.
The device cleanup is also improved.
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch enhances the dequeue timeout handling:
1. Configure the dequeue timeout value as per the given method: per dequeue,
global, or default.
2. The timeout values were being mixed between ns and ms; now the values are
stored in ns and the scale is in ms.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
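A generic eventdev-level sketch of how a per-dequeue timeout is used (the
values are illustrative); the patch above concerns how the PMD stores and
scales these values internally.

    #include <rte_eventdev.h>

    static uint16_t
    dequeue_with_timeout(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)
    {
        uint64_t timeout_ticks = 0;

        /* Convert 100 us, expressed in ns, to device-specific ticks. */
        rte_event_dequeue_timeout_ticks(dev_id, 100 * 1000, &timeout_ticks);

        return rte_event_dequeue_burst(dev_id, port_id, ev, 1, timeout_ticks);
    }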