In the octeontx2 poll mode driver, the RQ is connected to a CQ, which is
responsible for asserting backpressure on the CGX channel.
When the event eth Rx adapter is configured, the RQ is instead connected to an
event queue; to enable backpressure we need to configure the AURA assigned to
a given RQ to backpressure the CGX channel.
The event device expects a unique AURA to be configured per ethernet device.
If multiple RQs from different ethernet devices use the same AURA,
backpressure is disabled. The application can override this
using devargs:
-a 0002:0e:00.0,force_rx_bp=1
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
When the device is re-configured, the memory allocated for the work slot is
freed and new memory is allocated. Due to this we may lose important
configurations/mappings done with the initial work slot memory.
For example, whenever rte_event_eth_tx_adapter_queue_add is called,
important metadata, i.e. the txq handle, is stored in the work slot structure.
If the device gets reconfigured after this Tx adapter add, the txq to work
slot mapping is lost, resulting in a segfault during packet
processing, as the txq handle can no longer be retrieved from the work slot.
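A minimal sketch (with placeholder identifiers, not part of this patch) of the
call order that previously dropped the txq mapping:

#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

static void
reconfigure_after_queue_add(uint8_t dev_id, uint8_t adapter_id,
			    uint16_t eth_dev_id,
			    struct rte_event_dev_config *dev_conf)
{
	/* Adapter queue add stores the txq handle in the work slot memory. */
	rte_event_eth_tx_adapter_queue_add(adapter_id, eth_dev_id, -1);

	/*
	 * Re-configuring the device used to free and re-allocate the work
	 * slot memory, dropping the txq mapping saved above; with this fix
	 * the mapping is preserved across the call.
	 */
	rte_event_dev_configure(dev_id, dev_conf);
}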
Fixes: 67b5f46864 ("event/octeontx2: add port config functions")
Cc: stable@dpdk.org
Signed-off-by: Harman Kalra <hkalra@marvell.com>
When the XAQ pool is re-configured and the same memzone is reused for
fc_mem, freeing the old mempool incorrectly updates fc_mem with the
free count.
Fixes: ffa4ec0b60 ("event/octeontx2: allow adapters to resize inflight buffers")
Cc: stable@dpdk.org
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
The rte_eventdev_pmd*.h files are for drivers only and should be private
to DPDK; they should not be installed for application use.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Enhance Tx path cache locality and remove the current tag type and group
stores from the datapath to conserve store buffers.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Unlinking queues from a port should be done during port release. Doing it
during device re-configuration could result in a segfault, as the ports array
is re-allocated based on the new number of ports.
Fixes: f7ac8b66b2 ("event/octeontx2: support linking queues to ports")
Cc: stable@dpdk.org
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Since the selftest now depends on dynamic mbuf fields, it is no longer
feasible to run it at device probe.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
This commit implements the eventdev ABI changes required by
the DLB/DLB2 PMDs. Several data structures and constants are modified
or added in this patch, thereby requiring modifications to the
dependent apps and examples.
The DLB/DLB2 hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) DLB does not have the ability to carry the flow_id as part
of the event (QE) payload. Note that the DLB2 hardware is capable of
carrying the flow_id.
The following is a detailed description of the changes that have been made.
1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.
struct rte_event_dev_info {
uint32_t max_event_port_links;
/**< Maximum number of queues that can be linked to a single event
* port by this device.
*/
uint8_t max_single_link_event_port_queue_pairs;
/**< Maximum number of event ports and queues that are optimized for
* (and only capable of) single-link configurations supported by this
* device. These ports and queues are not accounted for in
* max_event_ports or max_event_queues.
*/
}
2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.
/** Event device configuration structure */
struct rte_event_dev_config {
uint8_t nb_single_link_event_port_queues;
/**< Number of event ports and queues that will be singly-linked to
* each other. These are a subset of the overall event ports and
* queues; this value cannot exceed *nb_event_ports* or
* *nb_event_queues*. If the device has ports and queues that are
* optimized for single-link usage, this field is a hint for how many
* to allocate; otherwise, regular event ports and queues can be used.
*/
}
3) Replace the dedicated implicit_release_disabled field with a bit field
of explicit port capabilities. The implicit_release_disable functionality
is assigned to one bit, and a port-is-single-link-only attribute is
assigned to another, with the remaining bits available for future assignment.
/* Event port configuration bitmap flags */
#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL (1ULL << 0)
/**< Configure the port not to release outstanding events in
* rte_event_dev_dequeue_burst(). If set, all events received through
* the port must be explicitly released with RTE_EVENT_OP_RELEASE or
* RTE_EVENT_OP_FORWARD. Must be unset if the device is not
* RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
*/
#define RTE_EVENT_PORT_CFG_SINGLE_LINK (1ULL << 1)
/**< This event port links only to a single event queue.
*
* @see rte_event_port_setup(), rte_event_port_link()
*/
#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
/**
* The implicit release disable attribute of the port
*/
struct rte_event_port_conf {
uint32_t event_port_cfg;
/**< Port cfg flags(EVENT_PORT_CFG_) */
}
This patch also removes the deprecation notice and announces
the new eventdev ABI changes in the release notes.
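A minimal usage sketch (illustrative only, not part of the patch) of the new
port configuration flag when setting up a single-link port:

#include <rte_eventdev.h>

static int
setup_single_link_port(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event_port_conf conf;
	int ret;

	ret = rte_event_port_default_conf_get(dev_id, port_id, &conf);
	if (ret < 0)
		return ret;

	/* Mark the port as single-link; implicit release stays enabled. */
	conf.event_port_cfg = RTE_EVENT_PORT_CFG_SINGLE_LINK;

	return rte_event_port_setup(dev_id, port_id, &conf);
}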
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add a SWTAG flush operation at the end of the transmit sequence to
immediately release the tag held by the core.
Reuse the tag address to check SWTAG completion status.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add the crypto adapter callback functions and associated data structures.
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
When the event device is re-configured, maintain the event queue to event port
links and the event port status instead of resetting them.
Fixes: cd24e70258 ("event/octeontx2: add device configure function")
Cc: stable@dpdk.org
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock NPA aura and pool contexts in the NDC cache.
The device args take a hexadecimal bitmask where each bit represents the
corresponding aura/pool id.
Example:
-w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add a new flag for SECURITY in the compiler-optimized Tx fastpath
framework. With this, the compiler auto-generates functions which
have security enabled.
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Archana Muniganti <marchana@marvell.com>
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
This patch introduces a `flag` in the Eth Tx adapter enqueue API.
Some drivers may support burst functionality only for packets
having the same destination device and queue.
The flag `RTE_EVENT_ETH_TX_ADAPTER_ENQUEUE_SAME_DEST` can be used
to indicate this to the underlying driver, so that drivers can utilize
burst functionality appropriately.
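A minimal sketch (placeholder parameters, not part of the patch) of passing
the new flag when every event in the burst targets the same ethdev and Tx
queue:

#include <rte_event_eth_tx_adapter.h>

static uint16_t
enqueue_same_dest(uint8_t dev_id, uint8_t port_id,
		  struct rte_event ev[], uint16_t nb_events)
{
	return rte_event_eth_tx_adapter_enqueue(dev_id, port_id, ev,
			nb_events,
			RTE_EVENT_ETH_TX_ADAPTER_ENQUEUE_SAME_DEST);
}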
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Add support for the below TCP segmentation offloads on
96XX A1 onwards and 95XX B0 onwards.
- TCPv4, TCPv6
- VXLAN[v4 | v6][v4 | v6]
- GENEVE[v4 | v6][v4 | v6]
This patch also modifies a fastpath function to be force-inlined
for performance reasons in multi-seg mode.
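A minimal sketch (illustrative only) of requesting the newly supported TSO
offloads at ethdev configure time; the offload macro names follow the DPDK
release this patch targets and may differ in later releases:

#include <rte_ethdev.h>

static int
configure_with_tso(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = { 0 };

	conf.txmode.offloads = DEV_TX_OFFLOAD_TCP_TSO |
			       DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
			       DEV_TX_OFFLOAD_GENEVE_TNL_TSO;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}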
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
In order to align the name with other PCI driver flags such as
RTE_PCI_DRV_NEED_MAPPING and to reflect its purpose, rename the
RTE_PCI_DRV_IOVA_AS_VA flag to RTE_PCI_DRV_NEED_IOVA_AS_VA.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Add PTP support for SSO based on rx_offloads of the queue connected to
it.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add internal SSO functions to allow event adapters to resize SSO buffers
that are used to hold in-flight events in DRAM.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add a selftest to verify the sanity of the SSO.
It can be run by passing devargs to the SSO PF as follows:
Example:
--dev "0002:0e:00.0,selftest=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
SSO GGRPs, i.e. event queues, use DRAM and SRAM buffers to hold in-flight
events. By default the buffers are assigned to the SSO GGRPs to
satisfy minimum HW requirements. SSO is free to assign the remaining
buffers to GGRPs based on a preconfigured threshold.
We can control the QoS of an SSO GGRP by modifying the above-mentioned
thresholds. GGRPs that have higher importance can be assigned higher
thresholds than the rest.
Example:
--dev "0002:0e:00.0,qos=[1-50-50-50]" // [Qx-XAQ-TAQ-IAQ]
Qx -> Event queue, aka SSO GGRP.
XAQ -> DRAM in-flights.
TAQ & IAQ -> SRAM in-flights.
The values are expressed as percentages; 0 represents the default.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
OcteonTx2 SSO is set to use dual workslot mode by default.
Add a devargs option to force legacy mode, i.e. single workslot mode.
Example:
--dev "0002:0e:00.0,single_ws=1"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
The OcteonTx2 AP core SSO cache contains two entries; each entry caches the
state of a single GWS, aka event port.
The AP core requests events from SSO using the following sequence:
1. Write to SSOW_LF_GWS_OP_GET_WORK.
2. Wait for SSO to complete scheduling by polling on SSOW_LF_GWS_TAG[63].
3. SSO notifies the core by clearing SSOW_LF_GWS_TAG[63]; if the work is
valid, SSOW_LF_GWS_WQP is non-zero.
The above sequence uses only one in-core cache entry.
In dual workslot mode we try to use both in-core cache entries by
triggering GET_WORK on a second workslot as soon as the above sequence
completes, as sketched below. This effectively hides the scheduling latency
of SSO if there are enough events with unique flow tags in flight.
This mode reserves two SSO GWS LFs for each event port, effectively
doubling single-core performance.
Dual workslot mode is the default mode of operation on octeontx2.
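A rough, hypothetical sketch of the ping-pong between the two workslots; the
register pointers and the value written to GET_WORK are placeholders, not the
driver's actual code:

#include <stdint.h>

struct ws_regs {
	volatile uint64_t *getwrk; /* SSOW_LF_GWS_OP_GET_WORK */
	volatile uint64_t *tag;    /* SSOW_LF_GWS_TAG */
	volatile uint64_t *wqp;    /* SSOW_LF_GWS_WQP */
};

static uint64_t
dual_get_work(struct ws_regs *cur, struct ws_regs *pair)
{
	uint64_t wqp;

	/*
	 * Wait for the GET_WORK previously issued on 'cur' to complete:
	 * SSO clears SSOW_LF_GWS_TAG[63] once scheduling is done.
	 */
	while (*cur->tag & (1ULL << 63))
		;
	wqp = *cur->wqp; /* non-zero means valid work */

	/*
	 * Immediately request the next event on the paired workslot so the
	 * SSO scheduling latency overlaps with processing of this event.
	 * The value written here is a placeholder for the real GET_WORK
	 * command encoding.
	 */
	*pair->getwrk = 1;

	return wqp;
}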
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Add support for retrieving statistics from SSO GWS and GGRP.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Register and implement SSO GWS and GGRP IRQ handlers for error
interrupts.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Links between queues and ports are controlled by setting/clearing GGRP
membership in SSOW_LF_GWS_GRPMSK_CHG.
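A hypothetical sketch of the link/unlink write; the field positions in the
value written to SSOW_LF_GWS_GRPMSK_CHG are placeholders, not the real
register layout:

#include <stdint.h>

/* Placeholder field positions, not the actual hardware encoding. */
#define GRPMSK_CHG_GGRP_SHIFT 0
#define GRPMSK_CHG_SET_SHIFT  14

static void
gws_link_modify(volatile uint64_t *grpmsk_chg, uint8_t ggrp, int enable)
{
	/* Set (link) or clear (unlink) this GWS's membership in the GGRP. */
	uint64_t val = ((uint64_t)ggrp << GRPMSK_CHG_GGRP_SHIFT) |
		       ((uint64_t)!!enable << GRPMSK_CHG_SET_SHIFT);

	*grpmsk_chg = val;
}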
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
The number of events for an *open system* event device is specified
as -1 as per the eventdev specification.
Since OcteonTx2 SSO in-flight events are limited only by DRAM size, the
xae_cnt devargs parameter is introduced to provide an upper limit for
in-flight events.
Example:
--dev "0002:0e:00.0,xae_cnt=8192"
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add the device configure function that attaches the requested number of
SSO GWS (event ports) and GGRP (event queues) LFs to the PF.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add the info_get function to return details on the queue, flow and
prioritization capabilities, etc. of this device.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
The SSO object needs to be initialized to communicate with the kernel AF
driver through the mbox using the common APIs.
Also, initialize the internal eventdev structure to defaults.
Attach the NPA LF to the PF if needed.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add the make and meson based build infrastructure along with the
eventdev (SSO) device probe.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>