Replace the arguments array with a single argument.
All objects in the args array hold the same values, so there is no need
for an array: one struct is enough.
The args object is much smaller, and its allocation can be replaced
with a global variable.
As a consequence of using a single argument, there is no need for a
loop that launches the test on every core one by one. Replace it with
rte_eal_mp_remote_launch.
The allocation of obj_table isn't needed either, because MAX_BULK is
small; it can be replaced with a static array.
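A minimal sketch of the resulting shape, assuming the current EAL launch
API names (struct test_args, test_fn and test_all_cores are illustrative,
not the actual test code):

#include <rte_launch.h>

struct test_args {                      /* illustrative fields */
        void *handle;
        uint32_t iterations;
};

static struct test_args global_args;    /* one struct, not a per-core array */

static int
test_fn(void *arg)
{
        struct test_args *args = arg;

        /* ... per-core test body; args is shared and read-only ... */
        (void)args;
        return 0;
}

static int
test_all_cores(void)
{
        /* replaces the per-core rte_eal_remote_launch() loop */
        rte_eal_mp_remote_launch(test_fn, &global_args, SKIP_MAIN);
        rte_eal_mp_wait_lcore();
        return 0;
}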
Signed-off-by: Steven Lariau <steven.lariau@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Gage Eads <gage.eads@intel.com>
When generating the documentation, a new warning can be seen:
.../dpdk/lib/librte_ethdev/rte_ethdev.h:2441:
warning: argument 'link_speed' of command @param is not found in the
argument list of rte_eth_link_speed_to_str(uint32_t speed_link)
.../dpdk/lib/librte_ethdev/rte_ethdev.h:2455: warning: The following
parameters of rte_eth_link_speed_to_str(uint32_t speed_link) are not
documented: parameter 'speed_link'
Align the function prototype to its doxygen description.
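After the fix, the prototype uses the documented parameter name; an
abridged sketch (the doxygen comment text here is not verbatim):

/**
 * @param link_speed
 *   Link speed value to be converted.
 * @return
 *   A constant string describing the link speed.
 */
const char *
rte_eth_link_speed_to_str(uint32_t link_speed);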
Fixes: fbf931c9c3 ("ethdev: format link status text")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
When the --werror meson build option is set, we can set the WARN_AS_ERRORS
option in the doxygen config file to get the same behaviour for API doc
building as for building the rest of DPDK. This can help catch
documentation errors earlier in the development process.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
To see only errors and warnings from the doc builds, we can send the
standard output text to a logfile and have only the stderr messages
printed. This is similar to what is done for the API documentation.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The meson documentation states that projects should not rely on
custom_target build commands being run from any particular directory.
Therefore, rather than writing the standard output from doxygen to the
current directory - which could be anywhere in future - put it into the
api directory, so that it is in a known location.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The API docs were output to "<build>/doc/api/api" folder, which was
ugly-looking with the repeated "api", and inconsistent with the sphinx
guides which were written to "<build>/doc/guides/html". Changing the
doxygen output folder to "html" fixes both these issues.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The standard output of doxygen is very verbose, and since ninja mixes
stdout and stderr together it makes it difficult to see any warnings from
the doxygen run. Therefore, we can just log the standard output to file,
and only output the stderr to make warnings clear.
Suggested-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Starting from Linux 5.9, the 'get_user_pages_remote()' API no longer
takes a 'struct task_struct' parameter:
commit 64019a2e467a ("mm/gup: remove task_struct pointer for all gup code")
Reflect the change in KNI with a kernel version check.
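A sketch of the guard this implies at the call sites (tsk, vaddr and
page are illustrative variables, not the exact KNI module code):

/* requires <linux/version.h> for the version macros */
#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 9, 0)
        ret = get_user_pages_remote(tsk->mm, vaddr, 1,
                        FOLL_TOUCH, &page, NULL, NULL);
#else
        ret = get_user_pages_remote(tsk, tsk->mm, vaddr, 1,
                        FOLL_TOUCH, &page, NULL, NULL);
#endif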
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch fixes a possible time-of-check to time-of-use (TOCTOU)
attack by copying the request data and descriptor index to local
variables prior to processing.
The original sequential read of descriptors may also allow a TOCTOU
attack. This patch fixes that by loading all descriptors of a request
into a local buffer before processing.
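A minimal sketch of the mitigation (function and parameter names are
illustrative, not the actual vhost code; struct vring_desc and
VRING_DESC_F_NEXT come from <linux/virtio_ring.h>):

/* snapshot the guest-shared descriptor chain before any validation
 * or use, so the guest cannot flip fields between check and use */
static uint16_t
snapshot_descs(const struct vring_desc *shared, uint16_t head,
                struct vring_desc *local, uint16_t max)
{
        uint16_t i, idx = head;

        for (i = 0; i < max; i++) {
                local[i] = shared[idx];         /* copy once */
                if (!(local[i].flags & VRING_DESC_F_NEXT))
                        return i + 1;
                idx = local[i].next;            /* follow the copied link */
        }
        return 0;       /* chain too long: treated as an error */
}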
CVE-2020-14375
Fixes: 3bb595ecd6 ("vhost/crypto: add request handler")
Cc: stable@dpdk.org
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
This patch fixes an incorrect data length check in vhost crypto.
Instead of blindly accepting the descriptor length as the data length,
the change first compares the data length provided by the request with
the descriptor length. The security issue CVE-2020-14374 is not fixed
by this patch alone; part of the fix is done through:
"vhost/crypto: fix missed request check for copy mode".
CVE-2020-14374
Fixes: 3c79609fda ("vhost/crypto: handle virtually non-contiguous buffers")
Cc: stable@dpdk.org
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
This patch fixes the vhost crypto library's incorrect source and
destination buffer calculation in copy mode.
Fixes: cd1e8f03ab ("vhost/crypto: fix packet copy in chaining mode")
Cc: stable@dpdk.org
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
This patch fixes the incorrect descriptor deduction for vhost crypto.
CVE-2020-14378
Fixes: 16d2e718b8 ("vhost/crypto: fix possible out of bound access")
Cc: stable@dpdk.org
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
This patch fixes the missing IV space allocation in the crypto
operation mempool.
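A hedged sketch of the kind of change involved (NB_OPS, CACHE_SIZE,
PRIV_SIZE and IV_LEN are illustrative constants, not the example's
actual values):

/* the per-op private area must also hold the IV */
struct rte_mempool *cop_pool = rte_crypto_op_pool_create(
                "crypto_op_pool", RTE_CRYPTO_OP_TYPE_SYMMETRIC,
                NB_OPS, CACHE_SIZE,
                PRIV_SIZE + IV_LEN,     /* was PRIV_SIZE only */
                rte_socket_id());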
Fixes: 709521f4c2 ("examples/vhost_crypto: support multi-core")
Cc: stable@dpdk.org
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Commit d0fcc38f5f ("vhost: improve device readiness notifications")
makes the assumption that every Virtio device is considered
ready for processing as soon as the first queue pair is configured
and enabled.
While this is true for Virtio-net, it isn't for Virtio-scsi
and Virtio-blk.
This patch fixes this by only making this assumption for
the builtin Virtio-net backend, and restores the previous
behaviour for other backends.
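A fragment sketching the distinction (VIRTIO_DEV_BUILTIN_VIRTIO_NET is
the existing builtin-net flag; the surrounding code is illustrative):

/* only the builtin net backend is ready once the first queue pair
 * (2 vrings) is configured and enabled; other backends must wait
 * for all of their rings */
uint32_t nr_vring_needed = dev->nr_vring;

if (dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET)
        nr_vring_needed = 2;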
Fixes: d0fcc38f5f ("vhost: improve device readiness notifications")
Reported-by: Changpeng Liu <changpeng.liu@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
When the BAR contains MSI-X table, pci_vfio_mmap_bar() tries to skip
the table and map the rest. "map around it" is the phrase used in the
source. The function splits the BAR into two regions: the region
before the table (first part or memreg[0]) and the region after the
table (second part or memreg[1]).
For hardware that has MSI-X vector table offset 0, the first part does
not exist (memreg[0].size == 0).
Capabilities: [60] MSI-X: Enable- Count=48 Masked-
Vector table: BAR=2 offset=00000000
PBA: BAR=2 offset=00001000
The mapping part of the function maps the first part, if it
exists. Then, it maps the second part, if it exists and "if mapping the
first part succeeded".
The recent change that replaces MAP_FAILED with NULL breaks the "if
mapping the first part succeeded" condition (1) in the snippet below.
void *map_addr = NULL;
if (memreg[0].size) {
        /* actual map of first part */
        map_addr = pci_map_resource(...);
}
/* if there's a second part, try to map it */
if (map_addr != NULL // -- (1)
                && memreg[1].offset && memreg[1].size) {
        [...]
}
if (map_addr == NULL) {
        RTE_LOG(ERR, EAL, "Failed to map pci BAR%d\n",
                        bar_index);
        return -1;
}
When the first part does not exist, (1) sees map_addr is still NULL,
and the function fails. This behavior is a regression and breaks
probing of hardware with vector table offset 0.
Previously, (1) was "map_addr != MAP_FAILED", which meant
pci_map_resource() was actually attempted and failed. So, expand (1)
to check if the first part exists as well, to match the semantics of
MAP_FAILED.
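One way to express the expanded condition, as a sketch (not
necessarily the exact committed form):

/* the second part may be mapped when the first part either does not
 * exist or was mapped successfully */
if ((memreg[0].size == 0 || map_addr != NULL)
                && memreg[1].offset && memreg[1].size) {
        /* map the second part */
}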
Bugzilla ID: 539
Fixes: e200535c1c ("mem: drop mapping API workaround")
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Since rte_atomicXX APIs are not allowed to be used, use C11 atomic
builtins for link status update.
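For example, a link-status word can be published and read with the
builtins along these lines (a sketch, not the exact ethdev code;
dev_link_word is an illustrative variable):

/* writer: publish the new link status */
__atomic_store_n(&dev_link_word, new_link_word, __ATOMIC_RELEASE);

/* reader: take a consistent snapshot */
uint64_t snapshot = __atomic_load_n(&dev_link_word, __ATOMIC_ACQUIRE);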
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Since rte_atomicXX APIs are not allowed to be used, use C11 atomic
builtins for power in use state update.
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: David Hunt <david.hunt@intel.com>
Since rte_atomicXX APIs are not allowed to be used, use C11 atomic builtins
for device processing counter.
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
Since rte_atomicXX APIs are not allowed to be used, use C11 builtins to
check if EAL is already initialized.
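The run-once guard then takes roughly this shape (variable names are
illustrative):

static uint32_t run_once;
uint32_t has_run = 0;

/* atomically flip run_once from 0 to 1; fail if it was already set */
if (!__atomic_compare_exchange_n(&run_once, &has_run, 1, 0,
                __ATOMIC_RELAXED, __ATOMIC_RELAXED))
        return -1;      /* EAL already initialized */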
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Replace use of the RTE_MACHINE_CPUFLAG macros with regular compiler
macros, which are more complete than those provided by DPDK, and as
such allow new instruction sets to be leveraged without having to do
extra work to set them up in DPDK.
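A representative substitution (the patch applies the same pattern
throughout the tree):

#ifdef __AVX2__         /* was: RTE_MACHINE_CPUFLAG_AVX2 */
        /* AVX2 code path */
#endif

#ifdef __ARM_NEON       /* was: RTE_MACHINE_CPUFLAG_NEON */
        /* NEON code path */
#endif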
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
This patch updates the Mellanox maintainers' email addresses from
the Mellanox domain to the Nvidia domain.
Cc: stable@dpdk.org
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Remove the documentation maintainers. The documentation is
currently unmaintained.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Signed-off-by: Marko Kovacevic <marko.kovacevic@intel.com>
The 20.08 release deprecated the rte_cio_*mb APIs because these APIs
provide the same functionality as the rte_io_*mb APIs on all platforms.
Remove them and use rte_io_*mb instead.
Signed-off-by: Phil Yang <phil.yang@arm.com>
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Add more parameters to the macro TEST_RING_VERIFY and widen its scope
of application. Then replace all ring API checks with TEST_RING_VERIFY
to facilitate debugging.
Furthermore, correct a spelling mistake in the name of the macro
TEST_RING_FULL_EMTPY_ITER.
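The reworked macro takes roughly this shape (a sketch of the pattern,
not verbatim):

#define TEST_RING_VERIFY(exp, r, errst) do {                    \
        if (!(exp)) {                                           \
                printf("error at %s:%d\tcondition " #exp        \
                        " failed\n", __func__, __LINE__);       \
                rte_ring_dump(stdout, (r));                     \
                errst;                                          \
        }                                                       \
} while (0)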
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Clean up the code by moving repeated code into the 'test_ring_mem_cmp'
function, which validates data and prints information about the
enqueued/dequeued elements if validation fails.
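A sketch of such a helper (rte_hexdump is the existing DPDK dump
utility; the rest is illustrative):

static int
test_ring_mem_cmp(void *src, void *dst, unsigned int size)
{
        int ret = memcmp(src, dst, size);

        if (ret != 0) {
                rte_hexdump(stdout, "src", src, size);
                rte_hexdump(stdout, "dst", dst, size);
                printf("data after dequeue is not the same\n");
        }
        return ret;
}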
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Validate the return values of the single-element enqueue/dequeue
operations in the test.
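For instance (r, obj and the fail label are illustrative; a
single-element enqueue returns 0 on success):

TEST_RING_VERIFY(rte_ring_enqueue(r, obj) == 0, r, goto fail);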
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add checks in test_ring_basic_ex and test_ring_with_exact_size for the
single-element enqueue and dequeue operations to validate the dequeued
objects.
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When using the memcmp function to check data, the third parameter
should be the size in bytes of all elements, rather than the number of
elements.
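In other words (esize being the element size in bytes):

/* wrong: passes the element count */
ret = memcmp(src, dst, nr_elems);
/* right: passes the total size in bytes */
ret = memcmp(src, dst, nr_elems * esize);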
Fixes: a9fe152363 ("test/ring: add custom element size functional tests")
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The ring capacity is (RING_SIZE - 1), thus only (RING_SIZE - 1)
elements can be enqueued into the ring.
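This can be checked directly against the API (an illustrative
snippet):

struct rte_ring *r = rte_ring_create("test", RING_SIZE,
                rte_socket_id(), 0);

/* for a default (non-RING_F_EXACT_SZ) ring this prints RING_SIZE - 1 */
printf("capacity: %u\n", rte_ring_get_capacity(r));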
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When enqueuing one element to the ring in the performance test, a
pointer to the object should be passed to the rte_ring_[sp|mp]_enqueue
APIs, not a pointer to a table of void * pointers.
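Illustrated with a hypothetical burst table:

void *burst[MAX_BURST];

/* wrong: enqueues the address of the table itself */
ret = rte_ring_sp_enqueue(r, burst);
/* right: enqueues the object stored in the table */
ret = rte_ring_sp_enqueue(r, burst[0]);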
Fixes: a9fe152363 ("test/ring: add custom element size functional tests")
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Flow Manager (flowman) provides a DECAP_STRIP operation which
decapsulates the VXLAN header and then removes the VLAN header from the
inner packet. Use this operation to support vxlan_decap followed by
of_pop_vlan.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
When the VXLAN source port in the template is zero, the adapter is
expected to generate a value based on the inner packet flow when it
performs encapsulation. Flow Manager in the VIC adapter currently
lacks that ability. So, generate a random port when creating a flow if
the port is zero, to avoid transmitting packets with source port 0.
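A hedged sketch of the substitution (rte_rand_max is the EAL RNG
helper; the port range and field names are illustrative):

/* pick a non-zero, non-privileged source port */
if (udp->src_port == 0)
        udp->src_port = rte_cpu_to_be_16(
                        (uint16_t)rte_rand_max(UINT16_MAX - 1024) + 1024);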
Fixes: ea7768b5bb ("net/enic: add flow implementation based on Flow Manager API")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
When a VLAN pattern is present, the flow handler always copies its
inner_type to the match buffer regardless of its value (i.e. HW
matches inner_type against packet's inner ethertype). When inner_type
spec and mask are both 0, adding it to the match buffer is usually
harmless but breaks the following pattern used in some applications
like OVS-DPDK.
flow create 0 ingress ... pattern eth ... type is 0x0800 /
vlan tci spec 0x2 tci mask 0xefff / ipv4 / end actions count /
of_pop_vlan / ...
The VLAN pattern's inner_type is 0. And the outer eth pattern's type
actually specifies the inner ethertype. The outer ethertype (0x0800)
is first copied to the match buffer. Then, the driver copies
inner_type (0) to the match buffer, which overwrites the existing
0x0800 with 0 and breaks the app usage above.
Simply ignore inner_type when it is 0, which is the correct
behavior. As a byproduct, the driver can support usage like the
above.
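The gist of the change, as a sketch (field names are illustrative):

/* only copy inner_type to the match buffer when it can actually
 * match something; spec and mask both zero means "do not match" */
if (vlan_spec->inner_type != 0 || vlan_mask->inner_type != 0) {
        /* write the inner ethertype into the match entry */
}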
Fixes: ea7768b5bb ("net/enic: add flow implementation based on Flow Manager API")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Group 0 corresponds to TCAM which supports priorities. Accept non-zero
priorities for group 0 flows.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Use Flow Manager (flowman) to support egress PORT_ID action. It can
steer egress packets from PFs and VFs to any uplink port as long as
they are all on the same VIC adapter. It can also steer packets
between ports on the same VIC adapter (i.e. loopback).
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The 'next' field in struct enic is unused. The comment in enic_cq_rq()
is out-of-date. Remove them.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Use Flow Manager (flowman) to support flow API for
representors. Representor's flow handlers simply invoke PF handlers
and pass the representor's flowman structure. The PF flowman handlers
are aware of representors and perform appropriate devcmds to create
flows on the NIC.
Also use flowman to create internal flows for implicit VF-representor
path. With that, representor Tx/Rx is now functional.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
VF representor ports can create flows on VFs through the PF flowman
(Flow Manager) instance in the firmware. These flows match packets
egressing from VFs and apply flowman actions.
1. Make flow handler aware of VF representors
When a representor port invokes flow APIs, use the PF port's flowman
instance to perform flowman devcmd. If the port ID refers to a
representor, use VF handle instead of PF handle.
2. Serialize flow API calls
Multiple application threads may invoke flow APIs through PF and VF
representor ports simultaneously. This leads to races, as the ports all
share the same PF flowman instance. Use a lock to serialize API
calls. The lock is used only when representors exist.
3. Add functions to create flows for implicit representor paths
There is an implicit path between VF and its representor. The
functions below create flow rules to implement that path.
- enic_fm_add_rep2vf_flow()
- enic_fm_add_vf2rep_flow()
The flows created for representor paths are marked as internal. These
are not visible to the application, and the flush API does not destroy
them. They are automatically deleted when the representor port stops
(enic_fm_destroy).
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
A VF representor allocates queues from the PF's pool of queues and
uses them for its Tx and Rx. It supports 1 Tx queue and 1 Rx queue.
Implicit packet forwarding between representor queues and the VF does
not yet exist. It will be enabled in subsequent commits using the
flowman API.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Enable the minimal VF representor without Tx/Rx and flow API support.
1. Enable the standard devarg 'representor'
When the devarg is specified, create VF representor ports.
2. Initialize flowman early during PF probe
Representors require the flowman API from the firmware. Initialize it
before creating VF representors, so that probe can detect flowman
support and fail if it is not available.
3. Add enic_fm_allocate_switch_domain() to allocate switch domain ID
PFs and VFs on the same VIC adapter can forward packets to each other,
so the switch domain is the physical adapter.
4. Create a vnic_dev lock to serialize concurrent devcmd calls
PF and VF representor ports may invoke devcmd (e.g. dump stats)
simultaneously. As they all share a single PF devcmd instance in the
firmware, use a lock to serialize devcmd calls.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
VF representors need to proxy devcmd through the PF vnic_dev
instance. Extend vnic_dev to accommodate them as follows.
1. Add vnic_vf_rep_register()
A VF representor creates its own vnic_dev instance via this function
and saves VF ID. When performing devcmd, vnic_dev uses the saved VF ID
to proxy devcmd through the PF vnic_dev instance.
2. Add vnic_register_lock()
As PF and VF representors appear as independent ports to the
application, application threads may invoke APIs on them
simultaneously, leading to race conditions on the PF vnic_dev. For
example, thread A can query stats on the PF port, while thread B
queries stats on a VF representor.
The PF port invokes this function to provide a lock to vnic_dev. This
lock is used to serialize devcmd calls from PF and VF representors.
3. Add utility functions to assist VF representor settings
vnic_dev_mtu() and vnic_dev_uif() retrieve vnic MTU and UIF number
(uplink index), respectively.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Add a field named rx_buf_size to rte_eth_rxq_info to indicate the
buffer size used by the HW when receiving packets.
This way, upper-layer users can get this information by calling
rte_eth_rx_queue_info_get.
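For example (assuming port_id and queue_id identify a configured Rx
queue):

struct rte_eth_rxq_info qinfo;

if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) == 0)
        printf("Rx buffer size: %u\n", qinfo.rx_buf_size);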
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The address should be calculated before type cast, not after.
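In general terms (an illustrative fragment, not the netvsc code
itself):

/* wrong: the cast happens first, so the offset is scaled by the
 * descriptor size instead of bytes */
desc = (struct desc *)base + byte_offset;

/* right: compute the byte address first, then cast */
desc = (struct desc *)((char *)base + byte_offset);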
Fixes: cc02518132 ("net/netvsc: split send buffers from Tx descriptors")
Cc: stable@dpdk.org
Reported-by: Souvik Dey <sodey@rbbn.com>
Signed-off-by: Long Li <longli@microsoft.com>
Using 'rte_mb' to synchronize the shared ring head/tail between producer
and consumer stalls the pipeline and damages performance on weak
memory model platforms, such as aarch64.
Relaxing the expensive barrier to C11 atomics with explicit memory
ordering improves throughput by 3.6%.
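A sketch of the pattern on the consumer side (the ring layout is
illustrative):

/* pair an acquire load of the producer's head with a release store
 * of the consumer's tail; no full barrier needed */
uint16_t head = __atomic_load_n(&ring->head, __ATOMIC_ACQUIRE);

/* ... copy out the slots in [tail, head) ... */

__atomic_store_n(&ring->tail, tail, __ATOMIC_RELEASE);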
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Jakub Grajciar <jgrajcia@cisco.com>
Currently, there is a possibility that segmentation faults occur when
sending packets whose payloads are stored in multiple buffers on the
hns3 network engine. The related core dump information is as follows:
Program terminated with signal 11, Segmentation fault.
0 hns3_reassemble_tx_pkts
2512 temp = temp->next;
Missing separate debuginfos, use:
(gdb) bt
0 hns3_reassemble_tx_pkts
1 0x0000000000969c60 in hns3_check_non_tso_pkt
2 0x000000000096adbc in hns3_xmit_pkts
3 0x000000000050d4d0 in rte_eth_tx_burst
4 0x000000000050fca4 in pkt_burst_transmit
5 0x00000000004ca6b8 in run_pkt_fwd_on_lcore
6 0x00000000004ca7fc in start_pkt_forward_on_core
7 0x00000000006975a4 in eal_thread_loop
8 0x0000ffffa6f7fc48 in start_thread
9 0x0000ffffa6ed1600 in thread_start
The root cause is that the hns3 PMD driver invokes the
rte_pktmbuf_free_seg API function to release the same rte_mbuf multiple
times. The rte_mbuf pointer is not set to NULL in the internal function
hns3_rx_queue_release_mbufs, which is invoked during queue setup, stop
and close. As a result, the rte_mbufs in the Rx queues are repeatedly
released when the user application sets up queues or stops/starts the
device multiple times.
Probably for performance reasons, the DPDK mempool lib does not check
for repeated rte_mbuf releases. The addresses of released rte_mbufs are
directly stored in the per-lcore cache of the mempool. This makes it
possible for the same rte_mbuf to be obtained from the mempool more
than once by calling the rte_mempool_get_bulk API function; ultimately,
it causes a NULL pointer access in the PMD driver.
This patch fixes the problem by setting the released mbuf pointer to
NULL in the internal function hns3_rx_queue_release_mbufs. The other
internal function, hns3_reassemble_tx_pkts, is optimized to avoid a
similar problem.
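The shape of the fix in the release function, as a sketch (structure
and field names are illustrative):

for (i = 0; i < rxq->nb_rx_desc; i++) {
        if (rxq->sw_ring[i].mbuf != NULL) {
                rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
                /* prevent a later release pass from freeing the
                 * same mbuf again */
                rxq->sw_ring[i].mbuf = NULL;
        }
}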
Fixes: bba6366983 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
When Rx of scattered packets is off, the hns3 PMD driver may use the
vector Rx process function or the simple Rx functions. If the MTU is
increased so that the maximum length of received packets is greater
than the length of a single Rx buffer, the hardware network engine
needs multiple BDs and buffers to store these packets. This causes
problems when the vector or simple Rx functions are still used to
receive packets. So, when Rx of scattered packets is off and the
device is started, do not permit an MTU increase that would make the
maximum Rx packet length exceed the Rx buffer length.
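A sketch of the guard (names are illustrative):

/* refuse an MTU that would require scattered Rx while it is off */
if (dev_started && !scattered_rx_enabled &&
                frame_size > rx_buf_len)
        return -EINVAL;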
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>