The python script introduced in this patch runs the crypto performance
test application for various test cases, and graphs the results.
Test cases are defined in JSON config files, where the parameters
are specified for each test. Currently there are various test cases for
devices crypto_qat, crypto_aesni_mb and crypto_gcm. Tests for the
ptest types Throughput and Latency are supported for each.
The results of each test case are graphed and saved in PDFs (one PDF for
each test suite graph type, with all test cases).
The graph output includes grouped bar charts for throughput
tests, and histogram and boxplot graphs for latency tests.
Documentation is added to outline the configuration and usage for the
script.
Usage:
A JSON config file must be specified when running the script,
"./dpdk-graph-crypto-perf <config_file>"
The script uses the installed app by default (from ninja install).
Alternatively, a path to the app can be passed with
"-f <rel_path>/<build_dir>/app/dpdk-test-crypto-perf"
All device test suites are run by default.
Alternatively, specific test suites can be selected with arguments,
"-t latency" - to run latency test suite only
"-t throughput latency"
- to run both throughput and latency test suites
A directory can be specified for all output files,
or the script directory is used by default.
"-o <output_dir>"
To see the output from the dpdk-test-crypto-perf app,
use the verbose option "-v".
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
The csv output for each ptest type used ";" instead of ",".
This has now been fixed to use the comma format that is used in the csv
headers.
Fixes: f6cefe253c ("app/crypto-perf: add range/list of sizes")
Fixes: 96dfeb609b ("app/crypto-perf: add new PMD benchmarking mode")
Fixes: da40ebd6d3 ("app/crypto-perf: display results in test runner")
Cc: stable@dpdk.org
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
The csv output for the latency performance test had an extra header,
"Packet Size", which is a duplicate of "Buffer Size", and had no
corresponding value in the output. This is now removed.
Fixes: f6cefe253c ("app/crypto-perf: add range/list of sizes")
Cc: stable@dpdk.org
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
The symmetric session configure callback function in the OCTEON TX2 crypto
PMD returns an error if the cipher operation is not set to either encrypt
or decrypt. This patch sets the cipher operation for the null cipher
to encrypt.
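A minimal sketch of such a transform (illustrative only, not the exact test
code), assuming the standard rte_crypto_sym_xform layout from
rte_crypto_sym.h:

  struct rte_crypto_sym_xform cipher_xform = {
      .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
      .cipher = {
          /* explicitly set; the PMD rejects a transform with an unset op */
          .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
          .algo = RTE_CRYPTO_CIPHER_NULL,
          .key = { .data = NULL, .length = 0 },
          .iv = { .offset = 0, .length = 0 },
      },
  };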
Fixes: 7444937523 ("test/event_crypto_adapter: fix configuration")
Cc: stable@dpdk.org
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
The wmb in order_process_stage_1 and order_process_stage_invalid in
the order test can be removed. This is because when the test results
are wrong, the worker core writes 'true' to t->err. Then the other worker
cores, producer cores and the main core load the 'error' index and
stop testing. So, for the worker cores, no other store operation needs
to be guaranteed after this when errors happen.
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
For "processed_pkts" function, no operations should keep the order that
being executed before loading "worker[i].processed_pkts".
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Simply replace the rte_smp barrier with an atomic thread fence.
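A sketch of the substitution (the exact memory order depends on the call
site; release ordering is assumed here):

  /* before: SMP write barrier */
  rte_smp_wmb();

  /* after: compiler builtin fence with an explicit memory order */
  __atomic_thread_fence(__ATOMIC_RELEASE);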
Signed-off-by: Phil Yang <phil.yang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
For "processed_pkts" and "total_latency" functions, no operations should
keep the order that being executed before loading
"worker[i].processed_pkts". Thus rmb is unnecessary before loading.
For "perf_launch_lcores" function, wmb after that the main lcore
updates the variable "t->done", which represents the end of the test
signal, is unnecessary. Because after the main lcore updates this
siginal variable, it will jump out of the launch function loop, and wait
other lcores stop or return error in the main function(evt_main.c).
During this time, there is no important storing operation and thus no
need for wmb.
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patch fixes RTE SMP barrier bugs for the perf test of eventdev.
For the "perf_process_last_stage" function, wmb after storing
processed_pkts should be moved before it. This is because the worker
lcore should ensure it has really finished data processing, e.g. event
stored into buffers, before the shared variables "w->processed_pkts"are
stored.
For the "perf_process_last_stage_latency", on the one hand, the wmb
should be moved before storing into "w->processed_pkts". The reason is
the same as above. But on the other hand, for "w->latency", wmb is
unnecessary due to data dependency.
Fixes: 2369f73329 ("app/testeventdev: add perf queue worker functions")
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
For the eventdev pipeline test, in the burst_tx cases, there is no need
to set ev.op to RTE_EVENT_OP_RELEASE and call pipeline_event_enqueue_burst
to release events. This is because for tx mode (internal_port=true),
the "implicit_release" capability of the device is enabled, and the app
releases events via "rte_event_dequeue_burst" rather than via enqueue.
Fixes: 314bcf58ca ("app/eventdev: add pipeline queue worker functions")
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
For the fwd mode (internal_port = false) in the pipeline test, the
processed-pkts increment should happen after the enqueue. However, in
multi_stage_fwd and multi_stage_burst_fwd, "w->processed_pkts" is
increased before the enqueue.
To fix this, move the "w->processed_pkts" increment after the enqueue,
so that the main core loads the correct number of processed packets.
Fixes: 314bcf58ca ("app/eventdev: add pipeline queue worker functions")
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
When testing ring performance in the case that multiple lcores are mapped
to the same physical core, e.g. --lcores '(0-3)@10', it takes a very long
time to wait for the "enqueue_dequeue_bulk_helper" to finish.
This is because the number of iterations is too high and the enqueue and
dequeue efficiency is extremely low with this kind of core mapping. The
following test results show the above phenomenon:
x86-Intel(R) Xeon(R) Gold 6240:
$sudo ./app/test/dpdk-test --lcores '(0-1)@25'
Testing using two hyperthreads(bulk (size: 8):)
iter_shift: 3 5 7 9 11 13 *15 17 19 21 23
run time: 7s 7s 7s 8s 9s 16s 47s 170s 660s >0.5h >1h
legacy APIs: SP/SC: 37 11 6 40525 40525 40209 40367 40407 40541 NoData NoData
legacy APIs: MP/MC: 56 14 11 50657 40526 40526 40526 40625 40585 NoData NoData
aarch64-n1sdp:
$sudo ./app/test/dpdk-test --lcore '(0-1)@1'
Testing using two hyperthreads(bulk (size: 8):)
iter_shift: 3 5 7 9 11 13 *15 17 19 21 23
run time: 8s 8s 8s 9s 9s 14s 34s 111s 418s 25min >1h
legacy APIs: SP/SC: 0.4 0.2 0.1 488 488 488 488 488 489 489 NoData
legacy APIs: MP/MC: 0.4 0.3 0.2 488 488 488 488 490 489 489 NoData
As the number of iterations increases, so does the time required
to run the program. Currently (iter_shift = 23), it takes more than
1 hour for the test to finish. To fix this, "iter_shift" should be
decreased while still ensuring enough iterations to keep the test data
stable. To choose the value, we also test with the "-l" EAL argument:
x86-Intel(R) Xeon(R) Gold 6240:
$sudo ./app/test/dpdk-test -l 25-26
Testing using two NUMA nodes(bulk (size: 8):)
iter_shift: 3 5 7 9 11 13 *15 17 19 21 23
run time: 6s 6s 6s 6s 6s 6s 6s 7s 8s 11s 27s
legacy APIs: SP/SC: 47 20 13 22 54 83 91 73 81 75 95
legacy APIs: MP/MC: 44 18 18 240 245 270 250 249 252 250 253
aarch64-n1sdp:
$sudo ./app/test/dpdk-test -l 1-2
Testing using two physical cores(bulk (size: 8):)
iter_shift: 3 5 7 9 11 13 *15 17 19 21 23
run time: 8s 8s 8s 8s 8s 8s 8s 9s 9s 11s 23s
legacy APIs: SP/SC: 0.7 0.4 1.2 1.8 2.0 2.0 2.0 2.0 2.0 2.0 2.0
legacy APIs: MP/MC: 0.3 0.4 1.3 1.9 2.9 2.9 2.9 2.9 2.9 2.9 2.9
According to the above test data, when "iter_shift" is set to "15", the
test run time is reduced to less than 1 minute and the test result stays
stable on x86 and aarch64 servers.
Fixes: 1fa5d0099e ("test/ring: add custom element size performance tests")
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The distributor library implementation uses a cyclic queue to store
packets returned from workers. These packets can be later collected
with rte_distributor_returned_pkts() call.
However the queue has limited capacity. It is able to contain only
127 packets (RTE_DISTRIB_RETURNS_MASK).
The big burst test sent 1024 packets in 32-packet bursts without waiting
until they were processed by the distributor. When the tests were
run with a big number of worker threads, more than
127 packets could be returned from workers and put into the cyclic queue.
This caused packets to be dropped by the queue, making it impossible
to collect them later with rte_distributor_returned_pkts() calls.
However, the test waited indefinitely for all packets to be returned.
This patch fixes the big burst test by not allowing more packets than the
queue capacity to be processed at the same time, making it
impossible to drop any packets.
It also cleans up duplicated code in the same test.
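One possible shape of the bounded processing loop (a sketch only; BIG_BATCH,
BURST and the buffer names are illustrative), collecting returns between
bursts so the 127-entry return queue can never overflow:

  unsigned int processed = 0, returned = 0;

  while (processed < BIG_BATCH) {
      unsigned int n = RTE_MIN(BURST, BIG_BATCH - processed);

      rte_distributor_process(db, &bufs[processed], n);
      processed += n;
      /* drain the cyclic return queue before it can fill up */
      returned += rte_distributor_returned_pkts(db,
              &return_bufs[returned], BIG_BATCH - returned);
  }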
Bugzilla ID: 612
Fixes: c0de0eb82e ("distributor: switch over to new API")
Cc: stable@dpdk.org
Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Tested-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: David Hunt <david.hunt@intel.com>
Currently, the test-flow-perf app cannot generate flows with the meter
action. This patch introduces a new parameter "--meter" to generate flows
with the meter action.
Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Reviewed-by: Wisam Jaddo <wisamm@nvidia.com>
Reviewed-by: Alexander Kozyrev <akozyrev@nvidia.com>
The app calculates and outputs the CPU time used for the flow insertion
rate. This is also needed for some new insertion items, such as meter.
It is better to split this calculation and output part into a single
function, so that all new insertion items can use it.
Signed-off-by: Dong Zhou <dongzhou@nvidia.com>
Reviewed-by: Wisam Jaddo <wisamm@nvidia.com>
Reviewed-by: Alexander Kozyrev <akozyrev@nvidia.com>
We need to differentiate between crypto and ethernet security
context as they belong to different devices.
Fixes: d82d6ac643 ("app/procinfo: add crypto security context info")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Meson can use cmake as a fallback for detecting packages, and this can
lead to picking up 64-libs for 32-bit builds. To work around this, force
the use of pkg-config only for detecting libcrypto, zlib, jansson and
other package dependencies.
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Tested-by: Liron Himi <lironh@marvell.com>
Tested-by: Lee Daly <lee.daly@intel.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Martin Spinler <spinler@cesnet.cz>
The "includes" variable in the app/meson.build file was ignored when
building the executable, meaning that apps couldn't pass additional
include paths directly back. Fix this to align with drivers and libs.
Fixes: fa036e70d7 ("app: generalize meson build")
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Support the RSS action in the sample sub-actions list.
An example of the sample flow use case and its result is shown below:
set sample_actions 0 mark id 0x12 / rss queues 0 1 2 3 end / end
flow create 0 ingress group 1 pattern eth / end actions
sample ratio 1 index 0 / jump group 2 / end
This flow results in all matched ingress packets being jumped to the
next table, with each packet marked with 0x12 and duplicated to the
RSS queues of the control application.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add support for the RTE_FLOW_ACTION_MODIFY_FIELD to the testpmd.
Implement CLI to create the modify_field action and supply all the
needed parameters to modify an arbitrary packet field (as well as
mark, tag or metadata) with data from another field or immediate
value.
Example of the flow is the following:
flow create 0 egress group 1 pattern eth / ipv4 / udp / end
actions modify_field op set dst_type tag dst_level 2 dst_offset 8
src_type gtp_teid src_level 0 src_offset 0 width 16 / end
This flow copies 16 bits from the second Tag in the Tags array
into the outermost GTP TEID packet header field. 8 bits of the
Tag are skipped as indicated by the dst_offset action parameter.
op, dst_type, src_type and width are the mandatory parameters to
specify. Levels and offset are 0 by default if they are not
overridden by a user. The operation can be set, add or sub.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Strict-aliasing rules are violated by a cast to uint16_t* in flowgen.c,
and the calculated IP checksum is wrong. Use the __may_alias__ attribute
to fix the problem.
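A minimal sketch of the approach (the typedef and function names are
illustrative): accesses through a __may_alias__ qualified pointer are exempt
from strict-aliasing assumptions, so the checksum accumulation reads the
packet bytes as actually written:

  typedef uint16_t alias_uint16_t __attribute__((__may_alias__));

  static uint32_t
  sum16(const void *buf, unsigned int len)
  {
      const alias_uint16_t *u16 = buf;
      uint32_t sum = 0;
      unsigned int i;

      /* accumulate 16-bit words without aliasing-based reordering */
      for (i = 0; i < len / 2; i++)
          sum += u16[i];
      return sum;
  }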
Fixes: e9e23a617e ("app/testpmd: add flowgen forwarding engine")
Cc: stable@dpdk.org
Signed-off-by: George Prekas <prekageo@amazon.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The patch adds the GENEVE rte flow option length support to
command line interpreter. The flow command with GENEVE
option items looks like:
flow create 0 ingress pattern eth / ipv4 / udp / geneve vni
is 100 optlen is 2 / end actions drop / end
The option length should be specified in 32-bit words; this
value specifies the total length of all options in the GENEVE header.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
The patch adds the GENEVE option rte flow item support to
command line interpreter. The flow command with GENEVE
option items looks like:
flow create 0 ingress pattern eth / ipv4 / udp / geneve vni is 100 /
geneve-opt class is 99 length is 1 type is 0 data is 0x669988 /
end actions drop / end
The option length should be specified in 32-bit words; this value
specifies the length of the data pattern/mask arrays (it should be
multiplied by sizeof(uint32_t) to be expressed in bytes). If a match
on the length itself is not needed, the mask should be set to zero; in
this case the length is used to specify the pattern/mask array lengths only.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
When the max Rx packet length is smaller than the sum of the MTU size and
ether overhead size, it should be enlarged, otherwise VLAN packets
will be dropped.
Remove the rx_offloads assignment for jumbo frame during command line
parsing, and set the correct jumbo frame flag in 'init_config()' if the MTU
size is larger than the default value 'RTE_ETHER_MTU'.
Fixes: 384161e006 ("app/testpmd: adjust on the fly VLAN configuration")
Fixes: 35b2d13fd6 ("net: add rte prefix to ether defines")
Fixes: ce17eddefc ("ethdev: introduce Rx queue offloads API")
Fixes: 150c9ac2df ("app/testpmd: update Rx offload after setting MTU")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Request the driver for number of entries in the FEC caps
array and then dynamically allocate the array.
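A sketch of the two-step query, assuming the documented
rte_eth_fec_get_capability() behaviour of returning the required entry count
when the supplied array is too small:

  int num = rte_eth_fec_get_capability(port_id, NULL, 0);
  if (num <= 0)
      return;

  struct rte_eth_fec_capa *fec_capa = calloc(num, sizeof(*fec_capa));
  if (fec_capa == NULL)
      return;

  num = rte_eth_fec_get_capability(port_id, fec_capa, num);
  /* ... display fec_capa[0..num-1] ... */
  free(fec_capa);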
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Start from index 0 when going through the FEC array. This will allow
"off" to get printed for RTE_ETH_FEC_NOFEC mode.
Fixes: b19da32e31 ("app/testpmd: add FEC command")
Cc: stable@dpdk.org
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add new UDP tunnel port params for eCPRI configuration, the command
as below:
testpmd> port config 0 udp_tunnel_port add ecpri 6789
testpmd> port config 0 udp_tunnel_port rm ecpri 6789
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds support for generating the GTP PDU session container
option for the raw encap and raw decap sets.
The generated option is a single 32-bit word with
minimal length specified as 4; no extra fields in the
option are supported. The option item must be preceded
by the GTP item itself, and the GTP item flags must be
set accordingly:
- E flag must be set, we are going to provide extension
- S flag must be reset, because GTP item does not
provide the value for SQN field, we can't fill this one
- PN flag must be reset, no N-PDU value provided by
GTP item either
The raw set example:
set raw_encap 2 eth / ipv4 / udp /
gtp v_pt_rsv_flags is 0x34 / gtp_psc / end_set
Please note: the value 0x34 provides the required flag combination.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Fixes some spelling errors in app logs and help text.
Fixes: 7da018731c ("app/crypto-perf: add help option")
Fixes: f8be1786b1 ("app/crypto-perf: introduce performance test application")
Cc: stable@dpdk.org
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch adds test cases for AES-XCBC hash-only, for
Digest and Digest-verify.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch adds test cases for testing functionality of
enqueue and dequeue callback mechanism.
Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Use rte_pktmbuf_free_bulk instead of loop when freeing
packets.
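A sketch of the change (variable names are illustrative):

  /* before: per-mbuf free in a loop */
  for (i = 0; i < nb_pkts; i++)
      rte_pktmbuf_free(pkts[i]);

  /* after: a single bulk call */
  rte_pktmbuf_free_bulk(pkts, nb_pkts);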
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
The ticketlock test is fast and should be run all the time.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Aaron Conole <aconole@redhat.com>
The Tx buffer may overflow when there is more than one port.
Fixes: 002ade70e9 ("app/test: measure cycles per packet in Rx/Tx")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Tested-by: Wei Ling <weix.ling@intel.com>
Add a test case to test scan operation post clear of half
cacheline of slabs.
Also fix meson.build to include test_bitmap.c in the compilation.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
When running one test (via DPDK_TEST) the test program
would leave the terminal in raw mode. This was because
it was setting up cmdline to do interactive input.
The fix is to use cmdline_new() for the interactive case.
This also fixes a memory leak because the test
runner was never calling cmdline_free().
Fixes: 9b848774a5 ("test: use env variable to run tests")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Tested-by: Harry van Haaren <harry.van.haaren@intel.com>
Trivial fix for spelling errors and incorrect spacing.
No change to any built code.
Fixes: 7a61fc5d1b ("test/rwlock: add new test-cases")
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Each core already comes with its local storage for mcslock (in its
stack), therefore there is no need to define an additional per-lcore
mcslock.
Fixes: 32dcb9fd2a ("test/mcslock: add MCS queued lock unit test")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Existing test cases create 256 tbl8 groups for testing. The number covers
only 8 bit next_hop/group field. Since the next_hop/group field had been
extended to 24-bits, creating more than 256 groups in tests can improve
the coverage.
Coverage was not expanded to reach the max supported group number, because
it would take too much time to run for this fast-test.
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Tested-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
The --export-dynamic linker option is only applicable to ELF.
On Windows, where COFF is used, it causes warnings:
x86_64-w64-mingw32-ld: warning: --export-dynamic is not supported
for PE+ targets, did you mean --export-all-symbols? (MinGW)
LINK : warning LNK4044: unrecognized option '/-export-dynamic';
ignored (clang)
Don't add --export-dynamic on Windows anywhere.
Fixes: b031e13d7f ("build: fix plugin load on static build")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Ranjit Menon <ranjit.menon@intel.com>
Performance measurements (elapsed time and Gbps) were based on the Linux
clock() API. The resolution is improved by replacing the clock() API
with the rte_rdtsc_precise() API.
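A sketch of the measurement pattern, assuming elapsed time is derived from
TSC cycles and the TSC frequency:

  uint64_t start = rte_rdtsc_precise();
  /* ... run the parsing job ... */
  uint64_t cycles = rte_rdtsc_precise() - start;
  double seconds = (double)cycles / rte_get_timer_hz();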
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Up to this commit measuring the parsing elapsed time and Giga bits per
second performance was done on the aggregation of all QPs (per core).
This commit separates the time measurements per individual QP.
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Up to this commit the regex application was running with multiple QPs on
a single core. This commit adds the option to specify a number of cores
on which multiple QPs will run.
A new parameter 'nb_lcores' was added to configure the number of cores:
--nb_lcores <num of cores>.
If not configured, the number of cores is set to 1 by default. On
application startup a few initial steps are performed by the main core:
the numbers of QPs and cores are parsed, the QPs are distributed as evenly
as possible over the cores, the regex device and all QPs are initialized,
and the data file is read and saved in a buffer. Then, for each core, the
application calls rte_eal_remote_launch() with the worker routine
(run_regex) as its parameter.
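A sketch of the per-core launch step (the per-core argument structure name
is illustrative, not the exact code):

  uint32_t lcore_id;

  RTE_LCORE_FOREACH_WORKER(lcore_id) {
      /* hand each worker its share of QPs and start run_regex() on it */
      rte_eal_remote_launch(run_regex, &lcore_args[lcore_id], lcore_id);
  }
  rte_eal_mp_wait_lcore();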
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Up to this commit the input data file was read from scratch for each QP,
which is redundant. Starting from this commit the data file is read only
once at startup. Each QP will clone the data.
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Up to this commit the regex application used one QP which was assigned a
number of jobs, each with a different segment of a file to parse. This
commit adds support for multiple QP assignments. All QPs are
assigned the same number of jobs, with the same segments of the file to
parse. This enables comparing functionality with different numbers of
QPs. All queues are managed on one core with one thread. This commit
focuses on changing the routines' API to support multiple QPs; mainly, QP
scalar variables are replaced by per-QP struct instances. The
enqueue/dequeue operations are interleaved as follows:
enqueue(QP #1)
enqueue(QP #2)
...
enqueue(QP #n)
dequeue(QP #1)
dequeue(QP #2)
...
dequeue(QP #n)
A new parameter 'nb_qps' was added to configure the number of QPs:
--nb_qps <num of qps>.
If not configured, nb_qps is set to 1 by default.
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
The rte_pktmbuf_pool_create() call is moved from the init_port() routine
to the run_regex() routine. Looking forward to multi-core support,
init_port() will be called only once as part of application startup,
while mempool creation should happen multiple times (once per core).
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
According to the RTE flow user guide, a PMD does not keep flow rules after
port stop. Application resources that refer to flow rules become
obsolete after port stop and must not be used.
Testpmd maintains a linked list of active flows for each port. Entries in
that list are allocated dynamically and must be explicitly released to
prevent a memory leak.
The patch releases the testpmd port flow_list that holds the remaining
flows before the port is stopped.
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Currently, the queue stats mapping has the following problems:
1) Many PMD drivers don't support queue stats mapping. But there is no
failure message after executing the command "set stat_qmap rx 0 2 2".
2) Once queue mapping is set, unrelated and unmapped queues are also
displayed.
3) The configuration result does not take effect or can not be queried
in real time.
4) The mapping arrays, "tx_queue_stats_mappings_array" &
"rx_queue_stats_mappings_array" are global and their sizes are based
on fixed max port and queue size assumptions.
5) These record structures, 'map_port_queue_stats_mapping_registers()'
and its sub functions are redundant for majority of drivers.
6) The display of the queue stats and queue stats mapping is mixed
together.
Since xstats is used to obtain queue statistics, we have made the
following simplifications and adjustments:
1) If PMD requires and supports queue stats mapping, configure to driver
in real time by calling ethdev API after executing the command "set
stat_qmap rx/tx ...". If not, the command can not be accepted.
2) Based on the above adjustments, these record structures,
'map_port_queue_stats_mapping_registers()' and its sub functions can
be removed. "tx-queue-stats-mapping" & "rx-queue-stats-mapping"
parameters, and 'parse_queue_stats_mapping_config()' can be removed
too.
3) remove display of queue stats mapping in 'fwd_stats_display()' &
'nic_stats_display()', and obtain queue stats by xstats. Since the
record structures are removed, 'nic_stats_mapping_display()' can be
deleted.
Fixes: 4dccdc789b ("app/testpmd: simplify handling of stats mappings error")
Fixes: 013af9b6b6 ("app/testpmd: various updates")
Fixes: ed30d9b691 ("app/testpmd: add stats per queue")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This is already the 'else' leg of the opposite comparison, so a simple
'else' is logically the same.
Fixes: f8be1786b1 ("app/crypto-perf: introduce performance test application")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
'flag' is initialized to '0' but is overwritten later; move the
declaration to where it is used and initialize it with the actual value.
Fixes: 0101a0ec62 ("app/procinfo: add --show-mempool")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
The intention with the "sizeof(0)" usage is not clear, but the 'stats'
already 'memset' by 'rte_cryptodev_stats_get()' API, removing 'memset'
in application.
Fixes: fe773600fe ("app/procinfo: add --show-crypto")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
'parse_xstats_ids()' returns 'int'. The return value is assigned to the
unsigned 'nb_xstats_ids' variable, so the later negative check on this
variable is wrong.
Add an interim 'int' variable for the negative check.
Fixes: 7ac16a3660 ("app/proc-info: support xstats by ID and by name")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
'_filters' is compared twice and the second comparison is always false;
remove it, keeping the message that is more relevant to '_filters'.
Fixes: 2deb6b5246 ("app/procinfo: add collectd format and host id")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Since the items are static, their default values are already zero,
so the memset to zero is redundant code.
Also remove all unneeded variables that can be replaced
by setting the structure fields directly.
Fixes: bf3688f1e8 ("app/flow-perf: add insertion rate calculation")
Cc: stable@dpdk.org
Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Reviewed-by: Alexander Kozyrev <akozyrev@nvidia.com>
Reviewed-by: Suanming Mou <suanmingm@nvidia.com>
The clock() function is not good practice for multiple
cores/threads, since it measures the CPU time used by the process
and not the wall-clock time, and when running on multiple
cores/threads simultaneously, CPU time is burned through much
faster.
As a result, this commit changes the measurement method to use
the TSC (rdtsc), and the results are divided by the processor frequency.
Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Reviewed-by: Alexander Kozyrev <akozyrev@nvidia.com>
Reviewed-by: Suanming Mou <suanmingm@nvidia.com>
One of the ways to increase the insertion/deletion rate is to use
multi-threaded insertion/deletion. Thus support is needed for testing
and measuring those rates using the flow-perf application.
Now we spawn cores, distribute all flows to those cores,
and start inserting/deleting in parallel.
The app now receives the core count to use from a command line option,
then it distributes the rte_flow rules evenly between the cores and
starts inserting/deleting. Each worker reports its own results,
and in the end the MAIN worker reports the total results for all
cores.
The total results are calculated as RULES_COUNT divided by the
maximum time used among all cores.
This also touches the memory measurement: since insertion now happens on
multiple cores at the same time, the previous memory accounting is no
longer valid, so we now save the memory usage before and after each
allocation for all cores. In the end we pick the minimum pre-allocation
memory and the maximum post-allocation memory across all cores.
The difference between those values represents the total memory consumed
by all rte_flow rules from all cores, and from it the total size of a
single rte_flow, in bytes, is reported for each port.
How to use this feature:
--cores=N
Where 1 <= N <= RTE_MAX_LCORE
Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Reviewed-by: Alexander Kozyrev <akozyrev@nvidia.com>
Reviewed-by: Suanming Mou <suanmingm@nvidia.com>
Provide the flows_handler() function with the ability to control
flow performance processes. This is made possible by the
introduction of the insert_flows() function.
Also provide the flows_handler() function with the ability to print
the DPDK-layer memory consumption of the rte_flow rules, regardless
of whether the deletion feature is enabled, whereas the previous
solution printed all memory changes after flows_handler().
Thus, if deletion was enabled, it did not provide any memory figure
that represents the rte_flow rule size.
The current design is also easier to read and understand.
Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Reviewed-by: Alexander Kozyrev <akozyrev@nvidia.com>
Reviewed-by: Suanming Mou <suanmingm@nvidia.com>
When DPDK is compiled as static libraries, it is not possible
to load a plugin from an application. We get the following error:
EAL: librte_pmd_xxxx.so: undefined symbol: per_lcore__rte_errno
This happens because the DPDK symbols are not exported. Add them to the
dynamic symbol table by using '-Wl,--export-dynamic'. This option was
previously present when compiling with Makefiles; it was introduced in
commit f9a08f6502 ("eal: add support for shared object drivers")
Also add it to the pkg-config file.
Fixes: 16ade738fd ("app/testpmd: build with meson")
Fixes: 89f0711f9d ("examples: build some samples with meson")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
If RTE_NET_I40E was not configured, the static routine
str2flowtype() was unused, causing a compilation warning.
str2flowtype() is moved under the #ifdef RTE_NET_I40E block.
Fixes: 1be514fbce ("ethdev: remove legacy FDIR filter type support")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The rte_flow_item_eth and rte_flow_item_vlan items are refined.
The structs no longer exactly represent the packet bits captured on the
wire, so the add_*_header functions should use the real header instead of
the rte_flow_item_* structs.
Replace the rte_flow_item_* structs with the existing corresponding
rte_*_hdr.
Fixes: 09315fc838 ("ethdev: add VLAN attributes to ethernet and VLAN items")
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Setting the MTU after each 'rte_eth_dev_configure()' prevents using the
"--max-pkt-len=N" parameter and the "port config all max-pkt-len #"
command.
This breaks the DTS scatter test case which is using the
"--max-pkt-len=9000" testpmd parameter.
Reverting the workaround to recover the DTS test case.
Fixes: 1c21ee95cf ("app/testpmd: fix MTU after device configure")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Bo Chen <box.c.chen@intel.com>
testpmd provides commands to test tunnel offload rte_flow
capabilities. The testpmd tunnel commands allow configuring new offloaded
tunnel types, listing existing offloaded tunnels and destroying existing
offloaded tunnels.
The tunnel offload commands allowed parameter repetition. For example,
the following commands were accepted:
testpmd> flow tunnel create 0 create 1 type vxlan
or
testpmd> flow tunnel list 0 list 1
This patch fixes that fault. The correct tunnel command syntax is:
testpmd> flow tunnel create <port> type <tunnel type>
testpmd> flow tunnel list <port>
testpmd> flow tunnel destroy <port> id <tunnel id>
Fixes: 1b9f274623 ("app/testpmd: add commands for tunnel offload")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch defines new RSS offload types for eCPRI. For eCPRI with
Message Type 0, the hash field is physical channel ID.
Signed-off-by: Simei Su <simei.su@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
All EAL flags tests are run by calling the "eal_flags_autotest" command.
There is no compatibility to maintain for sub commands only called by
meson.
Fixes: db27370b57 ("eal: replace blacklist/whitelist options")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The options and variables are renamed to use block/allow terminology.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Replace -w / --pci-whitelist with -a / --allow options
and --pci-blacklist with --block.
The -b short option remains unchanged.
Allow the old options for now, but print a nag
warning since old options are deprecated.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Remove an unused assignment statement, as the assigned variable is
not used further in the code.
Coverity issue: 363690
Fixes: 6c583103a2 ("test/ring: factorize object checks")
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
When DPDK is compiled with gcc < 9 and the optimization level set to 1,
gcc sees zcd in test_ring.h as possibly being uninitialised. To correct
this error, the if statements from _st_ring_dequeue_bulk and
_st_ring_enqueue_bulk were corrected within test_ring_mt_peek_stress_zc.c.
Fixes: f72299fd15 ("test/ring: add stress tests for zero copy API")
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Fix SEGFAULT when nb_timer_adapters command line parameter is
set to 0.
Fixes: 98c6292105 ("app/eventdev: add options for event timer adapter")
Cc: stable@dpdk.org
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
The print statement had a typo, "sesionless" should have been
"sessionless". This is now fixed.
Fixes: afcfa2fd04 ("test/crypto: check session-less support")
Cc: stable@dpdk.org
Signed-off-by: Ciara Power <ciara.power@intel.com>
This patch fixes the out-of-place test being bypassed for PMDs that
support it.
Fixes: 4868f6591c ("test/crypto: add cases for raw datapath API")
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
In two test cases, the op value is set from the return of the
process_crypto_request function, which may be NULL. The op->status
value was checked afterwards, which was causing a NULL dereference issue.
To fix this, a temporary op variable is used to hold the return
from the process_crypto_request function, so the original op->status
can be checked after the possible NULL return value.
The original op value is then set to hold the temporary op value.
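A sketch of the pattern (the device id and assertion usage are illustrative,
not the exact test code):

  struct rte_crypto_op *ret_op;

  ret_op = process_crypto_request(dev_id, op);
  /* check the original op status before touching the returned pointer */
  TEST_ASSERT_EQUAL(op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
          "crypto op processing failed");
  TEST_ASSERT_NOT_NULL(ret_op, "failed to process crypto op");
  op = ret_op;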
Coverity issue: 363452, 363465
Fixes: 4868f6591c ("test/crypto: add cases for raw datapath API")
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch fixes the GMAC SGL test that fails to bypass
unsupported PMDs.
Fixes: dcdd01691f ("test/crypto: add GMAC SGL")
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Tested-by: Yu Jiang <yux.jiang@intel.com>
In 'rte_eth_dev_configure()', if 'DEV_RX_OFFLOAD_JUMBO_FRAME' is not set,
the max frame size is limited to 'RTE_ETHER_MAX_LEN' (1518).
This is a mistake because for the PMDs whose frame overhead is bigger than
"RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN" (18 bytes), the MTU becomes
less than 1500, causing a valid frame with a 1500-byte payload to be
dropped.
Since 'rte_eth_dev_set_mtu()' works as expected, it is called after
'rte_eth_dev_configure()' to fix the MTU.
It may look redundant to set the MTU after 'rte_eth_dev_configure()', both
with default values, but it is not: the resulting MTU configuration in the
device can differ based on the frame overhead of the PMD.
Also, instead of setting the MTU to the default value, it is first
retrieved via 'rte_eth_dev_get_mtu()' and then set again; this covers cases
where the MTU was changed from the testpmd command line.
For 'rte_eth_dev_set_mtu()', the '-ENOTSUP' error is ignored to prevent
irrelevant warning messages for the virtual PMDs.
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Tested-by: Igor Romanov <igor.romanov@oktetlabs.ru>
When an age action becomes aged out, the next call to the
rte_flow_get_aged_flows API should return the action context supplied
by the action configuration structure.
In case the age action was created by the shared action API, the shared
action context of the Testpmd application was not set.
In addition, the application handler of the contexts returned by the
rte_flow_get_aged_flows API didn't consider the fact that the action
could have been set by the shared action API and treated it as regular
flow context.
This caused a crash in Testpmd when the context was parsed.
This patch sets a context type in the flow and shared action contexts and
uses it to parse the aged-out contexts correctly.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The fix of max_rx_pkt_len for allowing VLAN packets in all cases
was breaking configuration of some drivers. Example with virtio:
Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728
Fail to configure port 0
Trying to fix the logic was revealing other issues in some drivers.
That's why it is decided to revert.
The workaround for the original issue would be
to set the MTU explicitly from the application
with rte_eth_dev_set_mtu().
See RFC: https://patches.dpdk.org/patch/83756/
Fixes: f6870a7ed6 ("app/testpmd: fix max Rx packet length for VLAN packet")
Cc: stable@dpdk.org
Reported-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
cmdline_numtype member names clash with Windows system identifiers.
Add RTE_ prefix to cmdline constants to avoid this and possible
future conflicts.
Suggested-by: Ranjit Menon <ranjit.menon@intel.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Ranjit Menon <ranjit.menon@intel.com>
Acked-by: Jie Zhou <jizh@microsoft.com>
Tested-by: Jie Zhou <jizh@microsoft.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Avoid code duplication by combining the single and multi-threaded tests.
Also, enable support for more than 2 writers.
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Return an error if add/delete fail in the multi-writer perf test.
Return an error if the single or multi-writer test fails.
Fixes: eff30b59cc ("test/lpm: add RCU performance tests")
Cc: stable@dpdk.org
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Fix incorrect calculations for LPM adds, LPM deletes,
and average cycles in the RCU QSBR perf tests.
Since the RCU QSBR tests run for 'RCU_ITERATIONS' and not
'ITERATIONS', replace 'ITERATIONS' with 'RCU_ITERATIONS'
when calculating adds, deletes, and cycles.
Also, for the multi-writer perf test, each writer only writes
half of NUM_LDEPTH_ROUTE_ENTRIES.
For 2 writers, the total adds (or deletes) should be
(RCU_ITERATIONS * NUM_LDEPTH_ROUTE_ENTRIES) instead of
(2 * RCU_ITERATIONS * NUM_LDEPTH_ROUTE_ENTRIES).
Since, for both the single and multi-writer tests, the total adds/deletes
equals (RCU_ITERATIONS * NUM_LDEPTH_ROUTE_ENTRIES),
this has been replaced with a macro 'TOTAL_WRITES'; furthermore,
'g_writes' has been removed since it is always a fixed value
equal to TOTAL_WRITES.
Fixes: eff30b59cc ("test/lpm: add RCU performance tests")
Cc: stable@dpdk.org
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
A "+" symbol was incorrectly placed at the beginning of a line,
this is now removed.
Fixes: 52af6ccb2b ("telemetry: add utility functions for creating JSON")
Cc: stable@dpdk.org
Signed-off-by: Ciara Power <ciara.power@intel.com>
Currently, flow-perf measures the performance of
rule installation/deletion operations by breaking
down the entire number of operations into windows
of fixed size (i.e., 100000 operations per window).
Then, flow-perf measures the total time per window
and computes an average time across all windows.
This commit allows flow-perf users to configure
the number of rules per window instead of using
a fixed pre-compiled value. To do so, users must
pass --rules-batch=N, where N is the number of
rules per window (or batch).
For consistency reasons, flow_count variable is
now renamed to rules_count. This variable is the
total number of rules to be installed/deleted.
For example, if a user wants to measure how much
time it takes to install 1M rules in a certain NIC,
he/she can input:
--rules-count=1000000
This way flow-perf will break down 1M flow rules into
10 batches of 100k flow rules each (this is the default
batch size) and compute an average across the 10
measurements.
Now, if the user modifies the number of rules per
batch as follows:
--rules-count=1000000 --rules-batch=500000
then flow-perf will break down 1M flow rules into
2 batches of 500k flow rules each and compute the
average across the 2 measurements.
Finally, this commit also adds default variables
to the usage function instead of hardcoded values.
Signed-off-by: Georgios Katsikas <katsikas.gp@gmail.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
The rte_flow_item_eth and rte_flow_item_vlan items are refined.
The structs do not exactly represent the packet bits captured on the
wire anymore so set raw_encap/decap commands should only copy real
header instead of the whole struct.
Replace the rte_flow_item_* with the existing corresponding rte_*_hdr.
Fixes: 09315fc838 ("ethdev: add VLAN attributes to ethernet and VLAN items")
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This attribute helps PMDs to tell actions supposed to work
on the so-called hardware e-switch level from regular ones.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
When the max Rx packet length is smaller than the sum of MTU size and
ether overhead size, it should be enlarged, otherwise the VLAN packets
will be dropped.
Fixes: 35b2d13fd6 ("net: add rte prefix to ether defines")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
When the number of forwarding cores is changed at runtime, the following
issue may be encountered:
if nbcore is set to a smaller value than the current one, the forwarding
threads will still be running on the extra cores. Therefore, trying to
stop forwarding will hang testpmd, since it will wait for the extra cores
to stop.
So do not allow changing the nbcore number while forwarding is running.
Fixes: 0c0db76f42 ("app/testpmd: separate forward config setup from display")
Cc: stable@dpdk.org
Signed-off-by: Zhenghua Zhou <zhenghuax.zhou@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
In the current implementation of eCPRI flow item parsing in the CLI,
the token items in the list are not connected properly.
A command containing "rtc_ctrl rtc_id spec 14857 rtc_id mask 0xff00"
is considered invalid. In order to support spec with mask, the
common entry needs to be typed twice and the whole command becomes
too long.
By changing the token lists, spec with mask can be supported without
going back to the entry of the item.
Fixes: 17d103cc93 ("app/testpmd: add eCPRI in flow creation patterns")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The command uses FDIR filter information get API which
is not supported any more.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of FDIR filters RTE flow API should be used.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Global filter configuration request was supported by net/i40e
driver only to configure GRE key length.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of L2 tunnel filter RTE flow API should be used.
Preserve RTE_ETH_FILTER_L2_TUNNEL since it is used in drivers
internally in RTE flow API support.
rte_eth_l2_tunnel_conf structure is used in other ethdev API
functions.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of HASH filter RTE flow API should be used.
Preserve RTE_ETH_FILTER_HASH since it is used in drivers
internally in RTE flow API support.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of TUNNEL filter RTE flow API should be used.
Move corresponding defines and helper structure to ethdev
driver interface since it is still used by drivers internally.
Preserve RTE_ETH_FILTER_TUNNEL because of usage in drivers.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of N-tuple filter RTE flow API should be used.
Preserve struct rte_eth_ntuple_filter in ethdev API since
the structure and related defines are used in flow classify
library and a number of drivers.
Preserve RTE_ETH_FILTER_NTUPLE because of usage in drivers.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of SYN filter RTE flow API should be used.
Move corresponding definitions to ethdev internal driver API
since it is used by drivers internally.
Preserve RTE_ETH_FILTER_SYN because of it as well.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of FLEXIBLE filter RTE flow API should be used.
Temporarily preserve helper defines in public interface.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of EtherType filter RTE flow API should be used.
Move corresponding definitions to ethdev internal driver API
since it is used by drivers internally.
Preserve RTE_ETH_FILTER_ETHERTYPE because of it as well.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Instead of MACVLAN filter RTE flow API should be used.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
When a flow API RSS rule is issued in testpmd, the device RSS key is
changed unexpectedly: it is changed to the testpmd default RSS key.
Consider the following usage with testpmd:
1. first, startup testpmd:
testpmd> show port 0 rss-hash key
RSS functions: all ipv4-frag ipv4-other ipv6-frag ipv6-other ip
RSS key: 6D5A56DA255B0EC24167253D43A38FB0D0CA2BCBAE7B30B477CB2DA38030F
20C6A42B73BBEAC01FA
2. create a rss rule
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end \
actions rss types ipv4-udp end queues end / end
3. show rss-hash key
testpmd> show port 0 rss-hash key
RSS functions: all ipv4-udp udp
RSS key: 74657374706D6427732064656661756C74205253532068617368206B65792
C206F76657272696465
This is because testpmd always sends a key with the RSS rule:
if the user provides a key as part of the rule, that key is used; if the
user doesn't provide a key, the testpmd default key is sent to the PMDs,
which causes the device-programmed RSS key to be changed.
There was a previous attempt to fix the same issue [1], but it has been
reverted back [2] because of the crash when 'key_len' is provided
without 'key'.
This patch follows the same approach with the initial fix [1] but also
addresses the crash.
After this change, the testpmd RSS key is 'NULL' by default; if the user
provides a key as part of the rule it is used, otherwise no key is sent to
the PMDs at all.
[1]
Commit a4391f8bae ("app/testpmd: set default RSS key as null")
[2]
Commit f3698c3d09 ("app/testpmd: revert setting default RSS")
Fixes: d0ad8648b1 ("app/testpmd: fix RSS flow action configuration")
Cc: stable@dpdk.org
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Currently there is an inconsistency in the name of the transmission
policy for a Link Bonding device: "xmit_balance_policy" is not
correct and should be modified to "balance_xmit_policy".
Fixes: 2950a76931 ("bond: testpmd support")
Fixes: ac718398f4 ("doc: testpmd application user guide")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
rte_gso_segment decreased the refcnt of pkt by one, but this is wrong
if pkt is an external mbuf: pkt won't be freed because of the incorrect
refcnt, and as a result the application can't allocate mbufs from the
mempool because the mempool runs out of mbufs.
The correct way is for the application to call rte_pktmbuf_free after
calling rte_gso_segment to free pkt explicitly. rte_gso_segment must not
handle it; this is the responsibility of the application.
This commit changes the functional behavior and return value of
rte_gso_segment, so the application must take the appropriate action
according to the return value: "ret < 0" means it should free and drop
'pkt'; "ret == 0" means 'pkt' isn't GSOed but can be transmitted as a
normal packet; "ret > 0" means 'pkt' has been GSOed into two or more
segments and "pkts_out" should be used to transmit these segments.
The application must free 'pkt' after calling rte_gso_segment when the
return value isn't 0.
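A usage sketch under the new contract (MAX_GSO_SEGS, the gso_ctx setup and
the tx_one()/tx_burst() helpers are assumed to exist elsewhere):

  struct rte_mbuf *pkts_out[MAX_GSO_SEGS];
  int ret;

  ret = rte_gso_segment(pkt, &gso_ctx, pkts_out, RTE_DIM(pkts_out));
  if (ret < 0) {
      rte_pktmbuf_free(pkt);          /* error: drop the packet */
  } else if (ret == 0) {
      tx_one(pkt);                    /* not GSOed: send as a normal packet */
  } else {
      tx_burst(pkts_out, ret);        /* GSOed into 'ret' segments */
      rte_pktmbuf_free(pkt);          /* caller now frees the original */
  }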
Fixes: 119583797b ("gso: support TCP/IPv4 GSO")
Cc: stable@dpdk.org
Signed-off-by: Yi Yang <yangyi01@inspur.com>
Acked-by: Jiayu Hu <jiayu.hu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
SNOW3G, KASUMI and ZUC algorithms have been added to
aesni_mb PMD in this release. These algorithms are not supported
for the crypto synchronous API, so the crypto tests for these algorithms
are disabled.
Fixes: ae8e085c60 ("crypto/aesni_mb: support KASUMI F8/F9")
Fixes: 6c42e0cf4d ("crypto/aesni_mb: support SNOW3G-UEA2/UIA2")
Fixes: fd8df85487 ("crypto/aesni_mb: support ZUC-EEA3/EIA3")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Update the offload dequeue to retrieve the full ring, to be
agnostic of the implementation.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Aidan Goddard <aidan.goddard@accelercomm.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Reducing the number of repetitions from 1000 to 100
to save time. Results are accurate enough with
100 loops.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Liu Tianjiao <tianjiao.liu@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
BLER test results are not valid when LLR compression
is used or for loopback scenarios. Skipping these.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Aidan Goddard <aidan.goddard@accelercomm.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Reviewed-by: Tom Rix <trix@redhat.com>
Replacing the magic number for the default wait time for HW
offload.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Liu Tianjiao <tianjiao.liu@intel.com>
Run preloading explicitly for unit tests. Load each code block
by reusing existing input op then restore for the actual test.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Liu Tianjiao <tianjiao.liu@intel.com>
Adding an explicit check in the unit test that the stats counters
have the expected values.
This was identified as a gap in code coverage.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Aidan Goddard <aidan.goddard@accelercomm.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Adding explicitly different unit tests when testing for validation
or latency (early termination enabled or not).
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Aidan Goddard <aidan.goddard@accelercomm.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Previously, the Tx timestamp field and flag were registered in testpmd,
as described in mlx5 guide.
For consistency between Rx and Tx timestamps,
managing mbuf registrations inside the driver, as properly documented,
is a simpler expectation.
The only driver to support this feature (mlx5) is updated
as well as the testpmd application.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The mbuf timestamp is moved to a dynamic field
in order to allow removal of the deprecated static field.
The related mbuf flag is also replaced.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Add a variety of self-tests for both ldb and directed
ports/queues, as well as configure, start, stop, link, etc...
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
Add a variety of self-tests for both ldb and directed
ports/queues, as well as configure, start, stop, link, etc...
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
Configure the QP with OOS according to the device capabilities
returned from rte_regexdev_info_get.
Signed-off-by: Guy Kaneti <guyk@marvell.com>
Acked-by: Ori Kam <orika@nvidia.com>
The order test stored a sequence number in the deprecated mbuf field
seqn.
It is moved to a dynamic field in order to allow removal of seqn.
Signed-off-by: David Marchand <david.marchand@redhat.com>
The reorder library used sequence numbers stored in the deprecated field
seqn.
It is moved to a dynamic mbuf field in order to allow removal of seqn.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The order test stored the flow ID in the deprecated mbuf field udata64.
It is moved to a dynamic field in order to allow removal of udata64.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The test used the deprecated mbuf field udata64.
It is moved to a dynamic field in order to allow removal of udata64.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The test used the deprecated mbuf field udata64.
It is moved to a dynamic field in order to allow removal of udata64.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Tested-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
There is a test for dynamic field registration at a specific offset.
Depending on which driver is probed, some dynamic fields may already be
registered at this offset.
This failure case is now skipped with a warning.
Fixes: 4958ca3a44 ("mbuf: support dynamic fields and flags")
Cc: stable@dpdk.org
Reported-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Add functional tests for zero copy APIs. Test enqueue/dequeue
functions are created using the zero copy APIs to fit into
the existing testing method.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Use uintptr_t instead of unsigned long while initializing the
array of pointers.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Added -v option to switch between different lookup implementations
to measure their performance and correctness.
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
The FIB type RTE_FIB_TYPE_MAX is used only for sanity checks;
remove it to prevent applications from starting to use it.
The same applies to FIB6's RTE_FIB6_TYPE_MAX.
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
When merging this series after Bruce's changes on build macros, an old
macro usage was re-introduced.
Fixes: d82d6ac643 ("app/procinfo: add crypto security context info")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The config options CONFIG_RTE_* are simple RTE_* defines with meson.
Now that make support is dropped, update the names in logs and comments.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: David Marchand <david.marchand@redhat.com>
Replace master lcore with main lcore and
replace slave lcore with worker lcore.
Keep the old functions and macros but mark them as deprecated
for this release.
The "--master-lcore" command line option is also deprecated
and any usage will print a warning and use "--main-lcore"
as replacement.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
There are cases where a port may be owned by another (failsafe, netvsc,
bond), but currently proc-info has no way to look at the stats of those
ports. This patch provides a way for the user to explicitly ask for these
ports.
If no portmask is given the output is unchanged; it only shows the
top level ports. If portmask requests a specific port it will be
shown even if owned.
Increase the size of port mask variable to unsigned long to
allow up to 64 ports to be handled on 64 bit architecture.
The device owner is also a useful thing to show in port info.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
If the crypto context is not present, there is no point in displaying it.
This patch adds the crypto based security context info.
Also improve the flag printing to SECURITY OFFLOAD from
INLINE.
Use common code for displaying crypto context information
when doing show_ports and show_crypto.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Many drivers will report per queue info
as well as how many descriptors are in use.
Also display per-queue offload flags.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Revise the display of port information to include more data
and be more human friendly.
* Show driver and device information
* Show MAC address
* Show flow control information
* Combine lines if possible
* Show all multicast mode
* Show queue mempool name
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The DPDK EAL info messages at the start of a diagnostic application
are not helpful to end user. Suppress them by setting log-level
by default.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Printing extra borders does not improve readability, and is just
unnecessary. Putting TSC hz in header also makes no sense here.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
This logtype is defined but never used.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
getopt_long() parses command-line arguments. One of its arguments
'longopts' is a pointer to the first element of an array of struct
option. The last element of the array has to be filled with zeros
to mark the end of options. For example:
struct option longopts[] = {
	{ "help", 0, 0, ARG_HELP},
	....
	/* End of options */
	{ 0, 0, 0, 0 }
};
This commit adds the last element. Prior to this commit getopt_long()
continued parsing beyond the longopts[] array which occasionally caused
segmentation faults.
Fixes: de06137cb2 ("app/regex: add RegEx test application")
Cc: stable@dpdk.org
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Use the newer macros defined by meson in all DPDK source code, to ensure
there are no errors when the old non-standard macros are removed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
As discussed on the dpdk-dev mailing list[1], we can make some easy
improvements in standardizing the naming of the various components in DPDK,
and their associated feature-enabled macros.
Following this patch, each library will have a name in the format
'librte_<name>.so', and the macro indicating that the library is enabled in
the build will have the form 'RTE_LIB_<NAME>'.
Similarly, for drivers, the equivalent name formats and macros are:
'librte_<class>_<name>.so' and 'RTE_<CLASS>_<NAME>', where class is the
device type taken from the relevant driver subdirectory name, i.e. 'net',
'crypto' etc.
To avoid too many changes at once for end applications, the old macro names
will still be provided in the build in this release, but will be removed
subsequently.
[1] http://inbox.dpdk.org/dev/ef7c1a87-79ab-e405-4202-39b7ad6b0c71@solarflare.com/t/#u
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Rosen Xu <rosen.xu@intel.com>
A number of lib and driver dependencies for various apps were missed on
build because the proper macro names for their use were mismatched between
meson and make build systems. Before adding in equivalent compatibility
macros we need to ensure to add the proper dependencies to ensure a valid
build.
Fixes: 16ade738fd ("app/testpmd: build with meson")
Fixes: b5dc795a8a ("test: build app with meson as dpdk-test")
Fixes: 996ef11761 ("app: add all remaining apps to meson build")
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
If an error occurred when reading from the socket, the function
returned without closing the socket. This is now fixed to avoid the
resource leak of the sock variable going out of scope.
Coverity issue: 363043
Fixes: bd78cf693e ("test/telemetry: add unit tests for data to JSON")
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
When the socket connection failed, an error was printed to screen but
the function did not return an error, and continued to try to read from
the socket. This is now corrected to close the socket and return -1 when the
connection fails.
Fixes: bd78cf693e ("test/telemetry: add unit tests for data to JSON")
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
By using the alloc_size() attribute the compiler can optimize
better and detect errors at compile time.
For example, GCC will fail one of the invalid allocation examples
in app/test/test_malloc.c because the allocation is outside the
limits of memory.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
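As an illustration of what the attribute enables (a hypothetical
allocator prototype, not the DPDK one):

#include <stddef.h>

/* argument 1 is the size of the returned object */
__attribute__((alloc_size(1)))
void *my_alloc(size_t size);

void use(void)
{
	char *p = my_alloc(8);
	p[16] = 0; /* with warnings enabled, GCC can flag this out-of-bounds write */
}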
rte_mempool_get_bulk is used to get bufs/many_bufs from the pool,
but in some locations, when the test fails, the bufs/many_bufs are
not returned back to the pool.
Due to this, multiple executions of distributor_autotest give the
following error message: "Error getting mbufs from pool".
To resolve this issue, rte_mempool_put_bulk is used whenever the test
fails and returns.
Fixes: c3eabff124 ("distributor: add unit tests")
Cc: stable@dpdk.org
Signed-off-by: Sarosh Arif <sarosh.arif@emumba.com>
Acked-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Reviewed-by: David Hunt <david.hunt@intel.com>
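The cleanup pattern described above, as a sketch; the pool pointer,
BURST count and do_test() body are hypothetical test-local names:

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define BURST 32

int do_test(struct rte_mbuf **bufs, unsigned int n); /* hypothetical test body */

static int
run_case(struct rte_mempool *p)
{
	struct rte_mbuf *bufs[BURST];

	if (rte_mempool_get_bulk(p, (void **)bufs, BURST) != 0) {
		printf("Error getting mbufs from pool\n");
		return -1;
	}
	if (do_test(bufs, BURST) != 0) {
		/* return the buffers before bailing out, so later runs can get them */
		rte_mempool_put_bulk(p, (void **)bufs, BURST);
		return -1;
	}
	rte_mempool_put_bulk(p, (void **)bufs, BURST);
	return 0;
}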
Sending a number of packets equal to the number of workers isn't enough
to stop all workers in the burst version of the distributor, as more than
one packet can be matched and consumed by a single worker. This way
some of the workers might not be woken from rte_distributor_get_pkt().
This patch fixes it by sending packets one by one. Each sent packet
causes exactly one worker to quit.
Fixes: 775003ad2f ("distributor: add new burst-capable library")
Cc: stable@dpdk.org
Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
In all distributor tests there is a chance that the tests
will send packets to the distributor with rte_distributor_process()
before workers are started and have requested packets.
This patch ensures that all packets are delivered to workers
by calling rte_distributor_process() in a loop until the number
of successfully processed packets reaches the value required by the test.
The change is applied to every first call in a test case.
Cc: stable@dpdk.org
Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: David Hunt <david.hunt@intel.com>
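The loop described above looks roughly like the sketch below; 'db', 'bufs'
and the burst count are the test's local names and are assumed here:

#include <rte_distributor.h>
#include <rte_mbuf.h>

static void
process_all(struct rte_distributor *db, struct rte_mbuf **bufs,
	    unsigned int burst)
{
	unsigned int processed = 0;

	/* keep calling until the distributor has accepted every packet */
	while (processed < burst)
		processed += rte_distributor_process(db, &bufs[processed],
						     burst - processed);
}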
All of the former tests analyzed only the statistics
of packets processed by all workers.
The new test also verifies whether packets are processed
on workers as expected.
Every packet processed by a worker is marked
and analyzed after it is returned to the distributor.
This test allows finding issues in the matching algorithms.
Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: David Hunt <david.hunt@intel.com>
Instead of adding delays in the test code and waiting
for workers to hopefully reach the proper states,
synchronize the worker shutdown test cases with a spin lock
on an atomic variable.
Fixes: c0de0eb82e ("distributor: switch over to new API")
Cc: stable@dpdk.org
Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: David Hunt <david.hunt@intel.com>
During the quit_workers function the distributor's main core processes
some packets to wake up pending worker cores so they can quit.
As quit_workers also acts as a cleanup procedure for the next test
case, it should also collect the packets returned by workers'
handlers, so that the cyclic buffer with returned packets
in the distributor remains empty.
Fixes: c3eabff124 ("distributor: add unit tests")
Fixes: c0de0eb82e ("distributor: switch over to new API")
Cc: stable@dpdk.org
Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: David Hunt <david.hunt@intel.com>
Statistics of handled packets are cleared and read on the main lcore,
while they are increased in the workers' handlers on different lcores.
Without synchronization they occasionally showed invalid values.
This patch uses atomic mechanisms to synchronize.
The relaxed memory model is used.
Fixes: c3eabff124 ("distributor: add unit tests")
Cc: stable@dpdk.org
Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
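The pattern is roughly the following, as a sketch with a hypothetical
per-worker stats array and using the GCC __atomic builtins:

#include <stdint.h>

#define MAX_WORKERS 64 /* hypothetical bound */

static struct { uint64_t handled_packets; } worker_stats[MAX_WORKERS];

/* worker handler (worker lcore): a relaxed increment is sufficient */
static inline void
count_packet(unsigned int wkr)
{
	__atomic_fetch_add(&worker_stats[wkr].handled_packets, 1,
			   __ATOMIC_RELAXED);
}

/* main lcore: relaxed loads when summing the counters */
static uint64_t
total_packet_count(unsigned int workers)
{
	uint64_t total = 0;
	unsigned int i;

	for (i = 0; i < workers; i++)
		total += __atomic_load_n(&worker_stats[i].handled_packets,
					 __ATOMIC_RELAXED);
	return total;
}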
Sanity tests with mbuf alloc and shutdown tests assume that
mbufs passed to worker cores are freed in handlers.
Such packets should not be returned to the distributor's main
core. The only packets that should be returned are the packets
sent after completion of the tests in the quit_workers function.
This patch stops returning mbufs to the distributor's core.
In case of the shutdown tests it is impossible to determine
how the worker and distributor threads would synchronize.
Packets used by the tests should be freed and packets used during
quit_workers() shouldn't. That's why returning mbufs to the mempool
is moved from the worker threads to the test procedure run on the
distributor thread.
Additionally this patch cleans up unused variables.
Fixes: c0de0eb82e ("distributor: switch over to new API")
Cc: stable@dpdk.org
Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: David Hunt <david.hunt@intel.com>
The sanity test with worker shutdown delegates all bufs
to be processed by a single lcore worker, then it freezes
one of the lcore workers and continues to send more bufs.
The frozen core shuts down first by calling
rte_distributor_return_pkt().
The test intention is to verify whether packets assigned to
the shut-down lcore are reassigned to another worker.
However, the shutdown core was not always the one that was
processing packets. The lcore processing mbufs might be different
every time the test is launched. This is caused by keeping the value
of the static variable wkr in the rte_distributor_process() function
between running test cases.
The test always froze the lcore with id 0. The patch stores the id
of the worker that is processing the data in the zero_idx global atomic
variable. This way the frozen lcore is always the proper one.
Fixes: c3eabff124 ("distributor: add unit tests")
Cc: stable@dpdk.org
Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Tested-by: David Hunt <david.hunt@intel.com>
rte_distributor_request_pkt and rte_distributor_get_pkt dereferenced
the oldpkt parameter in RTE_DIST_ALG_SINGLE mode even if the number
of buffers returned from worker to distributor was 0.
This patch passes NULL to the legacy API when the number of returned
buffers is 0. This allows passing NULL as the oldpkt parameter.
The distributor tests are also updated to pass NULL as oldpkt and
0 as the number of returned packets, where packets are not returned.
Fixes: 775003ad2f ("distributor: add new burst-capable library")
Cc: stable@dpdk.org
Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: David Hunt <david.hunt@intel.com>
The API ``rte_security_session_create`` took only a single
mempool for the session and session private data. So the
application needed to create a mempool for twice the number of
sessions needed, which also wastes memory, as the
session private data needs more memory compared to the session.
Hence the API is modified to take two mempool pointers
- one for the session and one for the private data.
This is very similar to the crypto based session create APIs.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Tested-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
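After the change, a session create call looks roughly as below; this is a
sketch only, the parameter names are hypothetical and the exact prototype
should be checked against the rte_security header of this release:

#include <rte_mempool.h>
#include <rte_security.h>

static struct rte_security_session *
create_session(struct rte_security_ctx *sec_ctx,
	       struct rte_security_session_conf *conf,
	       struct rte_mempool *session_pool,
	       struct rte_mempool *session_priv_pool)
{
	/* one mempool for session objects, one for the larger private data */
	return rte_security_session_create(sec_ctx, conf,
					   session_pool, session_priv_pool);
}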
Use ENODEV as the error code if specified port ID is invalid.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Change the eth_dev_stop_t return value from void to int.
Make eth_dev_stop_t implementations across all drivers return
negative errno values in case of error conditions.
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
rte_eth_dev_stop() return value was changed from void to int,
so this patch modifies the usage of this function across test/event
according to the new return type.
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
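Call sites now check the returned error code, along these lines (sketch):

#include <stdio.h>
#include <rte_errno.h>
#include <rte_ethdev.h>

static void
stop_port(uint16_t port_id)
{
	int ret = rte_eth_dev_stop(port_id);

	if (ret != 0)
		printf("rte_eth_dev_stop failed for port %u: %s\n",
		       port_id, rte_strerror(-ret));
}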
rte_eth_dev_stop() return value was changed from void to int,
so this patch modifies the usage of this function across app/testpmd
according to the new return type.
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
rte_eth_dev_stop() return value was changed from void to int,
so this patch modifies the usage of this function across app/flow-perf
according to the new return type.
Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
If an Rx queue is configured with the split feature, the extended
setup with the specified segment sizes and pools will be performed.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add command line parameter:
--rxoffs=X[,Y]
Sets the offsets of packet segments from the beginning of the
receiving buffer if split feature is engaged. Affects only the
queues configured with split offloads (currently BUFFER_SPLIT
is supported only).
Add interactive mode command, providing the same:
testpmd> set rxoffs (x[,y]*)
Where x[,y]* represents a CSV list of values, without white space.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add command line parameter:
--rxpkts=X[,Y]
Sets the length of segments to scatter packets on receiving if split
feature is engaged. Affects only the queues configured with split
offloads (currently BUFFER_SPLIT is supported only).
Add interactive mode command:
testpmd> set rxpkts (x[,y]*)
Where x[,y]* represents a CSV list of values, without white space.
The command has the same semantics as the --rxpkts parameter.
Optionally, multiple memory pools can be specified with the --mbuf-size
command line parameter, and the mbufs to receive will be allocated
sequentially from these extra memory pools.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch adds support for RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT,
providing per-queue configuration for this offload.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The command line parameter --mbuf-size is updated, it can handle
multiple values like the following:
--mbuf-size=2176,512,768,4096
specifying the creation of the extra memory pools with the requested
mbuf data buffer sizes. If some buffer split feature is engaged,
the extra memory pools can be used to configure the Rx queues
with rte_eth_rx_queue_setup().
The extra pools are created with the requested sizes, and the pool names
are assigned with an appended index: mbuf_pool_socket_%socket_%index.
Index zero is used to specify the first mandatory pool to maintain
compatibility with existing code.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
There is no error info displayed when running the flow flush
command with an invalid port. This patch fixes the issue.
Fixes: 2a449871a1 ("app/testpmd: align behaviour of multi-port detach")
Cc: stable@dpdk.org
Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Reviewed-by: Suanming Mou <suanmingm@nvidia.com>
rte_flow update introduced the has_vlan field for the ETH header item,
and the field has_more_vlan for the VLAN header item.
The new fields are used to clearly indicate packet tagging characteristics.
This patch updates testpmd CLI to support the new fields.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
A new parameter `hairpin-mode` is introduced to the testpmd command
line. Bitmask value is used to provide a more flexible configuration.
This parameter should be used when `hairpinq` is specified in the
command line.
Bit 0 in the LSB indicates the hairpin will use the loop mode. The
previous port Rx queue will be connected to the current port Tx
queue.
Bit 1 in the LSB indicates the hairpin will use pair port mode. The
even index port will be paired with the next odd index port. If the
total number of the probed ports is odd, then the last one will be
paired to itself.
If this byte is zero, then each port will be paired to itself.
Bit 0 takes a higher priority in the checking.
Bit 4 in the second byte indicates if the hairpin will use explicit
Tx flow mode.
e.g. in the command line, "--hairpinq=2 --hairpin-mode=0x11"
If not set, default value zero will be used and the behavior will
try to get aligned with the previous single port mode. If the ports
belong to different vendors' NICs, it is suggested to use the `self`
hairpin mode only.
Since hairpin configures the hardware resources, the port mask of
packets forwarding engine will not be used here.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
rte_flow update, following RFC [1], added to ethdev the rte_flow item
ipv6_frag_ext.
This patch updates testpmd CLI to support the new item and its fields.
To match on fragmented IPv6 packets, this item is added to pattern:
... ipv6 / ipv6_frag_ext ...
[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
rte_flow update, following RFC [1], introduced has_frag_ext field for
IPv6 header item, used to indicate match on fragmented/non-fragmented
packets.
This patch updates testpmd CLI to support the new field.
To match on non-fragmented IPv6 packets, need to use pattern:
... ipv6 has_frag_ext spec 0 has_frag_ext mask 1 ...
To match on fragmented IPv6 packets, need to use pattern:
... ipv6 has_frag_ext spec 1 has_frag_ext mask 1 ...
To match on any IPv6 packets, the has_frag_ext field should
not be specified for match.
[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
This patch updates testpmd CLI to support fragment_offset field of
IPv4 header item.
To match on non-fragmented IPv4 packets, need to use pattern:
... ipv4 fragment_offset spec 0 fragment_offset mask 0x3fff ...
To match on fragmented IPv4 packets, need to use pattern:
... ipv4 fragment_offset spec 1 fragment_offset last 0x3fff
fragment_offset mask 0x3fff ...
(Use the full available range 1 to 0x3fff to include all possible
values.)
To match on any IPv4 packets, fragmented and non-fragmented,
the fragment_offset field should not be specified for match.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
This patch adds shared action support to the testpmd CLI.
All shared actions created via the testpmd CLI are assigned an ID for
further reference in other CLI commands. The shared action ID, supplied
as a CLI argument or assigned by testpmd, is similar to a flow ID and
limited to the scope of the testpmd CLI.
Create shared action syntax:
flow shared_action {port_id} create [action_id {shared_action_id}]
[ingress] [egress] action {action} / end
Create shared action examples:
flow shared_action 0 create action_id 100 \
ingress action rss queues 1 2 end / end
This creates shared rss action with id 100 on port 0.
flow shared_action 0 create action_id \
ingress action rss queues 0 1 end / end
This creates shared rss action with id assigned by testpmd
on port 0.
Update shared action syntax:
flow shared_action {port_id} update {shared_action_id}
action {action} / end
Update shared action example:
flow shared_action 0 update 100 \
action rss queues 0 3 end / end
This updates shared rss action having id 100 on port 0
with rss to queues 0 3 (in create example rss queues were
1 & 2).
Destroy shared action syntax:
flow shared_action {port_id} destroy action_id {shared_action_id} [...]
Destroy shared action example:
flow shared_action 0 destroy action_id 100 action_id 101
This destroys shared actions having id 100 & 101
Query shared action syntax:
flow shared_action {port} query {shared_action_id}
Query shared action example:
flow shared_action 0 query 100
This queries shared actions having id 100
Use shared action as flow action syntax:
flow create {port_id} ... / end actions [action / [...]]
shared {action_id} / [action / [...]] end
Use shared action as flow action example:
flow create 0 ingress pattern ... / end \
actions shared 100 / end
This creates flow rule where rss action is shared rss action
having id 100.
All shared action CLIs report status of the command.
Shared action query CLI output depends on action type.
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add a function to calculate the length of an IPv4 header as suggested
on the mailing list [1]. Call where appropriate.
[1] https://mails.dpdk.org/archives/dev/2020-October/184471.html
Suggested-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Michael Pfeiffer <michael.pfeiffer@tu-ilmenau.de>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
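Assuming the new helper is named rte_ipv4_hdr_len() (an assumption here),
a caller computing the L4 offset of a plain Ethernet/IPv4 packet would
look roughly like:

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

static inline uint16_t
l4_offset(const struct rte_mbuf *m)
{
	const struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m,
			const struct rte_ipv4_hdr *,
			sizeof(struct rte_ether_hdr));

	/* the header length in bytes is derived from the IHL field (IHL * 4) */
	return sizeof(struct rte_ether_hdr) + rte_ipv4_hdr_len(ip);
}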
Following ethdev update in the previous patch of this series, this
patch adds CLI support to query information related to AGE action.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Use the sample action with ratio 1 for the mirroring flow, and add
support to set a different port or encap action for the mirrored
packets.
The example of test-pmd command:
1. set sample_actions 1 port_id id 1 / end
flow create 0 ... pattern eth / end actions
sample ratio 1 index 1 / port_id id 2...
The flow will result in all the matched ingress packets being sent to
port 2, and the packets also being mirrored and sent to port 1.
2. set raw_encap 0 eth src.../ ipv4.../...
set raw_encap 1 eth src.../ ipv4.../...
set sample_actions 2 raw_encap index 0 / port_id id 0 / end
flow create 0 ... pattern eth / end actions
sample ratio 1 index 2 / raw_encap index 1 / port_id id 0...
The flow will result in all the matched egress packets being
encapsulated and sent to the wire, and the packets also being mirrored
with the different encapsulation data and sent to the wire.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add a new testpmd command 'set sample_actions' that supports the
multiple sample actions list configuration by using the index:
set sample_actions <index> <actions list>
The examples for the sample flow use case and result as below:
1. set sample_actions 0 mark id 0x8 / queue index 2 / end
.. pattern eth / end actions sample ratio 2 index 0 / jump group 2 ...
This flow will result in all the matched ingress packets being
jumped to the next flow table, and every second packet being
marked and sent to queue 2 of the control application.
2. ...pattern eth / end actions sample ratio 2 / port_id id 2 ...
The flow will result in all the matched ingress packets being
duplicated and sent to the representor peer (VF or wire) on DPDK port 2,
and every second packet also being sent to the E-Switch manager vport.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This is a cleanup commit.
It assembles all tunnel outer updates into one function call to avoid
code duplications.
It defines RTE_VXLAN_GPE_DEFAULT_PORT (4790) in accordance with all
other tunnel protocol definitions.
It replaces all numeric values 4790 with their corresponding definition
RTE_VXLAN_GPE_DEFAULT_PORT.
It updates the 'csum parse-tunnel' documentation.
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
IANA has assigned port 6081 as the fixed well-known destination port for
GENEVE. Nevertheless draft-ietf-nvo3-geneve-09 recommends that
implementations make this configurable. This commit enables specifying
any positive UDP destination port number for GENEVE protocol parsing.
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
GENEVE is a widely used tunneling protocol in modern Virtualized
Networks. testpmd already supports parsing of several tunneling
protocols including VXLAN, VXLAN-GPE, GRE. This commit adds GENEVE
parsing of inner protocols (IPv4-0x0800, IPv6-0x86dd, Ethernet-0x6558)
based on IETF draft-ietf-nvo3-geneve-09. GENEVE is considered more
flexible than the other protocols. In terms of protocol format, the GENEVE
header has variable length options, as opposed to other tunneling
protocols which have a fixed header size.
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Call rte_eth_dev_info_get() in testpmd to get the device info,
so that the speed capabilities can be printed under "show device info".
Bugzilla ID: 496
Signed-off-by: Sarosh Arif <sarosh.arif@emumba.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
rte_cldemote is similar to a prefetch hint - in reverse.
On x86, cldemote(addr) enables software to hint to hardware that a cache
line is likely to be shared. This is quite useful in core-to-core
communications where a cache line is likely to be shared.
The Arm and PPC implementations are provided as NOPs and can be replaced
if any equivalent instructions are available on those
architectures.
Signed-off-by: Omkar Maslekar <omkar.maslekar@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
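A typical producer-side use might look like the sketch below; the struct
and function names are hypothetical, and rte_cldemote() is only a hint
that compiles to a NOP where unsupported:

#include <stdint.h>
#include <rte_prefetch.h>

struct msg {
	uint64_t data;
};

static inline void
produce(struct msg *slot, uint64_t value)
{
	slot->data = value;
	/* this cache line will soon be read by the consumer core */
	rte_cldemote(slot);
}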
There is a potential race condition in 'service_attr_get' which will cause
test failures since the service core thread is still running while the
values are being retrieved/reset.
This patch fixes the race condition by waiting for the service core thread
to stop before continuing with the unit test checks.
Fixes: 4d55194d76 ("service: add attribute get function")
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This commit implements the eventdev ABI changes required by
the DLB/DLB2 PMDs. Several data structures and constants are modified
or added in this patch, thereby requiring modifications to the
dependent apps and examples.
The DLB/DLB2 hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) DLB does not have the ability to carry the flow_id as part
of the event (QE) payload. Note that the DLB2 hardware is capable of
carrying the flow_id.
Following is a detailed description of the changes that have been made.
1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.
struct rte_event_dev_info {
uint32_t max_event_port_links;
/**< Maximum number of queues that can be linked to a single event
* port by this device.
*/
uint8_t max_single_link_event_port_queue_pairs;
/**< Maximum number of event ports and queues that are optimized for
* (and only capable of) single-link configurations supported by this
* device. These ports and queues are not accounted for in
* max_event_ports or max_event_queues.
*/
}
2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.
/** Event device configuration structure */
struct rte_event_dev_config {
uint8_t nb_single_link_event_port_queues;
/**< Number of event ports and queues that will be singly-linked to
* each other. These are a subset of the overall event ports and
* queues; this value cannot exceed *nb_event_ports* or
* *nb_event_queues*. If the device has ports and queues that are
* optimized for single-link usage, this field is a hint for how many
* to allocate; otherwise, regular event ports and queues can be used.
*/
}
3) Replace the dedicated implicit_release_disabled field with a bit field
of explicit port capabilities. The implicit_release_disable functionality
is assigned to one bit, and a port-is-single-link-only attribute is
assigned to other, with the remaining bits available for future assignment.
/* Event port configuration bitmap flags */
#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL (1ULL << 0)
/**< Configure the port not to release outstanding events in
* rte_event_dev_dequeue_burst(). If set, all events received through
* the port must be explicitly released with RTE_EVENT_OP_RELEASE or
* RTE_EVENT_OP_FORWARD. Must be unset if the device is not
* RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
*/
#define RTE_EVENT_PORT_CFG_SINGLE_LINK (1ULL << 1)
/**< This event port links only to a single event queue.
*
* @see rte_event_port_setup(), rte_event_port_link()
*/
#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
/**
* The implicit release disable attribute of the port
*/
struct rte_event_port_conf {
uint32_t event_port_cfg;
/**< Port cfg flags(EVENT_PORT_CFG_) */
}
This patch also removes the deprecation notice and announces
the new eventdev ABI changes in the release notes.
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Since we are not holding the mbufs or creating any references
in the app, the mbuf fast free offload can be enabled.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The resources held by crypto adapter should be freed when the
test suite exits.
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
The arguments passed to rte_event_crypto_adapter_caps_get() and
rte_event_crypto_adapter_create() are incorrect.
In the rte_event_crypto_adapter_caps_get(), event device id should
be the first argument and cryptodev id should be the second argument.
In the rte_event_crypto_adapter_create(), the event device id should
be the second argument.
Fixes: 3c2c535ecf ("test: add event crypto adapter auto-test")
Cc: stable@dpdk.org
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
The capability of a hardware event device should be checked before
creating an event crypto adapter in a particular mode. The test case
returns an error if the mode is not supported.
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Allow creation of net_null vdevs if the vdev EAL option is not specified,
and uninit the vdevs created in the test. The change also adds error
checks for vdev init and uninit.
Signed-off-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Reviewed-by: Nikhil Rao <nikhil.rao@intel.com>
adapter_multi_eth_add_del() does vdev init but doesn't uninit them.
This causes issues when running event_eth_rx_adapter_autotest multiple
times.
The fix does vdev uninit before exiting the test.
Signed-off-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Reviewed-by: Nikhil Rao <nikhil.rao@intel.com>
A new function which uses the structure of the test vectors for SDAP
is added; it calls a function responsible for calling test_pdcp_proto
with the test vector, both for encapsulation and decapsulation.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
The function test_pdcp_proto was relying too heavily on the structure
of the test vectors for PDCP, making it difficult to reuse.
The function is changed to take all the test parameters as input and
does not need access to the test vectors anymore.
Signed-off-by: Franck Lenormand <franck.lenormand@nxp.com>
The test vectors are structured in a more readable way compared
to the test vectors for PDCP. This structure allows having all the
information about a test vector in the same place.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch updates the xform with the right configuration.
For session based ops, the sym session pool is created with a
valid userdata size.
Fixes: 24054e3640 ("test/crypto: use separate session mempools")
Cc: stable@dpdk.org
Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch adds the cryptodev raw API test support to the unit tests.
In addition, a new test case for the QAT PMD for this test type is
enabled.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch updates ``rte_crypto_sym_vec`` structure to add
support for both cpu_crypto synchronous operation and
asynchronous raw data-path APIs. The patch also includes
AESNI-MB and AESNI-GCM PMD changes, unit test changes and
documentation updates.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Switched from device specific test suite to unified
cryptodev test suite. Removed the armv8 device specific test suite.
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
ut_setup / ut_teardown are invoked for each test case by the test framework.
The call inside test_stats is unnecessary and even incorrect.
It caused a double free of objects such as the crypto operation structure.
The issue was trapped when RTE_LIBRTE_MEMPOOL_DEBUG was enabled.
Fix the issue by removing ut_setup / ut_teardown from the test case implementation.
Fixes: 202d375c60 ("app/test: add cryptodev unit and performance tests")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Tested-by: Adam Dybkowski <adamx.dybkowski@intel.com>
This patch adds AES-ECB 128, 192 and 256 support to the aesni_mb PMD.
AES-ECB 128, 192 and 256 test vectors added to cryptodev tests.
Signed-off-by: Marcel Cornu <marcel.d.cornu@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
5G DL default symlink was pointing to a 4G vector.
Fixes: d762705308 ("app/bbdev: add test vectors for 5GNR")
Cc: stable@dpdk.org
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Aidan Goddard <aidan.goddard@accelercomm.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Acked-by: Liu Tianjiao <tianjiao.liu@intel.com>
This patch replaces the usage of the word 'slave' with the more
appropriate word 'worker' in the QAT PMD and the Scheduler PMD,
as well as in their docs. The test app was also modified
to use the new wording.
The Scheduler PMD's public API was modified according to the
previous deprecation notice:
rte_cryptodev_scheduler_slave_attach is now called
rte_cryptodev_scheduler_worker_attach,
rte_cryptodev_scheduler_slave_detach is
rte_cryptodev_scheduler_worker_detach,
rte_cryptodev_scheduler_slaves_get is
rte_cryptodev_scheduler_workers_get.
Also, the configuration value RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES
was renamed to RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS.
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
DPDK APIs have to be prefixed with "rte_" in order to avoid
namespace pollution.
Let's fix it while fpga_lte_fec API is still experimental.
Fixes: efd453698c ("baseband/fpga_lte_fec: add driver for FEC on FPGA")
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tom Rix <trix@redhat.com>
DPDK APIs have to be prefixed with "rte_" in order to avoid
namespace pollution.
Let's fix it while fpga_5gnr_fec API is still experimental.
Fixes: 2d4306438c ("baseband/fpga_5gnr_fec: add configure function")
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tom Rix <trix@redhat.com>
Add a configure function to configure the PF from within
the bbdev-test itself, without an external application
configuring the device.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Liu Tianjiao <tianjiao.liu@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add in the "info_get" function to the driver, to allow us to query the
device.
No processing capability are available yet.
Linking bbdev-test to support the PMD with null capability.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Liu Tianjiao <tianjiao.liu@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In testsuite_setup(), ts_params is configured for the first valid device.
The same device should be used as the valid device in the
test_device_configure_invalid_dev_id test case.
Fixes: 202d375c60 ("app/test: add cryptodev unit and performance tests")
Cc: stable@dpdk.org
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
The option RTE_EAL_ALWAYS_PANIC_ON_ERROR was off by default,
and not customizable with meson. It is completely removed.
The function rte_dump_registers is a trace of the bare metal support
era, and was not supported in userland. It is completely removed.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: David Marchand <david.marchand@redhat.com>
This commit adds new rte_prefetchX_write() variants, that suggest to the
compiler to use a prefetch instruction with intention to write. As a
compiler builtin, the compiler can choose based on compilation target
what the best implementation for this instruction is.
Three versions are provided, targeting the different levels of cache.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
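For instance, code about to modify an mbuf it just obtained could hint
the write ahead of time (a sketch; the helper name is hypothetical):

#include <rte_mbuf.h>
#include <rte_prefetch.h>

static inline void
prepare_mbuf(struct rte_mbuf *m)
{
	/* prefetch the first mbuf cache line with write intent */
	rte_prefetch0_write(m);
	m->ol_flags = 0;
	m->nb_segs = 1;
}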
Modify the test_sched application to build the hierarchical scheduler
with the default subport bandwidth profile. It also allows updating
a subport with different subport rates dynamically.
Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
struct cmdline exposes platform-specific members it contains, most
notably struct termios that is only available on Unix. While ABI
considerations prevent hiding the definition on already supported
platforms, struct cmdline is considered logically opaque from now on.
Add a deprecation notice targeted at 20.11.
* Remove tests checking struct cmdline content as meaningless.
* Fix missing cmdline_free() in unit test.
* Add cmdline_get_rdline() to access history buffer indirectly.
The new function is currently used only in tests.
Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Introduce classify implementation that uses AVX512 specific ISA.
rte_acl_classify_avx512x32() is able to process up to 32 flows in parallel.
It uses 512-bit width registers/instructions and provides higher
performance than rte_acl_classify_avx512x16(), but can cause
frequency level change.
Note that for now only 64-bit version is supported.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
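Applications can request this method explicitly; a sketch, assuming the
new algorithm enum value is RTE_ACL_CLASSIFY_AVX512X32 (by default the
library still picks the best method available at runtime):

#include <rte_acl.h>

static void
use_avx512_classify(struct rte_acl_ctx *ctx)
{
	/* request the 512-bit wide method; fall back to the default if refused */
	if (rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_AVX512X32) != 0)
		rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_DEFAULT);
}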
Introduce classify implementation that uses AVX512 specific ISA.
rte_acl_classify_avx512x16() is able to process up to 16 flows in parallel.
It uses 256-bit width registers/instructions only
(to avoid frequency level change).
Note that for now only 64-bit version is supported.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
- enhance output to print extra stats
- use rte_rdtsc_precise() for cycle measurements
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch enables the optimized calculation of CRC32-Ethernet and
CRC16-CCITT using the AVX512 and VPCLMULQDQ instruction sets. This CRC
implementation is built if the compiler supports the required instruction
sets. It is selected at run-time if the host CPU, again, supports the
required instruction sets.
Signed-off-by: Mairtin o Loingsigh <mairtin.oloingsigh@intel.com>
Signed-off-by: David Coyle <david.coyle@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Jasvinder Singh <jasvinder.singh@intel.com>
Reviewed-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
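Usage stays within the existing rte_net_crc API; a sketch, assuming the
algorithm enum value added for this mode is RTE_NET_CRC_AVX512:

#include <stdint.h>
#include <rte_net_crc.h>

static uint32_t
checksum_frame(const void *payload, uint32_t payload_len)
{
	/* request the AVX512/VPCLMULQDQ path; the library falls back at
	 * run-time if the CPU lacks the required instruction sets */
	rte_net_crc_set_alg(RTE_NET_CRC_AVX512);
	return rte_net_crc_calc(payload, payload_len, RTE_NET_CRC32_ETH);
}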
This patch adds new flags into the test_cpuflags() functions for ARM64
platform, such as RTE_CPUFLAG_SVE, etc.
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
RTE_ARCH_xx flags are used to distinguish platform architectures.
These flags can be used to pick different code paths for different
architectures at compile time.
For Arm platforms, there are 3 flags in use: RTE_ARCH_ARM,
RTE_ARCH_ARMv7 and RTE_ARCH_ARM64.
RTE_ARCH_ARM64 is for 64-bit aarch64 platforms,
and RTE_ARCH_ARM & RTE_ARCH_ARMv7 are for 32-bit platforms.
RTE_ARCH_ARMv7 is for ARMv7 platforms as its name suggested.
The issue is that the meaning of RTE_ARCH_ARM is not clear enough,
because no info about the platform word length is included in the name.
To make the flag names more clear, a naming scheme is proposed.
RTE_ARCH_ARM (all Arm platforms)
|
+----RTE_ARCH_32 (New. 32-bit platforms of all architectures)
| |
| +----RTE_ARCH_ARMv7 (ARMv7 platforms)
| |
| +----RTE_ARCH_ARMv8_AARCH32 (aarch32 state on aarch64 machine)
|
+----RTE_ARCH_64 (64-bit platforms of all architectures)
|
+----RTE_ARCH_ARM64 (64-bit Arm platforms)
RTE_ARCH_32 will be explicitly defined for 32-bit platforms.
To fit into the new naming scheme, current usage of RTE_ARCH_ARM in
project is mapped to (RTE_ARCH_ARM && RTE_ARCH_32).
Matching flags for other architectures are:
RTE_ARCH_X86
|
+----RTE_ARCH_32
| |
| +----RTE_ARCH_I686
| |
| +----RTE_ARCH_X86_X32
|
+----RTE_ARCH_64
|
+----RTE_ARCH_X86_64
RTE_ARCH_PPC_64 ---- RTE_ARCH_64
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
This commit adds testpmd capability to query and config FEC
function of device. This includes:
- show FEC capabilities, example:
testpmd> show port 0 fec capabilities
- show FEC mode, example:
testpmd> show port 0 fec_mode
- config FEC mode, example:
testpmd> set port <port_id> fec_mode auto|off|rs|baser
where:
auto|off|rs|baser are the four kinds of FEC mode which the device
supports according to the MAC link speed.
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Chengchang Tang <tangchengchang@huawei.com>
This patch adds tests for verifying telemetry data structures are
converted to JSON as expected. Both flat and recursive data structures
are tested, for all possible value types.
The app connects to the telemetry socket as a client, and registers one
command with a corresponding callback function. Each time the callback
function is called, it copies a global data variable to the data pointer
passed in by telemetry.
When a test case is run, the test case function builds up the global
data variable with the relevant data types, and the expected json string
output which should be generated from that. The 'test_output()' function
is used to trigger the callback and ensure the actual output matches
that expected.
Signed-off-by: Louise Kilheeney <louise.kilheeney@intel.com>
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
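The registration and callback pattern is roughly the following sketch;
the command name, help string and dictionary contents are illustrative
only, not the actual test command (the real test fills the data from a
globally prepared variable as described above):

#include <rte_common.h>
#include <rte_telemetry.h>

static int
test_cb(const char *cmd __rte_unused, const char *params __rte_unused,
	struct rte_tel_data *d)
{
	/* build the response; telemetry converts it to JSON for the client */
	rte_tel_data_start_dict(d);
	rte_tel_data_add_dict_int(d, "x", 1);
	return 0;
}

static int
register_test_cmd(void)
{
	return rte_telemetry_register_cmd("/test", test_cb, "Test command");
}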
As well as it being the correct thing to close devices before exit, it is
also useful for testing the closing of devices from a secondary process.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Since the rawdev autotest can now be used to test all rawdevs on the
system, there is no need for a dedicated ioat autotest command.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Rather than having each rawdev provide its own autotest command, we can
instead just use the generic rawdev_autotest to test any and all available
rawdevs.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Changed scripts to explicitly use Python 3 only, to avoid
maintaining Python 2.
Removed deprecation notices.
Signed-off-by: Louise Kilheeney <louise.kilheeney@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Robin Jarry <robin.jarry@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
The ethdev port id should be 16 bits now. This patch changes the
variable size of port id in applications from 8 bits to 16 bits.
Fixes: e977e4199a ("app/testpmd: add commands to load/unload BPF filters")
Fixes: 46cf97e4bb ("eventdev: add test for eth Tx adapter")
Cc: stable@dpdk.org
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add the ability to distinguish the ICMP identifier field in packets,
and the ICMP sequence number field too.
The ICMP code and type fields are already supported in the current version.
The existing fields in the ICMP header contain the required information,
and the ICMP header is already supported, so no code change in rte_flow is needed.
Extend the testpmd CLI to include the ident and sequence number fields.
One example:
flow create 0 ingress pattern eth / ipv4 /
icmp code is 1 ident is 5 seq is 6 /
end actions count / queue index 0 / end
The ICMP packet with code 1, identifier 5 and
sequence number 6 will be matched.
It will implement action counter and forward to queue 0.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
A port can be closed in multiple situations:
- close command calling close_port() -> rte_eth_dev_close()
- exit calling close_port() -> rte_eth_dev_close()
- hotplug calling close_port() -> rte_eth_dev_close()
- hotplug calling detach_device() -> rte_dev_remove()
- port detach command, detach_device() -> rte_dev_remove()
- device detach command, detach_devargs() -> rte_eal_hotplug_remove()
The flow rules are flushed before each close.
It was already done in close_port(), detach_devargs() and
detach_port_device() which calls detach_device(),
but not in detach_device(). As a consequence, it was missing for siblings
of port detach command and unplugged device.
The check before calling port_flow_flush() is moved inside the function.
The state of the port to close is checked to be stopped.
As above, this check was missing in detach_device(),
impacting the cases of a multi-port device unplugged or detached
with the port detach command.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Since rte_eth_dev_release_port() is called on all port close operations,
the event RTE_ETH_EVENT_DESTROY can be reliably used for resetting
the port status on the application side.
The intermediate state RTE_PORT_HANDLING is removed in close_port()
because a port can also be closed by a PMD in a device remove operation.
In case multiple ports are closed, calling remove_invalid_ports()
only once is enough.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
The device operation .dev_close was returning void.
This driver interface is changed to return an int.
Note that the API rte_eth_dev_close() is still returning void,
although a deprecation notice is pending to change it as well.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Reviewed-by: Sachin Saxena <sachin.saxena@oss.nxp.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Reviewed-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Currently, the information of Rx/Tx queues from the PMD is not
displayed exactly in the rxtx_config_display function, because
"ports[pid].rx_conf" and "ports[pid].tx_conf" maintained in the testpmd
application may not be the values actually used by the PMD. For
instance, the user does not set a field, but the PMD has to use the
default value.
This patch fixes rxtx_config_display so that the information of the Rx/Tx
queues can really be displayed for PMDs that implement the
.rxq_info_get and .txq_info_get ops callback functions.
Fixes: 75c530c1bd ("app/testpmd: fix port configuration print")
Fixes: d44f8a485f ("app/testpmd: enable per queue configure")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The number of descriptors is a per-queue configuration, but in the check
function nb_txd & nb_rxd are used to check whether the desc_id is
valid. nb_txd & nb_rxd are the global configuration of the number of
descriptors. If the queue configuration is changed by a cmdline like
"port config xx txq xx ring_size xxx", the real value will be changed.
This patch uses the real value to check whether the desc_id is valid.
If these are not configured by the user, it will use the default value
to check, since rte_eth_rx_queue_setup & rte_eth_tx_queue_setup
will use a default value to configure the queue if nb_rx_desc or
nb_tx_desc is zero.
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
In txonly forward mode, the packet header is fixed by the initial
setting, including the packet length and checksum. So when the packets
vary, this may cause a packet header error. Currently, there are two
methods in txonly mode to randomly change the packets.
1. Set txsplit random and txpkts (x[,y]*); the number of segments of
each packet will be a random value between 1 and the total number of
segments determined by the txpkts settings.
The steps are as follows:
a) ./testpmd -w xxx -l xx -n 4 -- -i --disable-device-start
b) port config 0 tx_offload multi_segs on
c) set fwd txonly
d) set txsplit rand
e) set txpkts 2048,2048,2048,2048
f) start
The nb_segs of the packets sent by testpmd will be 1~4. The real packet
length will be 2048, 4096, 6144 and 8192. But in fact the packet length
in the IP header and UDP header will be fixed at 8178 and 8158.
2. Set txonly-multi-flow. The IP address will be varied to generate
multiple flows.
The steps are as follows:
a) ./testpmd -w xxx -l xx -n 4 -- -i --txonly-multi-flow
b) set fwd txonly
c) start
The IP address of each packet will change randomly, but since the header
is fixed, the checksum may be an incorrect value.
Therefore, this patch adds a function to update the packet length and
checksum in the packet header when the txsplit mode is set to rand or
multi-flow is set.
Fixes: 82010ef55e ("app/testpmd: make txonly mode generate multiple flows")
Fixes: 79bec05b32 ("app/testpmd: add ability to split outgoing packets")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Currently, if nb_txd is not set, txpkts is not allowed to be set
because nb_txd is used to avoid the number of segments exceeding the Tx
ring size, and the default value of nb_txd is 0. There is also a bug:
nb_txd is the global configuration for the Tx ring size, but the ring
size could be changed per queue by some commands. So this validity check
is unreliable and introduces unnecessary constraints.
This patch adds a validity check function that uses the real Tx ring size
to check the validity of txpkts.
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
When failing to configure VLAN offloads after the port has been started,
there is no need to update the port configuration. Currently, when a user
configures an unsupported VLAN offload and it fails, and then restarts the
port, the restart fails since the configuration has been refreshed.
This patch makes the function return directly instead of refreshing the
configuration when execution fails.
Fixes: 384161e006 ("app/testpmd: adjust on the fly VLAN configuration")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>