commit 76d794566d
Author: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Date:   2020-10-08 19:58:10 +02:00

net/hns3: maximize queue number
The maximum number of queues for the hns3 PF and VF drivers is 64 on
the hns3 network engine with revision_id 0x21. On the hns3 network
engine with revision_id 0x30, the hns3 PF PMD driver can support up to
1280 queues and the hns3 VF PMD driver up to 128 queues.

The following changes are needed to maximize the queue number while
maintaining compatibility:
1) Maximize the number of queues for the hns3 PF and VF PMD drivers.
   In the current version, VF is not supported when the PF is driven
   by the hns3 PMD driver. If the maximum queue number allocated to
   the PF PMD driver is less than the total tqps_num allocated to
   this port, all remaining queues are mapped to the VF function,
   which is unreasonable. So we fix it so that all remaining queues
   are mapped to the PF function.

   Use RTE_LIBRTE_HNS3_MAX_TQP_NUM_PER_PF, which comes from the
   configuration file, to limit the queue number allocated to the PF
   device on hns3 network engines with revision_id 0x30 or later,
   while the PF device still keeps a maximum of 64 queues on the hns3
   network engine with revision_id 0x21.

   Remove the restriction imposed by the macro HNS3_MAX_TQP_NUM_PER_FUNC
   on the maximum number of queues in the hns3 VF PMD driver and use
   the value allocated by the hns3 PF kernel netdev driver instead.

2) Based on the queue number allocated to the PF device, dynamically
   allocate arrays for the Rx and Tx queues to record their statistics
   in the .dev_init ops implementation (see the allocation sketch
   after this list).
3) Add an extended field in hns3_pf_res_cmd to support queue numbers
   greater than 1024 (see the bit-field sketch after this list).
4) Use the new base address of an Rx or Tx queue if its QUEUE_ID is
   greater than 1024.
5) Remove the queue id mask and use all bits of the actual queue_id
   when configuring the hardware.
6) Currently, bits 0~9 of qset_id in hns3_nq_to_qs_link_cmd record the
   actual qset id and bit 10 serves as the VLD bit when configuring
   the hardware. Bits 11~15 must also be used when the actual qset_id
   is greater than 1024.
7) The number of queue sets differs between network engines. Use it to
   calculate the group number configured to hardware in the
   backpressure configuration.
8) Add checks on the user-configured numbers of Rx and Tx queues when
   mapping queues to TCs. The Rx queue number under a single TC must
   be less than the rss_size_max supported by a single TC. Rx and Tx
   queues are allocated evenly to every TC, so the Rx and Tx queue
   numbers must be an integer multiple of 2, otherwise the redundant
   queues are not available (see the check sketch after this list).
9) When creating flow table rules through the rte_flow API, we can
   specify which packets enter a queue with a given queue number.
   Currently, the driver uses bits 0~9 to record the queue_id, so one
   more bit field must be used to record the queue_id and configure it
   to hardware if the queue_id is greater than 1024 (see the rte_flow
   example after this list).
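
A minimal sketch of the per-queue statistics allocation described in
point 2, assuming a hypothetical hns3_queue_stats structure and a
tqps_num count passed in by the caller; the real driver's structure
and field names may differ:

    #include <errno.h>
    #include <stdint.h>
    #include <rte_malloc.h>

    /* Hypothetical per-queue statistics entry; fields are illustrative. */
    struct hns3_queue_stats {
            uint64_t packets;
            uint64_t bytes;
            uint64_t errors;
    };

    /*
     * Allocate Rx/Tx statistics arrays sized by the queue number the PF
     * was actually granted, instead of using a fixed-size array.
     */
    static int
    hns3_alloc_queue_stats(struct hns3_queue_stats **rx_stats,
                           struct hns3_queue_stats **tx_stats,
                           uint16_t tqps_num)
    {
            *rx_stats = rte_zmalloc("hns3_rx_stats",
                                    sizeof(**rx_stats) * tqps_num, 0);
            *tx_stats = rte_zmalloc("hns3_tx_stats",
                                    sizeof(**tx_stats) * tqps_num, 0);
            if (*rx_stats == NULL || *tx_stats == NULL) {
                    rte_free(*rx_stats);
                    rte_free(*tx_stats);
                    return -ENOMEM;
            }
            return 0;
    }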
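
Points 3, 6 and 9 all involve carrying queue or qset ids wider than 10
bits in an extra bit field. A generic sketch of that split, with
illustrative macro and field names (the real command descriptor
layouts live in the driver's command headers):

    #include <stdint.h>

    #define QID_LOW_BITS  10                        /* bits 0~9: low part */
    #define QID_LOW_MASK  ((1u << QID_LOW_BITS) - 1)

    /*
     * Split a queue id into the original 10-bit field and an extension
     * field, so ids of 1024 and above can still be programmed into the
     * hardware descriptor. Names are illustrative, not the driver's
     * real layout.
     */
    static inline void
    split_queue_id(uint16_t queue_id, uint16_t *low, uint16_t *ext)
    {
            *low = queue_id & QID_LOW_MASK;   /* bits 0~9 */
            *ext = queue_id >> QID_LOW_BITS;  /* remaining high bits */
    }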
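
A rough sketch of the kind of check point 8 describes, assuming
hypothetical names for the configured queue number, TC number, and
per-TC rss_size_max; the exact conditions in the driver may differ:

    #include <errno.h>
    #include <stdint.h>

    /*
     * Validate that the configured Rx queue number can be spread evenly
     * over the enabled TCs and that each TC stays within rss_size_max.
     * All names here are illustrative.
     */
    static int
    check_rx_queues_per_tc(uint16_t nb_rx_q, uint8_t nb_tc,
                           uint16_t rss_size_max)
    {
            if (nb_tc == 0 || nb_rx_q % nb_tc != 0)
                    return -EINVAL;  /* queues must divide evenly over TCs */
            if (nb_rx_q / nb_tc > rss_size_max)
                    return -EINVAL;  /* per-TC queue count too large */
            return 0;
    }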
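
Point 9 concerns rules created through the rte_flow API that steer
packets to a specific queue. A minimal usage example of such a rule
using standard rte_flow calls (the match pattern here, any Ethernet
packet, is only an illustration):

    #include <rte_flow.h>

    /* Steer all matching packets on port_id to queue_id. */
    static struct rte_flow *
    create_queue_rule(uint16_t port_id, uint16_t queue_id,
                      struct rte_flow_error *error)
    {
            struct rte_flow_attr attr = { .ingress = 1 };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action_queue queue = { .index = queue_id };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            return rte_flow_create(port_id, &attr, pattern, actions, error);
    }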

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>

DPDK is a set of libraries and drivers for fast packet processing.
It supports many processor architectures and both FreeBSD and Linux.

The DPDK uses the Open Source BSD-3-Clause license for the core libraries
and drivers. The kernel components are GPL-2.0 licensed.

Please check the doc directory for release notes,
API documentation, and sample application information.

For questions and usage discussions, subscribe to: users@dpdk.org
Report bugs and issues to the development mailing list: dev@dpdk.org