DPDK Maintainers
================

The intention of this file is to provide a set of names that we can rely on
for helping in patch reviews and questions.
These names are additional recipients for emails sent to dev@dpdk.org.
Please avoid private emails.

Descriptions of section entries:

    M: Maintainer's Full Name <address@domain>
    T: Git tree location.
    F: Files and directories with wildcard patterns.
       A trailing slash includes all files and subdirectory files.
       A wildcard includes all files but not subdirectories.
       One pattern per line. Multiple F: lines acceptable.
    X: Files and directories exclusion, same rules as F:
    K: Keyword regex pattern to match content.
       One regex pattern per line. Multiple K: lines acceptable.
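
The hypothetical entry below (maintainer, tree and file patterns are invented for illustration only, not part of this file) shows how these fields combine in a typical section:

    Foo Offload Library
    M: Jane Doe <jane.doe@example.com>
    T: git://dpdk.org/next/dpdk-next-foo
    F: lib/foo/
    F: app/test/test_foo*
    X: lib/foo/legacy/
    K: rte_foo_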

General Project Administration
------------------------------

Main Branch
M: Thomas Monjalon <thomas@monjalon.net>
M: David Marchand <david.marchand@redhat.com>
T: git://dpdk.org/dpdk
Next-net Tree
M: Ferruh Yigit <ferruh.yigit@amd.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
T: git://dpdk.org/next/dpdk-next-net
Next-net-brcm Tree
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
T: git://dpdk.org/next/dpdk-next-net-brcm
Next-net-intel Tree
M: Qi Zhang <qi.z.zhang@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
Next-net-mrvl Tree
M: Jerin Jacob <jerinj@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
Next-net-mlx Tree
M: Raslan Darawsheh <rasland@nvidia.com>
T: git://dpdk.org/next/dpdk-next-net-mlx
Next-virtio Tree
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
T: git://dpdk.org/next/dpdk-next-virtio
Next-crypto Tree
M: Akhil Goyal <gakhil@marvell.com>
T: git://dpdk.org/next/dpdk-next-crypto
Next-eventdev Tree
M: Jerin Jacob <jerinj@marvell.com>
T: git://dpdk.org/next/dpdk-next-eventdev
Stable Branches
M: Luca Boccassi <bluca@debian.org>
M: Kevin Traynor <ktraynor@redhat.com>
M: Christian Ehrhardt <christian.ehrhardt@canonical.com>
M: Xueming Li <xuemingl@nvidia.com>
T: git://dpdk.org/dpdk-stable
Security Issues
M: maintainers@dpdk.org
Documentation (with overlaps)
F: README
F: doc/
Developers and Maintainers Tools
M: Thomas Monjalon <thomas@monjalon.net>
F: MAINTAINERS
F: devtools/build-dict.sh
F: devtools/check-doc-vs-code.sh
F: devtools/check-dup-includes.sh
F: devtools/check-maintainers.sh
F: devtools/check-forbidden-tokens.awk
F: devtools/check-git-log.sh
F: devtools/check-spdx-tag.sh
F: devtools/check-symbol-maps.sh
F: devtools/checkpatches.sh
F: devtools/get-maintainer.sh
F: devtools/git-log-fixes.sh
F: devtools/load-devel-config
F: devtools/parse-flow-support.sh
F: devtools/process-iwyu.py
F: devtools/update-patches.py
F: devtools/words-case.txt
F: license/
F: .editorconfig
Build System
M: Bruce Richardson <bruce.richardson@intel.com>
F: Makefile
F: meson.build
F: meson_options.txt
F: config/
F: buildtools/chkincs/
F: buildtools/call-sphinx-build.py
F: buildtools/get-cpu-count.py
F: buildtools/get-numa-count.py
F: buildtools/list-dir-globs.py
F: buildtools/pkg-config/
F: buildtools/symlink-drivers-solibs.sh
F: buildtools/symlink-drivers-solibs.py
F: devtools/test-meson-builds.sh
F: devtools/check-meson.py
Public CI
M: Aaron Conole <aconole@redhat.com>
M: Michael Santana <maicolgabriel@hotmail.com>
F: .travis.yml
F: .github/workflows/build.yml
F: .ci/
ABI Policy & Versioning
M: Ray Kinsella <mdr@ashroe.eu>
F: lib/eal/include/rte_compat.h
F: lib/eal/include/rte_function_versioning.h
F: doc/guides/contributing/abi_*.rst
F: doc/guides/rel_notes/deprecation.rst
F: devtools/check-abi.sh
F: devtools/check-abi-version.sh
F: devtools/check-symbol-change.sh
F: devtools/gen-abi.sh
F: devtools/libabigail.abignore
F: devtools/update-abi.sh
F: devtools/update_version_map_abi.py
F: buildtools/check-symbols.sh
F: buildtools/map-list-symbol.sh
F: drivers/*/*/*.map
F: lib/*/*.map
Driver information
M: Neil Horman <nhorman@tuxdriver.com>
M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
F: buildtools/coff.py
F: buildtools/gen-pmdinfo-cfile.py
F: buildtools/pmdinfogen.py
F: usertools/dpdk-pmdinfo.py
F: doc/guides/tools/pmdinfo.rst

Environment Abstraction Layer
-----------------------------
T: git://dpdk.org/dpdk

EAL API and common code
F: lib/eal/common/
F: lib/eal/unix/
F: lib/eal/include/
F: lib/eal/version.map
F: doc/guides/prog_guide/env_abstraction_layer.rst
F: app/test/test_alarm.c
F: app/test/test_atomic.c
F: app/test/test_barrier.c
F: app/test/test_byteorder.c
F: app/test/test_common.c
F: app/test/test_cpuflags.c
F: app/test/test_cycles.c
F: app/test/test_debug.c
F: app/test/test_devargs.c
F: app/test/test_eal*
F: app/test/test_errno.c
F: app/test/test_lcores.c
F: app/test/test_logs.c
F: app/test/test_memcpy*
F: app/test/test_per_lcore.c
F: app/test/test_pflock.c
F: app/test/test_prefetch.c
F: app/test/test_reciprocal_division*
F: app/test/test_rwlock.c
F: app/test/test_spinlock.c
F: app/test/test_string_fns.c
F: app/test/test_tailq.c
F: app/test/test_threads.c
F: app/test/test_version.c
Trace - EXPERIMENTAL
M: Jerin Jacob <jerinj@marvell.com>
M: Sunil Kumar Kori <skori@marvell.com>
F: lib/eal/include/rte_trace*.h
F: lib/eal/common/eal_common_trace*.c
F: lib/eal/common/eal_trace.h
F: doc/guides/prog_guide/trace_lib.rst
F: app/test/test_trace*
Memory Allocation
M: Anatoly Burakov <anatoly.burakov@intel.com>
F: lib/eal/include/rte_fbarray.h
F: lib/eal/include/rte_mem*
F: lib/eal/include/rte_malloc.h
F: lib/eal/common/*malloc*
F: lib/eal/common/eal_common_dynmem.c
F: lib/eal/common/eal_common_fbarray.c
F: lib/eal/common/eal_common_mem*
F: lib/eal/common/eal_hugepages.h
F: lib/eal/linux/eal_mem*
F: lib/eal/freebsd/eal_mem*
F: doc/guides/prog_guide/env_abstraction_layer.rst
F: app/test/test_external_mem.c
F: app/test/test_func_reentrancy.c
F: app/test/test_fbarray.c
F: app/test/test_malloc.c
F: app/test/test_malloc_perf.c
F: app/test/test_memory.c
F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
Keep alive
F: lib/eal/include/rte_keepalive.h
F: lib/eal/common/rte_keepalive.c
F: examples/l2fwd-keepalive/
F: doc/guides/sample_app_ug/keep_alive.rst
Secondary process
M: Anatoly Burakov <anatoly.burakov@intel.com>
K: RTE_PROC_
F: lib/eal/common/eal_common_proc.c
F: doc/guides/prog_guide/multi_proc_support.rst
F: app/test/test_mp_secondary.c
F: examples/multi_process/
F: doc/guides/sample_app_ug/multi_process.rst
Service Cores
M: Harry van Haaren <harry.van.haaren@intel.com>
F: lib/eal/include/rte_service.h
F: lib/eal/include/rte_service_component.h
F: lib/eal/common/rte_service.c
F: doc/guides/prog_guide/service_cores.rst
F: app/test/test_service_cores.c
Bitops
M: Joyce Kong <joyce.kong@arm.com>
F: lib/eal/include/rte_bitops.h
F: app/test/test_bitops.c
Bitmap
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
F: lib/eal/include/rte_bitmap.h
F: app/test/test_bitmap.c
MCSlock
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
F: lib/eal/include/rte_mcslock.h
F: app/test/test_mcslock.c
Sequence Lock
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
F: lib/eal/include/rte_seqcount.h
F: lib/eal/include/rte_seqlock.h
F: app/test/test_seqlock.c
Ticketlock
M: Joyce Kong <joyce.kong@arm.com>
F: lib/eal/include/rte_ticketlock.h
F: app/test/test_ticketlock.c
Pseudo-random Number Generation
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
F: lib/eal/include/rte_random.h
F: lib/eal/common/rte_random.c
F: app/test/test_rand_perf.c
ARM v7
M: Ruifeng Wang <ruifeng.wang@arm.com>
F: config/arm/
F: lib/eal/arm/
X: lib/eal/arm/include/*_64.h
ARM v8
M: Ruifeng Wang <ruifeng.wang@arm.com>
F: config/arm/
F: doc/guides/linux_gsg/cross_build_dpdk_for_arm64.rst
F: lib/eal/arm/
X: lib/eal/arm/include/*_32.h
F: lib/*/*_arm64.*
F: lib/*/*_neon.*
F: drivers/*/*/*_neon.*
F: app/*/*_neon.*
F: examples/*/*_neon.*
F: examples/common/neon/
IBM POWER (alpha)
M: David Christensen <drc@linux.vnet.ibm.com>
F: config/ppc/
F: lib/eal/ppc/
F: lib/*/*_altivec*
F: drivers/*/*/*_altivec.*
F: app/*/*_altivec.*
F: examples/*/*_altivec.*
F: examples/common/altivec/
RISC-V
M: Stanislaw Kardach <kda@semihalf.com>
F: config/riscv/
F: doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
F: lib/eal/riscv/
Intel x86
M: Bruce Richardson <bruce.richardson@intel.com>
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
F: config/x86/
F: doc/guides/linux_gsg/nic_perf_intel_platform.rst
F: buildtools/binutils-avx512-check.py
F: doc/guides/howto/avx512.rst
F: lib/eal/x86/
F: lib/*/*_sse*
F: lib/*/*_avx*
F: drivers/*/*/*_sse*
F: drivers/*/*/*_avx*
F: app/*/*_sse*
F: app/*/*_avx*
F: examples/*/*_sse*
F: examples/*/*_avx*
F: examples/common/sse/
Linux EAL (with overlaps)
F: lib/eal/linux/
F: doc/guides/linux_gsg/
Linux UIO
F: drivers/bus/pci/linux/*uio*
Linux VFIO
M: Anatoly Burakov <anatoly.burakov@intel.com>
F: lib/eal/linux/*vfio*
F: drivers/bus/pci/linux/*vfio*
FreeBSD EAL (with overlaps)
M: Bruce Richardson <bruce.richardson@intel.com>
F: lib/eal/freebsd/
F: doc/guides/freebsd_gsg/
FreeBSD contigmem
M: Bruce Richardson <bruce.richardson@intel.com>
F: kernel/freebsd/contigmem/
FreeBSD UIO
M: Bruce Richardson <bruce.richardson@intel.com>
F: kernel/freebsd/nic_uio/
Windows support
M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
M: Narcisa Ana Maria Vasile <navasile@linux.microsoft.com>
M: Dmitry Malloy <dmitrym@microsoft.com>
M: Pallavi Kadam <pallavi.kadam@intel.com>
F: lib/eal/windows/
F: buildtools/map_to_win.py
F: doc/guides/windows_gsg/
Windows memory allocation
M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
F: lib/eal/windows/eal_hugepages.c
F: lib/eal/windows/eal_mem*

Core Libraries
--------------
T: git://dpdk.org/dpdk

Memory pool
M: Olivier Matz <olivier.matz@6wind.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
F: lib/mempool/
F: drivers/mempool/ring/
F: doc/guides/prog_guide/mempool_lib.rst
F: app/test/test_mempool*
F: app/test/test_func_reentrancy.c
Ring queue
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
F: lib/ring/
F: doc/guides/prog_guide/ring_lib.rst
F: app/test/test_ring*
F: app/test/test_func_reentrancy.c
Stack
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/stack/
F: drivers/mempool/stack/
F: app/test/test_stack*
F: doc/guides/prog_guide/stack_lib.rst
Packet buffer
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/mbuf/
F: doc/guides/prog_guide/mbuf_lib.rst
F: app/test/test_mbuf.c
Ethernet API
M: Thomas Monjalon <thomas@monjalon.net>
M: Ferruh Yigit <ferruh.yigit@amd.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/
F: app/test/test_ethdev*
F: devtools/test-null.sh
F: doc/guides/prog_guide/switch_representation.rst
Flow API
M: Ori Kam <orika@nvidia.com>
T: git://dpdk.org/next/dpdk-next-net
F: app/test-pmd/cmdline_flow.c
F: doc/guides/prog_guide/rte_flow.rst
F: lib/ethdev/rte_flow*
Traffic Management API - EXPERIMENTAL
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/rte_tm*
F: app/test-pmd/cmdline_tm.*
Traffic Metering and Policing API - EXPERIMENTAL
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/rte_mtr*
F: app/test-pmd/cmdline_mtr.*
Baseband API
M: Nicolas Chautru <nicolas.chautru@intel.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/bbdev/
F: doc/guides/prog_guide/bbdev.rst
F: doc/guides/bbdevs/features/default.ini
F: app/test-bbdev/
F: doc/guides/tools/testbbdev.rst
F: examples/bbdev_app/
F: doc/guides/sample_app_ug/bbdev_app.rst
Crypto API
M: Akhil Goyal <gakhil@marvell.com>
M: Fan Zhang <royzhang1980@gmail.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/cryptodev/
F: app/test/test_cryptodev*
F: examples/l2fwd-crypto/
Security API
M: Akhil Goyal <gakhil@marvell.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/security/
F: doc/guides/prog_guide/rte_security.rst
F: app/test/test_security*
Compression API - EXPERIMENTAL
M: Fan Zhang <royzhang1980@gmail.com>
M: Ashish Gupta <ashish.gupta@marvell.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/compressdev/
F: drivers/compress/
F: app/test/test_compressdev*
F: doc/guides/prog_guide/compressdev.rst
F: doc/guides/compressdevs/features/default.ini
RegEx API - EXPERIMENTAL
M: Ori Kam <orika@nvidia.com>
F: lib/regexdev/
F: app/test-regex/
F: doc/guides/prog_guide/regexdev.rst
F: doc/guides/regexdevs/features/default.ini
DMA device API - EXPERIMENTAL
M: Chengwen Feng <fengchengwen@huawei.com>
F: lib/dmadev/
F: drivers/dma/skeleton/
F: app/test/test_dmadev*
F: doc/guides/prog_guide/dmadev.rst
M: Kevin Laatz <kevin.laatz@intel.com>
M: Bruce Richardson <bruce.richardson@intel.com>
F: examples/dma/
F: doc/guides/sample_app_ug/dma.rst
General-Purpose Graphics Processing Unit (GPU) API - EXPERIMENTAL
M: Elena Agostini <eagostini@nvidia.com>
F: lib/gpudev/
F: doc/guides/prog_guide/gpudev.rst
F: doc/guides/gpus/features/default.ini
F: app/test-gpudev/
Eventdev API
M: Jerin Jacob <jerinj@marvell.com>
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/
F: drivers/event/skeleton/
F: app/test/test_eventdev.c
F: examples/l3fwd/l3fwd_event*
Eventdev Ethdev Rx Adapter API
M: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*eth_rx_adapter*
F: app/test/test_event_eth_rx_adapter.c
F: doc/guides/prog_guide/event_ethernet_rx_adapter.rst
Eventdev Ethdev Tx Adapter API
M: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*eth_tx_adapter*
F: app/test/test_event_eth_tx_adapter.c
F: doc/guides/prog_guide/event_ethernet_tx_adapter.rst
Eventdev Timer Adapter API
M: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*timer_adapter*
F: app/test/test_event_timer_adapter.c
F: doc/guides/prog_guide/event_timer_adapter.rst
Eventdev Crypto Adapter API
M: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*crypto_adapter*
F: app/test/test_event_crypto_adapter.c
F: doc/guides/prog_guide/event_crypto_adapter.rst
Raw device API
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
F: lib/rawdev/
F: drivers/raw/skeleton/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst

Memory Pool Drivers
-------------------

Bucket memory pool
M: Artem V. Andreev <artem.andreev@oktetlabs.ru>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
F: drivers/mempool/bucket/
Marvell cnxk
M: Ashwin Sekhar T K <asekhar@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/mempool/cnxk/
F: doc/guides/mempool/cnxk.rst

Bus Drivers
-----------

Auxiliary bus driver - EXPERIMENTAL
M: Parav Pandit <parav@nvidia.com>
M: Xueming Li <xuemingl@nvidia.com>
F: drivers/bus/auxiliary/
Intel FPGA bus
M: Rosen Xu <rosen.xu@intel.com>
F: drivers/bus/ifpga/
NXP buses
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
F: drivers/common/dpaax/
F: drivers/bus/dpaa/
F: drivers/bus/fslmc/
PCI bus driver
F: drivers/bus/pci/
VDEV bus driver
F: drivers/bus/vdev/
F: app/test/test_vdev.c
VMBUS bus driver
M: Stephen Hemminger <sthemmin@microsoft.com>
M: Long Li <longli@microsoft.com>
F: drivers/bus/vmbus/

Networking Drivers
------------------
M: Ferruh Yigit <ferruh.yigit@amd.com>
T: git://dpdk.org/next/dpdk-next-net
F: doc/guides/nics/features/default.ini

Link bonding
M: Chas Williams <chas3@att.com>
M: Min Hu (Connor) <humin29@huawei.com>
F: drivers/net/bonding/
F: doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
F: app/test/test_link_bonding*
F: examples/bond/
Linux KNI
F: kernel/linux/kni/
F: lib/kni/
F: doc/guides/prog_guide/kernel_nic_interface.rst
F: app/test/test_kni.c
F: examples/kni/
F: doc/guides/sample_app_ug/kernel_nic_interface.rst
Linux AF_PACKET
M: John W. Linville <linville@tuxdriver.com>
F: drivers/net/af_packet/
F: doc/guides/nics/features/afpacket.ini
Linux AF_XDP
M: Ciara Loftus <ciara.loftus@intel.com>
M: Qi Zhang <qi.z.zhang@intel.com>
F: drivers/net/af_xdp/
F: doc/guides/nics/af_xdp.rst
F: doc/guides/nics/features/af_xdp.ini
Amazon ENA
M: Marcin Wojtas <mw@semihalf.com>
M: Michal Krawczyk <mk@semihalf.com>
M: Shai Brandes <shaibran@amazon.com>
M: Evgeny Schemeilin <evgenys@amazon.com>
M: Igor Chauskin <igorch@amazon.com>
F: drivers/net/ena/
F: doc/guides/nics/ena.rst
F: doc/guides/nics/features/ena.ini
AMD axgbe
M: Chandubabu Namburu <chandu@amd.com>
F: drivers/net/axgbe/
F: doc/guides/nics/axgbe.rst
F: doc/guides/nics/features/axgbe.ini
Marvell/Aquantia atlantic
M: Igor Russkikh <irusskikh@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/atlantic/
F: doc/guides/nics/atlantic.rst
F: doc/guides/nics/features/atlantic.ini
Atomic Rules ARK
M: Shepard Siegel <shepard.siegel@atomicrules.com>
M: Ed Czeck <ed.czeck@atomicrules.com>
M: John Miller <john.miller@atomicrules.com>
F: drivers/net/ark/
F: doc/guides/nics/ark.rst
F: doc/guides/nics/features/ark.ini
Broadcom bnxt
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
M: Somnath Kotur <somnath.kotur@broadcom.com>
T: git://dpdk.org/next/dpdk-next-net-brcm
F: drivers/net/bnxt/
F: doc/guides/nics/bnxt.rst
F: doc/guides/nics/features/bnxt.ini
Cavium ThunderX nicvf
M: Jerin Jacob <jerinj@marvell.com>
M: Maciej Czekaj <mczekaj@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/thunderx/
F: doc/guides/nics/thunderx.rst
F: doc/guides/nics/features/thunderx.ini
Cavium LiquidIO - UNMAINTAINED
M: Shijith Thotton <sthotton@marvell.com>
M: Srisivasubramanian Srinivasan <srinivasan@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/liquidio/
F: doc/guides/nics/liquidio.rst
F: doc/guides/nics/features/liquidio.ini
Cavium OCTEON TX
M: Harman Kalra <hkalra@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/octeontx/
F: drivers/mempool/octeontx/
F: drivers/net/octeontx/
F: doc/guides/nics/octeontx.rst
F: doc/guides/nics/features/octeontx.ini
Chelsio cxgbe
M: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
F: drivers/net/cxgbe/
F: doc/guides/nics/cxgbe.rst
F: doc/guides/nics/features/cxgbe.ini
Cisco enic
M: John Daley <johndale@cisco.com>
M: Hyong Youb Kim <hyonkim@cisco.com>
F: drivers/net/enic/
F: doc/guides/nics/enic.rst
F: doc/guides/nics/features/enic.ini
Hisilicon hns3
M: Dongdong Liu <liudongdong3@huawei.com>
M: Yisen Zhuang <yisen.zhuang@huawei.com>
F: drivers/net/hns3/
F: doc/guides/nics/hns3.rst
F: doc/guides/nics/features/hns3.ini
Huawei hinic
M: Ziyang Xuan <xuanziyang2@huawei.com>
M: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
M: Guoyang Zhou <zhouguoyang@huawei.com>
F: drivers/net/hinic/
F: doc/guides/nics/hinic.rst
F: doc/guides/nics/features/hinic.ini
Intel e1000
M: Simei Su <simei.su@intel.com>
M: Wenjun Wu <wenjun1.wu@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/e1000/
F: doc/guides/nics/e1000em.rst
F: doc/guides/nics/intel_vf.rst
F: doc/guides/nics/features/e1000.ini
F: doc/guides/nics/features/igb*.ini
Intel ixgbe
M: Qiming Yang <qiming.yang@intel.com>
M: Wenjun Wu <wenjun1.wu@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ixgbe/
F: doc/guides/nics/ixgbe.rst
F: doc/guides/nics/intel_vf.rst
F: doc/guides/nics/features/ixgbe*.ini
Intel i40e
M: Yuying Zhang <Yuying.Zhang@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/i40e/
F: doc/guides/nics/i40e.rst
F: doc/guides/nics/intel_vf.rst
F: doc/guides/nics/features/i40e*.ini
Intel fm10k
M: Qi Zhang <qi.z.zhang@intel.com>
M: Xiao Wang <xiao.w.wang@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/fm10k/
F: doc/guides/nics/fm10k.rst
F: doc/guides/nics/features/fm10k*.ini
Intel iavf
M: Jingjing Wu <jingjing.wu@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/iavf/
F: drivers/common/iavf/
F: doc/guides/nics/features/iavf*.ini
Intel ice
M: Qiming Yang <qiming.yang@intel.com>
M: Qi Zhang <qi.z.zhang@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ice/
F: doc/guides/nics/ice.rst
F: doc/guides/nics/features/ice.ini
Intel igc
M: Junfeng Guo <junfeng.guo@intel.com>
M: Simei Su <simei.su@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/igc/
F: doc/guides/nics/igc.rst
F: doc/guides/nics/features/igc.ini
Intel ipn3ke
M: Rosen Xu <rosen.xu@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ipn3ke/
F: doc/guides/nics/ipn3ke.rst
F: doc/guides/nics/features/ipn3ke.ini
Marvell cnxk
M: Nithin Dabilpuram <ndabilpuram@marvell.com>
M: Kiran Kumar K <kirankumark@marvell.com>
M: Sunil Kumar Kori <skori@marvell.com>
M: Satha Rao <skoteshwar@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/cnxk/
F: drivers/net/cnxk/
F: doc/guides/nics/cnxk.rst
F: doc/guides/nics/features/cnxk*.ini
F: doc/guides/platform/cnxk.rst
Marvell mvpp2
M: Liron Himi <lironh@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/mvep/
F: drivers/net/mvpp2/
F: doc/guides/nics/mvpp2.rst
F: doc/guides/nics/features/mvpp2.ini
Marvell mvneta
M: Zyta Szpak <zr@semihalf.com>
M: Liron Himi <lironh@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/mvneta/
F: doc/guides/nics/mvneta.rst
F: doc/guides/nics/features/mvneta.ini
Marvell OCTEON TX EP - endpoint
M: Radha Mohan Chintakuntla <radhac@marvell.com>
M: Veerasenareddy Burru <vburru@marvell.com>
M: Sathesh Edara <sedara@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/octeon_ep/
F: doc/guides/nics/features/octeon_ep.ini
F: doc/guides/nics/octeon_ep.rst
Mellanox mlx4
M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
T: git://dpdk.org/next/dpdk-next-net-mlx
F: drivers/net/mlx4/
F: doc/guides/nics/mlx4.rst
F: doc/guides/nics/features/mlx4.ini
Mellanox mlx5
M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
T: git://dpdk.org/next/dpdk-next-net-mlx
F: drivers/common/mlx5/
F: drivers/net/mlx5/
F: buildtools/options-ibverbs-static.sh
F: doc/guides/nics/mlx5.rst
F: doc/guides/nics/features/mlx5.ini
Microsoft vdev_netvsc - EXPERIMENTAL
M: Matan Azrad <matan@nvidia.com>
F: drivers/net/vdev_netvsc/
F: doc/guides/nics/vdev_netvsc.rst
Microsoft Hyper-V netvsc
M: Stephen Hemminger <sthemmin@microsoft.com>
M: Long Li <longli@microsoft.com>
F: drivers/net/netvsc/
F: doc/guides/nics/netvsc.rst
F: doc/guides/nics/features/netvsc.ini
Netcope nfb
M: Martin Spinler <spinler@cesnet.cz>
F: drivers/net/nfb/
F: doc/guides/nics/nfb.rst
F: doc/guides/nics/features/nfb.ini
Netronome nfp
M: Chaoyong He <chaoyong.he@corigine.com>
M: Niklas Soderlund <niklas.soderlund@corigine.com>
F: drivers/net/nfp/
F: doc/guides/nics/nfp.rst
F: doc/guides/nics/features/nfp*.ini
NXP dpaa
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
F: drivers/mempool/dpaa/
F: drivers/net/dpaa/
F: doc/guides/nics/dpaa.rst
F: doc/guides/nics/features/dpaa.ini
NXP dpaa2
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
F: drivers/mempool/dpaa2/
F: drivers/net/dpaa2/
F: doc/guides/nics/dpaa2.rst
F: doc/guides/nics/features/dpaa2.ini
NXP enetc
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
NXP enetfec - EXPERIMENTAL
M: Apeksha Gupta <apeksha.gupta@nxp.com>
M: Sachin Saxena <sachin.saxena@nxp.com>
F: drivers/net/enetfec/
F: doc/guides/nics/enetfec.rst
F: doc/guides/nics/features/enetfec.ini
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
F: doc/guides/nics/pfe.rst
F: drivers/net/pfe/
F: doc/guides/nics/features/pfe.ini
Pensando ionic
M: Andrew Boyer <aboyer@pensando.io>
F: drivers/net/ionic/
F: doc/guides/nics/ionic.rst
F: doc/guides/nics/features/ionic.ini
Marvell QLogic bnx2x
M: Rasesh Mody <rmody@marvell.com>
M: Shahed Shaikh <shshaikh@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/bnx2x/
F: doc/guides/nics/bnx2x.rst
F: doc/guides/nics/features/bnx2x*.ini
Marvell QLogic qede PMD
M: Rasesh Mody <rmody@marvell.com>
M: Devendra Singh Rawat <dsinghrawat@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/qede/
F: doc/guides/nics/qede.rst
F: doc/guides/nics/features/qede*.ini
Solarflare sfc_efx
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
F: drivers/common/sfc_efx/
F: drivers/net/sfc/
F: doc/guides/nics/sfc_efx.rst
F: doc/guides/nics/features/sfc.ini
Wangxun ngbe
M: Jiawen Wu <jiawenwu@trustnetic.com>
F: drivers/net/ngbe/
F: doc/guides/nics/ngbe.rst
F: doc/guides/nics/features/ngbe.ini
Wangxun txgbe
M: Jiawen Wu <jiawenwu@trustnetic.com>
M: Jian Wang <jianwang@trustnetic.com>
F: drivers/net/txgbe/
F: doc/guides/nics/txgbe.rst
F: doc/guides/nics/features/txgbe.ini
VMware vmxnet3
M: Jochen Behrens <jbehrens@vmware.com>
F: drivers/net/vmxnet3/
F: doc/guides/nics/vmxnet3.rst
F: doc/guides/nics/features/vmxnet3.ini
Vhost-user
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
T: git://dpdk.org/next/dpdk-next-virtio
F: lib/vhost/
F: doc/guides/prog_guide/vhost_lib.rst
F: examples/vhost/
F: doc/guides/sample_app_ug/vhost.rst
F: examples/vhost_blk/
F: doc/guides/sample_app_ug/vhost_blk.rst
F: examples/vhost_crypto/
F: examples/vdpa/
F: doc/guides/sample_app_ug/vdpa.rst
Vhost PMD
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
T: git://dpdk.org/next/dpdk-next-virtio
F: drivers/net/vhost/
F: doc/guides/nics/vhost.rst
F: doc/guides/nics/features/vhost.ini
Virtio PMD
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
T: git://dpdk.org/next/dpdk-next-virtio
F: drivers/net/virtio/
F: doc/guides/nics/virtio.rst
F: doc/guides/nics/features/virtio*.ini
Wind River AVP
M: Steven Webster <steven.webster@windriver.com>
M: Matt Peters <matt.peters@windriver.com>
F: drivers/net/avp/
F: doc/guides/nics/avp.rst
F: doc/guides/nics/features/avp.ini
PCAP PMD
F: drivers/net/pcap/
F: doc/guides/nics/pcap_ring.rst
F: doc/guides/nics/features/pcap.ini
Tap PMD
F: drivers/net/tap/
F: doc/guides/nics/tap.rst
F: doc/guides/nics/features/tap.ini
KNI PMD
F: drivers/net/kni/
F: doc/guides/nics/kni.rst
Ring PMD
M: Bruce Richardson <bruce.richardson@intel.com>
F: drivers/net/ring/
F: doc/guides/nics/pcap_ring.rst
F: app/test/test_pmd_ring.c
F: app/test/test_pmd_ring_perf.c
Null Networking PMD
M: Tetsuya Mukawa <mtetsuyah@gmail.com>
F: drivers/net/null/
Fail-safe PMD
M: Gaetan Rivet <grive@u256.net>
F: drivers/net/failsafe/
F: doc/guides/nics/fail_safe.rst
F: doc/guides/nics/features/failsafe.ini
Softnic PMD
M: Jasvinder Singh <jasvinder.singh@intel.com>
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
F: drivers/net/softnic/
F: doc/guides/nics/softnic.rst
Memif PMD
M: Jakub Grajciar <jgrajcia@cisco.com>
F: drivers/net/memif/
F: doc/guides/nics/memif.rst
F: doc/guides/nics/features/memif.ini
Crypto Drivers
--------------
T: git://dpdk.org/next/dpdk-next-crypto
F: doc/guides/cryptodevs/features/default.ini
AMD CCP Crypto
M: Sunil Uttarwar <sunilprakashrao.uttarwar@amd.com>
F: drivers/crypto/ccp/
F: doc/guides/cryptodevs/ccp.rst
F: doc/guides/cryptodevs/features/ccp.ini
ARMv8 Crypto
M: Ruifeng Wang <ruifeng.wang@arm.com>
F: drivers/crypto/armv8/
F: doc/guides/cryptodevs/armv8.rst
F: doc/guides/cryptodevs/features/armv8.ini
Broadcom FlexSparc
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
M: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
M: Vikas Gupta <vikas.gupta@broadcom.com>
F: drivers/crypto/bcmfs/
F: doc/guides/cryptodevs/bcmfs.rst
F: doc/guides/cryptodevs/features/bcmfs.ini
Cavium OCTEON TX crypto
M: Anoob Joseph <anoobj@marvell.com>
F: drivers/common/cpt/
F: drivers/crypto/octeontx/
F: doc/guides/cryptodevs/octeontx.rst
F: doc/guides/cryptodevs/features/octeontx.ini
Crypto Scheduler
M: Kai Ji <kai.ji@intel.com>
F: drivers/crypto/scheduler/
F: doc/guides/cryptodevs/scheduler.rst
Intel QuickAssist
M: Kai Ji <kai.ji@intel.com>
F: drivers/crypto/qat/
F: drivers/common/qat/
F: doc/guides/cryptodevs/qat.rst
F: doc/guides/cryptodevs/features/qat.ini
IPsec MB
M: Kai Ji <kai.ji@intel.com>
M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
F: drivers/crypto/ipsec_mb/
F: doc/guides/cryptodevs/aesni_gcm.rst
F: doc/guides/cryptodevs/aesni_mb.rst
F: doc/guides/cryptodevs/chacha20_poly1305.rst
F: doc/guides/cryptodevs/kasumi.rst
F: doc/guides/cryptodevs/snow3g.rst
F: doc/guides/cryptodevs/zuc.rst
F: doc/guides/cryptodevs/features/aesni_gcm.ini
F: doc/guides/cryptodevs/features/aesni_mb.ini
F: doc/guides/cryptodevs/features/chacha20_poly1305.ini
F: doc/guides/cryptodevs/features/kasumi.ini
F: doc/guides/cryptodevs/features/snow3g.ini
F: doc/guides/cryptodevs/features/zuc.ini
Marvell cnxk crypto
M: Ankur Dwivedi <adwivedi@marvell.com>
M: Anoob Joseph <anoobj@marvell.com>
M: Tejasree Kondoj <ktejasree@marvell.com>
F: drivers/crypto/cnxk/
F: doc/guides/cryptodevs/cnxk.rst
F: doc/guides/cryptodevs/features/cn9k.ini
F: doc/guides/cryptodevs/features/cn10k.ini
Marvell mvsam
M: Michael Shamis <michaelsh@marvell.com>
M: Liron Himi <lironh@marvell.com>
F: drivers/crypto/mvsam/
F: doc/guides/cryptodevs/mvsam.rst
F: doc/guides/cryptodevs/features/mvsam.ini
Marvell Nitrox
M: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
M: Srikanth Jampala <jsrikanth@marvell.com>
F: drivers/crypto/nitrox/
F: doc/guides/cryptodevs/nitrox.rst
F: doc/guides/cryptodevs/features/nitrox.ini
Mellanox mlx5
M: Matan Azrad <matan@nvidia.com>
F: drivers/crypto/mlx5/
F: doc/guides/cryptodevs/mlx5.rst
F: doc/guides/cryptodevs/features/mlx5.ini
Null Crypto
M: Kai Ji <kai.ji@intel.com>
F: drivers/crypto/null/
F: doc/guides/cryptodevs/null.rst
F: doc/guides/cryptodevs/features/null.ini
NXP CAAM JR
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
F: drivers/crypto/caam_jr/
F: doc/guides/cryptodevs/caam_jr.rst
F: doc/guides/cryptodevs/features/caam_jr.ini
NXP DPAA_SEC
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
F: drivers/crypto/dpaa_sec/
F: doc/guides/cryptodevs/dpaa_sec.rst
F: doc/guides/cryptodevs/features/dpaa_sec.ini
NXP DPAA2_SEC
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
F: drivers/crypto/dpaa2_sec/
F: doc/guides/cryptodevs/dpaa2_sec.rst
F: doc/guides/cryptodevs/features/dpaa2_sec.ini
OpenSSL
M: Kai Ji <kai.ji@intel.com>
F: drivers/crypto/openssl/
F: doc/guides/cryptodevs/openssl.rst
F: doc/guides/cryptodevs/features/openssl.ini
Virtio
M: Jay Zhou <jianjay.zhou@huawei.com>
F: drivers/crypto/virtio/
F: doc/guides/cryptodevs/virtio.rst
F: doc/guides/cryptodevs/features/virtio.ini
Compression Drivers
-------------------
T: git://dpdk.org/next/dpdk-next-crypto
Cavium OCTEON TX zipvf
M: Ashish Gupta <ashish.gupta@marvell.com>
F: drivers/compress/octeontx/
F: doc/guides/compressdevs/octeontx.rst
F: doc/guides/compressdevs/features/octeontx.ini
Intel QuickAssist
M: Kai Ji <kai.ji@intel.com>
F: drivers/compress/qat/
F: drivers/common/qat/
ISA-L
M: Lee Daly <lee.daly@intel.com>
F: drivers/compress/isal/
F: doc/guides/compressdevs/isal.rst
F: doc/guides/compressdevs/features/isal.ini
Mellanox mlx5
M: Matan Azrad <matan@nvidia.com>
F: drivers/compress/mlx5/
ZLIB
M: Sunila Sahu <ssahu@marvell.com>
F: drivers/compress/zlib/
F: doc/guides/compressdevs/zlib.rst
F: doc/guides/compressdevs/features/zlib.ini
DMAdev Drivers
--------------
Intel IDXD - EXPERIMENTAL
M: Bruce Richardson <bruce.richardson@intel.com>
M: Kevin Laatz <kevin.laatz@intel.com>
F: drivers/dma/idxd/
F: doc/guides/dmadevs/idxd.rst
Intel IOAT
M: Bruce Richardson <bruce.richardson@intel.com>
M: Conor Walsh <conor.walsh@intel.com>
F: drivers/dma/ioat/
F: doc/guides/dmadevs/ioat.rst
HiSilicon DMA
M: Chengwen Feng <fengchengwen@huawei.com>
F: drivers/dma/hisilicon/
F: doc/guides/dmadevs/hisilicon.rst
Marvell CNXK DPI DMA
M: Radha Mohan Chintakuntla <radhac@marvell.com>
M: Veerasenareddy Burru <vburru@marvell.com>
F: drivers/dma/cnxk/
F: doc/guides/dmadevs/cnxk.rst
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
F: drivers/dma/dpaa/
F: doc/guides/dmadevs/dpaa.rst
NXP DPAA2 QDMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
F: drivers/dma/dpaa2/
F: doc/guides/dmadevs/dpaa2.rst
RegEx Drivers
-------------
Marvell OCTEON CN9K regex
M: Liron Himi <lironh@marvell.com>
F: drivers/regex/cn9k/
F: doc/guides/regexdevs/cn9k.rst
F: doc/guides/regexdevs/features/cn9k.ini
Mellanox mlx5
M: Ori Kam <orika@nvidia.com>
F: drivers/regex/mlx5/
F: doc/guides/regexdevs/mlx5.rst
F: doc/guides/regexdevs/features/mlx5.ini
vDPA Drivers
------------
T: git://dpdk.org/next/dpdk-next-virtio
Intel ifc
M: Xiao Wang <xiao.w.wang@intel.com>
F: drivers/vdpa/ifc/
F: doc/guides/vdpadevs/ifc.rst
F: doc/guides/vdpadevs/features/ifcvf.ini
Mellanox mlx5 vDPA
M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
F: drivers/vdpa/mlx5/
F: doc/guides/vdpadevs/mlx5.rst
F: doc/guides/vdpadevs/features/mlx5.ini
Xilinx sfc vDPA
M: Vijay Kumar Srivastava <vsrivast@xilinx.com>
F: drivers/vdpa/sfc/
F: doc/guides/vdpadevs/sfc.rst
F: doc/guides/vdpadevs/features/sfc.ini
Eventdev Drivers
----------------
M: Jerin Jacob <jerinj@marvell.com>
T: git://dpdk.org/next/dpdk-next-eventdev
Cavium OCTEON TX ssovf
M: Jerin Jacob <jerinj@marvell.com>
F: drivers/event/octeontx/
F: doc/guides/eventdevs/octeontx.rst
Cavium OCTEON TX timvf
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
F: drivers/event/octeontx/timvf_*
Intel DLB2
M: Timothy McDaniel <timothy.mcdaniel@intel.com>
F: drivers/event/dlb2/
F: doc/guides/eventdevs/dlb2.rst
Marvell cnxk
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
M: Shijith Thotton <sthotton@marvell.com>
F: drivers/event/cnxk/
F: doc/guides/eventdevs/cnxk.rst
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
F: drivers/event/dpaa/
F: doc/guides/eventdevs/dpaa.rst
NXP DPAA2 eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
F: drivers/event/dpaa2/
F: doc/guides/eventdevs/dpaa2.rst
Software Eventdev PMD
M: Harry van Haaren <harry.van.haaren@intel.com>
F: drivers/event/sw/
F: doc/guides/eventdevs/sw.rst
F: examples/eventdev_pipeline/
F: doc/guides/sample_app_ug/eventdev_pipeline.rst
Distributed Software Eventdev PMD
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
F: drivers/event/dsw/
F: doc/guides/eventdevs/dsw.rst
Software OPDL Eventdev PMD
M: Liang Ma <liangma@liangbit.com>
M: Peter Mccarthy <peter.mccarthy@intel.com>
F: drivers/event/opdl/
F: doc/guides/eventdevs/opdl.rst
Baseband Drivers
----------------
Intel baseband
M: Nicolas Chautru <nicolas.chautru@intel.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: drivers/baseband/turbo_sw/
F: doc/guides/bbdevs/turbo_sw.rst
F: doc/guides/bbdevs/features/turbo_sw.ini
F: drivers/baseband/fpga_lte_fec/
F: doc/guides/bbdevs/fpga_lte_fec.rst
F: doc/guides/bbdevs/features/fpga_lte_fec.ini
F: drivers/baseband/fpga_5gnr_fec/
F: doc/guides/bbdevs/fpga_5gnr_fec.rst
F: doc/guides/bbdevs/features/fpga_5gnr_fec.ini
F: drivers/baseband/acc100/
F: doc/guides/bbdevs/acc100.rst
F: doc/guides/bbdevs/features/acc100.ini
F: doc/guides/bbdevs/features/acc101.ini
Null baseband
M: Nicolas Chautru <nicolas.chautru@intel.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: drivers/baseband/null/
F: doc/guides/bbdevs/null.rst
F: doc/guides/bbdevs/features/null.ini
NXP LA12xx
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: drivers/baseband/la12xx/
F: doc/guides/bbdevs/la12xx.rst
F: doc/guides/bbdevs/features/la12xx.ini
GPU Drivers
-----------
NVIDIA CUDA
M: Elena Agostini <eagostini@nvidia.com>
F: drivers/gpu/cuda/
F: doc/guides/gpus/cuda.rst
Rawdev Drivers
--------------
Intel FPGA
M: Rosen Xu <rosen.xu@intel.com>
M: Tianfei zhang <tianfei.zhang@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/raw/ifpga/
F: doc/guides/rawdevs/ifpga.rst
Marvell CNXK BPHY
M: Jakub Palider <jpalider@marvell.com>
M: Tomasz Duszynski <tduszynski@marvell.com>
F: doc/guides/rawdevs/cnxk_bphy.rst
F: drivers/raw/cnxk_bphy/
Marvell CNXK GPIO
M: Jakub Palider <jpalider@marvell.com>
M: Tomasz Duszynski <tduszynski@marvell.com>
F: doc/guides/rawdevs/cnxk_gpio.rst
F: drivers/raw/cnxk_gpio/
NTB
M: Jingjing Wu <jingjing.wu@intel.com>
M: Junfeng Guo <junfeng.guo@intel.com>
F: drivers/raw/ntb/
F: doc/guides/rawdevs/ntb.rst
F: examples/ntb/
F: doc/guides/sample_app_ug/ntb.rst
NXP DPAA2 CMDIF
M: Gagandeep Singh <g.singh@nxp.com>
F: drivers/raw/dpaa2_cmdif/
F: doc/guides/rawdevs/dpaa2_cmdif.rst
Packet processing
-----------------
Network headers
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/net/
F: app/test/test_cksum.c
F: app/test/test_cksum_perf.c
Packet CRC
M: Jasvinder Singh <jasvinder.singh@intel.com>
F: lib/net/net_crc.h
F: lib/net/rte_net_crc*
F: lib/net/net_crc_avx512.c
F: lib/net/net_crc_sse.c
F: app/test/test_crc.c
IP fragmentation & reassembly
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
F: lib/ip_frag/
F: doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
F: app/test/test_ipfrag.c
F: examples/ip_fragmentation/
F: doc/guides/sample_app_ug/ip_frag.rst
F: examples/ip_reassembly/
F: doc/guides/sample_app_ug/ip_reassembly.rst
Generic Receive Offload - EXPERIMENTAL
M: Jiayu Hu <jiayu.hu@intel.com>
F: lib/gro/
F: doc/guides/prog_guide/generic_receive_offload_lib.rst
Generic Segmentation Offload
M: Jiayu Hu <jiayu.hu@intel.com>
F: lib/gso/
F: doc/guides/prog_guide/generic_segmentation_offload_lib.rst
IPsec
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/ipsec/
M: Bernard Iremonger <bernard.iremonger@intel.com>
F: app/test/test_ipsec*
F: doc/guides/prog_guide/ipsec_lib.rst
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
F: app/test-sad/
Flow Classify - EXPERIMENTAL
M: Bernard Iremonger <bernard.iremonger@intel.com>
F: lib/flow_classify/
F: app/test/test_flow_classify*
F: doc/guides/prog_guide/flow_classify_lib.rst
F: examples/flow_classify/
F: doc/guides/sample_app_ug/flow_classify.rst
Distributor
M: David Hunt <david.hunt@intel.com>
F: lib/distributor/
F: doc/guides/prog_guide/packet_distrib_lib.rst
F: app/test/test_distributor*
F: examples/distributor/
F: doc/guides/sample_app_ug/dist_app.rst
Reorder
M: Reshma Pattan <reshma.pattan@intel.com>
F: lib/reorder/
F: doc/guides/prog_guide/reorder_lib.rst
F: app/test/test_reorder*
F: examples/packet_ordering/
F: doc/guides/sample_app_ug/packet_ordering.rst
Hierarchical scheduler
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
M: Jasvinder Singh <jasvinder.singh@intel.com>
F: lib/sched/
F: doc/guides/prog_guide/qos_framework.rst
F: app/test/test_pie.c
F: app/test/test_red.c
F: app/test/test_sched.c
F: examples/qos_sched/
F: doc/guides/sample_app_ug/qos_scheduler.rst
Packet capture
M: Reshma Pattan <reshma.pattan@intel.com>
M: Stephen Hemminger <stephen@networkplumber.org>
F: lib/pdump/
F: doc/guides/prog_guide/pdump_lib.rst
F: app/test/test_pdump.*
F: lib/pcapng/
F: doc/guides/prog_guide/pcapng_lib.rst
F: app/test/test_pcapng.c
F: app/pdump/
F: doc/guides/tools/pdump.rst
F: app/dumpcap/
F: doc/guides/tools/dumpcap.rst
Packet Framework
----------------
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
F: lib/pipeline/
F: lib/port/
F: lib/table/
F: doc/guides/prog_guide/packet_framework.rst
F: app/test/test_table*
F: app/test-pipeline/
F: doc/guides/sample_app_ug/test_pipeline.rst
F: examples/ip_pipeline/
F: examples/pipeline/
F: doc/guides/sample_app_ug/ip_pipeline.rst
Algorithms
----------
ACL
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
F: lib/acl/
F: doc/guides/prog_guide/packet_classif_access_ctrl.rst
F: app/test-acl/
F: app/test/test_acl.*
EFD
M: Byron Marohn <byron.marohn@intel.com>
M: Yipeng Wang <yipeng1.wang@intel.com>
F: lib/efd/
F: doc/guides/prog_guide/efd_lib.rst
F: app/test/test_efd*
F: examples/server_node_efd/
F: doc/guides/sample_app_ug/server_node_efd.rst
Hashes
M: Yipeng Wang <yipeng1.wang@intel.com>
M: Sameh Gobriel <sameh.gobriel@intel.com>
M: Bruce Richardson <bruce.richardson@intel.com>
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
F: lib/hash/
F: doc/guides/prog_guide/hash_lib.rst
F: doc/guides/prog_guide/toeplitz_hash_lib.rst
F: app/test/test_*hash*
F: app/test/test_func_reentrancy.c
LPM
M: Bruce Richardson <bruce.richardson@intel.com>
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
F: lib/lpm/
F: doc/guides/prog_guide/lpm*
F: app/test/test_lpm*
F: app/test/test_func_reentrancy.c
F: app/test/test_xmmt_ops.h
Membership - EXPERIMENTAL
M: Yipeng Wang <yipeng1.wang@intel.com>
M: Sameh Gobriel <sameh.gobriel@intel.com>
F: lib/member/
F: doc/guides/prog_guide/member_lib.rst
F: app/test/test_member*
RIB/FIB
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
F: lib/rib/
F: app/test/test_rib*
F: lib/fib/
F: app/test/test_fib*
F: app/test-fib/
Traffic metering
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
F: lib/meter/
F: doc/guides/sample_app_ug/qos_scheduler.rst
F: app/test/test_meter.c
F: examples/qos_meter/
F: doc/guides/sample_app_ug/qos_metering.rst
Other libraries
---------------
Configuration file
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
F: lib/cfgfile/
F: app/test/test_cfgfile.c
F: app/test/test_cfgfiles/
Interactive command line
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/cmdline/
F: app/test-cmdline/
F: app/test/test_cmdline*
F: examples/cmdline/
F: doc/guides/sample_app_ug/cmd_line.rst
Key/Value parsing
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/kvargs/
F: app/test/test_kvargs.c
RCU
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
F: lib/rcu/
F: app/test/test_rcu*
F: doc/guides/prog_guide/rcu_lib.rst
PCI
M: Gaetan Rivet <grive@u256.net>
F: lib/pci/
Power management
M: David Hunt <david.hunt@intel.com>
F: lib/power/
F: doc/guides/prog_guide/power_man.rst
F: app/test/test_power*
F: examples/l3fwd-power/
F: doc/guides/sample_app_ug/l3_forward_power_man.rst
F: examples/vm_power_manager/
F: doc/guides/sample_app_ug/vm_power_management.rst
Timers
M: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
F: lib/timer/
F: doc/guides/prog_guide/timer_lib.rst
F: app/test/test_timer*
F: examples/timer/
F: doc/guides/sample_app_ug/timer.rst
Job statistics
F: lib/jobstats/
F: examples/l2fwd-jobstats/
F: doc/guides/sample_app_ug/l2_forward_job_stats.rst
Metrics
F: lib/metrics/
F: app/test/test_metrics.c
Bit-rate statistics
F: lib/bitratestats/
F: app/test/test_bitratestats.c
Latency statistics
M: Reshma Pattan <reshma.pattan@intel.com>
F: lib/latencystats/
F: app/test/test_latencystats.c
Telemetry
M: Ciara Power <ciara.power@intel.com>
F: lib/telemetry/
F: app/test/test_telemetry*
F: usertools/dpdk-telemetry*
F: doc/guides/howto/telemetry.rst
BPF
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
F: lib/bpf/
F: examples/bpf/
F: app/test/test_bpf.c
F: app/test-pmd/bpf_cmd.*
F: doc/guides/prog_guide/bpf_lib.rst
Graph - EXPERIMENTAL
M: Jerin Jacob <jerinj@marvell.com>
M: Kiran Kumar K <kirankumark@marvell.com>
F: lib/graph/
F: doc/guides/prog_guide/graph_lib.rst
F: app/test/test_graph*
M: Nithin Dabilpuram <ndabilpuram@marvell.com>
F: examples/l3fwd-graph/
F: doc/guides/sample_app_ug/l3_forward_graph.rst
Nodes - EXPERIMENTAL
M: Nithin Dabilpuram <ndabilpuram@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
F: lib/node/
Test Applications
-----------------
Unit tests framework
F: app/test/commands.c
F: app/test/has_hugepage.py
F: app/test/packet_burst_generator.c
F: app/test/packet_burst_generator.h
F: app/test/process.h
F: app/test/resource.*
F: app/test/test.c
F: app/test/test.h
F: app/test/test_pmd_perf.c
F: app/test/test_resource.c
F: app/test/virtual_pmd.c
F: app/test/virtual_pmd.h
Sample packet helper functions for unit test
M: Reshma Pattan <reshma.pattan@intel.com>
F: app/test/sample_packet_forward.c
F: app/test/sample_packet_forward.h
Networking drivers testing tool
M: Aman Singh <aman.deep.singh@intel.com>
M: Yuying Zhang <yuying.zhang@intel.com>
T: git://dpdk.org/next/dpdk-next-net
F: app/test-pmd/
F: doc/guides/testpmd_app_ug/
Flow performance tool
M: Wisam Jaddo <wisamm@nvidia.com>
F: app/test-flow-perf/
F: doc/guides/tools/flow-perf.rst
Compression performance test application
T: git://dpdk.org/next/dpdk-next-crypto
F: app/test-compress-perf/
F: doc/guides/tools/comp_perf.rst
Crypto performance test application
M: Ciara Power <ciara.power@intel.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: app/test-crypto-perf/
F: doc/guides/tools/cryptoperf.rst
Eventdev test application
M: Jerin Jacob <jerinj@marvell.com>
T: git://dpdk.org/next/dpdk-next-eventdev
F: app/test-eventdev/
F: doc/guides/tools/testeventdev.rst
F: doc/guides/tools/img/eventdev_*
F: app/test/test_event_ring.c
Procinfo tool
M: Maryam Tahhan <maryam.tahhan@intel.com>
M: Reshma Pattan <reshma.pattan@intel.com>
F: app/proc-info/
F: doc/guides/tools/proc_info.rst
Other Example Applications
--------------------------
Ethtool example
F: examples/ethtool/
F: doc/guides/sample_app_ug/ethtool.rst
FIPS validation example
M: Brian Dooley <brian.dooley@intel.com>
F: examples/fips_validation/
F: doc/guides/sample_app_ug/fips_validation.rst
Flow filtering example
M: Ori Kam <orika@nvidia.com>
F: examples/flow_filtering/
F: doc/guides/sample_app_ug/flow_filtering.rst
Helloworld example
M: Bruce Richardson <bruce.richardson@intel.com>
F: examples/helloworld/
F: doc/guides/sample_app_ug/hello_world.rst
IPsec security gateway example
M: Radu Nicolau <radu.nicolau@intel.com>
M: Akhil Goyal <gakhil@marvell.com>
F: examples/ipsec-secgw/
F: doc/guides/sample_app_ug/ipsec_secgw.rst
IPv4 multicast example
F: examples/ipv4_multicast/
F: doc/guides/sample_app_ug/ipv4_multicast.rst
L2 forwarding example
M: Bruce Richardson <bruce.richardson@intel.com>
F: examples/l2fwd/
F: doc/guides/sample_app_ug/l2_forward_real_virtual.rst
L2 forwarding with cache allocation example
M: Tomasz Kantecki <tomasz.kantecki@intel.com>
F: doc/guides/sample_app_ug/l2_forward_cat.rst
F: examples/l2fwd-cat/
L2 forwarding with eventdev example
M: Sunil Kumar Kori <skori@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
T: git://dpdk.org/next/dpdk-next-eventdev
F: examples/l2fwd-event/
F: doc/guides/sample_app_ug/l2_forward_event.rst
L3 forwarding example
F: examples/l3fwd/
F: doc/guides/sample_app_ug/l3_forward.rst
Link status interrupt example
F: examples/link_status_interrupt/
F: doc/guides/sample_app_ug/link_status_intr.rst
PTP client example
M: Kirill Rybalchenko <kirill.rybalchenko@intel.com>
F: examples/ptpclient/
Rx/Tx callbacks example
M: Bruce Richardson <bruce.richardson@intel.com>
M: John McNamara <john.mcnamara@intel.com>
F: examples/rxtx_callbacks/
F: doc/guides/sample_app_ug/rxtx_callbacks.rst
Service cores example
M: Harry van Haaren <harry.van.haaren@intel.com>
F: examples/service_cores/
F: doc/guides/sample_app_ug/service_cores.rst
Skeleton example
M: Bruce Richardson <bruce.richardson@intel.com>
M: John McNamara <john.mcnamara@intel.com>
F: examples/skeleton/
F: doc/guides/sample_app_ug/skeleton.rst
VMDq examples
F: examples/vmdq/
F: doc/guides/sample_app_ug/vmdq_forwarding.rst
F: examples/vmdq_dcb/
F: doc/guides/sample_app_ug/vmdq_dcb_forwarding.rst