Introduce a CPU crypto action type, allowing differentiation between
regular asynchronous 'none security' sessions and synchronous,
CPU-crypto-accelerated sessions.
This mode is similar to ACTION_TYPE_NONE but crypto processing is
performed synchronously on a CPU.
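A minimal sketch of requesting such a session, assuming the 20.02
rte_security API; sec_ctx and sess_mp are placeholders prepared by the
caller, and the IPsec/crypto xform setup is elided:

    #include <rte_security.h>

    /* Sketch: ask for synchronous CPU crypto processing by selecting
     * the action type introduced by this patch. */
    static struct rte_security_session *
    create_cpu_crypto_session(struct rte_security_ctx *sec_ctx,
                              struct rte_mempool *sess_mp)
    {
        struct rte_security_session_conf conf = {
            .action_type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO,
            .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
            /* .ipsec and .crypto_xform setup elided */
        };

        return rte_security_session_create(sec_ctx, &conf, sess_mp);
    }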
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Add a new API allowing crypto operations to be processed synchronously.
Operations are performed on a set of SG arrays.
Cryptodevs which support the CPU crypto operation mode have to
advertise the RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO capability.
Add a helper method to easily convert mbufs to the SGL form.
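A minimal sketch of the synchronous path, assuming the 20.02
definitions of rte_crypto_sgl/rte_crypto_sym_vec and the
rte_crypto_mbuf_to_vec() helper; iv/aad/digest/status setup is mostly
elided:

    #include <rte_common.h>
    #include <rte_cryptodev.h>
    #include <rte_crypto_sym.h>
    #include <rte_mbuf.h>

    static void
    process_sync(uint8_t dev_id, struct rte_cryptodev_sym_session *sess,
                 struct rte_mbuf *mb, void *iv, void *digest,
                 int32_t *status)
    {
        struct rte_crypto_vec vec[8];
        struct rte_crypto_sgl sgl = { .vec = vec };
        union rte_crypto_sym_ofs ofs = { .raw = 0 };
        struct rte_crypto_sym_vec symvec = {
            .sgl = &sgl, .iv = &iv, .digest = &digest,
            .status = status, .num = 1,
        };

        /* convert the (possibly multi-segment) mbuf to an SG array */
        sgl.num = rte_crypto_mbuf_to_vec(mb, 0, rte_pktmbuf_pkt_len(mb),
                vec, RTE_DIM(vec));

        /* processing happens synchronously on the CPU: no
         * enqueue/dequeue of crypto ops */
        rte_cryptodev_sym_cpu_crypto_process(dev_id, sess, ofs, &symvec);
    }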
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Move IPSEC_SAD_NAMESIZE into public header
and rename it to RTE_IPSEC_SAD_NAMESIZE
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
This patch catches an overflow that could happen if an
invalid region size or page alignment is provided by the
guest via the VHOST_USER_SET_MEM_TABLE request.
If the sum of the size to mmap and the alignment overflows
uint64_t, the RTE_ALIGN_CEIL(mmap_size, alignment) macro
returns 0. This value was then passed as-is as the size argument
to mmap().
While the kernel's handling of the mmap() syscall returns an error
if the size is 0, it is better to catch the issue earlier and provide
a meaningful error log.
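A minimal sketch of the added check, with a hypothetical helper name;
RTE_ALIGN_CEIL() wraps around to 0 when mmap_size + alignment
overflows uint64_t, which would otherwise reach mmap() as a zero size:

    #include <stdint.h>
    #include <rte_common.h>

    static int
    vhost_check_mmap_size(uint64_t mmap_size, uint64_t alignment)
    {
        /* 0 here means the guest supplied an invalid region size or
         * page alignment: reject it with a clear error instead of
         * letting mmap() fail on size 0 */
        if (RTE_ALIGN_CEIL(mmap_size, alignment) == 0)
            return -1;
        return 0;
    }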
Fixes: ec09c280b8 ("vhost: fix mmap not aligned with hugepage size")
Cc: stable@dpdk.org
Reported-by: Ilja Van Sprundel <ivansprundel@ioactive.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Consider a virtqueue ready when, apart from the descriptor area,
both event suppression areas have been mapped.
Fixes: 2d1541e2b6 ("vhost: add vring address setup for packed queues")
Cc: stable@dpdk.org
Signed-off-by: Adrian Moreno <amorenoz@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
The current implementation of vhost_net with packed vrings tries to
fill the shadow vector before sending any actual changes to the guest.
While this can be beneficial for throughput, it conflicts with
bufferbloat mitigation methods such as the Linux kernel's NAPI, which
stops transmitting packets if there are too many bytes/buffers in the
driver.
To solve this, we flush the shadow packets at the end of
virtio_dev_tx_packed if we have starved the vring, i.e. the next
buffer is not available for the device.
Since this last check can be expensive because of the atomic, we only
perform it if we have not obtained the expected "count" of packets. If
we happen to obtain "count" packets and there are no more available
packets, the caller needs to keep calling virtio_dev_tx_packed again.
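A heavily simplified, self-contained sketch of the idea with
hypothetical types and helpers (the real logic lives in the vhost
packed-ring dequeue path):

    #include <stdbool.h>
    #include <stdint.h>

    struct vq_state {
        uint16_t shadow_count;   /* pending shadow-ring updates */
        bool next_desc_avail;    /* result of the (atomic) avail check */
    };

    static void flush_shadow(struct vq_state *vq) { vq->shadow_count = 0; }

    static void
    maybe_flush(struct vq_state *vq, uint16_t got, uint16_t count)
    {
        /* only pay for the availability check when we came up short;
         * if the vring is starved, push pending updates now instead
         * of waiting to batch more of them */
        if (got < count && !vq->next_desc_avail && vq->shadow_count > 0)
            flush_shadow(vq);
    }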
Fixes: 31d6c6a5b8 ("vhost: optimize packed ring dequeue")
Cc: stable@dpdk.org
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
According to the recvmsg() specification, 0 is a valid
return code when the client is disconnecting.
Therefore, it should not be reported as an error, unless there
are other dependencies that require the message to not be empty.
But there are none, since the next immediate caller of recvmsg()
reports "vhost peer closed" as info (not error) when the message is
empty.
This patch changes the return code check for recvmsg() so that a
misleading error message is not printed when the code is 0.
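A minimal sketch of the corrected check at a plain POSIX recvmsg()
call site:

    #include <sys/socket.h>
    #include <stdio.h>

    static ssize_t
    read_vhost_message(int sockfd, struct msghdr *msgh)
    {
        ssize_t ret = recvmsg(sockfd, msgh, 0);

        if (ret == 0)   /* peer closed the connection: not an error */
            return 0;
        if (ret < 0)    /* a real error */
            perror("recvmsg failed");
        return ret;
    }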
Fixes: 8f972312b8 ("vhost: support vhost-user")
Cc: stable@dpdk.org
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
vhost_user_read_cb() and rte_vhost_driver_unregister()
can be called at the same time by two threads. E.g. thread1
calls vhost_user_read_cb() and removes the vsocket from
conn_list, then thread2 calls rte_vhost_driver_unregister()
and frees the vsocket since it is NOT in conn_list.
Thread1 will then access invalid memory when trying to
reconnect.
The fix is to move the removal of the vsocket from conn_list
to the end of vhost_user_read_cb(), thus avoiding the race
condition.
The core trace is:
Program terminated with signal 11, Segmentation fault.
Fixes: af14759181 ("vhost: introduce API to start a specific driver")
Cc: stable@dpdk.org
Signed-off-by: Zhike Wang <wangzhike@jd.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
ppc64le failed when using large physical memory. I found problems in
two of my past commits.
In commit e072d16f89 ("vfio: fix expanding DMA area in ppc64le"), I added
a sanity check using a mapped address to resolve an issue around expanding
the IOMMU window, but this was not enough, since memory allocation can
return memory anywhere depending on memory fragmentation. DPDK may still
skip DMA mapping and attempt to unmap non-mapped DMA while expanding the
IOMMU window. As a result, SPDK apps using large physical memory
frequently failed to proceed with communication with NVMe and/or went
into an infinite loop.
The root cause of the bug was a gap between the memory segments managed
by DPDK and the firmware-level DMA mapping. DPDK's memory segments don't
contain the state of DMA mapping, so memseg_walk cannot determine
whether an iterated memory segment is mapped or not. This resulted in
incorrect DMA maps and unmaps.
This time, I added code to avoid iterating non-mapped memory segments
during DMA mapping. memseg_walk iterates over memory segments marked as
"used", so the code transiently marks memory segments that will be
mapped or unmapped as "free".
Commit db90b4969e ("vfio: retry creating sPAPR DMA window") allows
retrying different page levels and sizes when creating the DMA window.
However, this allows page sizes different from the hugepage sizes. This
inconsistency caused failures at the time of DMA mapping after the
window creation. This patch fixes the retry logic to try only different
page levels.
Fixes: e072d16f89 ("vfio: fix expanding DMA area in ppc64le")
Fixes: db90b4969e ("vfio: retry creating sPAPR DMA window")
Cc: stable@dpdk.org
Signed-off-by: Takeshi Yoshimura <tyos@jp.ibm.com>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
The Timer, LPM and Distributor libraries no longer use function
versioning and therefore do not need a separate build for the static
and shared versions of the libraries.
This patch removes use_function_versioning from their meson build files
and the corresponding include from the sources.
Fixes: f2fb215843 ("timer: remove deprecated code")
Fixes: 6e5b516761 ("distributor: remove deprecated code")
Fixes: c381a8d554 ("lpm: remove deprecated code")
Cc: stable@dpdk.org
Signed-off-by: Andrzej Ostruszka <aostruszka@marvell.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
The USER1 logtype should not be overloaded for library functions.
Use a dynamic log type instead.
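A minimal sketch of the dynamic log type pattern, with a hypothetical
component name:

    #include <rte_common.h>
    #include <rte_log.h>

    /* register a dedicated log type at init time instead of reusing
     * RTE_LOGTYPE_USER1 */
    static int mylib_logtype;

    RTE_INIT(mylib_log_init)
    {
        mylib_logtype = rte_log_register("lib.mylib");
        if (mylib_logtype >= 0)
            rte_log_set_level(mylib_logtype, RTE_LOG_NOTICE);
    }

    #define MYLIB_LOG(level, fmt, args...) \
        rte_log(RTE_LOG_ ## level, mylib_logtype, "mylib: " fmt, ## args)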
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
The API gives the impression that rte_cryptodev_info_get() cannot
return a value >= 3 (RTE_CRYPTO_AEAD_LIST_END in 19.11).
20.02-rc1 was returning 3 (RTE_CRYPTO_AEAD_CHACHA20_POLY1305),
so the ABI compatibility contract was broken.
It could be solved with some function versioning,
but due to a lack of time, the feature is reverted for now.
This reverts following commits:
- 6c9f3b347e ("cryptodev: add Chacha20-Poly1305 AEAD algorithm")
- 2c512e64d6 ("crypto/qat: support Chacha Poly")
- d55e01f579 ("test/crypto: add Chacha Poly cases")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Those headers are internal and should not be distributed.
Fixes: 5b9656b157 ("lib: build with meson")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Contrary to the -c/-l options, where a logical core runs on the same
physical core in a 1:1 fashion (example: lcore 0 runs on core 0, lcore
16 runs on core 16), the --lcores option makes it possible to select
the physical cores on which a logical core runs.
However, the current parsing code still limits the cpuset to the
[0, RTE_MAX_LCORE] range.
For example, before the patch, on a 24-core system with RTE_MAX_LCORE == 16:
$ ./master/app/testpmd --no-huge --no-pci -m 512 --log-level *:debug \
--lcores 0@16,1@17 -- -i --total-num-mbufs 2048
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Detected lcore 6 as core 6 on socket 0
EAL: Detected lcore 7 as core 8 on socket 0
EAL: Detected lcore 8 as core 9 on socket 0
EAL: Detected lcore 9 as core 10 on socket 0
EAL: Detected lcore 10 as core 11 on socket 0
EAL: Detected lcore 11 as core 12 on socket 0
EAL: Detected lcore 12 as core 13 on socket 0
EAL: Detected lcore 13 as core 14 on socket 0
EAL: Detected lcore 14 as core 0 on socket 0
EAL: Detected lcore 15 as core 1 on socket 0
EAL: Skipped lcore 16 as core 2 on socket 0
EAL: Skipped lcore 17 as core 3 on socket 0
EAL: Skipped lcore 18 as core 4 on socket 0
EAL: Skipped lcore 19 as core 5 on socket 0
EAL: Skipped lcore 20 as core 6 on socket 0
EAL: Skipped lcore 21 as core 8 on socket 0
EAL: Skipped lcore 22 as core 9 on socket 0
EAL: Skipped lcore 23 as core 10 on socket 0
EAL: Skipped lcore 24 as core 11 on socket 0
EAL: Skipped lcore 25 as core 12 on socket 0
EAL: Skipped lcore 26 as core 13 on socket 0
EAL: Skipped lcore 27 as core 14 on socket 0
EAL: Support maximum 16 logical core(s) by configuration.
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: invalid parameter for --lcores
We can remove this limitation by using a cpuset_t (which is a more
natural type since this is what gets passed to pthread_setaffinity*
in the end).
After the patch:
$ ./master/app/testpmd --no-huge --no-pci -m 512 --log-level *:debug \
--lcores 0@16,1@17 -- -i --total-num-mbufs 2048
[...]
EAL: Master lcore 0 is ready (tid=7f94217bbc00;cpuset=[16])
EAL: lcore 1 is ready (tid=7f941f491700;cpuset=[17])
Signed-off-by: David Marchand <david.marchand@redhat.com>
Add debug logs to have a trace of unused cores for -c/-l options on
systems with more cores than RTE_MAX_LCORE.
Signed-off-by: David Marchand <david.marchand@redhat.com>
We use this state in the control path only, for service cores and the
-c/-l options.
The value is not updated when using --lcores.
Use the internal helper where needed.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Fix the name of CPU_SETSIZE in the hope that we can reuse it in other
parts of DPDK manipulating rte_cpuset_t.
Fixes: 4dc2b4d2a4 ("eal/windows: add headers for compatibility")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
The dedicated routine rte_pktmbuf_pool_create_extbuf() is
provided to create an mbuf pool with data buffers located in
pinned external memory. The application provides the
external memory description and the routine initializes each
mbuf with the appropriate virtual and physical buffer addresses.
It is entirely the application's responsibility to register the
external memory with the rte_extmem_register() API, map this
memory, etc.
The newly introduced flag RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF
is set in the private pool structure, specifying the new special
pool type. Mbufs allocated from a pool of this kind will
have the EXT_ATTACHED_MBUF flag set and an initialized shared
info structure, allowing cloning with regular mbufs (without
attached external buffers of any kind).
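A minimal usage sketch, assuming the rte_pktmbuf_extmem description
and the routine's signature as introduced here; ext_va, ext_iova and
ext_len describe memory the application has already registered with
rte_extmem_register() and DMA-mapped:

    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    static struct rte_mempool *
    create_pinned_pool(void *ext_va, rte_iova_t ext_iova, size_t ext_len)
    {
        struct rte_pktmbuf_extmem ext_mem = {
            .buf_ptr  = ext_va,
            .buf_iova = ext_iova,
            .buf_len  = ext_len,
            .elt_size = 2048,       /* data buffer size per mbuf */
        };

        return rte_pktmbuf_pool_create_extbuf("pinned_pool", 8192, 256,
                0, 2048, rte_socket_id(), &ext_mem, 1);
    }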
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Update the detach routine to check the mbuf pool type.
Introduce a special internal version of the detach routine to handle
the special case of a pinned external buffer on mbuf freeing.
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The routine rte_pktmbuf_priv_flags() is introduced to fetch
the flags from the mbuf memory pool private structure
in a unified fashion.
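A minimal sketch of the intended use:

    #include <rte_mbuf.h>

    /* branch on the pool type in a unified way */
    static int
    pool_has_pinned_extbuf(struct rte_mempool *mp)
    {
        return (rte_pktmbuf_priv_flags(mp) &
                RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF) != 0;
    }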
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Use new marker typedef available in EAL and remove private marker
typedef.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Introduce EAL typedefs for marking 1B, 2B, 4B and 8B alignment within
a structure, plus a generic marker for a point in a structure.
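A minimal sketch, assuming the RTE_MARKER/RTE_MARKER64 typedef names
from rte_common.h:

    #include <stdint.h>
    #include <rte_common.h>

    /* markers are zero-size fields that name points in a struct
     * layout; RTE_MARKER64 additionally forces 8B alignment */
    struct my_entry {
        RTE_MARKER cacheline0;   /* generic point-in-structure marker */
        uint32_t id;
        uint32_t refcnt;
        RTE_MARKER64 stats;      /* 8B-aligned marker */
        uint64_t rx_pkts;
        uint64_t tx_pkts;
    };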
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Matan Azrad <matan@mellanox.com>
In case of a too-low number of memzone segments, the user notification
was misleading. This patch improves the message by better explaining
the cause.
Signed-off-by: Artur Trybula <arturx.trybula@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
In (currently unreleased) FreeBSD 13, the CPU_NAND macro has been renamed
to CPU_ANDNOT, so we need to use different DPDK-specific macros depending
on what system-defined ones are present.
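A hedged sketch of the compile-time selection, with a hypothetical
DPDK-side macro name:

    #include <sys/param.h>
    #include <sys/cpuset.h>

    /* FreeBSD 13 renamed CPU_NAND to CPU_ANDNOT, so select whichever
     * macro the system defines */
    #ifdef CPU_ANDNOT
    #define RTE_CPUSET_ANDNOT(dst, src) CPU_ANDNOT(dst, src)
    #else
    #define RTE_CPUSET_ANDNOT(dst, src) CPU_NAND(dst, src)
    #endif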
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Moved RFC4115 APIs to non-experimental as they have been there
since 19.02. Also, these APIs are the same as the non RFC4115 APIs.
Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Enhance API documentation of rte_pktmbuf_attach_extbuf() to
explain that the attached mbuf is initialized with length = 0.
Link: https://bugs.dpdk.org/show_bug.cgi?id=362
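A minimal sketch illustrating the clarified behavior; the buffer
parameters and shinfo are assumed to be prepared by the caller:

    #include <rte_mbuf.h>

    static struct rte_mbuf *
    attach_and_fill(struct rte_mempool *mp, void *buf_va,
                    rte_iova_t buf_iova, uint16_t buf_len,
                    struct rte_mbuf_ext_shared_info *shinfo,
                    uint16_t payload_len)
    {
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);

        if (m == NULL)
            return NULL;
        rte_pktmbuf_attach_extbuf(m, buf_va, buf_iova, buf_len, shinfo);
        /* m->data_len == 0 and m->pkt_len == 0 at this point: the
         * length must be set explicitly, e.g. via append */
        if (rte_pktmbuf_append(m, payload_len) == NULL) {
            rte_pktmbuf_free(m);
            return NULL;
        }
        return m;
    }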
Signed-off-by: Jörg Thalheim <joerg@thalheim.io>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The existing optimize_object_size() function addresses the memory
object alignment constraint on x86 for better performance.
A different (micro) architecture may have a different memory alignment
constraint for better performance, and it is not the same as the
existing optimize_object_size(). Some use an XOR (CRC-like) scheme to
enable DRAM channel distribution based on the address, and some may
have a different formula.
Introduce an arch_mem_object_align() function to abstract
the differences between (micro) architectures and to avoid
wasting memory on mempool object alignment for architectures
that do not require it.
Details on the amount of memory saved:
Currently, arm64-based architectures use the default (nchan=4,
nrank=1). The worst case is for an object whose size (including the
mempool header) is 2 cache lines, where it is padded to 3 cache lines
(+50%).
Examples for cache line size = 64:
    orig -> optimized
      64 ->   64    +0%
     128 ->  192   +50%
     192 ->  192    +0%
     256 ->  320   +25%
     320 ->  320    +0%
     384 ->  448   +16%
     ...
    2304 -> 2368  +2.7%  (~mbuf size)
Additional details:
https://www.mail-archive.com/dev@dpdk.org/msg149157.html
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
To populate a mempool with a virtual area, the mempool code calls
rte_mempool_populate_iova() for each iova-contiguous area. It happens
(rarely) that this area is too small to store one object. In this case,
rte_mempool_populate_iova() returns an error, which is forwarded by
rte_mempool_populate_virt().
This case should not make rte_mempool_populate_virt() fail; instead,
the area that is too small should simply be ignored.
To fix this issue, change the return value of
rte_mempool_populate_iova() to 0 when no object can be populated,
so it can be ignored by the caller. As this would be an API/ABI change,
only do this modification internally for now.
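A sketch of the caller side with a hypothetical internal helper that
mirrors the new convention (number of objects populated, 0 when the
chunk cannot hold even one object, negative errno on real errors):

    #include <rte_mempool.h>

    /* hypothetical internal helper following the new convention */
    static int mempool_populate_iova(struct rte_mempool *mp, char *va,
            rte_iova_t iova, size_t len);

    static int
    populate_one_chunk(struct rte_mempool *mp, char *va, rte_iova_t iova,
                       size_t len, int *cnt)
    {
        int ret = mempool_populate_iova(mp, va, iova, len);

        if (ret == 0)       /* chunk too small for one object: skip */
            return 0;
        if (ret < 0)
            return ret;     /* real error: forward it */
        *cnt += ret;
        return 0;
    }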
Fixes: 354788b60c ("mempool: allow populating with unaligned virtual area")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Alvin Zhang <alvinx.zhang@intel.com>
When allocating a mempool which is larger than the largest
available area, it can take a lot of time:
a- the mempool calculates the required memory size and tries
   to allocate it; it fails
b- then it tries to allocate the largest available area (this
   does not request new huge pages)
c- this zone is added to the mempool, which triggers the allocation
   of a mem hdr, which requests a new huge page
d- back to a- until the mempool is populated or until there is no
   more memory
This can take a lot of time to finally fail (several minutes): in step
a- it takes all available hugepages on the system, then releases them
all after it fails.
The problem appeared with commit eba11e3646 ("mempool: reduce wasted
space on populate"), because smaller chunks are now allowed. Previously,
a chunk had to be at least one page in size, which is not the case in
step b-.
To fix this, implement our own way to allocate the largest available
area instead of using the feature from memzone: if an allocation fails,
try to divide the size by 2 and retry. When the requested size falls
below min_chunk_size, stop and return an error.
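A minimal sketch of the halving strategy with hypothetical names:

    #include <rte_memzone.h>

    static const struct rte_memzone *
    reserve_largest(const char *name, size_t want, size_t min_chunk_size,
                    int socket_id)
    {
        size_t mem_size = want;
        const struct rte_memzone *mz;

        for (;;) {
            mz = rte_memzone_reserve(name, mem_size, socket_id, 0);
            if (mz != NULL)
                return mz;
            if (mem_size / 2 < min_chunk_size)
                return NULL;    /* give up: caller returns an error */
            mem_size /= 2;      /* retry with half the size */
        }
    }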
Fixes: eba11e3646 ("mempool: reduce wasted space on populate")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tested-by: Ali Alnubani <alialnu@mellanox.com>
The documentation says that a negative errno is returned on error, but
in most places that's not the case.
Fix the documentation and the exceptions in the code. The second fix
(the return from populate_virt) also resolves a memory leak.
Note that testpmd was using the function correctly.
Fixes: aa10457eb4 ("mempool: make mempool populate and free api public")
Fixes: 6780f72fb8 ("mempool: populate with anonymous memory")
Fixes: 66e7ba0bad ("mempool: ensure mempool is initialized before populating")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The rte_security API, which enables the inline protocol/crypto feature,
mandates that an rte_flow be created for every security session. This
would internally translate to a rule in the hardware which would do
packet classification.
In rte_security, one SA is one security session. And if an rte_flow
needs to be created for every session, the number of SAs supported by an
inline implementation would be limited by the number of rte_flows the
PMD is able to support.
If the SPI and IP address fields are allowed to be ranges, this
limitation can be overcome: multiple flows will be able to use one rule
for SECURITY processing. In this case, the security session provided as
conf would be NULL.
The application should do an rte_flow_validate() to make sure the flow
is supported on the PMD.
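A minimal sketch, assuming the semantics introduced here: the SPI
range is expressed via the item mask, and the SECURITY action carries
a NULL session since one rule now covers multiple SAs:

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    static struct rte_flow *
    create_range_rule(uint16_t port_id)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_esp esp_spec = {
            .hdr.spi = RTE_BE32(0x1000) };
        struct rte_flow_item_esp esp_mask = {
            .hdr.spi = RTE_BE32(0xfffff000) };  /* SPI range via mask */
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_ESP,
              .spec = &esp_spec, .mask = &esp_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = NULL },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error err;

        if (rte_flow_validate(port_id, &attr, pattern, actions, &err) != 0)
            return NULL;    /* PMD does not support ranges */
        return rte_flow_create(port_id, &attr, pattern, actions, &err);
    }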
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ori Kam <orika@mellanox.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
It is useful to know when the next timer will expire when
using rte_epoll_wait (or when sleeping while idle). This experimental
API provides a hook to query the number of ticks remaining.
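A minimal sketch, assuming rte_timer_next_ticks() as introduced by
this patch; it derives an epoll/sleep timeout from the remaining
ticks:

    #include <rte_cycles.h>
    #include <rte_timer.h>

    static int
    next_timeout_ms(void)
    {
        int64_t ticks = rte_timer_next_ticks();

        if (ticks < 0)
            return -1;  /* no timer pending: wait indefinitely */
        /* convert TSC ticks to milliseconds */
        return (int)(ticks * MS_PER_S / rte_get_timer_hz());
    }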
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
Make the latency calculation multithread safe by using a spinlock.
Fixes: 5cd3cac9ed ("latency: added new library for latency stats")
Cc: stable@dpdk.org
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Add a new API function to query the maximum key ID
that could possibly be returned by rte_hash_add_key and
rte_hash_add_key_with_hash. When RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD
is set, the maximum key ID is larger than the entry count specified
by the user.
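A minimal sketch of the intended use: sizing a per-key user data
array from the maximum possible key ID rather than the configured
entry count:

    #include <stdlib.h>
    #include <rte_hash.h>

    static void *
    alloc_key_data(const struct rte_hash *h, size_t elt_size)
    {
        int32_t max_id = rte_hash_max_key_id(h);

        if (max_id < 0)
            return NULL;
        /* valid also with RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD set */
        return calloc((size_t)max_id + 1, elt_size);
    }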
Signed-off-by: Kumar Amber <kumar.amber@intel.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
rte_cfgfile_section_num_entries_by_index was missing from the map file,
so the meson build failed when calling this function, due to linking a
binary against cfgfile built as a shared library.
Fixes: 3d2e0448eb ("cfgfile: add section number of entries by index")
Cc: stable@dpdk.org
Signed-off-by: Liron Himi <lironh@marvell.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
The header linux/version.h isn't included when CONFIG_RTE_EAL_VFIO
is explicitly disabled. LINUX_VERSION_CODE and KERNEL_VERSION are
therefore undefined, causing the build failure:
lib/librte_eal/linux/eal/eal.c: In function ‘rte_eal_init’:
lib/librte_eal/linux/eal/eal.c:1076:32: error: "LINUX_VERSION_CODE" is
not defined, evaluates to 0 [-Werror=undef]
Fixes: a0dede62a5 ("eal/linux: remove KNI restriction on IOVA")
Cc: stable@dpdk.org
Signed-off-by: Ali Alnubani <alialnu@mellanox.com>
Introduce an API which dumps the device's internal representation
of rte flows in hardware.
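A minimal sketch, assuming the signature introduced here (a port-wide
dump to a stdio stream):

    #include <stdio.h>
    #include <rte_flow.h>

    static void
    dump_flows(uint16_t port_id)
    {
        struct rte_flow_error err;

        if (rte_flow_dev_dump(port_id, stdout, &err) != 0)
            fprintf(stderr, "flow dump failed: %s\n",
                err.message != NULL ? err.message : "(no message)");
    }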
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Avoid overwriting device flags and other information in the device
data stored in shared memory when a secondary process
probes a PCI device.
Fixes: 494adb7f63 ("ethdev: add device fields from PCI layer")
Cc: stable@dpdk.org
Signed-off-by: Fang TongHao <fangtonghao@sangfor.com.cn>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Currently, there is a potential problem: the content of
dev->data->dev_conf.rxmode.offloads can be changed even when there is
no vlan_offload_set driver callback.
It is better to prevent this side effect and make the API return
success if no change is requested. This patch fixes the problem; the
details are as follows (see the sketch after this list):
- keep the possibility to do a dummy set even if there is no driver
  callback
- do not touch the Rx mode offloads in device data before checking the
  driver callback availability
- ensure that the Rx mode offloads are rolled back correctly if the
  driver callback returns an error
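A self-contained sketch of the fixed order of operations, with
hypothetical types and names: validate first, touch the stored config
only when a callback exists, and roll back if the callback fails:

    #include <errno.h>
    #include <stdint.h>

    struct dev;                                  /* hypothetical device */
    typedef int (*vlan_offload_set_t)(struct dev *d, int mask);

    static int
    set_vlan_offloads(struct dev *d, uint64_t *stored, uint64_t requested,
                      vlan_offload_set_t cb, int mask)
    {
        uint64_t orig = *stored;
        int ret;

        if (requested == orig)
            return 0;           /* dummy set: report success */
        if (cb == NULL)
            return -ENOTSUP;    /* stored config left untouched */
        *stored = requested;
        ret = cb(d, mask);
        if (ret != 0)
            *stored = orig;     /* roll back on driver error */
        return ret;
    }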
Fixes: 81f9db8ecc ("ethdev: add vlan offload support")
Cc: stable@dpdk.org
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Signed-off-by: Min Wang (Jushui) <wangmin3@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
The maximum number of unique switch domains is supposed
to be equal to RTE_MAX_ETHPORTS, but the current implementation
allows allocating only RTE_MAX_ETHPORTS-1 domains.
The definition of RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID is
changed from 0 to UINT16_MAX, and rte_eth_dev_info_get is
updated to initialize the dev_info structure accordingly.
Fixes: ce92504063 ("ethdev: add switch domain allocator")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
If the vhost-user application (e.g. OVS) deletes the vhost-user
port while QEMU sends a vhost-user request, a deadlock can
happen if the request handler tries to acquire vhost-user's
global mutex, which is also locked by the vhost-user port
deletion API (rte_vhost_driver_unregister).
This patch prevents the deadlock by making
rte_vhost_driver_unregister() release the mutex and try
again if a request is being handled, giving the request handler
a chance to complete.
Fixes: 8b4b949144 ("vhost: fix dead lock on closing in server mode")
Fixes: 5fbb3941da ("vhost: introduce driver features related APIs")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Acked-by: Eelco Chaudron <echaudro@redhat.com>
This message is used to notify QEMU that it should get the backend's
configuration. For example, vhost-user-blk uses this message to notify
the guest OS that the capacity of the backend has changed.
The need_reply flag is not mandatory, because it would block the sender
thread; the master process will send a get_config message to fetch the
configuration, which would need an extra thread to process the vhost
message.
Signed-off-by: Li Feng <fengli@smartx.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch adds the new flow item RTE_FLOW_ITEM_TYPE_L2TPV3OIP to the
flow API to match an L2TPv3 over IP header. Only the L2TPv3 over IP
header format is supported; it differs from L2TPv2/L2TPv3 over UDP,
and the difference in header formats requires a separate implementation
for each.
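A minimal sketch matching an L2TPv3-over-IP session by ID, assuming
the item and its default mask as introduced by this patch:

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    static struct rte_flow_item_l2tpv3oip l2tp_spec = {
        .session_id = RTE_BE32(1234),
    };
    static const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_L2TPV3OIP,
          .spec = &l2tp_spec, .mask = &rte_flow_item_l2tpv3oip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };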
Signed-off-by: Rory Sexton <rory.sexton@intel.com>
Signed-off-by: Dariusz Jagus <dariuszx.jagus@intel.com>
Acked-by: Ori Kam <orika@mellanox.com>
The function was checking -1 against the callback data instead of
the given cb_arg parameter.
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Ricardo Roldan <rroldan@bequant.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
For some overlay networks, such as VXLAN, the DSCP field in the new
outer IP header after VXLAN decapsulation may need to be updated
accordingly. This commit introduces the DSCP modify action for IPv4
and IPv6.
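A minimal sketch rewriting the outer IPv4 DSCP after decapsulation,
assuming the set_dscp action conf structure introduced by this commit:

    #include <rte_flow.h>

    static const struct rte_flow_action_set_dscp set_dscp = {
        .dscp = 0x30,   /* new DSCP value for the outer header */
    };
    static const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
        { .type = RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP,
          .conf = &set_dscp },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };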
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ori Kam <orika@mellanox.com>