Allow binding the KNI thread to a specific core in single-threaded mode
by setting the core_id and force_bind config parameters.
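A minimal sketch of how an application might use the new parameters
(assuming the conf fields match the names above; pktmbuf_pool and ops are
set up elsewhere, error handling omitted):

    struct rte_kni_conf conf;

    memset(&conf, 0, sizeof(conf));
    snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth0");
    conf.mbuf_size = 2048;
    conf.core_id = 4;       /* core to bind the KNI kernel thread to */
    conf.force_bind = 1;    /* actually pin the thread to core_id */

    kni = rte_kni_alloc(pktmbuf_pool, &conf, &ops);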
Signed-off-by: Vladyslav Buslov <vladyslav.buslov@harmonicinc.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Since a bug fix for LPM tbl8 recycling has been introduced,
add a test case to verify that a tbl8 group is correctly
freed when it only includes a rule with depth = 24.
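A rough sketch of what such a test case could look like (the IP constants
and pool sizes are illustrative, not the actual test values):

    struct rte_lpm_config config = {
        .max_rules = 256, .number_tbl8s = 256, .flags = 0,
    };
    struct rte_lpm *lpm = rte_lpm_create("tbl8_recycle", SOCKET_ID_ANY, &config);
    uint32_t nh;

    rte_lpm_add(lpm, 0xC0A80100, 24, 100);  /* 192.168.1.0/24 -> tbl24 */
    rte_lpm_add(lpm, 0xC0A80105, 32, 200);  /* /32 rule allocates a tbl8 group */
    rte_lpm_delete(lpm, 0xC0A80105, 32);    /* only the depth-24 rule remains */
    /* the tbl8 group should now be recycled; lookups still hit the /24 rule */
    if (rte_lpm_lookup(lpm, 0xC0A80105, &nh) != 0 || nh != 100)
        return -1;
    rte_lpm_free(lpm);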
Signed-off-by: Wei Dai <wei.dai@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
When a rule with depth > 24 is added on top of an existing
rule with depth <= 24, a new tbl8 is allocated and the existing
rule first fills the whole new tbl8. So the valid field of
each entry in this tbl8 is always true and the depth of each
entry is always <= 24 before the new rule with depth > 24 is added.
Signed-off-by: Wei Dai <wei.dai@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
When all rules with depth > 24 are deleted in the same sub-table
(tbl8 group) and only a rule with depth <= 24 is left in it,
this sub-table (tbl8 group) should be recycled.
Fixes: dc81ebbaca ("lpm: extend IPv4 next hop field")
Fixes: af75078fec ("first public release")
Signed-off-by: Wei Dai <wei.dai@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Before this patch, application-specific loggers could not be
installed before rte_eal_init completed (the initialization process
called rte_openlog_stream, overwriting any previously installed
logger). This made it impossible for an application to capture the
initial log messages generated during rte_eal_init. This patch changes
initialization so that information from a previous call to
rte_openlog_stream is not lost. Specifically:
* The default log stream is now maintained separately from an
application-specific log stream installed with rte_openlog_stream.
* rte_eal_common_log_init has been renamed to eal_log_set_default,
since this is all it does. It no longer invokes rte_openlog_stream; it
just updates the default stream. Also, this method now returns void,
rather than int, since there are no errors.
This patch also removes the "early log" mechanism and cleans up the
log initialization mechanism:
* The default log stream defaults to stderr on all platforms if
eal_log_set_default hasn't been invoked (Linux used to use stdout
during the first part of initialization).
* Removed rte_eal_log_early_init; all of the desired functionality can
be achieved by calling eal_log_set_default.
* Removed lib/librte_eal/bsdapp/eal/eal_log.c: it contained only one
function, rte_eal_log_init, which is not needed or invoked for BSD.
* Removed declaration for eal_default_log_stream in rte_log.h (it's now
private to eal_common_log.c).
* Moved call to rte_eal_log_init earlier in rte_eal_init for Linux, so
that it starts using the preferred log ASAP.
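For example, an application can now install its own stream and still see
the messages emitted during initialization (a sketch; the log file path is
illustrative):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_log.h>

    int main(int argc, char **argv)
    {
        FILE *logf = fopen("/tmp/app.log", "w");

        if (logf != NULL)
            rte_openlog_stream(logf);
        /* messages emitted during rte_eal_init() now reach the app stream */
        if (rte_eal_init(argc, argv) < 0)
            return -1;
        return 0;
    }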
Signed-off-by: John Ousterhout <ouster@cs.stanford.edu>
A previous patch updated the functions without updating all the comments.
Fixes: 591a9d7985 ("add FILE argument to debug functions")
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
Acked-by: John McNamara <john.mcnamara@intel.com>
In the csumonly engine, display the LRO segment size if the
LRO flag is set.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
When receiving coalesced packets in virtio, the original size of the
segments is provided. This is useful information because it allows
resegmenting with the same size.
Add a new Rx flag in mbuf that can be set when packets are coalesced by
a hardware or virtual driver; in that case, the m->tso_segsz field is
valid and is set to the segment size of the original packets.
This flag is used in the next commits in the virtio pmd.
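A sketch of how a receiver could use the flag (assuming the flag name
PKT_RX_LRO matches the one introduced here):

    if (m->ol_flags & PKT_RX_LRO) {
        /* coalesced packet: the original segment size is available */
        uint16_t orig_segsz = m->tso_segsz;
        /* the application may re-segment the payload using orig_segsz */
    }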
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Following discussions in [1] and [2], introduce a new bit to
describe the Rx checksum status in mbuf.
Before this patch, only one flag was available:
PKT_RX_L4_CKSUM_BAD: L4 cksum of RX pkt. is not OK.
And same for L3:
PKT_RX_IP_CKSUM_BAD: IP cksum of RX pkt. is not OK.
This had 2 issues:
- it was not possible to differentiate "checksum good" from
"checksum unknown".
- it was not possible for a virtual driver to say "the checksum
in the packet may be wrong, but data integrity is valid".
This patch tries to solve these issues by having 4 states (2 bits)
for the IP and L4 Rx checksums. New values are:
- PKT_RX_L4_CKSUM_UNKNOWN: no information about the RX L4 checksum
-> the application should verify the checksum by sw
- PKT_RX_L4_CKSUM_BAD: the L4 checksum in the packet is wrong
-> the application can drop the packet without additional check
- PKT_RX_L4_CKSUM_GOOD: the L4 checksum in the packet is valid
-> the application can accept the packet without verifying the
checksum by sw
- PKT_RX_L4_CKSUM_NONE: the L4 checksum is not correct in the packet
data, but the integrity of the L4 data is verified.
-> the application can process the packet but must not verify the
checksum by sw. It has to take care to recalculate the cksum
if the packet is transmitted (either by sw or using tx offload)
And same for L3 (replace L4 by IP in description above).
This commit tries to be compatible with existing applications that
only check the existing flag (CKSUM_BAD).
[1] http://dpdk.org/ml/archives/dev/2016-May/039920.html
[2] http://dpdk.org/ml/archives/dev/2016-June/040007.html
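Based on the new states, an application receive path could look roughly
like this (a sketch; it assumes a PKT_RX_L4_CKSUM_MASK macro covering the
two bits, and verify_l4_cksum_sw() is a hypothetical helper):

    switch (m->ol_flags & PKT_RX_L4_CKSUM_MASK) {
    case PKT_RX_L4_CKSUM_GOOD:
        break;                      /* accept without sw verification */
    case PKT_RX_L4_CKSUM_BAD:
        rte_pktmbuf_free(m);        /* known bad: drop */
        return;
    case PKT_RX_L4_CKSUM_NONE:
        /* data is valid but the cksum field is not: recompute before Tx */
        break;
    case PKT_RX_L4_CKSUM_UNKNOWN:
    default:
        verify_l4_cksum_sw(m);      /* hypothetical sw check */
        break;
    }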
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This function can be used to calculate the checksum of data embedded in
an mbuf, which can be composed of several segments.
This function will be used by the virtio pmd in the next commits to
calculate the checksum in software in case the protocol is not recognized.
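A hedged usage sketch, assuming the new helper takes an mbuf, an offset, a
length and an output checksum pointer:

    uint16_t cksum;
    uint32_t off = m->l2_len + m->l3_len;          /* start of the L4 data */
    uint32_t len = rte_pktmbuf_pkt_len(m) - off;

    if (rte_raw_cksum_mbuf(m, off, len, &cksum) < 0)
        return -1;   /* off/len outside of the mbuf chain */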
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add the ability to reset the virtio device in the configure callback
if the feature flags changed since the previous reset. This will become
possible with the introduction of offload support in the next commits.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Move the configuration of the control queue into the configure callback.
This is needed by the next commit, which introduces the reinitialization
of the device in the configure callback to change the feature flags.
Therefore, the control queue will have to be restarted at the same
place.
As virtio_dev_cq_queue_setup() is called from a place where
config->max_virtqueue_pairs is not available, we need to store this in
the private structure. It replaces max_rx_queues and max_tx_queues which
have the same value. The log showing the value of max_rx_queues and
max_tx_queues is also removed since config->max_virtqueue_pairs is
already displayed above.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Move all code related to device initialization into a new function,
virtio_init_device().
This commit brings no functional change; it prepares for the next commits
that will add offload support. For that, it will be necessary to
reinitialize the device from ethdev->configure(), using this new
function.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch fixes a Windows VM compatibility issue in the DPDK 16.07 vhost
code, which causes the guest to hang once any packets are enqueued when
mrg_rxbuf is turned on. The fix is to set the right id and len in the used ring.
As defined in virtio spec 0.95 and 1.0, in each used ring element, id means
the index of the start of the used descriptor chain, and len means the total
length of the descriptor chain which was written to. In the 16.07 code,
however, the index of the last descriptor is assigned to id, and the length
of the last descriptor is assigned to len.
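In other words, the enqueue path should fill the used ring element roughly
like this (a sketch; vq, used_idx, head_idx and chain_len are illustrative
names, not the actual code):

    struct vring_used_elem *uep = &vq->used->ring[used_idx];

    uep->id  = head_idx;    /* index of the first desc of the chain, not the last */
    uep->len = chain_len;   /* total bytes written across the whole chain */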
How to test?
1. Start testpmd in the host with a vhost port.
2. Start a Windows VM image with qemu and connect to the vhost port.
3. Start io forwarding with tx_first in host testpmd.
With the 16.07 code, the Windows VM will hang once any packets are enqueued.
Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add an option, dequeue-zero-copy, to enable this feature in vhost-pmd.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
Add an option, --dequeue-zero-copy, to enable dequeue zero copy.
One thing worth noting while using dequeue zero copy is that nb_tx_desc
has to be small enough so that the eth driver hits the mbuf free
threshold easily and thus frees mbufs more frequently.
The reason behind that is, when dequeue zero copy is enabled, the guest Tx
used vring is updated only when the corresponding mbuf is freed. If mbufs
are not freed frequently, the guest Tx vring could be starved.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
Dequeue zero copy is disabled by default. Here add a new flag
``RTE_VHOST_USER_DEQUEUE_ZERO_COPY`` to explicitly enable it.
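A minimal sketch of enabling it at registration time (the socket path is
illustrative):

    uint64_t flags = RTE_VHOST_USER_DEQUEUE_ZERO_COPY;

    if (rte_vhost_driver_register("/tmp/vhost.sock", flags) < 0)
        rte_exit(EXIT_FAILURE, "failed to register vhost driver\n");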
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
The basic idea of dequeue zero copy is that, instead of copying data from
the desc buf, we let the mbuf reference the desc buf addr directly.
Doing so, however, has one major issue: we can't update the used ring
at the end of rte_vhost_dequeue_burst. Because we don't do the copy
there, an update of the used ring would let the driver reclaim the
desc buf. As a result, DPDK might reference a stale memory region.
To update the used ring properly, this patch does several tricks:
- when an mbuf references a desc buf, its refcnt is increased by 1.
This pins the mbuf, so that an mbuf free from DPDK won't actually
free it; instead, the refcnt is just decreased by 1 (see the sketch
after this list).
- we chain all those mbufs together (by tailq) and check them on every
rte_vhost_dequeue_burst entrance, to see whether an mbuf has been
freed (its refcnt equals 1). If that happens, it means we are the
last user of this mbuf and it is safe to update the used ring.
- "struct zcopy_mbuf" is introduced to associate an mbuf with the
right desc idx.
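A minimal sketch of the refcnt-pinning idea using only public mbuf APIs
(this is not the actual vhost code):

    rte_mbuf_refcnt_update(m, 1);   /* pin: refcnt goes from 1 to 2 */
    /* ... mbuf handed out by rte_vhost_dequeue_burst() ... */
    rte_pktmbuf_free(m);            /* caller's "free" only drops refcnt to 1 */

    /* on a later rte_vhost_dequeue_burst() entrance */
    if (rte_mbuf_refcnt_read(m) == 1) {
        /* we are the last user: update the used ring, then really free it */
        rte_pktmbuf_free(m);
    }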
Dequeue zero copy is introduced for performance reasons, and some rough
tests show about a 50% performance boost for 1500B packets. For small
packets (e.g. 64B), it actually slows things down a bit (by up to
15%). That is expected because this patch introduces some extra work,
which outweighs the benefit of saving the copy of a few bytes.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
So far, we retrieve both the used ring and avail ring idx with the var
last_used_idx; this is not a problem because the used ring is updated
immediately after those avail entries are consumed.
But that's not true when dequeue zero copy is enabled, where the used ring
is updated only when the mbuf is consumed. Thus, we need to use another
variable to note the last avail ring idx we have consumed.
Therefore, last_avail_idx is introduced.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
So that we can convert a guest physical address to a host physical
address, which will be used in the later Tx zero copy implementation.
MAP_POPULATE is set while mmapping guest memory regions, to make
sure the page tables are set up and rte_mem_virt2phy() can then
yield the proper physical address.
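A sketch of the mapping call (region_fd, offset and region_size are
illustrative names):

    #include <sys/mman.h>

    void *addr = mmap(NULL, region_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_POPULATE, region_fd, offset);
    if (addr == MAP_FAILED)
        return -1;
    /* pages are populated, so this returns a proper physical address */
    phys_addr_t pa = rte_mem_virt2phy(addr);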
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
For historical reasons (vhost-cuse came before vhost-user), some
fields for maintaining the vhost-user memory mappings (such as the mmapped
address and size, with which we can then unmap on destroy) are kept in
the "orig_region_map" struct, a structure that is defined only in the
vhost-user source file.
The right way to go is to remove that structure and move all those fields
into the virtio_memory_region struct. But we simply couldn't do that
before, because it would break the ABI.
Now, thanks to the ABI refactoring, it is no longer a blocking issue.
And here it goes: this patch removes orig_region_map and
redefines virtio_memory_region to include all necessary info.
With that, we can simplify the guest/host address conversion a bit.
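With everything in one struct, the conversion can be a simple walk over
the regions; a sketch with assumed struct/field names (based only on the
description above, not the exact code):

    static uint64_t
    gpa_to_host_va(struct virtio_memory *mem, uint64_t gpa)
    {
        struct virtio_memory_region *r;
        uint32_t i;

        for (i = 0; i < mem->nregions; i++) {
            r = &mem->regions[i];
            if (gpa >= r->guest_phys_addr &&
                gpa <  r->guest_phys_addr + r->size)
                return gpa - r->guest_phys_addr + r->host_user_addr;
        }
        return 0;
    }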
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
Negotiate VIRTIO_F_IOMMU_PLATFORM to have IOMMU support.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add modern device id and rename VIRTIO_PCI_DEVICEID_MIN to
VIRTIO_PCI_LEGACY_DEVICEID_NET. While at it, remove unused macros too.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The driver name has been lost with the eal rework.
Restore it.
Fixes: c830cb2954 ("drivers: use PCI registration macro")
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Virtio interfaces do not currently allow the user to specify a particular
Maximum Transmission Unit (MTU). Consequently, the MTU of Virtio interfaces
is typically set to the Ethernet default value of 1500.
This is problematic in the case of cloud deployments, in which a specific
(and potentially non-standard) MTU needs to be set by a DHCP server and
honored by all interfaces across the traffic path. To achieve this,
Virtio interfaces should support setting the MTU.
When GRE/VXLAN tunneling is used for internal communication, the
infrastructure adds overhead to the packet over and above the Ethernet
MTU of 1518. To account for this overhead in these cases, the DHCP server
corrects the L3 MTU to 1454. But since virtio interfaces did not have the
MTU set functionality, the MTU sent by the DHCP server was ignored and the
instance would still send packets with a 1500 MTU, which after
encapsulation exceeds 1518 and eventually gets dropped in the
infrastructure.
By adding an additional 'set_mtu' function to the Virtio driver, we can
honor the MTU sent by the DHCP server. The DHCP server/controller can
then leverage this 'set_mtu' functionality to resolve the above-mentioned
issue of packets getting dropped due to incorrect size.
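On the DPDK side, the controller/agent can then apply the corrected value
through the standard ethdev API (a sketch, using the 1454 value from the
example above; port_id is assumed to be set up elsewhere):

    uint16_t mtu = 1454;

    if (rte_eth_dev_set_mtu(port_id, mtu) != 0)
        printf("port %u: failed to set MTU %u\n", port_id, mtu);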
Signed-off-by: Souvik Dey <sodey@sonusnet.com>
Reviewed-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
- Fix a copy/paste error in the description of how to capture both Rx
and Tx traffic in a single pcap file.
- Replace a duplicated word with what the original author presumably
intended, so that the description now makes sense.
Signed-off-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Explain the default testpmd behavior in MAC forwarding mode to remove
ambiguity/confusion regarding the user's ability to specify Ethernet
addresses.
Signed-off-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Update the testpmd user guide with instructions for retrieving extended
NIC statistics.
Signed-off-by: Maryam Tahhan <maryam.tahhan@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Support for configuring 25G and 50G speeds is missing from testpmd; add it.
This patch also updates the testpmd user guide accordingly.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
"flowgen" forwarding mode has fixed packet size (300).
Let it re-use --txpkts option for specifying generated packet size.
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
An issue was found where DCB cannot be configured on ixgbe
NICs; the reported error is that the Tx queue number is not right.
On ixgbe, the max Tx queue number is not fixed; it depends
on the multi-queue mode.
This patch adds the device configuration before getting the
info in the DCB configuration process, so that the right info
can be obtained depending on the configuration.
Fixes: 1a572499 ("app/testpmd: setup DCB forwarding based on traffic class")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
The RSS hash key size is retrieved from the device configuration instead of
using a fixed size of 40 bytes.
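Something like the following is used instead of the hard-coded 40 (a
sketch; it assumes the device-reported key size is exposed as a
hash_key_size field in dev_info):

    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    uint8_t key_len = dev_info.hash_key_size;   /* device-reported RSS key size */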
Fixes: f79959ea15 ("app/testpmd: allow to configure RSS hash key")
Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The commit that disabled TSO for small packets was broken during the
rebase. The problem is that the IP checksum is not calculated in software if:
- Tx IP checksum is disabled
- TSO is enabled
- the current packet is smaller than the TSO segment size
When checking whether the PKT_TX_IP_CKSUM flag should be set (in the TSO
case), use the local tso_segsz variable, which is set to 0 when the
packet is too small to require TSO. This way the IP checksum will be
correctly calculated in software.
Moreover, we should not use the tunnel segment size for non-tunnel TSO,
else TSO stays disabled for all packets.
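The intended logic is roughly the following (a sketch; variable names are
illustrative, not the exact csumonly code):

    if (tx_offload_ip_cksum)
        ol_flags |= PKT_TX_IP_CKSUM;          /* offloaded to the NIC */
    else if (tso_segsz)                       /* 0 when the packet is too small */
        ol_flags |= PKT_TX_IP_CKSUM;          /* required when TSO is used */
    else
        ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);  /* sw checksum */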
Fixes: 97c21329d4 ("app/testpmd: do not use TSO for small packets")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The vdev eth_bond has been renamed to net_bond.
testpmd was creating a bonding device with the old prefix;
it is changed for consistency.
The script test-null.sh was failing because it used the old name
for the null vdev.
Also fixes the bonding and testpmd docs.
Fixes: 2f45703c17 ("drivers: make driver names consistent")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The vdev eth_ring has been renamed to net_ring.
Some unit tests were using the old name and failed.
Also fixes the vdev comments in EAL and ethdev.
Fixes: 2f45703c17 ("drivers: make driver names consistent")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The mempool function rte_mempool_walk() was not tested.
The new test prints the names of all mempools.
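The test is essentially a walk callback that prints each pool, along the
lines of (a sketch):

    static void
    walk_cb(struct rte_mempool *mp, void *arg __rte_unused)
    {
        printf("mempool: %s\n", mp->name);
    }

    /* in the test body */
    rte_mempool_walk(walk_cb, NULL);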
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Copy app/test/test_lpm6_routes.h to app/test/test_lpm6_data.h,
then delete app/test/test_lpm6_routes.h and clear
large_ips_table[] to make the LPM6 test case size much smaller than
before. Also add code in app/test/test_lpm6_data.h to generate the test
data in large_ips_table[] at run time.
Signed-off-by: Wei Dai <wei.dai@intel.com>
Remove the large file app/test/test_lpm_routes.h and add code to
auto-generate a similarly large route rule table that keeps the same depth
and IP class distribution as the previous one in test_lpm_routes.h.
With the rule table auto-generated at run time, the lookup performance
stays similar to that with the previous constant table.
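One way to generate such a table at run time while preserving the depth
distribution (a sketch; pick_depth() and the table fields are illustrative
names, not the actual test code):

    for (i = 0; i < num_routes; i++) {
        uint8_t depth = pick_depth(i);   /* follows the old distribution */
        uint32_t mask = (depth == 0) ? 0 : (uint32_t)(~0u) << (32 - depth);

        large_route_table[i].ip = (uint32_t)rte_rand() & mask;
        large_route_table[i].depth = depth;
    }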
Signed-off-by: Wei Dai <wei.dai@intel.com>