The current ixgbe VF base driver only really reads the status register when:
- get_link_status is true
- link reset
- mailbox timeout.
We only set get_link_status to true when we start the PF/VF, so
following calls to ixgbe_dev_link_update will just keep the old link
status unless the link has been reset.
Because of this behaviour, when the link status of the PF changes after
the VF has been initialized, we do not read the current status register
from the NIC and instead just keep reporting the old link status.
Fix the problem by setting this field to true before calling the
ixgbe_check_link function from the base driver. We no longer need to
check get_link_status after this call, so remove that check.
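A minimal sketch of the resulting flow, with PMD details trimmed (the
translation of the raw status into dev->data->dev_link is elided):

    /* Hedged sketch of the fixed VF link update. */
    static int
    ixgbevf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
    {
        struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
        ixgbe_link_speed speed;
        bool link_up;

        /* Force ixgbe_check_link() to read the status register
         * instead of returning the stale cached status. */
        hw->mac.get_link_status = true;

        ixgbe_check_link(hw, &speed, &link_up, wait_to_complete);
        /* ...translate (speed, link_up) into dev->data->dev_link... */
        return 0;
    }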
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
The logic to select the ixgbe VF RX function is different from the PF side.
There are a few issues with its current state:
- it does not allow selecting ixgbe_recv_pkts_vec among the other options.
- it can cause memory corruption in scatter mode, as it does not allocate
enough entries in sw_ring.
- when checksum is enabled, an incorrect vector RX function is selected.
To solve the above issues, change the VF RX function selection logic to
mimic the PF side.
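A condensed sketch of that PF-style selection order; rx_vec_ok and
rx_bulk_ok stand in for the real capability checks (vector support,
offloads, bulk-alloc preconditions), and the mapping is simplified:

    /* Hedged sketch: scatter first, then vector, then bulk-alloc. */
    if (dev->data->scattered_rx)
        dev->rx_pkt_burst = rx_vec_ok ?
            ixgbe_recv_scattered_pkts_vec :   /* properly sized sw_ring */
            ixgbe_recv_pkts_lro_single_alloc;
    else if (rx_vec_ok)
        dev->rx_pkt_burst = ixgbe_recv_pkts_vec;  /* now selectable */
    else if (rx_bulk_ok)
        dev->rx_pkt_burst = ixgbe_recv_pkts_bulk_alloc;
    else
        dev->rx_pkt_burst = ixgbe_recv_pkts;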
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The flexbytes offset cannot be set, because the value is overwritten
when fdir is enabled.
This patch fixes this issue, and also removes some duplicated lines.
Fixes: d54a9888267c ("ixgbe: support flexpayload configuration of flow director")
Reported-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Each object stored in a mempool is suffixed by a trailer, which in debug
mode stores a cookie that helps to detect memory corruptions.
As for headers, introduce a structure that materializes the content of
this trailer.
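In debug builds the trailer reduces to roughly this structure (a
sketch following the description above):

    #include <stdint.h>

    /* Hedged sketch of the mempool object trailer. */
    struct rte_mempool_objtlr {
    #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
        uint64_t cookie;   /* checked to detect memory corruption */
    #endif
    };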
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Each object stored in a mempool is prefixed by a header, allowing for
instance to retrieve the mempool pointer from the object. When debug is
enabled, a cookie is also added to this header, which helps to detect
corruptions and double-frees.
Introduce a structure that materializes the content of this header;
it will simplify future patches that add fields to this header.
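Sketched, the header looks like this (field layout per the description
above):

    #include <stdint.h>

    struct rte_mempool;

    /* Hedged sketch of the mempool object header. */
    struct rte_mempool_objhdr {
        struct rte_mempool *mp;  /* lets an object find its mempool */
    #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
        uint64_t cookie;         /* detects corruption/double-frees */
    #endif
    };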
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
This patch adds a new auto-test for testing the scaling
of concurrent inserts into rte_hash when protected by
the normal spinlock vs. the spinlock with HTM lock
elision. The test also benchmarks single-threaded
access without any locks.
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch adds methods that use hardware memory transactions (HTM) on
fast-path for rwlock (a.k.a. lock elision). Here the methods are implemented
for x86 using Restricted Transactional Memory instructions (Intel(r)
Transactional Synchronization Extensions). The implementation falls back to
the normal rwlock if HTM is not available or the memory transaction fails. This is
not a replacement for all rwlock usages since not all critical sections
protected by locks are friendly to HTM. For example, an attempt to perform
a HW I/O operation inside a hardware memory transaction always aborts
the transaction since the CPU is not able to roll-back should the transaction
fail. Therefore, hardware transactional locks are not advised to be used around
rte_eth_rx_burst() and rte_eth_tx_burst() calls.
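A minimal sketch of the elision pattern for the read-lock path
(compile with RTM support, e.g. -mrtm; the runtime capability check,
retry policy and writer side are omitted, and the matching unlock must
call _xend() while a transaction is active):

    #include <immintrin.h>
    #include <stdint.h>

    /* Hedged sketch: run the reader section as a HW transaction,
     * falling back to a conventional reader count on abort. */
    static inline void
    rwlock_read_lock_tm(volatile int32_t *cnt)
    {
        if (_xbegin() == _XBEGIN_STARTED) {
            if (*cnt == 0)   /* lock free: the lock word is only read, */
                return;      /* so concurrent readers don't conflict   */
            _xabort(0xff);   /* someone really holds the lock */
        }
        /* Fallback: take the reader lock for real. */
        while (1) {
            int32_t x = *cnt;
            if (x >= 0 && __sync_bool_compare_and_swap(cnt, x, x + 1))
                return;
        }
    }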
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch adds methods that use hardware memory transactions (HTM) on fast-path
for spinlocks (a.k.a. lock elision). Here the methods are implemented for x86
using Restricted Transactional Memory instructions (Intel(r) Transactional
Synchronization Extensions). The implementation falls back to the normal
spinlock if HTM is not available or the memory transaction fails. This is not
a replacement for all spinlock usages since not all critical sections protected
by spinlocks are friendly to HTM. For example, an attempt to perform a HW I/O
operation inside a hardware memory transaction always aborts the transaction
since the CPU is not able to roll-back should the transaction fail.
Therefore, hardware transactional locks are not advised to be used around
rte_eth_rx_burst() and rte_eth_tx_burst() calls.
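The spinlock variant follows the same shape; here with the matching
unlock so the commit point is visible (again a hedged sketch, with the
capability check and retry policy omitted):

    #include <immintrin.h>

    static inline void
    spinlock_lock_tm(volatile int *locked)
    {
        if (_xbegin() == _XBEGIN_STARTED) {
            if (*locked == 0)
                return;      /* elided: section runs transactionally */
            _xabort(0xff);   /* lock really held: abort, fall back */
        }
        while (__sync_lock_test_and_set(locked, 1))
            while (*locked)
                ;            /* conventional spin */
    }

    static inline void
    spinlock_unlock_tm(volatile int *locked)
    {
        if (_xtest())
            _xend();         /* commit the elided section */
        else
            __sync_lock_release(locked);
    }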
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Some test rules had equal priority for the same category.
That can cause ambiguity in the trie build and in test results.
Specify a different priority value for each rule within the same category.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Introduce a new RTE_ACL_MASKLEN_TO_BITMASK macro that will be used
in several places inside librte_acl and its unit tests.
Simplify and clean up the build_trie() code a bit.
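For illustration, a masklen-to-bitmask conversion of the kind the
macro centralizes looks roughly like this (a sketch, not the exact
librte_acl definition):

    #include <limits.h>  /* CHAR_BIT */

    /* Hedged sketch: turn prefix length v into a bitmask for a field
     * of s bytes, e.g. (24, sizeof(uint32_t)) -> 0xffffff00. */
    #define MASKLEN_TO_BITMASK(v, s) \
        ((v) == 0 ? 0 : \
        (typeof(v))((uint64_t)-1 << ((s) * CHAR_BIT - (v))))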
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When rebuilding a trie for a limited rule-set,
don't try to split the rule-set even further.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Move the check for the build config parameter into a separate function.
Simplify the acl_calc_wildness() function.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When the ring_writer_nodrop port fails to send data, it tries to resend.
The operation is aborted when the maximum number of retries is reached.
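A minimal sketch of that resend loop (port layout and the retry bound
are illustrative):

    #include <rte_mbuf.h>
    #include <rte_ring.h>

    /* Hedged sketch: re-enqueue the unsent tail until everything is
     * sent or the retry budget runs out. */
    static void
    ring_send_nodrop(struct rte_ring *ring, struct rte_mbuf **pkts,
            uint32_t n_pkts, uint32_t n_retries_max)
    {
        uint32_t n = 0, retries = 0;

        while (n < n_pkts) {
            n += rte_ring_sp_enqueue_burst(ring,
                (void **)&pkts[n], n_pkts - n);
            if (n < n_pkts && ++retries >= n_retries_max)
                break;  /* abort: drop the remaining packets */
        }
    }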
Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
When the ethdev_writer_nodrop port fails to send data, it tries to resend.
The operation is aborted when the maximum number of retries is reached.
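The same pattern on an ethdev port (again a hedged sketch; names
illustrative):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    ethdev_send_nodrop(uint8_t port_id, uint16_t queue_id,
            struct rte_mbuf **pkts, uint16_t n_pkts, uint32_t n_retries_max)
    {
        uint16_t sent = 0;
        uint32_t retries = 0;

        while (sent < n_pkts) {
            sent += rte_eth_tx_burst(port_id, queue_id,
                &pkts[sent], n_pkts - sent);
            if (sent < n_pkts && ++retries >= n_retries_max)
                break;  /* abort: remaining packets are dropped */
        }
    }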
Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
The new implementation sends the burst without copying data to an internal
buffer when possible. It is similar to the tx_bulk function in the
ethdev_writer port.
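A fragment sketching the fast path (bsz_mask marks a full burst; the
names and the drop handling are illustrative):

    /* Hedged sketch: with an empty internal buffer and a full burst,
     * enqueue straight from the caller's array. */
    if (p->tx_buf_count == 0 && pkts_mask == p->bsz_mask) {
        uint32_t n_ok = rte_ring_sp_enqueue_burst(p->ring,
            (void **)pkts, n_pkts);
        RTE_SET_USED(n_ok);  /* unsent packets go to the drop path */
    } else {
        /* slow path: copy into p->tx_buf and flush when full */
    }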
Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
There were two implementations of the tx_bulk function in the ethdev_writer
port; the one used is chosen with the WRITER_APPROACH define. This patch
removes the WRITER_APPROACH = 0 implementation, as it seems to be slower.
Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
This patch fixes doxygen warnings when generating the documentation
for qos_meter and qos_sched.
Signed-off-by: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Remove these unnecessary vring descriptor length updates; vhost should
not change them.
The virtio front end should assign the value to desc.len for both RX and TX.
Test report: http://dpdk.org/ml/archives/dev/2015-June/018610.html
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Extract code into a new function, update_secure_len, which is used to
accumulate the buffer length from the vring descriptors and to fill
struct buf_vec.
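The extracted helper looks roughly like this (a hedged sketch
following the description; the exact signature may differ):

    /* Walk one descriptor chain, accumulating its total length and
     * recording each buffer in vq->buf_vec. */
    static inline void
    update_secure_len(struct vhost_virtqueue *vq, uint32_t idx,
            uint32_t *secure_len, uint32_t *vec_idx)
    {
        uint32_t len = *secure_len;
        uint32_t vec_id = *vec_idx;
        uint8_t next_desc;

        do {
            next_desc = 0;
            len += vq->desc[idx].len;
            vq->buf_vec[vec_id].buf_addr = vq->desc[idx].addr;
            vq->buf_vec[vec_id].buf_len = vq->desc[idx].len;
            vq->buf_vec[vec_id].desc_idx = idx;
            vec_id++;

            if (vq->desc[idx].flags & VRING_DESC_F_NEXT) {
                idx = vq->desc[idx].next;
                next_desc = 1;
            }
        } while (next_desc);

        *secure_len = len;
        *vec_idx = vec_id;
    }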
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Vring enqueue needs to consider 2 cases:
1. separate descriptors contain the virtio header and the actual data,
e.g. the first descriptor is for the virtio header and is followed
by descriptors for the actual data.
2. the virtio header and some data are put together in one descriptor,
e.g. the first descriptor contains both the virtio header and part of the
actual data, followed by more descriptors for the rest of the packet
data; the current DPDK-based virtio-net PMD implementation is this case.
The same applies to vring dequeue: it should not assume a vring descriptor
is chained or not chained, but should use desc->flags to check whether it
is chained. This patch also fixes a TX corruption issue seen when vhost
co-works with a virtio-net driver that by default uses a single vring
descriptor (header and data in one descriptor) for the virtio TX process.
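In code, the dequeue side branches on the flag rather than on an
assumed layout (a simplified fragment):

    /* Hedged sketch: follow the chain only if the descriptor says
     * it continues; a lone descriptor may carry header + data. */
    struct vring_desc *desc = &vq->desc[desc_idx];

    if (desc->flags & VRING_DESC_F_NEXT) {
        /* case 1: header-only descriptor, payload follows */
        desc = &vq->desc[desc->next];
        /* ...copy desc->len bytes, keep following desc->next
         * while VRING_DESC_F_NEXT stays set... */
    } else {
        /* case 2 (DPDK virtio-net PMD default): header and data
         * share this descriptor; skip the virtio header and copy
         * the remaining bytes. */
    }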
Test report: http://dpdk.org/ml/archives/dev/2015-June/018610.html
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
In containers like Docker, current->pid returns the current process's
global PID instead of its PID under the container's PID namespace, while
get_net_ns_by_pid() is supposed to accept a virtual PID under its own
namespace. So we should use task_pid_vnr(current) to get the current
process's virtual PID instead of current->pid.
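The change amounts to one call site:

    #include <linux/sched.h>

    /* Before: global PID, wrong inside a PID namespace. */
    net = get_net_ns_by_pid(current->pid);

    /* After: the PID as seen from the caller's own namespace. */
    net = get_net_ns_by_pid(task_pid_vnr(current));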
Signed-off-by: Wenfeng Liu <liuwf@arraynetworks.com.cn>
Acked-by: Helin Zhang <helin.zhang@intel.com>
We did some (very basic) tests with IGMP, which involved adding
multicast addresses to ETH interfaces. This is done via the ip tool;
an example can be found at e.g.
http://superuser.com/questions/324824/linux-built-in-or-open-source-program-to-join-multicast-group
and it fails on KNI interfaces because of the unimplemented ioctl
SIOCADDMULTI. The patch simply adds an empty callback for set_rx_mode
(typically used for setting up hardware) so that the ioctl succeeds.
This is the same thing the Linux tap interface does.
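A sketch of the callback; an empty body suffices because KNI has no
hardware filter table to program (it is then wired up as
.ndo_set_rx_mode in the driver's net_device_ops):

    #include <linux/netdevice.h>

    /* No-op rx-mode hook so SIOCADDMULTI and friends succeed. */
    static void
    kni_net_set_rx_mode(struct net_device *dev)
    {
    }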
Signed-off-by: Simon Kagstrom <simon.kagstrom@netinsight.net>
Signed-off-by: Johan Faltstrom <johan.faltstrom@netinsight.net>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The loop processing packets dequeued from rx_q was using the number of
packets requested, not the number it actually received.
Also rename a variable to make the code a little clearer.
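In essence (an illustrative fragment):

    /* Before: iterated over the requested count. */
    for (i = 0; i < num; i++) {
        /* process va[i] ... */
    }

    /* After (hedged sketch): iterate over what was dequeued. */
    num_rx = kni_fifo_get(kni->rx_q, (void **)va, num);
    for (i = 0; i < num_rx; i++) {
        /* process va[i] ... */
    }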
Signed-off-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
No reason to check how many entries are in kni->rx_q prior to actually
pulling them from the fifo: you can't dequeue more than are there anyway.
The max number of entries to dequeue is either the max batch size or
however much space is available on kni->free_q, whichever is less.
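Sketch of the simplified path (fifo helpers from the KNI module; the
clamping constant is illustrative):

    /* Hedged sketch: clamp the request instead of pre-counting rx_q;
     * kni_fifo_get() returns at most what is actually there. */
    num = min_t(uint32_t, kni_fifo_free_count(kni->free_q), MBUF_BURST_SZ);
    num_rx = kni_fifo_get(kni->rx_q, (void **)va, num);
    if (num_rx == 0)
        return;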
Signed-off-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
There is no need for the 'safe' version of list_for_each_entry() if you
are not deleting from the list as you iterate over it.
Signed-off-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Implement .ndo_change_carrier to enable DPDK applications to propagate
link state changes to KNI virtual interfaces through sysfs.
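The callback itself is small (a sketch matching the standard ndo
signature):

    #include <linux/netdevice.h>

    /* Lets the application toggle /sys/class/net/<iface>/carrier. */
    static int
    kni_net_change_carrier(struct net_device *dev, bool new_carrier)
    {
        if (new_carrier)
            netif_carrier_on(dev);
        else
            netif_carrier_off(dev);
        return 0;
    }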
Signed-off-by: Vijayakumar Muthuvel Manickam <mmvijay@gmail.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
When a KNI object is created, a name is assigned to it which is stored
internally. There is also an API function to look up a KNI object by
name, but there is no API to query the current name of an existing
KNI object. This patch adds just such an API.
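The accessor is a one-liner (sketch; the struct internals are
simplified):

    /* Return the name given to the KNI device at creation time. */
    const char *
    rte_kni_get_name(const struct rte_kni *kni)
    {
        return kni->name;
    }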
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Added a new test that verifies that rte_jhash_1word,
rte_jhash_2words and rte_jhash_3words return the same
values as rte_jhash.
Note that this patch has been added after the update
of the jhash function, because these 3 functions did not
return the same values as rte_jhash before.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Changed the name to something more meaningful,
and marked rte_jhash2 as deprecated.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
rte_jhash is basically like __rte_jhash_2hashes, but
it returns only 1 hash instead of 2.
In order to remove duplicated code, rte_jhash calls __rte_jhash_2hashes,
passing 0 as the second seed and returning just the first hash value
(the performance penalty is negligible).
The same is done with rte_jhash2. Also, rte_jhash2 is just a specific case
where keys are a multiple of 32 bits and where no key alignment check is
required. So, to avoid duplicated code, the function calls
__rte_jhash_2hashes with check_align = 0 (to use the optimal path).
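Sketched, the wrappers reduce to the following (simplified; the
argument order of the internal helper may differ):

    /* Hedged sketch: one code path, two thin wrappers. */
    static inline uint32_t
    rte_jhash(const void *key, uint32_t length, uint32_t initval)
    {
        uint32_t pc = initval, pb = 0;    /* second seed fixed to 0 */

        __rte_jhash_2hashes(key, length, &pc, &pb, 1); /* check align */
        return pc;                        /* only the first hash */
    }

    static inline uint32_t
    rte_jhash2(const uint32_t *k, uint32_t length, uint32_t initval)
    {
        uint32_t pc = initval, pb = 0;

        /* length is in 32-bit words: no alignment check needed */
        __rte_jhash_2hashes(k, length * 4, &pc, &pb, 0);
        return pc;
    }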
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
With the jhash update, two new functions were introduced:
- rte_jhash_2hashes: same as rte_jhash, but takes two seeds
and returns two hashes (uint32_ts).
- rte_jhash2_2hashes: same as rte_jhash2, but takes two seeds
and returns two hashes (uint32_ts).
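Usage sketch (seeds go in through the pointers, hashes come back out
the same way):

    uint32_t h1 = seed1, h2 = seed2;  /* in: seeds, out: hashes */

    rte_jhash_2hashes(key, key_len, &h1, &h2);
    /* with seed2 == 0, h1 equals rte_jhash(key, key_len, seed1) */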
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
[Thomas: fix doxygen typos]
The Jenkins hash function was originally developed in 1996
and was integrated in the first versions of DPDK.
The function was improved in 2006,
achieving up to 35% better performance compared to the original one.
This patch integrates that code into the rte_jhash library.
It also updates the precalculated hash values in the unit test,
as the code now returns different values (expected).
A note has been added in the release notes stating
the changes made.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
In order to make sure that the hash functions are returning
the correct values, new tests have been added:
- The first test compares precalculated hash values with values calculated
by the existing hash functions.
- The second test compares the values returned by rte_jhash2 and rte_jhash,
expecting the same return (only for keys that are a multiple of 4 bytes).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
In order to see more clearly the performance difference
between the different hash functions, the order of the loops
has been changed, so it iterates first through the initial values,
then the key sizes and then the hash functions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The key sizes previously used for testing did not have much purpose.
This patch substitutes them with some more meaningful ones
(standard multiple-of-2 key sizes, plus IPv4/v6 tuples and others).
Also, an arbitrary initial value has been added to increase
the test coverage, and the RTE_DIM macro is used to iterate the loops.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Cycles per hash calculation were measured per single operation.
It is much more accurate to run several iterations between measurements
and divide by the number of iterations.
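A sketch of the amortized measurement:

    #include <rte_cycles.h>
    #include <rte_jhash.h>

    /* Hedged sketch: amortize timer overhead over many iterations. */
    static double
    measure_cycles_per_hash(const void *key, uint32_t key_len,
            uint32_t init_val)
    {
        const unsigned int iterations = 1 << 20;
        volatile uint32_t hash = 0;   /* keep calls from being elided */
        uint64_t start = rte_rdtsc();
        unsigned int i;

        for (i = 0; i < iterations; i++)
            hash = rte_jhash(key, key_len, init_val);
        (void)hash;
        return (double)(rte_rdtsc() - start) / iterations;
    }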
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch moves the hash function performance tests to a separate file,
so users can check the performance of the existing hash functions more
quickly, without having to run all the other hash operation performance
tests, which takes some time.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
On x550, the flow director doesn't support other-IP packets directly.
If we want to monitor other-IP packets, the L4 protocol and ports must
be masked. This means that, on x550, if we want to add a flow director
filter for other-IP packets, a flow director mask masking the L4 protocol
and ports must have been configured first.
Return an error when the user tries to configure a flow director filter
for other-IP packets without the flow director mask configured beforehand,
and print an error log for it.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
This patch sets the setup_EEE function pointer to NULL for the
interfaces which do not support EEE (Energy Efficient Ethernet).
Currently only the KR backplane interface (0x15AB) supports EEE.
Setting this pointer to NULL prevents EEE registers from being
incorrectly modified and gives base drivers a flag to check for
EEE support.
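Sketch of the gating (device ID and function names follow the base
driver but should be treated as illustrative):

    /* Hedged sketch: publish the EEE op only on the KR backplane. */
    if (hw->device_id == IXGBE_DEV_ID_X550EM_X_KR)   /* 0x15AB */
        hw->mac.ops.setup_eee = ixgbe_setup_eee_X550;
    else
        hw->mac.ops.setup_eee = NULL;

    /* callers can now test the pointer before use: */
    if (hw->mac.ops.setup_eee)
        hw->mac.ops.setup_eee(hw, true);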
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds the x550em PHY reset function ixgbe_reset_phy_t_X550em.
ixgbe_reset_phy_t_X550em calls the generic PHY reset, and then enables
the x550em PHY LASI (Link Alarm Status Interrupt) interrupts.
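Sketch of the new function (structure per the description above; the
name of the LASI-enable helper is an assumption):

    s32 ixgbe_reset_phy_t_X550em(struct ixgbe_hw *hw)
    {
        s32 status;

        status = ixgbe_reset_phy_generic(hw);
        if (status != IXGBE_SUCCESS)
            return status;

        /* unmask link-alarm (LASI) interrupts on the external PHY
         * (helper name illustrative) */
        return ixgbe_enable_lasi_ext_t_x550em(hw);
    }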
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Set the lan_id before the first I2C access. The existing call was
clearly being done after a previous I2C access in the same function
and that can't be right, so call the set_lan_id method earlier. At
this point it probably doesn't matter for this QSFP function, but
it makes sense to do it consistently anyway.
On X550, be sure to set the lan_id before using it to configure the
mux control output, else the mux will not be controlled.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>