The port_id parameter did not match the comments about the port.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
EAL is missing 4 device IDs that the base code supports, so add them to EAL.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
configure flexible payload and flex mask in i40e driver
It includes argument verification and hardware setup.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
implement operation to get flow director statistics in i40e pmd driver
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
define structures for getting flow director statistics
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
implement operation to get flow director information in i40e pmd driver, which includes:
- mode
- supported flow types
- table space
- flexible payload size and granularity
- configured flexible payload and mask information
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
define structures for getting flow director information, which includes:
- mode
- supported flow types
- table space
- flexible payload size and granularity
- configured flexible payload and mask information
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
implement operation to flush flow director table
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
set the FDIR flag and report FD_ID plus flex bytes in the mbuf on a match
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The fdir field in rte_mbuf is extended to support flex bytes reported on an FDIR match.
A maximum of 8 flex bytes can be reported.
The reported flex bytes are part of the flexible payload.
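A minimal sketch of the kind of layout this describes, with illustrative names only (the actual rte_mbuf definition is authoritative):

    /* Illustrative sketch, not the real rte_mbuf layout: the hash area of
     * the mbuf gains an fdir view carrying the reported filter ID and up
     * to 8 flex bytes taken from the flexible payload. */
    #include <stdint.h>

    union pkt_hash_sketch {
            uint32_t rss;                   /* RSS hash result */
            struct {
                    uint32_t id;            /* reported FD_ID */
                    uint8_t  flex_bytes[8]; /* reported flexible payload bytes */
            } fdir;                         /* valid when the FDIR match flag is set */
    };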
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
deal with two operations for flow director
- RTE_ETH_FILTER_ADD
- RTE_ETH_FILTER_DELETE
encode the flow inputs into a programming packet
send the packet to the filter programming queue and check status on the status report queue
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
- macros to validate flow_type and pctype
- functions for transition between flow_type and pctype:
- i40e_flowtype_to_pctype
- i40e_pctype_to_flowtype
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
define structures to add or delete flow director filter
- struct rte_eth_fdir_filter
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
set flexible payload related registers to default values at initialization time.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
set up Fortville resources to support flow director, which includes
- queue 0 pair allocated and set up for flow director
- create vsi
- reserve memzone for flow director programming packet
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Support for updating/querying the redirection table has been added for the VF.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
As the 40G NIC supports different redirection table sizes (128/512/64
entries) from the 128 entries of 1G and 10G NICs, support for
multiple redirection table sizes is needed.
It includes,
* Redefine 'struct rte_eth_rss_reta' in ethdev.
- To 'struct rte_eth_rss_reta_entry64' which contains 64
entries and 64 bits mask.
- Array of above new structure can be used for any number of
redirection table entries, as long as the number is multiple
of 64. This is quite flexible for the future expanding of
redirection table.
* Redefinition of relevant interfaces in ethdev.
- Interface of reta update has been redefined with new parameters.
- Interface of reta query has been redefined with new parameters.
* Rework of 1G PMD in igb.
- reta update has been reworked.
- reta query has been reworked.
* Rework of 10G PMD in ixgbe.
- reta update has been reworked.
- reta query has been reworked.
* Rework of 40G PMD (PF only) in i40e.
- reta update has been reworked.
- reta query has been reworked.
* Implement relevant commands in testpmd.
Test report: http://dpdk.org/ml/archives/dev/2014-November/008362.html
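As an illustration of the 64-entry groups described above, a minimal sketch of filling a 512-entry table; the struct shape and update call follow the description, but treat the exact signature as indicative:

    #include <string.h>
    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Sketch: spread a 512-entry redirection table evenly over nb_rxq queues. */
    static int
    setup_reta_512(uint8_t port_id, uint16_t nb_rxq)
    {
            struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_RETA_GROUP_SIZE];
            uint16_t i;

            memset(reta_conf, 0, sizeof(reta_conf));
            for (i = 0; i < 512; i++) {
                    /* mark the entry as valid and assign a queue to it */
                    reta_conf[i / RTE_RETA_GROUP_SIZE].mask |=
                            1ULL << (i % RTE_RETA_GROUP_SIZE);
                    reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
                            i % nb_rxq;
            }
            return rte_eth_dev_rss_reta_update(port_id, reta_conf, 512);
    }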
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Erlu Chen <erlu.chen@intel.com>
Returning the redirection table size is now supported in the
'dev_infos_get' ops for both PF and VF. Default RX/TX configurations
of the VF can also be returned in the 'dev_infos_get' ops, which was
missing before.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
As more and more information differs between PF and VF, the
'dev_infos_get' ops has been implemented separately for each. In
addition, returning the redirection table size is now supported in it.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
As more and more information differs between PF and VF, the
'dev_infos_get' ops has been implemented separately for each. In
addition, a new field 'reta_size' has been added in
'struct rte_eth_dev_info' for returning redirection table size.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add support for setting the hash lookup table size according
to the hardware capability.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add support for LSC interrupts from the bonded device to the link
bonding library, with supporting unit tests in the test application.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Tested-by: SunX Jiajia <sunx.jiajia@intel.com>
This is a Linux-specific virtual PMD driver backed by an AF_PACKET
socket. This implementation uses mmap'ed ring buffers to limit copying
and user/kernel transitions. The PACKET_FANOUT_HASH behavior of
AF_PACKET is used for frame reception. In the current implementation,
Tx and Rx queues are always paired, and therefore are always equal
in number -- changing this would be a Simple Matter Of Programming.
Interfaces of this type are created with a command line option like
"--vdev=eth_af_packet0,iface=...". There are a number of options available
as arguments:
- Interface is chosen by "iface" (required)
- Number of queue pairs set by "qpairs" (optional, default: 1)
- AF_PACKET MMAP block size set by "blocksz" (optional, default: 4096)
- AF_PACKET MMAP frame size set by "framesz" (optional, default: 2048)
- AF_PACKET MMAP frame count set by "framecnt" (optional, default: 512)
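For example, a hypothetical invocation combining these arguments (the interface name is illustrative) could be:
--vdev=eth_af_packet0,iface=eth0,qpairs=1,blocksz=4096,framesz=2048,framecnt=512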
Signed-off-by: John W. Linville <linville@tuxdriver.com>
[Thomas: disable because of incompatibility with some kernels]
Some features of the cmdline were broken in FreeBSD as a result of
termios not being compiled.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Since commit 08b563ffb1 ("mbuf: replace data pointer by an offset"),
data is not an mbuf field anymore.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
It eliminates a race between threads using rte_alarm_cancel() and
rte_alarm_set().
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
During initialization of rte_table_hash_ext and rte_table_hash_lru, a
contiguous region of memory is allocated to store meta data, buckets,
extended buckets, keys, stack of keys, stack of extended buckets and
data entries. The size of each region depends on the hash table
configuration.
The address of each region is calculated using offsets relative to the
beginning of the memory region. Without this patch, the offsets
contain the size of the table meta data (sizeof(struct
rte_table_hash)). These addresses are stored in pointers which are
used when entries are added or deleted and lookups are performed.
Instead of adding these offsets to the address of the beginning of the
memory region, they are added to the address of the end of the meta
data (= address of the beginning of the memory region + sizeof(struct
rte_table_hash)). The resulting addresses are off by sizeof(struct
rte_table_hash) bytes. As a consequence, memory past the allocated
region can be accessed by the add, delete and lookup operations.
This patch corrects the address calculation by not including the size
of the meta data in the offsets.
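A simplified sketch of the corrected offset math, with illustrative names (the real tables have several more regions):

    #include <stdint.h>
    #include <stddef.h>

    struct tbl {                    /* stand-in for struct rte_table_hash (meta data) */
            uint8_t *buckets;       /* pointer into the same allocation */
    };

    static void
    init_bucket_region(struct tbl *t, size_t bucket_offset)
    {
            /* All regions live in the same allocation, right after the meta data. */
            uint8_t *after_meta = (uint8_t *)t + sizeof(struct tbl);

            /* Before the fix, bucket_offset already included sizeof(struct tbl),
             * so adding it to after_meta counted the meta data twice and every
             * region pointer was off by sizeof(struct tbl). After the fix the
             * offsets are relative to after_meta only. */
            t->buckets = after_meta + bucket_offset;
    }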
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
During initialization of rte_hash_table_ext and rte_hash_table_lru,
t->data_size_shl is calculated. This member contains the number of
bits to shift left during calculation of the location of entries in
the hash table. To determine the number of bits to shift left, the
size of the entry (as provided to the rte_table_hash_ext_create and
rte_table_hash_lru_create) has to be used instead of the size of the
key.
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
If a key is not found in a bucket and the bucket has been extended,
the extended buckets also have to be checked for potentially matching
keys. The extended buckets are checked at the end of the lookup. In
most cases, this logic is skipped as it is uncommon to have buckets in
an extended state.
In case the lookup is performed with less than 5 packets, an
unoptimized version is run instead (the optimized version requires at
least 5 packets). The extended buckets should also be checked in this
case instead of simply ignoring the extended buckets.
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
When an entry is deleted from an extensible rte_table_hash, the bucket
that stored the entry can become empty. If this is the case, the
bucket needs to be removed from the chain of buckets.
During removal of the bucket, the chain should be updated first. If
the bucket that will be removed is cleared first, the chain is broken
and the information to update the chain is lost.
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Data_ring is a pre-mapped guest ring buffer that vmxnet3
backend has access to directly without a need for buffer
address mapping and unmapping during packet transmission.
It is useful in reducing device emulation cost on the tx
path. There is some additional cost on the guest
driver for the packet copy, but overall it is a win.
This patch leverages the data_ring for packets with a
length less than or equal to the data_ring entry size
(128B). For larger packets, we won't use the data_ring
as that requires one extra tx descriptor and it's not
clear if doing this will be beneficial.
Performance results show that this patch significantly
boosts vmxnet3 64B tx performance (pkt rate) for l2fwd
application on an Ivy Bridge server by >20%, at which
point we start to hit some bottleneck on the rx side.
Signed-off-by: Yong Wang <yongwang@vmware.com>
This patch includes two small performance optimizations
on the rx path:
(1) It adds unlikely hints on various infrequent error
paths to the compiler to make branch prediction more
efficient.
(2) It also moves a constant assignment out of the pkt
polling loop. This saves one branch per packet.
Performance evaluation configs:
- On the DPDK-side, it's running some l3 forwarding app
inside a VM on ESXi with one core assigned for polling.
- On the client side, pktgen/dpdk is used to generate
64B tcp packets at line rate (14.8M PPS).
Performance results on a Nehalem box (4cores@2.8GHzx2)
shown below. CPU usage is collected factoring out the
idle loop cost.
- Before the patch, ~900K PPS with 65% CPU of a core
used for DPDK.
- After the patch, only 45% of a core used, while
maintaining the same packet rate.
Signed-off-by: Yong Wang <yongwang@vmware.com>
This change makes vmxnet3 consistent with other pmds in
terms of dev_stop behavior: rather than releasing the tx/rx
rings, it only resets the ring structures and releases the
pending mbufs.
Verified with various tests (test-pmd and pktgen) over
vmxnet3 that dev stop/restart works fine.
Signed-off-by: Yong Wang <yongwang@vmware.com>
With the introduction of in_flight_bitmask, the whole 32 bits of the tag can
be used. Furthermore, this patch fixes the integer overflow when finding
the matched tags.
The maximum number of workers is now defined as 64, which is the length of
a double-word. The link between the number of workers and RTE_MAX_LCORE is
now removed. A compile-time check is added to ensure that
RTE_DISTRIB_MAX_WORKERS is less than or equal to the size of a double-word.
Signed-off-by: Qinglai Xiao <jigsaw@gmail.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This field is added for librte_distributor. Users of librte_distributor
are advised to set the value of mbuf->hash.usr before calling
rte_distributor_process. The value of usr is the tag which serves as the
flow identifier.
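A minimal usage sketch, assuming the flow tag is already computable per packet (the distributor call is shown as commonly used; treat the exact signature as indicative):

    #include <rte_mbuf.h>
    #include <rte_distributor.h>

    static uint32_t compute_flow_tag(const struct rte_mbuf *m); /* app-defined */

    /* Sketch: tag each packet with its flow identifier before handing the
     * burst to the distributor, which reads hash.usr to keep packets of the
     * same flow on the same worker. */
    static void
    tag_and_distribute(struct rte_distributor *d,
                       struct rte_mbuf **bufs, unsigned int n)
    {
            unsigned int i;

            for (i = 0; i < n; i++)
                    bufs[i]->hash.usr = compute_flow_tag(bufs[i]);

            rte_distributor_process(d, bufs, n);
    }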
Signed-off-by: Qinglai Xiao <jigsaw@gmail.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
According to the changes of the i40e base driver, two device
IDs (0x1573, 0x1582) are not supported anymore, and one new
device ID (0x1586) is supported. The list of i40e device IDs
that DPDK supports should be modified accordingly.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The scattered_rx configuration is updated in dev_start().
For the execution sequence "stop, re-configure and then re-start",
the new configuration is expected to be used.
But during re-configuration, the stored data may still be the old one.
The patch cleans the configuration in dev_stop(),
so that the best Rx routine is always selected.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Per definition, rte_eth_rx_burst/rte_eth_tx_burst/rte_eth_rx_queue_count
return the number of packets.
When RTE_LIBRTE_ETHDEV_DEBUG is turned on, the return value of FUNC_PTR_OR_ERR_RTE was
set to -ENOTSUP, which is confusing.
The patch always returns 0, whether there are no packets or there is an error.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This enables user space vhost to receive and forward broadcast
and multicast packets:
use a new command line option to enable promiscuous mode;
enable 2 bits in the VMDQ RX mode: ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Configure the PFVML2FLT register in the ixgbe PMD to enable it to receive broadcast and multicast packets;
also factorize the common logic with ixgbe_set_pool_rx_mode.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Configure the VM offload register in the igb PMD to enable it to receive broadcast and multicast packets.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Add a VMDQ RX mode field to the RX config struct; it is a flag from ETH_VMDQ_ACCEPT_*.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Adding this check is to avoid breakage from future data structure changes.
Signed-off-by: Jia Yu <jyu@vmware.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Since commit 08b563ffb1 ("mbuf: replace data pointer by an offset"),
KNI vhost compilation (CONFIG_RTE_KNI_VHOST=y) was broken.
rte_pktmbuf_mtod() is not used in the kernel context but is replaced
by a simple addition of the base address and the offset.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Add in some additional comments around more complex areas of the code
so as to make the code easier to read and understand.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Following the big headers rework, all C++ stuff has moved to arch-specific
headers. The generic headers should not contain this so that this is done only
once.
There was a remaining #ifdef __cplusplus in "eal: split CPU cycle operation to
architecture specific" (fa4001c30e).
Reported-by: Keunhong Lee <dlrmsghd@gmail.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Since commit d798a94 ("mac vlan filter"),
ICC reports this error:
lib/librte_pmd_i40e/i40e_ethdev.c(1763): error #188:
enumerated type mixed with another type
Indeed, RTE_ETH_FILTER_NONE comes from enum rte_filter_type but
enum rte_filter_op is expected.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Configurable CRC stripping needs to be supported in VF,
and the configuration should be finally set in relevant
RX queue context with PF host support.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Support of configurable crc stripping in context of
VF RX queues.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Rename some local variables to be more accurate and concise.
Fix several code style issues reported by checkpatch.pl.
Wrap source lines longer than 80 characters, and merge lines
together where no line wrapping is actually needed.
Add macros for numeric constants and memory size calculations.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The API version number is straightforward enough for checking
the PF host, so there is no need to use 'host_is_dpdk'.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Commit aec8283d47 fixes the compilation issue, but it leads to
a runtime issue: a wrong early exit. In some cases, 'path' is NULL but
'resolved_path' holds an effective path, and execution should continue
rather than exit.
This is because qemu unlinks the file after it maps the huge page file.
In this special case, it is OK to check the resolved path
when path is NULL if errno indicates "No such file or directory".
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Fix alignment issues, lengthy lines, misordered type and other coding style issues.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
No need to keep the same code duplicated for 32 and 64bits x86.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Chao Zhu <bjzhuc@cn.ibm.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Architectures can have their own specific headers; just install all headers from
the arch directory.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Chao Zhu <bjzhuc@cn.ibm.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch splits CPU flags related operations from DPDK and pushes them
to architecture specific arch directories, so that other processor
architectures can implement their own CPU flag functions to support DPDK.
Signed-off-by: Chao Zhu <bjzhuc@cn.ibm.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch splits the SSE based memory copy functions from DPDK and pushes
them to architecture specific arch directories. Other processor
architectures can implement their own vector based memory copy functions.
Signed-off-by: Chao Zhu <bjzhuc@cn.ibm.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch splits the spinlock operations from DPDK and pushes them to
architecture specific arch directories, so that other processor
architectures can be easily adapted to support DPDK.
Signed-off-by: Chao Zhu <bjzhuc@cn.ibm.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch splits the prefetch operations from DPDK and pushes them to
architecture specific arch directories, so that other processor
architectures supporting DPDK can implement their own functions.
Signed-off-by: Chao Zhu <bjzhuc@cn.ibm.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch splits the CPU TSC read operations from DPDK and pushes them to
architecture specific arch directories, so that other processors that
don't have a TSC register can implement their own functions.
Signed-off-by: Chao Zhu <bjzhuc@cn.ibm.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch splits the byte order operations from DPDK and pushes them to
architecture specific arch directories, so that other processor
architectures can be easily adapted to support DPDK.
Signed-off-by: Chao Zhu <bjzhuc@cn.ibm.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch first adds architecture specific directories to EAL.
Then it splits the atomic operations into architecture specific and generic files.
Architecture specific files are put into the corresponding architecture
directory and common headers are put into the generic directory.
Update documentation generation with new generic/ directory.
Signed-off-by: Chao Zhu <bjzhuc@cn.ibm.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
On FreeBSD, when initializing a secondary process,
EAL was complaining if there were ports not bound
to the nic_uio module and was exiting the application.
This should not happen, as this is expected behaviour
and not an error.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
When using the Intel C++ compiler (icc) 14.0.1.106 or the older icc 13.x
version, the mbuf initializer variable was not getting configured
correctly, as the mb_def variable was not set correctly. This is due
to an issue with icc (DPD200249565, which has already been fixed in
icc 14.0.2 and newer compiler releases) where it incorrectly calculates
the field offsets with initializers when zero-sized fields
are used in a structure.
To work around this, the code in ixgbe_rxq_vec_setup does not setup the
fields using an initializer, but instead assigns the values individually
in code.
NOTE: There is no performance impact to this change as the queue
setup functions are not data-plane APIs, but are only used at app
initialization.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The commit 15dbb63ef9 ("VXLAN packet identification") didn't compile,
if CONFIG_RTE_LIBRTE_I40E_DEBUG_DRIVER is enabled.
Signed-off-by: Choonho Son <choonho.son@gmail.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
1. Functions i40e_vsi_* are renamed to i40e_dev_* since the PF can contain
more than 1 VSI after VMDQ is enabled.
2. i40e_dev_rx/tx_queue_setup is changed to be capable of setting up
queues that belong to VMDQ pools.
3. Add queue mapping. This will do a conversion between the queue index
that the application uses and the real NIC queue index.
4. i40e_dev_start/stop is changed to be capable of switching VMDQ queues.
5. i40e_pf_config_rss is changed to calculate the actual main VSI queue numbers
after VMDQ pools are introduced.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Tested-by: Min Cao <min.cao@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Change the i40e_macaddr_add and i40e_macaddr_remove functions to support
adding/deleting multiple MAC addresses. Meanwhile, support MAC address ops
on different pools.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Tested-by: Min Cao <min.cao@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The change includes several parts:
1. Get maximum number of VMDQ pools supported in dev_init.
2. Fill VMDQ info in i40e_dev_info_get.
3. Setup VMDQ pools in i40e_dev_configure.
4. i40e_vsi_setup change to support creation of VMDQ VSI.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Tested-by: Min Cao <min.cao@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The change includes several parts:
1. Clear the pool bitmap when trying to remove a specific MAC.
2. Define RSS, DCB and VMDQ flags to combine rx_mq_mode.
3. Use 'struct' to replace 'union', in order to expand the rx_adv_conf
arguments to better support RSS, DCB and VMDQ.
4. Fix a bug in the rte_eth_dev_config_restore function, which would restore
all MAC addresses to the default pool.
5. Define 3 additional arguments for better VMDQ support.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
'PFINT_ICR0_ENA' shouldn't be cleared in user space ISR,
otherwise adminq interrupts might be missed during
co-working with VF initialization.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
As the driver flag 'RTE_PCI_DRV_INTR_LSC' was introduced
recently, it must be added in i40e. A one-second wait
is needed for the link up event, to let the hardware reach a
stable state, so the interrupt can be processed in two
parts. In addition, it should finally call the callback
function registered by the application.
Test: http://dpdk.org/ml/archives/dev/2014-September/006091.html
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Jing Chen <jing.d.chen@intel.com>
Reviewed-by: Jijiang Liu <jijiang.liu@intel.com>
Tested-by: Min Cao <min.cao@intel.com>
To make the code cleaner and more straightforward, a macro
is defined for all interrupt cause enable flags. Two more
causes are enabled, and all the interrupt causes for
reporting any errors are compiled conditionally, as they
are for debug only.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Jing Chen <jing.d.chen@intel.com>
Reviewed-by: Jijiang Liu <jijiang.liu@intel.com>
To be more straightforward, two local variables in the interrupt
handler are renamed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Jing Chen <jing.d.chen@intel.com>
Reviewed-by: Jijiang Liu <jijiang.liu@intel.com>
This patch fixes two issues:
- one is to fix the log issues
- the other is to set filter type when updating the default MAC filter.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Use the enum type for filter type.
It fixes the compilation error under ICC compiler.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
It mainly adds the i40e_vf_mac_filter_set() function to support perfect match
and hash match of MAC address and VLAN ID for the VF.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
This patch mainly optimizes the i40e_add_macvlan_filters() and
i40e_remove_macvlan_filters() functions so that
filter type configuration can be provided.
Other relevant MAC filter code is changed based on the new data structures.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
In the case where a userspace reports itself as Ubuntu, but the
kernel isn't providing the expected version signature interface,
turn off Ubuntu specializations.
This situation happens often enough in development environments,
and with multi-distribution build servers (e.g. chroot, containers).
Signed-off-by: Alexander Guy <alexander@andern.org>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
rte_ipv4_fragment_packet() and rte_ipv6_fragment_packet()
call rte_pktmbuf_attach() to attach the segment of the original
packet to the segment of the new fragmented one. Such function
is not declared if RTE_MBUF_REFCNT is disabled, as it needs to
call rte_mbuf_refcnt_update, not declared either.
Therefore, the ipv4/v6 fragmentation libraries are disabled
in that situation.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
If the vector pmd option is turned on in the compile time config file,
then always call the vector rxq setup, since we can now use the vector
PMD for receiving jumbo frames that need chained mbufs, a.k.a scattered
packets. Up till now, this function was not being called when receiving
scattered packets, potentially leading to problems with mbufs not being
properly initialized.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Changchun Ouyang <Changchun.ouyang@intel.com>
Remove the "static" prefix to the template mbuf variable in
ixgbe_rxq_vec_setup function. This will then allow different
threads to initialize different RX queues at the same time,
without one overwriting the other's data.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Changchun Ouyang <Changchun.ouyang@intel.com>
It fixes this compilation complaint: "error: ignoring return value of 'realpath',
declared with attribute warn_unused_result [-Werror=unused-result]"
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Tested-by: Jingguo Fu <jingguox.fu@intel.com>
As i40e_register_x710_int.h is defined for early hardware
only, it should be deleted.
For the register which is still required, just define it in
code directly as a workaround.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
An inside-NIC RX interrupt is needed for single RX descriptor
write back. The fix is to correct the wrong configuration
of the register 'I40E_QINT_RQCTL'.
Note that the interrupt will stay inside the NIC only, which means it
will never be reported outside the NIC hardware.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The filter types supported are listed below for VXLAN:
1. Inner MAC and Inner VLAN ID.
2. Inner MAC address, inner VLAN ID and tenant ID.
3. Inner MAC and tenant ID.
4. Inner MAC address.
5. Outer MAC address, tenant ID and inner MAC address.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
Add definitions of the data structures of tunneling packet filter.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Implement the configuration API of VXLAN destination UDP port,
and add new Rx offload flags for supporting VXLAN packet offload.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
Add two functions to support UDP tunneling port configuration.
There are "some" destination UDP port numbers that have unique meaning.
In terms of VxLAN, "IANA has assigned the value 4789 for the VXLAN UDP port,
and this value SHOULD be used by default as the destination UDP port.
Some early implementations of VXLAN have used other values for the destination
port. To enable interoperability with these implementations, the destination
port SHOULD be configurable."
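A sketch of registering the default port; the function, struct and enum names below are assumptions based on the description above, not a confirmed API listing:

    #include <rte_ethdev.h>

    /* Sketch: register the IANA-assigned VXLAN destination UDP port (4789). */
    static int
    add_vxlan_port(uint8_t port_id)
    {
            struct rte_eth_udp_tunnel tunnel = {
                    .udp_port  = 4789,                  /* default VXLAN port */
                    .prot_type = RTE_TUNNEL_TYPE_VXLAN, /* tunnel type selector */
            };

            return rte_eth_dev_udp_tunnel_add(port_id, &tunnel);
    }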
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Replace the "reserved2" field with the "packet_type" field
and add the "inner_l2_l3_len" field in the rte_mbuf structure.
The "packet_type" field is used to indicate ordinary packet format and also
tunneling packet format such as IP in IP, IP in GRE, MAC in GRE and MAC in UDP.
The "inner_l2_len" and the "inner_l3_len" fields are added
in the second cache line, they use 2 bytes for TX offloading of tunnels.
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Link bonding broadcast mode requires the refcnt field in the mbuf struct to
allow efficient transmission of duplicated mbufs on slave ports.
This patch disables broadcast mode when the compilation option RTE_MBUF_REFCNT
is disabled, to allow clean building of the bonding library.
A warning message notifies the user that broadcast mode is disabled.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Fix compilation warning 'missing-field-initializers' for some GCC and clang
versions introduced in commit 0c6bc8e due to the use of C89/C90 initializers.
Use C99-style initializers instead.
Signed-off-by: Marc Sune <marc.sune@bisdn.de>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The previous implementation of rte_kni_alloc() was allocating memzones with a
name composed of a fixed string and the interface name. When an application was
allocating and deallocating multiple interfaces with different names, memzones
were quickly exhausted, even though memzones from deallocated interfaces were
never used anymore (unless an interface with the same name was re-allocated).
As a result, the application was unable to allocate more KNI interfaces with
different names.
This patch implements the KNI memzone pool in order to prevent memzone
exhaustion when allocating/deallocating KNI interfaces. It adds a new API call,
rte_kni_init(max_kni_ifaces) that shall be called before any call to
rte_kni_alloc() if KNI is used. The memzones are pre-allocated with interface-
independent names so that they can be reused.
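A minimal usage sketch under the assumption that one pktmbuf pool is already available; the KNI configuration shown is reduced to the interface name only:

    #include <string.h>
    #include <stdio.h>
    #include <rte_kni.h>
    #include <rte_mempool.h>

    #define MAX_KNI_IFACES 4        /* illustrative upper bound */

    /* Sketch: pre-allocate the KNI memzone pool once, then allocate
     * interfaces by name; freed interfaces reuse memzones from the pool. */
    static struct rte_kni *
    create_kni(struct rte_mempool *mp, unsigned int idx)
    {
            static int initialized;
            struct rte_kni_conf conf;

            if (!initialized) {
                    rte_kni_init(MAX_KNI_IFACES); /* must precede rte_kni_alloc() */
                    initialized = 1;
            }

            memset(&conf, 0, sizeof(conf));
            snprintf(conf.name, sizeof(conf.name), "vEth%u", idx);
            return rte_kni_alloc(mp, &conf, NULL);
    }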
Signed-off-by: Marc Sune <marc.sune@bisdn.de>
Acked-by: Helin Zhang <helin.zhang@intel.com>
An error has been introduced by commit 1f22652ca8
("fix perf regression due to moved pool ptr").
Fix the case where RTE_MBUF_REFCNT is disabled.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Only provide an empty handler.
It can be completed to support filter features on Fortville.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
[Thomas: remove unused empty functions]
Define a new API umbrella to configure any kind of Rx filtering.
New functions:
- rte_eth_dev_filter_supported
- rte_eth_dev_filter_ctrl
Filter types, operations, and structures are defined specifically in the new
header file lib/librte_eth/rte_dev_ctrl.h.
As to the implementation discussion, please refer to
http://dpdk.org/ml/archives/dev/2014-September/005179.html
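A minimal sketch of how the two calls are meant to be combined (argument contents depend on the filter type and are left to the caller):

    #include <rte_ethdev.h>

    /* Sketch: check whether a filter type is supported on a port before
     * issuing an operation on it through the generic filter control API. */
    static int
    try_filter_op(uint8_t port_id, enum rte_filter_type type,
                  enum rte_filter_op op, void *arg)
    {
            int ret = rte_eth_dev_filter_supported(port_id, type);

            if (ret < 0)
                    return ret;     /* this filter type is not supported */
            return rte_eth_dev_filter_ctrl(port_id, type, op, arg);
    }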
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
[Thomas: rename ops and remove unused types]
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
i40e hardware supports RSS in VF.
It's now supported in this driver.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
To reuse code, 'i40e_config_hena()' and 'i40e_parse_hena()' and
their relevant macros need to be made extern, so that they can be used for
both PF and VF parts.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
A forced type conversion is not needed to define a macro with a
constant. The alternative is to let the compiler use the default width,
or specify the width with a suffix of 'U', 'UL', 'ULL', etc.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
The maximum number of contiguous memory regions for FreeBSD is limited by
RTE_CONTIGMEM_MAX_NUM_BUFS; a pointer to each region is stored in
static void * contigmem_buffers[RTE_CONTIGMEM_MAX_NUM_BUFS].
A user can specify a greater amount via hw.contigmem.num_buffers;
while the allocation logic will prevent this allocation from occurring, the logic
in contigmem_unload() will attempt to free hw.contigmem.num_buffers and an
overrun occurs.
This patch limits the freeing to a maximum of RTE_CONTIGMEM_MAX_NUM_BUFS.
Signed-off-by: Alan Carew <alan.carew@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Commit fbde27f1 (get default Rx/Tx configuration from dev info)
introduced a bug which caused memory corruption in dev_info.
To get the RX/TX configuration, both rx/tx queue setup functions were calling
dev_info_get from the PMDs, so the dev_info structure was not being reset
before being populated.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Since commit 13ce5e7eb9 ("virtio: mergeable buffers"),
this driver has had the capability of receiving and transmitting jumbo frames.
So update the max Rx packet length.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Tested-by: Jingguo Fu <jingguox.fu@intel.com>
It fixes the compile error as below on gcc version 4.3.4.
cc1: error: unrecognized command line option "-Wno-unused-but-set-variable"
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Tested-by: Zhaochen Zhan <zhaochen.zhan@intel.com>
Fix one issue in virtio TX: one more vring descriptor is needed to hold the virtio
header when transmitting packets; it is used later to determine whether to free
more entries from the used vring.
It fixes the failure to transmit any packet with 1 segment when there is only
1 descriptor in the vring free list.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
The vhost lib is turned off by default.
The vhost lib is based on CUSE, which requires the fuse development package
to be installed.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: fix build dependencies]
1) FIXME: concurrent calls to vhost set mem table from different guests
could cause mem_temp to be overridden.
2) TODO: cmpset costs quite a few CPU cycles. Allow the app to disable this
feature if there is no contention in the real workload.
3) FIXME: fix scatter gather mbuf copy to vhost vring chained buffers.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
The priv field can be used to store application-specific context.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
VHOST_SUPPORTED_FEATURES is the feature mask that vhost lib supports.
VHOST_FEATURES is the feature mask vhost currently supports after some features are turned on/off.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: split patch]
Rename init_virtio_net to the rte_vhost_callback_register API.
rte_vhost_callback_register registers the callbacks called when a
vhost device is created and ready to be added to a data processing core
or is deactivated by the guest.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
vhost_net_device_ops is internal implementation in vhost lib.
register_cuse_device will be vhost driver register API.
There is no need for it to know the internal vhost ops.
Instead, those ops are retrieved in register_cuse_device
through get_virtio_net_callbacks.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
rte_vhost_enqueue_burst copies host packets to guest.
rte_vhost_enqueue_burst calls virtio_dev_rx or virtio_dev_merge_rx,
depending on whether the merge-able feature is negotiated
in the vhost device.
virtio_dev_merge_tx is renamed to rte_vhost_dequeue_burst.
rte_vhost_dequeue_burst gets the to-be-sent packets from the guest.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: merged patches]
queue_id parameter is added to Rx/Tx functions for multiple queue support
in future.
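A rough usage sketch; the parameter lists below are assumptions pieced together from the descriptions above (queue_id, mempool for dequeue) and are not a confirmed prototype listing:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_virtio_net.h>

    /* Sketch: push a burst of host mbufs into the guest and drain the
     * packets the guest has queued for transmission. Queue indices are
     * illustrative (one RX and one TX virtqueue). */
    static void
    vhost_io(struct virtio_net *dev, struct rte_mempool *mp,
             struct rte_mbuf **to_guest, uint16_t n_to_guest,
             struct rte_mbuf **from_guest, uint16_t burst_sz)
    {
            uint16_t sent = rte_vhost_enqueue_burst(dev, 0, to_guest, n_to_guest);
            uint16_t got  = rte_vhost_dequeue_burst(dev, 1, mp, from_guest, burst_sz);

            (void)sent;     /* caller handles mbufs that were not accepted */
            (void)got;      /* caller forwards or frees the dequeued packets */
    }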
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
As a lib, we have no idea of the app-defined mbuf size.
This patch calculates the mbuf size dynamically.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
This patch makes virtio_dev_merge_tx return the received packets to app layer.
Previously virtio_tx_route was called to route these packets and then free them.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
The structure virtio_net_config_ll is moved to virtio_net.c.
It is related to internal virtio device management,
so it should not be exposed to other files.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
It was used to wait some time and retry when there are not enough descriptors.
The app can easily implement this policy itself if needed.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
Currently the zero copy feature isn't generic, as it is closely coupled with the NIC.
It isn't included in the vhost lib in this version.
gpa(guest physical address) to hpa(host physical address) mapping region
logic is removed.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
The following logic will be moved to the vhost example:
1. MAC learning, which is used to learn the MAC address from the first
transmitted packet of the guest and bind the vhost device to a queue in a
pool of VMDQ.
2. VMDQ mac/vlan filter: each pool the vhost device is bound to is
assigned a mac/vlan filter.
3. num_devices, which is used to specify the maximum number of vhost devices the NIC supports.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
Remove all other code and only keep virtio_dev_rx, copy_from_mbuf_to_vring,
virtio_dev_merge_rx, virtio_dev_merge_tx.
The previous vhost merge-able feature introduced another version of the tx function,
virtio_dev_merge_tx. Actually it is not related to the merge-able feature but is
a fix for the memcpy between mbuf and vring descriptors.
This lib will create the tx functions based on virtio_dev_merge_tx.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: do not remove code used or moved later]
Those files will be refactored in subsequent patches to form user space
vhost library.
Makefile and main.h are removed.
main.c is renamed to vhost_rxtx.c and will provide vring enqueue/dequeue API.
virtio-net.h is renamed to rte_virtio_net.h which is the API header file.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: remove from examples Makefile and merge file renaming]
Remove n_orig variable as it is not required.
Signed-off-by: Keith Wiles <keith.wiles@windriver.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch silences compilation complaints from lower GCC versions (less than 4.6).
Note: only GCC 4.x versions are supported.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Many sample apps use duplicated code to set rte_eth_txconf and rte_eth_rxconf
structures. This patch allows the user to get a default optimal RX/TX configuration
through rte_eth_dev_info_get, while still allowing any parameters to be tweaked as
desired before setting up queues.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
[Thomas: split patch]
Many sample apps use duplicated code to set rte_eth_txconf and rte_eth_rxconf
structures. This patch allows the user to get a default optimal RX/TX configuration
through rte_eth_dev_info_get, while still allowing any parameters to be tweaked as
desired before setting up queues.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
[Thomas: split patch]
Many sample apps use duplicated code to set rte_eth_txconf and rte_eth_rxconf
structures. This patch allows the user to get a default optimal RX/TX configuration
through rte_eth_dev_info_get, while still allowing any parameters to be tweaked as
desired before setting up queues.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
[Thomas: split patch]
Many sample apps use duplicated code to set rte_eth_txconf and rte_eth_rxconf
structures. This patch allows the user to get a default optimal RX/TX configuration
through rte_eth_dev_info_get, while still allowing any parameters to be tweaked as
desired before setting up queues.
Besides, if a NULL pointer is passed to rte_eth_rx_queue_setup or
rte_eth_tx_queue_setup, these functions internally use the default RX/TX
configuration for the user.
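A minimal sketch of the intended flow, assuming one RX and one TX queue and an existing mbuf pool; the descriptor counts and the tweaked field are arbitrary examples:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* Sketch: start from the PMD default queue configuration, tweak one
     * field on the RX side, and pass NULL on the TX side so the setup
     * function fetches the default TX config internally. */
    static int
    setup_queues(uint8_t port_id, struct rte_mempool *mp)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_rxconf rxconf;
            int ret;

            rte_eth_dev_info_get(port_id, &dev_info);
            rxconf = dev_info.default_rxconf;
            rxconf.rx_drop_en = 1;          /* example tweak on top of the default */

            ret = rte_eth_rx_queue_setup(port_id, 0, 128, rte_socket_id(),
                                         &rxconf, mp);
            if (ret < 0)
                    return ret;
            return rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL);
    }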
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
[Thomas: split patch]
To guarantee that the RX/TX configuration structures, plus the other
dev info fields, are reset before being modified,
the dev info structure is zeroed beforehand.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
librte_pmd_pcap driver was opening the pcap/interfaces only at init time and
closing them only when the port was being stopped. This behaviour would cause
problems (leading to segfault) if the user closed the port 2 times. The first
time the pcap/interfaces would be normally closed but libpcap would throw an
error causing a segfault if the closed pcaps/interfaces were closed again.
This behaviour is solved by re-opening pcaps/interfaces when the port is
started (only if these weren't open already for example at init time).
Signed-off-by: Nicolás Pernas Maradei <nico@emutex.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Fix the descriptor initialization loop, so that it initializes
the i40e_tx_desc::cmd_type_offset_bsz for the correct index
into the tx_ring array.
Previously it would use the index once to initialize the txd
local variable, then again when setting cmd_type_offset_bsz.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Since commit aae1047905 ("use the right debug macro"),
DEBUGOUT was replaced by PMD_DRV_LOG which requires at least
2 arguments. But the level argument was missing.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
When enabling RTE_LIBRTE_MEMPOOL_DEBUG and compiling with the clang
compiler, an error occurs because the ifdefed code includes push/pop pragmas.
Signed-off-by: Keith Wiles <keith.wiles@windriver.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Add in a doxygen comment for the ctrl mbuf flag definition.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Update the format of the RX flags to match that of the TX flags. In
general the flags are now specified as "1ULL << X", with a few
exceptions.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch takes the existing TX flags defined for the mbuf and shifts
each uniquely defined one left so that additional RX flags can be
defined without having RX and TX flags mixed together. Under the new
scheme, RX flags start at bit 0 and work left, TX flags start at bit 55
and work right, and bits 56-63 are reserved for generic mbuf use, not
for offloads.
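An illustration of the numbering scheme only; the macro names and bit positions below are examples, not the real PKT_* definitions:

    /* RX flags grow upward from bit 0 ... */
    #define EXAMPLE_RX_FLAG_FIRST   (1ULL << 0)
    #define EXAMPLE_RX_FLAG_SECOND  (1ULL << 1)

    /* ... TX flags grow downward from bit 55 ... */
    #define EXAMPLE_TX_FLAG_FIRST   (1ULL << 55)
    #define EXAMPLE_TX_FLAG_SECOND  (1ULL << 54)

    /* ... and bits 56-63 stay reserved for generic mbuf use. */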
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Make a small improvement to slow path TX performance by adding in a
prefetch for the second mbuf cache line.
Also, the l2/l3 length values are now assigned only when needed.
What I've done with the prefetches is two-fold:
1) changed it from prefetching the mbuf (first cache line) to prefetching
the mbuf pool pointer (second cache line) so that when we go to access
the pool pointer to free transmitted mbufs we don't get a cache miss. When
clearing the ring and freeing mbufs, the pool pointer is the only mbuf
field used, so we don't need that first cache line.
2) changed the code to prefetch earlier - in effect to prefetch one mbuf
ahead. The original code prefetched the mbuf to be freed as soon as it
started processing the mbuf to replace it. Instead now, every time we
calculate what the next mbuf position is going to be we prefetch the mbuf
in that position (i.e. the mbuf pool pointer we are going to free the mbuf
to), even while we are still updating the previous mbuf slot on the ring.
This gives the prefetch much more time to resolve and get the data we need
in the cache before we need it.
In terms of performance difference, a quick sanity test using testpmd
on a Xeon (Sandy Bridge uarch) platform showed performance increases
between approx 8-18%, depending on the particular RX path used in
conjunction with this TX path code.
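A simplified sketch of the prefetch-one-ahead pattern described above (real code also validates reference counts before returning mbufs to their pool):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_prefetch.h>

    /* Sketch: while recycling the current TX slot, prefetch the cache line
     * holding the pool pointer of the *next* mbuf to be freed, so it is
     * already in cache by the time it is needed. */
    static void
    clean_tx_ring(struct rte_mbuf **sw_ring, uint16_t start,
                  uint16_t n, uint16_t ring_mask)
    {
            uint16_t i, cur = start;

            for (i = 0; i < n; i++) {
                    uint16_t next = (cur + 1) & ring_mask;

                    rte_prefetch0(&sw_ring[next]->pool); /* second cache line */
                    rte_mempool_put(sw_ring[cur]->pool, sw_ring[cur]);
                    cur = next;
            }
    }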
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Move the vlan_tci field up by two bytes in the mbuf data structure. This
has two effects:
* Ensures the ixgbe vector driver places the vlan tag in the correct
place in the mbuf.
* Allows a second vlan tag field, if one is added in the future, to be
placed after the existing vlan field, rather than before.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
While some applications may store metadata about packets in the packet
mbuf headroom, this is not a workable solution for packet metadata which
is either:
* larger than the headroom (or headroom is needed for adding pkt headers)
* needs to be shared or copied among packets
To support these use cases in applications, we reserve a general
"userdata" pointer field inside the second cache-line of the mbuf. This
is better than having the application store the pointer to the external
metadata in the packet headroom, as it saves an additional cache-line
from being used.
Apart from storing metadata, this field also provides a general 8-byte
scratch space inside the mbuf for any other application uses that are
applicable.
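A minimal sketch of the intended use, with an application-defined metadata type (the field name is taken from the description above):

    #include <rte_mbuf.h>

    struct flow_meta {              /* application-defined metadata */
            uint32_t flow_id;
            uint32_t refcount;      /* metadata may be shared among packets */
    };

    /* Sketch: attach external, possibly shared metadata to a packet through
     * the userdata pointer instead of consuming headroom. */
    static void
    attach_meta(struct rte_mbuf *m, struct flow_meta *meta)
    {
            meta->refcount++;
            m->userdata = meta;     /* 8-byte scratch space in the 2nd cache line */
    }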
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The receive functions for packets do not modify the next pointer so
the next pointer should always be cleared on mbuf free, just in case.
The slow-path TX needs to clear it, and the standard mbuf free function
also needs to clear it. Fast path TX does not handle chained mbufs so
is unaffected.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Overloading the 'msg_size' field in the 'arq_event_info' struct
is a bad idea. It leads to bugs when the structure is used in a
loop, since the input value (buffer size) is overwritten by the
output value (actual message length). The fix introduces one
more field of 'buf_len' for the buffer size, and renames the
field of 'msg_size' to 'msg_len' for the real message size.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
The firmware API request for writing to hardware registers should be
exposed to the driver. The new API 'i40e_aq_debug_write_register'
is introduced for that.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
10G base T type support is added.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
The fix is to use get_link_status instead of get_phy_capabilities
for reporting FC settings.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
The workaround helps fix the API if the FW is 4.2 or later.
In addition, an unreachable 'break' statement has been removed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
There are variables that represent values in little endian.
Adding prefix of '__Le' can remove warnings during sparse
checks. In addition, remove some unreachable 'break' statements,
and add 'UL' on a couple of constants.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
Force a shifted '1' to be 'unsigned' to avoid shifting a signed int.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
The code wrapped in '#ifdef I40E_TPH_SUPPORT' was added
to check if 'TPH' (TLP Processing Hints) is supported, and enable it.
It is not used currently and can be removed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
The code wrapped in '#ifdef I40E_DCB_SW' is currently for software
validation only; it should be removed entirely.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
The code wrapped in '#ifdef PREBOOT_SUPPORT' was added for
queue context initialization specifically for A0 silicon.
As A0 silicon has been gone for a long time, the code should be
removed entirely. In addition, the checks of 'QV_RELEASE'
and 'PREBOOT_SUPPORT' are also not needed anymore and can
be removed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
The code wrapped in '#ifdef DMA_SYNC_SUPPORT' was written specially
for Solaris, it is not needed anymore for others including DPDK.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
The code wrapped in '#ifdef ETHTOOL_TEST' in i40e_diag.c is for
ethtool testing only, it is not needed anymore and can be removed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
'nvmupdate' is intended to support the userland NVMUpdate tool for
Fortville eeprom. These code changes remove the conditional
compile macro and support this by default. In addition, all
'errno' occurrences are renamed to avoid compile warnings or errors.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
In shared code, tabs are used to align values rather than spaces.
The changes in i40e_adminq_cmd.h make the indentation more
consistent within the shared code.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Chen Jing <jing.d.chen@intel.com>
Tested-by: HuilongX Xu <huilongx.xu@intel.com>
Add new file to support controller X550, therefore update the Makefile
and README file. It also updates the API functions, DCB related functions,
mailbox related functions, etc to support X550.
In addition, some new macros used by X550 are added.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: merge dependent patches]
- Implement functions to do I2C byte read and write
- Support 82599_QSFP_SF_QP and 82599_LS
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: split patch]
- Store lan_id and the physical semaphore mask in the hardware physical information,
and use them to control reading and writing of physical registers in the IXGBE base code.
- Extend the mask from 16 bits to 32 bits for releasing or acquiring the SWFW semaphore
in the IXGBE base code. It is used when reading and writing an I2C byte.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: merge dependent patches]
Remove unnecessary delay when setting up physical link and negotiating
in IXGBE base code.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Introduce a new argument to let the caller determine whether it needs to read and
return data after executing a host interface command in the IXGBE base code.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Refine the function so that the EEPROM checksum calculation returns
either a negative error code on failure or the 16-bit checksum,
in the IXGBE base code.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Implement a function to check command completion for the flow director in the
IXGBE base code, and replace the related code snippets with this function.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
This patch defines new error types in the IXGBE base code; they are
used to report different kinds of errors.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
If VIRTIO_NET_F_CTRL_VQ is not negotiated, hw->cvq will be NULL.
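A minimal sketch of the guard this implies, using a stand-in structure rather than the real virtio_hw layout:

    #include <stddef.h>

    /* Stand-in for the driver's private data; only the relevant field is shown. */
    struct virtio_hw_stub {
        void *cvq;   /* control virtqueue; NULL when VIRTIO_NET_F_CTRL_VQ was not negotiated */
    };

    static int
    send_ctrl_cmd(struct virtio_hw_stub *hw)
    {
        if (hw->cvq == NULL)
            return -1;   /* feature not negotiated: nothing to send the command to */
        /* ... issue the command on hw->cvq ... */
        return 0;
    }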
Signed-off-by: Damjan Marion <damarion@cisco.com>
Acked-by: Changchun Ouyang <Changchun.ouyang@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Recent Ubuntu 12.04.5 LTS ships with 3.13.0-36.63 as the only
supported kernel, so skb_set_hash has been backported and conflicts
with the kni kcompat one.
Commit a09b359dac ("fix build on Ubuntu 14.04") describes the initial problem.
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
[Thomas: reorder conditions to ease reading]
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Clang fails with an error about a variable being used uninitialized:
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c:67:30:
error: variable 'dma_addr0' is uninitialized
when used here [-Werror,-Wuninitialized]
dma_addr0 = _mm_xor_si128(dma_addr0, dma_addr0);
^~~~~~~~~
This error can be fixed by replacing the call to xor, which
takes two parameters, with a call to setzero, which takes none.
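A hedged sketch of the fix (only the relevant statement, wrapped so it compiles on its own):

    #include <emmintrin.h>

    static inline __m128i
    zeroed_desc(void)
    {
        /* Before: dma_addr0 = _mm_xor_si128(dma_addr0, dma_addr0);
         * reads dma_addr0 before it is ever written, which clang rejects.
         * After: the same all-zero value without reading an uninitialized variable. */
        return _mm_setzero_si128();
    }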
Reported-by: Keith Wiles <keith.wiles@windriver.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Rename start_?x_per_q to ?x_deferred_start
and add comments.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
A doxygen group begins with /**@{*/ and ends with /**@}*/.
By enabling DISTRIBUTE_GROUP_DOC, the first comment is applied
to each undocumented member of the group.
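For illustration (the names and values below are invented), a group that benefits from this setting looks like:

    /**@{*/
    /** Example flag; with DISTRIBUTE_GROUP_DOC enabled, this first comment
     *  is reused for the undocumented members that follow. */
    #define EXAMPLE_FLAG_A 0x1
    #define EXAMPLE_FLAG_B 0x2
    #define EXAMPLE_FLAG_C 0x4
    /**@}*/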
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The port field in the mbuf should be set to the input port id
because DPDK apps may use it to know where each packet came from.
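A hedged sketch of the assignment on the receive path (the queue structure is a stand-in; the mbuf 'port' field name is assumed):

    #include <rte_mbuf.h>

    struct rxq_stub {
        uint8_t port_id;   /* id of the port this RX queue belongs to */
    };

    static inline void
    tag_input_port(struct rte_mbuf *m, const struct rxq_stub *rxq)
    {
        m->port = rxq->port_id;   /* the app can later read where the packet came from */
    }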
Signed-off-by: Saori Usami <susami@igel.co.jp>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The flag RTE_PCI_DRV_MULTIPLE was used to register an eth_driver allowing
multiple devices with a single PCI id.
It is now possible to register a pci_driver and create ethdev objects
using rte_eth_dev_allocate().
Suggested-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The function rte_snprintf() was deprecated in version 1.7.0
(commit 6f41fe75e2).
It's now totally removed.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
This field is not used anymore, remove it.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
There is no need for ioport access for applications that won't use virtio PMDs.
Make rte_eal_iopl_init() non-static so that it can be called from the PMDs that need it.
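A minimal sketch of the call site this enables inside a PMD that needs ioport access (the helper name and log message are illustrative):

    #include <rte_eal.h>
    #include <rte_log.h>

    static int
    pmd_request_ioport_access(void)
    {
        /* Ask for I/O privilege only from drivers that actually need it. */
        if (rte_eal_iopl_init() != 0) {
            RTE_LOG(ERR, PMD, "IOPL call failed - cannot use ioport-based device\n");
            return -1;
        }
        return 0;
    }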
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
From man(4) io:
"The initial implementation simply raised the IOPL of the current thread
when open(2) was called on the device. This behaviour is retained in the
current implementation as legacy support for both i386 and amd64."
http://www.freebsd.org/cgi/man.cgi?query=io&sektion=4
Nothing prevents closing it just afterwards.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Rework bond pmd initialisation so that we don't need to modify eal
for this pmd to work.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
No need to restrict usage of non-Intel SFPs.
If (hw->phy.type == ixgbe_phy_sfp_intel) is false,
a warning will be logged.
It was disabled for ixgbe and enabled but unused for i40e.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Since the introduction of vector PMD, a bug in ixgbe_rxq_rearm could
cause a crash. As long as the memory pool allocated to the RX queue
has mbufs available, there is no problem. After allocation of _all_
mbufs from the memory pool, mbufs previously returned by
rte_eth_rx_burst could be accessed by subsequent calls to the PMD and
could be returned by subsequent calls to rte_eth_rx_burst. From the
perspective of the application, this means that fields within the mbuf
could change and that previously allocated mbufs could appear multiple
times.
After failure of mbuf allocation, the dd bits should indicate that the
packets are not ready. To achieve this, the patch adds code to reset the dd
bits in the first RTE_IXGBE_DESCS_PER_LOOP packets of the next
RTE_IXGBE_RXQ_REARM_THRESH packets only if the next
RTE_IXGBE_RXQ_REARM_THRESH packets that will be accessed contain
previously allocated packets.
Setting the bits is not enough. The bits are checked _after_ setting
the mbuf fields, thus a mechanism is needed to prevent the previously
used mbuf pointers from being accessed during the speculative load of
the mbuf fields. For this reason, not only the dd bits are reset, but
also the mbufs associated with those descriptors are set to point to a
"fake" mbuf.
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When Flow Director was used together with bulk alloc, the id and hash
were swapped when a packet matched a flow director filter, due to
improper fdir field initialization.
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
[Thomas: merged with mbuf changes]
igb_ethdev.c contains the function eth_igb_infos_get(), which should set
the number of tx/rx queues supported by the hardware. It contains a huge
switch statement, but there is no i211 case!
Also, there are a few other places which mention i210 but not i211.
I didn't check it enough to say it is totally correct.
For now I see that it is just able to send and receive some packets.
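For illustration only, the shape of the missing case (the type names are stand-ins, and the i211 queue limits below are an assumption, not taken from this patch):

    /* Hedged sketch: report per-MAC queue limits; the i211 values are assumed. */
    enum mac_type_stub { MAC_TYPE_I210, MAC_TYPE_I211 };

    static void
    fill_queue_limits(enum mac_type_stub type,
                      unsigned int *max_rxq, unsigned int *max_txq)
    {
        switch (type) {
        case MAC_TYPE_I210:
            *max_rxq = 4;
            *max_txq = 4;
            break;
        case MAC_TYPE_I211:        /* the case that was missing */
            *max_rxq = 2;          /* assumed i211 limit */
            *max_txq = 2;
            break;
        }
    }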
Signed-off-by: Sergey Mironov <grrwlf@gmail.com>
Reviewed-by: Helin Zhang <helin.zhang@intel.com>
[Thomas: fix indent]
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This method can be implemented by a poll mode driver to provide
non-standard statistics (which are not part of the generic statistics
structure). Each statistic is returned in a generic form ("name" and
"value") and can be used to dump PMD-specific statistics in the same way
as ethtool does in the Linux kernel.
If the PMD does not provide the xstats_get and xstats_set functions, the
ethdev API will return the generic statistics in the xstats format
(name, value).
This commit opens the door for a clean-up of the generic statistics
structure, only keeping statistics that are really common to all PMDs
and moving specific ones into the xstats API.
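A hedged usage sketch; the function and structure names below are assumptions based on the description above (the API of that era) and may differ in later releases:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    static void
    dump_xstats(uint8_t port_id)
    {
        struct rte_eth_xstats xstats[128];
        int i, n;

        n = rte_eth_xstats_get(port_id, xstats, 128);
        if (n < 0 || n > 128)
            return;   /* error, or the table was too small */
        for (i = 0; i < n; i++)
            printf("%s: %" PRIu64 "\n", xstats[i].name, xstats[i].value);
    }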
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
[Thomas: fix some comments]
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
KNI applies only to linux, so there should be no need for any kni files to
be present in the bsdapp eal folder.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Identify all options through the getopt_long return value.
This way, we only need a big switch/case.
Indentation is broken to ease commit review (fixed in next commit).
Suggested-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
All common options are now in a single file.
Common usage() has been moved as well.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
We can handle both short and long options for those in the same case.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Following commits cac6d08c8b and 4bf3fe634a
(which replaced the --use-device option with --pci-whitelist and --vdev),
this option is not available anymore.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Add a --log-level option to set the default eal log level.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
'init' messages should always be logged and filtered at runtime by rte_log.
All the more so as these messages are not in the datapath.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
- remove leading \n in some messages,
- remove trailing \n in some messages,
- split multi lines messages,
- introduce PMD_INIT_FUNC_TRACE macro and use it instead of
PMD_INIT_LOG(DEBUG, "some_func")
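One plausible definition of the new macro, assuming PMD_INIT_LOG is already defined by the driver (the exact format string is illustrative):

    /* Logs the name of the function being entered instead of a hard-coded string. */
    #define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, "%s()", __func__)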
Signed-off-by: David Marchand <david.marchand@6wind.com>
Reviewed-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Prepare for the next commit: indent sections where log messages will be modified so
that the next patch is only about \n.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Since the base driver always adds a trailing \n, add a PMD_DRV_LOG_RAW macro that
does not add one.
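A hedged sketch of how the pair of macros could fit together (the format string and log type are illustrative):

    #include <rte_log.h>

    /* Raw variant: no trailing newline, for strings coming from the base driver. */
    #define PMD_DRV_LOG_RAW(level, fmt, args...) \
        RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)

    /* Regular variant keeps appending the newline for the PMD's own messages. */
    #define PMD_DRV_LOG(level, fmt, args...) \
        PMD_DRV_LOG_RAW(level, fmt "\n", ## args)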
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
- We should not use the DEBUGOUT* / DEBUGFUNC macros in the PMD.
These macros are compat wrappers for the base driver.
- We should avoid calling RTE_LOG directly, as the PMD provides a wrapper for logs.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Reviewed-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
'init' messages should always be logged and filtered at runtime by rte_log.
All the more so as these messages are not in the datapath.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>