This patch removes all references to RTE_MBUF_REFCNT, making the refcnt
field in the mbuf struct a permanent member.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Currently, when refcnt is enabled, we cannot correctly free mbufs with
external memory buffers (i.e. vhost zero copy), because they are recognized
as indirect attached mbufs; we therefore try to free the direct mbuf they
supposedly point to, which results in an error for external memory buffers.
We solve the issue by introducing the IND_ATTACHED_MBUF flag, which indicates
that the mbuf is an indirect mbuf attached to another mbuf.
When we free an mbuf, we free the direct mbuf it points to only if this flag is set.
Freeing an mbuf with an external buffer then works the same as freeing a non-attached mbuf.
The flag is set during attach and cleared on detach.
So in the case of vhost zero copy, where we have mbufs with external
buffers, by default we just free the mbuf, and it is up to the user to deal with
the external buffer.
This patch allows the removal of the RTE_MBUF_REFCNT config option,
enabling refcnt for all mbufs permanently.
The patch also modifies the vhost example, which was using the
RTE_MBUF_INDIRECT macro to detect mbufs with external buffers.
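A minimal sketch of the free path described above, assuming the new
IND_ATTACHED_MBUF flag; the mbuf_to_direct() and mbuf_raw_free() helpers are
hypothetical stand-ins for the library internals:

#include <rte_mbuf.h>

/* hypothetical helpers standing in for the library internals */
extern struct rte_mbuf *mbuf_to_direct(struct rte_mbuf *m);
extern void mbuf_raw_free(struct rte_mbuf *m);

static inline void
pktmbuf_free_seg_sketch(struct rte_mbuf *m)
{
    if (m->ol_flags & IND_ATTACHED_MBUF) {
        /* indirect mbuf attached to another mbuf: detach it and
         * release the direct mbuf it points to */
        struct rte_mbuf *md = mbuf_to_direct(m);

        rte_pktmbuf_detach(m); /* clears IND_ATTACHED_MBUF */
        if (rte_mbuf_refcnt_update(md, -1) == 0)
            mbuf_raw_free(md);
    }
    /* an mbuf with an external buffer (vhost zero copy) takes this path
     * directly: only the mbuf is freed, the buffer is left to the user */
    mbuf_raw_free(m);
}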
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch introduces the CONFIG_RTE_KNI_PREEMPT_DEFAULT flag. When set to 'no',
the KNI kernel thread(s) do not call schedule_timeout_interruptible(), which
improves overall KNI performance at the expense of CPU cycles (polling).
The default value is 'yes', which keeps the current behaviour.
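A hedged sketch of the kernel thread loop this option controls, assuming the
usual kthread API; kni_net_poll() and the exact ifdef symbol are illustrative
placeholders:

#include <linux/kthread.h>
#include <linux/jiffies.h>
#include <linux/sched.h>

extern void kni_net_poll(void *arg); /* placeholder for the rx/tx work */

static int
kni_thread_sketch(void *arg)
{
    while (!kthread_should_stop()) {
        kni_net_poll(arg);
#ifdef RTE_KNI_PREEMPT_DEFAULT
        /* 'yes' (default): yield the CPU between polling rounds */
        schedule_timeout_interruptible(usecs_to_jiffies(100));
#endif
        /* 'no': fall through and poll again immediately */
    }
    return 0;
}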
Signed-off-by: Marc Sune <marc.sune@bisdn.de>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Calculating the hash of variable-length data is more efficient
when the data is sliced into 8-byte pieces. The remaining bytes
are hashed using the CRC32 functions with 8-byte and 4-byte operands.
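A sketch of the slicing approach, reusing the existing rte_hash_crc_8byte()
and rte_hash_crc_4byte() helpers; the tail handling here is simplified for
illustration and is not the exact library code:

#include <stdint.h>
#include <string.h>
#include <rte_hash_crc.h>

static uint32_t
hash_crc_sliced(const void *data, uint32_t len, uint32_t init_val)
{
    const uint8_t *p = data;
    uint64_t d8;
    uint32_t d4;

    /* bulk of the data: one CRC32 step per 8-byte piece */
    while (len >= 8) {
        memcpy(&d8, p, sizeof(d8));
        init_val = rte_hash_crc_8byte(d8, init_val);
        p += 8;
        len -= 8;
    }
    /* remainder: a 4-byte step, then a zero-padded 4-byte step if needed */
    if (len >= 4) {
        memcpy(&d4, p, sizeof(d4));
        init_val = rte_hash_crc_4byte(d4, init_val);
        p += 4;
        len -= 4;
    }
    if (len) {
        d4 = 0;
        memcpy(&d4, p, len);
        init_val = rte_hash_crc_4byte(d4, init_val);
    }
    return init_val;
}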
Signed-off-by: Yerden Zhumabekov <e_zhumabekov@sts.kz>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Initially, SSE4.2 support is detected via a constructor function.
The rte_hash_crc_set_alg() function is added to detect and set the CRC32
implementation if necessary; SSE4.2 is allowed by default.
The rte_hash_crc_*byte() functions are reworked so that they choose the
available CRC32 implementation at runtime.
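A short usage sketch of the new runtime selection; the CRC32_SW and
CRC32_SSE42_x64 flag names follow rte_hash_crc.h of this period and should be
treated as assumptions for other releases:

#include <rte_hash_crc.h>

static void
select_crc_impl(void)
{
    /* force the table-driven software fallback, e.g. for benchmarking */
    rte_hash_crc_set_alg(CRC32_SW);

    /* back to the default: SSE4.2, with 64-bit operands where available;
     * the library silently falls back if the CPU lacks the instructions */
    rte_hash_crc_set_alg(CRC32_SSE42_x64);
}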
Signed-off-by: Yerden Zhumabekov <e_zhumabekov@sts.kz>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
SSE4.2 provides a CRC32 intrinsic with an 8-byte operand.
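For reference, a minimal use of the intrinsic (compile with -msse4.2); this
shows the intrinsic itself, not the DPDK wrapper around it:

#include <stdint.h>
#include <nmmintrin.h>

static inline uint32_t
crc32c_u64(uint32_t crc, uint64_t data)
{
    /* _mm_crc32_u64 consumes 8 bytes of data per instruction */
    return (uint32_t)_mm_crc32_u64(crc, data);
}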
Signed-off-by: Yerden Zhumabekov <e_zhumabekov@sts.kz>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Stop using the built-in intrinsics and use our own assembly
implementation instead. Remove the corresponding #include as well.
Signed-off-by: Yerden Zhumabekov <e_zhumabekov@sts.kz>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Added (a sketch of the inline assembly follows the list):
- crc32c_sse42_u32(), which emits the 'crc32l' instruction;
- crc32c_sse42_u64(), which emits the 'crc32q' instruction;
- crc32c_sse42_u64_mimic(), a wrapper for runs on 32-bit platforms.
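A hedged sketch of the GCC inline assembly behind these helpers; the real
definitions live in rte_hash_crc.h, so treat this as illustrative:

#include <stdint.h>

static inline uint32_t
crc32c_sse42_u32(uint32_t data, uint32_t init_val)
{
    __asm__ volatile("crc32l %[data], %[init_val]"
                     : [init_val] "+r" (init_val)
                     : [data] "rm" (data));
    return init_val;
}

static inline uint32_t
crc32c_sse42_u64(uint64_t data, uint64_t init_val)
{
    __asm__ volatile("crc32q %[data], %[init_val]"
                     : [init_val] "+r" (init_val)
                     : [data] "rm" (data));
    return (uint32_t)init_val;
}

/* 32-bit platforms have no crc32q, so split the operand into two halves */
static inline uint32_t
crc32c_sse42_u64_mimic(uint64_t data, uint64_t init_val)
{
    union { uint32_t u32[2]; uint64_t u64; } d = { .u64 = data };

    init_val = crc32c_sse42_u32(d.u32[0], (uint32_t)init_val);
    init_val = crc32c_sse42_u32(d.u32[1], (uint32_t)init_val);
    return (uint32_t)init_val;
}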
Signed-off-by: Yerden Zhumabekov <e_zhumabekov@sts.kz>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Add lookup tables for the CRC32 algorithm, plus the crc32c_1word() and
crc32c_2words() functions returning the hash of a 32-bit and a 64-bit
operand, respectively.
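A simplified, byte-at-a-time sketch of the table-driven CRC32-C these
functions build on (the library version uses wider slicing tables);
crc32c_table[] stands in for the generated lookup table:

#include <stdint.h>

extern const uint32_t crc32c_table[256]; /* generated CRC32-C lookup table */

static inline uint32_t
crc32c_1byte(uint8_t data, uint32_t init_val)
{
    return crc32c_table[(init_val ^ data) & 0xff] ^ (init_val >> 8);
}

static inline uint32_t
crc32c_1word(uint32_t data, uint32_t init_val)
{
    unsigned int i;

    /* hash the 32-bit operand one byte at a time */
    for (i = 0; i < 4; i++) {
        init_val = crc32c_1byte((uint8_t)data, init_val);
        data >>= 8;
    }
    return init_val;
}

static inline uint32_t
crc32c_2words(uint64_t data, uint32_t init_val)
{
    /* low 32-bit word first, then the high word */
    init_val = crc32c_1word((uint32_t)data, init_val);
    return crc32c_1word((uint32_t)(data >> 32), init_val);
}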
Signed-off-by: Yerden Zhumabekov <e_zhumabekov@sts.kz>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The filter types supported for NVGRE packets are listed below (a
configuration sketch follows the list):
1. Inner MAC address and inner VLAN ID.
2. Inner MAC address, inner VLAN ID and tenant ID.
3. Inner MAC address and tenant ID.
4. Inner MAC address.
5. Outer MAC address, tenant ID and inner MAC address.
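A hedged sketch of programming combination 3 through the generic filter API;
the structure, field and macro names follow the rte_eth_ctrl.h tunnel filter
definitions of this period and should be treated as assumptions for other
releases:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>

static int
add_nvgre_imac_tenid_filter(uint8_t port_id, uint32_t tenant_id,
                            uint16_t queue_id)
{
    struct rte_eth_tunnel_filter_conf conf;

    memset(&conf, 0, sizeof(conf));
    conf.tunnel_type = RTE_TUNNEL_TYPE_NVGRE;
    conf.filter_type = RTE_TUNNEL_FILTER_IMAC_TENID; /* combination 3 */
    conf.tenant_id = tenant_id;
    conf.queue_id = queue_id;
    /* the inner (and, for combination 5, outer) MAC and VLAN fields of the
     * structure are filled in according to the chosen combination */

    return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_TUNNEL,
                                   RTE_ETH_FILTER_ADD, &conf);
}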
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Add an Ethernet type definition for Transparent Ethernet Bridging.
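For reference, Transparent Ethernet Bridging uses EtherType 0x6558; the macro
below is shown in the style of the existing ETHER_TYPE_* defines, and its
exact name in the patch is an assumption here:

/* EtherType of Transparent Ethernet Bridging, carried inside NVGRE/GRE */
#define ETHER_TYPE_TEB 0x6558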
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
RSS offload types were defined separately for 1/10G and 40G NICs
and had no relationship with flow types. This patch unifies the
RSS offload types for all PMDs. The unified RSS offload types have
new, common names which can be used by any PMD or application,
and they are decoupled from specific hardware.
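A usage sketch with the unified names, requesting RSS on non-fragmented
IPv4/IPv6 TCP and UDP traffic; the ETH_RSS_* macro names follow the unified
definitions introduced here, so check rte_ethdev.h of your release for the
exact set:

#include <rte_ethdev.h>

static int
configure_rss(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
    struct rte_eth_conf port_conf = {
        .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_key = NULL, /* use the driver's default key */
                .rss_hf = ETH_RSS_NONFRAG_IPV4_TCP |
                          ETH_RSS_NONFRAG_IPV4_UDP |
                          ETH_RSS_NONFRAG_IPV6_TCP |
                          ETH_RSS_NONFRAG_IPV6_UDP,
            },
        },
    };

    return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}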
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
[Thomas: merge with fm10k]
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Flow types were actually defined for i40e hardware specifically
and could not be used to define RSS offload types for all PMDs.
This patch removes the flow type enum and uses macros with new
names instead. The new macros can later be used to define RSS
offload types. i40e and testpmd are modified accordingly.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
[Thomas: merge with new flow director API]
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The size of the flow type mask array was calculated incorrectly. The fix
is to align the maximum flow type index to the bit width of an array
element and then divide by that bit width.
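A sketch of the corrected sizing arithmetic, i.e. a ceiling division of the
maximum flow type index by the bit width of one array element; the macro
names here are illustrative, not the exact ones from the patch:

#include <limits.h>
#include <stdint.h>

#define UINT32_BIT (CHAR_BIT * sizeof(uint32_t))

/* number of uint32_t elements needed to hold one bit per flow type */
#define FLOW_MASK_ARRAY_SIZE(max_flow_type) \
    (((max_flow_type) + UINT32_BIT - 1) / UINT32_BIT)

/* e.g. 18 flow types -> (18 + 31) / 32 = 1 element; 40 -> 2 elements */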
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
The old ethertype filter API was removed in commit 75db20648,
but it was still present in the (newly integrated) version map for the ABI.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The following structures are removed:
- rte_2tuple_filter
- rte_5tuple_filter
The following APIs are removed:
- rte_eth_dev_add_2tuple_filter
- rte_eth_dev_remove_2tuple_filter
- rte_eth_dev_get_2tuple_filter
- rte_eth_dev_add_5tuple_filter
- rte_eth_dev_remove_5tuple_filter
- rte_eth_dev_get_5tuple_filter
It also moves the TCP_*_FLAG macros to rte_eth_ctrl.h and removes the
TCP_UGR_FLAG macro, which duplicated TCP_URG_FLAG.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
[Thomas: remove also from version map]
This patch defines new functions dealing with ntuple filters, which
correspond to 5tuple filters in the hardware.
It removes the old functions which dealt with 5tuple filters.
Ntuple filters are handled through the ixgbe_dev_filter_ctrl entry point.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch defines new functions dealing with ntuple filters, which
correspond to 2tuple filters in 82580 and i350 hardware, and to 5tuple
filters in 82576 hardware.
It removes the old functions which dealt with 2tuple and 5tuple filters in the igb driver.
Ntuple filters are handled through the eth_igb_filter_ctrl entry point.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch defines the ntuple filter type RTE_ETH_FILTER_NTUPLE and its structure rte_eth_ntuple_filter.
It also corrects the typo TCP_UGR_FLAG to TCP_URG_FLAG.
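A hedged usage sketch: steer TCP traffic to 192.168.0.1:80 to RX queue 2
through the generic filter API. Field and flag names follow the
rte_eth_ntuple_filter definition of this period and should be treated as
assumptions if your release differs:

#include <string.h>
#include <stdint.h>
#include <netinet/in.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>
#include <rte_byteorder.h>
#include <rte_ip.h>

static int
add_ntuple_filter(uint8_t port_id)
{
    struct rte_eth_ntuple_filter filter;

    memset(&filter, 0, sizeof(filter));
    filter.flags = RTE_5TUPLE_FLAGS;                 /* full 5tuple match */
    filter.dst_ip = rte_cpu_to_be_32(IPv4(192, 168, 0, 1));
    filter.dst_ip_mask = UINT32_MAX;                 /* compare all bits */
    filter.dst_port = rte_cpu_to_be_16(80);
    filter.dst_port_mask = UINT16_MAX;
    filter.proto = IPPROTO_TCP;
    filter.proto_mask = UINT8_MAX;
    filter.priority = 1;
    filter.queue = 2;

    return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_NTUPLE,
                                   RTE_ETH_FILTER_ADD, &filter);
}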
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The structure rte_syn_filter is removed.
The following APIs are removed:
- rte_eth_dev_add_syn_filter
- rte_eth_dev_remove_syn_filter
- rte_eth_dev_get_syn_filter
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
[Thomas: remove also from version map]
This patch defines new functions dealing with the SYN filter.
It removes the old functions which dealt with the SYN filter.
The SYN filter is handled through the ixgbe_dev_filter_ctrl entry point.
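A hedged sketch: direct TCP SYN packets to RX queue 1 via the generic filter
API. The rte_eth_syn_filter field names (notably hig_pri) are taken from
rte_eth_ctrl.h of this period and are assumptions here:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>

static int
add_syn_filter(uint8_t port_id)
{
    struct rte_eth_syn_filter filter;

    memset(&filter, 0, sizeof(filter));
    filter.hig_pri = 1; /* give the SYN filter priority over other filters */
    filter.queue = 1;   /* RX queue that receives the SYN packets */

    return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_SYN,
                                   RTE_ETH_FILTER_ADD, &filter);
}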
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
This patch defines new functions dealing with the SYN filter.
It removes the old SYN filter functions in the igb driver.
The SYN filter is handled through the eth_igb_filter_ctrl entry point.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
The structure rte_flex_filter is removed.
The following APIs are removed:
- rte_eth_dev_add_flex_filter
- rte_eth_dev_remove_flex_filter
- rte_eth_dev_get_flex_filter
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
[Thomas: remove also from version map]
This patch defines new functions dealing with the flex filter.
It removes the old flex filter functions in the igb driver.
The flex filter is handled through the eth_igb_filter_ctrl entry point.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
This patch implements the RTE_ETH_FILTER_FLUSH operation to delete all
flow director filters in the ixgbe driver.
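A usage sketch: the flush operation takes no argument structure and removes
every flow director filter on the port:

#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>

static int
flush_fdir_filters(uint8_t port_id)
{
    return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
                                   RTE_ETH_FILTER_FLUSH, NULL);
}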
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch changes the get-info operation so that it is implemented through
the filter_ctrl API and the RTE_ETH_FILTER_INFO/RTE_ETH_FILTER_STATS ops.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch implements the mask configuration of the flow director filter
by using the mask defined in rte_fdir_conf instead of the fdir_set_masks
callback function.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch removes flexbytes_offset from rte_fdir_conf, because the
flexible payload setting is now done via flex_conf instead of flexbytes_offset.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch implements the flexible payload configuration of the flow director filter.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds RTE_ETH_FLOW_TYPE_RAW and RTE_ETH_RAW_PAYLOAD to support
flexible payload starting from the beginning of the packet.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch changes the add/delete/update operations so that they are implemented through
the filter_ctrl API and the RTE_ETH_FILTER_ADD/RTE_ETH_FILTER_DELETE/RTE_ETH_FILTER_UPDATE
ops (a usage sketch follows the list below).
It also removes the callback functions:
- ixgbe_eth_dev_ops.fdir_add_signature_filter
- ixgbe_eth_dev_ops.fdir_update_signature_filter
- ixgbe_eth_dev_ops.fdir_remove_signature_filter
- ixgbe_eth_dev_ops.fdir_add_perfect_filter
- ixgbe_eth_dev_ops.fdir_update_perfect_filter
- ixgbe_eth_dev_ops.fdir_remove_perfect_filter
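A hedged usage sketch of the new path: add a filter sending UDP/IPv4 packets
with destination port 5000 to RX queue 3. Structure and macro names follow
the flow director API of this period (rte_eth_fdir_filter,
RTE_ETH_FLOW_TYPE_UDPV4, ...) and should be treated as assumptions for other
releases:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>
#include <rte_byteorder.h>

static int
add_fdir_udp4_filter(uint8_t port_id)
{
    struct rte_eth_fdir_filter filter;

    memset(&filter, 0, sizeof(filter));
    filter.soft_id = 1;                               /* reported on match */
    filter.input.flow_type = RTE_ETH_FLOW_TYPE_UDPV4;
    filter.input.flow.udp4_flow.dst_port = rte_cpu_to_be_16(5000);
    filter.action.rx_queue = 3;
    filter.action.behavior = RTE_ETH_FDIR_ACCEPT;
    filter.action.report_status = RTE_ETH_FDIR_REPORT_ID;

    return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
                                   RTE_ETH_FILTER_ADD, &filter);
}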
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Enable/disable interrupts by manipulating a control bit in the command
register of the NIC's PCIe configuration space.
Signed-off-by: Danny Zhou <danny.zhou@intel.com>
Tested-by: Qun Wan <qun.wan@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Change the EAL PCI code so that it can work with both the
uio_pci_generic in-tree driver and the DPDK-specific igb_uio driver.
This involves the following changes (a sketch of the bus master setup
follows the list):
1) Modify the method of retrieving BAR resource mapping information.
2) Map using resource files in /sys rather than /dev/uio*.
3) Set the bus master bit in the NIC's PCIe configuration space for
uio_pci_generic.
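A hedged sketch of the bus master setup mentioned in item 3: set bit 2 of
the PCI command register (offset 0x04) through the device's sysfs config
file. The path handling and function name are illustrative, not the EAL's
actual code:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

static int
pci_set_bus_master(const char *cfg_path) /* e.g. "/sys/bus/pci/devices/<addr>/config" */
{
    uint16_t cmd;
    int fd = open(cfg_path, O_RDWR);

    if (fd < 0)
        return -1;
    if (pread(fd, &cmd, sizeof(cmd), 0x04) != sizeof(cmd))
        goto err;
    cmd |= 0x0004; /* Bus Master Enable bit of the PCI command register */
    if (pwrite(fd, &cmd, sizeof(cmd), 0x04) != sizeof(cmd))
        goto err;
    close(fd);
    return 0;
err:
    close(fd);
    return -1;
}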
Signed-off-by: Danny Zhou <danny.zhou@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch renames the bonding mode from
BONDING_MODE_ADAPTIVE_TRANSMIT_LOAD_BALANCING to the shorter BONDING_MODE_TLB.
It also changes the order of the TEST_ASSERT macros in
test_tlb_verify_slave_link_status_change_failover.
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch adds some debug information for link bonding mode 6.
If CONFIG_RTE_LIBRTE_BOND_DEBUG_ALB=y, it prints basic information about
ARP packets on RX and TX (MAC, IP, packet number, ARP packet type).
If CONFIG_RTE_LIBRTE_BOND_DEBUG_ALB_L1 is enabled instead, use the show
command to see IPv4 balancing from clients.
Signed-off-by: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This mode combines adaptive TLB with receive load balancing (RLB). In RLB,
the bonding driver intercepts ARP replies sent by the local system and
overwrites their source MAC address, so that different peers send data to
the server on different slave interfaces. When the local system sends an ARP
request, the driver saves the IP information from it. When the ARP reply from
that peer is received, the peer's MAC is stored, one of the slave MACs is
assigned to it, and the ARP reply is sent to that peer.
Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Signed-off-by: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Changed the MAC address type from uint8_t[6] to struct ether_addr and the IP
address type from uint8_t[4] to uint32_t, to make them consistent with
other DPDK code using MAC and IP addresses. This allows us to use
is_same_ether_addr and ether_addr_copy on the MAC addresses in the ARP header.
Also removed the union from the arp_hdr struct to make accesses to arp_data
items shorter. Updated test-pmd to match the new arp_hdr version.
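A hedged sketch of the kind of ARP-reply rewrite mode 6 performs with the new
layout, accessing arp_data directly and using ether_addr_copy(); the header
names and the slave MAC selection are assumptions here:

#include <rte_ether.h>
#include <rte_arp.h>
#include <rte_byteorder.h>

static void
rewrite_arp_reply_src(struct arp_hdr *arp, const struct ether_addr *slave_mac)
{
    if (arp->arp_op != rte_cpu_to_be_16(ARP_OP_REPLY))
        return;

    /* overwrite the sender hardware address so this peer keeps sending
     * its traffic to the chosen slave interface */
    ether_addr_copy(slave_mac, &arp->arp_data.arp_sha);
}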
Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
[Thomas: doxygenize comments]
Remove hotspots that are unnecessary when an early return occurs.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Make virtio not require UIO, for security reasons; this matches
6WIND's virtio-net-pmd.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
This makes the virtio driver work like ixgbe: transmit buffers are
held until a transmit threshold is reached. The previous behaviour
was to hold mbufs until the ring entry was reused, which caused
more memory usage than needed.
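A simplified sketch of the freeing policy: once the number of free
descriptors drops below a threshold, reclaim mbufs of completed
transmissions. The virtqueue helpers are placeholders, not the driver's real
internals:

#include <stdint.h>
#include <rte_mbuf.h>

struct txq_sketch {
    uint16_t nb_free;        /* free descriptors left in the ring */
    uint16_t tx_free_thresh; /* start reclaiming below this watermark */
};

/* placeholder virtqueue helpers, assumed to exist in the driver */
extern int txq_has_used_desc(struct txq_sketch *txq);
extern struct rte_mbuf *txq_pop_completed(struct txq_sketch *txq);

static inline void
tx_reclaim_if_needed(struct txq_sketch *txq)
{
    if (txq->nb_free >= txq->tx_free_thresh)
        return;

    /* walk the used ring and free the mbufs of completed packets now,
     * instead of waiting for the ring entries to be reused */
    while (txq_has_used_desc(txq)) {
        rte_pktmbuf_free(txq_pop_completed(txq));
        txq->nb_free++;
    }
}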
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Special handling is needed to set the default MAC address.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Change the order of initialization to match the Linux kernel.
Don't blow away the control queue by doing a reset when the device is stopped.
Calling dev_stop and then dev_start would not work: dev_stop was calling
virtio reset, which cleared all queues and all feature negotiation.
Resolved by only doing the reset on device removal.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>