Currently select() is used to monitor file descriptors for vhost-user
ports. This limits the number of ports that can be created, since the
fd number is used as an index into the fd_set and we have seen fds > 1023.
This patch changes select() to poll(). This way we can keep a packed
(pollfd) array for the fds, i.e. monitor as many fds as the array can
hold.
Also see:
http://dpdk.org/ml/archives/dev/2016-April/037024.html
Reported-by: Patrik Andersson <patrik.r.andersson@ericsson.com>
Signed-off-by: Jan Wickbom <jan.wickbom@ericsson.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
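For illustration, a minimal sketch of the resulting dispatch loop (the
names are made up, not the actual fd_man code); the pollfd array stays
packed, so the fd value itself no longer bounds the number of ports:

    #include <poll.h>

    /* sketch: fds[] is kept packed, only the first num_fds entries are valid */
    static void
    fdset_dispatch(struct pollfd *fds, int num_fds,
                   void (*handle)(int fd, short revents))
    {
            int i;

            if (poll(fds, num_fds, 1000 /* ms */) <= 0)
                    return;

            for (i = 0; i < num_fds; i++)
                    if (fds[i].revents & (POLLIN | POLLERR | POLLHUP))
                            handle(fds[i].fd, fds[i].revents);
    }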
REPLY_ACK features provide a generic way for QEMU to ensure both
completion and success of a request.
As described in vhost-user spec in QEMU repository, QEMU sets
VHOST_USER_NEED_REPLY flag (bit 3) when expecting a reply_ack from
the backend. Backend must reply with 0 for success or non-zero
otherwise when flag is set.
Currently, only VHOST_USER_SET_MEM_TABLE request implements reply_ack,
in order to synchronize mapping updates.
This patch enables REPLY_ACK feature generally, but only checks error
code for VHOST_USER_SET_MEM_TABLE.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
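A rough backend-side sketch of the mechanism, assuming a message
structure with a flags field and a 64-bit payload (names are
illustrative, not the exact librte_vhost message layout):

    #define VHOST_USER_NEED_REPLY (1 << 3)

    /* ret is the result of handling the request, 0 on success */
    if (msg.flags & VHOST_USER_NEED_REPLY) {
            msg.payload.u64 = ret ? 1 : 0;  /* 0 = success, non-zero = error */
            msg.size = sizeof(msg.payload.u64);
            send_vhost_reply(connfd, &msg); /* illustrative helper */
    }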
In the function vhost_new_device(), the current code does not free 'dev'
in the "i == MAX_VHOST_DEVICE" branch. This leads to a memory leak.
Fixes: 45ca9c6f7b ("vhost: get rid of linked list for devices")
Cc: stable@dpdk.org
Signed-off-by: Yong Wang <wang.yong19@zte.com.cn>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
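A minimal sketch of the fix pattern (the surrounding vhost_new_device()
body is paraphrased, not quoted): free the freshly allocated 'dev'
before bailing out when no free slot is found.

    if (i == MAX_VHOST_DEVICE) {
            RTE_LOG(ERR, VHOST_CONFIG,
                    "failed to find a free slot for the new device\n");
            rte_free(dev);  /* previously leaked */
            return -1;
    }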
When reg_size < page_size, the read() in rte_mem_virt2phy() would not
return because host_user_addr is invalid.
Fixes: e246896178 ("vhost: get guest/host physical address mappings")
Cc: stable@dpdk.org
Signed-off-by: Haifeng Lin <haifeng.lin@huawei.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch adds the function rte_pktmbuf_linearize to let crypto PMDs
coalesce a chained mbuf before a crypto operation and extend their
capabilities to support segmented mbufs when the device cannot handle
them natively.
Included unit tests for the rte_pktmbuf_linearize functionality:
1) Creates a bunch of segmented mbufs with different sizes and numbers
of segments.
2) Fills the noncontiguous mbuf with sequential values.
3) Uses rte_pktmbuf_linearize to coalesce the segmented buffer into one
contiguous buffer.
4) Verifies the data in the linearized buffer.
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
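A short usage sketch in a hypothetical enqueue path of a PMD that
cannot handle chained mbufs:

    /* rte_pktmbuf_linearize() returns 0 on success, or -1 when the first
     * segment does not have enough tailroom to hold the whole packet */
    if (unlikely(m->nb_segs > 1) && rte_pktmbuf_linearize(m) < 0) {
            rte_pktmbuf_free(m);    /* cannot coalesce, drop the packet */
            return -1;
    }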
If these flags are advertised by a PMD, the NIC supports the MACsec
offload. Incoming MACsec traffic can be offloaded transparently after
the MACsec offload is configured correctly by the application.
The application can set the PKT_TX_MACSEC flag in mbufs to enable the
MACsec offload for the packets to be transmitted.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
This commit adds a new event type:
- RTE_ETH_EVENT_MACSEC
This event will occur when the PN counter in a MACsec connection
reaches the exhaustion threshold.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
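A sketch of how an application might subscribe to the new event (the
callback prototype follows the ethdev callback definition of this
release; treat the exact signature as an assumption and check
rte_ethdev.h):

    static void
    macsec_event_cb(uint8_t port_id, enum rte_eth_event_type type, void *arg)
    {
            RTE_SET_USED(arg);
            if (type == RTE_ETH_EVENT_MACSEC)
                    printf("port %u: MACsec PN nearing exhaustion, re-key\n",
                           port_id);
    }

    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_MACSEC,
                                  macsec_event_cb, NULL);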
Add a new Tx flag in mbuf, that can be set by applications to
enable the MACsec offload for a packet to be transmitted.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
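A hedged per-packet sketch, assuming the port advertises the MACsec
insertion capability among its Tx offload flags:

    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT)
            m->ol_flags |= PKT_TX_MACSEC;   /* request MACsec offload */
    rte_eth_tx_burst(port_id, queue_id, &m, 1);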
Change the parameters of functions from const char *valid[] to
const char * const valid[]. This additional const is needed to
allow us to fix some checkpatch warnings, as well as being good
programming practice.
For the checkpatch warnings, if we have a set of command line
args that we want to check defined as:
static const char *args[] = { "arg1", "arg2", NULL };
kvlist = rte_kvargs_parse(params, args);
checkpatch will complain:
WARNING:STATIC_CONST_CHAR_ARRAY: static const char *
array should probably be static const char * const
Adding the additional const to the definition of the args
will then trigger a compiler error in the absence of this
change to the kvargs library, as we lose the const in the
call to kvargs_parse.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
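With this change the checkpatch-clean declaration compiles cleanly
against the library; a minimal sketch:

    static const char * const args[] = { "arg1", "arg2", NULL };

    struct rte_kvargs *kvlist = rte_kvargs_parse(params, args);
    if (kvlist == NULL)
            return -EINVAL;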
Currently we check the mempool flags when we put/get objects to/from the
mempool. However, this makes the cache useless in the SC|SP, SC|MP and
MC|SP cases.
This patch makes the cache available in the above cases and improves
performance.
Signed-off-by: Wenfeng Liu <liuwf@arraynetworks.com.cn>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
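For example, a single-producer/single-consumer mempool can now be
created with a per-lcore cache that is actually used (sketch, callbacks
omitted):

    struct rte_mempool *mp;

    mp = rte_mempool_create("sp_sc_pool",
                            8192,   /* number of elements */
                            2048,   /* element size */
                            256,    /* per-lcore cache size, now honored */
                            0, NULL, NULL, NULL, NULL,
                            rte_socket_id(),
                            MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET);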
Instead of passing domain, bus, devid, func, just pass
an rte_pci_addr.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Attaching and detaching ethernet ports from an application
is not the same thing as physically removing a PCI device,
so clarify the flags indicating support. All PCI devices
are assumed to be physically removable, so no flag is
necessary in the PCI layer.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
If resources were mapped prior to probe, unmap them
if probe fails.
This does not handle the case where the kernel driver was
forcibly unbound prior to probe.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Leaving default pattern item mask values up for interpretation by PMDs is
an undefined behavior that applications might find difficult to use in the
wild. It also needlessly complicates PMD implementation.
This commit addresses this by defining consistent default masks for each
item type.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
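With consistent defaults an application can leave the mask pointer NULL
and rely on the documented per-type default (e.g.
rte_flow_item_ipv4_mask); a hedged sketch:

    struct rte_flow_item_ipv4 ip_spec = {
            .hdr.src_addr = rte_cpu_to_be_32(0x0a000001), /* 10.0.0.1 */
            .hdr.dst_addr = rte_cpu_to_be_32(0x0a000002), /* 10.0.0.2 */
    };
    struct rte_flow_item item = {
            .type = RTE_FLOW_ITEM_TYPE_IPV4,
            .spec = &ip_spec,
            .mask = NULL,   /* default rte_flow_item_ipv4_mask applies */
    };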
Contrary to the current description, mbuf RSS hash result storage does not
overlap with the returned MARK value (hash.fdir.lo vs. hash.fdir.hi), and
both may be combined.
Reflect this change by allowing testpmd to display both values
simultaneously.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Both actions share the PKT_RX_FDIR mbuf flag, as a result there is no way
to tell them apart. Moreover, the maximum allowed value for the MARK action
may not necessarily cover the entire 32-bit space.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Based on initial PMD implementations of the flow API, returning the error
structure which may be NULL is useless and always discarded.
Returning the error code instead appears to be much more convenient.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Add common vector type definitions to all CPU architectures.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
Rename tools/ into usertools/ to differentiate from buildtools/
and devtools/ while making clear these scripts are part of
DPDK runtime.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Added API for `rte_eth_tx_prepare`
uint16_t rte_eth_tx_prepare(uint8_t port_id, uint16_t queue_id,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
Added fields to the `struct rte_eth_desc_lim`:
uint16_t nb_seg_max;
/**< Max number of segments per whole packet. */
uint16_t nb_mtu_seg_max;
/**< Max number of segments per one MTU */
These fields can be used to create valid packets according to the
following rules:
* For a non-TSO packet, a single transmit packet may span up to
"nb_mtu_seg_max" buffers.
* For a TSO packet the total number of data descriptors is "nb_seg_max",
and each TSO segment may span up to "nb_mtu_seg_max" buffers.
Added functions:
int
rte_validate_tx_offload(struct rte_mbuf *m)
to validate the general requirements for the Tx offloads set in the mbuf
of a packet, such as flag completeness. In the current implementation
this function is called optionally when RTE_LIBRTE_ETHDEV_DEBUG is
enabled.
int rte_net_intel_cksum_prepare(struct rte_mbuf *m)
to prepare the pseudo-header checksum for TSO and non-TSO TCP/UDP
packets before hardware Tx checksum offload:
- for non-TSO TCP/UDP packets the full pseudo-header checksum is
computed and set,
- for TSO the IP payload length is not included.
int
rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
this function uses the same logic as rte_net_intel_cksum_prepare, but
allows the application to choose which offloads should be taken into
account, if full preparation is not required.
PERFORMANCE TESTS
-----------------
This feature was tested with a modified csum engine from test-pmd.
The packet checksum preparation was moved from the application to the
Tx preparation step placed before the burst.
We may expect some overhead costs caused by:
1) using additional callback before burst,
2) rescanning burst,
3) additional condition checking (packet validation),
4) worse optimization (e.g. packet data access, etc.)
We tested it using the ixgbe Tx preparation implementation with some
parts disabled to obtain comparable information about the impact of the
different parts of the implementation.
IMPACT:
1) For an unimplemented Tx preparation callback the performance impact
is negligible,
2) Packet condition checking without checksum modifications (nb_segs,
available offloads, etc.): 14626628/14252168 (~2.62% drop),
3) Full support in the ixgbe driver (point 2 + packet checksum
initialization): 14060924/13588094 (~3.48% drop)
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
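A typical transmit path with the new API might look like the sketch
below (the error handler is an illustrative placeholder):

    uint16_t nb_prep, nb_tx;

    /* validate offload flags and fix up pseudo-header checksums in place */
    nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb_pkts);
    if (nb_prep != nb_pkts) {
            /* pkts[nb_prep] is the first packet the driver cannot handle,
             * rte_errno tells why */
            handle_bad_packet(pkts[nb_prep]);
    }
    nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);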
The function rte_eth_xstats_get() returns an array of tuples (id,
value). The value is the statistic counter, while the id references a
name in the array returned by rte_eth_xstats_get_names().
Today, each 'id' returned by rte_eth_xstats_get() is equal to the index
in the returned array, making this value useless. It also prevents a
driver from having different indexes for names and values, like in the
example below:
rte_eth_xstats_get_names() returns:
0: "rx0_stat"
1: "rx1_stat"
2: ...
7: "rx7_stat"
8: "tx0_stat"
9: "tx1_stat"
...
15: "tx7_stat"
rte_eth_xstats_get() returns:
0: id=0, val=<stat> ("rx0_stat")
1: id=1, val=<stat> ("rx1_stat")
2: id=8, val=<stat> ("tx0_stat")
3: id=9, val=<stat> ("tx1_stat")
This patch fixes the drivers to set the 'id' in their ethdev->xstats_get()
(except e1000 which was already doing it), and fixes ethdev by not setting
the 'id' field to the index of the table for pmd-specific stats: instead,
they should just be shifted by the max number of generic statistics.
Fixes: bd6aa172cf ("ethdev: fetch extended statistics with integer ids")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Remy Horton <remy.horton@intel.com>
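After this fix, applications must match values to names through the
'id' field rather than by array position; a short sketch:

    int n = rte_eth_xstats_get_names(port_id, NULL, 0);
    struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
    struct rte_eth_xstat *stats = calloc(n, sizeof(*stats));

    rte_eth_xstats_get_names(port_id, names, n);
    int nb = rte_eth_xstats_get(port_id, stats, n);

    for (int i = 0; i < nb; i++)
            printf("%s: %" PRIu64 "\n",
                   names[stats[i].id].name, stats[i].value);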
This makes struct rte_eth_dev independent of struct rte_pci_device by
replacing it with a pointer to the generic struct rte_device.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Only the driver itself can decide whether it can fill the PCI
information fields of dev_info.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
The struct rte_intr_handle is an abstraction layer for different types of
interrupt mechanisms. It is embedded in the low-level device (e.g. PCI).
On allocation of a struct rte_eth_dev a reference to the intr_handle
should be stored for devices supporting interrupts.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
The driver information in rte_device is immutable and shouldn't
change.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Jan Blunck <jblunck@infradead.org>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Both register/unregister and enable/disable don't necessarily require the
rte_intr_handle to be modifiable. Therefore let's constify it.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
This macro is based on Jan Viktorin's original patch but also checks the
type of the passed pointer against the type of the member.
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
[jblunck@infradead.org: add type checking and __extension__]
Signed-off-by: Jan Blunck <jblunck@infradead.org>
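Illustrative use of the container_of() helper with a hypothetical
wrapper structure: recover the enclosing object from a pointer to one
of its members, with the added type check catching a mismatched member
at compile time.

    struct foo_device {                    /* hypothetical wrapper */
            int id;
            struct rte_intr_handle intr_handle;
    };

    static void
    foo_irq_handler(struct rte_intr_handle *h)
    {
            struct foo_device *dev =
                    container_of(h, struct foo_device, intr_handle);

            printf("interrupt on device %d\n", dev->id);
    }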
This prevents sigbus errors on architectures that cannot handle unexpected
unaligned accesses to the output buffer.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Olga Shern <olgas@mellanox.com>
Since tokens must be hard-coded in a list that is part of the instruction
structure, context-dependent tokens cannot be expressed.
This commit adds support for building dynamic token lists through a
user-provided function, which is called when the static token list is empty
(a single NULL entry).
Because no structures are modified (existing fields are reused), this
commit has no impact on the current ABI.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Olga Shern <olgas@mellanox.com>
This new API supersedes all the legacy filter types described in
rte_eth_ctrl.h. It is slightly higher level and as a result relies more on
PMDs to process and validate flow rules.
Benefits:
- A unified API is easier to program for, applications do not have to be
written for a specific filter type which may or may not be supported by
the underlying device.
- The behavior of a flow rule is the same regardless of the underlying
device, applications do not need to be aware of hardware quirks.
- Extensible by design, API/ABI breakage should rarely occur if at all.
- Documentation is self-standing, no need to look up elsewhere.
Existing filter types will be deprecated and removed in the near future.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Olga Shern <olgas@mellanox.com>
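A minimal rule under the new API, dropping all incoming IPv4 traffic on
a port (sketch only; attributes and error handling kept to a minimum):

    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_DROP },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;
    struct rte_flow *flow = NULL;

    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0)
            flow = rte_flow_create(port_id, &attr, pattern, actions, &error);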
- Grouped related items using empty lines
- Aligned arguments to the same column
- All item comments that don't fit on the same line are placed below the
item itself
- Moved some comments to the same line if the overall line is < 100 chars
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
If all queues are released, let's also free up dev->data->rx/tx_queues
to be able to properly reinitialize.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
This moves the non-PCI related initialization of the link state interrupt
callback list and the setting of the default MTU to rte_eth_dev_allocate()
so that drivers only need to set non-default values.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Let's clear the eth_dev->data when allocating a new rte_eth_dev so that
drivers only need to set non-zero values.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
No device driver sets the unbind flag in the current public code base.
Therefore it is a good time to remove this unused dead code.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Some platforms like octeontx may use a combination of a pci device and
a vdev to represent one logical dpdk functional device. In such cases,
postponing vdev initialization until after pci device initialization
gives the vdev's probe function a better view of the pci device
resources in the system and allows better functional subsystem
registration in the vdev probe function.
As a bonus, this patch fixes a bond device initialization use case.
Example command to reproduce the issue:
./testpmd -c 0x2 --vdev 'eth_bond0,mode=0,
slave=0000:02:00.0,slave=0000:03:00.0' --
--port-topology=chained
Root cause:
In the existing case (vdev initialization and then pci initialization),
three Ethernet ports are created with the following port ids:
0 - bond device
1 - PCI device 0
2 - PCI device 1
Since testpmd calls configure/start on all the ports on start-up, this
translates to the following illegal setup sequence:
1) bond device configure/start
1.1) pci device 0 stop/configure/start
1.2) pci device 1 stop/configure/start
2) pci device 0 configure (illegal setup case, as the device is in the
started state)
The fix changes the initialization sequence to allow initialization in
the following valid setup order:
1) pci device 0 configure/start
2) pci device 1 configure/start
3) bond device 2 configure/start
3.1) pci device 0 stop/configure/start
3.2) pci device 1 stop/configure/start
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
This patch uses pthread_getaffinity_np() to narrow down used
cores when none of the options below is specified:
* coremask (-c)
* corelist (-l)
* and coremap (--lcores)
The purpose of this patch is to leave out these core-related options
when DPDK applications are deployed under a container env, so that
users do not need to decide the core-related parameters when developing
applications. Instead, when applications are deployed in containers,
a cpu-set is used to constrain which cores can be used inside that
container instance, and the DPDK application inside the container just
relies on this auto-detect mechanism to start its polling threads.
Note: previously, some users were using isolated CPUs, which would be
excluded by default. Please use commands like taskset to use those
cores.
Test example:
$ taskset 0xc0000 ./examples/helloworld/build/helloworld -m 1024
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
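The detection essentially reads the thread's inherited affinity mask,
roughly as in the simplified sketch below (not the exact EAL code):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    static int
    count_usable_cores(void)
    {
            cpu_set_t cpuset;
            int i, count = 0;

            if (pthread_getaffinity_np(pthread_self(), sizeof(cpuset),
                                       &cpuset) != 0)
                    return -1;

            for (i = 0; i < CPU_SETSIZE; i++)
                    if (CPU_ISSET(i, &cpuset))
                            count++;        /* core i may host an lcore */

            return count;
    }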
It's been a source of confusion in the past, and even with this update
may continue to be a source of confusion. However, the original
language seems to imply that the DPDK EAL will take ownership of the
array passed in. Loosening the language up a bit might give a better
understanding of what is actually happening.
Signed-off-by: Aaron Conole <aconole@redhat.com>
Add a new macro RTE_PMD_REGISTER_KMOD_DEP() that allows a driver to
declare the list of kernel modules required to run properly.
Today, most PCI drivers require uio/vfio.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
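Typical usage in a PCI driver, declaring that one of the uio variants
or vfio must be loaded (driver name and module string are given only as
an example of the syntax):

    RTE_PMD_REGISTER_PCI(net_foo, rte_foo_pmd);            /* hypothetical PMD */
    RTE_PMD_REGISTER_PCI_TABLE(net_foo, pci_id_foo_map);
    RTE_PMD_REGISTER_KMOD_DEP(net_foo, "* igb_uio | uio_pci_generic | vfio");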
Aside from avoiding doing useless work, this also fixes a segfault
when calling rte_eth_dev_get_port_by_name() whenever no devices
were found yet, and therefore rte_eth_dev_data wasn't yet allocated.
Fixes: 9c5b8d8b9f ("ethdev: clean port id retrieval when attaching")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The lsb_release script is part of an optional package which is not
always installed. On the other hand, /etc/lsb-release is always present
even on minimal Ubuntu installations.
root@ubuntu1604:~# dpkg -S /etc/lsb-release
base-files: /etc/lsb-release
Read the file if present and use the variables defined in it.
Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
There was an option CONFIG_RTE_INSECURE_FUNCTION_WARNING (disabled by
default), which prevented the use of some libc functions:
sprintf, snprintf, vsnprintf, strcpy, strncpy, strcat, strncat, sscanf,
strtok, strsep and strlen.
It's all about using them at the right place with the right precautions.
However, it is neither really possible nor good advice to disable them.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
A previous commit changed the local_cache table into a
pointer, reducing the size of the rte_mempool structure.
Fix the API comment of rte_mempool_create() related to
this modification.
Fixes: 213af31e09 ("mempool: reduce structure size if no cache needed")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: John McNamara <john.mcnamara@intel.com>