- remove leading \n in some messages,
- remove trailing \n in some messages,
- split multi-line messages.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Reviewed-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Prepare for the next commit: indent sections where log messages will be modified so
that the next patch is only about \n.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Since the base driver always adds a trailing \n, add a PMD_DRV_LOG_RAW macro that
does not add one.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
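For illustration, such a pair of macros could look like the sketch below (the logtype and the "%s(): " prefix are assumptions, not necessarily the exact driver code):

    /* Sketch only: raw variant without '\n', plus the usual wrapper. */
    #define PMD_DRV_LOG_RAW(level, fmt, args...) \
            RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)

    #define PMD_DRV_LOG(level, fmt, args...) \
            PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
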
- Don't use the DEBUGFUNC macro in the PMD.
- Don't use printf for logs.
- We should avoid calling RTE_LOG directly, as the PMD provides a wrapper for logs.
- Replace some PMD_INIT_LOG(DEBUG, "some_func") with PMD_INIT_FUNC_TRACE().
Signed-off-by: David Marchand <david.marchand@6wind.com>
Reviewed-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
'init' messages should always be logged and filtered at runtime by rte_log.
All the more so as these messages are not in the datapath.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
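As an illustration of the idea (macro name and logtype are assumptions), the init log macro can be compiled in unconditionally and left to rte_log's runtime level check:

    /* Sketch: always built; rte_log drops it at runtime if the level is off. */
    #define PMD_INIT_LOG(level, fmt, args...) \
            RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
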
- remove leading \n in some messages,
- remove trailing \n in some messages,
- split multi-line messages.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Reviewed-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Prepare for the next commit: indent sections where log messages will be modified so
that the next patch is only about \n.
Signed-off-by: David Marchand <david.marchand@6wind.com>
[Thomas: also fix some missing whitespace]
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Since the base driver always adds a trailing \n, add a PMD_DRV_LOG_RAW macro that
does not add one.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
- We should not use the DEBUGOUT*/DEBUGFUNC macros in PMD code.
These macros exist only as compatibility wrappers for the base driver.
- We should avoid calling RTE_LOG directly, as the PMD provides a wrapper for logs.
- Replace some PMD_INIT_LOG(DEBUG, "some_func") with PMD_INIT_FUNC_TRACE().
Signed-off-by: David Marchand <david.marchand@6wind.com>
Reviewed-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
It is helpful when you want outside code to cooperate with and respect the
log levels set in DPDK: you can then avoid duplicating incompatible logging
code in the DPDK and non-DPDK parts of the app.
Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
[Thomas: add void to fix function signature]
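A minimal usage sketch, assuming the new getter is rte_get_log_level(): non-DPDK code can consult the DPDK log level before emitting its own messages.

    #include <stdio.h>
    #include <rte_log.h>

    /* Sketch: reuse the DPDK log level for non-DPDK logging code. */
    static void
    app_log_debug(const char *msg)
    {
            if (rte_get_log_level() >= RTE_LOG_DEBUG)
                    fprintf(stderr, "DEBUG: %s\n", msg);
    }
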
The refcnt field is contained within an anonymous union within the mbuf
data structure, and gcc 4.4 gives an error about an unknown field unless
the initialiser for the field is contained within extra braces.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
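A simplified illustration of the issue (not the actual driver code; field names are stand-ins):

    #include <stdint.h>

    struct foo {
            union {
                    uint16_t refcnt;                 /* non-atomic refcount */
                    volatile int16_t refcnt_atomic;  /* stand-in for the atomic variant */
            };
            uint8_t nb_segs;
    };

    /* gcc 4.4 rejects naming the union member from the outer level:
     *     static struct foo f = { .refcnt = 1, .nb_segs = 1 };
     * Wrapping the union's initialiser in its own braces builds everywhere: */
    static struct foo f = { { .refcnt = 1 }, .nb_segs = 1 };
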
Provide a wrapper routine to enable receive of scattered packets with a
vector driver. This improves the performance of the slow-path RX.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Adjust the fast-path code to fix the regression caused by the pool
pointer moving to the second cache line. This change adjusts the
prefetching and also the way in which the mbufs are freed back to the
mempool.
Note: the slow path, e.g. the path supporting jumbo frames, is still slower,
but this is dealt with by a later commit.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The vector PMD expects fields to be in a specific order so that it can
do vector operations on multiple fields at a time. Following mbuf
rework, adjust driver to take account of the new layout and re-enable it
in the config.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The l2_len and l3_len fields are used for TX offloads and so should be
put on the second cache line, along with the other fields only used on
TX.
The l2 and l3 lengths can be accessed as a single uint16_t for
performance, as well as individually.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
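A sketch of the layout this enables (bit-field widths here are assumptions for illustration):

    #include <stdint.h>

    /* Illustrative: combined and individual access to the TX header lengths. */
    union tx_lens {
            uint16_t l2_l3_len;        /* write/clear both lengths in one store */
            struct {
                    uint16_t l3_len:9; /* L3 (IP) header length */
                    uint16_t l2_len:7; /* L2 (Ethernet) header length */
            };
    };
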
This change splits the mbuf in two to move the pool and next pointers to
the second cache line. This frees up 16 bytes in first cache line.
The reason for this change is that we believe that there is no possible
way that we can ever fit all the fields we need to fit into a 64-byte
mbuf, and so we need to start looking at a 128-byte mbuf instead. Examples
of new fields that need to fit in include:
* 32 more bits of filter information to support the new filters in
the i40e driver (and possibly other future drivers)
* an additional 2-4 bytes for storing info on a second VLAN tag to allow
drivers to support double VLAN/QinQ
* 4-bytes for storing a sequence number to enable out of order packet
processing and subsequent packet reordering
as well as potentially a number of other fields or splitting out fields
that are superimposed over each other right now, e.g. for the QoS scheduler.
We also want to allow space for use by other non-Intel NIC drivers that may
be open-sourced to dpdk.org in the future too, where they support fields
and offloads that currently supported hardware doesn't.
If we accept the fact of a 2-cache-line mbuf, then the issue becomes
how to rework things so that we spread our fields over the two
cache lines while causing the lowest slow-down possible. The general
approach that we are looking to take is to focus the first cache
line on fields that are updated on RX, so that receive only deals
with one cache line. The second cache line can be used for application
data and information that will only be used on the TX leg. This would
allow us to work on the first cache line in RX as now, and have the
second cache line being prefetched in the background so that it is
available when necessary. Hardware prefetches should help us out
here. We may also move rarely used or slow-path RX fields, e.g. those for
chained mbufs with jumbo frames, to the second
cache line, depending upon the performance impact and bytes savings
achieved.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Add markers or "labels" at given points inside the mbuf which can be
used instead of individual fields to identify the start of logical
sections inside the mbuf.
The use of typedefs and dummy fields was chosen over using unions
because of a couple of reasons:
* unions cause an extra level of indentation (more likely two levels as
a union containing a struct for multiple fields would be needed). This
makes the lines longer than they need to be and increases the need for
wrapping. [This was the main reason]
* with markers, you can apply multiple markers at the same point if
wanted.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
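A sketch of the mechanism (the struct contents below are illustrative, not the real mbuf):

    #include <stdint.h>

    /* Zero-length array typedef: adds a label without consuming any space. */
    typedef void *MARKER[0];

    struct example_buf {
            MARKER cacheline0;            /* start of the first cache line */
            void *buf_addr;
            MARKER rx_descriptor_fields1; /* fields filled from the RX descriptor */
            uint16_t data_len;
            uint16_t vlan_tci;
    };
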
The metadata macros are only used by libs and apps using the rte_port
packet framework library, so move them to a header file there.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Removed the explicit zero-sized metadata definition at the end of the
mbuf data structure. Updated the metadata macros to take account of this
change so that all existing code which uses those macros still works.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
* Ensure comments line up correctly
* Simplify the #ifdefs around the refcnt fields to make them clearer
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Since the flags field is now 64 bits, we can allow one bit to be used to
indicate a control, i.e. non-packet, mbuf. Dedicate the high bit (bit 63)
to this purpose and add a utility macro to test whether a given mbuf has
the bit set or not.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
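In sketch form (the macro names here are assumptions, not necessarily those used in the tree):

    /* Illustrative: reserve the top bit of the 64-bit ol_flags for control mbufs. */
    #define CTRL_MBUF_FLAG  (1ULL << 63)

    /* Test whether an mbuf is a control (non-packet) mbuf. */
    #define IS_CTRL_MBUF(m) (!!((m)->ol_flags & CTRL_MBUF_FLAG))
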
The offload flags field (ol_flags) was 16-bits and had no further room
for expansion. This patch increases the field size to 64-bits, using up
the remaining reserved space in the single-cache-line mbuf.
NOTE: none of the values for existing flags have been changed, i.e. no
new numbers have been explicitly reserved between existing flag
definitions.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
* Reorder the fields in the mbuf so that we have fields that are used
together side-by-side in the structure. This means that we have a
contiguous block of 8 bytes in the mbuf which is used to reset an mbuf
on descriptor rearm, and a block of 16 bytes of data (excluding flags)
which are set on RX from the received packet descriptor.
* Use dummy fields as appropriate to ensure alignment or to reserve gaps
for later field additions.
* Place most items which are not used by fast-path RX separately at the end
of the structure so they can later be moved to a separate cache line.
[The l2/l3 length fields are not moved at this stage as doing so will
cause overflow to the next cache line].
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The mbuf structure already contains a pointer to the beginning of the
buffer (m->buf_addr). There is no need to use another 8 bytes to store
a second pointer to the beginning of the data.
Using a 16-bit unsigned integer is enough, as we know that an mbuf is
never longer than 64KB. We gain 6 bytes in the structure thanks to
this modification.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
* Updated to apply to latest on mainline.
* Disabled vector PMD in config as it relies heavily on the mbuf layout
This will be re-enabled in a subsequent commit once vPMD has been
reworked to take account of mbuf changes.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
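A sketch of the resulting accessor: the start of packet data is derived from buf_addr plus the 16-bit offset, which is how a helper like rte_pktmbuf_mtod() can be expressed (macro name below is illustrative; the exact in-tree definition may differ).

    /* Start of packet data = buffer start + 16-bit data offset. */
    #define PKTMBUF_DATA(m, type) \
            ((type)((char *)(m)->buf_addr + (m)->data_off))
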
The vlan_macip structure combined a VLAN tag ID with the l2 and l3 header
lengths for tracking offloads. However, this structure was only used as
a unit by the e1000 and ixgbe drivers, not generally.
This patch removes the structure from the mbuf header and places the
fields into the mbuf structure directly at the required point, without
any net effect on the structure layout. This allows us to treat the vlan
tags and header length fields as separate for future mbuf changes. The
drivers which were written to use the combined structure still do so,
using a driver-local definition of it.
Reduce perf regression caused by splitting vlan_macip field. This is
done by providing a single uint16_t value to allow writing/clearing
the l2 and l3 lengths together. There is still a small perf hit to the
slow path TX due to the reads from vlan_tci and l2/l3 lengths being
separated. (<5% in my tests with testpmd with no extra params).
Unfortunately, this cannot be eliminated without restoring the vlan
tags and l2/l3 lengths as a combined 32-bit field. This would prevent
us from ever looking to move those fields about and is an artificial tie
that applies only for performance in igb and ixgbe drivers. Therefore,
this patch keeps the vlan_tci field separate from the lengths as the
best solution going forward.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
In some cases we may want to tag a packet for a particular destination
or output port, so rename the "in_port" field in the mbuf to just "port"
so that it can be re-used for this purpose if an application needs it.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The rte_pktmbuf structure was initially included in the rte_mbuf
structure. This was needed when there were 2 types of mbuf (ctrl and
packet). As the control mbuf has been removed, we can merge the
rte_pktmbuf into the rte_mbuf structure.
Advantages of doing this:
- the access to mbuf fields is easier (e.g. m->data instead of m->pkt.data)
- make the structure more consistent: for instance, there was no reason
to have the ol_flags field in rte_mbuf
- it will allow a deeper reorganization of the rte_mbuf structure in the
next commits, allowing several bytes to be gained in it
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
[Bruce: updated for latest code and new example apps]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The initial role of rte_ctrlmbuf is to carry generic messages (data
pointer + data length), but it is not used by DPDK or its applications.
Keeping it implies:
- losing 1 byte in the rte_mbuf structure
- having some dead code in rte_mbuf.[ch]
This patch removes this feature. Thanks to it, it is now possible to
simplify the rte_mbuf structure by merging the rte_pktmbuf structure
in it. This is done in next commit.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
* Updated patch to HEAD.
* Modified patch to retain the old function names for ctrl mbufs as
macros. This helps with app compatibility, and allows the concept
of a control mbuf to be reintroduced via a single-bit flag in
a future change.
* Updated the packet framework ip_pipeline example application to
work following this change.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
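Illustratively, the retained names can be thin macros over the packet-mbuf API (the exact set and definitions below are assumptions):

    #define rte_ctrlmbuf_alloc(mp)   rte_pktmbuf_alloc(mp)
    #define rte_ctrlmbuf_free(m)     rte_pktmbuf_free(m)
    #define rte_ctrlmbuf_data(m)     rte_pktmbuf_mtod(m, char *)
    #define rte_ctrlmbuf_len(m)      rte_pktmbuf_data_len(m)
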
It seems that RTE_MBUF_SCATTER_GATHER is not the proper name for the
feature it provides. "Scatter gather" means that data is stored using
several buffers. RTE_MBUF_REFCNT seems to be a better name for that
feature as it provides a reference counter for mbufs.
The macro RTE_MBUF_SCATTER_GATHER is poisoned to ensure this
modification is seen by drivers or applications using it.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
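The poisoning itself is a one-liner; any remaining reference to the old name then fails at compile time (sketch):

    /* Using the old name anywhere after this line is a compile error. */
    #pragma GCC poison RTE_MBUF_SCATTER_GATHER
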
Since all unspecified fields in an initializer are assumed to be zero, we
can simplify the empty mbuf definition in the vector driver to only use
the fields that are non-zero, i.e. just nb_segs = 1. This makes things
shorter and means that the structure doesn't need as many updates for
other fields being renamed or moved.
The variable itself is never modified and only used by a single function
so it can be made const and local to the using function.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
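In sketch form (in the driver this constant lives inside the single function that uses it):

    #include <rte_mbuf.h>

    /* Sketch: unspecified fields are implicitly zero, so only nb_segs is set. */
    static const struct rte_mbuf mb_def = { .nb_segs = 1 };
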
- pci_num_vf() is already defined in RHEL 6
- pci_intx_mask_supported is already defined in RHEL 6.3
- pci_check_and_mask_intx is already defined in RHEL 6.3
Signed-off-by: Guillaume Gaudonville <guillaume.gaudonville@6wind.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This reverts commit 399a3f0db8
"fix IRQ mode handling"
and part of commit 4a5c221f9d
"fix compability on old kernel"
The MSI implementation uses irq_to_desc, which is not exported before
kernel 3.4 and commit 3911ff30.
Let's revert it for release 1.7.1 while waiting for another solution.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Make the ACL library build/work on the 'default' architecture:
- make rte_acl_classify_scalar really scalar
(make sure it doesn't use SSE4 intrinsics through resolve_priority()).
- Provide two versions of the rte_acl_classify code path:
rte_acl_classify_sse() - can be built and used only on systems with SSE4.2
and above; returns -ENOTSUP on a lower arch.
rte_acl_classify_scalar() - a slower version, but can be built and used
on all systems.
- Addition of a new function rte_acl_classify_alg. This function lets you
specify an enum value to override the ACL context's default algorithm when doing
a classification. This allows an application to specify a classification
algorithm without needing to publicize each method. I know there was concern
over keeping those methods public, but we don't have a static ABI at the moment,
so this seems to me a reasonable thing to do, as it gives us less of an ABI
surface to worry about.
- keep common code shared between these two codepaths.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
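A usage sketch, assuming rte_acl_classify_alg() takes the same arguments as rte_acl_classify() plus the algorithm enum, with enum names such as RTE_ACL_CLASSIFY_SSE/RTE_ACL_CLASSIFY_SCALAR following the description above:

    #include <errno.h>
    #include <stdint.h>
    #include <rte_acl.h>

    /* Try the SSE code path; fall back to scalar if the CPU lacks SSE4.2. */
    static int
    classify_with_fallback(const struct rte_acl_ctx *ctx, const uint8_t **data,
                           uint32_t *results, uint32_t num, uint32_t categories)
    {
            int ret = rte_acl_classify_alg(ctx, data, results, num, categories,
                                           RTE_ACL_CLASSIFY_SSE);
            if (ret == -ENOTSUP)
                    ret = rte_acl_classify_alg(ctx, data, results, num,
                                               categories,
                                               RTE_ACL_CLASSIFY_SCALAR);
            return ret;
    }
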
Since Linux commit "set name_assign_type in alloc_netdev" (c835a677331495),
the function alloc_netdev takes a new parameter (name_assign_type)
whose default value is NET_NAME_UNKNOWN.
Signed-off-by: Aaro Koskinen <aaro.koskinen@nsn.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
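A compat sketch of the kind of change needed in the KNI kernel module (the wrapper name and the exact version check are assumptions; the new parameter appeared around kernel 3.17):

    #include <linux/version.h>
    #include <linux/netdevice.h>

    /* Pick the right alloc_netdev() call depending on the kernel version. */
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 17, 0)
    #define kni_alloc_netdev(size, name, setup) \
            alloc_netdev(size, name, NET_NAME_UNKNOWN, setup)
    #else
    #define kni_alloc_netdev(size, name, setup) \
            alloc_netdev(size, name, setup)
    #endif
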
The sysfs directory for hugepages parsing was not closed properly in some
error cases.
Signed-off-by: Zhangkun <zhangk.zhangkun@huawei.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
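An illustrative sketch of the kind of fix (not the actual EAL code): close the DIR handle on every return path, not only on success.

    #include <dirent.h>

    static int
    count_hugepage_dirs(const char *path)
    {
            DIR *dir = opendir(path);
            int count = 0;

            if (dir == NULL)
                    return -1;

            while (readdir(dir) != NULL) {
                    if (++count > 4096) {
                            closedir(dir);  /* previously leaked on this error path */
                            return -1;
                    }
            }

            closedir(dir);
            return count;
    }
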
The cmd_ring_release function can be called twice if the queue has already
been released. This causes a crash on shutdown.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
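A sketch of guarding against the double release (types and names below are illustrative, not the vmxnet3 code):

    struct example_ring;
    void example_ring_release(struct example_ring *ring);

    struct example_queue {
            struct example_ring *cmd_ring;
    };

    static void
    example_queue_release(struct example_queue *q)
    {
            if (q == NULL || q->cmd_ring == NULL)
                    return;               /* already released: nothing to do */

            example_ring_release(q->cmd_ring);
            q->cmd_ring = NULL;           /* make a second call harmless */
    }
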
Remove unused macros from both the Linux and BSD implementations.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
No need for the executable ('x') bit on source files.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
When writing to the mbuf array for receiving packets, do not assume
16-byte alignment by using aligned stores. If the pointers are only
8-byte aligned, the program will crash due to incorrect alignment.
Changing "store" to "storeu" fixes this.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
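In intrinsic terms the change is simply the following (illustrative wrapper, not the driver code):

    #include <emmintrin.h>

    static inline void
    write_two_mbuf_ptrs(void **dst, __m128i mbuf_pair)
    {
            /* _mm_store_si128() requires a 16-byte aligned destination and
             * faults on an 8-byte aligned pointer array; the unaligned
             * variant accepts either. */
            _mm_storeu_si128((__m128i *)dst, mbuf_pair);
    }
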
This patch supports the mergeable buffer feature in the DPDK-based virtio PMD,
which can receive jumbo frames with larger sizes, like 3K, 4K or even 9K.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Tested-by: Jingguo Fu <jingguox.fu@intel.com>
IPv6 will run NDP with multicast packets, but multicast packets will be
filtered by the i40e driver by default. So we need to enable multicast when
promiscuous mode is on, or IPv6 will fail.
Signed-off-by: Ding Heng <hengx.ding@intel.com>
Reviewed-by: Helin Zhang <helin.zhang@intel.com>
i40e was failing to run in XEN domain0, as the physical
memory for adminq DMA needs to be allocated and translated
in a different way for XEN domain0. So
rte_memzone_reserve_bounded() should be used for DMA
memory allocation, and rte_mem_phy2mch() should be used
for DMA memory address translation to support running
i40e PMD in XEN domain0.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Tested-by: Zhaochen Zhan <zhaochen.zhan@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>