eth_hw_addr_random() sets address type correctly.
eth_hw_addr_random() is available since Linux v3.4, so
no compat is required.
Also fix the warning:
warning: passing argument 1 of ‘memcpy’ discards ‘const’
qualifier from pointer target type
The dev_addr variable was intentionally made const in Linux v5.17 to
prevent modifying it directly.
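A minimal sketch of the change in the KNI kernel module (the surrounding
code and variable names are illustrative, not the exact DPDK sources):

    /* before: writes into net_dev->dev_addr, which is const since v5.17 */
    memcpy(net_dev->dev_addr, mac_addr, ETH_ALEN);

    /* after: one call assigns a random MAC and sets addr_assign_type */
    eth_hw_addr_random(net_dev);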
Fixes: ea6b39b5b8 ("kni: remove ethtool support")
Cc: stable@dpdk.org
Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
A new event RTE_ETH_EVENT_RX_AVAIL_THRESH should be generated by HW
when the number of available descriptors in an Rx queue goes below the
threshold.
The threshold is defined as a percentage of the Rx queue size, with valid
values from 0 to 99 (inclusive). The zero (default) value disables it.
There is no capability reporting for the feature. The application should
simply try to set the required threshold value and handle the result.
Add testpmd commands to control the threshold:
set port <port_id> rxq <rxq_id> avail_thresh <avail_thresh_num>
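For illustration, an application could subscribe to the new event roughly
as below (a sketch only; the handler body is hypothetical and not part of
this change):

    #include <rte_ethdev.h>

    static int
    rxq_avail_thresh_cb(uint16_t port_id, enum rte_eth_event_type event,
                        void *cb_arg, void *ret_param)
    {
            RTE_SET_USED(event);
            RTE_SET_USED(cb_arg);
            RTE_SET_USED(ret_param);
            /* e.g. wake up or re-prioritize Rx polling for this port */
            printf("Rx descriptors below threshold on port %u\n", port_id);
            return 0;
    }

    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RX_AVAIL_THRESH,
                                  rxq_avail_thresh_cb, NULL);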
Signed-off-by: Spike Du <spiked@nvidia.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
When building the kernel modules, try to get the kernel version from
the kernel sources first.
This fixes the kernel modules installation directory when the target kernel
version differs from the host kernel version, such as for CI builds or when
packaging for Linux distributions.
Signed-off-by: Ferdinand Thiessen <rpm@fthiessen.de>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
The error path called the rte_eth_dev_release_port() function,
which frees eth_dev->data->dev_private, and then tried to free
pmd->intr_handle, causing a use-after-free.
Fix by moving the free before the release function is called.
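A rough sketch of the reorder (the exact driver code and names differ):

    /* before: pmd lives in dev_private, which the release call frees */
    rte_eth_dev_release_port(eth_dev);
    rte_intr_instance_free(pmd->intr_handle); /* use after free */

    /* after: free the interrupt handle while pmd is still valid */
    rte_intr_instance_free(pmd->intr_handle);
    rte_eth_dev_release_port(eth_dev);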
Fixes: d61138d4f0 ("drivers: remove direct access to interrupt handle")
Cc: stable@dpdk.org
Signed-off-by: Xiangjun Meng <mengxiangjun4@huawei.com>
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The PMD destroy function called the release function, which frees
dev->data->dev_private, and then tried to free PRIV(dev)->intr_handle,
causing a heap use-after-free.
Fix by moving the free before the release function is called.
Fixes: d61138d4f0 ("drivers: remove direct access to interrupt handle")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
A multicast address pool is allocated for a port when
using mcast_addr testpmd commands.
When closing a port or stopping testpmd, this pool was
not freed, resulting in a leak.
This issue has been caught using ASan.
Free this pool when closing the port.
The ASan report is as follows:
ERROR: LeakSanitizer: detected memory leaks
Direct leak of 192 byte(s)
0 0x7f6a2e0aeffe in __interceptor_realloc
(/lib/x86_64-linux-gnu/libasan.so.5+0x10dffe)
1 0x565361eb340f in mcast_addr_pool_extend
../app/test-pmd/config.c:5162
2 0x565361eb3556 in mcast_addr_pool_append
../app/test-pmd/config.c:5180
3 0x565361eb3aae in mcast_addr_add
../app/test-pmd/config.c:5243
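A sketch of the fix in the port close path (field names assume testpmd's
struct rte_port layout):

    if (port->mc_addr_pool != NULL) {
            free(port->mc_addr_pool);
            port->mc_addr_pool = NULL;
            port->mc_addr_nb = 0;
    }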
Fixes: 8fff667578 ("app/testpmd: new command to add/remove multicast MAC addresses")
Cc: stable@dpdk.org
Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
Acked-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
When the --mbuf-size command-line parameter is specified, the segments to
scatter packets on are allocated sequentially from the extra memory pools
(the mbuf for the first segment is allocated from the first pool, the
second one from the second pool, and so on; if the number of segments is
greater than the number of pools, the mbufs for the remaining segments
are allocated from the last valid pool).
A bug in comparing segment index with mbuf index caused wrong mapping
of one of the segments.
Fix the comparison.
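Illustrative logic of the intended mapping (names are hypothetical; the
point is that the pool index, not the segment index, is capped at the
last valid pool):

    for (seg = 0; seg < nb_segs; seg++) {
            unsigned int pool_idx = RTE_MIN(seg, nb_pools - 1);
            rx_seg[seg].mp = mbuf_pools[pool_idx];
    }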
Fixes: 2befc67ff6 ("app/testpmd: add extended Rx queue setup")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The MCS lock, PF lock and ticket lock have no arch-specific implementation,
so there is no need for the extra redirection in headers.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Stanislaw Kardach <kda@semihalf.com>
- New firmware version for UDM (Upstream Data Mover)
- Remove device-level start, stop, and reset operations
- Add queue-based start, stop and reset as required by firmware
- Remove performance structs as they are not in the firmware module
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
- New firmware version for DDM (Downstream Data Mover)
- Remove device-level start, stop, and reset operations
- Add queue-based start, stop and reset as required by firmware
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
- New firmware version for MPU (Mbuf Prefetch Unit)
- Remove device-level global operations
- Remove ark_mpu_reset_stats function
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
This release contains the changes listed below.
- Fast mbuf free feature support.
- Device argument to disable the LLQ.
- Simplification of the MTU verification.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
The PMD attempts to enable the LLQ (Low Latency Queue) whenever possible.
The LLQ requires the user to enable Write Combining (WC) for the
supported igb_uio/vfio-pci modules.
The vfio-pci module does not officially support WC. Moreover, in some
Linux distributions it can be built into the kernel, so any
modification to the vfio-pci module requires a full kernel rebuild.
This can make the configuration process much harder, and for users who
are not interested in maximum network performance it may be unnecessary.
These users requested the ability to turn off the LLQ to avoid the hassle
of such a setup.
Disabling the LLQ is generally not recommended, as doing so does not
improve performance, and on 6th-generation AWS instances the lack of LLQ
can have a huge negative impact on hardware performance.
The device argument which controls the LLQ is called 'enable_llq' and it
defaults to 1 (the LLQ is enabled). Setting it to 0 disables the LLQ.
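For example, the LLQ can be disabled from the EAL command line with a
device argument such as (the PCI address is illustrative):

    -a 0000:00:06.0,enable_llq=0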
This commit also adds explicit initialization of the 'use_large_llq_hdr'
devarg. The PMD_REGISTER_PARAM_STRING() call for the ENA PMD was updated
with all the available devargs (including ENA_DEVARG_MISS_TXC_TO, which
was not listed previously).
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
Remove MTU verification from ena_mtu_set() and ena_start(). It is done
by rte_ethdev already, so there is no reason to repeat it inside the ENA
driver.
Signed-off-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
Add support for RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE offload. It can be
enabled if all the mbufs for a given queue belong to the same mempool
and their reference count is equal to 1.
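A sketch of how an application meeting those constraints could request the
offload (standard ethdev calls; error handling omitted):

    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf = { 0 };

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
            port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);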
Signed-off-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
l3fwd-acl contains functions duplicated from l3fwd.
For this reason, merge the l3fwd-acl code into l3fwd
and add a '--lookup acl' command-line option to run ACL.
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add the missing em_mask_key() implementation and fix the l3fwd_common.h
inclusion in the FIB lookup functions to enable l3fwd to run on RISC-V.
Sponsored-by: Frank Zhao <frank.zhao@starfivetech.com>
Sponsored-by: Sam Grove <sam.grove@sifive.com>
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Define the missing __NR_bpf syscall id to enable the tap PMD.
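A minimal sketch of such a fallback definition; the value 280 is taken
from the generic Linux syscall table used by RISC-V and should be treated
as an assumption to verify against the target kernel headers:

    #ifndef __NR_bpf
    #define __NR_bpf 280
    #endif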
Sponsored-by: Frank Zhao <frank.zhao@starfivetech.com>
Sponsored-by: Sam Grove <sam.grove@sifive.com>
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Define the missing __NR_memfd_create syscall id to enable the memif PMD.
Sponsored-by: Frank Zhao <frank.zhao@starfivetech.com>
Sponsored-by: Sam Grove <sam.grove@sifive.com>
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Reuse the vector processing stubs defined for PPC in the ixgbe PMD for
RISC-V. This enables using the ixgbe PMD in scalar mode on this
architecture.
The ixgbe PMD was validated with an Intel X520-DA2 NIC and the test-pmd
application. Packet transfer was checked using all UIO drivers available
for non-IOMMU platforms: uio_pci_generic, vfio-pci (noiommu) and igb_uio.
Sponsored-by: Frank Zhao <frank.zhao@starfivetech.com>
Sponsored-by: Sam Grove <sam.grove@sifive.com>
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Add checks for all flag values defined in the RISC-V misa CSR register.
Sponsored-by: Frank Zhao <frank.zhao@starfivetech.com>
Sponsored-by: Sam Grove <sam.grove@sifive.com>
Signed-off-by: Michal Mazurek <maz@semihalf.com>
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Validate RISC-V compilation when test-meson-builds.sh is called. The
check is only performed if the appropriate toolchain is present on the
system (same as with other architectures).
Sponsored-by: Frank Zhao <frank.zhao@starfivetech.com>
Sponsored-by: Sam Grove <sam.grove@sifive.com>
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Add all necessary elements for DPDK to compile and run EAL on SiFive
Freedom U740 SoC which is based on SiFive U74-MC (ISA: rv64imafdc)
core complex.
This includes:
- EAL library implementation for rv64imafdc ISA.
- meson build structure for 'riscv' architecture. RTE_ARCH_RISCV define
is added for architecture identification.
- xmm_t structure operation stubs as there is no vector support in the
U74 core.
Compilation was tested on Ubuntu and Arch Linux using the riscv64
toolchain. Clang compilation is currently not supported due to issues
with missing relocation relaxation.
Two rte_rdtsc() schemes are provided: stable low-resolution using rdtime
(default) and unstable high-resolution using rdcycle. User can override
the scheme by defining RTE_RISCV_RDTSC_USE_HPM=1 during compile time of
both DPDK and the application. The reasoning for this is as follows.
The RISC-V ISA mandates that clock read by rdtime has to be of constant
period and synchronized between all hardware threads within 1 tick
(chapter 10.1 in version 20191213 of RISC-V spec).
However, this clock may not have a high enough frequency for dataplane
uses; e.g. on the HiFive Unmatched (FU740) it is 1 MHz.
There is a high-resolution alternative in the form of rdcycle, which is
clocked at the core clock frequency. The drawbacks are that it may be
disabled during sleep (WFI), its frequency might change due to DVFS, and
it is core-local and therefore cannot be used as a wall clock. It can
however be used for micro-benchmarking user applications, similarly to
AArch64's PMCCNTR PMU counter.
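A sketch of how the two schemes might read the counters (the actual EAL
implementation may differ):

    static inline uint64_t
    rte_rdtsc(void)
    {
            uint64_t val;
    #ifdef RTE_RISCV_RDTSC_USE_HPM
            asm volatile("rdcycle %0" : "=r" (val)); /* core cycles, DVFS-dependent */
    #else
            asm volatile("rdtime %0" : "=r" (val));  /* constant-period timer */
    #endif
            return val;
    }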
The platform is currently marked as Linux-only because the rte_cycles
implementation reads the timebase-frequency device-tree node through
the proc file system. This approach was chosen because the Linux kernel
depends on the presence of this device-tree node.
The i40e PMD driver is disabled on RISC-V as the rv64gc ISA has no vector
operations.
The compilation of following modules has been disabled by this commit
and will be re-enabled in later commits as fixes are introduced:
net/ixgbe, net/memif, net/tap, example/l3fwd.
Sponsored-by: Frank Zhao <frank.zhao@starfivetech.com>
Sponsored-by: Sam Grove <sam.grove@sifive.com>
Signed-off-by: Michal Mazurek <maz@semihalf.com>
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Batch op data is initialized inside the mempool alloc op. But
in the case of empty mempools, the alloc function is not
called, and hence the batch op data is not initialized either.
So ensure the validity of the batch op data inside the mempool
free op.
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
Controlling an already-existing GPIO should normally be frowned upon,
because we want to avoid situations where multiple contenders modify the
GPIO state simultaneously.
Still, there might be situations where this is actually needed,
restarting a killed application being one example.
So relax the current restrictions and respect user needs.
Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
When sending a command to an idxd device via PCI BAR, the response from
HW is checked to ensure it was successful. The response was incorrectly
being negated before being returned by the function, meaning error codes
cannot be checked against the HW specification.
This patch fixes the return values of the function by removing the
negation.
Fixes: 9449330a84 ("dma/idxd: create dmadev instances on PCI probe")
Fixes: 452c1916b0 ("dma/idxd: fix truncated error code in status check")
Cc: stable@dpdk.org
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Conor Walsh <conor.walsh@intel.com>
Move the "source_org" page to after overview, where it fits
better to explain the source-code layout of DPDK, before getting
into details of specific libraries such as EAL.
Also removes the older titles from the 3 documents which still had them.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Small improvements to the documentation based on Sphinx HTML doc output.
Fixes: 14b8f0bbe5 ("doc: add BPF library guide")
Fixes: b901d92836 ("bpf: support packet data load instructions")
Cc: stable@dpdk.org
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Allow building the generic arm64 target using the
config/arm/arm64_armv8_linux_* config, which works on both cn9k and
cn10k, by relaxing the cache line size requirements a bit.
While at it, move the cache line checks to a common place.
Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Update and add maintainers for NXP devices and RAW device API.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
If the setup function returns TEST_SKIPPED, the logs would say the test
case is skipped while the summary count would consider it under failed
cases. Address this by counting such test cases under 'skipped'.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
To allow other projects to easily use DPDK as a subproject, add in the
necessary dependency definitions. Slightly different definitions are
necessary for static and shared builds, since for shared builds the
drivers should not be linked in, and the internal meson dependency
objects are more complete.
To use DPDK as a subproject fallback (i.e. use the installed DPDK if
present, otherwise the bundled one), the following meson statement can be
used:
libdpdk = dependency('libdpdk', fallback: ['dpdk', 'dpdk_dep'])
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Ben Magistro <koncept1@gmail.com>
Tested-by: Ben Magistro <koncept1@gmail.com>
At device probe, the fslmc bus driver calls rte_vfio_get_group_fd() to
get an fd associated with a vfio group. This function first checks if the
group is already opened; otherwise it opens /dev/vfio/%u and increases
the number of active groups in default_vfio_cfg (which references the
default vfio container).
When adding the first group to a vfio_cfg, the caller is supposed to
pick an IOMMU type and set up DMA mappings for the container, as is done
by the PCI bus, but this is not done here. Instead, a new container is
created and used.
This prevents the PCI bus driver, which uses the default_vfio_cfg
container, from configuring the container, because
default_vfio_cfg->active_group > 1.
This patch fixes the issue by always creating a new container (and its
associated vfio_cfg) and binding the group to it.
Fixes: a69f793002 ("bus/fslmc: support multi VFIO group")
Cc: stable@dpdk.org
Signed-off-by: Romain Delhomel <romain.delhomel@6wind.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch replaces instances of zero-sized arrays, i.e. those at the end
of structures declared with "[0]", with the more standard "[]" syntax.
The replacement was done using a coccinelle script, with some manual
reverts and whitespace cleanup afterwards.
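An example of the transformation (struct names are illustrative):

    /* before */
    struct elem_old {
            uint32_t count;
            uint8_t data[0];  /* GNU zero-length array */
    };

    /* after */
    struct elem_new {
            uint32_t count;
            uint8_t data[];   /* C99 flexible array member */
    };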
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Add script to replace [0] with [] when used at the end of a struct.
The script also includes an additional struct member to match against so
as to avoid issues with arrays with only a single zero-length element.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The PAC N6000 is the first OFS platform; its device ID is added to the
ifpga device support list.
On previous FPGA platforms like the Intel PAC N3000 and N5000, the FME DFL
(Device Feature List) starts from BAR0 by default and the port DFL
location is indicated by the PORTn_OFFSET register in the FME. In the OFS
implementation, the FME DFL and port DFL locations can be defined
individually in a PCIe VSEC (Vendor Specific Extended Capability). In this
patch, the DFL definition is searched for in the VSEC; the legacy DFL is
used only when the DFL VSEC is not present.
In the original DFL enumeration process, the AFU is expected to be located
in the port DFL, but this is not the case in the OFS implementation. In
this patch, enumeration can search for the AFU in any PF/VF which has no
FME and no port.
Signed-off-by: Wei Huang <wei.huang@intel.com>
Acked-by: Tianfei Zhang <tianfei.zhang@intel.com>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
The ifpga driver provides an API, rte_pmd_ifpga_cleanup, to free the
software resources used by an ifpga card. The call chain of
rte_pmd_ifpga_cleanup is listed below:
rte_pmd_ifpga_cleanup()
  ifpga_rawdev_cleanup()
    rte_rawdev_pmd_release()
      rte_rawdev_close()
        ifpga_rawdev_close()
The interrupts are unregistered in ifpga_rawdev_destroy instead of in the
ifpga_rawdev_close function, so rte_pmd_ifpga_cleanup cannot free the
interrupt resources as expected.
To fix this issue, the interrupt unregistration is moved from
ifpga_rawdev_destroy to the ifpga_rawdev_close function. The resulting
call chain of ifpga_rawdev_destroy is as below:
ifpga_rawdev_destroy()
  ifpga_unregister_msix_irq() // removed
  rte_rawdev_pmd_release()
    rte_rawdev_close()
      ifpga_rawdev_close()
Fixes: e0a1aafe2a ("raw/ifpga: introduce IRQ functions")
Cc: stable@dpdk.org
Signed-off-by: Wei Huang <wei.huang@intel.com>
Acked-by: Tianfei Zhang <tianfei.zhang@intel.com>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Virtual devices created on an ifpga raw device are not removed
when the ifpga device is closed. To avoid resource leaks,
this patch introduces an ifpga virtual device remove function, so
virtual devices are destroyed after the ifpga raw device is closed.
Fixes: ef1e8ede3d ("raw/ifpga: add Intel FPGA bus rawdev driver")
Cc: stable@dpdk.org
Signed-off-by: Wei Huang <wei.huang@intel.com>
Acked-by: Tianfei Zhang <tianfei.zhang@intel.com>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
These APIs were introduced in DPDK 21.05 and have been tested over several
releases; the experimental tag can now be formally removed.
Signed-off-by: Wei Huang <wei.huang@intel.com>
Acked-by: Tianfei Zhang <tianfei.zhang@intel.com>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Add functions for setting and getting the priority of a thread.
Priorities on multiple platforms are similarly determined by a priority
value and a priority class/policy.
Currently in DPDK most threads operate at the OS-default priority level
but there are cases when increasing the priority is useful. For
example, high performance applications may require elevated priority
levels.
For these reasons, EAL will expose two priority levels which are named
suggestively "normal" and "realtime_critical" and are computed as
follows:
On Linux, the following mapping is created:
RTE_THREAD_PRIORITY_NORMAL corresponds to
* policy SCHED_OTHER
* priority value: (sched_get_priority_min(SCHED_OTHER) +
sched_get_priority_max(SCHED_OTHER))/2;
RTE_THREAD_PRIORITY_REALTIME_CRITICAL corresponds to
* policy SCHED_RR
* priority value: sched_get_priority_max(SCHED_RR);
On Windows, the following mapping is created:
RTE_THREAD_PRIORITY_NORMAL corresponds to
* class NORMAL_PRIORITY_CLASS
* priority THREAD_PRIORITY_NORMAL
RTE_THREAD_PRIORITY_REALTIME_CRITICAL corresponds to
* class REALTIME_PRIORITY_CLASS (when running with privileges)
* class HIGH_PRIORITY_CLASS (when running without privileges)
* priority THREAD_PRIORITY_TIME_CRITICAL
Note that on Linux the resulting priority value will be 0, in
accordance with the documentation, which mentions that the value should
be 0 for the SCHED_OTHER policy.
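A usage sketch (the function names are assumptions following the
rte_thread API naming; the priority levels are the ones introduced here):

    rte_thread_t tid = rte_thread_self();

    /* try to raise the current thread to the real-time critical level */
    if (rte_thread_set_priority(tid, RTE_THREAD_PRIORITY_REALTIME_CRITICAL) != 0)
            rte_thread_set_priority(tid, RTE_THREAD_PRIORITY_NORMAL);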
Signed-off-by: Narcisa Vasile <navasile@linux.microsoft.com>
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
A sequence lock (seqlock) is a synchronization primitive which allows
for data-race free, low-overhead, high-frequency reads, suitable for
data structures shared across many cores and which are updated
relatively infrequently.
A seqlock permits multiple parallel readers. A spinlock is used to
serialize writers. In cases where there is only a single writer, or
writer-writer synchronization is done by some external means, the
"raw" sequence counter type (and accompanying rte_seqcount_*()
functions) may be used instead.
To avoid resource reclamation and other issues, the data protected by
a seqlock is best off being self-contained (i.e., no pointers [except
to constant data]).
One way to think about seqlocks is that they provide means to perform
atomic operations on data objects larger than what the native atomic
machine instructions allow for.
DPDK seqlocks (and the underlying sequence counters) are not
preemption safe on the writer side. A thread preemption affects
performance, not correctness.
A seqlock contains a sequence number, which can be thought of as the
generation of the data it protects.
A reader will
1. Load the sequence number (sn).
2. Load, in arbitrary order, the seqlock-protected data.
3. Load the sn again.
4. Check if the first and second sn are equal, and even numbered.
If they are not, discard the loaded data, and restart from 1.
The first three steps need to be ordered using suitable memory fences.
A writer will
1. Take the spinlock, to serialize writer access.
2. Load the sn.
3. Store the original sn + 1 as the new sn.
4. Perform load and stores to the seqlock-protected data.
5. Store the original sn + 2 as the new sn.
6. Release the spinlock.
Proper memory fencing is required to make sure the first sn store, the
data stores, and the second sn store appear to the reader in the
mentioned order.
The sn loads and stores must be atomic, but the data loads and stores
need not be.
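A reader/writer usage sketch (function and macro names are assumed to
follow the rte_seqlock_ prefix; consult rte_seqlock.h for the exact API):

    static rte_seqlock_t lock = RTE_SEQLOCK_INITIALIZER;
    static struct position { int x; int y; } shared_pos;

    /* reader: retry until a consistent snapshot is read */
    struct position snapshot;
    uint32_t sn;
    do {
            sn = rte_seqlock_read_begin(&lock);
            snapshot = shared_pos; /* plain, non-atomic loads */
    } while (rte_seqlock_read_retry(&lock, sn));

    /* writer: the spinlock serializes writers, sn updates bracket the data stores */
    rte_seqlock_write_lock(&lock);
    shared_pos.x++;
    shared_pos.y++;
    rte_seqlock_write_unlock(&lock);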
The original seqlock design and implementation was done by Stephen
Hemminger. This is an independent implementation, using C11 atomics.
For more information on seqlocks, see
https://en.wikipedia.org/wiki/Seqlock
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
musl lacks __ppc_get_timebase() but has __builtin_ppc_get_timebase().
Signed-off-by: Duncan Bellamy <dunk@denkimushi.com>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
This patch fixes the following build failure, seen on Ubuntu 16.04
with gcc 5.4.0, caused by an uninitialized variable:
[..]
examples/pipeline/cli.c:2853:9: error: 'session_id' may be used
uninitialized in this function [-Werror=maybe-uninitialized]
[..]
Fixes: 172254555f ("examples/pipeline: support packet mirroring")
Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>