Remove variable declaration from within for loop.
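A minimal before/after sketch of the kind of change involved (the loop
body and names are illustrative):

    /* Before: declaration in the loop header requires C99 mode. */
    for (int i = 0; i < n; i++)
            process(i);

    /* After: declare at the top of the enclosing block. */
    int i;

    for (i = 0; i < n; i++)
            process(i);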
Fixes: f14791a812 ("examples/vm_power_mgr: add policy to channels")
Signed-off-by: David Hunt <david.hunt@intel.com>
We need to set the VF MAC address from the host, so that the guest and
the host stay in sync. Otherwise, we end up with a random MAC on the
guest and a 00:00:00:00:00:00 MAC on the host.
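A hedged sketch of the host-side call, assuming the ixgbe-specific
helper (the port number, VF number and MAC value are illustrative):

    #include <stdio.h>
    #include <rte_ether.h>
    #include <rte_pmd_ixgbe.h>

    /* Give VF 0 on port 0 a fixed MAC so guest and host agree on it. */
    struct ether_addr vf_mac = {
            .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }
    };

    if (rte_pmd_ixgbe_set_vf_mac_addr(0, 0, &vf_mac) < 0)
            printf("Failed to set VF MAC\n");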
Signed-off-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Here we're adding an example of setting up a policy, and allowing the
vm_cli_guest app to send it to the host using the CLI command
"send_policy now".
Signed-off-by: Nemanja Marjanovic <nemanja.marjanovic@intel.com>
Signed-off-by: Rory Sexton <rory.sexton@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
We need to initialise the ports we're monitoring to be able to see
the throughput.
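A minimal bring-up sketch using the standard ethdev calls (one rx/tx
queue and 128 descriptors are illustrative choices):

    #include <rte_ethdev.h>

    /* Minimal port bring-up so per-port stats (and hence throughput)
     * can be read. */
    static int
    port_init(uint16_t port, struct rte_mempool *pool)
    {
            struct rte_eth_conf conf = { 0 };

            if (rte_eth_dev_configure(port, 1, 1, &conf) < 0)
                    return -1;
            if (rte_eth_rx_queue_setup(port, 0, 128,
                            rte_eth_dev_socket_id(port), NULL, pool) < 0)
                    return -1;
            if (rte_eth_tx_queue_setup(port, 0, 128,
                            rte_eth_dev_socket_id(port), NULL) < 0)
                    return -1;
            return rte_eth_dev_start(port);
    }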
Signed-off-by: Nemanja Marjanovic <nemanja.marjanovic@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add extra commands to the guest CLI to allow enabling/disabling of
per-core turbo. Includes messages to vm_power_mgr in the host.
Signed-off-by: David Hunt <david.hunt@intel.com>
Add extra commands to the command line to allow enabling/disabling of
per-core turbo.
When a core has turbo enabled, calling for max frequency will allow it to
go to a turbo frequency (P0n).
When a core has turbo disabled, calling for max frequency will allow it to
go to the maximum non-turbo frequency (P1), but not beyond.
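A small sketch with the librte_power per-lcore turbo calls (lcore 1 is
illustrative):

    #include <rte_power.h>

    rte_power_init(1);

    /* Turbo enabled: scaling to max may reach a turbo frequency (P0n). */
    rte_power_freq_enable_turbo(1);
    rte_power_freq_max(1);

    /* Turbo disabled: scaling to max stops at the P1 frequency. */
    rte_power_freq_disable_turbo(1);
    rte_power_freq_max(1);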
Signed-off-by: David Hunt <david.hunt@intel.com>
The macro CHANNEL_CMDS_MAX_CPUS stands for the maximum number of cores
controlled by virtual channels. This macro is only used in the example,
so move it from the library to the example's header file.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
In add_vm: the string buffer may not have a null terminator if the
source string's length is equal to the buffer size.
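A sketch of the usual fix for this pattern (the buffer names are
illustrative):

    /* Truncate explicitly: strncpy() does not NUL-terminate when the
     * source fills the whole destination buffer. */
    strncpy(new_domain->name, vm_name, sizeof(new_domain->name));
    new_domain->name[sizeof(new_domain->name) - 1] = '\0';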
Coverity issue: 30691
Fixes: e8ae9b6625 ("examples/vm_power: channel manager and monitor in host")
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
vm_power_manager utilizes the libvirt API virDomainGetVcpuPinInfo to
retrieve domU vcpu information. This API has been available since
version 0.9.3; the default libvirt on SUSE 11 SP3 32-bit is 0.8.8.
examples/vm_power_manager/channel_manager.c:
channel_manager.c:117:3: error: implicit declaration of function
'virDomainGetVcpuPinInfo'
Check the libvirt version and skip the example, or raise an error, when
trying to compile without libvirt or with a libvirt that is too old.
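One way to enforce the minimum version at compile time; a sketch, not
necessarily the patch's exact approach (the real check may live in the
build system instead):

    #include <libvirt/libvirt.h>

    /* LIBVIR_VERSION_NUMBER encodes major*1000000 + minor*1000 + micro,
     * so 0.9.3 becomes 9003. */
    #if LIBVIR_VERSION_NUMBER < 9003
    #error vm_power_manager needs libvirt >= 0.9.3 (virDomainGetVcpuPinInfo)
    #endif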
Fixes: e8ae9b662 ("examples/vm_power: channel manager and monitor in host")
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The file rte_config.h is automatically generated and included.
No need to #include it.
The example performance-thread needs a Makefile fix to avoid
overwriting the default CFLAGS.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
virNodeGetCPUMap was introduced in libvirt 1.0. In some Linux
distributions, such as Ubuntu 12/14 and Fedora 18, the libvirt version
is older than 1.0, so this sample will not build.
Replace "virNodeGetCPUMap" with another libvirt API "virNodeGetInfo".
Signed-off-by: Marvin Liu <yong.liu@intel.com>
The host CPU mapping structure can only support 64 CPUs. When the
vm_power sample runs on a platform with more than 64 CPUs, it generates
an improper physical core mask. Limiting the supported host CPUs to 64
fixes this issue.
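A sketch of the guard, with an illustrative variable and log line; a
64-bit core mask simply cannot describe more CPUs:

    if (n_host_cpus > CHANNEL_CMDS_MAX_CPUS) {
            printf("Warning: only the first %d host CPUs will be used\n",
                            CHANNEL_CMDS_MAX_CPUS);
            n_host_cpus = CHANNEL_CMDS_MAX_CPUS;
    }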
Fixes: e9f64db946 ("examples/vm_power: show warning when more than 64 cores")
Signed-off-by: Marvin Liu <yong.liu@intel.com>
When using the VM power manager app on systems with more than 64 cores,
the app could not run, even when the user was not using cores 64 or
higher; on such systems it results in undefined behaviour.
Therefore, this patch allows the user to run the app on a system with
more than 64 cores, warning the user not to use cores 64 or higher in
the VM(s).
Add a new known issue to the release notes: the VM power manager app
may not work on a system with more than 64 cores.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Tested-by: Marvin Liu <yong.liu@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
rte_free handles getting passed a NULL pointer.
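So the guard can simply be dropped (a sketch; the variable name is
illustrative):

    /* Before */
    if (chan_info != NULL)
            rte_free(chan_info);

    /* After: rte_free(NULL) is a no-op. */
    rte_free(chan_info);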
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Fix a typo: cmdline_parse_token_string_t was used in place of
cmdline_parse_token_num_t.
Seen with clang-3.5.
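A sketch of the corrected declaration, following the usual cmdline
pattern (the struct and field names are illustrative):

    #include <cmdline_parse_num.h>

    /* The token parses a number, so it needs the num token type. */
    cmdline_parse_token_num_t cmd_freq_core_num =
            TOKEN_NUM_INITIALIZER(struct cmd_freq_result, core_num, UINT8);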
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The check for NULL is in the wrong position in the "if" error leg. The
pointer should be checked for NULL before the value it points to is
examined.
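A sketch of the corrected ordering (the names are illustrative):

    /* Wrong: dereferences chan_info before the NULL test. */
    if (chan_info->status != CHANNEL_MGR_CHANNEL_CONNECTED ||
                    chan_info == NULL)
            return -1;

    /* Right: || short-circuits, so the pointer is tested first. */
    if (chan_info == NULL ||
                    chan_info->status != CHANNEL_MGR_CHANNEL_CONNECTED)
            return -1;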
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The length of the path to a unix socket is not PATH_MAX but rather
UNIX_PATH_MAX, which is generally just over 100 bytes in size. It's not
actually defined in sys/un.h on Linux, despite the man page referencing
it, so calculate the size in the case where it's not defined.
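A sketch of the fallback definition, derived from the structure itself:

    #include <sys/un.h>

    #ifndef UNIX_PATH_MAX
    /* sun_path is a fixed-size array, so its size is the real limit. */
    #define UNIX_PATH_MAX sizeof(((struct sockaddr_un *)0)->sun_path)
    #endif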
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
CACHE_LINE_SIZE is a macro defined in machine/param.h in FreeBSD and
conflicts with the DPDK macro of the same name.
Adding RTE_ prefix to avoid conflicts.
CACHE_LINE_MASK and CACHE_LINE_ROUNDUP are also prefixed.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
[Thomas: updated on HEAD, including PPC]
Signed-off-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Provides a small sample application (guest_vm_power_mgr) to run on a VM.
The application is run by providing a core mask (-c) and number of memory
channels (-n). The core mask corresponds to the number of lcore channels to
attempt to open. A maximum of 64 channels per VM is allowed. The channels
must be monitored by the host.
After successful initialisation a CPU frequency command can be sent to the host
using:
set_cpu_freq <lcore_num> <up|down|min|max>.
Signed-off-by: Alan Carew <alan.carew@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Launches the CLI thread and the Monitor thread and initialises
resources.
Requires a minimum of two lcores to run; additional cores specified by
the EAL core mask are not used.
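A sketch of the launch sequence; run_monitor and run_cli are
hypothetical thread entry points:

    #include <stdlib.h>
    #include <rte_debug.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>

    /* Run the monitor on the first worker lcore and the CLI on the
     * main lcore; bail out early if only one lcore is available. */
    unsigned lcore_id = rte_get_next_lcore(-1, 1, 0);

    if (lcore_id == RTE_MAX_LCORE)
            rte_exit(EXIT_FAILURE, "At least two lcores are required\n");
    rte_eal_remote_launch(run_monitor, NULL, lcore_id);
    run_cli(NULL);                  /* blocks on the command prompt */
    rte_eal_mp_wait_lcore();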
Signed-off-by: Alan Carew <alan.carew@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
A wrapper around librte_power (using ACPI cpufreq), providing locking
around the non-thread-safe library, allowing frequency changes based on
core masks and core numbers from both the CLI thread and the epoll
monitor thread.
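A sketch of the locking wrapper (the function name is illustrative):

    #include <pthread.h>
    #include <rte_power.h>

    static pthread_mutex_t power_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Serialise calls into the non-thread-safe librte_power so the CLI
     * and monitor threads cannot race each other. */
    int
    power_manager_scale_core_up(unsigned core_num)
    {
            int ret;

            pthread_mutex_lock(&power_lock);
            ret = rte_power_freq_up(core_num);
            pthread_mutex_unlock(&power_lock);
            return ret;
    }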
Signed-off-by: Alan Carew <alan.carew@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The CLI is used for administering the channel monitor and manager, and
for manually setting the CPU frequency on the host.
Supports the following commands (see the cmdline wiring sketch after
the list):
add_vm [Mul-choice STRING]: add_vm|rm_vm <name>, add a VM for subsequent
operations with the CLI or remove a previously added VM from the VM Power
Manager
rm_vm [Mul-choice STRING]: add_vm|rm_vm <name>, add a VM for subsequent
operations with the CLI or remove a previously added VM from the VM Power
Manager
add_channels [Fixed STRING]: add_channels <vm_name> <list>|all, add
communication channels for the specified VM, the virtio channels must be
enabled in the VM configuration (qemu/libvirt) and the associated VM must
be active. <list> is a comma-separated list of channel numbers to add;
using the keyword 'all' will attempt to add all channels for the VM
set_channel_status [Fixed STRING]:
set_channel_status <vm_name> <list>|all enabled|disabled, enable or disable
the communication channels in list (comma-separated) for the specified VM,
alternatively list can be replaced with the keyword 'all'. Disabled channels
will still receive packets on the host, however the commands they specify
will be ignored. Set status to 'enabled' to begin processing requests again.
show_vm [Fixed STRING]: show_vm <vm_name>, prints the information on the
specified VM(s), the information lists the number of vCPUS, the pinning to
pCPU(s) as a bit mask, along with any communication channels associated with
each VM
show_cpu_freq_mask [Fixed STRING]: show_cpu_freq_mask <mask>, Get the current
frequency for each core specified in the mask
set_cpu_freq_mask [Fixed STRING]: set_cpu_freq_mask <core_mask>
<up|down|min|max>, Set the current frequency for the cores specified in
<core_mask> by scaling each up/down/min/max.
show_cpu_freq [Fixed STRING]: Get the current frequency for the specified core
set_cpu_freq [Fixed STRING]: set_cpu_freq <core_num> <up|down|min|max>,
Set the current frequency for the specified core by scaling up/down/min/max
quit [Fixed STRING]: close the application
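A sketch of how one of these commands is wired into the DPDK cmdline
library, using the simplest (quit) as the example:

    #include <cmdline.h>
    #include <cmdline_parse.h>
    #include <cmdline_parse_string.h>

    struct cmd_quit_result {
            cmdline_fixed_string_t quit;
    };

    static void
    cmd_quit_parsed(void *parsed_result, struct cmdline *cl, void *data)
    {
            cmdline_quit(cl);
    }

    static cmdline_parse_token_string_t cmd_quit_quit =
            TOKEN_STRING_INITIALIZER(struct cmd_quit_result, quit, "quit");

    static cmdline_parse_inst_t cmd_quit = {
            .f = cmd_quit_parsed,
            .data = NULL,
            .help_str = "close the application",
            .tokens = {
                    (void *)&cmd_quit_quit,
                    NULL,
            },
    };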
Signed-off-by: Alan Carew <alan.carew@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The manager is responsible for adding communication channels to the Monitor
thread, tracking and reporting VM state, and employs the libvirt API for
synchronization with the KVM Hypervisor. The manager interacts with the
Hypervisor to discover the mapping of virtual CPUs (vCPUs) to the host
physical CPUs (pCPUs) and to inspect the VM running state.
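A condensed sketch of that hypervisor interaction (the domain name is
illustrative and error handling is omitted):

    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom = virDomainLookupByName(conn, "ubuntu2");
    virDomainInfo dominfo;
    virNodeInfo nodeinfo;

    virDomainGetInfo(dom, &dominfo);    /* dominfo.nrVirtCpu = vCPUs */
    virNodeGetInfo(conn, &nodeinfo);

    /* One bitmap per vCPU, wide enough for every host pCPU. */
    int maplen = VIR_CPU_MAPLEN(VIR_NODEINFO_MAXCPUS(nodeinfo));
    unsigned char *cpumaps = calloc(dominfo.nrVirtCpu, maplen);

    virDomainGetVcpuPinInfo(dom, dominfo.nrVirtCpu, cpumaps, maplen,
                    VIR_DOMAIN_AFFECT_CURRENT);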
The manager provides the following functionality to the CLI:
1) Connect to a libvirtd instance, default: qemu:///system
2) Add a VM to an internal list; each VM is identified by a "name" which
must correspond to a valid libvirt Domain Name.
3) Add communication channels associated with a VM to the epoll-based
Monitor thread.
The channels must exist and be in the form of:
/tmp/powermonitor/<vm_name>.<channel_number>. Each channel is a
Virtio-Serial endpoint configured as an AF_UNIX file socket and opened in
non-blocking mode.
Each VM can have a maximum of 64 channels associated with it.
4) Disable or re-enable VM communication channels; channels once added to
the Monitor thread remain in that thread's control, however acting on
channel requests can be disabled and re-enabled via the CLI.
The monitor is an epoll-based infinite loop running in a separate thread
that waits on channel events from VMs and calls the corresponding
functions. Channel definitions from the manager are registered via the
epoll event's opaque pointer when calling epoll_ctl(EPOLL_CTL_ADD); this
allows obtaining the channel's file descriptor for reading EPOLLIN events
and mapping the vCPU to the pCPU(s) associated with a request from a
particular VM.
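A sketch of that registration step (the channel structure is
illustrative):

    #include <sys/epoll.h>

    struct epoll_event ev;

    /* The channel definition rides in the opaque pointer, so a wakeup
     * maps straight back to the VM/channel that caused it. */
    ev.events = EPOLLIN;
    ev.data.ptr = chan_info;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, chan_info->fd, &ev) < 0)
            return -1;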
Signed-off-by: Alan Carew <alan.carew@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>