a PMC. It was possible that we could have turned a bit on but
never cleared it.
Extend the calls to rdmsr() to all necessary functions, not
just those which previously caused a panic.
Pointed out by: jhb@
MFC after: 1 week
All of the necessary wrmsr calls are now preceded by a rdmsr
and we leave the reserved bits alone.
Document the bits in the relevant registers for future reference.
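For reference, the idiom described above is a read-modify-write of the
control register, so reserved bits are written back exactly as the
hardware reported them. A minimal sketch, with an illustrative MSR and
bit name (the real code touches the vendor-specific registers
documented in the driver):

    #include <sys/types.h>
    #include <machine/cpufunc.h>        /* rdmsr(), wrmsr() */

    /* Illustrative names only; not the driver's actual constants. */
    #define EXAMPLE_EVSEL_MSR   0x186           /* an event-select MSR */
    #define EXAMPLE_EVSEL_EN    (1ULL << 22)    /* counter-enable bit */

    static void
    example_pmc_stop(void)
    {
            uint64_t evsel;

            evsel = rdmsr(EXAMPLE_EVSEL_MSR);   /* read current value */
            evsel &= ~EXAMPLE_EVSEL_EN;         /* clear only our bit */
            wrmsr(EXAMPLE_EVSEL_MSR, evsel);    /* reserved bits intact */
    }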
Tested by: mdf
MFC after: 1 week
This bug would cause allocating sample-mode PMCs to fail with
ENOMEM after allocating several counting-mode PMCs.
Approved by: jkoshy (mentor)
MFC after: 2 weeks
domain clock, 8 programmable PMCs.
- Westmere-based CPU (Xeon 5600, Corei7 980X) support.
- New man pages with events list for core and uncore.
- Updated Corei7 events with Intel 253669-033US December 2009 doc.
Some events that were removed from the documentation have been
kept in the code but are marked as obsolete in the man page.
- Offcore response events can be set up with the rsp token.
Sponsored by: NETASQ
pmc_flush_logfile() is now non-blocking and simply asks the kernel
to shut down the file. From that point on, no more data is
accepted by the log thread, and the file is closed once the last
buffer has been flushed.
This removes a deadlock in which pmcstat asks for a flush while
being unable to drain the pipe itself.
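For pmcstat-like consumers the call is now effectively
fire-and-forget. A minimal user-space sketch of the new semantics
(error handling abbreviated):

    #include <err.h>
    #include <pmc.h>

    /*
     * Ask the kernel to wind down the configured log file without
     * blocking; no further log data is accepted after this call and
     * the file is closed once the last buffer has been written out.
     * Assumes pmc_init() and pmc_configure_logfile() were called
     * earlier.
     */
    static void
    example_stop_logging(void)
    {
            if (pmc_flush_logfile() < 0)
                    warn("pmc_flush_logfile");
    }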
MFC after: 3 days
* Correct a group of typos: for Core2 programmable events, check
user supplied umask values against the correct event descriptor
field.
Submitted by: Ryan Stone <rysto32 at gmail dot com>
This brings hwpmc(4) support for 2nd and 3rd generation XScale cores.
Right now it's enabled by default to make sure we test this a bit.
When the time comes it can be disabled by default.
Tested on Gateworks boards.
A man page is coming.
Obtained from: //depot/user/rpaulo/xscalepmc/...
- fix a system deadlock on process exit when the sample buffer
is full (pmclog_loop blocked in fo_write) and pmcstat exits.
Reviewed by: jkoshy
MFC after: 3 weeks
out of the original commit of i7 support. These are all the counters
on pages A-32 and A-33 of the _Intel(R) 64 and IA-32 Architectures
Software Developer's Manual Vol 3B_, June 2009. Almost all
of these counters relate to operations on the L2 cache.
Reviewed by: jkoshy
MFC after: 1 month
- Provide lapic_disable_pmc(), lapic_enable_pmc(), and lapic_reenable_pmc()
routines in the local APIC code that the hwpmc(4) driver can use to
manage the local APIC PMC interrupt vector.
- Do not enable the local APIC PMC interrupt vector by default when
HWPMC_HOOKS is enabled. Instead, the hwpmc(4) driver explicitly
enables the interrupt when it is successfully initialized and
disables the interrupt when it is unloaded, as sketched below.
This avoids enabling the interrupt on unsupported CPUs, which may
result in spurious NMIs.
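The intended call pattern, seen from the driver, is roughly the
following; the surrounding function names are made up for the sketch
and are not the actual hwpmc(4) code, and lapic_reenable_pmc(), which
is meant for the interrupt path, is omitted:

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <machine/apicvar.h>    /* lapic_enable_pmc() etc. */

    static int
    example_md_init(void)
    {
            /*
             * Enable the LAPIC PMC interrupt vector only after the
             * driver has decided the CPU is supported; on failure
             * the vector is left untouched.
             */
            if (lapic_enable_pmc() == 0)
                    return (ENXIO);
            return (0);
    }

    static void
    example_md_unload(void)
    {
            /* Give the vector back when the driver is unloaded. */
            lapic_disable_pmc();
    }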
Reported by: rnoland
Reviewed by: jkoshy
Approved by: re (kib)
MFC after: 2 weeks
for the pmclog.
Reported by: Ryan Stone <rstone at sandvine dot com>
Tested by: Ryan Stone <rstone at sandvine dot com>
Sponsored by: Sandvine Incorporated
- Initialize variables before use.
- Remove a KASSERT() that could falsely trigger if there are other sources
of NMIs in the system.
Efficiency tweak:
- When checking PMCs that overflowed, ignore PMCs that were not configured for
sampling.
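The sampling check reduces to something along these lines inside the
overflow scan (surrounding names are illustrative; PMC_TO_MODE() and
PMC_IS_SAMPLING_MODE() are existing hwpmc helpers):

    /* struct pmc, PMC_TO_MODE(), PMC_IS_SAMPLING_MODE() live here. */
    #include <sys/param.h>
    #include <sys/pmc.h>

    /*
     * Illustrative helper: only sampling-mode PMCs can have raised
     * the overflow interrupt, so counting-mode rows are skipped.
     */
    static int
    example_row_may_have_interrupted(struct pmc *pm)
    {
            return (pm != NULL &&
                PMC_IS_SAMPLING_MODE(PMC_TO_MODE(pm)));
    }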
time it is marked for user space callchain capture in the NMI
handler and the time the callchain capture callback runs.
- Improve code and control flow clarity by invoking hwpmc(4)'s user
space callchain capture callback directly from low-level code.
Reviewed by: jhb (kern/subr_trap.c)
Testing (various patch revisions): gnn,
Fabien Thomas <fabien dot thomas at netasq dot com>,
Artem Belevich <artemb at gmail dot com>
hardware for PMCs that have been configured for sampling.
- Bug fix: acknowledge PMC hardware overflows irrespective of the
(software) PMC's state.
and Core Duo), models 0xF (Core2), model 0x17 (Core2Extreme) and
model 0x1C (Atom).
In these CPUs, the actual numbers, kinds and widths of PMCs present
need to be queried at run time. Support for specific "architectural"
events also needs to be queried at run time.
Model 0xE CPUs support programmable PMCs; subsequent CPUs
additionally support "fixed-function" counters.
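On these CPUs the counts and widths can be read at run time from
CPUID leaf 0xA ("architectural performance monitoring"). A sketch of
that kind of query, with illustrative variable names and bit fields
as documented in the Intel SDM:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <machine/cpufunc.h>    /* do_cpuid() */

    static void
    example_query_pmc_features(void)
    {
            u_int regs[4], nprog, progwidth, nfixed;

            do_cpuid(0xa, regs);
            nprog     = (regs[0] >> 8) & 0xff;      /* EAX[15:8]  */
            progwidth = (regs[0] >> 16) & 0xff;     /* EAX[23:16] */
            nfixed    = regs[3] & 0x1f;             /* EDX[4:0]   */
            printf("%u programmable PMCs (%u bits wide), %u fixed\n",
                nprog, progwidth, nfixed);
    }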
- Use event names that are close to vendor documentation, taking
into account that:
- events with identical semantics on two or more CPUs in this family
can have differing names in vendor documentation,
- identical vendor event names may map to differing events across
CPUs,
- each type of CPU supports a different subset of measurable
events.
Fixed-function and programmable counters both use the same vendor
names for events. The use of a class name prefix ("iaf-" or
"iap-", respectively) permits these to be distinguished; see the
sketch after this list.
- In libpmc, refactor pmc_name_of_event() into a public interface
and an internal helper function, for use by log handling code.
- Minor code tweaks: staticize a global, freshen a few comments.
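A hedged illustration of the class prefixes in an event specifier
passed to pmc_allocate(3); the event spelling and the function
signature shown follow the libpmc of this period, and the per-CPU
pmc(3) man pages should be consulted for the exact names:

    #include <err.h>
    #include <pmc.h>

    /*
     * The same vendor event requested once from the fixed-function
     * class and once from the programmable class, distinguished
     * only by the "iaf-"/"iap-" prefix.  Spelling is approximate;
     * assumes pmc_init() has already succeeded.
     */
    static void
    example_allocate_by_class(void)
    {
            pmc_id_t fixed, prog;

            if (pmc_allocate("iaf-instr-retired.any", PMC_MODE_TC, 0,
                PMC_CPU_ANY, &fixed) < 0)
                    warn("iaf allocation");
            if (pmc_allocate("iap-instr-retired.any", PMC_MODE_TC, 0,
                PMC_CPU_ANY, &prog) < 0)
                    warn("iap allocation");
    }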
Tested by: gnn
dependencies. A 'struct pmc_classdep' structure describes operations
on PMCs; 'struct pmc_mdep' contains one or more 'struct pmc_classdep'
structures depending on the CPU in question.
Inside PMC class dependent code, row indices are relative to the
PMCs supported by the PMC class; MI code in "hwpmc_mod.c" translates
global row indices before invoking class dependent operations.
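In outline the split looks like the following; the field names are
meant to evoke the driver's definitions rather than reproduce them
exactly, and most operations are elided:

    #include <stdint.h>

    /* One descriptor per PMC class (fixed-function, programmable,
     * TSC, ...), holding the class-local operations. */
    struct example_classdep {
            int     pcd_class;      /* PMC class identifier */
            int     pcd_num;        /* number of PMCs in this class */
            int     pcd_ri;         /* first global row index used */
            int     (*pcd_start_pmc)(int cpu, int ri);
            int     (*pcd_stop_pmc)(int cpu, int ri);
            int     (*pcd_read_pmc)(int cpu, int ri, uint64_t *v);
            /* ... allocation, configuration, etc. ... */
    };

    /* The machine-dependent description aggregates the classes that
     * the CPU actually has. */
    struct example_mdep {
            int                     pmd_nclass;
            struct example_classdep pmd_classdep[4];
    };

    /* MI code converts a global row index to a class-relative one,
     * roughly ri_class = ri_global - pcd_ri, before calling into
     * the class operations above. */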
- Augment the OP_GETCPUINFO request with the number of PMCs present
in a PMC class.
- Move code common to Intel CPUs to file "hwpmc_intel.c".
- Move TSC handling to file "hwpmc_tsc.c".
vnode in question does not need to be held. All the data structures used
during the name lookup are protected by the global name cache lock.
Instead, the caller merely needs to ensure a reference is held on
the vnode (such as via vhold()) to keep it from being freed.
In the case of procfs' <pid>/file entry, grab the process lock while we
gain a new reference (via vhold()) on p_textvp to fully close races with
execve(2).
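The procfs path now looks roughly like this; the snippet is a
simplified sketch rather than the actual procfs code, and the
vn_fullpath() signature is the one from this period:

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>
    #include <sys/vnode.h>

    static int
    example_textvp_fullpath(struct thread *td, struct proc *p,
        char **fullpath, char **freepath)
    {
            struct vnode *textvp;
            int error;

            /* Take the hold under the process lock to close the
             * race with execve(2) replacing p_textvp. */
            PROC_LOCK(p);
            textvp = p->p_textvp;
            if (textvp == NULL) {
                    PROC_UNLOCK(p);
                    return (ENOENT);
            }
            vhold(textvp);
            PROC_UNLOCK(p);

            /* The lookup needs only the hold, not the vnode lock. */
            error = vn_fullpath(td, textvp, fullpath, freepath);
            vdrop(textvp);
            return (error);
    }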
For the kern.proc.vmmap sysctl handler, use a shared vnode lock around
the call to VOP_GETATTR() rather than an exclusive lock.
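The VOP_GETATTR() change amounts to taking the lock shared, along
these lines (sketch only; vnode interface signatures as of this
period):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/ucred.h>
    #include <sys/vnode.h>

    static int
    example_getattr_shared(struct vnode *vp, struct vattr *vap,
        struct ucred *cred)
    {
            int error;

            /* A shared vnode lock is sufficient for VOP_GETATTR(). */
            vn_lock(vp, LK_SHARED | LK_RETRY);
            error = VOP_GETATTR(vp, vap, cred);
            VOP_UNLOCK(vp, 0);
            return (error);
    }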
MFC after: 1 month
reduce ABI disruptions when new CPU types and new PMC events are added
in the future.
- Support alternate spellings for PMC events. Derive the canonical
spelling of an event name from its enumeration name in 'enum pmc_event'.
- Provide a way for users to disambiguate between identically named events
supported by multiple classes of PMCs in a CPU.
- Change libpmc's machine-dependent event specifier parsing code to
better support CPUs containing two or more classes of PMC resources.