Bjoern A. Zeeb
14f2533c56 As per [1], Intel only supports this driver on 64-bit platforms.
For now restrict it to amd64.  Other architectures might be
re-added later once tested.

Remove the drivers from the global NOTES and files configuration files
and move them to the amd64-specific ones.
Remove the drivers from the i386 modules build and only leave the
amd64 version.

Rather than depending on "inet", depend on "pci" and make sure that
ixl(4) and ixlv(4) can be compiled independently [2].  This also
allows the drivers to build properly on IPv4-only or IPv6-only
kernels.

PR:		193824 [2]
Reviewed by:	eric.joyner intel.com
MFC after:	3 days

References:
[1] http://lists.freebsd.org/pipermail/svn-src-all/2014-August/090470.html
2014-09-23 08:33:03 +00:00
Neel Natu
af198d882a Allow more VMCB fields to be cached:
- CR2
- CR0, CR3, CR4 and EFER
- GDT/IDT base/limit fields
- CS/DS/ES/SS selector/base/limit/attrib fields

The caching can be further restricted via the tunable 'hw.vmm.svm.vmcb_clean'.

Restructure the code such that the fields above are only modified in a single
place. This makes it easy to invalidate the VMCB cache when any of these fields
is modified.
2014-09-21 23:42:54 +00:00
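
A minimal userland sketch of the clean-bits bookkeeping described above: cached
fields are only written through one helper, which marks them dirty so the cache
can be invalidated before the next VMRUN.  The bit positions and the
svm_set_dirty() shape are illustrative assumptions, not the vmm source.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative clean-bit positions; the real VMCB_CACHE_* definitions in
     * the SVM code may use different values. */
    #define VMCB_CACHE_CR   (1U << 5)   /* CR0, CR3, CR4, EFER */
    #define VMCB_CACHE_DT   (1U << 7)   /* GDT/IDT base and limit */
    #define VMCB_CACHE_SEG  (1U << 8)   /* CS/DS/ES/SS state */
    #define VMCB_CACHE_CR2  (1U << 9)   /* CR2 */

    /* Fields the CPU may treat as unchanged, e.g. as limited by the
     * hw.vmm.svm.vmcb_clean tunable. */
    static uint32_t vmcb_clean = VMCB_CACHE_CR | VMCB_CACHE_DT |
        VMCB_CACHE_SEG | VMCB_CACHE_CR2;
    static uint32_t dirty;              /* fields modified since the last VMRUN */

    /* The single choke point that writes a cached field marks it dirty. */
    static void
    svm_set_dirty(uint32_t what)
    {
        dirty |= what;
    }

    int
    main(void)
    {
        svm_set_dirty(VMCB_CACHE_CR);   /* e.g. the guest changed EFER */
        printf("clean bits advertised on VMRUN: %#x\n", vmcb_clean & ~dirty);
        dirty = 0;                      /* reset after the VMRUN */
        return (0);
    }
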
Neel Natu
4eea1566cb Get rid of unused stat VMM_HLT_IGNORED. 2014-09-21 18:52:56 +00:00
Konstantin Belousov
060cd4d500 Update and clarify comments.  Remove the useless counter for an
impossible, but seen-in-the-wild, situation (on buggy hypervisors).

In collaboration with:	bde
MFC after:	1 week
2014-09-21 09:06:50 +00:00
Neel Natu
ba28c094bb The memory type bits (PAT, PCD, PWT) associated with a nested PTE or PDE
are identical to those in traditional x86 page tables.
2014-09-21 06:36:17 +00:00
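
For reference, the standard x86 PTE memory-type bits and the PAT index they
select; per the commit, nested (NPT) entries use the same layout.  A small
standalone sketch:

    #include <stdint.h>
    #include <stdio.h>

    /* Standard x86 PTE memory-type bits; per the commit, nested (NPT) PTEs and
     * PDEs use the same positions. */
    #define PG_PWT      (1UL << 3)  /* write-through */
    #define PG_PCD      (1UL << 4)  /* cache disable */
    #define PG_PTE_PAT  (1UL << 7)  /* PAT bit in a 4KB PTE (bit 12 in a 2MB PDE) */

    int
    main(void)
    {
        uint64_t pte = 0x200000 | PG_PCD | PG_PWT;  /* uncacheable 4KB mapping */
        unsigned idx = (!!(pte & PG_PTE_PAT) << 2) |
            (!!(pte & PG_PCD) << 1) | !!(pte & PG_PWT);

        printf("PAT index selected by this PTE: %u\n", idx);
        return (0);
    }
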
Neel Natu
8f02c5e456 IFC r271888.
Restructure MSR emulation so it is all done in processor-specific code.
2014-09-20 21:46:31 +00:00
Neel Natu
b6cf6c8ca6 IFC @r271887 2014-09-20 06:27:37 +00:00
Neel Natu
9d8d8e3ee7 Add some more KTR events to help debugging. 2014-09-20 05:13:03 +00:00
Neel Natu
cb44ea41cb MSR_KGSBASE is no longer saved and restored from the guest MSR save area. This
behavior was changed in r271888 so update the comment block to reflect this.

MSR_KGSBASE is accessible from the guest without triggering a VM-exit. The
permission bitmap for MSR_KGSBASE is modified by vmx_msr_guest_init() so get
rid of redundant code in vmx_vminit().
2014-09-20 05:12:34 +00:00
Neel Natu
c3498942a5 Restructure the MSR handling so it is entirely handled by processor-specific
code. There are only a handful of MSRs common between the two so there isn't
too much duplicate functionality.

The VT-x code has the following types of MSRs:

- MSRs that are unconditionally saved/restored on every guest/host context
  switch (e.g., MSR_GSBASE).

- MSRs that are restored to guest values on entry to vmx_run() and saved
  before returning. This is an optimization for MSRs that are not used in
  host kernel context (e.g., MSR_KGSBASE).

- MSRs that are emulated and every access by the guest causes a trap into
  the hypervisor (e.g., MSR_IA32_MISC_ENABLE).

Reviewed by:	grehan
2014-09-20 02:35:21 +00:00
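
A hedged sketch of the last two MSR classes listed above (the first class is
just an unconditional save/restore on every context switch), using stubbed
rdmsr()/wrmsr() so it runs in userland.  The structure and the function names
guest_msr_enter(), guest_msr_exit() and emulate_rdmsr() are illustrative, not
the actual vmx MSR code.

    #include <stdint.h>
    #include <stdio.h>

    #define MSR_KGSBASE             0xc0000102
    #define MSR_IA32_MISC_ENABLE    0x000001a0

    /* Userland stand-ins for the privileged instructions. */
    static uint64_t fake_hw_kgsbase;
    static uint64_t rdmsr(uint32_t n) { (void)n; return (fake_hw_kgsbase); }
    static void wrmsr(uint32_t n, uint64_t v) { (void)n; fake_hw_kgsbase = v; }

    struct vcpu_msrs {
        uint64_t guest_kgsbase;     /* class 2: swapped around vmx_run() */
        uint64_t guest_misc_enable; /* class 3: fully emulated, never loaded */
        uint64_t host_kgsbase;
    };

    static void
    guest_msr_enter(struct vcpu_msrs *m)
    {
        m->host_kgsbase = rdmsr(MSR_KGSBASE);
        wrmsr(MSR_KGSBASE, m->guest_kgsbase);   /* guest value until exit */
    }

    static void
    guest_msr_exit(struct vcpu_msrs *m)
    {
        m->guest_kgsbase = rdmsr(MSR_KGSBASE);
        wrmsr(MSR_KGSBASE, m->host_kgsbase);
    }

    /* Class 3: a guest RDMSR traps and is answered from the emulated copy. */
    static int
    emulate_rdmsr(struct vcpu_msrs *m, uint32_t num, uint64_t *val)
    {
        if (num == MSR_IA32_MISC_ENABLE) {
            *val = m->guest_misc_enable;
            return (0);
        }
        return (-1);                            /* unhandled MSR */
    }

    int
    main(void)
    {
        struct vcpu_msrs m = { .guest_kgsbase = 0x1000 };
        uint64_t v = 0;

        guest_msr_enter(&m);                    /* before entering the guest */
        guest_msr_exit(&m);                     /* before returning to the host */
        emulate_rdmsr(&m, MSR_IA32_MISC_ENABLE, &v);
        printf("emulated MISC_ENABLE = %#jx\n", (uintmax_t)v);
        return (0);
    }
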
Konstantin Belousov
6dfc9e44fa - Use NULL instead of 0 for fpcurthread.
- Note the quirk with the interrupt-enabled state of the DNA
  (device-not-available) handler.
- Use just panic() instead of printf() followed by panic().  Print the
  tid instead of the pid, since the FPU state is per-thread.

Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
2014-09-18 09:13:20 +00:00
Bjoern A. Zeeb
581980cff8 Re-gen after r271743 implementing most of
timer_{create,settime,gettime,getoverrun,delete}.

MFC after:		3 days
Sponsored by:		DARPA, AFRL
2014-09-18 08:40:00 +00:00
Bjoern A. Zeeb
0a041f3b47 Implement most of timer_{create,settime,gettime,getoverrun,delete}
for amd64/linux32.  Fix the entirely bogus (untested) version from
r161310 for i386/linux using the same shared code in compat/linux.

It is unclear to me if we could support more clock mappings, but the
current set allows me to successfully run commercial 32-bit Linux
software under the linuxolator on amd64.

Reviewed by:		jhb
Differential Revision:	D784
MFC after:		3 days
Sponsored by:		DARPA, AFRL
2014-09-18 08:36:45 +00:00
Konstantin Belousov
490356e5b7 Presence of any VM_PROT bits in the permission argument on x86 implies
that the entry is readable and valid.

Reported by:	markj
Submitted by:	alc
Tested by:	pho (previous version), markj
MFC after:	3 days
2014-09-17 18:49:57 +00:00
Neel Natu
4e27d36d38 IFC @r271694 2014-09-17 18:46:51 +00:00
Neel Natu
6b844b87e2 Rework vNMI injection.
Keep track of NMI blocking by enabling the IRET intercept on a successful
vNMI injection. The NMI blocking condition is cleared when the handler
executes an IRET and traps back into the hypervisor.

Don't inject NMI if the processor is in an interrupt shadow to preserve the
atomic nature of "STI;HLT". Take advantage of this and artificially set the
interrupt shadow to prevent NMI injection when restarting the "iret".

Reviewed by:	Anish Gupta (akgupt3@gmail.com), grehan
2014-09-17 00:30:25 +00:00
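
A minimal state-machine sketch of the vNMI scheme above: injection sets an
NMI-blocked flag and enables the IRET intercept, the IRET exit clears both, and
an interrupt shadow suppresses injection.  Field and function names are
assumptions for illustration, not the svm.c code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative per-vcpu NMI state; not the actual svm.c fields. */
    struct vcpu_nmi {
        bool nmi_blocked;       /* set after a vNMI is injected */
        bool intr_shadow;       /* guest is in an interrupt shadow (STI;HLT) */
        bool iret_intercept;    /* IRET intercept enabled in the VMCB */
    };

    static bool
    can_inject_nmi(const struct vcpu_nmi *v)
    {
        /* No nested NMIs, and preserve the atomicity of "STI;HLT". */
        return (!v->nmi_blocked && !v->intr_shadow);
    }

    static void
    inject_nmi(struct vcpu_nmi *v)
    {
        v->nmi_blocked = true;
        v->iret_intercept = true;   /* trap the handler's IRET */
    }

    static void
    handle_iret_exit(struct vcpu_nmi *v)
    {
        v->nmi_blocked = false;     /* handler finished: NMIs unblocked again */
        v->iret_intercept = false;
    }

    int
    main(void)
    {
        struct vcpu_nmi v = { false, false, false };

        if (can_inject_nmi(&v))
            inject_nmi(&v);
        printf("blocked after injection: %d\n", v.nmi_blocked);
        handle_iret_exit(&v);
        printf("blocked after IRET exit: %d\n", v.nmi_blocked);
        return (0);
    }
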
Neel Natu
5fb3bc71f8 Minor cleanup.
Get rid of unused 'svm_feature' from the softc.

Get rid of the redundant 'vcpu_cnt' checks in svm.c. There is a similar check
in vmm.c against 'vm->active_cpus' before the AMD-specific code is called.

Submitted by:	Anish Gupta (akgupt3@gmail.com)
2014-09-16 04:01:55 +00:00
Neel Natu
79ad53fba3 Use V_IRQ, V_INTR_VECTOR and V_TPR to offload APIC interrupt delivery to the
processor. Briefly, the hypervisor sets V_INTR_VECTOR to the APIC vector
and sets V_IRQ to 1 to indicate a pending interrupt. The hardware then takes
care of injecting this vector when the guest is able to receive it.

Legacy PIC interrupts are still delivered via the event injection mechanism.
This is because the vector injected by the PIC must reflect the state of its
pins at the time the CPU is ready to accept the interrupt.

Accesses to the TPR via %CR8 are handled entirely in hardware. This requires
the emulated TPR to be synced with V_TPR after a #VMEXIT.

The guest can also modify the TPR via the memory-mapped APIC. This requires
V_TPR to be synced with the emulated TPR before a VMRUN.

Reviewed by:	Anish Gupta (akgupt3@gmail.com)
2014-09-16 03:31:40 +00:00
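
A rough sketch of the V_IRQ/V_INTR_VECTOR offload and the two TPR sync points
described above.  The flat struct below is illustrative; the real fields are
bitfields in the VMCB control area.

    #include <stdint.h>
    #include <stdio.h>

    /* Flat stand-in for the relevant VMCB interrupt-control fields. */
    struct vmcb_intr {
        uint8_t v_irq;          /* 1 = interrupt pending for the guest */
        uint8_t v_intr_vector;  /* vector the hardware will inject */
        uint8_t v_tpr;          /* hardware copy of the task-priority register */
    };

    static void
    offload_vector(struct vmcb_intr *c, uint8_t vec)
    {
        c->v_intr_vector = vec;
        c->v_irq = 1;           /* hardware injects 'vec' when the guest can take it */
    }

    int
    main(void)
    {
        struct vmcb_intr ctrl = { 0, 0, 0 };
        uint8_t vlapic_tpr = 0x20;

        ctrl.v_tpr = vlapic_tpr;    /* guest wrote the APIC TPR: sync before VMRUN */
        offload_vector(&ctrl, 0x30);
        /* ... VMRUN; the guest may write %CR8 without a #VMEXIT ... */
        vlapic_tpr = ctrl.v_tpr;    /* sync the emulated TPR back after #VMEXIT */
        printf("vector %#x pending=%d tpr=%#x\n",
            ctrl.v_intr_vector, ctrl.v_irq, vlapic_tpr);
        return (0);
    }
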
Neel Natu
bbadcde418 Set the 'vmexit->inst_length' field properly depending on the type of the
VM-exit and ultimately on whether nRIP is valid. This allows us to update
the %rip after the emulation is finished so any exceptions triggered during
the emulation will point to the right instruction.

Don't attempt to handle INS/OUTS VM-exits unless the DecodeAssist capability
is available. The effective segment field in EXITINFO1 is not valid without
this capability.

Add VM_EXITCODE_SVM to flag SVM VM-exits that cannot be handled. Provide the
VMCB fields exitinfo1 and exitinfo2 as collateral to help with debugging.

Provide a SVM VM-exit handler to dump the exitcode, exitinfo1 and exitinfo2
fields in bhyve(8).

Reviewed by:	Anish Gupta (akgupt3@gmail.com)
Reviewed by:	grehan
2014-09-14 04:39:04 +00:00
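
The inst_length logic reduces to a small helper: if the CPU supplies a valid
nRIP, the length is nRIP - RIP, otherwise it stays 0 and %rip is advanced only
after the emulation has decoded the instruction.  A sketch under that
assumption:

    #include <stdint.h>
    #include <stdio.h>

    /* If the CPU reports a valid next-RIP (nRIP), the exiting instruction's
     * length is simply nrip - rip; otherwise report 0 and let the emulation
     * path advance %rip after it has decoded the instruction. */
    static int
    vmexit_inst_length(uint64_t rip, uint64_t nrip, int nrip_valid)
    {
        return (nrip_valid ? (int)(nrip - rip) : 0);
    }

    int
    main(void)
    {
        printf("%d\n", vmexit_inst_length(0x1000, 0x1003, 1));  /* 3-byte insn */
        printf("%d\n", vmexit_inst_length(0x1000, 0, 0));       /* not known yet */
        return (0);
    }
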
Neel Natu
74accc3170 Bug fixes.
- Don't enable the HLT intercept by default. It will be enabled by bhyve(8)
  if required. Prior to this change HLT exiting was always enabled, making
  the "-H" option to bhyve(8) meaningless.

- Recognize a VM exit triggered by a non-maskable interrupt. Prior to this
  change the exit would be punted to userspace and the virtual machine would
  terminate.
2014-09-13 23:48:43 +00:00
Neel Natu
fa7caa91cb style(9): insert an empty line if the function has no local variables
Pointed out by:	grehan
2014-09-13 22:45:04 +00:00
Neel Natu
c2a875f970 AMD processors that have the SVM decode assist capability will store the
instruction bytes in the VMCB on a nested page fault. This is useful because
it saves having to walk the guest page tables to fetch the instruction.

vie_init() now takes two additional parameters 'inst_bytes' and 'inst_len'
that map directly to 'vie->inst[]' and 'vie->num_valid'.

The instruction emulation handler skips calling 'vmm_fetch_instruction()'
if 'vie->num_valid' is non-zero.

The use of this capability can be turned off by setting the sysctl/tunable
'hw.vmm.svm.disable_npf_assist' to '1'.

Reviewed by:	Anish Gupta (akgupt3@gmail.com)
Discussed with:	grehan
2014-09-13 22:16:40 +00:00
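
A simplified sketch of the decode-assist path: any instruction bytes captured
in the VMCB are copied into the decoder state, and the fetch step is skipped
when bytes are already valid.  The 'struct vie' shown is a stripped-down
stand-in, and the vie_init() signature is only assumed from the description
above.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define VIE_INST_SIZE 15        /* maximum x86 instruction length */

    /* Stripped-down stand-in for the instruction-emulation decoder state. */
    struct vie {
        uint8_t inst[VIE_INST_SIZE];
        int num_valid;
    };

    /* Copy any bytes the CPU captured (SVM decode assist) into the decoder. */
    static void
    vie_init(struct vie *vie, const uint8_t *inst_bytes, int inst_len)
    {
        memset(vie, 0, sizeof(*vie));
        if (inst_bytes != NULL && inst_len > 0 && inst_len <= VIE_INST_SIZE) {
            memcpy(vie->inst, inst_bytes, inst_len);
            vie->num_valid = inst_len;
        }
    }

    int
    main(void)
    {
        const uint8_t bytes[] = { 0x89, 0x08 };     /* mov %ecx,(%rax) */
        struct vie vie;

        vie_init(&vie, bytes, (int)sizeof(bytes));
        if (vie.num_valid == 0)
            printf("no assist: would call vmm_fetch_instruction()\n");
        else
            printf("decode assist: %d byte(s) already captured\n", vie.num_valid);
        return (0);
    }
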
John Baldwin
7d8312cc92 Add a sysctl to export the EFI memory map along with a handler in the
sysctl(8) binary to format it.

Reviewed by:	emaste
MFC after:	2 weeks
Differential Revision:	https://reviews.freebsd.org/D771
2014-09-13 03:10:02 +00:00
Neel Natu
d181963296 Optimize the common case of injecting an interrupt into a vcpu after a HLT
by explicitly moving it out of the interrupt shadow. The hypervisor is done
"executing" the HLT and by definition this moves the vcpu out of the
1-instruction interrupt shadow.

Prior to this change the interrupt would be held pending because the VMCS
guest-interruptibility-state would indicate that "blocking by STI" was in
effect. This resulted in an unnecessary round trip into the guest before
the pending interrupt could be injected.

Reviewed by:	grehan
2014-09-12 06:15:20 +00:00
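
A tiny sketch of the fixup: clear the "blocking by STI" bit (bit 0 of the VMCS
guest-interruptibility-state per the Intel SDM) once the HLT has been handled,
so a pending interrupt can be injected without another round trip.

    #include <stdint.h>
    #include <stdio.h>

    /* Bit 0 of the VMCS guest-interruptibility-state is "blocking by STI". */
    #define GI_BLOCKING_BY_STI  0x1u

    /* The HLT has been handled, so the one-instruction STI shadow is over. */
    static uint32_t
    hlt_exit_fixup(uint32_t gi_state)
    {
        return (gi_state & ~GI_BLOCKING_BY_STI);
    }

    int
    main(void)
    {
        uint32_t gi = GI_BLOCKING_BY_STI;   /* the guest executed "sti; hlt" */

        gi = hlt_exit_fixup(gi);
        printf("pending interrupt injectable now: %s\n", gi == 0 ? "yes" : "no");
        return (0);
    }
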
Neel Natu
442a04ca83 style(9): indent the switch, don't indent the case, indent case body one tab. 2014-09-11 06:17:56 +00:00
Neel Natu
e441104d63 Repurpose the V_IRQ interrupt injection to implement VMX-style interrupt
window exiting. This simply involves setting V_IRQ and enabling the VINTR
intercept. This instructs the CPU to trap back into the hypervisor as soon
as an interrupt can be injected into the guest. The pending interrupt is
then injected via the traditional event injection mechanism.

Rework vcpu interrupt injection so that Linux guests now idle with host cpu
utilization close to 0%.

Reviewed by:	Anish Gupta (earlier version)
Discussed with:	grehan
2014-09-11 02:37:02 +00:00
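
A sketch of the interrupt-window trick described above: post a dummy V_IRQ with
the VINTR intercept enabled, then inject the real vector via event injection
once the VINTR exit says the guest is interruptible.  The flags below are
illustrative, not VMCB fields.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative flags only; the real state lives in VMCB bitfields. */
    struct vcpu_win {
        bool v_irq;             /* dummy pending vintr, forces a VINTR exit */
        bool vintr_intercept;   /* exit as soon as the guest is interruptible */
    };

    static void
    request_intr_window(struct vcpu_win *v)
    {
        v->v_irq = true;
        v->vintr_intercept = true;
    }

    static void
    handle_vintr_exit(struct vcpu_win *v, int vector)
    {
        /* Guest can now take interrupts: use normal event injection. */
        printf("inject vector %#x via event injection\n", vector);
        v->v_irq = false;
        v->vintr_intercept = false;
    }

    int
    main(void)
    {
        struct vcpu_win v = { false, false };

        request_intr_window(&v);    /* a vector is pending but can't go in yet */
        handle_vintr_exit(&v, 0x30);
        return (0);
    }
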
John Baldwin
de2b02fc74 MFamd64: Use initializecpu() to set various model-specific registers on
AP startup and AP resume (it was already used for BSP startup and BSP
resume).
- Split code to do one-time probing of cache properties out of
  initializecpu() and into initializecpucache().  This is called once on
  the BSP during boot.
- Move enable_sse() into initializecpu().
- Call initializecpu() for AP startup instead of enable_sse() and
  manually frobbing MSR_EFER to enable PG_NX.
- Call initializecpu() when an AP resumes.  In theory this will now
  properly re-enable PG_NX in MSR_EFER when resuming a PAE kernel on
  APs.
2014-09-10 21:37:47 +00:00
Neel Natu
238b6cb761 Allow intercepts and irq fields to be cached by the VMCB.
Provide APIs svm_enable_intercept()/svm_disable_intercept() to add/delete
VMCB intercepts. These APIs ensure that the VMCB state cache is invalidated
when intercepts are modified.

Each intercept is identified as an (index, bitmask) tuple.  For example, the
VINTR intercept is identified as (VMCB_CTRL1_INTCPT, VMCB_INTCPT_VINTR).
The first 20 bytes in the control area that are used to enable intercepts
are represented as 'uint32_t intercept[5]' in 'struct vmcb_ctrl'.

Modify svm_setcap() and svm_getcap() to use the new APIs.

Discussed with:	Anish Gupta (akgupt3@gmail.com)
2014-09-10 03:13:40 +00:00
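
A compact model of the (index, bitmask) intercept API from the commit message;
the VMCB_CTRL1_INTCPT/VMCB_INTCPT_VINTR values and the dirty-cache bit below
are placeholders, not the authoritative definitions.

    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder index/bit values; not the authoritative definitions. */
    #define VMCB_CTRL1_INTCPT   3           /* which 32-bit word */
    #define VMCB_INTCPT_VINTR   (1U << 4)   /* which bit in that word */

    struct vmcb_ctrl {
        uint32_t intercept[5];  /* first 20 bytes of the VMCB control area */
    };

    static uint32_t dirty;      /* stand-in for the VMCB state-cache dirty bits */

    static void
    svm_enable_intercept(struct vmcb_ctrl *ctrl, int idx, uint32_t bitmask)
    {
        ctrl->intercept[idx] |= bitmask;
        dirty |= 0x1;           /* intercepts changed: invalidate the cache */
    }

    static void
    svm_disable_intercept(struct vmcb_ctrl *ctrl, int idx, uint32_t bitmask)
    {
        ctrl->intercept[idx] &= ~bitmask;
        dirty |= 0x1;
    }

    int
    main(void)
    {
        struct vmcb_ctrl ctrl = { { 0 } };

        svm_enable_intercept(&ctrl, VMCB_CTRL1_INTCPT, VMCB_INTCPT_VINTR);
        printf("word %d = %#x, dirty = %#x\n", VMCB_CTRL1_INTCPT,
            ctrl.intercept[VMCB_CTRL1_INTCPT], dirty);
        svm_disable_intercept(&ctrl, VMCB_CTRL1_INTCPT, VMCB_INTCPT_VINTR);
        return (0);
    }
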
Neel Natu
e5397c9fdd Move the VMCB initialization into svm.c in preparation for changes to the
interrupt injection logic.

Discussed with:	Anish Gupta (akgupt3@gmail.com)
2014-09-10 02:35:19 +00:00
Neel Natu
840b1a2760 Move the event injection function into svm.c and add KTR logging for
every event injection.

This is in preparation for changes to SVM guest interrupt injection.

Discussed with:	Anish Gupta (akgupt3@gmail.com)
2014-09-10 02:20:32 +00:00
Neel Natu
2591ee3e80 Remove a bogus check that flagged an error if the guest %rip was zero.
An AP begins execution with %rip set to 0 after a startup IPI.

Discussed with:	Anish Gupta (akgupt3@gmail.com)
2014-09-10 01:46:22 +00:00
Neel Natu
5e467bd098 Make the KTR tracepoints uniform and ensure that every VM-exit is logged.
Discussed with:	Anish Gupta (akgupt3@gmail.com)
2014-09-10 01:37:32 +00:00
Neel Natu
a2901ce7ad Allow guest read access to MSR_EFER without hypervisor intervention.
Dirty the VMCB_CACHE_CR state cache when MSR_EFER is modified.
2014-09-10 01:10:53 +00:00
Neel Natu
501f03eba2 Remove gratuitous forward declarations.
Remove tabs on empty lines.
2014-09-09 23:39:43 +00:00
Neel Natu
a268481428 Do proper ASID management for guest vcpus.
Prior to this change an ASID was hard-allocated to a guest and shared by all
its vcpus.  This meant that the number of VMs that could be created was limited
to the number of ASIDs supported by the CPU. It was also inefficient because
it forced a TLB flush on every VMRUN.

With this change the number of guests that can be created is independent of
the number of available ASIDs. Also, the TLB is flushed only when a new ASID
is allocated.

Discussed with:	grehan
Reviewed by:	Anish Gupta (akgupt3@gmail.com)
2014-09-06 19:02:52 +00:00
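
The usual way to decouple the guest count from the ASID count is a
generation-based allocator: hand out ASIDs until they run out, then bump the
generation and flush the TLB once.  A simplified sketch along those lines (the
real svm.c bookkeeping is per host CPU and more involved):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NASID 8                 /* pretend the CPU offers ASIDs 1..7 */

    static uint32_t next_asid = 1;  /* ASID 0 is reserved for the host */
    static uint64_t generation = 1;

    struct vcpu_asid {
        uint32_t asid;
        uint64_t gen;
    };

    /* Give the vcpu a usable ASID; return true if the caller must flush the TLB. */
    static bool
    asid_refresh(struct vcpu_asid *v)
    {
        bool flush = false;

        if (v->gen != generation) {         /* stale: needs a fresh ASID */
            if (next_asid >= NASID) {       /* space exhausted: start over */
                next_asid = 1;
                generation++;
                flush = true;               /* one flush per generation */
            }
            v->asid = next_asid++;
            v->gen = generation;
        }
        return (flush);
    }

    int
    main(void)
    {
        struct vcpu_asid v = { 0, 0 };

        for (int i = 0; i < 10; i++) {
            bool flush;

            v.gen = 0;                      /* pretend another vcpu ran here */
            flush = asid_refresh(&v);
            printf("asid=%u flush=%d\n", v.asid, flush);
        }
        return (0);
    }
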
John Baldwin
b1d735ba4c Create a separate structure for per-CPU state saved across suspend and
resume that is a superset of a pcb.  Move the FPU state out of the pcb and
into this new structure.  As part of this, move the FPU resume code on
amd64 into a C function.  This allows resumectx() to still operate only on
a pcb and more closely mirrors the i386 code.

Reviewed by:	kib (earlier version)
2014-09-06 15:23:28 +00:00
Neel Natu
3865879753 Merge svm_set_vmcb() and svm_init_vmcb() into a single function that is called
just once when a vcpu is initialized.

Discussed with:	Anish Gupta (akgupt3@gmail.com)
2014-09-05 03:33:16 +00:00
Pedro F. Giffuni
11db54f172 Apply known workarounds for modern MacBooks.
The legacy USB circuitry tends to give trouble on MacBooks.
While the original report covered the MacBook, extend the fix
preemptively to the newer MacBookPro too.

PR:		191693
Reviewed by:	emaste
MFC after:	5 days
2014-09-05 01:06:45 +00:00
Mark Johnston
a58b4afa9f Add mrsas(4) to GENERIC for i386 and amd64.
Approved by:	ambrisko, kadesai
MFC after:	3 days
2014-09-04 21:06:33 +00:00
John Baldwin
33a50f1b0f Merge the amd64 and i386 identcpu.c into a single x86 implementation.
This brings the structured extended features mask and VT-x reporting to
i386 and Intel cache and TLB info (under bootverbose) to amd64.
2014-09-04 14:26:25 +00:00
Neel Natu
bf4993ba2a Remove unused header file.
Discussed with:	Anish Gupta (akgupt3@gmail.com)
2014-09-04 06:07:32 +00:00
Neel Natu
fea6bd5cd3 Consolidate the code to restore the host TSS after a #VMEXIT into a single
function restore_host_tss().

Don't bother to restore MSR_KGSBASE after a #VMEXIT since it is not used in
the kernel. It will be restored on return to userspace.

Discussed with:	Anish Gupta (akgupt3@gmail.com)
2014-09-04 06:00:18 +00:00
John Baldwin
2b793beefd Remove trailing whitespace. 2014-09-04 01:56:15 +00:00
John Baldwin
7fb40488d6 - Move prototypes for various functions out of C files and into
<machine/md_var.h>.
- Move some CPU-related variables out of i386/i386/identcpu.c to
  initcpu.c to match amd64.
- Move the declaration of has_f00f_hack out of identcpu.c to machdep.c.
- Remove a misleading comment from i386/i386/initcpu.c (locore zeros
  the BSS before it calls identify_cpu()) and remove explicit zero
  assignments to reduce the diff with amd64.
2014-09-04 01:46:06 +00:00
Neel Natu
246e7a2b64 IFC @r269962
Submitted by:	Anish Gupta (akgupt3@gmail.com)
2014-09-02 04:22:42 +00:00
Alan Cox
ad2e88a14f Update a comment to reflect the changes in r213408.
MFC after:	5 days
2014-09-02 04:11:20 +00:00
Neel Natu
4c98655ece The "SUB" instruction used in getcc() actually does 'x -= y' so use the
proper constraint for 'x'. The "+r" constraint indicates that 'x' is an
input and output register operand.

While here generate code for different variants of getcc() using a macro
GETCC(sz) where 'sz' indicates the operand size.

Update the status bits in %rflags when emulating AND and OR opcodes.

Reviewed by:	grehan
2014-08-30 19:59:42 +00:00
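
A standalone illustration (amd64 only, because of pushfq/popq) of the
constraint fix and the GETCC(sz) macro shape: because SUB writes its
destination, 'x' must be a "+r" (input/output) operand so the compiler knows
it changes.  This mirrors the description above rather than the exact vmm
source.

    #include <stdint.h>
    #include <stdio.h>

    /* Perform 'x -= y' and return the resulting %rflags. */
    #define GETCC(sz)                                               \
    static uint64_t                                                 \
    getcc##sz(uint##sz##_t x, uint##sz##_t y)                       \
    {                                                               \
        uint64_t rflags;                                            \
        __asm__ __volatile__("sub %2,%1; pushfq; popq %0" :         \
            "=r" (rflags), "+r" (x) : "m" (y) : "cc");              \
        return (rflags);                                            \
    }

    GETCC(32)
    GETCC(64)

    int
    main(void)
    {
        uint64_t fl32 = getcc32(1, 2);  /* 1 - 2 borrows: CF set, ZF clear */
        uint64_t fl64 = getcc64(7, 7);  /* 7 - 7 == 0:    ZF set */

        printf("32-bit: CF=%d ZF=%d  64-bit: ZF=%d\n",
            (int)(fl32 & 0x1), (int)((fl32 >> 6) & 0x1),
            (int)((fl64 >> 6) & 0x1));
        return (0);
    }
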
Pedro F. Giffuni
ec4a0b4408 Minor space/tab cleanups.
Most of them were ripped from the GSoC 2014
SMAP + kpatch project.
This is only a cosmetic change.

Taken from:	Oliver Pinter (op@)
MFC after:	5 days
2014-08-30 15:41:07 +00:00
John Baldwin
89871cdeb6 - Add a new structure type for the ACPI 3.0 SMAP entry that includes the
optional attributes field.
- Add a 'machdep.smap' sysctl that exports the SMAP table of the running
  system as an array of ACPI 3.0 structures.  (On older systems, the
  attributes are given a value of zero.)  Note that the sysctl only
  exports the SMAP table if it is available via the metadata passed from
  the loader to the kernel.  If an SMAP is not available, an empty array
  is returned.
- Add a format handler for the ACPI 3.0 SMAP structure to the sysctl(8)
  binary to print the SMAP entries in a readable form similar to that
  found in boot messages.

MFC after:	2 weeks
2014-08-29 21:25:47 +00:00
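
A rough userland reader for the machdep.smap sysctl named above; the packed
entry layout follows the ACPI 3.0 description in the commit and may not match
the kernel's own structure definition exactly.

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Assumed entry layout mirroring the ACPI 3.0 description above. */
    struct smap_xattr {
        uint64_t base;
        uint64_t length;
        uint32_t type;      /* 1 = usable RAM, 2 = reserved, ... */
        uint32_t xattr;     /* 0 when the firmware predates ACPI 3.0 */
    } __attribute__((packed));

    int
    main(void)
    {
        struct smap_xattr *smap;
        size_t len = 0;

        /* Size the buffer, then fetch the whole table in one go. */
        if (sysctlbyname("machdep.smap", NULL, &len, NULL, 0) == -1)
            return (1);
        smap = malloc(len);
        if (smap == NULL ||
            sysctlbyname("machdep.smap", smap, &len, NULL, 0) == -1)
            return (1);
        for (size_t i = 0; i < len / sizeof(*smap); i++)
            printf("type=%u base=%#018jx len=%#018jx attr=%#x\n",
                smap[i].type, (uintmax_t)smap[i].base,
                (uintmax_t)smap[i].length, smap[i].xattr);
        free(smap);
        return (0);
    }
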
Peter Grehan
fc3dde9099 Implement the 0x2B SUB instruction, and the OR variant of 0x81.
Found with local APIC accesses from bitrig/amd64 bsd.rd, 07/15-snap.

Reviewed by:	neel
MFC after:	3 days
2014-08-27 00:53:56 +00:00