This is the first step toward moving the generic part of the HV code into
the kernel instead of the module, so that hypercalls can be used to
implement other paravirtualization code in the kernel.
Submitted by: Howard Su <howard0su@gmail.com>
Reviewed by: royger, delphij, adrian
Approved by: adrian (mentor)
Sponsored by: Microsoft OSTC
Differential Revision: https://reviews.freebsd.org/D3072
providing compiled-in static environment data that is used instead of any
data passed in from a boot loader.
Previously 'env' worked only on i386 and arm xscale systems, because it
required the MD startup code to examine the global envmode variable and
decide whether to use static_env or an environment obtained from the boot
loader, and set the global kern_envp accordingly. Most startup code wasn't
doing so. Making things even more complex, some mips startup code uses an
alternate scheme that involves calling init_static_kenv() to pass an empty
buffer and its size, then uses a series of kern_setenv() calls to populate
that buffer.
Now all MD startup code calls init_static_kenv(), and that routine provides
a single point where envmode is checked and the decision is made whether to
use the compiled-in static_kenv or the values provided by the MD code.
The routine also continues to serve its original purpose for mips; if a
non-zero buffer size is passed the routine installs the empty buffer ready
to accept kern_setenv() values. Now if the size is zero, the provided buffer
full of existing env data is installed. A NULL pointer can be passed if the
boot loader provides no env data; this allows the static env to be installed
if envmode is set to do so.
Most of the work here is a near-mechanical change to call the init function
instead of directly setting kern_envp. A notable exception is in xen/pv.c;
that code was originally installing a buffer full of preformatted env data
along with its non-zero size (like mips code does), which would have allowed
kern_setenv() calls to wipe out the preformatted data. Now it passes a zero
for the size so that the buffer of data it installs is treated as
non-writeable.
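For illustration, a hedged sketch of the three calling conventions described
above (init_static_kenv() is the real MI routine; the buffer names are
hypothetical):

	/* Loader handed us a buffer already full of "name=value" strings:
	 * a size of 0 installs it as non-writeable env data. */
	init_static_kenv(loader_envp, 0);

	/* mips-style: install an empty scratch buffer that later
	 * kern_setenv() calls will populate. */
	init_static_kenv(md_env_buf, sizeof(md_env_buf));

	/* No env data from the loader: NULL lets the routine install the
	 * compiled-in static env when envmode says to do so. */
	init_static_kenv(NULL, 0);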
While here, move the common bits of <machine/cputypes.h> to
<x86/cputypes.h> as well.
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D4670
"The availability of CLWB instruction is indicated by the presence of
the CPUID feature flag CLWB (bit 24 of the EBX register)."
CLWB is similar to CLFLUSHOPT, except that it is not required to discard
cacheline contents.
"On processors that supports PCOMMIT, PCOMMIT is enumerated through
CPUID (CPUID.7.0.EBX[22]) only when the feature is enabled by BIOS."
PCOMMIT is used to cause store-to-memory operations to become persistent
(protected from power failure).
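A hedged sketch of how the two bits can be probed from CPUID leaf 7,
sub-leaf 0; the macro names mirror FreeBSD's CPUID_STDEXT_* convention and
the function itself is illustrative:

#include <sys/types.h>
#include <machine/cpufunc.h>
#include <machine/md_var.h>	/* cpu_high: highest basic CPUID leaf */

#define	CPUID_STDEXT_PCOMMIT	0x00400000	/* leaf 7, EBX bit 22 */
#define	CPUID_STDEXT_CLWB	0x01000000	/* leaf 7, EBX bit 24 */

static int
cpu_probe_clwb_pcommit(int *clwb, int *pcommit)
{
	u_int regs[4];

	if (cpu_high < 7)		/* CPUID leaf 7 not implemented */
		return (0);
	cpuid_count(7, 0, regs);	/* regs[1] is EBX */
	*clwb = (regs[1] & CPUID_STDEXT_CLWB) != 0;
	*pcommit = (regs[1] & CPUID_STDEXT_PCOMMIT) != 0;
	return (1);
}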
Sponsored by: EMC / Isilon Storage Division
Current code doesn't try to make use of the full page when bouncing because
the size is only expanded to be a multiple of the alignment. Instead try to
always create segments of PAGE_SIZE when using bounce pages.
This allows us to remove the special casing done for
BUS_DMA_KEEP_PG_OFFSET, since the requirement is to make sure the offsets
into contiguous segments are aligned, and now this is done by default.
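Roughly, the change amounts to clamping the segment to the remainder of the
page whenever the current address is going to be bounced; a sketch (field
and helper names are illustrative, not the exact source):

	if (map->pagesneeded != 0 &&
	    bus_dma_run_filter(&dmat->common, curaddr)) {
		/* Use the rest of the page instead of only rounding the
		 * size up to the tag alignment. */
		sgsize = MIN(sgsize, PAGE_SIZE - (curaddr & PAGE_MASK));
		/* ... reserve a bounce page for this segment ... */
	}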
Sponsored by: Citrix Systems R&D
Reviewed by: hps, kib
Differential revision: https://reviews.freebsd.org/D4119
new headers x86/include/x86_var.h and x86_smp.h.
Reviewed by: emaste, jhb
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D4358
suppression, but the reported IOAPIC version is 0x11 and neither the
IOAPIC EOIR nor the Linux trick of temporarily reprogramming the pin
to edge-trigger mode to issue an EOI works.
Disable EOI suppression if KVM is detected.  The mode can still be
forced with the tunable.
Reported and tested by: Roman Mamontov <mr.xanto@gmail.com>
Sponsored by: The FreeBSD Foundation
the PG_G global pte flag, pmap_invalidate_all() fails to flush global
TLB entries [*]. This is because TLB shootdown handler for such
configs reloads CR3, and on i386 pmap_invalidate_all() does the same
for the initiating CPU. Note that current code does not issue total
invalidation requests for the kernel_pmap.
Rename the amd64 function invltlb_globpcid() to invltlb_glob(); it has
not been PCID-specific for quite some time.  Implement the same
functionality for i386.  Use the function instead of invltlb() in the
shootdown handlers and in i386 pmap_invalidate_all(), but only for the
kernel pmap (which maps pages with the PG_G attribute set), which
takes care of PG_G TLB entries on flush.
To detect the affected pmap in the i386 TLB shootdown handler, the pmap
must be passed to the smp_masked_invltlb() function, which makes the
amd64 and i386 TLB shootdown code almost identical.  Merge the code under x86/.
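A minimal sketch of what the global flush looks like when PCID/INVPCID are
not in use (toggling CR4.PGE also drops PG_G entries); the real
invltlb_glob() additionally has PCID-aware paths:

#include <machine/cpufunc.h>
#include <machine/specialreg.h>

static __inline void
invltlb_glob_sketch(void)
{
	u_long cr4;

	cr4 = rcr4();
	load_cr4(cr4 & ~CR4_PGE);	/* flushes global TLB entries too */
	load_cr4(cr4);			/* re-enables global pages */
}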
Noted by: jhb [*]
Reviewed by: cem, jhb, pho
Tested by: pho
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D4346
created for bus_dma_tag_t tag, bounce pages should be allocated
only if needed.
Before the fix, they were always allocated if the BUS_DMA_COULD_BOUNCE flag
was set but BUS_DMA_MIN_ALLOC_COMP was not. As bounce pages are never freed,
this could cause memory exhaustion when a lot of such tags, together with
their maps, were created.
Note that by the current design one tag can have multiple maps. However,
BUS_DMA_MIN_ALLOC_COMP is a tag flag, and it is set once bounce pages have
been allocated. Thus they are allocated only for the first of the tag's
maps that needs them.
Approved by: kib (mentor)
the map has been created via bounce_bus_dmamem_alloc(). In that case
bus_dmamap_unload(9) typically isn't called during normal operation,
but it should still be called during detach, cleanup after a failed attach, etc.
Submitted by: yongari
MFC after: 3 days
map has been created via bounce_bus_dmamem_alloc(). Even for coherent
DMA - which bus_dmamem_alloc(9) is typically used for - calling
bus_dmamap_sync(9) isn't optional.
PR: 188899 (non-original problem)
MFC after: 3 days
Current Xen resume code clears all pending bitmap IPIs on resume, which is
not correct. Instead re-inject bitmap IPI vectors on resume to all CPUs in
order to acknowledge any pending bitmap IPIs.
Sponsored by: Citrix Systems R&D
MFC after: 2 weeks
All event channels are torn down when performing a migration on Xen; make
sure all handlers are also removed and the event channel structure is
properly disposed of so it can be reused.
Sponsored by: Citrix Systems R&D
MFC after: 2 weeks
This is needed so interrupt handlers can be removed while the PIC is
resuming; it was previously not possible because intr_resume held the
intr_table_lock and intr_remove_handler recursed on it.
Sponsored by: Citrix Systems R&D
Reviewed by: kib (previous version)
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D4114
The implementation of bus_dmamap_load_ma_triv currently calls
_bus_dmamap_load_phys on each page that is part of the passed in buffer.
Since each page is treated as an individual buffer, the resulting behaviour
is different from the behaviour of _bus_dmamap_load_buffer. This breaks
certain drivers, like Xen blkfront.
If an unmapped buffer of size 4096 that starts at offset 13 into the first
page is passed to the current _bus_dmamap_load_ma implementation (so the ma
array contains two pages), the result is that two segments are created, one
with a size of 4083 and the other with size 13 (because two independent
calls to _bus_dmamap_load_phys are performed, one for each physical page).
If the same is done with a mapped buffer and calling _bus_dmamap_load_buffer
the result is that only one segment is created, with a size of 4096.
This patch relegates the usage of bus_dmamap_load_ma_triv in x86 bounce
buffer code to drivers requesting BUS_DMA_KEEP_PG_OFFSET and implements
_bus_dmamap_load_ma so that its behaviour is the same as the mapped version
(_bus_dmamap_load_buffer). This patch only modifies the x86 bounce buffer
code; other arches are left untouched.
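For reference, a sketch of the trivial per-page loop that this change
confines to the BUS_DMA_KEEP_PG_OFFSET case (close to, but not necessarily
identical to, the real bus_dmamap_load_ma_triv(); the usual busdma and
vm_page headers are assumed). It is what produces the 4083 + 13 split
described above:

int
bus_dmamap_load_ma_triv(bus_dma_tag_t dmat, bus_dmamap_t map,
    struct vm_page **ma, bus_size_t tlen, int ma_offs, int flags,
    bus_dma_segment_t *segs, int *segp)
{
	bus_size_t len;
	int error, i;

	error = 0;
	for (i = 0; tlen > 0; i++, tlen -= len) {
		/* Each page is loaded as its own buffer, so a segment
		 * never spans a page boundary. */
		len = MIN(PAGE_SIZE - ma_offs, tlen);
		error = _bus_dmamap_load_phys(dmat, map,
		    VM_PAGE_TO_PHYS(ma[i]) + ma_offs, len, flags,
		    segs, segp);
		if (error != 0)
			break;
		ma_offs = 0;
	}
	return (error);
}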
Sponsored by: Citrix Systems R&D
Reviewed by: kib, jah (previous version)
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D888
variable during mp_start(), which is too late. Move this to mp_setmaxid(),
where other architectures set it, and move the x86 assertions to MI code.
Reviewed by: kib (x86 part)
Fix two issues with the current event channel code: first, ENABLED_SETSIZE is
not correctly defined, and second, using a BITSET to store the per-CPU masks
is not portable to other arches, since on arm32 the event channel arrays
shared with the hypervisor are of type uint64_t and not long. Partially
restore the previous code, but switch the bit operations to the recently
introduced xen_{set/clear/test}_bit versions.
Reviewed by: Julien Grall <julien.grall@citrix.com>
Sponsored by: Citrix Systems R&D
Differential Revision: https://reviews.freebsd.org/D4080
This will enable the elimination of a workaround in the USB driver that
artificially allocates buffers twice as big as they need to be (which
actually saves memory for very small buffers on the buggy platforms).
When deciding how to allocate a dma buffer, armv4, armv6, mips, and
x86/iommu all correctly check for the tag alignment <= maxsize as enabling
simple uma/malloc based allocation. Powerpc, sparc64, x86/bounce, and
arm64/bounce were all checking for alignment < maxsize; on those platforms
when alignment was equal to the max size it would fall back to page-based
allocators even for very small buffers.
This change makes all platforms use the <= check. It should be noted that
on all platforms other than arm[v6] and mips, this check is relying on
undocumented behavior in malloc(9) that if you allocate a block of a given
size it will be aligned to the next larger power-of-2 boundary. There is
nothing in the malloc(9) man page that makes that explicit promise (but the
busdma code has been relying on this behavior all along so I guess it works).
The arm and mips code uses the allocator in kern/subr_busdma_bufalloc.c, which
does explicitly implement this promise about size and alignment. Other
platforms should probably switch to the aligned allocator.
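Schematically, the decision discussed above looks like this (the tag field
names are illustrative):

	if (dmat->maxsize <= PAGE_SIZE &&
	    dmat->alignment <= dmat->maxsize &&	/* was '<' on some platforms */
	    dmat->lowaddr >= ptoa((vm_paddr_t)Maxmem)) {
		/* Small, unconstrained buffer: malloc(9) is enough, because
		 * a power-of-2 sized allocation comes back suitably aligned. */
		*vaddr = malloc(dmat->maxsize, M_DEVBUF, mflags);
	} else {
		/* Fall back to a page-based or contiguous allocator. */
	}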
The new load_ma implementation can cause NULL-pointer dereferences when used
with certain drivers; back it out until the cause is found:
Fatal trap 12: page fault while in kernel mode
cpuid = 11; apic id = 03
fault virtual address = 0x30
fault code = supervisor read data, page not present
instruction pointer = 0x20:0xffffffff808a2d22
stack pointer = 0x28:0xfffffe07cc737710
frame pointer = 0x28:0xfffffe07cc737790
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 13 (g_down)
trap number = 12
panic: page fault
cpuid = 11
KDB: stack backtrace:
#0 0xffffffff80641647 at kdb_backtrace+0x67
#1 0xffffffff80606762 at vpanic+0x182
#2 0xffffffff806067e3 at panic+0x43
#3 0xffffffff8084eef1 at trap_fatal+0x351
#4 0xffffffff8084f0e4 at trap_pfault+0x1e4
#5 0xffffffff8084e82f at trap+0x4bf
#6 0xffffffff80830d57 at calltrap+0x8
#7 0xffffffff8063beab at _bus_dmamap_load_ccb+0x1fb
#8 0xffffffff8063bc51 at bus_dmamap_load_ccb+0x91
#9 0xffffffff8042dcad at ata_dmaload+0x11d
#10 0xffffffff8042df7e at ata_begin_transaction+0x7e
#11 0xffffffff8042c18e at ataaction+0x9ce
#12 0xffffffff802a220f at xpt_run_devq+0x5bf
#13 0xffffffff802a17ad at xpt_action_default+0x94d
#14 0xffffffff802c0024 at adastart+0x8b4
#15 0xffffffff802a2e93 at xpt_run_allocq+0x193
#16 0xffffffff802c0735 at adastrategy+0xf5
#17 0xffffffff80554206 at g_disk_start+0x426
Uptime: 2m29s
The implementation of bus_dmamap_load_ma_triv currently calls
_bus_dmamap_load_phys on each page that is part of the passed in buffer.
Since each page is treated as an individual buffer, the resulting behaviour
is different from the behaviour of _bus_dmamap_load_buffer. This breaks
certain drivers, like Xen blkfront.
If an unmapped buffer of size 4096 that starts at offset 13 into the first
page is passed to the current _bus_dmamap_load_ma implementation (so the ma
array contains two pages), the result is that two segments are created, one
with a size of 4083 and the other with size 13 (because two independent
calls to _bus_dmamap_load_phys are performed, one for each physical page).
If the same is done with a mapped buffer and calling _bus_dmamap_load_buffer
the result is that only one segment is created, with a size of 4096.
This patch relegates the usage of bus_dmamap_load_ma_triv in x86 bounce
buffer code to drivers requesting BUS_DMA_KEEP_PG_OFFSET and implements
_bus_dmamap_load_ma so that its behaviour is the same as the mapped version
(_bus_dmamap_load_buffer). This patch only modifies the x86 bounce buffer
code; other arches are left untouched.
Reviewed by: kib, jah
Differential Revision: https://reviews.freebsd.org/D888
Sponsored by: Citrix Systems R&D
xen/hypervisor.h:
- Remove unused helpers: MULTI_update_va_mapping, is_initial_xendomain,
is_running_on_xen
- Remove unused define CONFIG_X86_PAE
- Remove unused variable xen_start_info: note that it's used in pcifront,
which is not built at all
- Remove forward declaration of HYPERVISOR_crash
xen/xen-os.h:
- Remove unused define CONFIG_X86_PAE
- Drop unused helpers: test_and_clear_bit, clear_bit,
force_evtchn_callback
- Implement a generic version (based on ofed/include/linux/bitops.h) of
set_bit and test_bit and prefix them with xen_ to avoid any use by code
other than Xen (a sketch of these helpers follows after this list). Note
that it would be worthwhile to investigate a generic implementation in
FreeBSD.
- Replace barrier() by __compiler_membar()
- Replace cpu_relax() by cpu_spinwait(): it's exactly the same as rep;nop
= pause
xen/xen_intr.h:
- Move the prototype of xen_intr_handle_upcall into it: it is used by all
platforms
x86/xen/xen_intr.c:
- Use BITSET* for the enabledbits: avoids the use of custom helpers
- test_bit/set_bit have been renamed to xen_test_bit/xen_set_bit
- Don't export the variable xen_intr_pcpu
dev/xen/blkback/blkback.c:
- Fix the string format when XBB_DEBUG is enabled: host_addr is typed
uint64_t
dev/xen/balloon/balloon.c:
- Remove set but not used variable
- Use the correct type for frame_list: xen_pfn_t represents the frame
number on any architecture
dev/xen/control/control.c:
- Return BUS_PROBE_WILDCARD in xctrl_probe: returning 0 in a probe callback
means the driver can handle this device. If by any chance this driver is
probed first, every new device whose driver is unset would end up using
it.
dev/xen/grant-table/grant_table.c:
- Remove unused cmpxchg
- Drop unused include opt_pmap.h: Doesn't exist on ARM64 and it doesn't
contain anything required for the code on x86
dev/xen/netfront/netfront.c:
- Use the correct type for rx_pfn_array: xen_pfn_t represents the frame
number on any architecture
dev/xen/netback/netback.c:
- Use the correct type for gmfn: xen_pfn_t represents the frame number on
any architecture
dev/xen/xenstore/xenstore.c:
- Return BUS_PROBE_WILDCARD in xs_probe: returning 0 in a probe callback
means the driver can handle this device. If by any chance xenstore is the
first driver, every new device whose driver is unset will use xenstore.
Note that with these changes, x86/include/xen/xen-os.h no longer contains any
arch-specific code. However, a new series will add some helpers that differ
between x86 and ARM64, so I've kept the headers for now.
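As noted in the xen/xen-os.h items above, a sketch of the xen_-prefixed bit
helpers (modelled on ofed/include/linux/bitops.h; the committed version may
differ in detail):

#include <sys/param.h>
#include <machine/atomic.h>

#define	XEN_NBPL	(NBBY * sizeof(long))

static inline void
xen_set_bit(int bit, volatile long *addr)
{
	atomic_set_long((volatile u_long *)&addr[bit / XEN_NBPL],
	    1UL << (bit % XEN_NBPL));
}

static inline int
xen_test_bit(int bit, volatile long *addr)
{
	unsigned long mask = 1UL << (bit % XEN_NBPL);

	return (!!(atomic_load_acq_long(
	    (volatile u_long *)&addr[bit / XEN_NBPL]) & mask));
}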
Submitted by: Julien Grall <julien.grall@citrix.com>
Reviewed by: royger
Differential Revision: https://reviews.freebsd.org/D3921
Sponsored by: Citrix Systems R&D
The amd64 and i386 platform code contain very similar xen/xen-os.h headers.
The only differences are:
- Functions/variables/types which were unused in i386/xen/xen-os.h:
* xen_xchg
* __xchg_dummy
* __xg
* __xchg
* atomic_t
* atomic_inc
* rdtscll
The functions/variables/types unused in xen-os.h can be dropped and there
are no more differences between amd64 and i386.
The new header is placed in x86/include/xen and each platform will have
dummy headers that include x86/xen/*.h. This is so machine/xen/*.h can be
included in the PV drivers.
Submitted by: Julien Grall <julien.grall@citrix.com>
Reviewed by: royger
Differential Revision: https://reviews.freebsd.org/D3880
Sponsored by: Citrix Systems R&D
When the system has more than a single PCI domain, the bus numbers
are not unique, thus they cannot be used for "pci" device numbering.
Change bus numbers to -1 (i.e. to-be-determined automatically)
wherever the code did not care about domains.
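In practice this means letting new-bus pick the unit number instead of
reusing the bus number, e.g. (illustrative call site):

	/* before: unit forced to the (possibly non-unique) PCI bus number */
	child = device_add_child(dev, "pci", busno);
	/* after: -1 lets new-bus determine the unit automatically */
	child = device_add_child(dev, "pci", -1);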
Reviewed by: jhb
Obtained from: Semihalf
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D3406
that was recently added for Lenovo laptops.
This is a prime candidate for conversion into a table and also
checking other fields like "product".
Tested:
* ASUS UX31E
running thread.
It is currently implemented only on amd64 and i386; on these
architectures, it is implemented by raising an NMI on the CPU on which
the target thread is currently running. Unlike stack_save_td(), it may
fail, for example if the thread is running in user mode.
This change also modifies the kern.proc.kstack sysctl to use this function,
so that stacks of running threads are shown in the output of "procstat -kk".
This is handy for debugging threads that are stuck in a busy loop.
Reviewed by: bdrewery, jhb, kib
Sponsored by: EMC / Isilon Storage Division
Differential Revision: https://reviews.freebsd.org/D3256
since on amd64 the first argument to a function is generally not on the
stack.
Revert an old DTrace bug fix to some code that assumed that
sizeof(struct amd64_frame) == 16.
Reviewed by: jhb, kib
Sponsored by: EMC / Isilon Storage Division
Differential Revision: https://reviews.freebsd.org/D3255
Add a check to preload_search_info to make sure mod is set. Most of the
callers of preload_search_info don't check that the mod parameter is
set, which can cause page faults. While at it, remove some now unnecessary
checks before calling preload_search_info.
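A sketch of the added guard (the record-walking body of the real routine in
sys/kern/subr_module.c is elided here):

#include <sys/types.h>
#include <sys/linker.h>

caddr_t
preload_search_info(caddr_t mod, int inf)
{

	if (mod == NULL)
		return (NULL);
	/* ... walk the metadata records of 'mod' looking for 'inf' ... */
	return (NULL);
}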
Sponsored by: Citrix Systems R&D
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D3440
Introduce two new loader tunables that can be used to disable PV disks and
PV nics at boot time. They default to 0 and should be set to 1 (or any
number other than 0) in order to disable the PV devices:
hw.xen.disable_pv_disks=1
hw.xen.disable_pv_nics=1
in /boot/loader.conf will disable both PV disks and nics.
Sponsored by: Citrix Systems R&D
Tested by: Karl Pielorz <kpielorz_lst@tdx.co.uk>
MFC after: 1 week
believe that the bug only affects mobile CPUs, at least I did not see
other reports, but it is impossible to detect it in madt_setup_local().
While there, reduce duplication in the information strings printed
when x2APIC is auto-disabled, and do not print the line when the user
manually overrides the setting.
Tested and reviewed by: royger (previous version)
Sponsored by: The FreeBSD Foundation
This allows two things:
1. Sync'ing bounced maps in non-sleepable contexts. The physcopy* calls previously used could sleep on sf_buf operations in some cases.
2. Sync'ing user buffers outside the context of the owning process
Approved by: kib (mentor)
frame buffers and memory mapped UARTs.
1. Delay calling cninit() until after pmap_bootstrap(). This makes
sure we have PMAP initialized enough to add translations. Keep
kdb_init() after cninit() so that we have console when we need
to break into the debugger on boot.
2. Unfortunately, the ATPIC code had to be moved as well so as to
avoid a spurious trap #30. The reason for this is not known
at this time.
3. In pmap_mapdev_attr(), when we need to map a device prior to the
VM system being initialized, use virtual_avail as the KVA to map
the device at. In particular, avoid using the direct map on amd64
because we can't demote by virtue of not being able to allocate
yet. Keep track of the translation.
Re-use the translation after the VM has been initialized to not
waste KVA and to satisfy the assumption in uart(4) that the handle
returned for the low-level console is the same as later returned
when the device is probed and attached.
4. In pmap_unmapdev() remove the mapping from the table when called
pre-init. Otherwise keep the mapping. During bus probe and attach
device resources are mapped and unmapped multiple times, which
would have us destroy the mapping used by the low-level console.
5. In pmap_init(), set pmap_initialized to signal that we're not
pre-init anymore. On amd64, bring the direct map in sync with the
translations created at that time.
6. Implement bus_space_map() and bus_space_unmap() for real: when
the tag corresponds to memory space, call the corresponding
pmap_mapdev() and pmap_unmapdev() functions to construct an
actual handle (see the sketch after this list).
7. In efifb.c and vt_vga.c, remove the crutches and hacks and simply
call pmap_mapdev_attr() or bus_space_map() as desired.
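As mentioned in item 6, a sketch of the memory-space path (the I/O-port path
remains a plain pass-through; the usual <machine/bus.h> and pmap headers are
assumed and the details are illustrative):

int
bus_space_map(bus_space_tag_t tag, bus_addr_t addr, bus_size_t size,
    int flags __unused, bus_space_handle_t *handlep)
{

	if (tag == X86_BUS_SPACE_IO)
		*handlep = addr;	/* port I/O needs no mapping */
	else
		*handlep = (bus_space_handle_t)(uintptr_t)
		    pmap_mapdev(addr, size);
	return (0);
}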
Notes:
1. uart(4) already used bus_space_map() during low-level console
setup but since serial ports have traditionally been I/O port
based, the lack of a proper implementation for said function
was not a problem. It has always supported memory mapped UARTs
for low-level consoles by setting hw.uart.console accordingly.
2. The use of the direct map on amd64 without setting caching
attributes has been a bigger problem than previously thought.
This change has the fortunate (and unexpected) side-effect of
fixing various EFI frame buffer problems (though not all).
PR: 191564, 194952
Special thanks to:
1. XipLink, Inc -- generously donated an Intel Bay Trail E3800
based eval board (ADLE3800PC).
2. The FreeBSD Foundation, in particular emaste@ -- for UEFI
support in general and testing.
3. Everyone who tested the proposed patch for PR 191564.
4. jhb@ and kib@ for being a sounding board and applying a clue bat
if so needed.
single ICR MSR write. This is in contrast with xAPIC mode, where
we must read the current ICR value, do bit fiddling, and perform two 32-bit
register writes. As a consequence, there is no need to disable
interrupts around the ICR value calculation and write.
Note that typical users of ipi_raw() and ipi_vectored() take spinlock,
which already disables interrupts. For them, the change removes
unneeded CLI and POPFL/Q instructions.
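Schematically (MSR 0x830 is the architectural x2APIC ICR; the wrapper name is
illustrative):

#include <sys/types.h>
#include <machine/cpufunc.h>

static void
x2apic_write_icr(uint32_t dest, uint32_t icrlo)
{

	/* A single 64-bit MSR write: destination in the high half, vector
	 * and delivery mode in the low half.  No read-modify-write, hence
	 * no need to disable interrupts around it. */
	wrmsr(0x830, ((uint64_t)dest << 32) | icrlo);
}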
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks