/*-
 * Copyright (c) 1996, by Steve Passe
 * Copyright (c) 2008, by Kip Macy
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. The name of the developer may NOT be used to endorse or promote products
 *    derived from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_apic.h"
#include "opt_cpu.h"
#include "opt_kstack_pages.h"
#include "opt_mp_watchdog.h"
#include "opt_pmap.h"
#include "opt_sched.h"
#include "opt_smp.h"

#if !defined(lint)

#if !defined(SMP)
#error How did you get here?
#endif

#ifndef DEV_APIC
#error The apic device is required for SMP, add "device apic" to your config file.
#endif

#if defined(CPU_DISABLE_CMPXCHG) && !defined(COMPILING_LINT)
#error SMP not supported with CPU_DISABLE_CMPXCHG
#endif
#endif /* not lint */

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/cons.h>           /* cngetc() */
#include <sys/cpuset.h>
#ifdef GPROF
#include <sys/gmon.h>
#endif
#include <sys/kernel.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/memrange.h>
#include <sys/mutex.h>
#include <sys/pcpu.h>
#include <sys/proc.h>
#include <sys/rwlock.h>
#include <sys/sched.h>
#include <sys/smp.h>
#include <sys/sysctl.h>

#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/pmap.h>
#include <vm/vm_kern.h>
#include <vm/vm_extern.h>
#include <vm/vm_page.h>

#include <x86/apicreg.h>
#include <machine/md_var.h>
#include <machine/mp_watchdog.h>
#include <machine/pcb.h>
#include <machine/psl.h>
#include <machine/smp.h>
#include <machine/specialreg.h>
#include <machine/pcpu.h>

#include <xen/xen-os.h>
#include <xen/evtchn.h>
#include <xen/xen_intr.h>
#include <xen/hypervisor.h>
#include <xen/interface/vcpu.h>

/*---------------------------- Extern Declarations ---------------------------*/
extern struct pcpu __pcpu[];

extern void Xhypervisor_callback(void);
extern void failsafe_callback(void);
extern void pmap_lazyfix_action(void);

/*--------------------------- Forward Declarations ---------------------------*/
static driver_filter_t smp_reschedule_interrupt;
static driver_filter_t smp_call_function_interrupt;
static void assign_cpu_ids(void);
static void set_interrupt_apic_ids(void);
static int  start_all_aps(void);
static int  start_ap(int apic_id);
static void release_aps(void *dummy);

/*---------------------------------- Macros ----------------------------------*/
#define IPI_TO_IDX(ipi) ((ipi) - APIC_IPI_INTS)

/*-------------------------------- Local Types -------------------------------*/
typedef void call_data_func_t(uintptr_t, uintptr_t);

struct cpu_info {
        int     cpu_present:1;
        int     cpu_bsp:1;
        int     cpu_disabled:1;
};

struct xen_ipi_handler {
        driver_filter_t *filter;
        const char      *description;
};

enum {
        RESCHEDULE_VECTOR,
        CALL_FUNCTION_VECTOR,
};

/*-------------------------------- Global Data -------------------------------*/
static u_int    hyperthreading_cpus;
static cpuset_t hyperthreading_cpus_mask;

int     mp_naps;                /* # of Application processors */
int     boot_cpu_id = -1;       /* designated BSP */

static int bootAP;
static union descriptor *bootAPgdt;

/* Free these after use */
void *bootstacks[MAXCPU];

struct pcb stoppcbs[MAXCPU];

/* Variables needed for SMP tlb shootdown. */
vm_offset_t smp_tlb_addr1;
vm_offset_t smp_tlb_addr2;
volatile int smp_tlb_wait;

static u_int logical_cpus;
static volatile cpuset_t ipi_nmi_pending;

/* Used to hold the APs until we are ready to release them. */
static struct mtx ap_boot_mtx;

/* Set to 1 once we're ready to let the APs out of the pen. */
static volatile int aps_ready = 0;

/*
 * Store data from cpu_add() until later in the boot when we actually setup
 * the APs.
 */
static struct cpu_info cpu_info[MAX_APIC_ID + 1];
int cpu_apic_ids[MAXCPU];
int apic_cpuids[MAX_APIC_ID + 1];

/* Holds pending bitmap-based IPIs per CPU. */
static volatile u_int cpu_ipi_pending[MAXCPU];

static int cpu_logical;
static int cpu_cores;

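/*
 * Xen IPIs handled via event channels.  xen_smp_cpu_init() binds one
 * event-channel port per table entry to each vCPU and attaches the
 * corresponding filter routine to it.
 */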
static const struct xen_ipi_handler xen_ipis[] = {
        [RESCHEDULE_VECTOR]     = { smp_reschedule_interrupt,   "resched"  },
        [CALL_FUNCTION_VECTOR]  = { smp_call_function_interrupt, "callfunc" }
};

/*------------------------------- Per-CPU Data -------------------------------*/
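/*
 * Per-vCPU storage: the event-channel handle bound for each entry of
 * xen_ipis[], and a pointer to this vCPU's vcpu_info structure (set up
 * in xen_smp_intr_setup_cpus()).
 */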
DPCPU_DEFINE(xen_intr_handle_t, ipi_handle[nitems(xen_ipis)]);
DPCPU_DEFINE(struct vcpu_info *, vcpu_info);

/*------------------------------ Implementation ------------------------------*/
struct cpu_group *
cpu_topo(void)
{
        if (cpu_cores == 0)
                cpu_cores = 1;
        if (cpu_logical == 0)
                cpu_logical = 1;
        if (mp_ncpus % (cpu_cores * cpu_logical) != 0) {
                printf("WARNING: Non-uniform processors.\n");
                printf("WARNING: Using suboptimal topology.\n");
                return (smp_topo_none());
        }

        /*
         * No multi-core or hyper-threading.
         */
        if (cpu_logical * cpu_cores == 1)
                return (smp_topo_none());

        /*
         * Only HTT, no multi-core.
         */
        if (cpu_logical > 1 && cpu_cores == 1)
                return (smp_topo_1level(CG_SHARE_L1, cpu_logical, CG_FLAG_HTT));

        /*
         * Only multi-core, no HTT.
         */
        if (cpu_cores > 1 && cpu_logical == 1)
                return (smp_topo_1level(CG_SHARE_NONE, cpu_cores, 0));

        /*
         * Both HTT and multi-core.
         */
        return (smp_topo_2level(CG_SHARE_NONE, cpu_cores,
            CG_SHARE_L1, cpu_logical, CG_FLAG_HTT));
}

/*
 * Calculate usable address in base memory for AP trampoline code.
 */
u_int
mp_bootaddress(u_int basemem)
{

        return (basemem);
}

void
cpu_add(u_int apic_id, char boot_cpu)
{

        if (apic_id > MAX_APIC_ID) {
                panic("SMP: APIC ID %d too high", apic_id);
                return;
        }
        KASSERT(cpu_info[apic_id].cpu_present == 0, ("CPU %d added twice",
            apic_id));
        cpu_info[apic_id].cpu_present = 1;
        if (boot_cpu) {
                KASSERT(boot_cpu_id == -1,
                    ("CPU %d claims to be BSP, but CPU %d already is", apic_id,
                    boot_cpu_id));
                boot_cpu_id = apic_id;
                cpu_info[apic_id].cpu_bsp = 1;
        }
        if (mp_ncpus < MAXCPU)
                mp_ncpus++;
        if (bootverbose)
                printf("SMP: Added CPU %d (%s)\n", apic_id, boot_cpu ? "BSP" :
                    "AP");

        /* Set the ACPI id (it is needed by VCPU operations) */
        pcpu_find(apic_id)->pc_acpi_id = apic_id;
}

void
cpu_mp_setmaxid(void)
{

        mp_maxid = MAXCPU - 1;
}

int
cpu_mp_probe(void)
{

        /*
         * Always record BSP in CPU map so that the mbuf init code works
         * correctly.
         */
        CPU_SETOF(0, &all_cpus);
        if (mp_ncpus == 0) {
                /*
                 * No CPUs were found, so this must be a UP system.  Setup
                 * the variables to represent a system with a single CPU
                 * with an id of 0.
                 */
                mp_ncpus = 1;
                return (0);
        }

        /* At least one CPU was found. */
        if (mp_ncpus == 1) {
                /*
                 * One CPU was found, so this must be a UP system with
                 * an I/O APIC.
                 */
                return (0);
        }

        /* At least two CPUs were found. */
        return (1);
}

/*
 * Initialize the IPI handlers and start up the APs.
 */
void
cpu_mp_start(void)
{
        int i;

        /* Initialize the logical ID to APIC ID table. */
        for (i = 0; i < MAXCPU; i++) {
                cpu_apic_ids[i] = -1;
                cpu_ipi_pending[i] = 0;
        }

        /* Set boot_cpu_id if needed. */
        if (boot_cpu_id == -1) {
                boot_cpu_id = PCPU_GET(apic_id);
                cpu_info[boot_cpu_id].cpu_bsp = 1;
        } else
                KASSERT(boot_cpu_id == PCPU_GET(apic_id),
                    ("BSP's APIC ID doesn't match boot_cpu_id"));
        cpu_apic_ids[0] = boot_cpu_id;
        apic_cpuids[boot_cpu_id] = 0;

        assign_cpu_ids();

        /* Start each Application Processor. */
        start_all_aps();

        /* Setup the initial logical CPUs info. */
        logical_cpus = 0;
        CPU_ZERO(&logical_cpus_mask);
        if (cpu_feature & CPUID_HTT)
                logical_cpus = (cpu_procinfo & CPUID_HTT_CORES) >> 16;

        set_interrupt_apic_ids();
}

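/*
 * The iv_*() routines below do the actual work for each function IPI.
 * They are dispatched by smp_call_function_interrupt() through the
 * ipi_vectors[] table, which is indexed by IPI_TO_IDX(), so the table
 * entries must stay in the same order as the IPI vectors defined
 * relative to APIC_IPI_INTS.
 */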
static void
iv_rendezvous(uintptr_t a, uintptr_t b)
{
        smp_rendezvous_action();
}

static void
iv_invltlb(uintptr_t a, uintptr_t b)
{
        xen_tlb_flush();
}

static void
iv_invlpg(uintptr_t a, uintptr_t b)
{
        xen_invlpg(a);
}

static void
iv_invlrng(uintptr_t a, uintptr_t b)
{
        vm_offset_t start = (vm_offset_t)a;
        vm_offset_t end = (vm_offset_t)b;

        while (start < end) {
                xen_invlpg(start);
                start += PAGE_SIZE;
        }
}

static void
iv_invlcache(uintptr_t a, uintptr_t b)
{

        wbinvd();
        atomic_add_int(&smp_tlb_wait, 1);
}

static void
iv_lazypmap(uintptr_t a, uintptr_t b)
{
        pmap_lazyfix_action();
        atomic_add_int(&smp_tlb_wait, 1);
}

/*
 * These start from "IPI offset" APIC_IPI_INTS
 */
static call_data_func_t *ipi_vectors[6] = {
        iv_rendezvous,
        iv_invltlb,
        iv_invlpg,
        iv_invlrng,
        iv_invlcache,
        iv_lazypmap,
};

/*
 * Reschedule callback.  Nothing to do, all the work is done automatically
 * when we return from the interrupt.
 */
static int
smp_reschedule_interrupt(void *unused)
{
        int cpu = PCPU_GET(cpuid);
        u_int ipi_bitmap;

        ipi_bitmap = atomic_readandclear_int(&cpu_ipi_pending[cpu]);

        if (ipi_bitmap & (1 << IPI_PREEMPT)) {
#ifdef COUNT_IPIS
                (*ipi_preempt_counts[cpu])++;
#endif
                sched_preempt(curthread);
        }

        if (ipi_bitmap & (1 << IPI_AST)) {
#ifdef COUNT_IPIS
                (*ipi_ast_counts[cpu])++;
#endif
                /* Nothing to do for AST. */
        }
        return (FILTER_HANDLED);
}

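/*
 * Argument block shared between the CPU that initiates a function IPI and
 * the CPUs that service it.  The handler bumps 'started' before invoking
 * the target function and, when 'wait' is set, bumps 'finished' afterwards,
 * so the initiating CPU can tell when the call has been picked up and, if
 * requested, completed.
 */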
struct _call_data {
        uint16_t func_id;
        uint16_t wait;
        uintptr_t arg1;
        uintptr_t arg2;
        atomic_t started;
        atomic_t finished;
};

static struct _call_data *call_data;

static int
smp_call_function_interrupt(void *unused)
{
        call_data_func_t *func;
        uintptr_t arg1 = call_data->arg1;
        uintptr_t arg2 = call_data->arg2;
        int wait = call_data->wait;
        atomic_t *started = &call_data->started;
        atomic_t *finished = &call_data->finished;

        /* We only handle function IPIs, not bitmap IPIs. */
        if (call_data->func_id < APIC_IPI_INTS ||
            call_data->func_id > IPI_BITMAP_VECTOR)
                panic("invalid function id %u", call_data->func_id);

        func = ipi_vectors[IPI_TO_IDX(call_data->func_id)];

        /*
         * Notify the initiating CPU that we have grabbed the data and are
         * about to execute the function.
         */
        mb();
        atomic_inc(started);

        /*
         * At this point the info structure may be out of scope unless
         * wait == 1.
         */
        (*func)(arg1, arg2);

        if (wait) {
                mb();
                atomic_inc(finished);
        }
        atomic_add_int(&smp_tlb_wait, 1);

        return (FILTER_HANDLED);
}

/*
 * Print various information about the SMP system hardware and setup.
 */
void
cpu_mp_announce(void)
{
        int i, x;

        /* List CPUs */
        printf(" cpu0 (BSP): APIC ID: %2d\n", boot_cpu_id);
        for (i = 1, x = 0; x <= MAX_APIC_ID; x++) {
                if (!cpu_info[x].cpu_present || cpu_info[x].cpu_bsp)
                        continue;
                if (cpu_info[x].cpu_disabled)
                        printf("  cpu (AP): APIC ID: %2d (disabled)\n", x);
                else {
                        KASSERT(i < mp_ncpus,
                            ("mp_ncpus and actual cpus are out of whack"));
                        printf(" cpu%d (AP): APIC ID: %2d\n", i++, x);
                }
        }
}

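/*
 * Allocate and bind one IPI event-channel port to the given vCPU for every
 * entry in xen_ipis[], recording the handles in the per-CPU ipi_handle[]
 * array.  If any binding fails, unwind the ones already established so the
 * CPU ends up with either all of its IPI ports or none of them.
 */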
static int
xen_smp_cpu_init(unsigned int cpu)
{
        xen_intr_handle_t *ipi_handle;
        const struct xen_ipi_handler *ipi;
        int idx, rc;

        ipi_handle = DPCPU_ID_GET(cpu, ipi_handle);
        for (ipi = xen_ipis, idx = 0; idx < nitems(xen_ipis); ipi++, idx++) {

                /*
                 * The PCPU variable pc_device is not initialized on i386 PV,
                 * so we have to use the root_bus device in order to setup
                 * the IPIs.
                 */
                rc = xen_intr_alloc_and_bind_ipi(root_bus, cpu,
                    ipi->filter, INTR_TYPE_TTY, &ipi_handle[idx]);
                if (rc != 0) {
                        printf("Unable to allocate a XEN IPI port. "
                            "Error %d\n", rc);
                        break;
                }
                xen_intr_describe(ipi_handle[idx], "%s", ipi->description);
        }

        for (; idx < nitems(xen_ipis); idx++)
                ipi_handle[idx] = NULL;

        if (rc == 0)
                return (0);

        /* Either all are successfully mapped, or none at all. */
        for (idx = 0; idx < nitems(xen_ipis); idx++) {
                if (ipi_handle[idx] == NULL)
                        continue;

                xen_intr_unbind(ipi_handle[idx]);
                ipi_handle[idx] = NULL;
        }

        return (rc);
}

static void
xen_smp_intr_init_cpus(void *unused)
{
        int i;

        for (i = 0; i < mp_ncpus; i++)
                xen_smp_cpu_init(i);
}

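/*
 * Point each CPU's per-CPU vcpu_info pointer at its slot in the
 * hypervisor shared info page.
 */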
static void
xen_smp_intr_setup_cpus(void *unused)
{
        int i;

        for (i = 0; i < mp_ncpus; i++)
                DPCPU_ID_SET(i, vcpu_info,
                    &HYPERVISOR_shared_info->vcpu_info[i]);
}

#define MTOPSIZE (1<<(14 + PAGE_SHIFT))
|
|
|
|
|
|
|
|
/*
|
|
|
|
* AP CPUs call this to initialize themselves.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
init_secondary(void)
|
|
|
|
{
|
|
|
|
vm_offset_t addr;
|
2011-07-04 12:04:52 +00:00
|
|
|
u_int cpuid;
|
2008-09-10 07:11:08 +00:00
|
|
|
int gsel_tss;
|
|
|
|
|
|
|
|
|
|
|
|
/* bootAP is set in start_ap() to our ID. */
|
|
|
|
PCPU_SET(currentldt, _default_ldt);
|
|
|
|
gsel_tss = GSEL(GPROC0_SEL, SEL_KPL);
|
|
|
|
#if 0
|
|
|
|
gdt[bootAP * NGDT + GPROC0_SEL].sd.sd_type = SDT_SYS386TSS;
|
|
|
|
#endif
|
|
|
|
PCPU_SET(common_tss.tss_esp0, 0); /* not used until after switch */
|
|
|
|
PCPU_SET(common_tss.tss_ss0, GSEL(GDATA_SEL, SEL_KPL));
|
|
|
|
PCPU_SET(common_tss.tss_ioopt, (sizeof (struct i386tss)) << 16);
|
|
|
|
#if 0
|
|
|
|
PCPU_SET(tss_gdt, &gdt[bootAP * NGDT + GPROC0_SEL].sd);
|
|
|
|
|
|
|
|
PCPU_SET(common_tssd, *PCPU_GET(tss_gdt));
|
|
|
|
#endif
|
|
|
|
PCPU_SET(fsgs_gdt, &gdt[GUFS_SEL].sd);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Set to a known state:
|
|
|
|
* Set by mpboot.s: CR0_PG, CR0_PE
|
|
|
|
* Set by cpu_setregs: CR0_NE, CR0_MP, CR0_TS, CR0_WP, CR0_AM
|
|
|
|
*/
|
|
|
|
/*
|
|
|
|
* signal our startup to the BSP.
|
|
|
|
*/
|
|
|
|
mp_naps++;
|
|
|
|
|
|
|
|
/* Spin until the BSP releases the APs. */
|
|
|
|
while (!aps_ready)
|
|
|
|
ia32_pause();
|
|
|
|
|
|
|
|
/* BSP may have changed PTD while we were waiting */
|
|
|
|
invltlb();
|
|
|
|
for (addr = 0; addr < NKPT * NBPDR - 1; addr += PAGE_SIZE)
|
|
|
|
invlpg(addr);
|
|
|
|
|
|
|
|
/* set up FPU state on the AP */
|
2009-03-05 18:43:54 +00:00
|
|
|
npxinit();
|
2008-09-10 07:11:08 +00:00
|
|
|
#if 0
|
|
|
|
|
|
|
|
/* set up SSE registers */
|
|
|
|
enable_sse();
|
|
|
|
#endif
|
|
|
|
#if 0 && defined(PAE)
|
|
|
|
/* Enable the PTE no-execute bit. */
|
|
|
|
if ((amd_feature & AMDID_NX) != 0) {
|
|
|
|
uint64_t msr;
|
|
|
|
|
|
|
|
msr = rdmsr(MSR_EFER) | EFER_NXE;
|
|
|
|
wrmsr(MSR_EFER, msr);
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
#if 0
|
|
|
|
/* A quick check from sanity claus */
|
|
|
|
if (PCPU_GET(apic_id) != lapic_id()) {
|
|
|
|
printf("SMP: cpuid = %d\n", PCPU_GET(cpuid));
|
|
|
|
printf("SMP: actual apic_id = %d\n", lapic_id());
|
|
|
|
printf("SMP: correct apic_id = %d\n", PCPU_GET(apic_id));
|
|
|
|
panic("cpuid mismatch! boom!!");
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
|
|
|
/* Initialize curthread. */
|
|
|
|
KASSERT(PCPU_GET(idlethread) != NULL, ("no idle thread"));
|
|
|
|
PCPU_SET(curthread, PCPU_GET(idlethread));
|
|
|
|
|
|
|
|
mtx_lock_spin(&ap_boot_mtx);
|
|
|
|
#if 0
|
|
|
|
|
|
|
|
/* Init local apic for irq's */
|
|
|
|
lapic_setup(1);
|
|
|
|
#endif
|
|
|
|
smp_cpus++;
|
|
|
|
|
2011-07-04 12:04:52 +00:00
|
|
|
cpuid = PCPU_GET(cpuid);
|
|
|
|
CTR1(KTR_SMP, "SMP: AP CPU #%d Launched", cpuid);
|
|
|
|
printf("SMP: AP CPU #%d Launched!\n", cpuid);
|
2008-09-10 07:11:08 +00:00
|
|
|
|
|
|
|
/* Determine if we are a logical CPU. */
|
|
|
|
if (logical_cpus > 1 && PCPU_GET(apic_id) % logical_cpus != 0)
|
2011-07-04 12:04:52 +00:00
|
|
|
CPU_SET(cpuid, &logical_cpus_mask);
|
2008-09-10 07:11:08 +00:00
|
|
|
|
|
|
|
/* Determine if we are a hyperthread. */
|
|
|
|
if (hyperthreading_cpus > 1 &&
|
|
|
|
PCPU_GET(apic_id) % hyperthreading_cpus != 0)
|
2011-07-04 12:04:52 +00:00
|
|
|
CPU_SET(cpuid, &hyperthreading_cpus_mask);
|
2008-09-10 07:11:08 +00:00
|
|
|
#if 0
|
|
|
|
if (bootverbose)
|
|
|
|
lapic_dump("AP");
|
|
|
|
#endif
|
|
|
|
if (smp_cpus == mp_ncpus) {
|
|
|
|
/* enable IPIs, TLB shootdown, freezes, etc. */
|
|
|
|
atomic_store_rel_int(&smp_started, 1);
|
|
|
|
smp_active = 1; /* historic */
|
|
|
|
}
|
|
|
|
|
|
|
|
mtx_unlock_spin(&ap_boot_mtx);
|
|
|
|
|
|
|
|
/* wait until all the APs are up */
|
|
|
|
while (smp_started == 0)
|
|
|
|
ia32_pause();
|
|
|
|
|
2008-09-18 01:09:15 +00:00
|
|
|
PCPU_SET(curthread, PCPU_GET(idlethread));
|
2011-05-13 15:20:57 +00:00
|
|
|
|
|
|
|
/* Start per-CPU event timers. */
|
|
|
|
cpu_initclocks_ap();
|
|
|
|
|
2008-09-10 07:11:08 +00:00
|
|
|
/* enter the scheduler */
|
|
|
|
sched_throw(NULL);
|
|
|
|
|
|
|
|
panic("scheduler returned us to %s", __func__);
|
|
|
|
/* NOTREACHED */
|
|
|
|
}
|
|
|
|
|
|
|
|
/*******************************************************************
|
|
|
|
* local functions and data
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We tell the I/O APIC code about all the CPUs that we want to receive
|
|
|
|
* interrupts. If we don't want certain CPUs to receive IRQs we
|
|
|
|
* can simply not tell the I/O APIC code about them in this function.
|
|
|
|
* We also do not tell it about the BSP since it tells itself about
|
|
|
|
* the BSP internally to work with UP kernels and on UP machines.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
set_interrupt_apic_ids(void)
|
|
|
|
{
|
|
|
|
u_int i, apic_id;
|
|
|
|
|
|
|
|
for (i = 0; i < MAXCPU; i++) {
|
|
|
|
apic_id = cpu_apic_ids[i];
|
|
|
|
if (apic_id == -1)
|
|
|
|
continue;
|
|
|
|
if (cpu_info[apic_id].cpu_bsp)
|
|
|
|
continue;
|
|
|
|
if (cpu_info[apic_id].cpu_disabled)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* Don't let hyperthreads service interrupts. */
|
|
|
|
if (hyperthreading_cpus > 1 &&
|
|
|
|
apic_id % hyperthreading_cpus != 0)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
intr_add_cpu(i);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Assign logical CPU IDs to local APICs.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
assign_cpu_ids(void)
|
|
|
|
{
|
|
|
|
u_int i;
|
|
|
|
|
|
|
|
/* Check for explicitly disabled CPUs. */
|
|
|
|
for (i = 0; i <= MAX_APIC_ID; i++) {
|
|
|
|
if (!cpu_info[i].cpu_present || cpu_info[i].cpu_bsp)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* Don't use this CPU if it has been disabled by a tunable. */
|
|
|
|
if (resource_disabled("lapic", i)) {
|
|
|
|
cpu_info[i].cpu_disabled = 1;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Assign CPU IDs to local APIC IDs and disable any CPUs
|
|
|
|
* beyond MAXCPU. CPU 0 has already been assigned to the BSP,
|
|
|
|
* so we only have to assign IDs for APs.
|
|
|
|
*/
|
|
|
|
mp_ncpus = 1;
|
|
|
|
for (i = 0; i <= MAX_APIC_ID; i++) {
|
|
|
|
if (!cpu_info[i].cpu_present || cpu_info[i].cpu_bsp ||
|
|
|
|
cpu_info[i].cpu_disabled)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
if (mp_ncpus < MAXCPU) {
|
|
|
|
cpu_apic_ids[mp_ncpus] = i;
|
2009-01-31 21:40:27 +00:00
|
|
|
apic_cpuids[i] = mp_ncpus;
|
2008-09-10 07:11:08 +00:00
|
|
|
mp_ncpus++;
|
|
|
|
} else
|
|
|
|
cpu_info[i].cpu_disabled = 1;
|
|
|
|
}
|
|
|
|
KASSERT(mp_maxid >= mp_ncpus - 1,
|
|
|
|
("%s: counters out of sync: max %d, count %d", __func__, mp_maxid,
|
|
|
|
mp_ncpus));
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* start each AP in our list
|
|
|
|
*/
|
|
|
|
/* Lowest 1MB is already mapped: don't touch. */
|
|
|
|
#define TMPMAP_START 1
|
|
|
|
int
|
|
|
|
start_all_aps(void)
|
|
|
|
{
|
|
|
|
int x, apic_id, cpu;
|
|
|
|
struct pcpu *pc;
|
|
|
|
|
|
|
|
mtx_init(&ap_boot_mtx, "ap boot", NULL, MTX_SPIN);
|
|
|
|
|
|
|
|
/* set up temporary P==V mapping for AP boot */
|
|
|
|
/* XXX this is a hack, we should boot the AP on its own stack/PTD */
|
|
|
|
|
|
|
|
/* start each AP */
|
|
|
|
for (cpu = 1; cpu < mp_ncpus; cpu++) {
|
|
|
|
apic_id = cpu_apic_ids[cpu];
|
|
|
|
|
|
|
|
|
|
|
|
bootAP = cpu;
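/*
* Each AP gets its own page-sized slice of the gdt array (512
* descriptors of 8 bytes each).  The slice is remapped read/write
* below while it is zeroed and filled, then mapped read-only again,
* since Xen requires GDT frames to be unwritable by the guest.
*/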
|
|
|
|
bootAPgdt = gdt + (512*cpu);
|
|
|
|
|
|
|
|
/* Get per-cpu data */
|
|
|
|
pc = &__pcpu[bootAP];
|
2008-09-18 02:59:19 +00:00
|
|
|
pcpu_init(pc, bootAP, sizeof(struct pcpu));
|
2013-08-07 06:21:20 +00:00
|
|
|
dpcpu_init((void *)kmem_malloc(kernel_arena, DPCPU_SIZE,
|
|
|
|
M_WAITOK | M_ZERO), bootAP);
|
2008-09-10 07:11:08 +00:00
|
|
|
pc->pc_apic_id = cpu_apic_ids[bootAP];
|
|
|
|
pc->pc_prvspace = pc;
|
|
|
|
pc->pc_curthread = 0;
|
|
|
|
|
|
|
|
gdt_segs[GPRIV_SEL].ssd_base = (int) pc;
|
|
|
|
gdt_segs[GPROC0_SEL].ssd_base = (int) &pc->pc_common_tss;
|
|
|
|
|
2010-11-20 20:04:29 +00:00
|
|
|
PT_SET_MA(bootAPgdt, VTOM(bootAPgdt) | PG_V | PG_RW);
|
2008-09-10 07:11:08 +00:00
|
|
|
bzero(bootAPgdt, PAGE_SIZE);
|
|
|
|
for (x = 0; x < NGDT; x++)
|
|
|
|
ssdtosd(&gdt_segs[x], &bootAPgdt[x].sd);
|
|
|
|
PT_SET_MA(bootAPgdt, vtomach(bootAPgdt) | PG_V);
|
2008-09-25 07:11:04 +00:00
|
|
|
#ifdef notyet
|
|
|
|
|
|
|
|
if (HYPERVISOR_vcpu_op(VCPUOP_get_physid, cpu, &cpu_id) == 0) {
|
|
|
|
apicid = xen_vcpu_physid_to_x86_apicid(cpu_id.phys_id);
|
|
|
|
acpiid = xen_vcpu_physid_to_x86_acpiid(cpu_id.phys_id);
|
|
|
|
#ifdef CONFIG_ACPI
|
|
|
|
if (acpiid != 0xff)
|
|
|
|
x86_acpiid_to_apicid[acpiid] = apicid;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2008-09-10 07:11:08 +00:00
|
|
|
/* attempt to start the Application Processor */
|
|
|
|
if (!start_ap(cpu)) {
|
|
|
|
printf("AP #%d (PHY# %d) failed!\n", cpu, apic_id);
|
|
|
|
/* better panic as the AP may be running loose */
|
|
|
|
printf("panic y/n? [y] ");
|
|
|
|
if (cngetc() != 'n')
|
|
|
|
panic("bye-bye");
|
|
|
|
}
|
|
|
|
|
2011-05-05 14:39:14 +00:00
|
|
|
CPU_SET(cpu, &all_cpus); /* record AP in CPU map */
|
2008-09-10 07:11:08 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
pmap_invalidate_range(kernel_pmap, 0, NKPT * NBPDR - 1);
|
|
|
|
|
|
|
|
/* number of APs actually started */
|
2013-09-02 22:22:56 +00:00
|
|
|
return (mp_naps);
|
2008-09-10 07:11:08 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
extern uint8_t *pcpu_boot_stack;
|
|
|
|
extern trap_info_t trap_table[];
|
|
|
|
|
|
|
|
static void
|
|
|
|
smp_trap_init(trap_info_t *trap_ctxt)
|
|
|
|
{
|
|
|
|
const trap_info_t *t = trap_table;
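/*
* trap_table is terminated by an entry with a zero address; copy
* every valid entry into the new vcpu's trap context.
*/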
|
|
|
|
|
|
|
|
for (t = trap_table; t->address; t++) {
|
|
|
|
trap_ctxt[t->vector].flags = t->flags;
|
|
|
|
trap_ctxt[t->vector].cs = t->cs;
|
|
|
|
trap_ctxt[t->vector].address = t->address;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-10-12 23:26:00 +00:00
|
|
|
extern struct rwlock pvh_global_lock;
|
2008-09-10 07:11:08 +00:00
|
|
|
extern int nkpt;
|
2008-10-21 06:39:40 +00:00
|
|
|
static void
|
2008-09-10 07:11:08 +00:00
|
|
|
cpu_initialize_context(unsigned int cpu)
|
|
|
|
{
|
|
|
|
/*
* vcpu_guest_context_t is too large to allocate on the stack.
|
|
|
|
* Hence we allocate it statically and protect it with a lock.
*/
|
2011-12-20 20:29:45 +00:00
|
|
|
vm_page_t m[NPGPTD + 2];
|
2008-09-10 07:11:08 +00:00
|
|
|
static vcpu_guest_context_t ctxt;
|
|
|
|
vm_offset_t boot_stack;
|
2008-09-18 01:09:15 +00:00
|
|
|
vm_offset_t newPTD;
|
|
|
|
vm_paddr_t ma[NPGPTD];
|
2008-09-10 07:11:08 +00:00
|
|
|
int i;
|
|
|
|
|
|
|
|
/*
|
2008-09-18 01:09:15 +00:00
|
|
|
* Pages [0-3]: PTD
|
|
|
|
* Page  [4]:   boot stack
|
|
|
|
* Page  [5]:   PDPT
|
2008-09-10 07:11:08 +00:00
|
|
|
*
|
|
|
|
*/
|
2008-09-18 01:09:15 +00:00
|
|
|
for (i = 0; i < NPGPTD + 2; i++) {
|
2011-12-15 05:07:16 +00:00
|
|
|
m[i] = vm_page_alloc(NULL, 0,
|
2008-09-10 07:11:08 +00:00
|
|
|
VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED |
|
|
|
|
VM_ALLOC_ZERO);
|
|
|
|
|
|
|
|
pmap_zero_page(m[i]);
|
|
|
|
|
|
|
|
}
|
2013-08-07 06:21:20 +00:00
|
|
|
boot_stack = kva_alloc(PAGE_SIZE);
|
|
|
|
newPTD = kva_alloc(NPGPTD * PAGE_SIZE);
|
2010-11-20 20:04:29 +00:00
|
|
|
ma[0] = VM_PAGE_TO_MACH(m[0])|PG_V;
|
2008-09-18 01:09:15 +00:00
|
|
|
|
|
|
|
#ifdef PAE
|
|
|
|
pmap_kenter(boot_stack, VM_PAGE_TO_PHYS(m[NPGPTD + 1]));
|
|
|
|
for (i = 0; i < NPGPTD; i++) {
|
|
|
|
((vm_paddr_t *)boot_stack)[i] =
|
2010-11-20 20:04:29 +00:00
|
|
|
ma[i] = VM_PAGE_TO_MACH(m[i])|PG_V;
|
2008-09-10 07:11:08 +00:00
|
|
|
}
|
2008-09-18 01:09:15 +00:00
|
|
|
#endif
|
2008-09-10 07:11:08 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Copy cpu0 IdlePTD to new IdlePTD - copying only
|
|
|
|
* kernel mappings
|
|
|
|
*/
|
2008-09-18 01:09:15 +00:00
|
|
|
pmap_qenter(newPTD, m, 4);
|
2008-09-10 07:11:08 +00:00
|
|
|
|
2008-09-18 01:09:15 +00:00
|
|
|
memcpy((uint8_t *)newPTD + KPTDI*sizeof(vm_paddr_t),
|
|
|
|
(uint8_t *)PTOV(IdlePTD) + KPTDI*sizeof(vm_paddr_t),
|
|
|
|
nkpt*sizeof(vm_paddr_t));
|
|
|
|
|
|
|
|
pmap_qremove(newPTD, 4);
|
2013-08-07 06:21:20 +00:00
|
|
|
kva_free(newPTD, 4 * PAGE_SIZE);
|
2008-09-10 07:11:08 +00:00
|
|
|
/*
|
|
|
|
* map actual idle stack to boot_stack
|
|
|
|
*/
|
2008-09-18 01:09:15 +00:00
|
|
|
pmap_kenter(boot_stack, VM_PAGE_TO_PHYS(m[NPGPTD]));
|
2008-09-10 07:11:08 +00:00
|
|
|
|
|
|
|
|
2010-11-20 20:04:29 +00:00
|
|
|
xen_pgdpt_pin(VM_PAGE_TO_MACH(m[NPGPTD + 1]));
|
2012-10-12 23:26:00 +00:00
|
|
|
rw_wlock(&pvh_global_lock);
|
2008-09-10 07:11:08 +00:00
|
|
|
for (i = 0; i < 4; i++) {
|
2008-09-18 01:09:15 +00:00
|
|
|
int pdir = (PTDPTDI + i) / NPDEPG;
|
|
|
|
int curoffset = (PTDPTDI + i) % NPDEPG;
|
|
|
|
|
2008-09-10 07:11:08 +00:00
|
|
|
xen_queue_pt_update((vm_paddr_t)
|
2008-09-18 01:09:15 +00:00
|
|
|
((ma[pdir] & ~PG_V) + (curoffset*sizeof(vm_paddr_t))),
|
2008-09-10 07:11:08 +00:00
|
|
|
ma[i]);
|
|
|
|
}
|
|
|
|
PT_UPDATES_FLUSH();
|
2012-10-12 23:26:00 +00:00
|
|
|
rw_wunlock(&pvh_global_lock);
|
2008-09-10 07:11:08 +00:00
|
|
|
|
|
|
|
memset(&ctxt, 0, sizeof(ctxt));
|
|
|
|
ctxt.flags = VGCF_IN_KERNEL;
|
|
|
|
ctxt.user_regs.ds = GSEL(GDATA_SEL, SEL_KPL);
|
|
|
|
ctxt.user_regs.es = GSEL(GDATA_SEL, SEL_KPL);
|
|
|
|
ctxt.user_regs.fs = GSEL(GPRIV_SEL, SEL_KPL);
|
|
|
|
ctxt.user_regs.gs = GSEL(GDATA_SEL, SEL_KPL);
|
|
|
|
ctxt.user_regs.cs = GSEL(GCODE_SEL, SEL_KPL);
|
|
|
|
ctxt.user_regs.ss = GSEL(GDATA_SEL, SEL_KPL);
|
|
|
|
ctxt.user_regs.eip = (unsigned long)init_secondary;
|
|
|
|
ctxt.user_regs.eflags = PSL_KERNEL | 0x1000; /* IOPL_RING1 */
|
|
|
|
|
|
|
|
memset(&ctxt.fpu_ctxt, 0, sizeof(ctxt.fpu_ctxt));
|
|
|
|
|
|
|
|
smp_trap_init(ctxt.trap_ctxt);
|
|
|
|
|
|
|
|
ctxt.ldt_ents = 0;
|
2013-09-02 22:22:56 +00:00
|
|
|
ctxt.gdt_frames[0] =
|
|
|
|
(uint32_t)((uint64_t)vtomach(bootAPgdt) >> PAGE_SHIFT);
|
2008-09-10 07:11:08 +00:00
|
|
|
ctxt.gdt_ents = 512;
|
|
|
|
|
|
|
|
#ifdef __i386__
|
|
|
|
ctxt.user_regs.esp = boot_stack + PAGE_SIZE;
|
|
|
|
|
|
|
|
ctxt.kernel_ss = GSEL(GDATA_SEL, SEL_KPL);
|
|
|
|
ctxt.kernel_sp = boot_stack + PAGE_SIZE;
|
|
|
|
|
|
|
|
ctxt.event_callback_cs = GSEL(GCODE_SEL, SEL_KPL);
|
|
|
|
ctxt.event_callback_eip = (unsigned long)Xhypervisor_callback;
|
|
|
|
ctxt.failsafe_callback_cs = GSEL(GCODE_SEL, SEL_KPL);
|
|
|
|
ctxt.failsafe_callback_eip = (unsigned long)failsafe_callback;
|
|
|
|
|
2010-11-20 20:04:29 +00:00
|
|
|
ctxt.ctrlreg[3] = VM_PAGE_TO_MACH(m[NPGPTD + 1]);
|
2008-09-10 07:11:08 +00:00
|
|
|
#else /* __x86_64__ */
|
|
|
|
ctxt.user_regs.esp = idle->thread.rsp0 - sizeof(struct pt_regs);
|
|
|
|
ctxt.kernel_ss = GSEL(GDATA_SEL, SEL_KPL);
|
|
|
|
ctxt.kernel_sp = idle->thread.rsp0;
|
|
|
|
|
|
|
|
ctxt.event_callback_eip = (unsigned long)hypervisor_callback;
|
|
|
|
ctxt.failsafe_callback_eip = (unsigned long)failsafe_callback;
|
|
|
|
ctxt.syscall_callback_eip = (unsigned long)system_call;
|
|
|
|
|
|
|
|
ctxt.ctrlreg[3] = xen_pfn_to_cr3(virt_to_mfn(init_level4_pgt));
|
|
|
|
|
|
|
|
ctxt.gs_base_kernel = (unsigned long)(cpu_pda(cpu));
|
|
|
|
#endif
|
|
|
|
|
|
|
|
printf("gdtpfn=%lx pdptpfn=%lx\n",
|
|
|
|
ctxt.gdt_frames[0],
|
|
|
|
ctxt.ctrlreg[3] >> PAGE_SHIFT);
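/*
* Hand the completed register context to the hypervisor and bring
* the vcpu online; it begins execution at init_secondary() on the
* boot stack mapped above.
*/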
|
|
|
|
|
|
|
|
PANIC_IF(HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, &ctxt));
|
|
|
|
DELAY(3000);
|
|
|
|
PANIC_IF(HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL));
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This function starts the AP (application processor) identified
|
|
|
|
* by the APIC ID 'physicalCpu'. It does quite a "song and dance"
|
|
|
|
* to accomplish this. This is necessary because of the nuances
|
|
|
|
* of the different hardware we might encounter. It isn't pretty,
|
|
|
|
* but it seems to work.
|
|
|
|
*/
|
2008-09-18 01:09:15 +00:00
|
|
|
|
|
|
|
int cpus;
|
2008-09-10 07:11:08 +00:00
|
|
|
static int
|
|
|
|
start_ap(int apic_id)
|
|
|
|
{
|
|
|
|
int ms;
|
|
|
|
|
|
|
|
/* used as a watchpoint to signal AP startup */
|
|
|
|
cpus = mp_naps;
|
|
|
|
|
|
|
|
cpu_initialize_context(apic_id);
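/*
* init_secondary() increments mp_naps once the new vcpu is running,
* which releases the polling loop below.
*/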
|
|
|
|
|
|
|
|
/* Wait up to 5 seconds for it to start. */
|
|
|
|
for (ms = 0; ms < 5000; ms++) {
|
|
|
|
if (mp_naps > cpus)
|
2013-09-02 22:22:56 +00:00
|
|
|
return (1); /* return SUCCESS */
|
2008-09-10 07:11:08 +00:00
|
|
|
DELAY(1000);
|
|
|
|
}
|
2013-09-02 22:22:56 +00:00
|
|
|
return (0); /* return FAILURE */
|
2008-09-10 07:11:08 +00:00
|
|
|
}
|
|
|
|
|
2013-08-29 19:52:18 +00:00
|
|
|
static void
|
|
|
|
ipi_pcpu(int cpu, u_int ipi)
|
|
|
|
{
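/*
* Each IPI type is delivered by signalling the event channel handle
* bound for it on the target CPU.
*/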
|
2013-09-06 22:17:02 +00:00
|
|
|
KASSERT(ipi < nitems(xen_ipis), ("invalid IPI"));
|
|
|
|
xen_intr_signal(DPCPU_ID_GET(cpu, ipi_handle[ipi]));
|
2013-08-29 19:52:18 +00:00
|
|
|
}
|
|
|
|
|
2011-05-02 13:56:47 +00:00
|
|
|
/*
|
|
|
|
* send an IPI to a specific CPU.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
ipi_send_cpu(int cpu, u_int ipi)
|
|
|
|
{
|
|
|
|
u_int bitmap, old_pending, new_pending;
|
|
|
|
|
|
|
|
if (IPI_IS_BITMAPED(ipi)) {
|
|
|
|
bitmap = 1 << ipi;
|
|
|
|
ipi = IPI_BITMAP_VECTOR;
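/*
* Atomically OR the request into this CPU's pending bitmap and only
* raise the upcall when no bits were previously pending, so
* back-to-back bitmapped IPIs coalesce into a single notification.
*/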
|
|
|
|
do {
|
|
|
|
old_pending = cpu_ipi_pending[cpu];
|
|
|
|
new_pending = old_pending | bitmap;
|
|
|
|
} while (!atomic_cmpset_int(&cpu_ipi_pending[cpu],
|
|
|
|
old_pending, new_pending));
|
|
|
|
if (!old_pending)
|
|
|
|
ipi_pcpu(cpu, RESCHEDULE_VECTOR);
|
|
|
|
} else {
|
|
|
|
KASSERT(call_data != NULL, ("call_data not set"));
|
|
|
|
ipi_pcpu(cpu, CALL_FUNCTION_VECTOR);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2008-09-10 07:11:08 +00:00
|
|
|
/*
|
|
|
|
* Flush the TLB on all other CPUs
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
smp_tlb_shootdown(u_int vector, vm_offset_t addr1, vm_offset_t addr2)
|
|
|
|
{
|
|
|
|
u_int ncpu;
|
2008-10-23 07:20:43 +00:00
|
|
|
struct _call_data data;
|
2008-09-10 07:11:08 +00:00
|
|
|
|
|
|
|
ncpu = mp_ncpus - 1; /* does not shootdown self */
|
|
|
|
if (ncpu < 1)
|
|
|
|
return; /* no other cpus */
|
|
|
|
if (!(read_eflags() & PSL_I))
|
|
|
|
panic("%s: interrupts disabled", __func__);
|
|
|
|
mtx_lock_spin(&smp_ipi_mtx);
|
2009-05-30 15:20:25 +00:00
|
|
|
KASSERT(call_data == NULL, ("call_data isn't null?!"));
|
|
|
|
call_data = &data;
|
2008-10-24 07:58:38 +00:00
|
|
|
call_data->func_id = vector;
|
2008-10-21 06:39:40 +00:00
|
|
|
call_data->arg1 = addr1;
|
|
|
|
call_data->arg2 = addr2;
|
2008-09-10 07:11:08 +00:00
|
|
|
atomic_store_rel_int(&smp_tlb_wait, 0);
|
|
|
|
ipi_all_but_self(vector);
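/*
* The remote handlers increment smp_tlb_wait as they complete the
* invalidation; spin until all ncpu other CPUs have checked in.
*/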
|
|
|
|
while (smp_tlb_wait < ncpu)
|
|
|
|
ia32_pause();
|
2008-10-24 07:58:38 +00:00
|
|
|
call_data = NULL;
|
2008-09-10 07:11:08 +00:00
|
|
|
mtx_unlock_spin(&smp_ipi_mtx);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2013-09-02 22:22:56 +00:00
|
|
|
smp_targeted_tlb_shootdown(cpuset_t mask, u_int vector, vm_offset_t addr1,
|
|
|
|
vm_offset_t addr2)
|
2008-09-10 07:11:08 +00:00
|
|
|
{
|
2011-05-05 14:39:14 +00:00
|
|
|
int cpu, ncpu, othercpus;
|
2008-10-24 07:58:38 +00:00
|
|
|
struct _call_data data;
|
2008-09-10 07:11:08 +00:00
|
|
|
|
|
|
|
othercpus = mp_ncpus - 1;
|
2011-05-05 14:39:14 +00:00
|
|
|
if (CPU_ISFULLSET(&mask)) {
|
|
|
|
if (othercpus < 1)
|
2008-09-10 07:11:08 +00:00
|
|
|
return;
|
|
|
|
} else {
|
2011-07-04 12:04:52 +00:00
|
|
|
CPU_CLR(PCPU_GET(cpuid), &mask);
|
2011-05-05 14:39:14 +00:00
|
|
|
if (CPU_EMPTY(&mask))
|
2008-09-10 07:11:08 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
if (!(read_eflags() & PSL_I))
|
|
|
|
panic("%s: interrupts disabled", __func__);
|
|
|
|
mtx_lock_spin(&smp_ipi_mtx);
|
2009-05-30 15:20:25 +00:00
|
|
|
KASSERT(call_data == NULL, ("call_data isn't null?!"));
|
2008-10-24 07:58:38 +00:00
|
|
|
call_data = &data;
|
|
|
|
call_data->func_id = vector;
|
|
|
|
call_data->arg1 = addr1;
|
|
|
|
call_data->arg2 = addr2;
|
2008-09-10 07:11:08 +00:00
|
|
|
atomic_store_rel_int(&smp_tlb_wait, 0);
|
2011-05-05 14:39:14 +00:00
|
|
|
if (CPU_ISFULLSET(&mask)) {
|
|
|
|
ncpu = othercpus;
|
2008-09-10 07:11:08 +00:00
|
|
|
ipi_all_but_self(vector);
|
2011-05-05 14:39:14 +00:00
|
|
|
} else {
|
|
|
|
ncpu = 0;
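/*
* CPU_FFS() returns the 1-based index of the lowest set bit in the
* mask (0 when the mask is empty), hence the cpu-- before use.
*/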
|
2013-06-13 20:46:03 +00:00
|
|
|
while ((cpu = CPU_FFS(&mask)) != 0) {
|
2011-05-05 14:39:14 +00:00
|
|
|
cpu--;
|
|
|
|
CPU_CLR(cpu, &mask);
|
|
|
|
CTR3(KTR_SMP, "%s: cpu: %d ipi: %x", __func__, cpu,
|
|
|
|
vector);
|
|
|
|
ipi_send_cpu(cpu, vector);
|
|
|
|
ncpu++;
|
|
|
|
}
|
|
|
|
}
|
2008-09-10 07:11:08 +00:00
|
|
|
while (smp_tlb_wait < ncpu)
|
|
|
|
ia32_pause();
|
2008-10-24 07:58:38 +00:00
|
|
|
call_data = NULL;
|
2008-09-10 07:11:08 +00:00
|
|
|
mtx_unlock_spin(&smp_ipi_mtx);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
smp_cache_flush(void)
|
|
|
|
{
|
|
|
|
|
|
|
|
if (smp_started)
|
|
|
|
smp_tlb_shootdown(IPI_INVLCACHE, 0, 0);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
smp_invltlb(void)
|
|
|
|
{
|
|
|
|
|
|
|
|
if (smp_started) {
|
|
|
|
smp_tlb_shootdown(IPI_INVLTLB, 0, 0);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
smp_invlpg(vm_offset_t addr)
|
|
|
|
{
|
|
|
|
|
|
|
|
if (smp_started) {
|
|
|
|
smp_tlb_shootdown(IPI_INVLPG, addr, 0);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
smp_invlpg_range(vm_offset_t addr1, vm_offset_t addr2)
|
|
|
|
{
|
|
|
|
|
|
|
|
if (smp_started) {
|
|
|
|
smp_tlb_shootdown(IPI_INVLRNG, addr1, addr2);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2011-05-05 14:39:14 +00:00
|
|
|
smp_masked_invltlb(cpuset_t mask)
|
2008-09-10 07:11:08 +00:00
|
|
|
{
|
|
|
|
|
|
|
|
if (smp_started) {
|
|
|
|
smp_targeted_tlb_shootdown(mask, IPI_INVLTLB, 0, 0);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2011-05-05 14:39:14 +00:00
|
|
|
smp_masked_invlpg(cpuset_t mask, vm_offset_t addr)
|
2008-09-10 07:11:08 +00:00
|
|
|
{
|
|
|
|
|
|
|
|
if (smp_started) {
|
|
|
|
smp_targeted_tlb_shootdown(mask, IPI_INVLPG, addr, 0);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2011-05-05 14:39:14 +00:00
|
|
|
smp_masked_invlpg_range(cpuset_t mask, vm_offset_t addr1, vm_offset_t addr2)
|
2008-09-10 07:11:08 +00:00
|
|
|
{
|
|
|
|
|
|
|
|
if (smp_started) {
|
|
|
|
smp_targeted_tlb_shootdown(mask, IPI_INVLRNG, addr1, addr2);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* send an IPI to a set of cpus.
|
|
|
|
*/
|
|
|
|
void
|
2011-05-05 14:39:14 +00:00
|
|
|
ipi_selected(cpuset_t cpus, u_int ipi)
|
2008-09-10 07:11:08 +00:00
|
|
|
{
|
|
|
|
int cpu;
|
|
|
|
|
2009-08-15 18:37:06 +00:00
|
|
|
/*
|
|
|
|
* IPI_STOP_HARD maps to an NMI and the trap handler needs a bit
|
|
|
|
* of help to identify its source.
|
|
|
|
* Set the mask of receiving CPUs for this purpose.
|
|
|
|
*/
|
|
|
|
if (ipi == IPI_STOP_HARD)
|
2011-05-05 14:39:14 +00:00
|
|
|
		CPU_OR_ATOMIC(&ipi_nmi_pending, &cpus);

	while ((cpu = CPU_FFS(&cpus)) != 0) {
		cpu--;
		CPU_CLR(cpu, &cpus);
		CTR3(KTR_SMP, "%s: cpu: %d ipi: %x", __func__, cpu, ipi);
		ipi_send_cpu(cpu, ipi);
	}
}
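
/*
 * Illustrative usage sketch added by the editor, not from the original
 * source: to interrupt every CPU except the caller, build a cpuset_t and
 * hand it to ipi_selected().  This mirrors what ipi_all_but_self() below
 * does; IPI_RENDEZVOUS is used only as an example vector:
 *
 *	cpuset_t map;
 *
 *	map = all_cpus;
 *	CPU_CLR(PCPU_GET(cpuid), &map);
 *	ipi_selected(map, IPI_RENDEZVOUS);
 */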

/*
 * Send an IPI to a specific CPU.
 */
void
ipi_cpu(int cpu, u_int ipi)
{

	/*
	 * IPI_STOP_HARD maps to a NMI and the trap handler needs a bit
	 * of help to identify the source.  Set the mask of receiving
	 * CPUs for this purpose.
	 */
	if (ipi == IPI_STOP_HARD)
		CPU_SET_ATOMIC(cpu, &ipi_nmi_pending);

	CTR3(KTR_SMP, "%s: cpu: %d ipi: %x", __func__, cpu, ipi);
	ipi_send_cpu(cpu, ipi);
}
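
/*
 * Editor's note, not in the original source: unlike ipi_selected(), no
 * cpuset_t is built here; for IPI_STOP_HARD the single target's bit is
 * recorded directly in ipi_nmi_pending with CPU_SET_ATOMIC() before the
 * IPI is delivered via ipi_send_cpu().
 */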

/*
 * Send an IPI to all CPUs EXCEPT myself.
 */
void
ipi_all_but_self(u_int ipi)
{
	cpuset_t other_cpus;

	/*
	 * IPI_STOP_HARD maps to a NMI and the trap handler needs a bit
	 * of help to identify the source.  Set the mask of receiving
	 * CPUs for this purpose.
	 */
	other_cpus = all_cpus;
	CPU_CLR(PCPU_GET(cpuid), &other_cpus);
	if (ipi == IPI_STOP_HARD)
		CPU_OR_ATOMIC(&ipi_nmi_pending, &other_cpus);

	CTR2(KTR_SMP, "%s: ipi: %x", __func__, ipi);
	ipi_selected(other_cpus, ipi);
}
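
/*
 * Descriptive comment added by the editor, not in the original source:
 * called from the NMI trap handler.  Returns 0 if this CPU's bit was set in
 * ipi_nmi_pending, meaning the NMI was a pending IPI_STOP_HARD that has been
 * handled by parking in cpustop_handler(); returns 1 otherwise, so the
 * caller treats the NMI as coming from some other source.
 */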
int
ipi_nmi_handler(void)
{
	u_int cpuid;

	/*
	 * As long as there is no simple way to know an NMI's source, assume
	 * that if this CPU's bit is set in the global pending mask an
	 * IPI_STOP_HARD has been issued and should be handled here.
	 */
	cpuid = PCPU_GET(cpuid);
	if (!CPU_ISSET(cpuid, &ipi_nmi_pending))
		return (1);

	CPU_CLR_ATOMIC(cpuid, &ipi_nmi_pending);
	cpustop_handler();
	return (0);
}

/*
 * Handle an IPI_STOP by saving our current context and spinning until we
 * are resumed.
 */
void
cpustop_handler(void)
{
	int cpu;

	cpu = PCPU_GET(cpuid);

	savectx(&stoppcbs[cpu]);

	/* Indicate that we are stopped */
	CPU_SET_ATOMIC(cpu, &stopped_cpus);

	/* Wait for restart */
	while (!CPU_ISSET(cpu, &started_cpus))
		ia32_pause();

	CPU_CLR_ATOMIC(cpu, &started_cpus);
	CPU_CLR_ATOMIC(cpu, &stopped_cpus);

	if (cpu == 0 && cpustop_restartfunc != NULL) {
		cpustop_restartfunc();
		cpustop_restartfunc = NULL;
	}
}
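
/*
 * Editor's note, not in the original source: this is the target-CPU side of
 * the stop/restart handshake.  The initiating CPU is expected to wait for
 * each target to appear in stopped_cpus and later to set its bit in
 * started_cpus to release it; only CPU 0 runs the optional
 * cpustop_restartfunc callback on the way out.
 */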

/*
 * This is called once the rest of the system is up and running and we're
 * ready to let the APs out of the pen.
 */
static void
release_aps(void *dummy __unused)
{

	if (mp_ncpus == 1)
		return;
	atomic_store_rel_int(&aps_ready, 1);
	while (smp_started == 0)
		ia32_pause();
}
SYSINIT(start_aps, SI_SUB_SMP, SI_ORDER_FIRST, release_aps, NULL);
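
/*
 * Editor's note, not in the original source: SYSINIT() registers these
 * hooks on the boot-time initializer list.  Entries run in ascending
 * (subsystem, order) order, so the SI_SUB_INTR hook below fires before the
 * SI_SUB_SMP ones, and within SI_SUB_SMP release_aps() (SI_ORDER_FIRST)
 * runs before the SI_ORDER_ANY entry.
 */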
SYSINIT(start_ipis, SI_SUB_SMP, SI_ORDER_ANY, xen_smp_intr_init_cpus, NULL);
SYSINIT(start_cpu, SI_SUB_INTR, SI_ORDER_ANY, xen_smp_intr_setup_cpus, NULL);