very early (SI_SUB_TUNABLES - 1) and is responsible for setting mp_maxid.
cpu_mp_probe() is now called at SI_SUB_CPU and determines if SMP is
actually present and sets mp_ncpus and all_cpus. Splitting these up
allows an architecture to probe CPUs later than SI_SUB_TUNABLES by just
setting mp_maxid to MAXCPU in cpu_mp_setmaxid(). This could allow the
CPU probing code to live in a module, for example, since SYSINITs in
modules cannot be invoked prior to SI_SUB_KLD. This is needed to
re-enable the ACPI module on i386 (see the sketch after this list).
- For the alpha SMP probing code, use LOCATE_PCS() instead of duplicating
its contents in a few places. Also, add a smp_cpu_enabled() function
to avoid duplicating some code. There is room for further code
reduction later since much of this code is also present in cpu_mp_start().
- All archs besides i386 still set mp_maxid to the same values they set it
to before this change. i386 now sets mp_maxid to MAXCPU.
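To illustrate the split, here is a schematic sketch (not any architecture's
committed code; detect_cpus() is a hypothetical placeholder for the
arch-specific probe):

    void
    cpu_mp_setmaxid(void)
    {
            /* Runs at SI_SUB_TUNABLES - 1: assume the worst case for now. */
            mp_maxid = MAXCPU;
    }

    int
    cpu_mp_probe(void)
    {
            /* Runs at SI_SUB_CPU, or later if this lives in a module. */
            mp_ncpus = detect_cpus();       /* hypothetical helper */
            all_cpus = (1 << mp_ncpus) - 1;
            return (mp_ncpus > 1);
    }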
Tested on: alpha, amd64, i386, ia64, sparc64
Approved by: re (scottl)
delete of objects. Also revert our temporary workaround in dsmthdat.c
that always copied objects. This is the correct fix for errors
evaluating _BST (and GBST) on IBM Thinkpads where an argument (Arg3)
was returned to the caller and the object was freed while still in use.
This will be in a future ACPI-CA dist.
Thanks to: kochi@netbsd.org, shaohua.li@intel.com
fixes an interrupt storm for certain users. This is done on the vendor
branch since the code is already in the 20031029 ACPI-CA dist and will
be imported after 5.2R.
Tested by: sebastian ssmoller <sebastian.ssmoller@gmx.net>
PR: i386/57909
Approved by: re (jhb)
different kernel to boot with kernel="NAME" would load the kernel and
loader.conf-selected modules from /boot/NAME, but it would not change
module_path. So, for instance, the automatically loaded acpi.ko would come
from /boot/kernel/acpi.ko, *always*.
Mind you, this happened for unassisted boot. If you interrupted, typed
"unload" and then "boot NAME", it would Do The Right Thing.
The source of the problem is the double initialization with beastie's
loader.rc. One would happen inside "start", and would load the kernel. The
next one would happen later in the loader.rc script, resetting module_path.
Because module_path is only set to the Right Value by the support.4th
functions that actually load the kernel, and the kernel had already been
loaded, module_path would still be wrong when beastie.4th proceeded to boot.
This can be corrected by removing either of the two initializations, or by
changing the command used by beastie.4th from "boot" to "boot-conf", which
makes sure the right kernel and modules are used.
I chose to remove the second initialization, since this lets you interrupt
(or confirm) the boot before beastie even comes up. I also avoided making
the boot-conf change, because that would simply cause the kernel and modules
to be loaded twice (in fact, that was my original patch, until, in writing
this very commit message, I saw the error of my ways).
This commit changes the semantics of module loading when using the beastie
menu. Now it does what one would expect, rather than what it was actually
doing, so something may break for unusual setups that depended on the broken
behavior. As our Japanese friends so nicely put it, shikata ga nakatta
(it could not be helped). :-)
Approved by: re (scottl)
known samples of broken chipsets that needed mixed mode in the first place
are so broken (i.e., they lock up) that we can't use IO APIC mode at all and
it needs to be turned off in the BIOS. So the MIXED_MODE penalty on the
good chipsets gained nothing.
Approved by: re (scottl)
the compiler having to parse and optimize the PCPU_GET(curthread) so often.
__curthread() is an inline optimized version of PCPU_GET(curthread) that
knows that pc_curthread is at offset zero in the pcpu struct. Add a
CTASSERT() to catch any possible changes to this. This accounts for
just over a 1% wall clock speedup for total kernel compile/link time,
and 20% compile time speedup on some specific files depending on which
compile options are used.
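For reference, the idea looks roughly like this on i386 (a sketch; the exact
committed code and the form of the offset assertion may differ, and amd64
uses %gs instead of %fs):

    /* The inline asm below hard-codes pc_curthread being at offset 0. */
    CTASSERT(__offsetof(struct pcpu, pc_curthread) == 0);

    static __inline struct thread *
    __curthread(void)
    {
            struct thread *td;

            /* Fetch pcpu->pc_curthread directly through the %fs segment. */
            __asm __volatile("movl %%fs:0,%0" : "=r" (td));
            return (td);
    }
    #define curthread       (__curthread())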
Approved by: re (jhb)
the routing table. Move all usage and references in the tcp stack
from the routing table metrics to the tcp hostcache.
It caches measured parameters of past tcp sessions to provide better
initial start values for following connections from or to the same
source or destination. Depending on the network parameters to/from
the remote host this can lead to significant speedups for new tcp
connections after the first one because they inherit and shortcut
the learning curve.
tcp_hostcache is designed for concurrent access from multiple CPUs
in SMP environments with high contention and is hash indexed by
remote IP address.
It removes significant locking requirements from the tcp stack with
regard to the routing table.
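Schematically, "hash indexed by remote IP address" means something like the
following (struct layout, names, and locking are illustrative, not the
committed tcp_hostcache.c):

    #include <sys/types.h>
    #include <sys/queue.h>
    #include <netinet/in.h>

    #define HC_HASHSIZE     512                     /* power of two */

    struct hc_entry {
            TAILQ_ENTRY(hc_entry)   hc_q;
            struct in_addr          hc_faddr;       /* remote address (key) */
            u_long                  hc_rtt;         /* cached metrics, ... */
    };

    struct hc_bucket {                              /* one lock per bucket */
            TAILQ_HEAD(, hc_entry)  hcb_head;
    };

    static struct hc_bucket hc_tbl[HC_HASHSIZE];

    /* Only the remote address is hashed; the mask keeps it in range. */
    #define HC_HASH(faddr) \
            (&hc_tbl[ntohl((faddr).s_addr) & (HC_HASHSIZE - 1)])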
Reviewed by: sam (mentor), bms
Reviewed by: -net, -current, core@kame.net (IPv6 parts)
Approved by: re (scottl)
boot-disabled devices instead of skipping the last interrupt. This is
especially important for devices that only have one interrupt as this
bug was keeping any interrupt from being tried at all.
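The shape of the bug, schematically (scan_possible_irqs() and try_link_irq()
are illustrative names, not the actual ACPI link code):

    static void
    scan_possible_irqs(int *irqs, int nirqs)
    {
            int i;

            /*
             * The old loop ran only to nirqs - 1, so the last possible
             * interrupt was never tried; a device with exactly one
             * possible interrupt therefore tried none at all.
             */
            for (i = 0; i < nirqs; i++)
                    if (try_link_irq(irqs[i]))
                            return;
    }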
Reviewed by: msmith
Approved by: re (scottl)
accordingly. The define is left intact for ABI compatibility
with userland.
This is a pre-step for the introduction of tcp_hostcache. The
network stack remains fully usable with this change.
Reviewed by: sam (mentor), bms
Reviewed by: -net, -current, core@kame.net (IPv6 parts)
Approved by: re (scottl)
on SMP systems has a chance of working. This was a loose end of the
implementation of the ACPI Cx idle states. Since our logical CPU Id
is the ACPI processor Id, we do not need to jump through hoops to
obtain it.
Approved: re@ (jhb)
happen in interrupt context: 1) sleep locks, and 2) malloc/free
calls.
1) is fixed by using spin locks instead.
2) is fixed by preallocating a FIFO (implemented with a STAILQ)
and using elements from this FIFO instead. This turns out
to be rather fast.
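For illustration, such a preallocated FIFO looks roughly like this (names
are invented and the spin lock protecting the list heads is omitted):

    #include <sys/queue.h>

    struct evt {
            STAILQ_ENTRY(evt)       link;
            int                     payload;
    };

    static STAILQ_HEAD(, evt) evt_free = STAILQ_HEAD_INITIALIZER(evt_free);
    static struct evt evt_pool[64];         /* carved out up front */

    static void
    evt_init(void)
    {
            int i;

            for (i = 0; i < 64; i++)
                    STAILQ_INSERT_TAIL(&evt_free, &evt_pool[i], link);
    }

    /* Safe in interrupt context: no malloc(9), just list manipulation. */
    static struct evt *
    evt_get(void)
    {
            struct evt *e;

            e = STAILQ_FIRST(&evt_free);    /* NULL when the pool is empty */
            if (e != NULL)
                    STAILQ_REMOVE_HEAD(&evt_free, link);
            return (e);
    }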
OK'ed by: re (scottl)
Thanks to: peter, jhb, rwatson, jake
Apologies to: *
(1) Document the notion of using jail(8) to run "virtual servers" or
just to constrain specific applications. If only running specific
applications, some configuration steps are unnecessary (such as
editing rc.conf).
(2) Add some more subsection headers to break up the bigger chunks of
text.
(3) Clarify the problems associated with applications binding all IP
addresses in the host, and attempt to be more specific about
potential application problems. Document how to force sshd to
bind to the right socket.
(4) Suggest that in a jailed application scenario, you might want to
have the host syslogd listen on the socket in the jail, rather
than running syslogd in the jail.
(5) Catch another reference to /stand/sysinstall.
Approved by: re (bmah implicitly)
valid value for cx_lowest. To disable sleeping, use machdep.cpu_idle_hlt
instead. Update the version of the ACPI spec we implement.
Approved by: re (implicitly)
an acpi_cpu method for shutdown that disables entry to acpi_cpu_idle
and then IPIs and waits for threads to exit (a rough sketch follows this
list). This fixes a panic late in reboot in the SMP case.
* In the !SMP case, don't use the processor id filled out by the MADT
since there can only be one processor. This was causing a panic in
acpi_cpu_idle if the id was 1 since the data was being dereferenced from
cpu_softc[1] even though the actual data was in cpu_softc[0] (which is
correct).
* Rework the initialization functions so that cpu_idle_hook is written
late in the boot process.
* Make the P_BLK, P_BLK_LEN, and cpu_cx_count all softc-local variables.
This will help SMP boxes that have _CST or multiple P_BLKs. No such
boxes are known at this time.
* Always allocate the C1 state, even if the P_BLK is invalid. This means
we will always take over idling if enabled. Remove the value -1 as
valid for cx_lowest since this is redundant with machdep.cpu_idle_hlt.
* Reduce locking for the throttle initialization case to around the write
to the smi_cmd port. Add disabled code to write the CST_CNT. It will
be enabled once _CST re-evaluation is tested (post 5.2R).
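A rough sketch of the shutdown step from the first item (the flag and hook
names are illustrative, not necessarily the committed ones):

    static void
    acpi_cpu_shutdown(void *arg)
    {
            /* Keep CPUs from entering acpi_cpu_idle() from now on. */
            cpu_disable_idle = 1;

            /* IPI every CPU so any C-state sleeper wakes up, then wait. */
            smp_rendezvous(NULL, NULL, NULL, NULL);
    }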
Thank you: dfr, imp, jhb, marcel, peter
Tested by: rwatson, Harald Schmalzbauer <h@schmalzbauer.de>
Approved by: re (rwatson)
occurs when kmem_malloc() fails to allocate a sufficient number of vm
pages. Specifically, we avoid the lock-order reversal by not grabbing
Giant around pmap_remove() if the map is the kmem_map.
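The pattern is roughly the following (a sketch of the idea, not the literal
diff):

    /*
     * Giant is still taken around pmap_remove() for ordinary maps, but
     * taking it for the kmem_map is what reversed the lock order, so
     * skip it in that case.
     */
    if (map != kmem_map)
            mtx_lock(&Giant);
    pmap_remove(vm_map_pmap(map), start, end);
    if (map != kmem_map)
            mtx_unlock(&Giant);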
Approved by: re (jhb)
Reported by: Eugene <eugene3@web.de>
- turn on SMP in GENERIC
- add 'device atpic' - this is unconditional on i386, but certain
nvidia-based systems need to disable acpi because the reference BIOS seems
to be hosed. If acpi is disabled, we won't find the apic. amd64 has the
mptable code in a separate compile option as well.
- turn sym back on, it doesn't fail to compile anymore.
Approved by: re
to gcc have not been made for ia64, which means that executables still
have /usr/libexec/ld-elf.so.1 as the dynamic linker. This simply does
not work if /usr is a separate filesystem not mounted when the kernel
tries to execute init(8).
Note that this is a temporary fix until a new gcc has been imported
that does have the required changes.
Approved: re@
source count pointers at them so that intr_execute_handlers() won't
choke when it tries to handle an unregistered ATPIC interrupt source.
- Install the low-level ATPIC interrupt handlers when we first program the
ATPIC in atpic_startup() rather than at SI_SUB_INTR. This is only
necessary to work around buggy code that enables interrupts too early
in the boot process (namely, the vm86 code).
Approved by: re (rwatson)