the following bugs.
- When constructing a resource configuration, respect the order
in which resource descriptors are read, so that the descriptors
map to the correct configuration registers (see the sketch after
this list).
"Plug and Play ISA Specification, Version 1.0a", Sec 4.6.1, May 5,
1994. "Clarifications to the Plug and Play ISA Specification,
Version 1.0a", Sec 6.2.1, Dec. 10, 1994.
- Do not ignore null (empty) descriptors; they are valid descriptors
acting as filler.
"Clarifications to the Plug and Play ISA Specification, Version 1.0a",
Sec 6.2.1.
- Correctly set up logical device configuration registers for null
resources.
"Clarifications to the Plug and Play ISA Specification, Version 1.0a"
- Handle null resources properly in the resource allocator for the
ISA bus.
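A sketch of the ordering and null-descriptor handling; desc[], ldev and
the config_*_set() helpers are illustrative names, not the driver's real
interfaces:

    int i, nio = 0, nirq = 0;

    for (i = 0; i < ndescs; i++) {
            switch (desc[i].type) {
            case DESC_IO:
                    /* Null or not, this occupies I/O register set 'nio'. */
                    config_io_set(ldev, nio++, &desc[i]);
                    break;
            case DESC_IRQ:
                    config_irq_set(ldev, nirq++, &desc[i]);
                    break;
            /* DMA and memory descriptors are handled the same way. */
            }
    }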
with system statistics monitoring tools (such as systat, vmstat, ...)
because RTC interrupt generation was stopped.
Restore all the timers (RTC and i8254) atomically.
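A sketch of the atomic restore, assuming intr_disable()/intr_restore();
the per-timer helpers are illustrative:

    register_t s;

    s = intr_disable();         /* no clock interrupt can run in here */
    i8254_restore();            /* reload the i8254 mode and count */
    rtc_restore();              /* re-enable RTC periodic interrupts */
    intr_restore(s);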
Reviewed by: bde
MFC after: 1 week
Instead introduce the [M] prefix to existing keywords. e.g.
MSTD is the MP SAFE version of STD. This is preparatory for a
massive Giant lock pushdown. The old MPSAFE keyword made
syscalls.master too messy.
Begin commenting MP-Safe procedures with the comment:
/*
* MPSAFE
*/
This comment means that the procedure may be called without
Giant held (the procedure itself may still need to obtain
Giant temporarily to do its thing).
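For example, a hypothetical MPSAFE-marked syscall may still take Giant
internally around work that is not yet safe:

    /*
     * MPSAFE
     */
    int
    example_syscall(struct proc *p, struct example_syscall_args *uap)
    {
            int error;

            /* Called without Giant; take it only around the unsafe part. */
            mtx_lock(&Giant);
            error = not_yet_mpsafe_work(p, uap);
            mtx_unlock(&Giant);
            return (error);
    }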
sv_prepsyscall() is now MP SAFE and assumed to be MP SAFE
sv_transtrap() is now MP SAFE and assumed to be MP SAFE
ktrsyscall() and ktrsysret() are now MP SAFE (Giant Pushdown)
trapsignal() is now MP SAFE (Giant Pushdown)
Places which used to do the if (mtx_owned(&Giant)) mtx_unlock(&Giant)
test in syscall[2]() in */*/trap.c now do not. Instead they
explicitly unlock Giant if they previously obtained it, and then
assert that it is no longer held to catch broken system calls.
Rebuild syscall tables.
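The explicit unlock-and-assert pattern in syscall() now looks roughly
like this (got_giant is an illustrative flag name):

    if (got_giant)                  /* set only if we took Giant above */
            mtx_unlock(&Giant);
    /* Catch broken system calls that leaked a Giant reference. */
    mtx_assert(&Giant, MA_NOTOWNED);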
multiple times, others do. The last strategy, which was to assume
that already routed interrupts were good and just return them, doesn't
work for some laptops. So, instead, we have a new strategy: when we
notice an interrupt that is already routed, we go ahead and try to
route it nonetheless, and assume it is correctly routed even if that
route call fails. We still assume that other failures in the bios32
call are because the interrupt is NOT routed.
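Roughly, with illustrative names for the link structure and the routing
call:

    if (link->already_routed) {
            /*
             * Try to route it anyway; if this particular BIOS call
             * fails, assume the existing route is good.
             */
            (void)pci_bios_route(link, irq);
            return (irq);
    }
    /* Not previously routed: a failure here really means not routed. */
    if (pci_bios_route(link, irq) != 0)
            return (-1);
    return (irq);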
Note: some laptops do not support the bios32 interface to PCI BIOS and
we need to call it via the INT 2A interface. That is another windmill
to tilt at later.
Also correct a minor typo and minor whitespace nits.
Strong MFC candidate.
level implementation stuff out of machine/globaldata.h to avoid exposing
UPAGES to lots more places. The end result is that we can double
the kernel stack size with 'options UPAGES=4' etc.
This is mainly being done for the benefit of a MFC to RELENG_4 at some
point. -current doesn't really need this so much since each interrupt
runs on its own kstack.
timeout callwheel and buffer cache, out of the platform specific areas
and into the machine independent area. i386 and alpha adjusted here.
Other cpus can be fixed piecemeal.
Reviewed by: freebsd-smp, jake
purely informational and can give some advance indications of tuning
problems. These are i386 only for now as it seems that the i386 is
the only one suffering kvm pressure.
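Such a read-only, purely informational sysctl can be exported along
these lines (the name and backing variable are placeholders, not
necessarily the committed ones):

    static int kvm_free_kb;             /* maintained elsewhere */

    SYSCTL_INT(_vm, OID_AUTO, kvm_free, CTLFLAG_RD,
        &kvm_free_kb, 0, "Free kernel virtual address space (KB)");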
trap_fatal() to make restarting from panics slightly easier. Before, if
one did 'w 0 0' in ddb, the longjmp in ddb inside of trap_fatal() would
result in Giant being held (or recursed one level deeper) which led to
problems later on. You can now drop to the debugger, do 'w 0 0', and
continue w/o a problem.
and such was just a bad idea and one that users should be forced to
enable if they want it. This patch introduces a hw.pci.enable_pcibios
tunable for those people. This does not impact the pcibios interrupt
routing at all.
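A sketch of how the tunable is consumed (the variable name is
illustrative):

    static int pci_pcibios_enabled;     /* defaults to off */

    /* Fetched from the loader environment early in boot. */
    TUNABLE_INT("hw.pci.enable_pcibios", &pci_pcibios_enabled);

    /* Checked before any PCIBIOS use: */
    if (!pci_pcibios_enabled)
            return (ENXIO);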
Approved by: peter, msmith
some BIOS vendors took it upon themselves to "censor" the
host->pci bridges from PCIBIOS callers, even when the caller
explicitly asks for them. This includes certain Compaq machines
(eg: DL360) and some laptops.
If we detect this, shut down pcibios and revert to using IO
port bashing.
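The "IO port bashing" fallback is PCI configuration mechanism #1; a
sketch of a 32-bit config read (bus, slot, func and reg are plain
variables here):

    u_int32_t addr, data;

    addr = (1U << 31) | (bus << 16) | (slot << 11) | (func << 8) |
        (reg & ~3);
    outl(0xCF8, addr);          /* CONFIG_ADDRESS: select the register */
    data = inl(0xCFC);          /* CONFIG_DATA: read it back */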
Under -current, acpica does a better job anyway.
information. The default limits only affect machines with > 1GB of RAM
and can be overridden with two new kernel conf variables VM_SWZONE_SIZE_MAX
and VM_BCACHE_SIZE_MAX, or with loader variables kern.maxswzone and
kern.maxbcache. This has the effect of leaving more KVM available for
sizing NMBCLUSTERS and 'maxusers' and should avoid trip-ups where a sysadmin
adds memory to a machine and then sees the kernel panic on boot due to
running out of KVM.
Also change the default swap-meta auto-sizing calculation to allocate half
of what it was previously allocating. The prior defaults were way too high.
Note that we cannot afford to run out of swap-meta structures so we still
stay somewhat conservative here.
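The effect on sizing is a simple clamp; roughly, for the buffer cache
case (names and types simplified):

    int maxbcache = VM_BCACHE_SIZE_MAX;         /* default cap */

    TUNABLE_INT_FETCH("kern.maxbcache", &maxbcache);    /* loader override */
    if (bcache_size > maxbcache)
            bcache_size = maxbcache;            /* clamp the auto-sized value */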
but it's better than the buggy behavior we have now. If we contigmalloc()
buffers in bus_dmamem_alloc(), then we must contigfree() them in
bus_dmamem_free(). Trying to free() them is wrong, and will cause
a panic (at least, it does on the alpha).
I tripped over this when trying to kldunload my busdma-ified if_rl
driver.
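The pairing in question, sketched with illustrative arguments:

    void *vaddr;

    /* bus_dmamem_alloc() side: physically contiguous allocation. */
    vaddr = contigmalloc(size, M_DEVBUF, M_NOWAIT, 0ul, 0xfffffffful,
        PAGE_SIZE, 0ul);

    /* bus_dmamem_free() side: must use the matching release function. */
    contigfree(vaddr, size, M_DEVBUF);  /* not free(vaddr, M_DEVBUF) */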
traps, so that ddb can keep control (almost) no matter how it is
entered. This breaks time-critical interrupts while the system is
stopped in ddb, but I haven't noticed any significant problems except
that applications become confused about the time. Lost time will be
adjusted for later. Anyway, the half-baked disabling of interrupts in
Debugger() gives the same problems for the usual way of entering ddb.
bug for bug compatibility to ddb trap handlers after fixing the debugger
trap gates to be interrupt gates, but the fix was never committed. Now
I want the fix to apply to ddb.
- fix segment limit mis-calculation for GCODE_SEL, GDATA_SEL, GPRIV_SEL,
LUCODE_SEL and LUDATA_SEL.
- move `loader(8) metadata' related printf() after cninit().
- use atop macro (address to pages) for segment limit calculation
instead of i386_btop macro (bytes to pages); see the sketch below.
- fix style bugs for the declarations of ints.
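The sketch referenced above, using an illustrative soft segment
descriptor variable (the committed code differs in detail):

    struct soft_segment_descriptor ssd;

    ssd.ssd_limit = atop(0xffffffffUL); /* 4G flat segment; limit in pages */
    ssd.ssd_gran = 1;                   /* granularity is pages */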
Reviewed by: bde, msmith (and arch & audit ML)
the process of exiting the kernel. The ast() function now loops as long
as the PS_ASTPENDING or PS_NEEDRESCHED flags are set. It returns with
preemption disabled so that any further AST's that arrive via an
interrupt will be delayed until the low-level MD code returns to user
mode.
- Use u_int's to store the tick counts for profiling purposes so that we
do not need sched_lock just to read p_sticks. This also closes a
problem where the call to addupc_task() could screw up the arithmetic
due to non-atomic reads of p_sticks.
- Axe need_proftick(), aston(), astoff(), astpending(), need_resched(),
clear_resched(), and resched_wanted() in favor of direct bit operations
on p_sflag.
- Fix up locking with sched_lock some. In addupc_intr(), use sched_lock
to ensure pr_addr and pr_ticks are updated atomically with setting
PS_OWEUPC. In ast() we clear pr_ticks atomically with clearing
PS_OWEUPC. We also do not grab the lock just to test a flag.
- Simplify the handling of Giant in ast() slightly.
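The loop described at the top of this entry has roughly this shape
(flag clearing and locking details elided; service_ast() is an
illustrative stand-in):

    while (p->p_sflag & (PS_ASTPENDING | PS_NEEDRESCHED)) {
            /*
             * Clear the flags, then deliver profiling ticks, signals
             * and any pending reschedule before checking again.
             */
            service_ast(p);
    }
    /* Return to the MD code with preemption still disabled. */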
Reviewed by: bde (mostly)
are a really nasty interface that should have been killed long ago
when 'ptrace(PT_[SG]ETREGS' etc came along. The entity that they
operate on (struct user) will not be around much longer since it
is part-per-process and part-per-thread in a post-KSE world.
gdb does not actually use this except for the obscure 'info udot'
command which does a hexdump of as much of the child's 'struct user'
as it can get. It carries its own #defines so it doesn't break
compiles.
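The preferred interface mentioned above looks like this from a debugger
(userland code; needs <sys/ptrace.h>, <machine/reg.h> and <err.h>):

    struct reg regs;

    if (ptrace(PT_GETREGS, pid, (caddr_t)&regs, 0) == -1)
            err(1, "ptrace(PT_GETREGS)");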
dynamic symbol table buckets and chains. The sparc64 toolchain uses
32-bit .hash entries, unlike other 64-bit architectures (alpha), which
use 64-bit entries.
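A sketch of the layout difference; the typedef name is illustrative,
not the committed one:

    #ifdef __alpha__
    typedef uint64_t hashelt_t;         /* alpha: 64-bit .hash entries */
    #else
    typedef uint32_t hashelt_t;         /* sparc64: 32-bit .hash entries */
    #endif

    static void
    parse_hash(const hashelt_t *hashtab)        /* start of .hash */
    {
            hashelt_t nbucket = hashtab[0];
            hashelt_t nchain = hashtab[1];
            const hashelt_t *buckets = hashtab + 2;
            const hashelt_t *chains = buckets + nbucket;

            /* Symbol lookup walks buckets[] and then chains[] (nchain entries). */
    }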
Discussed with: dfr, jdp