has been set in the first place. This should reduce
unwanted side-effects in rc.d scripts that don't mean
to use $command and rc.subr(8) methods associated with
it at all.
Discussed with: brooks
Reviewed by: -rc (silence)
o Oxford Semiconductor PCI Dual Port Serial
o Netmos Nm9845 PCI Bridge with Dual UART
Add PCI IDs for single-port cards:
o Various SIIG Cyber Serial
o Oxford Semiconductor OXCB950 UART
Update description as per puc(4).
immediately, back off to the next higher Cx sleep state. Some machines
with a VIA chipset report a valid C3 but a register read doesn't actually
halt the CPU. This would cause the machine to appear unresponsive as it
repeatedly called cpu_idle(), which returned immediately. Generating
interrupts (e.g. by pressing the power button) would cause the system to
make forward progress, showing that it wasn't actually hung.
Also, enable interrupts a little earlier. We don't need them disabled
to calculate the delta time for the read.
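A rough sketch of the back-off logic (not the literal acpi_cpu(4) change;
timer_read_usec(), read_sleep_register() and cx_lowest are hypothetical
names used only to illustrate the idea):

	uint64_t start, end;

	start = timer_read_usec();
	read_sleep_register(cx_lowest);	/* normally blocks until an interrupt */
	end = timer_read_usec();

	/*
	 * If the read came back immediately, this Cx state does not really
	 * halt the CPU, so back off to the next higher (shallower) state.
	 */
	if (end == start && cx_lowest > 0)
		cx_lowest--;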
Reported by: silby
MFC after: 2 weeks
and increase flexibility to allow various different approaches to be tried
in the future.
- Split struct ithd up into two pieces. struct intr_event holds the list
of interrupt handlers associated with interrupt sources.
struct intr_thread contains the data relevant to an interrupt thread
(a rough sketch of the split and dispatch order appears after this list).
Currently we still provide a 1:1 relationship of events to threads
with the exception that events only have an associated thread if there
is at least one threaded interrupt handler attached to the event. This
means that on x86 we no longer have 4 bazillion interrupt threads with
no handlers. It also means that interrupt events with only INTR_FAST
handlers no longer have an associated thread either.
- Renamed struct intrhand to struct intr_handler to follow the struct
intr_foo naming convention. This did require renaming the powerpc
MD struct intr_handler to struct ppc_intr_handler.
- INTR_FAST no longer implies INTR_EXCL on all architectures except for
powerpc. This means that multiple INTR_FAST handlers can attach to the
same interrupt and that INTR_FAST and non-INTR_FAST handlers can attach
to the same interrupt. Sharing INTR_FAST handlers may not always be
desirable, but having sio(4) and uhci(4) fight over an IRQ isn't fun
either. Drivers can always still use INTR_EXCL to ask for an interrupt
exclusively. The way this sharing works is that when an interrupt
comes in, all the INTR_FAST handlers are executed first, and if any
threaded handlers exist, the interrupt thread is scheduled afterwards.
This type of layout also makes it possible to investigate using interrupt
filters a la OS X, where the filter determines whether or not its companion
threaded handler should run.
- Aside from the INTR_FAST changes above, the impact on MD interrupt code
is mostly just 's/ithread/intr_event/'.
- A new MI ddb command 'show intrs' walks the list of interrupt events
dumping their state. It also has a '/v' verbose switch which dumps
info about all of the handlers attached to each event.
- We currently don't destroy an interrupt thread when the last threaded
handler is removed because it would suck for things like ppbus(4)'s
braindead behavior. The code is present, though, it is just under
#if 0 for now.
- Move the code to actually execute the threaded handlers for an interrupt
event into a separate function so that ithread_loop() becomes more
readable. Previously this code was all in the middle of ithread_loop()
and indented halfway across the screen.
- Made struct intr_thread private to kern_intr.c and replaced td_ithd
with a thread private flag TDP_ITHREAD.
- In statclock, check curthread against idlethread directly rather than
curthread's proc against idlethread's proc. (Not really related to intr
changes)
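A rough sketch of the new layout and dispatch order (field and function
names are illustrative and may not match the exact kern_intr.c definitions):

#include <sys/queue.h>

struct intr_handler {
	driver_intr_t	*ih_handler;	/* the handler function */
	void		*ih_argument;
	int		ih_flags;	/* e.g. an IH_FAST-style flag */
	TAILQ_ENTRY(intr_handler) ih_next;
};

struct intr_event {
	TAILQ_HEAD(, intr_handler) ie_handlers;	/* all handlers for this source */
	struct intr_thread *ie_thread;	/* NULL until a threaded handler attaches */
	/* ... locking, name, counters ... */
};

/*
 * When an interrupt arrives: run the INTR_FAST handlers immediately in
 * interrupt context, then schedule the (possibly shared) interrupt
 * thread only if a threaded handler is present.
 */
static void
intr_event_handle_sketch(struct intr_event *ie)
{
	struct intr_handler *ih;
	int thread = 0;

	TAILQ_FOREACH(ih, &ie->ie_handlers, ih_next) {
		if (ih->ih_flags & IH_FAST)
			ih->ih_handler(ih->ih_argument);
		else
			thread = 1;
	}
	if (thread)
		intr_event_schedule_thread(ie);
}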
Tested on: alpha, amd64, i386, sparc64
Tested on: arm, ia64 (older version of patch by cognet and marcel)
tool:
- Use uname(3) to query the OS name to report in the HTTP headers.
This is probably more useful than hard-coding FreeBSD.
- If no path is specified, create a 1k temporary file and send that
instead. Pass a file descriptor into http_serve() rather than using
a global fd.
- Add more carriage returns to the HTTP headers to be a bit more
correct. (Suggested by: andre)
- Read to a buffer rather than a single character to reduce the number
of recv() system calls pulling in the HTTP request.
- Properly wait for two, not one, \n's on input.
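The request-reading loop now looks roughly like this (a sketch only; 's'
is the accepted socket, and the buffer size and variable names are
illustrative, not the tool's exact code):

	char buf[1024];
	ssize_t i, n;
	int nl = 0;

	/*
	 * Pull the request in with buffered recv() calls instead of one
	 * byte at a time, and stop once an empty line (two newlines,
	 * optionally with carriage returns) terminates the HTTP headers.
	 */
	while (nl < 2 && (n = recv(s, buf, sizeof(buf), 0)) > 0) {
		for (i = 0; i < n; i++) {
			if (buf[i] == '\n')
				nl++;
			else if (buf[i] != '\r')
				nl = 0;
		}
	}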
to the actual dates when code actually changed. Also add special case
link state change handling for RELENG_5, which doesn't have
if_link_state_change(). No actual operational changes are done.
devices can be opened multiple times simultaneously but we're
expected to be able to do so by the rest of the loader.
This fixes booting from disks attached to the on-board SCSI
controller of Sun Ultra 1 (previously this triggered a trap)
and probably also of AX1115 boards.
- While here, remove unused variables and add empty lines where
style(9) requires such.
Tested on: powerpc (grehan), sparc64
MFC after: 1 month
to floats (mainly i386's). All errors of more than 1 ulp for float
precision trig functions were supposed to have been fixed; however,
compiling with gcc -O2 uncovered 18250 more such errors for cosf(),
with a maximum error of 1.409 ulps.
Use essentially the same fix as in rev.1.8 of k_rem_pio2f.c (access a
non-volatile variable as a volatile). Here the -O1 case apparently
worked because the variable is in a 2-element array and it takes -O2
to mess up such a variable by putting it in a register.
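The fix is essentially the following idiom (a sketch of the technique, not
necessarily the literal diff): a volatile access forces the value through a
float-width store, discarding any extra x87 precision the optimizer kept in
a register.

	static float
	demote_to_float_width(float y)
	{

		/* The volatile access forces a float-width store/load. */
		return (*(volatile float *)&y);
	}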
The maximum error for cosf() on i386 with gcc -O2 is now 0.5467 ulps (it
is still 0.5650 ulps with gcc -O1). This shows that -O2 still causes some
extra precision, but the extra precision is now good.
Extra precision is harmful mainly for implementing extra precision in
software. We want to represent x+y as w+r where both "+" operations
are in infinite precision and r is tiny compared with w. There is a
standard algorithm for this (Knuth (1981) 4.2.2 Theorem C), and fdlibm
uses this routinely, but the algorithm requires w and r to have the
same precision as x and y. w is just x+y (calculated in the same
finite precision as x and y), and r is a tiny correction term. The
i386 gcc bugs tend to give extra precision in w, and then using this
extra precision in the calculation of r results in the correction
mostly staying in w and being missing from r. There still tends to
be no problem if the result is a simple expression involving w and r
-- modulo spills, w keeps its extra precision and r remains the right
correction for this wrong w. However, here we want to pass w and r
to extern functions. Extra precision is not retained in function args,
so w gets fixed up, but the change to the tiny r is tinier, so r almost
remains as a wrong correction for the right w.
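For reference, the standard splitting step reads roughly as follows; it
yields w + r == x + y exactly only when every operation below is rounded to
the target (double) precision, which is precisely what spurious extra
precision breaks:

	double w, t, r;

	w = x + y;			/* the representable sum */
	t = w - x;
	r = (x - (w - t)) + (y - t);	/* tiny correction term */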
Try to make everyone happy: David (to have debug kernels installed
by default), Warner (to be able to override that), and myself (for
actually making it all work and to be consistent).
Now, if the kernel was configured for debugging (through DEBUG=-g in
the kernel config file or "config -g"), doing "make install" will
install debug versions of kernel and module objects with their
canonical names,
kernel.debug -> /boot/kernel/kernel
if_fxp.ko.debug -> /boot/kernel/if_fxp.ko
Installing a kernel not configured for debugging, or a debug kernel
with the INSTALL_NODEBUG variable defined, will install non-debug
kernel and module objects.
Also, restore the install.debug and reinstall.debug targets that
are part of the existing API (they cause some additional gdb(1)
scripts to be installed).
of the PCIR_HDRTYPE register. It's the value returned from this
read access that determines whether or not we decide a device is
present at the current slot index. For some reason that I can't
adequately explain, this read fails when probing the USB controller
on my machine (which happens to be a multifunction device at slot
index 3 hung off the PCI-PCI bridge on the AMD8111 (bus
index 1)). The read will return 0xFF even though it should return
0x80 to indicate the presence of a multifunction device.
As near as I can tell, there's some timing issue involved with reading
the 'dead' slot indexes 0 through 2 that causes the read of the actual
device at slot 3 to fail. I tried a couple of different tricks to
correct the problem (the patch to amd64/pci/pci_cfgreg.c fixes it
for the amd64 arch), but adding this delay is the only thing that
always allows the USB controllers to be correctly probed 100% of the
time. Whatever the problem is, it's likely confined to the AMD8111
chipset. However, a simple 1us delay is fairly harmless and should
have no side effects for other hardware. I consider this to be
voodoo, but it's fairly benign voodoo and it makes my USB keyboard
and mouse work again.
Note that this is the second time that I've had to resort to a
1us delay to fix a PCI-related problem with this AMD8111/Opteron
system (the first being a fix I made a while back to the NDISulator).
It's possible the delay really belongs in the cfgreg code itself,
or that pci_cfgreg needs some custom hackery for an errata in the
8111. (I checked but couldn't find any documented errata on AMD's
site that could account for these problems.)
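In the slot-probe loop the workaround amounts to roughly the following
(a sketch, not the literal diff; pci_cfgregread() is the x86 config-space
accessor):

	for (slot = 0; slot <= PCI_SLOTMAX; slot++) {
		DELAY(1);	/* voodoo: give the 8111 time before the read */
		hdrtype = pci_cfgregread(bus, slot, 0, PCIR_HDRTYPE, 1);
		if (hdrtype == 0xff)
			continue;	/* reads as absent */
		maxfunc = (hdrtype & PCIM_MFDEV) ? PCI_FUNCMAX : 0;
		/* ... probe functions 0 .. maxfunc of this slot ... */
	}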
other OSes (Solaris, Linux, VxWorks). It's not necessary to write a 0
to the config address register when using config mechanism 1 to turn
off config access. In fact, it can be downright troublesome, since it
seems to confuse the PCI-PCI bridge in the AMD8111 chipset and cause
it to sporadically botch reads from some devices. This is the cause
of the missing USB ports problem I was experiencing with my Sun Opteron
system.
Also correct the case for mechanism 2: it's only necessary to write
a 0 to the ENABLE port.
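For mechanism 1 the access therefore reduces to something like the
following (a sketch using the usual 0xcf8/0xcfc ports; the point is the
absence of a trailing write of 0 to the address port):

#define	CONF1_ADDR_PORT	0x0cf8
#define	CONF1_DATA_PORT	0x0cfc

static uint32_t
pci_conf1_read_sketch(int bus, int slot, int func, int reg)
{
	uint32_t data;

	outl(CONF1_ADDR_PORT, (1U << 31) | (bus << 16) | (slot << 11) |
	    (func << 8) | (reg & 0xfc));
	data = inl(CONF1_DATA_PORT);
	/*
	 * No outl(CONF1_ADDR_PORT, 0) here: leaving the enable bit set
	 * matches what other OSes do and avoids confusing the AMD8111.
	 */
	return (data);
}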
- Move hardware counter reading/zeroing to hme_tick(). This saves
8 register accesses per interrupt. [1]
- Use the imax() macro to get the maximum of two integers.
- Invoke bus_dmamap_sync(9) before freeing the mbuf (see the sketch after
  this list).
- Check driver queue first to reduce locking operation in hme_start_locked()
and interrupt handler.
- Simplify watchdog timer setup in the interrupt handler.
- Don't log normal errors such as RX overruns. If a stuck-DMA condition
  is detected, reinitialize the driver and log it.
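The TX-completion ordering mentioned above is roughly the following
(generic names, not the exact hme(4) softc fields):

	bus_dmamap_sync(sc->sc_tdmatag, txd->tx_dmamap, BUS_DMASYNC_POSTWRITE);
	bus_dmamap_unload(sc->sc_tdmatag, txd->tx_dmamap);
	m_freem(txd->tx_m);	/* only after the map is synced and unloaded */
	txd->tx_m = NULL;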
Reviewed by: marius
Obtained from: OpenBSD [1]
confused with the Credit Card Adapter II and its spawn (which the xe
driver supports). These changes get my card probing and attaching. I
recently won one of these (and an NEC-rebadged version) in a lot
auction. The NEC didn't work, so I took it apart and found the
MB86960A chip and then modified if_fe_pccard.c to attach. I can't
test this card further since I have no dongle for this card.
be installed. It should have been optional to install a non-debug
one, just like it was formerly optional to install a debug one. In
order to do that, most of 1.84 had to go.
Instead, make installing the debug kernel the default, but create a
new option INSTALL_NODEBUG for those people that have small /
partitions and good source control habits.
This preserves the behavior of 1.84 while allowing it to be overridden
for people (like me) that do not have the time to upgrade to get a
bigger / and also don't have time for stupid makefile tricks when
upgrading their older system, but still want a kernel.debug around if
things go south.
command is handled as a shell function. This avoids the following
peculiar behaviour when /usr/bin is on a case-insensitive filesystem:
# READ foo
(... long pause, depending upon the amount of swap space available ...)
sh: Resource temporarily unavailable.
Reported by: I can't remember; someone on IRC.
MFC after: 1 week
IPI_STOP IPIs.
- Change the i386 and amd64 MD IPI code to send an NMI if STOP_NMI is
enabled if an attempt is made to send an IPI_STOP IPI. If the kernel
option is enabled, there is also a sysctl to change the behavior at
runtime (debug.stop_cpus_with_nmi which defaults to enabled). This
includes removing stop_cpus_nmi() and making ipi_nmi_selected() a
private function for i386 and amd64.
- Fix ipi_all(), ipi_all_but_self(), and ipi_self() on i386 and amd64 to
properly handle bitmapped IPIs as well as IPI_STOP IPIs when STOP_NMI is
enabled.
- Fix ipi_nmi_handler() to execute the restart function on the first CPU
that is restarted making use of atomic_readandclear() rather than
assuming that the BSP is always included in the set of restarted CPUs.
Also, the NMI handler didn't clear the function pointer, meaning that
subsequent stops and restarts could execute the function again (see the
sketch after this list).
- Define a new macro HAVE_STOPPEDPCBS on i386 and amd64 to control the use
of stoppedpcbs[] and always enable it for i386 and amd64 instead of
being dependent on KDB_STOP_NMI. It works fine in both the NMI and
non-NMI cases.
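The restart-function handling boils down to the following pattern (a
sketch; the real code lives in ipi_nmi_handler(), and cpustop_restartfunc
is named here only for illustration):

	void (*restart)(void);

	/*
	 * Atomically fetch and clear the pointer so that only the first
	 * restarted CPU sees a non-NULL value and a later stop/restart
	 * cycle cannot rerun a stale function.
	 */
	restart = (void (*)(void))atomic_readandclear_ptr(
	    (volatile uintptr_t *)&cpustop_restartfunc);
	if (restart != NULL)
		restart();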
still works. Also, this is consistent with 'show pcpu' vs
'show allpcpu'. (And 'show allstacks' on OS X for that matter.)
- Add 'bt' as an alias for 'trace'. We already have a 'where' alias as
well, so this makes it easier for gdb-wired hands to work in ddb.
Ok'd by: rwatson (1)
Requested by: scottl (2)
MFC after: 1 day
{cos,sin}[f](x) so that x doesn't need to be reclassified in the
"kernel" functions to determine if it is tiny (it still needs to be
reclassified in the cosine case for other reasons that will go away).
This optimization is quite large for exponentially distributed x, since
x is tiny for almost half of the domain, but it is a pessimization for
uniformly distributed x since it takes a little time for all cases
but rarely applies. Arg reduction on exponentially distributed x
rarely gives a tiny x unless the reduction is null, so it is best to
only do the optimization if the initial x is tiny, which is what this
commit arranges. The immediate result is an average optimization of
1.4% relative to the previous version in a case that doesn't favour
the optimization (double cos(x) on all float x) and a large
pessimization for the relatively unimportant cases of lgamma[f][_r](x)
on tiny, negative, exponentially distributed x. The optimization should
be recovered for lgamma*() as part of fixing lgamma*()'s low-quality
arg reduction.
Fixed various wrong constants for the cutoff for "tiny". For cosine,
the cutoff is when x**2/2! == {FLT or DBL}_EPSILON/2. We round down
to an integral power of 2 (and for cos() reduce the power by another
1) because the exact cutoff doesn't matter and would take more work
to determine. For sine, the exact cutoff is larger due to the ratio
of terms being x**2/3! instead of x**2/2!, but we use the same cutoff
as for cosine. We now use a cutoff of 2**-27 for double precision and
2**-12 for single precision. 2**-27 was used in all cases but was
misspelled 2**27 in comments. Wrong and sloppy cutoffs just cause
missed optimizations (provided the rounding mode is to nearest --
other modes just aren't supported).
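The shortcut in cos(x) itself amounts to the following (a sketch that
ignores the high-word compare and the inexact-flag handling of the real
code): for |x| < 2**-27, x**2/2! is below DBL_EPSILON/2, so the correctly
rounded result is simply 1.

	if (fabs(x) < 0x1p-27)
		return (1.0);	/* cos(x) rounds to 1 for such tiny x */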