the start of the section headers has to take into account the fact
that the image_nt_header is really variable-sized. It happens that
the existing calculation is correct for _most_ production binaries
produced by the Windows DDK, but if we get a binary with oddball
offsets, the PE loader could crash.
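For reference, the corrected idea mirrors what the IMAGE_FIRST_SECTION()
macro does. A minimal sketch, with struct and field names that are
illustrative rather than the exact ones in pe_var.h:

    /*
     * Sketch only: the optional header is variable-sized, so the first
     * section header starts at the offset of the optional header plus
     * the length the file header says the optional header actually has.
     * Field names below are illustrative.
     */
    static struct image_section_header *
    pe_first_section(struct image_nt_header *nt)
    {
        return ((struct image_section_header *)((uintptr_t)nt +
            offsetof(struct image_nt_header, inh_optionalhdr) +
            nt->inh_filehdr.ifh_optionalhdrlen));
    }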
Changes from the supplied patch are:
- We don't really need to use the IMAGE_SIZEOF_NT_HEADER() macro when
computing how much of the header to return to callers of
pe_get_optional_header(). While it's important to take the variable
size of the header into account in other calculations, we never
actually look at anything outside the non-variable portion of the
header. This saves callers from having to allocate a variable-sized
buffer off the heap (I purposely tried to avoid using malloc()
in subr_pe.c to make it easier to compile in both the -D_KERNEL and
!-D_KERNEL cases), and since we're copying into a buffer on the
stack, we always have to copy the same amount of data or else
we'll trash the stack something fierce.
- We need <stddef.h> to get offsetof() in the !-D_KERNEL case.
- ndiscvt.c needs the IMAGE_FIRST_SECTION() macro too, since it does
a little bit of section pre-processing.
PR: kern/83477
This supersedes the fix for the old algorithm in rev.1.8 of k_cosf.c.
I want this change mainly because it is an optimization. It helps
make software cos[f](x) and sin[f](x) faster than the i387 hardware
versions for small x. It is also a simplification, and reduces the
maximum relative error for cosf() and sinf() on machines like amd64
from about 0.87 ulps to about 0.80 ulps. It was validated for cosf()
and sinf() by exhaustive testing. Exhaustive testing is not possible
for cos() and sin(), but ucbtest reports a similar reduction for the
worst case found by non-exhaustive testing. ucbtest's non-exhaustive
testing seems to be good enough to find problems in algorithms but not
maximum relative errors when there are spikes. E.g., short runs of
it find only a 3 ulp error where the i387 hardware cos() has an error
of about 2**40 ulps near pi/2.
threads can wait for a thread to exit and safely assume that the thread
has left userland and is no longer using its userland stack. This is
necessary for pthread_join() when a thread is waiting for another thread
with a user-customized stack to exit: after pthread_join() returns,
the userland stack can be reused for other purposes. Without this change,
the joiner thread has to spin at the address to ensure the thread has
really exited.
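A user-level illustration of the guarantee (a sketch, not part of this
change): once pthread_join() returns, the storage supplied for the
joined thread's stack may be reused or freed.

    #include <pthread.h>
    #include <stdlib.h>

    #define STACKSIZE   (1024 * 1024)

    static void *
    worker(void *arg)
    {
        return (NULL);
    }

    int
    main(void)
    {
        pthread_attr_t attr;
        pthread_t td;
        void *stack;

        stack = malloc(STACKSIZE);
        pthread_attr_init(&attr);
        pthread_attr_setstack(&attr, stack, STACKSIZE);
        pthread_create(&td, &attr, worker, NULL);

        /*
         * With the kernel-side wait, pthread_join() does not return
         * until the thread has left userland, so its stack may be
         * reused (or freed) immediately afterwards.
         */
        pthread_join(td, NULL);
        free(stack);
        return (0);
    }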
and ndis_halt_nic(). It's been disabled for some time anyway, and
it turns out there's a possible deadlock in NdisMInitializeTimer() when
acquiring the miniport block lock to modify the timer list: it's
possible for a driver to call NdisMInitializeTimer() when the miniport
block lock has already been acquired by an earlier piece of code. You
can't acquire the same spinlock twice, so this can deadlock.
Also, implement MmMapIoSpace() and MmUnmapIoSpace(), and make
NdisMMapIoSpace() and NdisMUnmapIoSpace() use them. There are some
drivers that want MmMapIoSpace() and MmUnmapIoSpace() so that they can
map arbitrary register spaces not directly associated with their
device resources. For example, there's an Atheros driver for
a MiniPCI card (0x168C:0x1014) on the IBM Thinkpad x40 that wants
to map some I/O spaces at 0xF00000 and 0xE00000 which are held by
the acpi0 device. I don't know what it wants these ranges for,
but if it can't map and access them, the MiniportInitialize() method
fails.
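Roughly how a Windows miniport driver uses these routines, as it might
appear in its MiniportInitialize() path. This is a sketch using the
standard DDK signatures; the address and size are made up, and the
NDISulator's internal types differ.

    PHYSICAL_ADDRESS    pa;
    void                *va;

    pa.QuadPart = 0x00F00000;       /* register window outside our BARs */
    va = MmMapIoSpace(pa, 0x1000, MmNonCached);
    if (va == NULL)
        return (NDIS_STATUS_RESOURCES);

    /* ... access the registers through va ... */

    MmUnmapIoSpace(va, 0x1000);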
has been set in the first place. This should reduce
unwanted side-effects in rc.d scripts that don't mean
to use $command and rc.subr(8) methods associated with
it at all.
Discussed with: brooks
Reviewed by: -rc (silence)
o Oxford Semiconductor PCI Dual Port Serial
o Netmos Nm9845 PCI Bridge with Dual UART
Add PCI IDs for single-port cards:
o Various SIIG Cyber Serial
o Oxford Semiconductor OXCB950 UART
Update description as per puc(4).
immediately, back off to the next higher Cx sleep state. Some machines
with a Via chipset report a valid C3 but a register read doesn't actually
halt the CPU. This would cause the machine to appear unresponsive as it
repeatedly called cpu_idle(), which immediately returned. Causing interrupts
(e.g., by pressing the power button) would let the system make forward
progress, showing that it wasn't actually hung.
Also, enable interrupts a little earlier. We don't need them disabled
to calculate the delta time for the read.
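The detection amounts to something like the following sketch; all struct
and field names here are hypothetical and do not match acpi_cpu.c.

    struct cx_state {
        u_int       port;           /* P_LVLx I/O port */
    };

    struct cx_softc {
        struct cx_state cx_state[8];
        int         cx_lowest;      /* deepest state we will use */
        uint64_t    cx_min_cycles;  /* threshold for "returned immediately" */
    };

    static void
    cx_idle(struct cx_softc *sc)
    {
        uint64_t start, end;

        /* Reading the state's I/O port is what puts the CPU to sleep. */
        start = rdtsc();
        (void)inb(sc->cx_state[sc->cx_lowest].port);
        end = rdtsc();

        /* If the read came straight back, this state is broken: back off. */
        if (end - start < sc->cx_min_cycles && sc->cx_lowest > 0)
            sc->cx_lowest--;
    }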
Reported by: silby
MFC after: 2 weeks
and increase flexibility to allow various different approaches to be tried
in the future.
- Split struct ithd up into two pieces. struct intr_event holds the list
of interrupt handlers associated with interrupt sources.
struct intr_thread contains the data specific to an interrupt thread
(see the sketch after this list). Currently we still provide a 1:1
relationship of events to threads, with the exception that events only
have an associated thread if there is at least one threaded interrupt
handler attached to the event. This means that on x86 we no longer have
4 bazillion interrupt threads with no handlers. It also means that
interrupt events with only INTR_FAST handlers no longer have an
associated thread either.
- Renamed struct intrhand to struct intr_handler to follow the struct
intr_foo naming convention. This did require renaming the powerpc
MD struct intr_handler to struct ppc_intr_handler.
- INTR_FAST no longer implies INTR_EXCL on all architectures except for
powerpc. This means that multiple INTR_FAST handlers can attach to the
same interrupt and that INTR_FAST and non-INTR_FAST handlers can attach
to the same interrupt. Sharing INTR_FAST handlers may not always be
desirable, but having sio(4) and uhci(4) fight over an IRQ isn't fun
either. Drivers can always still use INTR_EXCL to ask for an interrupt
exclusively. The way this sharing works is that when an interrupt
comes in, all the INTR_FAST handlers are executed first, and if any
threaded handlers exist, the interrupt thread is scheduled afterwards.
This type of layout also makes it possible to investigate using interrupt
filters a la OS X, where the filter determines whether or not its companion
threaded handler should run.
- Aside from the INTR_FAST changes above, the impact on MD interrupt code
is mostly just 's/ithread/intr_event/'.
- A new MI ddb command 'show intrs' walks the list of interrupt events
dumping their state. It also has a '/v' verbose switch which dumps
info about all of the handlers attached to each event.
- We currently don't destroy an interrupt thread when the last threaded
handler is removed because it would suck for things like ppbus(4)'s
braindead behavior. The code is present, though; it is just under
#if 0 for now.
- Move the code to actually execute the threaded handlers for an interrupt
event into a separate function so that ithread_loop() becomes more
readable. Previously this code was all in the middle of ithread_loop()
and indented halfway across the screen.
- Made struct intr_thread private to kern_intr.c and replaced td_ithd
with a thread private flag TDP_ITHREAD.
- In statclock, check curthread against idlethread directly rather than
curthread's proc against idlethread's proc. (Not really related to intr
changes)
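As a rough sketch of the new layout referenced above; field names are
illustrative rather than the exact ones in kern_intr.c.

    struct intr_event {
        TAILQ_HEAD(, intr_handler) ie_handlers; /* sorted handler list */
        struct intr_thread *ie_thread;  /* NULL if no threaded handlers */
        char        ie_name[MAXCOMLEN + 1];
        int         ie_flags;
    };

    struct intr_handler {
        driver_intr_t   *ih_handler;    /* handler function */
        void            *ih_argument;   /* argument passed to it */
        int             ih_flags;       /* e.g. marks INTR_FAST handlers */
        char            ih_name[MAXCOMLEN + 1];
        TAILQ_ENTRY(intr_handler) ih_next;
    };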
Tested on: alpha, amd64, i386, sparc64
Tested on: arm, ia64 (older version of patch by cognet and marcel)
tool:
- Use uname(3) to query the OS name to report in the HTTP headers.
This is probably more useful than hard-coding FreeBSD.
- If no path is specified, create a 1k temporary file and send that
instead. Pass a file descriptor into http_serve() rather than using
a global fd.
- Add more carriage returns to the HTTP headers to be a bit more
correct. (Suggested by: andre)
- Read to a buffer rather than a single character to reduce the number
of recv() system calls pulling in the HTTP request.
- Properly wait for two, not one, \n's on input (see the sketch below).
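The request-reading change amounts to something like this sketch (not
the tool's actual code; read_request() is a hypothetical helper):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <string.h>

    /*
     * Read the HTTP request into a buffer and stop once the blank line
     * that ends the headers is seen, with or without carriage returns.
     */
    static int
    read_request(int fd, char *buf, size_t len)
    {
        size_t off = 0;
        ssize_t n;

        while (off < len - 1) {
            n = recv(fd, buf + off, len - 1 - off, 0);
            if (n <= 0)
                return (-1);
            off += (size_t)n;
            buf[off] = '\0';
            if (strstr(buf, "\n\n") != NULL ||
                strstr(buf, "\r\n\r\n") != NULL)
                return (0);
        }
        return (-1);    /* request too large for the buffer */
    }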
to the actual dates when the code actually changed. Also add special-case
link state change handling for RELENG_5, which doesn't have
if_link_state_change(). No operational changes are made.
devices can be opened multiple times simultaneously but we're
expected to be able to do so by the rest of the loader.
This fixes booting from disks attached to the on-board SCSI
controller of the Sun Ultra 1 (previously this triggered a trap)
and probably also of AX1115 boards.
- While here, remove unused variables and add empty lines where
style(9) requires such.
Tested on: powerpc (grehan), sparc64
MFC after: 1 month
to floats (mainly i386's). All errors of more than 1 ulp for float
precision trig functions were supposed to have been fixed; however,
compiling with gcc -O2 uncovered 18250 more such errors for cosf(),
with a maximum error of 1.409 ulps.
Use essentially the same fix as in rev.1.8 of k_rem_pio2f.c (access a
non-volatile variable as a volatile). Here the -O1 case apparently
worked because the variable is in a 2-element array and it takes -O2
to mess up such a variable by putting it in a register.
The maximum error for cosf() on i386 with gcc -O2 is now 0.5467 ulps (it
is still 0.5650 ulps with gcc -O1). This shows that -O2 still causes some
extra precision, but the extra precision is now good.
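The technique, in sketch form (not the exact libm code; as_float() is a
hypothetical helper):

    /*
     * Copying the value through a volatile forces it to be stored to
     * memory and rounded to float precision, instead of staying in an
     * 80-bit i387 register with extra precision.
     */
    static inline float
    as_float(float x)
    {
        volatile float v = x;

        return (v);
    }

    /* ... then, in the kernel of the routine (fragment) ... */
        z = as_float(x * x);    /* z now has true float precision */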
Extra precision is harmful mainly for implementing extra precision in
software. We want to represent x+y as w+r where both "+" operations
are in infinite precision and r is tiny compared with w. There is a
standard algorithm for this (Knuth (1981) 4.2.2 Theorem C), and fdlibm
uses this routinely, but the algorithm requires w and r to have the
same precision as x and y. w is just x+y (calculated in the same
finite precision as x and y), and r is a tiny correction term. The
i386 gcc bugs tend to give extra precision in w, and then using this
extra precision in the calculation of r results in the correction
mostly staying in w and being missing from r. There still tends to
be no problem if the result is a simple expression involving w and r
-- modulo spills, w keeps its extra precision and r remains the right
correction for this wrong w. However, here we want to pass w and r
to extern functions. Extra precision is not retained in function args,
so w gets fixed up, but the change to the tiny r is tinier, so r almost
remains as a wrong correction for the right w.
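For concreteness, the standard exact-addition step looks like this
sketch (two_sum() is an illustrative name, not an fdlibm function):

    /*
     * Knuth, Seminumerical Algorithms, 4.2.2 Theorem C: split x + y
     * into w + r, where w is the rounded sum and r the exact rounding
     * error.  This only works if w and r carry the same precision as
     * x and y; spurious extra precision in w leaves part of the
     * correction stuck in w and missing from r.
     */
    static void
    two_sum(double x, double y, double *w, double *r)
    {
        double s, t;

        s = x + y;
        t = s - x;
        *r = (x - (s - t)) + (y - t);
        *w = s;
    }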
Try to make everyone happy: David (to have debug kernels installed
by default), Warner (to be able to override that), and myself (to
actually make it all work and keep it consistent).
Now, if the kernel was configured for debugging (through DEBUG=-g in
the kernel config file or "config -g"), doing "make install" will
install debug versions of the kernel and module objects under their
canonical names:
kernel.debug -> /boot/kernel/kernel
if_fxp.ko.debug -> /boot/kernel/if_fxp.ko
Installing a kernel not configured for debugging, or a debug kernel
with the INSTALL_NODEBUG variable defined, will install non-debug
kernel and module objects.
Also, restore the install.debug and reinstall.debug targets that
are part of the existing API (they cause some additional gdb(1)
scripts to be installed).
of the PCIR_HDRTYPE register. It's the value returned from this
read access that determines whether or not we decide a device is
present at the current slot index. For some reason that I can't
adequately explain, this read fails on my machine when probing the
USB controller (which happens to be a multifunction device
at slot index 3 hung off the PCI-PCI bridge on the AMD8111 (bus
index 1)). The read will return 0xFF even though it should return
0x80 to indicate the presence of a multifunction device.
As near as I can tell, there's some timing issue involved with reading
the 'dead' slot indexes 0 through 2 that causes the read of the actual
device at slot 3 to fail. I tried a couple of different tricks to
correct the problem (the patch to amd64/pci/pci_cfgreg.c fixes it
for the amd64 arch), but adding this delay is the only thing that
always allows the USB controllers to be correctly probed 100% of the
time. Whatever the problem is, it's likely confined to the AMD8111
chipset. However, a simple 1us delay is fairly harmless and should
have no side effects for other hardware. I consider this to be
voodoo, but it's fairly benign voodoo and it makes my USB keyboard
and mouse work again.
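The change amounts to something like the following hypothetical
fragment; the real loop lives in the PCI probe code, and the only point
here is the DELAY(1) before the decisive read.

    for (slot = 0; slot <= PCI_SLOTMAX; slot++) {
        DELAY(1);       /* let the 8111 settle before the decisive read */
        hdrtype = pci_cfgregread(bus, slot, 0, PCIR_HDRTYPE, 1);
        if (hdrtype == 0xff)
            continue;   /* no device at this slot */
        maxfunc = (hdrtype & PCIM_MFDEV) ? PCI_FUNCMAX : 0;
        /* ... probe functions 0 through maxfunc ... */
    }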
Note that this is the second time that I've had to resort to a
1us delay to fix a PCI-related problem with this AMD8111/Opteron
system (the first being a fix I made a while back to the NDISulator).
It's possible the delay really belongs in the cfgreg code itself,
or that pci_cfgreg needs some custom hackery for an errata in the
8111. (I checked but couldn't find any documented errata on AMD's
site that could account for these problems.)
other OSes (Solaris, Linux, VxWorks). It's not necessary to write a 0
to the config address register when using config mechanism 1 to turn
off config access. In fact, it can be downright troublesome, since it
seems to confuse the PCI-PCI bridge in the AMD8111 chipset and cause
it to sporadically botch reads from some devices. This is the cause
of the missing USB ports problem I was experiencing with my Sun Opteron
system.
Also correct the case for mechanism 2: it's only necessary to write
a 0 to the ENABLE port.
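A sketch of a mechanism 1 dword read with the trailing write removed
(constants per the PCI spec; not the exact pci_cfgreg.c code):

    #define CONF1_ADDR_PORT 0x0cf8
    #define CONF1_DATA_PORT 0x0cfc
    #define CONF1_ENABLE    0x80000000ul

    static uint32_t
    cfg1_read(int bus, int slot, int func, int reg)
    {
        outl(CONF1_ADDR_PORT, CONF1_ENABLE |
            (bus << 16) | (slot << 11) | (func << 8) | (reg & 0xfc));
        /* Previously a 0 was written back to CONF1_ADDR_PORT here. */
        return (inl(CONF1_DATA_PORT));
    }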
- Move hardware counter reading/zeroing to hme_tick(). This saves
8 register accesses per interrupt. [1]
- Use the imax() macro to get the maximum of two integers.
- Invoke bus_dmamap_sync(9) before freeing an mbuf.
- Check the driver queue first to reduce locking operations in
hme_start_locked() and the interrupt handler.
- Simplify watchdog timer setup in the interrupt handler.
- Don't log normal errors such as RX overruns. If we detect a stuck DMA
condition, reinitialize the driver and log it.
Reviewed by: marius
Obtained from: OpenBSD [1]
confused with the Credit Card Adapter II and its spawn (which the xe
driver supports). These changes get my card probing and attaching. I
recently won one of these (and a NEC-rebadged version) in a lot
auction. The NEC didn't work, so I took it apart, found the
MB86960A chip, and then modified if_fe_pccard.c to attach to it. I can't
test the card further since I have no dongle for it.