are supposed to continue firing as long as there is work to do, not
stop after the first invocation.
This is damage control after a patch that has been committed prematurely.
Tested by: kris
instead of ephemeral mappings using pmap_qenter() by the writer. The
writer is still, however, responsible for wiring the pages, just not
mapping them. Consequently, the allocation of KVA for the direct case is
unnecessary. Remove it and the sysctls limiting it, i.e.,
kern.ipc.maxpipekvawired and kern.ipc.amountpipekvawired. The number
of temporarily wired pages is still, however, limited by
kern.ipc.maxpipekva.
Note: On platforms lacking a direct virtual-to-physical mapping,
uiomove_fromphys() uses sf_bufs to cache ephemeral mappings. Thus,
the number of available sf_bufs can influence the performance of pipes
on platforms such as i386. Surprisingly, I saw the greatest gain from this
change on such a machine: lmbench's pipe bandwidth result increased from
~1050MB/s to ~1850MB/s on my 2.4GHz, 400MHz FSB P4 Xeon.
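For illustration, a minimal sketch of the read-side copy under this
scheme (the helper and its arguments are hypothetical; only
uiomove_fromphys() itself is the real interface):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/uio.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>
    #include <vm/vm_extern.h>

    /*
     * Hypothetical helper: copy out a direct write.  The writer has
     * already wired the pages in pages[]; "off" is the offset of the
     * data within the first page and "cnt" the bytes available.
     */
    static int
    pipe_copyout_direct(vm_page_t *pages, vm_offset_t off, size_t cnt,
        struct uio *uio)
    {
            /*
             * uiomove_fromphys() copies straight from the physical
             * pages, so no pmap_qenter()/KVA allocation is needed on
             * the read side.
             */
            if (cnt > (size_t)uio->uio_resid)
                    cnt = uio->uio_resid;
            return (uiomove_fromphys(pages, off, cnt, uio));
    }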
long as there are still explicit uses of int, whether in types or
in function names (such as atomic_set_int() in sched_ule.c), we cannot
change cpumask_t to be anything other than u_int. See also the
commit log for sys/sys/types.h, revision 1.84.
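A hedged illustration of the constraint (the mask variable and helper
are made up; atomic_set_int() is the real routine):

    #include <sys/types.h>
    #include <machine/atomic.h>

    typedef u_int cpumask_t;        /* must match atomic_set_int()'s u_int * */

    static cpumask_t idle_mask;     /* hypothetical mask of idle CPUs */

    static void
    mark_cpu_idle(int cpu)
    {
            /*
             * atomic_set_int() takes a volatile u_int *; if cpumask_t
             * were widened to, say, uint64_t, this call would no longer
             * compile, or would silently operate on half the mask.
             */
            atomic_set_int(&idle_mask, 1u << cpu);
    }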
* all
- s/__FUNCTION__/__func__/.
Submitted by: Stefan Farfeleder <stefan@fafoe.narf.at>
- Compatibility for RELENG_4 and DragonFly.
* firewire
- Timestamp just before queuing.
- Retry bus probe if it fails.
- Use device_printf() for debug message.
- Invalidate CROM while updating.
- Don't process minimal or invalid CROMs.
* sbp
- Add ORB_SHORTAGE flag.
- Add sbp.tags tunable.
- Revive doorbell support. It's not enabled by default.
objects rather than synchronization objects. When a sync object is
signaled, only the first thread waiting on it is woken up, and then
it's automatically reset to the not-signaled state. When a
notification object is signaled, all threads waiting on it will
be woken up, and it remains in the signaled state until someone
resets it manually. We want the latter behavior for NDIS events.
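The difference, sketched in Windows DDK terms (the event and the driver
routines here are hypothetical; the Ke* calls and EVENT_TYPE values are
the real Windows kernel API):

    #include <ntddk.h>      /* Windows DDK-style sketch, not FreeBSD code */

    static KEVENT drv_event;        /* hypothetical event shared by threads */

    static VOID
    drv_init(void)
    {
            /*
             * NotificationEvent: stays signaled until reset manually and
             * wakes every waiter.  SynchronizationEvent would auto-reset
             * and wake only one waiter, the wrong model for NDIS events.
             */
            KeInitializeEvent(&drv_event, NotificationEvent, FALSE);
    }

    static VOID
    drv_wait(void)
    {
            /* Any number of threads can block here... */
            KeWaitForSingleObject(&drv_event, Executive, KernelMode,
                FALSE, NULL);
    }

    static VOID
    drv_signal(void)
    {
            /* ...and one KeSetEvent() releases all of them. */
            KeSetEvent(&drv_event, IO_NO_INCREMENT, FALSE);
    }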
- In kern_ndis.c:ndis_convert_res(), we have to create a temporary
copy of the list returned by BUS_GET_RESOURCE_LIST(). When the PCI
bus code probes resources for a given device, it enters them into
a singly linked list, head first. The result is that traversing
this list gives you the resources in reverse order. This means when
we create the Windows resource list, it will be in reverse order too.
Unfortunately, this can hose drivers for devices with multiple I/O
ranges of the same type, like, say, two memory mapped I/O regions (one
for registers, one to map the NVRAM/bootrom/whatever). Some drivers
test the range size to figure out which region is which, but others
just assume that the resources will be listed in ascending order from
lowest numbered BAR to highest. Reversing the order means such drivers
will choose the wrong resource as their I/O register range.
Since we can't traverse the resource SLIST backwards, we have to
make a temporary copy of the list in the right order and then build
the Windows resource list from that. I suppose we could just fix
the PCI bus code to use a TAILQ instead, but then I'd have to track
down all the consumers of BUS_GET_RESOURCE_LIST() and fix them
too.
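A minimal sketch of the reversal trick (the copy type and its fields
are hypothetical, not the real resource_list_entry layout):
head-inserting each element of the reversed source list into a fresh
SLIST restores the original, ascending order.

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <sys/queue.h>

    /* Hypothetical stand-in for the per-resource data being copied. */
    struct res_copy {
            SLIST_ENTRY(res_copy) link;
            int     type;
            int     rid;
            u_long  start;
    };
    SLIST_HEAD(res_copy_list, res_copy);

    static void
    copy_reversed(struct res_copy_list *dst, struct res_copy_list *src)
    {
            struct res_copy *e, *c;

            SLIST_INIT(dst);
            SLIST_FOREACH(e, src, link) {
                    c = malloc(sizeof(*c), M_TEMP, M_NOWAIT);
                    if (c == NULL)
                            break;
                    c->type = e->type;
                    c->rid = e->rid;
                    c->start = e->start;
                    /* Inserting at the head re-reverses the list. */
                    SLIST_INSERT_HEAD(dst, c, link);
            }
    }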
Compute the payload checksum for a locally originated IP multicast where
God intended, in ip_mloopback(), rather than doing it in ip_output() and
only when a multicast router is active. This is more correct, as we no
longer fool ip_input() into believing the packet has a correct payload
checksum when in fact it does not (when no multicast router is active).
This is also more efficient if we don't join the multicast group we
send to, since it allows the hardware to checksum the payload.
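In rough terms (a hedged sketch; the helper name is made up and the
exact flag handling in ip_mloopback() may differ), the loopback path
now finishes the delayed data checksum itself before the copy is handed
to ip_input():

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <netinet/in.h>
    #include <netinet/in_systm.h>
    #include <netinet/ip.h>
    #include <netinet/ip_var.h>

    /*
     * Sketch: "copym" is the mbuf copy about to be looped back.  If the
     * payload checksum was deferred to the hardware, compute it in
     * software now, since this copy never reaches a NIC.
     */
    static void
    mloopback_fixup_cksum(struct mbuf *copym)
    {
            if (copym->m_pkthdr.csum_flags & CSUM_DELAY_DATA) {
                    in_delayed_cksum(copym);
                    copym->m_pkthdr.csum_flags &= ~CSUM_DELAY_DATA;
            }
    }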
which pulls a job off a thread work queue (assuming it hasn't run yet).
This is needed for KeRemoveQueueDpc().
- In subr_ntoskrnl.c, implement KeInsertQueueDpc() and KeRemoveQueueDpc(),
to go with KeInitializeDpc() to round out the API. Also change the
KeTimer implementation to use this API instead of the private
timer callout scheduler. Functionality of the timer API remains
unchanged, but we get a couple new Windows kernel API routines and
more closely imitate the way things work in Windows. (As of yet
I haven't encountered any drivers that use KeInsertQueueDpc() or
KeRemoveQueueDpc(), but it doesn't hurt to have them.)
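For reference, the Windows-side usage pattern these routines round out
looks roughly like this (the DPC routine, its context, and the driver
function names are hypothetical):

    #include <ntddk.h>      /* Windows DDK-style sketch */

    static KDPC drv_dpc;

    /* Hypothetical deferred procedure; runs later at DISPATCH_LEVEL. */
    static VOID
    drv_dpc_func(PKDPC dpc, PVOID ctx, PVOID sysarg1, PVOID sysarg2)
    {
            /* ... deferred work ... */
    }

    static VOID
    drv_schedule(PVOID ctx)
    {
            KeInitializeDpc(&drv_dpc, drv_dpc_func, ctx);
            /* Queue it; returns FALSE if this DPC is already queued. */
            KeInsertQueueDpc(&drv_dpc, NULL, NULL);
    }

    static VOID
    drv_cancel(void)
    {
            /* Pull it back off the queue if it has not run yet;
             * returns TRUE if it was actually dequeued. */
            KeRemoveQueueDpc(&drv_dpc);
    }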
Report the %ecx bits in cpuid function 1. This is a hack.
When reporting AMD Features, only mask off the common bits. Otherwise
the SEP bit masks off SYSCALL etc. in the report.
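A hedged sketch of where those bits come from (the printing and the
"Features2" label are illustrative; do_cpuid() is the real helper from
<machine/cpufunc.h>):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <machine/cpufunc.h>

    static void
    print_cpuid_ecx(void)
    {
            u_int regs[4];

            /* CPUID leaf 1 fills regs[] with %eax, %ebx, %ecx, %edx. */
            do_cpuid(1, regs);
            printf("  Features2=0x%08x\n", regs[2]);    /* the %ecx bits */
    }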
variable length, so we should not be trying to copy it into a fixed
length buffer, especially one on the stack. malloc() a buffer of the
right size and return a pointer to that instead.
Fixes a crash I discovered when testing with a Cisco AP in infrastructure
mode, which returns several information elements that make the
ndis_wlan_bssid_ex structure larger than expected.
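The shape of the fix, as a hedged sketch (the helper and variable names
are hypothetical; the length is assumed to come from the structure's
own reported size):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>

    /*
     * Instead of bcopy()ing a variable-length bssid_ex into a fixed-size
     * stack buffer, size the allocation from the length the device
     * reported and hand back a pointer to it.
     */
    static void *
    copy_bssid(const void *src, size_t reported_len)
    {
            void *buf;

            buf = malloc(reported_len, M_TEMP, M_NOWAIT | M_ZERO);
            if (buf != NULL)
                    bcopy(src, buf, reported_len);
            return (buf);
    }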
Because xfer->send.payload is a pointer to the buffer, '&' shouldn't be there.
Submitted by: John Weisgerber <weisgerberj@gsilumonics.com>
PR: misc/64623
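For the payload fix above, an illustrative sketch (the struct shape and
the copy routine are hypothetical; only the send.payload field comes
from the commit text):

    #include <sys/param.h>
    #include <sys/systm.h>

    /* Hypothetical, simplified shape of the xfer; only payload matters. */
    struct xfer_sketch {
            struct {
                    void *payload;  /* already points at the data buffer */
            } send;
    };

    static void
    fill_payload(struct xfer_sketch *xfer, const void *src, size_t len)
    {
            /* Correct: pass the pointer itself... */
            bcopy(src, xfer->send.payload, len);
            /* ...not &xfer->send.payload, which would clobber the pointer. */
    }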
(1 << 24) - 2 instead of 1 << 24, which it was obviously intended to
be). This fixes SBus isp(4)s on sparc64 machines.
Report and testing: Marius Strobl <marius@alchemy.franken.de>
iommu_dvma_vallocseg(), which I botched in r1.32. This bug could
cause an endless loop when a map was loaded while DVMA was scarce, or
when that map had a stringent alignment or boundary constraint.
Report and additional testing: Marius Strobl <marius@alchemy.franken.de>