and bio_inbed fields to 0. Without this change we can end up with
I/O leakage in some rare situations.
I tested this change by putting a failure probability mechanism similar
to the one used in the NOP class into the g_clone_bio(9) function, so it
was able to return NULL with the given probability.
Discussed with: phk
we update the registers. That way we don't have any dirty registers to
worry about and also know that bsp=bspstore, which makes updating the
RSE related registers predictable.
This is not the end of it. We need more validity checks, but for now
this allows us to complete the gdb testsuite without crashing the
kernel.
if_start routines cannot currently be entered without Giant. When
the kernel is running with debug.mpsafenet != 0, this will defer
if_start execution to a task queue thread holding Giant, which may
introduce additional latency, but avoids incorrect execution.
Suggested by: dfr
full, avoiding the cost of mutex operations if it is. We re-test
once the mutex is acquired to make sure it's still true before doing
the -modify-write part of the read-modify-write. Note that due to
the maximum fifo depth being pretty deep, this is unlikely to improve
harvesting performance yet.
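A minimal sketch of the pattern, with hypothetical names (harvest_fifo,
fifo_full(), harvest_mtx):

    /* Unlocked pre-test: if the fifo already looks full, skip the
     * mutex entirely. A stale answer is harmless because we re-test
     * under the lock before modifying anything. */
    if (fifo_full(&harvest_fifo))
            return;
    mtx_lock(&harvest_mtx);
    if (!fifo_full(&harvest_fifo))          /* still room to enqueue */
            fifo_enqueue(&harvest_fifo, event);
    mtx_unlock(&harvest_mtx);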
Approved by: markm
to allow dumping per-thread machine specific notes. On ia64 we use this
function to flush the dirty registers onto the backingstore before we
write out the PRSTATUS notes.
Tested on: alpha, amd64, i386, ia64 & sparc64
Not tested on: arm, powerpc
a standard configuration similar to [NO_]ADAPTIVE_MUTEXES. This
feature causes Giant to be included in the set of mutexes adaptively
spun on. It appears to have a positive effect on performance on SMP
across several workloads, including measurements of a 16% improvement
on buildworld, and a 30%+ improvement for MySQL using the supersmack
benchmark with Giant over the network stack; a 6% improvement without
Giant on the network stack (as a result of less Giant contention).
we may sleep when doing so; check that we didn't race with another thread
allocating storage for the vnode after allocation is made to a local
pointer, and only update the vnode pointer if it's still NULL. Otherwise,
accept that another thread got there first, and release the local storage.
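In sketch form (v_data and M_TEMP stand in for the actual field and
malloc type):

    storage = malloc(sizeof(*storage), M_TEMP, M_WAITOK | M_ZERO);
    VI_LOCK(vp);
    if (vp->v_data == NULL) {
            vp->v_data = storage;           /* we won the race */
            storage = NULL;
    }
    VI_UNLOCK(vp);
    if (storage != NULL)
            free(storage, M_TEMP);          /* lost the race */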
Discussed with: jmg
Implement the protection check required by the pmap_extract_and_hold()
specification.
Remove the acquisition and release of Giant from pmap_extract_and_hold() and
pmap_protect().
Many thanks to Ken Smith for resolving a sparc64-specific initialization
problem in my original patch.
Tested by: kensmith@
bio_driver1 (as all the rest).
This introduced a small memory leak, but it wasn't really critical,
because maximum memory for g_stripe_zone is always set, so after a few
requests gstripe was working in "economic" mode.
are reserved; they can be written with anything, but they always read
as zero. We should simulate this in set_regs() as if we were
reading/writing the real hardware %rflags register.
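A sketch of the idea, assuming a hypothetical RFLAGS_RESERVED mask of
the reserved bit positions:

    /* Accept whatever was written, but clear the reserved bits so
     * they read back as zero, as the real register would. */
    tf->tf_rflags = regs->r_rflags & ~RFLAGS_RESERVED;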
contributed to the transferable load count. This prevents any potential
problems with sched_pin() being used around calls to setrunqueue().
- Change the sched_add() load balancing algorithm to try to migrate on
wakeup. This attempts to place threads that communicate with each other
on the same CPU.
- Don't clear the idle counts in kseq_transfer(), let the cpus do that when
they call sched_add() from kseq_assign().
- Correct a few out of date comments.
- Make sure the ke_cpu field is correct when we preempt.
- Call kseq_assign() from sched_clock() to catch any assignments that were
done without IPI. Presently all assignments are done with an IPI, but I'm
trying a patch that limits that.
- Don't migrate a thread if it is still runnable in sched_add(). Previously,
this could only happen for KSE threads, but due to changes to
sched_switch() all threads went through this path.
- Remove some code that was added with preemption but is not necessary.
umich copyright is asserting.
Clarify that the copyright I'm asserting is the standard Berkeley
license.
Remove Giant assertions from AARP and DDP input routines.
is here so that we can gather stats on the nature of the recent rash of
hard lockups, and in this particular case panic the machine instead of
letting it deadlock forever.
because some syscalls using set_mcontext() can sneakily change
parameters, and later when those syscalls reference the parameters,
they will wrongly use register values from mcontext_t.
Approved by: peter
chipsets, based on Linux's via-agp.c. On boot, the system selects which AGP
version to use based on the inserted card. If v2 was chosen, the chipset
still needs to be programmed with the v2 registers. Also included in kern/69953
are changes to make the programming of the v3 registers match Linux, but that
will be left out until the need to do so is confirmed (want specs or a tester).
PR: kern/69953
Submitted by: Oleg Sharoiko <os@rsu.ru>
Tested by: Oleg Sharoiko <os@rsu.ru>, Geoff Speicher <geoff@speicher.org>
(full version from PR)
The hardware always gives read access for privilege level 0, which
means that we cannot use the hardware access rights and privilege
level in the PTE to test whether there's a change in protection. So,
we save the original vm_prot_t in the PTE as well.
Add pmap_pte_prot() to set the proper access rights and privilege
level on the PTE given a pmap and the requested protection.
The above allows us to compare the protection in pmap_extract_and_hold(),
which was missing. While in pmap_extract_and_hold(), add pmap locking.
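In outline (pmap_find_pte() and the pte accessors here are
illustrative; the real function differs in detail):

    vm_page_t
    pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot)
    {
            struct ia64_lpte *pte;
            vm_page_t m = NULL;

            PMAP_LOCK(pmap);
            pte = pmap_find_pte(va);
            if (pte != NULL && pmap_present(pte) &&
                (pmap_prot(pte) & prot) == prot) {
                    m = PHYS_TO_VM_PAGE(pmap_ppn(pte));
                    vm_page_hold(m);        /* hold before unlocking */
            }
            PMAP_UNLOCK(pmap);
            return (m);
    }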
While here, clean up most (i.e. all but one) PTE macros we inherited
from alpha. They were either unused, used inconsistently, badly named
or simply weren't beneficial. We save the wired and managed state of
the PTE in distinct (bit) fields.
While in pte.h, s/u_int64_t/uint64_t/g
pmap locking obtained from: alc@
feedback & review by: alc@
* Allow no-fault wiring/unwiring to succeed for consistency;
however, the wired count remains at zero, so it's a special case.
* Fix issues inside vm_map_wire() and vm_map_unwire() where the
exact state of user wiring (one or zero) and system wiring
(zero or more) could be confused; for example, system unwiring
could succeed in removing a user wire, instead of being an
error.
* Require all mappings to be unwired before they are deleted.
When VM space is still wired upon deletion, it will be waited
upon for the following unwire. This makes vslock(9) work
rather than allowing kernel-locked memory to be deleted
out from underneath of its consumer as it would before.
1. Move a comment to its proper place, updating it. (Except for white-
space, this comment had been unchanged since revision 1.1!)
2. Remove spl calls.
fix the obvious bugs, nastier ones reside below the surface), and having
it commented out here just encourages people to try it.
# I'm not removing it from the base system, yet.
For incoming packets, the packet's source address is checked to see
whether it belongs to a directly connected network. If the network is
directly connected, then the interface the packet came in on is
compared to the interface the network is connected to. When the
incoming interface and the directly connected interface are not the
same, the packet does not match.
Usage example:
ipfw add deny ip from any to any not antispoof in
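A sketch of the check, using ifa_ifwithnet(9) for illustration:

    struct sockaddr_in sa;
    struct ifaddr *ia;

    bzero(&sa, sizeof(sa));
    sa.sin_len = sizeof(sa);
    sa.sin_family = AF_INET;
    sa.sin_addr = ip->ip_src;
    ia = ifa_ifwithnet((struct sockaddr *)&sa);
    if (ia == NULL)
            match = 1;      /* not directly connected: no spoof check */
    else
            match = (ia->ifa_ifp == m->m_pkthdr.rcvif);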
Manpage education by: ru
It allows fixing problems when the last provider's sector is shared
among several providers.
- Bump version number for CONCAT and STRIPE and add code for backward
compatibility.
- Do not bump version number of MIRROR, as it wasn't officially introduced yet.
Even if someone started to play with it, there is no big deal, because
the wrong MD5 sum of the metadata will deny those providers.
- Update manual pages.
- Add version history to g_(stripe|concat).h files.
thread; after the bound thread leaves the critical region, the thread
should check the debug flag and may suspend itself by using the command.
2. Schedule an upcall after the thread is suspended by the debugger.
3. Wake up the upcall thread after process suspension.
Reviewed by: deischen
understood. This makes room for additional binary compatibility in the
future.
Put fields in the class for the geom's methods and initialize the methods
of a new geom from these fields. This saves some code in all classes.
o Change the motion calculation to result in
a more reasonable speed of motion
This should fix the 'aiming' problems people have reported. It also
mitigates (but doesn't completely solve) the 'stalling' problems at
very low speeds.
Tested by: many subscribers to -current
Approved by: njl
o Catch 'taps' as button presses
o One finger sends button1, two fingers send button3,
three fingers send button2 (double-click)
Tested by: many subscribers to -current
Approved by: njl
o Handle the 'up/down' buttons some touchpads have as
a z-axis (scrollwheel) as recommended by the specs
o Report the buttons as button4 and button5 instead
of button2 and button4, button2 can be emulated by
pressing button1 and button3 simultaneously. This
allows one to use the two extra buttons for other
purposes if one so desires.
Tested by: many subscribers to -current
Approved by: njl
o Clean up whitespace and comments in the
enable_synaptics() probing function
o Only use (and rely on) the extended capability
bits when we are told they actually exist
o Partly ignore the (possibly dated?) part of the
specification about the mode byte so that we
can support 'guest devices' too.
Tested by: many subscribers to -current
Approved by: njl
path. The basic problem is that we cannot set the single stepping flag
directly, because we don't leave the kernel via an interrupt return. So,
we need another way to set the single stepping flag.
The way we do this is by enabling the lower-privilege transfer trap, which
gets raised when we drop the privilege level. However, since we're still
running in kernel space (sec), we're not yet done. We clear the lower-
privilege transfer trap, enable the taken-branch trap and continue exiting
the kernel until we branch into user space.
Given the current code, there's a total of two traps this way before
we can raise SIGTRAP.
after a fork(2) in fork_trampoline(). By moving the epc_syscall_return
label immediately before the call to do_ast() in epc_syscall(), we not
only achieve that but also handle the detour through exception_return
when the frame corresponds to an asynchronous kernel entry. Hence, we
simplified fork_trampoline() as a side-effect.
related to breakpoints and single stepping into SIGTRAP so gdb(1) knows
why the remote target has stopped. In particular, gdb(1) needs to know
if the reason is something of its own doing.
catch leaking into VFS without Giant.
Inch Giant a little lower in several file descriptor operations on
vnodes to cover only VFS operations that need it, rather than file
flag reading, etc.
was being unconditionally dereferenced but was NULL for PIO requests.
Check the request flags for a DMA transaction before dereferencing.
Reported by: ceri
Tested by: Radek Kozlowski <radek -at- raadradd.com>
fcntl() operations, including:
F_DUPFD dup() alias
F_GETFD retrieve close-on-exec flag
F_SETFD set close-on-exec flag
F_GETFL retrieve file descriptor flags
For the remaining fcntl() operations, do acquire Giant, especially
where we call into fo_ioctl() as a result. We're not yet ready to
push Giant into fo_ioctl(). Once we do, this can all become quite a
bit prettier.
calls. Note that the information included is a bit different from the
existing KTR traces generated on powerpc, as I'm primarily interested
in kernel context (thread, syscall #, proc, etc), not the user
arguments to the system call. Some convergence would be useful here.
with it that need to be understood better before they can be resolved.
This takes time and time is already in short supply.
Reported & tested by: glebius@
will prepend the path of the kernel currently booting... This prevents
the problem of loading /boot/kernel's modules when a different kernel
has no modules of its own but module_load="YES" was left in loader.conf...
Reviewed by: dcs (minus the help part)
something goes wrong while running in "fast" mode, we free all bios and
fall back to "economic" mode. Freeing bios doesn't decrease
bio_children, so bio_inbed could never equal bio_children and the
request was never finished.
Decrease bio_children manually when destroying bios.
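Roughly (the clone list is hypothetical; the point is the manual
bio_children decrement):

    while ((cbp = TAILQ_FIRST(&queue)) != NULL) {
            TAILQ_REMOVE(&queue, cbp, bio_queue);
            bp->bio_children--;     /* this clone will never report in */
            g_destroy_bio(cbp);
    }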
Reported by: Sam Lawrance <boris@brooknet.com.au>, simon
message if they are incorrect. Also, remove the hack of allowing the
initial irq setting to not be in _PRS. As before, the old behavior can be
regained by defining ACPI_OLD_PCI_LINK.
structures, allowing in6_pcbnotify() to lock the pcbinfo and each
inpcb that it notifies of ICMPv6 events. This prevents inpcb
assertions from firing when IPv6 generates and delivers event
notifications for inpcbs.
Reported by: kuriyama
Tested by: kuriyama
a result of scheduling an ithread, cut a KTR_INTR trace record so
that it's clear in tracing interrupt activity where and when the
entropy harvesting code is invoked.
callout_reset rather than calling callout_stop. This results in a few
lines of code duplication, but it provides a significant performance
improvement because it avoids recursing on callout_lock.
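That is, the stop-then-reset pair collapses into one call (sc and
timeout_fn are illustrative):

    /* Before: two trips through callout_lock, one recursive. */
    callout_stop(&sc->timer);
    callout_reset(&sc->timer, hz, timeout_fn, sc);

    /* After: callout_reset() already cancels a pending callout. */
    callout_reset(&sc->timer, hz, timeout_fn, sc);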
Requested by: rwatson
or multicast packet, we don't need to acquire the inpcb mutex
unless we are actually using inpcb fields other than the bound port
and address. Since we hold the pcbinfo lock already, these can't
change. Defer acquiring the inpcb mutex until we have a high
chance of a match. This avoids about 120 mutex operations per UDP
broadcast packet received on one of my work systems.
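In sketch form (simplified from the real loop; udp_append() stands in
for the delivery path):

    LIST_FOREACH(inp, &udb, inp_list) {
            /* Bound port and address are stable while we hold the
             * pcbinfo lock, so these tests need no inpcb lock. */
            if (inp->inp_lport != uh->uh_dport)
                    continue;
            if (inp->inp_laddr.s_addr != INADDR_ANY &&
                inp->inp_laddr.s_addr != ip->ip_dst.s_addr)
                    continue;
            INP_LOCK(inp);          /* likely match: lock and deliver */
            udp_append(inp, ip, m, iphlen);
            INP_UNLOCK(inp);
    }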
Reviewed by: sam
want a splash screen.
There seems to be some confusion in the syscons code as to the meaning of
the SC_KERNEL_CONSOLE flag. Its absence is sometimes interpreted to mean
"I am not the system console", and sometimes to mean "I am not the only
VGA console" (see the font loading code for an example of the latter).
Someone with better syscons fu than myself should take a closer look.
(that does not compile with !gcc). Moreover, we get the benefit for all archs
that have a hand optimized in_cksum_skip().
Submitted by: yongari
Tested by: me (i386, extensively), pf4freebsd ML (various)
better check for 'adjacent'. The old code assumed that if two resources
were adjacent in the linked list that they were also adjacent range wise.
This is not true when a resource manager has to manage disparate regions.
For example, the current interrupt code on i386/amd64 will instruct
irq_rman to manage two disjoint regions: 0-1 and 3-15 for the non-APIC
case. If IRQs 1 and 3 were allocated and then released, the old code
would coalesce across the 1 to 3 boundary because the resources were
adjacent in the linked list, thus incorrectly adding IRQ 2 to the range
of resources that irq_rman managed as a side effect. The fix adds extra
checks so that
adjacent unallocated resources are only merged with the resource being
freed if the start and end values of the resources also match up. The
patch also consolidates the checks for adjacent resources being allocated.
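The essence of the extra check, sketched (r is the resource being
freed, s its list predecessor; merge() is hypothetical):

    /* List adjacency is necessary but not sufficient: the ranges
     * themselves must abut before we may coalesce. */
    if (s != NULL && !(s->r_flags & RF_ALLOCATED) &&
        s->r_end + 1 == r->r_start)
            merge(s, r);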
following behavior:
* Link devices return invalid status (_STA) values. The results are very
unreliable -- sometimes never present. Just ignore the status and pick
the best configuration from _PRS.
* Link devices return invalid current settings (_CRS). Even after setting
the link value, many systems still return a different setting for _CRS.
When setting an IRQ, don't bother to check _CRS to see if we succeeded.
Note that we still check _CRS before routing and this should be addressed
as well.
Since this is a sensitive area, leave the old behavior accessible via
uncommenting the define for ACPI_OLD_PCI_LINK at the top of the file. Once
this has been thoroughly tested, this option and the code it covers will
be removed.
Thanks to Len Brown at Intel for informing us of these issues as he worked
around them in Linux.
location (for the wake code). It should not be needed since we don't
map other pages at the same location and if there was an old mapping, it
would be restored by a fault. The old code had serious problems, namely
that it was restoring the new page it had just removed (not opage) and
it could only guess at the right protection (since there's no
pmap_extract_protect function). Thanks to Alan Cox for explaining much
of this to me.
Also, remove a commented-out initializecpu() call since it is not needed.
Restoring the cpu context is better than attempting to init from scratch.
Reviewed by: alc (earlier version)
have a clear idea of boot2's BSS size and leaves a portion of it not
zeroed out. btxcsu.s is in a much better position for this job.
Obtained from: DragonflyBSD (with minor adjustments)
Since the HME doesn't compensate the checksum for UDP datagrams which
can yield 0x0 (0x0 in the UDP checksum field means "no checksum"),
UDP transmit checksum offload is disabled by default. It can be
reactivated by setting the special link option link0 with ifconfig(8).
Approved by: jake (mentor)
Reviewed by: tmm
Tested by: Herve Boulouis <amon@sockar.homeip.net>
before grabbing BPF locks to see if there are any entries in order to
avoid the cost of locking if there aren't any. Avoids a mutex lock/
unlock for each packet received if there are no BPF listeners.
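A sketch of the pre-test (loop body simplified):

    if (LIST_EMPTY(&bp->bif_dlist))
            return;                 /* no listeners: skip the lock */
    BPFIF_LOCK(bp);
    LIST_FOREACH(d, &bp->bif_dlist, bd_next)
            catchpacket(d, pkt, pktlen, pktlen, bcopy);
    BPFIF_UNLOCK(bp);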
consumer and 'bio_pflags' which can be used by provider.
- Remove BIO_FLAG1 and BIO_FLAG2 flags. From now on new fields should be
used for internal flags.
- Update g_bio(9) manual page.
- Update some comments.
- Update GEOM_MIRROR, which was the only one using BIO_FLAGs.
Idea from: phk
Reviewed by: phk
spin-wait code to use the same spin mutex (smp_tlb_mtx) as the TLB ipi
and spin-wait code snippets so that you can't get into the situation of
one CPU doing a TLB shootdown to another CPU that is doing a lazy pmap
shootdown each of which are waiting on each other. With this change, only
one of the CPUs would do an IPI and spin-wait at a time.
the immediate awakening of proc0 (scheduler kproc, controls swapping
processes in and out). The scheduler process periodically awakens already,
so this will not result in processes not being swapped in, there will just
be more latency in between a thread being made runnable and the scheduler
waking up to swap the affected process back in.
macros and pass the value to the associated _mtx_*() functions to avoid
more curthread dereferences in the function implementations. This provided
a very modest perf improvement in some benchmarks.
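Schematically (simplified; the real macros also carry the file/line
and option arguments):

    #define mtx_lock(m) do {                                        \
            struct thread *_td = curthread; /* evaluated once */    \
                                                                    \
            if (!_obtain_lock((m), _td))                            \
                    _mtx_lock_sleep((m), _td, 0, LOCK_FILE,         \
                        LOCK_LINE);                                 \
    } while (0)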
Suggested by: rwatson
Tested by: scottl
text/data are covered on APs. This enables the kernel to boot on
a 4 way Intel Itanium-2 platform. This has a secondary effect of
keeping the TRs identical on BP and the APs.
reviewed by: marcel@
a sleep() call waking up in namei(), a later assertion triggers that
Giant is not held. By asserting Giant at the start of namei(), we can
know that if that assertion triggers, Giant is lost during the call to
namei(), and not before.
lock assertions even if IPv6 is compiled into the kernel. Previously,
inclusion of IPv6 and locking assertions would result in a rapid
assertion failure as IPv6 was not properly locking inpcbs.
- In ntoskrnl_var.h, I had defined compat macros for
ntoskrnl_acquire_spinlock() and ntoskrnl_release_spinlock() but
never used them. This is fortunate since they were stale. Fix them
to work properly. (In Windows/x86 KeAcquireSpinLock() is a macro that
calls KefAcquireSpinLock(), which lives in HAL.dll. To imitate this,
ntoskrnl_acquire_spinlock() is just a macro that calls hal_lock(),
which lives in subr_hal.o.)
- Add macros for ntoskrnl_raise_irql() and ntoskrnl_lower_irql() that
call hal_raise_irql() and hal_lower_irql().
- Use these macros in kern_ndis.c, subr_ndis.c and subr_ntoskrnl.c.
- Along the way, I realised subr_ndis.c:ndis_lock() was not calling
hal_lock() correctly (it was using the FASTCALL2() wrapper when
in reality this routine is FASTCALL1()). Using the
ntoskrnl_acquire_spinlock() fixes this. Not sure if this actually
caused any bugs since hal_lock() would have just ignored what
was in %edx, but it was still bogus.
This hides many of the uses of the FASTCALLx() macros which makes the
code a little cleaner. Should not have any effect on generated object
code, other than the one fix in ndis_lock().
vm_page_sleep_if_busy() and the page table page's busy flag as a
synchronization mechanism on page table pages.
Also, relocate the inline pmap_unwire_pte_hold() so that it can be used
to shorten _pmap_unwire_pte_hold() on alpha and amd64. This places
pmap_unwire_pte_hold() next to a comment that more accurately describes
it than _pmap_unwire_pte_hold().
by a transaction performing a driver handled message sequence (an
scb with the MK_MESSAGE flag set).
SCBs that perform host managed messaging must always be
at the head of their per-target selection queue so that
the firmware knows to manually assert ATN if the current
negotiation agreement is packetized. In the past we
guaranteed this by queuing these SCBs separately in
the execution queue. This exposes the system to potential
command reordering in two cases:
1) Another SCB for the same ITL nexus is queued that does
not have the MK_MESSAGE flag set. This SCB will be
queued to the per-target list which can be serviced
before the MK_MESSAGE scb that preceded it.
2) If the target cannot accept all of the commands in the
per-target selection queue in one selection, the remainder
is queued to the tail of the selection queues so as to
effect round-robin scheduling. This could allow the
MK_MESSAGE scb to be sent to the target before the
requeued commands.
This commit changes the firmware policy to defer queuing
MK_MESSAGE SCBs into the selection queues until this can
be done without affecting order. This means that the
target's selection queue is either empty, or the last
SCB on the execution queue is also a MK_MESSAGE SCB.
During any wait, the firmware halts the download of new
SCBs so only a single "holding location" is required.
Luckily, MK_MESSAGE SCBs are rare and typically occur only
during CAM's bus probe where only one command is outstanding
at a time. However, during some recovery scenarios, the
reordering *could* occur.
aic79xx.c:
Update ahd_search_qinfifo() and helper routines to
search for pending MK_MESSAGE scbs and properly
restitch the execution queue if either the MK_MESSAGE
SCB is being aborted, or the MK_MESSAGE SCB can be
queued due to the execution queue draining due to
aborts.
Enable LQOBUSFREE status to assert an interrupt.
This should be redundant since a BUSFREE interrupt
should always occur along with an LQOBUSFREE event,
but on the Rev A, this doesn't seem to be guaranteed.
When a PPR request is rejected while a previously
existing packetized agreement is in place, assume
that the target has been reset without our knowledge
and revert to async/narrow transfers. This corrects
two issues: the stale ENATNO setting that was used
to send the PPR is cleared so the firmware is not
confused by a future packetized selection with
ATN asserted but no MK_MESSAGE flag in the SCB, and
it speeds up recovery by aborting any pending
packetized transactions that by definition are now
dead.
When re-queueing SCBs after a failed negotiation
attempt, ensure command ordering by freezing the
device queue first.
Traverse the list of pending SCBs rather than the
whole SCB array on the controller when pushing
MK_MESSAGE flag changes out to the controller.
The original code was optimized for the aic7xxx
controllers where there are fewer controller slots
than pending SCBs and the firmware picks SCB
slots. For the U320 controller, the hope is
that we have fewer pending SCBs than the 512
slots on the controller.
Enhance some diagnostics.
Factor out some common code.
aic79xx.h:
Add prototype for new ahd_done_with_status() that is
used to factor out some common code.
aic79xx.reg:
Add definitions for the pending MK_MESSAGE SCB.
aic79xx.seq:
Defer MK_MESSAGE SCB queuing to the execution queue
so as to preserve command ordering. Re-arrange some
of the selection processing code so the above change
had no performance impact on the common code path.
Close a few critical section holes.
When entering a non-packetized phase, manually enable
busfree interrupts, since the controller hardware
does not do this automatically.
aic79xx_inline.h:
Enhance logging for queued SCBs.
aic79xx_osm.c:
Add a new DDB ahd command, ahd_dump, which
invokes the ahd_dump_card_state() routine on the
unit specified with the ahd_sunit DDB command.
aic79xx_pci.c:
Turn on the BUSFREEREV bug for the Rev B. controller.
This is required to close the busfree during non-packetized
phase hole.
functions. Basically, the ip_next() function was used to get the PPTP and
Skinny headers when tcp_next() should have been used instead. Symptoms of
this included a segfault in natd when trying to process a PPTP or Skinny
packet.
Approved by: des
should be set to VM_PAGE_BITS_ALL before returning, to ensure that
neither vm_pager_get_pages nor vm_fault calls vm_page_zero_invalid
after dev_pager_getpages has returned.
Submitted by: tegge
into single-user mode (as seen on sparc64 and PPC). Problems were due
to a minor oversight in the changes committed in revision 1.25.
Submitted by: grehan
Tested by: gad & yongari
being defined, define and use a new MD macro, cpu_spinwait(). It only
expands to something on i386 and amd64, so the compiled code should be
identical.
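The macro is trivial; collapsed into a single illustration here (each
arch really defines it in its own header):

    #if defined(__i386__) || defined(__amd64__)
    #define cpu_spinwait()  ia32_pause()    /* PAUSE hint */
    #else
    #define cpu_spinwait()  /* nothing */
    #endif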
Name of the macro found by: jhb
Reviewed by: jhb
make it fully self-contained.
o ip_reass() now returns a new mbuf with the reassembled packet and ip->ip_len
including the IP header.
o Computation of the delayed checksum is moved into divert_packet().
Reviewed by: silby
- according to RFC2661 an offset size of 0 is allowed.
- when skipping offset padding do not forget to also skip
the 2 octets of the offset size field.
Reviewed by: archie
Approved by: pjd (mentor)
link[n].latency calculated from user supplied value.
This prevents repeated NGM_PPP_SET_CONFIG/NGM_PPP_GET_CONFIG
from failing because of link[n].conf.latency being out of range.
Reviewed by: archie
Approved by: pjd (mentor)
and setting MSR. This was most evident with the idle proc running
with interrupts disabled and causing a lockup. Switch over to the
i386 style which does things in the right order.
debug assisted by: gallatin, and the invaluable KTR option.
pipelock(), not via a mixture of mutexes and pipelock(). Additionally,
add a few KASSERTS, and change some statements that should have been
KASSERTS into KASSERTS.
As a result of these cleanups, some segments of code have become
significantly shorter and/or easier to read.
unconditionally, stop after the first one (system board) if no EISA hardware
is detected. This fixes a boot hang (e.g., on a Thinkpad) when ACPI is disabled.
Also, split the probe code into a separate function and do some style cleanup.
Note that the Adaptec 2842 VLB controller probe is broken by this change
and will fail to probe. It should be fixed separately.
in case of a CHECK CONDITION.
- Make this driver return SCSI status information.
- While here, factor out the clearing of the CAM status from every
element of the switch statement to only once before the switch.
This fixes burning CDs with recent cdrecord 2.01 alpha versions and
burners attached to asr(4) controllers but there could have been
other applications and da(4) etc. also affected.
Reviewed by: gibbs, scottl
MFC after: 2 weeks
UFS2 was here. It so happened that UFS2 did not need a separate
partition type. Keep the definition as a comment for documentation
purposes. If there is a benefit for UFS2 file systems to have a
separate partition type under GPT, then this definition should be
restored as that was the intention of the definition.
Currently one cannot load the mem.ko module without panicking if mem is
compiled into the kernel, and one cannot build a kernel w/o "device mem"
right now either. Thus it is too dangerous to install mem.ko right now,
because if one puts 'mem_load="YES"' in /etc/loader.conf they cannot
boot an "old" kernel (at the time when a kernel doesn't have to be built
with "device mem").
pic_eoi_source() into one call. This halves the number of spinlock operations
and indirect function calls in the normal case of handling a normal (ithread)
interrupt. Optimize the atpic and ioapic drivers to use inlines where
appropriate in supporting the intr_execute_handlers() change.
This knocks 900ns, or roughly 1350 cycles, off of the time spent servicing an
interrupt in the common case on my 1.5GHz P4 uniprocessor system. SMP systems
likely won't see as much of a gain due to the ioapic being more efficient than
the atpic. I'll investigate porting this to amd64 soon.
Reviewed by: jhb
skip blocks that are too big by a factor of two or greater. This
avoids some cases of extremely inefficient memory use that can occur
when large (e.g. 64k) blocks on the free list get used when allocating
a 4k chunk of 64-byte fragments. Because fragments have their own
free list, the 60k difference got lost forever every time.
system BIOS to disable legacy device emulation as per the "EHCI
Extended Capability: Pre-OS to OS Handoff Synchronisation" section
of the EHCI spec. BIOSes that implement legacy emulation using SMIs
are supposed to disable the emulation when this procedure is performed.
set gp->softc to NULL and return ENXIO when it is NULL, so GEOM
will not panic or hang, but will unload one device on every 'unload'.
This makes the 'unload' command usable, but it has to be executed
<number of devices> + 1 times.
- Made use of 'pp' variable.
so that they know whether the allocation is supposed to be able to sleep
or not.
* Allow uma_zone constructors and initialization functions to return either
success or error. Almost all of the ones in the tree currently return
success unconditionally, but mbuf is a notable exception: the packet
zone constructor wants to be able to fail if it cannot suballocate an
mbuf cluster, and the mbuf allocators want to be able to fail in general
in a MAC kernel if the MAC mbuf initializer fails. This fixes the
panics people are seeing when they run out of memory for mbuf clusters;
see the sketch after this list.
* Allow debug.nosleepwithlocks on WITNESS to be disabled, without changing
the default.
Both bmilekic and jeff have reviewed the changes made to make failable
zone allocations work.
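As an illustration of the failable constructor signature (names
modeled on the mbuf packet zone; details differ):

    static int
    mb_ctor_pack(void *mem, int size, void *arg, int how)
    {
            struct mbuf *m = mem;

            /* The constructor may now fail instead of panicking. */
            m->m_ext.ext_buf = uma_zalloc(zone_clust, how);
            if (m->m_ext.ext_buf == NULL)
                    return (ENOMEM);
            return (0);
    }

A caller of uma_zalloc(zone_pack, M_NOWAIT) then sees NULL on failure
instead of a panic.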