something goes wrong while running in "fast" mode, we free all bios and
fall back to "economic" mode. Freeing a bio doesn't decrease its parent's
bio_children, so bio_inbed could never reach bio_children and the request
was never finished.
Decrease bio_children manually when destroying bios.
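A rough sketch of the accounting fix (the helper is illustrative, not the
committed GEOM code):

    #include <sys/param.h>
    #include <sys/bio.h>
    #include <geom/geom.h>

    /*
     * A clone destroyed on the fallback path will never report completion,
     * so undo the bio_children bump done by g_clone_bio() by hand; otherwise
     * the parent's bio_inbed can never catch up with bio_children and the
     * request never finishes.
     */
    static void
    example_destroy_clone(struct bio *pbp, struct bio *cbp)
    {

            pbp->bio_children--;
            g_destroy_bio(cbp);
    }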
Reported by: Sam Lawrance <boris@brooknet.com.au>, simon
message if they are incorrect. Also, remove the hack of allowing the
initial irq setting to not be in _PRS. As before, the old behavior can be
regained by defining ACPI_OLD_PCI_LINK.
structures, allowing in6_pcbnotify() to lock the pcbinfo and each
inpcb that it notifies of ICMPv6 events. This prevents inpcb
assertions from firing when IPv6 generates and delivers event
notifications for inpcbs.
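A simplified sketch of the locking order (the match logic is omitted, and the
callback signature and listhead field name are approximations of that era's
in_pcb.h, not the literal in6_pcbnotify() diff):

    #include <sys/param.h>
    #include <netinet/in.h>
    #include <netinet/in_pcb.h>

    static void
    example_notify_all(struct inpcbinfo *pcbinfo, int error,
        void (*notify)(struct inpcb *, int))
    {
            struct inpcb *inp;

            INP_INFO_WLOCK(pcbinfo);
            LIST_FOREACH(inp, pcbinfo->listhead, inp_list) {
                    INP_LOCK(inp);          /* hold the inpcb across the event */
                    (*notify)(inp, error);
                    INP_UNLOCK(inp);
            }
            INP_INFO_WUNLOCK(pcbinfo);
    }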
Reported by: kuriyama
Tested by: kuriyama
a result of scheduling an ithread, cut a KTR_INTR trace record so
that it's clear in tracing interrupt activity where and when the
entropy harvesting code is invoked.
callout_reset rather than calling callout_stop. This results in a few
lines of code duplication, but it provides a significant performance
improvement because it avoids recursing on callout_lock.
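A minimal sketch of the pattern (generic names, not the affected timer code):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/callout.h>

    static struct callout example_co;

    static void
    example_rearm(void (*fn)(void *), void *arg)
    {

            /*
             * callout_reset() already cancels a pending callout before
             * re-arming it, so a preceding callout_stop() only buys a
             * second, recursive trip through callout_lock.
             */
            callout_reset(&example_co, hz, fn, arg);
    }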
Requested by: rwatson
or multicast packet, we don't need to acquire the inpcb mutex
unless we are actually using inpcb fields other than the bound port
and address. Since we hold the pcbinfo lock already, these can't
change. Defer acquiring the inpcb mutex until we have a high
chance of a match. This avoids about 120 mutex operations per UDP
broadcast packet received on one of my work systems.
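A hedged sketch of the deferred locking (the list head name follows the old
struct inpcbinfo, example_udp_append() is a made-up stand-in for the delivery
step, and connected-socket checks are omitted):

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <netinet/in.h>
    #include <netinet/in_systm.h>
    #include <netinet/in_pcb.h>
    #include <netinet/ip.h>
    #include <netinet/udp.h>
    #include <netinet/udp_var.h>

    static void example_udp_append(struct inpcb *, struct mbuf *); /* stand-in */

    static void
    example_deliver_broadcast(struct ip *ip, struct udphdr *uh, struct mbuf *m)
    {
            struct inpcb *inp;

            INP_INFO_RLOCK(&udbinfo);
            LIST_FOREACH(inp, udbinfo.listhead, inp_list) {
                    /*
                     * The pcbinfo lock keeps the bound port and address
                     * from changing, so filter on them before paying for
                     * the inpcb mutex.
                     */
                    if (inp->inp_lport != uh->uh_dport)
                            continue;
                    if (inp->inp_laddr.s_addr != INADDR_ANY &&
                        inp->inp_laddr.s_addr != ip->ip_dst.s_addr)
                            continue;
                    INP_LOCK(inp);          /* probable match only */
                    example_udp_append(inp, m);
                    INP_UNLOCK(inp);
            }
            INP_INFO_RUNLOCK(&udbinfo);
    }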
Reviewed by: sam
want a splash screen.
There seems to be some confusion in the syscons code as to the meaning of
the SC_KERNEL_CONSOLE flag. Its absence is sometimes interpreted to mean
"I am not the system console", and sometimes to mean "I am not the only
VGA console" (see the font loading code for an example of the latter).
Someone with better syscons fu than myself should take a closer look.
(that does not compile with !gcc). Moreover, we get the benefit for all archs
that have a hand-optimized in_cksum_skip().
Submitted by: yongari
Tested by: me (i386, extensively), pf4freebsd ML (various)
better check for 'adjacent'. The old code assumed that if two resources
were adjacent in the linked list, they were also adjacent range-wise.
This is not true when a resource manager has to manage disparate regions.
For example, the current interrupt code on i386/amd64 will instruct
irq_rman to manage two disjoint regions: 0-1 and 3-15 for the non-APIC
case. If IRQs 1 and 3 were allocated and then released, the old code
would coalesce across the 1-to-3 boundary because the resources were
adjacent in the linked list, thus adding IRQ 2 to the range of resources
that irq_rman managed as a side effect. The fix adds extra checks so that
adjacent unallocated resources are only merged with the resource being
freed if the start and end values of the resources also match up. The
patch also consolidates the checks for adjacent resources being allocated.
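A small stand-alone illustration of the stricter test (field names are
simplified; the real check lives in subr_rman.c):

    #include <stdbool.h>

    struct range {
            unsigned long   start;
            unsigned long   end;
            bool            allocated;
    };

    /*
     * Being neighbours in the list is no longer enough: the free neighbour
     * must also abut the freed range exactly before the two are merged.
     */
    static bool
    can_merge_before(const struct range *prev, const struct range *freed)
    {
            return (!prev->allocated && prev->end + 1 == freed->start);
    }

    static bool
    can_merge_after(const struct range *freed, const struct range *next)
    {
            return (!next->allocated && freed->end + 1 == next->start);
    }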
following behavior:
* Link devices return invalid status (_STA) values. The results are very
unreliable -- sometimes a link is even reported as not present. Just
ignore the status and pick
the best configuration from _PRS.
* Link devices return invalid current settings (_CRS). Even after setting
the link value, many systems still return a different setting for _CRS.
When setting an IRQ, don't bother to check _CRS to see if we succeeded.
Note that we still check _CRS before routing and this should be addressed
as well.
Since this is a sensitive area, leave the old behavior accessible via
uncommenting the define for ACPI_OLD_PCI_LINK at the top of the file. Once
this has been thoroughly tested, this option and the code it covers will
be removed.
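The fallback is just a compile-time knob near the top of the file;
re-enabling the old behavior amounts to uncommenting one line:

    /* #define ACPI_OLD_PCI_LINK */    /* uncomment to get the old behavior back */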
Thanks to Len Brown at Intel for informing us of these issues as he worked
around them in Linux.
location (for the wake code). It should not be needed since we don't
map other pages at the same location and if there was an old mapping, it
would be restored by a fault. The old code had serious problems, namely
that it was restoring the new page it had just removed (not opage) and
it could only guess at the right protection (since there's no
pmap_extract_protect function). Thanks to Alan Cox for explaining much
of this to me.
Also, remove a commented-out initializecpu() call since it is not needed.
Restoring the cpu context is better than attempting to init from scratch.
Reviewed by: alc (earlier version)
have a clear idea of the boot2 BSS size and leaves a portion of it not
zeroed out. btxcsu.s is in a much better position for this job.
Obtained from: DragonflyBSD (with minor adjustments)
Since HME doesn't compensate the checksum for UDP datagrams whose
computed checksum can come out as 0x0, UDP transmit checksum offload is
disabled by default. The UDP transmit checksum offload can be
reactivated by setting the special link option link0 with ifconfig(8).
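A hedged sketch of the gating (stock ifnet flag and capability names; not
the actual hme(4) change):

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/if_var.h>

    static void
    example_update_hwassist(struct ifnet *ifp)
    {

            /* Only offer UDP transmit checksums when link0 is set. */
            if (ifp->if_flags & IFF_LINK0)
                    ifp->if_hwassist |= CSUM_UDP;
            else
                    ifp->if_hwassist &= ~CSUM_UDP;
    }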
Approved by: jake (mentor)
Reviewed by: tmm
Tested by: Herve Boulouis <amon@sockar.homeip.net>
before grabbing BPF locks to see if there are any entries in order to
avoid the cost of locking if there aren't any. Avoids a mutex lock/
unlock for each packet received if there are no BPF listeners.
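A hedged sketch of the pre-check (the descriptor-list and lock names follow
the bpf headers of the time; treat this as illustrative rather than the
literal bpf_mtap() diff):

    #include <sys/param.h>
    #include <net/bpf.h>
    #include <net/bpfdesc.h>

    static void
    example_bpf_tap(struct bpf_if *bp)
    {
            struct bpf_d *d;

            /*
             * An unlocked emptiness test is fine as a hint: if nobody is
             * listening, skip the lock/unlock for this packet entirely.
             */
            if (LIST_EMPTY(&bp->bif_dlist))
                    return;

            BPFIF_LOCK(bp);
            LIST_FOREACH(d, &bp->bif_dlist, bd_next) {
                    /* hand the packet to each listener here */
            }
            BPFIF_UNLOCK(bp);
    }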
consumer and 'bio_pflags', which can be used by the provider.
- Remove BIO_FLAG1 and BIO_FLAG2 flags. From now on new fields should be
used for internal flags.
- Update g_bio(9) manual page.
- Update some comments.
- Update GEOM_MIRROR, which was the only one using BIO_FLAGs.
Idea from: phk
Reviewed by: phk
spin-wait code to use the same spin mutex (smp_tlb_mtx) as the TLB IPI
and spin-wait code snippets, so that you can't get into the situation of
one CPU doing a TLB shootdown to another CPU that is doing a lazy pmap
shootdown, with each waiting on the other. With this change, only
one of the CPUs would do an IPI and spin-wait at a time.
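A purely illustrative sketch of the serialization (only smp_tlb_mtx comes
from the real code; the helper is hypothetical):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    extern struct mtx smp_tlb_mtx;                  /* shared by both paths now */

    static void example_send_ipi_and_wait(void);    /* hypothetical helper */

    static void
    example_shootdown(void)
    {

            /*
             * Both the TLB-shootdown and the lazy-pmap paths take the same
             * spin mutex before sending their IPI and spinning, so two CPUs
             * can never end up spinning on each other.
             */
            mtx_lock_spin(&smp_tlb_mtx);
            example_send_ipi_and_wait();
            mtx_unlock_spin(&smp_tlb_mtx);
    }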
the immediate awakening of proc0 (the scheduler kproc, which controls
swapping processes in and out). The scheduler process periodically awakens
already, so this will not result in processes not being swapped in; there
will just be more latency between a thread being made runnable and the
scheduler waking up to swap the affected process back in.
macros and pass the value to the associated _mtx_*() functions to avoid
more curthread dereferences in the function implementations. This provided
a very modest perf improvement in some benchmarks.
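A simplified sketch of the shape of the change (the real macros also carry
file, line and option arguments; _example_mtx_lock() stands in for the
_mtx_*() functions):

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    static void _example_mtx_lock(struct mtx *m, struct thread *td);

    #define example_mtx_lock(m) do {                                    \
            struct thread *_td = curthread; /* dereference once here */ \
            _example_mtx_lock((m), _td);    /* ...not again inside */   \
    } while (0)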
Suggested by: rwatson
Tested by: scottl
text/data are covered on APs. This enables the kernel to boot on
a 4-way Intel Itanium-2 platform. This has a secondary effect of
keeping the TRs identical on BP and the APs.
Reviewed by: marcel@
a sleep() call waking up in namei(), a later assertion triggers that
Giant is not held. By asserting Giant at the start of namei(), we can
know that if that assertion triggers, Giant is lost during the call to
namei(), and not before.
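A minimal sketch of where the assertion goes (the body of namei() is elided
behind a made-up helper; GIANT_REQUIRED only checks on INVARIANTS kernels):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/namei.h>

    static int example_lookup(struct nameidata *);   /* stands in for the body */

    int
    namei(struct nameidata *ndp)
    {

            GIANT_REQUIRED;         /* if this fires, a caller lost Giant */

            return (example_lookup(ndp));
    }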
lock assertions even if IPv6 is compiled into the kernel. Previously,
inclusion of IPv6 and locking assertions would result in a rapid
assertion failure as IPv6 was not properly locking inpcbs.
- In ntoskrnl_var.h, I had defined compat macros for
ntoskrnl_acquire_spinlock() and ntoskrnl_release_spinlock() but
never used them. This is fortunate since they were stale. Fix them
to work properly. (In Windows/x86 KeAcquireSpinLock() is a macro that
calls KefAcquireSpinLock(), which lives in HAL.dll. To imitate this,
ntoskrnl_acquire_spinlock() is just a macro that calls hal_lock(),
which lives in subr_hal.o.)
- Add macros for ntoskrnl_raise_irql() and ntoskrnl_lower_irql() that
call hal_raise_irql() and hal_lower_irql().
- Use these macros in kern_ndis.c, subr_ndis.c and subr_ntoskrnl.c.
- Along the way, I realised subr_ndis.c:ndis_lock() was not calling
hal_lock() correctly (it was using the FASTCALL2() wrapper when
in reality this routine is FASTCALL1()). Using the
ntoskrnl_acquire_spinlock() macro fixes this. Not sure if this actually
caused any bugs since hal_lock() would have just ignored what
was in %edx, but it was still bogus.
This hides many of the uses of the FASTCALLx() macros, which makes the
code a little cleaner. Should not have any effect on generated object
code, other than the one fix in ndis_lock().
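A hedged sketch of the compat macro described above (the exact expansion in
ntoskrnl_var.h is not reproduced here; FASTCALL1() is the existing
register-passing wrapper, and the raise/lower IRQL macros follow the same
shape):

    /*
     * KeAcquireSpinLock() analogue: a plain macro that trampolines into
     * hal_lock() in subr_hal.o, mirroring how Windows/x86 forwards the
     * call to KefAcquireSpinLock() in HAL.dll.
     */
    #define ntoskrnl_acquire_spinlock(lock)                             \
            FASTCALL1(hal_lock, (lock))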