- Depend on opt_ddb.h, since npcb_dump() is ifdef'd DDB.
- Include ddb/ddb.h so we can call db_printf() and use DB_SHOW_COMMAND().
- Don't test results of malloc() under DIAGNOSTIC; let the memory allocator
take care of its own invariants.
MFC after: 1 month
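A minimal sketch of the DB_SHOW_COMMAND()/db_printf() pattern referred to
above; the command name and body are illustrative, not the actual npcb code:

#include "opt_ddb.h"

#include <sys/param.h>

#ifdef DDB
#include <ddb/ddb.h>

/* Register a "show npcbs" debugger command; compiled only when DDB is set. */
DB_SHOW_COMMAND(npcbs, db_show_npcbs)
{
        db_printf("npcb dump would go here\n");
}
#endif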
list head structure; this improves congruence to IPv4, and also allows
in6_pcbpurgeif0() to lock the pcbinfo. Modify in6_pcbpurgeif0() to lock
the pcbinfo before iterating the pcb list, to use queue(9)'s LIST_FOREACH()
for the iteration, and to lock individual inpcbs while manipulating
them.
MFC after: 3 months
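A rough sketch of that locking pattern, assuming era-appropriate inpcb
macro and field names (INP_INFO_RLOCK(), INP_LOCK(), the pcbinfo list head)
rather than quoting the actual in6_pcbpurgeif0():

#include <sys/param.h>
#include <sys/queue.h>
#include <netinet/in_pcb.h>

static void
purgeif_sketch(struct inpcbinfo *pcbinfo, struct ifnet *ifp)
{
        struct inpcb *inp;

        INP_INFO_RLOCK(pcbinfo);        /* lock the pcbinfo first */
        LIST_FOREACH(inp, pcbinfo->listhead, inp_list) {
                INP_LOCK(inp);          /* lock each inpcb while touching it */
                /* ... drop multicast memberships that reference ifp ... */
                INP_UNLOCK(inp);
        }
        INP_INFO_RUNLOCK(pcbinfo);
}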
date: 2006/04/12 04:22:50; author: alc; state: Exp; lines: +14 -41
Retire pmap_track_modified(). We no longer need it because we do not
create managed mappings within the clean submap. To prevent regressions,
add assertions blocking the creation of managed mappings within the clean
submap.
Reviewed by: tegge
number state, rather than re-using pcbinfo. This introduces some
additional mutex operations during isn query, but avoids hitting the TCP
pcbinfo lock out of yet another frequently firing TCP timer.
MFC after: 3 months
holding the inpcb lock is sufficient to prevent races in reading
the address and port, as both the inpcb lock and pcbinfo lock are
required to change the address/port.
Improve consistency of spelling in assertions about inp != NULL.
MFC after: 3 months
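The invariant can be pictured with the usual inpcb assertion macros; this
is an illustrative fragment, not an excerpt from the change:

/* Reading the bound address/port: the inpcb lock alone is enough. */
INP_LOCK_ASSERT(inp);
lport = inp->inp_lport;

/*
 * Changing the address/port: both the pcbinfo and inpcb locks are
 * required, so a reader holding only the inpcb lock cannot see a
 * half-updated binding.
 */
INP_INFO_WLOCK_ASSERT(pcbinfo);
INP_LOCK_ASSERT(inp);
inp->inp_lport = new_lport;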
A slight difference of this chip from its previous siblings is that
it needs a gentle "wake up" on every (full) DMA buffer completion to
avoid a stalled interrupt handler.
Thanks to George Hartzell for permission to do remote debugging.
Prime MFC candidate for 6.1-RELEASE. Please reply to this commit if
there are any objections (so I won't bug re@), since the changes
are small and specific only to the VT8251.
PR: i386/95949
Tested by: [1] George Hartzell
myself (remotely)
MFC after: 3 days
[1] http://lists.freebsd.org/pipermail/freebsd-multimedia/2006-April/004003.html
Pull in some target mode changes from a private branch.
Pull in some more RELENG_4 compilation changes.
A lot of lines changed, but not much content change yet.
Make this compile, assuming that you have linux installed in a
sensible place. tag_list is disabled by default, since we don't
distribute linux, but it is desirable to allow the boot loader to boot
Linux or FreeBSD (mostly for testing).
enables multilabel, or any option for that matter, most likely they have
a reason. This will allow users to see that multilabel is enabled via an
issued "mount" command and remove an annoying warning - printed only when
a MAC kernel is not installed - on boot up.
Discussed with: green, brueffer, Samy Al Bahra.
Probably ran past: csjp (though I can't remember).
xmodem download. Then download the image you want in the flash.
This will burn the image into the flash. You must then reset the
unit and the new flash image will be used for booting...
xmodem download. Then download the image you want in the eeprom.
This will burn the image into the eeprom. You must then reset the
unit and the new eeprom image will be used for booting...
Major differences:
* since there is no direct map region, there is no custom uma memory
allocator to modify to include its pages in the dumps.
* Various data entries are reduced from 64 bit to 32 bit to match the
native size.
dump_add_page() and dump_drop_page() are still present in case one wants to
arrange for arbitrary pages to be dumped. This is of marginal use though
because libkvm+kgdb cannot address physical memory that isn't mapped into
kvm.
via the debug.minidump sysctl and tunable.
Traditional dumps store all physical memory. This was once a good thing
when machines had a maximum of 64M of ram and 1GB of kvm. These days,
machines often have many gigabytes of ram and a smaller amount of kvm.
libkvm+kgdb don't have a way to access physical ram that is not mapped
into kvm at the time of the crash dump, so the extra ram being dumped
is mostly wasted.
Minidumps invert the process. Instead of dumping physical memory
in order to guarantee that all of kvm's backing is dumped, minidumps
instead dump only memory that is actively mapped into kvm.
amd64 has a direct map region that things like UMA use. Obviously we
cannot dump all of the direct map region because that is effectively
an old style all-physical-memory dump. Instead, introduce a bitmap
and two helper routines (dump_add_page(pa) and dump_drop_page(pa)) that
allow certain critical direct map pages to be included in the dump.
uma_machdep.c's allocator is the intended consumer.
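A hedged sketch of how dump_add_page()/dump_drop_page() can sit on top of
that bitmap; the bitmap name and word size are illustrative, only the two
entry points come from the text above:

#include <sys/types.h>
#include <machine/atomic.h>
#include <machine/param.h>              /* PAGE_SHIFT */

static u_long *vm_page_dump;            /* one bit per physical page */

void
dump_add_page(vm_paddr_t pa)
{
        int idx, bit;

        pa >>= PAGE_SHIFT;              /* physical address -> page number */
        idx = pa >> 6;                  /* 64 bits per bitmap word */
        bit = pa & 63;
        atomic_set_long(&vm_page_dump[idx], 1ul << bit);
}

void
dump_drop_page(vm_paddr_t pa)
{
        int idx, bit;

        pa >>= PAGE_SHIFT;
        idx = pa >> 6;
        bit = pa & 63;
        atomic_clear_long(&vm_page_dump[idx], 1ul << bit);
}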
Dumps are in a custom format. At the very beginning of the file is a header,
then a copy of the message buffer, then the bitmap of pages present in
the dump, then the final level of the kvm page table trees (2MB mappings
are expanded into 4K page mappings), then the sparse physical pages
according to the bitmap. libkvm can now conveniently access the kvm
page table entries.
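As a picture only (the real header lives in the amd64 headers and its field
names may differ), the layout described above amounts to:

#include <sys/types.h>

/* Hypothetical summary of the minidump file layout; not the real struct. */
struct minidump_hdr_sketch {
        char            magic[24];      /* identifies a minidump */
        uint32_t        version;
        uint32_t        msgbufsize;     /* bytes of message buffer that follow */
        uint32_t        bitmapsize;     /* bytes of page-present bitmap */
        uint32_t        ptesize;        /* bytes of leaf (4K) page table entries */
        uint64_t        kernbase;       /* base of kvm, for interpreting the PTEs */
};
/*
 * File order: header, msgbuf, bitmap, leaf PTEs, then the sparse physical
 * pages in the order their bits appear in the bitmap.
 */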
Booting my test 8GB machine, forcing it into ddb and forcing a dump
leads to a 48MB minidump. While this is a best case, I expect minidumps
to be in the 100MB-500MB range and, of course, never larger than physical
memory.
minidumps are on by default. They would only need to be turned off in order
to debug corrupt kernel page table management, since that corruption would
mess up minidumps as well.
Both minidumps and regular dumps are supported on the same machine.
o Use a directory layout that is more akin to the i386 boot layout.
o Create a libat91 for library routines that are used by one or more
of the boot loaders.
o Create bootiic for booting from an iic part.
o Create bootspi for booting from an spi part.
o Optimize the size of many of these routines (especially emac.c). Except
for the emac.c optimizations, all these have been tested.
o Eliminate the inc directory; libat91 supersedes it.
o Move linker.cfg up a layer to allow it to be shared.
state structure. This field is only for CCBs that are associated with
actions that are occurring on the HBA (i.e., XPT_CONT_IO actions).
This way we also don't get confused when the upstream listener stalls and we
try to look at a CCB which has already been freed (by CAM).
to reduce the pv_entry_count counter. This was found by Tor Egge. In the
same email, Tor also pointed out the pv_stats problem in the previous
commit, but I'd forgotten about it until I went looking for this email
about this allocation problem.
locked. In general the adaptive spinning is similar to the same code
for mutexes with some extra trickiness in rw_wunlock_hard(). Specifically,
even though both wait bits might be set and we might have a turnstile with
at least one waiting thread, there might not be any threads blocked on the
queue we are not waking up (they might all be spinning), and we should
only preserve the waiting flag for the queue we aren't waking up if there
are in fact threads blocked on that queue. Secondly, there might not be
any threads blocked on the queue we have chosen to wake threads from
(there might only be threads blocked on the other queue and the threads
for this queue are all spinning), in which case we disown the turnstile
instead of doing a broadcast and unpend.
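A simplified paraphrase of that decision in terms of the turnstile(9) API;
this is a sketch of the logic described above, not the actual
rw_wunlock_hard() code:

uintptr_t v;
struct turnstile *ts;
int queue;

v = RW_UNLOCKED;
ts = turnstile_lookup(&rw->lock_object);
queue = (rw->rw_lock & RW_LOCK_WRITE_WAITERS) ?
    TS_EXCLUSIVE_QUEUE : TS_SHARED_QUEUE;

/*
 * Keep the other queue's waiter bit only if threads are actually blocked
 * there; they may all be spinning instead.
 */
if (queue == TS_EXCLUSIVE_QUEUE && !turnstile_empty(ts, TS_SHARED_QUEUE))
        v |= RW_LOCK_READ_WAITERS;
atomic_store_rel_ptr(&rw->rw_lock, v);

/*
 * The chosen queue may itself be empty (its waiters may all be spinning),
 * in which case disown the turnstile instead of broadcasting to nobody.
 */
if (turnstile_empty(ts, queue))
        turnstile_disown(ts);
else {
        turnstile_broadcast(ts, queue);
        turnstile_unpend(ts, TS_EXCLUSIVE_LOCK);
}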
stored in metadata instead of an offset in single disk.
After a reboot/crash the synchronization process started from the wrong
offset, skipping (not synchronizing) part of the component, which can lead
to data corruption (when the synchronization process was interrupted during
initial synchronization) or other strange situations like 'graid3 status'
showing a value of more than 100%.
Reported, reviewed and tested by: ru
Reported by: Dmitry Morozovsky <marck@rinet.ru>
MFC after: 1 day
as pcf_ebus and pcf_isa, they should probably be fixed back to pcf),
and bti2c doesn't exist; bktr has smbus or iicbb as children.
Brought to you by: http://people.FreeBSD.org/~jmg/driver.pdf
use it in places that only care about the write owner instead of
rw_owner() as a baby step towards limited read-lock owner.
- Tidy the code that sets the WAITER flag bits to not duplicate a test
around the atomic operation and the KTR trace in both of the lock
functions.
above what's used for fast interrupts, only interrupts with the level of
the interrupt which led to calling intr_fast() (which is used with both
fast and ithread interrupts) are blocked while in that function. Thus
intr_fast() can be preempted by a fast interrupt (which are of a higher
level than ithread interrupts) while servicing an ithread interrupt. This
can lead to a stale pointer to the head of the active interrupt requests
list when back in the ithread interrupt invocation of intr_fast(), in turn
resulting in corruption of the interrupt request lists and consequently
in a panic. Solve this by turning off interrupts in intr_fast() before
reading the pointer to the head of the active list rather than after. [1]
- Add a KASSERT in intr_fast() which asserts that ir_func is non-zero before
calling it. [1]
- Increment interrupt stats after calling the handlers rather than before.
This reduces the delay until direct and fast handlers are serviced, in my
testing by 30% on average for the direct tick interrupt handler, in turn
resulting in less clock drift.
PR: 94778 [1]
Submitted by: Andrew Belashov [1]
MFC after: 2 weeks
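A simplified sketch of the ordering fix in [1]; the request structure and
list head below are illustrative, the point is that interrupts are disabled
before the head pointer is read:

#include <sys/param.h>
#include <sys/systm.h>
#include <machine/cpufunc.h>

struct intr_request_sketch {
        struct intr_request_sketch *ir_next;
        void            (*ir_func)(void *);
        void            *ir_arg;
};

static struct intr_request_sketch *active_head;

static void
intr_fast_sketch(void)
{
        struct intr_request_sketch *ir;
        register_t s;

        s = intr_disable();             /* disable first ...            */
        ir = active_head;               /* ... then read the list head  */
        for (; ir != NULL; ir = ir->ir_next) {
                KASSERT(ir->ir_func != NULL,
                    ("%s: NULL ir_func", __func__));        /* [1] */
                ir->ir_func(ir->ir_arg);
        }
        intr_restore(s);
        /* Interrupt stats are bumped only after the handlers have run. */
}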
with a given module_t. I use this in the MOD_LOAD event handlers of
some test kernel modules to ask the kernel linker to look up the linker
sets in my test modules. (I use linker sets to generate the list of
possible events that I then signal to execute via a sysctl. On non-amd64,
ld(8) would resolve the entire linker set, but on amd64 I have to ask the
kernel linker to do it for me, and having the kernel linker do it works on
all archs.)
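A hedged sketch of the intended use from a MOD_LOAD handler, assuming the
accessor in question is module_file() and pairing it with
linker_file_lookup_set(); the set name and event type are invented for the
example:

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <sys/linker.h>

struct test_event;                      /* hypothetical per-event descriptor */

static struct test_event **ev_start, **ev_stop;
static int ev_count;

static int
testmod_modevent(module_t mod, int type, void *arg __unused)
{

        switch (type) {
        case MOD_LOAD:
                /* Have the kernel linker resolve this module's linker set. */
                return (linker_file_lookup_set(module_file(mod),
                    "test_events", &ev_start, &ev_stop, &ev_count));
        default:
                return (EOPNOTSUPP);
        }
}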
if the specified priority is zero. This avoids a race where the calling
thread could read a snapshot of its current priority, then a different
thread could change the first thread's priority, then the original thread
would call sched_prio() inside msleep() undoing the change made by the
second thread. I used a priority of zero as no thread that calls msleep()
or tsleep() should be specifying a priority of zero anyway.
The various places that passed 'curthread->td_priority' or some variant
as the priority now pass 0.
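A small sketch of the convention this establishes (the channel, mutex, and
wmesg below are illustrative): passing 0 as the priority argument tells
msleep() to leave the calling thread's priority alone.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>

static struct mtx foo_mtx;
static int foo_ready;

static void
foo_wait(void)
{

        mtx_lock(&foo_mtx);
        while (!foo_ready)
                /* Priority 0: do not change this thread's priority. */
                msleep(&foo_ready, &foo_mtx, 0, "foowt", 0);
        mtx_unlock(&foo_mtx);
}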
have not been passed to the h/w yet. This remedies watchdog timeouts
caused by buffered multicast frames in hostap mode.
While here eliminate an extraneous check; ieee80211_beacon_update sets
the tim bit based on ncabq != 0 so there's no reason to check it too.
Noticed by: Christophe Prevotaux