xmodem download. Then download the image you want into the flash.
This will burn the image into the flash. You must then reset the
unit, and the new flash image will be used for booting...
xmodem download. Then download the image you want into the eeprom.
This will burn the image into the eeprom. You must then reset the
unit, and the new eeprom image will be used for booting...
Major differences:
* Since there is no direct map region, there is no custom UMA memory
allocator to modify to include its pages in the dumps.
* Various data entries are reduced from 64 bit to 32 bit to match the
native size.
dump_add_page() and dump_drop_page() are still present in case one wants to
arrange for arbitrary pages to be dumped. This is of marginal use, though,
because libkvm+kgdb cannot address physical memory that isn't mapped into
kvm.
As on amd64, minidumps are enabled by default and can be turned off
via the debug.minidump sysctl and tunable.
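This is the standard FreeBSD sysctl-plus-tunable pattern. A minimal
sketch, assuming the variable is called do_minidump (the actual name
in the commit may differ):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int do_minidump = 1;        /* minidumps on by default */
    TUNABLE_INT("debug.minidump", &do_minidump);
    SYSCTL_INT(_debug, OID_AUTO, minidump, CTLFLAG_RW, &do_minidump, 0,
        "Enable mini crash dumps");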
Traditional dumps store all physical memory. This was once a good thing
when machines had a maximum of 64M of ram and 1GB of kvm. These days,
machines often have many gigabytes of ram and a smaller amount of kvm.
libkvm+kgdb don't have a way to access physical ram that is not mapped
into kvm at the time of the crash dump, so the extra ram being dumped
is mostly wasted.
Minidumps invert the process. Instead of dumping physical memory
in order to guarantee that all of kvm's backing is dumped, minidumps
dump only memory that is actively mapped into kvm.
amd64 has a direct map region that things like UMA use. Obviously we
cannot dump all of the direct map region because that is effectively
an old style all-physical-memory dump. Instead, introduce a bitmap
and two helper routines (dump_add_page(pa) and dump_drop_page(pa)) that
allow certain critical direct map pages to be included in the dump.
uma_machdep.c's allocator is the intended consumer.
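A sketch of what these helpers can look like, assuming vm_page_dump
is a bitmap of 32-bit words with one bit per physical page (the
function names follow the text above; the word size is illustrative):

    #include <sys/param.h>
    #include <machine/atomic.h>

    extern uint32_t *vm_page_dump;     /* one bit per physical page */

    void
    dump_add_page(vm_paddr_t pa)
    {
            int idx, bit;

            pa >>= PAGE_SHIFT;         /* physical address -> page frame */
            idx = pa >> 5;             /* 32 bits per bitmap word */
            bit = pa & 31;
            atomic_set_32(&vm_page_dump[idx], 1ul << bit);
    }

    void
    dump_drop_page(vm_paddr_t pa)
    {
            int idx, bit;

            pa >>= PAGE_SHIFT;
            idx = pa >> 5;
            bit = pa & 31;
            atomic_clear_32(&vm_page_dump[idx], 1ul << bit);
    }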
Dumps are a custom format. At the very beginning of the file is a header,
then a copy of the message buffer, then the bitmap of pages present in
the dump, then the final level of the kvm page table trees (2MB mappings
are expanded into 4K page mappings), then the sparse physical pages
according to the bitmap. libkvm can now conveniently access the kvm
page table entries.
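A sketch of that layout as a header struct; the field names here are
illustrative, not necessarily those of the committed header:

    #include <sys/types.h>

    struct minidumphdr {
            char     magic[24];     /* identifies a minidump + arch */
            uint32_t version;
            uint32_t msgbufsize;    /* length of copied message buffer */
            uint32_t bitmapsize;    /* size of page-present bitmap */
            uint32_t ptesize;       /* size of dumped final-level PTEs */
            uint64_t kernbase;      /* start of kvm */
            uint64_t dmapbase;      /* bounds of the direct map region */
            uint64_t dmapend;
    };
    /*
     * File layout: header, message buffer, bitmap, final-level page
     * table pages, then sparse physical pages in bitmap order.
     */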
Booting my test 8GB machine, forcing it into ddb and forcing a dump
leads to a 48MB minidump. While this is a best case, I expect minidumps
to be in the 100MB-500MB range; they can never be larger than physical
memory.
Minidumps are on by default. It would only be necessary to turn them off
when debugging corrupt kernel page table management, since that corruption
would mess up the minidump as well.
Both minidumps and regular dumps are supported on the same machine.
o Use a directory layout that is more akin to the i386 boot layout.
o Create a libat91 for library routines that are used by one or more
of the boot loaders.
o Create bootiic for booting from an iic part.
o Create bootspi for booting from an spi part.
o Optimize the size of many of these routines (especially emac.c). Except
for the emac.c optimizations, all these have been tested.
o Eliminate the inc directory; libat91 supersedes it.
o Move linker.cfg up a layer to allow it to be shared.
state structure. This field is only for CCBs that are associated with
actions that are occurring on the HBA (i.e., XPT_CONT_IO actions).
This way we also don't get confused when the upstream listener stalls
and we try to look at a CCB which has already been freed (by CAM).
to reduce the pv_entry_count counter. This was found by Tor Egge. In the
same email, Tor also pointed out the pv_stats problem in the previous
commit, but I'd forgotten about it until I went looking for the email
about this allocation problem.
locked. In general the adaptive spinning is similar to the same code
for mutexes with some extra trickiness in rw_wunlock_hard(). Specifically,
even though both wait bits might be set and we might have a turnstile with
at least one waiting thread, there might not be any threads blocked on the
queue we are not waking up (they might all be spinning), and we should
only preserve the waiting flag for the queue we aren't waking up if there
are in fact threads blocked on that queue. Secondly, there might not be
any threads blocked on the queue we have chosen to waken threads from
(there might only be threads blocked on the other queue and the threads
for this queue are all spinning) in which case we disown the turnstile
instead of doing a broadcast and unpend.
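A condensed sketch of that wakeup decision; other_queue() and
waiters_flag() are illustrative helpers, not the committed code:

    static void
    wakeup_sketch(struct rwlock *rw, struct turnstile *ts, int queue)
    {
            uintptr_t v = RW_UNLOCKED;

            /*
             * Preserve the waiter flag for the queue we are not waking
             * up only if threads are actually blocked there; adaptive
             * spinners set the flag but never block on the turnstile.
             */
            if (!turnstile_empty(ts, other_queue(queue)))
                    v |= waiters_flag(other_queue(queue));
            atomic_store_rel_ptr(&rw->rw_lock, v);

            if (!turnstile_empty(ts, queue)) {
                    turnstile_broadcast(ts, queue);
                    turnstile_unpend(ts, TS_EXCLUSIVE_LOCK);
            } else {
                    /*
                     * Everyone on the chosen queue is spinning rather
                     * than blocked: give the turnstile back and let
                     * the spinners race for the lock.
                     */
                    turnstile_disown(ts);
            }
    }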
stored in metadata instead of an offset in single disk.
After a reboot or crash, the synchronization process started from a wrong
offset, skipping (not synchronizing) part of the component. This can lead
to data corruption (when the synchronization process was interrupted
during initial synchronization) or other strange situations like
'graid3 status' showing a value of more than 100%.
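A sketch of the idea, with assumed structure and function names:
checkpoint synchronization progress into the component's metadata so a
restart resumes from the recorded offset rather than deriving it from
a single disk:

    static void
    sync_checkpoint(struct g_raid3_disk *disk, off_t done)
    {
            /*
             * Re-synchronizing a region twice is safe; skipping one is
             * exactly the corruption described above, so record only
             * offsets that are known to be fully synchronized.
             */
            disk->d_sync.ds_offset_done = done;
            g_raid3_update_metadata(disk);  /* persist progress on disk */
    }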
Reported, reviewed and tested by: ru
Reported by: Dmitry Morozovsky <marck@rinet.ru>
MFC after: 1 day
as pcf_ebus and pcf_isa, they should probably be fixed back to pcf),
and bti2c doesn't exist, bktr has smbus or iicbb as children.
Brought to you by: http://people.FreeBSD.org/~jmg/driver.pdf
use it in places that only care about the write owner instead of
rw_owner() as a baby step towards limited read-lock owner (see the
sketch after this list).
- Tidy the code that sets the WAITER flag bits to not duplicate a test
around the atomic operation and the KTR trace in both of the lock
functions.
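A sketch of what such a write-owner accessor can look like (the bit
layout is illustrative); a read-locked rwlock has no single owner, so
NULL is returned in that case:

    #define rw_wowner(rw)                                          \
            ((rw)->rw_lock & RW_LOCK_READ ? NULL :                 \
                (struct thread *)RW_OWNER((rw)->rw_lock))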
above what's used for fast interrupts, only interrupts with the level of
the interrupt which led to calling intr_fast() (which is used with both
fast and ithread interrupts) are blocked while in that function. Thus
intr_fast() can be preempted by a fast interrupt (which are of a higher
level than ithread interrupts) while servicing an ithread interrupt. This
can lead to a stale pointer to the head of the active interrupt requests
list when back in the ithread interrupt invocation of intr_fast(), in turn
resulting in corruption of the interrupt request lists and consequently
in a panic. Solve this by turning off interrupts in intr_fast() before
reading the pointer to the head of the active list rather than after
(see the sketch after this list). [1]
- Add a KASSERT in intr_fast() which asserts that ir_func is non-zero before
calling it. [1]
- Increment interrupt stats after calling the handlers rather than before.
This reduces the delay until direct and fast handlers are serviced, in my
testing by 30% on average for the direct tick interrupt handler, in turn
resulting in less clock drift.
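A sketch of the ordering fix from the first item, with illustrative
list and field names (only ir_func matches the text above):

    register_t s;
    struct intr_request *ir;    /* illustrative type */

    s = intr_disable();         /* fix: disable before reading the head */
    ir = active_head;           /* snapshot is now stable */
    active_head = NULL;
    intr_restore(s);

    for (; ir != NULL; ir = ir->ir_next) {
            KASSERT(ir->ir_func != NULL, ("intr_fast: null ir_func"));
            ir->ir_func(ir->ir_arg);
            /* Stats are incremented here, after the handler has run. */
    }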
PR: 94778 [1]
Submitted by: Andrew Belashov [1]
MFC after: 2 weeks
with a given module_t. I use this in the MOD_LOAD event handler for
some test kernel modules to ask the kernel linker to look up the linker
sets in my test modules. (I use linker sets to generate the list of
possible events that I then signal to execute via a sysctl. On non-amd64,
ld(8) would resolve the entire linker set, but on amd64 I have to ask the
kernel linker to do it for me, and having the kernel linker do it works on
all archs.)
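A sketch of the usage, assuming a module_file() accessor from module_t
to its linker file (the name of the function added by this change is
cut off above) together with the kernel linker's existing
linker_file_lookup_set(); the set name "test_events" is hypothetical:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/module.h>
    #include <sys/linker.h>

    struct test_event;                  /* hypothetical event record */
    static struct test_event **ev_start, **ev_stop;

    static int
    testmod_modevent(module_t mod, int type, void *arg)
    {
            int error = 0;

            switch (type) {
            case MOD_LOAD:
                    /* Resolve this module's own "test_events" set. */
                    error = linker_file_lookup_set(module_file(mod),
                        "test_events", &ev_start, &ev_stop, NULL);
                    break;
            case MOD_UNLOAD:
                    break;
            default:
                    error = EOPNOTSUPP;
                    break;
            }
            return (error);
    }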