. interrupt storm detected on "intr70:"; throttling interrupt source;
. Added access serialization to iicbus_transfer(); previously there was
no such protection, and a new transfer could easily confuse the
controller;
. Add error checking (i.e., stop the transfer when an error is detected
and do _not_ overwrite the previous error);
. On the command done interrupt, do not assume that the transfer
finished successfully, as we will receive the command done interrupt
even after errors;
. Simplify the FIFO handling;
. Reset the FIFO between transfers, as the FIFO may contain data from
the last (failed) transfer;
. Fix the iicbus speed for AM335x, which in turn will make better use of
the I2C noise filter (set to one internal clock cycle);
. Move the read and write handlers to the ithread instead of notifying
the requesting thread with wakeup(9);
. Fix the comments based on OMAP4 TRM.
The above changes allow me to read the EDID from my HDMI monitor on BBB
with gonzo's patches to support the TDA19988 (which does 128-byte reads)
and to repeatedly scan the iicbus (with a modified i2c(8)) without
locking up the bus.
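As a hedged illustration of the serialization and FIFO-reset fixes (a
minimal sketch only; ti_i2c_transfer_one(), ti_i2c_reset_fifo() and the
softc fields are assumptions, not the driver's actual code):

	/* Serialize iicbus_transfer() so a new transfer cannot start
	 * while the controller is busy, and flush the FIFO when a
	 * transfer fails, since it may hold stale data. */
	static int
	ti_i2c_transfer(device_t dev, struct iic_msg *msgs, uint32_t nmsgs)
	{
		struct ti_i2c_softc *sc;
		uint32_t i;
		int err;

		sc = device_get_softc(dev);
		sx_xlock(&sc->sc_sx);		/* one transfer at a time */
		for (err = 0, i = 0; i < nmsgs && err == 0; i++)
			err = ti_i2c_transfer_one(sc, &msgs[i]);
		if (err != 0)
			ti_i2c_reset_fifo(sc);
		sx_xunlock(&sc->sc_sx);
		return (err);
	}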
Phabric: D465
header (Elf_Ehdr) to determine if a particular interpreter wants to
accept it or not. Use this mechanism to filter EABI arm on OABI arm
kernels, and vice versa. This method could also be used to implement
OABI on EABI arm kernels, if desired, or to allow a single mips kernel
to run o32, n32 and n64 binaries.
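A minimal sketch of such a filter, assuming the interpreter attaches a
callback of roughly this shape to its brand info (the function name is
illustrative; EF_ARM_EABIMASK is the standard ELF flag mask):

	/* Accept only EABI binaries on an EABI kernel. */
	static boolean_t
	elf32_arm_abi_supported(struct image_params *imgp)
	{
		const Elf_Ehdr *hdr = (const Elf_Ehdr *)imgp->image_header;

		/* e_flags carries the EABI version; zero means OABI. */
		return ((hdr->e_flags & EF_ARM_EABIMASK) != 0);
	}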
Differential Revision: https://reviews.freebsd.org/D609
soc-wide info lives. It was under dev.imx6_anatop.0.
What does anatop mean anyway? Nobody seems to know, so it's probably
not where somebody will think to look for imx6 hardware info.
configure the mux and config registers for PIO devices based on what
we find in the FDT. I developed it per the spec that had been
committed to Linux in the January 2014 time frame and haven't
updated it since. In short, bundles of pins are activated in specific
ways for specific configurations, and we implement all of that.
What's not included is an MI device infrastructure, any dynamic
run-time changing of these pins, etc. Also not included are hooks into
all the drivers to enable the latter (static at boot, no driver changes
are needed). These larger questions will need to be answered once we
have more drivers like this for more platforms, or once somebody has a
heck of a lot of time to research a bunch of platforms, the Linux
solution (which is good, but has its warts), etc.
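As a hedged sketch of the data the driver consumes, each pin entry in
the FDT bundles the register offsets and values to program (the struct
layout follows the Linux binding mentioned above; all names here are
illustrative):

	struct imx6_pin_cfg {
		uint32_t mux_reg;	/* IOMUXC mux register offset */
		uint32_t conf_reg;	/* pad config register offset */
		uint32_t input_reg;	/* input select register offset */
		uint32_t mux_val;	/* mux mode to select */
		uint32_t input_val;	/* input daisy-chain select */
		uint32_t conf_val;	/* drive/pull/speed config */
	};

	static void
	imx6_iomux_apply(struct imx6_iomux_softc *sc,
	    const struct imx6_pin_cfg *p)
	{
		bus_write_4(sc->sc_res, p->mux_reg, p->mux_val);
		if (p->input_reg != 0)
			bus_write_4(sc->sc_res, p->input_reg, p->input_val);
		bus_write_4(sc->sc_res, p->conf_reg, p->conf_val);
	}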
work. This gets my AT91SAM9260-based boards almost booting with
current in multi-pass. The MCI driver is broken, but it was equally
broken before multi-pass.
By Richard Earnshaw at ARM
>
>GCC has for a number of years provided a set of pre-defined macros for
>use with determining the ISA and features of the target during
>pre-processing. However, the design was always somewhat cumbersome in
>that each new architecture revision created a new define and then
>removed the previous one. This meant that it was necessary to keep
>updating the support code simply to recognise a new architecture being
>added.
>
>The ACLE specification (ARM C Language Extensions)
>(http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.set.swdev/index.html)
>provides a much more suitable interface and GCC has supported this
>since gcc-4.8.
>
>This patch makes use of the ACLE pre-defines to map to the internal
>feature definitions. To support older versions of GCC a compatibility
>header is provided that maps the traditional pre-defines onto the new
>ACLE ones.
Stop using __FreeBSD_ARCH_armv6__ and switch to __ARM_ARCH >= 6 in the
couple of places in the tree. clang already implements ACLE. Add a define
that says we implement version 1.1, even though the implementation
isn't quite complete.
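A hedged sketch of what the compatibility header amounts to (the real
header covers more legacy defines than shown here):

	/* Map the old per-revision GCC defines onto ACLE's __ARM_ARCH
	 * for compilers older than gcc-4.8. */
	#ifndef __ARM_ARCH
	#if defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__)
	#define	__ARM_ARCH	7
	#elif defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || \
	    defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || \
	    defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__)
	#define	__ARM_ARCH	6
	#else
	#define	__ARM_ARCH	5
	#endif
	#endif

	/* Consumers then test the version directly: */
	#if __ARM_ARCH >= 6
	/* armv6+ only code */
	#endif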
mapping size (currently unused). The flags include the fault access
bits, the wired flag as PMAP_ENTER_WIRED, and a new flag
PMAP_ENTER_NOSLEEP to indicate that pmap should not sleep.
For powerpc aim, both 32 and 64 bit, fix the implementation to ensure
that the requested mapping is created when PMAP_ENTER_NOSLEEP is not
specified; in particular, wait for the memory required to proceed to
become available.
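For reference, a hedged sketch of the reworked KPI and a call that must
not sleep (parameter names are assumptions):

	int pmap_enter(pmap_t pmap, vm_offset_t va, vm_page_t m,
	    vm_prot_t prot, u_int flags, int8_t psind);

	/* Create a wired mapping, failing rather than sleeping: */
	error = pmap_enter(pmap, va, m, VM_PROT_READ | VM_PROT_WRITE,
	    VM_PROT_READ | VM_PROT_WRITE | PMAP_ENTER_WIRED |
	    PMAP_ENTER_NOSLEEP, 0);
	if (error == KERN_RESOURCE_SHORTAGE) {
		/* The caller must cope, e.g. retry the fault. */
	}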
In collaboration with: alc
Tested by: nwhitehorn (ppc aim32 and booke)
Sponsored by: The FreeBSD Foundation and EMC / Isilon Storage Division
MFC after: 2 weeks
device attachment on arm platforms. If this is defined, nexus attaches
early in BUS_PASS_BUS, and other busses and devices attach later, in the
pass number they are set up for. Without it defined, nexus attaches in
BUS_PASS_DEFAULT and thus so does everything else, which is the status quo.
Arm platforms which use FDT data to enumerate devices have been relying
on devices being attached in the exact order they're listed in the dts
source file. That's one of the things currently preventing us from using
vendor-supplied fdt data (because then we don't control the order of the
devices in the data). Multi-pass attachment can go a long way towards
solving that problem by ensuring things like clock and interrupt drivers
are attached before the more mundane devices that need them.
The long-term goal is to have all arm fdt-based platforms using multipass.
This option is a bridge to that, letting us enable it selectively as
platforms are converted and tested (the alternative being to just throw
a big switch and try to fight fires as they're reported).
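A hedged sketch of the two modes the option toggles (declaration
details are illustrative):

	#ifdef ARM_DEVICE_MULTIPASS
	/* Attach nexus early so later passes can hang off it. */
	EARLY_DRIVER_MODULE(nexus, root, nexus_driver, nexus_devclass,
	    0, 0, BUS_PASS_BUS);
	#else
	/* Status quo: everything lands in BUS_PASS_DEFAULT. */
	DRIVER_MODULE(nexus, root, nexus_driver, nexus_devclass, 0, 0);
	#endif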
The MD allocators were very similar; however, there were some minor
differences. These differences were all consolidated in the MI
allocator, under ifdefs. The defines from machine/vmparam.h turn on
features required for a particular machine. For details, see the
comment in sys/sf_buf.h.
As a result, no MD code is left in sys/*/*/vm_machdep.c. Some arches
still have machine/sf_buf.h, which is usually quite small.
Tested by: glebius (i386), tuexen (arm32), kevlo (arm32)
Reviewed by: kib
Sponsored by: Netflix
Sponsored by: Nginx, Inc.
We continue to use pmap_enter() for that. For unwiring virtual pages, we
now use pmap_unwire(), which unwires a range of virtual addresses instead
of a single virtual page.
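A hedged sketch of the new interface (the prototype matches the
description; the call site is illustrative):

	void pmap_unwire(pmap_t pmap, vm_offset_t sva, vm_offset_t eva);

	/* e.g., unwire a whole mapped region in one call: */
	pmap_unwire(map->pmap, entry->start, entry->end);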
Sponsored by: EMC / Isilon Storage Division
don't need any #ifdef stuff to use atomic_load/store_64() elsewhere in
the kernel. For armv4 the atomics are trivial to implement for kernel
code (just disable interrupts), less so for user mode, so this only has
the kernel mode implementations for now.
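A hedged sketch of the armv4 kernel-mode approach (the real code lives
in machine/atomic.h; this is just its shape):

	/* With interrupts disabled, no other kernel code on the core
	 * can observe the two 32-bit halves separately. */
	static __inline uint64_t
	atomic_load_64(volatile uint64_t *p)
	{
		uint64_t v;
		register_t s;

		s = intr_disable();
		v = *p;
		intr_restore(s);
		return (v);
	}

	static __inline void
	atomic_store_64(volatile uint64_t *p, uint64_t v)
	{
		register_t s;

		s = intr_disable();
		*p = v;
		intr_restore(s);
	}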
value shared across multiple cores is with atomic_load_64() and
atomic_store_64(), because the normal 64-bit load/store instructions
are not atomic on 32-bit arm. Luckily the ldrexd/strexd instructions
that are atomic are fairly cheap on armv6. Because it's fairly simple
to do, this implements all the ops for 64-bit, not just load/store.
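A hedged sketch of the armv6 load using the exclusive pair (the in-tree
implementation may differ in detail):

	static __inline uint64_t
	atomic_load_64(volatile uint64_t *p)
	{
		uint64_t ret;

		/* ldrexd is the only atomic 64-bit load; it also sets
		 * the exclusive monitor, so clear it with clrex since
		 * nothing will be stored back. */
		__asm __volatile(
		    "ldrexd %Q[ret], %R[ret], [%[ptr]]\n"
		    "clrex\n"
		    : [ret] "=&r" (ret)
		    : [ptr] "r" (p)
		    : "cc", "memory");
		return (ret);
	}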
Reviewed by: andrew, cognet
We have functions nested within functions, and places where we start a
function then never end it, we just jump to the middle of something else.
We tried to express this with nested ENTRY()/END() macros (which result
in .fnstart and .fnend directives), but it turns out there's no way to
express that nesting in ARM EHABI unwind info, and newer tools treat
multiple .fnstart directives without an intervening .fnend as an error.
These changes introduce two new macros, EENTRY() and EEND(). EENTRY()
creates a global label you can call/jump to just like ENTRY(), but it
doesn't emit a .fnstart. EEND() is a no-op that just documents the
conceptual endpoint that matches up with the same-named EENTRY().
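A hedged sketch of how the pair might sit next to ENTRY()/END() (the
actual asm.h definitions differ in detail):

	/* Like ENTRY(), but emits no .fnstart: a secondary entry point
	 * inside the enclosing function's unwind region. */
	#define	EENTRY(x)						\
		.globl	x; .align 2; .type x,#function; x:

	/* Purely documentary; pairs with the same-named EENTRY(). */
	#define	EEND(x)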
This is based on patches submitted by Stepan Dyatkovskiy, but I made some
changes and added the EEND() stuff, so blame any problems on me.
Submitted by: Stepan Dyatkovskiy <stpworld@narod.ru>
(4 in operation), 4GB ram (3.5 usable) ARM machine.
Support covers device drivers for:
- Serial Peripheral Interface (SPI)
- Chrome Embedded Controller (EC) - SPI-based version
- XHCI and USB 3.0 dual-role device PHY
Also:
- Add support for Exynos5420 in Pad module
- Move power-related functions to separate driver --
Power Management Unit (PMU)
- Enable XHCI for Chromebook1
Special thanks to grehan@ for hardware, and to
hselasky@ for r269139.
moved from the stack into the tag structure. In retrospect that was a bad
idea, because nothing protects that array from concurrent access by
multiple threads.
This change moves the array to the map structure (actually it's allocated
following the structure, but all in a single malloc() call).
This also establishes a "sane" limit of 4096 segments per map. This is
mostly to prevent trying to allocate all of memory if someone accidentally
uses a tag with nsegments set to BUS_SPACE_UNRESTRICTED. If there's ever
a genuine need for more than 4096, don't hesitate to increase this (or
maybe make it tunable).
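A hedged sketch of the single-allocation layout (names and the helper
are illustrative):

	/* The segment array sits directly after the map structure,
	 * all from one malloc(). */
	static bus_dmamap_t
	allocate_map(bus_dma_tag_t dmat, int mflags)
	{
		bus_dmamap_t map;
		int segsize;

		/* Cap at 4096 segments so BUS_SPACE_UNRESTRICTED does
		 * not turn into an attempt to allocate all of memory. */
		segsize = MIN(dmat->nsegments, 4096) *
		    sizeof(bus_dma_segment_t);
		map = malloc(sizeof(*map) + segsize, M_DEVBUF,
		    mflags | M_ZERO);
		if (map != NULL)
			map->segments = (bus_dma_segment_t *)(map + 1);
		return (map);
	}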
Reviewed by: cognet
triggers a need to bounce due to cacheline alignment. These buffers
are always aligned to cacheline boundaries, and even when the DMA operation
starts at an offset within the buffer or doesn't extend to the end of the
buffer, it's safe to flush the complete cachelines that were only partially
involved in the DMA. This is because there's a very strict rule on these
types of buffers that there will not be concurrent access by the CPU and
one or more DMA transfers within the buffer.
Reviewed by: cognet
functions, it has evolved to make a variety of decisions about whether
the DMA needs to bounce, so rename it to must_bounce(). Rewrite it to
perform checks outside of the ancestor loop if they're based on information
that's wholly contained within the original tag. Now the loop only checks
exclusion zones in ancestor tags.
Also, add a new function, might_bounce(), which does a fast inline check
of flags within the tag and map to quickly eliminate the need to call
the more expensive must_bounce() for each page in the DMA operation.
Within the mapping loops, use map->pagesneeded != 0 as a proxy for all
the various checks on whether bouncing might be required. If no pages
were reserved for bouncing during the checks before the mapping loop,
then there's no need to re-check any of the conditions that can lead
to bouncing -- all those checks already decided there would be no bouncing.
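A hedged sketch of the fast path (the flag names below are
placeholders, not necessarily the real ones):

	static __inline int
	might_bounce(bus_dma_tag_t dmat, bus_dmamap_t map)
	{
		return ((dmat->flags & DMA_COULD_BOUNCE) != 0 ||
		    (map->flags & DMAMAP_COULD_BOUNCE) != 0);
	}

	/* In the mapping loops, pages reserved earlier is the proxy: */
	if (map->pagesneeded != 0 && must_bounce(dmat, map, curaddr, sgsize))
		curaddr = add_bounce_page(dmat, map, vaddr, curaddr, sgsize);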
Reviewed by: cognet
exclusion zones and physical memory. The phys_avail[i] entries are the
address of the first byte of ram in the region, and phys_avail[i+1]
entries are the address of the first byte of ram in the next region
(i.e., they're not included in the region that starts at phys_avail[i]).
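Spelled out as a hedged sketch (mirroring the half-open regions just
described):

	/* Does [lowaddr, highaddr] overlap any RAM region? */
	static int
	exclusion_bounce_check(vm_paddr_t lowaddr, vm_paddr_t highaddr)
	{
		int i;

		for (i = 0; phys_avail[i] && phys_avail[i + 1]; i += 2) {
			if ((lowaddr >= phys_avail[i] &&
			    lowaddr < phys_avail[i + 1]) ||
			    (lowaddr < phys_avail[i] &&
			    highaddr >= phys_avail[i]))
				return (1);
		}
		return (0);
	}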
Reviewed by: cognet
unchanging values in the phys_avail array, so do the comparisons just once
at tag creation time and set a flag to remember the result.
Reviewed by: cognet
DMA on arm can bounce for several reasons, and _bus_dma_can_bounce() only
checks for the lowaddr/highaddr exclusion ranges in the dma tag, so now
it's named exclusion_bounce(). The other reasons for bouncing are checked
by the new functions alignment_bounce() and cacheline_bounce().
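Hedged sketches of the two new predicates (the real bodies may differ):

	static __inline int
	alignment_bounce(bus_dma_tag_t dmat, bus_addr_t addr)
	{
		return (addr & (dmat->alignment - 1));
	}

	static __inline int
	cacheline_bounce(bus_addr_t addr, bus_size_t size)
	{
		/* A partial cacheline at either end of a non-COHERENT
		 * buffer forces a bounce. */
		return ((addr | size) & arm_dcache_align_mask);
	}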
Reviewed by: cognet
This commit does not add error returns to minidumpsys() or
textdump_dumpsys(); those can also be added later.
Submitted by: Conrad Meyer (EMC / Isilon storage division)
handling. For statically linked apps this uses the __exidx_start/end
symbols set up by the linker. For dynamically linked apps it finds the
shared object that contains the given address and returns the location and
size of the exidx section in that shared object.
The dl_unwind_find_exidx() name is used by other BSD projects and Android,
and is mentioned in clang 3.5 comments as "the BSD interface" for finding
exidx data. GCC (in libgcc_s) expects the exact same API and functionality
to be provided by a function named __gnu_Unwind_Find_exidx(), so we provide
that with an alias ("strong reference").
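A hedged sketch of the alias (using the __strong_reference() helper
from sys/cdefs.h):

	_Unwind_Ptr dl_unwind_find_exidx(_Unwind_Ptr pc, int *pcount);
	__strong_reference(dl_unwind_find_exidx, __gnu_Unwind_Find_exidx);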
Reviewed by: kib@
MFC after: 1 week
Previously, the "no execute" bit was being set directly in the PTE, instead
of the local variable in which the new PTE value is being constructed. So,
when the local variable was finally assigned to the PTE, the "no execute"
bit setting was lost.
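A hedged illustration of the fixed pattern (generic names, not the
actual pmap code):

	static void
	pte_store_nx(pt_entry_t *ptep, pt_entry_t npte)
	{
		npte |= L2_XN;	/* set "no execute" in the local copy */
		*ptep = npte;	/* the single store keeps the bit */
	}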
that it can connect to switches at speeds other than 1gb.
This requires changing the reference clock speed. Since we still don't
have a general clock API that lets a SoC-independent driver manipulate its
own clocks, this change includes a weak reference to a routine named
cgem_set_ref_clk(). The default implementation is a no-op; SoC-specific
code can provide an implementation that actually changes the speed.
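A hedged sketch of the hook (the default's name is an assumption):

	/* No-op default; SoC code overrides it via the weak symbol. */
	int
	cgem_default_set_ref_clk(int unit, int frequency)
	{
		return (0);
	}
	__weak_reference(cgem_default_set_ref_clk, cgem_set_ref_clk);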
Submitted by: Thomas Skibo <ThomasSkibo@sbcglobal.net>
These changes prevent sysctl(8) from returning proper output,
causing failures such as:
1) no output from sysctl(8)
2) erroneously returning ENOMEM to tools like truss(1)
or uname(1):
truss: can not get etype: Cannot allocate memory
there is an environment variable which shall initialize the SYSCTL
during early boot. This works for all SYSCTL types, both statically
and dynamically created ones, except for the SYSCTL NODE type and
SYSCTLs which belong to VNETs. A new flag, CTLFLAG_NOFETCH, has been
added to be used in the case where a tunable sysctl has a custom
initialisation function, allowing the sysctl to still be marked as a
tunable. The kernel SYSCTL API is mostly the same, with a few
exceptions for some special operations like iterating the children of
a static/extern SYSCTL node. This operation should probably be
factored out into a common macro, since some device drivers use it.
The reason for
changing the SYSCTL API was the need for a SYSCTL parent OID pointer
and not only the SYSCTL parent OID list pointer in order to quickly
generate the sysctl path. The motivation behind this patch is to avoid
parameter loading kludges inside the OFED driver subsystem. Instead of
adding special code to the OFED driver subsystem to post-load tunables
into dynamically created sysctls, we generalize this in the kernel.
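As a hedged example of the CTLFLAG_NOFETCH usage described above (the
variable and name are made up):

	static int example_value;
	SYSCTL_INT(_hw, OID_AUTO, example,
	    CTLFLAG_RDTUN | CTLFLAG_NOFETCH, &example_value, 0,
	    "Fetched by a custom initialisation routine, not at early boot");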
Other changes:
- Corrected a possibly incorrect sysctl name from "hw.cbb.intr_mask"
to "hw.pcic.intr_mask".
- Removed redundant TUNABLE statements throughout the kernel.
- Some minor code rewrites in connection to removing not needed
TUNABLE statements.
- Added a missing SYSCTL_DECL().
- Wrapped two very long lines.
- Avoid malloc()/free() inside sysctl string handling, in case it is
called to initialize a sysctl from a tunable, since malloc()/free() are
not ready when sysctls from the sysctl dataset are registered.
- Bumped FreeBSD version to indicate SYSCTL API change.
MFC after: 2 weeks
Sponsored by: Mellanox Technologies
the queue into which pages that are going to be unwired should be
enqueued.
- Add stronger checks to the enqueue/dequeue for the pagequeues when
adding and removing pages to and from them.
Of course, for unmanaged pages the queue parameter of vm_page_unwire() will
be ignored, just as the active parameter is today.
This makes adding new pagequeues quicker.
This change effectively modifies the KPI. __FreeBSD_version will,
however, be bumped only when the full cache of free pages is
evicted.
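A hedged sketch of the modified interface (the call site is
illustrative):

	void vm_page_unwire(vm_page_t m, int queue);

	/* Unwire and let the page age out via the inactive queue: */
	vm_page_unwire(m, PQ_INACTIVE);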
Sponsored by: EMC / Isilon storage division
Reviewed by: alc
Tested by: pho
portsnap extract, where previously it would panic. Clearly someone
who knows pmap should optimize this code per alc's comment...
Submitted by: alc
MFC after: probably
In particular, don't check the value of the bus_dma map against NULL
to determine if either bus_dmamem_alloc() or bus_dmamap_load() succeeded.
Instead, assume that bus_dmamap_load() succeeded (and thus that
bus_dmamap_unload() should be called) if the bus address for a resource
is non-zero, and assume that bus_dmamem_alloc() succeeded (and thus
that bus_dmamem_free() should be called) if the virtual address for a
resource is not NULL.
In many cases these bugs could result in leaks when a driver was detached.
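A hedged sketch of the corrected release logic (softc field names are
illustrative):

	static void
	example_dma_free(struct example_softc *sc)
	{
		/* Loaded iff the bus address is non-zero. */
		if (sc->ring_paddr != 0) {
			bus_dmamap_unload(sc->ring_tag, sc->ring_map);
			sc->ring_paddr = 0;
		}
		/* Allocated iff the virtual address is non-NULL. */
		if (sc->ring_vaddr != NULL) {
			bus_dmamem_free(sc->ring_tag, sc->ring_vaddr,
			    sc->ring_map);
			sc->ring_vaddr = NULL;
		}
	}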
Reviewed by: yongari
MFC after: 2 weeks
don't create a map before calling bus_dmamem_alloc() (such maps were
leaked). It is believed that the extra destroy of the map was generally
harmless since bus_dmamem_alloc() often uses special maps for which
bus_dmamap_destroy() is a no-op (e.g. on x86).
Reviewed by: scottl
a partially populated reservation becomes fully populated, and decrease this
field when a fully populated reservation becomes partially populated.
Use this field to simplify the implementation of pmap_enter_object() on
amd64, arm, and i386.
On all architectures where we support superpages, the cost of creating a
superpage mapping is roughly the same as creating a base page mapping. For
example, both kinds of mappings entail the creation of a single PTE and PV
entry. With this in mind, use the page size field to make the
implementation of vm_map_pmap_enter(..., MAP_PREFAULT_PARTIAL) a little
smarter. Previously, if MAP_PREFAULT_PARTIAL was specified to
vm_map_pmap_enter(), that function would only map base pages. Now, it will
create up to 96 base page or superpage mappings.
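As a hedged sketch, the cap counts mappings rather than base pages
(the constant name is illustrative):

	/* Prefault at most this many mappings; a superpage mapping
	 * covers many base pages but counts only once. */
	#define	MAX_INIT_PT	96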
Reviewed by: kib
Sponsored by: EMC / Isilon Storage Division