(4 in operation), 4GB RAM (3.5GB usable) ARM machine.
Support covers device drivers for:
- Serial Peripheral Interface (SPI)
- Chrome Embedded Controller (EC) - SPI-based version
- XHCI and USB 3.0 dual-role device PHY
Also:
- Add support for Exynos5420 in Pad module
- Move power-related functions to separate driver --
Power Management Unit (PMU)
- Enable XHCI for Chromebook1
Special thanks to grehan@ for hardware, and to
hselasky@ for r269139.
moved from the stack into the tag structure. In retrospect that was a bad
idea, because nothing protects that array from concurrent access by
multiple threads.
This change moves the array to the map structure (actually it's allocated
following the structure, but all in a single malloc() call).
This also establishes a "sane" limit of 4096 segments per map. This is
mostly to prevent trying to allocate all of memory if someone accidentally
uses a tag with nsegments set to BUS_SPACE_UNRESTRICTED. If there's ever
a genuine need for more than 4096, don't hesitate to increase this (or
maybe make it tunable).
Reviewed by: cognet
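As a rough illustration of the single-malloc() layout described above (a sketch only;
the structure and field names are invented, with a stand-in for the real
bus_dma_segment_t type), the segments array is placed immediately after the map
structure:

    #include <stdlib.h>

    /* Hypothetical stand-in for the real segment descriptor type. */
    typedef struct {
            unsigned long ds_addr;
            unsigned long ds_len;
    } bus_dma_segment_t;

    struct bus_dmamap {
            int                nsegments;   /* capacity of the trailing array */
            bus_dma_segment_t *segments;    /* points just past this struct */
            /* ... other per-map state ... */
    };

    /* Allocate the map and its segments array in one malloc() call. */
    static struct bus_dmamap *
    dmamap_alloc(int nsegments)
    {
            struct bus_dmamap *map;

            map = malloc(sizeof(*map) + nsegments * sizeof(bus_dma_segment_t));
            if (map == NULL)
                    return (NULL);
            map->nsegments = nsegments;
            map->segments = (bus_dma_segment_t *)(map + 1);
            return (map);
    }

Because each map now owns its own array, concurrent loads on different maps no longer
race on a single shared segment array in the tag.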
triggers a need to bounce due to cacheline alignment. These buffers
are always aligned to cacheline boundaries, and even when the DMA operation
starts at an offset within the buffer or doesn't extend to the end of the
buffer, it's safe to flush the complete cachelines that were only partially
involved in the DMA. This is because these buffers are governed by a strict
rule: there is never concurrent access to the buffer by the CPU and one or
more DMA transfers.
Reviewed by: cognet
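A minimal sketch of the "flush whole cachelines" idea (the 64-byte line size and the
helper name are assumptions, not the driver's actual code): the partially involved
range is simply rounded outward to cacheline boundaries before the cache maintenance
operation, which is safe only because of the no-concurrent-access rule above.

    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_LINE      64      /* assumed cacheline size */

    /* Round a partial DMA range outward to whole cachelines. */
    static void
    range_to_cachelines(uintptr_t addr, size_t len, uintptr_t *start,
        uintptr_t *end)
    {
            *start = addr & ~((uintptr_t)CACHE_LINE - 1);
            *end = (addr + len + CACHE_LINE - 1) & ~((uintptr_t)CACHE_LINE - 1);
    }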
functions, it has evolved to make a variety of decisions about whether
the DMA needs to bounce, so rename it to must_bounce(). Rewrite it to
perform checks outside of the ancestor loop if they're based on information
that's wholly contained within the original tag. Now the loop only checks
exclusion zones in ancestor tags.
Also, add a new function, might_bounce(), which does a fast inline check
of flags within the tag and map to quickly eliminate the need to call
the more expensive must_bounce() for each page in the DMA operation.
Within the mapping loops, use map->pagesneeded != 0 as a proxy for all
the various checks on whether bouncing might be required. If no pages
were reserved for bouncing during the checks before the mapping loop,
then there's no need to re-check any of the conditions that can lead
to bouncing -- all those checks already decided there would be no bouncing.
Reviewed by: cognet
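A hedged sketch of the fast-path/slow-path split described above; the flag name,
structure layout, and stub bodies are invented for illustration and do not match the
real busdma code.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical flag meaning "this tag/map may require bounce pages". */
    #define DMAMAP_COULD_BOUNCE     0x01

    struct dma_tag { int flags; /* alignment, exclusion zones, ... */ };
    struct dma_map { int flags; int pagesneeded; /* ... */ };

    /* Slow path: the real code also walks ancestor tags checking their
     * exclusion zones; here it is just a stub. */
    static bool
    must_bounce(struct dma_tag *tag, struct dma_map *map, uintptr_t addr,
        size_t size)
    {
            (void)tag; (void)map; (void)addr; (void)size;
            return (false);
    }

    /* Fast inline check: if neither the tag nor the map is flagged as a
     * bounce candidate, the per-page must_bounce() calls can be skipped. */
    static inline bool
    might_bounce(struct dma_tag *tag, struct dma_map *map)
    {
            return (((tag->flags | map->flags) & DMAMAP_COULD_BOUNCE) != 0);
    }

    static void
    map_one_page(struct dma_tag *tag, struct dma_map *map, uintptr_t curaddr,
        size_t sgsize)
    {
            /* map->pagesneeded != 0 stands in for all the earlier bounce
             * checks: if no bounce pages were reserved before the loop,
             * none of the per-page conditions need re-checking here. */
            if (map->pagesneeded != 0 &&
                must_bounce(tag, map, curaddr, sgsize)) {
                    /* ... substitute a bounce page for this segment ... */
            }
            /* ... append curaddr/sgsize to the segment list ... */
    }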
exclusion zones and physical memory. The phys_avail[i] entries are the
address of the first byte of RAM in the region, and the phys_avail[i+1]
entries are the address of the first byte of RAM in the next region
(i.e., they're not included in the region that starts at phys_avail[i]).
Reviewed by: cognet
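A sketch of how an exclusion zone can be tested against that phys_avail[] layout (the
helper name and the zero-pair termination convention are assumptions for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    /* phys_avail[] holds pairs: [i] is the first byte of RAM in a region,
     * [i+1] is the first byte past that region.  A zero pair terminates
     * the list (an assumption for this sketch). */
    extern uint64_t phys_avail[];

    static bool
    exclusion_overlaps_ram(uint64_t lowaddr, uint64_t highaddr)
    {
            int i;

            for (i = 0; phys_avail[i + 1] != 0; i += 2) {
                    uint64_t start = phys_avail[i];     /* inclusive */
                    uint64_t end = phys_avail[i + 1];   /* exclusive */

                    /* The zone [lowaddr, highaddr] overlaps this region if
                     * it begins before the region ends and ends at or after
                     * the region begins. */
                    if (lowaddr < end && highaddr >= start)
                            return (true);
            }
            return (false);
    }

Since phys_avail[] never changes after boot, the result of this comparison can be
computed once and cached, which is exactly what the next change does at tag creation
time.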
unchanging values in the phys_avail array, so do the comparisons just once
at tag creation time and set a flag to remember the result.
Reviewed by: cognet
DMA on arm can bounce for several reasons, and _bus_dma_can_bounce() only
checks for the lowaddr/highaddr exclusion ranges in the dma tag, so now
it's named exclusion_bounce(). The other reasons for bouncing are checked
by the new functions alignment_bounce() and cacheline_bounce().
Reviewed by: cognet
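Roughly, the three split-out checks look something like the following sketch (field
names and the hard-coded cacheline mask are assumptions; the alignment test also
assumes a power-of-two alignment):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_LINE_MASK 63              /* assumed 64-byte cachelines */

    struct dma_tag {
            uint64_t lowaddr, highaddr;     /* exclusion zone bounds */
            uint64_t alignment;             /* required alignment, power of 2 */
    };

    /* Bounce because the address falls in the tag's exclusion zone. */
    static bool
    exclusion_bounce(struct dma_tag *tag, uint64_t addr)
    {
            return (addr >= tag->lowaddr && addr <= tag->highaddr);
    }

    /* Bounce because the address violates the alignment constraint. */
    static bool
    alignment_bounce(struct dma_tag *tag, uint64_t addr)
    {
            return ((addr & (tag->alignment - 1)) != 0);
    }

    /* Bounce because the start or length is not cacheline aligned, so the
     * buffer may share cachelines with unrelated data. */
    static bool
    cacheline_bounce(uint64_t addr, size_t size)
    {
            return (((addr | size) & CACHE_LINE_MASK) != 0);
    }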
This commit does not add error returns to minidumpsys() or
textdump_dumpsys(); those can also be added later.
Submitted by: Conrad Meyer (EMC / Isilon storage division)
handling. For statically linked apps this uses the __exidx_start/end
symbols set up by the linker. For dynamically linked apps it finds the
shared object that contains the given address and returns the location and
size of the exidx section in that shared object.
The dl_unwind_find_exidx() name is used by other BSD projects and Android,
and is mentioned in clang 3.5 comments as "the BSD interface" for finding
exidx data. GCC (in libgcc_s) expects the exact same API and functionality
to be provided by a function named __gnu_Unwind_Find_exidx(), so we provide
that with an alias ("strong reference").
Reviewed by: kib@
MFC after: 1 week
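A minimal sketch of how the GCC-expected name can be provided as an alias of the BSD
one, assuming the usual pointer-plus-count signature (the body here is a placeholder,
not the real rtld lookup):

    #include <stdint.h>

    typedef uintptr_t unwind_ptr_t;         /* stand-in for _Unwind_Ptr */

    unwind_ptr_t
    dl_unwind_find_exidx(unwind_ptr_t pc, int *pcount)
    {
            /* The real code finds the shared object containing 'pc' and
             * returns the address of its exidx section, storing the number
             * of entries in *pcount. */
            *pcount = 0;
            return (0);
    }

    /* libgcc expects the identical functionality under this name, so make
     * it a strong alias of the function above. */
    unwind_ptr_t __gnu_Unwind_Find_exidx(unwind_ptr_t, int *)
            __attribute__((alias("dl_unwind_find_exidx")));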
Previously, the "no execute" bit was being set directly in the PTE, instead
of in the local variable in which the new PTE value was being constructed.
So, when the local variable was finally assigned to the PTE, the "no
execute" bit was lost.
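The bug pattern, reduced to a sketch with invented names (npte is the local variable
in which the new PTE value is constructed):

    #include <stdint.h>

    #define PTE_NX  (1u << 0)               /* assumed "no execute" bit */

    static void
    build_pte(volatile uint32_t *pte, uint32_t pa_bits, int executable)
    {
            uint32_t npte = pa_bits;        /* new PTE value under construction */

            if (!executable) {
                    /* Buggy version did "*pte |= PTE_NX;", setting the bit in
                     * the live PTE; the final store below then wiped it out. */
                    npte |= PTE_NX;         /* fixed: set the bit in the local */
            }

            *pte = npte;                    /* final store replaces the PTE */
    }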
that it can connect to switches at speeds other than 1Gb.
This requires changing the reference clock speed. Since we still don't
have a general clock API that lets a SoC-independent driver manipulate its
own clocks, this change includes a weak reference to a routine named
cgem_set_ref_clk(). The default implementation is a no-op; SoC-specific
code can provide an implementation that actually changes the speed.
Submitted by: Thomas Skibo <ThomasSkibo@sbcglobal.net>
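A sketch of the weak-reference arrangement, assuming a plain int-returning prototype
for cgem_set_ref_clk() (the real prototype and return convention may differ):

    /* Default no-op in the SoC-independent driver; the weak attribute lets
     * SoC-specific code override it by providing a normal (strong)
     * definition of the same function. */
    __attribute__((weak)) int
    cgem_set_ref_clk(int unit, int frequency)
    {
            (void)unit;
            (void)frequency;
            return (0);             /* nothing to do on unknown SoCs */
    }

SoC code that knows how to reprogram the reference clock simply defines a non-weak
cgem_set_ref_clk() that sets the divisors for the requested frequency, and the linker
picks that definition over the weak default.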
These changes prevent sysctl(8) from returning proper output,
such as:
1) no output from sysctl(8)
2) erroneously returning ENOMEM with tools like truss(1)
or uname(1)
truss: can not get etype: Cannot allocate memory
there is an environment variable which shall initialize the SYSCTL
during early boot. This works for all SYSCTL types, both statically and
dynamically created ones, except for the SYSCTL NODE type and SYSCTLs
which belong to VNETs. A new flag, CTLFLAG_NOFETCH, has been added to
be used in the case where a tunable sysctl has a custom initialization
function, allowing the sysctl to still be marked as a tunable. The
kernel SYSCTL API is mostly the same, with a few exceptions for some
special operations like iterating the children of a static/extern SYSCTL
node. This operation should probably be factored out into a
common macro, since some device drivers use it. The reason for
changing the SYSCTL API was the need for a SYSCTL parent OID pointer
and not only the SYSCTL parent OID list pointer in order to quickly
generate the sysctl path. The motivation behind this patch is to avoid
parameter loading kludges inside the OFED driver subsystem. Instead of
adding special code to the OFED driver subsystem to post-load tunables
into dynamically created sysctls, we generalize this in the kernel.
Other changes:
- Corrected a possibly incorrect sysctl name from "hw.cbb.intr_mask"
to "hw.pcic.intr_mask".
- Removed redundant TUNABLE statements throughout the kernel.
- Some minor code rewrites in connection with removing unneeded
TUNABLE statements.
- Added a missing SYSCTL_DECL().
- Wrapped two very long lines.
- Avoid malloc()/free() inside sysctl string handling, in case it is
called to initialize a sysctl from a tunable, because malloc()/free() is
not yet ready when sysctls from the sysctl dataset are registered.
- Bumped FreeBSD version to indicate SYSCTL API change.
MFC after: 2 weeks
Sponsored by: Mellanox Technologies
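To make the tunable behavior concrete, here is a hedged kernel-side sketch (the
variable and OID names are invented; CTLFLAG_RWTUN, CTLFLAG_RDTUN, and CTLFLAG_NOFETCH
are the standard flags from <sys/sysctl.h> discussed above). A tunable sysctl is
initialized from the kernel environment when it is created, unless CTLFLAG_NOFETCH
tells the machinery that a custom initialization routine will fetch the value itself.

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int example_knob = 10;

    /* Ordinary read/write tunable: the value is fetched from the kernel
     * environment (e.g. "hw.example_knob=20" in loader.conf) when the
     * sysctl is created, even if it is created dynamically. */
    SYSCTL_INT(_hw, OID_AUTO, example_knob, CTLFLAG_RWTUN,
        &example_knob, 0, "Example tunable knob");

    static int example_special = 0;

    /* Still marked as a tunable, but CTLFLAG_NOFETCH suppresses the
     * automatic environment fetch because a custom initialization
     * function handles it. */
    SYSCTL_INT(_hw, OID_AUTO, example_special,
        CTLFLAG_RDTUN | CTLFLAG_NOFETCH,
        &example_special, 0, "Tunable with custom initialization");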
the queue onto which pages that are going to be unwired should be enqueued.
- Add stronger checks to the enqueue/dequeue for the pagequeues when
adding and removing pages to them.
Of course, for unmanaged pages the queue parameter of vm_page_unwire() will
be ignored, just as the active parameter is today.
This makes adding new pagequeues quicker.
This change effectively modifies the KPI. __FreeBSD_version will, however,
be bumped only when the full cache of free pages is evicted.
Sponsored by: EMC / Isilon storage division
Reviewed by: alc
Tested by: pho
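For example (a hedged fragment; PQ_INACTIVE is one of the standard page queue
constants, and the locking shown follows the usual page-lock protocol), a caller that
wants freshly unwired pages placed on the inactive queue would now write:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    static void
    release_page(vm_page_t m)
    {
            vm_page_lock(m);
            vm_page_unwire(m, PQ_INACTIVE); /* queue replaces the old 'active' flag */
            vm_page_unlock(m);
    }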
portsnap extract, where previously it would panic. Clearly someone
who knows pmap should optimize this code per alc's comment...
Submitted by: alc
MFC after: probably
In particular, don't check the value of the bus_dma map against NULL
to determine if either bus_dmamem_alloc() or bus_dmamap_load() succeeded.
Instead, assume that bus_dmamap_load() succeeded (and thus that
bus_dmamap_unload() should be called) if the bus address for a resource
is non-zero, and assume that bus_dmamem_alloc() succeeded (and thus
that bus_dmamem_free() should be called) if the virtual address for a
resource is not NULL.
In many cases these bugs could result in leaks when a driver was detached.
Reviewed by: yongari
MFC after: 2 weeks
don't create a map before calling bus_dmamem_alloc() (such maps were
leaked). It is believed that the extra destroy of the map was generally
harmless since bus_dmamem_alloc() often uses special maps for which
bus_dmamap_destroy() is a no-op (e.g. on x86).
Reviewed by: scottl
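A sketch of the teardown pattern these two cleanups converge on, using an invented
softc-style structure: bus_dmamem_alloc() supplies the map (no map is created
beforehand), bus_dmamap_unload() is keyed on a non-zero bus address, bus_dmamem_free()
on a non-NULL virtual address, and bus_dmamap_destroy() is never called on the map
obtained from bus_dmamem_alloc().

    #include <sys/param.h>
    #include <sys/bus.h>
    #include <machine/bus.h>

    /* Hypothetical per-device DMA bookkeeping. */
    struct foo_dma {
            bus_dma_tag_t   tag;
            bus_dmamap_t    map;        /* provided by bus_dmamem_alloc() */
            void            *vaddr;     /* non-NULL: bus_dmamem_alloc() succeeded */
            bus_addr_t      busaddr;    /* non-zero: bus_dmamap_load() succeeded */
    };

    static void
    foo_dma_free(struct foo_dma *d)
    {
            if (d->busaddr != 0)
                    bus_dmamap_unload(d->tag, d->map);
            if (d->vaddr != NULL)
                    bus_dmamem_free(d->tag, d->vaddr, d->map);
            if (d->tag != NULL)
                    bus_dma_tag_destroy(d->tag);
            d->busaddr = 0;
            d->vaddr = NULL;
            d->tag = NULL;
    }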
a partially populated reservation becomes fully populated, and decrease this
field when a fully populated reservation becomes partially populated.
Use this field to simplify the implementation of pmap_enter_object() on
amd64, arm, and i386.
On all architectures where we support superpages, the cost of creating a
superpage mapping is roughly the same as creating a base page mapping. For
example, both kinds of mappings entail the creation of a single PTE and PV
entry. With this in mind, use the page size field to make the
implementation of vm_map_pmap_enter(..., MAP_PREFAULT_PARTIAL) a little
smarter. Previously, if MAP_PREFAULT_PARTIAL was specified to
vm_map_pmap_enter(), that function would only map base pages. Now, it will
create up to 96 base page or superpage mappings.
Reviewed by: kib
Sponsored by: EMC / Isilon Storage Division