This follows section 18.4.2.2 SD Soft Reset Flow in the TI AM335x Technical
Reference Manual and seems to fix the "ti_mmchs0: Error: current cmd NULL,
already done?" messages.
GCC outputs pre-UAL asm and expects the ldcl instruction with a condition
in the form ldc<c>l, whereas the code produces the instruction in the UAL
form ldcl<c>. Work around this by checking whether we are building with
clang or GCC and adjusting the instruction accordingly.
While here, correct the cmp instruction's operand to include the # before
the immediate value.
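A sketch of the workaround (the macro name is hypothetical; the commit
does not identify the file):

    /* GCC's assembler wants the condition before the "l" length
     * suffix; clang's integrated assembler wants it after. */
    #ifdef __clang__
    #define LDCLNE  ldclne          /* UAL: ldcl<c> */
    #else
    #define LDCLNE  ldcnel          /* pre-UAL: ldc<c>l */
    #endif

            cmp     r2, #0x00000000 /* '#' now precedes the immediate */
            LDCLNE  p2, c10, [r0], #4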
ePWM is controlled by the sysctl nodes dev.am335x_pwm.N.period,
dev.am335x_pwm.N.dutyA and dev.am335x_pwm.N.dutyB, which control the
PWM period and the duty cycles for channels A and B respectively.
Period and duty cycle are measured in clock ticks. The default
clock frequency for the AM335x PWM subsystem is 100 MHz.
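A usage sketch, assuming integer-valued nodes (the sysctl names are from
this commit; the numbers are illustrative): with the default 100 MHz
clock, a period of 2000 ticks yields a 50 kHz signal, and a dutyA of 500
ticks gives a 25% duty cycle on channel A.

    #include <sys/types.h>
    #include <sys/sysctl.h>

    static int
    pwm0_setup(void)
    {
            int period = 2000, dutyA = 500;

            if (sysctlbyname("dev.am335x_pwm.0.period", NULL, NULL,
                &period, sizeof(period)) != 0)
                    return (-1);
            return (sysctlbyname("dev.am335x_pwm.0.dutyA", NULL, NULL,
                &dutyA, sizeof(dutyA)));
    }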
pmap_remove_all()
This flag should already be cleared by pmap_nuke_pv().
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
When doing pmap_enter_locked(), enable write permission only when the
access type indicates an attempt to write. Otherwise, leave the page read
only but mark it writable in pv_flags.
This will result in:
1. Marking the page writable during pmap_enter() only when we are sure it
is about to be written, so that we do not take a redundant permission
fault on the write attempt.
2. Keeping the page read only when it is permitted to be written but there
was no actual write attempt. Hence, we will take a permission fault on
write access and mark the page writable in pmap_fault_fixup(), which will
record the modification status. (See the sketch below.)
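A minimal sketch of the policy (PVF_WRITE and L2_S_PROT_W stand in for
the real ARM pmap definitions; the function itself is illustrative, not
the committed code):

    static void
    pte_prot_apply(vm_prot_t prot, vm_prot_t access, u_int *pv_flags,
        u_int *npte)
    {

            if (prot & VM_PROT_WRITE) {
                    *pv_flags |= PVF_WRITE;       /* may be written later */
                    if (access & VM_PROT_WRITE)
                            *npte |= L2_S_PROT_W; /* writing right away */
                    /* Otherwise keep the PTE read only; the permission
                     * fault taken on the first write lets
                     * pmap_fault_fixup() grant write access and record
                     * the modification. */
            }
    }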
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
Force the UMA zone to allocate service structures like slabs using its
own allocator. The uma_debug code performs atomic ops on uma_slab_t
fields, and the safety of these operations is not guaranteed with
write-back caches.
Issues were noted by Bruce Evans and are present on all architectures.
On i386, a counter fetch should use an atomic read of the 64bit value;
otherwise a carry from an increment on another CPU could be lost for the
given fetch, producing an error of 2^32. If a 64bit read (cmpxchg8b) is
not available on the machine, it cannot be SMP, and it is enough to
disable preemption around the read to avoid a split read.
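A sketch of the i386 fetch, close to (but not verbatim) the committed
code: executing cmpxchg8b with ebx:ecx set equal to eax:edx reads the
64bit slot atomically without modifying it.

    static inline uint64_t
    counter_read_one_8b(uint64_t *p)
    {
            uint32_t res_lo, res_high;

            __asm __volatile(
                "movl  %%eax,%%ebx\n\t"
                "movl  %%edx,%%ecx\n\t"
                "cmpxchg8b (%2)"
                : "=a" (res_lo), "=d" (res_high)
                : "SD" (p)
                : "cc", "ebx", "ecx");
            return (res_lo + ((uint64_t)res_high << 32));
    }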
On x86 the counter increment is not atomic on purpose, which makes it
possible for the store of the incremented result to overwrite a just
zeroed per-CPU slot. The effect would be a counter going off by an
arbitrary value after zeroing. Perform the counter zeroing on the
same processor which does the increments, making the operations
mutually exclusive. On i386, same as for the fetch, if cmpxchg8b is
not available the machine is not SMP and we disable preemption for
zeroing.
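A sketch of the zeroing side (the per-CPU stride and layout are
assumptions about counter(9) internals): running the clear on each CPU
in turn via smp_rendezvous() serializes it with that CPU's own
increments.

    static void
    counter_zero_one_cpu(void *arg)
    {

            *(uint64_t *)((char *)arg +
                sizeof(struct pcpu) * PCPU_GET(cpuid)) = 0;
    }

    static void
    counter_zero(uint64_t *c)
    {

            smp_rendezvous(NULL, counter_zero_one_cpu, NULL, c);
    }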
PowerPC64 is treated the same as amd64.
For other architectures, changes were made to allow the compilation to
succeed, without fixing the issues with zeroing or fetching. It should
be possible to handle them by using 64bit loads and stores that are
atomic WRT preemption (assuming the architectures are also converted
from using critical sections to proper asm). If an architecture does
not provide the facility, using a global (spin) mutex would be a
non-optimal but working solution.
Noted by: bde
Sponsored by: The FreeBSD Foundation
- Initialize SMAPx registers too although they're unused in QEMU
- Do not pass IO/MEM resources to the upper bus for activation; handle
them locally. Previously the ACTIVATE method of the upper bus was a
no-op, so nothing bad happened. But now FDT maps physaddr to vaddr,
and this causes trouble: fdtbus_activate_resource assumes that
bustag/bushandle are already set, which in this case is wrong.
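A sketch of the local activation (the driver name is hypothetical and
the use of fdtbus_bs_tag is illustrative):

    static int
    mybus_activate_resource(device_t bus, device_t child, int type,
        int rid, struct resource *r)
    {
            bus_space_handle_t bh;

            if (type == SYS_RES_MEMORY || type == SYS_RES_IOPORT) {
                    /* Map the physical range ourselves instead of
                     * asking the parent to do it. */
                    if (bus_space_map(fdtbus_bs_tag, rman_get_start(r),
                        rman_get_size(r), 0, &bh) != 0)
                            return (ENXIO);
                    rman_set_bustag(r, fdtbus_bs_tag);
                    rman_set_bushandle(r, bh);
                    return (rman_activate_resource(r));
            }
            return (bus_generic_activate_resource(bus, child, type,
                rid, r));
    }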
past a trapframe.
Use this macro in exception_exit, as that is the function the unwinder
enters: the functions that store the frame set lr to point to it.
Provide both __sync_*-style and __atomic_*-style functions that perform
the atomic operations on ARMv5 by using Restartable Atomic Sequences.
While there, clean up some pieces of code where it is sufficient to use
a regular uint32_t to store register contents rather than a full reg_t.
Also sync this back to the MIPS code.
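A simplified sketch of a Restartable Atomic Sequence (the magic
addresses are assumptions standing in for the machine/sysarch.h
definitions, and the committed code uses inline assembly to pin the
instruction sequence; this C version only illustrates the idea):

    #include <stdint.h>

    /* Assumed addresses where the kernel expects the sequence bounds
     * to be published. */
    #define RAS_START ((volatile uint32_t *)0xe0000004)
    #define RAS_END   ((volatile uint32_t *)0xe0000008)

    uint32_t
    fetch_and_add_4(volatile uint32_t *mem, uint32_t val)
    {
            uint32_t old;

            /* Publish the window: if the kernel preempts the thread
             * between "again" and "done", it restarts it at "again". */
            *RAS_START = (uint32_t)&&again;
            *RAS_END = (uint32_t)&&done;
    again:
            old = *mem;             /* load */
            *mem = old + val;       /* modify + store */
    done:
            *RAS_START = 0;         /* close the window */
            *RAS_END = 0xffffffff;
            return (old);
    }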
Check which VFP variant we are on and, if it is a VFPv3 or v4 with 32
double registers, save these. This fixes VFP support on the Raspberry
Pi. While here, clean up fmrx and fmxr to use the register names from
vfp.h as opposed to the raw register names.
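A sketch of the check (the accessor and register names are assumptions
in the spirit of vfp.h; the field positions are from the ARM ARM):

    static int is_d32;

    static void
    vfp_detect(void)
    {
            u_int fpsid, mvfr0;

            fpsid = fmrx(VFPSID);
            /* FPSID[22:16]: subarchitecture; 2 and up is VFPv3/v4. */
            if (((fpsid >> 16) & 0x7f) >= 2) {
                    mvfr0 = fmrx(VMVFR0);
                    /* MVFR0[3:0] == 2: bank of 32 64bit registers. */
                    if ((mvfr0 & 0xf) == 2)
                            is_d32 = 1;
            }
    }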
Basically the situation is as follows:
- When using Clang + armv6, we should not need any intrinsics. Clang
should support this, but due to a target misconfiguration it currently
does not. We should fix this in Clang.
- When using Clang + noarmv6, provide __atomic_* functions that disable
interrupts.
- When using GCC + armv6, we can provide __sync_* intrinsics, similar to
what we did for MIPS. As ARM and MIPS are quite similar, simply base
this implementation on the one I did for MIPS.
- When using GCC + noarmv6, disable the interrupts, like we do for
Clang.
This implementation still lacks functions for noarmv6 userspace. To be
done.
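A sketch of the interrupts-off fallback for the kernel (the body is
illustrative, not the committed code): on a uniprocessor, masking
interrupts makes the read-modify-write sequence atomic.

    uint32_t
    __sync_fetch_and_add_4(volatile uint32_t *mem, uint32_t val)
    {
            register_t s;
            uint32_t old;

            s = intr_disable();
            old = *mem;
            *mem = old + val;
            intr_restore(s);
            return (old);
    }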
* Stop pretending we support anything other than ELF by removing code
surrounded by #ifdef __ELF__ ... #endif.
* Remove _JB_MAGIC_SETJMP and _JB_MAGIC__SETJMP, they are defined in
setjmp.h, which can be included from asm.
* Fix the spelling of dependent.
* Rename END to _END, and add END and ASEND to complement ENTRY and
ASENTRY respectively.
* Add macros to simplify accessing the Global Offset Table, some of these
will be used in the upcoming update to the setjmp functions.
In order to become independent of the Coherency Fabric frequency,
configure the Timer and Watchdog to operate in 25MHz mode.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Copy the given range of mappings from the source map to the
destination map, thereby reducing the number of VM faults on fork.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
The pmap_enter_locked() implementation was very ambiguous and confusing.
Rearrange it so that each part of the mapping creation is separated.
Avoid walking through redundant conditions.
Extract the vector_page-specific PTE setup from the normal PTE setup.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
Using PVF_MOD, PVF_REF and PVF_EXEC is redundant, as we can get the
proper info from the PTE bits.
When a mapping is marked as executable and has been referenced, we assume
that it has been executed. Similarly, when a mapping is set to be writable
and is referenced, that must have been due to a write access to it.
The PVF_MOD and PVF_REF flags are kept just for pmap_clearbit() usage,
to pass the information on which bit should be cleared.
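A sketch of the inference (the PTE bit names are assumptions):

    static __inline boolean_t
    pte_modified(pt_entry_t pte)
    {

            /* Write permission is only granted on an actual write, so
             * a referenced, hardware-writable mapping must have been
             * written to. */
            return ((pte & L2_S_REF) != 0 && L2_S_WRITABLE(pte));
    }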
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
Use pmap_find_pv() if needed instead of duplicating its code throughout
pmap-v6.
Avoid a possible NULL pointer dereference in pmap_enter_locked().
When trying to get m->md.pv_memattr, make sure that m != NULL; in
particular, m is NULL for the vector_page.
Do not set the PGA_REFERENCED flag in pmap_enter_pv().
On ARM, any new page reference will result either in entering a new
mapping by calling pmap_enter(), etc., or in fixing up an existing
mapping in pmap_fault_fixup().
Therefore we set the PGA_REFERENCED flag in the cases mentioned above,
and setting it again later in pmap_enter_pv() is just a waste of cycles.
Delete unused pm_pdir pointer from the pmap structure.
Rearrange brackets in the fault cause detection in trap.c.
Place the brackets correctly so that the flow of the conditions can be
seen at a glance.
Unify naming in pmap-v6.c and improve style.
Use naming common to the whole pmap and compatible with the other pmaps;
improve style where possible:
pm -> pmap
pg -> m
opg -> om
*pt -> *ptep
*pte -> *ptep
*pde -> *pdep
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
Store the referenced bit in the PTE and enable the Access Flag in CPU
control. With AF enabled, each valid mapping needs to have the referenced
bit set in its PTE in order to be cached in the TLB.
The AP[0] bit is used as the reference flag.
All access permissions are encoded by AP[2:1], wherein AP[1] is in fact
"user enable" and AP[2] (APX) is "write disable".
All mappings are always set to be valid. Reference emulation is performed
by setting/clearing the reference flag in the PTE.
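A sketch of the resulting small-page bit assignments (the macro names
are assumptions; the bit positions are the ARMv6/v7 L2 descriptor
format):

    #define L2_S_REF   (1 << 4)  /* AP[0]: Access Flag / referenced */
    #define L2_S_USER  (1 << 5)  /* AP[1]: user access enable */
    #define L2_S_RO    (1 << 9)  /* AP[2] (APX): write disable */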
md.pvh_attrs is no longer necessary; however, pv_flags is still used
for now.
Marking a vm_page as "dirty" or "referenced" is performed on:
- page or flag fault servicing in pmap_fault_fixup(), based on the fault
type
- vm_fault servicing in pmap_enter(), according to the desired protection
and the faulting access type
Redundant page marking has been removed, as on ARM we know exactly when a
particular page is referenced or is going to be written.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
o Relax the locking assertions for pmap_enter_object() and add them also
to architectures that currently don't have any
o Introduce VM_OBJECT_LOCK_DOWNGRADE(), which is basically a downgrade
operation on the per-object rwlock
o Use all the mechanisms above to make vm_map_pmap_enter() work most of
the time with only read locks.
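A sketch of the new macro (the rwlock field name is assumed):

    #define VM_OBJECT_LOCK_DOWNGRADE(object) \
            rw_downgrade(&(object)->lock)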
Sponsored by: EMC / Isilon storage division
Reviewed by: alc