Change the order of operations for the initial cache setup. Turning off
the cache before the clean/invalidate ensured that no new lines could come into the cache or migrate between levels during the operation, but that may not be safe on some chips. Instead, if the cache was enabled on entry, do the wbinv while it's still enabled, then disable it and do a separate invalidate pass. After the initial writeback we know there are no dirty lines left, and no new dirty lines can be created as long as we carefully avoid touching memory before turning the cache off. Add a comment about that so no new code gets inserted between those points.
This commit is contained in:
parent 3ed72a94a4
commit 9cb25d2d4c

Notes:
    svn2git  2020-12-20 02:59:44 +00:00
    svn path=/head/; revision=276445
@@ -84,11 +84,9 @@ ASENTRY_NP(_start)
 	 */
 	mrc	CP15_SCTLR(r7)
 	tst	r7, #CPU_CONTROL_DC_ENABLE
-	beq	1f
-	bic	r7, #CPU_CONTROL_DC_ENABLE
-	mcr	CP15_SCTLR(r7)
-	ISB
-	bl	dcache_wbinv_poc_all
+	blne	dcache_wbinv_poc_all
+
+	/* ! Do not write to memory between wbinv and disabling cache ! */
 
 	/*
 	 * Now there are no dirty lines, but there may still be lines marked
@@ -96,6 +94,7 @@ ASENTRY_NP(_start)
 	 * before setting up new page tables and re-enabling the mmu.
 	 */
 1:
+	bic	r7, #CPU_CONTROL_DC_ENABLE
 	bic	r7, #CPU_CONTROL_MMU_ENABLE
 	bic	r7, #CPU_CONTROL_IC_ENABLE
 	bic	r7, #CPU_CONTROL_UNAL_ENABLE