Commit Graph

17 Commits

Author SHA1 Message Date
Zbigniew Bodek
232e189a56 Add support for branch instruction on armv7 with ptrace single step
The previous code supported only straight-line code without any kind of
branch instructions. To change that, a new function was implemented
which parses the current instruction and returns the address where
a jump might happen (the alternative address).
The mdthread structure was extended to support two breakpoints
(one directly below the current instruction and the second placed
at the alternative location).
One of them must trigger regardless of whether the instruction was
executed or skipped because of its condition field.
Upon cleanup, both software breakpoints are removed.

This implementation parses only the most common instructions
found in practice (roughly 99.99% of all), but there is a chance
some remain that are not covered by the parsing routine.
Parsing is done only for 32-bit ARM instructions; neither Thumb nor
Thumb-2 is supported.

Reviewed by:   kib
Submitted by:  Wojciech Macek <wma@semihalf.com>
Obtained from: Semihalf
Sponsored by:  Juniper Networks Inc.
Differential Revision: https://reviews.freebsd.org/D4021
2015-11-02 16:56:34 +00:00
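
A hedged sketch of the alternative-address idea described in the commit above, covering only the plain B/BL encoding; the function name and the standalone harness are illustrative inventions, not the kernel's actual parsing routine.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Decide whether a 32-bit ARM instruction can divert control flow and,
 * if so, compute the alternative address.  Sketch only: it recognizes
 * just the plain B/BL encoding.
 */
static int
branch_target(uint32_t insn, uint32_t pc, uint32_t *alt)
{
    /* B/BL: cond 101 L imm24 (bits 27-25 == 0b101). */
    if ((insn & 0x0e000000) == 0x0a000000) {
        int32_t off = insn & 0x00ffffff;   /* 24-bit offset */
        if (off & 0x00800000)              /* sign-extend */
            off -= 0x01000000;
        /* ARM reads PC as the instruction address + 8. */
        *alt = pc + 8 + ((uint32_t)off << 2);
        return 1;
    }
    return 0;   /* not a branch this sketch recognizes */
}

int
main(void)
{
    uint32_t alt;

    /* 0xeafffffe encodes "b ." (an unconditional branch to itself). */
    if (branch_target(0xeafffffe, 0x8000, &alt))
        printf("alternative address: 0x%" PRIx32 "\n", alt);
    return 0;
}

With both the following instruction and this alternative address patched with software breakpoints, one of the two must fire whether or not the conditional branch is taken, which is the invariant the commit relies on.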
Konstantin Belousov
4a0589d1ba Typo. 2015-08-20 13:37:08 +00:00
Andrew Turner
ae160b23fa We only support the ARM EABI in head, remove the check on __ARM_EABI__. 2015-05-31 10:51:06 +00:00
Andrew Turner
b7112ead32 Clean up struct syscall_args:
1. Align to a 64-bit address so 64-bit data will be correctly aligned.
2. Add a comment explaining why.
3. Remove an unneeded value from the struct.

This fixes an issue where the struct may not be correctly aligned on the
stack in the syscall function. That can lead to accessing a 64-bit value
at a non-64-bit-aligned address, which will raise an exception and panic
the kernel.

We have been lucky in that on arm and armv6 both clang and gcc correctly
align the data even without being asked to; however, on armeb with clang
this turns out not to be the case. This change tells the compiler we
really do need the struct to be aligned.

Reported and tested by:	jmg (on armeb with clang)
MFC after:	1 Week [1, 2]
2015-05-17 18:35:58 +00:00
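
The alignment fix above can be pictured with a small, hypothetical structure; the names and the 8-byte figure below are illustrative stand-ins, not the actual struct syscall_args definition.

#include <stdint.h>
#include <stdio.h>

/*
 * Force 8-byte alignment so the compiler can never place the structure
 * at a 4-byte-aligned stack slot, which is the failure mode described
 * in the commit message.
 */
struct demo_syscall_args {
    uint64_t args[8];   /* 64-bit argument slots */
    int      code;      /* syscall number */
} __attribute__((aligned(8)));

int
main(void)
{
    struct demo_syscall_args sa;

    printf("alignof = %zu, &sa %% 8 = %zu\n",
        _Alignof(struct demo_syscall_args),
        (size_t)((uintptr_t)&sa % 8));
    return 0;
}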
Ian Lepore
7e55f8c198 Add a new trap-v6.c which has support for all armv7 exceptions. This
mostly paves the way for the new pmap code, and shouldn't result in any
noticeable behavior differences.

Submitted by: Svatopluk Kraus <onwahe@gmail.com>,
              Michal Meloun <meloun@miracle.cz>
2015-01-03 22:33:18 +00:00
Andrew Turner
9fd57b44e4 * Correct KINFO_PROC_SIZE for ARM EABI.
* Update the syscall interface to pass in the syscall value in register r7.
2013-01-17 09:52:35 +00:00
Konstantin Belousov
6bfe4c78c8 Remove unused define.
MFC after:	1 month
2011-10-07 16:09:44 +00:00
Konstantin Belousov
bc3a00cc9c Convert ARM to the syscallenter/syscallret system call sequence handlers.
Tested by:	gber
MFC after:	1 month
2011-10-04 13:14:24 +00:00
Warner Losh
ae5b8077df Make md_tp a register_t not a void *. This will keep us from
accidentally dereferencing it and might be one fewer thing to change
if arm64 happens...

Submitted by:	rwatson's question on irc...
2011-02-05 03:30:29 +00:00
Konstantin Belousov
8bac98182a Style: use #define<TAB> instead of #define<SPACE>.
Noted by:	bde, pluknet gmail com
MFC after:	11 days
2010-04-27 09:48:43 +00:00
Konstantin Belousov
ed7806879b Move the constants specifying the size of struct kinfo_proc into
machine-specific header files. Add KINFO_PROC32_SIZE for struct
kinfo_proc32 for architectures providing COMPAT_FREEBSD32. Add
CTASSERT for the size of struct kinfo_proc32.

Submitted by:	pluknet
Reviewed by:	imp, jhb, nwhitehorn
MFC after:	2 weeks
2010-04-24 12:49:52 +00:00
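
The CTASSERT mentioned above is a compile-time size check; a hedged illustration using the standard C11 equivalent follows, with a made-up structure and constant standing in for kinfo_proc32 and KINFO_PROC32_SIZE.

#include <stddef.h>

/*
 * Fail the build if the structure's size drifts from the ABI constant.
 * The kernel uses its CTASSERT macro; _Static_assert is the portable
 * C11 form used in this sketch.
 */
#define DEMO_KINFO_SIZE 16

struct demo_kinfo {
    int  ki_pid;
    int  ki_ppid;
    char ki_comm[8];
};

_Static_assert(sizeof(struct demo_kinfo) == DEMO_KINFO_SIZE,
    "struct demo_kinfo size changed; update DEMO_KINFO_SIZE");

int
main(void)
{
    return 0;
}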
Olivier Houchard
a43268a746 To prevent various race conditions in the RAS code, store and restore the
values in ARM_RAS_START and ARM_RAS_END at context switch time.

MFC after:	1 week
2009-02-12 23:23:30 +00:00
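
The save/restore idea above can be sketched as follows; the structure, function names, and the two-word RAS page are illustrative only, and the real work happens in the MD context-switch path rather than in C like this.

#include <stdint.h>

struct demo_ras_area {
    uint32_t ras_start;   /* saved ARM_RAS_START value */
    uint32_t ras_end;     /* saved ARM_RAS_END value */
};

/* On switch-out, remember the outgoing thread's RAS window... */
static void
demo_ras_switch_out(volatile uint32_t *ras_words, struct demo_ras_area *md)
{
    md->ras_start = ras_words[0];
    md->ras_end = ras_words[1];
}

/* ...and on switch-in, install the incoming thread's saved window. */
static void
demo_ras_switch_in(volatile uint32_t *ras_words, const struct demo_ras_area *md)
{
    ras_words[0] = md->ras_start;
    ras_words[1] = md->ras_end;
}

int
main(void)
{
    uint32_t page[2] = { 0, 0 };
    struct demo_ras_area a = { 0x1000, 0x1010 };

    demo_ras_switch_in(page, &a);    /* thread A runs */
    demo_ras_switch_out(page, &a);   /* thread A is switched out */
    return 0;
}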
John Baldwin
c6a37e8413 Divorce critical sections from spinlocks. Critical sections as denoted by
critical_enter() and critical_exit() are now solely a mechanism for
deferring kernel preemptions.  They no longer have any effect on
interrupts.  This means that standalone critical sections are now very
cheap as they are simply unlocked integer increments and decrements for the
common case.

Spin mutexes now use a separate KPI implemented in MD code: spinlock_enter()
and spinlock_exit().  This KPI is responsible for providing whatever MD
guarantees are needed to ensure that a thread holding a spin lock won't
be preempted by any other code that will try to lock the same lock.  For
now all archs continue to block interrupts in a "spinlock section" as they
did formerly in all critical sections.  Note that I've also taken this
opportunity to push a few things into MD code rather than MI.  For example,
critical_fork_exit() no longer exists.  Instead, MD code ensures that new
threads have the correct state when they are created.  Also, we no longer
try to fixup the idlethreads for APs in MI code.  Instead, each arch sets
the initial curthread and adjusts the state of the idle thread it borrows
in order to perform the initial context switch.

This change is largely a big NOP, but the cleaner separation it provides
will allow for more efficient alternative locking schemes in other parts
of the kernel (bare critical sections rather than per-CPU spin mutexes
for per-CPU data for example).

Reviewed by:	grehan, cognet, arch@, others
Tested on:	i386, alpha, sparc64, powerpc, arm, possibly more
2005-04-04 21:53:56 +00:00
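
A toy model of the split described above, with every name invented for illustration: critical sections become a per-thread nesting count, while the MD spinlock KPI additionally masks interrupts.

#include <stdio.h>

struct demo_thread {
    int td_critnest;        /* preemption-deferral nesting */
    int td_spinlock_count;  /* spinlock-section nesting (MD) */
    int td_saved_intr;      /* saved interrupt state (MD) */
};

/* Stand-ins for the MD interrupt primitives. */
static int  demo_intr_disable(void) { return 1; }
static void demo_intr_restore(int s) { (void)s; }

static void
demo_critical_enter(struct demo_thread *td)
{
    td->td_critnest++;      /* cheap: a plain increment */
}

static void
demo_critical_exit(struct demo_thread *td)
{
    td->td_critnest--;      /* a deferred preemption could be handled here */
}

static void
demo_spinlock_enter(struct demo_thread *td)
{
    if (td->td_spinlock_count++ == 0) {
        td->td_saved_intr = demo_intr_disable();  /* block interrupts */
        demo_critical_enter(td);                  /* and defer preemption */
    }
}

static void
demo_spinlock_exit(struct demo_thread *td)
{
    if (--td->td_spinlock_count == 0) {
        demo_critical_exit(td);
        demo_intr_restore(td->td_saved_intr);
    }
}

int
main(void)
{
    struct demo_thread td = { 0, 0, 0 };

    demo_spinlock_enter(&td);
    printf("critnest=%d spinlocks=%d\n", td.td_critnest, td.td_spinlock_count);
    demo_spinlock_exit(&td);
    return 0;
}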
Olivier Houchard
b6e4194946 Add the field in the md part of the struct thread required by ARM_[GET|SET]_TP. 2005-02-26 00:02:14 +00:00
Olivier Houchard
9026d36c6e Add support for ptrace() and gdb breakpoints. 2005-01-10 22:43:16 +00:00
Warner Losh
d8315c79d9 Start all license statements with /*- 2005-01-05 21:58:49 +00:00
Olivier Houchard
6fc729af63 Import FreeBSD/arm kernel bits.
It only supports sa1110 (on simics) right now, but xscale support should come
soon.
Some of the initial work has been provided by:
Stephane Potvin <sepotvin at videotron.ca>
Most of this comes from NetBSD.
2004-05-14 11:46:45 +00:00