Fix how we place each object's thread-local data. The code used was based
on the Variant II code; however, arm64 uses Variant I. The former places the
thread pointer after the data, pointing at the thread control block, while
the latter places the thread pointer and thread control block before the
data.
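
As a rough illustration of the difference (a minimal sketch with hypothetical
helper names, not code from the tree), resolving an object's TLS offset
against the thread pointer looks like this under each variant:

#include <stdint.h>

/* Variant II (e.g. x86): the thread pointer points at the TCB and each
 * object's data sits below it, at a negative offset. */
static inline void *
variant2_tls_addr(char *tp, uintptr_t tlsoffset)
{
	return (tp - tlsoffset);
}

/* Variant I (arm64): the TCB sits at the thread pointer and each object's
 * data follows it, so offsets are positive and must already account for
 * the 16-byte TCB. */
static inline void *
variant1_tls_addr(char *tp, uintptr_t tlsoffset)
{
	return (tp + tlsoffset);
}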

Because of this we need to use the size of the previous entry to calculate
where to place the current entry. We also need to reserve 16 bytes at the
start for the thread control block.
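
A worked example of the new placement, using the updated macros from the
second hunk below with hypothetical module sizes and alignments:

#include <stdio.h>

/* Copies of the updated macros from the second hunk below. */
#define round(size, align)	(((size) + (align) - 1) & ~((align) - 1))
#define calculate_first_tls_offset(size, align)	round(16, align)
#define calculate_tls_offset(prev_offset, prev_size, size, align) \
	round(prev_offset + prev_size, align)

int
main(void)
{
	/* Hypothetical modules: 24 bytes, 8-byte align; then 40 bytes, 16-byte align. */
	unsigned long off1 = calculate_first_tls_offset(24, 8);	/* round(16, 8)  == 16 */
	unsigned long off2 = calculate_tls_offset(off1, 24, 40, 16);	/* round(40, 16) == 48 */

	printf("module 1 at tp + %lu, module 2 at tp + %lu\n", off1, off2);
	return (0);
}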

This also fixes TLS_TCB_SIZE to its correct value: the size of two
unsigned longs, i.e. 2 * 8 = 16 bytes.
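
For illustration, a minimal sketch of what those 16 bytes cover (the struct
and field names here are hypothetical, not FreeBSD's definitions):

/* Hypothetical sketch of the 16 reserved bytes; FreeBSD's actual TCB type
 * lives elsewhere and may differ. */
struct tcb_sketch {
	unsigned long tcb_slot0;	/* conventionally points at the DTV */
	unsigned long tcb_slot1;	/* reserved for the thread library */
};
/* 2 * sizeof(unsigned long) == 2 * 8 == 16 bytes on LP64 arm64. */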

While here, remove the bogus adjustment of the pointer in the
R_AARCH64_TLS_TPREL64 case. The relocated value should be the offset of the
data relative to the thread pointer, including the thread control block.
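
A hedged worked example with hypothetical numbers (not taken from the tree):
if the defining object's data starts at tp + 16 and the symbol sits 8 bytes
into it with a zero addend, the relocation now resolves as follows:

/* Hypothetical values; tlsoffset already includes the 16-byte TCB. */
unsigned long
example_tprel(void)
{
	unsigned long st_value = 8, r_addend = 0, tlsoffset = 16;

	/* *where = st_value + r_addend + tlsoffset == 24; the variable then
	 * lives at tp + 24, just past the thread control block. */
	return (st_value + r_addend + tlsoffset);
}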

Sponsored by:	ABT Systems Ltd
Andrew Turner 2015-09-01 15:57:03 +00:00
parent 878165d2ef
commit 7c81294224
Notes: svn2git 2020-12-20 02:59:44 +00:00
svn path=/head/; revision=287370
2 changed files with 4 additions and 4 deletions


@@ -381,7 +381,7 @@ reloc_non_plt(Obj_Entry *obj, Obj_Entry *obj_rtld, int flags,
 		}
 		*where = def->st_value + rela->r_addend +
-		    defobj->tlsoffset - TLS_TCB_SIZE;
+		    defobj->tlsoffset;
 		break;
 	case R_AARCH64_RELATIVE:
 		*where = (Elf_Addr)(obj->relocbase + rela->r_addend);


@@ -64,12 +64,12 @@ Elf_Addr reloc_jmpslot(Elf_Addr *where, Elf_Addr target,
 #define	round(size, align) \
 	(((size) + (align) - 1) & ~((align) - 1))
 #define	calculate_first_tls_offset(size, align) \
-	round(size, align)
+	round(16, align)
 #define	calculate_tls_offset(prev_offset, prev_size, size, align) \
-	round((prev_offset) + (size), align)
+	round(prev_offset + prev_size, align)
 #define	calculate_tls_end(off, size)	((off) + (size))
-#define	TLS_TCB_SIZE	8
+#define	TLS_TCB_SIZE	16
 typedef struct {
 	unsigned long ti_module;
 	unsigned long ti_offset;