a3c8e04ef7
rtld currently reserves the whole address space needed for the mapping by mmaping the object's file, with the protection and mode of the first loadable segment, over the whole region. Then it maps the other segments at the appropriate addresses inside the region. On amd64, because the default alignment of the segments is 1GB, the subsequent segment mappings leave holes in the region, which usually contain a mapping of the object's file past EOF. Such mappings prevent wiring of the address space, because the pages cannot be faulted in.

Change the way the mapping of an ELF object is constructed: first map PROT_NONE anonymous memory over the whole range, then map the segments of the object over it. Take advantage of this new order and allocate .bss by changing the protection of the range instead of remapping.

Note that we cannot simply keep the holes between segments, because other mappings may be made there. Among other issues, when the dso is unloaded, rtld unmaps the whole region, which would delete such unrelated mappings.

The kernel ELF image activator does leave holes between segments, but this is not critical for now, because the kernel loads only the executable image and the interpreter, neither of which can be unloaded. This will be fixed later, if needed.

Reported and tested by:	Hans Ottevanger <fbsdhackers beasties demon nl>
Suggested and reviewed by:	kan, alc
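As an illustration of the mapping order described above, here is a minimal, hypothetical sketch; the names `map_object_sketch` and `elf_flags_to_prot`, and the simplified alignment and error handling, are not from the actual map_object.c. The whole range is first reserved as inaccessible anonymous memory, each loadable segment is then mapped over it with MAP_FIXED, and the .bss tail is made accessible with mprotect() rather than mapped again.

```c
/*
 * Illustrative sketch only, not the actual map_object.c code.
 * Assumes segment p_vaddr/p_offset/p_filesz values are offsets from the
 * start of the region and already page-aligned; zeroing of the partial
 * page after p_filesz and unwinding (munmap) on error are omitted.
 */
#include <sys/mman.h>
#include <elf.h>
#include <stddef.h>

static int
elf_flags_to_prot(Elf64_Word flags)
{
	int prot = 0;

	if (flags & PF_R)
		prot |= PROT_READ;
	if (flags & PF_W)
		prot |= PROT_WRITE;
	if (flags & PF_X)
		prot |= PROT_EXEC;
	return (prot);
}

static void *
map_object_sketch(int fd, const Elf64_Phdr *segs, int nsegs, size_t mapsize)
{
	char *base;
	int i, prot;

	/* Reserve the whole region; no holes are left for other mappings. */
	base = mmap(NULL, mapsize, PROT_NONE, MAP_ANON | MAP_PRIVATE, -1, 0);
	if (base == MAP_FAILED)
		return (NULL);

	for (i = 0; i < nsegs; i++) {
		prot = elf_flags_to_prot(segs[i].p_flags);

		/* Map the file-backed part of the segment over the reservation. */
		if (mmap(base + segs[i].p_vaddr, segs[i].p_filesz, prot,
		    MAP_FIXED | MAP_PRIVATE, fd, segs[i].p_offset) == MAP_FAILED)
			return (NULL);

		/*
		 * .bss: the pages past p_filesz are already zero-filled
		 * anonymous memory from the initial reservation, so only
		 * their protection needs to change; no second mmap is needed.
		 */
		if (segs[i].p_memsz > segs[i].p_filesz &&
		    mprotect(base + segs[i].p_vaddr + segs[i].p_filesz,
		    segs[i].p_memsz - segs[i].p_filesz, prot) == -1)
			return (NULL);
	}
	return (base);
}
```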
amd64
arm
i386
ia64
mips
powerpc
sparc64
debug.c
debug.h
libmap.c
libmap.h
Makefile
malloc.c
map_object.c
rtld_lock.c
rtld_lock.h
rtld_tls.h
rtld.1
rtld.c
rtld.h
Symbol.map
xmalloc.c