commit d5c90663b2
Don't use a plain "ret" instruction at targets of jump instructions, since the branch caches on at least Athlon XP through Athlon 64 CPUs don't understand such instructions and guarantee a cache miss taking at least 10 cycles. Use the documented workaround "ret $0" instead ("nop; ret" also works, but "ret $0" is probably faster on old CPUs).

Normal code (even asm code) doesn't branch to "ret", since there is usually some cleanup to do, but the __mcount, .mcount and .mexitcount entry points were optimized so aggressively (down to 3 instructions each when profiling is not enabled) that they did exactly this. I didn't see a significant number of cache misses for .mexitcount, but for the shared "ret" used by __mcount and .mcount I observed cache misses costing 26 cycles each.

For a send(2) syscall that makes about 70 function calls, the cost of these cache misses alone increased the syscall time from about 4000 cycles to about 7000 cycles. The 4000-cycle figure is for a profiling (GUPROF) kernel with profiling disabled; after this fix, configuring profiling costs only about 600 of those 4000 cycles, which is consistent with almost perfect branch prediction in the mcounting calls.
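To make the pattern concrete, here is a minimal sketch of the problem and the fix. This is not the actual FreeBSD asmacros.h/prof_machdep.c code; the labels, the `profiling_on` flag, and the entry-point bodies are simplified placeholders standing in for the real profiling-enabled test:

```asm
        .data
profiling_on:
        .long   0                       /* placeholder flag: 0 = profiling disabled */

        .text
/* Hypothetical mcount-style entry points: with profiling disabled, each
 * is just a test, a conditional branch, and the shared return below. */
__mcount_stub:
        cmpl    $0, profiling_on
        je      common_ret              /* branches straight to the "ret" */
        jmp     do_profiling
mcount_stub:
        cmpl    $0, profiling_on
        je      common_ret              /* same shared branch target */
        jmp     do_profiling

common_ret:
        /* Bad: a bare "ret" as a branch target guarantees a branch-cache
         * miss (>= 10 cycles) on Athlon XP through Athlon 64:
         *      ret
         * Good: the documented workaround -- "ret $0" is the "ret imm16"
         * form popping zero extra bytes, so it behaves identically here.
         * ("nop; ret" also works but is probably slower on old CPUs.) */
        ret     $0

do_profiling:                           /* the real mcount work would go here */
        ret     $0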
atpic_vector.s
atpic.c
ccbque.h
clock.c
elcr.c
elink.c
elink.h
icu.h
isa_dma.c
isa.c
isa.h
nmi.c
npx.c
pmtimer.c
prof_machdep.c
spic.c
spicreg.h
vesa.c