Mateusz Guzik
12360b3079
amd64: depessimize copyinstr_smap
The stac/clac combo around each byte copy is causing a measurable
slowdown in benchmarks. Execute the pair only once: before the copy
starts and after all data is copied. While here, reorder the code to
avoid a forward branch in the common case.
Note the copying loop (originating from copyinstr) is avoidably slow
and will be fixed later.
Reviewed by: kib
Approved by: re (gjb)
Differential Revision: https://reviews.freebsd.org/D17063
2018-09-06 19:42:40 +00:00