the file, which is compiled with SSE disabled. The functions set up
the FPU context for the kernel, and compiler optimizations that could
lead to use of XMM registers before fpu_kern_enter(9) is called, or
after fpu_kern_leave(9), would panic the machine.
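A minimal sketch of the bracketing this enforces, assuming the
fpu_kern(9) KPI as referenced above; return-value conventions vary by
branch, so treat this as illustrative, not the literal driver code:

#include <sys/param.h>
#include <sys/proc.h>
#include <machine/fpu.h>    /* amd64; i386 has machine/npx.h */

static void
xmm_work(void)
{
    struct fpu_kern_ctx *ctx;

    ctx = fpu_kern_alloc_ctx(FPU_KERN_NORMAL);
    fpu_kern_enter(curthread, ctx, FPU_KERN_NORMAL);
    /* XMM/SSE registers may be touched only inside this window. */
    fpu_kern_leave(curthread, ctx);
    fpu_kern_free_ctx(ctx);
}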
Discussed with: jmg
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
context into memory for kernel threads which called
fpu_kern_thread(9). This lets fpu_kern_enter() callers get the
optimization without checking is_fpu_kern_thread() themselves.
Apply the flag to padlock(4) and aesni(4). In aesni_cipher_process(),
do not leak FPU context state on error.
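A short sketch of the difference for a kthread, assuming the flag and
fpu_kern_thread(9) names from this change:

/* Once, when the kernel thread starts up: */
fpu_kern_thread(0);

/* On the hot path: FPU_KERN_KTHR skips the context save, and the
 * caller no longer needs its own is_fpu_kern_thread() check. */
fpu_kern_enter(curthread, ctx, FPU_KERN_NORMAL | FPU_KERN_KTHR);
/* ... XMM work ... */
fpu_kern_leave(curthread, ctx);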
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
and finish the job. ncurses now has the only Makefile in the tree
that still uses it; since that wasn't a simple mechanical change, it
will be addressed in a future commit.
my tests, it is ~20% faster; even on an old 533MHz IXP425 it is ~45%
faster... This is partly due to loop unrolling, so the code size does
increase significantly... I do plan on committing a version that
rolls the loops back up for a smaller code size on embedded systems
where size is more important than absolute performance (it'll save
~6k of code)...
The kernel implementation is now shared w/ userland's libcrypt and
libmd...
We drop support for sha256 from sha2.c, so now sha2.c only contains
sha384 and sha512...
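For reference, a minimal userland consumer of the now-shared code
through libmd's documented interface (compile with -lmd):

#include <sha512.h>
#include <stdio.h>

int
main(void)
{
    SHA512_CTX ctx;
    char buf[SHA512_DIGEST_STRING_LENGTH];

    SHA512_Init(&ctx);
    SHA512_Update(&ctx, "abc", 3);
    printf("%s\n", SHA512_End(&ctx, buf));
    return (0);
}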
Reviewed by: secteam@
context switch just to call the done callback... On my machine, this
improves geli/gzero decrypt performance by ~27% from 550MB/sec to
~700MB/sec...
MFC after: 3 days
regression manages to do it)... We use a packed struct to coerce
gcc/clang into producing unaligned loads (there is no packed pointer
attribute, otherwise this would be easier)...
Use the _storeu_ and _loadu_ intrinsics where using the structure is
overkill...
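A sketch of both techniques, assuming gcc/clang semantics (the struct
and function names here are illustrative, not the driver's):

#include <emmintrin.h>

/* Dereferencing through a packed struct makes the compiler emit an
 * unaligned load (movdqu) instead of movdqa. */
struct unaligned128 {
    __m128i v;
} __attribute__((__packed__));

static inline __m128i
load_via_struct(const void *p)
{
    return (((const struct unaligned128 *)p)->v);
}

/* Where the struct is overkill, the unaligned intrinsics do the same: */
static inline __m128i
load_via_intrinsic(const void *p)
{
    return (_mm_loadu_si128((const __m128i *)p));
}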
Be better about using types properly... Since we allocate our own key
schedule and make sure it's aligned, use the __m128i type for various
function arguments...
clang ignores __aligned on prototypes and gcc errors out on them, so
leave them in comments to document that these function arguments are
required to be aligned...
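A sketch of the resulting convention (the function name is
hypothetical):

/* sched must be 16-byte aligned; __aligned(16) is spelled in a
 * comment because clang ignores it on prototypes and gcc rejects it. */
void    example_encrypt(const __m128i *sched /* __aligned(16) */,
            int rounds, __m128i *block);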
About all that changes is movdqa -> movdqu, going by the diff of the
disassembly output...
Noticed by: symbolics at gmx.com
MFC after: 3 days
performance... Use SSE2 instructions for calculating the XTS tweak
factor... Let the compiler do more work and handle register allocation
by using intrinsics; now only the key schedule is in assembly...
Replace hard-coded .byte sequences with the proper instructions now
that both clang and gcc support them...
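A sketch of the tweak step with SSE2 intrinsics, i.e. doubling the
tweak in GF(2^128) (Mike Hamburg's construction, credited below; the
exact names in the driver may differ):

#include <emmintrin.h>

#define AES_XTS_ALPHA   0x87    /* GF(2^128) reduction polynomial */

static inline __m128i
xts_next_tweak(__m128i inp)
{
    const __m128i alphamask = _mm_set_epi32(1, 1, 1, AES_XTS_ALPHA);
    __m128i xtweak, ret;

    /* Rotate the 32-bit lanes up and broadcast each lane's sign bit;
     * lane 0 then holds the fold-back mask, lanes 1-3 the carries. */
    xtweak = _mm_shuffle_epi32(inp, 0x93);
    xtweak = _mm_srai_epi32(xtweak, 31);
    xtweak = _mm_and_si128(xtweak, alphamask);

    /* tweak * x: shift every lane left by one, fold the carries in. */
    ret = _mm_slli_epi32(inp, 1);
    return (_mm_xor_si128(ret, xtweak));
}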
On my machine, pulling the code to userland I saw performance go from
~150MB/sec to 2GB/sec in XTS mode. GELI on GNOP saw a more modest
increase of about 3x due to other system overhead (geom and
opencrypto)...
These changes allow almost full disk io rate w/ geli...
Reviewed by: -current, -security
Thanks to: Mike Hamburg for the XTS tweak algorithm
hash function) optimized for speed on short messages returning a 64bit hash/
digest value.
SipHash is simpler and much faster than other secure MACs and competitive
in speed with popular non-cryptographic hash functions. It uses a 128-bit
key without the hidden cost of a key expansion step. SipHash iterates a
simple round function consisting of four additions, four xors, and six
rotations, interleaved with xors of message blocks for a pre-defined number
of compression and finalization rounds. The absence of secret load/store
addresses or secret branch conditions avoids timing attacks. No state is
shared between messages. Hashing is deterministic and doesn't use nonces.
It is not susceptible to length extension attacks.
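For reference, the round function from the paper, operating on the
four 64-bit state words v0..v3:

#define ROTL(x, b)  (uint64_t)(((x) << (b)) | ((x) >> (64 - (b))))

#define SIPROUND do {                                           \
    v0 += v1; v1 = ROTL(v1, 13); v1 ^= v0; v0 = ROTL(v0, 32);   \
    v2 += v3; v3 = ROTL(v3, 16); v3 ^= v2;                      \
    v0 += v3; v3 = ROTL(v3, 21); v3 ^= v0;                      \
    v2 += v1; v1 = ROTL(v1, 17); v1 ^= v2; v2 = ROTL(v2, 32);   \
} while (0)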
Target applications include network traffic authentication, message
authentication (MAC) and hash-table protection against hash-flooding
denial-of-service attacks.
The number of update/finalization rounds is defined during initialization:
SipHash24_Init() for the fast and reasonably strong version.
SipHash48_Init() for the strong version (half as fast).
SipHash usage is similar to other hash functions:
struct SIPHASH_CTX ctx;
char *k = "16bytes long key";
char *s = "string";
uint64_t h = 0;
SipHash24_Init(&ctx);
SipHash_SetKey(&ctx, k);
SipHash_Update(&ctx, s, strlen(s));
SipHash_Final(&h, &ctx); /* or */
h = SipHash_End(&ctx); /* or */
h = SipHash24(&ctx, k, s, strlen(s));
It was designed by Jean-Philippe Aumasson and Daniel J. Bernstein and
is described in the paper "SipHash: a fast short-input PRF", 2012.09.18:
https://131002.net/siphash/siphash.pdf
Permanent ID: b9a943a805fbfc6fde808af9fc0ecdfa
Implemented by: andre (based on the paper)
Reviewed by: cperciva
64bit and 32bit ABIs. As a side-effect, it enables AVX on capable
CPUs.
In particular:
- Query CPU support for XSAVE, the list of supported extensions, and
the required size of the FPU save area. The hw.use_xsave tunable is
provided for disabling XSAVE, and hw.xsave_mask may be used to
select the enabled extensions.
- Remove the FPU save area from the PCB and dynamically allocate the
(run-time sized) user save area on top of the kernel stack, right
above the PCB. Reorganize the thread0 PCB initialization to postpone
it until after the BSP is queried for the save area size.
- The dumppcb, stoppcbs and susppcbs no longer carry the FPU state
either. FPU state is only useful for suspend, where it is saved in
the dynamically allocated suspfpusave area.
- Use XSAVE and XRSTOR to save/restore FPU state, if supported and
enabled.
- Define a new mcontext_t flag _MC_HASFPXSTATE, indicating that
mcontext_t has a valid pointer to out-of-struct extended FPU
state. Signal handlers are supplied with stack-allocated FPU
state. The sigreturn(2) and setcontext(2) syscalls honour the flag,
allowing signal handlers to inspect and manipulate extended state
in the interrupted context (a sketch follows this list).
- getcontext(2) never returns extended state, since there is no room
in the fixed-size mcontext_t for the variable-sized save area; and
since mcontext_t is embedded into ucontext_t, this is impossible to
fix in a reasonable way. Instead of extending the getcontext(2)
syscall, provide a sysarch(2) facility to query extended FPU state.
- Add ptrace(2) support for getting and setting extended state; while
there, implement the missing PT_I386_{GET,SET}XMMREGS for 32bit
binaries.
- Change fpu_kern KPI to not expose struct fpu_kern_ctx layout to
consumers, making it opaque. Internally, struct fpu_kern_ctx now
contains a space for the extended state. Convert in-kernel consumers
of fpu_kern KPI both on i386 and amd64.
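A sketch of a signal handler inspecting the new flag, assuming the
amd64 mcontext_t field names (mc_flags, mc_xfpustate,
mc_xfpustate_len); verify against machine/ucontext.h:

#include <signal.h>
#include <ucontext.h>

static void
handler(int sig, siginfo_t *si, void *arg)
{
    ucontext_t *uc = arg;
    mcontext_t *mc = &uc->uc_mcontext;

    if (mc->mc_flags & _MC_HASFPXSTATE) {
        /* Stack-allocated out-of-struct extended FPU state;
         * modifications here are honoured by sigreturn(2). */
        void *xfpu = (void *)mc->mc_xfpustate;
        size_t len = mc->mc_xfpustate_len;
        (void)xfpu;
        (void)len;
    }
}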
First version of the support for AVX was submitted by Tim Bird
<tim.bird am sony com> on behalf of Sony. This version was written
from scratch.
Tested by: pho (previous version), Yamagi Burmeister <lists yamagi org>
MFC after: 1 month
- Operate on uint64_t types when XORing, etc. instead of uint8_t.
- Don't bzero() the temporary block for every AES block. Do it once
for the entire data block.
- AES-NI is available only on little-endian architectures. Simplify
the code that takes the block number from the IV (see the sketch
below).
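Sketches of the first and last points; le64dec(9) is the byte-order
primitive from sys/endian.h:

#include <sys/endian.h>
#include <stdint.h>

/* XOR a 16-byte AES block as two 64-bit words, not 16 bytes. */
static inline void
xor_block64(uint64_t *dst, const uint64_t *src)
{
    dst[0] ^= src[0];
    dst[1] ^= src[1];
}

/* Little-endian only, so the XTS block number can be read straight
 * out of the IV: */
static inline uint64_t
xts_blocknum(const uint8_t *iv)
{
    return (le64dec(iv));
}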
Benchmarks:
Memory-backed md(4) device, software AES-XTS, 4kB sector:
# dd if=/dev/md0.eli bs=1m
59.61MB/s
Memory-backed md(4) device, old AES-NI AES-XTS, 4kB sector:
# dd if=/dev/md0.eli bs=1m
97.29MB/s
Memory-backed md(4) device, new AES-NI AES-XTS, 4kB sector:
# dd if=/dev/md0.eli bs=1m
221.26MB/s
127% performance improvement between old and new code.
Harddisk, raw speed:
# dd if=/dev/ada0 bs=1m
137.63MB/s
Harddisk, software AES-XTS, 4kB sector:
# dd if=/dev/ada0.eli bs=1m
47.83MB/s (34% of raw disk speed)
Harddisk, old AES-NI AES-XTS, 4kB sector:
# dd if=/dev/ada0.eli bs=1m
68.33MB/s (49% of raw disk speed)
Harddisk, new AES-NI AES-XTS, 4kB sector:
# dd if=/dev/ada0.eli bs=1m
108.35MB/s (78% of raw disk speed)
58% performance improvement between old and new code.
As a side note, GELI with AES-NI using AES-CBC can achieve native disk speed.
MFC after: 3 days
The aeskeys_{amd64,i386}.S content was mostly obtained from OpenBSD;
core had no objections to the license.
Hardware provided by: Sentex Communications
Tested by: fabient, pho (previous versions)
MFC after: 1 month
context from in-kernel execution of padlock instructions and to handle
spurious FPUDNA exceptions that are sometimes raised when doing padlock
calculations.
Globally mark crypto(9) kthread as using FPU.
Reviewed by: pjd
Hardware provided by: Sentex Communications
Tested by: pho
PR: amd64/135014
MFC after: 1 month
separate index variable.
This more than doubles rc4_init() performance on the tested i386 P4.
It also gives about a 15% speedup to PPTP VPN with stateless MPPE
encryption (by ng_mppc), which calls rc4_init() for every packet.
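A sketch of the reworked key-setup loop: a separate key index k
replaces a per-iteration "i % keylen" (field names follow rc4(9)'s
struct rc4_state, but treat this as illustrative):

static void
swap_bytes(u_char *a, u_char *b)
{
    u_char t;

    t = *a; *a = *b; *b = t;
}

static void
rc4_init_sketch(struct rc4_state *state, const u_char *key, int keylen)
{
    u_char j;
    int i, k;

    for (i = 0; i < 256; i++)
        state->perm[i] = (u_char)i;

    /* j is u_char, so it wraps mod 256 for free. */
    for (j = i = k = 0; i < 256; i++) {
        j += state->perm[i] + key[k];
        swap_bytes(&state->perm[i], &state->perm[j]);
        if (++k >= keylen)
            k = 0;  /* separate index, no modulo */
    }
}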
o make all crypto drivers have a device_t; pseudo drivers like the s/w
crypto driver synthesize one
o change the api between the crypto subsystem and drivers to use kobj;
cryptodev_if.m defines this api
o use the fact that all crypto drivers now have a device_t to add support
for specifying which of several potential devices to use when doing
crypto operations
o add new ioctls that allow user apps to select a specific crypto device
to use (previous ioctls maintained for compatibility; see the sketch
after this list)
o overhaul crypto subsystem code to eliminate lots of cruft and hide
implementation details from drivers
o bring in numerous fixes from Michael Richardson/hifn; mostly for
795x parts
o add an optional mechanism for mmap'ing the hifn 795x public key h/w
to user space for use by openssl (not enabled by default)
o update crypto test tools to use the new ioctls and add command-line
options to specify a device to use for tests
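A userland sketch of the device-selection ioctl mentioned above,
assuming the session2_op/crid shapes this change introduces (verify
the field names against sys/opencrypto/cryptodev.h):

#include <sys/ioctl.h>
#include <crypto/cryptodev.h>
#include <string.h>

static int
open_hw_session(int fd, void *key, int keylen)
{
    struct session2_op sop;

    memset(&sop, 0, sizeof(sop));
    sop.cipher = CRYPTO_AES_CBC;
    sop.key = key;
    sop.keylen = keylen;
    sop.crid = CRYPTOCAP_F_HARDWARE;    /* or a specific device id */
    if (ioctl(fd, CIOCGSESSION2, &sop) == -1)
        return (-1);
    return ((int)sop.ses);
}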
These changes will also enable much future work on improving the core
crypto subsystem; including proper load balancing and interposing code
between the core and drivers to dispatch small operations to the s/w
driver as appropriate.
These changes were instigated by the work of Michael Richardson.
Reviewed by: pjd
Approved by: re
Such an address can be used directly in padlock's AES.
This improves speed of geli(8) significantly:
# sysctl kern.geom.zero.clear=0
# geli onetime -s 4096 gzero
# dd if=/dev/gzero.eli of=/dev/null bs=1m count=1000
Before: 113MB/s
After: 203MB/s
BTW. If sector size is set to 128kB, I can read at 276MB/s :)
new VIA CPUs.
For older CPUs HMAC/SHA1 and HMAC/SHA256 (and others) will still be done
in software.
Move symmetric cryptography (currently only AES-CBC 128/192/256) to
padlock_cipher.c file. Move HMAC cryptography to padlock_hash.c file.
Hardware from: Centaur Technologies
them twice.
This is possible, for example, when a session is first used in an
authentication context, then freed, and then used in an encryption
context and freed again: in the encryption context ses_ictx and
ses_octx are not touched at newsession time, but padlock_freesession
could still try to free them because they are not NULL.
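The shape of the fix is to clear the pointers once freed, so a later
padlock_freesession pass sees NULL (the malloc type here is
illustrative):

if (ses->ses_ictx != NULL) {
    free(ses->ses_ictx, M_DEVBUF);
    ses->ses_ictx = NULL;
}
if (ses->ses_octx != NULL) {
    free(ses->ses_octx, M_DEVBUF);
    ses->ses_octx = NULL;
}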
with fast_ipsec(4) and geli(8) authentication (coming soon).
If a consumer requests only an HMAC algorithm (without encryption),
return EINVAL.
- Add support for the CRD_F_KEY_EXPLICIT flag, for both encryption and
authentication.
I checked other algorithms against this bug and it seems they aren't
affected.
Reported by: Mike Tancsa <mike@sentex.net>
PR: i386/84860
Reviewed by: phk, cperciva(x2)
- redo updating.
rijndael-api-fst.[ch]:
- switch to use new low level rijndael api.
- stop using u8, u16 and u32.
- space cleanup.
Tested by: gbde(8) and phk's test program
rijndael_blockDecrypt() as both input and output.
This property is important because inside rijndael we can get away
with allocating just a 16-byte "work" buffer on the stack (which
is very cheap), whereas the calling code would need to allocate the
full-sized buffer, and in all likelihood would have to do so with
an expensive malloc(9).
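A usage sketch of the in-place property through the rijndael-api-fst
entry points (lengths are in bits; treat the exact signatures as
approximate and check rijndael-api-fst.h):

keyInstance ki;
cipherInstance ci;
u_char buf[16];     /* ciphertext in, plaintext out */

rijndael_makeKey(&ki, DIR_DECRYPT, 256, keymaterial);
rijndael_cipherInit(&ci, MODE_CBC, iv);
/* The same buffer is passed as both input and output; rijndael only
 * needs its internal 16-byte work buffer on the stack. */
rijndael_blockDecrypt(&ci, &ki, buf, 128, buf);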