Support for the new hashing algorithms in ZFS was introduced in r289422,
but it was left disconnected because FreeBSD lacked implementations of
SHA-512 truncated to 256 bits (SHA-512/256) and Skein.
These implementations were introduced in r300921 and r300966 respectively.
This commit connects them to ZFS and enables the new checksum algorithms.
The new algorithms are not supported by the boot blocks, so do not use them
on your root dataset if you boot from ZFS.
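For example, once the corresponding pool feature flags are enabled, the new
checksums can be selected per dataset (pool and dataset names below are
placeholders):
# zpool set feature@skein=enabled tank
# zfs set checksum=skein tank/data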
Relnotes: yes
Sponsored by: ScaleEngine Inc.
a preference for memory load instructions over large code footprints
with embedded immediate variables.
On amd64 CPUs from 2007-2008 there is not a significant change, but
amd64 CPUs from 2009-2010 get roughly 10% more throughput with this
code; amd64 CPUs from 2011-2012 get roughly 15% more throughput; and
amd64 CPUs from 2013-2015 get 20-25% more throughput. The Raspberry
Pi 2 increases its throughput by 6-8%.
Sponsored by: Tarsnap Backup Inc.
Performance tested by: allanjude
MFC after: 3 weeks
Connect it to userland (libmd, libcrypt, sbin/md5) and kernel (crypto.ko)
Support for Skein as a ZFS checksum algorithm was introduced in r289422,
but was left disconnected because FreeBSD lacked a Skein implementation.
A further commit will enable it in ZFS.
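As a userland illustration, a minimal sketch against the libmd-style
SKEIN256_* interface this connects (function and constant names follow the
usual libmd pattern and are assumed here; link with -lmd):

#include <skein.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
        SKEIN256_CTX ctx;
        unsigned char digest[SKEIN256_DIGEST_LENGTH];   /* 32 bytes */
        const char *msg = "hello";

        SKEIN256_Init(&ctx);
        SKEIN256_Update(&ctx, msg, strlen(msg));
        SKEIN256_Final(digest, &ctx);
        for (size_t i = 0; i < sizeof(digest); i++)
                printf("%02x", digest[i]);
        printf("\n");
        return (0);
}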
Reviewed by: cem
Sponsored by: ScaleEngine Inc.
Differential Revision: https://reviews.freebsd.org/D6166
This implements SHA-512/256, which generates a 256-bit hash by computing
SHA-512 and truncating the result. A different initial value is used, so
the result differs from the first 256 bits of the SHA-512 of the same
input. SHA-512 is ~50% faster than SHA-256 on 64-bit platforms, so the
result is a faster 256-bit hash.
The main goal of this implementation is to enable support for this
faster hashing algorithm in ZFS. The feature was introduced into ZFS
in r289422, but was left disconnected because SHA-512/256 support was missing.
A further commit will enable it in ZFS.
This is the follow-on to r292782.
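A minimal userland sketch using the libmd-style interface added here (the
header, function and constant names follow the usual libmd pattern and are
assumed; link with -lmd):

#include <sha512t.h>            /* SHA-512/256 ("SHA-512 truncated") */
#include <stdio.h>
#include <string.h>

int
main(void)
{
        SHA512_CTX ctx;
        char buf[65];           /* 32-byte digest as hex plus NUL */
        const char *msg = "hello";

        SHA512_256_Init(&ctx);  /* same engine as SHA-512, different IV */
        SHA512_256_Update(&ctx, msg, strlen(msg));
        printf("%s\n", SHA512_256_End(&ctx, buf));
        return (0);
}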
Reviewed by: cem
Sponsored by: ScaleEngine Inc.
Differential Revision: https://reviews.freebsd.org/D6061
Use the C99 'static' keyword to hint IV and output digest buffer sizes to the
compiler. The keyword informs the compiler of the minimum valid size for a given
array. Obviously not every pointer can be validated (i.e., the compiler can
produce false negative but not false positive reports).
No functional change. No ABI change.
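For illustration, a minimal sketch of the annotation (hypothetical
prototypes, not the committed ones):

#include <stdint.h>
#include <stddef.h>

#define EXAMPLE_DIGEST_LENGTH   32

/*
 * 'static N' promises the compiler that callers pass arrays of at least N
 * elements, so an undersized array (or NULL) can be flagged at compile time.
 */
void example_setiv(uint8_t iv[static 16]);
void example_final(uint8_t digest[static EXAMPLE_DIGEST_LENGTH],
    const uint8_t *data, size_t len);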
Sponsored by: EMC / Isilon Storage Division
Keep xform.c as a meta-file including the broken-out bits, so existing code
that includes xform.c continues to work as before.
Individual algorithms can now be reused elsewhere, including outside of the
kernel.
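A sketch of the meta-file layout (file names are illustrative; the committed
split may differ):

/* xform.c -- meta-file preserving the old single-file interface */
#include "xform_sha1.c"
#include "xform_sha2.c"
#include "xform_rijndael.c"
/* ... one include per broken-out algorithm ... */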
Reviewed by: bapt (previous version), gnn, delphij
Approved by: secteam
MFC after: 1 week
Sponsored by: ScaleEngine Inc.
Differential Revision: https://reviews.freebsd.org/D4674
cperciva's libmd implementation is 5-30% faster
The same was done for SHA256 previously in r263218
cperciva's implementation lacked SHA-384, which I implemented and validated
against OpenSSL and the NIST documentation.
Extend sbin/md5 to create sha384(1).
Chase dependencies on sys/crypto/sha2/sha2.{c,h} and replace them with
sha512{c.c,.h}.
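For reference, a minimal userland sketch of the new SHA-384 support via the
libmd one-shot helper (assuming the usual libmd ALG_Data() convention; link
with -lmd):

#include <sha384.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
        char buf[SHA384_DIGEST_STRING_LENGTH];
        const char *msg = "hello";

        /* Hash 'msg' in one call and format the digest as hex. */
        printf("%s\n", SHA384_Data(msg, strlen(msg), buf));
        return (0);
}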
Reviewed by: cperciva, des, delphij
Approved by: secteam, bapt (mentor)
MFC after: 2 weeks
Sponsored by: ScaleEngine Inc.
Differential Revision: https://reviews.freebsd.org/D3929
session in multiple threads w/o locking. There was a single FPU context
shared per session; if multiple threads were using the session and both
migrated away, they could corrupt each other's FPU context.
This patch adds a per-CPU context and a lock to protect it.
It also tries to better address unloading of the aesni module. The pause
will be removed once the OpenCrypto Framework provides a better method for
draining callers into _newsession.
I first discovered the FPU context sharing issue with a flood ping over an
IPsec tunnel between two bhyve machines. The patch in D3015
was used to verify that this change fixes the issue.
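A minimal sketch of the pattern described above, assuming the
fpu_kern_enter(9)/fpu_kern_leave(9) KPI on amd64 (names and layout are
illustrative, not the committed code):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <machine/fpu.h>

/* Hypothetical per-CPU state; one instance per CPU. */
struct aesni_pcpu {
        struct fpu_kern_ctx     *fpu_ctx;       /* from fpu_kern_alloc_ctx() */
        struct mtx               fpu_lock;      /* mtx_init()'d at attach */
};

static void
example_cipher_op(struct aesni_pcpu *pc, uint8_t *buf, size_t len)
{
        /*
         * Take the per-CPU lock before entering the FPU context so threads
         * that end up sharing a context cannot trample each other's XMM
         * state.
         */
        mtx_lock(&pc->fpu_lock);
        fpu_kern_enter(curthread, pc->fpu_ctx, FPU_KERN_NORMAL);
        /* ... AES-NI rounds over buf/len using XMM registers ... */
        fpu_kern_leave(curthread, pc->fpu_ctx);
        mtx_unlock(&pc->fpu_lock);
}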
Reviewed by: gnn, kib (both earlier versions)
Differential Revision: https://reviews.freebsd.org/D3016
flags are not specified. This bug was introduced in r275732.
This only affects IPsec ESP-only policies with the aesni module loaded;
other subsystems specify one or both of the flags.
Reviewed by: gnn, delphij, eri
the compiler in svn r242182:
#if __STDC_HOSTED__
#include <mm_malloc.h>
#endif
A similar change was done to clang in the FreeBSD tree in svn r218893.
However, for external gcc toolchains, this change is not present in the
compiler's header file.
This patch to FreeBSD's aesni code allows compilation with an external
gcc toolchain.
Differential Revision: https://reviews.freebsd.org/D2285
Reviewed by: jmg, dim
Approved by: dim
for counter mode), and AES-GCM. Both of these modes have been added to
the aesni module.
Included is a set of tests to validate that the software and aesni module
calculate the correct values. These use the NIST KAT test vectors. To run
the tests, you will need to install a soon-to-be-committed port, nist-kat,
which installs the vectors. Using a port is necessary as the test vectors
are around 25MB.
All the man pages were updated. I have added a new man page, crypto.7,
which includes a description of how to use each mode. All the new modes
and some other AES modes are present. It would be good for someone
else to go through and document the other modes.
A new ioctl was added to support AEAD modes, of which AES-GCM is one.
Without this ioctl, it is not possible to test AEAD modes from userland.
Add a timing-safe bcmp for comparing MACs. Previously we were using bcmp,
which could leak timing information and allow messages to be forged.
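A minimal sketch of a constant-time comparison (illustrative; not
necessarily the committed routine):

#include <stddef.h>

/*
 * Compare two buffers in time that depends only on the length, never on
 * where the first difference occurs, so timing cannot be used to guess a
 * MAC byte-by-byte.
 */
static int
example_timingsafe_bcmp(const void *b1, const void *b2, size_t n)
{
        const unsigned char *p1 = b1, *p2 = b2;
        unsigned char ret = 0;

        for (size_t i = 0; i < n; i++)
                ret |= p1[i] ^ p2[i];
        return (ret != 0);
}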
Add a minor optimization to the aesni module so that single segment
mbufs don't get copied and instead are updated in place. The aesni
module needs to be updated to support blocked IO so segmented mbufs
don't have to be copied.
We require that the IV be specified for all calls for both GCM and ICM.
This is to ensure proper use of these functions.
Obtained from: p4: //depot/projects/opencrypto
Relnotes: yes
Sponsored by: FreeBSD Foundation
Sponsored by: NetGate
the file which is compiled with SSE disabled. The functions set up the FPU
context for the kernel; compiler optimizations that used XMM registers
before fpu_kern_enter(9) is called or after fpu_kern_leave(9) would panic
the machine.
Discussed with: jmg
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
context into memory for the kernel threads which called fpu_kern_thread(9).
This allows fpu_kern_enter() callers to get the optimization without having
to check is_fpu_kern_thread().
Apply the flag to padlock(4) and aesni(4). In aesni_cipher_process(),
do not leak FPU context state on error.
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
and finish the job. ncurses is now the only Makefile in the tree that uses
it, since converting it wasn't a simple mechanical change; it will be
addressed in a future commit.
my tests, it is ~20% faster; even on an old 533MHz IXP425 it is ~45%
faster. This is partly due to loop unrolling, so the code size does
increase significantly. I do plan on committing a version that rolls the
loops back up for smaller code size on embedded systems where size is more
important than absolute performance (it will save ~6k of code).
The kernel implementation is now shared with userland's libcrypt and libmd.
We drop support for SHA-256 from sha2.c, so now sha2.c only contains
SHA-384 and SHA-512.
Reviewed by: secteam@
context switch just to call the done callback. On my machine, this improves
geli/gzero decrypt performance by ~27%, from 550MB/sec to ~700MB/sec.
MFC after: 3 days
regression manages to do it). We use a packed struct to coerce gcc/clang
into producing unaligned loads (there is no packed pointer attribute,
otherwise this would be easier). Use _storeu_ and _loadu_ where using the
structure is overkill.
Be better at using types properly. Since we allocate our own key schedule
and make sure it's aligned, use the __m128i type in various function
arguments.
clang ignores __aligned on prototypes and gcc errors on them, so leave them
in comments to document that these function arguments are required to be
aligned.
About all that changes is movdqa -> movdqu, judging from the diff of the
disassembly output.
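A minimal sketch of the packed-struct trick (illustrative, not the
committed code):

#include <immintrin.h>

/*
 * Wrapping __m128i in a packed struct drops its 16-byte alignment
 * requirement, so accesses through it make the compiler emit unaligned
 * loads/stores (movdqu) instead of movdqa.
 */
struct unaligned_block {
        __m128i v;
} __attribute__((__packed__));

static inline __m128i
load_unaligned(const void *p)
{
        return (((const struct unaligned_block *)p)->v);
}

static inline void
store_unaligned(void *p, __m128i v)
{
        ((struct unaligned_block *)p)->v = v;
}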
Noticed by: symbolics at gmx.com
MFC after: 3 days
performance. Use SSE2 instructions for calculating the XTS tweak factor.
Let the compiler do more work and handle register allocation by using
intrinsics; now only the key schedule is in assembly. Replace .byte
hard-coded instructions with the proper instructions now that both clang
and gcc support them.
On my machine, pulling the code to userland I saw performance go from
~150MB/sec to 2GB/sec in XTS mode. GELI on GNOP saw a more modest increase
of about 3x due to other system overhead (geom and opencrypto).
These changes allow almost full disk I/O rate with geli.
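For reference, a sketch of the SSE2 tweak update (multiplication by x in
GF(2^128), with the tweak held little-endian in the register); illustrative,
not necessarily the committed code:

#include <emmintrin.h>

/*
 * Advance the XTS tweak: shift the 128-bit value left by one bit and, if
 * the top bit was set, fold it back in with the polynomial 0x87.  Word-wise
 * shifts plus a rotated sign-mask supply the inter-word carries.
 */
static inline __m128i
example_xts_mult_x(__m128i tweak)
{
        const __m128i poly = _mm_set_epi32(1, 1, 1, 0x87);
        __m128i carry;

        carry = _mm_shuffle_epi32(tweak, 0x93);        /* rotate words up */
        carry = _mm_srai_epi32(carry, 31);             /* sign bit -> mask */
        carry = _mm_and_si128(carry, poly);            /* carries + 0x87   */

        return (_mm_xor_si128(_mm_slli_epi32(tweak, 1), carry));
}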
Reviewed by: -current, -security
Thanks to: Mike Hamburg for the XTS tweak algorithm
hash function) optimized for speed on short messages, returning a 64-bit
hash/digest value.
SipHash is simpler and much faster than other secure MACs and competitive
in speed with popular non-cryptographic hash functions. It uses a 128-bit
key without the hidden cost of a key expansion step. SipHash iterates a
simple round function consisting of four additions, four xors, and six
rotations, interleaved with xors of message blocks for a pre-defined number
of compression and finalization rounds. The absence of secret load/store
addresses or secret branch conditions avoids timing attacks. No state is
shared between messages. Hashing is deterministic and doesn't use nonces.
It is not susceptible to length extension attacks.
Target applications include network traffic authentication, message
authentication (MAC) and hash-tables protection against hash-flooding
denial-of-service attacks.
The number of update/finalization rounds is defined during initialization:
SipHash24_Init() for the fast and reasonably strong version.
SipHash48_Init() for the strong version (half as fast).
SipHash usage is similar to other hash functions:
SIPHASH_CTX ctx;
char *k = "16bytes long key";
char *s = "string";
uint64_t h = 0;
SipHash24_Init(&ctx);
SipHash_SetKey(&ctx, k);
SipHash_Update(&ctx, s, strlen(s));
SipHash_Final(&h, &ctx); /* or */
h = SipHash_End(&ctx); /* or */
h = SipHash24(&ctx, k, s, strlen(s));
It was designed by Jean-Philippe Aumasson and Daniel J. Bernstein and
is described in the paper "SipHash: a fast short-input PRF", 2012.09.18:
https://131002.net/siphash/siphash.pdf
Permanent ID: b9a943a805fbfc6fde808af9fc0ecdfa
Implemented by: andre (based on the paper)
Reviewed by: cperciva
64-bit and 32-bit ABIs. As a side-effect, it enables AVX on capable
CPUs.
In particular:
- Query the CPU for XSAVE support, the list of supported extensions and
the required size of the FPU save area. The hw.use_xsave tunable is
provided for disabling XSAVE, and hw.xsave_mask may be used to
select the enabled extensions.
- Remove the FPU save area from PCB and dynamically allocate the
(run-time sized) user save area on the top of the kernel stack,
right above the PCB. Reorganize the thread0 PCB initialization to
postpone it after BSP is queried for save area size.
- The dumppcb, stoppcbs and susppcbs no longer carry the FPU state. FPU
state is only useful for suspend, where it is saved in the dynamically
allocated suspfpusave area.
- Use XSAVE and XRSTOR to save/restore FPU state, if supported and
enabled.
- Define new mcontext_t flag _MC_HASFPXSTATE, indicating that
mcontext_t has a valid pointer to out-of-struct extended FPU
state. Signal handlers are supplied with stack-allocated fpu
state. The sigreturn(2) and setcontext(2) syscall honour the flag,
allowing the signal handlers to inspect and manipulate extended
state in the interrupted context.
- The getcontext(2) call never returns extended state, since there is no
place in the fixed-sized mcontext_t for the variable-sized save area, and
since mcontext_t is embedded into ucontext_t, this cannot be fixed in a
reasonable way. Instead of extending the
getcontext(2) syscall, provide a sysarch(2) facility to query
extended FPU state.
- Add ptrace(2) support for getting and setting extended state; while
there, implement missed PT_I386_{GET,SET}XMMREGS for 32bit binaries.
- Change fpu_kern KPI to not expose struct fpu_kern_ctx layout to
consumers, making it opaque. Internally, struct fpu_kern_ctx now
contains a space for the extended state. Convert in-kernel consumers
of fpu_kern KPI both on i386 and amd64.
First version of the support for AVX was submitted by Tim Bird
<tim.bird am sony com> on behalf of Sony. This version was written
from scratch.
Tested by: pho (previous version), Yamagi Burmeister <lists yamagi org>
MFC after: 1 month
- Operate on uint64_t types when doing XORing, etc. instead of uint8_t (see
the sketch after this list).
- Don't bzero() temporary block for every AES block. Do it once for entire
data block.
- AES-NI is available only on little endian architectures. Simplify code
that takes block number from IV.
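A minimal sketch of the first point (illustrative; the committed code may
differ):

#include <stdint.h>
#include <string.h>

/* XOR a 16-byte AES block eight bytes at a time instead of byte-by-byte. */
static inline void
example_xor_block(uint8_t *dst, const uint8_t *src)
{
        uint64_t d[2], s[2];

        memcpy(d, dst, sizeof(d));
        memcpy(s, src, sizeof(s));
        d[0] ^= s[0];
        d[1] ^= s[1];
        memcpy(dst, d, sizeof(d));
}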
Benchmarks:
Memory-backed md(4) device, software AES-XTS, 4kB sector:
# dd if=/dev/md0.eli bs=1m
59.61MB/s
Memory-backed md(4) device, old AES-NI AES-XTS, 4kB sector:
# dd if=/dev/md0.eli bs=1m
97.29MB/s
Memory-backed md(4) device, new AES-NI AES-XTS, 4kB sector:
# dd if=/dev/md0.eli bs=1m
221.26MB/s
127% performance improvement between old and new code.
Harddisk, raw speed:
# dd if=/dev/ada0 bs=1m
137.63MB/s
Harddisk, software AES-XTS, 4kB sector:
# dd if=/dev/ada0.eli bs=1m
47.83MB/s (34% of raw disk speed)
Harddisk, old AES-NI AES-XTS, 4kB sector:
# dd if=/dev/ada0.eli bs=1m
68.33MB/s (49% of raw disk speed)
Harddisk, new AES-NI AES-XTS, 4kB sector:
# dd if=/dev/ada0.eli bs=1m
108.35MB/s (78% of raw disk speed)
58% performance improvement between old and new code.
As a side-note, GELI with AES-NI using AES-CBC can achieve native disk speed.
MFC after: 3 days
The aeskeys_{amd64,i386}.S content was mostly obtained from OpenBSD; core
had no objections to the license.
Hardware provided by: Sentex Communications
Tested by: fabient, pho (previous versions)
MFC after: 1 month
context from in-kernel execution of padlock instructions and to handle
spurious FPUDNA exceptions that are sometimes raised when doing padlock
calculations.
Globally mark crypto(9) kthread as using FPU.
Reviewed by: pjd
Hardware provided by: Sentex Communications
Tested by: pho
PR: amd64/135014
MFC after: 1 month
separate index variable.
It more than doubles rc4_init() performance on the tested i386 P4.
It also gives about a 15% speedup to PPTP VPN with stateless MPPE encryption
(by ng_mppc), which calls rc4_init() for every packet.
o make all crypto drivers have a device_t; pseudo drivers like the s/w
crypto driver synthesize one
o change the api between the crypto subsystem and drivers to use kobj;
cryptodev_if.m defines this api
o use the fact that all crypto drivers now have a device_t to add support
for specifying which of several potential devices to use when doing
crypto operations
o add new ioctls that allow user apps to select a specific crypto device
to use (previous ioctls maintained for compatibility)
o overhaul crypto subsystem code to eliminate lots of cruft and hide
implementation details from drivers
o bring in numerous fixes from Michael Richardson/hifn; mostly for
795x parts
o add an optional mechanism for mmap'ing the hifn 795x public key h/w
to user space for use by openssl (not enabled by default)
o update crypto test tools to use new ioctl's and add cmd line options
to specify a device to use for tests
These changes will also enable much future work on improving the core
crypto subsystem; including proper load balancing and interposing code
between the core and drivers to dispatch small operations to the s/w
driver as appropriate.
These changes were instigated by the work of Michael Richardson.
Reviewed by: pjd
Approved by: re
Such an address can be used directly in padlock's AES.
This improves speed of geli(8) significantly:
# sysctl kern.geom.zero.clear=0
# geli onetime -s 4096 gzero
# dd if=/dev/gzero.eli of=/dev/null bs=1m count=1000
Before: 113MB/s
After: 203MB/s
BTW. If sector size is set to 128kB, I can read at 276MB/s :)
new VIA CPUs.
For older CPUs HMAC/SHA1 and HMAC/SHA256 (and others) will still be done
in software.
Move symmetric cryptography (currently only AES-CBC 128/192/256) to
padlock_cipher.c file. Move HMAC cryptography to padlock_hash.c file.
Hardware from: Centaur Technologies
them twice.
This is possible, for example, when a session is used in an authentication
context, then freed, and then used in an encryption context and freed
again: in the encryption context ses_ictx and ses_octx are not touched at
newsession time, but padlock_freesession could still try to free them when
they are not NULL.
with fast_ipsec(4) and geli(8) authentication (coming soon).
If a consumer requests only an HMAC algorithm (without encryption), return
EINVAL.
- Add support for the CRD_F_KEY_EXPLICIT flag, for both encryption and
authentication.
I checked other algorithms against this bug and it seems they aren't
affected.
Reported by: Mike Tancsa <mike@sentex.net>
PR: i386/84860
Reviewed by: phk, cperciva(x2)