freebsd-dev/sys/dev/random/hash.c

/*-
* Copyright (c) 2000-2015 Mark R V Murray
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer
* in this position and unchanged.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#ifdef _KERNEL
#include <sys/param.h>
#include <sys/malloc.h>
#include <sys/random.h>
#include <sys/sysctl.h>
#include <sys/systm.h>
#else /* !_KERNEL */
#include <sys/param.h>
#include <sys/types.h>
#include <assert.h>
#include <inttypes.h>
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <threads.h>
#define KASSERT(x, y) assert(x)
#define CTASSERT(x) _Static_assert(x, "CTASSERT " #x)
#endif /* _KERNEL */
#define CHACHA_EMBED
#define KEYSTREAM_ONLY
#define CHACHA_NONCE0_CTR128
#include <crypto/chacha20/chacha.c>
#include <crypto/rijndael/rijndael-api-fst.h>
#include <crypto/sha2/sha256.h>
#include <dev/random/hash.h>
#ifdef _KERNEL
#include <dev/random/randomdev.h>
#endif
/* This code presumes that RANDOM_KEYSIZE is twice as large as RANDOM_BLOCKSIZE */
CTASSERT(RANDOM_KEYSIZE == 2*RANDOM_BLOCKSIZE);
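/*
 * (For reference: RANDOM_BLOCKSIZE is the 128-bit AES block/counter size and
 * RANDOM_KEYSIZE the 256-bit SHA-256/AES-256 key size, hence the 2x ratio.)
 */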
/* Validate that full Chacha IV is as large as the 128-bit counter */
_Static_assert(CHACHA_STATELEN == RANDOM_BLOCKSIZE, "");
/*
* Knob to control use of Chacha20-based PRF for Fortuna keystream primitive.
*
* Benefits include somewhat faster keystream generation compared with
* unaccelerated AES-ICM; reseeding is much cheaper than computing AES key
* schedules.
*/
bool random_chachamode __read_frequently = true;
#ifdef _KERNEL
SYSCTL_BOOL(_kern_random, OID_AUTO, use_chacha20_cipher, CTLFLAG_RDTUN,
    &random_chachamode, 0,
    "If non-zero, use the ChaCha20 cipher for randomdev PRF (default). "
    "If zero, use AES-ICM cipher for randomdev PRF (12.x default).");
#endif
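/*
 * Illustrative example (not part of this file): CTLFLAG_RDTUN makes the knob
 * a boot-time tunable, so it is set from loader.conf rather than at runtime:
 *
 *      kern.random.use_chacha20_cipher="0"     # select the AES-ICM PRF
 */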
/* Initialise the hash */
void
randomdev_hash_init(struct randomdev_hash *context)
{
        SHA256_Init(&context->sha);
}
/* Iterate the hash */
void
randomdev_hash_iterate(struct randomdev_hash *context, const void *data, size_t size)
{
        SHA256_Update(&context->sha, data, size);
}
/* Conclude by returning the hash in the supplied <*buf> which must be
* RANDOM_KEYSIZE bytes long.
*/
void
randomdev_hash_finish(struct randomdev_hash *context, void *buf)
{
        SHA256_Final(buf, &context->sha);
}
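/*
 * Usage sketch (illustrative only): the three routines above wrap SHA-256 in
 * an init/iterate/finish sequence that reduces arbitrary input to
 * RANDOM_KEYSIZE bytes of key material; 'somedata'/'somesize' are
 * placeholders:
 *
 *      struct randomdev_hash ctx;
 *      uint8_t newkey[RANDOM_KEYSIZE];
 *
 *      randomdev_hash_init(&ctx);
 *      randomdev_hash_iterate(&ctx, somedata, somesize);  (may be repeated)
 *      randomdev_hash_finish(&ctx, newkey);
 */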
/* Initialise the encryption routine by setting up the key schedule
* from the supplied <*data> which must be RANDOM_KEYSIZE bytes of binary
* data.
*/
void
randomdev_encrypt_init(union randomdev_key *context, const void *data)
{
        if (random_chachamode) {
                chacha_keysetup(&context->chacha, data, RANDOM_KEYSIZE * 8);
        } else {
                rijndael_cipherInit(&context->cipher, MODE_ECB, NULL);
                rijndael_makeKey(&context->key, DIR_ENCRYPT, RANDOM_KEYSIZE*8, data);
        }
}
/*
* Create a pseudorandom output stream of 'bytecount' bytes using a CTR-mode
* cipher or similar. The 128-bit counter is supplied in the in-out parameter
* 'ctr.' The output stream goes to 'd_out.'
*
* If AES is used, 'bytecount' is guaranteed to be a multiple of
* RANDOM_BLOCKSIZE.
*/
void
randomdev_keystream(union randomdev_key *context, uint128_t *ctr,
    void *d_out, size_t bytecount)
{
        size_t i, blockcount, read_chunk;

        if (random_chachamode) {
                uint128_t lectr;

                /*
                 * Chacha always encodes and increments the counter little
                 * endian. So on BE machines, we must provide a swapped
                 * counter to chacha, and swap the output too.
                 */
                le128enc(&lectr, *ctr);
                chacha_ivsetup(&context->chacha, NULL, (const void *)&lectr);

                while (bytecount > 0) {
                        /*
                         * We are limited by the chacha_encrypt_bytes API to
                         * u32 bytes per chunk.
                         */
                        read_chunk = MIN(bytecount,
                            rounddown((size_t)UINT32_MAX, CHACHA_BLOCKLEN));
                        chacha_encrypt_bytes(&context->chacha, NULL, d_out,
                            read_chunk);
                        d_out = (char *)d_out + read_chunk;
                        bytecount -= read_chunk;
                }

                /*
                 * Decode Chacha-updated LE counter to native endian and store
                 * it back in the caller's in-out parameter.
                 */
                chacha_ctrsave(&context->chacha, (void *)&lectr);
                *ctr = le128dec(&lectr);
                explicit_bzero(&lectr, sizeof(lectr));
        } else {
                KASSERT(bytecount % RANDOM_BLOCKSIZE == 0,
                    ("%s: AES mode invalid bytecount, not a multiple of native "
                    "block size", __func__));

                blockcount = bytecount / RANDOM_BLOCKSIZE;
                for (i = 0; i < blockcount; i++) {
                        /*-
                         * FS&K - r = r|E(K,C)
                         *      - C = C + 1
                         */
                        rijndael_blockEncrypt(&context->cipher, &context->key,
                            (void *)ctr, RANDOM_BLOCKSIZE * 8, d_out);
                        d_out = (char *)d_out + RANDOM_BLOCKSIZE;
                        uint128_increment(ctr);
                }
        }
}
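/*
 * Usage sketch (illustrative only; the real callers live in the CSPRNG code,
 * not in this file): key the generator and pull keystream, letting the call
 * advance the 128-bit counter in place. 'newkey' is assumed to hold
 * RANDOM_KEYSIZE bytes, and the buffer size is a multiple of RANDOM_BLOCKSIZE
 * so the AES path is also legal:
 *
 *      union randomdev_key genkey;
 *      uint128_t ctr;
 *      uint8_t buf[4 * RANDOM_BLOCKSIZE];
 *
 *      memset(&ctr, 0, sizeof(ctr));
 *      randomdev_encrypt_init(&genkey, newkey);
 *      randomdev_keystream(&genkey, &ctr, buf, sizeof(buf));
 */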
/*
* Fetch a pointer to the relevant key material and its size.
*
* This API is expected to be used only for reseeding, where the
* endianness does not matter; the goal is to simply incorporate the key
* material into the hash iterator that will produce key'.
*
* Do not expect the buffer pointed to by this API to match the exact
* endianness, etc., of the key material that was supplied to
* randomdev_encrypt_init().
*/
void
randomdev_getkey(union randomdev_key *context, const void **keyp, size_t *szp)
{
        if (!random_chachamode) {
                *keyp = &context->key.keyMaterial;
                *szp = context->key.keyLen / 8;
                return;
        }

        /* Chacha20 mode */
        *keyp = (const void *)&context->chacha.input[4];

        /* Sanity check keysize */
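        /*
         * Note (descriptive, based on the reference chacha.c): keysetup loads
         * the first four state words with the "expand 32-byte k" constant
         * (sigma) for 256-bit keys, or "expand 16-byte k" (tau) for 128-bit
         * keys, so those words identify the configured key size.
         */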
        if (context->chacha.input[0] == U8TO32_LITTLE(sigma) &&
            context->chacha.input[1] == U8TO32_LITTLE(&sigma[4]) &&
            context->chacha.input[2] == U8TO32_LITTLE(&sigma[8]) &&
            context->chacha.input[3] == U8TO32_LITTLE(&sigma[12])) {
                *szp = 32;
                return;
        }
#if 0
        /*
         * Included for the sake of completeness; as-implemented, Fortuna
         * doesn't need or use 128-bit Chacha20.
         */
        if (context->chacha.input[0] == U8TO32_LITTLE(tau) &&
            context->chacha.input[1] == U8TO32_LITTLE(&tau[4]) &&
            context->chacha.input[2] == U8TO32_LITTLE(&tau[8]) &&
            context->chacha.input[3] == U8TO32_LITTLE(&tau[12])) {
                *szp = 16;
                return;
        }
#endif
#ifdef _KERNEL
        panic("%s: Invalid chacha20 keysize: %16D\n", __func__,
            (void *)context->chacha.input, " ");
#else
        raise(SIGKILL);
#endif
}