UDP checksum offload does not work in Azure if the following
conditions are met:
- sizeof(IP hdr + UDP hdr + payload) > 1420.
- IP_DF is not set in IP hdr
Use software checksum for UDP datagrams falling into this category.
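A minimal sketch of the fallback test (hypothetical names; the real
driver checks its own packet and mbuf fields):

    #include <sys/types.h>
    #include <stdbool.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>

    /* Hypothetical cutoff mirroring the description above. */
    #define UDP_CSO_MAX_LEN 1420    /* IP hdr + UDP hdr + payload */

    static bool
    need_sw_udp_csum(const struct ip *ip, size_t pktlen)
    {
            /* Offload is unusable when the datagram is large and
             * IP_DF is clear; checksum in software instead. */
            return (pktlen > UDP_CSO_MAX_LEN &&
                (ntohs(ip->ip_off) & IP_DF) == 0);
    }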
Add two tunables to disable UDP/IPv4 and UDP/IPv6 checksum offload, in
case something unexpected happens.
MFC after: 1 week
Sponsored by: Microsoft
Differential Revision: https://reviews.freebsd.org/D12429
Said checks were inherently racy anyway as jokers could unmap target areas
before the handler got around to accessing them.
This saves time by avoiding locking the address space.
MFC after: 1 week
It was supposed to provide a recovery mechanism against bugs in procfs's
long deprecated tracing capabilities.
Remove the tool as a prerequisite to axing the kernel side.
The tracing facility to use is ptrace(2).
MFC after: 2 weeks
tid must be equal to curthread's tid, and the target routine was
already reading it anyway, so this is not a problem. Not passing it as
a parameter allows for slightly shorter code in callers.
MFC after: 1 week
This patch adds "vers" and "minorvers" arguments to nfscl_reqstart().
The patch always passes them in as "0" and that implies no change
in semantics. These arguments will be used by a future commit that
adds support for the Flexible File Layout.
Normally wakeups() are performed for completed softupdates work items
in workitem_free() before the underlying memory is free()'d.
complete_jseg() was clearing the "wakeup needed" flag in work items to
defer the wakeup until the end of each loop iteration. However, this
resulted in the item being free'd before its address was used with
wakeup(). As a result, another part of the kernel could allocate this
memory from malloc() and use it as a wait channel for a different
"event" with a different lock. This triggered an assertion failure
when the lock passed to sleepq_add() did not match the existing lock
associated with the sleep queue. Fix this by removing the code to
defer the wakeup in complete_jseg() allowing the wakeup to occur
slightly earlier in workitem_free() before free() is called.
The main reason I can think of for deferring a wakeup() would be to
avoid waking up a waiter while holding a lock that the waiter would
need. However, no locks are dropped in between the wakeup() in
workitem_free() and the end of the loop in complete_jseg() as far as I
can tell.
In general I think it is not safe to do a wakeup() after free() as one
cannot control how other parts of the kernel that might reuse the
address for a different wait channel will handle spurious wakeups.
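As an illustration (simplified; not the softdep code itself), the safe
ordering looks like:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>

    struct work_item;               /* hypothetical work item type */

    static void
    work_item_free(struct work_item *item)
    {
            /* A wait channel is just an address: wake any sleepers
             * while the memory is still ours... */
            wakeup(item);
            free(item, M_TEMP);
            /* ...a wakeup(item) here could hit a recycled allocation
             * already in use as a different wait channel. */
    }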
Reported by: pho
Reviewed by: kib
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D12494
GPUs: radeonkms, i915kms
NICs: if_em, if_igb, if_bnxt
This metadata isn't used yet, but it will be handy to have later to
implement automatic module loading.
Reviewed by: imp, mmacy
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D12488
Some x86 class CPUs have accelerated intrinsics for SHA1 and SHA256.
Provide this functionality on CPUs that support it.
This implements CRYPTO_SHA1, CRYPTO_SHA1_HMAC, and CRYPTO_SHA2_256_HMAC.
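For reference, a userland sketch of the capability check (assuming
GCC/Clang's <cpuid.h>; the in-kernel probe is done differently): the
SHA extensions are advertised in CPUID leaf 7, EBX bit 29.

    #include <stdbool.h>
    #include <cpuid.h>

    static bool
    cpu_has_sha_extensions(void)
    {
            unsigned int eax, ebx, ecx, edx;

            /* CPUID.(EAX=7,ECX=0):EBX bit 29 == SHA extensions. */
            if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) == 0)
                    return (false);
            return ((ebx & (1u << 29)) != 0);
    }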
Correctness: The cryptotest.py suite in tests/sys/opencrypto has been
enhanced to verify SHA1 and SHA256 HMAC using standard NIST test vectors.
The test passes on this driver. Additionally, jhb's cryptocheck tool has
been used to compare various random inputs against OpenSSL. This test also
passes.
Rough performance averages on AMD Ryzen 1950X (4kB buffer):
aesni:      SHA1: ~8300 Mb/s  SHA256: ~8000 Mb/s
cryptosoft: SHA1: ~1800 Mb/s  SHA256: ~1800 Mb/s
So ~4.4-4.6x speedup depending on algorithm choice. This is consistent with
the results the Linux folks saw for 4kB buffers.
The driver borrows SHA update code from sys/crypto sha1 and sha256. The
intrinsic step function comes from Intel under a 3-clause BSDL.[0] The
intel_sha_extensions_sha<foo>_intrinsic.c files were renamed and lightly
modified (added const, resolved a warning or two; included the sha_sse
header to declare the functions).
[0]: https://software.intel.com/en-us/articles/intel-sha-extensions-implementations
Reviewed by: jhb
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D12452
Some SoCs require a write to an unknown register to work correctly.
This write should be in the pmu region, not in the phy ctrl one.
Reported by: Mark Millard (markmi@dsl-only.net)
A misordering in the Via padlock driver really strongly suggested that these
should use C99 named initializers.
No functional change.
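For example (hypothetical struct, not the padlock code): with
positional initializers a swapped pair of same-typed fields compiles
silently, while named initializers make the target of every value
explicit.

    struct hash_desc {
            int     type;
            int     keysize;
            int     hashsize;
    };

    /* Positional: swapping keysize and hashsize goes unnoticed. */
    static const struct hash_desc broken = { 1, 20, 64 };

    /* Named: order is irrelevant and intent is visible. */
    static const struct hash_desc fixed = {
            .type = 1,
            .keysize = 64,
            .hashsize = 20,
    };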
Sponsored by: Dell EMC Isilon
Theoretically, HMACs do not actually have any limit on key sizes.
Transforms should compact input keys larger than the HMAC block size by
using the transform (hash) on the input key.
(Short input keys are padded out with zeros to the HMAC block size.)
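A sketch of that RFC 2104 key preparation for SHA-256 (using libmd's
SHA256 interface; the helper's name is made up):

    #include <sys/types.h>
    #include <stdint.h>
    #include <string.h>
    #include <sha256.h>             /* libmd; link with -lmd */

    #define HMAC_SHA256_BLOCK_LEN   64
    #define HMAC_SHA256_DIGEST_LEN  32

    static void
    hmac_sha256_prepare_key(const uint8_t *key, size_t keylen,
        uint8_t block[HMAC_SHA256_BLOCK_LEN])
    {
            SHA256_CTX ctx;

            memset(block, 0, HMAC_SHA256_BLOCK_LEN);
            if (keylen > HMAC_SHA256_BLOCK_LEN) {
                    /* Long key: compact it with the hash itself. */
                    SHA256_Init(&ctx);
                    SHA256_Update(&ctx, key, keylen);
                    SHA256_Final(block, &ctx);  /* 32 bytes; rest 0 */
            } else {
                    /* Short key: zero-padded to the block size. */
                    memcpy(block, key, keylen);
            }
    }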
Still, not all FreeBSD crypto drivers that provide HMAC functionality
handle longer-than-blocksize keys appropriately, so enforce a "maximum" key
length in the crypto API for auth_hashes that previously expressed a
requirement. (The "maximum" is the size of a single HMAC block for the
given transform.) Unconstrained auth_hashes are left as-is.
I believe the previous hardcoded sizes were committed in the original
import of opencrypto from OpenBSD and are due to specific protocol
details of IPSec. Note that none of the previous sizes actually matched
the appropriate HMAC block size.
The previous hardcoded sizes made the SHA tests in cryptotest.py
useless for testing FreeBSD crypto drivers; none of the NIST-KAT example
inputs had keys sized to the previous expectations.
The following drivers were audited to check that they handled keys up to
the block size of the HMAC safely:
Software HMAC:
* padlock(4)
* cesa
* glxsb
* safe(4)
* ubsec(4)
Hardware accelerated HMAC:
* ccr(4)
* hifn(4)
* sec(4) (Only supports up to 64 byte keys despite claiming to
support SHA2 HMACs, but validates input key sizes)
* cryptocteon (MIPS)
* nlmsec (MIPS)
* rmisec (MIPS) (Amusingly, does not appear to use key material at
all -- presumed broken)
Reviewed by: jhb (previous version), rlibby (previous version)
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D12437
I managed to commit an older version of the change.
Plus, even the latest version was not ready for userland compilation.
Reported by: "O. Hartmann" <ohartmann@walstatt.org>,
cy
MFC after: 1 week
X-MFC with: r324011
Fix a leak introduced in r324007: the data allocated by strdup() was
never freed.
While here, remove cast to caddr_t when freeing dp.
Reported by: bde
MFC after: 1 week
X-MFC with: r324007
FreeBSD notes:
- this MFV reverts FreeBSD commit r314549 to make the merge easier
- at present our emulation of cv_timedwait_hires is rather poor,
so I elected to use cv_timedwait_sbt directly
Please see the differential revision for details.
Unfortunately, I did not get any positive reviews, so there could be
bugs in the FreeBSD-specific piece of the merge.
Hence, the long MFC timeout.
illumos/illumos-gate@1271e4b10d
https://www.illumos.org/issues/8585
The current implementation of zil_commit() can introduce significant
latency, beyond what is inherent due to the latency of the underlying
storage. The additional latency comes from two main problems:
1. When there are outstanding ZIL blocks being written (i.e. there's
already a "writer thread" in progress), any new calls to
zil_commit() will block waiting for the currently outstanding ZIL
blocks to complete. The blocks written for each "writer thread" are
coined a "batch", and there can only ever be a single "batch" being
written at a time. When a batch is being written, any new ZIL
transactions will have to wait for the next batch to be written,
which won't occur until the current batch finishes.
As a result, the underlying storage may not be used as efficiently
as possible. While "new" threads enter zil_commit() and are blocked
waiting for the next batch, it's possible that the underlying
storage isn't fully utilized by the current batch of ZIL blocks. In
that case, it'd be better to allow these new threads to generate
(and issue) a new ZIL block, such that it could be serviced by the
underlying storage concurrently with the other ZIL blocks that are
being serviced.
2. Any call to zil_commit() must wait for all ZIL blocks in its "batch"
to complete, prior to zil_commit() returning. The size of any given
batch is proportional to the number of ZIL transactions in the queue
at the time that the batch starts processing the queue, which
doesn't occur until the previous batch completes. Thus, if there's a
lot of transactions in the queue, the batch could be composed of
many ZIL blocks, and each call to zil_commit() will have to wait for
all of these writes to complete (even if the thread calling
zil_commit() only cared about one of the transactions in the batch).
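A toy model of the batching behavior described above (plain pthreads,
not ZFS code), showing how a single in-flight batch serializes all
committers:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t batch_done = PTHREAD_COND_INITIALIZER;
    static bool writer_active;

    static void
    flush_queued_records(void)
    {
            /* Stand-in: issue and await the queued ZIL writes. */
    }

    static void
    toy_commit(void)
    {
            pthread_mutex_lock(&lock);
            /* Only one "batch" may be in flight: later callers stall
             * here even if the device could absorb more work. */
            while (writer_active)
                    pthread_cond_wait(&batch_done, &lock);
            writer_active = true;
            pthread_mutex_unlock(&lock);

            flush_queued_records(); /* whole queue == one "batch" */

            pthread_mutex_lock(&lock);
            writer_active = false;
            pthread_cond_broadcast(&batch_done);
            pthread_mutex_unlock(&lock);
    }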
Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Approved by: Dan McDonald <danmcd@joyent.com>
Author: Prakash Surya <prakash.surya@delphix.com>
MFC after: 1 month
Differential Revision: https://reviews.freebsd.org/D12355
In base, locale- (and encoding-) specific directories are not used
by any tool; just remove them.
While here, also remove the cat page directory for openssl.
When crypto_newsession() is given a request for an unsupported capability,
raise a more specific error than EINVAL.
This allows cryptotest.py to skip some HMAC tests that a driver does not
support.
Reviewed by: jhb, rlibby
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D12451
If unwinding stops due to hitting the end of the call chain, the return
value is supposed to be _URC_END_OF_STACK; other values indicate internal
errors. The return value from get_eit_entry() is now returned without
translating it to _URC_FAILURE, so that callers can see _URC_END_OF_STACK
when it happens.
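Schematically (hypothetical types; the real code lives in the ARM
EHABI unwinder), the change amounts to propagating the lookup status
unchanged:

    typedef enum {
            URC_OK,
            URC_END_OF_STACK,
            URC_FAILURE
    } reason_code;

    static reason_code
    lookup_eit_entry(void *pc)
    {
            /* Stand-in: pretend we ran off the end of the chain. */
            (void)pc;
            return (URC_END_OF_STACK);
    }

    static reason_code
    unwind_next_frame(void *pc)
    {
            reason_code rc;

            rc = lookup_eit_entry(pc);
            if (rc != URC_OK)
                    return (rc);    /* was: return (URC_FAILURE); */
            /* ... continue unwinding this frame ... */
            return (URC_OK);
    }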
When raising an exception, the unwinder searches for a catch handler and if
none is found it should invoke std::terminate() with the uncaught exception
as the "current" exception. Before this change, the terminate handler was
invoked with no exception as current (abi::__cxa_current_exception_type()
returned NULL), because the return value from the unwinder indicated an
internal failure in unwinding. It turns out that was because all errors
from get_eit_entry() were translated to _URC_FAILURE. Now the error is
returned untranslated, which allows _URC_END_OF_STACK to percolate upwards
to throw_exception() in libcxxrt. When it sees that return status it
properly calls std::terminate() with the uncaught exception installed
as the current exception, allowing custom terminate handlers to work
with it.
vm_page_try_to_free() is testing conditions, like clean versus dirty,
that only vary in managed pages.
Suggested by: kib
Reviewed by: markj
X-MFC after: never
There was a panic() in the NFS server's write operation that didn't
need to be a panic() and could just be an error return.
This patch makes that change.
Found by code inspection during development of the pNFS service.
MFC after: 2 weeks
Like r266444, g_resize_provider_event can attempt to orphan an already
orphaned geom_dev consumer. This will cause a panic in g_dev_orphan. Apply
the same fix as was applied to g_orphan_register.
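The shape of the guard (pared-down types, not the real GEOM structs):
orphaning is made idempotent by flagging a consumer the first time and
skipping it on any later attempt.

    #define CF_ORPHAN       0x1

    struct consumer {
            int     flags;
    };

    static void
    orphan_consumer(struct consumer *cp)
    {
            if (cp->flags & CF_ORPHAN)
                    return;         /* already orphaned: skip */
            cp->flags |= CF_ORPHAN;
            /* ... deliver the orphan event to the consumer ... */
    }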
Reviewed by: ae
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D12469
nfsm_uiombuflist() zero-filled the mbuf list to a multiple of 4 bytes,
as required for XDR. Unfortunately, that modified the mbuf list after
it had been m_copym()'d, which was broken. This patch removes the zero
filling code.
Since nfsm_uiombuflist() is not yet used in head/current, this has no
effect on users.
The function will be used by a future commit of code that adds Flex
File Layout support.
In vm_page_try_to_free(), the call to pmap_remove_all() can be avoided
when the page's containing object has a reference count of zero. (If
the object has a reference count of zero, then none of its pages can
possibly be mapped.)
Address nearby style issues in vm_page_try_to_free(), and change its
return type to "bool".
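The shortcut, schematically (simplified; the real function checks
several other conditions first):

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>

    static void
    remove_mappings_if_any(vm_page_t m)
    {
            /* An unreferenced object cannot have any of its pages
             * mapped, so the pmap walk can be skipped outright. */
            if (m->object->ref_count != 0)
                    pmap_remove_all(m);
    }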
Reviewed by: kib, markj
MFC after: 1 week