is to be requested via a "ro" option. At the same time, MNT_RDONLY
is gradually becoming an indicator of the current state of the FS
instead of a command flag. Today passing MNT_RDONLY alone to the
kernel's mount machinery will lead to various glitches. (See the
PRs for examples.)
Therefore mount the root FS with a "ro" option instead of the
MNT_RDONLY flag. (Note that MNT_RDONLY still is added to the mount
flags internally, by vfs_donmount(), if "ro" was specified.)
To be able to pass "ro" cleanly to kernel_vmount(), teach the latter
function to accept options with NULL values.
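As a hedged sketch (the call site and the surrounding variables here are
assumptions for illustration, not copied from the change), the root
mount could then look roughly like:

	/* "ro" is passed as a valueless option; kernel_vmount() takes
	 * name/value string pairs terminated by NULL and now tolerates
	 * a NULL value for such an option. */
	error = kernel_vmount(0,	/* no MNT_RDONLY in the flags */
	    "fstype", vfsname,		/* hypothetical variables */
	    "fspath", "/",
	    "from", path,
	    "ro", NULL,			/* option with a NULL value */
	    NULL);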
Also correct the comment explaining how mount_arg() handles a
length of -1.
PR: bin/106636 kern/120319
Submitted by: Jaakko Heinonen <see PR kern/120319 for email> (originally)
legacy interrupts rather than MSI as a special case. Prior to this
commit, the interrupt handler was doing the slow handshaking with
the device to ensure the legacy interrupt was lowered in both
the legacy and MSI-X cases. This handshaking was not
required for MSI-X.
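A minimal sketch of the resulting handler shape (all identifiers here
are hypothetical, not taken from the driver):

	static void
	xxx_intr(void *arg)
	{
		struct xxx_softc *sc = arg;

		/* Only legacy INTx needs the slow handshake that makes
		 * sure the interrupt line was actually deasserted. */
		if (!sc->use_msix)
			xxx_legacy_intr_handshake(sc);

		/* ... process completions as before ... */
	}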
|x| or |y| and b is |y| or |x|) when mixing NaN arg(s).
hypot*() had its own foot shooting for mixing NaNs -- it swaps the
args so that |x| is the largest in bits, but does this before quieting
signaling NaNs, so on amd64 (where the result of adding NaNs depends
on the order) it gets inconsistent results if setting the quiet bit
makes a difference, just like the similar hardware comparison on ia64
and i387.
The usual fix (see e_powf.c 1.13 for more details) of mixing using
(a+0.0)+-(b+0.0) doesn't work on amd64 if the args are swapped (since
the order makes a difference with SSE). Fortunately, the original args
are unchanged and don't need to be swapped when we let the hardware
decide the mixing after quieting them, but we need to take their
absolute value.
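A sketch of the NaN part of the fix (illustrative only; the Inf cases
are handled separately in the real code):

	/* Adding 0.0 quiets any signaling NaNs first; then the
	 * hardware mixes the original, unswapped args, and fabs()
	 * supplies the required absolute value. */
	if (isnan(x) || isnan(y))
		return (fabs(x + 0.0) + fabs(y + 0.0));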
hypotf() doesn't seem to have any real bugs masked by this non-bug.
Its maximum error in 2^32 trials on amd64 is now 0.8422 ulps, and on
i386 the maximum error is unchanged and about the same, except with
certain CFLAGS it magically drops to 0.5 (perfect rounding).
Convert to __FBSDID().
(x+0.0)-(y+0.0) for mixing NaNs documented in a previous log message
didn't actually get committed. Apparently, adding 0.0 uniformizes
the order enough to give consistent results.
be into 12+24 bits of precision for extra-precision multiplication,
but was into 13+24 bits. On i386 with -O1 the bug was hidden by
accidental extra precision, but on amd64, in 2^32 trials the bug
caused about 200000 errors of more than 1 ulp, with a maximum error
of about 80 ulps. Now the maximum error in 2^32 trials on amd64
is 0.8573 ulps. It is still 0.8316 ulps on i386 with -O1.
The nearby decomposition of 1/ln2 and the decomposition of 2/(3ln2) in
the double precision version seem to be sub-optimal but not broken.
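For illustration, one standard way to produce such a hi+lo split is to
mask the mantissa (a hedged sketch, not the committed code; it
truncates rather than rounds, which is enough to show the idea):

	#include <stdint.h>
	#include <string.h>

	/* Split C into a high part with only 12 significant bits (so
	 * that products with small integers stay exact) and a low part
	 * carrying the remaining precision. */
	static void
	split12(double C, double *hi, double *lo)
	{
		uint64_t bits;

		memcpy(&bits, &C, sizeof(bits));
		bits &= ~((UINT64_C(1) << 41) - 1); /* keep 1+11 mantissa bits */
		memcpy(hi, &bits, sizeof(bits));
		*lo = C - *hi;
	}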
This uses 2 tricks to improve consistency so that more serious problems
aren't hidden in simple regression tests by noise for the NaNs (a sketch
of the combined idiom follows the list):
- for a signaling NaN, adding 0.0 generates the invalid exception and
converts to a quiet NaN, and doesn't have too many effects for other
types of args (it converts -0 to +0 in some rounding modes, but that
hopefully doesn't change the result after adding the NaN arg). This
avoids some inconsistencies on i386 and ia64. On these arches, the
result of an operation on 2 NaNs is apparently the largest or the
smallest of the NaNs as bits (consistently largest or smallest within
each arch, but opposite between the two arches). I forget which way
the comparison goes and whether the sign bit affects it. The quiet
bit is handled poorly, by not always being set before the comparison
or by being ignored. Thus if one of the args was originally a
signaling NaN and the
other was originally a quiet NaN, then the result depends too much
on whether the signaling NaN has been quieted at this point, which
in turn depends on optimizations and promotions. E.g., passing float
signaling NaNs to double functions must quiet them on conversion;
on i387, loading a signaling NaN of type float or double (but not
long double) into a register involves a conversion, so it quiets
signaling NaNs, so if the addition has 2 register operands then it
only sees quiet NaNs, but if the addition has a memory operand then
it sees a signaling NaN iff it is in the memory operand.
- subtraction instead of addition is used to avoid a dubious optimization
in old versions of gcc. For SSE operations, mixing of NaNs apparently
always gives the target operand. This is not as good as the i387
and ia64 behaviour: SSE doesn't mix NaNs at all, and makes addition
not quite commutative. Old versions of gcc sometimes rewrite x+y
to y+x and thus give different results (in bits) for NaNs. gcc-3.3.3
rewrites x+y to y+x for one of pow() and powf() but not the other,
so starting from float NaN args x and y, powf(x, y) was almost always
different from pow(x, y).
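Put together, the idiom these two tricks produce is roughly the
following (an illustrative sketch, not the committed code):

	/* x+0.0 quiets a signaling NaN (raising invalid); subtraction
	 * keeps old gcc from commuting the operands, and the sign flip
	 * is harmless since the result is a NaN anyway. */
	if (isnan(x) || isnan(y))
		return ((x + 0.0) - (y + 0.0));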
These tricks won't give consistency of 2-arg float and double functions
with long double ones on amd64, since long double ones use the i387
which has different semantics from SSE.
Convert to __FBSDID().
allocator for jumbo frames.
o Removed unneeded jlist lock which was used to manage jumbo
buffers.
o Don't reinitialize hardware if MTU was not changed.
o Added additional check for minimal MTU size.
o Added a new tunable hw.skc.jumbo_disable to disable jumbo frame
support for the driver. The tunable can be set on systems that do
not need jumbo frames; it saves (9K * number of Rx descriptors)
bytes of kernel memory. (See the example after this list.)
o Jumbo buffer allocation failure is no longer a critical error for
the operation of sk(4). If sk(4) encounters an allocation failure,
it just disables jumbo frame support and continues to work without
user intervention.
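For example, the new tunable could be set from loader.conf
(hypothetical usage):

	# /boot/loader.conf
	hw.skc.jumbo_disable="1"	# no jumbo buffers for sk(4)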
With these changes, the jumbo frame performance of sk(4) is slightly
increased and users should no longer encounter jumbo buffer allocation
failures. Previously sk(4) tried to allocate physically contiguous
memory, 3388KB for 256 Rx descriptors. Sometimes that amount of
contiguous memory was not available on running systems, which in turn
made loading the driver fail.
Tested by: Cy Schubert < Cy.Schubert () komquats dot com >
modules using invalid ABI versions (e.g. a 7.x module with an 8.x kernel)
for a given kernel:
- Add a 'kernel' module version whose value is __FreeBSD_version.
- Add a version dependency on 'kernel' in every module that has an
acceptable version range of __FreeBSD_version up to the end of the
branch __FreeBSD_version is part of. E.g. a module compiled on 701000
would work on kernels with versions between 701000 and 799999 inclusive.
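A rough sketch of the mechanism using the standard module macros (the
module name 'foo' and the way the branch-end version is computed here
are assumptions for illustration):

	/* The kernel exports its version as a module version... */
	MODULE_VERSION(kernel, __FreeBSD_version);

	/* ...and each module depends on a range that ends at the last
	 * version of the current branch, e.g. 701000..799999. */
	MODULE_DEPEND(foo, kernel, __FreeBSD_version, __FreeBSD_version,
	    (__FreeBSD_version / 100000 + 1) * 100000 - 1);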
Discussed on: arch@
MFC after: 1 week
A couple of notes for this:
* WITNESS support, when enabled, is only used for shared locks in order
to avoid problems with the "disowned" locks
* KA_HELD and KA_UNHELD only exist in the lockmgr namespace in order
to assert that a generic thread (not curthread) does or does not
own the lock. Really, this kind of check is bogus, but it seems
very widespread in consumer code. So, for the moment, we cater to
this untrusted behaviour until the consumers are fixed and the
options can be removed (hopefully during the 8.0-CURRENT lifecycle)
* Implementing KA_HELD and KA_UNHELD (not supported natively by
WITNESS) made necessary the introduction of LA_MASKASSERT, which
specifies the range for default lock assertion flags
* In other respects, lockmgr_assert() follows exactly what other
locking primitives offer for this operation.
- Build real assertions for buffer cache locks on top of
lockmgr_assert(). They can be used with the BUF_ASSERT_*(bp)
paradigm (hypothetical usage follows the list).
- Add checks at lock destruction time and use a cookie for verifying
lock integrity at any operation.
- Redefine BUF_LOCKFREE() in order to not use a direct assert but
let it rely on the aforementioned destruction time check.
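Hypothetical usage of the new assertions, following the description
above (exact flag combinations are assumptions):

	/* Assert that curthread holds bp's buffer lock. */
	BUF_ASSERT_LOCKED(bp);

	/* Assert that some thread (not necessarily curthread) holds
	 * the lock -- the KA_HELD style of check catered to above. */
	lockmgr_assert(&lk, KA_HELD);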
This evidently breaks the KPI, so a __FreeBSD_version bump and a
manpage update are necessary and will be committed soon.
Side note: lockmgr_assert() will be used soon in order to implement
real assertions in the vnode namespace replacing the legacy and still
bogus "VOP_ISLOCKED()" way.
Tested by: kris (earlier version)
Reviewed by: jhb
access cache improvements:
- Flush just access control state on CODA_PURGEUSER, not the full
namecache for /coda.
- When replacing a fid on a cnode as a result of, e.g.,
reintegration after offline operation, we no longer need to
purge the namecache entries associated with its vnode.
MFC after: 1 month
said that the previous commit was almost a null forced commit too. It
just converted to __FBSDID(). I was going to change `huge' from its
double precision value of 1e300, but that seems to be unnecessary since
`huge' is only used to set FE_INEXACT, and any value with an exponent
larger than LDBL_MANT_DIG will do for that, while initializing a really
huge value in a portable way would require more code.
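The usual fdlibm idiom this refers to looks roughly like this (the
return value is illustrative):

	static const double huge = 1.0e300;
	...
	/* The only use of `huge' is to force the FE_INEXACT exception;
	 * any value with a large enough exponent would do. */
	if (huge + x > 0.0)
		return (result);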
and trunc() to the corresponding long double functions. This is not
just an optimization for these arches. The full long double functions
have a wrong value for `huge', and the arches without full long doubles
depended on it being wrong.
modeled on the access cache found in NFS, smbfs, and the Linux coda
module. This is a positive access cache of a single entry per file,
tracking recently granted rights, but unlike NFS and smbfs,
supporting explicit invalidation by the distributed file system.
For each cnode, maintain a C_ACCCACHE flag indicating the validity
of the cache, and a cached uid and mode tracking recently granted
positive access control decisions.
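A hedged sketch of the per-cnode state described here (field names and
types are illustrative, not necessarily the committed ones):

	struct cnode {
		/* ... existing fields ... */
		u_short	c_flag;		/* includes C_ACCCACHE: cache valid */
		uid_t	c_cacheduid;	/* uid of the last granted access */
		mode_t	c_cachedmode;	/* rights recently granted to it */
	};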
Prefer the cache to venus_access() in VOP_ACCESS() if it is valid,
and when we must fall back to venus_access(), update the cache.
Allow Venus to clear the access cache, either the whole cache on
CODA_FLUSH, or just entries for a specific uid on CODA_PURGEUSER.
Unlike the Coda module on Linux, we don't flush all entries on a
user purge using a generation number; instead we walk the present
cnodes and clear only the entries for the specific user, which is
somewhat more expensive but won't hit all users.
Since the Coda module is aggressive about not keeping around
unopened cnodes, the utility of the cache is somewhat limited for
files, but it works well for directories. We should make Coda less
aggressive about GCing cnodes in VOP_INACTIVE() in order to improve
the effectiveness of in-kernel caching of attributes and access
rights.
MFC after: 1 month
VFS namecache, as is done by the Coda module on Linux. Unlike the Coda
namecache, the global VFS namecache isn't tagged by credential, so use
more conservative flushing behavior (for now) when CODA_PURGEUSER is
issued by Venus.
This improves overall integration with the FreeBSD VFS, including
allowing __getcwd() to work better, procfs/procstat monitoring, and so
on. This improves shell behavior in many cases, and improves ".."
handling. It may lead to some slowdown until we've implemented a
specific access cache, which should be a net improvement to
performance; but in the mean time, lookup access control now always
goes to Venus, whereas previously it didn't.
MFC after: 1 month
When the refcount in ntfs_ntput() reaches 0, the inode's lockmgr lock
is destroyed without being released first. Fix this by unlocking the
lockmgr lock even in the zero-refcount case.
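A sketch of the fix (the identifiers here are illustrative):

	if (--ip->i_usecount == 0) {
		/* Unlock before destroying, even on the zero-refcount
		 * path, instead of destroying a held lock. */
		lockmgr(&ip->i_lock, LK_RELEASE, NULL);
		lockdestroy(&ip->i_lock);
		free(ip, M_NTFSNTNODE);
	}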
Reported by: dougb, yar, Scot Hetzel <swhetzel at gmail dot com>
Submitted by: yar
This has the side effect of confusing gcc-4.2.1's optimizer into more
often doing the right thing. When it does the wrong thing here, it
seems to be mainly making too many copies of x with dependency chains.
This effect is tiny on amd64, but in some cases on i386 it is enormous.
E.g., on i386 (A64) with -O1, the current version of exp2() should
take about 50 cycles, but took 83 cycles before this change and 66
cycles after this change. exp2f() with -O1 only speeded up from 51
to 47 cycles. (exp2f() should take about 40 cycles, on an Athlon in
either i386 or amd64 mode, and now takes 42 on amd64). exp2l() with
-O1 slowed down from 123 cycles to 155 for some args; this is unimportant
since the i386 exp2l() is a fake; the wrong thing for it seems to
involve branch misprediction.
faster on all machines tested (old Celeron (P2), A64 (amd64 and i386)
and ia64) except on ia64 when compiled with -O1. It takes 2 more
multiplications, so it will be slower on old machines. The speedup
is about 8 cycles = 17% on A64 (amd64 and i386) with best CFLAGS
and some parallelism in the caller.
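The general idea, as an illustrative sketch (coefficients and degree
are made up): trade one or two extra multiplications for a shorter
dependency chain.

	/* Horner form: one long serial dependency chain. */
	r = P0 + z * (P1 + z * (P2 + z * (P3 + z * P4)));

	/* Higher-parallelism form: computing z*z costs extra
	 * multiplications, but the two halves can then be evaluated
	 * in parallel. */
	z2 = z * z;
	r = (P0 + z * P1) + z2 * ((P2 + z * P3) + z2 * P4);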
Move the evaluation of 2**k up a bit so that it doesn't compete too
much with the new polynomial evaluation. Unlike the previous
optimization, this rearrangement cannot change the result, so compilers
and CPU schedulers can do it, but they don't do it quite right yet.
This saves a whole 1 or 2 cycles on A64.
- mention LRO support
- describe multi-slice related tunables.
- correct DIAGNOSTICS section to reflect that missing firmware
is non-fatal.
Submitted by: gallatin
nfs_xid_gen() function instead of duplicating the logic in both
nfsm_rpchead() and the NFS3ERR_JUKEBOX handling in nfs_request().
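A hedged sketch of what the shared helper could look like (the locking,
the mutex name, and the use of arc4random() are assumptions):

	static uint32_t nfs_xid;

	uint32_t
	nfs_xid_gen(void)
	{
		uint32_t xid;

		mtx_lock(&nfs_xid_mtx);
		/* Start from a random xid; afterwards a simple
		 * increment suffices, as wrapping is very unlikely. */
		if (nfs_xid == 0)
			nfs_xid = arc4random();
		xid = ++nfs_xid;
		mtx_unlock(&nfs_xid_mtx);
		return (xid);
	}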
MFC after: 1 week
Submitted by: mohans (a long while ago)
through the FreeBSD ABI. IPC_INFO, SHM_INFO, and SHM_STAT were added
specifically for Linux binary support. They are not documented as
being part of the FreeBSD ABI; moreover, the structures necessary for
them have been hidden from users for a long time.
Also, the Linux ABI layer uses its own structures to populate the
responses back to the user, to ensure that the ABI is consistent.
I think there is a bit more separation work that needs to happen.
Reviewed by: jhb
Discussed with: jhb
Discussed on: freebsd-arch@ (very briefly)
MFC after: 1 month