the provider - also apply to UFS1 filesystems. This should help with
resizing filesystems created by makefs(8), which still uses UFS1.
Tested by: jmg@
Sponsored by: The FreeBSD Foundation
and geom_uncompress(4). Now, they produce an almost clean diff(1) output.
Remove a duplicated variable from g_uncompress.c and an unnecessary header
from g_uzip.c.
No functional changes.
Instead of opening/closing the provider from each of the metadata
classes, do it only once in the core code. Since for SCSI disks an
open/close means sending some SCSI commands to the device, this change
reduces taste time.
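For illustration only, a minimal sketch of the idea (not the committed
core code, whose consumer handling is more involved): the core taste
loop opens the provider once for all classes instead of each class
doing its own open/close.

    cp = g_new_consumer(gp);
    error = g_attach(cp, pp);
    if (error == 0) {
            error = g_access(cp, 1, 0, 0);          /* single open */
            if (error == 0) {
                    LIST_FOREACH(mp, &g_classes, class) {
                            if (mp->taste != NULL)
                                    mp->taste(mp, pp, 0);
                    }
                    g_access(cp, -1, 0, 0);         /* single close */
            }
            g_detach(cp);
    }
    g_destroy_consumer(cp);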
MFC after: 2 weeks
Sponsored by: iXsystems, Inc.
Make sure not to start I/O bigger than MAXPHYS bytes.
Quoting r264504:
When we detect the condition, we'll reduce the block count and perform
a "short" read. In g_uncompress_done() we need to consider the original
I/O length and stop early if we're about to deflate a block that we didn't
read. By using bio_completed in the cloned BIO and not bio_length to
check for this, we automatically and gracefully handle short reads that
our providers may be doing on top of the short reads we may initiate
ourselves.
Reviewed by: marcel
problems in our providers, such as a KASSERT in md(4). We can initiate
I/O for more than MAXPHYS bytes if we've been given a BIO for MAXPHYS
bytes, the blocks from which we're reading couldn't be compressed, and
we had compression in preceding blocks resulting in misalignment of
the blocks we're trying to read relative to the sector. We're forced to
round up the I/O length to make it a multiple of the sector size.
When we detect the condition, we'll reduce the block count and perform
a "short" read. In g_uzip_done() we need to consider the original I/O
length and stop early if we're about to deflate a block that we didn't
read. By using bio_completed in the cloned BIO and not bio_length to
check for this, we automatically and gracefully handle short reads that
our providers may be doing on top of the short reads we may initiate
ourselves.
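For illustration only (not the committed g_uzip_done() code; sc, toc,
start_blk and bp2 are hypothetical names), the check boils down to
comparing each block's end against bio_completed of the cloned BIO
instead of its bio_length:

    /* bp2 is the cloned BIO that was sent down to the provider. */
    for (i = start_blk; i <= end_blk; i++) {
            /* End of compressed block i, relative to the cloned read. */
            off_t blk_end = sc->toc[i].offset + sc->toc[i].blen -
                bp2->bio_offset;

            if (blk_end > bp2->bio_completed)
                    break;  /* block was not (fully) read: stop early */
            /* ...otherwise inflate block i into the parent BIO... */
    }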
Obtained from: Juniper Networks, Inc.
This caused incorrect behavior of arrays with big-endian DDF metadata.
Little-endian (as used by Adaptec controllers) should not be harmed.
The added workaround should be enough to maintain compatibility.
MFC after: 2 weeks
systems need fine-grained control over what's in and what's out.
That's ideal. For now, separate GPT labels from the rest and allow
g_label to be built with just GPT labels.
Obtained from: Juniper Networks, Inc.
the geom->softc pointer to NULL before freeing the g_slicer softc.
In g_slicer_free() the pointer is checked first.
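Illustrative ordering only (the surrounding slicer code is not shown;
gsp and gp are generic names):

    gsp = gp->softc;
    gp->softc = NULL;       /* clear the back-pointer first */
    g_slicer_free(gsp);     /* checks its argument for NULL */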
Obtained from: Juniper Networks, Inc.
k_ipad.
Note that the two consumers in geli(4) are not affected by this
issue because of the way the code is constructed, and as such we
believe there is no security impact on geli(4)'s usage with or
without this change.
Reported by: Serge van den Boom <serge vdboom.org>
Reviewed by: pjd
MFC after: 2 weeks
the output buffer wasn't being cleared between the inflate() calls,
producing zeroed output after the first inflate() call.
This fixes the read of mkuzip(8) images with geom_uncompress(4).
Reviewed by: ray
Approved by: adrian (mentor)
geom_alloc_copyin() can't return ENOMEM, so describe its failure as a
bad control request. Add a check for a NULL pointer in gctl_dump(),
since the pointer can be NULL when geom_alloc_copyin() failed.
MFC after: 1 week
Add to gctl_error() the ability to specify an error description even
when a numeric error code is already specified. Also set the error
code to EINVAL by default.
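For example (the caller below is hypothetical; gctl_error() itself is
the existing GEOM API), a verb handler can now report a readable
message and rely on the EINVAL default:

    if (strlen(label) >= sizeof(md.md_label)) {
            gctl_error(req, "Label '%s' is too long.", label);
            return;         /* numeric error code defaults to EINVAL */
    }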
PR: 185852
MFC after: 1 week
Fix geom_uncompress(4) module loading. Don't link zlib.c (which is a module
itself) directly.
The built module was verified and used to read a few mkulzma(8) images on
amd64 to validate some of the information in the manual page.
While here, don't overwrite CFLAGS.
Reviewed by: ray
Approved by: adrian (mentor)
This fixes the problem where gmirror starts again just after being
stopped.
The problem occurs when a gmirror component has a geom label of equal
size. E.g. gpt and gptid have the same size as the partition, and
diskid has the same size as the entire disk. When gmirror's geom has
been destroyed, glabel creates its providers and this initiates a
retaste.
Now "gmirror destroy" command is available. It destroys geom and also
erases gmirror's metadata.
MFC after: 2 weeks
going on in here than can be fixed, and I introduced some of my own. Rather
than fix the whole host of them, back out my bugs.
Found by: bde
X-MFC with: r259080
shifts into the sign bit. Instead use (1U << 31) which gets the
expected result.
This fix is not ideal as it assumes a 32-bit int, but it does fix the issue
for most cases.
A similar change was made in OpenBSD.
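For illustration, the difference between the two forms in C:

    uint32_t bad  = 1 << 31;    /* undefined: shifts a signed 1 into
                                   the sign bit */
    uint32_t good = 1U << 31;   /* well defined: 0x80000000 on the
                                   usual 32-bit int */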
Discussed with: -arch, rdivacky
Reviewed by: cperciva
The purpose of the PMBR is to have the disk appear in use to GPT-unaware
utilities (like fdisk). However, if the PMBR has been changed by a
GPT-unaware utility, then we must assume that this was deliberate
(as it involved removal of the special slice) and we should not treat
the unmodified GPT-specific sectors as being valid. By lowering the
probe priority in that case, the MBR scheme will take precedence and
the kernel will end up using the MBR and not the GPT. We will still
use the GPT if the kernel does not support the MBR scheme.
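A minimal sketch of the probing idea (not the actual g_part_gpt.c
code; the gpt_pmbr_is_protective() helper is hypothetical):

    /* Accept the GPT, but let a valid MBR win if the PMBR was altered. */
    if (!gpt_pmbr_is_protective(table))
            return (G_PART_PROBE_PRI_LOW);
    return (G_PART_PROBE_PRI_HIGH);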
Now it is easy to expand the size of the mirror when all its components
are replaced. Also add a g_resize method to the geom_mirror class. It
will write updated metadata to the new last sector when the parent
provider is resized.
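A minimal sketch of the idea (not the actual g_mirror code; the
function and parameter names are hypothetical): on resize, the
metadata sector is rewritten at the new end of the parent provider.

    static void
    example_resize(struct g_consumer *cp, u_char *md_sector)
    {
            struct g_provider *pp = cp->provider;

            /* The metadata always lives in the provider's last sector. */
            (void)g_write_data(cp, pp->mediasize - pp->sectorsize,
                md_sector, pp->sectorsize);
    }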
Silence from: geom@
MFC after: 1 month
already valid metadata found at the new location. This should allow
easy, transparent recovery if the first resize was done by mistake.
While there, unify the metadata write code and fix a minor memory leak.
MFC after: 1 month
In "manual" mode just automatically resize provider in any direction.
In "automatic" mode allow only growth (with new metadata write); in case
of shrinking destroy the multipath device same as before since it may be
undesirable to write new metadata within old user area.
MFC after: 1 month
Without this change, in the worst but unlikely case scenario, certain
administrative operations, including change of configuration, set or
delete key from a GEOM ELI provider, may leave potentially sensitive
information in a buffer allocated from kernel memory.
We believe that it is not possible to actively exploit these issues, nor
does it impact the security of normal usage of GEOM ELI providers when
these operations are not performed after system boot.
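Illustrative sketch only (not the committed geli change; the buffer
names are hypothetical): the pattern is to wipe key material before
handing memory back to the allocator.

    bzero(key, sizeof(key));    /* wipe key material on the stack */
    bzero(buf, len);            /* wipe the temporary kernel buffer */
    free(buf, M_ELI);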
Security: possible sensitive information disclosure
Submitted by: Clement Lecigne <clecigne google com>
MFC after: 3 days
information.
The existing algorithm selects a preferred leaf vdev based on offset of the zio
request modulo the number of members in the mirror. It assumes the devices are
of equal performance and that spreading the requests randomly over both drives
will be sufficient to saturate them. In practice this results in the leaf vdevs
being underutilized.
The new algorithm takes into account the following additional factors:
* Load of the vdevs (number of outstanding I/O requests)
* The locality of the last queued I/O vs. the new I/O request.
Within the locality calculation, additional knowledge about the
underlying vdev is considered, such as whether the device backing the
vdev is a rotating media device.
This results in performance increases across the board as well as significant
increases for predominantly streaming loads and for configurations which don't
have evenly performing devices.
The following are results from a setup with a 3-way mirror with
2 x HD's and 1 x SSD from a basic test running multiple parallel dd's.
With pre-fetch disabled (vfs.zfs.prefetch_disable=1):
== Stripe Balanced (default) ==
Read 15360MB using bs: 1048576, readers: 3, took 161 seconds @ 95 MB/s
== Load Balanced (zfslinux) ==
Read 15360MB using bs: 1048576, readers: 3, took 297 seconds @ 51 MB/s
== Load Balanced (locality freebsd) ==
Read 15360MB using bs: 1048576, readers: 3, took 54 seconds @ 284 MB/s
With pre-fetch enabled (vfs.zfs.prefetch_disable=0):
== Stripe Balanced (default) ==
Read 15360MB using bs: 1048576, readers: 3, took 91 seconds @ 168 MB/s
== Load Balanced (zfslinux) ==
Read 15360MB using bs: 1048576, readers: 3, took 108 seconds @ 142 MB/s
== Load Balanced (locality freebsd) ==
Read 15360MB using bs: 1048576, readers: 3, took 48 seconds @ 320 MB/s
In addition to the performance changes the code was also restructured,
with the help of Justin Gibbs, to provide a more logical flow which
also ensures vdev loads are only calculated from the set of valid
candidates.
The following additional sysctls were added to allow the administrator
to tune the behaviour of the load algorithm:
* vfs.zfs.vdev.mirror.rotating_inc
* vfs.zfs.vdev.mirror.rotating_seek_inc
* vfs.zfs.vdev.mirror.rotating_seek_offset
* vfs.zfs.vdev.mirror.non_rotating_inc
* vfs.zfs.vdev.mirror.non_rotating_seek_inc
These changes were based on work started by the zfsonlinux developers:
https://github.com/zfsonlinux/zfs/pull/1487
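A simplified sketch of the selection idea (not the actual
vdev_mirror.c code; the mc_* fields and the variables named after the
sysctls above are hypothetical stand-ins): each child gets a load
score and the child with the lowest score is preferred.

    static int
    mirror_child_load(const mirror_child_t *mc, uint64_t zio_offset)
    {
            int load = mc->mc_pending;      /* outstanding I/O requests */

            if (mc->mc_rotating) {
                    load += rotating_inc;
                    /* Non-contiguous I/O on rotating media costs a seek. */
                    if (ABS((int64_t)(zio_offset - mc->mc_last_offset)) >
                        rotating_seek_offset)
                            load += rotating_seek_inc;
            } else {
                    load += non_rotating_inc;
                    if (zio_offset != mc->mc_last_offset)
                            load += non_rotating_seek_inc;
            }
            return (load);
    }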
Reviewed by: gibbs, mav, will
MFC after: 2 weeks
Sponsored by: Multiplay