pfs_visible(). The recursion does not cause deadlock because the sx
implementation does not prefer exclusive waiters over shared ones, but
this is an implementation detail.
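A minimal kernel-style sketch of the pattern in question, with hypothetical
names (this is not the pseudofs code itself): a shared sx lock is acquired
and then acquired again through recursion, which stays deadlock-free only
because sx(9) does not let a pending exclusive waiter block the new shared
acquisition.

/* Hypothetical demo, not the pseudofs code: recursive shared sx acquisition. */
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/sx.h>

static struct sx demo_lock;
SX_SYSINIT(demo_lock, &demo_lock, "demo lock");

static int
demo_visible(int depth)
{
	int visible;

	sx_slock(&demo_lock);		/* outer shared acquisition */
	if (depth > 0)
		visible = demo_visible(depth - 1);	/* acquires the lock again */
	else
		visible = 1;
	sx_sunlock(&demo_lock);
	return (visible);
}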
Reported by: pho, Matthew Bryan <matthew.bryan@isilon.com>
Reviewed by: jhb
Tested by: pho
Approved by: des (pseudofs maintainer)
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
At least FAST_DEPEND won't even run 'make depend', so the code was
potentially broken with FAST_DEPEND anyhow. The .dinclude directive
will ignore missing files rather than treating them as fatal.
Sponsored by: EMC / Isilon Storage Division
The inclusion of .MAKE.DEPENDFILE (.depend) has special logic in make
to ignore stale/missing dependencies. bmake 20160220 added a '.dinclude'
directive that uses the special logic for .depend when including the file.
This fixes a build error when a file listed in a .depend.OBJ file is
moved or deleted. This happened in r292782 when sha512c.c "moved" and an
incremental build of lib/libmd would fail with:
make: don't know how to make /usr/src/lib/libcrypt/../libmd/sha512c.c. Stop
Now this will just be seen as a stale dependency and cause a rebuild:
make: /usr/obj/usr/src/lib/libmd/.depend.sha512c.o, 13: ignoring stale .depend for /usr/src/lib/libcrypt/../libmd/sha512c.c
--- sha512c.o ---
...
This rebuild will only be done once, since the .depend.sha512c.o file is
updated during the build via the -MF flag.
This also removes -MP being passed for the .depend.OBJ generation (which
would create fake targets for system headers), since that logic is no
longer needed to protect against missing files.
Sponsored by: EMC / Isilon Storage Division
This may be used in later checks, such as in bsd.dep.mk, to
enable features that rely on the built-in value.
Sponsored by: EMC / Isilon Storage Division
similar Makefile.libsoft which will do the same for armv6 soft-float
ABI libraries in prep for pulling the trigger on moving to armv6 hard
float. Once there are two files, I'll work with bdrewery@ to merge the
two files, as they are mostly the same. The high rate of churn for
Makefile* makes it quite difficult to make progress out of tree.
Differential Revision: https://reviews.freebsd.org/D5566
- Query the location of the log very early during attach. Refresh the
location later after establishing contact with the firmware.
- Save the log's location as a flat address in devlog_params.
- Use a memory window instead of backdoor access to the EDC/MC to read
the log.
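A rough, hedged sketch of the memory-window approach mentioned in the last
item, with hypothetical register names and helpers (not the actual cxgbe
code): the window's base register is pointed at the aligned region that
contains the target address, and the data is then read through the window
aperture.

#include <stdint.h>

/* Hypothetical register accessors and layout, for illustration only. */
extern uint32_t	reg_read(uint32_t reg);
extern void	reg_write(uint32_t reg, uint32_t val);

#define MEMWIN_APERTURE	4096	/* size of the window (hypothetical) */
#define MEMWIN_BASE_REG	0x0000	/* window base-address register (hypothetical) */
#define MEMWIN_DATA_REG	0x1000	/* start of the window aperture (hypothetical) */

/* Read len bytes of adapter memory (e.g. the firmware device log) at addr. */
static void
memwin_read(uint32_t addr, uint32_t *buf, int len)
{
	for (int i = 0; i < len; i += 4) {
		uint32_t target = addr + i;

		/* Point the window at the aligned region containing target. */
		reg_write(MEMWIN_BASE_REG, target & ~(MEMWIN_APERTURE - 1));
		(void)reg_read(MEMWIN_BASE_REG);	/* read back to flush */
		buf[i / 4] = reg_read(MEMWIN_DATA_REG +
		    (target & (MEMWIN_APERTURE - 1)));
	}
}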
These files are installed, likely after r288230. In
tools/build/mk/OptionalObsoleteFiles.inc they are bound
to the MK_BINUTILS option rather than being unconditionally
deleted here.
Reported by: Kurt Lidl <lidl@pix.net>
MFC after: 1 week
Sponsored by: EMC / Isilon Storage Division
I believe that this patch handled the problem from the wrong side.
Instead of making ZFS properly handle large stripe sizes, it made an
unrelated driver lie in its reported parameters to work around that.
An alternative solution to this problem on the ZFS side was committed
in r296615.
Discussed with: smh
If a device has a stripe size bigger than the maximal sector size
supported by ZFS, there is nothing that can be done to avoid
read-modify-write cycles. Taking that stripe size into account will
only reduce space efficiency and pointlessly bother the user with
warnings that cannot be fixed.
Discussed with: smh
Use of misaligned or non-power-of-2 stripes is not really useful for ZFS,
since an increased ashift won't help avoid read-modify-write cycles, and
will only reduce pool space efficiency and compression rates.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Chris Williamson <chris.williamson@delphix.com>
Reviewed by: Stefan Ring <stefanrin@gmail.com>
Reviewed by: Steven Burgess <sburgess@datto.com>
Reviewed by: Arne Jansen <sensille@gmx.net>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Paul Dagnelie <pcd@delphix.com>
In certain circumstances, "zfs send -i" (incremental send) can produce a
stream which will result in incorrect sparse file contents on the
target.
The problem manifests as regions of the received file that should be
sparse (and read as zero-filled) actually containing data from a file that
was deleted (and which happened to share this file's object ID).
Note: this can happen only with filesystems (not zvols, because they do
not free (and thus cannot reuse) object IDs).
Note: This can happen only if, since the incremental source (FromSnap),
a file was deleted and then another file was created, and the new file
is sparse (i.e. has areas that were never written to and should be
implicitly zero-filled).
We suspect that this was introduced by 4370 (applies only if hole_birth
feature is enabled), and made worse by 5243 (applies if hole_birth
feature is disabled, and we never send any holes).
The bug is caused by the hole birth feature. When an object is deleted
and replaced, all the holes in the object have birth time zero. However,
zfs send cannot tell that the holes are new since the file was replaced,
so it doesn't send them in an incremental. As a result, you can end up
with invalid data when you receive incremental send streams. As a
short-term fix, we can always send holes with birth time 0 (unless it's
a zvol or a dataset where we can guarantee that no objects have been
reused).
Closes #37
openzfs/openzfs@adef853162
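A standalone sketch of the decision described above, with illustrative
names only (should_send_hole, fromsnap_txg and objects_may_be_reused are
not real ZFS identifiers): a hole whose recorded birth time is 0 cannot be
proven to predate the incremental source, so it is sent anyway unless the
dataset is one (such as a zvol) that never reuses object IDs.

/* Illustrative sketch only; names are not actual ZFS identifiers. */
#include <stdbool.h>
#include <stdint.h>

/*
 * Decide whether a hole encountered during an incremental send must be
 * transmitted.  hole_birth_txg is the hole's recorded birth time;
 * fromsnap_txg is the txg of the incremental source snapshot.
 */
static bool
should_send_hole(uint64_t hole_birth_txg, uint64_t fromsnap_txg,
    bool objects_may_be_reused)
{
	/* Hole provably created after the incremental source: send it. */
	if (hole_birth_txg > fromsnap_txg)
		return (true);
	/*
	 * Birth time 0 means the hole's age is unknown: it may belong to
	 * a new object that reused a deleted object's ID, so send it
	 * unless object IDs are never reused (e.g. zvols).
	 */
	if (hole_birth_txg == 0 && objects_may_be_reused)
		return (true);
	return (false);
}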
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Chris Williamson <chris.williamson@delphix.com>
Reviewed by: Stefan Ring <stefanrin@gmail.com>
Reviewed by: Steven Burgess <sburgess@datto.com>
Reviewed by: Arne Jansen <sensille@gmx.net>
Approved by: Robert Mustacchi <rm@joyent.com>
Author: Paul Dagnelie <pcd@delphix.com>
In certain circumstances, "zfs send -i" (incremental send) can produce a
stream which will result in incorrect sparse file contents on the
target.
The problem manifests as regions of the received file that should be
sparse (and read as zero-filled) actually containing data from a file that
was deleted (and which happened to share this file's object ID).
Note: this can happen only with filesystems (not zvols, because they do
not free (and thus cannot reuse) object IDs).
Note: This can happen only if, since the incremental source (FromSnap),
a file was deleted and then another file was created, and the new file
is sparse (i.e. has areas that were never written to and should be
implicitly zero-filled).
We suspect that this was introduced by 4370 (applies only if hole_birth
feature is enabled), and made worse by 5243 (applies if hole_birth
feature is disabled, and we never send any holes).
The bug is caused by the hole birth feature. When an object is deleted
and replaced, all the holes in the object have birth time zero. However,
zfs send cannot tell that the holes are new since the file was replaced,
so it doesn't send them in an incremental. As a result, you can end up
with invalid data when you receive incremental send streams. As a
short-term fix, we can always send holes with birth time 0 (unless it's
a zvol or a dataset where we can guarantee that no objects have been
reused).
Closes #37
TSO packets signal each segment's TX completion in separate CQ
descriptors, and each CQ descriptor for HW TSO points to the same
SQ entry.
Do not invoke nicvf_put_sq_desc() for the secondary segments, to avoid
free_cnt corruption and, eventually, an integer overflow that leaves
free_cnt negative and makes further transmission impossible.
Reviewed by: wma
Obtained from: Semihalf
Sponsored by: Cavium
Differential Revision: https://reviews.freebsd.org/D5535
Do not modify NIC_QSET_CQ_0_7_HEAD manually, especially in a
non-atomic context.
It also does not seem necessary to recreate the CQ head after
clearing the interrupt.
Reviewed by: wma
Obtained from: Semihalf
Sponsored by: Cavium
Differential Revision: https://reviews.freebsd.org/D5533