Rewrite the EFI part device interface to present disk devices in a more
user-friendly way.
We keep a list of three device types: floppy, cd and disk, with the
visible names fdX:, cdX: and diskX:.
Use the common/disk.c and common/part.c interfaces to manage the
partitioning.
lsdev -l will additionally list the device path.
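For example, after this change the current device can be pointed at the
first disk from the loader prompt (illustrative only; the exact partition
suffix depends on the partition table found via common/part.c):

    set currdev=disk0p2: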
Reviewed by: imp, allanjude
Approved by: imp (mentor), allanjude (mentor)
Differential Revision: https://reviews.freebsd.org/D8581
We need an interface to extract information about the disk abstraction:
to read the disk or partition size depending on the provided argument,
and to adjust the disk size based on information in the partition table.
The disk handle from disk_open() has a d_offset field pointing to the
partition start, so we can use this fact to return either the whole disk
size or the partition size; for this we only need to record the partition
size, which we get from disk_open() anyhow.
In addition, this also makes it possible to adjust the disk media size
based on information from the partition table. The problem with the disk
size is that some BIOS systems report a bogus size for 2+TB disks, but
since such disks use GPT partitioning, and GPT does carry information
about the disk size (alternate LBA + 1), we can use this fact to record
the disk size based on the partition table.
This patch does exactly that: it implements the DIOCGSECTORSIZE and
DIOCGMEDIASIZE ioctls, where DIOCGMEDIASIZE reports either the disk media
size or the partition size.
It adds a ptable_getsize() call to read the partition size in bytes from
a ptable pointer, updates disk_open() to use ptable_getsize() to update
the mediasize value, and implements a GPT detection function to update
the ptable size (used by ptable_getsize()) according to the alternate LBA
(the location of the backup copy of the GPT header).
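A minimal sketch of the resulting ioctl semantics (field names are
illustrative, not the actual struct layout; the point is that d_offset
selects between whole-disk and partition size):

    static int
    disk_ioctl(struct disk_devdesc *dev, u_long cmd, void *data)
    {
            switch (cmd) {
            case DIOCGSECTORSIZE:
                    *(u_int *)data = dev->d_sectorsize;
                    break;
            case DIOCGMEDIASIZE:
                    if (dev->d_offset == 0)         /* whole disk */
                            *(uint64_t *)data = dev->d_mediasize;
                    else                            /* opened a partition */
                            *(uint64_t *)data = dev->d_partsize;
                    break;
            default:
                    return (ENOTTY);
            }
            return (0);
    }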
Reviewed by: allanjude
Approved by: allanjude (mentor)
Differential Revision: https://reviews.freebsd.org/D8594
The disk_* and part_* APIs use 64-bit values for media size and
offsets. However, the current API uses the off_t type, which is a
signed 64-bit int.
In this context a signed media size does not make any sense, and
the offsets are used to mark absolute, not relative, locations.
Also, the data from the GPT partition table and some other sources
already uses the uint64_t data type, so using signed off_t can cause
sign issues.
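A contrived example of such a sign issue (not code from the tree): once a
uint64_t value taken from on-disk data has the high bit set, storing it in
an off_t flips the sign and breaks comparisons.

    uint64_t lba_bytes = 0xFFFFFFFFFFFFF000ULL; /* value from on-disk data */
    off_t    size = (off_t)lba_bytes;           /* wraps to a negative number */
    if (size < 0)
            printf("signed overflow: %jd\n", (intmax_t)size);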
Reviewed by: imp
Approved by: imp (mentor)
Differential Revision: https://reviews.freebsd.org/D8710
Apparently the libstand dosfs optimization was a bit too optimistic
and introduced possible memory corruption.
This patch backs out the bad part, and since this results in dosfs
reading full blocks now, we can also remove the extra offset argument
from the dv_strategy callback.
The analysis of the issue and the backout patch were provided by
Mikhail Kupchik.
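With the extra offset argument removed, the strategy callback returns to
its usual shape (approximate sketch; see sys/boot/common/bootstrap.h for
the authoritative declaration):

    int dv_strategy(void *devdata, int rw, daddr_t blk,
        size_t size, char *buf, size_t *rsize);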
PR: 214423
Submitted by: Mikhail Kupchik
Reported by: Mikhail Kupchik
Reviewed by: bapt, allanjude
Approved by: allanjude (mentor)
MFC after: 1 month
Differential Revision: https://reviews.freebsd.org/D8644
The lsdev command walks over the devsw list, printing each list element's
name and using the dv_print() callback to print the device list.
Unfortunately this approach adds unneeded noise when no devices of a
particular type are detected.
To remove the "empty" device section headers, the dv_print() callback
should print the header instead.
In addition, fix the dv_print callback for the md module.
Reviewed by: imp
Approved by: imp (mentor)
Differential Revision: https://reviews.freebsd.org/D8551
This change modifies the devsw dv_print() callback to return an int value,
enabling walkers to interrupt the walk on a non-zero value from dv_print().
This allows pager_print to actually stop displaying data on user input;
additionally, the pager is now used in various *dev_print callbacks
where it was missing.
To test: the lsdev [-v] command should display data by screenfuls and
should stop when the 'q' key is pressed at the pager prompt.
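A sketch of a pager-aware print callback combining both changes, the header
printed from the callback itself and the int return value (the md device
and its size variable are illustrative; pager_output() returns non-zero
once the user quits the pager):

    static int
    md_print(int verbose)
    {
            char line[80];

            /* The header is printed here, so nothing appears when the
             * device does not exist. */
            if (pager_output("md devices:\n") != 0)
                    return (1);
            snprintf(line, sizeof(line), "    md0:   %ju bytes\n",
                (uintmax_t)md_image_size);
            return (pager_output(line));
    }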
Reviewed by: allanjude
Approved by: allanjude (mentor)
Differential Revision: https://reviews.freebsd.org/D5461
OpenZFS uses feature flags instead of a zpool version number to track
features since the split from Oracle. In addition to avoiding confusion
on ZFS vs OpenZFS version numbers, this also allows features to be added
to different operating systems that use OpenZFS in different order.
The previous zfs boot code (gptzfsboot) and loader (zfsloader) blindly
tried to read the pool and, if that failed, provided only a vague error
message.
With this change, both the boot code and loader check the MOS features
list in the ZFS label and compare it against the list of features that
the loader supports. If any unsupported feature is active, the pool is
not considered as a candidate for booting, and a helpful diagnostic
message is printed to the screen. Features that are merely enabled via
zpool upgrade, but not in use, do not block booting from the pool.
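The supported-features list in the boot code is a plain table of feature
GUID names checked against the pool's active features; a sketch (the two
names shown are examples of read-compatible features, not the full list
shipped in zfsimpl.c):

    static const char *features_for_read[] = {
            "com.delphix:hole_birth",
            "com.delphix:embedded_data",
            NULL
    };

    static int
    feature_is_supported(const char *guid)
    {
            for (int i = 0; features_for_read[i] != NULL; i++)
                    if (strcmp(guid, features_for_read[i]) == 0)
                            return (1);
            return (0);
    }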
Submitted by: Toomas Soome <tsoome@me.com>
Reviewed by: delphij, mav
Relnotes: yes
Differential Revision: https://reviews.freebsd.org/D6857
There is no reason to return a non-zero value from zfs_probe_partition(),
as that causes the following partitions to not be probed for ZFS vdevs.
A particular scenario that I encountered is a GPT partitioned disk
where several partitions have the freebsd-zfs type. A partition with a
lower index is used as a cache (l2arc) vdev, and in that case zfs_probe()
returned a non-zero status. That status was returned to ptable_iterate()
and caused it to abort the iteration. Because of that, the subsequent
partitions were not probed and the root pool was not discovered, resulting
in a boot failure.
While here, fix the style of nearby return statements.
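A sketch of the fixed callback (signature approximated from
sys/boot/zfs/zfs.c): the zfs_probe() status is consumed locally and 0 is
returned unconditionally, so ptable_iterate() probes the remaining
partitions.

    static int
    zfs_probe_partition(void *arg, const char *partname,
        const struct ptable_entry *part)
    {
            /* ...open the partition and run zfs_probe() on it... */

            /*
             * Always return 0: a partition that is not a usable vdev
             * (an l2arc cache device, for example) must not stop the
             * iteration.
             */
            return (0);
    }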
Approved by: re (kib)
Check the pager_output() return value where it could return 1 (indicating
we should stop).
Fix a few instances of pager_open() / pager_close() not being called.
Actually use these routines for the environment variable printing code
I just committed.
rounddown2 tends to produce longer lines than the original code,
and when the code has a high indentation level it was not really
advantageous to do the replacement.
This tries to strike a balance between readability using the macros
and the flexibility of having the expressions, so not everything is
converted.
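For reference, rounddown2() from sys/param.h is the usual power-of-two
mask idiom, so a typical conversion looks like:

    size = rounddown2(size, secsz);    /* was: size = size & ~(secsz - 1); */

(valid only when secsz is a power of two, which holds for sector sizes).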
The block cache implementation in the loader has proven to be almost
useless, and in the worst case it even slows down disk reads due to an
insufficient cache size and an extra memory copy.
The current cache implementation also does not cache reads from CDs, nor
does it work with zfs built on top of multiple disks.
Instead of an LRU, this code uses a simple hash (O(1) read from the
cache), and instead of a single global cache, a separate cache per block
device.
The cache also implements limited read-ahead to increase performance.
To simplify read-ahead management, the read-ahead will not wrap past the
end of the bcache, so in the worst case a single-block physical read will
be performed to fill the last block of the bcache.
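A sketch of the direct-mapped lookup replacing the LRU (structure and
names simplified from sys/boot/common/bcache.c; the block number maps to
a fixed slot, assuming the per-device cache size is a power of two):

    struct bcache_entry {
            daddr_t blkno;          /* block cached in this slot */
            char    *data;
    };

    static struct bcache_entry *
    bcache_lookup(struct bcache_entry *cache, u_int nblks, daddr_t blkno)
    {
            struct bcache_entry *ent = &cache[blkno & (nblks - 1)];

            return (ent->blkno == blkno ? ent : NULL);
    }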
Booting from a virtual CD over IPMI:
0ms latency, before: 27 seconds, after: 7 seconds
60ms latency, before: over 12 minutes, after: under 5 minutes.
Submitted by: Toomas Soome <tsoome@me.com>
Reviewed by: delphij (previous version), emaste (previous version)
Relnotes: yes
Differential Revision: https://reviews.freebsd.org/D4713
This causes boot from external media (installer USB image) to mount / from
the default ZFS BE, rather than the USB device.
Reported by: kmoore
MFC after: 5 days
Sponsored by: ScaleEngine Inc.
The zfs probe args structure member secsz was a uint16_t.
sys/boot/zfs/zfs.c has a probe args structure member, secsz, holding the
media sector size as a uint16_t; it is used as an argument for ioctl()
at line 484. However, this ioctl writes 32 bits of data (u_int *), and
therefore will overwrite and corrupt 16 bits of memory.
Other use cases seem to use the correct u_int type for secsz.
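The shape of the bug, reduced to its essence (illustrative, not the
actual zfs.c code):

    uint16_t secsz;
    ioctl(fd, DIOCGSECTORSIZE, &secsz);    /* BAD: writes sizeof(u_int) == 4
                                              bytes into a 2-byte object,
                                              clobbering its neighbor */
    u_int secsz_ok;
    ioctl(fd, DIOCGSECTORSIZE, &secsz_ok); /* correct: matches what the
                                              ioctl actually stores */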
PR: 204358
Submitted by: Toomas Soome <tsoome at me.com>
Reviewed by: asomers, delphij, smh
MFC after: 5 days
Differential Revision: https://reviews.freebsd.org/D4811
Add a few other safeguards to ensure things do not break when the
boot device cannot be determined.
Reported by: flo
MFC after: 3 days
Sponsored by: ScaleEngine Inc.
If the system was booted with ZFS, a new menu item (#7) appears.
It contains an autogenerated list of ZFS Boot Environments.
This allows the user to switch to an alternate root file system.
Use Cases:
- Revert a failed upgrade
- Concurrently run different versions of FreeBSD with common home directory
- Easier integration with the sysadmin/beadm utility
Requested by: many
Reviewed by: dteske
MFC after: 10 days
Relnotes: yes
Sponsored by: ScaleEngine Inc.
Differential Revision: https://reviews.freebsd.org/D3167
- only filesystem datasets are supported
- child dataset names are printed to stdout
To do: allow iterating over the list and fetching names programmatically
MFC after: 17 days
This is to silence warnings that result from different definitions of
uint64_t on different architectures, specifically i386 and sparc64.
MFC after: 1 month
In the zfs loader the zfs device name format is now "zfs:pool/fs";
a fully qualified file path is "zfs:pool/fs:/path/to/file".
The loader allows accessing files from various pools and filesystems, as
well as changing currdev to a different pool/filesystem.
zfsboot accepts a kernel/loader name in the format pool:fs:path/to/file
or, as before, pool:path/to/file; in the latter case a default filesystem
is used (the pool root or bootfs). zfsboot passes the GUIDs of the
selected pool and dataset to zfsloader to be used as its defaults.
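Illustrative usage at the loader prompt, assuming a pool "tank" with a
boot filesystem "tank/ROOT/default":

    set currdev=zfs:tank/ROOT/default:
    load zfs:tank/ROOT/default:/boot/kernel/kernel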
The zfs support should be architecture independent and is provided
in a separate library, but architectures wishing to use this zfs support
still have to provide some glue code, and their devdesc should be
compatible with zfs_devdesc.
The arch_zfs_probe method is used to discover all disk devices that may
be part of ZFS pool(s).
libi386 unconditionally includes zfs support, but some zfs-specific
functions are stubbed out as weak symbols. The strong definitions
are provided in libzfsboot.
This change means that the size of i386_devdesc becomes larger
to match zfs_devdesc.
Backward-compatibility shims are provided for the recently added sparc64
zfs boot support. Currently that architecture still works the old
way and does not support the new features.
TODO:
- clear up pool root filesystem vs pool bootfs filesystem distinction
- update sparc64 support
- set vfs.root.mountfrom based on currdev (for zfs)
Mid-future TODO:
- loader sub-menu for selecting alternative boot environment
Distant future TODO:
- support accessing snapshots, using a snapshot as readonly root
Reviewed by: marius (sparc64),
Gavin Mu <gavin.mu@gmail.com> (sparc64)
Tested by: Florian Wagner <florian@wagner-flo.net> (x86),
marius (sparc64)
No objections: fs@, hackers@
MFC after: 1 month
On the V100, the firmware is known to be broken, not allowing disk devices
to be opened simultaneously, which causes attempts to boot from a mirror
or RAIDZ to crash. This will be worked around later. The firmware of newer
sun4u models doesn't seem to exhibit this problem, though.
Steps for ZFS booting:
1. create VTOC8 label
# gpart create -s vtoc8 da0
2. add partitions, e.g.:
# gpart add -t freebsd-zfs -s 60g da0
# gpart add -t freebsd-swap da0
resulting in something like:
# gpart show
=>        0  143331930  da0  VTOC8  (68G)
          0  125821080    1  freebsd-zfs  (60G)
  125821080   17510850    2  freebsd-swap  (8.4G)
3. create zpool
# zpool create bunker da0a
or for mirror/RAIDZ (after preparing additional disks as in steps 1. + 2.):
# zpool create bunker mirror da0a da1a
# zpool create bunker raidz da0a da1a da2a ...
4. set bootfs
# zpool set bootfs=bunker bunker
5. install zfsboot
# zpool export bunker
# gpart bootcode -p /boot/zfsboot da0
6. write zfsloader to the ZFS Boot Block (so far, there's no dedicated tool
for this, so dd(1) has to be used for this purpose)
When using mirror/RAIDZ, step 5. and the dd(1) invocation should be repeated
for the additional disks in order to be able to boot from another disk in
case of failure.
# sysctl kern.geom.debugflags=0x10
# dd if=/boot/zfsloader of=/dev/da0a bs=512 oseek=1024 conv=notrunc
# zpool import bunker
7. install system on ZFS filesystem
Don't forget to set 'zfs_load="YES"' and vfs.root.mountfrom="zfs:bunker"
in loader.conf as well as 'zfs_enable="YES"' in rc.conf.
8. copy zpool.cache to the ZFS filesystem
# cp -p /boot/zfs/zpool.cache /bunker/boot/zfs/zpool.cache
9. set mountpoint
# zfs set mountpoint=/ bunker
10. Now, given that aliases for all disks in the zpool exist (check with
the `devalias` command at the boot monitor prompt) and disk0 corresponds
to da0 (likewise for the additional disks), the system can be booted from
ZFS with:
{1} ok boot disk0
PR: 165025
Submitted by: Gavin Mu
A few new things are available from now on:
- Data deduplication.
- Triple-parity RAIDZ (RAIDZ3).
- zfs diff.
- zpool split.
- Snapshot holds.
- zpool import -F. Allows rewinding a corrupted pool to an earlier
transaction group.
- Possibility to import a pool in read-only mode.
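For example, recovering and inspecting a damaged pool ("tank" is a
placeholder pool name):

    # zpool import -F tank                (rewind to an earlier txg)
    # zpool import -o readonly=on tank    (read-only import)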
MFC after: 1 month
- Teach it to read gang blocks. (essentially untested)
If you see "ZFS: gang block detected!", please let
me know, so we can either remove the printf if it
works, or fix it if it doesn't.
- If multiple partitions exist on a disk, probe them all.
We also need to reset dsk->start to 0 to read the right
sector here.
- With GPT, we can have 128 partitions.
- If the bootfs property has ever been set on a pool
it seems that it never goes away. zpool won't allow
you to add to the pool with the bootfs property set.
However, if you clear the property back to default
we end up getting 0 for the object number and read
a bogus block pointer and fail to boot.
- Fix some error printfs. The printf in the loader is
only capable of the c, s and u formats.
- Teach printf how to display %llu
Reviewed by: dfr, jhb
MFC after: 2 weeks
to gptboot, i.e. installed in a freebsd-boot partition using /sbin/gpart or
/sbin/gpt.
Tweak the /boot/loader ZFS support so that it can find ZFS pools that are
contained in GPT partitions.
This brings a huge amount of changes; I'll enumerate only the
user-visible ones:
- Delegated Administration
Allows regular users to perform ZFS operations, like file system
creation, snapshot creation, etc.
- L2ARC
A level 2 cache for ZFS - allows using additional disks for cache.
Huge performance improvements, mostly for random reads of mostly
static content.
- slog
Allows using additional disks for the ZFS Intent Log to speed up
operations like fsync(2).
- vfs.zfs.super_owner
Allows regular users to perform privileged operations on files stored
on ZFS file systems owned by them. Be very careful with this one.
- chflags(2)
Not all the flags are supported. This still needs work.
- ZFSBoot
Support for booting off of a ZFS pool. Not finished, AFAIK.
Submitted by: dfr
- Snapshot properties
- New failure modes
Before, if a write request failed, the system panicked. Now one
can select one of three failure modes (see the failmode example below):
- panic - panic on write error
- wait - wait for the disk to reappear
- continue - serve read requests if possible, block write requests
- Refquota, refreservation properties
Just like the quota and reservation properties, but they don't count
space consumed by child file systems, clones and snapshots.
- Sparse volumes
ZVOLs that don't reserve space in the pool.
- Extended attributes
Compatible with extattr(2).
- NFSv4 ACLs
Not sure about the status; might not be complete yet.
Submitted by: trasz
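The failure mode mentioned above is selected per pool via the failmode
property ("tank" is a placeholder pool name; valid values are wait,
continue and panic, with wait being the default):

    # zpool set failmode=continue tank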
- Creation-time properties
- Regression tests for the zpool(8) command.
Obtained from: OpenSolaris