geometry and partitions may start from within the first track.
If such partitions are found, do not reserve the space of the whole
first track, only the first sector.
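A minimal sketch of the idea, with hypothetical field names (the real
g_part code differs):

	/*
	 * Reserve the whole first track only when no partition starts
	 * inside it; otherwise reserve just the first sector.
	 */
	reserved = sectors_per_track;
	for (i = 0; i < nparts; i++) {
		if (part[i].start < sectors_per_track) {
			reserved = 1;	/* keep only the MBR sector */
			break;
		}
	}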
partition tables and lost the ability to boot after r221788.
Also unhide an error message that was previously shown only under
bootverbose; this makes the problem easier to diagnose.
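The bootverbose change amounts to dropping the guard around the
diagnostic; a sketch, with an illustrative message text:

	/* Before: visible only with boot -v. */
	if (bootverbose)
		printf("GEOM: %s: invalid partition table.\n", name);

	/* After: always printed, so the failure is easy to spot. */
	printf("GEOM: %s: invalid partition table.\n", name);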
probed and read successfully, but contains invalid values (e.g.
overlapping partitions, or an offset or size out of bounds), then the
table will be rejected.
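A conceptual sketch of such a validation pass (hypothetical names, not
the actual g_part code):

	/* Reject out-of-bounds and overlapping entries. */
	for (i = 0; i < nparts; i++) {
		if (part[i].start + part[i].size > totalsectors)
			return (EINVAL);	/* offset/size out of bounds */
		for (j = i + 1; j < nparts; j++) {
			if (part[i].start < part[j].start + part[j].size &&
			    part[j].start < part[i].start + part[i].size)
				return (EINVAL);	/* overlap */
		}
	}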
MFC after: 1 month
was not updated to pass the CRD_F_KEY_EXPLICIT flag to opencrypto. This
resulted in always using the first key.
We need to support providers created with this bug, so set the special
G_ELI_FLAG_FIRST_KEY flag for GELI providers in integrity mode with a
version smaller than 6, and pass the CRD_F_KEY_EXPLICIT flag to
opencrypto only if G_ELI_FLAG_FIRST_KEY is not set.
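A sketch of the resulting logic, assuming simplified names
(G_ELI_FLAG_AUTH is GELI's integrity-mode flag; the version check is
illustrative):

	/* Providers created by the buggy code keep using the first key. */
	if (sc->sc_version < 6 && (sc->sc_flags & G_ELI_FLAG_AUTH) != 0)
		sc->sc_flags |= G_ELI_FLAG_FIRST_KEY;

	/* Only fixed providers select the per-sector key explicitly. */
	if ((sc->sc_flags & G_ELI_FLAG_FIRST_KEY) == 0)
		crd->crd_flags |= CRD_F_KEY_EXPLICIT;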
Reported by: Anton Yuzhaninov <citrin@citrin.ru>
MFC after: 1 week
device in /dev/, create a symbolic link with the adY name, trying to
mimic the old ATA numbering. The imitation is not complete, but it
should be enough in most cases to mount file systems without touching
/etc/fstab (see the sketch after this list).
- To know which behavior to mimic, restore the ATA_STATIC_ID option in
configurations where it was present before.
- Add some more details to UPDATING.
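A minimal sketch of the aliasing, using make_dev_alias(9) with an
illustrative unit mapping (the real ada(4) code differs):

	/* Publish an old-style ad%d name (a devfs symlink) next to the
	 * new ada%d node; legacy_unit is hypothetical. */
	if (legacy_unit >= 0)
		make_dev_alias(dev, "ad%d", legacy_unit);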
bytes. Remove the bogus assertion and, while here, remove another
overly obvious assertion.
Reported by: Fabian Keil <freebsd-listen@fabiankeil.de>
MFC after: 2 weeks
allocate all of them at attach time. This makes it possible to avoid
moving keys around in a most-recently-used queue and requires neither
mutex synchronization nor reference counting.
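Sketched with hypothetical helper names:

	/* Pre-derive every encryption key once, at attach time; lookups
	 * are then read-only and need no locking. */
	sc->sc_ekeys = malloc(sc->sc_nekeys * sizeof(*sc->sc_ekeys),
	    M_ELI, M_WAITOK);
	for (keyno = 0; keyno < sc->sc_nekeys; keyno++)
		derive_key(sc, &sc->sc_ekeys[keyno], keyno);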
MFC after: 2 weeks
create a reasonably large cache for the keys that is filled on demand.
The previous version was problematic for very large providers (hundreds
of terabytes or several petabytes): every terabyte of data needs around
256kB for keys. Make the default cache limit big enough to fit all the
keys needed for a 4TB provider, which will consume at most 1MB of
memory.
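A simplified sketch of the on-demand cache (the actual g_eli code uses
a lookup structure plus an LRU queue; names here are illustrative):

	key = cache_lookup(sc, keyno);
	if (key == NULL) {
		if (sc->sc_ekeys_allocated >= sc->sc_ekeys_limit)
			key = cache_evict_lru(sc);	/* recycle LRU entry */
		else
			key = cache_alloc(sc);
		derive_key(sc, keyno, key);		/* fill on demand */
	}
	cache_make_mru(sc, key);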
MFC after: 2 weeks
embedded flash stores.
Some devices - notably those running U-Boot - don't have an
explicit partition table (e.g. RedBoot's FIS).
geom_map thus provides an easy way to export the hard-coded
flash layout as geom providers for use by filesystems and
other tools.
It also includes a "search" function which allows for
dynamic creation of partition layouts where the device only
has a single hard-coded partition. For example, if
there is a "kernel+rootfs" partition, a single image can
be created which appends the rootfs after the kernel,
marked with an appropriate search string. geom_map can then
be told to search for that string and create a partition
beginning right after it.
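A conceptual sketch of the search (not the actual geom_map code or its
hint syntax; marker and offsets are illustrative):

	/* Scan the hard-coded partition for the marker string and start
	 * the new provider right after it. */
	for (off = search_start; off < part_end; off += search_step) {
		buf = g_read_data(cp, off, sectorsize, &error);
		if (buf == NULL)
			break;
		found = (memcmp(buf, marker, marker_len) == 0);
		g_free(buf);
		if (found) {
			rootfs_start = off + marker_len;
			break;
		}
	}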
Submitted by: Aleksandr Rybalko <ray@dlink.ua>
bio_children > 1, g_destroy_bio() is never called and the bio
leaks. Fix this by calling g_destroy_bio() earlier, before the check.
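The shape of the fix, in a simplified completion handler (illustrative,
not the exact code):

	static void
	example_done(struct bio *cbp)
	{
		struct bio *pbp;

		pbp = cbp->bio_parent;
		pbp->bio_inbed++;
		/* Destroy the child first, so every path frees it. */
		g_destroy_bio(cbp);
		if (pbp->bio_inbed < pbp->bio_children)
			return;		/* this early return used to leak cbp */
		g_io_deliver(pbp, pbp->bio_error);
	}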
Submitted by: Victor Balada Diaz <victor@bsdes.net> (initial version)
Approved by: pjd (mentor)
MFC after: 1 week
g_io_deliver(). In that case GEOM increases the 'pace' counter on each
ENOMEM and reschedules the request. The 'pace' counter is decreased for
each request going down, but while 'pace' is greater than zero, GEOM
will handle at most 10 requests per second. For GEOM GATE users that
act as proxies to local GEOM providers (like ggatel(8) and HAST) we can
end up with an almost permanent slowdown of the GEOM down queue. This
is because once we reach the GEOM GATE queue limit, we return ENOMEM to
GEOM. This means that we have, e.g., 1024 I/O requests in the GEOM GATE
queue. To make room in the queue and stop returning ENOMEM we of course
need to process those requests, but they are handled by userland
daemons that serve them by reading/writing from/to local GEOM
providers. For example with HAST, a new request comes to
/dev/hast/data, which is a GEOM GATE provider. GEOM GATE passes the
request to hastd(8) and hastd(8) reads/writes from/to /dev/da0. Once we
reach the GEOM GATE queue limit, to free up a slot in the queue,
hastd(8) has to read/write from/to /dev/da0, but that request will also
be very slow, because GEOM now slows down all requests. We end up with
a full queue that we can drain at a rate of 10 requests per second.
This is effectively a deadlock.
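The pacing itself is a simple throttle in the down path; roughly (as in
g_io_schedule_down(), simplified):

	if (pace > 0) {
		pause("g_down", hz / 10);	/* ~10 requests per second */
		pace--;
	}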
Fix it by allowing userland daemons that work with both GEOM GATE and
local GEOM providers to specify an unlimited queue size, so that GEOM
GATE never returns ENOMEM to GEOM.
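With that, the enqueue check can treat a queue size of zero as "no
limit"; a sketch with illustrative field names:

	if (sc->sc_queue_size > 0 && sc->sc_queue_count >= sc->sc_queue_size) {
		g_io_deliver(bp, ENOMEM);	/* triggers GEOM pacing */
		return;
	}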
MFC after: 1 week