nvme(4): Report NPWA before NPWG as stripesize.

New Samsung 980 SSDs report a Namespace Preferred Write Alignment
(NPWA) of 8 blocks (4KB) and a Namespace Preferred Write Granularity
(NPWG) of 32 blocks (16KB).  My quick tests show that 16KB is the
minimum sequential write size at which the SSD reaches peak IOPS,
so writing much less is very slow.  But writing slightly less or
slightly more changes little, so NPWG here behaves less like a size
granularity and more like a minimum I/O size.
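
For reference, the identify fields behind those numbers are 0's based,
so the driver multiplies (value + 1) by the sector size.  A minimal
sketch of that arithmetic, assuming this drive's 512-byte LBA format
(the raw values 7 and 31 are inferred from the 8- and 32-block figures
above):

#include <assert.h>
#include <stdint.h>

int
main(void)
{
	uint32_t ss = 512;	/* sector size for this LBA format */
	uint16_t npwa = 7;	/* 0's based: 7 means 8 blocks */
	uint16_t npwg = 31;	/* 0's based: 31 means 32 blocks */

	assert((npwa + 1) * ss == 4096);	/* 4KB preferred alignment */
	assert((npwg + 1) * ss == 16384);	/* 16KB preferred granularity */
	return (0);
}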

Thinking about different stripesize consumers:
 - Partition alignment should be based on NPWA by definition (see the
alignment sketch after this list).
 - ZFS ashift, in its role of forcing alignment of all I/Os, should
also be based on NPWA.  In its role of forcing size granularity, if
really needed, it could be set to NPWG, but too large a value makes
ZFS too space-inefficient, and 16KB is already the largest value
supported there now.
 - ZFS recordsize/volblocksize could potentially be tuned up toward
NPWG to act as an I/O size granularity, but enabled compression makes
that too fuzzy, and those are normally user-configurable settings.
 - ZFS I/O aggregation code could definitely use an Optimal Write
Size value, maybe NPWG, but GEOM has no fields now to report the
minimal and optimal I/O sizes, and even the maximal one is not
reported outside GEOM DISK for ZFS to use.
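
As a concrete illustration of the first item, a partitioning tool
would round a proposed starting LBA up to the stripesize reported by
the driver.  This align_lba() helper is hypothetical, not gpart's
actual code; it is a minimal sketch assuming stripesize is a multiple
of the sector size:

#include <stdint.h>

/* Round a starting LBA up to the next stripesize boundary. */
static uint64_t
align_lba(uint64_t lba, uint32_t stripesize, uint32_t sector_size)
{
	uint64_t stripe_lbas;

	if (stripesize <= sector_size)
		return (lba);	/* no alignment preference to honor */
	stripe_lbas = stripesize / sector_size;
	return ((lba + stripe_lbas - 1) / stripe_lbas * stripe_lbas);
}

With the 4KB NPWA-based stripesize above and 512-byte sectors, a
partition proposed at LBA 63 would start at LBA 64 instead.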

MFC after:	1 week
Author:	Alexander Motin
Date:	2021-07-05 22:19:48 -04:00
Commit:	e3bcd07d83 (parent e41fde3ed7)

@@ -231,10 +231,15 @@ nvme_ns_get_data(struct nvme_namespace *ns)
 uint32_t
 nvme_ns_get_stripesize(struct nvme_namespace *ns)
 {
+	uint32_t ss;
+
 	if (((ns->data.nsfeat >> NVME_NS_DATA_NSFEAT_NPVALID_SHIFT) &
-	    NVME_NS_DATA_NSFEAT_NPVALID_MASK) != 0 && ns->data.npwg != 0) {
-		return ((ns->data.npwg + 1) * nvme_ns_get_sector_size(ns));
+	    NVME_NS_DATA_NSFEAT_NPVALID_MASK) != 0) {
+		ss = nvme_ns_get_sector_size(ns);
+		if (ns->data.npwa != 0)
+			return ((ns->data.npwa + 1) * ss);
+		else if (ns->data.npwg != 0)
+			return ((ns->data.npwg + 1) * ss);
 	}
 	return (ns->boundary);
 }
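
A standalone sketch of the new precedence, testable outside the
kernel; struct ns and stripesize() here are simplified stand-ins,
assuming only the fields the real function reads from struct
nvme_namespace:

#include <assert.h>
#include <stdint.h>

struct ns {
	uint16_t npwa;		/* 0's based preferred write alignment */
	uint16_t npwg;		/* 0's based preferred write granularity */
	int	 npvalid;	/* NSFEAT: namespace preferences valid */
	uint32_t sector_size;
	uint32_t boundary;	/* fallback, e.g. from a vendor quirk */
};

static uint32_t
stripesize(struct ns *ns)
{
	if (ns->npvalid) {
		if (ns->npwa != 0)
			return ((ns->npwa + 1) * ns->sector_size);
		else if (ns->npwg != 0)
			return ((ns->npwg + 1) * ns->sector_size);
	}
	return (ns->boundary);
}

int
main(void)
{
	/* Samsung 980 values from the commit message. */
	struct ns n = { .npwa = 7, .npwg = 31, .npvalid = 1,
	    .sector_size = 512, .boundary = 0 };

	assert(stripesize(&n) == 4096);		/* NPWA preferred */
	n.npwa = 0;
	assert(stripesize(&n) == 16384);	/* NPWG fallback */
	return (0);
}

As in the driver, a raw field value of 0 is treated as "not reported"
rather than as 1 block, which is why the NPWA check falls through to
NPWG.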