Modern GCC and Clang simply ignore the qualifier, while the old base GCC
produces a warning (treated as an error in the kernel build).
Approved by: cem
MFC after: 5 days
Fix ioat_release to only set is_completion_pending if DMAs were actually
queued. Otherwise, the spurious flag could trigger an assert in the
reset path on INVARIANTS kernels.
Reviewed by: bdrewery, Suraj Raju @ Isilon
Sponsored by: Dell EMC Isilon
In the case where a hardware error is detected during
ioat_process_events, hardware may advance (by one descriptor, probably)
and a subsequent ioat_process_events may race the intended ioat_reset_hw
followup. In that case, the second process_events would observe a
completion update that does not match the software "last_seen" status,
and attempt to complete already-failed descriptors as if they had
succeeded.
Guard against this race with the resetting_cleanup flag.
Reviewed by: bdrewery, markj
Sponsored by: Dell EMC Isilon
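A sketch of the guard described in the entry above, assuming event
processing runs under a dedicated cleanup lock; only the name
"resetting_cleanup" comes from the commit text, the rest is illustrative:

	/* In ioat_process_events(): bail out while reset cleanup is in
	 * progress, so a racing completion pass cannot complete
	 * already-failed descriptors. */
	mtx_lock(&ioat->cleanup_lock);
	if (ioat->resetting_cleanup) {
		mtx_unlock(&ioat->cleanup_lock);
		return;
	}
	/* ... normal completion processing ... */
	mtx_unlock(&ioat->cleanup_lock);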
The device doesn't accurately update the CHANCMP address with the device state
when the device is suspended or halted. So, read the CHANSTS register to check
for those states.
We still need to read the CHANCMP address for the last completed descriptor.
Sponsored by: Dell EMC Isilon
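Roughly, the check described above looks like the following; the
register accessor and status macros are assumptions modeled on the
driver's naming, not verified identifiers:

	/* Classify the channel from the CHANSTS register itself; the
	 * CHANCMP write-back is only trusted for the address of the
	 * last completed descriptor. */
	status = ioat_get_chansts(ioat) & IOAT_CHANSTS_STATUS;
	is_stopped = (status == IOAT_CHANSTS_SUSPENDED ||
	    status == IOAT_CHANSTS_HALTED);
	last_completed = *ioat->comp_update &
	    IOAT_CHANSTS_COMPLETED_DESCRIPTOR_MASK;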
This allows us to make strong assertions about descriptor address
validity. Additionally, future generations of the ioat(4) hardware will
require contiguous descriptors.
Reviewed by: markj
Sponsored by: Dell EMC Isilon
This paves the way for a contiguous descriptor array.
A contiguous descriptor array has the benefit that we can make strong
assertions about whether an address is a valid descriptor or not. The
other benefit is that future generations of I/OAT hardware will require
a contiguous descriptor array anyway. The downside is that after system
boot, big chunks of contiguous memory are much harder to find. So
dynamic scaling after boot is basically impossible.
Reviewed by: markj
Sponsored by: Dell EMC Isilon
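To illustrate the "strong assertions" both entries above refer to: with
one contiguous array, a descriptor address reported by the hardware can
be validated with a plain bounds-and-stride check. The field and type
names below are hypothetical:

	static bool
	ioat_desc_paddr_valid(struct ioat_softc *ioat, bus_addr_t paddr)
	{
		const bus_size_t stride = sizeof(struct ioat_hw_descriptor);
		bus_addr_t base = ioat->hw_desc_bus_addr;
		bus_size_t size = (bus_size_t)ioat->num_descriptors * stride;

		/* Valid iff it lands inside the ring on a descriptor boundary. */
		return (paddr >= base && paddr < base + size &&
		    (paddr - base) % stride == 0);
	}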
The CHANSTS register is a split 64-bit register on CBDMA units before
hardware v3.3. If a torn read happens during ioat_process_events(),
software cannot reliably determine where to stop completing descriptors.
So, just use the device-pushed main memory channel status instead.
Remove the ioat_get_active() seatbelt as well. It does nothing if the
completion address is valid.
Sponsored by: Dell EMC Isilon
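For context, the torn read looks roughly like this on pre-3.3 hardware,
where the 64-bit channel status must be assembled from two 32-bit reads
(accessor and offset names are assumptions):

	/* If the channel advances between these two reads, the assembled
	 * value can point at a descriptor the hardware never reached. */
	lo = ioat_read_4(ioat, IOAT_CHANSTS_OFFSET_LOW);
	hi = ioat_read_4(ioat, IOAT_CHANSTS_OFFSET_HIGH);
	chansts = ((uint64_t)hi << 32) | lo;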
In r304602, I mistakenly removed the check in ioat_process_events that
prevents processing events before the hardware has completed the
descriptor ("last_seen"). Reinstate that logic.
Keep the defensive loop condition and additionally make sure we've actually
completed a descriptor before blindly chasing the ring around.
In reset, queue and finish the startup command before allowing any event
processing or submission to occur. Avoid potential missed callouts by
requeueing the poll later.
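A sketch of the reinstated guard and the "actually completed something"
check, treating the completion mask and softc fields as assumptions:

	/* Only walk the ring when the hardware-reported completion address
	 * has moved past the last descriptor software already completed. */
	comp_update = *ioat->comp_update &
	    IOAT_CHANSTS_COMPLETED_DESCRIPTOR_MASK;
	if (comp_update == ioat->last_seen)
		return;		/* nothing newly completed; don't chase the ring */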
is_completion_pending governs whether or not a callout will be scheduled
when new work is queued on the IOAT device. If true, a callout is
already scheduled, so we do not need a new one. If false, we schedule
one and set it true. Because resetting the hardware completed all
outstanding work but failed to clear is_completion_pending, no new
callout could be scheduled after a reset with pending work.
This resulted in a driver hang for polled (callout-completed) work.
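A minimal sketch of that rule; the callout and helper names are
hypothetical:

	static void
	ioat_schedule_completion_poll(struct ioat_softc *ioat)
	{
		/* A callout is already outstanding; it will see the new work. */
		if (ioat->is_completion_pending)
			return;
		ioat->is_completion_pending = TRUE;
		callout_reset(&ioat->poll_timer, 1, ioat_poll_timer_callback,
		    ioat);
	}
	/* The fix: after a reset drains all outstanding work, clear
	 * is_completion_pending so the next queued operation re-arms the poll. */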
Fix the race between ioat_reset_hw and ioat_process_events.
HW reset isn't protected by a lock because it can sleep for a long time
(40.1 ms). This resulted in a race where we would process bogus parts
of the descriptor ring as if they had completed. This looked like
duplicate completions on old events, if your ring had looped at least
once.
Block callout and interrupt work while reset runs so the completion end
of things does not observe indeterminate state and process invalid parts
of the ring.
Start the channel with a manually implemented ioat_null() to keep other
submitters quiesced while we wait for the channel to start (100 us).
r295605 may have made the race between ioat_reset_hw and
ioat_process_events wider, but I believe it already existed before that
revision. ioat_process_events can be invoked by two asynchronous
sources: callout (softclock) and device interrupt. Those could race
each other, to the same effect.
Reviewed by: markj
Approved by: re
Sponsored by: EMC / Isilon Storage Division
Differential Revision: https://reviews.freebsd.org/D7097
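At a high level, the quiescing described above amounts to something like
the sketch below; the flag, lock, and callout names are assumptions
rather than the driver's actual identifiers:

	/* ioat_reset_hw(): stop new submissions and completion processing
	 * before a hardware reset that may sleep for tens of milliseconds. */
	mtx_lock(&ioat->submit_lock);
	ioat->quiescing = TRUE;	/* submitters and ioat_process_events() honor this */
	mtx_unlock(&ioat->submit_lock);
	callout_drain(&ioat->poll_timer);	/* no completion callout during reset */
	/* ... sleepable reset, restart the channel with a hand-built NULL
	 * descriptor, wait ~100 us for it to start, then clear quiescing ... */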
Add CRC/MOVECRC operations, as well as the TEST and STORE variants.
With these operations, a CRC32C can be computed over one or more
descriptors' source data. When the STORE operation is encountered, the
accumulated CRC32C is emitted to memory. A TEST operation triggers an
IOAT channel error if the accumulated CRC32C does not match one in
memory.
These operations are not exposed through any API yet.
Sponsored by: EMC / Isilon Storage Division
The IOAT engine can only address the low 40 bits (1 TB) of physmem via
the 'next descriptor' pointer. Restrict the acceptable address range
given to bus_dma_tag_create to match.
Sponsored by: EMC / Isilon Storage Division
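A sketch of the restriction via the lowaddr argument of
bus_dma_tag_create(9); every parameter other than lowaddr is a
placeholder here:

	/* The hardware's 'next descriptor' pointer is 40 bits wide, so
	 * exclude physical memory above 1 TB from anything it must point at. */
	#define	IOAT_LOW_40BIT_MAXADDR	((bus_addr_t)((1ULL << 40) - 1))

	error = bus_dma_tag_create(bus_get_dma_tag(dev),
	    64, 0,			/* alignment, boundary (placeholders) */
	    IOAT_LOW_40BIT_MAXADDR,	/* lowaddr: no segment above 2^40 - 1 */
	    BUS_SPACE_MAXADDR,		/* highaddr */
	    NULL, NULL,			/* filter, filterarg */
	    ring_bytes, 1, ring_bytes,	/* maxsize, nsegments, maxsegsz */
	    0, NULL, NULL, &ioat->hw_desc_tag);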
The I/OAT HW reset process may sleep, so it is invalid to perform a
channel reset from the software interrupt thread.
Sponsored by: EMC / Isilon Storage Division
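One conventional way to honor that constraint is to defer the reset to a
taskqueue thread that is allowed to sleep; the task and handler names
here are assumptions:

	/* At attach time; ioat_reset_task() runs in thread context and may sleep. */
	TASK_INIT(&ioat->reset_task, 0, ioat_reset_task, ioat);

	/* From the interrupt/swi path, on detecting a channel error: */
	taskqueue_enqueue(taskqueue_thread, &ioat->reset_task);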
Some classes of IOAT hardware prefetch reads. DMA operations that
depend on the result of prior DMA operations must use the DMA_FENCE flag
to prevent stale reads.
(E.g., I've hit this personally on Broadwell-EP. The Broadwell-DE has a
different IOAT unit that is documented to not pipeline DMA operations.)
Sponsored by: EMC / Isilon Storage Division
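For example, a copy that consumes the output of an earlier copy on the
same channel should carry DMA_FENCE (a sketch against the public ioat(4)
API; addresses, length, and callback are placeholders):

	ioat_acquire(dmaengine);
	/* First hop: src -> bounce buffer. */
	ioat_copy(dmaengine, bounce_pa, src_pa, len, NULL, NULL, 0);
	/* Second hop reads the first hop's output, so fence it to defeat
	 * the engine's read prefetching. */
	ioat_copy(dmaengine, dst_pa, bounce_pa, len, copy_done_cb, cb_arg,
	    DMA_FENCE | DMA_INT_EN);
	ioat_release(dmaengine);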
ioat_acquire_reserve() is an extended version of ioat_acquire(). It
allows users to reserve space in the channel for some number of
descriptors. If the reservation succeeds, submission of at least N
valid descriptors is guaranteed to succeed.
Sponsored by: EMC / Isilon Storage Division
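A usage sketch (addresses, lengths, and the callback are placeholders;
mflags assumed to take M_NOWAIT/M_WAITOK as with malloc(9)): once the
reservation succeeds, neither submission below can fail for lack of ring
space.

	if (ioat_acquire_reserve(dmaengine, 2, M_NOWAIT) != 0)
		return (ENOMEM);	/* e.g., fall back to a CPU copy */
	ioat_copy(dmaengine, dst1_pa, src1_pa, len1, NULL, NULL, 0);
	ioat_copy(dmaengine, dst2_pa, src2_pa, len2, done_cb, cb_arg,
	    DMA_INT_EN);
	ioat_release(dmaengine);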