e58e9fbda8
Module, etc., will follow.

Notes:

* IDXD is an Intel silicon feature available in future Intel CPUs. Initial development is being done on a simulator. Once HW is available and the code fully tested, the experimental label will be lifted. The spec can be found here: https://software.intel.com/en-us/download/intel-data-streaming-accelerator-preliminary-architecture-specification
* The current implementation will only work with VFIO.
* DSA has a number of engines that can be grouped based on application need, such as the type of memory being served or QoS. Engines are processing units and are assigned to groups. Work queues are on-device structures that act as front ends to groups for queueing descriptors. Full details on what is configurable and how will come in later doc patches.
* There is a finite number of work queue slots that is divided amongst the desired work queues in some fashion (i.e., evenly).
* SW (outside of the idxd lib) is required to manage flow control so as not to overrun the work queues. This is provided in the accel plug-in module. The upper layers use the public API to manage this.
* Work queue submissions are done with a 64-byte atomic instruction.
* The design here creates a set of descriptor rings per channel that match the size of the work queues. An spdk_bit_array is then used to make sure we don't overrun a queue. If no slots are available, the operation is put on a linked list to be retried later from the poller.
* As we need to support any number of channels (we can't limit ourselves to the number of work queues), we need to dynamically size/resize our per-channel descriptor rings based on the number of current channels. This is done from the upper layers via the public API into the lib.
* As channels are created, the total number of work queue slots is divided evenly across the channels. Likewise, when channels are destroyed, the remaining channels will see their ring sizes increase. This is done from the upper layers via the public API into the lib.
* The sim has 64 total work queue entries (WQE) that get doled out to the work queues (WQ) evenly.

Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I899bbeda3cef3db05bea4197b8757e89dddb579d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1809
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
accel_engine.h
assert.h
barrier.h
base64.h
bdev_module.h
bdev_zone.h
bdev.h
bit_array.h
blob_bdev.h
blob.h
blobfs_bdev.h
blobfs.h
conf.h
cpuset.h
crc16.h
crc32.h
dif.h
endian.h
env_dpdk.h
env.h
event.h
fd.h
file.h
ftl.h
gpt_spec.h
histogram_data.h
idxd.h
ioat_spec.h
ioat.h
iscsi_spec.h
json.h
jsonrpc.h
likely.h
log.h
lvol.h
memory.h
mmio.h
nbd.h
net.h
notify.h
nvme_intel.h
nvme_ocssd_spec.h
nvme_ocssd.h
nvme_spec.h
nvme.h
nvmf_cmd.h
nvmf_fc_spec.h
nvmf_spec.h
nvmf_transport.h
nvmf.h
opal_spec.h
opal.h
pci_ids.h
pipe.h
queue_extras.h
queue.h
reduce.h
rpc.h
scsi_spec.h
scsi.h
sock.h
stdinc.h
string.h
thread.h
trace.h
util.h
uuid.h
version.h
vhost.h
vmd.h