Allow toggling log timestamps on and off by adding a new RPC call.
Change-Id: I34c84bf89fae352ade266fbf7fd20594ff67bced
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2024
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Remove the asserts and add error exit codes instead. In non-debug
mode, these could lead to a coredump. We don't want the vhost target
to crash after receiving invalid commands.
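A minimal sketch of the pattern, with hypothetical names and bounds
(not the actual vhost code):

    /* Before: assert(len <= max_len);  -- compiled out in non-debug builds */
    if (len > max_len) {
            SPDK_ERRLOG("Invalid command: length %u exceeds %u\n", len, max_len);
            return -EINVAL;     /* report the error instead of crashing */
    }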
fixes issue: #1575
Signed-off-by: yidong0635 <dongx.yi@intel.com>
Change-Id: Ifef6d8f9c32150213bc2c80787e92d428d4c49c3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3951
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: JinYu <jin.yu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The cpumask can be changed by spdk_thread_set_cpumask() during the
time the event takes to arrive at the _schedule_thread() function,
which would make that function hit assert(false), even though the
situation is actually fine.
Currently, that can happen right after a thread is created or
between two successive calls to spdk_thread_set_cpumask().
But most importantly, it will happen constantly if we introduce a
rescheduler.
This patch just disables the check for now.
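For reference, the disabled check was presumably of roughly this form
(a sketch, not the exact code):

    /* In _schedule_thread(): the thread's cpumask may already have been
     * changed by spdk_thread_set_cpumask() while this event was in flight,
     * so this assert can fire even though nothing is wrong. */
    if (!spdk_cpuset_get_cpu(spdk_thread_get_cpumask(thread), current_core)) {
            assert(false);
    }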
Change-Id: Ie6dfe22d6eff2c908c367d1311436cc6769a6960
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3905
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
When the PDU receive handler processes the header of the logout request PDU,
conn->is_logged_out is set to true.
However, if conn->is_logged_out is true, conn->pdu_recv_state is set to ERROR
before the PDU receive handler completes processing the logout request PDU.
Then if conn->pdu_recv_state is ERROR, conn->state is set to EXITING
after returning from the PDU receive handler.
Response PDUs are now sent asynchronously and may not have been sent
yet even after the PDU receive handler returns.
On the other hand, outside the PDU receive handler, the current connection
is closed if conn->state is EXITING.
Hence the logout response PDU may never be sent to the initiator.
For the case where the initiator logs out and then reconnects upon
receiving an asynchronous logout request, a missing logout response
is critical because the initiator waits for the logout response and
times out.
This patch moves the check for whether a PDU arrives after logout to
the place just after the PDU header is read.
At the new location, the data segment of the PDU has not been
received yet. However, a logout request PDU has no data segment, and
the initiator will not send any additional PDU after sending the
logout request PDU; with this patch, the iSCSI target still stops
receiving new PDUs after processing the logout request. Furthermore,
even if there is any remaining data in the kernel buffer, the kernel
will discard or flush it when closing the socket.
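A rough sketch of the relocated check; the state and return values
follow the description above and may not match the code exactly:

    /* Immediately after the PDU header (BHS) has been read: */
    if (conn->is_logged_out) {
            SPDK_ERRLOG("pdu received after logout\n");
            conn->pdu_recv_state = ISCSI_PDU_RECV_STATE_ERROR;
            return SPDK_ISCSI_CONNECTION_FATAL;
    }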
Fixes issue #1571
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I9554f4d54f3db80bf86abd6bffe81bac8c234531
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3928
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
The ANA transition time shall be non-zero if the controller supports
ANA reporting. The Linux NVMe host sets this value to 10, and we
don't have any reason to change from that.
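In practice this means reporting a fixed ANA transition time in the
Identify Controller data, roughly:

    /* ANATT is expressed in seconds; match the Linux NVMe host default. */
    cdata->anatt = 10;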
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I61396695dacf47fad40e3cea3311e555729d9e3e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3909
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
The registration macro now generates a function based on the
driver's name. This allows multiple registrations within a single
source file.
A similar pattern is used, e.g., by SPDK_NVMF_TRANSPORT_REGISTER.
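A sketch of the pattern with hypothetical names (see
SPDK_NVMF_TRANSPORT_REGISTER for a real example):

    /* Token-paste the driver name into the constructor so that several
     * drivers can be registered from one source file without clashing. */
    #define MY_DRIVER_REGISTER(name, drv)                                    \
    static void __attribute__((constructor)) my_driver_register_##name(void) \
    {                                                                         \
            my_driver_list_add(drv);   /* hypothetical list helper */         \
    }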
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: Ied0887e8dae7fe9ca1517313be5eff8f218b7e98
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3895
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This will be used in another place later.
This patch is part of a series aimed at improving recovery
when we fail to change the subsystem state.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: I24bfbeb3d006584003164540d6ede540dbcafa86
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3392
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
The loop here was counting the bytes in the cpus array,
but the lcores are represented by bits.
While here, add a unit test that exposes this bug and
demonstrates it is now fixed with this patch.
Fixes #1570.
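For illustration, the corrected loop walks individual bits of the
mask rather than whole bytes (variable names are illustrative):

    for (i = 0; i < num_lcores; i++) {
            if (cpus[i / 8] & (1U << (i % 8))) {
                    /* lcore i is present in the mask */
                    count++;
            }
    }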
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I3a1fc48a8085254f41587e3b3d5d732154b90134
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3931
Community-CI: Mellanox Build Bot
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This will allow applications to understand why
they were unable to connect.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ic04c7e72098c6ec1823de7d6a07d90150ef5ac20
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3836
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Add a bdev_examine_allowlist_free function, which releases the members
in g_bdev_examine_allowlist. Invoke it in bdev_mgr_unregister_cb.
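A minimal sketch of the helper, assuming the allowlist is a TAILQ of
heap-allocated entries holding a duplicated bdev name (the entry type
and field names are assumptions):

    static void
    bdev_examine_allowlist_free(void)
    {
            struct bdev_examine_item *item;

            while (!TAILQ_EMPTY(&g_bdev_examine_allowlist)) {
                    item = TAILQ_FIRST(&g_bdev_examine_allowlist);
                    TAILQ_REMOVE(&g_bdev_examine_allowlist, item, link);
                    free(item->name);
                    free(item);
            }
    }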
Signed-off-by: Peng Yu <yupeng0921@gmail.com>
Change-Id: I47faf6959066da6679716b2f2abfab8ac8b8dd79
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3880
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Currently, when the uevent processing code finds a non-uio/vfio
uevent, it just stops its loop and returns. This means that if
there are a lot of non-uio/vfio uevents, the netlink socket buffer
can build up until it is full, because only one non-uio/vfio event
gets drained per spdk_nvme_probe() call (which may be called very
infrequently).
So modify parse_event so that it does not indicate an error when
a non-uio/vfio event is found.
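The idea, sketched with illustrative subsystem strings, is to treat
an unrelated uevent as "nothing to do" rather than an error:

    /* Inside parse_event() (sketch): an event from another subsystem is
     * simply ignored so the caller keeps draining the netlink socket. */
    if (strcmp(subsystem, "uio") != 0 && strcmp(subsystem, "vfio") != 0) {
            return 0;   /* previously a negative value stopped the drain loop */
    }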
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic8a40f71ee89d597ce46129eac889fe5b7ef5171
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3876
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
This ensures we don't send a nopin immediately after
a connection is established, in case the nopin poller
fires before the connection reaches the full feature phase.
Fixes #1441.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ieba9476bec0e9b7f85e60b9113ae8364eda5bda3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3902
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
The md_page alignment is not really required for md_page
buffers.
Allocating 4k-aligned buffers all the time causes memory to become
heavily fragmented, because DPDK keeps track of the allocations in
the same DMA region as the allocations themselves.
Removing this alignment requirement will help DPDK when searching
for the right part of memory in the heap.
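The change boils down to dropping the alignment argument for these
allocations, e.g. (a sketch; the actual allocator call in the code
may differ):

    /* Before: every md_page buffer was 4KiB-aligned, fragmenting the heap. */
    page = spdk_dma_zmalloc(sizeof(struct spdk_blob_md_page), 0x1000, NULL);
    /* After: no alignment requirement. */
    page = spdk_dma_zmalloc(sizeof(struct spdk_blob_md_page), 0, NULL);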
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reported-by: Mike Cui
Change-Id: If2f4ca2be38d432d5740f6145b5e0ff46237806b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3853
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
We should make nvme_tcp_ctrlr_connect_qpair always return a
negative value if this function fails.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I457e704e39d7a3acd298fd48e89e8ea51e2ed4ad
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3809
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
The specification says the controller will return INVALID FIELD if
the NS is in the inactive state.
Fix issue #1551.
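Sketch of the status returned for an inactive namespace (the
completion constants come from the SPDK NVMe headers; the surrounding
check is illustrative):

    if (ns == NULL) {   /* namespace is inactive / not allocated */
            rsp->status.sct = SPDK_NVME_SCT_GENERIC;
            rsp->status.sc = SPDK_NVME_SC_INVALID_FIELD;
            return SPDK_NVMF_REQUEST_EXEC_STATUS_COMPLETE;
    }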
Change-Id: I1b32f023ed665d410f4705e439068699e2b2f8de
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3860
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The failed qpair will be destroyed in the generic nvmf layer during
handling of the error code returned from spdk_nvmf_poll_group_add.
The current approach leads to a heap-use-after-free.
Change-Id: I99331150fa36a3c3c18176589afb973dee449b3a
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3538
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The one large global mempool was a waste of memory for apps that
don't use the accel framework, as it always allocated a pool sized
to handle a heavy load with multiple threads.
Instead, move to a per-channel list of just 1024 tasks, greatly
decreasing the memory footprint but still able to scale as more
threads are added.
Also renamed accel_req to accel_task, and plain task to accel_task,
as this code was being touched anyway and the naming was not
consistent.
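Sketch of the per-channel pool, assuming the tasks are carved from a
single allocation at channel-create time and kept on a TAILQ
(structure and field names are assumptions):

    #define ACCEL_TASKS_PER_CHANNEL 1024

    /* On channel create: preallocate the tasks and link them into a
     * per-channel free list instead of using a global mempool. */
    chan->task_mem = calloc(ACCEL_TASKS_PER_CHANNEL, sizeof(struct spdk_accel_task));
    if (chan->task_mem == NULL) {
            return -ENOMEM;
    }
    TAILQ_INIT(&chan->task_pool);
    for (i = 0; i < ACCEL_TASKS_PER_CHANNEL; i++) {
            TAILQ_INSERT_TAIL(&chan->task_pool, &chan->task_mem[i], link);
    }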
fixes issue #1510
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I0e93ca6270323e2df4b739711c5d9b667a52e1eb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3740
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Currently the rdma acceptor handles only one ibv event per poll.
Taking into account the default acceptor poll rate (10ms), it can
take a long time to handle e.g. LAST_WQE_REACHED events when we
close a huge number of qpairs at the same time.
This patch allows handling up to 32 ibv events per acceptor poll.
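Sketch of the bounded drain loop (the constant and the handler name
are illustrative):

    #define MAX_EVENTS_PER_POLL 32

    for (i = 0; i < MAX_EVENTS_PER_POLL; i++) {
            if (ibv_get_async_event(device->context, &event)) {
                    break;      /* no more pending events */
            }
            nvmf_process_ib_event(device, &event);  /* hypothetical handler */
            ibv_ack_async_event(&event);
    }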
Change-Id: Ic2884dfc5b54c6aec0655aaa547b491a9934a386
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3821
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This is an oversight that can cause issues with looping
through the list if we end up allocating the same qpair
twice.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: I513ea35398f4b724366c21be144531fbfbdb4347
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3835
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We only create one spdk_blob object for a given blob, and just
increase the ref_count if it is opened multiple times. bs_open_blob
would do the lookup for existing opened blobs.
But if the blob is opened again, before the previous open operation
has completed, we would end up with two spdk_blob objects for the same
blob.
The solution is to do another lookup when the open operation
completes. If we find an existing blob, free the one we just
finished opening and return the existing one instead.
Also added a unit test that failed on the existing code but passes
with this patch.
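Sketch of the duplicate-open resolution in the open completion path
(the helper names may differ from the code):

    /* Another open may have finished first; if so, prefer its blob. */
    existing = blob_lookup(blob->bs, blob->id);
    if (existing != NULL && existing != blob) {
            blob_free(blob);            /* drop the duplicate we just built */
            existing->open_ref++;
            cb_fn(cb_arg, existing, 0); /* hand back the existing blob */
            return;
    }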
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Reported-by: Mike Cui
Change-Id: I00c3a913b413deddf06f0b63f7a669efb2b5658f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3855
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
pctrlr->cmb.mem_register_addr and pctrlr->cmb.mem_register_size
are assigned after spdk_mem_register.
If spdk_mem_register fails, ctrlr_map_cmb has not been executed and
they are not used.
So remove them.
Signed-off-by: yidong0635 <dongx.yi@intel.com>
Change-Id: I3d1996eee8b5260b79c4c3e0a2e1d376da2343b7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3856
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
The SPDK poller takes its period in microseconds, so we need to
convert to the correct value when opts.association_timeout is
expressed in milliseconds.
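The fix is a unit conversion at poller registration time, roughly
(the callback name is illustrative):

    /* opts.association_timeout is in milliseconds, SPDK pollers take
     * their period in microseconds. */
    poller = SPDK_POLLER_REGISTER(association_timeout_cb, ctrlr,
                                  opts.association_timeout * 1000);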
Change-Id: Ia674f0115ea176b998e4c0c70b8ce75b28984701
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3861
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
After SPDK started supporting ANA reporting by default, the Linux
kernel 5.3 host reported an error when parsing the NVMe ANA log.
Newer kernels fixed the issue, but we should make the ANA reporting
feature optional to avoid the error on Linux kernel 5.3 or earlier.
Add a bool variable ana_reporting to struct spdk_nvmf_subsystem
and disable ANA reporting and initialization of the related variables
when it is false. We could expose MNAN (Maximum Number of Allowed
Namespaces) even if ANA reporting is disabled, but MNAN is not
required in that case, so do not set MNAN when ana_reporting is
false either.
Add a public API spdk_nvmf_subsystem_set_ana_reporting() to set
ana_reporting from the nvmf_create_subsystem RPC.
The next patch will add ana_reporting to the nvmf_create_subsystem RPC.
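Sketch of how the flag gates the ANA-related Identify Controller
fields (the exact field names and placement are assumptions):

    if (subsystem->flags.ana_reporting) {
            cdata->cmic.ana_reporting = 1;
            cdata->anatt = 10;                  /* ANA transition time */
            cdata->mnan = subsystem->max_nsid;  /* only set MNAN when enabled */
    }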
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Icc77773b4c9513daba2f1a9fdaf951d80574f379
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3850
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Monica Kenguva <monica.kenguva@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Add a new RPC, nvmf_subsystem_get_controllers, to retrieve the list
of NVMe-oF controllers of an NVMe-oF subsystem.
One of the main use cases will be to get identification information
of NVMe-oF controllers to configure their ANA states dynamically.
Pause and resume the subsystem to access the controllers safely.
One subtle issue remains: the JSON RPC returns success even if
resuming the subsystem fails. An explicit FIXME is added to note this.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ibf8d1cf56850a705e343b86022d101b4c7204199
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3848
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
A subsystem is not transitioned to a paused state while there are
IOs outstanding (tracked by the subsystem poll group).
In general, AERs are not tracked as outstanding IOs. However,
there are 3 paths in nvmf_ctrlr_async_event_request which do not
adjust the outstanding IO count.
If we get into any of these 3 paths, the subsystem pause can hang
forever.
The issue was reproduced with hot plug stress testing under load.
We can get into the second path (SPDK_NVME_ASYNC_EVENT_TYPE_NOTICE)
under these circumstances:
- An AER completion is sent to the initiator due to a namespace change
(e.g. hot remove/add)
- In this case, type is set to SPDK_NVME_ASYNC_EVENT_TYPE_NOTICE
- The initiator sends a new AER admin command, hitting the second path
  where we return without adjusting the outstanding IO count (see the
  sketch below).
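A sketch of the fix pattern for these early returns (the counter name
is an assumption):

    /* Each of the three early-return paths in nvmf_ctrlr_async_event_request()
     * now drops the poll group's outstanding-IO accounting before returning,
     * for example: */
    sgroup->io_outstanding--;                 /* assumed counter name */
    return SPDK_NVMF_REQUEST_EXEC_STATUS_COMPLETE;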
Fixes: 1552
Change-Id: I45f781966cc1e9a601b2305c7985a21154d802e8
Signed-off-by: Michael Haeuptle <michael.haeuptle@hpe.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3854
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: JinYu <jin.yu@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
There is a fatal bug that can easily cause data corruption when using
thin-provisioned blobs. In blob_request_submit_rw_iov(), we first get
the LBA by calling blob_calculate_lba_and_lba_count(), which
calculates different LBAs depending on the return value of
bs_io_unit_is_allocated(). Later, we call bs_io_unit_is_allocated()
again to judge whether the specific cluster is allocated; the problem
is that the cluster may be allocated by then even though it was not
allocated when blob_calculate_lba_and_lba_count() was called earlier.
To ensure the LBA is correct, we can either recalculate the LBA when
bs_io_unit_is_allocated() returns true, or make
blob_calculate_lba_and_lba_count() return the result of
bs_io_unit_is_allocated(); this patch uses the second solution.
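A sketch of the second solution (the bool return is per the
description above; the exact signature may differ):

    /* blob_calculate_lba_and_lba_count() now reports whether the io_unit
     * was allocated when the LBA was computed, so the caller no longer
     * calls bs_io_unit_is_allocated() a second time and races with a
     * concurrent cluster allocation on the md thread. */
    is_allocated = blob_calculate_lba_and_lba_count(blob, offset, length,
                                                    &lba, &lba_count);
    if (is_allocated) {
            /* submit the I/O to the already-allocated cluster at lba */
    } else {
            /* take the unallocated (allocate-on-write / read-zeroes) path */
    }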
When more than one CPU core is configured, the md thread runs on a
separate SPDK thread, and this data corruption scenario can easily be
reproduced by running fio verify in VMs using thin-provisioned Lvols
as block devices.
Signed-off-by: Sochin Jiang <jiangxiaoqing.sochin@bytedance.com>
Change-Id: I099865ff291ea42d5d49b693cc53f64b60881684
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3318
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
According to page 35 of the recent NVMe-oF spec
(NVMe-over-Fabrics-1.1-2019.10.22-Ratified), ioccsz is used to
restrict the in-capsule size of I/O commands, so do not apply the
restriction to NVMe-oF fabrics commands or to admin commands.
We accidentally triggered a bug in the kernel and caused a kernel
coredump, because we did not send the fabrics command with in-capsule
data, though the kernel has its own bugs here.
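On the host side this amounts to exempting fabrics and admin commands
from the ioccsz-derived limit, roughly (the exact condition in the
code may differ):

    /* Only I/O commands on I/O queues are bound by ioccsz. */
    if (cmd->opc == SPDK_NVME_OPC_FABRIC || nvme_qpair_is_admin_queue(qpair)) {
            use_incapsule_data = true;
    } else {
            use_incapsule_data = (payload_size <= ctrlr->ioccsz_bytes);
    }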
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I869a2c8ab7b9c2ac1e5cc5b603920662591c2c64
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3837
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
The bdev_examine_bdev API examines a bdev explicitly. After
disabling the auto_examine feature, a user can call
bdev_examine_bdev to examine a specific bdev on demand.
Signed-off-by: Peng Yu <yupeng0921@gmail.com>
Change-Id: Ifbbfb6f667287669ddf6175b8208efee39762933
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3219
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
If the transport is broken, we should return an errno code from
spdk_nvme_ctrlr_process_admin_completions instead of failing silently.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ie73763e1329e12a8c82a0223d360991f86c39be3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3773
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This allows for much more granular control over the timeout.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ib23de21e60eec4207c55320579699edf284f4e16
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3794
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Generally, this patch does the following work:
Remove the destruct poller. We do not need it; the destruct poller
exists specifically for the Software RoCE (SoftRoCE) case. Since
SoftRoCE does not generate the IBV_EVENT_QP_LAST_WQE_REACHED event,
we will not wait for the last_wqe_reached flag when SRQ is enabled,
so we can avoid using the poller.
The purpose of this patch is to solve a coredump issue seen, for
example, when running a local RDMA test such as
test/nvmf/host/bdevperf.sh --transport=rdma
The coredump reason: the qpair is freed twice, because for the RDMA
transport we do not really remove the qpair from the group when the
upper layer does it. The first free is triggered by
nvmf_rdma_destroy_drained_qpair in nvmf_rdma_poller_poll, and the
second by nvmf_rdma_qpair_reject_connection in nvme_rdma_close_qpair.
Since nvme_rdma_close_qpair is always called, we need to make sure
the qpair is only closed after calling this function; otherwise we
double-free the qpair. Our approach is to add a flag ("to_close") to
the rqpair structure and make sure the rqpair is freed only after
"to_close" is set in nvme_rdma_close_qpair.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I6f97debbcd29bbb7c6e3f9725907b4102a1d2892
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3661
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Add MaxR2TPerConnection to iSCSI global options and make it configurable
by JSON RPC.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ida95e5c7dac301a22520656709e1aa4d611f31ef
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3777
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
After the recent refactoring, we no longer have a statically sized
array for outstanding R2Ts per connection. It looks like we do not
have any critical reason to prohibit making the max outstanding R2Ts
per connection configurable.
There are some use cases that issue large write I/O intensively
(e.g. 128KB). Let such use cases change the value of max R2Ts per
connection at their own responsibility for performance tuning.
The maximum outstanding R2Ts per task is defined for both the iSCSI
target and the NVMe-TCP target, but the maximum outstanding R2Ts per
connection is unique to the iSCSI target.
The next patch will add the corresponding iSCSI option.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I4f6fd3c750a9a0a99bcf23064fe43a3389829aa9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3776
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
It is likely that the raw number 8 in the macro NUM_PDU_PER_CONNECTION
means 2 * DEFAULT_MAXR2T and the raw number 2 means R2T and Data Out,
but this is not certain.
On the other hand, the next patch will make the max number of
outstanding R2Ts per connection configurable.
As preparation for the next patch, add 2 * DEFAULT_MAXR2T explicitly
to the macro NUM_PDU_PER_CONNECTION.
The next patch will replace DEFAULT_MAXR2T with a new variable.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I8a3be14d53c0abf11d7aade401386601d8fe6c11
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3783
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Other count variables in the iSCSI library use uint32_t rather
than int.
Change the type of spdk_iscsi_conn::pending_r2t from int to uint32_t
and add an assert to check that pending_r2t does not underflow.
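Sketch of the decrement guard once the field is unsigned:

    uint32_t pending_r2t;   /* was: int */

    /* When releasing an R2T: */
    assert(conn->pending_r2t > 0);   /* guard against underflow */
    conn->pending_r2t--;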
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I9bd296c0142b0808ae822952277c9ecc133e5f62
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3775
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Add MaxLargeDataInPerConnection to iSCSI global options and make
it configurable by JSON RPC.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ibcd16da2eac64241217bedeb89a7929bbdc67871
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3756
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
For some use cases with heavy large-read I/O, a performance
bottleneck due to MAX_LARGE_DATAIN_PER_CONNECTION was reported.
The following assumes that all I/Os are large reads.
A large read primary task whose I/O size is more than
SPDK_BDEV_LARGE_BUF_MAX_SIZE (=64KB) is split into multiple
read subtasks.
spdk_iscsi_globals::MaxQueueDepth limits the maximum number of
outstanding read primary tasks, and MAX_LARGE_DATAIN_PER_CONNECTION
(=64) limits the maximum number of outstanding read subtasks.
MAX_LARGE_DATAIN_PER_CONNECTION is also used to size the PDU pool.
To remove the performance bottleneck, change the macro constant
MAX_LARGE_DATAIN_PER_CONNECTION to a global variable
spdk_iscsi_globals::MaxLargeDataInPerConnection.
We don't see any negative side effect if we set
spdk_iscsi_globals::MaxLargeDataInPerConnection to 64.
The use case that reported the performance issue can change the
value of spdk_iscsi_globals::MaxLargeDataInPerConnection at its
own responsibility.
The next patch will add the value of
spdk_iscsi_globals::MaxLargeDataInPerConnection to iSCSI options,
and make it configurable by JSON RPC.
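Sketch of the change (the default stays at the old compile-time
value):

    /* Before: a compile-time cap. */
    #define MAX_LARGE_DATAIN_PER_CONNECTION 64

    /* After: a runtime-configurable member of the iSCSI globals
     * (other members omitted here), defaulting to 64. */
    struct spdk_iscsi_globals {
            uint32_t MaxLargeDataInPerConnection;
    };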
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ifc30cdb8e00d50f4d3755ff399263cf5d0b681b6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3755
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Helps us avoid adding a new I/O qpair while the ctrlr
is being destroyed.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: I3bf9318b075125b9d432b885fa9f6f2f44d422d7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3686
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
For the login redirection feature, the current implementation works
only when login is redirected from an initial portal to a redirect
portal. However, the login redirection feature should also work when
login is redirected from one redirect portal to another redirect
portal.
A public portal group knows only a redirect portal and does not know
the portal group of the redirect portal.
Moreover, it is very likely that an initial portal and a redirect portal
exist in different SPDK iSCSI target applications.
To cover all these concerns, add a new iscsi_target_node_request_logout
RPC to request logout of the connections whose portal group tag
matches for the target node.
To cover potential use cases, make the second parameter, portal group
tag, optional.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I612672490722fb22fd4eba055998b7408ab84ca5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3780
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
As written in doc/iscsi.md, the login redirection feature will
typically be used in a scaled-out iSCSI target system, which runs
multiple SPDK iSCSI target applications.
In such a system, the initial portal, the current redirect portal,
and the next redirect portal are likely to be in different SPDK
iSCSI target applications.
In this case, the asynchronous logout request should be sent
independently from the iSCSI target application which has the
current redirect portal.
However, we had added the asynchronous logout request to the iSCSI
target application which has the next redirect portal. That idea
works only for the case where login is redirected from the initial
portal to a redirect portal.
This patch removes the asynchronous logout request from
iscsi_target_node_redirect() and updates the corresponding help
documents.
The next patch will add a new RPC to send an asynchronous logout
request to all connections to the specified portal group and the
specified target.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ib0ac72e8cdad7e8c64e446b7495e572fac4b5bae
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3779
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This data structure is not used.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I143fb9256f692d7bd9bb5e14cdc479f64ddcef45
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3746
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
1. Retrieve the actual IBV state when we receive a WC with a bad status.
2. Don't log an error if the WC status is IBV_WC_WR_FLUSH_ERR. This
means that we are performing qpair cleanup and the WC is expected.
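A sketch of the polling-path change (the state-query helper is an
assumption):

    if (wc[i].status != IBV_WC_SUCCESS) {
            if (wc[i].status != IBV_WC_WR_FLUSH_ERR) {
                    /* Re-query the qpair state so the log reflects reality. */
                    state = nvmf_rdma_update_ibv_state(rqpair);
                    SPDK_ERRLOG("CQ error: %s, qpair state %d\n",
                                ibv_wc_status_str(wc[i].status), state);
            }
            /* IBV_WC_WR_FLUSH_ERR is expected during qpair cleanup; stay quiet. */
    }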
Change-Id: Id23634092f537861e66ca0f83ab79db9e052507b
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3736
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>