Compare commits

...

18 Commits

Author SHA1 Message Date
3d0d559dbd update dpdk commit 2022-05-27 04:30:06 +08:00
e8a1bac9a1 update dpdk submodule 2022-05-17 01:32:44 -04:00
38644662be update submodules & refs 2022-03-08 04:16:31 +08:00
1edb07ac50 fix build system 2022-03-08 04:08:39 +08:00
Krzysztof Karas
7dc38f83f7 spdk_top: reduce number of global thread data structures
Deletes g_thread_history and g_thread_info in favor of using g_threads_stats
across the whole application, simplifying the spdk_top code.
Instead of a separate struct, the last_busy and last_idle fields are
now used.

The get_data() function now uses a local structure to receive the RPC data
instead of filling the global one. This was changed so that g_threads_stats
keeps its last_busy and last_idle fields unchanged.
free_rpc_threads_stats() has been moved down so that in future patches,
when multithreading is implemented, there is no need to lock
g_threads_stats during the RPC call.

Changes where g_threads_stats is allocated and deallocated, since we want
to preserve the last_idle and last_busy fields instead of zeroing them out
on each application loop.

Changes the show_thread() function to use a local copy of the threads array
instead of pointers into the global struct. This is for convenience in
future patches implementing multithreading, avoiding the need to lock
the global struct while displaying details.
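
Below is a minimal sketch of the carry-over logic described above, using
hypothetical simplified types (the real structures are rpc_thread_info and
rpc_threads_stats, as in the diff further down):

#include <stddef.h>
#include <stdint.h>

struct thread_stats {
    uint64_t id;
    uint64_t busy, idle;           /* cumulative tick counters from the RPC */
    uint64_t last_busy, last_idle; /* previous snapshot, kept for deltas */
};

/* Carry the previous cumulative counters into the fresh snapshot so that
 * per-interval deltas (busy - last_busy, idle - last_idle) can be computed
 * without a separate history array. */
static void
carry_over_counters(struct thread_stats *fresh, size_t fresh_cnt,
                    const struct thread_stats *prev, size_t prev_cnt)
{
    for (size_t i = 0; i < fresh_cnt; i++) {
        for (size_t j = 0; j < prev_cnt; j++) {
            if (prev[j].id == fresh[i].id) {
                fresh[i].last_busy = prev[j].busy;
                fresh[i].last_idle = prev[j].idle;
                break;
            }
        }
    }
}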

Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7587 (master)

(cherry picked from commit 081a4a0943)
Change-Id: I0dc87eac4c1b89fa16f14f5387d94ee176dfdf43
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8110
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2021-06-04 22:52:37 +00:00
Krzysztof Karas
464ddc03b6 spdk_top: change where get_data() and free_data() are called
Move the part of the code calling get_data(), refresh_tab() and free_data()
inside show_stats() upwards, to make sure the data structures are up to
date for the pop-up details windows.

Delete the get_data() and free_data() calls from the show_thread(),
show_poller() and show_core() functions.

Add data freeing right before the RPC calls inside get_data(), to let the
pop-up details windows use the updated data before it is freed.

Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7953 (master)

(cherry picked from commit 22edbe9626)
Change-Id: I0d78eb7a48b0cdff4284815afc1a214b0effd7fc
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8109
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: John Kariuki <John.K.Kariuki@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2021-06-04 22:52:37 +00:00
Krzysztof Karas
b20db89532 spdk_top: move sort_threads function
This function is going to be needed in get_data() in the next patch.

Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7947 (master)

(cherry picked from commit e2b6cf2f96)
Change-Id: I9368b4567a92ca20d830c3475e3120ee691b84c1
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8108
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2021-06-04 22:52:37 +00:00
Michal Berger
10b7805b0f scripts/rpc: Make sure address argument is properly interpreted
If the addr argument was not an existing Unix socket file, the rpc
client would consider it to be an actual IP address. As a result,
connect() would be called with an improper set of arguments. This could
cause rpc.py to block for an undesired amount of time until connect()
finally decided to return (seen on some fedora33 builds).

This was affecting sh wrapper functions like waitforlisten(), which
use rpc.py to determine whether a given app is ready to be talked to,
blocking execution of the tests for far longer than intended.

To avoid such a scenario, determine the format of the address and use
the routines proper for the given address family.
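
As a hedged illustration of the same classification technique in C (the
actual fix is the Python get_addr_type() helper shown in the rpc client
diff below; get_addr_family is a hypothetical name):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Classify the address first, so the caller can pick the matching
 * connect routine instead of guessing. */
static int
get_addr_family(const char *addr)
{
    unsigned char buf[sizeof(struct in6_addr)];

    if (inet_pton(AF_INET, addr, buf) == 1) {
        return AF_INET;
    }
    if (inet_pton(AF_INET6, addr, buf) == 1) {
        return AF_INET6;
    }
    if (access(addr, F_OK) == 0) {
        return AF_UNIX; /* existing path: treat as a Unix domain socket */
    }
    return -1; /* invalid or non-existing address */
}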

Signed-off-by: Michal Berger <michalx.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7777 (master)

(cherry picked from commit 6c1a1a3dca)
Change-Id: Iaac701d72c772629fa7c6478ff4781b0c5d485d5
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8018
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Michal Berger <michalx.berger@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2021-05-31 07:11:13 +00:00
Karol Latecki
fb67ea5148 autobuild: update patches for mainline DPDK
Patches stopped applying cleanly because of the following dpdk/dpdk changes:
7d5cfaa7508de0fd248b05effbf421a98317006a
4ad4b20a79052d9c8062b64eaf0170c16a333ff8
The custom patches needed to be rebased.

Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7903 (master)

(cherry picked from commit 7908736c22)
Change-Id: I1006f7f6ba21a3cee5b607cfc44adedb4c1d5830
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8017
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2021-05-31 07:11:13 +00:00
Tomasz Zawadzki
48f6cd39c9 version: 21.04.1 pre
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Ic08e163c2b37843297dd1b45f341aa8377be8acb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7686
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2021-04-30 09:19:37 +00:00
Tomasz Zawadzki
8016710153 SPDK 21.04
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Id0e3688cf81f2dac4de4ab6c5212f986776ade2f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7685
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2021-04-30 07:11:44 +00:00
Liu Xiaodong
67fab31304 test: add functional test for reactor_set_intr
The test script 'test/interrupt/reactor_set_intr.sh' performs various
reactor set-interrupt operations on interrupt_tgt, both without
spdk_thread and with spdk_thread.

Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7348 (master)

(cherry picked from commit ac0c36d72a)
Change-Id: Ie5af1dc68b0272c34a91e8a66b78088c3794907c
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7678
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2021-04-30 07:11:44 +00:00
Tomasz Zawadzki
0fdf94cf77 lib/blob: force execution of queued persists
When performing snapshot creation, the I/O is frozen during the process.
Blob persists for extent page allocation are delayed until snapshot
creation is finished.

This results in multiple blob persists executing one after the other,
with the only intent of writing out the updated extent table pointing
to the new extent pages. blob->state is marked DIRTY before each persist
is issued, but a single persist completion marks the state CLEAR.

Blob serialization correctly expects each persist to contain dirtied
metadata, in order to avoid unnecessary md writes. Since all other
instances of marking the blob DIRTY are explicit, the assert in blob
serialization is left as-is.

Instead, when running the queued-up blob persists, the blob state is
marked DIRTY.

A side effect is that the same md will be written out in some cases.
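
A minimal sketch of the resulting completion path (mirroring the lib/blob
hunk further down; spdk_blob, SPDK_BLOB_STATE_DIRTY and
blob_persist_check_dirty() are internal lib/blob/blobstore symbols):

/* After a persist completes, the shared state is CLEAR. If another
 * persist is queued, re-mark the blob DIRTY so that serialization of
 * the queued persist finds dirtied metadata instead of hitting the
 * assert. */
if (next_persist != NULL) {
    blob->state = SPDK_BLOB_STATE_DIRTY;
    blob_persist_check_dirty(next_persist);
}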

Fixes #1909

Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7640 (master)

(cherry picked from commit 50935184c8)
Change-Id: I39f37299f3f0ebfccbdd4063781b5ecce286e993
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7677
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2021-04-30 07:11:44 +00:00
Nick Connolly
00df37cbb6 ut/nvme_ctrlr_cmd: add missing mutex init
Add the missing mutex init for the ctrlr's ctrlr_lock.

Signed-off-by: Nick Connolly <nick.connolly@mayadata.io>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7613 (master)

(cherry picked from commit 0fdd826a00)
Change-Id: Ib3d665a28e91a72d1f1f6d09c374583ff731fb6f
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7676
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Nick Connolly <nick.connolly@mayadata.io>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2021-04-30 07:11:44 +00:00
Jim Harris
4f8b83cf51 nvme: reset mapping_length correctly for contig SGL
spdk_vtophys() takes a mapping_length parameter, so
it can return the length for which the returned
virtual address is valid.

But spdk_vtophys() will only return the minimum of the valid length
and the input mapping_length parameter.

So the nvme SGL building code for contiguous buffers was broken, since
it set mapping_length only once, before the loop started. Worst case,
if a buffer started just (maybe 256 bytes) before a huge page boundary,
each pass through the loop would create a new SGL entry covering only
256 bytes at a time, very quickly running out of SGL entries for a
large buffer.
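
A hedged sketch of the corrected loop (the real change is the one-line
move shown in the lib/nvme hunk further down; map_contig_buffer is a
simplified stand-in for the SGL-building function):

#include <errno.h>

#include "spdk/env.h"

/* mapping_length must be reset to the remaining length on every
 * iteration, because spdk_vtophys() never returns a length larger
 * than the one passed in. */
static int
map_contig_buffer(void *buf, uint64_t length)
{
    char *virt_addr = buf;
    uint64_t mapping_length, phys_addr;

    while (length > 0) {
        mapping_length = length; /* the fix: reset per iteration */
        phys_addr = spdk_vtophys(virt_addr, &mapping_length);
        if (phys_addr == SPDK_VTOPHYS_ERROR) {
            return -EFAULT;
        }
        /* ... append an SGL entry for [phys_addr, phys_addr + mapping_length) ... */
        virt_addr += mapping_length;
        length -= mapping_length;
    }
    return 0;
}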

Fixes #1852.

Signed-off-by: Jim Harris <james.r.harris@intel.com>

Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7659 (master)

(cherry picked from commit 5354d0c63f)
Change-Id: Ib1000d8b130e8e4bfeacccd6e60f8109428dfc1e
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7675
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2021-04-30 07:11:44 +00:00
Jim Harris
35d003dfed nvme: remove IDENTIFY_CNS quirk from normal QEMU SSDs
The IDENTIFY_CNS quirk was applied as part of the QEMU OCSSD handling
in commit 6442451b. But it was applied not only to the OCSSD dev ID,
but also to the dev ID for non-OCSSD NVMe controllers.

Starting with QEMU 5.2, QEMU allocates a default of 256 namespaces,
but only some are active (associated with the backing disks specified
by the user). QEMU supports IDENTIFY_CNS, but since this quirk was
set, we wouldn't send a real IDENTIFY_CNS and instead would just
populate a fake list where all namespaces were considered active.
This causes breakage in a few places - mainly where we iterate through
the active namespaces and are then surprised that calling
spdk_nvme_ns_is_active() returns false.

It was also breaking the bdev_nvme_attach_controller RPC, since by
default we can only return 128 names, but because all of the
namespaces were deemed active, it was trying to return 256.
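
For reference, a hedged sketch of the iteration pattern the message
refers to (walk_active_namespaces is a hypothetical helper; the
spdk_nvme_* calls are the public API):

#include "spdk/nvme.h"

/* With the quirk wrongly applied, a nsid reported by this walk could
 * still belong to a namespace for which spdk_nvme_ns_is_active()
 * returns false. */
static void
walk_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    uint32_t nsid;

    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
            continue;
        }
        /* ... use the namespace ... */
    }
}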

Fixes #1916.

Signed-off-by: Jim Harris <james.r.harris@intel.com>

Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7658 (master)

(cherry picked from commit 6fd1cc3716)
Change-Id: I4fdd27e0e36f0ac07a95f9f29aa83357e8505a45
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7674
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2021-04-30 07:11:44 +00:00
Alexey Marchuk
e6e51ea9a2 sock: Deprecate enable_zerocopy_send in sock_impl_set_options RPC
This deprecated parameter will be removed in SPDK 21.07

Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7608 (master)

(cherry picked from commit 2fd97e28bf)
Change-Id: I2b2fbcc798bb50fa6f9dfe35045f66e41c1ceaa9
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7639
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
2021-04-27 16:25:00 +00:00
Alexey Marchuk
d9efb8f004 sock: Add new params to configure zcopy for server, client sockets
When zero copy send is enabled and used by the initiator, it can
significantly increase latency for some payloads. To allow more
fine-grained configuration of the zero copy send feature, add new
parameters enable_zerocopy_send_server and enable_zerocopy_send_client
to spdk_sock_impl_opts to enable/disable zcopy for specific types of
sockets. The existing enable_zerocopy_send parameter affects all types
of sockets.
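
A hedged usage sketch, assuming the posix implementation
(spdk_sock_impl_get_opts/spdk_sock_impl_set_opts are the public API;
enable_server_zcopy_only is a hypothetical helper):

#include "spdk/sock.h"

/* Enable zero copy send only for server (listening) sockets, leaving
 * client (connecting) sockets unaffected. */
static int
enable_server_zcopy_only(void)
{
    struct spdk_sock_impl_opts opts = { 0 };
    size_t opts_size = sizeof(opts);
    int rc;

    rc = spdk_sock_impl_get_opts("posix", &opts, &opts_size);
    if (rc != 0) {
        return rc;
    }

    opts.enable_zerocopy_send_server = true;
    opts.enable_zerocopy_send_client = false;

    return spdk_sock_impl_set_opts("posix", &opts, opts_size);
}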

Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7457 (master)

(cherry picked from commit 8e85b675fc)
Change-Id: I111c75608f8826980a56e210c076ab8ff16ddbdc
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7638
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2021-04-27 16:25:00 +00:00
26 changed files with 453 additions and 188 deletions

.gitmodules vendored
View File

@ -1,6 +1,6 @@
[submodule "dpdk"]
path = dpdk
url = https://github.com/spdk/dpdk.git
url = https://git.quacker.org/d/numam-dpdk.git
[submodule "intel-ipsec-mb"]
path = intel-ipsec-mb
url = https://github.com/spdk/intel-ipsec-mb.git

View File

@ -1,6 +1,8 @@
# Changelog
## v21.04: (Upcoming Release)
## v21.04.1: (Upcoming Release)
## v21.04: ZNS NVMe bdev, PMR, ADQ initiator, RPM
### accel
@ -161,6 +163,11 @@ For `bdev_raid_create` RPC, the deprecated parameter `strip_size` was removed.
New RPC `bdev_nvme_get_transport_statistics` was added, it allows to get transport statistics
of nvme poll groups.
Parameter `enable-zerocopy-send` of RPC `sock_impl_set_options` is deprecated and will be removed in SPDK 21.07,
use `enable-zerocopy-send-server` or `enable-zerocopy-send-client` instead.
Parameter `disable-zerocopy-send` of RPC `sock_impl_set_options` is deprecated and will be removed in SPDK 21.07,
use `disable-zerocopy-send-server` or `disable-zerocopy-send-client` instead.
### rpm
Added support for new RPM spec, rpmbuild/spdk.spec, which can be used for packaging the
@ -179,6 +186,12 @@ sockets to be marked using the SO_MARK socket option as a hint for which hardwar
queue they should be associated with. This mode leverages that by setting the same
value for all sockets within a poll group.
New parameters `enable_zerocopy_send_server` and `enable_zerocopy_send_client` were added
to struct spdk_sock_impl_opts, these parameters enable or disable zero copy send for server
and client sockets which are created using `spdk_sock_listen` and `spdk_sock_listen_ext` (server);
`spdk_sock_connect` and `spdk_sock_connect_ext` (client) functions. Existing parameter
`enable_zerocopy_send` enables or disables zero copy send for both server and client sockets.
### thread
A new API `spdk_io_channel_get_io_device` was added to get the io_device for the specified

View File

@ -142,7 +142,6 @@ struct core_info {
uint8_t g_sleep_time = 1;
uint16_t g_selected_row;
uint16_t g_max_selected_row;
struct rpc_thread_info *g_thread_info[MAX_THREADS];
const char *poller_type_str[SPDK_POLLER_TYPES_COUNT] = {"Active", "Timed", "Paused"};
const char *g_tab_title[NUMBER_OF_TABS] = {"[1] THREADS", "[2] POLLERS", "[3] CORES"};
struct spdk_jsonrpc_client *g_rpc_client;
@ -276,7 +275,6 @@ struct rpc_threads_stats g_threads_stats;
struct rpc_pollers_stats g_pollers_stats;
struct rpc_cores_stats g_cores_stats;
struct rpc_poller_info g_pollers_history[RPC_MAX_POLLERS];
struct rpc_thread_info g_thread_history[RPC_MAX_THREADS];
static void
init_str_len(void)
@ -546,12 +544,60 @@ rpc_send_req(char *rpc_name, struct spdk_jsonrpc_client_response **resp)
return 0;
}
static int
sort_threads(const void *p1, const void *p2)
{
const struct rpc_thread_info thread_info1 = *(struct rpc_thread_info *)p1;
const struct rpc_thread_info thread_info2 = *(struct rpc_thread_info *)p2;
uint64_t count1, count2;
switch (g_current_sort_col[THREADS_TAB]) {
case 0: /* Sort by name */
return strcmp(thread_info1.name, thread_info2.name);
case 1: /* Sort by core */
count2 = thread_info1.core_num;
count1 = thread_info2.core_num;
break;
case 2: /* Sort by active pollers number */
count1 = thread_info1.active_pollers_count;
count2 = thread_info2.active_pollers_count;
break;
case 3: /* Sort by timed pollers number */
count1 = thread_info1.timed_pollers_count;
count2 = thread_info2.timed_pollers_count;
break;
case 4: /* Sort by paused pollers number */
count1 = thread_info1.paused_pollers_count;
count2 = thread_info2.paused_pollers_count;
break;
case 5: /* Sort by idle time */
count1 = thread_info1.idle - thread_info1.last_idle;
count2 = thread_info2.idle - thread_info2.last_idle;
break;
case 6: /* Sort by busy time */
count1 = thread_info1.busy - thread_info1.last_busy;
count2 = thread_info2.busy - thread_info2.last_busy;
break;
default:
return 0;
}
if (count2 > count1) {
return 1;
} else if (count2 < count1) {
return -1;
} else {
return 0;
}
}
static int
get_data(void)
{
struct spdk_jsonrpc_client_response *json_resp = NULL;
struct rpc_thread_info *thread_info;
struct rpc_core_info *core_info;
struct rpc_threads_stats threads_stats;
uint64_t i, j;
int rc = 0;
@ -561,25 +607,37 @@ get_data(void)
}
/* Decode json */
memset(&g_threads_stats, 0, sizeof(g_threads_stats));
memset(&threads_stats, 0, sizeof(threads_stats));
if (spdk_json_decode_object(json_resp->result, rpc_threads_stats_decoders,
SPDK_COUNTOF(rpc_threads_stats_decoders), &g_threads_stats)) {
SPDK_COUNTOF(rpc_threads_stats_decoders), &threads_stats)) {
rc = -EINVAL;
goto end;
}
/* This is to free allocated char arrays with old thread names */
free_rpc_threads_stats(&g_threads_stats);
spdk_jsonrpc_client_free_response(json_resp);
for (i = 0; i < g_threads_stats.threads.threads_count; i++) {
thread_info = &g_threads_stats.threads.thread_info[i];
g_thread_info[thread_info->id] = thread_info;
for (i = 0; i < threads_stats.threads.threads_count; i++) {
for (j = 0; j < g_threads_stats.threads.threads_count; j++) {
if (g_threads_stats.threads.thread_info[j].id == threads_stats.threads.thread_info[i].id) {
threads_stats.threads.thread_info[i].last_busy = g_threads_stats.threads.thread_info[j].busy;
threads_stats.threads.thread_info[i].last_idle = g_threads_stats.threads.thread_info[j].idle;
}
}
}
memcpy(&g_threads_stats, &threads_stats, sizeof(struct rpc_threads_stats));
qsort(&g_threads_stats.threads.thread_info, threads_stats.threads.threads_count,
sizeof(g_threads_stats.threads.thread_info[0]), sort_threads);
rc = rpc_send_req("thread_get_pollers", &json_resp);
if (rc) {
goto end;
}
/* Free old pollers values before allocating memory for new ones */
free_rpc_pollers_stats(&g_pollers_stats);
/* Decode json */
memset(&g_pollers_stats, 0, sizeof(g_pollers_stats));
if (spdk_json_decode_object(json_resp->result, rpc_pollers_stats_decoders,
@ -595,6 +653,9 @@ get_data(void)
goto end;
}
/* Free old cores values before allocating memory for new ones */
free_rpc_cores_stats(&g_cores_stats);
/* Decode json */
memset(&g_cores_stats, 0, sizeof(g_cores_stats));
if (spdk_json_decode_object(json_resp->result, rpc_cores_stats_decoders,
@ -607,7 +668,7 @@ get_data(void)
core_info = &g_cores_stats.cores.core[i];
for (j = 0; j < core_info->threads.threads_count; j++) {
g_thread_info[core_info->threads.thread[j].id]->core_num = core_info->lcore;
g_threads_stats.threads.thread_info[j].core_num = core_info->lcore;
}
}
@ -797,63 +858,6 @@ get_time_str(uint64_t ticks, char *time_str)
snprintf(time_str, MAX_TIME_STR_LEN, "%" PRIu64, time);
}
static int
sort_threads(const void *p1, const void *p2)
{
const struct rpc_thread_info *thread_info1 = *(struct rpc_thread_info **)p1;
const struct rpc_thread_info *thread_info2 = *(struct rpc_thread_info **)p2;
uint64_t count1, count2;
/* thread IDs may not be allocated contiguously, so we need
* to account for NULL thread_info pointers */
if (thread_info1 == NULL && thread_info2 == NULL) {
return 0;
} else if (thread_info1 == NULL) {
return 1;
} else if (thread_info2 == NULL) {
return -1;
}
switch (g_current_sort_col[THREADS_TAB]) {
case 0: /* Sort by name */
return strcmp(thread_info1->name, thread_info2->name);
case 1: /* Sort by core */
count2 = thread_info1->core_num;
count1 = thread_info2->core_num;
break;
case 2: /* Sort by active pollers number */
count1 = thread_info1->active_pollers_count;
count2 = thread_info2->active_pollers_count;
break;
case 3: /* Sort by timed pollers number */
count1 = thread_info1->timed_pollers_count;
count2 = thread_info2->timed_pollers_count;
break;
case 4: /* Sort by paused pollers number */
count1 = thread_info1->paused_pollers_count;
count2 = thread_info2->paused_pollers_count;
break;
case 5: /* Sort by idle time */
count1 = thread_info1->idle - thread_info1->last_idle;
count2 = thread_info2->idle - thread_info2->last_idle;
break;
case 6: /* Sort by busy time */
count1 = thread_info1->busy - thread_info1->last_busy;
count2 = thread_info2->busy - thread_info2->last_busy;
break;
default:
return 0;
}
if (count2 > count1) {
return 1;
} else if (count2 < count1) {
return -1;
} else {
return 0;
}
}
static void
draw_row_background(uint8_t item_index, uint8_t tab)
{
@ -872,7 +876,7 @@ refresh_threads_tab(uint8_t current_page)
{
struct col_desc *col_desc = g_col_desc[THREADS_TAB];
uint64_t i, threads_count;
uint16_t j, k;
uint16_t j;
uint16_t col;
uint8_t max_pages, item_index;
static uint8_t last_page = 0;
@ -893,35 +897,16 @@ refresh_threads_tab(uint8_t current_page)
g_last_threads_count = threads_count;
}
/* From g_thread_info copy to thread_info without null elements.
* The index of g_thread_info equals to Thread IDs, so it starts from '1'. */
for (i = 0, j = 1; i < g_threads_stats.threads.threads_count; i++) {
while (g_thread_info[j] == NULL) {
j++;
}
memcpy(&thread_info[i], &g_thread_info[j], sizeof(struct rpc_thread_info *));
j++;
for (i = 0; i < threads_count; i++) {
thread_info[i] = &g_threads_stats.threads.thread_info[i];
}
if (last_page != current_page) {
for (i = 0; i < threads_count; i++) {
/* Thread IDs start from 1, so we have to do i + 1 */
g_threads_stats.threads.thread_info[i].last_idle = g_thread_info[i + 1]->idle;
g_threads_stats.threads.thread_info[i].last_busy = g_thread_info[i + 1]->busy;
}
last_page = current_page;
}
max_pages = (threads_count + g_max_data_rows - 1) / g_max_data_rows;
qsort(thread_info, threads_count, sizeof(thread_info[0]), sort_threads);
for (k = 0; k < threads_count; k++) {
g_thread_history[thread_info[k]->id].busy = thread_info[k]->busy - thread_info[k]->last_busy;
g_thread_history[thread_info[k]->id].idle = thread_info[k]->idle - thread_info[k]->last_idle;
}
for (i = current_page * g_max_data_rows;
i < spdk_min(threads_count, (uint64_t)((current_page + 1) * g_max_data_rows));
i++) {
@ -965,7 +950,6 @@ refresh_threads_tab(uint8_t current_page)
col += col_desc[4].max_data_string + 2;
}
g_thread_history[thread_info[i]->id].idle = thread_info[i]->idle - thread_info[i]->last_idle;
if (!col_desc[5].disabled) {
if (g_interval_data == true) {
get_time_str(thread_info[i]->idle - thread_info[i]->last_idle, idle_time);
@ -977,7 +961,6 @@ refresh_threads_tab(uint8_t current_page)
col += col_desc[5].max_data_string;
}
g_thread_history[thread_info[i]->id].busy = thread_info[i]->busy - thread_info[i]->last_busy;
if (!col_desc[6].disabled) {
if (g_interval_data == true) {
get_time_str(thread_info[i]->busy - thread_info[i]->last_busy, busy_time);
@ -993,11 +976,6 @@ refresh_threads_tab(uint8_t current_page)
}
}
for (k = 0; k < threads_count; k++) {
thread_info[k]->last_idle = thread_info[k]->idle;
thread_info[k]->last_busy = thread_info[k]->busy;
}
g_max_selected_row = i - current_page * g_max_data_rows - 1;
return max_pages;
@ -1995,9 +1973,9 @@ display_thread(struct rpc_thread_info *thread_info)
thread_info->core_num);
if (g_interval_data) {
get_time_str(g_thread_history[thread_info->id].idle, idle_time);
get_time_str(thread_info->idle - thread_info->last_idle, idle_time);
mvwprintw(thread_win, 3, THREAD_WIN_FIRST_COL + 32, idle_time);
get_time_str(g_thread_history[thread_info->id].busy, busy_time);
get_time_str(thread_info->busy - thread_info->last_busy, busy_time);
mvwprintw(thread_win, 3, THREAD_WIN_FIRST_COL + 54, busy_time);
} else {
get_time_str(thread_info->idle, idle_time);
@ -2079,22 +2057,13 @@ display_thread(struct rpc_thread_info *thread_info)
static void
show_thread(uint8_t current_page)
{
struct rpc_thread_info *thread_info[g_threads_stats.threads.threads_count];
struct rpc_thread_info thread_info;
uint64_t thread_number = current_page * g_max_data_rows + g_selected_row;
uint64_t i;
get_data();
assert(thread_number < g_threads_stats.threads.threads_count);
for (i = 0; i < g_threads_stats.threads.threads_count; i++) {
thread_info[i] = &g_threads_stats.threads.thread_info[i];
}
thread_info = g_threads_stats.threads.thread_info[thread_number];
qsort(thread_info, g_threads_stats.threads.threads_count, sizeof(thread_info[0]), sort_threads);
display_thread(thread_info[thread_number]);
free_data();
display_thread(&thread_info);
}
static void
@ -2124,8 +2093,6 @@ show_core(uint8_t current_page)
bool stop_loop = false;
char idle_time[MAX_TIME_STR_LEN], busy_time[MAX_TIME_STR_LEN];
get_data();
assert(core_number < g_cores_stats.cores.cores_count);
for (i = 0; i < g_cores_stats.cores.cores_count; i++) {
core_info[i] = &g_cores_stats.cores.core[i];
@ -2231,8 +2198,6 @@ show_core(uint8_t current_page)
del_panel(core_panel);
delwin(core_win);
free_data();
}
static void
@ -2247,8 +2212,6 @@ show_poller(uint8_t current_page)
char poller_period[MAX_TIME_STR_LEN];
int c;
get_data();
prepare_poller_data(current_page, pollers, &count, current_page);
assert(poller_number < count);
@ -2314,8 +2277,6 @@ show_poller(uint8_t current_page)
del_panel(poller_panel);
delwin(poller_win);
free_data();
}
static void
@ -2337,6 +2298,8 @@ show_stats(void)
clock_gettime(CLOCK_REALTIME, &time_now);
time_last = time_now.tv_sec;
memset(&g_threads_stats, 0, sizeof(g_threads_stats));
switch_tab(THREADS_TAB);
while (1) {
@ -2351,6 +2314,27 @@ show_stats(void)
resize_interface(active_tab);
}
clock_gettime(CLOCK_REALTIME, &time_now);
time_dif = time_now.tv_sec - time_last;
if (time_dif < 0) {
time_dif = g_sleep_time;
}
if (time_dif >= g_sleep_time || force_refresh) {
time_last = time_now.tv_sec;
rc = get_data();
if (rc) {
mvprintw(g_max_row - 1, g_max_col - strlen(refresh_error) - 2, refresh_error);
}
max_pages = refresh_tab(active_tab, current_page);
snprintf(current_page_str, CURRENT_PAGE_STR_LEN - 1, "Page: %d/%d", current_page + 1, max_pages);
mvprintw(g_max_row - 1, 1, current_page_str);
refresh();
}
c = getch();
if (c == 'q') {
free_resources();
@ -2429,30 +2413,8 @@ show_stats(void)
force_refresh = false;
break;
}
clock_gettime(CLOCK_REALTIME, &time_now);
time_dif = time_now.tv_sec - time_last;
if (time_dif < 0) {
time_dif = g_sleep_time;
}
if (time_dif >= g_sleep_time || force_refresh) {
time_last = time_now.tv_sec;
rc = get_data();
if (rc) {
mvprintw(g_max_row - 1, g_max_col - strlen(refresh_error) - 2, refresh_error);
}
max_pages = refresh_tab(active_tab, current_page);
snprintf(current_page_str, CURRENT_PAGE_STR_LEN - 1, "Page: %d/%d", current_page + 1, max_pages);
mvprintw(g_max_row - 1, 1, current_page_str);
free_data();
refresh();
}
}
free_data();
}
static void

View File

@ -123,8 +123,8 @@ function build_native_dpdk() {
wget https://github.com/karlatec/dpdk/commit/3219c0cfc38803aec10c809dde16e013b370bda9.patch -O dpdk-pci.patch
wget https://github.com/karlatec/dpdk/commit/adf8f7638de29bc4bf9ba3faf12bbdae73acda0c.patch -O dpdk-qat.patch
else
wget https://github.com/karlatec/dpdk/commit/eac05db0580091ef8e4d338aa5d2210695521894.patch -O dpdk-pci.patch
wget https://github.com/karlatec/dpdk/commit/d649d5efb7bb404ce59dea81768adeb994b284f7.patch -O dpdk-qat.patch
wget https://github.com/karlatec/dpdk/commit/f95e331be3a1f856b816948990dd2afc67ea4020.patch -O dpdk-pci.patch
wget https://github.com/karlatec/dpdk/commit/6fd2fa906ffdcee04e6ce5da40e61cb841be9827.patch -O dpdk-qat.patch
fi
git config --local user.name "spdk"
git config --local user.email "nomail@all.com"

View File

@ -195,6 +195,7 @@ if [ $SPDK_RUN_FUNCTIONAL_TEST -eq 1 ]; then
run_test "bdevperf_config" test/bdev/bdevperf/test_config.sh
if [[ $(uname -s) == Linux ]]; then
run_test "spdk_dd" test/dd/dd.sh
run_test "reactor_set_interrupt" test/interrupt/reactor_set_interrupt.sh
fi
fi

configure vendored
View File

@ -845,7 +845,7 @@ fi
echo -n "Creating mk/config.mk..."
cp -f $rootdir/CONFIG $rootdir/mk/config.mk
for key in "${!CONFIG[@]}"; do
sed -i.bak -r "s#^\s*CONFIG_${key}=.*#CONFIG_${key}\?=${CONFIG[$key]}#g" $rootdir/mk/config.mk
sed -i.bak -r "s#[[:space:]]*CONFIG_${key}=.*#CONFIG_${key}\?=${CONFIG[$key]}#g" $rootdir/mk/config.mk
done
# On FreeBSD sed -i 'SUFFIX' - SUFFIX is mandatory. So no way but to delete the backed file.
rm -f $rootdir/mk/config.mk.bak

View File

@ -28,6 +28,13 @@ The following APIs have been deprecated and will be removed in SPDK 21.07:
- `poll_group_free_stat` (transport op in `nvmf_transport.h`).
Please use `spdk_nvmf_poll_group_dump_stat` and `poll_group_dump_stat` instead.
## rpc
Parameter `enable-zerocopy-send` of RPC `sock_impl_set_options` is deprecated and will be removed in SPDK 21.07,
use `enable-zerocopy-send-server` or `enable-zerocopy-send-client` instead.
Parameter `disable-zerocopy-send` of RPC `sock_impl_set_options` is deprecated and will be removed in SPDK 21.07,
use `disable-zerocopy-send-server` or `disable-zerocopy-send-client` instead.
## rpm
`pkg/spdk.spec` is considered to be deprecated and scheduled for removal in SPDK 21.07.

View File

@ -8591,7 +8591,11 @@ Example response:
"recv_buf_size": 2097152,
"send_buf_size": 2097152,
"enable_recv_pipe": true,
"enable_zerocopy_send": true
"enable_zerocopy_send": true,
"enable_quickack": true,
"enable_placement_id": 0,
"enable_zerocopy_send_server": true,
"enable_zerocopy_send_client": false
}
}
~~~
@ -8602,15 +8606,17 @@ Set parameters for the socket layer implementation.
### Parameters
Name | Optional | Type | Description
----------------------- | -------- | ----------- | -----------
impl_name | Required | string | Name of socket implementation, e.g. posix
recv_buf_size | Optional | number | Size of socket receive buffer in bytes
send_buf_size | Optional | number | Size of socket send buffer in bytes
enable_recv_pipe | Optional | boolean | Enable or disable receive pipe
enable_zerocopy_send | Optional | boolean | Enable or disable zero copy on send
enable_quick_ack | Optional | boolean | Enable or disable quick ACK
enable_placement_id | Optional | number | Enable or disable placement_id. 0:disable,1:incoming_napi,2:incoming_cpu
Name | Optional | Type | Description
--------------------------- | -------- | ----------- | -----------
impl_name | Required | string | Name of socket implementation, e.g. posix
recv_buf_size | Optional | number | Size of socket receive buffer in bytes
send_buf_size | Optional | number | Size of socket send buffer in bytes
enable_recv_pipe | Optional | boolean | Enable or disable receive pipe
enable_zerocopy_send | Optional | boolean | Deprecated. Enable or disable zero copy on send for client and server sockets
enable_quick_ack | Optional | boolean | Enable or disable quick ACK
enable_placement_id | Optional | number | Enable or disable placement_id. 0:disable,1:incoming_napi,2:incoming_cpu
enable_zerocopy_send_server | Optional | boolean | Enable or disable zero copy on send for server sockets
enable_zerocopy_send_client | Optional | boolean | Enable or disable zero copy on send for client sockets
### Response
@ -8632,7 +8638,9 @@ Example request:
"enable_recv_pipe": false,
"enable_zerocopy_send": true,
"enable_quick_ack": false,
"enable_placement_id": 0
"enable_placement_id": 0,
"enable_zerocopy_send_server": true,
"enable_zerocopy_send_client": false
}
}
~~~

dpdk

@ -1 +1 @@
Subproject commit eb16786836e3a8380bb86fde67efa2ee0d9d3852
Subproject commit 4f93dbc0c0ab3804abaa20123030ad7fccf78709

View File

@ -137,7 +137,7 @@ ifeq ($(MAKE_PID),)
MAKE_PID := $(shell echo $$PPID)
endif
MAKE_NUMJOBS := $(shell ps T | sed -nE 's/\s*$(MAKE_PID)\s.* (-j|--jobs=)( *[0-9]+).*/\1\2/p')
MAKE_NUMJOBS := $(shell ps T | sed -nE 's/[[:space:]]*$(MAKE_PID)[[:space:]].* (-j|--jobs=)( *[0-9]+).*/\1\2/p')
all: $(SPDK_ROOT_DIR)/dpdk/build-tmp
$(Q)# DPDK doesn't handle nested make calls, so unset MAKEFLAGS

View File

@ -335,6 +335,7 @@ perf_set_sock_zcopy(const char *impl_name, bool enable)
}
sock_opts.enable_zerocopy_send = enable;
sock_opts.enable_zerocopy_send_client = enable;
if (spdk_sock_impl_set_opts(impl_name, &sock_opts, opts_size)) {
fprintf(stderr, "Failed to %s zcopy send for sock impl %s: error %d (%s)\n",

View File

@ -111,6 +111,7 @@ struct spdk_sock_impl_opts {
bool enable_recv_pipe;
/**
* **Deprecated, please use enable_zerocopy_send_server or enable_zerocopy_send_client instead**
* Enable or disable use of zero copy flow on send. Used by posix socket module.
*/
bool enable_zerocopy_send;
@ -126,6 +127,15 @@ struct spdk_sock_impl_opts {
*/
uint32_t enable_placement_id;
/**
* Enable or disable use of zero copy flow on send for server sockets. Used by posix socket module.
*/
bool enable_zerocopy_send_server;
/**
* Enable or disable use of zero copy flow on send for client sockets. Used by posix socket module.
*/
bool enable_zerocopy_send_client;
};
/**

View File

@ -54,7 +54,7 @@
* Patch level is incremented on maintenance branch releases and reset to 0 for each
* new major.minor release.
*/
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_PATCH 1
/**
* Version string suffix.

View File

@ -1642,6 +1642,7 @@ blob_persist_complete(spdk_bs_sequence_t *seq, struct spdk_blob_persist_ctx *ctx
free(ctx);
if (next_persist != NULL) {
blob->state = SPDK_BLOB_STATE_DIRTY;
blob_persist_check_dirty(next_persist);
}
}

View File

@ -1219,7 +1219,6 @@ nvme_pcie_qpair_build_contig_hw_sgl_request(struct spdk_nvme_qpair *qpair, struc
length = req->payload_size;
virt_addr = req->payload.contig_or_cb_arg + req->payload_offset;
mapping_length = length;
while (length > 0) {
if (nseg >= NVME_MAX_SGL_DESCRIPTORS) {
@ -1233,6 +1232,7 @@ nvme_pcie_qpair_build_contig_hw_sgl_request(struct spdk_nvme_qpair *qpair, struc
return -EFAULT;
}
mapping_length = length;
phys_addr = spdk_vtophys(virt_addr, &mapping_length);
if (phys_addr == SPDK_VTOPHYS_ERROR) {
nvme_pcie_fail_request_bad_vtophys(qpair, tr);

View File

@ -89,7 +89,6 @@ static const struct nvme_quirk nvme_quirks[] = {
NVME_QUIRK_DELAY_AFTER_QUEUE_ALLOC
},
{ {SPDK_PCI_CLASS_NVME, SPDK_PCI_VID_INTEL, 0x5845, SPDK_PCI_ANY_ID, SPDK_PCI_ANY_ID},
NVME_QUIRK_IDENTIFY_CNS |
NVME_INTEL_QUIRK_NO_LOG_PAGES |
NVME_QUIRK_MAXIMUM_PCI_ACCESS_WIDTH
},

View File

@ -791,6 +791,8 @@ spdk_sock_write_config_json(struct spdk_json_write_ctx *w)
spdk_json_write_named_bool(w, "enable_zerocopy_send", opts.enable_zerocopy_send);
spdk_json_write_named_bool(w, "enable_quickack", opts.enable_quickack);
spdk_json_write_named_uint32(w, "enable_placement_id", opts.enable_placement_id);
spdk_json_write_named_bool(w, "enable_zerocopy_send_server", opts.enable_zerocopy_send_server);
spdk_json_write_named_bool(w, "enable_zerocopy_send_client", opts.enable_zerocopy_send_client);
spdk_json_write_object_end(w);
spdk_json_write_object_end(w);
} else {

View File

@ -1,7 +1,7 @@
/*-
* BSD LICENSE
*
* Copyright (c) 2020 Mellanox Technologies LTD. All rights reserved.
* Copyright (c) 2020, 2021 Mellanox Technologies LTD. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@ -77,6 +77,8 @@ rpc_sock_impl_get_options(struct spdk_jsonrpc_request *request,
spdk_json_write_named_bool(w, "enable_zerocopy_send", sock_opts.enable_zerocopy_send);
spdk_json_write_named_bool(w, "enable_quickack", sock_opts.enable_quickack);
spdk_json_write_named_uint32(w, "enable_placement_id", sock_opts.enable_placement_id);
spdk_json_write_named_bool(w, "enable_zerocopy_send_server", sock_opts.enable_zerocopy_send_server);
spdk_json_write_named_bool(w, "enable_zerocopy_send_client", sock_opts.enable_zerocopy_send_client);
spdk_json_write_object_end(w);
spdk_jsonrpc_end_result(request, w);
free(impl_name);
@ -118,7 +120,14 @@ static const struct spdk_json_object_decoder rpc_sock_impl_set_opts_decoders[] =
"enable_placement_id", offsetof(struct spdk_rpc_sock_impl_set_opts, sock_opts.enable_placement_id),
spdk_json_decode_uint32, true
},
{
"enable_zerocopy_send_server", offsetof(struct spdk_rpc_sock_impl_set_opts, sock_opts.enable_zerocopy_send_server),
spdk_json_decode_bool, true
},
{
"enable_zerocopy_send_client", offsetof(struct spdk_rpc_sock_impl_set_opts, sock_opts.enable_zerocopy_send_client),
spdk_json_decode_bool, true
}
};
static void

View File

@ -92,6 +92,8 @@ static struct spdk_sock_impl_opts g_spdk_posix_sock_impl_opts = {
.enable_zerocopy_send = true,
.enable_quickack = false,
.enable_placement_id = PLACEMENT_NONE,
.enable_zerocopy_send_server = true,
.enable_zerocopy_send_client = false
};
static struct spdk_sock_map g_map = {
@ -348,7 +350,7 @@ posix_sock_alloc(int fd, bool enable_zero_copy)
#if defined(SPDK_ZEROCOPY)
flag = 1;
if (enable_zero_copy && g_spdk_posix_sock_impl_opts.enable_zerocopy_send) {
if (enable_zero_copy) {
/* Try to turn on zero copy sends */
rc = setsockopt(sock->fd, SOL_SOCKET, SO_ZEROCOPY, &flag, sizeof(flag));
if (rc == 0) {
@ -441,7 +443,8 @@ posix_sock_create(const char *ip, int port,
int fd, flag;
int val = 1;
int rc, sz;
bool enable_zero_copy = true;
bool enable_zcopy_user_opts = true;
bool enable_zcopy_impl_opts = true;
assert(opts != NULL);
@ -555,6 +558,8 @@ retry:
fd = -1;
break;
}
enable_zcopy_impl_opts = g_spdk_posix_sock_impl_opts.enable_zerocopy_send_server &&
g_spdk_posix_sock_impl_opts.enable_zerocopy_send;
} else if (type == SPDK_SOCK_CREATE_CONNECT) {
rc = connect(fd, res->ai_addr, res->ai_addrlen);
if (rc != 0) {
@ -564,6 +569,8 @@ retry:
fd = -1;
continue;
}
enable_zcopy_impl_opts = g_spdk_posix_sock_impl_opts.enable_zerocopy_send_client &&
g_spdk_posix_sock_impl_opts.enable_zerocopy_send;
}
flag = fcntl(fd, F_GETFL);
@ -582,9 +589,9 @@ retry:
}
/* Only enable zero copy for non-loopback sockets. */
enable_zero_copy = opts->zcopy && !sock_is_loopback(fd);
enable_zcopy_user_opts = opts->zcopy && !sock_is_loopback(fd);
sock = posix_sock_alloc(fd, enable_zero_copy);
sock = posix_sock_alloc(fd, enable_zcopy_user_opts && enable_zcopy_impl_opts);
if (sock == NULL) {
SPDK_ERRLOG("sock allocation failed\n");
close(fd);
@ -1524,6 +1531,8 @@ posix_sock_impl_get_opts(struct spdk_sock_impl_opts *opts, size_t *len)
GET_FIELD(enable_zerocopy_send);
GET_FIELD(enable_quickack);
GET_FIELD(enable_placement_id);
GET_FIELD(enable_zerocopy_send_server);
GET_FIELD(enable_zerocopy_send_client);
#undef GET_FIELD
#undef FIELD_OK
@ -1554,6 +1563,8 @@ posix_sock_impl_set_opts(const struct spdk_sock_impl_opts *opts, size_t len)
SET_FIELD(enable_zerocopy_send);
SET_FIELD(enable_quickack);
SET_FIELD(enable_placement_id);
SET_FIELD(enable_zerocopy_send_server);
SET_FIELD(enable_zerocopy_send_client);
#undef SET_FIELD
#undef FIELD_OK

View File

@ -2,12 +2,12 @@
%bcond_with doc
Name: spdk
Version: master
Version: 21.04.x
Release: 0%{?dist}
Epoch: 0
URL: http://spdk.io
Source: https://github.com/spdk/spdk/archive/master.tar.gz
Source: https://github.com/spdk/spdk/archive/v21.04.x.tar.gz
Summary: Set of libraries and utilities for high performance user-mode storage
%define package_version %{epoch}:%{version}-%{release}

View File

@ -2606,7 +2606,9 @@ Format: 'user:u1 secret:s1 muser:mu1 msecret:ms1,user:u2 secret:s2 muser:mu2 mse
enable_recv_pipe=args.enable_recv_pipe,
enable_zerocopy_send=args.enable_zerocopy_send,
enable_quickack=args.enable_quickack,
enable_placement_id=args.enable_placement_id)
enable_placement_id=args.enable_placement_id,
enable_zerocopy_send_server=args.enable_zerocopy_send_server,
enable_zerocopy_send_client=args.enable_zerocopy_send_client)
p = subparsers.add_parser('sock_impl_set_options', help="""Set options of socket layer implementation""")
p.add_argument('-i', '--impl', help='Socket implementation name, e.g. posix', required=True)
@ -2617,16 +2619,27 @@ Format: 'user:u1 secret:s1 muser:mu1 msecret:ms1,user:u2 secret:s2 muser:mu2 mse
action='store_true', dest='enable_recv_pipe')
p.add_argument('--disable-recv-pipe', help='Disable receive pipe',
action='store_false', dest='enable_recv_pipe')
p.add_argument('--enable-zerocopy-send', help='Enable zerocopy on send',
p.add_argument('--enable-zerocopy-send', help="""Enable zerocopy on send
(Deprecated, use enable-zerocopy-send-server or enable-zerocopy-send-client)""",
action='store_true', dest='enable_zerocopy_send')
p.add_argument('--disable-zerocopy-send', help='Disable zerocopy on send',
p.add_argument('--disable-zerocopy-send', help="""Enable zerocopy on send
(Deprecated, use disable-zerocopy-send-server or disable-zerocopy-send-client)""",
action='store_false', dest='enable_zerocopy_send')
p.add_argument('--enable-quickack', help='Enable quick ACK',
action='store_true', dest='enable_quickack')
p.add_argument('--disable-quickack', help='Disable quick ACK',
action='store_false', dest='enable_quickack')
p.add_argument('--enable-zerocopy-send-server', help='Enable zerocopy on send for server sockets',
action='store_true', dest='enable_zerocopy_send_server')
p.add_argument('--disable-zerocopy-send-server', help='Disable zerocopy on send for server sockets',
action='store_false', dest='enable_zerocopy_send_server')
p.add_argument('--enable-zerocopy-send-client', help='Enable zerocopy on send for client sockets',
action='store_true', dest='enable_zerocopy_send_client')
p.add_argument('--disable-zerocopy-send-client', help='Disable zerocopy on send for client sockets',
action='store_false', dest='enable_zerocopy_send_client')
p.set_defaults(func=sock_impl_set_options, enable_recv_pipe=None, enable_zerocopy_send=None,
enable_quickack=None, enable_placement_id=None)
enable_quickack=None, enable_placement_id=None, enable_zerocopy_send_server=None,
enable_zerocopy_send_client=None)
def sock_set_default_impl(args):
print_json(rpc.sock.sock_set_default_impl(args.client,

View File

@ -14,6 +14,22 @@ def print_json(s):
print(json.dumps(s, indent=2).strip('"'))
def get_addr_type(addr):
try:
socket.inet_pton(socket.AF_INET, addr)
return socket.AF_INET
except Exception as e:
pass
try:
socket.inet_pton(socket.AF_INET6, addr)
return socket.AF_INET6
except Exception as e:
pass
if os.path.exists(addr):
return socket.AF_UNIX
return None
class JSONRPCException(Exception):
def __init__(self, message):
self.message = message
@ -54,23 +70,24 @@ class JSONRPCClient(object):
def _connect(self, addr, port):
try:
if os.path.exists(addr):
addr_type = get_addr_type(addr)
if addr_type == socket.AF_UNIX:
self._logger.debug("Trying to connect to UNIX socket: %s", addr)
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
self.sock.connect(addr)
elif port:
if ':' in addr:
self._logger.debug("Trying to connect to IPv6 address addr:%s, port:%i", addr, port)
for res in socket.getaddrinfo(addr, port, socket.AF_INET6, socket.SOCK_STREAM, socket.SOL_TCP):
af, socktype, proto, canonname, sa = res
self.sock = socket.socket(af, socktype, proto)
self.sock.connect(sa)
else:
self._logger.debug("Trying to connect to IPv4 address addr:%s, port:%i'", addr, port)
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.connect((addr, port))
elif addr_type == socket.AF_INET6:
self._logger.debug("Trying to connect to IPv6 address addr:%s, port:%i", addr, port)
for res in socket.getaddrinfo(addr, port, socket.AF_INET6, socket.SOCK_STREAM, socket.SOL_TCP):
af, socktype, proto, canonname, sa = res
self.sock = socket.socket(af, socktype, proto)
self.sock.connect(sa)
elif addr_type == socket.AF_INET:
self._logger.debug("Trying to connect to IPv4 address addr:%s, port:%i'", addr, port)
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.connect((addr, port))
else:
raise socket.error("Unix socket '%s' does not exist" % addr)
raise socket.error("Invalid or non-existing address: '%s'" % addr)
except socket.error as ex:
raise JSONRPCException("Error while connecting to %s\n"
"Is SPDK application running?\n"

View File

@ -18,7 +18,9 @@ def sock_impl_set_options(client,
enable_recv_pipe=None,
enable_zerocopy_send=None,
enable_quickack=None,
enable_placement_id=None):
enable_placement_id=None,
enable_zerocopy_send_server=None,
enable_zerocopy_send_client=None):
"""Set parameters for the socket layer implementation.
Args:
@ -26,9 +28,11 @@ def sock_impl_set_options(client,
recv_buf_size: size of socket receive buffer in bytes (optional)
send_buf_size: size of socket send buffer in bytes (optional)
enable_recv_pipe: enable or disable receive pipe (optional)
enable_zerocopy_send: enable or disable zerocopy on send (optional)
enable_zerocopy_send: (Deprecated) enable or disable zerocopy on send (optional)
enable_quickack: enable or disable quickack (optional)
enable_placement_id: option for placement_id. 0:disable,1:incoming_napi,2:incoming_cpu (optional)
enable_zerocopy_send_server: enable or disable zerocopy on send for server sockets(optional)
enable_zerocopy_send_client: enable or disable zerocopy on send for client sockets(optional)
"""
params = {}
@ -40,11 +44,16 @@ def sock_impl_set_options(client,
if enable_recv_pipe is not None:
params['enable_recv_pipe'] = enable_recv_pipe
if enable_zerocopy_send is not None:
print("WARNING: enable_zerocopy_send is deprecated, please use enable_zerocopy_send_server or enable_zerocopy_send_client.")
params['enable_zerocopy_send'] = enable_zerocopy_send
if enable_quickack is not None:
params['enable_quickack'] = enable_quickack
if enable_placement_id is not None:
params['enable_placement_id'] = enable_placement_id
if enable_zerocopy_send_server is not None:
params['enable_zerocopy_send_server'] = enable_zerocopy_send_server
if enable_zerocopy_send_client is not None:
params['enable_zerocopy_send_client'] = enable_zerocopy_send_client
return client.call('sock_impl_set_options', params)

View File

@ -0,0 +1,98 @@
testdir=$(readlink -f $(dirname $0))
rootdir=$(readlink -f $testdir/../..)
source $rootdir/test/common/autotest_common.sh
rpc_py="$rootdir/scripts/rpc.py"
r0_mask=0x1
r1_mask=0x2
r2_mask=0x4
cpu_server_mask=0x07
rpc_server_addr="/var/tmp/spdk.sock"
function cleanup() {
rm -f "$SPDK_TEST_STORAGE/aiofile"
}
function start_intr_tgt() {
local rpc_addr="${1:-$rpc_server_addr}"
local cpu_mask="${2:-$cpu_server_mask}"
"$SPDK_EXAMPLE_DIR/interrupt_tgt" -m $cpu_mask -r $rpc_addr -E -g &
intr_tgt_pid=$!
trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$intr_tgt_pid" $rpc_addr
}
function reactor_is_busy_or_idle() {
local pid=$1
local idx=$2
local state=$3
if [[ $state != "busy" ]] && [[ $state != "idle" ]]; then
return 1
fi
if ! hash top; then
# Fail this test if top is missing from system.
return 1
fi
for ((j = 10; j != 0; j--)); do
top_reactor=$(top -bHn 1 -p $pid -w 256 | grep reactor_$idx)
cpu_rate=$(echo $top_reactor | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}
if [[ $state = "busy" ]] && [[ $cpu_rate -lt 70 ]]; then
sleep 1
elif [[ $state = "idle" ]] && [[ $cpu_rate -gt 30 ]]; then
sleep 1
else
return 0
fi
done
if [[ $state = "busy" ]]; then
echo "cpu rate ${cpu_rate} of reactor $i probably is not busy polling"
else
echo "cpu rate ${cpu_rate} of reactor $i probably is not idle interrupt"
fi
return 1
}
function reactor_is_busy() {
reactor_is_busy_or_idle $1 $2 "busy"
}
function reactor_is_idle() {
reactor_is_busy_or_idle $1 $2 "idle"
}
function reactor_get_thread_ids() {
local reactor_cpumask=$1
local grep_str
reactor_cpumask=$((reactor_cpumask))
jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
# shellcheck disable=SC2005
echo "$($rpc_py thread_get_stats | jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str")"
}
function setup_bdev_mem() {
"$rpc_py" <<- RPC
bdev_malloc_create -b Malloc0 32 512
bdev_malloc_create -b Malloc1 32 512
bdev_malloc_create -b Malloc2 32 512
RPC
}
function setup_bdev_aio() {
if [[ $(uname -s) != "FreeBSD" ]]; then
dd if=/dev/zero of="$SPDK_TEST_STORAGE/aiofile" bs=2048 count=5000
"$rpc_py" bdev_aio_create "$SPDK_TEST_STORAGE/aiofile" AIO0 2048
fi
}

View File

@ -0,0 +1,102 @@
#!/usr/bin/env bash
testdir=$(readlink -f $(dirname $0))
rootdir=$(readlink -f $testdir/../..)
source $rootdir/test/common/autotest_common.sh
source $testdir/interrupt_common.sh
export PYTHONPATH=$rootdir/examples/interrupt_tgt
function reactor_set_intr_mode() {
local spdk_pid=$1
local without_thd=$2
thd0_ids=($(reactor_get_thread_ids $r0_mask))
thd2_ids=($(reactor_get_thread_ids $r2_mask))
# Number of thd0_ids shouldn't be zero
if [[ ${#thd0_ids[*]} -eq 0 ]]; then
echo "spdk_thread is expected in reactor 0."
return 1
else
echo "spdk_thread ids are ${thd0_ids[*]} on reactor0."
fi
# CPU utilization of reactor 0~2 should be idle
for i in {0..2}; do
reactor_is_idle $spdk_pid $i
done
if [ "$without_thd"x = x ]; then
# Schedule all spdk_threads to reactor 1
for i in ${thd0_ids[*]}; do
$rpc_py thread_set_cpumask -i $i -m $r1_mask
done
for i in ${thd2_ids[*]}; do
$rpc_py thread_set_cpumask -i $i -m $r1_mask
done
fi
# Set reactor 0 and 2 to be poll mode
$rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
$rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
# CPU utilization of reactor 0 and 2 should be busy
for i in 0 2; do
reactor_is_busy $spdk_pid $i
done
# Set reactor 2 back to intr mode
$rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 2
if [ "$without_thd"x = x ]; then
# Schedule spdk_threads in thd2_ids back to reactor 2
for i in ${thd2_ids[*]}; do
$rpc_py thread_set_cpumask -i $i -m $r2_mask
done
fi
# CPU utilization of reactor 2 should be idle
reactor_is_idle $spdk_pid 2
# Set reactor 0 back to intr mode
$rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 0
if [ "$without_thd"x = x ]; then
# Schedule spdk_threads in thd2_ids back to reactor 0
for i in ${thd0_ids[*]}; do
$rpc_py thread_set_cpumask -i $i -m $r0_mask
done
fi
# CPU utilization of reactor 0 should be idle
reactor_is_idle $spdk_pid 0
return 0
}
function reactor_set_mode_without_threads() {
reactor_set_intr_mode $1 "without_thd"
return 0
}
function reactor_set_mode_with_threads() {
reactor_set_intr_mode $1
return 0
}
# Set reactors with intr_tgt without spdk_thread
start_intr_tgt
setup_bdev_mem
setup_bdev_aio
reactor_set_mode_without_threads $intr_tgt_pid
trap - SIGINT SIGTERM EXIT
killprocess $intr_tgt_pid
cleanup
# Set reactors with intr_tgt with spdk_thread
start_intr_tgt
setup_bdev_mem
setup_bdev_aio
reactor_set_mode_with_threads $intr_tgt_pid
trap - SIGINT SIGTERM EXIT
killprocess $intr_tgt_pid
cleanup

View File

@ -899,6 +899,7 @@ test_spdk_nvme_ctrlr_cmd_abort(void)
ctrlr.adminq = &admin_qpair;
admin_qpair.id = 0;
MOCK_SET(nvme_ctrlr_submit_admin_request, 0);
CU_ASSERT(pthread_mutex_init(&ctrlr.ctrlr_lock, NULL) == 0);
rc = spdk_nvme_ctrlr_cmd_abort(&ctrlr, qpair, 2, (void *)0xDEADBEEF, (void *)0xDCADBEEF);
CU_ASSERT(rc == 0);
@ -914,6 +915,7 @@ test_spdk_nvme_ctrlr_cmd_abort(void)
rc = spdk_nvme_ctrlr_cmd_abort(&ctrlr, qpair, 2, (void *)0xDEADBEEF, (void *)0xDCADBEEF);
CU_ASSERT(rc == -ENOMEM);
MOCK_CLEAR(nvme_ctrlr_submit_admin_request);
CU_ASSERT(pthread_mutex_destroy(&ctrlr.ctrlr_lock) == 0);
}
int main(int argc, char **argv)