example/nvmf: Enlarge critical section guarded by g_mutex in nvmf_schedule_spdk_thread()

nvmf_schedule_spdk_thread() is not performance critical, but it should
work as designed. Guard the whole operation that decides the target
core with g_mutex. This matches the reactor's _reactor_schedule_thread().

If _reactor_schedule_thread() is refined, update this function accordingly.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I9382230577442897d4d2d22b85b1ae4edd77aa98
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2536
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Authored by Shuhei Matsumoto, 2020-05-20 06:30:54 +09:00; committed by Tomasz Zawadzki
parent aaca23fb4b
commit 9bea56c004

@@ -237,14 +237,13 @@ nvmf_schedule_spdk_thread(struct spdk_thread *thread)
 	 * Here we use the mutex. The way the actual SPDK event framework
 	 * solves this is by using internal rings for messages between reactors
 	 */
+	pthread_mutex_lock(&g_mutex);
 	for (i = 0; i < spdk_env_get_core_count(); i++) {
-		pthread_mutex_lock(&g_mutex);
 		if (g_next_reactor == NULL) {
 			g_next_reactor = TAILQ_FIRST(&g_reactors);
 		}
 		nvmf_reactor = g_next_reactor;
 		g_next_reactor = TAILQ_NEXT(g_next_reactor, link);
-		pthread_mutex_unlock(&g_mutex);
 
 		/* each spdk_thread has the core affinity */
 		if (spdk_cpuset_get_cpu(cpumask, nvmf_reactor->core)) {
@@ -254,6 +253,7 @@ nvmf_schedule_spdk_thread(struct spdk_thread *thread)
 			break;
 		}
 	}
+	pthread_mutex_unlock(&g_mutex);
 
 	if (i == spdk_env_get_core_count()) {
 		fprintf(stderr, "failed to schedule spdk thread\n");