scheduler_dynamic: prioritize lowest lcore id for active threads

Before this patch, _find_optimal_core() returned:
1) any core that could fit the thread
2) the least busy core, if the current core was over the limit
3) the current core, if no better candidate was found
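
For context, that order boils down to something like the sketch below.
The helper names and the trivial stubs are illustrative placeholders
only, not the actual SPDK helpers or their signatures:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative stubs standing in for the real scheduler helpers. */
    static bool can_core_fit_thread(uint32_t lcore) { return lcore % 2 == 0; }
    static bool is_core_over_limit(uint32_t lcore) { (void)lcore; return false; }

    /* Pre-patch selection order:
     * 1) the first core that can fit the thread,
     * 2) the least busy core, if the current core is over the limit,
     * 3) otherwise the current core. */
    static uint32_t
    find_optimal_core_before(uint32_t current_lcore, uint32_t core_count)
    {
        uint32_t least_busy_lcore = current_lcore;
        uint32_t target_lcore;

        for (target_lcore = 0; target_lcore < core_count; target_lcore++) {
            if (can_core_fit_thread(target_lcore)) {
                return target_lcore; /* any fitting core, even a higher id */
            }
            /* (least-busy tracking elided in this sketch) */
        }

        if (is_core_over_limit(current_lcore)) {
            return least_busy_lcore;
        }

        return current_lcore;
    }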

Combined with _get_next_target_core() round-robining the first core
to consider, this resulted in threads being unnecessarily spread
across the cores.
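
A minimal sketch of that round-robin is shown below; the static cursor
and the helper's signature are assumptions for illustration, not the
exact SPDK implementation:

    #include <stdint.h>

    /* Assumed behavior: a module-level cursor advances on every call, so
     * each search for a new home starts at a different core. */
    static uint32_t g_next_core;

    static uint32_t
    get_next_target_core(uint32_t core_count)
    {
        uint32_t target_lcore = g_next_core;

        g_next_core = (g_next_core + 1) % core_count;
        return target_lcore;
    }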

This patch moves a thread only to a core with a lower lcore id, or,
when its current core is over the limit, to any core that can fit it.
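
A tiny, hypothetical driver showing the new acceptance rule in
isolation (should_move() is an illustrative name, not a function
added by this patch):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors the new rule: accept a candidate only if it has a lower
     * lcore id, or if the current core is over the limit. */
    static bool
    should_move(uint32_t target_lcore, uint32_t current_lcore, bool core_over_limit)
    {
        return target_lcore < current_lcore || core_over_limit;
    }

    int
    main(void)
    {
        printf("%d\n", should_move(2, 5, false)); /* 1: consolidate on a lower id */
        printf("%d\n", should_move(7, 5, false)); /* 0: higher id, thread stays put */
        printf("%d\n", should_move(7, 5, true));  /* 1: current core is over the limit */
        return 0;
    }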

The next patch will remove the round-robin logic so that the search
always starts with the lowest lcore id.

Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I54e373d3ca02a5633607d22978305baa1142f8bd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8112
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tomasz Zawadzki 2021-06-10 09:33:57 -04:00 committed by Ben Walker
parent d2ed0f45e7
commit a5999f637a


@@ -193,6 +193,7 @@ _find_optimal_core(struct spdk_lw_thread *lw_thread)
 	uint32_t least_busy_lcore = lw_thread->lcore;
 	struct spdk_thread *thread = spdk_thread_get_from_ctx(lw_thread);
 	struct spdk_cpuset *cpumask = spdk_thread_get_cpumask(thread);
+	bool core_over_limit = _is_core_over_limit(current_lcore);
 
 	/* Find a core that can fit the thread. */
 	for (i = 0; i < spdk_env_get_core_count(); i++) {
@@ -213,12 +214,18 @@ _find_optimal_core(struct spdk_lw_thread *lw_thread)
 			continue;
 		}
 
-		return target_lcore;
+		if (target_lcore < current_lcore) {
+			/* Lower core id was found, move to consolidate threads on lowest core ids. */
+			return target_lcore;
+		} else if (core_over_limit) {
+			/* When core is over the limit, even higher core ids are better than current one. */
+			return target_lcore;
+		}
 	}
 
 	/* For cores over the limit, place the thread on least busy core
 	 * to balance threads. */
-	if (_is_core_over_limit(current_lcore)) {
+	if (core_over_limit) {
 		return least_busy_lcore;
 	}