Reset the non-zero it_need indicator to zero atomically with fetching its
current value.  It is believed that this change is the real fix for the
issue that was papered over by r252683.

With the current code, if the interrupt handler sets it_need between the
read and the subsequent reset, the update could be lost and
ithread_execute_handlers() would not be called in response to it.
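
As an illustration only, a minimal sketch of the window, modelled with
C11 atomics rather than the kernel's atomic(9) primitives (the helper
names are invented):

#include <stdatomic.h>

static _Atomic int it_need;

/* Old pattern: a handler can set it_need between the load and the
 * store of zero below, and that update is then wiped out. */
static int
it_need_check_old(void)
{
	int need;

	need = atomic_load_explicit(&it_need, memory_order_acquire);
	/* window: an interrupt setting it_need here is lost */
	atomic_store_explicit(&it_need, 0, memory_order_release);
	return (need);
}

/* New pattern: reading and clearing it_need is one atomic swap, so a
 * concurrent update is either returned here or seen on the next pass. */
static int
it_need_check_new(void)
{
	return (atomic_exchange_explicit(&it_need, 0, memory_order_acq_rel));
}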

r252683 could have hidden the issue because, at the time of that commit,
atomic_load_acq_int() performed a locked cmpxchg on the variable, which
puts the cache line into the exclusively owned state and drains the
store buffers.  The immediately following store of zero then has a very
high chance of reusing the exclusive state of the cache line, making the
load-and-store sequence behave like an atomic swap.
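
For illustration, a hedged C11 sketch of an acquire load built on a
compare-and-swap, in the spirit of (but not identical to) the x86
atomic_load_acq_int() of that era:

#include <stdatomic.h>

static inline unsigned int
load_acq_via_cmpxchg(_Atomic unsigned int *p)
{
	unsigned int expected = 0;

	/* A cmpxchg with old == new == 0 never changes *p but is still a
	 * locked read-modify-write on x86, taking the cache line in the
	 * exclusively owned state; either way "expected" ends up holding
	 * the value that was in *p. */
	atomic_compare_exchange_strong_explicit(p, &expected, 0,
	    memory_order_seq_cst, memory_order_seq_cst);
	return (expected);
}

A store of zero issued right after such a load tends to complete while
the line is still owned, which is why the pair behaved almost like a
swap in practice.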

For now, add the acq+rel fence immediately after the swap so as not to
disturb the current (but excessive) ordering.  The acquire is needed for
the ih_need reads that follow the load, while the release serves no
useful purpose [*].
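
A hedged C11 sketch of the ordering the acquire half provides (names
invented; the kernel side uses atomic(9) and its own per-handler flag
handling):

#include <stdatomic.h>

static _Atomic int it_need;	/* per-ithread "work pending" flag */
static int ih_need;		/* per-handler flag, plain in this sketch */

/* Interrupt side: publish the handler flag, then the thread flag. */
static void
schedule_sketch(void)
{
	ih_need = 1;
	atomic_store_explicit(&it_need, 1, memory_order_release);
}

/* Ithread side: the acquire half of the fence keeps the ih_need read
 * from being performed before the swap, so a non-zero swap result
 * guarantees that the ih_need store above is visible.  The release
 * half adds nothing here, per [*]. */
static int
poll_sketch(void)
{
	if (atomic_exchange_explicit(&it_need, 0, memory_order_relaxed) != 0) {
		atomic_thread_fence(memory_order_acq_rel);
		return (ih_need);
	}
	return (0);
}

In plain C11 the exchange itself could carry acq_rel ordering; the
separate fence mirrors the choice above to leave the existing, stronger
than strictly needed, ordering undisturbed.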

Reviewed by:	alc
Noted by:	alc [*]
Discussed with:	bde
Tested by:	pho
Sponsored by:	The FreeBSD Foundation
MFC after:	2 weeks
Author:	Konstantin Belousov
Date:	2015-07-15 17:36:35 +00:00
Commit:	615b6ea2c8
Parent:	03bbcb2f0c
Notes (svn2git, 2020-12-20 02:59:44 +00:00):
	svn path=/head/; revision=285607


@@ -1327,14 +1327,13 @@ ithread_loop(void *arg)
 		 * we are running, it will set it_need to note that we
 		 * should make another pass.
 		 */
-		while (atomic_load_acq_int(&ithd->it_need) != 0) {
+		while (atomic_swap_int(&ithd->it_need, 0) != 0) {
 			/*
-			 * This might need a full read and write barrier
-			 * to make sure that this write posts before any
-			 * of the memory or device accesses in the
-			 * handlers.
+			 * This needs a release barrier to make sure
+			 * that this write posts before any of the
+			 * memory or device accesses in the handlers.
 			 */
-			atomic_store_rel_int(&ithd->it_need, 0);
+			atomic_thread_fence_acq_rel();
 			ithread_execute_handlers(p, ie);
 		}
 		WITNESS_WARN(WARN_PANIC, NULL, "suspending ithread");
@@ -1507,14 +1506,13 @@ ithread_loop(void *arg)
 		 * we are running, it will set it_need to note that we
 		 * should make another pass.
 		 */
-		while (atomic_load_acq_int(&ithd->it_need) != 0) {
+		while (atomic_swap_int(&ithd->it_need, 0) != 0) {
 			/*
-			 * This might need a full read and write barrier
-			 * to make sure that this write posts before any
-			 * of the memory or device accesses in the
-			 * handlers.
+			 * This needs a release barrier to make sure
+			 * that this write posts before any of the
+			 * memory or device accesses in the handlers.
 			 */
-			atomic_store_rel_int(&ithd->it_need, 0);
+			atomic_thread_fence_acq_rel();
 			if (priv)
 				priv_ithread_execute_handler(p, ih);
 			else