Actually, upcalls cannot be freed while destroying the thread because we
would have to call uma_zfree() with various spinlocks held.  Rearranging
the code would not help here because we cannot break atomicity with
respect to the process spinlock, so the only choice we have is to defer
the operation.  In order to do this, use a global queue synchronized
through the kse_lock spinlock, which is drained at any thread_alloc() /
thread_wait() through a call to thread_reap().

Note that this approach is not ideal, as we would want a per-process
list of zombie upcalls, but it follows the initial guidelines of the
KSE authors.

Tested by: jkim, pav
Approved by: jeff, julian
Approved by: re
Attilio Rao 2007-07-27 09:21:18 +00:00
parent 4eb78fa9a9
commit 34ed040030
3 changed files with 21 additions and 0 deletions
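
The hunks below only show the consumer side of the zombie queue. For
context, here is a minimal sketch of what the producer side would look
like under the scheme the message describes; the function name
upcall_zombify() and its call site are illustrative assumptions, not
part of this diff:

/*
 * Sketch only -- assumes the kern_kse.c environment (kse_lock,
 * struct kse_upcall, the ku_link linkage).  upcall_zombify() is a
 * hypothetical name for the enqueue step; this commit only touches
 * the drain side.
 */
static TAILQ_HEAD(, kse_upcall) zombie_upcalls =
	TAILQ_HEAD_INITIALIZER(zombie_upcalls);

static void
upcall_zombify(struct kse_upcall *ku)
{

	/*
	 * Runs in the thread-teardown path, where the process spinlock
	 * may be held and uma_zfree() is therefore off limits: just
	 * queue the upcall and let thread_reap() free it later.
	 */
	mtx_lock_spin(&kse_lock);
	TAILQ_INSERT_HEAD(&zombie_upcalls, ku, ku_link);
	mtx_unlock_spin(&kse_lock);
}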

@@ -81,6 +81,23 @@ upcall_alloc(void)
 	return (ku);
 }
 
+void
+upcall_reap(void)
+{
+	TAILQ_HEAD(, kse_upcall) zupcalls;
+	struct kse_upcall *ku_item, *ku_tmp;
+
+	TAILQ_INIT(&zupcalls);
+	mtx_lock_spin(&kse_lock);
+	if (!TAILQ_EMPTY(&zombie_upcalls)) {
+		TAILQ_CONCAT(&zupcalls, &zombie_upcalls, ku_link);
+		TAILQ_INIT(&zombie_upcalls);
+	}
+	mtx_unlock_spin(&kse_lock);
+	TAILQ_FOREACH_SAFE(ku_item, &zupcalls, ku_link, ku_tmp)
+		uma_zfree(upcall_zone, ku_item);
+}
+
 void
 upcall_remove(struct thread *td)
 {
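
Note the two-phase shape of upcall_reap(): the pending upcalls are
spliced onto the local zupcalls list with TAILQ_CONCAT() while kse_lock
is held (an O(1) operation), and uma_zfree() only runs after the spin
lock is dropped, which is exactly the constraint described in the
commit message.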

@@ -299,6 +299,9 @@ thread_reap(void)
 			td_first = td_next;
 		}
 	}
+#ifdef KSE
+	upcall_reap();
+#endif
 }
 
 /*
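
With this hook, reclamation is lazy: zombie upcalls accumulate until
the next thread_alloc() or thread_wait() triggers thread_reap(), which
runs in a context where no spin locks are held and can therefore call
uma_zfree() safely.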

@@ -871,6 +871,7 @@ void	cpu_set_fork_handler(struct thread *, void (*)(void *), void *);
 #ifdef KSE
 void	kse_unlink(struct thread *);
 void	kseinit(void);
+void	upcall_reap(void);
 void	upcall_remove(struct thread *td);
 #endif
 void	cpu_set_upcall(struct thread *td, struct thread *td0);