There is a call to write_lock() in gen_pool_destroy which is not balanced
by any corresponding write_unlock(). This causes problems with preemption:
the preemption-disable counter is incremented by the write_lock() call but
never decremented by a matching write_unlock(), so the calling task leaves
gen_pool_destroy with preemption permanently disabled. The bug is
difficult to observe in the field because only two in-tree drivers call
gen_pool_destroy, and one of them is non-x86 arch-specific code.
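For illustration, the function currently has roughly this shape (a
condensed sketch, not the verbatim source; see the hunk below for the
actual change):

	void gen_pool_destroy(struct gen_pool *pool)
	{
		struct list_head *_chunk, *_next_chunk;
		struct gen_pool_chunk *chunk;

		write_lock(&pool->lock);	/* preempt count goes up */
		list_for_each_safe(_chunk, _next_chunk, &pool->chunks) {
			chunk = list_entry(_chunk, struct gen_pool_chunk,
					   next_chunk);
			list_del(&chunk->next_chunk);
			kfree(chunk);
		}
		kfree(pool);
		/* returns without write_unlock(): the preempt count
		 * never comes back down */
	}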
To fix this, I have chosen to remove the write_lock() rather than add a
write_unlock(), because the lock in question is embedded in the very
structure that is being freed. Any other thread that waited to acquire
the lock while gen_pool_destroy was running would find itself holding a
lock in recently-freed or about-to-be-freed memory. Using a pool while
it is in the process of being destroyed is a bug that must be resolved
outside of the gen_pool_destroy function.
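To see why adding a write_unlock() instead would only trade one bug for
another, consider a second thread contending for the lock while the pool
is torn down (a schematic interleaving; the call site on the right is
hypothetical):

	/* thread A */				/* thread B */
	gen_pool_destroy(pool);
	  write_lock(&pool->lock);		read_lock(&pool->lock); /* spins */
	  /* frees every chunk */
	  write_unlock(&pool->lock);		/* lock acquired here ... */
	  kfree(pool);				/* ... inside memory that is
						   now being freed */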
Signed-off-by: Zygo Blaxell <zygo.blaxell@xandros.com>
---
 	int bit, end_bit;
-	write_lock(&pool->lock);
 	list_for_each_safe(_chunk, _next_chunk, &pool->chunks) {
 		chunk = list_entry(_chunk, struct gen_pool_chunk, next_chunk);
 		list_del(&chunk->next_chunk);