[PEAK] Re: Trellis-fork

P.J. Eby pje at telecommunity.com
Tue Jun 23 21:27:04 EDT 2009


At 12:25 PM 6/23/2009 +0300, Sergey Schetinin wrote:
>The problem with this is that if different cells have different locks
>as their managers, deadlocks are very likely. There's no way of
>telling which cells will be used in a transaction, or in what order,
>so two concurrent transactions are very likely to deadlock.
>
>If all cells have the same lock as their manager, there are no
>deadlocks, but then the whole ctrl.lock() business is pointless, as
>there's only one lock to acquire and release.
>
>Another possible case is when two or more sets of cells form
>disjoint networks. All cells in each set would share the same lock.
>That guarantees no deadlocks, but the only reason to use locks at
>all in that case is if there will be concurrent transactions on the
>same cell network.
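
To make the quoted scenario concrete, here is a minimal sketch of
two transactions taking per-cell locks in opposite order.  The Cell
class and transaction functions are hypothetical illustrations, not
Trellis code:

    import threading

    class Cell:
        def __init__(self, value):
            self.value = value
            self.lock = threading.Lock()  # per-cell lock, per the quote

    a, b = Cell(1), Cell(2)

    def txn_1():
        with a.lock:           # T1 holds a's lock...
            with b.lock:       # ...then waits for b's
                a.value, b.value = b.value, a.value

    def txn_2():
        with b.lock:           # T2 holds b's lock...
            with a.lock:       # ...then waits for a's: lock-order deadlock
                b.value, a.value = a.value, b.value

    # Run txn_1 and txn_2 in two threads and each can end up holding
    # one lock while waiting forever on the other.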

You've missed the idea behind the design, which was that the 
"manager" of the cells is a single thread-specific object, shared 
by all cells created in that thread.  Thus, cells created by a 
thread share a single lock, which sharply reduces the ways a 
deadlock can arise.
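
A minimal sketch of that arrangement, assuming hypothetical Manager,
Cell, and current_manager names (not the actual Trellis API):

    import threading

    _state = threading.local()

    class Manager:
        def __init__(self):
            # One lock shared by every cell this thread creates
            self.lock = threading.RLock()

    def current_manager():
        if not hasattr(_state, 'manager'):
            _state.manager = Manager()  # lazily created, once per thread
        return _state.manager

    class Cell:
        def __init__(self, value):
            self.manager = current_manager()  # same object for all cells
            self.value = value                # created in this thread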

In fact, as long as there is no bidirectional observation taking 
place between threads, deadlock should be impossible.  That is, if 
you have, say, worker threads that produce or consume data in 
queues owned by those workers, and other threads that put things in 
the queues or take them out, the workers never acquire locks on 
their clients, and thus cannot deadlock.
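
Here's a sketch of that one-directional pattern; Worker, put, and
run_once are hypothetical names, not Trellis code.  Clients only
ever take a worker's lock, and the worker takes only its own, so no
lock-acquisition cycle can form:

    import threading
    from collections import deque

    class Worker:
        def __init__(self):
            self.lock = threading.Lock()  # this worker's manager lock
            self.queue = deque()

        def put(self, item):      # called from client threads
            with self.lock:       # client -> worker, never the reverse
                self.queue.append(item)

        def run_once(self):       # called from the worker's own thread
            with self.lock:       # the worker takes only its own lock
                return self.queue.popleft() if self.queue else None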

The net effect is GIL-like, except that it allows essentially 
isolated threads to run freely, pausing only when inter-thread 
communication actually occurs.


>If it's not atomic, there's no guarantee of getting a valid value,
>which isn't easily resolved.

I don't think that's within the design scope, since untransacted 
reads are inherently volatile sampling.  Without a transaction, you 
can't guarantee that the value won't have changed by the next read, 
anyway!
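
As an illustration (hypothetical names, not the Trellis API), two
untransacted reads carry no consistency guarantee at all:

    import threading

    lock = threading.Lock()
    value = 0

    def read():        # untransacted: just a volatile sample
        return value   # may be stale the instant it returns

    def writer():
        global value
        with lock:
            value += 1

    v1 = read()
    v2 = read()  # if a writer ran in between, v1 != v2; only reads
                 # made while holding the lock see a consistent snapshot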


