[PEAK] Re: trellis.Set.discard

Sergey Schetinin maluke at gmail.com
Tue Oct 21 09:52:59 EDT 2008

> Hi Sergey; I've been out of town a few days, and won't likely have time for
> in-depth analysis this week either.  But I can tell you that restricting
> futures to single-rule access is a no-go; the whole point of futures was to
> allow multiple rules to touch the same data structure, allowing
> non-deterministic merge at the inter-rule level.
> In other words, data structures do allow for "order" to be between rules,
> but it is a serializable history.

I understand, but can that intention be fulfilled without breaking
some fundamental requirements? The current solution with saved
savepoints can't work, because even if it's changed to roll back to
the beginning of the rule where the future was first accessed, it
would still be incompatible with tasks / top-level modifiers. So there
needs to be a way to do a partial rollback, which requires making a
copy each time the future is accessed by a different rule.

And if we do that, it's not a long way from what I propose. Also,
making the merge deterministic would let us add a number of useful
properties. For example, currently if one rule adds an item to the Set
and then some other rule discards it, the outcome depends on the order
they run in. But if we merge the changes ourselves, we can make sure
such conflicting changes are detected (no matter what order the rules
run in), while adding and then removing an item within the same rule
wouldn't be considered a conflict.
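
To illustrate, here's a minimal sketch of such a deterministic merge
(all names here are hypothetical, not Trellis API): each rule's change
log is first collapsed to its net effect, so add-then-discard inside
one rule cancels out, and the per-rule nets are then merged with
conflict detection that doesn't depend on the order the rules ran in.

```python
def net_changes(ops):
    """Collapse one rule's (op, item) log to its net added/removed sets,
    so adding and then discarding an item within the rule is no conflict."""
    added, removed = set(), set()
    for op, item in ops:
        if op == 'add':
            removed.discard(item)
            added.add(item)
        else:  # 'discard'
            added.discard(item)
            removed.add(item)
    return added, removed

def merge(rule_logs):
    """Merge per-rule change logs deterministically; raise if one rule's
    net addition collides with another rule's net removal."""
    total_added, total_removed = set(), set()
    for ops in rule_logs:
        added, removed = net_changes(ops)
        conflicts = (added & total_removed) | (removed & total_added)
        if conflicts:
            raise ValueError("conflicting changes: %r" % sorted(conflicts))
        total_added |= added
        total_removed |= removed
    return total_added, total_removed
```

With this, `merge([[('add', 1)], [('discard', 1)]])` raises regardless
of which order the two rule logs appear in, while a single log
`[('add', 1), ('discard', 1)]` merges cleanly.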

Actually, it seems to me that rules should see the data structures as
they were when the transaction began, so if one rule adds a new item
to the set, no other rule should be able to remove it before the @todo
cell gets the value (Set.added in this example).
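
A minimal sketch of that snapshot behavior (SnapshotSet is an
illustrative made-up class, not Trellis code): reads see only the
state from the start of the transaction, writes accumulate on the
side, and discarding an item that exists only as a pending addition
has no effect until the addition has been committed.

```python
class SnapshotSet(object):
    """Rules read the set as it was at transaction start; their
    writes accumulate separately until commit."""
    def __init__(self, items=()):
        self._snapshot = set(items)
        self.added = set()
        self.removed = set()

    def __contains__(self, item):
        return item in self._snapshot   # reads see the snapshot only

    def add(self, item):
        self.added.add(item)

    def discard(self, item):
        if item in self._snapshot:      # can't remove a pending addition
            self.removed.add(item)

    def commit(self):
        self._snapshot = (self._snapshot - self.removed) | self.added
        self.added, self.removed = set(), set()
```

Here a rule that discards an item another rule just added simply has
no effect until the addition has been committed and observed.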

Anyway, ISTM that futures, the way they are, don't fit the Trellis model very well.

> The ability to have a side-transaction is a good idea, though.  (Really a
> "back transaction", since it is sort of happening in the past.)  I haven't
> thought a lot about the isolation parameters, though.
> Prototyping certainly would be helpful.

I don't think I will have anything interesting to show before next
week, but so far I'm using this monkeypatch:

mknone = lambda: None
state_factory = dict(active=bool, in_cleanup=bool, undoing=bool,
                     undo=list, at_commit=list, managers=dict,
                     current_listener=mknone, destinations=mknone, routes=mknone,
                     readonly=bool, reads=dict, writes=dict, has_run=dict,
                     layers=list, to_retry=dict, queues=dict)

def get_state(ctrl):
    return dict((key, getattr(ctrl, key)) for key in state_factory)

def reset_state(ctrl):
    for key, factory in state_factory.items():
        value = factory()
        setattr(ctrl, key, value)

def restore_state(ctrl, state):
    for key, value in state.items():
        setattr(ctrl, key, value)

def side_txn(f, *args, **kw):
    assert not ctrl.in_cleanup
    state = get_state(ctrl)
    reset_state(ctrl)  # start the side transaction from a clean controller state
    try:
        return ctrl.atomically(f, *args, **kw)
    finally:
        restore_state(ctrl, state)

It works well for me so far, but one crucial thing it's missing is a
check that nothing in the main transaction is changed by the side
transaction, so I only use it when I'm absolutely sure the call is
safe. I also created a new component base class, though I've used it
only a little so far, to keep the use of these changes as explicit as
possible:

class IndependentComponent(Component):
    def __class_call__(cls, *args, **kw):
        super_call = super(IndependentComponent, cls).__class_call__
        return side_txn(super_call, *args, **kw)
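
As for the missing safety check mentioned above, a rough sketch might
look like this (FakeCtrl and checked_side_txn are hypothetical
stand-ins for illustration only; the real `ctrl` comes from the
Trellis internals, and deep-copying its state may not be this simple):

```python
import copy

class FakeCtrl(object):
    """Stand-in for the Trellis controller, for illustration only."""
    def __init__(self):
        self.writes = {}
        self.undo = []
    def atomically(self, f, *args, **kw):
        return f(*args, **kw)

def checked_side_txn(ctrl, f, *args, **kw):
    """Like side_txn, but also verify that the side transaction didn't
    mutate the main transaction's saved state (the check side_txn lacks)."""
    keys = ('writes', 'undo')
    snapshot = dict((k, copy.deepcopy(getattr(ctrl, k))) for k in keys)
    state = dict((k, getattr(ctrl, k)) for k in keys)
    try:
        return ctrl.atomically(f, *args, **kw)
    finally:
        for k, v in state.items():
            setattr(ctrl, k, v)           # restore the main-txn objects
        for k, copied in snapshot.items():
            # if the side txn mutated main-txn state, fail loudly
            assert getattr(ctrl, k) == copied, \
                'side txn changed main-txn %r' % k
```

A well-behaved side transaction leaves the saved state equal to the
snapshot; one that reaches into the main transaction's dicts trips the
assertion instead of silently corrupting it.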

I have a few more issues / requests, but I want to think about them a
little more.
