Differences between version dated 2007-11-09 18:55:08 and 2008-05-23 12:49:02 (spanning 6 versions)

Event-Driven Programming The Easy Way, with ``peak.events.trellis``
===================================================================
 
(NOTE: As of 0.7a1, many new features have been added to the Trellis API,
and some old ones have been deprecated. If you are upgrading from an older
version, please see the `porting guide`_ for details.)
 
Whether it's an application server or a desktop application, any sufficiently
complex system is event-driven -- and that usually means callbacks.
 

    >>> from peak.events import trellis
 
    >>> class TempConverter(trellis.Component):
    ...     F = trellis.maintain(
    ...         lambda self: self.C * 1.8 + 32,
    ...         initially = 32
    ...     )
    ...     C = trellis.maintain(
    ...         lambda self: (self.F - 32)/1.8,
    ...         initially = 0
    ...     )
    ...     @trellis.perform
    ...     def show_values(self):
    ...         print "Celsius......", self.C
    ...         print "Fahrenheit...", self.F
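For contrast, here is roughly what the same converter looks like with hand-written callbacks (a plain-Python sketch in modern syntax, invented for illustration and not part of the Trellis API): every setter must remember to keep the other value in sync and to notify its listeners by hand, which is exactly the bookkeeping the Trellis does for you:

```python
# A hand-rolled, callback-based temperature converter (plain modern Python,
# shown only for contrast with the Trellis version above; not the Trellis API).
class CallbackTempConverter(object):
    def __init__(self):
        self._C = 0.0
        self._F = 32.0
        self.listeners = []  # callbacks to run after every change

    def _notify(self):
        for callback in self.listeners:
            callback(self)

    @property
    def C(self):
        return self._C

    @C.setter
    def C(self, value):
        self._C = value
        self._F = value * 1.8 + 32    # we must remember to keep F in sync...
        self._notify()

    @property
    def F(self):
        return self._F

    @F.setter
    def F(self, value):
        self._F = value
        self._C = (value - 32) / 1.8  # ...and to keep C in sync here, too
        self._notify()

tc = CallbackTempConverter()
tc.listeners.append(lambda t: print("Celsius......", t.C))
tc.C = 100  # prints "Celsius...... 100"
```

Note how the dependency knowledge lives in the setters, not with the values themselves; forget one line of sync code and the two temperatures silently drift apart.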

(rule/value object), and offers many other features that either PyCells or
Cellulose lack.
 
The Trellis package can be `downloaded from the Python Package Index`_ or
installed using `Easy Install`_, and it has a fair amount of documentation,
including the following manuals:
 
* `Developer's Guide and Tutorial`_
 
* `Time, Event Loops, and Tasks`_
 
* `Event-Driven Collections with the Trellis`_ (New features in 0.7a2)
 
* `Software Transactional Memory (STM) And Observers`_
 
* `Porting Code from Older Trellis Versions`_
 
 
Release highlights for 0.7a2:
 
* Removed APIs that were deprecated in 0.7a1
 
* Rollback now occurs over an entire atomic operation, even if more than one
  recalc pass occurs within that atomic operation.
 
* Added ``collections.Hub`` type for publish/subscribe operations similar to
  PyDispatcher, but in a declarative, callback-free, and extensible manner.
 
* Various bugfixes
 
 
Questions, discussion, and bug reports for the Trellis should be directed to
the `PEAK mailing list`_.

.. _downloaded from the Python Package Index: http://pypi.python.org/pypi/Trellis#toc
.. _Easy Install: http://peak.telecommunity.com/DevCenter/EasyInstall
.. _PEAK mailing list: http://www.eby-sarna.com/mailman/listinfo/PEAK/
.. _Tutorial and Reference Manual: http://peak.telecommunity.com/DevCenter/Trellis#toc
.. _Developer's Guide and Tutorial: http://peak.telecommunity.com/DevCenter/Trellis#toc
.. _Time, Event Loops, and Tasks: http://peak.telecommunity.com/DevCenter/TrellisActivity
.. _Event-Driven Collections with the Trellis: http://peak.telecommunity.com/DevCenter/TrellisCollections
.. _Software Transactional Memory (STM) And Observers: http://peak.telecommunity.com/DevCenter/TrellisSTM
.. _Porting Code from Older Trellis Versions: http://peak.telecommunity.com/DevCenter/TrellisPorting
.. _porting guide: http://peak.telecommunity.com/DevCenter/TrellisPorting
 
.. _toc:
.. contents:: **Table of Contents**

maintained by rules, the way a spreadsheet is maintained by its formulas.
 
These managed attributes are called "cell attributes", because the attribute
values are stored in "cell" (``trellis.AbstractCell``) objects. Cell objects
can be variable or constant, and either computed by a rule or explicitly set
to a value -- possibly both, as in the temperature converter example!
 
There are five basic types of cell attributes:
 
Passive, Settable Values (``attr()`` and ``attrs()``)
    These are simple read-write attributes, with a specified default value.
    Rules that read these values will be automatically recalculated after
    the attribute is changed.
 
Computed Constants Or Initialized Values (``make()`` and ``make.attrs()``)
    These attributes are usually used to hold a mutable object, such as a list
    or dictionary (e.g. ``cache = trellis.make(dict)``). The callable (passed
    in when you define the attribute) will be called at most once for each
    instance, in order to initialize the attribute's value. After that, the
    same object will be returned each time. (Unless you make the attribute
    writable, and set the attribute to a new value.)
 
Computed, Observable Values (``@compute`` and ``compute.attrs()``)
    These attributes are used to compute simple formulas, much like those in
    a spreadsheet. That is, ones that calculate a current state based on the
    current state of other values. Formulas used in ``@compute`` attributes
    must be non-circular, side-effect free, and cannot depend on the
    attribute's previous value. They are automatically recalculated when their
    dependencies change, but *only* if a maintenance or action-performing rule
    depends upon the result, either directly or indirectly. (This avoids
    unnecessary recalculation of values that nobody cares about.)
 
Maintenance Rules/Maintained Values (``@maintain`` and ``maintain.attrs()``)
    These rules or attribute values are used to reflect changes in state. A
    maintenance rule can modify other values or use its own previous value in
    a calculation. It is re-invoked any time a value it has previously used
    changes, even if no other rule depends upon it. Maintenance rules can be
    circular, as in the temperature converter example, as their values can be
    explicitly set -- both as an initial value, and at runtime. They are also
    used to implement "push" or "pull" rules that update one data structure in
    response to changes made in another data structure. All side-effects
    in maintenance rules must be undo-able using the Trellis's undo API.
    (Which is automatic if the side-effects happen only on trellis attributes
    or data structures.) But if you must change non-trellis data structures
    inside a maintenance rule, you will need to log undo actions. We'll discuss
    the undo log mechanism in more detail later, in the section on `Creating
    Your Own Data Structures`_.
 
Action-Performing Rules (``@perform``)
    These rules are used to perform non-undoable actions on non-trellis data or
    systems, such as output I/O and calls to other libraries. Like maintenance
    rules, they are automatically re-invoked whenever a value they've
    previously read has changed. Unlike maintenance rules, however, they
    cannot return a value or modify any trellis data.
 
    Note, by the way, that this means performing rules should never raise
    errors. If they do, the changes that caused the rule to run will be rolled
    back, but if any other performing rules were run first, their actions will
    *not* be rolled back, leaving your application in an inconsistent state.
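The dependency tracking that all these attribute types rely on can be sketched in a few lines of plain Python. The ``Cell`` and ``RuleCell`` names below are invented for this sketch (modern Python syntax); the real Trellis also handles weak references, undo logging, and transactional recalculation, none of which appear here:

```python
# A minimal sketch of spreadsheet-style dependency tracking, loosely modeled
# on what the Trellis does internally (NOT the real implementation).
_current_rule = None  # the rule whose dependencies we're currently recording

class Cell:
    def __init__(self, rule=None, value=None):
        self.rule = rule
        self._value = value
        self.listeners = set()   # rule cells to re-run when we change

    def get(self):
        if _current_rule is not None:
            self.listeners.add(_current_rule)  # reading creates a dependency
        return self._value

    def set(self, value):
        if value != self._value:       # only a real *change* triggers recalc
            self._value = value
            for rule in list(self.listeners):
                rule.recalc()

class RuleCell(Cell):
    def __init__(self, rule):
        super().__init__(rule)
        self.recalc()

    def recalc(self):
        global _current_rule
        prev, _current_rule = _current_rule, self
        try:
            self.set(self.rule())      # reads inside rule() register listeners
        finally:
            _current_rule = prev

top = Cell(value=0)
height = Cell(value=30)
bottom = RuleCell(lambda: top.get() + height.get())
top.set(10)
print(bottom.get())  # → 40, recalculated automatically
```

Reading a cell while a rule is running is what creates the dependency, so a rule can depend on different cells each time it runs, just as described for ``@compute`` above.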
 
For each of the attribute types, you can use the plural ``attrs()`` form (if
there is one) to define multiple attributes at once in the body of a class.
The singular forms (except for ``attr()``) can be used either inline or as
function decorators wrapping a method to be used as the attribute's rule.
 
Let's take a look at a sample class that uses some of these ways to define
different attributes, being deliberately inconsistent just to highlight some
of the possible options::
 
    >>> class Rectangle(trellis.Component):
    ...     trellis.attrs(
    ...         top = 0,
    ...         width = 20,
    ...     )
    ...     left = trellis.attr(0)
    ...     height = trellis.attr(30)
    ...
    ...     trellis.compute.attrs(
    ...         bottom = lambda self: self.top + self.height,
    ...     )
    ...
    ...     @trellis.compute
    ...     def right(self):
    ...         return self.left + self.width
    ...
    ...     @trellis.perform
    ...     def show(self):
    ...         print self
    ...

    >>> r.left = 25
    Rectangle((25, 0), (17, 10), (42, 10))
 
By the way, note that computed attributes (as well as ``make`` attributes by
default) will be read-only::
 
    >>> r.bottom = 99
    Traceback (most recent call last):
      ...
    AttributeError: can't set attribute
 
However, "maintained" attributes will be writable if you supply an initial
value, as we did in the ``TemperatureConverter`` example. (Plain ``attr``
attributes are always writable, and ``make`` attributes can be made writable
by passing in ``writable=True`` when creating them.)
 
Note, by the way, that you aren't required to make everything in your program a
``trellis.Component`` in order to use the Trellis. The ``Component`` class

   dictionary that will hold all the ``Cell`` objects used to implement cell
   attributes.
 
2. The ``__init__`` method takes any keyword arguments it receives, and uses
   them to initialize any named attributes. (Note that this is the *only*
   thing the ``__init__`` method does, so you don't have to call it unless you
   want this behavior.)
 
3. It creates a cell for each of the object's non-optional cell attributes,
   in order to initialize their rules and set up their dependencies. We'll

That's because of two important Trellis principles:
 
1. When a ``Component`` instance is created, all its "non-optional" cell
   attributes are calculated after initialization is finished. That is,
   if the attribute is a maintenance or performing rule, and has not been
   marked optional, then the rule is invoked, and the result is used to
   determine the cell's initial value.

2. While a cell's rule is running, *any* trellis cell whose value is looked at
   becomes a dependency of that rule. If the looked-at cell changes later, it
   triggers recalculation of the rule that "looked". In Trellis terms, we say
   that the first cell has become a "listener" of the second cell.
 
The first of these principles explains why the rectangle printed itself
immediately: the ``show`` performer cell was activated. We can see this if we
look at the rectangle's ``show`` attribute::
 
    >>> print r.show
    None
 
(The ``show`` rule is a performer, so the resulting attribute value is
``None``. Also notice that **rules are not methods** -- they are more like
properties.)
 
The second principle explains why the rectangle re-prints itself any time one
of the attributes changes value: all six attributes are referenced by the
``__repr__`` method, which is called when the ``show`` performer prints the
rectangle. Since the cells that store those attributes are being looked at
during the execution of another cell's rule, they become dependencies, and the
``show`` rule is thus re-run whenever the listened-to cells change.
 
Each time a rule runs, its dependencies are automatically re-calculated --
which means that if you have more complex rules, they can actually depend on
*different* cells every time they're calculated. That way, the rule is only
re-run when it's absolutely necessary.
 
By the way, a listened-to cell has to actually *change* its value (as determined
by the ``!=`` operator), in order to trigger recalculation. Merely setting a
cell doesn't cause its listeners to recalculate::
 

    >>> r.width = 18
    Rectangle((25, 0), (18, 10), (43, 10))
 
In the next section, we'll look at how to create "optional" rules: ones that
don't get calculated the moment a component is created.
 
 
"Optional" Rules and Subclassing
--------------------------------
 
The ``show`` rule we've been playing with on our ``Rectangle`` class is
kind of handy for debugging, but it's kind of annoying when you don't need it.
Let's turn it into an "optional" performer, so that it won't run unless we ask
it to::

    >>> class QuietRectangle(Rectangle):
    ...     @trellis.perform(optional=True)
    ...     def show(self):
    ...         print self
 

    >>> q2 = QuietRectangle()
    >>> q2.top = 99
 
``@compute`` rules are always "optional". ``make()`` attributes are optional
by default, but can be made non-optional by passing in ``optional=False``.
``@maintain`` and ``@perform`` are non-optional by default, but can be made
optional using ``optional=True``.
 
Notice, by the way, that rule attributes are more like properties than methods,
which means you can't use ``super()`` to call the inherited version of a rule.
(Later, we'll look at other ways to access rule definitions.)
 
 
Read-Only and Read-Write Attributes
-----------------------------------
 
Attributes can vary as to whether they're settable:
 
* Passive values (``attr()``, ``attrs()``) and ``@maintain`` rules are
  *always* settable
 
* ``make()`` attributes are settable only if created with ``writable=True``
 
* ``@compute`` and ``@perform`` attributes are *never* settable
 
For example, here's a class with a non-settable ``aDict`` attribute::
 
    >>> class Demo(trellis.Component):
    ... aDict = trellis.make(dict)
 
    >>> d = Demo()
    >>> d.aDict
    {}
    >>> d.aDict[1] = 2
    >>> d.aDict
    {1: 2}
 
    >>> d.aDict = {}
    Traceback (most recent call last):
      ...
    AttributeError: Constants can't be changed
 
Note, however, that even if an attribute isn't settable, you can still
*initialize* the attribute value, before the attribute's cell is created::
 
    >>> d = Demo(aDict={3:4})
    >>> d.aDict
    {3: 4}
 
    >>> d = Demo()
    >>> d.aDict = {1:2}
    >>> d.aDict
    {1: 2}
 
 
Since the ``aDict`` attribute is "optional" (``make`` attributes are optional
by default), it wasn't initialized when the ``Demo`` instance was created. So
we were able to set an alternate initialization value. But, if we make it
non-optional, we can't do this, because the attribute will be initialized
during instance construction::
 
    >>> class Demo(trellis.Component):
    ... aDict = trellis.make(dict, optional=False)
 
    >>> d = Demo()
    >>> d.aDict = {1:2}
    Traceback (most recent call last):
      ...
    AttributeError: Constants can't be changed
    
And so, non-optional read-only attributes can only be set while an instance is
being created::
 
    >>> d = Demo(aDict={3:4})
    >>> d.aDict
    {3: 4}
 
But if an attribute is settable, it can be set at any time, whether the
attribute is optional or not::
 
    >>> class Demo(trellis.Component):
    ... aDict = trellis.make(dict, writable=True)
 
    >>> d = Demo()
    >>> d.aDict = {1:2}
    >>> d.aDict = {3:4}
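The "settable only until first read" behavior shown above can be approximated with an ordinary Python descriptor. The ``MakeOnce`` class below is a hypothetical name invented for this sketch (modern Python syntax); it is not how the Trellis actually implements ``make``:

```python
# A rough sketch of "settable only until first read" semantics, similar in
# spirit to a non-writable trellis.make attribute (not the actual Trellis code).
class MakeOnce:
    def __init__(self, factory):
        self.factory = factory
        self.name = None

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        if self.name not in obj.__dict__:
            obj.__dict__[self.name] = self.factory()  # initialize lazily, once
        return obj.__dict__[self.name]

    def __set__(self, obj, value):
        if self.name in obj.__dict__:
            raise AttributeError("Constants can't be changed")
        obj.__dict__[self.name] = value  # allowed only before the first read

class Demo:
    aDict = MakeOnce(dict)

d = Demo()
d.aDict = {1: 2}    # ok: the attribute hasn't been read yet
d2 = Demo()
d2.aDict[3] = 4     # first read runs the factory and freezes the value
# d2.aDict = {}     # would now raise AttributeError
```

The key point the sketch captures is that initialization and assignment are the same operation here: whichever happens first (an explicit set, or a lazy read that runs the factory) fixes the value for good.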
 
 
 
Model-View-Controller and the "Observer" Pattern
------------------------------------------------
 

For example::
 
    >>> class Viewer(trellis.Component):
    ...     model = trellis.attr(None)
    ...
    ...     @trellis.perform
    ...     def view_it(self):
    ...         if self.model is not None:
    ...             print self.model

attribute. So if you change ``view.model``, this triggers a recalculation,
too.
 
Remember: once a rule reads another cell, it will be recalculated whenever the
previously-read value changes. Each time ``view_it`` is invoked, it renews
its dependency on ``self.model``, but *also* acquires new dependencies on
whatever the ``repr()`` of ``self.model`` looks at. Meanwhile, any
dependencies on the attributes of the *previous* ``self.model`` are dropped,
so changing them doesn't cause the perform rule to be re-invoked any more.
This means we can even do things like set ``model`` to a non-component object,
like this::
 
    >>> view.model = {}
    {}

``trellis.Dict`` and ``trellis.List`` instead of the built-in Python types.
We'll cover how that works in the section below on `Mutable Data Structures`_.
 
By the way, the links from a cell to its listeners are defined using weak
references. This means that views (and cells or components in general) can
be garbage collected even if they have dependencies. For more information
about how Trellis objects are garbage collected, see the later section on

Accessing a Rule's Previous Value
---------------------------------
 
Sometimes it's useful to create a maintained value that's based in part on its
previous value. For example, a rule that produces an average over time, or
that ignores "noise" in an input value, by only returning a new value when the
input changes more than a certain threshold since the last value. It's fairly
easy to do this, using a ``@maintain`` rule that refers to its previous value::
 
    >>> class NoiseFilter(trellis.Component):
    ...     trellis.attrs(
    ...         value = 0,
    ...         threshhold = 5,
    ...     )
    ...     @trellis.maintain(initially=0)
    ...     def filtered(self):
    ...         if abs(self.value - self.filtered) > self.threshhold:
    ...             return self.value
    ...         return self.filtered

 
As you can see, referring to the value of a cell from inside the rule that
computes the value of that cell, will return the *previous* value of the cell.
(Note: this is only possible in ``@maintain`` rules.)
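The same previous-value idiom can be pulled out of the Trellis and written as a plain Python generator (a sketch in modern syntax, with the ``noise_filter`` name invented here; the real rule above re-runs automatically whenever ``value`` or ``threshhold`` changes, instead of being fed a sequence):

```python
# The previous-value idiom from the NoiseFilter rule, as a plain function
# (a sketch; the Trellis re-runs the rule automatically on changes).
def noise_filter(values, threshold=5, initial=0):
    """Yield a filtered value for each input, changing only on big jumps."""
    filtered = initial
    for value in values:
        if abs(value - filtered) > threshold:
            filtered = value       # the jump exceeded the threshold: follow it
        yield filtered             # otherwise keep the previous value

print(list(noise_filter([0, 2, 4, 10, 11, 3])))   # → [0, 0, 0, 10, 10, 3]
```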
 
 
Beyond The Spreadsheet: "Resetting" Cells
-----------------------------------------
 
So far, all the stuff we've been doing isn't really any different than what you
can do with a spreadsheet, except maybe in degree. Spreadsheets usually don't

But practical programs often need to do more than just reflect the values of
things. They need to *do* things, too.
 
So far, we've seen only attributes that reflect a current "state" of things.
But attributes can also represent things that are "happening", by automatically
resetting to some sort of null or default value. In this way, you can use
an attribute's value as a trigger to cause some action, following which it
resets to an "empty" or "inactive" value. And this can then help us handle the
"Controller" part of "Model-View-Controller".
 
For example, suppose we want to have a controller that lets you change the
size of a rectangle. We can use "resetting" attributes to do this, in a way
similar to an "event", "message", or "command" in a GUI or other event-driven
system::
 
    >>> class ChangeableRectangle(QuietRectangle):
    ...     trellis.attrs.resetting_to(
    ...         wider = 0,
    ...         narrower = 0,
    ...         taller = 0,
    ...         shorter = 0
    ...     )
    ...     width = trellis.maintain(
    ...         lambda self: self.width + self.wider - self.narrower,
    ...         initially = 20
    ...     )
    ...     height = trellis.maintain(
    ...         lambda self: self.height + self.taller - self.shorter,
    ...         initially = 30
    ...     )
 
    >>> c = ChangeableRectangle()
    >>> view.model = c
    Rectangle((0, 0), (20, 30), (20, 30))
 
A resetting attribute (created with ``attr(resetting_to=value)`` or
``attrs.resetting_to()``) works by receiving an input value, and then
automatically resetting to its default value after its dependencies are
updated. For example::
 
    >>> c.wider

 
Notice that setting ``c.wider = 1`` updated the rectangle as expected, but as
soon as all updates were finished, the attribute reset to its default value of
zero. In this way, every time you put a value into a resetting attribute, it
gets processed and discarded. And each time you set it to a non-default value,
it's treated as a *change*. Which means that any maintenance or performing
rules that depend on the attribute will be recalculated (along with any
``@compute`` rules in between). If we'd used a normal ``trellis.attr`` here,
and then set ``c.wider = 1`` twice in a row, nothing would have happened the
second time, because the value would not have changed.
 
Now, we *could* write methods for changing value cells that would do this sort
of resetting for us, but it wouldn't be a good idea. We'd need to have both
the attribute *and* the method, and we'd need to remember to *never* set the
attribute directly. (What's more, it wouldn't even work correctly, for reasons
we'll see later.) It's much easier to just use a resetting attribute as an
"event sink" -- that is, to receive, consume, and dispose of any messages or
commands you want to send to an object.
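The "set it, let the rules see it, then snap back" behavior can be imitated with an ordinary descriptor. The ``Resetting`` and ``Rect`` classes below are toys invented for this sketch (modern Python syntax); the real Trellis defers the reset until the end of the whole recalculation pass, rather than doing it eagerly like this:

```python
# A sketch of the "resetting attribute" idea: a value that lets the rules
# react to it, then snaps back to a default (not the actual Trellis machinery).
class Resetting:
    def __init__(self, default=0):
        self.default = default
        self.name = None

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.name, self.default)

    def __set__(self, obj, value):
        obj.__dict__[self.name] = value
        obj.react()                              # let the rules see the value...
        obj.__dict__[self.name] = self.default   # ...then reset it

class Rect:
    wider = Resetting(0)

    def __init__(self):
        self.width = 20

    def react(self):
        # stands in for the maintain rule: width = width + wider - narrower
        self.width += self.wider

r = Rect()
r.wider = 1
r.wider = 1          # a second identical set is still treated as an event
print(r.width)       # → 22
```

Because the attribute is back at its default before the next set, every assignment is a fresh event, which is exactly why two ``c.wider = 1`` commands in a row both take effect in the Trellis version.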
 
But why do we need such a thing at all? Why not just write code that directly
manipulates the model's width and height? Well, sometimes you *can*, but it

rest of the program. This is a form of something called "referential
transparency", which roughly means "order independent". We'll cover this topic
in more detail in the later section on `Managing State Changes`_. But in the
meantime, let's look at how using attributes instead of methods also helps us
implement generic controllers.
 
 

---------------------------------------------
 
Let's create a couple of generic "Spinner" controllers, that take a pair of
"increase" and "decrease" command attributes, and hook them up to our
changeable rectangle::
 
    >>> class Spinner(trellis.Component):
    ...     """Increase or decrease a value"""
    ...     increase = trellis.attr(resetting_to=0)
    ...     decrease = trellis.attr(resetting_to=0)
    ...     by = trellis.attr(1)
    ...
    ...     def up(self):
    ...         self.increase = self.by

    >>> height.up()
    Rectangle((0, 0), (22, 30), (22, 30))
 
 
Could you do the same thing with methods? Maybe. But can methods be linked
the *other* way?::
 

define a value cell for the text in its class::
 
    >>> class TextEditor(trellis.Component):
    ...     text = trellis.attr('')
    ...
    ...     @trellis.perform
    ...     def display(self):
    ...         print "updating GUI to show", repr(self.text)
 

simply link them together at runtime in any way that's useful.
 
 
Resetting Rules
---------------

Resetting attributes are designed to "accept" what might be called events,
messages, or commands. But what if you want to generate or transform such
events instead?
 

new high temperature is seen::
 
    >>> class HighDetector(trellis.Component):
    ...     value = trellis.attr(0)
    ...     last_max = trellis.attr(None)
    ...
    ...     @trellis.maintain
    ...     def new_high(self):
    ...         last_max = self.last_max
    ...         if last_max is None:
    ...             self.last_max = self.value
    ...             return False # first seen isn't a new high
    ...         elif self.value > last_max:
    ...             self.last_max = self.value
    ...             return True
    ...         return False
    ...
    ...     @trellis.perform
    ...     def monitor(self):
    ...         if self.new_high:
    ...             print "New high"
 
The ``new_high`` rule runs whenever ``value`` changes, and checks to see
if it's greater than the current highest value. If so, it returns true and
updates the maximum value. Let's try it out::
 
    >>> hd = HighDetector()
 

Oops! We set a new high value, but the ``monitor`` rule didn't detect a new
high, because ``new_high`` was *already True* from the previous high.
 
Just as with a regular attribute, rules normally return what might be called
"continuous" or "steady state" values. That is, their value remains the same
until something causes them to be recalculated. In this case, the second
recalculation of ``new_high`` returns ``True``, just like the first one...
meaning that there's no *change*, and thus the performing rule isn't triggered.

But, just as with regular attributes, ``@compute`` and ``@maintain`` rules
can be made "resetting", using the ``resetting_to=`` keyword, allowing the
value to reset to a default as soon as all of the value's listeners have
"seen" the original value. Let's try a new version of our high detector::
 
    >>> class HighDetector2(HighDetector):
    ...     @trellis.maintain(resetting_to=False)
    ...     def new_high(self):
    ...         # this is a bit like a super() call, but for a rule:
    ...         return HighDetector.new_high.rule(self)
 
    >>> hd = HighDetector2()
 

    New high
 
As you can see, each new high is detected correctly now, because the value
of ``new_high`` is silently reset to ``False`` after it's calculated as (or
set to) any other value::
 
    >>> hd.new_high
    False

    >>> hd.new_high
    False
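If it helps to picture the mechanics, the resetting behavior can be modeled in
a few lines of plain Python. (This is only an illustrative sketch, not the
Trellis implementation; ``ResettingCell`` is an invented name.)

```python
class ResettingCell(object):
    """Toy model of a ``resetting_to`` value (NOT the real Trellis code):
    listeners see the transient value once, then it reverts to the default."""
    def __init__(self, default):
        self.default = default
        self.value = default

    def set(self, value, listeners):
        self.value = value
        for listener in listeners:    # each listener "sees" the new value...
            listener(self.value)
        self.value = self.default     # ...then the cell silently resets

seen = []
cell = ResettingCell(False)
cell.set(True, [seen.append])
```

Afterwards ``cell.value`` reads as ``False`` again, just as ``hd.new_high``
does above, even though ``seen`` shows the listener observed the transient
``True``.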
 
(By the way, that ``HighDetector.new_high.rule`` in the new ``new_high`` rule
retrieves the base class version of the rule. We could also have done the same
thing this way::
 
    >>> class HighDetector2(HighDetector):
    ...     new_high = trellis.maintain(
    ...         HighDetector.new_high.rule, resetting_to=False
    ...     )
 
and the result would have been the same, except it would run faster since the
lookup of the inherited rule only happens once.)
 
 
Wiring Up Multiple Components
-----------------------------

gets wider and taller whenever the Celsius temperature reaches a new high::
 
    >>> tc = TempConverter()
    Celsius...... 0.0
    Fahrenheit... 32.0
 
    >>> hd = HighDetector2(value = trellis.Cells(tc)['C'])
    >>> cr = ChangeableRectangle(

 
Time is the enemy of event-driven programs. They say that time is "nature's
way of keeping everything from happening at once", but in event-driven programs
we usually *want* certain things to happen "all at once"!
 
For example, suppose we want to change a rectangle's top and left
co-ordinates::

That way, they don't take effect until the current event is completely
finished.
 
 
Modifiers
---------
 
The Trellis actually does something similar, but its internal "event queue" is
automatically flushed whenever you set a value from outside a rule. If you
want to set multiple values, you need to use a ``@modifier`` function or
method like this one, which we could've made a method of ``Rectangle``, but
didn't::
 
    >>> @trellis.modifier
    ... def set_position(rectangle, left, top):

    >>> set_position(r, 55, 22)
    Rectangle((55, 22), (18, 10), (73, 32))
 
Notifications of changes made by a ``modifier`` do not take effect until the
*outermost* active ``modifier`` function returns. (In other words, if one
``modifier`` directly or indirectly calls another ``modifier``, the inner
modifier's changes don't cause notifications to occur until the same time
as the outer modifier's changes do.)
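The nesting behavior can be sketched with a plain-Python decorator. (This is a
toy model with invented names, not the actual Trellis machinery: notifications
are queued until the outermost wrapped call returns.)

```python
pending = []    # queued change notifications
flushed = []    # batches actually delivered to "observers"
depth = [0]     # nesting level of active modifiers

def flush():
    flushed.append(list(pending))
    del pending[:]

def modifier(func):
    """Toy @modifier: notifications are delivered only when the
    *outermost* wrapped call returns."""
    def wrapper(*args):
        depth[0] += 1
        try:
            return func(*args)
        finally:
            depth[0] -= 1
            if depth[0] == 0:   # only the outermost call flushes
                flush()
    return wrapper

@modifier
def move_left(x):
    pending.append(('left', x))

@modifier
def set_position(left, top):
    move_left(left)             # inner modifier: nothing flushed yet
    pending.append(('top', top))

set_position(55, 22)
```

After ``set_position`` returns, ``flushed`` holds a single batch containing
both changes, because the inner ``move_left`` call did not trigger a flush of
its own.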
 
Now, notice that this means that within a ``modifier``, you can't rely on any
values controlled by rules to be updated when you make changes. This means
it's generally a bad idea for a rule to look at what it's changing. For
example::
 
    >>> @trellis.modifier
    ... def set_position(rectangle, left, top):

    ...     print rectangle
 
    >>> set_position(r, 22, 55)
    Rectangle((22, 55), (18, 10), (73, 32))
    Rectangle((22, 55), (18, 10), (40, 65))
 
The first print is from inside the rule, showing that from the rule's
perspective, the bottom/right co-ordinates are not updated to reflect the
changed top/left co-ordinates. The second print is from a perform rule,
showing that the values *do* get updated after the modifier has exited.
 
 
The Evil of Order Dependency
----------------------------
 
The reason that time is the enemy of event driven programs is because time
implies order, and order implies order **dependency** -- a major source of bugs
in event-driven and GUI programs.
 
Writing a polished GUI program that has no visual glitches or behavioral quirks

 
And all you have to do to get the benefits is to divide your code three ways:
 
* Input code, that sets trellis cells or calls modifier methods, but does not
  run *inside* trellis rules. This kind of code is usually invoked by GUI or
  other I/O callbacks, or by top-level non-trellis code.
 
* Processing rules that compute values, and/or make undo-able changes to cells
  or other data structures. (i.e. ``@compute`` and ``@maintain`` rules.)
 
* Output rules that send data on to other systems (like the screen, a socket,
  a database, etc.). This code may appear in ``@perform`` rules, or it can be
  "application" code that reads results from a finished trellis calculation.
 
The first and third kinds of code are inherently order-dependent, since
information comes in (and must go out) in a meaningful order. However, by
putting related outputs in the same performer (or non-trellis code), you can
ensure that the required order is enforced by a single piece of code. This
approach is highly bug-resistant.
 
Second, you can reduce the order dependency of input code by making it do as
little as possible, simply dumping data into input cells, where it can be
handled by processing rules. And, since input controllers can be very generic
and highly-reusable, there's a natural limit to how much input code you will
need.
 
By using these approaches, you can maximize the portion of your application
that appears in side effect-free (or at least undo-able) processing rules,
which the Trellis makes 100% immune to order dependencies. Anything that
happens in Trellis rules, happens *instantaneously*, in a logical sense. There
is no "order", and thus no order dependency.
 
In truth, of course, rules do execute in *some* order. However, as long as the
rules don't do anything but compute their own values, then it cannot matter
what order they do it in. (The trellis guarantees this by automatically
recalculating rules whenever their dependencies change, and undoing any
calculations that "saw" out-of-date or inconsistent values.)
 
 
The Side-Effect Rules

write, understand, and debug.
 
 
Rule 1 - If Order Matters, Use Only One Rule
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
If you care what order some modifications to a trellis data structure occur in,
then code them both in the same maintenance rule. If you care what order two
"outside world" side-effects happen in, code them both in the same perform
rule.
 
For example, in the ``TempConverter`` demo, we had a performer that printed the
Celsius and Fahrenheit temperatures. If we'd put those two ``print`` statements
in separate rules, we'd have had no control over the output order; either
Celsius or Fahrenheit might have come first on any given change to the
temperatures. So, if you care about the relative order of certain output or
actions, you must put them all in one rule. If that makes the code too big
or complex, you can always refactor to extract computing or maintenance rules
to calculate the intermediate values. (Just don't put any of the external
actions in the other rules, only the *calculations*. Then have a perform rule
that *only* does the external actions.)
 
 
Rule 2 - When Setting Or Changing, Use One Rule or One Value
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
If you set a value from more than one place, you are introducing an order
dependency. In fact, if you set a cell value from more than one rule, the
Trellis will stop you, unless the values are equal. For example::
 
    >>> class Conflict(trellis.Component):
    ...     value = trellis.attr(99)
    ...
    ...     @trellis.maintain
    ...     def ruleA(self):
    ...         self.value = 22
    ...
    ...     @trellis.maintain
    ...     def ruleB(self):
    ...         self.value = 33
 
    >>> Conflict()
    Traceback (most recent call last):
      ...
    InputConflict: (33, 22)
 
This example fails because the two rules set different values for the ``value``
attribute, causing a conflict error. Since the rules don't agree, the result
would depend on the *order* in which the rules happened to run -- which again
is precisely what we don't want in an event-driven program!
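The conflict check itself is easy to model in plain Python. (This is a
hypothetical sketch with invented names, not the real ``InputConflict``
machinery: two *different* writes in one sweep raise an error, equal writes
are allowed.)

```python
class ToyCell(object):
    """Toy conflict detection (not the real Trellis code)."""
    _UNSET = object()           # sentinel: no write yet this sweep

    def __init__(self, value):
        self.value = value
        self._written = self._UNSET

    def set(self, value):
        if self._written is not self._UNSET and self._written != value:
            # like trellis.InputConflict: the result would otherwise
            # depend on which rule happened to run last
            raise ValueError((self._written, value))
        self._written = value
        self.value = value

    def end_sweep(self):
        self._written = self._UNSET

c = ToyCell(99)
c.set(22)
c.set(22)                       # same value again: no conflict
try:
    c.set(33)                   # different value in the same sweep: conflict!
    conflicted = False
except ValueError:
    conflicted = True
```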
 
So this rule is for your protection, because it makes it impossible for you to
accidentally set the same thing in two different places in response to an
event, and then miss the bug or be unable to reproduce it because the second
change masks the first!

Instead, what happens is that assigning two different values to the same cell
in response to the same event always produces an error message, making it
easier to find the problem. Of course, if you arrange your input code so that
only one piece of input code is setting trellis values for a given event, or
only one piece of code ever modifies a given cell or data structure, then
you'll never have this problem.
 
Of course, if all of your code is setting a cell to the *same* value, you won't
get a conflict error either. This is mostly useful for e.g. receiver cells

discards it.
 
 
Rule 3 - Rule Side-Effects MUST Be Logged For Undo
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
If your rules only set cell values or modify trellis-managed data structures,
you don't need to worry about undo logging, as it's taken care of for you.
 
However, if you implement any other kind of side-effects in a maintenance rule
(such as updating a mutable data structure that's not trellis-managed), you
**must** record undo actions to allow the trellis to roll back your rule's
action(s), in the event that it must be recalculated due to an order
inconsistency, or if an error occurs during recalculation. If you don't do
this, you risk corrupting your program's state. This is especially important
if you are creating a new trellis-managed data structure type.
 
In general, it's best to keep side-effects in rules to a minimum, and use only
cells and other trellis-managed data structures. And of course, any side
effects that can't easily be undone should be placed in a ``@perform`` rule, which
is guaranteed to run no more than once per overall recalculation of the trellis.
 
However, if you are creating your own trellis-managed data structure type, you
may need to use the ``trellis.on_undo()`` API to register undo callbacks, to
protect your data structure's integrity. See the section below on `Creating
Your Own Data Structures`_ for more details on how this works.
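The undo log itself is a simple idea. Here's a plain-Python sketch of the
pattern (the helper names here are invented stand-ins; see the real
``trellis.on_undo()`` API for the actual behavior):

```python
undo_log = []

def on_undo(func, *args):
    # record how to reverse one side-effect (modeled on trellis.on_undo)
    undo_log.append((func, args))

def rollback():
    # undo in reverse order: the newest change is reversed first
    while undo_log:
        func, args = undo_log.pop()
        func(*args)

data = [1, 2]
on_undo(data.remove, 3)     # how to reverse the append below
data.append(3)

rollback()                  # as if the rule had to be recalculated
```

After ``rollback()``, ``data`` is back to ``[1, 2]``, as if the rule's
side-effect had never happened.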
 
 
Rule 4 - If You Write, Don't Read!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
Be aware that rules with side-effects **cannot** see the ultimate effect of
their changes, and so should avoid reading anything but their minimum required
inputs. For example::
 
    >>> import sys
    
    >>> class ChangeTakesTime(trellis.Component):
    ...     v1 = trellis.attr(2)
    ...     v2 = trellis.compute(lambda self: self.v1*2)
    ...     @trellis.maintain
    ...     def update(self):
    ...         if self.v1 != 3:
    ...             print "before", self.v1, self.v2
    ...             self.v1 = 3
    ...             print "after", self.v1, self.v2
 
    >>> x = ChangeTakesTime()
    before 2 4
    after 3 4
 
    >>> x.v2
    6
 
Here's what's happening: first, ``v2`` is calculated as ``2*2 == 4``. Then,
the ``update`` rule sets ``v1`` to 3. However, ``v2`` is NOT immediately
updated. Instead, it's put on a schedule of rules to be re-run. So the
``update`` rule still sees the OLD value of ``v2``.
 
So, if you are making any kind of changes from inside a rule, beware of trying
to read anything that might be affected by those changes, as you will likely
see something that's out of date. This is particularly important when changing
trellis-managed data structures, since many data structures rely on rules for
their internal consistency. So if you first write and then read the same data
structure from a single rule, you will almost certainly see inconsistent
results.
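The "stale read" pitfall can be illustrated with a plain-Python model of
deferred recalculation (a conceptual sketch only; all names here are
invented):

```python
state = {'v1': 2}
derived = {'v2': state['v1'] * 2}   # rule: v2 = v1 * 2
schedule = []                        # rules queued for the next sweep

def recalc_v2():
    derived['v2'] = state['v1'] * 2

def set_v1(value):
    state['v1'] = value
    schedule.append(recalc_v2)      # dependents are only *scheduled*

# inside a "rule": write v1, then (unwisely) read v2 right away
set_v1(3)
stale_read = derived['v2']          # still the old value: 4, not 6

# once the current sweep finishes, the scheduled rule runs
for rule in schedule:
    rule()
```

The write to ``v1`` takes effect immediately, but the rule that derives
``v2`` has only been scheduled, so the read in the same step sees the
out-of-date value.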
 
 
Mutable Data Structures

they won't be automatically updated.
 
But this doesn't mean you can't use sets, lists, and dictionaries. You just
need to use Trellis-managed ones. (Of course, all the warnings above about
changing values still apply; just because you're modifying something other
than attributes, doesn't mean you're not still modifying things!)
 
The Trellis package provides three primary mutable types for you to use in your
components: ``Set``, ``List``, and ``Dict``. You can also subclass them or
create your own mutable types, as we'll discuss in a later section. (And, the
``peak.events.collections`` module also provides some fancier data structures;
see the `Collections manual`_ for details.)
 
.. _Collections manual: http://peak.telecommunity.com/DevCenter/TrellisCollections
 
 
trellis.Dict

    >>> view.model = None
 
    >>> class Dumper(trellis.Component):
    ...     @trellis.perform
    ...     def dump(self):
    ...         for name in 'added', 'changed', 'deleted':
    ...             if getattr(d, name):

    >>> view.model = None
 
    >>> class Dumper(trellis.Component):
    ...     @trellis.perform
    ...     def dump(self):
    ...         for name in 'added', 'removed':
    ...             if getattr(s, name):

    >>> view.model = None # quiet, please
 
    >>> class Watcher(trellis.Component):
    ...     @trellis.perform
    ...     def dump(self):
    ...         print myList.changed
 

need a subset of the list interface, you can implement a changelog-based
structure. For example, the Trellis package includes a ``SortedSet`` type
that maintains an index of items sorted by keys, with a cell that lists
changed regions. (See the `Collections manual`_ for more details.)
    
trellis.Pipe
------------
 
A ``trellis.Pipe`` is a little bit like a Python list, except it only supports
five methods: ``append``, ``extend``, ``__iter__``, ``__len__``, and
``__contains__``. Its purpose is to let you easily interconnect components
that communicate streams of objects or data, not unlike an operating system
pipe. You can use ``append()`` and ``extend()`` to put data in the pipe, and
use the other methods to get it back out. And it resets itself to being empty
after all of its observers have had a chance to see the contents::
 
    >>> p = trellis.Pipe()
 
    >>> view.model = p
    []
    >>> p.append(42)
    [42]
    []
    >>> p.extend([27, 59])
    [27, 59]
    []
    
One common use for pipes is to allow you to create objects that communicate
via sockets or other IPC. If you write a component so that it expects to
receive its inputs via one pipe, and sends output to another, then those pipes
can be connected at runtime to a socket. And at *test* time, you can just
append data to the input pipe, and have a performer spit out what gets written
to the output pipe.
 
The ``Pipe`` type is the trellis's simplest data structure type -- so you may
want to have a peek at its source code after you read the next section. (Better
still, try to write your *own* ``Pipe`` clone first, and then compare it to the
real one!)
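For comparison, here is one possible plain-Python approximation of such a
clone. (This is a sketch only: ``ToyPipe`` is an invented name, and an
explicit ``flush()`` stands in for the automatic reset the real ``Pipe`` gets
from the trellis machinery.)

```python
class ToyPipe(object):
    """Plain-Python approximation of trellis.Pipe: supports only
    append, extend, __iter__, __len__, and __contains__."""
    def __init__(self):
        self._data = []

    def append(self, item):
        self._data.append(item)

    def extend(self, items):
        self._data.extend(items)

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def __contains__(self, item):
        return item in self._data

    def flush(self):
        # the real Pipe resets automatically once observers have seen the data
        del self._data[:]

p = ToyPipe()
p.append(42)
p.extend([27, 59])
contents = list(p)          # [42, 27, 59]
p.flush()                   # afterwards the pipe reads as empty
```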
 
 
Creating Your Own Data Structures

``List``, and ``Set``, you have a few options. First, you can just build
components that use those existing data types, and use ``@modifier`` methods
to perform operations on them. (If you just directly perform operations, then
listeners of your data structure may be recalculated in the middle of your
changes, and see an inconsistent state.)
 
Depending on the nature of the data structure you need, however, this may not
be sufficient. For example, when you perform multiple operations on a

it::
 
    >>> class Queue(trellis.Component):
    ...     items = trellis.todo(list)
    ...     to_add = items.future
    ...
    ...     @trellis.modifier

    [1]
    []
 
 
    >>> @trellis.modifier
    ... def add_many(*args):
    ...     for arg in args: q.add(arg)

    [1, 2, 3]
    []
 
Let's break down the pieces here. First, we create a "todo" cell. A todo
cell is discrete (like a ``receiver`` cell or ``@discrete`` rule), which means
it resets to its default value after any changes. (By the way, you can define
todo cells with either a direct call as shown here, a ``@trellis.todo``
decorator on a function, or using ``trellis.todos(attr=func, ...)`` in your
class body.)
 
The default value of a ``@todo`` cell is determined by calling the function it
wraps when the cell is created. This value is then saved as the default value
for the life of the cell.
 
The second thing that we do in this class is create a "future" view. Todo
cell properties have a ``.future`` attribute that returns a new property. This
property accesses the "future" version of the todo cell's value.
 
Next, we define a modifier method, ``add()``. This method accesses the
``to_add`` attribute, and gets the *future* value of the ``items`` attribute.
This future value is initially created by calling the "todo" cell's function.
In this case, the todo function returns an empty list, so that's what ``add()``
sees, and adds a value to it. As a side effect of accessing this future value,
the Trellis schedules a recalculation to occur after the current recalculation
is finished.
 
(Note, by the way, that you cannot access future values except from inside
a ``@modifier`` function, and these in turn can only be called from ``@action``
or non-Trellis code.)
 
In our second example above, we create another ``@modifier`` that adds more
than one item to the ``to_add`` attribute. This works because only a single
"future value" is created during a given recalculation sweep, and ``@modifier``
methods guarantee that no new sweeps can occur while they are running. Thus,
the changes made in the modifier don't take effect until it returns.
 
Finally, notice that after each change, the queue resets itself to empty,
because the default value of the ``items`` cell is the empty list that was
created when the cell was initialized.
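The todo/future split can be modeled in a few lines of plain Python. (A
conceptual sketch with invented names, not the real descriptor machinery.)

```python
class ToyTodo(object):
    """Sketch of a 'todo' cell: readers see the current (default) value,
    while writers build up a separate 'future' value that only becomes
    visible when the sweep commits -- after which the cell resets."""
    def __init__(self, factory):
        self.factory = factory
        self.value = factory()          # the default value
        self._future = None

    def future(self):
        if self._future is None:        # one future per sweep
            self._future = self.factory()
        return self._future

    def commit(self):
        # end of sweep: publish the future value...
        self.value = self._future
        self._future = None

    def reset(self):
        # ...and afterwards, revert to the default
        self.value = self.factory()

items = ToyTodo(list)
items.future().append(1)
items.future().append(2)          # same future object within one sweep
mid_sweep = list(items.value)     # readers still see the default: []
items.commit()
after_commit = list(items.value)  # now [1, 2]
items.reset()
```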
 
Of course, since "todo" attributes are discrete (i.e., transient), what we've
Of course, since "todo" attributes are automatically resetting, what we've
seen so far isn't enough to create a data structure that actually *keeps* any
data around. To do that, we need to combine "todo" attributes with a rule to
maintain an existing data structure::
 
    >>> class Queue2(Queue):
    ...     added = trellis.todo(list)
    ...     to_add = added.future
    ...
    ...     @trellis.maintain(make=list)
    ...     def items(self):
    ...         if self.added:
    ...             return self.items + self.added
    ...         return self.items
 
    >>> q = Queue2()
    >>> view.model = q

 
This version is very similar to the first version, but it separates ``added``
from ``items``, and the ``items`` rule is set up to compute a new value that
includes the added items. (Notice also the use of the ``make`` keyword to
initialize ``items`` to an empty list before the ``items`` rule is run for the
first time.)
 
Notice, by the way, that the ``items`` rule returns a *new* list every time
there is a change. If it didn't, the updates wouldn't be tracked::
 
    >>> class Queue3(Queue2):
    ...     @trellis.maintain(make=list)
    ...     def items(self):
    ...         if self.added:
    ...             self.items.extend(self.added)
    ...         return self.items
 
    >>> q = Queue3()
    >>> view.model = q

new list was different.
 
If you are modifying a return value in place like this, you should use
the ``trellis.mark_dirty()`` API to flag that your return value has changed,
even though it's the same object. In addition, you should log an undo action
so that if the trellis needs to roll back some calculations involving your data
structure, it can do so::
 
    >>> class Queue4(Queue2):
    ...     @trellis.maintain(make=list)
    ...     def items(self):
    ...         items = self.items
    ...         if self.added:
    ...             trellis.on_undo(items.__delitem__, slice(len(items), None))
    ...             items.extend(self.added)
    ...             trellis.mark_dirty()
    ...         return items
 
    >>> q = Queue4()
    >>> view.model = q
    []
 

    >>> add_many(2, 3, 4)
    [1, 2, 3, 4]
 
As you can see, calling ``mark_dirty()`` caused the trellis to notice the
change to the list, even though the newly-returned list is (by definition)
still equal to the previous value of the rule (i.e., the same list).
 
The ``on_undo()`` function lets you register a callback function (with optional
positional arguments) that will be invoked if the trellis needs to roll back
changes due to an error, or due to an out-of-order calculation. (If a rule
makes a change to a data structure that has already been read by another rule,
the trellis has to undo any changes made by the earlier rule and re-run it to
ensure consistent results.)
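The rollback mechanics can be sketched in a few lines of plain Python. This is
only an illustration of the undo-log pattern; the ``on_undo`` and ``rollback``
names below are local stand-ins, not the actual trellis internals:

```python
# A minimal sketch of the undo-log pattern: callbacks registered with
# optional positional arguments, replayed in reverse (LIFO) order on
# rollback.  (Illustrative only -- not the real trellis implementation.)

undo_log = []

def on_undo(func, *args):
    """Record a callback to invoke if changes must be rolled back."""
    undo_log.append((func, args))

def rollback():
    """Run registered callbacks in reverse order of registration."""
    while undo_log:
        func, args = undo_log.pop()
        func(*args)

items = [1, 2, 3]
# Before mutating in place, log how to undo the change:
on_undo(items.__delitem__, slice(len(items), None))
items.extend([4, 5])
assert items == [1, 2, 3, 4, 5]

rollback()                  # undoes the extend
assert items == [1, 2, 3]
```

Because the log is popped from the end, callbacks registered later always run
before earlier ones, just as described above.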
 
Registered functions are called in reverse order, so that callbacks registered
by later ``on_undo()`` calls will run before earlier ones. The Trellis keeps
track of what callbacks were registered during each rule's execution, so that
it can roll back the minimum number of changes needed to resolve a calculation
order conflict. In the event of an error, however, all changes are rolled
back::
 
    >>> @trellis.modifier
    ... def error_demo():
    ...     @trellis.Performer   # make a standalone performing rule
    ...     def bad_observer():
    ...         print q
    ...         raise RuntimeError("ha!")
    ...     q.add(5)
 
    >>> try: error_demo()
    ... except RuntimeError: print "caught error"
    [1, 2, 3, 4, 5]
    caught error
 
    >>> print q
    [1, 2, 3, 4]
 
This example is a bit odd, because it's somewhat difficult to force the trellis
to get an error in such a way as to test your undo logging. If we had simply
raised an error in the modifier, the change would *appear* to have been rolled
back, when in fact it hadn't happened yet! (It's easy to see this if you add
a "print" to the ``items`` rule -- if you raise an error in the modifier, it
will never be called, because the rules don't run until the modifier is over.)
 
So to actually test the undo-ing, we have to raise the error in a new performer
cell, which then runs after ``q.items`` is updated. (Performers don't run
until/unless there are no other kinds of rules pending.)
 
In later sections on `Working with Cell Objects`_, we'll see more about how to
create and use one-off cells like this ``Performer``, without needing to
make them part of a component.
 
In the meantime, please note that creating good trellis data structures can be
tricky: be sure to write automated tests for your code, and verify that they
actually test what you *think* they test. This is one situation where it's
REALLY a good idea to write your tests first, and try to make them fail
*before* you add any ``mark_dirty()`` or ``on_undo()`` calls to your code.
Otherwise, you won't be sure that your tests are really testing anything!
 
Of course, you don't need to deal with ``mark_dirty()`` and ``on_undo()`` at all,
if you stick to using immutable values as a basis for your data structure, or
use a copy-on-write approach like that shown in our ``Queue2`` example above.
Such data structures are less efficient than updating in-place, if they contain
large amounts of data, but not every data structure needs to contain large
quantities of data!
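To make the trade-off concrete, here is a plain-Python sketch (independent of
the Trellis) of why the copy-on-write style needs no ``mark_dirty()`` call:
equality-based change detection sees the brand-new list all by itself. The
``copy_on_write_update`` name is hypothetical, used only for illustration:

```python
# Copy-on-write vs. in-place update, sketched without the Trellis.
# A change-detection system that compares old and new values by equality
# (as the trellis does) notices copy-on-write changes automatically,
# because a new, different list is returned each time.

def copy_on_write_update(items, added):
    """Return a new list when there are additions; no dirty flag needed."""
    if added:
        return items + added        # brand-new list object
    return items

old = [1, 2]
new = copy_on_write_update(old, [3])
assert new == [1, 2, 3]
assert old == [1, 2]                # original untouched: trivially undoable
assert new is not old               # the change is detectable by comparison
```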
 
Therefore, we suggest that you start with simpler data structures first, and
only add in-place updates if and when you can prove that the data copying is
unacceptable overhead, since such updates are harder to write in a
provably-correct way. (Note, too, that Python's built-in data types can
often copy data a lot faster than you'd expect...)
 
On the whole, though, it's best to stick with immutable values as much as
possible, and avoid mutating data in place if you can.
 
A Practical Example -- and ``trellis.Pipe``
-------------------------------------------
 
 
Other Things You Can Do With A Trellis
======================================

    >>> C = trellis.Cell(lambda: (F.value - 32)/1.8, 0)
 
    >>> F.value
    32.0
    >>> C.value
    0.0
    >>> F.value = 212
    >>> C.value
    100.0

altogether::
 
    >>> roc
    ReadOnlyCell(<function <lambda> at ...>, None [uninitialized])
 
What the above means is that you have a read-only cell whose current value is
``None``, but has not yet been initialized. This means that if you actually
try to *read* the value of this cell, it may or may not match what the
``repr()`` showed. (This is because simply looking at the cell shouldn't
cause the cell's value to be calculated; that could be very painful when
debugging).
 
If we actually read the value of this cell, the rule will be run::
 

 
Since the rule didn't depend on any other cells, there is never any way that
it could be meaningfully recalculated. Thus, it becomes constant, and cannot
be listened-to by any other rules. If we create another rule that reads this
cell, it will not end up depending on it::
 
    >>> cell2 = trellis.Cell(lambda: roc.value)

dictionary of active cells for a component::
 
    >>> trellis.Cells(view)
    {'model': Value([1, 2, 3, 4]),
     'view_it': Performer(<bound method Viewer.view_it of
                          <Viewer object at 0x...>>, None)}
 
In the case of a ``Component``, this data is also stored in the component's
``__cells__`` attribute::

register their ``set_value()`` methods as callbacks for other systems.
 
 
Discrete and Performer Cells
----------------------------
 
To make a cell "discrete" (i.e. automatically resetting to its initial value),
you set its third constructor argument (i.e., ``discrete``) to true::
 
    >>> aReceiver = trellis.Cell(value=0, discrete=True)
    >>> aReceiver.value

    0
 
As you can see, the value a discrete cell is created with is the default value
it resets to between set (or calculated) values. If you want to make a
resetting rule, just include a rule in addition to the default value and the
discrete flag.
 
``@perform`` rules are implemented with the ``trellis.Performer`` class::
 
    >>> trellis.Cells(view)['view_it']
    Performer(<bound method Viewer.view_it of
               <Viewer object at 0x...>>, None)
 
The Performer constructor takes only one parameter: a zero-argument callable,
such as a bound method or a function with no parameters. You can't set a value
for a ``Performer`` (because it's not writable), nor can you make it discrete
(since that would imply a readable value, and performer cells exist only for
their side-effects). Creating a Performer cell schedules it for execution as
soon as the current modifier is complete and any normal rules are finished. It
will then be re-executed in the future, after any cells or other
trellis-managed data structures it depended on are changed. (As long as the
Performer isn't garbage collected, of course.)
 
 
Cell Attribute Metadata
-----------------------
 
The various decorators and APIs that set up cell attributes in components all
work by registering metadata for the enclosing class. This metadata can be
accessed using various ``peak.util.roles.Registry`` objects. (See the
documentation of the ``ObjectRoles`` package at the Python Package Index for
more info on registries.)
 
In the current version of the Trellis library, these registries should mostly
be considered implementation details; they are not officially documented and
may change in a future release. However, if you need to be able to access a
superclass' definition of a rule, you can do so using the ``CellRules``
registry::
 
    >>> trellis.CellRules(NoiseFilter)
    {'filtered': <function filtered at 0x...>}
 
As you can see, calling ``trellis.CellRules(sometype)`` will return you a
dictionary of rules for that type. You can then pull out the definition you
need and call it. This particular registry should be a relatively stable API
across releases.
 
 
Co-operative Multitasking
=========================
 
The Trellis allows for a limited form of co-operative multitasking, using
generator functions. By declaring a generator function as a ``@task`` method,
you can get it to run across multiple trellis recalculations, retaining its
state along the way. For example::
 
    >>> class TaskExample(trellis.Component):
    ...     trellis.receivers(
    ...         start = False,
    ...         stop = False
    ...     )
    ...     @trellis.task
    ...     def demo(self):
    ...         print "waiting to start"
    ...         while not self.start:
    ...             yield trellis.Pause
    ...         print "starting"
    ...         while not self.stop:
    ...             print "waiting to stop"
    ...             yield trellis.Pause
    ...         print "stopped"
 
    >>> t = TaskExample()
    waiting to start
 
    >>> t.start = True
    starting
    waiting to stop
    waiting to stop
 
    >>> t.stop = True
    stopped
 
A ``@trellis.task`` is like a ``@trellis.action``, in that it is allowed to
modify other cells, and its output cannot be observed by normal rules. The
function you decorate it with, however, must be a generator. The generator
can yield ``trellis.Pause`` in order to suspend itself until a cell it depends
on has changed.
 
In the above example, the task initially depends on the value of the ``start``
cell, so it is not resumed until ``start`` is set to ``True``. Then it prints
"starting", and waits for ``self.stop`` to become true. However, at this point
it now depends on both ``start`` *and* ``stop``, and since ``start`` is a
"receiver" cell, it resets to ``False``, causing the task to resume. (Which is
why "waiting to stop" gets printed twice at that point.)
 
We then set ``stop`` to true, which causes the loop to exit. The task is now
finished, and any further changes will not re-invoke it. In fact, if we
examine the cell, we'll see that it has become a ``CompletedTask`` cell::
 
    >>> trellis.Cells(t)['demo']
    CompletedTask(None)
 
even though it's initially a ``TaskCell``::
 
    >>> trellis.Cells(TaskExample())['demo']
    waiting to start
    TaskCell(<function step...>, None)
 
 
Invoking Subtasks
-----------------
 
Tasks can invoke or "call" other generators by yielding them. For example, we
can rewrite our example like this, for more modularity::
 
    >>> class TaskExample(trellis.Component):
    ...     trellis.receivers(
    ...         start = False,
    ...         stop = False
    ...     )
    ...
    ...     def wait_for_start(self):
    ...         print "waiting to start"
    ...         while not self.start:
    ...             yield trellis.Pause
    ...
    ...     def wait_for_stop(self):
    ...         while not self.stop:
    ...             print "waiting to stop"
    ...             yield trellis.Pause
    ...
    ...     @trellis.task
    ...     def demo(self):
    ...         yield self.wait_for_start()
    ...         print "starting"
    ...         yield self.wait_for_stop()
    ...         print "stopped"
 
    >>> t = TaskExample()
    waiting to start
 
    >>> t.start = True
    starting
    waiting to stop
    waiting to stop
 
    >>> t.stop = True
    stopped
 
Yielding a generator from a ``@task`` causes that generator to temporarily
replace the main generator, until the child generator returns, yields a
non-``Pause`` value, or raises an exception. At that point, control returns to
the "parent" generator. Subtasks may be nested to any depth.
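The subtask protocol can be modeled with a small "trampoline" that keeps a
stack of generators. This is a simplified sketch of the general pattern only;
``Pause``, ``run_until_pause``, and the explicit stack are local stand-ins,
not the actual Trellis task scheduler:

```python
# A sketch of the "yield a generator to call it" pattern.  Yielding a
# generator pushes it as a subtask; when it finishes, control returns
# to its caller.  (Not the actual Trellis scheduler.)
import types

Pause = object()   # sentinel: suspend until resumed

def run_until_pause(stack):
    """Advance the task, descending into subtasks, until a Pause (True)
    or until the whole task finishes (False)."""
    while stack:
        try:
            yielded = next(stack[-1])
        except StopIteration:
            stack.pop()                 # subtask finished; resume caller
            continue
        if isinstance(yielded, types.GeneratorType):
            stack.append(yielded)       # yielded a generator: "call" it
        elif yielded is Pause:
            return True                 # suspended until the next sweep
    return False

log = []

def child():
    log.append("in child")
    yield Pause
    log.append("child resumed")

def parent():
    log.append("parent start")
    yield child()                       # invoke the subtask
    log.append("parent done")

stack = [parent()]
run_until_pause(stack)                  # descends into child, then pauses
assert log == ["parent start", "in child"]
run_until_pause(stack)                  # resumes child, finishes parent
assert log == ["parent start", "in child", "child resumed", "parent done"]
```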
 
 
Receiving Values and Propagating Exceptions
-------------------------------------------
 
If you are targeting Python 2.5 or higher, you don't need to do anything
special to receive values yielded by subtasks, or to ensure that subtask
exceptions are propagated. You can receive values using expressions like::
 
    result = yield someGenerator(someArgs)
 
However, in earlier versions of Python, this syntax doesn't exist, so you must
use the ``trellis.resume()`` function instead, e.g.::
 
    yield someGenerator(someArgs); result = trellis.resume()
 
If you are writing code intended to run on Python 2.3 or 2.4 (as well as 2.5),
you should call ``trellis.resume()`` immediately after a subtask invocation
(preferably on the same line, as shown), *even if you don't need to get the
result*. E.g.::
 
    yield someGenerator(someArgs); trellis.resume()
 
The reason you should do this is that Python versions before 2.5 do not allow
you to pass exceptions into a generator, so the Trellis can't cause the
``yield`` statement to propagate an error from ``someGenerator()``. If the
subtask raised an exception, it will silently vanish unless the ``resume()``
function is called.
 
The reason to put it on the same line as the yield is so that you can see the
subtask call in the error's traceback, instead of just a line saying
``trellis.resume()``! (Note, by the way, that it's perfectly valid to use
``trellis.resume()`` in code that will also run under Python 2.5; it's just
redundant unless the code will also be used with older Python versions.)
 
The ability to receive values from a subtask lets you create utility functions
that wait for events to occur in some non-Trellis system. For example, you
could create a function like this, to let you wait for a Twisted "deferred" to
fire::
 
    def wait_for(deferred):
        result = trellis.Cell(None, trellis.Pause)
        deferred.addBoth(result.set_value)
        while result.value is trellis.Pause:
            yield trellis.Pause
        if isinstance(result.value, failure.Failure):
            try:
                result.value.raiseException()
            finally:
                del result # get rid of the traceback reference cycle
        yield trellis.Return(result.value)
 
You would then use it like this (Python 2.5)::
 
    result = wait_for(someTwistedFuncReturningADeferred(...))
 
Or like this (compatible with earlier Python versions)::
 
    wait_for(someTwistedFuncReturningADeferred(...)); result = trellis.resume()
 
``wait_for()`` creates a cell and adds its ``set_value()`` method as a callback
to the deferred, to receive either a value or an error. It then waits until
the callback occurs, by yielding ``Pause`` objects. If the result is a Twisted
``Failure``, it raises the exception represented by the failure. Otherwise,
it wraps the result in a ``trellis.Return()`` and yields it to its calling
task, where it will be received as the result of the ``yield`` expression
(in Python 2.5) or of the ``trellis.resume()`` call (versions <2.5).
 
 
Time, Tasks, and Changes
------------------------
 
Note, by the way, that when we say the generator above will "wait" until the
callback occurs, we actually mean no such thing! What *really* happens is that
this generator yields ``Pause``, recalculation finishes normally, and control
is returned to whatever non-Trellis code caused a recalculation to occur in
the first place. Then, later, when the deferred fires and a callback occurs to
set the ``result`` cell's value, this *triggers a recalculation sweep*, in
which the generator is resumed in order to carry out the rest of its task!
 
When it yields the result or raises an exception, this is propagated back to
whatever generator "called" this one, which may then go on to do other things
with the value or exception before it pauses or returns. The recalculation
sweep once again finishes normally, and control is returned back to the code
that caused the deferred to fire.
 
Thus, "time" in the Trellis (and especially for tasks) moves forward only when
something *changes*. It's the setting of cell values that triggers
recalculation sweeps, and tasks only resume during sweeps where one of their
dependencies have changed.
 
A task is considered to depend on any cells whose value it has read since the
last time it (or a subtask) yielded a ``Pause``. Each time a task pauses, its
old dependencies are thrown out, and a new set are accumulated.
 
A task must also ``Pause`` in order to see the effects of any changes it makes
to cells. For example::
 
    >>> c = trellis.Cell(value=27)
    >>> c.value
    27
 
    >>> def demo_task():
    ...     c.value = 19
    ...     print c.value
    ...     yield trellis.Pause
    ...     print c.value
 
    >>> trellis.TaskCell(demo_task).value
    27
    19
 
As you can see, changing the value of a cell inside a task is like changing it
inside a ``@modifier`` or ``@action`` -- the change doesn't take effect until
a new recalculation occurs, and the *current* recalculation can't finish until
the task yields a ``Pause`` or returns (i.e., exits entirely).
 
In this example, the task is resumed immediately after the pause because the
task depended on ``c.value`` (by printing it), and its value *changed* in the
subsequent sweep (because the task set it). So the task was resumed
immediately, as part of the second recalculation sweep (which happened only
because there was a change in the first sweep).
 
But what if a task doesn't have any dependencies? If it doesn't depend on
anything, how does it get resumed after a pause? Let's see what happens::
 
    >>> def demo_task():
    ...     print 1
    ...     yield trellis.Pause
    ...     print 2
 
    >>> trellis.TaskCell(demo_task).value
    1
    2
 
As you can see, a task with no dependencies (i.e., one that hasn't looked at
any cells since its last ``Pause``) is automatically resumed. The Trellis
effectively pretends that the task both set and depended on an imaginary cell,
forcing another recalculation sweep (if one wasn't already in the works due
to other changes or the need to reset some discrete cells). This prevents
tasks from accidentally suspending themselves indefinitely.
 
Notice, by the way, that this makes Trellis-style multitasking rather unique
in the world of Python event-driven systems and co-operative multitasking
tools. Most such systems require something like an "event loop", "reactor",
"trampoline", or similar code that runs in some kind of loop to manage tasks
like these. But the Trellis doesn't need a loop of its own: it can use
whatever loop(s) already exist in a program, and simply respond to changes as
they occur.
 
In fact, you can have one set of Trellis components in one thread responding to
changes triggered by callbacks from Twisted's reactor, and another set of
components in a different thread, being triggered by callbacks from a GUI
event loop. Heck, you can have them both happening in the *same* thread! The
Trellis really doesn't care. (However, you can't share any trellis components
across threads, or use them to communicate between threads. In the future,
the ``TrellisIO`` package will probably include mechanisms for safely
communicating between cells in different threads.)
 
 
Managing Activities in "Clock Time"
===================================
 
(NEW in 0.6a1)
 
Real-life applications often need to work with intervals of physical or "real"
time, not just logical "Trellis time". In addition, they often need to manage
sequential or simultaneous activities. For example, a desktop application may
have background tasks that perform synchronization, download mail, etc. A
server application may have logical tasks handling requests, and so on. These
activities may need to start or stop at various times, manage timeouts, display
or log progress, etc.
 
So, the ``peak.events.activity`` module includes support for time tracking as
well as controlling activities and monitoring their progress.
 
 
Timers and the Time Service
---------------------------
 
The Trellis measures time using "timers". A timer represents a moment in time,
but you can't tell directly *what* moment it represents. All you can do is
measure the interval between two timers, or tell whether the moment defined by
a timer has been reached.
 
The "zero" timer is ``activity.EPOCH``, representing an arbitrary starting
point in relative time::
 
    >>> from peak.events.activity import EPOCH
    >>> t = EPOCH
    >>> t
    <...activity._Timer object at ...>
 
 
Static Time Calculations
~~~~~~~~~~~~~~~~~~~~~~~~
 
As you can see, timer objects aren't very informative by themselves. However,
you can use subscripting to create new timers relative to an existing timer,
and subtract timers from each other to produce an interval in seconds, e.g.::
 
    >>> t10 = t[10]
    >>> t10 - t
    10
 
    >>> t10[-10] - t
    0
 
    >>> t10[3] - t
    13
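The arithmetic above can be captured by a tiny illustrative class. This is a
hypothetical stand-in for the real ``_Timer`` type, written only to make the
subscript-and-subtract interface concrete:

```python
# A sketch of the timer interface: subscripting offsets a moment,
# subtraction measures the interval between two moments.
# (Illustrative stand-in, not the peak.events.activity implementation.)
class Timer(object):
    def __init__(self, moment=0):
        self._moment = moment
    def __getitem__(self, seconds):
        return Timer(self._moment + seconds)   # a new, offset timer
    def __sub__(self, other):
        return self._moment - other._moment
    def __eq__(self, other):
        return self._moment == other._moment
    def __lt__(self, other):
        return self._moment < other._moment
    def __le__(self, other):
        return self._moment <= other._moment

EPOCH = Timer()
t10 = EPOCH[10]
assert t10 - EPOCH == 10
assert t10[-10] - EPOCH == 0
assert t10[3] - EPOCH == 13
assert t10[-10] == EPOCH                # same moment compares equal
assert EPOCH[-1] < EPOCH <= EPOCH[1]    # ordering follows the timeline
```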
 
Timers compare equal to one another, if and only if they represent the same
moment::
 
    >>> t==t
    True
    >>> t!=t
    False
    >>> t10[-10] == t
    True
    >>> t10[-10] != t
    False
 
And the other comparison operators work on timers according to their relative
positions in time, e.g.::
 
    >>> t[-1] < t <= t[1]
    True
    >>> t[-1] > t[-2]
    True
    >>> t[-2] > t[-1]
    False
    >>> t[-1] >= t[-1]
    True
    >>> t<=t
    True
    >>> t<=t[1]
    True
    >>> t<=t[-1]
    False
 
 
Dynamic Time Calculations
~~~~~~~~~~~~~~~~~~~~~~~~~
 
Of course, if arithmetic were all you could do with timers, they wouldn't be
very useful. Their real value comes when you perform dynamic time calculations,
to answer questions like "How long has it been since X happened?", or "Has
Y seconds elapsed since X happened?" And of course, we want any rules that
ask these questions to be recalculated if the answers change!
 
This is where the ``activity.Time`` service comes into play. The ``Time``
class is a ``context.Service`` (see the Contextual docs for more details) that
tracks the current time, and takes care of letting the Trellis know when a rule
needs to be recalculated because of a change in the current time.
 
By default, the ``Time`` service uses ``time.time()`` to track the current
time, whenever a trellis value is changed. But to get consistent timings
while testing, we'll turn this automatic updating off::
 
    >>> from peak.events.activity import Time
    >>> Time.auto_update = False
 
With auto-update off, the time will only advance if we explicitly call
``Time.tick()`` or ``Time.advance()``. ``tick()`` updates the current time
to match ``time.time()``, while ``Time.advance()`` moves the time ahead by a
specified amount (so you can run tests in "simulated time" with perfect
repeatability).
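The difference between the two update styles can be sketched with a toy clock.
``Clock`` here is a hypothetical stand-in for the ``Time`` service, shown only
to illustrate ``tick()`` versus ``advance()``:

```python
# Toy stand-in for the Time service: tick() syncs to the real clock,
# advance() moves simulated time forward deterministically.
import time

class Clock(object):
    def __init__(self):
        self.now = 0.0
    def tick(self):
        self.now = time.time()      # jump to the real current time
    def advance(self, seconds):
        self.now += seconds         # simulated time: repeatable in tests

clock = Clock()
clock.advance(5)
clock.advance(15)
assert clock.now == 20.0            # same result on every test run
clock.tick()
assert clock.now >= 20.0            # now synced to the wall clock
```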
 
So now let's do some dynamic time calculations. In most programs, what you
need to know in a rule is whether a certain amount of time has elapsed
since something has happened, or whether a certain future time has arrived.
 
To do that, you can simply create a timer for the desired moment, and check its
boolean (truth) value::
 
    >>> twenty = Time[20] # go off 20 secs. from now
    >>> bool(twenty) # but we haven't gone off yet
    False
 
    >>> Time.advance(5)
    >>> bool(twenty) # not time yet...
    False
 
    >>> Time.advance(15) # bingo!
    >>> bool(twenty)
    True
 
    >>> Time.advance(7)
    >>> bool(twenty) # remains true even after the exact moment has passed
    True
 
And of course, you can use this boolean test in a rule, to trigger some action::
 
    >>> class AlarmClock(trellis.Component):
    ...     trellis.values(timeout = None)
    ...     def alert(self):
    ...         if self.timeout:
    ...             print "timed out!"
    ...     alert = trellis.rule(alert)
 
    >>> clock = AlarmClock(timeout=Time[20])
    >>> Time.advance(15)
    >>> Time.advance(15)
    timed out!
 
Notice, by the way, that the ``Time`` service can be subscripted with a value
in seconds, to get a timer representing that many seconds from the current
time. (However, ``Time`` is not really a timer object, so don't try to use it
as one!)
 
 
Elapsed Time Tracking
~~~~~~~~~~~~~~~~~~~~~
 
This alarm implementation works by getting a future timer (``timeout``), and
then "goes off" when that future moment is reached. However, we can also
create an "elapsed" timer, and trigger when a certain amount of time has
passed::
 
    >>> class Elapsed(trellis.Component):
    ...     trellis.values(duration = 20)
    ...     trellis.rules(has_run_for = lambda self: Time[0])
    ...
    ...     def alarm(self):
    ...         if self.has_run_for[self.duration]:
    ...             print "timed out!"
    ...     alarm = trellis.rule(alarm)
 
    >>> t = Elapsed() # Capture a start time
    >>> Time.advance(15) # duration is 20, so no alarm yet
 
    >>> t.duration = 10 # duration changed, and already reached
    timed out!
 
    >>> t.duration = 15 # duration changed, but still reached
    timed out!
 
    >>> t.duration = 20 # not reached yet...
 
    >>> Time.advance(5)
    timed out!
 
As you can see, the ``has_run_for`` attribute is a timer that records the
moment when the ``Elapsed`` instance is created. The ``alarm`` rule is then
recalculated whenever the ``duration`` changes -- or elapses.
 
Of course, in complex programs, one usually needs to be able to measure the
amount of time that some condition has been true (or false). For example, how
long a process has been idle (or busy)::
 
    >>> from peak.events.activity import NOT_YET
 
    >>> class IdleTimer(trellis.Component):
    ...     trellis.values(
    ...         idle_for = NOT_YET,
    ...         idle_timeout = 20,
    ...         busy = False,
    ...     )
    ...     trellis.rules(
    ...         idle_for = lambda self:
    ...             self.idle_for.begins_with(not self.busy)
    ...     )
    ...     def alarm(self):
    ...         if self.idle_for[self.idle_timeout]:
    ...             print "timed out!"
    ...     alarm = trellis.rule(alarm)
 
The way this code works is that initially the ``idle_for`` timer is equal to
the special ``NOT_YET`` value, representing a moment that will never be
reached.
 
The ``begins_with()`` method of timer objects takes a boolean value. If the
value is false, ``NOT_YET`` is returned. If the value is true, the lesser of
the existing timer value or ``Time[0]`` (the present moment) is returned.
 
Thus, a statement like::
 
    a_timer = a_timer.begins_with(condition)
 
ensures that ``a_timer`` equals the most recent moment at which ``condition``
was observed to become true. (Or ``NOT_YET``, in the case where ``condition``
is false.)
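In plain Python, the ``begins_with()`` logic amounts to the following sketch,
with numbers standing in for timers and ``None`` playing the role of
``NOT_YET`` (the ``begins_with`` function below is illustrative only):

```python
# The begins_with() rule, sketched with plain numbers as moments.
# (Illustrative logic only, not the peak.events.activity implementation.)

NOT_YET = None   # a moment that is never reached

def begins_with(timer, condition, now):
    """Latest moment at which `condition` was observed to become true."""
    if not condition:
        return NOT_YET              # condition false: reset
    if timer is NOT_YET:
        return now                  # condition just became true
    return min(timer, now)          # keep the earlier recorded moment

t = NOT_YET
t = begins_with(t, False, now=0)    # still busy -> NOT_YET
assert t is NOT_YET
t = begins_with(t, True, now=5)     # becomes idle at t=5
assert t == 5
t = begins_with(t, True, now=9)     # still idle: start moment preserved
assert t == 5
t = begins_with(t, False, now=12)   # busy again -> reset
assert t is NOT_YET
```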
 
So, the ``IdleTimer.alarm`` rule effectively checks whether ``busy`` has been
false for more than ``idle_timeout`` seconds. If ``busy`` is currently true,
then ``self.idle_for`` will be ``NOT_YET``, and subscripting ``NOT_YET``
always returns ``NOT_YET``. Since ``NOT_YET`` is a moment that can never be
reached, the boolean value of the expression is always false while ``busy``
is true.
 
Let's look at the ``IdleTimer`` in action::
 
    >>> it = IdleTimer()
    >>> it.busy = True
    >>> Time.advance(30) # busy for 30 seconds
 
    >>> it.busy = False
    >>> Time.advance(10) # idle for 10 seconds, no timeout yet
 
    >>> Time.advance(10) # ...20 seconds!
    timed out!
 
    >>> Time.advance(15) # idle 35 seconds, no new timeout
 
    >>> it.busy = True # busy again
    >>> Time.advance(5) # for 5 seconds...
 
    >>> it.busy = False
    >>> Time.advance(30) # idle 30 seconds, timeout!
    timed out!
 
    >>> it.idle_timeout = 15 # already at 30, fires again
    timed out!
 
 
Automatically Advancing the Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
In our examples, we've been manually updating the time. But if ``auto_update``
is true, then the time automatically advances whenever a trellis value is
changed::
 
    >>> Time.auto_update = True
    >>> c = trellis.Cell()
    >>> c.value = 42
 
    >>> now = Time[0]
    >>> from time import sleep
    >>> sleep(0.1)
 
    >>> now == Time[0] # time hasn't actually moved forward yet...
    True
 
    >>> c.value = 24
    >>> now == Time[0] # but now it has, since a recalculation occurred
    False
 
This ensures that any rules that use a current time value, or that are waiting
for a timeout, will see the correct time.
 
Note, however, that if your application doesn't change any trellis values for a
long time, then any pending timeouts may not fire for an excessive period of
time. You can, however, force an update to occur by using the ``Time.tick()``
method::
 
    >>> now = Time[0]
    >>> sleep(0.1)
    >>> now == Time[0] # time hasn't actually moved forward yet...
    True
    
    >>> Time.tick()
    >>> now == Time[0] # but now it has!
    False
 
So, an application's main loop can call ``Time.tick()`` repeatedly in order to
ensure that any pending timeouts are being fired.
 
You can reduce the number of ``tick()`` calls significantly, however, if you
make use of the ``next_event_time()`` method. If there are no scheduled events
pending, it returns ``None``::
 
    >>> print Time.next_event_time()
    None
 
But if anything is waiting, like say, our ``IdleTimer`` object from the
previous section, it returns the relative or absolute time at which ``tick()``
will next need to be called::
 
    >>> it = IdleTimer(idle_timeout=30)
 
    >>> Time.next_event_time(relative=True)
    30.0
 
    >>> when = EPOCH[Time.next_event_time(relative=False)]
    >>> when - Time[0]
    30.0
 
    >>> Time.advance(30)
    timed out!
 
(We can't show the absolute time in this example, because it would change every
time this document was run. But we can offset it from ``EPOCH``, and then
subtract it from the current time, to prove that it's equal to an absolute time
30 seconds after the current time.)
 
Armed with this method, you can now write code for your application's event
loop that calls ``tick()`` at the appropriate interval. You will simply need
to define a Trellis rule somewhere that monitors the ``next_event_time()`` and
schedules a call to ``Time.tick()`` if the next event time is not None. You
can use whatever scheduling mechanism your application already includes, such
as a ``wx.Timer`` or Twisted's ``reactor.callLater``, etc.
 
When the scheduled call to ``tick()`` occurs, your monitoring rule will be
run again (because ``next_event_time()`` depends on the current time), thus
repeating the cycle as often as necessary.
 
Note, however, that your rule may be run again *before* the scheduled
``tick()`` occurs, and so may end up scheduling extra calls to ``tick()``.
This should be harmless, but if you want to avoid the repeats you can
always write your rule so that it updates the existing scheduled call time, if
one is pending. (E.g. by updating the ``wx.Timer`` or changing the Twisted
"appointment".)
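The monitoring rule described above boils down to a small piece of logic,
sketched here with a generic ``call_later`` stand-in for ``wx.Timer`` or
Twisted's ``reactor.callLater`` (all of the names in this sketch are
hypothetical):

```python
# Sketch of the tick-scheduling pattern: whenever the next event time
# is known, schedule one pending call to tick(); if nothing is waiting,
# schedule nothing.  (Hypothetical names throughout.)

pending = []                        # our stand-in scheduler's queue

def call_later(delay, callback):
    """Stand-in for wx.Timer / reactor.callLater / similar."""
    pending.append((delay, callback))

def monitor(next_event_time, tick):
    """What the monitoring rule does each time it runs."""
    delay = next_event_time()
    if delay is not None:
        call_later(delay, tick)

fired = []
def tick():
    fired.append("tick")

monitor(lambda: 30.0, tick)         # an event is due in 30 seconds
monitor(lambda: None, tick)         # nothing pending -> nothing scheduled
assert pending == [(30.0, tick)]
pending.pop()[1]()                  # later, the scheduler fires the call
assert fired == ["tick"]
```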
 
 
Event Loops
-----------
 
    >>> def hello(*args, **kw):
    ...     print "called with", args, kw
 
    >>> from peak.events.activity import EventLoop
    >>> Time.auto_update = False # test mode
 
    >>> EventLoop.call(hello, 1, a='b')
    >>> EventLoop.call(hello, 2)
    >>> EventLoop.call(hello, this=3)
    >>> EventLoop.call(EventLoop.stop)
 
    >>> EventLoop.run()
    called with (1,) {'a': 'b'}
    called with (2,) {}
    called with () {'this': 3}
 
    >>> EventLoop.stop()
    Traceback (most recent call last):
      ...
    AssertionError: EventLoop isn't running
 
    >>> EventLoop.call(EventLoop.run)
    >>> EventLoop.call(hello, 4)
    >>> EventLoop.call(EventLoop.stop)
    >>> EventLoop.run()
    Traceback (most recent call last):
      ...
    AssertionError: EventLoop is already running
 
    >>> it = IdleTimer(idle_timeout=5)
    >>> EventLoop.run()
    called with (4,) {}
    timed out!
for a ``Performer`` (because it's not writable), nor can you make it discrete
(since that would imply a readable value, and performer cells exist only for
their side-effects). Creating a Performer cell schedules it for execution as
soon as the current modifier is complete and any normal rules are finished. It
will then be re-executed in the future, after any cells or other trellis-
managed data structures it depended on are changed. (As long as the
Performer isn't garbage collected, of course.)
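
That life cycle can be mimicked in plain Python for illustration. The
``Cell`` and ``Performer`` classes below are deliberately minimal stand-ins,
not the Trellis implementation (real performers run inside the Trellis's
recalculation machinery):

```python
class Cell(object):
    """Minimal observable value: notifies listeners when set."""
    def __init__(self, value):
        self._value = value
        self._listeners = []

    def get(self, listener=None):
        # Reading the value records the reader, so it re-runs on change.
        if listener is not None and listener not in self._listeners:
            self._listeners.append(listener)
        return self._value

    def set(self, value):
        self._value = value
        for listener in list(self._listeners):
            listener()   # re-run each performer that read this cell

class Performer(object):
    """Runs rule() for its side effects immediately, then again whenever
    a cell it read is changed."""
    def __init__(self, rule):
        self.rule = rule
        self.run()       # executed as soon as the performer is created

    def run(self):
        self.rule(self)
```

For example, a performer whose rule logs a cell's value runs once at creation
and again after each ``set()`` on that cell.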
 
 
Garbage Collection
------------------
 
Cells keep strong references to all of the cells whose values they accessed
during rule calculation, and weak references to all of the cells that accessed
them. This ensures that as long as a listener exists, its most-recently
read subject(s) will also continue to exist.
 
Cells whose rules are effectively methods (i.e., cells that represent component
attributes) also keep a strong reference to the object that owns them, by
way of the method's ``im_self`` attribute. This means that as long as some
attribute of a component is being observed, the whole component will continue
to exist.
 
In addition, a component's ``__cells__`` dictionary keeps a reference to all
its cells, creating a reference cycle between the cells and the component.
Thus, ``Component`` instances can only be reclaimed by Python's cycle collector,
and are not destroyed as soon as they go out of scope. You should therefore
avoid giving Component objects a ``__del__`` method, and should explicitly
dispose of any resources that you want to reclaim early.
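
The cycle-collector point can be demonstrated with a plain Python object
graph. ``FakeComponent`` mimics the ``Component``/``__cells__`` cycle, and
its ``close()`` method is a hypothetical example of the explicit early
cleanup recommended above:

```python
import gc, weakref

class FakeComponent(object):
    """Stand-in for a Component whose __cells__ dict refers back to it."""
    def __init__(self):
        self.__cells__ = {"owner": self}   # the reference cycle

    def close(self):
        # Hypothetical explicit-cleanup method: release resources now,
        # instead of waiting for the cycle collector.
        self.__cells__.clear()

gc.collect()                  # start from a clean slate
gc.disable()                  # keep the collector out of the demonstration
comp = FakeComponent()
ref = weakref.ref(comp)
del comp
assert ref() is not None      # plain refcounting can't reclaim the cycle
gc.enable()
gc.collect()
assert ref() is None          # only the cycle collector frees it
```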

 
 
------------------------
Additional Documentation
------------------------
 
There's a lot more to the Trellis package than what's in this brief guide and
tutorial. Here are some links to other documentation provided with the
package:
 
`Time, Event Loops, and Tasks`_ (``Activity.txt`` in the source distribution)
    This manual explains how to create generator-based pseudo-threads, schedule
    activities for idle moments, integrate the Trellis with other event loops
    (e.g. Twisted and wxPython), and implement things like timeouts, delays,
    alarms, etc. in Trellis rules.
 
`Event-Driven Collections with the Trellis`_ (``Collections.txt`` in the source)
    This document provides a brief overview of some additional Trellis data
    structures provided by the package, such as the ``SortedSet`` type.
 
`Software Transactional Memory (STM) And Observers`_ (aka ``STM-Observer.txt``)
    This document shows how some of the underlying Trellis pieces work, and
    in future revisions, it'll include some hints on how to create your own
    custom cell types, etc.
 
 
----------
Appendices
----------

Acknowledgements
==================
 
Ken Tilton's "Cells" library for Common Lisp inspired the implementation of
the Trellis. While Tilton had never heard of Gelernter's Trellis, he
independently discovered the value of having synchronous updates, like the
"sweeps" of Gelernter's design, and combined them with automatic dependency
detection to create his "Cells" library.
 
I heard about this library only because Google sponsored a "Summer of Code"
project to port Cells to Python - a project that produced the PyCells
implementation. My implementation, however, is not a port but a re-visioning
based on native Python idioms and extended to handle mutually recursive rules,
side-effects, rollback, and various other features that do not precisely map
onto the features of Cells, PyCells, or other Python frameworks inspired by
Cells (such as "Cellulose").
 
While the first very rough drafts of this package were done in 2006 on my own
time, virtually all of the work since has been generously funded by OSAF, the

    hard to know which cells are which. There should be a way to give cells
    an identifier, so you know what you're looking at.
 
  * Coroutine/task rules and discrete rules are somewhat unintuitive as to
    their results. It's not easy to tell when you should ``poll()`` or
    ``repeat()``, especially since things will sometimes *seem* to work without
    them. In particular, we probably need a way to return *multiple* values
    from a rule via an output queue. That way, a discrete rule or task's
    recalculation can be separated from mere outputting of queued values.
 
  * Errors in rules can currently clog up the processing of rules that observe
    them. Ideally, errors should cause a rollback of the entire recalculation,
    or at least the parts that were affected by an error, so that the next
    recalculation will begin from the pre-error state.
 
  * Currently, there's no protection against accessing Cells from other
    threads, nor support for having different logical tasks in the same thread
    with their own contexts, services, etc. This should be fixed by using
    the "Contextual" library to manage thread-local (and task-local) state for
    the Trellis, and by switching to the appropriate ``context.State`` whenever
    non-rule/non-modifier code tries to read or write a cell.
 
  * There should probably be an easier way to reference cells directly, instead
    of using ``Cells(ob)['name']`` -- perhaps a ``.link`` property, similar to
    the ``.future`` of "todo" cells, would make this easier.
 
  * Currently, you can set the value of a new cell more than once, to different
    values, as long as it hasn't been read yet. This provides some additional
    flexibility to constructors, but isn't really documented or fully
    specified yet.
 
  * The ``poll()`` and ``repeat()`` functions are undocumented in this release.
 
  * It's a bad idea to use ``on_commit()`` for user-level operations.
 
TrellisDB
  * A system for processing relational-like records and "active queries" mapped

    "GridBagSizer" layouts.
 
TrellisIO
  * Time service & timestamp rules
 
  * IO events
 
  * Cross-thread bridge cells
 
  * signal() events
 
 
