Method combination is performed using the ``combine_actions()`` API function::
 
    >>> from peak.rules.core import combine_actions
 
``combine_actions()`` takes two arguments: a pair of actions. They are
compared using the ``overrides()`` generic function to see if one is more
specific than the other; ``overrides()``, in turn, relies on ``implies()``
to compare the actions' signatures.

For the simplest signatures (tuples of types), this corresponds to a subclass
relationship between the elements of the tuples::
 
    >>> from peak.rules.core import implies
 
    >>> implies(int, object)
    True
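
Between the tuples themselves, implication then holds element-wise (a quick
illustration; ``basestring`` is the common base of ``str`` and ``unicode``)::

    >>> implies((int, str), (object, basestring))
    True
    >>> implies((int, str), (str, str))
    False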

Old-style classes are a special case, since the type of their instances is
``InstanceType``; the implication rules simply conform with ``isinstance()``
here::
 
    >>> class X: pass
 
    >>> implies(X, object)
    True
    >>> from types import InstanceType
    >>> implies(X, InstanceType)
    True
 
 
``istype()`` objects
--------------------
 
Type or class objects are used to represent "this class or a subclass", but
``istype()`` objects are used to represent either "this exact type" (using
``istype(aType, True)``), or "anything but this exact type" (``istype(aType,
False)``). So their implication rules are different.
 
Internally, PEAK-Rules uses ``istype`` objects to represent a call signature
being matched, because the argument being tested is of some exact type. Then,
any rule signatures that are implied by the calling signature are considered
"applicable".
 
So, ``istype(aType, True)`` (the default) must always imply the same type or
class, or any parent class thereof::
 
    >>> from peak.rules import istype
 
    >>> implies(istype(int), int)
    True
    >>> implies(istype(int), object)
    True
    >>> implies(istype(X), InstanceType)
    True
    >>> implies(istype(X), object)
    True
 
But not the other way around::
 
    >>> implies(int, istype(int))
    False
    >>> implies(object, istype(int))
    False
    >>> implies(InstanceType, istype(X))
    False
    >>> implies(object, istype(X))
    False
 
An exact type will also imply any exclusion of a *different* exact type::
 
    >>> implies(istype(int), istype(str, False))
    True
 
In other words, if ``type(x) is int``, that implies ``type(x) is not str``.
But of course, that doesn't work the other way around::
 
    >>> implies(istype(str, False), istype(int))
    False
 
These implication rules are sufficient to bootstrap the basic types-only
rules engine. Additional rules for ``istype`` behavior, such as the
intersection of ``istype`` with other criteria and the more advanced criteria
manipulation used in the full predicate rules engine, are explained in
Criteria.txt.
 
 
Action Types
============
 

Method
------
 
The default action type (for rules with no specified action type) is
``Method``. A ``Method`` combines a body, a signature, a definition-order
serial number, and an optional "chained" action that it can fall back to. All
of these values are optional, except for the body::
 
    >>> from peak.rules.core import Method, overrides
 
    >>> def dummy(*args, **kw):
    ...     print "called with", args, kw

If a method's body takes ``next_method`` as its first argument, the method's
chained (tail) action is passed in to the first function of the chain::

    >>> def overriding_fn(next_method, *args, **kw):
    ...     print "calling", next_method
    ...     return next_method(*args, **kw)

    >>> chain = combine_actions(Method.make(overriding_fn), Method.make(dummy))

    >>> chain(42)
    calling <function dummy at...>
    called with (42,) {}
 
 

When combined with a regular method, an ``Around`` method overrides the
regular one. This forces all the regular methods to be further down the
chain than all of the ``Around`` methods::
 
    >>> from peak.rules.core import Around
 
    >>> combine_actions(Method.make(dummy), Around(overriding_fn))
    Around(<...overriding_fn...>, (), 0, Method(<...dummy...>, (), 0, None))

``Before`` actions are invoked before their tail action, and ``After`` actions
are invoked afterward::
 
    >>> from peak.rules.core import Before, After
 
    >>> def primary(*args,**kw):
    ...     print "primary method called"

    >>> b = Before.make(dummy)
    >>> a = After.make(dummy)
    >>> p = Method.make(primary)
    >>> o = Around.make(overriding_fn)

    >>> combine_actions(b, combine_actions(a, combine_actions(p, o)))(17)
    calling <function before_template at ...>
    called with (17,) {}
    primary method called
    called with (17,) {}

If neither of two combined methods overrides the other (for example, two
plain methods with identical signatures), calling the combined action raises
``AmbiguousMethods``, listing the methods and the arguments it was called
with::

    >>> combine_actions(Method.make(dummy), Method.make(primary))(1, 2, x='y')
    Traceback (most recent call last):
      ...
    AmbiguousMethods: ([Method(...), Method(...)], (1, 2), {'x': 'y'})
 
 
Custom Method Types and Compilation
-----------------------------------
 
Custom method types can be defined by subclassing ``Method``, and used as a
generic function's default method type by setting the function's rules'
``default_actiontype``::

    >>> class MyMethod(Method):
    ...     def __call__(self, *args, **kw):
    ...         print "calling!"
    ...         return self.body(*args, **kw)
 
    >>> from peak.rules import when, abstract
    >>> from peak.rules.core import rules_for
 
    >>> tmp = lambda foo: 42

    >>> def func_with(mtype):
    ...     f = abstract(lambda foo: None)
    ...     rules_for(f).default_actiontype = mtype
    ...     when(f, ())(tmp)
    ...     return f

    >>> f = func_with(MyMethod)
    >>> f(1)
    calling!
    42
 
The ``compile_method(action, engine)`` function takes a method and a dispatch
engine, and returns a compiled version of the action::
 
    >>> from peak.rules.core import compile_method, Dispatching
    >>> engine = Dispatching(f).engine
 
    >>> compile_method(Method(tmp, ()), engine) is tmp
    True
 
However, for our newly defined method type, there is no compilation::
 
    >>> m = MyMethod(tmp, ())
    >>> compile_method(m, engine) is tmp
    False
    >>> compile_method(m, engine) is m
    True
 
This is because our method type redefined ``__call__()`` but did not include
its own ``compiled()`` method.
 
The ``compiled()`` method of a ``Method`` subclass takes an ``Engine`` as its
argument, and should return a callable to be used in place of directly calling
the method itself. It should pass any objects it plans to call (e.g. its tail
or individual submethods) through ``compile_method(ob, engine)``, in order to
ensure that those objects are also compiled::
 
    >>> class MyMethod2(Method):
    ...     def compiled(self, engine):
    ...         print "compiling"
    ...         return compile_method(self.body, engine)
 
    >>> m = MyMethod2(tmp)
    >>> compile_method(m, engine) is tmp
    compiling
    True
 
As you can see, ``compile_method()`` invokes our new ``compiled()`` method,
which ends up returning the original function. And, if we don't define a
``__call__()`` method of our own, we end up inheriting one from ``Method``
that compiles the method and invokes it for us::
 
    >>> m(1)
    compiling
    42
 
However, if we use this method type in a generic function, then the generic
function will cache the compiled version of its methods so they don't have to
be compiled every time they're called::
 
    >>> f = func_with(MyMethod2)
 
    >>> f(1)
    compiling
    42
 
    >>> f(1)
    42
 
And as you can see above, the method gets compiled upon first use, and then
cached.
 
(Note: what caching is done, and when the cache is reset, are heavily
dependent on the specific dispatching engine in use; it can also be the case
that a similar-looking method object will be compiled more than once, because
in each case it has a different tail or match signature.)
 
Now, ``Method`` subclasses do NOT inherit their ``compiled()`` method from
their base classes, unless they are *also* inheriting ``__call__``. This
prevents you from ending up with strangely-broken code in the event
you redefine ``__call__()``, but forget to redefine ``compiled()``::
 
    >>> class MyMethod3(MyMethod2):
    ...     def __call__(self, *args, **kw):
    ...         print "calling!"
    ...         return self.body(*args, **kw)

    >>> m = MyMethod3(tmp)
    >>> m(1)
    calling!
    42
 
As you can see, the new subclass *works*, but doesn't get compiled. So, you
can do your initial debugging and development without compilation by defining
``__call__()``, and then switch over to ``compiled()`` once you're happy with
your prototype.
 
Now, let's define a method type that works like ``MyMethod3``, but is
compiled using a template::
 
    >>> class NoisyMethod(Method):
    ...     def compiled(self, engine):
    ...         print "compiling"
    ...         body = compile_method(self.body, engine)
    ...         return engine.apply_template(noisy_template, body)
 
So far, it looks a little like our earlier compilation. We compile the
body like before, but then, what's that ``apply_template`` stuff?
 
The ``apply_template()`` method of engine objects takes a "template" function
and one or more arguments representing values that need to be accessible in
our compiled function. Let's go ahead and define ``noisy_template`` now::
 
    >>> def noisy_template(__func, __body):
    ...     return """
    ...     print "calling!"
    ...     return __body($args)
    ...     """
 
Template functions are defined using the conventions of DecoratorTools's
``@template_function`` decorator, only without the decorator. The first
positional argument is the generic function the compiled method is being
used with, and any others are up to you.
 
Any use of ``$args`` is replaced with the correct calling signature for
invoking a method of the corresponding generic function, and you *must*
name all of your arguments and local variables such that they won't conflict
with any actual argument names. (In practice, this means you want to use
``__``-prefixed names, which is why we're defining the template outside
the class, to prevent Python from mangling our parameter names and messing up
the template.)
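
For example, for a generic function taking a single argument ``foo``, the
``noisy_template`` above would expand to roughly the following wrapper (just a
sketch; the actual generated code and the wrapper's name are engine-specific)::

    def noisy_wrapper(foo):          # "$args" was replaced with "foo"
        print "calling!"
        return __body(foo)           # __body is bound by apply_template()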
 
Note, too, that all the other caveats regarding ``@template_function``
functions apply, including the fact that the function cannot actually *use* any
of its arguments (or any variables from its containing scope) to determine the
return string -- it must simply return a constant string. (It can, however,
refer to globals in its defining module, as long as they're not shadowed by
the generic function's argument names.)
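
This means, for example, that a template cannot branch on an argument's value
to decide what code to return (a sketch of the anti-pattern, with a made-up
``__flag`` parameter)::

    def broken_template(__func, __flag):
        if __flag:   # WRONG: argument values aren't available at this point
            return "return True"
        return "return False"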
 
Okay, let's see our new method type in action::
 
    >>> f = func_with(NoisyMethod)
 
    >>> f(1)
    compiling
    calling!
    42
 
    >>> f(1)
    calling!
    42
    
As you can see, the method is still compiled just once, but still prints
"calling!" every time it's invoked, as the compiled form of the method is
a purpose-built wrapper function.
 
To save time and memory, ``engine.apply_template()`` tries to memoize calls,
so that it will return the same function for the same inputs, as long as that
function still exists::
 
    >>> from peak.rules import value
    >>> m = NoisyMethod(value(42), ())
 
    >>> m1 = compile_method(m)
    compiling
 
    >>> m2 = compile_method(m)
    compiling
 
    >>> m1 is m2
    True
 
This will only work, however, if all the arguments passed to ``apply_template``
are usable as dictionary keys. So, it's best to use tuples instead of lists,
frozensets instead of sets, etc. (Also, this means you can't pass in keyword
arguments.)
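
For example, a method type whose template needs a collection of extra values
could freeze them into a tuple before applying the template (a hypothetical
sketch; ``options_template`` and ``OptionedMethod`` are made-up names)::

    def options_template(__func, __body, __options):
        return '''
        print "options:", __options
        return __body($args)
        '''

    class OptionedMethod(Method):
        options = ['verbose']            # a list isn't hashable...
        def compiled(self, engine):
            body = compile_method(self.body, engine)
            return engine.apply_template(
                options_template, body, tuple(self.options))  # ...so freeze it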
 
 
Defining Method Precedence
--------------------------
 
You can define one method type's precedence relative to another using the
``>>`` operator (which always returns its right-side operand)::
 
    >>> NoisyMethod >> Method
    <class 'peak.rules.core.Method'>
 
You can also chain ``>>`` operators to define overall method precedence between
multiple types, e.g.::
 
    >>> Around >> NoisyMethod >> Method
    <class 'peak.rules.core.Method'>
 
As long as you don't try to introduce a precedence cycle::
 
    >>> NoisyMethod >> MyMethod2 >> Around
    Traceback (most recent call last):
      ...
    TypeError: <class 'peak.rules.core.Around'> already overrides <class 'MyMethod2'>
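
Once a precedence is declared, ``combine_actions()`` treats the
higher-precedence method type as overriding the other, just as with
``Around`` earlier (a quick check, using the types declared above)::

    >>> combine_actions(Method.make(dummy), NoisyMethod.make(primary))
    NoisyMethod(<...primary...>, (), 0, Method(<...dummy...>, (), 0, None))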
 
 
 
Decorators
==========
 
Decorators can accept an entry point string in place of an actual function,
provided that the PEAK "Importing" package (``peak.util.imports``) is
available. In that case, the registration is deferred until the named module
is imported::
 
    >>> before('some.module:somefunc')(lambda: p("before"))
    <function <lambda> at ...>
 
If the named module is already imported, the registration takes place
immediately, otherwise it is deferred until the named module is actually
imported.
 
This allows you to provide optional integration with modules that might or
might not be used by a given application, without creating a dependency between
your code and that package.
 
Note, however, that if the named function doesn't exist when the module is
imported, an ``AttributeError`` will occur at import time. The syntax of the
target name is lightly checked at call time, however::
 
    >>> before('foo.bar')(lambda: p("before"))
    Traceback (most recent call last):
      ...
    TypeError: Function specifier 'foo.bar' is not in
               'module.name:attrib.name' format
 
    >>> before('foo: bar')(lambda: p("before"))
    Traceback (most recent call last):
      ...
    TypeError: Function specifier 'foo: bar' is not in
               'module.name:attrib.name' format
 
(This is just a sanity check to make sure you didn't accidentally put some
other string first, like the criteria. It won't detect a string that points
to a non-existent module, or various other possible errors, so you should
still verify that your code gets run when the target module is imported and
the relevant conditions apply.)
 
 
Creating Custom Combinations
============================

Rules are created using a simple constructor (``Rule``) that allows you to
create a rule with defaults. The predicate and action type default to ``()``
and ``None`` if not specified::
 
    >>> from peak.rules.core import Rule
    >>> def dummy(): pass
    >>> r = Rule(dummy, sequence=0)
    >>> r
    Rule(<function dummy at ...>, (), None, 0)

RuleSet
=======
 
``RuleSet`` objects hold the rules and policy information for a generic
function, including the default action type and optional optimization hints.
 
Iterating over a ruleset yields its actions::
 
    >>> from peak.rules.core import RuleSet
    >>> rs = RuleSet()
    >>> list(rs)
    []

Observers can be added with the ``subscribe()`` and ``unsubscribe()`` methods.
Observers have their ``actions_changed`` method called with an "added" set
and a "removed" set of action definitions. (An action definition is a
tuple of the form ``(actiontype, body, signature, serial)``, and can thus
be used to create action objects.)
 
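For illustration, a minimal observer might look something like this (a sketch
only; ``Watcher`` is a made-up class, and exactly which action definitions
appear in ``added`` and ``removed`` depends on the rules involved)::

    class Watcher:
        def actions_changed(self, added, removed):
            for action in added:
                print "added:", action
            for action in removed:
                print "removed:", action

    watcher = Watcher()
    rs.subscribe(watcher)       # start receiving change notifications
    rs.add(Rule(dummy))         # notifies the watcher of the new action
    rs.unsubscribe(watcher)     # stop receiving notifications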
