[PEAK] Resolving AmbiguousMethods errors in PEAK-Rules

P.J. Eby pje at telecommunity.com
Mon Aug 16 20:19:12 EDT 2010

This is something I'm working on for the PEAK-Rules docs; feedback welcome!

How To Fix Typical AmbiguousMethods Errors

Most AmbiguousMethods errors are due to mixing "default" cases with 
normal cases, e.g.::

     def jsonify(ob):
         """Jsonify something"""

     @when(jsonify, "hasattr(ob,'__json__')")
     def default_jsonify(ob):
         return ob.__json__()

     @when(jsonify, int)
     def jsonify_int(ob):
         # ...etc.

The problem with the above code is that it doesn't say what should 
happen if an object is, say, a subclass of ``int`` that has its own 
``__json__`` method.  Should the "int" case be used, or the 
``__json__`` method?

This isn't something PEAK-Rules can decide for itself: in the face of 
ambiguity, it refuses the temptation to guess which one you meant.

However, for cases like this, there's a really, really simple fix, 
and it looks like this::

     def jsonify(ob):
         """Jsonify something"""
         if hasattr(ob,'__json__'):
             return ob.__json__()
         raise TypeError("Can't jsonify", ob)

     @when(jsonify, int)
     def jsonify_int(ob):
         # ...etc.

See, PEAK-Rules is really designed to *add specialization* to 
functions, which is why jsonify in this example doesn't even need a 
decorator!  It's just a regular function, describing the *default* 
case: what should happen if there *isn't* an explicit rule.

So, the right way to define a generic function with PEAK-Rules is to 
put the "base case" in the function's initial definition.  That "base 
case" is whatever you want to happen whenever there isn't an explicit rule.

And, because you're writing sequential code there, you can trivially 
enforce whatever precedence you want, just by the order of your 
if-then's, and it's absolutely unambiguous what you intend.
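For instance, a base case might order its defaults like this (a plain-Python 
sketch; the list/tuple branch is a hypothetical extra default, not part of 
the original example):

```python
def jsonify(ob):
    """Jsonify something: the base case, with precedence fixed
    by the order of the if-tests."""
    if hasattr(ob, '__json__'):
        # checked first, so a __json__ method always wins among defaults
        return ob.__json__()
    if isinstance(ob, (list, tuple)):
        # hypothetical extra default: jsonify the elements recursively
        return [jsonify(x) for x in ob]
    raise TypeError("Can't jsonify", ob)
```

Each later ``@when`` rule then only has to cover one special case, without 
restating this ordering.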

Then, use PEAK-Rules to define only the *special* cases.  The 
*unique* cases.  The, "yeah, but I need this specific situation to do 
it this way..." cases.  Think of it as subclassing for functions - 
you need to put the default cases in your base class, not a subclass!  ;-)

Really, that's what PEAK-Rules is for: to separate the nice simple 
elegant definition of a (dare I say it?) "generic" function, from all 
the special cases that arise in real applications.

You know what I mean.  Those, "Oh, you need it to work with this 
other kind of object?  But only if this other condition holds?" cases.

And when you divide code into "base case + rules", you make both 
reading and writing the code more *incremental*.  First you read (or 
write) the base case, and get an idea of what it's supposed to do in 
general.  Then, you read (or write) rules in modular groups, to 
understand (or implement) a set of related use cases.

And if you were to look at PEAK-Rules' own source code, you would see 
that it follows this principle itself.  Most of its own generic 
functions have a "base case" body; relatively few are abstract.

And as you go through each module and group of classes, you'll see 
batches of rules, updating the previous information.

For example, the "implies()" generic in PEAK-Rules is initially 
defined as "x implies y if x==y".  Then, later, we learn how 
subclasses imply their bases, how booleans imply or are implied by 
other kinds of things, how conditions imply each other, and so on.
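A rough plain-Python analogue of that layering (not the library's actual 
code -- the class branch stands in for the later subclass rule, folded into 
one function purely for illustration):

```python
def implies(x, y):
    """Base case: x implies y if x == y."""
    if isinstance(x, type) and isinstance(y, type):
        # later "rule": a subclass implies each of its base classes
        return issubclass(x, y)
    return x == y
```

In PEAK-Rules itself, each of those refinements is a separate ``@when`` 
method added in a later module, not an ``if`` branch.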

Okay, you say, all that's very well and good for your base cases, but 
suppose what I *really* want is for the "__json__" rule to take 
precedence **over** everything else?

Well, in that case, your code should look like this::

     def jsonify(ob):
         """Jsonify something"""

     @around(jsonify, "hasattr(ob,'__json__')")  # this is now 'around'
     def default_jsonify(ob):
         return ob.__json__()

     @when(jsonify, int)
     def jsonify_int(ob):
         # ...etc.

The ``@around`` says, "this is a really special case -- it's so 
special that it takes precedence over anything defined with a 
``@when``".  Voila.  Problem solved a different way.
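To see the precedence this buys you, here's a hand-rolled emulation in 
ordinary Python (not PEAK-Rules itself; ``Special`` and the helper names 
are made up for illustration):

```python
def jsonify_int(ob):
    # stands in for the @when(jsonify, int) method
    return str(ob)

def jsonify(ob):
    # the @around branch runs before any @when method...
    if hasattr(ob, '__json__'):
        return ob.__json__()
    # ...otherwise it hands off, the way next_method would
    if isinstance(ob, int):
        return jsonify_int(ob)
    raise TypeError("Can't jsonify", ob)

class Special(int):
    """An int subclass with its own __json__: no longer ambiguous."""
    def __json__(self):
        return '"special"'
```

Here a ``Special`` instance hits the ``__json__`` branch first, while a 
plain ``int`` still falls through to the ``int`` rule.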

PEAK-Rules does this sort of thing on occasion internally as well; 
for example, its ``intersect()`` function (that returns the logical 
intersection of two conditions) has an ``@around`` method that 
basically says, "if one of the conditions implies the other, just 
return that condition, otherwise, go on with whatever you'd have done 
normally".  The code looks like::

     def intersect(c1, c2):
         """Return the logical intersection of two conditions"""

     @around(intersect, (object, object))
     def intersect_if_implies(next_method, c1, c2):
         if implies(c1, c2):
             return c1
         elif implies(c2, c1):
             return c2
         return next_method(c1, c2)

(As you can see, it doesn't even use a fancy condition for this, as 
it's built on PEAK-Rules' types-only dispatching.)
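The effect, restricted to bare classes for illustration (a sketch, not the 
library's real dispatch machinery):

```python
def implies(c1, c2):
    # for bare classes, implication is just subclassing
    return issubclass(c1, c2)

def intersect(c1, c2):
    # emulates the @around method: a redundant intersection collapses
    if implies(c1, c2):
        return c1
    if implies(c2, c1):
        return c2
    # "next_method": fall through to the normal combination
    return (c1, c2)
```

So ``intersect(bool, int)`` just returns ``bool``, since ``implies(bool, 
int)`` holds.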

Now, while I can't say for certain that *every* possible way of 
having ambiguous methods will be resolved by one or the other of 
these patterns, I will say that I've not yet seen a case that isn't.

So, in summary:

* If it's a default case, put it in the original function definition

* If it's a special case that overrides everything else, use ``@around``

* If you get an ambiguous method error that can't be resolved in 
either of the above ways, or if the resolution would cause other 
problems, drop me a note via the PEAK mailing list and let me know!
