[PEAK] PEAK-Rules for Python3

Cara ceridwen.mailing.lists at gmail.com
Sat Apr 4 01:29:43 EDT 2015


> PEAK-Rules' decorators for methods defined in classes add in the
> current class as a restriction on applicability of the method; to do
> this, they need the class object, but that object can only be obtained
> *after* the class is defined.  They do this by *invisibly* decorating
> the class, so they can be notified when the class is created.  This is
> why the whole PEP business was needed: the mechanism used for this
> invisible class decoration in Python 2 went away in Python 3.

I see what's going on now.  Thanks.
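
To spell out the Python 2 mechanism for anyone reading the archive: it
is essentially the __metaclass__ trick sketched below.  This is my own
heavily simplified version (the names and the frame depth are mine, and
the real code in peak.util.decorators handles considerably more):

    import sys

    def decorate_next_class(decorator, depth=2):
        # Plant a temporary __metaclass__ in the class body that is
        # currently executing, so Python 2 calls us back with the
        # finished class and we can return decorator(cls).  The frame
        # depth depends on how this helper is reached.
        frame = sys._getframe(depth)
        previous = frame.f_locals.get('__metaclass__', type)

        def callback(name, bases, namespace):
            namespace.pop('__metaclass__', None)
            cls = previous(name, bases, namespace)
            return decorator(cls)

        frame.f_locals['__metaclass__'] = callback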

> Yeah, DecoratorTools needs some significant work to port to Python 3,
> in order for the class-decorating and metaclass-related features to
> work, and PEAK-Rules actually does use some of them, directly or via
> AddOns.  I'm not sure how many of the test failures actually represent
> sticking points for PEAK-Rules, though.

> <patch snipped>
> 
> This is basically a monkeypatch to post-process class decorators on
> Python 3, by detecting DecoratorTools' metaclass-decorator protocol
> and only applying the decoration part, not the metaclass part.  I
> would be very interested in seeing what test failures are left after
> you add this, and the code I just checked in to fix the
> unittest-related failure.  If this patch above works, then we should
> also see a lot fewer failures for AddOns as well.

I applied a slightly-modified version of this patch (based on the one
you'd checked into the repository) to decorators.py.  AddOns now passes
on 2.7 and has only shallow failures on 3.4 (unorderable types in dir()
and <function Demo2.dummy ...> instead of <function dummy ...>).
DecoratorTools passes on 2 (except for the error caused by the unittest
changes) and has only two substantive failures on 3.
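
Both of the remaining failures happen inside the new __build_class__
hook.  Reconstructing its rough shape from the tracebacks (this is not
the actual decorators.py code, and the decorator handling is elided):

    import builtins

    old_build_class = builtins.__build_class__

    def py3_build_class(func, name, *bases, **kw):
        # Build the class normally, then, if the class body used
        # DecoratorTools' __metaclass__ protocol, apply only the queued
        # class decorators rather than substituting the metaclass.
        cls = old_build_class(func, name, *bases, **kw)
        if '__metaclass__' in cls.__dict__:
            pass  # apply the queued decorators here (details omitted)
        return cls

    builtins.__build_class__ = py3_build_class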

ERROR: testMixedMetas (test_decorators.ClassDecoratorTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "~/code/peak-rules/DecoratorTools/test_decorators.py", line 236,
in testMixedMetas
    class C(B1,B2, with_metaclass(M3)):
  File "~/code/peak-rules/DecoratorTools/peak/util/decorators.py", line
762, in py3_build_class
    cls = old_build_class(func, name, *bases, **kw)
TypeError: metaclass conflict: the metaclass of a derived class must be
a (non-strict) subclass of the metaclasses of all its bases

This could easily be some interaction with future's with_metaclass, or
something else entirely could be going on.
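
For reference, with_metaclass in future (and six) is roughly the
temporary-metaclass trick below (simplified from memory; the real
helpers do more), which is exactly the kind of thing that could collide
with another class-creation hook:

    def with_metaclass(meta, *bases):
        # The returned throwaway base has a metaclass that intercepts
        # creation of the real class and rebuilds it with `meta`.
        class metaclass(type):
            def __new__(cls, name, this_bases, d):
                return meta(name, bases, d)
        return type.__new__(metaclass, 'temporary_class', (), {})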

ERROR: testSingleExplicitMeta (test_decorators.ClassDecoratorTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "~/code/peak-rules/DecoratorTools/test_decorators.py", line 211,
in testSingleExplicitMeta
    class C(M, with_metaclass(M)):
  File "~/code/peak-rules/DecoratorTools/peak/util/decorators.py", line
765, in py3_build_class
    if '__metaclass__' in cls.__dict__:
AttributeError: 'list' object has no attribute '__dict__'

This also occurs in testOrder.  Repeated calls to decorators seem to be
nesting a list around each call.  I'm not sure why, though I'm assuming
it's related to pong() getting called multiple times.

> > And then there's a large collection of errors in BytecodeAssembler on
> > Python 2 I haven't assessed,
>
> Huh?  I just ran "python27 setup.py test" on a fresh checkout of
> BytecodeAssembler and it comes back clean.

I went back and checked.  It turns out that when I run the doctests
(with the usual option flags) directly on Python 2.7, after fixing a
couple of shallow problems, there are no errors.  When I spot-checked a
few examples in the interpreter, they also ran fine.  When I run the
doctests with `setup.py test`, I get a cascade of errors.  I don't know
why there's a difference.  Could it be setuptools?  I'm using setuptools
7.0 freshly installed from PyPI.
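
(By running them "directly" I mean something along these lines; the
file name and flags here are placeholders rather than exactly what the
suite uses:)

    import doctest

    # Run a doctest file by hand, outside setuptools, with ELLIPSIS on.
    doctest.testfile('README.txt', optionflags=doctest.ELLIPSIS)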

> I'm playing around with BytecodeAssembler, and have gotten it down to
> only a couple hundred lines of error output on the tests; no crashes
> yet on 3.1 or 3.2.  FYI, the way I changed the call signature for
> creating code objects was to add a zero as the *second* argument to
> code(); if you added the extra argument someplace else it might be
> what's causing your core dump.

I'd changed the second argument to 0, yes.  The segfaults could easily
be because I'm running on 3.4 while you're trying 3.1 and 3.2.  As far
as I know, there were no bytecode changes between 3.2 and 3.4, but there
were changes made to the interpreter.  FWIW, 2/3-compatible codebases
often only aim to support 3.3+, because 3.3 brought back the u''
notation for explicit Unicode strings.  I also changed the ord() call on
Python 3 in assembler.py to a no-op, since indexing a bytestring returns
an integer there (a rough sketch of both tweaks follows the disassembly
below).  After that, I found two segfaults, at l.2261 and l.2278:

    >>> f = eval(c.code()) # doctest:+SKIP
    >>> f
    <function f at ...>

    >>> c.return_( # doctest:+SKIP
    ...     Function(Return(Function(Return(Local('a')))),
    ...     'f', ['a', 'b'], 'c', 'd', [99, 66])
    ... )
    >>> dis(c.code())
      0           0 LOAD_CONST               1 (99)
                  3 LOAD_CONST               2 (66)
                  6 LOAD_CONST               3 (<... f ..., file "<string>", line -1>)
                  9 MAKE_FUNCTION            2
                 12 RETURN_VALUE
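
For concreteness, the two tweaks mentioned above amount to roughly the
following (my own helper names, assuming the 3.4-era code constructor
signature; this is not how assembler.py actually spells it):

    import sys, types

    if sys.version_info[0] >= 3:
        # Python 3 (through 3.4, at least) inserts kwonlyargcount as
        # the *second* argument of the code constructor; passing 0
        # there keeps the Python 2 argument order working unchanged.
        def make_code(argcount, *rest):
            return types.CodeType(argcount, 0, *rest)

        # Indexing bytes already yields an int on Python 3, so the
        # ord() call becomes a no-op.
        def byte_to_int(b):
            return b
    else:
        make_code = types.CodeType
        byte_to_int = ord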

After skipping those doctests, running the doctests directly on Python 3
now produces the following log: http://pastebin.com/aR9P1JZb .  All of
the errors there (not counting the segfaults) look shallow to me, along
the lines of what you're seeing.  Running the doctests on Python 3 using
`setup.py test` also works now, and I still have no idea why.

With the dependencies surprisingly close to working, I'm now seeing a
lot more errors in PEAK-Rules.  I simplified the implication rules for
class types as you discussed.  (I started programming in Python long
after old-style classes were deprecated, so the logic wasn't transparent
to me.)  To get rid of another bug, I commented out l.965 in core.py:

when(rules_for, type(After.sorted))(lambda f: rules_for(f.__func__))

My interpretation of this line is that it was stripping the function out
of unbound methods, but it's entirely possible I'm wrong.  Commenting it
out did give me a more useful log, but unfortunately it's still huge,
because quite a few tests are throwing infinite-recursion errors, so I
can't post it.  The principal cycle seems to be between these three
lines in core.py:

l.62:  def implies(s1,s2):
l.596: if implies(key, sig):
l.736: if not implies(t1,t2):

The first test where that comes up is testKwArgHandling.  Calling cls =
old_build_class(func, name, *bases, **kw)  in decorators.py is raising
"TypeError: nonempty __slots__ not supported for subtype of 'int'" in
many places.  There's also a syntax error on l.140 of imports.py: "raise
exc[0],exc[1],exc[2]".
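
(The usual 2/3-compatible fix for that last one is six.reraise(*exc),
or a hand-rolled equivalent; the sketch below is the Python 3 branch of
that approach, not what imports.py currently does:)

    def reraise(tp, value, tb=None):
        # Python 3 replacement for `raise tp, value, tb`.
        if value is None:
            value = tp()
        if value.__traceback__ is not tb:
            raise value.with_traceback(tb)
        raise value

    # roughly: reraise(*exc) instead of `raise exc[0], exc[1], exc[2]`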

Cara


