The PEAK Developers' Center: MonkeyTyping

"Monkey Typing" for Agile Type Declarations

Python has always had "duck typing": a way of implicitly defining types by the methods an object provides. The name comes from the saying, "if it walks like a duck and quacks like a duck, it must be a duck". Duck typing has enormous practical benefits for small and prototype systems. For very large frameworks, however, or applications that comprise multiple frameworks, some limitations of duck typing can begin to show.

This PEP proposes an extension to "duck typing" called "monkey typing" that preserves most of the benefits of duck typing, while adding new features to enhance inter-library and inter-framework compatibility. The name comes from the saying, "Monkey see, monkey do", because monkey typing works by stating how one object type may mimic specific behaviors of another object type.

Monkey typing can also potentially form the basis for more sophisticated type analysis and improved program performance, as it is essentially a simplified form of concepts that are also found in languages like Dylan and Haskell. It is also a straightforward extension of Java casting and COM's QueryInterface, which should make it easier to represent those type systems' behaviors within Python as well.


Motivation

Many interface and static type declaration mechanisms have been proposed for Python over the years, but few have met with great success. As Guido has said recently [1]:

One of my hesitations about adding adapt() and interfaces to the core language has always been that it would change the "flavor" of much of the Python programming we do and that we'd have to relearn how to write good code.

Even for widely-used Python interface systems (such as the one provided by Zope), interfaces and adapters seem to require this change in "flavor", and can require a fair amount of learning in order to use them well and avoid various potential pitfalls inherent in their use.

Thus, spurred by a discussion on PEP 246 and its possible use for optional type declarations in Python [2], this PEP is an attempt to propose a semantic basis for optional type declarations that retains the "flavor" of Python, and prevents users from having to "relearn how to write good code" in order to use the new features successfully.

This PEP directly competes with PEP 245, which proposes a syntax for Python interfaces. If some form of this proposal is accepted, it would be unnecessary for a special interface type or syntax to be added to Python, since normal classes and partially or completely abstract classes will be routinely usable as interfaces. Some packages or frameworks, of course, may have additional requirements for interface features, but they can use metaclasses to implement such enhanced interfaces without impeding their ability to be used as interfaces by this PEP's system for creating extenders.

Of course, given the number of previous failed attempts to create a type declaration system for Python, this PEP is an act of extreme optimism, and it will not be altogether surprising if it, too, ultimately fails. However, if only because the record of its failure will be useful to the community, it is worth at least making an attempt. (It would also not be altogether surprising if this PEP results in the ironic twist of convincing Guido not to include type declarations in Python at all!)

Although this PEP will attempt to make adaptation easy, safe, and flexible, the discussion of how it will do that must necessarily delve into many detailed aspects of different use cases for adaptation, and the possible pitfalls thereof.

It's important to understand, however, that developers do not need to understand more than a tiny fraction of what is in this PEP, in order to effectively use the features it proposes. Otherwise, you may gain the impression that this proposal is overly complex for the benefits it provides, even though virtually none of that complexity is visible to the developer making use of the proposed facilities. That is, the value of this PEP's implementation lies in how much of this PEP will not need to be thought about by a developer using it!

Therefore, if you would prefer an uncorrupted "developer first impression" of the proposal, please skip the remainder of this Motivation and proceed directly to the Specification section, which presents the usage and implementation. However, if you've been involved in the Python-Dev discussion regarding PEP 246, you probably already know too much about the subject to have an uncorrupted first impression, so you should instead read the rest of this Motivation and check that I have not misrepresented your point of view before proceeding to the Specification. :)

Why Adaptation for Type Declarations?

As Guido acknowledged in his optional static typing proposals, having type declarations check argument types based purely on concrete type or conformance to interfaces would stifle much of Python's agility and flexibility. However, if type declarations are used instead to adapt objects to an interface expected by the receiver, Python's flexibility could in fact be improved by type declarations.

PEP 246 presents a basic implementation model for automatically finding an appropriate adapter, given an object to adapt, and a desired interface. However, in recent discussions on the Python developers' mailing list, it came out that there were many open issues about what sort of adapters would be useful (or dangerous) in the context of type declarations.

Over a long period of time, it became clear that there are really two fundamentally different types of adaptation that are in common use. One type is the "extender", whose purpose is to extend the capability of an object or allow it to masquerade as another type of object. An "extender" is not truly an object unto itself, merely a kind of "alternate personality" for the object it adapts. For example, a power transformer might be considered an "extender" for a power outlet, because it allows the power to be used with different devices than it would otherwise be usable for.

By contrast, an "independent adapter" is an object that provides entirely different capabilities from the object it adapts, and therefore is truly an object in its own right. While it only makes sense to have one extender of a given type for a given base object, you may have as many instances of an independent adapter as you like for the same base object. For example, Python iterators are independent adapters, as are views in a model-view-controller framework, since each iterable may have many iterators in existence, each with its own independent state. Resuming the previous analogy of a power outlet, you may consider independent adapters to be like appliances: you can plug more than one lamp into the same outlet, and different lamps may be on or off at a given point in time. Many appliances may come and go over the lifetime of the power outlet -- there is no inherent connection between them because the appliances are independent objects rather than mere extensions of the power outlet.

A key distinction between extenders and independent adapters is the "as a" relationship versus the "has a" relationship. An iterable "has" iterators and a model "has" views. But an extender represents an "as a" relationship, like treating a Person "as an" Employee, or treating a string "as a" filename.

For example, Jason Orendorff's path module offers a string subclass for representing file or directory paths, supporting operations like rename() and walkfiles(). Using this class as an extender would allow routines to declare that they want a path argument, yet allow the caller to pass in a string. The routine would then be able to call path methods on the object it receives. Yet, if the routine in turn passed that object to another routine that needs a string, the second routine would receive the original string.

In some ways, this approach can actually improve on subclassing. The path module's string subclass inherits many string methods that have no meaning for path objects, and it also has a different meaning for __iter__ than a string does. While iterating over a string yields the characters in the string, iterating over a path yields the files within the directory specified by the path. Thus, a path object is not really substitutable for a string, and today passing a path object to a routine that expects to be able to iterate over the string's characters would break.

However, if it were implemented as an extender, the path type could supply only methods that make sense for a path, and any given routine can choose to treat the object "as" either a string or a path, according to its need.

PEP 246 was originally proposed for an explicit adaptation model where an adapt() function is called to retrieve an "adapter", where no distinction was made between extenders and independent adapters. However, the adapting code in this model always has access to the "original" object, so the distinction didn't matter: you could always create new extenders or independent adapters from the original object, so you had complete control. Also, PEP 246 permitted either the caller of a function or the called function to perform the adaptation, so the scope and lifetime of the resulting adapter could be explicitly controlled in a straightforward way.

By contrast, the type declaration syntax Guido proposed would perform adaptation at the boundary between caller and callee, making it difficult for the caller to control an independent adapter's lifetime, or for the callee to obtain the "original" object in order to access a different extender or create a new independent adapter.

The discussion that followed these matters also made it clear that although PEP 246 provides excellent support for creating independent adapters, it offers few conveniences for creating extenders, and it is extenders that are the primary domain of type declarations. Guido, however, has also said [1]:

I don't believe for a second that all [independent] adapters are bad [in the context of type declarations], even though I expect that [extenders] are always good.

Therefore, this PEP proposes support for creating extenders, in addition to PEP 246's support for creating independent adapters. We also propose that type declarations be limited to extenders by default, while allowing independent adapters upon request. That is, independent adapters registered for use with adapt() would be required to specify an additional option or make an additional API call declaring that they are safe for use with type declarations, since it is not always appropriate to create new independent objects just because a function or method call has occurred.

Currently, it is very easy to write good independent adapters in Python: because they are independent objects, it suffices to write a class with the desired functionality. It is much harder to write good extenders, because their state needs to be "sticky", in the sense that it must remain attached to the extended object.

Also, extenders written as adapter classes are not composable. If two interfaces have overlapping functionality, it's often necessary to create separate adapter classes for each interface. Conversely, if two adapter classes are written to support different interfaces, they cannot be automatically combined to form a single extender for an interface that includes the operations of both original interfaces.

This PEP therefore focuses on describing an extension to PEP 246 that automatically creates extender classes by combining simple declarations in Python code. These declarations are based on defining extenders for operations, rather than for interfaces as a whole. It is then possible to automatically recombine operations to create an extender for an interface whose operations are known. The result is an easy, intuitive way to create extenders that "just work" without a lot of attention to the mechanics.

Adapter Composition

One other issue that was discussed heavily on Python-Dev regarding PEP 246 was adapter composition. That is, adapting an already-adapted object. Many people spoke out against implicit adapter composition (which was referred to as transitive adaptation), because it introduces potentially unpredictable emergent behavior. That is, a local change to a program could have unintended effects at a more global scale.

Using adaptation for type declarations can produce unintended adapter composition. Take this code, for example:

def foo(bar: Baz):
    whack(bar)

def whack(ping: Whee):
    ...

If a Baz instance is passed to foo(), it is not wrapped in an adapter, but is then passed to whack(), which must then adapt it to the Whee type. However, if an instance of a different type is passed to foo(), then foo() will receive an adapter to make that object act like a Baz instance. This adapter is then passed to whack(), which further adapts it to a Whee instance, thereby composing a second adapter onto the first, or perhaps failing with a type error because there is no adapter available to adapt the already-adapted object. (There can be other side effects as well, such as when attempting to compare implicitly adapted objects or use them as dictionary keys.)
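The double-wrapping described above can be sketched concretely. All class names here are hypothetical stand-ins for generated adapters, not the proposed machinery:

```python
# Hypothetical stand-in adapters, illustrating composition of two
# independent adapters around one object.

class AsBaz:
    """Adapts an arbitrary object to a Baz-like interface."""
    def __init__(self, obj):
        self.obj = obj

class AsWhee:
    """Adapts a Baz-like object to a Whee-like interface."""
    def __init__(self, obj):
        self.obj = obj

thing = object()
in_foo = AsBaz(thing)       # what foo() receives for a non-Baz argument
in_whack = AsWhee(in_foo)   # what whack() receives: an adapter of an adapter

# The original object is now buried two layers deep, and identity is lost,
# which is what breaks comparisons and dictionary-key use:
assert in_whack.obj.obj is thing
assert in_whack is not thing and in_foo is not thing
```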

However, these problems are a direct consequence of not distinguishing between extenders and independent adapters. An extender should not be re-adapted; instead, the original object should be retrieved from the extender and re-adapted. PEP 246 is currently undergoing changes to allow supporting this behavior.

Interfaces vs. Duck Typing

An "interface" is generally recognized as a collection of operations that an object may perform, or that may be performed on it. Type declarations are then used in many languages to indicate what interface is required of an object that is supplied to a routine, or what interface is provided by the routine's return value(s).

The problem with this concept is that interface implementations are typically expected to be complete. In Java, for example, you can't say that your class implements an interface unless you actually add all of the required methods, even if some of them aren't needed in your program yet.

A second problem with this is that incompatible interfaces tend to proliferate among libraries and frameworks, even when they deal with the same basic concepts and operations. Just the fact that people might choose different names for otherwise-identical operations makes it considerably less likely that two interfaces will be compatible with each other!

There are two missing things here:

  1. Just because you want to have an object of a given type (interface) doesn't mean you will use all possible operations on it.
  2. It'd be really nice to be able to map operations from one interface onto another, without having to write wrapper classes and possibly having to write dummy implementations for operations you don't need, and perhaps can't even implement at all!

On the other hand, the idea of an interface as a collection of operations isn't a bad idea. And if you're the one using the interface's operations, it's a convenient way to do it. This proposal seeks to retain this useful property, while ditching much of the "baggage" that otherwise comes with it.

What we would like to do, then, is allow any object that can perform operations "like" those of a target interface, to be used as if it were an object of the type that the interface suggests.

As an example, consider the notion of a "file-like" object, which is often referred to in the discussion of Python programs. It basically means, "an object that has methods whose semantics roughly correspond to the same-named methods of the built-in file type."

It does not mean that the object must be an instance of a subclass of file, or that it must be of a class that declares it "implements the file interface". It simply means that the object's namespace mirrors the meaning of a file instance's namespace. In a phrase, it is "duck typing": if it walks like a duck and quacks like a duck, it must be a duck.

Traditional interface systems, however, rapidly break down when you attempt to apply them to this concept. One repeatedly used measuring stick for proposed Python interface systems has been, "How do I say I want a file-like object?" To date, no proposed interface system for Python (that this author knows about, anyway) has had a good answer for this question, because they have all been based on completely implementing the operations defined by an interface object, distinct from the concrete file type.

Note, however, that this alienation between "file-like" interfaces and the file type leads to a proliferation of incompatible interfaces being created by different packages, each declaring a different subset of the total operations provided by the file type. This then leads further to the need to somehow reconcile the incompatibilities between these diverse interfaces.

Therefore, in this proposal we will turn both of those assumptions upside down, by proposing to declare conformance to individual operations of a target type, whether the type is concrete or abstract. That is, one may define the notion of "file-like" without reference to any interface at all, by simply declaring that certain operations on an object are "like" the operations provided by the file type.

This idea will (hopefully) better match the uncorrupted intuition of a Python programmer who has not yet adopted traditional static interface concepts, or of a Python programmer who rebels against the limitations of those concepts (as many Python developers do). And, the approach corresponds fairly closely to concepts in other languages with more sophisticated type systems (like Haskell typeclasses or Dylan protocols), while still being a straightforward extension of more rigid type systems like those of Java or Microsoft's COM (Component Object Model).


Specification

For "file-like" objects, the standard library already has a type which may form the basis for compatible interfacing between packages; if each package denotes the relationship between its types' operations and the operations of the file type, then those packages can accept other packages' objects as file parameters.

However, the standard library cannot contain base versions of all possible operations for which multiple implementations might exist, so different packages are bound to create different renderings of the same basic operations. For example, one package's Duck class might have walk() and quack() methods, where another package might have a Mallard class (a kind of duck) with waddle() and honk() methods. And perhaps another package might have a class with moveLeftLeg() and moveRightLeg() methods that must be combined in order to offer an operation equivalent to Duck.walk().

Assuming that the package containing Duck has a function like this (using Guido's proposed optional typing syntax [2]):

def walkTheDuck(duck: Duck):
    duck.walk()

This function expects a Duck instance, but what if we wish to use a Mallard from the other package?

The simple answer is to allow Python programs to explicitly state that an operation (i.e. function or method) of one type has semantics that roughly correspond to those of an operation possessed by a different type. That is, we want to be able to say that Mallard.waddle() is "like" the method Duck.walk(). (For our examples, we'll use decorators to declare this "like"-ness, but of course Python's syntax could also be extended if desired.)

If we are the author of the Mallard class, we can declare our compatibility like this:

class Mallard(Waterfowl):

    @like(Duck.walk)
    def waddle(self):
        ...  # walk like a duck!

This is an example of declaring the similarity inside the class to be extended. In many cases, however, you can't do this because you don't control the implementation of the class you want to use, or even if you do, you don't wish to introduce a dependency on the foreign package.

In that case, you can create what we'll call an "external operation", which is just a function that's declared outside the class it applies to. It's almost identical to the "internal operation" we declared inside the Mallard class, but it has to call the waddle() method, since it doesn't also implement waddling:

@like(Duck.walk, for_type=Mallard)
def duckwalk_by_waddling(self):
    self.waddle()

Whichever way the operation correspondence is registered, we should now be able to successfully call walkTheDuck(Mallard()). Python will then automatically create an extender object that wraps the Mallard instance with a Duck-like interface. That extender will have a walk() method that is just a renamed version of the Mallard instance's waddle() method (or of the duckwalk_by_waddling external operation).

For any methods of Duck that have no corresponding Mallard operation, the extender will omit that attribute, thereby maintaining backward compatibility with code that uses attribute introspection or traps AttributeError to control optional behaviors. In other words, if we have a MuteMallard class that has no ability to quack(), but has an operation corresponding to walk(), we can still safely pass its instances to walkTheDuck(), but if we pass a MuteMallard to a routine that tries to make it quack, that routine will get an AttributeError.
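As a sketch of this promised behavior, here is what the generated extender for a quackless MuteMallard might look like if it were written by hand (in the proposal the extender class is generated, never hand-written):

```python
class MuteMallard:
    def waddle(self):
        return "waddling"

class MuteMallardAsDuck:
    """Hand-built stand-in for the generated extender: walk() is present
    (a renamed waddle()), while quack() is simply omitted."""
    def __init__(self, subject):
        self.subject = subject
    def walk(self):
        return self.subject.waddle()

duck = MuteMallardAsDuck(MuteMallard())
assert duck.walk() == "waddling"
assert not hasattr(duck, "quack")   # introspection-based fallbacks still work
try:
    duck.quack()
except AttributeError:
    pass  # exactly the error a quack-requiring routine would see
```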

Extender Creation

Note, however, that even though a different extender class is needed for different source types, it is not necessary to create an extender class "from scratch" every time a Mallard is used as a Duck. Instead, the implementation need only create a MallardAsDuck extender class once, and then cache it for repeated uses. Extender instances can also be quite small in size, because in the general case they only need to contain a reference to the object instance that they are extending.

In order to be able to create these extender classes, we need to be able to determine the correspondence between the target Duck operations, and operations for a Mallard. This is done by traversing the Duck operation namespace, and retrieving methods and attribute descriptors. These descriptors are then looked up in a registry keyed by descriptor (method or property) and source type (Mallard). The found operation is then placed in the extender class' namespace under the name given to it by the Duck type.

So, as we go through the Duck methods, we find a walk() method descriptor, and we look into a registry for the key (Duck.walk,Mallard). (Note that this is keyed by the actual Duck.walk method, not by the name "Duck.walk". This means that an operation inherited unchanged by a subclass of Duck can reuse operations declared "like" that operation.)

If we find the entry, duckwalk_by_waddling (the function object, not its name), then we simply place that object in the extender class' dictionary under the name "walk", wrapped in a descriptor that substitutes the original object as the method's self parameter. Thus, when the function is invoked via an extender instance's walk() method, it will receive the extended Mallard as its self, and thus be able to call the waddle() operation.
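The registry lookup and self-substitution described above might be sketched as follows. The names like and make_extender, and the registry layout, are illustrative assumptions rather than the proposed API:

```python
class Duck:
    def walk(self): "walk like a duck"
    def quack(self): "quack"

class Mallard:
    def waddle(self):
        return "waddling"

# Registry keyed by (target operation descriptor, source type).
registry = {}

def like(target_op, for_type):
    def register(func):
        registry[(target_op, for_type)] = func
        return func
    return register

@like(Duck.walk, for_type=Mallard)
def duckwalk_by_waddling(self):
    return self.waddle()

def make_extender(target, source):
    ns = {"__init__": lambda self, subject: setattr(self, "subject", subject)}
    for name, descriptor in vars(target).items():
        if name.startswith("__"):
            continue
        op = registry.get((descriptor, source))
        if op is not None:
            # Wrap the operation so the *extended* object is passed as `self`.
            ns[name] = (lambda op: lambda self, *a, **kw:
                        op(self.subject, *a, **kw))(op)
    return type("%sAs%s" % (source.__name__, target.__name__), (object,), ns)

MallardAsDuck = make_extender(Duck, Mallard)  # built once, then cached
duck = MallardAsDuck(Mallard())
assert duck.walk() == "waddling"      # renamed waddle(), self substituted
assert not hasattr(duck, "quack")     # no correspondence registered
```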

However, operations declared in a class work somewhat differently. If we directly declared that waddle() is "like" Duck.walk in the body of the Mallard class, then the @like decorator will register the method name "waddle" as the operation in the registry. So, we would then look up that name on the source type in order to implement the operation on the extender. For the Mallard class, this doesn't make any difference, but if we were extending a subclass of Mallard this would allow us to pick up the subclass' implementation of waddle() instead.

So we have our walk() method; now let's add a quack() method. But wait: we haven't declared one for Mallard, so there's no entry for (Duck.quack, Mallard) in our registry. We therefore proceed through the __mro__ (method resolution order) of Mallard to see whether there is an operation corresponding to quack that Mallard inherited from one of its base classes. If no method is found, we simply do not put anything in the extender class for a "quack" method, which will cause an AttributeError if somebody tries to call it.
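The __mro__ search might look like this sketch, again assuming the illustrative registry layout of a dictionary keyed by (target operation, source type):

```python
class Duck:
    def quack(self): pass

class Waterfowl:
    def honk(self):
        return "honk!"

class Mallard(Waterfowl):
    pass

# Suppose a correspondence was declared for the Waterfowl base class:
registry = {(Duck.quack, Waterfowl): Waterfowl.honk}

def find_operation(target_op, source):
    for base in source.__mro__:           # Mallard, Waterfowl, object
        op = registry.get((target_op, base))
        if op is not None:
            return op
    return None                           # omit from the extender class

# Mallard has no entry of its own, but inherits Waterfowl's declaration:
assert find_operation(Duck.quack, Mallard) is Waterfowl.honk
```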

Finally, if our attempt at creating an extender winds up having no operations specific to the Duck type, then a TypeError is raised. Thus if we had passed an instance of Pig to the walkTheDuck function, and Pig had no methods corresponding to any Duck methods, this would result in a TypeError -- even if the Pig type has a method named walk()! -- because we haven't said anywhere that a pig walks like a duck.

Of course, if all we wanted was for walkTheDuck to accept any object with a method named walk(), we could've left off the type declaration in the first place! The purpose of the type declaration is to say that we only want objects that claim to walk like ducks, assuming that they walk at all.

This approach is not perfect, of course. If we passed in a LeglessDuck to walkTheDuck(), it is not going to work, even though it will pass the Duck type check (because it can still quack() like a Duck). However, as with normal Python "duck typing", it suffices to run the program to find that error. The key here is that type declarations should facilitate using different objects, perhaps provided by other authors following different naming conventions or using different operation granularities.


Subclasses and Substitutability

By default, this system assumes that subclasses are "substitutable" for their base classes. That is, we assume that a method of a given name in a subclass is "like" (i.e., is substitutable for) the correspondingly-named method in a base class. However, sometimes this is not the case; a subclass may have stricter requirements on routine parameters. For example, suppose we have a Mallard subclass like this one:

class SpeedyMallard(Mallard):
    def waddle(self, speed):
        ...  # waddle at given speed

This class is not substitutable for Mallard, because it requires an extra parameter for the waddle() method. In this case, the system should not consider SpeedyMallard.waddle to be "like" Mallard.waddle, and it therefore should not be usable as a Duck.walk operation. In other words, when inheriting an operation definition from a base class, the subclass' operation signature must be checked against that of the base class, and rejected if it is not compatible. (Where "compatible" means that the subclass method will accept as many arguments as the base class method will, and that any extra arguments taken by the subclass method are optional ones.)
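A rough sketch of this compatibility check, using the inspect module (is_substitutable is an assumed helper name, and *args/**kwargs handling is omitted):

```python
import inspect

def is_substitutable(base_method, sub_method):
    base = list(inspect.signature(base_method).parameters.values())
    sub = list(inspect.signature(sub_method).parameters.values())
    # The subclass method must accept every call the base method accepts...
    if len(sub) < len(base):
        return False
    # ...and any extra parameters it adds must have defaults.
    return all(p.default is not inspect.Parameter.empty
               for p in sub[len(base):])

class Mallard:
    def waddle(self): pass

class SpeedyMallard(Mallard):
    def waddle(self, speed): pass       # extra *required* argument: rejected

class LazyMallard(Mallard):
    def waddle(self, speed=1): pass     # extra *optional* argument: accepted

assert not is_substitutable(Mallard.waddle, SpeedyMallard.waddle)
assert is_substitutable(Mallard.waddle, LazyMallard.waddle)
```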

Note that Python cannot tell, however, if a subclass changes the meaning of an operation, without changing its name or signature. Doing so is arguably bad style, of course, but it could easily be supported anyway by using an additional decorator, perhaps something like @unlike(Mallard.waddle) to claim that no operation correspondences should remain, or perhaps @unlike(Duck.walk) to indicate that only that operation no longer applies.

In any case, when a substitutability error like this occurs, it should ideally give the developer an error message that explains what is happening, perhaps something like "waddle() signature changed in class SpeedyMallard, but a replacement operation for Duck.walk has not been defined." This error can then be silenced with an explicit @unlike decorator (or by a standalone unlike call if the class cannot be changed).

External Operations and Method Dependencies

So far, we've been dealing only with simple examples of method renaming, so let's now look at more complex integration needs. For example, the Python dict type allows you to set one item at a time (using __setitem__) or to set multiple items using update(). If you have an object that you'd like to pass to a routine accepting "dictionary-like" objects, what if your object only has a __setitem__ operation but the routine wants to use update()?

As you may recall, we follow the source type's __mro__ to look for an operation possibly "inherited" from a base class. This means that it's possible to register an "external operation" under (dict.update, object) that implements a dictionary-like update() method by repeatedly calling __setitem__. We can do so like this:

@like(dict.update, for_type=object, needs=[dict.__setitem__])
def do_update(self: dict, other: dict):
    for key, value in other.items():
        self[key] = value

Thus, if a given type doesn't have a more specific implementation of dict.update, then types that implement a dict.__setitem__ method can automatically have this update() method added to their dict extender class. While building the extender class, we simply keep track of the needed operations, and remove any operations with unmet or circular dependencies.
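The dependency pass might be sketched as a simple fixed-point computation (resolve_ops is an assumed helper name): operations whose needs are met keep getting added until nothing changes, and whatever remains, whether unmet or circular, is dropped.

```python
def resolve_ops(direct_ops, external_ops):
    """direct_ops: operation names the source type provides itself.
    external_ops: {name: set of operation names that operation needs}."""
    available = set(direct_ops)
    changed = True
    while changed:
        changed = False
        for name, needs in external_ops.items():
            if name not in available and needs <= available:
                available.add(name)
                changed = True
    return available

# __setitem__ comes from the extended type; update is external and needs it:
assert resolve_ops({"__setitem__"},
                   {"update": {"__setitem__"}}) == {"__setitem__", "update"}

# Mutually dependent externals with no direct support are both dropped:
assert resolve_ops(set(),
                   {"update": {"__setitem__"},
                    "__setitem__": {"update"}}) == set()
```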

By the way, even though technically the needs argument to @like could be omitted since the information is present in the method body, it's actually helpful for documentation purposes to present the external operation's requirements up-front.

However, if the programmer fails to accurately state the method's needs, the result will be either an AttributeError at a deeper point in the code, or a stack overflow caused by looping between mutually recursive operations. (E.g., if an external dict.__setitem__ is defined in terms of dict.update, and a particular extended type supports neither operation directly.) Neither way of revealing the error is particularly problematic, and either is easily fixed when discovered, so needs is still intended more for the reader of the code than for the extender creation system.

By the way, if we look again at one of our earliest examples, where we externally declared a method correspondence from Mallard.waddle to Duck.walk:

@like(Duck.walk, for_type=Mallard)
def duckwalk_by_waddling(self):
    self.waddle()

we can see that this is actually an external operation being declared; it's just that we didn't give the (optional) full declarations:

@like(Duck.walk, for_type=Mallard, needs=[Mallard.waddle])
def duckwalk_by_waddling(self: Mallard):
    self.waddle()

When you register an external operation, the actual function object given is registered, because the operation doesn't correspond to a method on the extended type. In contrast, "internal operations" declared within the extended type cause the method name to be registered, so that subclasses can inherit the "likeness" of the base class' methods.

Extenders with State

One big difference between external operations and ones created within a class, is that a class' internal operations can easily add extra attributes if needed. An external operation, however, is not in a good position to do that. It could just stick additional attributes onto the original object, but this would be considered bad style at best, even if it used mangled attribute names to avoid collisions with other external operations' attributes.

So let's look at an example of how to handle extenders that need more state information than is available in the extended object. Suppose, for example, we have a new DuckDodgers class, representing a duck who is also a test pilot. He can therefore be used as a rocket-powered vehicle by strapping on a JetPack, which we can have happen automatically:

@like(Rocket.launch, for_type=DuckDodgers, using=JetPack)
def launch(jetpack, self):
    print("Up, up, and away!")

The type given as the using parameter must be instantiable without arguments. That is, JetPack() must create a valid instance. When a DuckDodgers instance is being used as a Rocket instance, and this launch method is invoked, it will attempt to create a JetPack instance for the DuckDodgers instance (if one has not already been created and cached).

The same JetPack will be used for all external operations that request to use a JetPack for that specific DuckDodgers instance. (Which only makes sense, because Dodgers can wear only one jet pack at a time, and adding more jet packs will not allow him to fly to several places at once!)

It's also necessary to keep reusing the same JetPack instance for a given DuckDodgers instance, even if it is adapted many times to different rocketry-related interfaces. Otherwise, we might create a new JetPack during flight, which would then be confused about how much fuel it had or whether it was currently in flight!

Note, by the way, that JetPack is a completely independent class here. It does not have to know anything about DuckDodgers or its use in an extender, nor does DuckDodgers need to know about JetPack. In fact, neither object should be given a reference to the other, or this will create a circularity that may be difficult to garbage collect. Python's extender machinery will use a weak-key dictionary mapping from extended objects to their "extensions", so that our JetPack instance will hang around until the associated DuckDodgers instance goes away.

Then, when external operations using JetPack are invoked, they simply request a JetPack instance from this dictionary, for the given DuckDodgers instance, and then the operation is invoked with references to both objects.
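The caching behavior described above can be sketched with the standard library's weak-key dictionary. Everything here is an assumed illustration (``_extension_state``, ``get_state``), not the proposal's actual machinery:

```python
import weakref

# One state dict per extended object, keyed weakly so the JetPack goes
# away when its DuckDodgers instance is garbage collected.
_extension_state = weakref.WeakKeyDictionary()

def get_state(obj, state_type):
    """Return the cached state instance of ``state_type`` for ``obj``,
    creating it on first use."""
    states = _extension_state.setdefault(obj, {})
    if state_type not in states:
        states[state_type] = state_type()  # must be instantiable w/o args
    return states[state_type]

class DuckDodgers:
    pass

class JetPack:
    def __init__(self):
        self.fuel = 100

dodgers = DuckDodgers()
pack1 = get_state(dodgers, JetPack)
pack2 = get_state(dodgers, JetPack)
assert pack1 is pack2  # the same JetPack, however many times we adapt
```

Because the dictionary holds only weak references to its keys, the cached JetPack does not keep the DuckDodgers instance alive, avoiding the garbage-collection circularity mentioned above.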

Of course, this mechanism is not available for extending types whose instances cannot be weak-referenced, such as strings and integers. If you need to extend such a type, you must fall back to either storing the additional state in the object itself, using the object to key some other dictionary to obtain the state, or declaring that your extender can live with potentially inconsistent states.

XXX have a way to declare that state is kept in the extender for this scenario

Using Multiple Extender States

Different external operations can use different using types to store their state. For example, a DuckDodgers instance might be able to be used as a Soldier, provided that he has a RayGun:

@like(Soldier.fight, for_type=DuckDodgers, using=RayGun)
def fight(raygun, self, enemy:Martian):
    while enemy.isAlive():
        raygun.fire(enemy)

In the event that two operations covering a given for_type have using types with a common base class (other than object), the most-derived type is used for both operations. This rule ensures that an extender does not end up with its state split between a base type and a derived type. That matters because extenders are really a form of inheritance: they extend the type of the underlying object, so we don't want multiple state objects representing the same thing.

Notice that our examples of using=JetPack and using=RayGun do not interact, as long as RayGun and JetPack do not share a common base class other than object. However, if we had defined one operation using=JetPack and another as using=HypersonicJetPack, then both operations would receive a HypersonicJetPack if HypersonicJetPack is a subclass of JetPack. This ensures that we don't end up with two jet packs, but instead use the best jetpack possible for the operations we're going to perform.

However, if we also have an operation using a BrokenJetPack, and that's also a subclass of JetPack, then we have a conflict, because there's no way to reconcile a HypersonicJetPack with a BrokenJetPack, without first creating a BrokenHypersonicJetPack that derives from both, and using it in at least one of the operations.

If it is not possible to determine a single "most-derived" type among a set of operations for a given extended type, an error is raised, similar to the error raised when deriving a class from classes with incompatible metaclasses. As with that kind of error, it can be resolved simply by adding another using type that inherits from the conflicting types.
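The resolution rule can be sketched directly: among the declared using types, pick the one that is a subclass of all the others, or fail the way a metaclass conflict does. The ``most_derived`` helper is an assumed name for illustration:

```python
# Sketch of the "single most-derived using type" rule described above.

def most_derived(types):
    for candidate in types:
        if all(issubclass(candidate, t) for t in types):
            return candidate
    raise TypeError("conflicting 'using' types: %r" % (types,))

class JetPack: pass
class HypersonicJetPack(JetPack): pass
class BrokenJetPack(JetPack): pass

# A plain JetPack and a HypersonicJetPack resolve to the subclass:
assert most_derived([JetPack, HypersonicJetPack]) is HypersonicJetPack

# Hypersonic vs Broken: neither derives from the other, so this conflicts.
try:
    most_derived([HypersonicJetPack, BrokenJetPack])
except TypeError:
    conflict = True

# Adding a common subclass resolves the conflict, as the text describes:
class BrokenHypersonicJetPack(HypersonicJetPack, BrokenJetPack): pass
assert most_derived(
    [HypersonicJetPack, BrokenJetPack, BrokenHypersonicJetPack]
) is BrokenHypersonicJetPack
```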

Non-Method Attributes

Sometimes, an interface includes not only methods, but also the ability to get or set an attribute as well. In the case of attributes that are managed by descriptors in the interface or type, we can use these for @like declarations by mapping to their __get__, __set__, and __delete__ methods, e.g.:

@like(SomeClass.someAttr.__get__, for_type=OtherClass)
def get_foo_as_someAttr(self):
    return self.foo

@like(SomeClass.someAttr.__set__, for_type=OtherClass)
def set_foo_as_someAttr(self, value):
    self.foo = value

When creating an extender to map OtherClass to SomeClass, we will find the someAttr descriptor and check for operations defined for its __get__, __set__, and __delete__ methods, using them to assemble a property descriptor for the extender. In addition to using functions as shown, we can also use a shortcut like this:

like(SomeClass.someAttr, for_type=OtherClass)("foo")

to mean that any get, set, or delete of someAttr on the extender should be mapped to the corresponding action on the foo attribute of the OtherClass instance. Or, we can use this:

like(SomeClass.someAttr, for_type=OtherClass, using=FooData)("foo")

to mean that the foo attribute should be gotten, set, or deleted from the FooData state instance, whenever the corresponding operation is performed on the extender's someAttr.
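What the attribute-name shortcut boils down to can be sketched with an ordinary property. The ``make_property`` helper and the ``Extender.subject`` attribute are assumed names for this illustration, not the proposal's real API:

```python
# Sketch: assembling an extender property that forwards someAttr
# get/set/delete to a named attribute of the underlying object.

def make_property(attr_name):
    # Roughly what like(SomeClass.someAttr, for_type=OtherClass)("foo")
    # would produce for the extender class.
    def getter(extender):
        return getattr(extender.subject, attr_name)
    def setter(extender, value):
        setattr(extender.subject, attr_name, value)
    def deleter(extender):
        delattr(extender.subject, attr_name)
    return property(getter, setter, deleter)

class OtherClass:
    def __init__(self):
        self.foo = 1

class Extender:
    def __init__(self, subject):
        self.subject = subject  # the extended OtherClass instance
    someAttr = make_property("foo")

ext = Extender(OtherClass())
ext.someAttr = 42
assert ext.subject.foo == 42   # the set was forwarded to .foo
assert ext.someAttr == 42      # and the get reads it back
```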

Special Methods

XXX binary operators

XXX level-confusing operators: comparison, repr/str, equality/hashing

XXX other special methods

Backward Compatibility

XXX explain Java cast and COM QueryInterface as proper subsets of extender concept

Reference Implementation



Many thanks to Alex Martelli, Clark Evans, and the many others who participated in the Great Adaptation Debate of 2005. Special thanks also go to folks like Ian Bicking, Paramjit Oberoi, Steven Bethard, Carlos Ribeiro, Glyph Lefkowitz and others whose brief comments in a single message sometimes provided more insight than could be found in a megabyte or two of debate between myself and Alex; this PEP would not have been possible without all of your input. Last, but not least, Ka-Ping Yee is to be thanked for pushing the idea of "partially abstract" interfaces, for which idea I have here attempted to specify a practical implementation.

Oh, and finally, an extra special thanks to Guido for not banning me from the Python-Dev list when Alex and I were posting megabytes of adapter-related discussion each day. ;)


[1] Guido's Python-Dev posting on "PEP 246: lossless and stateless"
[2] Optional Static Typing -- Stop the Flames!

EditText of this page (last modified 2005-01-16 23:51:46)