[TransWarp] peak.model refactoring coming
Phillip J. Eby
pje at telecommunity.com
Thu Jan 30 19:05:23 EST 2003
Just a reminder... don't rely on the implementation or interfaces of
peak.model to remain stable. There is a *big* refactoring coming, as soon
as I can squeeze in some time to work on it. Here are some highlights of
the plan:
* Classifiers (Classifier, DataType, Element, etc.) will know their
Features, and be able to list them in various subsets and sequences
according to specified order (via a feature "priority" attribute) and
inheritance order.
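(A minimal sketch of the ordering idea, in modern Python; the Feature class, its "priority" attribute name, and the definition-order counter are all illustrative, not peak.model's actual API:)

```python
# Hypothetical sketch: listing features sorted by an explicit
# "priority" attribute, breaking ties by definition order (a stand-in
# for inheritance order).

class Feature:
    _count = 0  # global definition counter

    def __init__(self, name, priority=0):
        self.name = name
        self.priority = priority
        Feature._count += 1
        self._order = Feature._count  # records definition sequence


def sorted_features(features):
    # explicit priority first, then the order features were defined in
    return sorted(features, key=lambda f: (f.priority, f._order))


a = Feature("a", priority=2)
b = Feature("b", priority=1)
c = Feature("c", priority=1)
assert [f.name for f in sorted_features([a, b, c])] == ["b", "c", "a"]
```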
* Immutables will automatically do hashing and comparisons based on a tuple
of their features' values. (This will ultimately allow me to change some
of the specialized peak.naming classes to just inherit from model.Immutable
instead of doing this themselves.)
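(To make the intent concrete, here is a rough sketch of value-based hashing and comparison; the `mdl_featureNames` attribute and the `Name` subclass are invented for illustration only:)

```python
# Hypothetical sketch: an Immutable whose hash and equality are
# derived from the tuple of its features' values.

class Immutable:
    mdl_featureNames = ()  # subclasses list their feature names here

    def _featureTuple(self):
        return tuple(getattr(self, n) for n in self.mdl_featureNames)

    def __hash__(self):
        return hash(self._featureTuple())

    def __eq__(self, other):
        return (type(self) is type(other)
                and self._featureTuple() == other._featureTuple())


class Name(Immutable):
    mdl_featureNames = ('scheme', 'body')

    def __init__(self, scheme, body):
        # written via __dict__ since instances are conceptually unchangeable
        self.__dict__.update(scheme=scheme, body=body)


assert Name('ldap', 'x') == Name('ldap', 'x')
assert hash(Name('ldap', 'x')) == hash(Name('ldap', 'x'))
assert Name('ldap', 'x') != Name('ldap', 'y')
```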
* Immutables will require that all their features have 'isChangeable' set
to a false value (i.e., they aren't actually changeable).
* The default default value for features will be NOT_GIVEN instead of None,
and if the default value is NOT_GIVEN, the feature will be treated as a
non-existent attribute instead of defaulting to a value. This means you
can still *give* a feature a default value of None, you will just have to
do it explicitly.
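(Roughly, the behavior would look like this; the descriptor class and its names are illustrative, only the NOT_GIVEN sentinel idea comes from the plan above:)

```python
# Hypothetical sketch: a feature whose default is the NOT_GIVEN
# sentinel behaves like a missing attribute; an explicit default of
# None still works.

NOT_GIVEN = object()  # unique sentinel, distinct from None


class FeatureDescriptor:
    def __init__(self, name, default=NOT_GIVEN):
        self.name = name
        self.default = default

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        if self.name in obj.__dict__:
            return obj.__dict__[self.name]
        if self.default is NOT_GIVEN:
            # no value and no default: act like the attribute isn't there
            raise AttributeError(self.name)
        return self.default


class Thing:
    color = FeatureDescriptor('color')              # no default at all
    size = FeatureDescriptor('size', default=None)  # explicit None default


t = Thing()
assert t.size is None  # explicit None default is honored
try:
    t.color  # never set, no default -> AttributeError
    assert False, "expected AttributeError"
except AttributeError:
    pass
```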
* The 'referencedEnd' attribute may be renamed to 'otherEnd' for brevity
and clarity.
* Collection and Reference features will support an aggregation kind (e.g.
indicating that a feature is composite). This is needed to support XMI
writing, where the spec requires composite links to be written in a
separate batch from purely referential links. (It would also be useful for
any other algorithm that wants to do recursive traversal of contained objects.)
* Enumerations will be completely reworked. Enumeration instances will
hash and compare based on an assigned or specified (numeric, string, or
other) value, but will __repr__/__str__ themselves as "enumName.valueName"
strings. They will probably also be unique objects that can be compared
using 'is', and be pickled in such a way as to retain that
uniqueness. Enumeration classes will likely be callable, passing in either
a string name of the value to be created, or a value that matches the
"value" of one of the enumeration members. It will be possible for the
system to automatically assign values to enumeration members (e.g.
auto-numbering from 1), but in that event the values will not be guaranteed
to be numerically consistent from one execution to the next, and pickling
will store string values rather than numeric ones.
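(A very rough sketch of those semantics; everything here is invented for illustration, including the `define`/`get` helpers. Note that making the class itself callable would need a metaclass, so this sketch uses a `get` classmethod instead:)

```python
# Hypothetical sketch: enumeration members hash/compare by value,
# repr as "EnumName.memberName", are unique objects, and can be
# looked up by name or by matching value.

class Enumeration:
    _by_name = None
    _by_value = None

    def __init__(self, name, value):
        self.name, self.value = name, value

    def __repr__(self):
        return '%s.%s' % (type(self).__name__, self.name)

    def __hash__(self):
        return hash(self.value)

    def __eq__(self, other):
        # compare against another member or against a raw value
        return self.value == getattr(other, 'value', other)

    @classmethod
    def define(cls, *names):
        cls._by_name, cls._by_value = {}, {}
        for i, n in enumerate(names, 1):  # auto-number from 1
            member = cls(n, i)
            cls._by_name[n] = member
            cls._by_value[i] = member
            setattr(cls, n, member)

    @classmethod
    def get(cls, key):
        # accept a member name, or a value matching a member's value
        return cls._by_name.get(key) or cls._by_value[key]


class AggregationKind(Enumeration):
    pass


AggregationKind.define('none', 'shared', 'composite')
assert AggregationKind.get('composite') is AggregationKind.composite
assert AggregationKind.get(2) is AggregationKind.shared
assert repr(AggregationKind.shared) == 'AggregationKind.shared'
```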
* Some kind of "typecode" support will be added, for XMI and CORBA
interoperability. This will probably take the form of certain MOF types
such as AggregationKind moving from peak.metamodels.MOF131 to
peak.model.mof_base or something like that, along with some new
enumerations for CORBA type codes.
* Many renamings or restructurings of methods and attributes may take
place. In particular, I'll be standardizing on a 'mdl_' prefix for model
metadata attributes that appear on classifier classes. Feature classes
won't have this prefix, because feature classes don't have instances and so
there's no possibility of name collision with class or instance attribute
names.
There are also some big open questions that may remain issues after the
refactorings above are complete:
* The role of Package/Model classes that contain other classes is in
flux. The main function they're likely to serve is the finding of classes
based on partial and qualified names, primarily to support XMI
encoding/decoding. But I'm not too keen on having to actually nest the
classes inside them. Maybe there needs to be some way to create Package or
Model objects from Python module objects, and/or a dictionary.
* Mapping-style Features. I'd like to make it possible to define a feature
that acts like a mapping, in the sense that you can store or retrieve items
by key (possibly a tuple containing multiple field values). A graph-style
feature (i.e. multiple items stored under each key) might also be
nice. But implementing either of these has some interesting interactions
with the items below...
* Implementing associations well. Albert Langer (I think) once suggested
that there should be Association objects in TransWarp, and I'm beginning to
wonder if he was right. The UML model of associations is essentially that
associations are like classes whose instances are "links" that pair
individual items. The advantage to this concept is that an association can
exist between objects that don't individually know each other. The
disadvantage is that it means twice as much typing to define the
association and its ends. Currently peak.model only requires you to define
the references at each end of the association, and even then only the
navigable ones. On the other hand, association objects are a natural place
to implement constraint validation, by treating the "link" as an object
with two attributes that need validation. In the current model, you have
to implement this checking on both ends, or decide which end is the
"master". Maybe I should add a metadata attribute that defines which end
is the master... Anyway, that's why this is still an open issue.
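(For the sake of discussion, the "association as link set" idea might look something like this; the class and its method names are purely illustrative:)

```python
# Hypothetical sketch: an Association holds (source, target) link
# pairs itself, so neither end's objects need to know about the
# other directly.

class Association:
    def __init__(self, name):
        self.name = name
        self._links = set()

    def link(self, source, target):
        # a natural single place to hook constraint validation,
        # treating the pair itself as the thing being validated
        self._links.add((source, target))

    def unlink(self, source, target):
        self._links.discard((source, target))

    def targetsOf(self, source):
        return [t for s, t in self._links if s == source]

    def sourcesOf(self, target):
        return [s for s, t in self._links if t == target]


owns = Association('owns')
owns.link('alice', 'book')
owns.link('alice', 'pen')
assert sorted(owns.targetsOf('alice')) == ['book', 'pen']
assert owns.sourcesOf('pen') == ['alice']
```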
* Data manipulation APIs and observer hooks. I'd like to (mostly)
standardize the ways in which features that are collections or mappings can
be manipulated, with a goal of making it easier to implement observability
of objects' features. One tricky bit is the distinction between observing
a collection at one end of an association, versus observing changes in the
link set of the association! The latter is more useful for storage-ish
things, and validation/business rules, while the former is more useful for
GUIs. Whatever the approach, I'd also like to minimize overhead due to
generating events when nobody is actually listening.
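(The "minimize overhead" part could be as simple as this sketch; the class and event shape are made up for illustration:)

```python
# Hypothetical sketch: events are only constructed and dispatched
# when someone has actually subscribed, so the unobserved case
# costs a single truthiness check.

class ObservableCollection:
    def __init__(self):
        self._items = []
        self._observers = []  # empty list -> near-zero overhead

    def subscribe(self, callback):
        self._observers.append(callback)

    def add(self, item):
        self._items.append(item)
        if self._observers:  # skip event work when nobody is listening
            for cb in self._observers:
                cb('add', item)


events = []
c = ObservableCollection()
c.add('unobserved')  # no observers yet: no event generated
c.subscribe(lambda kind, item: events.append((kind, item)))
c.add('observed')
assert events == [('add', 'observed')]
```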
* Validation, constraints, and "business rules". I have some general ideas
here, but I think I'm going to have to wait until the general refactorings
have stabilized more before trying to put hooks for this
in. Unfortunately, this has a sort of circular dependency with the
previous three items...
Anyway, I think that about sums it up for now. So steer clear of
peak.model for now, and if you have any ideas or suggestions about how to
sort out any of the open issues above, *please* speak up, as I could use
the help. ;)