[TransWarp] RE: [SmartObjects] Persistence models and patterns (was Re:
Joining the SmartObjects Discussion)
Phillip J. Eby
pje at telecommunity.com
Wed May 23 22:19:13 EDT 2001
Hi Albert. May I suggest in future breaking down your posts into multiple
parts? I can reply much more quickly to short posts. :)
Also, I've removed zcommerce from the cc: list, added
transwarp at eby-sarna.com, and suggest further discussion on any of the
topics herein be moved there, since I think it's actually of limited
relevance to SmartObjects. (To subscribe to the TW list, check out
http://www.eby-sarna.com/mailman/listinfo/.)
At 07:28 AM 5/13/01 +1000, Albert Langer wrote:
>
>[Phillip]
>We don't have anon CVS up yet - if you have tips on how to set it up
>securely, that would help. I've heard rumors that pserver mode lets you
>read any file on the server that is readable by the user the CVS pserver
>runs as.
>
>[Albert]
>I'm quite sure it's possible to set it up securely, as many people do.
>Hope someone who knows how does send you tips.
We have it up now, but I've already forgotten the instructions Ty gave me
(verbally) for accessing it. I think I'll pester him to make the official
announcement of the mailing lists and CVS, on the grounds that he should
get credit since he did all the work to set them up. :)
In the meantime, you can find everything else by just going to
http://www.eby-sarna.com/.
>vvvvvvv
>Another technique is to use the mutable attribute in some immutable way:
>
>    def addDepartment(self, department):
>        departments = self.departments
>        departments.append(department)
>        self.departments = departments
>
>^^^^^^^
>
>Transwarp might be able to implement this transparently?
If you look at how TW.StructuralModel.InMemory works, you'll see that the
answer is yes. StructuralModel.Persistent will work that way.
It did occur to me, however, that there are a couple of tricks that could
be played with TransWarp today, and I will probably take a whack at them
sometime in the near future.
The first is that one could weave into Elements a __setattr__ that worked
by calling getattr(self,name).set(val) - that is, you could say:
SomeElement.SomeFeature = SomeValue
and have it automatically translated into
SomeElement.SomeFeature.set(SomeValue) at the internal level, making a
nicer interface.
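For illustration, a minimal sketch of that first trick (the mixin name and
fallback logic are mine, not TW's actual code; it assumes Features live in
the Element class' dictionary and expose .set(), as described later in this
message):

    class SetattrElement:
        def __setattr__(self, name, value):
            # Look for a Feature object in the class dictionary.
            feature = getattr(self.__class__, name, None)
            if hasattr(feature, 'set'):
                # SomeElement.SomeFeature = SomeValue becomes
                # SomeElement.SomeFeature.set(SomeValue)
                getattr(self, name).set(value)
            else:
                # Ordinary attribute: bypass __setattr__ recursion.
                self.__dict__[name] = value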
The second trick was the thought that I could create a
StructuralModel.ZPatterns aspect which would implement features using the
__get_attr__ and __set_attr__ methods of DataSkins. That is, in that
aspect I would define DataSkin as a mixin to Element, and perhaps Rack and
Specialist as Services. It probably wouldn't take very long to create UI
and permissions frameworks that could be woven with this to start creating
Zope applications today.
Actually, it's a little bit more complex, in that I'd need to also define
some sort of Aspect that would autogenerate the ZClass metaclasses and
register them, and there are probably more than a few other hurdles as
well, like getting around to the XMI rewrite that I've been putting off for
some time.
>[...Phillip]
>TW's "Structural Model" is based on the Service-Element-Feature pattern
>(see
>http://www.zope.org/Members/pje/Wikis/TransWarp/ServicesElementsAndFeatures)
> which describes application data objects (Elements) in terms of their
>attributes/relationships (Features).
>
>[Albert]
>I'm assuming that the "Service-Element-Feature pattern" is your
>own terminology and that there is no URL with other documentation for this
>or for CORE+ object relational mapping etc. (Google search got lots
>of trekkie sites for "WarpCore" ;-) Please confirm.
Service-Element-Feature is our own terminology, yes.
>I am interpreting a "Service" as a specification of an Interface in
>terms of a package composed of specification classes (Elements) with their
>specification structural Features (recursively), as in the Catalysis
>UML approach.
Don't know much about Catalysis, I'm afraid, though I've been meaning to
read up on it.
Your definition of service is a bit narrow, though. In the broadest sense,
a service is just a well-known object instance, which is not "data" itself
but rather provides access to domain objects or services related thereto.
DM and SI objects (in Coad's terminology) are Services. Services are
sometimes Singletons, usually if they are SI (System Interaction) objects.
>This interface specification, expressed in a UML Structural Model does
>not determine the actual implementation, which may however be derived
>from it by adding other aspects, including horizontal frameworks, in
>the same sort of way that an abstract class can be generated from a Python
>Interfaces specification of its operation signatures and then used to
>derive concrete implementation classes. The actual interface operations
>signatures can be defined separately from the specification attributes
>used to define them, calling the methods you have explained below on
>those structural specifications.
Yes, that's a good way of describing it. It's a bit more powerful than
that, in that you have a kind of cross-multiplier effect also. That is,
since you can add, for example, user interface capabilities to a Feature
base class, you can suddenly turn all the "value" features across an entire
UML model into form fields with a particular behavior. So you can blend
whole new interfaces into the system at the feature level. It's sort of a
fractal inheritance. :)
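As a concrete (and purely hypothetical) sketch of that, assuming the .name
and .get() methods that Features are described as having later in this
message:

    class FormFieldMixin:
        # Woven into the "value" Feature base class, this gives every
        # value feature across the whole model an HTML form field.
        def renderField(self):
            return '<input name="%s" value="%s">' % (self.name, self.get())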
>BTW are you planning to incorporate more use of the Interfaces module
>and extend it to the missing half of Component specifications?
>For example I would have thought that an Aspect should specify both
>__implements__ and a similar __requires__ to support what an
>Aspectual Component "expects" as indicated in:
>
>http://www.ccs.neu.edu/research/demeter/biblio/aspectual-comps.html
>
>See also the end of section 5 of that paper for a comparison with
>Catalysis. My impression is that your approach partially bridges
>the gap between their use of a Participant Graph and the Catalysis
>use of a common model of (specification) attributes. Your derivation
>of that common model from a UML structural diagram looks to me closer
>to Catalysis and better described in terms of their UML diagrams binding
>template frameworks for collaborations. See below.
Whew. Guess I'll have to take a look at that. I haven't really done much
with Interfaces yet. It's an interesting idea to specify what's required
as well as what's provided; I'm not sure I know how I'd use it, exactly.
Perhaps when I have the aspect-weaving mechanism more thoroughly documented
you could take a stab at writing a FeatureDef that would implement a
__requires__ feature.
FeatureDefs receive all sorts of event calls during the process of weaving
aspects together and generating the resulting classes. It seems like you
could just create an InterfaceRequirement FeatureDef that captures the
"finish()" event to check whether the resulting class is going to implement
the necessary items. Or, you could capture an earlier event if you needed
to ensure that the aspects "lower" on the overlay stack support those
capabilities at the time the aspect containing the requirement is overlaid.
Anyway, this stuff is "deep magic" in the sense that I don't expect most
people to want to play around with creating new kinds of FeatureDefs. (A
FeatureDef is an object that specifies *how* features are woven together
when aspects are combined. The default FeatureDefs are ClassFeature and
OverwriteFeature, where ClassFeature causes nested classes to be woven
together, and OverwriteFeature simply overwrites features of the same name
when templates are combined. Each individual attribute in a class to be
woven can have a different FeatureDef specified to control how
weaving/layering will take place in that area.)
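A rough sketch of that InterfaceRequirement idea (only the "finish()" event
name comes from the description above; the signature and the flat
__implements__ tuple are assumptions for illustration):

    class InterfaceRequirement:
        def __init__(self, required):
            self.required = required

        def finish(self, generatedClass):
            # Check that the woven-together class declares the required
            # interface; assumes __implements__ is a flat tuple.
            declared = getattr(generatedClass, '__implements__', ())
            if self.required not in declared:
                raise TypeError(
                    '%s does not implement %s'
                    % (generatedClass.__name__, self.required)
                )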
>ZODB inherently does not provide full Isolation since each thread caches a
>separate copy and conflicts are detected on commit. A similar approach is
>often used with RDBMS "Enterprise Objects" that are optimistically checked
>out
>with a note of the current time-stamp or version number. An ACID
>transaction is used at commit time to select the original object(s)
>with the original time-stamps or version number added to the selection
>criteria in the same RDBMS transaction that updates them, so that the
>commit will abort if some other user has changed the data while it was
>checked out (perhaps via another application server in a different
>language on a different box at a different site). The select will return
>nothing to update when any of the objects read for doing the transaction
>have had their time stamp or version stamp changed since they were
>used to compute the change. Therefore the update (which need not be to
>all the rows that were read) will fail and cause a rollback.
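A minimal sketch of that optimistic check, using the Python DB-API (the
table and column names are made up, and a driver with 'format' paramstyle
is assumed):

    def commitOrder(conn, orderId, oldVersion, newTotal):
        cursor = conn.cursor()
        cursor.execute(
            "UPDATE orders SET total = %s, version = version + 1"
            " WHERE id = %s AND version = %s",
            (newTotal, orderId, oldVersion)
        )
        if cursor.rowcount == 0:
            # Another user changed the row since we read it: roll the
            # whole transaction back so it can be retried.
            conn.rollback()
            raise ValueError('conflict: order %s was modified' % orderId)
        conn.commit()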
ZPatterns applications typically perform a mix of update-at-commit and
create-during-transaction operations, with the occasional subtransaction
commit thrown in if dependencies exist. In effect, we shove almost all the
RDBMS updates to the end of the transaction, minimizing lock times. We use
the RDBMS' own locking, which can be optimistic or pessimistic. If the
RDBMS reports a conflict error, then the transaction will be retried per
normal Zope parameters. If the RDBMS fails for some other reason, the
transaction will simply fail with the supplied error.
>This is often tied in with local RAM cache invalidation similar to
>that used by Zope for its RAM cache of ZODB persistent objects - as with
>the AOLserver/OpenNsd Tcl cache of RDBMS rows, cursors and database
>connectors.
>
>ACS provides a time-stamp on every object to facilitate this. It should be
>feasible to get the best of both RDBMS and ZODB worlds by carefully using
>Transwarp with aspects for each (whereas keeping an RDBMS object id in
>ZODB as well strikes me as the worst of both worlds).
>
>The ZODB approach to conflict resolution with _p_resolveConflict described
>at the link above could perhaps be routinely extended to re-do the entire
>transaction after re-reading all the data read when using an RDBMS. That
>also
>involves use of complex object networks that get committed together (eg an
>order header and its items treated as a single object with a single
>transaction despite being stored in separate tables within an RDBMS).
Ugh. I'm not sure what application space you're going after here, but I
would want to check carefully before trying to implement that much
complexity, under the XP "You Ain't Gonna Need It" theory. (Hm, isn't it
interesting that M$ is working on a Windows XP? Hadn't thought of the name
similarity before now...)
>Transactions at the Application Server across multiple databases
>would presumably need to be a separate "aspect" (hopefully consistent
>with XA and Corba as well as Zope), also separated from aspects for
>cacheing, concurrency and synchronization (threads, microthreads etc).
[sound of my head exploding] :)
>The importance of AOP here is that these aren't strictly "layers"
>as in a non-AOP persistence layer, but cut across each other.
>
>This sort of thing of course requires close involvement of DC core
>developers thoroughly familiar with the current entanglement of
>persistence, cacheing, concurrency, transactions, publishing, permissions
>etc
>in order to be able to re-factor it into separate aspects that can work
>with Transwarp and an object-relational persistent layer to achieve the
>transparency (especially with complex objects) that was achieved
>much more simply using pickled objects in ZODB.
I think you're making this harder than it is. The things you mention
aren't *quite* as entangled as you make it sound, and also I think it's
easier to wrap them all up as a component anyway and map to ZODB et al from
TransWarp via ZPatterns.
>I haven't spent enough time studying SmartObjects, DBObjects and the sample
>applications ZQuest and Projektor to feel certain, but the impression I get
>is that they don't include a model of the *relationships* between objects
>in the sense that you have described such a model above and in the rest of
>your message and in the Transwarp documentation.
>
>They just seem to provide for turning table rows into objects and vice
>versa,
>like "pluggable brains".
>
>That is necessary, and not yet actually implemented in Transwarp, but if
>it's
>done without Transwarp then you end up having to implement each relationship
>manually without any assistance from an actual model. This cannot be
>considered
>an object-relational persistence layer as it only deals with objects, not
>relationships and relationships happen to be what a Relational DBMS is about
>(as well as being central to what OOP is about).
Yeah, I haven't seen much in anybody's work so far dealing with modelling
relationships. Shane Hathaway's stuff points out the possibility of
implementing it, but there's been no model proposed.
>Also, I get the impression that a "Smart" object is supposed to combine
>Domain
>business logic, UI presentation (specific to Zope), management (specific to
>Zope) and storage (CRUD of "objects" corresponding to rows in tables).
>
>This seems contrary to basic design principles for reducing the coupling
>between storage and domain objects by means of an object-relational
>persistence layer that, among other things, maps between them, so that
>only the mapping between them maintained by that layer needs to change
>when either side changes. A complex Domain object like an "Order"
>needs to be able to access and mutate its components like "LineItem"
>and related objects like "Buyer", "Seller" and "Product" (per LineItem)
>without the application programmer having to know the details of
>table storage, so that DBAs can change those details without
>breaking applications.
Yep.
>It looks to me like Transwarp has the potential to provide for both
>of these, together with closely related but separate transactions,
>concurrency, permissions, distribution and cacheing as separate and
>replaceable "aspects". It should even be capable of adapting to issues
>like the shift of part of the object-relational persistence layer
>to stored procedures within the database itself (eg PostgreSQL
>"Rules" for "writable views") and similar shifts for permissions
>(eg ACS permissions system), other "business rules" (ordinary
>triggers and complex python stored procedures in PostgreSQL),
>and even connection pooling (SQL relay).
Yes. One straightforward application of the TW.Aspects package would be to
create a ZopePermission FeatureDef that does the correct Zope security
magic on the resulting classes, so that you can do something like:
class MyService(Aspect):

    class MyElement1:
        someMethod1 = someMethod2 = ZopePermission('View Element1 Data')
        someMethod3 = ZopePermission('Do something')

    class MyElement2:
        otherMethod = ZopePermission('Do something')
And then weave this specification with other aspects of MyService.
>Back to the Transwarp Structural Aspect specifics above.
>
>I assume that as a fan of the "Law of Demeter" you would
>encourage the Service interface to provide for an undirected
>joint action such as:
>
>Service.assignment(assignedPerformer = NotGiven, assignedTask = NotGiven)
>
>rather than expose the internal Elements and the navigability of the
>Assignment association Feature in either direction to client developers
>using the Service interface.
For Demeter purposes, I consider the methods of Features to be methods of
their owner object. This is consistent with the properties pattern used by
JavaBeans, where one says object.setProperty1(value) instead of
object.Property1.set(value) as in ZPatterns. Without this mild
rulebending, there would be little point in having Features in TransWarp,
as one could never use them! (Well, without having to create a bunch of
extra methods anyway.)
>To encourage this, it might be better for the internal interface
>used by component developers to implement such an action as:
>
>Service.Assignment.addLink(assignedPerformer, assignedTask)
>
>where Assignment is the name of the association that has the 2
>parameters as role ends (in the order listed).
Er, for some applications that might make sense. But my practical
experience has been that it is usually much better to deal with an object
at one or the other end of the link. The Service is responsible for
*implementing* the relationship, but that doesn't mean it's the *interface*
for the relationship. AssociationEnd Features not only know things like
their arity and any special constraints, they're in the acquisition context
of their associated Element, which means security rules can be applied. If
this were implemented as a generic association as you suggest, then
explicit security checks would need to be done, and then have things
possibly delegated back to the individual Elements' Features anyway. Ugh.
Also, don't forget that a relationship can be implemented more than one way
in the same logical service space - think multiple Racks for one
Specialist. The Elements know better what back-end components handle their
storage, let *them* do the routing. Putting high-level relationship
management in an offshoot of the Service just complicates the
implementation and adds "reflection" overhead that isn't otherwise needed.
>The Transwarp machinery could then select which of the two roles to
>apply a directed operation to if the association is navigable
>in only 1 direction (or more efficiently in a particular direction
>for a given storage), without client application developers having
>to think about such issues at all and without component developers
>having to make unnecessary changes when storage schemas and indexing
>etc changes.
This wouldn't be part of TransWarp, but part of a storage framework that
somebody wrote. A more advanced framework, I might add, than any I
currently plan to write. :) Specifically, I think that it's too early to
think about compiler-like logic like this. My vision of TransWarp is more
like an assembler: the programmer can specify all the details, but the
tedious grafting and assembling of classes and components is automated.
Upon this meta-framework, you can build what suits you. :)
>A shorthand notation could also be provided consistent with the
>Catalysis dialect of UML OCL:
>
>Service.Assignment += performer, task
>
>This could eventually help allow the Interface implementation to be
>generated directly from an OCL post constraint specifying the operation
>result in the UML diagram.
[sound of head exploding again]
Interesting, but sounds more like computer science research than "business
objects", at least for me. I like making cool tools, but I'm biased more
towards automating the parts of implementation that I find tedious than
creating validation systems for mistakes I rarely make. :) Again, this
may be a wonderful area for third-party framework contribution. :)
>[Phillip]
>TW currently (in CVS) supports features that are either a "value",
>"reference", "set", or "sequence". A "value" is an atomic, immutable field
>which can in essence only be set, cleared, or retrieved. A "reference" is
>a (single-valued) link to another Element. "Set" and "sequence" are
>multi-valued links. Right now, a "sequence" doesn't do anything more than
>a "set", as its interface is not fully fleshed out.
>
>Note, by the way, that a feature is NOT a variable in the usual sense. A
>"sequence" feature is not the same as a Python sequence object. It
>represents instead the *idea* of a feature of the element, and provides
>methods to manipulate or retrieve the associated data. But it is not the
>data. You cannot change it by assigning to it in a Python sense. You must
>use the feature's manipulation methods like .set() or .addItem() or
>.removeItem().
>
>[Albert]
>A python sequence likewise only represents the *idea* of a sequence.
>It doesn't have to be a list. Even lists hold only references,
>like everything else in Python, not the "actual" data.
>
>The reference could be to a proxy which might be implemented as a
>Corba or XPCOM remote object, or in whatever way you like. Likewise
>for the "actual" sequence itself, as distinct from it's contents,
>which can be acquired however you like.
>
>The only reason you can assign to it in a Python sense is because the
>interface specifies operations like __setitem__ and __delitem__ with
>required semantics to provide the convenient assignment shorthand [n].
>The usual operations for .append(), .pop() etc would be more easily
>recognized by python programmers, with .extend() also being
>important for efficiently adding a batch of items. += and -= seem
>to be natural shorthands for .append() and .remove().
>
>Note that xml_object syntax above and the 4ODS syntax below does
>"walk, talk and quack" like a Python sequence, despite being a
>similar layer over an external schema.
>
>You might prefer to reserve this shorthand seq[5] = whatever for
>future use with qualified associationEnds since it resembles the OCL
>notation for qualification:
>
>thatList = object.thatRole[qualifier1, qualifier2]
>
>thatSeq = object.thatRole(5)[qualifier1, qualifier2]
>
>to limit the result to a maximum length of 5.
>
>However any qualifier could be a tuple rather than an integer so
>nothing would prevent using [] for both.
>
>Qualified associationEnds could be very useful to implement the
>.Where stuff for queries below and could be defined by a Logical
>data model aspect that knows about indexes and candidate keys in
>the physical data model.
>
>A syntax for returning cursors is also needed as in the Scott Ambler
>paper. Perhaps also based on the above (5) with extra parameters,
>and with additional operations for scrolling the sequence object
>returned as a cursor/iterator instead of just list operations.
>Weak references to cursors could be used to reduce the danger of
>tying up database resources accidentally.
[more head exploding]
I'll have to take a look at the above further at some point in time. I
haven't given much thought as yet to advanced sequence and
qualified-association manipulation in Features. These would probably be
more advanced Feature types, and I'd like to get some basic ones working
first!
>I don't think your explanation above clarifies the actual
>implementation. The implementation looks to me a bit like ZPublisher
>URL traversal (and could perhaps borrow some useful implementation
>details like hooks etc from that code).
Features are implemented as shared immutable objects in the associated
Element class' dictionary, which are bound to their Element instance when
retrieved from that instance. They are very much like Python "method"
objects. The main difference is that they have methods of their own. The
implementations of these features then generally delegate operations back
to the Element or to an associated Service, but of course a particular
Feature can perform operations directly if need be.
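A rough sketch of that binding behavior (the names are hypothetical, and
whether TW uses exactly the acquisition __of__ hook shown here is my
assumption; the real mechanism targets Python 1.5.2):

    class Feature:
        # Shared and immutable; lives in the Element class' dictionary.
        def __init__(self, name):
            self.name = name            # features know their own name

        def __of__(self, element):
            # Called on retrieval from an instance, much as a function
            # becomes a bound method; returns a lightweight binding.
            return BoundFeature(self, element)

    class BoundFeature:
        def __init__(self, feature, element):
            self.feature = feature
            self.element = element

        def get(self):
            # Delegate storage back to the Element (or a Service);
            # _getData/_setData are hypothetical storage calls.
            return self.element._getData(self.feature.name)

        def set(self, value):
            self.element._setData(self.feature.name, value)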
>[Phillip]
>Anyway. These interfaces are implemented by Feature objects. Feature
>objects contain metadata about themselves. For example, references know
>what kind of Element they point to, and what the inverse relationship (if
>any) is. All features know their name, so the object "someTask.DueDate"
>knows it is called "DueDate".
>
>Features should not be confused with accessor methods. The Task class, for
>example, might have its own .setDueDate() method, and not grant access to
>the .DueDate.set() method. The idea is that domain methods go in the
>Element, and those methods may delegate to those of the Element's Features.
> (It's possible, however, to create automated mappings that would "publish"
>certain features' methods into default getter/setter method, and it's also
>possible to simply allow direct access to Features for many application
>scenarios.)
>
>[Albert]
>Sounds (and looks in the code) more and more like the dark inner workings
>of Zope published objects ;-)
>
>As I understand it the code is full of aq_base.aq_parent etc because you
>need to use keys derived from the Element instance self as the link
>reference
>known to the association, not keys derived from the role end (and ditto for
>looking up values of attributes other than association ends).
Don't read too much into the "InMemory" model code as it stands; it has a
lot of complexity intended to avoid circular references. (Yes, I know
Python 2.1 has weakrefs and GC both - I'm still developing with and for
1.5.2 at present.) The "Persistent" model will be able to ignore a lot of
that stuff in favor of simply using object references, and other models may
do even weirder things than the InMemory model.
>The ZPublisher way of handling this might be cleaner
>and could allow for future extensions to carry messages from one
>object to another along navigable links (like REQUEST and RESPONSE
>in Zope).
I'm lost as to what ZPublisher has to do with any of this, to be honest.
>At present Transwarp includes the concept of an "other" end, which
>precludes future support for associations with more than 2 ends,
>and also does not provide for association classes (which are very
>common in an RDBMS datamodel).
>
>It would be better to instead provide navigation in the model
>from association ends to associations that are a sequence
>of association ends as in the UML model instead of the one way
>navigation from association to association end in your present design.
>
N-ary associations and association classes are trivial to implement by
composing smaller references. To my knowledge, explicit support for
N-aries and association classes doesn't offer anything useful enough to
justify the much greater complexity. Consider, by the way, that the MOF
(CORBA Meta Object Facility) only handles binary associations, and it's a
sufficient metamodel to model UML itself! I figured that was sufficient
power for TransWarp.
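For example, an Assignment "association class" can just become an ordinary
Element composed of two binary references (a sketch; the Reference stub
here stands in for whatever reference-Feature definition a model generates):

    class Reference:
        # Stand-in for a reference-Feature definition (hypothetical).
        def __init__(self, targetType, inverse=None):
            self.targetType = targetType
            self.inverse = inverse

    class Assignment:
        # One Element plus two binary references, instead of
        # first-class n-ary association support.
        performer = Reference('Performer', inverse='assignments')
        task = Reference('Task', inverse='assignments')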
>Then the relationship operations can be implemented in the Association
>object as suggested above for Law of Demeter and there is no need for
>direct navigation to a reverse end. This two way navigation is needed
>in the model only and of course does not preclude one way links in the
>actual storage (or the implementation of {changeable, addOnly, frozen}
>and of qualifiers).
TransWarp uses the other-end pointer as a way to notify the other end of a
change in the relationship instance. This is valuable when the
relationship crosses two forms of storage and pointers are kept in each,
for example.
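Crudely sketched (every name here is invented for illustration):

    class ReferenceEnd:
        # One end of a binary link; 'inverse' is the name of the
        # feature at the other end.  Assumes getattr(target, inverse)
        # yields the other end's bound feature.
        def __init__(self, element, name, inverse):
            self.element = element
            self.name = name
            self.inverse = inverse

        def addLink(self, target, notify=1):
            # Record the pointer in this end's own storage...
            self.element.__dict__.setdefault('_' + self.name, []).append(target)
            if notify:
                # ...then notify the other end, which may keep its
                # pointer in a completely different back-end.
                getattr(target, self.inverse).addLink(self.element, notify=0)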
>[lots of relationship stuff skipped]
Again, I think you make this all much more complex than it needs to be!
Remember that TW is *not* a general persistence or object management
framework. Following the YAGNI rule, I am not implementing things I have
not yet needed, and the lesson of ZPatterns is that very complex apps can
be implemented very simply if you follow a few simple patterns. I do not
seek to make TW feature-complete from a theoretical point of view, but from
a practical one.
>BTW did you consider keeping the entire model in a set of kjGraphs,
>one for each type of meta-model relationship, with 2 node types for
>metamodel elements and metamodel links and kjGraph links between them?
Yes. One could create a GraphModel aspect that did that. The real
advantage would be if said GraphModel aspect reimplemented the Querying
interface so that .Get() and .Where() were implemented using the kjGraph
multiplication and set operations. At some point I'll do that - most
likely when I am using TW to work on large enough UML models that the
InMemory model begins to show slow query performance.
>I would have thought that might have been both easier and more flexible.
>The current implementation seems to lose the meta-model and tie the model
>too closely to the running model instance. This could obstruct future
>enhancements for "executable UML diagrams" that could integrate
>development and debugging eg using the Corba specification for operations
>to dynamically change the UML model in accordance with the UML metamodel.
Think assembler and linker/loader, not interpreter here. The model is a
blueprint for the assembly of components - once they are assembled there is
no need for the blueprint. If you need a debugging or other aspect which
contains more metadata, weave it in! TW is designed to put in as little or
as much as the developer assembling the aspects decides.
>[Phillip]
>The basic Feature interface is all about data storage, but there is also a
>"Querying" interface consisting of two methods: .Get(name, recurse) and
>.Where(predicate). To support querying, this interface is implemented by
>both Elements and Features.
>
>[Albert]
>I was glad to see your other message indicating that the "Querying" is
>primarily intended for model introspection in memory since as you stressed
>there, generic ad hoc queries are quite difficult and not all that useful,
>compared with direct SQL. All that's really essential for full automation
>is navigation through the model (including qualification) and that is
>*much* easier than ad hoc queries (in particular restriction to natural
>equijoins with no <, > etc enables straightforward derivation of
>navigation SQL queries directly from the model - see Gadfly).
Keep in mind that TW has no real distinction between data, model,
metamodel, etc. TW deals with *aspects*. A StructuralAspect can be
assembled by code, written by hand, loaded from a metamodel, generated from
a model, etc. TW uses an XML file to load the UML metamodel - and
generates a StructuralAspect for it. In future, TW will use an XMI file to
load a UML model - as an *instance* of the UML StructuralAspect - and
generate a StructuralAspect from it. So a UML model is just another kind
of application data to TransWarp, which is in that sense an application
that manipulates UML models.
The big breakthrough for TransWarp was realizing, once I wrote a generator
to go from the UML Metamodel to a set of classes, that all I needed to do
was make it possible to generate the same kind of metadata from a UML
model, and TransWarp would be able to "assemble itself".
>However you seem to be separately also saying that the Feature interface is
>all about data storage and this seems to be implicit in your general
>explanations of what Structural Aspects are about. That may be too limited
>a description of their potential.
It is. I was addressing the primary cross-aspectual interface of Features.
Horizontal frameworks can add other interfaces to Features to allow HTML
or GUI rendering, etc.
>Structural aspects can also be used to describe the structure of
>UML collaborations (as opposed to the interaction diagrams that also
>show actual messages being exchanged across the structural links).
I don't know enough about this to be helpful. But I'm beginning to suspect
that you are very focused on doing computation on UML models themselves,
not on the applications which implement what the models describe (which is
my focus). However, TransWarp is probably more useful than you realize for
what you're doing.
Because the UML metamodel is itself a TW StructuralAspect (see
TW.UML.Model), you can add whatever aspects you like to do whatever
computations you like across any part of an entire UML model -
collaborations, statecharts, constraints, comments... EVERYTHING. As soon
as I've completed the XMI refactoring, you'll be able to do this. You
won't have to wait for me to write the UML Model -> StructuralAspect
generation code (which will of course be an aspect that gets mixed into
TW.UML.Model).
Of course, initially you'll only be able to use the InMemory
StructuralModel to hold your UML models, so you'll have to read and write
them via XMI. However, as soon as any other StructuralModels are
implemented that support the Feature kinds needed by UML Models, then
you'll be able to persistently store UML models in those kinds of storage.
>According to the UML 1.4 metamodel any model element can contain an ordered
>list of other model elements as template parameters, each of which is
>associated
>with another model element as a default (figure 2-9 Core Package - Auxiliary
>Elements).
>...
>My interpretation of the Transwarp core is that essentially it provides
>machinery that treats python objects as supplier templates, (always with
>default parameter values) and maps them to instantiate client model
>elements in which the template parameters are replaced (by matching
>names, or using "transforms") with actual binding arguments.
I haven't looked at template parameters and the like as yet. But it's an
interesting thought for future research.
>BTW an important part of the code I still don't understand fully is
>the use of "transforms". You mentioned possibilities for chaining
>methods and concatenating __implements__. Presumably implementations
>of those two would be included for use by Component developers as
>well as horizontal framework developers. If you could provide
>alpha implementations now that could help clarify and document
>how the transforms code works and what it is about.
I can explain the idea a bit better, perhaps. Transforms are just objects
that receive various event messages during the weaving process, which give
them the opportunity to mess with the innards of the stuff being woven.
They're a little more general than FeatureDefs, which are bound to specific
named features which are being assembled. Transforms get the opportunity
to tamper with (i.e. transform) the contents of a namespace which is about
to be woven into the aspect, and to alter the results afterward. They also
get a "last chance to meddle" message just before the actual components get
created from the woven-together aspects.
FeatureDefs, by contrast, apply to a named item in the namespace, and once
bound to a particular name, they are sent all future weaving attempts for
that name. So, if you define attribute Foo in class Bar, and a FeatureDef
gets bound to Bar.Foo by one of the aspects, all aspects which are layered
"after" that aspect will have their Bar.Foo attribute woven by that
FeatureDef. So, if you implemented a MethodChain FeatureDef, it could
chain methods together as each new aspect is woven.
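A sketch of that MethodChain idea (the event names are invented; the real
FeatureDef event protocol isn't fully documented yet):

    class MethodChain:
        def __init__(self):
            self.methods = []

        def weaveItem(self, method):
            # Called (hypothetically) each time a newly-woven aspect
            # supplies a method under this FeatureDef's name.
            self.methods.append(method)

        def finish(self):
            # Produce a single method that calls each contribution in
            # the order the aspects were layered.
            methods = self.methods[:]
            def chained(self, *args, **kw):
                result = None
                for method in methods:
                    result = method(self, *args, **kw)
                return result
            return chained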
>I'm guessing that fancy transforms would provide the "Connector"
>concept of Aspectual Components (and the attribute Mapper for an
>object-relational persistence layer).
They certainly have many possible applications, and I doubt that I have
thought of anywhere near all of them. :)
>From that perspective, it ought to be possible to extend Transwarp to
>implement the Catalysis style of UML modelling in which "framework"
>template packages for collaborations, including design patterns, are
>"applied" with substitution of template parameters by binding actual roles:
Sounds like you're well on your way to designing a UML model editor using
TransWarp. I'd love to see it when you finish. :)
>* 2) Distribution *
>
>The required "Storage" interfaces could also be used for aspects
>concerned with Distribution rather than storage (Corba, XPCOM, SOAP,
>XML-RPC etc).
Yep. But I consider distribution to be "storage", so I lumped this together.
>As this is already ridiculously long,
No kidding! ;) You make me look positively laconic by comparison. :)