[TransWarp] RFC: PEAK package organization, continued
Phillip J. Eby
pje at telecommunity.com
Thu Jun 20 08:50:16 EDT 2002
At 01:57 PM 6/20/02 +0200, Ulrich Eck wrote:
>>Does anybody have any opinions one way or the other on this? Any
>>suggestions for better names of the individual functions?
>Will the standard usage of PEAK be the same as in TW:
>from peak.api import * ??
>if yes, does this mean that one needs to write
>"binding.setupModule()" at the end of every module??
That would be the case, yes.
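For illustration, here is a plain-Python sketch of the *concept* behind module inheritance: a later "module" starts from a base module's namespace and overrides pieces of it, much as a subclass overrides a base class. The `derive_module` helper is purely illustrative; it is not how PEAK's `binding.setupModule()` actually works.

```python
import types

def derive_module(base, **overrides):
    """Crude stand-in for module inheritance: copy a base module's
    namespace into a new module, then layer overrides on top."""
    mod = types.ModuleType(base.__name__ + "_derived")
    mod.__dict__.update({k: v for k, v in base.__dict__.items()
                         if not k.startswith("__")})
    mod.__dict__.update(overrides)
    return mod

# A toy "base module" with two module-level definitions.
base = types.ModuleType("base")
base.GREETING = "hello"
base.TARGET = "world"

# A derived variant that overrides just one definition.
variant = derive_module(base, TARGET="PEAK")
print(variant.GREETING, variant.TARGET)  # hello PEAK
```

In actual PEAK/TransWarp usage, a module would instead end with a `binding.setupModule()` call, and derivation would be driven by the framework rather than an explicit helper like this.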
>isn't module-inheritance one of "the" central parts of peak ??
No, that's part of the difference between PEAK and TransWarp. PEAK is
focused on what you do with things, not on what the things are. It has
sort of bothered me that 'setupModule()' by itself doesn't say much about
what it does. I think 'binding.setupModule()' more clearly says, "set this
module up for binding", even though it still doesn't say to what or from what.
I thought about having the syntax be more like 'binding.deriveModule()' to
do the inheritance, or maybe 'binding.allowExtension()', but setupModule()
does both of these things. (And it imposes metaclass constraint
inheritance, as well.) Calling it by a name that spelled out all of that
seems a little too verbose. :)
Anyhow, from a usage point of view, the module inheritance is just another
tool for creating, extending, and instantiating components; it's just that
they're module-level components.
In any case, if you have any ideas for a better name than 'setupModule',
please suggest it now.
>if you think so .. I'd prefer to have it stay as it is now ...
I really don't think it's that central. I think it's pretty *basic*, and
in the tutorial it'll probably be the first part of the binding package I
cover. That's because it's also fairly *ubiquitous*, in the sense that
you'll tend to at least stick a setupModule() call at the end of a lot of
modules. But *central*? Not really. It seems to me that after a brief
period it disappears from consciousness altogether; it's no more central to
your working focus than class inheritance is.
>>Comments, anyone? Is this too broad a scope? Is there a better name
>>than peak.deployment? Anything that should be added or removed from the
>>proposed scope list?
>I'm not sure it helps use of the package if, e.g., the DataModel lives in a
>different place than the database drivers, or the naming package and its
>providers come from different locations.
>I'd prefer to have the DataModel with its drivers in a well-known place,
>and the same for naming and its providers. That's just my feeling speaking.
Keep in mind that providers can appear anywhere outside of PEAK as
well. This is only in relation to providers actually distributed as part
of the PEAK core. Also, you should be aware that one of the ideas behind
the naming package is that you don't have to know where the driver
module(s) are in order to use them. If you do a
lookup('sybase://foo:bar/baz') you get back the connection object and
that's that. Ideally, the object you get should be a ManagedConnection
with an interface that allows you to do everything you need to do with an
SQL connection, without requiring you to know what driver is behind it.
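A minimal plain-Python sketch of that idea: the caller names a resource by URL and gets back a ready-to-use connection object, never importing the driver module directly. The registry, the `register` decorator, and the returned string are all illustrative stand-ins, not PEAK's naming API.

```python
from urllib.parse import urlsplit

_drivers = {}   # scheme -> connection factory, registered by driver modules

def register(scheme):
    """Let a driver module announce which URL scheme it handles."""
    def deco(factory):
        _drivers[scheme] = factory
        return factory
    return deco

@register("sybase")
def _sybase_connect(parts):
    # A real driver would lazily import its DB module here and connect;
    # the caller never needs to know which module that is.
    return "ManagedConnection(sybase, host=%s)" % parts.hostname

def lookup(url):
    """Resolve a provider URL to a ready-to-use connection object."""
    parts = urlsplit(url)
    return _drivers[parts.scheme](parts)

conn = lookup("sybase://foo:bar/baz")
```

The point of the indirection is exactly the one made above: the lookup caller depends only on the URL scheme, not on the location of the driver module.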
The idea is that when we integrate peak.binding with peak.naming, you'll be
able to say something like this:
connection = binding.bindTo('config:MyDatabase')
And the provider URL for 'MyDatabase' will then be looked up in the
application configuration, and a managed connection object bound to the
connection attribute of any instance of MyDataModel.
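The mechanism can be sketched in plain Python with a descriptor: the attribute is resolved through configuration on first access, so the class never hard-codes which database it talks to. The `bindTo` class, `CONFIG` dict, and `connect` helper below are hypothetical stand-ins for PEAK's real machinery.

```python
CONFIG = {"MyDatabase": "sqlite:///:memory:"}   # illustrative app config

def connect(url):
    # Stand-in for driver resolution; a real version would return a
    # managed connection object for the given URL.
    return "ManagedConnection(%s)" % url

class bindTo:
    """Non-data descriptor: resolve the configured name on first access,
    then cache the result on the instance."""
    def __init__(self, name):
        assert name.startswith("config:")
        self.key = name[len("config:"):]
    def __set_name__(self, owner, attr):
        self.attr = attr
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        value = connect(CONFIG[self.key])   # look up via configuration
        setattr(obj, self.attr, value)      # instance attr now shadows us
        return value

class MyDataModel:
    connection = bindTo("config:MyDatabase")

model = MyDataModel()
```

Because `bindTo` defines only `__get__`, the cached instance attribute takes precedence after the first lookup, so the configuration is consulted exactly once per instance.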
Of course, this degree of back-end independence is dependent on us
designing a good enough driver mechanism that you can write sufficiently
DB-independent code. But if you can't, then you'll have to write variants
of your data model using module inheritance... and your application will
do something like *this*:
DataModel = binding.bindTo('config:MyDataModel')
And in your application configuration you'll specify which variant of your
data model should be imported, right alongside your specification for what
database connection parameters will be used for it.
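The variant-selection part can be sketched the same way: the configuration names which module to import, and the application resolves it at startup instead of importing a variant directly. Here the stdlib `json` module stands in for a data-model variant; the `bind_to` helper and `CONFIG` dict are illustrative only.

```python
import importlib

# Illustrative configuration: which data-model variant to load.
# 'json' is just a stand-in module so the sketch actually runs.
CONFIG = {"MyDataModel": "json"}

def bind_to(key):
    """Import whichever module the configuration names for this key."""
    return importlib.import_module(CONFIG[key])

DataModel = bind_to("MyDataModel")
```

Swapping data-model variants then means editing the configuration, not the application code.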
So, I see the connection drivers themselves as being somewhat irrelevant if
we have an API that sufficiently masks their differences. The idea is that
the deployment package is focused on masking deployment differences, of
which database drivers are one. Conversely, the DataModel system (or
whatever it will end up looking like in PEAK), is focused on specifying
models that are part of the application, rather than part of its deployment
environment. It would almost belong more in the model package than in the
deployment package.
>>So far, the proposed PEAK overall layout looks like this:
>>peak.api - quick access to API functions and subpackage API's
>>peak.binding - tools for connecting and instantiating components
>>peak.naming - interpretation of names and their referents
>>peak.model - structural components of an application's domain model
>>peak.deployment - interfacing with the environment an app lives in
>>(This is ignoring peak.metamodels, peak.tests, and peak.util, which are
>>not normally used by applications at this time.)
>>Thoughts? Are we headed in the right direction?
>do you have an idea how much of the restructuring is done yet? Will the
>basic concepts stay as they were invented for TransWarp (binding and model,
>for example), and is there an approximate timeline for peak-0.2??
I haven't done what's in this proposal yet, but most of the code re-org and
renaming is done now. I haven't renamed the binding Component methods yet
(e.g. getService -> lookupComponent, etc.).
As for conceptual stability, I think it's increasing as we go along. If
you look at the proposals I've been putting out, there's a clear
progression from broad proposals to narrower and narrower refinements of
the new structure. There hasn't been any flip-flopping, and the concepts
are looking very clear and easy to explain - at least from my point of
view. I'm also under a good bit of pressure to stabilize these concepts
very quickly, as I will have new staff to train on them next month! That
will be the acid test of whether the reorganization and refocus have been
successful. That is, whether I can explain PEAK to a competent enterprise
programmer who has no interest in AOP, GP, etc. for their own sake, but who
is interested in getting their job done better and more quickly.
There is no timeline as yet. But I will probably finish the re-orgs this
weekend, and begin work on updating all the documentation. When I have the
reference docs for PEAK cleaned up, I'll issue a preview release, and then
start in on the tutorial. The preview release will not necessarily have a
fully polished peak.naming package, although it is at least basically
functional now. The 0.2 final release will add a tutorial for the binding
package.
I then want to accelerate the release cycle, and begin pumping out 0.3,
0.4, etc. on a more frequent time schedule, putting fewer new features in
each time, but including documentation with them. How this will actually
work out in real life is uncertain to me. I'm about to spend at least a
work week in requirement gathering meetings for a large project, and I
don't know what impact the dates discussed there will have on all this.
>we decided to stay with the current TransWarp till the peak restructuring
>settles down a bit, so we cannot help testing your stuff at the moment ..
>we need to finish a preview of our stuff by the beginning of autumn, and
>we cannot catch up with all the changes at the moment.
Not a problem. I don't recommend anybody try to follow what's going on in
the restructuring right now. It will settle very soon, though, as I don't
think there's anything else I'm going to propose as far as
restructuring. Everything I've got in my head right now is either about
adding new facilities to PEAK (such as the AppUtils and MetaDaemon ports),
or about documenting the existing stuff.
Well, there's also the database package refactoring. But that's not going
to play a part in PEAK 0.2, which will simply not provide any database
facilities whatsoever. PEAK 0.3 will probably focus simply on providing
uniform access to ManagedConnections for Python DBAPI drivers, and not
provide a DataModel layer as such. "Uniform access" is unfortunately a
rather big project, because this means dealing with crap like the fact that
different backends take different parameter syntaxes! Developers used to
JDBC and even Perl DBI are probably not going to find that kind of
inconsistency very palatable.
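The inconsistency is real: PEP 249 lets each driver choose its own `paramstyle`, so the same query has to be written differently per backend. A small runnable illustration, using the stdlib `sqlite3` driver (which uses `qmark` style); the query table is roughly what a uniform ManagedConnection layer would have to hide.

```python
import sqlite3

# The same logical query, spelled once per PEP 249 paramstyle.
QUERY = {
    "qmark":    ("SELECT * FROM users WHERE id = ?",      (42,)),
    "format":   ("SELECT * FROM users WHERE id = %s",     (42,)),
    "numeric":  ("SELECT * FROM users WHERE id = :1",     (42,)),
    "named":    ("SELECT * FROM users WHERE id = :id",    {"id": 42}),
    "pyformat": ("SELECT * FROM users WHERE id = %(id)s", {"id": 42}),
}

# sqlite3 advertises 'qmark'; another backend would pick a different entry.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.execute("INSERT INTO users VALUES (?)", (42,))
sql, params = QUERY[sqlite3.paramstyle]
row = conn.execute(sql, params).fetchone()
```

A uniform layer would accept one canonical syntax and translate to whatever the underlying driver declares.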
Anyway, functionality corresponding to TW's record-oriented data management
will probably not appear until 0.4. If you decide to make use of PEAK before
that time and want to keep the TW stuff, you'll need to port it. Since the
model and binding stuff isn't changing fundamentally, you should find it a
matter of mostly doing search-and-replace operations.