[TransWarp] Tips on using model, storage, config, and naming (was Re: First attempt to use the storage package in PEAK)
Roché Compaan
roche at upfrontsystems.co.za
Sun Dec 29 09:48:10 EST 2002
* Phillip J. Eby <pje at telecommunity.com> [2002-12-29 17:58]:
> At 10:31 AM 12/29/02 +0200, Roché Compaan wrote:
> >Thanks for all the great tips on the config and naming packages. I love
> >the fact that configuration variables can be aliased, are in one place
> >and that components can easily discover them.
>
> Eh? I think I know what you mean by "aliased", although technically
> they're not aliased; it's just that you can define one set of properties in
> terms of another.
I meant that I don't have to hardcode configuration data, like a database
connection string, in components; instead I can use a PropertyName that is
set somewhere more appropriate, such as a configuration file, where it can
be modified without changing anything on the component itself.
This felt like "aliasing" :o)
> But "in one place" is something I'd disagree with; you can put them in one
> place if you want, but every object can define its own value for a
> property, e.g.:
Not to take anything away from the ability to define config data almost
anywhere, I think that having (specifically global) configuration data
in a .ini-format configuration file is a big, big plus.
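To make the idea concrete, here is a minimal, PEAK-free sketch of keeping a connection string out of the component, using only the standard library's configparser. The section name, property name, and ContactDM class are illustrative assumptions, not PEAK API:

```python
# Illustrative only: plain configparser, not the PEAK config package.
from configparser import ConfigParser

INI = """
[myapp]
; the component never hardcodes this; it names the property instead
database.url = mysql://user:pass@localhost/contacts
"""

config = ConfigParser()
config.read_string(INI)

class ContactDM:
    """A component that looks up its connection string by property name."""
    property_name = "database.url"   # hypothetical property name

    def connection_url(self):
        # the value lives in the config file, not in the class
        return config["myapp"][self.property_name]

dm = ContactDM()
print(dm.connection_url())  # mysql://user:pass@localhost/contacts
```

The point is the indirection: the component knows only a name, and the value behind that name can change without touching the component.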
But at the moment I really value these small misunderstandings because
at the current rate you'll end up documenting most of PEAK before 2003
;-)
> As for whether components can "easily discover them", that's an interesting
> comment. There isn't any discovery capability for properties; you can look
> one up if you know its name, but there's no way to find out what properties
> do or don't exist. So, for example, you can't iterate through all possible
> URL scheme names. The property namespace is potentially infinite. You
> could do this, for example:
>
> [peak.naming.schemes]
> foo.* = # some code here to create a new scheme handler based on the scheme name
>
> This would let you use URL schemes of the form "foo.bar:", "foo.baz:",
> etc. This kind of capability is incompatible with true "discovery",
> although Ty and I have batted around the idea of making it possible to at
> least discover that a rule for 'foo.*' exists. This would really just be
> *rule* discovery, not *property* discovery.
In the rule foo.* = # some code, I can access the value matched by *
through the property name passed to the rule. Correct?
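If I read the quoted design correctly (an assumption on my part), the wildcard rule's code receives the full property name and recovers the part matched by '*' from it. A PEAK-free sketch of that lookup, with made-up rule-table names:

```python
# Illustrative sketch of wildcard property rules, not the PEAK implementation.
RULES = {
    # a wildcard rule gets the *full* property name, so it can
    # recover the part matched by '*' itself
    "peak.naming.schemes.foo.*":
        lambda name: f"handler for scheme {name.rsplit('.', 1)[-1]!r}",
}

def lookup(name):
    if name in RULES:
        return RULES[name](name)
    # fall back to the longest matching wildcard rule
    prefix = name
    while "." in prefix:
        prefix = prefix.rsplit(".", 1)[0]
        rule = RULES.get(prefix + ".*")
        if rule is not None:
            return rule(name)
    raise KeyError(name)

print(lookup("peak.naming.schemes.foo.bar"))  # handler for scheme 'bar'
```

This also shows why the namespace is "potentially infinite": any suffix after foo. produces a handler, so there is nothing finite to enumerate, only rules to discover.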
> >There is one big concern that
> >is not necessarily the responsibility of a framework like PEAK and has
> >more bearing on the implementation of the problem domain namely, 3rd
> >party customisation. I mention it in the hope that you can already see a
> >pattern that will work well with PEAK.
>
> The config package gives you all the hooks you need to deal with this sort
> of thing, but of course you have to implement your basic application or
> framework in a suitable way. This could be as simple as using:
>
> config.setupModule()
Aah, I forgot about that. This is AOP in action, is it not?
So in my app I have Contact.py that defines my Contact class. When
somebody wants to customise this class, they can simply do the following
in their CustomContact.py module:

    from peak.api import config
    import Contact

    __bases__ = Contact,    # tuple of base module(s) to inherit from

    class Contact:
        """ here they extend and override as they please, while still
            honouring the IContact interface and any other required
            bindings.
        """

    config.setupModule()
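For intuition only, the effect (assuming setupModule combines same-named definitions across modules) resembles ordinary subclassing; this plain-Python analogy is not what setupModule actually executes:

```python
# Plain-Python analogy for what module-level inheritance achieves:
# the customised Contact extends the original while keeping its contract.

class Contact:                      # original, as in Contact.py
    def full_name(self):
        return f"{self.first} {self.last}"

class CustomContact(Contact):       # third-party customisation
    def full_name(self):
        # override, but still honour the original interface
        return super().full_name().upper()

c = CustomContact()
c.first, c.last = "Roché", "Compaan"
print(c.full_name())  # ROCHÉ COMPAAN
```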
> In principle, there is no limit to how deep the inclusions or meta-levels
> can be. In practice, however, I think that two metalevels of configuration
> (instance and application), with "n" levels of inclusion and the use of a
> PEAK_CONFIG-specified environment configuration, should be plenty of
> flexibility. :)
I can see it now. I'm sold :-)
> >Whereas CMF Skins allows for generous customisation of the user
> >interface and provides you with a perfect place to add custom logic
> >through scripts and external methods it only solves part of the problem.
> >One still needs something that can accommodate 3rd party modifications
> >to class schemas that won't be heavily disrupted by upgrades of the app.
>
> I think that simplest way to do this sort of thing in PEAK would probably
> be to have a property that specifies where to import the "model.Model" that
> describes the application's problem-domain object model. The
> solution-domain components would have 'bindToProperty()' attributes to
> retrieve the model. Something like this:
>
> class someDM(storage.EntityDM):
>
> model = binding.bindToProperty('myapp.Model')
> defaultClass = binding.bindTo('model/TheClassIWant')
>
>
> [myapp]
> Model = importString('some.package:MyDomainModel')
>
>
> Of course, it may be simpler to do it using a wildcard rule, and something
> like:
>
> class someDM(storage.EntityDM):
>
> defaultClass = binding.bindToProperty('myapp.Model.TheClassIWant')
>
> [myapp]
> Model.* = # code to load the class named after the 'myapp.Model.' prefix
This is super cool!
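The Model.* wildcard rule can be mimicked in plain Python: strip the property prefix and import whatever name remains. A self-contained sketch, in the spirit of the quoted importString usage but not PEAK's actual implementation (the prefix and demo module are assumptions):

```python
# Illustrative sketch: loading a class named after a property prefix,
# as a 'myapp.Model.*' rule might, using only the standard library.
import importlib

def load_by_property(prop, prefix="myapp.Model.",
                     module="collections"):   # assumed module for the demo
    """Treat everything after the prefix as a name to import."""
    if not prop.startswith(prefix):
        raise KeyError(prop)
    name = prop[len(prefix):]
    return getattr(importlib.import_module(module), name)

cls = load_by_property("myapp.Model.OrderedDict")
print(cls.__name__)  # OrderedDict
```

A DM asking for 'myapp.Model.TheClassIWant' would then get the class back without the application hardwiring where the model lives.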
> >I've attached the mysql driver if you want to include it - it was
> >fairly simple to write since MySQLdb implements the DB API.
>
> I'm afraid MySQL gives Ty and me the creeps, and we don't feel comfortable
> including a driver for it with PEAK for a number of reasons. Because we do
> try to "think of everything", we are concerned about its locking and
> transactional semantics and how they would interact with PEAK's caching,
> locking, and transactional patterns. We couldn't say whether or not it
> will work correctly, and so are totally uncomfortable including it in an
> "enterprise application kit".
Given the amount of effort that went into the transactional capabilities
that were recently added, I think you owe it at least some benchmarks.
>
> In the environment where I work, *PostgreSQL* is considered a lightweight
> database for prototyping purposes, and Sybase or Oracle are what you use
> for production applications. One of our apps has a 30GB database on a RAID
> array, and its dedicated database server hardware is a 64-bit,
> four-processor machine, with an identical warm standby server continuously
> replicated from the first. PEAK really is intended to work at this sort of
> "enterprise" scale, and using MySQL for production in such an environment
> would be quite insane, IMO.
Huh? Sure, it's supposed to work at that scale, but you will certainly not
make an environment like this a requirement in the readme. As far as I
can see, PEAK aims to provide far more than just robustness and
scalability. For one, it provides a serious developer with a framework
that adheres to proven software design patterns. Even very small
apps can benefit from PEAK from a design point of view.
> So, while I'm sure that you would never use it for an application
> environment it wasn't suited for, I'm not comfortable with making any
> implication that PEAK supports or endorses its use in any way, because
> PEAK's intended audience *does* include people who would abuse MySQL, and
> in the process would give other people an excuse to trash PEAK, Python, and
> Open Source in general for their lack of attention to "enterprise-level"
> concerns.
My guess is that the type of people you describe will steer right past
PEAK and click away in their decorated build-it-in-a-day IDEs. If you
can trash PEAK, Python and Open Source just by using MySQL, you can
certainly trash the trio by abusing one of them directly. I think you
worry too much about gate crashers in the audience, or maybe you just
think MySQL is really, really creepy ;-)
--
Roché Compaan
Upfront Systems http://www.upfrontsystems.co.za
More information about the PEAK mailing list