[TransWarp] ZPatterns Parallel
Phillip J. Eby
pje at telecommunity.com
Thu Nov 21 15:17:14 EST 2002
At 11:51 PM 11/20/02 -0400, beno wrote:
>I've read the tutorial. I would like to see the direct parallels called
>out explicitly between the PEAK architecture and ZPatterns (perhaps as an
>aside for the few who have delved into it). For example, where is my
>Specialist? My SkinScript? etc.
You're at least 3 chapters ahead of what I've written so far. ;)
But so that you can get started, I'll go ahead and cover some things
here. You're looking for the peak.storage package. These classes:
storage.EntityDM
storage.FacadeDM
storage.QueryDM
are like Racks and Specialists from ZPatterns. ZPatterns didn't really
have anything quite like QueryDMs, however. While in ZPatterns you would
generally combine a Rack and a Specialist to do almost anything, in PEAK
you will generally just use one DM for each purpose. If
you need the equivalent of a Specialist with multiple Racks, the parallel
in PEAK would be a FacadeDM that retrieves from multiple EntityDMs. The
equivalent of a Specialist with one Rack can be done with just one EntityDM
in PEAK.
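Roughly sketched (untested, and the '_retrieve' override name and the
component names are from memory -- check peak.storage.data_managers and
peak.storage.interfaces for the real FacadeDM hooks), a multi-Rack
specialist might look like:

from peak.api import *

class XYZFacade(storage.FacadeDM):

    # two underlying EntityDMs, found by component name
    localXYZs  = binding.bindTo('localXYZs')
    remoteXYZs = binding.bindTo('remoteXYZs')

    def _retrieve(self, oid):
        # route the lookup to whichever "rack" owns this key
        if oid.startswith('remote:'):
            return self.remoteXYZs[oid[len('remote:'):]]
        return self.localXYZs[oid]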
There is no SkinScript. Instead, you subclass the appropriate DM type and
override methods. For example, to create an EntityDM, you define its
defaultClass (like a Rack's load class) and override its load() method, like:
from peak.api import *

class XYZSpecialist(storage.EntityDM):

    defaultClass = XYZ
    DBConn = binding.bindTo(ISomeKindOfConnection)

    def load(self, oid, ob):
        return ~self.DBConn('SELECT * FROM foo WHERE pk=?', (oid,))
In this example, we assume that you have elsewhere defined class XYZ and
interface ISomeKindOfConnection, and that your specialist class will be
used in a context where it can obtain a utility which implements
ISomeKindOfConnection.
The above code is the equivalent of gluing together a WITH QUERY LOAD ...
SkinScript with a Rack, a Specialist, and an SQL method in Zope 2 with
ZPatterns. You'll notice that this is a lot more compact.
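Once you have a DM like that, client code never touches the SQL; it just
treats the DM as a mapping from keys to domain objects. From memory (so
treat the details as illustrative -- the 'app.XYZSpecialist' binding and
the attribute names are made up), using it looks something like:

from peak.api import *

def renameXYZ(app, key, newName):

    xyzs = app.XYZSpecialist    # assume the DM is bound somewhere on 'app'

    storage.beginTransaction(app)
    try:
        ob = xyzs[key]          # returns a ghost; load() runs on first attribute access
        ob.name = newName       # changed objects get written back at commit
        storage.commitTransaction(app)
    except:
        storage.abortTransaction(app)
        raise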
Of course, there are other methods you can override, like 'save()' and
'new()' to update and create items, respectively. It's best to read the
peak.storage.interfaces and the peak.storage.data_managers module code
carefully to understand what these methods have to do. The load() method,
for example, must return a value suitable for passing to your "dataskin"
class' __setstate__ method.
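For instance, fleshing out the earlier XYZSpecialist to be writable might
look roughly like this (untested; the column and attribute names are
invented, I'm treating the row returned by the query as a plain tuple, and
you should check the interfaces for the exact contracts of save() and
new()):

from peak.api import *

class XYZSpecialist(storage.EntityDM):

    defaultClass = XYZ
    DBConn = binding.bindTo(ISomeKindOfConnection)

    def load(self, oid, ob):
        # return something acceptable to the domain class' __setstate__();
        # for a plain Persistent subclass, a dictionary of attributes works
        row = ~self.DBConn('SELECT name, email FROM foo WHERE pk=?', (oid,))
        return {'name': row[0], 'email': row[1]}

    def save(self, ob):
        # write the object's current attribute values back to its row
        self.DBConn('UPDATE foo SET name=?, email=? WHERE pk=?',
                    (ob.name, ob.email, ob._p_oid))

    def new(self, ob):
        # insert a brand-new object's state; obtaining the new primary key
        # (and returning it, if required) is up to you and your database
        self.DBConn('INSERT INTO foo (name, email) VALUES (?, ?)',
                    (ob.name, ob.email))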
Unlike in ZPatterns, your objects do not have to derive from DataSkin or some
other specific base class. Any subclass of Persistence.Persistent will
suffice, although it will often be useful to subclass model.Element, which
is the recommended base class for application entities. model.Element
supports various "feature" objects that manage bidirectional relationships,
data marshalling, and other useful schema-level capabilities.
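But to get started, the domain class for the sketches above can be as plain
as this (attribute names matching the made-up columns):

from Persistence import Persistent

class XYZ(Persistent):

    """An ordinary persistent domain object -- no DataSkin required"""

    name  = ''
    email = ''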
The peak.model package is one of the most neglected bits of PEAK at the
moment, but that will change once 0.5 is finished, and we begin developing
apps with it. Our development needs will then drive improvements of the
peak.model classes, and they will become far more attractive as base
classes for application domain persistent objects.
Another difference between using ZPatterns with Zope 2 and using PEAK with
Zope 3 is that Zope 3 provides Views. Views allow you to separate domain
logic from UI logic, which means your "specialists" don't need to include
user interface code, and can focus instead on application services. We
suggest that you declare interfaces for your specialists, and then use Zope
3's View capabilities to create separate UIs that work with that
interface. In this way, you can replace one DM with another at will, as
long as they implement the same interface.
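For example, the XYZ specialist might declare an interface along these
lines (the method names are just placeholders, and I'm using the
zope.interface spelling of the import -- adjust it to match your Zope 3
checkout), with your Views written purely against IXYZService:

from zope.interface import Interface

class IXYZService(Interface):

    """What the UI layer is allowed to ask an XYZ specialist for"""

    def __getitem__(key):
        """Return the XYZ object with the given key"""

    def newItem():
        """Create and return a new, unsaved XYZ"""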
In ZPatterns, the reason for always pairing a Rack and a Specialist was to
ensure that you could change out storage techniques without affecting UI
code. Zope 3 provides this separation for you at a pure UI-vs-logic level,
so there's no need to duplicate it in your code structure.