[PEAK] Cross-platform stored-procedure support

Phillip J. Eby pje at telecommunity.com
Fri Dec 12 09:25:44 EST 2003


At 10:23 AM 12/12/03 +0100, Ulrich Eck wrote:
>Hi Phillip,
>
> > From my point of view, the ideal interface would be to access something
> > like 'db.procs.someProcName(foo,bar=baz)' and get back whatever it is I'm
> > looking for, which might or might not include a cursor to get results
> > from.  I'd like positional and keyword parameters to be possible on input,
> > and I'd like to receive only relevant results on output.
> >
> > This doesn't seem possible to do in the general case.  First, we'd have to
> > be able to know -- for each procedure -- which parameters were input
> > parameters, and which parameters were output parameters.  We'd need to 
> > know
> > what order the parameters were in, and what their default values should
> > be.  And, for the crazy databases that want their output parameters passed
> > *in*, we'd need to know where to stick those parameters into the input.
>
>most databases support introspection; can't we use it to find out the
>signatures of dbprocs and generate the function-wrappers with correct
>parameters and return values?

Of course you can; just define a mapper that does that, and register it as 
the default mapper for all procedures for that backend.
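
For concreteness, here's a minimal sketch of what such an
introspection-driven mapper might look like.  Everything here is
illustrative: the catalog table name is made up (Oracle, SAP DB, and
PostgreSQL each have their own system tables for this), the paramstyle
varies by driver, and only cursor.callproc() is standard DB-API 2.0:

    def introspecting_mapper(db, proc_name):
        """Build a proc wrapper by reading its signature from the catalog."""
        c = db.cursor()
        # Hypothetical catalog query -- substitute the backend's real
        # system tables (e.g. pg_proc, ALL_ARGUMENTS, DOMAIN.PARAMETERS).
        c.execute("SELECT param_name, param_mode FROM db_proc_params"
                  " WHERE proc_name = ? ORDER BY position", (proc_name,))
        in_names = [name for name, mode in c.fetchall()
                    if mode in ('IN', 'INOUT')]

        def call(*args, **kw):
            # Merge positional and keyword arguments into catalog order.
            values = dict(zip(in_names, args))
            values.update(kw)
            c = db.cursor()
            c.callproc(proc_name, [values.get(n) for n in in_names])
            return c

        return call

You'd then register introspecting_mapper as the default mapper for that
backend; the registration hook itself is part of the proposed mechanism,
so I'm not showing it here.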


>I implemented the QueryDM's for the schema-model classes for sapdb
>(similar to Oracle introspection) and I'm about to do the same for
>postgres. The model represents Tables, Columns, Indexes, Constraints,
>and Sequences in a hopefully ANSI-compliant way.
>
>http://cvs.net-labs.de/cgi-bin/viewcvs.cgi/libs/nll/src/nll/database/sapdb/schema.py
>
>these packages are surely not yet ready for platform-independent use -
>they have only been tested with sapdb so far. But with some
>normalization I think the sql-schema model could be used for the
>following jobs:
>
>  - Generate Function-Wrappers for DBProcs
>  - Generate Model-Classes and configured EntityDM-Classes for the schema
>    (TableDM/SQLEntityDM + generator script which i have as well)
>  - could be used in the peak.query package to represent Tables/Columns
>  - in future versions support creating/altering schema-elements
>    through the DM's interface
>
>what do you think?

Sounds interesting.  Personally, I think I would want to use the CWM
(Common Warehouse Metamodel) for DB specs, since it's been vetted by all
the big database vendors to support everything their databases do.  I
also assume that your proposals are for design-time use to generate
Python code, right?

Anyway, those are definitely places I want PEAK to go in the long run.  But 
right now, I don't have the use cases at work to drive implementation of 
this kind of metadata processing.  More succinctly, I'm only working with a 
handful of databases at work, and most of them are the exact same schema, 
so my need for code generation from schema is quite limited.

In fact, I realized when I woke up this morning that I don't even need
stored-procedure support as sophisticated as what I proposed yesterday,
because all the procs I'm dealing with can be managed with a single
mapper function for each backend.  So I may not implement the full
proposal in alpha 3, if cutting corners will let me get my real (i.e.
paying) job done sooner.  We'll see how it turns out when I go to
implement it.
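
To illustrate the "single mapper function per backend" point: assuming
every proc on a given backend takes only IN parameters and returns its
results via a cursor, one generic wrapper covers them all (db is assumed
to be a DB-API connection; none of these names are real PEAK APIs):

    def simple_proc_mapper(db, proc_name):
        """One wrapper fits all: pass args through, hand back the cursor."""
        def call(*args):
            c = db.cursor()
            c.callproc(proc_name, args)   # standard DB-API 2.0 call
            return c
        return call

    # e.g. db.procs.someProc(1, 2) would then just dispatch to
    # simple_proc_mapper(db, 'someProc')(1, 2)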

Anyway, for what you're describing about pre-generated function wrappers,
I think that's a good idea, and there's a place for it in the scheme I
described yesterday.  Since you will define an interface that includes
all the desired stored procedure signatures, you can generate code that
subclasses the standard db-to-proc-support adapter class and defines the
generated wrappers as methods on it, then declare that new class to be
an adapter to the procs interface for the appropriate backend driver
class.  That declaration will be more specific than the default mapping
mechanisms I described, and so will take precedence when somebody does
adapt(db,IMyProcs).  So, once the mechanism I've described exists, that's
how you'd plug your generation system into it.
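
Here's a hedged sketch of that wiring.  The PyProtocols calls
(Interface, declareAdapter, adapt) are real; every other name
(ISapDBProcs, ProcSupportAdapter, SapDBConnection, the procedure
itself) is a stand-in invented for illustration:

    from protocols import Interface, declareAdapter, adapt

    class ISapDBProcs(Interface):
        """The stored-procedure signatures we want this backend to expose."""
        def get_employee(emp_id):
            """Return a cursor positioned on the employee's row."""

    class ProcSupportAdapter(object):
        """Stand-in for the standard db-to-proc-support adapter class."""
        def __init__(self, db, protocol=None):  # works with 1- or 2-arg factories
            self.db = db

    class GeneratedSapDBProcs(ProcSupportAdapter):
        """Code-generated subclass: one wrapper method per procedure."""
        def get_employee(self, emp_id):
            c = self.db.cursor()
            c.callproc('GET_EMPLOYEE', (emp_id,))
            return c

    class SapDBConnection(object):
        """Stand-in for the backend driver's connection class."""

    # More specific than the default mapping, so it wins on adapt():
    declareAdapter(GeneratedSapDBProcs,
                   provides=[ISapDBProcs], forTypes=[SapDBConnection])

    procs = adapt(SapDBConnection(), ISapDBProcs)  # -> GeneratedSapDBProcs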

As far as the TableDM generation part of your proposal goes, I specifically 
intend to make that obsolete in 2004 Q1, via peak.query.  PEAK will have 
its own TableDM-like concept, but it will handle joins/queries of arbitrary 
complexity via peak.query.  It will also support non-SQL backends, as long 
as they can support certain basic query functionality.  Instead of writing 
e.g. your IMAP DM's to directly work against IMAP, you would define a 
tabular model for what's available from an IMAP data source, and then use 
joins, aggregations, renames, transforms and the like to map that tabular 
model to a tabular model corresponding to your domain objects.  DM's will 
then become just a nexus for managing objects and obtaining
information about events related to them.
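
Purely to make that mapping style concrete, here's a toy relational
algebra over lists of dicts.  Table, join, rename, and project are
names I'm inventing on the spot -- they are not peak.query APIs (which
don't exist yet):

    class Table:
        def __init__(self, rows):
            self.rows = list(rows)      # each row is a plain dict

        def join(self, other, on):
            # Nested-loop join on one shared column name.
            return Table(dict(a, **b) for a in self.rows
                         for b in other.rows if a[on] == b[on])

        def rename(self, **mapping):    # old_name=new_name
            return Table({mapping.get(k, k): v for k, v in r.items()}
                         for r in self.rows)

        def project(self, cols):
            return Table({c: r[c] for c in cols} for r in self.rows)

    # Raw "tabular model" of an IMAP source...
    messages = Table([{'uid': 7, 'folder': 'inbox', 'sender': 'bob@example.com'}])
    folders  = Table([{'folder': 'inbox', 'owner': 'alice'}])

    # ...mapped onto a domain-level relation via join/rename/project:
    domain = (messages.join(folders, on='folder')
                      .rename(uid='message_id', sender='from_addr')
                      .project(['message_id', 'owner', 'from_addr']))

    print(domain.rows)
    # [{'message_id': 7, 'owner': 'alice', 'from_addr': 'bob@example.com'}]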



