[PEAK] SQL Object-relational Mapper
Phillip J. Eby
pje at telecommunity.com
Fri May 20 18:19:22 EDT 2005
At 04:06 PM 5/19/2005 -0400, Erik Rose wrote:
>If you want to use the autoIncrementedField feature, you'll need to
>stick a getLastAutoincrementSql() method in your PEAK DB adaptor. It
>looks and behaves like this, which is from my MS SQL Server adapter
>(which I'll publish soon):
>
>def getLastAutoincrementSql(self):
>    """Return the SQL that yields the last value the DB inserted in an
>    auto_increment or identity-style column. To avoid race conditions,
>    consider only inserts made by this connection (most DB's handle this
>    for you).
>
>    In DB's (like Postgres) which don't have a function like this and
>    which do support sequences, you should probably use getSequenceValue()
>    instead, which is more flexible anyway.
>    """
>    return 'SELECT @@IDENTITY'
There's another way to do this. Well, actually, two ways. The first is to
use the 'appConfig' feature of SQLConnection objects so that you can use an
.ini file to designate what function gets used for each database type.
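For the .ini route, the idea is just that each driver's "last insert id"
query becomes a configuration property rather than a method.  Roughly like
this -- the section and property names below are made up purely for
illustration, so check the config docs for the real spelling:

[myapp.sql]
# hypothetical property names, one per driver type
lastAutoIncrementSQL.sqlserver = 'SELECT @@IDENTITY'
lastAutoIncrementSQL.mysql     = 'SELECT LAST_INSERT_ID()'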
The second is to just make 'getLastAutoIncrement(connection)' a generic
function, e.g.:
import dispatch

@dispatch.on("conn")
def getLastAutoIncrement(conn):
    """Return the last autoincrement"""

@getLastAutoIncrement.when(SQLServerConnection)
def sql_server_autoinc(conn):
    # run the query and pull the single value out of the one-row result
    return (~conn("SELECT @@IDENTITY"))[0]
This way it's easy enough to add new methods for new connection types,
without having to monkeypatch the database connections themselves or edit
their classes.
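For example, if somebody later wires up a Postgres driver, a method for it
can be registered from any module.  A sketch only: 'PGConnection' is a
stand-in for whatever the real connection class is called, and the sequence
name is made up, since on Postgres you read a sequence rather than an
'@@IDENTITY'-style variable:

@getLastAutoIncrement.when(PGConnection)   # stand-in class name
def postgres_autoinc(conn):
    # no @@IDENTITY on Postgres; read the current value of the sequence
    # behind the autoincremented column instead (sequence name is made up)
    return (~conn("SELECT currval('invoice_id_seq')"))[0]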
Of course, you could make higher-level operations like '_new()' into
generic functions as well, although you'll probably want 'dispatch.generic'
rather than 'dispatch.on' for those.  Those operations could then dispatch
on the database driver type as well as the target object type, and on many
other criteria besides.
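A minimal sketch of what that might look like -- the classes and the rule
expression here are placeholders for illustration, not PEAK API:

import dispatch

class SQLServerConnection: pass    # stand-ins for your real connection
class Invoice: pass                # class and domain class

@dispatch.generic()
def _new(conn, ob):
    """Insert 'ob' via 'conn', then fill in its autoincremented key"""

# rules are arbitrary boolean expressions over the arguments, so a method
# can be chosen by driver type, target class, or anything else you can test
@_new.when("isinstance(conn, SQLServerConnection) and isinstance(ob, Invoice)")
def new_invoice_on_sqlserver(conn, ob):
    print "would INSERT an Invoice, then read SELECT @@IDENTITY"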