[TransWarp] Trying out peak.storage

Phillip J. Eby pje at telecommunity.com
Tue Jan 14 16:34:15 EST 2003

At 10:04 PM 1/14/03 +0100, Geert-Jan Van den Bogaerde wrote:
>Running this script gives no errors and the following output:
>Running SQL: INSERT INTO Contact (id, Name,Surname,Email) VALUES
>(15,'John','Smith','john at smith.com')
>which seems correct. However opening a psql console after the script has
>run shows no new rows added to the Contact table, so it seems the
>changes are not getting committed.
>What am I missing here? Or is this a bug or simply not yet implemented?
>(I checked out today's CVS, and all unit tests pass)

The problem is that DBConn hasn't joined the transaction; you must tell it 
to do so.

When executing SQL that modifies data, you should use:

self.DBConn(sql, joinTxn=True)

This will cause DBConn to join the transaction if it has not already done 
so.  SQL connections don't join automatically because some databases 
(e.g. Sybase) impose additional locking overhead on read-only operations 
that occur inside a transaction.  Forcing a connection to begin a database 
transaction for purely read-only work would therefore be wasteful, so PEAK 
leaves a database connection out of the transaction unless you request 
otherwise.  The downside is that you must make the request, at the minimal 
typing cost of a keyword argument.
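The opt-in behavior above can be sketched with a small toy model. This is plain Python illustrating the idea, not PEAK's actual classes; the `Transaction` and `Connection` names and their methods here are purely hypothetical:

```python
# Toy model of opt-in transaction joining -- illustrative only, not PEAK code.

class Transaction:
    """Tracks which resources have joined the current transaction."""
    def __init__(self):
        self.participants = []

    def join(self, participant):
        if participant not in self.participants:
            self.participants.append(participant)


class Connection:
    """A connection that stays outside the transaction until asked to join."""
    def __init__(self, txn):
        self.txn = txn
        self.joined = False
        self.log = []

    def __call__(self, sql, joinTxn=False):
        # Read-only statements run without joining, avoiding any
        # transaction/locking overhead; joinTxn=True opts in.
        if joinTxn and not self.joined:
            self.txn.join(self)
            self.joined = True
        self.log.append(sql)


txn = Transaction()
db = Connection(txn)

db("SELECT * FROM Contact")                  # read-only: connection not joined
assert db not in txn.participants

db("INSERT INTO Contact (id) VALUES (15)", joinTxn=True)
assert db in txn.participants                # now part of the transaction
```

The point of the sketch is only the conditional `join` call: nothing happens at connection-creation time, so read-only users pay no cost.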

Anyway, once your DBConn has joined the transaction, it will tell 
PostgreSQL to commit as part of the same transaction commit that issues 
the INSERT, just in a later phase of the commit operation.
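That ordering can be sketched as a simplified two-phase commit. Again, this is a hedged toy model of the behavior described, not PEAK's transaction machinery; all class and method names are made up for illustration:

```python
# Toy two-phase commit ordering -- illustrative only, not PEAK code.

class Connection:
    """Records the order in which SQL and COMMIT reach the database."""
    def __init__(self):
        self.events = []

    def execute(self, sql):
        self.events.append(sql)

    def commitTransaction(self):
        # Issues the database-level COMMIT.
        self.events.append("COMMIT")


class ContactDM:
    """A data manager that flushes a pending insert during commit."""
    def flush(self, conn):
        conn.execute("INSERT INTO Contact (id, Name) VALUES (15, 'John')")


class Transaction:
    def __init__(self, conn):
        self.conn = conn
        self.data_managers = []

    def commit(self):
        # Earlier phase: data managers write their pending changes as SQL.
        for dm in self.data_managers:
            dm.flush(self.conn)
        # Later phase: the joined connection commits at the database level.
        self.conn.commitTransaction()


conn = Connection()
txn = Transaction(conn)
txn.data_managers.append(ContactDM())
txn.commit()
assert conn.events[-1] == "COMMIT"   # COMMIT arrives after the INSERT
```

The INSERT and the COMMIT both happen inside the one `txn.commit()` call; the database only sees the COMMIT after every data manager has flushed, which is why the rows become visible to a separate psql session only once the connection has actually joined.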
