[04:02:22] [connected at Wed Jan 26 04:02:22 2005]
[04:02:22] <> *** Looking up your hostname...
[04:02:22] <> *** Checking ident
[04:02:23] <> *** Found your hostname
[04:02:55] <> *** No identd (auth) response
[04:02:55] <> *** Your host is kornbluth.freenode.net[kornbluth.freenode.net/6667], running version dancer-ircd-1.0.35
[04:02:55] [I have joined #peak]
[04:02:55] ** kornbluth.freenode.net set the topic to http://dirtsimple.org/2004/11/generic-functions-have-landed.html
[05:33:05] ** vlado has joined us
[09:59:44] ** debugger has joined us
[10:00:17] morning
[10:43:46] ** rdmurray has joined us
[12:16:15] ** pje has joined us
[12:18:01] * pje waves
[12:18:09] * rdmurray waves
[12:18:22] How goes things?
[12:18:39] Busy!
[12:18:52] You're doing contract programming?
[12:19:11] Something like that, yes.
[12:19:38] I'm considering trying that as an option later this year. Depends on how my other money-making projects go. Or don't :)
[12:20:01] Btw, to answer the expat question somebody had, it's because PEAK includes the Python 2.4 version of the expat module.
[12:20:12] So it would go away if we went to requiring 2.4
[12:20:20] Oh, and it's 41 lines per page, not 42. ;)
[12:20:45] :)
[12:20:48] But I print two screens to a paper page. :)
[12:20:57] Ah.
[12:21:21] It's been a long time since I've looked at code on paper.
[12:21:59] Me too, actually. But when I do print, it's nice to have clean page breaks.
[12:22:44] Makes sense.
[12:22:57] On-screen, the division helps me focus on individual items, and to keep units small and comprehensible.
[12:23:51] and of course pgup/pgdn moves nicely between units then.
[12:24:03] Back in my IBM CMS days, I had a whole bunch of editor macros that made that kind of focus really easy and fast. Someday I need to redevelop a macro set like that for vi.
[12:59:19] hey guys :)
[12:59:26] Hey.
[12:59:33] hi pje :)
[13:00:01] I've started to travel the less traveled road heheh
[13:00:29] so, you don't use any special editor for that 41-lines-per-page editing?
[13:03:10] jEdit; it's not special, any editor that pages cleanly will work.
[13:05:48] humm, a jEdit user :D
[13:06:03] now a little question: why can't I use a DM outside a transaction? suppose I want to send the results to the screen/socket (a slow operation), so this means all other potential readers/writers have to wait for the transaction to end?
[13:06:45] If you have objects you don't mind being stale, you can change that option.
[13:06:58] It's a class variable you can change in a DM subclass; I don't recall the name right off.
[13:07:32] However, that just lets you keep references to stale objects that are already in memory.
[13:07:44] so the transaction mechanics are like sql "serializable"?
[13:07:57] More precisely, it avoids clearing the cache when the transaction ends, and it allows you to retrieve items without being in a transaction.
[13:08:16] That's pretty much the intent.
[13:10:08] I see.
[13:13:22] the transaction spans which objects? I mean, I start one at my object (derived from AbstractCommand) using storage.beginTransaction(self). does this mean that it will "offerAs" that transaction and all functions from there on will use that transaction?
[13:14:03] Transaction scope is governed by component hierarchy, not by calls.
[13:14:33] And that scope is normally the "service area" in which the components appear, not the object you use to find the transaction service.
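A minimal sketch of the transaction bracketing being discussed here, assuming a command component with a hypothetical DM binding `self.dm` set up elsewhere in the component hierarchy; the class, method, attribute, and oid names are placeholders, not PEAK API:

    from peak.api import storage
    from peak.running import commands

    class UpdateTitle(commands.AbstractCommand):
        """Hypothetical command; 'self.dm' is assumed to be a DM binding
        obtained elsewhere in the component hierarchy."""

        def update(self, oid, new_title):
            # The storage API uses the nearest registered transaction
            # service; by default that is the service-area-wide one, so
            # this transaction is effectively global to the application.
            storage.beginTransaction(self)
            ob = self.dm[oid]        # DM access happens inside the txn
            ob.title = new_title     # placeholder attribute on the element
            storage.commitTransaction(self)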
[13:15:01] So, in effect, a transaction is normally global to a service area, which normally means global to your application.
[13:15:25] The transaction APIs use the nearest registered transaction service, and by default that's going to be the service-area-global one.
[13:15:36] humm, that's what I wanted to say, it will "offerAs" the transaction into the "self" object, and all other components will pick it up with something like binding.Obtain?
[13:16:09] No, you have to already have offered it; peak.ini offers one for you via the [Component Factories] section.
[13:20:19] ah!
[13:21:05] so, the transaction will serialize all accesses, meaning only one thread/user will be able to execute, right?
[13:23:29] Not directly, no.
[13:24:02] The actual serialization is done by whatever DB you're using. The txn service just manages commit/rollback negotiation and notification.
[13:27:12] hummm, how do you change the isolation level of the db?
[13:30:22] ** rdmurray has left IRC ("User disconnected")
[13:30:33] There are config settings by database driver
[13:30:52] Check the source code of the db drivers in peak.storage.SQL
[13:31:13] You can also issue SQL manually, or subclass a DB driver.
[13:31:25] There isn't a fully general mechanism at the moment.
[13:32:47] ah ok.
[13:33:35] oh, and thx! :)
[13:33:49] that should keep me busy for a while heheh
[13:46:10] ** vlado has left IRC ("Leaving")
[15:28:15] what is the model for a timestamp? I'm looking at the bulletins example, and there is a new class there named DateTime, should I use that one? (it has some XXX comments though)
[15:29:12] btw, looking at SQL.py, the sqlite driver supports TIMESTAMP.
[16:00:28] Yeah, but who knows what it actually does? :)
[16:00:49] Anyway, that DateTime class is heavily XXX and will go away eventually, someday. :)
[16:01:13] you have read my mind, what does it do? heheh
[16:01:44] I was referring to the SQLite timestamp
[16:02:04] should I use it? should I just store an integer in the db?
[16:05:47] * pje shrugs
[16:06:39] Maybe you should talk to the PySQLite people.
[16:06:58] You can register type converters in the configuration for a given DB driver, though.
[16:07:23] oh, I'm talking about using the DateTime class from the bulletins example :)
[16:07:30] But they only work on retrieval, not storage. (I.e. your queries' output runs through the converters, but their input doesn't.)
[16:07:59] Oh. DateTime is at your own risk; that's what the XXX means in that case.
[16:08:18] I don't know what they mean hehe
[16:08:49] I mean, I don't know what they should do, e.g. mdl_fromString, mdl_XXX
[16:09:31] humm, can you tell me what I have to do to register a converter for sqlite?
[16:09:44] what peak settings should I change?
[16:13:23] hummm
[16:13:23] [sqlite.sql_types]
[16:13:23] * = config.Namespace('peak.sql_types')
[16:13:34] it's at that peak.ini setting?
[16:13:54] Yep, so sqlite.sql_types.TIMESTAMP is where you register a converter for SQLite's TIMESTAMP type.
[16:14:24] If you have a global default TIMESTAMP converter (for any DB), you can use peak.sql_types.TIMESTAMP
[16:16:22] still, I didn't get one thing: what model should I use in my element?
[16:34:03] pje is now known as pje|phone
[17:41:55] ** vlado has joined us
[17:42:44] hi
[18:22:20] pje|phone is now known as pje
[18:22:58] re
[18:24:12] it seems common to insert a variable named _p_oid in the Element, is this because we can't access the Element outside a transaction?
[18:24:35] Er, no.
[18:24:42] _p_oid is a ZODB-ism.
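A rough sketch of the converter registration discussed above; only the [sqlite.sql_types] section and the TIMESTAMP key come from the conversation, while the single-argument converter signature and the ini wiring shown in the comment are assumptions:

    from datetime import datetime

    def sqlite_timestamp(value):
        # Hypothetical converter: parse SQLite's TIMESTAMP text into a datetime.
        # Converters like this run on query output only, not on the values
        # you pass into INSERT/UPDATE statements.
        if value is None:
            return None
        return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")

    # Assumed wiring in an application .ini (exact expression syntax not
    # confirmed by the discussion above):
    #
    #   [sqlite.sql_types]
    #   TIMESTAMP = importString('myapp.converters:sqlite_timestamp')
    #
    # A driver-independent default would go under peak.sql_types.TIMESTAMP.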
[18:24:53] Needed for the current system to implement persistence
[18:27:04] ah!
[18:27:17] so how can you get the autogenerated id from the last INSERT, when the Elements are created after commitTransaction?
[18:28:12] ** _pje has joined us
[18:29:21] _pje: did you see my question about "autogenerated" numbers?
[18:29:52] <_pje> Nope.
[18:30:18] your connection dropped, _pje?
[18:30:25] <_pje> However, the way to autogenerate IDs is in your DM's _new() method.
[18:30:33] <_pje> Yes.
[18:30:37] [23:27:14] so how can you get the autogenerated id from the last INSERT, when the Elements are created after commitTransaction?
[18:31:44] <_pje> I don't believe in autoincrement columns, so you're kind of on your own there. :)
[18:32:23] you are not a believer? hehe
[18:33:01] humm, so, because of that you added a max(...) query in the bulletins example?
[18:33:30] and I have to ask, why don't you like them?
[18:33:39] rather use sequences
[18:33:59] <_pje> Yep, what vlado said.
[18:34:07] if your db supports them
[18:34:22] sqlite doesn't seem to support them.
[18:34:28] or mysql :/
[18:34:40] <_pje> Well, they're not "real" databases. :)
[18:34:54] use a real sql db ;)
[18:34:56] PostgreSQL does hehe
[18:34:57] <_pje> Although SQLite is actually much more of a real database than MySQL, IMO.
[18:35:16] humm, you maintain that opinion even with mysql 4.1?
[18:35:25] and the new vapor 5?
[18:35:36] ** pje has left IRC (Read error: 60 (Operation timed out))
[18:35:48] <_pje> MySQL lost my interest when I read their claims in the manual that transactions weren't really important.
[18:36:02] we're playing with firebird lately and it works quite well
[18:36:17] <_pje> After that, I didn't trust its designers to know what they were talking about with respect to transactions.
[18:36:23] _pje is now known as pje
[18:36:58] hummm, they said that just like that? without context?
[18:37:13] I'm paraphrasing.
[18:37:44] The net result of their statement, however, was to make it clear they didn't really understand ACID.
[18:38:11] woah! that sounds bad!
[18:38:24] Heck, much of MySQL in my experience shows that they didn't understand lots of other aspects of SQL.
[18:38:31] what about maxdb?
[18:38:45] SQLite, by contrast, implements more of the SQL language and gets more of it right than MySQL.
[18:38:56] eventually, they changed their position hehe
[18:39:19] cause transactions seem to be there now.
[18:39:22] No doubt. However, last I heard they were implementing ACID using BerkeleyDB, which I'm also not a big fan of.
[18:39:39] And, they were saying that you didn't get the same speed in that case.
[18:39:54] Meanwhile, SQLite is as fast or faster even with transactions.
[18:40:01] they are getting some big boys behind them, like SAP.
[18:40:17] * pje shrugs
[18:40:30] MySQL may be ubiquitous, but that doesn't mean I have to like it.
[18:40:48] yes, of course :)
[18:41:14] For most use cases where I'd potentially consider MySQL, I'd rather use SQLite.
[18:41:17] well, sqlite doesn't handle multiple users, so it's only useful for specific contexts.
[18:41:36] SQLite is multi-user, it's just that transactions are serialized.
[18:41:46] it handles multiple users
[18:42:00] err yes, that's what I wanted to say! sorry.
[18:42:25] MySQL doesn't actually give you locking granularity much finer than SQLite does; IIRC MySQL does table-level locking, not row-level locking.
[18:42:48] Except maybe when you use BerkeleyDB, in which case it may be doing page-level locking.
[18:42:51] that's the MyISAM backend.
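A hedged sketch of the sequence-based _new() being recommended here, reusing the shape of the bulletins code that gets pasted a bit further down; the class name, sequence name, and 'db' binding are placeholders, and it assumes a backend with nextval() such as PostgreSQL:

    from peak.api import storage

    class BulletinDM(storage.EntityDM):
        # Hypothetical DM subclass; 'db' is assumed to be bound to the SQL
        # connection elsewhere in the DM, as in the bulletins example.

        def _new(self, ob):
            # Generate the id from a database sequence instead of an
            # autoincrement column or a MAX(id)+1 query.  '~cursor' asks
            # PEAK for exactly one row, as in the pasted example below.
            nid, = ~self.db("SELECT nextval('bulletin_seq')")
            ob._p_oid = ob.id = int(nid)
            self._save(ob)
            return ob._p_oid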
[18:43:09] the other backends are more "real" hehe
[18:43:38] but... what I wanted to know about was PEAK :)
[18:43:49] What about it?
[18:44:29] the "how to handle autoincrement columns" heheh
[18:45:44] def _new(self,ob):
[18:45:45]     ct, = ~self.db('SELECT MAX(id) FROM bulletins')
[18:45:45]     ct = int(ct or 0) + 1
[18:45:45]     ob._p_oid = ob.id = ct
[18:45:45]     self._save(ob)
[18:45:45]     return ct
[18:46:19] ** _debugger has joined us
[18:46:45] <_debugger> oh, I lost my history on this IRC window. :|
[18:46:58] def _new(self,ob):
[18:46:58]     ct, = ~self.db('SELECT MAX(id) FROM bulletins')
[18:46:58]     ct = int(ct or 0) + 1
[18:46:58]     ob._p_oid = ob.id = ct
[18:46:58]     self._save(ob)
[18:46:59]     return ct
[18:47:31] ** debugger has left IRC (kornbluth.freenode.net irc.freenode.net)
[18:47:31] ** tav has left IRC (kornbluth.freenode.net irc.freenode.net)
[18:47:31] ** etrepum has left IRC (kornbluth.freenode.net irc.freenode.net)
[18:47:47] <_debugger> I saw that in the bulletins example too. hehe
[18:49:10] <_debugger> that is flawed too, cause in the extreme you could insert two items with the same id. (well, you can't, because the db will not allow it).
[18:49:19] you can't
[18:49:45] at least with sqlite
[18:49:59] ** etrepum has joined us
[18:50:05] <_debugger> you can, if you aren't at the right isolation level.
[18:50:07] hm
[18:50:23] yes
[18:50:26] <_debugger> ok, if you only use your PEAK app to access the DB, you are in luck.
[18:51:17] SQLite doesn't have stored procedures, so whatever you use will have to follow whatever conventions you use.
[18:51:32] Of course, you can also have your _new() method do an insert and then get the autoincrement value.
[18:51:52] <_debugger> what you are saying is, my DM will be tied to the DB I'm using?
[18:51:57] I just prefer not to use autoincrement.
[18:52:34] If it uses explicit SQL at all, it's pretty much tied to a particular DB; DB portability is somewhat of a myth.
[18:52:58] The point of a DM is to have a designated place to put that lack of portability, so you can replace it with a different non-portable DM if you need to be portable. :)
[18:53:18] <_debugger> ok, I buy that :)
[18:55:36] ** debugger has joined us
[18:55:36] ** tav has joined us
[18:56:22] <_debugger> I'm looking at SQL.py, and I still haven't figured out how supportedTypes is used :/
[18:58:30] ** tav_ has joined us
[18:59:49] typeMap() ?
[18:59:51] <_debugger> e.g. I see that at some point a config for sqlite.sql_types is used, but it seems no one is populating that config, or is it?
[19:00:03] <_debugger> yes
[19:00:12] If nothing is found, you get the raw value returned by sqlite
[19:00:33] <_debugger> and nothing is found in the default PEAK setup, right?
[19:03:02] ** tav has left IRC (Read error: 113 (No route to host))
[19:04:53] Right.
[19:05:04] ** debugger has left IRC (Read error: 110 (Connection timed out))
[19:05:14] _debugger is now known as debugger
[19:05:33] woah, I got one right! hehehe
[19:14:38] ok. For getting the result of the autoincrement column I'll hack my DM._newItem to insert an _oid property. That way I can pick it up after commitTransaction :|
[19:16:03] Um, no.
[19:16:14] You return the _p_oid from _new.
[19:16:17] Not _newItem.
[19:16:41] Vlado just posted the example code; just drop the call to _save and make _new() insert and retrieve the autoinc id.
[19:16:44] oh, yes, _new.
[19:17:16] but I still can't get that value from outside the transaction :|
[19:17:27] Why not?
[19:17:44] humm, I can? how?
[19:18:05] resetStatesAfterTxn = False
[19:18:09] Add it to your DM class.
[19:18:18] And hold on to your reference to the object.
[19:18:45] Then, you can keep a stale reference to the new object.
[19:18:54] Thus, I mean.
[19:19:54] won't the object property go away on a GC? I saw some weakrefs :|
[19:20:20] You need to keep the reference to it if you want to keep it.
[19:20:33] I thought your use case was to get the id of a newly created item.
[19:20:48] * pje notes that this is why he doesn't use autoincrement
[19:21:05] That is, getting a new ID is usually an action that needs user involvement.
[19:21:11] At least reporting the ID, anyway.
[19:21:20] and that is my use case, get the autogenerated id.
[19:21:30] Therefore, business logic usually generates and assigns the ID, rather than just generating an ID when you save the object.
[19:21:45] Which is what autoincrement sucks at.
[19:21:55] Btw, you don't have to wait for txn commit anyway;
[19:22:05] It suffices to say 'dm.flush(ob)' to make it save that object.
[19:22:55] 'flush()' ensures that the object is written to underlying storage, if it has been changed since load or since the last flush.
[19:24:03] ah thx!
[19:24:17] I'll use flush :)
[19:24:29] EIBTI. :)
[19:25:34] what?
[19:25:36] Btw, you don't need the resetStatesAfterTxn flag if you're not going to access the object outside a txn.
[19:25:40] Explicit is better than implicit.
[19:25:52] From "The Zen of Python".
[19:26:19] * debugger notes that we searched at dict.org / the jargon file but didn't find EIBTI heheh
[19:26:28] * debugger err s,we,he
[19:27:17] I'm going to head out now, I need to get some food in me.
[19:27:26] humm, actually, I remember that Explicit is better than implicit, but didn't associate it with EIBTI hehe
[19:27:42] ... except when it's not ;)
[19:27:53] * pje wishes he had enough time to write the replacement for DMs
[19:28:02] eat well pje. and thx for the tips :)
[19:28:10] humm... what?!
[19:28:17] what replacement?
[19:28:28] peak.schema + peak.query
[19:28:33] you got one on the drawing board?
[19:28:47] Yeah, one workspace for access to all classes and queries.
[19:28:55] schema would be cool
[19:29:04] No separate DM classes. Schema translation, events, the whole nine yards.
[19:29:21] I had just gotten the design to where I felt comfortable starting on it, when I got a 4-month contract.
[19:29:46] Which is good, because it means I can eat in the meantime, of course. :)
[19:30:00] hehehe :)
[19:30:02] It may go permanent, too, depending on how things work out.
[19:30:14] So, PEAK is back to nights and weekends again, I'm afraid.
[19:30:39] And my first priority ATM is wrapping up the features I promised for my PyCon presentation.
[19:31:09] So, that will take precedence over schema+query until I'm certain I have what I need for the talk.
[19:31:37] go pje go! :)
[19:31:38] But after that I'll circle back around and hopefully the schema stuff won't be so stale in my brain that I have to start over. :)
[19:31:58] hope not hehe
[19:32:13] but then again, sometimes the remake is better than the original hehe
[19:32:16] But that's why I post designs on the mailing list and my blog, so I can go back to them once I have time to implement.
[19:32:26] Anyway... later, all
[19:32:31] ** pje has left IRC ("Client exiting")
[19:35:41] oh, I can use oidFor, it does the flush for me hehehe
[19:36:23] night
[19:36:30] ** vlado has left IRC ("Leaving")
[21:48:25] ** debugger has left IRC ()
[22:04:51] ** rdmurray has joined us
[23:33:33] ** rdmurray has left IRC ("User disconnected")
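Pulling together the last part of the discussion, a hedged sketch of getting an autoincrement-style id without waiting for commit; resetStatesAfterTxn, flush(), oidFor(), and "drop the call to _save" come from the conversation above, while last_insert_rowid(), newItem(), and the table name are assumptions for a SQLite backend:

    from peak.api import storage

    class BulletinDM(storage.EntityDM):
        # Hypothetical DM subclass; 'db' is assumed bound to a SQLite
        # connection elsewhere, as in the bulletins example.

        # Only needed if the new object must stay usable after the
        # transaction ends; keep your own reference to it as well.
        resetStatesAfterTxn = False

        def _new(self, ob):
            # Autoincrement variant: INSERT first, then ask SQLite for the
            # key it just generated (placeholder table and INSERT).
            self.db("INSERT INTO bulletins DEFAULT VALUES")
            oid, = ~self.db("SELECT last_insert_rowid()")
            ob._p_oid = ob.id = int(oid)
            return ob._p_oid

    # Usage sketch, inside a transaction:
    #
    #   ob = dm.newItem()        # assumed public API for creating a new element
    #   new_id = dm.oidFor(ob)   # flushes the object, so _new() runs now and
    #                            # the generated id is available before commit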