[00:00:14] * pje keeps having to run to the restroom... [00:00:23] ...too quickly to even type /afk. :( [00:00:39] what's wrong? [00:00:56] I'll spare you the graphic details. [00:01:03] haha ok [00:01:23] Suffice to say I haven't gotten much done today. [00:01:31] I can imagine [00:02:30] <_jpl_> Food poisoning? [00:02:47] Nope. [00:03:03] * pje doesn't eat poisoned food [00:03:08] :) [00:03:18] mm botulism [00:04:45] I got a nasty flu when I was in japan in december [00:05:12] totally sucked, I could hardly breathe and ended up quitting smoking because of it, ruined the trip [00:06:41] do you have any experience with WebObjects / EOF, pje? [00:06:57] Nope. [00:07:16] well from talking to people who have used it, it's the best thing since sliced bread for ORM [00:07:42] it uses these object graphs for transactions called EditingContexts [00:07:47] There are lots of frameworks with that sort of buzz. [00:07:58] well WO is like, 15 years old [00:08:04] * pje nods [00:08:07] (though it is written in java now) [00:08:30] according to people I've talked to, PEAK/ZODB is missing a lot of what makes WO badass [00:09:22] * _jpl_ had to stare at "PEAK/ZODB" for a minute before it would parse. :) [00:09:27] if you end up with a lot of copious free time and are interested in something worth cloning, I can send you my copy of WO [00:10:00] like snail mail, it's legitimate, w/ license, etc.. never used [00:10:14] Bob: does it do this? http://www.orm.net/queries.html [00:10:39] I don't know if it does that syntax [00:10:57] Not the syntax... a conceptual query mechanism. [00:11:01] I'll install it and take a look at the tools [00:11:12] And I don't mean OQL, though it might not be a bad substitute. [00:11:23] I'm not saying PEAK does that, mind you. It's just where the roadmap is. [00:12:00] I'm curious why you'd lump PEAK and ZODB into the same category of mapping tools, though. 
:) [00:12:03] I'm not even really *that* interested in the query languages, but I would like to see the smarter transaction stuff [00:12:12] I meant the PEAK/ZODB persistence/transaction mechanisms [00:12:22] PEAK doesn't use ZODB transactions. [00:12:47] Have you read the IntroToPeak tutorial? [00:13:00] yeah [00:13:04] What smarter stuff, specifically? [00:13:27] I really don't know enough about it to tell you specifically [00:14:10] Hm. I just googled up a claim that there's a Python EOF port. [00:14:14] http://modeling.sourceforge.net/ [00:14:21] yeah but it's GPL [00:14:36] I've been attempting to convince him to relicense [00:15:13] it's going pretty well, he didn't pick GPL for any good reason [00:15:28] So does it offer the "smarter stuff" of which you speak? [00:15:35] pretty sure [00:15:48] * _jpl_ played with modeling a year or so ago [00:16:16] So far it doesn't appear to be ORM at all; looks ER all the way. [00:16:28] <_jpl_> wooo, just got Junction working over SSH connections. [00:16:57] I was probably using the wrong terminology [00:17:39] I really don't have any experience with WO/EOF, I'm just conveying what I've heard [00:17:45] So far it looks like peak.model has a nicer syntax, too, if you only use its ER subset. [00:18:09] well syntax can be changed [00:18:44] I believe that a lot of the data modeling with WO is done via tools, not via java code [00:18:52] * pje nods. [00:18:58] I mean ultimately it ends up as code, of course [00:19:12] Of course, peak.model models can also be generated from MOF XMI, and from ASDL. [00:19:33] I was just noting that this modelling thing doesn't appear to have a very flexible metamodel, and it's not very OO. [00:20:10] it's probably not taking advantage of many of Python's features [00:20:21] Though it does have certain areas, like validation, that are a little more fleshed out than peak.model. 
[00:20:22] I believe it's trying to be as EOF-like as possible [00:20:43] which means emulating code that was originally Objective C, then Java [00:21:05] so I would imagine the style might be odd [00:22:02] the nested EditingContext thing is what sounded most interesting [00:22:18] Haven't gotten to that part of the doc yet. [00:22:35] Though so far, unnested EC's look no different from PEAK -- or ZODB! -- transactions. [00:24:27] Okay, looks basically the same as ZODB3 subtransactions or ZODB4 nested transactions. [00:24:42] No correspondence to PEAK, as we don't support any kind of transaction nesting. [00:24:53] why not? [00:25:18] Because I have no use cases. [00:25:55] I spent a lot of time implementing subtransaction support in ZPatterns... [00:26:13] ...and then realized later that I didn't actually need them for anything in a web application. [00:26:22] makes sense [00:26:40] After that, it occurred to me I don't have any need for them in any other kind of application either... [00:26:59] At least, none that wouldn't be served better by a simpler mechanism. [00:27:05] I'll ask someone who uses them what they think they are good for [00:27:19] For example, take the notion of a GUI where you fill out stuff and have OK or Cancel buttons... [00:27:35] That's a really bad place to use a transaction, if it maps back to locking shared resources. [00:28:10] Instead, it'd be better to have a "buffered" or "proxied" model object wrapping the real model object. [00:28:17] Or some other UI-oriented mechanism. [00:28:39] The main area they're useful is for long-running things like CVS branches. [00:28:56] I'm pretty sure that EOF is pretty flexible with regard to locking mechanisms, from what I've been told [00:29:33] My point there is that it's kind of silly to have to fiddle with what sort of locking gets done for something that doesn't need to be using a transaction in the first place. [00:30:05] But I'm somewhat biased by the flavor of apps I develop. 
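The "buffered" or "proxied" model object pje describes for OK/Cancel dialogs can be illustrated in plain Python. This is a hypothetical sketch, not anything from PEAK or EOF; every name here (`BufferedProxy`, `ok`, `cancel`) is invented for illustration — the point is that edits accumulate locally instead of holding a transaction open on shared state:

```python
class BufferedProxy:
    """Buffer attribute writes to a model object until ok() is called.

    Hypothetical sketch of the OK/Cancel pattern: the real object is
    untouched until the user confirms, so no lock or transaction needs
    to span the user's think time.
    """

    def __init__(self, target):
        # bypass our own __setattr__, which isn't usable until these exist
        object.__setattr__(self, '_target', target)
        object.__setattr__(self, '_pending', {})

    def __getattr__(self, name):
        # only reached for names not in our instance dict, i.e. model attrs
        if name in self._pending:
            return self._pending[name]
        return getattr(self._target, name)

    def __setattr__(self, name, value):
        self._pending[name] = value   # buffered locally; target untouched

    def ok(self):
        """Apply buffered edits to the real object (the 'OK' button)."""
        for name, value in self._pending.items():
            setattr(self._target, name, value)
        self._pending.clear()

    def cancel(self):
        """Discard buffered edits (the 'Cancel' button)."""
        self._pending.clear()
```

Reads fall through to the real object until an attribute has been edited, so the dialog always shows current values plus any pending changes.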
[00:30:47] everyone is :) [00:31:08] Anyway, I guess I tend to think that things like nesting and branching in transactions are more properly the place of long-running txn theory like sagas, compensating txns, all that sort of thing. [00:31:46] For the kind of architecture I work with, a transaction must be short and sweet, and involve no user-controlled delays. [00:32:01] as with anything else on the web [00:32:25] ...as with any OLTP system. :) [00:32:41] yea [00:33:34] It's entirely possible that when peak.events and peak.model meet, the result will be able to do very editingcontext-like things. [00:33:44] But they won't be called "transactions". :) [00:34:53] looking at the books for WO, it does bake in a lot of validation in its modeling tool [00:35:36] My long term plan for peak.model validation is that it'll be driven by peak.events. [00:36:24] After a lot of thought on the matter, it seems to me that truly useful validation must be both asynchronous, and loosely coupled. [00:36:53] That is, just because you enter a bad value now, doesn't necessarily mean that you should get an exception now. [00:37:05] Because maybe the problem is one that'll get corrected before you commit. [00:37:21] Or maybe it's a problem that's too costly to check for until you're ready to commit. [00:37:31] yeah [00:37:52] Or maybe it's a business rule that just needs to be verified once a month or so, with a manual reconciliation. [00:38:44] And also, there might be *multiple* problems with an input. [00:38:47] is there anything in peak.model that lets you state that a particular attribute is a 'primary key'? [00:38:56] No, not yet. [00:39:15] And technically, it'd need to support multiple attributes forming a key. [00:39:50] yes, but typically primary key is one attribute [00:39:55] As well as uniqueness constraints on combinations of other attributes, to be honest. [00:39:58] Typically, yeah. [00:40:16] But I'll need all the other stuff to implement conceptual queries. 
[00:40:32] I'm just comparing what the EOModeler interface lets you do, with what peak.model lets you represent [00:42:59] <_Maniac_> night my wise peak and python friends [00:43:02] * _Maniac_ sleeps [00:43:04] <_Maniac_> zzzz [00:43:05] * pje waves to _Maniac_ [00:43:12] 'night [00:43:44] Well "lets you" is a bit strong. It doesn't constrain you from adding whatever metadata you wish. :) [00:45:37] it's late, I'm multitasking.. expect poor word choice :) [00:45:45] Honestly, I do like a lot of the simplicity that the EOF approach offers. [00:45:59] And peak.query will look a lot more like it when it lands. [00:46:18] Specifically, I want the current DM approach to fade into the background for usage purposes. [00:47:02] You should just be able to use storage APIs with classes as parameters, and let them look things up in context. [00:47:33] is there any baked-in way to Just Persist Stuff? Without building a storage myself on top of text files or SQL databases? [00:47:38] Sort of like the editingcontext query operations, but with class objects rather than class names. [00:48:03] jack-e has a framework that does O-R mapping like that, yes. [00:48:19] that generates SQL tables? [00:48:23] And I think _jpl_ wrote an SQLTable class that does a fair amount of grunt work in that area as well. [00:48:32] honestly I don't really need SQL [00:48:39] Actually, I think jack-e's tool goes the other way around... SQL->objects [00:49:01] No, there's not currently a way to Just Persist Stuff. [00:49:08] it would be nice [00:49:17] If I needed to do something like that, I imagine I'd use the transacted file support and pickle. [00:49:36] i.e. use peak.storage.files to write a pickle transactionally. [00:49:43] I'm building an app that just stores a bunch of stuff, and then converts it to a slew of XML files in batch [00:50:03] Generate sequential oids in a non-type-specific DM... and then wrap that around using pickle and a simple file. [00:50:12] Ah. 
[00:50:43] You know, if you just write two DMs, you can load objects from one and save them to the other. [00:51:00] that's what I was thinking [00:51:30] but I was just wondering how I should implement the 'pickle' DM [00:51:56] Yeah, the oids you use need to include what the class of the object is. [00:52:09] That way, _ghost() will know what class to instantiate. [00:52:32] ah [00:52:50] is there any way to just let ZODB do things.. or will that be a mess? [00:52:51] Either that, or just pickle/unpickle the whole thing as a dictionary. [00:53:00] Mess. [00:53:09] ZODB and PEAK don't use the same transaction system. [00:53:32] Yeah, the simplest way to do a pickler would be to save/load everything as one big dictionary... [00:53:48] well I only need one transaction.. begin ... run app ... commit :) [00:53:48] then you don't care what the OIDs are, you just hand stuff back. [00:53:59] Even so. :) [00:54:21] the pickle-a-dictionary method sounds easiest [00:54:55] Anyway, just like in the tutorial, you'd just override flush() to rewrite the dictionary to the transactional file. [00:54:56] I've only got a few megs of data, at most, and I don't need a query language because I'm always exporting the whole thing [00:55:02] makes sense [00:55:24] Indeed, you ought to be able to hack one of the tutorial examples pretty directly, since it does it to a text file. [00:56:02] It'd be more efficient to use a transactional stream, though. (TxnFile in peak.storage.files) [00:56:30] as opposed to EditableFile? [00:57:11] Yeah. [00:57:20] Because for EditableFile you'd need to pickle to a string. [00:57:27] gross [00:57:37] Right. So use TxnFile instead.
[00:58:34] atf = TxnFile(filename='whatever') [00:58:44] is there a "binding" alternative to "text = property(lambda self:self.__text, __setText, delete)" [00:58:45] out = atf.create('b') [00:58:56] you don't need to spell it out for me ;) [00:59:00] thanks for your help though [00:59:01] No, that's why I wrote it as a property. [01:00:41] Anyway, calling flush() on your DM, either manually or automatically at commit time, would then rewrite the output file with its temporary name. [01:01:01] cool [01:01:25] Committing will then replace the previous file. [01:01:30] (if present) [01:01:41] looks easy enough [01:02:51] reboot time.. brb [01:02:54] Indeed. Might be a worthwhile addition to the PEAK lib. [01:02:58] * pje waves [01:12:45] I love screen [01:16:56] <_jpl_> ok, now that junction+ssh works, I can go home and eat dinner for a change [01:17:03] * pje smiles [01:17:14] _jpl_, so where's that new example? [01:17:17] * _jpl_ has been working on it for five days [01:17:26] <_jpl_> :( [01:17:44] ;) [01:17:56] <_jpl_> Once I file off the rough edges of the SSL and SSH support, I'll update Junction on peakplace. [01:18:07] <_jpl_> Which has the peak.events based example. [01:18:12] * pje nods. [01:18:44] <_jpl_> er, it'll get there with the check-in..... you knew what I meant. [01:19:01] * _jpl_ nutritionally deprived [01:19:08] * pje smiles [01:19:11] I know the feeling. [01:19:37] what does Junction do? [01:20:11] <_jpl_> Originally I had no idea if I could get PB working over SSH, and so my co-worker (Chad) put together SSL support for Junction. After a couple of days of hacking it turned out that PB+SSH was in fact possible. [01:20:56] <_jpl_> It's sort of a messaging hub, currently based on PB but with plans to support other communication endpoint types (CORBA, XMLRPC, etc.). [01:21:05] ah [01:21:15] does peak have gzip stream support yet? [01:21:20] <_jpl_> It's pretty rudimentary at the moment, but works for what we need it to do right now. 
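Returning to the TxnFile discussion above — the flow pje describes (flush writes the pickled dictionary under a temporary name, and commit replaces the previous file) can be approximated with the standard library alone. This is not PEAK's TxnFile API; `save_atomically` and `load` are hypothetical helper names for a sketch of the same write-temp-then-replace technique:

```python
import os
import pickle
import tempfile


def save_atomically(obj, path):
    """Pickle obj to a temporary file, then atomically replace path.

    Stdlib sketch of the behavior described for TxnFile: the data is
    written under a temporary name first, so readers never see a
    half-written pickle, and a failure leaves the old file intact.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'wb') as out:
            pickle.dump(obj, out)
        os.replace(tmp, path)   # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)          # clean up the temp file on failure
        raise


def load(path, default=None):
    """Load the pickled dictionary, or return default if absent."""
    try:
        with open(path, 'rb') as f:
            return pickle.load(f)
    except FileNotFoundError:
        return default
```

A pickle-a-dictionary DM would then call `save_atomically(self._cache_dict, self.filename)` (or the moral equivalent) from its flush hook.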
[01:22:22] <_jpl_> e.g. decoupled remote method calls, basic distributed publish/subscribe [01:22:48] etrepum, you're kidding, right? :) [01:22:56] I haven't looked [01:23:13] I know Twisted doesn't do it properly [01:23:14] I presume you don't mean "some string".encode("gzip")? :) [01:23:27] no, that's not what I mean :) [01:23:46] PEAK doesn't have event-driven sockets yet, let alone gzip streams. [01:23:56] ok [01:24:09] I haven't looked at your events-on-top-of-twisted [01:24:13] I was just curious [01:25:04] I have code that does it, in a generator style [01:25:14] peak.events just supports the equivalent of doRead()/doWrite(), callLater(), and deferred.addCallback() [01:25:24] gotcha [01:25:37] peak.net will get socket support as soon as we either need it for work or I have lots of free time. [01:26:01] There's a socket simulator, peak.util.mockets, that does a useful impression of real sockets, for testing purposes. [01:26:07] I've been working on socket stuff, but on top of stackless [01:26:28] So peak.net will have objects that can wrap either mockets or real sockets, giving them an async interface. [01:27:06] The way mockets work, we can both listen()/accept() and connect() on different mockets at the same time, but without the sockets being real. [01:27:30] So, we'll be able to test full functionality of the wrappers without making a single actual socket system call. [01:27:37] neat [01:28:15] Being a peak.util package, mockets isn't tied to anything else in PEAK. [01:28:55] In fact, all it imports are socket, errno, and weakref. [01:28:57] I'll have to look at that.. I was doing something similar [01:29:27] Try 'peak help util.mockets' for a quick overview. [01:29:53] for the gzip stream stuff i wrote, the way the protocol works is that it is just a generator.. it yields the number of bytes it wants, and a callable to put the bytes in once they are available [01:30:03] * _jpl_ waves, idles [01:30:14] Bye John [01:30:37] <_jpl_> 'nite, phllp. 
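The generator-driven protocol etrepum describes (the parser yields the number of bytes it wants next) can be sketched roughly as follows. The original design yields a (count, callable) pair; this sketch uses `send()` instead to stay self-contained, and all names (`length_prefixed`, `Pump`) are invented for illustration:

```python
import struct


def length_prefixed(messages):
    """Parser generator: yields how many bytes it wants, receives
    exactly that many via send(), and appends each complete message
    to the `messages` list."""
    while True:
        header = yield 2                   # ask for the 2-byte length prefix
        (size,) = struct.unpack('>H', header)
        payload = yield size               # then ask for the payload
        messages.append(payload)


class Pump:
    """Buffers incoming data and feeds the parser exactly the byte
    counts it asks for, as the bytes become available."""

    def __init__(self, parser):
        self.parser = parser
        self.buf = b''
        self.want = parser.send(None)      # prime: the first request

    def feed(self, data):
        self.buf += data
        while len(self.buf) >= self.want:
            chunk, self.buf = self.buf[:self.want], self.buf[self.want:]
            self.want = self.parser.send(chunk)
```

The appeal of the style is that the parser reads as straight-line code ("give me the header, now give me the body") while the pump handles all the partial-delivery bookkeeping.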
:) [01:31:09] ** gbay has joined us [01:31:49] good morning world [01:35:22] * pje is getting sleepy [01:35:43] it's probably worth considering doing more platform specific socket implementations, or at least alternatives to select() [01:35:59] cause the win32 one sucks, and BSD/OSX does a lot better with kevent/kqueue [01:36:30] Well, I'm leaving the select stuff up to Twisted for anything other than vanilla select. [01:36:40] linux has something too, epoll or whatever.. I don't have any experience with it [01:38:44] pje, what do you use to edit code? I've never seen the "page of stuff" style before [01:38:52] jEdit. [01:39:01] But my habit goes back many years before that... [01:39:09] to the QuickBasic 2 editor for DOS. :) [01:39:26] There, I and other developers used 24-line code pages. [01:39:45] Because the QB editor made it really easy to scroll w/PgUp and PgDn. [01:39:52] sure [01:40:15] I have been having a hard time trying to follow it like that with really tall VIM windows :) [01:40:35] Then, when I moved to using PFE, the first Windows editor to catch my fancy, it could only handle 41 lines at most at my then-current monitor resolution. [01:41:03] So, I stuck with that ever since, even though both PFE and jEdit would fit more lines on my current monitor. [01:41:17] And 82 lines fit nicely in a small font printout. [01:41:18] 41 is a pretty bizarre number [01:41:31] Yeah, it's a prime, I believe. [01:41:47] And one less than 42. :) [01:41:54] yeah [01:42:12] my terminal windows are usually about 50 lines tall [01:42:20] I've often thought about dropping it, but it's really hard to give it up. [01:42:32] I can imagine [01:42:36] Everything just seems to have a very physical *place-ness* to it. [01:42:52] I have a much better sense of where things are in a file, than I'd have if I had to scroll. [01:42:58] yeah [01:43:11] what do you do when you're adding meat to a class in the middle of a file? 
;) [01:43:13] It forces me to break things into reasonable chunk sizes, and to think about the formatting of the code. [01:43:22] I reformat. [01:43:27] ack [01:43:37] You haven't seen that in my diffs? [01:43:46] I haven't been watching commits [01:44:09] I don't reformat the whole file of course. It's just a few lines of whitespace added here or removed there. [01:44:43] I just try and leave things in a state such that if I wanted a particular kind of formatting or decoration added, I could write a program to do it for me :) [01:46:45] where is that buffer_gap stuff in peak.util used? [01:48:12] It isn't. [01:48:24] It was being done for bytecode editing in peak.config.modules. [01:48:35] But that ended up being doable in a non-shifting way. [01:48:47] It may get used for buffering socket operations, though. [01:49:00] does peak do any bytecode mangling now? [01:49:09] Only if you use the module inheritance stuff. [01:49:32] (See peak help config.modules) [01:49:45] FWIW, I tried implementing several flavors of possibly-faster-than-s+='foo' buffering, and it doesn't make a damn difference [01:49:54] I'm sure if you had a really big buffer it would matter [01:50:04] but that's not generally the case [01:50:36] Or if you have lots and lots of little strings being added to your buffer. [01:50:53] While other bits are being removed. [01:50:54] pje: what's with this (see peak help..), you a bot? ;p [01:51:07] well I tried doing the iovec stuff [01:51:18] Did you try a buffer-gap approach? [01:51:36] no [01:51:45] gbay, no. If I were a bot, I'd probably spit out the manpage myself. :) [01:51:45] (haven't read the implementation) [01:52:22] I made an interface to iovec.. for using readv, writev, socket ops [01:52:30] Buffer-gap algorithms only copy data when you switch from inserting to deleting at a different location. [01:52:44] Well, when you change what location you're writing at. [01:53:04] pje: hehe.. most likely. 
But then again, I've heard of bots who do their best to act human, or humane at least. Nonetheless, bots most likely don't hack like you, do they.. [01:53:21] gbay: the timbot and martellibot do. :) [01:53:44] Of course, I guess a circular queue would do even better than a buffer-gap for simple I/O queuing. [01:54:04] pje: exceptions to the rule methinks.. =) [01:54:11] And would never need more than two write operations to dump something out. [01:55:02] in any case, I've tried a few solutions, and I think that the Python function call overhead ends up sucking up more performance than the operations [01:55:33] Ah. Well, I wouldn't implement anything other than a list.append()/''.join() approach in pure Python. [01:55:54] actually for a lot of cases strbuf += 'newstr' can be much faster [01:56:14] at least until the new list append performance patches go in [01:56:19] Hm. I guess it does save the attribute lookup. [01:56:57] Although one can often say 'self.write = self.buffer.append' and avoid even that. :) [01:57:14] tried that too, still slower [01:57:22] Interesting. [01:57:26] I know, I was surprised [01:57:44] Yeah, the Python manuals make a big deal about not doing that! [01:57:58] I was looking at twisted's LineReceiver, specifically.. which uses the string buffer version [01:58:02] I was like there's no way that's fast [01:58:08] so I tried to rewrite it like 8 different ways [01:58:16] couldn't do it any way that was reliably faster [01:58:45] tried cStringIO, iovec based stuff, list.append/''.join, etc. [02:01:08] I'd love to be proven wrong, but in any case, I didn't find it to be a bottleneck really either [02:01:29] Very interesting. [02:01:30] because the general case is that you get packets in about the same size you want them [02:01:54] so the buffer doesn't do a lot of accumulating [02:01:55] I was thinking more about sending.
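The `list.append()`/`''.join()` buffer discussed above, including the `self.write = self.buffer.append` trick, looks like this (`JoinBuffer` is an invented name). As the conversation notes, whether it actually beats `strbuf += 'newstr'` depends on chunk sizes and interpreter version, so measure with `timeit` before committing to either:

```python
class JoinBuffer:
    """Accumulate chunks with list.append and materialize with ''.join.

    Sketch of one of the buffering strategies compared above; avoids
    quadratic copying when many small chunks accumulate, at the cost
    of a join when the value is needed.
    """

    def __init__(self):
        self._chunks = []
        # bound-method trick from the conversation: callers invoke
        # buf.write(...) with no extra attribute lookup per call
        self.write = self._chunks.append

    def getvalue(self):
        return ''.join(self._chunks)
```

For the common networking case noted above (packets arrive at roughly the size you want), the buffer rarely accumulates much, so the choice of strategy is unlikely to matter.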
[02:02:04] same deal [02:02:12] the OS does have a buffer, too [02:02:33] and if you're not using sockets directly, you probably have another buffer on top of that [02:05:26] I have to reboot, again.. yay! :) [02:05:46] phase 1: install WebObjects 5.2 from CD, reboot [02:05:54] phase 2: download 150 meg update so it works with current dev tools, reboot [02:09:51] * pje has got to get some sleep [02:10:05] * pje waves [02:10:13] ** pje has left IRC ("Client exiting") [02:11:15] ** gpciceri has joined us [03:00:10] ** vlado has joined us [04:29:30] jack-e|away is now known as jack-e [04:29:32] morning [05:11:17] morning [06:52:25] ** gbay has left IRC ("Client exiting") [09:32:41] * jack-e just connected a twisted.http (instead of using cgi) server as frontend to a peak.web app, that uses zope.publisher %-/ [11:42:31] ** Maniac_ has joined us [11:54:13] ** Maniac has
left IRC (Read error: 110 (Connection timed out)) [12:12:32] ** gpciceri has left IRC (Read error: 110 (Connection timed out)) [13:07:39] bye [13:07:41] ** vlado_ has left IRC ("Leaving") [13:52:04] <_jpl_> Hi all [13:57:45] hey john :) [14:02:34] <_jpl_> how are you? [14:03:25] good, thanks .. and how about you?? [14:58:50] jack-e is now known as jack-e|away [15:35:07] dict.popitem must be one of the least useful things in Python [15:35:45] <_jpl_> Yeah, I've wondered about that one myself. [15:36:02] I am making "IBasic" interfaces for readable/writable mappings [15:36:07] and I was looking at that and thinking "who the fuck uses it" [15:36:11] so I checked the standard library [15:36:13] IT IS USED ONCE [15:36:22] by sets.Set.pop [15:36:32] <_jpl_> In theory you might want to consume a dict in a loop, but in reality? [15:37:01] in reality people iterate over a copy of the dict's items and then clear it [15:37:40] probably because python doesn't do "while item=d.popitem()" or something like it [15:38:05] <_jpl_> yep [15:38:39] <_jpl_> I usually find no need to clear a dict when I'm finished iterating it.
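The consume-a-dict loop that `while item = d.popitem()` would express (if Python allowed assignment in a condition) is still easy to write, because an emptying dict is its own loop condition. `drain` is a hypothetical name for the pattern:

```python
def drain(d, handle):
    """Consume a dict destructively, one item at a time.

    Sketch of the popitem loop discussed above: each iteration
    removes and returns an arbitrary (key, value) pair, and the
    loop stops when the dict is empty, with no separate clear().
    """
    while d:                       # a non-empty dict is truthy
        key, value = d.popitem()   # remove and return an arbitrary item
        handle(key, value)
```

Compared with iterating a copy and then calling `clear()`, this never holds two copies of the items at once, which is the one case where `popitem` earns its keep.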
[15:39:21] there are other places that popitem exists, but they are only to expose the same interface as dict [15:42:37] I'm searching on google, and the same thing seems to be the case [15:42:45] I found a different implementation of Set, that uses popitem [15:42:53] and everything else just exposes the dict interface [15:54:37] list.clear would probably be used more than dict.clear [15:54:47] but people do del list[:] instead [15:55:39] ** gpciceri has joined us [15:55:47] ** _Maniac_ has left IRC (Read error: 54 (Connection reset by peer)) [15:56:26] ** _Maniac_ has joined us [17:22:43] ** gpciceri has left IRC ("Leaving") [17:52:13] ** gbay has joined us [17:56:08] ** gpciceri has joined us [17:58:26] ** gpciceri has left IRC (Client Quit) [18:57:38] binding.Make is really tempting [18:57:53] to use for lots of things I would normally put in a method, like method-proxying [19:26:42] ** gbay has left IRC (Remote closed the connection) [21:55:24] <_jpl_> Method proxying? [21:56:06] <_jpl_> If I'm thinking of the same thing, you can use binding.Delegate instead. [21:56:28] <_jpl_> Unless you really do only want to compute the value once. [23:56:59] * _Maniac_ looks around