[12:38:35] [connected at Fri Nov 7 12:38:35 2003]
[12:38:36] <> *** Looking up your hostname...
[12:38:36] <> *** Checking ident
[12:38:36] <> *** No identd (auth) response
[12:38:36] <> *** Found your hostname
[12:38:36] <> *** Your host is kornbluth.freenode.net[bandit.probe-networks.de/6667], running version dancer-ircd-1.0.32
[12:38:36] [I have joined #peak]
[12:38:36] ** kornbluth.freenode.net set the topic to PEAK http://peak.telecommunity.com || WikiPages at http://peak.telecommunity.com/DevCenter || IRC-Logs: http://peak.telecommunity.com/irc-logs/
[12:40:46] ** pje has joined us
[12:40:46] Logs are back, for the moment.
[12:41:02] I'm ssh-tunnelling it via Ty's machine again. Hopefully it won't rain too hard down there for a few days. :)
[12:42:08] * pje is sorting out an API design issue for peak.query.algebra
[12:42:45] So, I'll inflict my interior monologue (dialogue?) on you folks here as I sort it out. :)
[12:43:05] Probably won't bother you much, since everybody seems to be away at the moment anyway. ;)
[12:44:09] The problem in essence is aliasing.
[12:44:31] Each relvar in a query needs to be unique.
[12:44:51] Why does it need to be unique? So that conditions can refer to columns unambiguously.
[12:45:20] E.g. foo.bar=42 - if there's more than one 'foo' in the query, how do we know which one this condition refers to?
[12:46:10] And it's not just tables that this applies to; if you join a subquery to itself, you have the same ambiguity.
[12:47:04] SQL's solution is to force you to make up alias names for each usage of the same table, or to use table names instead of aliases when column names would be ambiguous.
[12:47:38] However, in a Python API, every time we use a string it adds quoting overhead.
[12:47:47] (That is, we have to type the quotes, read the quotes, etc.)
[12:48:13] So I'd like the API to avoid strings, and creating explicit alias names has to be done with strings.
[12:48:59] Well, I suppose not necessarily. You could have a function 'join(t1=aTable,t2=anotherTable)'
[12:49:52] But then what about join conditions? For a condition to apply to e.g. t1.foo, the result of the join would need to have the aliases accessible.
[12:50:32] E.g. query = join(t1=aTable,t2=aTable); query=query.where(query.t1.foo.eq(query.t2.bar))
[12:51:16] Hmmm. Actually, so far that's the most compact notation I've seen for a simple self-join.
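A minimal, runnable sketch of that notation (the Table/Alias/join/where names here are illustrative stand-ins, not the actual peak.query.algebra API):

    import copy

    class Column:
        """A reference to a column of a specific alias."""
        def __init__(self, alias, name):
            self.alias, self.name = alias, name
        def eq(self, other):
            return ('eq', self, other)  # conditions as plain tuples, for brevity

    class Alias:
        """A uniquely named reference to an underlying table."""
        def __init__(self, name, table):
            self.name, self.table = name, table
        def __getattr__(self, attr):
            table = self.__dict__.get('table')
            if table is not None and attr in table.columns:
                return Column(self, attr)
            raise AttributeError(attr)

    class Table:
        def __init__(self, name, columns):
            self.name, self.columns = name, columns

    class Join:
        def __init__(self, aliases):
            self.condition = None
            # each keyword argument becomes a distinct, named alias object
            self.aliases = dict([(k, Alias(k, t)) for k, t in aliases.items()])
        def __getattr__(self, attr):
            try:
                return self.__dict__['aliases'][attr]
            except KeyError:
                raise AttributeError(attr)
        def where(self, condition):
            new = copy.copy(self)   # a new join object, with the same aliases
            new.condition = condition
            return new

    def join(**kw):
        return Join(kw)

    # the two-step self-join from above:
    emp = Table('employee', ['empnr', 'empname', 'boss'])
    query = join(t1=emp, t2=emp)
    query = query.where(query.t1.boss.eq(query.t2.empnr))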
[12:51:44] It can't be done in a single statement or expression, though.
[12:52:24] Feel free to interject questions or comments at any time, so I don't feel like I'm talking to myself. :)
[12:53:07] * jack-e is trying to get out of the office asap ... so i'll just finish my work and head off for dinner ;-)
[12:53:40] Of course, I realize you're 6 hours ahead of me there. :)
[12:53:47] yup
[12:54:03] ** lex_ has joined us
[12:55:50] * jack-e asks himself if there is a way to make the peak.storage.data_manager api asynchronous ...
[12:56:14] Asynchronous how?
[12:56:17] yup ... but i have not yet heard an answer from myself :)
[12:56:37] i posted that because pje probably missed it :)
[12:56:43] Yes, I did.
[12:56:45] i was thinking of how to combine what twisted and peak offer
[12:56:46] Miss it, that is.
[12:57:19] e.g. using Twisted's new IMAP implementation as the "backend" for the IMAPDatamanager
[12:57:41] but then i end up synchronizing async behaviour ..
[12:57:42] Hm. I was thinking you meant the other way around...
[12:58:04] The way you want might be real easy, though.
[12:58:22] Just run reactor iterations re-entrantly.
[12:58:35] Until you receive the callback you're looking for.
[12:58:56] It's plugging datamanagers into Twisted that would be hard.
[12:59:06] 'cause you'd need threads or Deferreds, or both.
[12:59:16] i think i found code in twisted.trial.util that does what you say
[12:59:59] i'll try that, if i have some time
[13:00:25] the other way round is basically the same as what the twisted people did with their adbapi (async DB-API), i think
[13:00:52] (using a dm in an async env)
[13:02:20] Likely so.
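A rough sketch of that re-entrant-reactor trick, using the reactor.iterate() API Twisted offered at the time (twisted.trial.util's helper worked along these lines); the fetchMessage call in the comment is a hypothetical backend method:

    from twisted.internet import reactor
    from twisted.python import failure

    def blockOn(deferred):
        """Spin the reactor re-entrantly until `deferred` fires."""
        results = []
        deferred.addBoth(results.append)   # capture the result or Failure
        while not results:
            reactor.iterate(0.01)          # run one reactor iteration
        if isinstance(results[0], failure.Failure):
            results[0].raiseException()
        return results[0]

    # e.g. inside a synchronous data manager method:
    #   msg = blockOn(imapBackend.fetchMessage(uid))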
[13:03:04] Hmm. On the aliases issue, it occurs to me that there's a problem with alias names being assigned globally.
[13:03:34] If I join(t1=join(t1=aTable,t2=aTable),t2=join(t1=aTable,t2=aTable)), what do I get?
[13:04:14] I guess it's not a big deal to have SQL generation make the alias names t1_t1, t1_t2, etc.
[13:05:03] Actually, since the whole thing flattens to a single join, the join has to rename them, which kinda sucks since the names don't match up anymore.
[13:05:50] It seems I'd have to make join-flattening take place as part of SQL generation, rather than as part of construction, then.
[13:05:59] Feh.
[13:07:25] Still, the idea of requiring joins to be of relvar references, rather than actual relvars, certainly seems to have merit.
[13:08:15] It cleanly takes care of the "can't have the same thing more than once" problem.
[13:09:53] I could probably wrap that notation into the existing API as join=Items(t1=aTable,...)
[13:10:27] Although that leaves out what alias the joining table should have. Ah well, never mind that bit.
[13:10:51] The whole aliasing issue only really comes into play when we start generating conceptual queries.
[13:11:12] We want the schema map to contain simply defined relational queries, with features expressed as pairs of projections over the relvar.
[13:11:51] So, if I have a query representing the Employee table in the ConQuer example schema,
[13:12:19] then I would express the feature Employee.name as a projection from emptable.empnr to emptable.empname
[13:12:53] But the conceptual -> relational translator doesn't want to join in the employee table several times if you refer to different features of Employee.
[13:13:29] So, it needs to be able to translate the (relvar,projection,projection) tuple for a given feature into projections against the query under construction, if the relvar is already present.
[13:15:15] The query-in-progress will know, for a given concept variable and inputProjection/relvar pair, if there is a relvar in the query already that corresponds.
[13:15:15] In that case, it simply uses the right output projection.
[13:15:15] But if not, it has to "clone" the relvar from the schema.
[13:15:15] And add it to the query-in-progress.
[13:15:35] So, if this is a new use of the relvar in the query, it needs to get a different alias.
[13:20:01] I guess the nested aliasing thing isn't really a problem in join flattening.
[13:20:01] If I join(t1=join(t1=aTable,t2=aTable),t2=join(t1=aTable,t2=aTable)), I can have a join object with t1 and t2 as attributes that point to "references"
[13:20:04] Which contain "column references" to the name-flattened aliases.
[13:20:51] IOW, join(t1=join(t1=aTable,t2=aTable),t2=join(t1=aTable,t2=aTable)) is equivalent to join(t1_t1=aTable,t1_t2=aTable,t2_t1=aTable,t2_t2=aTable)
[13:21:17] Except that the former has 't1' and 't2' attributes, and the latter has four t*_t* attributes.
[13:21:47] the latter is more magical :-/
[13:21:49] However, we could for all other purposes consider them identical, and allow them to compare equal.
[13:22:02] It's the former that's more magical.
[13:22:15] Hm. Actually, neither is magical.
[13:22:25] magical = implicit
[13:22:28] In each case, the *attributes* referring to tables are the alias names.
[13:22:49] in the second you are splitting on the '_', right?
[13:22:59] No, not at all.
[13:23:06] the syntax makes the intent more obvious in the first
[13:23:08] oh
[13:23:11] hmm
[13:23:30] In the first case, I'm disambiguating aliases in nested queries by prepending the higher-level prefixes.
[13:23:43] And that is *only* visible in the output SQL.
[13:24:41] So, it's quite explicit. Whatever the keyword arguments to join, those are the table references you'll be able to get at from the resulting join object.
[13:25:11] gotcha
[13:25:55] Hm. Now let's see what we have to do to conditions, to make this work.
[13:26:30] If I do q=join(t1=aTable,t2=aTable); q(where=q.t1.foo.eq(q.t2.bar))
[13:26:55] I should get a new join object, with the same aliases. Hm, that makes sense.
[13:27:07] aliases = table references, really, or relvar references.
[13:27:47] However, if I do q=join(t1=q, t2=q) (creating the four-way join), what happens to the conditions?
[13:28:35] Clearly, a join must *clone* the conditions of its relvars, remapping them to point to its new aliases.
[13:29:31] have a nice weekend .. bye
[13:29:36] See ya!
[13:29:41] ** jack-e is now known as jack-e|away
[13:30:53] So, when the join "lifts" the nested t1 and t2 from each q, and creates the internal t*_t* references, it must also "search-and-replace" column references.
[13:31:33] Ugh.
[13:32:57] Back to the conceptual level... we're probably going to end up doing one massive join of all the relvars, so there we needn't worry about flattening making ugly names.
[13:34:49] Hm. Now, how to incorporate this into the existing system?
[13:35:11] Really, I could make join a function that does alias creation, and simply add an alias type to the existing relvar types.
[13:35:52] Conditions, expressions, and DVs would need to be clonable, giving them a map of relvar->relvar replacements.
[13:36:14] Actually, relvar->alias, I suppose.
[13:37:05] At that point, I could let 'db' objects return tables instead of new table aliases each time you access them.
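Continuing the toy classes from the first sketch, that "search-and-replace" step might look like this (hedged; real conditions would be objects, not tuples):

    def remap(condition, replacements):
        """Clone a condition, re-pointing its column references.

        `replacements` maps old alias objects to their new aliases.
        """
        if isinstance(condition, Column):
            target = replacements.get(condition.alias, condition.alias)
            return Column(target, condition.name)
        if isinstance(condition, tuple):   # operator nodes like ('eq', lhs, rhs)
            return tuple([remap(part, replacements) for part in condition])
        return condition                   # literals pass through unchanged

    # e.g. lifting q's condition into a four-way join (hypothetical objects):
    #   lifted = remap(q.condition, {q.t1: big.t1_t1, q.t2: big.t1_t2})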
[13:39:18] And, I could perhaps even get the existing relvar(join=[]) API to work with this, by allowing join=Items(t1=None,t2=otherTable) to mean 't1' is the alias for the joining relvar.
[13:40:27] Join conditions are still a problem in that scenario.
[13:41:40] (i.e. how do you refer to a column from each table, if they're the same table?)
[13:42:24] OTOH, if the tables are different, the join process could remap the condition, just like any other condition.
[13:43:05] So, you could allow join(condition,**kw), if the join condition is unambiguous as to source columns.
[13:46:51] Okay, so once this was done, the "no repeated relvars" rule would be automatically enforced, because joining would always result in new relvars being created to alias the underlying relvars.
[13:46:51] (It's currently enforced by raising an error when an unaliased self-join occurs.)
[13:48:58] This change would result in alias names being assigned manually (or based on manually assigned names), so the current SQL generation code for alias name management would go away.
[13:49:33] On the downside, the test suite would have to reflect automatic alias generation in its tests of join associativity.
[13:50:16] But, the test suite will have to be rewritten to use the new join API anyway, so what the heck?
[13:53:08] As for the nested naming, we could actually simplify that by only doing it when a "lifted" relvar's alias would conflict with an existing top-level or nested alias.
[13:54:08] Thus join(foo=q,bar=q), where q=join(t1=t,t2=t), would be equivalent to join(t1=t,t2=t,foo_t1=t,foo_t2=t)
[13:54:28] (Using alphabetical order of the top-level aliases as the basis for selecting names.)
[13:55:18] This means that doing repeated join(x,join(y,join(z,...))) wouldn't create deeply nested aliases, as long as the alias names were distinct.
[13:55:43] This would work well for the conceptual->relational translator; it could simply use unique alias names throughout.
[13:56:56] Hmmm... interesting side effect: you could now specify much more precisely the nature of your output SQL.
[13:57:18] But there's a catch we haven't considered yet: subqueries in conditions, such as EXISTS() or IN() subqueries.
[13:57:35] Or even X<(SELECT ...) subqueries.
[13:59:03] Suppose we have the equivalent of q=aTable(where=EXISTS(subq)), and we join(a=q,b=q).
[14:00:08] Realiasing each version of q needs to ensure that the subquery retains distinct aliases for its contents in each case.
[14:00:16] * pje shakes his head
[14:00:22] Aaaaaugh.
[14:00:52] Unless, of course, there are no SQL dialects that freak out when you reuse aliases in subqueries.
[14:00:59] But what are the odds of that?
[14:01:50] Even if there were no such dialects, though, *correlated* subqueries would still need realiasing.
[14:02:31] subq could contain conditions referencing aTable, IOW.
[14:03:47] And thus, in the combined query, each instance of 'q' would need a subquery that referred to its individual aTable reference.
[14:04:15] It almost seems as though any join operation requires a "deep copy" of each joined argument.
[14:05:19] In which case, alias names suddenly become irrelevant again.
[14:05:53] That is, the current auto-assignment algorithm is sufficient.
[14:06:56] However, for API convenience, the manual assignment mechanism could be retained; and if it's *required*, then we can still retire the auto-assignment algorithm except perhaps in the case of subqueries.
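That conflict-only naming rule, as a standalone sketch (real relvars wouldn't be plain dicts and strings):

    def flatten(aliases):
        """Lift nested aliases, prefixing with the outer name only on conflict.

        `aliases` maps a name to a table, or to a dict of nested aliases;
        top-level names are processed in alphabetical order, per the rule above.
        """
        flat = {}
        for outer in sorted(aliases):
            nested = aliases[outer]
            if not isinstance(nested, dict):
                flat[outer] = nested                       # a plain table
                continue
            for inner in sorted(nested):
                if inner in flat:
                    # (a real version would also handle prefixed names
                    # that still collide)
                    flat['%s_%s' % (outer, inner)] = nested[inner]
                else:
                    flat[inner] = nested[inner]
        return flat

    q = {'t1': 'aTable', 't2': 'aTable'}
    print(flatten({'foo': q, 'bar': q}))
    # {'t1': 'aTable', 't2': 'aTable', 'foo_t1': 'aTable', 'foo_t2': 'aTable'}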
[14:07:58] To put it another way, each relvar will have a map of alias names to referenced relvars.
[14:08:25] When relvars are joined, the relvars are cloned, and the joining relvar creates a new alias->RV map, using the previously stated rules.
[14:08:59] Thus, no relvar may appear twice within a join, since if it does, each is simply a new clone.
[14:12:14] Ecch. I think I'm going in circles here, between what's most convenient for manual use vs. what's best for the conceptual translator.
[14:15:50] I see two routes the translator could take: 1) build up a list of things to be joined, and do them all at once, or 2) join as it goes.
[14:16:17] If it takes the join-as-it-goes approach, it has no way to reference previous relvars, since they'll now be cloned.
[14:16:44] Hmm... but I guess there's no real way around that, with any of the schemes I've come up with.
[14:18:26] Well, yes there is. If you can always reference the target of the last join as an attribute, you can get at all its columns.
[14:18:38] You just can't reference anything *else*.
[14:18:58] Okay, never mind, that's not gonna work.
[14:19:29] Okay, the list-of-things-to-be-joined approach... not gonna work either. We'll have to individually clone the relvars.
[14:20:13] I mean, the translator will have to clone them.
[14:20:35] (Which means the join operation will waste time cloning them again.)
[14:21:06] (But that's a small price to pay for convenient self-joins in manual API use.)
[14:21:43] So, it looks like the real tradeoff here is in balancing explicit vs. implicit alias creation.
[14:23:14] I think I lean towards making it explicit. Self-joins will still require two steps, no matter what we do, though.
[14:27:20] Okay, I think this has reached the point where I'd be better off expressing my API goals as unit tests. :)
[14:27:53] For one thing, it'll be a lot less typing. :)
[14:27:56] :)
[14:28:21] Ah, there's someone awake!
[14:28:24] * pje grins
[14:28:48] yes
[14:28:56] * pje is surprised
[14:29:07] I didn't think anybody could stay awake through all that. :)
[14:29:16] I myself had some difficulty.
[14:29:20] * pje chuckles
[14:29:31] I think the API goals can be broken down into:
[14:29:40] 1. Joins require names
[14:30:06] 2. Joins have attributes that reference the original joined items (or clones thereof)
[14:30:53] 3. Joins containing duplicates are de-duplicated by cloning
[14:31:32] That probably about does it.
[14:32:21] Oh, and names of lifted subjoins are automatically disambiguated.
[14:32:34] And each join knows alias names for every contained relvar.
[14:32:36] There.
[14:34:08] And now, I'm off to get some food.
[14:56:31] * pje returns
[14:57:51] I think the first thing I'll need to do is implement cloning.
[15:11:11] It seems that cloning isn't really going to be any different than using copy.deepcopy().
[15:12:13] Hm.
[15:12:41] Except that "tables" will have their __deepcopy__() return an alias of themselves.
[15:13:07] No, scratch that.
[15:13:36] They'll just avoid deepcopying their database.
[16:03:28] Hm. Well, deepcopying is ridiculously slow, if applied to all join arguments.
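The three numbered goals above (14:29-14:31), roughed out as the unit tests pje mentions writing; this reuses the toy Table/join from the first sketch, so treat it as a hedged outline rather than the real API:

    import unittest

    class JoinAPITests(unittest.TestCase):

        def setUp(self):
            self.emp = Table('employee', ['empnr', 'empname', 'boss'])

        def testJoinsRequireNames(self):
            # 1. joins require names: positional relvars are rejected
            self.assertRaises(TypeError, join, self.emp, self.emp)

        def testJoinsExposeAliases(self):
            # 2. joins have attributes referencing the joined items
            q = join(t1=self.emp, t2=self.emp)
            self.assertTrue(q.t1.table is self.emp)
            self.assertTrue(q.t2.table is self.emp)

        def testDuplicatesDeduplicatedByCloning(self):
            # 3. two uses of the same table become distinct alias objects
            q = join(t1=self.emp, t2=self.emp)
            self.assertTrue(q.t1 is not q.t2)

    if __name__ == '__main__':
        unittest.main()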
[16:08:27] <_jpl_> I wish I had the time and brainspace to absorb all this
[16:08:29] Don't sweat it. As much as 50-80% of the stuff I spew is crap and/or just plain wrong. :)
[16:08:31] But don't quote me on that. ;)
[16:08:53] Once I have it *working*, then you can worry about understanding it. :)
[16:26:35] Okay, it appears that injudicious use of deepcopy() is also semantically wrong. :(
[16:27:50] Or more precisely: if a condition applied at join time references cloned relvars, it must be cloned with replacement for those relvars.
[16:28:43] Thus, it is an error to create a join that contains multiple instances of the same table, using a condition that refers to any of the duplicated items.
[16:29:05] (Because it's then ambiguous as to which items are referred to.)
[16:34:04] ** vlado has joined us
[16:36:54] Also, it's an error to apply a condition to a relvar where the condition does not apply to some component of the relvar.
[16:37:31] Neither of these things is checked for right now.
[16:38:10] * pje sighs
[16:38:30] I keep having the feeling that there is a simple, obvious way to do this, but it keeps eluding me.
[16:39:00] I think the problem is in how I'm *perceiving* or labelling the concepts.
[16:39:12] And that there is another way to look at this, in which everything is much simpler.
[16:39:54] For example, I could consider the arguments to a join not as relvars in themselves, but rather as *recipes* for constructing relvars.
[16:40:49] Nah, that's not really any different.
[16:42:35] Ah, looks like my ISP has fixed the port problem... time to bounce the bot.
[16:42:43] [disconnected at Fri Nov 7 16:42:43 2003]
[16:42:50] [connected at Fri Nov 7 16:42:50 2003]
[16:43:23] [I have joined #peak]
[16:43:44] There we go.
[16:44:50] Okay, so what if I look at the *conditions* as recipes, rather than the relvars?
[16:45:59] That seems slightly more helpful; I can construct a new condition against my original or copied relvars.
[16:48:12] And what I want to do is deepcopy the condition, with a memo that maps the original targets of the conditions to my new aliases.
[16:49:07] That *almost* works...
[16:49:14] * pje sighs again
[16:50:16] I'm trying to hold too many conditions in my brain simultaneously, simulating SQL generation against a zillion kinds of queries at the same time.
[16:50:25] Including kinds the code doesn't support yet.
[16:52:31] Okay, the root of the conflict here is in support for unaliased conditions, in order to simplify the API.
[16:52:54] We want the API to be simple, because the schema mappings will be expressed in terms of the relational API.
[16:54:13] For the API to be simple, we want to treat relvar columns as attributes, and use them directly in conditions.
[16:55:27] Currently, if columns are renamed, or placed in a join structure, or dropped from the output, condition objects remain unchanged, because they refer directly to the "raw" columns involved.
[16:57:01] This means they are bound to a specific table instance. In the general case, this is a good thing.
[16:57:57] Problems ONLY arise when a table is reused, AND the conditions applying to child relvars are "lifted" into a join.
[17:01:05] (Because if conditions stay with the "lower" relvar, they are unambiguous in context.)
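That memo trick (16:48), sketched; it also covers the earlier point about never deep-copying the database, and the objects in the example comment are hypothetical:

    import copy

    def cloneCondition(condition, replacements, db=None):
        """Deepcopy a condition, pre-seeding the memo so that original
        relvars come out replaced by their new aliases."""
        memo = {}
        if db is not None:
            memo[id(db)] = db          # never deep-copy the database itself
        for old, new in replacements.items():
            memo[id(old)] = new        # old relvar -> new alias
        return copy.deepcopy(condition, memo)

    # e.g.:
    #   cond = cloneCondition(q.condition, {q.t1: big.foo_t1, q.t2: big.foo_t2})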
[17:25:03] * Maniac_ allows pje to design his day away whilst he departs for home and a nice tall glass of whiskey
[17:25:18] cya!
[17:25:23] ** Maniac_ has left us
[17:29:22] * pje waves
[17:30:10] * pje just got a selective deepcopy approach to work, for some very simple conditions.
[18:02:11] * pje checks in the code
[18:04:59] ,
[18:05:09] ?
[18:05:14] :)
[18:05:49] * lex_ shows signs of life
[18:06:24] ** rdmurray has joined us
[18:08:20] I just installed A2 and ran 'peak test' and got some errors. Should I just do a cvs checkout?
[18:08:43] Possibly. What errors did you get?
[18:08:54] And what platform are you on?
[18:09:02] FreeBSD, python 2.3.2
[18:09:15] FAIL: checkClassInfo (protocols.tests.test_advice.FrameInfoTest)
[18:09:20] assert f_locals is self.__class__.__dict__ # ??? AssertionError
[18:09:26] That's the last one.
[18:09:31] Ah.
[18:09:47] Okay, I think that's an A2/2.3-specific problem.
[18:09:52] Lemme check...
[18:10:51] Hmmm... Looks like that's still in there.
[18:11:16] It's commented '# ???' though.
[18:11:24] Is that the only error?
[18:11:29] Nope.
[18:11:43] How many in all?
[18:11:43] self.ob = ModuleType() TypeError: function takes at least 1 argument (0 given)
[18:11:46] shows up a bunch
[18:12:08] In fact that's the only other variety.
[18:12:57] That one looks kind of problematic :)
[18:12:59] Ah. These all look like 2.3-specific errors.
[18:13:15] Should I just use 2.2, then?
[18:13:21] And none of them should interfere with you actually using PEAK.
[18:13:26] Ah, OK.
[18:13:48] I really should run the test suite on 2.3 and get these fixed up. 2.2 is my development platform, so as not to introduce 2.3 dependencies.
[18:14:02] Hmm. Would you recommend sticking with A2 or going to CVS, given that I'm just starting to explore PEAK?
[18:14:05] These bugs are only breaking the test suite itself.
[18:14:31] I'd suggest CVS. A3 has a simpler peak.binding API, and the Wiki notes all use it.
[18:14:47] Ah, docs are a good reason :)
[18:15:07] Plus there's been lots of bug fixes, too. Not just these test suite issues.
[18:15:11] Real bugs. :)
[18:17:33] The docs say to uninstall if you are installing a new version. But I must have missed the note on how to do an uninstall. setup.py uninstall doesn't seem to be it :)
[18:18:28] Nope.
[18:18:41] You just have to delete the directories from site-packages.
[18:18:59] Unless you're using a packaging system like RPMs or pkgsrc
[18:19:04] Directories plural?
[18:19:06] Or whatever it's called for FreeBSD. :)
[18:19:21] site-packages/peak/ and site-packages/protocols
[18:19:30] The ports system. That has an uninstall :) But there's no port for PEAK.
[18:19:39] Ah, of course.
[18:20:03] Hm, well I know NetBSD has some .mk's that wrap the distutils...
[18:20:28] So it should be a very short Makefile to set up a port, if FreeBSD has the same sort of thing.
[18:20:46] Yeah. I expect it is, and just no one has stepped forward to do it. If I get the time perhaps I will.
[18:21:24] Well, it's not like there's an official PEAK port for pkgsrc, either. I just have a Makefile I personally use.
[18:22:43] By the way, I'm checking in a fix now for the ModuleType() problem.
[18:22:57] Cool.
[18:22:58] At least once I finish rerunning the tests, anyway.
[18:23:25] It seems that 2.2 and 2.3 will both *accept* a name as the first arg to ModuleType, but 2.3 *requires* it.
[18:24:21] Okay, it's checked in, but it'll take a few minutes before it rsyncs to the anoncvs jail.
[18:24:42] Are the warnings during the compile of _speedups.c normal?
[18:26:43] Yeah.
[18:26:58] _speedups.c is generated from _speedups.pyx by Pyrex.
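The incompatibility in a nutshell:

    # Both 2.2 and 2.3 accept a name argument; only 2.3 requires one.
    from types import ModuleType

    m = ModuleType('dummy')   # fine on 2.2 and 2.3
    m = ModuleType()          # fine on 2.2; on 2.3:
                              # TypeError: function takes at least 1 argument (0 given)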
[18:28:48] With the ModuleType() fix, that leaves the class dictionary check as the only 2.3 test problem.
[18:29:19] Interesting that nobody's reported either of the two problems, since PyProtocols as well as PEAK would have them.
[18:29:27] Guess nobody's used it with 2.3 yet.
[18:29:36] Or else they just didn't run the tests. :)
[18:29:56] :)
[18:31:22] OK, cvs up'ed, built, installed, and ran the tests, and those errors are gone. (The failure is still there, but you didn't say you'd fixed that.)
[18:34:18] Right, I haven't decided *how* yet. :)
[18:35:17] So, given that I want to create a simple database app (a trivial accounting package, using flat files for the database for right now), what are the best docs to start with? :) Don't feel obligated to answer that, I can wander around the wiki for a while until I get a clearer picture.
[18:35:54] When in doubt, start with an individual package's interfaces module.
[18:36:04] There's not really a whole lot of other doc besides docstrings.
[18:36:08] k
[18:37:16] PEAK is very much undocumented right now.
[18:37:39] Really, it might as well be completely undocumented. PyProtocols is very thoroughly documented, however.
[18:38:13] I was thinking I should start there anyway.
[18:38:40] Hmm. 'pydoc peak.config.interfaces' just produced an error, no info. Is pydoc not supported?
[18:41:09] Pydoc is broken.
[18:41:19] It doesn't deal with metaclasses.
[18:41:31] k
[18:41:50] However, it really doesn't add any value over simply doing 'less interfaces.py', since the interfaces don't have any executable code for pydoc to ignore. :)
[18:42:17] Well, the value it adds is that I don't have to type the path to interfaces.py :) :)
[18:42:24] Which is no big deal.
[18:42:33] Also, btw, as far as the order of looking at things in PEAK goes, I'd suggest looking at binding, config, and naming, in that order.
[18:42:42] Each builds on the preceding one(s).
[18:42:47] OK, thanks, that helps a lot.
[18:43:13] And *everything* else in PEAK except peak.util depends on peak.binding, and most things also interact with config or naming in some way.
[18:43:35] The wiki has some good motivating intros to peak.binding.
[18:43:49] Oddly enough, the most useful of these is titled 'PeakDatabaseApplications'. :)
[18:44:06] And doesn't really have anything much about database apps. :)
[18:44:07] Heh. Given what I want to do, I read that one first, and it was indeed enlightening :)
[18:53:10] Hey, should I stick your comments about what order to read the interfaces files in into the wiki? Like maybe as a two-sentence preface to the whole page?
[18:55:15] Sure.
[18:55:44] It's not so much about the interface files per se, as the order to understand the packages in.
[18:56:12] You'll actually discover that those three packages' interfaces are quite incestuous: binding has interfaces that derive from config interfaces, for example.
[18:56:52] It's just that you can gloss over that a little in focusing on the binding package first.
[19:00:46] Excellent. Looks like that checkClassInfo test is the only one that fails on 2.3 for me, too.
[19:01:40] I wonder if I could adapt Chris Withers' Zope autotesting framework to autotest PEAK.
[19:02:01] I currently run the FreeBSD autotests for Z3X.
[19:02:17] I'd guess so. It's got a 'peak.tests.test_suite' you can run.
[19:02:29] Alternatively, setup.py test runs it...
[19:02:33] And so does 'peak test'.
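For the first option, something along these lines should work (assuming peak.tests.test_suite is a callable returning a TestSuite, as the name suggests):

    import unittest
    from peak.tests import test_suite

    unittest.TextTestRunner().run(test_suite())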
[19:02:54] I think I'll put it on my todo list, just for fun :)
[19:04:52] * pje is fixing the other test problem
[19:06:52] * pje checks in the fix
[19:08:06] And the tests pass now.
[19:08:14] (on 2.2 and 2.3)
[19:08:34] I figured out a different way to test what I was testing, that's more compatible.
[19:13:57] So how'd you hear about PEAK, btw?
[19:22:20] Oh, I've been following your adventures loosely since the days of ZPatterns, which I used for a shopping cart app.
[19:22:52] * pje nods
[19:23:01] Z3, while cool, is way overkill for the app I'm currently working on.
[19:23:10] So I decided to finally check out PEAK :)
[19:23:25] Alas, PEAK's storage and web frameworks are a bit underpowered at present.
[19:23:59] Not that I plan for the web part to be a Zope-killer, except for the sort of application space that ZPatterns was for.
[19:24:07] i.e. web *applications*, not web *sites*.
[19:24:14] Well, I don't need web, and my storage needs *now* are pretty simple. But I want to design the app so plugging in better storage later is easy.
[19:24:48] I prefer designing web applications to designing web sites :)
[19:24:50] Well, if you plan to use files, you'll want to check out peak.storage.files.
[19:24:56] k
[19:25:15] Probably EditableFile in particular, if your data can fit all in memory.
[19:25:55] At first it will, for sure.
[19:27:09] Usage is pretty simple: f=EditableFile(parent,filename="whatever")
[19:27:29] f.text is then None if the file is nonexistent, or else the file's contents.
[19:27:45] Setting it to None deletes the file; setting it to a string changes the file's contents.
[19:28:07] A transaction has to be active in order to make changes.
[19:28:28] And they are written out when the transaction is committed, to a backup file that then replaces the original.
[19:29:07] Unfortunately, it's not multi-process safe; in principle two processes could try to commit at the same time and overwrite each other.
[19:29:33] Well, that won't be an issue for me, at least at this stage.
[19:29:54] In peak.tools.version, there's a versioning application that uses EditableFile, and sets up a transaction, etc.
[19:34:44] FYI, all the tests run for me now, too.
[19:36:20] Good.
[19:47:05] <_jpl_> btw, I'm using PEAK with 2.3, but don't often run 'peak test'
[19:47:55] Which, if you haven't run into any errors and aren't modifying peak, makes perfect sense :)
[19:49:37] <_jpl_> And I'm going to see if I can get to the halfway mark with PeakDatabaseApplications this weekend. :)
[19:50:02] <_jpl_> pje, it will have something to do with databases once I get more into it. :)
[19:50:21] <_jpl_> I'm planning to show basic DM usage, probably using your bulletins example.
[19:51:08] Sounds good.
[19:52:45] <_jpl_> rdmurray: I'd be happy to answer any questions you might have; I've been using PEAK pretty heavily for six months or so, having spent countless hours reading the source code, and so have a pretty good grasp of binding, config, and storage from a newcomer's perspective.
[19:53:33] * pje is going to head home now...
[19:53:41] Have fun, guys.
[19:53:46] <_jpl_> See you around.
[19:53:57] night, pje
[19:54:22] _jpl_: thanks. I think I have to read docs for a while before I have any intelligent questions :)
[19:54:22] Adios
[19:54:37] ** pje has left us
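pje's EditableFile walkthrough above (19:27-19:29), assembled into one sketch; the makeRoot and transaction calls are my best guess at the PEAK API of that era, so treat the exact names as assumptions:

    from peak.api import config, storage
    from peak.storage.files import EditableFile

    parent = config.makeRoot()          # a root component to hang things on
    f = EditableFile(parent, filename="whatever")

    print(f.text)   # None if the file is nonexistent, else its contents

    storage.beginTransaction(parent)    # changes require an active transaction
    f.text = "hello"                    # setting None would delete the file
    storage.commitTransaction(parent)   # written to a backup file that then
                                        # replaces the original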
[19:55:13] What kind of app(s) have you written with it?
[19:56:10] <_jpl_> I got a big head start from reading the code of the 'bulletins' example. It doesn't do much, but it does demonstrate configuration, data managers, binding, and how to build basic command line apps.
[19:56:38] <_jpl_> I'm working on an agent/service framework for doing automatic system/configuration management.
[19:56:42] Where do I find that example?
[19:57:22] That sounds like something I'd have liked to have had in my old job (ISP Operations manager)
[19:58:02] <_jpl_> Over the last two weeks I've converted all the distributed code to Twisted, so now have a pretty decent example of a high-level layer above PerspectiveBroker using PEAK to pull everything together.
[19:58:57] <_jpl_> ISPs will love this thing; we'll be releasing it open source eventually. We're building it to manage the x-thousand Unix+Windows servers we have here at LookSmart.
[19:59:23] Sounds way cool.
[19:59:50] I've looked at a couple of config management systems in the past, but they were pretty inflexible and limited.
[20:01:17] Where is that bulletins example?
[20:01:29] <_jpl_> They usually are. We're going for a high level of flexibility with this system, which PEAK makes pretty easy to do. Thanks to PEAK's binding and config capabilities, users will be able to plug in "tools" or adapters for tools to do just about anything.
[20:01:49] <_jpl_> From the top level directory: examples/bulletins
[20:02:06] Oh, in the source. I was looking in the wiki.
[20:02:55] _jpl_, I like being able to plug in DMs via a config file :)
[20:04:09] <_jpl_> So, for example, we've abstracted out commands that are common across any platform, like 'install a package', 'partition a disk', 'update DNS', etc. Underneath there are "tool" classes which know how to do those things for a specific platform (Linux and W2K now, Solaris soon, and anything could be plugged in anyway); the commands pick the "right tool for the job", do the work, log everything, and store the results back at the central database.
[20:04:36] The concepts behind PEAK have me excited the same way I was excited when I first learned component programming years ago. This takes component programming to the next level.
[20:05:57] <_jpl_> There's a job scheduling/processing system as well, so everything can be executed in a standard fashion with detailed logging; so you'll be able to see a detailed history of most anything that's happened to or been done on any host.
[20:06:54] <_jpl_> Likewise. PEAK's binding and config libraries are really fantastic, and bring an amazing level of flexibility right down to the class/object level that you just don't find anywhere else. (I haven't, anyway.)
[20:08:24] <_jpl_> I really love how in every class you simply ask for the things you need, allowing them to be created higher up the component tree in whatever way makes sense for a given environment.
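A sketch of that pattern, based on _jpl_'s description (binding.Obtain is quoted by him just below; IPartitionTool and DiskSetup are hypothetical names):

    import protocols
    from peak.api import binding

    class IPartitionTool(protocols.Interface):
        """Something that knows how to partition disks on this platform."""

    class DiskSetup(binding.Component):
        # "simply ask for the things you need": the tool is found somewhere
        # up the component tree, however the environment chose to provide it
        partitioner = binding.Obtain(IPartitionTool)

        def run(self):
            self.partitioner.partition('/dev/da0')   # hypothetical method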
[20:12:00] <_jpl_> Over the last few days I've learned how to provide components in interesting, dynamic ways; e.g. we use protocols (interfaces) for the various types of tool objects, like IFileManager, IPackageManager, IPartitionTool, etc. There may be several such classes in the system, one for each supported platform, each with an 'isCompatible' property which is true if they'll run on the current platform. I was able to write a dynamic configuration provider which r
[20:12:49] <_jpl_> To use it you just do a simple "foo = binding.Obtain(IPartitionTool)", for example, and get a usable instance object. No fuss, no muss.
[20:13:01] <_jpl_> Oops, I went over the line limit...
[20:13:27] <_jpl_> "...dynamic configuration provider which runs through the available classes for a given protocol and returns a new instance of the first appropriate class."
[20:14:33] <_jpl_> Maniac, I thought you weren't using DMs yet?
[20:18:16] _jpl_, which doesn't mean i don't like the idea :) (that concept was one of the things that led me to PEAK)
[20:20:25] <_jpl_> I'm not saying that, I was just surprised because you're my primary customer for the DM part of the database tutorial. :)
[20:21:18] I think I may be a customer, too.
[20:23:05] <_jpl_> For the moment, you can find the examples I learned from in examples/bulletins/src/bulletins/storage.py
[20:24:00] <_jpl_> I've gotta run. Hopefully will get some more writing done over the weekend.
[20:24:15] <_jpl_> See you around.
[20:24:17] see you
[20:24:36] ** _jpl_ has left us
[20:32:49] :)
[21:05:32] ** lex has joined us