[03:47:43] jack-e|away is now known as jack-e
[04:01:39] ** vlado_ has joined us
[04:02:12] good morning
[04:02:48] hey vlado :)
[04:02:56] did you see pje's answer in the logs ?
[04:03:25] [connected at Wed Nov 5 04:03:25 2003]
[04:03:25] <> *** Looking up your hostname...
[04:03:25] <> *** Checking ident
[04:03:25] <> *** Found your hostname
[04:03:56] <> *** No identd (auth) response
[04:03:56] <> *** Your host is zelazny.freenode.net[irc.oregonstate.edu/6667], running version dancer-ircd-1.0.32
[04:03:57] [I have joined #peak]
[04:03:57] ** zelazny.freenode.net set the topic to PEAK http://peak.telecommunity.com || WikiPages at http://peak.telecommunity.com/DevCenter || IRC-Logs: http://peak.telecommunity.com/irc-logs/
[04:05:36] since there is adapt() there is no need for introspection
[04:06:03] right :)
[04:06:39] it's really great.
[04:06:51] btw, you've ported z3's stateful workflow to peak ?
[04:07:08] yup, and already refactored it
[04:07:13] what's its status ?
[04:07:21] it is now nearly UML 1.3 specification compatible
[04:07:26] and supports composite states
[04:07:37] (no concurrent stuff yet .. not full-featured at all)
[04:07:51] but it works nicely and has entry/exit/transition actions
[04:08:20] look at cvs.net-labs.de -> libs/nllworkflow/src/nllworkflow/stateful/tests/test_instance.py
[04:09:03] i started to build a UML browser for testing my GUI framework, and now it's ending up as a generic peak-model browser due to extensive use of adaptation :)))
[04:09:23] text-field based, not graphical (using wxPython)
[04:09:59] i use the stateful engine for building the GUI controller
[04:13:07] your gui framework ?
[04:13:33] yup .. i'm working on 3 libraries currently (all based on peak)
[04:13:50] nll (database access, network, expressions, utils)
[04:14:07] nllgui (component model for wxPython GUI programming, workspaces)
[04:14:28] nllworkflow (stateful, and in the future activity-based workflow engines)
[04:14:36] all in our cvs.net-labs.de/libs
[04:23:35] dumb db question: if i have a collection of 300K items, how do i avoid loading them all into memory ?
[04:24:04] hmm .. what do you want to do with them ?
[04:24:15] iterate over, select out of, ...
[04:25:02] iterate over
[04:25:40] ask your sql-conn for a list of unique ids and:
[04:25:41] but using something like prevset/nextset
[04:26:18] the sql cursor has something like this
[04:26:34] but i think there is no support for iterating over the items of an EntityDM
[04:27:41] you could write a QueryDM that prefetches the sql data and caches it, then gives back a list of ids you can iterate over. then you ask your EntityDM for the element with oid=xx; it gets created, and the data is taken from the cache
[04:28:46] (it still fetches the data and holds it in memory)
[04:29:09] but the footprint is fairly small for RowStructs, i think
[04:34:16] if your conn object supports nextset, you can easily use it in the generator method of the QueryDM
[04:39:42] what about prevset ?
[04:40:12] i don't see support for that .. (look at peak/storage/SQL.py)
[04:40:48] i've already looked
[04:41:33] you probably need to talk to pje then .. i haven't used sql with that many results yet
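A minimal sketch of the QueryDM approach just described, in the spirit of the PEAK storage API of the time. The names `storage.QueryDM`, `binding.Obtain`, `preloadState`, the "ItemDM"/"sqlConn" configuration keys, and the `items` table are assumptions for illustration; treat the exact signatures as approximate rather than as PEAK's confirmed API.

```python
from peak.api import binding, storage

class ItemIDQuery(storage.QueryDM):
    """Prefetch row data, prime the item EntityDM's cache, return ids.

    Illustrative sketch only: the full 300K rows are fetched once as
    lightweight row structs, but no domain objects are created until
    the caller asks the EntityDM for a particular oid.
    """

    itemDM     = binding.Obtain("ItemDM")   # the EntityDM for items (hypothetical key)
    connection = binding.Obtain("sqlConn")  # an ISQLConnection (hypothetical key)

    def _load(self, oid, ob):
        # PEAK SQL connections are callable with a SQL string and return
        # an iterable cursor of row structs (attribute access assumed).
        cursor = self.connection("SELECT id, name, price FROM items ORDER BY id")
        ids = []
        for row in cursor:
            # Seed the EntityDM cache so itemDM[row.id] later uses the
            # prefetched state instead of issuing one query per item.
            self.itemDM.preloadState(row.id, row)
            ids.append(row.id)
        return ids
```

Iteration then looks roughly like `for oid in queryDM[someKey]: item = itemDM[oid]`, creating one domain object at a time from the cached state.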
[04:41:43] prevset requires support from the sql conn
[04:43:00] me too, but i think in the near future i will have to
[04:43:42] i think it's hard to model an o-r mapping that supports from 0 to an unlimited number of objects efficiently
[09:21:44] {global notice} Hi all, it appears that we had a problem with one of our main EU hubs. I'm currently looking into the situation, and will give further messages in wallops.
[10:29:25] ** Maniac_ has joined us
[10:37:01] ** Phatt_One_ has joined us
[10:37:08] ** Phatt_One_ has left us
[10:39:36] ** lex_ has joined us
[11:31:47] ** lex__ has joined us
[18:07:59] [connected at Wed Nov 5 18:07:59 2003]
[18:08:00] <> *** Looking up your hostname...
[18:08:00] <> *** Checking ident
[18:08:00] <> *** No identd (auth) response
[18:08:04] <> *** Found your hostname
[18:08:04] <> *** Your host is leguin.freenode.net[leguin.acc.umu.se/6667], running version dancer-ircd-1.0.32
[18:08:05] [I have joined #peak]
[18:08:05] ** leguin.freenode.net set the topic to PEAK http://peak.telecommunity.com || WikiPages at http://peak.telecommunity.com/DevCenter || IRC-Logs: http://peak.telecommunity.com/irc-logs/
[18:08:50] Not that I know of, why?
[18:09:03] * pje just routed the logbot here via ssh tunnelling to another box
[18:09:17] So I guess I need to bitch at my dedicated server's ISP.
[18:10:18] Unless, of course, the problem is that my server has been banned from freenode for some reason.
[18:13:27] * pje sighs, already anticipating the conversation with the ISP
[18:15:05] Well, since I missed the OO query question, I hope whoever had it will repeat it.
[18:15:37] For the benefit of anyone just joining us via the logs, we lost logs from about 11:30am US/Eastern up till 6:09pm US/Eastern today.
[18:18:13] <_jpl_> I sent some e-mail to you a couple of days ago, and you're usually one of the most prompt responders I've ever seen. :)
[18:18:24] <_jpl_> It's highly possible that the problem is on my end, though.
[18:18:43] Oh, you mean about the tracebacks? I believe that's the only one I haven't answered.
[18:18:58] <_jpl_> Ah, well, the OO query question was basically about what you get back from the queries. I.e. how do you map query results to domain objects?
[18:20:18] Ah. peak.query.algebra is strictly relational algebra and SQL generation.
[18:20:38] peak.query.conceptual (and perhaps peak.query.schema) will deal with higher-level issues.
[18:21:03] The ultimate goal is that you should never need to use peak.query.algebra directly.
[18:21:20] But it's an important base layer that lets the other layers deal in abstractions rather than SQL.
[18:21:52] So, there's a schema mapping layer that doesn't exist yet, and a conceptual->relational translator that's needed.
[18:22:45] The mapping layer will say things like "this feature (attribute) of this type is represented by a particular DV in this RV, if this other DV is equal to the primary key."
[18:23:15] The query translator will use that information to build up a relational query using peak.query.algebra.
[18:24:57] To answer the domain mapping question, though, I'd have to note that the schema translation layer would probably be used by a DM directly.
[18:25:23] IOW, if all you need is to retrieve a single object, the conceptual query layer isn't needed; the schema mapping contains all the RVs and DVs needed.
[18:25:53] The only question is whether you want to retrieve them all at once, or divide them into subsets, with some not loaded right away.
[18:26:14] For example, if you have some BLOB or CLOB fields in an object, you might prefer them to be lazily loaded.
[18:28:19] So, anyway, peak.query.algebra is like "platform-independent SQL" with an object-based API, rather than a string API.
[18:28:48] The SQL generation is just proof-of-concept at the moment, though, and doesn't support subqueries in the WHERE clause yet.
[18:28:53] * Maniac returns
[18:29:17] It's also not hooked up to the new DB-specific properties system yet.
[18:30:10] * pje looks at iBATIS a bit more
[18:30:28] * _jpl_ reads
[18:30:32] Looks like the basics of their sqlmap thing could be done using db.appConfig in PEAK.
[18:30:47] In fact, I've already done something similar on my Sybase->Oracle port project at work.
[18:31:32] I like the cache functionality, though.
[18:32:15] <_jpl_> What are RVs and DVs? I haven't had time to read all your notes yet, unfortunately.
[18:32:29] <_jpl_> Oh, before I forget, I wanted to run something past you.
[18:33:26] <_jpl_> I was thinking it should be pretty easy to add command-line-specified configuration to the peak script. So you could do something like "config-file -D myapp.foo.bar=10".
[18:33:56] <_jpl_> I was going to take a stab at it later today or tomorrow, but it seems pretty straightforward.
[18:34:06] RV = relation variable (a query), DV = domain variable (a column in a query)
[18:34:41] Hmmm. Yes. I guess it would be, in one sense.
[18:34:55] But how would you indicate whether the value is a string, number, or...?
[18:35:28] And it wouldn't work for 'config-file whatever', because then it would be the config file's job to parse it.
[18:35:46] For the 'peak' script to parse it, it'd have to come *before* the config file on the command line.
[18:36:45] <_jpl_> Right, so you're concerned that the config file would then override anything the peak script set up?
[18:38:09] Uh, no.
[18:38:30] Just that it's only useful if you supply the options to peak itself.
[18:38:58] If you run 'peak iniFile configfile whatever', the 'peak' script doesn't see 'whatever'.
[18:39:09] Heck, the 'peak' script doesn't even see the 'configfile' part!
[18:39:27] So, it'd have to be 'peak whatever iniFile configfile'...
[18:39:32] <_jpl_> Right, but I'm talking about adding the functionality in the peak script itself.
[18:39:35] Which means it's useless.
[18:40:06] I mean, if you put it in the #! line (which is where it'd have to go), then you might as well put the options right there in the file.
[18:40:10] <_jpl_> I.e. have the peak script parse sys.argv and set configuration on the configRoot before proceeding.
[18:40:28] It could do that. But how would you ever *use* it?
[18:40:46] You'd have to type 'peak -D whatever=whatever iniFile configfile args'
[18:41:21] I'm just saying it seems to make more sense to have some way to do this that the actual command object can use.
[18:41:40] If you're going to expect users to use it, I mean.
[18:42:02] If it's for debugging, well, fine... but I usually just hack up an extra .ini and put it on PEAK_CONFIG.
[18:42:02] <_jpl_> Why couldn't the command object just do a PropertyName lookup?
[18:42:34] You mean you're fine with making people type 'peak -D whatever=whatever iniFile myApp'?
[18:43:20] <_jpl_> No, I was imagining the "myApp -D whatever=whatever" scenario.
[18:43:36] Right. So, 'myApp' has to be the one to parse it.
[18:44:04] <_jpl_> The peak script couldn't parse and remove -D options from sys.argv first?
[18:44:11] * pje shudders
[18:44:14] Nope.
[18:44:27] Those aren't its arguments.
[18:44:41] That would make it impossible to write a command with its own -D interpretation.
[18:44:47] <_jpl_> But they're meant to be system-wide arguments.
[18:45:04] No.
[18:45:18] <_jpl_> Not really arguments, either. I'm talking about turning them into config properties.
[18:45:29] I understand that.
[18:46:01] But I'm not going to make it impossible (or even clumsy) for a command object to have its own meaning for -D (or any other parameter).
[18:46:19] Interpreters don't parse argv past their own arguments. Consider Python itself, or sh.
[18:46:57] Running 'python foo.py -c"something"' doesn't run "something" as a Python command unless foo.py interprets that argument.
[18:47:22] And in '/bin/sh -c python foo.py -c"something"', 'sh' only looks at the first -c.
[18:47:56] So, each peak command that's an interpreter is only allowed to read up to the first argument. Everything from there on is the argv of the next command.
[18:48:11] <_jpl_> I understand, but I do think there ought to be a simple way to specify system-wide properties from a command line.
[18:48:49] echo foo.bar=baz >/tmp/myprops; PEAK_CONFIG=/tmp/myprops myApp etc
[18:49:05] Or, mix a mixin into myApp that supports the -D.
[18:49:27] There's just no way to do this *globally* without also imposing an arbitrary restriction on all possible command objects.
[18:50:29] <_jpl_> I see that, certainly.
[18:50:57] Right, so I could still put it in 'peak', but then it would be useless except for debugging.
[18:51:24] So, I don't see a compelling case to add it at the moment.
[18:51:41] But you *can* use environment variables, you know.
[18:52:09] e.g. map a property to come from the environment.
[18:52:15] or else a default value.
[18:52:22] You can define a default rule thus:
[18:52:24] [foo]
[18:52:27] bar? = baz
[18:52:51] The ? is evaluated *after* any foo.* rule, if there is no foo.bar defined.
[18:52:57] So...
[18:52:58] [foo]
[18:53:16] bar? = # default value
[18:53:24] bar = # try to get from environment
[18:59:31] <_jpl_> Oh, nice, I didn't know about any of that.
[18:59:47] <_jpl_> And I don't really know how to use environment variables, either.
[19:03:34] Hmmm. Here's a thought...
[19:03:49] Suppose you make a 'withConfig' shell script that does this:
[19:04:04] <_jpl_> Environment variables would be fine, if I knew how to use them in PEAK. This really is just for debugging purposes.
[19:04:28] echo foo.bar=baz >/tmp/$$.props; PEAK_CONFIG=$PEAK_CONFIG:/tmp/$$.props $*
[19:04:45] Then you could run 'withConfig myApp' and go.
[19:05:17] Of course, you'd need more shell scripting to parse out the arguments so you could do 'withConfig blah myApp'.
[19:08:11] I suppose the other alternative is to make a PEAK command that can parse #! lines. :)
[19:08:44] So, you'd make 'withConfig' a PEAK command that pulls off the config args, then opens the specified file and reads the #! line to see what it should do with the file. :)
[19:09:26] <_jpl_> Hmm
[19:09:53] Or, I guess we could have 'peak -Dwhatever run myApp'
[19:10:12] Where 'run' would be something that tries to parse the #! line for myApp.
[19:11:24] Honestly, for me this sounds like too much work for debugging purposes. I'd just edit the .ini, or set up a custom .ini for my debugging sessions if I needed to change a lot of things.
[19:13:04] <_jpl_> Probably is. Executable config files are already about as convenient as it gets.
[19:14:16] <_jpl_> I've got to run. I'll just edit my config files for now. :)
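Pulling pje's fragments above together, a consolidated sketch of what such a config file might look like. PEAK .ini values are Python expressions; whether `__import__` is usable in the evaluation namespace is an assumption of this sketch, as are the `MYAPP_BAR` variable name and the 'baz' default:

```ini
[foo]
# Option 1: take the value from the environment, with an inline default.
# (Assumes __import__ is available when the expression is evaluated.)
bar = __import__('os').environ.get('MYAPP_BAR', 'baz')

# Option 2: a default rule. The '?' form is evaluated *after* any foo.*
# wildcard rule, and only if nothing else has defined foo.bar.
# bar? = 'baz'
```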
[19:15:14] * _jpl_ waves
[19:17:50] See ya
[19:31:41] darn
[19:31:47] anyone still here ?
[19:32:06] pje: i was the one with the relational comment
[19:32:16] just wanted to point out iBATIS; i think it has some neat ideas
[19:32:54] i think it shouldn't be ignored when considering how PEAK might allow o-r mapping
[19:33:43] and i agree, the caching is really neat. i assume you are talking about the caching-dependencies thingamabob :-)
[19:44:15] Yep.
[19:44:38] Thing is, though, I want to abstract out the SQL altogether.
[19:44:46] Hence, peak.query.algebra.
[19:45:15] The work I've been doing on a Sybase->Oracle port at my day job has convinced me I really don't want to write apps that use SQL directly any more. ;)
[19:52:59] pje, i don't mind writing sql directly, but i like o-r better :)
[19:55:38] Well, we'll get there. Maybe in just a few weeks, I'm hoping.
[19:55:58] At least, DB schema-independent OO queries that translate to SQL.
[19:56:00] it looks like ian bicking dropped out of the discussion, though
[19:56:15] Eh? What do you mean?
[19:56:26] Was he here on the channel? What's his nick?
[20:03:01] what happened?
[20:07:09] ** pje has joined us
[20:07:12] ** chrchr has joined us
[20:07:12] ** _jpl_ has joined us
[20:07:12] ** vlado has joined us
[20:07:12] ** jack-e has joined us
[20:07:12] ** Maniac has joined us
[20:07:21] Odd that I was the only person split off, though. :)
[20:07:57] Weird. The last thing I see in the log is that lex asked what happened...
[20:08:15] hey
[20:08:27] {global notice} Hi all, it appears we're still having problems with one of our main EU hubs. I'm still looking for the cause of the problem, and apologize for the trouble. I've rerouted around it for the time being.
[20:09:51] pje: i understand the portability issue you are facing, but i have a hunch that sometimes it is important to have a clean way to write SQL
[20:10:27] for example, when prototyping pieces of an app where you haven't had time to add lots of pretty indirection and have to kludge pieces together to get the demo to work
[20:10:46] * pje shrugs
[20:10:59] I've been combing through tons and tons of legacy SQL in the last few weeks.
[20:11:18] So far, I've encountered nothing that couldn't be generated by peak.query.algebra.
[20:11:44] Actually, peak.query.algebra is *more* expressive than SQL, since it reflects the full relational algebra.
[20:12:07] It's actually the case that you'll be able to express things in it that can't be translated to SQL, at least for some dialects of SQL.
[20:12:37] i used SQL and PL/SQL a great deal on my last project, and i realized that SQL/PL/SQL is really just another programming language, very good at its purpose...
[20:12:48] Python is Python, and SQL/PL/SQL is just that
[20:12:49] Yes, it is.
[20:13:02] what if someone asked you to port a Python app to a Java app?
[20:13:10] (i'm actually in this position right now)
[20:13:16] I'd use jythonc. ;)
[20:13:22] Or at least look into it. :)
[20:13:37] but the problem exists, is my point
[20:13:40] I agree. I'd start with Jython.
[20:13:50] Honestly, however, if it was somebody in my company, I'd ask them why they wanted to do that.
[20:14:20] *Start* with Jython. Have working software at every juncture. Once you have a Jython/Java (but still mostly Python) version, they'll probably say, "Good enough."
[20:14:21] Anyway, it's pretty obvious that peak.storage doesn't get in the way of using plain SQL.
[20:14:51] So, we're only talking here about having *additional* tools.
[20:15:31] But the *automated* O-R mapping tools won't deal in SQL as such; they'll use the relational algebra abstraction.
[20:15:43] pje: i think what you're doing is great, please don't get me wrong :-)
[20:15:58] i'm just looking at it from a different perspective
[20:16:00] And there'll probably be support at the SQL driver level for custom remapping of a relational structure to a replacement SQL structure, e.g. to provide optimizer hints.
[20:16:32] Optimizer hints are really the only thing that can't meaningfully be abstracted cross-platform, or even across queries! :(
[20:16:35] maybe instead of generating SQL/PL/SQL from something else, it would be possible to write straight SQL/PL/SQL and then write converters between other databases
[20:16:45] * pje shudders
[20:16:55] And which SQL dialect would you consider to be "the" SQL?
[20:17:02] well, how often do you have to port?
[20:17:24] Preferably never. ;)
[20:17:38] But there's more than DB independence at stake here...
[20:17:45] I'm also looking for *schema independence*.
[20:18:03] The same app that's slated for an Oracle port also needs data model changes rather badly.
[20:18:55] Anyway, who *wants* to start with SQL as the basis for a solution? SQL is a pretty sucky way to express a query.
[20:19:14] Don't get me wrong, I understand the need to map from arbitrary, legacy schemas.
[20:19:32] Which is why p.q.a can express anything that can be expressed relationally.
[20:20:03] but how is p.q.a more expressive than sql?
[20:20:36] i like SQL, and i would only change if the alternative was a real improvement and not just a 'different' thing :-)
[20:21:02] It's not more expressive than SQL in general, just more expressive than specific SQL dialects.
[20:21:02] how would you argue for p.q.a over sql?
[20:21:11] hmmm
[20:21:19] can you give me an example?
[20:21:30] * lex_ loves examples
[20:21:37] And it's more general, in that any operation definable over relation variables can be expressed.
[20:21:53] But translation to SQL depends on whether the operation has a way to be expressed in that dialect.
[20:22:08] i'm curious, how is it portable? don't you still have to write the sql generation plugin for each db?
[20:22:08] For example, Sybase doesn't support SELECT ... FROM (SELECT ...)
[20:22:25] right
[20:22:41] Yes, but most of the operations are very fine-grained, and the current skeletal SQL generation covers pretty much any ANSI SQL at this point.
[20:22:56] For functions, we use the new driver-specific thunk maps.
[20:23:13] That is, you can use a peak.config property space to define the implementation of a set of standardized functions, across drivers.
[20:23:22] E.g. Sybase SUBSTRING vs. Oracle SUBSTR.
[20:23:33] Akin to JDBC driver escapes.
[20:23:55] As for examples... take a look at peak.query.tests.test_algebra
[20:24:11] And look at the SQL generation tests.
[20:25:12] pqa isn't really that heavy an abstraction above SQL, you'll notice.
[20:25:40] You could say that the "deep structure" of pqa and SQL are the same, in linguistic terms. The differences are in "surface structure".
[20:26:45] e.g. Employee(where=Employee['empnr'].eq(42),keep=['empname'])
[20:26:53] vs. SELECT E1.empname FROM Employee AS E1 WHERE E1.empnr=42
[20:27:22] i would say the latter is more readable :-)
[20:27:26] The former is easier to generate dynamically in code, compared to the string manipulations required to generate SQL.
[20:28:06] If I have the SQL, how do I then add criteria to it, or join it to something else?
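pje's answer follows; to make the contrast concrete first, here is a toy Python class in the spirit of peak.query.algebra. It is invented for illustration (only the `Employee(where=..., keep=...)` call above is from the chat) and is not PEAK's actual API, but it shows why expression objects compose where SQL strings need re-parsing:

```python
class Query:
    """Toy relational expression: compose first, render to SQL last."""

    def __init__(self, table, where=None, keep=None, joins=()):
        self.table, self.where = table, where
        self.keep, self.joins = keep or ['*'], tuple(joins)

    def __call__(self, where=None, keep=None, join=()):
        # Deriving a new query is object construction, not string surgery.
        new_where = self.where
        if where:
            new_where = f"({self.where}) AND ({where})" if self.where else where
        return Query(self.table, new_where, keep or self.keep,
                     self.joins + tuple(join))

    def sql(self):
        # "Surface structure": one dialect's rendering, produced at the end.
        # Joins are rendered naively (no ON clause); enough for illustration.
        frm = ', '.join([self.table] + [j.table for j in self.joins])
        q = f"SELECT {', '.join(self.keep)} FROM {frm}"
        return q + (f" WHERE {self.where}" if self.where else "")

employees = Query('Employee')
q  = employees(where="empnr=42", keep=['empname'])   # like the chat's example
q2 = q(where="dept='R&D'", join=[Query('Dept')])     # add criteria and a join
print(q2.sql())
# SELECT empname FROM Employee, Dept WHERE (empnr=42) AND (dept='R&D')
```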
[20:28:28] By contrast, the pqa expression can be called with join=[otherTable], where=criteria, to change it.
[20:28:54] Whereas, to do that with raw SQL, you have to parse it back to get the "deep structure", and then unparse it to regenerate it.
[20:29:02] right
[20:29:13] i think that would be a great solution, though :-)
[20:29:17] It's the difference between manipulating strings and manipulating ASTs when programming.
[20:30:07] i would even be open to providing a simple syntax for generating Employee(where=Employee['empnr'].eq(42),keep=['empname']), to then generate the sql
[20:30:08] Hm. You must use exec and eval a lot in Python. :)
[20:30:15] i do not
[20:30:53] I thought you were saying that unparsing SQL was a good solution.
[20:31:15] have p.q.a parse SQL specifically written for it and generate your object graph, then manipulate it and generate the db-specific sql
[20:31:24] Anyway, peak.query.conceptual will deal with a different kind of AST, based on a logical object model, that then translates to the pqa level.
[20:31:33] Have you seen the ConQuer papers?
[20:31:39] no
[20:32:17] http://www.orm.net/queries.html
[20:32:48] I'm curious whether there's a simple lexical mapping of their work that would then be able to generate the pqc objects.
[20:33:14] PQC's "deep structure" will be equivalent to ConQuer, but as with PQA there's no parser.
[20:33:46] if it's pluggable, someone could potentially write an SQL adapter
[20:33:49] I'd be open to proposals for a syntax that could be used to express PQC-level queries with less Python syntax overhead.
[20:34:02] so that SQL could be loaded into pqa
[20:34:14] why stick to one?
[20:34:22] * pje laughs
[20:34:27] the link you gave me seems like one alternative
[20:34:30] Well, I don't need more than one, personally.
[20:34:46] but the possibility is there
[20:35:03] it would be interesting to be able to: sql -> pqc -> sql -> pqc -> sql
[20:35:51] SQL contains more information than PQC or PQA. However, it's mostly useless information.
[20:36:08] For example, table aliases are part of SQL, but they do not contribute in any way to the output of the query.
[20:36:22] So, going from PQA->SQL, the driver assigns arbitrary aliases.
[20:36:58] i rarely use aliases
[20:37:05] So, you could generate PQA from SQL, but you would be doing extra work (informationally speaking) by writing the SQL.
[20:37:25] but you could harness all of the SQL that already exists
[20:37:26] You must not have lots of tables with columns that have the same name. :)
[20:37:33] no, i do not :-)
[20:37:40] * pje does.
[20:37:47] ok
[20:38:13] You must also not reuse the same table as many as three or four times in a query. :)
[20:38:15] * pje does
[20:38:26] I told you I deal in legacy data models. ;)
[20:38:56] i'm assuming the above is a view?
[20:38:58] Anyway, most of the crud SQL I deal in is dynamically generated, so it does me no good to parse it.
[20:39:14] What, the table reuse? No, it's an object hierarchy map.
[20:39:31] i mean the legacy sql that joins on the same table thrice
[20:39:51] Yeah, it's a hierarchy table.
[20:40:09] oh, ok
[20:40:14] nested set?
[20:40:25] Transitivity table.
[20:40:31] what is that?
[20:40:35] Remember, Sybase doesn't do recursion.
[20:40:50] i've never used Sybase
[20:40:56] It's a table with rows like 'a b 1', 'b c 1', 'a c 2', etc. to represent hierarchies.
[20:40:58] i've only used MS SQL and PostgreSQL
[20:41:13] MS SQL is pretty much Sybase with some bells and whistles.
[20:41:29] parent-child-depth items.
[20:42:04] Anyway, if you have a relationship between items in the hierarchies, and the relationship is inherited/acquired on each side, you join hierarchy-relationship-hierarchy.
[20:42:20] So the same hierarchy table is in there twice. At least. :)
[20:42:21] my last project was a project management system/budget app (Zope based), and i had a project component hierarchy with an arbitrary number of levels
[20:42:28] i was using PostgreSQL
[20:42:38] * pje nods
[20:42:41] and i wrote a PL/SQL function to traverse the table
[20:43:03] i can send you some of it if you want to see it (you might find it useful in the future)
[20:43:10] You can see, though, why I don't care for SQL. From my POV, it's lots of noise obscuring the real data, which is a lot simpler than its physical representation.
[20:43:58] And my existing apps generate the SQL dynamically anyway, because I have to translate from relatively simple-sounding user queries into nightmarish SQL.
[20:44:21] But they mostly use string manipulation to generate the SQL, and that sucks.
[20:44:36] Thus, object-level relational abstraction.
[20:44:38] i would do it in the views
[20:44:48] * pje laughs
[20:45:03] You wouldn't say that if you'd seen my data model.
[20:45:10] well, do you agree that most applications just move data from here to there?
[20:45:19] * pje shrugs
[20:45:36] databases are good at preparing data to be moved
[20:45:39] Well, if that's all you're doing, the existing PEAK tools are fine. If you don't care about db or schema abstraction, what's there is fine.
[20:46:03] and they are good at accepting data to be saved (an INSERT rule, for example)
[20:46:06] But if you perform complex summaries on complex queries that need to be fast and ad hoc...
[20:46:27] I've got an app that's used as both a data warehouse AND OLTP.
[20:46:37] OLTP?
[20:47:02] Online Transaction Processing. It's a DB geeks' term.
[20:47:18] An OLTP schema is typically structured for high normalization.
[20:47:27] Vs. a warehouse schema, which is denormalized.
[20:47:36] OLTP = fast write, warehouse = fast read
[20:47:56] * lex_ ponders
[20:48:22] So, this schema I've been talking about is OLTP to the nth degree... virtually impossible to write ad hoc queries against for data extraction.
[20:48:37] Really, this should properly have a separate data warehouse with a different schema.
[20:48:50] But then we'd have to port all the bloody SQL and rewrite it for that new data model!
[20:49:03] Thus, to me, "SQL solution" is an oxymoron. :)
[20:49:24] A query language that is dependent on the physical structure of the data is useless as an abstraction.
[20:50:13] And that's what SQL is, when used on any sufficiently large database. SQL is only abstractable when 1) it's simple enough for your optimizer and/or 2) you have a small enough DB or a fast enough box.
[20:50:44] Any sufficiently large db/app will have SQL that can't be abstracted into views without losing optimizer-critical info.
[20:51:20] Especially when that info consists of optimizer hints for the specific query that have to go against the view part.
[20:51:35] Can you tell I've been struggling with this for *5 years*?
[20:52:09] So, I want physically-independent queries, but with full control over the generated SQL to add hints.
[20:53:20] I've already been looking at ways to generate SQL for e.g. Oracle to work around its IN () quirks.
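To make the transitivity-table idea concrete, here is a hedged SQL sketch of such a schema and the hierarchy-relationship-hierarchy join pje describes. All table and column names are invented; the actual schema under discussion is proprietary and never shown:

```sql
-- Hypothetical transitivity (closure) table: one row per ancestor/descendant
-- pair at every depth, so hierarchy lookups need no recursion (which the
-- Sybase of that era lacked).
CREATE TABLE hierarchy (
    ancestor   VARCHAR(32) NOT NULL,
    descendant VARCHAR(32) NOT NULL,
    depth      INT         NOT NULL   -- 1 = parent/child, 2 = grandparent, ...
);
-- e.g. rows: ('a','b',1), ('b','c',1), ('a','c',2)

-- A relationship between two nodes, inherited by everything below each side:
-- the hierarchy table appears twice in the join (h1, h2). At least. :)
SELECT h1.descendant AS left_item,
       h2.descendant AS right_item
  FROM hierarchy    h1,
       relationship r,              -- hypothetical table linking two nodes
       hierarchy    h2
 WHERE h1.ancestor = r.left_node
   AND h2.ancestor = r.right_node;
```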
[20:53:22] you can build views on top of each other, though
[20:53:39] which lets you encapsulate different pieces
[20:54:05] so i don't see your point that SQL becomes unwieldy with large databases
[20:54:27] Do you think I wouldn't have done that if it worked?
[20:56:34] Believe me, if the schema weren't proprietary, I'd be pointing you at it and some queries and telling you to show me the money. :)
[20:56:56] i have a hunch that if you generate a single query that does what ten views together would produce, and then there is some problem when you execute it, debugging would be a nightmare, no?
[20:57:19] Uh, no.
[20:57:35] but that is what you're suggesting, right?
[20:57:37] Because I can look at that one piece of SQL and get the DB to show me its query plan.
[20:58:19] you're gonna sift through a several-hundred-line query plan?
[20:58:42] It's going to be just as long for the view, or else I'm not going to be able to debug that either.
[20:58:57] but you can execute the views separately
[20:59:11] i guess you would argue that you could do the same with pqc :-)
[20:59:16] * pje sighs
[20:59:44] Look, I've had this argument before, with professional database people.
[20:59:58] I hand them the schema and some queries, and they all shut the hell up in five seconds flat.
[21:00:30] And these are the people who work for the companies that make the databases.
[21:00:51] one sec, brb
[21:01:25] * pje has got to get home... it's 9pm.
[21:02:00] Anyway, please don't take this the wrong way, but... you're not going to convince me that views even come close to a solution at the application level.
[21:02:12] And they absolutely don't at the user level.
[21:02:34] To provide a UI model for queries, we need something that can do ConQuer-like query structures.
[21:02:52] And we can either hardcode it, or do conceptual query schema mapping.
[21:03:06] If we do the latter, we also get O-R mapping almost for free.
[21:03:30] And we get physical schema independence, which makes it more likely that I can migrate OFF of the darn schema.
[21:03:54] i wasn't saying not to do that, i was just suggesting that SQL could be one of the inputs
[21:04:13] Yep, certainly, if somebody contributes the parser.
[21:04:24] :-)
[21:04:32] I'm just saying I don't personally see any value in it, because most of the SQL in my apps is dynamically generated.
[21:04:48] And I don't want to keep the current string-based generation, hence p.q.a.
[21:05:05] But p.q.a is still schema-specific, hence p.q.c.
[21:05:40] The value of ConQuer is that their research proves to my satisfaction that there is a model by which a complete O-R mapping can be specified.
[21:05:58] (In terms of join paths and functions, effectively)
[21:06:29] And I've got a good grasp of how to generate a p.q.a query from such a join-path graph, created from a ConQuer-like node structure.
[21:06:56] From where I and my team are sitting, that looks like the holy grail.
[21:07:16] We dream of being able to express ConQuer-like queries over this monstrosity.
[21:07:28] Without having to manually translate them, as we have to now.
[21:07:55] If you have a (comparatively) trivial DB model, I can see where ConQuer might not look like a big deal compared to SQL.
[21:08:03] Anyway, I've gotta run.
[21:08:09] alright, cya
[21:08:16] thanks for the conversation
[21:08:21] i'll have a lot to think about :-)
[21:08:26] No problem. Hope I wasn't too harsh on you. :)
[21:08:27] anything i can do to help?
[21:08:34] no, no :-)
[21:08:52] Well, I'll be posting soon on some open issues in the schema mapping mechanism.
[21:09:02] last year i did a presentation on your TransWarp at the company retreat; everybody thought it was the coolest thing, so i've been following your work for a while :-)
[21:09:13] i'm hoping PEAK will be a culmination of all your research and efforts
[21:09:26] Mainly they are 1) alternate keys and 2) multi-column keys
[21:09:35] i would be interested in helping you write documentation
[21:09:46] That would be great. The wiki needs work.
[21:09:48] i think that's the most important thing for getting people to use it
[21:10:14] I'm not worried about popularity right now, just functionality. Too much popularity would kill PEAK right now.
[21:10:21] (Due to lack of docs and function)
[21:10:42] Plus, with too many people using peak.storage, it'll be hard for me to upgrade it with these new O-R features. :)
[21:10:56] alright
[21:11:06] i was just saying more docs are always good :-)
[21:11:11] Yes.
[21:11:28] So, please feel free to help. :)
[21:11:40] Docs, ideas, and code are all good things to contribute.
[21:12:23] So, thanks, and see you later.
[21:12:35] buhbye, good luck
[21:12:45] Adios
[21:12:47] ** pje has left us