[PEAK] Towards a GUI facility for PEAK

Phillip J. Eby pje at telecommunity.com
Wed Dec 8 09:53:12 EST 2004


At 12:31 PM 12/8/04 +0200, Niki Spahiev wrote:
>the current incarnation of the abstract view (TransferContext) can produce 
>UI screens based on a universal resource format (TransferContext.Attach). 
>We found that this works great for prototyping, but for end users a 
>toolkit-specific design is required. The Get/Set flag is compatible with 
>the C++ version; for the next Python version we plan something like the 
>configure() found in Tk and WCK. (IMHO WCK is a great example; I started a 
>wxPython port of it, and WCK looks like a very universal approach.)

I'm not quite sure I follow; it sounds like what you're saying is that 
TransferContext is a generic view, not a generic model, right?  I was 
initially confused because I thought it was the model side (which my posts 
focused on much more than the view side).


>This interface is meant to be compatible with CGI, but with a bit of 
>JavaScript there can be Enter and Leave notifications too. All view 
>parameters support the TransferContext interface.
>
>We have a dummy TransferContext implementation for unit testing. Menus and 
>the Action interface are not well integrated, but they follow your guidelines.
>
>So your speculations look like mind reading to me :^)

Well, if I understand correctly, you've got something quite different.  In 
your system the model actually knows about control types, for example, 
whereas in what I'm speculating about, it's none of the interaction model's 
business what its UI looks like.  That is, views wrap models, and *all* of 
the look-and-feel lives in the view layer.  The interaction model is really 
just a set of abstractions for reading and writing the domain model.

Essentially, my idea is that you define these interaction model objects and 
put them into a kind of command registry.  You also register views on them, 
which can be implemented with a GUI toolkit's native resource format, or 
perhaps with a toolkit-neutral format for prototyping.  But the point is 
that, given an interaction model object, you should be able to look up a 
view for it, and the interaction model knows nothing about that lookup.
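To make the layering concrete, here's a minimal sketch of that lookup in Python.  All the names here (ViewRegistry, AppModel, TopLevelWindowView) are hypothetical illustrations of the idea, not actual PEAK APIs:

```python
class ViewRegistry:
    """Maps interaction-model types to view factories.

    The model classes never import this, or any GUI code; the
    registry lives entirely on the view side of the boundary.
    """

    def __init__(self):
        self._views = {}

    def register(self, model_type, view_factory):
        self._views[model_type] = view_factory

    def view_for(self, model):
        # Walk the MRO so a view registered for a base class also
        # applies to its subclasses.
        for cls in type(model).__mro__:
            if cls in self._views:
                return self._views[cls](model)
        raise LookupError("no view registered for %r" % type(model))


class AppModel:
    """An interaction model: knows nothing about GUI toolkits."""
    title = "My Application"


class TopLevelWindowView:
    """A view: wraps a model and owns all look-and-feel decisions."""
    def __init__(self, model):
        self.model = model


registry = ViewRegistry()
registry.register(AppModel, TopLevelWindowView)
view = registry.view_for(AppModel())
```

The key property is the direction of the dependency: the view factory knows about the model type, but nothing in AppModel refers to views at all.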

As a simple example, consider the startup of a GUI application.  In my 
concept, you register the interaction model for the application as a whole 
in some way, e.g. by specifying it in a config file or on the command line; 
then you just run the "GUI application engine" and hand it that application 
model object.

So, the "engine" looks up the view for that object and sees that it uses, 
say, the "generic top-level window" view.  It starts that view up, and the 
view says, "Okay, let me look in the command registry to see what commands 
should be available on my menus and toolbars."  It then puts the commands 
on the menus, based on how they were registered as part of the metadata for 
the application-level interaction object.
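That startup sequence could be sketched roughly like this.  Again, nothing here is a real PEAK API; the names (Command, register_command, GenericTopLevelWindow, run_engine) are invented for illustration:

```python
class Command:
    """Command metadata registered alongside an interaction model."""
    def __init__(self, name, menu, action):
        self.name = name
        self.menu = menu      # which menu this command belongs on
        self.action = action  # callable against the domain model


# Hypothetical command registry: model type -> list of Command
COMMAND_REGISTRY = {}

def register_command(model_type, command):
    COMMAND_REGISTRY.setdefault(model_type, []).append(command)


class AppModel:
    """The application-level interaction model; knows nothing of menus."""


class GenericTopLevelWindow:
    """The 'generic top-level window' view: builds its menus from the
    registry, based purely on registered command metadata."""
    def __init__(self, model):
        self.model = model
        self.menus = {}
        for cmd in COMMAND_REGISTRY.get(type(model), []):
            self.menus.setdefault(cmd.menu, []).append(cmd)


def run_engine(model, view_lookup):
    # The engine's whole job: find the view for the model and start it.
    return view_lookup(model)


register_command(AppModel, Command("Open", "File", lambda m: "opened"))
register_command(AppModel, Command("Quit", "File", lambda m: "quit"))

window = run_engine(AppModel(), GenericTopLevelWindow)
```

Note that AppModel stays empty: the menus exist only in the view layer, assembled from metadata the model never sees.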

So now you have a GUI shell, and somebody clicks on something, triggering a 
command object.  The GUI processes that command by looking up views on it 
for, say, a "pre-run view", a "progress dialog", and a "post-run view".  Or 
maybe, for performance's sake, it just calls the command and inspects the 
return value to see whether it needs to display any views at all, since the 
command might be some kind of editing keystroke that must be processed 
quickly.  But anyway, the point is that the GUI is in charge of all GUI 
functions, and the interaction model knows absolutely nothing about them. 
The interaction model would provide peak.events event sources for changes 
made to the domain model, and offer facilities for changing the domain 
model, but it would probably never hold a reference to a GUI window or 
anything like that, even through an interface as simple as your 
TransferContext.
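The dispatch idea above might look something like this sketch, where the return value lets cheap editing commands skip the view machinery entirely.  The names and the NEEDS_VIEWS convention are hypothetical, not anything from PEAK:

```python
# Hypothetical sentinel a command returns when it wants dialogs shown.
NEEDS_VIEWS = "needs-views"

def keystroke_command():
    # A fast editing command: no dialogs required.
    return None

def long_running_command():
    # A heavyweight command: asks the GUI layer to show its dialogs.
    return NEEDS_VIEWS

def dispatch(command, view_lookup):
    """GUI-side dispatcher: the command knows nothing about windows.

    Calls the command first, then consults the view registry only if
    the return value says views are needed.
    """
    result = command()
    shown = []
    if result == NEEDS_VIEWS:
        for kind in ("pre-run view", "progress dialog", "post-run view"):
            view = view_lookup(command, kind)
            if view is not None:
                shown.append(kind)
    return shown

def demo_view_lookup(command, kind):
    # Pretend only a progress dialog is registered for this command.
    return object() if kind == "progress dialog" else None

fast = dispatch(keystroke_command, demo_view_lookup)     # -> []
slow = dispatch(long_running_command, demo_view_lookup)  # -> ["progress dialog"]
```

The dispatcher and the view lookup live wholly on the GUI side; the commands themselves stay ignorant of windows, which is the whole point of the layering.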

Now, whether that would actually work in practice, I don't know.  But it 
seems like a layer that can be defined and tested independently of 
everything except the domain model, which makes it quite useful.  Of 
course, the more dynamic the GUI, the more code has to exist at the view 
level, but that code will at least be mapping the interaction model to its 
visual rendition, rather than dealing with the domain model directly.  This 
should allow greater separation of roles in team development, since it 
would let a toolkit specialist focus on achieving the desired visual or 
interactive effects without learning the domain model.  Conversely, it 
would spare model developers from having to know about GUI programming, and 
perhaps prevent them from dabbling in it if their skills are not up to 
par.  :)
