[PEAK] Towards a GUI facility for PEAK

Phillip J. Eby pje at telecommunity.com
Sun Dec 5 00:48:31 EST 2004


I've been thinking a lot lately about GUI applications, such as might be 
done with wxPython.  Mainly, I'm thinking about them from a non-functional 
requirements perspective: the "ilities" that PEAK usually focuses on, like 
flexibility and testability.

I had been thinking quite some time back about creating editor plugins for 
Eclipse to support PEAK development, but after looking at the relatively 
modest size (4470 lines) of the PyCrust graphical shell, I have to say I'm 
beginning to think the grass is actually greener in Python-land where this 
sort of thing is concerned.

I'm not a GUI maven, though, so I hate to tie PEAK to a specific library 
like wxPython.  And I'd hate even more to be trying to come up with my own 
cross-platform GUI system, so that's definitely out.  But it occurred to me 
today that most of the really "interesting" bits of an extensible GUI 
framework have nothing to do with the graphics platform.

For example, Eclipse is organized around "extension points" to which one 
"contributes extensions", using a mechanism not unlike PEAK's property 
plugins facility (where you can scan for all keys under a property 
namespace, and then retrieve the associated values).  Its extension points 
cover a variety of areas, including:

   * Menus
   * Editors for a given file type
   * Popup context menus ("right-click menus") for objects of a given type
   * Pages to appear in preferences or properties dialogs

and many dozens of others.  These extension points and extensions are 
defined by "plugins", which are basically directories with a manifest file 
that lists the extension points offered by the plugin, and the extensions 
that the plugin wants to contribute to other plugins' extension points.  The 
manifest also notes any import dependencies between that plugin and other 
plugins.
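
To make the scan-then-retrieve pattern concrete, here's a rough sketch in 
plain Python.  To be clear, this is not PEAK's actual property-plugins API; 
the class, method, and extension-point names are invented purely for 
illustration:

    class ExtensionRegistry:
        """Map dotted extension-point names to lists of contributions."""

        def __init__(self):
            self._points = {}

        def contribute(self, point, extension):
            """Register 'extension' under the extension point 'point'."""
            self._points.setdefault(point, []).append(extension)

        def iter_extensions(self, point):
            """Yield everything contributed to 'point', in registration order."""
            return iter(self._points.get(point, ()))

    registry = ExtensionRegistry()

    # A plugin contributing a menu item and an editor for a file type:
    registry.contribute("peak.ui.menus.file",
                        {"label": "Open...", "action": "file.open"})
    registry.contribute("peak.ui.editors.python",
                        {"factory": "myplugin.PythonEditor"})

    for item in registry.iter_extensions("peak.ui.menus.file"):
        print(item["label"])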

Apart from the necessary import path munging, an arrangement like this 
would be fairly simple to support in PEAK as it stands today.  Simple 
'plugin.ini' files would replace Eclipse's 'plugin.xml' files, for 
example.  Our new multi-dispatch metadata registration and combination 
facilities would handle most kinds of registration that would be needed.
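
Such a 'plugin.ini' might look roughly like this; the section and key names 
below are sheer invention, since no such format actually exists yet:

    ; Hypothetical plugin.ini -- every name here is made up for illustration.
    [plugin]
    name = my.editor.plugin
    requires = peak.ui.core

    ; Extension points this plugin offers to others:
    [offers]
    my.editor.plugin.toolbars = Toolbar contributions for this editor

    ; Extensions contributed to another plugin's extension point:
    [extends peak.ui.menus.file]
    open = my.editor.plugin.actions.OpenAction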

Interestingly, tests could be performed against such plugins without 
actually instantiating or even importing a GUI, so long as appropriate 
abstractions are chosen for modelling the domain objects that will be 
represented in GUI views.  For example, it's sufficient to have a model for 
menu items to test that a plugin has registered appropriate items for a 
menu; it isn't necessary to actually display them when unit 
testing.  (Acceptance testing is of course a different issue.)
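
A unit test along these lines never touches a toolkit at all; the registry 
here is just a dict, standing in for whatever manifest loading would really 
populate it:

    import unittest

    def load_plugin_contributions():
        """Stand-in for real manifest loading; returns a plain registry."""
        return {"peak.ui.menus.file":
                [{"label": "Open...", "action": "file.open"}]}

    class MenuContributionTest(unittest.TestCase):
        def test_plugin_registers_open_item(self):
            registry = load_plugin_contributions()
            labels = [item["label"]
                      for item in registry["peak.ui.menus.file"]]
            self.assertIn("Open...", labels)

    if __name__ == "__main__":
        unittest.main()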

Of course, many UI actions will need to result in the display of dialogs or 
perform operations on GUI components.  I think, however, that most of these 
dependencies can again be handled by appropriate abstractions.  By requiring 
actions against framework-supplied components to depend only on very 
narrow interfaces that do not expose the underlying GUI toolkit, it should
be straightforward to mock them when necessary.  All of the visual (and 
some of the behavioral) characteristics of GUI components provided by a 
plugin would be mapped via some sort of resource-lookup, much like in 
peak.web.  In other words, a plugin might have a class to define the 
behaviors of a dialog, but there would be a separate mechanism to map that 
class to toolkit-specific resource information (such as an .xrc) to 
instantiate the dialog itself.
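
As an illustration of the idea (again, every name below is hypothetical): 
the behavior class sees only a deliberately narrow interface, and a 
separate table maps it to a toolkit-specific resource:

    from abc import ABC, abstractmethod

    class IDialogView(ABC):
        """The only surface a behavior class may touch; no toolkit types leak."""

        @abstractmethod
        def show(self):
            """Make the dialog visible (or pretend to, in a mock)."""

        @abstractmethod
        def get_field(self, name):
            """Return the value of a named input field."""

    class RenameDialogBehavior:
        """Toolkit-independent behavior; unit-testable with a mocked view."""

        def __init__(self, view):
            self.view = view

        def run(self):
            self.view.show()
            return self.view.get_field("new_name")

    # The separate mechanism: behavior class -> toolkit resource (e.g. .xrc).
    DIALOG_RESOURCES = {
        RenameDialogBehavior: "resources/wx/rename_dialog.xrc",
    }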

Thus, one could legitimately request that the peak.ui framework "pop up an
'xyz.abc' dialog for this object", and have it select a resource to go with 
the current library/platform/etc.  Initially, we would support only two 
"platforms", one of which would be a testing mode where there is no actual 
visual rendering take place.  It's also unlikely that anybody would ever 
write plugins that implemented views for two different GUI toolkits; the 
idea is more to factor out any toolkit dependencies from this hypothetical 
peak.ui framework, so that people using different toolkits could build 
their own apps using whatever toolkit they prefer.  Plugins for a given 
app would then have to provide implementations for the toolkit that the 
application uses.
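
In sketch form (hypothetical names throughout), the lookup might key on the 
dialog name and the current platform, with "headless" as one of the two 
initial platforms:

    VIEW_REGISTRY = {}  # (dialog_name, platform) -> view factory

    def register_view(dialog_name, platform, factory):
        VIEW_REGISTRY[(dialog_name, platform)] = factory

    def pop_up(dialog_name, obj, platform="headless"):
        """Select and invoke the view registered for this name/platform."""
        return VIEW_REGISTRY[(dialog_name, platform)](obj)

    # The testing "platform" just records what would have been displayed:
    shown = []
    register_view("xyz.abc", "headless",
                  lambda obj: shown.append(("xyz.abc", obj)))

    pop_up("xyz.abc", object())
    assert len(shown) == 1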

Of course, this doesn't stop one from potentially doing something like 
creating cross-toolkit resource formats, or creating adapters to convert 
one toolkit's resources into another's.  But all of that is outside the scope
of the basic peak.ui system, except insofar as an appropriate choice of 
dispatch mechanisms may make those things easier at a later point.

So, in short, we could build a set of *top-down* abstractions for a 
modularly-composable application user interface.  That
is, an application is composed of separately-registered features, such as 
context menu items, top-level menu items, toolbar buttons, accelerator 
keys, editors, and so on.

These features, however, would not derive from or depend on any actual GUI 
toolkit; they would simply be "domain model" objects within the domain of 
user interaction, and would have "views" applied to them at runtime to 
create the real GUI (if in fact there is one).
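
For example (purely illustrative names again), a menu item would be an 
ordinary object holding a label, an action callable, and so on, and a 
"view" is just something applied to it later:

    class MenuItem:
        """A 'domain model' object of user interaction; no toolkit involved."""

        def __init__(self, label, action, accelerator=None):
            self.label = label
            self.action = action        # a plain callable, not a widget handler
            self.accelerator = accelerator

    def headless_menu_view(item):
        """A 'view' for the testing platform: describe, don't render."""
        return "<menuitem %r accel=%r>" % (item.label, item.accelerator)

    save = MenuItem("Save", action=lambda: None, accelerator="Ctrl+S")
    print(headless_menu_view(save))   # <menuitem 'Save' accel='Ctrl+S'>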

Anyway, the new idea here -- at least, it's new to me -- is building such a 
framework from the top down, starting with models rather than views.  That 
is, I've always assumed that you had to start with a specific GUI toolkit, 
and then build up advanced features from that.  Now, I realize you can do 
it the opposite way...  start with the advanced features, but no actual UI 
implementation.  In fact, testing such a "headless" implementation is 
easier.  Then, one can build out the GUI form of that abstraction for a 
specific toolkit, and make it work.  And then you can do it for another 
toolkit, if you like, or not.

I think perhaps we could call these abstract interaction objects the 
"interaction model" for an application.  They do not represent the domain 
model objects, but rather the actions that the user can take, and the 
affordances (such as windows, menus, buttons, and the like) provided to the 
user for presenting the model or taking actions on it.  The actual GUI is 
then implemented by selecting views of the interaction model, in the same 
manner that peak.web looks up views by object type, view name, and other 
conditions.

One interesting question is how far down these abstractions go, in terms of 
modelling an interaction.  For example, do we just operate on a level of 
"display the such-and-such dialog", or do we get into what pages a "wizard" 
has and such?  I suspect that the answers will be determined by detailed 
application requirements "on the ground" when we get that far.

For example, I suspect we'll find that some abstractions will actually need 
to get into things like size and placement of a window, so that they can be 
saved as part of an application's preferences or persistent state.  Others 
won't really care.  On the other hand, it might suffice for abstractions 
that need such information to persist to simply provide an interface for 
saving it.  In other words, the GUI view
would call down to the interaction model for that GUI item to save any 
preferences that the GUI view needed to save, if the interaction model 
allows it.
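
That handshake might look something like this (the interface and method 
names are, as usual, pure invention):

    from abc import ABC, abstractmethod

    class IPersistsGeometry(ABC):
        """Opt-in interface for interaction models that keep window geometry."""

        @abstractmethod
        def save_geometry(self, width, height, x, y):
            """Record the window's size and placement for later restoration."""

    class EditorWindowModel(IPersistsGeometry):
        def __init__(self):
            self.geometry = None

        def save_geometry(self, width, height, x, y):
            self.geometry = (width, height, x, y)

    def on_close(model, widget_geometry):
        """Called by the toolkit-specific view when its window closes."""
        if isinstance(model, IPersistsGeometry):
            model.save_geometry(*widget_geometry)

    model = EditorWindowModel()
    on_close(model, (800, 600, 100, 100))
    assert model.geometry == (800, 600, 100, 100)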

Anyway, this is all just speculation at the moment, but very 
interesting.  I don't think I've seen a concept like this anywhere in the 
literature; I've seen cross-platform toolkits built bottom-up, and even 
Eclipse's architecture seems pretty dependent on its GUI toolkit (SWT) as 
far as I can tell.  Maybe I should patent the concept, before somebody else 
does.  ;)

One thing I'm *really* curious about is how many of these UI abstractions 
would also be usable with peak.web, treating HTML as just another GUI 
toolkit...  Hm.  Well, enough speculation for tonight.



