[PEAK] Anybody using...
Phillip J. Eby
pje at telecommunity.com
Fri Jan 16 00:19:38 EST 2004
running.IMainLoop -- I'm thinking about changing lastActivity to an
events.Value, which means you'd call it instead of reading it. I'm also
thinking of dropping setExitCode/childForked and replacing them with
stop(exitcode) and crash(exitcode) methods. Any objections?
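For concreteness, here's roughly what usage would look like under that
change (just a sketch -- I haven't settled on exact signatures):

    last = mainLoop.lastActivity()  # call the Value, don't read an attribute
    mainLoop.stop(exitcode)         # in place of setExitCode()
    mainLoop.crash(exitcode)        # likewise; exact semantics still TBD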
running.ISignalManager -- I'm thinking about reworking this to just map
signals to events, and dropping the add/remove handler stuff. Any objections?
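A thread could then just wait on a signal's event source, something like
this (the lookup syntax here is only a guess):

    import signal

    yield signalManager[signal.SIGTERM]; events.resume()
    # ... clean up and exit ...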
running.IProcessProxy -- I'd like to completely overhaul this interface to
use event sources, handle asynchronous signal issues directly, and probably
drop the checkStatus() method as well.
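For instance, instead of polling via checkStatus(), you might wait on an
event source (purely illustrative -- none of these names are settled):

    yield proxy.exitStatus; status = events.resume()  # wait for child exit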
I'm also looking at rethinking the periodic/adaptive task stuff, but I
don't know how much people have built on it, or how useful it actually is
for periodic tasks to use peak.events. The primary use case Ty and I have
for periodic tasks is making programs that simply perform various
system-wide "housekeeping" tasks, and those tasks are (at least right now)
inherently serialized. But I'm not sure that other folks really have the
same use case. I'm guessing that most people's use of periodic tasks is
really to do things that would be trivially expressed as a thread, like:
    while True:
        # do something I need to do
        if succeeded:
            yield scheduler.sleep(5); events.resume()
        else:
            yield scheduler.sleep(60); events.resume()
The only way a prioritized task queue is relevant is if one or more tasks
"block" the progress of other tasks. However, if we added a 'priority'
value to threads, then we could have a task queue that simply prioritized
threads. It would be similar to the existing ITaskQueue, but would run
threads rather than IPeriodicTasks. It would be an ITaskSwitch itself, so
you could e.g.:
    yield aTaskQueue.enter(priority); events.resume()
    # ... do whatever I need to do ...
    yield aTaskQueue.exit(); events.resume()
This would create a sort of "critical section" that would keep other
threads in the task queue from being able to enter() until your thread
exit()ed. However, if multiple threads are waiting to enter(), then the
highest priority thread will always get to go first (unless there's a tie
for highest priority, in which case it's first-come first-served).
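To be concrete about that scheduling rule, here's the tie-breaking logic
in plain Python (a generic sketch, not peak.events API):

    import heapq
    from itertools import count

    class PrioritizedWaiters:
        """Highest priority first; first-come first-served among ties."""
        def __init__(self):
            self._heap = []
            self._arrival = count()   # monotonically increasing tie-breaker

        def add(self, priority, thread):
            # negate priority so the largest value pops first; the arrival
            # counter preserves FIFO order among equal priorities
            heapq.heappush(self._heap, (-priority, next(self._arrival), thread))

        def next(self):
            return heapq.heappop(self._heap)[2]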
So, we could still keep the AdaptiveTask class, but instead of running via
__call__ it would create a prioritized thread that called getWork() and
doWork(). The downside here is that getWork() and doWork() couldn't yield
control partway through; they are inherently blocking calls.
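In other words, the thread body would look something like this (a sketch
only; 'pollInterval' and the queue wiring are placeholders):

    def run(self):
        while True:
            yield aTaskQueue.enter(self.priority); events.resume()
            job = self.getWork()      # blocks everything until it returns
            if job:
                self.doWork(job)      # likewise blocking
            yield aTaskQueue.exit(); events.resume()
            yield scheduler.sleep(self.pollInterval); events.resume()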
One thing that bothers me about the periodic tasks system as such is that
it's a kludge we developed years ago to have a basic ...wait for it...
*event-driven* system, but based on polling. getWork() gets things to do,
and the framework passes them to doWork(). However, looking at this
through peak.events, it would seem the *real* structure here is:
    while True:
        yield aSourceOfWork; work = events.resume()
        # do the work
All of the "adaptive" mechanisms of adaptive tasks (and their predecessors
in MetaDaemon) are intended for the *getWork()* side of the process. That
is, all of the "time management" and "priority management" is to control
how much time gets spent *polling* for work to do. In a sense, we could
say that each task is really two threads: one that loops using time as a
basis, generating work events, and the other that loops on work events,
doing them.
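In sketch form, using only the idioms above (I'm assuming an events.Value
could serve as the channel for work events):

    workReady = events.Value()       # hypothetical channel for work events

    def poller():                    # the getWork() side: time-based loop
        while True:
            job = getWork()
            if job:
                workReady.set(job)   # generate a work event
                yield scheduler.sleep(5); events.resume()
            else:
                yield scheduler.sleep(60); events.resume()

    def worker():                    # the doWork() side: loops on work events
        while True:
            yield workReady; job = events.resume()
            doWork(job)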
Anyway, there are many different possible structures here, and I haven't
yet finished sorting out how it should really work. For our primary use
cases, there are all sorts of interesting issues like transactions and
blocking. On the one hand, we could simply stick with what we have, except
that we haven't yet ported over lots of code from MetaDaemon, and if
peak.events will make any of it easier, I'd like to take advantage of it.
However, from what I can tell, it seems that by changing the ITaskQueue and
IPeriodicTask implementations and interfaces a bit, we could keep the
getWork()/doWork() structures intact, so anybody using them could continue
doing so. Only if someone had overridden, e.g., __call__ would there be a
difficulty. So, I think that all in all, we can have our cake and eat it
too: generalize priority queue scheduling to be usable with *any* thread,
while still allowing AdaptiveTask subclasses to run as they did before,
*and* cleaning up the logic of how they work.