[PEAK] peak.running.daemons and Twisted
Phillip J. Eby
pje at telecommunity.com
Fri Apr 23 17:23:27 EDT 2004
At 11:44 AM 4/23/04 -0700, John Landahl wrote:
>Will running.daemons.TaskQueue and AdaptiveTask work with Twisted?
Yes.
>Also, the processes that I'd like to base on AdaptiveTask will need to
>yield on deferreds (they'll be doing remote PB calls to see if they have
>work to do). For this to work it looks like I'd need to subclass
>TaskQueue and AdaptiveTask and override _processNextTask() and __call__()
>respectively. If I understand things correctly,
>_processNextTask() would need to "yield task(); didWork = events.resume()"
>instead of "didWork = task()", and __call__() would need to "yield
>self.getWork; job = events.resume()" instead of "job = self.getWork()"
>(the call to doWork() would probably need to be modified similarly as
>well). Does this seem like it could work?
Not really. The AdaptiveTask stuff assumes that you are doing synchronous
tasks, and only one such task may be active per TaskQueue. Really, that
scenario is the only one where you'd *want* to use
AdaptiveTask. Otherwise, you can just write your own timing logic as an
events.Task, i.e. using:
    yield self.scheduler.sleep(n); events.resume()
whenever you want to pause until your next invocation (where 'n' is the
number of seconds until your next attempt).
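Outside of PEAK itself, the yield-driven style can be sketched with plain
Python generators: a task yields the number of seconds it wants to sleep, and
a small trampoline resumes it when that time arrives. The names below
(poll_task, run, the virtual clock) are illustrative only, not part of
peak.events:

```python
import heapq

def poll_task():
    """A task that checks for work, then sleeps 5 seconds between attempts."""
    for attempt in range(3):
        # ... do a unit of work / poll for work here ...
        yield 5  # pause 5 (virtual) seconds until the next attempt

def run(tasks):
    """Drive generator tasks against a virtual clock; return resume times."""
    clock, queue, log = 0, [], []
    for i, t in enumerate(tasks):
        heapq.heappush(queue, (0, i, t))       # everyone starts at time 0
    while queue:
        clock, i, t = heapq.heappop(queue)     # earliest wakeup first
        log.append(clock)
        try:
            delay = next(t)                    # resume the task
        except StopIteration:
            continue                           # task finished
        heapq.heappush(queue, (clock + delay, i, t))
    return log

print(run([poll_task()]))  # -> [0, 5, 10, 15]
```

A real scheduler would block on a clock instead of jumping the virtual time
forward, but the control flow (yield a delay, get resumed later) is the same.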
So, in truth, running.daemons was designed for 1) the callback-driven
paradigm, and 2) to manage a prioritized queue of non-concurrent tasks. If
you are writing new code, I recommend that you instead use yield-driven
scheduling. If your tasks are synchronous and of equal priority, an
'events.Semaphore(1)' can be used to serialize them, e.g.:
    yield self.semaphore; events.resume()
    self.semaphore.take()
    # do stuff...
    self.semaphore.put()    # release the lock
    yield self.scheduler.sleep(n); events.resume()
around each task. Finally, if your tasks are also prioritized, you need to
create the equivalent of a prioritized semaphore to replace TaskQueue. I
will probably add one to peak.events at some point, but it's not something
I can get to right now.
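In the meantime, here is a rough sketch of what such a prioritized semaphore
could look like. Nothing below is part of peak.events; PrioritizedLock and
its callback-based acquire are hypothetical names, and this is a
single-threaded, cooperative illustration only. A heap hands the lock to
waiters in priority order (lowest number first) rather than FIFO:

```python
import heapq
import itertools

class PrioritizedLock:
    """Sketch of a 'prioritized semaphore': waiters acquire in
    priority order (lowest number first) instead of FIFO."""

    def __init__(self):
        self.held = False
        self.waiters = []                     # heap of (priority, seq, callback)
        self.counter = itertools.count()      # tie-breaker keeps FIFO within a priority

    def acquire(self, priority, callback):
        """Run callback now if the lock is free, else queue it by priority."""
        if not self.held:
            self.held = True
            callback()
        else:
            heapq.heappush(self.waiters, (priority, next(self.counter), callback))

    def release(self):
        """Hand the lock straight to the highest-priority waiter, if any."""
        if self.waiters:
            _, _, callback = heapq.heappop(self.waiters)
            callback()
        else:
            self.held = False

order = []
lock = PrioritizedLock()
lock.acquire(0, lambda: order.append("first"))  # lock is free, runs at once
lock.acquire(5, lambda: order.append("low"))    # queued
lock.acquire(1, lambda: order.append("high"))   # queued, better priority
lock.release()   # "high" runs before "low"
lock.release()
print(order)     # -> ['first', 'high', 'low']
```

A PEAK version would presumably yield an event source instead of passing a
callback, but the heap-ordered hand-off is the essential difference from a
plain Semaphore.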