[PEAK] The path of model and storage
Robert Brewer
fumanchu at amor.org
Wed Jul 28 14:18:01 EDT 2004
I wrote:
> And if, in the process, you decided to make a cache container which:
>
> 1. Accepts any Python object (i.e., one that doesn't have to subclass
>    from peak.something),
> 2. Is thread-safe: avoiding "dict mutated while iterating" (probably
>    with a page-locking btree),
> 3. Indexes on arbitrary keys, which are simple attributes of the
>    cached objects (both unique and non-unique), and
> 4. Is no more than 4 times as slow as a native dict,
and Philip J. Eby replied:
> #2 just ain't gonna happen. Workspaces will not be shareable across
> threads. (Or more precisely, workspaces and the objects provided by
> them will not include any protection against simultaneous access by
> multiple threads.)
*boggle* Are you just forcing the mutexing down to consumer code then? I
can understand not shareable across processes, but *threads* -- wow.
Bring on the ad-hockery. :O
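For the record, here's a minimal sketch (hypothetical, not PEAK code; all
names are made up) of the kind of container points 1-3 describe: it accepts
plain Python objects, indexes them on arbitrary attributes (unique and
non-unique), and takes a lock around every access so iteration never sees a
dict mutated underneath it:

```python
import threading

class IndexedCache:
    """Hypothetical attribute-indexed cache; a lock serializes access
    so no reader ever iterates a dict while a writer mutates it."""

    def __init__(self, unique=(), multi=()):
        self._lock = threading.Lock()
        # one index dict per attribute name: value -> object
        self._unique = {name: {} for name in unique}
        # non-unique indexes map: value -> list of objects
        self._multi = {name: {} for name in multi}

    def add(self, obj):
        with self._lock:
            for name, index in self._unique.items():
                index[getattr(obj, name)] = obj
            for name, index in self._multi.items():
                index.setdefault(getattr(obj, name), []).append(obj)

    def get(self, name, value):
        with self._lock:
            if name in self._unique:
                return self._unique[name].get(value)
            return list(self._multi.get(name, {}).get(value, ()))
```

Note this pushes the mutexing into the container itself, which is exactly
what a workspace that isn't shareable across threads declines to do.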
> #4 is also impossible, if I understand it correctly, since accessing
> just *one* attribute will in the typical case require *two* dictionary
> lookups. So, unless you're not counting the time needed to *generate*
> the keys, that's clearly a non-starter.
I wouldn't really be concerned about insertion time so much as lookup
time, which could be performed in 3-4 lookups for very large spaces. But
then, if your Workspace doesn't persist between threads, I can see how
insertion time would become an issue for you. ;)
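The "two dictionary lookups" cost being argued over can be measured directly
with a quick (hypothetical) timing sketch: a native dict lookup costs one
hash probe, while an index that yields an object and then reads one of its
attributes pays a second probe through the instance's `__dict__`:

```python
import timeit

class Obj:
    def __init__(self, key):
        self.key = key

# a native dict versus an index of objects keyed the same way
native = {i: i for i in range(1000)}
index = {i: Obj(i) for i in range(1000)}

# one hash lookup...
t_native = timeit.timeit(lambda: native[500], number=100_000)
# ...versus a hash lookup plus an attribute (second dict) lookup
t_index = timeit.timeit(lambda: index[500].key, number=100_000)

print("native:", t_native, "indexed:", t_index,
      "ratio:", t_index / t_native)
```

Whether the ratio lands under the "4 times as slow" budget depends on how
many indexes each insertion must update, which is why insertion versus
lookup time is the real point of disagreement here.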
Bob