[PEAK] adapt factory parameter

Stephen Haberman stephenh at chase3000.com
Wed Jun 9 18:44:18 EDT 2004


Hello again,

Thanks for the previous quick response.

I have been lurking on the list but then changed email addresses so missed
the last PyProtocols thread. I found it in the archives and wanted to bring
up a use case I had in mind for adapt's factory parameter.

So, Phillip shot down my use of Attribute vs. Collection, so I thought I'd
go ahead and try to solve it by doing something like:

protocols.declareAdapterForObject(
    ISQLFeature,
    lambda ob, p: (ob.upperBound == 1 and ob.lowerBound == 0 and
                   AttributeSQLFeature(ob)) or None,
    model.StructuralFeature,
    depth=1)

protocols.declareAdapterForObject(
    ISQLFeature,
    lambda ob, p: (ob.upperBound is None and ob.lowerBound == 0 and
                   CollectionSQLFeature(ob)) or None,
    model.StructuralFeature,
    depth=2)

My take on this was that by declaring two adapters for StructuralFeature ->
ISQLFeature, with different depths (yes, I was guessing wildly on this),
PyProtocols would try the adapter at each depth before throwing an
exception. I've got another very similar use case where I'd like to have n
adapters take a shot at the same type -> proto or object -> proto adaption,
so I'd really like to see this work out.
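To make the intent concrete, here's a minimal, self-contained sketch of the
"give each candidate a shot, first non-None wins" behavior I'm after. Note
this is plain Python, not the PyProtocols API: the `chain` helper and the
factory names are hypothetical stand-ins for the two lambdas above.

```python
class StructuralFeature:
    """Stand-in for model.StructuralFeature: just holds the bounds."""
    def __init__(self, lowerBound, upperBound):
        self.lowerBound = lowerBound
        self.upperBound = upperBound

def attribute_factory(ob):
    # Single-valued feature (0..1) -> attribute-style adapter.
    if ob.lowerBound == 0 and ob.upperBound == 1:
        return ('AttributeSQLFeature', ob)
    return None

def collection_factory(ob):
    # Unbounded feature (0..*) -> collection-style adapter.
    if ob.lowerBound == 0 and ob.upperBound is None:
        return ('CollectionSQLFeature', ob)
    return None

def chain(*factories):
    """Combine factories: try each in order, return the first non-None result."""
    def combined(ob):
        for f in factories:
            result = f(ob)
            if result is not None:
                return result
        return None
    return combined

adapt_feature = chain(attribute_factory, collection_factory)
```

So `adapt_feature(StructuralFeature(0, 1))` picks the attribute adapter,
`adapt_feature(StructuralFeature(0, None))` picks the collection adapter, and
anything neither factory claims falls through to None.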

However, it seems that PyProtocols fails right away upon trying the first
adapter and having it return None.

So, the other idiom I was thinking would work, albeit more clumsily:

adapters = [attributeFactory, collectionFactory]  # the two lambdas above
for a in adapters:
    sqlFeature = adapt(f, ISQLFeature, factory=a)
    if sqlFeature is not None:
        return sqlFeature
raise Exception('No adapter for %s' % f)

This gives each adapter a chance to see if it wants to take responsibility
for the feature, letting that feature support ISQLFeature.

Does this look like a valid use case for the factory parameter? Basically, I
want to try n factories/adapters where PyProtocols only lets me try one.

Or am I missing something that would make this easier and not require the
use of the factory parameter?

Thanks,
Stephen
