[TransWarp] model.Attribute questions
Phillip J. Eby
pje at telecommunity.com
Mon Mar 24 16:24:45 EST 2003
At 09:31 PM 3/24/03 +0100, Radek Kanovsky wrote:
>class Book (model.Element) :
>
>    class Title (model.Attribute) :
>        referencedType = model.String
>
>    class Pages (model.Attribute) :
>        referencedType = model.Integer
>
>* Is there a standard PEAK way for defining maximum length of Book.Title
>value?
> I want to generate appropriate VARCHAR(<N>). I think that attributes
> model.String.length or model.Attribute.upperBound are for other purposes.
> Same question arises about UNIQUE, NOT NULL, etc.
Actually, String.length is correct, but you must subclass String, e.g.:
class TitleString(model.String):
    length = 35

...
    referencedType = TitleString
As for other characteristics, NOT NULL corresponds to a 'lowerBound' of 1 on
the feature, also available as 'isRequired'. (Note: 'lowerBound' is the
attribute you should set; if it is nonzero, then 'isRequired' will read as
true.)
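Putting the two pieces together, here is a hypothetical sketch of how a
mapping tool might derive a column declaration from these attributes. The
'FeatureInfo' class and 'sql_column' helper are invented for illustration;
only the 'length' and 'lowerBound'/'isRequired' semantics come from the
discussion above:

```python
# Hypothetical sketch: deriving a VARCHAR column from feature attributes.
# 'FeatureInfo' and 'sql_column' are invented names, not peak API.

class FeatureInfo:
    def __init__(self, name, length, lowerBound=0):
        self.name = name
        self.length = length          # maps to VARCHAR(<length>)
        self.lowerBound = lowerBound  # nonzero -> NOT NULL ('isRequired')

def sql_column(feature):
    col = "%s VARCHAR(%d)" % (feature.name, feature.length)
    if feature.lowerBound:            # 'isRequired' reads true when nonzero
        col += " NOT NULL"
    return col

title = FeatureInfo("title", length=35, lowerBound=1)
print(sql_column(title))  # title VARCHAR(35) NOT NULL
```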
UNIQUE doesn't have any correspondence that I'm aware of. You can, if you
like, define your own configuration property or feature attribute to cover
that.
If you are constructing an SQL mapping tool for general consumption, I
would suggest you look at the CORBA typecode support in
peak.model.datatypes, as it handles things like precision/scale of fixed
numeric types, plus many other common datatypes used by SQL such as binary
octet strings. You might want to base your type definitions off of
'aFeature.typeObject.mdl_typeCode', a model.TypeCode instance that
comprises a CORBA typecode kind (e.g. tk_float, tk_fixed, tk_string, etc.)
and other fields such as length, decimal places, etc.
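As a rough illustration of basing SQL types on typecode kinds, here is a
hedged sketch. The tk_* kind names mirror the ones mentioned above; the
'TypeCodeInfo' container, the field names, and the particular SQL type
choices are assumptions, not peak or CORBA API:

```python
# Hypothetical mapping from CORBA typecode kinds to SQL type strings.
# Kind names follow the text above; everything else is illustrative.

KIND_TO_SQL = {
    'tk_string': lambda tc: "VARCHAR(%d)" % tc.length,
    'tk_float':  lambda tc: "FLOAT",
    'tk_double': lambda tc: "DOUBLE PRECISION",
    'tk_fixed':  lambda tc: "NUMERIC(%d,%d)" % (tc.digits, tc.scale),
}

class TypeCodeInfo:
    def __init__(self, kind, length=0, digits=0, scale=0):
        self.kind, self.length = kind, length
        self.digits, self.scale = digits, scale   # precision/scale for tk_fixed

def sql_type(tc):
    return KIND_TO_SQL[tc.kind](tc)

print(sql_type(TypeCodeInfo('tk_fixed', digits=10, scale=2)))  # NUMERIC(10,2)
```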
>* Why doesn't peak check values assigned to element attributes? Do I need
> to make it manually as follows?
>
> class Title (model.Attribute) :
>     referencedType = model.String
>     def set (attrcls, obj, val) :
>         val = str(val)   # need string
>         if len(val) > 100 :
>             raise ValueError('Nobody reads such long titles')
>         return super(attrcls, attrcls).set(obj, val)
>
> class Pages (model.Attribute) :
>     referencedType = model.Integer
>     def set (attrcls, obj, val) :
>         val = int(val)   # need int
>         if val < 0 : raise ValueError("Antibook!!!")
>         return super(attrcls, attrcls).set(obj, val)
>
> Or methods storage.EntityDM.(_new|_save) are the right place for doing
> such things? But this relates naturally to model and not to storage.
The 'normalize()' method of a feature should be used to validate/convert a
value assigned to that feature (or a value that's contained by a collection
feature). By default, the 'normalize()' method is delegated to a
'mdl_normalize()' class method on the type object (referencedType). The
default datatypes in peak.model.datatypes do not currently implement any
validation; they are abstract bases for you to extend with your own
policies. I do not have enough information at this point to define
universal default behaviors for them. So, extend the types as you need to.
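The delegation described above can be sketched standalone, without importing
peak. Only the 'normalize()'/'mdl_normalize()' names and the delegation
pattern come from the text; the 'TitleString' and 'Feature' classes here are
simplified stand-ins, not the real peak.model classes:

```python
# Standalone sketch of normalize() delegating to the referenced type's
# mdl_normalize() classmethod, as described above.

class TitleString:
    length = 100

    @classmethod
    def mdl_normalize(cls, value):
        value = str(value)                 # convert to the expected type
        if len(value) > cls.length:        # then validate
            raise ValueError("Nobody reads such long titles")
        return value

class Feature:
    referencedType = TitleString

    @classmethod
    def normalize(cls, value):
        # the feature delegates validation/conversion to the type object
        return cls.referencedType.mdl_normalize(value)

title = Feature.normalize(42)   # converts to the string '42'
```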
> I must confess that I am slightly confused by contents of
> model/datatypes.py
> file and its purpose.
It's there mainly so that CORBA typecodes can be supported; this is needed
to implement XMI interchange of MOF and UML models, for example. You
should also find it at least somewhat useful for implementing mapping to
SQL types. It also has some potentially useful abstract base classes for
defining your own data types, using the same overall framework, as long as
the types fit within the overall CORBA scheme of typecode kinds.
>PS: I think that there is typo in model.datatypes.Double that parent
> of Double should be Float and not PrimitiveType. Python float
> is defined as double at C level so it is able to hold both
> float and double values.
>
> class Float(PrimitiveType):
>     mdl_fromString = float
>
> class Double(PrimitiveType):
>     pass
Yes, that is indeed a typo. Thanks.
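For clarity, the corrected definition implied by the PS looks like this;
'PrimitiveType' is stubbed here only so the snippet runs standalone:

```python
# Double inherits the float conversion from Float, since a Python
# float is a C double and can hold both.

class PrimitiveType:
    pass

class Float(PrimitiveType):
    mdl_fromString = float

class Double(Float):   # was: Double(PrimitiveType)
    pass

print(Double.mdl_fromString("3.5"))  # 3.5
```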