I recently had some code that was developed, and working, on a 10.1B02 box get shipped to a 10.1A01 HP-UX box, where it failed with a type-mismatch error on a function call.
It turns out that this little test segment:
function test returns logical
    (input curdec as decimal):
    return true.
end function.

test(0).
will compile on a 10.1B02 box, but not on a 10.1A01 box.
Did PSC relax the type-checking rules between those two versions?
In version 10.1B, as part of the Object Oriented feature, OpenEdge was enhanced to support 'widening' when passing parameters. By this I mean that the caller is able to pass a data type that is 'narrower' (for example, INTEGER) in place of a 'wider' data type (for example, DECIMAL). Implicit widening is supported for method and user-defined function calls: when calling a method or user-defined function with a parameter that is narrower than the expected data type, the compiler does an implicit conversion to the wider data type.
In 10.1B we supported widening for:
INTEGER to INT64 to DECIMAL
DATE to DATETIME to DATETIME-TZ
In 10.1C we added support for:
CHARACTER to LONGCHAR
Ok, this makes perfect sense, particularly when dealing with the new data types and the legacy code base that's already out there.
Is it safe to presume that the reverse is not the case - that an INT64 can't be passed to an INTEGER, or a LONGCHAR to a CHARACTER?
That is correct! The compiler makes sure that you are NOT passing a 'wider' data type to a 'narrower' data type.
I need to clarify my previous posting. For user-defined classes, the compiler does verify that a routine is properly passing data to a method; it will reject an attempt to pass 'wider' data to a method expecting 'narrower' data.
However, for user-defined functions there is one exception to the above rule: the compiler allows an INT64 to be passed to a function expecting an INTEGER. The invocation of such a user-defined function may succeed at compile time but fail at runtime, depending on the size of the data being passed to the function. This was done to ease the integration of the INT64 data type into existing procedural applications.
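A sketch of that exception (the names are mine, not from the posting):

define variable big as int64 no-undo initial 2147483648. /* INTEGER max is 2147483647 */

function echo returns integer
    (input i as integer):
    return i.
end function.

display echo(big).  /* compiles even though INT64 is wider than INTEGER;
                       at runtime it raises error 13682 because the value
                       doesn't fit - with INITIAL 7 the call would succeed */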
Do you mean that the call will succeed if the INT64 contains only a number that will fit in an integer, but fail if it is bigger than that?
It's going to be filed under "yet another quirk of the ABL."
I would've preferred to maintain the strict type checking - it's better to require the conversion at the time of the function call than to do it implicitly and have it fail some of the time at runtime. At least with a mandated INT64-to-INTEGER conversion, it's clear that a conversion is going on there.
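Tim's preferred explicit style would look something like this sketch, using the built-in INTEGER conversion function (the variable names are illustrative):

define variable big as int64   no-undo initial 7.
define variable i   as integer no-undo.

/* The narrowing is spelled out at the conversion site: INTEGER()
   raises error 13682 right here if the value doesn't fit, rather
   than somewhere inside an implicit parameter conversion. */
i = integer(big).
display i.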
Thomas asked: Do you mean that the call will succeed if the INT64 contains only a number that will fit in an integer, but fail if it is bigger than that?
Yes. For user-defined functions, OpenEdge does not detect this as a compile-time error. At runtime, if the value stored in the caller's INT64 parameter is a valid INTEGER value the call will succeed, but if it contains a larger value, the runtime generates error 13682, 'Value too large to fit in INTEGER datatype.'
Tim noted that: I would've preferred to maintain the strict type checking …
For the Object-oriented extensions we have added (and are continuing to add) to ABL, we took the strict type checking model you noted. When passing data to a method the compiler verifies that the caller is passing an appropriate data type.
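A sketch of the method case (a hypothetical class; the file names and identifiers are mine):

/* Calc.cls */
class Calc:
    method public integer Echo(input i as integer):
        return i.
    end method.
end class.

/* caller.p */
define variable c   as class Calc no-undo.
define variable big as int64      no-undo initial 7.

c = new Calc().
display c:Echo(big).  /* compile-time error: method parameters are strictly
                         checked, so a 'wider' INT64 cannot be passed where
                         an INTEGER is expected */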
Regarding the runtime error 13682 raised when the INT64 value is too large: is there any other information given, so an end-user/developer can figure out where the failure happened and fix it?
So the language has relaxed the type-checking requirements on function parameters between versions, while maintaining strict type checking on object method calls. Yet another addition to the list of the ABL's "quirks and idiosyncrasies".
As to whether any other information is given to locate the failure: there is nothing special about this runtime error. By that I mean that at development time a developer can use the -debugalert startup parameter to generate an ABL call stack when a message is raised.
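The same behavior can also be switched on from within the session (a sketch; SESSION:DEBUG-ALERT is the attribute equivalent of the -debugalert startup parameter):

/* ABL error message boxes now include a Help button that shows
   the call stack at the point the message was raised. */
session:debug-alert = true.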
On the point that the type-checking requirements on function parameters were relaxed between versions: in fact the language never relaxed type checking for functions relative to INT64. The model noted in this thread has existed in the language since the introduction of the INT64 data type.
Agreed that the language never relaxed type checking for functions relative to INT64.
But as for that model having existed since INT64 was introduced: the original question was about the relaxed type checking between an INTEGER argument and a DECIMAL function signature. The types used to have to be identical, but that's no longer the case, because supporting narrow values passed to wider parameters means an INTEGER argument can be passed to a DECIMAL function parameter without a compiler error.
Agreed.