I am experiencing a number of issues using an OpenEdge database as a repository for Enterprise Architect. They all relate to SQL datatypes, since the normal way to create the schema for this usage is with a SQL script.
First, a field which is defined as CHAR(1) or CHARACTER(1) results in a field with a datatype of "fixchar". I wouldn't mind that, except that any field defined as fixchar is poison in any ABL code involving a display. One can't even export the table containing such a field to a .d! I am going to try changing these to VARCHAR and see if that works.
Second, there are a number of fields defined as FLOAT. These result in a datatype of "double", with the same problems for display and export. It appears that I should try changing these to NUMBER, although NUMBER takes a size and FLOAT doesn't, and I don't have any obvious guidance on what size to use. The size does not seem to influence anything on the ABL side, but it does influence the SQL width.
Third, the FLOAT fields end up with a format of "X(24)". This makes Progress really annoyed when one exports a .df and then tries to load it elsewhere. NUMBER seems to get a ->>,>>>,>>> style format, so possibly the one change will fix both problems.
I find it very irritating that there are supported SQL datatypes which are not accessible from the ABL. Didn't we learn that lesson over date-time???
Finally, it seems that if a table or field name in the SQL script is quoted, then it is created in the case in which it appears, but if it is not quoted, then it is created all upper case. I don't suppose there is a utility or parameter to either quote everything or to alter this behavior.
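To illustrate the case-folding behavior described above (table and column names here are made up for the example):

```sql
-- Unquoted identifiers are folded to upper case when the object is created:
CREATE TABLE Customer ( CustName VARCHAR(30) );
-- ...ends up in the schema as CUSTOMER.CUSTNAME.

-- Quoted identifiers keep the case exactly as written:
CREATE TABLE "Supplier" ( "SuppName" VARCHAR(30) );
-- ...ends up in the schema as Supplier.SuppName.
```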
CHAR or CHARACTER is one of the datatypes which is both required and very fully defined by the SQL standard, so OpenEdge SQL is required to have the "fixed length" behavior you see. The ABL CHARACTER type evolved from the 4GL tradition and corresponds to the SQL VARCHAR datatype. VARCHAR is exactly what you want.
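So in the schema script, the change is simply from the fixed-length to the variable-length character type (names here are illustrative, not from the actual EA schema):

```sql
-- CHAR(1)/CHARACTER(1) creates a fixed-length "fixchar" column,
-- which the ABL side cannot display or dump.
-- VARCHAR maps to the ABL CHARACTER type instead:
CREATE TABLE "Element" (
    "StatusFlag" VARCHAR(1)    -- instead of CHAR(1)
);
```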
FLOAT is another SQL-standard datatype. However, FLOAT does not have an exact counterpart in ABL. Usually the best "matching" datatype is NUMBER, which is identical to the ABL DECIMAL. You can have up to 50 digits in this type, so it accommodates a wide range of values. As you say, the size of a NUMBER type does not affect the ABL; the right size is entirely application dependent. You might try loading the data, as an experiment, into FLOAT datatypes and then querying to find your largest and smallest values.
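A sketch of that experiment (precision and names chosen for illustration only):

```sql
-- Load into FLOAT first, then inspect the range of actual values
-- to pick a sensible NUMBER precision:
SELECT MIN("Reading"), MAX("Reading") FROM "Measurement";

-- Then redefine with NUMBER, which maps to ABL DECIMAL.
-- The precision does not constrain the ABL side, but it does
-- set the SQL width:
CREATE TABLE "Measurement2" (
    "Reading" NUMBER(15,4)     -- instead of FLOAT
);
```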
Note that the ABL does not support FLOAT, so the format associated with a FLOAT is never used. If you define a NUMBER type to load the float data into, you can explicitly define a format for it. As of OE SQL 10.1A you can define ABL formats in SQL DDL using the SQL keyword PRO_FORMAT.
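For example (the format string and names are illustrative; check the 10.1A SQL reference for the exact syntax your release accepts):

```sql
-- PRO_FORMAT sets the ABL display format for the column in SQL DDL,
-- avoiding the bogus X(24) format that FLOAT columns pick up:
CREATE TABLE "Measurement3" (
    "Reading" NUMBER(15,4) PRO_FORMAT '->>,>>>,>>9.9999'
);
```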
hope this helps, ...sjp
Thanks.
The bottom line here is that we should have parity between SQL and ABL. If a datatype is required for SQL, then either it should map automatically into an ABL type or it should be supported in ABL. Period.
Not to mention supporting non-indexed searches.
This is in the works....
Yes, I know ... I'm just mentioning it as an egregious example of doing something for the SQL engine and then not making it available for the ABL engine.
The idea of putting a datatype in the dictionary to support SQL and then not only not making it available to ABL, but leaving things such that any attempt to access it via ABL, even the dictionary dump routines, produces an error, is just bizarre. I mean, how many people out there are doing SQL-only access to an OE database and never having to do a dump and load?
Is it now possible to select a char field and cast it to a desired width? In effect, I want to overcome the limitation where the SQL engine will abort a query if the char field content exceeds the defined width, by CASTing it to a derived column of a fixed length.
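Something along these lines is what I have in mind (names are illustrative, and I don't know whether the engine actually accepts this on any given OE release):

```sql
-- The raw column aborts the query when stored data exceeds the
-- defined SQL width; the hope is that an explicitly sized derived
-- column does not:
SELECT CAST("CustName" AS VARCHAR(200)) AS CustNameWide
FROM   "Customer";
```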
I understand the SQL engine's rationale that a named column cannot be extracted if its content exceeds the defined length: it avoids the situation where you could update the truncated column and write it back, and thus cause an error.
This is the biggest issue, FMPOV, with using the SQL engine against legacy applications. The "nice to have" open character limit has now become a nightmare when reporting. Could the cast override at least be allowed if restricted to non-updateable views?