Socket programming problem

Posted by isoft01 on 18-Apr-2012 07:40

Good day,

I am writing a simple socket client in OpenEdge Architect which communicates with a Java server socket running on Linux. The program passes various character strings, each with a special function. The first character string passed obtains a sessionID, which is used throughout the remainder of the session until it is terminated. The first method passes the string and gets the result without any issue. The MEMPTR variable's size is reset to zero (0) and then set to the length of the subsequent string before the actual WRITE takes place.

When writing the MEMPTR variable to the socket the second time around, the receiving server socket produces an error and fails to process the request. Reviewing the server log files, I get a number format exception referring to the characters at the beginning of the string. Viewing the string in vi, I see ^@ characters at the start. I checked the value of the MEMPTR variables using the OE debugger and also checked the strings in a hex editor, and couldn't find anything out of the ordinary.

Coding the socket client in C# works 100% the way it's supposed to, using the System.Text.Encoding.ASCII.GetBytes() method. Are there any gotchas in OpenEdge when working with sockets that I am not taking into account?

If more specific information is required, I will provide it.

Kind regards,


All Replies

Posted by Matt Baker on 18-Apr-2012 16:41

The ^@ is how vi displays a NUL (0x00) byte, so those look like garbage but might be partially encoded bytes. Sounds like you have an off-by-one error somewhere.

A couple of things to check.

ABL character variables and memptrs are 1-based, not 0-based like they are in most other languages. So make sure you are writing your strings into the memptr starting at position 1, not position 0, and that you write the whole thing. The socket writes are the same way: they start at memptr offset 1.

Check your code page encoding.  You are converting the internal representation of ABL (and Java) strings to bytes.  Make sure you use a consistent encoding when translating from character data into memptr data and back again.  Usually UTF-8 is quite safe and will convert easily to the code pages used internally by both Java (UTF-16) and OpenEdge (normally iso8859-1).  If your C# client is using System.Text.Encoding.ASCII.GetBytes() (.NET uses UTF-16 internally), that is handling the encoding for you from string data to ASCII bytes, but you need to make sure it's consistent on both ends.
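To illustrate the point about consistent encodings, here is a minimal Java sketch (the message text is made up). It shows that a round trip is lossless when both ends use the same charset, and how the bytes get mangled when the sender encodes as UTF-8 but the receiver decodes as ISO-8859-1:

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        String msg = "café";  // any non-ASCII character exposes a mismatch

        // Consistent: encode and decode with the same charset.
        byte[] utf8 = msg.getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(utf8, StandardCharsets.UTF_8));      // café

        // Inconsistent: UTF-8 bytes decoded as ISO-8859-1 come out wrong,
        // because the two-byte UTF-8 sequence for é is read as two characters.
        System.out.println(new String(utf8, StandardCharsets.ISO_8859_1)); // cafÃ©
    }
}
```

Pure ASCII data happens to survive either way, which is why a mismatch like this can stay hidden until the first accented character comes through.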

Posted by isoft01 on 19-Apr-2012 07:54

Thanks Matthew,

I think I've covered the first part, ensuring that data I write to a memptr is pushed at position 1. The second part is what interests me: the character encoding. I will check what the default character encoding is on the recipient side and ensure that the data being passed conforms to what is required.

Thanks for the feedback.

Posted by isoft01 on 25-Apr-2012 14:05


I managed to resolve the problem. I was using GET-SIZE(memptr) as one of the arguments when writing the memptr to the socket. This resulted in null characters being written over the socket, which the receiving application could not parse. I changed the write statement from GET-SIZE(memptr) to the actual length of the character string, which solved the problem.
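The failure mode can be reproduced from the Java server's point of view. This is a hedged sketch (the buffer size and payload are invented for illustration): sending the full allocated buffer rather than just the string bytes leaves NUL (0x00) padding, exactly the ^@ characters seen in vi, and Integer.parseInt rejects it. Trimming to the actual string length parses cleanly:

```java
import java.nio.charset.StandardCharsets;

public class NullPaddingDemo {
    public static void main(String[] args) {
        String payload = "12345";

        // Simulate writing GET-SIZE bytes: the buffer is larger than the
        // string, so the unused tail stays as NUL (0x00) bytes.
        byte[] oversized = new byte[16];
        byte[] data = payload.getBytes(StandardCharsets.US_ASCII);
        System.arraycopy(data, 0, oversized, 0, data.length);

        String padded = new String(oversized, StandardCharsets.US_ASCII);
        try {
            Integer.parseInt(padded);        // the NUL padding is not a digit
        } catch (NumberFormatException e) {
            System.out.println("padded parse fails");
        }

        // Using only the actual string length avoids the padding.
        String exact = new String(oversized, 0, data.length,
                                  StandardCharsets.US_ASCII);
        System.out.println(Integer.parseInt(exact)); // 12345
    }
}
```

This is why passing the string's length, rather than the memptr's allocated size, to the socket WRITE fixed the problem: the receiver only ever sees the bytes that belong to the message.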


This thread is closed