I have two machines with the exact same web service setup on both of them. The only difference is that one points to a live DB and the other to a test DB. We have a method where our client requests all the stock items from us, and they do this in batches of 15k records at a time. This works perfectly on the machine pointing to the live DB, yet on the machine pointing to the test DB the call hits a timeout problem: the method runs, but afterwards we get the following.
READPACKET IOException : java.net.SocketException: Software caused connection abort: recv failed (Software caused connection abort: recv failed)
4GL-Provider Error in SOAP request execution: Communication layer message: General Error: READPACKET IOException : java.net.SocketException: Software caused connection abort: recv failed (Software caused connection abort: recv failed). (7175) (10926)
Now, both machines go through a firewall and the same rules are applied to both. In my testing, if I request a batch of 10k or even 13k records it works perfectly, yet the batch of 15k records fails with the above errors. Retrieving the 15k records does take a fair amount of time (2:40 min).
Is there a timeout setting I'm missing? I've double-checked all the settings on the two web services and AppServers and they are identical.
Can someone put me in the correct direction here?
I guess that during the holiday season we had fewer community members online... Anyone following this forum may have suggestions or ideas to share?
Is the timeout setting perhaps on the firewall itself, with the live connection fast enough not to trigger it?
I am thinking along the same lines as David: the timeout is on some intermediate part like Tomcat or the WSA component.
Web services are meant to happen pretty quickly - they aren't like Progress appservers that chug till they are done.
You may want the web service to interact with a background process that does the work for you. Have it request a job (via a database record; don't have the job started and stopped by the web service itself), check the status, etc., if you are dealing with thousands of records.
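To make the submit-then-poll idea concrete, here is a minimal Java sketch of the pattern (the class and method names are illustrative, not a real OpenEdge API; an in-memory map stands in for the job/status database record):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: the web service call only creates a job record and reports its
// status; a background worker performs the long-running extract, so no
// HTTP connection has to stay open for minutes.
public class JobQueueSketch {
    enum Status { PENDING, RUNNING, DONE }

    private final Map<String, Status> jobs = new ConcurrentHashMap<>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Called by the web service: returns immediately with a job id.
    public String submitJob(Runnable longRunningExtract) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, Status.PENDING);
        worker.submit(() -> {
            jobs.put(id, Status.RUNNING);
            longRunningExtract.run();   // e.g. build the 15k-record result
            jobs.put(id, Status.DONE);
        });
        return id;
    }

    // Called by the client on later, short requests.
    public Status checkStatus(String id) {
        return jobs.get(id);
    }

    public void shutdown() { worker.shutdown(); }
}
```

The client submits once, then polls `checkStatus` with cheap, fast calls until the job is `DONE` and the result can be fetched.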
Hi guys... yes, this is still an issue. The only way I can resolve it at the moment is by lowering the number of records pulled by the client through the web service.
I'm using the WSA, and this error is coming from its log. The WSA is running on an external Tomcat (v8). The application properties for the web service have the property set for the AppServer to connect to.
My understanding is that a timeout should never occur; the connection should be kept alive until the call finishes or an error occurs. So in my mind this call to the web service should be able to run for 10 minutes if necessary.
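For what it's worth, at the JVM level that expectation holds: a plain TCP socket has no read timeout by default, so any timeout the client sees is imposed by something in between (firewall, Tomcat, WSA), not by the socket itself. A tiny sketch showing the default:

```java
import java.net.Socket;

// Sketch: a fresh java.net.Socket has SO_TIMEOUT = 0, meaning reads block
// indefinitely. Timeouts on long calls therefore come from intermediaries
// (firewall idle timers, Tomcat connector settings, WSA properties).
public class SoTimeoutDemo {
    public static void main(String[] args) throws Exception {
        Socket s = new Socket();
        System.out.println(s.getSoTimeout()); // 0 = no read timeout
        s.close();
    }
}
```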
Perhaps follow Adrian Jones' thought - could this be happening? knowledgebase.progress.com/.../P167205
I have looked at this before, but logically it doesn't make any sense, as 15k records works on the one machine and not the other. Our client goes through the same firewall to access the WS on both machines, so in my mind it has to be machine-related. I've compared the AppServers, WSA and web service setup for both web services and they are identical.
So there must be a setting I'm missing somewhere...
Is there anyone out there who can put this into practice and call a web service method that runs for a long time, to see what results they get and how they resolve it?
The difference might not be in the settings but in the hardware itself; I have a hunch the test machine is less muscled than the live one, so it just can't produce the same results in the same amount of time. But that is not the point here... why is 15k a requirement? It's a pretty darn large number to pipe through a single synchronous call. This is not streaming we're talking about; it's a plain synchronous request/response call over HTTP/TCP with lots of network devices involved, so I wouldn't expect the connection to stay up for 10 minutes while nothing passes through it (while the server is still preparing the response).
Do yourself a favour and change the requirement instead of flogging a dead horse. There is no way to gracefully recover from a communication error other than by making the request again, so better make sure you keep the requests as short as possible.
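To illustrate "keep the requests short": pull the data in small pages and retry only the page that failed. A minimal client-side sketch in Java (the `fetchPage` function is a stand-in for the real SOAP call, not an actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Sketch: fetch a large result set as many small requests. Each page is a
// short call, and on a dropped connection only that page is retried.
public class PagedFetch {
    public static List<String> fetchAll(
            BiFunction<Integer, Integer, List<String>> fetchPage,
            int pageSize, int maxRetries) {
        List<String> all = new ArrayList<>();
        int offset = 0;
        while (true) {
            List<String> page = null;
            for (int attempt = 0; attempt <= maxRetries; attempt++) {
                try {
                    page = fetchPage.apply(offset, pageSize);
                    break;                       // page succeeded
                } catch (RuntimeException e) {   // e.g. connection dropped
                    if (attempt == maxRetries) throw e;
                }
            }
            if (page.isEmpty()) return all;      // no more records
            all.addAll(page);
            offset += page.size();
        }
    }
}
```

With a page size of a few thousand records, each individual request stays well under any intermediary's timeout, and a failure costs one page instead of the whole 15k batch.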
If the cause isn't visible anywhere else, I would suspect the timeout at Tomcat. Did you configure your Tomcat with a maximum connection timeout? If not, you can set connectionTimeout to "-1" in server.xml.
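For reference, that attribute lives on the HTTP `<Connector>` element in Tomcat's conf/server.xml (the port numbers shown are the Tomcat defaults; adjust to your install):

```xml
<!-- conf/server.xml: connectionTimeout="-1" disables the connector's timeout -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="-1"
           redirectPort="8443" />
```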
In addition to that, did you also increase the connectionTimeout at the WSA? If you haven't, you can follow the below Kbase to configure your WSA.