OK, so I think I am going to take the plunge and turn off intra-container messaging. Without getting into all the details, I have a custom service that calls a web service. If the external web service is down, the service goes into a wait-retry mode until the external system becomes available.
My processes are as follows:
ProcessA receives the initial message via an HTTP acceptor and routes it to ProcessB. A and B have their own queues. The custom web service Service lives in B. With intra-container messaging on, the message stays on A's queue until B delivers it, and that is where the problem lies. A cannot route messages to other processes, say C and D, until B acknowledges its message off A's queue. The effect is that none of the messages on A's queue can be delivered to any other recipients until the external web service becomes available.
To say it another way, if the external system in question goes down, no messages originating from A will be sent anywhere.
I believe that turning off intra-container messaging will solve the problem, because any messages from A that are routed to B will be taken off A's queue and placed on B's queue, thereby freeing up A to route to other processes.
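To make the head-of-line blocking concrete, here is a toy model in plain Java (deliberately not Sonic ESB APIs; the queue names and message prefixes are made up for illustration). With intra-container messaging on, a message for B that cannot be acknowledged blocks everything behind it on A's queue; with it off, the message is handed straight to B's queue and A keeps draining:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the two acknowledgement behaviors described above.
// Messages prefixed "B" are bound for the web-service process B;
// everything else is bound for other processes (C, D, ...).
public class QueueHandoffDemo {

    // Intra-container messaging ON: the head message stays on A's queue
    // until B acknowledges it, so if B's external system is down,
    // messages for C and D sit stuck behind it.
    static int deliverableWithIntraContainer(Queue<String> queueA, boolean serviceBUp) {
        int delivered = 0;
        while (!queueA.isEmpty()) {
            String msg = queueA.peek();
            if (msg.startsWith("B") && !serviceBUp) {
                break; // head-of-line blocking: nothing behind it moves
            }
            queueA.poll();
            delivered++;
        }
        return delivered;
    }

    // Intra-container messaging OFF: messages for B are moved onto B's
    // own queue immediately, freeing A to route everything else.
    static int deliverableWithoutIntraContainer(Queue<String> queueA,
                                                Queue<String> queueB,
                                                boolean serviceBUp) {
        int delivered = 0;
        while (!queueA.isEmpty()) {
            String msg = queueA.poll();
            if (msg.startsWith("B")) {
                queueB.add(msg); // parked on B's queue; A is no longer blocked
            } else {
                delivered++;
            }
        }
        return delivered;
    }
}
```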
If I am wrong here, let me know. Anyway, I just wondered if there are any gotchas with not using intra-container messaging? Our message volume is about 25,000 messages in a 24-hour period, with message sizes between 3k and 20k. Messages are sent in bursts of 10 to 150 usually, but sometimes we get 1,500 or more once a day.
Nathan,
When I look at a system, I consider placing any service (or process) that connects to an external system in its own ESBContainer, so that intra-container messaging is eliminated from the equation. In this case the connection service is your web-service callout service.
Placing the connection-level service in a separate ESB container ensures that if the invoked service hangs (e.g. the external service is not responding), other dependent services and processes will continue to process information.
You should also review the number of listeners and instances of the services you have deployed. If the number of instances is 1 and it has only 1 listener, then only one message at a time can be processed. For any connection-level service I would suggest having at least 2 instances deployed, each in a different ESB Container; this way, if one fails the other will automatically process the requests. There should also be a minimum number of listeners configured, probably 10, but you will have to experiment to find what works best for you.
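The listener count is essentially a cap on concurrency. A minimal sketch, using a plain Java thread pool rather than any Sonic ESB configuration, of how the number of listeners bounds how many service invocations can run at once:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Toy illustration: each "listener" behaves like one consumer thread,
// so the listener count caps how many invocations run simultaneously.
public class ListenerPoolDemo {

    // Run `messages` simulated service calls on `listenerCount` listener
    // threads and report the peak number of simultaneous invocations.
    static int peakConcurrency(int listenerCount, int messages) {
        ExecutorService listeners = Executors.newFixedThreadPool(listenerCount);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(messages);
        for (int i = 0; i < messages; i++) {
            listeners.submit(() -> {
                int now = inFlight.incrementAndGet();
                peak.accumulateAndGet(now, Math::max);     // record high-water mark
                try {
                    Thread.sleep(50);                      // simulate a slow callout
                } catch (InterruptedException ignored) { }
                inFlight.decrementAndGet();
                done.countDown();
            });
        }
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        listeners.shutdown();
        return peak.get();
    }
}
```

With a single listener, messages are strictly sequential no matter how many are queued; with 10 listeners, up to 10 callouts can be in flight, which is why a hung external system with one listener stalls everything behind it.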
Hope this helps
David
David,
Thanks for the reply. So deploying my web service Service to another container sounds like a good option. As far as the listeners go, that is a little more interesting. My service takes a URL as a parameter, so I can reuse it in other processes. If the external system is down and I have two listeners on that service, will the next thread pick up a message and try to send it to the same external system? What if I have one listener for the service, and a message is sent to external system A, and a message is sent to external system B? Since I am reusing the service, will calls to the external systems happen sequentially? I am just thinking through this, so maybe I need separate services for each external system. Then they would each have their own threads and could execute in parallel. What do you think?
Nathan
Nathan,
Technically, since the URL is a runtime parameter, there is no problem having a single instance of the service, but enough listeners should be configured to handle the number of concurrent requests. The question is how you determine when to invoke which web service.
If you are load balancing across the two URLs, you might pass a list of URLs as an init parameter and then round-robin (or similar) between them on each invocation. If this is instead a failover pattern, you might pass a list of URLs as a runtime parameter, where the service tries the first and, if that fails, invokes the second. Alternatively, if the first fails, send the message to the RME and let an error-handling service manage it (though this is not what I would recommend).
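The failover variant is the simpler of the two. A minimal sketch, assuming the callout can be abstracted as a function from URL to response (the `Function` stand-in is hypothetical; the real invocation API depends on your ESB version):

```java
import java.util.List;
import java.util.function.Function;

// Sketch of the failover pattern: try each URL in order and return the
// first successful response. A failed call is modeled here as a thrown
// RuntimeException from the callout.
public class FailoverInvoker {

    static String invokeWithFailover(List<String> urls, Function<String, String> callout) {
        RuntimeException last = null;
        for (String url : urls) {
            try {
                return callout.apply(url);   // first success wins
            } catch (RuntimeException e) {
                last = e;                    // remember the failure, try the next URL
            }
        }
        // All endpoints failed: surface the last error. In the ESB this is
        // roughly the point where the message would go to the RME instead.
        throw last != null ? last : new IllegalArgumentException("no URLs configured");
    }
}
```

For the load-balancing variant, the same loop would instead start from a rotating index into the URL list rather than always trying the first entry.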
David