I have set up my test environment for fault tolerance by specifying primary and secondary management containers (each containing a management broker, an activation daemon for the containers that need to run on that particular machine, an agent manager, a directory service, and a host manager -- obviously the DS and AM on the secondary are simply in standby). I have also set up messaging brokers in containers on the separate machines and have those load balanced inside a particular cluster. My question is this: I also have containers with ESB containers deployed to them containing custom Sonic services. Currently those containers are set up to be fault tolerant, so the primary machine runs the containers while the secondary machine hosts the backup containers (which are in standby). My issue is that this design only provides fault tolerance for my custom services, not load balancing of the work they are doing, correct? How do you load balance the processing done by custom services? Sorry if I'm missing something; I just want to make sure I'm using both machines in the environment as evenly as possible. Thanks.
Adam .. if I am interpreting your deployment correctly then it does not really fit what we would think of as best practices.
My recommendation assuming you have two machines is to follow a pattern something like:
From the ESB side, use JMS entry endpoints that receive from either queues or shared subscriptions .. that will result in the load being shared between the services. There are no mechanisms to exactly balance the load; however, when both are receiving from the same destination and there are sufficient messages, both will process requests, and if one should fail then the other will handle all the requests.
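The competing-consumers behavior described above can be sketched in plain Python. This is purely a conceptual illustration using a thread-safe in-process queue, not the Sonic or JMS API; the names (`worker`, `work`, `processed`) are made up for the example:

```python
import queue
import threading

# Conceptual sketch of competing consumers: two workers receive from
# the same destination, so the load is shared roughly (not exactly)
# evenly; if one worker stopped, the other would simply drain all of
# the remaining messages on its own.

work = queue.Queue()
processed = {"worker-1": [], "worker-2": []}

def worker(name):
    while True:
        msg = work.get()
        if msg is None:          # shutdown signal for this worker
            work.task_done()
            return
        processed[name].append(msg)
        work.task_done()

threads = [threading.Thread(target=worker, args=(n,)) for n in processed]
for t in threads:
    t.start()

for i in range(100):             # enqueue 100 requests
    work.put(i)
for _ in threads:                # one shutdown signal per worker
    work.put(None)
for t in threads:
    t.join()

# Every message is handled exactly once, split between the two consumers.
total = len(processed["worker-1"]) + len(processed["worker-2"])
print(total)
```

Note that, just as in the deployment being described, neither consumer is guaranteed an exact 50/50 split; the point is only that both share the work while both are up, and either can carry the full load alone.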
You will note that at no point did I mention fault tolerant containers; this is because they really only fit a narrow set of use cases:
Ideally you would design your application to avoid the above scenarios.
Also missing is the use of an Activation Daemon; these are best suited to cases where you need to schedule the startup/shutdown of containers .. if not, I'd rather have startup scripts (Windows services or Unix boot scripts) that fire off the containers.
For additional tolerance to process failures (vs. host failures) you could consider adding replicated brokers, but I'm not sure what licensing you have.
For tolerance of network failures you could add a redundant network; if the machines are on the same LAN, often folks will simply put a crossover cable between secondary NICs on the machines.
Hope this is of some help.
- David
David, I may not have stated it correctly, but my environment mimics your recommendation except for the ESB container portion. Instead of having containers for the ESB objects (one on each machine), I have one container for the ESB objects on one machine, and the other machine hosts the backup container for it. Currently my service endpoints are set to be topics with shared subscriptions. So instead of the primary/backup container setup I currently have for my ESB, you would recommend setting up two containers (one on each server) that each hold an instance of my ESB container?
Sorry for the late reply .. but yes, exactly .. two containers (vs. primary/backup), each with an instance of the ESB container.