JMS clustering in Geronimo

JMS clustering in Geronimo is handled by the ActiveMQ component using blueprint services. You can configure brokers into a cluster, and if a JMS broker goes down, a JMS request can fail over to another broker by using Master/Slave functionality.

Prerequisite

Make sure the system module org/apache/geronimo/activemq-ra/3.0/car is loaded during server startup. Then, while the server is stopped, update the config-substitutions.properties file under the /var/config directory to specify the IP address or host name of each ActiveMQ node.

config-substitutions.properties
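A hypothetical excerpt of the file is shown below; the property name and address are illustrative assumptions, since the exact property names depend on your Geronimo installation:

```properties
# Bind this node to its own address so other brokers and clients can reach it
# (192.168.1.101 is a placeholder; use the real IP address or host name of this node)
ServerHostname=192.168.1.101
```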

JMS clustering scenarios

There are different kinds of Master/Slave configurations available according to the ActiveMQ documentation:

  • Pure Master Slave
  • Shared File System Master Slave
  • JDBC Master Slave

In the Geronimo server, all these configurations are handled using blueprint services. You need to update the content of activemq.xml within activemq-broker-blueprint-3.0.car under the /repository/org/apache/geronimo/configs/activemq-broker-blueprint/3.0 directory in accordance with the scenario you choose. The easiest way is to unzip activemq-broker-blueprint-3.0.car, modify activemq.xml, and then repack the archive.

See the following configurations for each scenario in Geronimo.

Pure Master Slave

With this scenario, you must specify the master and slave nodes explicitly and manually restart a failed master.

Master node

On the master node, you just need to specify that the current node is a master by using the brokerName attribute as follows.

activemq.xml
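A minimal sketch of the master broker configuration is shown below. The brokerName value "master" and the host and port in the transport connector are illustrative assumptions, not values taken from this document:

```xml
<!-- Master node: an ordinary broker tagged as the master via brokerName -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="master">
    <transportConnectors>
        <!-- masterIP:61616 is a placeholder for this node's address and port -->
        <transportConnector name="openwire" uri="tcp://masterIP:61616"/>
    </transportConnectors>
</broker>
```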

Slave node

Because each master has only one slave in the Pure Master/Slave scenario, the slave node must know the URI of the master node and also be tagged as a slave node by using the brokerName attribute.

activemq.xml
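A minimal sketch of the slave broker configuration is shown below, assuming ActiveMQ's Pure Master/Slave attributes; the brokerName value "slave" and the host names and ports are illustrative placeholders:

```xml
<!-- Slave node: tagged via brokerName and pointed at the master's URI -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="slave"
        masterConnectorURI="tcp://masterIP:61616"
        shutdownOnMasterFailure="false">
    <transportConnectors>
        <!-- slaveIP:61616 is a placeholder for this node's address and port -->
        <transportConnector name="openwire" uri="tcp://slaveIP:61616"/>
    </transportConnectors>
</broker>
```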

Client connection

JMS clients use the failover:// protocol to locate brokers in a cluster. See the following example:

failover://(tcp://masterIP:61616,tcp://slaveIP:61616)?randomize=false

Shared File System

In this scenario, you must use a shared file system to provide high availability of brokers and automatic discovery of master/slave nodes. The shared folder must grant write permission to all slave nodes.

Each node

On each node, configure a shared directory as the place where brokers are using the persistenceAdapter element as follows:

activemq.xml
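A minimal sketch of the persistence configuration is shown below; the directory path /sharedFileSystem/sharedBrokerData is an illustrative assumption for the mounted shared folder:

```xml
<!-- broker1 is a placeholder; use the IP address of the current node -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
    <persistenceAdapter>
        <!-- Every node points at the same shared directory; the first broker
             to obtain the file lock becomes master, the rest wait as slaves -->
        <kahaDB directory="/sharedFileSystem/sharedBrokerData"/>
    </persistenceAdapter>
</broker>
```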

Note that:

  • For the shared file system on a Linux node, you must mount the shared directory first.
  • For the shared file system on a Windows node, you can use the path such as //ipAddress/sharedFolder in the configuration.
  • On each node, broker1 should be replaced with the exact IP address of the current node.

Client connection

JMS clients use the failover:// protocol to locate brokers in a cluster. See the following example:

failover://(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)?randomize=false

JDBC Master Slave

In this scenario, you must use a shared database as the persistence engine to provide high availability and automatic recovery.

Each node

On each node, configure a shared database pool by using the jdbcPersistenceAdapter element as follows. We use a remote Oracle database server as an example:

activemq.xml
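A minimal sketch of the JDBC persistence configuration is shown below. The data source bean, the Oracle SID "orcl", and the credentials are illustrative assumptions; only dbServer as the database host placeholder comes from this document:

```xml
<!-- broker1 is a placeholder; use the IP address of the current node -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
    <persistenceAdapter>
        <!-- All nodes share one database; the broker holding the DB lock is master -->
        <jdbcPersistenceAdapter dataSource="#oracle-ds"/>
    </persistenceAdapter>
</broker>

<!-- Hypothetical shared Oracle data source; adjust driver, URL, and credentials -->
<bean id="oracle-ds" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
    <property name="url" value="jdbc:oracle:thin:@dbServer:1521:orcl"/>
    <property name="username" value="activemq"/>
    <property name="password" value="activemq"/>
</bean>
```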

Note that:

  • For the database server, dbServer should be replaced with the actual IP address of the database server.
  • On each node, broker1 should be replaced with the exact IP address of the current node.

Client connection

JMS clients use the failover:// protocol to locate brokers in a cluster. See the following example:

failover://(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)?randomize=false