JMS clustering in Geronimo

JMS clustering in Geronimo is handled directly by the ActiveMQ component. By updating the activemq.xml file under the /var/activemq/conf directory, you can configure brokers into a cluster so that, through the Master/Slave functionality, a JMS request fails over to another broker when the active broker goes down.

There are three kinds of Master/Slave configurations available according to the ActiveMQ documentation: Pure Master/Slave, Shared File System, and JDBC Master Slave.

Refer to the configuration below for each scenario in Geronimo.

Pure Master/Slave

With this scenario, you must specify the master and slave nodes explicitly, and you must manually restart a failed master.

Master node

On the master node, you just need to mark the current node as the master by using the brokerName attribute, as follows.

activemq.xml
...
<broker xmlns="http://activemq.apache.org/schema/core" 
        brokerName="master" 
        useJmx="false" 
        deleteAllMessagesOnStartup="true" 
        tmpDataDirectory="${activemq.data}/tmp_storage" 
        useShutdownHook="false" start="false">
...

Slave node

Because each master has only one slave in the Pure Master/Slave scenario, the slave node must know the URI of the master node and must also be tagged as a slave using the brokerName attribute.

activemq.xml
...
<broker xmlns="http://activemq.apache.org/schema/core" 
        brokerName="slave" 
        deleteAllMessagesOnStartup="true"
        useJmx="false" 
        masterConnectorURI="tcp://masterHostname:${ActiveMQPort}"
        tmpDataDirectory="${activemq.data}/tmp_storage" 
        useShutdownHook="false" start="false">
...

Client connection

JMS clients use the failover:// protocol to locate brokers in the cluster, for example:

failover://(tcp://masterhost:61616,tcp://slavehost:61616)?randomize=false
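
The following is a minimal sketch of such a client, assuming the ActiveMQ client libraries are on the classpath; the class name FailoverClient is hypothetical, and the hostnames match the placeholders in the URL above.

FailoverClient.java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        // randomize=false makes the client try the brokers in the listed
        // order, so it connects to the master first and only fails over
        // to the slave when the master becomes unreachable.
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover://(tcp://masterhost:61616,tcp://slavehost:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers, and consumers as usual ...
        connection.close();
    }
}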

Shared File System

In this scenario, you must use a shared file system to provide high availability of brokers and automatic discovery of the master and slave nodes: the first broker to lock the shared store becomes the master, and the remaining brokers wait as slaves. The shared folder must grant write permission to every broker.

Each node

On each node, configure the shared directory as the place where the broker stores its data, using the persistenceAdapter element as follows:

activemq.xml
...
    <persistenceAdapter>
      <amqPersistenceAdapter directory="/sharedFileSystem/sharedBrokerData"/>
    </persistenceAdapter>
...

Note that:

  • For the shared file system on a Linux node, you must mount the shared directory first.
  • For the shared file system on a Windows node, you can use a path such as //ipAddress/sharedFolder in the configuration.

Client connection

JMS clients use the failover:// protocol and list all of the brokers that share the file system, for example:

failover://(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)?randomize=false
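
Building on the connection sketch above, the following fragment sends a message through the failover transport. The queue name TEST.QUEUE is a hypothetical example; if the current master dies during the send, the transport blocks, reconnects to the new master, and then resumes.

FailoverProducer.java
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverProducer {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory(
                "failover://(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)?randomize=false")
                .createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = session.createQueue("TEST.QUEUE"); // hypothetical queue name
        MessageProducer producer = session.createProducer(queue);
        // If the active broker goes down mid-send, the failover transport
        // reconnects to another broker and resumes delivery transparently.
        producer.send(session.createTextMessage("hello from a clustered client"));
        connection.close();
    }
}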

JDBC Master Slave

In this scenario, you must use a shared database as the persistence engine, which also provides automatic recovery: the first broker to obtain the exclusive database lock becomes the master, and the remaining brokers wait for the lock as slaves.

Each node

On each node, configure a shared database pool using the jdbcPersistenceAdapter element as follows; this example uses an embedded Derby database server:

activemq.xml
...
    <persistenceAdapter>
      <jdbcPersistenceAdapter dataSource="#Shared-ds"/>
    </persistenceAdapter>
...
    <bean id="Shared-ds" class="org.apache.derby.jdbc.EmbeddedDataSource">
      <property name="databaseName" value="Shared_db"/>
      <property name="createDatabase" value="create"/>
    </bean>

Client connection

JMS clients again use the failover:// protocol to locate the brokers, for example:

failover://(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)?randomize=false
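
To complete the picture, this last sketch registers an asynchronous consumer; after a failover the transport re-registers the consumer with the new master, so the listener keeps receiving messages. The queue name TEST.QUEUE is again hypothetical.

FailoverConsumer.java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverConsumer {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory(
                "failover://(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)?randomize=false")
                .createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.QUEUE"));
        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                try {
                    System.out.println("received: " + ((TextMessage) message).getText());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        Thread.sleep(60000); // keep the JVM alive while messages arrive
        connection.close();
    }
}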