JMS clustering in Geronimo
JMS clustering in Geronimo is handled by the ActiveMQ component using blueprint services. You can configure brokers into a cluster, and a JMS request can fail over to another broker when the active JMS broker goes down, using Master/Slave functionality.
Make sure the system module org/apache/geronimo/activemq-ra/3.0/car is loaded during server startup. Then, with the server stopped, update the config-substitutions.properties file under the /var/config directory to specify the IP address or host name for each ActiveMQ node:
ActiveMQHostname=hostname/IP
According to the ActiveMQ documentation, several kinds of Master/Slave configurations are available:
- Pure Master/Slave
- Shared File System Master/Slave
- JDBC Master/Slave
In the Geronimo server, all of these configurations are handled using blueprint services. You need to update the content of activemq.xml within activemq-broker-blueprint-3.0.car under the /repository/org/apache/geronimo/configs/activemq-broker-blueprint/3.0 directory in accordance with the scenario you choose. The easiest way is to unzip activemq-broker-blueprint-3.0.car, modify activemq.xml, and repack the archive.
See the following configuration for each scenario in Geronimo.

Pure Master/Slave

With this scenario, you must specify the master and slave nodes explicitly, and manually restart a failed master.
On the master node, you just need to specify that the current node is a master by using the brokerName attribute as follows.
... <cm:property name="serverHostname" value="masterIP"/> ...
Because each master has only one slave in the Pure Master/Slave scenario, the slave node must know the URI of the master node and also be tagged as a slave node by using the brokerName attribute.
... <cm:property name="serverHostname" value="slaveIP"/> ...
JMS clients use the failover:// protocol to locate brokers in a cluster. See the following example:
failover://(tcp://masterIP:61616,tcp://slaveIP:61616)?randomize=false
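The failover URL is simply a comma-separated list of broker transport URIs; as a small sketch (host names are illustrative), it can be assembled from the node list configured earlier:

```shell
# Compose a failover: URL from the broker endpoints; randomize=false
# makes clients try the brokers in the listed (master-first) order.
brokers="tcp://masterIP:61616 tcp://slaveIP:61616"
url="failover://($(echo "$brokers" | tr ' ' ','))?randomize=false"
echo "$url"
```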
Shared File System Master/Slave

In this scenario, you must use a shared file system to provide high availability of brokers and automatic discovery of master/slave nodes. The shared folder must grant write permission to all the slaves.
On each node, configure the same shared directory as the broker data store by using the persistenceAdapter element as follows:
...
<cm:property name="serverHostname" value="broker1"/>
...
<amq:persistenceAdapter>
  <amq:amqPersistenceAdapter directory="/sharedFileSystem/sharedBrokerData"/>
</amq:persistenceAdapter>
...
Note that if the shared folder is a network share, you should reference it in the form //ipAddress/sharedFolder in the configuration.

JMS clients use the failover:// protocol to locate brokers in a cluster. See the following example:
failover://(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)?randomize=false
JDBC Master/Slave

In this scenario, you must use a shared database as the persistence engine, which also provides automatic recovery.
On each node, configure a shared database pool by using the jdbcPersistenceAdapter
element as follows. We use a remote Oracle database server as an example:
...
<cm:property name="serverHostname" value="broker1"/>
...
<amq:persistenceAdapter>
  <amq:jdbcPersistenceAdapter dataSource="#oracle-ds"/>
</amq:persistenceAdapter>
...
<bean id="oracle-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
  <property name="url" value="jdbc:oracle:thin:@dbServer:1521:AMQDB"/>
  <property name="username" value="scott"/>
  <property name="password" value="tiger"/>
  <property name="maxActive" value="200"/>
  <property name="poolPreparedStatements" value="true"/>
</bean>
...
JMS clients use the failover:// protocol to locate brokers in a cluster. See the following example:
failover://(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)?randomize=false
Copyright © 2003-2013, The Apache Software Foundation. Licensed under ASL 2.0.