Tomcat Native Clustering

The Tomcat Web container provides a native clustering solution that can be configured in Geronimo by using GBean definitions within config.xml or, starting from v2.2, by using server.xml just as you would for a standalone Tomcat server. This document goes through the available GBeans and how to configure them for clustering in a Geronimo server with a Tomcat Web container.

Cluster Configuration Elements

Following are the parameters and attributes that you will need to configure for Tomcat native clustering.

  • Cluster
    • ClusterListenerChain
    • TomcatValveChain
    • Channel
      • Membership
      • Receiver
      • Sender
      • ChannelInterceptor
        • StaticMember
    • ClusterManager

Although sufficient for many applications, Tomcat clusters have some limitations:

  • This feature does not replicate stateful session Enterprise JavaBeans (EJBs). Do not use stateful session EJBs in your distributed applications.
  • This feature does not replicate dynamic updates to the Java Naming and Directory Interface (JNDI). You will have to configure all the JNDI names that are used by your distributed applications in every node of the cluster.
  • This feature does not replicate distributable Web applications to other nodes in the cluster. You will have to deploy your distributable Web applications to every node.

Consider a cluster configuration to improve the scalability and availability of your Web application. The following sections provide detailed instructions about how to set up your cluster nodes.

Setting up a clustering environment

To set up a small cluster, you need at least two nodes and one HTTP server. The HTTP server serves requests from clients and balances the traffic load among the nodes. Each node is configured to use the same logical Tomcat engine, and session affinity is enabled.

Planning your cluster

The Tomcat cluster replicates HTTP session data by memory-to-memory multicast communication.

Every node transmits its session data to every other node in the cluster. This algorithm is efficient only when the clusters are small. If the clusters grow too large, the overhead in storage utilization and network traffic becomes excessive. To avoid excessive overhead, consider dividing your nodes into several smaller clusters.

HTTP session data is replicated among the nodes in the cluster by using a multicast broadcast. All nodes in the cluster must be on the same physical subnet and multicast broadcast must be supported by that subnet.

Preparing your Web application

To participate in a cluster configuration, your Web application must be implemented correctly.

  • Ensure that every object placed in the HTTP session implements java.io.Serializable. The clustering feature serializes the objects when it distributes them to the other nodes in the cluster.
  • The deployment descriptor for your Web application, that is, the web.xml file in the Web archive, must indicate that your Web application is distributable. To do this, insert the distributable element in the deployment descriptor.
  • Do not use stateful session Enterprise JavaBeans (EJBs). The clustering feature does not replicate stateful EJBs among the nodes in the cluster.
  • If your Web application uses a database, every node in the cluster must be able to access the database. Ensure that the JDBC drivers are installed on every node and that the datasource objects are defined correctly on every node.
  • Do not depend on dynamic updates to the Java Naming and Directory Interface (JNDI). You need to configure all the JNDI names used by your application in every node of the cluster. The clustering feature does not replicate JNDI changes among the nodes in the cluster.
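The distributable element mentioned above is an empty element placed directly inside web-app. A minimal excerpt from web.xml (the display-name value is illustrative):

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
    <display-name>MyClusteredApp</display-name>
    <!-- Mark the application as distributable so that HTTP sessions
         are replicated to the other nodes in the cluster -->
    <distributable/>
</web-app>
```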

Enabling session affinity

Support for session affinity, also known as sticky session support, allows a load balancing service to route an HTTP request back to the same node that created the HTTP session associated with that request until that node fails. You must use session affinity if you configure an asynchronous type of session replication. With asynchronous replication, the reply is returned before the HTTP session is replicated, so the next request using that session might arrive before the replication is complete. In this case, routing the request to the node that sent the reply to the last request, and therefore originated the replication, ensures that the request is processed using the correct session data.

In a Geronimo server, configuring session affinity works the same way as in a standalone Tomcat Web container. To enable session affinity, modify server.xml under the <GERONIMO_HOME>/var/catalina directory and configure the <Engine> element with a jvmRoute attribute value that is unique for each node in the cluster. The load balancer returns this value in the session cookie or the encoded URL sent to the browser; when a related request arrives, the load balancer uses the value to route the request to the correct node. See the following example:

  1. For every node in the cluster, stop the server and update server.xml by setting jvmRoute="nodeId" on the <Engine> element, where
    • nodeId is a node identifier that is unique among all the nodes in the cluster. If you are using mod_jk, make sure that the jvmRoute attribute value matches your worker name in workers.properties.
  2. Restart the server to enable the new configuration.
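The change in step 1 amounts to adding the jvmRoute attribute to the existing <Engine> element. A sketch, assuming a node identifier of node1 (keep whatever name and defaultHost values your server.xml already has):

```xml
<!-- jvmRoute must be unique per node; with mod_jk it must match
     the worker name in workers.properties -->
<Engine name="geronimo" defaultHost="0.0.0.0" jvmRoute="node1">
    <!-- existing Host, Realm, and other child elements unchanged -->
</Engine>
```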

Engaging load balancing and failover

Initially, the server configuration includes an AJP connector suitable for exchanging messages with a load balancing service. See Configuring a remote Apache HTTP server for more information about the HTTP server configuration.

Configuring the cluster at application level

To configure clustering at the application level, put all the related GBean settings in your deployment plan to make sure that HTTP sessions are replicated successfully. You must use GBean definitions within deployment plans to configure the cluster configuration elements of the Tomcat Web container. If you want to deploy your Web application to a cluster, install your WAR file on each cluster member, ensuring that you use the correct deployment plan for each member.

Sample Tomcat clustering with multicast configuration

The template for your Web application deployment plan is shown as follows. See Creating deployment plans for Web applications for more information about the parameters.

geronimo-web.xml
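A sketch of a multicast clustering plan follows. The className attribute values are the standard Tomcat 6 clustering classes, but the GBean wrapper class names in the org.apache.geronimo.tomcat.cluster package (apart from StaticMemberGBean, which is named later in this document) and the plan namespaces are assumptions to verify against the javadoc and samples of your Geronimo release:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://geronimo.apache.org/xml/ns/j2ee/web-2.0.1"
         xmlns:sys="http://geronimo.apache.org/xml/ns/deployment-1.2">
    <sys:environment>
        <sys:moduleId>
            <sys:groupId>default</sys:groupId>
            <!-- artifactId should match the WAR file name -->
            <sys:artifactId>web-cluster-server1</sys:artifactId>
            <sys:version>1.0</sys:version>
            <sys:type>war</sys:type>
        </sys:moduleId>
    </sys:environment>
    <context-root>/servlet-examples-cluster</context-root>

    <!-- Cluster GBean wrapping Tomcat's SimpleTcpCluster -->
    <gbean name="TomcatCluster"
           class="org.apache.geronimo.tomcat.cluster.CatalinaClusterGBean">
        <attribute name="className">org.apache.catalina.ha.tcp.SimpleTcpCluster</attribute>
        <reference name="Channel"><name>TomcatChannel</name></reference>
    </gbean>

    <!-- Channel with multicast membership, receiver, and sender -->
    <gbean name="TomcatChannel"
           class="org.apache.geronimo.tomcat.cluster.ChannelGBean">
        <attribute name="className">org.apache.catalina.tribes.group.GroupChannel</attribute>
        <reference name="Membership"><name>TomcatMembership</name></reference>
        <reference name="Receiver"><name>TomcatReceiver</name></reference>
        <reference name="Sender"><name>TomcatSender</name></reference>
    </gbean>
    <gbean name="TomcatMembership"
           class="org.apache.geronimo.tomcat.cluster.MembershipServiceGBean">
        <attribute name="className">org.apache.catalina.tribes.membership.McastService</attribute>
        <attribute name="initParams">mcastAddr=228.0.0.4,mcastPort=45564,frequency=500,dropTime=3000</attribute>
    </gbean>
    <gbean name="TomcatReceiver"
           class="org.apache.geronimo.tomcat.cluster.ReceiverGBean">
        <attribute name="className">org.apache.catalina.tribes.transport.nio.NioReceiver</attribute>
        <attribute name="initParams">address=auto,port=4000</attribute>
    </gbean>
    <gbean name="TomcatSender"
           class="org.apache.geronimo.tomcat.cluster.SenderGBean">
        <attribute name="className">org.apache.catalina.tribes.transport.ReplicationTransmitter</attribute>
    </gbean>
</web-app>
```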

where

  • web-cluster-server1 should match the WAR file name. It can be different for each node in the cluster.

On each node, deploy your Web application by using either the administration console or the deploy command, following this syntax:

deploy --user name --password word deploy archive plan

where

  • name is replaced with a user name authorized to manage the server. If you omit this option, you will be prompted to enter a user name.
  • word is replaced with the password used to authenticate the user. If you omit this option, you will be prompted to enter a password.
  • archive is replaced with a file specification to your Web application WAR file.
  • plan is replaced with a file specification to your deployment plan.

Note: After the server installation, the default user name is system, and the default password is manager.

Sample Tomcat clustering with unicast configuration

The following code snippet is part of a clustering example that uses unicast configuration. Static members are defined using org.apache.geronimo.tomcat.cluster.StaticMemberGBean. You have to define each static member within your deployment plan to make sure that your application is clustered successfully, and make sure that TCP ports for session replication are defined consistently on each node.

Excerpt from geronimo-web.xml
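A sketch of the unicast-specific GBeans follows, using the DisableMCastInterceptor, StaticMemberInterceptor, and StaticMember names referenced below. The interceptor className values are the usual Tomcat tribes interceptor classes; the ChannelInterceptorGBean and ReceiverGBean wrapper class names are assumptions to verify against your Geronimo release:

```xml
<!-- Turn off multicast discovery and chain in the static member list.
     The wrapper class and the DisableMcastInterceptor class name are
     assumptions; check the sample plan shipped with your release. -->
<gbean name="DisableMCastInterceptor"
       class="org.apache.geronimo.tomcat.cluster.ChannelInterceptorGBean">
    <attribute name="className">org.apache.catalina.tribes.group.interceptors.DisableMcastInterceptor</attribute>
    <reference name="NextInterceptor"><name>StaticMemberInterceptor</name></reference>
</gbean>
<gbean name="StaticMemberInterceptor"
       class="org.apache.geronimo.tomcat.cluster.ChannelInterceptorGBean">
    <attribute name="className">org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor</attribute>
    <reference name="StaticMember"><name>StaticMember</name></reference>
</gbean>
<!-- The second static member, reached directly over TCP -->
<gbean name="StaticMember"
       class="org.apache.geronimo.tomcat.cluster.StaticMemberGBean">
    <attribute name="className">org.apache.catalina.tribes.membership.StaticMember</attribute>
    <attribute name="initParams">host=IPAddress2,port=TCP_port2,uniqueId={0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}</attribute>
</gbean>
<!-- Receiver bound to this node's own address and replication port -->
<gbean name="TomcatReceiver"
       class="org.apache.geronimo.tomcat.cluster.ReceiverGBean">
    <attribute name="className">org.apache.catalina.tribes.transport.nio.NioReceiver</attribute>
    <attribute name="initParams">address=IPAddress1,port=TCP_port1</attribute>
</gbean>
```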

Where

  • IPAddress1 is the IP address or host name of the current static member.
  • TCP_port1 is the TCP port on the current node to listen for session replication data from other static members.
  • IPAddress2 is the IP address or host name of the second static member.
  • TCP_port2 is the TCP port on the second static member to listen for session replication data from other static members.

To convert this example to a multicast configuration, remove the DisableMCastInterceptor, StaticMemberInterceptor, and StaticMember definitions. Also, change the value of the address attribute of the TomcatReceiver definition to auto.

Configuring the cluster at Engine or Host level

You can configure the cluster just as you would for a standalone Tomcat Web container. Add the clustering configuration to the var/catalina/server.xml file according to your requirements. Both multicast and unicast are supported. See the following samples when you want to configure an Engine or Host level cluster. Make sure that these segments are enclosed within the <Engine> or <Host> element in server.xml. ${clusterName} comes from var/config/config-substitutions.properties and should be identical on all nodes in a cluster.

Details about each component of the cluster and its configuration options can be found in the Tomcat documentation under Tomcat 6 clustering.

Sample multicast configuration code

server.xml
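A representative multicast <Cluster> element, using the stock Tomcat 6 clustering classes; the multicast address and ports shown are the usual Tomcat defaults, which you can tune:

```xml
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         clusterName="${clusterName}" channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <!-- Multicast heartbeat used for automatic membership discovery -->
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4" port="45564"
                frequency="500" dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto" port="4000" autoBind="100"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>
  <!-- ReplicationValve triggers replication at the end of each request;
       JvmRouteBinderValve re-binds a session after failover -->
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
```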

Sample unicast configuration code

server.xml
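A representative unicast <Cluster> element. Multicast membership is skipped via channelStartOptions="3", and the other node is listed as a StaticMember; replace the IPAddress and TCP_port placeholders with the values described below:

```xml
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         clusterName="${clusterName}" channelSendOptions="8"
         channelStartOptions="3">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <!-- Bind the receiver to this node's own replication port -->
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto" port="4000"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <!-- Static list of the other cluster members -->
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
      <Member className="org.apache.catalina.tribes.membership.StaticMember"
              host="IPAddress" port="TCP_port"
              uniqueId="{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
    </Interceptor>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
```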

Where

  • IPAddress is the IP address or host name of the second static member.
  • TCP_port is the TCP port on the second static member to listen for session replication data from other static members.