Tomcat Native Clustering
The Tomcat Web container provides a native clustering solution that can be configured in Geronimo through GBean definitions in config.xml or in your deployment plans. This document describes the available GBeans and how to configure them for clustering in a Geronimo server with a Tomcat Web container.
A cluster configuration should be considered when you want to improve the scalability and availability of your Web application. The following sections provide detailed instructions on how to set up your cluster nodes and how to deploy your applications with Tomcat clustering enabled.
Generally, a small cluster requires at least two nodes and one HTTP server. The HTTP server receives requests from clients and balances the traffic load across the nodes. In addition, every node must be configured with the same logical Tomcat engine name and with session affinity enabled.
The Tomcat cluster replicates HTTP session data through in-memory, multicast communication.
Every node transmits its session data to every other node in the cluster, so this algorithm is only efficient when clusters are small. If a cluster grows too large, the overhead in storage utilization and network traffic becomes excessive. To avoid this overhead, consider dividing your nodes into several smaller clusters.
HTTP session data is replicated among the nodes in the cluster using a multicast broadcast. All nodes in the cluster must be on the same physical subnet and multicast broadcast must be supported by that subnet.
To participate in a cluster configuration, your Web application must be written to be distributable: objects stored in the HTTP session must be serializable, and the application's web.xml should declare the distributable element.
Support for session affinity, also known as sticky session support, allows a load balancing service to route an HTTP request back to the same node that created the HTTP session associated with that request until that node fails. You must use session affinity if you configure an asynchronous type of session replication. With asynchronous replication, the reply is returned before the HTTP session is replicated so there is always a chance that the next request using that session arrives before the replication is complete. In this case, the only way to ensure that the request is processed using the correct session data is to route the request to the node that sent the reply to the last request and originated the replication.
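As an illustration, if an Apache HTTP server with mod_jk sits in front of the cluster, session affinity is enabled in workers.properties. The worker names, hosts, and ports below are hypothetical; each worker name must match the jvmRoute of the node it routes to:

```properties
# workers.properties (illustrative hosts and ports)
worker.list=loadbalancer

# One AJP worker per cluster node; the worker name must equal
# that node's jvmRoute for sticky sessions to work
worker.node1.type=ajp13
worker.node1.host=192.168.1.101
worker.node1.port=8009

worker.node2.type=ajp13
worker.node2.host=192.168.1.102
worker.node2.port=8009

# Load balancer worker with sticky sessions enabled
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1
```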
For every node in the cluster, stop the server and then update config.xml as follows:
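The exact contents depend on your Geronimo release; the sketch below only shows the general idea, and the module name and attribute names are illustrative. The key points are that every node uses the same engine name while jvmRoute is unique per node, so the load balancer can identify which node created a session:

```xml
<!-- config.xml excerpt (illustrative); verify the module and
     attribute names against your Geronimo version -->
<module name="org.apache.geronimo.configs/tomcat6/2.1/car">
    <gbean name="TomcatEngine">
        <attribute name="initParams">
            name=geronimo
            jvmRoute=node1
        </attribute>
    </gbean>
</module>
```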
Initially, the server configuration includes an AJP connector suitable for exchanging messages with a load balancing service. See Configuring a remote Apache HTTP server for more information about the HTTP server configuration.
To deploy your application to a cluster and ensure that HTTP sessions are replicated successfully, you must configure the cluster elements of the Tomcat Web container through GBean definitions in your deployment plans.
If you want to deploy your Web application to a cluster, install your WAR files on the appropriate cluster members, ensuring that you use the correct deployment plan for each member. Here is a template for your Web application deployment plan:
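A minimal sketch of such a plan follows. It assumes the Tomcat 6 SimpleTcpCluster components wrapped by GBean classes modeled on the Geronimo 2.1 samples; the moduleId, context root, reference names, and all initParams values are placeholders to adapt, and the class and attribute names should be verified against your server version:

```xml
<web-app xmlns="http://geronimo.apache.org/xml/ns/j2ee/web-2.0.1">
    <environment xmlns="http://geronimo.apache.org/xml/ns/deployment-1.2">
        <moduleId>
            <groupId>sample</groupId>
            <artifactId>MyClusteredApp</artifactId>
            <version>1.0</version>
            <type>car</type>
        </moduleId>
    </environment>
    <context-root>/myapp</context-root>

    <!-- Cluster GBean: wraps Tomcat's SimpleTcpCluster -->
    <gbean name="TomcatCluster"
           class="org.apache.geronimo.tomcat.cluster.CatalinaClusterGBean">
        <attribute name="className">org.apache.catalina.ha.tcp.SimpleTcpCluster</attribute>
        <reference name="Channel"><name>TomcatChannel</name></reference>
    </gbean>

    <!-- Group communication channel -->
    <gbean name="TomcatChannel"
           class="org.apache.geronimo.tomcat.cluster.ChannelGBean">
        <attribute name="className">org.apache.catalina.tribes.group.GroupChannel</attribute>
        <reference name="Membership"><name>TomcatMembership</name></reference>
        <reference name="Receiver"><name>TomcatReceiver</name></reference>
        <reference name="Sender"><name>TomcatSender</name></reference>
    </gbean>

    <!-- Multicast membership: all nodes must use the same address/port -->
    <gbean name="TomcatMembership"
           class="org.apache.geronimo.tomcat.cluster.MembershipServiceGBean">
        <attribute name="className">org.apache.catalina.tribes.membership.McastService</attribute>
        <attribute name="initParams">
            mcastAddress=228.0.0.4
            mcastPort=45564
        </attribute>
    </gbean>

    <!-- Receiver: address=auto lets each node pick its own address -->
    <gbean name="TomcatReceiver"
           class="org.apache.geronimo.tomcat.cluster.ReceiverGBean">
        <attribute name="className">org.apache.catalina.tribes.transport.nio.NioReceiver</attribute>
        <attribute name="initParams">
            address=auto
            port=4000
        </attribute>
    </gbean>

    <!-- Sender: transmits replicated session data to the other nodes -->
    <gbean name="TomcatSender"
           class="org.apache.geronimo.tomcat.cluster.SenderGBean">
        <attribute name="className">org.apache.catalina.tribes.transport.ReplicationTransmitter</attribute>
    </gbean>
</web-app>
```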
On each node, deploy your Web application, either via the administrative console or the deploy command, using this syntax:
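For example, using the deploy script shipped in the bin directory of a Geronimo installation (the WAR and plan file names are placeholders, and the path to your installation will differ):

```shell
cd <geronimo_home>/bin
./deploy.sh --user system --password manager deploy MyClusteredApp.war geronimo-web.xml
```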
Note: After server installation, the default user name is initially system, and the default password is manager.
Static members in a cluster are defined using org.apache.geronimo.tomcat.cluster.StaticMemberGBean in the deployment plan. You must specify all static members in the deployment plan to ensure the application is clustered successfully. Refer to the sample code below for an application-scoped unicast clustering configuration on one node. The sample assumes there are only two static members in the cluster.
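The snippet below sketches the cluster-specific GBeans for node 1 of such a two-node unicast setup. The interceptor wrapper class and the Tomcat tribes class names are illustrative and should be verified against your version; hosts, ports, and the uniqueId are placeholders:

```xml
<!-- Node 1's plan: multicast is disabled and node 2 is listed statically -->
<gbean name="DisableMCastInterceptor"
       class="org.apache.geronimo.tomcat.cluster.ChannelInterceptorGBean">
    <!-- Class name illustrative: an interceptor that suppresses multicast -->
    <attribute name="className">org.apache.catalina.tribes.group.interceptors.DisableMcastInterceptor</attribute>
    <reference name="NextInterceptor"><name>StaticMemberInterceptor</name></reference>
</gbean>

<gbean name="StaticMemberInterceptor"
       class="org.apache.geronimo.tomcat.cluster.ChannelInterceptorGBean">
    <attribute name="className">org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor</attribute>
    <reference name="StaticMember"><name>StaticMember2</name></reference>
</gbean>

<!-- The other member of the cluster (node 2) -->
<gbean name="StaticMember2"
       class="org.apache.geronimo.tomcat.cluster.StaticMemberGBean">
    <attribute name="className">org.apache.catalina.tribes.membership.StaticMember</attribute>
    <attribute name="initParams">
        host=192.168.1.102
        port=4000
        uniqueId={0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2}
    </attribute>
</gbean>
```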
A few notes for better understanding of the sample deployment plan:
To convert this example to a multicast configuration, the DisableMCastInterceptor, StaticMemberInterceptor, and StaticMember definitions should be removed. Also, the value for the address attribute for the ReceiverGBean definition should be changed to auto.
To set up an application-scoped unicast clustering with more than two nodes, make sure you have defined all the static members in the deployment plan for each server node.
Interceptors can perform actions when a message is sent or received. Use a NextInterceptor reference in interceptor configurations to chain interceptors together. You can control how client requests are processed by arranging the order of the interceptor chain. In the following sample code, when the TcpFailureDetector interceptor catches errors, it calls the next interceptor, StaticMember1Interceptor. The static member referenced inside StaticMember1Interceptor, static member 2 in this example, will immediately take over the work of the first static member.
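The failure-detection chain just described might look like this in the deployment plan; the GBean wrapper class name is illustrative, while the tribes class names follow the Tomcat 6 interceptor classes:

```xml
<gbean name="TcpFailureDetector"
       class="org.apache.geronimo.tomcat.cluster.ChannelInterceptorGBean">
    <attribute name="className">org.apache.catalina.tribes.group.interceptors.TcpFailureDetector</attribute>
    <!-- When TcpFailureDetector catches an error, the next interceptor runs -->
    <reference name="NextInterceptor"><name>StaticMember1Interceptor</name></reference>
</gbean>

<gbean name="StaticMember1Interceptor"
       class="org.apache.geronimo.tomcat.cluster.ChannelInterceptorGBean">
    <attribute name="className">org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor</attribute>
    <!-- Static member 2 takes over the work of the failed first member -->
    <reference name="StaticMember"><name>StaticMember2</name></reference>
</gbean>
```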
All the static members involved, and the relationships between them, must be defined in the deployment plan. Use a NextStaticMember reference in the definition of each static member, except for the last one, to chain static members together. When static member 2 in this example fails to take over the work, static member 3 will immediately take over, and the same applies to longer chains: when a static member fails to take over the work of the previous one, it calls the next static member in the chain. The request is passed along until a static member can handle the work or the end of the chain is reached.
See the following deployment plan snippet as a complete example of setting up application-scoped unicast clustering with more than two nodes.
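Sketched below is the static member portion of such a plan for a three-node setup; hosts, ports, and uniqueIds are placeholders, and the class names follow the conventions used in the earlier samples. Note that only the last member omits the NextStaticMember reference:

```xml
<gbean name="StaticMember2"
       class="org.apache.geronimo.tomcat.cluster.StaticMemberGBean">
    <attribute name="className">org.apache.catalina.tribes.membership.StaticMember</attribute>
    <attribute name="initParams">
        host=192.168.1.102
        port=4000
        uniqueId={0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2}
    </attribute>
    <!-- Static member 3 is tried when member 2 cannot take over -->
    <reference name="NextStaticMember"><name>StaticMember3</name></reference>
</gbean>

<gbean name="StaticMember3"
       class="org.apache.geronimo.tomcat.cluster.StaticMemberGBean">
    <attribute name="className">org.apache.catalina.tribes.membership.StaticMember</attribute>
    <attribute name="initParams">
        host=192.168.1.103
        port=4000
        uniqueId={0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3}
    </attribute>
    <!-- Last member in the chain: no NextStaticMember reference -->
</gbean>
```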