The typical SAP J2EE Engine 6.20 Cluster installation provides a single dispatcher and a single server. Additional server and dispatcher nodes can be added using either SAP J2EE Engine 6.20 Config Tool or Property Files. Adding nodes typically increases performance and scalability.
To add a node to the cluster, you must first create the node, and then run and connect it to the cluster. Cluster nodes are created locally – that is, the system administrator cannot create nodes on a remote machine. To add a cluster node on a particular machine, SAP J2EE Engine 6.20 (“Cluster” type) must be installed on that machine. The Creating Cluster Nodes section explains how to create new server and dispatcher nodes.
When a new node is created, it must be configured properly to run within the existing cluster. The configuration of the newly created cluster nodes is explained in the Configuring Additional Nodes section below.
This task can be performed using either the Config Tool or property files.
SAP J2EE Engine 6.20 Config Tool enables you to add server and dispatcher nodes to the cluster. After the new node has been added successfully, it can also be configured using the Config Tool.
Note: For detailed information on how to add cluster nodes using the Config Tool, refer to the Config Tool section in this manual.
The system administrator can create a new node in the SAP J2EE Engine 6.20 cluster by changing the configuration in the property files, as follows:

- If a server node is added: the OpenPort and JoinPort properties in the respective Cluster Manager property file must be changed as well; otherwise, the new node cannot connect to the dispatcher. In addition, the ClusterHosts property in the Cluster Manager property file must be set for the respective host.
- If a dispatcher node is added: the OpenPort and JoinPort properties in the respective Cluster Manager property file must be changed as well.

When a new node has been created, it can be run and connected to the SAP J2EE Engine 6.20 cluster. When joining nodes to a cluster, the system administrator must take into consideration the ports used on the machine where the new node is to run. All communication services on cluster nodes have a default port property that must be changed if the particular port is already in use on that machine. It is essential that the ports specified for HTTP Service, P4 Service, JMS Service, and Telnet Service are not already in use by another cluster node running on the corresponding machine. The values of the OpenPort and JoinPort properties must not be duplicated for cluster nodes running on a single machine.
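As an illustration, the Cluster Manager property file of an additional server node running on the same machine as the default node might contain entries along these lines. The concrete port values here are assumptions chosen for the example, not documented defaults; only the rule that they must not collide with ports already in use on the machine comes from the text above.

    # Cluster Manager property file of the new server node (illustrative values)
    OpenPort=2079                  # must differ from the OpenPort of every other node on this machine
    JoinPort=2081                  # must differ from the JoinPort of every other node on this machine
    ClusterHosts=localhost:2078    # host:JoinPort of the cluster element to join

The same uniqueness check applies to the port properties of HTTP Service, P4 Service, JMS Service, and Telnet Service on the new node.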
When creating new cluster nodes with the Config Tool, the system administrator should remember that for server nodes with a number higher than 10 (for example, server11, server12, and so on), the default value of the Cluster Manager ClusterHosts property is localhost:2078. This means that the corresponding server node tries to connect only to a cluster element with JoinPort 2078 that is running locally. To connect to other cluster elements that run on remote hosts, their hosts and JoinPort values must be specified in the ClusterHosts property. The hosts must be separated by semicolons (;) and can be specified either by name or by IP address. Respectively, for dispatcher nodes with numbers higher than 5 (that is, dispatcher6, dispatcher7, and so on), the default value for ClusterHosts is localhost:2078;localhost:2062;localhost:2063;localhost:2064;localhost:2065. More hosts can be added as well.
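For example, a server node that should also reach cluster elements on two remote hosts might use an entry like the following; the host name and IP address are placeholders, while the host:JoinPort format and the semicolon separator are as described above.

    ClusterHosts=localhost:2078;host-a.example.com:2078;10.0.0.15:2078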
Additional server nodes can be either primary or non-primary storage servers. Primary servers hold data that is replicated on other primary servers to provide for fail-over recovery. To configure the new server node as a primary storage server, the system administrator must set the DependentElement property of Cluster Manager to false. The same task can be performed using the Config Tool by selecting the corresponding cluster element and setting the “Primary Element” indicator. The number of primary server nodes may vary. Typically, having more primary storage servers adds stability and security to the system, but it requires more system resources.
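In the Cluster Manager property file of the new server node, this amounts to a single entry (a minimal sketch; the inverse value for non-primary nodes is inferred from the description above):

    # Run this server node as a primary storage server
    DependentElement=false

    # A non-primary (dependent) storage server would instead use:
    # DependentElement=true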
When a newly generated server node connects to the cluster, a database replication process starts. If the database is too large, the timeout period may be exceeded and the corresponding server node will fail to connect. To overcome this problem, increase the value of the CoreLoadTimeout property in the ServiceManager.properties file.
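A sketch of such a change in ServiceManager.properties; the numeric value and its unit are not documented in this section, so the figure below is only a placeholder to be sized to the database in question.

    # Give the initial database replication more time to complete on large databases
    # (placeholder value; check the unit and default in your installation's ServiceManager.properties)
    CoreLoadTimeout=600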
In addition, when configuring the cluster, the system administrator may choose to restrict access to one or more of its nodes from certain machines (specified either by host names or IP addresses). This is achieved by configuring IPVerification Manager appropriately.
Note: For more information on configuring Cluster Manager, Service Manager and IPVerification Manager, see the Managers Administration Reference section in this manual.