J2EE clustering, Part 1

Clustering technology is crucial to good Website design; do you know the basics?

Enterprises are choosing Java 2, Enterprise Edition (J2EE) to deliver their mission-critical applications over the Web. Within the J2EE framework, clusters provide mission-critical services to ensure minimal downtime and maximum scalability. A cluster is a group of application servers that transparently run your J2EE application as if it were a single entity. To scale, you add machines to the cluster; to minimize downtime, you make every component of the cluster redundant.

In this article we will gain a foundational understanding of clustering, clustering methods, and important cluster services. Because clustering approaches vary across the industry, we will examine the benefits and drawbacks of each approach. Further, we will discuss the important cluster-related features to look for in an application server.

To apply our newly acquired clustering knowledge to the real world, we will see how HP Bluestone Total-e-Server 7.2.1, Sybase Enterprise Application Server 3.6, SilverStream Application Server 3.7, and BEA WebLogic Server 6.0 each implement clusters.

In Part 2 of this series, we will cover programming and failover strategies for clusters, as well as test our four application server products to see how they scale and fail over.

Clusters defined

J2EE application server vendors define a cluster as a group of machines working together to transparently provide enterprise services (support for JNDI, EJB, JSP, HttpSession and component failover, and so on). They leave the definition purposely vague because each vendor implements clustering differently. At one end of the spectrum rest vendors who put a dispatcher in front of a group of independent machines, none of which has knowledge of the other machines in the cluster. In this scheme, the dispatcher receives an initial request from a user and replies with an HTTP redirect header to pin the client to a particular member server of the cluster. At the other end of the spectrum reside vendors who implement a federation of tightly integrated machines, with each machine totally aware of the other machines around it along with the objects on those machines.

In addition to application servers, clusters can comprise other redundant, failover-capable components:

  • Load balancers: Single points of entry into the cluster and traffic directors to individual Web or application servers
  • Web servers
  • Gateway routers: Exit points out of an internal network
  • Multilayer switches: Packet and frame filters to ensure that each machine in the cluster receives only information pertinent to that machine
  • Firewalls: Protect the cluster from hackers by filtering port-level access to the cluster and the internal network
  • SAN (Storage Area Networking) switches: Connect the application servers, Web servers, and databases to a backend storage medium; manage which physical disk to write data to; and provide failover
  • Databases

Regardless of how they are implemented, all clusters provide two main benefits: scalability and high availability (HA).

Scalability

Scalability refers to an application’s ability to support increasing numbers of users. Clusters allow you to provide extra capacity by adding extra servers, thus ensuring scalability.

High availability

HA can be summed up in one word: redundancy. A cluster uses many machines to service requests. Therefore, if any machine in a cluster fails, another machine can transparently take over.

A cluster only provides HA at the application server tier. For a Web system to exhibit true HA, it must be like Noah’s ark in containing at least two of everything, including Web servers, gateway routers, switching infrastructures, and so on. (For more on HA, see the HA Checklist.)

Cluster types

J2EE clusters usually come in two flavors: shared nothing and shared disk. In a shared-nothing cluster, each application server has its own filesystem with its own copy of the applications running in the cluster. Application updates and enhancements require changes on every node in the cluster. With this setup, large clusters become maintenance nightmares when code pushes and updates are released.

In contrast, a shared-disk cluster employs a single storage device that all application servers use to obtain the applications running in the cluster. Updates and enhancements occur in a single filesystem, and all machines in the cluster can access the changes. Until recently, a downside to this approach was its single point of failure. However, a SAN provides a single logical interface into a redundant storage medium, giving failover, failback, and scalability. (For more on SAN, see the Storage Infrastructure sidebar.)

When comparing J2EE application servers’ cluster implementations, it’s important to consider:

  • Cluster implementation
  • Cluster and component failover services
  • HttpSession failover
  • Single points of failure in a cluster topology
  • Flexible topology layout
  • Maintenance

Later on we’ll look at how four popular application servers compare in various areas. But first, let’s examine each item in more detail.

Cluster implementation

J2EE application servers implement clustering around their implementation of JNDI (Java Naming and Directory Interface). Although JNDI is the core service J2EE applications rely on, it is difficult to implement in a cluster because a JNDI tree cannot bind multiple objects to a single name (see the sketch following the list below). Three general clustering methods exist in relation to each application server's JNDI implementation:

  • Independent
  • Centralized
  • Shared global
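
To make the underlying difficulty concrete, here is a minimal sketch, assuming a running JNDI provider configured through jndi.properties; the CartHome name and the bound values are purely illustrative. A plain JNDI tree refuses a second bind under an existing name, which is why each vendor must layer its own replica-handling scheme on top of JNDI:

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NameAlreadyBoundException;

public class BindDemo {
    public static void main(String[] args) throws Exception {
        Context ctx = new InitialContext(); // assumes a configured JNDI provider
        ctx.bind("CartHome", "stub from server 1");
        try {
            ctx.bind("CartHome", "stub from server 2"); // same name, second object
        } catch (NameAlreadyBoundException e) {
            // The tree rejects a second object under one name, so clustered
            // servers cannot all register their replicas in a flat tree
            System.out.println("Second bind rejected: " + e);
        }
    }
}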

Independent JNDI tree

HP Bluestone Total-e-Server and SilverStream Application Server utilize an independent JNDI tree for each application server. Member servers in an independent JNDI tree cluster do not know or care about the existence of other servers in the cluster. Therefore, failover is either not supported or provided through intermediary services that redirect HTTP or EJB requests. These intermediary services are configured to know where each component in the cluster resides and how to get to an alternate component in case of failure.

The independent JNDI tree cluster offers two advantages: short cluster convergence time and ease of scaling. Cluster convergence measures the time it takes for the cluster to become fully aware of all the machines in the cluster and their associated objects. Convergence is not an issue in an independent JNDI tree cluster because the cluster achieves convergence as soon as two machines start up. And scaling requires only the addition of extra servers.

However, several weaknesses exist. First, failover is usually the developer's responsibility. That is, because each application server's JNDI tree is independent, the remote proxies retrieved through JNDI are pinned to the server on which the lookup occurred. Under this scenario, if a method call to an EJB fails, the developer has to write extra code to connect to a dispatcher, obtain the address of another active server, do another JNDI lookup, and call the failed method again, as in the sketch below. Bluestone implements a more complicated form of the independent JNDI tree by making every request go through an EJB proxy service or Proxy LBB (Load Balance Broker). The EJB proxy service ensures that each EJB request goes to an active UBS (Universal Business Server) instance. This scheme adds extra latency to each request but allows automatic failover in between method calls.
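
Here is a minimal sketch of that hand-rolled failover logic. It assumes the initial-context factory is supplied via jndi.properties; the server addresses, the Cart and CartHome interfaces, and the bean name are hypothetical stand-ins for whatever your dispatcher and application actually provide:

import java.rmi.RemoteException;
import java.util.Hashtable;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

// Hypothetical EJB interfaces, standing in for your application's beans
interface Cart extends EJBObject {
    void addItem(String sku) throws RemoteException;
}

interface CartHome extends EJBHome {
    Cart create() throws CreateException, RemoteException;
}

public class FailoverClient {
    // Hypothetical addresses; a real list might come from a dispatcher
    private static final String[] SERVERS = {
        "rmi://server1:1099", "rmi://server2:1099"
    };

    public static void main(String[] args) throws Exception {
        Exception lastFailure = null;
        for (int i = 0; i < SERVERS.length; i++) {
            try {
                Hashtable env = new Hashtable();
                env.put(Context.PROVIDER_URL, SERVERS[i]);
                Context ctx = new InitialContext(env); // factory from jndi.properties
                Object ref = ctx.lookup("CartHome");
                CartHome home = (CartHome)
                    PortableRemoteObject.narrow(ref, CartHome.class);
                Cart cart = home.create();
                cart.addItem("SKU-1234");
                return; // success; no failover needed
            } catch (Exception e) {
                lastFailure = e; // lookup or call failed; try the next server
            }
        }
        throw lastFailure; // every server in the list failed
    }
}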

Centralized JNDI tree

Sybase Enterprise Application Server implements a centralized JNDI tree cluster. Under this setup, centralized JNDI tree clusters utilize CORBA's CosNaming service for JNDI. Name servers house the centralized JNDI tree for the cluster and keep track of which servers are up. Upon startup, every server in the cluster binds its objects into its own JNDI tree as well as into the trees of all the name servers.

Getting a reference to an EJB in a centralized JNDI tree cluster is a two-step process. First, the client looks up a home object from a name server, which returns an interoperable object reference (IOR). An IOR points to several active machines in the cluster that have the home object. Second, the client picks the first server location in the IOR and obtains the home and remote. If there is a failure between EJB method invocations, the CORBA stub implements logic to retrieve another home or remote from an alternate server listed in the IOR returned from the name server.
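
From the client's point of view, the two-step process collapses into a single JNDI call; the CORBA runtime handles the IOR behind the scenes. A minimal sketch, using the JDK's standard CosNaming provider with a hypothetical name server address and bean name:

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class NameServerLookup {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        // Standard JDK CosNaming provider; host and port are hypothetical
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.cosnaming.CNCtxFactory");
        env.put(Context.PROVIDER_URL, "iiop://nameserver1:9000");
        Context ctx = new InitialContext(env);
        // One lookup call, two network hops underneath: the name server
        // returns an IOR, and the ORB resolves it to an active server
        Object homeRef = ctx.lookup("AccountHome"); // hypothetical bean name
        System.out.println("Resolved: " + homeRef);
    }
}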

The name servers themselves demonstrate a weakness of the centralized JNDI tree cluster. Specifically, if you have a cluster of 50 machines, of which five are name servers, the cluster becomes useless if all five name servers go down. Indeed, the other 45 machines could be up and running but the cluster will not serve a single EJB client while the naming servers are down.

Another problem arises from bringing an additional name server online in the event of a total failure of the cluster’s original name servers. In this case, a new centralized name server requires every active machine in the cluster to bind its objects into the new name server’s JNDI tree. Although it is possible to start receiving requests while the binding process takes place, this is not recommended, as the binding process prolongs the cluster’s recovery time. Furthermore, every JNDI lookup from an application or applet really represents two network calls. The first call retrieves the IOR for an object from the name server, while the second retrieves the object the client wants from a server specified in the IOR.

Finally, centralized JNDI tree clusters suffer from an increased time to convergence as the cluster grows in size. That is, as you scale your cluster, you must add more name servers. Keep in mind that the generally accepted ratio of name server machines to total cluster machines is 1:10, with a minimum number of two name servers. Therefore, if you have a 10-machine cluster with two name servers, the total number of binds between a server and name server is 20. In a 40-machine cluster with four name servers, there will be 160 binds. Each bind represents a process wherein a member server binds all of its objects into the JNDI tree of a name server. With that in mind, the centralized JNDI tree cluster has the worst convergence time among all of the JNDI cluster implementations.

Shared global JNDI tree

Finally, BEA WebLogic implements a shared global JNDI tree. With this approach, when a server in the cluster starts up it announces its existence and JNDI tree to the other servers in the cluster through IP (Internet Protocol) multicast. Each clustered machine binds its objects into the shared global JNDI tree as well as its own local JNDI tree.

Having a global and a local JNDI tree within each member server allows the generated home and remote stubs to fail over and provides quick in-process JNDI lookups. The shared global JNDI tree is shared among all machines within the cluster, allowing any member machine to know the exact location of all objects within the cluster. If an object is available at more than one server in the cluster, a special home object is bound into the shared global JNDI tree. This special home knows the location of all EJB objects with which it is associated and generates remote stubs that likewise know the locations of those EJB objects.

The shared global approach’s major downsides: the large initial network traffic generated when the servers start up and the cluster’s lengthy convergence time. In contrast, in an independent JNDI tree cluster, convergence proves not to be an issue because no JNDI information sharing occurs. However, a shared global or centralized cluster requires time for all of the cluster’s machines to build the shared global or centralized JNDI tree. Indeed, because shared global clusters use multicast to transfer JNDI information, the time required to build the shared global JNDI tree is linear in relation to the number of subsequent servers added.

The main benefits of shared global compared with centralized JNDI tree clusters center on ease of scaling and higher availability. With shared global, you don’t have to fiddle with the CPUs and RAM on a dedicated name server or tune the number of name servers in the cluster. Rather, to scale the application, just add more machines. Moreover, if any machine in the cluster goes down, the cluster will continue to function properly. Finally, each remote lookup requires a single network call compared with the two network calls required in the centralized JNDI tree cluster.

All of this should be taken with a grain of salt because JSPs, servlets, EJBs, and JavaBeans running on the application server can take advantage of being co-located in the EJB server. They will always use an in-process JNDI lookup. Keep in mind that if you run only server-side applications, little difference exists among the independent, centralized, or shared global cluster implementations. Indeed, every HTTP request will end up at an application server that will do an in-process JNDI lookup to return any object used within your server-side application.
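
A hedged illustration of that point: server-side code typically uses the default InitialContext, which resolves against the local server's JNDI tree in-process with no network call, regardless of which clustering method the vendor chose. The bean name here is hypothetical:

import java.io.IOException;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CatalogServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        try {
            // No provider URL: the lookup resolves in-process against the
            // JNDI tree of the server running this servlet
            InitialContext ctx = new InitialContext();
            Object home = ctx.lookup("CatalogHome"); // hypothetical bean name
            res.getWriter().println("Looked up co-located home: " + home);
        } catch (javax.naming.NamingException e) {
            throw new ServletException(e.toString());
        }
    }
}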

Next, we turn our attention to the second important J2EE application server consideration: cluster and failover services.

Cluster and failover services

Providing J2EE services on a single machine is trivial compared with providing the same services across a cluster. Due to the complications of clustering, every application server implements clustered components in unique ways. You should understand how vendors implement clustering and failover of entity beans, stateless session beans, stateful session beans, and JMS. Many vendors claim to support clustered components, but by that they often mean only that the components can run in a cluster. For example, BEA WebLogic Server 5.1 supported clustered stateful session beans, but if the server holding a bean instance failed, all of its state was lost. The client would then have to re-create and repopulate the stateful session bean, making it useless in a cluster. It wasn't until BEA WebLogic 6.0 that stateful session beans employed in-memory replication for failover and clustering.

All application servers support EJB clustering but vary greatly in their support of configurable automatic failover. Indeed, some application servers do not support automatic failover for EJB clients under any circumstances. For example, Sybase Enterprise Application Server supports stateful session bean failover only if you load the bean's state from a database or through serialization, as in the sketch below. As mentioned above, BEA WebLogic 6.0 supports stateful session bean failover through in-memory replication of stateful session bean state. Most application servers can have JMS running in a cluster but don't have load balancing or failover of individually named Topics and Queues. With that in mind, you'll probably need to purchase a clusterable implementation of JMS such as SonicMQ to get load balancing to Topics and Queues.
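
As a rough sketch of that database-backed approach (not Sybase's actual mechanism), a stateful session bean can checkpoint its own state to a shared database so that a replacement instance on another server can reload it. The cart_state table, the jdbc/CartPool DataSource name, and the omitted home and remote interfaces are all hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.ejb.CreateException;
import javax.ejb.EJBException;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CartBean implements SessionBean {
    private String cartId;
    private int itemCount;

    public void ejbCreate(String cartId) throws CreateException {
        this.cartId = cartId;
        load(); // pick up state written by an instance on a failed server
    }

    public void addItem() {
        itemCount++;
        save(); // checkpoint state after every change
    }

    private void load() {
        try {
            Connection c = dataSource().getConnection();
            try {
                PreparedStatement ps = c.prepareStatement(
                    "SELECT item_count FROM cart_state WHERE cart_id = ?");
                ps.setString(1, cartId);
                ResultSet rs = ps.executeQuery();
                if (rs.next()) itemCount = rs.getInt(1);
            } finally { c.close(); }
        } catch (Exception e) { throw new EJBException(e.toString()); }
    }

    private void save() {
        try {
            Connection c = dataSource().getConnection();
            try {
                PreparedStatement up = c.prepareStatement(
                    "UPDATE cart_state SET item_count = ? WHERE cart_id = ?");
                up.setInt(1, itemCount);
                up.setString(2, cartId);
                if (up.executeUpdate() == 0) { // first write for this cart
                    PreparedStatement ins = c.prepareStatement(
                        "INSERT INTO cart_state (cart_id, item_count) VALUES (?, ?)");
                    ins.setString(1, cartId);
                    ins.setInt(2, itemCount);
                    ins.executeUpdate();
                }
            } finally { c.close(); }
        } catch (Exception e) { throw new EJBException(e.toString()); }
    }

    private DataSource dataSource() throws Exception {
        // Hypothetical pool name; configure it in your server's deployment
        return (DataSource) new InitialContext().lookup("jdbc/CartPool");
    }

    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void ejbRemove() {}
    public void setSessionContext(SessionContext ctx) {}
}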

Another important consideration, to which we now turn our attention: HttpSession failover.

HttpSession failover

HttpSession failover allows a client to seamlessly get session information from another server in the cluster when the original server on which the client established a session fails. BEA WebLogic Server implements in-memory replication of session information, while HP Bluestone Total-e-Server utilizes a centralized session server with a backup for HA. SilverStream Application Server and Sybase Enterprise Application Server utilize a centralized database or filesystem that all application servers read from and write to.

The main drawback of database/filesystem session persistence centers on limited scalability when storing large or numerous objects in the HttpSession. Every time a user adds an object to the HttpSession, all of the objects in the session are serialized and written to a database or shared filesystem, as illustrated below. Most application servers that utilize database session persistence advocate minimal use of the HttpSession to store objects, but this limits your Web application's architecture and design, especially if you are using the HttpSession to store cached user data.
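
A small, hypothetical servlet illustrates the cost. Under database persistence, the first setAttribute() call below obliges the server to reserialize and rewrite a large object graph whenever the session changes; the second stores only a small key and leaves the bulky data to be cached elsewhere:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class SessionUsageServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        HttpSession session = req.getSession();
        // Costly under database persistence: the session's whole object
        // graph gets serialized and written out when the session changes
        session.setAttribute("catalogCache", buildLargeCatalog());
        // Cheaper: store only a small key and cache the bulky data elsewhere
        session.setAttribute("catalogCategory", "electronics");
        res.getWriter().println("session updated");
    }

    private java.util.ArrayList buildLargeCatalog() {
        java.util.ArrayList catalog = new java.util.ArrayList();
        for (int i = 0; i < 10000; i++) catalog.add("item-" + i);
        return catalog; // stands in for a large cached object graph
    }
}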

Memory-based session persistence stores session information in memory on a backup server. Two variations of this method exist. The first writes HttpSession information to a centralized state server; all machines in the cluster write their HttpSession objects to this server. In the second, each cluster node chooses an arbitrary backup node to hold its session information in memory. Each time a user adds an object to the HttpSession, that object alone is serialized and replicated to the backup server's memory.

Comparing the two variations: if the number of servers in the cluster is low, the dedicated state server proves better than in-memory replication to an arbitrary backup server because it frees up the member servers' CPU cycles for transaction processing and dynamic page generation.

On the other hand, when the number of machines in the cluster is large, the dedicated state server becomes the bottleneck and in-memory replication to an arbitrary backup server (versus a dedicated state server) will scale linearly as you add more servers. In addition, as you add more machines to the cluster you will need to constantly tune the state server by adding more RAM and CPUs. With in-memory replication to an arbitrary backup server, you just add more machines and the sessions evenly distribute themselves across all machines in the cluster. Memory-based persistence provides flexible Web application design, scalability, and high availability.

Now that we’ve tackled HttpSession failover, we will examine single points of failure.

Single points of failure

Cluster services without backup are known as single points of failure. They can cause the whole cluster or parts of your application to fail. For example, WebLogic JMS can run a given Topic on only a single machine within the cluster. If that machine happens to go down and your application relies on JMS Topics, your cluster will be down until another WebLogic instance is started with that JMS Topic. (Note that only durable messages will be delivered when the new instance is started.)
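
The durable-message caveat matters in practice. Here is a minimal JMS sketch with hypothetical JNDI names for the connection factory and Topic; messages the server accepted for a durable subscription before going down are redelivered once a new instance comes up, while nondurable subscribers lose them:

import javax.jms.Message;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;
import javax.jms.TopicSubscriber;
import javax.naming.InitialContext;

public class DurableOrderListener {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Hypothetical JNDI names; use whatever your server has configured
        TopicConnectionFactory tcf =
            (TopicConnectionFactory) ctx.lookup("jms/TopicFactory");
        Topic orders = (Topic) ctx.lookup("jms/Orders");

        TopicConnection conn = tcf.createTopicConnection();
        conn.setClientID("order-auditor"); // durable subscriptions need a client ID
        TopicSession session =
            conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        // Messages held for this durable subscription survive a JMS server
        // restart and are delivered once the subscriber reconnects
        TopicSubscriber sub =
            session.createDurableSubscriber(orders, "order-audit");
        conn.start();
        Message m = sub.receive();
        System.out.println("Received: " + m);
        conn.close();
    }
}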

Ask yourself if your cluster has any single points of failure. If it does, you will need to gauge whether or not you can live with them based on your application requirements.

Next up, flexible scaling topologies.

Flexible scaling topologies

Clustering also requires a flexible layout of scaling topologies. Most application servers can take on the responsibilities of an HTTP server as well as those of an application server, as seen in Figure 1.

Figure 1. All-in-one topology

The architecture illustrated in Figure 1 is good if most of your Website serves dynamic content. However, if your site serves mostly static content, then scaling the site would be an expensive proposition, as you would have to add more application servers to serve static HTML page requests. With that in mind, to scale the static portions of your Website, add Web servers; to scale the dynamic portions of your site, add application servers, as in Figure 2.

Figure 2. Partitioned topology

The main drawback of the architecture shown in Figure 2: added latency for dynamic page requests. However, it provides a flexible methodology for scaling the static and dynamic portions of the site independently.

Finally, what application server discussion would be complete without a look at maintenance?

Maintenance

With a large number of machines in a cluster, maintenance revolves around keeping the cluster running and pushing out application changes. Application servers should provide agents to sense when critical services fail and then restart them or activate them on a backup server. Further, as changes and updates occur, an application server should provide an easy way to update and synchronize all servers in the cluster.

Sybase Enterprise Application Server and HP Bluestone Total-e-Server provide file and configuration synchronization services for clusters. Sybase provides these services at the host, group, or cluster level; Bluestone offers them only at the host level, so deploying large applications or numerous changes takes a long time. BEA WebLogic Server provides configuration synchronization only. Of the two approaches, configuration synchronization combined with a storage area network works better: changes are made to a single logical storage medium, so all of the machines in the cluster receive the application file changes, and each machine then only has to receive configuration changes from a centralized configuration server. SilverStream Application Server loads application files and configuration from a database using dynamic class loaders, which facilitate application changes in a running application server.

That concludes our look at the important features to consider in application servers. Next let’s look at how our four popular application servers handle themselves in relation to our criteria.

Application server comparisons

Now that we have talked about clusters in general, let’s focus our attention upon individual application servers and apply what we learned to the real world. Below, you’ll find comparisons of:

  • HP Bluestone Total-e-Server 7.2.1
  • Sybase Enterprise Application Server 3.6
  • SilverStream Application Server 3.7
  • BEA WebLogic Server 6.0

Each application server section provides a picture of an HA architecture followed by a summary of important features as presented in this article.

HP Bluestone Total-e-Server 7.2.1

Figure 3. HP Bluestone 7.2.1 topology

General cluster summary:

Bluestone implements independent JNDI tree clustering. An LBB (Load Balance Broker), which runs as a plug-in within a Web server, provides load balancing and failover of HTTP requests. LBBs know which applications are running on which UBS (Universal Business Server) instance and can direct traffic appropriately. Failover of stateful and stateless session beans and entity beans is supported between method calls through the EJB Proxy Service and Proxy LBB. The main disadvantages of the EJB Proxy Service: it adds extra latency to each EJB request, and it runs on the same machine as the UBS instances. The EJB Proxy Service and UBS stubs allow failover in case of UBS instance failure but not in case of hardware failure. Hardware failover is supported through configuration of apserver.txt, either on the client side or on the Proxy LBB. The client-side apserver.txt lists all of the components in the cluster; when additional components are added, all clients must be updated through the BAM (Bluestone Application Manager) or manually on a host-by-host basis. Configuring apserver.txt on the Proxy LBB insulates clients from changes in the cluster but again adds latency to each EJB call. HP Bluestone is the only application server reviewed here to provide clustered and load-balanced JMS.

Cluster convergence time:

Least, when compared with centralized and shared global JNDI tree clusters.

HttpSession failover:

In-memory centralized state server and backup state server or database.

Single points of failure:

None.

Flexible cluster topology:

All clustering topologies are supported.

Maintenance:

Bluestone excels at maintenance. Bluestone provides a dynamic application launcher (DAL) that the LBB calls when an application or machine is down. The DAL can restart applications on the primary or backup machine. In addition, Bluestone provides a configuration and deployment tool called Bluestone Application Manager (BAM), which can deploy application packages and their associated configuration files. The single downside of this tool is that you can configure only one host at a time — problematic for large clusters.

Sybase Enterprise Application Server 3.6

Figure 4. Sybase Enterprise Application Server 3.6 topology

General cluster summary:

Enterprise Application Server implements centralized JNDI tree clustering, and a hardware load balancer provides load balancing and failover of HTTP requests. The two name servers per cluster can handle HTTP requests, but for performance considerations they typically should be dedicated exclusively to handling JNDI requests.

Enterprise Application Server 3.6 does not have Web server plug-ins; they will, however, be available with the EBF (Error and Bug Fixes) for 3.6.1 in February. The application server supports stateful and stateless session bean and entity bean failover between method calls. Keep in mind that Enterprise Application Server does not provide any monitoring agents or dynamic application launchers, requiring you to buy a third-party solution like Veritas Cluster Server to cover single points of failure or restart failed servers automatically. Enterprise Application Server does not support JMS.

Cluster convergence time:

Convergence time depends on the number of name servers and member servers in the cluster. Of the three cluster implementations, centralized JNDI tree clusters produce the worst convergence times. Although convergence time is important, member machines can start receiving requests after binding objects into a name server that is utilized by an EJB client (this is not recommended, however).

HttpSession failover:

HttpSession failover occurs through a centralized database. There’s no option for in-memory replication.

Single points of failure:

No single points of failure exist if running multiple name servers.

Flexible cluster topology:

A flexible cluster topology is limited due to the lack of Web server plug-ins.

Maintenance:

Sybase provides the best option for application deployment through file and configuration synchronization. It can synchronize clusters at the component, package, servlet, application, or Web application level. You can also choose to synchronize a cluster, a group of machines, or an individual host. This is an awesome feature, but it can take a while if there are many machines in the cluster and many files to add or update. A weakness is the lack of dynamic application launching services, which means you must purchase a third-party solution such as Veritas Cluster Server.

SilverStream Application Server 3.7

Figure 5. SilverStream Application Server dispatcher topology
Figure 6. SilverStream Application Server Web server integration module topology

General cluster summary:

When setting up a SilverStream cluster, choose from two configurations: dispatcher-based or Web server integration module (WSI)-based. In a dispatcher-based cluster, a user connects to the SilverStream dispatcher or a hardware-based dispatcher (an Alteon 180e, for example). The dispatcher then sends an HTTP redirect to pin the client to a machine in the cluster. From that point on, the client is physically bound to a single server. In the dispatcher configuration there is no difference between a single machine and a cluster because, as far as the client is concerned, the cluster becomes a single machine. The major disadvantage: you can't scale the static portions of the site independently of the dynamic portions.

In a WSI-based cluster, a user connects to a dispatcher, which load balances and fails over HTTP requests to Web servers. Each Web server has a plug-in that points to a load balancer fronting a SilverStream cluster. The WSI cluster does not use redirection, so every HTTP request is load balanced across the Web servers. The secondary load balancer ensures failover at the application server tier. A WSI architecture advantage over the dispatcher architecture: the ability to independently scale the static and dynamic portions of a site. A major disadvantage: HttpSession failover requires the proprietary ArgoPersistenceManager. In either architecture, EJB clients still cannot fail over between method calls; EJB failover remains entirely the developer's responsibility. Finally, SilverStream does not support clustered JMS.

HttpSession failover:

SilverStream provides HttpSession failover through a centralized database and the ArgoPersistenceManager. The solution, unfortunately, is proprietary: instead of using the standard HttpSession to store session information to a database, you must use the product's ArgoPersistenceManager class, a clear drawback.

Cluster convergence time:

Least, when compared with centralized and shared global JNDI tree clusters.

Single points of failure:

None with the above architectures.

Flexible cluster topology:

All clustering topologies supported.

Maintenance:

Cache manager and dynamic class-loaders provide an easy way to deploy and upgrade running applications with little interruption to active clients. When an application is updated on one server in the cluster, the update gets written to a database and the cache manager invalidates the caches on all other servers, forcing them to refetch the updated parts of the application on next access. The only downside of this approach is the time needed to load applications out of the database and into active memory during the initial access.

BEA WebLogic Server 6.0

Figure 7. BEA WebLogic Server 6.0 topology

General cluster summary:

WebLogic Server implements clustering through a shared global JNDI cluster tree. Under this setup, when an individual server starts up, it merges its JNDI tree into the shared global JNDI tree. The server then uses its copy of the tree to service requests, allowing an individual server to know the exact location of all objects in the cluster. The stubs that clients utilize are cluster-aware, meaning they are optimized to work on their server of origin but also possess knowledge of all other replica objects running in different application servers. Because cluster-aware stubs know about all replica objects in the cluster, they can transparently fail over to other replicas, as sketched below. A unique feature of WebLogic Server is in-memory replication of stateful session beans as well as configurable automatic failover of EJB remote objects. WebLogic defines clusterable as a service that can run in a cluster. JMS is clusterable, but each Topic or Queue runs on only a single server within the cluster. You cannot load balance or fail over JMS Topics or Queues in a cluster, a major weakness of WebLogic's JMS implementation.
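
From client code, cluster-awareness looks like a minimal bootstrap against a list of cluster members; the stub returned by the lookup carries the replica knowledge itself. The t3 host names and the bean name here are hypothetical:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ClusterBootstrap {
    public static void main(String[] args) throws Exception {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        // Listing several cluster members lets the client bootstrap its
        // context even if one of them happens to be down
        env.put(Context.PROVIDER_URL, "t3://node1,node2,node3:7001");
        Context ctx = new InitialContext(env);
        // The stub that comes back is cluster-aware and can fail over on
        // its own between method calls
        Object home = ctx.lookup("TraderHome"); // hypothetical bean name
        System.out.println("Got cluster-aware home: " + home);
    }
}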

HttpSession failover:

WebLogic Server provides HttpSession failover by transparent in-memory state replication to an arbitrary backup server or to a database server. Every machine in the cluster picks a different backup machine. If the primary server fails, the backup becomes the new primary and picks a new backup server of its own. Another unique feature of WebLogic is cookie independence: both HP Bluestone and Enterprise Application Server require cookies for HttpSession failover, but WebLogic can use encrypted information in the URL to direct a client to the backup server if there is a failure.
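
A short, hedged sketch of the cookie-independent mechanism using the standard servlet API: encodeURL() rewrites a link to carry session-tracking information when cookies are unavailable, which is the hook a server can use to embed primary and backup routing data. The link target is hypothetical:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CartLinkServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        req.getSession(true); // establish the session to be tracked
        PrintWriter out = res.getWriter();
        // When cookies are disabled, encodeURL() appends session-tracking
        // information to the URL instead of relying on a cookie
        String link = res.encodeURL("/cart/view");
        out.println("<a href=\"" + link + "\">View cart</a>");
    }
}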

Single points of failure:

JMS and Administration server.

Flexible cluster topology:

All clustering topologies supported.

Maintenance:

WebLogic’s weakness is maintenance. Although BEA has made positive steps with configuration synchronization, WebLogic Server does not provide any monitoring agents, dynamic application launchers, or file synchronization services. This shortfall requires you to buy a third-party solution for the single points of failure or implement sneaker HA. With sneaker HA, all WebLogic Administrators have to wear Nike Air Streak Vapor IV racing shoes. If a server goes down they must sprint to the machine and troubleshoot the problem. However, if you employ SAN you don’t need file synchronization services, but most developers are just beginning to realize the benefits of SAN.

Analysis

Overall, BEA WebLogic Server 6.0 has the most robust and well thought-out clustering implementation. HP Bluestone Total-e-Server 7.2.1, Sybase Enterprise Application Server 3.6, and SilverStream Application Server 3.7 would be the next choices, in that order.

Picking the right application server entails making tradeoffs. If your application has EJB clients (applets and applications) and Web clients, makes extensive use of the HttpSession (for caching), and needs easy scaling and failover, you will want to go with BEA WebLogic 6.0. If your application requires heavy use of JMS and most of your clients are Web clients, Bluestone will probably be a better choice. From a clustering standpoint, Sybase Enterprise Application Server 3.6 lacks important clustering features such as JMS, in-memory HttpSession replication, session-based server affinity, and partitioning through Web server plug-ins. Sybase Enterprise Application Server 3.6 does, however, bring to the table file and configuration synchronization services, though this benefit is minimized if you are using a SAN. SilverStream Application Server has the weakest implementation of clustering and lacks clustered JMS, in-memory HttpSession replication, and automatic EJB failover.

Conclusion

In this article you have gained a foundational understanding of clustering, clustering methods, and important cluster services. We have examined the benefits and drawbacks of each application server and discussed the important cluster-related features to look for in an application server. With this knowledge, you now understand how to set up clusters for high availability and scalability. However, this represents just the beginning of your journey. Go out, get some evaluation clustering licenses, and Control-C some application servers to see how they really fail over.

In Part 2, we will start writing code, test the application server vendors’ claims, and provide failover and cluster strategies for each of the different application servers. Further, we will use the J2EE Java Pet Store for load testing and cluster convergence benchmarks.

Abraham Kang is the scalability and high availability architect for Infogain's Delivery Operations Group. He has over a year and a half of experience clustering J2EE application servers. Moreover, he is a Sun-certified programmer and developer, Oracle 8i-certified professional DBA, Cisco-certified network associate, and Cisco-certified design associate. Abraham would like to thank his manager, Jessie Spencer-Cooke, for supporting his efforts to write this article and providing an environment for excellence.

Source: www.infoworld.com