Server-side Java: Counting tiers – one, two, or n?

How many tiers does your application architecture need?

I hate articles that make you wade through mountains of text before getting to the point. Accordingly, here is a table summarizing the pros and cons of the distributed-application architectures discussed in this article.

On tiers

In the beginning, life was simple. Computers were separate, individual devices. Programs had access to all of the computer’s input and output through directly attached devices. With the invention of networks, life became more complicated. Now we have to write programs that depend on other programs running on faraway computers. Often, we have to write all those faraway programs as well! This is what’s called distributed programming.

A brief definition: a distributed application is a system composed of programs running on multiple host computers. The architecture of such an application is a sketch of the different programs, describing which programs run on which hosts, what their responsibilities are, and what protocols determine the ways in which the different parts of the system talk to one another.

One tier
    Pros: Simple; very high performance; self-contained.
    Cons: No networking — can’t access remote services; potential for spaghetti code.

Two tiers
    Pros: Clean, modular design; less network traffic; secure algorithms; can separate UI from business logic.
    Cons: Must design/implement a protocol; must design/implement reliable data storage.

Three tiers
    Pros: Can separate UI, logic, and storage; reliable, replicable data; concurrent data access via transactions; efficient data access.
    Cons: Need to buy a database product; need to hire a DBA; need to learn a new language (SQL); object-relational mapping is difficult.

N tiers
    Pros: Supports multiple applications more easily; common protocol/API.
    Cons: Quite inefficient; must learn an API (CORBA, RMI, etc.); expensive products; more complex, thus more potential for bugs; harder to balance loads.

The concept of tiers provides a convenient way to group different classes of architecture. Basically, if your application is running on a single computer, it has a one-tier architecture. If your application is running on two computers — for instance, a typical Web CGI application that runs on a Web browser (client) and a Web server — then it has two tiers. In a two-tier system, you have a client program and a server program. The main difference between the two is that the server responds to requests from many different clients, while the clients usually initiate the requests for information from a single server.

A three-tier application adds a third program to the mix, usually a database, in which the server stores its data. The three-tier application is an incremental improvement to the two-tier architecture. The flow of information is still essentially linear: a request comes from the client to the server; the server requests or stores data in the database; the database returns information to the server; the server returns information back to the client.

An n-tier architecture, on the other hand, allows an unlimited number of programs to run simultaneously, send information to one another, use different protocols to communicate, and interact concurrently. This allows for a much more powerful application, providing many different services to many different clients.

It also opens a huge can of worms, creating new problems in design, implementation, and performance. Many technologies exist that help contain this nightmare of complexity, including CORBA, EJB, DCOM, and RMI, and many products based on these technologies are being furiously marketed. However, the leap from three-tier to n-tier — or the leap from one- to two-tier, or from two- to three-tier, for that matter — must not be taken lightly. It’s easy to open a can of worms, but you always need a bigger can to put them back in. The proponents of these technologies are infatuated with their advantages, and often fail to mention the disadvantages of jumping to a more complicated architecture.

In this article, I will discuss the advantages and disadvantages of each style of architecture and give you some information that will help you choose the right architecture for your application. Consider these tradeoffs before choosing a product just because its fact sheet promises to make your life easier.

One-tier architectures

A one-tier application is simply a program that doesn’t need to access the network while running. Most simple desktop applications, like word processors or compilers, fall into this category.

The advent of the Web complicates this definition a bit. As I mentioned earlier, a Web browser is part of a two-tier application (a Web server being the other part). But what happens if that Web browser downloads a Java applet and runs it? If the applet doesn’t access the network while running, is it a one-tier or two-tier application? For present purposes, we will say that the self-contained applet is a one-tier application, since it is contained entirely on the client computer. By this definition, a program written in JavaScript or VBScript and deployed inside an HTML page would also qualify as a one-tier application.

One-tier architecture has a huge advantage: simplicity. One-tier applications don’t need to handle any network protocols, so their code is simpler. Such code also benefits from being part of an independent operation. It doesn’t need to guarantee synchronization with faraway data, nor does it need exception-handling routines to deal with network failure, bogus data from a server, or a server running different versions of a protocol or program.

Moreover, a one-tier application can have a major performance advantage. The user’s requests don’t need to cross the network, wait their turn at the server, and then return. This has the added effect of not weighing down your network with extra traffic, and not weighing down your server with extra work.

Two-tier architectures

A two-tier architecture actually has three parts: a client, a server, and a protocol. The protocol bridges the gap between the client and server tiers. The two-tier design is very effective for network programming as well as for GUI programs, because it lets you allocate each piece of functionality to the host that suits it best. Traditionally, GUI code lives on the client host, and the so-called business logic lives on the server host. This allows user feedback and validation to occur on the client, where turnaround is quick; in the process, precious network and server resources are preserved. Similarly, logic lives on the server, where it is secure and can make use of server-side resources (though here we’re approaching a three-tier application).

The prototypical two-tier application is a client-server program with a GUI front-end written in a high-level language like Java, C++, or Visual Basic. In the two-tier program, you can see the clear division between front and back tiers. The first tier, the client, needn’t worry about data storage issues or about processing multiple requests; the second tier, the server, needn’t worry about user feedback and tricky user interface (UI) issues. For example, a chat application contains a client that displays messages and accepts input from the user, and a server that relays messages from one client to another. Specialization is good: divide and conquer.
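The chat example above can be reduced to a minimal two-tier sketch. The following is my own illustration, not code from the article: to keep it short it is an echo server handling a single client rather than a multi-client message relay, and the line-per-message "protocol" is invented.

```java
import java.io.*;
import java.net.*;

public class EchoDemo {
    public static void main(String[] args) throws Exception {
        // Server tier: accept one client and echo each line back.
        ServerSocket server = new ServerSocket(0);   // 0 = pick any free port
        Thread serverThread = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println("echo: " + line);    // the "protocol": line in, line out
                }
            } catch (IOException e) { e.printStackTrace(); }
        });
        serverThread.start();

        // Client tier: connect, send a message, print the reply.
        try (Socket s = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine());       // prints "echo: hello"
        }
        serverThread.join();
        server.close();
    }
}
```

Even this toy shows the division of labor: the client owns input and display, the server owns the shared behavior, and the protocol is the only thing they agree on.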

Note that the Web again complicates the picture. Let’s say you have a CGI program that calculates a mortgage. (It may be implemented as a Java servlet or a Perl script.) All of its input is provided by the HTTP GET request, via an HTML form that the user fills out. Its output is one or more HTML files. All the calculation occurs on the server. Is this a one-tier or a two-tier application?

The definition is tricky. I prefer to call it a one-and-a-half-tier application. Even though it relies on a Web browser to display output and accept user input, all of the actual program execution occurs on the server. Since the programmer is responsible only for writing a single running program, and not for code that must execute on the client, it’s not truly a two-tier application. This is highly debatable, however; you could easily argue that the HTML form is actually a primitive form of program code. Note also that the addition of any JavaScript or other client-side code promotes it to a two-tier application.

The reason for haggling over whether standard CGI programming results in a one- or two-tier architecture is that it has implications for your application’s design and performance. A one-tier application combines all functions into a single process; a two-tier application must separate different functions. On the bright side, this means that a one-tier application has the ability to mix different functions; however, the programmer must make sure that the program doesn’t become a mass of spaghetti code. Many Perl and Python CGI scripts are total pasta.
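To make the mortgage CGI example concrete, its server-side core might be a single method like the following sketch. The class and method names are hypothetical; the formula is the standard amortization formula M = P·r / (1 − (1 + r)^−n), with P the principal, r the monthly rate, and n the number of payments.

```java
import java.util.Locale;

public class MortgageCalc {
    // Standard amortization formula: M = P * r / (1 - (1 + r)^-n)
    static double monthlyPayment(double principal, double annualRate, int years) {
        double r = annualRate / 12.0;    // monthly interest rate
        int n = years * 12;              // total number of payments
        return principal * r / (1 - Math.pow(1 + r, -n));
    }

    public static void main(String[] args) {
        // In the CGI version these values would come from the HTML form's
        // GET parameters, and the output would be written as HTML.
        double payment = monthlyPayment(100_000, 0.08, 30);
        System.out.printf(Locale.US, "Monthly payment: %.2f%n", payment);
    }
}
```

Whether this counts as one tier or one and a half, the point stands: every byte of logic runs on the server.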

In some cases, you can write a two-tier application without writing a server or designing a protocol. For example, you can write a Web browser that speaks to a Web server using the (already designed) HTTP protocol. However, if you have to write your own server, or design and implement your own protocol, you can spend more time writing your program than you would if you were writing a one-tier application. The tradeoff is usually worth it, unless time-to-market is a crucial factor.

Three-tier architectures

Often, a two-tier app needs to store data on the server. Usually, the information is stored on the filesystem; however, data-integrity issues arise when multiple clients simultaneously ask the server to perform tasks. Since filesystems generally have rudimentary concurrency controls at best (lock files are found on only some platforms, and are often flawed), the most common solution is to add a third program: a database.
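For contrast, here is roughly what file-based locking looks like in Java. This is my own sketch using java.nio advisory locks; note that these locks are advisory and their behavior varies by platform, which is exactly the weakness described above.

```java
import java.io.File;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.StandardOpenOption;

public class LockDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("orders", ".dat");
        f.deleteOnExit();
        try (FileChannel ch = FileChannel.open(f.toPath(),
                StandardOpenOption.WRITE)) {
            FileLock lock = ch.tryLock();   // non-blocking; null if another process holds it
            if (lock != null) {
                System.out.println("lock acquired: " + lock.isValid());
                lock.release();             // other writers may now proceed
            } else {
                System.out.println("file is locked by another process");
            }
        }
    }
}
```

A database replaces this fragile, whole-file scheme with row-level locking and transactions.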

Databases specialize in storing, retrieving, and indexing data. Just as a two-tier architecture separates GUI and business logic, a three-tier architecture allows you to separate business logic and data access. You can also provide highly optimized data indices and retrieval methods, and provide for replication, backup, redundancy, and load-balancing procedures specific to your data’s needs. Separating code into client and server code increases the scalability of your application; so does placing data on a dedicated process, host, or series of hosts.

(Currently, SQL RDBMSs, like those from Oracle and Sybase, vastly outnumber other database types. You may have heard the names of some of these other types — OODBs (object-oriented databases), ORDBs (object-relational databases), and embedded databases — thrown around as buzzwords, but these are still exotic species, rarely encountered in the real world.)

The general procedure for using a database is to design a schema that describes your data, and queries that store and retrieve that data. There is one shortcoming to this approach: you need to learn a whole new programming language! SQL is not Java, and with the added effort of designing and implementing a new schema, writing translation code to go from one schema to the other, and writing the queries your program executes, your development time can increase greatly. When you throw stored procedures and the hiring of a full-time DBA into the mix, the decision to go with a database can seem much less attractive. I’ve heard people estimate that as much as 70 percent of a given project’s programming and debugging time is spent on object-relational mapping code.
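As a small illustration of the translation code mentioned above, here is a sketch of mapping one database row onto one object. The Customer class and the ID/NAME columns are invented for the example; with real JDBC, the Map would be a java.sql.ResultSet and the lookups would be rs.getInt("ID") and rs.getString("NAME").

```java
import java.util.Map;

public class OrmSketch {
    // A plain Java object on the application side.
    static class Customer {
        final int id;
        final String name;
        Customer(int id, String name) { this.id = id; this.name = name; }
    }

    // Translation code: one row of a hypothetical CUSTOMER table -> one object.
    // Multiply this by every table and every query, and the 70 percent
    // estimate starts to look plausible.
    static Customer fromRow(Map<String, Object> row) {
        return new Customer((Integer) row.get("ID"), (String) row.get("NAME"));
    }

    public static void main(String[] args) {
        Customer c = fromRow(Map.of("ID", 7, "NAME", "Alex"));
        System.out.println(c.id + " " + c.name);
    }
}
```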

Granted, there are many cases in which using a database is necessary, and reduces an application’s time to market. But there are also many cases in which you can store a small amount of data in a simple local file instead of a relational database. A simple rule of thumb: if you only need to store data, and can get away with retrieving files by name, use a filesystem. If you also need to search through those data, then use a database — especially if those searches are based on varying criteria.

One reason you can benefit from using a database — besides the improvement in concurrency, access speed, and reliability — is that multiple applications (or services) can access the same data. This benefit is on the border between three-tier and n-tier applications.

A word on stored procedures: they are evil. Stored procedures are essentially little programs that run inside the database. Since they are close to the data, they can perform manipulations (sorting, filtering, transforming, and so on) that would be prohibitively expensive to perform on the server. Some operations can be performed efficiently only by stored procedures or triggers. However, you can easily misuse them. It is tempting to put business logic inside stored procedures. All this constitutes a disruption of the three-tiered structure: instead of GUI, logic, and storage being neatly separated, you now have logic intermingling with storage, and logic on multiple tiers within the architecture — causing potential headaches down the road if that logic has to change. Furthermore, the stored-procedure logic is written in a different language than the application code, and is subject to a different revision-control mechanism (if there is one, which there often isn’t) and a different debugger (generally text-based and rudimentary, if it exists at all). All this makes stored-procedure code much more difficult to develop and debug.

To make this more concrete, here’s an example of a stored procedure. Let’s say you have a Website with a shopping cart. At checkout time, you need to calculate shipping costs. So you write a stored procedure to calculate the costs for UPS and FedEx automatically every time you save a shopping cart record. (This kind of procedure is actually called a trigger.) It all works perfectly and there are no problems. But what happens when, six months later, you need to add an option that allows your customers to ship via the US Postal Service? Since the post office uses a different rate and zone structure, you will need to write some code, naturally. But this code needs to be written in PL/SQL, and you can’t use a visual debugger. If you mess up, you may interfere with the data’s integrity. If you had written the original shipping cost calculator in Java in the middle tier, you could have just extended the DeliveryMethod class and you’d be done.
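A middle-tier version of that shipping calculator might look like the following sketch. DeliveryMethod is the class named above; the rate formulas and carrier class names are invented for illustration. The point is that adding the US Postal Service is one new subclass in Java, under the same revision control and debugger as the rest of the application.

```java
public class ShippingDemo {
    // Middle-tier business logic: each carrier is a subclass, so a new
    // carrier is a new class rather than a new database trigger.
    static abstract class DeliveryMethod {
        abstract double cost(double weightLbs, int zone);
    }

    static class Ups extends DeliveryMethod {
        double cost(double weightLbs, int zone) {
            return 4.00 + 0.50 * weightLbs * zone;   // invented rate structure
        }
    }

    // Added six months later: a different rate/zone structure, with no
    // PL/SQL and no risk to the data's integrity.
    static class Usps extends DeliveryMethod {
        double cost(double weightLbs, int zone) {
            return 3.00 + 0.75 * weightLbs;          // invented flat-rate scheme
        }
    }

    public static void main(String[] args) {
        DeliveryMethod[] carriers = { new Ups(), new Usps() };
        for (DeliveryMethod m : carriers) {
            System.out.println(m.getClass().getSimpleName()
                    + ": " + m.cost(2.0, 3));
        }
    }
}
```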

N-tier architectures

To listen to the hype, you’d think that n-tier architectures are the greatest thing to happen to computing since the vacuum tube. Proponents of CORBA, EJB, and DCOM believe that every new application should be written, and every existing application should be retrofitted, to support their favorite spec. In the universe of distributed objects thus imagined, writing a new application is as simple as choosing objects and sending messages to them in high-level code. The distributed object protocol handles the nasty, low-level details of parameter marshaling, networking, locating the remote objects, transaction management, and so forth.
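To make "parameter marshaling" less abstract, here is a toy sketch in plain Java SE, with no CORBA or RMI involved and all names invented: a dynamic proxy serializes each argument before invoking the target, which is roughly the bookkeeping a distributed-object stub performs before putting arguments on the wire (minus the network itself).

```java
import java.io.*;
import java.lang.reflect.Proxy;

public class MarshalDemo {
    interface Quote { double price(String symbol); }

    // The real object, which in CORBA or RMI would live on a remote host.
    static class QuoteImpl implements Quote {
        public double price(String symbol) { return symbol.length() * 10.0; }
    }

    // A stub that marshals every argument through Java serialization before
    // invoking the target, standing in for the wire transfer an ORB performs.
    @SuppressWarnings("unchecked")
    static <T> T stub(Class<T> iface, T target) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
            new Class<?>[] { iface },
            (proxy, method, args) -> {
                Object[] marshaled = new Object[args.length];
                for (int i = 0; i < args.length; i++) {
                    ByteArrayOutputStream buf = new ByteArrayOutputStream();
                    ObjectOutputStream out = new ObjectOutputStream(buf);
                    out.writeObject(args[i]);                  // marshal
                    out.flush();
                    marshaled[i] = new ObjectInputStream(
                        new ByteArrayInputStream(buf.toByteArray()))
                        .readObject();                         // unmarshal
                }
                return method.invoke(target, marshaled);
            });
    }

    public static void main(String[] args) {
        Quote q = stub(Quote.class, new QuoteImpl());
        System.out.println(q.price("SUNW"));   // the argument took a marshaling round trip
    }
}
```

The caller just sees q.price("SUNW"); all the serialization happens behind the interface, which is both the convenience and the hidden cost discussed below.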

A good example of an n-tier distributed application is a stock-trading system. In this environment, multiple data feeds (stock quotes, news, trading orders) arrive from different sources, multiple databases (accounts, logs, historical data) are accessed, and multiple clients run specialized applications. It makes sense to weave together the disparate patches in this quilt with the thread of a common distributed object architecture, like CORBA or EJB.

What’s the catch? First, there’s the learning curve. It takes time to learn how to use a new API, and even more time to learn its ins and outs. Second, there’s the product cost. A good standards-compliant distributed object application server, like BEA’s Weblogic or IBM’s WebSphere, can cost tens of thousands of dollars.

Beyond those material concerns, there are some architectural considerations militating against the rush toward distributed objects. It’s hard to design objects that are truly reusable; the dream of being able to reuse current work in future projects is often a vain one. Instead, the design and implementation effort you put into making reusable objects is often wasted, because the next project’s requirements are different enough to require a code rewrite anyway. Even more important is the fact that, by leaving the safety of the three-tier architecture (UI code goes here, business logic goes there), you run the risk of designing a system that’s more complex than you bargained for. This can impede progress, since a careless design decision can have ramifications later on.

Next is the issue of performance. I cannot tell a lie: distributed object protocols are slow. There is no way that an application written in, say, CORBA can be as efficient across the wire as one using a custom-designed socket protocol. Therefore, if your application absolutely needs top networking speed, let your motto be DIY — do it yourself.

CORBA is slow because it needs to be general. In this sense, its greatest strength is its greatest weakness. The time it takes to marshal and unmarshal parameters, the amount of data transmitted, and its handshaking protocols all suffer from the need for generality. A custom protocol can make more assumptions, and compress data better, leading to higher efficiency.

Please note that this inefficiency may be perfectly acceptable. In a well-designed system, you can make up for it by just adding more boxes. This is what people mean when they describe an architecture as scalable — a scalable architecture is one that allows your programs to spread more easily over multiple machines.

Furthermore, if using a distributed-object architecture allows you to write programs that are faster, larger, more powerful, more robust, and just generally cooler, then it’s definitely worth it. If you have only two objects interacting (in a chat application, for instance), it may make sense to invent a whole new protocol for them. If you have many objects interacting, however, the number of possible pairings grows with the square of the number of objects, so you should probably go with an existing standard.

Another pitfall relating to performance in n-tier systems is a little more subtle. Let’s say, in a three-tier system, that you’re getting millions of hits on a single Web server. You can fix the problem simply by adding more Web servers. This is called load balancing — you have balanced the million hits between several equivalent servers. You still have a single database, in which each of the servers stores its data. This means that there’s no problem if, say, one server writes data and immediately thereafter another server needs to read it. (If needed, you can load-balance the database, too.)
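The round-robin idea behind "adding more Web servers" can be sketched in a few lines. This is my own illustration; the server names are hypothetical, and real load balancers also weigh server health and current load.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobin {
    // A million hits spread across several equivalent servers: each request
    // simply goes to the next server in rotation.
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobin(List<String> servers) { this.servers = servers; }

    String pick() {
        return servers.get(Math.floorMod(next.getAndIncrement(), servers.size()));
    }

    public static void main(String[] args) {
        RoundRobin lb = new RoundRobin(List.of("web1", "web2", "web3"));
        for (int i = 0; i < 6; i++) System.out.println(lb.pick());
    }
}
```

This works precisely because the Web servers are interchangeable; the next section shows why n-tier object meshes break that assumption.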

However, in an n-tier application, there are dozens or hundreds of objects interacting, running on many different host computers. If the system is slow, it’s not immediately clear which objects, or which hosts, require load balancing. It requires sophisticated analysis of network traffic and log files, as well as plain old guesswork, to isolate the bottlenecks. (And even if you find the problem, you may not be able to do anything about it.) In other words, by increasing the granularity of your object design, you have limited the system’s performance.

Let me explain this a different way. In the three-tier, load-balanced system, you know that each of the Web servers is making more or less optimal use of its CPU. A request arrives, and the system churns through it until it’s done. (If it needs to wait for a query sent to the database to return, then it multitasks or multithreads and works on a different request.) However, if the chain of communication has to pass to several hosts on the network, then the original server may be sitting idle, waiting for a series of messages to return — messages that are all queued up in some overloaded remote object somewhere across your network. The front-line CPUs are being underutilized, and the rear-guard CPU is maxed out. And there’s no easy way to transfer the idle cycles from one machine to another. If you’re unlucky, you may have to go back and redesign your system at the object level to iron out these inefficiencies.

Conclusion

Writing a distributed application can be fun and rewarding, but the right tool for the job is not always the latest buzzword. A developer must understand the advantages and disadvantages of many architectures before deciding on the solution to an idiosyncratic problem. For a summary of these points, see the table at the top of this article.

The authors of this month’s server-side Java computing articles will be holding a free online seminar on January 13 at 11:00 a.m. PST. Register to join at https://seminars.jguru.com.

Alex Chaffee is a software guru with jGuru (jGuru.com), formerly MageLang Institute, a leading Java developer training and resource site. As the director of software engineering for EarthWeb, Alex cocreated Gamelan, a directory for the Java community. He has presented at numerous user groups and conferences, written articles for several Java magazines, and contributed to the book The Official Gamelan Java Directory. JavaWorld and jGuru have formed a partnership to help the community better understand server-side Java technology. Together, JavaWorld and jGuru produce articles and free educational Web events, and collaborate on the JavaWorld bookstore and Web-based training.

Source: www.infoworld.com