Exploring the Dark Side of SOAs (Continued)
Of course, that's the way it is supposed to be, but in practice those building client applications would do well to have a technical appreciation of how the Web service does its chores. This may assist with issues surrounding how and how often to make calls to that service, for example, or it may help developers resolve problems that are inherent within the service itself.
This latter reason represents a true problem: consuming Web services without knowing anything about them other than the required input and expected output. Application developers are frequently at a loss to analyze and diagnose problems they encounter with a Web service. There is little or no visibility into the service, and if an application doesn't work or perform as expected, developers may only be able to analyze the code they have written, rather than the application as a whole. To the application developer, the Web service is the dark side.
Peeking into the Web Service
Let's take a closer look at what is likely to be a typical scenario in an enterprise. One such circumstance is when the Web service runs on the Java platform, while a client application uses Microsoft .Net technology. In this example, the client application fails to scale to the required number of users: the requirement is to support 100 simultaneous users, but in actual use the application eventually crashes once it reaches just 10.
Solving a problem such as this should follow standard processes. They might look something like this: encounter a problem, characterize the problem, analyze the parameters of the problem, do some diagnosis of the problem, and turn over analysis and diagnosis to the Web service owner.
Once this scalability problem is encountered, the initial task is to localize it to either the client application or the Web service. A logical way to do that is to separate the two and test them individually. For example, it is reasonable to write a test harness that acts as the back end to the client application, and produces canned responses similar to that expected of the Web service. Likewise, it's straightforward to write a test harness that exercises the Web service specifically.
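A harness of this kind can be sketched with nothing more than the JDK's built-in com.sun.net.httpserver.HttpServer. The class below stands in for the Web service and returns a canned response to every request; the endpoint path and the payload are illustrative assumptions, not details from the scenario above.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class CannedServiceStub {
    // Canned payload standing in for the real Web service's output.
    static final String CANNED =
        "<quoteResponse><price>42.00</price></quoteResponse>";

    // Start a stub server on the given port (0 = pick a free port).
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/service", exchange -> {
            byte[] body = CANNED.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer s = start(8080);
        System.out.println("Stub listening on http://localhost:"
                + s.getAddress().getPort() + "/service");
    }
}
```

Pointing the client application at this stub instead of the real service isolates the client side: if the scalability problem disappears, suspicion shifts to the Web service.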
While these test harnesses will generate the appropriate responses, what they won't do is simulate the characteristics of multiple users on the application. What is needed in addition to the test harness is a vehicle for load testing. Although in many enterprises, load testing is still a manual process, there are several ways to automate the process. Using a commercial load-testing tool, it's possible to substantially reduce the manual effort involved and obtain accurate data on the Web service response time and system characteristics.
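The core of an automated load test is simply many concurrent callers with their latencies recorded. The sketch below uses java.util.concurrent to simulate that pattern; the timedCall method is a placeholder that sleeps instead of calling a real service, so the numbers it produces are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SimpleLoadTest {
    // Placeholder for one client request; a real test would invoke the
    // Web service here. The sleep simulates service latency.
    static long timedCall() throws Exception {
        long start = System.nanoTime();
        Thread.sleep(10);
        return System.nanoTime() - start;
    }

    // Run `users` concurrent callers making `calls` requests each;
    // return every observed latency in milliseconds.
    public static List<Long> run(int users, int calls) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<List<Long>>> futures = new ArrayList<>();
        for (int u = 0; u < users; u++) {
            futures.add(pool.submit(() -> {
                List<Long> latencies = new ArrayList<>();
                for (int c = 0; c < calls; c++) {
                    latencies.add(timedCall() / 1_000_000);
                }
                return latencies;
            }));
        }
        List<Long> all = new ArrayList<>();
        for (Future<List<Long>> f : futures) {
            all.addAll(f.get());
        }
        pool.shutdown();
        return all;
    }

    public static void main(String[] args) throws Exception {
        List<Long> latencies = run(10, 5);
        long max = latencies.stream().mapToLong(Long::longValue).max().orElse(0);
        System.out.println("calls=" + latencies.size() + " max(ms)=" + max);
    }
}
```

Commercial tools add ramp-up profiles, think times, and reporting on top of this basic pattern, but the principle is the same: drive concurrent load and measure what comes back.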
Figure 1 shows the results of a load test performed with a commercial load-testing product. The data indicate that memory use within the context of the Java Virtual Machine (JVM) is increasing over time and not declining as simulated users leave the system.
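The signature the figure describes, heap usage that climbs and never declines, can also be observed crudely from within the JVM itself. The sketch below samples used heap via the Runtime API while deliberately retaining objects; it is an illustration of the leak signature, not a substitute for a profiler or the load-testing tool's instrumentation.

```java
import java.util.ArrayList;
import java.util.List;

public class HeapSampler {
    // Current used heap: total allocated to the JVM minus what is free.
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) throws InterruptedException {
        // Retaining every allocation mimics "users leave but memory stays".
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            retained.add(new byte[1_000_000]);
            System.out.printf("sample %d: used heap = %d KB%n",
                    i, usedHeapBytes() / 1024);
            Thread.sleep(100);
        }
    }
}
```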
A logical culprit is how the application is using memory. Although many developers believe that a managed platform such as Java can have no significant memory issues, rather than concentrating on the tactical mechanics of allocating, initializing, casting, and freeing memory blocks of specific size and location, developers must focus on overarching strategies for using memory management to improve application performance and reliability.
How possible is this on the Java platform? Consider the following simple Java method:
String buildString(int limit) {
    String r = new String();
    for (int i = 0; i < limit; i++) {
        r = r + compute(i);  // allocates a new String every iteration
    }
    return r;
}
In this method, a new String r is allocated before the loop. On each iteration, the expression r + compute(i) creates a new String instance: the current contents of r are copied into it, and the result of compute(i) is appended. The consequence is that a new temporary object is created every time through the loop.
This has a couple of implications. First, the proliferation of temporary objects means that memory is continually being allocated. Although memory allocation isn't particularly expensive, it does exact a performance penalty. Second, the larger memory footprint means more memory locations to access, which carries a further, if smaller, performance cost.
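The usual remedy for this pattern is to accumulate into a single growable buffer with java.lang.StringBuilder rather than reassigning a String. The sketch below contrasts the two approaches; the compute method is a stand-in for the one named in the example, assumed here to return a String.

```java
public class ConcatFix {
    // Stand-in for the example's compute(i); assumed to return a String.
    static String compute(int i) {
        return Integer.toString(i);
    }

    // Original pattern: each `r + compute(i)` allocates a fresh String.
    static String concatWithString(int limit) {
        String r = "";
        for (int i = 0; i < limit; i++) {
            r = r + compute(i);
        }
        return r;
    }

    // Same result with one reusable buffer: no per-iteration temporaries.
    static String concatWithBuilder(int limit) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < limit; i++) {
            sb.append(compute(i));
        }
        return sb.toString();
    }
}
```

Both methods produce identical output, but the StringBuilder version allocates one buffer that grows as needed instead of a new String object on every pass.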