Simplify Enterprise Testing With the Right Tools: Two Case Studies
The right tools—and the right strategies for using them—enable successful testing of enterprise applications at H&R Block and Burlington Coat Factory.
by Edmund X. DeJesus

Posted September 29, 2003

Enterprise architects face a perplexing dilemma about testing. On the one hand, enterprise applications are so valuable and important that they must be tested thoroughly. On the other hand, enterprise applications are so valuable and important that they can't be interrupted or disturbed for testing at all. Luckily, there are strategies and tools for dealing with this dilemma. You can perform the testing that is so essential, while ensuring the continuity of business operations.

How H&R Block Makes Testing Less Taxing
Tracking customers' money and preparing their taxes are probably two of the most serious responsibilities an enterprise can take on. H&R Block does both with Web-based systems. The company's applications must be accurate and available for any of its thousands of users anytime, year-round.

In addition, the company faces inescapable external deadlines: tax laws change annually. Some changes are major, some minor, but all must be assimilated into the H&R Block systems and into its test strategies. Adding to the difficulty of testing, the Block systems employ a variety of architectures, including Web-based applications, Windows XP and 2000, UNIX, and some legacy systems.

Block uses a range of tools to test both system performance and load balancing, as well as to acquire statistics and perform monitoring. "We use mostly Empirix test tools," reports Mike Deloney, director of quality assurance at the Kansas City headquarters.

E-Load for Load Testing
Load testing involves stressing the system and watching carefully for any difficulties, from minor issues like memory leaks to major problems like system failure. Empirix E-Load is the tool of choice here.

In the Block load test environment, controller modules drive test agents: servers that create virtual users. In this way, the company can gradually add simulated users and observe how resources such as memory behave as more users engage the system. "We have performance targets for different numbers of concurrent users, so that users don't have to wait more than an acceptable time for, say, a Web page to load," explains Deloney. The testers keep increasing the load on the system while looking for a fall-off in response time.
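The ramp-and-measure pattern is simple to sketch in code. The following Python fragment is only an illustration of the idea, not the E-Load interface; fetch_page() stands in for a real request to the application under test, and the user counts and two-second response target are hypothetical.

# Illustrative ramp-up load test: a sketch of the idea, not the Empirix E-Load interface.
# fetch_page() stands in for a real HTTP request to the application under test;
# the user counts and two-second response target are hypothetical.
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

RESPONSE_TARGET_SECONDS = 2.0        # acceptable page-load time for this scenario

def fetch_page() -> float:
    """Simulate one virtual user loading a page and return the elapsed time."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.05, 0.3))   # placeholder for the real request
    return time.perf_counter() - start

def run_step(concurrent_users: int) -> float:
    """Run one step of the ramp with N concurrent users; return mean response time."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(lambda _: fetch_page(), range(concurrent_users)))
    return statistics.mean(timings)

for users in (10, 50, 100, 250, 500):    # gradually increase the simulated load
    mean_time = run_step(users)
    status = "OK" if mean_time <= RESPONSE_TARGET_SECONDS else "FALL-OFF"
    print(f"{users:4d} users: mean response {mean_time:.2f}s  {status}")

Each step reports whether the mean response time stayed within the target, which is where a tester would look for the fall-off in response time described above.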

Block tries to make its tests as much like real demands as possible. "It can be difficult to match the test cases to real life," advises Deloney. To help make that match as faithful as possible, Block uses OneSight and e-Manager to keep an eye on the real system. These tools unobtrusively note the conditions under real use, including the heaviest day or hour, most-demanded pages, and typical and maximum numbers of users.

This harvested data gives testers an idea of the limits of demand that the real system experiences. Armed with this information, the testers can then design tests that more realistically match what the actual systems face. Sometimes they conduct their tests on genuine production equipment, and sometimes they use a subset of the equipment designed to represent a specific percentage of production capacity.
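Turning that harvested data into test parameters can be as simple as summarizing production traffic. The short Python sketch below assumes a made-up record format of (hour, page, user) tuples; it is not the export format of OneSight or e-Manager, and the sample records are invented.

# A sketch of turning production monitoring data into a test profile; the record
# format and values here are assumptions, not the OneSight/e-Manager formats.
from collections import Counter

# each record: (hour_of_day, page, user_id) harvested from production monitoring
records = [
    (9, "/refund-status", "u1"), (9, "/refund-status", "u2"),
    (13, "/file-return", "u3"), (13, "/file-return", "u1"),
    (13, "/refund-status", "u4"),
]

requests_per_hour = Counter(hour for hour, _, _ in records)
peak_hour, peak_requests = requests_per_hour.most_common(1)[0]

top_pages = Counter(page for _, page, _ in records).most_common(3)
peak_users = len({user for hour, _, user in records if hour == peak_hour})

# These numbers become the parameters of the load tests: how many users to
# simulate and which pages they should request most often.
print(f"Peak hour: {peak_hour}:00 with {peak_requests} requests from {peak_users} users")
print("Most-demanded pages:", top_pages)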

User-Friendliness Is Important
Ease of use is important in selecting a test tool. "It requires experience with the business and its goals to design tests," Deloney points out. Block's testers find the Empirix tools user-friendly, making it easy to enter test scripts.

During testing, the tools can simulate subtle behavior, such as the think-time a user takes to decide what to do next, or the random delays between the arrivals of new users. As the script plays back, the tool can easily identify where any difficulties arose, allowing testers to tweak the script to expose any issues with the system.
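Those two kinds of randomness are easy to picture in a small sketch. The Python fragment below is illustrative only; the step names and timing ranges are assumptions, not a recorded test script.

# A sketch of per-step "think time" and random gaps between arriving virtual users.
# The steps and timing ranges are placeholders, not a captured E-Load script.
import random
import threading
import time

def virtual_user(name: str) -> None:
    for step in ("open home page", "open tax form", "submit form"):
        print(f"{name}: {step}")
        time.sleep(random.uniform(2.0, 8.0))   # think time: user deciding what to do next

def launch_users(count: int) -> None:
    threads = []
    for i in range(count):
        t = threading.Thread(target=virtual_user, args=(f"user-{i}",))
        t.start()
        threads.append(t)
        time.sleep(random.expovariate(1.0))    # random delay before the next user arrives
    for t in threads:
        t.join()

launch_users(5)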

In addition to testing the external demands on the system, Block also tests internal system processes such as load balancing. The ideal is to distribute user load across many servers, so that no one server becomes overwhelmed. In reality, there may be difficulties in assigning or relinquishing memory, or in accessing storage, depending on server load. Careful testing should reveal where the bottlenecks are. Even the impact of unforeseen events—such as pulling the plug on a server—must be simulated and evaluated.
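A toy model helps show what such a test exercises. The Python sketch below implements a simple least-connections balancer and then "unplugs" one server mid-run; it is a stand-in for reasoning about the tests, not Block's actual load-balancing setup.

# A simplified model of the behavior under test: requests go to the least-loaded
# server, and a server can be "unplugged" mid-run. A toy model, not Block's balancer.
class LeastConnectionsBalancer:
    def __init__(self, servers):
        self.active = {name: 0 for name in servers}   # open connections per server

    def route(self) -> str:
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def finish(self, server: str) -> None:
        self.active[server] -= 1

    def unplug(self, server: str) -> None:
        self.active.pop(server, None)                 # simulate pulling the plug

balancer = LeastConnectionsBalancer(["web1", "web2", "web3"])
for i in range(6):
    print(i, "->", balancer.route())
balancer.unplug("web2")                               # mid-test failure
for i in range(6, 9):
    print(i, "->", balancer.route())                  # load shifts to the survivors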

Block uses the monitoring tools OneSight and e-Manager to watch for specific indicators, including CPU utilization, memory usage, database locks, and table contention. "Typically, we start with a small subset of the entire system," comments Deloney. The testers then scale up both the breadth of the system under test and the user demand on it, including batch processes.
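The indicator watching Deloney describes boils down to comparing samples against thresholds. The Python sketch below uses invented threshold values and sample data purely for illustration; real figures would come from the monitoring tools, not from this script.

# A sketch of threshold checking over monitoring samples; thresholds and sample
# values are invented for illustration.
THRESHOLDS = {
    "cpu_percent": 80.0,
    "memory_percent": 85.0,
    "db_locks": 50,
    "table_contention_waits": 100,
}

def check_sample(sample: dict) -> list:
    """Return the indicators in this sample that exceed their thresholds."""
    return [name for name, limit in THRESHOLDS.items() if sample.get(name, 0) > limit]

# one sample per monitoring interval while the load test runs
samples = [
    {"cpu_percent": 42.0, "memory_percent": 60.0, "db_locks": 3, "table_contention_waits": 0},
    {"cpu_percent": 91.5, "memory_percent": 72.0, "db_locks": 12, "table_contention_waits": 4},
    {"cpu_percent": 88.0, "memory_percent": 93.0, "db_locks": 65, "table_contention_waits": 130},
]

for minute, sample in enumerate(samples):
    breaches = check_sample(sample)
    if breaches:
        print(f"minute {minute}: threshold exceeded for {', '.join(breaches)}")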

Since systems change, tests must change with them. While it is often possible to reuse or rework existing test scripts, it is sometimes necessary to create new ones. Naturally, you want a tool that makes that as easy as possible. "We have moved from Windows NT to 2000 to XP and now to .NET, with no problems," notes Deloney. It is important for tool vendors themselves to keep pace with changes in technology, incorporating them rapidly into their products. Responsiveness to the customer is essential.

Burlington Coat Factory: Tests Tailored to Fit
While satisfying many anonymous external users is important, meeting the demands of your own employees may be even more difficult. Burlington Coat Factory operates more than 300 retail clothing stores in 42 states, and it has thousands of internal users asking for vast amounts of data on purchases, sales, inventory, orders, and a dozen other items. The company's enterprise systems must be capacious enough to hold all that data and nimble enough to deliver it efficiently where needed.

"We have data pouring in, and users need to slice and dice it," observes Bruce Woods, manager of software quality and training. Burlington is an Oracle shop relying on Web forms and vertical applications such as Oracle Human Resources. They use test tools from Mercury Interactive, including WinRunner and LoadRunner.

WinRunner is an automated GUI tester that Woods finds indispensable. "It is difficult and complex to ensure that a user interface is behaving properly," he remarks. Testers must keep track of which fields are read-only, what size every field is, and how each click should affect the system. The test group is constantly building test suites to run against their interfaces.
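The contract such a suite enforces can be sketched in a few assertions. The Python fragment below uses a stand-in Form object rather than the WinRunner scripting interface; the field names and rules are hypothetical.

# A sketch of the checks a GUI test suite accumulates: which fields are read-only,
# how long each field may be, and what a click should do. The Form class is a
# stand-in for the application under test, not a WinRunner construct.
class Form:
    def __init__(self):
        self.fields = {
            "employee_id": {"read_only": True,  "max_length": 10, "value": "000123"},
            "last_name":   {"read_only": False, "max_length": 40, "value": ""},
        }
        self.saved = False

    def click_save(self):
        self.saved = True

def test_form_contract(form: Form) -> None:
    assert form.fields["employee_id"]["read_only"], "employee_id must be read-only"
    assert form.fields["last_name"]["max_length"] == 40, "last_name length changed"
    form.click_save()
    assert form.saved, "Save button did not take effect"

test_form_contract(Form())
print("GUI contract checks passed")

When the application's interface changes, assertions like these are what the test group must revise, which is why maintainability of the suite matters as much as coverage.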

An essential element of user interface testing is how easy it is to change the test when the code changes. Every change to the code should result in a change to the test. "We spend a lot of time creating and maintaining test suites," states Woods.

LoadRunner for Load Testing
LoadRunner is Mercury Interactive's tool for load testing. It is interactive, allowing testers to modify conditions during testing. A typical situation involves Burlington's human resources application, which any of more than 23,000 employees can access through in-store kiosks. The tool constantly monitors the system, providing essential data for later interpretation.

"You can't remove the human from the process," declares Woods. It is up to the testers to interpret test information, identify any problems, and ultimately suggest improvements to the system.

Burlington uses a testing lab so that testing activities will not affect the production environment. The lab uses the same architecture as the production system and approximates it closely. In addition to identifying architecture and hardware issues, tests can reveal software errors. For example, one set of tests showed that one method of ending an operation was safer for the data involved, while another was faster. Developers could then take that information and resolve the trade-off prudently.

The benefits can be substantial. Burlington is currently migrating from Oracle8i to Oracle9i Real Application Clusters. The tools allow the testers to run sophisticated tests against the middle tier—in their experience, the source of many bottlenecks—and the back-end systems. This would have been impossible with manual testing.

Woods also values the company relationship with Mercury Interactive. He finds its representatives easy to work with and customer-oriented. In the future, Burlington hopes to add performance-monitoring tools, especially for J2EE components on the middle tier.

What would make for a perfect test tool? One with no resource footprint on the system, one that could be distributed widely throughout the system with no impact, and one that allowed infinite drill-down into detailed layers of code. Until such a tool is created, though, testers are glad to use these tools to investigate processes that no human could ever hope to untangle unaided.

About the Author
Edmund X. DeJesus is a freelance technical writer in Norwood, Mass. You can reach him by e-mail at dejesus@compuserve.com.