FTP Online

Why Security and Outsourcing Are Key Trends in Testing and Performance
While most of the major players in the T&P market seem to be running in place, several trends and new challenges have come to the fore in recent months.
by Chris Preimesberger

Posted September 29, 2003

When an IT shop tests and optimizes its code while plowing through the software development cycle, that testing is now aimed as often at security holes and weak authentication as at anything else. This isn't your father's IT testing anymore; T&P is no longer exclusively about improving an application's speed and efficiency.

Sure, overall software quality remains Job One, but thanks to increased virus and worm activity in recent months, back doors and cracks in applications are being put on center stage and swathed in floodlights in an effort to identify Achilles' heels. Nonetheless, hackers seem to be retaining a slight edge in the battle over corporate systems security; just read the daily news for supporting evidence.

That's trend No. 1 in the business right now. No. 2 shouldn't be a surprise: outsourcing. An increasing number of companies are sending their work offshore to be tested and optimized. The key question: Can you really trust your code to strangers on some faraway continent?

Lastly, the nascent rise of enterprise grid computing, led by IBM, Microsoft, and Oracle, is creating new T&P challenges of its own.

But let's start with an overall look at the state of the market.

Market Leaders Are Running in Place
Theresa Lanowitz, Gartner Group's analyst for the testing and performance sector, wrote in a September 2003 report: "Challenges for the future of testing include security at the application level, application programming interface testing, and compliance testing. (Newer) vendors are beginning to create products that will have significant effect on the testing market during the next one to two years."

Mercury Interactive, with approximately 50 percent of the testing market, remains the segment leader, Gartner reports, followed by Compuware and Parasoft. These three companies have been market leaders for several successive years. Key "niche" players from whom Lanowitz expects innovation include Telelogic, Quest Software, Solstice Software, and RadView Software.

Other leaders are IBM/Rational, Empirix, Segue, and Keynote Systems.

"One of the areas with the greatest potential is API testing at the developer level," Lanowitz said. "Application integration and the continued use of packaged applications will drive API testing to a position of importance in the future. Although new challenges are abundant, the core of testing—functional and performance—will not decrease in importance."

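To make the idea concrete, here is a minimal sketch of what developer-level API testing can look like, written in Python with the standard unittest module. The OrderService class and its methods are hypothetical stand-ins for whatever public interface a team actually ships; the point is that the tests drive the code only through its published contract, error behavior included.

import unittest

# Hypothetical API under test: a tiny order service whose public
# methods form the contract that integrators depend on.
class OrderService:
    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def create_order(self, item, quantity):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = (item, quantity)
        return order_id

    def get_order(self, order_id):
        return self._orders[order_id]

class OrderServiceApiTest(unittest.TestCase):
    # API-level tests: exercise only the public methods, never internals.
    def setUp(self):
        self.svc = OrderService()

    def test_create_then_get_round_trip(self):
        order_id = self.svc.create_order("widget", 3)
        self.assertEqual(self.svc.get_order(order_id), ("widget", 3))

    def test_rejects_nonpositive_quantity(self):
        # The error behavior is part of the contract, too.
        with self.assertRaises(ValueError):
            self.svc.create_order("widget", 0)

if __name__ == "__main__":
    unittest.main()
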
Renewed Emphasis on the Importance of Testing
Software quality in general has not improved markedly in the last 20 years, according to several industry reports. Approximately 85 percent of all software projects come in over budget, arrive late for the target market, or do not perform up to expectations. Because this number is not getting any better, testing and performance work on new and upgraded applications is more important than ever.

With the advent of the new 64-bit processors this year (from Advanced Micro Devices, and soon from Intel), more-complicated multilayer applications will be forthcoming—requiring even deeper testing processes and mechanisms.

"The bottom line is that IS organizations need to understand the requirement for multiple types and layers of testing," Lanowitz said. "Testing is an important part of the application delivery and management cycle. Enterprises with competencies in testing will be more competitive and successful than those that ignore or delay testing."

Build Security Into the T&P Process
"Most security problems are related to some kind of bug," said Adam Kolawa, founder and CEO of Parasoft of Monrovia, California, a maker of error-prevention software. "Bugs allow for exploitation. Everybody is testing their apps out of the box, when they should be monitoring the quality of the code as it is generated. That's the best way we have at the moment to prevent security lapses."

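One way to act on Kolawa's advice, checking code as it is written rather than only exercising the finished application, is static analysis. The fragment below is a minimal illustrative sketch in Python, not a picture of any vendor's product: it walks a source file's syntax tree and flags calls to eval(), a classic injection-prone construct. Real error-prevention tools apply hundreds of such rules; the single rule here is only an example.

import ast
import sys

# Minimal static-analysis sketch: flag eval() calls, a common source
# of injection-style security bugs, before the code ever ships.
def find_eval_calls(source, filename="<string>"):
    tree = ast.parse(source, filename=filename)
    findings = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

if __name__ == "__main__":
    path = sys.argv[1]
    with open(path) as f:
        for lineno in find_eval_calls(f.read(), path):
            print(f"{path}:{lineno}: avoid eval(); it can execute attacker-supplied input")
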
However, enterprise and mid-size IT shops with different requirements, budgets, and overall philosophies can place completely different emphases on security. The bottom line is this: Virtually any software program can be reverse-engineered or hacked if the intruder is educated and motivated enough.

"Hackers are like germs; they keep building up immunities against whatever antibiotics we throw at them," said Joe McKendrick, a Philadelphia-based enterprise software analyst for Evans Data Corp. in Santa Cruz, California. "They keep finding new ways to get in and cause havoc. It's a continuing major challenge to the industry to keep finding ways to circumvent intruders. I'm not sure anybody has the complete answer yet."

Maybe not, but it's generally true that the more foresight a company applies, and the more testing and tuning it does, the harder it will be for someone to compromise the software once it's out in the world. At least that's the logic.

Microsoft, of course, has long been the biggest target of hackers, and for a number of reasons. "They simply have the most systems in place," McKendrick said. "Most other companies enjoy so-called 'security by obscurity,' and don't have nearly the same issues that Microsoft and other huge companies have to deal with."

However, some new evidence has surfaced suggesting that Microsoft's security problems may need to be put into a larger context. An industry security study released Sept. 24 by the Computer and Communications Industry Association (CCIA), "The Myth of the Monoculture," warns of the dangers of "monoculture" in the IT industry. It drew the following response (in part) from Jonathan Zuck, president of the Association for Competitive Technology:

The study's premise of an existing monoculture in computer security is inherently false. Of 660 million Windows users worldwide, less than one-tenth of one percent were impacted by the notorious MSBlast worm last month. Why? In reality, each Windows user has different configurations of hardware, routers, virus software, and security habits. The diversity that comes from the security stack of hardware, software, and user habits leads to an extremely heterogeneous security environment even on a single operating system like Windows. The evidence clearly shows that the monoculture feared by the authors exists only in theory and not in reality.

Undoubtedly, this is a developing story that will continue to draw attention.

Pros and Cons of Outsourcing T&P
If you're going to hire an offshore service to handle all or part of your T&P work, keep in mind that cultural differences can play a big part in how well the work gets done.

"I'm finding that 'offshoring' some of my line work helps my overall bottom line and also frees my U.S. engineers to do more design and architecting, which is what they do best," said Matt Liotta, founder and CEO of Montara Software in Atlanta, Georgia. "The culture of the offshore shop plays importantly in this picture. For example, when I send exact specifications on a project to my shop in India, I find that their developers are excellent at following my exact orders. This works perfectly for me … I don't need to have them making other suggestions or whatever. This is a generalization, but I find it to be true; they follow orders well, and their culture literally shows up in my software."

Kolawa agreed. "Subcontractors in China and India are a viable alternative for many U.S. companies. Whereas a U.S. developer may deviate a bit from the spec but will often stumble onto a new idea, the Indian or Chinese developer basically will say, 'Yes, master,' and do it exactly how it's spelled out," he said. "But this is why so much innovation comes out of the U.S."

Grid Computing Brings New Challenges
Testing and optimizing applications for a slew of grid network computers presents new problems; in fact, this topic is worth a full story by itself.

Grid computing is the latest marketing-speak trend in enterprise IT systems. Ostensibly, the advantage is that companies can save money immediately by using existing computers together with inexpensive newer ones in a fail-safe network powered by a high-end central database and application servers. This runs counter to the conventional thinking that companies must keep purchasing more, and more powerful, computers and software every few years, writing off previous generations of expensive software and hardware.

For a grid test to be even close to relevant, testers must emulate something that's very hard to reproduce: What happens to an app when it's running on dozens, if not hundreds, of different servers and workstations across a vast network? Those kinds of challenges speak for themselves.

"First of all, you've got to test apps on a (simulated) network that all are roughly the same size and scale that you're actually putting into production," McKendrick said. "Then, you have to test for many more use cases, which can be costly and time-consuming. Not a simple or easy thing to do at all."

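As a small illustration of what "many more use cases" can mean in practice, the sketch below simulates a pool of concurrent clients hitting a service at once, using only Python's standard library. The target URL and client count are hypothetical placeholders; an actual grid test would also spread load generators of this sort across many machines to approach production scale.

from concurrent.futures import ThreadPoolExecutor, as_completed
from urllib.request import urlopen
from urllib.error import URLError

# Hypothetical endpoint standing in for the application under test.
TARGET_URL = "http://app-under-test.example.com/health"
SIMULATED_CLIENTS = 50

def fetch_status(url):
    # One simulated client: hit the service once and report the outcome.
    try:
        with urlopen(url, timeout=5) as resp:
            return resp.status
    except URLError as exc:
        return f"error: {exc.reason}"

if __name__ == "__main__":
    # Fan out concurrent requests to approximate the load a
    # grid-deployed application would see from distributed users.
    with ThreadPoolExecutor(max_workers=SIMULATED_CLIENTS) as pool:
        futures = [pool.submit(fetch_status, TARGET_URL)
                   for _ in range(SIMULATED_CLIENTS)]
        results = [f.result() for f in as_completed(futures)]
    successes = sum(1 for r in results if r == 200)
    print(f"{successes}/{SIMULATED_CLIENTS} requests succeeded")
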
Nobody said software development was going to be easy.

About the Author
Chris Preimesberger is an IT writer/researcher based in the San Francisco Bay Area.