
Special Report: Testing and Application Quality

Applying Automated Error Prevention
The time is ripe for the software industry to apply the AEP concept that other industries have used for increased productivity and cost reduction.
by Adam Kolawa

November 17, 2006

In today's world, where IT has become an essential part of the enterprise and software controls everything from automobiles to airplanes to pacemakers, software defects are no longer just an annoyance. Organizations that release or deploy software with functionality or security flaws now risk spoiled reputations, reduced market share, and costly lawsuits. Outages in critical business systems could cost companies millions of dollars per hour, and even one glitch in software embedded in medical or military/aerospace systems could cost lives.

Frighteningly, there are now more opportunities than ever for development teams to make mistakes that introduce these devastating defects into the software. Today's enterprise systems are typically extremely complex, multitier systems—often a precarious combination of new technologies and old, such as legacy systems wrapped as Web applications or services, and then integrated with newer systems through a service-oriented architecture (SOA). At each layer there are different opportunities for something to go wrong, and a simple mistake in one component can ripple throughout the system, causing far-reaching, difficult-to-diagnose problems.

Moreover, the development process itself has grown more complicated, opening still more opportunities for the introduction of defects. For example, communication barriers introduced by offshore outsourcing and distributed development make it more difficult than ever to ensure that the software meets the customer's expectations. The increasing complexity of enterprise systems has also produced hard-wired build processes that may work in one context but cannot be modified, extended, or moved to other environments without a significant risk of introducing problems, or of breaking the build altogether.

Modifying the development process to reduce opportunities for mistakes—for instance, by setting upstream traps to prevent the root causes of problems whose symptoms manifest themselves downstream—would dramatically reduce the amount of late-cycle debugging that is responsible for most project setbacks. However, most development teams try to achieve quality by attempting to identify and remove all of the application's defects at the end of the development process.

The Complexity Obstacle
This bug-finding approach is not only resource-intensive but also largely ineffective. To have any chance of exposing all of the defects that may be nested throughout the application, the team would need to identify every single path through the application and then rigorously test each and every one. Moreover, any error found would be difficult, costly, and time-consuming to fix, because the effort, cost, and time required to fix each bug increase exponentially as the development process progresses.
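To see why exhaustive end-of-process path testing breaks down, consider a rough back-of-the-envelope sketch (an illustration, not from the article): every independent branch point in sequential code roughly doubles the number of distinct execution paths, so path counts grow exponentially with even modest code size.

```python
# Illustrative sketch: each independent if/else in sequence doubles
# the number of distinct execution paths through the code, so a
# modest number of branch points already yields an untestable total.

def path_count(sequential_branches: int) -> int:
    """Paths through code with the given number of sequential, independent branches."""
    return 2 ** sequential_branches

for n in (10, 20, 30):
    print(f"{n} branches -> {path_count(n):,} paths")
```

Thirty sequential branch points already yield over a billion paths, which is why teams that rely solely on end-of-cycle testing can, at best, sample a tiny fraction of an application's behavior.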

If error prevention is so much more effective than relying solely on end-of-process testing, why is it practiced so rarely in the software industry? The answer is that it is difficult. Just determining how to prevent a single defect can be a complex process: it requires someone who truly understands the code to abstract a general root cause from a specific symptom, figure out how to prevent that root cause, and then determine how to implement a preventive practice in the development process.
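As a hypothetical illustration of that symptom-to-root-cause abstraction (the scenario and function names are invented for this sketch, not taken from the article): suppose a series of hard-to-diagnose field failures all trace back to errors being silently swallowed. Rather than debugging each symptom, the team encodes the root cause as an automated check, here a simple scan for bare `except:` clauses in Python source, that runs on every build.

```python
import re

# Hypothetical preventive check: flag bare "except:" clauses, which
# silently swallow every error (the abstracted root cause behind many
# distinct downstream symptoms).
BARE_EXCEPT = re.compile(r"^\s*except\s*:", re.MULTILINE)

def find_swallowed_errors(source: str) -> list[int]:
    """Return 1-based line numbers containing a bare 'except:' clause."""
    return [source[:m.start()].count("\n") + 1
            for m in BARE_EXCEPT.finditer(source)]

sample = "try:\n    risky()\nexcept:\n    pass\n"
print(find_swallowed_errors(sample))  # -> [3]
```

The point is not this particular rule but the pattern: once a root cause has been abstracted, a cheap automated check applied to every build prevents the whole class of defects instead of catching instances one symptom at a time.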



