
Special Report: Web Services and Enterprise Architecture


Guarding Against the Network Menace
Monitor performance securely in real time from the user's browser back to the database through the Web and application infrastructure.
by Hon Wong

February 22, 2007

Using live customers as the test bed for applications is generally considered bad practice, but that is exactly what happens in the brave new world of Web applications. Traditional development tools and methodologies don't let development teams fully test and debug Web infrastructure issues before real users get their hands on the new application. The result isn't pretty.


As reported by Gartner, up to 30 percent of development time can be consumed by debugging production issues. The important operational question is how to minimize this 30 percent time sink so that developers are called to task only when the problem really lies in the code, rather than being constantly distracted by infrastructure issues.

Launching a Web application is like hang gliding off the cliffs overlooking the Pacific Ocean near foggy San Francisco. A hang glider jumps from the wind-swept cliff above the roaring ocean, sometimes into a bank of thick fog. This stunt would be a near-death experience without reliable gear complemented by actionable information and the experience required to tackle unforeseen circumstances. Nevertheless, launching new Java applications often emulates this experience.

Most Java developers and application architects diligently adhere to a best-practice development process to ensure the quality of the application. Within this process there are well-defined procedures supplemented by well-understood tools to ensure that, at least in the insular environment of the development lab, features are implemented as specified, the application performs to expectation, and it doesn't consume too much memory or computing resources.

To carry forward the hang-gliding metaphor, the real concern is not how the glider holds up in the shop, but how it performs when you jump off the cliff. Similarly, after ensuring that the application passes the rigor of QA tests in the lab, the developer or architect must find out how it will perform when deployed on the Web infrastructure. The unknowns of the production environment abound:

  • Are the servers running the expected release of the operating system and middleware to deliver the required performance?
  • What other applications are deployed on these same servers that will conflict with or impact the performance of the new application?
  • Are the servers properly load balanced?
  • What about the Internet cloud? Is there enough bandwidth available consistently to support the rigor of the application?
  • If a content delivery network is used to deliver high bandwidth-consuming objects, then how can you ensure the speedy delivery of these objects?
  • If the application infrastructure is linked to the Web infrastructure through an application delivery controller (ADC), will the firewall, SSL off-loading, caching, and compression schemes being implemented positively or negatively impact application performance? Note that ADC technologies, according to Gartner, reside in the data center and are deployed asymmetrically (that is, only on the data center end); they accelerate end-user performance of browser-based and related applications by offering several technologies that work at the network and application layers.
  • How will the uncontrollable "last mile" components—the end user's PC, browser, and Internet connection—impact the overall performance and availability of the application?

The list of questions goes on and on. Because of such complexity, there is no way to adequately eliminate all the risk factors associated with deploying a Web application.
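Several of the unknowns above can at least be surfaced at startup rather than discovered in production. The following is a minimal sketch, not any vendor's tooling: the class name `EnvironmentProbe` and the choice of facts to collect are illustrative assumptions, using only standard JDK calls to snapshot the runtime environment and confirm that a dependent host resolves.

```java
// Hypothetical sketch: probe a few production unknowns at deployment
// time -- OS and JVM versions, available resources, and whether a
// downstream dependency (e.g., the database host) even resolves.
import java.net.InetAddress;
import java.util.LinkedHashMap;
import java.util.Map;

public class EnvironmentProbe {

    // Collect the runtime facts the checklist above asks about.
    static Map<String, String> snapshot() {
        Map<String, String> facts = new LinkedHashMap<>();
        facts.put("os.name", System.getProperty("os.name"));
        facts.put("os.version", System.getProperty("os.version"));
        facts.put("java.version", System.getProperty("java.version"));
        facts.put("processors",
                String.valueOf(Runtime.getRuntime().availableProcessors()));
        facts.put("max.heap.mb",
                String.valueOf(Runtime.getRuntime().maxMemory() / (1024 * 1024)));
        return facts;
    }

    // Verify that a dependent host resolves via DNS before cutover.
    static boolean hostResolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        snapshot().forEach((k, v) -> System.out.println(k + " = " + v));
        System.out.println("localhost resolves: " + hostResolves("localhost"));
    }
}
```

Logging a snapshot like this on every deployment gives operations a baseline to compare against when a server turns out to be running an unexpected OS or middleware release.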

Ensure Application Performance
While the risk of deploying Web applications cannot be completely eliminated, this simple three-step approach will greatly enhance the chance of smooth deployment and accelerated ramp-up (to ensure the newly deployed application doesn't share the same fate as Icarus):

  1. Trace all load-generated (synthetic) and beta-user (real) transactions from end to end to identify and fix both the speed bumps created by the new application's interaction with the Web infrastructure and with other applications sharing that infrastructure, and the bottlenecks within the Web infrastructure itself (see Resources).
  2. If an ADC is available, adjust the application delivery features (caching, compression, and so on) of the ADC that separates the application infrastructure from the Web infrastructure to compensate for the performance issues discovered through step 1.
  3. Apply the real-user monitoring and performance diagnosis capability to monitor real transactions in real time, giving IT operations and development a common platform to triage and resolve performance problems before they impact the user experience (see Figure 1).
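The real-user monitoring in step 3 can be reduced to a simple primitive: time each transaction against a latency budget and record the ones that blow it, so operations and development triage from the same numbers. This is a minimal sketch under stated assumptions; the class name `TransactionMonitor` and the budget-based logging policy are illustrative, not part of any product described in the article.

```java
// Hypothetical sketch of step 3: wrap each transaction, measure its
// wall-clock time, and keep a triage log of the ones that exceed a
// configured latency budget.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class TransactionMonitor {
    private final long budgetMillis;
    private final List<String> slowLog = new ArrayList<>();

    public TransactionMonitor(long budgetMillis) {
        this.budgetMillis = budgetMillis;
    }

    // Run a unit of work, returning its result; record a triage entry
    // when the elapsed time exceeds the budget.
    public <T> T monitor(String name, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMs > budgetMillis) {
                slowLog.add(name + " took " + elapsedMs
                        + " ms (budget " + budgetMillis + " ms)");
            }
        }
    }

    public List<String> slowTransactions() {
        return slowLog;
    }
}
```

In a real deployment the same wrapper would sit in a servlet filter or proxy so every request is measured; the point is that the timing happens on live transactions, not only on synthetic load.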















