FTP Online

Throwing Apps "Over the Wall"
META Group's Thomas Murphy answers six questions about application deployment.

Posted October 15, 2003

FTPOnline interviewed Thomas Murphy, senior program director at META Group, Inc., an IT research and advisory service. See what he has to say about zero administration, "throwing applications over the wall" to IT, and more.

FTPOnline: What are the biggest issues facing IT in terms of deploying and maintaining applications?

Thomas Murphy: As we move toward an era of Web services and grid-based computing, there is a tremendous demand on developers to radically improve the quality and documentation of their software and also to understand the operational demands of the code they create. This drives a need for much better coordination between development and operations staff members.

FTPOnline: There's a perception that in-the-trenches developers don't think about deployment. Has this changed? How has that affected IT?

Thomas Murphy: I believe this perception still holds. Developers build systems and toss them over the wall to operations without warning them ahead of time about requirements or understanding how they could better instrument their code to improve application manageability. This creates huge cost issues, with applications deployed to dedicated servers, and it inhibits server consolidation.

Operations also gets broadsided a lot, having to scramble at the last minute to put systems into production. Some organizations are doing a better job, but we believe that operations staff should be involved during the requirements-gathering phase and on through the analysis, design, and development of software solutions.

FTPOnline: It's been said that most IT groups see 60 to 80 percent of their budget soaked up by maintenance of existing applications. What can be done to reduce maintenance time and cost?

Thomas Murphy: This is tough for a few reasons. First, companies often do a poor job of defining what constitutes maintenance. When a bug fix is required, everyone engages in a little "can you do this for me while you're at it?" So first you have to define software maintenance as Fix on Fail; everything else is enhancement work.

Second, organizations should practice portfolio management to understand the value of their software assets and the costs associated with running them. This enables a reasoned approach to deciding when systems should be rewritten or retired. Going forward, the creation of an Enterprise Architecture that defines overall policies, patterns, and practices is fundamental, because costs often run out of control due to the inconsistent nature of what we produce.

FTPOnline: Is "zero administration" attainable? What is required to achieve that?

Thomas Murphy: Zero administration has been a buzzword for thin-client desktops. Certainly we can reduce the cost of deploying systems, and thin-client solutions can help because you deploy only to the server. But this just means you have zero client deployment cost; you still have costs associated with putting the application on the server.

With managed-code systems such as .NET, you gain a powerful aid in the form of application assemblies, which remove the problems associated with DLL or shared-library hell. Still, keeping all the pieces together means you have to administer something: patches to the OS, updates to drivers, new versions of the application server, and so on. Administration costs will always be there as long as there are changes to applications and the underlying infrastructure. But again, you can minimize these costs by using Enterprise Architecture to set standards and by paying attention during development.

FTPOnline: What is the most common thing that can go wrong during an enterprise deployment?

Thomas Murphy: This depends on the platform and the application stereotype. The worst thing we hear of is new applications built on new infrastructure that doesn't follow the EA. For instance, the corporate standard could dictate AIX with DB2 UDB and a specific version of WebSphere, such as 5.1, as the application server, but IT delivers an application it built and "tested" on WebLogic running on Solaris with Oracle.

Beyond that, there is a lot of complexity around security, caching, and other performance measures. Because people are often under the gun to get the system up tomorrow, they just throw more hardware at it. They build more capacity into the deployment than they think they'll need, and hope it all performs.

FTPOnline: HP and IBM are talking about better integrating their management tools into the development process, IBM by making Rational interface with Tivoli and HP by improving OpenView. Is this the path of the future? Why or why not?

Thomas Murphy: This is part of the path. Getting developer tools to link with operations and management tools is a critical step. But it isn't just the link; you either have to instrument the code automatically or train developers in the best practices for doing so. Of course, developers don't want a lot of instrumentation, because it represents overhead in the software's execution path. However, grid/utility computing, Web services, and so on are all going to demand it, so integrating with the development process is important. And it is important to push that all the way back to requirements and design.
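To make the instrumentation point concrete, here is a minimal sketch of what such hooks might look like. It uses plain java.util.logging; the OrderService class and its processOrder operation are hypothetical examples, not tools or APIs discussed in the interview, and a real management product would typically collect this data through its own agents or interfaces.

    import java.util.logging.Level;
    import java.util.logging.Logger;

    // Hypothetical service instrumented so that each operation reports its
    // outcome and elapsed time in a form operations staff can monitor.
    public class OrderService {

        private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

        public void processOrder(String orderId) {
            long start = System.currentTimeMillis();
            boolean succeeded = false;
            try {
                // ... the real business logic would go here ...
                succeeded = true;
            } finally {
                long elapsedMs = System.currentTimeMillis() - start;
                // Structured log records like this one can be scraped by a
                // management agent or redirected to a monitoring system.
                LOG.log(Level.INFO,
                        "operation=processOrder orderId={0} success={1} elapsedMs={2}",
                        new Object[] { orderId, succeeded, elapsedMs });
            }
        }

        public static void main(String[] args) {
            new OrderService().processOrder("A-1001");
        }
    }

Instrumentation at this level adds negligible overhead, while the log records give operations a consistent view of what the application is doing in production.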

About Thomas Murphy
Thomas Murphy is currently a senior program director with the META Group, an IT research and advisory service, where he focuses on enterprise application development methods, tools, and architectures. Reach him at thomas.murphy@metagroup.com.