Combat Increasing IT Complexity
Achieve success in your architecture initiatives through the appropriate mix of people, process, and technology.
by Firdaus Bhathena
February 14, 2006
Over the last decade, the role of an enterprise architect (EA) has evolved from a focus on consolidation of technologies, to application and data integration, and most recently, to business alignment and support of business strategy. The latest iteration of the EA role calls for an increased focus on architecture planning to ensure that IT is in a position to effectively respond to ever-changing business needs.
As most of us have learned, the best architecture planning efforts are tied to the baseline of what you have in your environment today. Further, the foundation of any successful re-architecture initiative is a fully accurate picture of the systems, applications, and other infrastructure components in your environment, and how they collectively function to deliver IT services to the enterprise. When these enterprise linkages and interactions are documented and widely understood, the EA performs a powerful role in business enablement.
This sounds relatively straightforward, but several factors can make it difficult to get an accurate view of this information when it's needed and leverage it to support important business decisions.
The first factor is increased complexity. As IT organizations continue to move from monolithic, legacy applications toward distributed business applications, strategic and operational teams alike are struggling with infrastructure that is increasing in scope, scale, and complexity (see Figure 1). Nothing runs on "a box" anymore; distributed application infrastructures are characterized by an integrated and customized collection of many smaller software components, with dependencies on common "building blocks" such as databases, Web servers, and application servers. Complexity is further compounded by mergers and acquisitions and decades of layered technology purchases.
IT complexity has a major impact on availability, manageability, and operations costs. Complex IT environments are inherently expensive to operate, more difficult to manage, and can be unpredictable when change is introduced. However, complexity cannot be eliminated in a dynamic and innovative environment, so the goal must be to manage it by making appropriate investments in architecture, internal culture, organization, and technology.
The good news is that investment in enterprise architecture has emerged as a core strategy for leading organizations seeking to rein in complexity within their IT environments and create a common language across management disciplines.
Beware the Rate of Change
The second factor that can make it difficult to get an accurate view of your environment is the rate of change. Operational changes to infrastructure (new application rollouts and upgrades, configuration changes, hardware changes, hot fixes, security patches, and so on) occur continuously. Larger change projects such as data center migrations, server consolidations, and infrastructure assimilation of acquired companies are also routine activities for leading companies today. Change is a well-known complexity multiplier, and as you're all aware, its pace continues to increase, with no slowdown in sight. Without an understanding of what's in the environment and how its elements depend on one another, any change to the architecture is fraught with risk.
Scattered information is a third factor complicating a clear view of your environment. IT architecture and management information tends to be scattered throughout a company and to reside on whiteboards, in notebooks, and in the heads of enterprise architects, system and network administrators, and other critical IT personnel. On the positive front, EA tools are emerging to combat this challenge by storing relevant information in a repository and providing capabilities to assemble and present the data in a variety of ways.
A fourth and final factor to account for: technology advancements such as virtualization and provisioning. As every aspect of enterprise architecture and computing has grown more complex, the flexibility and intelligence that virtualization and provisioning add to the management mix has made these technologies increasingly attractive. Virtualization reduces technology limitations and provisioning reduces capacity constraints. Both virtualization and provisioning increase the rate of change, which contributes to increased complexity.
Given these challenges, how can you get a fully accurate picture of the systems, applications, and other infrastructure components in the environment? You also need to know how they support the delivery of IT services to the enterprise, knowledge that is central to nearly every EA initiative. Here are some suggestions.
How to Get the Big Picture
First, foster and promote collaboration. The EA's goal is to select and implement the right investments in standards, procedures, and technologies to support the organization's business goals. This requires teamwork and collaboration with all groups within IT as well as key business personnel to ensure that the EA has a clear understanding of business needs, and that the business respects the role of the EA. Armed with this knowledge, the EA can, for example, investigate the appropriate technologies to support business needs, gain buy-in on technology standardization, and work with the appropriate groups to resolve or rationalize exceptions to standards, all while maintaining insight into the big picture of the architecture environment.
Second, implement structure through framework and process adoption. Complexity can be reduced (although not eliminated) with structure. A structured environment is built around standards and conventions for all components of the infrastructure, and many frameworks and process methodologies exist for providing structure. While there are no common definitions, standards, processes, or tools for managing enterprise architecture, EA frameworks that can provide structure include the Zachman Framework, the Open Group Architecture Framework, and the US Federal Enterprise Architecture Framework.
Widely used process methodologies include Control Objectives for Information and Related Technology (COBIT), which provides a reference framework for management, users, and IT audit, control, and security practitioners and covers all IT activities; Six Sigma, which is a broadly applied, disciplined, data-driven approach and methodology for eliminating defects; and IT Infrastructure Library (ITIL), which is a widely accepted and cohesive set of IT best practices focused on service management that continues to gain traction within companies seeking to improve their change management efforts.
Third, leverage technologies that give you the big-picture view of your environment. To manage the complexity of IT, organizations must begin by understanding their architecture and the interrelationships and dependencies that exist within the environment. In the past, many of us have attempted to create a map, or blueprint, of elements in our IT infrastructure by maintaining multiple rudimentary data stores such as spreadsheets, Visio diagrams, or Microsoft Access databases containing data gathered through manual efforts. This is an effort begging for automation: a manual approach simply cannot provide sufficient information due to the size, complexity, and amount of changes occurring within IT, or deliver fully accurate information at any given point in time.
One company reported that manually mapping out just a single critical business application took five staff members several weeks, and due to the dynamic nature of their environment, the data was out-of-date before the project was completed. This was a source of great frustration because they needed this information to support a host of strategic initiatives and operational tasks.
You can address this challenge with technologies such as automated application and server-dependency mapping solutions that provide information about hierarchical and peer-to-peer relationships existing among infrastructure components. The best tools in this category completely eliminate the manual effort traditionally associated with this process by automatically discovering infrastructure components, dynamically mapping their dependencies, and tracking changes in real time as they occur. The result is an automatically generated, dynamically updated picture of the complex server and application relationships within the IT infrastructure.
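To make the idea concrete, dependency data of this kind is naturally modeled as a directed graph, and change-impact analysis is a walk over its reverse edges. The following minimal sketch assumes hypothetical component names and a hand-built dependency map; a real discovery tool would populate this automatically.

```python
from collections import defaultdict, deque

# Illustrative "depends on" relationships among hypothetical components.
# Key depends on each item in its list.
dependencies = {
    "order-app":    ["app-server-1", "orders-db"],
    "billing-app":  ["app-server-1", "billing-db"],
    "app-server-1": ["web-server-1"],
    "orders-db":    [],
    "billing-db":   [],
    "web-server-1": [],
}

# Build the reverse map: component -> components that depend on it.
dependents = defaultdict(list)
for component, deps in dependencies.items():
    for dep in deps:
        dependents[dep].append(component)

def impact_of_change(component):
    """Return every component directly or transitively affected by a
    change to `component`, via a breadth-first walk of dependents."""
    affected, queue = set(), deque([component])
    while queue:
        current = queue.popleft()
        for parent in dependents[current]:
            if parent not in affected:
                affected.add(parent)
                queue.append(parent)
    return affected

print(sorted(impact_of_change("app-server-1")))  # → ['billing-app', 'order-app']
```

The same traversal, run against continuously discovered data rather than a static dictionary, is what lets these tools answer "what breaks if I patch this server?" before the change is made.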
The information from these tools provides the foundation for strategic initiatives such as audit and compliance, disaster recovery, business continuity, and data center migrations, in addition to operational activities such as problem resolution and change impact analysis that absolutely require real-time information. Additionally, these tools can provide a critical feed of real-time, fully accurate application and server information into an enterprise configuration management database (CMDB) and maintain synchronization between live configurations and records stored in the CMDB (see Figure 2), or serve as a feeder into other EA tools used to create blueprints of business, systems, and technical architecture.
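The synchronization step described above amounts to reconciling live discovered state against CMDB records and flagging drift. This sketch uses hypothetical attribute names and records, not any vendor's API, to show the core comparison:

```python
# Hypothetical CMDB records, keyed by component name.
cmdb_records = {
    "web-server-1": {"os": "RHEL 4", "memory_gb": 4, "app_version": "2.1"},
    "orders-db":    {"os": "Solaris 9", "memory_gb": 8, "app_version": "9.2"},
}

# Hypothetical state reported by an automated discovery pass.
discovered = {
    "web-server-1": {"os": "RHEL 4", "memory_gb": 8, "app_version": "2.1"},
    "orders-db":    {"os": "Solaris 9", "memory_gb": 8, "app_version": "9.2"},
    "app-server-2": {"os": "RHEL 4", "memory_gb": 4, "app_version": "1.0"},
}

def reconcile(cmdb, live):
    """Compare live state to CMDB records. Return drifted attributes as
    {component: {attr: (recorded, actual)}} plus components discovered
    in the environment but not yet registered in the CMDB."""
    drift, unregistered = {}, []
    for name, attrs in live.items():
        record = cmdb.get(name)
        if record is None:
            unregistered.append(name)
            continue
        changed = {k: (record.get(k), v)
                   for k, v in attrs.items() if record.get(k) != v}
        if changed:
            drift[name] = changed
    return drift, unregistered

drift, new_items = reconcile(cmdb_records, discovered)
print(drift)      # web-server-1's memory drifted from the recorded value
print(new_items)  # components discovered but absent from the CMDB
```

A production feed would run this comparison continuously and either update the CMDB automatically or raise the discrepancies for review, which is what keeps the CMDB trustworthy as the environment changes.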
At the end of the day, an accurate understanding of your systems, applications, and other infrastructure components and how they function collectively to deliver enterprise services is essential to the EA's ability to realize business benefits such as cost reduction and technology standardization, process improvement, and strategic differentiation.
About the Author
Firdaus cofounded Relicore in November of 2000 and served as the company's president and CEO for its first four years. In his current role, he is responsible for defining and driving Relicore's market, product, and technology strategies. Prior to Relicore, Firdaus cofounded WebLine Communications, a company responsible for introducing Web-based, enterprise-class customer interaction software to the international call center market.