
Searching for Holy Grails: Interview with Jon Toigo (Continued)

Challenges, Costs, and Best Practices
FTPOnline: What are some of the main challenges and costs involved in DR planning?

Jon Toigo: The primary challenge is usually mapping the inputs and outputs to business processes and ferreting out the infrastructure components that handle them directly or indirectly. The problem is made worse by the fact that there is rarely an up-to-date description available of the business process itself, let alone of the systems, networks, and storage that enable it.

Secondly, planners confront a major hurdle as they seek to characterize data—to determine what needs to be protected and what doesn't, what needs to be restored immediately versus what can wait awhile. When the computer was originally designed, some brainiac decided data should be self-destructive. It should overwrite itself whenever modified. We need a new mechanism that attaches headers to data that will identify what app was used to create the data and what protection and retention characteristics the data manifests. I describe a solution to this problem in my next Holy Grail book. It could be implemented readily, especially as Microsoft moves data storage away from files and into object-oriented databases.
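
To make the header idea concrete, here is a minimal sketch, in Python, of what such a self-describing wrapper might look like. The field names and the length-prefixed JSON layout are illustrative assumptions, not the scheme Toigo describes in his book:

from dataclasses import dataclass, asdict
import json
import time

@dataclass
class DataHeader:
    # Illustrative self-describing header attached to a data object.
    origin_app: str        # application that created the data
    created: float         # creation timestamp (epoch seconds)
    retention_days: int    # how long the data must be kept
    protection_class: str  # e.g. "mirror", "snapshot", "none"
    restore_priority: int  # 1 = restore immediately; higher = can wait

def wrap(payload: bytes, header: DataHeader) -> bytes:
    # Prepend a length-prefixed JSON header so a storage layer can
    # read the data's protection policy without parsing the payload.
    meta = json.dumps(asdict(header)).encode()
    return len(meta).to_bytes(4, "big") + meta + payload

# Example: tag an order record as mirrored, kept seven years, restored first.
hdr = DataHeader("order-entry", time.time(), 2555, "mirror", 1)
blob = wrap(b"...order data...", hdr)

A storage or backup layer could then triage objects by restore_priority and protection_class at recovery time, which is the sorting Toigo says planners currently do by hand.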

Finally, most planners confront the challenge of getting sign-off on a purchase order from cash-strapped senior management. You need to recontextualize DR planning so that the strategies you seek to implement do more than simply reduce risk. They must deliver the whole enchilada of business value: risk reduction, cost savings, and business enablement. So, look for dual-use strategies—for example, network resiliency strategies that have the additional benefit of improving application performance for end users.

FTPOnline: Are there some general "best practices" that can help with DR planning?

Jon Toigo: DR is represented by a lot of "gurus" as a mysterious undertaking whose rules are known only to a few privileged practitioners. In fact, it is a straightforward application of common sense. You need to know technology to do the job. You need to know project management. You need to know how to negotiate with vendors. You need diplomacy, tact, business savvy, and excellent written and oral communications skills. Beyond this, keep yourself educated on the subject by reading books, attending the occasional conference, and doing the professional development activities that you would normally do to stay current with technology. I don't believe in DRP certifications, but I do believe in data protection certification (certifying that you know something about the technologies available for replicating data).

FTPOnline: How do distributed computing environments make DR planning more complicated?

Jon Toigo: Distributed computing is a two-edged sword, really. Distributed systems may be more survivable if measures are taken deliberately to implement redundancies and networks are fully meshed. After the Kobe earthquake a few years ago, a company with a distributed environment was back up and running within four hours by working around damaged platforms, while a company with a centralized IT infrastructure was down for four weeks.

However, as I mentioned earlier, distributed environments often fall prey to unenlightened designs. Rather than building distributed architectures that can be rebuilt on the fly from different types of servers, networks, and storage devices, designers too often implement architectural designs that require 1-for-1 replacement of all components, a prohibitively expensive strategy.

FTPOnline: How often do companies need to revisit DR plans? Are there effective ways to test them?

Jon Toigo: As often as you can. Certainly on a routine basis every few months, but also after any new technology or application is implemented. Testing can be done through a paper walkthrough or through the actual implementation of strategies at a recovery site. There is no one right way. The wrong way is not to test at all.

FTPOnline: In your book, Disaster Recovery Planning, you say that you look forward to a time when DR planning books aren't necessary because DR planning will be integral to all companies at all levels. Is that likely to happen anytime soon, and if not, why?

Jon Toigo: It could happen, especially as organizations begin looking for ways to directly map infrastructure to business process and seek Service Level Agreements from their IT organizations (whether in-house or outsourced). When IT is run like a business, rather than an exception to the rules of profit and loss, it will be forced to deliver services that are supported by resilient architecture. I also think that the next-generation storage technologies—and I'm not talking about Fibre Channel fabrics or the storage area networks (SANs) of today—will embrace a utility model. Running storage as a utility will require that provisions be made for the management of data, not just hardware. Achieving capacity allocation efficiency and capacity utilization efficiency carries with it the burden of managing data replication and providing redundant access options. So, what we think of as DR will eventually become part of the design process itself.
