Smart Architectures, Architects
Intel's Prasad L. Rampalli sheds light on the complicated realities facing enterprise architects today.
by Matt Carter
Posted April 8, 2003
Prasad L. Rampalli
Vice President, Finance and Enterprise Services
Director and Chief Architect, Architecture and Integration Platforms
Intel Corp.
Prasad L. Rampalli's title sums up the complicated reality facing enterprise architects. Rampalli is Vice President of Finance and Enterprise Services and Director and Chief Architect of Architecture and Integration Platforms, leading Intel's push toward a common IT infrastructure across the company.
Rampalli sat down with Matt Carter, Vice President of Internet Products at Fawcette Technical Publications, during the Intel Developer Forum in San Jose, Calif. They discussed the role of enterprise architects at Intel, the importance of a common platform for building a smart architecture, how to get IT and development to pull in the same direction, and how TCO drives everything an architect does.
Anatomy of an Enterprise Architect
Matt Carter: Your title is a mouthful; can you briefly describe your role at Intel?
Prasad L. Rampalli: I serve as the chief architect for applications and the infrastructure that Intel uses around its IT activity. My job is to ensure that we have a unified architecture for both the applications and infrastructure. I'm trying to drive a transformation to enable significant reuse and productivity in the development lifecycle, while at the same time optimizing our total cost of ownership (TCO) off the infrastructure. And how we do that in a unified manner, front to back, is at the core of what this title means.
Carter: How did you get to this position at Intel?
Rampalli: I've been at Intel for 19 years, mostly in IT over the past 10 years. Before that, I was a manufacturing engineer working on test systems and utilization and so on. At that time, I was looking for a management system that would give me information on specific test parameters tied to our products. I came into IT as an end user, and as one of the harshest critics of systems that don't meet end-user needs.
Back in the early 1980s, the capital cost of most of our process equipment was going up pretty rapidly. At the same time, the realization dawned on most of us that the utilization was terrible. That got the company into a specific focus: "Hey, let's figure out how our equipment is being used and how we can drive and improve the process."
I was an engineer at the time, and the focus was on looking at data and patterns and driving statistical predictive maintenance techniques based on reliability distributions. I asked the IT guys what systems we had to manage that, and they told me we were in the process of defining requirements. I looked at the requirements document and realized it didn't have much bearing on the problems I was trying to solve. It became clear that IT had to engage with the business in a different way to comprehend the requirements of technical solutions. And that got me into information technology.
Carter: The process for moving all of Intel to a single platform sounds daunting. How did the transformation go, and what benefits did it bring to your company's architecture?
Rampalli: Back in the early 1990s, we didn't have a standard client OS, and I was asked to come in and implement Windows 95. I got in there and looked at the problem statement, and I said, "OK, we can implement Windows 95. But isn't the value proposition one of TCO and agility, and how you move from one upgrade cycle to the next? Looking only at the client is not going to solve the problem."
So we defined that whole program to transform the company's environment to a single platform. At that time we had four operating systems, eight different client configurations, six different types of hardware, and multiple configurations on platforms. From there, we went to a single network operating system, a finite set of images on the client, essentially one for the desktop, and a standard build on the back end.
We moved all that to [Microsoft Windows] NT in 1995. I still remember when I got in front of Craig [Barrett, then COO and current CEO of Intel] and Andy [Grove, then Intel CEO and current Chairman of the Board] and the rest of the folks and told them that going with NT would deliver the lowest-cost value proposition. A ton of people from the Unix camp thought it was crazy to promote or even support this. Their concerns involved scalability and the reliability needed for industrial-strength transactions; everybody relied on Unix for that. Microsoft technology was viewed as something you could implement for personal productivity, but not something to run Intel's complete environment. After significant debate, we decided on NT, and that implementation essentially set the foundation for us to move to an evolutionary process.
Carter: How has this evolutionary process moved forward?
Rampalli: There were about 40,000 users who had to be migrated onto this environment. There were about 600 applications, significant applications, that had to be tested. We had application loads, or "bundles" as we called them, based on user profiles, and we had to create an application repository that would become a reusable environment. It was a great experience because it gave me a sense of how influential standardization could be in the environment, and how the nuances of standardization go well beyond the core decision to go with NT.
The mid-1990s saw the dawning of the Internet, and the feeling at Intel was: "Hey, let's have Intel be a shining light in the Internet space, a showcase on Intel architecture that demonstrates significant breakthroughs in business logic, using the Internet system technologies." And I think to myself many times that if we hadn't put the standards infrastructure into the environment, we wouldn't have moved as fast as we did on the implementation of the Internet infrastructure running on top of this.
When we started the Internet implementation in 1995-96, it was just a base foundation hosting static and dynamic Web sites. But it quickly moved from that phase to the phase of business processing, which really started in 1997. I would say from 1998 through 2002 we focused on integration, during which we ushered in a ton of technologies as standard, reusable layers for the environment. All those layers have been possible because the foundation platform was already there.
The Biggest Challenge
Carter: What were your biggest challenges in implementing this architectural change?
Rampalli: Our biggest challenge in this whole conversion didn't involve the United States, but our 350-odd clients across Vietnam, Taiwan, Korea, and China. We had about six different languages to migrate the operating system into and to test applications in. That was hard. The standardization effort there was different from the standard approach of implementing a hundred clients a day, which is what we were doing in Arizona and Santa Clara. So it was pretty standardized in the United States, but once we got into the Asian context with local languages, it became a different challenge.
Carter: What is the process for developing applications at Intel? It seems there has to be some tension between the developers' needs and the desire of IT to keep platforms under control.
Rampalli: Historically, someone from the business side says, "Hey, I need a solution," and the apps guys say, "OK, this is a solution that could work." Then they come up with a requirements document and a design specification, and request some infrastructure that needs to be built. IT responds with what can be done and when it can be done. The solution is built based on this kind of dialogue.
So, it's very much that way: this is what I need, this is when I need it, can you buy these four servers for me? More often than not, the funding process is driven by these projects, and the infrastructure essentially gets a part of that funding in response. And that has resulted in projects driving unique infrastructures for each application. Then what you have is a mish-mash of local operations in the environment. When you stand back in two years and ask, "Is it best in class in TCO, and is it really the most agile thing I have?" the answer is, probably not. And this phenomenon is not unique to Intel. Most IT shops have been dealing with this process. And thanks to the dot-com bust and the sobering of IT spending, there's a real focus now on where the money is going to be spent and how it is going to be measured against a set of criteria.
Carter: So how have you gone about changing this mode of operation?
Rampalli: The thinking is that the success of the developer community is based on reuse and productivity, right? And the success of an infrastructure is based on TCO and agility so you can upgrade and retool and so on. And for us to be able to implement and maximize both the reuse and the developer productivity, along with TCO, we need a unified process that ties the two criteria together. So there has to be a formal vision of an architecture that takes these groups and the business needs into account.
So, the business framework drives the requirements on data and the technology, and these, in turn, result in a solution that gets implemented. And the solution has an application element and an infrastructure element, which are formalized activities. When we talk about architecture at Intel, we describe it with scientific rigor around how you consider each of these elements. If the architecture doesn't have a business framework or data architecture, or a technical architecture, or a solution or infrastructure architecture, then it is not an architecture.
We're also trying to spawn from this architecture a set of generic usage models based on functional building blocks, the essence of an application architecture, and an infrastructure architecture based on modular infrastructure elements. We hope all of these can support one of the usage models.
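To make that mapping concrete, here is a purely notional sketch in Python. It is not Intel's model; the building blocks, usage models, and infrastructure modules are all invented for illustration. The point is simply that a generic usage model resolves down to reusable application building blocks, each backed by a modular infrastructure element.

```python
from typing import Dict, List

# Reusable application building blocks mapped to the modular
# infrastructure element each one runs on (names are invented).
BUILDING_BLOCKS: Dict[str, str] = {
    "authentication": "directory-service",
    "order-capture": "app-server-pool",
    "reporting": "warehouse-cluster",
}

# Generic usage models assembled from the building blocks above.
USAGE_MODELS: Dict[str, List[str]] = {
    "b2b-order-entry": ["authentication", "order-capture"],
    "executive-dashboard": ["authentication", "reporting"],
}

def infrastructure_for(usage_model: str) -> List[str]:
    """Resolve a usage model down to the infrastructure modules it rides on."""
    return sorted({BUILDING_BLOCKS[block] for block in USAGE_MODELS[usage_model]})

if __name__ == "__main__":
    for model in USAGE_MODELS:
        print(model, "->", infrastructure_for(model))
```

The design point in the sketch is the one Rampalli makes: applications reuse the same blocks, so the infrastructure underneath can stay finite and modular rather than being rebuilt per project.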
Getting Maximum Utilization
Carter: Intel obviously is a chip company, and you invest heavily in using your own processor technology in your IT infrastructure. How are you leveraging your hardware investments and new technologies such as 64-bit computing?
Rampalli: Take a typical application development lifecycle pipeline, where the application moves from the development box to the integration box, to the testing box, and finally to a production box. Once the application gets through to production, the average utilization across those first three boxes is about 15 percent. If you take a snapshot during the integration test, the integration box is 100 percent utilized; but once you're past integration, that box is not being used. The same thing is true in development, right? So when you average it out, those three pre-production systems run at about 15 percent utilization.
The question is, "How do I take this asset base and utilize it to get maximum utilization and the best TCO possible?" This is what's eating my lunch, right? The only way to get past that is what we call dynamic allocation of resources. In other words, I could take resources from the development box and use them for a testing process if the box is sitting idle. The system knows it can grab CPU from there because it's idle. This is the notion of grid computing applied at a system level. This three-to-one consolidation of systems alone is a huge reduction in cost and TCO.
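As an illustration of that idea, and not Intel's actual implementation, here is a minimal Python sketch of system-level dynamic allocation: any idle lifecycle box can pick up the next pending job instead of each phase owning dedicated hardware. The box names and the job model are hypothetical.

```python
# Hypothetical sketch of dynamic allocation across lifecycle boxes.
# Instead of dedicating one server per phase (dev, integration, test),
# any idle box can be grabbed for the next pending job.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Box:
    name: str
    busy_with: Optional[str] = None  # job currently running, if any

@dataclass
class Pool:
    boxes: List[Box] = field(default_factory=list)

    def idle_boxes(self) -> List[Box]:
        return [b for b in self.boxes if b.busy_with is None]

    def dispatch(self, job: str) -> Optional[Box]:
        """Assign a job to any idle box; return None if all boxes are busy."""
        idle = self.idle_boxes()
        if not idle:
            return None
        box = idle[0]
        box.busy_with = job
        return box

    def finish(self, job: str) -> None:
        for b in self.boxes:
            if b.busy_with == job:
                b.busy_with = None

if __name__ == "__main__":
    pool = Pool([Box("dev-01"), Box("int-01"), Box("test-01")])
    # An integration run can borrow the idle dev box instead of queuing.
    print(pool.dispatch("integration-build-42").name)  # dev-01
    print(pool.dispatch("regression-suite").name)      # int-01
```

The consolidation payoff follows directly: if each phase only bursts to 100 percent utilization occasionally, a shared pool needs far fewer boxes than one dedicated machine per phase.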
Carter: So you're an NT shop historically. Are you 100 percent focused on .NET, or are you incorporating [Sun Microsystems] Java as well in your enterprise?
Rampalli: Our ERP system runs on J2EE and J2ME, so yes, we have Java in there. But we are predominantly on Microsoft: .NET Framework to the core. We don't have any roadblocks in the assimilation of applications or services that are utilizing Java, however. We are essentially looking at Web services in a .NET framework.
Carter: So are XML Web Services ready for prime time? Is anything standing in the way of widespread deployment of Web services?
Rampalli: I am excited about Web services because of the open standards efforts in place there. I'm also a little cautious about their implementation in certain areas that aren't ready yet (I'm talking about the business-to-business space) because of security issues. In the Web services paradigm, we lack an encrypted authentication process in the environment.
Security is critical in the B2B paradigm because it's a machine-to-machine transaction. We believe that until we get security capabilities in place for the guaranteed delivery of the Web services message between two trading partners, we are not ready for using them outside Intel. Internally, however, we are gung-ho on implementing Web services. We've built consolidated Web services: a set of application servers that have been implemented to host all the integration code and to support the scalability of transactions going to this platform, the monitoring, the management, the security, and so on. The entire implementation of e-business Web services will be done on this single implementation.
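To illustrate the gap Rampalli describes, rather than any specific Intel or WS-Security implementation, the following Python sketch shows the kind of message-level authentication a machine-to-machine B2B exchange needs: the sender signs the payload and the receiving partner verifies it on arrival. The shared-secret HMAC scheme and all names here are assumptions for illustration; production deployments would use standards such as WS-Security with certificates.

```python
# Generic illustration of authenticated message delivery between two
# trading partners. The shared secret stands in for whatever key
# material the partners would actually exchange.

import hashlib
import hmac
import json

SHARED_SECRET = b"hypothetical-key-exchanged-out-of-band"

def sign_message(payload: dict) -> dict:
    """Wrap a payload with an HMAC so the receiving partner can verify it."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return {"body": payload, "signature": signature}

def verify_message(envelope: dict) -> bool:
    """Recompute the HMAC on arrival and compare in constant time."""
    body = json.dumps(envelope["body"], sort_keys=True).encode("utf-8")
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])

if __name__ == "__main__":
    msg = sign_message({"po_number": "12345", "qty": 1000})
    print(verify_message(msg))   # True: message arrived intact
    msg["body"]["qty"] = 5000    # tampering in transit
    print(verify_message(msg))   # False: signature no longer matches
```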
Bridging the Business and Technical
Carter: A big part of the role of an enterprise architect is bridging the needs of the business people and the technical people. Has it been difficult reconciling the needs of these two groups?
Rampalli: The biggest lesson I learned was establishing a set of rules by which we can communicate to developers and IT how business value is going to be delivered and how, in turn, that business value connects to the notion of TCO. They are not at odds with each other. That debate and discussion still continues as an inherent tension in the environment, and I think tomorrow's IT is going to be looking at embracing business value and TCO as enabling strategies. When I look back at the lesson from this, it's not just TCO; it's the speed with which we can do upgrades and add capability to the environment. That is business value.
Carter: Where are 64-bit applications starting to pay off?
Rampalli: We are already seeing a lot of benefit in certain classes of applications. One is decision support. We need fast processing of heuristics; the data and the heuristics that run against that data are cached in memory, because if you fall back to an I/O-bound process against a database, it would take forever to get that algorithm crunched and done. So our focus is to move to 64-bit in those areas where we get these advantages in performance and a reduction in our cost.
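Here is a small Python sketch of the pattern he describes, contrasting per-row database I/O with loading the working set into memory once. It is illustrative only; the table, columns, and heuristic are hypothetical. The connection to 64-bit computing is the larger address space that makes holding the whole working set in memory practical.

```python
import sqlite3

def score(row) -> float:
    # Stand-in heuristic: weight demand against lead time.
    part_id, demand, lead_time = row
    return demand / max(lead_time, 1)

def io_bound(conn, part_ids):
    """One query per part: every heuristic evaluation waits on database I/O."""
    return {
        pid: score(conn.execute(
            "SELECT part_id, demand, lead_time FROM parts WHERE part_id = ?",
            (pid,)).fetchone())
        for pid in part_ids
    }

def in_memory(conn, part_ids):
    """Load the table once and run the heuristics against the cached rows."""
    cache = {row[0]: row for row in conn.execute(
        "SELECT part_id, demand, lead_time FROM parts")}
    return {pid: score(cache[pid]) for pid in part_ids}

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE parts (part_id INTEGER, demand REAL, lead_time REAL)")
    conn.executemany("INSERT INTO parts VALUES (?, ?, ?)",
                     [(i, i * 10.0, float(i % 5 + 1)) for i in range(1, 6)])
    ids = [1, 2, 3, 4, 5]
    assert io_bound(conn, ids) == in_memory(conn, ids)
    print(in_memory(conn, ids))
```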
Carter: So obviously creating architecture is driven by cost concerns…
Rampalli: …The issue that is front and center for most CFOs is, "You're costing me so much; what do I get for it?" Let's say you're a $26 million company and you want to grow to $50 million in three years. The CFO wants to know that if you spend all this money on your business, will you deliver the promised productivity gains and scalability that can help grow the company? If you have a set of reasonable building blocks and a shared-services architecture, you get economies of scale that you perhaps don't have today, so that you don't build beyond reach. Most IT shops today have a huge fixed-cost burden. So, just because the company's revenue shrinks because there's no uptake in the business, it doesn't mean the IT costs go down. You have to pay depreciation over the life of the servers, and you find that IT shops have this back-end issue where, in a theoretical situation of zero growth, or even zero revenue, you still have assets to write off, and the fixed costs are pretty high.
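A back-of-the-envelope Python sketch makes the fixed-cost point concrete: straight-line depreciation on a server fleet is owed every year of the asset's life, whether revenue grows, stalls, or shrinks. All the figures below are hypothetical.

```python
# Hypothetical numbers only: the depreciation charge does not move with revenue.

def annual_depreciation(purchase_cost: float, life_years: int) -> float:
    """Straight-line depreciation: the same charge hits IT every year of the life."""
    return purchase_cost / life_years

servers = 400                 # hypothetical data-center footprint
cost_per_server = 20_000.0    # hypothetical purchase price
fixed_charge = servers * annual_depreciation(cost_per_server, life_years=4)

for revenue_growth in (0.10, 0.0, -0.10):
    print(f"revenue growth {revenue_growth:+.0%}: depreciation ${fixed_charge:,.0f}/yr")
```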
One of the things we are doing is a program that completely rearchitects the data center to get to a different performance point and cost point. We are looking at rearchitecting the infrastructure to get to a different cost basis, perhaps even a 50 percent reduction against what it would have cost. This is critical for the payback model you want to get to.
Carter: This is a big task. How are you going to get there?
Rampalli: This is going to happen through multiple strategies: shared application services to drive reuse and modularity; globalization to lower our cost basis for development; and this whole server consolidation. So, this is what you will see in the next three years: aggressive growth in shared services, a majority model on globalization, consolidation, dynamic allocation of system resources, leverage of form factors that enable a better cost per unit, automated architectures, and automation that takes people out of the equation on the lower-end activities.
About Prasad L. Rampalli
Prasad L. Rampalli is Vice President of Finance and Enterprise Services and Director and Chief Architect of Architecture and Integration Platforms, where he is responsible for driving common architecture, infrastructure, and shared services across all IT product and service lines and infrastructure areas for Intel Corp.
Rampalli joined Intel in 1983 and has held several technical and management positions within Intel's IT organization and Technology and Manufacturing Group. Since 1999, he has been responsible for delivering the core technologies and data management capabilities to help Intel achieve its goal of becoming a 100 percent e-Corporation. From 1998 to 1999, he oversaw the creation of a standard set of core enterprise applications services around the deployment of SAP at Intel. This included systems engineering, product management, release management, and production support, and involved driving the transition of more than 250 resources across application and IT teams into a central function.
From 1995 to 1997, he managed a company-wide effort to standardize Intel's client/server infrastructure from several heterogeneous operating environments to one running on Windows NT. Previously, he led the deployment and standardization of Intel's shop floor control and equipment performance management system across Intel's factories.
Prior to joining Intel, Rampalli was a manufacturing engineer at Ampex Corp. and a maintenance engineer at Tata Iron and Steel Company.
Rampalli was named to CIO Magazine's Top 100 Honoree List in 2001 and 2002. He has received three Intel Achievement Awards.
Rampalli received a master's degree in industrial engineering and operations research from the University of Texas at Arlington. He received a bachelor's degree in mechanical engineering from the Indian Institute of Technology in Kanpur, India.