IBM Ups the Enterprise Ante (Continued)
Implementing IBM's "On-Demand" Strategy
Q: In spite of the downturn in IT, WebSphere has been doing well. What is the big-picture context for understanding WebSphere strategy?
Swainson: The WebSphere platform has been evolving over time, and our strategy for it has evolved. Initially, we talked about an application server and the associated set of tools for building applications. Then we extended that to include specific products for commerce, portals, and business integration. What we've tried to create is an infrastructure layer that people can use to build and integrate their business applications. We've also concentrated much more on working with ISVs to embed WebSphere in their products. And we've done a lot of work around application-development tooling.
WebSphere has been and continues to be a huge investment for IBM. My development budget is almost a billion dollars this year, and that doesn't include sales and marketing.
Q: How does WebSphere fit with IBM's overall strategy for on-demand e-business?
Swainson: WebSphere is strategic in the context of IBM's on-demand strategy. On-demand is all about flexibility: how you take a business and create a service-oriented architecture around it so that you can have flexibility in what you and your partners can do. It is intended to set an agenda for how people use IT resources, how they can procure them, how they can outsource them. And it depends on the notion of an open-standards-based, flexible middleware environment upon which to build all these things.
The glue that ties resources together in the on-demand operating environment is a middleware layer that sits on top of their existing infrastructure, whether that be mainframe-based or Unix-based or Windows-based. Through the use of technologies such as Web services, the middleware layer ties the resources together into an environment that can be repurposed, redeployed, outsourced, and insourced. It can form part of a virtual supply chain or a virtual value net that exists across enterprises or across a whole industry. So, business flexibility is the goal, and middleware is the glue that makes it all possible.
Q: But on-demand is not a WebSphere or middleware concept, but an overall IBM direction.
Swainson: Yes, it's an IBM-wide initiative for how we think computing will be done in the future, and it really ties together our whole business. It allows our consulting guys to discuss how to create a new value proposition for a particular set of business processes. So they've got a whole series of business-transformation services that they can go into industries and companies and talk about. How do you get a more efficient supply chain if you're a manufacturer, or how do you create a more efficient straight-through processing, zero-latency environment if you're a financial-services organization? And then we back that up with technology offerings that allow you to instantiate that.
So this allows us to talk about the whole range of customer requirements, all the way from the thought leadership required to transform a business, to the actual hosting of the IT systems, if that's what you need. And IBM is in a unique position to do that because we're the only vendor in the marketplace that offers all the pieces (the consulting services, the outsourcing and systems-integration services, the hardware and the software) that you need to create an end-to-end business-value proposition around IT.
Q: When on-demand computing was first discussed, a lot of people mistakenly thought that it was like an ASP model.
Swainson: Those people were not alone. Sam Palmisano, our CEO, used words very much like the ones I just used when he announced it. But what a lot of the press heard was that utility equals ASP: an outsourced kind of model where you pay by the click. Partly that's because ASP is a metaphor that the non-technical press understood, but when you say "middleware" their eyes glaze over. So they walked away with that impression, and the early articles gave that impression. You'll see us try to create a much broader view in our advertising and executives' speeches.
I might add that it is going to be evolutionary. ASPs were a particular instantiation of a model that, frankly, wasn't particularly successful, for a lot of technical and business reasons. Part of the reason they weren't successful is that the middleware infrastructure didn't yet exist to do what they were trying to do. The systems they were trying to implement were too inflexible and too monolithic. You can't share resources unless you can break work up into pieces that can be shared, and unless you have an environment that allows you to share them.
We have lots of proof of this. Mainframes for decades have been sharing large amounts of resources and running different types of workloads. One of the reasons mainframes work so well is that the peaks and valleys of many types of workloads tend to cancel each other out. That's part of the idea here, too, that you can create a shared-resource environment and a more efficient way to run those resources, but you need software that allows you to run it.
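The peaks-and-valleys point can be made concrete with a little arithmetic. The sketch below uses made-up hourly utilization figures for two complementary workloads (daytime transaction processing versus overnight batch); the numbers are illustrative assumptions, not measured data:

```python
# Hypothetical hourly utilization (percent of one server) for two
# workloads whose peaks fall at different times of day. Illustrative
# figures only -- not measurements from any real system.
oltp  = [10, 10, 15, 40, 80, 85, 80, 60, 30, 15, 10, 10]
batch = [80, 85, 80, 40, 10, 10, 10, 20, 50, 75, 85, 80]

# Dedicated servers must each be sized for their own peak...
dedicated_capacity = max(oltp) + max(batch)

# ...while a shared environment is sized for the peak of the combined load.
combined = [a + b for a, b in zip(oltp, batch)]
shared_capacity = max(combined)

print(dedicated_capacity)  # 170 -- two machines, each sized at 85
print(shared_capacity)     # 95  -- one shared pool covers both
```

Because the two peaks never coincide, the shared pool needs a little over half the capacity of the dedicated pair, which is exactly the efficiency the mainframe model has exploited for decades.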
Q: One of the other high-level concepts that relates to this is IBM's notion of autonomic computing. How does that fit in?
Swainson: Autonomic is an attribute of this environment. Clearly, if you have a shared-resource environment, you need it to be highly fault-tolerant and failure-resistant, so it has the ability to repair and tune itself. Today's systems, except for mainframes, don't have that. Big modern mainframes such as the zSeries are composed of dozens of processors that can fail off and you'll never see a blip in the workflow. They can do all kinds of things that you can't traditionally do in the distributed-computing environment, either on Unix or Windows.
Part of the autonomic idea is to bring that level of systems thinking into a distributed environment. Now you have to do some grungy, low-level things first. You need to have a shared log so all the resources working in that environment write their information in a consistent way so that someone can globally optimize it. Today, everything in a distributed environment writes its information in its own logs, and there is no semantic or syntactical representation in anything else. So that's the kind of base-level work that's necessary so you can see what's going on in systems.
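The shared-log idea amounts to giving every resource one common record schema. A minimal sketch of that, with field names and the JSON encoding chosen purely for illustration (the interview names no concrete format):

```python
import json
import time

def log_event(resource, event, severity, details):
    """Emit one event record in a schema shared by every resource.

    Because a mainframe, a Unix box, and a Windows server all write
    the same shape of record, a single global optimizer can parse and
    correlate all of them. Field names here are assumptions for the
    sketch, not an IBM-defined format.
    """
    entry = {
        "timestamp": time.time(),  # one common clock representation
        "resource": resource,      # which system emitted the event
        "event": event,            # a shared vocabulary of event types
        "severity": severity,
        "details": details,        # event-specific payload
    }
    return json.dumps(entry)

# Two very different systems, one log format:
line = log_event("unix-app-07", "queue.depth.high", "warning",
                 {"queue": "orders", "depth": 1200})
record = json.loads(line)
print(record["resource"], record["event"])
```

The payoff is that "seeing what's going on" becomes a query over one stream rather than a translation exercise across a dozen private log formats.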
Then there are some higher-level things that need to happen at the middleware layer. You need to understand what workloads are running in which systems. How do I start to move those workloads around, or optimize or balance those workloads, so I can drive the aggregate utilization of one of these environments above the 10 or 20 percent that it typically runs at today? How do we take advantage of all the resources that live out there and allow you to optimize over the top of them? You need something to be the global provisioning agent that understands what's going on in terms of workload in the environment, and then puts the workloads where they best fit.
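A toy version of such a provisioning agent might track each machine's capacity and current load and place incoming work on the machine with the most headroom. Machine names, capacities, and the greedy placement policy below are all illustrative assumptions, not a description of IBM's implementation:

```python
# Hypothetical inventory of heterogeneous resources, with utilization
# expressed as a percentage of each machine's capacity.
machines = {
    "mainframe-1": {"capacity": 100, "used": 60},
    "unix-1":      {"capacity": 100, "used": 15},
    "unix-2":      {"capacity": 100, "used": 25},
}

def place(workload, demand):
    """Place a workload on the machine with the most free headroom."""
    candidates = [(name, s["capacity"] - s["used"])
                  for name, s in machines.items()
                  if s["capacity"] - s["used"] >= demand]
    if not candidates:
        raise RuntimeError(f"no machine can fit {workload}")
    best, _ = max(candidates, key=lambda pair: pair[1])
    machines[best]["used"] += demand  # record the new placement
    return best

print(place("report-batch", 30))  # unix-1 (85 free, the most headroom)
print(place("web-tier", 50))      # unix-2 (75 free once unix-1 is loaded)
```

A real agent would also watch the shared logs, migrate running work, and respect affinity constraints, but even this greedy sketch shows how a global view lets utilization rise well above what isolated, per-machine sizing achieves.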
WebSphere plays a critical role in that because it's the carrier for all these services. WebSphere runs the Web services, and it will run the global grid standards, or OGSA, as they start to emerge in their standardized form. WebSphere will power this on-demand operating environment. And you will see parts of WebSphere move down into the operating-system layer because we're trying to build that layer at the network level now, not simply at the machine level.
So this can be thought of as a progression. We started talking about WebSphere as an application server, then as an integration and portal and commerce server. The next layer: It's the network operating system, or at least it provides a set of services that can be brought together into a network operating system.
Q: And this is what enables grid computing.
Swainson: Yes, although from our perspective, you'll hear "grid" become an attribute of on-demand. On-demand systems will be grid-like in terms of their ability to deploy hardware resources. Grids have value implicitly if you can share resources among them. But the trick is: How do you create the environment that allows you to share a resource? Grids work well today for scientific computing because the workloads tend to be highly "parallelizable" and also because the architecture tends to be uniform. In the world of commercial business systems, that tends not to be the case at all. How do you throw a piece of work into an environment where you might have mainframes and mid-range systems and Unix systems and PCs? What's the common denominator in all of those systems? I think a common operating environment will emerge for how applications work in these multimachine environments, but clearly that's not there today.