
Red Herring ::

Twelve months ago, like most of his peers across North America, Huw Morgan, chief technology officer for Bell Globemedia Interactive, received an order from the bean counters: buy less gear, spend less money. But this was easier said than done. Bell Globemedia is the AOL Time Warner of Canada–10 million Canadians access the Internet through its service, which gains thousands of new users each month. That kind of growth was creating a heavy burden on the company’s Internet infrastructure. Mr. Morgan was asked to cut back on equipment when it was needed most.

As he was pondering that problem, he received a call from an executive at Think Dynamics, a Toronto-based company. The executive was pushing a new type of software that promised to help Mr. Morgan solve his dilemma. He proposed setting up a “minigrid.” Think Dynamics’ grid-computing software could respond to spikes in data traffic on Bell Globemedia’s network by tapping into idle computing power spread across its network and two data centers.
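Think Dynamics’ product itself is proprietary, but the provisioning idea is simple to sketch. The toy Python model below (all class names, node names, and thresholds are invented for illustration, not Think Dynamics’ actual software) drafts idle machines into service when traffic spikes and releases them when traffic subsides:

```python
# Hypothetical sketch of the "minigrid" idea: when load on one tier
# spikes, idle machines elsewhere on the network are drafted in;
# when it subsides, they are released back to the shared pool.

IDLE, SERVING = "idle", "serving"

class MiniGrid:
    def __init__(self, nodes):
        # Every node starts idle in the shared pool.
        self.state = {n: IDLE for n in nodes}

    def rebalance(self, requests_per_sec, capacity_per_node=100):
        # How many nodes does the current traffic level require?
        needed = -(-requests_per_sec // capacity_per_node)  # ceiling division
        serving = [n for n, s in self.state.items() if s == SERVING]
        idle = [n for n, s in self.state.items() if s == IDLE]
        if len(serving) < needed:
            # Spike: draft idle nodes into service.
            for n in idle[: needed - len(serving)]:
                self.state[n] = SERVING
        else:
            # Lull: release the surplus back to the pool.
            for n in serving[needed:]:
                self.state[n] = IDLE
        return sum(1 for s in self.state.values() if s == SERVING)

grid = MiniGrid(["web1", "web2", "batch1", "batch2"])
print(grid.rebalance(250))  # spike: three nodes drafted
print(grid.rebalance(80))   # traffic subsides: back to one node
```

A real system would also have to migrate workloads, enforce security boundaries, and cope with heterogeneous hardware; the sketch shows only the elastic-pool idea.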

Unlike the Internet, which primarily is a network for communications, grids are networks for computation–they are thinking, number-crunching entities. Like a decentralized nervous system, grids consist of high-end computers, servers, workstations, storage systems, and databases that work in tandem across private and public networks.

At least that’s the theory. In reality, numerous obstacles remain, like security and resource sharing. Despite billions of research dollars pouring in, the industry is years away from commercial viability. For the time being, its future depends, at least in part, on the evangelistic efforts of a lean and laconic New Zealander named Ian Foster.

What Linus Torvalds, cocreator of the Linux operating system, is to the open-source movement, Mr. Foster, 43, is to the world of grid computing. From his paper-infested office at Argonne National Laboratory in Illinois, where he is a senior scientist and head of the distributed systems lab, Mr. Foster champions grid technology. Today, he says, grids are where the Web was in 1991 or 1992–more academic curiosity than commercial venture. But, just as the Internet grew from a collection of small academic networks to a humongous octopus spreading its tentacles around the world, Mr. Foster predicts today’s minigrids will grow into a huge global grid, a transcontinental processing pool engaged in all sorts of complex tasks, like designing and testing semiconductors and decoding the human genome. Applications like customer relationship management (CRM) and supply chain management (SCM) will be run from such a network. Tasks will be broken down, distributed to millions of connected processors, and the reassembled results sent back to a single desktop. Just as the Web produced thousands of business opportunities, a global grid will create and transform a slew of industries, from data processing to storage management. Startups and a few tech titans have already begun to enter this space.
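The break-it-down, distribute, reassemble pattern described above can be illustrated in a few lines. This sketch uses Python’s standard multiprocessing module as a stand-in for a real grid scheduler; the task and chunk sizes are invented for illustration:

```python
from multiprocessing import Pool

def crunch(chunk):
    # Stand-in for a compute-heavy subtask, e.g. scoring one slice of data.
    return sum(x * x for x in chunk)

def run_on_grid(data, n_workers=4):
    # Break the task down into roughly equal chunks, one per worker...
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...distribute them to the connected processors...
    with Pool(n_workers) as pool:
        partials = pool.map(crunch, chunks)
    # ...and reassemble the results for the single desktop that asked.
    return sum(partials)

if __name__ == "__main__":
    # The distributed answer matches the single-machine answer.
    print(run_on_grid(list(range(1000))))
```

The pattern only pays off when the subtasks are independent and compute-bound; shipping the chunks across a wide-area grid adds exactly the security and coordination costs discussed later in the article.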

Affordable Ford

A year ago, with car sales declining, Ford Motor’s power train division–the group that ensures the Taurus has enough oomph to navigate the Rocky Mountains of Colorado–needed a cheap way to design its latest transmissions. In the old days, Ford might have spent $150 million to buy and house a supercomputer. Instead, Henry Ford’s descendants turned for answers to a former American Motors executive’s son, Scott McNealy, CEO of Sun Microsystems. Lucky for Ford, in July 2000, Sun had bought a little-known German startup called Gridware. Like Think Dynamics, Gridware made software that tied many desktop computers together into a minigrid that could perform like a supercomputer. It was such a success, saving Ford as much as $100 million, that the car company is using the same software to design other components as well.

BMW, Boeing, Motorola, Novartis, Pacific Life, Saab, and Synopsys are also using minigrids, buying software and services from firms like Entropia, IBM, Platform Computing, Sun, Think Dynamics, and United Devices. Still, not much money is being made. Total sales of grid-related products and services, nearly all of it for experimental projects, will be a meager $180 million in 2002, according to the market research firm Grid Technology Partners. Even big players like Sun are using the technology, not to generate cash, but rather as a loss leader–a reason to buy their big-ticket hardware products that take advantage of the new technology.

Academic Performance

The situation, however, is quite different in academia, where generous government funding, more than $500 million in the last two years, has accelerated the deployment of grids. Mostly used for crunching the huge amounts of data produced by research projects in fields like cosmology and bioinformatics, these newfangled “academic grids” are typically run by universities or government organizations, like the National Aeronautics and Space Administration. Worldwide, 18 academic grids are, or will soon be, up and running–compared with 3 just a year ago.

The European Union-funded EU DataGrid Project is one of the largest academic grids in the world. Led by Fabrizio Gagliardi, who is also the director of the CERN (European Organization for Nuclear Research) School of Computing, the DataGrid was built to tackle a monumental problem: thanks to its huge supercollider, CERN’s European Laboratory for Particle Physics in Switzerland is expected to produce several petabytes of data each year–a petabyte is one million gigabytes (equivalent to 20 million four-drawer filing cabinets full of text). Even the most basic analysis of this data would require 20 teraflops of computing power (a teraflop–one trillion floating-point operations per second–is a measure of a computer’s speed). Yet the world’s most powerful supercomputer can manage only about 3 teraflops.
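A quick back-of-the-envelope calculation, using the article’s own figures, shows the size of the gap that drove CERN toward resource sharing:

```python
# Back-of-the-envelope check of the compute gap described above,
# using the figures quoted in the article.
needed_tflops = 20      # minimum for basic analysis of CERN's yearly data
best_machine = 3        # the fastest single supercomputer of the day
shortfall = needed_tflops / best_machine
print(f"CERN needs roughly {shortfall:.1f}x the top machine's capacity")
```

Even the best single machine falls short by a factor of nearly seven, which is why pooling many institutions’ computers across a grid was attractive.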

In 2000, to increase their computing capacity, Mr. Gagliardi and his team were trying to develop software that would allow the laboratory to share resources with other academic and government institutions across the European Union. That’s when Mr. Foster arrived with something up his sleeve–the Globus Project, a quasi-OS for grid computing.

Essentially, Globus is like Novell’s NetWare, which started as an OS for local area networks (LANs) and ended up jump-starting the LAN revolution. Globus could be the kernel from which grid computing grows. Already several scheduling and programming tools sit atop the OS. These tools, in turn, enable end-user applications like computing on demand. A few of these applications now exist, but Mr. Foster is betting that Globus and the grid’s open-source nature will help seed more innovation. “The involvement of corporations [like IBM] will help make that change,” he says.

“Globus has become the de facto standard; there is reasonable adoption because it is evolving quite fast, and the technical community seems to love it,” says Patrick Scaglia, vice president and director of Hewlett-Packard’s Internet and Computing Platforms Research Center, who is helping HP navigate the murky waters of grid computing.

Foster’s Child

Mr. Foster’s first brush with the grid came with parallel computing, a technology developed in the early ’80s that enabled thousands of workstations to perform like a supercomputer. (Unlike grid computing’s loose, many-to-many connections, these workstations had to be tightly coupled in parallel.) After his exposure to parallel computing, Mr. Foster became intrigued by the possibilities of cheap computers, open-source software, and the emergence of a global network, now known as the Internet. In 1994, Mr. Foster, who is also a professor of computer science at the University of Chicago, proposed a precursor to today’s grid and sought funding from the Advanced Research Projects Agency (ARPA; now the Defense Advanced Research Projects Agency, or DARPA–the military’s venture capital arm), which turned him down.

Meanwhile, Tom DeFanti, a professor of computer science at the University of Illinois, Chicago, and Rick Stevens, director of the high-performance computing and communications program at Argonne National Laboratory, floated a then-revolutionary plan to link 11 U.S. high-speed research networks to create–for a single week–a national, high-performance network called I-Way. Immediately, Mr. Foster saw that such a network needed software to help it run.

In 1995, Mr. Foster volunteered to pull together I-Soft, a software system that provided access to the network to organize and monitor projects. I-Soft, created by a small team including distributed computing gurus like Warren Smith and Jonathan Geisler, played a significant role in convincing the community that an enabling software infrastructure was critical to the success of the grid vision. The success of I-Soft led ARPA at last to fund the Globus Project.

Even though Mr. Foster had plenty of help in the past 20 years, grids would not be where they are today without him: “What Ian has done is sold the vision and gone out and tried to get funding and people who would eventually apply this technology,” says Mr. Gagliardi.

Commercial Break

Mr. Foster’s groundbreaking work has not been lost on the media, analyst, and business communities. Recently, Irving Wladawsky-Berger, vice president for technology and strategy at the IBM Server Group, called grid computing the “key to advancing e-business and the next step in the evolution of the Internet towards a true computing platform.” He predicted that grid computing, like supercomputing before it, would find its way into the commercial world.

Grid Technology Partners estimates that the worldwide grid-computing industry will grow at a compound annual growth rate of 276 percent, topping $4.1 billion by 2005, when IT applications like CRM and enterprise resource planning will begin to run on the grid.

Given that the issues of security and resource sharing remain unresolved, that projection seems overly optimistic. Take, for example, CRM software. In grid computing, the software would reside on the grid, but it has yet to be determined how the data would be transmitted securely over the network. “Currently, the security is not acceptable for mission-critical and enterprise computing, but we are hoping that in the next five years, all such issues will be resolved,” says Mr. Scaglia of HP. The software would also need to be rewritten extensively to work on the grid.

Even the most ardent believers, like EU DataGrid project leader Mr. Gagliardi, are cautious. “With grid computing you need a lot of security, and that is the key,” he says. “It will be years before this is economically feasible.” Mr. Gagliardi is hoping that phone companies will build an infrastructure that is secure and will offer services, like data on demand, computing on demand, and disaster recovery. He says that without the support of deep-pocketed backers, corporate adoption will remain far in the future.

“Sure, we are going up the hype curve, and it is important to manage expectations; it will be several years before the technology has a lot of its kinks worked out,” says Mr. Foster. “It will take much longer than people like to think. But still, this is something very real, and it will make a difference. That’s what you need to remember.”
