
Summary:

Two items caught my eye today: SanDisk CEO Eli Harari explaining how we are counting down to the end of Moore’s Law in terms of electrons per cell, and news that Apple will boost the iPhone’s processor speed roughly 1.5-fold, to 600 MHz, making it easier for the device to render web pages and run applications. The two stories elicited a similar response from me: Why are we measuring Moore’s Law using a yardstick from the PC era? In today’s world, don’t megabits per second (Mbps) matter more than MIPS?

Earlier this morning I read the comments of Eli Harari, chief executive of Flash memory chip maker SanDisk, in which he tells The New York Times’ Saul Hansell that we are counting down to the end of Moore’s Law.

“We are running out of electrons. When we started out we had about one million electrons per cell,” or locations where information is stored on a chip, he said. “We are now down to a few hundred.” This simply can’t go on forever, he noted: “We can’t get below one.”
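To get a feel for how short that countdown is, here is a rough back-of-envelope sketch. The starting count of 300 electrons and the assumption that each process generation roughly halves the electrons per cell are my own illustrative figures, not Harari’s:

```python
# Rough sketch: how many process generations are left if the number of
# electrons stored per flash cell roughly halves with each generation?
# The starting count (300) and the halving rate are illustrative assumptions.

electrons = 300.0        # "a few hundred" electrons per cell today
generation = 0

while electrons >= 1:
    electrons /= 2       # assume each shrink halves the electrons per cell
    generation += 1

print(f"Roughly {generation} generations until we are below one electron per cell")
# With these assumptions the loop ends after about 9 generations --
# which is why Harari says this simply can't go on forever.
```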

In what could be viewed as a karmic retort to The Times’ Bits Blog post, John Gruber reported that Apple was going to boost the iPhone’s processor speed roughly 1.5-fold, to 600 MHz, making it easier for the device to render web pages and run applications. The two stories elicited a similar response from me: Why are we measuring Moore’s Law using a yardstick from the PC era?

Raw processing power and cramming more storage onto chips were hallmarks of the PC boom, as Gruber so eloquently illustrates in his post. So why aren’t we framing the conversation in the context of networks and connectivity? After all, how many of us really use unconnected computing devices? In today’s world, don’t megabits per second (Mbps) matter more than MIPS?

My COMMputing (communications + computing) view of the world puts the speed of networks and the availability of connectivity at a level higher than the raw oomph of a processor or the capacity of a memory chip. I was musing about this as I walked to the office, so when I arrived I got on the phone with Sun Microsystems Chief Technology Officer Greg Papadopoulos, who will be appearing at our upcoming Structure 09 conference on June 25. Papadopoulos has been involved with computers — super and small — for a long time, and as such knows a thing or two about Moore’s Law, so I asked him for his take.

“Moore’s Law is a proxy for the PC industry and that’s what it has come to mean,” he told me. But, he added, “It is much more than that.” With virtualization and parallelism, he said, the basic tenets of Moore’s Law live on. “Silicon is like steel. We have not come to terms with that as an industry and as a society.” I most certainly agree. How you use the cheap stuff and build interesting things with it is far more relevant. As Papadopoulos noted, the emergence of systems-in-a-package, or SiP, technology “allows you to combine various different types of silicon modules and build something entirely new. Like combining RF modules, DSPs and memory for a mobile phone.” Indeed, SiP “is a major discontinuity in the semiconductor business,” wrote Sramana Mitra back in April 2005. “SiP will put a further brake on the slowdown of Moore’s Law.” And from Wikipedia:

An example SiP can contain several chips — such as a specialized processor, DRAM and flash memory — combined with passive components — resistors and capacitors — all mounted on the same substrate. This means that a complete functional unit can be built in a multi-chip package, so that few external components need to be added to make it work. This is particularly valuable in space-constrained environments like MP3 players and mobile phones, as it reduces the complexity of the printed circuit board and the overall design.

Thanks to such developments, communications are now being embedded natively into devices that were previously “compute only.” Which brings me back to COMMputing. It doesn’t matter how fast the iPhone processor becomes — all that matters is whether AT&T’s wireless pipe is robust enough for me to effectively leverage the “hardware goodness” of the device. If you don’t have a fast enough network, then you won’t have anything to render on the browser.
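Here is a minimal sketch of that argument in numbers. The page size, render time and link speeds below are illustrative assumptions on my part, not measurements of the iPhone or of AT&T’s network:

```python
# Minimal sketch of the COMMputing argument: for a typical page load,
# is the processor or the network the bottleneck?
# Page size, render time and link speeds are illustrative assumptions,
# not measurements of any particular phone or network.

PAGE_SIZE_MBITS = 8.0   # a ~1 MB web page expressed in megabits
RENDER_TIME_SEC = 1.0   # time the CPU needs to parse and render it

def page_load_seconds(link_mbps: float) -> float:
    """Total load time: time on the wire plus time spent rendering."""
    return PAGE_SIZE_MBITS / link_mbps + RENDER_TIME_SEC

for label, mbps in [("congested 3G", 0.3), ("good 3G", 3.0), ("LTE-class", 20.0)]:
    print(f"{label:>12}: {page_load_seconds(mbps):5.1f} s total")

# On the congested link the wire time (~27 s) dwarfs the render time, so even
# a processor that is 1.5x faster barely moves the total; a faster pipe does.
```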

“If you ask people if they had a choice of getting a computer with a processor that is 10 times more powerful than their current one, or get a connection that is 10 times as fast, most people would opt for the latter,” Papadopoulos pointed out. He is a believer in the 4G wireless broadband technology called Long-Term Evolution (LTE) because he knows that multimegabit wireless speeds are going to spawn a brand-new class of devices. These devices won’t have the fastest processor or the biggest flash memory drive. Instead, they will be connected at high speeds, and information and services will be served up over the Internet — instantly.

And at that point no one will ask, how fast is that processor?

  1. …but we still need better performance from the processor to handle the high volume of instantaneous data. I believe they go hand in hand; just my opinion.

    1. Yeah, but it can’t be the only focus. The processor needs to be smart enough and big enough. Bandwidth is the key, and graphics are the second most important key in future experiences; I think most of us are forgetting that little bit.

      1. Om, who says that the processor is the only focus? Eli Harari says that because denying Moore’s Law suits his company, and Apple does the same for the sake of its product, the iPhone. The people at Microsoft, Google, Yahoo and Facebook are focusing on something else, which may be software and communication. And as jasonspalace said, the two had better go hand in hand, so that the demand for better, clearer and quicker communication can be met by quick processors.

        You are a bit delirious here.

  2. Karl Hildebrandt Friday, May 22, 2009

    I agree and disagree with your point. For the vast majority of the applications out there, bandwidth will be the limiting factor. However, as the memory in cell phones increases and processor capabilities increase, application power is going to increase, so MIPS will become important. These applications will be able to do more with the data they get. In the end we will have to find a holistic balance between the speed at which we get the data and the ability to manipulate it.

    1. So we are not really disagreeing. When I say COMMputing, I am saying exactly that: computing and communications come into sync. Computing at line speed is what matters, not the other way around. So I am actually arguing for a worldview that is communications-centric, with the processor as part of the show. Without the communication fabric, a processor is just a processor, nothing more, in a world where every app and service is going to be network-enabled.

  3. Bandwidth at the periphery doesn’t grow as fast as storage and computing.

    It has been like that for years, and it is even more of a problem in the US than in Europe.

    A good and geeky read on what this means is Accelerando by Charles Stross, a sci-fi book that pushes that historical fact — bandwidth at the edge doesn’t grow fast enough — to the day when the whole solar system is a computer. That computer can exchange only a tiny fraction of its data with any given peer elsewhere, because long-distance bandwidth is much more constrained than anything local.

    The whole book is about bandwidth, storage and downloads, actually! That and a bunch of other Singularity nerdiness :-)

    OK, but what’s the point? Huge and powerful local computing and storage may always be necessary versus purely remote access, because local access is faster — and network-edge upload/download speeds are stuck at a very low level everywhere on the globe.

    Maybe we WANT 10x the bandwidth, but it’s much easier to get a 10x more spacious and powerful computer.

  4. Bandwidth, especially upload speeds, and latency are usually missing from the discussion. In large part, that is because there is competition in the CPU space, where we have choices, whereas bandwidth is in the hands of the infernal cable-telco duopoly, which would rather manage artificially created scarcity and nickel-and-dime you for every dollar of value you can get from the network, even if that means stillbirth for new technology.
    The fact that the cable/telco industry is one of the largest campaign contributors to a venal Congress does not inspire hope either. This is not going to change until we have an administration willing to confront the oligopoly the way the French did with theirs. Julius Genachowski has been saying all the right things, but will he be able to deliver against such entrenched vested interests?

  5. I think you’re missing a couple of big points, Om. For one, wireless speed depends to a huge degree on the processing power of the DSP-ish device that encodes and decodes the bits. So while the speed of the general-purpose CPU doesn’t need to grow like wildfire, it does need to keep pace with the lower-level system components that make speed happen. And don’t forget that the end-to-end architecture of the current Internet invests a great deal of infrastructure in the end system’s CPU.

    But the real limiting factor of performance in hand-held devices is power. It doesn’t matter how fast you can encode bits for MIMO and OFDM and manage packets at the TCP level if your battery dies after 45 minutes, so the real deal today is something like MIPS/Watt. Adding parts to a die doesn’t improve that metric, so your Sun guy is clearly showing his stripes as a datacenter-oriented dude, which is what he is, of course.

    1. Excellent points, Richard – and far more relevant than the original post.

  6. I’m all for taking shots, veiled or otherwise, at AT&T and everyone else’s crummy data network, but I think you are misrepresenting John Gruber to set this up. Gruber’s point is that a faster CPU matters because it will let the iPhone better keep up with even AT&T’s mediocre network.

    1. A more relevant fact is that the upgraded iPhone just barely surpasses the CPU power of the Blackberry 9000 series, which is more or less the standard for high-end smartphones these days.

      For reference, the BlackBerry has more than 2,000 times the CPU power and memory of the first node on the ARPANET. But the AT&T wireless network is roughly 1,500 times faster than the first ARPANET connection, so these things are more or less in proportion.

      1. Richard,

        First of all thank you for a historical perspective. It is amazing — I have been putting together a presentation around this whole notion and well, now I can go back to the days of ARPANET. :-)

        I totally agree – the Bold is so much better than any other smartphone I have used, and the Curve 2 (8900) is even better. With UMA it totally rocks. I just think people have to redefine their thinking around the whole notion of what computing is today. I also think Moore’s Law is morphing into more of a Moore’s Theorem, though I am not smart enough to make that assertion. I would love to hear your thoughts here.

      2. Om,

        I’m writing a white paper that deals in part with the progression of CPU power, network speeds, protocol sophistication, and regulatory models, so I don’t want to reveal too much of my thinking around all of this until I’m ready to go public with it. For the current discussion I think we do have to accept that we’re into the last legs of easy upgrades for both CPU power and network speeds, so it behooves us to pay more attention to efficiency in both the network and the system than we have in the past, at least for mobile devices where battery power is such a limiting factor.

        If you’d like to comp me for your event, we can talk some more about these dynamics.

      3. “We are into the last legs of easy upgrades for both CPU power and network speeds”?

        Thank you for raising this point. Finally, I have something to say :-) It is now up to us software and application people to push the limit by building more efficient software!

    2. Eas

      You are perhaps misreading the post. I am not misrepresenting John, and neither am I taking a pot shot at him. AT&T’s network is pretty pathetic, and you know it as much as I do. Others have experienced that as well.

      On keeping up with AT&T’s mediocre network: the point of my post is that we need to think of processor and network speeds in tandem, not as standalone metrics.

      We might be saying the same thing.

  7. Jayshree Ullal Friday, May 22, 2009

    Particularly in the cloud, with dense computing, more symmetry in the network is needed.
    My emerging, though simplistic, rule of thumb is 1 GHz of compute processing = 1 gigabit of I/O.
    It all depends on workloads and apps, of course.

    More in my blog later this summer…

    1. Jayshree

      I totally agree with you on this. I think the bump in the speed of the iPhone is keeping up with the upgraded speed of the AT&T network. They have been promising us speeds of around 700 kbps and higher, though my view is that there will be more symmetry in the compute/communicate worlds.

      1. They’re actually promising 7.2 Mbps in the shared downstream direction this summer for the new iPhone. It’s just a firmware upgrade to their existing plant, but it will do some interesting things to the backhaul in places where they still use copper.

    2. That’s more or less the ratio we had in the early Ethernet days: 10 MHz of CPU on a 10 Mbps network, and 1 MHz of CPU on my 1 Mbps StarLAN network.

      BTW, I remember your first IEEE 802 meeting in Irvine.

  8. What is the constraint in handheld networked computing?

    1. CPU speed
    2. Bandwidth

    The PC was at the same stage a few years back, and we see that we still need more bandwidth and more computing power.
    The handheld revolution is on, and we don’t really know what is possible at this moment.

    Ignore any of it at your own peril.

  9. I agree with you here, Om. Actually, you can summarize the current IT trend (apps and data in the cloud) with Sun’s slogan “The Network Is the Computer.” Five years ago it was true only for IT geeks building massively parallel supercomputers, but now, with the advent of cloud apps, it has become relevant to normal users. Why download a song when you can stream it? You can access data from anywhere using cloud devices like Pogoplug or cloud synchronization services. Even core office apps are now cloud-based, like Google’s office suite and Zoho. The fact of life today is that you can work very productively with a good network connection plus cloud apps on a netbook or old hardware.

