
Summary:

Two items caught my eye today: SanDisk CEO Eli Harari explaining how we are counting down to the end of Moore’s Law in terms of electrons per cell, and news that Apple will increase the speed of its processor by 1.5 times to 600 MHz, making it easier for the iPhone to render web pages and enhance application usage. The two stories elicited a similar response from me: Why are we measuring Moore’s Law using a yardstick from the PC era? In today’s world, don’t megabits per second (Mbps) matter more than the MIPS?

Earlier this morning I read the comments of Eli Harari, chief executive of Flash memory chip maker SanDisk, in which he tells The New York Times’ Saul Hansell that we are counting down to the end of Moore’s Law.

“We are running out of electrons. When we started out we had about one million electrons per cell,” or locations where information is stored on a chip, he said. “We are now down to a few hundred.” This simply can’t go on forever, he noted: “We can’t get below one.”
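Harari’s countdown lends itself to a quick back-of-the-envelope check. As a rough sketch (the one-halving-per-process-generation rate is my assumption, not his), going from a million electrons per cell to a few hundred took about eleven halvings, and from a few hundred, the one-electron wall is fewer than ten halvings away:

```python
# Back-of-the-envelope: how many process shrinks until a flash cell
# stores fewer than one electron? Assumes (hypothetically) that the
# electron count halves with each generation.

def generations_until_floor(electrons, floor=1, shrink_factor=0.5):
    """Count generations until electrons per cell drops below `floor`."""
    generations = 0
    while electrons >= floor:
        electrons *= shrink_factor
        generations += 1
    return generations

# From ~1,000,000 electrons down to "a few hundred" (~488) is 11 halvings;
# from there, 9 more halvings hit the 1-electron wall.
print(generations_until_floor(1_000_000))  # 20
print(generations_until_floor(400))        # 9
```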

In what could be viewed as a karmic retort to The Times’ Bits Blog post, John Gruber reported that Apple was going to increase the speed of its processor by 1.5 times to 600 MHz, making it easier for the iPhone to render web pages and enhance application usage. The two stories elicited a similar response from me: Why are we measuring Moore’s Law using a yardstick from the PC era?

Processing power and cramming more storage onto chips were hallmarks of the PC boom, as Gruber so eloquently illustrates in his post. So why aren’t we framing the conversation in the context of networks and connectivity? After all, how many of us really use unconnected computing devices? In today’s world, don’t megabits per second (Mbps) matter more than the MIPS?

My COMMputing (Communications+computing) view of the world puts the speed of the networks and the availability of connectivity at a level higher than the raw oomph of a processor or the capacity of a memory chip. I was musing about this as I walked to the office, so when I arrived I got on the phone with Sun Microsystems Chief Technology Officer Greg Papadopoulos, who will be appearing at our upcoming Structure 09 conference on June 25. Papadopoulos has been involved with computers — super and small — for a long time, and as such knows a thing or two about Moore’s Law, so I asked him for his take.

“Moore’s Law is a proxy for the PC industry and that’s what it has come to mean,” he told me. But, he added, “It is much more than that.” With virtualization and parallelism, he said, the basic tenets of Moore’s Law live on. “Silicon is like steel. We have not come to terms with that as an industry and as a society.” I most certainly agree. How you use the cheap stuff and build interesting things is far more relevant. As Papadopoulos noted, the emergence of systems-in-a-package, or SiP, technology, “allows you to combine various types of silicon modules and build something entirely new. Like combining RF modules, DSPs and memory for a mobile phone.” Indeed, SiP “is a major discontinuity in the semiconductor business,” wrote Sramana Mitra back in April 2005. “SiP will put a further brake in the slowdown of Moore’s Law.” And from Wikipedia:

An example SiP can contain several chips — such as a specialized processor, DRAM, and flash memory — combined with passive components — resistors and capacitors — all mounted on the same substrate. This means that a complete functional unit can be built in a multi-chip package, so that few external components need to be added to make it work. This is particularly valuable in space-constrained environments like MP3 players and mobile phones as it reduces the complexity of the printed circuit board and overall design.

Thanks to such developments, communications are now being embedded natively into devices that were previously “compute only.” Which brings me back to COMMputing. It doesn’t matter how fast the iPhone processor becomes — all that matters is whether AT&T’s wireless pipe is robust enough for me to effectively leverage the “hardware goodness” of the device. If you don’t have a fast enough network, then you won’t have anything to render on the browser.

“If you ask people if they had a choice of getting a computer with a processor that is 10 times more powerful than their current one, or getting a connection that is 10 times as fast, most people would opt for the latter,” Papadopoulos pointed out. He is a believer in the 4G wireless broadband technology called Long-Term Evolution (LTE) because he knows that multimegabit wireless speeds are going to spawn a brand-new class of devices. These devices won’t have the fastest processor or the biggest flash memory drive. Instead they will be connected at high speeds, and information and services will be served up over the Internet — instantly.
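Papadopoulos’s thought experiment can be sketched with a toy latency model (all numbers here are my illustrative assumptions, not measurements): when the network transfer dominates the wait, a 10x CPU barely moves the total, while a 10x link collapses it.

```python
# Toy model: total wait = network transfer time + local processing time.
# The page size, link speed, and CPU figures are illustrative assumptions.

def wait_seconds(page_mbits, link_mbps, work_mips_needed, cpu_mips):
    """Time to fetch and render: transfer time plus compute time."""
    return page_mbits / link_mbps + work_mips_needed / cpu_mips

# Hypothetical 2009-ish handset: an 8-Mbit page over a 1-Mbps link,
# with 200 MIPS of rendering work on a 400-MIPS CPU.
base = wait_seconds(8, 1.0, 200, 400)         # 8.0 s transfer + 0.5 s compute
faster_cpu = wait_seconds(8, 1.0, 200, 4000)  # 10x CPU barely helps
faster_net = wait_seconds(8, 10.0, 200, 400)  # 10x link helps a lot

print(base, faster_cpu, faster_net)  # 8.5 8.05 1.3
```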

And at that point no one will ask, how fast is that processor?


  1. jasonspalace Friday, May 22, 2009

    …but we still need better performance from the processor to handle the high volume of instantaneous data. i believe they go hand in hand, just my opinion.

    1. yeah but it can’t be the only focus. the processor needs to be smart enough and big enough. the bandwidth is the key and graphics are the second most important key in future experiences, i think most of us are forgetting that little bit.

      1. Om, who says that a processor is the only focus? Eli Harari said that because it serves his company to declare the end of Moore’s Law, and Apple did the same for the sake of its product, the iPhone. Microsoft’s, Google’s, Yahoo’s, and Facebook’s people are focusing on something else, which can be software and communication. And as jasonspalace said, it’s better that they go hand in hand, so that the demand for better, clearer, and quicker communication can be fulfilled by quick processors.

        You are a bit delirious here.

  2. Karl Hildebrandt Friday, May 22, 2009

    I agree and disagree with your point. For the vast majority of the applications out there, bandwidth will be the limiting factor. However, as the memory in cell phones increases and processor capabilities improve, application power is going to increase, so MIPS will become important. These applications will be able to do more with the data they get. In the end we will have to find a holistic balance between the speed to get the data and the ability to manipulate it.

    1. So we are not really disagreeing. When I say COMMputing, I am saying exactly that: computing and communications come into sync. Computing at line speed is what matters, not the other way around. So I am actually arguing for a world view that is communications-centric, with the processor as part of the show. Without the communication fabric, a processor is just a processor, nothing more, in a world where every app and service is going to be network-enabled.

  3. Bandwidth at the periphery doesn’t grow as fast as storage and computing.

    It has been like that for years, and it is even more of a problem in the US than in Europe.

    A good and geeky read on what this means is Accelerando by Stross: a sci-fi book that pushes this historical fact — bandwidth at the edge doesn’t grow fast enough — to the day when the whole solar system is a computer. That computer can exchange only a tiny fraction of its data with any given peer elsewhere, because long-distance bandwidth is much more constrained than anything local.

    The whole book is about bandwidth, storage and downloads, actually! That and a bunch of other Singularity nerdiness :-)

    OK, but what’s the point? Huge and powerful local computing and storage may always be necessary versus pure remote access, because local access is faster, and network-edge upload/download speeds are stuck at a very low level everywhere on the globe.

    Maybe we WANT 10x the bandwidth, but it’s much easier to get a 10x more spacious and powerful computer.

  4. Fazal Majid Friday, May 22, 2009

    Bandwidth, especially upload speeds, and latency are usually missing from the discussion. In large part, that is because there is competition in the CPU space, where we have choices, whereas bandwidth is in the hands of the infernal cable-telco duopoly, which would rather manage artificially created scarcity and nickel-and-dime you for every dollar of value you can get from the network, even if that means stillbirth for new technology.
    The fact that the cable/telco industry is one of the largest campaign contributors to a venal Congress does not inspire hope either. This is not going to change until we have an administration willing to confront the oligopoly the way the French did with theirs. Julius Genachowski has been saying all the right things, but will he be able to deliver against such entrenched vested interests?

  5. Richard Bennett Friday, May 22, 2009

    I think you’re missing a couple of big points, Om. For one, wireless speed depends to a huge degree on the processing power of the DSP-ish device that encodes and decodes the bits. So while the speed of the general-purpose CPU doesn’t need to grow like wildfire, it does need to keep pace with the lower-level system components that make speed happen. And don’t forget that the end-to-end architecture of the current Internet invests a great deal of infrastructure in the end system’s CPU.

    But the real limiting factor of performance in hand-held devices is power. It doesn’t matter how fast you can encode bits for MIMO and OFDM and manage packets at the TCP level if your battery dies after 45 minutes, so the real deal today is something like MIPS/Watt. Adding parts to a die doesn’t improve that metric, so your Sun guy is clearly showing his stripes as a datacenter-oriented dude, which is what he is, of course.

    1. Excellent points, Richard – and far more relevant than the original post.

  6. I’m all for taking shots, veiled or otherwise, at AT&T and everyone else’s crummy data networks, but I think you are misrepresenting John Gruber to set this up. Gruber’s point is that a faster CPU matters because it will let the iPhone better keep up with even the mediocre speeds of AT&T’s network.

    1. Richard Bennett Eas Friday, May 22, 2009

      A more relevant fact is that the upgraded iPhone just barely surpasses the CPU power of the Blackberry 9000 series, which is more or less the standard for high-end smartphones these days.

      For reference, the Blackberry has more than 2,000 times the CPU power and memory of the first node on the ARPANET. But the AT&T wireless network is roughly 1,500 times faster than the first ARPANET connection, so these things are more or less in proportion.

      1. Richard,

        First of all thank you for a historical perspective. It is amazing — I have been putting together a presentation around this whole notion and well, now I can go back to the days of ARPANET. :-)

        I totally agree – the Bold is so much better than any other smartphone I have used, and the Curve 2 (8900) is even better. With UMA it totally rocks. I just think people have to redefine their thinking around the whole notion of what computing is today. I think Moore’s Law is morphing into more of a Moore’s Theorem, though I am not smart enough to make that assertion. I would love to hear your thoughts here.

      2. Om,

        I’m writing a white paper that deals in part with the progression of CPU power, network speeds, protocol sophistication, and regulatory models, so I don’t want to reveal too much of my thinking around all of this until I’m ready to go public with it. For the current discussion I think we do have to accept that we’re into the last legs of easy upgrades for both CPU power and network speeds, so it behooves us to pay more attention to efficiency in both the network and the system than we have in the past, at least for mobile devices where battery power is such a limiting factor.

        If you’d like to comp me for your event, we can talk some more about these dynamics.

      3. “We are into the last legs of easy upgrades for both CPU power and network speeds” ?

        Thank you for raising this point. Finally I have something to say :-) It is now up to us software and application people to push the limit by building more efficient software!

    2. Eas

      You are perhaps misreading the post. I am not misrepresenting John, and neither am I taking a pot shot at him. AT&T’s network is pretty pathetic, and you know it as much as I do. Others have also experienced that.

      On keeping up with AT&T’s mediocre network: the point of my post is that we need to think about processor and network speeds in tandem, not as standalone metrics.

      We might be saying the same thing.

  7. Jayshree Ullal Friday, May 22, 2009

    Particularly in the cloud, with dense computing, more symmetry in the network is needed.
    My emerging, though simplistic, rule of thumb: 1 GHz of compute processing = 1 Gigabit of I/O.
    It all depends on workloads and apps, of course.

    More in my blog later this summer…

    1. Jayshree

      I totally agree with you on this. I think the bump in the speed of the iPhone is keeping up with the upgraded speed of the AT&T network. They have been promising us speeds of around 700 kbps and higher, though my view is that there will be more symmetry in the compute/communicate worlds.

      1. Richard Bennett Om Malik Friday, May 22, 2009

        They’re actually promising 7.2 Mbps in the shared downstream direction this summer for the new iPhone. It’s just a firmware upgrade to their existing plant, but it will do some interesting things to the backhaul in places where they still use copper.

    2. That’s more or less the ratio we had in the early Ethernet days: 10 MHz of CPU on a 10 Mbps network, and 1 MHz of CPU on my 1 Mbps StarLAN network.

      BTW, I remember your first IEEE 802 meeting in Irvine.

  8. Saswat Praharaj Friday, May 22, 2009

    What is the constraint in handheld networked computing?

    1. CPU speed
    2. Bandwidth

    The PC was at the same stage a few years back, and we see that we still need more bandwidth and more computing power.
    The handheld revolution is on, and we don’t really know what is possible at this moment.

    Ignore any of it at your own peril.

  9. MIPS are death, Megabits/connectivity becomes the law « Technologycritics’s Blog Saturday, May 23, 2009
  10. I agree with you here, Om. Actually, you can summarize the current IT trend (apps and data in the cloud) with Sun’s slogan: “The Network is the Computer.” Five years ago that was true for IT geeks building massively parallel supercomputers, but now, with the advent of cloud apps, it has become relevant to normal users. Why download a song when you can stream it? You can access data from anywhere using cloud devices like Pogoplug or cloud synchronization services, and even core office apps, like Google’s and Zoho’s, are now cloud-based. The fact of life today is that you can work very productively with a good network connection plus cloud apps on a netbook or old hardware.

  11. I would add operating system efficiency and browser page-rendering speed to the SiP + higher-bandwidth equation.

    Chrome and Android running on that new Qualcomm “all in one” chip over WiMax is what we want, and the result would be way faster than merely doubling ( quadrupling? ) the CPU processor speed.

  12. Totally disagree. I think you need to go back and reread Gruber’s article. Pay attention to the part about NetShare and his superior experience with non-mobile Safari.

    @clatko

  13. outside of the tech/geek crowd, people are far less willing to pay for bandwidth than for compute speed. i sell computers, and when i ask a customer why they want to upgrade, the number one answer is either ‘faster downloads’ or ‘not waiting for the internet all the time.’ i usually suggest that they may be better off upgrading their connection from the basic DSL plan to a faster one, or from mobile broadband to a wired one. the usual response is that they want to know what they can spend at once to speed up their connection; they are unwilling to take on any higher recurring monthly charges for the sake of speed.

    in addition to fatter pipes we need to be looking at better compression algorithms and some standard for sending heavily compressed data from the cloud to user clients. this should all be totally transparent to the users.

  14. Virtual Web Symphony Monday, May 25, 2009

    PC processing is extremely important. It is true that when you have the option of fast access to the internet, you will definitely enjoy that much more than faster processing in a localized environment. But faster internet access is also related to CPU speed.
    So in my opinion, CPU speed and bandwidth are both equally important.

  15. I just read your entry through the link at http://technologycritics.wordpress.com/
    Though I agree with your article as well as Technologycritics’ article, I could not deny that Windowslog has a strong point. When we are talking about mobility, the MIPS of the terminal is as relevant as it was before. However, it is totally different
    A) to think of challenging the MIPS of a data center by accelerating the speed of each processor than by multiplying the number of processors and managing their connectivity in a very productive manner;
    B) to think of increasing the MIPS of the terminal as one of the ways to make the connectivity factor efficient than to keep MIPS as the main challenge and the performance of the network (not just bandwidth) as a secondary one.

    Manuel No

  16. 10 Things You Don’t Need to Do In the Clouds « SmoothSpan Blog Monday, May 25, 2009

    [...] Worrying about MIPs in general.  As Om Malik so correctly points out, its the megabits (of connectivity) not the MIPs that count these days.  We haven’t been [...]

  17. Prashant Gandhi Tuesday, May 26, 2009

    Om,

    Though both connectivity and computing matter, their relative importance does change depending on time, application workload, device, etc. Ping-ponging between the two will continue unabated… There are times when I crave a much faster CPU and more memory (e.g., when running VMs on my PC and Mac) and other times when I crave much faster bandwidth (e.g., when surfing from a coffee shop).

    Interestingly, both computing and connectivity equipment depend on processors, ASICs and memories (MIPS, Mbps, Mpps, MB) which are bound by fundamental laws of precious electrons…

    PG.

  18. Alcatel Lucent Boosts Fiber Speed to 100 Petabits in Lab Monday, September 28, 2009

    [...] as Cyan Optics are so important to maintaining the current pace of innovation on the web. Now that broadband is our platform we have to make sure it continues to get faster and [...]

  19. Don’t Let the PC Die Out Like the Dinosaurs | iFanr 爱范儿 Monday, October 26, 2009

    [...] What immediately caught my eye is that nearly half of the respondents consider an offline computer worthless. The hardware race among PC makers of the 1980s and early 1990s quickly turned into a broadband race, with value built on the network platform rather than on hardware devices. The next generation of companies will not be the giants that control the hardware business, but the players that dominate the network, such as Google and Facebook, and even companies like Spotify or Hunch that are working toward this. [...]

  20. Google to Build Fiber Network to Drive Web Innovation – GigaOM Wednesday, February 10, 2010

    [...] at GigaOM have said for years that broadband is the platform for innovation, and Google no doubt agrees. The pace of technological innovation in terms of video conferencing, [...]

  21. Google’s Fiber Network Could Foil ISPs and Fuel Innovation – GigaOM Wednesday, February 10, 2010

    [...] at GigaOM have said for years that broadband is the platform for innovation, and Google no doubt agrees. The pace of technological innovation in terms of video conferencing, [...]

  22. Verizon Goes Up & Down at 10 Gbps in Tests : Tech News « Tuesday, October 26, 2010

    [...] we’ve said before, broadband speeds matter, and by providing more capacity and speeds Verizon is offering a platform that’s not only [...]
