
Summary:

As the needs for performance and power efficiency become more important, figuring out how to get the best hardware on the job without having to rewrite all your code becomes a problem computer science must solve. Is abstraction the answer?


The idea of servers powered by cell phone chips, with hundreds of them whirring away to solve a problem while using less power, has become almost commonplace in the last few years. But over at Linux Magazine, editor Douglas Eadline lays out the enormous problems associated with such a vision: namely, the software can’t cut it yet (hat tip to Inside HPC). Eadline writes about May’s Law, a corollary to Moore’s Law (the observation that the number of transistors on a chip doubles roughly every 18 months).

David May postulated that software efficiency halves every 18 months, compensating for Moore’s Law: adding faster hardware makes the software more complicated, so it doesn’t run as well, and it takes time for software to catch up to hardware gains. If both laws hold, the two trends roughly cancel, and delivered application performance barely moves even as transistor counts climb. Now, as high-performance computing gurus and webscale data centers contemplate a wholesale change of architecture inside their data centers by adding ARM-based servers, that software complexity must be addressed. Eadline suggests abstracting the runtime environments for such systems. From his article:

As software progress crawls along, I am convinced that future large-scale HPC applications will include dynamic fault-tolerant runtime systems. The user needs to be lifted away from low-level responsibility so they can focus on the application and not the complexity of the next hardware advance.

This sounds similar to the issues driving the creation and adoption of platforms as a service on top of various clouds, only Eadline is arguing for a runtime environment that enables high-performance computing on top of different hardware architectures, be it x86 chips, graphics processors or ARM-based chips. We’ve covered that idea before, and I still think it has promise. As the needs for performance and power efficiency become more important, figuring out how to get the best hardware on the job without having to rewrite all your code becomes a problem computer science must solve.
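To make Eadline’s suggestion concrete, here is a minimal sketch, in Python, of what a hardware-abstracted runtime could look like: the application submits work through a single run() call, and a dispatcher picks whichever backend is available, in the spirit of the x86/GPU/ARM mix described above. The backend names, registry and selection rule here are hypothetical, invented purely for illustration; a real HPC runtime would also handle scheduling, data movement and the fault tolerance Eadline calls for.

```python
# Toy sketch of a hardware-abstracted runtime. Application code calls
# run(); the dispatcher picks a backend. The backend names and the
# selection order are hypothetical, for illustration only.

def run_on_x86(task, data):
    # Stand-in for vectorized x86 code paths.
    return [task(x) for x in data]

def run_on_arm(task, data):
    # On a real system this would target the ARM nodes; here it is
    # the same loop, so the sketch runs anywhere.
    return [task(x) for x in data]

# Registry of available backends. A real runtime would probe the
# hardware at startup instead of hard-coding the table.
BACKENDS = {"x86": run_on_x86, "arm": run_on_arm}

def run(task, data, preferred=("arm", "x86")):
    """Run task over data on the first backend that is available."""
    for name in preferred:
        backend = BACKENDS.get(name)
        if backend is not None:
            return backend(task, data)
    raise RuntimeError("no usable backend found")

if __name__ == "__main__":
    # The application never names the hardware it runs on.
    print(run(lambda x: x * x, range(5)))
```

The point is the shape of the API: the application calls run() and never names the hardware underneath, which is exactly the being “lifted away from low-level responsibility” that Eadline describes.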

Comments:

  1. First of all, there are hard numerical problems. No doubt about it.

    We learn first from negative feedback, then we learn “self,” then “all,” then numbers, then math, all on a massively parallel, lock-free, “clock”-free system. Is the sequence important? Why not use numbers to implement “all”?

    Maybe, just maybe, we should learn from Mother Nature. And what is the abstraction of “all” or “five,” given that we have to jump through hoops to learn it, while some cultures stop counting at 2 or 3 (the “one-two-many” cultures)?

    In other words, I think we will have “smart” problem-solving systems with a high level of fault tolerance (lock-free) plus an HPC numerical subsystem. The problem seems to be making the “numerical” system “smart,” except that doing the same thing over and over while expecting a different outcome is hardly smart.

    For example:
    How can a system learn numbers (the short version)?

    Numbers are sequences
    Sequences in order
    Order low to high

    How can one teach or learn order? One could start with nil, but what is the representation of nil in an active system bombarded with all kinds of “random” signals? How does one filter nil out? Or one could start with a lot/many/all, which is easy to detect as “many” synchronized signals. After one gets “all,” how about “except” to break up “all”? From there it’s a small step to greater/smaller, and on to ordered sequences. Off we go to numbers and math, with no statistics and no voodoo.

    Or maybe we are closer than most people think to what machines can do. It just takes a little rethinking of the basics.

  2. This is one area where, AFAIK, the open source Mono project (a project initially created to run .NET on *NIX operating systems) is out front… you can use AOT (ahead-of-time) compilation and get an ARM-optimized binary from high-level .NET code (C#/VB.NET), with the hardware architecture abstracted away. No metrics on the quality of the ARM optimizations, though…
    http://www.mono-project.com/AOT
    MS does support .NET on ARM, but only via the Compact and Micro Frameworks, which support only a subset of the CLR.

  3. Aaron: Compiling to ARM is not the point; almost any language can be compiled to ARM code, and in theory every language can. The problem is the number of processors and allotting relevant tasks to them with low overhead (see the sketch after these comments).

    Ronald: I don’t understand what you want to convey.

    See my commentary on this article: http://abiro.com/w/2011/03/19/the-future-of-web-application-platforms/

  4. “I am convinced that future large-scale HPC applications will include dynamic fault-tolerant runtime systems.”

    I also share your opinion on that.

    Greetings and thanks

  5. Ab Initio’s CO>OS is supposed to do just this… http://abinitio.com/abinitio_products.html
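On the task-allotment point raised in comment 3, here is a minimal sketch, in Python, of the low-overhead dispatch problem at single-machine scale: independent tasks are farmed out across however many processors the machine has, without the application hard-coding a core count. It uses only the standard library’s multiprocessing module; the toy workload and the chunk size are arbitrary choices for illustration.

```python
# Toy illustration of allotting tasks across available processors.
# The runtime (here, multiprocessing.Pool) discovers the core count;
# the application never hard-codes it.
import multiprocessing as mp

def simulate_cell(seed):
    # Stand-in for a real numerical kernel: a cheap arithmetic loop.
    total = 0
    for i in range(10_000):
        total += (seed * i) % 97
    return total

if __name__ == "__main__":
    tasks = range(64)
    # Pool() defaults to os.cpu_count() workers, so the same script
    # spreads its work across 4 cores or 40 without modification.
    with mp.Pool() as pool:
        # chunksize batches tasks to cut per-task dispatch overhead;
        # 8 is an arbitrary value for this sketch.
        results = pool.map(simulate_cell, tasks, chunksize=8)
    print(sum(results))
```

A real HPC runtime faces the same problem across thousands of nodes rather than a handful of cores, which is where the dynamic fault tolerance discussed in the article comes in.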
