
There’s been a lot of talk lately about OpenCL, the parallel programming framework, as the new version of Apple’s OS X operating system, which uses it, is due to be unveiled soon. But what exactly is OpenCL, and why should you care? It all boils down to increasing system performance and bowing to the realities of today’s visually intensive computing. Like Red Bull purports to do for tired partygoers, OpenCL gives computing wings.

OpenCL is a programming framework that allows software to run on both the CPU and the graphics processor of a computer. This means programs will run faster and offer more performance on a machine’s existing hardware (provided it has a separate GPU rather than integrated graphics). Most programs are written either to run on the CPU (a common Intel or AMD x86 processor) or specifically for the GPU, as video games are. In the last few years, however, chip vendors have offered software development kits and frameworks that let developers use the GPU for general-purpose tasks, such as data processing or transcoding, that can be parallelized to run on a many-core GPU.

Nvidia pushed CUDA for scientific computing while AMD tried to push Close to the Metal (now Stream). But as sending tasks to the GPU became easier, the software and hardware providers realized an open standard that wasn’t tied to a particular chip vendor would be the best option. So earlier this year Apple offered OpenCL to the Khronos Group, a standards-setting organization, and Intel, Nvidia and AMD joined forces to create a standard that would work on multiple chips.

The standard was released this month, so now any programmer who wants to borrow a little power from the GPU can. It should also come in handy for running visuals on devices such as future iPhones, or mobile devices that use Nvidia’s Tegra chip, which has a separate graphics core. Indeed, Imagination, the company that’s licensed its graphics core to Apple for use in future iPhones, is hiring OpenCL engineers. Prepare for computing to get faster and prettier — from laptops to smartphones.

  1. Actually OpenCL will allow programmers to take advantage of any and all supported processing units/cores, not just CPUs and GPUs; this also includes DSPs and anything else that is designed for specialized tasks.

  2. I’m excited about this technology too, but there’s one very vital limitation that few are talking about. In current CPU/GPU architectures, the CPU cores and the GPU cores can each access ONLY their own memory space. This means that when OpenCL (with the block language extensions) executes code on a GPU core, it must first block-copy memory to the GPU and then, after execution, block-copy the results back. The overhead of this operation is minimal compared to the performance gain for moderate to deep mathematical computations, but it becomes an issue with general computing logic. If OpenCL is used for general computing logic, it’s possible that it will execute slower than if only CPU cores are used. OpenCL is an important step, but until the CPU and GPU have a shared memory space, using GPU cores for general computing will be prohibitive.

  3. @ Mike Ross:

    Thanks for the informative post. From what you have revealed, it sounds like it will not improve the performance of real-time tasks. But, it seems like it still could be utilized to speed up tasks like rendering effects on digital photos and videos. Do you agree?

  4. Apple is also introducing Grand Central, which may (with OpenCL) do something about shared core memory spaces – we’ll see.

  5. [...] everything from netbooks to mobile internet devices running on ARM processors. Other machines will offload more processing to the graphics processor. This is great for consumers, who will soon be able to choose a computer that fits their lifestyle, [...]

  6. A very interesting topic – but a sloppily written article.

    Mike Ross, it depends how divisible certain tasks are. It’s like programming for multiple cores (hard in itself, as you have surely heard) brought to the next level. If you are able to separate out certain tasks that could run as separate threads/processes and that would actually benefit (!) from another architecture (like GPU), then it would undoubtedly make sense.

  7. [...] Huang says he will continue investing more in R&D around his three core initiatives – GPU computing, mobile computing and visual [...]

  8. [...] the consumer side, the demand for graphics, which drove AMD’s purchase of ATI back in 2005, has taken the emphasis and even some of the [...]

