Wanted: heroic programmers or high-level APIs

Multicore computing is progressing so fast it's difficult to forecast "what tomorrow may bring"

Multicore computing is progressing so fast that it's difficult to forecast "what tomorrow may bring", says Barbara Chapman of the University of Houston, Texas.

This rapid pace makes it hard to keep relevant the application programming interface (API) layer that brings multicore program development within the reach of mainstream programmers.

"Any [computer] system is only as good as the programming tools you have for it," Chapman told the Multicore World conference in Wellington.

There will always be "heroic programmers" who can coordinate multiple processors using low-level instructions, but most "lack the time and perhaps the skills and the training" to program close to the "bare metal", she says.

So having the right higher-level programming models that combine relative ease of development with reasonable efficiency is very important -- not only for the developers themselves, but for the commercial success of multicore computing. If businesses say "this is an impressive technology, but we haven't the resources to develop for it; we're going away", then progress will be held back, she says.

Multicore systems often incorporate different processors with different operating systems, making a portable set of tools essential. When it comes to choosing tools, some people will always put performance first and some will put portability first, Chapman says.

"I think portability is really critical. Sometimes we'll have areas where performance is valued more and people are prepared to go the extra mile to create an efficient system, but most will want a programming model that will allow them to use an efficient strategy from the human resources point of view "and allow the compiler to work out the details."

An important candidate is OpenMP, an API that allows opportunities for parallel processing to be marked in the code, so the master thread can start up a number of "slave" threads that run sections of it in parallel on the different processors.
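To give a flavour of that style of programming, here is a minimal sketch of an OpenMP loop in C. It is an illustration rather than code from the conference, and the loop and its figures are arbitrary: a single directive marks the loop as parallelisable, and the runtime forks a team of threads to share the iterations while the compiler works out the details.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    /* The pragma marks the parallel opportunity; the OpenMP runtime
     * splits the iterations among the available threads and combines
     * each thread's partial sum at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / (i + 1);

    printf("max threads: %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}

Compiled with an OpenMP-aware compiler (for example gcc -fopenmp), the same source runs serially or across however many cores are available, which is the kind of portability Chapman is arguing for.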

Multicore computing has been prone to the kind of underestimates that were legendary in the early days of the computer industry. At the beginning of OpenMP development, for example, it was thought that no one would ever need more than four threads running simultaneously, Chapman says. These days even trainee programmers are attempting to handle systems with 48 processors.

OpenMP began development in 1997, so it's "middle-aged" in computing terms but still very much in ferment. The architecture review board that controls the standard is currently attempting to finalise version 4.0. It was hoped that the new version would be ready by the end of last year. There is a draft version that can be read on the openmp.org website, but "there's still a lot of debate going on so we can't say 'this is the way it's going to be'."

New features in the projected 4.0 include facilities for threads to communicate with one another -- hitherto they have had to run independently -- and tidy methods of terminating a thread when an error condition occurs.
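As a rough sketch of what the tidier error handling might look like, the draft includes cancellation directives. The syntax below follows the draft and could change before 4.0 is finalised; the process() routine is a made-up stand-in for real work, and cancellation also has to be enabled at run time (for instance by setting OMP_CANCELLATION=true).

#include <stdio.h>
#include <omp.h>

/* Hypothetical work item: fails on one particular input. */
static int process(int i) { return (i == 1234) ? -1 : 0; }

int main(void)
{
    #pragma omp parallel for
    for (int i = 0; i < 100000; i++) {
        if (process(i) != 0) {
            /* Ask all threads to abandon the rest of the loop. */
            #pragma omp cancel for
        }
        /* Threads check here whether cancellation was requested. */
        #pragma omp cancellation point for
    }
    printf("finished (possibly early)\n");
    return 0;
}

Without such a construct, a thread hitting an error has no standard way to stop its siblings, which is the gap the review board is trying to close.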

See also:

Multicore is key, but don't expect a 'killer app'

