The world’s first microprocessor, the Intel 4004, was launched in 1971. It was a 4-bit design with a clock speed of 740kHz, and contained a single core. Today we have 64-bit chips, clock speeds of 4.4GHz, and up to a dozen cores. This phenomenal rate of change would be awe-inspiring had we not come to expect constant improvements as the norm in the world of computing.

Some analysts and scientists are suggesting that such complacency might be misguided, as the laws of physics could soon step in and call a halt to further improvements.

Fortunately, silicon transistors aren’t the only way to make processors, and even the familiar concept of executing instructions sequentially, on digital data, has its alternatives. Here we look at some of these different and, in some cases, bizarre technologies to get a view of what might be driving our computers in a decade or two's time.

Future processors: the non-silicon alternative


Transistors made from carbon nanotubes have the potential to operate at 1THz. (Photo: Stanford University)

A transistor is an electronic component that either amplifies a signal or allows one signal to control another. Transistors form the basis of nearly all electronic equipment; indeed, today’s most complicated processors contain no fewer than 2.5 billion of them. Although the term “silicon chip” is a familiar one, the element silicon isn’t the only substance that can be used to make transistors. In the early days, germanium was also used.

No-one is suggesting a return to germanium, of course, but other semi-metallic elements do look promising when used in combination. Intel has been experimenting with several of these compound semiconductors. Unlike a single element, a mixture allows the electrical properties to be fine-tuned, and this flexibility has yielded better performance than silicon.

Back in 2005 the company announced an InSb (indium antimonide) transistor that was five times faster than its silicon counterpart yet consumed a tenth of the power. More recently, Intel has used a combination of indium, gallium and arsenic and has referred, tantalisingly, to “very high performing devices”.

While these substances might provide a stop-gap measure, an alternative with the potential for even higher performance – albeit a potential that is by no means guaranteed and much further away – is carbon.

For many years, carbon was known to exist in two forms, namely graphite and diamond. Then, in 1985, Buckminsterfullerene was discovered. This form of carbon has molecules with 60 carbon atoms arranged as a sphere, and many more new forms of carbon have been discovered since. Two that are attracting a great deal of interest are graphene, which comprises sheets of carbon atoms, and a group known as carbon nanotubes, in which the atoms are arranged in cylinders of various sizes. Among their many other uses, both these forms of carbon can be used to make transistors, and ones that operate much more quickly than silicon.

Silicon and other semi-metallic elements have small amounts of impurities added to them – a process known as doping – to give them the semiconductor properties needed for them to act as transistors. Some of these esoteric forms of carbon, on the other hand, are inherently semiconducting, so don’t need doping.

More significantly, though, an electrical current travels more quickly through graphene than through any other known substance. As a result, IBM has demonstrated a 300GHz graphene transistor, and experts believe that both these forms of carbon have the potential to operate at 1THz. As yet such transistors are more suitable for analogue electronic circuits, such as those used in mobile phones, than for digital circuits, but you can bet that researchers will do their utmost to change all that.

Future processors: light beam computing

While non-silicon transistors could provide a performance boost without the difficulties associated with shrinking silicon chips yet further, digital computers don’t have to be electronic at all, and some of the alternatives offer potential gains of their own. For this reason, scientists have long investigated computers that don’t rely on electrical signals. Whenever non-electronic forms of computing are discussed, the optical alternative invariably comes to the fore, but despite many years of research, progress has been slow.

Optical computers have been built, but while fabricating an electronic switch – that is, a transistor – is simplicity itself, creating a practical version of the optical equivalent has proved fiendishly difficult, with efforts hampered mainly by the problems of miniaturisation. Despite this, and while declining to be drawn on when, a scientist from ETH Zurich hasn’t ruled out the possibility of a fully optical computer some time in the future.

Although all-optical computers are proving challenging, other researchers have set their sights on building chips that combine electronics and optics, thereby obtaining the best of both worlds. One of the greatest problems with today’s processors is not so much the processing speed as the speed at which data is transmitted around the chip.

IBM’s work with hybrid processors could offer the best of both worlds – electronics and optics.

The transfer of data back and forth between the processor cores and the cache memories is a particularly notorious bottleneck. Work carried out by IBM has used silicon for the actual processing of the data but optical links for the data pathways. By using wavelength division multiplexing – a technique in which different data streams are carried by different colours of light travelling along the same pathway – much higher data-transfer rates can be achieved in less space and with lower power consumption than with electronics.
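
As a loose software analogy, here’s a toy sketch of the multiplex/demultiplex idea in Python – the wavelengths, bit patterns and traffic labels are all invented for illustration, and real WDM of course happens in optics rather than in code:

```python
# A toy model of wavelength division multiplexing: each data stream is
# tagged with its own wavelength ("colour"), all of them share one
# fibre, and the receiver separates them again. The wavelengths, bits
# and traffic labels are invented; real WDM happens in optics.

streams = {
    1310: [1, 0, 1, 1],   # e.g. core-to-cache traffic on a 1310nm carrier
    1550: [0, 1, 1, 0],   # e.g. core-to-core traffic on a 1550nm carrier
}

# Multiplex: put (wavelength, bit) pairs onto the single shared fibre.
fibre = [(wl, bit) for wl, bits in streams.items() for bit in bits]

# Demultiplex: at the far end, filter each stream out by wavelength.
received = {wl: [bit for w, bit in fibre if w == wl] for wl in streams}

print(received == streams)   # True: both streams recovered intact
```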

Traditionally, optical computer research has been hampered by the problems of miniaturisation. (Photo: Harvard University)

Future processors: from digital to analogue

Having taken a detour from the world of electronics into that of optics, we’ll now return to electronics but with a difference. In virtually all today’s computers, values are stored digitally – in other words, as a sequence of 0s and 1s. These are represented by different voltage levels, say 0V and 1.5V.

So, for example, because the binary equivalent of the decimal number 13 is 00001101, it could be stored digitally in an 8-bit processor as the voltages 0V, 0V, 0V, 0V, 1.5V, 1.5V, 0V and 1.5V, each in a different memory circuit. However, we can also conceive of it being stored as a single voltage in a single circuit: 13 might be represented by 13V, while 12 would appear as 12V and 3 as 3V. This forms the basis of analogue computing.
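
As a rough illustration, this little Python sketch contrasts the two representations of 13 – the 1.5V logic level and the volt-per-unit analogue scale are just the illustrative figures used above, not real hardware values:

```python
# Contrasting the two representations of the decimal number 13.
# The 1.5V logic level and the volt-per-unit analogue scale are
# illustrative choices only, not real hardware values.

value = 13

# Digital: eight separate circuits, each holding one bit as a voltage.
bits = [(value >> i) & 1 for i in reversed(range(8))]
digital = [1.5 if b else 0.0 for b in bits]

# Analogue: one circuit holding the whole value as a single voltage.
analogue = float(value)

print(bits)       # [0, 0, 0, 0, 1, 1, 0, 1]  i.e. binary 00001101
print(digital)    # [0.0, 0.0, 0.0, 0.0, 1.5, 1.5, 0.0, 1.5]
print(analogue)   # 13.0
```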

Analogue computers might have died out in the 70s, but they could just make a comeback for some applications. (Photo: Joe Mabel)

Analogue computing is by no means new – it existed alongside digital computing for many years. Typically, analogue computers had circuits for addition, subtraction and integration, and these were wired together using patch leads to create a circuit capable of solving a particular problem. Because wiring up the circuit was a time-consuming job, later analogue computers used digital computers to do that wiring, so that a program could be loaded from disk in much the same way as with a digital computer.
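
To get a feel for what such a patched-up circuit did, here’s a minimal sketch that digitally simulates the classic set-up of an integrator with its own inverted output fed back to its input, which solves dx/dt = -x. It’s a simulation only – a real analogue machine would compute this continuously rather than step by step:

```python
# Digital simulation of a classic analogue computer patch: an
# integrator whose output is inverted and fed back to its own input,
# solving dx/dt = -x. A real analogue machine computes this
# continuously with op-amp circuits; here we approximate it in small
# time steps. All values are illustrative.

dt = 0.001            # time step for the simulation (seconds)
x = 1.0               # integrator output, initial condition x(0) = 1

for _ in range(5000): # simulate five seconds
    dx_dt = -x        # the inverter: negate the output...
    x += dx_dt * dt   # ...and the integrator accumulates it over time

print(x)              # ~0.0067, close to the exact answer exp(-5)
```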

A major advantage of analogue computing is that the speed of operation doesn’t depend on the complexity of the problem. However, the amount of hardware required does increase with the size of the problem, and this proved to be their downfall as digital computers got faster. The other nail in the coffin was that they aren’t general purpose – while they’re very good at some problems, such as simulation, they are totally incapable of solving other types of problem. Interest in analogue computing hasn’t totally vanished, however, and while nobody seriously expects them to stage a comeback and replace digital computers, digital computers with analogue co-processors, optimised for specific types of task, might just prove attractive.

And then there’s the artificial neural network, a very specific type of analogue computer that mimics the operation of the human brain. That makes it good at tasks like face recognition, with which digital computers still struggle. Even after many years, though, progress has been slow. Digital computers can simulate neural networks, and often do so for pattern-recognition tasks, but because they work sequentially they are much slower than a true artificial neural network, which would operate in parallel.
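
For a flavour of what’s being simulated, here’s a minimal sketch of a single artificial neuron – the weights, inputs and threshold are made-up values, and the point is that a digital computer evaluates such neurons one at a time while a true neural network updates them all at once:

```python
# A single artificial neuron: a weighted sum of inputs passed through
# a simple threshold. The weights, inputs and threshold are made-up
# values. A digital computer evaluates neurons like this one at a
# time; a true analogue network would update them all simultaneously.

def neuron(inputs, weights, threshold=0.5):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

inputs = [0.9, 0.1, 0.4]      # e.g. pixel intensities from an image
weights = [0.8, -0.3, 0.2]    # learned connection strengths

print(neuron(inputs, weights))   # 1, since 0.72 - 0.03 + 0.08 > 0.5
```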

Mimicking the human brain, an artificial neural network could be so much faster than a digital computer for some tasks.

Future processors: biological computing

We talked about neural networks earlier, but what about biological computing? This really is the stuff of science fiction.

An artificial neural network is inspired by biology but implemented using analogue electronics. Other researchers, however, have used the actual building blocks of life: DNA.

All living organisms rely on this phenomenally complicated molecule. It comprises a string of so-called bases; the exact sequence of those bases defines the characteristics of each organism, and its chemical reactions are key to passing on that genetic information during cell division.

Because the sequence of bases can be thought of as ‘data’, and because DNA’s chemical reactions can be thought of as ‘processing’, there is potential for it to be used as a type of processor.

This was demonstrated several years ago with the “travelling salesman” problem, which aims to find a route that starts at one city, ends at another, and visits each city on a list exactly once en route, relying on the availability of transport links between the various cities.

It’s fairly easy to write a program to solve the problem on a digital computer, but the time taken increases dramatically with the number of cities. So, if your PC could come up with an answer for 20 cities in a reasonable time, adding just one more city would multiply the calculation time by a factor of 21, making it unreasonable. Go to 22 cities and the time becomes completely impractical.
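
A quick back-of-the-envelope calculation shows why: the number of possible routes grows factorially, so each extra city multiplies the size of the search. This sketch assumes a generous, invented rate of a billion routes checked per second:

```python
# Why brute force collapses: n cities can be visited in n! orders, so
# each extra city multiplies the search by roughly the city count.
# The billion-routes-per-second checking rate is an assumed figure.
import math

CHECKS_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3600 * 24 * 365

for n in (20, 21, 22):
    routes = math.factorial(n)
    years = routes / CHECKS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{n} cities: {routes:.3g} routes, ~{years:,.0f} years")

# 20 cities: 2.43e+18 routes, ~77 years
# 21 cities: 5.11e+19 routes, ~1,620 years
# 22 cities: 1.12e+21 routes, ~35,642 years
```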

DNA computing provides an excellent solution because a test tube full of DNA could contain millions of molecules, thereby allowing millions of possible routes to be tried out in parallel.
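
In software terms, the DNA approach is essentially “generate and filter”: strands combine into candidate routes at random, and chemical steps then weed out the invalid ones. Here’s a rough Python analogue of that idea – the cities, links and trial count are invented, and where this loop tries candidates one at a time, the chemistry tries them all at once:

```python
# A rough digital analogue of DNA route-finding: generate candidate
# routes at random (in the test tube this happens massively in
# parallel), then filter out the ones that use a non-existent link.
# The cities, links and trial count are invented for illustration.
import random

cities = ["A", "B", "C", "D", "E"]
links = {("A", "B"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")}

def valid(route):
    # Every hop in the route must use an existing transport link.
    return all((a, b) in links or (b, a) in links
               for a, b in zip(route, route[1:]))

solutions = set()
for _ in range(100_000):   # DNA would try all candidates at once
    route = random.sample(cities, len(cities))   # a random ordering
    if route[0] == "A" and route[-1] == "E" and valid(route):
        solutions.add(tuple(route))

print(solutions)   # {('A', 'B', 'C', 'D', 'E')} for this set of links
```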

Strange as using test tubes full of chemicals might be, even stranger is the use of actual biological material. Again, nobody is suggesting this will replace silicon chips anytime soon, but researchers at the University of Florida have used neurons, extracted from rats, to act as an autopilot for a flight simulator. Science fact truly is stranger than science fiction.

DNA molecule

In providing data storage and processing, the DNA molecule can be thought of as a chemical computer.
