With the release of Windows 8, Microsoft has signalled that the future of computing will involve interacting with our devices in ways a keyboard and mouse simply can't deliver. It isn't exactly surprising, as we've been using this hardware for the best part of three decades, but as we enter this new era the ideas being put forward by innovative designers are surprising, revolutionary, and even - to borrow from the big book of Apple adjectives - magical.

In the early days of computing the only way to load programs, or do much of anything at all, was to type in commands via a keyboard. In fact the Altair 8800, one of the very first home computers, didn't even have a keyboard. Instead the user entered commands by toggling switches, and the reward for their efforts was two rows of lights blinking in sequential patterns. It's a wonder computers ever caught on at all.

Researchers at the now legendary Xerox PARC facility in California knew that things could be better than this and set about designing the WIMP system, which incorporated a graphical user interface (GUI) featuring Windows, Icons, Menus, and a Pointing device - hence the name. This remained hidden away until Steve Jobs negotiated a trip to the facility, witnessed the technology, and immediately set about replicating and refining it for the mass market. After the release of the hilariously priced Apple Lisa in 1983, the more sensible Macintosh in 1984, and Windows 1.0 in 1985, the landscape was forever changed and the graphical user interface became the norm, which it has remained until now.

Smartphones and tablets have recently opened the doorway to the possibilities of touch- and speech-controlled interfaces, while games systems such as the Nintendo Wii and Microsoft Kinect for Xbox have released gamers from their sofas and gamepads, demonstrating the possibilities of using gestures and movement to interact with their devices. Now the gloves are off and developers are showcasing, and even manufacturing, systems that only a few years ago would have been the babblings of madmen. The future is here and there’s nary a keyboard in sight.

Future of computer control: Gesture control

Games consoles

If there's one new interface that we're already very familiar with, it's motion control. When the Wii was released in 2006 the idea of playing a computer game standing up was bizarre, and multiplayer experiences were generally limited to Xbox Live or sitting shoulder to shoulder with a gamepad-wielding friend. The chances of a yoga game being released were negligible. But Nintendo's smart little white box tapped into something that previous games consoles had failed to address in the same way: the simple fact that games should be fun and the interface intuitive.

Kinect Adventures Xbox 360

The elegant control system immediately made sense to most users, mainly due to the fact that the on-screen representations of their actions were things that they already knew how to do. Wii Sports included a tennis game that you played as if holding an actual tennis racket, golf required you to swing the controller like a club, and boxing was exhausting and potentially dangerous if an unsuspecting family member walked in front of you during a frantic bout.

This removal of classic barriers, such as multi-buttoned control pads that demanded high levels of accuracy with small joysticks, meant that people who had never considered gaming a viable or fun pastime flocked to the Wii in huge numbers. Wii parties became a common event, and the sales just continued to climb.

The console was an unadulterated success, going on to sell over 96 million units, making it the most successful system in Nintendo's illustrious history and showing just what could be achieved when an interface is designed to respond directly to existing human behaviours rather than requiring new ones to be learned.

Both Sony and Microsoft quickly responded with their own peripherals that emulated the Wii motion controller. The Sony Move looked a bit like a deactivated lightsaber, or a marital aid depending on your viewpoint, and received much critical acclaim. Sales were less impressive though, and Sony recently admitted that the device had failed to live up to its expectations.

PlayStation Move

Microsoft fared better with its Kinect interface which, after selling over eight million units in the first sixty days of its release, entered the Guinness Book of Records as the fastest-selling consumer electronics device of all time. That's despite a high price that approached the cost of the Xbox 360 itself.

The advertising slogan that stated ‘You are the controller’ highlighted one of the main differences between the Kinect and its competitors. With the Wii most of the tracking is done via accelerometers within the controller itself, which let the base unit know its orientation, its distance from the screen, and the speed at which it’s moving. This is fine for most applications, but it still requires batteries to be charged and buttons to be pressed, and it arms players with a solid, offensive weapon - which has led to an unsurprising rise in Wii-related injuries. The Kinect is different, and that difference could have implications for how we use our computers in years to come.

Xbox 360 Kinect

The Kinect unit is fitted with an RGB camera, depth sensor, and a multi-array microphone. This allows the device to see the user, track their movement, gauge their distance, and even process voice commands without the need for a separate controller.

For gaming, this is revolutionary, since the player becomes part of the game. In the many fitness titles available, your body is shown on screen as you attempt the various routines, enabling you to correct mistakes and genuinely improve your dancing or martial arts prowess, rather than rely on the cheating techniques that waving a Wii controller would have allowed.

The camera also acts as a video calling interface, and the speech controls are integrated system-wide, so you can browse the web via Bing, access the various apps, and even shut the console down using only your voice. It was only a matter of time, then, before something this useful made its way to the desktop environment, especially one where gestures and touch controls are coming to the fore.

Kinect comes to Windows

In June 2011 Microsoft released a Kinect for Windows development kit which allowed developers to tailor the device for innovative projects with Windows 7. Of course, various clever hackers had already found ways to manipulate the device, but between them all the world got a glimpse of the future of digital interaction.

One group of students at MIT created a JavaScript program that allowed users to navigate websites using gestures alone (which worked particularly impressively with 3D maps and drawing programs); programmer Oliver Kreylos hacked his Kinect so that it would create 3D models from the images it captured; a German research lab built a portable guidance system for blind people; and another group of MIT students created a tiny helicopter that used the Kinect sensor to avoid crashing into obstacles.
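None of these projects published their code in a canonical form, but the core idea behind the gesture-browsing hack is simple enough to sketch. The Python snippet below is purely illustrative - the hand-tracking feed is stubbed out rather than coming from a real Kinect, and the helper names are invented - but it shows how vertical hand movement might be mapped to page scrolling, with a dead zone to filter out jitter.

```python
# Illustrative sketch only: maps vertical hand movement to page scrolling.
# The tracking feed and scroll output are stand-ins, not real Kinect or browser APIs.
import random
import time

DEAD_ZONE = 0.05      # metres of hand movement to ignore (filters jitter)
SCROLL_GAIN = 1500    # pixels of scroll per metre of vertical hand travel

def get_hand_y():
    """Stand-in for a depth-camera skeleton feed (hand height in metres)."""
    return 0.5 + random.uniform(-0.1, 0.1)

def scroll_page(pixels):
    """Stand-in for sending a scroll event to the browser."""
    print(f"scroll by {pixels:+d}px")

def run(frames=50):
    previous_y = get_hand_y()
    for _ in range(frames):
        y = get_hand_y()
        delta = y - previous_y
        if abs(delta) > DEAD_ZONE:                   # ignore small, jittery movements
            scroll_page(int(-delta * SCROLL_GAIN))   # hand up -> negative value -> scroll towards the top
            previous_y = y
        time.sleep(1 / 30)                           # roughly the Kinect's 30fps frame rate

if __name__ == "__main__":
    run()
```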

More practical uses included a shopping trolley that followed you around a store and allowed you to scan items as you placed them in the basket. The upshot was that once you had finished browsing you simply entered your payment information, the items were charged, and you could leave the store without ever visiting a till.

Medical professionals were also quick to see the potential of hands-free control, with a group of surgeons at St Thomas’ Hospital in London trialling the Kinect to manipulate 3D images of a patient’s aorta during an operation. The BBC reported that John Brennan, president of the British Society for Endovascular Therapy, stated: ‘I would find it difficult to think of operating rooms in ten or fifteen years' time where these were just not the norm’.

Microsoft has been open about its desire to see the technology deployed in interesting and unusual ways by developers and has continued to release updates to the developer kits. The company also stated on its Kinect for Windows blog that these updates ‘will include support for Windows 8 desktop applications’, suggesting that maybe those with non-touchscreen PCs might not need to upgrade their entire hardware to run the latest offering of Windows. It’s still relatively early days for the Kinect, but already it seems to be marking a path towards the future.

If there’s one criticism of the device it’s that it lacks fine motion control, with many of the gestures needing to be slightly exaggerated and performed well away from the screen. There are already lens covers, such as the Nyko Zoom, which aim to reduce the amount of space needed between you and the sensor for it to work correctly, but reports are mixed on how successful they are at accomplishing this.


Leap into the future: Leap Motion

One company that seems to have a viable solution, though, is Leap Motion, which is set to release its own motion sensor, the Leap - a device that has already been impressing the various tech blogs and publications with its incredible accuracy and diminutive size.

‘It’s based around a set of technologies developed by my co-founder David Holz while he was getting his math PhD,’ explains Leap Motion CEO Michael Buckwald. ‘Essentially it uses an entirely new approach to motion sensing that has never been used before in academia or commercially. Accuracy is generally around one hundredth of a millimetre, which is many times more accurate than existing approaches, also much more responsive. We have almost no detectable latency, whereas most approaches have very notable latency. One of the most exciting things is that it can be put in a very small form factor, fairly inexpensively.’

This isn’t hyperbole: the Leap device is about the same size as an iPod Nano and will cost $70 when it’s released at the beginning of 2013. Leap Motion assert that the device will be able to control the Windows 8 modern interface through simple movements of the fingers rather than needing you to wave your hands.

Leap Motion

‘If you think about something like Windows 8,’ says Buckwald, ‘it really is hard to use if you don’t have a touchscreen. This is a great way to control something like that. Users can sit back comfortably in their chair and pinch to zoom and scroll, do anything they can do with a touchscreen but in a more responsive way.

‘The goal is to create an interaction bubble around a user that’s seated in front of a computer, but within that environment the device can be placed anywhere. In the future we will build long range versions but we think that pretty much everything other than TV control is a short range use case and since the technology was built to do complicated things we’re more interested in seeing it used for those rather than flipping up and down through a list of channels.’
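Leap Motion’s own SDK isn’t quoted here, so the snippet below is only a rough illustration of the pinch-to-zoom idea Buckwald describes: it fakes a stream of thumb and index fingertip positions and turns the distance between them into a zoom factor. The frame data and function names are invented for the example.

```python
# Illustrative sketch of pinch-to-zoom: finger separation drives the zoom level.
# This does not use the real Leap Motion SDK; the frame data below is faked.
import math

def finger_distance(f1, f2):
    """Euclidean distance between two fingertip positions (x, y, z in mm)."""
    return math.dist(f1, f2)

def zoom_factor(start_distance, current_distance):
    """Spreading the fingers apart zooms in; pinching them together zooms out."""
    return current_distance / start_distance

# Fake frames: thumb and index fingertip positions moving apart over time.
frames = [
    ((0, 0, 0), (20, 0, 0)),
    ((0, 0, 0), (35, 0, 0)),
    ((0, 0, 0), (60, 0, 0)),
]

start = finger_distance(*frames[0])
for thumb, index in frames:
    print(f"zoom x{zoom_factor(start, finger_distance(thumb, index)):.2f}")
```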

Leap Motion drawing on screen

The technological landscape is already scattered with the dormant corpses of devices which promised much but never found support from software developers. This is a scenario Leap Motion are keen to avoid, and so far their strategy looks solid.

‘There are around 42,000 developers that have applied to build content on top of the Leap platform,’ Buckwald reveals, ‘and we want those developers to have as big an audience as possible. Thousands already have units and are building amazing things. There will be everything from casual games like Angry Birds to more serious games like first-person shooters, to music and video editing tools, scientific visualisation, and engineering. The goal really is to have a diverse marketplace with a lot of different things catering to different people. Over time we’re going to start working with OEMs to integrate the technology into more products. We’re working to integrate this into laptops, also smartphones and tablets, eventually even things like cars and planes.’

Gaze Tracking

When it comes to fine motion it’s hard to beat the minute movements that our eyes make on a constant basis. Swedish company Tobii has been developing a technology that it calls Gaze Tracking, which uses infrared cameras to track the eyes of a user and execute commands.

At the moment Tobii only sells into the enterprise and research markets, but the company has announced that it intends to extend that to mainstream computers and tablets in the coming year or so.

To illustrate the potential, Tobii released a video demonstration in 2012 of its Gaze Tracking system controlling a Windows 8 desktop. Tiles and links were selected simply by staring at them, while the company contends that web pages will scroll as you read down the page, and even the screensaver can be set so that it only activates once you look away from the monitor for a certain amount of time. The best part of all is that the user needs to wear no special glasses, as all the data is gathered by the cameras.
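Tobii hasn’t published the selection logic used in that demo, but a common approach in gaze interfaces is ‘dwell selection’: if the gaze point lingers within a small radius of one spot for long enough, it is treated as a click. The sketch below, with simulated gaze samples, shows the basic idea; the timings and radius are illustrative rather than Tobii’s actual values.

```python
# Minimal sketch of dwell selection: a lingering gaze point becomes a click.
# Gaze samples are simulated; this is not Tobii's algorithm.
import math

DWELL_TIME = 0.8   # seconds the gaze must linger to register a click
RADIUS = 40        # pixels of wander allowed around the dwell point
SAMPLE_DT = 0.05   # 20Hz gaze samples

def detect_dwells(samples):
    anchor, held = None, 0.0
    for x, y in samples:
        if anchor and math.hypot(x - anchor[0], y - anchor[1]) <= RADIUS:
            held += SAMPLE_DT
            if held >= DWELL_TIME:
                yield anchor                 # gaze has lingered: fire a click here
                anchor, held = None, 0.0
        else:
            anchor, held = (x, y), 0.0       # gaze moved away: restart the timer

# Simulated gaze drifting onto a tile at (300, 200) and staying there.
gaze = [(100, 100), (180, 140)] + [(300 + i % 5, 200) for i in range(30)]
for click in detect_dwells(gaze):
    print(f"click at {click}")
```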


Future of computer control: Voice control

The last couple of years have seen a huge emphasis placed on voice control interfaces. Thanks to the Kinect it’s now possible to control elements of your television or gaming experience by talking to the device, and Windows 8 integration now looks a distinct possibility in the near future. The place where voice is most prominent at the moment, though, is mobile devices.

Siri, Google Now and S-Voice

When Apple released the iPhone 4S in 2011 it incorporated a digital assistant called Siri, which was controlled by speech.

The whole focus of Apple’s advertising campaign for the flagship phone centred on the new interface, bringing in celebrities such as Samuel L Jackson, Martin Scorsese, Zooey Deschanel, and John Malkovich to have on-screen conversations with the system. One of the main differences Siri offered was a kind of personality that had never been seen before in mass-market products.

Siri on iPhone 4S

The programmers cleverly added answers to questions such as ‘What are you wearing?’, ‘When is your birthday?’ and even ‘Who is God?’, which gave the software the appearance of humour, while also spawning websites such as ‘Sh*t Siri Says’ which collected the various answers to odd questions users posed.

Google has its own voice-controlled search apps and the new Now feature that's part of the 4.2 Jelly Bean version of Android. Google Now aims to do more than simply provide information when asked; instead it builds a profile of the user over time to offer suggestions or data that it thinks will be useful. Samsung has also developed the S-Voice system, which features on its flagship phones.

There’s no doubt that as these technologies mature they will become a central part of our interaction with devices, but they still have a fair way to go in terms of accuracy. Siri can be a frustrating system to use, especially if you have a heavy accent or even a cold. The anguish it can induce is wonderfully highlighted in the ‘not safe for work’ YouTube video ‘Apple Scotland - iPhone commercial for Siri’, which features a Scotsman trying in vain to ask Siri for basic information.

Nuance Dragon Dictate

The main challenge for voice-deciphering code is that it has to contend with many different factors while interpreting a user’s input. One company that has been working to overcome these challenges on the desktop is Nuance, whose Dragon Dictate software is one of the most advanced in the industry.

Guy speaking to iPhone with Dragon Dictate

‘Speech recognition is an extraordinarily hard computational problem,’ explains Nuance’s Neil Grant. ‘Effectively you’ve got an astronomical search space. An example would be if you had a seventeen word phrase - which is an average length sentence - within a fifty thousand word vocabulary. It’s the equivalent of finding the correct phrase out of 7.6x10^79 possibilities. Roughly the amount of atoms in the observable universe. Now to put that into context, when Google does a search to find a webpage for you, it’s searching somewhere around 1x10^12 web pages, so significantly less.

‘If you’re typing something on a keyboard it’s very simple, it’s binary - you either hit the keystroke or you don’t. With speech there’s far more variability in terms of accents, tonality, environmental conditions, background noise, and microphone quality. One of the ways we tighten that with the desktop speech recognition is that a user has a profile attached to them so the computer understands the nuances of the way they speak. The software can apply this data to achieve higher levels of accuracy, and the more you use it and make corrections, the more it learns and then applies those learnings to your profile.’
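Grant’s figures are easy to verify: treating every one of the seventeen word slots as an independent choice from a fifty-thousand-word vocabulary gives 50,000 to the power of 17 candidate phrases, which is the simplification behind the numbers he quotes.

```python
# Quick check of the search-space arithmetic Grant quotes above.
vocabulary = 50_000
sentence_length = 17

phrases = vocabulary ** sentence_length
print(f"{phrases:.1e} possible seventeen-word phrases")        # ~7.6e+79

web_pages = 1e12   # rough size of the web index Grant mentions
print(f"{phrases / web_pages:.1e} times more candidates than web pages")
```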

This dedicated usage is a significant factor that gives Nuance software its famed levels of accuracy. It also highlights one of the challenges ahead for the mobile software that many of us currently use.

‘Something like Siri is effectively speaker-independent speech recognition,’ says Neil. ‘That means it’s not training a profile for you, certainly not in any great depth. You might use it on your phone then another family member might use it, so it’s dealing with potentially multiple speakers from the same device. It’s a much harder process and means it can’t set itself up in advance for a particular accent.’

Advances in noise-cancelling microphones and the continued refinement of voice control software are driving rapid improvements in all areas of the technology. Nuance itself now offers iPad and iPhone versions of its software, and the continued updates to Siri and Google Voice Search will no doubt push the technology even further in the years ahead.

Laptops and voice control

Manufacturers are also beginning to incorporate the technology into newer laptops in response to the ever-encroaching influence of tablets.

‘One of the key specifications set by Intel on the new Ultrabooks is embedded speech recognition,’ says Neil. ‘So this is something that is absolutely coming through, and what we will see is speech on these devices becoming more and more ubiquitous.’

One of the eye-catching elements of Siri that Apple aggressively markets is the system-wide integration of commands. Rather than being a stand-alone app, Siri is able to control calendar entries, send emails, tweets and Facebook updates, and play specific music to you, all from the same interface. For voice control to really make an impact on everyday computers it needs to offer a similar level of depth. Thankfully, Windows offers developers the chance to do exactly that.

‘With Windows we can get very, very deep,’ Neil continues. ‘There’s a certain amount of integration built into our Windows version, not only dictation capabilities but real command and control of applications like Microsoft Office. For example, a chap called Stuart Mangan, a rugby player, was involved in a tackle and broke his neck, leaving him paralysed from the neck down. We effectively voice-enabled his entire PC, to give him not only his email and documents but, through Nokia PC Suite, he was able to send text messages and make phone calls. He came back to us saying that we’d given him his independence and privacy back.’

Woman dictating to Dragon Dictate

The concept of voice control has been a staple of science fiction for decades, and the representation of conversational computers such as HAL in 2001: A Space Odyssey, or even Holly from Red Dwarf, has been a constant reminder of the convenience and ease with which such an interface could work - so long as the computer in question will acquiesce to opening the pod bay doors when you ask. There’s no doubt that this kind of interface is now more of a possibility, but as the way we interact with our technology changes, what impact will this have on the systems of the future?

‘A mouse and a keyboard are not a natural way of interfacing with something,’ states Neil. ‘They’re a solution to a problem, and they’ve been a very successful solution, but the keyboard layout was designed to slow us down. Stephen Fry came out with a very good quote a couple of years ago where he stated it took less time to get your private pilot's licence than it did to learn to type at sixty words per minute.

‘So we’ve got these interfaces we’re stuck with at the moment - the keyboard and the mouse - which are fine for certain things, but for others there are certainly improvements that can be made. You’re starting to see prototypes coming through, the Google Glass project for one, looking at ultra-mobility - wearable computing - and there is a necessity to change the interface. As your devices become more and more mobile you’re not going to be able to carry a keyboard around. Obviously voice is the natural step for that.’


Future of computer control: Wearable computing

When Google launched the developer prototypes of its Glass project at the 2012 Google I/O conference, it did so in style. ‘We have something special for you,’ announced co-founder Sergey Brin, before switching the feed on the main screen to a remote camera onboard a helicopter that was circling the conference building. Only this camera was a little different, as it was built into the glasses that one of the passengers was wearing.

In fact four different occupants wore the new apparel, and the reason why became apparent when they subsequently threw themselves out of the open doors and plummeted towards the earth. The conference audience was amazed as these skydivers sent live feeds back to the screen, all from tiny cameras fitted into unobtrusive glasses.

The rest of the demonstration involved bike stunts on the roof, a man rappelling down the outside of the building, and finally a BMX biker riding through the lobby, up onto the stage and delivering the product that looks set to be one of Google’s most impressive ventures yet.

The camera was a good way to illustrate the viewpoint of a user, but the Google Glass project is more than just a novelty. In the demonstration video that followed, Google revealed its vision for the future.

Google Project Glass

When wearing the glasses, information is displayed as a form of HUD (head-up display) on the inside of the lens. A user is able to receive emails, which they can activate and read by looking at the icon - in a similar way to Tobii’s gaze tracking - and reply to via voice command, without the need for a mobile phone or computer screen. The project also demonstrated ideas for augmented reality: the user went to a train station and was notified of delays, so the glasses displayed a Google Maps route to the destination and proceeded to guide them via turn-by-turn navigation. Other features shown included taking photos, posting to social networks, and even receiving phone calls.

The project has been met with scepticism in some quarters as to how usable it will be, but with the backing of a company like Google there’s every possibility that some form of interactive eyewear could become a reality in the near future. Developer examples have already been issued and the expectation is that consumer models could be available by 2014. They won’t be cheap though - the developer kits cost $1500 - so if you want to be sporting the latest in wearable tech, you’d better start saving now.


Future of computer control: Mind control

The concepts of gesture and voice control are reasonable evolutions of existing technology that we know and understand. After all, most laptops and mobile devices already have cameras and microphones.

If there’s one area of research that’s harder to comprehend than any other, though, it has to be mind control - using the power of your brain alone to manipulate and control the data and instructions that a computer receives. The complexities of such a system are formidable, but the potential benefits are immense and potentially life-changing. It’s all the more remarkable, then, that one of the foremost exponents of practical mind control interfaces saw fit to place their high-tech wizardry into a rather unusual device. A skateboard.

Board of Awesomeness

Chaotic Moon is a Texas-based company that has already caused quite a stir in the three years since it was formed in 2010. It was responsible for the Kinect-powered shopping trolley, and followed that up with a Kinect-controlled skateboard which it dubbed the Board of Awesomeness. The next project showcased a simple but effective use of mind control principles: the designers took a headset built by a company called Emotiv - which specialises in brain/machine interfaces - attached it to a Windows 8 tablet, and bolted them both to the skateboard.

Chaotic Moon skateboard

‘We started with very simple things like getting it to move,’ explains Whurley, Chaotic Moon’s general manager, ‘then trying to get it to move and stop. It was really interesting, because all we did was replace the Kinect with the USB key that talks wirelessly to the Emotiv headset. It was pretty simple as far as physical configuration, then fairly complex, cumbersome and trial-and-error with the software.’

The basic principles of the technology are relatively simple. When we think about something in particular our brain creates patterns of electrical activity. These patterns can be recorded using headsets such as the ones Emotiv manufactures, and then these patterns are translated into recognisable commands for a computer system to execute. Like speech, though, there are still issues of compatibility.

‘The reason is your brain has folds in it,’ says Whurley. ‘Yours are different from mine and the electrical patterns are different to mine. It’s not a magic technology where I can put it on anybody and get the exact same results all the time. What we did was literally, over hundreds of times, test different people doing different stuff, and we came up with a way we could get it to work for 95 percent of people. With the simple commands, not the complex commands. Things like moving forward, forward faster, slowing down, and stopping. Those were the four basic ones that we tried to get to work, and homogenise if you will, so that across everybody's brainwaves it would work. I will tell you... it is unreal how many people go absolutely bananas. They love it. People are just blown away. It’s this moment of magic and sorcery which is kind of awesome.’
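Chaotic Moon hasn’t released the board’s code, but the shape of the solution Whurley describes - a small set of commands triggered at different signal strengths - can be sketched in a few lines. Everything below is hypothetical: the thresholds and the read_intensity() feed stand in for the headset’s trained output, and Emotiv’s real SDK and Chaotic Moon’s calibration are far more involved than this.

```python
# Hypothetical mapping of a headset's 'push forward' signal strength to the
# four skateboard commands Whurley describes. Thresholds and input are invented.
import random

def read_intensity():
    """Stand-in for a trained 'move forward' signal from the headset (0.0-1.0)."""
    return random.random()

def command_for(intensity):
    if intensity > 0.75:
        return "FASTER"
    if intensity > 0.50:
        return "FORWARD"
    if intensity > 0.25:
        return "SLOW"
    return "STOP"

for _ in range(5):
    level = read_intensity()
    print(f"{level:.2f} -> {command_for(level)}")
```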

Controlling something with the mind is still such a new way for humans to interact with machines that it can be hard to switch off our thoughts, something which can have unexpected results. An example of this can be seen when CNET reporter Molly Wood, while testing the Board of Imagination, nearly crashed it into a wall even though she was no longer riding it.

Chaotic Moon skateboard

‘Yes, yes she did!’ Whurley exclaims. ‘The thing you’ll notice on that video is that the skateboard kept going, and the reason it did is that Molly was thinking about moving because she was chasing after it. What she didn’t understand is that by doing that she was actually driving it further and further, faster away from us. That’s why on the video you see me say "stop thinking!"’

Urban sports aside, the potential of technologies that require only an uplink to a headset yet offer the possibility of complex operation is not something Chaotic Moon dismiss.

‘There are implications for people in wheelchairs,’ Whurley considers. ‘There are implications for people with disabilities of all kinds. In addition to that there’s repetitive tasks, controlling automation. So, for example, controlling brain/computer interfaces as part of a robotics control system in manufacturing or hazardous areas, and things like that. So there’s a lot of different areas you could take this and that’s what we try to do.’


Future of computer control: Exoskeletons

The eventual goal of a mind control system would arguably be one where human and machine form some kind of symbiosis. It’s a long way from opening your email just by thinking about it to a computer-controlled exoskeleton that would empower a paraplegic person to walk again. But this very idea is one that Dr Miguel Nicolelis is trying to make a reality, and he has a notable deadline. At the opening ceremony of the 2014 World Cup in Brazil a young adult paraplegic will, if things go to plan, take several steps and kick a football thanks to a robotic suit which he or she will wear and control via a thought control interface. It promises to be a wholly remarkable sight, the significance of which will overshadow any of the football that follows.

‘The idea of having a demonstration at the opening ceremony of the World Cup,’ Dr Nicolelis stated in a recent interview, ‘was basically generated by our desire to speed up the process of bringing this technology to clinical applications. I think showcasing the potential of those few steps in a prototype way is literally the kick-off of this field’.

In his book Beyond Boundaries, Dr Nicolelis charts the development of this area of neuroscience and how the future could look very different if the theory becomes a reality. The current research he and his colleagues at the ‘Walk-Again Project’ are conducting involves, in very simplistic terms, implanting micro-electrode arrays into the brain itself to measure precise brain activity, accompanied by implanted microchips or ‘neurochips’. The signals are then processed and wirelessly sent to a BMI (brain-machine interface), which in turn translates the thoughts into commands that then power the robotic neuroprosthesis.
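The project’s decoding is vastly more sophisticated, but the ‘translation’ step of a brain-machine interface is often illustrated with a simple linear decoder: a weighted sum of the firing rates recorded from each neuron estimates the intended movement. The toy example below uses made-up rates and weights purely to show the principle, not the Walk-Again Project’s actual method.

```python
# Toy illustration of the 'translation' step in a brain-machine interface:
# a linear decoder turns neural firing rates into an intended movement velocity.
# The rates and weights below are invented for illustration only.
firing_rates = [12.0, 3.5, 8.2, 0.4]     # spikes/sec from four recorded neurons

# One weight per neuron per output axis, learned during a calibration session.
weights_x = [0.02, -0.01, 0.03, 0.00]
weights_y = [-0.01, 0.04, 0.01, 0.02]

def decode(rates, weights):
    return sum(r * w for r, w in zip(rates, weights))

vx, vy = decode(firing_rates, weights_x), decode(firing_rates, weights_y)
print(f"intended velocity: ({vx:.2f}, {vy:.2f}) m/s")   # passed on to the prosthesis
```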

It sounds incredible, but the scientists remain confident that the technology will be ready for their big day and a potential audience of hundreds of millions of people. In the short term (which in scientific terms means the next decade or so) the technology will be focused on helping those with paralysis, Parkinson’s disease, and other neurological disorders. But as the technology becomes accepted and costs begin to decline, Dr Nicolelis sees more mainstream applications becoming viable.

‘When we improve our ways to read brain activity with non-invasive technology,’ he concludes, ‘so technology does not require, like we do today, these small implants on the brain to read electrical signals from populations of brain cells. When we get to that level we truly will be able to liberate the brain from the physical limits of our bodies. We will be able to communicate in different ways, we will be able to control devices just by thinking. The times in which we will have to exert force or exert our own movements into the world to control devices probably will be gone.’

In a relatively short space of time our relationship with computers has gone from huge water-cooled mainframes that required specialist operators to far more powerful devices we carry in our pockets. The way we use our devices, and our expectations of them, are now beginning to alter their design, with newer and more powerful interfaces evolving to delight and surprise us. But we have only scratched the surface. For some of us it might seem almost impossible to imagine a computer with no keyboard or physical means of control. A few years from now it might be impossible to imagine that we ever needed them at all.