What's the future for the keyboard and mouse?

We investigate motion tracking, voice recognition and even mind control gadgets

With the release of Windows 8, Microsoft has signalled that the future of computing will involve interacting with our devices in ways that a keyboard and mouse simply can't deliver. This isn't exactly surprising, given that we've been using this hardware for the best part of three decades, but as we enter this new era the ideas being put forward by innovative designers are surprising, revolutionary, and even - borrowing from the big book of Apple adjectives - magical.

In the early days of computing the only way to load programs, or do basically anything at all, was to type in commands via a keyboard. In fact the Altair 8800, one of the very first home computers, didn't even have a keyboard. Instead the user entered commands by toggling various switches, and the reward for their efforts was two rows of lights blinking in sequential patterns. It's a wonder computers ever caught on at all.

Researchers at the now legendary Xerox PARC facility in California knew that things could be better than this and set about designing the WIMP system, which incorporated a graphical user interface (GUI) featuring Windows, Icons, Menus, and a Pointing device - hence the name. This remained hidden away until Steve Jobs negotiated a trip to the facility, witnessed the technology, and immediately set about replicating and refining it for the mass market. After the release of the hilariously priced Apple Lisa in 1983, the more sensible Macintosh in 1984, and Windows 1.0 in 1985, the landscape was forever changed and the graphical user interface became the norm, which it has remained until now.

Smartphones and tablets have recently opened the doorway to the possibilities of touch- and speech-controlled interfaces, while games systems such as the Nintendo Wii, and peripherals such as Microsoft's Kinect for Xbox, have released gamers from their sofas and gamepads, demonstrating how gestures and movement can be used to interact with our devices. Now the gloves are off and developers are showcasing, and even manufacturing, systems that only a few years ago would have been the babblings of madmen. The future is here and there's nary a keyboard in sight.

Future of computer control: Gesture Control

Games consoles

If there’s one new interface that we’re already very familiar with, it’s motion control. When the Wii was released in 2006 the idea of playing a computer game standing up was bizarre; multiplayer experiences were generally limited to Xbox Live or sitting shoulder to shoulder with a gamepad-wielding friend, and the chances of a yoga game being released were negligible. But Nintendo’s smart little white box tapped into something that previous games consoles had failed to address in the same way: the simple fact that games should be fun and the interface intuitive.

Kinect Adventures Xbox 360

The elegant control system immediately made sense to most users, mainly due to the fact that the on-screen representations of their actions were things that they already knew how to do. Wii Sports included a tennis game that you played as if holding an actual tennis racket, golf required you to swing the controller like a club, and boxing was exhausting and potentially dangerous if an unsuspecting family member walked in front of you during a frantic bout.

Removing classic barriers such as multi-buttoned control pads that demanded high levels of accuracy with small joysticks meant that people who had never considered gaming a viable or fun pastime flocked to the Wii in huge numbers. Wii parties became a common event, and the sales just continued to climb.

The console was an unadulterated success, going on to sell over 96 million units, making it the most successful system in Nintendo’s illustrious history and showing just what can be achieved when an interface is designed to respond directly to existing human behaviours rather than requiring the learning of new ones.

Both Sony and Microsoft quickly responded with their own peripherals that emulated the Wii motion controller. The Sony Move looked a bit like a deactivated lightsaber or, depending on your viewpoint, a marital aid, and received much critical acclaim. Sales were less impressive though, and Sony recently admitted that the device had failed to live up to its expectations.

PlayStation Move

Microsoft fared better with its Kinect interface which, after selling over eight million units in its first sixty days on sale, entered the Guinness Book of Records as the fastest-selling consumer electronics device of all time. That's despite a price that approached the cost of the Xbox 360 itself.

The advertising slogan ‘You are the controller’ highlighted one of the main differences between the Kinect and its competitors. With the Wii, most of the tracking is done via accelerometers within the controller itself, which let the base unit know its orientation, its distance from the screen and the speed at which it’s moving. This is fine for most applications, but it still requires batteries to be charged and buttons to be pressed, and it arms players with a solid, offensive weapon, which has led to an unsurprising rise in Wii-related injuries. The Kinect is different, and that difference has implications for how we use our computers in years to come.
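
To make the accelerometer approach concrete, here is a minimal sketch - illustrative only, and not Nintendo's actual algorithm - of how a stream of three-axis readings can be turned into detected 'swings'. A controller at rest reads roughly 1g, so anything well above that is treated as a deliberate movement; the threshold and sample trace are assumptions.

```python
import math

# Hypothetical three-axis accelerometer samples, in g, as a motion controller
# might stream them. The threshold and trace below are illustrative only.
GRAVITY = 1.0          # a controller at rest reads roughly 1 g
SWING_THRESHOLD = 2.5  # readings well above gravity suggest a deliberate swing

def is_swing(sample):
    """Return True if a single (x, y, z) reading looks like part of a swing."""
    x, y, z = sample
    magnitude = math.sqrt(x * x + y * y + z * z)
    return magnitude - GRAVITY > SWING_THRESHOLD

def count_swings(samples):
    """Count rising edges where the controller goes from still to swinging."""
    swings, was_swinging = 0, False
    for sample in samples:
        swinging = is_swing(sample)
        if swinging and not was_swinging:
            swings += 1
        was_swinging = swinging
    return swings

# A short synthetic trace: still, a hard forehand, then still again.
trace = [(0.0, 1.0, 0.1), (0.2, 1.1, 0.0),
         (3.9, 2.2, 1.5), (4.1, 2.0, 1.2),
         (0.1, 1.0, 0.0)]
print(count_swings(trace))  # -> 1
```

Note that this only tells the console how hard the controller is being waved, not where the player is - which is exactly the gap the Kinect's camera-based approach fills.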

Xbox 360 Kinect

The Kinect unit is fitted with an RGB camera, depth sensor, and a multi-array microphone. This allows the device to see the user, track their movement, range their distance, and even process voice commands without the need for a separate controller.
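
To illustrate what a depth sensor makes possible, here is a minimal, hypothetical sketch - not the Kinect SDK's real API - that treats the closest point in a depth frame as the user's outstretched hand and maps it to an on-screen cursor position. The frame and screen dimensions are assumptions, and the frame itself is synthetic.

```python
import numpy as np

# Controller-free pointing from a depth frame (millimetres per pixel, as a
# Kinect-style sensor might produce). All names and sizes are illustrative.
FRAME_W, FRAME_H = 640, 480
SCREEN_W, SCREEN_H = 1920, 1080

def hand_to_cursor(depth_frame):
    """Map the nearest valid depth pixel to screen coordinates."""
    # Treat zero readings as 'no data' by pushing them to the maximum depth.
    valid = np.where(depth_frame > 0, depth_frame, np.iinfo(depth_frame.dtype).max)
    row, col = np.unravel_index(np.argmin(valid), valid.shape)
    # Scale sensor coordinates to screen coordinates.
    x = int(col / FRAME_W * SCREEN_W)
    y = int(row / FRAME_H * SCREEN_H)
    return x, y

# Synthetic frame: background about 2.5m away, with a 'hand' at about 0.8m
# near the centre of the sensor's view.
frame = np.full((FRAME_H, FRAME_W), 2500, dtype=np.uint16)
frame[230:250, 310:330] = 800
print(hand_to_cursor(frame))  # roughly the centre of a 1920x1080 screen
```

A real system such as the Kinect goes much further, fitting a full skeleton to the depth data and smoothing it over time, but the principle of ranging the user without any handheld hardware is the same.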

For gaming, this is revolutionary, since the player becomes part of the game. In the many fitness titles available, your body is shown on screen as you attempt the various routines, enabling you to correct mistakes and genuinely improve your dancing or martial-arts prowess, while ruling out the cheating techniques that simply waving a Wii controller would have allowed.

The camera also acts as a video-calling interface, and the speech controls are integrated system-wide, so you can browse the web via Bing, access the various apps and even shut the console down using just your voice. It was only a matter of time, then, before something this useful made its way to the desktop environment, especially one where gestures and touch controls are coming to the fore.

Kinect comes to Windows

In June 2011 Microsoft released a Kinect for Windows software development kit, which allowed developers to tailor the device for innovative projects on Windows 7. Of course, various clever hackers had already found ways to manipulate the device, but between them all the world got a glimpse of the future of digital interaction.

One group of students at MIT created a JavaScript program that allowed the user to navigate websites using gestures alone (which worked particularly impressively with 3D mapping and drawing programs); programmer Oliver Kreylos hacked his Kinect so that it would create 3D models from the images it captured; a German research lab built a portable guidance system for blind people; and another group of MIT students created a tiny helicopter that used the Kinect sensor to avoid crashing into obstacles.

More practical uses included a shopping trolley that followed you around a store and allowed you to scan items as you placed them in the basket. The upshot was that once you had finished browsing you simply entered your payment details, the items were charged, and you could leave the store without queuing at a till.

Medical professionals were also quick to see the potential of hands-free control, with a group of surgeons at St Thomas’ Hospital in London trialling the Kinect to manipulate 3D images of a patient’s aorta during an operation. The BBC reported that John Brennan, president of the British Society for Endovascular Therapy, stated that ‘I would find it difficult to think of operating rooms in ten or fifteen years' time where these were just not the norm’.

Microsoft has been open about its desire to see the technology deployed in interesting and unusual ways by developers and has continued to release updates to the developer kits. The company also stated on its Kinect for Windows blog that these updates ‘will include support for Windows 8 desktop applications’, suggesting that those with non-touchscreen PCs might not need to upgrade their hardware wholesale to make the most of the latest offering of Windows. It’s still relatively early days for the Kinect, but already it seems to be marking a path towards the future.

If there’s one criticism of the device, it’s that it lacks fine motion control, with many of the gestures needing to be slightly exaggerated and performed some distance from the screen. There are already lens attachments such as the Nyko Zoom, which aims to reduce the space you need between you and the sensor for it to work correctly, but reports are mixed on how successfully it accomplishes this.

