Hoping to leapfrog Google's and Apple's successes in the smartphone market, Microsoft plans to use cloud-based speech recognition and natural language processing technology to offer user interface capabilities not found on the iPhone or Android devices.
"We believe speech is not a separate application. Rather it is an integral part of the user experience," said Zig Serafin, Microsoft unified communications general manager, before an audience at the SpeechTEK 2010 conference, being held this week in New York.
To boost Windows Phone 7's ability to understand a voice command and deliver the requested result, Microsoft plans to tie Windows Phone 7 handsets to its Tellme cloud-based voice recognition and natural language processing service, Serafin said in a subsequent interview.
Microsoft purchased the company that created this service, Tellme Networks, in 2007.
Before the SpeechTEK audience, Serafin chastised the Android and iPhone operating systems for using icons as the chief form of interaction. "Most smartphones are a grid of icons, much like Windows 3.1," he said.
Talking to the phone is a more natural way of telling it what to do, he said. "When you move to a device that doesn't have a large keyboard, voice is such a compelling complement to that experience," Serafin said.
He then had Microsoft marketing director Ilya Bukshteyn run through a demonstration of how a Windows Phone 7 handset could use speech recognition and natural language processing, the means by which a computer interprets what a person says. Bukshteyn asked the phone to call 'Paul', and a voice emanating from the handset responded with a number of different contacts with the first name Paul. Bukshteyn replied with the specific full name, and the phone proceeded to call that person.
Bukshteyn also told the phone to open an album of pictures, and a picture view app came up on the screen, showing not only pictures taken by the user, but also taken by the user's friends that were posted on social networking sites.
In a third example, Bukshteyn asked for a list of nearby Chinese restaurants. The request was conveyed to the Bing search service, which returned a list of restaurants and their locations on a map.
While the iPhone and the Android variants do offer some voice recognition capabilities, Microsoft's phone service will differ in a number of respects, Serafin said. For one, it will not be restricted to just a few apps, but rather could be used to control the entire phone. The second way in which the service will be unique is that it will be interactive: if given an ambiguous command, the handset or appropriate service can ask the user to clarify the request.
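The clarification loop Serafin describes, as seen in the "call Paul" demo, can be sketched in a few lines. The function name, contact list and callback below are illustrative assumptions, not part of Microsoft's actual API:

```python
def resolve_command(name, contacts, ask):
    """Resolve a spoken 'call <name>' request to a single contact,
    asking the user to clarify when the name is ambiguous.

    `ask` stands in for the phone's follow-up prompt: it receives the
    candidate contacts and returns the user's clarifying reply.
    (All names here are hypothetical, for illustration only.)
    """
    matches = [c for c in contacts if name.lower() in c.lower()]
    if len(matches) <= 1:
        return matches[0] if matches else None  # unambiguous, or no match
    # Ambiguous command: list the candidates and retry with the reply,
    # mirroring the interactive behaviour described above.
    reply = ask(matches)
    return resolve_command(reply, matches, ask)

contacts = ["Paul Allen", "Paul Maritz", "Anna Lee"]
# The lambda simulates the user's second utterance after the prompt.
print(resolve_command("paul", contacts, lambda options: "Paul Maritz"))
# → Paul Maritz
```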
The speech component is one part of what Serafin called the "natural user interface" or NUI. The NUI relies on voice, touch and even motion as forms of input.
"Speech is the core of NUI," he said. Part of the demonstration showed how Microsoft's Kinect Xbox technology could interpret hand gestures to trigger actions on the computer. This technology will be used in Microsoft products beyond the Xbox, Bukshteyn said in a subsequent interview.
Serafin said that the company is in the early stages of rolling speech interaction into different Windows Phone 7 components, beginning with those most heavily used: search, calling people, and guiding users to photo collections.
A user can trigger the phone to listen to voice commands by holding down a single button on the phone. Some of the language processing will be done on the phone and some will be done by Tellme. "Honestly, the user shouldn't know or care about" where the voice commands are processed, Bukshteyn said.
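A minimal sketch of the split Bukshteyn alludes to, assuming a simple rule in which fixed commands are handled on the handset and free-form requests go to the cloud. The command list, function name and routing rule here are hypothetical, not drawn from Microsoft's implementation:

```python
# Assumed on-device command grammar (illustrative only).
LOCAL_COMMANDS = {"call", "open", "play"}

def route(utterance):
    """Return which tier would process the utterance: the on-phone
    recogniser for known commands, or the Tellme cloud service for
    open-ended requests such as local search."""
    first_word = utterance.split()[0].lower()
    if first_word in LOCAL_COMMANDS:
        return "device"   # matched the on-phone grammar
    return "cloud"        # free-form query sent to the cloud service

print(route("call Paul"))                  # → device
print(route("find Chinese restaurants"))   # → cloud
```

Whatever the real routing rule, the point of the quote stands: the tier doing the processing is invisible to the user.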
Serafin claimed that the Tellme service is the most widely used speech-based natural language processing system today. Microsoft pitches the service to large organisations for phone-based help desk support. The service fields over 2.5 billion calls a year for corporate clients, he said.