Visitors to IBM's China Research Lab are "startled" by what they see.
I have tried many speech recognition products over the years, and I have heard people like Dan Poynter, the dean of self-publishing, tell how they write books in a matter of hours by talking into a microphone running speech recognition software. Could this really be possible? Whether or not it was in years past, apparently it finally is today, according to reports coming out of the IBM China Research Lab in Beijing.
Scientists at the lab reportedly have perfected speech-to-text technology to the point that audio broadcast over the airwaves and audio/video files on the Internet (TV, radio, YouTube, and the like) can be converted to text. This matters to more than just authors and the disabled: once speech can be converted to text accurately, the material can be indexed, searched, and retrieved. Stored video clips, for example, would be easy to find. It also means the text can be translated into other languages with a high level of accuracy.
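Once broadcast audio has been transcribed, making it searchable is straightforward. The sketch below is purely illustrative (the clip names and transcripts are invented, and no IBM API is involved): it builds a simple inverted index over hypothetical transcripts and looks up which clips mention a query term.

```python
from collections import defaultdict

# Hypothetical transcripts produced by a speech-to-text engine,
# keyed by clip identifier.
transcripts = {
    "clip-001": "the central bank raised interest rates again today",
    "clip-002": "researchers demo real time speech recognition in beijing",
    "clip-003": "interest in speech technology continues to grow",
}

def build_index(docs):
    """Map each word to the set of clips whose transcript contains it."""
    index = defaultdict(set)
    for clip_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(clip_id)
    return index

def search(index, term):
    """Return the clip ids mentioning the term, sorted for stable output."""
    return sorted(index.get(term.lower(), set()))

index = build_index(transcripts)
print(search(index, "speech"))     # clips mentioning "speech"
print(search(index, "interest"))   # clips mentioning "interest"
```

The same approach scales up with a real search engine behind it; the point is simply that once audio becomes text, every existing text-retrieval tool applies.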
IBM has been working on speech recognition technology for some three decades now and continues to see promise in the technology, though the company may have spent far more on research than it has yet realized from commercial adoption of the resulting products. Last January, the company sold rights to its ViaVoice technology to Nuance, owner of Dragon NaturallySpeaking, which actively promotes its Windows and Mac OS speech-recognition products to businesses and consumers. There is a broad embedded market for speech recognition technology as well, and Nuance sells its own licenses to companies that wish to incorporate speech recognition into their solutions. A related emerging market can be seen in the area of security and conversational biometrics, in which a person's voice patterns are used to establish identity.
IBM China Research Laboratory
In a recent article in The Atlantic, staff writer James Fallows describes the time he and his wife visited IBM's China Research Laboratory and saw a product demo that "startled" him.
"My wife and I were the only native speakers of English in the room. But when each of us spoke into the voice recognition system, it produced nearly perfect real-time versions of what we said," reports Fallows. He then describes how he tried the system first by speaking with deliberate clarity, but then gave it a more challenging run-through by speaking at a "fast, conversational speed." With the exception of a made-up word he included, Fallows' 28-word sentence appeared on the computer screen exactly as he spoke it.
Scientists in the lab already are using the system internally to help with communications between China and Armonk, N.Y. Though everyone at the lab speaks English, discussions with those in the U.S. are facilitated by having a near real-time English transcription running across the bottom of the screen, something that greatly helps comprehension.
The objective of speech recognition is to enable easy access to the full range of computer services without having to type or use one's hands. IBM Research Lab in Haifa has been working on what it calls distributed speech recognition (DSR) to provide a client/server standards-based DSR framework for deploying speech-enabled applications and services over today's mobile networks.
This writer has been following various implementations of speech recognition technology over the years and has tried IBM's ViaVoice product (now marketed by Nuance), along with earlier iterations of Dragon NaturallySpeaking. Although Microsoft has included a speech recognition engine in the Vista operating system, the problem with the technology to date is that it has been so fraught with errors that anyone with a serious business use for it couldn't afford the time to fiddle with it. Today's Google Voice may be an example of this dilemma.
What appears to be happening today, however, is that the technology is maturing and accuracy levels are finally high enough to satisfy most users' expectations. I'm not so sure about Google's engine, but after acquiring IBM's ViaVoice source code, Nuance has the opportunity to merge the best parts of ViaVoice with the Dragon engine. While that may or may not be practical, I can attest that Dragon NaturallySpeaking 10 is a useful and practical product, which I could not say about V9. From what Fallows has written about the achievements in IBM's China Research Laboratory, it appears that IBM--despite the deal with Nuance--has continued to work on speech recognition and has taken the technology to yet another level of accuracy.
As early as 2001, IBM released IBM Embedded ViaVoice for delivering voice technologies to mobile devices. The application gave developers the tools to build robust mobile speech solutions. Winner of the Speech Technology magazine 2007 Market Leader Award, Embedded ViaVoice is behind many of today's small mobile devices and automotive telematics systems. Developers can provide users with voice access to information on mobile devices and hands-free phones. The solutions are fully integrated with the devices' telephony features and offer automatic speech recognition and text-to-speech capabilities. The ViaVoice application helps minimize the skills and time needed to develop advanced voice recognition solutions for devices and systems.
The WebSphere Voice Server extension advances IBM's speech technology in the field of telecommunications. It works on top of WebSphere Application Server and consists of a speech recognition server, a speech synthesizer server, and IBM WebSphere Voice Toolkit. WebSphere Voice Server provides application developers with a choice of programming interfaces to develop speech-enabled interactive voice response applications based on VoiceXML and Java. VoiceXML allows voice applications to be developed and deployed in an analogous way to HTML for visual applications. Using WebSphere Voice Server, many services can be automated, such as information inquiries, stock quotes, ticket booking, hotel reservations, and travel services.
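VoiceXML structures a voice application as a series of forms, each with a prompt, a grammar of accepted responses, and a transition to the next state. The Python sketch below is a loose, hypothetical analogy of that model (it is not WebSphere code and uses no IBM API): a tiny dialog engine walks a caller through a menu of the kinds of services the article mentions.

```python
# A minimal form-filling dialog in the spirit of VoiceXML: each form has
# a prompt, a grammar (accepted utterances mapped to next forms), and
# terminal forms (grammar of None) that simply collect the caller's answer.
DIALOG = {
    "main": {
        "prompt": "Say 'quotes' for stock quotes or 'tickets' to book tickets.",
        "grammar": {"quotes": "quotes", "tickets": "tickets"},
    },
    "quotes": {"prompt": "Which stock symbol?", "grammar": None},
    "tickets": {"prompt": "Which city are you traveling to?", "grammar": None},
}

def run_dialog(utterances, start="main"):
    """Feed recognized utterances through the dialog; return the prompts played."""
    state, transcript = start, []
    for utterance in utterances:
        form = DIALOG[state]
        transcript.append(form["prompt"])
        if form["grammar"] is None:          # terminal form: collect and stop
            transcript.append(f"(collected: {utterance})")
            break
        state = form["grammar"].get(utterance, state)  # stay put if no match
    return transcript

print(run_dialog(["tickets", "Beijing"]))
```

A real VoiceXML interpreter adds reprompting, barge-in, and speech synthesis on top of essentially this loop, which is what makes voice applications as declarative to write as HTML pages.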
Telematics is a combination of telecommunications and computing technology, as found in such products as General Motors' voice-activated OnStar, which handles navigation, emergency, and maintenance notification issues for drivers. OnStar was created by a team of IBM researchers in the mid-1990s. While the product apparently hasn't saved GM from bankruptcy, it's one that has endeared a wide array of GM vehicles to the company's customers.
IBM currently is looking for applications for its conversational biometrics engine, which holds great potential for harnessing speech technology to combat credit card and banking fraud. This technology could plausibly be the key to creating one of the most flexible and robust user verification and detection solutions ever devised. By combining powerful and accurate acoustic, text-independent speaker recognition with additional sources of verification--such as a person's knowledge of a subject--a provider can vary the number of security controls according to the risk. Such technology could improve the convenience of accessing encrypted data that today requires the use of encryption keys and passwords.
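The idea of varying security controls with risk can be sketched as a simple score-fusion rule. The weights and thresholds below are invented for illustration and reflect no actual IBM biometrics engine: an acoustic speaker-match score is combined with a knowledge-check score, and higher-risk transactions demand a higher combined score.

```python
def verify(acoustic_score, knowledge_score, risk):
    """
    Combine a text-independent speaker-match score and a knowledge-check
    score (both in [0, 1]); require more evidence as risk rises.
    Weights and thresholds are illustrative, not from any real system.
    """
    combined = 0.6 * acoustic_score + 0.4 * knowledge_score
    threshold = {"low": 0.5, "medium": 0.7, "high": 0.85}[risk]
    return combined >= threshold

# A strong voice match alone may pass a low-risk check...
print(verify(0.9, 0.2, "low"))     # True: 0.62 >= 0.5
# ...but a high-risk transfer needs both factors to agree.
print(verify(0.9, 0.2, "high"))    # False: 0.62 < 0.85
print(verify(0.95, 0.9, "high"))   # True: 0.93 >= 0.85
```

The appeal for fraud prevention is that the caller never types anything: the voice itself carries one factor, and a conversational question supplies the other.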
Speech recognition technology will see increased use as mobile devices become ever more ubiquitous and users come to expect the responsiveness, convenience, and security the technology can deliver. IBM's 30 years of research in laboratories around the world already has produced commercially successful products in the field of speech recognition, and that foundation will lay the groundwork for much more application development in this arena in years to come.