US20080154600A1 - System, Method, Apparatus and Computer Program Product for Providing Dynamic Vocabulary Prediction for Speech Recognition


Info

Publication number: US20080154600A1
Authority: US (United States)
Prior art keywords: words, word, recognized, candidate, recognition network
Legal status: Abandoned
Application number: US11/614,159
Inventors: Jilei Tian, Jussi Leppanen, Imre Kiss
Current Assignee: Nokia Oyj
Original Assignee: Nokia Oyj
Application filed by Nokia Oyj
Priority to US11/614,159
Assigned to NOKIA CORPORATION. Assignors: KISS, IMRE; LEPPANEN, JUSSI; TIAN, JILEI
Publication of US20080154600A1
Status: Abandoned

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G10L15/083: Recognition networks

Definitions

  • Embodiments of the present invention relate generally to speech processing technology and, more particularly, to a method, apparatus, and computer program product for providing dynamic vocabulary prediction for setting up a speech recognition network on resource-constrained portable devices.
  • The modern communications era has brought about a tremendous expansion of wireline and wireless networks.
  • Computer networks, television networks, and telephony networks are experiencing an unprecedented technological expansion, fueled by consumer demand.
  • Wireless and mobile networking technologies have addressed related consumer demands, while providing more flexibility and immediacy of information transfer.
  • the services may be in the form of a particular media or communication application desired by the user, such as a music player, a game player, an electronic book, short messages, email, etc.
  • the services may also be in the form of interactive applications in which the user may respond to a network device in order to perform a task, play a game or achieve a goal.
  • the services may be provided from a network server or other network device, or even from the mobile terminal such as, for example, a mobile telephone, a mobile television, a mobile computer, a mobile gaming system, etc.
  • Such applications may provide for a user interface that does not rely on substantial manual user activity. In other words, the user may interact with the application in a hands-free or semi-hands-free environment.
  • An example of such an application may be paying a bill, ordering a program, requesting and receiving driving instructions, etc.
  • Other applications may convert oral speech into text or perform some other function based on recognized speech, such as dictating a short message service (SMS) or email, etc.
  • Speech recognition applications (applications that produce text from speech), speech synthesis applications (applications that produce speech from text), and other speech processing devices are becoming more common.
  • Speech recognition, which may be referred to as automatic speech recognition (ASR), may be conducted by numerous different types of applications.
  • A dictation engine, which may be employed for isolated word speech recognition, is one example of such an application and may include a large vocabulary of words that may be recognized.
  • the dictation engine may include a vocabulary set of 100,000 words or more.
  • Each word of the vocabulary may have a corresponding acoustic model formed by concatenating subword acoustic models, such as phonemic hidden Markov models (HMMs).
  • Speech recognition, such as may be performed by a Viterbi decoder, often involves comparing speech to various ones of the acoustic models in order to find the model most likely to have produced the speech.
  • In a speech recognition process, it may be desirable for speech recognition to be performed on a subset of the entire vocabulary in order to reduce the number of models that must be compared to a given speech sample, so that recognition can be used in a resource-constrained embedded system with low memory and computational complexity.
  • A typical recognition vocabulary for a next word to be recognized is often formed as a subset of the entire vocabulary based on a fixed number of candidate words and information from the language model. This conventional mechanism can result in a large runtime memory requirement.
  • a method, apparatus and computer program product are therefore provided for providing dynamic vocabulary prediction for speech recognition.
  • an efficient dynamic vocabulary prediction for large vocabulary isolated speech recognition in resource-constrained systems may be provided.
  • a recognition network may be dynamically created as a subset of a vocabulary of words.
  • embodiments of the present invention dynamically generate a recognition network for each word to be recognized.
  • embodiments of the present invention account for the fact that even previously recognized words may not have been recognized properly in defining the recognition network for each word to be recognized.
  • flexible and efficient speech recognition may be provided.
  • a method of providing dynamic vocabulary prediction for speech recognition includes determining a confidence measure for each candidate recognized word for a current word to be recognized, selecting a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words, and determining a recognition network for a next word to be recognized.
  • the recognition network may include likely follower words for each of the selected candidate words, determined using a language model, as well as supplementary words.
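  • As a rough illustration only, the following Python sketch shows one way such a per-word loop might be organized; the helper names decode_n_best, confidence and predict_followers are hypothetical and are not taken from the patent.

```python
def recognize_utterance(segments, lm, full_vocab, threshold):
    """Hypothetical sketch of dynamic vocabulary prediction: the
    recognition network for each word is derived from the previous
    word's most confident candidates."""
    network = full_vocab          # first word: no history to narrow the search
    lattice = []
    for segment in segments:
        # n-best candidate recognized words for this segment, best first
        candidates = decode_n_best(segment, network)
        lattice.append(candidates)
        # keep candidates whose confidence lies within `threshold`
        # (a negative margin) of the best candidate, whose measure is 0
        selected = [c for c in candidates
                    if confidence(c, candidates[0]) >= threshold]
        # next network: union of likely followers of each selected candidate
        network = set().union(*(predict_followers(c.word, lm) for c in selected))
    return lattice
```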
  • In another exemplary embodiment, a computer program product for providing dynamic vocabulary prediction for speech recognition is provided.
  • the computer program product includes at least one computer-readable storage medium having computer-readable program code portions stored therein.
  • the computer-readable program code portions include first, second and third executable portions.
  • the first executable portion is for determining a confidence measure for each candidate recognized word for a current word to be recognized.
  • the second executable portion is for selecting a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words.
  • the third executable portion is for determining a recognition network for a next word to be recognized.
  • the recognition network may include likely follower words for each of the selected candidate words, determined using a language model, as well as supplementary words.
  • an apparatus for providing dynamic vocabulary prediction for speech recognition includes a recognition network element.
  • the recognition network element may be configured to determine a confidence measure for each candidate recognized word for a current word to be recognized.
  • the recognition network element may also be configured to select a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words, and determine a recognition network for a next word to be recognized.
  • the recognition network may include likely follower words for each of the selected candidate words, determined using a language model, as well as supplementary words.
  • an apparatus for providing dynamic vocabulary prediction for speech recognition includes means for determining a confidence measure for each candidate recognized word for a current word to be recognized, means for selecting a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words and means for determining a recognition network for a next word to be recognized.
  • the recognition network may include likely follower words for each of the selected candidate words, determined using a language model, as well as supplementary words.
  • a system for providing dynamic vocabulary prediction for speech recognition may include a speech processing element, a speech recognition engine and a recognition network element.
  • the speech processing element may be configured to segment input speech into a series of words, including a current word to be recognized and a next word to be recognized, and to perform feature extraction.
  • the speech recognition engine may be configured to determine candidate recognized words corresponding to each word of the series of words based on a recognition network dynamically generated for each word of the series of words.
  • the recognition network element may be configured to determine a confidence measure for each candidate recognized word for the current word to be recognized, to select a subset of candidate recognized words for the current word to be recognized as selected candidate words based on the confidence measure of each one of the candidate recognized words for the current word to be recognized, and to determine a next recognition network for a next word to be recognized.
  • the next recognition network may include likely follower words for each of the selected candidate words, determined using a language model, as well as supplementary words.
  • Embodiments of the invention may provide a method, apparatus and computer program product for employment in systems to enhance speech processing.
  • mobile terminals and other electronic devices may benefit from an ability to perform speech processing in an efficient manner without suffering performance degradation.
  • accurate word recognition may be performed using relatively small amounts of resources.
  • FIG. 1 is a schematic block diagram of a mobile terminal according to an exemplary embodiment of the present invention
  • FIG. 2 is a schematic block diagram of a wireless communications system according to an exemplary embodiment of the present invention.
  • FIG. 3 illustrates a block diagram of a system for providing dynamic vocabulary prediction for speech recognition according to an exemplary embodiment of the present invention
  • FIG. 4 shows graphical views illustrating n-best distribution in terms of decoder vocabulary prediction rate and predicted vocabulary size for an exemplary embodiment of the present invention
  • FIG. 5 illustrates a sequence of segmented words and corresponding word lattice according to an exemplary embodiment of the present invention.
  • FIG. 6 is a flowchart according to an exemplary method for providing dynamic vocabulary prediction for speech recognition according to an exemplary embodiment of the present invention.
  • FIG. 1 illustrates a block diagram of a mobile terminal 10 that would benefit from embodiments of the present invention.
  • a mobile telephone as illustrated and hereinafter described is merely illustrative of one type of mobile terminal that would benefit from embodiments of the present invention and, therefore, should not be taken to limit the scope of embodiments of the present invention.
  • While one embodiment of the mobile terminal 10 is illustrated and will be hereinafter described for purposes of example, other types of mobile terminals, such as portable digital assistants (PDAs), pagers, mobile computers, mobile televisions, gaming devices, laptop computers, cameras, video recorders, GPS devices and other types of voice and text communications systems, can readily employ embodiments of the present invention.
  • system and method of embodiments of the present invention will be primarily described below in conjunction with mobile communications applications. However, it should be understood that the system and method of embodiments of the present invention can be utilized in conjunction with a variety of other applications, both in the mobile communications industries and outside of the mobile communications industries.
  • the mobile terminal 10 includes an antenna 12 (or multiple antennae) in operable communication with a transmitter 14 and a receiver 16 .
  • the mobile terminal 10 further includes a controller 20 or other processing element that provides signals to and receives signals from the transmitter 14 and receiver 16 , respectively.
  • the signals include signaling information in accordance with the air interface standard of the applicable cellular system, and also user speech, received data and/or user generated data.
  • the mobile terminal 10 is capable of operating with one or more air interface standards, communication protocols, modulation types, and access types.
  • the mobile terminal 10 is capable of operating in accordance with any of a number of first, second, third and/or fourth-generation communication protocols or the like.
  • the mobile terminal 10 may be capable of operating in accordance with second-generation (2G) wireless communication protocols IS-136 (TDMA), GSM, and IS-95 (CDMA), with third-generation (3G) wireless communication protocols, such as UMTS, CDMA2000, WCDMA and TD-SCDMA, or with fourth-generation (4G) wireless communication protocols or the like.
  • the controller 20 includes circuitry desirable for implementing audio and logic functions of the mobile terminal 10 .
  • the controller 20 may be comprised of a digital signal processor device, a microprocessor device, and various analog to digital converters, digital to analog converters, and other support circuits. Control and signal processing functions of the mobile terminal 10 are allocated between these devices according to their respective capabilities.
  • the controller 20 thus may also include the functionality to convolutionally encode and interleave messages and data prior to modulation and transmission.
  • the controller 20 can additionally include an internal voice coder, and may include an internal data modem.
  • the controller 20 may include functionality to operate one or more software programs, which may be stored in memory.
  • the controller 20 may be capable of operating a connectivity program, such as a conventional Web browser. The connectivity program may then allow the mobile terminal 10 to transmit and receive Web content, such as location-based content and/or other web page content, according to a Wireless Application Protocol (WAP), Hypertext Transfer Protocol (HTTP) and/or the like, for example.
  • the mobile terminal 10 may also comprise a user interface including an output device such as a conventional earphone or speaker 24 , a ringer 22 , a microphone 26 , a display 28 , and a user input interface, all of which are coupled to the controller 20 .
  • the user input interface which allows the mobile terminal 10 to receive data, may include any of a number of devices allowing the mobile terminal 10 to receive data, such as a keypad 30 , a touch display (not shown) or other input device.
  • the keypad 30 may include the conventional numeric (0-9) and related keys (#, *), and other keys used for operating the mobile terminal 10 .
  • the keypad 30 may include a conventional QWERTY keypad arrangement.
  • the keypad 30 may also include various soft keys with associated functions.
  • the mobile terminal 10 may include an interface device such as a joystick or other user input interface.
  • the mobile terminal 10 further includes a battery 34 , such as a vibrating battery pack, for powering various circuits that are required to operate the mobile terminal 10 , as well as optionally providing mechanical vibration as a detectable output.
  • the mobile terminal 10 may further include a user identity module (UIM) 38 .
  • the UIM 38 is typically a memory device having a processor built in.
  • the UIM 38 may include, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), etc.
  • the UIM 38 typically stores information elements related to a mobile subscriber.
  • the mobile terminal 10 may be equipped with memory.
  • the mobile terminal 10 may include volatile memory 40 , such as volatile Random Access Memory (RAM) including a cache area for the temporary storage of data.
  • the mobile terminal 10 may also include other non-volatile memory 42 , which can be embedded and/or may be removable.
  • the non-volatile memory 42 can additionally or alternatively comprise an EEPROM, flash memory or the like, such as that available from the SanDisk Corporation of Sunnyvale, Calif., or Lexar Media Inc. of Fremont, Calif.
  • the memories can store any of a number of pieces of information, and data, used by the mobile terminal 10 to implement the functions of the mobile terminal 10 .
  • the memories can include an identifier, such as an international mobile equipment identification (IMEI) code, capable of uniquely identifying the mobile terminal 10 .
  • FIG. 2 is a schematic block diagram of a wireless communications system according to an exemplary embodiment of the present invention.
  • the system includes a plurality of network devices.
  • one or more mobile terminals 10 may each include an antenna 12 for transmitting signals to and for receiving signals from a base site or base station (BS) 44 .
  • the base station 44 may be a part of one or more cellular or mobile networks each of which includes elements required to operate the network, such as a mobile switching center (MSC) 46 .
  • the mobile network may also be referred to as a Base Station/MSC/Interworking function (BMI).
  • the MSC 46 is capable of routing calls to and from the mobile terminal 10 when the mobile terminal 10 is making and receiving calls.
  • the MSC 46 can also provide a connection to landline trunks when the mobile terminal 10 is involved in a call.
  • the MSC 46 can be capable of controlling the forwarding of messages to and from the mobile terminal 10 , and can also control the forwarding of messages for the mobile terminal 10 to and from a messaging center. It should be noted that although the MSC 46 is shown in the system of FIG. 2 , the MSC 46 is merely an exemplary network device and embodiments of the present invention are not limited to use in a network employing an MSC.
  • the MSC 46 can be coupled to a data network, such as a local area network (LAN), a metropolitan area network (MAN), and/or a wide area network (WAN).
  • the MSC 46 can be directly coupled to the data network.
  • the MSC 46 is coupled to a gateway device (GTW) 48
  • GTW 48 is coupled to a WAN, such as the Internet 50 .
  • devices such as processing elements (e.g., personal computers, server computers or the like) can be coupled to the mobile terminal 10 via the Internet 50 .
  • the processing elements can include one or more processing elements associated with a computing system 52 (two shown in FIG. 2 ), origin server 54 (one shown in FIG. 2 ) or the like, as described below.
  • the BS 44 can also be coupled to a signaling GPRS (General Packet Radio Service) support node (SGSN) 56 .
  • the SGSN 56 is typically capable of performing functions similar to the MSC 46 for packet switched services.
  • the SGSN 56 like the MSC 46 , can be coupled to a data network, such as the Internet 50 .
  • the SGSN 56 can be directly coupled to the data network. In a more typical embodiment, however, the SGSN 56 is coupled to a packet-switched core network, such as a GPRS core network 58 .
  • the packet-switched core network is then coupled to another GTW 48 , such as a GTW GPRS support node (GGSN) 60 , and the GGSN 60 is coupled to the Internet 50 .
  • the packet-switched core network can also be coupled to a GTW 48 .
  • the GGSN 60 can be coupled to a messaging center.
  • the GGSN 60 and the SGSN 56 like the MSC 46 , may be capable of controlling the forwarding of messages, such as MMS messages.
  • the GGSN 60 and SGSN 56 may also be capable of controlling the forwarding of messages for the mobile terminal 10 to and from the messaging center.
  • devices such as a computing system 52 and/or origin server 54 may be coupled to the mobile terminal 10 via the Internet 50 , SGSN 56 and GGSN 60 .
  • devices such as the computing system 52 and/or origin server 54 may communicate with the mobile terminal 10 across the SGSN 56 , GPRS core network 58 and the GGSN 60 .
  • the mobile terminals 10 may communicate with the other devices and with one another, such as according to the Hypertext Transfer Protocol (HTTP) and/or the like, to thereby carry out various functions of the mobile terminals 10 .
  • the mobile terminal 10 may be coupled to one or more of any of a number of different networks through the BS 44 .
  • the network(s) may be capable of supporting communication in accordance with any one or more of a number of first-generation (1G), second-generation (2G), 2.5G, third-generation (3G), 3.9G, fourth-generation (4G) mobile communication protocols or the like.
  • one or more of the network(s) can be capable of supporting communication in accordance with 2G wireless communication protocols IS-136 (TDMA), GSM, and IS-95 (CDMA).
  • one or more of the network(s) can be capable of supporting communication in accordance with 2.5G wireless communication protocols GPRS, Enhanced Data GSM Environment (EDGE), or the like. Further, for example, one or more of the network(s) can be capable of supporting communication in accordance with 3G wireless communication protocols such as a Universal Mobile Telephone System (UMTS) network employing Wideband Code Division Multiple Access (WCDMA) radio access technology.
  • Some narrow-band AMPS (NAMPS), as well as TACS, network(s) may also benefit from embodiments of the present invention, as should dual or higher mode mobile stations (e.g., digital/analog or TDMA/CDMA/analog phones).
  • the mobile terminal 10 can further be coupled to one or more wireless access points (APs) 62 .
  • the APs 62 may comprise access points configured to communicate with the mobile terminal 10 in accordance with techniques such as, for example, radio frequency (RF), Bluetooth (BT), infrared (IrDA) or any of a number of different wireless networking techniques, including wireless LAN (WLAN) techniques such as IEEE 802.11 (e.g., 802.11a, 802.11b, 802.11g, 802.11n, etc.), WiMAX techniques such as IEEE 802.16, and/or ultra wideband (UWB) techniques such as IEEE 802.15 and/or the like.
  • the APs 62 may be coupled to the Internet 50 .
  • the APs 62 can be directly coupled to the Internet 50 . In one embodiment, however, the APs 62 are indirectly coupled to the Internet 50 via a GTW 48 . Furthermore, in one embodiment, the BS 44 may be considered as another AP 62 . As will be appreciated, by directly or indirectly connecting the mobile terminals 10 and the computing system 52 , the origin server 54 , and/or any of a number of other devices, to the Internet 50 , the mobile terminals 10 can communicate with one another, the computing system, etc., to thereby carry out various functions of the mobile terminals 10 , such as to transmit data, content or the like to, and/or receive content, data or the like from, the computing system 52 .
  • As used herein, the terms “data,” “content,” “information” and similar terms may be used interchangeably to refer to data capable of being transmitted, received and/or stored in accordance with embodiments of the present invention. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present invention.
  • the mobile terminal 10 and computing system 52 may be coupled to one another and communicate in accordance with, for example, RF, BT, IrDA or any of a number of different wireline or wireless communication techniques, including LAN, WLAN, WiMAX, UWB techniques and/or the like.
  • One or more of the computing systems 52 can additionally, or alternatively, include a removable memory capable of storing content, which can thereafter be transferred to the mobile terminal 10 .
  • the mobile terminal 10 can be coupled to one or more electronic devices, such as printers, digital projectors and/or other multimedia capturing, producing and/or storing devices (e.g., other terminals).
  • the mobile terminal 10 may be configured to communicate with the portable electronic devices in accordance with techniques such as, for example, RF, BT, IrDA or any of a number of different wireline or wireless communication techniques, including USB, LAN, WLAN, WiMAX, UWB techniques and/or the like.
  • data associated with a speech recognition application or other speech processing application may be communicated over the system of FIG. 2 between a mobile terminal, which may be similar to the mobile terminal 10 of FIG. 1 and a network device of the system of FIG. 2 , or between mobile terminals.
  • the system of FIG. 2 need not be employed for communication between mobile terminals or between a network device and the mobile terminal, but rather FIG. 2 is merely provided for purposes of example.
  • embodiments of the present invention may be resident on a communication device such as the mobile terminal 10 , or may be resident on a network device or other device accessible to the communication device.
  • a speaker may be asked to speak with a clear pause between words in order to enable the word to be segmented by voice activity detection (VAD).
  • VAD may be used to detect word boundaries so that speech recognition may be carried out only on a single segmented word at any given time.
  • the n-best word candidates may then be given for each segmented word.
  • a word lattice may then be produced including each of the n-best word candidates for each corresponding word of the utterance.
  • the word candidates of the word lattice may be listed or otherwise organized in order of a score that represents a likelihood that the word candidate is the correct word.
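  • For illustration, such a lattice can be viewed as one ranked n-best list per segmented word. The structure below is a minimal sketch, not a data structure defined by the patent:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    word: str
    acoustic_score: float   # log-domain score from the acoustic models
    language_score: float   # log-domain n-gram score from the language model

# lattice[i] is the n-best candidate list for the i-th segmented word,
# organized best first
Lattice = list[list[Candidate]]
```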
  • one way of scoring the word candidates is to provide an acoustic score and a language score such as a language model (LM) n-gram value.
  • the acoustic score is a value based on sound alone.
  • the acoustic score represents a probability that the word candidate matches the spoken word being analyzed based only on the sound of the spoken word.
  • the language score takes into account language attributes such as grammar to determine the probability that a particular word candidate matches the spoken word being analyzed based on language probabilities accessible to the application. For example, if the first word of an utterance is “I”, then the probability of the second word spoken being “is” would be very low, while the probability of the second word spoken being “am” would be much higher.
  • the n-gram LM may be trained on a large text corpus. After calculating a value for the acoustic score and the language score, a combined or composite score may be acquired that may subsequently be used to order each of the candidate words.
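  • A minimal sketch of such a composite score, assuming the Candidate structure above and a simple weighted log-domain combination (the weighting scheme is an assumption, not the patent's):

```python
def composite_score(cand: Candidate, lm_weight: float = 1.0) -> float:
    # log-domain scores add; the language term is weighted to balance
    # acoustic evidence against the language model prior
    return cand.acoustic_score + lm_weight * cand.language_score

def order_candidates(candidates: list[Candidate], n: int) -> list[Candidate]:
    # rank candidate words by descending composite score, keeping the top n
    return sorted(candidates, key=composite_score, reverse=True)[:n]
```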
  • the recognition network may then be utilized for comparing models of words in the recognition network to sample speech, such as a subsequent word, following the particular word.
  • In an exemplary embodiment, the recognition network is formed not just from the best candidate followers for the best word (e.g., the word having the highest composite score), but from a list of all of the best candidate followers for several best candidate recognized words.
  • a list of best candidate recognized words may be generated.
  • a fixed number of best candidate recognized words may be generated based on the combined acoustic and language scores (which may be weighted in some fashion known in the art). For each of the best candidate recognized words, a corresponding list of best candidate followers may form the recognition network.
  • embodiments of the present invention may incorporate a confidence measure to be used for dynamic selection of selected ones of the best candidate recognized words (e.g., selected candidate words) based on a difference between each of the best candidate recognized words and the best candidate recognized word (e.g., the candidate recognized word having the highest combined acoustic and language score). Only best candidate followers associated with the selected candidate words (i.e., candidate words that have a confidence measure within a threshold distance from the best candidate recognized word) may then be used to form the recognition network thereby providing dynamic vocabulary prediction.
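  • A simplified sketch of that selection follows; here the confidence measure is approximated by a composite-score difference from the best candidate (the patent's duration-normalized measure appears later in equation (4)), and lm_followers is a hypothetical lookup of likely follower words:

```python
def dynamic_recognition_network(n_best, lm_followers, threshold):
    """Select candidates within `threshold` (a negative margin) of the
    best candidate, then form the next word's recognition network from
    the union of their likely follower words."""
    best = composite_score(n_best[0])
    selected = [c for c in n_best if composite_score(c) - best >= threshold]
    network = set()
    for cand in selected:
        network |= lm_followers.get(cand.word, set())
    return network
```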
  • FIG. 3 illustrates a block diagram of a system for providing dynamic vocabulary prediction for speech recognition according to an exemplary embodiment of the present invention.
  • An exemplary embodiment of the invention will now be described with reference to FIG. 3 , in which certain elements of a system for providing dynamic vocabulary prediction for speech recognition are displayed.
  • the system of FIG. 3 will be described, for purposes of example, in connection with the mobile terminal 10 of FIG. 1 .
  • the system of FIG. 3 may also be employed in connection with a variety of other devices, both mobile and fixed, and therefore, embodiments of the present invention should not be limited to application on devices such as the mobile terminal 10 of FIG. 1 .
  • FIG. 3 illustrates one example of a configuration of a system for providing dynamic vocabulary prediction for speech recognition
  • numerous other configurations may also be used to implement embodiments of the present invention.
  • Although segmentation of words is illustrated in FIG. 3 as being performed by a VAD element 70, any other known mechanism for sentence segmentation may alternatively be employed.
  • the system 68 includes a speech processing element for segmenting input speech 72 into speech samples such as individual words, a recognition network element 74 and a speech recognition decoder/engine 76 .
  • the speech processing element of one exemplary embodiment may be the VAD element 70 .
  • the VAD element 70 may be in communication with the speech recognition decoder/engine 76 via the recognition network element 74 , as shown in FIG. 3 , or alternatively, the VAD element 70 and the speech recognition decoder/engine 76 could be communicatively coupled independent of the recognition network element 74 .
  • the recognition network element 74 may provide an input to the speech recognition decoder/engine 76 for providing information regarding a recognition network.
  • the VAD element 70 , the recognition network element 74 and the speech recognition decoder/engine 76 may each be embodied by and/or operate under the control of a processing element.
  • some or each of the VAD element 70 , the recognition network element 74 and the speech recognition decoder/engine 76 may be embodied by and/or operate under the control of a single processing element.
  • a single or even multiple processing elements may perform all of the functions associated with one or more of the VAD element 70 , the recognition network element 74 and the speech recognition decoder/engine 76 .
  • Processing elements as described herein may be embodied in many ways.
  • a processing element may be embodied as a processor, a coprocessor, a controller or various other processing means or devices including integrated circuits such as, for example, an ASIC (application specific integrated circuit).
  • ASIC application specific integrated circuit
  • the VAD 70 , the recognition network element 74 and the speech recognition decoder/engine 76 may be portions of a speech recognition application 78 that operates under the control of a processing element such as the controller 20 of the mobile terminal 10 .
  • the speech recognition application 78 may also include an interface element 80 , which may be any means or device capable of communicating the output of the speech recognition decoder/engine 76 to, for example, a display to allow user interface with regard to the output of the speech recognition decoder/engine 76 .
  • the interface element 80 may provide the output of the speech recognition decoder/engine 76 to another application for further processing.
  • the VAD 70 , the recognition network element 74 , the speech recognition decoder/engine 76 and/or the interface element 80 may be embodied in the form of software applications executable by the controller 20 .
  • instructions for performing the functions of the VAD 70 , the recognition network element 74 , the speech recognition decoder/engine 76 and/or the interface element 80 may be stored in a memory (for example, either the volatile memory 40 or the non-volatile memory 42 ) of the mobile terminal 10 and executed by the controller 20 .
  • the VAD 70 may be any device or means embodied in either hardware, software, or a combination of hardware and software that is capable of monitoring signals including voice data (e.g., the input speech) and determining whether voice activity is present.
  • the receiver 16 may communicate call data to the VAD 70 .
  • the call data may be any type of call including an IP call (for example, VoIP, Internet call, Skype, etc.) or a conference call.
  • the call data may include caller voice data which can be detected by the VAD 70 .
  • user voice data input into the mobile terminal 10 by, for example, the microphone 26 may be communicated to the VAD 70 and detected.
  • the VAD 70 may be capable of signaling periods of silence and periods of voice activity in the voice data. Accordingly, the VAD 70 may be used to detect and/or indicate word boundaries. For example, if the speech recognition application 78 is an isolated word speech dictation application, the user may be prompted to speak each word with a clear pause between words so that the VAD 70 may detect word boundaries and communicate segmented voice data 82 to the recognition network element 74 and/or to the speech recognition decoder/engine 76 .
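  • As a toy illustration only, an energy-based VAD might locate such boundaries as sketched below; the frame representation, energy threshold and minimum-pause length are assumptions:

```python
import numpy as np

def segment_words(frames, energy_threshold, min_silence_frames=20):
    """Toy energy-based VAD: a sufficiently long run of low-energy
    frames is treated as the pause that ends an isolated word."""
    segments, start, silence = [], None, 0
    for i, frame in enumerate(frames):
        if np.sum(frame ** 2) > energy_threshold:   # speech frame
            if start is None:
                start = i                           # word begins here
            silence = 0
        elif start is not None:
            silence += 1
            if silence >= min_silence_frames:       # pause -> word boundary
                segments.append((start, i - silence + 1))
                start, silence = None, 0
    if start is not None:                           # flush a trailing word
        segments.append((start, len(frames)))
    return segments
```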
  • the system may include a feature extractor 69 configured to extract feature vectors 71 from the input speech 72 and to output the feature vectors 71 to the VAD 70 and the recognition network element 74.
  • the recognition network element 74 may receive the feature vectors 71 from the feature extractor and may receive information as to which feature vectors correspond to speech and which correspond to silence from the VAD 70 .
  • the speech recognition decoder/engine 76 may be any speech recognition decoder/engine known in the art.
  • the speech recognition decoder/engine 76 may be any device or means embodied in either hardware, software, or a combination of hardware and software that is capable of examining speech samples from the received segmented voice data 82 and generating acoustic and language scores, as described above, for candidate words corresponding to each word of the segmented voice data 82 .
  • the speech recognition decoder/engine 76 may be configured to receive recognition network information 84 from the recognition network element 74 so that acoustic and language scores may only be generated for a limited subset of candidate words.
  • the recognition network information 84 may include a listing of words of the recognition network and only words of the recognition network may be used for score calculation.
  • the recognition network information 84 may include acoustic modeling information for each word of the recognition network. The mechanism used to determine the recognition network will be described in greater detail below.
  • the speech recognition decoder/engine 76 may be in communication with a memory element (e.g., either the volatile memory 40 or the non-volatile memory 42 ) which may store a large or full vocabulary 86 .
  • the large or full vocabulary 86 may include a listing of words and their corresponding phonetic pronunciations.
  • the speech recognition decoder/engine 76 may access the large or full vocabulary 86 in order to access acoustic modeling information for words of the recognition network to use for composite score and/or other score calculations.
  • the speech recognition decoder/engine 76 may receive the acoustic modeling information for words of the recognition network to use in composite score and/or other score calculations directly from the recognition network element 74 .
  • the speech recognition decoder/engine 76 may then perform composite score and/or other score calculations to generate, for example, a list of best word candidates based on the composite scores.
  • processing and runtime memory resources may be preserved since the recognition network represents a dynamically selected portion of the full set of vocabulary words upon which recognition operations may be conducted at the speech recognition decoder/engine 76, thereby reducing the number of candidate words upon which recognition operations must be performed.
  • the recognition network element 74 may be any device or means embodied in either hardware, software, or a combination of hardware and software that is capable of determining the recognition network information 84 as described in greater detail below.
  • the recognition network element 74 may receive word recognition information 88 , which may include the list of best word candidates, from the speech recognition decoder/engine 76 .
  • the word recognition information 88 may represent a listing of the most likely candidate words (e.g., n-best candidate words) corresponding to a particular word of the input speech 72 in response to recognition operations (e.g., similar to those described above) being performed on the particular word.
  • the recognition network element 74 may be configured to dynamically determine the recognition network for each word.
  • the recognition network element 74 may be configured to dynamically select a subset of words based on the word recognition information 88 for a current word in order to determine the recognition network for the next word to be recognized.
  • the recognition network element 74 may dynamically determine a recognition network to be used for recognition of a current word based on word recognition information 88 associated with a word upon which recognition operations were previously performed.
  • the recognition network element 74 may be configured to calculate a confidence measure for each of the n-best candidate words.
  • The best candidate word identified in the word recognition information 88 (e.g., the word among the n-best candidate words having the highest composite score) may be assigned a reference value.
  • a distance (in terms of a difference) from the reference value of the best candidate word may be measured or otherwise determined for each other n-best candidate word.
  • the recognition network element 74 may be configured to dynamically determine a number of selected candidate words for use in follower word prediction (e.g., prediction of a word likely to follow a current word (or sequence of words) based on language model and/or word frequency information).
  • a threshold may be determined and the difference between the best candidate word and the remaining n-best candidate words may be compared to the threshold. Only those n-best candidate words that fall within the threshold (e.g., have a confidence measure close to that of the best candidate word within the threshold amount) may be selected as the selected candidate words. Accordingly, predictions regarding words that are likely to follow a current word being recognized may be based only upon those words that are statistically most likely to be a correct recognition or decode of the current word.
  • the recognition network element 74 may be further configured to utilize language model information to predict likely follower words for each of the selected candidate words.
  • the result of the prediction of likely follower words may produce a list of best candidate followers including the best candidate follower words for each of the selected candidate words thereby defining the recognition network for the next word to be recognized.
  • Acoustic modeling information for the words of the recognition network for the next word to be recognized may then be utilized for word recognition operations of the next word to be recognized for determining best candidate words for the next word to be recognized as well.
  • the process described above may then be repeated for each subsequent word to be recognized in order to generate a word lattice as described in greater detail below.
  • the process described above is considered “dynamic” since the selected candidate words are selected for each word based only on those words that are statistically likely to be relevant, rather than being based on a fixed number of candidate words which may be invariable or constant for each word.
  • Generation of a dynamic decoder vocabulary for the recognition network may improve isolated word dictation. For example, if a word to be recognized is not in the decoder vocabulary, then it may be impossible to have correct recognition thereby providing motivation to increase a size of the decoder vocabulary set to reduce the likelihood of such a failure. Meanwhile, if a larger sized decoder vocabulary set is utilized, a larger memory footprint may be required and more computational resources may be utilized. Thus, generating a recognition network by dynamically generating a candidate list of words that are statistically likely to follow the words statistically most likely to be a correct recognition candidate for an immediately preceding word for which recognition operations have been performed, may reduce both the memory footprint and resource consumption utilized in performing recognition operations.
  • The posterior probability P(W|X) of a word sequence W given acoustic observations X (which is impossible to know in advance) may be calculated by applying Bayes' rule, giving in the log domain:

log P(W|X) ∝ log P(X|W) + λ · log P(W)

  • the first term, i.e., log(P(X|W)), may represent the acoustic score.
  • the second term, i.e., log(P(W)), may represent the language model score.
  • the term λ may represent a language model scaling factor.
  • the token passing scheme may be used for simplifying the calculation.
  • Token passing can be used for computing the overall most likely sentence hypothesis (accumulative scores), given the acoustically scored word lists in each word segment and the language model probabilities.
  • an accumulative score may be determined by:
accumulative_score_i = accumulative_score_(i−1) + LM_bigram + acoustic_score_i / acoustic_scaling

  • accumulative_score_i represents the accumulative score for the sentence up to and including word i.
  • accumulative_score_(i−1) represents the accumulative score for the sentence prior to word i.
  • LM_bigram may represent a language score for word i.
  • acoustic_score_i may represent an acoustic score for word i.
  • acoustic_scaling may represent a scaling of the acoustic score.
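  • Transcribed directly into code, the update rule reads as follows (a sketch; all quantities are the log-domain scores defined above):

```python
def accumulate(prev_score: float, lm_bigram: float,
               acoustic_score: float, acoustic_scaling: float) -> float:
    # accumulative_score_i = accumulative_score_(i-1) + LM_bigram
    #                        + acoustic_score_i / acoustic_scaling
    return prev_score + lm_bigram + acoustic_score / acoustic_scaling
```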
  • A token passing search may be used to find the n best recognition candidates based on token-passed accumulative scores and on history information (e.g., previous words used in obtaining the cumulative score for a sentence) stored in the tokens.
  • the accumulative scores may be computed from all acoustic scores and scaled language model bigrams along a particular pass. Given ranked n-best recognized word candidates, the language model may again be used to predict a vocabulary set for the next word segment. Both word pairs and backoff unigrams (e.g., common words for the language such as the, and, etc.) may be used for predicting the likely follower words, given the n-best candidate words. Taking a union of all of the follower words as a new vocabulary (i.e., the recognition network), the system may be able to repeatedly recognize the next segment in the sentence given a smaller subset of the vocabulary for use in the recognition.
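  • A minimal sketch of that prediction step, assuming a hypothetical bigram_followers mapping from a word to the set of words observed to follow it and a backoff_unigrams set of common words:

```python
def predict_vocabulary(n_best_words, bigram_followers, backoff_unigrams):
    """Union of the bigram followers of every n-best candidate word,
    padded with backoff unigrams so common words stay recognizable."""
    vocab = set(backoff_unigrams)
    for word in n_best_words:
        vocab |= bigram_followers.get(word, set())
    return vocab
```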
  • the confidence measure may be calculated for each of the n-best candidate words.
  • the confidence measure may be calculated according to the following equation, in which W is the reference word sequence and Ŵ is a hypothesized word sequence:

LLR(X, Ŵ) = log P(X|Ŵ) − log P(X|W)

  • LLR represents a logarithm of a likelihood ratio.
  • the LLR represents a normalization of a hypothesis score by a reference score.
  • the reference word sequence can be approached in several ways, e.g., by using an anti-model.
  • the reference word sequence may be modeled by obtaining scores for n-best word sequences.
  • the best scored word sequence (e.g., the word sequence with the highest cumulative score) may be chosen as the reference word sequence.
  • the confidence measure for each word sequence is determined based on a difference between the accumulative score of the word sequence (e.g., the accumulation of each word score for the given word sequence, up to the current word) and the best accumulative score (e.g., the accumulative score of the best word sequence).
  • the confidence measure is normalized for word duration by dividing the difference between the accumulative score of the word sequence and the best accumulative score by the word duration. The confidence measure may therefore be calculated by:

CM_j = (accumulative_score_j − accumulative_score_best) / duration_j    (4)
  • the confidence measure for the best scored word sequence may be established as a reference (which using equation (4) would be zero).
  • the confidence measure of each other word sequence may be compared to the confidence measure of the best scored word sequence to determine, based on a difference between the confidence measures, which of the word sequences are within a threshold amount of difference from the confidence measure of the best scored word sequence.
  • Current words for each of the word sequences that are within the threshold amount of difference may be considered as selected candidate words.
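  • In code, the duration-normalized measure of equation (4) and the threshold test might be sketched as follows; the hypothesis fields (acc_score, duration, current_word) are illustrative assumptions:

```python
def confidence_measure(acc_score, best_acc_score, duration):
    # equation (4): deficit relative to the best accumulative score,
    # normalized by word duration; the best sequence itself scores 0
    return (acc_score - best_acc_score) / duration

def select_candidate_words(hypotheses, threshold):
    # keep the current word of every hypothesis whose confidence lies
    # within the (negative) threshold of the best hypothesis
    best = max(h.acc_score for h in hypotheses)
    return [h.current_word for h in hypotheses
            if confidence_measure(h.acc_score, best, h.duration) >= threshold]
```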
  • the recognition network element 74 may determine the recognition network based on the selected candidate words by predicting likely follower words in order to produce the best candidate follower words for each of the selected candidate words.
  • Alternatively, a normalized acoustic score may be used for confidence measure calculations.
  • FIG. 4 shows graphical views illustrating n-best distribution in terms of decoder vocabulary prediction rate and predicted vocabulary size for an exemplary embodiment in which the confidence measure of the best word sequence is 0 and the threshold is set at −0.03.
  • the vertical axis represents a percentage of how often a correct next word is within a list of a length indicated on the horizontal axis.
  • the correct word may be in a 1-best list of words about 93% of the time.
  • the correct word may be in the 2-best list about an additional 5% of the time.
  • FIG. 4B shows the same data depicted in FIG. 4A in a cumulative manner. As shown in FIG. 4B, the correct word may be in the 2-best list about 98% of the time (i.e., the sum of 93% and 5% from FIG. 4A).
  • FIG. 4C shows a percentage of an 8-best vocabulary size given n being 1 through 8 number of selected candidate words.
  • FIG. 4D shows the same data as is illustrated in FIG. 4C in a cumulative manner. Accordingly, if one selected candidate word is utilized, the vocabulary size of the recognition network may be about 37% of a vocabulary set generated using eight selected candidate words. Meanwhile, if two selected candidate words are utilized, the vocabulary size increases by about 13% to a cumulative vocabulary size of about 50% of the vocabulary set generated using eight selected candidate words.
  • If the number of selected candidate words is two, there is approximately a 98% chance that the correct word will be in the predicted list, and the recognition network will be about 50% smaller than it would be in an eight-selected-candidate-word scenario.
  • Thus, a 2-best list may provide relatively good performance, and adding more candidates to an n-best list may only marginally improve performance while increasing memory usage.
  • the segmented voice data 82 includes each of the words (word1 to wordm) of the sentence or phrase.
  • the speech recognition decoder/engine 76 is configured to construct a word lattice 90 from the segmented voice data 82 .
  • the word lattice 90 includes candidate words 92 corresponding to each of the words of the sentence or phrase.
  • candidate words w11 to w1n correspond to word1, candidate words wi1 to win correspond to wordi, and candidate words wm1 to wmn correspond to wordm.
  • the candidate words 92 of the word lattice 90 may be listed or otherwise organized such that candidates having a highest composite score are listed or otherwise presented first while remaining candidate words are listed or otherwise presented in order of descending composite scores.
  • the candidate words 92 may be presented in a list of best word candidates.
  • the list of best word candidates may be a listing of words from a corresponding recognition network for the respective word being recognized, in which the listing of words are ranked based on cumulative scores.
  • the list of best candidate words may be provided to the recognition network element 74 as a portion of the word recognition information 88 .
  • the recognition network element 74 may calculate a confidence measure for each of the candidate words 92 .
  • the best candidate word may be assigned a confidence measure of a reference value (e.g., 0) and a confidence measure relative to the reference value may be calculated for each other candidate word as described above.
  • the confidence measure for each of the other candidate words may be compared to a threshold and those candidate words having a confidence measure within the threshold may be selected as selected candidate words 94 .
  • the selected candidate words 94 may then be utilized for predicting likely follower words 96 to determine the recognition network for each word to be recognized as described above. In this regard, as can be seen from FIG.
  • likely follower words 96 are predicted corresponding to each of the selected candidate words 94 for each given word of the sequence.
  • the corresponding confidence measure for word candidate w11 may be zero.
  • word candidate w12 and word candidate w13 may each meet the threshold criteria with respect to the best word candidate and may thus be selected candidate words along with word candidate w11. Accordingly, likely follower words for each of the selected candidate words may be determined using, for example, language models.
  • follower words fw111 to fw11w may correspond to word candidate w11.
  • follower words fw121 to fw12d may correspond to word candidate w12.
  • follower words fw131 to fw13g may correspond to word candidate w13.
  • Selected candidate words are also shown for each other word of the sequence (e.g., 94′, 94″, and 94‴).
  • the composite score may include an acoustic score and a language score, which may be known as an LM n-gram.
  • the acoustic score represents a probability that the word candidate (for example w11) matches the corresponding spoken word (for example word1) based only on the sound of the corresponding spoken word.
  • the acoustic score may be stored in association with each word or node.
  • the language score takes into account language attributes such as grammar to determine the probability that a particular word candidate (for example w11) matches the spoken word (for example word1) being analyzed based on language probabilities accessible to the application that are associated with each consecutive word pair (for example word1 and word2).
  • the language score is defined for each consecutive word pair, which may also be called an arc or transition.
  • the language score may be calculated based on language probabilities that may be stored in a memory of the mobile terminal 10 , or otherwise accessible to a device practicing embodiments of the invention.
  • the composite score may also include scaling in order to balance between the acoustic and language scores.
  • A sentence level search may be performed once candidate recognized words have been obtained for all word segments (e.g., w11, . . . , w1a, w21, . . . , w2b, . . . ), based on the language model scores and the obtained acoustic model scores.
  • the sentence level search provides the most likely sequence of words in the word lattice, which may be an output of the speech recognition decoder/engine 76 .
  • the word lattice 90 may be constructed one word at a time.
  • the candidate words corresponding to word 1 may be assembled prior to the assembly of the candidate words corresponding to word 2 .
  • the speech recognition decoder/engine 76 may be configured to produce any number of candidate words corresponding to each of the words of the sentence or phrase.
  • the speech recognition decoder/engine 76 may be configured to calculate only a top ten or any other selected number of candidate words for each corresponding word of the sentence or phrase.
  • the candidate words 92 may be presented, listed, organized, etc. in order of composite score.
  • the candidate words 92 may be ranked in order of the likelihood that each candidate word matches the actual corresponding spoken word based on a balance between both acoustic and language scores.
  • Candidate sentences may then be constructed based on the candidate words 92 by constructing sentences including the candidate words 92 .
  • the speech recognition decoder/engine 76 may be configured to determine any number of candidate sentences.
  • the speech recognition decoder/engine 76 may be configured to determine ten candidate sentences, or n-best sentences ranked according to a summation of the composite scores of the candidate words.
  • the candidate sentences may be organized, listed, presented or otherwise ranked in order of likelihood that the candidate sentence correctly matches the spoken sentence or phrase based on a balance between acoustic and language scores of each of the candidate words of the candidate sentence.
  • the candidate sentences may have a corresponding composite score or sentence score.
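For illustration, ranking candidate sentences by the summation of their words' composite scores might look like the following sketch; the sentence representation here is an assumption:

```python
def n_best_sentences(candidate_sentences, n=10):
    # Each candidate sentence is assumed to be a list of
    # (word, composite_score) pairs; the sentence score is the sum.
    scored = [
        (sum(score for _, score in sentence), [word for word, _ in sentence])
        for sentence in candidate_sentences
    ]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)[:n]
```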
  • the candidate sentences may then be communicated to the interface element 80 for output to a user or to another application.
  • the interface element 80 may present the candidate sentences to the user for confirmation, modification or selection via, for example, a display.
  • the presentation of the candidate sentences may be accomplished in a list format, such as by listing a specific number of candidate sentences on a display and enabling the user to select a best or correct one of the candidate sentences.
  • the best or correct one of the candidate sentences should be understood to include the candidate sentence that matches or most closely matches the actual spoken sentence or phrase.
  • the user may be presented with a complete list of all candidate sentences or a selected number of candidate sentences in which remaining candidate sentences may be viewed at the option of the user if none of the currently displayed candidate sentences include the best or correct one of the candidate sentences.
  • the user may be presented with a single candidate sentence at any given time, which represents the candidate sentence with the highest composite score that has not yet been viewed by the user.
  • the user may again be given the option to view the next most likely candidate sentence if the currently displayed candidate sentence is not the best or correct one of the candidate sentences.
  • the user may use the interface element 80 to control attributes of the speech recognition decoder/engine 76 such as, for example, the number of candidate sentences to generate, the number of candidate sentences to display, the order in which to display candidate sentences, etc.
  • prediction of the recognition network for a next word to be recognized may depend at least in part upon whether a match or a mismatch occurs between training and testing sets of a particular language model.
  • in the case of a match, the approach discussed above may be employed, while in the case of a mismatch a predefined set of words may be utilized.
  • the predefined set may include a set of frequently used words and/or acoustic matching candidates appended to the likely follower words to form the recognition network.
  • FIG. 6 is a flowchart of a system, method and program product according to exemplary embodiments of the invention. It will be understood that each block or step of the flowcharts, and combinations of blocks in the flowcharts, can be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions. For example, one or more of the procedures described above may be embodied by computer program instructions. In this regard, the computer program instructions which embody the procedures described above may be stored by a memory device of a mobile terminal and executed by a built-in processor in a mobile terminal.
  • any such computer program instructions may be loaded onto a computer or other programmable apparatus (i.e., hardware) to produce a machine, such that the instructions which execute on the computer or other programmable apparatus create means for implementing the functions specified in the flowchart block(s) or step(s).
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block(s) or step(s).
  • the computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block(s) or step(s).
  • blocks or steps of the flowcharts support combinations of means for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that one or more blocks or steps of the flowcharts, and combinations of blocks or steps in the flowcharts, can be implemented by special purpose hardware-based computer systems which perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
  • one embodiment of a method of providing dynamic vocabulary prediction for speech recognition may include determining a confidence measure for each candidate recognized word for a current word to be recognized at operation 200 .
  • a subset of candidate recognized words may be selected as selected candidate words based on the confidence measure of each one of the candidate recognized words at operation 210 .
  • a recognition network may be determined for a next word to be recognized at operation 220.
  • the recognition network may include likely follower words for each of the selected candidate words.
  • the method may also include determining candidate words for the next word to be recognized based on a recognition probability associated with each of the likely follower words at an optional operation 230 .
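Taken together, operations 200 through 230 could be sketched as the following loop; the decoder and network-element interfaces shown here are hypothetical placeholders for whatever components implement the corresponding operations:

```python
def recognize_sentence(segments, decoder, network_element, initial_network):
    # Illustrative flow of operations 200-230 over a sequence of word segments.
    network = initial_network
    lattice = []
    for segment in segments:
        # Decode the segment against the current, dynamically predicted network.
        candidates = decoder.n_best(segment, network)
        # Operation 200: determine a confidence measure per candidate word.
        confidences = network_element.confidence_measures(candidates)
        # Operation 210: select the subset within the confidence threshold.
        selected = network_element.select(candidates, confidences)
        # Operation 220: determine the recognition network for the next word.
        network = network_element.predict_network(selected)
        lattice.append(candidates)  # feeds the optional operation 230
    return lattice
```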
  • the above described functions may be carried out in many ways. For example, any suitable means for carrying out each of the functions described above may be employed to carry out embodiments of the invention. In one embodiment, all or a portion of the elements of the invention generally operate under control of a computer program product.
  • the computer program product for performing the methods of embodiments of the invention includes a computer-readable storage medium, such as the non-volatile storage medium, and computer-readable program code portions, such as a series of computer instructions, embodied in the computer-readable storage medium.

Abstract

An apparatus for providing dynamic vocabulary prediction for setting up a speech recognition network of resource-constrained portable devices may include a recognition network element. The recognition network element may be configured to determine a confidence measure for each candidate recognized word for a current word to be recognized. The recognition network element may also be configured to select a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words, and determine a recognition network for a next word to be recognized, the recognition network including likely follower words for each of the selected candidate words, e.g., using a language model and frequently used words.

Description

    TECHNOLOGICAL FIELD
  • Embodiments of the present invention relate generally to speech processing technology and, more particularly, relate to a method, apparatus, and computer program product for providing dynamic vocabulary prediction for setting up a speech recognition network of resource-constrained portable devices.
  • BACKGROUND
  • The modern communications era has brought about a tremendous expansion of wireline and wireless networks. Computer networks, television networks, and telephony networks are experiencing an unprecedented technological expansion, fueled by consumer demand. Wireless and mobile networking technologies have addressed related consumer demands, while providing more flexibility and immediacy of information transfer.
  • Current and future networking technologies continue to facilitate ease of information transfer and convenience to users. One area in which there is a demand to increase ease of information transfer relates to the delivery of services to a user of a mobile terminal. The services may be in the form of a particular media or communication application desired by the user, such as a music player, a game player, an electronic book, short messages, email, etc. The services may also be in the form of interactive applications in which the user may respond to a network device in order to perform a task, play a game or achieve a goal. The services may be provided from a network server or other network device, or even from the mobile terminal such as, for example, a mobile telephone, a mobile television, a mobile computer, a mobile gaming system, etc.
  • In many applications, it is necessary for the user to receive audio information such as oral feedback or instructions from the network or mobile terminal or for the user to give oral instructions or feedback to the network or mobile terminal. Such applications may provide for a user interface that does not rely on substantial manual user activity. In other words, the user may interact with the application in a hands-free or semi-hands free environment. An example of such an application may be paying a bill, ordering a program, requesting and receiving driving instructions, etc. Other applications may convert oral speech into text or perform some other function based on recognized speech, such as dictating a short message service (SMS) or email, etc. In order to support these and other applications, speech recognition applications (applications that produce text from speech), speech synthesis applications (applications that produce speech from text), and other speech processing devices are becoming more common.
  • Speech recognition, which may be referred to as automatic speech recognition (ASR), may be conducted by numerous different types of applications. A dictation engine, which may be employed for isolated word speech recognition, is one example of such an application which may include a large vocabulary of words that may be recognized. For example, the dictation engine may include a vocabulary set of 100,000 words or more. Each word of the vocabulary may have a corresponding acoustic model formed by concatenating subword acoustic models, such as phonemic HMMs. Speech recognition, such as may be performed by a Viterbi decoder, often involves comparing speech to various ones of the acoustic models in order to find a model most likely to have produced the speech. During a speech recognition process, it may be desirable for speech recognition to be performed on a subset of the entire vocabulary in order to reduce the number of models that must be compared to a given speech sample, so that it can be used in a resource-constrained embedded system with low memory and computational complexity. However, a typical recognition vocabulary for a next word to be recognized is often formed as a subset of the entire vocabulary based on a fixed number of candidate words and information from the language model. This conventional mechanism can result in a large runtime memory requirement.
  • However, with the ubiquitous nature of mobile terminals which may be resource constrained, it is becoming increasingly desirable to improve the performance of mobile terminals without increasing requirements for memory size and processing power. Accordingly, it may be desirable to provide speech recognition capabilities that avoid the disadvantages described above.
  • BRIEF SUMMARY
  • A method, apparatus and computer program product are therefore provided for providing dynamic vocabulary prediction for speech recognition. As such, for example, an efficient dynamic vocabulary prediction for large vocabulary isolated speech recognition in resource-constrained systems may be provided. According to exemplary embodiments of the present invention, a recognition network may be dynamically created as a subset of a vocabulary of words. In this regard, rather than selecting a fixed number of candidate words which may be compared to a speech sample for recognition, embodiments of the present invention dynamically generate a recognition network for each word to be recognized. Furthermore, embodiments of the present invention account for the fact that even previously recognized words may not have been recognized properly in defining the recognition network for each word to be recognized. Thus, flexible and efficient speech recognition may be provided.
  • In one exemplary embodiment, a method of providing dynamic vocabulary prediction for speech recognition is provided. The method includes determining a confidence measure for each candidate recognized word for a current word to be recognized, selecting a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words, and determining a recognition network for a next word to be recognized. The recognition network may include likely follower words for each of the selected candidate words using a language model and supplementary words.
  • In another exemplary embodiment, a computer program product for providing dynamic vocabulary prediction for speech recognition is provided. The computer program product includes at least one computer-readable storage medium having computer-readable program code portions stored therein. The computer-readable program code portions include first, second and third executable portions. The first executable portion is for determining a confidence measure for each candidate recognized word for a current word to be recognized. The second executable portion is for selecting a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words. The third executable portion is for determining a recognition network for a next word to be recognized. The recognition network may include likely follower words for each of the selected candidate words using a language model and supplementary words.
  • In another exemplary embodiment, an apparatus for providing dynamic vocabulary prediction for speech recognition is provided. The apparatus includes a recognition network element. The recognition network element may be configured to determine a confidence measure for each candidate recognized word for a current word to be recognized. The recognition network element may also be configured to select a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words, and determine a recognition network for a next word to be recognized. The recognition network may include likely follower words for each of the selected candidate words using a language model and supplementary words.
  • In another exemplary embodiment, an apparatus for providing dynamic vocabulary prediction for speech recognition is provided. The apparatus includes means for determining a confidence measure for each candidate recognized word for a current word to be recognized, means for selecting a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words and means for determining a recognition network for a next word to be recognized. The recognition network may include likely follower words for each of the selected candidate words using a language model and supplementary words.
  • In another exemplary embodiment, a system for providing dynamic vocabulary prediction for speech recognition is provided. The system may include a speech processing element, a speech recognition engine and a recognition network element. The speech processing element may be configured to segment input speech into a series of words, including a current word to be recognized and a next word to be recognized, and to perform feature extraction. The speech recognition engine may be configured to determine candidate recognized words corresponding to each word of the series of words based on a recognition network dynamically generated for each word of the series of words. The recognition network element may be configured to determine a confidence measure for each candidate recognized word for the current word to be recognized, to select a subset of candidate recognized words for the current word to be recognized as selected candidate words based on the confidence measure of each one of the candidate recognized words for the current word to be recognized, and to determine a next recognition network for a next word to be recognized. The next recognition network may include likely follower words for each of the selected candidate words using a language model and supplementary words.
  • Embodiments of the invention may provide a method, apparatus and computer program product for employment in systems to enhance speech processing. As a result, for example, mobile terminals and other electronic devices may benefit from an ability to perform speech processing in an efficient manner without suffering performance degradation. Accordingly, accurate word recognition may be performed using relatively small amounts of resources.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
  • Having thus described embodiments of the invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
  • FIG. 1 is a schematic block diagram of a mobile terminal according to an exemplary embodiment of the present invention;
  • FIG. 2 is a schematic block diagram of a wireless communications system according to an exemplary embodiment of the present invention;
  • FIG. 3 illustrates a block diagram of a system for providing dynamic vocabulary prediction for speech recognition according to an exemplary embodiment of the present invention;
  • FIG. 4 shows graphical views illustrating n-best distribution in terms of decoder vocabulary prediction rate and predicted vocabulary size for an exemplary embodiment of the present invention;
  • FIG. 5 illustrates a sequence of segmented words and corresponding word lattice according to an exemplary embodiment of the present invention; and
  • FIG. 6 is a flowchart according to an exemplary method for providing dynamic vocabulary prediction for speech recognition according to an exemplary embodiment of the present invention.
  • DETAILED DESCRIPTION
  • Embodiments of the present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the invention are shown. Indeed, the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout.
  • FIG. 1 illustrates a block diagram of a mobile terminal 10 that would benefit from embodiments of the present invention. It should be understood, however, that a mobile telephone as illustrated and hereinafter described is merely illustrative of one type of mobile terminal that would benefit from embodiments of the present invention and, therefore, should not be taken to limit the scope of embodiments of the present invention. While one embodiment of the mobile terminal 10 is illustrated and will be hereinafter described for purposes of example, other types of mobile terminals, such as portable digital assistants (PDAs), pagers, mobile computers, mobile televisions, gaming devices, laptop computers, cameras, video recorders, GPS devices and other types of voice and text communications systems, can readily employ embodiments of the present invention. Furthermore, devices that are not mobile may also readily employ embodiments of the present invention.
  • The system and method of embodiments of the present invention will be primarily described below in conjunction with mobile communications applications. However, it should be understood that the system and method of embodiments of the present invention can be utilized in conjunction with a variety of other applications, both in the mobile communications industries and outside of the mobile communications industries.
  • The mobile terminal 10 includes an antenna 12 (or multiple antennae) in operable communication with a transmitter 14 and a receiver 16. The mobile terminal 10 further includes a controller 20 or other processing element that provides signals to and receives signals from the transmitter 14 and receiver 16, respectively. The signals include signaling information in accordance with the air interface standard of the applicable cellular system, and also user speech, received data and/or user generated data. In this regard, the mobile terminal 10 is capable of operating with one or more air interface standards, communication protocols, modulation types, and access types. By way of illustration, the mobile terminal 10 is capable of operating in accordance with any of a number of first, second, third and/or fourth-generation communication protocols or the like. For example, the mobile terminal 10 may be capable of operating in accordance with second-generation (2G) wireless communication protocols IS-136 (TDMA), GSM, and IS-95 (CDMA), or with third-generation (3G) wireless communication protocols, such as UMTS, CDMA2000, WCDMA and TD-SCDMA, with fourth-generation (4G) wireless communication protocols or the like.
  • It is understood that the controller 20 includes circuitry desirable for implementing audio and logic functions of the mobile terminal 10. For example, the controller 20 may be comprised of a digital signal processor device, a microprocessor device, and various analog to digital converters, digital to analog converters, and other support circuits. Control and signal processing functions of the mobile terminal 10 are allocated between these devices according to their respective capabilities. The controller 20 thus may also include the functionality to convolutionally encode and interleave message and data prior to modulation and transmission. The controller 20 can additionally include an internal voice coder, and may include an internal data modem. Further, the controller 20 may include functionality to operate one or more software programs, which may be stored in memory. For example, the controller 20 may be capable of operating a connectivity program, such as a conventional Web browser. The connectivity program may then allow the mobile terminal 10 to transmit and receive Web content, such as location-based content and/or other web page content, according to a Wireless Application Protocol (WAP), Hypertext Transfer Protocol (HTTP) and/or the like, for example.
  • The mobile terminal 10 may also comprise a user interface including an output device such as a conventional earphone or speaker 24, a ringer 22, a microphone 26, a display 28, and a user input interface, all of which are coupled to the controller 20. The user input interface, which allows the mobile terminal 10 to receive data, may include any of a number of devices allowing the mobile terminal 10 to receive data, such as a keypad 30, a touch display (not shown) or other input device. In embodiments including the keypad 30, the keypad 30 may include the conventional numeric (0-9) and related keys (#, *), and other keys used for operating the mobile terminal 10. Alternatively, the keypad 30 may include a conventional QWERTY keypad arrangement. The keypad 30 may also include various soft keys with associated functions. In addition, or alternatively, the mobile terminal 10 may include an interface device such as a joystick or other user input interface. The mobile terminal 10 further includes a battery 34, such as a vibrating battery pack, for powering various circuits that are required to operate the mobile terminal 10, as well as optionally providing mechanical vibration as a detectable output.
  • The mobile terminal 10 may further include a user identity module (UIM) 38. The UIM 38 is typically a memory device having a processor built in. The UIM 38 may include, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), etc. The UIM 38 typically stores information elements related to a mobile subscriber. In addition to the UIM 38, the mobile terminal 10 may be equipped with memory. For example, the mobile terminal 10 may include volatile memory 40, such as volatile Random Access Memory (RAM) including a cache area for the temporary storage of data. The mobile terminal 10 may also include other non-volatile memory 42, which can be embedded and/or may be removable. The non-volatile memory 42 can additionally or alternatively comprise an EEPROM, flash memory or the like, such as that available from the SanDisk Corporation of Sunnyvale, Calif., or Lexar Media Inc. of Fremont, Calif. The memories can store any of a number of pieces of information, and data, used by the mobile terminal 10 to implement the functions of the mobile terminal 10. For example, the memories can include an identifier, such as an international mobile equipment identification (IMEI) code, capable of uniquely identifying the mobile terminal 10.
  • FIG. 2 is a schematic block diagram of a wireless communications system according to an exemplary embodiment of the present invention. Referring now to FIG. 2, an illustration of one type of system that would benefit from embodiments of the present invention is provided. The system includes a plurality of network devices. As shown, one or more mobile terminals 10 may each include an antenna 12 for transmitting signals to and for receiving signals from a base site or base station (BS) 44. The base station 44 may be a part of one or more cellular or mobile networks each of which includes elements required to operate the network, such as a mobile switching center (MSC) 46. As well known to those skilled in the art, the mobile network may also be referred to as a Base Station/MSC/Interworking function (BMI). In operation, the MSC 46 is capable of routing calls to and from the mobile terminal 10 when the mobile terminal 10 is making and receiving calls. The MSC 46 can also provide a connection to landline trunks when the mobile terminal 10 is involved in a call. In addition, the MSC 46 can be capable of controlling the forwarding of messages to and from the mobile terminal 10, and can also control the forwarding of messages for the mobile terminal 10 to and from a messaging center. It should be noted that although the MSC 46 is shown in the system of FIG. 2, the MSC 46 is merely an exemplary network device and embodiments of the present invention are not limited to use in a network employing an MSC.
  • The MSC 46 can be coupled to a data network, such as a local area network (LAN), a metropolitan area network (MAN), and/or a wide area network (WAN). The MSC 46 can be directly coupled to the data network. In one typical embodiment, however, the MSC 46 is coupled to a gateway device (GTW) 48, and the GTW 48 is coupled to a WAN, such as the Internet 50. In turn, devices such as processing elements (e.g., personal computers, server computers or the like) can be coupled to the mobile terminal 10 via the Internet 50. For example, as explained below, the processing elements can include one or more processing elements associated with a computing system 52 (two shown in FIG. 2), origin server 54 (one shown in FIG. 2) or the like, as described below.
  • The BS 44 can also be coupled to a signaling GPRS (General Packet Radio Service) support node (SGSN) 56. As known to those skilled in the art, the SGSN 56 is typically capable of performing functions similar to the MSC 46 for packet switched services. The SGSN 56, like the MSC 46, can be coupled to a data network, such as the Internet 50. The SGSN 56 can be directly coupled to the data network. In a more typical embodiment, however, the SGSN 56 is coupled to a packet-switched core network, such as a GPRS core network 58. The packet-switched core network is then coupled to another GTW 48, such as a GTW GPRS support node (GGSN) 60, and the GGSN 60 is coupled to the Internet 50. In addition to the GGSN 60, the packet-switched core network can also be coupled to a GTW 48. Also, the GGSN 60 can be coupled to a messaging center. In this regard, the GGSN 60 and the SGSN 56, like the MSC 46, may be capable of controlling the forwarding of messages, such as MMS messages. The GGSN 60 and SGSN 56 may also be capable of controlling the forwarding of messages for the mobile terminal 10 to and from the messaging center.
  • In addition, by coupling the SGSN 56 to the GPRS core network 58 and the GGSN 60, devices such as a computing system 52 and/or origin server 54 may be coupled to the mobile terminal 10 via the Internet 50, SGSN 56 and GGSN 60. In this regard, devices such as the computing system 52 and/or origin server 54 may communicate with the mobile terminal 10 across the SGSN 56, GPRS core network 58 and the GGSN 60. By directly or indirectly connecting mobile terminals 10 and the other devices (e.g., computing system 52, origin server 54, etc.) to the Internet 50, the mobile terminals 10 may communicate with the other devices and with one another, such as according to the Hypertext Transfer Protocol (HTTP) and/or the like, to thereby carry out various functions of the mobile terminals 10.
  • Although not every element of every possible mobile network is shown and described herein, it should be appreciated that the mobile terminal 10 may be coupled to one or more of any of a number of different networks through the BS 44. In this regard, the network(s) may be capable of supporting communication in accordance with any one or more of a number of first-generation (1G), second-generation (2G), 2.5G, third-generation (3G), 3.9G, fourth-generation (4G) mobile communication protocols or the like. For example, one or more of the network(s) can be capable of supporting communication in accordance with 2G wireless communication protocols IS-136 (TDMA), GSM, and IS-95 (CDMA). Also, for example, one or more of the network(s) can be capable of supporting communication in accordance with 2.5G wireless communication protocols GPRS, Enhanced Data GSM Environment (EDGE), or the like. Further, for example, one or more of the network(s) can be capable of supporting communication in accordance with 3G wireless communication protocols such as a Universal Mobile Telephone System (UMTS) network employing Wideband Code Division Multiple Access (WCDMA) radio access technology. Some narrow-band AMPS (NAMPS), as well as TACS, network(s) may also benefit from embodiments of the present invention, as should dual or higher mode mobile stations (e.g., digital/analog or TDMA/CDMA/analog phones).
  • The mobile terminal 10 can further be coupled to one or more wireless access points (APs) 62. The APs 62 may comprise access points configured to communicate with the mobile terminal 10 in accordance with techniques such as, for example, radio frequency (RF), Bluetooth (BT), infrared (IrDA) or any of a number of different wireless networking techniques, including wireless LAN (WLAN) techniques such as IEEE 802.11 (e.g., 802.11a, 802.11b, 802.11g, 802.11n, etc.), WiMAX techniques such as IEEE 802.16, and/or ultra wideband (UWB) techniques such as IEEE 802.15 and/or the like. The APs 62 may be coupled to the Internet 50. Like with the MSC 46, the APs 62 can be directly coupled to the Internet 50. In one embodiment, however, the APs 62 are indirectly coupled to the Internet 50 via a GTW 48. Furthermore, in one embodiment, the BS 44 may be considered as another AP 62. As will be appreciated, by directly or indirectly connecting the mobile terminals 10 and the computing system 52, the origin server 54, and/or any of a number of other devices, to the Internet 50, the mobile terminals 10 can communicate with one another, the computing system, etc., to thereby carry out various functions of the mobile terminals 10, such as to transmit data, content or the like to, and/or receive content, data or the like from, the computing system 52. As used herein, the terms “data,” “content,” “information” and similar terms may be used interchangeably to refer to data capable of being transmitted, received and/or stored in accordance with embodiments of the present invention. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present invention.
  • Although not shown in FIG. 2, in addition to or in lieu of coupling the mobile terminal 10 to computing systems 52 across the Internet 50, the mobile terminal 10 and computing system 52 may be coupled to one another and communicate in accordance with, for example, RF, BT, IrDA or any of a number of different wireline or wireless communication techniques, including LAN, WLAN, WiMAX, UWB techniques and/or the like. One or more of the computing systems 52 can additionally, or alternatively, include a removable memory capable of storing content, which can thereafter be transferred to the mobile terminal 10. Further, the mobile terminal 10 can be coupled to one or more electronic devices, such as printers, digital projectors and/or other multimedia capturing, producing and/or storing devices (e.g., other terminals). Like with the computing systems 52, the mobile terminal 10 may be configured to communicate with the portable electronic devices in accordance with techniques such as, for example, RF, BT, IrDA or any of a number of different wireline or wireless communication techniques, including USB, LAN, WLAN, WiMAX, UWB techniques and/or the like.
  • In an exemplary embodiment, data associated with a speech recognition application or other speech processing application may be communicated over the system of FIG. 2 between a mobile terminal, which may be similar to the mobile terminal 10 of FIG. 1 and a network device of the system of FIG. 2, or between mobile terminals. As such, it should be understood that the system of FIG. 2 need not be employed for communication between mobile terminals or between a network device and the mobile terminal, but rather FIG. 2 is merely provided for purposes of example. Furthermore, it should be understood that embodiments of the present invention may be resident on a communication device such as the mobile terminal 10, or may be resident on a network device or other device accessible to the communication device.
  • In a typical speech recognition application such as, for example, isolated word based speech recognition, a speaker may be asked to speak with a clear pause between words in order to enable the word to be segmented by voice activity detection (VAD). It should be noted that while speaking with a clear pause between the words may enhance the accuracy of a speech recognition application, it is also possible to apply the principles disclosed herein to normal speech. However, recognition error rate may be increased in such applications.
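For illustration only, a toy energy-based segmenter in the spirit of VAD might look as follows; the frame representation, threshold, and pause length are assumptions rather than the disclosed design:

```python
def vad_segments(frame_energies, threshold, min_pause_frames):
    # Toy energy-based VAD: close a word segment whenever at least
    # min_pause_frames consecutive frames fall below the energy threshold.
    segments, current, silence = [], [], 0
    for index, energy in enumerate(frame_energies):
        if energy >= threshold:
            current.append(index)
            silence = 0
        elif current:
            silence += 1
            if silence >= min_pause_frames:
                segments.append((current[0], current[-1]))
                current, silence = [], 0
    if current:
        segments.append((current[0], current[-1]))
    return segments  # list of (start_frame, end_frame) word boundaries
```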
  • VAD may be used to detect word boundaries so that speech recognition may be carried out only on a single segmented word at any given time. The n-best word candidates may then be given for each segmented word. Once the same process has been performed for each word in an utterance, a word lattice may then be produced including each of the n-best word candidates for each corresponding word of the utterance. The word candidates of the word lattice may be listed or otherwise organized in order of a score that represents a likelihood that the word candidate is the correct word. In this regard, one way of scoring the word candidates is to provide an acoustic score and a language score such as a language model (LM) n-gram value. The acoustic score is a value based on sound alone. In other words, the acoustic score represents a probability that the word candidate matches the spoken word being analyzed based only on the sound of the spoken word. Meanwhile, the language score takes into account language attributes such as grammar to determine the probability that a particular word candidate matches the spoken word being analyzed based on language probabilities accessible to the application. For example, if the first word of an utterance is “I”, then the probability of the second word spoken being “is” would be very low, while the probability of the second word spoken being “am” would be much higher. It is traditional to use the term language model (LM) for the statistical n-gram models of word sequences that use the previous n-1 words to predict the next one. The n-gram LM may be trained on a large text corpus. After calculating a value for the acoustic score and the language score, a combined or composite score may be acquired that may subsequently be used to order each of the candidate words.
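The "I am" versus "I is" example can be made concrete with a toy bigram table; a real LM n-gram model is trained on a large text corpus and backs off for unseen word pairs:

```python
import math

# Toy bigram table (illustrative probabilities, not trained values).
BIGRAM_PROBS = {("i", "am"): 0.6, ("i", "is"): 0.001}

def language_score(prev_word, word, floor=1e-6):
    # Log-probability of `word` following `prev_word`, with a crude
    # floor standing in for a proper back-off scheme.
    return math.log(BIGRAM_PROBS.get((prev_word.lower(), word.lower()), floor))

# "I am" scores far higher than "I is", matching the example above.
assert language_score("I", "am") > language_score("I", "is")
```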
  • Based on the recognition of a particular word, it may be possible to use statistical information regarding the likelihood of other words following the particular word in order to create a recognition network including the words that are most likely to follow the particular word (e.g., best candidate followers). The recognition network may then be utilized for comparing models of words in the recognition network to sample speech, such as a subsequent word, following the particular word. By creating the recognition network as a subset of vocabulary words, runtime memory and other resource usage may be reduced since a smaller number of models are compared to the sample speech.
  • However, for any given word for which word recognition has been conducted as described above, there is a chance that the word was recognized improperly. Accordingly, it may be desirable to create the recognition network, not just based on the best candidate followers for the best word (e.g., the word having the highest composite score), but based on a list of all of the best candidate followers for several best candidate recognized words. In other words, based on the acoustic and language scores generated during a recognition operation, a list of best candidate recognized words may be generated. In one example, a fixed number of best candidate recognized words may be generated based on the combined acoustic and language scores (which may be weighted in some fashion known in the art). For each of the best candidate recognized words, a corresponding list of best candidate followers may form the recognition network.
  • In an effort to further reduce a size of the recognition network, embodiments of the present invention may incorporate a confidence measure to be used for dynamic selection of selected ones of the best candidate recognized words (e.g., selected candidate words) based on a difference between each of the best candidate recognized words and the best candidate recognized word (e.g., the candidate recognized word having the highest combined acoustic and language score). Only best candidate followers associated with the selected candidate words (i.e., candidate words that have a confidence measure within a threshold distance from the best candidate recognized word) may then be used to form the recognition network thereby providing dynamic vocabulary prediction.
  • FIG. 3 illustrates a block diagram of a system for providing dynamic vocabulary prediction for speech recognition according to an exemplary embodiment of the present invention. An exemplary embodiment of the invention will now be described with reference to FIG. 3, in which certain elements of a system for providing dynamic vocabulary prediction for speech recognition are displayed. The system of FIG. 3 will be described, for purposes of example, in connection with the mobile terminal 10 of FIG. 1. However, it should be noted that the system of FIG. 3, may also be employed in connection with a variety of other devices, both mobile and fixed, and therefore, embodiments of the present invention should not be limited to application on devices such as the mobile terminal 10 of FIG. 1. It should also be noted, that while FIG. 3 illustrates one example of a configuration of a system for providing dynamic vocabulary prediction for speech recognition, numerous other configurations may also be used to implement embodiments of the present invention. In this regard, although segmentation of words is illustrated in FIG. 3 as being performed by a VAD element 70, any other known mechanism for sentence segmentation may alternatively be employed.
  • Referring now to FIG. 3, a system 68 for providing dynamic vocabulary prediction for speech recognition is provided. The system 68 includes a speech processing element for segmenting input speech 72 into speech samples such as individual words, a recognition network element 74 and a speech recognition decoder/engine 76. As stated above, the speech processing element of one exemplary embodiment may be the VAD element 70. The VAD element 70 may be in communication with the speech recognition decoder/engine 76 via the recognition network element 74, as shown in FIG. 3, or alternatively, the VAD element 70 and the speech recognition decoder/engine 76 could be communicatively coupled independent of the recognition network element 74. In either case, the recognition network element 74 may provide an input to the speech recognition decoder/engine 76 for providing information regarding a recognition network.
  • In an exemplary embodiment, the VAD element 70, the recognition network element 74 and the speech recognition decoder/engine 76 may each be embodied by and/or operate under the control of a processing element. In this regard, some or each of the VAD element 70, the recognition network element 74 and the speech recognition decoder/engine 76 may be embodied by and/or operate under the control of a single processing element. Alternatively, a single or even multiple processing elements may perform all of the functions associated with one or more of the VAD element 70, the recognition network element 74 and the speech recognition decoder/engine 76. Processing elements as described herein may be embodied in many ways. For example, a processing element may be embodied as a processor, a coprocessor, a controller or various other processing means or devices including integrated circuits such as, for example, an ASIC (application specific integrated circuit).
  • As shown in FIG. 3, the VAD 70, the recognition network element 74 and the speech recognition decoder/engine 76 according to one embodiment may be portions of a speech recognition application 78 that operates under the control of a processing element such as the controller 20 of the mobile terminal 10. The speech recognition application 78 may also include an interface element 80, which may be any means or device capable of communicating the output of the speech recognition decoder/engine 76 to, for example, a display to allow user interface with regard to the output of the speech recognition decoder/engine 76. Alternatively, the interface element 80 may provide the output of the speech recognition decoder/engine 76 to another application for further processing. In an exemplary embodiment, the VAD 70, the recognition network element 74, the speech recognition decoder/engine 76 and/or the interface element 80 may be embodied in the form of software applications executable by the controller 20. As such, instructions for performing the functions of the VAD 70, the recognition network element 74, the speech recognition decoder/engine 76 and/or the interface element 80 may be stored in a memory (for example, either the volatile memory 40 or the non-volatile memory 42) of the mobile terminal 10 and executed by the controller 20.
  • The VAD 70 may be any device or means embodied in either hardware, software, or a combination of hardware and software that is capable of monitoring signals including voice data (e.g., the input speech) and determining whether voice activity is present. For example, in response to receipt of a call, such as a wireless telephone call, the receiver 16 may communicate call data to the VAD 70. The call data may be any type of call including an IP call (for example, VoIP, Internet call, Skype, etc.) or a conference call. The call data may include caller voice data which can be detected by the VAD 70. Additionally, user voice data input into the mobile terminal 10 by, for example, the microphone 26 may be communicated to the VAD 70 and detected. In response to detection of voice data, the VAD 70 may be capable of signaling periods of silence and periods of voice activity in the voice data. Accordingly, the VAD 70 may be used to detect and/or indicate word boundaries. For example, if the speech recognition application 78 is an isolated word speech dictation application, the user may be prompted to speak each word with a clear pause between words so that the VAD 70 may detect word boundaries and communicate segmented voice data 82 to the recognition network element 74 and/or to the speech recognition decoder/engine 76.
  • In an alternative exemplary embodiment, illustrated in dotted lines in FIG. 3, the system may include a feature extractor 69 configured to extract feature vectors 71 from the input speech 72 and to output the feature vectors 71 to the VAD 70 and the recognition network element 74. In such an embodiment, the recognition network element 74 may receive the feature vectors 71 from the feature extractor 69 and may receive information from the VAD 70 as to which feature vectors correspond to speech and which correspond to silence.
  • The speech recognition decoder/engine 76 may be any speech recognition decoder/engine known in the art. In an exemplary embodiment, the speech recognition decoder/engine 76 may be any device or means embodied in either hardware, software, or a combination of hardware and software that is capable of examining speech samples from the received segmented voice data 82 and generating acoustic and language scores, as described above, for candidate words corresponding to each word of the segmented voice data 82. In an exemplary embodiment, the speech recognition decoder/engine 76 may be configured to receive recognition network information 84 from the recognition network element 74 so that acoustic and language scores may only be generated for a limited subset of candidate words. In this regard, the recognition network information 84 may include a listing of words of the recognition network and only words of the recognition network may be used for score calculation. Alternatively, the recognition network information 84 may include acoustic modeling information for each word of the recognition network. The mechanism used to determine the recognition network will be described in greater detail below.
  • In an exemplary embodiment, the speech recognition decoder/engine 76 may be in communication with a memory element (e.g., either the volatile memory 40 or the non-volatile memory 42) which may store a large or full vocabulary 86. The large or full vocabulary 86 may include a listing of words and their corresponding phonetic pronunciations. In one embodiment, the speech recognition decoder/engine 76 may access the large or full vocabulary 86 in order to access acoustic modeling information for words of the recognition network to use for composite score and/or other score calculations. However, as an alternative embodiment, the speech recognition decoder/engine 76 may receive the acoustic modeling information for words of the recognition network to use in composite score and/or other score calculations directly from the recognition network element 74. The speech recognition decoder/engine 76 may then perform composite score and/or other score calculations to generate, for example, a list of best word candidates based on the composite scores. Thus, according to embodiments of the present invention, processing and runtime memory resources may be preserved since the recognition network represents a dynamically selected portion of the full set of vocabulary words based upon which recognition operations may be conducted at the speech recognition decoder/engine 76 thereby reducing the number of candidate words upon which recognition operations must be performed.
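A minimal sketch of this idea, assuming the full vocabulary maps words to acoustic models and using a hypothetical scoring routine, is:

```python
def score_against_network(features, network_words, full_vocabulary, acoustic_scorer):
    # full_vocabulary: maps every word to its acoustic model (e.g., a
    # concatenation of subword HMMs); only the dynamically predicted
    # network subset is scored, reducing runtime memory and computation.
    scores = {
        word: acoustic_scorer(features, full_vocabulary[word])
        for word in network_words
    }
    # Return candidates ranked by score (best first).
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)
```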
  • The recognition network element 74 may be any device or means embodied in either hardware, software, or a combination of hardware and software that is capable of determining the recognition network information 84 as described in greater detail below. In an exemplary embodiment, the recognition network element 74 may receive word recognition information 88, which may include the list of best word candidates, from the speech recognition decoder/engine 76. As such, the word recognition information 88 may represent a listing of the most likely candidate words (e.g., n-best candidate words) corresponding to a particular word of the input speech 72 in response to recognition operations (e.g., similar to those described above) being performed on the particular word. Using the word recognition information 88, the recognition network element 74 may be configured to dynamically determine the recognition network for each word. In other words, the recognition network element 74 may be configured to dynamically select a subset of words based on the word recognition information 88 for a current word in order to determine the recognition network for the next word to be recognized. Expressed from an alternative perspective, the recognition network element 74 may dynamically determine a recognition network to be used for recognition of a current word based on word recognition information 88 associated with a word upon which recognition operations were previously performed.
  • In an exemplary embodiment, upon receipt of the word recognition information 88 associated with a word upon which recognition operations were previously performed (e.g., receiving a list of the n-best candidate words based on cumulative scores), the recognition network element 74 may be configured to calculate a confidence measure for each of the n-best candidate words. The best candidate word identified in the word recognition information 88 (e.g., the word among the n-best candidate words having the highest composite score) may be used as a reference value. A distance (in terms of a difference) from the reference value of the best candidate word may be measured or otherwise determined for each other n-best candidate word. Based on the difference between the best candidate word and the remaining n-best candidate words, the recognition network element 74 may be configured to dynamically determine a number of selected candidate words for use in follower word prediction (e.g., prediction of a word likely to follow a current word (or sequence of words) based on language model and/or word frequency information). In an exemplary embodiment, a threshold may be determined and the difference between the best candidate word and the remaining n-best candidate words may be compared to the threshold. Only those n-best candidate words that fall within the threshold (e.g., have a confidence measure close to that of the best candidate word within the threshold amount) may be selected as the selected candidate words. Accordingly, predictions regarding words that are likely to follow a current word being recognized may be based only upon those words that are statistically most likely to be a correct recognition or decode of the current word.
  • The recognition network element 74 may be further configured to utilize language model information to predict likely follower words for each of the selected candidate words. The result of the prediction of likely follower words may produce a list of best candidate followers including the best candidate follower words for each of the selected candidate words thereby defining the recognition network for the next word to be recognized. Acoustic modeling information for the words of the recognition network for the next word to be recognized may then be utilized for word recognition operations of the next word to be recognized for determining best candidate words for the next word to be recognized as well. The process described above may then be repeated for each subsequent word to be recognized in order to generate a word lattice as described in greater detail below. The process described above is considered “dynamic” since the selected candidate words are selected for each word based only on those words that are statistically likely to be relevant, rather than being based on a fixed number of candidate words which may be invariable or constant for each word.
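In rough outline, forming the recognition network for the next word might look like the following sketch; the follower and back-off tables are assumed inputs derived from the language model and word-frequency information:

```python
def predict_recognition_network(selected_candidates, followers, backoff_words):
    # followers: word -> iterable of its most likely follower words
    # (word-pair statistics from the language model).
    # backoff_words: frequent words appended so the next word can still
    # be recognized when the language model has no entry for a candidate.
    network = set(backoff_words)
    for word in selected_candidates:
        network.update(followers.get(word, ()))
    return network
```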
  • Generation of a dynamic decoder vocabulary for the recognition network may improve isolated word dictation. For example, if a word to be recognized is not in the decoder vocabulary, then it may be impossible to have correct recognition thereby providing motivation to increase a size of the decoder vocabulary set to reduce the likelihood of such a failure. Meanwhile, if a larger sized decoder vocabulary set is utilized, a larger memory footprint may be required and more computational resources may be utilized. Thus, generating a recognition network by dynamically generating a candidate list of words that are statistically likely to follow the words statistically most likely to be a correct recognition candidate for an immediately preceding word for which recognition operations have been performed, may reduce both the memory footprint and resource consumption utilized in performing recognition operations.
  • An exemplary embodiment will now be described in the context of a recognition application utilizing a token passing scheme in connection with a Viterbi decoder that is well known in the art, as modified by embodiments of the present invention. In theory, given a word sequence W (i.e. a sentence) and an observation sequence X, a posterior acoustic probability P(W|X) (which is impossible to know in advance) may be calculated by applying Bayes rule as shown below:
  • P(W|X) = P(X|W) · P(W) / P(X),   (1)
  • where P(X) is usually not considered because it is difficult to estimate reliably. In practice, a log function may be applied to produce

  • log(P(X|W)) + λ·log(P(W))   (2)
  • as a recognition hypothesis measure. With regard to equation (2) above, the first term (i.e., log(P(X|W))) may represent an acoustic score, the second term (i.e., log(P(W))) may represent a language score and the term λ may represent a language model scaling factor. The token passing scheme may be used for simplifying the calculation.
  • The n-best recognition candidates (i.e., n-best candidate words) may be given as the output from the decoder, ranked based on token-based accumulative acoustic scores. Token passing can be used for computing the overall most likely sentence hypothesis (accumulative scores) given the acoustic-scored word lists in each word segment and language model probability. By taking acoustic and LM n-gram scores into account in the token passing, an accumulative score may be determined by:
  • accumulative_score_i = accumulative_score_(i-1) + LM_bigram + acoustic_score_i / acoustic_scaling,
  • in which accumulative_score_i represents the accumulative score for the sentence up to and including word_i and accumulative_score_(i-1) represents the accumulative score for the sentence prior to word_i. LM_bigram may represent a language score for word_i, acoustic_score_i may represent an acoustic score for word_i and acoustic_scaling may represent a scaling of the acoustic score. A token passing search may be used to find the n-best recognition candidates based on token-passed accumulative scores and history information (e.g., previous words used in obtaining the cumulative score for a sentence) stored in the tokens. The accumulative scores may be computed from all acoustic scores and scaled language model bigrams along a particular pass. Given ranked n-best recognized word candidates, the language model may again be used to predict a vocabulary set for the next word segment. Both word pairs and backoff unigrams (e.g., common words for the language such as the, and, etc.) may be used for predicting the likely follower words, given the n-best candidate words. Taking a union of all of the follower words as a new vocabulary (i.e., the recognition network), the system may be able to repeatedly recognize the next segment in the sentence given a smaller subset of vocabulary for use in the recognition.
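A minimal sketch of one token-passing update consistent with the accumulative-score formula above; the Token structure and its fields are an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    score: float      # accumulative score along this path
    history: tuple    # previous words on this path

def pass_token(token, word, lm_bigram, acoustic_score, acoustic_scaling):
    # One update of the accumulative-score recursion shown above:
    # the new score adds the language score and the scaled acoustic score,
    # and the word history carried by the token is extended.
    return Token(
        score=token.score + lm_bigram + acoustic_score / acoustic_scaling,
        history=token.history + (word,),
    )
```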
  • As stated above, the confidence measure may be calculated for each of the n-best candidate words. The confidence measure may be calculated according to the following equation, in which Ŵ is the reference word sequence:
  • LLR(W|X) = log{P(W|X) / P(Ŵ|X)} = log{[P(X|W) · P(W)] / [P(X|Ŵ) · P(Ŵ)]} = log{P(X|W) · P(W)} − log{P(X|Ŵ) · P(Ŵ)}.   (3)
  • In equation (3), LLR represents the logarithm of a likelihood ratio. As such, the LLR represents a normalization against a reference score. The reference word sequence can be modeled in several ways, e.g., using an anti-model. In an exemplary embodiment, the reference word sequence may be modeled by obtaining scores for the n-best word sequences. The best scored word sequence (e.g., the word sequence with the highest cumulative score) may be chosen as the reference word sequence.
  • Based on equation (3) above, it may be possible to calculate the confidence measure using equation (4) below. According to equation (4), the confidence measure for each word sequence is determined based on a difference between the accumulative score of the word sequence (e.g., the accumulation of each word score for the given word sequence, up to the current word) and the best accumulative score (e.g., the accumulative score of the best word sequence). In an exemplary embodiment, the confidence measure is normalized for word duration by dividing the difference between the accumulative score of the word sequence and the best accumulative score by the word duration. The confidence measure may therefore be calculated by:
  • confidence = (accumulative_score − best_accumulative_score) / word_duration.   (4)
  • The confidence measure for the best scored word sequence may be established as a reference (which using equation (4) would be zero). The confidence measure of each other word sequence may be compared to the confidence measure of the best scored word sequence to determine, based on a difference between the confidence measures, which of the word sequences are within a threshold amount of difference from the confidence measure of the best scored word sequence. Current words for each of the word sequences that are within the threshold amount of difference may be considered as selected candidate words. Thereafter, given the selected candidate words, the recognition network element 74 may determine the recognition network based on the selected candidate words by predicting likely follower words in order to produce the best candidate follower words for each of the selected candidate words. In an exemplary embodiment, instead of utilizing a normalized accumulative score as described above, it may be possible to utilize a normalized acoustic score for confidence measure calculations.
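By way of illustration only, the confidence calculation of equation (4) and the threshold-based selection of candidate words might be sketched as follows; the tuple layout is an assumption for this sketch, and the default threshold of −0.03 echoes the exemplary embodiment discussed with FIG. 4 below:

```python
def select_candidate_words(hypotheses, threshold=-0.03):
    """Select candidate words whose confidence is within a threshold.

    hypotheses: (current_word, accumulative_score, word_duration) tuples
        for the n-best word sequences; this layout is an assumption.
    """
    best_score = max(score for _, score, _ in hypotheses)
    selected = []
    for word, score, duration in hypotheses:
        # Equation (4): duration-normalized difference from the best
        # score; the best hypothesis gets a reference confidence of 0.
        confidence = (score - best_score) / duration
        if confidence >= threshold:
            selected.append(word)
    return selected
```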
  • FIG. 4 shows graphical views illustrating the n-best distribution in terms of decoder vocabulary prediction rate and predicted vocabulary size for an exemplary embodiment in which the confidence measure of the best word sequence is 0 and the threshold is set at −0.03. In FIG. 4A, the vertical axis represents a percentage of how often a correct next word is within a list of the length indicated on the horizontal axis. Thus, according to this exemplary embodiment, the correct word may be in a 1-best list of words about 93% of the time. Meanwhile, if a 2-best word list is used, the correct word may be in the 2-best list an additional 5% of the time. FIG. 4B shows the same data depicted in FIG. 4A in a cumulative manner. As shown in FIG. 4B, the correct word may be in the 2-best list about 98% of the time (i.e., the sum of 93% and 5% from FIG. 4A). FIG. 4C shows a percentage of an 8-best vocabulary size for n, the number of selected candidate words, ranging from 1 through 8. FIG. 4D shows the same data as illustrated in FIG. 4C in a cumulative manner. Accordingly, if one selected candidate word is utilized, the vocabulary size of the recognition network may be about 37% of a vocabulary set generated using eight selected candidate words. Meanwhile, if two selected candidate words are utilized, the vocabulary size increases by about 13% to a cumulative vocabulary size of about 50% of the vocabulary set generated using eight selected candidate words. Thus, for example, according to an exemplary embodiment, if the number of selected candidate words is two, there is approximately a 98% chance that the correct word will be in the predicted list, and the recognition network will be about 50% smaller than it would be in an eight selected candidate word scenario. Thus, using the 2-best words according to embodiments of the present invention may provide relatively good performance, while adding more candidates to the n-best list may only marginally improve performance at the cost of increased memory usage.
  • An exemplary embodiment will now be described in greater detail with reference to FIG. 5, in which a sequence of spoken words such as a particular sentence or phrase is received as the input speech 72, and output by the VAD 70 as the segmented voice data 82 shown in FIG. 4. In this regard, the segmented voice data 82 includes each of the words (word1 to wordm) of the sentence or phrase. The speech recognition decoder/engine 76 is configured to construct a word lattice 90 from the segmented voice data 82. The word lattice 90 includes candidate words 92 corresponding to each of the words of the sentence or phrase. For example, candidate words w11 to w1n correspond to word1, candidate words wi1 to win correspond to wordi, and candidate words wm1 to wmn correspond to wordm. The candidate words 92 of the word lattice 90 may be listed or otherwise organized such that candidates having the highest composite score are listed or otherwise presented first, while remaining candidate words are listed or otherwise presented in order of descending composite scores. In other words, the candidate words 92 may be presented in a list of best word candidates. In an exemplary embodiment, the list of best word candidates may be a listing of words from a corresponding recognition network for the respective word being recognized, in which the listed words are ranked based on cumulative scores. As described above, the list of best candidate words may be provided to the recognition network element 74 as a portion of the word recognition information 88.
  • During operation, given the candidate words 92, the recognition network element 74 may calculate a confidence measure for each of the candidate words 92. The best candidate word may be assigned a confidence measure of a reference value (e.g., 0) and a confidence measure relative to the reference value may be calculated for each other candidate word as described above. The confidence measure for each of the other candidate words may be compared to a threshold, and those candidate words having a confidence measure within the threshold may be selected as selected candidate words 94. The selected candidate words 94 may then be utilized for predicting likely follower words 96 to determine the recognition network for each word to be recognized as described above. In this regard, as can be seen from FIG. 5, likely follower words 96 are predicted corresponding to each of the selected candidate words 94 for each given word of the sequence. Thus, for example, if it is assumed that word candidate w11 is the best word candidate, the corresponding confidence measure for word candidate w11 may be zero. If the threshold is set to −0.3 and it is assumed that word candidate w12 has a confidence measure of −0.1, word candidate w13 has a confidence measure of −0.25 and all remaining word candidates have confidence measures less than −0.3, then word candidate w12 and word candidate w13 may each meet the threshold criteria with respect to the best word candidate and may thus be selected candidate words along with word candidate w11. Accordingly, likely follower words for each of the selected candidate words may be determined using, for example, language models. Thus, for example, follower words fw111 to fw11w may correspond to word candidate w11, follower words fw121 to fw12d may correspond to word candidate w12, and follower words fw131 to fw13g may correspond to word candidate w13. Selected candidate words are also shown for each other word of the sequence (e.g., 94′, 94″, and 94′″).
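A minimal sketch of this follower-word prediction, assuming a precomputed mapping from words to their likely bigram followers and a list of backoff unigrams, might look as follows (all names are illustrative):

```python
def predict_recognition_network(selected_candidates, bigram_followers,
                                backoff_unigrams, max_followers=50):
    """Build the next-word recognition network as the union of likely
    follower words of the selected candidate words, plus backoff
    unigrams (frequent words of the language)."""
    network = set(backoff_unigrams)
    for word in selected_candidates:
        # Word-pair (bigram) statistics give the most likely followers.
        network.update(bigram_followers.get(word, [])[:max_followers])
    return network
```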
  • As stated above, the composite score may include an acoustic score and a language score, the latter of which may be known as an LM n-gram score. The acoustic score represents a probability that the word candidate (for example, w11) matches the corresponding spoken word (for example, word1) based only on the sound of the corresponding spoken word. In this regard, the acoustic score may be stored in association with each word or node. Meanwhile, the language score takes into account language attributes such as grammar to determine the probability that a particular word candidate (for example, w11) matches the spoken word (for example, word1) being analyzed, based on language probabilities, accessible to the application, that are associated with each consecutive word pair (for example, word1 and word2). In this regard, the language score is defined for each consecutive word pair, which may also be called an arc or transition. The language score may be calculated based on language probabilities that may be stored in a memory of the mobile terminal 10, or otherwise accessible to a device practicing embodiments of the invention. The composite score may also include scaling in order to balance the acoustic and language scores.
  • Thereafter, a sentence level search may be performed. In this regard, after candidate recognized words have been obtained for all word segments (e.g., w11, . . . , w1a, w21, . . . , w2b, . . . ), for example, a sentence level search may be performed for the candidate recognized words based on the language model scores and obtained acoustic model scores. The sentence level search provides the most likely sequence of words in the word lattice, which may be an output of the speech recognition decoder/engine 76.
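For clarity, a brute-force sketch of such a sentence-level search over a small word lattice is shown below; it scores every path by combining per-word acoustic scores with scaled bigram language scores. A real decoder would use dynamic programming (e.g., Viterbi with token passing) rather than enumerating paths, and all names here are assumptions:

```python
import itertools

def sentence_level_search(lattice, lm_bigram, lm_scale=1.0, n=10):
    """Return the n best word sequences through a word lattice.

    lattice: one list per word segment of (word, acoustic_score) pairs.
    lm_bigram(prev, word): language score for a consecutive word pair.
    """
    scored = []
    for path in itertools.product(*lattice):
        words = [word for word, _ in path]
        score = sum(acoustic for _, acoustic in path)
        score += lm_scale * sum(lm_bigram(prev, word)
                                for prev, word in zip(words, words[1:]))
        scored.append((score, words))
    scored.sort(key=lambda item: item[0], reverse=True)
    return scored[:n]
```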
  • In an exemplary embodiment, the word lattice 90 may be constructed one word at a time. For example, the candidate words corresponding to word1 may be assembled prior to the assembly of the candidate words corresponding to word2. The speech recognition decoder/engine 76 may be configured to produce any number of candidate words corresponding to each of the words of the sentence or phrase. For example, the speech recognition decoder/engine 76 may be configured to calculate only a top ten or any other selected number of candidate words for each corresponding word of the sentence or phrase. As stated above, the candidate words 92 may be presented, listed, organized, etc. in order of composite score. As such, the candidate words 92 may be ranked in order of the likelihood that each candidate word matches the actual corresponding spoken word based on a balance between both acoustic and language scores. Candidate sentences may then be constructed based on the candidate words 92 by constructing sentences including the candidate words 92. The speech recognition decoder/engine 76 may be configured to determine any number of candidate sentences. For example, the speech recognition decoder/engine 76 may be configured to determine ten candidate sentences, or n-best sentences ranked according to a summation of the composite scores of the candidate words. The candidate sentences may be organized, listed, presented or otherwise ranked in order of the likelihood that the candidate sentence correctly matches the spoken sentence or phrase based on a balance between acoustic and language scores of each of the candidate words of the candidate sentence. In this regard, the candidate sentences may have a corresponding composite score or sentence score. The candidate sentences may then be communicated to the interface element 80 for output to a user or to another application.
  • The interface element 80 may present the candidate sentences to the user for confirmation, modification or selection via, for example, a display. The presentation of the candidate sentences may be accomplished in a list format, such as by listing a specific number of candidate sentences on a display and enabling the user to select a best or correct one of the candidate sentences. The best or correct one of the candidate sentences should be understood to include the candidate sentence that matches or most closely matches the actual spoken sentence or phrase. In such an embodiment, the user may be presented with a complete list of all candidate sentences or a selected number of candidate sentences in which remaining candidate sentences may be viewed at the option of the user if none of the currently displayed candidate sentences include the best or correct one of the candidate sentences. Alternatively, the user may be presented with a single candidate sentence at any given time, which represents the candidate sentence with the highest composite score that has not yet been viewed by the user. In such an embodiment, the user may again be given the option to view the next most likely candidate sentence if the currently displayed candidate sentence is not the best or correct one of the candidate sentences. In an exemplary embodiment, the user may use the interface element 80 to control attributes of the speech recognition decoder/engine 76 such as, for example, the number of candidate sentences to generate, the number of candidate sentences to display, the order in which to display candidate sentences, etc.
  • It should be noted that in some cases, prediction of the recognition network for a next word to be recognized may depend at least in part upon whether a match or a mismatch occurs between training and testing sets of a particular language model. In a matching case, the discussion provided above may be employed. However, in a mismatching case, it may be desirable to include, for example, a predefined set of supplemental words as part of the recognition network. In an exemplary embodiment, the predefined set may include a set of frequently used words and/or acoustic matching candidates appended to the likely follower words to form the recognition network.
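In the mismatching case, appending the predefined supplemental set might be sketched as follows; the function and parameter names are hypothetical:

```python
def augment_recognition_network(network, frequent_words,
                                acoustic_candidates=()):
    """Append a predefined supplemental set (frequently used words
    and/or acoustic matching candidates) to the recognition network
    when the language model's training and testing sets mismatch."""
    return set(network) | set(frequent_words) | set(acoustic_candidates)
```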
  • FIG. 6 is a flowchart of a system, method and program product according to exemplary embodiments of the invention. It will be understood that each block or step of the flowchart, and combinations of blocks in the flowchart, can be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions. For example, one or more of the procedures described above may be embodied by computer program instructions. In this regard, the computer program instructions which embody the procedures described above may be stored by a memory device of a mobile terminal and executed by a built-in processor in the mobile terminal. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (i.e., hardware) to produce a machine, such that the instructions which execute on the computer or other programmable apparatus create means for implementing the functions specified in the flowchart block(s) or step(s). These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block(s) or step(s). The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block(s) or step(s).
  • Accordingly, blocks or steps of the flowcharts support combinations of means for performing the specified functions, combinations of steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that one or more blocks or steps of the flowcharts, and combinations of blocks or steps in the flowcharts, can be implemented by special purpose hardware-based computer systems which perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
  • In this regard, one embodiment of a method of providing dynamic vocabulary prediction for speech recognition may include determining a confidence measure for each candidate recognized word for a current word to be recognized at operation 200. A subset of candidate recognized words may be selected as selected candidate words based on the confidence measure of each one of the candidate recognized words at operation 210. At operation 220, a recognition network may be determined for a next word to be recognized. The recognition network may include likely follower words for each of the selected candidate words. The method may also include determining candidate words for the next word to be recognized based on a recognition probability associated with each of the likely follower words at an optional operation 230.
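Tying operations 200 through 220 together, one iteration of the method might be sketched as follows; the data layout and names are assumptions for illustration:

```python
def dynamic_vocabulary_step(candidates, predict_followers, threshold=-0.03):
    """candidates: (word, accumulative_score, word_duration) tuples for
    the current word; predict_followers(word) returns likely follower
    words for one selected candidate word."""
    # Operation 200: confidence measure per candidate, equation (4).
    best = max(score for _, score, _ in candidates)
    # Operation 210: select the subset meeting the threshold.
    selected = [word for word, score, duration in candidates
                if (score - best) / duration >= threshold]
    # Operation 220: the recognition network for the next word is the
    # union of likely follower words of the selected candidates.
    network = set()
    for word in selected:
        network.update(predict_followers(word))
    return network
```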
  • The above described functions may be carried out in many ways. For example, any suitable means for carrying out each of the functions described above may be employed to carry out embodiments of the invention. In one embodiment, all or a portion of the elements of the invention generally operate under control of a computer program product. The computer program product for performing the methods of embodiments of the invention includes a computer-readable storage medium, such as the non-volatile storage medium, and computer-readable program code portions, such as a series of computer instructions, embodied in the computer-readable storage medium.
  • Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the embodiments of the invention are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (32)

1. A method comprising:
determining a confidence measure for each candidate recognized word for a current word to be recognized;
selecting a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words; and
determining a recognition network for a next word to be recognized, the recognition network including likely follower words for each of the selected candidate words.
2. A method according to claim 1, wherein determining the confidence measure comprises determining a relative difference between one of the candidate recognized words and a best candidate recognized word.
3. A method according to claim 2, wherein determining the relative difference comprises determining a difference between an accumulative score of a particular candidate recognized word and an accumulative score of the best candidate recognized word having a highest accumulative score.
4. A method according to claim 3, wherein determining the confidence measure further comprises normalizing the confidence measure by dividing the relative difference by a word duration of the word to be recognized.
5. A method according to claim 1, wherein selecting the subset comprises comparing the confidence measure to a threshold and defining the selected candidate words as the candidate recognized words having corresponding confidence measures that meet the threshold.
6. A method according to claim 1, wherein determining the recognition network comprises determining likely follower words for each of the selected candidate words based on language model information.
7. A method according to claim 1, further comprising determining candidate words for the next word to be recognized based on a recognition probability associated with each of the likely follower words.
8. A method according to claim 1, further comprising including a predefined set of supplemental words as part of the recognition network.
9. A method according to claim 8, wherein including the predefined set of supplemental words comprises including at least one of frequently used words or acoustic matching candidates.
10. A computer program product comprising at least one computer-readable storage medium having computer-readable program code portions stored therein, the computer-readable program code portions comprising:
a first executable portion for determining a confidence measure for each candidate recognized word for a current word to be recognized;
a second executable portion for selecting a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words; and
a third executable portion for determining a recognition network for a next word to be recognized, the recognition network including likely follower words for each of the selected candidate words.
11. A computer program product according to claim 10, wherein the first executable portion includes instructions for determining a relative difference between one of the candidate recognized words and a best candidate recognized word.
12. A computer program product according to claim 11, wherein the first executable portion includes instructions for determining a difference between an accumulative score of a particular candidate recognized word and an accumulative score of the best candidate recognized word having a highest accumulative score.
13. A computer program product according to claim 12, wherein the first executable portion includes instructions for normalizing the confidence measure by dividing the relative difference by a word duration of the word to be recognized.
14. A computer program product according to claim 10, wherein the second executable portion includes instructions for comparing the confidence measure to a threshold and defining the selected candidate words as the candidate recognized words having corresponding confidence measures that meet the threshold.
15. A computer program product according to claim 10, wherein the third executable portion includes instructions for determining likely follower words for each of the selected candidate words based on language model information.
16. A computer program product according to claim 10, further comprising a fourth executable portion for determining candidate words for the next word to be recognized based on a recognition probability associated with each of the likely follower words.
17. A computer program product according to claim 10, further comprising a fourth executable portion for including a predefined set of supplemental words as part of the recognition network.
18. A computer program product according to claim 17, wherein the fourth executable portion includes instructions for including at least one of frequently used words or acoustic matching candidates.
19. An apparatus comprising a recognition network element configured to:
determine a confidence measure for each candidate recognized word for a current word to be recognized;
select a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words; and
determine a recognition network for a next word to be recognized, the recognition network including likely follower words for each of the selected candidate words.
20. An apparatus according to claim 19, wherein the recognition network element is further configured to determine a relative difference between one of the candidate recognized words and a best candidate recognized word.
21. An apparatus according to claim 20, wherein the recognition network element is further configured to determine a difference between an accumulative score of a particular candidate recognized word and an accumulative score of the best candidate recognized word having a highest accumulative score.
22. An apparatus according to claim 21, wherein the recognition network element is further configured to normalize the confidence measure by dividing the relative difference by a word duration of the word to be recognized.
23. An apparatus according to claim 19, wherein the recognition network element is further configured to compare the confidence measure to a threshold and define the selected candidate words as the candidate recognized words having corresponding confidence measures that meet the threshold.
24. An apparatus according to claim 19, wherein the recognition network element is further configured to determine likely follower words for each of the selected candidate words based on language model information.
25. An apparatus according to claim 19, further comprising a speech recognition engine configured to determine candidate words for the next word to be recognized based on a recognition probability associated with each of the likely follower words.
26. An apparatus according to claim 19, wherein the recognition network element is further configured to include a predefined set of supplemental words as part of the recognition network.
27. An apparatus according to claim 26, wherein the predefined set of supplemental words comprises at least one of frequently used words or acoustic matching candidates.
28. An apparatus according to claim 19, wherein the apparatus is embodied as a mobile terminal.
29. An apparatus comprising:
means for determining a confidence measure for each candidate recognized word for a current word to be recognized;
means for selecting a subset of candidate recognized words as selected candidate words based on the confidence measure of each one of the candidate recognized words; and
means for determining a recognition network for a next word to be recognized, the recognition network including likely follower words for each of the selected candidate words.
30. An apparatus according to claim 29, wherein means for determining the confidence measure comprises means for determining a relative difference between one of the candidate recognized words and a best candidate recognized word.
31. A system comprising:
a speech processing element configured to segment input speech into a series of words including a current word to be recognized and a next word to be recognized;
a speech recognition engine configured to determine candidate recognized words corresponding to each word of the series of words based on a recognition network dynamically generated for each word of the series of words; and
a recognition network element configured to:
determine a confidence measure for each candidate recognized word for the current word to be recognized;
select a subset of candidate recognized words for the current word to be recognized as selected candidate words based on the confidence measure of each one of the candidate recognized words for the current word to be recognized; and
determine a next recognition network for a next word to be recognized, the next recognition network including likely follower words for each of the selected candidate words.
32. A system according to claim 31, wherein the recognition network element is further configured to determine a relative difference between one of the candidate recognized words and a best candidate recognized word.
US11/614,159 2006-12-21 2006-12-21 System, Method, Apparatus and Computer Program Product for Providing Dynamic Vocabulary Prediction for Speech Recognition Abandoned US20080154600A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/614,159 US20080154600A1 (en) 2006-12-21 2006-12-21 System, Method, Apparatus and Computer Program Product for Providing Dynamic Vocabulary Prediction for Speech Recognition


Publications (1)

Publication Number Publication Date
US20080154600A1 true US20080154600A1 (en) 2008-06-26


Cited By (193)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080109220A1 (en) * 2006-11-03 2008-05-08 Imre Kiss Input method and device
US20080162137A1 (en) * 2006-12-28 2008-07-03 Nissan Motor Co., Ltd. Speech recognition apparatus and method
US20080201142A1 (en) * 2007-02-15 2008-08-21 Motorola, Inc. Method and apparatus for automication creation of an interactive log based on real-time content
US20080221880A1 (en) * 2007-03-07 2008-09-11 Cerra Joseph P Mobile music environment speech processing facility
US20080221884A1 (en) * 2007-03-07 2008-09-11 Cerra Joseph P Mobile environment speech processing facility
US20080243481A1 (en) * 2007-03-26 2008-10-02 Thorsten Brants Large Language Models in Machine Translation
US20080255835A1 (en) * 2007-04-10 2008-10-16 Microsoft Corporation User directed adaptation of spoken language grammer
US20090030691A1 (en) * 2007-03-07 2009-01-29 Cerra Joseph P Using an unstructured language model associated with an application of a mobile communication facility
US20090164890A1 (en) * 2007-12-19 2009-06-25 Microsoft Corporation Self learning contextual spell corrector
US20090248415A1 (en) * 2008-03-31 2009-10-01 Yap, Inc. Use of metadata to post process speech recognition output
US20100094629A1 (en) * 2007-02-28 2010-04-15 Tadashi Emori Weight coefficient learning system and audio recognition system
US20100153110A1 (en) * 2008-12-11 2010-06-17 Chi Mei Communication Systems, Inc. Voice recognition system and method of a mobile communication device
US20100286979A1 (en) * 2007-08-01 2010-11-11 Ginger Software, Inc. Automatic context sensitive language correction and enhancement using an internet corpus
US20110099012A1 (en) * 2009-10-23 2011-04-28 At&T Intellectual Property I, L.P. System and method for estimating the reliability of alternate speech recognition hypotheses in real time
US20110184736A1 (en) * 2010-01-26 2011-07-28 Benjamin Slotznick Automated method of recognizing inputted information items and selecting information items
US20110307254A1 (en) * 2008-12-11 2011-12-15 Melvyn Hunt Speech recognition involving a mobile device
US20120005318A1 (en) * 2010-06-30 2012-01-05 International Business Machines Corporation Network Problem Determination
CN102592595A (en) * 2012-03-19 2012-07-18 安徽科大讯飞信息科技股份有限公司 Voice recognition method and system
US20120239382A1 (en) * 2011-03-18 2012-09-20 Industrial Technology Research Institute Recommendation method and recommender computer system using dynamic language model
US8635243B2 (en) 2007-03-07 2014-01-21 Research In Motion Limited Sending a communications header with voice recording to send metadata for use in speech recognition, formatting, and search mobile search application
CN103531197A (en) * 2013-10-11 2014-01-22 安徽科大讯飞信息科技股份有限公司 Command word recognition self-adaptive optimization method for carrying out feedback on user speech recognition result
US8831957B2 (en) * 2012-08-01 2014-09-09 Google Inc. Speech recognition models based on location indicia
US8838457B2 (en) 2007-03-07 2014-09-16 Vlingo Corporation Using results of unstructured language model based speech recognition to control a system-level function of a mobile communications facility
US20140297281A1 (en) * 2013-03-28 2014-10-02 Fujitsu Limited Speech processing method, device and system
US8886545B2 (en) 2007-03-07 2014-11-11 Vlingo Corporation Dealing with switch latency in speech recognition
US8886540B2 (en) 2007-03-07 2014-11-11 Vlingo Corporation Using speech recognition results based on an unstructured language model in a mobile communication facility application
US20140358533A1 (en) * 2013-05-30 2014-12-04 International Business Machines Corporation Pronunciation accuracy in speech recognition
US8949130B2 (en) 2007-03-07 2015-02-03 Vlingo Corporation Internal and external speech recognition use with a mobile communication facility
US8949266B2 (en) 2007-03-07 2015-02-03 Vlingo Corporation Multiple web-based content category searching in mobile search application
US9015036B2 (en) 2010-02-01 2015-04-21 Ginger Software, Inc. Automatic context sensitive language correction using an internet corpus particularly for small keyboard devices
US9135544B2 (en) 2007-11-14 2015-09-15 Varcode Ltd. System and method for quality management utilizing barcode indicators
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
EP2985760A1 (en) * 2014-08-12 2016-02-17 Honeywell International Inc. Methods and apparatus for interpreting received speech data using speech recognition
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US9400952B2 (en) 2012-10-22 2016-07-26 Varcode Ltd. Tamper-proof quality management barcode indicators
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US20160306783A1 (en) * 2014-05-07 2016-10-20 Tencent Technology (Shenzhen) Company Limited Method and apparatus for phonetically annotating text
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9583107B2 (en) 2006-04-05 2017-02-28 Amazon Technologies, Inc. Continuous speech transcription performance indication
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9633660B2 (en) 2010-02-25 2017-04-25 Apple Inc. User profiling for voice input processing
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US9646277B2 (en) 2006-05-07 2017-05-09 Varcode Ltd. System and method for improved quality management in a product logistic chain
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US9973450B2 (en) 2007-09-17 2018-05-15 Amazon Technologies, Inc. Methods and systems for dynamically updating web service profile information by parsing transcribed message strings
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US20180166080A1 (en) * 2016-12-08 2018-06-14 Guangzhou Shenma Mobile Information Technology Co. Ltd. Information input method, apparatus and computing device
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10056077B2 (en) 2007-03-07 2018-08-21 Nuance Communications, Inc. Using speech recognition results based on an unstructured language model with a music system
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US10152298B1 (en) * 2015-06-29 2018-12-11 Amazon Technologies, Inc. Confidence estimation based on frequency
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US10176451B2 (en) 2007-05-06 2019-01-08 Varcode Ltd. System and method for quality management utilizing barcode indicators
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US10318632B2 (en) 2017-03-14 2019-06-11 Microsoft Technology Licensing, Llc Multi-lingual data input system
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10403283B1 (en) 2018-06-01 2019-09-03 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US10445678B2 (en) 2006-05-07 2019-10-15 Varcode Ltd. System and method for improved quality management in a product logistic chain
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10496705B1 (en) 2018-06-03 2019-12-03 Apple Inc. Accelerated task performance
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10643611B2 (en) 2008-10-02 2020-05-05 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10672393B2 (en) * 2018-01-12 2020-06-02 Intel Corporation Time capsule based speaking aid
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10684703B2 (en) 2018-06-01 2020-06-16 Apple Inc. Attention aware virtual assistant dismissal
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10697837B2 (en) 2015-07-07 2020-06-30 Varcode Ltd. Electronic quality indicator
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10741170B2 (en) 2015-11-06 2020-08-11 Alibaba Group Holding Limited Speech recognition method and apparatus
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10789945B2 (en) 2017-05-12 2020-09-29 Apple Inc. Low-latency intelligent automated assistant
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11023513B2 (en) 2007-12-20 2021-06-01 Apple Inc. Method and apparatus for searching using an active ontology
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US11060924B2 (en) 2015-05-18 2021-07-13 Varcode Ltd. Thermochromic ink indicia for activatable quality labels
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
WO2021221390A1 (en) * 2020-04-29 2021-11-04 Samsung Electronics Co., Ltd. System and method for out-of-vocabulary phrase support in automatic speech recognition
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services
US11231904B2 (en) 2015-03-06 2022-01-25 Apple Inc. Reducing response latency of intelligent automated assistants
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
US11314370B2 (en) 2013-12-06 2022-04-26 Apple Inc. Method for extracting salient dialog usage from live data
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US11495218B2 (en) 2018-06-01 2022-11-08 Apple Inc. Virtual assistant operation in multi-device environments
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US11704526B2 (en) 2008-06-10 2023-07-18 Varcode Ltd. Barcoded indicators for quality management

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010018654A1 (en) * 1998-11-13 2001-08-30 Hsiao-Wuen Hon Confidence measure system using a near-miss pattern
US6374220B1 (en) * 1998-08-05 2002-04-16 Texas Instruments Incorporated N-best search for continuous speech recognition using viterbi pruning for non-output differentiation states
US20030023437A1 (en) * 2001-01-27 2003-01-30 Pascale Fung System and method for context-based spontaneous speech recognition
US20030110035A1 (en) * 2001-12-12 2003-06-12 Compaq Information Technologies Group, L.P. Systems and methods for combining subword detection and word detection for processing a spoken input
US20030182110A1 (en) * 2002-03-19 2003-09-25 Li Deng Method of speech recognition using variables representing dynamic aspects of speech
US6694296B1 (en) * 2000-07-20 2004-02-17 Microsoft Corporation Method and apparatus for the recognition of spelled spoken words
US6725196B2 (en) * 1998-02-10 2004-04-20 Canon Kabushiki Kaisha Pattern matching method and apparatus
US6778959B1 (en) * 1999-10-21 2004-08-17 Sony Corporation System and method for speech verification using out-of-vocabulary models
US20050010411A1 (en) * 2003-07-09 2005-01-13 Luca Rigazio Speech data mining for call center management
US20050091054A1 (en) * 2000-07-20 2005-04-28 Microsoft Corporation Method and apparatus for generating and displaying N-Best alternatives in a speech recognition system
US20060009974A1 (en) * 2004-07-09 2006-01-12 Matsushita Electric Industrial Co., Ltd. Hands-free voice dialing for portable and remote devices
US20070185713A1 (en) * 2006-02-09 2007-08-09 Samsung Electronics Co., Ltd. Recognition confidence measuring by lexical distance between candidates
US20080120094A1 (en) * 2006-11-17 2008-05-22 Nokia Corporation Seamless automatic speech recognition transfer
US20080167872A1 (en) * 2004-06-10 2008-07-10 Yoshiyuki Okimoto Speech Recognition Device, Speech Recognition Method, and Program
US7571098B1 (en) * 2003-05-29 2009-08-04 At&T Intellectual Property Ii, L.P. System and method of spoken language understanding using word confusion networks

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6725196B2 (en) * 1998-02-10 2004-04-20 Canon Kabushiki Kaisha Pattern matching method and apparatus
US6374220B1 (en) * 1998-08-05 2002-04-16 Texas Instruments Incorporated N-best search for continuous speech recognition using viterbi pruning for non-output differentiation states
US20010018654A1 (en) * 1998-11-13 2001-08-30 Hsiao-Wuen Hon Confidence measure system using a near-miss pattern
US6778959B1 (en) * 1999-10-21 2004-08-17 Sony Corporation System and method for speech verification using out-of-vocabulary models
US20050091054A1 (en) * 2000-07-20 2005-04-28 Microsoft Corporation Method and apparatus for generating and displaying N-Best alternatives in a speech recognition system
US6694296B1 (en) * 2000-07-20 2004-02-17 Microsoft Corporation Method and apparatus for the recognition of spelled spoken words
US20030023437A1 (en) * 2001-01-27 2003-01-30 Pascale Fung System and method for context-based spontaneous speech recognition
US20030110035A1 (en) * 2001-12-12 2003-06-12 Compaq Information Technologies Group, L.P. Systems and methods for combining subword detection and word detection for processing a spoken input
US20030182110A1 (en) * 2002-03-19 2003-09-25 Li Deng Method of speech recognition using variables representing dynamic aspects of speech
US7571098B1 (en) * 2003-05-29 2009-08-04 At&T Intellectual Property Ii, L.P. System and method of spoken language understanding using word confusion networks
US20050010411A1 (en) * 2003-07-09 2005-01-13 Luca Rigazio Speech data mining for call center management
US20080167872A1 (en) * 2004-06-10 2008-07-10 Yoshiyuki Okimoto Speech Recognition Device, Speech Recognition Method, and Program
US20060009974A1 (en) * 2004-07-09 2006-01-12 Matsushita Electric Industrial Co., Ltd. Hands-free voice dialing for portable and remote devices
US20070185713A1 (en) * 2006-02-09 2007-08-09 Samsung Electronics Co., Ltd. Recognition confidence measuring by lexical distance between candidates
US20080120094A1 (en) * 2006-11-17 2008-05-22 Nokia Corporation Seamless automatic speech recognition transfer

Cited By (312)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9583107B2 (en) 2006-04-05 2017-02-28 Amazon Technologies, Inc. Continuous speech transcription performance indication
US9646277B2 (en) 2006-05-07 2017-05-09 Varcode Ltd. System and method for improved quality management in a product logistic chain
US10726375B2 (en) 2006-05-07 2020-07-28 Varcode Ltd. System and method for improved quality management in a product logistic chain
US10445678B2 (en) 2006-05-07 2019-10-15 Varcode Ltd. System and method for improved quality management in a product logistic chain
US10037507B2 (en) 2006-05-07 2018-07-31 Varcode Ltd. System and method for improved quality management in a product logistic chain
US20080109220A1 (en) * 2006-11-03 2008-05-08 Imre Kiss Input method and device
US8355913B2 (en) * 2006-11-03 2013-01-15 Nokia Corporation Speech recognition with adjustable timeout period
US20080162137A1 (en) * 2006-12-28 2008-07-03 Nissan Motor Co., Ltd. Speech recognition apparatus and method
US7949524B2 (en) * 2006-12-28 2011-05-24 Nissan Motor Co., Ltd. Speech recognition correction with standby-word dictionary
US7844460B2 (en) * 2007-02-15 2010-11-30 Motorola, Inc. Automatic creation of an interactive log based on real-time content
US20080201142A1 (en) * 2007-02-15 2008-08-21 Motorola, Inc. Method and apparatus for automatic creation of an interactive log based on real-time content
US8494847B2 (en) * 2007-02-28 2013-07-23 Nec Corporation Weighting factor learning system and audio recognition system
US20100094629A1 (en) * 2007-02-28 2010-04-15 Tadashi Emori Weight coefficient learning system and audio recognition system
US10056077B2 (en) 2007-03-07 2018-08-21 Nuance Communications, Inc. Using speech recognition results based on an unstructured language model with a music system
US8949266B2 (en) 2007-03-07 2015-02-03 Vlingo Corporation Multiple web-based content category searching in mobile search application
US20080221880A1 (en) * 2007-03-07 2008-09-11 Cerra Joseph P Mobile music environment speech processing facility
US20080221884A1 (en) * 2007-03-07 2008-09-11 Cerra Joseph P Mobile environment speech processing facility
US20080221900A1 (en) * 2007-03-07 2008-09-11 Cerra Joseph P Mobile local search environment speech processing facility
US20080221889A1 (en) * 2007-03-07 2008-09-11 Cerra Joseph P Mobile content search environment speech processing facility
US9619572B2 (en) 2007-03-07 2017-04-11 Nuance Communications, Inc. Multiple web-based content category searching in mobile search application
US8880405B2 (en) 2007-03-07 2014-11-04 Vlingo Corporation Application text entry in a mobile environment using a speech processing facility
US8886540B2 (en) 2007-03-07 2014-11-11 Vlingo Corporation Using speech recognition results based on an unstructured language model in a mobile communication facility application
US9495956B2 (en) 2007-03-07 2016-11-15 Nuance Communications, Inc. Dealing with switch latency in speech recognition
US8838457B2 (en) 2007-03-07 2014-09-16 Vlingo Corporation Using results of unstructured language model based speech recognition to control a system-level function of a mobile communications facility
US8949130B2 (en) 2007-03-07 2015-02-03 Vlingo Corporation Internal and external speech recognition use with a mobile communication facility
US20090030691A1 (en) * 2007-03-07 2009-01-29 Cerra Joseph P Using an unstructured language model associated with an application of a mobile communication facility
US8886545B2 (en) 2007-03-07 2014-11-11 Vlingo Corporation Dealing with switch latency in speech recognition
US8635243B2 (en) 2007-03-07 2014-01-21 Research In Motion Limited Sending a communications header with voice recording to send metadata for use in speech recognition, formatting, and search mobile search application
US8996379B2 (en) 2007-03-07 2015-03-31 Vlingo Corporation Speech recognition text entry for software applications
US20130346059A1 (en) * 2007-03-26 2013-12-26 Google Inc. Large language models in machine translation
US8812291B2 (en) * 2007-03-26 2014-08-19 Google Inc. Large language models in machine translation
US8332207B2 (en) * 2007-03-26 2012-12-11 Google Inc. Large language models in machine translation
US20080243481A1 (en) * 2007-03-26 2008-10-02 Thorsten Brants Large Language Models in Machine Translation
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US20080255835A1 (en) * 2007-04-10 2008-10-16 Microsoft Corporation User directed adaptation of spoken language grammar
US10504060B2 (en) 2007-05-06 2019-12-10 Varcode Ltd. System and method for quality management utilizing barcode indicators
US10176451B2 (en) 2007-05-06 2019-01-08 Varcode Ltd. System and method for quality management utilizing barcode indicators
US10776752B2 (en) 2007-05-06 2020-09-15 Varcode Ltd. System and method for quality management utilizing barcode indicators
US9026432B2 (en) 2007-08-01 2015-05-05 Ginger Software, Inc. Automatic context sensitive language generation, correction and enhancement using an internet corpus
US8914278B2 (en) * 2007-08-01 2014-12-16 Ginger Software, Inc. Automatic context sensitive language correction and enhancement using an internet corpus
US20100286979A1 (en) * 2007-08-01 2010-11-11 Ginger Software, Inc. Automatic context sensitive language correction and enhancement using an internet corpus
US9973450B2 (en) 2007-09-17 2018-05-15 Amazon Technologies, Inc. Methods and systems for dynamically updating web service profile information by parsing transcribed message strings
US9558439B2 (en) 2007-11-14 2017-01-31 Varcode Ltd. System and method for quality management utilizing barcode indicators
US10262251B2 (en) 2007-11-14 2019-04-16 Varcode Ltd. System and method for quality management utilizing barcode indicators
US9135544B2 (en) 2007-11-14 2015-09-15 Varcode Ltd. System and method for quality management utilizing barcode indicators
US10719749B2 (en) 2007-11-14 2020-07-21 Varcode Ltd. System and method for quality management utilizing barcode indicators
US9836678B2 (en) 2007-11-14 2017-12-05 Varcode Ltd. System and method for quality management utilizing barcode indicators
US20090164890A1 (en) * 2007-12-19 2009-06-25 Microsoft Corporation Self learning contextual spell corrector
US8176419B2 (en) * 2007-12-19 2012-05-08 Microsoft Corporation Self learning contextual spell corrector
US11023513B2 (en) 2007-12-20 2021-06-01 Apple Inc. Method and apparatus for searching using an active ontology
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US10381016B2 (en) 2008-01-03 2019-08-13 Apple Inc. Methods and apparatus for altering audio output signals
US8676577B2 (en) * 2008-03-31 2014-03-18 Canyon IP Holdings, LLC Use of metadata to post process speech recognition output
US20090248415A1 (en) * 2008-03-31 2009-10-01 Yap, Inc. Use of metadata to post process speech recognition output
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US10776680B2 (en) 2008-06-10 2020-09-15 Varcode Ltd. System and method for quality management utilizing barcode indicators
US11341387B2 (en) 2008-06-10 2022-05-24 Varcode Ltd. Barcoded indicators for quality management
US10885414B2 (en) 2008-06-10 2021-01-05 Varcode Ltd. Barcoded indicators for quality management
US9384435B2 (en) 2008-06-10 2016-07-05 Varcode Ltd. Barcoded indicators for quality management
US9710743B2 (en) 2008-06-10 2017-07-18 Varcode Ltd. Barcoded indicators for quality management
US11238323B2 (en) 2008-06-10 2022-02-01 Varcode Ltd. System and method for quality management utilizing barcode indicators
US11704526B2 (en) 2008-06-10 2023-07-18 Varcode Ltd. Barcoded indicators for quality management
US10417543B2 (en) 2008-06-10 2019-09-17 Varcode Ltd. Barcoded indicators for quality management
US10049314B2 (en) 2008-06-10 2018-08-14 Varcode Ltd. Barcoded indicators for quality management
US10572785B2 (en) 2008-06-10 2020-02-25 Varcode Ltd. Barcoded indicators for quality management
US9996783B2 (en) 2008-06-10 2018-06-12 Varcode Ltd. System and method for quality management utilizing barcode indicators
US10089566B2 (en) 2008-06-10 2018-10-02 Varcode Ltd. Barcoded indicators for quality management
US10789520B2 (en) 2008-06-10 2020-09-29 Varcode Ltd. Barcoded indicators for quality management
US11449724B2 (en) 2008-06-10 2022-09-20 Varcode Ltd. System and method for quality management utilizing barcode indicators
US9317794B2 (en) 2008-06-10 2016-04-19 Varcode Ltd. Barcoded indicators for quality management
US9646237B2 (en) 2008-06-10 2017-05-09 Varcode Ltd. Barcoded indicators for quality management
US10303992B2 (en) 2008-06-10 2019-05-28 Varcode Ltd. System and method for quality management utilizing barcode indicators
US9626610B2 (en) 2008-06-10 2017-04-18 Varcode Ltd. System and method for quality management utilizing barcode indicators
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US10108612B2 (en) 2008-07-31 2018-10-23 Apple Inc. Mobile device having human language translation capability with positional feedback
US11348582B2 (en) 2008-10-02 2022-05-31 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10643611B2 (en) 2008-10-02 2020-05-05 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US20110307254A1 (en) * 2008-12-11 2011-12-15 Melvyn Hunt Speech recognition involving a mobile device
US9959870B2 (en) * 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US20100153110A1 (en) * 2008-12-11 2010-06-17 Chi Mei Communication Systems, Inc. Voice recognition system and method of a mobile communication device
US20180218735A1 (en) * 2008-12-11 2018-08-02 Apple Inc. Speech recognition involving a mobile device
US10795541B2 (en) 2009-06-05 2020-10-06 Apple Inc. Intelligent organization of tasks items
US11080012B2 (en) 2009-06-05 2021-08-03 Apple Inc. Interface for a virtual digital assistant
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US10475446B2 (en) 2009-06-05 2019-11-12 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US20110099012A1 (en) * 2009-10-23 2011-04-28 At&T Intellectual Property I, L.P. System and method for estimating the reliability of alternate speech recognition hypotheses in real time
US9653066B2 (en) * 2009-10-23 2017-05-16 Nuance Communications, Inc. System and method for estimating the reliability of alternate speech recognition hypotheses in real time
US20170249935A1 (en) * 2009-10-23 2017-08-31 Nuance Communications, Inc. System and method for estimating the reliability of alternate speech recognition hypotheses in real time
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US9548050B2 (en) 2010-01-18 2017-01-17 Apple Inc. Intelligent automated assistant
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10706841B2 (en) 2010-01-18 2020-07-07 Apple Inc. Task flow identification based on user intent
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US11423886B2 (en) 2010-01-18 2022-08-23 Apple Inc. Task flow identification based on user intent
US20110184736A1 (en) * 2010-01-26 2011-07-28 Benjamin Slotznick Automated method of recognizing inputted information items and selecting information items
US9015036B2 (en) 2010-02-01 2015-04-21 Ginger Software, Inc. Automatic context sensitive language correction using an internet corpus particularly for small keyboard devices
US9633660B2 (en) 2010-02-25 2017-04-25 Apple Inc. User profiling for voice input processing
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
US10692504B2 (en) 2010-02-25 2020-06-23 Apple Inc. User profiling for voice input processing
US8244839B2 (en) * 2010-06-30 2012-08-14 International Business Machines Corporation Network problem determination
US20120005318A1 (en) * 2010-06-30 2012-01-05 International Business Machines Corporation Network Problem Determination
US20120239382A1 (en) * 2011-03-18 2012-09-20 Industrial Technology Research Institute Recommendation method and recommender computer system using dynamic language model
US10102359B2 (en) 2011-03-21 2018-10-16 Apple Inc. Device access using voice authentication
US10417405B2 (en) 2011-03-21 2019-09-17 Apple Inc. Device access using voice authentication
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US11350253B2 (en) 2011-06-03 2022-05-31 Apple Inc. Active transport based notifications
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US11120372B2 (en) 2011-06-03 2021-09-14 Apple Inc. Performing actions associated with task items that represent tasks to perform
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US11069336B2 (en) 2012-03-02 2021-07-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
CN102592595A (en) * 2012-03-19 2012-07-18 安徽科大讯飞信息科技股份有限公司 Voice recognition method and system
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US8831957B2 (en) * 2012-08-01 2014-09-09 Google Inc. Speech recognition models based on location indicia
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US9633296B2 (en) 2012-10-22 2017-04-25 Varcode Ltd. Tamper-proof quality management barcode indicators
US10552719B2 (en) 2012-10-22 2020-02-04 Varcode Ltd. Tamper-proof quality management barcode indicators
US9400952B2 (en) 2012-10-22 2016-07-26 Varcode Ltd. Tamper-proof quality management barcode indicators
US10242302B2 (en) 2012-10-22 2019-03-26 Varcode Ltd. Tamper-proof quality management barcode indicators
US9965712B2 (en) 2012-10-22 2018-05-08 Varcode Ltd. Tamper-proof quality management barcode indicators
US10839276B2 (en) 2012-10-22 2020-11-17 Varcode Ltd. Tamper-proof quality management barcode indicators
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US20140297281A1 (en) * 2013-03-28 2014-10-02 Fujitsu Limited Speech processing method, device and system
US9978364B2 (en) 2013-05-30 2018-05-22 International Business Machines Corporation Pronunciation accuracy in speech recognition
US9384730B2 (en) * 2013-05-30 2016-07-05 International Business Machines Corporation Pronunciation accuracy in speech recognition
US20140358533A1 (en) * 2013-05-30 2014-12-04 International Business Machines Corporation Pronunciation accuracy in speech recognition
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US10657961B2 (en) 2013-06-08 2020-05-19 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10769385B2 (en) 2013-06-09 2020-09-08 Apple Inc. System and method for inferring user intent from speech inputs
US11048473B2 (en) 2013-06-09 2021-06-29 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
CN103531197A (en) * 2013-10-11 2014-01-22 安徽科大讯飞信息科技股份有限公司 Command word recognition self-adaptive optimization method for carrying out feedback on user speech recognition result
US11314370B2 (en) 2013-12-06 2022-04-26 Apple Inc. Method for extracting salient dialog usage from live data
US20160306783A1 (en) * 2014-05-07 2016-10-20 Tencent Technology (Shenzhen) Company Limited Method and apparatus for phonetically annotating text
US10114809B2 (en) * 2014-05-07 2018-10-30 Tencent Technology (Shenzhen) Company Limited Method and apparatus for phonetically annotating text
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US10417344B2 (en) 2014-05-30 2019-09-17 Apple Inc. Exemplar-based natural language processing
US10083690B2 (en) 2014-05-30 2018-09-25 Apple Inc. Better resolution when referencing to concepts
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US10714095B2 (en) 2014-05-30 2020-07-14 Apple Inc. Intelligent assistant for home automation
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10699717B2 (en) 2014-05-30 2020-06-30 Apple Inc. Intelligent assistant for home automation
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US10497365B2 (en) 2014-05-30 2019-12-03 Apple Inc. Multi-command single utterance input method
US10169329B2 (en) 2014-05-30 2019-01-01 Apple Inc. Exemplar-based natural language processing
US10657966B2 (en) 2014-05-30 2020-05-19 Apple Inc. Better resolution when referencing to concepts
US11257504B2 (en) 2014-05-30 2022-02-22 Apple Inc. Intelligent assistant for home automation
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US11133008B2 (en) 2014-05-30 2021-09-28 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US10904611B2 (en) 2014-06-30 2021-01-26 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9668024B2 (en) 2014-06-30 2017-05-30 Apple Inc. Intelligent automated assistant for TV user interactions
US9418679B2 (en) 2014-08-12 2016-08-16 Honeywell International Inc. Methods and apparatus for interpreting received speech data using speech recognition
EP2985760A1 (en) * 2014-08-12 2016-02-17 Honeywell International Inc. Methods and apparatus for interpreting received speech data using speech recognition
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10431204B2 (en) 2014-09-11 2019-10-01 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10390213B2 (en) 2014-09-30 2019-08-20 Apple Inc. Social reminders
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10453443B2 (en) 2014-09-30 2019-10-22 Apple Inc. Providing an indication of the suitability of speech recognition
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US10438595B2 (en) 2014-09-30 2019-10-08 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US11556230B2 (en) 2014-12-02 2023-01-17 Apple Inc. Data detection
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US11231904B2 (en) 2015-03-06 2022-01-25 Apple Inc. Reducing response latency of intelligent automated assistants
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US10529332B2 (en) 2015-03-08 2020-01-07 Apple Inc. Virtual assistant activation
US10311871B2 (en) 2015-03-08 2019-06-04 Apple Inc. Competing devices responding to voice triggers
US11087759B2 (en) 2015-03-08 2021-08-10 Apple Inc. Virtual assistant activation
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US11060924B2 (en) 2015-05-18 2021-07-13 Varcode Ltd. Thermochromic ink indicia for activatable quality labels
US11781922B2 (en) 2015-05-18 2023-10-10 Varcode Ltd. Thermochromic ink indicia for activatable quality labels
US11127397B2 (en) 2015-05-27 2021-09-21 Apple Inc. Device voice control
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10152298B1 (en) * 2015-06-29 2018-12-11 Amazon Technologies, Inc. Confidence estimation based on frequency
US11614370B2 (en) 2015-07-07 2023-03-28 Varcode Ltd. Electronic quality indicator
US11920985B2 (en) 2015-07-07 2024-03-05 Varcode Ltd. Electronic quality indicator
US11009406B2 (en) 2015-07-07 2021-05-18 Varcode Ltd. Electronic quality indicator
US10697837B2 (en) 2015-07-07 2020-06-30 Varcode Ltd. Electronic quality indicator
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US11500672B2 (en) 2015-09-08 2022-11-15 Apple Inc. Distributed personal assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US11664020B2 (en) 2015-11-06 2023-05-30 Alibaba Group Holding Limited Speech recognition method and apparatus
US11526368B2 (en) 2015-11-06 2022-12-13 Apple Inc. Intelligent automated assistant in a messaging environment
US10741170B2 (en) 2015-11-06 2020-08-11 Alibaba Group Holding Limited Speech recognition method and apparatus
US10354652B2 (en) 2015-12-02 2019-07-16 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US11069347B2 (en) 2016-06-08 2021-07-20 Apple Inc. Intelligent automated assistant for media exploration
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US11037565B2 (en) 2016-06-10 2021-06-15 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10580409B2 (en) 2016-06-11 2020-03-03 Apple Inc. Application integration with a digital assistant
US10942702B2 (en) 2016-06-11 2021-03-09 Apple Inc. Intelligent device arbitration and control
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US11152002B2 (en) 2016-06-11 2021-10-19 Apple Inc. Application integration with a digital assistant
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10553215B2 (en) 2016-09-23 2020-02-04 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US20180166080A1 (en) * 2016-12-08 2018-06-14 Guangzhou Shenma Mobile Information Technology Co. Ltd. Information input method, apparatus and computing device
US10796699B2 (en) * 2016-12-08 2020-10-06 Guangzhou Shenma Mobile Information Technology Co., Ltd. Method, apparatus, and computing device for revision of speech recognition results
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US10318632B2 (en) 2017-03-14 2019-06-11 Microsoft Technology Licensing, Llc Multi-lingual data input system
US10332518B2 (en) 2017-05-09 2019-06-25 Apple Inc. User interface for correcting recognition errors
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10847142B2 (en) 2017-05-11 2020-11-24 Apple Inc. Maintaining privacy of personal information
US11405466B2 (en) 2017-05-12 2022-08-02 Apple Inc. Synchronization and task delegation of a digital assistant
US10789945B2 (en) 2017-05-12 2020-09-29 Apple Inc. Low-latency intelligent automated assistant
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10672393B2 (en) * 2018-01-12 2020-06-02 Intel Corporation Time capsule based speaking aid
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US10684703B2 (en) 2018-06-01 2020-06-16 Apple Inc. Attention aware virtual assistant dismissal
US11495218B2 (en) 2018-06-01 2022-11-08 Apple Inc. Virtual assistant operation in multi-device environments
US10403283B1 (en) 2018-06-01 2019-09-03 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
US10984798B2 (en) 2018-06-01 2021-04-20 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US11009970B2 (en) 2018-06-01 2021-05-18 Apple Inc. Attention aware virtual assistant dismissal
US10496705B1 (en) 2018-06-03 2019-12-03 Apple Inc. Accelerated task performance
US10944859B2 (en) 2018-06-03 2021-03-09 Apple Inc. Accelerated task performance
US10504518B1 (en) 2018-06-03 2019-12-10 Apple Inc. Accelerated task performance
WO2021221390A1 (en) * 2020-04-29 2021-11-04 Samsung Electronics Co., Ltd. System and method for out-of-vocabulary phrase support in automatic speech recognition

Similar Documents

Publication Publication Date Title
US20080154600A1 (en) System, Method, Apparatus and Computer Program Product for Providing Dynamic Vocabulary Prediction for Speech Recognition
US7716049B2 (en) Method, apparatus and computer program product for providing adaptive language model scaling
US11664020B2 (en) Speech recognition method and apparatus
US7552045B2 (en) Method, apparatus and computer program product for providing flexible text based language identification
JP6435312B2 (en) Speech recognition using parallel recognition tasks.
CN107810529B (en) Language model speech endpoint determination
US9031839B2 (en) Conference transcription based on conference data
US8265933B2 (en) Speech recognition system for providing voice recognition services using a conversational language model
KR101247578B1 (en) Adaptation of automatic speech recognition acoustic models
KR101932181B1 (en) Speech recognition using device docking context
US20080126093A1 (en) Method, Apparatus and Computer Program Product for Providing a Language Based Interactive Multimedia System
US7124080B2 (en) Method and apparatus for adapting a class entity dictionary used with language models
US7319960B2 (en) Speech recognition method and system
JP5706384B2 (en) Speech recognition apparatus, speech recognition system, speech recognition method, and speech recognition program
EP1551007A1 (en) Language model creation/accumulation device, speech recognition device, language model creation method, and speech recognition method
US9484019B2 (en) System and method for discriminative pronunciation modeling for voice search
JP2006058899A (en) System and method of lattice-based search for spoken utterance retrieval
US10152298B1 (en) Confidence estimation based on frequency
US9449598B1 (en) Speech recognition with combined grammar and statistical language models
JP2006189730A (en) Speech interactive method and speech interactive device
US20220399013A1 (en) Response method, terminal, and storage medium
Karabetsos et al. Embedded unit selection text-to-speech synthesis for mobile devices
CN112712793A (en) ASR (error correction) method based on pre-training model under voice interaction and related equipment

Legal Events

Date Code Title Description
AS Assignment

Owner name: NOKIA CORPORATION, FINLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TIAN, JILEI;LEPPANEN, JUSSI;KISS, IMRE;REEL/FRAME:018665/0550

Effective date: 20061220

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION