US20110184740A1 - Integration of Embedded and Network Speech Recognizers - Google Patents

Integration of Embedded and Network Speech Recognizers Download PDF

Info

Publication number
US20110184740A1
US20110184740A1 (application US 12/794,896)
Authority
US
United States
Prior art keywords
voice command
query
machine
query result
audio stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/794,896
Inventor
Alexander Gruenstein
William J. Byrne
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to US 12/794,896 (US20110184740A1)
Assigned to GOOGLE INC. Assignment of assignors interest (see document for details). Assignors: BYRNE, WILLIAM J.; GRUENSTEIN, ALEXANDER
Priority to CN201180013111.0A (CN102884569B)
Priority to EP18207861.8A (EP3477637B1)
Priority to KR1020127022282A (KR101770358B1)
Priority to AU2011209760A (AU2011209760B2)
Priority to EP11702758.1A (EP2529372B1)
Priority to CA2788088A (CA2788088A1)
Priority to PCT/US2011/022427 (WO2011094215A1)
Publication of US20110184740A1
Priority to US13/287,913 (US8412532B2)
Priority to US13/585,280 (US8868428B2)
Assigned to GOOGLE LLC. Change of name (see document for details). Assignor: GOOGLE INC.

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/02: Feature extraction for speech recognition; Selection of recognition unit
    • G10L15/28: Constructional details of speech recognition systems
    • G10L15/30: Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G10L15/32: Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems

Definitions

  • This description generally relates to the field of speech recognition.
  • Speech recognition systems in mobile devices allow users to communicate and provide commands to the mobile device with minimal usage of input controls such as, for example, keypads, buttons, and dials.
  • Some speech recognition tasks can be complex for mobile devices, requiring extensive analysis of speech signals and searches of word and language statistical models, while mobile devices typically have limited computational, memory, and battery resources. As such, more complex speech recognition tasks are oftentimes offloaded to speech recognition systems located externally to the mobile device such as, for example, speech recognition systems in network servers.
  • Because more complex speech recognition tasks are performed on network servers rather than on the mobile device, the results of a voice command may be limited to data stored on the network server.
  • For these speech recognition tasks, the mobile device user does not have the benefit of viewing query results that may correspond to the voice command based on data stored on the mobile device.
  • In addition, the delay time in transferring the voice command to the network server, performing the speech recognition operation at the network server, and transferring the query result from the network server back to the mobile device can be significant. Significant delay in the execution of applications on mobile devices, such as speech recognition tasks, can lead to a poor user experience.
  • Methods and systems are needed for performing speech recognition tasks on a client device, such as a mobile device, to overcome the above-noted limitations of speech recognition systems in mobile applications.
  • Embodiments include a method for performing a voice command on a client device.
  • the method includes translating, using a first speech recognizer located on the client device, an audio stream of a voice command to a first machine-readable voice command and generating a first query result using the first machine-readable voice command to query a client database.
  • the audio stream can be transmitted to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer.
  • the method includes receiving a second query result from the remote server device, where the second query result is generated by the remote server device using the second machine-readable voice command to query a remote server database, and displaying the first query result and the second query result on the client device.
  • the transmission of the audio stream to the remote server device and the transmission of the second query result from the remote server device to the client device can occur simultaneously with, substantially at the same time as, or over a time period that overlaps with the generation of the first query result by the client device.
  • Embodiments additionally include a computer program product that includes a computer-usable medium with computer program logic recorded thereon for enabling a processor to perform a voice command on a client device.
  • the computer program logic includes the following: first computer readable program code that enables a processor to translate, using a first speech recognizer located on the client device, an audio stream of a voice command to a first machine-readable voice command; second computer readable program code that enables a processor to generate a first query result using the first machine-readable voice command to query a client database; third computer readable program code that enables a processor to transmit the audio stream to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer; fourth computer readable program code that enables a processor to process a second query result from the remote server device, wherein the second query result is generated by the remote server device using the second machine-readable voice command; and, fifth computer readable program code that enables a processor to control a display of the first query result and the second query result on the client device.
  • Embodiments further include a system for performing a voice command on a client device.
  • the system includes a first speech recognizer device, a client query manager, and a display device.
  • the first speech recognizer device is configured to translate an audio stream of a voice command to a first machine-readable voice command.
  • the client query manager is configured to perform the following functions: generate a first query result using the first machine-readable voice command to query a client database; transmit the audio stream to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer; and, receive a second query result from the remote server device, where the second query result is generated by the remote server device using the second machine-readable voice command to query a remote server database.
  • the display device is configured to display the first query result and the second query result on the client device.
  • FIG. 1 is an illustration of an exemplary communication system in which embodiments can be implemented.
  • FIG. 2 is an illustration of an embodiment of a client device.
  • FIG. 3 is an illustration of an embodiment of a server device.
  • FIG. 4 is an illustration of an embodiment of a method for performing a voice command on a client device.
  • FIGS. 5( a )- 5 ( c ) are illustrations of an exemplary user interface on a mobile phone in accordance with embodiments.
  • FIG. 6 is an illustration of an embodiment of a method for performing a voice command on a client device.
  • FIGS. 7( a ) and 7 ( b ) are illustrations of another exemplary user interface on a mobile phone in accordance with embodiments.
  • FIG. 8 is an illustration of an embodiment of another method for performing a voice command on a client device.
  • FIG. 9 is an illustration of an embodiment of another method for performing a voice command on a client device.
  • FIGS. 10( a )- 10 ( e ) are illustrations of yet another exemplary user interface on a mobile phone in accordance with embodiments.
  • FIG. 11 is an illustration of an example computer system in which embodiments can be implemented.
  • FIG. 1 is an illustration of an exemplary communication system 100 in which embodiments can be implemented.
  • Communication system 100 includes a client device 110 that is communicatively coupled to a server device 130 via a network 120 .
  • Client device 110 can be, for example and without limitation, a mobile phone, a personal digital assistant (PDA), a laptop, or other similar types of mobile devices.
  • Server device 130 can be, for example and without limitation, a telecommunications server, a web server, or other similar types of database servers.
  • server device 130 can have multiple processors and multiple shared or separate memory components such as, for example and without limitation, one or more computing devices incorporated in a clustered computing environment or server farm.
  • server device 130 can be implemented on a single computing device.
  • computing devices include, but are not limited to, a central processing unit, an application-specific integrated circuit, or other type of computing device having at least one processor and memory.
  • network 120 can be, for example and without limitation, a wired (e.g., ethernet) or a wireless (e.g., Wi-Fi and 3G) network that communicatively couples client device 110 to server device 130 .
  • FIG. 2 is an illustration of an embodiment of client device 110 .
  • Client device 110 includes a speech recognizer 210 , a client query manager 220 , a microphone 230 , a client database 240 , and a display device 250 .
  • microphone 230 is coupled to speech recognizer 210 , which is coupled to client query manager 220 .
  • Client query manager 220 is also coupled to client database 240 and display device 250 , according to an embodiment.
  • speech recognizer 210 and client query manager 220 can be implemented in software, firmware, hardware, or a combination thereof.
  • Embodiments of speech recognizer 210 and client query manager 220 , or portions thereof, can also be implemented as computer-readable code executed on one or more computing devices capable of carrying out the functionality described herein. Examples of computing devices include, but are not limited to, a central processing unit, an application-specific integrated circuit, or other type of computing device having at least one processor and memory.
  • microphone 230 is configured to receive an audio stream corresponding to a voice command and to provide the voice command to speech recognizer 210 .
  • the voice command can be generated from an audio source such as, for example and without limitation, a mobile phone user, according to an embodiment.
  • speech recognizer 210 is configured to translate the audio stream to a machine-readable voice command, according to an embodiment.
  • Methods and techniques to translate the audio stream to the machine-readable voice command are known to a person of ordinary skill in the relevant art. Examples of these methods and techniques can be found in commercial speech recognition software such as Dragon Naturally Speaking Software and MacSpeech Software, both by Nuance Communications, Inc.
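  • The patent does not prescribe a particular recognizer implementation for this translation step and points to existing engines instead. Purely as an illustrative sketch (the class name, the fixed vocabulary, and the confidence values below are assumptions, not part of the disclosure), an embedded recognizer could expose an interface that returns both a machine-readable command and a confidence score, which the confidence-gated flow described later can consume:

```python
from dataclasses import dataclass


@dataclass
class RecognitionResult:
    command: str       # machine-readable voice command (normalized text)
    confidence: float  # recognizer confidence score in the range 0.0-1.0


class EmbeddedRecognizer:
    """Hypothetical stand-in for speech recognizer 210 on the client device.

    A real implementation would wrap a speech engine; a tiny grammar of known
    contact names is used here only to suggest the limited on-device vocabulary.
    """

    def __init__(self, known_phrases):
        self.known_phrases = {p.lower() for p in known_phrases}

    def translate(self, audio_stream: bytes) -> RecognitionResult:
        decoded_text = self._decode(audio_stream)
        if decoded_text in self.known_phrases:
            return RecognitionResult(command=decoded_text, confidence=0.92)
        return RecognitionResult(command=decoded_text, confidence=0.30)

    def _decode(self, audio_stream: bytes) -> str:
        # Stub decoder: a real engine would analyze the audio samples here.
        return "barry cage"
```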
  • client query manager 220 queries client database 240 to generate a query result.
  • client database 240 contains information that is locally stored in client device 110 such as, for example and without limitation, telephone numbers, address information, and results from previous voice commands (described in further detail below). Based on the description herein, a person of ordinary skill in the relevant art will recognize that other data stored in client database 240 can provide query results to embodiments described herein.
  • client query manager 220 also coordinates a transmission of the audio stream corresponding to the voice command to server device 130 via network 120 of FIG. 1 .
  • the audio stream can be transmitted to server device 130 in multiple types of audio file formats such as, for example and without limitation, a WAVE audio format.
  • client query manager 220 coordinates a reception of a query result from server device 130 via network 120 .
  • the transmission of data to and reception of data from server device 130 can be performed using a transceiver (not shown in FIG. 2 ), which is known by a person of ordinary skill in the relevant art.
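  • As a minimal sketch of that transmission, assuming 16-bit mono PCM input, a hypothetical HTTPS endpoint, and a JSON response (none of which are specified by the patent), the audio stream could be wrapped in a WAVE container and posted to the server as follows:

```python
import io
import json
import urllib.request
import wave

SERVER_URL = "https://speech.example.com/recognize"  # hypothetical endpoint


def pcm_to_wave_bytes(pcm_samples: bytes, sample_rate: int = 16000) -> bytes:
    """Wrap 16-bit mono PCM samples in a WAVE container."""
    buffer = io.BytesIO()
    with wave.open(buffer, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)            # 2 bytes per sample = 16-bit audio
        wav.setframerate(sample_rate)
        wav.writeframes(pcm_samples)
    return buffer.getvalue()


def send_audio_to_server(pcm_samples: bytes) -> dict:
    """Transmit the audio stream and return the server's query result."""
    request = urllib.request.Request(
        SERVER_URL,
        data=pcm_to_wave_bytes(pcm_samples),
        headers={"Content-Type": "audio/wav"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))
```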
  • Client query manager 220 of FIG. 2 coordinates the transmission of the audio stream to server device 130 simultaneously with, at substantially the same time as, or in parallel with its query of client database 240 , according to an embodiment.
  • the query result from server device 130 can be received by client query manager 220 and displayed on display device 250 at substantially the same time as, in parallel with, or soon after the query result from client device 110 .
  • the query result from server device 130 can be received by client query manager 220 and displayed on display device 250 prior to the display of a query result from client database 240 , according to an embodiment.
  • display device 250 is configured to display the query results from client database 240 and from server device 130 . These query results are stored in client database 240 and may be retrieved at a later time based on a future voice command that is substantially the same as or substantially similar to the voice command used to generate the query results, according to an embodiment.
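  • A rough sketch of that local storage and later recall, using SQLite purely as an illustrative client database and a lowercased command string as the lookup key (both are assumptions):

```python
import sqlite3
import time


class ClientResultCache:
    """Hypothetical client-database table holding previously generated query results."""

    def __init__(self, path: str = "client_cache.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS voice_results ("
            " command TEXT, source TEXT, result TEXT, created REAL)"
        )

    def store(self, command: str, source: str, result: str) -> None:
        """Save one displayed query result under the recognized command."""
        self.conn.execute(
            "INSERT INTO voice_results VALUES (?, ?, ?, ?)",
            (command.strip().lower(), source, result, time.time()),
        )
        self.conn.commit()

    def recall(self, command: str):
        """Return cached client- and server-side results for a matching command."""
        rows = self.conn.execute(
            "SELECT source, result FROM voice_results WHERE command = ?",
            (command.strip().lower(),),
        )
        return rows.fetchall()
```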
  • FIG. 3 is an illustration of an embodiment of server device 130 .
  • Server device 130 includes a speech recognizer 310 , a server query manager 320 , and a server database 330 .
  • speech recognizer 310 is coupled to server query manager 320 , which is coupled to server database 330 .
  • speech recognizer 310 and server query manager 320 can be implemented in software, firmware, hardware, or a combination thereof.
  • Embodiments of speech recognizer 310 and server query manager 320 , or portions thereof, can also be implemented as computer-readable code executed on one or more computing device capable of carrying out the functionality described herein. Examples of computing devices include, but are not limited to, a central processing unit, an application-specific integrated circuit, or other type of computing device having at least one processor and memory.
  • server device 130 receives an audio stream corresponding to a voice command from client device 110 .
  • server query manager 320 coordinates the reception of the audio stream from client device 110 via a transceiver (not shown in FIG. 3 ) and transfer of the audio stream to speech recognizer 310 .
  • speech recognizer 310 is configured to translate the audio stream to a machine-readable voice command, according to an embodiment.
  • speech recognizer 310 is configured to translate both simple speech recognition tasks and more complex speech recognition tasks than those translated by speech recognizer 210 in client device 110 . This is because speech recognizer 310 has more computational and memory resources than speech recognizer 210 to translate more complex voice commands to corresponding machine-readable voice commands, according to an embodiment. Methods and techniques to process complex speech recognition tasks are known to a person of ordinary skill in the relevant art.
  • server query manager 320 queries server database 330 to generate a query result.
  • server database 330 contains a wide array of information such as, for example and without limitation, text data, image data, and video. Based on the description herein, a person of ordinary skill in the relevant art will recognize that other data stored in server database 330 can provide query results to embodiments described herein.
  • server query manager 320 coordinates a transmission of the query result to client device 110 via network 120 of FIG. 1 .
  • the transmission of data to and the reception of data from client device 110 can be performed using a transceiver (not shown in FIG. 3 ), which is known by a person of ordinary skill in the relevant art.
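  • For illustration only, the server-side flow (receive the audio stream, translate it with the network speech recognizer, query the server database, and return the query result) might look like the sketch below; the "/recognize" path, the JSON shape, and the stub recognizer mirror the client-side sketch above and are assumptions rather than details from the patent:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def network_recognize(wave_bytes: bytes) -> str:
    """Stand-in for speech recognizer 310; a real server would run a full engine."""
    return "barry cage"


def query_server_database(command: str) -> list:
    """Stand-in for server query manager 320 querying server database 330."""
    return [{"type": "web", "title": command, "url": "https://www.example.com"}]


class RecognizeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/recognize":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        audio = self.rfile.read(length)             # audio stream from the client
        command = network_recognize(audio)          # translate to a voice command
        body = json.dumps({"command": command,
                           "results": query_server_database(command)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RecognizeHandler).serve_forever()
```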
  • FIG. 4 is an illustration of an embodiment of a method 400 for performing a voice command on a client device.
  • Method 400 can occur using, for example, client device 110 in communication system 100 of FIG. 1 .
  • a speech recognition system performing in accordance with method 400 processes both simple and complex voice commands on the client device as well as the server device.
  • the query results generated by both the client device and the server device provide information from a client database and a server database, respectively.
  • the user of the client device receives the benefit of viewing query results that may correspond to the voice command based on data stored on the client device as well as data stored on the server device.
  • For ease of explanation, communication system 100 will be used to facilitate the description of method 400 .
  • method 400 can be executed on other communication systems. These other communication systems are within the scope and spirit of the embodiments described herein.
  • method 400 will be described in the context of a mobile phone (e.g., client device 110 of FIG. 1 ) with a mobile phone user as the audio source of the voice command.
  • the mobile phone is communicatively coupled to a network server (e.g., server device 130 of FIG. 1 ) via a communications network (e.g., network 120 of FIG. 1 ).
  • method 400 can be executed on other types of client devices such as, for example and without limitation, a PDA and a laptop and with other audio sources such as, for example and without limitation, a radio and a computer.
  • client devices and audio sources are within the scope and spirit of the embodiments described herein.
  • In step 410 , an audio stream of a voice command is translated into a machine-readable voice command with a speech recognizer located on the mobile phone.
  • speech recognizer 210 translates the audio stream received by microphone 230 .
  • In step 420 , a query is made to a database of the mobile phone to generate a query result based on the machine-readable voice command generated in step 410 .
  • client query manager 220 queries client database 240 to generate the query result.
  • FIGS. 5( a )-( c ) are illustrations of an exemplary user interface (UI) 510 on a mobile phone in accordance with embodiments described herein. These illustrations are used to help facilitate in the explanation of steps 410 and 420 of FIG. 4 .
  • mobile phone UI 510 prompts the mobile phone user for a voice command.
  • the mobile phone user provides “Barry Cage” as the voice command.
  • the mobile phone translates the audio stream of the voice command into a machine-readable voice command using its embedded speech recognizer (e.g., speech recognizer 210 of FIG. 2) .
  • In turn, a query manager on the mobile phone (e.g., client query manager 220 of FIG. 2 ) queries the mobile phone's database for “Barry Cage.”
  • the mobile phone's query manager queries a contact list database for the name “Barry Cage” and finds a query result 520 .
  • a person of ordinary skill in the relevant art will recognize that other databases on the mobile phone can be queried to generate the query result such as, for example and without limitation, call log information, music libraries, and calendar listings.
  • the mobile phone user can select query result 520 to view contact information 530 corresponding to the voice command.
  • In step 430 , the audio stream of the voice command is transmitted to a network server, where the voice command is translated to a machine-readable voice command with a speech recognizer located on the network server.
  • client query manager 220 coordinates a transmission of the audio stream to server device 130 .
  • In step 440 , a query result is received from the network server, where the query result is generated from a query made to a server database based on the machine-readable voice command from step 430 .
  • speech recognizer 310 translates the voice command to the machine-readable voice command.
  • server query manager 320 queries server database 330 to generate the query result. This query result is then transmitted from server device 130 to client device 110 via network 120 .
  • The transmission of the audio stream to the network server (step 430 ) and the reception of the query result from the network server (step 440 ) can be performed simultaneously with, substantially at the same time as, or in a manner that overlaps with the translation of the audio stream of the voice command by the mobile phone (step 410 ) and the query of the database on the mobile phone (step 420 ).
  • the query result from the network server can be received by and displayed on the mobile phone at substantially the same time as, in parallel with, or soon after a display of the query result from the database of the mobile phone.
  • the query result from the network server can be received by and displayed on the mobile phone prior to the display of the query result from the mobile phone's database, according to an embodiment.
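  • A minimal sketch of this overlap, reusing the hypothetical helpers from the earlier sketches: the on-device translation and client-database query run while the audio stream is in flight to the network server, and each query result is displayed as soon as it arrives, in whichever order it completes:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def local_pipeline(audio: bytes, recognizer, client_db):
    """Steps 410-420: translate on the device, then query the client database."""
    result = recognizer.translate(audio)
    return "client", client_db.recall(result.command)


def remote_pipeline(audio: bytes, send_audio_to_server):
    """Steps 430-440: transmit the audio stream and wait for the server's result."""
    return "server", send_audio_to_server(audio)


def run_voice_command(audio, recognizer, client_db, send_audio_to_server, display):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [
            pool.submit(local_pipeline, audio, recognizer, client_db),
            pool.submit(remote_pipeline, audio, send_audio_to_server),
        ]
        # Step 450: display whichever query result is ready first, then the other.
        for future in as_completed(futures):
            source, results = future.result()
            display(source, results)
```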
  • In step 450 of FIG. 4 , the query result from step 420 and the query result from step 440 are displayed on the mobile phone.
  • the query results from steps 420 and 440 are stored in the database of the mobile phone and may be displayed based on a future voice command by the mobile phone user.
  • FIGS. 7( a ) and 7 ( b ) are illustrations of an exemplary UI 710 on a mobile phone in accordance with embodiments described herein. These illustrations are used to help facilitate in the explanation of steps 430 - 450 of FIG. 4 .
  • FIGS. 7( a ) and 7 ( b ) assume that the mobile phone user provides “Barry Cage” as the voice command.
  • field 720 displays a query result from a query made to the mobile phone's database (e.g., client database 240 of FIG. 2) .
  • field 730 displays a query result from a query made to the network server (e.g., server database 330 of FIG. 3 ).
  • field 730 is a list of three entries in which the network server returns as possible matches for the voice command: “barry cage”; “mary paige”; and, “mary peach.” If the mobile phone user does not decide to select an entry from field 720 (i.e., “Barry Cage”), then the mobile phone user can select an entry from field 730 .
  • In an embodiment, a partial portion of the list in field 730 can be received by and displayed on the mobile phone at a first time instance and the remainder of the list in field 730 can be received by and displayed on the mobile phone at a second time instance (e.g., later in time than the first time instance). In this way, the mobile phone user can view a portion of the query results as the remainder of the query results is being processed by the network server and received by the mobile phone.
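  • One way to realize this incremental delivery, assuming (purely for illustration) that the network server emits one JSON-encoded query result per line as it finishes processing:

```python
import json
import urllib.request


def stream_server_results(url: str):
    """Yield query-result entries one at a time as the server sends them."""
    with urllib.request.urlopen(url) as response:
        for line in response:          # assumes a line-delimited JSON response
            line = line.strip()
            if line:
                yield json.loads(line)


def display_results_incrementally(url: str, add_row):
    """Append each entry to the on-screen list as soon as it is received."""
    for entry in stream_server_results(url):
        add_row(entry)                 # e.g., "barry cage", then "mary paige", ...
```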
  • Results 740 from a web search are displayed on the mobile phone.
  • the mobile phone user can then scroll through search results 740 to locate a hyperlink of interest.
  • search results 740 and the query result from step 420 of FIG. 4 are stored in the mobile phone for a future voice command by the mobile phone user, according to an embodiment.
  • FIG. 8 is an illustration of another method 800 for performing a voice command on a client device.
  • Method 800 can occur using, for example, client device 110 in communication system 100 of FIG. 1 . Similar to method 400 of FIG. 4 , for ease of explanation, communication system 100 will be used to facilitate the description of method 800 . Further, for ease of explanation, method 800 will be described in the context of a mobile phone (e.g., client device 110 of FIG. 1 ) with a mobile phone user as the audio source of the voice command.
  • In step 810 , an audio stream of a voice command is received by the mobile phone.
  • microphone 230 is configured to receive the audio stream of the voice command.
  • In step 820 , a speech recognizer located on the mobile phone determines whether the audio stream (from step 810 ) can be translated into a machine-readable voice command with an appropriate confidence score.
  • The speech recognizer located on the mobile phone (e.g., speech recognizer 210 of FIG. 2 ) may not be able to translate more complex voice commands into corresponding machine-readable voice commands with relatively high confidence scores.
  • If a speech recognition confidence score for the voice command is below a predetermined threshold, then a query is not made to a database of the mobile phone based on the voice command, according to an embodiment.
  • In this case, the mobile phone stores the machine-readable voice command with the relatively low confidence score for future recall by the mobile phone. This future recall feature will be described in further detail below.
  • In step 830 , if the speech recognizer located on the mobile phone is able to provide a machine-readable voice command translation for the audio stream of the voice command, then the voice command is translated into the machine-readable voice command with the speech recognizer located on the mobile phone. Step 830 performs a similar function as step 410 of FIG. 4 .
  • In step 840 , a query is made to a database of the mobile phone to generate a query result based on the machine-readable voice command generated from step 830 .
  • Step 840 performs a similar function as step 420 of FIG. 4 .
  • In step 850 , regardless of whether the speech recognizer located on the mobile phone is able to provide the machine-readable voice command translation for the audio stream of the voice command with the appropriate confidence score, the audio stream of the voice command is transmitted to a network server, where the voice command is translated to a machine-readable voice command with a speech recognizer located on the network server. Step 850 performs a similar function as step 430 of FIG. 4 .
  • In step 860 , a query result is received from the network server, where the query result is generated from a query made to a server database based on the machine-readable voice command from step 850 .
  • Step 860 performs a similar function as step 440 of FIG. 4 .
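  • The confidence-gated flow of steps 820-860 can be condensed into the following sketch; the threshold value and helper names are illustrative assumptions, since the patent refers only to a predetermined threshold:

```python
CONFIDENCE_THRESHOLD = 0.75  # illustrative value; the patent only says "predetermined"


def perform_voice_command(audio, recognizer, client_db, send_audio_to_server,
                          low_confidence_store, display):
    # Step 820: attempt an on-device translation and inspect its confidence score.
    result = recognizer.translate(audio)
    confident = result.confidence >= CONFIDENCE_THRESHOLD

    if confident:
        # Steps 830-840: high confidence, so query the client database locally.
        display("client", client_db.recall(result.command))

    # Steps 850-860: the audio stream is sent to the network server either way.
    server_results = send_audio_to_server(audio)
    display("server", server_results)

    if not confident:
        # Low confidence: remember the translation and the server's answer so a
        # substantially similar future command can recall them.
        low_confidence_store[result.command] = server_results
    return server_results
```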
  • FIG. 9 is an illustration of another method 900 for performing a voice command on a client device.
  • Similar to steps 430 and 440 of FIG. 4 , steps 850 and 860 of FIG. 8 can be performed simultaneously with, substantially at the same time as, or in a manner that overlaps with the translation of the audio stream of the voice command by the mobile phone (step 830 ) and the query of the database on the mobile phone (step 840 ), according to an embodiment. As a result, in an embodiment, the query result from the network server can be received by and displayed on the mobile phone at substantially the same time as, in parallel with, or soon after a display of the query result from the database of the mobile phone.
  • the query result from the network server can be received by and displayed on the mobile phone prior to the display of a query result from the mobile phone's database, according to an embodiment.
  • In step 880 of FIG. 8 , if the speech recognizer located on the mobile phone is able to provide a machine-readable voice command translation for the audio stream of the voice command (see step 870 ), the query result from step 840 and the query result from step 860 are displayed on the mobile phone (see step 880 ).
  • The query results from steps 840 and 860 are stored in the database of the mobile phone for a future voice command by the mobile phone user.
  • In the alternative, if the speech recognizer located on the mobile device is not able to provide a machine-readable voice command translation for the audio stream of the voice command (see step 870 ), then only the query result from step 860 is displayed on the mobile phone (see step 890 ).
  • In an embodiment, the query result from step 860 is stored in the database of the mobile phone for a future voice command by the mobile phone user.
  • In an embodiment, a future voice command can be translated into a machine-readable voice command, and this machine-readable voice command can be compared to the machine-readable voice command with the relatively low confidence score (from step 820 of FIG. 8 ). If the two machine-readable voice commands substantially match one another or are substantially similar to one another, then the mobile phone displays the query result from step 840 and/or the query result from step 860 , according to an embodiment.
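  • A rough sketch of that comparison, using a simple string-similarity ratio as an assumed stand-in for whatever matching criterion the client actually applies:

```python
from difflib import SequenceMatcher


def commands_match(new_command: str, stored_command: str,
                   similarity_threshold: float = 0.85) -> bool:
    """Return True if two machine-readable voice commands are substantially similar."""
    a, b = new_command.strip().lower(), stored_command.strip().lower()
    return a == b or SequenceMatcher(None, a, b).ratio() >= similarity_threshold


def recall_previous_results(new_command: str, stored_results: dict):
    """stored_results maps earlier low-confidence commands to their saved query results."""
    for stored_command, results in stored_results.items():
        if commands_match(new_command, stored_command):
            return results
    return None
```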
  • An exemplary method and system to store and retrieve data in fields 720 and 730 of FIG. 7( a ) can be found in U.S. patent application Ser. No. 12/783,470 (Atty. Docket No. 2525.2360000), which is entitled “Personalization and Latency Reduction for Voice-Activated Commands” and incorporated herein by reference in its entirety.
  • the audio stream corresponding to the future voice command is transmitted to the network server, where the voice command is translated to a machine-readable voice command with the speech recognizer located on the network server.
  • a query is made to a database on the network server to generate a query result. This query result is received by, displayed on, and stored in the mobile phone, according to an embodiment.
  • A benefit, among others, of displaying both the stored query result corresponding to the prior voice command and another query result corresponding to the future voice command is that the mobile phone user is able to view an updated query result (if any) from the network server, according to an embodiment.
  • the speech recognizer on the mobile phone may mischaracterize the future voice command as corresponding to a previously-stored voice command.
  • the speech recognizer located on the network server may be able to resolve the mischaracterization by providing a more accurate translation of the future voice command than the translation provided by speech recognizer located on the mobile phone, according to an embodiment.
  • FIGS. 10( a )-( e ) are illustrations of an exemplary UI 1010 on a mobile phone in accordance with embodiments described herein. These illustrations are used to help facilitate in the explanation of method 800 .
  • mobile phone UI 1010 prompts the mobile phone user for a voice command.
  • the mobile phone user provides “pizza my heart” as the voice command.
  • the mobile phone receives the voice command and determines whether the audio stream of the voice command can be translated into a machine-readable voice command with an appropriate confidence score.
  • the voice command “pizza my heart” does not return a speech recognition confidence score above the predetermined threshold value. In other words, the voice command “pizza my heart” does not return a high-confidence match from the speech recognizer located on the mobile phone.
  • the audio stream of the voice command is transmitted to a network server for further speech recognition processing, in accordance with step 850 .
  • FIG. 10( b ) is an illustration of an exemplary list of query results 1020 from the voice command made to the network server.
  • Exemplary list of query results 1020 is transmitted from the network server to the mobile phone, in accordance with step 860 .
  • In an embodiment, information relating to each of the query results (e.g., web pages, images, text data) can be stored in cache memory of the mobile phone. This allows the mobile phone user to select a query result of interest from exemplary list of query results 1020 and instantly view information relating to that result, thus improving the mobile phone user's experience.
  • For instance, with respect to FIG. 10( c ), the mobile phone user selects the top entry "pizza my heart" from exemplary list of query results 1020 and a list of web search results 1030 is displayed on the mobile phone. From the web search results, the mobile phone user can select a hyperlink of interest (e.g., www.pizzamyheart.com) and view the contents of the web page on the mobile phone, as illustrated in a web page 1040 of FIG. 10( d ).
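  • A hypothetical prefetching sketch of that caching behavior, assuming each entry in the server's query-result list carries a URL (the patent does not prescribe this mechanism):

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

page_cache = {}  # url -> page bytes; in-memory stand-in for the phone's cache memory


def prefetch(url):
    """Fetch and cache the page behind a single query result (best effort)."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            page_cache[url] = response.read()
    except OSError:
        pass


def prefetch_results(query_results):
    """Warm the cache for every entry in the server's result list in the background."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        for entry in query_results:
            if "url" in entry:
                pool.submit(prefetch, entry["url"])
```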
  • a partial portion of the exemplary list of query results can be received by and displayed on the mobile phone at a first time instance and the remainder of the exemplary list of query results can be received by and displayed on the mobile phone at a second time instance (e.g., later in time than the first time instance).
  • the mobile phone user can view a portion of the query results as the remainder of the query results is being processed by the network server and received by the mobile phone.
  • the query result selected by the mobile phone user is stored in the database of the mobile phone for a future voice command by the mobile phone user.
  • the hyperlink “www.pizzamyheart.com” appears as a query result from a query made to the database of the mobile phone when, at a later time, the mobile phone user provides “pizza my heart” as a voice command to the mobile phone. This is illustrated in field 1050 of FIG. 10( e ).
  • the mobile phone user can select the query result in field 1050 and view the web page at “www.pizzamyheart.com,” as illustrated in FIG. 10( d ).
  • In storing the query result and associated web page, the mobile phone user receives the benefit of viewing a previously-selected web search result. In turn, the mobile phone user's experience is enhanced since the mobile phone is able to quickly recall a selected entry from a previous voice command.
  • An exemplary method and system to store and retrieve data in field 1050 of FIG. 10( e ) can be found in U.S. patent application Ser. No. 12/783,470 (Atty. Docket No. 2525.2360000), which is entitled “Personalization and Latency Reduction for Voice-Activated Commands” and incorporated herein by reference in its entirety.
  • FIG. 11 is an illustration of an example computer system 1100 in which embodiments, or portions thereof, can be implemented as computer-readable code.
  • the methods illustrated by flowchart 400 of FIG. 4 , flowchart 600 of FIG. 6 , flowchart 800 of FIG. 8 , or flowchart 900 of FIG. 9 can be implemented in computer system 1100 .
  • Various embodiments are described in terms of this example computer system 1100 . After reading this description, it will become apparent to a person skilled in the relevant art how to implement embodiments described herein using other computer systems and/or computer architectures.
  • Computer system 1100 is an example computing device and includes one or more processors, such as processor 1104 .
  • Processor 1104 may be a special purpose or a general-purpose processor.
  • Processor 1104 is connected to a communication infrastructure 1106 (e.g., a bus or network).
  • Computer system 1100 also includes a main memory 1108 , preferably random access memory (RAM), and may also include a secondary memory 1110 .
  • Secondary memory 1110 can include, for example, a hard disk drive 1112 , a removable storage drive 1114 , and/or a memory stick.
  • Removable storage drive 1114 can comprise a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory, or the like.
  • the removable storage drive 1114 reads from and/or writes to a removable storage unit 1118 in a well-known manner.
  • Removable storage unit 1118 can include a floppy disk, magnetic tape, optical disk, etc. which is read by and written to by removable storage drive 1114 .
  • removable storage unit 1118 includes a computer-usable storage medium having stored therein computer software and/or data.
  • secondary memory 1110 can include other similar devices for allowing computer programs or other instructions to be loaded into computer system 1100 .
  • Such devices can include, for example, a removable storage unit 1122 and an interface 1120 .
  • Examples of such devices can include a program cartridge and cartridge interface (such as those found in video game devices), a removable memory chip (e.g., EPROM or PROM) and associated socket, and other removable storage units 1122 and interfaces 1120 which allow software and data to be transferred from the removable storage unit 1122 to computer system 1100 .
  • Computer system 1100 can also include a communications interface 1124 .
  • Communications interface 1124 allows software and data to be transferred between computer system 1100 and external devices.
  • Communications interface 1124 can include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, or the like.
  • Software and data transferred via communications interface 1124 are in the form of signals, which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 1124 . These signals are provided to communications interface 1124 via a communications path 1126 .
  • Communications path 1126 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, a RF link or other communications channels.
  • In this document, the terms "computer program medium" and "computer-usable medium" are used to generally refer to media such as removable storage unit 1118 , removable storage unit 1122 , and a hard disk installed in hard disk drive 1112 .
  • Computer program medium and computer-usable medium can also refer to memories, such as main memory 1108 and secondary memory 1110 , which can be memory semiconductors (e.g., DRAMs, etc.). These computer program products provide software to computer system 1100 .
  • Computer programs are stored in main memory 1108 and/or secondary memory 1110 . Computer programs may also be received via communications interface 1124 . Such computer programs, when executed, enable computer system 1100 to implement embodiments discussed herein. In particular, the computer programs, when executed, enable processor 1104 to implement processes described above, such as the steps in the methods illustrated by flowchart 400 of FIG. 4 , flowchart 600 of FIG. 6 , flowchart 800 of FIG. 8 , and flowchart 900 of FIG. 9 , discussed above. Accordingly, such computer programs represent controllers of the computer system 1100 . Where embodiments described herein are implemented using software, the software can be stored in a computer program product and loaded into computer system 1100 using removable storage drive 1114 , interface 1120 , hard drive 1112 or communications interface 1124 .
  • The computer programs, when executed, can enable one or more processors to implement processes described above, such as the steps in the methods illustrated by flowchart 400 of FIG. 4 , flowchart 600 of FIG. 6 , flowchart 800 of FIG. 8 , and flowchart 900 of FIG. 9 .
  • the one or more processors can be part of a computing device incorporated in a clustered computing environment or server farm.
  • the computing process performed by the clustered computing environment such as, for example, the steps in the methods illustrated by flowcharts 400 , 600 , 800 , and 900 may be carried out across multiple processors located at the same or different locations.
  • Embodiments are also directed to computer program products including software stored on any computer-usable medium. Such software, when executed in one or more data processing device, causes a data processing device(s) to operate as described herein.
  • Embodiments employ any computer-usable or -readable medium, known now or in the future. Examples of computer-usable mediums include, but are not limited to, primary storage devices (e.g., any type of random access memory), secondary storage devices (e.g., hard drives, floppy disks, CD ROMS, ZIP disks, tapes, magnetic storage devices, optical storage devices, MEMS, nanotechnological storage devices, etc.), and communication mediums (e.g., wired and wireless communications networks, local area networks, wide area networks, intranets, etc.).

Abstract

A method, computer program product, and system are provided for performing a voice command on a client device. The method can include translating, using a first speech recognizer located on the client device, an audio stream of a voice command to a first machine-readable voice command and generating a first query result using the first machine-readable voice command to query a client database. In addition, the audio stream can be transmitted to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer. Further, the method can include receiving a second query result from the remote server device, where the second query result is generated by the remote server device using the second machine-readable voice command and displaying the first query result and the second query result on the client device.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • This application claims the benefit of U.S. Provisional Application No. 61/298,251 (SKGF Ref. No. 2525.2310000), filed Jan. 26, 2010, titled “Integration of Embedded and Network Speech Recognizers,” which is incorporated herein by reference in its entirety.
  • BACKGROUND
  • 1. Field
  • This description generally relates to the field of speech recognition.
  • 2. Background
  • Speech recognition systems in mobile devices allow users to communicate and provide commands to the mobile device with minimal usage of input controls such as, for example, keypads, buttons, and dials. Some speech recognition tasks can be complex for mobile devices, requiring extensive analysis of speech signals and searches of word and language statistical models, while mobile devices typically have limited computational, memory, and battery resources. As such, more complex speech recognition tasks are oftentimes offloaded to speech recognition systems located externally to the mobile device such as, for example, speech recognition systems in network servers.
  • Since more complex speech recognition tasks are performed on network servers and not on the mobile device, the results of the voice command may be limited to data stored in the network server. For these speech recognition tasks, the mobile device user does not have the benefit of viewing query results that may correspond to the voice command based on data stored in the mobile device. In addition, the delay time in transferring the voice command to the network server, performing the speech recognition operation at the network server, and transferring the query result from the network server to the mobile device can be significant. Significant delay time in the execution of applications on mobile devices, such as speech recognition tasks, can lead to a poor user experience.
  • Methods and systems are needed for performing speech recognition tasks on a client device, such as a mobile device, to overcome the above-noted limitations of speech recognition systems in mobile applications.
  • SUMMARY
  • Embodiments include a method for performing a voice command on a client device. The method includes translating, using a first speech recognizer located on the client device, an audio stream of a voice command to a first machine-readable voice command and generating a first query result using the first machine-readable voice command to query a client database. In addition, the audio stream can be transmitted to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer. Further, the method includes receiving a second query result from the remote server device, where the second query result is generated by the remote server device using the second machine-readable voice command to query a remote server database, and displaying the first query result and the second query result on the client device. The transmission of the audio stream to the remote server device and the transmission of the second query result from the remote server device to the client device can occur simultaneously with, substantially at the same time as, or over a time period that overlaps with the generation of the first query result by the client device.
  • Embodiments additionally include a computer program product that includes a computer-usable medium with computer program logic recorded thereon for enabling a processor to perform a voice command on a client device. The computer program logic includes the following: first computer readable program code that enables a processor to translate, using a first speech recognizer located on the client device, an audio stream of a voice command to a first machine-readable voice command; second computer readable program code that enables a processor to generate a first query result using the first machine-readable voice command to query a client database; third computer readable program code that enables a processor to transmit the audio stream to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer; fourth computer readable program code that enables a processor to process a second query result from the remote server device, wherein the second query result is generated by the remote server device using the second machine-readable voice command; and, fifth computer readable program code that enables a processor to control a display of the first query result and the second query result on the client device.
  • Embodiments further include a system for performing a voice command on a client device. The system includes a first speech recognizer device, a client query manager, and a display device. The first speech recognizer device is configured to translate an audio stream of a voice command to a first machine-readable voice command. The client query manager is configured to perform the following functions: generate a first query result using the first machine-readable voice command to query a client database; transmit the audio stream to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer; and, receive a second query result from the remote server device, where the second query result is generated by the remote server device using the second machine-readable voice command to query a remote server database. Further, the display device is configured to display the first query result and the second query result on the client device.
  • Further features and advantages of embodiments described herein, as well as the structure and operation of various embodiments, are described in detail below with reference to the accompanying drawings. It is noted that this description is not limited to the specific embodiments described herein. Such embodiments are presented herein for illustrative purposes only. Additional embodiments will be apparent to persons skilled in the relevant art based on the teachings contained herein.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate embodiments and, together with the description, further serve to explain the principles of the embodiments and to enable a person skilled in the relevant art to make and use the embodiments.
  • FIG. 1 is an illustration of an exemplary communication system in which embodiments can be implemented.
  • FIG. 2 is an illustration of an embodiment of a client device.
  • FIG. 3 is an illustration of an embodiment of a server device.
  • FIG. 4 is an illustration of an embodiment of a method for performing a voice command on a client device.
  • FIGS. 5( a)-5(c) are illustrations of an exemplary user interface on a mobile phone in accordance with embodiments.
  • FIG. 6 is an illustration of an embodiment of a method for performing a voice command on a client device.
  • FIGS. 7( a) and 7(b) are illustrations of another exemplary user interface on a mobile phone in accordance with embodiments.
  • FIG. 8 is an illustration of an embodiment of another method for performing a voice command on a client device.
  • FIG. 9 is an illustration of an embodiment of another method for performing a voice command on a client device.
  • FIGS. 10( a)-10(e) are illustrations of yet another exemplary user interface on a mobile phone in accordance with embodiments.
  • FIG. 11 is an illustration of an example computer system in which embodiments can be implemented.
  • DETAILED DESCRIPTION
  • The following detailed description refers to the accompanying drawings that illustrate exemplary embodiments. Other embodiments are possible, and modifications can be made to the embodiments within the spirit and scope of this description. Therefore, the detailed description is not meant to limit the embodiments described below.
  • It would be apparent to one of skill in the relevant art that the embodiments described below can be implemented in many different embodiments of software, hardware, firmware, and/or the entities illustrated in the figures. Any actual software code with the specialized control of hardware to implement embodiments is not limiting of this description. Thus, the operational behavior of embodiments will be described with the understanding that modifications and variations of the embodiments are possible, given the level of detail presented herein.
  • FIG. 1 is an illustration of an exemplary communication system 100 in which embodiments can be implemented. Communication system 100 includes a client device 110 that is communicatively coupled to a server device 130 via a network 120. Client device 110 can be, for example and without limitation, a mobile phone, a personal digital assistant (PDA), a laptop, or other similar types of mobile devices. Server device 130 can be, for example and without limitation, a telecommunications server, a web server, or other similar types of database servers. In an embodiment, server device 130 can have multiple processors and multiple shared or separate memory components such as, for example and without limitation, one or more computing devices incorporated in a clustered computing environment or server farm. The computing process performed by the clustered computing environment, or server farm, may be carried out across multiple processors located at the same or different locations. In an embodiment, server device 130 can be implemented on a single computing device. Examples of computing devices include, but are not limited to, a central processing unit, an application-specific integrated circuit, or other type of computing device having at least one processor and memory. Further, network 120 can be, for example and without limitation, a wired (e.g., ethernet) or a wireless (e.g., Wi-Fi and 3G) network that communicatively couples client device 110 to server device 130.
  • FIG. 2 is an illustration of an embodiment of client device 110. Client device 110 includes a speech recognizer 210, a client query manager 220, a microphone 230, a client database 240, and a display device 250. In an embodiment, microphone 230 is coupled to speech recognizer 210, which is coupled to client query manager 220. Client query manager 220 is also coupled to client database 240 and display device 250, according to an embodiment.
  • In an embodiment, speech recognizer 210 and client query manager 220 can be implemented in software, firmware, hardware, or a combination thereof. Embodiments of speech recognizer 210 and client query manager 220, or portions thereof, can also be implemented as computer-readable code executed on one or more computing devices capable of carrying out the functionality described herein. Examples of computing devices include, but are not limited to, a central processing unit, an application-specific integrated circuit, or other type of computing device having at least one processor and memory.
  • In an embodiment, microphone 230 is configured to receive an audio stream corresponding to a voice command and to provide the voice command to speech recognizer 210. The voice command can be generated from an audio source such as, for example and without limitation, a mobile phone user, according to an embodiment. In turn, speech recognizer 210 is configured to translate the audio stream to a machine-readable voice command, according to an embodiment. Methods and techniques to translate the audio stream to the machine-readable voice command are known to a person of ordinary skill in the relevant art. Examples of these methods and techniques can be found in commercial speech recognition software such as Dragon Naturally Speaking Software and MacSpeech Software, both by Nuance Communications, Inc.
  • Based on the machine-readable voice command, in an embodiment, client query manager 220 queries client database 240 to generate a query result. In an embodiment, client database 240 contains information that is locally stored in client device 110 such as, for example and without limitation, telephone numbers, address information, and results from previous voice commands (described in further detail below). Based on the description herein, a person of ordinary skill in the relevant art will recognize that other data stored in client database 240 can provide query results to embodiments described herein.
  • In an embodiment, client query manager 220 also coordinates a transmission of the audio stream corresponding to the voice command to server device 130 via network 120 of FIG. 1. The audio stream can be transmitted to server device 130 in multiple types of audio file formats such as, for example and without limitation, a WAVE audio format. After server device 130 processes the audio stream, which will be described in further detail below, client query manager 220 coordinates a reception of a query result from server device 130 via network 120. The transmission of data to and reception of data from server device 130 can be performed using a transceiver (not shown in FIG. 2), which is known by a person of ordinary skill in the relevant art.
  • Client query manager 220 of FIG. 2 coordinates the transmission of the audio stream to server device 130 simultaneously with, at substantially the same time as, or in parallel with its query of client database 240, according to an embodiment. As a result, in an embodiment, the query result from server device 130 can be received by client query manager 220 and displayed on display device 250 at substantially the same time as, in parallel with, or soon after the query result from client device 110. In the alternative, depending on the computation time for client query manager 220 to query client database 240 or the complexity of the voice command, the query result from server device 130 can be received by client query manager 220 and displayed on display device 250 prior to the display of a query result from client database 240, according to an embodiment.
  • In reference to FIG. 2, in an embodiment, display device 250 is configured to display the query results from client database 240 and from server device 130. These query results are stored in client database 240 and may be retrieved at a later time based on a future voice command that is substantially the same as or substantially similar to the voice command used to generate the query results, according to an embodiment.
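  • One simple way to model this store-and-recall behavior is a cache keyed by the recognized command text, as in the hypothetical sketch below; matching of merely similar (rather than identical) commands is discussed later with respect to FIG. 8.

    # Hypothetical sketch: store displayed query results and recall them for a repeated command.
    result_cache = {}   # maps a normalized voice command to previously displayed results

    def store_results(command_text: str, results: list) -> None:
        result_cache[command_text.strip().lower()] = results

    def recall_results(command_text: str):
        return result_cache.get(command_text.strip().lower())

    store_results("Barry Cage", ["Barry Cage (contact)", "barry cage (web)"])
    print(recall_results("barry cage"))  # -> ['Barry Cage (contact)', 'barry cage (web)']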
  • FIG. 3 is an illustration of an embodiment of server device 130. Server device 130 includes a speech recognizer 310, a server query manager 320, and a server database 330. In an embodiment, speech recognizer 310 is coupled to server query manager 320, which is coupled to server database 330.
  • In an embodiment, speech recognizer 310 and server query manager 320 can be implemented in software, firmware, hardware, or a combination thereof. Embodiments of speech recognizer 310 and server query manager 320, or portions thereof, can also be implemented as computer-readable code executed on one or more computing devices capable of carrying out the functionality described herein. Examples of computing devices include, but are not limited to, a central processing unit, an application-specific integrated circuit, or other type of computing device having at least one processor and memory.
  • As described above, with respect to FIG. 2, server device 130 receives an audio stream corresponding to a voice command from client device 110. In an embodiment, server query manager 320 coordinates the reception of the audio stream from client device 110 via a transceiver (not shown in FIG. 3) and the transfer of the audio stream to speech recognizer 310. In turn, speech recognizer 310 is configured to translate the audio stream to a machine-readable voice command, according to an embodiment.
  • In an embodiment, speech recognizer 310 is configured to handle both simple speech recognition tasks and speech recognition tasks that are more complex than those handled by speech recognizer 210 in client device 110. This is because speech recognizer 310 has more computational and memory resources than speech recognizer 210 to translate more complex voice commands to corresponding machine-readable voice commands, according to an embodiment. Methods and techniques to process complex speech recognition tasks are known to a person of ordinary skill in the relevant art.
  • Based on the machine-readable voice command translated by speech recognizer 310, in an embodiment, server query manager 320 queries server database 330 to generate a query result. In an embodiment, server database 330 contains a wide array of information such as, for example and without limitation, text data, image data, and video. Based on the description herein, a person of ordinary skill in the relevant art will recognize that other data stored in server database 330 can provide query results to embodiments described herein.
  • After a query result is retrieved from server database 330, server query manager 320 coordinates a transmission of the query result to client device 110 via network 120 of FIG. 1. The transmission of data to and the reception of data from client device 110 can be performed using a transceiver (not shown in FIG. 3), which is known by a person of ordinary skill in the relevant art.
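  • The server-side path can be summarized, purely for illustration, by the Python sketch below; the recognizer and database lookups are stubs standing in for speech recognizer 310 and server database 330, not implementations of them.

    # Hypothetical sketch of the server path: recognize the audio, query the server database,
    # and return the result to be transmitted back to the client.
    def server_recognize(audio_stream: bytes) -> str:
        return "barry cage"                        # stands in for speech recognizer 310

    def query_server_database(command_text: str) -> list:
        index = {"barry cage": ["barry cage", "mary paige", "mary peach"]}
        return index.get(command_text, [])         # stands in for server database 330

    def handle_client_request(audio_stream: bytes) -> list:
        command_text = server_recognize(audio_stream)
        return query_server_database(command_text)

    print(handle_client_request(b"\x00\x01"))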
  • FIG. 4 is an illustration of an embodiment of a method 400 for performing a voice command on a client device. Method 400 can occur using, for example, client device 110 in communication system 100 of FIG. 1. Unlike speech recognition systems that offload more complex voice commands to a server device for processing and return a corresponding query result to the client device, a speech recognition system operating in accordance with method 400 processes both simple and complex voice commands on both the client device and the server device. The query results generated by the client device and the server device provide information from a client database and a server database, respectively. As a result, the user of the client device receives the benefit of viewing query results that may correspond to the voice command based on data stored on the client device as well as data stored on the server device.
  • For ease of explanation, communication system 100 will be used to facilitate the description of method 400. However, based on the description herein, a person of ordinary skill in the relevant art will recognize that method 400 can be executed on other communication systems. These other communication systems are within the scope and spirit of the embodiments described herein.
  • Further, for ease of explanation, method 400 will be described in the context of a mobile phone (e.g., client device 110 of FIG. 1) with a mobile phone user as the audio source of the voice command. The mobile phone is communicatively coupled to a network server (e.g., server device 130 of FIG. 1) via a communications network (e.g., network 120 of FIG. 1). Based on the description herein, a person of ordinary skill in the relevant art will recognize that method 400 can be executed on other types of client devices such as, for example and without limitation, a PDA and a laptop and with other audio sources such as, for example and without limitation, a radio and a computer. These other types of client devices and audio sources are within the scope and spirit of the embodiments described herein.
  • In step 410, an audio stream of a voice command is translated into a machine-readable voice command with a speech recognizer located on the mobile phone. As described above, with respect to FIG. 2, speech recognizer 210 translates the audio stream received by microphone 230.
  • In step 420, a query is made to a database of the mobile phone to generate a query result based on the machine-readable voice command generated from step 410. In reference to FIG. 2, based on the machine-readable voice command translated by speech recognizer 210, client query manager 220 queries client database 240 to generate the query result.
  • FIGS. 5(a)-(c) are illustrations of an exemplary user interface (UI) 510 on a mobile phone in accordance with embodiments described herein. These illustrations are used to help explain steps 410 and 420 of FIG. 4.
  • With respect to FIG. 5(a), mobile phone UI 510 prompts the mobile phone user for a voice command. In this example, the mobile phone user provides “Barry Cage” as the voice command. In turn, in accordance with step 410, the mobile phone translates the audio stream of the voice command into a machine-readable voice command using its embedded speech recognizer (e.g., speech recognizer 210 of FIG. 2). A query manager on the mobile phone (e.g., client query manager 220 of FIG. 2) queries the mobile phone's database for “Barry Cage.”
  • With respect to FIG. 5(b), the mobile phone's query manager queries a contact list database for the name “Barry Cage” and finds a query result 520. Based on the description herein, a person of ordinary skill in the relevant art will recognize that other databases on the mobile phone can be queried to generate the query result such as, for example and without limitation, call log information, music libraries, and calendar listings.
  • With respect to FIG. 5(c), the mobile phone user can select query result 520 to view contact information 530 corresponding to the voice command.
  • In reference to FIG. 4, in step 430, the audio stream of the voice command is transmitted to a network server, where the voice command is translated to a machine-readable voice command with a speech recognizer located on the network server. As described above, with respect to FIG. 2, client query manager 220 coordinates a transmission of the audio stream to server device 130.
  • In step 440, a query result is received from the network server, where the query result is generated from a query made to a server database based on the machine-readable voice command from step 430. With respect to FIG. 3, speech recognizer 310 translates the voice command to the machine-readable voice command. Based on the machine-readable voice command, server query manager 320 queries server database 330 to generate the query result. This query result is then transmitted from server device 130 to client device 110 via network 120.
  • In an embodiment, as illustrated in method 600 of FIG. 6, the transmission of the audio stream to the network server (step 430) and the reception of the query result from the network server (step 440) can be performed simultaneously with, at substantially the same time as, or so as to overlap with the translation of the audio stream of the voice command by the mobile phone (step 410) and the query of the database on the mobile phone (step 420). As a result, in an embodiment, the query result from the network server can be received by and displayed on the mobile phone at substantially the same time as, in parallel with, or soon after a display of the query result from the database of the mobile phone. In the alternative, depending on the computation time to query the mobile phone's database or the complexity of the voice command, the query result from the network server can be received by and displayed on the mobile phone prior to the display of the query result from the mobile phone's database, according to an embodiment.
  • In step 450 of FIG. 4, the query result from step 420 and the query result from step 440 are displayed on the mobile phone. In an embodiment, the query results from steps 420 and 440 are stored in the database of the mobile phone and may be displayed based on a future voice command by the mobile phone user.
  • FIGS. 7(a) and 7(b) are illustrations of an exemplary UI 710 on a mobile phone in accordance with embodiments described herein. These illustrations are used to help explain steps 430-450 of FIG. 4.
  • Similar to FIG. 5, the illustrations of FIGS. 7(a) and 7(b) assume that the mobile phone user provides “Barry Cage” as the voice command. With respect to FIG. 7(a), in accordance with steps 410 and 420 of FIG. 4, field 720 displays a query result from a query made to the mobile phone's database (e.g., client database 240 of FIG. 2). In addition, in accordance with steps 430-450, field 730 displays a query result from a query made to the network server (e.g., server database 330 of FIG. 3).
  • In the example of FIG. 7(a), field 730 is a list of three entries that the network server returns as possible matches for the voice command: “barry cage”; “mary paige”; and “mary peach.” If the mobile phone user does not select an entry from field 720 (i.e., “Barry Cage”), then the mobile phone user can select an entry from field 730. In addition, a partial portion of the list in field 730 can be received by and displayed on the mobile phone at a first time instance and the remainder of the list in field 730 can be received by and displayed on the mobile phone at a second time instance (e.g., later in time than the first time instance). In this way, the mobile phone user can view a portion of the query results as the remainder of the query results is being processed by the network server and received by the mobile phone.
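  • A minimal sketch of this incremental display, with a hypothetical chunked result source in place of the network server, is shown below.

    # Hypothetical sketch: display the first portion of the server's list immediately
    # and append the remainder when it arrives.
    import time
    from typing import Iterator, List

    def server_results_in_chunks() -> Iterator[List[str]]:
        yield ["barry cage"]                  # first portion, available at the first time instance
        time.sleep(0.1)                       # remainder still being processed or transferred
        yield ["mary paige", "mary peach"]    # second portion, arrives at the second time instance

    displayed: List[str] = []
    for chunk in server_results_in_chunks():
        displayed.extend(chunk)               # the on-screen list grows as chunks arrive
        print(displayed)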
  • With respect to FIG. 7(b), if the mobile phone user selects “barry cage” from field 730 of FIG. 7(a), then results 740 from a web search are displayed on the mobile phone. The mobile phone user can then scroll through search results 740 to locate a hyperlink of interest. In accordance with step 450 of FIG. 4, search results 740 and the query result from step 420 of FIG. 4 (e.g., field 720 of FIG. 7(a)) are stored in the mobile phone for a future voice command by the mobile phone user, according to an embodiment. For instance, if the mobile phone user provides “Barry Cage” as a voice command at a later point in time, “Barry Cage” in field 720 and “barry cage” in field 730 of FIG. 7(a) would be retrieved from the mobile phone's memory and displayed to the mobile phone user. In storing the web search result for “Barry Cage,” the mobile phone user receives the benefit of viewing a previously-selected web search result. In turn, the mobile phone user's experience is enhanced since the mobile phone is able to quickly recall a selected entry from a previous voice command. An exemplary method and system to store and retrieve data in fields 720 and 730 of FIG. 7(a) can be found in U.S. patent application Ser. No. 12/783,470 (Atty. Docket No. 2525.2360000), which is entitled “Personalization and Latency Reduction for Voice-Activated Commands” and incorporated herein by reference in its entirety.
  • FIG. 8 is an illustration of another method 800 for performing a voice command on a client device. Method 800 can occur using, for example, client device 110 in communication system 100 of FIG. 1. Similar to method 400 of FIG. 4, for ease of explanation, communication system 100 will be used to facilitate the description of method 800. Further, for ease of explanation, method 800 will be described in the context of a mobile phone (e.g., client device 110 of FIG. 1) with a mobile phone user as the audio source of the voice command.
  • In step 810, an audio stream of a voice command is received by the mobile phone. As described above, with respect to FIG. 2, microphone 230 is configured to receive the audio stream of the voice command.
  • In step 820, a speech recognizer located on the mobile phone determines whether the audio stream (from step 810) can be translated into a machine-readable voice command with an appropriate confidence score. In an embodiment, due to computational and memory resources of the mobile phone, the speech recognizer located on the mobile phone (e.g., speech recognizer 210 of FIG. 2) may not be able to translate more complex voice commands into corresponding machine-readable voice commands with relatively high confidence scores. In particular, if a speech recognition confidence score for the voice command is below a predetermined threshold, then a query is not made to a database of the mobile phone based on the voice command, according to an embodiment. Instead, in an embodiment, the mobile phone stores the machine-readable voice command with the relatively low confidence score for future recall by the mobile phone. This future recall feature will be described in further detail below. Methods and techniques to determine speech recognition confidence scores are known to a person of ordinary skill in the relevant art.
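  • The confidence gate described above can be sketched as follows; the stubbed recognizer, the threshold value, and the storage list are illustrative assumptions rather than the embodiments themselves.

    # Hypothetical sketch: skip the local query when the embedded recognizer's confidence is low,
    # but keep the low-confidence translation for future recall; the audio is sent to the server either way.
    CONFIDENCE_THRESHOLD = 0.6   # illustrative value; the predetermined threshold is not specified here

    def recognize_locally(audio_stream: bytes):
        return "pizza my heart", 0.35          # stands in for (text, confidence) from the embedded recognizer

    def handle_audio(audio_stream: bytes, low_confidence_store: list) -> list:
        text, confidence = recognize_locally(audio_stream)
        if confidence >= CONFIDENCE_THRESHOLD:
            return [f"local match for '{text}'"]   # query the mobile phone's database
        low_confidence_store.append(text)          # store the translation for future recall
        return []                                  # no local query is made

    store = []
    print(handle_audio(b"\x00", store), store)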
  • In step 830, if the speech recognizer located on the mobile phone is able to provide a machine-readable voice command translation for the audio stream of the voice command, then the voice command is translated into the machine-readable voice command with the speech recognizer located on the mobile phone. Step 830 performs a similar function as step 410 of FIG. 4.
  • In step 840, a query is made on a database of the mobile phone to generate a query result based on the machine-readable voice command generated from step 830. Step 840 performs a similar function as step 420 of FIG. 4.
  • In step 850, regardless of whether the speech recognizer located on the mobile phone is able to provide the machine-readable voice command translation for the audio stream of the voice command with the appropriate confidence score, the audio stream of the voice command is transmitted to a network server, where the voice command is translated to a machine-readable voice command with a speech recognizer located on the network server. Step 850 performs a similar function as step 430 of FIG. 4.
  • In step 860, a query result is received from the network server, where the query result is generated from a query made to a server database based on the machine-readable voice command from step 850. Step 860 performs a similar function as step 440 of FIG. 4.
  • FIG. 9 is an illustration of another method 900 for performing a voice command on a client device. Similar to steps 430 and 440 of FIG. 6, steps 850 and 860 of FIG. 8 can be performed simultaneously with, at substantially the same time as, or so as to overlap with the translation of the audio stream of the voice command by the mobile phone (step 830) and the query of the database on the mobile phone (step 840), according to an embodiment. As a result, in an embodiment, the query result from the network server can be received by and displayed on the mobile phone at substantially the same time as, in parallel with, or soon after a display of the query result from the database of the mobile phone. In the alternative, depending on the computation time to query the mobile phone's database or the complexity of the voice command, the query result from the network server can be received by and displayed on the mobile phone prior to the display of a query result from the mobile phone's database, according to an embodiment.
  • In reference to step 880 of FIG. 8, if the speech recognizer located on the mobile phone is able to provide a machine-readable voice command translation for the audio stream of the voice command (see step 870), the query result from step 840 and the query result from step 860 are displayed on the mobile phone (see step 880). In an embodiment, the query results from steps 840 and 860 are stored in the database of the mobile phone for a future voice command by the mobile phone user.
  • In the alternative, if the speech recognizer located on the mobile device is not able to provide a machine-readable voice command translation for the audio stream of the voice command (see step 870), then only the query result from step 860 is displayed on the mobile phone (see step 890). In an embodiment, the query result from step 860 is stored in the database of the mobile phone for a future voice command by the mobile phone user.
  • In an embodiment, a future voice command can be translated into a machine-readable voice command, and this machine-readable voice command can be compared to the machine-readable voice command with the relatively low confidence score (from step 820 of FIG. 8). If the two machine-readable voice commands substantially match one another or are substantially similar to one another, then the mobile phone displays the query result from step 840 and/or the query result from step 860, according to an embodiment. An exemplary method and system to store and retrieve data in fields 720 and 730 of FIG. 7(a) can be found in U.S. patent application Ser. No. 12/783,470 (Atty. Docket No. 2525.2360000), which is entitled “Personalization and Latency Reduction for Voice-Activated Commands” and incorporated herein by reference in its entirety.
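  • As an illustration of this comparison, the sketch below treats two recognized strings as substantially similar when a difflib ratio exceeds a chosen cutoff; the cutoff and the stored data are hypothetical, since the embodiments do not define the similarity measure numerically.

    # Hypothetical sketch: recall stored results when a new command substantially matches a stored one.
    from difflib import SequenceMatcher

    def substantially_similar(a: str, b: str, cutoff: float = 0.85) -> bool:
        return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio() >= cutoff

    stored_results = {"pizza my heart": ["www.pizzamyheart.com"]}

    def recall_stored_results(new_command: str):
        for old_command, results in stored_results.items():
            if substantially_similar(new_command, old_command):
                return results
        return None

    print(recall_stored_results("pizza my heart"))  # -> ['www.pizzamyheart.com']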
  • In addition, according to an embodiment, the audio stream corresponding to the future voice command is transmitted to the network server, where the voice command is translated to a machine-readable voice command with the speech recognizer located on the network server. Based on the machine-readable voice command corresponding to the future voice command, in an embodiment, a query is made to a database on the network server to generate a query result. This query result is received by, displayed on, and stored in the mobile phone, according to an embodiment.
  • A benefit, among others, of displaying the stored query result corresponding to the prior voice command together with another query result corresponding to the future voice command is that the mobile phone user is able to view an updated query result (if any) from the network server, according to an embodiment. In addition, in an embodiment, the speech recognizer on the mobile phone may mischaracterize the future voice command as corresponding to a previously-stored voice command. In this case, the speech recognizer located on the network server may be able to resolve the mischaracterization by providing a more accurate translation of the future voice command than the translation provided by the speech recognizer located on the mobile phone, according to an embodiment.
  • FIGS. 10(a)-(e) are illustrations of an exemplary UI 1010 on a mobile phone in accordance with embodiments described herein. These illustrations are used to help explain method 800.
  • With respect to FIG. 10(a), mobile phone UI 1010 prompts the mobile phone user for a voice command. In this example, the mobile phone user provides “pizza my heart” as the voice command. In turn, in accordance with steps 810 and 820, the mobile phone receives the voice command and determines whether the audio stream of the voice command can be translated into a machine-readable voice command with an appropriate confidence score.
  • In the example illustrated in FIG. 10, the voice command “pizza my heart” does not return a speech recognition confidence score above the predetermined threshold value. In other words, the voice command “pizza my heart” does not return a high-confidence match from the speech recognizer located on the mobile phone. The audio stream of the voice command is transmitted to a network server for further speech recognition processing, in accordance with step 850.
  • FIG. 10(b) is an illustration of an exemplary list of query results 1020 from the voice command made to the network server. Exemplary list of query results 1020 is transmitted from the network server to the mobile phone, in accordance with step 860. In an embodiment, as the mobile phone user views exemplary list of query results 1020, information relating to each of the query results (e.g., web pages, images, text data) is stored in cache memory of the mobile phone. This allows the mobile phone user to select a query result of interest from exemplary list of query results 1020 and instantly view information relating to the query result, thus improving the mobile phone user's experience. For instance, with respect to FIG. 10(c), the mobile phone user selects the top entry “pizza my heart” from exemplary list of query results 1020 and a list of web search results 1030 is displayed on the mobile phone. From the web search results, the mobile phone user can select a hyperlink of interest (e.g., www.pizzamyheart.com) and view the contents of the web page on the mobile phone, as illustrated in a web page 1040 of FIG. 10(d).
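  • A small sketch of this caching behavior, using a hypothetical in-memory cache in place of the mobile phone's cache memory and a placeholder fetch, is shown below.

    # Hypothetical sketch: prefetch content for each listed query result so a selection displays instantly.
    page_cache = {}   # maps each result entry to prefetched content

    def prefetch(results: list) -> None:
        for entry in results:
            page_cache[entry] = f"<html>cached page for {entry}</html>"   # stands in for a real fetch

    def open_result(entry: str) -> str:
        return page_cache.get(entry, "not cached yet")   # served from the cache, no network wait

    prefetch(["pizza my heart", "pizza my heart capitola"])   # hypothetical result entries
    print(open_result("pizza my heart"))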
  • Further, in an embodiment of step 860, a partial portion of the exemplary list of query results can be received by and displayed on the mobile phone at a first time instance and the remainder of the exemplary list of query results can be received by and displayed on the mobile phone at a second time instance (e.g., later in time than the first time instance). In this way, the mobile phone user can view a portion of the query results as the remainder of the query results is being processed by the network server and received by the mobile phone.
  • In an embodiment, the query result selected by the mobile phone user (e.g., www.pizzamyheart.com) is stored in the database of the mobile phone for a future voice command by the mobile phone user. For instance, the hyperlink “www.pizzamyheart.com” appears as a query result from a query made to the database of the mobile phone when, at a later time, the mobile phone user provides “pizza my heart” as a voice command to the mobile phone. This is illustrated in field 1050 of FIG. 10(e). The mobile phone user can select the query result in field 1050 and view the web page at “www.pizzamyheart.com,” as illustrated in FIG. 10(d). In storing the query result and associated web page, the mobile phone user receives the benefit of viewing a previously-selected web search result. In turn, the mobile phone user's experience is enhanced since the mobile phone is able to quickly recall a selected entry from a previous voice command. An exemplary method and system to store and retrieve data in field 1050 of FIG. 10(e) can be found in U.S. patent application Ser. No. 12/783,470 (Atty. Docket No. 2525.2360000), which is entitled “Personalization and Latency Reduction for Voice-Activated Commands” and incorporated herein by reference in its entirety.
  • Various aspects of the embodiments described herein may be implemented in software, firmware, hardware, or a combination thereof. FIG. 11 is an illustration of an example computer system 1100 in which embodiments, or portions thereof, can be implemented as computer-readable code. For example, the methods illustrated by flowchart 400 of FIG. 4, flowchart 600 of FIG. 6, flowchart 800 of FIG. 8, or flowchart 900 of FIG. 9 can be implemented in computer system 1100. Various embodiments are described in terms of this example computer system 1100. After reading this description, it will become apparent to a person skilled in the relevant art how to implement embodiments described herein using other computer systems and/or computer architectures.
  • Computer system 1100 is an example computing device and includes one or more processors, such as processor 1104. Processor 1104 may be a special purpose or a general-purpose processor. Processor 1104 is connected to a communication infrastructure 1106 (e.g., a bus or network).
  • Computer system 1100 also includes a main memory 1108, preferably random access memory (RAM), and may also include a secondary memory 1110. Secondary memory 1110 can include, for example, a hard disk drive 1112, a removable storage drive 1114, and/or a memory stick. Removable storage drive 1114 can comprise a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory, or the like. The removable storage drive 1114 reads from and/or writes to a removable storage unit 1118 in a well-known manner. Removable storage unit 1118 can include a floppy disk, magnetic tape, optical disk, etc. which is read by and written to by removable storage drive 1114. As will be appreciated by persons skilled in the relevant art, removable storage unit 1118 includes a computer-usable storage medium having stored therein computer software and/or data.
  • In alternative implementations, secondary memory 1110 can include other similar devices for allowing computer programs or other instructions to be loaded into computer system 1100. Such devices can include, for example, a removable storage unit 1122 and an interface 1120. Examples of such devices can include a program cartridge and cartridge interface (such as those found in video game devices), a removable memory chip (e.g., EPROM or PROM) and associated socket, and other removable storage units 1122 and interfaces 1120 which allow software and data to be transferred from the removable storage unit 1122 to computer system 1100.
  • Computer system 1100 can also include a communications interface 1124. Communications interface 1124 allows software and data to be transferred between computer system 1100 and external devices. Communications interface 1124 can include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, or the like. Software and data transferred via communications interface 1124 are in the form of signals, which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 1124. These signals are provided to communications interface 1124 via a communications path 1126. Communications path 1126 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link, or other communications channels.
  • In this document, the terms “computer program medium” and “computer-usable medium” are used to generally refer to media such as removable storage unit 1118, removable storage unit 1122, and a hard disk installed in hard disk drive 1112. Computer program medium and computer-usable medium can also refer to memories, such as main memory 1108 and secondary memory 1110, which can be memory semiconductors (e.g., DRAMs, etc.). These computer program products provide software to computer system 1100.
  • Computer programs (also called computer control logic) are stored in main memory 1108 and/or secondary memory 1110. Computer programs may also be received via communications interface 1124. Such computer programs, when executed, enable computer system 1100 to implement embodiments discussed herein. In particular, the computer programs, when executed, enable processor 1104 to implement processes described above, such as the steps in the methods illustrated by flowchart 400 of FIG. 4, flowchart 600 of FIG. 6, flowchart 800 of FIG. 8, and flowchart 900 of FIG. 9, discussed above. Accordingly, such computer programs represent controllers of the computer system 1100. Where embodiments described herein are implemented using software, the software can be stored in a computer program product and loaded into computer system 1100 using removable storage drive 1114, interface 1120, hard disk drive 1112, or communications interface 1124.
  • Based on the description herein, a person of ordinary skill in the relevant art will recognize that the computer programs, when executed, can enable one or more processors to implement processes described above, such as the steps in the methods illustrated by flowchart 400 of FIG. 4, flowchart 600 of FIG. 6, flowchart 800 of FIG. 8, and flowchart 900 of FIG. 9. In an embodiment, the one or more processors can be part of a computing device incorporated in a clustered computing environment or server farm. Further, in an embodiment, the computing process performed by the clustered computing environment, such as, for example, the steps in the methods illustrated by flowcharts 400, 600, 800, and 900, may be carried out across multiple processors located at the same or different locations.
  • Embodiments are also directed to computer program products including software stored on any computer-usable medium. Such software, when executed on one or more data processing devices, causes the data processing device(s) to operate as described herein. Embodiments employ any computer-usable or -readable medium, known now or in the future. Examples of computer-usable media include, but are not limited to, primary storage devices (e.g., any type of random access memory), secondary storage devices (e.g., hard drives, floppy disks, CD ROMS, ZIP disks, tapes, magnetic storage devices, optical storage devices, MEMS, nanotechnological storage devices, etc.), and communication media (e.g., wired and wireless communications networks, local area networks, wide area networks, intranets, etc.).
  • While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the relevant art that various changes in form and details can be made therein without departing from the spirit and scope of the embodiments described herein. It should be understood that this description is not limited to these examples. This description is applicable to any elements operating as described herein. Accordingly, the breadth and scope of this description should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (20)

1. A method for performing a voice command on a client device, comprising:
translating, using a first speech recognizer located on the client device, an audio stream of a voice command to a first machine-readable voice command;
generating a first query result using the first machine-readable voice command to query a client database;
transmitting the audio stream to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer;
receiving a second query result from the remote server device, wherein the second query result is generated by the remote server device using the second machine-readable voice command to query a remote server database; and
displaying the first query result and the second query result on the client device.
2. The method of claim 1, further comprising:
storing at least a portion of the first and second query results on the client device.
3. The method of claim 2, further comprising retrieving the stored first and second query results when translation of a subsequent voice command is determined to be substantially similar to the translated voice command that generated the first and second query results.
4. The method of claim 3, further comprising:
transmitting to the remote server device a second audio stream associated with the subsequent voice command;
translating the second audio stream to a third machine-readable voice command using the second speech recognizer;
receiving a third query result from the remote server device, wherein the third query result is generated from a subsequent query made to the server database based on the third machine-readable voice command; and
displaying the first, second, and third query results on the client device.
5. The method of claim 2, further comprising identifying which portion of the first and second query results to store, the identification comprising:
receiving a user selection of an item of interest from a list of items returned as part of the second query result.
6. The method of claim 1, wherein generating the first query result comprises transmitting the audio stream to the second speech recognizer such that the query made to the remote server database based on the second machine-readable voice command occurs during a time period that overlaps when the query is made to the client database based on the first machine-readable voice command.
7. The method of claim 1, wherein transmitting the audio stream comprises transmitting a compressed audio stream of the voice command from the client device to the server device.
8. The method of claim 1, wherein displaying the first and second query results comprises displaying the first result and a first subset of the second query result at a first time instance and the first result, the first subset of the second query result, and a second subset of the second query result at a second time instance.
9. A computer program product comprising a computer-usable medium having computer program logic recorded thereon for enabling a processor to perform a voice command on a client device, the computer program logic comprising:
first computer readable program code that enables a processor to translate, using a first speech recognizer located on the client device, an audio stream of a voice command to a first machine-readable voice command;
second computer readable program code that enables a processor to generate a first query result using the first machine-readable voice command to query a client database;
third computer readable program code that enables a processor to transmit the audio stream to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer;
fourth computer readable program code that enables a processor to receive a second query result from the remote server device, wherein the second query result is generated by the remote server device using the second machine-readable voice command to query a remote server database; and
fifth computer readable program code that enables a processor to display the first query result and the second query result on the client device.
10. The computer program product of claim 9, further comprising:
sixth computer readable program code that enables a processor to store at least a portion of the first and second query results on the client device.
11. The computer program product of claim 10, further comprising:
seventh computer readable program code that enables a processor to retrieve the stored first and second query results when translation of a subsequent voice command is determined to be substantially similar to the translated voice command that generated the first and second query results.
12. The computer program product of claim 11, further comprising:
eighth computer readable program code that enables a processor to transmit to the remote server device a second audio stream associated with the subsequent voice command;
ninth computer readable program code that enables a processor to translate the second audio stream to a third machine-readable voice command using the second speech recognizer;
tenth computer readable program code that enables a processor to receive a third query result from the remote server device, wherein the third query result is generated from a subsequent query made to the server database based on the third machine-readable voice command; and
eleventh computer readable program code that enables a processor to display the first, second, and third query results on the client device.
13. The computer program product of claim 10, wherein the sixth computer readable program code comprises:
seventh computer readable program code that enables a processor to identify which portion of the first and second query results to store, the identification comprising receiving a user selection of an item of interest from a list of items returned as a part of the second query result.
14. The computer program product of claim 9, wherein the second computer readable program code comprises:
sixth computer readable program code that enables a processor to transmit the audio stream to the second speech recognizer such that the query made to the remote server database based on the second machine-readable voice command occurs during a time period that overlaps when the query is made to the client database based on the first machine-readable voice command.
15. A system for performing a voice command on a client device, comprising:
a first speech recognizer device configured to translate an audio stream of a voice command to a first machine-readable voice command;
a client query manager configured to:
generate a first query result using the first machine-readable voice command to query a client database;
transmit the audio stream to a remote server device that translates the audio stream to a second machine-readable voice command using a second speech recognizer device; and
receive a second query result from the remote server device, wherein the second query result is generated by the remote server device using the second machine-readable voice command to query a remote server database; and
a display device configured to display the first query result and the second query result on the client device.
16. The system of claim 15, further comprising:
a microphone configured to receive the audio stream of the voice command and to provide the audio stream to the first speech recognizer device; and
a storage device configured to store at least a portion of the first and second query results on the client device.
17. The system of claim 16, wherein the client query manager is configured to retrieve the stored first and second query results from the storage device when translation of a subsequent voice command is determined to be substantially similar to the translated voice command that generated the first and second query results.
18. The system of claim 17, wherein the client query manager is configured to:
transmit to the remote server device a second audio stream associated with the subsequent voice command;
translate the second audio stream to a third machine-readable voice command using the second speech recognizer device; and
receive a third query result from the remote server device, wherein the third query result is generated from a subsequent query made to the server database based on the third machine-readable voice command.
19. The system of claim 15, wherein the client query manager is configured to transmit the audio stream to the second speech recognizer device such that the query made to the remote server database based on the second machine-readable voice command occurs during a time period that overlaps when the query is made to the client database based on the first machine-readable voice command.
20. The system of claim 15, wherein the display device is configured to display the first result and a first subset of the second query result at a first time instance and the first result, the first subset of the second query result, and a second subset of the second query result at a second time instance.
US12/794,896 2010-01-26 2010-06-07 Integration of Embedded and Network Speech Recognizers Abandoned US20110184740A1 (en)

Priority Applications (10)

Application Number Priority Date Filing Date Title
US12/794,896 US20110184740A1 (en) 2010-01-26 2010-06-07 Integration of Embedded and Network Speech Recognizers
PCT/US2011/022427 WO2011094215A1 (en) 2010-01-26 2011-01-25 Integration of embedded and network speech recognizers
AU2011209760A AU2011209760B2 (en) 2010-01-26 2011-01-25 Integration of embedded and network speech recognizers
EP18207861.8A EP3477637B1 (en) 2010-01-26 2011-01-25 Integration of embedded and network speech recognizers
KR1020127022282A KR101770358B1 (en) 2010-01-26 2011-01-25 Integration of embedded and network speech recognizers
CN201180013111.0A CN102884569B (en) 2010-01-26 2011-01-25 Integration of embedded and network speech recognizers
EP11702758.1A EP2529372B1 (en) 2010-01-26 2011-01-25 Integration of embedded and network speech recognizers
CA2788088A CA2788088A1 (en) 2010-01-26 2011-01-25 Integration of embedded and network speech recognizers
US13/287,913 US8412532B2 (en) 2010-01-26 2011-11-02 Integration of embedded and network speech recognizers
US13/585,280 US8868428B2 (en) 2010-01-26 2012-08-14 Integration of embedded and network speech recognizers

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US29825110P 2010-01-26 2010-01-26
US12/794,896 US20110184740A1 (en) 2010-01-26 2010-06-07 Integration of Embedded and Network Speech Recognizers

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US13/287,913 Continuation US8412532B2 (en) 2010-01-26 2011-11-02 Integration of embedded and network speech recognizers
US13/585,280 Continuation US8868428B2 (en) 2010-01-26 2012-08-14 Integration of embedded and network speech recognizers

Publications (1)

Publication Number Publication Date
US20110184740A1 true US20110184740A1 (en) 2011-07-28

Family

ID=44309629

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/794,896 Abandoned US20110184740A1 (en) 2010-01-26 2010-06-07 Integration of Embedded and Network Speech Recognizers
US13/287,913 Active US8412532B2 (en) 2010-01-26 2011-11-02 Integration of embedded and network speech recognizers
US13/585,280 Active US8868428B2 (en) 2010-01-26 2012-08-14 Integration of embedded and network speech recognizers

Family Applications After (2)

Application Number Title Priority Date Filing Date
US13/287,913 Active US8412532B2 (en) 2010-01-26 2011-11-02 Integration of embedded and network speech recognizers
US13/585,280 Active US8868428B2 (en) 2010-01-26 2012-08-14 Integration of embedded and network speech recognizers

Country Status (7)

Country Link
US (3) US20110184740A1 (en)
EP (2) EP3477637B1 (en)
KR (1) KR101770358B1 (en)
CN (1) CN102884569B (en)
AU (1) AU2011209760B2 (en)
CA (1) CA2788088A1 (en)
WO (1) WO2011094215A1 (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110184730A1 (en) * 2010-01-22 2011-07-28 Google Inc. Multi-dimensional disambiguation of voice commands
WO2013049237A1 (en) * 2011-09-30 2013-04-04 Google Inc. Hybrid client/server speech recognition in a mobile device
US20130278492A1 (en) * 2011-01-25 2013-10-24 Damien Phelan Stolarz Distributed, predictive, dichotomous decision engine for an electronic personal assistant
US20140006028A1 (en) * 2012-07-02 2014-01-02 Salesforce.Com, Inc. Computer implemented methods and apparatus for selectively interacting with a server to build a local dictation database for speech recognition at a device
US20140058732A1 (en) * 2012-08-21 2014-02-27 Nuance Communications, Inc. Method to provide incremental ui response based on multiple asynchronous evidence about user input
US20140096590A1 (en) * 2012-05-07 2014-04-10 Alexander Himanshu Amin Electronic nose system and method
WO2014060054A1 (en) * 2012-10-16 2014-04-24 Audi Ag Speech recognition in a motor vehicle
US20140201182A1 (en) * 2012-05-07 2014-07-17 Alexander Himanshu Amin Mobile communications device with electronic nose
US8924219B1 (en) 2011-09-30 2014-12-30 Google Inc. Multi hotword robust continuous voice command detection in mobile devices
US20150127353A1 (en) * 2012-05-08 2015-05-07 Samsung Electronics Co., Ltd. Electronic apparatus and method for controlling electronic apparatus thereof
KR20150068003A (en) * 2013-12-11 2015-06-19 삼성전자주식회사 interactive system, control method thereof, interactive server and control method thereof
US20160104484A1 (en) * 2014-10-14 2016-04-14 Samsung Electronics Co., Ltd. Electronic device and method for spoken interaction thereof
US9582245B2 (en) 2012-09-28 2017-02-28 Samsung Electronics Co., Ltd. Electronic device, server and control method thereof
US20170069010A1 (en) * 2012-05-07 2017-03-09 Hannah Elizabeth Amin Mobile communications device with electronic nose
US20170083285A1 (en) * 2015-09-21 2017-03-23 Amazon Technologies, Inc. Device selection for providing a response
US9898455B2 (en) 2014-12-01 2018-02-20 Nuance Communications, Inc. Natural language understanding cache
US20180122366A1 (en) * 2016-11-02 2018-05-03 Panasonic Intellectual Property Corporation Of America Information processing method and non-temporary storage medium for system to control at least one device through dialog with user
CN109844856A (en) * 2016-08-31 2019-06-04 伯斯有限公司 Multiple virtual personal assistants (VPA) are accessed from individual equipment
EP3534364A1 (en) * 2012-06-26 2019-09-04 Google LLC Distributed speech recognition
US10482904B1 (en) 2017-08-15 2019-11-19 Amazon Technologies, Inc. Context driven device arbitration
US20200082827A1 (en) * 2018-11-16 2020-03-12 Lg Electronics Inc. Artificial intelligence-based appliance control apparatus and appliance controlling system including the same
CN111508484A (en) * 2019-01-31 2020-08-07 阿里巴巴集团控股有限公司 Voice data processing method and device
US10925537B2 (en) 2016-03-23 2021-02-23 Canary Medical Inc. Implantable reporting processor for an alert implant
US10971157B2 (en) 2017-01-11 2021-04-06 Nuance Communications, Inc. Methods and apparatus for hybrid speech recognition processing
US11004445B2 (en) * 2016-05-31 2021-05-11 Huawei Technologies Co., Ltd. Information processing method, server, terminal, and information processing system
KR20210075040A (en) * 2014-11-12 2021-06-22 삼성전자주식회사 Apparatus and method for qusetion-answering
US20210272563A1 (en) * 2018-06-15 2021-09-02 Sony Corporation Information processing device and information processing method
US11191479B2 (en) 2016-03-23 2021-12-07 Canary Medical Inc. Implantable reporting processor for an alert implant
US11481401B2 (en) * 2020-11-25 2022-10-25 International Business Machines Corporation Enhanced cognitive query construction
US11786126B2 (en) 2014-09-17 2023-10-17 Canary Medical Inc. Devices, systems and methods for using and monitoring medical devices

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110184740A1 (en) * 2010-01-26 2011-07-28 Google Inc. Integration of Embedded and Network Speech Recognizers
US9202465B2 (en) * 2011-03-25 2015-12-01 General Motors Llc Speech recognition dependent on text message content
US8700406B2 (en) * 2011-05-23 2014-04-15 Qualcomm Incorporated Preserving audio data collection privacy in mobile devices
WO2013102052A1 (en) * 2011-12-28 2013-07-04 Bloomberg Finance L.P. System and method for interactive automatic translation
US9317605B1 (en) 2012-03-21 2016-04-19 Google Inc. Presenting forked auto-completions
KR101914708B1 (en) 2012-06-15 2019-01-14 삼성전자주식회사 Server and method for controlling the same
US9171066B2 (en) * 2012-11-12 2015-10-27 Nuance Communications, Inc. Distributed natural language understanding and processing using local data sources
US9117451B2 (en) * 2013-02-20 2015-08-25 Google Inc. Methods and systems for sharing of adapted voice profiles
CN105027198B (en) * 2013-02-25 2018-11-20 三菱电机株式会社 Speech recognition system and speech recognition equipment
US9646606B2 (en) 2013-07-03 2017-05-09 Google Inc. Speech recognition using domain knowledge
US9305554B2 (en) * 2013-07-17 2016-04-05 Samsung Electronics Co., Ltd. Multi-level speech recognition
US9530416B2 (en) 2013-10-28 2016-12-27 At&T Intellectual Property I, L.P. System and method for managing models for embedded speech and language processing
US9666188B2 (en) 2013-10-29 2017-05-30 Nuance Communications, Inc. System and method of performing automatic speech recognition using local private data
US9792911B2 (en) * 2014-03-25 2017-10-17 Panasonic Automotive Systems Company Of America, Division Of Panasonic Corporation Of North America Background voice recognition trainer
CN106462630B (en) 2014-06-18 2020-08-18 谷歌有限责任公司 Method, system, and medium for searching video content
US9953646B2 (en) 2014-09-02 2018-04-24 Belleau Technologies Method and system for dynamic speech recognition and tracking of prewritten script
US10310808B2 (en) 2014-09-08 2019-06-04 Google Llc Systems and methods for simultaneously receiving voice instructions on onboard and offboard devices
US9830321B2 (en) * 2014-09-30 2017-11-28 Rovi Guides, Inc. Systems and methods for searching for a media asset
US10999636B1 (en) 2014-10-27 2021-05-04 Amazon Technologies, Inc. Voice-based content searching on a television based on receiving candidate search strings from a remote server
CN104918198A (en) * 2015-05-11 2015-09-16 阔地教育科技有限公司 Online-classroom-based audio calibration method and device
US9865265B2 (en) 2015-06-06 2018-01-09 Apple Inc. Multi-microphone speech recognition systems and related techniques
US10013981B2 (en) * 2015-06-06 2018-07-03 Apple Inc. Multi-microphone speech recognition systems and related techniques
WO2017014721A1 (en) * 2015-07-17 2017-01-26 Nuance Communications, Inc. Reduced latency speech recognition system using multiple recognizers
US10048936B2 (en) * 2015-08-31 2018-08-14 Roku, Inc. Audio command interface for a multimedia device
US9972342B2 (en) * 2015-11-20 2018-05-15 JVC Kenwood Corporation Terminal device and communication method for communication of speech signals
US9761227B1 (en) * 2016-05-26 2017-09-12 Nuance Communications, Inc. Method and system for hybrid decoding for enhanced end-user privacy and low latency
US9619202B1 (en) 2016-07-07 2017-04-11 Intelligently Interactive, Inc. Voice command-driven database
JP6659514B2 (en) * 2016-10-12 2020-03-04 東芝映像ソリューション株式会社 Electronic device and control method thereof
US10614804B2 (en) 2017-01-24 2020-04-07 Honeywell International Inc. Voice control of integrated room automation system
US10572220B2 (en) * 2017-04-12 2020-02-25 American Megatrends International, Llc Method for controlling controller and host computer with voice
US10984329B2 (en) 2017-06-14 2021-04-20 Ademco Inc. Voice activated virtual assistant with a fused response
US10679620B2 (en) * 2018-03-06 2020-06-09 GM Global Technology Operations LLC Speech recognition arbitration logic
US20190332848A1 (en) 2018-04-27 2019-10-31 Honeywell International Inc. Facial enrollment and recognition system
US20190390866A1 (en) 2018-06-22 2019-12-26 Honeywell International Inc. Building management system with natural language interface
US10885912B2 (en) * 2018-11-13 2021-01-05 Motorola Solutions, Inc. Methods and systems for providing a corrected voice command
US20220293109A1 (en) * 2021-03-11 2022-09-15 Google Llc Device arbitration for local execution of automatic speech recognition

Citations (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6185535B1 (en) * 1998-10-16 2001-02-06 Telefonaktiebolaget Lm Ericsson (Publ) Voice control of a user interface to service applications
US6363488B1 (en) * 1995-02-13 2002-03-26 Intertrust Technologies Corp. Systems and methods for secure transaction management and electronic rights protection
US20030139924A1 (en) * 2001-12-29 2003-07-24 Senaka Balasuriya Method and apparatus for multi-level distributed speech recognition
US6738743B2 (en) * 2001-03-28 2004-05-18 Intel Corporation Unified client-server distributed architectures for spoken dialogue systems
US20040192384A1 (en) * 2002-12-30 2004-09-30 Tasos Anastasakos Method and apparatus for selective distributed speech recognition
US20040254787A1 (en) * 2003-06-12 2004-12-16 Shah Sheetal R. System and method for distributed speech recognition with a cache feature
US20050149500A1 (en) * 2003-12-31 2005-07-07 David Marmaros Systems and methods for unification of search results
US6963759B1 (en) * 1999-10-05 2005-11-08 Fastmobile, Inc. Speech recognition technique based on local interrupt detection
US6993482B2 (en) * 2002-12-18 2006-01-31 Motorola, Inc. Method and apparatus for displaying speech recognition results
US7013289B2 (en) * 2001-02-21 2006-03-14 Michel Horn Global electronic commerce system
US7027987B1 (en) * 2001-02-07 2006-04-11 Google Inc. Voice interface for a search engine
US7058580B2 (en) * 2000-05-24 2006-06-06 Canon Kabushiki Kaisha Client-server speech processing system, apparatus, method, and storage medium
US7062444B2 (en) * 2002-01-24 2006-06-13 Intel Corporation Architecture for DSR client and server development platform
US20060235684A1 (en) * 2005-04-14 2006-10-19 Sbc Knowledge Ventures, Lp Wireless device to access network-based voice-activated services using distributed speech recognition
US7136710B1 (en) * 1991-12-23 2006-11-14 Hoffberg Steven M Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US7191135B2 (en) * 1998-04-08 2007-03-13 Symbol Technologies, Inc. Speech recognition system and method for employing the same
US20070061335A1 (en) * 2005-09-14 2007-03-15 Jorey Ramer Multimodal search query processing
US7225134B2 (en) * 2000-06-20 2007-05-29 Sharp Kabushiki Kaisha Speech input communication system, user terminal and center system
US20080071763A1 (en) * 2006-09-15 2008-03-20 Emc Corporation Dynamic updating of display and ranking for search results
US20080154612A1 (en) * 2006-12-26 2008-06-26 Voice Signal Technologies, Inc. Local storage and use of search results for voice-enabled mobile communications devices
US20080162472A1 (en) * 2006-12-28 2008-07-03 Motorola, Inc. Method and apparatus for voice searching in a mobile communication device
US20080221898A1 (en) * 2007-03-07 2008-09-11 Cerra Joseph P Mobile navigation environment speech processing facility
US7461352B2 (en) * 2003-02-10 2008-12-02 Ronald Mark Katsuranis Voice activated system and methods to enable a computer user working in a first graphical application window to display and control on-screen help, internet, and other information content in a second graphical application window
US7519536B2 (en) * 1998-10-02 2009-04-14 Nuance Communications, Inc. System and method for providing network coordinated conversational services
US20090106603A1 (en) * 2007-10-19 2009-04-23 Oracle International Corporation Data Corruption Diagnostic Engine
US20090144632A1 (en) * 2001-10-23 2009-06-04 Visto Corporation System and method for merging remote and local data in a single user interface
US7548977B2 (en) * 2005-02-11 2009-06-16 International Business Machines Corporation Client / server application task allocation based upon client resources
US20090164216A1 (en) * 2007-12-21 2009-06-25 General Motors Corporation In-vehicle circumstantial speech recognition
US20090204409A1 (en) * 2008-02-13 2009-08-13 Sensory, Incorporated Voice Interface and Search for Electronic Devices including Bluetooth Headsets and Remote Systems
US20090265163A1 (en) * 2008-02-12 2009-10-22 Phone Through, Inc. Systems and methods to enable interactivity among a plurality of devices
US20090271200A1 (en) * 2008-04-23 2009-10-29 Volkswagen Group Of America, Inc. Speech recognition assembly for acoustically controlling a function of a motor vehicle
US20100097239A1 (en) * 2007-01-23 2010-04-22 Campbell Douglas C Mobile device gateway systems and methods
US20100106497A1 (en) * 2007-03-07 2010-04-29 Phillips Michael S Internal and external speech recognition use with a mobile communication facility
US7738778B2 (en) * 2003-06-30 2010-06-15 Ipg Electronics 503 Limited System and method for generating a multimedia summary of multimedia streams
US20100185448A1 (en) * 2007-03-07 2010-07-22 Meisel William S Dealing with switch latency in speech recognition
US20100205199A1 (en) * 2009-02-06 2010-08-12 Yi-An Lin Intent driven search result rich abstracts
US20100251162A1 (en) * 2006-10-03 2010-09-30 Verizon Data Services Inc. Interactive search graphical user interface systems and methods
US7809574B2 (en) * 2001-09-05 2010-10-05 Voice Signal Technologies Inc. Word recognition using choice lists
US20100312563A1 (en) * 2009-06-04 2010-12-09 Microsoft Corporation Techniques to create a custom voice font
US20100312555A1 (en) * 2009-06-09 2010-12-09 Microsoft Corporation Local and remote aggregation of feedback data for speech recognition
US20110015928A1 (en) * 2009-07-15 2011-01-20 Microsoft Corporation Combination and federation of local and remote speech recognition
US20110046951A1 (en) * 2009-08-21 2011-02-24 David Suendermann System and method for building optimal state-dependent statistical utterance classifiers in spoken dialog systems
US20110067059A1 (en) * 2009-09-15 2011-03-17 At&T Intellectual Property I, L.P. Media control
US7933777B2 (en) * 2008-08-29 2011-04-26 Multimodal Technologies, Inc. Hybrid speech recognition
US20110125500A1 (en) * 2009-11-25 2011-05-26 General Motors Llc Automated distortion classification
US20120072221A1 (en) * 1999-04-12 2012-03-22 Ben Franklin Patent Holding, Llc Distributed voice user interface
US20120084079A1 (en) * 2010-01-26 2012-04-05 Google Inc. Integration of Embedded and Network Speech Recognizers
US8165883B2 (en) * 2001-10-21 2012-04-24 Microsoft Corporation Application abstraction with dialog purpose
US8224644B2 (en) * 2008-12-18 2012-07-17 Microsoft Corporation Utterance processing for network-based speech recognition utilizing a client-side cache
US8249878B2 (en) * 2008-08-29 2012-08-21 Multimodal Technologies, Llc Distributed speech recognition using one way communication

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1315721A (en) 2000-03-23 2001-10-03 韦尔博泰克有限公司 Speech information transporting system and method for customer server

Patent Citations (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7136710B1 (en) * 1991-12-23 2006-11-14 Hoffberg Steven M Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US6363488B1 (en) * 1995-02-13 2002-03-26 Intertrust Technologies Corp. Systems and methods for secure transaction management and electronic rights protection
US7191135B2 (en) * 1998-04-08 2007-03-13 Symbol Technologies, Inc. Speech recognition system and method for employing the same
US7519536B2 (en) * 1998-10-02 2009-04-14 Nuance Communications, Inc. System and method for providing network coordinated conversational services
US6185535B1 (en) * 1998-10-16 2001-02-06 Telefonaktiebolaget Lm Ericsson (Publ) Voice control of a user interface to service applications
US20120072221A1 (en) * 1999-04-12 2012-03-22 Ben Franklin Patent Holding, Llc Distributed voice user interface
US6963759B1 (en) * 1999-10-05 2005-11-08 Fastmobile, Inc. Speech recognition technique based on local interrupt detection
US7058580B2 (en) * 2000-05-24 2006-06-06 Canon Kabushiki Kaisha Client-server speech processing system, apparatus, method, and storage medium
US7225134B2 (en) * 2000-06-20 2007-05-29 Sharp Kabushiki Kaisha Speech input communication system, user terminal and center system
US7027987B1 (en) * 2001-02-07 2006-04-11 Google Inc. Voice interface for a search engine
US7013289B2 (en) * 2001-02-21 2006-03-14 Michel Horn Global electronic commerce system
US6738743B2 (en) * 2001-03-28 2004-05-18 Intel Corporation Unified client-server distributed architectures for spoken dialogue systems
US7809574B2 (en) * 2001-09-05 2010-10-05 Voice Signal Technologies Inc. Word recognition using choice lists
US8165883B2 (en) * 2001-10-21 2012-04-24 Microsoft Corporation Application abstraction with dialog purpose
US20090144632A1 (en) * 2001-10-23 2009-06-04 Visto Corporation System and method for merging remote and local data in a single user interface
US6898567B2 (en) * 2001-12-29 2005-05-24 Motorola, Inc. Method and apparatus for multi-level distributed speech recognition
US20030139924A1 (en) * 2001-12-29 2003-07-24 Senaka Balasuriya Method and apparatus for multi-level distributed speech recognition
US7062444B2 (en) * 2002-01-24 2006-06-13 Intel Corporation Architecture for DSR client and server development platform
US6993482B2 (en) * 2002-12-18 2006-01-31 Motorola, Inc. Method and apparatus for displaying speech recognition results
US20040192384A1 (en) * 2002-12-30 2004-09-30 Tasos Anastasakos Method and apparatus for selective distributed speech recognition
US7461352B2 (en) * 2003-02-10 2008-12-02 Ronald Mark Katsuranis Voice activated system and methods to enable a computer user working in a first graphical application window to display and control on-screen help, internet, and other information content in a second graphical application window
US20040254787A1 (en) * 2003-06-12 2004-12-16 Shah Sheetal R. System and method for distributed speech recognition with a cache feature
US7738778B2 (en) * 2003-06-30 2010-06-15 Ipg Electronics 503 Limited System and method for generating a multimedia summary of multimedia streams
US20050149500A1 (en) * 2003-12-31 2005-07-07 David Marmaros Systems and methods for unification of search results
US7548977B2 (en) * 2005-02-11 2009-06-16 International Business Machines Corporation Client / server application task allocation based upon client resources
US20060235684A1 (en) * 2005-04-14 2006-10-19 Sbc Knowledge Ventures, Lp Wireless device to access network-based voice-activated services using distributed speech recognition
US20070061335A1 (en) * 2005-09-14 2007-03-15 Jorey Ramer Multimodal search query processing
US20080071763A1 (en) * 2006-09-15 2008-03-20 Emc Corporation Dynamic updating of display and ranking for search results
US20100251162A1 (en) * 2006-10-03 2010-09-30 Verizon Data Services Inc. Interactive search graphical user interface systems and methods
US20080154612A1 (en) * 2006-12-26 2008-06-26 Voice Signal Technologies, Inc. Local storage and use of search results for voice-enabled mobile communications devices
US20080162472A1 (en) * 2006-12-28 2008-07-03 Motorola, Inc. Method and apparatus for voice searching in a mobile communication device
US20100097239A1 (en) * 2007-01-23 2010-04-22 Campbell Douglas C Mobile device gateway systems and methods
US20100106497A1 (en) * 2007-03-07 2010-04-29 Phillips Michael S Internal and external speech recognition use with a mobile communication facility
US20100185448A1 (en) * 2007-03-07 2010-07-22 Meisel William S Dealing with switch latency in speech recognition
US20080221898A1 (en) * 2007-03-07 2008-09-11 Cerra Joseph P Mobile navigation environment speech processing facility
US20090106603A1 (en) * 2007-10-19 2009-04-23 Oracle International Corporation Data Corruption Diagnostic Engine
US20090164216A1 (en) * 2007-12-21 2009-06-25 General Motors Corporation In-vehicle circumstantial speech recognition
US20090265163A1 (en) * 2008-02-12 2009-10-22 Phone Through, Inc. Systems and methods to enable interactivity among a plurality of devices
US8099289B2 (en) * 2008-02-13 2012-01-17 Sensory, Inc. Voice interface and search for electronic devices including bluetooth headsets and remote systems
US20090204409A1 (en) * 2008-02-13 2009-08-13 Sensory, Incorporated Voice Interface and Search for Electronic Devices including Bluetooth Headsets and Remote Systems
US20090271200A1 (en) * 2008-04-23 2009-10-29 Volkswagen Group Of America, Inc. Speech recognition assembly for acoustically controlling a function of a motor vehicle
US8249878B2 (en) * 2008-08-29 2012-08-21 Multimodal Technologies, Llc Distributed speech recognition using one way communication
US8504372B2 (en) * 2008-08-29 2013-08-06 Mmodal Ip Llc Distributed speech recognition using one way communication
US7933777B2 (en) * 2008-08-29 2011-04-26 Multimodal Technologies, Inc. Hybrid speech recognition
US20120296645A1 (en) * 2008-08-29 2012-11-22 Eric Carraux Distributed Speech Recognition Using One Way Communication
US8224644B2 (en) * 2008-12-18 2012-07-17 Microsoft Corporation Utterance processing for network-based speech recognition utilizing a client-side cache
US20100205199A1 (en) * 2009-02-06 2010-08-12 Yi-An Lin Intent driven search result rich abstracts
US20100312563A1 (en) * 2009-06-04 2010-12-09 Microsoft Corporation Techniques to create a custom voice font
US20100312555A1 (en) * 2009-06-09 2010-12-09 Microsoft Corporation Local and remote aggregation of feedback data for speech recognition
US20110015928A1 (en) * 2009-07-15 2011-01-20 Microsoft Corporation Combination and federation of local and remote speech recognition
US20110046951A1 (en) * 2009-08-21 2011-02-24 David Suendermann System and method for building optimal state-dependent statistical utterance classifiers in spoken dialog systems
US20110067059A1 (en) * 2009-09-15 2011-03-17 At&T Intellectual Property I, L.P. Media control
US20110125500A1 (en) * 2009-11-25 2011-05-26 General Motors Llc Automated distortion classification
US20120084079A1 (en) * 2010-01-26 2012-04-05 Google Inc. Integration of Embedded and Network Speech Recognizers

Cited By (76)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8626511B2 (en) * 2010-01-22 2014-01-07 Google Inc. Multi-dimensional disambiguation of voice commands
US20110184730A1 (en) * 2010-01-22 2011-07-28 Google Inc. Multi-dimensional disambiguation of voice commands
US11443220B2 (en) 2011-01-25 2022-09-13 Telepathy Labs, Inc. Multiple choice decision engine for an electronic personal assistant
US20130278492A1 (en) * 2011-01-25 2013-10-24 Damien Phelan Stolarz Distributed, predictive, dichotomous decision engine for an electronic personal assistant
US10169712B2 (en) 2011-01-25 2019-01-01 Telepathy Ip Holdings Distributed, predictive, dichotomous decision engine for an electronic personal assistant
US9904892B2 (en) 2011-01-25 2018-02-27 Telepathy Labs, Inc. Multiple choice decision engine for an electronic personal assistant
US9904891B2 (en) 2011-01-25 2018-02-27 Telepathy Labs, Inc. Multiple choice decision engine for an electronic personal assistant
US10726347B2 (en) 2011-01-25 2020-07-28 Telepathy Labs, Inc. Multiple choice decision engine for an electronic personal assistant
US9842299B2 (en) * 2011-01-25 2017-12-12 Telepathy Labs, Inc. Distributed, predictive, dichotomous decision engine for an electronic personal assistant
US11436511B2 (en) 2011-01-25 2022-09-06 Telepathy Labs, Inc. Multiple choice decision engine for an electronic personal assistant
WO2013049237A1 (en) * 2011-09-30 2013-04-04 Google Inc. Hybrid client/server speech recognition in a mobile device
US8924219B1 (en) 2011-09-30 2014-12-30 Google Inc. Multi hotword robust continuous voice command detection in mobile devices
US20140201182A1 (en) * 2012-05-07 2014-07-17 Alexander Himanshu Amin Mobile communications device with electronic nose
US9645127B2 (en) * 2012-05-07 2017-05-09 Alexander Himanshu Amin Electronic nose system and method
US20150095301A1 (en) * 2012-05-07 2015-04-02 Alexander Himanshu Amin Mobile communications device with electronic nose
US10592510B2 (en) * 2012-05-07 2020-03-17 Alexander Himanshu Amin Mobile communications device with electronic nose
US11215595B2 (en) 2012-05-07 2022-01-04 Alexander Himanshu Amin Mobile communications device with electronic nose
US20210065273A1 (en) * 2012-05-07 2021-03-04 Hannah Elizabeth Amin Mobile communications device with electronic nose
US8930341B2 (en) * 2012-05-07 2015-01-06 Alexander Himanshu Amin Mobile communications device with electronic nose
US20170069010A1 (en) * 2012-05-07 2017-03-09 Hannah Elizabeth Amin Mobile communications device with electronic nose
US10839440B2 (en) * 2012-05-07 2020-11-17 Hannah Elizabeth Amin Mobile communications device with electronic nose
US20140096590A1 (en) * 2012-05-07 2014-04-10 Alexander Himanshu Amin Electronic nose system and method
US20170184559A1 (en) * 2012-05-07 2017-06-29 Alexander Himanshu Amin Mobile communications device with electronic nose
US10697948B2 (en) 2012-05-07 2020-06-30 Alexander Himanshu Amin Mobile communications device with electronic nose
US10254260B2 (en) * 2012-05-07 2019-04-09 Alexander Himanshu Amin Mobile communications device with electronic nose
US11782037B2 (en) 2012-05-07 2023-10-10 Alexander Himanshu Amin Mobile communications device with electronic nose
US20150127353A1 (en) * 2012-05-08 2015-05-07 Samsung Electronics Co., Ltd. Electronic apparatus and method for controlling electronic apparatus thereof
EP3534364A1 (en) * 2012-06-26 2019-09-04 Google LLC Distributed speech recognition
US9715879B2 (en) * 2012-07-02 2017-07-25 Salesforce.Com, Inc. Computer implemented methods and apparatus for selectively interacting with a server to build a local database for speech recognition at a device
US20140006028A1 (en) * 2012-07-02 2014-01-02 Salesforce.Com, Inc. Computer implemented methods and apparatus for selectively interacting with a server to build a local dictation database for speech recognition at a device
US20140058732A1 (en) * 2012-08-21 2014-02-27 Nuance Communications, Inc. Method to provide incremental ui response based on multiple asynchronous evidence about user input
US9384736B2 (en) * 2012-08-21 2016-07-05 Nuance Communications, Inc. Method to provide incremental UI response based on multiple asynchronous evidence about user input
US9582245B2 (en) 2012-09-28 2017-02-28 Samsung Electronics Co., Ltd. Electronic device, server and control method thereof
US10120645B2 (en) 2012-09-28 2018-11-06 Samsung Electronics Co., Ltd. Electronic device, server and control method thereof
US11086596B2 (en) 2012-09-28 2021-08-10 Samsung Electronics Co., Ltd. Electronic device, server and control method thereof
WO2014060054A1 (en) * 2012-10-16 2014-04-24 Audi Ag Speech recognition in a motor vehicle
US9412374B2 (en) 2012-10-16 2016-08-09 Audi Ag Speech recognition having multiple modes in a motor vehicle
KR102246893B1 (en) * 2013-12-11 2021-04-30 삼성전자주식회사 Interactive system, control method thereof, interactive server and control method thereof
KR20150068003A (en) * 2013-12-11 2015-06-19 삼성전자주식회사 interactive system, control method thereof, interactive server and control method thereof
US10255321B2 (en) 2013-12-11 2019-04-09 Samsung Electronics Co., Ltd. Interactive system, server and control method thereof
EP3025258A4 (en) * 2013-12-11 2017-01-18 Samsung Electronics Co., Ltd. Interactive system, server and control method thereof
US11786126B2 (en) 2014-09-17 2023-10-17 Canary Medical Inc. Devices, systems and methods for using and monitoring medical devices
US10546587B2 (en) * 2014-10-14 2020-01-28 Samsung Electronics Co., Ltd. Electronic device and method for spoken interaction thereof
US20160104484A1 (en) * 2014-10-14 2016-04-14 Samsung Electronics Co., Ltd. Electronic device and method for spoken interaction thereof
US11817013B2 (en) 2014-11-12 2023-11-14 Samsung Electronics Co., Ltd. Display apparatus and method for question and answer
KR102649208B1 (en) 2014-11-12 2024-03-20 삼성전자주식회사 Apparatus and method for qusetion-answering
KR20210075040A (en) * 2014-11-12 2021-06-22 삼성전자주식회사 Apparatus and method for qusetion-answering
KR20220130655A (en) * 2014-11-12 2022-09-27 삼성전자주식회사 Apparatus and method for qusetion-answering
KR102445927B1 (en) 2014-11-12 2022-09-22 삼성전자주식회사 Apparatus and method for qusetion-answering
US9898455B2 (en) 2014-12-01 2018-02-20 Nuance Communications, Inc. Natural language understanding cache
US11922095B2 (en) 2015-09-21 2024-03-05 Amazon Technologies, Inc. Device selection for providing a response
US20170083285A1 (en) * 2015-09-21 2017-03-23 Amazon Technologies, Inc. Device selection for providing a response
US9875081B2 (en) * 2015-09-21 2018-01-23 Amazon Technologies, Inc. Device selection for providing a response
US11540772B2 (en) 2016-03-23 2023-01-03 Canary Medical Inc. Implantable reporting processor for an alert implant
US11896391B2 (en) 2016-03-23 2024-02-13 Canary Medical Inc. Implantable reporting processor for an alert implant
US11779273B2 (en) 2016-03-23 2023-10-10 Canary Medical Inc. Implantable reporting processor for an alert implant
US11191479B2 (en) 2016-03-23 2021-12-07 Canary Medical Inc. Implantable reporting processor for an alert implant
US11045139B2 (en) 2016-03-23 2021-06-29 Canary Medical Inc. Implantable reporting processor for an alert implant
US11020053B2 (en) 2016-03-23 2021-06-01 Canary Medical Inc. Implantable reporting processor for an alert implant
US11638555B2 (en) 2016-03-23 2023-05-02 Canary Medical Inc. Implantable reporting processor for an alert implant
US10925537B2 (en) 2016-03-23 2021-02-23 Canary Medical Inc. Implantable reporting processor for an alert implant
US11004445B2 (en) * 2016-05-31 2021-05-11 Huawei Technologies Co., Ltd. Information processing method, server, terminal, and information processing system
EP3507797B1 (en) * 2016-08-31 2023-12-27 Bose Corporation Accessing multiple virtual personal assistants (vpa) from a single device
CN109844856A (en) * 2016-08-31 2019-06-04 伯斯有限公司 Multiple virtual personal assistants (VPA) are accessed from individual equipment
US10468024B2 (en) * 2016-11-02 2019-11-05 Panasonic Intellectual Property Corporation Of America Information processing method and non-temporary storage medium for system to control at least one device through dialog with user
US20180122366A1 (en) * 2016-11-02 2018-05-03 Panasonic Intellectual Property Corporation Of America Information processing method and non-temporary storage medium for system to control at least one device through dialog with user
US10971157B2 (en) 2017-01-11 2021-04-06 Nuance Communications, Inc. Methods and apparatus for hybrid speech recognition processing
US10482904B1 (en) 2017-08-15 2019-11-19 Amazon Technologies, Inc. Context driven device arbitration
US11875820B1 (en) 2017-08-15 2024-01-16 Amazon Technologies, Inc. Context driven device arbitration
US11133027B1 (en) 2017-08-15 2021-09-28 Amazon Technologies, Inc. Context driven device arbitration
US20210272563A1 (en) * 2018-06-15 2021-09-02 Sony Corporation Information processing device and information processing method
US11948564B2 (en) * 2018-06-15 2024-04-02 Sony Corporation Information processing device and information processing method
US20200082827A1 (en) * 2018-11-16 2020-03-12 Lg Electronics Inc. Artificial intelligence-based appliance control apparatus and appliance controlling system including the same
US11615792B2 (en) * 2018-11-16 2023-03-28 Lg Electronics Inc. Artificial intelligence-based appliance control apparatus and appliance controlling system including the same
CN111508484A (en) * 2019-01-31 2020-08-07 阿里巴巴集团控股有限公司 Voice data processing method and device
US11481401B2 (en) * 2020-11-25 2022-10-25 International Business Machines Corporation Enhanced cognitive query construction

Also Published As

Publication number Publication date
AU2011209760B2 (en) 2013-12-05
CN102884569A (en) 2013-01-16
KR101770358B1 (en) 2017-08-22
EP2529372A1 (en) 2012-12-05
US8868428B2 (en) 2014-10-21
EP3477637B1 (en) 2021-08-11
KR20130018658A (en) 2013-02-25
US20120310645A1 (en) 2012-12-06
WO2011094215A1 (en) 2011-08-04
AU2011209760A1 (en) 2012-08-16
US20120084079A1 (en) 2012-04-05
CA2788088A1 (en) 2011-08-04
EP2529372B1 (en) 2019-04-10
US8412532B2 (en) 2013-04-02
EP3477637A1 (en) 2019-05-01
CN102884569B (en) 2014-11-05

Similar Documents

Publication Publication Date Title
US8412532B2 (en) Integration of embedded and network speech recognizers
US20210166699A1 (en) Methods and apparatus for hybrid speech recognition processing
US9905228B2 (en) System and method of performing automatic speech recognition using local private data
US10325590B2 (en) Language model modification for local speech recognition systems using remote sources
CN110288985B (en) Voice data processing method and device, electronic equipment and storage medium
KR101418163B1 (en) Speech recognition repair using contextual information
EP2380166B1 (en) Markup language-based selection and utilization of recognizers for utterance processing
US20160203002A1 (en) Headless task completion within digital personal assistants
CN102137085A (en) Multi-dimensional disambiguation of voice commands
US20220020358A1 (en) Electronic device for processing user utterance and operation method therefor
JP2014513828A (en) Automatic conversation support
US20210020177A1 (en) Device for processing user voice input
AU2014200663B2 (en) Integration of embedded and network speech recognizers
JP2015102805A (en) Voice recognition system, electronic device, server, voice recognition method and voice recognition program
US20220238107A1 (en) Device and method for providing recommended sentence related to utterance input of user
US11776537B1 (en) Natural language processing system for context-specific applier interface
CN114168706A (en) Intelligent dialogue ability test method, medium and test equipment

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GRUENSTEIN, ALEXANDER;BYRNE, WILLIAM J.;SIGNING DATES FROM 20100601 TO 20100603;REEL/FRAME:024495/0213

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044142/0357

Effective date: 20170929