WO2001026093A1 - Interactive user interface using speech recognition and natural language processing - Google Patents

Interactive user interface using speech recognition and natural language processing

Info

Publication number
WO2001026093A1
Authority
WO
WIPO (PCT)
Prior art keywords
matching
phrase
grammar
database
searching
Application number
PCT/US2000/027407
Other languages
French (fr)
Inventor
Dean Weber
Original Assignee
One Voice Technologies, Inc.
Application filed by One Voice Technologies, Inc. filed Critical One Voice Technologies, Inc.
Priority to EP00968695A priority Critical patent/EP1221161A1/en
Priority to AU78570/00A priority patent/AU7857000A/en
Publication of WO2001026093A1 publication Critical patent/WO2001026093A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/18Speech classification or search using natural language modelling
    • G10L15/183Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L15/19Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40Network security protocols
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/18Speech classification or search using natural language modelling
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/226Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
    • G10L2015/228Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99931Database or file accessing
    • Y10S707/99933Query processing, i.e. searching

Definitions

  • the present invention relates to speech recognition for an object-based computer user interface. More specifically, the present invention relates to a novel method and system for user interaction with a computer using speech recognition and natural language processing.
  • This application is a continuation-in-part of U.S. Patent Application Serial No. 09/166,198, entitled “Network Interactive User Interface Using Speech Recognition and Natural Language Processing,” filed October 5, 1998.
  • Speech recognition involves software and hardware that act together to audibly detect human speech and translate the detected speech into a string of words.
  • speech recognition works by breaking down sounds the hardware detects into smaller non-divisible sounds called phonemes.
  • Phonemes are distinct units of sound. For example, the word “those” is made up of three phonemes; the first is the “th” sound, the second is the “o” sound, and the third is the "s" sound.
  • the speech recognition software attempts to match the detected phonemes with known words from a stored dictionary.
  • An example of a speech recognition system is given in U.S. Patent No. 4,783,803, entitled "SPEECH RECOGNITION APPARATUS AND METHOD", issued November 8, 1988, assigned to Dragon Systems, Inc., and incorporated herein by reference.
  • a proposed enhancement to these speech recognition systems is to process the detected words using a natural language processing system.
  • Natural language processing generally involves determining a conceptual "meaning” (e.g., what meaning the speaker intended to convey) of the detected words by analyzing their grammatical relationship and relative context.
  • U.S. Patent No. 4,887,212 entitled “PARSER FOR NATURAL LANGUAGE TEXT", issued December 12, 1989, assigned to International Business Machines Corporation and incorporated by reference herein teaches a method of parsing an input stream of words by using word isolation, morphological analysis, dictionary look-up and grammar analysis.
  • Natural language processing used in concert with speech recognition provides a powerful tool for operating a computer using spoken words rather than manual input such as a keyboard or mouse.
  • a conventional natural language processing system may fail to determine the correct "meaning" of the words detected by the speech recognition system. In such a case, the user is typically required to recompose or restate the phrase, with the hope that the natural language processing system will determine the correct "meaning” on subsequent attempts. Clearly, this may lead to substantial delays as the user is required to restate the entire sentence or command.
  • Another drawback of conventional systems is that the processing time required for the speech recognition can be prohibitively long. This is primarily due to the finite speed of the processing resources as compared with the large amount of information to be processed.
  • Another drawback of conventional speech recognition and natural language processing systems is that once a user successfully "trains" a computer system to recognize the user's speech and voice commands, the user cannot easily move to another computer without having to undergo the process of training the new computer. As a result, changing a user's computer workstations or location results in wasted time by users that need to re-train the new computer to the user's speech habits and voice commands.
  • the present invention is a novel and improved system and method for interacting with a computer using utterances, speech processing and natural language processing.
  • the system comprises a speech processor for searching a first grammar file for a matching phrase for the utterance, and for searching a second grammar file for the matching phrase if the matching phrase is not found in the first grammar file.
  • the system also includes a natural language processor for searching a database for a matching entry for the matching phrase; and an application interface for performing an action associated with the matching entry if the matching entry is found in the database.
  • the natural language processor updates a user voice profile with at least one of the database, the first grammar file and the second grammar file with the matching phrase if the matching entry is not found in the database.
  • the first grammar file is a context-specific grammar file.
  • a context-specific grammar file is one that contains words and phrases that are highly relevant to a specific subject.
  • the second grammar file is a general grammar file.
  • a general grammar file is one that contains words and phrases which do not need to be interpreted in light of a context. That is to say, the words and phrases in the general grammar file do not belong to any parent context.
  • the speech processor searches a dictation grammar for the matching phrase if the matching phrase is not found in the general grammar file.
  • the dictation grammar is a large vocabulary of general words and phrases. By searching the context-specific and general grammars first, it is expected that the speech recognition time will be greatly reduced due to the context-specific and general grammars being physically smaller files than the dictation grammar.
  • the speech processor searches a context-specific dictation model for the matching phrase if the matching phrase is not found within the dictation grammar.
  • a context-specific dictation model is a model that indicates the relationship between words in a vocabulary. The speech processor uses this model to help decode the meaning of related words in an utterance.
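As a rough illustration of the search order just described, the following Python sketch cascades from the smallest enabled grammar down to the dictation model. It is a minimal sketch: the in-memory grammar representation, the helper names, and the decode() method are assumptions for illustration, not the patent's actual file formats or APIs.

```python
# Cascaded grammar search: smallest, most relevant vocabulary first.
def recognize(utterance_words, context_grammar, general_grammar,
              dictation_words, dictation_model=None):
    phrase = " ".join(utterance_words)
    # "Command and control" mode: compare the utterance as a whole.
    if context_grammar and phrase in context_grammar:
        return ("context-specific", phrase)
    if phrase in general_grammar:
        return ("general", phrase)
    # "Dictation" mode: compare the utterance one word at a time.
    if all(word in dictation_words for word in utterance_words):
        return ("dictation", phrase)
    # "Model matching" mode: let the context-specific dictation model
    # interpret the related words in the utterance.
    if dictation_model is not None:
        return ("model", dictation_model.decode(utterance_words))
    return ("no match", None)  # may trigger an error message

# Toy example: the phrase is found in the general grammar.
print(recognize("what time is it".split(),
                {"8 o'clock"}, {"what time is it"}, {"what", "time"}))
```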
  • the natural language processor replaces at least one word in the matching phrase prior to searching the database. This may be accomplished by a variable replacer in the natural language processor for substituting a wildcard for the at least one word in the matching phrase. By substituting wildcards for certain words (called "word-variables") in the phrase, the number of entries in the database can be significantly reduced. Additionally, a pronoun substituter in the natural language processor may substitute a proper name for pronouns in the matching phrase, allowing user-specific facts to be stored in the database.
  • a string formatter formats the text of the matching phrase prior to searching the database. Also, a word weighter weights individual words in the matching phrase according to a relative significance of the individual words prior to searching the database.
  • a search engine in the natural language processor generates a confidence value for the matching entry.
  • the natural language processor compares the confidence value with a threshold value.
  • a boolean tester determines whether a required number of words from the matching phrase are present in the matching entry. This boolean testing serves as a verification of the results returned by the search engine.
  • the natural language processor prompts the user whether the matching entry is a correct interpretation of the utterance if the required number of words from the matching phrase are not present in the matching entry.
  • the natural language processor also prompts the user for additional information if the matching entry is not a correct interpretation of the utterance.
  • At least one of the database, the first grammar file and the second grammar file are updated with the additional information. In this way, the present invention adaptively "learns" the meaning of additional utterances, thereby enhancing the efficiency of the user interface.
  • the speech processor will enable and search a context-specific grammar associated with the matching entry for a subsequent matching phrase for a subsequent utterance. This ensures that the most relevant words and phrases will be searched first, thereby decreasing speech recognition times.
  • the invention includes a method for updating a computer for voice interaction with an object, such as a help file or web page.
  • an object table, which associates the object with the voice interaction system, is transferred to the computer over a network.
  • the location of the object table can be embedded within the object, at a specific internet web-site, or at a consolidated location that stores object tables for multiple objects.
  • the object table is searched for an entry matching the object.
  • the entry matching the object may result in an action being performed, such as text speech being voiced through a speaker, a context-specific grammar file being used, or a natural language processor database being used.
  • the object table may be part of a dialog definition file. Dialog definition files may also include a context-specific grammar, entries for a natural language processor database, a context-specific dictation model, or any combination thereof.
  • a network interface transfers a dialog definition file over the network.
  • the dialog definition file contains an object table.
  • a data processor searches the object table for a table entry that matches the object. Once this matching table entry is found, an application interface performs an action specified by the matching entry.
  • the dialog definition file associated with a network object is located, and then read.
  • the dialog definition file could be read from a variety of locations, such as a web-site, storage media, or a location that stores dialog definition files for multiple objects.
  • An object table, contained within the dialog definition file, is searched to find a table entry matching the object.
  • the matching entry defines an action associated with the object, and the action is then performed by the system.
  • the dialog definition file may contain a context-specific grammar, entries for a natural language processor database, a context-specific dictation model, or any combination thereof.
  • FIG. 1 is a functional block diagram of an exemplary computer system for use with the present invention
  • FIG. 2 is an expanded functional block diagram of the CPU 102 and storage medium 108 of the computer system of FIG. 1 of the present invention
  • FIGS. 3A-3D are a flowchart of the method of providing interactive speech recognition and natural language processing to a computer;
  • FIG. 4 is a diagram of selected columns of an exemplary natural language processing (NLP) database of the present invention.
  • FIG. 5 is a diagram of an exemplary Dialog Definition File (DDF) according to the present invention.
  • FIG. 6 is a diagram of selected columns of an exemplary object table of the present invention.
  • FIGS. 7A-7C are a flowchart of the method of the present invention, illustrating the linking of interactive speech recognition and natural language processing to a networked object, such as a web-page;
  • FIG. 8 is a diagram depicting a computer system connecting to other computers, storage media, and web-sites via the Internet.
  • FIG. 9 is a diagram of an exemplary user voice profile according to the present invention
  • FIG. 10 is a flowchart of the method of the present invention, illustrating the retrieval and enabling of an individual's user voice profile during login at a computer workstation.
  • computer system 100 includes a central processing unit (CPU) 102.
  • the CPU 102 may be any general purpose microprocessor or microcontroller as is known in the art, appropriately programmed to perform the method described herein with reference to FIGS. 3A-3D.
  • the software for programming the CPU can be found at storage medium 108 or alternatively from another location across a computer network.
  • CPU 102 may be a conventional microprocessor such as the Pentium II processor manufactured by Intel Corporation or the like.
  • CPU 102 communicates with a plurality of peripheral equipment, including a display 104, manual input 106, storage medium 108, microphone 110, speaker 112, data input port 114 and network interface 116.
  • Display 104 may be a visual display such as a CRT, LCD screen, touch-sensitive screen, or other monitors as are known in the art for visually displaying images and text to a user.
  • Manual input 106 may be a conventional keyboard, keypad, mouse, trackball, or other input device as is known in the art for the manual input of data.
  • Storage medium 108 may be a conventional read/write memory such as a magnetic disk drive, floppy disk drive, CD-ROM drive, silicon memory or other memory device as is known in the art for storing and retrieving data.
  • storage medium 108 may be remotely located from CPU 102, and be connected to CPU 102 via a network such as a local area network (LAN), or a wide area network (WAN), or the Internet.
  • Microphone 110 may be any suitable microphone as is known in the art for providing audio signals to CPU 102.
  • Speaker 112 may be any suitable speaker as is known in the art for reproducing audio signals from CPU 102. It is understood that microphone 110 and speaker 112 may include appropriate digital-to-analog and analog-to-digital conversion circuitry.
  • Data input port 114 may be any data port as is known in the art for interfacing with an external accessory using a data protocol such as RS-232, Universal Serial Bus, or the like.
  • Network interface 116 may be any interface as known in the art for communicating or transferring files across a computer network; examples of such networks include TCP/IP, Ethernet, or token ring networks.
  • a network interface 116 may consist of a modem connected to the data input port 114.
  • FIG. 1 illustrates the functional elements of a computer system 100.
  • Each of the elements of computer system 100 may be suitable off-the-shelf components as described above.
  • the present invention provides a method and system for human interaction with the computer system 100 using speech.
  • the computer system 100 may be connected to the Internet 700, a collection of computer networks.
  • computer system 100 may use a network interface 116, a modem connected to the data input port 114, or any other method known in the art.
  • Web-sites 710, other computers 720, and storage media 108 may also be connected to the Internet through such methods known in the art.
  • FIG. 2 illustrates an expanded functional block diagram of CPU 102 and storage medium 108. It is understood that the functional elements of FIG. 2 may be embodied entirely in software or hardware or both. In the case of a software embodiment, the software may be found at storage medium 108 or at an alternate location across a computer network.
  • CPU 102 includes speech recognition processor 200, data processor 201, natural language processor 202, and application interface 220.
  • the data processor 201 interfaces with the display 104, storage medium 108, microphone 110, speaker 112, data input port 114, and network interface 116.
  • the data processor 201 allows the CPU to locate and read data from these sources.
  • Natural language processor 202 further includes variable replacer 204, string formatter 206, word weighter 208, boolean tester 210, pronoun replacer 211, and a search engine.
  • Storage medium 108 includes a plurality of context-specific grammar files 212, general grammar file 214, dictation grammar 216, context-specific dictation model 217, and natural language processor (NLP) database 218.
  • In the preferred embodiment, the grammar files 212, 214, and 216 are Backus-Naur Form (BNF) files, which describe the structure of the language spoken by the user.
  • BNF files are well known in the art for describing the structure of language, and details of BNF files will therefore not be discussed herein.
  • One advantage of BNF files is that hierarchical tree-like structures may be used to describe phrases or word sequences, without the need to explicitly recite all combinations of these word sequences.
  • the use of BNF files in the preferred embodiment minimizes the physical sizes of the files 212, 214, and 216 in the storage medium 108, increasing the speed at which these files can be enabled and searched as described below.
  • in alternate embodiments, other file structures are used.
  • the context-specific dictation model 217 is an optional file that contains specific models to improve dictation accuracy. These models enable users to specify word orders and word models. The models accomplish this by describing words and their relationship to other words, thus determining word meaning by contextual interpretation in a specific field or topic. Take, for example, the phrase "650 megahertz microprocessor computer."
  • a context-specific dictation model 217 for computers may indicate the likelihood that the word "microprocessor" appears with "computer," and that a number, such as "650," is likely to be found near the word "megahertz."
  • a speech recognition processor would analyze the phrase, interpret a single object, i.e. the computer, and realize that "650 megahertz microprocessor" are adjectives or traits describing the type of computer.
  • Topics for context-specific dictation models 217 vary widely, and may include any topic area of interest to a user — both broad and narrow. Broad topics may include: history, law, medicine, science, technology, or computers. Specialized topics, such as a particular field of literature encountered at a book retailer's web-site are also possible. Such a context-specific dictation model 217 may contain text for author and title information, for example.
  • the context-specific dictation model 217 format relies upon the underlying speech recognition processor 200, and is specific to each type of speech recognition processor 200.
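As a rough, recognizer-independent illustration of such a model, the sketch below scores adjacent word pairs against topic-specific likelihoods; the class shape and the numbers are assumptions, since (as noted above) the real format is specific to the underlying speech recognition processor.

```python
# Toy stand-in for a context-specific dictation model: word-pair
# likelihoods for the "computers" topic.
class ContextDictationModel:
    def __init__(self, pair_likelihoods):
        self.pairs = pair_likelihoods  # (word, next_word) -> likelihood

    def score(self, words):
        # Wildcard numbers so that "650" scores like any number appearing
        # near "megahertz", then sum likelihoods over adjacent pairs.
        norm = ["<number>" if w.isdigit() else w for w in words]
        return sum(self.pairs.get((a, b), 0.0) for a, b in zip(norm, norm[1:]))

computer_model = ContextDictationModel({
    ("<number>", "megahertz"): 0.9,
    ("megahertz", "microprocessor"): 0.8,
    ("microprocessor", "computer"): 0.8,
})
print(computer_model.score("650 megahertz microprocessor computer".split()))  # 2.5
```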
  • the operation and interaction of these functional elements of FIG. 2 will be described with reference to the flowchart of FIGS. 3A-3D.
  • In FIG. 3A, the flow begins at block 300 with the providing of an utterance to speech processor 200.
  • An utterance is a series of sounds having a beginning and an end, and may include one or more spoken words.
  • a microphone 110 that captures spoken words may perform the step of block 300.
  • the utterance may be provided to the speech processor 200 over data input port 114, or from storage medium 108.
  • the utterance is in a digital format such as the well-known ".wav" audio file format.
  • the speech processor 200 determines whether one of the context-specific grammars 212 has been enabled. If the context-specific grammars 212 are enabled, the context-specific grammars 212 are searched at block 304.
  • the context-specific grammars 212 are BNF files that contain words and phrases which are related to a parent context.
  • a context is a subject area.
  • examples of contexts may be "news", or "weather", or "stocks”. In such a case, the context-specific grammars 212 would each contain commands, control words, descriptors, qualifiers, or parameters that correspond to a different one of these contexts.
  • the use of contexts provides a hierarchical structure for types of information. Contexts and their use will be described further below with reference to the NLP database 218.
  • the context-specific grammar 212 is searched for a match to the utterance provided at block 300. However, if a context-specific grammar 212 has not been enabled, the flow proceeds to block 308 where the general grammar 214 is enabled.
  • the general grammar 214 is a BNF file which contains words and phrases which do not, themselves, belong to a parent context, but may have an associated context for which a context-specific grammar file 212 exists. In other words, the words and phrases in the general grammar 214 may be at the root of the hierarchical context structure. For example, in one embodiment applicable to personal computers, the general grammar 214 would contain commands and control phrases.
  • the general grammar 214 is searched for a matching word or phrase for the utterance provided at block 300. A decision is made, depending on whether the match is found, at block 312. If a match is not found, then the dictation grammar 216 is enabled at block 314.
  • the dictation grammar 216 is a BNF file that contains a list of words that do not, themselves, have either a parent context or an associated context. For example, in one embodiment applicable to a personal computer, the dictation grammar 216 contains a relatively large list of general words similar to a general dictionary.
  • the dictation grammar is searched for matching words for each word of the utterance provided at block 300.
  • At decision block 318, if no matching words are found, any relevant context-specific dictation model 217 is enabled at block 317.
  • a visual error message is optionally displayed at the display 104 or an audible error message is optionally reproduced through speaker 112, at block 320.
  • the process ends until another utterance is provided to the speech processor 200 at block 300.
  • the enabled context-specific grammar 212, if any, is first searched. If there are no matches in the enabled context-specific grammar 212, then the general grammar 214 is enabled and searched. If there are no matches in the general grammar 214, then the dictation grammar 216 is enabled and searched. Finally, if there are no matches in the dictation grammar 216, a context-specific dictation model 217 is enabled at block 317 and used to interpret the utterance.
  • the speech recognition processor 200 when the speech recognition processor 200 is searching either the context-specific grammar 212 or the general grammar 214, it is said to be in the "command and control" mode. In this mode, the speech recognition processor 200 compares the entire utterance as a whole to the entries in the grammar. By contrast, when the speech recognition processor 200 is searching the dictation grammar, it is said to be in the "dictation" mode. In this mode, the speech recognition processor 200 compares the utterance to the entries in the dictation grammar 216 one word at a time. Finally, when the speech recognition processor 200 is matching the utterance with a context-specific dictation model 217, it is said to be in "model matching" mode.
  • any individual context-specific grammar 212 will be smaller in size (i.e., fewer total words and phrases) than the general grammar 214, which in turn will be smaller in size than the dictation grammar 216.
  • searching any enabled context-specific grammar 212 first, it is likely that a match, if any, will be found more quickly, due at least in part to the smaller file size.
  • searching the general grammar 214 before the dictation grammar 216 it is likely that a match, if any, will be found more quickly.
  • the words and phrases in the enabled context-specific grammar 212 are more likely to be uttered by the user because they are words that are highly relevant to the subject matter about which the user was most recently speaking. This also allows the user to speak in a more conversational style, using sentence fragments, with the meaning of his words being interpreted according to the enabled context-specific grammar 212.
  • the present invention may search more efficiently than if the searching were to occur one entry at a time in a single, large list of all expected words and phrases.
  • Block 322 shows that one action may be to direct application interface 220 to take some action with respect to a separate software application or entity.
  • application interface 220 may use the Speech Application Programming Interface (SAPI) standard by Microsoft to communicate with an external application.
  • the external application may be directed, for example, to access a particular Internet web site URL or to speak a particular phrase by converting text to speech.
  • Other actions may be taken as will be discussed further below with reference to the NLP database 218 of FIG. 4.
  • Block 324 shows that another action may be to access a row in the natural language processing (NLP) database 218 directly, thereby bypassing the natural language processing steps described further below.
  • Block 326 shows that another action may be to prepend a word or phrase for the enabled context to the matching word or phrase found in the context-specific grammar 306. For example, if the enabled context were "movies" and the matching utterance were “8 o'clock,” the word “movies” would be prepended to the phrase “8 o'clock” to form the phrase “movies at 8 o'clock.”
  • the flow may proceed to block 322 where the application interface 220 is directed to take an action as described above, or to block 324 where a row in the NLP database is directly accessed.
  • if the match is found in the general grammar 214, no prepending of a context occurs because, as stated above, the entries in the general grammar 214 do not, themselves, have a parent context.
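A minimal sketch of the prepending step (the prefix string "movies at" is taken from the example above; the function name is illustrative):

```python
# Prefix a context-specific match with the enabled context's word or phrase;
# matches from the general grammar pass through unchanged.
def prepend_context(matched_phrase, context_prefix=None):
    return f"{context_prefix} {matched_phrase}" if context_prefix else matched_phrase

print(prepend_context("8 o'clock", "movies at"))  # -> "movies at 8 o'clock"
print(prepend_context("open the file"))           # general grammar: unchanged
```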
  • manually entered words may be captured, at block 301, and input into the natural language processor. For example, words may be entered manually via manual input 106.
  • the natural language processor 202 formats the phrase for natural language processing analysis. This formatting is accomplished by string formatter 206 and may include such text processing as removing duplicate spaces between words, making all letters lower case (or upper case), expanding contractions (e.g., changing "it's" to "it is”), and the like. The purpose of this formatting step is to prepare the phrase for parsing.
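A minimal sketch of this formatting pass, assuming a small contraction table (the real table and rule set are not specified here):

```python
# Prepare a phrase for parsing: lower-case it, expand contractions, and
# remove duplicate spaces (split()/join() collapses the extra whitespace).
CONTRACTIONS = {"it's": "it is", "what's": "what is", "don't": "do not"}

def format_phrase(phrase):
    words = [CONTRACTIONS.get(w, w) for w in phrase.lower().split()]
    return " ".join(words)

print(format_phrase("What's  playing at 8 o'clock"))  # -> "what is playing at 8 o'clock"
```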
  • the term "word-variables" refers to words or phrases that represent amounts, dates, times, currencies, and the like; at block 330, these word-variables are replaced with an associated wildcard function by variable replacer 204.
  • the phrase “what movies are playing at 8 o'clock” would be transformed at block 330 to "what movies are playing at $time” where "$time” is a wildcard function used to represent any time value.
  • the phrase "sell IBM stock at 100 dollars" would be transformed at block 330 to "sell IBM stock at $dollars" where "$dollars" is a wildcard function used to represent any dollar value.
  • This step may be accomplished by a simple loop that searches the phrase for key tokens such as the words "dollar” or "o'clock” and replaces the word-variables with a specified wildcard function.
  • an array may be used. This allows re-substitution of the original word-variable back into the phrase at the same position after the NLP database 218 has been searched.
  • the purpose of replacing word-variables with an associated wildcard function at block 330 is to reduce the number of entries that must be present in the NLP database 218.
  • the NLP database 218 would only contain the phrase "what movies are playing at $time" rather than a separate entry for 8 o'clock, 9 o'clock, 10 o'clock, and so on.
  • the NLP database 218 will be described further below.
  • pronouns in the phrase are replaced with proper names by pronoun replacer 211.
  • the pronouns "I,” "my,” or “mine” would be replaced with the speaker's name.
  • the purpose of this step is to allow user-specific facts to be stored and accessed in the NLP database 218. For example, the sentence “who are my children” would be transformed into “who are Dean's children” where "Dean” is the speaker's proper name.
  • this step may be performed in a simple loop that searches the phrase for pronouns, and replaces the pronouns found with an appropriate proper name. In order to keep track of the locations in the phrase where a substitution was made, an array may be used.
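Both replacement passes can be sketched together as below. The token lists, the wildcard names ($time, $dollars), and the loop shape are illustrative assumptions drawn from the examples above; the saved (position, original) pairs stand in for the array used to re-substitute the original word-variables after the search. Possessive forms (e.g., "my" becoming "Dean's") are glossed over for brevity.

```python
TIME_TOKENS = {"o'clock"}
MONEY_TOKENS = {"dollar", "dollars"}
PRONOUNS = {"i", "my", "mine"}

def substitute(words, speaker_name):
    out, saved = [], []  # saved: (position, original text) for re-substitution
    for word in words:
        if word in TIME_TOKENS and out and out[-1].isdigit():
            saved.append((len(out) - 1, out[-1] + " " + word))
            out[-1] = "$time"                 # "8 o'clock"   -> "$time"
        elif word in MONEY_TOKENS and out and out[-1].isdigit():
            saved.append((len(out) - 1, out[-1] + " " + word))
            out[-1] = "$dollars"              # "100 dollars" -> "$dollars"
        elif word in PRONOUNS:
            saved.append((len(out), word))
            out.append(speaker_name)          # pronoun -> speaker's proper name
        else:
            out.append(word)
    return out, saved

words, saved = substitute("what movies are playing at 8 o'clock".split(), "Dean")
print(" ".join(words))  # -> "what movies are playing at $time"
```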
  • the individual words in the phrase are weighted according to their relative "importance” or "significance” to the overall meaning of the phrase by word weighter 208. For example, in one embodiment there are three weighting factors assigned. The lowest weighting factor is assigned to words such as "a,” “an,” “the,” and other articles. The highest weighting factor is given to words that are likely to have a significant relation to the meaning of the phrase. For example, these may include all verbs, nouns, adjectives, and proper names in the NLP database 218. A medium weighting factor is given to all other words in the phrase. The purpose of this weighting is to allow for more powerful searching of the NLP database 218. An example of selected columns of the NLP database 218 of one embodiment is shown in FIG. 4.
  • the NLP database 218 comprises a plurality of columns 400-410, and a plurality of rows 412A-412N.
  • the entries represent phrases that are "known" to the NLP database.
  • In column 402, a number of required words for each entry in column 400 is shown.
  • In column 404, an associated context or subcontext for each entry in column 400 is shown.
  • In columns 408 and 410, one or more associated actions are shown for each entry in column 400.
  • the NLP database 218 shown in FIG. 4 is merely a simplified example for the purpose of teaching the present invention. Other embodiments may have more or fewer columns with different entries.
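For illustration, the selected columns of FIG. 4 might be held in memory as rows like the following; the field names and the sample values (including the required-word counts) are assumptions mirroring the examples discussed below.

```python
from dataclasses import dataclass, field

@dataclass
class NlpEntry:
    phrase: str            # column 400: phrase "known" to the database
    required_words: int    # column 402: non-noise words that must be present
    context: str           # column 404: associated context or subcontext
    actions: list = field(default_factory=list)  # columns 408 and 410

nlp_database = [
    NlpEntry("what movies are playing at $time", 3, "movies",
             ["access movie web site"]),                             # cf. row 412A
    NlpEntry("what is the price of IBM stock on $date", 3, "stocks",
             ["access stock web site"]),                             # cf. row 412B
    NlpEntry("what time is it", 2, "time",
             ["speak the present time"]),                            # cf. row 412E
]
```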
  • the NLP database 218 is searched for possible matches to the phrase, based on whether the entry in column 400 of the NLP database 218 contains any of the words in the phrase (or their synonyms), and the relative weights of those words.
  • a confidence value is generated for each of the possible matching entries based on the number of occurrences of each of the words in the phrase and their relative weights.
  • Weighted word searching of a database is well known in the art and may be performed by commercially available search engines such as the product "dtsearch” by DT Software, Inc. of Arlington, Virginia.
  • searching using synonyms is well known in the art and may be accomplished using such publicly available tools such as "WordNet,” developed by the Cognitive Science Laboratory of Princeton University in Princeton, New Jersey.
  • the search engine may be an integral part of the natural language processor 202.
  • the natural language processor 202 determines whether any of the possible matching entries has a confidence value greater than or equal to some predetermined minimum threshold, T.
  • T represents the lowest acceptable confidence value for which a decision can be made as to whether the phrase matched any of the entries in the NLP database 218. If there is no possible matching entry with a confidence value greater than or equal to T, then the flow proceeds to block 342 where an optional error message is either visually displayed to the user over display 104 or audibly reproduced over speaker 112.
  • the type of error message, if any, displayed to the user may depend on how many "hits" (i.e., how many matching words from the phrase) were found in the highest-confidence NLP database entry. A different type of error message would be generated if there were zero or one hits than if there were two or more hits.
  • the flow proceeds to block 344 where the "noise” words are discarded from the phrase.
  • the "noise" words include words that do not contribute significantly to the overall meaning of the phrase relative to the other words in the phrase. These may include articles, pronouns, conjunctions, and words of a similar nature. "Non-noise" words, by contrast, are words that contribute significantly to the overall meaning of the phrase, and include verbs, nouns, adjectives, proper names, and words of a similar nature.
  • the non-noise word requirement is retrieved from column 402 of the NLP database 218 for the highest-confidence matching entry at block 346. For example, if the highest-confidence matching phrase was the entry in row 412A (e.g., "what movies are playing at $time"), then the number of required non-noise words is 3.
  • a test is made to determine whether the number of required non-noise words from the phrase is actually present in the highest-confidence entry retrieved from the NLP database 218. This test is a verification of the accuracy of the relevance-style search performed at block 336, it being understood that an entry may generate a confidence value higher than the minimum threshold, T, without being an acceptable match for the phrase.
  • the test performed at decision 348 is a boolean "AND" test performed by boolean tester 210.
  • the test determines whether each one of the non-noise words in the phrase (or its synonym) is actually present in the highest-confidence entry. If there are a sufficient number of required words actually present in the highest-confidence entry, then the flow proceeds to block 350, where the natural language processor 202 directs application interface 220 to take an associated action from column 408 or 410. It is understood that additional action columns may also be present.
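Putting the word weighting, the minimum confidence threshold T, and the boolean verification together, a simplified search over the NlpEntry rows from the earlier sketch might look like this. The two-level weights, the noise list, and the threshold value are illustrative assumptions (the text above describes three weighting levels and a synonym-aware search).

```python
NOISE = {"a", "an", "the", "at", "on", "is", "are", "what"}

def weight(word):
    return 0.2 if word in NOISE else 1.0  # simplified two-level weighting

def confidence(phrase_words, entry):
    entry_words = set(entry.phrase.split())
    total = sum(weight(w) for w in phrase_words) or 1.0
    return sum(weight(w) for w in phrase_words if w in entry_words) / total

def best_match(phrase_words, database, threshold=0.7):
    entry = max(database, key=lambda e: confidence(phrase_words, e))
    if confidence(phrase_words, entry) < threshold:
        return None  # below T: optional error message instead
    # Boolean "AND" verification: are enough non-noise words actually present?
    non_noise = [w for w in phrase_words if w not in NOISE]
    present = sum(1 for w in non_noise if w in entry.phrase.split())
    if present < entry.required_words:
        return None  # ask the user to confirm the interpretation
    return entry     # take the associated action(s)

match = best_match("what movies are playing at $time".split(), nlp_database)
print(match.actions if match else "no confident match")
```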
  • for example, for the highest-confidence entry in row 412A, the associated action in column 408 (e.g., access movie web site) would be taken.
  • Other entries in the NLP database have other associated actions. For example, if the highest-confidence entry is that in row 412E (e.g., "what time is it"), the associated action may be for natural language processor 202 to direct a text-to-speech application (not shown) to speak the present time to the user through the speaker 112.
  • the first associated action may be to access a predetermined news web site on the Internet, and a second associated action may be to direct an image display application (not shown) to display images associated with the news. Different or additional actions may also be performed.
  • the natural language processor 202 instructs the speech recognition processor 200 to enable the context-specific grammar 212 for the associated context of column 404.
  • the flow proceeds to block 354 where the user is prompted over display 104 or speaker 112 whether the highest-confidence entry was meant. For example, if the user uttered "How much is IBM stock selling for today," the highest-confidence entry in the NLP database 218 may be the entry in row 412B. In this case, although the relevance factor may be high, the number of required words (or their synonyms) may not be sufficient. Thus, the user would be prompted at block 354 whether he meant "what is the price of IBM stock on August 28, 1998." The user may respond either affirmatively or negatively; if the response is negative, the user is prompted for additional information as described below.
  • the flow proceeds to FIG. 3D where the associated context from column 404 of NLP database 218 is retrieved for the highest-confidence entry, and the user is prompted for information using a context-based interactive dialog at block 360. For example, if the user uttered "what is the price of XICOR stock today," and the highest confidence entry from the NLP database 218 was row 412B (e.g., "what is the price of IBM stock on $date"), then the user would be prompted at block 354 whether that was what he meant.
  • context-based interactive dialog may entail prompting the user for the name and stock ticker symbol of XICOR stock. The user may respond by speaking the required information.
  • a different context-based interactive dialog may be used for each of the possible contexts. For example, the "weather” context-based interactive dialog may entail prompting the user for the name of the location (e.g., the city) about which weather information is desired. Also, the "news" context-based interactive dialog may entail prompting the user for types of articles, news source, Internet URL for the news site, or other related information.
  • the NLP database 218, general grammar 214, and context-specific grammar 212 are updated to include the new information, at block 362. In this way, the next time the user asks for that information, a proper match will be found, and the appropriate action taken without prompting the user for more information.
  • the present invention adaptively "learns" to recognize phrases uttered by the user.
  • one or more of the NLP database 218, context specific grammar 212, general grammar 214, and dictation grammar 216 also contain time-stamp values (not shown) associated with each entry. Each time a matching entry is used, the time-stamp value associated with that entry is updated. At periodic intervals, or when initiated by the user, the entries that have a time-stamp value before a certain date and time are removed from their respective databases/grammars. In this way, the databases/grammars may be kept to an efficient size by "purging" old or out-of-date entries. This also assists in avoiding false matches.
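A minimal sketch of this purging scheme, assuming each entry's time stamp is kept in a simple mapping:

```python
import time

last_used = {}  # entry key -> time stamp, updated on every use

def touch(entry_key):
    last_used[entry_key] = time.time()

def purge(entry_keys, max_age_seconds):
    # Keep entries whose stamp is at or after the cutoff; entries that were
    # never stamped are kept here for simplicity.
    cutoff = time.time() - max_age_seconds
    return [k for k in entry_keys if last_used.get(k, cutoff) >= cutoff]
```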
  • the updates to the NLP database 218, general grammar 214, and context-specific grammar 212 are stored in a user voice profile 800, shown in FIG. 9.
  • a user voice profile 800 would be comprised of any general grammar additions 214a, context-specific grammar additions 212a, and NLP database additions 218a created by the user training. Since each user of the system would have a different user voice profile 800, the invention would be flexible enough to allow for special customizations and could adapt to the idiosyncrasies of individual users.
  • the user voice profile 800 would be stored locally and mirrored at a known server location.
  • the mirrored copy, referred to as the "travelling" user voice profile, enables users to access the phrases "adaptively" learned by the invention, even when the user is logged in at a different location.
  • FIG. 10 illustrates an exemplary method of the present invention that accesses customized user voice profiles 800 at local and remote (travelling) locations. Initially, a valid system user is verified, by any means known in the art, and then the system searches for a locally stored user voice profile. For example, the system queries the user for their login ID and password as shown in block 900. If the password and login ID match, as determined by decision block 905, the user is deemed to be a valid user.
  • this login ID and password are but one of many methods known in the art to verify valid users, and all such validation systems could be easily substituted.
  • the system searches for a travelling user voice profile, block 920. If either search turns up a user voice profile, the user voice profile is loaded, blocks 915 and 925, respectively.
  • if the retrieval of the user voice profile 800 is successful, blocks 930 and 935, the user voice profile 800 is enabled by extracting the general grammar additions 214a, context-specific grammar additions 212a, and NLP database additions 218a. These "learned" adaptations are then used by the system, as discussed earlier with the method of FIGS. 3A-3D.
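The lookup order of FIG. 10 might be sketched as follows; the file layout and the remote fetch function are assumptions standing in for the local profile search and the travelling-profile retrieval.

```python
import os

def load_user_voice_profile(user_id, local_dir, fetch_travelling_profile):
    # Prefer the locally stored profile; fall back to the mirrored
    # "travelling" profile at the known server location.
    local_path = os.path.join(local_dir, f"{user_id}.profile")
    if os.path.exists(local_path):
        with open(local_path, "rb") as f:
            return f.read()
    return fetch_travelling_profile(user_id)  # None if neither copy exists
```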
  • speech recognition and natural language processing may be used to interact with objects, such as help files (".hlp" files), World-Wide-Web ("WWW" or "web") pages, or any other objects that have a context-sensitive voice-based interface.
  • FIG. 5 illustrates an exemplary Dialog Definition File (DDF) 500 which represents information necessary to associate the speech recognition and natural language processing to an internet object, such as a text or graphics file or, in the preferred embodiment, a web-page or help file.
  • the Dialog Definition File 500 consists of an object table 510
  • the DDF may also contain additional context-specific grammar files 214 and additional entries for the natural language processing (NLP) database 218, as illustrated in FIG. 5.
  • the preferred embodiment of the DDF 500 includes an object table 510, a context-specific grammar file 214, a context-specific dictation model 217, and a file containing entries to the natural language processing database 218.
  • the object table 510 is a memory structure, such as a memory tree, chain or table, which associates an address of a resource with various actions, grammars, or entries in the NLP database 218.
  • FIG. 6 illustrates a memory table which may contain entry columns for: an object 520, a Text-to-Speech (TTS) flag 522, a text speech 524, a use grammar flag 526, an append grammar flag 528, an "is yes/no?" flag 530, and "do yes" 532 and "do no" 534 actions.
  • Each row in the table 540A-540n would represent the grammar and speech related to an individual object.
  • the exemplary embodiment of the invention would refer to objects 520 through a Uniform Resource Locator (URL).
  • a URL is a standard method of specifying the address of any resource on the Internet that is part of the World-Wide-Web.
  • URLs can specify information in a large variety of object formats, including hypertext, graphical, database and other files, in addition to a number of object devices and communication protocols.
  • URLs and other methods of specifying objects can be used.
  • the Text-to-Speech (TTS) flag 522 indicates whether an initial statement should be voiced over speaker 112 when the corresponding object is transferred. For example, when transferring the web page listed in the object column 520 of row 540A (http://www.conversit.com), the TTS flag 522 is marked, indicating the text speech 524, "Hello, welcome to...," is to be voiced over speaker 112.
  • the next three flags relate to the use of grammars associated with this object.
  • the affirmative marking of the "use grammar" 526 or "append grammar" 528 flags indicates the presence of a context-specific grammar file 214 related to the indicated object.
  • the marking of the "use grammar" flag 526 indicates that the new context-specific grammar file 214 replaces the existing context-specific grammar file, and the existing file is disabled.
  • the "append grammar" flag 528 indicates that the new context-specific grammar file should be enabled concurrently with the existing context-specific grammar file.
  • the remaining column entries relate to a "yes/no" grammar structure. If the "Is yes/no?" flag 530 is marked, then a standard "yes/no" grammar is enabled. When a standard "yes/no" grammar is enabled, affirmative commands spoken to the computer result in the computer executing the command indicated in the "Do Yes" entry 532. Similarly, a negative command spoken to the computer results in the computer executing the command indicated in the "Do No” entry 534.
  • the entries in the "Do Yes" 532 and "Do No" 534 columns may either be commands or pointers to commands embedded in the NLP Database 218. For example, as shown in row 540B, the "Is Yes/No?" flag is marked.
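Acting on one row of the object table might be sketched as below; the dictionary keys mirror the columns of FIG. 6, while speak(), enable_grammar(), and run_command() are hypothetical stand-ins for the application interface.

```python
def apply_object_entry(row, speak, enable_grammar, run_command):
    if row.get("tts_flag"):
        speak(row["text_speech"])       # e.g. "Hello, welcome to ..."
    if row.get("use_grammar"):          # replace the existing grammar
        enable_grammar(row["grammar"], replace_existing=True)
    elif row.get("append_grammar"):     # enable alongside the existing grammar
        enable_grammar(row["grammar"], replace_existing=False)
    if row.get("is_yes_no"):
        # Standard yes/no grammar: route affirmative and negative utterances
        # to the "Do Yes" and "Do No" commands (or NLP database pointers).
        return {"yes": lambda: run_command(row["do_yes"]),
                "no": lambda: run_command(row["do_no"])}
    return None
```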
  • In FIG. 7A, a method and system of providing speech and voice commands to objects, such as a computer reading a help file or browsing the World-Wide-Web, is illustrated.
  • the method of FIGS. 7A-7C may be used in conjunction with the method of FIGS 3A-3D and FIG. 10.
  • an object location is provided to a help file reader or World-Wide- Web browser.
  • a help file reader/browser is a program used to examine hypertext documents that are written to help users accomplish tasks or solve problems, and is well known in the art.
  • the web browser is a program used to navigate through the Internet, and is well known in the art.
  • the step, at block 602 of providing an object location to the browser can be as simple as a user clicking on a program "help" menu item, manually typing in a URL, or having a user select a "link" at a chosen web-site. It also may be the result of a voiced command as described earlier with reference to the action associated with each entry in the NLP database 218.
  • Given the object location, the computer must decide whether it can resolve the object location specified, at block 604. This resolution process is well known in the art. If the computer is unable to resolve the object location or internet address, an error message is displayed in the browser window, at block 605, and the system is returned to its initial starting state 600. If the object location or internet address is resolved, the computer retrieves the object at block 606. For a networked object, for example, a web browser sends the web-site a request for the web page, at block 606. For a help file application, the help reader reads the help file off of storage media 108, at block 606.
  • the computer 100 determines whether the DDF file 500 corresponding to the object is already present on the computer 100. If the DDF file is present, the flow proceeds to FIG. 7C; if not, the flow proceeds to FIG. 7B.
  • the computer examines whether the DDF file 500 location is encoded within the object.
  • the DDF file location could be encoded within web page HyperText Markup Language (HTML) as a URL.
  • the location's internet address is resolved, at block 616, and the computer requests transfer of the DDF file 500, at block 626.
  • An equivalent encoding scheme could be used within help file hypertext.
  • Block 618 determines whether the DDF file is located at the web-site. At this step, the computer sends a query to the web-site inquiring about the presence of the DDF file 500. If the DDF file 500 is present at the web-site, the computer requests transfer of the DDF file 500, at block 626.
  • the computer queries the centralized location about the presence of a DDF file for the web-site, at block 620. If the DDF file is present at the centralized location, the computer requests transfer of the DDF file, at block 626. If the DDF file 500 cannot be found, the existing components of any present DDF file, such as the object table 510, context-specific dictation model 217, NLP database 218 associated with the object, and context-specific grammar 214 for any previously-viewed object, are deactivated in block 622. Furthermore, the object is treated as a non-voice-activated object, and only standard grammar files are used, at block 624. Standard grammar files are the grammar files existing on the system excluding any context-specific grammar files associated with the object.
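The three-step DDF search of FIG. 7B can be summarized in a short sketch; the probe functions are assumptions representing the network requests described above.

```python
def locate_ddf(object_url, embedded_ddf_url, query_site, query_central):
    # 1. A DDF location encoded within the object itself (e.g. in its HTML).
    if embedded_ddf_url:
        return embedded_ddf_url
    # 2. Ask the object's own web-site whether it hosts a DDF (block 618).
    site_ddf = query_site(object_url)
    if site_ddf:
        return site_ddf
    # 3. Ask the centralized DDF repository (block 620).
    central_ddf = query_central(object_url)
    if central_ddf:
        return central_ddf
    # Not found: deactivate any DDF components and fall back to the
    # standard grammar files (blocks 622 and 624).
    return None
```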
  • any existing components of any present DDF file 500 are deactivated, at block 622, and the web-site is treated as a non-voice-activated object, and only standard grammar files are used, at block 624.
  • If the DDF file 500 is requested at block 626 and its transfer is successful at block 628, it replaces any prior DDF file, at block 630. Any components of the DDF file 500, such as the object table 510, context-specific grammar files 214, context-specific dictation models 217, and NLP database 218, are extracted at block 632. A similar technique may be used for obtaining the software necessary to implement the method illustrated in FIGS. 3A-3D, comprising the functional elements of FIG. 2.
  • the object table 510 is read into memory by the computer in block 634. If the object is present in the site object table 510, as determined by block 636, it will be represented by a row 540A-540n of the table, as shown in FIG. 6. Each row of the object table represents the speech-interactions available to a user for that particular object. If no row corresponding to the object exists, then no speech interaction exists for the web page, and processing ends.
  • the computer checks if the TTS flag 522 is marked, to determine whether a text speech 524 is associated with the web-page, at block 638. If there is a text speech 524, it is voiced at block 640, and flow continues. If there is a context-specific grammar file associated with the object, as determined by decision block 642, it is enabled at block 644, and then the NLP database 218 is enabled at block 646. If no context-specific grammar file is associated with the object, only the NLP database 218 is enabled at block 646. Once the NLP database is enabled at block 646, the system behaves as in FIGS. 3A-3C, as described above.
  • the present invention provides a method and system for an object interactive user-interface for a computer.
  • the present invention decreases speech recognition time and increases the user's ability to communicate with local and networked objects, such as help files or web-pages, in a conversational style.
  • Through adaptive updating of the various grammars and the NLP database, the present invention further increases interactive efficiency.
  • the adaptive updates can be incorporated into user voice profiles that can be stored locally and remotely, to allow users access to the user voice profiles at various locations.

Abstract

A system and method for interacting with objects, via a computer using utterances, speech processing and natural language processing. A Dialog Definition File (DDF) relates networked objects to the speech processor; it encompasses a memory structure associating the objects with grammar files, a context-specific dictation model, and entries for the natural language processor. The speech processor searches a first grammar file for a matching phrase for the utterance, and searches a second grammar file for the matching phrase if the matching phrase is not found in the first grammar file. The system also includes a natural language processor for searching a database for a matching entry for the matching phrase, and an application interface for performing an action associated with the matching entry if the matching entry is found in the database. The system utilizes context-specific grammars and dictation models, thereby enhancing speech recognition and natural language processing efficiency. Additionally, for each user the system adaptively and interactively "learns" words and phrases, and their associated meanings, storing the adaptive updates in user voice profiles. Because the user voice profiles can be stored locally or remotely, users can access the adaptively learned words and phrases at various locations.

Description

INTERACTIVE USER INTERFACE USING SPEECH RECOGNITION AND NATURAL LANGUAGE PROCESSING
BACKGROUND OF THE INVENTION
I. Field of the Invention
The present invention relates to speech recognition for an object-based computer user interface. More specifically, the present invention relates to a novel method and system for user interaction with a computer using speech recognition and natural language processing. This application is a continuation-in-part of U.S. Patent Application Serial No. 09/166,198, entitled "Network Interactive User Interface Using Speech Recognition and Natural Language Processing," filed October 5, 1998.
II. Description of the Related Art
As computers have become more prevalent it has become clear that many people have great difficulty understanding and communicating with computers. A user must often learn archaic commands and non-intuitive procedures in order to operate the computer. For example, most personal computers use windows-based operating systems that are largely menu-driven. This requires that the user learn what menu commands or sequence of commands produce the desired results. Furthermore, traditional interaction with a computer is often slowed by manual input devices such as keyboards or mice. Many computer users are not fast typists. As a result, much time is spent communicating commands and words to the computer through these manual input devices. It is becoming clear that an easier, faster and more intuitive method of communicating with computers and networked objects, such as web-sites, is needed.

One proposed method of computer interaction is speech recognition. Speech recognition involves software and hardware that act together to audibly detect human speech and translate the detected speech into a string of words. As is known in the art, speech recognition works by breaking down sounds the hardware detects into smaller non-divisible sounds called phonemes. Phonemes are distinct units of sound. For example, the word "those" is made up of three phonemes; the first is the "th" sound, the second is the "o" sound, and the third is the "s" sound. The speech recognition software attempts to match the detected phonemes with known words from a stored dictionary. An example of a speech recognition system is given in U.S. Patent No. 4,783,803, entitled "SPEECH RECOGNITION APPARATUS AND METHOD", issued November 8, 1988, assigned to Dragon Systems, Inc., and incorporated herein by reference. Presently, there are many commercially available speech recognition software packages available from such companies as Dragon Systems, Inc. and International Business Machines, Inc.
One limitation of these speech recognition software packages or systems is that they typically only perform command and control or dictation functions. Thus, the user is still required to learn a vocabulary of commands in order to operate the computer.
A proposed enhancement to these speech recognition systems is to process the detected words using a natural language processing system. Natural language processing generally involves determining a conceptual "meaning" (e.g., what meaning the speaker intended to convey) of the detected words by analyzing their grammatical relationship and relative context. For example, U.S. Patent No. 4,887,212, entitled "PARSER FOR NATURAL LANGUAGE TEXT", issued December 12, 1989, assigned to International Business Machines Corporation, and incorporated by reference herein, teaches a method of parsing an input stream of words by using word isolation, morphological analysis, dictionary look-up and grammar analysis.
Natural language processing used in concert with speech recognition provides a powerful tool for operating a computer using spoken words rather than manual input such as a keyboard or mouse. However, one drawback of a conventional natural language processing system is that it may fail to determine the correct "meaning" of the words detected by the speech recognition system. In such a case, the user is typically required to recompose or restate the phrase, with the hope that the natural language processing system will determine the correct "meaning" on subsequent attempts. Clearly, this may lead to substantial delays as the user is required to restate the entire sentence or command. Another drawback of conventional systems is that the processing time required for the speech recognition can be prohibitively long. This is primarily due to the finite speed of the processing resources as compared with the large amount of information to be processed. For example, in many conventional speech recognition programs, the time required to recognize the utterance is long due to the size of the dictionary file being searched. An additional drawback of conventional speech recognition and natural language processing systems is that they are not interactive, and thus are unable to cope with new situations. When a computer system encounters unknown or new networked objects, new relationships between the computer and the objects are formed. Conventional speech recognition and natural language processing systems are unable to cope with the situations that result from the new relationships posed by previously unknown networked objects. As a result, a conversational-style interaction with the computer is not possible. The user is required to communicate complete concepts to the computer. The user is not able to speak in sentence fragments because the meaning of these sentence fragments (which is dependent on the meaning of previous utterances) will be lost.
Another drawback of conventional speech recognition and natural language processing systems is that once a user successfully "trains" a computer system to recognize the user's speech and voice commands, the user cannot easily move to another computer without having to undergo the process of training the new computer. As a result, changing computer workstations or locations wastes time, as the user must re-train the new computer to recognize the user's speech habits and voice commands.
What is needed is an interactive user interface for a computer that utilizes speech recognition and natural language processing which avoids the drawbacks mentioned above.
SUMMARY OF THE INVENTION

The present invention is a novel and improved system and method for interacting with a computer using utterances, speech processing and natural language processing. Generally, the system comprises a speech processor for searching a first grammar file for a matching phrase for the utterance, and for searching a second grammar file for the matching phrase if the matching phrase is not found in the first grammar file. The system also includes a natural language processor for searching a database for a matching entry for the matching phrase; and an application interface for performing an action associated with the matching entry if the matching entry is found in the database. In the preferred embodiment, the natural language processor updates a user voice profile and at least one of the database, the first grammar file and the second grammar file with the matching phrase if the matching entry is not found in the database.
The first grammar file is a context-specific grammar file. A context-specific grammar file is one that contains words and phrases that are highly relevant to a specific subject. The second grammar file is a general grammar file. A general grammar file is one that contains words and phrases which do not need to be interpreted in light of a context. That is to say, the words and phrases in the general grammar file do not belong to any parent context. By searching the context-specific grammar file before searching the general grammar file, the present invention allows the user to communicate with the computer using a more conversational style, wherein the words spoken, if found in the context specific grammar file, are interpreted in light of the subject matter most recently discussed.
In a further aspect of the present invention, the speech processor searches a dictation grammar for the matching phrase if the matching phrase is not found in the general grammar file. The dictation grammar is a large vocabulary of general words and phrases. By searching the context-specific and general grammars first, it is expected that the speech recognition time will be greatly reduced due to the context-specific and general grammars being physically smaller files than the dictation grammar. In another aspect of the present invention, the speech processor searches a context-specific dictation model for the matching phrase if the matching phrase is not found within the dictation grammar. A context-specific dictation model is a model that indicates the relationship between words in a vocabulary. The speech processor uses this model to help decode the meaning of related words in an utterance. In another aspect of the present invention, the natural language processor replaces at least one word in the matching phrase prior to searching the database. This may be accomplished by a variable replacer in the natural language processor for substituting a wildcard for the at least one word in the matching phrase. By substituting wildcards for certain words (called "word-variables") in the phrase, the number of entries in the database can be significantly reduced. Additionally, a pronoun substituter in the natural language processor may substitute a proper name for pronouns in the matching phrase, allowing user-specific facts to be stored in the database.
In another aspect of the present invention, a string formatter text-formats the matching phrase prior to searching the database. Also, a word weighter weights individual words in the matching phrase according to a relative significance of the individual words prior to searching the database. These steps allow for faster, more accurate searching of the database.
A search engine in the natural language processor generates a confidence value for the matching entry. The natural language processor compares the confidence value with a threshold value. A boolean tester determines whether a required number of words from the matching phrase are present in the matching entry. This boolean testing serves as a verification of the results returned by the search engine.
In order to clear up ambiguities, the natural language processor prompts the user whether the matching entry is a correct interpretation of the utterance if the required number of words from the matching phrase are not present in the matching entry. The natural language processor also prompts the user for additional information if the matching entry is not a correct interpretation of the utterance. At least one of the database, the first grammar file and the second grammar file are updated with the additional information. In this way, the present invention adaptively "learns" the meaning of additional utterances, thereby enhancing the efficiency of the user interface.
The speech processor will enable and search a context-specific grammar associated with the matching entry for a subsequent matching phrase for a subsequent utterance. This ensures that the most relevant words and phrases will be searched first, thereby decreasing speech recognition times.
Generally, the invention includes a method for updating a computer for voice interaction with an object, such as a help file or web page. Initially, an object table, which associates the object with the voice interaction system, is transferred to the computer over a network. The location of the object table can be imbedded within the object, at a specific internet web-site, or at a consolidated location that stores object tables for multiple objects. The object table is searched for an entry matching the object. The entry matching the object may result in an action being performed, such as text speech being voiced through a speaker, a context-specific grammar file being used, or a natural language processor database being used. The object table may be part of a dialog definition file. Dialog definition files may also include a context-specific grammar, entries for a natural language processor database, a context-specific dictation model, or any combination thereof.
In another aspect of the present invention, a network interface transfers a dialog definition file from over the network. The dialog definition file contains an object table. A data processor searches the object table for a table entry that matches the object. Once this matching table entry is found, an application interface performs an action specified by the matching entry.
In another aspect of the present invention, the dialog definition file associated with a networked object is located, and then read. The dialog definition file could be read from a variety of locations, such as a web-site, storage media, or a location that stores dialog definition files for multiple objects. An object table, contained within the dialog definition file, is searched to find a table entry matching the object. The matching entry defines an action associated with the object, and the action is then performed by the system. In addition to an object table, the dialog definition file may contain a context-specific grammar, entries for a natural language processor database, a context-specific dictation model, or any combination thereof.

BRIEF DESCRIPTION OF THE DRAWINGS
The features, objects and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify correspondingly throughout and wherein:
FIG. 1 is a functional block diagram of an exemplary computer system for use with the present invention;
FIG. 2 is an expanded functional block diagram of the CPU 102 and storage medium 108 of the computer system of FIG. 1 of the present invention;

FIGS. 3A-3D are a flowchart of the method of providing interactive speech recognition and natural language processing to a computer;
FIG. 4 is a diagram of selected columns of an exemplary natural language processing (NLP) database of the present invention;
FIG. 5 is a diagram of an exemplary Dialog Definition File (DDF) according to the present invention;
FIG. 6 is a diagram of selected columns of an exemplary object table of the present invention;
FIGS. 7A-7C are a flowchart of the method of the present invention, illustrating the linking of interactive speech recognition and natural language processing to a networked object, such as a web-page;
FIG. 8 is a diagram depicting a computer system connecting to other computers, storage media, and web-sites via the Internet; and
FIG. 9 is a diagram of an exemplary user voice profile according to the present invention;

FIG. 10 is a flowchart of the method of the present invention, illustrating the retrieval and enabling of an individual's user voice profile during login at a computer workstation.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention will now be disclosed with reference to a functional block diagram of an exemplary computer system 100 of FIG. 1. In FIG. 1, computer system 100 includes a central processing unit (CPU) 102. The CPU 102 may be any general purpose microprocessor or microcontroller as is known in the art, appropriately programmed to perform the method described herein with reference to FIGS. 3A-3D. The software for programming the CPU can be found at storage medium 108 or alternatively from another location across a computer network. For example, CPU 102 may be a conventional microprocessor such as the Pentium II processor manufactured by Intel Corporation or the like.
CPU 102 communicates with a plurality of peripheral equipment, including a display 104, manual input 106, storage medium 108, microphone 110, speaker 112, data input port 114 and network interface 116. Display 104 may be a visual display such as a CRT, LCD screen, touch-sensitive screen, or other monitors as are known in the art for visually displaying images and text to a user. Manual input 106 may be a conventional keyboard, keypad, mouse, trackball, or other input device as is known in the art for the manual input of data. Storage medium 108 may be a conventional read/write memory such as a magnetic disk drive, floppy disk drive, CD-ROM drive, silicon memory or other memory device as is known in the art for storing and retrieving data. Significantly, storage medium 108 may be remotely located from CPU 102, and be connected to CPU 102 via a network such as a local area network (LAN), a wide area network (WAN), or the Internet. Microphone 110 may be any suitable microphone as is known in the art for providing audio signals to CPU 102. Speaker 112 may be any suitable speaker as is known in the art for reproducing audio signals from CPU 102. It is understood that microphone 110 and speaker 112 may include appropriate digital-to-analog and analog-to-digital conversion circuitry as appropriate. Data input port 114 may be any data port as is known in the art for interfacing with an external accessory using a data protocol such as RS-232, Universal Serial Bus, or the like. Network interface 116 may be any interface as known in the art for communicating or transferring files across a computer network; examples of such networks include TCP/IP, Ethernet, and token ring networks. In addition, on some systems, a network interface 116 may consist of a modem connected to the data input port 114.
Thus, FIG. 1 illustrates the functional elements of a computer system 100. Each of the elements of computer system 100 may be suitable off-the-shelf components as described above. The present invention provides a method and system for human interaction with the computer system 100 using speech.
As shown in FIG. 8, the computer system 100 may be connected to the Internet 700, a collection of computer networks. To connect to the Internet 700, computer system 100 may use a network interface 116, a modem connected to the data input port 114, or any other method known in the art. Web-sites 710, other computers 720, and storage media 108 may also be connected to the Internet through such methods known in the art.
Turning now to FIG. 2, FIG. 2 illustrates an expanded functional block diagram of CPU 102 and storage medium 108. It is understood that the functional elements of FIG. 2 may be embodied entirely in software or hardware or both. In the case of a software embodiment, the software may be found at storage medium 108 or at an alternate location across a computer network. CPU 102 includes speech recognition processor 200, data processor 201, natural language processor 202, and application interface 220. The data processor 201 interfaces with the display 104, storage medium 108, microphone 110, speaker 112, data input port 114, and network interface 116. The data processor 201 allows the CPU to locate and read data from these sources. Natural language processor 202 further includes variable replacer 204, string formatter 206, word weighter 208, boolean tester 210, pronoun replacer 211, and search engine 213. Storage medium 108 includes a plurality of context-specific grammar files 212, general grammar file 214, dictation grammar 216, context-specific dictation model 217, and natural language processor (NLP) database 218. In the preferred embodiment, the grammar files 212, 214, and 216 are Backus-Naur Form (BNF) files, which describe the structure of the language spoken by the user. BNF files are well known in the art for describing the structure of language, and details of BNF files will therefore not be discussed herein. One advantage of BNF files is that hierarchical tree-like structures may be used to describe phrases or word sequences, without the need to explicitly recite all combinations of these word sequences. Thus, the use of BNF files in the preferred embodiment minimizes the physical sizes of the files 212, 214, and 216 in the storage medium 108, increasing the speed at which these files can be enabled and searched as described below. However, in alternate embodiments, other file structures are used.
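For purposes of illustration only, a fragment of such a grammar file might take the following form; the rule names and word choices here are invented for this example and do not appear in the disclosed files:

    <command>   ::= <show-verb> <topic>
    <show-verb> ::= "show" | "display" | "open"
    <topic>     ::= "the news" | "the weather" | "my stocks"

Three short rules of this kind cover all nine verb-topic combinations, which illustrates why a hierarchical BNF file remains physically small while still describing every expected word sequence.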
The context-specific dictation model 217 is an optional file that contains specific models to improve dictation accuracy. These models enable users to specify word orders and word models. The models accomplish this by describing words and their relationship to other words, thus determining word meaning by contextual interpretation in a specific field or topic. Take, for example, the phrase "650 megahertz microprocessor computer." A context-specific dictation model 217 for computers may indicate the likelihood of the word "microprocessor" appearing near "computer," and that a number, such as "650," is likely to be found near the word "megahertz." By interpreting the context of the words via a context-specific dictation model 217, a speech recognition processor would analyze the phrase, interpret a single object, i.e. the computer, and recognize that "650 megahertz microprocessor" comprises traits describing the type of computer.
Topics for context-specific dictation models 217 vary widely, and may include any topic area of interest to a user — both broad and narrow. Broad topics may include: history, law, medicine, science, technology, or computers. Specialized topics, such as a particular field of literature encountered at a book retailer's web-site are also possible. Such a context-specific dictation model 217 may contain text for author and title information, for example.
Finally, the format of the context-specific dictation model 217 depends upon the underlying speech recognition processor 200, and is specific to each type of speech recognition processor 200. The operation and interaction of these functional elements of FIG. 2 will be described with reference to the flowchart of FIGS. 3A-3D. In FIG. 3A, the flow begins at block 300 with the providing of an utterance to speech processor 200. An utterance is a series of sounds having a beginning and an end, and may include one or more spoken words. A microphone 110 that captures spoken words may perform the step of block 300. Alternately, the utterance may be provided to the speech processor 200 over data input port 114, or from storage medium 108. Preferably, the utterance is in a digital format such as the well-known ".wav" audio file format.
The flow proceeds to decision 302 where the speech processor 200 determines whether one of the context-specific grammars 212 has been enabled. If the context-specific grammars 212 are enabled, the context-specific grammars 212 are searched at block 304. In the preferred embodiment, the context-specific grammars 212 are BNF files that contain words and phrases which are related to a parent context. In general, a context is a subject area. For example, in one embodiment of the present invention applicable to personal computers, examples of contexts may be "news", or "weather", or "stocks". In such a case, the context-specific grammars 212 would each contain commands, control words, descriptors, qualifiers, or parameters that correspond to a different one of these contexts. The use of contexts provides a hierarchical structure for types of information. Contexts and their use will be described further below with reference to the NLP database 218.
If a context-specific grammar 212 has been enabled, the context-specific grammar 212 is searched for a match to the utterance provided at block 300. However, if a context-specific grammar 212 has not been enabled, the flow proceeds to block 308 where the general grammar 214 is enabled. In the preferred embodiment, the general grammar 214 is a BNF file which contains words and phrases which do not, themselves, belong to a parent context, but may have an associated context for which a context-specific grammar file 212 exists. In other words, the words and phrases in the general grammar 214 may be at the root of the hierarchical context structure. For example, in one embodiment applicable to personal computers, the general grammar 214 would contain commands and control phrases.
In block 310, the general grammar 214 is searched for a matching word or phrase for the utterance provided at block 300. A decision is made, depending on whether the match is found, at block 312. If a match is not found, then the dictation grammar 216 is enabled at block 314. In the preferred embodiment, the dictation grammar 216 is a BNF file that contains a list of words that do not, themselves, have either a parent context or an associated context. For example, in one embodiment applicable to a personal computer, the dictation grammar 216 contains a relatively large list of general words similar to a general dictionary.
In block 316 the dictation grammar is searched for matching words for each word of the utterance provided at block 300. At decision block 318, if no matching words are found, any relevant context-specific dictation model 217 is enabled at block 317.
At decision block 319, if no matching words are found, a visual error message is optionally displayed at the display 104 or an audible error message is optionally reproduced through speaker 112, at block 320. The process ends until another utterance is provided to the speech processor 200 at block 300. Thus, as can be seen from the above description, when an utterance is provided to the speech processor 200, the enabled context-specific grammar 212, if any, is first searched. If there are no matches in the enabled context-specific grammar 212, then the general grammar 214 is enabled and searched. If there are no matches in the general grammar 214, then the dictation grammar 216 is enabled and searched. Finally, if there are no matches in the dictation grammar 216, a context-specific dictation model 217 is enabled at block 317 and used to interpret the utterance.
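The tiered search order just described can be summarized in a short sketch. The following Python fragment is a simplified illustration under assumed data structures (the grammars as sets of phrases, the dictation grammar as a set of words, and the dictation model as a callable); it is not the disclosed implementation:

    # Simplified sketch of the tiered search of FIG. 3A (data structures assumed).
    def find_matching_phrase(utterance, context_grammar, general_grammar,
                             dictation_grammar, dictation_model):
        # Block 304: search any enabled context-specific grammar 212 first,
        # comparing the utterance as a whole.
        if context_grammar is not None and utterance in context_grammar:
            return utterance
        # Blocks 308-310: enable and search the general grammar 214.
        if utterance in general_grammar:
            return utterance
        # Blocks 314-316: search the dictation grammar 216 one word at a time.
        if all(word in dictation_grammar for word in utterance.split()):
            return utterance
        # Blocks 317-319: fall back to the context-specific dictation model 217.
        return dictation_model(utterance)

    # Toy usage; a return value of None signals the error path of block 320.
    match = find_matching_phrase(
        "8 o'clock",
        context_grammar={"8 o'clock", "what movies are playing"},
        general_grammar={"show me the news"},
        dictation_grammar={"what", "time", "is", "it"},
        dictation_model=lambda u: None,
    )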
In the preferred embodiment, when the speech recognition processor 200 is searching either the context-specific grammar 212 or the general grammar 214, it is said to be in the "command and control" mode. In this mode, the speech recognition processor 200 compares the entire utterance as a whole to the entries in the grammar. By contrast, when the speech recognition processor 200 is searching the dictation grammar, it is said to be in the "dictation" mode. In this mode, the speech recognition processor 200 compares the utterance to the entries in the dictation grammar 216 one word at a time. Finally, when the speech recognition processor 200 is matching the utterance with a context-specific dictation model 217, it is said to be in "model matching" mode. It is expected that searching for a match for an entire utterance in the command and control mode will generally be faster than searching for one word at a time in dictation or model matching modes. It is further expected that any individual context-specific grammar 212 will be smaller in size (i.e., fewer total words and phrases) than the general grammar 214, which in turn will be smaller in size than the dictation grammar 216. Thus, by searching any enabled context-specific grammar 212 first, it is likely that a match, if any, will be found more quickly, due at least in part to the smaller file size. Likewise, by searching the general grammar 214 before the dictation grammar 216, it is likely that a match, if any, will be found more quickly.
Additionally, because the present invention can adaptively add to both the context-specific grammar 212 and the general grammar 214, as will be explained further below, these grammars will come to contain the most common utterances. As such, it is expected that a match is more likely to be found quickly in the context-specific grammar 212 or the general grammar 214 than in the dictation grammar 216.
Finally, as will be explained further below, the words and phrases in the enabled context- specific grammar 212 are more likely to be uttered by the user because they are words that are highly relevant to the subject matter about which the user was most recently speaking. This also allows the user to speak in a more conversational style, using sentence fragments, with the meaning of his words being interpreted according to the enabled context-specific grammar 212. By searching in the above-described sequence, the present invention may search more efficiently than if the searching were to occur one entry at a time in a single, large list of all expected words and phrases.
Referring back to decision 306, if a match is found in the context-specific grammar 212, then there are three possible next steps shown in FIG. 3A. For each matching entry in the enabled context-specific grammar 212, there may be an associated action to be taken by the speech recognition processor 200. Block 322 shows that one action may be to direct application interface 220 to take some action with respect to a separate software application or entity. For example, application interface 220 may use the Speech Application Programming Interface (SAPI) standard by Microsoft to communicate with an external application. The external application may be directed, for example, to access a particular Internet web site URL or to speak a particular phrase by converting text to speech. Other actions may be taken as will be discussed further below with reference to the NLP database 218 of FIG. 4.
Block 324 shows that another action may be to access a row in the natural language processing (NLP) database 218 directly, thereby bypassing the natural language processing steps described further below. Block 326 shows that another action may be to prepend a word or phrase for the enabled context to the matching word or phrase found in the context-specific grammar 212 at decision 306. For example, if the enabled context were "movies" and the matching utterance were "8 o'clock," the word "movies" would be prepended to the phrase "8 o'clock" to form the phrase "movies at 8 o'clock."
Likewise, if a match is found in the general grammar 214, then the flow may proceed to block 322 where the application interface 220 is directed to take an action as described above, or to block 324 where a row in the NLP database is directly accessed. However, if a match is found in the general grammar 214, no prepending of a context occurs because, as stated above, the entries in the general grammar 214 do not, themselves, have a parent context.
Alternatively, manually entered words may be captured, at block 301, and input into the natural language processor. Finally, with reference to FIG. 3A, words may be entered manually via manual input 106. In this case, no speech recognition is required, and yet natural language processing of the entered words is still desired. Thus, the flow proceeds to FIG. 3B.
In FIG. 3B, at block 328, the natural language processor 202 formats the phrase for natural language processing analysis. This formatting is accomplished by string formatter 206 and may include such text processing as removing duplicate spaces between words, making all letters lower case (or upper case), expanding contractions (e.g., changing "it's" to "it is"), and the like. The purpose of this formatting step is to prepare the phrase for parsing.
The flow continues to block 330 where certain "word-variables" are replaced with an associated wildcard function by variable replacer 204 in preparation for accessing the NLP database 218. As used herein, the term "word-variables" refers to words or phrases that represent amounts, dates, times, currencies, and the like. For example, in one embodiment the phrase "what movies are playing at 8 o'clock" would be transformed at block 330 to "what movies are playing at $time" where "$time" is a wildcard function used to represent any time value. As another example, in one embodiment the phrase "sell IBM stock at 100 dollars" would be transformed at block 330 to "sell IBM stock at $dollars" where "$dollars" is a wildcard function used to represent any dollar value. This step may be accomplished by a simple loop that searches the phrase for key tokens such as the words "dollar" or "o'clock" and replaces the word-variables with a specified wildcard function. In order to keep track of the location in the phrase where the substitution was made, an array may be used. This allows re-substitution of the original word-variable back into the phrase at the same position after the NLP database 218 has been searched.
The purpose of replacing word-variables with an associated wildcard function at block 330 is to reduce the number of entries that must be present in the NLP database 218. For example, the NLP database 218 would only contain the phrase "what movies are playing at $time" rather than a separate entry for 8 o'clock, 9 o'clock, 10 o'clock, and so on. The NLP database 218 will be described further below.
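A rough sketch of such a substitution loop appears below; the key-token list, wildcard names, and helper name are illustrative assumptions rather than the disclosed code:

    # Illustrative word-variable substitution with position tracking.
    def replace_word_variables(phrase):
        wildcard_for = {"o'clock": "$time", "dollars": "$dollars"}
        words = phrase.split()
        out, saved = [], []          # saved holds (position, original text) pairs
        i = 0
        while i < len(words):
            nxt = words[i + 1] if i + 1 < len(words) else None
            if nxt in wildcard_for and words[i].isdigit():
                saved.append((len(out), words[i] + " " + nxt))
                out.append(wildcard_for[nxt])   # e.g. "8 o'clock" -> "$time"
                i += 2
            else:
                out.append(words[i])
                i += 1
        return " ".join(out), saved

    # replace_word_variables("what movies are playing at 8 o'clock")
    # returns ("what movies are playing at $time", [(5, "8 o'clock")]),
    # allowing "8 o'clock" to be re-substituted at position 5 after the search.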
At block 332, pronouns in the phrase are replaced with proper names by pronoun replacer 211. For example, in one embodiment the pronouns "I," "my," or "mine" would be replaced with the speaker's name. The purpose of this step is to allow user-specific facts to be stored and accessed in the NLP database 218. For example, the sentence "who are my children" would be transformed into "who are Dean's children" where "Dean" is the speaker's proper name. Again, this step may be performed in a simple loop that searches the phrase for pronouns, and replaces the pronouns found with an appropriate proper name. In order to keep track of the locations in the phrase where a substitution was made, an array may be used.
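The pronoun substitution loop admits a similarly brief sketch; the pronoun map below is an assumption for illustration:

    # Illustrative pronoun replacement with position tracking.
    def replace_pronouns(phrase, speaker_name):
        replacement_for = {"i": speaker_name,
                           "my": speaker_name + "'s",
                           "mine": speaker_name + "'s"}
        words = phrase.split()
        positions = []               # remember where substitutions occurred
        for index, word in enumerate(words):
            if word.lower() in replacement_for:
                positions.append((index, word))
                words[index] = replacement_for[word.lower()]
        return " ".join(words), positions

    # replace_pronouns("who are my children", "Dean")
    # returns ("who are Dean's children", [(2, "my")]).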
In block 334, the individual words in the phrase are weighted according to their relative "importance" or "significance" to the overall meaning of the phrase by word weighter 208. For example, in one embodiment there are three weighting factors assigned. The lowest weighting factor is assigned to words such as "a," "an," "the," and other articles. The highest weighting factor is given to words that are likely to have a significant relation to the meaning of the phrase. For example, these may include all verbs, nouns, adjectives, and proper names in the NLP database 218. A medium weighting factor is given to all other words in the phrase. The purpose of this weighting is to allow for more powerful searching of the NLP database 218. An example of selected columns of the NLP database 218 of one embodiment is shown in FIG. 4. The NLP database 218 comprises a plurality of columns 400-410, and a plurality of rows 412A-412N. In column 400, the entries represent phrases that are "known" to the NLP database. In column 402, a number of required words for each entry in column 400 is shown. In column 404, an associated context or subcontext for each entry in column 400 is shown. In columns 408 and 410, one or more associated actions are shown for each entry in column 400. It should be noted that the NLP database 218 shown in FIG. 4 is merely a simplified example for the purpose of teaching the present invention. Other embodiments may have more or fewer columns with different entries.
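A minimal sketch of the three-tier weighting, together with a toy row mirroring the columns of FIG. 4, might look as follows; the numeric weights and field names are illustrative assumptions:

    # Illustrative three-tier word weighting (the weight values are arbitrary).
    LOW, MEDIUM, HIGH = 1, 2, 4
    ARTICLES = {"a", "an", "the"}

    def weight_words(words, significant_words):
        weights = {}
        for word in words:
            if word in ARTICLES:
                weights[word] = LOW            # articles carry little meaning
            elif word in significant_words:    # verbs, nouns, adjectives, proper names
                weights[word] = HIGH
            else:
                weights[word] = MEDIUM
        return weights

    # A toy NLP database row following the columns of FIG. 4.
    nlp_row = {
        "phrase": "what movies are playing at $time",   # column 400
        "required_words": 3,                            # column 402
        "context": "movies",                            # column 404
        "actions": ["access movie web site"],           # columns 408-410
    }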
Referring back to FIG. 3B, at block 336, the NLP database 218 is searched for possible matches to the phrase, based on whether the entry in column 400 of the NLP database 218 contains any of the words in the phrase (or their synonyms), and the relative weights of those words. At block 338, a confidence value is generated for each of the possible matching entries based on the number of occurrences of each of the words in the phrase and their relative weights. Weighted word searching of a database is well known in the art and may be performed by commercially available search engines such as the product "dtsearch" by DT Software, Inc. of Arlington, Virginia. Likewise, searching using synonyms is well known in the art and may be accomplished using publicly available tools such as "WordNet," developed by the Cognitive Science Laboratory of Princeton University in Princeton, New Jersey. The search engine may be an integral part of the natural language processor 202.
At decision 340, the natural language processor 202 determines whether any of the possible matching entries has a confidence value greater than or equal to some predetermined minimum threshold, T. The threshold T represents the lowest acceptable confidence value for which a decision can be made as to whether the phrase matched any of the entries in the NLP database 218. If there is no possible matching entry with a confidence value greater than or equal to T, then the flow proceeds to block 342 where an optional error message is either visually displayed to the user over display 104 or audibly reproduced over speaker 112. In one embodiment, the type of error message, if any, displayed to the user may depend on how many "hits" (i.e., how many matching words from the phrase) were found in the highest-confidence NLP database entry. A different type of error message would be generated if there were zero or one hits than if there were two or more hits.
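In place of a commercial search engine, a much-simplified weighted-overlap confidence score and threshold test can convey the idea; synonym handling is omitted and the scoring formula is an assumption, not the disclosed method:

    # Simplified confidence scoring and threshold test (decision 340).
    def confidence(phrase_words, entry_words, weights):
        hit = sum(weights.get(w, 1) for w in phrase_words if w in entry_words)
        total = sum(weights.get(w, 1) for w in phrase_words)
        return hit / total if total else 0.0

    def best_entry(phrase, rows, weights, threshold=0.5):
        phrase_words = set(phrase.split())
        scored = [(confidence(phrase_words, set(row["phrase"].split()), weights), row)
                  for row in rows]
        score, row = max(scored, key=lambda pair: pair[0], default=(0.0, None))
        return row if score >= threshold else None   # None triggers block 342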
If, however, there is at least one entry in the NLP database 218 for which a confidence value greater than or equal to T exists, then the flow proceeds to block 344 where the "noise" words are discarded from the phrase. The "noise" words include words that do not contribute significantly to the overall meaning of the phrase relative to the other words in the phrase. These may include articles, pronouns, conjunctions, and words of a similar nature. "Non-noise" words would include words that contribute significantly to the overall meaning of the phrase. "Non-noise" words would include verbs, nouns, adjectives, proper names, and words of a similar nature.
The flow proceeds to FIG. 3C where the non-noise word requirement is retrieved from column 402 of the NLP database 218 for the highest-confidence matching entry at block 346. For example, if the highest-confidence matching phrase was the entry in row 412A (e.g., "what movies are playing at $time"), then the number of required non-noise words is 3.
At decision 348, a test is made to determine whether the number of required non-noise words from the phrase is actually present in the highest-confidence entry retrieved from the NLP database 218. This test is a verification of the accuracy of the relevance-style search performed at block 336, it being understood that an entry may generate a confidence value higher than the minimum threshold, T, without being an acceptable match for the phrase.
The nature of the test performed at decision 348 is a boolean "AND" test performed by boolean tester 210. The test determines whether each one of the non-noise words in the phrase (or its synonym) is actually present in the highest-confidence entry. If there are a sufficient number of required words actually present in the highest-confidence entry, then the flow proceeds to block 350, where the natural language processor 202 directs application interface 220 to take an associated action from column 408 or 410. It is understood that additional action columns may also be present.
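Sketched in the same illustrative style, the boolean "AND" verification reduces to a counting test; the function below is an assumption for exposition:

    # Illustrative boolean "AND" verification of decision 348.
    def passes_word_test(non_noise_words, entry_words, required_count):
        hits = sum(1 for word in non_noise_words if word in entry_words)
        return hits >= required_count

    # For row 412A ("what movies are playing at $time", 3 required words):
    # passes_word_test({"movies", "playing", "$time"},
    #                  set("what movies are playing at $time".split()), 3)
    # returns True.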
For example, if the highest confidence entry was the entry in row 412A, and the boolean test of decision 348 determined that there actually were 3 non-noise words from the phrase in the entry in column 400, then the associated action in column 408 (e.g., access movie web site) would be taken. Other entries in the NLP database have other associated actions. For example, if the highest-confidence entry is that in row 412E (e.g., "what time is it"), the associated action may be for natural language processor 202 to direct a text-to-speech application (not shown) to speak the present time to the user through the speaker 112. As another example, if the highest-confidence entry is that in row 412F (e.g., "show me the news"), the first associated action may be to access a predetermined news web site on the Internet, and a second associated action may be to direct an image display application (not shown) to display images associated with the news. Different or additional actions may also be performed. Also, if the highest-confidence entry contains the required number of non-noise words from the phrase as determined at decision 348, the natural language processor 202 instructs the speech recognition processor 200 to enable the context-specific grammar 212 for the associated context of column 404. Thus, for row 412A, the context-specific grammar 212 for the context "movies" would be enabled. Thus, when the next utterance is provided to the speech recognition processor 200 in block 300 of FIG. 3A, it would search the enabled context-specific grammar 212 for "movies" before searching the general grammar 214. As previously stated, enabling the appropriate context-specific grammar 212 greatly increases the likelihood of fast, successful speech recognition, and enhances the user's ability to communicate with the computer in a conversational style.
If, however, back at decision 348, the required number of non-noise words from the phrase is not actually present in the highest-confidence entry retrieved from the NLP database 218, then the flow proceeds to block 354 where the user is prompted over display 104 or speaker 112 whether the highest-confidence entry was meant. For example, if the user uttered "How much is IBM stock selling for today," the highest-confidence entry in the NLP database 218 may be the entry in row 412B. In this case, although the relevance factor may be high, the number of required words (or their synonyms) may not be sufficient. Thus, the user would be prompted at block 354 whether he meant "what is the price of IBM stock on August 28, 1998." The user may respond either affirmatively or negatively. If it is determined at decision 356 that the user has responded affirmatively, then the action(s) associated with the highest-confidence entry are taken at block 350, and the associated context-specific grammar 212 is enabled at block 352.
If, however, it is determined at decision 356 that the user has responded negatively, then the flow proceeds to FIG. 3D where the associated context from column 404 of NLP database 218 is retrieved for the highest-confidence entry, and the user is prompted for information using a context-based interactive dialog at block 360. For example, if the user uttered "what is the price of XICOR stock today," and the highest confidence entry from the NLP database 218 was row 412B (e.g., "what is the price of IBM stock on $date"), then the user would be prompted at block 354 whether that was what he meant.
If the user responds negatively, then the context "stock" is retrieved from column 404 at block 358, and the context-based interactive dialog for the stock context is presented to the user over the display 104 and speaker 112. Such a context-based interactive dialog may entail prompting the user for the name and stock ticker symbol of XICOR stock. The user may respond by speaking the required information. A different context-based interactive dialog may be used for each of the possible contexts. For example, the "weather" context-based interactive dialog may entail prompting the user for the name of the location (e.g., the city) about which weather information is desired. Also, the "news" context-based interactive dialog may entail prompting the user for types of articles, news source, Internet URL for the news site, or other related information.
Upon completion of the context-based interactive dialog, the NLP database 218, general grammar 214, and context-specific grammar 212 are updated to include the new information, at block 362. In this way, the next time the user asks for that information, a proper match will be found, and the appropriate action taken without prompting the user for more information. Thus, the present invention adaptively "learns" to recognize phrases uttered by the user.
In one embodiment of the present invention, one or more of the NLP database 218, context specific grammar 212, general grammar 214, and dictation grammar 216 also contain time-stamp values (not shown) associated with each entry. Each time a matching entry is used, the time-stamp value associated with that entry is updated. At periodic intervals, or when initiated by the user, the entries that have a time-stamp value before a certain date and time are removed from their respective databases/grammars. In this way, the databases/grammars may be kept to an efficient size by "purging" old or out-of-date entries. This also assists in avoiding false matches.
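A minimal sketch of this time-stamp bookkeeping, assuming entries held in a dictionary mapping phrases to last-use times, might read:

    import time

    # Update the time-stamp each time a matching entry is used.
    def touch(entries, phrase):
        entries[phrase] = time.time()

    # Purge entries whose time-stamp falls before the cutoff.
    def purge_stale(entries, max_age_seconds):
        cutoff = time.time() - max_age_seconds
        return {phrase: stamp for phrase, stamp in entries.items()
                if stamp >= cutoff}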
In an alternate embodiment of the present invention, the updates to the NLP database 218, general grammar 214, and context-specific grammar 212 are stored in a user voice profile 800, shown in FIG. 9. A user voice profile 800 would comprise any general grammar additions 214a, context-specific grammar additions 212a, and NLP database additions 218a created by the user training. Since each user of the system would have a different user voice profile 800, the invention would be flexible enough to allow for special customizations and could adapt to the idiosyncrasies of individual users.
Moreover, in some embodiments of the present invention, the user voice profile 800 would be stored locally and mirrored at a known server location. The mirrored copy, referred to as the "travelling" user voice profile, enables users to access their phrases "adaptively" learned by the invention, even when the user is logged into a different location. FIG. 10 illustrates an exemplary method of the present invention that accesses customized user voice profiles 800 at local and remote (travelling) locations. Initially, a valid system user is verified, by any means known in the art, and then the system searches for a locally stored user voice profile. For example, the system queries the user for their login ID and password as shown in block 900. If the password and login ID match, as determined by decision block 905, the user is deemed to be a valid user. It is well understood that this login ID and password are but one of many methods known in the art to verify valid users, and that all such validation systems could be easily substituted. If no local user voice profile is found, block 910, the system searches for a travelling user voice profile, block 920. If either search turns up a user voice profile, the user voice profile is loaded, blocks 915 and 925, respectively. Provided that the retrieval of the user voice profile 800 is successful, blocks 930 and 935, the user voice profile 800 is enabled by extracting the general grammar additions 214a, context-specific grammar additions 212a, and NLP database additions 218a. These "learned" adaptations are then used by the system, as discussed earlier with the method of FIGS. 3A-3D.
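The lookup order of FIG. 10 can be sketched as follows; the store objects, field names, and helper name are assumptions, and the login verification of blocks 900-905 is omitted:

    # Simplified profile lookup mirroring blocks 910-935 of FIG. 10.
    def load_voice_profile(user_id, local_store, remote_store):
        profile = local_store.get(user_id)          # block 910: local search
        if profile is None:
            profile = remote_store.get(user_id)     # block 920: travelling profile
        if profile is None:
            return None                             # fall back to default grammars
        # Blocks 930-935: enable the profile by extracting its additions.
        return {
            "general_grammar_additions": profile.get("general_additions", []),
            "context_grammar_additions": profile.get("context_additions", []),
            "nlp_database_additions": profile.get("nlp_additions", []),
        }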
In one embodiment of the present invention, speech recognition and natural language processing may be used to interact with objects, such as help files (".hlp" files), World-Wide-Web ("WWW" or "web") pages, or any other objects that have a context-sensitive voice-based interface.
FIG. 5 illustrates an exemplary Dialog Definition File (DDF) 500 which represents information necessary to associate the speech recognition and natural language processing to an internet object, such as a text or graphics file or, in the preferred embodiment, a web-page or help file. Although in its simplest embodiment the Dialog Definition File 500 consists of an object table 510, the DDF may also contain additional context-specific grammar files 214 and additional entries for the natural language processing (NLP) database 218, as illustrated in FIG. 5. The preferred embodiment of the DDF 500 includes an object table 510, a context-specific grammar file 214, a context-specific dictation model 217, and a file containing entries to the natural language processing database 218. These components may be compressed and combined into the DDF file 500 by any method known in the art, such as through Lempel-Ziv compression. The context-specific grammar file 214 and the natural language processing database 218 are as described in earlier sections. The object table 510 is a memory structure, such as a memory tree, chain or table, which associates an address of a resource with various actions, grammars, or entries in the NLP database 218.
An exemplary embodiment of the object table 510 is illustrated in FIG. 6. FIG. 6 illustrates a memory table which may contain entry columns for: an object 520, a Text-to-Speech (TTS) flag 522, a text speech 524, a use grammar flag 526, an append grammar flag 528, an "is yes/no?" flag 530, and "do yes" 532 and "do no" 534 actions. Each row in the table 540A-540n would represent the grammar and speech related to an individual object. The exemplary embodiment of the invention would refer to objects 520 through a Universal Resource Locator (URL). A URL is a standard method of specifying the address of any resource on the Internet that is part of the World-Wide-Web. As this standard is well known in the art for describing the location of Internet resources and objects, the details of URLs will therefore not be discussed herein. One advantage of URLs is that they can specify information in a large variety of object formats, including hypertext, graphical, database and other files, in addition to a number of object devices and communication protocols. However, as shown in FIG. 6, URLs and other methods of specifying objects can be used.
When combined with the text speech 524, the Text-to-Speech (TTS) flag 522 indicates whether an initial statement should be voiced over speaker 112 when the corresponding object is transferred. For example, when transferring the web page listed in the object column 520 of row 540A (http://www.conversit.com), the TTS flag 522 is marked, indicating the text speech 524, "Hello, welcome to...," is to be voiced over speaker 112.
The next three flags relate to the use of grammars associated with this object. The affirmative marking of the "use grammar" 526 or "append grammar" 528 flags indicates the presence of a context-specific grammar file 214 related to the indicated object. The marking of the "use grammar" flag 526 indicates that the new context-specific grammar file 214 replaces the existing context-specific grammar file, and the existing file is disabled. The "append grammar" flag 528 indicates that the new context-specific grammar file should be enabled concurrently with the existing context-specific grammar file.
Lastly, the remaining column entries relate to a "yes/no" grammar structure. If the "Is yes/no?" flag 530 is marked, then a standard "yes/no" grammar is enabled. When a standard "yes/no" grammar is enabled, affirmative commands spoken to the computer result in the computer executing the command indicated in the "Do Yes" entry 532. Similarly, a negative command spoken to the computer results in the computer executing the command indicated in the "Do No" entry 534. The entries in the "Do Yes" 532 and "Do No" 534 columns may either be commands or pointers to commands imbedded in the NLP Database 218. For example, as shown in row 540B, the "Is Yes/No?" flag is marked. An affirmative answer, such as "yes," given to the computer, would result in executing the corresponding command in the "Do Yes" entry 532; in this specific case, the entry is the number "210," a reference to the 210th command in the NLP database. An answer of "no" would result in the computer executing the 211th command in the NLP database.
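One row of such an object table can be pictured as the following structure; the field names are illustrative, not the disclosed column labels:

    # A hypothetical object table row, keyed by URL, following FIG. 6.
    object_table = {
        "http://www.conversit.com": {
            "tts_flag": True,                  # voice the text speech 524
            "text_speech": "Hello, welcome to ...",
            "use_grammar": True,               # replace the current grammar
            "append_grammar": False,           # or enable it concurrently
            "is_yes_no": True,
            "do_yes": 210,                     # pointer into the NLP database
            "do_no": 211,                      # pointer into the NLP database
        },
    }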
Turning now to FIG. 7A, a method and system of providing speech and voice commands to objects, such as a computer reading a help file or browsing the World-Wide-Web, is illustrated. The method of FIGS. 7A-7C may be used in conjunction with the method of FIGS. 3A-3D and FIG. 10. In block 602, an object location is provided to a help file reader or World-Wide-Web browser. A help file reader/browser is a program used to examine hypertext documents that are written to help users accomplish tasks or solve problems, and is well known in the art. The web browser is a program used to navigate through the Internet, and is well known in the art. The step, at block 602, of providing an object location to the browser can be as simple as a user clicking on a program "help" menu item, manually typing in a URL, or having a user select a "link" at a chosen web-site. It also may be the result of a voiced command as described earlier with reference to the action associated with each entry in the NLP database 218. Given the object location, the computer must decide whether it can resolve the object location specified, at block 604. This resolution process is a process well known in the art. If the computer is unable to resolve the object location or internet address, an error message is displayed in the browser window, at block 605, and the system is returned to its initial starting state 600. If the object location or internet address is resolved, the computer retrieves the object at block 606. For a networked object, for example, a web browser sends the web-site a request for the web page, at block 606. For a help file application, the help reader reads the help file from storage medium 108, at block 606.
A decision is made, depending upon whether the object is retrieved, at block 608. If the object cannot be retrieved, an error message is displayed in the browser window, at block 605, and the system is returned to its initial starting state 600. If the object is retrieved, it is displayed in the help-reader or web-site browser, as appropriate, at block 610.
In decision block 612, the computer 100 determines whether the DDF file 500 corresponding to the object is already present on the computer 100. If the DDF file is present, the flow proceeds to FIG. 7C; if not, the flow proceeds to FIG. 7B.
Moving to FIG. 7B, if the DDF file 500 is not present, the computer examines whether the DDF file 500 location is encoded within the object. For example, the DDF file location could be encoded within web page HyperText Markup Language (HTML) as a URL. (Note that HTML is well known in the art, and the details of the language will therefore not be discussed herein.) Encoding the DDF file location within HTML code may be done either through listing the DDF file location in an initial HTML meta-tag such as:

    <meta DDF="http://www.conversit.com/ConverseIt.ddf">

or directly through a scripting tag written into the variation of HTML supported by the browser:

    <DDF="http://www.conversit.com/ConverseIt.ddf">
If the DDF file location information is encoded within the web page, the location's internet address is resolved, at block 616, and the computer requests transfer of the DDF file 500, at block 626. An equivalent encoding scheme could be used within help file hypertext.
Alternatively, if the DDF file 500 location is not encoded within the object, there are several alternate places that it may be stored. It may be stored in a pre-defined location at a web-site, such as a certain file location in the root directory, or at a different centralized location, such as another Internet server or the storage medium 108 of FIG. 1. Blocks 618 and 620 test for these possibilities. Block 618 determines whether the DDF file is located at the web-site. At this step, the computer sends a query to the web-site inquiring about the presence of the DDF file 500. If the DDF file 500 is present at the web-site, the computer requests transfer of the DDF file 500, at block 626. If the DDF file 500 is not located at the web-site, the computer queries the centralized location about the presence of a DDF file for the web-site, at block 620. If the DDF file is present at the centralized location, the computer requests transfer of the DDF file, at block 626. If the DDF file 500 cannot be found, the existing components of any present DDF file, such as the object table 510, context-specific dictation model 217, NLP database 218 associated with the object, and context-specific grammar 214 for any previously-viewed object, are deactivated in block 622. Furthermore, the object is treated as a non-voice-activated object, and only standard grammar files are used, at block 624. Standard grammar files are the grammar files existing on the system, excluding any context-specific grammar file associated with the object.
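The fallback chain for locating a DDF file can be condensed into a sketch; the tag parsing, root-directory file name, and lookup callables below are illustrative assumptions:

    # Simplified DDF location fallback of FIG. 7B.
    def locate_ddf(page_html, site_has_ddf, central_registry, site_url):
        # First possibility: the DDF location is encoded within the object.
        marker = 'DDF="'
        if marker in page_html:
            start = page_html.index(marker) + len(marker)
            return page_html[start:page_html.index('"', start)]
        # Block 618: a pre-defined location at the web-site itself.
        if site_has_ddf(site_url):
            return site_url.rstrip("/") + "/ConverseIt.ddf"
        # Block 620: a centralized location; None means non-voice-activated.
        return central_registry.get(site_url)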
If the DDF file 500 is requested at block 626, and its transfer is unsuccessful, any existing components of any present DDF file 500 are deactivated, at block 622, and the web-site is treated as a non-voice-activated object, and only standard grammar files are used, at block 624.
If the DDF file 500 is requested at block 626 and its transfer is successful at block 628, it replaces any prior DDF file, at block 630. Any components of the DDF file 500, such as the object table 510, context-specific grammar files 214, context-specific dictation models 217, and NLP database 218 are extracted at block 632. A similar technique may be used for obtaining the software necessary to implement the method illustrated in FIGS. 3A-3D, comprising the functional elements of FIG. 2.
The flow moves to FIG. 7C. The object table 510 is read into memory by the computer in block 634. If the object is present in the site object table 510, as determined by block 636, it will be represented by a row 540A-540n of the table, as shown in FIG. 6. Each row of the object table represents the speech-interactions available to a user for that particular object. If no row corresponding to the object exists, then no speech interaction exists for the web page, and processing ends.
If the object location is present in the site object table 510, as determined by block 636, the computer checks if the TTS flag 522 is marked, to determine whether a text speech 524 is associated with the web-page, at block 638. If there is a text speech 524, it is voiced at block 640, and flow continues. If there is a context-specific grammar file associated with the object, as determined by decision block 642, it is enabled at block 644, and then the NLP database 218 is enabled at block 646. If no context-specific grammar file is associated with the object, only the NLP database 218 is enabled at block 646. Once the NLP database is enabled at block 646, the system behaves as described above with reference to FIGS. 3A-3D. In summary, the present invention provides a method and system for an object interactive user-interface for a computer. By the use of context-specific grammars that are tied to internet objects through a Dialog Definition File, the present invention decreases speech recognition time and increases the user's ability to communicate with local and networked objects, such as help files or web-pages, in a conversational style. By adaptively updating the various grammars and the NLP database, the present invention further increases interactive efficiency. The adaptive updates can be incorporated into user voice profiles that can be stored locally and remotely, to allow users access to the user voice profiles at various locations.
The previous description of the preferred embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without the use of inventive faculty. Thus, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims

I CLAIM:
1. A method for interacting with an object via a computer using utterances, the method comprising the steps of:
searching a context-specific dictation model for a matching phrase for the utterance;
searching a database for a matching entry for the matching phrase; and
performing an action associated with the matching entry if the matching entry is found in the database.
2. The method of claim 1 wherein the object is a web page.
3. The method of claim 1 wherein the object is a help file.
4. A method for interacting with an object via a computer using utterances, the method comprising the steps of:
searching a first grammar file for a matching phrase for the utterance;
searching a second grammar file for the matching phrase if the matching phrase is not found in the first grammar file;
searching a dictation grammar for the matching phrase if the matching phrase is not found in the second grammar file;
searching a context-specific dictation model for the matching phrase if the matching phrase is not found in the dictation grammar;
searching a database for a matching entry for the matching phrase; and
performing an action associated with the matching entry if the matching entry is found in the database.
5. The method of claim 4 wherein the first grammar file is a context-specific grammar file.
6. The method of claim 5 wherein the second grammar file is a general grammar file.
7. The method of claim 6 further comprising the step of replacing at least one word in the matching phrase prior to the step of searching the database.
8. The method of claim 7 wherein the step of replacing the at least one word comprises substituting a wildcard for the at least one word.
9. The method of claim 8 wherein the step of replacing the at least one word comprises substituting a proper name for the at least one word.
10. The method of claim 9 further comprising the step of text formatting the matching phrase prior to the step of searching the database.
11. The method of claim 9 further comprising the step of weighting individual words in the matching phrase according to a relative significance of the individual words prior to the step of searching the database.
12. The method of claim 4 further comprising the step of updating a user voice profile and at least one of the database, the first grammar file and the second grammar file with the matching phrase if the matching entry is not found in the database.
13. The method of claim 12 further comprising storing the user voice profile locally.
14. The method of claim 12 further comprising storing the user voice profile at a remote location over a network.
15. The method of claim 12 further comprising storing the user voice profile locally and at a remote location over a network.
16. The method of claim 4 further comprising the step of generating a confidence value for the matching entry.
17. The method of claim 16 further comprising the step of comparing the confidence value with a threshold value.
18. The method of claim 17 further comprising the step of determining whether a required number of words from the matching phrase are present in the matching entry.
19. The method of claim 18 further comprising the step of prompting a user whether the matching entry is a correct interpretation of the utterance if the required number of words from the matching phrase are not present in the matching entry.
20. The method of claim 19 further comprising the step of prompting a user for additional information if the matching entry is not a correct interpretation of the utterance.
21. The method of claim 20 further comprising the step of updating at least one of the database, the first grammar file and the second grammar file with the additional information.
22. The method of claim 21 further comprising storing the user voice profile locally.
23. The method of claim 21 further comprising storing the user voice profile at a remote location over a network.
24. The method of claim 21 further comprising storing the user voice profile locally and at a remote location over a network.
25. The method of claim 4 wherein the object is a web page.
26. The method of claim 4 wherein the object is a help file.
27. A system for interacting with a computer using utterances, the system comprising:
a speech processor for searching a context-specific grammar file for a matching phrase for the utterance, for searching a general grammar file for the matching phrase if the matching phrase is not found in the context-specific grammar file, for searching a dictation grammar for the matching phrase if the matching phrase is not found in the general grammar file, and for searching a context-specific dictation model if the matching phrase is not found in the dictation grammar;
a natural language processor for searching a database for a matching entry for the matching phrase; and
an application interface for performing an action associated with the matching entry if the matching entry is found in the database.
28. The system of claim 27 wherein the natural language processor updates a user voice profile and at least one of the database, the context-specific grammar file and the general grammar file with the matching phrase if the matching entry is not found in the database.
29. The system of claim 28 wherein the user voice profile is stored locally.
30. The system of claim 28 wherein the user voice profile is stored remotely over a network.
31. The system of claim 28 wherein the user voice profile is stored locally and remotely over a network.
32. The system of claim 28 wherein the speech processor searches a context-specific grammar associated with the matching entry for a subsequent matching phrase for a subsequent utterance.
33. The system of claim 27 wherein the natural language processor replaces at least one word in the matching phrase prior to searching the database.
34. The system of claim 33 further comprising a variable replacer in the natural language processor for substituting a wildcard for the at least one word in the matching phrase.
35. The system of claim 33 further comprising a pronoun substituter in the natural language processor for substituting a proper name for the at least one word in the matching phrase.
36. The system of claim 27 further comprising a string formatter for text formatting the matching phrase prior to searching the database.
37. The system of claim 27 further comprising a word weighter for weighting individual words in the matching phrase according to a relative significance of the individual words prior to searching the database.
38. The system of claim 27 further comprising a search engine in the natural language processor for generating a confidence value for the matching entry.
39. The system of claim 38 wherein the natural language processor compares the confidence value with a threshold value.
40. The system of claim 39 further comprising a boolean tester for determining whether a required number of words from the matching phrase are present in the matching entry.
41. The system of claim 40 wherein the natural language processor prompts a user whether the matching entry is a correct interpretation of the utterance if the required number of words from the matching phrase are not present in the matching entry.
42. The system of claim 40 wherein the natural language processor prompts a user for additional information if the matching entry is not a correct interpretation of the utterance.
43. The system of claim 42 wherein the natural language processor updates at least one of the database, the context-specific grammar file and the general grammar file with the additional information.
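For illustration only, the staged matching recited in claims 4 and 27, the word replacement and weighting of claims 7 through 11, and the confidence test of claims 16 through 18 can be sketched as follows (Python). The pronoun list, the overlap-based scoring, the threshold, and the data shapes are assumptions made for the sketch and are not recited in the claims.

    def find_matching_phrase(utterance, stages):
        """Claims 4/27: try each stage in order (context-specific grammar,
        general grammar, dictation grammar, context-specific dictation model)."""
        for search in stages:
            phrase = search(utterance)
            if phrase is not None:
                return phrase
        return None

    def prepare_phrase(phrase, proper_names):
        """Claims 7-10: replace words before searching the database, substituting
        a proper name for a pronoun where one is known and a wildcard otherwise,
        then text-format the result. The pronoun list is an assumption."""
        out = []
        for word in phrase.split():
            if word.lower() in ("he", "she", "it", "they"):
                out.append(proper_names.get(word.lower(), "*"))
            else:
                out.append(word)
        return " ".join(out).strip().lower()

    def search_nlp_database(phrase, nlp_db, weights=None, threshold=0.7, required_words=2):
        """Claims 11 and 16-18: weight individual words by significance, score each
        entry, and gate the best match on a confidence threshold and on a required
        number of words from the matching phrase being present in the entry."""
        weights = weights or {}
        words = set(phrase.split())
        total = sum(weights.get(w, 1.0) for w in words) or 1.0
        best_entry, best_conf = None, 0.0
        for entry, action in nlp_db.items():
            entry_words = set(entry.lower().split())
            conf = sum(weights.get(w, 1.0) for w in words & entry_words) / total
            if conf > best_conf:
                best_entry, best_conf = (action, entry_words), conf
        if best_entry and best_conf >= threshold and \
           len(words & best_entry[1]) >= required_words:
            return best_entry[0]
        return None  # caller may prompt the user for confirmation (claims 19-20)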
PCT/US2000/027407 1999-10-05 2000-10-05 Interactive user interface using speech recognition and natural language processing WO2001026093A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP00968695A EP1221161A1 (en) 1999-10-05 2000-10-05 Interactive user interface using speech recognition and natural language processing
AU78570/00A AU7857000A (en) 1999-10-05 2000-10-05 Interactive user interface using speech recognition and natural language processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/412,929 1999-10-05
US09/412,929 US6434524B1 (en) 1998-09-09 1999-10-05 Object interactive user interface using speech recognition and natural language processing

Publications (1)

Publication Number Publication Date
WO2001026093A1 (en) 2001-04-12

Family

ID=23635043

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/027407 WO2001026093A1 (en) 1999-10-05 2000-10-05 Interactive user interface using speech recognition and natural language processing

Country Status (4)

Country Link
US (1) US6434524B1 (en)
EP (1) EP1221161A1 (en)
AU (1) AU7857000A (en)
WO (1) WO2001026093A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5311429A (en) * 1989-05-17 1994-05-10 Hitachi, Ltd. Maintenance support method and apparatus for natural language processing system
DE4440598C1 (en) * 1994-11-14 1996-05-23 Siemens Ag World Wide Web hypertext information highway navigator controlled by spoken word
EP0834862A2 (en) * 1996-10-01 1998-04-08 Lucent Technologies Inc. Method of key-phrase detection and verification for flexible speech understanding
WO2000014727A1 (en) * 1998-09-09 2000-03-16 One Voice Technologies, Inc. Interactive user interface using speech recognition and natural language processing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"APPROXIMATE WORD-SPOTTING METHOD FOR CONSTRAINED GRAMMARS", IBM TECHNICAL DISCLOSURE BULLETIN,IBM CORP. NEW YORK,US, vol. 37, no. 10, 1 October 1994 (1994-10-01), pages 385, XP000475707, ISSN: 0018-8689 *
WYARD P J ET AL: "SPOKEN LANGUAGE SYSTEMS - BEYOND PROMPT AND RESPONSE", BT TECHNOLOGY JOURNAL,GB,BT LABORATORIES, vol. 14, no. 1, 1996, pages 187 - 207, XP000554648, ISSN: 1358-3948 *

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7702508B2 (en) 1999-11-12 2010-04-20 Phoenix Solutions, Inc. System and method for natural language processing of query answers
US6665640B1 (en) 1999-11-12 2003-12-16 Phoenix Solutions, Inc. Interactive speech based learning/training system formulating search queries based on natural language parsing of recognized user queries
US7657424B2 (en) 1999-11-12 2010-02-02 Phoenix Solutions, Inc. System and method for processing sentence based queries
US9076448B2 (en) 1999-11-12 2015-07-07 Nuance Communications, Inc. Distributed real time speech recognition system
US9190063B2 (en) 1999-11-12 2015-11-17 Nuance Communications, Inc. Multi-language speech recognition system
US6633846B1 (en) 1999-11-12 2003-10-14 Phoenix Solutions, Inc. Distributed realtime speech recognition system
US7729904B2 (en) 1999-11-12 2010-06-01 Phoenix Solutions, Inc. Partial speech processing device and method for use in distributed systems
WO2002086864A1 (en) * 2001-04-18 2002-10-31 Rutgers, The State University Of New Jersey System and method for adaptive language understanding by computers
GB2391680A (en) * 2001-05-02 2004-02-11 Vox Generation Ltd Adaptive learning of language models for speech recognition
WO2002089112A1 (en) * 2001-05-02 2002-11-07 Vox Generation Limited Adaptive learning of language models for speech recognition
GB2391680B (en) * 2001-05-02 2005-07-20 Vox Generation Ltd Adaptive learning of language models for speech recognition
US7409349B2 (en) 2001-05-04 2008-08-05 Microsoft Corporation Servers for web enabled speech recognition
US7506022B2 (en) 2001-05-04 2009-03-17 Microsoft.Corporation Web enabled recognition architecture
US7610547B2 (en) 2001-05-04 2009-10-27 Microsoft Corporation Markup language extensions for web enabled recognition
WO2002101720A1 (en) * 2001-06-08 2002-12-19 Mende Speech Solutions Gmbh & Co.Kg Method for recognition of speech information
EP1293963A1 (en) * 2001-09-07 2003-03-19 Sony International (Europe) GmbH Dialogue management server architecture for dialogue systems
EP1304614A3 (en) * 2001-10-21 2005-08-31 Microsoft Corporation Application abstraction with dialog purpose
JP2009059378A (en) * 2001-10-21 2009-03-19 Microsoft Corp Recording medium and method for abstracting application aimed at dialogue
US7711570B2 (en) 2001-10-21 2010-05-04 Microsoft Corporation Application abstraction with dialog purpose
EP1304614A2 (en) * 2001-10-21 2003-04-23 Microsoft Corporation Application abstraction with dialog purpose
US7809578B2 (en) 2002-07-17 2010-10-05 Nokia Corporation Mobile device having voice user interface, and a method for testing the compatibility of an application with the mobile device
US7426468B2 (en) 2003-03-01 2008-09-16 Coifman Robert E Method and apparatus for improving the transcription accuracy of speech recognition software
WO2004079720A1 (en) * 2003-03-01 2004-09-16 Robert E Coifman Method and apparatus for improving the transcription accuracy of speech recognition software
US7260535B2 (en) 2003-04-28 2007-08-21 Microsoft Corporation Web server controls for web enabled recognition and/or audible prompting for call controls
US8311835B2 (en) 2003-08-29 2012-11-13 Microsoft Corporation Assisted multi-modal dialogue
US7552055B2 (en) 2004-01-10 2009-06-23 Microsoft Corporation Dialog component re-use in recognition systems
CN103262156A (en) * 2010-08-27 2013-08-21 思科技术公司 Speech recognition language model
US8532994B2 (en) 2010-08-27 2013-09-10 Cisco Technology, Inc. Speech recognition using a personal vocabulary and language model
WO2012027095A1 (en) * 2010-08-27 2012-03-01 Cisco Technologies, Inc. Speech recognition language model
EP2856358A4 (en) * 2012-05-24 2016-02-24 Soundhound Inc Systems and methods for enabling natural language processing
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10553215B2 (en) 2016-09-23 2020-02-04 Apple Inc. Intelligent automated assistant
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US11405466B2 (en) 2017-05-12 2022-08-02 Apple Inc. Synchronization and task delegation of a digital assistant
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services

Also Published As

Publication number Publication date
AU7857000A (en) 2001-05-10
EP1221161A1 (en) 2002-07-10
US6434524B1 (en) 2002-08-13

Similar Documents

Publication Title
US6434524B1 (en) Object interactive user interface using speech recognition and natural language processing
AU762282B2 (en) Network interactive user interface using speech recognition and natural language processing
AU2001251354A1 (en) Natural language and dialogue generation processing
US7729913B1 (en) Generation and selection of voice recognition grammars for conducting database searches
CA2280331C (en) Web-based platform for interactive voice response (IVR)
JP5330450B2 (en) Topic-specific models for text formatting and speech recognition
US6910012B2 (en) Method and system for speech recognition using phonetically similar word alternatives
JP4267081B2 (en) Pattern recognition registration in distributed systems
CA2437620C (en) Hierarchical language models
JP4485694B2 (en) Parallel recognition engine
US20020087315A1 (en) Computer-implemented multi-scanning language method and system
JPH08335160A (en) System for making video screen display voice-interactive
JP3476008B2 (en) Method for registering voice information, method for specifying a recognition character string, voice recognition device, storage medium storing a software product for registering voice information, and storage medium storing a software product for specifying a recognition character string
WO2007021587A2 (en) Systems and methods of supporting adaptive misrecognition in conversational speech
JP2012520528A (en) System and method for automatic semantic labeling of natural language text
WO2000045375A1 (en) Method and apparatus for voice annotation and retrieval of multimedia data
WO2002054385A1 (en) Computer-implemented dynamic language model generation method and system
House et al. Spoken-Language Access to Multimedia (SLAM)
JP3893893B2 (en) Voice search method, voice search apparatus and voice search program for web pages

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 EP: the EPO has been informed by WIPO that EP was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2000968695

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2000968695

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Ref document number: 2000968695

Country of ref document: EP