US20060173686A1 - Apparatus, method, and medium for generating grammar network for use in speech recognition and dialogue speech recognition - Google Patents
- Publication number: US20060173686A1 (application No. US 11/344,163)
- Authority: US (United States)
- Prior art keywords: dialogue, word, generating, sentence, words
- Prior art date: 2005-02-01
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS > G10—MUSICAL INSTRUMENTS; ACOUSTICS > G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING > G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/183—Speech classification or search using natural language modelling using context dependencies, e.g. language models
- G10L15/19—Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
Abstract
- An apparatus, method, and medium for generating a grammar network for use in speech recognition, and an apparatus and method for dialogue speech recognition using the grammar network. A semantic map and an acoustic map, formed by clustering the words of a dialogue sentence corpus according to semantic correlation and acoustic similarity, respectively, are activated by the contents of the most recent dialogue between a system and a user. The resulting candidate word groups are combined into a grammar network that is adaptively and automatically regenerated as the dialogue progresses.
Description
- This application claims the benefit of Korean Patent Application No. 10-2005-0009144, filed on Feb. 1, 2005, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
- 1. Field of the Invention
- The present invention relates to speech recognition, and more particularly, to an apparatus and method for adaptively and automatically generating a grammar network for use in speech recognition based on contents of previous dialogue, and an apparatus and method for recognizing dialogue speech by using the grammar network for speech recognition.
- 2. Description of the Related Art
- Among the grammar generation algorithms used in the decoder of a speech recognition apparatus, such as a virtual machine or a computer, well-known methods include the n-gram method, the hidden Markov model (HMM) method, the speech application programming interface (SAPI), the voice eXtensible markup language (VXML), and the speech application language tags (SALT) method. In the n-gram method, real-time discourse information between the speech recognition apparatus and the user is not reflected in utterance prediction. In the HMM method, each moment of utterance by a user is treated as an individual probability event, completely independent of the other utterance moments of the user or of the speech recognition apparatus. Meanwhile, in the SAPI, VXML, and SALT methods, a predefined grammar in a simple, prefixed discourse is loaded at predefined time points.
- As a result, when the content of a user's utterance falls outside the predefined standard grammar structure, it becomes difficult for the speech recognition apparatus to recognize the utterance, and the apparatus prompts the user to utter again. The time taken by the speech recognition apparatus to recognize the user's utterance therefore grows, and the dialogue between the speech recognition apparatus and the user becomes unnatural as well as tedious.
- Furthermore, the grammar network generation method of the n-gram approach, which uses a statistical model, may be appropriate for the grammar network generator of a speech recognition apparatus for dictation utterance, but it is not appropriate for one aimed at conversational utterance, because real-time discourse information is not utilized for utterance prediction. Likewise, the grammar network generation methods of the SAPI, VXML, and SALT approaches, which employ a context free grammar (CFG) using a computational language model, may be appropriate for the grammar network generator of a speech recognition apparatus for command and control utterance, but they are not appropriate for conversational utterance, because the discourse and speech content of the user cannot go beyond a pre-designed fixed discourse.
- Additional aspects, features, and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the invention.
- The present invention provides an apparatus, method, and medium for adaptively and automatically generating a grammar network for speech recognition based on contents of previous dialogue.
- The present invention also provides an apparatus, method, and medium for performing dialogue speech recognition by using a grammar network for speech recognition generated adaptively and automatically based on contents of previous dialogue.
- According to an aspect of the present invention, there is provided an apparatus for generating a grammar network for speech recognition including: a dialogue history storage unit storing a dialogue history between a system and a user; a semantic map formed by clustering words forming each dialogue sentence included in a dialogue sentence corpus depending on semantic correlation, and generating a first candidate group formed of a plurality of words having the semantic correlation extracted for each word forming a dialogue sentence provided from the dialogue history storage unit; an acoustic map formed by clustering words forming each dialogue sentence included in the dialogue sentence corpus depending on acoustic similarity, and generating a second candidate group formed of a plurality of words having an acoustic similarity extracted for each word forming the dialogue sentence provided from the dialogue history storage unit and each word of the first candidate group; and a grammar network construction unit constructing a grammar network by combining the first candidate group and the second candidate group.
- According to another aspect of the present invention, there is provided a method of generating a grammar network for speech recognition including: forming a semantic map by clustering words forming each dialogue sentence included in a dialogue sentence corpus depending on semantic correlation; forming an acoustic map by clustering words forming each dialogue sentence included in the dialogue sentence corpus depending on acoustic similarity; activating the semantic map and generating a first candidate group formed of a plurality of words having the semantic correlation extracted for each word forming a dialogue sentence included in a dialogue history performed between a system and a user; activating the acoustic map and generating a second candidate group formed of a plurality of words having an acoustic similarity extracted for each word forming the dialogue sentence included in the dialogue history and each word of the first candidate group; and generating a grammar network by combining the first candidate group and the second candidate group.
- According to another aspect of the present invention, there is provided an apparatus for speech recognition including: a feature extraction unit extracting features from a user's voice and generating a feature vector string; a grammar network generation unit generating a grammar network by activating a semantic map and an acoustic map by using contents of a dialogue most recently spoken, whenever the user speaks; a loading unit loading the grammar network generated by the grammar network generation unit; and a searching unit searching the grammar network loaded in the loading unit, by using the feature vector string, and generating a candidate recognition sentence formed of a word string matching the feature vector string.
- According to another aspect of the present invention, there is provided a method of speech recognition including: extracting features from a user's voice and generating a feature vector string; generating a grammar network by activating a semantic map and an acoustic map by using contents of a dialogue most recently spoken, whenever the user speaks; loading the grammar network; and searching the loaded grammar network, by using the feature vector string, and generating a candidate recognition sentence formed of a word string matching the feature vector string.
- According to another aspect of the present invention, there is provided at least one computer readable medium storing instructions that control at least one processor for executing a method of generating a grammar network for speech recognition, wherein the method includes: forming a semantic map by clustering words forming each dialogue sentence included in a dialogue sentence corpus depending on semantic correlation; forming an acoustic map by clustering words forming each dialogue sentence included in the dialogue sentence corpus depending on acoustic similarity; activating the semantic map and generating a first candidate group formed of a plurality of words having the semantic correlation extracted for each word forming a dialogue sentence included in a dialogue history performed between a system and a user; activating the acoustic map and generating a second candidate group formed of a plurality of words having an acoustic similarity extracted for each word forming the dialogue sentence included in the dialogue history and each word of the first candidate group; and generating a grammar network by combining the first candidate group and the second candidate group.
- According to another aspect of the present invention, there is provided at least one computer readable medium storing instructions that control at least one processor for executing a method of speech recognition, wherein the method includes: extracting features from a user's voice and generating a feature vector string; generating a grammar network by activating a semantic map and an acoustic map by using contents of a dialogue most recently spoken, whenever the user speaks; loading the grammar network; and searching the loaded grammar network, by using the feature vector string, and generating a candidate recognition sentence formed of a word string matching the feature vector string.
- According to another aspect of the present invention, there is provided a method of speech recognition including: extracting features from a user's voice and generating a feature vector string; generating a grammar network by activating a semantic map and an acoustic map by using contents of a dialogue spoken by a user; and searching the grammar network, by using the feature vector string, and generating a candidate recognition sentence formed of a word string matching the feature vector string.
- According to another aspect of the present invention, there is provided at least one computer readable medium storing instructions that control at least one processor for executing a method of speech recognition, wherein the method includes: extracting features from a user's voice and generating a feature vector string; generating a grammar network by activating a semantic map and an acoustic map by using contents of a dialogue spoken by a user; and searching the grammar network, by using the feature vector string, and generating a candidate recognition sentence formed of a word string matching the feature vector string.
- These and/or other aspects, features, and advantages of the invention will become apparent and more readily appreciated from the following description of exemplary embodiments, taken in conjunction with the accompanying drawings, of which:
- FIG. 1 is a block diagram illustrating a structure of an apparatus for generating a grammar network for speech recognition according to an exemplary embodiment of the present invention;
- FIG. 2 is a block diagram explaining an exemplary process of generating an acoustic map and a semantic map illustrated in FIG. 1;
- FIG. 3 is a block diagram illustrating a structure of a dialogue speech recognition apparatus according to an exemplary embodiment of the present invention; and
- FIG. 4 is a flowchart illustrating a speech recognition method according to an exemplary embodiment of the present invention.
- Reference will now be made in detail to exemplary embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. Exemplary embodiments are described below to explain the present invention by referring to the figures.
- FIG. 1 is a block diagram illustrating a structure of an apparatus for generating a grammar network for speech recognition according to an exemplary embodiment of the present invention. The apparatus includes a dialogue history storage unit 110, a semantic map 130, an acoustic map 150, and a grammar network construction unit 170.
- Referring to FIG. 1, the dialogue history storage unit 110 stores the dialogue history between a virtual machine or computer having a speech recognition function (hereinafter referred to as a 'system') and a user as the dialogue progresses, up to and including a preset number of times that the source (system or user) of the dialogue changes. Accordingly, the dialogue history stored in the dialogue history storage unit 110 can be updated as the dialogue between the system and the user progresses. For example, the dialogue history includes at least one combination among a plurality of candidate recognition results of the user's previous voice input provided from a searching unit 370 of FIG. 3, a final recognition result of the user's previous voice input provided from an utterance verification unit 380 of FIG. 3, a reutterance requesting message provided from a reutterance request unit 390 of FIG. 3, and the system's previous utterance sentence. A sketch of such a turn-bounded history follows.
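- To make the storage behavior concrete, the following is a minimal sketch of a turn-bounded dialogue history; the class name, method names, and turn limit are illustrative assumptions rather than elements of the patent.

```python
from collections import deque

class DialogueHistory:
    """Stores the most recent dialogue turns between the system and the user.

    A turn ends each time the source (system or user) of the dialogue
    changes, so max_turns bounds how far back the history reaches.
    """

    def __init__(self, max_turns: int = 6):
        self.turns = deque(maxlen=max_turns)  # oldest turns drop off automatically

    def add_turn(self, source: str, sentence: str) -> None:
        # sentence may be a final recognition result, an N-best candidate,
        # a reutterance requesting message, or a system utterance sentence
        self.turns.append((source, sentence))

    def latest_sentence(self) -> str:
        # The most recent sentence is what activates the maps of FIG. 1
        return self.turns[-1][1] if self.turns else ""

history = DialogueHistory(max_turns=6)
history.add_turn("system", "Achime Boja")   # sentences from Table 1 below
history.add_turn("user", "Achimi Masisda")
```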
- The semantic map 130 is a map formed by clustering word-like units depending on semantic correlation. The semantic map 130 is activated by the word-like units forming the latest dialogue sentence in the dialogue history stored in the dialogue history storage unit 110. The semantic map 130 extracts, for each word-like unit in the latest dialogue sentence, at least one word-like unit having a high semantic correlation with it, and generates a first candidate group formed of the plurality of word-like units so extracted.
- The acoustic map 150 is a map formed by clustering word-like units depending on acoustic similarity. The acoustic map 150 is activated by the word-like units activated in the semantic map 130 and by the word-like units forming the latest dialogue sentence in the dialogue history stored in the dialogue history storage unit 110. The acoustic map 150 extracts, for each word-like unit in the latest dialogue sentence, at least one acoustically similar word-like unit, and generates a second candidate group formed of the plurality of word-like units so extracted.
- In the semantic map 130 and the acoustic map 150, the dialogue sentence of the user most recently recognized by the computer and the dialogue sentence most recently uttered by the computer, among the dialogue history stored in the dialogue history storage unit 110, may be received after being separated into their respective word-like units.
- The grammar network construction unit 170 builds a grammar network either by randomly combining the word-like units included in the first candidate group provided by the semantic map 130 with the word-like units included in the second candidate group provided by the acoustic map 150, or by combining them in a variety of ways extracted from a corpus, as sketched below.
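- As a rough sketch of the two-stage activation and combination just described, the following uses plain dictionaries in place of trained maps; the toy entries (echoing Tables 3 and 4 below) and the union-based combination are illustrative assumptions, not the patent's trained data.

```python
# Toy stand-ins for the semantic map 130 and acoustic map 150: each maps a
# word-like unit to its cluster neighbors.
semantic_map = {"Bae (stomach)": ["Apeuda (sick)"],
                "Bae (pear)": ["Masisda (tasty)"]}
acoustic_map = {"Bae (stomach)": ["Bae (ship)", "Bae (pear)"],
                "Bae (pear)": ["Bae (ship)", "Bae (stomach)"]}

def first_candidate_group(sentence_units):
    # Word-like units semantically correlated with the latest sentence
    return {w for u in sentence_units for w in semantic_map.get(u, [])}

def second_candidate_group(sentence_units, first_group):
    # Acoustically similar units for the sentence units and the first group
    seeds = set(sentence_units) | first_group
    return {w for u in seeds for w in acoustic_map.get(u, [])}

def build_grammar_network(sentence_units):
    g1 = first_candidate_group(sentence_units)
    g2 = second_candidate_group(sentence_units, g1)
    # The construction unit combines the two candidate groups; a simple
    # union of word-like units stands in for the combined network here.
    return set(sentence_units) | g1 | g2

print(build_grammar_network(["Bae (stomach)"]))
```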
- FIG. 2 is a block diagram explaining a process of generating the semantic map 130 and the acoustic map 150 illustrated in FIG. 1, and includes a dialogue sentence corpus 210, a semantic map generation unit 230, and an acoustic map generation unit 250.
- The dialogue sentence corpus 210 stores all dialogue contents that can be used between a system and a user, or between persons, arranged as sequential dialogue sentences (or partial sentences) in a database. Dialogue sentences may also be formed and stored separately for each domain, and a variety of usages of each word may be included when forming a dialogue sentence. Here, a word-like unit is a word formed of one or more syllables, or a string of words. The word-like unit serves as the basic element forming each dialogue sentence, and it carries a single meaning and a single pronunciation. Accordingly, a word-like unit cannot be divided further, or combined with other elements, unless its meaning and pronunciation are maintained. Also, only one pairing of an identical meaning and an identical pronunciation is defined. When words with identical pronunciation have meanings even slightly different from each other (for example, homonyms, homophones, homographs, and polysemies), all of the words are arranged and defined as different elements. Likewise, when words with the same meaning have pronunciations even slightly different from each other (for example, dialect forms and abbreviations), all are arranged and defined as different elements.
- The semantic map generation unit 230 selects dialogue sentences one at a time, sequentially, from the dialogue contents stored in the dialogue sentence corpus 210. The semantic map generation unit 230 sets, as a training unit, at least one dialogue sentence positioned before the selected dialogue sentence and at least one dialogue sentence positioned after it. Within the training unit, word-like units occurring adjacent to each word-like unit are taken to have high semantic correlations with it. By performing clustering or classifier training over all dialogue sentences included in the dialogue sentence corpus 210 with these semantic correlations taken into account, a semantic map is generated. For the clustering or classifier training, a variety of algorithms, such as a Kohonen network, vector quantization, a Bayesian network, an artificial neural network, and a Bayesian tree, can be used.
- Meanwhile, a method of quantitatively measuring a semantic distance between word-like units in the semantic map generation unit 230 will now be explained. Basically, a co-occurrence rate is employed as the distance measure used when a semantic map is generated from the dialogue sentence corpus 210 through the semantic map generation unit 230. Taking a sentence (or part of a sentence) from the dialogue sentence corpus 210 at a current point in time t as a center, a window is defined to cover the sentences from t−1 to t+1, including the sentence at t. In this case, one window includes three sentences. More generally, t−1 can be t−n and t+1 can be t+n, where n may be any value from 1 to 7, although it is not limited to these numbers; the maximum of 7 reflects the limit of human short-term memory, which is about 7 units.
- Word-like units co-occurring in one window are counted respectively. For example, suppose the sentence "Ye Kuraeyo" is included in a window. Since this sentence includes the two word-like units "Ye (yes)" and "Kuraeyo (right)", the pair "Ye (yes): Kuraeyo (right)" is counted once, and the pair "Kuraeyo (right): Ye (yes)" is also counted once. The frequencies of these co-occurrences are recorded continuously and finally totaled over the entire contents of the corpus. That is, the same counting operation is performed repeatedly while the window of constant size is moved over the entire corpus one step at a time. When the counting over the entire corpus is finished, a count value (an integer) is obtained for each pair among the entire plurality of word-like units. If this integer value is divided by the total sum of all count values, each pair of word-like units has a fractional value between 0.0 and 1.0. The distance between a given word-like unit A and another word-like unit B is then such a fractional value. If this value is 0.0, the two word-like units never occurred together; if it is 1.0, only this pair exists in the entire corpus and no other possible pair has ever occurred. As a result, the values for most pairs are values less than 1.0 and greater than 0.0, and the values of all pairs sum to 1.0.
- The co-occurrence rate described above amounts to converting the important semantic relations defined in ordinary linguistics into quantitative amounts. That is, antonyms, synonyms, similar words, super-concept words, sub-concept words, and part-concept words are all included, and even frequently occurring interjections are included. Interjections, in particular, have larger semantic-distance values with a larger variety of word-like units. Particles, meanwhile, occur adjacent to only predetermined sentence elements; in the case of the Korean language, for example, particles occur only after nouns. In conventional technology, such linguistic knowledge had to be defined manually, item by item. According to the present invention, however, if dialogue sentences are properly collected in the dialogue sentence corpus 210, words are arranged automatically and the quantitative distance can be measured. As a result, a grammar network appropriate to the flow of the dialogue, that is, the discourse, is generated, so that the user's utterance can be predicted. A sketch of this computation follows.
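- The window-based counting and normalization just described can be illustrated with a short sketch; the three-sentence toy corpus, its romanized tokens, and n = 1 are assumptions for demonstration.

```python
from collections import Counter
from itertools import permutations

# Toy corpus: each entry is one dialogue sentence, already separated into
# word-like units (the romanized tokens follow the patent's examples).
corpus = [["Ye", "Kuraeyo"],
          ["Nuni", "Onda"],
          ["Bami", "Eudupda"]]

def cooccurrence_rates(sentences, n=1):
    """Count ordered pairs of word-like units co-occurring in each window of
    sentences t-n..t+n, then divide by the total so all pairs sum to 1.0."""
    counts = Counter()
    for t in range(len(sentences)):
        window = [u for s in sentences[max(0, t - n):t + n + 1] for u in s]
        for a, b in permutations(window, 2):   # "A: B" and "B: A" each count
            counts[(a, b)] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

rates = cooccurrence_rates(corpus, n=1)
print(rates[("Ye", "Kuraeyo")])  # a fractional value between 0.0 and 1.0
```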
- The acoustic map generation unit 250 selects dialogue sentences one at a time, sequentially, from the dialogue sentences stored in the dialogue sentence corpus 210. The acoustic map generation unit 250 matches each word-like unit included in the selected dialogue sentence with at least one word-like unit having an identical pronunciation but a different meaning according to usage, or with at least one word-like unit having a different pronunciation but an identical meaning. Then, with respect to acoustic similarity, semantic or pronunciation indexes are assigned to the word-like units matched with each word-like unit, and an acoustic map is generated by performing clustering or classifier training in the same manner as in the semantic map generation unit 230. As an example of a method of quantitatively measuring an acoustic distance between word-like units in the acoustic map generation unit 250, the method disclosed in Korean Patent Laid-Open Application No. 2001-0073506 (title of the invention: A method of measuring a global similarity degree between Korean character strings) can be applied.
- An example of a semantic map generated in the semantic map generation unit 230 and an acoustic map generated in the acoustic map generation unit 250 will now be explained, assuming that the dialogue sentence corpus 210 includes the usage examples shown in the following Table 1:
TABLE 1
Nadal, Natgari, Nannoko Giyeogja, Byeongi Nasda, Natgwa Bam, Jigwiga Natda, Museun Nacheuro Bona, Agireul Nata, Saekkireul Nata, Baetago Badae, Baega Apeuda, Baega Masita, Maltada, Malgwa Geul, Beore Ssoida, Beoreul Batda, Nuni Apeuda, Nuni Onda, Bami Masita, Bami Eudupda, Dariga Apeuda, Darireul Geonneoda, Achime Boja, Achimi Masisda.
- A total of 45 word-like units can be used to form the following Table 2:
TABLE 2
Nad (grain), Al (egg), Gari (stack), Nas (sickle), Nota (put), Giyeog (Giyeog), Ja (letter), Nad (recover), Byeong (sickness), Nad (day), Bam (night), Nad (low), Jigwi (position), Nad (face), Boda (see), Nad (piece), Gae (unit), Nad (bear), Agi (baby), Saekki (young), Al (egg), Bae (ship), Bada (sea), Bae (stomach), Apeuda (sick), Bae (pear), Masisda (tasty), Mal (horse), Tada (ride), Mal (language), Keul (writing), Beol (bee), Ssoda (bite), Beol (punishment), Badda (get), Nun (eye), Apeuda (sick)*, Nun (snow), Oda (come), Bam (chestnut), Masisda (tasty)*, Bam (night), Eudupda (dark), Dari (leg), Apeuda (sick)**, Dari (bridge), Geonneoda (cross), Achim (morning), Boda (see)*, Achim (breakfast), Masisda (tasty)**
(Here, * and ** indicate redundancy)
- By using the word-like units shown in Table 2, an acoustic map containing relations between pronunciations and polymorphemes, as shown in the following Table 3, and a semantic map containing relations between polymorphemes, as shown in Table 4, are generated.
TABLE 3
/Gae/: Gae (unit)
/Geul/: Geul (writing)
/Nad/: Nad (grain), Nas (recover), Nad (day), Nad (low), Nad (face), Nad (piece), Nad (bear)
/Nun/: Nun (eye), Nun (snow)
/Mal/: Mal (horse), Mal (language)
/Bam/: Bam (chestnut), Bam (night)
/Bae/: Bae (ship), Bae (stomach), Bae (pear)
/Beol/: Beol (bee), Beol (punishment)
/Byeong/: Byeong (sickness)
/Al/: Al (egg)
/Ja/: Ja (letter)
/Gari/: Gari (stack)
/Gyeok/: Gyeok (Gyeok)
/Nota/, /Nodda/: Nota (put)
/Dari/: Dari (leg), Dari (bridge)
/Bada/: Bada (sea)
/Badda/: Badda (get)
/Boda/: Boda (see)
/Jigwi/, /Jigi/: Jigwi (position)
/Saekki/: Saekki (young)
/Agi/: Agi (baby)
/Achim/: Achim (morning), Achim (breakfast)
/Oda/: Oda (come)
/Tada/: Tada (ride)
/Ssoda/: Ssoda (bite)
/Geonneoda/: Geonneoda (cross)
/Masisda/, /Masidda/: Masisda (tasty)
/Apeuda/, /Apuda/: Apeuda (sick)
/Eodupda/: Eodupda (dark)
TABLE 4
Nad (grain) - Al (egg)
Nad (grain) - Gari (stack)
Nad (sickle) - Nota (put) . . . Gyeok (Gyeok) - Ja (letter)
Byeong (sickness) - Nas (recover)
Nad (day) = Bam (night)
Jigwi (position) - Nad (low)
Nad (face) - Boda (see)
Nad (piece) - Gae (unit)
Agi (baby) - Nad (bear)
Saekki (young) - Nad (bear)
Al (egg) - Nad (bear)
Bae (ship) = Bada (sea)
Bae (stomach) - Apeuda (sick)
Bae (pear) - Masisda (tasty)
Mal (horse) - Tada (ride)
Mal (language) = Geul (writing)
Beol (bee) - Ssoda (bite)
Beol (punishment) - Badda (get)
Nun (eye) - Apeuda (sick)
Nun (snow) - Oda (come)
Bam (chestnut) - Masisda (tasty)
Bam (night) - Eodupda (dark)
Dari (leg) - Apeuda (sick)
Dari (bridge) - Geonneoda (cross)
Achim (morning) - Boda (see)
Achim (breakfast) - Masisda (tasty)
- In Table 3, '/•/' indicates a pronunciation; in Table 4, '-' indicates an adjacent relation, '=' indicates a relation that has nothing to do with utterance order, and '. . .' indicates a relation that may be adjacent or may be skipped.
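- To show how the two maps act together at recognition time, the following hedged sketch encodes a few Table 3 and Table 4 entries directly; the dictionary layout and lookup functions are illustrative assumptions.

```python
# A few semantic relations from Table 4 ('-' and '=' alike)
semantic_relations = [("Bae (ship)", "Bada (sea)"),
                      ("Bae (stomach)", "Apeuda (sick)"),
                      ("Bae (pear)", "Masisda (tasty)")]
# The corresponding pronunciation entry from Table 3
acoustic_index = {"/Bae/": ["Bae (ship)", "Bae (stomach)", "Bae (pear)"]}

def semantic_neighbors(unit):
    # Word-like units related to the given unit, in either direction
    return [b if a == unit else a
            for a, b in semantic_relations if unit in (a, b)]

# Hearing /Bae/ activates every homophone, and each homophone pulls in its
# own semantic neighborhood, which later words can then disambiguate:
for word in acoustic_index["/Bae/"]:
    print(word, "->", semantic_neighbors(word))
```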
- FIG. 3 is a block diagram illustrating a structure of a dialogue speech recognition apparatus according to an exemplary embodiment of the present invention. The dialogue speech recognition apparatus includes a feature extraction unit 310, a grammar network generation unit 330, a loading unit 350, a searching unit 370, an acoustic model 375, an utterance verification unit 380, and a user reutterance request unit 390.
- Referring to FIG. 3, the feature extraction unit 310 receives a voice signal from a user, and converts the voice signal into a feature vector string useful for speech recognition, such as Mel-frequency cepstral coefficients (MFCCs).
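- As a sketch of this conversion, the following uses the librosa library; the patent names no extraction toolkit, so the library choice, the placeholder file path, and the parameter values are assumptions.

```python
import librosa

# Convert a recorded voice signal into a string of MFCC feature vectors.
# 16 kHz sampling and 13 coefficients are common choices, not patent values.
y, sr = librosa.load("user_utterance.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, n_frames)
feature_vector_string = mfcc.T   # one 13-dimensional vector per frame
print(feature_vector_string.shape)
```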
- The grammar network generation unit 330 receives the most recently generated dialogue history and generates a grammar network by activating the semantic map (130 of FIG. 1) and the acoustic map (150 of FIG. 1) using the received dialogue history. The dialogue history includes at least one combination among a plurality of candidate recognition results of the user's previous voice input provided from the searching unit 370, a final recognition result of the user's previous voice input provided from the utterance verification unit 380, a reutterance requesting message provided from the reutterance request unit 390, and the system's previous utterance sentence. The detailed structure and specific operations of the grammar network generation unit 330 are the same as described above with reference to FIG. 1.
- The loading unit 350 expresses phoneme combination information for the phonemes included in the grammar network generated by the grammar network generation unit 330 in a structure such as a context free grammar, and loads it into the searching unit 370.
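- One way to picture this loading step: each word in the network is expanded into its phoneme sequence and expressed as context free grammar rules. The rule format and the tiny lexicon below are illustrative assumptions, not the patent's actual data structure.

```python
# Hypothetical pronunciation lexicon for two words in the grammar network
lexicon = {"Bae": ["B", "AE"], "Bada": ["B", "AA", "D", "AA"]}

def to_cfg(grammar_words):
    """Express each word as a CFG rule over its phoneme sequence, plus a
    start rule that allows any word in the network."""
    rules = [("S", [w.upper()]) for w in grammar_words]
    rules += [(w.upper(), lexicon[w]) for w in grammar_words]
    return rules

for lhs, rhs in to_cfg(["Bae", "Bada"]):
    print(f"{lhs} -> {' '.join(rhs)}")
# S -> BAE / S -> BADA / BAE -> B AE / BADA -> B AA D AA
```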
- The searching unit 370 receives, from the feature extraction unit 310, the feature vector string for the currently input voice signal, and performs a Viterbi search over the grammar network formed of phoneme models extracted from the acoustic model 375, based on the phoneme combination information loaded from the loading unit 350, in order to find candidate recognition sentences (N-best) formed of matching word strings.
utterance verification unit 380 performs utterance verification for the candidate recognition sentences provided by the searchingunit 370. At this time, without using a separate language model, the utterance verification can be performed by using the grammar network generated according to an exemplary embodiment of the present invention. That is, if similarity calculated in relation to one among the candidate recognition sentences by using the grammar network is equal to or greater than a threshold, it is determined that the utterance verification of the current user voice input is successful. If each similarity calculated in relation to all the candidate recognition sentences is less than the threshold, it is determined that the utterance verification of the current user voice input is failed. In relation to the utterance verification, the method disclosed in the Korean Patent Application No. 2004-0115069, which corresponds to U.S. patent application Ser. No. 11/263,826 (title of the invention: method and apparatus for determining the possibility of pattern recognition of a time series signal), can be applied. - When utterance verification is failed for all candidate recognition sentences in the
utterance verification unit 380, the userreutterance request unit 390 may display text requesting the user to utter again, on a display (not shown), such as an LCD display, or may generate a system utterance sentence requesting the user to utter again through a speaker (not shown). -
FIG. 4 is a flowchart illustrating the operations of a speech recognition method according to an exemplary embodiment of the present invention.
Referring to FIG. 4, the most recently generated dialogue history is received in operation 410. The dialogue history includes a first dialogue sentence that was most recently spoken by the user and recognized by the system, and a second dialogue sentence that was most recently spoken by the system. The first dialogue sentence includes at least one combination of a plurality of candidate recognition results of the user's previous voice input provided from the searching unit 370 and a final recognition result of the user's previous voice input provided from the utterance verification unit 380. The second dialogue sentence includes at least one combination of a reutterance requesting message provided from the reutterance request unit 390 and the system's previous utterance sentence.
operation 420, the semantic map (130 ofFIG. 1 ) and the acoustic map (150 ofFIG. 1 ) are activated by using the dialogue history received inoperation 410, and a grammar network is generated by combining randomly or in a variety of ways extracted from the corpus, a plurality of word-like units included in a first candidate group provided by thesemantic map 130, and a plurality of word-like units included in a second candidate group provided by theacoustic map 150. - In
operation 430, phoneme combination information in relation to phonemes included in the grammar network generated inoperation 420, is expressed in a structure such as a context free grammar, and is loaded for a search, such as a Viterbi search. - In
operation 440, the Viterbi search is performed for the grammar network formed of phoneme models extracted from theacoustic model 375, based on the phoneme combination information loaded inoperation 430 in relation to the feature vector string for the current voice signal, which is input inoperation 410, and by doing so, candidate recognition sentences (N-Best) formed of matching word strings are searched for. - In
operation 450, it is determined whether or not there is a candidate recognition sentence among the candidate recognition sentences, for which utterance verification is successful according to the search result ofoperation 440. - In
operation 460, if the determination result of theoperation 450 indicates that there is a candidate recognition sentence whose utterance verification is successful, the recognition sentence is output of the system, and inoperation 470, if there is no candidate recognition sentence whose utterance verification is successful, the user is requested to utter again. - In addition to the above-described exemplary embodiments, exemplary embodiments of the present invention can also be implemented by executing computer readable code/instructions in/on a medium, e.g., a computer readable medium. The medium can correspond to any medium/media permitting the storing and/or transmission of the computer readable code.
In addition to the above-described exemplary embodiments, exemplary embodiments of the present invention can also be implemented by executing computer readable code/instructions in/on a medium, e.g., a computer readable medium. The medium can correspond to any medium/media permitting the storing and/or transmission of the computer readable code.

The computer readable code/instructions can be recorded/transferred in/on a medium in a variety of ways, with examples of the medium including magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.), optical recording media (e.g., CD-ROMs or DVDs), random access memory media, and storage/transmission media such as carrier waves. Examples of storage/transmission media include wired or wireless transmission, such as transmission through the Internet. The medium/media may also be a distributed network, so that the computer readable code/instructions are stored/transferred and executed in a distributed fashion. The computer readable code/instructions may be executed by one or more processors.
According to the present invention as described above, dialogue speech recognition is performed using a grammar network for speech recognition that is adaptively and automatically generated to reflect the contents of previous dialogues, so that even when the user utters outside a standard grammar structure, the utterance can be easily recognized. Accordingly, dialogue can proceed smoothly and naturally.
Furthermore, as a grammar network generator for a conversational or dialogue-driven speech recognition apparatus, the present invention can replace the conventional n-gram, SAPI, VXML, and SALT methods, and it enables a higher dialogue recognition rate through its user speech prediction function.
Although a few exemplary embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these exemplary embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2005-0009144 | 2005-02-01 | ||
KR20050009144 | 2005-02-01 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20060173686A1 (en) | 2006-08-03
US7606708B2 (en) | 2009-10-20
Family
ID=36757750
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/344,163 Expired - Fee Related US7606708B2 (en) | 2005-02-01 | 2006-02-01 | Apparatus, method, and medium for generating grammar network for use in speech recognition and dialogue speech recognition |
Country Status (2)
Country | Link |
---|---|
US (1) | US7606708B2 (en) |
KR (1) | KR100718147B1 (en) |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7398209B2 (en) | 2002-06-03 | 2008-07-08 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
US7693720B2 (en) | 2002-07-15 | 2010-04-06 | Voicebox Technologies, Inc. | Mobile systems and methods for responding to natural language speech utterance |
US7640160B2 (en) | 2005-08-05 | 2009-12-29 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
US7620549B2 (en) | 2005-08-10 | 2009-11-17 | Voicebox Technologies, Inc. | System and method of supporting adaptive misrecognition in conversational speech |
US7949529B2 (en) | 2005-08-29 | 2011-05-24 | Voicebox Technologies, Inc. | Mobile systems and methods of supporting natural language human-machine interactions |
EP1934971A4 (en) | 2005-08-31 | 2010-10-27 | Voicebox Technologies Inc | Dynamic speech sharpening |
US8073681B2 (en) | 2006-10-16 | 2011-12-06 | Voicebox Technologies, Inc. | System and method for a cooperative conversational voice user interface |
US7818176B2 (en) | 2007-02-06 | 2010-10-19 | Voicebox Technologies, Inc. | System and method for selecting and presenting advertisements based on natural language processing of voice-based input |
US8140335B2 (en) | 2007-12-11 | 2012-03-20 | Voicebox Technologies, Inc. | System and method for providing a natural language voice user interface in an integrated voice navigation services environment |
US8589161B2 (en) | 2008-05-27 | 2013-11-19 | Voicebox Technologies, Inc. | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US9305548B2 (en) | 2008-05-27 | 2016-04-05 | Voicebox Technologies Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US8326637B2 (en) | 2009-02-20 | 2012-12-04 | Voicebox Technologies, Inc. | System and method for processing multi-modal device interactions in a natural language voice services environment |
US9502025B2 (en) | 2009-11-10 | 2016-11-22 | Voicebox Technologies Corporation | System and method for providing a natural language content dedication service |
US9171541B2 (en) | 2009-11-10 | 2015-10-27 | Voicebox Technologies Corporation | System and method for hybrid processing in a natural language voice services environment |
US10957310B1 (en) | 2012-07-23 | 2021-03-23 | Soundhound, Inc. | Integrated programming framework for speech and text understanding with meaning parsing |
US20140324528A1 (en) * | 2013-03-14 | 2014-10-30 | Adaequare Inc. | Computerized System and Method for Determining an Action's Relevance to a Transaction |
KR101905827B1 (en) * | 2013-06-26 | 2018-10-08 | 한국전자통신연구원 | Apparatus and method for recognizing continuous speech |
US11295730B1 (en) | 2014-02-27 | 2022-04-05 | Soundhound, Inc. | Using phonetic variants in a local context to improve natural language understanding |
WO2016044321A1 (en) | 2014-09-16 | 2016-03-24 | Min Tang | Integration of domain information into state transitions of a finite state transducer for natural language processing |
EP3195145A4 (en) | 2014-09-16 | 2018-01-24 | VoiceBox Technologies Corporation | Voice commerce |
US9747896B2 (en) | 2014-10-15 | 2017-08-29 | Voicebox Technologies Corporation | System and method for providing follow-up responses to prior natural language inputs of a user |
US10614799B2 (en) | 2014-11-26 | 2020-04-07 | Voicebox Technologies Corporation | System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance |
US10431214B2 (en) | 2014-11-26 | 2019-10-01 | Voicebox Technologies Corporation | System and method of determining a domain and/or an action related to a natural language input |
US9870196B2 (en) * | 2015-05-27 | 2018-01-16 | Google Llc | Selective aborting of online processing of voice inputs in a voice-enabled electronic device |
US9922138B2 (en) | 2015-05-27 | 2018-03-20 | Google Llc | Dynamically updatable offline grammar model for resource-constrained offline device |
US9966073B2 (en) * | 2015-05-27 | 2018-05-08 | Google Llc | Context-sensitive dynamic update of voice to text model in a voice-enabled electronic device |
US10083697B2 (en) * | 2015-05-27 | 2018-09-25 | Google Llc | Local persisting of data for selectively offline capable voice action in a voice-enabled electronic device |
US9940577B2 (en) * | 2015-07-07 | 2018-04-10 | Adobe Systems Incorporated | Finding semantic parts in images |
US9836527B2 (en) | 2016-02-24 | 2017-12-05 | Google Llc | Customized query-action mappings for an offline grammar model |
US10331784B2 (en) | 2016-07-29 | 2019-06-25 | Voicebox Technologies Corporation | System and method of disambiguating natural language processing requests |
KR102102388B1 (en) * | 2017-11-20 | 2020-04-21 | 주식회사 마인즈랩 | System for generating a sentence for machine learning and method for generating a similar sentence using thereof |
JP7401165B2 (en) | 2018-05-23 | 2023-12-19 | ヴァミィヤ マニュファクチャリング カンパニー | Crushing machine |
US10861456B2 (en) * | 2018-09-17 | 2020-12-08 | Adobe Inc. | Generating dialogue responses in end-to-end dialogue systems utilizing a context-dependent additive recurrent neural network |
CN109920432A (en) * | 2019-03-05 | 2019-06-21 | 百度在线网络技术(北京)有限公司 | A kind of audio recognition method, device, equipment and storage medium |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5615296A (en) * | 1993-11-12 | 1997-03-25 | International Business Machines Corporation | Continuous speech recognition and voice response system and method to enable conversational dialogues with microprocessors |
US5748841A (en) * | 1994-02-25 | 1998-05-05 | Morin; Philippe | Supervised contextual language acquisition system |
US5774628A (en) * | 1995-04-10 | 1998-06-30 | Texas Instruments Incorporated | Speaker-independent dynamic vocabulary and grammar in speech recognition |
US6067520A (en) * | 1995-12-29 | 2000-05-23 | Lee And Li | System and method of recognizing continuous mandarin speech utilizing chinese hidden markou models |
US6154722A (en) * | 1997-12-18 | 2000-11-28 | Apple Computer, Inc. | Method and apparatus for a speech recognition system language model that integrates a finite state grammar probability and an N-gram probability |
US6167377A (en) * | 1997-03-28 | 2000-12-26 | Dragon Systems, Inc. | Speech recognition language models |
US6324513B1 (en) * | 1999-06-18 | 2001-11-27 | Mitsubishi Denki Kabushiki Kaisha | Spoken dialog system capable of performing natural interactive access |
US20020013705A1 (en) * | 2000-07-28 | 2002-01-31 | International Business Machines Corporation | Speech recognition by automated context creation |
US20020087312A1 (en) * | 2000-12-29 | 2002-07-04 | Lee Victor Wai Leung | Computer-implemented conversation buffering method and system |
US6418431B1 (en) * | 1998-03-30 | 2002-07-09 | Microsoft Corporation | Information retrieval and speech recognition based on language models |
US20020178005A1 (en) * | 2001-04-18 | 2002-11-28 | Rutgers, The State University Of New Jersey | System and method for adaptive language understanding by computers |
US6499013B1 (en) * | 1998-09-09 | 2002-12-24 | One Voice Technologies, Inc. | Interactive user interface using speech recognition and natural language processing |
US20040098263A1 (en) * | 2002-11-15 | 2004-05-20 | Kwangil Hwang | Language model for use in speech recognition |
US20050043953A1 (en) * | 2001-09-26 | 2005-02-24 | Tiemo Winterkamp | Dynamic creation of a conversational system from dialogue objects |
US6934683B2 (en) * | 2001-01-31 | 2005-08-23 | Microsoft Corporation | Disambiguation language model |
US7120582B1 (en) * | 1999-09-07 | 2006-10-10 | Dragon Systems, Inc. | Expanding an effective vocabulary of a speech recognition system |
US7177814B2 (en) * | 2002-02-07 | 2007-02-13 | Sap Aktiengesellschaft | Dynamic grammar for voice-enabled applications |
US7299181B2 (en) * | 2004-06-30 | 2007-11-20 | Microsoft Corporation | Homonym processing in the context of voice-activated command systems |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100404852B1 (en) * | 1996-08-03 | 2004-02-25 | 엘지전자 주식회사 | Speech recognition apparatus having language model adaptive function and method for controlling the same |
US20020032564A1 (en) * | 2000-04-19 | 2002-03-14 | Farzad Ehsani | Phrase-based dialogue modeling with particular application to creating a recognition grammar for a voice-controlled user interface |
KR100342785B1 (en) | 2000-01-17 | 2002-07-04 | 정명식 | Method for measuring global distance between character strings of the korean language |
CN1232948C (en) * | 2001-02-28 | 2005-12-21 | 声音鉴析公司 | Natural language query system for accessing information system |
WO2002073449A1 (en) * | 2001-03-14 | 2002-09-19 | At & T Corp. | Automated sentence planning in a task classification system |
KR20030010979A (en) * | 2001-07-28 | 2003-02-06 | 삼성전자주식회사 | Continuous speech recognization method utilizing meaning-word-based model and the apparatus |
US7143035B2 (en) * | 2002-03-27 | 2006-11-28 | International Business Machines Corporation | Methods and apparatus for generating dialog state conditioned language models |
KR100484493B1 (en) * | 2002-12-12 | 2005-04-20 | 한국전자통신연구원 | Spontaneous continuous speech recognition system and method using mutiple pronunication dictionary |
KR20050049207A (en) * | 2003-11-21 | 2005-05-25 | 한국전자통신연구원 | Dialogue-type continuous speech recognition system and using it endpoint detection method of speech |
KR101002135B1 (en) * | 2003-12-27 | 2010-12-16 | 주식회사 케이티 | Transfer method with syllable as a result of speech recognition |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070219974A1 (en) * | 2006-03-17 | 2007-09-20 | Microsoft Corporation | Using generic predictive models for slot values in language modeling |
US20070239637A1 (en) * | 2006-03-17 | 2007-10-11 | Microsoft Corporation | Using predictive user models for language modeling on a personal device |
US7752152B2 (en) | 2006-03-17 | 2010-07-06 | Microsoft Corporation | Using predictive user models for language modeling on a personal device with user behavior models based on statistical modeling |
US8032375B2 (en) | 2006-03-17 | 2011-10-04 | Microsoft Corporation | Using generic predictive models for slot values in language modeling |
US20070239454A1 (en) * | 2006-04-06 | 2007-10-11 | Microsoft Corporation | Personalizing a context-free grammar using a dictation language model |
US20070239453A1 (en) * | 2006-04-06 | 2007-10-11 | Microsoft Corporation | Augmenting context-free grammars with back-off grammars for processing out-of-grammar utterances |
US7689420B2 (en) * | 2006-04-06 | 2010-03-30 | Microsoft Corporation | Personalizing a context-free grammar using a dictation language model |
WO2008128423A1 (en) * | 2007-04-19 | 2008-10-30 | Shenzhen Institute Of Advanced Technology | An intelligent dialog system and a method for realization thereof |
WO2010117688A2 (en) * | 2009-03-30 | 2010-10-14 | Microsoft Corporation | Adaptation for statistical language model |
WO2010117688A3 (en) * | 2009-03-30 | 2011-01-13 | Microsoft Corporation | Adaptation for statistical language model |
US20100250251A1 (en) * | 2009-03-30 | 2010-09-30 | Microsoft Corporation | Adaptation for statistical language model |
CN102369567A (en) * | 2009-03-30 | 2012-03-07 | 微软公司 | Adaptation for statistical language model |
US8798983B2 (en) | 2009-03-30 | 2014-08-05 | Microsoft Corporation | Adaptation for statistical language model |
US20110224982A1 (en) * | 2010-03-12 | 2011-09-15 | c/o Microsoft Corporation | Automatic speech recognition based upon information retrieval methods |
US20140180692A1 (en) * | 2011-02-28 | 2014-06-26 | Nuance Communications, Inc. | Intent mining via analysis of utterances |
US11437026B1 (en) * | 2019-11-04 | 2022-09-06 | Amazon Technologies, Inc. | Personalized alternate utterance generation |
CN111178062A (en) * | 2019-12-02 | 2020-05-19 | 云知声智能科技股份有限公司 | Man-machine interaction multi-turn dialogue corpus oriented acceleration labeling method and device |
US11915697B2 (en) | 2020-11-11 | 2024-02-27 | Samsung Electronics Co., Ltd. | Electronic device, system and control method thereof |
Also Published As
Publication number | Publication date |
---|---|
KR20060088512A (en) | 2006-08-04 |
KR100718147B1 (en) | 2007-05-14 |
US7606708B2 (en) | 2009-10-20 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: HWANG, KWANGIL; REEL/FRAME: 017539/0070. Effective date: 20060201
| STCF | Information on status: patent grant | Free format text: PATENTED CASE
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
| FEPP | Fee payment procedure | Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
| FPAY | Fee payment | Year of fee payment: 4
| FPAY | Fee payment | Year of fee payment: 8
| FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
| LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
| STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362
| FP | Lapsed due to failure to pay maintenance fee | Effective date: 20211020