CA2024382C - Method and apparatus for finding the best splits in a decision tree for a language model for a speech recognizer - Google Patents

Method and apparatus for finding the best splits in a decision tree for a language model for a speech recognizer

Info

Publication number
CA2024382C
CA2024382C CA002024382A CA2024382A
Authority
CA
Canada
Prior art keywords
feature
predictor
value
values
sets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002024382A
Other languages
French (fr)
Other versions
CA2024382A1 (en)
Inventor
Arthur Nadas
David Nahamoo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of CA2024382A1 publication Critical patent/CA2024382A1/en
Application granted granted Critical
Publication of CA2024382C publication Critical patent/CA2024382C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G10L15/18 - Speech classification or search using natural language modelling
    • G10L15/183 - Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L15/19 - Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
    • G10L15/197 - Probabilistic grammars, e.g. word n-grams

Abstract

ABSTRACT OF THE DISCLOSURE

A method and apparatus for finding the best or near best binary classification of a set of observed events according to a predictor feature X, so as to minimize the uncertainty in the value of a category feature Y. Each feature has three or more possible values. First, the predictor feature value and the category feature value of each event are measured. From the measured predictor feature values and category feature values, the joint probabilities of each category feature value and each predictor feature value are estimated. The events are then split, arbitrarily, into two sets of predictor feature values. From the estimated joint probabilities, the conditional probability of an event falling into one set of predictor feature values is calculated for each category feature value. A number of pairs of sets of category feature values are then defined, where each set SYj contains only those category feature values having the j lowest values of the conditional probability. From among these pairs of sets, an optimum pair is found having the lowest uncertainty in the value of the predictor feature.

From the optimum sets of category feature values, the conditional probability that an event falls within one set of category feature values is calculated for each predictor feature value. A number of pairs of sets of predictor feature values are defined, where each set SXi(t + 1) contains only those predictor feature values having the i lowest values of the conditional probability. From among the sets SXi(t + 1), a pair of sets is found having the lowest uncertainty in the value of the category feature. An event is then classified according to whether its predictor feature value is a member of the set of optimal predictor feature values.

Description

METHOD AND APPARATUS FOR FINDING THE BEST SPLITS IN A DECISION TREE FOR A LANGUAGE MODEL FOR A SPEECH RECOGNIZER

Background of the Invention

The invention relates to the binary classification of observed events for constructing decision trees for use in pattern recognition systems. The observed events may be, for example, spoken words or written characters. Other observed events which may be classified according to the invention include, but are not limited to, medical symptoms and radar patterns of objects.

More specifically, the invention relates to finding an optimal or near optimal binary classification of a set of observed events (i.e. a training set of observed events) for the creation of a binary decision tree. In a binary decision tree, each node of the tree has one input path and two output paths (e.g. Output Path A and Output Path B). At each node of the tree, a question is asked of the form "Is X an element of the set SX?" If the answer is "Yes", then Output Path A is followed from the node. If the answer is "No", then Output Path B is followed from the node.

In general, each observed event to be classified has a predictor feature X and a category feature Y. The predictor feature has one of M different possible values Xm, and the category feature has one of N possible values Yn, where m and n are positive integers less than or equal to M and N, respectively.

In constructing binary decision trees, it is advantageous to find the subset SXopt of SX for which the information regarding the category feature Y is maximized, and the uncertainty in the category feature Y is minimized. The answer to the question "For an observed event to be classified, is the value of the predictor feature X an element of the subset SXopt?" will then give, on average over a plurality of observed events, the maximum reduction in the uncertainty about the value of the category feature Y.

One known method of finding the best subset SXopt for minimizing the uncertainty in the value of the category feature Y is by enumerating all subsets of SX, and by calculating the information or the uncertainty in the value of the category feature Y for each subset. For a set SX having M elements, there are 2^(M-1) - 1 different possible subsets. Therefore, 2^(M-1) - 1 information or uncertainty computations would be required to find the best subset SX. This method, therefore, is not practically possible for large values of M, such as would be encountered in automatic speech recognition.

Leo Breiman et al. (Classification and Regression Trees, Wadsworth Inc., Monterey, California, 1984, pages 101-102) describe a method of finding the best subset SXopt for the special case where the category feature Y has only two possible different values Y1 and Y2 (that is, for the special case where N = 2). In this method, the complete enumeration requiring 2^(M-1) - 1 information computations can be replaced by information computations for only (M - 1) subsets.

The Breiman et al. method is based upon the fact that the best subset SXopt of the set SX is among the increasing sequence of subsets defined by ordering the predictor feature values Xm according to the increasing value of the conditional probability of one value Y1 of the category feature Y for each value Xm of the predictor feature X.

That is, the predictor feature values Xm are first ordered, in the Breiman et al. method, according to the values of P(Y1 | Xm). Next, a number of subsets SXi of set SX are defined, where each subset SXi contains only those values Xm having the i lowest values of the conditional probability P(Y1 | Xm). When there are M different values of Xm, there will be (M - 1) subsets SXi.
As described by Breiman et al., one of the subsets SXi maximizes the information, and therefore minimizes the uncertainty, in the value of the class Y.
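For the two-category case, this ordering shortcut is easy to state in code. The sketch below is a minimal Python illustration, assuming a toy joint-probability table; the table values and all variable names are ours, not Breiman et al.'s. It sorts the predictor values by P(Y1 | Xm) and scores only the (M - 1) prefix subsets:

```python
import math

# Hypothetical joint probabilities P(Xm, Yn) for the N = 2 case.
joint = {
    ("X1", "Y1"): 0.10, ("X1", "Y2"): 0.15,
    ("X2", "Y1"): 0.20, ("X2", "Y2"): 0.05,
    ("X3", "Y1"): 0.05, ("X3", "Y2"): 0.20,
    ("X4", "Y1"): 0.15, ("X4", "Y2"): 0.10,
}
xs = ["X1", "X2", "X3", "X4"]

def p_x(x):
    """Marginal probability P(Xm)."""
    return joint[(x, "Y1")] + joint[(x, "Y2")]

def two_class_split_uncertainty(subset):
    """Average binary entropy of Y given membership of X in `subset` (base-2 logs)."""
    h = 0.0
    for side in (subset, [x for x in xs if x not in subset]):
        p_side = sum(p_x(x) for x in side)
        p_y1 = sum(joint[(x, "Y1")] for x in side) / p_side
        h -= sum(p_side * p * math.log2(p) for p in (p_y1, 1.0 - p_y1) if p > 0.0)
    return h

# Order the predictor values by increasing P(Y1 | Xm) ...
ordered = sorted(xs, key=lambda x: joint[(x, "Y1")] / p_x(x))
# ... and examine only the M - 1 prefix subsets instead of all 2^(M-1) - 1 splits.
best = min((ordered[: i + 1] for i in range(len(xs) - 1)), key=two_class_split_uncertainty)
print(best)
```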

While Breiman et al. provide a shortened method for finding the best binary classification of events according to a measurement or predictor variable X for the simple case where the events fall into only two classes or categories Y1 and Y2, Breiman et al. do not describe how to find the best subset of the predictor variable X when the events fall into more than two classes or categories.

Summary of the Invention
It is an object of the invention to provide a method and apparatus for finding the optimum or near optimum binary classification of a set of observed events according to a predictor feature X having three or more different possible values Xm, so as to minimize the uncertainty in the value of the category feature Y of the observed events, where the category feature Y has three or more possible values Yn.

It is another object of the invention to provide such a classification method in which the optimum or near optimum classification of the predictor feature X can be found without enumerating all of the possible different subsets of the values of the predictor feature X, and without calculating the uncertainty in the value of the category feature Y for each of the subsets.

The invention is a method and apparatus for classifying a set of observed events. Each event has a predictor feature X and a category feature Y. The predictor feature has one of M different possible values Xm, and the category feature has one of N possible values Yn. In the method, M and N are integers greater than or equal to 3, m is an integer from 1 to M, and n is an integer from 1 to N.

According to the invention, the predictor feature value Xm and the category feature value Yn of each event in the set of observed events are measured. From the measured predictor feature values and the measured category feature values, the probability P(Xm, Yn) of occurrence of an event having a category feature value Yn and a predictor feature value Xm is estimated for each Yn and each Xm.

Next, a starting set SXopt(t) of predictor feature values Xm is selected. The variable t has any initial value.

From the estimated probabilities, the conditional probability P(SXopt(t) | Yn) that the predictor feature has a value in the set SXopt(t) when the category feature has a value Yn is calculated for each Yn.

From the conditional probabilities P(SXopt(t) | Yn), a number of pairs of sets SYj(t) and S̄Yj(t) of category feature values Yn are defined, where j is an integer from 1 to (N - 1). Each set SYj(t) contains only those category feature values Yn having the j lowest values of the conditional probability P(SXopt(t) | Yn). Each set S̄Yj(t) contains only those category feature values Yn having the (N - j) highest values of the conditional probability. Thus, S̄Yj(t) is the complement of SYj(t).

From the (N - 1) pairs of sets defined above, a single pair of sets SYopt(t) and S̄Yopt(t) is found having the lowest uncertainty in the value of the predictor feature.

Now, from the estimated event probabilities P(Xm, Yn), the conditional probability P(SYopt(t) | Xm) that the category feature has a value in the set SYopt(t) when the predictor feature has a value Xm is calculated for each value Xm. A number of pairs of sets SXi(t + 1) and S̄Xi(t + 1) of predictor feature values Xm are defined, where i is an integer from 1 to (M - 1). Each set SXi(t + 1) contains only those predictor feature values Xm having the i lowest values of the conditional probability P(SYopt(t) | Xm). Each set S̄Xi(t + 1) contains only those predictor feature values Xm having the (M - i) highest values of the conditional probability. There are (M - 1) pairs of such sets.

From the (M - 1) pairs of sets defined above, a single pair of sets SXopt(t + 1) and S̄Xopt(t + 1) is found having the lowest uncertainty in the value of the category feature.

Thereafter, an event is classified in a first class if the predictor feature value of the event is a member of the set SXopt(t + 1). An event is classified in a second class if the predictor feature value of the event is a member of the set S̄Xopt(t + 1).

In one aspect of the invention, an event is classified in the first class or the second class by producing a classification signal identifying the event as a member of the first class or the second class.

In another aspect of the invention, the method is iteratively repeated until the set SXopt(t + 1) is equal or substantially equal to the previous set SXopt(t).

Preferably, the pair of sets having the lowest uncertainty is found by calculating the uncertainty for every pair of sets.

Alternatively, the pair of sets having the lowest uncertainty may be found by calculating the uncertainty of each pair of sets in the order of increasing conditional probability. The pair of sets with the lowest uncertainty is found when the calculated uncertainty stops decreasing.

In a further aspect of the invention, the predictor feature value of an event not in the set of events is measured. The event is classified in the first class if the predictor feature value is a member of the set SXopt(t + 1), and the event is classified in the second class if the predictor feature value is a member of the set S̄Xopt(t + 1).

In the method and apparatus according to the present invention, each event may be, for example, a spoken utterance in a series of spoken utterances. The utterances may be classified according to the present invention, for example, for the purpose of producing models of similar utterances. Alternatively, for example, the utterances may be classified according to the present invention for the purpose of recognizing a spoken utterance.

In another example, each event is a spoken word in a series of spoken words. The predictor feature value of a word comprises an identification of the immediately preceding word in the series of spoken words.

The invention also relates to a method and apparatus for automatically recognizing a spoken utterance. A predictor word signal representing an uttered predictor word is compared with predictor feature signals in a decision set. If the predictor word signal is a member of the decision set, a first predicted word is output; otherwise a second predicted word is output.

The method and apparatus according to the invention are advantageous because they can identify, for a set of observed events, the subset of predictor feature values of the events which minimizes or nearly minimizes the uncertainty in the value of the category feature of the events, where the category feature has three or more possible values. The invention finds this subset of predictor feature values in an efficient manner, without requiring a complete enumeration of all possible subsets of predictor feature values.

Brief Description of the Drawings

Figure 1 is a flow chart of the method of classifying a set of observed events according to the present invention.

Figure 2 is a flow chart showing how the sets with the lowest uncertainty are found.

Figure 3 is a block diagram of an apparatus for classifying a set of observed events according to the present invention.

Figure 4 shows a decision tree which may be constructed from the classifications produced by the method and apparatus according to the present invention.

Figure 5 is a block diagram of a speech recognition system containing a word context match according to the present invention.

Figure 6 is a block diagram of a portion of the word context match of Figure 5.

Description of the Preferred Embodiments

Figure 1 is a flow chart of the method of classifying a set of observed events according to the present invention. Each event in the set of observed events has a predictor feature X and a category feature Y. The predictor feature has one of M different possible values Xm. The category feature has one of N possible values Yn. M and N are positive integers greater than or equal to 3, and need not be equal to each other. The variable m is a positive integer less than or equal to M. The variable n is a positive integer less than or equal to N.

Each event in the set of observed events may be, for example, a sequence of uttered words. For example, each event may comprise a prior word, and a recognition word following the prior word. If the predictor feature is the prior word, and the category feature is the recognition word, and if it is desired to classify the events by finding the best subset of prior words which minimizes the uncertainty in the value of the recognition word, then the classification method according to the present invention can be used to identify one or more candidate values of the recognition word when the prior word is known.

Other types of observed events having predictor feature/category features which can be classified by the method according to the present invention include, for example, medical symptoms/illnesses, radar patterns/objects, and visual patterns/characters. Other observed events can also be classified by the method of the present invention.

For the purpose of explaining the operation of the invention, the observed events will be described as prior word/recognition word events. In practice, there may be thousands of different words in a language model, and hence the predictor feature of an event will have one of thousands of different available predictor feature values. Similarly, the category feature of an event will have one of thousands of different available category feature values. In such a case, finding the best subset SXopt of the predictor feature values which minimizes the uncertainty in the value of the category feature would require the enumeration of 2^(M-1) - 1 subsets. For values of M in the thousands, this is not practical.

According to the present invention, it is only necessary to enumerate at most approximately K(M + N) subsets to find the best subset SXopt, as described below. K is a constant, for example 6. For M = N = 5,000, for example, this amounts to roughly 60,000 subsets, as against the 2^4999 - 1 subsets required by complete enumeration.
While the predictor feature and the category feature may have thousands of different possible values, for the purpose of explaining the invention an example will be described in which M = 5 and N = 5. In general, M need not equal N.

Table 1 shows an example of a predictor feature having five different possible values X1 to X5. The predictor feature (prior word) values are "travel", "credit", "special", "trade", and "business", respectively.

TABLE 1

  PREDICTOR FEATURE VALUE    PRIOR WORD (PREDICTOR WORD)
  X1                         travel
  X2                         credit
  X3                         special
  X4                         trade
  X5                         business
Table 2 shows a category feature having five different possible values Y1 to Y5. The category feature (recognition word) values are "agent", "bulletin", "management", "consultant", and "card", respectively.

TABLE 2

  CATEGORY FEATURE VALUE    RECOGNITION WORD (PREDICTED WORD)
  Y1                        agent
  Y2                        bulletin
  Y3                        management
  Y4                        consultant
  Y5                        card

Returning to Figure 1, according to the invention, the predictor feature value Xm and the category feature value Yn of each event in the set of events are measured. Table 3 shows an example of possible measurements for ten hypothetical events.
TABLE 3

  EVENT    PREDICTOR FEATURE VALUE    CATEGORY FEATURE VALUE
  1        X5                         ...
  2        X1                         Y5
  3        X3                         Y4
  4        X3                         Y1
  5        X4                         Y4
  ...      ...                        ...
  7        X2                         Y5
  ...      ...                        ...
  10       ...                        Y2
From the measured predictor feature values and the measured category feature values in Table 3, the probability P(Xm, Yn) of occurrence of an event having a category feature value Yn and a predictor feature value Xm is estimated for each Yn and each Xm. If the set of events is sufficiently large, the probability P(Xa, Yb) of occurrence of an event having a category feature value Yb and a predictor feature value Xa can be estimated as the total number of events in the set of observed events having feature values (Xa, Yb) divided by the total number of events in the set of observed events.
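A minimal sketch of this relative-frequency estimate (Python; the toy event list and the variable names are ours, not the patent's):

```python
from collections import Counter

# Measured (predictor, category) feature value pairs, one per observed event.
events = [("X1", "Y5"), ("X3", "Y4"), ("X3", "Y1"), ("X4", "Y4"), ("X2", "Y5")]

# P(Xa, Yb) = (number of events with feature values (Xa, Yb)) / (total number of events).
counts = Counter(events)
joint = {pair: n / len(events) for pair, n in counts.items()}
print(joint[("X3", "Y4")])  # 1/5 = 0.2 for this toy list
```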

Table 4 is an example of hypothetical estimates of the probabilities P(Xm, Yn).

TABLE 4  P(Xm, Yn)

       X1         X2         X3         X4         X5
  Y5   0.030616   0.024998   0.082166   0.014540   0.008761
  Y4   0.014505   0.013090   0.068134   0.054667   0.024698
  Y3   0.074583   0.048437   0.035092   0.076762   0.000654
  Y2   0.042840   0.081654   0.011595   0.055642   0.050448
  Y1   0.022882   0.051536   0.032359   0.005319   0.074012

After the probabilities are estimated, a starting set SXopt(t) of predictor feature values Xm is selected. The variable t has any initial value. In our example, we will arbitrarily select the starting set SXopt(t) equal to the predictor feature values X1 and X2 (that is, the prior words "travel" and "credit").

Continuing through the flow chart of Figure 1, from the estimated probabilities P(Xm, Yn), the conditional probability P(SXopt(t) | Yn) that the predictor feature has a value in the set SXopt(t) when the category feature has a value Yn is calculated for each Yn. Table 5 shows the joint probabilities of SXopt(t) and Yn, the joint probabilities of S̄Xopt(t) and Yn, and the conditional probability of SXopt(t) given Yn, for each value of the category feature Y and for t equal to 1. These numbers are based on the probabilities shown in Table 4, where

$$P(SX_{opt}(t), Y_n) = P(X_1, Y_n) + P(X_2, Y_n),$$

$$P(\overline{SX}_{opt}(t), Y_n) = P(X_3, Y_n) + P(X_4, Y_n) + P(X_5, Y_n),$$

and

$$P(SX_{opt}(t) \mid Y_n) = \frac{P(SX_{opt}(t), Y_n)}{P(SX_{opt}(t), Y_n) + P(\overline{SX}_{opt}(t), Y_n)}.$$

TABLE 5

       P(SXopt(t), Yn)   P(S̄Xopt(t), Yn)   P(SXopt(t) | Yn)
  Y5   0.055614          0.105466          0.345258
  Y4   0.027596          0.147501          0.157604
  Y3   0.123021          0.112509          0.522314
  Y2   0.124494          0.117686          0.514056
  Y1   0.074418          0.111691          0.399862

  SXopt(t) = {X1, X2}    S̄Xopt(t) = {X3, X4, X5}    t = 1

After the conditional probabilities are calculated, a number of ordered pairs of sets SYj(t) and S̄Yj(t) of category feature values Yn are defined. The number j of sets is a positive integer less than or equal to (N - 1). Each set SYj(t) contains only those category feature values Yn having the j lowest values of the conditional probability P(SXopt(t) | Yn). Each set S̄Yj(t) contains only those category feature values Yn having the (N - j) highest values of the conditional probability.
From the conditional probabilities of Table 5, Table 6 shows the ordered pairs of sets SY1(t) and S̄Y1(t) through SY4(t) and S̄Y4(t). Table 6 also shows the category feature values Yn in each set.

TABLE 6

  SET       CATEGORY FEATURE    UNCERTAINTY       UNCERTAINTY
            VALUES              (FROM TABLE 4)    (FROM TABLE 5)
  SY1(t)    Y4                  2.258366          0.930633
  S̄Y1(t)    Y5, Y1, Y2, Y3
  SY2(t)    Y4, Y5              2.204743          0.934793
  S̄Y2(t)    Y1, Y2, Y3
  SY3(t)    Y4, Y5, Y1          2.207630          0.938692
  S̄Y3(t)    Y2, Y3
  SY4(t)    Y4, Y5, Y1, Y2      2.214523          0.961389
  S̄Y4(t)    Y3

  SYopt(t) = SY2(t) = {Y4, Y5}    S̄Yopt(t) = S̄Y2(t) = {Y1, Y2, Y3}    t = 1
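As a concrete check on these tables, the sketch below (Python; the dictionary layout and all variable names are ours) stores the Table 4 probabilities and recomputes the conditional-probability column of Table 5. The `joint`, `xs`, and `ys` definitions are reused in the sketches that follow:

```python
xs = ["X1", "X2", "X3", "X4", "X5"]
ys = ["Y5", "Y4", "Y3", "Y2", "Y1"]
# Rows of Table 4, top (Y5) to bottom (Y1), columns X1 through X5.
rows = [
    [0.030616, 0.024998, 0.082166, 0.014540, 0.008761],  # Y5
    [0.014505, 0.013090, 0.068134, 0.054667, 0.024698],  # Y4
    [0.074583, 0.048437, 0.035092, 0.076762, 0.000654],  # Y3
    [0.042840, 0.081654, 0.011595, 0.055642, 0.050448],  # Y2
    [0.022882, 0.051536, 0.032359, 0.005319, 0.074012],  # Y1
]
joint = {(x, y): p for y, row in zip(ys, rows) for x, p in zip(xs, row)}

sx_opt = {"X1", "X2"}  # the starting set SXopt(t)
for y in ys:
    p_in = sum(joint[(x, y)] for x in sx_opt)  # P(SXopt(t), Yn)
    p_y = sum(joint[(x, y)] for x in xs)       # P(SXopt(t), Yn) + P(complement, Yn)
    print(y, round(p_in / p_y, 6))             # e.g. Y4 -> about 0.1576, as in Table 5
```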

For each of the (N - 1) ordered pairs of sets SYj(t) and S̄Yj(t), the uncertainty in the value of the predictor feature Xm may be calculated in two different ways. Preferably, the uncertainty is calculated from the probabilities of Table 4. Alternatively, the uncertainty may also be calculated from the probabilities of Table 5. In general, an uncertainty calculation based upon the individual probabilities of Table 4 will be more accurate than an uncertainty calculation based upon the sets SXopt(t) and S̄Xopt(t) probabilities of Table 5. However, calculating the uncertainty from the reduced Table 5 is faster, and may lead to the same result.

In Table 6, the uncertainty H(Split SYj(t)) in the value of the predictor feature X for the sets SYj(t) and S̄Yj(t) was calculated from the Table 4 probabilities according to the formula

$$H(\mathrm{Split}\ SY_j(t)) = P(SY_j(t))\, H(X_m \mid SY_j(t)) + P(\overline{SY}_j(t))\, H(X_m \mid \overline{SY}_j(t)),$$

where

$$H(X_m \mid SY_j(t)) = -\sum_{m=1}^{M} P(X_m \mid SY_j(t)) \log P(X_m \mid SY_j(t)),$$

$$P(X_m \mid SY_j(t)) = \frac{P(X_m, SY_j(t))}{P(SY_j(t))},$$

$$P(SY_j(t)) = \sum_{m=1}^{M} P(X_m, SY_j(t)),$$

and

$$P(X_m, SY_j(t)) = \sum_{Y_n \in SY_j(t)} P(X_m, Y_n).$$

In Table 6, the base 2 logarithm was used. (See, for example, Encyclopedia of Statistical Sciences, Volume 2, John Wiley & Sons, 1982, pages 512-516.)

Thus, the uncertainty in the value of a feature for a pair of sets was calculated as the sum of the products of the probability of occurrence of an event in the set, for example P(SYj(t)), times the uncertainty, for example H(Xm | SYj(t)), for that set.

As shown in Table 6, the pair of sets SYopt(t) and S̄Yopt(t) having the lowest uncertainty in the value of the predictor feature X is the pair of sets SY2(t) and S̄Y2(t), as calculated from the probabilities of Table 4.
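This weighted-entropy calculation translates directly into code. A sketch (Python, reusing the `joint`, `xs`, and `ys` definitions from the earlier sketch; the helper's name and argument order are ours):

```python
import math

def split_uncertainty(split_set, other_values, split_values, p):
    """H(Split S) = P(S) H(other | S) + P(complement) H(other | complement), base-2 logs.
    p(u, v) returns the joint probability of other-feature value u and split-feature value v."""
    h = 0.0
    for side in (split_set, [v for v in split_values if v not in split_set]):
        p_u_side = [sum(p(u, v) for v in side) for u in other_values]  # P(u, side)
        p_side = sum(p_u_side)                                         # P(side)
        h -= sum(q * math.log2(q / p_side) for q in p_u_side if q > 0.0)
    return h

# Uncertainty in the predictor feature X for the split SY2(t) = {Y4, Y5}:
print(split_uncertainty({"Y4", "Y5"}, xs, ys, lambda x, y: joint[(x, y)]))  # about 2.2047 (Table 6)
```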

From the estimated probabilities P(Xm, Yn), new conditional probabilities P(SYopt(t) | Xm) that the category feature has a value in the set SYopt(t) when the predictor feature has a value Xm are calculated for each Xm. Table 7 shows these conditional probabilities, based on the joint probabilities of Table 4, where

$$P(SY_{opt}(t), X_m) = P(Y_4, X_m) + P(Y_5, X_m),$$

$$P(\overline{SY}_{opt}(t), X_m) = P(Y_1, X_m) + P(Y_2, X_m) + P(Y_3, X_m),$$

and

$$P(SY_{opt}(t) \mid X_m) = \frac{P(SY_{opt}(t), X_m)}{P(SY_{opt}(t), X_m) + P(\overline{SY}_{opt}(t), X_m)}.$$

TABLE 7

       P(SYopt(t), Xm)   P(S̄Yopt(t), Xm)   P(SYopt(t) | Xm)
  X5   0.033459          0.125115          0.211002
  X4   0.069207          0.137724          0.334445
  X3   0.150300          0.079046          0.655340
  X2   0.038088          0.181628          0.173353
  X1   0.045122          0.140305          0.243340

  SYopt(t) = {Y4, Y5}    S̄Yopt(t) = {Y1, Y2, Y3}    t = 1
Next, ordered pairs of sets SXi(t + 1) and S̄Xi(t + 1) of predictor feature values Xm are defined. The variable i is a positive integer less than or equal to (M - 1). Each set SXi(t + 1) contains only those predictor feature values Xm having the i lowest values of the conditional probability P(SYopt(t) | Xm). Each set S̄Xi(t + 1) contains only those predictor feature values Xm having the (M - i) highest values of the conditional probability.

Table 8 shows the ordered pairs of sets SX1(t + 1) and S̄X1(t + 1) through SX4(t + 1) and S̄X4(t + 1) of predictor feature values, based upon the conditional probabilities of Table 7. Table 8 also shows the predictor feature values associated with each set.

TABLE 8

  SET           PREDICTOR FEATURE    UNCERTAINTY       UNCERTAINTY
                VALUES               (FROM TABLE 4)    (FROM TABLE 7)
  SX1(t + 1)    X2                   2.264894          0.894833
  S̄X1(t + 1)    X5, X1, X4, X3
  SX2(t + 1)    X2, X5               2.181004          0.876430
  S̄X2(t + 1)    X1, X4, X3
  SX3(t + 1)    X2, X5, X1           2.201608          0.850963
  S̄X3(t + 1)    X4, X3
  SX4(t + 1)    X2, X5, X1, X4       2.189947          0.827330
  S̄X4(t + 1)    X3

  SXopt(t + 1) = SX2(t + 1) = {X2, X5}    S̄Xopt(t + 1) = S̄X2(t + 1) = {X1, X4, X3}    t = 1

In a similar manner as described above with respect to Table 6, the uncertainties in the values of the category feature were calculated for each pair of sets from the probabilities in Tables 4 and 7, respectively. As shown in Table 8, the pair of sets SXopt(t + 1) and S̄Xopt(t + 1) having the lowest uncertainty in the value of the category feature is SX2(t + 1) and S̄X2(t + 1), where t = 1, as calculated from the probabilities of Table 4.

Finally, having found the pair of sets SXopt(t + 1) and S̄Xopt(t + 1), an event is classified in a first class if the predictor feature value of the event is a member of the set SXopt(t + 1). If the predictor feature value of an event is a member of the set S̄Xopt(t + 1), then the event is classified in a second class.

While it is possible, according to the present invention, to classify the set of observed events according to the first values of SXopt(t + 1) and S̄Xopt(t + 1) obtained, better subsets can be obtained by repeating the process. That is, after finding a pair of sets SXopt(t + 1) and S̄Xopt(t + 1), but prior to classifying an event, it is preferred to increment t by 1, and then repeat the steps from calculating the conditional probabilities P(SXopt(t) | Yn) through finding SXopt(t + 1) and S̄Xopt(t + 1). These steps may be repeated, for example, until the set SXopt(t + 1) is equal or substantially equal to the previous set SXopt(t). Alternatively, these steps may be repeated a selected number of times.
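Taken together, the procedure alternates between a category split and a predictor split until the predictor split repeats. A compact sketch of the whole loop (Python, reusing `joint`, `xs`, `ys`, and `split_uncertainty` from the sketches above; `best_prefix_split` and `find_split` are our own names, not the patent's):

```python
def best_prefix_split(values, cond_prob, uncertainty):
    """Order `values` by cond_prob; return the prefix subset with the lowest uncertainty."""
    ordered = sorted(values, key=cond_prob)
    prefixes = (set(ordered[: k + 1]) for k in range(len(ordered) - 1))
    return min(prefixes, key=uncertainty)

def find_split(joint, xs, ys, sx_opt):
    """Alternate category and predictor splits until SXopt(t + 1) equals SXopt(t)."""
    sy_unc = lambda sy: split_uncertainty(sy, xs, ys, lambda x, y: joint[(x, y)])
    sx_unc = lambda sx: split_uncertainty(sx, ys, xs, lambda y, x: joint[(x, y)])
    while True:
        # P(SXopt(t) | Yn) for each Yn, then the best of the N - 1 pairs SYj(t) / complement.
        cond_y = lambda y: sum(joint[(x, y)] for x in sx_opt) / sum(joint[(x, y)] for x in xs)
        sy_opt = best_prefix_split(ys, cond_y, sy_unc)
        # P(SYopt(t) | Xm) for each Xm, then the best of the M - 1 pairs SXi(t + 1) / complement.
        cond_x = lambda x: sum(joint[(x, y)] for y in sy_opt) / sum(joint[(x, y)] for y in ys)
        sx_new = best_prefix_split(xs, cond_x, sx_unc)
        if sx_new == sx_opt:
            return sx_new, sy_opt
        sx_opt = sx_new

sx_opt, sy_opt = find_split(joint, xs, ys, {"X1", "X2"})
print(sorted(sx_opt))  # ['X2', 'X5'] for the Table 4 probabilities
```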
The present invention was used to classify sets of events based on randomly generated probability distributions, by repeating the steps described above and by stopping the repetition when SXopt(t + 1) = SXopt(t). The classification was repeated 100 times each for values of M from 2 to 12 and values of N from 2 to 16. The classification was completed in all cases after not more than 5 iterations, so that fewer than 5(M + N) subsets were examined. The uncertainty (information) in the classifications obtained by the invention was close to the optimum.

Figure 2 is a flow chart showing one manner of finding the sets SYopt(t) and S̄Yopt(t) having the lowest uncertainty in the value of the predictor feature. After the ordered pairs of sets SYj(t) and S̄Yj(t) are defined, the variable j is set equal to 1. Next, the uncertainty H(Split SYj(t)) is calculated for the sets SYj(t) and S̄Yj(t), and the uncertainty H(Split SY(j+1)(t)) is calculated for the sets SY(j+1)(t) and S̄Y(j+1)(t). The uncertainty H(Split SYj(t)) is then compared to the uncertainty H(Split SY(j+1)(t)). If H(Split SYj(t)) is less than H(Split SY(j+1)(t)), then SYopt(t) is set equal to SYj(t), and S̄Yopt(t) is set equal to S̄Yj(t). If H(Split SYj(t)) is not less than H(Split SY(j+1)(t)), then j is incremented by 1, and the uncertainties are calculated for the new value of j.

The sets SXopt(t + 1) and S̄Xopt(t + 1) having the lowest uncertainty in the value of the category feature can be found in the same manner.
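A sketch of this early-stopping search (Python, with the same conventions as the sketches above; the function name is ours). It can stand in for `best_prefix_split` in the loop sketched earlier, stopping as soon as the calculated uncertainty stops decreasing instead of scoring every pair:

```python
def first_minimum_split(values, cond_prob, uncertainty):
    """Walk the prefix subsets in cond_prob order; stop when uncertainty stops decreasing."""
    ordered = sorted(values, key=cond_prob)
    best = set(ordered[:1])                  # start with j = 1
    for j in range(1, len(ordered) - 1):
        candidate = set(ordered[: j + 1])    # the pair of sets for j + 1
        if uncertainty(best) < uncertainty(candidate):
            break                            # H(Split j) < H(Split j + 1): keep j
        best = candidate                     # otherwise increment j and continue
    return best
```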

Figure 3 schematically shows an apparatus for classifying a set of observed events. The apparatus may comprise, for example, an appropriately programmed computer system. In this example, the apparatus comprises a general purpose digital processor 10 having a data entry keyboard 12, a display 14, a random access memory 16, and a storage device 18. Under the control of a program stored in the random access memory 16, the processor 10 retrieves the predictor feature values Xm and the category feature values Yn from the training text in storage device 18. From the feature values of the training text, processor 10 calculates estimated probabilities P(Xm, Yn) and stores the estimated probabilities in storage device 18.

Next, processor 10 selects a starting set SXopt(t) of predictor feature values. For example, the set SXopt(t), where t has an initial value, may include the predictor feature values X1 through X(M/2) (when M is an even integer), or X1 through X((M-1)/2) (when M is an odd integer). Any other selection is acceptable.

From the probabilities P(Xm, Yn) in storage device 18, processor 10 calculates the conditional probabilities P(SXopt(t) | Yn) and stores them in storage device 18. From the conditional probabilities, ordered pairs of sets SYj(t) and S̄Yj(t) are defined and stored in storage device 18.

Still under the control of the program, the processor 10 calculates the uncertainties of the (N - 1) ordered pairs of sets, and stores the sets SYopt(t) and S̄Yopt(t) having the lowest uncertainty in the value of the predictor feature in the storage device 18.

In a similar manner, processor 10 finds and stores the sets SXopt(t + 1) and S̄Xopt(t + 1) having the lowest uncertainty in the value of the category feature.

Figure 4 shows an example of a decision tree which can be generated by the method and apparatus according to the present invention. As shown in Figure 4, each node of the decision tree is associated with a set SZ0 through SZ6 of events. The set SZ0 at the top of the tree includes all values of all predictor features. The events in set SZ0 are classified, in the second level of the tree, according to a first predictor feature X', by using the method and apparatus of the present invention.

For example, if the first predictor feature X' is the word which immediately precedes the word to be recognized Y, then the set of events SZ0 is split into a set of events SZ1, in which the prior word is a member of set SX'opt, and a set SZ2, in which the prior word is a member of the set S̄X'opt.

At the third level of the tree, the sets of events SZ1 and SZ2 are further split, for example, by a second predictor value X" (the word next preceding the word to be recognized). Each node of the decision tree has associated with it a probability distribution for the value of the word to be recognized (the value of the category feature). As one proceeds through the decision tree, the uncertainty in the value of the word to be recognized Y is successively reduced.
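Each node of such a tree can be represented by the optimal subset for its question, its two output paths, and a probability distribution over the word to be recognized. A minimal sketch (Python; the class and field names are ours, not the patent's):

```python
class Node:
    """One node of the binary decision tree: asks "Is X an element of the set SX?"."""
    def __init__(self, sx_opt=None, yes=None, no=None, distribution=None):
        self.sx_opt = sx_opt                # optimal predictor subset found for this node
        self.yes = yes                      # output path A: predictor value is in sx_opt
        self.no = no                        # output path B: predictor value is not in sx_opt
        self.distribution = distribution    # P(Yn) associated with this node

def descend(node, predictor_values):
    """Ask one membership question per level (prior word, next preceding word, ...)."""
    for value in predictor_values:
        if node.yes is None or node.no is None:
            break                           # reached a leaf
        node = node.yes if value in node.sx_opt else node.no
    return node.distribution
```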

Figure 5 is a block diagram of an automatic speech recognition system which utilizes the classification method and apparatus according to the present invention. A similar system is described in, for example, U.S. Patent 4,759,068. The system shown in Figure 5 includes a microphone 20 for converting an utterance into an electrical signal. The signal from the microphone is processed by an acoustic processor and label match 22 which finds the best-matched acoustic label prototype from the acoustic label prototype store 24. A fast acoustic word match processor 26 matches the label string from acoustic processor 22 against abridged acoustic word models in store 28 to produce an utterance signal.

The utterance signal output by the fast acoustic word match processor comprises at least one predictor word signal representing a predictor word of the utterance. In general, however, the fast acoustic match processor will output a number of candidate predictor words.

Each predictor word signal produced by the fast acoustic word match processor 26 is input into a word context match 30 which compares the word context to language models in store 32 and outputs at least one category feature signal representing a candidate predicted word. From the recognition candidates produced by the fast acoustic match and the language model, the detailed acoustic match 34 matches the label string from acoustic processor 22 against detailed acoustic word models in store 36 and outputs a word string corresponding to the utterance.
Figure 6 is a more detailed block diagram of a portion of the word context match 30 and language model 32. The word context match 30 and language model 32 include predictor feature signal storage 38 for storing all the predictor words. A decision set generator 40 (such as the apparatus described above with respect to Figure 3) generates a subset of the predictor words for the decision set.

A controller 42 directs the storage of predictor word signals from the fast acoustic word match processor 26 in predictor word signal storage 44.

A predictor word signal addressed by controller 42 is compared with a predictor feature signal addressed by controller 42 in comparator 46. After the predictor word signal is compared with the decision set, a first category feature signal is output if the predictor word signal is a member of the decision set. Otherwise, a second category feature signal is output.

An example of the operation of the word context match 30 and language model 32 can be explained with reference to Tables 1, 2, and 9.

TABLE 9

       P(Yn, SXopt(t + 1))   P(Yn, S̄Xopt(t + 1))   P(Yn | SXopt(t + 1))   P(Yn | S̄Xopt(t + 1))
  Y5   0.033759              0.127322              0.089242               0.204794
  Y4   0.037788              0.137308              0.099893               0.220856
  Y3   0.049092              0.186437              0.129774               0.299880
  Y2   0.132102              0.110078              0.349208               0.177057
  Y1   0.125548              0.060561              0.331881               0.097411

  SXopt(t + 1) = {X2, X5}    S̄Xopt(t + 1) = {X1, X4, X3}    t = 1

In an utterance of a string of words, a prior word (that is, a word immediately preceding the word to be recognized) is tentatively identified as the word "travel". According to Table 1, the prior word "travel" has a predictor feature value X1. From Table 9, the word "travel" is in set S̄Xopt(t + 1). Therefore, the probability of each value Yn of the word to be recognized is given by the conditional probability P(Yn | S̄Xopt(t + 1)) in Table 9.

If we select the words with the first and second highest conditional probabilities, then the language model 32 will output words Y3 and Y4 ("management" and "consultant") as candidates for the recognition word following "travel". The candidates will then be presented to the detailed acoustic match for further examination.
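A sketch of this lookup (Python; the data come from Tables 1, 2, and 9, and the function name is ours):

```python
# Predictor feature values of the prior words (Table 1).
predictor_value = {"travel": "X1", "credit": "X2", "special": "X3",
                   "trade": "X4", "business": "X5"}
# Recognition words for the category feature values (Table 2).
recognition_word = {"Y1": "agent", "Y2": "bulletin", "Y3": "management",
                    "Y4": "consultant", "Y5": "card"}

decision_set = {"X2", "X5"}  # SXopt(t + 1)
# P(Yn | SXopt(t + 1)) and P(Yn | complement), from Table 9.
p_in = {"Y1": 0.331881, "Y2": 0.349208, "Y3": 0.129774, "Y4": 0.099893, "Y5": 0.089242}
p_out = {"Y1": 0.097411, "Y2": 0.177057, "Y3": 0.299880, "Y4": 0.220856, "Y5": 0.204794}

def candidates(prior_word, k=2):
    """Return the k most probable recognition words given the prior word's side of the split."""
    dist = p_in if predictor_value[prior_word] in decision_set else p_out
    return [recognition_word[y] for y in sorted(dist, key=dist.get, reverse=True)[:k]]

print(candidates("travel"))  # ['management', 'consultant'], as in the example above
```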

Claims (13)

1. A method of classifying a set of observed events, each event having a predictor feature X and a category feature Y, said predictor feature having one of M different possible values Xm, said category feature having one of N possible values Yn, where M is an integer greater than or equal to three, N is an integer greater than or equal to three, m is an integer greater than zero and less than or equal to M, and n is an integer greater than zero and less than or equal to N, said method comprising the steps of:
(a) measuring the predictor feature value Xm and the category feature value Yn of each event in the set of events;
(b) estimating, from the measured predictor feature values and the measured category feature values, the probability P(Xm,Yn) of occurrence of an event having a category feature value Yn and a predictor feature value Xm, for each Yn and each Xm;
(c) selecting a starting set SXopt(t) of predictor feature values Xm, where t has an initial value;
(d) calculating, from the estimated probabilities P(Xm,Yn), the conditional probability P(SXopt(t)|Yn) that the predictor feature has a value in the set SXopt(t) when the category feature has a value Yn, for each Yn;
(e) defining a number of pairs of sets SYj(t) and S̄Yj(t) of category feature values Yn, where j is an integer greater than zero and less than or equal to (N-1), each set SYj(t) containing only those category feature values Yn having the j lowest values of P(SXopt(t)|Yn), each set S̄Yj(t) containing only those category feature values Yn having the (N - j) highest values of P(SXopt(t)|Yn);
(f) finding a pair of sets SYopt(t) and S̄Yopt(t) from among the pairs of sets SYj(t) and S̄Yj(t) such that the pair of sets SYopt(t) and S̄Yopt(t) have the lowest uncertainty in the value of the predictor feature;
(g) calculating, from the estimated probabilities P(Xm, Yn), the conditional probability P(SYopt(t)|Xm) that the category feature has a value in the set SYopt(t) when the predictor feature has a value Xm, for each Xm;
(h) defining a number of pairs of sets SXi(t + 1) and S̄Xi(t + 1) of predictor feature values Xm, where i is an integer greater than zero and less than or equal to (M-1), each set SXi(t + 1) containing only those predictor feature values Xm having the i lowest values of P(SYopt(t)|Xm), each set S̄Xi(t + 1) containing only those predictor feature values Xm having the (M - i) highest values of P(SYopt(t)|Xm);
(i) finding a pair of sets SXopt(t + 1) and S̄Xopt(t + 1) from among the pairs of sets SXi(t + 1) and S̄Xi(t + 1) such that the pair of sets SXopt(t + 1) and S̄Xopt(t + 1) have the lowest uncertainty in the value of the category feature;

(l) classifying an event in a first class if the predictor feature value of the event is a member of the set SXopt(t + 1);
and (m) classifying an event in a second class if the predictor feature value of the event is a member of the set S̄Xopt(t + 1).
2. A method as claimed in Claim 1, characterized in that:
the step of classifying an event in the first class comprises producing a classification signal identifying the event as a member of the first class if the predictor feature value of the event is a member of the set SXopt(t + 1); and the step of classifying an event in the second class comprises producing a classification signal identifying the event as a member of the second class if the predictor feature value of the event is a member of the set S̄Xopt(t + 1).
3. A method as claimed in Claim 2, further comprising the steps of, after finding a pair of sets SXopt(t + 1) and S̄Xopt(t + 1) but prior to classifying an event, (j) incrementing t by 1; and (k) repeating steps (d) through (k) until the set SXopt(t + 1) is substantially equal to the previous set SXopt(t).
4. A method as claimed in Claim 3, characterized in that the step of finding a pair of sets SYopt(t) and S̄Yopt(t) from among the pairs of sets SYj(t) and S̄Yj(t) such that the pair of sets SYopt(t) and S̄Yopt(t) have the lowest uncertainty in the value of the predictor feature comprises the steps of:
(f)(1) setting j=1;
(f)(2) calculating the uncertainty H(Split SYj(t)) in the value of the predictor feature for the pair of sets SYj(t) and S̄Yj(t), and the uncertainty H(Split SY(j+1)(t)) in the value of the predictor feature for the pair of sets SY(j+1)(t) and S̄Y(j+1)(t); (f)(3) comparing H(Split SYj(t)) and H(Split SY(j+1)(t)); and (f)(4) if H(Split SYj(t)) is less than H(Split SY(j+1)(t)), then setting SYopt(t) = SYj(t) and S̄Yopt(t) = S̄Yj(t), otherwise incrementing j by 1 and repeating steps (f)(2) through (f)(4).
5. A method as claimed in Claim 4, characterized in that the step of finding a pair of sets SXopt(t + 1) and S̄Xopt(t + 1) from among the pairs of sets SXi(t + 1) and S̄Xi(t + 1) such that the pair of sets SXopt(t + 1) and S̄Xopt(t + 1) have the lowest uncertainty in the value of the category feature comprises the steps of:
(i)(1) setting i=1;
(i)(2) calculating the uncertainty H(Split SXi(t + 1)) in the value of the category feature for the pair of sets SXi(t + 1) and S̄Xi(t + 1), and the uncertainty H(Split SX(i+1)(t + 1)) in the value of the category feature for the pair of sets SX(i+1)(t + 1) and S̄X(i+1)(t + 1);
(i)(3) comparing H(Split SXi(t + 1)) and H(Split SX(i+1)(t + 1)); and (i)(4) if H(Split SXi(t + 1)) is less than H(Split SX(i+1)(t + 1)), then setting SXopt(t + 1) = SXi(t + 1) and S̄Xopt(t + 1) = S̄Xi(t + 1); otherwise, incrementing i by 1 and repeating steps (i)(2) through (i)(4).
6. A method as claimed in Claim 5, characterized in that the uncertainty in the value of the category feature for the pairs of sets SXi(t + 1) and S̄Xi(t + 1) is equal to the probability of occurrence of an event having a predictor feature value in the set SXi(t + 1) times the uncertainty in the value of the category feature for the set SXi(t + 1), plus the probability of occurrence of an event having a predictor feature value in the set S̄Xi(t + 1) times the uncertainty in the value of the category feature for the set S̄Xi(t + 1).
7. A method as claimed in Claim 6, further comprising the steps of:
measuring the predictor feature value of an event not in the set of events;
classifying the event in the first class if the predictor feature value of the event is a member of the set SXopt(t + 1);
and classifying the event in the second class if the predictor feature value of the event is a member of the set S̄Xopt(t + 1).
8. A method as claimed in Claim 7, characterized in that each event is a spoken utterance in a series of spoken utterances.
9. A method as claimed in Claim 8, characterized in that:
each event is a spoken word in a series of spoken words;
and the predictor feature value of a word comprises an identification of the immediately preceding word in the series of spoken words.
10. An apparatus for classifying a set of observed events, each event having a predictor feature X and a category feature Y, said predictor feature having one of M different possible values Xm, said category feature having one of N possible values Yn, where M is an integer greater than or equal to three, N is an integer greater than or equal to three, m is an integer greater than zero and less than or equal to M, and n is an integer greater than zero and less than or equal to N, said apparatus comprising:
(a) means for measuring the predictor feature value Xm and the category feature value Yn of each event in the set of events;
(b) means for estimating, from the measured predictor feature values and the measured category feature values, the probability P(Xm,Yn) of occurrence of an event having a category feature value Yn and a predictor feature value Xm, for each Yn and each Xm;
(c) means for selecting a starting set SXopt(t) of predictor feature values Xm, where t has an initial value;
(d) means for calculating, from the estimated probabilities P(Xm,Yn), the conditional probability P(SXopt(t)|Yn) that the predictor feature has a value in the set SXopt(t) when the category feature has a value Yn, for each Yn;
(e) means for defining a number of pairs of sets SYj(t) and S̄Yj(t) of category feature values Yn, where j is an integer greater than zero and less than or equal to (N-1), each set SYj(t) containing only those category feature values Yn having the j lowest values of P(SXopt(t)|Yn), each set S̄Yj(t) containing only those category feature values Yn having the (N - j) highest values of P(SXopt(t)|Yn);
(f) means for finding a pair of sets SYopt(t) and S̄Yopt(t) from among the pairs of sets SYj(t) and S̄Yj(t) such that the pair of sets SYopt(t) and S̄Yopt(t) have the lowest uncertainty in the value of the predictor feature;
(g) means for calculating, from the estimated probabilities P(Xm,Yn), the conditional probability P(SYopt(t)|Xm) that the category feature has a value in the set SYopt(t) when the predictor feature has a value Xm, for each Xm;
(h) means for defining a number of pairs of sets SXi(t + 1) and S̄Xi(t + 1) of predictor feature values Xm, where i is an integer greater than zero and less than or equal to (M-1), each set SXi(t + 1) containing only those predictor feature values Xm having the i lowest values of P(SYopt(t)|Xm), each set S̄Xi(t + 1) containing only those predictor feature values Xm having the (M - i) highest values of P(SYopt(t)|Xm);
(i) means for finding a pair of sets SXopt(t + 1) and S̄Xopt(t + 1) from among the pairs of sets SXi(t + 1) and S̄Xi(t + 1) such that the pair of sets SXopt(t + 1) and S̄Xopt(t + 1) have the lowest uncertainty in the value of the category feature;
(l) means for classifying an event in a first class if the predictor feature value of the event is a member of the set SXopt(t + 1); and (m) means for classifying an event in a second class if the predictor feature value of the event is a member of the set S̄Xopt(t + 1).
11. An apparatus as claimed in Claim 10, further comprising:
means for measuring the predictor feature value of an event not in the set of events;
means for classifying the event in the first class if the predictor feature value of the event is a member of the set SXopt(t + 1); and means for classifying the event in the second class if the predictor feature value of the event is a member of the set S̄Xopt(t + 1).

12. A method of automatic speech recognition comprising the steps of:
converting an utterance into an utterance signal representing the utterance, said utterance comprising a series of at least a predictor word and a predicted word, said utterance signal comprising at least one predictor word signal representing the predictor word;
providing a set of M predictor feature signals, each predictor feature signal having a predictor feature value Xm, where M is an integer greater than or equal to three and m is an integer greater than zero and less than or equal to M, each predictor feature signal in the set representing a different word;
generating a decision set which contains a subset of the M predictor feature signals representing the words;
comparing the predictor word signal with the predictor feature signals in the decision set;
outputting a first category feature signal representing a first predicted word if the predictor word signal is a member of the decision set, said first category feature signal being one of N category feature signals, each category feature signal representing a different word and having a category feature value Yn, where N is an integer greater than or equal to three, and n is an integer greater than zero and less than or equal to N; and outputting a second category feature signal, different from the first category feature signal and representing a second predicted word different from the first predicted word if the predictor word signal is not a member of the decision set;
characterized in that the contents of the decision set are generated by the steps of:
providing a training text comprising a set of observed events, each event having a predictor feature X representing a predictor word and a category feature Y representing a predicted word, said predictor feature having one of M different possible values Xm, each Xm representing a different predictor word, said category feature having one of N possible values Yn, each Yn representing a different predicted word;
(a) measuring the predictor feature value Xm and the category feature value Yn of each event in the set of events;
(b) estimating, from the measured predictor feature values and the measured category feature values, the probability P(Xm, Yn) of occurrence of an event having a category feature value Yn and a predictor feature value Xm, for each Yn and each Xm;
(c) selecting a starting set SXopt(t) of predictor feature values Xm, where t has an initial value;
(d) calculating, from the estimated probabilities P(Xm, Yn), the conditional probability P(SXopt(t)|Yn) that the predictor feature has a value in the set SXopt(t) when the category feature has a value Yn, for each Yn;

(e) defining a number of pairs of sets SYj(t) and S̄Yj(t) of category feature values Yn, where j is an integer greater than zero and less than or equal to (N-1), each set SYj(t) containing only those category feature values Yn having the j lowest values of P(SXopt(t)|Yn), each set S̄Yj(t) containing only those category feature values Yn having the (N - j) highest values of P(SXopt(t)|Yn);
(f) finding a pair of sets SYopt(t) and S̄Yopt(t) from among the pairs of sets SYj(t) and S̄Yj(t) such that the pair of sets SYopt(t) and S̄Yopt(t) have the lowest uncertainty in the value of the predictor feature;
(g) calculating, from the estimated probabilities P(Xm, Yn), the conditional probability P(SYopt(t)|Xm) that the category feature has a value in the set SYopt(t) when the predictor feature has a value Xm, for each Xm;
(h) defining a number of pairs of sets SXi(t + 1) and S̄Xi(t + 1) of predictor feature values Xm, where i is an integer greater than zero and less than or equal to (M-1), each set SXi(t + 1) containing only those predictor feature values Xm having the i lowest values of P(SYopt(t)|Xm), each set S̄Xi(t + 1) containing only those predictor feature values Xm having the (M - i) highest values of P(SYopt(t)|Xm);
(i) finding a pair of sets SXopt(t + 1) and S̄Xopt(t + 1) from among the pairs of sets SXi(t + 1) and S̄Xi(t + 1) such that the pair of sets SXopt(t + 1) and S̄Xopt(t + 1) have the lowest uncertainty in the value of the category feature; and (l) setting the decision set equal to the set SXopt(t + 1).
13. An automatic speech recognition system comprising:
means for converting an utterance into an utterance signal representing the utterance, said utterance comprising a series of at least a predictor word and a predicted word, said utterance signal comprising at least one predictor word signal representing the predictor word;
means for storing a set of M predictor feature signals, each predictor feature signal having a predictor feature value Xm, where M is an integer greater than or equal to three and m is an integer greater than zero and less than or equal to M, each predictor feature signal in the set representing a different word;
means for generating a decision set which contains a subset of the M predictor feature signals representing the words;
means for comparing the predictor word signal with the predictor feature signals in the decision set;
means for outputting a first category feature signal representing a first predicted word if the predictor word signal is a member of the decision set, said first category feature signal being one of N category feature signals, each category feature signal representing a different word and having a category feature value Yn, where N is an integer greater than or equal to three, and n is an integer greater than zero and less than or equal to N; and means for outputting a second category feature signal, different from the first category feature signal and representing a second predicted word different from the first predicted word if the predictor word signal is not a member of the decision set;
characterized in that the means for generating the decision set comprises:
means for storing a training text comprising a set of observed events, each event having a predictor feature X
representing a predictor word and a category feature Y
representing a predicted word, said predictor feature having one of M different possible values Xm, each Xm representing a different predictor word, said category feature having one of N possible values Yn, each Yn representing a different predicted word;
(a) means for measuring the predictor feature value Xm and the category feature value Yn of each event in the set of events;
(b) means for estimating, from the measured predictor feature values and the measured category feature values, the probability P(Xm,Yn) of occurrence of an event having a category feature value Yn and a predictor feature value Xm, for each Yn and each Xm;
(c) means for selecting a starting set SXopt(t) of predictor feature values Xm, where t has an initial value;

(d) means for calculating, from the estimated probabilities P(Xm,Yn), the conditional probability P(SXopt(t)|Yn) that the predictor feature has a value in the set SXopt(t) when the category feature has a value Yn, for each Yn;
(e) means for defining a number of pairs of sets SYj(t) and S̄Yj(t) of category feature values Yn, where j is an integer greater than zero and less than or equal to (N-1), each set SYj(t) containing only those category feature values Yn having the j lowest values of P(SXopt(t)|Yn), each set S̄Yj(t) containing only those category feature values Yn having the (N - j) highest values of P(SXopt(t)|Yn);
(f) means for finding a pair of sets SYopt(t) and S̄Yopt(t) from among the pairs of sets SYj(t) and S̄Yj(t) such that the pair of sets SYopt(t) and S̄Yopt(t) have the lowest uncertainty in the value of the predictor feature;
(g) means for calculating, from the estimated probabilities P(Xm,Yn), the conditional probability P(SYopt(t)|Xm) that the category feature has a value in the set SYopt(t) when the predictor feature has a value Xm, for each Xm;
(h) means for defining a number of pairs of sets SXi(t + 1) and S̄Xi(t + 1) of predictor feature values Xm, where i is an integer greater than zero and less than or equal to (M-1), each set SXi(t + 1) containing only those predictor feature values Xm having the i lowest values of P(SYopt(t)|Xm), each set S̄Xi(t + 1) containing only those predictor feature values Xm having the (M - i) highest values of P(SYopt(t)|Xm);

(i) means for finding a pair of sets SXopt(t + 1) and S̄Xopt(t + 1) from among the pairs of sets SXi(t + 1) and S̄Xi(t + 1) such that the pair of sets SXopt(t + 1) and S̄Xopt(t + 1) have the lowest uncertainty in the value of the category feature; and (l) means for outputting the set SXopt(t + 1) as the decision set.
CA002024382A 1989-10-26 1990-08-31 Method and apparatus for finding the best splits in a decision tree for a language model for a speech recognizer Expired - Fee Related CA2024382C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/427,420 US5263117A (en) 1989-10-26 1989-10-26 Method and apparatus for finding the best splits in a decision tree for a language model for a speech recognizer
US427,420 1989-10-26

Publications (2)

Publication Number Publication Date
CA2024382A1 CA2024382A1 (en) 1991-04-27
CA2024382C true CA2024382C (en) 1994-08-02

Family

ID=23694801

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002024382A Expired - Fee Related CA2024382C (en) 1989-10-26 1990-08-31 Method and apparatus for finding the best splits in a decision tree for a language model for a speech recognizer

Country Status (4)

Country Link
US (1) US5263117A (en)
EP (1) EP0424665A2 (en)
JP (1) JPH0756675B2 (en)
CA (1) CA2024382C (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5745649A (en) * 1994-07-07 1998-04-28 Nynex Science & Technology Corporation Automated speech recognition using a plurality of different multilayer perception structures to model a plurality of distinct phoneme categories
US5680509A (en) * 1994-09-27 1997-10-21 International Business Machines Corporation Method and apparatus for estimating phone class probabilities a-posteriori using a decision tree
US5729656A (en) 1994-11-30 1998-03-17 International Business Machines Corporation Reduction of search space in speech recognition using phone boundaries and phone ranking
CA2220004A1 (en) * 1995-05-26 1996-11-28 John N. Nguyen Method and apparatus for dynamic adaptation of a large vocabulary speech recognition system and for use of constraints from a database in a large vocabulary speech recognition system
US5822730A (en) * 1996-08-22 1998-10-13 Dragon Systems, Inc. Lexical tree pre-filtering in speech recognition
US5864819A (en) * 1996-11-08 1999-01-26 International Business Machines Corporation Internal window object tree method for representing graphical user interface applications for speech navigation
US6167377A (en) * 1997-03-28 2000-12-26 Dragon Systems, Inc. Speech recognition language models
US6418431B1 (en) * 1998-03-30 2002-07-09 Microsoft Corporation Information retrieval and speech recognition based on language models
WO1999059673A1 (en) * 1998-05-21 1999-11-25 Medtronic Physio-Control Manufacturing Corp. Automatic detection and reporting of cardiac asystole
US7031908B1 (en) 2000-06-01 2006-04-18 Microsoft Corporation Creating a language model for a language processing system
US6865528B1 (en) * 2000-06-01 2005-03-08 Microsoft Corporation Use of a unified language model
US6859774B2 (en) * 2001-05-02 2005-02-22 International Business Machines Corporation Error corrective mechanisms for consensus decoding of speech
US8229753B2 (en) 2001-10-21 2012-07-24 Microsoft Corporation Web server controls for web enabled recognition and/or audible prompting
US7711570B2 (en) * 2001-10-21 2010-05-04 Microsoft Corporation Application abstraction with dialog purpose
US7133856B2 (en) * 2002-05-17 2006-11-07 The Board Of Trustees Of The Leland Stanford Junior University Binary tree for complex supervised learning
US7200559B2 (en) 2003-05-29 2007-04-03 Microsoft Corporation Semantic object synchronous understanding implemented with speech application language tags
US8301436B2 (en) * 2003-05-29 2012-10-30 Microsoft Corporation Semantic object synchronous understanding for highly interactive interface
US7292982B1 (en) * 2003-05-29 2007-11-06 At&T Corp. Active labeling for spoken language understanding
US8160883B2 (en) 2004-01-10 2012-04-17 Microsoft Corporation Focus tracking in dialogs

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4181813A (en) * 1978-05-08 1980-01-01 John Marley System and method for speech recognition
JPS57211338A (en) * 1981-06-24 1982-12-25 Tokyo Shibaura Electric Co Tatal image diagnosis data treating apparatus
JPS58115497A (en) * 1981-12-28 1983-07-09 シャープ株式会社 Voice recognition system
US4658429A (en) * 1983-12-29 1987-04-14 Hitachi, Ltd. System and method for preparing a recognition dictionary
JPS60262290A (en) * 1984-06-08 1985-12-25 Hitachi Ltd Information recognition system
US4759068A (en) * 1985-05-29 1988-07-19 International Business Machines Corporation Constructing Markov models of words from multiple utterances
FR2591005B1 (en) * 1985-12-04 1988-01-08 Thomson Csf METHOD FOR IDENTIFYING A TREE STRUCTURE IN DIGITAL IMAGES AND ITS APPLICATION TO AN IMAGE PROCESSING DEVICE
US4719571A (en) * 1986-03-05 1988-01-12 International Business Machines Corporation Algorithm for constructing tree structured classifiers
US4852173A (en) * 1987-10-29 1989-07-25 International Business Machines Corporation Design and construction of a binary-tree system for language modelling

Also Published As

Publication number Publication date
JPH03147079A (en) 1991-06-24
CA2024382A1 (en) 1991-04-27
EP0424665A2 (en) 1991-05-02
US5263117A (en) 1993-11-16
JPH0756675B2 (en) 1995-06-14
EP0424665A3 (en) 1994-01-12

Similar Documents

Publication Publication Date Title
CA2024382C (en) Method and apparatus for finding the best splits in a decision tree for a language model for a speech recognizer
EP0535909B1 (en) Speech recognition
Lukatela et al. Automatic and pre-lexical computation of phonology in visual word identification
KR101780760B1 (en) Speech recognition using variable-length context
US7467087B1 (en) Training and using pronunciation guessers in speech recognition
US20080319753A1 (en) Technique for training a phonetic decision tree with limited phonetic exceptional terms
CN110556130A (en) Voice emotion recognition method and device and storage medium
EP1668628A1 (en) Method for synthesizing speech
US7054814B2 (en) Method and apparatus of selecting segments for speech synthesis by way of speech segment recognition
WO2004049240A1 (en) Method and device for determining and outputting the similarity between two data strings
Knopoff et al. Information theory for musical continua
US20060277045A1 (en) System and method for word-sense disambiguation by recursive partitioning
JPH0713594A (en) Method for evaluation of quality of voice in voice synthesis
Campbell Analog i/o nets for syllable timing
EP0562138A1 (en) Method and apparatus for the automatic generation of Markov models of new words to be added to a speech recognition vocabulary
US6286012B1 (en) Information filtering apparatus and information filtering method
US7263486B1 (en) Active learning for spoken language understanding
Kello et al. Scale-free networks in phonological and orthographic wordform lexicons
US20210201888A1 (en) Computer-Implemented Phoneme-Grapheme Matching
JPS61120200A (en) Voice recognition method and apparatus
Benkhellat et al. Genetic algorithms in speech recognition systems
EP0420825A2 (en) A method and equipment for recognising isolated words, particularly for very large vocabularies
Dainora Eliminating downstep in prosodic labeling of American English
WO1991007729A2 (en) Improvements in methods and apparatus for signature verification
Yelland Word recognition and lexical access

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed