CA1062811A - Cluster storage apparatus for post processing error correction of a character recognition machine - Google Patents

Cluster storage apparatus for post processing error correction of a character recognition machine

Info

Publication number
CA1062811A
CA1062811A
Authority
CA
Canada
Prior art keywords
word
words
character
alpha
characters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA230,886A
Other languages
French (fr)
Inventor
Anne M. Chaires
Jean M. Ciconte
Allen H. Ett
John J. Hilliard
Walter S. Rosenbaum
Donald F. Kocher
Ellen W. Bollinger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Application granted
Publication of CA1062811A
Legal status: Expired

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 — Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 — Character recognition
    • G06V30/26 — Techniques for post-processing, e.g. correcting the recognition result
    • G06V30/262 — Techniques for post-processing using context analysis, e.g. lexical, syntactic or semantic context
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 — Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 — Character recognition

Abstract

A CLUSTER STORE APPARATUS FOR POST PROCESSING ERROR
CORRECTION OF A CHARACTER RECOGNITION MACHINE

ABSTRACT OF THE DISCLOSURE:

A cluster storage apparatus is disclosed for outputting groups of valid alpha words as potential candidates for the correct form of an alpha word misrecognized by a character recognition machine. Groups of alpha words are arranged in the cluster storage apparatus such that adjacent locations contain alpha words having similar character recognition misread propensities. Alpha words which have been determined to be misrecognized are input to the cluster storage apparatus. Numerical values assigned to the characters of which the input word is composed are used to calculate the address of that group of valid alpha words having similar character recognition misread propensities.
The cluster storage apparatus then outputs the accessed groups of alpha words for subsequent processing. The organization of the cluster storage apparatus minimizes the difference in address between alpha words with similar character recognition misread propensities by assigning high numeric values to highly reliable characters, as determined by measuring the character transfer function of the character recognition machine.

Description

FIELD OF THE INVENTION:
The invention disclosed herein relates to data processing devices and more particularly relates to post processing devices for character recognition machines such as optical character readers and speech analyzers. The invention can also be applied to the analysis of typographical errors resulting from the use of a standard keyboard.

BACKGROUND OF THE INVENTION:
From its technical debut, the optical character recognition machine (OCR) has had unique potential for purposes of text processing applications. Its input processing rate far exceeds that of keypunch or typewriter inputs and its output is in machine readable form. Despite these very important attributes, optical character recognition machines have made only minor inroads to overall text processing applications. This may be principally due to the tendency of state of the art character recognition machines to generate a substantial percentage of erroneous misreads when a variety of fonts and formats are scanned.
When multifont nonformatted optical character recognition is attempted, problems arise which are not as prevalent in unifont applications. They stem from the highly error prone character recognition environment which is created when the character recognition machine operation is performed over many different alphabetic and numeric fonts with minimum control exercised over text conventions and typographical print quality. When scanning such a text, discrimination between confusable character geometries causes a nominal five percent character misrecognition rate.
In the prior art, apparatus for selecting the correct form of a garbled input word misread by an OCR has been limited to correcting errors in the substitution misrecognition mode. For improving the performance of an optical character reader, the prior art discloses the use of conditional probabilities for simple substitution of one character for another, or of character rejection, for calculating a total conditional probability that an input OCR word was misread, given that a predetermined dictionary word was actually scanned by the OCR. But the prior art deals only with the simple substitution of confusion pairs occupying the same corresponding location in the OCR word and in the dictionary word. The OCR word and the dictionary word must be of the same length.
A significant advance in the art of post processing error correction apparatus has been contributed by W.S. Rosenbaum, et al., in U.S. Patent 3,969,700, issued July 13, 1976, and assigned to the instant applicant. A regional context error correction apparatus is disclosed therein which corrects for segmentation errors as well as substitution errors in the characters read by the OCR. Segmentation misrecognition differs from that of simple substitution in that its independent events correspond to groupings of at least two characters.
Nominally there are three types of segmentation errors: horizontal splitting segmentation, concatenation segmentation and crowding segmentation. The underlying mechanical factor which these segmentation types have in common is that they are generated by the improper delineation of the character beginning and ending points.
Segmentation errors occur quite frequently in OCR output streams and constitute a substantial impediment to accuracy in text processing applications. The regional context error correction apparatus disclosed in that patent contains a dictionary storage 28, shown in Figure 3 thereof, containing words which are expected to be read by the OCR. It is disclosed that for general English text processing applications the words appearing in a conventional dictionary may be stored in the dictionary storage 28. It is seen, however, that the dictionary storage 28 would require a substantial storage capacity to accommodate a conventional English dictionary and would require very fast accessing time in order to compare each word in the dictionary with the garbled word input from the OCR. The patent also discloses that the dictionary store 28 may optionally have a bulk storage input 3 which could, for example, supply selected categories of reference words which are most likely to match with the particular type of misrecognized word received from the OCR.
Storage techniques of the associative memory type have been disclosed in the prior art for accessing the correct form of a misspelled word. For example, J.J. Giangardella, "Spelling Correction by Vector Representation Using a Digital Computer", IEEE Transactions on Engineering Writing and Speech, Volume EWS-10, Number 2, December 1967, page 57, discloses the use of vector representation of alpha words by assigning the numbers 1 through 26 to the letters A through Z respectively and calculating the vector magnitude and angle for accessing the word from a memory in a general purpose computer. Problems associated with this approach, which are typical of those confronting the prior art, relate to the associative memory accessing an over-inclusive or an under-inclusive class of words to correspond with the input word of interest.
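For concreteness, the prior-art vector representation just described can be sketched as follows. This is a minimal illustration assuming A through Z map to 1 through 26; the particular angle convention (angle to the uniform vector) is our own assumption, since the cited paper's exact definition is not reproduced in this disclosure.

```python
import math

# Prior-art vector representation (Giangardella): letters A-Z are
# assigned the values 1-26 and a word is reduced to a vector magnitude
# and angle for memory access. The angle convention below is an
# illustrative assumption.
def letter_value(ch: str) -> int:
    return ord(ch.upper()) - ord("A") + 1

def vector_magnitude(word: str) -> float:
    # Euclidean norm of the per-character values.
    return math.sqrt(sum(letter_value(c) ** 2 for c in word))

def vector_angle(word: str) -> float:
    # Angle between the word's value vector and (1, 1, ..., 1).
    vals = [letter_value(c) for c in word]
    cos = sum(vals) / (vector_magnitude(word) * math.sqrt(len(vals)))
    return math.degrees(math.acos(cos))

print(vector_magnitude("WORD"), vector_angle("WORD"))
```

As the text notes, the weakness of such a scheme is not the arithmetic but the class of words each access retrieves, which tends to be over- or under-inclusive.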
OBJECTS OF THE INVENTION:
It is an object of the invention to associatively access the class of valid alpha words as potential candidates for the correct form of a garbled alpha word, in an improved manner.


It is another object of the invention to associatively access a group of alpha words as potential candidates for the correct form of a garbled alpha word, the group accessed being less over-inclusive or under-inclusive than was possible in the prior art.
It is still another object of the invention to associatively access a group of valid alpha words as potential candidates for the correct form of a garbled alpha word misrecognized by an OCR machine, in an improved manner.
It is a further object of the invention to associatively access a group of spoken words represented by a sequence of phoneme characters as potential candidates for the correct form of a garbled spoken word as represented by a sequence of phoneme characters, in an improved manner.
It is an additional object of the invention to associatively access a group of words as potential candidates for the correct form of a word containing typographical errors commonly committed in the use of a keyboard, in an improved manner.
SUMMARY OF THE INVENTION:
These and other objects of the invention are accomplished by the cluster storage apparatus disclosed herein. The cluster storage apparatus outputs groups of valid alpha words as potential candidates for the correct form of an alpha word misrecognized by a character recognition machine, a speech analyzer, or a standard keyboard. The cluster storage apparatus comprises a two-dimensional array of alpha word read only storage locations, each location having a group of alpha words arranged such that adjacent locations contain alpha words having similar character recognition misread propensities. A first-dimensional accessing means is connected to the read only storage for addressing the locations based upon the values assigned to the characters of which the input alpha word is composed. A second-dimensional accessing means is connected to the read only storage for accessing the location therein based upon the number of characters in the input alpha word. The read only storage memory is organized so as to minimize the difference in address between alpha words which have similar OCR misread propensities and so as to cluster words of a given character length, as well as words of other lengths that have a significant probability of being malsegmented into the given length.
The propensity for misread is determined by empirical measurement of the OCR character transfer function. The transfer function is expressed as a series of equations representing each character's probability of being confused into a false output character. These equations are solved for the optimum character value set which assigns higher numeric values to highly reliable characters and lower numeric values to less reliable characters, under a global constraint that characters that are misread related are assigned values within a known maximal distance of one another. In addition, the malsegmentation probability is determined by the OCR character transfer function. The transfer function of the OCR is expressed as a series of values representing the probability of a character being malsegmented. These values are used to calculate the probability of each word being malsegmented. The malsegmentation probability for a word is compared with a minimum threshold so that words whose malsegmentation propensity exceeds this threshold are stored with words of adjacent lengths.


The cluster storage organization of the read only storage memory therefore has a structure which conforms with a global constraint such that no numeric assignment of two characters which can be misrecognized into one another will differ in location by more than a predetermined error interval. Thus an input alpha word which is potentially in error can be associated with that portion of the read only storage which contains potential candidates for the correct form of the input alpha word, without excessive over-inclusion of extraneous alpha words or under-inclusion of significant alpha words.
DESCRIPTION OF THE DRAWINGS:
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of the preferred embodiments of the invention, as illustrated in the accompanying drawings.
Figure 1 is a schematic flow diagram of the vector fetch process.
Figure 2 shows the number selection criteria for the assignment of numerical values to alphabetical characters.
Figure 3 is a schematic diagram showing the initial numeric assignment scheme of numerical values to alphabetical characters.
Figure 4 is a schematic diagram of the read only storage arrangement for the second-dimensional access.
Figure 5 is a schematic diagram of the read only storage memory.
Figure 6 is a detailed block diagram of the cluster storage apparatus invention.

Figure 7 is a general block diagram of the post processing, error correction system containing the Bayesian Online Numeric Discriminator of U.S. Patent 3,842,402, issued October 15, 1974, and assigned to the instant applicant; the Binary Reference Matrix Apparatus of U.S. Patent 3,925,761, issued December 9, 1975, and assigned to the instant applicant; the Regional Context Maximum Likelihood Bayesian Error Correction Apparatus of U.S. Patent 3,969,700, issued July 13, 1976, and assigned to the instant applicant; and the Cluster Storage Apparatus disclosed herein.
DISCUSSION OF THE PREFERRED EMBODIMENT:
Theory of Operation: The strategy used to effect OCR error correction is to reference an error correction dictionary and determine from all the words listed therein, "Which of the dictionary entries is the word that was scanned by the OCR and misrecognized into the incorrect form presently being processed?" Clearly, a basic part of this operation is the ability to determine which segment of the error correction dictionary should be reviewed. Schematically this is shown in Figure 1. The more accurately one can delineate the portion of the dictionary which contains the correct form of the input word, the larger the dictionary can be without compromising the efficiency and speed of the OCR error correction operation.
When a garbled alpha word is detected in an output recognition stream and it is desired to select a group of candidate words for its correct form, the properties of OCR misread make it impossible to formulate a reliable dictionary accessing means using the normal dictionary indexing word attributes of character alphabetic properties and/or word length. The OCR misread propensities can alter either or both of the preceding word attributes in various ways. In spite of this, there is still much potential dictionary entry information in the misrecognized data. To utilize a garbled word as a key to the dictionary, the character string must be analyzed in a new perspective. The vehicles for this analysis are the Vector Fetch (VF) and Word Group file organization concepts.
The rationale which underlies the VF dictionary accessing methodology can be best understood as a specialized application of classical statistical confidence interval theory. As normally configured, an error interval sets up a range of values within which the true value of the factor being estimated can be said to lie with a predetermined error tolerance.
Within the perspective of the error interval analysis, the VF methodology can be configured as a specialized application which uses the garbled word data to:
a. Estimate the dictionary location of the word that was misrecognized by the OCR.
b. Give relevance to the estimated dictionary access point (DAP) by generating around it a range of locations wherein the required word information lies with a predetermined certainty.
The description of the mechanics involved in the implementing of the preceding dictionary fetch methodology is logically broken into two portions:
(1) A first-dimension accessing means, based on character content, which requires:
a. Estimation of the dictionary access point within the storage apparatus
b. Determination of the fetch width constraints
(2) A second-dimension accessing means which requires grouping of dictionary words within the storage apparatus to reflect similar length characteristics.
1.1 Estimation of Dictionary Access Point Within the Storage Apparatus: The dictionary access point (DAP) is the initial estimate of at what location the correct form of the OCR input word lies in the dictionary storage apparatus. The vehicle for this initial estimation process is a specialized hashing transformation applied to the misrecognized input alpha word. Underlying the hashing transformation is a specially developed numeric assignment scheme in which each character in the alphabet has a numeric value that reflects its absolute and relative OCR recognition reliability. The particulars of the alphameric assignment scheme will be elaborated upon shortly. It presently suffices to say that the numeric value assigned is related to the reliability of the alpha character. In its simplest form, this implies that the more reliable an alpha character recognition, the more weight is put upon it in the hashing calculation.
Given this alphameric assignment scheme, the DAP follows as the summation of positive integers:

    DAP = Σ (N=1 to M) L_N        (1)

where:
L_N = the numeric value assigned to the character in the Nth position of the misrecognized word.
M = the number of character positions in the misrecognized word.
The key to this technique is the derivation of the appropriate alphameric assignment scheme. Dual and seemingly conflicting constraints have to be accommodated in the assignment scheme. Essentially, the alphameric assignment used to compute the DAP has to:
a. Minimize the effects on the DAP of intercharacter substitutions resulting from OCR misreads.
b. Map the dictionary words into a sufficiently uniform spread throughout the storage apparatus.
The first constraint reflects the desire that Equation (1), the hashing formulation, be as insensitive as possible to OCR substitution and segmentation misread. The second constraint seeks to avoid a trivial solution from evolving as a result of the first constraint. Such a solution would be the collapsing of the dictionary so that all entries occupy a single DAP or a very narrow band of DAPs within the storage apparatus. If this were the case, nearly the entire dictionary would be output in each fetch. For real time processing this would be an unacceptable situation and would defeat the intent of the vector fetch algorithm.

The optimum alphameric assignment scheme for the vector fetch can be derived by virtue of a mathematical approach using linear programming. This approach to vector fetch alphameric assignment scheme generation follows by expressing the OCR intercharacter substitution propensities as linear relations. This implies, for every non-null event in the OCR transfer function (confusion matrix), a norm distance is set up of the form:

    |X_α - X_β|        (2)

where:
X_α, X_β are the numeric designates of the alphabetic characters denoted in the general case by "α" and "β".
A typical OCR transfer function (confusion matrix), when reconstituted in the above form, yields several hundred separate expressions of the form of Equation (2). Standard linear optimization formulations, however, are not able to directly accommodate a norm distance (i.e., an absolute value relationship) as a base variable in the system of constraints or in its objective function.
To allow the programming optimization of the VF alphameric assignment scheme to reflect an analog to the OCR misread characteristics, a mixed integer linear programming formulation was adopted.
Each relationship of the form of Equation (2) is reconstituted as a set of constraints of the form:

    X_α - X_β + 2K·I_αβ + Z_αβ ≥ K
    X_α - X_β + 2K·I_αβ - Z_αβ ≤ K        (3)
    X_α ≥ 0

where:
I_αβ represents the set of integer variables constrained to take on the value of either one or zero.
Z_αβ is the variable over which the objective function optimization of the form Σ P_αβ·Z_αβ = min is performed. P_αβ is the relative weight or importance value associated with a respective constraint. In the present analysis P_αβ has been set equal to the cumulative occurrence rate of the respective α, β characters.
K is the fetch error tolerance in units of magnitude.
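As a minimal sketch of how this constraint system could be generated mechanically, the following emits one pair of mixed-integer rows per confusable character pair, per the reconstruction of Equation (3) above. The confusion-matrix format, the illustrative rates, and all names here are assumptions rather than the actual MPSX formulation used.

```python
# Sketch: emit the Equation (3) row pair for each confusable pair.
# The confusion table below is illustrative, not measured data.
K = 250  # fetch error tolerance in units of magnitude

confusion = {("H", "M"): 0.012, ("O", "D"): 0.008}  # (alpha, beta): P_ab

def equation3_rows(confusion, K):
    """For each confusable pair (a, b) build
           X_a - X_b + 2K*I_ab + Z_ab >= K
           X_a - X_b + 2K*I_ab - Z_ab <= K
       with I_ab binary, and collect the objective min sum(P_ab * Z_ab)."""
    rows, objective = [], []
    for (a, b), p in confusion.items():
        rows.append((f"X_{a} - X_{b} + {2 * K} I_{a}{b} + Z_{a}{b}", ">=", K))
        rows.append((f"X_{a} - X_{b} + {2 * K} I_{a}{b} - Z_{a}{b}", "<=", K))
        objective.append((p, f"Z_{a}{b}"))
    return rows, objective

rows, objective = equation3_rows(confusion, K)
for row in rows:
    print(row)
```

In a real formulation these rows would be handed to a mixed integer solver, as the patent did with MPSX; the sketch only shows the constraint construction step.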
Up to this point, the system of optimization equations has only taken into account constraints consistent with goal "a" above.
Goal "b"--the avoidance of inordinate degrees of clustering of dictionary entries in any range of magnitude values is accomplished by appending to the system of OCR misread equations (Equation 3) a series of constraints which maintain a somewhat uniform distribution of entries for all segments of the dictionary. These latter constraints are set up by randomly selected legal entries from the dictionary word list and specifying that a predetermined norm distance be main-tained between them in the final dictionary vector structure. For example, the entries CORNWALL and SHERWOOD can be used to yield a vector dictionary infra-structure constraint of the form:
    (X_C + X_O + X_R + X_N + X_W + X_A + X_L + X_L) - (X_S + X_H + X_E + X_R + X_W + X_O + X_O + X_D) ≥ D₁        (4)

which reduces to:

    X_C + X_N + X_A + 2X_L - X_S - X_H - X_E - X_O - X_D ≥ D₁

The value of D₁ represents the norm distance between the entries SHERWOOD and CORNWALL in a dictionary where an initial alphameric assignment scheme has been used which yields good dictionary word list spread characteristics but does not necessarily meet all the OCR constraints as given by Equation (3). The programming array of constraints is completed by adding the additional infrastructure constraints consistent with the simple linear format illustrated by the SHERWOOD, CORNWALL example of Equation (4).
The initial alphameric assignment scheme used to define the D₁ values of Equation (4) was obtained by treating Equation (1) as a vector magnitude computation; that is,

    Y = Σ (N=1 to M) L_N²

and assigning 1 through 26 (L_N² = 1 through 676) to the characters in the alphabet.
Figures 2 and 3 indicate how the numeric assignments are made in a manner that is consistent with that required by the OCR misread magnitude distortion minimization constraints posed by Equations (3). If the numeric scale is to be 1 to 26, the squares of these values will range from 1 to 676. A matrix is shown for these values without specifying the character assignments. The vertical axis of the matrix represents the input characters from the scanned document; the horizontal axis represents the OCR recognition decision. All correct recognitions are indicated by the diagonal of the matrix. All substitutions or rejects are off the diagonal. For example, if H and M are given values of 10 and 9 respectively, and an H is misread as an M, the difference of magnitude will be 100 minus 81, or 19. This would be an appropriate selection since H and M substitution is common.


If the OCR misread distortion is set at plus or minus 250 units (i.e., the nominal value of the factor K on the right hand side of the system of equations generated from Equations (3)), then a relatively simple yet meaningful initial assignment of alpha characters to the numeric values indicated on the axes of the confusion matrix can be derived such that a large number of common recognition errors are contained within these plus or minus 250 unit error intervals. These boundaries are indicated in Figure 2. The initial numeric assignment scheme is shown in Figure 3, where the shaded portion of that figure marks those misreads for which the initial scheme cannot compensate (the numbers within the matrix relate to the relative occurrence rate of the specific misread errors). Empirical analysis with this numbering scheme has shown that although it did not satisfy all constraints of the form of Equations (2), it did transform a word list into a suitably distributed dictionary which did not produce inordinate clustering of dictionary entries. For this reason, this numbering scheme was used to define the norm distance between the randomly selected entries used to formulate the dictionary infrastructure constraints as given by Equation (4). It should be noted that other numbering schemes could have been successfully used as the basis of these infrastructure constraints. The vector magnitude scheme was used because of its simplicity and familiarity.
The resulting formulation of Mixed Integer Linear Programming constraints and objective functions was solved using the facilities of the IBM Mathematical Programming System Extended (MPSX), Program Number 5734-XM4. Similar optimization routines are available from several other software sources. The final output of the programming solution yielded a set of alphameric assignments which minimized hashing distortions due to OCR misread, while maintaining a relatively uniform spread of entries over the dictionary. The alphameric assignment scheme is shown in Table 1.

Table 1. Final Fetch Vector Alphameric Assignment Scheme — Values Generated Using Mixed Integer Linear Programming

A=200  B=36   C=256  D=196  E=144  F=16   G=289  H=144  I=64
J=225  K=441  L=25   M=175  N=185  O=225  P=361  Q=289  R=225
S=324  T=121  U=169  V=100  W=49   X=529  Y=9    Z=484  *=121

1.2 Determination of Fetch Width Constraints: If the misread word were transformed into a magnitude value using the alphameric assignment scheme shown in Table 1, then it could be assumed that the garbled and correct forms of the same word would map into fairly similar (close) magnitude values. If the correct form of each word had been stored in the error correction dictionary with respect to its magnitude, then the DAP yielded by Equation (1) would approach the vicinity of the correct word entry required for completion of error correction processing. However, to successfully perform the decision process which underlies the Regional Context Likelihood Error Correction Apparatus disclosed in U.S. Patent 3,969,700, issued July 13, 1976 and assigned to the instant applicant, it is prerequisite that the misread form of the word be compared in a conditional probabilistic format with the correct version of that word. Hence, the DAP, in itself, is not sufficient for retrieving the data required for the latter phases of OCR error correction. However, the proximity of the DAP to the correct directory entry makes it a natural axis point for the construction of an error interval Δ which will act as the delimiter of a directory fetch range. If properly configured, the fetch range will retrieve from locations adjacent to the DAP a set of address entries which will contain within it, with a predetermined error tolerance, the correct version of the misread input word. As in the preceding example, the selection of ±250 as a fetch width implies an error tolerance, i.e., the possibility that the correct version of the input word is outside the fetch range that was accessed.
The three major OCR misread errors which must be compensated for in the construction of the directory fetch range are reject characters, substitution errors, and segmentation errors. The fetch is most effective for the reject and substitution errors. Segmentation errors are statistically less predictable and therefore not as readily overcome. A misread word can become unretrievable using the VF if successive misreads within the word additively reinforce one another until a delta magnitude greater than 250 is achieved. This situation is comparatively rare in that successive misreads will tend to randomly cancel, to some degree, the magnitude of the deviation that each has respectively added.
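A minimal sketch of the first-dimension access implied by Equation (1) and Table 1 follows; treating the "*" entry of Table 1 as the reject character is our assumption.

```python
# Sketch of the first-dimension access: Equation (1) over the Table 1
# values, bracketed by the +/- 250 fetch width.
TABLE_1 = {
    "A": 200, "B": 36,  "C": 256, "D": 196, "E": 144, "F": 16,
    "G": 289, "H": 144, "I": 64,  "J": 225, "K": 441, "L": 25,
    "M": 175, "N": 185, "O": 225, "P": 361, "Q": 289, "R": 225,
    "S": 324, "T": 121, "U": 169, "V": 100, "W": 49,  "X": 529,
    "Y": 9,   "Z": 484, "*": 121,  # '*' assumed to be the reject character
}
DELTA = 250  # fetch width

def dictionary_access_point(word: str) -> int:
    # Equation (1): the DAP is the sum of the per-character values.
    return sum(TABLE_1[c] for c in word.upper())

def fetch_range(word: str) -> tuple[int, int]:
    dap = dictionary_access_point(word)
    return dap - DELTA, dap + DELTA

print(fetch_range("CORNWALL"))  # (940, 1440) around a DAP of 1190
```

A garbled form whose accumulated misread distortion stays within ±250 units of the correct word's magnitude will still fall inside this fetch range, which is the rationale developed above.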
1.3 Word Length Grouping Within the Storage Apparatus: Organization of the dictionary structure according to word length similarities is used to complement the accessing potential of the VF methodology.

Figure 1 shows a schematic of the fetch process for the misrecognized input word. The magnitude of the input word is calculated using Equation (1). For the word shown, this is 1087. The word length is also used to reduce the number of entries in the fetch.
For OCR data, length cannot be used as an absolute discriminant, since segmentation errors may artificially increase or decrease the word length. A common approach to this problem is to include in the fetch not only words of the same length as the input word, but also all words of adjacent length and even those that differ by as much as two positions. This is done according to a set of rules which are themselves length-dependent. The problem with this approach is that it leads to unacceptably large fetch sizes (on the average, approximately 20 percent of the dictionary). It is again possible to utilize known OCR error propensities to improve the word length discriminant. Since word length changes are caused by some type of segmentation problem (splitting or concatenation), only the words that are prone to be malsegmented by virtue of their composition are entered in more than one of the word length subdivisions. This leads to the concept of a Word Group discriminant.
In a Word Group, all words of the designated length are included as well as words of all other lengths that have a significant probability of being malsegmented to that length.
The implementation of Word Group accessing is dependent on the determination of objective criteria by virtue of which a word's character composition may be evaluated for assessment of the degree of missegmentation propensity and, accordingly, the requirement for multiple Word Group entry. To allow assessment of a dictionary word for inclusion in a Word Group, the following segmentation threshold calculation is performed.
The probability of word segmentation is described functionally by Equation (5):

    P(Wseg) = 1 - P(W̄seg)        (5)

where bar notation indicates the complement of the segmentation event, that is, no segmentation occurrence. From empirical data averaged over all word lengths, 80% of all segmentations will occur in words whose P(Wseg) is greater than 0.6%. It would be reasonable, therefore, to take as a threshold for Word Group duplicative entry any word whose cumulative character segmentation probability surpasses this nominal value or, in other words:

    P(Wseg) ≥ T = 0.6%        (6)

Of course this threshold could be lowered further, but this would add many more duplicative entries while not accommodating significant additional word segmentations. The relationship in Equation (5) can be made more meaningful by posing it in terms of constituent character events as:

    P(Wseg) = 1 - P(ᾱ1seg) · P(ᾱ2seg) · ... · P(ᾱNseg)        (7)

Substituting Equation (7) in Equation (6) results in:

    1 - P(ᾱ1seg) · P(ᾱ2seg) · ... · P(ᾱNseg) ≥ T

or

    P(ᾱ1seg) · P(ᾱ2seg) · ... · P(ᾱNseg) ≤ 1 - T

In terms of logs this finally results in a general threshold relationship for Word Group candidacy of:

    |log P(ᾱ1seg) + log P(ᾱ2seg) + ... + log P(ᾱNseg)| ≥ |log (1 - T)|        (8)

By relating Equation (8) back to the binomial model which underlies its application, we can readily solve for the level of malsegmentation propensity (probability) that will make a word a candidate for duplicative entry in one word group, two word groups, etc. This is performed as follows:
Threshold for one segmentation event:

    |Σ (N=1 to M) log P(ᾱNseg)| ≥ |log (1 - T)|        (9)

where M = the number of characters in the word.

Threshold for two segmentation events:

    P(Wseg2) = [M! / (2!(M-2)!)] · P̄(αseg)² · (1 - P̄(αseg))^(M-2)

where P̄(αseg) is the average character malsegmentation propensity for the word. Hence the word malsegmentation threshold for a dictionary entry to be entered in two adjacent Word Groups becomes:

    [M! / (2!(M-2)!)] · P̄(αseg)² · (1 - P̄(αseg))^(M-2) > T        (10)

For instance, for words of length 8 (M = 8), this can be put in convenient computational form as:

    |log P̄(αseg)| > (1/2) |log [T (2!)(6!) / 8!]|
Similar analytical procedures can be applied to yield the complete spectrum of Word Group thresholds (i.e., for single entry, double entry, triple entry, etc., for each respective word length). In a Word Group using the previously derived malsegmentation propensity thresholds, all words of the designated length are included, as well as words of other lengths that have a significant probability of being malsegmented to that length. Therefore, a single word may appear in several word groups, based on its character composition. For example, in Figure 4 the word CORNWALL appears in Word Group 8, its correct length. CORNWALL, however, has four characters that are prone to splitting segmentation (one character segmented into two). These are C, O, N, and W. It has been determined that there is a significant probability of CORNWALL being misread as a nine-character word, such as CORNVVALL, or a ten-character word such as CIJRNVVALL. Therefore, the word is also included in Word Groups 9 and 10. Similarly, WHITEHALL is initially in Word Group 9.
However, it is also included in Word Group 8 because it has two character pairs, either of which is likely to concatenate into a single character. These are HI and LL.
In summary, the second dimension of the storage apparatus will take the form of autonomous word groups based on alpha-field length. This implies that all N character dictionary entries will be listed together, where N = 1, 2, 3, ..., up to the longest set of dictionary words being considered. Appended to each of these dictionary entry groups will be dictionary words of a different length whose alphabetic composition makes their segmentation propensity exceed a threshold, making them likely candidates for OCR-length distortion effects.
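A minimal sketch of the Word Group candidacy test of Equations (5) through (8) follows. The per-character malsegmentation probabilities are placeholders, and for brevity only the single-event threshold with entry into both adjacent-length groups is shown; the full scheme distinguishes splitting from concatenation and applies the multiple-event thresholds derived above.

```python
# Sketch: Word Group assignment from per-character malsegmentation
# probabilities. The probability table is hypothetical; real values
# come from the measured OCR character transfer function.
P_SEG = {"C": 0.004, "O": 0.003, "N": 0.002, "W": 0.005}  # placeholders
T = 0.006  # duplicative-entry threshold of 0.6% from Equation (6)

def p_word_seg(word: str) -> float:
    # Equation (7): P(Wseg) = 1 - product over characters of (1 - p_seg).
    p_no_seg = 1.0
    for c in word.upper():
        p_no_seg *= 1.0 - P_SEG.get(c, 0.0)
    return 1.0 - p_no_seg

def word_groups(word: str) -> list[int]:
    # Every word enters the group of its own length; words whose
    # malsegmentation propensity exceeds T are also entered in the
    # adjacent-length groups.
    groups = {len(word)}
    if p_word_seg(word) > T:
        groups |= {len(word) - 1, len(word) + 1}
    return sorted(groups)

print(word_groups("CORNWALL"))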
The number of entries in the resultant fetch produced by using both the magnitude and word length group discriminants has been shown in simulation to yield a fetch of between 1 and 2 percent of the number of unique entries in the total dictionary. This reduction in fetch size is achieved while causing only a small decrease in fetch accuracy.


Keyboard Error Correction: The binary reference matrix 12, cluster storage apparatus 22 and regional context apparatus 26 perform post processing functions through their ability to qualify the error (misread) mechanism being addressed in terms of a confusion matrix of events. The techniques that have been successfully applied to OCR error correction are similarly useful for any other system wherein confusion matrices can be compiled.
The error characteristics related to the typewriter keyboard have been extensively studied and quantified. Data concerning over 6,000,000 key strokes have been compiled and partially reduced. Table 2, below, shows a confusion matrix resulting from the examination of slightly over 1,000,000 key strokes.
Examination of the events in Table 2 shows that the error patterns of keyboard substitution misstroke errors fall into three main categories:
1. Visually confusable characters (e.g., l, 1; m, n, etc.)
2. Adjacent keys (key juxtaposition)
3. Same finger position on the other hand.
The above error mechanisms, even more than in OCR, underlie a stable time invariant process which can be meaningfully qualified in confusion matrix format.
Visually comparing the event dispersion between the figures, it is clear that keyboard error patterns are more predictable (i.e., have fewer options) than those related to OCR. It can be shown, by appealing to an entropy model of our post processing system, that the less the dispersion of events in the confusion matrices, the greater the error correction potential of the system.

[Table 2 appears here in the original: a keyboard confusion matrix with the intended letters on one axis and the typed (output) characters on the other, compiled from slightly over 1,000,000 key strokes. The individual matrix entries are not legibly recoverable from this scan.]

It follows that a given level of error correction capability attained on OCR using the preceding techniques can be at least equaled if not surpassed by using the same techniques on keyboard.
Keyboard Vector Dictionary: The keyboard vector dictionary serves the same purpose in keyboard error correction as the cluster storage 22 does in OCR error correction. It allows a word containing misstroke errors to be associated with the segment of the error correction dictionary (wordlist) wherein, among other entries, the correct version of the misstroke-garbled word lies. Presently, for OCR purposes, this Vector Fetch procedure yields an average fetch size of about 1 percent of the word list. By the nature of the sparsity of the keystroke confusion matrix of Table 2, even greater discriminant potential exists in a Vector Dictionary built to reflect the keystroke confusion matrix.
Due to the highly analogous nature of the keystroke and OCR errors from the confusion matrix standpoint, the existing apparatus shown in Figure 6 is directly usable, with the ROS 56, of course, restructured to store clusters of similarly mistyped alpha words. This requires generating the linear program analog of the intercharacter confusions by which the optimal alphanumeric equivalencing scheme is derived.
Maximum Likelihood Misstroke Error Correction (MLMC): The MLMC performs the same function as the regional context maximum likelihood apparatus 26.

The MLMC addresses correction of the four dominant categories of keyboard error. They are character:
o substitution
o transposition
o addition
o deletion

Character Substitution: Substitution due to misstroke appears to be the most common keyboard error type. The OCR substitution error correction techniques in RCML can be used without modification to effect alpha substitution correction in MLMC. All that is required is a data read-in of new keyboard confusion statistics.
Character Transposition: Character transposition error relates to the reversal of the correct sequence of occurrence of two otherwise correct characters. The spelling "Recieve" is an example of a transposition error. This type of error is not related to OCR garbling, and hence is not treated in RCML. However, the progression of operations from the vector fetch cluster storage apparatus through MLMC yields an easily implemented method for effecting character transposition error correction.
Correction of transposition errors follows by use of the vector magnitude as a special entry flag in the MLMC process. The vector magnitude of a transposition-garbled word is the same as that of the original form of the word. Hence those words in the dictionary fetch from cluster storage 22 which have the same magnitude as the garbled word become candidates for character transposition error correction. The basic transposition error correction technique evoked for this special (i.e., garbled word magnitude = dictionary word magnitude) subset of entries in the fetch involves switching the character juxtaposition when impossible mismatch events are encountered between the garbled word and a dictionary word with the same magnitude value.
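Because Equation (1) sums character values without regard to order, a word garbled only by transposition retains its exact magnitude. A minimal sketch of the resulting candidate flag follows, using the relevant Table 1 values.

```python
# Sketch: flag fetched dictionary words whose magnitude exactly equals
# that of the garbled word; these become transposition-correction
# candidates as described above.
VALUES = {"R": 225, "E": 144, "C": 256, "I": 64, "V": 100}  # from Table 1

def magnitude(word: str) -> int:
    return sum(VALUES[c] for c in word.upper())

def transposition_candidates(garbled: str, fetched_words: list[str]) -> list[str]:
    m = magnitude(garbled)
    return [w for w in fetched_words if magnitude(w) == m]

# "RECIEVE" and "RECEIVE" contain the same characters, so their
# magnitudes agree and RECEIVE is flagged for juxtaposition switching.
print(transposition_candidates("RECIEVE", ["RECEIVE", "RECEIVER"]))
```

Only the equal-magnitude subset is examined for the impossible-mismatch, character-switching step; the sketch omits that comparison logic itself.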
Character Addition/Deletion: The error mechanism which governs character addition and deletion on the keyboard appears to be strongly correlated to the digram being typed. If the digram is normally part of a very common trigram, the trigram may inadvertently be typed, resulting in the addition of a spurious character. For example, typing of the digram "th" often results in the spurious addition of "e", yielding "the" when only "th" was called for. Conversely, character deletion seems to be highly correlated to the typing of infrequent trigrams which nevertheless contain a common digram. In transcription, the trigram may be aliased as its shorter, more common digram constituent.
Addition and deletion are highly governed by the above digram/trigram mechanisms, and their correction can be achieved in MLMC by relatively minor modification to the present RCML segmentation and concatenation error correction logic.
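A minimal sketch of the digram/trigram hypothesis generation described above follows. The trigram list is a placeholder, and the actual MLMC modification would rank hypotheses with the RCML conditional probability machinery rather than enumerate them this naively.

```python
# Sketch: generate addition/deletion correction hypotheses from
# digram/trigram statistics. COMMON_TRIGRAMS is hypothetical data.
COMMON_TRIGRAMS = {"the", "ing", "and"}

def deletion_hypotheses(word: str) -> list[str]:
    """If the word contains a digram that heads a common trigram, the
    trigram's final character may have been deleted in transcription."""
    out = []
    for tri in COMMON_TRIGRAMS:
        for i in range(len(word) - 1):
            if word[i:i + 2] == tri[:2]:
                out.append(word[:i + 2] + tri[2] + word[i + 2:])
    return out

def addition_hypotheses(word: str) -> list[str]:
    """If the word contains a common trigram, its final character may be
    a spurious addition on top of the intended digram."""
    out = []
    for i in range(len(word) - 2):
        if word[i:i + 3] in COMMON_TRIGRAMS:
            out.append(word[:i + 2] + word[i + 3:])
    return out

print(addition_hypotheses("the"))  # -> ['th']
print(deletion_hypotheses("th"))   # -> ['the']
```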
POST PROCESSING ERROR CORRECTION SYSTEM:
Figure 7 shows a general block diagram of a post processing, error correction system for generating the most likely form of an input alpha word garbled by a word generating source 13. Word generating source 13 is a generalized apparatus which can comprise either an optical character recognition machine, a speech analyzer generating phoneme characters, or a conventional keyboard apparatus.
Each of these specific types of word generating source has the common characteristic that the alpha words output therefrom have certain characteristic error propensities which can be characterized by a character transfer function. A specific word generating source, namely an optical character recognition machine, is shown in Figure 7; however, a speech analyzer generating phoneme character output or a conventional keyboard generating alphanumeric output could be substituted therefor.
Shown specifically in Figure 7 is the Bayesian Online Numeric Discriminator disclosed in United States Patent 3,842,402, by W.S. Rosenbaum, et al., assigned to the instant assignee, which accepts input from a dual output optical character recognition machine 2. The Bayesian online numeric discriminator 8 outputs alphanumeric characters over line 10 to the binary reference matrix 12 which is described in United States Patent 3,925,761 by W.S. Rosenbaum, et al., and assigned to the instant assignee. The input line 10 corresponds to the input line 2 shown in Figure 4 of the binary reference matrix application. In addition, the line 10 is connected to the gate 16 having a control input from the binary reference matrix 12 over line 14. Line 14 corresponds to line 44 in the binary reference matrix application. The Bayesian online numeric discriminator 8 discriminates numeric character fields from alpha character fields in the output recognition stream from the optical character reader 2. The alpha recognition stream is input to the binary reference matrix over line 10 to detect valid alpha words and invalid alpha words. Valid alpha words are conducted by gate 16 from line 10 to line 18 by means of the control line 14 from the binary reference matrix 12.

If the input alpha word over line 10 is detected as invalid by the binary reference matrix 12, the control line 14 causes the gate 16 to direct the alpha word on line 10 to the output line 20, which is input to the cluster storage apparatus 22 disclosed in the instant application. The cluster storage apparatus 22 accesses, from the read only storage associative memory therein, a group of correct alpha words which have some probability of having been confused with the invalid alpha word of interest input on line 20. This group of potentially correct alpha words is input over line 24 to the regional context maximum likelihood Bayesian error correction apparatus 26 which is disclosed in United States Patent 3,969,700 by W.S. Rosenbaum, et al., and assigned to the instant assignee. The input line 24 corresponds to the input line 3 in Figure 3 of the regional context application, which is input to the dictionary storage 28 therein. The invalid alpha word is input over line 20 to the regional context apparatus, over the line numbered 2 in that application. The invalid alpha word is then processed in the regional context apparatus, where a conditional probability analysis is executed to determine which word from the correct words input over line 24 most closely corresponds to the invalid alpha word output by the OCR. The correct alpha word from the regional context apparatus is output over line 28, which corresponds to line 10 of the regional context application, to the multiplexer 30, which in turn outputs the correct alpha word on output line 32 as the best guess alpha word for the garbled word input from the OCR 2.
DESCRIPTION OF THE CLUSTER STORAGE APPARATUS
A fundamental concept underlying the cluster storage memory apparatus is the physical relationship of words stored in the read only storage memory to the character transfer function of the character recognition machine or keyboard whose output is being analyzed.
The cluster storage memory is an associative memory, the position of the data entry point in the memory being determined by the characteristics of the garbled input word itself. These characteristics of the input word are the word group and the index value.
As is shown in the data flow diagram of Figure 5, the word group is used as the X access address and the index value is used as the Y access address for the read only storage. Selection of a word group and an index value results in the transfer of a dictionary word as a Z axis datum for each value of Y between the index value -Δ and the index value +Δ. This cluster of 2Δ + 1 dictionary words constitutes the group accessed for further use in the regional context maximum likelihood Bayesian error correction apparatus mentioned above.
The data flow diagram of Figure 5 shows, schematically, an arrangement of dictionary words in the read only storage. Twelve word groups are included on the X axis representing word lengths of 2 through 13 characters. The selection of a word group is determined by the length of the input garbled word. As was mentioned above, not all of the dictionary words in a given word group have the same number of characters. The words in the nth group share the common characteristic that the character recognition machine has an appreciable probability of outputting such words with n characters.
This would include all words of the length n and also those that the character recognition machine is likely to segment into n characters.
This arrangement results in multiple entries of some words into different word groups.
Each input garbled word gives rise to 2Δ + 1 accesses of the read only storage. The value of the index is determined from the character content of the input word. The range Δ is fixed and represents the confidence interval within which there is a high probability of finding the correct entry. A given alpha word input from the character recognition machine will thus result in the output of the 2Δ + 1 words from the group stored in the read only storage 56 whose magnitude assignments fall within that range, which will be output to the output buffer 58.
The detailed block diagram of the cluster storage apparatus is shown in Figure 6. A misrecognized alpha word is input from the character recognition machine over line 20. The word separation detector 34 detects the beginning and end points of each word. The character counter 36, connected to the word separation detector 34, counts the number of characters in an input alpha word and outputs that number as the value n over line 38 as the second-dimension access value to the read only storage 56. The misrecognized alpha word input over line 20 is also directed to the character value store 40 which has stored therein the character values Ln shown in Table 1. Each
character in the input alpha word is used to access the corresponding character value Ln, which is output to the input register 42. The input register 42, the adder 44 and the register 46 serve to accumulate the sum of the values Ln for the characters in the misrecognized alpha word input over line 20 from the character recognition machine. When the word separation detector 34 detects the end of the alpha word input over line 20, a signal is output from the character counter 36 to the register 46, outputting the final sum of the values Ln as the median fetch index value to the subtractor 48. The delta register 50 contains the value Δ which, for the character values shown in Table 1, equals 250. The value of Δ is output from the delta register 50 to the subtractor 48 and is subtracted from the median fetch index value which is input from the register 46, yielding the minimum value of the fetch index, which constitutes the first-dimension accessing value input to the read only storage 56.
This minimum fetch index value is output to the adder 52 as the addend and the output from the cyclic counter 54 is input to the adder 52 as the augend, the sum output of which is the first-dimensional accessing address for the read only storage 56. The cyclic counter 54 sequentially outputs integer values from 0 to 2Δ to the adder 52, thereby causing 2Δ + 1 accesses of the read only storage 56. The cluster of 2Δ + 1 candidate words stored in the read only storage 56 is output to the dictionary fetch store 58 and then over the output line 24 for further utilization.
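A minimal software sketch of the accessing sequence just described follows, with the read only storage modeled as a table keyed by word group and fetch index; the element numbers in the comments refer to Figure 6, and the toy data are illustrative only.

```python
# Sketch of the Figure 6 accessing sequence.
DELTA = 250

def cluster_fetch(word: str, char_values: dict, ros: dict) -> list[str]:
    n = len(word)                                  # character counter 36
    median = sum(char_values[c] for c in word)     # adder 44 / register 46
    minimum = median - DELTA                       # subtractor 48
    fetched = []
    for offset in range(2 * DELTA + 1):            # cyclic counter 54
        entry = ros.get((n, minimum + offset))     # ROS 56 access
        if entry is not None:
            fetched.extend(entry)                  # dictionary fetch store 58
    return fetched

# Illustrative use, with a toy ROS holding one word group:
values = {"A": 200, "T": 121}
ros = {(2, 321): ["AT"]}
print(cluster_fetch("AT", values, ros))  # -> ['AT']
```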
As used by the regional context error correction apparatus disclosed in United States Patent 3,969,700, the output line 24 is connected to the regional context apparatus input line 3 to the dictionary storage 28. While the invention has been particularly described with reference to the preferred embodiments thereof, it will be understood by those of skill in the art that the foregoing and other changes in form and detail may be made therein without departing from the spirit and scope of the invention.


Claims (11)

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A cluster storage apparatus for outputting groups of valid words as potential candidates for the correct form of a word misrecognized by or mistyped on a machine comprising:
a two-dimensional array of word read only storage locations, each location having a group of words arranged such that adjacent locations contain words having similar propensities for being misrecognized or mistyped;
a first-dimensional accessing means for addressing said locations based upon the values assigned to the characters of which the input word is composed;
a second-dimensional accessing means for accessing said locations based upon the number of characters in said input word;
said first-dimensional accessing means calculating the first-dimensional address as a magnitude Y = Σ (N=1 to M) L_N, where L_N is the numeric value assigned to each of the M characters;
whereby an input word which is potentially in error can be associated with that portion of the read only storage which contains potential candidates for the correct form of the input word.
2. A cluster storage apparatus according to claim 1 wherein each of the words is an alpha word and said machine is an optical character reader (OCR) machine capable of misrecognizing an alpha word.
3. A cluster storage apparatus according to claim 1 wherein each of the words is a phoneme word and said machine is a speech analyzer machine capable of misrecognizing a phoneme word.
4. A cluster storage apparatus according to claim 1 wherein each of the words is an alpha word and said machine is a keyboard machine capable of mistyping an alpha word.
5. The cluster storage apparatus of claim 2, which further comprises:

said cluster storage being organized so as to minimize the difference in address between alpha words which have similar OCR misread propensities and so as to cluster words of a given character length, as well as words of other lengths that have a significant probability of being malsegmented into said given length;
said propensity being determined by empirical measurement of the OCR character transfer function;
said transfer function being expressed as a series of equations representing each character's probability of being confused into a false output character;
said equations being solved for the optimum character value set which assigns higher numeric values to highly reliable characters and higher numeric values to characters which occur more frequently, and lower numeric values to less reliable characters and lower numeric values to characters which occur less frequently;
said malsegmentation probability being determined by the OCR character transfer function;
said transfer function being expressed as a series of values representing the probability of a character being malsegmented;
said values being used to calculate the probability of each word being malsegmented;
said word malsegmentation probability being compared with a minimum threshold so that words whose malsegmentation propensity exceeds this threshold are stored with words of adjacent lengths;
whereby said cluster storage is organized in light of a global constraint so that no numeric assignment of two characters which can be misrecognized into one another will differ in location by more than a predetermined error interval.
6. The cluster storage apparatus of claim 3, which further comprises:
said cluster storage being organized so as to minimize the difference in address between phoneme words which have similar speech analyzer misread propensities and so as to cluster words of a given character length, as well as words of other lengths that have a signi-ficant probability of being malsegmented into said given length;
said propensity being determined by empirical measurement of the speech analyzer transfer function;
said transfer function being expressed as a series of equations representing each character's probability of being confused into a false output character;
said equations being solved for the optimum character value set which assigns higher numeric values to highly reliable characters and higher numeric values to characters which occur more frequently and lower numeric values to less reliable characters and lower numeric values to characters which occur less frequently;
said malsegmentation probability being determined by the speech analyzer character transfer function;
said transfer function being expressed as a series of values representing the probability of a character being malsegmented;
said values being used to calculate the probability of each phoneme word being malsegmented;
said word malsegmentation probability being compared with a minimum threshold so that words whose malsegmentation propensity exceeds this threshold are stored with words of adjacent lengths;
whereby said cluster storage is organized in light of a global constraint so that no numeric assignment of two characters which can be misrecognized into one another will differ in location by more than a predetermined error interval.
7. The cluster storage apparatus of claim 4, which further comprises:
said cluster storage being organized so as to minimize the difference in address between alpha words which have similar keyboard typographical error propensities and so as to cluster words of a given character length, as well as words of other lengths that have a significant probability of being malsegmented into said given length;
said propensity being determined by empirical measurement of the keyboard character transfer function;
said transfer function being expressed as a series of equations representing each character's probability of being confused into a false output character;
said equations being solved for the optimum character value set which assigns higher numeric values to highly reliable characters and higher numeric values to characters which occur more frequently and lower numeric values to less reliable characters and lower numeric values to characters which occur less frequently;
said malsegmentation probability being determined by the keyboard character transfer function;
said transfer function being expressed as a series of values representing the probability of a character being malsegmented;
said values being used to calculate the probability of each word being malsegmented;
said word malsegmentation probability being compared with a minimum threshold so that words whose malsegmentation propensity exceeds this threshold are stored with words of adjacent lengths;
whereby said cluster storage is organized in light of a global constraint so that no numeric assignment of two characters which can be mistyped into one another will differ in location by more than a predetermined error interval.
8. A post processing error correction system comprising:
a word generating source having a character transfer function which represents the error propensity of multicharacter words output thereby;
a binary reference matrix having an input line connected to the output of said word generating source, to detect invalid alpha words;
said binary reference matrix having an output control line carrying a binary signal which indicates whether the input alpha word is valid;
a gate means connected to said output from said word generating source and having a control input from said control output of said binary reference matrix, for gating the input alpha word from said word generating source onto a first output line in response to a signal on said control line from said binary reference matrix indicating that said alpha word is valid, and gating said input alpha word onto a second output line in response to a signal from said binary reference matrix control line indicating said alpha word is invalid;
a cluster storage apparatus having an input connected to said second output line from said gate means, to access from an associative memory therein, a group of correct alpha words which have some probability of having been confused with said invalid alpha word input on said second output line from said gate means;
a regional context error correction apparatus having an input connected to said second output line from said gate means and having a second input connected to the output from said cluster storage apparatus for accepting said group of correct alpha words;
said regional context error correction apparatus executing a conditional probability analysis to determine which one of the group of correct alpha words most closely corresponds to the invalid alpha word output by said word generating source;
said regional context error correction apparatus outputting the word which most closely corresponds to the invalid alpha word output by said word generating source;
whereby the most probable correct version of a garbled word output from said word generating source is determined.
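The overall claim-8 data path lends itself to a compact end-to-end sketch, under loud assumptions: a Python set stands in for the binary reference matrix, a linear scan over the valid words stands in for the cluster storage associative memory, and difflib similarity stands in for the regional context conditional probability analysis. All names and values here are hypothetical.

```python
from difflib import SequenceMatcher

# Invented character values; confusable characters get adjacent values.
CHAR_VALUES = {'o': 60, 'c': 58, 'f': 55, 'y': 40, 'i': 30, 'l': 28, '1': 27}
VALID_WORDS = {'oil', 'coil', 'foil', 'oily'}
ERROR_INTERVAL = 5

def cluster_address(word: str, default: int = 50) -> int:
    return sum(CHAR_VALUES.get(ch, default) for ch in word)

def is_valid(word: str) -> bool:
    """Stand-in for the binary reference matrix validity test."""
    return word in VALID_WORDS

def cluster_candidates(word: str) -> list[str]:
    """Stand-in for the cluster storage access: valid words whose address
    lies within the error interval of the garbled word's address."""
    addr = cluster_address(word)
    return [w for w in VALID_WORDS
            if abs(cluster_address(w) - addr) <= ERROR_INTERVAL]

def correct(word: str) -> str:
    """Gate valid words straight through; otherwise pick the candidate
    most similar to the garbled word (a crude proxy for the regional
    context conditional probability analysis)."""
    if is_valid(word):
        return word
    candidates = cluster_candidates(word)
    if not candidates:
        return word  # no correction available
    return max(candidates,
               key=lambda w: SequenceMatcher(None, w, word).ratio())

print(correct('oil'))  # valid word passes through unchanged
print(correct('o1l'))  # garbled word corrected to 'oil'
```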
9. The post processing error correction system of claim 8 wherein said word generating source is an optical character recognition machine.
10. The post processing error correction system of claim 8 wherein said word generating source is a speech analyzer and said output alpha words are composed of a sequence of phoneme characters.
11. The post processing error correction system of claim 8 wherein said word generating source is a keyboard having a character transfer function representing the propensity for the commission of typographical errors.
CA230,886A 1974-10-08 1975-07-07 Cluster storage apparatus for post processing error correction of a character recognition machine Expired CA1062811A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US05/513,202 US3969698A (en) 1974-10-08 1974-10-08 Cluster storage apparatus for post processing error correction of a character recognition machine

Publications (1)

Publication Number Publication Date
CA1062811A true CA1062811A (en) 1979-09-18

Family

ID=24042261

Family Applications (1)

Application Number Title Priority Date Filing Date
CA230,886A Expired CA1062811A (en) 1974-10-08 1975-07-07 Cluster storage apparatus for post processing error correction of a character recognition machine

Country Status (13)

Country Link
US (1) US3969698A (en)
JP (1) JPS5143643A (en)
BE (1) BE832767A (en)
BR (1) BR7506545A (en)
CA (1) CA1062811A (en)
CH (1) CH586429A5 (en)
DE (1) DE2541204C3 (en)
ES (1) ES441353A1 (en)
FR (1) FR2287747A1 (en)
GB (1) GB1500203A (en)
IT (1) IT1042379B (en)
NL (1) NL7511834A (en)
SE (1) SE439848B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7781693B2 (en) 2006-05-23 2010-08-24 Cameron Lanning Cormack Method and system for sorting incoming mail

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4136395A (en) * 1976-12-28 1979-01-23 International Business Machines Corporation System for automatically proofreading a document
US4290105A (en) * 1979-04-02 1981-09-15 American Newspaper Publishers Association Method and apparatus for testing membership in a set through hash coding with allowable errors
US4328561A (en) * 1979-12-28 1982-05-04 International Business Machines Corp. Alpha content match prescan method for automatic spelling error correction
US4355371A (en) * 1980-03-25 1982-10-19 International Business Machines Corporation Instantaneous alpha content prescan method for automatic spelling error correction
DE3164082D1 (en) * 1980-06-17 1984-07-19 Ibm Method and apparatus for vectorizing text words in a text processing system
JPS5854433B2 (en) * 1980-09-11 1983-12-05 日本電気株式会社 Difference detection device
US4355302A (en) * 1980-09-12 1982-10-19 Bell Telephone Laboratories, Incorporated Spelled word recognizer
JPS5876893A (en) * 1981-10-30 1983-05-10 日本電気株式会社 Voice recognition equipment
US5600556A (en) * 1982-01-29 1997-02-04 Canon Kabushiki Kaisha Word processor that automatically capitalizes the first letter of sentence
US4556951A (en) * 1982-06-06 1985-12-03 Digital Equipment Corporation Central processor with instructions for processing sequences of characters
US4674066A (en) * 1983-02-18 1987-06-16 Houghton Mifflin Company Textual database system using skeletonization and phonetic replacement to retrieve words matching or similar to query words
US4580241A (en) * 1983-02-18 1986-04-01 Houghton Mifflin Company Graphic word spelling correction using automated dictionary comparisons with phonetic skeletons
US4771401A (en) * 1983-02-18 1988-09-13 Houghton Mifflin Company Apparatus and method for linguistic expression processing
US4654875A (en) * 1983-05-23 1987-03-31 The Research Foundation Of State University Of New York System to achieve automatic recognition of linguistic strings
US4783758A (en) * 1985-02-05 1988-11-08 Houghton Mifflin Company Automated word substitution using numerical rankings of structural disparity between misspelled words & candidate substitution words
JPS61252594A (en) * 1985-05-01 1986-11-10 株式会社リコー Voice pattern collation system
CA1261472A (en) * 1985-09-26 1989-09-26 Yoshinao Shiraki Reference speech pattern generating method
US4829472A (en) * 1986-10-20 1989-05-09 Microlytics, Inc. Spelling check module
JPS63198154A (en) * 1987-02-05 1988-08-16 インタ−ナショナル・ビジネス・マシ−ンズ・コ−ポレ−ション Spelling error corrector
US4926488A (en) * 1987-07-09 1990-05-15 International Business Machines Corporation Normalization of speech by adaptive labelling
JPH01214964A (en) * 1988-02-23 1989-08-29 Sharp Corp European word processor with correcting function
US4994966A (en) * 1988-03-31 1991-02-19 Emerson & Stern Associates, Inc. System and method for natural language parsing by initiating processing prior to entry of complete sentences
US5075896A (en) * 1989-10-25 1991-12-24 Xerox Corporation Character and phoneme recognition based on probability clustering
US5167016A (en) * 1989-12-29 1992-11-24 Xerox Corporation Changing characters in an image
US5062143A (en) * 1990-02-23 1991-10-29 Harris Corporation Trigram-based method of language identification
US5604897A (en) * 1990-05-18 1997-02-18 Microsoft Corporation Method and system for correcting the spelling of misspelled words
US5313527A (en) * 1991-06-07 1994-05-17 Paragraph International Method and apparatus for recognizing cursive writing from sequential input information
JP3422541B2 (en) * 1992-12-17 2003-06-30 ゼロックス・コーポレーション Keyword modeling method and non-keyword HMM providing method
US6064819A (en) * 1993-12-08 2000-05-16 Imec Control flow and memory management optimization
US5768423A (en) * 1994-09-02 1998-06-16 Panasonic Technologies Inc. Trie structure based method and apparatus for indexing and searching handwritten databases with dynamic search sequencing
CA2155891A1 (en) 1994-10-18 1996-04-19 Raymond Amand Lorie Optical character recognition system having context analyzer
US5617488A (en) * 1995-02-01 1997-04-01 The Research Foundation Of State University Of New York Relaxation word recognizer
US5774588A (en) * 1995-06-07 1998-06-30 United Parcel Service Of America, Inc. Method and system for comparing strings with entries of a lexicon
US5963666A (en) * 1995-08-18 1999-10-05 International Business Machines Corporation Confusion matrix mediated word prediction
US5933531A (en) * 1996-08-23 1999-08-03 International Business Machines Corporation Verification and correction method and system for optical character recognition
EP0859332A1 (en) * 1997-02-12 1998-08-19 STMicroelectronics S.r.l. Word recognition device and method
EP0859333A1 (en) * 1997-02-12 1998-08-19 STMicroelectronics S.r.l. Method of coding characters for word recognition and word recognition device using that coding
US6047300A (en) * 1997-05-15 2000-04-04 Microsoft Corporation System and method for automatically correcting a misspelled word
US6269188B1 (en) * 1998-03-12 2001-07-31 Canon Kabushiki Kaisha Word grouping accuracy value generation
AU770515B2 (en) * 1998-04-01 2004-02-26 William Peterman System and method for searching electronic documents created with optical character recognition
US6216123B1 (en) * 1998-06-24 2001-04-10 Novell, Inc. Method and system for rapid retrieval in a full text indexing system
US6393395B1 (en) 1999-01-07 2002-05-21 Microsoft Corporation Handwriting and speech recognizer using neural network with separate start and continuation output scores
US6662180B1 (en) * 1999-05-12 2003-12-09 Matsushita Electric Industrial Co., Ltd. Method for searching in large databases of automatically recognized text
US6480827B1 (en) * 2000-03-07 2002-11-12 Motorola, Inc. Method and apparatus for voice communication
GB0611561D0 (en) * 2006-06-08 2006-07-19 Ibm A validation engine
GB0623236D0 (en) * 2006-11-22 2007-01-03 Ibm An apparatus and a method for correcting erroneous image identifications generated by an ocr device
CN107427732B (en) * 2016-12-09 2021-01-29 香港应用科技研究院有限公司 System and method for organizing and processing feature-based data structures
US10127219B2 (en) * 2016-12-09 2018-11-13 Hong Kong Applied Science and Technology Research Institute Company Limited System and method for organizing and processing feature based data structures
CN109492644A (en) * 2018-10-16 2019-03-19 深圳壹账通智能科技有限公司 A kind of matching and recognition method and terminal device of exercise image
CN111179592B (en) * 2019-12-31 2021-06-11 合肥工业大学 Urban traffic prediction method and system based on spatio-temporal data flow fusion analysis

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3188609A (en) * 1962-05-04 1965-06-08 Bell Telephone Labor Inc Method and apparatus for correcting errors in mutilated text
FR1543777A (en) * 1966-12-23 1900-01-01 Ibm Identifying characters by using context
US3492653A (en) * 1967-09-08 1970-01-27 Ibm Statistical error reduction in character recognition systems
US3651459A (en) * 1970-05-15 1972-03-21 Philco Ford Corp Character distance coding

Also Published As

Publication number Publication date
US3969698A (en) 1976-07-13
DE2541204B2 (en) 1978-03-30
SE439848B (en) 1985-07-01
DE2541204C3 (en) 1978-11-30
DE2541204A1 (en) 1976-04-15
FR2287747B1 (en) 1978-04-07
ES441353A1 (en) 1977-03-16
GB1500203A (en) 1978-02-08
IT1042379B (en) 1980-01-30
BE832767A (en) 1975-12-16
BR7506545A (en) 1976-08-17
JPS573979B2 (en) 1982-01-23
FR2287747A1 (en) 1976-05-07
JPS5143643A (en) 1976-04-14
NL7511834A (en) 1976-04-12
CH586429A5 (en) 1977-03-31
SE7511157L (en) 1976-04-09

Similar Documents

Publication Publication Date Title
CA1062811A (en) Cluster storage apparatus for post processing error correction of a character recognition machine
US3995254A (en) Digital reference matrix for word verification
EP0277356B1 (en) Spelling error correcting system
US5261009A (en) Means for resolving ambiguities in text based upon character context
US5133023A (en) Means for resolving ambiguities in text based upon character context
US4754489A (en) Means for resolving ambiguities in text based upon character context
Boytsov Indexing methods for approximate dictionary searching: Comparative analysis
US7415171B2 (en) Multigraph optical character reader enhancement systems and methods
US5488719A (en) System for categorizing character strings using acceptability and category information contained in ending substrings
US7296011B2 (en) Efficient fuzzy match for evaluating data records
US6470347B1 (en) Method, system, program, and data structure for a dense array storing character strings
US5784489A (en) Apparatus and method for syntactic signal analysis
Xu et al. Prototype extraction and adaptive OCR
US5787197A (en) Post-processing error correction scheme using a dictionary for on-line handwriting recognition
KR970007281B1 (en) Character recognition method and apparatus
JPS61267885A (en) Word dictionary collating device
US3925761A (en) Binary reference matrix for a character recognition machine
US20200104635A1 (en) Invertible text embedding for lexicon-free offline handwriting recognition
AU2018102145A4 (en) Method of establishing English geographical name index and querying method and apparatus thereof
US20140082021A1 (en) Hierarchical ordering of strings
CA1050167A (en) Bayesian online numeric discriminator
Rosenbaum et al. Multifont OCR postprocessing system
US5008818A (en) Method and apparatus for reconstructing a token from a token fragment
JPS5854433B2 (en) Difference detection device
JP2003331214A (en) Character recognition error correction method, device and program