WO1999036879A1 - A method and a device for matching images

A method and a device for matching images

Info

Publication number
WO1999036879A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
displacement
displacement positions
values
overlap
Prior art date
Application number
PCT/SE1998/002459
Other languages
French (fr)
Inventor
Christer FÅHRAEUS
Ola Hugosson
Petter Ericson
Original Assignee
C Technologies Ab
Priority date
Filing date
Publication date
Priority claimed from SE9704924A external-priority patent/SE513059C2/en
Application filed by C Technologies Ab filed Critical C Technologies Ab
Priority to CA002317569A priority Critical patent/CA2317569A1/en
Priority to AU20831/99A priority patent/AU756016B2/en
Priority to JP2000540515A priority patent/JP2002509316A/en
Priority to EP98965357A priority patent/EP1050012A1/en
Priority to IL13707398A priority patent/IL137073A0/en
Priority to BR9814591-6A priority patent/BR9814591A/en
Priority to KR1020007007369A priority patent/KR20010052136A/en
Publication of WO1999036879A1 publication Critical patent/WO1999036879A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V10/7515Shifting the patterns to accommodate for positional errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/142Image acquisition using hand-held instruments; Constructional details of the instruments

Abstract

In a method for matching two images, each consisting of a plurality of pixels and having partially overlapping contents, the degree of correspondence between the contents of the images is determined for different displacement positions representing different overlappings of the images. More specifically, a plurality of numbers are determined for each one of a plurality of said displacement positions. Each number is formed with the aid of pixel values from both images and is used to simultaneously produce overlap assessment values for at least two said displacement positions. The overlap assessment values retrieved are subsequently used to determine the degree of correspondence between the images for the different displacement positions. The method is carried out with the aid of a computer and can be implemented as a computer program.

Description

A METHOD AND A DEVICE FOR MATCHING IMAGES
Field of the Invention
The present invention relates to a method and a device for matching two images, each consisting of a plurality of pixels and having partially overlapping contents, the degree of correspondence between the images being determined for different displacement positions which represent different overlappings of the images.
Background of the Invention
The applicant has developed a pen by means of which text can be recorded by the pen being passed over the text which is to be recorded. The pen, which includes, inter alia, a two-dimensional light-sensitive sensor and a signal-processing unit, records images of the text with partially overlapping contents. Subsequently, the images are put together to form a larger image, which does not contain any redundant information. The characters in the image are identified with the aid of OCR software and are stored in character-coded form in the pen. The pen is described in the applicant's Swedish Patent Application No. 9604008-4, which had not been published when the present application was filed.
In order to reduce the memory requirement of the pen, it is desirable that it be possible to put together a recorded image with the previous image before the next image is recorded. A desired image-recording frequency for the pen is about 50 Hz, which thus means that it shall be possible to put together two images in about 20 ms. The most time-consuming operation when putting the images together is the matching of the images, i.e. determining the relative position in which the best possible correspondence between the contents of the images is obtained. A possible method for matching two images is to examine all possible overlap positions between the images and, for each overlap position, to examine every pair of overlapping pixels, to determine a score for each pair of overlapping pixels, the score depending on how well the values of the pixels correspond, and to then determine which overlap position provides the best match on the basis of the total of the scores for the overlapping pixels in each position. However, this procedure is too slow for the application indicated above.
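By way of illustration only, the exhaustive procedure just described can be sketched in C roughly as follows. The image dimensions, the per-pixel scoring rule (matching white pixels give one point, matching black pixels two points, mismatches nothing, as in the embodiment described further on) and all identifiers are assumptions made for this sketch and are not taken from the original disclosure.

    /* Exhaustive matching: every displacement is tried and every pair of
     * overlapping pixels is scored individually.  Illustrative sketch only. */
    #define W 55
    #define H 76

    /* Score one hypothetical displacement (dx, dy) of image b relative to a. */
    static int score_offset(const unsigned char a[H][W],
                            const unsigned char b[H][W], int dx, int dy)
    {
        int total = 0;
        for (int y = 0; y < H; y++) {
            for (int x = 0; x < W; x++) {
                int bx = x - dx, by = y - dy;      /* pixel of b under a(x, y)   */
                if (bx < 0 || bx >= W || by < 0 || by >= H)
                    continue;                      /* no overlap at this pixel   */
                if (a[y][x] == b[by][bx])
                    total += a[y][x] ? 2 : 1;      /* black/black=2, white/white=1 */
            }
        }
        return total;
    }

    /* Try every displacement; the full scan of overlapping pixels for every
     * single position is what makes this approach too slow in practice. */
    static void best_offset(const unsigned char a[H][W],
                            const unsigned char b[H][W], int *bdx, int *bdy)
    {
        int best = -1;
        for (int dy = -(H - 1); dy <= H - 1; dy++)
            for (int dx = -(W - 1); dx <= W - 1; dx++) {
                int s = score_offset(a, b, dx, dy);
                if (s > best) { best = s; *bdx = dx; *bdy = dy; }
            }
    }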
Summary of the Invention
In the light of the above, one object of the present invention is thus to provide a new method for automatic matching of two images, which method permits faster matching of two images with a given processor than the method described above.
A further object is to provide a device for the implementation of the method.
The objects are achieved by a method according to claim 1 and a device according to claims 18 and 19. Preferred embodiments are stated in the subclaims.
Like the method described above, the invention is based on determining the degree of correspondence between two images, which each consists of a plurality of pixels and which have partially overlapping contents, for different displacement positions representing different overlappings of the images. However, the comparison of the contents of the images is effected in a more efficient manner. More specifically, a plurality of numbers are determined for each one of a plurality of displacement positions, each number being formed with the aid of pixel values from both images. The numbers are used to produce overlap assessment values for at least two displacement positions simultaneously. These overlap assessment values are subsequently used in determining the degree of correspondence between the contents of the images. By this method, the different displacement positions can be examined with a certain degree of parallelism, making it possible to examine the images more quickly than if all the displacement positions are examined sequentially. This parallelism is achieved with the aid of the numbers, which are used to examine at least two displacement positions simultaneously. Since the numbers are based on the contents of each image, it is possible to calculate in advance the overlap assessment values in the cases where the pixel values which make up the numbers overlap completely or partially.
These overlap assessment values can be stored and be retrieved with the aid of the numbers when carrying out the matching. Alternatively, it is possible to define one or more formulae which, when said numbers are used as parameters, result in overlap assessment values for at least two displacement positions. As a further alternative, it is possible to use a gate circuit which produces overlap assessment values for at least two displacement positions as a result for input signals consisting of said numbers.
Naturally, the efficiency increases the more pixels are included in each number since this increases the parallelism. Thus, the overlap assessment values are predefined. What this means is that if a pixel in one of the images has a first given value and the corresponding overlapping pixel in the other image has a second given value, a certain predetermined overlap value is always obtained. The same applies when the overlap assessment values relate to several overlapping pixels. The numerical values of different overlap assessment values which are obtained for different combinations of pixel values can be determined optionally. In this connection, it should be pointed out that, of course, the images are not physically displaced in relation to each other when the method is being implemented, but rather the comparison between the images is carried out for hypothetical displacements.
In a preferred embodiment, the method furthermore comprises the steps of adding up the overlap assessment values for each of said displacement positions, and of using the totals obtained in this manner to determine which of the displacement positions provides the best possible match between the contents of the images. The overlap assessment values which are added together for a certain displacement position preferably reflect the degree of correspondence between all overlapping pixels for that displacement position.
In order further to increase the matching speed, the overlap assessment values are suitably added up in parallel for several displacement positions. The adding-up becomes particularly advantageous if it is carried out in parallel for the overlap assessment values which are produced simultaneously with the aid of said numbers.
Each overlap assessment value can relate to one or more overlapping pixels. In the latter case, a matching speed increase is achieved by the fact that it is not necessary to add up the assessment values for each overlapping pixel for a certain displacement position, but rather overlap assessment values which have already been added up for two or more overlapping pixels can be produced directly.
When the displacement position which provides the best match between the contents of the images has been determined, the images can be put together in this relative position. The putting-together can be effected by the overlapping pixel values in one of the images being rejected or, preferably, by a combined weighting of the pixel values for each overlapping pixel.
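The putting-together step is not detailed further here; a minimal sketch of the simpler variant, in which the overlapping pixel values of one image are rejected, might look as follows. The canvas size, the restriction to non-negative displacements and all identifiers are assumptions made for the sketch.

    /* Put two bitmaps together at the chosen displacement (dx, dy) by keeping
     * image a's pixels wherever the images overlap and rejecting image b's.
     * Assumes 0 <= dx < W, 0 <= dy < H and a zero-initialised canvas; a real
     * implementation would also handle negative displacements and size the
     * canvas exactly. */
    #define W 55
    #define H 76

    static void stitch_reject(const unsigned char a[H][W],
                              const unsigned char b[H][W],
                              int dx, int dy, unsigned char out[2 * H][2 * W])
    {
        for (int y = 0; y < H; y++)            /* paint b at its position ...  */
            for (int x = 0; x < W; x++)
                out[y + dy][x + dx] = b[y][x];
        for (int y = 0; y < H; y++)            /* ... then a on top, so a wins */
            for (int x = 0; x < W; x++)
                out[y][x] = a[y][x];
    }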
The plurality of displacement positions for which numbers are determined can suitably constitute rough displacement positions, and said at least two displacement positions for which the overlap assessment values are produced simultaneously can suitably comprise at least one fine displacement position, representing a smaller displacement from a rough displacement position than the displacement between two rough displacement positions. The second overlap assessment value can relate to the rough displacement position in question or to another fine displacement position.
In the simplest embodiment of the method, the contents of the images are displaced in relation to each other in one direction only. However, the method can also be employed when the images are displaced in two different, preferably perpendicular, directions in relation to each other. In this case, in order to arrive at the position in which the correspondence between the contents of the images is at a maximum, it is suitable to let the rough displacement positions represent different overlappings of the images in the first direction, for example horizontally, and to repeat the method for different overlappings of the images in the other direction, for example vertically.
The rough displacement positions, which thus constitute a subset of the displacement positions examined, are preferably determined by the images being divided into a plurality of rough segments consisting of N x M pixels where N and M are greater than one, the displacement between two adjoining rough displacement positions consisting of a rough segment. The rough segments can thus be achieved by the images being divided into columns or rows, each having the width and the height of several pixels.
The images can be represented in various ways. They can be analogue, but it is preferable that they be digital since this facilitates their processing with the aid of a computer. The pixel values can be represented with different resolutions. However, the method is preferably intended for images which are represented as bitmaps. As mentioned above, the numbers are based on the contents of the two images. In a preferred embodiment the numbers are used as addresses for memory locations, which store the overlap assessment values. In this case, the latter are suitably defined by quite simply being calculated or determined in advance.
Preferably, the addresses are used for addressing a lookup table which, for each address, contains said pre-calculated overlap assessment values for at least two displacement positions. The order in which the pixel values are used in the address is of no importance as long as the same order is used for all addresses and as long as the storing of the overlap assessment values in the lookup table is carried out in a predetermined manner in relation to said order.
The method according to the invention can be implemented entirely in hardware. In that case, the numbers can, as mentioned above, form input signals for a gate circuit which has been designed in such a way that for each given set of input signals the corresponding overlap assessment values are produced as output signals. Thus, in this case, the overlap assessment values are defined by the design of the gate circuit. This method can be advantageous for large images. However, in a preferred embodiment, the method is implemented in software with the aid of a processor which works with a predetermined word length. In this case, the lookup table comprises a plurality of addressable rows, each of which has the predetermined word length and stores the pre-calculated overlap assessment values. By adjusting the width of the table to the word length of the processor, the best possible utilisation of the capacity of the processor is obtained. For example, different rows in the table can be added up in an efficient manner. The various parameters for the method, i.e. the rough displacement positions, the number of overlap assessment values stored for each address, the number of tables, etc., are suitably determined on the basis of the processor utilised and its cache memory in order to achieve the highest speed possible. Preferably, the parameters are chosen so that the two images and all of the pre-calculated overlap assessment values can be contained in the cache memory.
In a preferred embodiment, each number is formed by a first fine segment, which comprises at least two adjoining pixel values from the first image, and by a second fine segment, which overlaps the first fine segment and which comprises as many adjoining pixel values as the first fine segment from the second image, and a third fine segment, which comprises as many adjoining pixel values as the first fine segment from the second image and which overlaps the first fine segment in an adjacent displacement position for which the determination of a plurality of numbers is carried out, i.e. an adjacent rough displacement position. In this way, the number will include all pixel values which can overlap in a rough displacement position and in all fine displacement positions between this rough displacement position and the subsequent rough displacement position, as well as in this subsequent rough displacement position. Accordingly, it is possible to retrieve, with the number, pre-calculated overlap assessment values for all of these displacement positions.
In order to save memory space so that all necessary information can be stored in the cache memory of a processor and thus be quickly accessible, each address is advantageously divided into a first and a second subaddress, the first subaddress, which consists of the pixel values from the first and the second fine segment, being used to simultaneously retrieve overlap assessment values in a first table for overlapping pixels belonging to the first and the second fine segment, and the second subaddress, which consists of the pixel values from the first and the third fine segment, being used to simultaneously retrieve overlap assessment values in a second table for overlapping pixels belonging to the first and the third segment.
In this case, for each address, the first and the second table preferably store an overlap assessment value for each one of said at least two displacement positions, the sum of the two overlap assessment values for a first displacement position, which is retrieved with the first and second subaddresses of an address, constituting an overlap assessment value for all overlapping pixels of the first, the second, and the third fine segment for said first displacement position. The overlap assessment values are preferably stored in the same order with respect to the displacement positions for each address, so that they can be easily added up.
In order further to increase the matching speed, the degree of correspondence between the images is first determined with a first resolution of the images for selection of a plurality of displacement positions and subsequently with a second, higher resolution of the images for the displacement positions selected and adjoining displacement positions. In this way, it is possible to reject whole areas of the image which are not of interest for further examination. More specifically, a device according to the invention has a processing unit which is adapted to implement a method according to any one of claims 1-17. The processing unit can be connected to a unit for recording images and can process the images in real time. The device exhibits the same advantages as the method described above, that is, it permits a quicker matching of the images.
In a preferred embodiment, the invention is implemented in the form of a computer program which is stored in a storage medium which is readable with the aid of a computer. The method according to the invention can be used to examine all possible displacement positions or only a selection. For example, the displacement position in an earlier matching can be used to limit the number of positions which need checking.
The invention is applicable to all types of matching of images. It can be used when two images are to be matched in order to subsequently be put together in the position which affords the best correspondence between the contents of the images. It can also be used to match two images when one wishes to check how the images overlap. The invention is especially applicable when a high matching speed is required.
Brief Description of the Drawings
An example of how the invention can be implemented will be described below with reference to the accompanying schematic drawings.
Fig. 1 shows an image consisting of a plurality of pixels, with one rough segment and one fine segment indicated.
Fig. 2 shows a hypothetical overlapping of two images.
Fig. 3 shows how an address is formed with the aid of pixel values from a plurality of overlapping pixels in two images.
Fig. 4 shows how the overlap assessment values for a plurality of different overlap positions are stored and retrieved simultaneously.
Fig. 5 shows how the overlap assessment values are calculated for various displacement positions.
Fig. 6 shows how overlap assessment values are stored and retrieved in the case where subaddresses are employed.
Fig. 7 shows how overlap values for a plurality of different displacement positions are added up simultaneously.
Description of a Preferred Embodiment
A presently preferred embodiment of a method for matching two images with partially overlapping contents will be described below. The purpose of the method is to find the overlap position which provides the best possible correspondence between the contents of the images. In order to determine what constitutes the best possible correspondence, a predetermined assessment criterion is employed. In this example, the method is implemented in software with the aid of a 32-bit processor with a clock frequency of 100 MHz and with a 16 kB cache memory, in which the images which are to be matched are stored. An example of a processor of this type is StrongARM supplied by Digital. The processor operates under the control of a program which is read into the program memory of the processor.
The way in which the images are picked up and fed into the cache memory of the processor lies outside the scope of the present invention and will therefore not be described in more detail. One way, however, is to use the same technique as in the pen described by way of introduction, that is, to record the images with a light-sensitive, two-dimensional sensor and to store them in a memory, from which the processor can read the images into its cache memory.
Fig. 1 schematically shows a digital image 1 consisting of a plurality of pixels 2 of which some are schematically indicated as squares. The image is to be matched with a like image with partially the same contents.
The image is 55 pixels wide and 76 pixels high. It is stored as a bitmap, each pixel thus having the value one or zero. In this example, the value one represents a black dot and the value zero a white dot.
For the implementation of the method, each image is divided into eleven rough segments 3 in the form of vertical bands, each being five pixels wide and 76 pixels high. Each rough segment is divided into fine segments 4, each consisting of a horizontal row of five adjoining pixels. The rough segments 3 are employed to define a plurality of rough displacement positions. Fig. 2 shows a first rough displacement position, in which two images 1a and 1b are displaced in relation to each other in such a way that one rough segment 3 from each image, indicated by slanting lines, overlaps the other. In a second rough displacement position, two rough segments from each image will overlap, etc. up to an eleventh rough displacement position in which all the rough segments overlap. The difference between two adjoining rough displacement positions is thus one rough segment.
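In code, this segmentation could be expressed along the following lines. The images themselves are stored as bitmaps; the one-byte-per-pixel, row-major layout used in the sketch, and the identifiers, are assumptions made purely for readability.

    /* 55 x 76 bitmap split into 11 vertical rough segments of 5 x 76 pixels;
     * each rough segment consists of 76 fine segments of 5 adjoining pixels. */
    #define IMG_W   55
    #define IMG_H   76
    #define SEG_W    5                      /* width of a rough/fine segment  */
    #define N_ROUGH (IMG_W / SEG_W)         /* 11 rough segments per image    */

    /* Copy the five pixel values of the fine segment in row y of rough
     * segment r (r = 0..N_ROUGH-1, y = 0..IMG_H-1). */
    static void fine_segment(const unsigned char img[IMG_H][IMG_W],
                             int r, int y, unsigned char seg[SEG_W])
    {
        for (int i = 0; i < SEG_W; i++)
            seg[i] = img[y][r * SEG_W + i];
    }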
In each rough segment, four fine displacement positions are defined. These represent a displacement in relation to a rough displacement position by one, two, three and four pixel columns. The rough displacement positions and the fine displacement positions represent displacements between the images in a first direction, viz. horizontally. If the images can also be displaced vertically in relation to each other, a number of vertical displacement positions are defined, each vertical displacement position representing a displacement by one pixel row vertically. The left part of Fig. 3 shows a vertical displacement position for a first image 1a and a second image 1b, which is indicated by dashed lines in the overlap position. The fine segments 4 are employed to determine a number of 10-bit subaddresses which in turn are employed to retrieve pre-calculated overlap assessment values, each providing a measure of the degree of correspondence between one or more overlapping pixels for a certain displacement position. A first subaddress is formed by the five least significant bits of the address being retrieved from a first fine segment 4a in the first image 1a and the five most significant bits being retrieved from the corresponding overlapping fine segment 4b in the second image 1b. The first subaddress thus represents the value for overlapping pixels which one wishes to compare in order to check the degree of correspondence with respect to contents.
Fig. 3 shows an example of how the first fine segment 4a of five bits "10010" is retrieved from the one image 1a and the second fine segment 4b of five bits "01100" is retrieved from the other image 1b and are put together into the address "0110010010".
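A sketch of how the first subaddress could be formed is given below. The description leaves the bit order within each five-pixel group open, so the convention used here (leftmost pixel as the most significant bit of its group) is an assumption; with it, the segments "10010" and "01100" yield the address "0110010010" quoted above (decimal 402).

    #include <stdio.h>

    #define SEG_W 5

    /* Pack five pixel values into five bits, leftmost pixel as the MSB. */
    static unsigned pack5(const unsigned char seg[SEG_W])
    {
        unsigned v = 0;
        for (int i = 0; i < SEG_W; i++)
            v = (v << 1) | (seg[i] & 1u);
        return v;
    }

    /* First subaddress: segment 4a of image 1a in the low five bits,
     * segment 4b of image 1b in the high five bits. */
    static unsigned first_subaddress(const unsigned char a[SEG_W],
                                     const unsigned char b[SEG_W])
    {
        return (pack5(b) << SEG_W) | pack5(a);
    }

    int main(void)
    {
        const unsigned char a[SEG_W] = {1, 0, 0, 1, 0};   /* "10010" */
        const unsigned char b[SEG_W] = {0, 1, 1, 0, 0};   /* "01100" */
        printf("%u\n", first_subaddress(a, b));           /* 402 = 0110010010 */
        return 0;
    }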
The first subaddresses are employed to address two tables of 1024 rows each (the number of possible different addresses). The tables are shown schematically as Tables 1 and 2 in Fig. 4. In the tables, which like the images are stored in the cache memory of the processor, there are pre-calculated overlap assessment values (called scores in the following). This is shown schematically in Fig. 4 by way of an enlargement of a row in each table.
In this example, the scores are calculated as follows. Two overlapping white pixels equal one point, two overlapping black pixels equal two points, while one white and one black overlapping pixel equal zero points. Fig. 5 shows the scores which are stored in the tables in Fig. 4 in the row with the address "0110010010" and how these are calculated. Score 0 is stored in Table 2 and Scores 1-4 are stored in Table 1. For each overlapping pixel, a score is achieved in accordance with the scoring set out above. The scores for all overlapping pixels are added to arrive at the total score or the overlap assessment value which is to be stored in the table in the row with the address in question.
Table 2 in Fig. 4 contains, for each address, the score (Score 0) achieved when the two fine segments overlap completely, i.e. the overlapping which is obtained in the rough displacement position. This score is the total of the scores for five overlapping pixels and is stored in one byte. Table 1 contains, for each address, the scores (Scores 1-4) which are achieved when the two fine segments are partially displaced in relation to each other, i.e. corresponding to various fine displacement positions. These scores are stored in one byte each in a 32-bit word and can accordingly be retrieved at the same time with one reading or one table lookup during one clock cycle. Score 1 relates to the score achieved when the fine segments are displaced by one increment in relation to each other, so that only four overlapping pixels are obtained. Score 2 relates to the score achieved when the fine segments are displaced by two increments in relation to each other, so that only three overlapping pixels are obtained, etc. The displacements reflect the overlapping obtained in the fine displacement positions between the rough displacement position in question and the following rough displacement position.
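The two tables could be pre-calculated along the following lines. The per-pixel scoring rule is the one given above; the decoding of the address and the placing of Score 1 in the least significant byte of the 32-bit word are conventions assumed for the sketch. With these conventions the row for the address "0110010010" holds Score 0 = 1 in Table 2 and the Scores 4, 3, 0, 1 in Table 1, as in the example.

    #include <stdint.h>

    #define SEG_W  5
    #define N_ADDR (1u << (2 * SEG_W))       /* 1024 possible 10-bit addresses */

    static uint32_t table1[N_ADDR];          /* Scores 1-4, one byte per score */
    static uint8_t  table2[N_ADDR];          /* Score 0 (complete overlap)     */

    static int pixel_score(int p, int q)     /* white/white=1, black/black=2,  */
    {                                        /* mismatch=0                     */
        return (p == q) ? (p ? 2 : 1) : 0;
    }

    static void build_tables(void)
    {
        for (uint32_t addr = 0; addr < N_ADDR; addr++) {
            int a[SEG_W], b[SEG_W];
            for (int i = 0; i < SEG_W; i++) {             /* undo the packing: */
                a[i] = (addr >> (SEG_W - 1 - i)) & 1;     /* low bits  = 4a    */
                b[i] = (addr >> (2 * SEG_W - 1 - i)) & 1; /* high bits = 4b    */
            }
            int s0 = 0;                                   /* rough position:   */
            for (int i = 0; i < SEG_W; i++)               /* complete overlap  */
                s0 += pixel_score(a[i], b[i]);
            table2[addr] = (uint8_t)s0;

            uint32_t packed = 0;
            for (int k = 1; k < SEG_W; k++) {             /* fine positions:   */
                int s = 0;                                /* k fewer overlaps  */
                for (int i = 0; i + k < SEG_W; i++)
                    s += pixel_score(a[i + k], b[i]);
                packed |= (uint32_t)s << (8 * (k - 1));   /* Score k, byte k-1 */
            }
            table1[addr] = packed;
        }
    }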
As can be seen from the above, the overlap assessment values which are retrieved using the first subaddress relate only to overlappings between the pixels in the first and the second fine segment for the displacement positions examined. The overlappings which occur in these displacement positions between the pixels in the first fine segment and pixels other than the ones in the second fine segment are not picked up with the aid of the method described above.
In order to permit the examination of these overlappings as well, a second subaddress is formed in addition to the first subaddress. This second subaddress consists of the five pixel values in the first fine segment 4a as well as five pixel values for a third fine segment 4c which adjoins the second fine segment in the second image 1b and which overlaps the first fine segment in the subsequent rough displacement position.
Fig. 6 shows an example of how the second subaddress is formed. The pixel values "10010" from the first fine segment 4a in the first image 1a constitute the five most significant bits of the second subaddress, while the pixel values "10101" from the third fine segment 4c in the second image 1b constitute the five least significant bits in the second subaddress.
The scores or the overlap assessment values for the pixels in the first and the third fine segment which overlap in different displacement positions are stored in a third table, which is indicated as Table 3 in Fig. 7. The scores are, of course, calculated in the same manner as in the case of Table 1, but the scores are stored in "reverse order". Accordingly, Score 4, which relates to one overlapping pixel of the first and the third fine segment, is stored in the first byte of a table row in Table 3. Score 3, which relates to two overlapping pixels of the first and the third segment, is stored in the second byte, etc.
In this way, overlap values for four fine displacement positions can be retrieved with the aid of the first and the second subaddress. By adding up the overlap values for the first and the second subaddress an overlap value is obtained for each displacement position. Each of these overlap values relates to five overlapping pixels for the displacement position in question. Fig. 7 shows Tables 1 and 3, a first and a second subaddress employed to address these tables, and the overlap assessment values in separate rows of the table.
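Table 3 could be pre-calculated in the same spirit. The sketch below scores, for each fine displacement, the pixels of the first fine segment that have moved past the second fine segment and now face the third one, and stores each score in the byte that lines up with the corresponding byte of Table 1; bit and byte order follow the same assumed conventions as above. With these conventions the row for the second subaddress "1001010101" holds 2, 0, 3, 3, as in the walkthrough that follows.

    #include <stdint.h>

    #define SEG_W  5
    #define N_ADDR (1u << (2 * SEG_W))

    static uint32_t table3[N_ADDR];          /* scores for 4a against 4c       */

    static int pixel_score(int p, int q)
    {
        return (p == q) ? (p ? 2 : 1) : 0;
    }

    static void build_table3(void)
    {
        for (uint32_t addr = 0; addr < N_ADDR; addr++) {
            int a[SEG_W], c[SEG_W];
            for (int i = 0; i < SEG_W; i++) {
                a[i] = (addr >> (2 * SEG_W - 1 - i)) & 1; /* high bits = 4a    */
                c[i] = (addr >> (SEG_W - 1 - i)) & 1;     /* low bits  = 4c    */
            }
            uint32_t packed = 0;
            for (int k = 1; k < SEG_W; k++) {
                /* At fine displacement k, the k pixels of 4a that no longer
                 * face 4b face the last k pixels of 4c instead. */
                int s = 0;
                for (int i = 0; i < k; i++)
                    s += pixel_score(a[i], c[SEG_W - k + i]);
                packed |= (uint32_t)s << (8 * (k - 1));   /* same byte lane as
                                                             Table 1, Score k  */
            }
            table3[addr] = packed;
        }
    }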
A description of how the matching of the images is performed is given below. First, a first rough displacement position is chosen. For this position, a first pair of overlapping fine segments is chosen. Suppose that the first fine segment in the first image has the pixel values "10010" and that the second fine segment in the second image has the pixel values "01100", as in the example in Fig. 3. These values are used to form the first binary subaddress "0110010010". Moreover, suppose that a third fine segment which adjoins the second fine segment in the second image has the values "10101". These values are used together with the pixel values for the first fine segment to form the second subaddress "1001010101". The first subaddress is employed to address both the first and the second table. In the example given, the scores 4, 3, 0, and 1 stored in one word are obtained from the first table and the score 1 is obtained from the second table. The second subaddress is employed to address the third table, from which the scores 2, 0, 3, 3 are obtained in the example given. The scores from Tables 1 and 3 are added up in parallel, the total scores 6, 3, 3, 4 being obtained.
When these first overlapping fine segments have been compared, the matching continues with two new overlapping fine segments, until a complete comparison between the overlapping rough segment or segments has been performed.
Each time a word is obtained with the four scores added up for Tables 1 and 3, the word is added to the words previously obtained. The scores for four different displacement positions are thus added up in parallel by means of one single addition. Since the scores are low, a large number of additions can be performed before there is a carry and, consequently, before any storing in a different location has to be done. The scores from the second table are added up in the same way. Fig. 8 schematically shows how the scores for four displacement positions are added up in parallel, the word A representing the word obtained with a first address, consisting of a first and a second subaddress, and the word B representing the word obtained with a second address, consisting of a first and a second subaddress, and the word C representing the total obtained.
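A sketch of this packed, word-wide accumulation is given below, using the example values from the walkthrough above. With the scoring rule used here each byte lane grows by at most ten per pair of fine segments compared, so roughly 25 such words can be accumulated before a lane could overflow into its neighbour; the names and the flushing scheme are assumptions made for the sketch.

    #include <stdint.h>
    #include <stdio.h>

    #define LANES 4                          /* four fine displacement positions */

    /* Unpack the four byte lanes of a packed word into wide running totals. */
    static void flush(uint32_t packed, uint32_t totals[LANES])
    {
        for (int i = 0; i < LANES; i++)
            totals[i] += (packed >> (8 * i)) & 0xffu;
    }

    int main(void)
    {
        /* Word from Table 1 (scores 4, 3, 0, 1) and word from Table 3
         * (scores 2, 0, 3, 3); the least significant byte is displacement 1. */
        uint32_t t1 = 4u | 3u << 8 | 0u << 16 | 1u << 24;
        uint32_t t3 = 2u | 0u << 8 | 3u << 16 | 3u << 24;
        uint32_t sum = t1 + t3;              /* one addition, four lanes at once */

        uint32_t totals[LANES] = {0};
        flush(sum, totals);
        printf("%u %u %u %u\n", totals[0], totals[1], totals[2], totals[3]);
        /* prints 6 3 3 4, the totals given in the example */
        return 0;
    }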
When all overlapping fine segments have been examined for the first rough displacement position, the procedure is repeated for the second and subsequent rough displacement positions until all rough displacement positions have been examined. If the images can also be displaced vertically in relation to each other, the method is repeated for each vertical position, the images thus first being displaced one row vertically in relation to each other and subsequently all rough and fine displacement positions being examined, whereupon the images are displaced to the next vertical displacement position and are examined and so on until all vertical displacement positions have been scanned.

When all displacement positions have been examined a score will have been obtained for each position. With the assessment criterion used in this example, the highest score will represent the displacement position which provides the best overlapping of the contents of the images.

In a presently preferred embodiment of the invention, an overlap assessment is first carried out in the manner described above with a lower resolution of the images than the one with which they are stored. In this example, a resolution of 25 x 30 pixels is used. The purpose of this is the quick selection of relevant displacement positions for closer examination of the correspondence between the contents of the images. Subsequently, the method is repeated for the images in these and adjoining displacement positions for the original resolution.
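The overall search order described above, with the vertical displacement positions scanned row by row, all horizontal displacement positions examined for each row, and the highest score retained, can be summarised by the self-contained C sketch below. It deliberately scores pixel by pixel and leaves out the rough/fine segmentation, the table lookups and the two-resolution pass; the image sizes, the data and all names are invented for the illustration.

    #include <stdio.h>

    #define H 4
    #define W 6

    /* Naive overlap score: count pixels that agree in the region where the
     * displaced images overlap.  Only meant to show how the displacement
     * positions are enumerated and compared. */
    static int overlap_score(int a[H][W], int b[H][W], int dy, int dx)
    {
        int score = 0;
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                int yy = y + dy, xx = x + dx;
                if (yy >= 0 && yy < H && xx >= 0 && xx < W)
                    score += (a[y][x] == b[yy][xx]);
            }
        return score;
    }

    int main(void)
    {
        int img_a[H][W] = {{1,0,0,1,0,1},{0,1,1,0,1,0},{1,1,0,0,1,1},{0,0,1,1,0,0}};
        int img_b[H][W] = {{0,1,0,1,1,0},{1,0,1,0,0,1},{1,0,0,1,1,0},{0,1,1,0,0,1}};
        int best_dy = 0, best_dx = 0, best = -1;

        for (int dy = -(H - 1); dy <= H - 1; dy++)        /* vertical positions   */
            for (int dx = -(W - 1); dx <= W - 1; dx++) {  /* horizontal positions */
                int s = overlap_score(img_a, img_b, dy, dx);
                if (s > best) { best = s; best_dy = dy; best_dx = dx; }
            }

        printf("best displacement: dy=%d dx=%d score=%d\n", best_dy, best_dx, best);
        return 0;
    }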
In the above example, the overlap assessment values are stored in three different tables. This has been done in order to utilise the processor optimally. In the case of other processors, it may instead be suitable to store all overlap assessment values in one table or in more than three tables. This can be determined by the skilled person on the basis of the above description.

Claims

1. A method for matching two images, each consisting of a plurality of pixels and having partially overlapping contents, the degree of correspondence between the contents of the images being determined for different displacement positions which represent different overlappings of the images, comprising the following steps: - determining a plurality of numbers for each of a plurality of displacement positions, each number being formed with the aid of pixel values from both images,
- using each number to produce overlap assessment values for at least two displacement positions simultaneously, and
- using the overlap assessment values in determining the degree of correspondence between the contents of the images for the different displacement positions.
2. A method according to claim 1, further comprising the following steps:
- adding up the overlap assessment values for each of said displacement positions, and
- using the totals obtained in this manner to determine which of the displacement positions provides the best possible correspondence between the contents of the images.
3. A method according to claim 1 or 2, wherein the overlap assessment values are added up in parallel for a plurality of displacement positions.
4. A method according to any one of the preceding claims, wherein at least one of the overlap assessment values relates to more than one overlapping pixel.
5. A method according to any one of claims 1-4, further comprising the step of putting together the images in the displacement position which provides the best possible correspondence between the images.
6. A method according to any one of claims 1-5, wherein said plurality of displacement positions for which the numbers are determined constitute rough displacement positions and said at least two displacement positions for which overlap assessment values are produced simultaneously comprise at least one fine displacement position representing a smaller displacement from a rough displacement position than the displacement between two rough displacement positions.
7. A method according to claim 6, wherein the rough displacement positions represent different overlappings of the images in a first direction, and further comprising the step of repeating the method for different overlappings of the images in a second direction.
8. A method according to claim 6 or 7, wherein the rough displacement positions are determined by the images being divided into a plurality of rough segments consisting of N x M pixels, where N and M are greater than one, the displacement between two adjoining rough displacement positions consisting of a rough segment.
9. A method according to any one of the preceding claims, wherein the images consist of bitmaps.
10. A method according to any one of the preceding claims, wherein the numbers constitute addresses of memory locations, which store said overlap assessment values consisting of pre-calculated values.
11. A method according to claim 10, wherein the addresses are employed to address at least one lookup table which, for each address, contains the pre-calculated overlap assessment values for at least two displacement positions.
12. A method according to claim 11, which method is performed with the aid of a processor working with a predetermined word length and wherein said at least one lookup table comprises a plurality of addressable rows, each having the predetermined word length and storing said pre-calculated overlap assessment values.
13. A method according to claim 12, wherein the storing of the overlap assessment values is performed in such a manner that all overlap assessment values as well as the images which are to be matched can be contained in a cache memory in the processor.
14. A method according to any one of the preceding claims, further comprising the step of forming each number of a first fine segment, which comprises at least two adjoining pixel values from the first image, and of a second fine segment, which overlaps the first fine segment and which comprises as many adjoining pixel values as the first fine segment from the second image, and of a third fine segment, which comprises as many adjoining pixel values as the first fine segment from the second image and which overlaps the first fine segment in an adjacent displacement position, for which the determination of a plurality of numbers is carried out.
15. A method according to claim 14 in combination with claim 10, wherein each address is divided into a first and a second subaddress, the first subaddress, which consists of the pixel values from the first and the second fine segment, being used to simultaneously retrieve overlap assessment values in a first table for overlapping pixels belonging to the first and the second fine segment, and the second subaddress, which consists of the pixel values from the first and the third fine segment, being used to simultaneously retrieve overlap assessment values in a second table for overlapping pixels belonging to the first and the third segment.
16. A method according to claim 15, wherein, for each address, the first and the second table store an overlap assessment value for each one of said at least two displacement positions, and wherein the sum of the overlap assessment values for a first displacement position, which is retrieved using the first and second subaddresses of an address, constitutes an overlap assessment value for all overlapping pixels between the first, the second, and the third fine segment for said first displacement position.
17. A method according to any one of the preceding claims, wherein the degree of correspondence between the images is first determined with a first resolution of the images for selection of a plurality of displacement positions, and is subsequently determined with a second, higher resolution of the images for the displacement positions selected and adjoining displacement positions.
18. A device for matching two images, each consisting of a plurality of pixels and having partially overlapping contents, c h a r a c t e r i s e d by a processing unit which is adapted to implement a method according to any one of claims 1-17.
19. A device for matching two images, each consisting of a plurality of pixels and having partially overlapping contents, which device comprises a storage medium, which is readable with the aid of a computer and in which is stored a computer program for implementing the method according to any one of claims 1-17.
PCT/SE1998/002459 1997-12-30 1998-12-30 A method and a device for matching images WO1999036879A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
CA002317569A CA2317569A1 (en) 1997-12-30 1998-12-30 A method and a device for matching images
AU20831/99A AU756016B2 (en) 1997-12-30 1998-12-30 A method and a device for matching images
JP2000540515A JP2002509316A (en) 1997-12-30 1998-12-30 Image matching method and apparatus
EP98965357A EP1050012A1 (en) 1997-12-30 1998-12-30 A method and a device for matching images
IL13707398A IL137073A0 (en) 1997-12-30 1998-12-30 A method and a device for matching images
BR9814591-6A BR9814591A (en) 1997-12-30 1998-12-30 Method and device for combining images
KR1020007007369A KR20010052136A (en) 1997-12-30 1998-12-30 A method and a device for matching images

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
SE9704924A SE513059C2 (en) 1997-12-30 1997-12-30 Methods and apparatus for matching images
SE9704924-1 1997-12-30
US09/024,641 US6563951B2 (en) 1997-12-30 1998-02-17 Method and a device for matching images
US09/024,641 1998-02-17

Publications (1)

Publication Number Publication Date
WO1999036879A1 true WO1999036879A1 (en) 1999-07-22

Family

ID=26663174

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE1998/002459 WO1999036879A1 (en) 1997-12-30 1998-12-30 A method and a device for matching images

Country Status (9)

Country Link
EP (1) EP1050012A1 (en)
JP (1) JP2002509316A (en)
CN (1) CN1284188A (en)
AU (1) AU756016B2 (en)
BR (1) BR9814591A (en)
CA (1) CA2317569A1 (en)
ID (1) ID26716A (en)
IL (1) IL137073A0 (en)
WO (1) WO1999036879A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101546428B (en) * 2009-05-07 2011-08-17 西北工业大学 Image fusion of sequence infrared and visible light based on region segmentation
JP6756209B2 (en) 2016-09-16 2020-09-16 富士通株式会社 Information processing device, genome data aggregation program, and genome data aggregation method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4949391A (en) * 1986-09-26 1990-08-14 Everex Ti Corporation Adaptive image acquisition system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5054089A (en) * 1988-12-29 1991-10-01 Kabushiki Kaisha Toshiba Individual identification apparatus
US5640468A (en) * 1994-04-28 1997-06-17 Hsu; Shin-Yi Method for identifying objects and features in an image

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6839453B1 (en) 2000-05-16 2005-01-04 The Upper Deck Company, Llc Method and apparatus for authenticating unique items such as sports memorabilia
US7027623B2 (en) 2000-05-16 2006-04-11 The Upper Deck Company, Llc Apparatus for capturing an image
US7865043B2 (en) 2003-12-16 2011-01-04 Anoto Ab Method, apparatus, computer program and storage medium for recording a movement of a user unit

Also Published As

Publication number Publication date
CN1284188A (en) 2001-02-14
EP1050012A1 (en) 2000-11-08
CA2317569A1 (en) 1999-07-22
BR9814591A (en) 2001-10-30
AU756016B2 (en) 2003-01-02
AU2083199A (en) 1999-08-02
ID26716A (en) 2001-02-01
IL137073A0 (en) 2001-06-14
JP2002509316A (en) 2002-03-26

Similar Documents

Publication Publication Date Title
JP2986383B2 (en) Method and apparatus for correcting skew for line scan images
JP5183578B2 (en) Method and system for finding document images in a document collection using local visual two-dimensional fingerprints
JP5180156B2 (en) System and method for finding picture images in an image collection using localized two-dimensional visual fingerprints
US4003024A (en) Two-dimensional binary data enhancement system
US4723298A (en) Image compression technique
US6563951B2 (en) Method and a device for matching images
US5029228A (en) Image data filing system
US4648119A (en) Method and apparatus for forming 3×3 pixel arrays and for performing programmable pattern contingent modifications of those arrays
US4371865A (en) Method for analyzing stored image details
US5386482A (en) Address block location method and apparatus
JPS645351B2 (en)
JPH0550783B2 (en)
WO1999036879A1 (en) A method and a device for matching images
WO1999036880A1 (en) A method for a device for matching images of body-specific patterns
MXPA00006583A (en) A method and a device for matching images
JP3132771B2 (en) Image storage device and image processing device having the same
JP3016687B2 (en) Image processing device
JPS6343788B2 (en)
JPH0199174A (en) Shape recognizing device
SE513058C2 (en) Method for matching two images of body specific patterns, each image with several pixels and partially overlapping contents, degree of correspondence between images is determined for different displacement positions
JP2619971B2 (en) Image processing device
JP2954218B2 (en) Image processing method and apparatus
JPH0927895A (en) Image compositing device
JPH08171639A (en) Method and device for inspecting image
JP2504373B2 (en) Character recognition device

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 137073

Country of ref document: IL

Ref document number: 98813404.7

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AT AU AZ BA BB BG BR BY CA CH CN CU CZ CZ DE DE DK DK EE EE ES FI FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: PA/a/2000/006583

Country of ref document: MX

Ref document number: 20831/99

Country of ref document: AU

Ref document number: 1020007007369

Country of ref document: KR

ENP Entry into the national phase

Ref document number: 2317569

Country of ref document: CA

Ref document number: 2317569

Country of ref document: CA

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1998965357

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1998965357

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1020007007369

Country of ref document: KR

WWG Wipo information: grant in national office

Ref document number: 20831/99

Country of ref document: AU

WWW Wipo information: withdrawn in national office

Ref document number: 1020007007369

Country of ref document: KR

WWW Wipo information: withdrawn in national office

Ref document number: 1998965357

Country of ref document: EP