WO1996005571A1 - Method and apparatus for locating and extracting data from a two-dimensional code - Google Patents

Method and apparatus for locating and extracting data from a two-dimensional code

Info

Publication number
WO1996005571A1
Authority
WO
WIPO (PCT)
Prior art keywords
edge
data
symbol
line
pixels
Application number
PCT/US1995/010172
Other languages
French (fr)
Inventor
Daniel J. Nelson, Jr.
Original Assignee
International Data Matrix, Inc.
Application filed by International Data Matrix, Inc. filed Critical International Data Matrix, Inc.
Publication of WO1996005571A1 publication Critical patent/WO1996005571A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1439 Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1443 Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
    • G06K7/1456 Methods for optical code recognition including a method step for retrieval of the optical code determining the orientation of the optical code with respect to the reader and correcting therefore

Definitions

  • The present invention concerns locating a two-dimensional code symbol in a bounding box within a field of view and extracting data contained in the symbol, and more particularly concerns improvements in the location and verification of a two-dimensional code symbol having a distinctive perimeter in a cluttered field of view.
  • Modern one dimensional and two dimensional code symbols such as bar codes, two dimensional or stacked bar codes, two dimensional matrix codes, and the like are used for object identification, information encodation, manufacturing and inventory control, item authentication, and a variety of other purposes.
  • Use of these code symbols for marking and/or identifying objects requires that an image of the code symbol be captured by a reading device, and the captured image processed to determine the information encoded in the code symbol.
  • One known image acquisition arrangement uses an area capture means such as a video camera or CCD area array for capturing an image of the field of view including the code symbol, together with a frame grabber device for saving a video frame of the captured image, and memory for storing the captured image frame as, e.g., a bit map of the captured image pixels.
  • Other techniques include using a linear array to capture a "line” or a portion of a line of image data at a time, and a raster scan of a laser beam or other flying spot scanner, to capture a two dimensional image of a field of view, and equipment to accumulate a bit map of a two dimensional area.
  • the image must be processed to locate the symbol in the "field of view," i.e., the portions of the bit map corresponding to the two-dimensional symbol image in the field of view. More specifically, a "bounding box" is defined within the field of view, which defines the boundary of the pixel area in which the search for the symbol will occur.
  • the located code symbol must then be processed to extract the information recorded or encoded in the symbol.
  • The bounding box is typically spaced inside the field of view, e.g., by ten to twenty pixels (of a 512 x 484 pixel array), and may be configured to envelop only a subset of the field of view.
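  • A minimal sketch of such a bounding box computation, in Python, is given below; the 512 x 484 frame size and the ten to twenty pixel inset come from the passage above, while the function name and the (x_min, y_min, x_max, y_max) return format are illustrative assumptions rather than part of the disclosed method.
```python
# Sketch: derive an inset bounding box from the captured field of view.
# The 512 x 484 frame and 10-20 pixel inset are taken from the text above;
# names and return format are illustrative.

def bounding_box(frame_width=512, frame_height=484, inset=10):
    """Return (x_min, y_min, x_max, y_max) of the pixel area to be searched."""
    return (inset, inset, frame_width - 1 - inset, frame_height - 1 - inset)

if __name__ == "__main__":
    print(bounding_box())          # (10, 10, 501, 473)
    print(bounding_box(inset=20))  # (20, 20, 491, 463)
```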
  • the analysis typically starts the several probes at different margins of the bounding box such that each probe traverses a different path. Once the several probe evaluations are completed, the identified black-white and white-black transition locations are evaluated to determine whether or not the identified transitions correspond to those of a symbol to be recognized. See, for example, Wang U.S. Patent 5,304,787.
  • Another known multi-probe technique is that offered by the assignee hereof, International Data Matrix, Inc., Clearwater, Florida USA, which is embodied in its commercial decoder systems sold under the trade names Model C-102 and Model C-302.
  • Referring to FIGS. 7A - 7D, in this prior art multi-probe technique, four series of parallel probes are used to acquire data points. Each series is made of a plurality of parallel probes that pass from one margin of the bounding box to the opposite margin, such that successive probes are laterally spaced in parallel to walk across the entire bounding box. The four series are left to right, right to left, top to bottom, and bottom to top probes. The transitions located by each multi-probe series are evaluated to identify whether there are any straight lines.
  • A grid of visual data cell centers is then mapped onto the "located" symbol and the data contents of the cell centers are extracted.
  • the extracted data is then formed into a bit stream, and a decode is attempted of the bit stream, using the decode process and apparatus of the aforementioned model C-102 and/or C-302 devices. If the decode is unsuccessful, then another set of four series of multiple probes are attempted, with the distance between the parallel probes in each series being reduced so that the probes are closer together. If the second attempt is unsuccessful, the routine will quit.
  • One of the problems with these multiple probe techniques is that the symbol locating process requires a significant amount of image bit analysis to declare whether or not a valid symbol exists.
  • The known multiprobe systems spend significant amounts of data processing time processing transitions caused by clutter. If they are unable to distinguish the symbol from the clutter, these systems spend further time attempting to verify whether or not the clutter corresponds to a symbol, and/or extracting data from the clutter. In some cases, the system may determine that the clutter is clutter and reject it, and in other cases, the system will fail to reject the clutter and yield invalid data. Further, the clutter transitions may be located several times by the same or different probes. These difficulties further delay locating the correct symbol and result in a slow rate of reading symbols.
  • It is an object of the present invention to provide improved methods and apparatus for processing captured image data to locate a symbol in a cluttered field of view, and to extract data therefrom.
  • the present invention concerns methods and apparatus for processing a bit map of a captured two-dimensional image to locate preselected symbols, if present in the image, and to quickly reject clutter in the image, thereby to improve the speed and reliability of locating the symbol.
  • One aspect of the invention is directed to using a first or main probe of the captured image to find a first required color transition that may or may not be part of an edge (or side) of the symbol to be located.
  • If a first transition is found, a second probe, also called a deviation probe, is then applied.
  • the main probe and the deviation probe or probes search for required color transitions which are likely to be part of an edge of the same symbol.
  • As used herein, "color transition" refers to a specific color transition of interest, e.g., black to white or white to black, and does not refer to other color transitions which exist and may be ignored. If a deviation probe fails to find such a second transition, then the suspected "edge" is rejected. In such case, the main probe resumes looking for another first transition at the location where it had stopped. If, however, all of the deviation probes find a second transition, and those second transitions are likely to correspond to the suspected symbol edge located by the main probe, then a plurality of additional probes, called feeler probes, are applied. The feeler probes are used to locate and define the edge located by the main and/or deviation probes.
  • the feeler probes define a selected side or sides of the symbol perimeter.
  • the located and defined portion of the symbol i.e., the edge or a selected side or sides, is preferably tested for validity. If the dimensions are unacceptable, then the edge is rejected and the main probe resumes. Otherwise, processing will continue.
  • certain symbol perimeter parameters are acquired and tested to determine whether the located side(s) correspond to a valid symbol.
  • the data of the symbol is then extracted.
  • the data is extracted in the form of a bit stream. The acquisition of the image to be processed, and the use of the data extracted from the symbol, form no part of the present invention.
  • Preferably, there are two deviation probes, which are spaced apart from and straddle the main probe and look for second transitions that might be part of the same edge that was located by the main probe.
  • The spacing of the deviation probes is such that they can define a slope of an edge based on at least two of the three transitions located.
  • The degree of linearity of the first transition located by the main probe and the two second transitions located by the deviation probes can be used to reject an edge that is not sufficiently linear to correspond to a valid edge. Inasmuch as only two points are needed to define the slope of a line, only the main probe and one deviation probe are actually needed for locating a straight edge side of a symbol. For symbols having a circular edge feature to be located, at least three points on the curve are required.
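  • As a hedged illustration of this linearity screen, the sketch below rejects a suspected edge when the main-probe transition lies more than half a cell away from the line through the two deviation-probe transitions; the half-cell tolerance is taken from the detailed description below, and the point format and function name are assumptions.
```python
# Sketch of the linearity screen: given the first transition (main probe) and
# two second transitions (deviation probes), reject the suspected edge if the
# three points are not nearly collinear.  Tolerance: half a cell width.

def edge_is_linear(p_main, p_dev_a, p_dev_b, cell_width):
    (x0, y0), (x1, y1), (x2, y2) = p_main, p_dev_a, p_dev_b
    # Perpendicular distance of the main-probe point from the line through
    # the two deviation-probe points.
    dx, dy = x2 - x1, y2 - y1
    length = (dx * dx + dy * dy) ** 0.5
    if length == 0:
        return False
    distance = abs(dx * (y0 - y1) - dy * (x0 - x1)) / length
    return distance <= cell_width / 2.0

# Three transitions on a nearly vertical edge, estimated cell width 5 pixels.
print(edge_is_linear((100, 240), (101, 230), (99, 250), cell_width=5))  # True
print(edge_is_linear((110, 240), (101, 230), (99, 250), cell_width=5))  # False
```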
  • The method is used to locate and extract data from a DATA MATRIX symbol, which is a two dimensional matrix array code that is available from International Data Matrix, Inc., Clearwater, Florida USA, the assignee hereof, and is described in, e.g., U.S. Patents 4,939,354, 5,329,107, and 5,324,923.
  • the DATA MATRIX symbol has a distinctive rectangular (normally square) perimeter of two adjacent solid lines intersecting at a first corner, and two adjacent lines of alternating light and dark areas (so-called "dashed edges") intersecting at a second corner (opposite the first corner) .
  • the data is encoded with error detection and correction and arranged within the perimeter in a matrix array of rows and columns of visual data cells ("cells") .
  • the cells all have the same nominal dimensions (i.e., a root cell size) and each cell represents either a binary 1 or a binary 0.
  • One such method finds the DATA MATRIX symbol location through edge validation, which includes the following sequence of steps.
  • a) (i) If the edge passes both the ratio and validation tests, then begin step 5; (ii) if the edge fails either the ratio or the validation tests, then restart step 3 with the same starting locations as the failed edge; b) if no edge was found and the bounding box is reached by the main probe, begin step 4. 4. If no edge was found, change the main probe vertical offset using a calculation based on an approximate size of the matrix to be located (the +/- summing toggle). At each pass through step 4 the summing toggle will increase in magnitude and alternate in direction, so that as it is alternately added to and subtracted from the center of the bounding box, the starting point will be spaced further and further away each time. The search will terminate before exceeding the limits of the bounding box.
  • 5. The bit stream can be provided to a DATA MATRIX decode machine embedded in a suitable controller device (e.g., Model No. C-102) to attempt a decode of the bits placed into the decode bit stream.
  • A signal indicating whether the decode was valid or invalid can be communicated to the user.
  • a processing system for analyzing data corresponding to a scanned image to locate symbols which may be present in a bounding box of the field of view, includes an input port operable to receive a data set corresponding to the pixel image data, as may be obtained by a scanning device, a memory storage device operable to store the data set and a plurality of processing system instructions, a processing unit for operating on the data set to identify color transitions corresponding to portions of a symbol in the bounding box, and optionally an output port for providing an output data set corresponding to the information of the symbol located in the bounding box.
  • the processing unit retrieves and executes at least one of the processing system instructions from the memory storage device.
  • The processing system instructions direct the processing unit to examine selected subsets of data corresponding to selected lines of pixels in the bounding box of the field of view, to identify color transitions in the lines of pixels which may correspond to portions of the identifiable edge of a symbol, if one exists.
  • The instruction sets operate to conduct searches of selected subsets of the data in a sequence in which a first probe search examines a first subset for a first transition, in response to which a second probe search examines a second subset for a second transition, and in response to which a third probe search examines a plurality of third data subsets to define the suspected edge.
  • One embodiment for using and/or distributing the present invention is as software stored to a storage medium.
  • the software includes a plurality of computer instructions for controlling one or more processing units for processing data corresponding to a captured field of view of an image which may include a preselected symbol having an identifiable edge and data, so that the symbol can be located and validated, and the data of the symbol extracted, in accordance with the principles of the present invention.
  • the computer will include the necessary search and test algorithms, or parts thereof to be used.
  • The storage media utilized may include, but are not limited to, magnetic storage, optical memory, and/or semiconductor chips. Such semiconductor chips include RAM, ROM, EPROM, EEPROM, and flash non-volatile code storage devices.
  • FIGS. 1 and 1A - 1H are flow chart diagrams of the operation of the invention in accordance with a preferred embodiment of the present invention.
  • FIGS. 2A - 2C are diagrams of different applications of the method of locating a two dimensional code symbol in a field of view containing clutter, in accordance with embodiments of the present invention.
  • FIG. 3 is a block schematic diagram of the apparatus of the invention.
  • FIGS. 4A - 4C illustrate examples of deviation probes of the present invention.
  • FIGS. 5, and 5A - 5B illustrate methods of locating the corner of the symbol of FIG. 2A;
  • FIGS. 6A and 6B are sample DATA MATRIX symbols used in the examples discussed below.
  • FIGS. 7A-7D are drawings of a prior art multiprobe system for locating a symbol.
  • Referring to FIGS. 1, 1A - 1H and 2A - 2C, a preferred embodiment of the present invention is shown.
  • Probes 10, 20, and 30 are sequentially used to locate an edge of a two dimensional symbol 1 in a memory storage corresponding to bounding box 2.
  • Probes 32, and optionally probes 35 are used to define further the corners of the edge of symbol 1. Once the edge is acquired, it is tested for validity.
  • Bounding box 2 is preferably spaced 10 to 20 pixels inside the field of view, although it could be configured to include all or any portion of the field of view.
  • The drawings depict, and the discussion herein describes, the probes and stored image data containing the code symbol in visual form, i.e., as scanned in the field of view and searched within bounding box 2, and not as the data is actually stored in memory, e.g., in a bit map.
  • the data stored in memory is pixel color data corresponding to the scanned image pixels stored in a prescribed sequence of address locations, and not necessarily by rows and columns corresponding to the image pixels of the captured field of view.
  • the pixel values may be binary, but are more typically a grey scale of color ranging from 0-255 pixel units (based on 8 bit values) .
  • probes are described in terms of passing in a dimension (i.e., a line) in the bounding box 2 of the field of view, rather than examining the contents of memory addresses corresponding to the image or pixel line of interest.
  • (x,y) coordinates in connection with the location of a color transition or pixel should be understood to refer to the relative location of the pixel in a bounding box in the case that a cartesian coordinate system is used to define the bounding box and the corresponding memory locations.
  • (x,y) coordinate should be interpreted broadly to include other coordinate systems or ways of defining locations in the bounding box and not as limited to cartesian systems.
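  • The sketch below illustrates one way the stored pixel data might be addressed from an (x, y) bounding box coordinate, assuming a row-major bit map of one grey-scale byte (0-255) per pixel and a 512 pixel line width; the patent only requires that the pixels be stored in some prescribed sequence of address locations, so this layout is an assumption.
```python
# Sketch: map an (x, y) bounding-box coordinate to a stored pixel value,
# assuming a row-major bit map with one grey-scale byte per pixel.
FRAME_WIDTH = 512

def pixel_value(bit_map, x, y, width=FRAME_WIDTH):
    """Return the 0-255 grey value of the pixel at coordinate (x, y)."""
    return bit_map[y * width + x]

frame = bytearray(512 * 484)       # an empty captured frame, for illustration
print(pixel_value(frame, 10, 20))  # 0
```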
  • the present invention is applied to the location and extraction of a particular symbol the appearance of which is known in advance. This reflects the commercial use of such symbols, wherein typically only one symbol type is used in a given application. Also, in accordance with the present invention, for the particular symbol chosen, either uniform symbol characteristics are to be used (a dedicated system) , or the user may provide certain information as to some basic structure and some estimated dimensions, to facilitate the processing of the acquired image, as discussed below.
  • the present invention is able to discard quickly data that is not likely to correspond to a valid symbol without wasting very much processing time. This reduces the time needed to locate a valid symbol.
  • the present invention is particularly well suited to processing symbols of the same type having different information encoded therein.
  • the invention may be adapted to recognize different types of symbols, each of which also may have certain user provided estimated symbol parameters input to speed up the processing.
  • a device in accordance with the invention could test for a valid edge of each type of symbol in a predetermined sequence of tests.
  • the process will continue for the other symbol types until either a valid symbol is found or the data is rejected for all symbol types.
  • Generating the appropriate test sequence to identify the different symbol types is believed to be within the skill of a person of ordinary skill in the art.
  • This embodiment is useful in an environment wherein the same scanning apparatus is used to process more than one type of symbol. However, such complexity will reduce the speed at which a symbol can be detected according to the number of different symbols to be detected.
  • the symbol 1 used in the preferred embodiment is the DATA MATRIX symbol previously described, samples of which are illustrated in FIGS 6A and 6B.
  • the user provides an estimate of the number of image pixels in the visual cell diameter (i.e., the height or width for a square visual cell), the border color (i.e., black on white or white on black) , and the number of rows and columns of the visual cells (including the perimeter) . These values are used for performing certain tests to validate data and avoid processing clutter 3.
  • the invention also may be adapted to determine automatically the type of symbol to be processed and to acquire from the image itself some or all of the data that the user would otherwise provide, to render the operation less user dependent. This is in part discussed below.
  • the main routine starts at step 50 and passes to step 51 where a main probe 10 is initialized.
  • The main probe 10 is provided with a range of examination ("range"), which extends the width of bounding box 2, in which main probe 10 searches for a first transition.
  • Main probe 10 starts from the left side in the center of the bounding box 2 and begins a left to right probe, searching for a first transition possibly corresponding to an edge of the symbol 1. If no first transition is found within the range of main probe 10 (i.e., the width of bounding box 2) , the routine passes to step 52 where the direction of main probe 10 is reversed.
  • Main probe 10 then resumes probing, but now travels along the same horizontal row of bounding box 2 in the right to left direction. If again no edge is found within the range, then the routine passes to step 54 where main probe 10 is vertically moved to a different left side starting location, spaced from the initial center row, and begins another left to right search for a transition.
  • a summing toggle is maintained which is based on the user provided number of rows and columns of the symbol, and the number of pixels per data cell area of the symbol, to set the next main probe 10 starting point.
  • the starting point for the next main probe 10 is vertically above or below the center of bounding box 2 by a multiple of the number of times that main probe 10 has been started at a different location. Typically, even multiples are spaced above the center line, and odd multiples are spaced below the center line. In this way, the routine will continue to space main probe 10 progressively further away from the center line to scan across enough of the entire bounding box 2 to find any symbol 1 that is present.
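  • A hedged sketch of this alternating starting-row selection is given below; the step size of one estimated symbol height and the generator form are illustrative assumptions, while the alternation of growing offsets about the center row and the termination at the bounding box limits follow the description above.
```python
# Sketch of the "summing toggle": successive main-probe starting rows grow in
# offset and alternate above and below the centre row of the bounding box,
# stopping before the bounding box limits are exceeded.  The step size used
# here (one estimated symbol height) is an illustrative assumption.

def main_probe_rows(center_row, step, top_limit, bottom_limit):
    yield center_row
    attempt = 1
    while True:
        offset = attempt * step
        # Odd attempts go below the centre row, even attempts above.
        row = center_row + offset if attempt % 2 else center_row - offset
        if not (top_limit <= row <= bottom_limit):
            return                      # stop before leaving the bounding box
        yield row
        attempt += 1

print(list(main_probe_rows(center_row=242, step=50, top_limit=10, bottom_limit=473)))
# [242, 292, 142, 392, 42]
```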
  • If a first transition is found, the routine passes to step 53 where the slope of the edge is determined, if possible, using deviation probes 20 to locate second transitions.
  • Deviation probes 20 are spaced above and below the first transition found by main probe 10. If each of deviation probes 20 finds a second transition within its range (a range that is substantially less than the range of main probe 10, as discussed elsewhere), the locations are noted. The slope and deviation are then determined based on the noted transition locations.
  • the first and second transition points are tested for linearity within a defined tolerance limit of the cell width divided by two. If the points are not within the limit, then the first edge is rejected and the routine returns to step 51 to resume main probe 10 search. If the points are within the limit, then the routine passes to step 56 and begins a WALK_ABOVE_EDGE routine.
  • the linearity test at step 55 may be omitted. However, including the test provides a coarse filter to find and reject edge data that is likely to be invalid.
  • the WALK_ABOVE_EDGE routine at step 56 begins an edge validation by using a series of small horizontal feeler probes 30.
  • Feeler probes 30 are spaced apart, vertically above main probe 10 and are spaced horizontally to straddle, and thus respect, the predicted slope of the possible edge found by deviation probes 20.
  • the horizontal spacing may be adjusted based on the location of the preceding transition located and the calculated deviation and slope. Alternately, the horizontal spacing may be based on the calculated slope, the first transition location, and the distance from the first transition.
  • feeler probes 30 start at the transition located by main probe 10 and continue to walk up the suspected edge above the center line, until they can no longer find a transition in the feeler probe range corresponding to the suspected edge.
  • the feeler probe range is smaller than the deviation probe range.
  • the start x location must back up enough to be outside the suspected edge, and the stop x location must be inside the suspected edge to allow for imperfections in the print. As is apparent, for other deviations, and for other directions of walking, the signs of the calculations change according to the slope.
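  • A minimal sketch of one way the next feeler probe range could be placed, following the description above, appears below; the exact back-up and stop margins (one cell outside, half a cell inside) and the sign convention for the slope are illustrative assumptions.
```python
# Sketch: place the range of the next horizontal feeler probe while walking up
# a suspected, roughly vertical edge.  The range starts outside the predicted
# edge position and stops inside it, and is shifted horizontally according to
# the slope; the particular margins and sign convention are assumptions.

def next_feeler_range(last_x, last_y, slope, cell_diameter, step):
    """Return (start_x, stop_x, y) for the next feeler probe above the edge."""
    y = last_y - step                          # move one step up the edge
    predicted_x = last_x - slope * step        # follow the predicted edge
    start_x = int(predicted_x - cell_diameter)         # back up outside the edge
    stop_x = int(predicted_x + cell_diameter / 2)      # end inside the edge
    return start_x, stop_x, y

print(next_feeler_range(last_x=200, last_y=240, slope=0.1, cell_diameter=5, step=5))
# (194, 202, 235)
```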
  • the routine passes to step 57 where it begins a WALK_BELOW_EDGE routine.
  • the WALK_BELOW_EDGE routine continues the suspected edge validation by walking a series of the same horizontal feeler probes 30 downward from the transition found by main probe 10 to the bottom corner of the edge, i.e., until they can no longer find a required transition.
  • feeler probes 30 are also uniformly spaced apart, vertically below main probe 10, and spaced horizontally based upon the slope found by deviation probes 20.
  • the dimension of the first edge is tested at step 58 to reject edges that are considered too small to be a part of a valid symbol.
  • The routine then passes to step 59 where the WALK_EDGE_FROM_BOTTOM routine begins. Similar to the test at step 55, the test at step 58 also provides a coarse filter test to reject data that is likely to be invalid, based however on different criteria. As will become apparent, after each stage of testing for valid data, progressively more time is spent to further validate the symbol. Thus, the omission of the tests at steps 55 and/or 58 may result in spending time attempting to validate data that could have been detected earlier and is ultimately rejected. However, the use of these tests for every first edge, at a cost of some processing time, improves the overall speed at which the probes can work to locate a valid symbol 1.
  • The WALK_EDGE_FROM_BOTTOM routine begins to walk a series of small vertical feeler probes 30 to the right, from the bottom corner found during the WALK_BELOW_EDGE routine at step 57.
  • Feeler probe 30 window range placement also is adjusted, if necessary, based on the slope of the first edge found by main probe 10 and deviation probes 20.
  • Feeler probes 30 continue to walk to the right of the edge until they can no longer find a transition.
  • In the case where main probe 10 was passing from right to left, the WALK_EDGE_FROM_BOTTOM routine operates as already described, except that vertical feeler probes 30 walk to the left from where the WALK_BELOW_EDGE routine terminated.
  • The routine tests the dimension of the two sides defined by feeler probes 30 by combining the distances between the top edge and bottom edge (as found in the WALK_ABOVE_EDGE and WALK_BELOW_EDGE routines), based on where the feeler probes 30 terminated relative to the main probe 10, and comparing that dimension with the length determined during the WALK_EDGE_FROM_BOTTOM routine. If the comparison yields a ratio that is within a preselected limit indicative of a valid symbol 1, then the routine passes to step 62.
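  • The ratio comparison might be sketched as below; for the nominally square DATA MATRIX perimeter the combined above-plus-below walk should be comparable in length to the walk from the bottom corner, and the acceptance window used here is an illustrative assumption rather than the preselected limit of the patent.
```python
# Sketch of the side-ratio screen: the first side length (walk above + walk
# below) is compared with the length found walking from the bottom corner.
# The 0.75-1.33 acceptance window is an illustrative assumption.

def sides_ratio_ok(above_len, below_len, bottom_len, low=0.75, high=1.33):
    if bottom_len == 0:
        return False
    ratio = (above_len + below_len) / bottom_len
    return low <= ratio <= high

print(sides_ratio_ok(48, 52, 101))  # True: sides nearly equal
print(sides_ratio_ok(48, 52, 30))   # False: second side far too short
```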
  • The centers of the first and second dashed edges of a DATA MATRIX symbol are determined and tested. If the centers of the first and second dashed edges both show a potential for a DATA MATRIX symbol, e.g., that the dashed edges are at least 80% valid, then the routine passes to the data extraction phase at step 63. If the ratio does not show the potential for a DATA MATRIX symbol, e.g., the dashed edges are less than 80% valid, then the routine passes to step 65. At step 65, the routine queries whether both a WALK_EDGE_FROM_TOP and a WALK_EDGE_FROM_BOTTOM routine have occurred. If the top walk has not yet occurred, then the routine passes to step 61 where a WALK_EDGE_FROM_TOP routine applies another sequence of feeler probes 30 to define the "second" edge of the symbol. In this step, feeler probes 30 attempt to validate the top edge by using small vertical probes and walking from the point where the WALK_ABOVE_EDGE routine terminated, in the direction of travel of main probe 10, to where the end of the second solid edge of the DATA MATRIX symbol should be. In other words, if main probe 10 found the first edge passing from left to right, then the routine will walk to the right from the top corner of the first edge.
  • If instead main probe 10 was passing from right to left, then the routine at step 61 will walk to the left. Similar to the WALK_EDGE_FROM_BOTTOM routine, vertical feeler probes 30 are used, walking to the right or left. Also, the probe range placement is based on the slope of the initial edge found by main probe 10 and deviation probes 20.
  • Once step 61 has concluded the WALK_EDGE_FROM_TOP routine, the ratio of the walk above edge plus walk below edge to the walk edge from top is compared again at step 60 and tested against the ratio for an acceptable matrix. If the ratio test shows no potential for a DATA MATRIX symbol at step 60, and the top and bottom walks have been performed, as tested at step 65, the routine returns to step 51. At that point main probe 10 resumes probing from the point where it had stopped, shifted by an amount so as not to detect the same edge that was rejected, and searches for another first transition. Otherwise, after edge validation at step 62, the data extraction will commence at step 63.
  • The data extraction step 63 concerns identifying each matrix cell, typically passing from left to right relative to the solid borders found by the WALK_ABOVE_EDGE, WALK_BELOW_EDGE, and WALK_EDGE_FROM_TOP (or BOTTOM) routines.
  • the data is extracted and provided to a bit stream at step 64 for processing by, e.g., a commercial device for decoding DATA MATRIX symbols.
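  • A heavily hedged sketch of this cell-by-cell extraction is given below; it interpolates cell centers on a grid spanned by three located corners of the perimeter, thresholds the stored grey value at each center, and appends the result to a decode bit stream. The bilinear placement of the centers, the fixed threshold, the dark-cell-equals-1 convention, and all names are assumptions; the actual routine works relative to the validated solid and dashed edges as described in the patent.
```python
# Sketch: sample a rows x cols grid of cell centres spanned by three located
# perimeter corners, threshold each centre, and build a decode bit stream.

def extract_bit_stream(pixel_at, corner_a, corner_b, corner_c, rows, cols, threshold=128):
    """pixel_at(x, y) -> grey value.  corner_a: top of the solid left edge,
    corner_b: the corner where the two solid edges meet, corner_c: the far end
    of the solid bottom edge (names and geometry are illustrative)."""
    (ax, ay), (bx, by), (cx, cy) = corner_a, corner_b, corner_c
    bits = []
    for r in range(rows):
        for c in range(cols):
            u = (c + 0.5) / cols          # fraction along the bottom edge
            v = (r + 0.5) / rows          # fraction along the left edge
            x = bx + u * (cx - bx) + v * (ax - bx)
            y = by + u * (cy - by) + v * (ay - by)
            # Dark cell -> binary 1 (dark-on-light symbol assumed).
            bits.append(1 if pixel_at(int(round(x)), int(round(y))) < threshold else 0)
    return bits
```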
  • the decoder device then conventionally processes the data bit stream at step 64 for the appropriate use.
  • At step 100, the user provides the estimated visual cell dimension and the number of rows and columns of the matrix.
  • image data is acquired or accessed, e.g., a RAM or video RAM memory device filled with a field or frame of video data and made available for processing in accordance with the invention.
  • main probe 10 is initialized with beginning location, a direction, and a range of examination.
  • main probe 10 is advanced from the given starting location, in the given direction through the range.
  • Main probe 10 is initialized in step 120 to have a range of examination and direction that starts at the leftmost pixel of the center line of the image in bounding box 2, and advances horizontally across the center line of data in bounding box 2.
  • Main probe 10, represented in FIGS. 2A - 2C as a solid black line, follows a bit stream analyzing routine that analyzes a series of address locations corresponding to a line of image pixels and searches for a change in value of the image (i.e., the stored value) from white to black, i.e., a transition (herein defined as a "transition" or an "edge transition") which may be part of a symbol.
  • the transition is preferably determined based on a comparison of the values of the pixels exceeding a selected threshold.
  • A suitable threshold is a percent contrast as between the dark and the light areas, e.g., a 20% difference, measured in pixel values.
  • The required transition, white on black or black on white, is provided by the user. A default to black on white is typically provided. In cases where the symbol to be captured is not likely to be in the middle of the field of view, main probe 10 could begin at a more appropriate location. It also is to be understood that the transition may be a white to black transition, depending on the nature of the symbol on the marked object.
  • relative values of contrast are used in the image/pixel analysis rather than absolute values, so that the same routines will work for positive symbols, e.g., white to black edge transitions, and negative symbols, e.g., a black to white edge transition.
  • the routine could be modified to toggle from a positive image search to a negative image search if the former does not find a valid symbol, using the same data in memory, before determining that there is no symbol in the field of view.
  • the analysis is conducted by using a WINDOW routine, which is illustrated in FIG. 1C.
  • The WINDOW routine assumes that there is a valid range of examination for the probe, and will examine the range for a transition.
  • the WINDOW routine thus reports back either that an edge was found, and its location, or that no edge was found.
  • the WINDOW routine is initialized by being provided an examination range, a starting location, the direction for the probe, and a value N corresponding to the spacing between pixels to be evaluated for a transition.
  • The WINDOW routine selects a first pixel (memory address) and acquires the color value P1 of that pixel (memory address contents).
  • the WINDOW routine selects a second pixel, which is spaced N pixels away from the first pixel, and acquires the value P2 of that second pixel.
  • The values of the first and second pixels P1 and P2 are compared.
  • If the difference between the compared values exceeds the selected contrast threshold, a transition is declared.
  • the transition is checked at step 243 to determine if it is a correct color transition, i.e., the desired white to black or black to white. If the transition is not correct, the routine passes to step 248. If it is correct, the transition location is stored at step 245. The WINDOW routine then ends and returns to step 135 (FIG. 1A) .
  • At step 248, the routine tests to see if the probe has reached the end of its range. If not, then the first pixel is shifted at step 250 and at step 220 the WINDOW routine selects the value of the now adjusted pixel (new memory address and contents) as the current P1 value, again acquires a second pixel value P2 that is spaced N pixels from the current P1 value at step 230, and compares the current values P1 and P2 at step 240.
  • The test-to-test increment of the pixel P1 is one pixel, so that the probing window is shifted one pixel at a time across the image pixel line of the bounding box.
  • Steps 248 and 250 are thus used to control the pixel address P1 in the probe range so that the WINDOW routine can be used for all probing with minimal computational time requirements. If at step 248 the probe is at the end of the range, then the WINDOW routine ends and the routine returns to step 135.
  • The value N initialized at step 210 may be, for example, 2, 4, or some other integer value suitable for locating a color transition likely to be of a symbol edge.
  • Sharp edges permit using fewer pixels between P1 and P2 than fuzzy edges.
  • the precise value is a compromise to be selected by the user based on the type, size, and quality of the symbol to be processed, and resolution of the scanning equipment.
  • the thickness of the edge is thus one value that may be estimated by the user and used to control the WINDOW routine.
  • the value of N may be calculated as a function of the estimated visual cell diameter, e.g., 50%.
  • the contrast threshold limit is typically preset based on the range of anticipated values possible, and the level of contrast desired for the visual cells corresponding to extremes of the grey scale.
  • A 20% contrast limit corresponds to a difference between P1 and P2 of 51 pixel units.
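  • The WINDOW probing step might be sketched as below; the pixel access callback, parameter names, and boundary handling are assumptions, while the N-pixel spacing between P1 and P2, the contrast threshold of about 51 grey units for a 20% limit, the required transition direction, and the optional P3 confirmation follow the description above.
```python
# Sketch of the WINDOW routine: P1 and P2 are two pixels N apart along the
# probe line; a transition is declared when their difference exceeds the
# contrast threshold in the required direction.  An optional third pixel P3,
# spaced a further offset past P2, confirms the transition for poor print.
# The caller is assumed to leave room for the N-pixel window within the range.

def window(pixel_at, start_x, y, direction, probe_range, n,
           dark_to_light=False, threshold=51, confirm_offset=None):
    """Scan `probe_range` pixels from (start_x, y); return the x of the first
    required colour transition, or None if no transition is found."""
    step = 1 if direction > 0 else -1
    for i in range(probe_range):
        x1 = start_x + i * step
        p1 = pixel_at(x1, y)
        p2 = pixel_at(x1 + n * step, y)
        diff = p2 - p1                       # positive means getting lighter
        if abs(diff) < threshold:
            continue                         # contrast too low: no transition
        if (diff > 0) != dark_to_light:
            continue                         # wrong colour transition: ignore
        if confirm_offset is not None:       # optional P3 confirmation
            p3 = pixel_at(x1 + (n + confirm_offset) * step, y)
            if abs(p3 - p1) < threshold:
                continue
        return x1                            # transition location
    return None

# Example: a synthetic scan line that goes from light (200) to dark (40) at x = 30.
line = [200] * 30 + [40] * 30
print(window(lambda x, y: line[x], start_x=0, y=0, direction=+1,
             probe_range=30, n=4))  # 26: the window first straddles the edge here
```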
  • the contrast level of the test used in the WINDOW routine preferably can be altered by the user for the environment of the symbol. For high contrast symbols, a higher contrast threshold can be used, which provides more reliable detection of a likely edge in the first instance. For low contrast symbols, using a lower contrast threshold may result in more processing of bad data until the bad data is rejected, but it also will likely enable detecting symbols that otherwise would be missed.
  • a user-controlled input could be provided to adjust the contrast threshold in view of the symbol quality and contrast to be processed.
  • good contrast DATA MATRIX symbols printed in ink typically have a contrast of approximately 50-70%.
  • the discrimination of light and dark areas is highly reliable.
  • the present invention is capable of processing scanned images having grey scale values to locate and validate the symbol 1, and, except for the case of a dashed edge validation routine, does not require determining whether any particular pixel (or area) corresponds to a binary 1 or 0 value, until data is to be extracted.
  • The WINDOW routine may acquire a third pixel data point P3 that is N + Y pixels from the first point P1, and test the value of pixel P3 relative to P1 (or optionally P2), to confirm a transition.
  • This test may be based on the difference between P3 and P1 also being outside the contrast limit (or optionally P3 and P2 being within the contrast limit, or both).
  • While this provides an added step which slows the processing time, it also reduces the likelihood of misinterpreting clutter 3, such as an ink spot, as an edge transition.
  • A switch may be provided that allows the user to indicate whether the print quality of the symbols to be processed is such that the third pixel point P3 confirmation is or is not used.
  • a test is made at step 137 to determine if probe 10 has searched in both directions. If it has not, then at step 138 the direction of main probe 10 is reversed and at step 125, the next main probe 10 will commence. If it has, then at step 139 the main probe 10 is shifted to start a new row or to stop at step 150 if the sequence is completed.
  • further tests are conducted in response to locating the first transition.
  • the slope of the suspected edge is determined, relative to the horizontal x axis of the direction of the probe. This is achieved by the use of a pair of deviation probes 20 to find, if they can, a pair of second transitions which are capable of being a part of the same edge as the first transition.
  • the second transitions are different from the first transition found by main probe 10.
  • the pair of deviation probes 20 are respectively spaced apart a distance which is sufficient to determine with reasonable accuracy the slope of an edge of a symbol.
  • deviation probes 20 are spaced on either side of main probe 10 by the same distance which is approximately twice the estimated visual cell diameter.
  • each deviation probe 20 is spaced from main probe 10 such that there are four cell diameters between deviation probes 20.
  • deviation probes 20 may be spaced further apart, and vice versa. The limit on the spacing between deviation probes 20 is practical in that it is more desirable to find two points on the first edge than to miss one of the points because a deviation probe was spaced too far from main probe 10.
  • deviation probes 20 do not start from the margin of bounding box 2 and then proceed across the bounding box 2 until a transition or the other margin of the field of view is reached. Instead, each deviation probe 20 searches in a limited range for a transition.
  • the range which is preferably the same length as the spacing between deviation probes 20, is centered on an axis of the first transition detected by main probe 10.
  • the length of the range of deviation probes 20 and the spacing of deviation probes 20 from main probe 10 are such that if the first transition located by main probe 10 is part of a symbol edge, and if main probe 10 located that first transition at an acceptable location on the symbol edge, then the symbol edge also should be located by both deviation probes 20, somewhere within the probe 20 ranges. If the two transitions are located and confirmed as a possible edge, then the edge is further tested to determine whether or not it is part of an edge of a perimeter of the symbol to be located.
  • Each deviation probe 20 preferably uses the aforementioned WINDOW routine to look for a transition.
  • One deviation probe 20 is applied at a time, with the first probe starting at one end of the determined range at step 304 and searching for a transition using WINDOW at step 306; if no transition is found in the probe range, the routine passes to step 350 (and returns to step 147 of the main routine). If a transition is found, the coordinates are saved at step 308.
  • the routine aborts quickly without spending the time and energy to process the data for the second deviation probe 20.
  • the second deviation probe 20 undergoes the same process, preferably subsequent to the first deviation probe 20.
  • the second deviation probe 20 starts at one end of its probe range at step 314, and searches for a transition at step 316. If no transition is found, the routine passes to step 350 (and returns to step 147) . If a transition is found, the coordinates are saved at step 318.
  • a typical range for deviation probes is the dimension of four visual cells of data, e.g., based on the user provided estimate or a calculated value. However, for small size matrices, smaller ranges or a user provided multiple may be provided. For a 9 x 9 matrix, the first and second distances are each typically twice the estimated cell diameter.
  • Deviation probes 20 preferably advance through their limited ranges in the same direction as main probe 10 advances.
  • the starting location is determined based on the direction and the (x,y) coordinates of the first transition so that the deviation probe ranges are centered on a line intersecting one of the (x,y) coordinates and perpendicular to the direction of the deviation probes 20.
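  • A minimal sketch of this placement for a horizontal main probe is shown below; the two-cell offset and four-cell range come from the description above, while the names and the returned tuple format are illustrative.
```python
# Sketch of deviation-probe placement for a horizontal main probe: each probe
# runs two estimated cell diameters above or below the main probe row, and its
# limited range (about four cell diameters) is centred on the x coordinate of
# the first transition.

def deviation_probe_ranges(first_x, first_y, cell_diameter, direction):
    """Return two (start_x, y, range_length) tuples, one per deviation probe."""
    spacing = 2 * cell_diameter             # offset from the main probe row
    range_len = 4 * cell_diameter           # limited range, centred on first_x
    step = 1 if direction > 0 else -1
    start_x = first_x - step * (range_len // 2)
    return ((start_x, first_y - spacing, range_len),
            (start_x, first_y + spacing, range_len))

# First transition at (200, 242), estimated cell diameter 5 pixels, probing
# left to right.
print(deviation_probe_ranges(200, 242, 5, direction=+1))
# ((190, 232, 20), (190, 252, 20))
```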
  • The use of deviation probes 20 is illustrated in FIGS. 4A - 4B.
  • deviation probes 20 locate two transitions which correspond to a valid edge of symbol 1, which is later validated according to the routines described below.
  • the deviation probes 20a and 20b are produced in response to main probe 10a finding a first transition edge as main probe 10a traverses from left to right across bounding box 2.
  • deviation probe 20a finds an edge transition and deviation probe 20b does not find an edge transition.
  • the first edge found by main probe 10a is rejected.
  • main probe 10a then resumes probing where it left off (represented by dashed lines in FIG. 4B) , and eventually reaches the end of its range at the right margin of bounding box 2.
  • Main probe 10a then reverses direction and becomes main probe 10b, as illustrated.
  • When main probe 10b locates a first transition, two deviation probes 20c and 20d then search for their respective second transitions, and each finds one.
  • the routine will then try to validate further the symbol based on these edge transition detections, as described below.
  • When each deviation probe 20 identifies a transition, the saved locations of those transitions are used to calculate a deviation and the slope of the edge defined by the two transitions, at step 319.
  • The slopes of the lines between each second transition and the first transition are determined and compared. If the deviation probe transitions are not within a tolerance limit of ± one half cell of each other, then they are assumed invalid and the routine passes to step 350.
  • the deviation and slope values for valid slopes are stored for use by other routines as described below. More specifically, the deviation is stored as one of three values, e.g., + if the calculated slope value is positive, - if the slope is negative, and 0 if the slope is zero (a vertical line) . In practice, status flags are set based on the value of the deviation.
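  • The stored deviation flag might be computed as in the sketch below, once the two second transitions have passed the half-cell agreement test; the coordinate convention (x across the probe direction, y increasing down the image) and the function name are assumptions.
```python
# Sketch: classify the edge slope from the two second transitions as '+', '-',
# or '0' (a vertical edge), for use as status flags by the walking routines.

def deviation_flag(p_above, p_below):
    (xa, ya), (xb, yb) = p_above, p_below
    slope = (xa - xb) / (ya - yb)   # edge slope between the two second transitions
    if slope > 0:
        return '+'
    if slope < 0:
        return '-'
    return '0'                      # vertical edge

print(deviation_flag((201, 230), (199, 250)))  # '-': edge leans one way
print(deviation_flag((200, 230), (200, 250)))  # '0': vertical edge
```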
  • the routine ends at step 321 and returns to the main routine step 140 (FIG. 1A) . If the routine aborts at step 350, the routine then returns via step 143 to step 147.
  • clutter acceptance is tested to determine if it is on or off.
  • main probe 10 is assumed to reach the probing limit at the boundary margin and, at step 137, is started to probe in the reverse direction. By this routine, the main probe 10 will not process the edge transitions attributable to data cells inside the matrix perimeter, which processing could increase the time to locate the symbol.
  • If clutter acceptance is on, the routine passes to step 148 where the starting position coordinates of main probe 10 are reset to the location saved at step 135 and shifted in the direction of travel by one cell diameter, thereby to look for another first transition at step 125, with main probe 10 at the "shifted coordinate location".
  • the shift could be N pixels where N is the value described in connection with the WINDOW routine.
  • Step 143 is a dummy transfer step
  • the routine at step 154 performs an edge validation routine called WALK_ABOVE_EDGE.
  • the WALK_ABOVE_EDGE routine uses a plurality of feeler probes 30 to locate the upper extent of the edge corresponding to the transitions found by main probe 10 and deviation probes 20.
  • the feeler probes 30 have a more limited search range, are spaced closer together in parallel, and are greater in number and therefore potentially cover (define) a greater portion of the edge to be validated.
  • Feeler probes 30 are spaced apart from each other by a distance D1.
  • Similar to deviation probes 20, feeler probes 30 have a searching range, a direction, and a starting point. The searching range is selected to straddle the anticipated edge. The starting point of the feeler probe range, and the length of the feeler probe range, are calculated based on the last edge detected, the slope, and the estimated cell diameter. The direction in this case is the same as for deviation probes 20. These parameters are set or initialized at step 322.
  • The first feeler probe 30 is then applied at step 323, where the feeler probe range is shifted based on the slope, cell diameter, and last detected edge.
  • the feeler probe range is checked to see if it is completely within bounding box 2. If it is, then an edge transition is searched at step 325 using the WINDOW routine in the manner previously described.
  • If no transition is found, the routine advances to border damage acceptance at step 326, which is discussed below. If a transition is found at step 325, then another feeler probe 30 range is selected at step 323, the range is checked relative to the bounding box at step 324, and, if acceptable, step 325 is repeated with the new (adjusted) feeler probe 30. At step 323, the feeler probes 30 are incremented by a distance D1 of one cell dimension, unless the corner search flag is set, in which case the distance D1 is only one pixel unit. Thus, in this routine, as each additional feeler probe 30 is used, the starting point is vertically shifted above the starting point of the preceding feeler probe 30 by the distance D1 of one cell dimension set between feeler probes.
  • each feeler probe 30 is horizontally shifted a second distance, according to the calculated deviation (slope) , relative to the last transition detected.
  • the horizontal shift is to a point that is calculated as a function of slope, cell diameter, and last edge, as previously described.
  • If a feeler probe 30 reaches the end of its range without detecting a transition, or if its range extends out of the bounding box, it is designated feeler probe 30', representing no edge found, and the routine passes to a series of steps to identify more precisely the corner location, as described below.
  • the routine determines whether the border damage acceptance routine is on. If it is on, then a damage counter at step 328 is incremented to count the number of consecutive times that no transition was detected. The detection of an edge will operate to reset the damage counter at step 328.
  • The count is tested against a set damage limit, e.g., 0, 15%, or 30% of the edge.
  • the limit is preferably set as the selected percent times the number of pixels per cell times the number of rows or columns expected in the edge. For example, in a 10x10 matrix having 5 pixels per cell, for a damage acceptance of 15%, the limit is 7.5 pixels (rounded up to 8 pixels), corresponding to less than two cells. In this regard, if the spacing between feeler probe 30 is one cell dimension, then two consecutive missed edges will result in the damage counter exceeding the set limit.
  • a numerical limit or other percent limits could be used.
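  • The worked example above can be reproduced directly, as sketched below; only the helper name is an assumption.
```python
# Sketch: damage-acceptance limit = percent x pixels per cell x expected rows
# (or columns) in the edge, rounded up to whole pixels.
import math

def damage_limit(percent, pixels_per_cell, rows):
    return math.ceil(percent * pixels_per_cell * rows)

print(damage_limit(0.15, 5, 10))  # 8 pixels, i.e. just under two 5-pixel cells
```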
  • If the limit is not exceeded at step 329, the routine returns to step 323 to select the next feeler probe 30, treating the missed edge as if an edge had been found.
  • the routine may predict where the edge should have occurred and use the predicted location to select the range starting point for the next feeler probe 30. If the limit is exceeded, then the routine advances to step 330 for the corner searching routine. Similarly, if the border damage acceptance criteria is not on, the routine simply passes directly to step 330.
  • The routines for applying feeler probes 30 test the starting point of each feeler probe range relative to the calculated slope to determine if the feeler probes 30 are following an edge that is not of the symbol. For example, as illustrated in FIG. 5C, a mark 6 intersecting a valid symbol 1 will cause feeler probes 30 to follow the edge of mark 6, because the successive feeler probes 30 are horizontally shifted relative to the last transition, and lose the symbol edge.
  • The feeler probe 30 range can be corrected to find the correct edge. For example, a predicted location may be obtained as follows. The initial slope based on the second transitions is obtained.
  • The feeler probes 30 horizontal displacement can be based on the determined deviation in the x and y axes relative to the first transition detected by main probe 10 and the number of vertical shifts (or the corresponding dimension). In this embodiment, the range deviation test would likely not be needed.
  • When feeler probe 30' does not detect a transition or the damage counter exceeds the limit, the location of the last transition detected for the upper extreme edge is recalled, and the routine then applies a CORNER SEARCH routine.
  • In the CORNER SEARCH routine, a second plurality of feeler probes 32 is used.
  • Feeler probes 32 are identical to feeler probes 30 except that they are spaced much more closely together, and retrace a part of the edge between the last edge detected and where no edge was detected, to locate more precisely the corner A. As illustrated in FIG.
  • The CORNER SEARCH involves, at step 331, setting the corner search flag, setting the feeler probe 32 range, direction, and starting location to that of the last feeler probe 30 location that detected a transition, and then returning to step 323 of the WALK_ABOVE_EDGE routine. In this manner, each next feeler probe 32 to be applied is shifted up by the distance D1, now, e.g., one pixel unit, and a transition is searched for by following step 325 as described.
  • the WALK_ABOVE_EDGE routine tests at step 326 whether border damage acceptance criteria is on. If it is, then the damage counter will still be at its limit at step 329, because the damage counter is only reset when a feeler probe 30 detects a transition, and will pass to step 330. If border damage acceptance criteria is not on, then the routine directly passes to step 330. Because the CORNER SEARCH routine flag is set, the routine then passes to the CORNER LOCKING routine, which is described below.
  • Feeler probes 30 are spaced apart a distance D1 that is the estimated dimension for one visual cell, and the corner searching feeler probes 32 are spaced one pixel unit apart.
  • the CORNER LOCKING routine is used when deemed appropriate to locate more reliably the corner of the symbol 1.
  • the CORNER LOCKING routine uses a series of feeler probes 35 which may be horizontally or vertically applied, depending on the application of the routine to the validation of the symbol as described below, to locate the coordinates of the symbol corner to be located.
  • Feeler probes 35 typically advance in a different direction and have a different orientation than the aforementioned corner searching feeler probes 32.
  • The CORNER LOCKING routine first tests whether the routine is to be used at step 340. If it is not, the routine exits at step 341; otherwise the routine continues. The test at step 340 examines the deviation previously stored for the detected edge. If the slope is a "+" or "0", the CORNER LOCKING routine is not used. If the slope is "-", then the routine advances to set the feeler probe 35 to an appropriate starting point, such as the last location of a probe 32 to locate a transition, at step 342. More preferably, the starting point is backed up from the location of that feeler probe 32 by approximately one cell dimension, in a direction away from the expected location of the corner.
  • corner locking may be used every time.
  • the decision is made whether the probe is to be horizontal or vertical.
  • a vertical probe 35 is used, the direction is downward, and successive probes 35 are successively shifted by one pixel unit toward the same edge of the bounding box from which probe 10 advanced.
  • CORNER LOCKING is applied to account for the expected shape of the corner presented to the probes 32, and whether the failure to detect an edge with feeler probe 32 corresponds to the outermost corner point of the real corner.
  • the feeler probe 32' that fails to detect a transition does not recognize the actual corner A, because there is, for example, at least one feeler probe 32 that recognizes a transition in its range which is past the true corner A (and is actually on a different side of the symbol) .
  • By using corner locking feeler probes 35 oriented perpendicular to feeler probes 32, the feeler probe 35' that fails to detect an edge most accurately locates the true corner A. In comparison, as illustrated in FIG.
  • a "0" deviation may be in either category, preferably in the category that performs the corner locking routine so that fuzzy, damaged, and otherwise not well defined corners can be more accurately located. What is important is that the routine recognize when the failure to detect an edge is likely to be because of a real corner, rather than a poor quality or damaged corner, and to find the location most closely corresponding to the real corner location.
  • the CORNER LOCKING routine uses the saved location of the last corner searching feeler probe 32 of the WALK_ABOVE_EDGE routine as the starting point.
  • the first feeler probe 35 is selected to be one pixel unit from the saved location coordinate (in this case, where probe 32' failed to detect an edge), and optionally is backed up half a cell from there.
  • The range of probe 35 is checked at step 347 to determine whether it is in or out of the bounding box. If it is in, an edge transition is then searched for at step 350 in the same manner already described. If an edge is found at step 350, then the next feeler probe 35 is selected at step 354, the range is tested at step 347, and feeler probe 35 is applied at step 350. If the range is not in the bounding box, then the routine passes to step 358. This sequence of successive feeler probes 35 continues until no edge is found. When an edge is not found, the location of the last edge transition detected by a feeler probe 35 is saved at step 358 and used as the corner location A. In the CORNER LOCKING routine, the determined slope is respected for the starting point of each range of the feeler probes 35.
  • steps 342 and 354 could be consolidated in step 342, similar to the step 323 described in connection with FIG. ID.
  • the WALK_ABOVE_EDGE routine provides in the first instance a coarse finding of a corner A, and once the general corner location is found, the CORNER SEARCHING routine, together with the CORNER LOCKING routine when appropriate, provides a more precise location of corner A.
  • the corner searching feeler probes 32 and corner locking feeler probes 35 are used to acquire the edge transitions at the corner. These transitions are then examined for the degree of "squareness", i.e., how straight the two sides of the corner are.
  • Based on the edges detected, a virtual corner then may be identified by determining two straight sides at the corner under investigation, projecting those lines to intersect, and determining the coordinates of intersection.
  • a virtual corner location can be used in the following routines to extract more accurately the data for validating the symbol and for extracting data from a valid symbol.
  • a WALK_BELOW_EDGE routine is used at step 156 to identify the location of corner B at the lower extreme of the first edge identified by main probe 10, deviation probes 20, and feeler probes 30 and 32 (and 35) .
  • the action of the WALK_BELOW_EDGE routine, which is illustrated in FIGS. 2A and 5, is similar to the WALK_ABOVE_EDGE routine and therefore is not discussed in detail.
  • this routine uses a second series of feeler probes 30 that are vertically displaced a distance D1 apart and horizontally displaced relative to the last transition found, to walk down the edge, searching for edge transitions, until no transition is found in the probe 30 range subject to damage acceptance criteria.
  • the distance D1 is initially set to be the same as in the WALK_ABOVE_EDGE routine, or about one cell diameter.
  • the WALK_BELOW_EDGE routine then applies the same CORNER SEARCHING routine, and a CORNER LOCKING routine when appropriate, to locate the initial corner B coordinates more precisely. Referring to FIG.
  • the CORNER LOCKING routine test at step 340 examines when the deviation is "+", and then applies the vertical corner locking routine.
  • feeler probes 35 are used in the same manner as was described above in connection with the WALK_ABOVE_EDGE routine, based on the last corner searching feeler probe 32 to locate an edge (optionally backed up one cell) , to advance back in the direction of main probe 10 to locate the initial corner B coordinates, except that the feeler probes 35 are in the upward direction.
  • these various probe routines preferably repeatedly execute the same instruction steps, thereby using one "probe", the parameters of which are subject to differences in the constants used to change the probing, such as pixel starting coordinates, direction of travel, size of distance D1 changes when selecting the next probe starting location, and the range of examination of the probe.
  • by executing the same instructions repeatedly and changing the probe control parameters, programming efficiency and symbol processing speed are greatly enhanced and memory space requirements are minimized.
  • separate routines also could be used.
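  • Purely as an illustrative sketch of this reuse of a single probe routine under different control parameters, the structure below shows one possible parameter set; the field names, the pixel accessor, and the light-to-dark contrast convention are assumptions, not part of the disclosure.

    /* Sketch of a single, parameter-driven probe routine.  One set of
     * instructions can serve main, deviation and feeler probing by
     * varying only these control parameters.                              */
    typedef struct {
        int start_x, start_y;   /* pixel starting coordinates               */
        int dir_x, dir_y;       /* direction of travel, e.g. +1,0 or 0,-1   */
        int spacing;            /* distance D1 to the next probe location   */
        int range;              /* number of pixels examined per probe      */
        int threshold;          /* grey-scale contrast for a transition     */
    } ProbeParams;

    /* Assumed accessor for the stored bit map (grey-scale, 0-255). */
    extern int pixel_at(const unsigned char *image, int width, int x, int y);

    /* Walk one probe range and report the first required color transition. */
    int run_probe(const unsigned char *image, int width,
                  const ProbeParams *p, int *hit_x, int *hit_y)
    {
        int x = p->start_x, y = p->start_y;
        int prev = pixel_at(image, width, x, y);
        int i;

        for (i = 1; i < p->range; i++) {
            int cur;
            x += p->dir_x;
            y += p->dir_y;
            cur = pixel_at(image, width, x, y);
            if (prev - cur > p->threshold) {   /* light-to-dark transition */
                *hit_x = x;
                *hit_y = y;
                return 1;
            }
            prev = cur;
        }
        return 0;   /* no required transition in this range */
    }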
  • a test is performed at step 158 (FIG. 1A) to determine whether the dimension between corners A and B (herein "side AB") corresponds to a likely edge dimension. If it does not, then the edge is rejected as not corresponding to a symbol 1, and the routine returns to step 147 where, if clutter acceptance is on, at step 148 main probe 10 resumes searching for another first transition. If it does, then the routine advances to test for a second side of the symbol at step 160.
  • the test at step 158 is, in the exemplary embodiment, determining the length X1 of side AB defined by corners A and B, and determining whether the length X1 is greater than one-half of the number of estimated columns times the estimated dimension of the visual cell diameter.
  • the routine determines that side AB does not correspond to a valid symbol edge, and returns to step 147 to resume main probe 10 searching. If the side AB dimension X1 is greater than one half the expected size, then the routine will continue and try to validate further a symbol as follows.
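  • In code, the step 158 screen reduces to a single comparison of the kind sketched below; the variable names are assumptions, while the one-half threshold follows the text above.

    /* Sketch of the step 158 screen: side AB is kept only if its length X1
     * exceeds one-half of (estimated columns * estimated cell diameter).   */
    int side_ab_plausible(long x1_pixels, long est_columns, long est_cell_diameter)
    {
        long half_expected = (est_columns * est_cell_diameter) / 2;
        return x1_pixels > half_expected;   /* non-zero: continue validating this edge */
    }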
  • the routine passes to a FIND SECOND SIDE CORNER routine at step 160 that seeks to locate and validate a second side of symbol 1, namely the second solid perimeter of a DATA MATRIX symbol as illustrated in FIG. 2A.
  • the routine initially assumes that the second side will be at the bottom of the first edge and, hence, begins to probe along the bottom edge. This occurs at step 410, where feeler probes 30 are initialized.
  • the distance D1 between feeler probes 30 and the range for the feeler probes 30 are adjusted to the values based on slope, cell diameter, and last edge found (in this case, the bottom edge corner).
  • the direction of probing is thus changed to probe in an upward vertical direction.
  • the probe range is checked at step 415 to be sure it is in the bounding box. If it is, the WINDOW routine is used to search for a transition at step 430 in the same manner already described. Similarly, when the first feeler probe 30' fails to detect an edge in its range, the routine tests for border damage acceptance at step 432 as previously described.
  • the routine begins the CORNER SEARCH routine at steps 436 and 437 and possibly the CORNER LOCKING routine, at step 450, with feeler probes 32 and 35 spaced a pixel unit apart (as set by the search flag at step 437) , to locate more precisely the corner C.
  • if the CORNER LOCKING routine was used to locate corner A, then it will likely not be used to locate corner C. As illustrated in FIG. 5, the feeler probes 32 will locate the corner C accurately (absent damage).
  • the slope is assumed to be ninety degrees rotated from the slope of the side AB, and hence the feeler probes 30 and 32 are also appropriately vertically shifted relative to the last detected transition, based on the deviation, as they are horizontally spaced relative to initial corner B.
  • one embodiment of the invention is as follows. After corner C is located at step 451, then at step 460 the dimensions of side AB and side BC are evaluated against certain predetermined criteria for the symbol 1 to be located. This is illustrated in FIG. 1A at step 165.
  • the test used is to determine if the distance between corners A and B (herein side AB) divided by the distance between corners B and C (herein side BC) is less than 3.0 (3000 in high precision integers), more preferably between 0.4 and 2.6 (400 and 2600 in high precision integers).
  • if the condition is satisfied, and the routine has not yet probed the top edge (step 464), then the routine assumes it is a valid edge and will continue to process the symbol. If the condition is not satisfied, then the routine will check to see if the top side has been probed at step 461. If it has not, then the second side search routine will switch the horizontal walk of feeler probes 30 for the second edge to the top edge at step 463, beginning at corner A and moving in the same direction as main probe 10. The same probing control parameters are used as in the prior search along the bottom edge, except that the starting locations are different and the direction of probing is now downward. In addition, the corner searching flag is reset so that the distance D1 between feeler probes 30 is again one cell dimension, and the border damage acceptance control counter is cleared, if it is used.
  • the test of the dimensions of sides AB and BC is repeated at step 460. If the condition is satisfied, then corner C is located. If the dimension condition is not satisfied, then the routine aborts at step 462 and returns to step 147 of the main routine, assuming the edge located by main probe 10 to be invalid data. In addition, if necessary, the routine will redefine the corners A, B, and C at step 465 so that corner B is at the intersection of the two solid sides, and corners A and C are at the extremes with corner A defined as the top left corner of the matrix. This definition is applied regardless of the actual orientation of the matrix in memory or the field of view for extracting data from the visual cells in an efficient order.
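  • Expressed in the high precision integer arithmetic referred to above (values scaled by 1000), the step 460 ratio test might be sketched as follows; the helper name and the zero-length guard are assumptions, while the 400, 2600 and 3000 figures come from the text.

    /* Sketch of the step 460 ratio test using high precision integers:
     * ratio = 1000 * AB / BC, accepted when it lies in the preferred band. */
    #define SCALE      1000L
    #define RATIO_MIN   400L    /* 0.4 in high precision integer form */
    #define RATIO_MAX  2600L    /* 2.6 in high precision integer form */

    int sides_pass_ratio_test(long side_ab, long side_bc)
    {
        long ratio;
        if (side_bc == 0)
            return 0;                        /* degenerate edge: reject */
        ratio = (side_ab * SCALE) / side_bc;
        return ratio >= RATIO_MIN && ratio <= RATIO_MAX;
    }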
  • the foregoing edge locating routine is graphically illustrated to identify clutter 3 relatively quickly.
  • main probe 10 and deviation probes 20 find transitions corresponding to the left edge of clutter 3.
  • Feeler probes 30, 32, and 35 next identify two corners A1 and B1 corresponding to a possible edge side A1-B1 in which the distance X1 of side A1-B1 passes the first size test of one-half the expected matrix size. If it did not, then the transitions would be rejected as clutter 3. If it does, then feeler probes 30, 32 (and 35) identify corner C1. However, the subsequent distance comparison of side A1-B1 and side B1-C1 for the first edge and bottom edge fails the ratio test. Feeler probes 30, 32 and 35 then locate corner D1.
  • the boundary of corners A1, B1, C1, and D1 is stored in memory as corresponding to identified clutter 3 so that any transition corresponding to an edge detected within that boundary can be subsequently ignored; main probe 10 can simply pass through to the other side of the boundary.
  • the first edge struck by main probe 10, defined by the corners A2 and B2 will fail the deviation probe 20 test because the top and bottom deviation probe 20, shown in phantom lines, do not both find an edge transition in the range.
  • probe 10 will resume at the shifted coordinates and, if clutter acceptance is on, continue advancing and eventually hit the inside of the edge defined by corners B2 and C2.
  • Any visual cells, e.g., of data hit by main probe 10 in the interim will likely fail the expected X1 dimension size test and be quickly disregarded, or else will fail the side AB-BC ratio test and also be disregarded.
  • deviation probes 20 will likely locate respective transitions on the inside edge, but the edge validation routine will eventually fail because the feeler probes 30 will not find a valid second side at either the top or bottom of the located edge. This is because the feeler probing for the second side continues in the same direction as main probe 10. As is shown in FIG. 2A, the actual second side edges extend in the opposite direction to the feeler probes in this situation. Turning clutter acceptance off at step 147 will skip these processing steps. Accordingly, main probe 10 will again resume and eventually reach the right margin of bounding box 2, reverse direction, and continue probing back along the same horizontal line of pixels, to seek another transition.
  • main probe 10 will find a transition on the edge defined by corners B2 and C2, the deviation probes 20 will locate respective second transitions and determine the slope and deviation, and the corners A and B (located corners "A" and "B" are labeled in FIG. 2A as corners C2 and B2 respectively) are found using the WALK_ABOVE_EDGE and WALK_BELOW_EDGE routines. Then, the routine will attempt to locate the second edge by feeler probing along the top in the same direction as the probe 10, i.e., towards the left margin of the bounding box. This attempt will fail and the routine will change to probe along the bottom. Consequently, the bottom edge probes find an initial corner C (illustrated in FIG. 2A as corner A2) . The dimensions of the located corner locations AB and BC (i.e., sides C2-B2 and B2-A2 as illustrated in FIG. 2A) are tested against the ratio condition. In this example, the test condition is satisfied.
  • the further processing of the symbol involves matrix validation (step 165, FIG. 1A) and is based on the location of corners A, B, and C being in a predetermined relationship, such that corner A is used as the starting point.
  • the matrix symbol 1 is at least initially defined relative to corner A with corner B being vertically aligned below corner A, and corner C being horizontally aligned with corner B, such that the sides AB and BC form an angle.
  • the located corners A, B, and C do not correspond to the preferred orientation of the actual corners of the symbol 1 for data extraction, as the symbol is located in the boundary box of the field of view. Therefore, the preferred routine takes the located corners A, B, and C (e.g., corresponding to labeled corners C2, B2 and A2 in FIG. 2A respectively) and internally redefines them as illustrated in FIG. 2A and step 465 of FIG. 1F-2. This redefinition does not, however, involve any rotation or shifting of information in memory, but rather redefines the coordinate system for evaluating the symbol 1 data.
  • in the event that main probe 10 does not locate any edge associated with a symbol 1 after traversing right to left and left to right across the field of view, e.g., after processing clutter 3 or detecting a portion of a symbol 1 at a location that cannot be validated, then the routine will shift the main probe 10 location by distance D1.
  • distance D1 is preferably selected to allow main probe 10 to scan across the field of view to locate any symbol 1 that is present in the field.
  • FIG. 2B illustrates a sequence of five main probes 10, numbered 1-5 at the left edge of bounding box 2, in which the first four left to right and right to left traverses are unable to validate an edge of a symbol 1.
  • the vertical starting location of main probe 10 is controlled by a toggle that increases the distance from the horizontal center of bounding box 2 each time main probe 10 returns to the starting margin, defined herein as the left margin.
  • a summing toggle is used, such that the next vertical probe level is above or below the center line by a multiple of the number of times probe 10 has traversed the field of view, with even numbers being shifted above the center line, and odd numbers being shifted below the center line.
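  • One way to realize such a summing toggle is sketched below; the use of D1 as the step size and the convention that "above" means a smaller row index are assumptions consistent with FIG. 2B, not requirements of the disclosure.

    /* Sketch of the summing toggle: the first traverse starts on the center
     * line of bounding box 2, and each later traverse is offset by
     * pass_count * d1, alternately above (even passes) and below (odd
     * passes) the center line.                                              */
    int next_probe_row(int center_row, int d1, int pass_count)
    {
        int offset = pass_count * d1;
        if (pass_count == 0)
            return center_row;                    /* first traverse: center line */
        return (pass_count % 2 == 0) ? center_row - offset   /* even: above */
                                     : center_row + offset;  /* odd:  below */
    }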
  • the symbol is located on the fifth main probe, designated 10-5 and its associated deviation probes 20-5. Although the upper deviation probe 20-5 does not find the correct edge, it finds a transition that is within the linearity limit and permits validating the symbol 1.
  • with respect to the third probe 10-3, it is shown as the left to right probe 10-3 and the right to left probe 10-3'.
  • the lower deviation probes 20-3 and 20-3' actually overlap, but are shown separated for ease of comprehension.
  • only horizontal probing is needed for main probe 10. This is because main probe 10 will locate one of the two solid border sides during either a right to left or left to right probe.
  • the single main probe 10 travels only in the left to right direction (or the right to left direction, but not both) , and the deviation probes 20 are configured to probe horizontally and vertically (or vice versa) in sequence, unless one pair of deviation probes 20 finds two corresponding transitions in the ranges.
  • the feeler probing will assume the initial direction of travel of the deviation probes which locate the first edge to define the edge.
  • the first pair of deviation probes 20 are parallel to main probe 10 and centered on the x coordinate of the first transition. They do not each detect a second transition. Consequently, an alternate pair of deviation probes 21, which are perpendicular to main probe 10 and centered on the y coordinate of the first transition, are used. These probes each detect a second transition.
  • the symbol can then be validated and data extracted as discussed herein.
  • the deviation probes 20 and 21 may probe in both directions in the deviation probe range, to increase the likelihood of finding two transitions on the border edge as quickly as possible. Reversing the direction of the deviation probes will avoid rejecting a valid symbol edge because the deviation probe starting location was inside the symbol, rather than outside the symbol and thus did not detect the so-called "required color transition".
  • main probe 10 may cycle between horizontal and vertical probing, or may complete all of the horizontal probing (after toggling through the bounding box) before starting any vertical probing, vice versa, or some combination thereof.
  • the X1 dimension and the ratio tests may be completely different, for example, to identify initially the start bar or bars of a one dimensional or two dimensional bar code or some other "line" of a symbology, e.g., the known CODE ONE symbology.
  • Concerning bar code symbols, after locating a start bar sequence, the main probe may advance on a line perpendicular to the slope of the start bar, to locate the stop bar(s). Then, having found the boundary, the data may be extracted. For a one dimensional bar code, the data can be extracted by processing the line of pixels probed by main probe 10 between the start and stop bars.
  • the data extraction technique can be used to decode the bar code by locating and sampling each root cell value and evaluating the bit stream as relative distances in a known manner. From evaluating the data, the type of bar code can be determined and its data extracted. With reference to other symbols, such as CODE ONE, once the key line edge parameter is identified, by use of main probe 10, deviation probes 20 and feeler probes 30, 32 (and 35) , main probe 10 may be used again to probe in one or more directions, on one or more lines perpendicular and/or parallel to the identified key line edge to define the boundary of the symbol based on either the user provided information or information acquired by probing.
  • border damage acceptance may be used to modify any or all of the feeler probe 30 edge detection routines so that the failure to detect an edge does not automatically trigger the corner searching and locking routines.
  • the failure to recognize a transition is used to set an edge transition failure flag.
  • a subsequent feeler probe 30 is then used, shifted the distance D1 to the next probe location with its range adjusted for the calculated deviation and slope, to determine whether or not an edge can be located there. If an edge is not detected after a set number of misses, which may be sequential or cumulative, then the routine will return to the corner searching, based on the last feeler probe 30 to detect an edge transition.
  • feeler probes 30 continue to locate transitions, then the one failure to recognize an edge will be disregarded and the flag will be reset.
  • This modified routine is implemented to tolerate symbol edges that include irregularly printed or damaged edges, which imperfection might otherwise be misinterpreted as the end of an edge, when it is not.
  • the aforementioned damage counter is replaced with a counter that is incremented by a first value when no edge is detected, and decremented by a second value when an edge is detected, so that a selected number x of failures to detect an edge within a predetermined number y of successive feeler probes causes the counter to indicate the end of an edge.
  • the routine determines that the transitions detected and not detected do not correspond to a reliable solid edge, and therefore sets the edge length to the last reliable transition location. In this case, the best data available may be used and tested in the ratio test to accept or reject the edge detections.
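  • A sketch of the up/down damage counter described above is given below; the specific increment and decrement weights are assumptions, and only the behavior, a selected number of misses among successive feeler probes ending the edge, is taken from the text.

    /* Sketch of border damage acceptance: the counter rises by 'add' on each
     * missed edge and falls by 'sub' (not below zero) on each detected edge,
     * so that too many misses among successive feeler probes 30 declares the
     * end of the edge.                                                       */
    typedef struct {
        int count;
        int add;        /* first value: added when no edge is detected        */
        int sub;        /* second value: subtracted when an edge is detected  */
        int limit;      /* counter value that declares the end of the edge    */
    } DamageCounter;

    /* Returns non-zero when the accumulated misses indicate the edge ended. */
    int damage_update(DamageCounter *dc, int edge_found)
    {
        if (edge_found) {
            dc->count -= dc->sub;
            if (dc->count < 0)
                dc->count = 0;
        } else {
            dc->count += dc->add;
        }
        return dc->count >= dc->limit;
    }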
  • the routine proceeds to step 165 where the matrix symbol is to be validated.
  • the invention is used in an application where the DATA MATRIX symbols to be read have the same number of rows and columns, which are known. In an alternate embodiment (not shown), the routine may be modified to determine the number of rows and columns automatically, as follows.
  • the DATA MATRIX symbol is defined with one solid edge having the thickness of each column of visual cells, and the other solid edge having the thickness of each row of visual cells. In a square DATA MATRIX symbol, these thicknesses are ideally the same.
  • the DATA MATRIX symbol is defined with the first row and column space inside corner B as the opposite binary value of the solid perimeter lines, e.g., white when the solid perimeter edges are black.
  • the routine could easily be adapted to use horizontal and vertical feeler probes 30 to scan the solid sides near corner B, scanning from the clear space outside the perimeter into the first data cell inside corner B, to determine the thickness of each column and each row.
  • the number of rows and columns of the matrix can be determined. From this determined data, the center of the visual cell AA at initial corner A can be determined.
  • the actual number of rows and columns is needed for an optimal application of the routine. It does not matter whether the number is provided by the user or determined from the symbol. The number of pixels in each visual cell is less important, and the estimate of this value has a wide tolerance for error. The estimate must only be close enough to enable the routine to find corners A, B, and C. In this regard, if the routine repeatedly fails to locate symbols it should find, then the user is preferably prompted to change the estimate to facilitate symbol recognition.
  • new deviation values DX and DY are calculated, based on the dimensions of initial corner A to initial corner B (side AB) and initial corner B to initial corner C (side BC) and the number of rows and columns in the symbol.
  • the deviations DX and DY correspond to the deviation from a given point of one visual cell to the corresponding given point of the adjacent visual cell.
  • These new deviation values DX and DY are regarded as more accurate than the deviation and/or slope values provided by deviation probes 20 because the AB segment is longer than the transitions located by probes 20 (and the corners A and B have been corner-locked if necessary to provide more accurate corner information) .
  • the information is based on measures in two dimensions for the actual orientation of the symbol in the field of view rather than one dimension relative to a defined horizontal axis, and is effectively averaged over the number of cells.
  • using the corners A, B, and C to determine the deviations DX and DY also inherently corrects for any distortion of the symbol in the field of view with respect to stretching of the symbol and to any pitch, yaw or roll relative to the normal image plane of the scanning device. This avoids having to perform separate steps of measuring the distortion directly to correct for any distortion that is found.
  • the coordinates of the center point AA of the visual cell containing initial corner A are calculated based on the determined dimensions of sides AB and BC, the number of rows and columns, and the Pythagorean theorem.
  • the number of rows and columns and the side dimensions provides the height and width of each cell, which enables the system to read symbols that have been stretched in one or two dimensions. See, e.g., FIG. 2C which is stretched in one direction along the x axis.
  • the center points of each visual cell of the matrix can be calculated.
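  • Putting the preceding paragraphs together, the center of any visual cell is reached from center AA by adding row and column multiples of the per-cell deviations derived from corners A, B, and C. The sketch below uses floating point for clarity and places center AA half a cell in from corner A; both choices are illustrative assumptions, since the described embodiment uses high precision integers and trigonometric tables.

    /* Sketch of cell-center computation from corners A (top left),
     * B (bottom left), C (bottom right) and the known row/column counts.
     * Stretching and tilt of the symbol are absorbed automatically because
     * both deviation vectors come from the located corners.                */
    typedef struct { double x, y; } Pt;

    Pt cell_center(Pt A, Pt B, Pt C, int rows, int cols, int row, int col)
    {
        /* deviation from one cell to the next along side AB (rows)
         * and along side BC (columns)                                      */
        Pt d_row = { (B.x - A.x) / rows, (B.y - A.y) / rows };
        Pt d_col = { (C.x - B.x) / cols, (C.y - B.y) / cols };

        /* center AA of the cell containing corner A: half a cell in from A */
        Pt aa = { A.x + 0.5 * (d_row.x + d_col.x),
                  A.y + 0.5 * (d_row.y + d_col.y) };

        Pt p = { aa.x + row * d_row.x + col * d_col.x,
                 aa.y + row * d_row.y + col * d_col.y };
        return p;
    }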
  • trigonometric tables are used to calculate the center point AA to minimize the computation time required to find the cell center AA.
  • the value of each cell center of the predicted side AD of alternating light and dark areas is sampled, one at a time, by determining the deviation of the cell center of that visual cell from center AA, and sampling the color value of the visual cell.
  • the value is based on the measure of a single pixel at the cell center.
  • the color value also could be based on a sum, an average or a voting routine of a number of pixels in the cell surrounding the center, which may or may not include the calculated center pixel.
  • the number of pixels to be used in the measure is a matter of design choice, and may be based on the manner in which the symbol is printed or marked on the article, object or substrate scanned. The errors due to imperfect printing or marking techniques, which are likely to include "pinholes", scratches, or other unmarked areas that might be mistaken for the wrong data value, can be minimized.
  • the deviations are determined relative to center AA, rather than from the center of the adjacent cell, for ease of computing.
  • the routine multiplies the deviation values DX and DY by the number of rows and columns respectively, between the cell of corner D and the cell containing center point AA. This same process of multiplying the deviations DX and DY by the number of intervening rows and columns is used for calculating the center of all of the other visual cells of the symbol.
  • the routine is adopted because it provides good results with minimal computational time requirements.
  • the routine tests whether those values are within 80% of a 10101 etc. pattern, which corresponds to a valid dashed edge. The test occurs by evaluating the value of adjacent transitions, looking for black to white and white to black transitions. Each occurrence of a transition increments a running value by the percentage that a single transition represents of the number of transitions expected in a dashed side.
  • each black to white and white to black transition is 10% of the total, and the transitions are added as the data is evaluated. Thus, if there are ten transitions, the sum at the end of the row will be 100%. If the sum is more than 80%, then the dashed edge is declared valid. If it is less than 80%, then the dashed edge is declared not valid. In the case that the dashed edge is not valid, the routine returns to step 156 where a horizontal walk step may be repeated (unless both the top and bottom horizontal steps have already occurred for this symbol) to attempt to validate a different edge.
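  • A sketch of this step 540 test is given below: the sampled centers along the predicted dashed side are scanned for color transitions, and the side is accepted only when at least 80% of the expected alternations are present. Representing the samples as an already-binarized array, and generalizing the expected count to one transition per cell boundary, are assumptions made for the sketch.

    /* Sketch of the step 540 check: count black/white and white/black
     * transitions along the sampled centers of a predicted dashed side and
     * accept the side when at least 80% of the expected alternations occur. */
    int dashed_edge_valid(const int *cell_is_black, int n_cells)
    {
        int transitions = 0;
        int expected = n_cells - 1;   /* a perfect 1010... side alternates every cell */
        int i;

        if (expected <= 0)
            return 0;

        for (i = 1; i < n_cells; i++)
            if (cell_is_black[i] != cell_is_black[i - 1])
                transitions++;        /* black-to-white or white-to-black */

        /* integer form of: transitions / expected >= 80% */
        return 100 * transitions >= 80 * expected;
    }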
  • the return to search for a second solid edge is explained. It may be the case that the spacing of the feeler probes 30 during the prior step 156 was such that a dashed edge BC was erroneously detected as a solid edge, because the feeler probes 30 detected sufficient edge transitions to satisfy the ratio test. This may have occurred as a result of, for example, some distortion of the symbol, i.e. stretching in a dimension, a misprinting of the symbol, e.g., an ink spot 5, tolerated damage and the like. Thus, because the attempt to validate the assumed side AD as a dashed edge would find no transitions (or at least not enough transitions to satisfy the 80% test at step 540) this portion of the validation routine would fail.
  • By causing the routine to look again for another solid edge at the top, namely side AD, the routine saves the computational time already spent validating the one good side, AB, and takes advantage of having identified a second parameter indicative of a valid symbol, namely side BC (even though that parameter was mistaken for a solid side). As a result, the routine then looks for other data, and if it can confirm that side AD (FIG. 2C) is a valid solid edge, it will continue to validate the rest of the symbol and extract data. Hence, it is demonstrated that the routines of the invention conserve as much of the investment made to validate a symbol 1 as possible, such that each level of validation consumes a greater quantum of computational time and effort and is more likely to find and validate a symbol.
  • step 540 corresponds to the data integrity standard (also called border damage acceptance) of the DATA MATRIX symbol, which tolerates a loss of 20% of the symbol with successful readability.
  • the test threshold could be made higher or lower depending on the environment in which the symbol is used, the level of error correction in the symbol, and the desired reliability of decoding the symbol.
  • at step 545 the routine checks whether or not both sides AD and DC have been tested. This test is inserted at this point to conserve computation time and to use more efficiently the same instruction steps for testing both sides AD and DC. If side DC is not tested, then the routine passes to step 550 where the values of the center points of side DC are determined, based on the multiples of the deviations of those center points from center point AA as described. If the values of side DC also satisfy the threshold test at step 540, then the routine has validated the symbol and extracts the data of the symbol. If the values of side DC do not pass the test, then again the routine will return to step 156 and attempt to validate another solid side of the symbol, if possible. If not possible, then the routine will restart main probe 10 to search for another edge to test.
  • the routine then proceeds to step 180 (FIG. 1A) to extract the data.
  • the data extraction proceeds by calculating the center point of each visual cell of the matrix, within the defined rows and columns based on the multiples of the deviations DX and DY, and sampling the value of the cell as already described. As the data is acquired, it is tested to determine its digital value, and the resultant value is then provided as part of a bit stream of data.
  • the digital value is binary, 1 or 0, resulting in a bit stream of 1s and 0s.
  • the entire DATA MATRIX symbol is converted into the bit stream, including the perimeter to be compatible with commercial decoding equipment, e.g., Models C-102 and C-302 available from International Data Matrix.
  • the data extraction routine returns to the cell containing center point AA, and begins sampling each cell, typically moving along a row and then advancing to the next row down, until all of the data is extracted.
  • in examining the value of the center points of the visual cells, the routine assumes that the column 0, row 0 value of center point AA is black (or white when the symbol is a negative) and initializes at step 610 a variable BLACKave with the value of center point AA (e.g., a determined value in a grey scale of 0-255 pixel colors).
  • the inventor has discovered that in an environment in which the color contrast is good, e.g., greater than approximately 40%, only a one step test is required to reliably separate the black and white cells.
  • the test is: (cell value - BLACKave) < 20% of BLACKave
  • the value of the cell to be extracted is determined.
  • the unknown cell value is compared to the known BLACKave. If the difference is less than 20% of the known BLACKave value, then the unknown cell is determined to be a black value (step 642) .
  • the routine sends a "black" or "1" bit to the bit stream (step 660) and stops further processing of the data cell. Otherwise, the cell is set white at step 646, and a "white" or "0" bit value is sent to the bit stream at step 660.
  • the routine selects the next cell to sample at step 670. This process involves using the multiples of the row and column and the deviations DX and DY as already described.
  • row counters and column counters may be used to store the location of the cell, such that the counters are incremented after each cell is sampled, and reset upon reaching the end of the row or column.
  • a test at step 675 is illustrated to indicate that after the entire symbol has been processed to extract therefrom the pertinent data, the routine ends.
  • the inventor also has discovered that an alternate two-test decision process can be used to process a large quantity of pixel colors, which substantially minimizes the time required to process the color values, with high reliability.
  • the first step determines whether the difference between the sampled cell value and the BLACKave is less than 20%, and if it is, declares that cell black. If it is not, the next step determines whether the difference between the sampled cell value and a "WHITEave" is less than 20%. If it is, then that cell is declared white. If the cell also fails the WHITEave test, the next step is to determine whether the cell value is closer to the WHITEave or BLACKave, and to declare the cell the color of the average to which its value is closest.
  • the WHITEave value may be obtained by sampling the "white" cells during the dashed edge validation phase.
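  • The one-step and two-step decisions just described might be coded as in the sketch below. The 20% bands follow the text; the absolute-difference arithmetic, the grey-scale convention, and the helper names are assumptions.

    /* Sketch of the cell color decision.  'value', 'black_ave' and
     * 'white_ave' are grey-scale pixel readings.  A cell within 20% of
     * BLACKave is black; otherwise a cell within 20% of WHITEave is white;
     * otherwise the cell takes the color of the nearer average.             */
    static long absdiff(long a, long b) { return a > b ? a - b : b - a; }

    /* Returns 1 for a black ("1") cell, 0 for a white ("0") cell. */
    int classify_cell(long value, long black_ave, long white_ave)
    {
        if (absdiff(value, black_ave) * 100 < 20 * black_ave)   /* one-step test */
            return 1;
        if (absdiff(value, white_ave) * 100 < 20 * white_ave)   /* second test */
            return 0;
        /* fall-back: whichever average the value lies closer to */
        return absdiff(value, black_ave) <= absdiff(value, white_ave);
    }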
  • main probe 10 may be restarted to locate other symbols.
  • Already detected symbols may be blocked from main probe 10 so that main probe 10 will not try to validate a symbol that has already been processed, in the same manner that identified clutter 3 may be blocked, as already described.
  • the routine likely will be able to locate both symbols provided that the overlap covers less than 20% (the damage limit) of the underlying symbol.
  • the value BLACKave may be updated based on the measure of each black cell in side AB, corresponding to column 0. This is indicated at steps 680 and 685. This averaging accounts for some possible variation in printing of the symbol, and enhances the reliability of the extracted data. In the event that the column 0 cell contains a value that is more than 20% different from the prior value of BLACKave, then the prior value BLACKave is used without being updated. This provides for not including a "white" cell value detected in the border (e.g., a printing problem) in the BLACKave value, which would distort the reliability of the data being extracted.
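  • The running BLACKave update of steps 680 and 685 might be sketched as follows; the equal-weight averaging is an assumption, while the 20% guard against border defects follows the text. The structure is assumed to start with 'average' initialized to the center AA value (or one of the alternatives mentioned below).

    /* Sketch of the BLACKave update: each column 0 (solid border) cell value
     * is folded into the running average unless it differs from the current
     * BLACKave by more than 20%, in which case it is treated as a border
     * printing defect and ignored.                                           */
    typedef struct {
        long sum;       /* sum of accepted black samples    */
        long count;     /* number of accepted black samples */
        long average;   /* current BLACKave                 */
    } BlackAve;

    void blackave_update(BlackAve *b, long cell_value)
    {
        long diff = cell_value > b->average ? cell_value - b->average
                                            : b->average - cell_value;
        if (diff * 100 > 20 * b->average)
            return;                 /* likely a defect: keep the prior BLACKave */
        b->sum     += cell_value;
        b->count   += 1;
        b->average  = b->sum / b->count;
    }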
  • with respect to initializing the value BLACKave at step 610, if centerpoint AA has a measured black value that is not "black", then some other initial value of BLACKave may be used.
  • the initial BLACKave value may be, for example, the value of the black portion of the transition located by main probe 10 (and/or deviation probes 20 or some combination thereof) , or an averaging of the determined black cells in a validated dashed edge (e.g., row 0) .
  • the values of the pixels at the center of each of the cells in the dash side may be determined as described in connection with the data extraction procedure.
  • the value BLACKave is typically the initial value for center AA (unless some alternative technique for calculating the value is used) . Then, after all the values are obtained, they are examined for black-white, and white-black transitions as previously described.
  • the routine may examine the visual cell containing corner D, measure the height and width of that cell and determine therefrom the cell center DD (FIG. 2A) .
  • the deviations DX and DY as the evaluation continues along the second dash side are based on the corrected center DD, rather than center AA. This will provide improved identification and sampling of the centers of the visual cells in the dash sides. It also will overcome problems arising from basing centerpoint AA on a damaged corner.
  • the validation of the two solid sides is conducted in a different manner from the sequence illustrated in FIGS. 1F-2 and 1G, which is not separately illustrated.
  • the routine attempts to validate the first dashed edge using steps 510, 520, 530 and 540 as previously discussed in connection with FIG. 1G. If the side does satisfy the 80% limit, then the routine will continue to validate the second dashed edge side. If the second side also passes the 80% limit, then the routine passes to extract data at step 180 (FIG. 1A) .
  • if the first dashed edge fails the 80% test at step 540, the routine sets a toggle flag indicating that the bottom and top edges will have been tested for a solid side, and returns to the sequence to search for the second side, but now at the top edge. If the top edge result also fails the ratio test, then, because the toggle flag was set, the routine will return to step 147 and resume probing for a first transition with main probe 10. If, instead, the second side passes the ratio test, then the routine proceeds to validate the dashed edges as described. If both dashed edges pass the 80% test, then the data will be extracted. If, however, one of the dashed edges fails, then because the toggle flag is set, the routine will return to step 147. Hence, in this alternate embodiment, the routine will continue further validating data without necessarily returning to the main program as indicated in FIGS. 1F-2 and 1G. Similarly, data extraction also does not require first returning to the main routine.
  • the apparatus comprises a personal computer 700 including RAM memory 760, a display device 710, a keyboard 720, and a mouse 730.
  • Computer 700 is operated to execute a sequence of software instruction sets in response to user provided data.
  • Computer 700 also is modified to include a conventional frame grabber board 740 and also may include a video RAM memory 750.
  • Frame grabber board 740 may be, for example, a model Cortex I device, available from Image National Corporation, Beaverton, Oregon, USA. Creation of suitable software instructions is within the abilities of a person of ordinary skill in the art.
  • the software is preferably capable of processing a symbol captured at ±90° of rotation in the boundary box. It is further believed to be within the abilities of a person of ordinary skill in the art to process images captured in any rotation (360°).
  • a video camera 770 which is capable of capturing an image of a field of view containing symbol 1.
  • the image captured by camera 770 is temporarily stored in frame grabber 740 and is then transferred into RAM 760 or video RAM 750.
  • the processing routine may be stored as a sequence of instruction steps in RAM 760 (or ROM or other conventional memory devices; not shown) for processing the image stored in memory.
  • Mouse 730 and keyboard 720 are used by the user to control execution of the instructions and to provide input information for use by the stored instruction sets.
  • Suitable personal computers 700 include devices containing a 486 DX50 microprocessor as a CPU platform, as well as 386 SX25 microprocessors and other compatible and similar devices.
  • a simple DATA MATRIX code symbol containing the numbers 1, 2, and 3 in encoded form (See FIG. 6A) was captured in a clean field of view (i.e., no visible clutter) wherein the symbol filled one-fifth of the field of view.
  • the personal computer used to process the information was a 386 SX 40 CPU platform having a Cortex I frame grabber board and using RAM memory.
  • the aforementioned prior art multiprobe edge detection routine, which is a part of a commercial decoder Model C-102, available from International Data Matrix, Inc., was used and compared to the present invention substantially as set forth in the software appendix. The prior art method was capable of reading this symbol at a rate of 2.7 reads per second.
  • the method in accordance with the present invention using floating point mathematics read the same symbol at a rate of 2.9 reads per second.
  • the method in accordance with the present invention using the high precision integer mathematics read the same symbol at a rate of 15.1 reads per second.
  • the method in accordance with the present invention using the high precision integer mathematics read the same symbol at a rate of 24.3 reads per second.
  • the time required to locate, decode and extract the data from the symbol was approximately 10 ms.
  • the majority of the time spent in locating the symbol was the approximately 30 ms required to capture the image of the code.
  • EXAMPLE 2 In this example, a damaged and cluttered field of view containing a symbol as illustrated in FIG. 6B was presented in the field of view. In accordance with the aforementioned prior art method (model C-102) , this symbol was read at a rate of 1.4 reads per second. In accordance with the present invention, using the high precision integer mathematics package, the same code was read at a rate of 5.2 reads per second. In considering the above examples, it should be recognized that the ability of the present invention to read codes at a faster rate depends upon the degree of clutter in the field of view, which is difficult to quantify, and other possible variations in scanning the same symbol for the different operating systems.
  • the prior art system may be faster in reading certain symbols in certain circumstances than the invention, particularly in a clutter-free field of view where the symbol substantially fills the field of view.
  • the present invention provides improved performance for reading symbols under excellent reading conditions, and under poor reading conditions.
  • a primary advantage of the invention, which is not demonstrated by the Examples, is that the present invention allows for extracting data from symbols in circumstances in which the presence of clutter or damage renders the prior art methods and devices unusable because they cannot locate and validate a symbol.

Abstract

To locate a preselected symbol in a two-dimensional image, a single main probe (10) of a line of pixels extending across a boundary box of the field of view is conducted to find a first color transition. Once a first transition is found, a second probe (20) is conducted to search in a second line of pixels for a second color transition. If no second color transition is found, the main probe resumes searching along the line of pixels for another first color transition from where it had stopped. If a second transition is found, then a plurality of additional probes (30) are conducted to search for transitions in respective third lines of pixels to locate and define the edge located by the main and second probes. The located and defined portion of the symbol is tested for validity.

Description

METHOD AND APPARATUS FOR LOCATING AND EXTRACTING DATA FROM A TWO DIMENSIONAL CODE
FIELD OF THE INVENTION
The present invention concerns locating a two dimensional code symbol in a bounding box within a field of view and extracting data contained in the symbol, more particularly to improvements in the location and verification of a two dimensional code symbol having a distinctive perimeter in a cluttered field of view.
BACKGROUND OF THE INVENTION
Modern one dimensional and two dimensional code symbols, such as bar codes, two dimensional or stacked bar codes, two dimensional matrix codes, and the like are used for object identification, information encodation, manufacturing and inventory control, item authentication, and a variety of other purposes. Use of these code symbols for marking and/or identifying objects requires that an image of the code symbol be captured by a reading device, and the captured image processed to determine the information encoded in the code symbol.
Among the techniques known for scanning an image of code symbols are using an area capture means, such as a video camera or CCD area array for capturing an image of the field of view including the code symbol, together with a frame grabber device for saving a video frame of the captured image, and memory for storing the captured image frame as, e.g., a bit map of the image pixels captured.
Other techniques include using a linear array to capture a "line" or a portion of a line of image data at a time, and a raster scan of a laser beam or other flying spot scanner, to capture a two dimensional image of a field of view, and equipment to accumulate a bit map of a two dimensional area. Once the image of the field of view is captured in bit map form in memory, the image must be processed to locate the symbol in the "field of view," i.e., the portions of the bit map corresponding to the two-dimensional symbol image in the field of view. More specifically, a "bounding box" is defined within the field of view, which defines the boundary of the pixel area in which the search for the symbol will occur. The located code symbol must then be processed to extract the information recorded or encoded in the symbol. The bounding box is typically spaced inside the field of view e.g., ten to twenty pixels (of a 512 x 484 array) and may be spaced to envelope only a subset of the field of view.
Many techniques for locating the symbols within a bit map stored in memory are known. Most of these known techniques require either powerful graphic image processing for character (symbol) recognition (based on identifying the entire symbol or distinctive subportions of the symbol) , or the use of multiple "probes" that look for patterns of black-to-white and/or white-to-black transitions which correspond to the symbol. The term "probe" refers to an analysis of the contents of a group of memory addresses in a stored bit map, corresponding to examining an image pixel line, as the line appears in the bounding box of the scanned field of view, to identify color transitions, e.g., black to white or white to black. The analysis typically starts the several probes at different margins of the bounding box such that each probe traverses a different path. Once the several probe evaluations are completed, the identified black-white and white-black transition locations are evaluated to determine whether or not the identified transitions correspond to those of a symbol to be recognized. See, for example, Wang U.S. Patent 5,304,787.
Another known multi-probe technique is that offered by the assignee hereof, International Data Matrix, Inc., Clearwater, Florida USA, which is embodied in its commercial devices having the trade names models C-102 and C-302 decoder systems. With reference to FIGS. 7A - 7D, in this prior art multi probe technique, four series of parallel probes are used to acquire data points. Each series is made of a plurality of parallel probes that pass from one margin of the bounding box to the opposite margin, such that successive probes are laterally spaced in parallel to walk across the entire bounding box. The four series are left to right, right to left, top to bottom and bottom to top probes. The transitions located by each multi-probe series are evaluated to identify if there are any straight lines. The best available corners are then selected, and the edges of the symbol are defined. A grid of visual data cell centers is then mapped onto the "located" symbol and the data contents of the cell centers is extracted. The extracted data is then formed into a bit stream, and a decode is attempted of the bit stream, using the decode process and apparatus of the aforementioned model C-102 and/or C-302 devices. If the decode is unsuccessful, then another set of four series of multiple probes are attempted, with the distance between the parallel probes in each series being reduced so that the probes are closer together. If the second attempt is unsuccessful, the routine will quit. One of the problems with these multiple probe techniques is that the symbol locating process requires a significant amount of image bit analysis to declare whether or not a valid symbol exists. This, in turn, requires either the use of extremely powerful, and hence more expensive, processors, or a relatively lengthy delay in declaring a successful read of the symbol due to the processing time. The delay translates into a slow rate of reading signals which has heretofore hindered consumer acceptance of the known devices for symbol applications where fast read times are required.
Another problem with the known multiple probe techniques is that they have difficulty in locating the symbols when the symbols appear in a field of view that also includes "clutter". As used herein, the term "clutter" refers to other marks or color transitions that are not symbols or parts of symbols. In particular, the known multiprobe systems spend significant amounts of data processing time processing transitions caused by clutter. If they are unable to distinguish the symbol from the clutter, these systems spend further time attempting to verify whether or not the clutter corresponds to a symbol, and/or extracting data from the clutter. In some cases, the system may determine that the clutter is clutter and reject it, and in other cases, the system will fail to reject the clutter and yield invalid data. Further, the clutter transitions may be located several times by the same or different probes. These difficulties further delay locating the correct symbol and provide a slow rate of reading symbols.
There remains a continuing need for better methods and apparatus for image processing to locate a symbol and to extract the information encoded in the symbol.
OBJECTS AND SUMMARY OF THE INVENTION
It is, therefore, an object of the present invention to provide improved methods and apparatus for processing captured image data to locate a symbol in a cluttered field of view, and to extract data therefrom.
It is another object to determine rapidly whether a detected color transition suspected of being of a symbol is likely to be of a symbol and to continue processing, or likely to be of a clutter image and disregarded. It is another object of the invention to speed up the captured image processing time while maintaining accuracy by using high precision integer mathematics in image data processing.
The present invention concerns methods and apparatus for processing a bit map of a captured two-dimensional image to locate preselected symbols, if present in the image, and to quickly reject clutter in the image, thereby to improve the speed and reliability of locating the symbol.
One aspect of the invention is directed to using a first or main probe of the captured image to find a first required color transition that may or may not be part of an edge (or side) of the symbol to be located. Once a first transition is found, a second probe (also called a deviation probe) is applied to determine whether there is a second required color transition which occurs in a region, more preferably a pair of second required color transitions which occur in a pair of regions, likely to correspond to an edge of the symbol including the first transition. Stated otherwise, the main probe and the deviation probe or probes search for required color transitions which are likely to be part of an edge of the same symbol. The term "required color transition" (hereinafter "color transition" or "transition") refers to a specific color transition of interest, e.g., black to white or white to black, and does not refer to other color transitions which exist and may be ignored. If a deviation probe fails to find such a second transition, then the suspected "edge" is rejected. In such case, the main probe resumes looking for another first transition at the location where it had stopped. If, however, all of the deviation probes find a second transition, and those second transitions are likely to correspond to the suspected symbol edge located by the main probe, then a plurality of additional probes, called feeler probes are applied. The feeler probes are used to locate and define the edge located by the main and/or deviation probes. More typically, the feeler probes define a selected side or sides of the symbol perimeter. The located and defined portion of the symbol, i.e., the edge or a selected side or sides, is preferably tested for validity. If the dimensions are unacceptable, then the edge is rejected and the main probe resumes. Otherwise, processing will continue. Preferably, if the selected side or sides are found, then certain symbol perimeter parameters are acquired and tested to determine whether the located side(s) correspond to a valid symbol. Once the edge verification is completed and a valid symbol located, the data of the symbol is then extracted. Preferably, the data is extracted in the form of a bit stream. The acquisition of the image to be processed, and the use of the data extracted from the symbol, form no part of the present invention.
Preferably, there are two deviation probes which are spaced apart from and straddle the main probe and look for second transitions that might be part of the same edge that was located by the main probe. The spacing of the deviation probes is such that they can define a slope of an edge based on the at least two of three transitions located. In addition, the degree of linearity of the first transition located by the main probe and the two second transitions located by the deviation probes can be used to reject an edge that is not sufficiently linear to correspond to a valid edge. In as much as only two points are needed to define the slope of a line, only the main probe and one deviation probe are actually needed for locating a straight edge side of a symbol. For symbols having a circular edge feature to be located, at least three points on the curve are required. In accordance with a preferred embodiment of the present invention, the method is used to locate and extract data from a DATA MATRIX symbol, which is a two dimensional matrix array code that is available from International Data Matrix, Inc., Clearwater, Florida USA, the assignee hereof, and is described in, e.g., U.S patents 4,939,354, 5,329,107, and 5,324,923. The DATA MATRIX symbol has a distinctive rectangular (normally square) perimeter of two adjacent solid lines intersecting at a first corner, and two adjacent lines of alternating light and dark areas (so-called "dashed edges") intersecting at a second corner (opposite the first corner) . The data is encoded with error detection and correction and arranged within the perimeter in a matrix array of rows and columns of visual data cells ("cells") . The cells all have the same nominal dimensions (i.e., a root cell size) and each cell represents either a binary 1 or a binary 0. One such method finds the DATA MATRIX symbol location through edge validation, which includes the following sequence of steps.
1. Start a horizontal main probe in the vertical center, and far left horizontal value of the field of view as determined by a bounding box.
2. Using simple edge detection techniques, advance the main probe left to right until an edge is found which satisfies the contrast/transition requirements.
a) If an edge was found, attempt a validation phase using additional probes, e.g., deviation probes and feeler probes, testing the ratio of dimension and validating the dashed edges of the DATA MATRIX symbol;
(i) if the edge passes both the ratio and validation tests, then begin step 5;
(ii) if the edge fails either the ratio or the validation tests, then restart step 2 with starting locations the same as the failed edge.
b) If no edge was found and the bounding box is reached by the main probe, begin step 3.
3. Using simple edge detection techniques, advance the main probe right to left until an edge is found which satisfies the contrast/transition requirements.
a) If an edge was found, attempt a validation phase using additional probes, e.g., deviation probes and feeler probes, testing the ratio of dimension and validating the dashed edges of the DATA MATRIX symbol;
(i) if the edge passes both the ratio and validation tests, then begin step 5;
(ii) if the edge fails either the ratio or the validation tests, then restart step 3 with starting locations the same as the failed edge;
b) If no edge was found and the bounding box is reached by the main probe, begin step 4.
4. If no edge was found, change the main probe vertical offset using a calculation based on an approximate size of the matrix to be located (+/- summing toggle). At each pass through step 4 the summing toggle will increase in magnitude and alternate in direction so that, as it is alternately added and subtracted from the center of the bounding box, it will be spaced further and further away each time. The search will terminate before exceeding the limits of the bounding box.
5. With values obtained during the validation phase for a valid DATA MATRIX symbol border and the three corner locations Top Left A, Bottom Left B, and Bottom Right C, begin to extract the cells from the matrix and add them to a "decode" bit stream.
Following the foregoing, the bit stream can be provided to a DATA MATRIX decode machine embedded in a suitable controller device (e.g., Model No. C-102) to attempt a decode of the bits placed into the decode bit stream. A signal indicating whether the decode was valid or invalid can be communicated to the user.
A processing system, in accordance with the present invention, for analyzing data corresponding to a scanned image to locate symbols which may be present in a bounding box of the field of view, includes an input port operable to receive a data set corresponding to the pixel image data, as may be obtained by a scanning device, a memory storage device operable to store the data set and a plurality of processing system instructions, a processing unit for operating on the data set to identify color transitions corresponding to portions of a symbol in the bounding box, and optionally an output port for providing an output data set corresponding to the information of the symbol located in the bounding box. The processing unit retrieves and executes at least one of the processing system instructions from the memory storage device. The processing system instructions direct the processing unit to examine selected subsets of data corresponding to selected lines of pixels in the bounding box of the field of view, to identify color transitions in the lines of pixels which may correspond to portions of the identifiable edge of a symbol, if one exists. In one embodiment, the instruction sets operate to conduct searches of selected subsets of the data in a sequence such that a first probe search examines a first subset for a first transition, in response to which a second probe search examines a second subset for a second transition, and in response to which a third probe search examines a plurality of third data subsets to define the suspected edge.
One embodiment for using and/or distributing the present invention is as software stored on a storage medium. The software includes a plurality of computer instructions for controlling one or more processing units for processing data corresponding to a captured field of view of an image which may include a preselected symbol having an identifiable edge and data, so that the symbol can be located and validated, and the data of the symbol extracted, in accordance with the principles of the present invention. The software will include the necessary search and test algorithms, or the parts thereof to be used. The storage media utilized may include, but are not limited to, magnetic storage, optical memory, and/or semiconductor chips, to name some examples. Such semiconductor chips include RAM, ROM, EPROM, EEPROM, and flash non-volatile code storage devices.
BRIEF DESCRIPTION OF THE DRAWINGS
Further features of the invention, its nature and various advantages will be more apparent from the accompanying drawings and the following detailed description of the invention, in which like reference characters refer to like elements, and in which:
FIGS. 1 and 1A - 1H are flow chart diagrams of the operation of the invention in accordance with a preferred embodiment of the present invention;
FIGS. 2A - 2C are diagrams of different applications of the method of locating a two dimensional code symbol in a field of view containing clutter, in accordance with embodiments of the present invention;
FIG. 3 is a block schematic diagram of the apparatus of the invention;
FIGS. 4A - 4C illustrate examples of deviation probes of the present invention;
FIGS. 5, and 5A - 5B illustrate methods of locating the corner of the symbol of FIG. 2A;
FIGS. 6A and 6B are sample DATA MATRIX symbols used in the examples discussed below; and
FIGS. 7A-7D are drawings of a prior art multiprobe system for locating a symbol.
DETAILED DESCRIPTION OF THE INVENTION
Referring to FIGS. 1, 1A - 1H and 2A - 2C, a preferred embodiment of the present invention is shown. In accordance with the invention, there are defined a symbol 1 in a bounding box 2 of a scanned field of view containing elements of clutter 3, a main probe 10, a pair of deviation probes 20, and a plurality of feeler probes 30, 32, and 35. Probes 10, 20, and 30 are sequentially used to locate an edge of a two dimensional symbol 1 in a memory storage corresponding to bounding box 2. Probes 32, and optionally probes 35, are used to define further the corners of the edge of symbol 1. Once the edge is acquired, it is tested for validity. If the edge is determined to be valid (or not determined to be invalid), additional probes 30, 32, and perhaps 35 are used to find one or more other edges of symbol 1. Bounding box 2 is preferably spaced 10 to 20 pixels inside the field of view, although it could be configured to include all or any portion of the field of view.
For simplicity of explanation, the drawings depict, and the discussion herein describes, the probes and stored image data containing the code symbol in the visual form, i.e., as scanned in the field of view and searched within bounding box 2, and not as the data is actually stored in memory, e.g., in a bit map. In this regard, it is noted that the data stored in memory is pixel color data corresponding to the scanned image pixels stored in a prescribed sequence of address locations, and not necessarily by rows and columns corresponding to the image pixels of the captured field of view. The pixel values may be binary, but are more typically a grey scale of color ranging from 0-255 pixel units (based on 8 bit values). Hence, probes are described in terms of passing in a dimension (i.e., a line) in the bounding box 2 of the field of view, rather than examining the contents of memory addresses corresponding to the image or pixel line of interest.
As used herein, the term "(x,y) coordinates" in connection with the location of a color transition or pixel should be understood to refer to the relative location of the pixel in a bounding box where a Cartesian coordinate system is used to define the bounding box and the corresponding memory locations. Thus, the term (x,y) coordinate should be interpreted broadly to include other coordinate systems or ways of defining locations in the bounding box and not as limited to Cartesian systems.
The extension of the discussion of the visual image to apply the invention to process the image information stored in memory is both straightforward and within the ability of a person of ordinary skill in the art. It is contemplated that the present invention is applied to the location and extraction of a particular symbol the appearance of which is known in advance. This reflects the commercial use of such symbols, wherein typically only one symbol type is used in a given application. Also, in accordance with the present invention, for the particular symbol chosen, either uniform symbol characteristics are to be used (a dedicated system) , or the user may provide certain information as to some basic structure and some estimated dimensions, to facilitate the processing of the acquired image, as discussed below.
Advantageously, by having certain user provided symbol specific information, and evaluating the image data in view of the specific information, the present invention is able to discard quickly data that is not likely to correspond to a valid symbol without wasting very much processing time. This reduces the time needed to locate a valid symbol. The present invention is particularly well suited to processing symbols of the same type having different information encoded therein.
In addition, the invention may be adapted to recognize different types of symbols, each of which also may have certain user provided estimated symbol parameters input to speed up the processing. In such case, a device in accordance with the invention could test for a valid edge of each type of symbol in a predetermined sequence of tests. Hence, when the acquired data fails for one symbol type, the process will continue for the other symbol types until either a valid symbol is found or the data is rejected for all symbol types. Generating the appropriate test sequence to identify the different symbol types is believed to be within the skill of a person of ordinary skill in the art. This embodiment is useful in an environment wherein the same scanning apparatus is used to process more than one type of symbol. However, such complexity will reduce the speed at which a symbol can be detected according to the number of different symbols to be detected.
For illustrative purposes, the symbol 1 used in the preferred embodiment is the DATA MATRIX symbol previously described, samples of which are illustrated in FIGS. 6A and 6B. In the application of the present invention to the DATA MATRIX symbol (hereinafter "symbol" unless otherwise indicated), the user provides an estimate of the number of image pixels in the visual cell diameter (i.e., the height or width for a square visual cell), the border color (i.e., black on white or white on black), and the number of rows and columns of the visual cells (including the perimeter). These values are used for performing certain tests to validate data and avoid processing clutter 3.
The invention also may be adapted to determine automatically the type of symbol to be processed and to acquire from the image itself some or all of the data that the user would otherwise provide, to render the operation less user dependent. This is in part discussed below.
Referring to FIGS. 1 and 2A, a method for locating a symbol 1 through an edge validation process in accordance with the present invention is described. The main routine starts at step 50 and passes to step 51 where a main probe 10 is initialized. The main probe 10 is provided with a range of examination ("range") , in which main probe 10 searches for a first transition that extends the width of bounding box 2. Main probe 10 starts from the left side in the center of the bounding box 2 and begins a left to right probe, searching for a first transition possibly corresponding to an edge of the symbol 1. If no first transition is found within the range of main probe 10 (i.e., the width of bounding box 2) , the routine passes to step 52 where the direction of main probe 10 is reversed. Main probe 10 then resumes probing, but now travels along the same horizontal row of bounding box 2 in the right to left direction. If again no edge is found within the range, then the routine passes to step 54 where main probe 10 is vertically moved to a different left side starting location, spaced from the initial center row, and begins another left to right search for a transition.
At step 54, a summing toggle is maintained which is based on the user provided number of rows and columns of the symbol, and the number of pixels per data cell area of the symbol, to set the next main probe 10 starting point. The starting point for the next main probe 10 is vertically above or below the center of bounding box 2 by a multiple of the number of times that main probe 10 has been started at a different location. Typically, even multiples are spaced above the center line, and odd multiples are spaced below the center line. In this way, the routine will continue to space main probe 10 progressively further away from the center line to scan across enough of the entire bounding box 2 to find any symbol 1 that is present.
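A minimal Python sketch of this summing toggle follows. The exact step size is not specified above beyond being derived from the user-supplied row count and pixels-per-cell estimate, so the half-symbol-height step used here is an assumption; image rows are assumed to increase downward, and the termination is simplified to stop at the first offset that leaves the bounding box.

    def main_probe_rows(center_row, box_top, box_bottom, est_rows, pixels_per_cell):
        # Assumed step: half the estimated symbol height (rows * pixels per cell).
        step = max(1, (est_rows * pixels_per_cell) // 2)
        rows = [center_row]
        n = 1
        while True:
            offset = n * step                       # multiple of the number of restarts
            # even multiples above the center line, odd multiples below
            row = center_row - offset if n % 2 == 0 else center_row + offset
            if not (box_top <= row <= box_bottom):  # stop before leaving the bounding box
                break
            rows.append(row)
            n += 1
        return rows

    # Example: main_probe_rows(240, 20, 460, est_rows=9, pixels_per_cell=5)
    # -> [240, 262, 196, 306, 152, ...] spaced further from the center each pass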
If, during a left to right probe (step 51) or a right to left probe (step 52), a transition is found (the so-called "first edge" or "first transition"), the routine passes to step 53 where the slope of the edge is determined, if possible, using deviation probes 20 to locate second transitions. In the preferred embodiment, deviation probes 20 are spaced above and below the first transition found by main probe 10. If each of deviation probes 20 finds a second transition within its range (a range that is substantially less than the range of main probe 10, as discussed elsewhere), the locations are noted. The slope and deviation are then determined based on the noted transition locations.
At step 55, the first and second transition points are tested for linearity within a defined tolerance limit of the cell width divided by two. If the points are not within the limit, then the first edge is rejected and the routine returns to step 51 to resume main probe 10 search. If the points are within the limit, then the routine passes to step 56 and begins a WALK_ABOVE_EDGE routine. The linearity test at step 55 may be omitted. However, including the test provides a coarse filter to find and reject edge data that is likely to be invalid.
The WALK_ABOVE_EDGE routine at step 56 begins an edge validation by using a series of small horizontal feeler probes 30. Feeler probes 30 are spaced apart, vertically above main probe 10, and are spaced horizontally to straddle, and thus respect, the predicted slope of the possible edge found by deviation probes 20. The horizontal spacing may be adjusted based on the location of the preceding transition located and the calculated deviation and slope. Alternately, the horizontal spacing may be based on the calculated slope, the first transition location, and the distance from the first transition. In this regard, feeler probes 30 start at the transition located by main probe 10 and continue to walk up the suspected edge above the center line, until they can no longer find a transition in the feeler probe range corresponding to the suspected edge. As described elsewhere, the feeler probe range is smaller than the deviation probe range. Regarding the horizontal shifting of feeler probes 30, as is set forth in the microfiche appendix, in the case where the calculated deviation of the edge is greater than -2 and less than +2, the feeler probes are provided with a range having starting and ending (x,y) coordinates that are calculated as follows:
a) range start x = the last detected edge x coordinate minus one cell dimension;
b) range start y = the last detected edge y coordinate minus one cell dimension;
c) range stop x = the last detected edge x coordinate plus one cell dimension;
d) range stop y = the last detected edge y coordinate minus one cell dimension.
In the case where the calculated deviation of the edge is less than -2, the feeler probes are provided with a range having starting and ending (x,y) coordinates that are calculated as follows:
a) range start x = the last detected edge x coordinate minus two cell dimensions;
b) range start y = the last detected edge y coordinate minus one cell dimension;
c) range stop x = the last detected edge x coordinate plus one cell dimension;
d) range stop y = the last detected edge y coordinate minus one cell dimension.
The start x location must back up enough to be outside the suspected edge, and the stop x location must be inside the suspected edge to allow for imperfections in the print. As is apparent, for other deviations, and for other directions of walking, the signs of the calculations change according to the slope.
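The window placement quoted above translates directly into code. The following Python sketch assumes the remaining case (deviations greater than +2) mirrors the case spelled out for deviations less than -2, and does not attempt to define behaviour at a deviation of exactly ±2, which the text leaves open.

    def feeler_range_above(last_x, last_y, cell, deviation):
        # Returns ((start_x, start_y), (stop_x, stop_y)) for the next horizontal
        # feeler probe when walking above the edge; "cell" is the estimated
        # visual cell dimension in pixels.
        if -2 < deviation < 2:
            start = (last_x - cell, last_y - cell)
            stop = (last_x + cell, last_y - cell)
        elif deviation <= -2:
            start = (last_x - 2 * cell, last_y - cell)
            stop = (last_x + cell, last_y - cell)
        else:  # deviation >= +2 (assumed mirror image of the previous case)
            start = (last_x - cell, last_y - cell)
            stop = (last_x + 2 * cell, last_y - cell)
        return start, stop

    # Example: feeler_range_above(120, 200, cell=5, deviation=0)
    # -> ((115, 195), (125, 195))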
After finding the top corner of the edge at step 56, the routine passes to step 57 where it begins a WALK_BELOW_EDGE routine. The WALK_BELOW_EDGE routine continues the suspected edge validation by walking a series of the same horizontal feeler probes 30 downward from the transition found by main probe 10 to the bottom corner of the edge, i.e., until they can no longer find a required transition. In the WALK_BELOW_EDGE routine, feeler probes 30 are also uniformly spaced apart, vertically below main probe 10, and spaced horizontally based upon the slope found by deviation probes 20.
Optionally, after finding the top and bottom corners of the first edge, the dimension of the first edge is tested at step 58 to reject edges that are considered too small to be a part of a valid symbol. For the edges that pass the test, the routine then passes to step 59 where the
WALK_EDGE_FROM_BOTTOM routine begins. Similar to the test at step 55, the test at step 58 also provides a coarse filter test to reject data that is likely to be invalid, based however on different criteria. As will become apparent, after each stage of testing for valid data, progressively more time is spent to further validate the symbol. Thus, the omission of the tests at steps 55 and/or 58 may result in spending time attempting to validate data that could have been earlier detected and is ultimately rejected. However, the use of these tests for every first edge, at a cost of some processing time, improves the overall speed at which the probes can work to locate a valid symbol 1.
In the case that main probe 10 found the edge on a left to right probe search, then at step 59 the WALK_EDGE_FROM_BOTTOM routine begins to walk a series of small vertical feeler probes 30 to the right, from the bottom corner found during the WALK_BELOW_EDGE routine at step 57. Feeler probe 30 window range placement also is adjusted, if necessary, based on the slope of the first edge found by main probe 10 and deviation probes 20. Feeler probes 30 continue to walk to the right of the edge until they can no longer find a transition.
In the case that main probe 10 found the first edge on a right to left probe, then the WALK_EDGE_FROM_BOTTOM routine operates as already described, except that vertical feeler probes 30 walk to the left from where the WALK_BELOW_EDGE routine terminated. At step 60, the routine tests the dimension of the two sides defined by feeler probes 30 by combining the distances between top edge and bottom edge (as found in the WALK_ABOVE_EDGE and WALK_BELOW_EDGE routines), based on where the feeler probes 30 terminated relative to the main probe 10, and comparing that dimension with the length determined during the WALK_EDGE_FROM_BOTTOM routine. If the comparison yields a ratio that is within a preselected limit indicative of a valid symbol 1, then the routine passes to step 62.
At step 62, the centers of the first and second dashed edges of a DATA MATRIX symbol are determined and tested. If the centers of the first and second dashed edges both show a potential for a DATA MATRIX symbol, e.g., that the dashed edges are at least 80% valid, then the routine passes to the data extraction phase at step 63. If the centers do not show the potential for a DATA MATRIX symbol, e.g., the dashed edges are less than 80% valid, then the routine passes to step 65. At step 65, the routine queries whether both a
WALK_EDGE_FROM_TOP and a WALK_EDGE_FROM_BOTTOM routine have occurred. If the top walk has not yet occurred, then the routine passes to step 61 where a WALK_EDGE_FROM_TOP routine applies another sequence of feeler probes 30 to define the "second" edge of the symbol. In this step, feeler probes 30 attempt to validate the top edge by using small vertical probes and walking from the point where the WALK_ABOVE_EDGE routine terminated, in the direction of travel of main probe 10, to where the end of the second solid edge of the DATA MATRIX symbol should be. In other words, if main probe 10 found the first edge passing from left to right, then the routine will walk to the right from the top corner of the first edge. If instead, main probe 10 was passing from right to left, then the routine at step 61 will walk left. Similar to the WALK_EDGE_FROM_BOTTOM routine, vertical feeler probes 30 are used walking to the right or left. Also, the probe range placement is based on the slope of the initial edge found by main probe 10 and deviation probes 20. When step 61 has concluded the WALK_EDGE_FROM_TOP routine, the ratios of the walk above edge plus walk below edge to the walk edge from top are compared again at step 60 and the ratio for an acceptable matrix is tested. If the ratio test shows no potential for a DATA MATRIX symbol at step 60, and the top and bottom walk have been performed, as tested at step 65, the routine returns to step 51. At that point main probe 10 resumes probing from the point where it had stopped, shifted by an amount to not detect the same edge that was rejected, and searches for another first transition. Otherwise, after edge validation at step 62, the data extraction will commence at step 63.
The data extraction step 63 concerns identifying each matrix cell, typically passing from left to right relative to the solid borders found by the WALK_ABOVE_EDGE and
WALK_BELOW_EDGE and WALK_EDGE_FROM_TOP (or BOTTOM) routines. The data is extracted and provided to a bit stream at step 64 for processing by, e.g., a commercial device for decoding DATA MATRIX symbols. The decoder device then conventionally processes the data bit stream at step 64 for the appropriate use.
Referring now to FIGS. 1A, and 2A, locating the symbol 1 is described in greater detail. At step 100, the user provides the estimated visual cell dimension and the number of rows and columns of the matrix. At step 110, image data is acquired or accessed, e.g., a RAM or video RAM memory device filled with a field or frame of video data and made available for processing in accordance with the invention. At step 120, main probe 10 is initialized with beginning location, a direction, and a range of examination. At step 125, main probe 10 is advanced from the given starting location, in the given direction through the range. Preferably, main probe 10 is initialized in step 120 to have a range of examination and direction that starts at the left most pixel of the center line of the image in bounding box 2, and advances horizontally across the center line of data in bounding box 2. Main probe 10, represented in FIGS. 2A - 2C as a solid black line, follows a bit stream analyzing routine that analyzes a series of address locations corresponding to a line of image pixels and searches for changes in value of the image (i.e., the stored value) from white to black
(herein defined as a "transition" or an "edge transition") which may be part of a symbol. The transition is preferably determined based on a comparison of the values of the pixels exceeding a selected threshold. A suitable threshold is a percent contrast as between the dark and the light areas, e.g., a 20% difference, measured in pixel values. The required transition, white on black or black on white, is provided by the user. A default to black on white is typically provided. In cases where the symbol to be captured is not likely to be in the middle of the field of view, main probe 10 could begin at a more appropriate location. It also is to be understood that the transition may be a white to black transition, depending on the nature of the symbol on the marked object. In this regard, relative values of contrast (ratios) are used in the image/pixel analysis rather than absolute values, so that the same routines will work for positive symbols, e.g., white to black edge transitions, and negative symbols, e.g., a black to white edge transition. If appropriate, the routine could be modified to toggle from a positive image search to a negative image search if the former does not find a valid symbol, using the same data in memory, before determining that there is no symbol in the field of view.

Preferably, the analysis is conducted by using a WINDOW routine, which is illustrated in FIG. 1C. The WINDOW routine assumes that there is a valid range of examination for the probe, and will examine the range for a transition. The WINDOW routine thus reports back either that an edge was found, and its location, or that no edge was found. In operation, at step 210, the WINDOW routine is initialized by being provided an examination range, a starting location, the direction for the probe, and a value N corresponding to the spacing between pixels to be evaluated for a transition. At step 220, the WINDOW routine selects a first pixel (memory address) and acquires the color value P1 of that pixel (memory address contents). At step 230, the WINDOW routine selects a second pixel, which is spaced N pixels away from the first pixel, and acquires the value P2 of that second pixel. At step 240, the values of the first and second pixels P1 and P2 are compared. If the difference between values P1 and P2 is greater than a preset contrast threshold, then a transition is declared. The transition is checked at step 243 to determine if it is a correct color transition, i.e., the desired white to black or black to white. If the transition is not correct, the routine passes to step 248. If it is correct, the transition location is stored at step 245. The WINDOW routine then ends and returns to step 135 (FIG. 1A).
If, however, the difference at step 240 is less than the contrast threshold, then at step 248 the routine tests to see if the probe has reached the end of its range. If not, then the first pixel is shifted at step 250 and at step 220 the WINDOW routine selects the value of the now adjusted pixel (new memory address and contents) as the current P1 value, again acquires a second pixel value P2 that is spaced N pixels from the current P1 value at step 230, and compares the current values P1 and P2 at step 240. Preferably, at step 250 (FIG. 1C), the increment of the pixel P1 is one pixel so that the probing window is shifted one pixel at a time across the image pixel line of the bounding box. Steps 248 and 250 are thus used to control the pixel address P1 in the probe range so that the WINDOW routine can be used for all probing with minimal computational time requirements. If at step 248 the probe is at the end of the range, then the WINDOW routine ends and the routine returns to step 135.

The value N initialized at step 210 may be, for example, 2, 4, or some other integer value suitable for locating a color transition likely to be of a symbol edge. The value N is selected based on the thickness of the symbol edge relative to the dimension of a pixel and the clarity of the edge. In this regard, if the thickness of the symbol edge to be detected is five pixels (e.g., based on the user provided estimate), a value of N=2 is more appropriate. On the other hand, if the edge thickness is 8 or more pixels, a value of N=4 (or more) is more appropriate. Regarding the quality of the printed symbol, sharp edges permit using fewer pixels between P1 and P2 than fuzzy edges. The precise value is a compromise to be selected by the user based on the type, size, and quality of the symbol to be processed, and the resolution of the scanning equipment. The thickness of the edge is thus one value that may be estimated by the user and used to control the WINDOW routine. Alternatively, the value of N may be calculated as a function of the estimated visual cell diameter, e.g., 50%.

The contrast threshold limit is typically preset based on the range of anticipated values possible, and the level of contrast desired for the visual cells corresponding to extremes of the grey scale. In the case of the DATA MATRIX symbol, a 20% contrast limit is a standard used to distinguish the binary dark (black = 1) and light (white = 0) areas. Thus, for a grey scale color range of 0-255 pixel units (8 bit values), a 20% contrast limit corresponds to a difference between P1 and P2 of 51 pixel units. However, the contrast level of the test used in the WINDOW routine preferably can be altered by the user for the environment of the symbol. For high contrast symbols, a higher contrast threshold can be used, which provides more reliable detection of a likely edge in the first instance. For low contrast symbols, using a lower contrast threshold may result in more processing of bad data until the bad data is rejected, but it also will likely enable detecting symbols that otherwise would be missed. Accordingly, as an alternative, a user-controlled input could be provided to adjust the contrast threshold in view of the symbol quality and contrast to be processed. In practice, good contrast DATA MATRIX symbols printed in ink typically have a contrast of approximately 50-70%. Thus, the discrimination of light and dark areas is highly reliable.
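A minimal Python sketch of the WINDOW routine follows, under the assumptions that the probe line is available as a list of 8-bit grey-scale values and that the color-sense check at step 243 reduces to comparing which of the two pixels is lighter; direction reversal and memory addressing are left to the caller.

    def window(line, start, stop, n, threshold=51, dark_on_light=True):
        # line: one row of grey-scale pixels (0 = black, 255 = white);
        # threshold 51 corresponds to the 20% contrast figure for 8-bit values.
        i = start
        while i + n <= stop:                      # step 248: end-of-range test
            p1, p2 = line[i], line[i + n]         # steps 220 and 230
            if abs(p1 - p2) > threshold:          # step 240: contrast test
                if (p1 > p2) == dark_on_light:    # step 243: correct color sense
                    return i                      # step 245: report the location
            i += 1                                # step 250: shift the window one pixel
        return None                               # end of range, no edge found

    # Example: window([255] * 10 + [0] * 10, start=0, stop=19, n=2) -> 8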
Advantageously, the present invention is capable of processing scanned images having grey scale values to locate and validate the symbol 1, and, except for the case of a dashed edge validation routine, does not require determining whether any particular pixel (or area) corresponds to a binary 1 or 0 value, until data is to be extracted.
In another alternate embodiment, not shown in FIG. 1C, if the comparison at step 240 yields a likely transition, the WINDOW routine may acquire a third pixel data point P3 that is N + Y pixels from the first point P1, and test the value of pixel P3 relative to P1 (or optionally P2) to confirm a transition. This test may be based on the difference between P3 and P1 also being outside the contrast limit (or optionally P3 and P2 being within the contrast limit, or both). Although this provides an added step which slows the processing time, it also reduces the likelihood of misinterpreting clutter 3, such as an ink spot, as an edge transition. In this case, Y is typically one-half of N, such that if N=4, then Y=2. Hence, it is possible to provide a switch that allows the user to indicate whether the print quality of the symbols to be processed is such that the third pixel point P3 confirmation is or is not used.

Referring again to FIG. 1A, if no edge transition is found and main probe 10 has reached the end of the row (its range) at step 130, a test is made at step 137 to determine if probe 10 has searched in both directions. If it has not, then at step 138 the direction of main probe 10 is reversed and, at step 125, the next main probe 10 will commence. If it has, then at step 139 the main probe 10 is shifted to start a new row or to stop at step 150 if the sequence is completed.
Once the first edge transition is identified at step 130, the location of main probe 10, i.e., the horizontal x and vertical y coordinates (x,y), is saved at step 135. At step 140, further tests are conducted in response to locating the first transition. In this regard, with reference to FIGS. 1A and 1B, at step 140 the slope of the suspected edge is determined, relative to the horizontal x axis of the direction of the probe. This is achieved by the use of a pair of deviation probes 20 to find, if they can, a pair of second transitions which are capable of being a part of the same edge as the first transition. Preferably, the second transitions are different from the first transition found by main probe 10. The pair of deviation probes 20 are respectively spaced apart a distance which is sufficient to determine with reasonable accuracy the slope of an edge of a symbol.
In a preferred embodiment, deviation probes 20 are spaced on either side of main probe 10 by the same distance which is approximately twice the estimated visual cell diameter. As an example, for a symbol that is a 9 x 9 matrix of visual data cells, each deviation probe 20 is spaced from main probe 10 such that there are four cell diameters between deviation probes 20. For symbols having more than 9 rows and 9 columns, deviation probes 20 may be spaced further apart, and vice versa. The limit on the spacing between deviation probes 20 is practical in that it is more desirable to find two points on the first edge than to miss one of the points because a deviation probe was spaced too far from main probe 10. In contrast to main probe 10, deviation probes 20 do not start from the margin of bounding box 2 and then proceed across the bounding box 2 until a transition or the other margin of the field of view is reached. Instead, each deviation probe 20 searches in a limited range for a transition. The range, which is preferably the same length as the spacing between deviation probes 20, is centered on an axis of the first transition detected by main probe 10.
The length of the range of deviation probes 20 and the spacing of deviation probes 20 from main probe 10 are such that if the first transition located by main probe 10 is part of a symbol edge, and if main probe 10 located that first transition at an acceptable location on the symbol edge, then the symbol edge also should be located by both deviation probes 20, somewhere within the probe 20 ranges. If the two transitions are located and confirmed as a possible edge, then the edge is further tested to determine whether or not it is part of an edge of a perimeter of the symbol to be located.
Referring now to FIG. 1B, a preferred embodiment of determining the slope of the edge (step 140 illustrated in FIG. 1A) is explained. A range for the deviation probes, including a starting location and a direction, is calculated at step 302. Each deviation probe 20 preferably uses the aforementioned WINDOW routine to look for a transition. Typically one deviation probe 20 is probed at a time, with the first probe starting at one end of the determined range at step 304, searching for a transition using WINDOW at step 306, and, if no transition is found in the probe range, passing to step 350 (and returning to step 147 of the main routine). If a transition is found, the coordinates are saved at step 308. Thus, if the first deviation probe 20 fails to find a transition, the routine aborts quickly without spending the time and energy to process the data for the second deviation probe 20.
The second deviation probe 20 undergoes the same process, preferably subsequent to the first deviation probe 20. In this regard, the second deviation probe 20 starts at one end of its probe range at step 314, and searches for a transition at step 316. If no transition is found, the routine passes to step 350 (and returns to step 147) . If a transition is found, the coordinates are saved at step 318. A typical range for deviation probes is the dimension of four visual cells of data, e.g., based on the user provided estimate or a calculated value. However, for small size matrices, smaller ranges or a user provided multiple may be provided. For a 9 x 9 matrix, the first and second distances are each typically twice the estimated cell diameter.
Deviation probes 20 preferably advance through their limited ranges in the same direction as main probe 10 advances. The starting location is determined based on the direction and the (x,y) coordinates of the first transition so that the deviation probe ranges are centered on a line intersecting one of the (x,y) coordinates and perpendicular to the direction of the deviation probes 20. The use of deviation probes 20 is illustrated in FIGS. 4A - 4B. In FIG. 4A, deviation probes 20 locate two transitions which correspond to a valid edge of symbol 1, which is later validated according to the routines described below. In FIG. 4B, the deviation probes 20a and 20b are produced in response to main probe 10a finding a first transition edge as main probe 10a traverses from left to right across bounding box 2. In this example, deviation probe 20a finds an edge transition and deviation probe 20b does not find an edge transition. Thus, the first edge found by main probe 10a is rejected. As illustrated in FIG. 4B, main probe 10a then resumes probing where it left off (represented by dashed lines in FIG. 4B), and eventually reaches the end of its range at the right margin of bounding box 2. Main probe 10a then reverses direction and becomes main probe 10b, as illustrated. When main probe 10b locates a first transition, two deviation probes 20c and 20d then search for their respective second transitions, and each finds one. Thus, the routine will then try to validate further the symbol based on these edge transition detections, as described below.
Referring again to FIG. 1B, if each deviation probe 20 identifies a transition, then the saved locations of those transitions are used to calculate a deviation and the slope of the edge defined by the two transitions, at step 319. Preferably, at step 320 the slopes of the lines between each second transition and the first transition are determined and compared. If the deviation probe transitions are not within a tolerance limit of ± one half cell of each other, then they are assumed invalid and the routine passes to step 350.
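The collinearity test of steps 319 and 320 can be sketched as follows. The deviation probes are assumed to be spaced symmetrically about the first transition, so on a straight edge the two x-offsets should cancel; the exact sign convention for the stored deviation value is an assumption.

    def edge_deviation(first, upper, lower, cell):
        # first, upper, lower: (x, y) coordinates of the first transition and of
        # the two second transitions; cell: estimated cell dimension in pixels.
        dx_up = upper[0] - first[0]
        dx_dn = lower[0] - first[0]
        if abs(dx_up + dx_dn) > cell / 2:     # offsets should cancel on a straight edge
            return None                       # step 320: reject, outside the tolerance
        deviation = dx_dn - dx_up             # assumed sign convention for the x-step
        sign = "+" if deviation > 0 else "-" if deviation < 0 else "0"
        return deviation, sign                # step 319: stored for later routines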
Also at step 319, the deviation and slope values for valid slopes are stored for use by other routines as described below. More specifically, the deviation is stored as one of three values, e.g., + if the calculated slope value is positive, - if the slope is negative, and 0 if the slope is zero (a vertical line) . In practice, status flags are set based on the value of the deviation. Following determination of the deviation and slope, the routine ends at step 321 and returns to the main routine step 140 (FIG. 1A) . If the routine aborts at step 350, the routine then returns via step 143 to step 147. At step 147, clutter acceptance is tested to determine if it is on or off. Clutter acceptance, when it is off, assumes that the first edge hit will likely be a matrix or a solid border. Consequently, to speed up processing, main probe 10 is assumed to reach the probing limit at the boundary margin and, at step 137, is started to probe in the reverse direction. By this routine, the main probe 10 will not process the edge transitions attributable to data cells inside the matrix perimeter, which processing could increase the time to locate the symbol.
If clutter acceptance is on, then the routine passes to step 148 where the starting position coordinates of main probe 10 are reset to the location saved at step 135 and shifted in the direction of travel by one cell diameter, thereby to look for another first transition at step 125, with main probe 10 at the "shifted coordinate location". Alternately, the shift could be N pixels where N is the value described in connection with the WINDOW routine. Otherwise, the routine returns via step 143 to step 154. (Step 143 is a dummy transfer step.)
Referring to FIGS. 1A, 1D, and 5, after determining the deviation and slope of the edge found by main probe 10 and deviation probes 20, the routine at step 154 performs an edge validation routine called WALK_ABOVE_EDGE. The WALK_ABOVE_EDGE routine uses a plurality of feeler probes 30 to locate the upper extent of the edge corresponding to the transitions found by main probe 10 and deviation probes 20. As compared to deviation probes 20, the feeler probes 30 have a more limited search range, are spaced closer together in parallel, and are greater in number and therefore potentially cover (define) a greater portion of the edge to be validated. Feeler probes 30 are spaced apart from each other by a distance D1. Similar to deviation probes 20, feeler probes 30 have a searching range, a direction, and a starting point. The searching range is selected to straddle the anticipated edge. The starting point of the feeler probe range, and the length of the feeler probe range, are calculated based on the last edge detected, the slope, and the estimated cell diameter. The direction in this case is the same as that of deviation probes 20. These parameters are set or initialized at step 322. The first feeler probe 30 is then applied at step 323, where the feeler probe range is shifted based on the slope, cell diameter, and last detected edge. At step 324, the feeler probe range is checked to see if it is completely within bounding box 2. If it is, then an edge transition is searched at step 325 using the WINDOW routine in the manner previously described. If the range is not in bounding box 2, then the routine advances to border damage acceptance at step 326, which is discussed below. If a transition is found at step 325, then another feeler probe 30 range is selected at step 323, the range is checked relative to the bounding box at step 324, and, if acceptable, step 325 is repeated with the new (adjusted) feeler probe 30. At step 323, the feeler probes 30 are incremented by the distance D1 of one cell dimension, unless the corner search flag is set, in which case the distance D1 is only one pixel unit. Thus, in this routine, as each additional feeler probe 30 is used, the starting point is vertically shifted above the starting point of the preceding feeler probe 30 by the distance D1 of one cell dimension set between feeler probes. Also at step 323, the starting point of each feeler probe 30 is horizontally shifted a second distance, according to the calculated deviation (slope), relative to the last transition detected. Preferably, the horizontal shift is to a point that is calculated as a function of slope, cell diameter, and last edge, as previously described. As feeler probe 30 detects a transition, the location is noted, and at least the last detected transition is saved.
If a feeler probe 30 reaches the end of its range without detecting a transition, or if its range extends out of the bounding box, it is designated feeler probe 30', representing no edge found, and the routine passes to a series of steps to identify more precisely the corner location, as described below.
At step 326, the routine determines whether the border damage acceptance routine is on. If it is on, then a damage counter at step 328 is incremented to count the number of consecutive times that no transition was detected. The detection of an edge will operate to reset the damage counter at step 328. At step 329, the count is tested against a set damage limit, e.g., 0%, 15%, or 30% of the edge. The limit is preferably set as the selected percent times the number of pixels per cell times the number of rows or columns expected in the edge. For example, in a 10x10 matrix having 5 pixels per cell, for a damage acceptance of 15%, the limit is 7.5 pixels (rounded up to 8 pixels), corresponding to less than two cells. In this regard, if the spacing between feeler probes 30 is one cell dimension, then two consecutive missed edges will result in the damage counter exceeding the set limit. Of course, a numerical limit or other percent limits could be used.
If the set damage limit is not exceeded at step 329, the routine returns to step 323 to select the next feeler probe 30, treating the missed edge as if an edge has been found. In this regard, the routine may predict where the edge should have occurred and use the predicted location to select the range starting point for the next feeler probe 30. If the limit is exceeded, then the routine advances to step 330 for the corner searching routine. Similarly, if the border damage acceptance criteria is not on, the routine simply passes directly to step 330.
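A small Python sketch of the damage allowance follows, assuming the counter is kept in pixels so that each missed feeler probe contributes its spacing to the count, which reproduces the 10x10, 15% example above.

    import math

    def damage_limit(percent, pixels_per_cell, cells_per_edge):
        # e.g. damage_limit(0.15, 5, 10) -> 8  (7.5 pixels, rounded up)
        return math.ceil(percent * pixels_per_cell * cells_per_edge)

    def edge_rejected(consecutive_misses, probe_spacing_px, limit):
        # Assumed: each missed probe adds its pixel spacing to the damage count.
        return consecutive_misses * probe_spacing_px > limit

    # Example: with one-cell (5 pixel) probe spacing and a limit of 8,
    # edge_rejected(1, 5, 8) -> False, edge_rejected(2, 5, 8) -> True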
In an alternate embodiment, the routines for applying feeler probes 30 test the starting point of each feeler probe range relative to the calculated slope to determine if the feeler probes 30 are following an edge that is not of the symbol. For example, as illustrated in FIG. 5C, a mark 6 intersecting a valid symbol 1 will cause feeler probes 30 to follow the edge of mark 6, because the successive feeler probes 30 are horizontally shifted relative to the last transition, and lose the symbol edge. Hence, by testing the starting point relative to the original slope, the feeler probe 30 range can be corrected to find the correct edge. For example, a predicted location may be obtained as follows. The initial slope based on the second transitions is obtained. Because the y step in incrementing feeler probes 30 is constant, one cell dimension can be subtracted from the last known edge y coordinate. The x coordinate at the predicted edge is the initial deviation value times the number of walks so far (i.e., the number of feeler probes). Then, the next feeler probe range is calculated based on the predicted edge, rather than the actual last edge detected. In this manner, clutter mark 6 can be avoided and the correct corner A located. In yet another embodiment, the feeler probes 30 horizontal displacement can be based on the determined deviation in the x and y axes relative to the first transition detected by main probe 10 and the number of vertical shifts (or the corresponding dimension). In this embodiment, the range deviation test would likely not be needed.
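A sketch of this predicted-edge correction follows, treating the deviation as an x-step per walk measured from the first transition, which is an assumed reading of the description above.

    def predicted_edge(first_x, last_y, cell, deviation_per_walk, walks_so_far):
        pred_y = last_y - cell                                  # constant one-cell step upward
        pred_x = first_x + deviation_per_walk * walks_so_far    # project along the initial slope
        return pred_x, pred_y                                   # anchor the next feeler range here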
With reference to FIGS. 1D and 5, when feeler probe 30' does not detect a transition or the damage counter exceeds the limit, the location of the last transition detected for the upper extreme edge is recalled, and the routine then applies a CORNER SEARCH routine. In the CORNER SEARCH routine, a second plurality of feeler probes 32 are used. Feeler probes 32 are identical to feeler probes 30 except that they are spaced much more closely together, and retrace a part of the edge between the last edge detected and where no edge was detected, to locate more precisely the corner A. As illustrated in FIG. 1D, the CORNER SEARCH involves, at step 331, setting the corner search flag, setting the feeler probe 32 range, direction and starting location to that of the last feeler probe 30 location that detected a transition, and then returning to the step 323 of the WALK_ABOVE_EDGE routine. In this manner, each next feeler probe 32 to be applied is shifted up by the distance D1, now, e.g., one pixel unit, and a transition is searched for by following step 325 as described.
At the next failure to detect a transition, the WALK_ABOVE_EDGE routine tests at step 326 whether border damage acceptance criteria is on. If it is, then the damage counter will still be at its limit at step 329, because the damage counter is only reset when a feeler probe 30 detects a transition, and will pass to step 330. If border damage acceptance criteria is not on, then the routine directly passes to step 330. Because the CORNER SEARCH routine flag is set, the routine then passes to the CORNER LOCKING routine, which is described below.
In one useful embodiment, feeler probes 30 are spaced apart a distance D1 that is the estimated dimension for one visual cell, and the corner searching feeler probes 32 are spaced one pixel unit apart.
Referring to FIG. 1E, the CORNER LOCKING routine is used when deemed appropriate to locate more reliably the corner of the symbol 1. The CORNER LOCKING routine uses a series of feeler probes 35 which may be horizontally or vertically applied, depending on the application of the routine to the validation of the symbol as described below, to locate the coordinates of the symbol corner to be located. Feeler probes 35 typically advance in a different direction and have a different orientation than the aforementioned corner searching feeler probes 32.
For the case of the WALK_ABOVE_EDGE routine, the CORNER LOCKING routine first tests whether the routine is to be used at step 340. If it is not, the routine exits at step 341; otherwise the routine continues. The test at step 340 examines the deviation previously stored for the detected edge. If the slope is a "+" or "0", the CORNER LOCKING routine is not used. If the slope is "-", then the routine advances to set the feeler probe 35 to an appropriate starting point, such as the last location of a probe 32 to locate a transition at step 342. More preferably, the starting point is backed up from the location of that feeler probe 32 by approximately one cell dimension, in a direction away from the expected location of the corner. Of course, corner locking may be used every time. In setting the feeler probe 35, the decision is made whether the probe is to be horizontal or vertical. In the case of the WALK_ABOVE_EDGE routine, a vertical probe 35 is used, the direction is downward, and successive probes 35 are successively shifted by one pixel unit toward the same edge of the bounding box from which probe 10 advanced.
CORNER LOCKING is applied to account for the expected shape of the corner presented to the probes 32, and whether the failure to detect an edge with feeler probe 32 corresponds to the outermost corner point of the real corner. With reference to FIG. 5A, it is illustrated that the feeler probe 32' that fails to detect a transition does not recognize the actual corner A, because there is, for example, at least one feeler probe 32 that recognizes a transition in its range which is past the true corner A (and is actually on a different side of the symbol). However, by use of corner locking feeler probes 35, oriented perpendicular to feeler probes 32, the feeler probe 35' that does fail to detect an edge locates most accurately the true corner A. In comparison, as illustrated in FIG. 5B, for a "+" deviation, when feeler probe 32' does miss an edge, it finds the true corner A within the resolution of one pixel, and therefore there is no need to use the CORNER LOCKING routine. Hence, for "+" deviations, in the WALK_ABOVE_EDGE routine, as illustrated in FIG. 5B, no corner locking is needed and the routine is not used, whereas it is used for negative deviations. It is understood that actual values of slopes could be used in place of the "+", "-", and "0" deviation ratings, such that a first range of slopes corresponds to a "+" deviation, and a second range of slopes is applied for "-" deviations. A "0" deviation may be in either category, preferably in the category that performs the corner locking routine so that fuzzy, damaged, and otherwise not well defined corners can be more accurately located. What is important is that the routine recognize when the failure to detect an edge is likely to be because of a real corner, rather than a poor quality or damaged corner, and to find the location most closely corresponding to the real corner location.

In further operation, at step 340, the CORNER LOCKING routine uses the saved location of the last corner searching feeler probe 32 of the WALK_ABOVE_EDGE routine as the starting point. The first feeler probe 35 is selected to be one pixel unit from the saved location coordinate (in this case, where probe 32' failed to detect an edge), and optionally is backed up half a cell from there. The probe 35 range is checked at step 347 to determine whether it is in or out of the bounding box. If it is in, an edge transition is then searched at step 350 in the same manner already described. If an edge is found at step 350, then the next feeler probe 35 is selected at step 354, the range is tested at step 347, and feeler probe 35 is applied at step 350. If the range is not in the bounding box, then the routine passes to step 358. This sequence of successive feeler probes 35 continues until no edge is found. When an edge is not found, then the location of the last edge transition detected by a feeler probe 35 is saved at step 358 and used as the corner location A. In the CORNER LOCKING routine, the determined slope is respected for the starting point of each range of the feeler probes 35.
Although illustrated in FIG. 1E as separate steps 342 and 354, these steps could be consolidated in step 342, similar to the step 323 described in connection with FIG. 1D.
Thus, the WALK_ABOVE_EDGE routine provides in the first instance a coarse finding of a corner A, and once the general corner location is found, the CORNER SEARCHING routine, together with the CORNER LOCKING routine when appropriate, provides a more precise location of corner A. As the inventor has realized, it is difficult in many instances to print corners that are substantially square. Accordingly, it is desirable to identify a virtual corner location, i.e., where the corner of a symbol would be if the symbol were well-printed. Hence, in one embodiment, the corner searching feeler probes 32 and corner locking feeler probes 35 are used to acquire the edge transitions at the corner. These transitions are then examined for the degree of "squareness", i.e., how straight the two sides of the corner are. Based on the edges detected, a virtual corner then may be identified by determining two straight sides at the corner under investigation, projecting those lines to intersect, and determining the coordinates of intersection. By this analysis, a virtual corner location can be used in the following routines to extract more accurately data for validating the symbol, and extracting data from a valid symbol.
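One way to compute such a virtual corner is to represent each straight side by two of its detected transitions and intersect the two projected lines; a least-squares fit over more transitions would feed the same intersection step. The following Python sketch is an assumption about how the projection could be carried out, not the patent's own code.

    def virtual_corner(p1, p2, p3, p4):
        # Side 1 passes through p1 and p2, side 2 through p3 and p4; all (x, y).
        x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if denom == 0:
            return None                       # parallel sides: no virtual corner
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    # Example: a vertical side through (10, 50)-(10, 90) and a horizontal side
    # through (14, 95)-(60, 95) give a virtual corner at (10.0, 95.0).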
Referring to FIG. 1A, following the WALK_ABOVE_EDGE routine, a WALK_BELOW_EDGE routine is used at step 156 to identify the location of corner B at the lower extreme of the first edge identified by main probe 10, deviation probes 20, and feeler probes 30 and 32 (and 35). The action of the WALK_BELOW_EDGE routine, which is illustrated in FIGS. 2A and 5, is similar to the WALK_ABOVE_EDGE routine and therefore is not discussed in detail. In summary, and as is set forth in detail in the microfiche appendix, this routine uses a second series of feeler probes 30 that are vertically displaced a distance D1 apart and horizontally displaced relative to the last transition found, to walk down the edge, searching for edge transitions, until no transition is found in the probe 30 range, subject to damage acceptance criteria. The distance D1 is initially set to be the same as in the WALK_ABOVE_EDGE routine, or about one cell diameter. Once an edge transition is not found, the WALK_BELOW_EDGE routine then applies the same CORNER SEARCHING routine, and a CORNER LOCKING routine when appropriate, to locate the initial corner B coordinates more precisely. Referring to FIG. 1E, in the case of the WALK_BELOW_EDGE routine, the CORNER LOCKING routine test at step 340 examines whether the deviation is "+", and if so applies the vertical corner locking routine. In such case, feeler probes 35 are used in the same manner as was described above in connection with the WALK_ABOVE_EDGE routine, based on the last corner searching feeler probe 32 to locate an edge (optionally backed up one cell), to advance back in the direction of main probe 10 to locate the initial corner B coordinates, except that the feeler probes 35 are in the upward direction.
It is to be understood that these various probe routines preferably repeatedly execute the same instruction steps, thereby using one "probe", the parameters of which are subject to differences in the constants used to change the probing, such as pixel starting coordinates, direction of travel, size of distance D1 changes when selecting the next probe starting location, and the range of examination of the probe. By using the same instructions repeatedly, and by changing the probe control parameters, programming efficiency and symbol processing speed are greatly enhanced and memory space requirements are minimized. Of course, separate routines also could be used.
After corners A and B are located, a test is performed at step 158 (FIG. 1A) to determine whether the dimension between corners A and B (herein "side AB") corresponds to a likely edge dimension. If it does not, then the edge is rejected as not corresponding to a symbol 1, and the routine returns to step 147 where, if clutter acceptance is on, at step 148 main probe 10 resumes searching for another first transition. If it does, then the routine advances to test for a second side of the symbol at step 160. The test at step 158 is, in the exemplary embodiment, determining the length X1 of side AB defined by corners A and B, and determining whether the length X1 is greater than one-half of the number of estimated columns times the estimated dimension of the visual cell diameter. In other words, if the side AB dimension X1 is less than one half of the estimated size of the symbol 1, then the routine determines that side AB does not correspond to a valid symbol edge, and returns to step 147 to resume main probe 10 searching. If the side AB dimension X1 is greater than one half the expected size, then the routine will continue and try to validate further a symbol as follows.
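The coarse size test at step 158 can be sketched as follows; the side length is taken here as the Euclidean distance between the two corners, which is an assumption about how X1 is measured.

    import math

    def side_ab_passes(corner_a, corner_b, est_columns, est_cell_px):
        x1 = math.hypot(corner_b[0] - corner_a[0], corner_b[1] - corner_a[1])
        return x1 > 0.5 * est_columns * est_cell_px   # at least half the expected size

    # Example: side_ab_passes((100, 40), (102, 90), est_columns=9, est_cell_px=5) -> True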
Referring to FIGS. 1A, 1F-1 and 1F-2, the routine passes to a FIND SECOND SIDE CORNER routine at step 160 that seeks to locate and validate a second side of symbol 1, namely the second solid perimeter of a DATA MATRIX symbol as illustrated in FIG. 2A. For convenience, the routine initially assumes that the second side will be at the bottom of the first edge and, hence, begins to probe along the bottom edge. This occurs at step 410, where feeler probes 30 are initialized. At step 420, the distance D1 between feeler probes 30 and the range for the feeler probes 30 are adjusted to the values based on slope, cell diameter, and last edge found (in this case, the bottom edge corner). The direction of probing is thus changed to probe in an upward vertical direction. The probe range is checked at step 415 to be sure it is in the bounding box. If it is, the WINDOW routine is used to search for a transition at step 430 in the same manner already described. Similarly, as already described, when the first feeler probe 30' fails to detect an edge in its range, the routine is tested for border damage acceptance at step 432 as previously described. If border damage acceptance is not on, or if the damage counter at step 433 has increased to its limit at step 434, then the routine begins the CORNER SEARCH routine at steps 436 and 437 and possibly the CORNER LOCKING routine, at step 450, with feeler probes 32 and 35 spaced a pixel unit apart (as set by the search flag at step 437), to locate more precisely the corner C. In this case, if the CORNER LOCKING routine was used to locate corner A, then it will likely not be used to locate corner C. As illustrated in FIG. 5, the feeler probes 32 will locate the corner C accurately (absent damage).
In shifting the successive feeler probes 30 and 32 along the bottom edge (see step 420) , the slope is assumed to be ninety degrees rotated from the slope of the side AB, and hence the feeler probes 30 and 32 are also appropriately vertically shifted relative to the last detected transition, based on the deviation, as they are horizontally spaced relative to initial corner B.
Referring to FIG. 1F-2, one embodiment of the invention is as follows. After corner C is located at step 451, then at step 460 the dimensions of side AB and side BC are evaluated against certain predetermined criteria for the symbol 1 to be located. This is illustrated in FIG. 1A at step 165. In the case of the DATA MATRIX symbol, where the two solid line sides are of the same length, the test used is to determine if the distance between corners A and B (herein side AB) divided by the distance between corners B and C (herein side BC) is less than 3.0 (3000 in high precision integers), more preferably between 0.4 and 2.6 (400 and 2600 in high precision integers). If the condition is satisfied, and the routine has not yet probed the top edge (step 464), then the routine assumes it is a valid edge and will continue to process the symbol. If the condition is not satisfied, then the routine will check to see if the top side has been probed at step 461. If it has not, then the second side search routine will switch the horizontal walk of feeler probes 30 for the second edge to the top edge at step 463, beginning at corner A and moving in the same direction as main probe 10. The same probing control parameters are used as in the prior search along the bottom edge, except that the starting locations are different and the direction of probing is now downward. In addition, the corner searching flag is reset so that the distance D1 between feeler probes 30 is again one cell dimension, and the border damage acceptance control counter is cleared, if it is used.
If the top edge walk finds a corner C, using feeler probes 30, corner feeler probes 32, and corner locking feeler probes 35 when appropriate, then the test of the dimensions of sides AB and BC is repeated at step 460. If the condition is satisfied, then corner C is located. If the dimension condition is not satisfied, then the routine aborts at step 462 and returns to step 147 of the main routine, treating the edge located by main probe 10 as invalid data. In addition, if necessary, the routine will redefine the corners A, B, and C at step 465 so that corner B is at the intersection of the two solid sides, and corners A and C are at the extremes, with corner A defined as the top left corner of the matrix. This definition is applied regardless of the actual orientation of the matrix in memory or in the field of view, so that data can be extracted from the visual cells in an efficient order.
With reference to FIG. 2A, the foregoing edge locating routine is graphically illustrated to identify clutter 3 relatively quickly. In this regard, main probe 10 and deviation probes 20 find transitions corresponding to the left edge of clutter 3. Feeler probes 30, 32, and 35 next identify two corners A1 and B1 corresponding to a possible edge side A1-B1, in which the distance X1 of side A1-B1 passes the first size test of one-half the expected matrix size. If it did not, the transitions would be rejected as clutter 3. Because it does, feeler probes 30, 32 (and 35) identify corner C1. However, the subsequent distance comparison of side A1-B1 and side B1-C1 for the first edge and bottom edge fails the ratio test. Feeler probes 30, 32 and 35 then locate corner D1 on the top edge. The dimension test of side A1-D1 and side A1-B1 also fails the ratio test. Consequently, the clutter 3 is identified as such and disregarded. Main probe 10 then resumes at the shifted coordinates (or at the right margin of the boundary box in the reverse direction if clutter acceptance is not on), and eventually locates symbol 1 (FIG. 2A) on the reverse direction probe.
In one alternate embodiment, the boundary of corners A1, B1, C1, and D1 is stored in memory as corresponding to identified clutter 3 so that any transition corresponding to an edge detected within that boundary can be subsequently ignored; main probe 10 can simply pass through to the other side of the boundary. Thus, while there is some computational time involved in testing whether an edge transition located by main probe 10 falls within an already defined clutter boundary (a test that can be coupled with testing whether the range is within the bounding box), there should be an overall speed advantage in passing over identified clutter as main probe 10 traverses bounding box 2 in either direction. Care must be taken, however, not to block out part of a valid symbol that could not be validated, and was thus considered to be clutter, so that a different main probe can still locate the edge and that symbol can be validated.
In the case of symbol 1 shown in FIG. 2A, the first edge struck by main probe 10, defined by the corners A2 and B2, will fail the deviation probe 20 test because the top and bottom deviation probes 20, shown in phantom lines, do not both find an edge transition in the range. Thus, probe 10 will resume at the shifted coordinates and, if clutter acceptance is on, continue advancing and eventually hit the inside of the edge defined by corners B2 and C2. Any visual cells of data hit by main probe 10 in the interim will likely fail the expected X1 dimension size test and be quickly disregarded, or else will fail the side AB to side BC ratio test and also be disregarded. In this case, deviation probes 20 will likely locate respective transitions on the inside edge, but the edge validation routine will eventually fail because the feeler probes 30 will not find a valid second side at either the top or bottom of the located edge. This is because the feeler probing for the second side continues in the same direction as main probe 10. As is shown in FIG. 2A, the actual second side edges extend in the opposite direction to the feeler probes in this situation. Turning clutter acceptance off at step 147 will skip these processing steps. Accordingly, main probe 10 will again resume and eventually reach the right margin of bounding box 2, reverse direction, and continue probing back along the same horizontal line of pixels, to seek another transition. In this case, main probe 10 will find a transition on the edge defined by corners B2 and C2, the deviation probes 20 will locate respective second transitions and determine the slope and deviation, and the corners A and B (the located corners "A" and "B" are labeled in FIG. 2A as corners C2 and B2 respectively) are found using the WALK_ABOVE_EDGE and WALK_BELOW_EDGE routines. Then, the routine will attempt to locate the second edge by feeler probing along the top in the same direction as probe 10, i.e., towards the left margin of the bounding box. This attempt will fail and the routine will change to probe along the bottom. Consequently, the bottom edge probes find an initial corner C (illustrated in FIG. 2A as corner A2). The dimensions of the located sides AB and BC (i.e., sides C2-B2 and B2-A2 as illustrated in FIG. 2A) are tested against the ratio condition. In this example, the test condition is satisfied.
In the following preferred routine, the further processing of the symbol involves matrix validation (step 165, FIG. 1A) and is based on the locations of corners A, B, and C being in a predetermined relationship, such that corner A is used as the starting point. The matrix symbol 1 is at least initially defined relative to corner A, with corner B being vertically aligned below corner A, and corner C being horizontally aligned with corner B, such that the sides AB and BC form an angle. In the case where main probe 10 is advancing left to right and locates the side BC first, or where main probe 10 is advancing right to left and locates the side AB first, the located corners A, B, and C do not correspond to the preferred orientation of the actual corners of the symbol 1 for data extraction, as the symbol is located in the boundary box of the field of view. Therefore, the preferred routine takes the located corners A, B, and C (e.g., corresponding to labeled corners C2, B2 and A2 in FIG. 2A respectively) and internally redefines them as illustrated in FIG. 2A and step 465 of FIG. 1F-2. This redefinition does not, however, involve any rotation or shifting of information in memory, but rather redefines the coordinate system for evaluating the symbol 1 data.
Referring to FIG. 2B, in the event that main probe 10 does not locate any edge associated with a symbol 1 after traversing right to left and left to right across the field of view, e.g., after processing clutter 3 or detecting a portion of a symbol 1 at a location that cannot be validated, then the routine will shift main probe 10 location by distance DI. At this point in the routine, distance DI is preferably selected to allow main probe 10 to scan across the field of view to locate any symbol 1 that is present in the field.
FIG. 2B illustrates a sequence of five main probes 10, numbered 1-5 at the left edge of bounding box 2, in which the first four left to right and right to left traverses are unable to validate an edge of a symbol 1. Preferably, the vertical starting location of main probe 10 is controlled by a toggle that increases the distance from the horizontal center of bounding box 2 each time main probe 10 returns to the starting margin, defined herein as the left margin.
More specifically, a summing toggle is used, such that the next vertical probe level is above or below the center line by a multiple of the number of times probe 10 has traversed the field of view, with even numbers being shifted above the center line and odd numbers being shifted below the center line. The symbol is located on the fifth main probe, designated 10-5, and its associated deviation probes 20-5. Although the upper deviation probe 20-5 does not find the correct edge, it finds a transition that is within the linearity limit and permits validating the symbol 1.
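The summing toggle might be coded along the following lines; the vertical step size and the treatment of the first pass are assumptions of this sketch, since the text specifies only that the offset from the center line grows with the pass count, with even-numbered passes above the center line and odd-numbered passes below it.

/* y coordinate at which the main probe starts its n-th traverse.
 * Image rows grow downward, so "above" the center line means a
 * smaller y value. */
long main_probe_y(long center_y, long step, int pass)
{
    long offset;
    if (pass == 1)
        return center_y;                   /* first pass on the center line */
    offset = (long)(pass / 2) * step;      /* grows on each return          */
    return (pass % 2 == 0) ? center_y - offset    /* even: above center     */
                           : center_y + offset;   /* odd: below center      */
}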
Regarding the third probe 10-3, it is shown as the left to right probe 10-3 and the right to left probe 10-3'. The lower deviation probes 20-3 and 20-3' actually overlap, but are shown separated for ease of comprehension. In the case of the DATA MATRIX symbol and related symbols that have uniquely identifiable borders that include two solid lines intersecting at a corner, only horizontal probing is needed for main probe 10. This is because main probe 10 will locate one of the two solid border sides during either a right to left or left to right probe.
In an alternate embodiment of the present invention, the single main probe 10 travels only in the left to right direction (or the right to left direction, but not both), and the deviation probes 20 are configured to probe horizontally and vertically (or vice versa) in sequence, unless one pair of deviation probes 20 finds two corresponding transitions in the ranges. In this alternative, the feeler probing will assume the initial direction of travel of the deviation probes that locate the first edge, in order to define the edge. In other words, as illustrated in FIG. 4C, when main probe 10 finds a first transition, the first pair of deviation probes 20 are parallel to main probe 10 and centered on the x coordinate of the first transition. They do not each detect a second transition. Consequently, an alternate pair of deviation probes 21, which are perpendicular to main probe 10 and centered on the y coordinate of the first transition, are used. These probes each detect a second transition.
Accordingly, the symbol can then be validated and data extracted as discussed herein.
As a further alternative, in this embodiment the deviation probes 20 and 21 may probe in both directions within the deviation probe range, to increase the likelihood of finding two transitions on the border edge as quickly as possible. Reversing the direction of the deviation probes avoids rejecting a valid symbol edge when the deviation probe starting location was inside the symbol, rather than outside it, and thus did not detect the so-called "required color transition".
In the case of a symbol that has a uniquely identifiable perimeter on two opposing sides, or on one side only, it may be necessary to use both horizontally and vertically directed main probes 10. This provides for locating the one side using a vertical probe (not shown) if the appearance of the symbol in the bounding box is such that a horizontal main probe 10, or more specifically the associated deviation probes 20, cannot detect the transitions corresponding to the desired edge. In such a case, main probe 10 may cycle between horizontal and vertical probing, or may complete all of the horizontal probing (after toggling through the bounding box) before starting any vertical probing, or vice versa, or some combination thereof.
It also should be understood that for symbols of this type, the X1 dimension and the ratio tests may be completely different, for example, to identify initially the start bar or bars of a one dimensional or two dimensional bar code or some other "line" of a symbology, e.g., the known CODE ONE symbology. Concerning bar code symbols, after locating a start bar sequence, the main probe may advance on a line perpendicular to the slope of the start bar, to locate the stop bar(s). Then, having found the boundary, the data may be extracted. For a one dimensional bar code, the data can be extracted by processing the line of pixels probed by main probe 10 between the start and stop bars. For two dimensional bar codes, since each bar code symbol has a root cell, the data extraction technique can be used to decode the bar code by locating and sampling each root cell value and evaluating the bit stream as relative distances in a known manner. From evaluating the data, the type of bar code can be determined and its data extracted. With reference to other symbols, such as CODE ONE, once the key line edge parameter is identified, by use of main probe 10, deviation probes 20 and feeler probes 30, 32 (and 35), main probe 10 may be used again to probe in one or more directions, on one or more lines perpendicular and/or parallel to the identified key line edge, to define the boundary of the symbol based on either the user provided information or information acquired by probing. This will permit extracting the data into a bit stream for analysis of the data of the symbol.

As previously indicated, border damage acceptance may be used to modify any or all of the feeler probe 30 edge detection routines so that the failure to detect an edge does not automatically trigger the corner searching and locking routines. In an alternate embodiment, the failure to recognize a transition is used to set an edge transition failure flag. A subsequent feeler probe 30 is then used, shifted the distance DI to the next probe location with its range adjusted for the calculated deviation and slope, to determine whether or not an edge can be located there. If an edge is not detected after a set number of misses, which may be sequential or cumulative, then the routine will return to the corner searching, based on the last feeler probe 30 to detect an edge transition. If, instead, feeler probes 30 continue to locate transitions, then the one failure to recognize an edge will be disregarded and the flag will be reset. This modified routine is implemented to tolerate symbol edges that include irregularly printed or damaged edges, which imperfection might otherwise be misinterpreted as the end of an edge when it is not.

In yet another alternative, the aforementioned damage counter is replaced with a counter that is incremented by a first value when no edge is detected, and decremented by a second value when an edge is detected, so that a selected number x of failures to detect an edge within a predetermined number y of successive feeler probes causes the counter to indicate the end of an edge. In this embodiment, the routine determines that the transitions detected and not detected do not correspond to a reliable solid edge, and therefore sets the edge length to the last reliable transition location. In this case, the best data available may be used and tested in the ratio test to accept or reject the edge detections.
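A minimal sketch of the counter-based end-of-edge test just described is set out below; the particular increment, decrement and limit values are illustrative assumptions, chosen only to show the hit and miss bookkeeping.

/* Misses raise the count, hits lower it, and crossing the limit marks
 * the end of the edge. */
typedef struct {
    int count;
    int inc;     /* added on a missed edge transition          */
    int dec;     /* subtracted on a detected edge transition   */
    int limit;   /* count at which the edge is declared ended  */
} DamageCounter;

/* Returns 1 when the accumulated misses indicate the end of the edge. */
int damage_update(DamageCounter *dc, int edge_found)
{
    if (edge_found) {
        dc->count -= dc->dec;
        if (dc->count < 0)
            dc->count = 0;               /* never credit below zero */
    } else {
        dc->count += dc->inc;
    }
    return dc->count >= dc->limit;
}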
Referring to FIG. 1A, after the two solid sides of the DATA MATRIX perimeter are located, the routine proceeds to step 165 where the matrix symbol is to be validated. In the preferred embodiment, the invention is used in an application where the DATA MATRIX symbols to be read all have the same, known number of rows and columns. In an alternate embodiment (not shown), the routine may be modified to determine the number of rows and columns automatically, as follows. First, the DATA MATRIX symbol is defined with one solid edge having the thickness of each column of visual cells, and the other solid edge having the thickness of each row of visual cells. In a square DATA MATRIX symbol, these thicknesses are ideally the same. Second, the DATA MATRIX symbol is defined with the first row and column space inside corner B as the opposite binary value of the solid perimeter lines, e.g., white when the solid perimeter edges are black. Thus, the routine could easily be adapted to use horizontal and vertical feeler probes 30 to scan the solid sides near corner B, scanning from the clear space outside the perimeter into the first data cell inside corner B, to determine the thickness of each column and each row. Then, knowing the thickness of the row and column dimensions, and knowing the length of each side from the measures of sides AB and BC, the number of rows and columns of the matrix can be determined. From this determined data, the center of the visual cell AA at initial corner A can be determined.
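By way of illustration, the row and column counts could be recovered from the measured border thicknesses roughly as follows; the rounding and the names are assumptions of this sketch.

/* Number of cells along one side, given the side length and the
 * measured thickness of the solid border (one cell wide), in pixels. */
int count_cells(long side_len, long border_thickness)
{
    if (border_thickness <= 0)
        return 0;
    return (int)((side_len + border_thickness / 2) / border_thickness);
}

/* e.g., rows = count_cells(length_AB, row_thickness);
 *       cols = count_cells(length_BC, column_thickness); */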
The actual number of rows and columns is needed for an optimal application of the routine. It does not matter whether the number is provided by the user or determined from the symbol. The number of pixels in each visual cell is less important, and the estimate of this value has a wide tolerance for error. The estimate must only be close enough to enable the routine to find corners A, B, and C. In this regard, if the routine repeatedly fails to locate symbols it should find, then the user is preferably prompted to change the estimate to facilitate symbol recognition.
Referring to FIG. 1G, the symbol validation of the DATA MATRIX occurs as follows. At step 510, new deviation values DX and DY are calculated, based on the dimensions of initial corner A to initial corner B (side AB) and initial corner B to initial corner C (side BC) and the number of rows and columns in the symbol. The deviations DX and DY correspond to the deviation from a given point of one visual cell to the corresponding given point of the adjacent visual cell. These new deviation values DX and DY are regarded as more accurate than the deviation and/or slope values provided by deviation probes 20 because the AB segment is longer than the span between the transitions located by probes 20 (and the corners A and B have been corner-locked, if necessary, to provide more accurate corner information). Further, the information is based on measures in two dimensions for the actual orientation of the symbol in the field of view, rather than one dimension relative to a defined horizontal axis, and is effectively averaged over the number of cells. In addition, using the corners A, B, and C to determine the deviations DX and DY also inherently corrects for any distortion of the symbol in the field of view with respect to stretching of the symbol and to any pitch, yaw or roll relative to the normal image plane of the scanning device. This avoids having to perform separate steps of measuring the distortion directly and correcting for any distortion that is found. At step 520, the coordinates of the center point AA of the visual cell containing initial corner A are calculated based on the determined dimensions of sides AB and BC, the number of rows and columns, and the Pythagorean theorem. The number of rows and columns and the side dimensions provide the height and width of each cell, which enables the system to read symbols that have been stretched in one or two dimensions. See, e.g., FIG. 2C, which shows a symbol stretched in one direction along the x axis. From this center point AA, and from the deviations DX and DY, the center points of each visual cell of the matrix can be calculated. Preferably, trigonometric tables are used to calculate the center point AA, to minimize the computation time required to find the cell center AA.
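The following sketch shows one way per-cell step vectors and the center AA might be derived from the located corners; it treats DX as a column-to-column step vector and DY as a row-to-row step vector, each carried in x1000 fixed point, which is an interpretation adopted for this sketch rather than the actual bookkeeping of the routine.

typedef struct { long x, y; } Vec;

typedef struct {
    Vec col_step;    /* one column over, along B toward C (x1000)  */
    Vec row_step;    /* one row down, along A toward B (x1000)     */
    Vec center_aa;   /* center of the cell at corner A (x1000)     */
} Grid;

Grid make_grid(Vec a, Vec b, Vec c, int rows, int cols)
{
    Grid g;
    g.col_step.x = (c.x - b.x) * 1000L / cols;
    g.col_step.y = (c.y - b.y) * 1000L / cols;
    g.row_step.x = (b.x - a.x) * 1000L / rows;
    g.row_step.y = (b.y - a.y) * 1000L / rows;
    /* half a cell inward from corner A in both directions */
    g.center_aa.x = a.x * 1000L + g.col_step.x / 2 + g.row_step.x / 2;
    g.center_aa.y = a.y * 1000L + g.col_step.y / 2 + g.row_step.y / 2;
    return g;
}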
At step 530, the value at the center of each cell of the predicted side AD of alternating light and dark areas (a "dashed edge") is sampled, one cell at a time, by determining the deviation of the cell center of that visual cell from center AA and sampling the color value of the visual cell. In an exemplary embodiment, the value is based on the measure of a single pixel at the cell center. However, the color value also could be based on a sum, an average, or a voting routine over a number of pixels in the cell surrounding the center, which may or may not include the calculated center pixel. The number of pixels to be used in the measure is a matter of design choice, and may be based on the manner in which the symbol is printed or marked on the article, object or substrate scanned. In this way, errors due to imperfect printing or marking techniques, which are likely to include "pinholes", scratches, or other unmarked areas that might be mistaken for the wrong data value, can be minimized.
In the preferred embodiment, the deviations are determined relative to center AA, rather than from the center of the adjacent cell, for ease of computing. Thus, to find the center point of corner D, the routine multiplies the deviation values DX and DY by the number of rows and columns respectively, between the cell of corner D and the cell containing center point AA. This same process of multiplying the deviations DX and DY by the number of intervening rows and columns is used for calculating the center of all of the other visual cells of the symbol. The routine is adopted because it provides good results with minimal computational time requirements.
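Continuing the previous sketch, and reusing its Vec and Grid types, the center of any cell can then be found from center AA and whole-number multiples of the per-cell steps, for example:

/* Pixel coordinates of the center of the cell at (row, col), counted
 * from the cell containing center AA; the x1000 scale is removed only
 * at the end. */
Vec cell_center(const Grid *g, int row, int col)
{
    Vec p;
    p.x = (g->center_aa.x + (long)col * g->col_step.x
                          + (long)row * g->row_step.x) / 1000;
    p.y = (g->center_aa.y + (long)col * g->col_step.y
                          + (long)row * g->row_step.y) / 1000;
    return p;
}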
As the centers of the cells of the dashed edge AD are determined, the values at those center points are determined. The value acquisition is discussed below in connection with data extraction. At step 540, the routine tests whether those values are within 80% of a 10101 etc. pattern, which corresponds to a valid dashed edge. The test occurs by evaluating the values of adjacent cells, looking for black to white and white to black transitions. Each transition that occurs increments a running total by the percentage that a single transition represents of the number of transitions expected in a dashed side. Thus, for a dashed side having ten cells of data (not counting the column 0 containing the solid line of the perimeter), each black to white and white to black transition is 10% of the total, and the transitions are added up as the data is evaluated. Thus, if there are ten transitions, the sum at the end of the row will be 100%. If the sum is more than 80%, then the dashed edge is declared valid. If it is less than 80%, then the dashed edge is declared not valid. In the case that the dashed edge is not valid, the routine returns to step 156, where a horizontal walk step may be repeated (unless both the top and bottom horizontal walks have already occurred for this symbol) to attempt to validate a different edge.
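A minimal sketch of the 80% dashed-edge test follows; including the column 0 border cell as the starting sample, so that a side with ten data cells yields ten expected transitions, is an assumption based on the ten-cell example above.

/* values[0] is the sample of the solid border cell in column 0 and
 * values[1..ndata] are the data cells of the suspected dashed side,
 * nonzero meaning dark; each alternation between neighbours counts as
 * one of the expected transitions. */
int dashed_edge_valid(const int *values, int ndata)
{
    int i, transitions = 0;
    if (ndata <= 0)
        return 0;
    for (i = 1; i <= ndata; i++)
        if ((values[i] != 0) != (values[i - 1] != 0))
            transitions++;            /* black-to-white or white-to-black */
    return transitions * 100 >= ndata * 80;   /* at least 80% present */
}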
Referring to FIG. 2C, the return to search for a second solid edge is explained. It may be the case that the spacing of the feeler probes 30 during the prior step 156 was such that a dashed edge BC was erroneously detected as a solid edge, because the feeler probes 30 detected sufficient edge transitions to satisfy the ratio test. This may have occurred as a result of, for example, some distortion of the symbol, i.e., stretching in a dimension, a misprinting of the symbol, e.g., an ink spot 5, tolerated damage, and the like. Thus, because the attempt to validate the assumed side AD as a dashed edge would find no transitions (or at least not enough transitions to satisfy the 80% test at step 540), this portion of the validation routine would fail. By causing the routine to look again for another solid edge at the top, namely side AD, the routine saves the computational time already spent validating the one good side, AB, and takes advantage of having identified a second parameter indicative of a valid symbol, namely side BC (even though that parameter was mistaken for a solid side). As a result, the routine then looks for other data, and if it can confirm that side AD (FIG. 2C) is a valid solid edge, it will continue to validate the rest of the symbol and extract data. Hence, it is demonstrated that the routines of the invention conserve as much as possible of the investment made in validating a symbol 1, such that each successive level of validation consumes a greater quantum of computational time and effort and is more likely to find and validate a symbol. By this process, clearly invalid symbols are quickly discarded, and the more valid the symbol appears to be, the more time that is spent to validate it. The 80% value used in step 540 corresponds to the data integrity standard (also known as border damage acceptance) of the DATA MATRIX symbol, which tolerates a loss of 20% of the symbol with successful readability. The test threshold could be made higher or lower depending on the environment in which the symbol is used, the level of error correction in the symbol, and the desired reliability of decoding the symbol.
If the side AD satisfies the test, then at step 545 the routine checks whether or not both sides AD and DC have been tested. This test is inserted at this point to conserve computation time and to use the same instruction steps more efficiently for testing both sides AD and DC. If side DC has not been tested, then the routine passes to step 550, where the values of the center points of side DC are determined, based on the multiples of the deviations of those center points from center point AA as described. If the values of side DC also satisfy the threshold test at step 540, then the routine has validated the symbol and extracts the data of the symbol. If the values of side DC do not pass the test, then again the routine will return to step 156 and attempt to validate another solid side of the symbol, if possible. If not possible, then the routine will restart main probe 10 to search for another edge to test.
After validating the symbol, the routine then proceeds to step 180 (FIG. 1A) to extract the data. The data extraction proceeds by calculating the center point of each visual cell of the matrix, within the defined rows and columns, based on the multiples of the deviations DX and DY, and sampling the value of the cell as already described. As the data is acquired, it is tested to determine its digital value, and the resultant value is then provided as part of a bit stream of data. In the exemplary embodiment described, the digital value is binary, 1 or 0, resulting in a bit stream of 1s and 0s. Also in the exemplary embodiment, the entire DATA MATRIX symbol is converted into the bit stream, including the perimeter, to be compatible with commercial decoding equipment, e.g., Models C-102 and C-302 available from International Data Matrix. Accordingly, the data extraction routine, with reference to FIG. 1H, returns to the cell containing center point AA, and begins sampling each cell, typically moving along a row and then advancing to the next row down, until all of the data is extracted.
Referring to steps 610-685, in examining the value of the center points of the visual cells, the routine assumes that the column 0, row 0 value of center point AA is black (or white when the symbol is a negative) and initializes, at step 610, a variable BLACKave with the value of center point AA (e.g., a determined value on a grey scale of 0-255 pixel colors). Advantageously, this determined value BLACKave is conveniently used as a benchmark to evaluate whether the other visual cells in the matrix are "black=1" or "white=0". It should be understood that the term BLACKave corresponds to the color of the border and thus may be "white" in the case of white on black transitions. The inventor has discovered that in an environment in which the color contrast is good, e.g., greater than approximately 40%, only a one step test is required to reliably separate the black and white cells. In this embodiment, illustrated in FIG. 1H, the test is whether the difference between the cell value and BLACKave is less than 20% of BLACKave.
At step 630, the value of the cell to be extracted is determined. At step 640, the unknown cell value is compared to the known BLACKave. If the difference is less than 20% of the known BLACKave value, then the unknown cell is determined to be a black value (step 642). The routine sends a "black" or "1" bit to the bit stream (step 660) and stops further processing of that data cell. Otherwise, the cell is set white at step 646, and a "white" or "0" bit value is sent to the bit stream at step 660. After determining the cell value, the routine then selects the next cell to sample at step 670. This process involves using the multiples of the row and column and the deviations DX and DY as already described. In application, row counters and column counters may be used to store the location of the cell, such that the counters are incremented after each cell is sampled and reset upon reaching the end of the row or column. A test at step 675 indicates that, after the entire symbol has been processed to extract therefrom the pertinent data, the routine ends.
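A sketch of the one-step test, in integer form, might look as follows; taking the absolute difference and measuring the 20% against BLACKave itself are assumptions consistent with the description above.

/* Returns 1 (border color, "black") when the sampled grey value is
 * within 20% of the running BLACKave, otherwise 0 ("white"). */
int classify_cell(long cell_value, long black_ave)
{
    long diff = cell_value - black_ave;
    if (diff < 0)
        diff = -diff;
    return (diff * 100 < black_ave * 20) ? 1 : 0;
}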
The inventor also has discovered that an alternate two-test decision process can be used to process a large quantity of pixel colors, which substantially minimizes the time required to process the color values, with high reliability. The first step determines whether the difference between the sampled cell value and BLACKave is less than 20%, and if it is, declares that cell black. If it is not, the next step determines whether the difference between the sampled cell value and a "WHITEave" is less than 20%. If it is, then that cell is declared white. If the cell also fails the WHITEave test, the next step is to determine whether the cell value is closer to WHITEave or BLACKave, and to declare the cell the color of the average to which its value is closest. The WHITEave value may be obtained by sampling the "white" cells during the dashed edge validation phase.
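The two-test decision process might be sketched as follows; measuring each 20% tolerance against the respective average is an assumption of this sketch.

/* Returns 1 for black and 0 for white: try the BLACKave test, then the
 * WHITEave test, then fall back to whichever average is nearer. */
int classify_cell2(long v, long black_ave, long white_ave)
{
    long db = v - black_ave, dw = v - white_ave;
    if (db < 0) db = -db;
    if (dw < 0) dw = -dw;
    if (db * 100 < black_ave * 20)
        return 1;                        /* within 20% of BLACKave */
    if (dw * 100 < white_ave * 20)
        return 0;                        /* within 20% of WHITEave */
    return (db <= dw) ? 1 : 0;           /* nearest-average fallback */
}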
In the event that more than one symbol may be present in the field of view, then main probe 10 may be restarted to locate other symbols. Already detected symbols may be blocked from main probe 10 so that main probe 10 will not try to validate a symbol that has already been processed, in the same manner that identified clutter 3 may be blocked, as already described. However, in the case of overlapping symbols, the routine likely will be able to locate both symbols provided that the overlap covers less than 20% (the damage limit) of the underlying symbol.
In processing row after row of symbol data, the value BLACKave may be updated based on the measure of each black cell in side AB, corresponding to column 0. This is indicated at steps 680 and 685. This averaging accounts for some possible variation in the printing of the symbol, and enhances the reliability of the extracted data. In the event that the column 0 cell contains a value that is more than 20% different from the prior value of BLACKave, then the prior value of BLACKave is used without being updated. This provides for not including in the BLACKave value a "white" cell value detected in the border (e.g., a printing problem), which would distort the reliability of the data being extracted. In addition, in initializing the value BLACKave at step 610, if center point AA has a measured value that is not "black", then some other initial value of BLACKave may be used. The initial BLACKave value may be, for example, the value of the black portion of the transition located by main probe 10 (and/or deviation probes 20, or some combination thereof), or an average of the determined black cells in a validated dashed edge (e.g., row 0).
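One possible form of the guarded BLACKave update is sketched below; the equal-weight blend of the old average and the new border sample is an assumption, the text specifying only that the average is updated from the column 0 cells and that samples differing by more than 20% are ignored.

/* Fold a column 0 border sample into the running BLACKave, ignoring
 * samples that differ from it by more than 20% (likely border damage). */
long update_black_ave(long black_ave, long border_sample)
{
    long diff = border_sample - black_ave;
    if (diff < 0)
        diff = -diff;
    if (diff * 100 > black_ave * 20)
        return black_ave;                /* probable printing defect: skip */
    return (black_ave + border_sample) / 2;
}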
In this regard, when testing whether or not the transitions of the dashed edges correspond to the 80% test at step 540 (FIG. 1G) , the values of the pixels at the center of each of the cells in the dash side may be determined as described in connection with the data extraction procedure. In this case, the value BLACKave is typically the initial value for center AA (unless some alternative technique for calculating the value is used) . Then, after all the values are obtained, they are examined for black-white, and white-black transitions as previously described.
In connection with examining the second dashed side of the matrix, the routine may optionally examine the visual cell containing corner D, measure the height and width of that cell, and determine therefrom the cell center DD (FIG. 2A). In this embodiment, the deviations DX and DY, as the evaluation continues along the second dashed side, are based on the corrected center DD rather than on center AA. This will provide improved identification and sampling of the centers of the visual cells in the dashed sides. It also will overcome problems arising from basing center point AA on a damaged corner.
In an alternate and preferred embodiment, which is not separately illustrated, the validation of the two solid sides is conducted in a different manner from the sequence illustrated in FIGS. 1F-2 and 1G. In this embodiment, if at step 460 the ratio test is satisfied, then the routine attempts to validate the first dashed edge using steps 510, 520, 530 and 540 as previously discussed in connection with FIG. 1G. If the side satisfies the 80% limit, then the routine will continue to validate the second dashed edge side. If the second side also passes the 80% limit, then the routine passes to extract data at step 180 (FIG. 1A).
If, however, the first dashed edge side does not pass the 80% valid edge test, then after step 540 the routine sets a toggle flag indicating that both the bottom and top edges will have been tested for a solid side, and returns to the sequence to search for the second side, but now at the top edge. If the top edge result also fails the ratio test, then, because the toggle flag was set, the routine will return to step 147 and resume probing for a first transition with main probe 10. If, instead, the second side passes the ratio test, then the routine proceeds to validate the dashed edges as described. If both dashed edges pass the 80% test, then the data will be extracted. If, however, one of the dashed edges fails, then because the toggle flag is set, the routine will return to step 147. Hence, in this alternate embodiment, the routine will continue further validating data without necessarily returning to the main program as indicated in FIGS. 1F-2 and 1G. Similarly, data extraction also does not require first returning to the main routine.
In accordance with a preferred embodiment of the invention, applicant has discovered that a surprising increase in processing speed can be obtained by the use of high precision integer mathematics in place of floating point mathematics in the processing of image data. In this regard, in performing the high precision integer mathematics, the actual values, integer or fractional, are multiplied by a factor of, for example, 100 or 1000. Hence, a value that is 2.5 becomes 250 or 2500. By this technique, any fractional portions of a resultant value, whether obtained as a product, ratio, sum, or difference, are relatively insignificant digits and can easily be ignored without affecting the accuracy of the calculation. By using high precision integer mathematics, the inventor estimates that there is typically a five to ten times increase in the speed of processing information to identify symbols. This increase is independent of the computer CPU platform, and is limited by the time required to capture and store an image in memory.
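By way of a simple illustration of the technique, the short program below carries a slope and a coordinate scaled by 1000 and discards the scale only when a pixel coordinate is finally needed; the specific numbers are arbitrary.

#include <stdio.h>

int main(void)
{
    long dx = 37, dy = 9;                    /* pixel run and rise         */
    long slope = dy * 1000 / dx;             /* 0.243... is carried as 243 */
    long y0 = 120 * 1000;                    /* y coordinate, scaled       */
    long y  = y0 + 25 * slope;               /* advance 25 pixels in x     */
    printf("slope=%ld  y=%ld\n", slope, y / 1000);  /* back to pixels      */
    return 0;
}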
Referring to FIG. 3, apparatus for implementing the present invention is shown. In a preferred embodiment, the apparatus comprises a personal computer 700 including RAM memory 760, a display device 710, a keyboard 720, and a mouse 730. Computer 700 is operated to execute a sequence of software instruction sets in response to user provided data. Computer 700 also is modified to include a conventional frame grabber board 740 and also may include a video RAM memory 750. Frame grabber board 740 may be, for example, a model Cortex I device, available from Image Nation Corporation, Beaverton, Oregon, USA. Creation of suitable software instructions is within the abilities of a person of ordinary skill in the art. The software is preferably capable of processing a symbol captured at ±90° of rotation in the boundary box. It is further believed to be within the abilities of a person of ordinary skill in the art to process images captured at any rotation (360°).
Coupled to frame grabber board 740 is a video camera 770 which is capable of capturing an image of a field of view containing symbol 1. The image captured by camera 770 is temporarily stored in frame grabber 740 and is then transferred into RAM 760 or video RAM 750. The processing routine may be stored as a sequence of instruction steps in RAM 760 (or ROM or other conventional memory devices; not shown) for processing the image stored in memory. Mouse 730 and keyboard 720 are used by the user to control execution of the instructions and to provide input information for use by the stored instruction sets. Suitable personal computers 700 include devices containing a 486 DX50 microprocessor as the CPU platform, as well as 386 SX25 microprocessors and other compatible and similar devices.

EXAMPLES

Set forth below are two examples illustrating certain advantages of the present invention, under limited testing conditions.

EXAMPLE 1
In this example, a simple DATA MATRIX code symbol containing the numbers 1, 2, and 3 in encoded form (see FIG. 6A) was captured in a clean field of view (i.e., no visible clutter) wherein the symbol filled one-fifth of the field of view. The personal computer used to process the information was a 386 SX 40 CPU platform having a Cortex I frame grabber board and using RAM memory. The aforementioned prior art multiprobe edge detection routine, which is a part of a commercial decoder Model C-102, available from International Data Matrix, Inc., was used and compared to the present invention substantially as set forth in the software appendix. The prior art method was capable of reading this symbol at a rate of 2.7 reads per second. The method in accordance with the present invention, using floating point mathematics, read the same symbol at a rate of 2.9 reads per second. The method in accordance with the present invention, using the high precision integer mathematics, read the same symbol at a rate of 15.1 reads per second. By way of comparison, when the 386 SX 40 CPU platform was replaced with a 486 DX50 CPU platform, the method in accordance with the present invention, using the high precision integer mathematics, read the same symbol at a rate of 24.3 reads per second. On the 486 CPU platform, the time required to locate, decode and extract the data from the symbol was approximately 10 ms; hence, the majority of the total time was the approximately 30 ms required to capture the image of the code.

EXAMPLE 2

In this example, a damaged and cluttered field of view containing a symbol as illustrated in FIG. 6B was presented. In accordance with the aforementioned prior art method (Model C-102), this symbol was read at a rate of 1.4 reads per second. In accordance with the present invention, using the high precision integer mathematics package, the same code was read at a rate of 5.2 reads per second.

In considering the above examples, it should be recognized that the ability of the present invention to read codes at a faster rate depends upon the degree of clutter in the field of view, which is difficult to quantify, and other possible variations in scanning the same symbol for the different operating systems. Thus, it may occur that the prior art system may be faster in reading certain symbols in certain circumstances than the invention, particularly in a clutter-free field of view where the symbol substantially fills the field of view. However, as demonstrated by Examples 1 and 2, the present invention provides improved performance for reading symbols under excellent reading conditions and under poor reading conditions. A primary advantage of the invention, which is not demonstrated by the Examples, is that the present invention allows for extracting data from symbols in circumstances in which the presence of clutter or damage renders the prior art devices unusable because they cannot locate and validate a symbol.
One skilled in the art will appreciate that the present invention can be practiced by other than the described embodiments which are presented for the purposes of illustration and not of limitation.

Claims
1. An apparatus for processing data corresponding to an array of image pixels defining an image in a field of view having a boundary, to identify a symbol having predetermined characteristics including an identifiable edge, comprising: a single main probe having a first range of examination to process a first subset of the data to locate a corresponding first color transition in said first range, said first color transition having a first (x,y) coordinate location, the first range comprising a first line of pixels extending from a first edge of the boundary to a second edge of the boundary; a second probe having a second range of examination to process a second subset of the data to locate a corresponding second color transition in said second range, the second color transition having a second (x,y) coordinate location, the second range comprising a second line of pixels having a length that is less than the first line of pixels and a center that is positioned on one of the x and y coordinates of the first color transition, the second probe operating in response to the main probe locating said first color transition; means for calculating a curve parameter corresponding to a suspected symbol edge, based on the (x,y) coordinates of at least two different located color transitions; and a plurality of third probes, each third probe having a third range of examination to process a third subset of the data to locate a corresponding color transition in said third range, the color transition having an (x,y) coordinate, the third range comprising a third line of pixels having a length that is less than the length of said second line of pixels and a center that is centered relative to the suspected edge based on the calculated curve parameter, wherein said plurality of third probes process different subsets of the data in response to the second probe locating said second color transition to validate whether or not the suspected edge is likely to be a valid symbol edge.
2. The apparatus of claim 1 further comprising: means for controlling said main, second, and plurality of third probes so that the main probe stops at the first color transition location (x,y) coordinates in response to locating said first color transition, and resumes probing for another first color transition at a shifted coordinate location in said first range in response to one of the second probe failing to locate a second color transition in the second range, and the plurality of third probes failing to validate the suspected symbol edge.
3. The apparatus of claim 2 wherein the main probe further comprises a first direction single main probe and a second direction single main probe, the first direction single main probe searching for said first color transition along the first line in one direction between the first and second boundary edges and the second direction single main probe searching for said first color transition along the first line in the reverse direction between the first and second boundary edges.
4. The apparatus of claim 3 further comprising means for selecting a different subset of the data corresponding to a different line of pixels for said first range for said main probe to use in response to said main probe having searched in said one and reverse directions of a prior first range.
5. The apparatus of claim 1 wherein the calculating means calculates a slope of a line based upon the (x,y) coordinates of the first color transition detected by said main probe and the second color transition detected by said second probe, said slope corresponding to a suspected side of a symbol.

6. The apparatus of claim 1 wherein said second probe further comprises a first deviation probe and a second deviation probe, each said first and second deviation probe having said second range of examination to process respectively different subsets of the data to locate respective second color transitions, said different subsets corresponding to two parallel lines of pixels spaced apart, and wherein the calculating means calculates a slope of a line based upon the (x,y) coordinates of the respective color transitions detected by said first and second deviation probes, said slope corresponding to a suspected side of a symbol.
7. The apparatus of claim 6 wherein the calculating means further comprises means for determining whether the (x,y) coordinates of the first color transition detected by said main probe and the respective second color transitions detected by said first and second deviation probes are within a preset range of linearity, and means for calculating a slope if said (x,y) coordinates are within said preset range of linearity.
8. The apparatus of claim 6 wherein the first and second deviation probes process respective subsets of data corresponding to said two parallel lines of pixels that are spaced apart from each other by approximately the length of the second range, wherein said two parallel lines of the deviation probes are approximately equidistant from said one (x,y) coordinate of said first color transition.
9. The apparatus of claim 6 further comprising means for controlling said main probe and said deviation probes, and said plurality of third probes so that the main probe stops at the (x,y) coordinates of a first color transition in response to locating said first color transition, and resumes probing for another first color transition at a shifted (x,y) coordinate location in said first range in response to one of the first deviation probe failing to locate a second color transition in its second range, the second deviation probe failing to locate a second color transition in its second range, and the plurality of third probes failing to validate the suspected symbol edge.

10. The apparatus of claim 1 wherein said first, second, and third probes process the respective ranges using high precision integer mathematics to validate the symbol.

11. The apparatus of claim 1 wherein said second probe and said second line of pixels further comprise: a first pair of deviation probes and a second pair of deviation probes, each said deviation probe having said second range of examination to process respectively different subsets of data to locate respective second color transitions, said different subsets of said first pair of deviation probes corresponding to a first pair of lines of pixels that are in parallel and spaced apart from each other, and said different subsets of said second pair of deviation probes corresponding to a second pair of lines of pixels that are in parallel and spaced apart from each other, the first pair of pixel lines being in a first direction and the second pair of pixel lines being in a direction that is perpendicular to said first direction; and a sequence control, responsive to said first probe, to apply one of said first and second pairs of deviation probes in response to said main probe locating said first transition, and to apply the other of said first and second pairs of deviation probes in response to said one pair not locating respective second color transitions.

12. The apparatus of claim 11 wherein said sequence control further comprises means for applying each deviation probe to process its corresponding subset of data in a first direction and in a second direction along said same corresponding pixel line, and means for determining that no corresponding second color transition is located in response to at least one deviation probe not locating a second color transition in said first and second directions in said second range, and means for applying said other pair of deviation probes, in response to one of said one pair of deviation probes not locating a corresponding second color transition.

13. The apparatus of claim 11 wherein said first pair of pixel lines is in a direction that is one of parallel and perpendicular to the first line of pixels of said main probe, and the second pair of pixel lines is in a direction that is the other of parallel and perpendicular to the first line of pixels of said main probe.

14. The apparatus of claim 1 wherein said plurality of third probes further comprise: a first plurality of feeler probes uniformly spaced apart a first distance along the suspected edge, each feeler probe having said third range of examination to locate a third color transition; and means for defining the boundary of the suspected edge in response to the first plurality of feeler probes detecting and not detecting said respective third color transitions in the respective third ranges.

15. The apparatus of claim 14 further comprising a second plurality of feeler probes uniformly spaced apart a second distance, said second plurality of feeler probes being applied at the defined boundary of said suspected edge, each feeler probe having said third range of examination to locate a third color transition; and means for defining a corner location at said boundary of the suspected edge in response to the second plurality of feeler probes detecting and not detecting said respective third color transitions.

16. The apparatus of claim 15 further comprising a third plurality of feeler probes uniformly spaced apart a second distance, said third plurality of feeler probes being applied at the defined boundary of said suspected edge, each feeler probe having said third range of examination to locate a third color transition, wherein said third range of examination of said third plurality of feeler probes correspond to a plurality of pixel lines that are perpendicular to the plurality of pixel lines corresponding to said second plurality of feeler probes; and means for further defining a corner location at said boundary of the suspected edge in response to the third plurality of feeler probes detecting and not detecting said respective third color transitions.

17. The apparatus of claim 14 further comprising means for applying said first plurality of feeler probes in response to said defined first edge to locate corresponding third color transitions corresponding to said second edge, and means for validating said second edge of said symbol in response to said first plurality of feeler probes locating corresponding third color transitions corresponding to said second side having a second length, wherein the first length and the second length form a ratio within a preset range.

18. The apparatus of claim 17 further comprising means for defining the boundary of said symbol in response to validating said first and second sides, and means for extracting data from said defined symbol boundary.

19. The apparatus of claim 14 wherein the identifiable edge is non linear and said first plurality of feeler probes process respective subsets of data corresponding to respective lines of pixels that are radially disposed toward a common origin.

20. The apparatus of claim 1 wherein the identifiable edge is a start bar of a bar code symbol, said bar code symbol having a stop bar edge, further comprising: means for applying a second single main probe in a fourth range of examination to process a subset of data corresponding to a line of pixels having a direction perpendicular to said identifiable edge to locate a fourth color transition in said fourth range, said applying means being responsive to said plurality of third probes determining that the identifiable edge is valid, said fourth range being of a length sufficient to intersect said start and stop bar edges and said fourth color transition having an (x,y) coordinate suspected of corresponding to said stop bar edge; and means for applying said plurality of third probes to process respective subsets of data to locate corresponding third color transitions corresponding to said suspected stop bar edge to validate whether or not the suspected edge is likely to be a valid stop bar edge.

21. The apparatus of claim 20 further comprising: means for determining the boundary of determined valid start and stop bar edges of said bar code symbol, and means for processing said fourth range between said stop and start bar edges to extract therefrom the bar code data.

22. The apparatus of claim 20 wherein the bar code is a two dimensional bar code having a root data cell size, further comprising: means for determining the boundary of determined valid start and stop bar edges of said bar code symbol, means for processing said boundary to determine a root data cell of said two dimensional bar code symbol, means for processing said image to locate each data cell of said bar code, and means for determining the data of each located data cell to extract therefrom the data of said bar code symbol.

23. The apparatus of claim 1 wherein the identifiable edge is a solid straight line border of a symbol having a rectangular boundary including first and second solid line borders intersecting at a corner, and a matrix array of data interior to said boundary, wherein said first transition located by the main probe is one of said first and second solid line borders, said second transition located by said second probe is said same one solid line border, and said third plurality of probes locate edge transitions in said one solid line border sufficient to validate the solid line border.

24. The apparatus of claim 1 wherein the identifiable edge is a solid straight line border of a symbol having a rectangular boundary including first and second solid line borders intersecting at a corner, and a matrix array of data interior to said boundary, further comprising: means for controlling said main, second, and plurality of third probes so that the main probe stops at the first color transition location (x,y) coordinates in response to locating a first color transition, and resumes probing for another first color transition at a shifted coordinate location in said first range in response to one of the second probe failing to locate a second color transition in the second range, and the plurality of third probes failing to validate the suspected symbol edge as one of said first and second solid line borders.
25. A method for processing data corresponding to an array of image pixels defining an image in a field of view having a boundary, to identify a symbol having predetermined characteristics including an identifiable edge, comprising: a) processing a first subset of the data corresponding to a first line of pixels extending from a first edge of the boundary to a second edge of the boundary to locate a corresponding first color transition in said first line, said first color transition having a first (x,y) coordinates; b) processing a second subset of the data corresponding to a second line of pixels to locate a corresponding second color transition in said second line, the second line having a length that is less than the first line of pixels and a center that is positioned on one of the x and y coordinates of the first color transition, said second color transition having a second (x,y) coordinates, wherein step b) is performed in response to step a) locating said first color transition; c) calculating a curve parameter corresponding to a suspected symbol edge, based on the (x,y) coordinates of at least two different located color transitions in said data; and d) processing a plurality of third subsets of the data corresponding to a respective plurality of third lines of pixels to locate in each said third line a corresponding third color transition, the color transition having an (x,y) coordinates, each third line of pixels having a length that is less than the length of said second line of pixels and a center that is centered relative to the suspected edge based on the calculated curve parameter, wherein step d) processes said respective third data subsets in a preselected sequence in response to step b) locating said second color transition, to validate whether or not the suspected edge is likely to be a valid symbol edge.
26. The method of claim 25 wherein step a) further comprises stopping processing of said first data subset at the first color transition (x,y) coordinates in response to locating said first color transition, and resuming processing of said first data subset for another first color transition at a shifted pixel location in said first line of pixels in response to one of step b) failing to locate a second color transition and step c) failing to validate the suspected symbol edge.
27. The method of claim 26 wherein step a) further comprises processing said first data subset in a first order corresponding to searching the first line of pixels in a first direction between the first and second boundary edges, and processing said first data subset in a second order corresponding to searching the first line of pixels in a second direction between the first and second boundary edges, the second direction being opposite to the first direction, and selecting a different first data subset corresponding to a different first line of pixels extending between the first edge of the boundary and the second edge of the boundary in response to step a) having processed said first data subset in the first and second orders.
28. The method of claim 25 wherein step c) further comprises calculating a slope of a line based upon the (x,y) coordinates of the first color transition and the second color transition, said slope corresponding to a suspected side of a symbol.
29. The method of claim 25 wherein step b) further comprises processing a fourth subset of the data corresponding to a fourth line of pixels that is parallel to said second line of pixels to locate a corresponding fourth color transition in said fourth line of pixels, said fourth color transition having a third (x,y) coordinates, the fourth line of pixels having the same length as said third lines of pixels.
30. The method of claim 29 wherein step c) further comprises calculating a slope of a line based upon the (x,y) coordinates of the located second and fourth color transitions, said slope corresponding to a suspected side of a symbol.
31. The method of claim 29 wherein step c) further comprises determining whether the (x,y) coordinates of the located first, second, and fourth color transitions are within a preset range of linearity, and calculating the slope if said (x,y) coordinates are within said preset range of linearity, and resuming processing of the first data subset for another first color transition in response to the said (x,y) coordinates not being within the preset range of linearity.
32. The method of claim 29 wherein step b) further comprises processing two respective second data subsets corresponding to two parallel lines of pixels that are spaced apart from each other by approximately the length of the second line, wherein said two parallel lines of pixels are approximately equidistant from said one (x,y) coordinate of said first color transition.
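Claims 30 and 31 accept a slope for the suspected side only after the first, second, and fourth transitions pass a linearity test. A minimal sketch of such a test, assuming a pixel tolerance that the claims do not specify:

```python
# Sketch only: the tolerance and function names are assumptions.

def within_linearity(p1, p2, p3, tolerance=1.5):
    """True if p3 lies within `tolerance` pixels of the line through p1 and p2."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    dx, dy = x2 - x1, y2 - y1
    length = (dx * dx + dy * dy) ** 0.5
    if length == 0:
        return False
    dist = abs(dy * (x3 - x1) - dx * (y3 - y1)) / length
    return dist <= tolerance

def slope_if_linear(p_first, p_second, p_fourth, tolerance=1.5):
    """Claim 31: check linearity of the three hits; claim 30: slope from the
    second and fourth transitions if the check passes, else None (resume probing)."""
    if not within_linearity(p_second, p_fourth, p_first, tolerance):
        return None
    (x2, y2), (x4, y4) = p_second, p_fourth
    if x4 == x2:
        return float('inf')      # vertical suspected side
    return (y4 - y2) / (x4 - x2)
```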
33. The method of claim 29 wherein step a) further comprises stopping processing at the (x,y) coordinates of a first color transition in response to locating said first color transition, and resuming processing to locate another first color transition at a shifted (x,y) coordinate location in said first line of pixels in response to one of step b) failing to locate one of the second and fourth color transitions, and step d) failing to validate the suspected symbol edge.
34. The method of claim 25 wherein step b) further comprises processing respectively different second data subsets corresponding to a first pair of lines of pixels that are in parallel and spaced apart from each other, and a second pair of lines of pixels that are in parallel and spaced apart from each other, to locate respective second color transitions in each said pixel line, the first pair of pixel lines being in a first direction and the second pair of pixel lines being in a direction that is perpendicular to said first direction; and wherein said respective processing further comprises first processing said second data subsets corresponding to one of said first and second pairs of lines of pixels in response to locating said first color transition, and then processing the second data subsets corresponding to the other of said first and second pairs of lines of pixels in response to not locating the respective second color transitions in said one pair.
35. The method of claim 34 wherein step b) further comprises processing each of said second data subsets in a first order corresponding to searching for a second color transition in a first direction along said line of pixels and in a second order corresponding to searching for a second color transition in a second direction along said line of pixels, the second direction being opposite to the first direction.
36. The method of claim 25 wherein step d) further comprises: selecting the plurality of third data subsets to correspond to parallel third lines of pixels that are uniformly spaced a first distance apart along the suspected edge; and defining the boundary of the suspected edge in response to detecting and not detecting said respective third color transitions in the respective third lines of pixels.
37. The method of claim 36 further comprising processing a plurality of fourth subsets of the data corresponding to a plurality of fourth lines of pixels at the defined boundary of said suspected edge having the same length as the third line of pixels and a center that is centered relative to the suspected edge based on the calculated curve parameter, to locate in each fourth data subset a corresponding color transition in said fourth line of pixels, the fourth color transition having an (x,y) coordinates wherein the fourth lines of pixels are spaced a distance apart that is less than the distance between the third lines of pixels; and defining a corner location at said boundary of the suspected edge in response to locating and not locating said respective fourth color transitions in the plurality of fourth data subsets.
38. The method of claim 37 further comprising: processing the plurality of fifth data subsets corresponding to a plurality of fifth lines of pixels to locate in each fifth data subset a corresponding color transition having an (x,y) coordinates, each fifth line of pixels having the same length as the third line of pixels and a center that is centered relative to the suspected edge based on the calculated curve parameter, wherein the fifth lines of pixels are spaced a distance apart that is less than the distance between the third lines of pixels; and further defining a corner location at said boundary of the suspected edge in response to locating and not detecting said respective fifth color transitions in the plurality of fifth data subsets.
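Claims 36 through 38 bracket the end of a validated edge (a corner) by repeating the short probes at successively finer spacings. A sketch assuming a hypothetical edge_hit(t) helper that reports whether a short probe centered at parameter t along the suspected edge still crosses a color transition:

```python
# Sketch only: `edge_hit`, the parameterization, and the spacings are assumptions.

def locate_edge_end(edge_hit, t_start, t_max, spacings=(16, 4, 1)):
    """Return the parameter of the last probe position that still hits the edge,
    refined at successively finer spacings (coarse third probes, then finer
    fourth and fifth probes)."""
    lo, hi = t_start, t_max
    for step in spacings:
        t = lo
        while t + step < hi and edge_hit(t + step):
            t += step
        # the edge end lies between the last hit and the first miss at this spacing
        lo, hi = t, min(hi, t + step)
    return lo
```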
39. The method of claim 37 wherein said symbol has a first edge and a second edge forming a corner, and a plurality of visual cells interior to said first and second edges, and wherein step a) further comprises stopping processing at the first color transition (x,y) coordinates in response to locating a first color transition, and resuming processing to locate another first color transition at a shifted coordinate location in said first line of pixels in response to one of step b) failing to locate a second color transition and step c) failing to define a valid first side and a valid second side of said symbol.
40. The method of claim 25 wherein the symbol is a bar code having a start bar and a stop bar and the identifiable edge is one of the start bar and stop bar edges, further comprising: processing a selected subset of data corresponding to a fourth line of pixels having a direction perpendicular to said identifiable edge to locate a fourth color transition in said fourth line, in response to the identifiable edge being valid, said fourth line being of a length sufficient to intersect said start and stop bar edges and said fourth color transition having an (x,y) coordinate suspected of corresponding to the other of said start bar and stop bar edges; and wherein step d) further comprises selecting another plurality of third data subsets to locate corresponding third color transitions corresponding to a second suspected edge including the fourth color transition and being the other of said start bar and stop bar edges, to validate whether or not the second suspected edge is likely to be a valid edge.
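For the bar code case of claim 40, once one start/stop bar edge is validated, a single long probe cast perpendicular to it searches for the transition suspected of being the opposite bar, which is then validated in the same way. A sketch reusing the hypothetical find_transition helper from the earlier sketch:

```python
# Sketch only: `edge_point`/`edge_dir` come from the validated edge; names assumed.

def find_opposite_bar(img, edge_point, edge_dir, max_len):
    """Search a line perpendicular to the validated start/stop edge for the
    transition suspected of being the opposite bar edge."""
    (ex, ey), (dx, dy) = edge_point, edge_dir
    length = (dx * dx + dy * dy) ** 0.5 or 1.0
    px, py = -dy / length, dx / length          # unit perpendicular to the edge
    height, width = len(img), len(img[0])
    points = []
    for t in range(1, max_len):
        x, y = int(ex + t * px), int(ey + t * py)
        if 0 <= x < width and 0 <= y < height:
            points.append((x, y))
    return find_transition(img, points)
```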
41. The method of claim 40 further comprising determining the boundary of determined valid start and stop bar edges of said bar code symbol, and processing said fourth data subset to extract therefrom the bar code data.
42. The method of claim 40 wherein the bar code is a two dimensional bar code having a root data cell size, further comprising: determining the boundary of the determined valid start and stop bar edges of said bar code symbol, processing said boundary to determine a root data cell of said two dimensional bar code symbol, processing said data to locate each root data cell of said bar code, and determining the data content of each located root data cell to extract therefrom the data of said bar code symbol.
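Claim 42 finishes by sampling each root data cell inside the determined boundary. A sketch, assuming the boundary has already been reduced to an origin and two per-cell step vectors (so symbol rotation is absorbed into the steps); the names and threshold are assumptions:

```python
# Sketch only: assumes all sampled cell centers fall inside the image.

def extract_cells(img, origin, col_step, row_step, cols, rows, threshold=128):
    """Sample one pixel per root data cell and return a matrix of 0/1 values.
    `col_step` and `row_step` are (dx, dy) vectors spanning one cell along the
    two axes of the symbol."""
    ox, oy = origin
    bits = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # center of cell (c, r) in image coordinates
            x = int(ox + (c + 0.5) * col_step[0] + (r + 0.5) * row_step[0])
            y = int(oy + (c + 0.5) * col_step[1] + (r + 0.5) * row_step[1])
            row.append(1 if img[y][x] < threshold else 0)
        bits.append(row)
    return bits
```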
43. The method of claim 25 wherein the identifiable edge is a straight line border of a symbol having a rectangular boundary including first and second straight line borders intersecting at a corner, and a matrix array of data interior to said boundary, wherein said located first transition is one of said first and second line borders, said located second transition is a part of said same one line border, and step d) further comprises processing said third data subsets to locate sufficient third color transitions in said one line border to validate the line border.
44. The method of claim 25 wherein the identifiable edge is a straight line border of a symbol having a rectangular boundary including first and second line borders intersecting at a corner, and a matrix array of data interior to said boundary, wherein step a) further comprises: stopping processing at the first color transition (x,y) coordinates in response to locating a first color transition, and resuming processing for another first color transition at a shifted coordinates in said first line of pixels in response to one of step b) failing to locate a second color transition, and step c) failing to validate the suspected symbol edge as one of said first and second solid line borders.
45. The method of claim 44 wherein said located first transition is one of said first and second line borders, said located second transition is said same one line border, and said located third transitions in said one line border are sufficient to validate the solid line border, further comprising: selecting a different plurality of third data subsets corresponding to a second suspected side of said symbol and repeating step d) by processing said different plurality of third data subsets to validate the other of the first and second line borders in response to validating the one line border; and wherein step a) further comprises resuming processing to locate another first color transition at a shifted coordinates in said first line of pixels in response to said repeating of step d) failing to validate the other line border.
46. The method of claim 45 wherein said selecting step further comprises: selecting a first plurality of third data subsets corresponding to a first side of said rectangular boundary adjacent said one line border and processing said first different plurality of third data subsets to validate said first side as said other line border, and selecting a second different plurality of third data subsets corresponding to a second side of said rectangular boundary adjacent said one line border and opposite said first side, in response to the processing of the first different plurality of third subset not validating said first side as said other second line border.
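Claims 45 and 46 extend validation to the second solid border: probes are run first along one side adjacent to the validated border and, if that fails, along the opposite adjacent side. A sketch with a hypothetical edge_hit_along helper:

```python
# Sketch only: `edge_hit_along(corner, direction, distance)` is an assumed helper
# that reports whether a short probe centered that far from the corner, in that
# direction, crosses a color transition.

def validate_second_border(edge_hit_along, corner, edge_dir, probes=6, spacing=8):
    dx, dy = edge_dir
    for sign in (1, -1):                        # one adjacent side, then the opposite side
        perp = (-dy * sign, dx * sign)          # candidate direction of the second border
        if all(edge_hit_along(corner, perp, k * spacing) for k in range(1, probes + 1)):
            return perp                         # second solid border validated
    return None                                 # neither side validated: resume the main probe
```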
47. The method of claim 25 wherein step d) further comprises processing said plurality of third data subsets one at a time and halting said processing in response to not locating a third color transition.
48. The method of claim 25 further comprising using high precision integer mathematics to locate and validate the symbol in the boundary box.
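Claim 48 observes that the locating and validation arithmetic can be carried out in integers. One common way to do so, sketched here with an assumed 8-bit fractional scale (the claim does not specify one), is fixed-point stepping along the suspected edge:

```python
# Sketch only: the scale factor is an assumption.

SCALE = 256   # 8 fractional bits

def fixed_step(p0, p1, n_steps):
    """Yield n_steps integer pixel positions evenly spaced from p0 toward p1,
    using only integer arithmetic."""
    (x0, y0), (x1, y1) = p0, p1
    fx, fy = x0 * SCALE, y0 * SCALE
    dx = (x1 - x0) * SCALE // n_steps
    dy = (y1 - y0) * SCALE // n_steps
    for _ in range(n_steps):
        fx += dx
        fy += dy
        yield fx // SCALE, fy // SCALE   # back to whole-pixel coordinates
```

The sub-pixel remainder is retained in the accumulators, so repeated probe positions do not drift the way repeated rounding of floating-point steps can.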
PCT/US1995/010172 1994-08-11 1995-08-10 Method and apparatus for locating and extracting data from a two-dimensional code WO1996005571A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US28923294A 1994-08-11 1994-08-11
US08/289,232 1994-08-11

Publications (1)

Publication Number Publication Date
WO1996005571A1 true WO1996005571A1 (en) 1996-02-22

Family

ID=23110622

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1995/010172 WO1996005571A1 (en) 1994-08-11 1995-08-10 Method and apparatus for locating and extracting data from a two-dimensional code

Country Status (2)

Country Link
CA (1) CA2173955A1 (en)
WO (1) WO1996005571A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3136976A (en) * 1957-04-17 1964-06-09 Int Standard Electric Corp Method for the automatic recognition of characters, in particular writing characters
US3346845A (en) * 1964-12-11 1967-10-10 Bunker Ramo Character recognition method and apparatus
US4105998A (en) * 1976-03-30 1978-08-08 Fujitsu Limited Pattern recognition processing system
US5296690A (en) * 1991-03-28 1994-03-22 Omniplanar, Inc. System for locating and determining the orientation of bar codes in a two-dimensional image
US5319181A (en) * 1992-03-16 1994-06-07 Symbol Technologies, Inc. Method and apparatus for decoding two-dimensional bar code using CCD/CMD camera
US5304787A (en) * 1993-06-01 1994-04-19 Metamedia Corporation Locating 2-D bar codes

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8194923B2 (en) 2002-12-11 2012-06-05 The Nielsen Company (Us), Llc Methods and apparatus for detecting a composition of an audience of an information presenting device
US8824740B2 (en) 2002-12-11 2014-09-02 The Nielsen Company (Us), Llc Methods and apparatus for detecting a composition of an audience of an information presenting device
US7466844B2 (en) 2002-12-11 2008-12-16 The Nielsen Company (U.S.), L.L.C. Methods and apparatus to count people appearing in an image
US7609853B2 (en) 2002-12-11 2009-10-27 The Nielsen Company (Us), Llc Detecting a composition of an audience
US8660308B2 (en) 2002-12-11 2014-02-25 The Nielsen Company (Us), Llc Methods and apparatus for detecting a composition of an audience of an information presenting device
EP1836646A4 (en) * 2004-12-03 2010-08-04 Symbol Technologies Inc Bar code scanner decoding
WO2006078359A1 (en) 2004-12-03 2006-07-27 Symbol Technologies, Inc. Bar code scanner decoding
EP1836646A1 (en) * 2004-12-03 2007-09-26 Symbol Technologies, Inc. Bar code scanner decoding
US9344205B2 (en) 2008-08-08 2016-05-17 The Nielsen Company (Us), Llc Methods and apparatus to count persons in a monitored environment
US8620088B2 (en) 2011-08-31 2013-12-31 The Nielsen Company (Us), Llc Methods and apparatus to count people in images
US9237379B2 (en) 2011-08-31 2016-01-12 The Nielsen Company (Us), Llc Methods and apparatus to count people in images
CN103377363A (en) * 2012-10-29 2013-10-30 福建博思软件股份有限公司 Bill Internet of Things networking kit
CN103377363B (en) 2016-02-17 Bill Internet of Things networking kit
US11711638B2 (en) 2020-06-29 2023-07-25 The Nielsen Company (Us), Llc Audience monitoring systems and related methods
US11860704B2 (en) 2021-08-16 2024-01-02 The Nielsen Company (Us), Llc Methods and apparatus to determine user presence
US11758223B2 (en) 2021-12-23 2023-09-12 The Nielsen Company (Us), Llc Apparatus, systems, and methods for user presence detection for audience monitoring

Also Published As

Publication number Publication date
CA2173955A1 (en) 1996-02-22

Similar Documents

Publication Publication Date Title
EP0669593B1 (en) Two-dimensional code recognition method
US11551341B2 (en) Method and device for automatically drawing structural cracks and precisely measuring widths thereof
CN110210409B (en) Method and system for detecting form frame lines in form document
US6015089A (en) High speed image acquisition system and method of processing and decoding bar code symbol
US5120940A (en) Detection of barcodes in binary images with arbitrary orientation
CN110264445B (en) Battery silk-screen quality detection method combining block template matching with morphological processing
US5420937A (en) Fingerprint information extraction by twin tracker border line analysis
JPH0519753B2 (en)
JPH06119481A (en) Device and method for detecting direction of line segment
WO2017041600A1 (en) Chinese-sensitive code feature pattern detection method and system
US6941026B1 (en) Method and apparatus using intensity gradients for visual identification of 2D matrix symbols
WO1996005571A1 (en) Method and apparatus for locating and extracting data from a two-dimensional code
US4876732A (en) System for detecting rotational angle of objective pattern
JPH0512487A (en) Optical recognizing system and recognizing method of bar code character
JP3322958B2 (en) Print inspection equipment
CN110969612B (en) Two-dimensional code printing defect detection method
JP4631384B2 (en) Printing state inspection method, character inspection method, and inspection apparatus using these methods
JPH09147056A (en) Method and device for checking appearance of mark
JP3608923B2 (en) Meander follower for defect inspection apparatus and defect inspection apparatus
JPS61109176A (en) Deciding device for print quality
JPH07159340A (en) Inspection apparatus for printed object
KR0145255B1 (en) Dot pattern inspecting apparatus
JP3191997B2 (en) Symbol information reader
CA1310754C (en) Optical character reader
JPH0798763A (en) Method and device for processing image

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA CN JP SG

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

WWE Wipo information: entry into national phase

Ref document number: 2173955

Country of ref document: CA

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase