US20010023896A1 - Techniques for reading two dimensional code, including maxicode - Google Patents

Techniques for reading two dimensional code, including maxicode

Info

Publication number
US20010023896A1
Authority
US
United States
Prior art keywords
modules
maxicode
symbol
runs
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/816,173
Other versions
US6340119B2 (en)
Inventor
Duanfeng He
Kevin Hunter
Eugene Joseph
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Symbol Technologies LLC
Original Assignee
Symbol Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Symbol Technologies LLC
Priority to US09/816,173
Publication of US20010023896A1
Application granted
Publication of US6340119B2
Assigned to JPMORGAN CHASE BANK, N.A. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SYMBOL TECHNOLOGIES, INC.
Assigned to SYMBOL TECHNOLOGIES, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: JPMORGAN CHASE BANK, N.A.
Assigned to SYMBOL TECHNOLOGIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HE, DUANFENG; JOSEPH, EUGENE; HUNTER, KEVIN
Assigned to MORGAN STANLEY SENIOR FUNDING, INC. AS THE COLLATERAL AGENT SECURITY AGREEMENT Assignors: LASER BAND, LLC; SYMBOL TECHNOLOGIES, INC.; ZEBRA ENTERPRISE SOLUTIONS CORP.; ZIH CORP.
Assigned to SYMBOL TECHNOLOGIES, LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: SYMBOL TECHNOLOGIES, INC.
Assigned to SYMBOL TECHNOLOGIES, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: MORGAN STANLEY SENIOR FUNDING, INC.
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1443Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1456Methods for optical code recognition including a method step for retrieval of the optical code determining the orientation of the optical code with respect to the reader and correcting therefore

Definitions

  • the invention relates to techniques for reading two dimensional code such as MaxiCode. Aspects of the invention are particularly useful in imaging optical code readers which are designed to read various kinds of optical code.
  • Optical codes are patterns made up of image areas having different light reflective or light emissive properties, which are typically assembled in accordance with a priori rules.
  • the term “bar code” is sometimes used to describe certain kinds of optical codes.
  • the optical properties and patterns of optical codes are selected to distinguish them in appearance from the background environments in which they are used.
  • Devices for identifying or extracting data from optical codes are sometimes referred to as “optical code readers” of which bar code scanners are one type.
  • Optical code readers are used in both fixed and portable installations in many diverse environments such as in stores for check-out services, in manufacturing locations for work flow and inventory control and in transport vehicles for tracking package handling.
  • the optical code can be used as a rapid, generalized means of data entry, for example, by reading a target bar code from a printed listing of many bar codes.
  • the optical code reader is connected to a portable data processing device or a data collection and transmission device.
  • the optical code reader includes a handheld sensor which is manually directed at a target code.
  • the bar code is a pattern of variable-width rectangular bars separated by fixed or variable width spaces. The bars and spaces have different light reflecting characteristics.
  • One example of a one dimensional bar code is the UPC/EAN code used to identify, for example, product inventory.
  • An example of a two dimensional or stacked bar code is the PDF417 bar code.
  • a description of PDF417 bar code and techniques for decoding it are disclosed in U.S. Pat. No. 5,635,697 to Shellhammer et al. and assigned to Symbol Technologies, Inc., which patent is incorporated herein by reference.
  • Conventional codes are known which are based on a two dimensional grid whose geometry is independent of data content.
  • the grid may be a plane tiled by regular polygons such as squares or hexagons.
  • a black or white feature or polygon is located at each grid location.
  • MaxiCode is a matrix symbology made up of offset rows of hexagonal modules arranged around a finder pattern.
  • a MaxiCode symbol may consist of a unique central finder pattern: up to 3 dark concentric rings and 3 included light areas, sometimes called a “bulls-eye”.
  • the central finder pattern is surrounded by an approximately square-shaped array of 33 offset rows of hexagonal modules.
  • the 33 rows in the symbol alternate between 30 and 29 modules in width.
  • Orientation information (rotation) is provided by 6 patterns of three modules each located adjacent to the bulls-eye. Data is carried in the presence or absence of darkened modules within the hexagonal grid.
  • a binary encoding scheme may be used to represent information in black (low reflectivity) and white (high reflectivity). MaxiCode is described in detail in the publication “International Symbology Specification—MaxiCode”, by AIM International, Inc. (hereinafter “AIM Specification”).
  • Various optical codes including MaxiCode can be read employing imaging devices.
  • an image sensor may be employed which has a two dimensional array of cells or photo sensors which correspond to image elements or pixels in a field of view of the device.
  • Such an image sensor may be a two dimensional or area charge coupled device (CCD) and associated circuits for producing electronic signals corresponding to a two dimensional array of pixel information for a field of view.
  • An imaging engine usable in reading MaxiCode is disclosed in U.S. patent application Ser. No. 09/096,578 filed Jun. 12, 1998, entitled IMAGING ENGINE AND METHOD FOR CODE READERS to Correa et al. which is hereby incorporated by reference. Electronic circuits in or associated with the imaging engine may be provided to decode MaxiCode.
  • MaxiCode is a type of matrix code which lacks extensive self-synchronization such as that built into PDF417. Some variants of Data Matrix also fall into this category. To decode such types of two dimensional bar codes, a projection or interpolation based on a single line or even a few lines is not adequate and could result in module displacement.
  • the disclosure relates to techniques for determining the presence, orientation and location of features in an image of a two dimensional optical code, especially grid-based codes whose geometry is independent of data content, and more especially grid-based codes based on repeating non-rectangular module shapes such as hexagons.
  • the techniques are adapted for use in locating finder patterns and orientation modules, and in mapping data in an image pixel plane with grid locations in a grid-based two dimensional code to account for size, rotation, tilt, warping and distortion of the code symbol.
  • a code is a MaxiCode
  • techniques are disclosed for determining the presence and location of the MaxiCode bulls-eye and orientation modules.
  • a preferred embodiment of the present invention is a method for determining the presence and location of a MaxiCode symbol bulls-eye and the orientation of the symbol in pixel data obtained by an optical code reader.
  • a candidate center in a run of pixels having a color indicative of the center area of the bulls-eye is identified.
  • the candidate center is tested to determine if adjacent pixel runs have a predetermined amount of mirror symmetry with respect to the candidate center.
  • Plural points are located on axes radiating from the identified run and intersecting an edge of a ring of the MaxiCode bulls-eye.
  • An ellipse is fitted to the located points and the ellipse is expanded outwardly to estimate the location of orientation modules of the MaxiCode.
  • the orientation of the MaxiCode symbol is determined from information read from the orientation modules.
  • the fitted ellipse may be required to have major and minor dimensions having a ratio of 2:1 or less. If this ratio is not met, the image may be rejected as exhibiting too great a tilt.
  • a least squares fit may be employed to find the major and minor dimensions of the ellipse fitted to the located points.
  • the ring edge located and fitted is the inner edge of the outermost black ring of the MaxiCode bulls-eye. Up to 24 points may be used in the least square fit to find the major and minor dimensions of the ellipse.
  • Another preferred embodiment of the present invention includes techniques for associating pixel data in a pixel plane of a two dimensional grid-based symbol with corresponding modules in the grid.
  • subsets of the pixel data having a known association with plural seed modules such as MaxiCode orientation modules, are identified.
  • the coefficients of an Euler transform are determined through a least-squared best-fit method using the identified association between the pixel data subsets and the seed modules. Using the Euler transform, other locations in the pixel plane are associated with additional symbol modules.
  • the seed modules may be a selected subset of the 18 orientation hexagon modules of the MaxiCode symbol, preferably as many of such modules whose locations are readily distinguishable.
  • the technique is usable, for example, where the grid-based symbol is not located in any single plane perpendicular to the optical axis of the optical code reader, such as where the symbol lies on a tilted plane or is printed on the curved face of a can.
  • a method for decoding a MaxiCode symbol to obtain data from the primary data modules and secondary data modules located on the hexagonal grid.
  • the central finder pattern is located and then the orientation modules are located based on the location of the central finder pattern.
  • the locations of primary data modules of the MaxiCode are found based on the previously determined locations of the orientation modules.
  • the secondary data modules are sequentially located outwardly from the primary message modules toward the edge of the symbol using previously located positions of adjacent modules. Secondary data modules are located in a successive radial progression of rings of hexagons outward toward the edges of the symbol.
  • the locations of secondary data modules are estimated using the positions of adjacent modules and just past adjacent modules.
  • the location of a first set of secondary data modules may be refined by centering black/white transitions before locating a next set of secondary data modules further outward from the central finder pattern. Deviations from a regular hexagonal grid are accumulated as the location of secondary data modules proceeds outwardly from the central finder pattern.
  • the initial location of the central finder pattern may be based on finding a candidate bulls-eye center by testing local symmetry about selected axes.
  • a candidate sequence of alternating runs running in a horizontal direction (with respect to the image detector) may be tested, the 5th run being the candidate center run.
  • the candidate sequence is evaluated to verify that the lengths of the runs are within a predetermined proportion of a selected one of the runs.
  • a candidate center may be further tested by evaluating pixel data vertically above and below the candidate center for approximate mirror symmetry in the location of corresponding black/white transitions in the data.
  • horizontal and vertical distances may be subjected to a ratio test to evaluate the image.
  • further testing of a candidate center may be done by evaluating pixel data on diagonal lines passing through the candidate center for appropriate mirror symmetry in the location of corresponding black/white transitions in the data. If all tests are passed, the candidate center and/or black/white transition data may be used to locate additional features of the MaxiCode.
  • FIG. 1 is a block diagram of various electronic circuits employed in a preferred embodiment of the present invention.
  • FIG. 2 is a combined flow chart and system block diagram illustrating the preprocessing and decoding of CCD data in a preferred embodiment of the present invention
  • FIG. 3 is a flow chart illustrating the reading and translating of two dimensional, image data identified by preprocessing as corresponding to a MaxiCode symbol
  • FIG. 4 is a plan view of a portion of the arrangement of a conventional MaxiCode symbol, with the addition of reference axes;
  • FIG. 5 is an example of pixel runs used in a preferred embodiment of the present invention.
  • FIG. 6 is a detail of an image of a central finder or bulls-eye of a MaxiCode symbol showing the location of ellipse fitting points in accordance with a preferred embodiment of the present invention
  • FIG. 7 illustrates aspects of the present invention related to determining the rotation or orientation of a MaxiCode symbol
  • FIG. 8 is an illustration of grid points generated to correspond to a MaxiCode symbol in accordance with a preferred embodiment of the present invention.
  • FIG. 9 is an illustration of a hexagonal grid with indexing numbers (i, j);
  • FIG. 10 illustrates a vector field usable in the present invention to correct distortions of image data of a two dimensional code symbol.
  • FIG. 1 is a block diagram of various electronic circuits employed in preferred embodiments of the present invention.
  • An image sensor 2 has an optical axis 3 .
  • An image is obtained of a symbol, shown located on an arbitrary warped and tilted surface 5 (not in a plane and not perpendicular to the optical axis 3 ).
  • electronic signals from the image sensor 2 pass to FPGA (or ASIC) circuit 4 .
  • the image sensor 2 includes a CCD detector and various signal conditioning circuits which produce a digital output signal.
  • This digital output signal may be in the form of electronic signals corresponding to a two dimensional array of pixel information for a target field of view.
  • Digital signals from the imaging sensor are supplied to the microprocessor 6 by the FPGA circuit 4 .
  • the FPGA also provides control signals from the microprocessor for control of, for example, the aiming systems, illumination systems and objective lens servo systems of the imaging engine.
  • the microprocessor also provides information to external systems via the RS 232 driver 10 . This may include data decoded in accordance with the techniques described below.
  • the micro-processor may also communicate by data line to Flash memory 12 and DRAM 14 on which software and data for the system are stored, respectively.
  • This stored information may include data from a target optical code.
  • FIG. 2 is a combined flow chart and system block diagram illustrating autodiscrimination of image sensor data.
  • Data obtained by the image sensor circuitry is indicated at 100 .
  • This data may be in the form of electronic signals corresponding to a two dimensional array of pixel information for a target image.
  • the data may be stored for subsequent processing in the DRAM of the optical code reader.
  • the processing software which implements the processes of the present disclosure may have access to the stored image data at all levels. At various processing steps, portions of the pixel data may be called up for further processing or to confirm on-going analyses.
  • the pixel data may be divided into subimages, for example, 32 ⁇ 32 pixel subimages. These subimages are analyzed for properties known to be associated with various types of optical codes and known to distinguish a particular code from other codes and environmental (non-code) images. More particularly, a process of statistical Autodiscrimination may be employed. In statistical Autodiscrimination the image is divided into sub-images or sections and some statistic computed for each section. Subimages with similar statistics can be grouped to form regions of interest or clusters which may contain codes. The advantage of the statistical approach is that once the statistics are compiled, only the sub-images need to be processed, significantly reducing the computation requirements. In addition, the compilation of the statistics is simple and can be done in hardware for super fast systems.
  • the statistic used in preferred embodiments is a histogram of local surface orientations.
  • the statistics can be obtained by analyzing surface tangents to cluster the subimages. Once a cluster is identified, the image data may be further analyzed to detect the presence of tangents associated with particular types of optical codes.
  • Statistical Autodiscrimination is a subject of a U.S. patent application Ser. No. 09/096,348 entitled AUTODISCRIMINATION AND LINE DRAWING TECHNIQUES FOR CODE READERS and assigned to Symbol Technologies, Inc., which application is hereby incorporated by reference.
  • a neural network can be used to discriminate image areas of possible interest as containing optical code.
  • a neural network can also be designed to look directly at the input image.
  • the location of the aiming pattern with respect to the subimages may also be used as an indicia or weighting factor in selecting subimages for further processing.
  • Autodiscrimination software executed by the system microprocessor determines which subimage clusters contain codes of a particular type and the coordinates in the pixel data array of certain boundaries or features of preliminarily identified code areas.
  • This system is indicated at 102 .
  • the image data may be preliminarily identified as a one dimensional code, two dimensional code (PDF), Postal Code or MaxiCode, it being understood that other code types with recognizable statistical or contrast patterns could be identified at this stage of processing. This preliminary identification is used to select the appropriate decoding techniques.
  • the autodiscrimination function also passes information about the image useful in decoding. This information is shown in data windows 104 through 110 in FIG. 2.
  • Data window 110 corresponds to MaxiCode data used in decoding: location information concerning one or more clusters of subimages preliminarily identified by the Autodiscrimination function as containing MaxiCode.
  • FIG. 3 is a flow chart illustrating the processing and decoding of two dimensional pixel data identified by the Autodiscrimination function 102 . This identification is based on the fact that the Autodiscrimination function has found within an image cluster a contrast variation within predetermined limits, but no threshold groupings of surface tangents which would be indicative of other codes having generally orthogonal feature edges such as one dimensional code, postal code or PDF code. In the processing of FIG. 3, cluster data may be accepted line by line.
  • the technique attempts to locate a MaxiCode bulls-eye at 150 by analyzing the pixel data for patterns indicative of concentric rings. An ellipse may then be fitted to the inside edge of the outermost black ring at 152 . The rotation of the grid is determined from the location of the 18 orientation hexagons in the image as indicated at 154 . A transformation is calculated to account for scale and tilt of the target code and to map a grid, at processing step 156 . Grid locations are adapted progressively outwardly by shifting hexagon centers to better fit the pixel data, as indicated at block 158 . The result of this processing is indexed presence/absence data for the hexagonal MaxiCode grid. This data is passed to the MaxiCode translating function at 160 . If the MaxiCode decoding or translating fails during the processing of FIG. 3, process control may be returned to the Autodiscrimination function 102 to select another code type and/or cluster of subimage pixels for analysis.
  • FIG. 4 illustrates the arrangement of features in a portion of the MaxiCode symbol.
  • a code “symbol” refers to an optical code printed or otherwise presented to view and having variable light reflectivity capable of being detected by a code reader.
  • a “MaxiCode symbol” refers to a symbol having a central finder pattern and data carried in the presence or absence of darker modules arranged in a hexagonal grid surrounding the finder pattern. Examples of such MaxiCode symbols are described in the AIM Specification.
  • the central finder pattern (bulls-eye) 200 is first located.
  • this process proceeds as follows.
  • a linked list of subimages or cluster is identified by the Autodiscrimination function. Within this identified area, each horizontal line is converted into run lengths.
  • a moving window averaging technique is used to generate an estimated grey scale threshold to determine the color of the center pixel in the window. Alternatively, the threshold may be generated in the previously performed subimage processing.
  • Each higher reflectance (white) run is tentatively treated as if it were in the white center circle 202 of a bulls-eye.
  • a run is a string of two or more adjacent pixels each indicating a reflectance from the symbol either above or below a threshold value. For example, runs of from 2 to 30 white pixels may be treated as candidate center runs. Four adjacent runs of alternating color are examined on either side of the candidate center. According to the context, the word “color” is used to indicate the grey scale value of one or more pixels or a determination that a run or area of the image is black or white. The widths of corresponding pairs of runs are compared to determine whether they are too big or too small in relation to one another.
  • the thresholds for rejection are tunable parameters based on the optical characteristic of the system. Processing is expedited by noting a bad run size as it is collected and preventing further analysis as long as this run is in the current window.
  • This operation is illustrated with a simple example in FIG. 5.
  • the line of black and white dots 250 corresponds to a window of pixels from a horizontal row of pixels in an image cluster identified by the Autodiscrimination function as possibly containing the image of a MaxiCode symbol.
  • the four central pixels have been identified as a candidate center run.
  • Four adjacent runs on the left side L1, L2, D1, D2 and on the right side L′1, L′2, D′1, D′2, corresponding to black and white pixels (differentiated light and dark grey scale values), have been determined to have lengths falling within tunable parameters of the system.
  • each pair of corresponding runs on the left and right sides of the candidate center has equal widths, i.e. they are exactly mirror symmetric about the candidate center C.
  • a cumulative radius from the candidate center of the bulls-eye is calculated.
  • the left and right cumulative radii must be approximately symmetric. If the mismatch of the radii exceeds a predetermined threshold (e.g. a ratio of 2:1), the candidate center is rejected.
  • a set of vertical run lengths is generated using the same technique. This process works from the center out, however, and stops when five black/white or white/black transitions have been seen above and below the candidate center. This set of runs is subjected to the same symmetry and min/max run size tests listed above. If the vertical run passes, a further test is made: the ratio of the horizontal to vertical sizes must be between 1:2 and 2:1. Ratios outside this range would indicate a symbol tilt exceeding 60 degrees, and, thus, would not represent a good candidate image for decoding. If both the horizontal and vertical tests are passed, a pair of diagonal runs are also tested. Each diagonal run must individually pass the run length min/max test and the symmetry test and the pair must meet the 1:2-2:1 test.
  • the initial location of the central finder pattern may be based on finding a candidate center using the following steps:
  • each run is between half and twice the length of the first run
  • each run is between 1 and 30 pixels in length.
  • step (c) examining a vertical sequence of pixels running on a vertical axis passing through the candidate center found in step (b) and evaluating whether the four adjacent runs above the candidate center run and the four adjacent runs located below the candidate center run are between half and twice the length of the run of the candidate center;
  • step (f) examining a new horizontal sequence of pixels running through the new candidate center and evaluating the run lengths as in step (c) to determine whether they have the required proportionality;
  • step (g) examining pixel sequences lying on two different diagonal scan lines passing through the new horizontal sequence of pixels, and evaluating the run lengths as in step (c) to determine whether they have the required proportionality;
  • FIG. 6 illustrates some of the geometry of these tests.
  • the Figure shows the projection in the image plane of a portion of a MaxiCode bulls-eye lying in a plane tilted with respect to the optical axis of the image sensor.
  • Horizontal and vertical axes passing near or through the candidate center run are identified by numerals 260 and 262 , respectively.
  • the line HH lies on a horizontal run and the line VV lies on a vertical run.
  • the lengths of the lines may be compared to determine whether they exceed a 2:1 ratio.
  • Axes 264 and 266 lie on diagonal runs which may also be tested as discussed above.
  • V₂ = −S₂₂⁻¹ S₁₂ᵀ V₁
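The partitioned relation above matches the block-elimination step of a direct least-squares conic fit, in which the linear coefficients V₂ are recovered from the quadratic coefficients V₁ via the partitioned scatter matrix. A minimal sketch follows, assuming the Fitzgibbon/Halir-Flusser formulation of that fit; the function name and use of NumPy are illustrative and not taken from the patent.

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct least-squares ellipse fit (Fitzgibbon et al., in the
    numerically stable Halir-Flusser form). Returns conic coefficients
    (A, B, C, D, E, F) of A x^2 + B x y + C y^2 + D x + E y + F = 0."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    D1 = np.column_stack([x * x, x * y, y * y])    # quadratic terms
    D2 = np.column_stack([x, y, np.ones_like(x)])  # linear terms
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2
    # Block elimination of the linear coefficients; this is the
    # partitioned relation V2 = -S22^-1 S12^T V1 quoted above.
    T = -np.linalg.solve(S3, S2.T)
    M = S1 + S2 @ T
    M = np.array([M[2] / 2.0, -M[1], M[0] / 2.0])  # apply C1^-1
    _, evecs = np.linalg.eig(M)
    evecs = np.real(evecs)
    # The ellipse (rather than hyperbola) solution has 4AC - B^2 > 0.
    ok = 4.0 * evecs[0] * evecs[2] - evecs[1] ** 2 > 0
    V1 = evecs[:, ok][:, 0]
    V2 = T @ V1                 # recover the linear coefficients
    return np.concatenate([V1, V2])
```

The major and minor dimensions used in the 2:1 tilt test described earlier follow from these conic coefficients.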
  • the processing proceeds to find the orientation of the symbol.
  • the ellipse calculated above is expanded outward from the center to the three radii corresponding to estimated radial distances of the centers of the 18 orientation hexagons in the symbol.
  • the 18 orientation hexagons are arranged in six groups 214 as shown in FIG. 4.
  • the three radii 300 , 302 and 304 are shown in FIG. 7 for the simplest case where the image of the bulls-eye is found to be essentially circular (an ellipse with zero eccentricity).
  • a set of 360 samples is taken along each ellipse (each circle in the case of the image of FIG. 7), one per degree of rotation around the ellipse.
  • a correlated search is performed for the orientation markers.
  • the rotation(s) at which the maximum number of the orientation hexes is matched is noted. If a continuous set of angles (e.g. from 18-21 degrees) all match the same set of hexes, the center of this range is used to try to locate the center of the orientation hexes if possible. This allows the calculation of the best known position of each of the 18 hexes.
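As an illustration of this correlated search (a sketch only; the pattern encoding and names are assumptions, not the patent's code), one sample can be taken per degree along each of the three expanded radii and every rotation of the nominal orientation-hexagon pattern scored:

```python
import numpy as np

def find_rotation(img, center, radii, expected):
    """Correlated search for the 18 orientation hexagons. `img` is a
    thresholded image (1 = black), `center` the fitted bulls-eye
    center, `radii` the three expanded radii (the circular case of
    FIG. 7), and `expected` a dict {(ring, angle_deg): color} with the
    nominal colors of the orientation hexagons. Returns the rotation
    in degrees matching the most hexagons."""
    cx, cy = center
    samples = np.zeros((3, 360), dtype=int)
    a = np.deg2rad(np.arange(360))
    for ring, r in enumerate(radii):        # one sample per degree
        px = np.rint(cx + r * np.cos(a)).astype(int)
        py = np.rint(cy + r * np.sin(a)).astype(int)
        samples[ring] = img[py, px]
    best, best_score = 0, -1
    for rot in range(360):                  # try every rotation
        score = sum(samples[ring, (ang + rot) % 360] == color
                    for (ring, ang), color in expected.items())
        if score > best_score:
            best, best_score = rot, score
    return best
```

If a contiguous range of rotations ties for the best score, the center of that range would be used, as described above.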
  • the hexagon locations are refined by shifting the computed centers to center the internal black-to-white transitions among the hexagons, as well as a black-to-white transition toward the center of the bulls-eye.
  • step (iii) If step (ii) yields a proper color match, the triplet is subjected to the same refinement procedure as in (i).
  • a best fitted grid technique generally usable in decoding matrix codes will now be described.
  • Processing may then proceed to decode the primary message of the MaxiCode.
  • the primary message includes 20 symbol characters designated 1-20 in FIG. 4 and including 120 modules. Symbol characters 1-10 are used to encode data; symbol characters 11-20 are used for error correction. Decoding the primary message may proceed as follows:
  • the positions of the centers of the hexagons can be corrected. This is done by finding internal black-to-white transitions in the ECC-corrected data, and then shifting the hex centers in the image to place the transitions found in the image at a position centered between the hex centers. Note that one easy way to do this is to perturb the orientation hexagons, since shifts in their locations will shift the hex centers in proper, correlated fashion.
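For illustration, the transition-centering correction for one pair of adjacent, opposite-color hexagons can be sketched as follows (the name and coordinate convention are hypothetical; positions are pixel-plane coordinates):

```python
import numpy as np

def recenter_shift(c1, c2, edge):
    """Correction that moves the observed black-to-white edge `edge`
    lying between adjacent opposite-color hex centers `c1` and `c2`
    to the midpoint of the two centers. Adding the returned shift to
    both centers places the edge exactly between them."""
    c1, c2, edge = (np.asarray(p, dtype=float) for p in (c1, c2, edge))
    return edge - (c1 + c2) / 2.0
```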
  • mapping proceeds outwardly from the primary message hexagons into the surrounding hexagons, ring by ring, i.e. a radial progression from the center of the symbol.
  • Each hexagon in the surrounding area can be estimated by at least two independent methods using hexagons that are adjacent to it and just past adjacent to it. These two results can be averaged to correct for rounding errors.
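A minimal sketch of this estimate-and-average step, assuming straight-line extrapolation along the grid (function names are illustrative):

```python
import numpy as np

def extrapolate(adjacent, past_adjacent):
    """One estimate of a new hex center: continue the step from the
    just-past-adjacent module through the adjacent module."""
    return (2.0 * np.asarray(adjacent, dtype=float)
            - np.asarray(past_adjacent, dtype=float))

def estimate_new_center(pairs):
    """Average two or more independent extrapolations, each given as
    an (adjacent, past_adjacent) pair along a different grid
    direction, to correct for rounding errors."""
    return np.mean([extrapolate(a, p) for a, p in pairs], axis=0)
```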
  • the colors (grey scale values) on either side can be examined to produce a local black-white threshold, allowing for changes in illumination across the image.
  • Black hexagons, especially if in a white neighborhood, can be located more exactly. This property may be exploited to provide a vector field for tweaking the grid.
  • a smooth, unidirectional vector field which is zero at the bulls-eye is shown in FIG. 10.
  • such a vector field could be used to correct distortions caused by cylindrical warp of the symbol.
  • the sampling grid could be warped somewhat and the secondary message hex locations recalculated to try to achieve a successful decode. Again, a possible metric to use would be to have edge locations centered between hex centers.
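One way to realize such a field (a sketch under assumed parameters; the quadratic profile, axis and gain are illustrative, not specified by the patent) is to displace every grid point along a single direction by an amount that vanishes at the bulls-eye:

```python
import numpy as np

def apply_warp_field(points, center, axis_deg, gain):
    """Smooth, unidirectional correction field that is zero at the
    bulls-eye (cf. FIG. 10): each point is shifted along direction
    `axis_deg` by gain * d^2, where d is its signed distance from
    the bulls-eye center along that direction."""
    pts = np.asarray(points, dtype=float)
    c = np.asarray(center, dtype=float)
    u = np.array([np.cos(np.deg2rad(axis_deg)),
                  np.sin(np.deg2rad(axis_deg))])
    d = (pts - c) @ u               # signed distance along the axis
    return pts + np.outer(gain * d * d, u)
```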
  • translated data may be obtained from signals corresponding to a two dimensional array of pixel information from a field of view containing the image of a MaxiCode symbol.
  • the disclosed techniques are designed to automatically adjust for image size, and tolerate as much as 60° tilt, 360° of rotation and significant warping (hexagon distance change of 50% or more).

Abstract

The disclosure relates to techniques for determining the presence, orientation and location of features in an image of a two dimensional optical code. The techniques are adapted for use in mapping data in an image pixel plane with grid locations in a grid-based two dimensional code to account for size, rotation, tilt, warping and distortion of the code symbol. Where such a code is a MaxiCode, techniques are disclosed for determining the presence and location of the MaxiCode bulls-eye, orientation modules, primary data modules and secondary data modules.

Description

    FIELD OF THE INVENTION
  • The invention relates to techniques for reading two dimensional code such as MaxiCode. Aspects of the invention are particularly useful in imaging optical code readers which are designed to read various kinds of optical code. [0001]
  • BACKGROUND OF THE INVENTION AND OBJECTS
  • Optical codes are patterns made up of image areas having different light reflective or light emissive properties, which are typically assembled in accordance with a priori rules. The term “bar code” is sometimes used to describe certain kinds of optical codes. The optical properties and patterns of optical codes are selected to distinguish them in appearance from the background environments in which they are used. Devices for identifying or extracting data from optical codes are sometimes referred to as “optical code readers” of which bar code scanners are one type. Optical code readers are used in both fixed and portable installations in many diverse environments such as in stores for check-out services, in manufacturing locations for work flow and inventory control and in transport vehicles for tracking package handling. The optical code can be used as a rapid, generalized means of data entry, for example, by reading a target bar code from a printed listing of many bar codes. In some uses, the optical code reader is connected to a portable data processing device or a data collection and transmission device. Frequently, the optical code reader includes a handheld sensor which is manually directed at a target code. [0002]
  • Most conventional optical scanning systems are designed to read one-dimensional bar code symbols. The bar code is a pattern of variable-width rectangular bars separated by fixed or variable width spaces. The bars and spaces have different light reflecting characteristics. One example of a one dimensional bar code is the UPC/EAN code used to identify, for example, product inventory. An example of a two dimensional or stacked bar code is the PDF417 bar code. A description of PDF417 bar code and techniques for decoding it are disclosed in U.S. Pat. No. 5,635,697 to Shellhammer et al. and assigned to Symbol Technologies, Inc., which patent is incorporated herein by reference. [0003]
  • Conventional codes are known which are based on a two dimensional grid whose geometry is independent of data content. The grid may be a plane tiled by regular polygons such as squares or hexagons. Typically a black or white feature or polygon is located at each grid location. [0004]
  • One such two dimensional optical code known in the art is MaxiCode. A portion of a conventional MaxiCode symbol is shown in FIG. 4. MaxiCode is a matrix symbology made up of offset rows of hexagonal modules arranged around a finder pattern. A MaxiCode symbol may consist of a unique central finder pattern: up to 3 dark concentric rings and 3 included light areas, sometimes called a “bulls-eye”. The central finder pattern is surrounded by an approximately square-shaped array of 33 offset rows of hexagonal modules. The 33 rows in the symbol alternate between 30 and 29 modules in width. Orientation information (rotation) is provided by 6 patterns of three modules each located adjacent to the bulls-eye. Data is carried in the presence or absence of darkened modules within the hexagonal grid. A binary encoding scheme may be used to represent information in black (low reflectivity) and white (high reflectivity). MaxiCode is described in detail in the publication “International Symbology Specification—MaxiCode”, by AIM International, Inc. (hereinafter “AIM Specification”). [0005]
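As a concrete sketch of this nominal geometry (an illustration, not part of the patent text; see the AIM Specification for the exact figures), module centers on the 33-row grid can be computed in units of the module pitch:

```python
import math

def module_center(row, col, pitch=1.0):
    """Nominal center of the hexagonal module at (row, col) for rows
    0-32, assuming even rows hold 30 modules and odd rows hold 29,
    offset by half a pitch, with rows spaced pitch*sqrt(3)/2 as in a
    regular hexagonal packing."""
    width = 29 if row % 2 else 30
    if not (0 <= row <= 32 and 0 <= col < width):
        raise ValueError("outside the 33-row MaxiCode grid")
    x = col * pitch + (0.5 * pitch if row % 2 else 0.0)
    y = row * pitch * math.sqrt(3.0) / 2.0
    return x, y
```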
  • Various optical codes including MaxiCode can be read employing imaging devices. For example an image sensor may be employed which has a two dimensional array of cells or photo sensors which correspond to image elements or pixels in a field of view of the device. Such an image sensor may be a two dimensional or area charge coupled device (CCD) and associated circuits for producing electronic signals corresponding to a two dimensional array of pixel information for a field of view. An imaging engine usable in reading MaxiCode is disclosed in U.S. patent application Ser. No. 09/096,578 filed Jun. 12, 1998, entitled IMAGING ENGINE AND METHOD FOR CODE READERS to Correa et al. which is hereby incorporated by reference. Electronic circuits in or associated with the imaging engine may be provided to decode MaxiCode. [0006]
  • The above-mentioned AIM Specification discloses a frequency-domain technique to decode MaxiCode. In such processes three principal MaxiCode axes, assumed to correspond to the hexagon grid, are found in two dimensional pixel data using a Fourier transform technique. Grid locations for the modules are then determined by use of inverse Fourier transform techniques. However, such techniques require computationally intensive, two dimensional transforms, which have long processing times particularly at high resolution levels preferred for accurate decoding. Moreover, such techniques may not accurately accommodate tilted or warped MaxiCode symbols. [0007]
  • Accordingly, it is an object of the present invention to provide a fast and efficient technique for decoding grid-based two dimensional code such as MaxiCode. [0008]
  • It is another object of the present invention to provide a technique for decoding grid based, two dimensional code which better tolerates tilt and/or warping of the code symbol as seen by the image sensor. [0009]
  • It is another object of the present invention to provide a technique for decoding MaxiCode which produces an accurate grid mapping which can better accommodate symbol grid distortion and which can take into account tilt in the plane of the MaxiCode symbol with respect to the optical axis of the image sensor. [0010]
  • In U.S. Pat. No. 5,637,849 to Wang et al., it has been proposed to extract data from a MaxiCode symbol using a spatial domain technique, without frequency domain transforms. The technique includes identifying two intersecting diameters of at least one traversal line having an alignment normal to the sides of the hexagonal data cells. In the case that the symbol is on a non-planar surface, Wang et al. teaches that the top row of data cells must be sampled by an angled row of pixels, the middle row of data cells by a horizontal row of pixels, and intermediate rows of data cells at angles interpolated between the top angle and horizontal. Alternatively, or in addition, a normalization routine may be implemented for individual rows of data cells. [0011]
  • MaxiCode is a type of matrix code which lacks extensive self-synchronization such as that built into PDF417. Some variants of Data Matrix also fall into this category. To decode such types of two dimensional bar codes, a projection or interpolation based on a single line or even a few lines is not adequate and could result in module displacement. [0012]
  • Accordingly, it is an object of the present invention to provide more accurate techniques for mapping a code grid, particularly one which lacks extensive self-synchronization such as MaxiCode. [0013]
  • It is another object of the present invention to provide techniques for more accurately and efficiently decoding grid-based two dimensional codes in situations where the code symbol does not lie in a plane normal to the optical axis of the image sensor. [0014]
  • It is another object of the present invention to provide techniques for decoding grid-based two dimensional codes in situations where the code symbol or the image of the code symbol is progressively distorted from the nominal grid pattern on which the code is based. [0015]
  • These and other objects and features of the invention will be apparent from this written description and drawings. [0016]
  • SUMMARY OF THE INVENTION
  • The disclosure relates to techniques for determining the presence, orientation and location of features in an image of a two dimensional optical code, especially grid-based codes whose geometry is independent of data content, and more especially grid-based codes based on repeating non-rectangular module shapes such as hexagons. The techniques are adapted for use in locating finder patterns and orientation modules, and in mapping data in an image pixel plane with grid locations in a grid-based two dimensional code to account for size, rotation, tilt, warping and distortion of the code symbol. Where such a code is a MaxiCode, techniques are disclosed for determining the presence and location of the MaxiCode bulls-eye and orientation modules. [0017]
  • More particularly, a preferred embodiment of the present invention is a method for determining the presence and location of a MaxiCode symbol bulls-eye and the orientation of the symbol in pixel data obtained by an optical code reader. In this method a candidate center in a run of pixels having a color indicative of the center area of the bulls-eye is identified. The candidate center is tested to determine if adjacent pixel runs have a predetermined amount of mirror symmetry with respect to the candidate center. Plural points are located on axes radiating from the identified run and intersecting an edge of a ring of the MaxiCode bulls-eye. An ellipse is fitted to the located points and the ellipse is expanded outwardly to estimate the location of orientation modules of the MaxiCode. The orientation of the MaxiCode symbol is determined from information read from the orientation modules. [0018]
  • The fitted ellipse may be required to have major and minor dimensions having a ratio of 2:1 or less. If this ratio is not met, the image may be rejected as exhibiting too great a tilt. A least squares fit may be employed to find the major and minor dimensions of the ellipse fitted to the located points. In preferred embodiments the ring edge located and fitted is the inner edge of the outermost black ring of the MaxiCode bulls-eye. Up to 24 points may be used in the least square fit to find the major and minor dimensions of the ellipse. [0019]
  • Another preferred embodiment of the present invention includes techniques for associating pixel data in a pixel plane of a two dimensional grid-based symbol with corresponding modules in the grid. In accordance with this technique, subsets of the pixel data having a known association with plural seed modules such as MaxiCode orientation modules, are identified. The coefficients of an Euler transform are determined through a least-squared best-fit method using the identified association between the pixel data subsets and the seed modules. Using the Euler transform, other locations in the pixel plane are associated with additional symbol modules. [0020]
  • In preferred embodiments of this technique, more than six seed modules are employed. In the case that the symbol is a MaxiCode symbol, the seed modules may be a selected subset of the 18 orientation hexagon modules of the MaxiCode symbol, preferably as many of such modules whose locations are readily distinguishable. The technique is usable, for example, where the grid-based symbol is not located in any single plane perpendicular to the optical axis of the optical code reader, such as where the symbol lies on a tilted plane or is printed on the curved face of a can. [0021]
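A minimal sketch of that least-squared best fit, assuming a six-coefficient linear (affine) model as a stand-in for the Euler transform named in the text; with more than six seed modules the system is overdetermined and the fit averages out pixel noise:

```python
import numpy as np

def fit_grid_to_pixel(grid_pts, pixel_pts):
    """Least-squares fit of a six-coefficient map taking the known
    grid coordinates of seed modules (e.g. the readily located
    orientation hexagons) to their observed pixel coordinates."""
    G = np.column_stack([np.asarray(grid_pts, dtype=float),
                         np.ones(len(grid_pts))])   # rows [u, v, 1]
    coeff, *_ = np.linalg.lstsq(G, np.asarray(pixel_pts, dtype=float),
                                rcond=None)         # 3x2 coefficients
    return coeff

def grid_to_pixel(coeff, u, v):
    """Predict the pixel location of any other module on the grid."""
    return np.array([u, v, 1.0]) @ coeff
```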
  • In another preferred embodiment of the present invention a method is employed for decoding a MaxiCode symbol to obtain data from the primary data modules and secondary data modules located on the hexagonal grid. In such a method the central finder pattern is located and then the orientation modules are located based on the location of the central finder pattern. The locations of primary data modules of the MaxiCode are found based on the previously determined locations of the orientation modules. The secondary data modules are sequentially located outwardly from the primary message modules toward the edge of the symbol using previously located positions of adjacent modules. Secondary data modules are located in a successive radial progression of rings of hexagons outward toward the edges of the symbol. [0022]
  • In preferred embodiments the locations of secondary data modules are estimated using the positions of adjacent modules and just past adjacent modules. The location of a first set of secondary data modules may be refined by centering black/white transitions before locating a next set of secondary data modules further outward from the central finder pattern. Deviations from a regular hexagonal grid are accumulated as the location of secondary data modules proceeds outwardly from the central finder pattern. [0023]
  • The initial location of the central finder pattern may be based on finding a candidate bulls-eye center by testing local symmetry about selected axes. A candidate sequence of alternating runs running in a horizontal direction (with respect to the image detector) may be tested, the 5th run being the candidate center run. [0024]
  • Initially, the candidate sequence is evaluated to verify that the lengths of the runs are within a predetermined proportion of a selected one of the runs. [0025]
  • In preferred embodiments, a candidate center may be further tested by evaluating pixel data vertically above and below the candidate center for approximate mirror symmetry in the location of corresponding black/white transitions in the data. In one embodiment, horizontal and vertical distances may be subjected to a ratio test to evaluate the image. Yet further testing of a candidate center may be done by evaluating pixel data on diagonal lines passing through the candidate center for appropriate mirror symmetry in the location of corresponding black/white transitions in the data. If all tests are passed, the candidate center and/or black/white transition data may be used to locate additional features of the MaxiCode. [0026]
  • The scope of the present invention is defined in the claims.[0027]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of various electronic circuits employed in a preferred embodiment of the present invention; [0028]
  • FIG. 2 is a combined flow chart and system block diagram illustrating the preprocessing and decoding of CCD data in a preferred embodiment of the present invention; [0029]
  • FIG. 3 is a flow chart illustrating the reading and translating of two dimensional, image data identified by preprocessing as corresponding to a MaxiCode symbol; [0030]
  • FIG. 4 is a plan view of a portion of the arrangement of a conventional MaxiCode symbol, with the addition of reference axes; [0031]
  • FIG. 5 is an example of pixel runs used in a preferred embodiment of the present invention; [0032]
  • FIG. 6 is a detail of an image of a central finder or bulls-eye of a MaxiCode symbol showing the location of ellipse fitting points in accordance with a preferred embodiment of the present invention; [0033]
  • FIG. 7 illustrates aspects of the present invention related to determining the rotation or orientation of a MaxiCode symbol; [0034]
  • FIG. 8 is an illustration of grid points generated to correspond to a MaxiCode symbol in accordance with a preferred embodiment of the present invention; [0035]
  • FIG. 9 is an illustration of a hexagonal grid with indexing numbers (i, j); [0036]
  • FIG. 10 illustrates a vector field usable in the present invention to correct distortions of image data of a two dimensional code symbol.[0037]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • FIG. 1 is a block diagram of various electronic circuits employed in preferred embodiments of the present invention. An image sensor 2 has an optical axis 3. An image is obtained of a symbol, shown located on an arbitrary warped and tilted surface 5 (not in a plane and not perpendicular to the optical axis 3). [0038]
  • As shown in FIG. 1, electronic signals from the image sensor 2 pass to FPGA (or ASIC) circuit 4. In a preferred embodiment, the image sensor 2 includes a CCD detector and various signal conditioning circuits which produce a digital output signal. This digital output signal may be in the form of electronic signals corresponding to a two dimensional array of pixel information for a target field of view. Digital signals from the imaging sensor are supplied to the microprocessor 6 by the FPGA circuit 4. As indicated by the data line 8, the FPGA also provides control signals from the microprocessor for control of, for example, the aiming systems, illumination systems and objective lens servo systems of the imaging engine. The microprocessor also provides information to external systems via the RS 232 driver 10. This may include data decoded in accordance with the techniques described below. [0039]
  • The micro-processor may also communicate by data line to Flash memory 12 and DRAM 14, on which software and data for the system are stored, respectively. This stored information may include data from a target optical code. [0040]
  • FIG. 2 is a combined flow chart and system block diagram illustrating autodiscrimination of image sensor data. Data obtained by the image sensor circuitry is indicated at 100. This data may be in the form of electronic signals corresponding to a two dimensional array of pixel information for a target image. The data may be stored for subsequent processing in the DRAM of the optical code reader. It will be understood that the processing software which implements the processes of the present disclosure may have access to the stored image data at all levels. At various processing steps, portions of the pixel data may be called up for further processing or to confirm on-going analyses. [0041]
  • The pixel data may be divided into subimages, for example, 32×32 pixel subimages. These subimages are analyzed for properties known to be associated with various types of optical codes and known to distinguish a particular code from other codes and environmental (non-code) images. More particularly, a process of statistical Autodiscrimination may be employed. In statistical Autodiscrimination the image is divided into sub-images or sections and some statistic computed for each section. Subimages with similar statistics can be grouped to form regions of interest or clusters which may contain codes. The advantage of the statistical approach is that once the statistics are compiled, only the sub-images need to be processed, significantly reducing the computation requirements. In addition, the compilation of the statistics is simple and can be done in hardware for super fast systems. The statistic used in preferred embodiments is a histogram of local surface orientations. The statistics can be obtained by analyzing surface tangents to cluster the subimages. Once a cluster is identified, the image data may be further analyzed to detect the presence of tangents associated with particular types of optical codes. Statistical Autodiscrimination is a subject of a U.S. patent application Ser. No. 09/096,348 entitled AUTODISCRIMINATION AND LINE DRAWING TECHNIQUES FOR CODE READERS and assigned to Symbol Technologies, Inc., which application is hereby incorporated by reference. Alternatively, a neural network can be used to discriminate image areas of possible interest as containing optical code. A neural network can also be designed to look directly at the input image. The location of the aiming pattern with respect to the subimages may also be used as an indicia or weighting factor in selecting subimages for further processing. [0042]
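A sketch of the per-subimage statistic, taking gradient direction as the measure of local surface orientation (the 32×32 block size is from the text; the bin count and magnitude weighting are assumptions):

```python
import numpy as np

def block_orientation_histograms(img, block=32, bins=16):
    """Histogram of local surface (gradient) orientations for each
    block x block subimage, weighted by edge strength; subimages with
    similar histograms can then be clustered into regions of interest."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    ang = np.arctan2(gy, gx)          # orientation in [-pi, pi]
    mag = np.hypot(gx, gy)            # weight by edge strength
    h, w = img.shape
    hists = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            a = ang[by:by + block, bx:bx + block].ravel()
            m = mag[by:by + block, bx:bx + block].ravel()
            hist, _ = np.histogram(a, bins=bins,
                                   range=(-np.pi, np.pi), weights=m)
            hists[(by, bx)] = hist
    return hists
```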
  • In preferred embodiments, Autodiscrimination software executed by the system microprocessor determines which subimage clusters contain codes of a particular type and the coordinates in the pixel data array of certain boundaries or features of preliminarily identified code areas. This system is indicated at 102. As shown in FIG. 2, the image data may be preliminarily identified as a one dimensional code, two dimensional code (PDF), Postal Code or MaxiCode, it being understood that other code types with recognizable statistical or contrast patterns could be identified at this stage of processing. This preliminary identification is used to select the appropriate decoding techniques. The autodiscrimination function also passes information about the image useful in decoding. This information is shown in data windows 104 through 110 in FIG. 2. Data window 110 corresponds to MaxiCode data used in decoding: location information concerning one or more clusters of subimages preliminarily identified by the Autodiscrimination function as containing MaxiCode. [0043]
  • FIG. 3 is a flow chart illustrating the processing and decoding of two dimensional pixel data identified by the Autodiscrimination function 102. This identification is based on the fact that the Autodiscrimination function has found within an image cluster a contrast variation within predetermined limits, but no threshold groupings of surface tangents which would be indicative of other codes having generally orthogonal feature edges such as one dimensional code, postal code or PDF code. In the processing of FIG. 3, cluster data may be accepted line by line. [0044]
  • In basic overview, the technique attempts to locate a MaxiCode bulls-eye at 150 by analyzing the pixel data for patterns indicative of concentric rings. An ellipse may then be fitted to the inside edge of the outermost black ring at 152. The rotation of the grid is determined from the location of the 18 orientation hexagons in the image as indicated at 154. A transformation is calculated to account for scale and tilt of the target code and to map a grid, at processing step 156. Grid locations are adapted progressively outwardly by shifting hexagon centers to better fit the pixel data, as indicated at block 158. The result of this processing is indexed presence/absence data for the hexagonal MaxiCode grid. This data is passed to the MaxiCode translating function at 160. If the MaxiCode decoding or translating fails during the processing of FIG. 3, process control may be returned to the Autodiscrimination function 102 to select another code type and/or cluster of subimage pixels for analysis. [0045]
  • Preferred embodiments of the decoding technique will now be described in greater detail, with particular reference to FIG. 4 which illustrates the arrangement of features in a portion of the MaxiCode symbol. [0046]
  • For purposes of this patent application, a code “symbol” refers to an optical code printed or otherwise presented to view and having variable light reflectivity capable of being detected by a code reader. A “MaxiCode symbol” refers to a symbol having a central finder pattern and data carried in the presence or absence of darker modules arranged in a hexagonal grid surrounding the finder pattern. Examples of such MaxiCode symbols are described in the AIM Specification. [0047]
  • In MaxiCode decoding, typically the central finder pattern (bulls-eye) 200 is first located. In a preferred embodiment of the present invention this process proceeds as follows. A linked list of subimages or cluster is identified by the Autodiscrimination function. Within this identified area, each horizontal line is converted into run lengths. A moving window averaging technique is used to generate an estimated grey scale threshold to determine the color of the center pixel in the window. Alternatively, the threshold may be generated in the previously performed subimage processing. [0048]
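The moving-window average can be sketched for one scan line as follows (the window size is an assumed tuning parameter, not a value from the patent):

```python
import numpy as np

def threshold_line(line, window=16):
    """Classify each pixel of a grey-scale scan line against the mean
    of a moving window centered on it: True = white, False = black."""
    line = np.asarray(line, dtype=float)
    kernel = np.ones(window) / window
    local_mean = np.convolve(line, kernel, mode="same")
    return line >= local_mean
```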
  • Each higher reflectance (white) run is tentatively treated as if it were in the white center circle 202 of a bulls-eye. A run is a string of two or more adjacent pixels, each indicating a reflectance from the symbol either above or below a threshold value. For example, runs of 2 to 30 white pixels may be treated as candidate center runs. Four adjacent runs of alternating color are examined on either side of the candidate center. According to the context, the word “color” is used to indicate the grey scale value of one or more pixels or a determination that a run or area of the image is black or white. The widths of corresponding pairs of runs are compared to determine whether they are too big or too small in relation to one another. The thresholds for rejection are tunable parameters based on the optical characteristics of the system. Processing is expedited by noting a bad run size as it is collected and preventing further analysis as long as this run is in the current window. [0049]
  • This operation is illustrated with a simple example in FIG. 5. In FIG. 5 the line of black and white dots 250 corresponds to a window of pixels from a horizontal row of pixels in an image cluster identified by the Autodiscrimination function as possibly containing the image of a MaxiCode symbol. The four central pixels have been identified as a candidate center run. Four adjacent runs on the left side L1, L2, D1, D2 and on the right side L′1, L′2, D′1, D′2, corresponding to black and white pixels (differentiated light and dark grey scale values), have been determined to have lengths falling within tunable parameters of the system. In this case each of the corresponding runs on the left and right side of the candidate center has equal width, i.e. they are exactly mirror symmetric about the candidate center C. [0050]
  • Next, for each of the 4 adjacent runs on either side, a cumulative radius from the candidate center of the bulls-eye is calculated. The left and right cumulative radii must be approximately symmetric. If the mismatch of the radii exceeds a predetermined threshold (e.g. a ratio of 2:1), the candidate center is rejected. [0051]
  • This operation is illustrated in the example of FIG. 5. From the pixel data of FIG. 5 the system compares the left radius R_L and right radius R_R (the distances in the pixel plane from the candidate center C to the posited inner edge 252 of the third black ring). In the example these distances are equal; therefore, the candidate center would not be rejected at this stage of processing, but would be further examined. [0052]
  • If the candidate center passes the previous test, a set of vertical run lengths is generated using the same technique. This process works from the center out, however, and stops when five black/white or white/black transitions have been seen above and below the candidate center. This set of runs is subjected to the same symmetry and min/max run size tests listed above. If the vertical run passes, a further test is made: the ratio of the horizontal to vertical sizes must be between 1:2 and 2:1. Ratios outside this range would indicate a symbol tilt exceeding 60 degrees and thus would not represent a good candidate image for decoding. If both the horizontal and vertical tests are passed, a pair of diagonal runs is also tested. Each diagonal run must individually pass the run length min/max test and the symmetry test, and the pair must meet the 1:2-2:1 test. [0053]
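  • The per-axis run-size and mirror-symmetry tests described above might be sketched as follows; the 2:1 rejection ratio follows the text, while the helper name and argument layout are illustrative. The 1:2-2:1 horizontal-to-vertical aspect test would be applied on top of this per-axis check.

```python
def passes_symmetry_tests(center_len, left, right, max_ratio=2.0):
    """Test a candidate bulls-eye center run. `left` and `right` hold the
    lengths of the four runs of alternating color on each side of the
    candidate center run, ordered outward from the center."""
    if not (2 <= center_len <= 30):                 # candidate center size limits
        return False
    for l, r in zip(left, right):
        # Corresponding run pairs must not be too large or too small
        # relative to one another.
        if max(l, r) > max_ratio * min(l, r):
            return False
    # Cumulative left and right radii must be approximately mirror symmetric.
    radius_l = center_len / 2 + sum(left)
    radius_r = center_len / 2 + sum(right)
    return max(radius_l, radius_r) <= max_ratio * min(radius_l, radius_r)
```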
  • Another preferred procedure for locating a central finder pattern of a MaxiCode will now be described. The initial location of the central finder pattern may be based on finding a candidate center using the following steps (sketched in code after this list): [0054]
  • (a) locating a horizontal sequence of 9 alternating black and white runs of pixels such as shown in FIG. 5, wherein [0055]
  • (i) the first run is white, [0056]
  • (ii) each run is between half and twice the length of the first run, and [0057]
  • (iii) each run is between 1 and 30 pixels in length. [0058]
  • (b) choosing the center of the middle (i.e. 5th) run as a candidate center; [0059]
  • (c) examining a vertical sequence of pixels running on a vertical axis passing through the candidate center found in step (b) and evaluating whether the four adjacent runs above the candidate center run and the four adjacent runs located below the candidate center run are between half and twice the length of the run of the candidate center; [0060]
  • (d) determining whether the ratio of the length of the vertical sequence is between half and twice the length of the horizontal sequence; if so [0061]
  • (e) using the center points of the horizontal and vertical sequences to redetermine (i.e. recenter) the candidate center, thereby locating a new candidate center; [0062]
  • (f) examining a new horizontal sequence of pixels running through the new candidate center and evaluating the run lengths as in step (c) to determine whether they have the required proportionality; [0063]
  • (g) examining pixel sequences lying on two different diagonal scan lines passing through the new horizontal sequence of pixels, and evaluating the run lengths as in step (c) to determine whether they have the required proportionality; [0064]
  • (h) determining whether the ratio of the length of one diagonal sequence is between half and twice the length of the other diagonal sequence. If so, the system proceeds on the basis that the candidate center is the center of the central finder pattern. [0065]
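  • A minimal sketch of steps (a) and (b), under the assumption that the line has already been converted into (is_white, length, start_x) run tuples; the generator yields candidate center x-coordinates, which steps (c)-(h) would then test along the vertical and diagonal axes.

```python
def find_candidate_centers(runs):
    """Scan a run-length encoded line for nine alternating runs that could
    straddle a MaxiCode bulls-eye center (steps (a)-(b) above)."""
    for k in range(len(runs) - 8):
        window = runs[k:k + 9]
        first_white, first_len, _ = window[0]
        if not first_white:
            continue                                   # (a)(i): first run is white
        lengths = [length for _, length, _ in window]
        if not all(0.5 * first_len <= n <= 2 * first_len for n in lengths):
            continue                                   # (a)(ii): proportionality
        if not all(1 <= n <= 30 for n in lengths):
            continue                                   # (a)(iii): absolute limits
        _, mid_len, mid_start = window[4]
        yield mid_start + mid_len / 2.0                # (b): center of the 5th run
```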
  • It will be observed from the foregoing that similar evaluation schemes may be used to locate finder patterns other than concentric round bulls-eye patterns, such as nested squares. This is so because the pattern detection scheme is based on mirror symmetries with respect to one or more selected axes. [0066]
  • FIG. 6 illustrates some of the geometry of these tests. The Figure shows the projection in the image plane of a portion of a MaxiCode bulls-eye lying in a plane tilted with respect to the optical axis of the image sensor. Horizontal and vertical axes passing near or through the candidate center run are identified by numerals 260 and 262, respectively. The line $\overline{HH'}$ lies on a horizontal run and the line $\overline{VV'}$ lies on a vertical run. The lengths of the lines may be compared to determine whether they exceed a 2:1 ratio. Axes 264 and 266 lie on diagonal runs which may also be tested as discussed above. [0067]
  • Once a candidate center is found that passes the above tests, an ellipse is fitted to the inside edge 252 of the outer black ring. (The outside edge of this ring may be corrupted by adjacent black hexagonal modules, so this is the largest reliable ring.) The points p at which the horizontal, vertical and diagonal lines (axes 260-266 of FIG. 6) intersect the black ring are computed to sub-pixel accuracy using the calculated threshold. This gives eight points on the ring. An additional 16 points are generated by backing off from this intersection point and moving diagonally (for horizontal or vertical runs) or horizontally/vertically (for diagonal runs) until the edge crossing is found again. These ellipse fit points are located at the ends of the 16 vectors (arrows) shown in FIG. 6. Not all of the 24 points so defined need be located; the described approach is robust enough that the bulls-eye may be adequately located without finding all 24 points. An ellipse is fitted to the points so generated, using a least-squares fit method to determine the centroid and the major and minor axis directions and dimensions of the ellipse. [0068]
  • Set out below are calculations by which the least squares ellipse fitting may be accomplished. [0069]
  • Gather statistics: [0070]

$$u_{m,n} = \sum_i x_i^m y_i^n \qquad (0 \le m+n \le 4)$$

  • Create matrices: [0071]

$$D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad S_{11} = \begin{pmatrix} u_{4,0} & u_{3,1} & u_{2,2} \\ u_{3,1} & u_{2,2} & u_{1,3} \\ u_{2,2} & u_{1,3} & u_{0,4} \end{pmatrix}$$

$$S_{12} = \begin{pmatrix} u_{3,0} & u_{2,1} & u_{2,0} \\ u_{2,1} & u_{1,2} & u_{1,1} \\ u_{1,2} & u_{0,3} & u_{0,2} \end{pmatrix} \qquad S_{22} = \begin{pmatrix} u_{2,0} & u_{1,1} & u_{1,0} \\ u_{1,1} & u_{0,2} & u_{0,1} \\ u_{1,0} & u_{0,1} & u_{0,0} \end{pmatrix}$$

  • Find the eigenvector for the smallest eigenvalue: [0072]

$$\left( S_{11} - S_{12} \, S_{22}^{-1} \, S_{12}^T \right) \cdot V_1 = \lambda \cdot D \cdot V_1$$

  • Calculate the second vector: [0073]

$$V_2 = -S_{22}^{-1} \, S_{12}^T \, V_1$$

  • The components of the two vectors are the coefficients for the ellipse equation [0074]

$$Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$$

$$V_1 = \begin{pmatrix} A \\ B \\ C \end{pmatrix} \qquad V_2 = \begin{pmatrix} D \\ E \\ F \end{pmatrix}$$ [0075]

  • Convert to the equivalent ellipse equation form: [0076]

$$\left( \frac{(x - x_0)\cos\varphi + (y - y_0)\sin\varphi}{r_{maj}} \right)^2 + \left( \frac{(x - x_0)\sin\varphi - (y - y_0)\cos\varphi}{r_{min}} \right)^2 = 1$$
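  • The calculation above translates almost directly into a least-squares routine. The following NumPy sketch (with the conversion to center/axes form omitted) is one possible rendering under those equations, not a transcription of any production implementation.

```python
import numpy as np

def fit_ellipse(x: np.ndarray, y: np.ndarray):
    """Fit Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 to edge points by the
    generalized eigenvalue formulation set out above."""
    D1 = np.column_stack([x * x, x * y, y * y])    # quadratic terms [x^2, xy, y^2]
    D2 = np.column_stack([x, y, np.ones_like(x)])  # linear terms [x, y, 1]
    S11 = D1.T @ D1                                # moments u_{m,n}, m+n = 2..4
    S12 = D1.T @ D2
    S22 = D2.T @ D2                                # includes u_{0,0} = N
    M = S11 - S12 @ np.linalg.solve(S22, S12.T)    # reduced scatter matrix
    Dmat = np.diag([1.0, 2.0, 1.0])                # the matrix D from the text
    w, V = np.linalg.eig(np.linalg.solve(Dmat, M))
    v1 = np.real(V[:, np.argmin(np.real(w))])      # eigenvector, smallest eigenvalue
    v2 = -np.linalg.solve(S22, S12.T @ v1)         # second coefficient vector
    A, B, C = v1
    Dc, E, F = v2
    return A, B, C, Dc, E, F
```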
  • Once the major and minor axes of the ellipse are determined, the processing proceeds to find the orientation of the symbol. The ellipse calculated above is expanded outward from the center to the three radii corresponding to the estimated radial distances of the centers of the 18 orientation hexagons in the symbol. The 18 orientation hexagons are arranged in six groups 214 as shown in FIG. 4. The three radii 300, 302 and 304 are shown in FIG. 7 for the simplest case, where the image of the bulls-eye is found to be essentially circular (an ellipse with zero eccentricity). A set of 360 samples is taken along each ellipse (each circle in the case of the image of FIG. 7), one per degree of rotation around the ellipse. Using this set of 3×360 samples, a correlated search is performed for the orientation markers. The rotation(s) at which the maximum number of orientation hexes is matched is noted. If a continuous set of angles (e.g. from 18-21 degrees) all match the same set of hexes, the center of this range is used to try to locate the centers of the orientation hexes if possible. This allows the calculation of the best known position of each of the 18 hexes. [0077]
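  • A sketch of the correlated search over the 3×360 ring samples follows. The template of expected colors at the 18 orientation hexagon positions is assumed to be supplied from the AIM layout; the names and data shapes here are illustrative.

```python
import numpy as np

def find_rotation(samples: np.ndarray, template):
    """Find the rotation that matches the most orientation hexagons.
    `samples[r, a]` is the thresholded color (1 = black, 0 = white) on ring r
    at angle a in degrees; `template` is a list of (ring, angle_deg, color)."""
    scores = np.zeros(360, dtype=int)
    for rot in range(360):
        scores[rot] = sum(
            1 for ring, ang, color in template
            if samples[ring, (ang + rot) % 360] == color
        )
    best = int(np.argmax(scores))
    # As in the text, a contiguous range of equally good angles would be
    # collapsed to its center before locating the hexagon centers.
    return best, int(scores[best])
```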
  • For the four orientation triplets which contain two black and one white hexagon: [0078]
  • (i) if the postulated positions from the previous step yield the correct sample colors (grey scale values), the hexagon locations are refined by shifting the computed centers to center the internal black-to-white transitions among the hexagons, as well as a black-to-white transition toward the center of the bulls-eye. [0079]
  • (ii) if the colors do not match, the algorithm looks for a complete match. It uses as potential offsets the following: [0080]
  • (a) The reverse of the refinement offset applied to the triplet on the opposite side of the bulls-eye. [0081]
  • (b) The average distance the hexagons were perturbed during step (i). [0082]
  • (c) All 1-pixel offsets (8 directions). [0083]
  • (d) All 2-pixel offsets (8 directions). [0084]
  • (iii) If step (ii) yields a proper color match, the triplet is subjected to the same refinement procedure as in (i). [0085]
  • For the two triplets that are monochromatic (upper right and upper left of FIGS. 4 and 7), several “guesses” as to the correct location are generated using the known bulls-eye center and the positions of the black/white triplets. These are averaged with the current location to obtain a refined location. [0086]
  • Once the rotation of the MaxiCode symbol is determined, the hexagonal grid may be mapped. A best-fitted grid technique for doing this will now be described, it being understood that alternative techniques, such as a bilinear transformation, could be used to map the grid using image data from the bulls-eye. [0087]
  • A best-fitted grid technique generally usable in decoding matrix codes will now be described. A linear grid through an Euler transform remains linear, and can be expressed by two co-planes: [0088]

$$\begin{cases} x = A_x i + B_x j + C_x \\ y = A_y i + B_y j + C_y \end{cases}$$
  • Once the coefficients are known, these equations can be used to map a set of indices (i, j), specifying a module in a two dimensional code of interest, to the module's coordinates (x, y) in the image or pixel plane. A drawing with a grid generated as such is shown in FIG. 8. [0089]
  • The coefficients (A_x, A_y, B_x, B_y, C_x, C_y) themselves can be found through a least-squares, best-fit method, if we have a few “seed modules” or “seed points” for which we know both (i, j) and (x, y). Such points, corresponding to the orientation hexagons previously determined for an image of a MaxiCode symbol, are shown as dots in FIG. 8. Generated grid points are shown as crosses in FIG. 8. The equations for the least-squares best fit are: [0090]

$$\begin{pmatrix} \sum i^2 & \sum ij & \sum i \\ \sum ij & \sum j^2 & \sum j \\ \sum i & \sum j & N \end{pmatrix} \begin{pmatrix} A_x & A_y \\ B_x & B_y \\ C_x & C_y \end{pmatrix} = \begin{pmatrix} \sum ix & \sum iy \\ \sum jx & \sum jy \\ \sum x & \sum y \end{pmatrix}$$
  • Here we can also see the reason these two planes are called co-planes: the calculations of their coefficients are closely tied together. In fact, by calculating them at the same time, an almost 50% reduction in the amount of calculation can be achieved compared to calculating them separately. [0091]
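  • The normal equations above can be solved for both co-planes at once, since they share the same left-hand matrix; that shared work is the source of the near-50% saving. The sketch below assumes the seed data is supplied as NumPy arrays.

```python
import numpy as np

def fit_coplanes(ij: np.ndarray, xy: np.ndarray) -> np.ndarray:
    """Solve for (A_x, B_x, C_x) and (A_y, B_y, C_y) from seed modules.
    `ij` is an (N, 2) array of module indices, `xy` the matching (N, 2)
    pixel coordinates; at least 3 non-collinear seed points are required."""
    i, j = ij[:, 0].astype(float), ij[:, 1].astype(float)
    lhs = np.array([
        [np.sum(i * i), np.sum(i * j), np.sum(i)],
        [np.sum(i * j), np.sum(j * j), np.sum(j)],
        [np.sum(i),     np.sum(j),     float(len(ij))],
    ])
    rhs = np.column_stack([
        [np.sum(i * xy[:, 0]), np.sum(j * xy[:, 0]), np.sum(xy[:, 0])],
        [np.sum(i * xy[:, 1]), np.sum(j * xy[:, 1]), np.sum(xy[:, 1])],
    ])
    # One solve covers both co-planes: column 0 is the x-plane, column 1 the y-plane.
    return np.linalg.solve(lhs, rhs)

def grid_to_pixel(coeffs: np.ndarray, i: float, j: float):
    """Map module indices (i, j) to pixel coordinates (x, y). For MaxiCode,
    some (i, j) pairs are skipped so indexing stays linear in both directions."""
    return coeffs[0] * i + coeffs[1] * j + coeffs[2]
```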
  • To apply the described method to MaxiCode, it is necessary to skip some of the index numbers (i, j pairs) so that the modules are indexed linearly in both directions. This is shown graphically in FIG. 9. For example, in FIG. 9 there is no module indexed as (0, 1). [0092]
  • From a comparison of the best-fitted grid technique with a prediction method based on lines, it has been observed that in tilted images the line method starts to fail very close to the seed points, but the grid method can accurately map the module locations all the way to the edge of the MaxiCode symbol. [0093]
  • Processing may then proceed to decode the primary message of the MaxiCode. The primary message comprises 20 symbol characters, designated 1-20 in FIG. 4, carried in 120 modules. Symbol characters 1-10 are used to encode data; symbol characters 11-20 are used for error correction. Decoding the primary message may proceed as follows (a packing sketch follows the list): [0094]
  • (a) Using the positions of the 18 orientation hexes, the position of the hexes in the primary message are calculated. This is largely done by interpolating between the 18 hexes, or extrapolating past them. [0095]
  • (b) The resulting hex locations are sampled to determine their color (grey scale equivalent of black or white). The resulting 1's and 0's are packed into codewords and the Error Correction Code (ECC) procedure applied per the AIM Specification. [0096]
  • (c) If the primary message is decoded properly, further correction and reading can proceed. If not, the next candidate orientation can be tried, if any. [0097]
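  • Since the primary message comprises 20 symbol characters carried in 120 modules, each codeword is six bits. A packing sketch follows; the bit ordering within a codeword is an assumption for illustration, as the actual layout and the Reed-Solomon ECC procedure are defined by the AIM Specification.

```python
def pack_primary_codewords(bits):
    """Pack 120 sampled primary-message bits (1 = black) into 20 six-bit
    codewords: 10 data codewords followed by 10 error-correction codewords."""
    assert len(bits) == 120
    codewords = []
    for k in range(0, 120, 6):
        word = 0
        for b in bits[k:k + 6]:
            word = (word << 1) | int(b)    # most significant bit first (assumed)
        codewords.append(word)
    return codewords
```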
  • Using the known correct values of each of the hexagons or modules in the primary message, the positions of the centers of the hexagons can be corrected. This is done by finding internal black-to-white transitions in the ECC-corrected data, and then shifting the hex centers in the image to place the transitions found in the image at a position centered between the hex centers. Note that one easy way to do this is to perturb the orientation hexagons, since shifts in their locations will shift the hex centers in a proper, correlated fashion. [0098]
  • Using the known primary message hexagon locations, mapping proceeds outward from the primary message hexagons into the surrounding hexagons, ring by ring, i.e. in a radial progression from the center of the symbol. Each hexagon in the surrounding area can be estimated by at least two independent methods using hexagons that are adjacent to it and just past adjacent to it. These two results can be averaged to correct for rounding errors (see the sketch below). Once a new set of postulated positions is obtained, it can be refined by “tweaking” the positions to center black-to-white edges. This process, being incremental, will compensate for perspective shifts due to symbol tilt, curvature, and/or distortion. In addition, any time a black-to-white transition is found, the colors (grey scale values) on either side can be examined to produce a local black-white threshold, allowing for changes in illumination across the image. [0099]
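  • The two independent estimates for a new hexagon center can be taken as linear extrapolations through an adjacent module from a just-past-adjacent module, in two different directions; a minimal sketch, with (x, y) centers as NumPy arrays:

```python
import numpy as np

def estimate_new_center(adj_a, past_a, adj_b, past_b):
    """Average two linear extrapolations for the center of the next hexagon
    outward, correcting for rounding errors as described above."""
    est_a = 2 * np.asarray(adj_a) - np.asarray(past_a)   # extend line A outward
    est_b = 2 * np.asarray(adj_b) - np.asarray(past_b)   # extend line B outward
    return (est_a + est_b) / 2.0
```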
  • To further expand the power of the best-fitted grid technique described above, one would start from near the seed modules and accumulate the deviations as they are observed while moving away from the seed modules. The deviations can be found with matched filters, a well-known technique in image processing. The matched filters method can work because the module size is known; otherwise we would not have been able to get at least the minimum number (3) of points needed for solving the best-fit equations. Combined with this deviation accumulation method, it is possible to decode codes suffering from non-linear distortion, such as foreshortening at very close distance. [0100]
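  • One way to find such a deviation with a matched filter is a small local search of template correlation around the predicted center; the module-sized template and the ±2 pixel search range below are illustrative assumptions.

```python
import numpy as np

def local_deviation(image, template, center, search=2):
    """Return the (dy, dx) offset that best aligns a module-sized template
    with the image near a predicted module center."""
    th, tw = template.shape
    cy, cx = int(round(center[0])), int(round(center[1]))
    best_score, best_off = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = cy + dy - th // 2, cx + dx - tw // 2
            if y0 < 0 or x0 < 0:
                continue                         # skip positions off the image
            patch = image[y0:y0 + th, x0:x0 + tw]
            if patch.shape != template.shape:
                continue
            score = float(np.sum(patch * template))    # correlation score
            if score > best_score:
                best_score, best_off = score, (dy, dx)
    return best_off   # accumulated as mapping proceeds away from the seeds
```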
  • Black hexagons, especially in a white neighborhood, can be located more exactly. This property may be exploited to provide a vector field for tweaking the grid. A smooth, unidirectional vector field, which is zero at the bulls-eye, is shown in FIG. 10. For example, such a vector field could be used to correct distortions caused by cylindrical warp of the symbol. [0101]
  • Variants of the foregoing technique may be employed. For example, after the orientation hexagons have been located and the grid for the primary message hexagons set up, the orientation hexagon locations could be refined to properly center black-white transitions found in the image before the ECC is run. In contrast, in the procedure discussed above, the ECC is run first. If it succeeds, the system obtains better information about what it should be seeing and computation is reduced in most cases. However, if the ECC fails, refinement to center the black-white transitions could then be attempted and the ECC process tried again. Similarly, additional passes through the secondary message might be desirable if its ECC process fails. Given the nominal outline of the code, as calculated above, and the location of the primary message hexagons, the sampling grid could be warped somewhat and the secondary message hex locations recalculated to try to achieve a successful decode. Again, a possible metric to use would be to have edge locations centered between hex centers. [0102]
  • Once the complete grid has been sampled, the secondary message ECC process is run. The result is indexed presence/absence data for some or all of the MaxiCode grid. This data is translated to deliver the informational content of the symbol. [0103]
  • Thus, through the use of the processing techniques exemplified in FIGS. 2 through 10, translated data may be obtained from signals corresponding to a two dimensional array of pixel information from a field of view containing the image of a MaxiCode symbol. The disclosed techniques are designed to automatically adjust for image size, and tolerate as much as 60° tilt, 360° of rotation and significant warping (hexagon distance change of 50% or more). [0104]
  • The described embodiments of the present invention are intended to be illustrative rather than restrictive, and are not intended to represent every embodiment of the present invention. Various modifications and variations can be made to the disclosed systems without departing from the spirit or scope of the invention as set forth in the following claims both literally and in equivalents recognized in law. [0105]

Claims (19)

We claim:
1. A method for determining the presence and location of a MaxiCode symbol bulls-eye and the orientation of the symbol in pixel data obtained by an optical code reader comprising the steps of:
identifying a candidate center in a run of pixels having a color indicative of the center area of the bulls-eye;
testing the candidate center to determine if adjacent pixel runs have a predetermined amount of mirror symmetry with respect to the candidate center;
locating plural points located radially outward of the identified run which points correspond to an edge of a ring of the MaxiCode bulls-eye;
fitting an ellipse to said located points;
expanding the ellipse outwardly to estimate the location of orientation modules of the MaxiCode; and
determining the orientation of the MaxiCode symbol from information read from the orientation modules.
2. The method of
claim 1
, wherein the ellipse has major and minor dimensions having a ratio of 2:1 or less.
3. The method of
claim 1
, wherein a least squares fit is employed to find the major and minor dimensions of the ellipse fitted to the located points.
4. The method of
claim 3
, wherein the ring edge located is the inner edge of the outermost black ring of the MaxiCode bulls-eye.
5. The method of
claim 3
, wherein 8 points on the edge of the ring of the MaxiCode bulls-eye are located and an additional 16 points are identified on the edge, located at angularly adjacent locations on either side of each of the 8 located points, and wherein all 24 points are used in the least squares fit to find the major and minor dimensions of the ellipse.
6. A method for associating pixel data in a pixel plane, obtained by an optical code reader from an image of a two dimensional grid-based symbol, with corresponding modules in the grid, comprising the steps of:
identifying subsets of the pixel data having a known association with plural seed modules;
determining the coefficients of an Euler transform through a least-squares best-fit method using the identified association between the pixel data subsets and the seed modules; and
using the Euler transform to associate other locations in the pixel plane with additional symbol modules.
7. The method of
claim 6
, wherein more than six seed modules are employed.
8. The method of
claim 7
, wherein the symbol is a MaxiCode symbol and the seed modules are the 18 orientation hexagon modules of the MaxiCode symbol.
9. The method of
claim 6
, wherein the grid-based symbol is located, at least in part, outside of any single plane perpendicular to the optical axis of the optical code reader.
10. A method of decoding a MaxiCode symbol imaged by an optical code reader, the MaxiCode symbol having a central finder pattern and orientation modules, primary data modules and secondary data modules located on a hexagonal grid, comprising the steps of
locating the central finder pattern;
locating the orientation modules based on the location of the central finder pattern;
locating primary data modules of the MaxiCode based on the determined location of the orientation modules; and
sequentially locating the secondary data modules outwardly from the primary message modules toward the edge of the symbol using previously located positions of adjacent modules.
11. The method of
claim 10
, wherein the locations of secondary data modules are estimated using the positions of adjacent modules and just-past-adjacent modules.
12. The method of
claim 10
, further comprising the step of refining the location of a first set of secondary data modules by centering black/white transitions before locating a next set of secondary data modules further outward from the central finder pattern.
13. The method of
claim 10
, wherein the secondary data modules are located in a successive radial progression of rings of hexagons.
14. The method of
claim 10
, wherein deviations from a regular hexagonal grid are accumulated as the location of secondary data modules proceeds outwardly from the central finder pattern.
15. A method of determining a candidate location of a MaxiCode symbol bulls-eye center comprising the steps of:
(a) locating a first sequence of plural adjacent pixel runs along a horizontal axis, the runs in the sequence having alternating colors;
(b) determining whether the lengths of the located horizontal pixel runs have a predetermined proportionate relationship;
(c) locating a second sequence of plural, adjacent pixel runs having alternating colors along a second axis running through the center of the central horizontal run and not parallel with the horizontal axis;
(d) determining whether the lengths of the pixel runs of the second sequence have a predetermined proportionate relationship with respect to a central run; and
(e) identifying a candidate location of a MaxiCode symbol bulls-eye center based on a positive determination in step (d).
16. The method of
claim 15
, wherein the first sequence is selected so that it has nine runs, the runs in the sequence being alternating black and white runs, the first of which is white.
17. The method of
claim 15
, further comprising the step of using the candidate location determined in step (e) to define new axes along which new sequences of pixel runs are evaluated to determine whether the runs have a predetermined proportionate relationship.
18. The method of
claim 17
, further comprising the step of determining whether the lengths of sequences of runs which run in different directions have a predetermined proportionate relationship.
19. The method of
claim 15
, wherein the predetermined proportionate relationship with respect to the central run is one in which pixel runs on both sides of the central run have lengths between one-half and twice the length of the central run.
US09/816,173 1998-10-22 2001-03-26 Techniques for reading two dimensional code, including MaxiCode Expired - Lifetime US6340119B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/816,173 US6340119B2 (en) 1998-10-22 2001-03-26 Techniques for reading two dimensional code, including MaxiCode

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/176,894 US6088482A (en) 1998-10-22 1998-10-22 Techniques for reading two dimensional code, including maxicode
US09/594,093 US6234397B1 (en) 1998-10-22 2000-06-15 Techniques for reading two dimensional code, including maxicode
US09/816,173 US6340119B2 (en) 1998-10-22 2001-03-26 Techniques for reading two dimensional code, including MaxiCode

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/594,093 Division US6234397B1 (en) 1998-10-22 2000-06-15 Techniques for reading two dimensional code, including maxicode

Publications (2)

Publication Number Publication Date
US20010023896A1 true US20010023896A1 (en) 2001-09-27
US6340119B2 US6340119B2 (en) 2002-01-22

Family

ID=22646323

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/176,894 Expired - Lifetime US6088482A (en) 1998-10-22 1998-10-22 Techniques for reading two dimensional code, including maxicode
US09/594,093 Expired - Lifetime US6234397B1 (en) 1998-10-22 2000-06-15 Techniques for reading two dimensional code, including maxicode
US09/816,173 Expired - Lifetime US6340119B2 (en) 1998-10-22 2001-03-26 Techniques for reading two dimensional code, including MaxiCode

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US09/176,894 Expired - Lifetime US6088482A (en) 1998-10-22 1998-10-22 Techniques for reading two dimensional code, including maxicode
US09/594,093 Expired - Lifetime US6234397B1 (en) 1998-10-22 2000-06-15 Techniques for reading two dimensional code, including maxicode

Country Status (1)

Country Link
US (3) US6088482A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040086191A1 (en) * 2002-10-31 2004-05-06 Microsoft Corporation Passive embedded interaction code
US20040086181A1 (en) * 2002-10-31 2004-05-06 Microsoft Corporation Active embedded interaction code
US20040212620A1 (en) * 1999-08-19 2004-10-28 Adobe Systems Incorporated, A Corporation Device dependent rendering
US20040256462A1 (en) * 2003-04-17 2004-12-23 Staffan Solen Method and device for recording of data
US20070003169A1 (en) * 2002-10-31 2007-01-04 Microsoft Corporation Decoding and Error Correction In 2-D Arrays
US20080191041A1 (en) * 2007-02-08 2008-08-14 Silverbrook Research Pty Ltd Coding Pattern with Flags for Determining Tag Data or Block Data
US7430497B2 (en) 2002-10-31 2008-09-30 Microsoft Corporation Statistical model for global localization
US7826074B1 (en) 2005-02-25 2010-11-02 Microsoft Corporation Fast embedded interaction code printing with custom postscript commands
CN102867205A (en) * 2012-09-19 2013-01-09 腾讯科技(深圳)有限公司 Information management and two-dimensional code generation method and related devices
US20150028110A1 (en) * 2013-07-29 2015-01-29 Owens-Brockway Glass Container Inc. Container with a Data Matrix Disposed Thereon
US20150090795A1 (en) * 2013-09-29 2015-04-02 Founder Mobile Media Technology (Beijing) Co., Ltd. Method and system for detecting detection patterns of qr code
CN108776828A (en) * 2018-06-07 2018-11-09 中国联合网络通信集团有限公司 Two-dimensional code generation method, Quick Response Code generating means and Quick Response Code

Families Citing this family (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2326003B (en) * 1997-06-07 2001-02-28 Aquasol Ltd Coding systems
US6685095B2 (en) * 1998-05-05 2004-02-03 Symagery Microsystems, Inc. Apparatus and method for decoding damaged optical codes
US6088482A (en) 1998-10-22 2000-07-11 Symbol Technologies, Inc. Techniques for reading two dimensional code, including maxicode
FR2788871B1 (en) * 1999-01-22 2001-06-15 Intermec Scanner Technology Ct OPTOELECTRONIC DEVICE FOR ACQUIRING IMAGES OF CODES WITH ONE AND TWO DIMENSIONS
US6176428B1 (en) * 1999-04-07 2001-01-23 Symbol Technologies, Inc. Techniques for reading postal code
US6556690B1 (en) * 1999-06-17 2003-04-29 Eastman Kodak Company Articles bearing invisible encodements on curved surfaces
US6315197B1 (en) * 1999-08-19 2001-11-13 Mitsubishi Electric Research Laboratories Vision-enabled vending machine
US7558563B2 (en) * 1999-09-17 2009-07-07 Silverbrook Research Pty Ltd Retrieving contact details via a coded surface
US6695209B1 (en) * 1999-10-04 2004-02-24 Psc Scanning, Inc. Triggerless optical reader with signal enhancement features
US6973207B1 (en) * 1999-11-30 2005-12-06 Cognex Technology And Investment Corporation Method and apparatus for inspecting distorted patterns
US20060082557A1 (en) * 2000-04-05 2006-04-20 Anoto Ip Lic Hb Combined detection of position-coding pattern and bar codes
US9252955B2 (en) * 2000-08-18 2016-02-02 United States Postal Service Apparatus and methods for the secure transfer of electronic data
AU2001282483A1 (en) 2000-08-29 2002-03-13 Imageid Ltd. Indexing, storage and retrieval of digital images
DE10137093A1 (en) * 2001-07-30 2003-02-13 Sick Ag Recognition of a code, particularly a two-dimensional matrix type code, within a graphical background or image whereby recognition is undertaken using a neuronal network
US20030080191A1 (en) * 2001-10-26 2003-05-01 Allen Lubow Method and apparatus for applying bar code information to products during production
US6801245B2 (en) * 2002-01-18 2004-10-05 Imageid Ltd. Method for automatic identification and data capture
EP1422657A1 (en) * 2002-11-20 2004-05-26 Setrix AG Method of detecting the presence of figures and methods of managing a stock of components
WO2004059483A1 (en) * 2002-12-23 2004-07-15 United States Postal Services Advanced crypto round dater
US7181066B1 (en) 2002-12-26 2007-02-20 Cognex Technology And Investment Corporation Method for locating bar codes and symbols in an image
US7823783B2 (en) 2003-10-24 2010-11-02 Cognex Technology And Investment Corporation Light pipe illumination system and method
US7823789B2 (en) 2004-12-21 2010-11-02 Cognex Technology And Investment Corporation Low profile illumination for direct part mark readers
US9536124B1 (en) 2003-10-24 2017-01-03 Cognex Corporation Integrated illumination assembly for symbology reader
US9070031B2 (en) 2003-10-24 2015-06-30 Cognex Technology And Investment Llc Integrated illumination assembly for symbology reader
US7604174B2 (en) 2003-10-24 2009-10-20 Cognex Technology And Investment Corporation Method and apparatus for providing omnidirectional lighting in a scanning device
US7874487B2 (en) 2005-10-24 2011-01-25 Cognex Technology And Investment Corporation Integrated illumination assembly for symbology reader
US7583842B2 (en) * 2004-01-06 2009-09-01 Microsoft Corporation Enhanced approach of m-array decoding and error correction
CN1922613A (en) * 2004-01-14 2007-02-28 国际条形码公司 System and method for compensating for bar code image distortions
US7263224B2 (en) * 2004-01-16 2007-08-28 Microsoft Corporation Strokes localization by m-array decoding and fast image matching
US20050227217A1 (en) * 2004-03-31 2005-10-13 Wilson Andrew D Template matching on interactive surface
US7204428B2 (en) * 2004-03-31 2007-04-17 Microsoft Corporation Identification of object on interactive display surface by identifying coded pattern
US7394459B2 (en) * 2004-04-29 2008-07-01 Microsoft Corporation Interaction between objects and a virtual environment display
US7270277B1 (en) 2004-05-21 2007-09-18 Koziol Jeffrey E Data encoding mark for placement in a compact area and an object carrying the data encoding mark
US7787706B2 (en) * 2004-06-14 2010-08-31 Microsoft Corporation Method for controlling an intensity of an infrared source used to detect objects adjacent to an interactive display surface
US7593593B2 (en) * 2004-06-16 2009-09-22 Microsoft Corporation Method and system for reducing effects of undesired signals in an infrared imaging system
US7519223B2 (en) 2004-06-28 2009-04-14 Microsoft Corporation Recognizing gestures and using gestures for interacting with software applications
US20060027657A1 (en) 2004-08-04 2006-02-09 Laurens Ninnink Method and apparatus for high resolution decoding of encoded symbols
US20060050961A1 (en) * 2004-08-13 2006-03-09 Mohanaraj Thiyagarajah Method and system for locating and verifying a finder pattern in a two-dimensional machine-readable symbol
US7175090B2 (en) * 2004-08-30 2007-02-13 Cognex Technology And Investment Corporation Methods and apparatus for reading bar code identifications
US7576725B2 (en) * 2004-10-19 2009-08-18 Microsoft Corporation Using clear-coded, see-through objects to manipulate virtual objects
US9292724B1 (en) 2004-12-16 2016-03-22 Cognex Corporation Hand held symbology reader illumination diffuser with aimer optics
US7617984B2 (en) 2004-12-16 2009-11-17 Cognex Technology And Investment Corporation Hand held symbology reader illumination diffuser
US7963448B2 (en) 2004-12-22 2011-06-21 Cognex Technology And Investment Corporation Hand held machine vision method and apparatus
US9552506B1 (en) 2004-12-23 2017-01-24 Cognex Technology And Investment Llc Method and apparatus for industrial identification mark verification
US7607076B2 (en) 2005-02-18 2009-10-20 Microsoft Corporation Embedded interaction code document
US20060215913A1 (en) * 2005-03-24 2006-09-28 Microsoft Corporation Maze pattern analysis with image matching
US20060242562A1 (en) * 2005-04-22 2006-10-26 Microsoft Corporation Embedded method for embedded interaction code array
US7421439B2 (en) 2005-04-22 2008-09-02 Microsoft Corporation Global metadata embedding and decoding
US7499027B2 (en) * 2005-04-29 2009-03-03 Microsoft Corporation Using a light pointer for input on an interactive display surface
US7400777B2 (en) * 2005-05-25 2008-07-15 Microsoft Corporation Preprocessing for information pattern analysis
US7729539B2 (en) * 2005-05-31 2010-06-01 Microsoft Corporation Fast error-correcting of embedded interaction codes
US7580576B2 (en) * 2005-06-02 2009-08-25 Microsoft Corporation Stroke localization and binding to electronic document
US7412106B1 (en) 2005-06-25 2008-08-12 Cognex Technology And Investment Corporation Methods for locating and decoding distorted two-dimensional matrix symbols
US7525538B2 (en) * 2005-06-28 2009-04-28 Microsoft Corporation Using same optics to image, illuminate, and project
US7817816B2 (en) * 2005-08-17 2010-10-19 Microsoft Corporation Embedded interaction code enabled surface type identification
US7911444B2 (en) 2005-08-31 2011-03-22 Microsoft Corporation Input method for surface of interactive display
US7756526B2 (en) 2005-09-19 2010-07-13 Silverbrook Research Pty Ltd Retrieving a web page via a coded surface
US7558599B2 (en) * 2005-09-19 2009-07-07 Silverbrook Research Pty Ltd Printing a bill using a mobile device
US7855805B2 (en) 2005-09-19 2010-12-21 Silverbrook Research Pty Ltd Printing a competition entry form using a mobile device
US7621442B2 (en) 2005-09-19 2009-11-24 Silverbrook Research Pty Ltd Printing a subscription using a mobile device
US7995054B2 (en) * 2005-11-21 2011-08-09 Leica Geosystems Ag Identification of edge regions from 3D point data
US7843448B2 (en) * 2005-11-21 2010-11-30 Leica Geosystems Ag Identification of occluded edge regions from 3D point data
US7942340B2 (en) * 2005-11-24 2011-05-17 Canon Kabushiki Kaisha Two-dimensional code, and method and apparatus for detecting two-dimensional code
US7878402B2 (en) * 2005-12-20 2011-02-01 Cognex Technology And Investment Corporation Decoding distorted symbols
US8060840B2 (en) * 2005-12-29 2011-11-15 Microsoft Corporation Orientation free user interface
US7614563B1 (en) 2005-12-29 2009-11-10 Cognex Technology And Investment Corporation System and method for providing diffuse illumination in a symbology reader
US7515143B2 (en) * 2006-02-28 2009-04-07 Microsoft Corporation Uniform illumination of interactive display panel
US8150163B2 (en) * 2006-04-12 2012-04-03 Scanbuy, Inc. System and method for recovering image detail from multiple image frames in real-time
PL2023812T3 (en) 2006-05-19 2017-07-31 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US8108176B2 (en) 2006-06-29 2012-01-31 Cognex Corporation Method and apparatus for verifying two dimensional mark quality
US9213875B1 (en) * 2006-07-18 2015-12-15 Cognex Corporation System and method for automatically modeling symbology data in a symbology reader
US8169478B2 (en) 2006-12-14 2012-05-01 Cognex Corporation Method and apparatus for calibrating a mark verifier
US8212857B2 (en) * 2007-01-26 2012-07-03 Microsoft Corporation Alternating light sources to reduce specular reflection
US8335341B2 (en) * 2007-09-07 2012-12-18 Datalogic ADC, Inc. Compensated virtual scan lines
US7854385B2 (en) * 2007-10-31 2010-12-21 Symbol Technologies, Inc. Automatic region of interest focusing for an imaging-based bar code reader
US9734376B2 (en) 2007-11-13 2017-08-15 Cognex Corporation System and method for reading patterns using multiple image frames
US9606209B2 (en) 2011-08-26 2017-03-28 Kineticor, Inc. Methods, systems, and devices for intra-scan motion correction
EP2801055B1 (en) * 2012-01-02 2016-04-20 Telecom Italia S.p.A. Method and system for image analysis
US9946947B2 (en) * 2012-10-31 2018-04-17 Cognex Corporation System and method for finding saddle point-like structures in an image and determining information from the same
US9511276B2 (en) 2012-11-30 2016-12-06 Michael S. Caffrey Gaming system using gaming surface having computer readable indicia and method of using same
US10327708B2 (en) 2013-01-24 2019-06-25 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9717461B2 (en) 2013-01-24 2017-08-01 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9305365B2 (en) 2013-01-24 2016-04-05 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
WO2014120734A1 (en) 2013-02-01 2014-08-07 Kineticor, Inc. Motion tracking system for real time adaptive motion compensation in biomedical imaging
WO2015148391A1 (en) 2014-03-24 2015-10-01 Thomas Michael Ernst Systems, methods, and devices for removing prospective motion correction from medical imaging scans
WO2016014718A1 (en) 2014-07-23 2016-01-28 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
DE102015208121A1 (en) * 2015-04-30 2016-11-03 Prüftechnik Dieter Busch AG Method for obtaining information from a coding body, system with a coding body, computer program product and data storage means
EP3109823A1 (en) 2015-06-22 2016-12-28 Sick IVP AB Method and arrangements for estimating one or more dominating orientations in a digital image
US9943247B2 (en) 2015-07-28 2018-04-17 The University Of Hawai'i Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan
WO2017091479A1 (en) 2015-11-23 2017-06-01 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
CN106503638B (en) * 2016-10-13 2019-09-13 金鹏电子信息机器有限公司 Image procossing, vehicle color identification method and system for color identification
MX2020001107A (en) 2017-07-28 2020-10-28 Coca Cola Co Method and apparatus for encoding and decoding circular symbolic codes.
US10762405B2 (en) 2017-10-26 2020-09-01 Datalogic Ip Tech S.R.L. System and method for extracting bitstream data in two-dimensional optical codes
US11222188B2 (en) * 2017-12-22 2022-01-11 Dell Products L.P. Using passively represented information to identify items within a multi-dimensional space
US10483303B1 (en) * 2018-11-02 2019-11-19 Omnivision Technologies, Inc. Image sensor having mirror-symmetrical pixel columns
CN111753573B (en) * 2020-06-28 2023-09-15 北京奇艺世纪科技有限公司 Two-dimensional code image recognition method and device, electronic equipment and readable storage medium
CN113076768B (en) * 2021-04-08 2023-04-11 中山大学 Distortion correction method for fuzzy recognizable two-dimensional code
CN114143474B (en) * 2021-12-06 2023-10-10 广州尚臣电子有限公司 Image average gray level-based somatosensory processing method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5635697A (en) 1989-03-01 1997-06-03 Symbol Technologies, Inc. Method and apparatus for decoding two-dimensional bar code
CA1334218C (en) * 1989-03-01 1995-01-31 Jerome Swartz Hand-held laser scanning for reading two dimensional bar codes
US5637849A (en) * 1995-05-31 1997-06-10 Metanetics Corporation Maxicode data extraction using spatial domain features
US5966463A (en) * 1995-11-13 1999-10-12 Meta Holding Corporation Dataform readers using interactive storage and analysis of image data
JP4122629B2 (en) * 1998-09-03 2008-07-23 株式会社デンソー 2D code generation method
US6088482A (en) * 1998-10-22 2000-07-11 Symbol Technologies, Inc. Techniques for reading two dimensional code, including maxicode

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060181533A1 (en) * 1999-08-19 2006-08-17 Adobe Systems Incorporated, A Delaware Corporation Device dependent rendering
US7646387B2 (en) 1999-08-19 2010-01-12 Adobe Systems Incorporated Device dependent rendering
US20040212620A1 (en) * 1999-08-19 2004-10-28 Adobe Systems Incorporated, A Corporation Device dependent rendering
US7425960B2 (en) * 1999-08-19 2008-09-16 Adobe Systems Incorporated Device dependent rendering
US20070104372A1 (en) * 2002-10-31 2007-05-10 Microsoft Corporation Active embedded interaction coding
US7486822B2 (en) 2002-10-31 2009-02-03 Microsoft Corporation Active embedded interaction coding
EP1416433A3 (en) * 2002-10-31 2005-07-13 Microsoft Corporation Active embedded interaction coding
US7133563B2 (en) 2002-10-31 2006-11-07 Microsoft Corporation Passive embedded interaction code
US20070003169A1 (en) * 2002-10-31 2007-01-04 Microsoft Corporation Decoding and Error Correction In 2-D Arrays
US20040086191A1 (en) * 2002-10-31 2004-05-06 Microsoft Corporation Passive embedded interaction code
US20070104371A1 (en) * 2002-10-31 2007-05-10 Microsoft Corporation Active embedded interaction coding
US7684618B2 (en) 2002-10-31 2010-03-23 Microsoft Corporation Passive embedded interaction coding
US7330605B2 (en) 2002-10-31 2008-02-12 Microsoft Corporation Decoding and error correction in 2-D arrays
US7386191B2 (en) 2002-10-31 2008-06-10 Microsoft Corporation Decoding and error correction in 2-D arrays
US20040086181A1 (en) * 2002-10-31 2004-05-06 Microsoft Corporation Active embedded interaction code
US7502507B2 (en) 2002-10-31 2009-03-10 Microsoft Corporation Active embedded interaction code
US7502508B2 (en) 2002-10-31 2009-03-10 Microsoft Corporation Active embedded interaction coding
US20060165290A1 (en) * 2002-10-31 2006-07-27 Microsoft Corporation Active embedded interaction coding
US7486823B2 (en) 2002-10-31 2009-02-03 Microsoft Corporation Active embedded interaction coding
US7430497B2 (en) 2002-10-31 2008-09-30 Microsoft Corporation Statistical model for global localization
US7303130B2 (en) 2003-04-17 2007-12-04 Anoto Group Ab Method and device for recording of data
US20040256462A1 (en) * 2003-04-17 2004-12-23 Staffan Solen Method and device for recording of data
US7826074B1 (en) 2005-02-25 2010-11-02 Microsoft Corporation Fast embedded interaction code printing with custom postscript commands
US8107733B2 (en) 2007-02-08 2012-01-31 Silverbrook Research Pty Ltd Method of imaging coding pattern comprising replicated and non-replicated coordinate data
US20080191041A1 (en) * 2007-02-08 2008-08-14 Silverbrook Research Pty Ltd Coding Pattern with Flags for Determining Tag Data or Block Data
US20080193054A1 (en) * 2007-02-08 2008-08-14 Silverbrook Research Pty Ltd Method of Imaging Coding Pattern Comprising Replicated and Non-Replicated Coordinate Data
US20080193030A1 (en) * 2007-02-08 2008-08-14 Silverbrook Research Pty Ltd Coding Pattern Comprising Replicated and Non-Replicated Coordinate Data
US8006912B2 (en) * 2007-02-08 2011-08-30 Silverbrook Research Pty Ltd Coding pattern with flags for determining tag data or block data
US20080191021A1 (en) * 2007-02-08 2008-08-14 Silverbrook Research Pty Ltd Method of Imaging Coding Pattern using Flags for Determining Tag Data or Block Data
WO2014044174A1 (en) * 2012-09-19 2014-03-27 Tencent Technology (Shenzhen) Company Limited Information obtaining method and apparatus
CN102867205A (en) * 2012-09-19 2013-01-09 腾讯科技(深圳)有限公司 Information management and two-dimensional code generation method and related devices
US9117130B2 (en) 2012-09-19 2015-08-25 Tencent Technology (Shenzhen) Company Limited Information obtaining method and apparatus
KR101554518B1 (en) 2012-09-19 2015-09-21 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 Information obtaining method and apparatus
JP2015534672A (en) * 2012-09-19 2015-12-03 テンセント テクノロジー (シェンツェン) カンパニー リミテッド Information acquisition method and information acquisition apparatus
US20150028110A1 (en) * 2013-07-29 2015-01-29 Owens-Brockway Glass Container Inc. Container with a Data Matrix Disposed Thereon
US20150090795A1 (en) * 2013-09-29 2015-04-02 Founder Mobile Media Technology (Beijing) Co., Ltd. Method and system for detecting detection patterns of qr code
US9177188B2 (en) * 2013-09-29 2015-11-03 Peking University Founder Group Co., Ltd. Method and system for detecting detection patterns of QR code
CN108776828A (en) * 2018-06-07 2018-11-09 中国联合网络通信集团有限公司 Two-dimensional code generation method, Quick Response Code generating means and Quick Response Code

Also Published As

Publication number Publication date
US6088482A (en) 2000-07-11
US6234397B1 (en) 2001-05-22
US6340119B2 (en) 2002-01-22

Similar Documents

Publication Publication Date Title
US6234397B1 (en) Techniques for reading two dimensional code, including maxicode
CA2187209C (en) Method and apparatus for decoding two-dimensional symbols in the spatial domain
US6097839A (en) Method and apparatus for automatic discriminating and locating patterns such as finder patterns, or portions thereof, in machine-readable symbols
US5189292A (en) Finder pattern for optically encoded machine readable symbols
US5742041A (en) Method and apparatus for locating and decoding machine-readable symbols, including data matrix symbols
CN110414293B (en) Decoding bar codes
EP0754328B1 (en) Method and apparatus for decoding bar code images using information from previous scan lines
US8534567B2 (en) Method and system for creating and using barcodes
US6064763A (en) Time-efficient method of analyzing imaged input data to locate two-dimensional machine-readable symbols or other linear images therein
US6250551B1 (en) Autodiscrimination and line drawing techniques for code readers
US6708884B1 (en) Method and apparatus for rapid and precision detection of omnidirectional postnet barcode location
US5936224A (en) Method and apparatus for reading machine-readable symbols by employing a combination of multiple operators and/or processors
US5515447A (en) Method and apparatus for locating an acquisition target in two-dimensional images by detecting symmetry in two different directions
EP0669593B1 (en) Two-dimensional code recognition method
US5814801A (en) Maxicode data extraction using spatial domain features exclusive of fourier type domain transfer processing
EP0880103B1 (en) Method and apparatus for detecting and decoding bar code symbols
EP0887760B1 (en) Method and apparatus for decoding bar code symbols
EP1138013B1 (en) Skew processing of raster scan images
US6386454B2 (en) Detecting bar code candidates
US5777309A (en) Method and apparatus for locating and decoding machine-readable symbols
EP0754327B1 (en) Method and apparatus for decoding bar code images using multi-order feature vectors
CN100390807C (en) Trilateral poly-dimensional bar code easy for omnibearing recognition and reading method thereof
JPH0950473A (en) Method and apparatus for decoding of undecided complicated large-width bar-code sign profile
US5786583A (en) Method and apparatus for locating and decoding machine-readable symbols
EP0996079B1 (en) Method for locating codes in bidimensional images

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:SYMBOL TECHNOLOGIES, INC.;REEL/FRAME:016116/0203

Effective date: 20041229

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: SYMBOL TECHNOLOGIES, INC., NEW YORK

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGANCHASE BANK, N.A.;REEL/FRAME:025441/0228

Effective date: 20060901

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: SYMBOL TECHNOLOGIES, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HE, DUANFENG;HUNTER, KEVIN;JOSEPH, EUGENE;SIGNING DATES FROM 19981012 TO 19981016;REEL/FRAME:033167/0054

AS Assignment

Owner name: MORGAN STANLEY SENIOR FUNDING, INC. AS THE COLLATERAL AGENT, MARYLAND

Free format text: SECURITY AGREEMENT;ASSIGNORS:ZIH CORP.;LASER BAND, LLC;ZEBRA ENTERPRISE SOLUTIONS CORP.;AND OTHERS;REEL/FRAME:034114/0270

Effective date: 20141027

Owner name: MORGAN STANLEY SENIOR FUNDING, INC. AS THE COLLATE

Free format text: SECURITY AGREEMENT;ASSIGNORS:ZIH CORP.;LASER BAND, LLC;ZEBRA ENTERPRISE SOLUTIONS CORP.;AND OTHERS;REEL/FRAME:034114/0270

Effective date: 20141027

AS Assignment

Owner name: SYMBOL TECHNOLOGIES, LLC, NEW YORK

Free format text: CHANGE OF NAME;ASSIGNOR:SYMBOL TECHNOLOGIES, INC.;REEL/FRAME:036083/0640

Effective date: 20150410

AS Assignment

Owner name: SYMBOL TECHNOLOGIES, INC., NEW YORK

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:036371/0738

Effective date: 20150721