WO2007068901A1 - Image processing method and apparatus - Google Patents


Info

Publication number
WO2007068901A1
Authority
WO
WIPO (PCT)
Prior art keywords
value
predetermined
pixel
predetermined direction
pixels
Application number
PCT/GB2006/004625
Other languages
French (fr)
Inventor
Christopher Reginald Chatwin
Rupert Charles David Young
Frederic Vladimir Claret-Tournier
Karlis Harold Obrams
Original Assignee
Xvista Limited
Application filed by Xvista Limited
Publication of WO2007068901A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20004 Adaptive image processing
    • G06T 2207/20008 Globally adaptive

Definitions

  • By way of comparison, it is conventional to test every pixel of the image in order to determine the threshold value, requiring in this case some 311040 pixels to be tested. The method of the present invention uses some 150 times fewer computing operations to determine the threshold value.
  • Conventionally, edge detection can be accomplished with the use of a differentiation filter and a procedure to detect the ends of the edges; the filtering operation requires 640x480x2 operations and edge end detection requires 640x480 operations for each edge. The procedure described above for determining the ends of the finder patterns 9 within a rotation angle of +/- 20 degrees requires only some 1200 operations and therefore represents an improvement factor in excess of 1000.
  • The total number of operations required for the procedure described above with reference to Figures 1 to 11 is of the order of 4000 operations, resulting in an overall improvement factor of about 400.

Abstract

A pixel-based image (3) is analysed by scanning the image in a predetermined direction (15) from a predetermined point along a first edge (9) of the image towards a second edge (11). The brightness of the first pixel is determined and a value for a background value is established on the basis of the brightness of the first pixel. The brightness of each subsequent pixel is then tested in the predetermined direction (15) and, if any such subsequent pixel is found to have a value which differs substantially from the background value, the tested value is substituted for the background value. After a predetermined number of tested pixels have been found to have substantially the same background value, each subsequent pixel is tested in the following manner: a value for a foreground value is established on the basis of the tested pixel if the tested pixel is found to have a value which differs from the background value by more than a predetermined amount; and a threshold value is determined on the basis of the foreground value and the background value after a predetermined number of foreground pixels and a predetermined number of background pixels have been identified.

Description

IMAGE PROCESSING METHOD AND APPARATUS
This invention relates to a method of and to an apparatus for processing a pixel- based image.
Digital image processing techniques are generally considered as convolution or filtering operations, where each pixel is used for several mathematical operations. Such techniques are very important in digital image processing for improving the quality of images, for detecting shapes and patterns for still images, or for detecting movement in video input.
Convolution or filtering techniques are intended to be applied to a range of input images, producing a reliable and satisfactory result. However, these techniques require substantial computing power to perform the mathematical operations involved. For example, performing a filter operation on an image using an NxM mask results in the computation of NxM multiplications and NxM additions for each pixel of the input image. In total, a relatively simple filter results in 2xNxMx(image height)x(image width) operations. In the case of a VGA input and a 5x5 filtering mask, the number of operations could be up to 2x5x5x480x640 ≈ 1.5 x 10^7 elementary operations. This number of operations is acceptable when powerful workstations or purpose-built digital signal processor (DSP) arrays are employed to perform the calculations, but when smaller handheld computing devices, such as PDAs (personal digital assistants) or mobile telephones, are being used, real-time image processing cannot be achieved.
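By way of illustration only (this calculation is not part of the original text), the operation count quoted above for a VGA frame and a 5x5 mask can be reproduced as follows:

```python
# Operation count for a plain (non-separable) N x M convolution applied to every
# pixel of a VGA image: one multiplication and one addition per mask element per pixel.
mask_n, mask_m = 5, 5        # 5 x 5 filtering mask
height, width = 480, 640     # VGA input

operations = 2 * mask_n * mask_m * height * width
print(operations)            # 15 360 000, i.e. roughly 1.5 x 10^7 elementary operations
```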
It is well known to provide mobile telephones and PDAs with cameras which are able to take both still and video images. There is therefore a demand for image processing techniques which enable such devices to process images in real time. It is therefore an object of the present invention to provide an image processing technique which is able to effect real time image processing on a low computing power device.
According to a first aspect of the present invention there is provided a method of analysing a pixel-based image, which method comprises the steps of:
scanning an image in a predetermined direction from a predetermined point along a first edge of the image towards a second edge thereof;
determining a brightness of the first pixel and establishing a value for a background value on the basis of the brightness of the first pixel;
testing the brightness of each subsequent pixel in the predetermined direction and, if any such subsequent pixel is found to have a value, selected from a first one of a higher value and a lower value, which differs substantially from the background value, substituting the tested value for the background value; and
after a predetermined number of tested pixels have been found to have substantially the same background value, testing each subsequent pixel in the following manner:
establishing a value for a foreground value on the basis of the tested pixel if the tested pixel is found to have a value, selected from a second one of the higher value and the lower value, which differs from the background value by more than a predetermined amount; and
determining a threshold value on the basis of the foreground value and the background value after a predetermined number of foreground pixels and a predetermined number of background pixels have been identified.
According to a second aspect of the present invention there is provided an apparatus for analysing a pixel-based image, which apparatus comprises:
means for scanning an image in a predetermined direction from a predetermined point along a first edge of the image towards a second edge thereof;
means for determining a brightness of the first pixel and for establishing a value for a background value on the basis of the brightness of the first pixel;
means for testing the brightness of each subsequent pixel in the predetermined direction and, if any such subsequent pixel is found to have a value, selected from a first one of a higher value and a lower value, which differs substantially from the background value, means for substituting the tested value for the background value; and
after a predetermined number of tested pixels have been found to have substantially the same background value, means for testing each subsequent pixel in the following manner:
establishing a value for a foreground value on the basis of the tested pixel if the tested pixel is found to have a value, selected from a second one of the higher value and the lower value, which differs from the background value by more than a predetermined amount; and
determining a threshold value on the basis of the foreground value and the background value after a predetermined number of foreground pixels and a predetermined number of background pixels have been identified.
The background value may be determined by identifying one or more subsequent pixels which have a tested value higher than the background value.
The threshold value may be determined on the basis of the average of the foreground value and the background value.
The image may be scanned in a direction which extends substantially diagonally across the image.
One or more further scans may be conducted in the predetermined direction from one or more further predetermined points offset from the first-mentioned predetermined point.
According to a third aspect of the present invention there is provided a method for determining the length and orientation of a continuous rectilinear line in a pixel-based image, which method comprises the steps of:
scanning an image in a first predetermined direction from a predetermined point along a first edge of the image so as to intersect the continuous rectilinear line;
testing the brightness of each pixel in the predetermined direction until a pixel is identified which has a value, selected from a higher value and a lower value, which crosses a predetermined threshold value, the identified pixel being presumed to lie on an edge of the line;
scanning the image in a second predetermined direction starting from the identified pixel, the second predetermined direction being at right angles to the first predetermined direction, in the following manner:
moving a predetermined number of pixels in the second predetermined direction and selecting an initial pixel;
determining the brightness of a predetermined number of pixels either side of the initial pixel in the first predetermined direction;
scanning the pixels in the first predetermined direction so as to identify a pixel which has a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, the identified pixel being presumed to lie on the edge of the line;
repeating the scans in the second predetermined direction until such time as no pixel is found to have a value which crosses the predetermined threshold value;
scanning the image in a third predetermined direction starting from the first identified pixel, the third predetermined direction being opposite to the second predetermined direction, in the following manner:
moving a predetermined number of pixels in the third predetermined direction and selecting an initial pixel;
determining the brightness of a predetermined number of pixels either side of the initial pixel in the first predetermined direction;
scanning the pixels in the first predetermined direction so as to identify a pixel which has a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, the identified pixel being presumed to lie on the edge of the line;
repeating the scans in the third predetermined direction until such time as no pixel is found to have a value which crosses the predetermined threshold value;
determining the length of the line from the distance between the scans in the second and third directions in which no pixel is found to have a value which crosses the predetermined threshold value; and determining the orientation of the line on the basis of the offset between identified pixels in the first predetermined direction and between identified pixels in at least one of the second and third directions.
The first predetermined direction may be an orthogonal direction. The first predetermined direction may be chosen so as to intersect the continuous rectilinear line at a predetermined angle, for example substantially at right angles.
According to a fourth aspect of the present invention there is provided an apparatus for determining the length and orientation of a continuous rectilinear line in a pixel-based image, which apparatus comprises:
means for scanning an image in a first predetermined direction from a predetermined point along a first edge of the image so as to intersect the continuous rectilinear line;
means for testing the brightness of each pixel in the predetermined direction until a pixel is identified which has a value, selected from a higher value and a lower value, which crosses a predetermined threshold value, the identified pixel being presumed to lie on an edge of the line;
means for scanning the image in a second predetermined direction starting from the identified pixel, the second predetermined direction being at right angles to the first predetermined direction, in the following manner:
moving a predetermined number of pixels in the second predetermined direction and selecting an initial pixel;
determining the brightness of a predetermined number of pixels either side of the initial pixel in the first predetermined direction;
scanning the pixels in the first predetermined direction so as to identify a pixel which has a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, the identified pixel being presumed to lie on the edge of the line;
means for repeating the scans in the second predetermined direction until such time as no pixel is found to have a value which crosses the predetermined threshold value;
means for scanning the image in a third predetermined direction starting from the first identified pixel, the third predetermined direction being opposite to the second predetermined direction, in the following manner:
moving a predetermined number of pixels in the third predetermined direction and selecting an initial pixel;
determining the brightness of a predetermined number of pixels either side of the initial pixel in the first predetermined direction;
scanning the pixels in the first predetermined direction so as to identify a pixel which has a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, the identified pixel being presumed to lie on the edge of the line;
means for repeating the scans in the third predetermined direction until such time as no pixel is found to have a value which crosses the predetermined threshold value;
means for determining the length of the line from the distance between the scans in the second and third directions in which no pixel is found to have a value which crosses the predetermined threshold value; and means for determining the orientation of the line on the basis of the offset between identified pixels in the first predetermined direction and between identified pixels in at least one of the second and third directions.
The first predetermined direction may be an orthogonal direction. The first predetermined direction may be chosen so as to intersect the continuous rectilinear line at a predetermined angle, for example substantially at right angles.
The threshold value may be determined in accordance with the first or second aspect of the present invention.
The image may be scanned in a further predetermined direction substantially perpendicular to the first predetermined direction so as to determine the length and orientation of a further rectilinear line. The further rectilinear line may be substantially at right angles to the first-mentioned rectilinear line. The further predetermined direction may be an orthogonal direction.
The orientation of the line may be determined on the basis of the offset as represented by the number of pixels in the first predetermined direction between the last of the identified pixels when scanning in the second predetermined direction and the last of the identified pixels when scanning in the third predetermined direction and the offset in the second and third predetermined directions as represented by the length of the line.
According to a fifth aspect of the present invention there is provided a method for determining the number of transitions between light and dark areas in a pixel- based image, which method comprises the steps of:
scanning a first line in a predetermined direction from a predetermined starting point and testing each pixel in turn so as to identify pixels which have a value, selected from a higher value and a lower value, which crosses a predetermined threshold, and storing the number of transitions; and
scanning a plurality of further lines in sequence from the predetermined starting point, the lines being scanned at progressive predetermined angles relative to the first line, and testing each pixel in turn so as to identify pixels which have a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, comparing the number of transitions to the stored number of transitions, and replacing the stored number of transitions in the event the number exceeds the stored number.
According to a sixth aspect of the present invention there is provided an apparatus for determining the number of transitions between light and dark areas in a pixel-based image, which apparatus comprises:
means for scanning a first line in a predetermined direction from a predetermined starting point and testing each pixel in turn so as to identify pixels which have a value, selected from a higher value and a lower value, which crosses a predetermined threshold, and storing the number of transitions; and
means for scanning a plurality of further lines in sequence from the predetermined starting point, the lines being scanned at progressive predetermined angles relative to the first line, and testing each pixel in turn so as to identify pixels which have a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, comparing the number of transitions to the stored number of transitions, and replacing the stored number of transitions in the event the number exceeds the stored number.
The threshold value may be determined in accordance with the first or second aspect of the present invention. The angle of the line may be stored together with the number of transitions.
The lines may be scanned at angles which are initially outside a predetermined region of the image and which rotate progressively towards and into the predetermined region.
For a better understanding of the present invention and to show more clearly how it may be carried into effect, reference will now be made, by way of example, to the accompanying drawings, in which:
Figure 1 is a schematic illustration of a mobile telephone acquiring an image in the form of a data matrix symbol;
Figure 2 shows the particular data matrix symbol on a larger scale;
Figure 3 is a flow chart illustrating the basic steps employed in thresholding the image;
Figure 4 is a flow chart representing stages of image thresholding;
Figure 5 illustrates a procedure for identifying a threshold in relation to an image;
Figure 6 is a flow chart representing the stages of identifying a threshold;
Figure 7 illustrates a procedure for identifying a finder pattern in relation to an image;
Figure 8 is a flow chart representing the stages of identifying a finder pattern;
Figure 9 illustrates a procedure for identifying density bars in relation to an image;
Figure 10 is a flow chart representing the stages of identifying the density bars;
Figure 11 illustrates a procedure for scanning the data matrix; and
Figure 12 is a flow chart representing the stages of scanning the data matrix.
The present invention will be illustrated with reference to the processing of an image in the form of a data matrix symbol. A data matrix is a known and very efficient two dimensional barcode symbology which employs a square module perimeter pattern. However, it will be appreciated that the description of such an image is merely by way of illustration and not by way of limitation.
Figure 1 shows a portable computing device in the form of a mobile telephone 1 which is provided with a camera that can be used to acquire an image 3 and to store the image data in memory, for example in greyscale or in colour format for real-time processing as will be explained hereinafter. The portable computing device is equipped with a digital display which is capable of displaying an image to the user. In this way, the displayed image can be used to position and focus the image to be acquired. Such a method of acquiring an image allows the design of a procedure for analysing the image to be simplified because relatively few scaling and rotation operations are required, which operations are particularly numerically intensive. An image to be processed is selected to be the first available image after a command is issued by a user and any subsequent image if the analysis procedure fails to produce a successful result. As will be explained hereinafter, processing will be carried out in a number of distinct stages and failure at any stage will require the acquisition of a new image.
As can be seen from Figures 1 and 2, a typical data matrix symbol has a zone 5 around a pattern 7, which zone is referred to as a quiet zone. The quiet zone 5 is used to distinguish the pattern from background. The data matrix symbol includes a finder pattern 9 in the form of L-shaped solid lines, generally situated at the left and bottom edges of the pattern, and two density bars 11, generally situated at the top and right edges of the pattern, which are composed of alternating dark and light modules. Within the area defined by the finder pattern and density bars is a binary pattern 13 which contains encoded data represented as a collection of dark and light modules.
In order to decode the data matrix symbol it is necessary successfully to conduct four process steps as illustrated in Figure 3. The image already having been acquired, the steps are determination of a threshold for binarisation, detection of the L-shaped finder pattern, detection of the density bars, and detection of the binary pattern. As illustrated in Figure 4, in the event any of these processes should fail, the procedure will abort and return to the beginning in order to acquire a fresh image. It should be noted that any fresh image is likely to differ from the previous image and the outcome of the procedure is also likely to be different.
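Purely as an illustrative sketch of this control flow (the function and parameter names below are assumptions, not part of the disclosure), the four stages and the retry-on-failure behaviour of Figures 3 and 4 can be expressed as a simple loop:

```python
def decode_symbol(acquire, stages, max_attempts=10):
    """Run the processing stages in order on a freshly acquired image and restart
    with a new image if any stage fails. `acquire` returns a greyscale image;
    each entry of `stages` takes and returns a dict of intermediate results, or
    returns None on failure. All names here are illustrative."""
    for _ in range(max_attempts):
        state = {"image": acquire()}
        for stage in stages:      # e.g. threshold, finder pattern, density bars, cell reading
            result = stage(state)
            if result is None:    # stage failed: discard this image and acquire another
                break
            state.update(result)
        else:                     # every stage succeeded
            return state
    return None                   # no image could be decoded within max_attempts
```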
Figure 4 is a flow chart representing the stages of image thresholding in order to be able to obtain a black and white image from a 256 level greyscale input. Because the data matrix symbol consists of dark and light cells, thresholding of the image is essential for correct decoding of the data within the symbol.
Nevertheless, computing a threshold for an unconstrained scene is a difficult problem. In the present case, it can be assumed that the image consists primarily of the data matrix symbol, and the maximum and minimum reflectances can be determined by identifying a small number of pixels at each level.
An initial assumption is made that the symbol to be scanned is situated on one of the diagonal lines 15 shown in Figure 5. The diagonal lines are scanned in sequence in the manner illustrated in Figure 6, beginning with the centre diagonal, then proceeding to the rightmost diagonal if required, and finally proceeding to the leftmost diagonal if required. The first pixel is presumed to be a background pixel and the background value is set accordingly. Each subsequent pixel along the diagonal line is then tested in turn by incrementing the column and decrementing the row (because the top left pixel of the image is designated pixel [0,0] and the bottom right pixel is designated pixel [max1, max2]). If any pixel is found to have a significantly higher value than the current background value, the background value is reset to the higher value. If the symbol is on the diagonal 15 that is being scanned, it is inevitable that pixels forming part of the quiet zone 5 will be tested and that the background value will be determined correctly.
Once the quiet zone 5 has been located by identifying a predetermined minimum number of pixels having the same value, for example thirty, the subsequent pixels are tested in a somewhat different manner. If the pixel value is less than the background value minus a predetermined tolerance (the value of which can readily be determined by experiment) the pixel is considered to be a foreground pixel and the foreground value is set to the value of that pixel. When a predetermined number of foreground pixels have been identified together with a predetermined number of background pixels (which predetermined number may be different to the number required to identify the quiet zone) a threshold value is calculated by averaging the background value and the foreground value. If a threshold value cannot be calculated a new starting point is chosen and the second and third diagonal lines are scanned. In the event a pixel to be tested lies outside the image area an exception is generated, a new image is captured and the procedure is repeated.
In this way the steps illustrated in Figure 6 can be followed to adjust the tolerance, as represented by the threshold value, which controls the contrast of the image on the basis of a minimum number of background and foreground pixels required to assess the threshold value.
To summarise the steps according to Figure 6, the algorithm initially determines the lightest pixel prior to the data matrix symbol, and then identifies predetermined numbers of background and foreground pixels in order to correctly determine the threshold value. It has been found that a good approximation of an ideal threshold value is important for subsequent image processing, because a higher or a lower value would respectively increase or decrease the size of the dark regions of the data matrix.
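The following sketch illustrates one possible reading of this thresholding scan (a minimal sketch only: the quiet-zone run length, tolerance and pixel counts shown are illustrative defaults rather than values taken from the disclosure, and the image is assumed to be a 2-D list of greyscale values indexed [row][col]):

```python
def find_threshold(image, start, step=(-1, 1), quiet_run=30, tolerance=40,
                   need_fg=10, need_bg=10):
    """Scan one diagonal of the image from `start`, estimating a background level,
    locating the quiet zone and then collecting foreground and background pixels
    until a threshold can be computed. Returns the threshold, or None if the scan
    leaves the image before enough pixels have been found."""
    rows, cols = len(image), len(image[0])
    r, c = start
    background = image[r][c]          # first pixel presumed to be background
    run = 0                           # consecutive pixels consistent with the background
    foreground = None
    fg_seen = bg_seen = 0
    quiet_zone_found = False

    r, c = r + step[0], c + step[1]   # e.g. step (-1, 1): decrement row, increment column
    while 0 <= r < rows and 0 <= c < cols:
        value = image[r][c]
        if not quiet_zone_found:
            if value > background:    # lighter pixel: adopt it as the new background value
                background = value
                run = 0
            else:
                run += 1
            quiet_zone_found = run >= quiet_run
        else:
            if value < background - tolerance:   # dark enough to count as symbol foreground
                foreground = value
                fg_seen += 1
            else:
                bg_seen += 1
            if foreground is not None and fg_seen >= need_fg and bg_seen >= need_bg:
                return (background + foreground) // 2   # average of the two levels
        r, c = r + step[0], c + step[1]
    return None   # threshold not established on this diagonal: try another starting point
```

In the full procedure the same scan would be repeated from the further starting points, and a pixel falling outside the image area would trigger acquisition of a new frame.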
The next step is to identify the finder pattern 9. This is accomplished in accordance with Figures 7 and 8. As noted previously, the finder pattern is in the form of two solid lines forming an L, situated at the left and bottom edges of the pattern 7. The finder pattern is used primarily to determine the physical size and orientation of the image together with any distortion of the symbol.
The finder pattern 9 is identified by scanning along two lines, one vertical line 17 and one horizontal line 19, substantially in the middle of the image as illustrated in Figure 7. The procedure for each of the lines 17 and 19 is shown in Figure 8 and will be explained with reference to the horizontal line 19. Scanning along horizontal line 19 from the left hand side of the image, when a pixel (an edge pixel) is encountered that is likely to form part of the data matrix symbol, that is it has a value less than the threshold value, the pixel is assumed to be along the left hand side of the straight upright arm of the L. A search is then conducted in a direction perpendicular to the original scanning line 19, that is in a vertical direction. The search initially proceeds either upwardly or downwardly and subsequently proceeds in the opposite direction. For the present explanation it will be presumed that the search initially proceeds upwardly.
The objective is to follow the left hand edge of the arm of the finder pattern until the quiet zone 5 is encountered. The quiet zone is considered to be encountered if a predetermined number of pixels representing the minimum size of the quiet zone in the search direction are found to be above the threshold value. Once the quiet zone has been encountered, the end of the arm of the finder pattern has been determined and the search can be conducted in the opposite direction until the other end of the arm of the finder pattern is determined. For an edge pixel on the upright arm of the finder pattern 9 and a subsequent search in an upward direction, the vertical index of the pixel is decremented by a predetermined small value, say 5 pixels, and then each pixel a short distance, say up to 2 pixels, in each horizontal direction is then tested. The ratio of two pixels to five pixels gives a maximum inclination of 20 degrees. Because the pixels being tested are on the edge of the finder pattern, it should be possible to identify adjacent pixels in the horizontal direction having different values, one above the threshold value and one below the threshold value. This determines the horizontal position of the edge of the finder pattern 9 and thus the horizontal position can be adjusted to correct for any inclination of the arm of the finder pattern. In this way the algorithm is able to determine and automatically compensate for the horizontal position of the edge of the finder pattern as the search is conducted in a vertical direction. This procedure can be conducted both along the vertical line 17 and along the horizontal line 19, with searches in opposite perpendicular directions so as to determine the length and orientation of the arms of the finder pattern.
As explained above, in the event the values of all the pixels in the horizontal direction are found to be above the threshold value, then the end of the finder pattern 9 has been located and the procedure can move on. Alternatively, in the event the values of all the pixels are found to be below the threshold value the angle of inclination of the finder pattern could be above the maximum acceptable angle and a new image is acquired and processed from the beginning.
Once both the vertical line 17 and the horizontal line 19 have been scanned, or earlier if it is determined that the edge being followed is unsuitable, the finder pattern can be assessed for suitability.
As explained above, if the arm is determined to diverge at too great an angle from the horizontal or vertical, the image is discarded, a new image is acquired, and the analysis starts again from the beginning. A suitable range for continuing the procedure has been found to be plus or minus 20 degrees from the appropriate horizontal or vertical direction. It has been found that, as the line diverges increasingly from the horizontal or vertical, the number of computations required to analyse the pattern increases significantly. For example, an increase in divergence from 20 degrees to 30 degrees (as represented by testing three pixels in each direction rather than two) has been found to increase the number of computations required by some 66 percent.
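A minimal sketch of this edge-following step, for the upright arm and an upward search, is given below; the 5-row step and the 2-pixel lateral window follow the example figures in the text, while the function name, the return convention and the treatment of the image border are assumptions made for illustration:

```python
def follow_edge(image, threshold, row, col, row_step=-5, lateral=2):
    """Follow the left-hand edge of the upright finder-pattern arm upwards.
    Steps `row_step` rows at a time and re-locates the edge within `lateral`
    pixels either side of the current column. Returns (row, col, 'end') when
    only light pixels are seen (quiet zone reached, or image border crossed),
    or (row, col, 'fail') when only dark pixels are seen (arm too inclined)."""
    rows, cols = len(image), len(image[0])
    while True:
        row += row_step
        if not 0 <= row < rows:
            return row - row_step, col, "end"
        lo = max(0, col - lateral)
        window = [image[row][c] for c in range(lo, min(cols, col + lateral + 1))]
        if all(v >= threshold for v in window):
            return row - row_step, col, "end"    # only light pixels: quiet zone reached
        if all(v < threshold for v in window):
            return row - row_step, col, "fail"   # only dark pixels: inclination too large
        # Otherwise the edge lies inside the window: take the first dark pixel as the
        # new horizontal position, compensating for the inclination of the arm.
        for offset, v in enumerate(window):
            if v < threshold:
                col = lo + offset
                break
```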
If the algorithm is unable to find the ends of the finder pattern 9, a new vertical line 17 or horizontal line 19 is selected at a predetermined distance from the previous line and the procedure to identify the ends of a solid line begins again.
If the vertical line 17 or the horizontal line 19 encounters an element 21 that does not form part of the finder pattern 9, such as a printing error or a letter, prior to intersecting the finder pattern, such elements can be identified and discarded. Generally such elements can be discarded because they do not have a sufficient length (see, for example, the lower two elements 21 in Figure 7) to correspond to the finder pattern or because they do not have a sufficiently straight edge (see, for example, the upper element 21 in Figure 7) to correspond to the finder pattern.
As a final check, the point of intersection of the two edges is calculated and the angle between the edges is also calculated. Processing continues if the point of intersection is found to lie within satisfactory tolerances (which can readily be determined by experimentation) and if the angle between the edges is sufficiently close to a right angle (the tolerance can again be determined by experimentation). If either value is not within predetermined tolerances a new image is acquired and processing starts again from the beginning. The length of each of the finder patterns 9 is also tested to determine whether the ratio of the two lengths falls within a predetermined tolerance of a predetermined value. The predetermined value would be, for example, 1 for a square data matrix and would have other readily computed values for other rectangular forms of data matrix. The tolerance can readily be determined by experimentation. If the length ratio is not within the predetermined tolerance, the shorter of the two finder patterns is discarded and the length is determined again.
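As an illustration of this final check (the function below is a generic line-intersection and angle computation, with the acceptance tolerances left to the caller as in the text; its name and interface are assumptions):

```python
import math

def corner_and_angle(p1, p2, q1, q2):
    """Intersection of the two finder-pattern edges, each given by two (x, y)
    points, and the angle between them in degrees. Returns (None, None) if the
    edges are parallel and no usable corner exists."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None, None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    corner = (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1) -
                             math.atan2(y4 - y3, x4 - x3))) % 180
    return corner, angle   # accept if the corner lies in-image and the angle is close to 90
```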
It should be noted that the procedure employed to determine the boundaries of the finder patterns 9 of the data matrix can also be used to determine the presence and extent of straight lines in other bar code symbologies including linear symbols and two-dimensional symbols. It has been found that most symbols of this type incorporate straight lines surrounding and/or within the data region and which provide information relating to the scale and orientation of the symbol.
The next step is to identify the density bars 11. This is accomplished in accordance with Figures 9 and 10. The density bars indicate the size of a single cell within the data matrix pattern 7 and correspond to the maximum number of transitions from dark to light. Mainly due to distortion and orientation of the camera of the telephone 1, the acquired image 3 is often skewed with the density bars 11 not parallel to the finder pattern 9.
In order to identify the density bars, a series of lines is scanned from the end of one of the finder patterns 9. Figure 9 illustrates the procedure with reference to the lower finder pattern. The scanned lines are rotated about a point at the end of the finder pattern 9, with the rotation angle varying within a predetermined range. In order to identify the right hand density bar 11 in Figure 9, the scanned lines are rotated counter clockwise as indicated by the arrow 23, whereas in order to identify the upper density bar, the scanned lines are rotated clockwise about a point at the end of the finder pattern 9 at the left-hand side of the image.
That is, in each case the scanned lines are intended initially to be outside the data matrix, subsequently to align substantially with the density bar 11 and thereafter to be within the pattern 7 of the data matrix.
The flow chart shown in Figure 10 is used to identify the maximum number of transitions and the corresponding angle of rotation for each of the density bars. From the procedure for identifying the finder patterns 9, the expected length of the density bars 11 is known, although a predetermined tolerance is applied to allow for image distortion. A first line 27 is scanned at a predetermined angle, say +10 degrees (i.e., 10 degrees clockwise of an upright line or 10 degrees anti-clockwise of a horizontal line), which is intended to be outside the data matrix.
Consequently, for a line scanned at such an angle and for a distance corresponding to the maximum expected line length the number of transitions from dark to light (as determined with reference to the threshold value) is substantially zero. The scanning angle is rotated by a predetermined amount which can readily be determined by experimentation and the scan is repeated.
This continues until a final line 29 is scanned at a predetermined angle of, say -10 degrees (i.e., 10 degrees anti-clockwise of an upright line or 10 degrees clockwise of a horizontal line), which is intended to be well within the data matrix.
The result of the scans will be that the number of transitions from dark to light will start at substantially zero and will progressively rise, as an increasing number of the density cells are traversed, to a constant value for a small number of scanned lines as all the density cells are traversed with a scanning line substantially parallel to the edge of the density bar. Thereafter, the number of transitions will vary as the data within the data matrix is traversed. The cell size will also vary as the scanning line traverses the cells at different angles. As the scanning procedure progresses, the positions of the transitions are recorded together with the distance between transitions so as to enable an accurate determination of the cell size. The selected angle of a density bar is determined as the minimum angle having the constant number of transitions and a stable cell size.
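A sketch of this angle-selection step follows, building on the count_transitions helper sketched above. The sweep limits, the angular step and the stability test used here are illustrative assumptions only; the description leaves these quantities to be determined by experimentation.

```python
# Illustrative sketch: sweep the scan line through a range of angles and keep
# the first angle at which the transition count reaches its plateau with a
# stable spacing between transitions (the cell size).
def find_density_bar_angle(image, pivot, length, threshold,
                           start_deg=10.0, stop_deg=-10.0, step_deg=-0.5):
    """Return (angle, transition count, cell size) for the selected scan line."""
    best = None
    angle = start_deg
    while angle >= stop_deg:
        transitions, positions = count_transitions(
            image, pivot, angle, length, threshold)
        if transitions >= 2:
            gaps = [b - a for a, b in zip(positions, positions[1:])]
            mean_gap = sum(gaps) / len(gaps)
            stable = (max(gaps) - min(gaps)) <= 0.25 * mean_gap
            # A new maximum transition count with a stable cell size marks the
            # start of the plateau; keep the first such angle.
            if stable and (best is None or transitions > best[1]):
                best = (angle, transitions, mean_gap)
        angle += step_deg
    return best
```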
The scanned lines corresponding to the selected angle of rotation for each of the density bars 11 are intersected, the point of intersection 25 representing the fourth corner of the data matrix symbol. If either the number of transitions is invalid (for example, if the number of horizontal transitions and/or the number of vertical transitions does not correspond to any known data matrix format) or the point of intersection is invalid (for example, if the co-ordinates of the point of intersection are found to lie outside the image area) the procedure returns a fail, a new image is acquired and processing starts again at the beginning.
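The validity checks on the cell counts and on the intersection point might look like the following sketch. The set of symbol sizes shown is only a representative subset of the standard Data Matrix formats, the function names are illustrative, and the corner co-ordinates are assumed to come from a line intersection such as the line_intersection helper sketched earlier.

```python
# Representative subset of standard Data Matrix symbol sizes (rows, columns).
KNOWN_SIZES = {
    (10, 10), (12, 12), (14, 14), (16, 16), (18, 18), (20, 20),
    (22, 22), (24, 24), (26, 26), (8, 18), (8, 32), (12, 26), (16, 36),
}

def density_bar_result_is_valid(rows, cols, corner, image_size):
    """Fail if the cell counts match no known format or the corner lies outside the image."""
    if (rows, cols) not in KNOWN_SIZES:
        return False
    width, height = image_size
    return 0 <= corner[0] < width and 0 <= corner[1] < height
```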
Although the procedure for identifying the density bars 11 has been described in relation to a constant cell size, the procedure can also be used with cells of variable size, for example to provide an accurate estimation of image distortion in other barcode symbologies. The procedure for identifying the density bars 11 is designed to operate on alternating black and white cells, whereas the procedure for identifying the finder patterns 9 is designed for identifying solid lines. Together, these two procedures can be used to detect a bar code symbol because a bar code consists primarily of solid (conventionally upright) lines and alternating black and white (conventionally horizontal) patterns.
As illustrated in Figure 11, for the final stage of reading the data matrix, each of the four corners of the data matrix symbol is moved inwardly by a distance corresponding to half the cell size. The distance between the two left-hand corners is divided by the number of vertical cells plus 1 so as to create N points along the left-hand finder pattern 9. Similarly, the distance between the two right-hand corners is divided by the number of vertical cells plus 1 so as to create N points along the right-hand density bar 11, each of which points is substantially central of a cell of the density bar. This gives rise to N lines joining the left-hand finder pattern 9 to the right-hand density bar 11. Each of the N lines is scanned at regular intervals corresponding to the horizontal cell size to determine whether or not the surrounding cell is above or below the threshold value, and consequently whether the cell is light or dark. The resulting data is then re-arranged and decoded according to the well-known data matrix standards, which need not be repeated here. In the event the result of the decoding is not valid (for example, if the number of errors is found to be greater than the maximum number of correctable errors), a fresh image is acquired and processing starts at the beginning. If desired, the portable computing device may incorporate audio and/or visual means for indicating to the user that the data matrix symbol has been successfully decoded.
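The sampling of the cell grid can be sketched as a simple bilinear interpolation between the four inwardly moved corners. This is an illustration only: the helper names are assumptions, the corners are assumed to already sit at the centres of the corner cells, and the sketch stops at the raw bit matrix rather than attempting the error correction and decoding prescribed by the Data Matrix standard.

```python
# Illustrative sketch: sample the cell grid between the finder pattern (left)
# and the density bar (right) once the four corners are known.
def lerp(p, q, t):
    return (p[0] + (q[0] - p[0]) * t, p[1] + (q[1] - p[1]) * t)

def sample_grid(image, threshold, top_left, bottom_left, top_right,
                bottom_right, rows, cols):
    bits = []
    for r in range(rows):
        v = r / (rows - 1) if rows > 1 else 0.0
        left = lerp(top_left, bottom_left, v)    # point on the finder pattern
        right = lerp(top_right, bottom_right, v) # point on the density bar
        row_bits = []
        for c in range(cols):
            u = c / (cols - 1) if cols > 1 else 0.0
            x, y = lerp(left, right, u)
            row_bits.append(1 if image[int(round(y))][int(round(x))] < threshold
                            else 0)              # 1 = dark cell
        bits.append(row_bits)
    return bits                                  # passed on to the decoder
```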
When the method according to the present invention is employed to process a data matrix symbol, we have found that the number of elementary computing operations is very significantly reduced. For example, we have found that an accurate threshold value can be determined by testing no more than 3x640 (=1920) pixels for a 640x480 pixel image. By contrast, it is conventional to test every pixel of the image in order to determine the threshold value, requiring in this case some 307200 pixels to be tested. Thus, the method of the present invention uses some 160 times fewer computing operations in order to determine the threshold value. Moreover, because it is not necessary to test every pixel during determination of the threshold value, it is not necessary to binarise the entire image as would conventionally be the case.
By way of comparison, edge detection can be accomplished with the use of a differentiation filter and a procedure to detect the ends of the edges. Conventionally, for the same image size, the filtering operation requires 640x480x2 operations, while edge end detection requires 640x480 operations for each edge. Thus, using conventional image processing techniques some 4x640x480 (=1228800) elementary operations are required to detect the ends of both data matrix edges. The procedure described above for determining the ends of the finder patterns 9 within a rotation angle of +/- 20 degrees requires only 1200 operations and therefore represents an improvement factor in excess of 1000. The total number of operations required to decode a data matrix employing simple, but standard, image processing techniques for an image of VGA (640x480) size is of the order of 5x640x480 (=1536000) operations. By way of contrast, the total number of operations required for the procedure described above with reference to Figures 1 to 11 is of the order of 4000 operations, resulting in an overall improvement factor of about 400.
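The arithmetic behind these comparisons can be reproduced directly; the snippet below simply restates the operation counts quoted above for a VGA image and prints the resulting improvement factors.

```python
# Reproducing the operation counts quoted in the description for a VGA image.
width, height = 640, 480

conventional_threshold = width * height      # test every pixel
proposed_threshold = 3 * width               # three scan lines of the image
print(conventional_threshold / proposed_threshold)             # 160.0

conventional_edge_detection = 4 * width * height   # filter + edge-end scans
proposed_edge_detection = 1200                     # rotating-scan procedure
print(conventional_edge_detection / proposed_edge_detection)   # 1024.0

conventional_total = 5 * width * height      # full conventional decode
proposed_total = 4000                        # procedure of Figures 1 to 11
print(conventional_total / proposed_total)                      # 384.0
```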

Claims

1. A method of analysing a pixel-based image, which method comprises the steps of:
scanning an image (3) in a predetermined direction (15) from a predetermined point along a first edge (9) of the image towards a second edge (11) thereof;
determining a brightness of the first pixel and establishing a value for a background value on the basis of the brightness of the first pixel;
testing the brightness of each subsequent pixel in the predetermined direction (15) and, if any such subsequent pixel is found to have a value, selected from a first one of a higher value and a lower value, which differs substantially from the background value, substituting the tested value for the background value; and
after a predetermined number of tested pixels have been found to have substantially the same background value, testing each subsequent pixel in the following manner:
establishing a value for a foreground value on the basis of the tested pixel if the tested pixel is found to have a value, selected from a second one of the higher value and the lower value, which differs from the background value by more than a predetermined amount; and
determining a threshold value on the basis of the foreground value and the background value after a predetermined number of foreground pixels and a predetermined number of background pixels have been identified.
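Purely as an illustration of the thresholding steps recited above, and not as the patented implementation, a minimal sketch is given below. It assumes a light background with a darker foreground, a threshold taken as the average of the foreground and background values, and arbitrarily chosen counts and contrast margin; pixels is assumed to be the sequence of grey levels along the scan line.

```python
# Minimal sketch of the scanning threshold determination along one scan line.
def determine_threshold(pixels, stable_count=8, contrast_margin=60,
                        needed_foreground=4, needed_background=4):
    background = pixels[0]               # first pixel seeds the background value
    stable = 0                           # run of pixels matching the background
    foreground = None
    n_fg = n_bg = 0
    for value in pixels[1:]:
        if stable < stable_count:
            # Background-tracking phase: a brighter pixel replaces the estimate.
            if value > background:
                background = value
                stable = 0
            else:
                stable += 1
            continue
        # Foreground-detection phase.
        if value < background - contrast_margin:
            foreground = value if foreground is None else min(foreground, value)
            n_fg += 1
        else:
            n_bg += 1                    # treated as a background pixel
        if n_fg >= needed_foreground and n_bg >= needed_background:
            return (foreground + background) // 2
    return None                          # scan line did not yield a threshold
```

A full implementation would scan substantially diagonally across the image and could repeat the scan from offset starting points if no threshold is found on the first scan line.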
2. A method according to claim 1, characterised in that the background value is determined by identifying one or more subsequent pixels which has a tested value higher than the background value.
3. A method according to claim 1 or 2, characterised in that the threshold value is determined on the basis of the average of the foreground value and the background value.
4. A method according to any preceding claim, characterised in that the image (3) is scanned in a direction (15) which extends substantially diagonally across the image.
5. A method according to any preceding claim, characterised in that one or more further scans are conducted in the predetermined direction (15) from one or more further predetermined points offset from the first-mentioned predetermined point.
6. A method according to any preceding claim and including determining the length and orientation of a continuous rectilinear line (17, 19) in a pixel-based image (3), which method comprises the steps of:
scanning an image (3) in a first predetermined direction (19) from a predetermined point along a first edge of the image so as to intersect the continuous rectilinear line;
testing the brightness of each pixel in the predetermined direction until a pixel is identified which has a value, selected from a higher value and a lower value, which crosses a predetermined threshold value, the identified pixel being presumed to lie on an edge of the line;
scanning the image in a second predetermined direction (17) starting from the identified pixel, the second predetermined direction being at right angles to the first predetermined direction, in the following manner:
moving a predetermined number of pixels in the second predetermined direction (17) and selecting an initial pixel;
determining the brightness of a predetermined number of pixels either side of the initial pixel in the first predetermined direction (19);
scanning the pixels in the first predetermined direction so as to identify a pixel which has a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, the identified pixel being presumed to lie on the edge of the line;
repeating the scans in the second predetermined direction (17) until such time as no pixel is found to have a value which crosses the predetermined threshold value;
scanning the image in a third predetermined direction starting from the first identified pixel, the third predetermined direction being opposite to the second predetermined direction (17), in the following manner:
moving a predetermined number of pixels in the third predetermined direction and selecting an initial pixel;
determining the brightness of a predetermined number of pixels either side of the initial pixel in the first predetermined direction (19);
scanning the pixels in the first predetermined direction (19) so as to identify a pixel which has a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, the identified pixel being presumed to lie on the edge of the line;
repeating the scans in the third predetermined direction until such time as no pixel is found to have a value which crosses the predetermined threshold value;
determining the length of the line from the distance between the scans in the second and third directions in which no pixel is found to have a value which crosses the predetermined threshold value; and
determining the orientation of the line on the basis of the offset between identified pixels in the first predetermined direction and between identified pixels in at least one of the second and third directions.
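As an illustration of this line-measurement procedure only, the sketch below treats the simple case in which the first predetermined direction is horizontal and the second and third predetermined directions are vertical. The step size, search window, helper names and the dark-line-on-light-background assumption are all choices made for the example and are not taken from the claims.

```python
# Illustrative sketch: trace a dark, roughly vertical line in both vertical
# directions from a first detected edge pixel, then derive length and skew.
import math

def find_edge_x(image, y, x_start, x_stop, threshold):
    """First x in [x_start, x_stop) whose pixel crosses the threshold."""
    for x in range(x_start, x_stop):
        if image[y][x] < threshold:      # dark pixel presumed to lie on the line
            return x
    return None

def trace_line(image, y0, threshold, step=4, window=6):
    height, width = len(image), len(image[0])
    x0 = find_edge_x(image, y0, 0, width, threshold)   # initial horizontal scan
    if x0 is None:
        return None
    ends = []
    for direction in (+1, -1):           # second, then third predetermined direction
        x, y = x0, y0
        while True:
            y_next = y + direction * step
            if not (0 <= y_next < height):
                break
            x_next = find_edge_x(image, y_next,
                                 max(0, x - window),
                                 min(width, x + window + 1), threshold)
            if x_next is None:           # no crossing: end of the line reached
                break
            x, y = x_next, y_next
        ends.append((x, y))
    (x1, y1), (x2, y2) = ends
    length = math.hypot(x2 - x1, y2 - y1)
    skew = math.degrees(math.atan2(x2 - x1, y2 - y1))  # deviation from vertical
    return length, skew, (x1, y1), (x2, y2)
```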
7. A method according to claim 6, characterised in that the first predetermined direction (19) is an orthogonal direction.
8. A method according to claim 6 or 7 and including the step of choosing the first predetermined direction (19) so as to intersect the continuous rectilinear line at a predetermined angle.
9. A method according to claim 8, characterised in that the first predetermined direction (19) is chosen so as to intersect the continuous rectilinear line substantially at right angles.
10. A method according to any one of claims 6 to 9 and including the step of scanning the image in a further predetermined direction (17) substantially perpendicular to the first predetermined direction (19) so as to determine the length and orientation of a further rectilinear line.
11. A method according to claim 10, characterised in that the further rectilinear line (17) is substantially at right angles to the first-mentioned rectilinear line (19).
12. A method according to claim 11, characterised in that the further predetermined direction (17) is an orthogonal direction.
13. A method according to any one of claims 6 to 12 and including the step of determining the orientation of the line (17, 19) on the basis of the offset as represented by the number of pixels in the first predetermined direction (19) between the last of the identified pixels when scanning in the second predetermined direction (17) and the last of the identified pixels when scanning in the third predetermined direction and the offset in the second and third predetermined directions as represented by the length of the line.
14. A method according to any preceding claim and including determining the number of transitions between light and dark areas in a pixel-based image (3), which method comprises the steps of:
scanning a first line (27) in a predetermined direction from a predetermined starting point and testing each pixel in turn so as to identify pixels which have a value, selected from a higher value and a lower value, which crosses a predetermined threshold, and storing the number of transitions; and
scanning a plurality of further lines (29) in sequence from the predetermined starting point, the lines being scanned at progressive predetermined angles relative to the first line, and testing each pixel in turn so as to identify pixels which have a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, comparing the number of transitions to the stored number of transitions, and replacing the stored number of transitions in the event the number exceeds the stored number.
15. A method according to claim 14, characterised in that the angle of the line is stored together with the number of transitions.
16. A method according to claim 14 or 15, characterised in that the lines are scanned at angles which are initially outside a predetermined region of the image and which rotate progressively towards and into the predetermined region.
17. An apparatus for analysing a pixel-based image (3), which apparatus comprises:
means for scanning an image (3) in a predetermined direction (15) from a predetermined point along a first edge (9) of the image towards a second edge (11) thereof;
means for determining a brightness of the first pixel and for establishing a value for a background value on the basis of the brightness of the first pixel;
means for testing the brightness of each subsequent pixel in the predetermined direction (15) and, if any such subsequent pixel is found to have a value, selected from a first one of a higher value and a lower value, which differs substantially from the background value, means for substituting the tested value for the background value; and
after a predetermined number of tested pixels have been found to have substantially the same background value, means for testing each subsequent pixel in the following manner:
establishing a value for a foreground value on the basis of the tested pixel if the tested pixel is found to have a value, selected from a second one of the higher value and the lower value, which differs from the background value by more than a predetermined amount; and
determining a threshold value on the basis of the foreground value and the background value after a predetermined number of foreground pixels and a predetermined number of background pixels have been identified.
18. Apparatus as claimed in claim 17, characterised in that means is provided for determining the background value by identifying one or more subsequent pixels which has a tested value higher than the background value.
19. Apparatus as claimed in claim 17 or 18, characterised in that means is provided for determining the threshold value on the basis of the average of the foreground value and the background value.
20. Apparatus as claimed in any one of claims 17 to 19, characterised in that means is provided for scanning the image (3) in a direction (15) which extends substantially diagonally across the image.
21. Apparatus as claimed in any one of claims 17 to 20, characterised in that means is provided for conducting one or more further scans in the predetermined direction (15) from one or more further predetermined points offset from the first-mentioned predetermined point.
22. Apparatus as claimed in any one of claims 17 to 21, characterised in that means is provided for determining the length and orientation of a continuous rectilinear line (17, 19) in a pixel-based image (3), which means comprises:
means for scanning an image (3) in a first predetermined direction (19) from a predetermined point along a first edge of the image so as to intersect the continuous rectilinear line;
means for testing the brightness of each pixel in the predetermined direction until a pixel is identified which has a value, selected from a higher value and a lower value, which crosses a predetermined threshold value, the identified pixel being presumed to lie on an edge of the line;
means for scanning the image in a second predetermined direction (17) starting from the identified pixel, the second predetermined direction being at right angles to the first predetermined direction, in the following manner:
moving a predetermined number of pixels in the second predetermined direction (17) and selecting an initial pixel;
determining the brightness of a predetermined number of pixels either side of the initial pixel in the first predetermined direction (19);
scanning the pixels in the first predetermined direction (19) so as to identify a pixel which has a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, the identified pixel being presumed to lie on the edge of the line;
means for repeating the scans in the second predetermined direction (17) until such time as no pixel is found to have a value which crosses the predetermined threshold value;
means for scanning the image in a third predetermined direction starting from the first identified pixel, the third predetermined direction being opposite to the second predetermined direction (17), in the following manner:
moving a predetermined number of pixels in the third predetermined direction and selecting an initial pixel;
determining the brightness of a predetermined number of pixels either side of the initial pixel in the first predetermined direction (19);
scanning the pixels in the first predetermined direction (19) so as to identify a pixel which has a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, the identified pixel being presumed to lie on the edge of the line;
means for repeating the scans in the third predetermined direction until such time as no pixel is found to have a value which crosses the predetermined threshold value;
means for determining the length of the line from the distance between the scans in the second and third directions in which no pixel is found to have a value which crosses the predetermined threshold value; and
means for determining the orientation of the line on the basis of the offset between identified pixels in the first predetermined direction and between identified pixels in at least one of the second and third directions.
23. Apparatus as claimed in claim 22, characterised in that the first predetermined direction (19) is an orthogonal direction.
24. Apparatus as claimed in claim 22 or 23, characterised in that the first predetermined direction (19) is chosen so as to intersect the continuous rectilinear line at a predetermined angle.
25. Apparatus as claimed in claim 24, characterised in that the first predetermined direction (19) intersects the continuous rectilinear line substantially at right angles.
26. Apparatus as claimed in any one of claims 22 to 25, characterised in that means is provided for scanning the image in a further predetermined direction (17) substantially perpendicular to the first predetermined direction (19) so as to determine the length and orientation of a further rectilinear line.
27. Apparatus as claimed in claim 26, characterised in that the further rectilinear line (17) is substantially at right angles to the first-mentioned rectilinear line (19).
28. Apparatus as claimed in claim 26 or 27, characterised in that the further predetermined direction (17) is an orthogonal direction.
29. Apparatus as claimed in any one of claims 22 to 28, characterised in that means is provided for determining the orientation of the line (17, 19) on the basis of the offset as represented by the number of pixels in the first predetermined direction (19) between the last of the identified pixels when scanning in the second predetermined direction (17) and the last of the identified pixels when scanning in the third predetermined direction and the offset in the second and third predetermined directions as represented by the length of the line.
30. Apparatus as claimed in any one of claims 17 to 29, characterised in that means is provided for determining the number of transitions between light and dark areas in a pixel-based image (3), which means comprises:
means for scanning a first line (27) in a predetermined direction from a predetermined starting point and testing each pixel in turn so as to identify pixels which have a value, selected from a higher value and a lower value, which crosses a predetermined threshold, and storing the number of transitions; and
means for scanning a plurality of further lines (29) in sequence from the predetermined starting point, the lines being scanned at progressive predetermined angles relative to the first line, and testing each pixel in turn so as to identify pixels which have a value, selected from the higher value and the lower value, which crosses the predetermined threshold value, comparing the number of transitions to the stored number of transitions, and replacing the stored number of transitions in the event the number exceeds the stored number.
31. Apparatus as claimed in claim 30, characterised in that means is provided for storing the angle of the line together with the number of transitions.
32. Apparatus as claimed in claim 30 or 31, characterised in that means is provided for scanning the lines at angles which are initially outside a predetermined region of the image and which rotate progressively towards and into the predetermined region.
PCT/GB2006/004625 2005-12-13 2006-12-12 Image processing method and apparatus WO2007068901A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0525285.3 2005-12-13
GBGB0525285.3A GB0525285D0 (en) 2005-12-13 2005-12-13 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2007068901A1 true WO2007068901A1 (en) 2007-06-21

Family

ID=35735980

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2006/004625 WO2007068901A1 (en) 2005-12-13 2006-12-12 Image processing method and apparatus

Country Status (2)

Country Link
GB (1) GB0525285D0 (en)
WO (1) WO2007068901A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5396054A (en) * 1989-03-01 1995-03-07 Symbol Technologies, Inc. Bar code reader using scanned memory array
US5742041A (en) * 1996-05-29 1998-04-21 Intermec Corporation Method and apparatus for locating and decoding machine-readable symbols, including data matrix symbols
US20030197063A1 (en) * 1998-11-05 2003-10-23 Welch Allyn Data Collection, Inc. Method for processing images captured with bar code reader having area image sensor
US20040074967A1 (en) * 2002-10-10 2004-04-22 Fujitsu Limited Bar code recognizing method and decoding apparatus for bar code recognition
US20050011957A1 (en) * 2003-07-16 2005-01-20 Olivier Attia System and method for decoding and analyzing barcodes using a mobile device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WESZKA J S: "A SURVEY OF THRESHOLD SELECTION TECHNIQUES", COMPUTER GRAPHICS AND IMAGE PROCESSING, ACADEMIC PRESS. NEW YORK, US, vol. 7, no. 2, April 1978 (1978-04-01), pages 259 - 265, XP001149105 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3462372A1 (en) * 2017-09-29 2019-04-03 Datalogic IP Tech S.r.l. System and method for detecting optical codes with damaged or incomplete finder patterns
US10540532B2 (en) 2017-09-29 2020-01-21 Datalogic Ip Tech S.R.L. System and method for detecting optical codes with damaged or incomplete finder patterns

Also Published As

Publication number Publication date
GB0525285D0 (en) 2006-01-18

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06820488

Country of ref document: EP

Kind code of ref document: A1