WO2000019372A1 - Pixel coding and image processing method - Google Patents

Pixel coding and image processing method

Info

Publication number
WO2000019372A1
WO2000019372A1 (PCT/IL1998/000477)
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
value
pixel
assigning
image
Prior art date
Application number
PCT/IL1998/000477
Other languages
French (fr)
Inventor
Nissim Savariego
Danny Shalom
Original Assignee
Orbotech Ltd.
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (see https://patents.darts-ip.com/?family=11062362). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Orbotech Ltd.
Priority to PCT/IL1998/000477
Priority to AU94562/98A
Priority to IL14202898A
Publication of WO2000019372A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30141 Printed circuit board [PCB]

Definitions

  • the present invention is related to the field of image inspection and characterization and is especially useful in the field of inspection of flat objects such as printed circuit boards.
  • Another methodology, used mainly in PC board inspection, is to optically inspect a board to determine the morphology of a pattern thereon and to compare these patterns with design rules and/or with a feature reference which govern the board layout. Failure of the feature pattern on the board to meet the rules or include the requisite features usually signifies a flaw in the pattern on the board.
  • An example of a device that uses this methodology is the Orbotech Model V-309 circuit board tester.
  • the tester optically images a board under test. Edges between copper conductors and unclad laminate are detected. Based on this detection, a binary map or image, having a resolution greater than that of an optical image of the board is produced.
  • the optical image is a pixelated image having a given resolution and the binary image has a pixel size smaller than that of the optical image.
  • a feature map is generated by way of morphological analysis, including scaling measurements and successive stages of erosion and/or dilation operations or other conventional image processing methods.
  • the feature map is then checked to determine whether it includes the features and meets predetermined rules, such as minimum line width, for its structure.
  • erosion can be used to determine the diameter and center of a round feature.
  • Dilation can be used to determine the distance between two features and to determine which features are closer than a given distance.
  • the present invention seeks, in some preferred embodiments thereof, to provide more efficient methods for analytically representing an image, so that morphological operations, such as dilation and erosion and scale measurement operations, can be efficiently and quickly performed on the image.
  • One aspect of some preferred embodiments of the invention relates to mapping the image into relatively large computational pixels, and encoding each computational pixel using two or more bits per computational pixel, and methods used for such encoding.
  • the coding preferably indicates a spatial portion of a computational pixel that is to be treated as being located in either of the regions.
  • the coding of each computational pixel is made according to its position with reference to an edge between two adjacent regions in the image.
  • the location of an edge in the image is conventionally determined, preferably at a level of accuracy that exceeds the resolution of the optical pixels used to generate the image.
  • the image is analytically represented by relatively large computational pixels. Computational pixels that are entirely in the region on one side of the edge are assigned a first predetermined value, computational pixels that are entirely in the region on the other side of the edge are assigned a second predetermined value, and computational pixels that straddle the edge such that they are partly in the regions on either side of the edge are assigned another value according to a rule.
  • the value assigned to computational pixels that are located entirely in a region that is on one side of the edge or entirely in a region that is on the other side of the edge is preferably one of the opposite extreme values within a range of predetermined pixel values.
  • the other value, assigned to computational pixels that are partly in the regions on either side of the edge, is either one of the opposite extreme values in the range, or is an intermediate value within the range.
  • the image can be treated as having the same resolution as if it had been comprised of small pixels each being one quarter or one ninth of the size of the computational pixel.
  • the performance of a measurement or morphological operation would require each small pixel to be individually counted, eroded or dilated.
  • the use of two or more bits per computational pixel block provides for performing morphological operations on blocks equivalent in size to four or nine conventional small pixels.
  • the use of three bits per pixel block allows for performing morphological operations on even larger computational pixels, able to achieve an effective resolution equivalent to up to 8 conventional small unit pixels. This results in a saving of at least 50% in data (and often much more) while preserving operational resolution.
  • the coding is based on a relationship between the position and orientation of an edge and a computational pixel through which it passes. In one preferred embodiment of the invention, the coding is based on the percentage of the area of the computational pixel on either side of the edge.
  • a computational pixel is coded in a manner such that dilation, erosion and pixel counting operations mimic the same operations on a binary image having a higher resolution than the non-binary pixels.
  • An aspect of some preferred embodiments of the invention relates to methods of performing erosion/dilation operations on a pixelated image that is represented by two (or more) bit coded computational pixels.
  • edges between regions of different brightness are determined. Pixels through which an edge does not pass are given values representative of the regions they are in. At least some of the pixels through which the edge passes are given a different value.
  • these "border" pixels are dilated in two or more steps, such that the resolution of the erosion/dilation steps is the same as if the erosion/dilation had been performed on a conventional bit map of the image having a resolution finer than the computational pixel size.
  • the non-binary pixels are used to measure a dimension using a multi-angled scale type measurement. It is known in the art to utilize binary pixels in the measurement, for example, of line thickness. In this method, a thickness is measured by measuring the number of on or off pixels along a plurality of directions. The width of a line or space is determined as the minimum number of whole pixels measured along any of the directions plus the partial pixels from the transitional pixels.
  • the same type measurement is made using the coded pixels of the invention. Utilizing these pixels, measurements to a high degree of accuracy may be made.
  • An aspect of some preferred embodiments of the invention relates to performing morphological measurements and operations on a pixelated image at a resolution finer than the pixel size.
  • An aspect of some preferred embodiments of the invention relates to specific algorithms for performing isotropic erosion/dilation.
  • a method of multi-level pixelization of images comprising: determining at least one edge between a first area and a second area in the image; dividing the image into pixels; assigning a first value to pixels completely in the first area; assigning a second value to pixels completely in the second area; and assigning a value to pixels through which the edge passes, said value being one of the first value, the second value or a different value.
  • the value assigned to pixels through which the edge passes is based on a relationship between portions of the pixel in the first and second areas.
  • the value assigned to pixels through which the edge passes is based on the area of portions of the pixel in the first and second areas.
  • the value assigned to pixels through which the edge passes is based on position and orientation of the edge in the pixel.
  • the first value corresponds to bright areas of the image
  • the second value corresponds to dark areas of the image
  • the edges correspond to edge boundaries between the bright and dark areas.
  • assigning said values comprises: assigning said first value to pixels through which the edge passes if they meet a first condition; assigning said second value to pixels through which the edge passes if they meet a second condition; and assigning a third value to pixels through which the edge passes if they meet a third condition.
  • the first, second and third conditions are related to a two level thresholding, and the value identifies the relative proportion of a pixel in the first and second areas respectively.
  • assigning pixel values comprises: forming a sub-area within the pixel; assigning the first value to the pixel if the sub-area is entirely within the first area; assigning the second value to the pixel if the sub-area is entirely within the second area; and assigning a third value to the pixel if the edge passes through the sub-area.
  • each pixel has one of three values.
  • pixelizing the image comprises: assigning a fourth value to pixels through which the edge passes if they meet a fourth condition.
  • the fourth value is assigned to pixels based on position and orientation of the edge in the pixel.
  • pixelizing the image comprises assigning a fifth value to pixels through which the edge passes if they meet a fifth condition.
  • the fifth value is assigned to pixels based on position and orientation of the edge in the pixel.
  • the method includes acquiring the image as a gray level image at a given optical pixel size, wherein the pixelization is performed at a different spatial resolution than that of the optical pixels. Preferably, the pixelization is performed at a higher spatial resolution than that of the optical pixels.
  • the method includes: acquiring the image as a gray level image at a given optical pixel size, wherein the pixelization is performed at the same spatial resolution as that of the optical pixels.
  • the method includes: assigning a hierarchy to the pixel values ranked based at least approximately on the amount of filling of the pixel with the first area that the values represent.
  • a method of image analysis comprising: pixelizing the image utilizing a pixelization method in which the values of the pixels are represented by one of more than two values; and performing at least one spatial morphology operation on the pixelized image.
  • the at least one spatial morphology operation comprises a measurement of distance.
  • the at least one spatial morphology operation comprises at least one erosion operation.
  • the at least one erosion operation is substantially isotropic.
  • the at least one spatial morphology operation comprises at least one dilation operation.
  • the at least one dilation operation is substantially isotropic.
  • erosion or dilation is performed at a resolution higher than a pixel size of the pixelization.
  • the method of analysis is performed on an image pixelized according to the above-defined methods.
  • a method of erosion/dilation of a pixelized image comprising: providing an image pixelized according to the invention and having a hierarchy of values; and performing an erosion/dilation based on the position in the hierarchy of neighboring pixels.
  • the method comprises changing a pixel value to a lower value in the hierarchy if it meets an erosion criterion.
  • the erosion criterion is based on the position in the hierarchy of neighboring pixels.
  • the method comprises changing a pixel value to a higher value in the hierarchy if it meets a dilation criterion.
  • the dilation criterion is based on the position in the hierarchy of neighboring pixels.
  • the method includes further iteratively spatially dilating or eroding the image.
  • further iteratively spatially dilating or eroding utilize criteria for dilating or eroding having a compensating anisotropy for the different iterations.
  • a method of performing a morphology operation comprising: providing a pixelated image that is pixelated at a given spatial resolution; and performing at least one morphology operation at a resolution finer than the given spatial resolution.
  • the at least one spatial morphology operation comprises a measurement of distance.
  • the at least one spatial morphology operation comprises at least one erosion operation.
  • the at least one erosion operation is substantially isotropic.
  • the at least one spatial morphology operation comprises at least one dilation operation.
  • the at least one dilation operation is substantially isotropic.
  • a method for analytically representing an image having therein at least two regions comprising: dividing the image into pixels; assigning to each pixel a multi-bit code, said code relating sub-pixel portions of the pixel that are treated as being situated in the first region and the second region.
  • the assigning of said values includes: assigning a first predetermined value to pixels that are completely in the first region; and assigning a second predetermined value to pixels that are completely in the second region.
  • the method includes assigning a value to pixels that are partly in the first region and partly in the second region, said value being determined by the portion of the pixel in the first and second regions, with reference to a boundary edge between the regions.
  • assigning a value to pixels that are partly in the first region and partly in the second region comprises assigning one of said first or second values or a third value to the pixel.
  • the method comprises assigning a different value to pixels that are partly in the first region and partly in the second region, said value being determined by an edge between the regions meeting a spatial condition.
  • assigning a value to pixels partly in the first and second regions comprises: (a) defining at least one sub-area within the pixel; and (b) assigning the code based on a spatial relationship between the at least one sub-area and the edge.
  • Preferably assigning the code (b) includes: assigning the first value to pixels for which the sub-area is completely in the first region; and assigning the second value to pixels for which the sub-area is completely in the second region.
  • assigning the code (b) includes: assigning the different value to pixels for which the at least one sub-area is partly in the first and partly in the second regions.
  • the image is an image of a printed circuit board.
  • Fig. 1 shows a small portion of an exemplary image to be analyzed and pixelated in accordance with conventional prior art methods
  • Fig. 2 illustrates a methodology for assigning pixel values, in accordance with a preferred embodiment of the invention
  • Figs. 3A-3C illustrate erosion according to a preferred embodiment of the invention
  • Fig. 4 illustrates a methodology for assigning pixel values, in accordance with another preferred embodiment of the invention
  • Figs. 5A and 5B illustrate the application of erosion and dilation to measurements in printed circuit board testing; and Figs. 6A-6E illustrate five operators used to perform erosion/dilation.
  • Fig. 1 shows an optical image 10 of a portion of a PC board pixelated in a conventional manner.
  • Squares 12, delineated by dashed lines 14 represent the optical pixels of the image.
  • these images are gray scale images of the portion of the board.
  • such boards are divided into two parts, areas which are coated with conducting material (usually copper) and areas which are not.
  • the image of the conducting material is much brighter than that of the other areas.
  • edge 16 between conductor coated (to the left of and below the line) and uncoated (to the right of and above the line) areas.
  • edge 16 as obtained from the image generated by optical pixels is not a thin line as shown, but rather it is represented by full pixels.
  • Those pixels marked "H” have a high brightness (pixel value) characteristic of copper conductor
  • those pixels marked "L” have a low pixel value characteristic of uncoated laminate
  • those pixels marked "T” are transitional pixels having some intermediate pixel value characteristic of the edge between copper and laminate.
  • an optical pixel, such as square 18 in Fig. 1, may be divided into nine small pixels 19.
  • each optical pixel may be divided into other numbers of small pixels, for example four or sixteen or some other number depending on the method of edge determination used and the desired edge resolution.
  • the pixels are smaller as compared to the optical pixels, a greater resolution can be achieved and the edge can be smoothed.
  • a threshold will be set for determining to which class the pixel belongs.
  • This threshold may be an area threshold (generally 50%) where the position of the transition is known or a brightness threshold (usually for gray level images) where the level can be set half way between the two areas being separated by the edge. It should be understood that many edges are not very sharp and that there is a real transition region in optical brightness. Nevertheless, a binarization operation sets the edge position as if the transition were sharp.
  • Fig. 2 shows the arrangement of multi-bit computational pixels relative to an edge in accordance with a preferred embodiment of the present invention.
  • a portion of an edge 16 between two joining regions is shown.
  • the location of edge 16 has been determined using edge detection techniques, such as those described in US Patent 5,774,573, and is preferably to a greater accuracy than optical pixels.
  • Computational pixels 20, 22 and 23 are generated in the image, and as will be seen, are each assigned a multi-bit value.
  • a quasi-binary image is formed using "computational" pixels 20, 22 and 23.
  • the computational pixels have one of three states, namely, "high" (sometimes called "on"), "low" (sometimes called "off") and transitional. Since at least two bits are required to define such a computational pixel, for the sake of illustration we will adopt a two-bit nomenclature in which we denote the off pixel by "00", the on pixel by "11" and the transition pixel by "10".
  • some of the computational pixels through which the edge passes are classified as off and some are classified as on.
  • computational pixels which are "nearly" completely on one or the other side of the edge are classified as being completely on the particular side of the edge.
  • a double threshold may be used for determining the status of a computational pixel with respect to the edge. In such a system, the area within the computational pixel on either side of the edge is determined with respect to a low and a high threshold. If the pixel value is less than the low threshold the computational pixel is classified as low. If the pixel value is greater than the upper threshold, the computational pixel is classified as high.
  • the thresholds are set symmetrically about the mid-point.
  • in Fig. 2, three computational pixel blocks 20, 22 and 23 are shown as being superimposed on a grid representing a pixel size analogous to conventional one-bit small pixels 19 (Fig. 1).
  • the size of the computational pixel is equivalent to 2 x 2 small one-bit pixels 19, such that each computational pixel is twice the size (and four times the area) of small pixels 19.
  • Points 25 are located along the central vertical and horizontal axes of each computational pixel. Each point 25 lies at a distance from the edge of the pixel equal to one quarter of the height and width dimensions of the computational pixel.
  • a smaller square 24, interconnecting each of points 25 and covering one-quarter the area of the computational pixel, is formed inside the computational pixel. Classification of a computational pixel as high, low or transitional is then made by the following method: if square 24 lies entirely in the high region the pixel is classified as high, if it lies entirely in the low region the pixel is classified as low, and if edge 16 passes through square 24 the pixel is classified as transitional.
  • computational pixel 20 is classified as a transitional pixel and assigned a transitional value such as "10".
  • Computational pixel 22 is classified as being copper and assigned a high value of "11". Computational pixel 23 is classified as being laminate and assigned a low value of "00".
  • when a computational pixel is classified as transitional, the multi-bit code allows the pixel to be considered as being divided into sub-sections that are respectively spatially situated in either of the regions along the edge. Additionally, it is appreciated that while, in the example shown, the regions are described in absolute terms of high and low, they may also be gray level regions conventionally represented.
  • the computational pixels are twice the size and four times the area of the small pixels 19 of Fig. 1.
  • the number of bits used to classify the area covered by the computational pixel is only two bits as compared with four bits when a binary map comprised of the small unit pixels is used.
  • morphological operations such as erosion/dilation and scale measurements can be carried out at substantially the same resolution as for images having a binary representation in small pixels, for all directions of edge 16.
  • less than one bit is required to represent an erosion/dilation resolution element.
  • a scale type measurement will give the same values for both representations.
  • Figs. 3A-3C illustrate, in a simple manner, two erosion steps using a two-bit quasi-binary pixelization representation in the erosion of an edge.
  • Figs. 3A-3C show an edge 16 and computational pixels, appropriately classified according to the previous discussion. Let us assume that the following very simplified erosion algorithm is used: a computational pixel that touches three pixels having "00" is changed by changing its designation to "00" if it was originally "10" and by changing it to "10" if it was originally "11".
  • Fig. 3 A shows the situation prior to any erosion.
  • Fig. 3B shows the situation after the first erosion step, wherein the position of the edge following erosion is indicated as 16b.
  • Fig. 3C shows the situation after the second erosion step wherein the position of the edge following further erosion is indicated as 16c.
  • the advance of the edge can be considered as being between points 25 (Fig. 2), and not from edge to center of each pixel. Because, in the example shown in Fig. 2, points 25 are each one quarter of the distance from the edge of each computational pixel along a central axis, each erosion or dilation step will effectively advance the edge one half pixel at a time.
  • computational pixels may be divided into a larger number of sections so that the advance will be effectively at a rate of 1/(n-1), where n is the number of bit units used to represent the spatial section of a pixel in one of the regions, and n-1 is the number of sub-sections into which the pixel is divided. It is evident from the above example that the number of comparisons and pixel switches is only half as great when the two-bit pixelization is used as compared to a conventional binary representation of the image. Moreover, computational pixels that are represented by the extreme values, "00" or "11", can be eroded, dilated or counted as a unit in each step, which is equivalent to operating on four small pixels in the foregoing example.
  • the multi-bit pixelation enables resolution at edges to be maintained as in a map of small binary pixels since the portion of a transitional pixel that needs to be considered in conducting measurement operations - for example none of the pixel, the entire pixel or a section of the pixel - is intrinsic to the multi-bit pixelation in accordance with the present invention.
  • a larger or smaller number of levels may be used, dividing the computational pixels into a correspondingly larger or smaller number of subsections.
  • pixelization using two bits and four levels may be used in systems where erosion at the rate of one third of a computational pixel per erosion step takes place.
  • computational pixels 26 are shown.
  • a rectangle 28 interconnects points 27, and the center 29 of the computational pixel is indicated. Rectangle 28 is utilized, as described below, to determine the values of the bits which represent the pixel.
  • pixel 26 may have one of four levels, 00, 01, 10 and 11, in order of increased coverage by the portion of the pixel on the high side of the edge. If the rectangle 28 is completely in the high portion, the pixel is coded as 11. If the rectangle 28 is completely in the low portion, the pixel is coded as 00. If the rectangle is partly in the high portion and partly in the low portion, the pixel is coded as 01 if the center 29 of the pixel is in the low portion and as 10 if the center 29 of the pixel is in the high portion. Thus, the left pixel in Fig. 4 is coded as 00 and the right pixel is coded as 01.
  • the erosion operation would reduce the status of the pixel from 11 to 10, then to 01 and finally to 00, one level at a time.
  • the edge is considered to advance from a point 27 to a center 29 to a point 27, and then to the next point 27 in an adjacent computational pixel, following which the advance would be repeated until the erosion operation is completed. It is readily evident that in this manner a resolution of one third of a pixel is obtainable.
  • Similar methodologies may be used for pixel coding using three bits, which may include five, six, seven or eight levels, to divide the computational pixel into an equal number of subsections.
  • for five levels, two squares would be inscribed in the pixel. These squares would both have the same orientation as squares 24 and 28 and would be so placed as to provide the same resolution in all directions as the underlying binary pixels.
  • for six levels, two squares and the center of the pixel would be used, as in Fig. 4. This method could be used for seven or eight levels with the addition of an additional square inside the pixel. Similarly, it could be used for more levels, with more bits per pixel and more squares. Erosion and dilation would be performed in a manner analogous to that of the pixelizations of Figs. 2 and 4.
  • Figs. 6A-6E show a series of operators which can be used, for the coding system illustrated with respect to Fig. 2, for achieving erosion with a high degree, on the average, of isotropy. While, in principle, it would be desirable to have a single operator that performs erosion isotropically, such an operator is not achievable, in essence because the erosion step is different in each direction. Thus, in accordance with a preferred embodiment of the invention, a relatively large number of operators is used, in series, in order to provide, on the average, isotropic erosion.
  • Each of the operators provides a different erosion rate over different primary directions, characteristic of a geometry of 3x3 three-level computational pixels, each equivalent to 6 x 6 small binary pixels, namely approximately 0, 14, 26, 37 and 45 degrees (and, of course, mirror images and 90 degree rotations of these angles).
  • with the binary pixels of the prior art, only three directions are built into a 3x3 matrix of pixels, namely, 0, 45 and 26 degrees (and, of course, mirror images and 90 degree rotations of these angles).
  • prior art methods used only up to three operators (and usually only two operators) for near uniform erosion/dilation.
  • the state of the central pixel is changed by one level (either from 11 to 10 or from 10 to 00) if the surrounding pixels meet the criteria shown in the matrices of the operator.
  • Black elements and * mean "don't care.”
  • the central pixel changes from 11 to 00 when the criteria are met.
  • Dilation can be performed in one of two ways.
  • the simplest way is to invert the image such that 00 is changed to 11 and 11 to 00 and perform an erosion operation on the resulting image (an illustrative sketch of this approach appears at the end of these definitions).
  • the inversion may be performed in the operators.
  • the sequence of applications of the operators is determined by keeping track of the amount of erosion performed by the previous erosion steps and then applying, as the next operator in the sequence, an operator that will correct any anisotropy to the greatest extent possible. For the operators shown, one such sequence for the first 24 erosion steps is those shown in Figs.
  • the sequence is optimized to result in most nearly perfect isotropy at a given level of erosion. For example, if there is an expected number of erosions that will be necessary, the process may be optimized to allow for greater anisotropy for intermediate erosion steps. It should be understood that while erosion and dilation using two bit coded pixels has been described with respect to morphology determinations in PC board testing, the utility of the invention is not limited to PC board testing, but can be used for many of the same applications for which single bit pixel (binarized) images are used. Some of these uses are described in the book by Russ. For example, operations such as dilation followed by erosion may also be performed.
  • Fig. 5A shows a line 40 having an irregular edge. In particular, in one area, the width of the line is reduced.
  • PC boards are designed and manufactured utilizing certain rules. One of these rules is a minimum width of conductor.
  • One use of erosion/dilation image processing, or scale measuring, is to determine if there exist, anywhere on the board, lines that have a thickness less than the minimum design thickness.
  • Fig. 5A shows the result of successive erosion operations on the line (for simplicity, the pixelization is not shown). This erosion operation would, as described above, be effectively the same for single- and multi-bit pixelization. After a number of successive operations, the width of the line at the narrowest point goes to zero. If this situation is detected, the position of the break in the line is noted. The number of erosions required to break the line gives the width of the line at that point.
  • Fig. 5B shows a different situation in which the distance between two adjoining lines 42 and 44 has been reduced by an imperfection in the manufacture of one of the lines.
  • a dilation function would be successively applied to the image until the two lines meet. The number of dilation steps would then give the distance between the lines prior to dilation.
  • another use of multi-bit pixelization in morphology determination is in the measurement of line widths or spaces utilizing scales.
  • the number of high (or low) binary pixels is counted along a plurality of directions around a point. This measurement is preferably made at 0, 14, 26, 37, 45, 53, 64, 76 and 90 degrees. The direction which has the smallest number of pixels is considered the width of the line/space.
  • a similar measurement is made utilizing one of the above described methods of multi-level pixelization.
  • a count of the number of pixels in a limited number of directions is made, with the transition pixels counting as partial distances in accordance with the level of the pixel. For example, for the three-level pixelization of Fig. 2 the number of pixels and partial pixels is counted in the above directions, and the line or space width is given as the total number of pixels and partial pixels times the pixel width. For the example of Fig. 4, the pixels are also counted in each of the above directions.
  • the resolution of this method is the same as if binary pixels having a resolution (n-1) times that of the multi-level pixels were used in the measurement, where n is the number of levels used in characterizing the pixel, and n-1 is the number of subsections into which the pixel is divided.
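By way of a non-limiting sketch, the dilation-by-inversion approach referred to above may be written as follows; the Python/numpy representation, the level names and the externally supplied single-step erosion operator are illustrative assumptions rather than part of the disclosed method:

```python
import numpy as np

# Illustrative codes for the three-state pixels: 00 (off), 10 (transitional), 11 (on).
OFF, TRANS, ON = 0b00, 0b10, 0b11

def invert(img: np.ndarray) -> np.ndarray:
    """Swap 00 and 11 while leaving the transitional value unchanged."""
    out = img.copy()
    out[img == OFF] = ON
    out[img == ON] = OFF
    return out

def dilate_step(img: np.ndarray, erode_step) -> np.ndarray:
    """Dilate by inverting the image, applying one erosion step, and inverting back."""
    return invert(erode_step(invert(img)))
```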

Abstract

A method of multi-level pixelization of images comprising: determining at least one edge between a first area and a second area in the image; dividing the image into pixels; assigning a first value to pixels completely in the first area; assigning a second value to pixels completely in the second area; and assigning a value to pixels through which the edge passes, said value being one of the first value, the second value or a different value.

Description

PIXEL CODING AND IMAGE PROCESSING METHOD
FIELD OF THE INVENTION
The present invention is related to the field of image inspection and characterization and is especially useful in the field of inspection of flat objects such as printed circuit boards.
BACKGROUND OF THE INVENTION
Two main methodologies are used in the automatic optical inspection of flat objects such as integrated circuit reticules and unloaded printed circuit (PC) boards, in order to find flaws in patterns formed on the objects.
One methodology, widely used in reticule inspection systems and in PC board inspection, is to produce an optical image of an object in the form of a bit map and to compare this optical image with a reference image that is in bit map format. When the two images deviate in any given position by more than a given amount, the position is indicated to contain an error, which may result in a rejection of the object or a requirement for further visual or other inspection. US Patents 4,579,455 and 5,586,058 show examples of this type of inspection system.
Another methodology, used mainly in PC board inspection, is to optically inspect a board to determine the morphology of a pattern thereon and to compare these patterns with design rules and/or with a feature reference which govern the board layout. Failure of the feature pattern on the board to meet the rules or include the requisite features usually signifies a flaw in the pattern on the board.
An example of a device that uses this methodology is the Orbotech Model V-309 circuit board tester. The tester optically images a board under test. Edges between copper conductors and unclad laminate are detected. Based on this detection, a binary map or image, having a resolution greater than that of an optical image of the board, is produced. In particular, the optical image is a pixelated image having a given resolution and the binary image has a pixel size smaller than that of the optical image.
A feature map is generated by way of morphological analysis, including scaling measurements and successive stages of erosion and/or dilation operations or other conventional image processing methods. The feature map is then checked to determine whether it includes the features and meets predetermined rules, such as minimum line width, for its structure.
A general description of erosion and dilation and their uses is given in "The Image Processing Handbook" by John C. Russ, CRC Press, 1995 (see especially chapter 7), which is incorporated herein by reference. To determine the width of a thin portion of a thicker line, the image of the line is eroded until the line splits into two parts. The location at which it splits is the location of the thinnest portion of the line and the number of erosions required to split the line gives the thickness at that point. It should be noted that while this does not seem to be a particularly efficient way of making this measurement, this method allows for determining such thicknesses on the entire image at one time without any a priori knowledge of the feature and/or the location.
Similarly, erosion can be used to determine the diameter and center of a round feature. Dilation can be used to determine the distance between two features and to determine which features are closer than a given distance.
Such determinations are especially important in PC board inspection in which common flaws include insufficient distances between features, round pads that are too small, misformed or misplaced, and conductors that are too thin over part of their length. Since the position of these flaws (or even their existence) is not known in advance, the ability to automatically determine their existence, location and severity is very useful.
Nevertheless, on large complicated boards, the amount of data required to represent the image and the amount of processing required to perform feature detection steps, such as dilation and erosion, is considerable, even when the processing is performed on only a limited number of lines, such as three lines at a time.
SUMMARY OF THE INVENTION
The present invention seeks, in some preferred embodiments thereof, to provide more efficient methods for analytically representing an image, so that morphological operations, such as dilation and erosion and scale measurement operations, can be efficiently and quickly performed on the image. One aspect of some preferred embodiments of the invention relates to mapping the image into relatively large computational pixels, and encoding each computational pixel using two or more bits per computational pixel, and methods used for such encoding. In an image having at least two regions, the coding preferably indicates a spatial portion of a computational pixel that is to be treated as being located in either of the regions. In some preferred embodiments of the invention the coding of each computational pixel is made according to its position with reference to an edge between two adjacent regions in the image. First, the location of an edge in the image is conventionally determined, preferably at a level of accuracy that exceeds the resolution of the optical pixels used to generate the image. The image is analytically represented by relatively large computational pixels. Computational pixels that are entirely in the region on one side of the edge are assigned a first predetermined value, computational pixels that are entirely in the region on the other side of the edge are assigned a second predetermined value, and computational pixels that straddle the edge such that they are partly in the regions on either side of the edge are assigned another value according to a rule.
The value assigned to computational pixels that are located entirely in a region that is on one side of the edge or entirely in a region that is on the other side of the edge is preferably one of the opposite extreme values within a range of predetermined pixel values. The other value, assigned to computational pixels that are partly in the regions on either side of the edge, is either one of the opposite extreme values in the range, or is an intermediate value within the range.
For example, if a two bit representation is assigned to the computational pixel, then the extreme values would be "00" and "11" respectively, while the intermediate value could be either "01" or "10". In such a representation, the assignment of an extreme value, "00" or "11", to a computational pixel straddling an edge would indicate that the entire computational pixel is to be treated as being situated in one or the other of the regions separated by the edge. The assignment of an intermediate value, "01" or "10", to such a computational pixel indicates that only some part of the computational pixel is to be treated as being situated in one or the other of the regions. It is readily appreciated that by treating the image as divided into multi-bit computational pixels, fewer computations are required for morphological and scale measurement operations. In particular, the present invention enables erosion/dilation and scale measurement operations to be more efficiently performed.
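By way of a non-limiting sketch, such a two-bit coding may be represented as follows; the names and the convention that a transitional pixel is treated as half-covered (consistent with the half-pixel erosion steps described later) are illustrative assumptions:

```python
from enum import IntEnum

class PixelCode(IntEnum):
    """Two-bit computational pixel states (illustrative names)."""
    OFF = 0b00    # entirely in the dark/laminate region
    TRANS = 0b10  # straddles the edge; part on, part off
    ON = 0b11     # entirely in the bright/copper region

def covered_fraction(code: PixelCode) -> float:
    """Fraction of the computational pixel treated as lying in the bright region.

    Treating a transitional pixel as half-covered is an assumption made for this
    sketch, consistent with the half-pixel erosion steps described in the text.
    """
    return {PixelCode.OFF: 0.0, PixelCode.TRANS: 0.5, PixelCode.ON: 1.0}[code]
```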
By using multi-bit computational pixels, it is possible to maintain an operational resolution that is effectively the same as if the image had been divided up into smaller unit-bit pixels. For example, in a two bit representation of a computational pixel, the image can be treated as having the same resolution as if it had been comprised of small pixels each being one quarter or one ninth of the size of the computational pixel. In a simple conventional bit map representation of the same image comprised of four times or nine times as many small pixels, the performance of a measurement or morphological operation would require each small pixel to be individually counted, eroded or dilated.
By comparison, in preferred embodiments of the present invention, the use of two or more bits per computational pixel block provides for performing morphological operations on blocks equivalent in size to four or nine conventional small pixels. The use of three bits per pixel block allows for performing morphological operations on even larger computational pixels, able to achieve an effective resolution equivalent to up to 8 conventional small unit pixels. This results in a saving of at least 50% in data (and often much more) while preserving operational resolution. In preferred embodiments of the invention, the coding is based on a relationship between the position and orientation of an edge and a computational pixel through which it passes. In one preferred embodiment of the invention, the coding is based on the percentage of the area of the computational pixel on either side of the edge.
In a second, more preferred embodiment of the invention, a computational pixel is coded in a manner such that dilation, erosion and pixel counting operations mimic the same operations on a binary image having a higher resolution than the non-binary pixels.
An aspect of some preferred embodiments of the invention relates to methods of performing erosion/dilation operations on a pixelated image that is represented by two (or more) bit coded computational pixels. In a preferred embodiment of the invention, edges between regions of different brightness are determined. Pixels through which an edge does not pass are given values representative of the regions they are in. At least some of the pixels through which the edge passes are given a different value. During dilation/erosion steps, according to preferred embodiments of the invention, these "border" pixels are dilated in two or more steps, such that the resolution of the erosion/dilation steps is the same as if the erosion/dilation had been performed on a conventional bit map of the image having a resolution finer than the computational pixel size.
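A minimal sketch of a single erosion step on such a coded image is given below. It follows the simplified rule described later with reference to Figs. 3A-3C (a pixel touching three "00" pixels drops one level); the numpy grid representation, the 8-connected neighbourhood and the boundary handling are assumptions made for the illustration:

```python
import numpy as np

OFF, TRANS, ON = 0b00, 0b10, 0b11  # illustrative two-bit codes

def erode_step(img: np.ndarray) -> np.ndarray:
    """One erosion step on a grid of two-bit codes.

    A pixel with at least three OFF neighbours drops one level (11 -> 10, 10 -> 00).
    The 8-connected neighbourhood and edge-replicating boundary are assumptions.
    """
    padded = np.pad(img, 1, mode="edge")
    out = img.copy()
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 3, c:c + 3]
            # count OFF neighbours, excluding the centre pixel itself
            off_neighbours = int(np.count_nonzero(window == OFF)) - int(img[r, c] == OFF)
            if off_neighbours >= 3:
                if img[r, c] == ON:
                    out[r, c] = TRANS
                elif img[r, c] == TRANS:
                    out[r, c] = OFF
    return out
```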
In preferred embodiments of the invention, this results in the same erosion/dilation resolution using fewer bits and fewer operations than in the prior art. In an alternative preferred embodiment of the invention, the non-binary pixels are used to measure a dimension using a multi-angled scale type measurement. It is known in the art to utilize binary pixels in the measurement, for example, of line thickness. In this method, a thickness is measured by measuring the number of on or off pixels along a plurality of directions. The width of a line or space is determined as the minimum number of whole pixels measured along any of the directions plus the partial pixels from the transitional pixels.
In this preferred embodiment of the present invention, the same type of measurement is made using the coded pixels of the invention. Utilizing these pixels, measurements to a high degree of accuracy may be made. An aspect of some preferred embodiments of the invention relates to performing morphological measurements and operations on a pixelated image at a resolution finer than the pixel size.
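A non-limiting sketch of the scale-type width measurement described in the two preceding paragraphs follows; the ray-sampling scheme and the treatment of a transitional pixel as half a pixel are illustrative assumptions:

```python
import math
import numpy as np

OFF, TRANS, ON = 0b00, 0b10, 0b11
PARTIAL = {OFF: 0.0, TRANS: 0.5, ON: 1.0}  # assumed coverage per code

def line_width(img: np.ndarray, row: int, col: int, pixel_size: float = 1.0) -> float:
    """Scale-type width measurement around (row, col).

    Counts whole and partial pixels along the directions named in the text
    (0, 14, 26, 37, 45, 53, 64, 76 and 90 degrees) and returns the smallest
    total times the pixel size.
    """
    angles = [0, 14, 26, 37, 45, 53, 64, 76, 90]
    best = math.inf
    for angle in angles:
        dr, dc = math.sin(math.radians(angle)), math.cos(math.radians(angle))
        total = PARTIAL[img[row, col]]
        for sign in (+1, -1):            # walk away from the point in both senses
            step = 1
            while True:
                r = int(round(row + sign * step * dr))
                c = int(round(col + sign * step * dc))
                if not (0 <= r < img.shape[0] and 0 <= c < img.shape[1]):
                    break
                if img[r, c] == OFF:     # left the line
                    break
                total += PARTIAL[img[r, c]]
                step += 1
        best = min(best, total)
    return best * pixel_size
```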
An aspect of some preferred embodiments of the invention relates to specific algorithms for performing isotropic erosion/dilation.
There is thus provided, in accordance with a preferred embodiment of the invention, a method of multi-level pixelization of images comprising: determining at least one edge between a first area and a second area in the image; dividing the image into pixels; assigning a first value to pixels completely in the first area; assigning a second value to pixels completely in the second area; and assigning a value to pixels through which the edge passes, said value being one of the first value, the second value or a different value.
Preferably, the value assigned to pixels through which the edge passes is based on a relationship between portions of the pixel in the first and second areas.
In accordance with a preferred embodiment of the invention, the value assigned to pixels through which the edge passes is based on the area of portions of the pixel in the first and second areas.
Alternatively, in a preferred embodiment of the invention, the value assigned to pixels through which the edge passes is based on position and orientation of the edge in the pixel.
Preferably, the first value corresponds to bright areas of the image, the second value corresponds to dark areas of the image and the edges correspond to edge boundaries between the bright and dark areas.
In a preferred embodiment of the invention, assigning said values comprises: assigning said first value to pixels through which the edge passes if they meet a first condition; assigning said second value to pixels through which the edge passes if they meet a second condition; and assigning a third value to pixels through which the edge passes if they meet a third condition.
In a preferred embodiment of the invention, the first, second and third conditions are related to a two level thresholding, and the value identifies the relative proportion of a pixel in the first and second areas respectively. Alternatively, in a preferred embodiment of the invention, assigning pixel values comprises: forming a sub-area within the pixel; assigning the first value to the pixel if the sub-area is entirely within the first area; assigning the second value to the pixel if the sub-area is entirely within the second area; and assigning a third value to the pixel if the edge passes through the sub-area. In a preferred embodiment of the invention, each pixel has one of three values. In a preferred embodiment of the invention, pixelizing the image comprises: assigning a fourth value to pixels through which the edge passes if they meet a fourth condition.
Preferably, the fourth value is assigned to pixels based on position and orientation of the edge in the pixel. Preferably, pixelizing the image comprises assigning a fifth value to pixels through which the edge passes if they meet a fifth condition. Preferably, the fifth value is assigned to pixels based on position and orientation of the edge in the pixel. In a preferred embodiment of the invention, the method includes acquiring the image as a gray level image at a given optical pixel size, wherein the pixelization is performed at a different spatial resolution than that of the optical pixels. Preferably, the pixelization is performed at a higher spatial resolution than that of the optical pixels.
In a preferred embodiment of the invention, the method includes: acquiring the image as a gray level image at a given optical pixel size, wherein the pixelization is performed at the same spatial resolution as that of the optical pixels.
Preferably, the method includes: assigning a hierarchy to the pixel values ranked based at least approximately on the amount of filling of the pixel with the first area that the values represent.
There is further provided, in accordance with a preferred embodiment of the invention, a method of image analysis comprising: pixelizing the image utilizing a pixelization method in which the values of the pixels are represented by one of more than two values; and performing at least one spatial morphology operation on the pixelized image. In a preferred embodiment of the invention, the at least one spatial morphology operation comprises a measurement of distance.
Alternatively or additionally, the at least one spatial morphology operation comprises at least one erosion operation. Preferably, the at least one erosion operation is substantially isotropic.
Alternatively or additionally, the at least one spatial morphology operation comprises at least one dilation operation. Preferably, the at least one dilation operation is substantially isotropic.
Preferably, erosion or dilation is performed at a resolution higher than a pixel size of the pixelization.
Preferably, the method of analysis is performed on an image pixelized according to the above-defined methods.
There is further provided, in accordance with a preferred embodiment of the invention, a method of erosion/dilation of a pixelized image comprising: providing an image pixelized according to the invention and having a hierarchy of values; and performing an erosion/dilation based on the position in the hierarchy of neighboring pixels.
Preferably, the method comprises changing a pixel value to a lower value in the hierarchy if it meets an erosion criterion. Preferably, the erosion criterion is based on the position in the hierarchy of neighboring pixels.
Preferably, the method comprises changing a pixel value to a higher value in the hierarchy if it meets a dilation criterion. Preferably, the dilation criterion is based on the position in the hierarchy of neighboring pixels. Preferably, the method includes further iteratively spatially dilating or eroding the image. Preferably, further iteratively spatially dilating or eroding utilize criteria for dilating or eroding having a compensating anisotropy for the different iterations.
Preferably, erosion or dilation is performed at a resolution higher than a pixel size of the pixelization. There is further provided, in accordance with a preferred embodiment of the invention, a method of performing a morphology operation comprising: providing a pixelated image that is pixelated at a given spatial resolution; and performing at least one morphology operation at a resolution finer than the given spatial resolution. In a preferred embodiment of the invention, the at least one spatial morphology operation comprises a measurement of distance. Alternatively or additionally, the at least one spatial morphology operation comprises at least one erosion operation. Preferably, the at least one erosion operation is substantially isotropic. Alternatively or additionally, the at least one spatial morphology operation comprises at least one dilation operation. Preferably, the at least one dilation operation is substantially isotropic.
There is further provided, in accordance with a preferred embodiment of the invention a method for analytically representing an image having therein at least two regions, the method comprising: dividing the image into pixels; assigning to each pixel a multi-bit code, said code relating sub-pixel portions of the pixel that are treated as being situated in the first region and the second region.
Preferably, the assigning of said values includes: assigning a first predetermined value to pixels that are completely in the first region; and assigning a second predetermined value to pixels that are completely in the second region.
In a preferred embodiment of the invention, the method includes assigning a value to pixels that are partly in the first region and partly in the second region, said value being determined by the portion of the pixel in the first and second regions, with reference to a boundary edge between the regions. Preferably, assigning a value to pixels that are partly in the first region and partly in the second region comprises assigning one of said first or second values or a third value to the pixel.
Alternatively, in a preferred embodiment of the invention, the method comprises assigning a different value to pixels that are partly in the first region and partly in the second region, said value being determined by an edge between the regions meeting a spatial condition. Preferably, assigning a value to pixels partly in the first and second regions comprises:
(a) defining at least one sub-area within the pixel; and (b) assigning the code based on a spatial relationship between the at least one sub-area and the edge.
Preferably assigning the code (b) includes: assigning the first value to pixels for which the sub-area is completely in the first region; and assigning the second value to pixels for which the sub-area is completely in the second region.
Preferably, assigning the code (b) includes: assigning the different value to pixels for which the at least one sub-area is partly in the first and partly in the second regions.
In a preferred embodiment of the invention, the image is an image of a printed circuit board.
BRIEF DESCRIPTION OF THE DRAWINGS The present invention will be more fully understood from the following description of preferred embodiments thereof taken together with the following drawings in which:
Fig. 1 shows a small portion of an exemplary image to be analyzed and pixelated in accordance with conventional prior art methods;
Fig. 2 illustrates a methodology for assigning pixel values, in accordance with a preferred embodiment of the invention; Figs. 3A-3C illustrate erosion according to a preferred embodiment of the invention;
Fig. 4 illustrates a methodology for assigning pixel values, in accordance with another preferred embodiment of the invention;
Figs. 5A and 5B illustrate the application of erosion and dilation to measurements in printed circuit board testing; and Figs. 6A-6E illustrate five operators used to perform erosion/dilation.
DETAILED DESCRIPTION OF THE INVENTION
Fig. 1 shows an optical image 10 of a portion of a PC board pixelated in a conventional manner. Squares 12, delineated by dashed lines 14, represent the optical pixels of the image. As acquired, these images are gray scale images of the portion of the board. In general such boards are divided into two parts, areas which are coated with conducting material (usually copper) and areas which are not. In general, the image of the conducting material is much brighter than that of the other areas.
Also shown in Fig. 1 is an edge 16, between conductor coated (to the left of and below the line) and uncoated (to the right of and above the line) areas. Of course due to the resolution limits of the optical acquisition, edge 16 as obtained from the image generated by optical pixels is not a thin line as shown, but rather it is represented by full pixels. Those pixels marked "H" have a high brightness (pixel value) characteristic of copper conductor, those pixels marked "L" have a low pixel value characteristic of uncoated laminate, and those pixels marked "T" are transitional pixels having some intermediate pixel value characteristic of the edge between copper and laminate.
Numerous methods exist for determining the position of the edge to an accuracy greater than that of the optical pixel size, and are well known. Any of these methods may be used, however, the methods described in US Patent 5,774,573 to Caspi, et al., the disclosure of which is incorporated herein by reference, are preferred.
After an accurate position of the edge is determined, it is known in the prior art to generate a binary pixel map comprised of pixels having a spatial resolution that is different, preferably higher, than the resolution of the optical pixels. By way of example, an optical pixel, such as square 18 in Fig. 1, may be divided into nine small pixels 19. It is appreciated that in accordance with conventional methods for determining the position of an edge, each optical pixel may be divided into other numbers of small pixels, for example four or sixteen or some other number depending on the method of edge determination used and the desired edge resolution. As can be seen from Fig. 1, if the optical pixel is divided into small pixels 19, as is performed conventionally, some are H, some are L and some are still T. However, it is appreciated that because the pixels are smaller as compared to the optical pixels, a greater resolution can be achieved and the edge can be smoothed.
In general, in a binarized image, in which only H and L designations are allowed, a threshold will be set for determining to which class the pixel belongs. This threshold may be an area threshold (generally 50%) where the position of the transition is known, or a brightness threshold (usually for gray level images) where the level can be set halfway between the levels of the two areas being separated by the edge. It should be understood that many edges are not very sharp and that there is a real transition region in optical brightness. Nevertheless, a binarization operation sets the edge position as if the transition were sharp.
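By way of illustration only, and not as part of the prior art method itself, the two conventional threshold choices just described can be sketched as follows (Python; the 50% area threshold and the halfway brightness level are the assumptions stated above):

def binarize_by_area(fraction_high, area_threshold=0.5):
    # Classify a small pixel as 'H' or 'L' from the fraction of its area
    # lying on the high (e.g. copper) side of the edge.
    return "H" if fraction_high >= area_threshold else "L"

def binarize_by_brightness(gray, low_level, high_level):
    # Classify a gray-level pixel using a threshold set halfway between
    # the characteristic levels of the two regions.
    threshold = (low_level + high_level) / 2.0
    return "H" if gray >= threshold else "L"

print(binarize_by_area(0.62))               # 'H'
print(binarize_by_brightness(90, 40, 200))  # 'L'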
The erosion/dilation of binary images of the type described above is straightforward.
Various algorithms, some of which are described in the above referenced book (pp. 436-441), may be used for different purposes and to assure a measure of isotropy in the process. That is to say, it is desirable that the process work at the same rate whether the edge lies along the x or y axis or at an angle. Since a Cartesian grid is intrinsically anisotropic (for one reason, its resolution depends on the angle), such correction is generally used for accurate image processing.
Fig. 2 shows the arrangement of multi-bit computational pixels relative to an edge in accordance with a preferred embodiment of the present invention. A portion of an edge 16 between two adjoining regions is shown. The location of edge 16 has been determined using edge detection techniques, such as those described in US Patent 5,774,573, preferably to an accuracy greater than that of the optical pixels. Computational pixels 20, 22 and 23 are generated in the image and, as will be seen, are each assigned a multi-bit value. In accordance with a preferred embodiment of the invention, a quasi-binary image is formed using "computational" pixels 20, 22 and 23. In this quasi-binary image, the computational pixels have one of three states, namely "high" (sometimes called "on"), "low" (sometimes called "off") and transitional. Since at least two bits are required to define such a computational pixel, for the sake of illustration we will adopt a two-bit nomenclature in which we denote the off pixel by "00", the on pixel by "11" and the transition pixel by "10".
In a preferred embodiment of the invention, some of the computational pixels through which the edge passes are classified as off and some are classified as on. In this preferred embodiment, computational pixels which are "nearly" completely on one or the other side of the edge are classified as being completely on that side of the edge. For example, a double threshold may be used for determining the status of a computational pixel with respect to the edge. In such a system, the area within the computational pixel on either side of the edge is determined and compared with a low and a high threshold. If the pixel value is less than the low threshold, the computational pixel is classified as low. If the pixel value is greater than the upper threshold, the computational pixel is classified as high. If the pixel value of the computational pixel is between the thresholds, the entire computational pixel is classified as transitional, in which case it is treated in computational operations as if part of the pixel is classified high and part of the pixel is classified low. Normally, the thresholds are set symmetrically about the mid-point.
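A minimal Python sketch of this double-threshold classification is given below. The threshold values 0.25 and 0.75 (symmetric about the mid-point) are illustrative assumptions, not values specified in the text; the input is taken to be the fraction of the computational pixel's area lying on the high side of the edge.

LOW, TRANSITIONAL, HIGH = "00", "10", "11"

def classify_double_threshold(fraction_high, low_thr=0.25, high_thr=0.75):
    # Return the two-bit code for a computational pixel from the fraction
    # of its area lying on the high (copper) side of the edge.
    if fraction_high <= low_thr:
        return LOW
    if fraction_high >= high_thr:
        return HIGH
    return TRANSITIONAL

print(classify_double_threshold(0.10))  # '00'
print(classify_double_threshold(0.50))  # '10'
print(classify_double_threshold(0.90))  # '11'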
However, this method of pixel value assignment does not give precisely the same results for lines in all directions as would be achieved if conventional binary pixels are used. Therefore, in a preferred embodiment of the invention, another method is used to classify the computational pixels as high, low or transitional.
Referring to Fig. 2, three computational pixel blocks 20, 22 and 23 are shown superimposed on a grid representing a pixel size analogous to conventional one-bit small pixels 19 (Fig. 1). In the example shown, the size of the computational pixel is equivalent to 2 x 2 small one-bit pixels 19, such that each computational pixel is twice the size (and four times the area) of small pixels 19. Points 25 are located along the central vertical and horizontal axes of each computational pixel. The distance of each of points 25 from the edge of the pixel is one quarter of the height and width dimensions of the computational pixel. In order to classify the computational pixels, according to the preferred method, a smaller square 24, interconnecting each of points 25 and covering one-quarter the area of the computational pixel, is formed inside the computational pixel. Classification of a computational pixel as high, low or transitional is then made by the following method. If the area of the smaller square 24 is completely on one or the other side of the edge, the computational pixel is classified in the same way as it would be if the entire computational pixel were on that side of the edge. If the edge passes through the small square, as shown for computational pixel 20, the entire computational pixel is classified as transitional. Thus, in accordance with a preferred embodiment of the invention, computational pixel 20 is classified as a transitional pixel and assigned a transitional value such as "10". Computational pixel 22 is classified as being copper and assigned a high value of "11". Computational pixel 23 is classified as being laminate and assigned a low value of "00".
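The classification by the inner square can be sketched in Python as follows. The sketch assumes the edge is locally straight and is given as a line a*x + b*y + c = 0, with a positive sign taken, purely by convention here, to mean the high (copper) side; for a straight edge, testing the four points 25 is equivalent to testing whether the edge crosses the square they define.

LOW, TRANSITIONAL, HIGH = "00", "10", "11"

def classify_inner_square(cx, cy, size, a, b, c):
    # Classify a computational pixel centered at (cx, cy) with side `size`.
    # Points 25 lie on the central axes, one quarter of the pixel dimension
    # in from each pixel edge, i.e. size/4 from the center along each axis.
    quarter = size / 4.0
    points_25 = [(cx, cy - quarter), (cx, cy + quarter),
                 (cx - quarter, cy), (cx + quarter, cy)]
    sides = [a * x + b * y + c for (x, y) in points_25]
    if all(s > 0 for s in sides):
        return HIGH
    if all(s < 0 for s in sides):
        return LOW
    return TRANSITIONAL

# Edge x = 1.2 (line 1*x + 0*y - 1.2 = 0), high side to the right:
print(classify_inner_square(1.0, 1.0, 2.0, 1.0, 0.0, -1.2))  # '10'
print(classify_inner_square(3.0, 1.0, 2.0, 1.0, 0.0, -1.2))  # '11'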
In accordance with preferred embodiments of the invention, when a computational pixel is classified as transitional, the multi-bit code allows the pixel to be considered as being divided into sub-sections that are respectively spatially situated in either of the regions along the edge. Additionally, it is appreciated that while, in the example shown, the regions are described in absolute terms of high and low, they may also be gray level regions conventionally represented.
As noted above, for the example given, the computational pixels are twice the size and four times the area of the small pixels 19 of Fig. 1. However, the number of bits used to classify the area covered by the computational pixel is only two, as compared with four bits when a binary map comprised of the small unit pixels is used. As will be illustrated below, for images represented by computational pixels in accordance with the present invention, because resolution is maintained substantially at a computational pixel sub-section level, morphological operations such as erosion/dilation and scale measurements can be carried out at substantially the same resolution as for images having a binary representation in small pixels, for all directions of edge 16. Thus, less than one bit is required to represent an erosion/dilation resolution element. Additionally, a scale type measurement will give the same values for both representations. It should be understood that while, for an individual multi-bit computational pixel, no information is available regarding the orientation of the edge (this being lost when the multi-bit coding is performed), this information is supplied by the neighbors of the pixel.

Figs. 3A-3C illustrate, in a simple manner, two erosion steps using a two-bit quasi-binary pixelization in the erosion of an edge. The figures show an edge 16 and computational pixels, appropriately classified according to the previous discussion. Let us assume that the following very simplified erosion algorithm is used: a computational pixel that touches three pixels having "00" is changed, by changing its designation to "00" if it was originally "10" and to "10" if it was originally "11". Fig. 3A shows the situation prior to any erosion. Fig. 3B shows the situation after the first erosion step, wherein the position of the edge following erosion is indicated as 16b. Fig. 3C shows the situation after the second erosion step, wherein the position of the edge following further erosion is indicated as 16c.
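A toy Python sketch of this very simplified erosion rule follows. The use of an 8-neighborhood and the naive treatment of image borders are assumptions made here for illustration; they are not specified in the text.

def erode_once(image):
    # One erosion pass over a 2-D list of two-bit codes ('00', '10', '11'):
    # a pixel touching at least three '00' pixels drops one level.
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            zeros = sum(1 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy or dx)
                        and 0 <= y + dy < h and 0 <= x + dx < w
                        and image[y + dy][x + dx] == "00")
            if zeros >= 3:
                if image[y][x] == "11":
                    out[y][x] = "10"
                elif image[y][x] == "10":
                    out[y][x] = "00"
    return out

quasi_binary = [["00", "10", "11", "11"],
                ["00", "10", "11", "11"],
                ["00", "10", "11", "11"]]
step1 = erode_once(quasi_binary)  # the '00' region grows into the '10'/'11'
step2 = erode_once(step1)         # region, away from the image border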
It will be noted that in the given example, for one form of a two-bit representation of a computational pixel, erosion takes place at the rate of one-half a computational pixel per step. This is the same rate as would be obtained if smaller pixels and a binary representation had been used. In accordance with one understanding of the operation of erosion and dilation in accordance with a preferred embodiment of the invention, the advance of the edge can be considered as being between points 25 (Fig. 2), and not from the edge to the center of each pixel. Because, in the example shown in Fig. 2, points 25 are each one quarter of the pixel dimension from the edge of each computational pixel along a central axis, each erosion or dilation step effectively advances the edge one half pixel at a time.
As will be illustrated below, it should be appreciated that computational pixels may be divided into a larger number of sections, so that the advance will effectively be at a rate of 1/(n-1), where n is the number of levels used to represent the spatial section of a pixel in one of the regions, and n-1 is the number of sub-sections into which the pixel is divided. It is evident from the above example that the number of comparisons and pixel switches is only half as great when the two-bit pixelization is used, as compared to a conventional binary representation of the image. Moreover, computational pixels that are represented by the extreme values, "00" or "11", can be eroded, dilated or counted as a unit in each step, which is equivalent to operating on four small pixels in the foregoing example. A similar methodology is used for dilation, except that "00" and "10" pixels move up one level on each computational cycle for which they meet a dilation requirement (similar to the erosion requirement described above) based on their nearest neighbors. Moreover, by providing pixels divided into a larger number of sections, as discussed below, instead of erosion or dilation progressing at a rate of one half computational pixel per operation, the erosion or dilation can be made to progress according to the smaller subsections, thus increasing resolution.
In summary, when two-bit pixelization at a given resolution is used, both the data and the number of operations required for erosion and dilation are halved relative to those required when a one-bit pixelized image giving the same erosion rate is used. This is a very significant difference for high speed, high throughput systems in which substantial image processing is required.
It is also appreciated that in scale measurement operations, where a given length or distance is determined by counting pixels, using a multi-bit pixelation with larger computational pixels means that fewer pixels need to be counted. At the same time, the multi-bit pixelation enables resolution at edges to be maintained as in a map of small binary pixels, since the portion of a transitional pixel that needs to be considered in conducting measurement operations - for example none of the pixel, the entire pixel or a section of the pixel - is intrinsic to the multi-bit pixelation in accordance with the present invention. In a similar manner, a larger or smaller number of levels may be used to divide the computational pixels into a larger number of subsections. For example, pixelization using two bits and four levels may be used in systems where erosion at the rate of one third of a computational pixel per erosion step takes place. One possible example of such a system is shown in Fig. 4, in which computational pixels 26 are shown. Also shown in Fig. 4 are points 27, each located along the central vertical and horizontal axes of each computational pixel. The distance of each of points 27 from the edge of the pixel is one sixth of the height and width dimension of the computational pixel, and the distance between respective points 27 is two thirds of the height and width dimension. A rectangle 28 interconnects points 27, and the center 29 of the computational pixel is indicated. Rectangle 28 is utilized, as described below, to determine the values of the bits which represent the pixel.
In accordance with this preferred embodiment of the invention, pixel 26 may have one of four levels, 00, 01, 10 and 11, in order of increasing coverage by the portion of the pixel on the high side of the edge. If rectangle 28 is completely in the high portion, the pixel is coded as 11. If rectangle 28 is completely in the low portion, the pixel is coded as 00. If the rectangle is partly in the high portion and partly in the low portion, the pixel is coded as 01 if the center 29 of the pixel is in the low portion and 10 if the center 29 of the pixel is in the high portion. Thus, the left pixel in Fig. 4 is coded as 00 and the right pixel is coded as 01.
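Under the same straight-edge assumption as in the earlier sketch, the four-level coding of Fig. 4 can be illustrated in Python as follows; the line a*x + b*y + c = 0 represents the edge, and a positive sign is again taken, by convention, to mean the high side.

def classify_four_level(cx, cy, size, a, b, c):
    # Return '00', '01', '10' or '11' for a pixel centered at (cx, cy).
    # Points 27 lie size/6 in from each pixel edge, i.e. size/3 from the center.
    third = size / 3.0
    points_27 = [(cx, cy - third), (cx, cy + third),
                 (cx - third, cy), (cx + third, cy)]
    sides = [a * x + b * y + c for (x, y) in points_27]
    if all(s > 0 for s in sides):
        return "11"  # rectangle 28 entirely in the high portion
    if all(s < 0 for s in sides):
        return "00"  # rectangle 28 entirely in the low portion
    centre_side = a * cx + b * cy + c
    return "10" if centre_side > 0 else "01"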
In such a system, the erosion operation would reduce the status of the pixel from 11 to 10 to 01 to 00, advancing the edge at a rate of one third of a pixel at a time. In accordance with the understanding of erosion and dilation according to preferred embodiments of the invention, described above, with each erosion operation the edge is considered to advance from a point 27 to a center 29, to a point 27, and then to the next point 27 in an adjacent computational pixel, following which the advance is repeated until the erosion operation is completed. It is readily evident that in this manner a resolution of one third of a pixel is obtainable.
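As a small sketch of how a single four-level pixel is stepped through the hierarchy (the neighborhood rule that decides whether a given pixel is stepped at all is of the same kind as in the earlier erosion example and is omitted here):

LEVELS = ["11", "10", "01", "00"]  # high to low

def erode_level(code):
    # One erosion step for a single pixel: move one level down the hierarchy.
    i = LEVELS.index(code)
    return LEVELS[min(i + 1, len(LEVELS) - 1)]

def dilate_level(code):
    # One dilation step for a single pixel: move one level up the hierarchy.
    i = LEVELS.index(code)
    return LEVELS[max(i - 1, 0)]

print(erode_level("11"))   # '10'
print(dilate_level("01"))  # '10'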
By comparison, in order to perform erosion at this rate on single-bit pixels, nine pixels, each represented by a single bit, would be required, and three times as many operations would be needed for each erosion step. Likewise, for measurement operations, resolution is maintained to an accuracy of one third of the size of a computational pixel, while providing for a larger operational block.
It should be noted that for the coding of both Figs. 2 and 4, the resolution in all directions is the same as that for the binary pixelization that is shown in the corresponding figure. While other methods of coding may be used, the method shown in Figs. 2 and 4 is characterized as preferred for this reason.
Similar methodologies may be used for pixel coding using three bits, which may include five, six, seven or eight levels, to divide the computational pixel into a corresponding number of subsections. For five levels, two squares would be inscribed in the pixel. These squares would both have the same orientation as squares 24 and 28 and would be placed so as to provide the same resolution in all directions as for the underlying binary pixels which give the same resolution. For six levels, two squares and the center of the pixel would be used, as in Fig. 4. This method could be used for seven or eight levels with the addition of a further square inside the pixel. Similarly, it could be used for more levels, with more bits per pixel and more squares. Erosion and dilation would be performed in a manner analogous to that of the pixelizations of Figs. 2 and 4.
The range of erosion and dilation algorithms possible for two-bit (or greater-bit) pixelization is very similar to that available for one-bit pixelization. In addition, the range of problems is similar, with the main problem being the directionality of the erosion/dilation. Similar solutions may be provided, for example alternating erosion rules with different biases, and others of the methods described, together with various algorithms, in the above referenced book by Russ.
Figs. 6A-6E show a series of operators which can be used, for the coding system illustrated with respect to Fig. 2, for achieving erosion with a high degree, on the average, of isotropy. While in principle it would be desirable to have a single operator that performs erosion isotropically, such an operator is not achievable, in essence because the erosion step is different in each direction. Thus, in accordance with a preferred embodiment of the invention, a relatively large number of operators is used in series in order to provide, on the average, isotropic erosion. Each of the operators provides a different erosion rate over different primary directions characteristic of a geometry of 3x3 three-level computational pixels, each equivalent to 6 x 6 small binary pixels, namely approximately 0, 14, 26, 37 and 45 degrees (and, of course, mirror images and 90 degree rotations of these angles). For binary pixels of the prior art, only three directions are built into a 3x3 matrix of pixels, namely 0, 26 and 45 degrees (and, of course, mirror images and 90 degree rotations of these angles). Thus, prior art methods used only up to three operators (and usually only two operators) for near-uniform erosion/dilation.
For each of the operators of Figs. 6A-6E, the state of the central pixel is changed by one level (either from 11 to 10 or from 10 to 00) if the surrounding pixels meet the criteria shown in the matrices of the operator. Black elements and * mean "don't care". Furthermore, for those matrices where the center pixel is shown as dark, the central pixel changes from 11 to 00 when the criteria are met.
Dilation can be performed in one of two ways. The simplest way is to invert the image, such that 00 is changed to 11 and 11 to 00, and to perform an erosion operation on the resulting image. Alternatively, the inversion may be performed in the operators. In a preferred embodiment of the invention, the sequence of application of the operators is determined by keeping track of the amount of erosion performed by the previous erosion steps and then applying, as the next operator in the sequence, an operator that will correct any anisotropy to the greatest extent possible. For the operators shown, one such sequence for the first 24 erosion steps is that shown in Figs. 6B, 6C, 6A, 6D, 6B, 6B, 6E, 6C, 6A, 6C, 6D, 6B, 6B, 6C, 6E, 6A, 6C, 6B, 6C, 6D, 6A, 6B, 6B, 6E.
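The first, simpler route can be sketched in Python as shown below; erode_once stands for whichever erosion operator or operator sequence is in use (the toy one given earlier would do), and the assumption that the transitional code "10" is left unchanged by the inversion applies to the three-level coding only.

INVERT = {"00": "11", "11": "00", "10": "10"}

def dilate_by_inversion(image, erode_once):
    # Invert the extreme codes, erode the inverted image, then invert back.
    inverted = [[INVERT[p] for p in row] for row in image]
    eroded = erode_once(inverted)
    return [[INVERT[p] for p in row] for row in eroded]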
In other preferred embodiments of the invention, the sequence is optimized to result in the most nearly perfect isotropy at a given level of erosion. For example, if there is an expected number of erosions that will be necessary, the process may be optimized to allow for greater anisotropy at intermediate erosion steps. It should be understood that while erosion and dilation using two-bit coded pixels have been described with respect to morphology determinations in PC board testing, the utility of the invention is not limited to PC board testing; it can be used for many of the same applications for which single-bit pixel (binarized) images are used. Some of these uses are described in the book by Russ. For example, operations such as dilation followed by erosion may also be performed.
It is instructive, however, to describe some of the special applications of erosion/dilation in the field of PC board testing. It should be understood that these applications are not new (for binary pixels) and that they have been performed, at least, in the above mentioned Orbotech PC board tester. However, since they are not described in the book by Russ, it may be instructive to describe some of them to underline the ultimate utility of the method in PC board testing.
Fig. 5A shows a line 40 having an irregular edge. In particular, in one area, the width of the line is reduced. In general, PC boards are designed and manufactured according to certain rules. One of these rules is a minimum width of conductor. One use of erosion/dilation image processing, or scale measuring, is to determine whether there exist, anywhere on the board, lines that have a thickness less than the minimum design thickness. Fig. 5A shows the result of successive erosion operations on the line (for simplicity, the pixelization is not shown). This operation by erosion would, as described above, be effectively the same for single- and multi-bit pixelization. After a number of successive operations, the width of the line at the narrowest point goes to zero. If this situation is detected, the position of the break in the line is noted. The number of erosions required to break the line gives the width of the line at that point.
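A much simplified Python sketch of this width check is given below; a real tester would detect the first break in connectivity and record its position, whereas for brevity this sketch merely counts erosion passes until the conductor pattern disappears entirely.

def steps_until_gone(image, erode_once, max_steps=100):
    # Count erosion passes until no 'high' or 'transitional' pixels remain.
    steps = 0
    current = image
    while steps < max_steps and any(p != "00" for row in current for p in row):
        current = erode_once(current)
        steps += 1
    return steps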
Fig. 5B shows a different situation in which the distance between two adjoining lines 42 and 44 has been reduced by an imperfection in the manufacture of one of the lines. Here a dilation function would be successively applied to the image until the two lines meet. The number of dilation steps would then give the distance between the lines prior to dilation.
Other uses of erosion/dilation are well known in the art and will occur to persons of skill in the art. Another use of multi-bit pixelization in morphology determination is in the measurement of line widths or spaces utilizing scales. In a conventional use of this method, the number of high (or low) binary pixels is counted along a plurality of directions around a point. This measurement is preferably made at 0, 14, 26, 37, 45, 53, 64, 76 and 90 degrees. The direction which has the smallest number of pixels is considered to give the width of the line/space. In a preferred embodiment of the invention, a similar measurement is made utilizing one of the above described methods of multi-level pixelization. A count of the number of pixels in a limited number of directions is made, with the transition pixels counting as partial distances in accordance with the level of the pixel. For example, for the three-level pixelization of Fig. 2, the number of pixels and partial pixels is counted in the above directions, and the line or space width is given as the total number of pixels and partial pixels times the pixel width. For the example of Fig. 4, the pixels are likewise counted in each of the above directions. It should be understood that the resolution of this method is the same as if binary pixels having a resolution (n-1) times that of the multi-level pixels were used in the measurement, where n is the number of levels used in characterizing the pixel, and n-1 is the number of subsections into which the pixel is divided.
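A minimal Python sketch of this counting, for the three-level coding, follows. The assumption that a transitional "10" pixel contributes half a pixel width is illustrative; gathering the run of codes along each of the listed directions is omitted.

WEIGHT = {"00": 0.0, "10": 0.5, "11": 1.0}

def run_width(codes, pixel_width):
    # Width contributed by the codes met along one scan direction.
    return sum(WEIGHT[c] for c in codes) * pixel_width

print(run_width(["10", "11", "11", "10"], pixel_width=2.0))  # 6.0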
The present invention has been described partly with reference to non-limiting preferred embodiments thereof. However, the invention is not limited by details of these embodiments, but is delineated by the following claims. As used in the following claims, the words "comprises", "comprising," "includes", "including" or their conjugations shall mean "including but not necessarily limited to".

Claims

1. A method of multi-level pixelization of images comprising: determining at least one edge between a first area and a second area in the image; dividing the image into pixels; assigning a first value to pixels completely in the first area; assigning a second value to pixels completely in the second area; and assigning a value to pixels through which the edge passes, said value being one of the first value, the second value or a different value.
2. A method according to claim 1 wherein the value assigned to pixels through which the edge passes is based on a relationship between portions of the pixel in the first and second areas.
3. A method according to claim 2 wherein the value assigned to pixels through which the edge passes is based on the area of portions of the pixel in the first and second areas.
4. A method according to claim 1 wherein the value assigned to pixels through which the edge passes is based on position and orientation of the edge in the pixel.
5. A method according to any of the preceding claims wherein the first value corresponds to bright areas of the image, the second value corresponds to dark areas of the image and the edges correspond to edge boundaries between the bright and dark areas.
6. A method according to any of the preceding claims wherein assigning said values comprises: assigning said first value to pixels through which the edge passes if they meet a first condition; assigning said second value to pixels through which the edge passes if they meet a second condition; and assigning a third value to pixels through which the edge passes if they meet a third condition.
7. A method according to claim 6 wherein the first, second and third conditions are related to a two level thresholding, and the value identifies the relative proportion of a pixel in the first and second areas respectively.
8. A method according to any of claims 1-6 wherein assigning pixel values comprises: forming a sub-area within the pixel; assigning the first value to the pixel if the sub-area is entirely within the first area; assigning the second value to the pixel if the sub-area is entirely within the second area; and assigning a third value to the pixel if the edge passes through the sub-area.
9. A method according to any of the preceding claims wherein each pixel has one of three values.
10. A method according to any of claims 6-8 wherein pixelizing the image comprises: assigning a fourth value to pixels through which the edge passes if they meet a fourth condition.
11. A method according to claim 10 wherein the fourth value is assigned to pixels based on position and orientation of the edge in the pixel.
12. A method according to claim 10 or claim 11 wherein pixelizing the image comprises: assigning a fifth value to pixels through which the edge passes if they meet a fifth condition.
13. A method according to claim 12 wherein the fifth value is assigned to pixels based on position and orientation of the edge in the pixel.
14. A method according to any of the preceding claims and including: acquiring the image as a gray level image at a given optical pixel size, wherein the pixelization is performed at a different spatial resolution than that of the optical pixels.
15. A method according to claim 14 wherein the pixelization is performed at a higher spatial resolution than that of the optical pixels.
16. A method according to any of claims 1-14 and including: acquiring the image as a gray level image at a given optical pixel size, wherein the pixelization is performed at the same spatial resolution as that of the optical pixels.
17. A method according to any of the preceding claims and including: assigning a hierarchy to the pixel values ranked based at least approximately on the amount of filling of the pixel with the first area that the values represent.
18. A method of image analysis comprising: pixelizing the image utilizing a pixelization method in which the values of the pixels are represented by one of more than two values; and performing at least one spatial morphology operation on the pixelized image.
19. A method according to claim 18 wherein the at least one spatial morphology operation comprises a measurement of distance.
20. A method according to claim 18 or claim 19 wherein the at least one spatial morphology operation comprises at least one erosion operation.
21. A method according to claim 20 wherein the at least one erosion operation is substantially isotropic.
22. A method according to any of claims 18-21 wherein the at least one spatial morphology operation comprises at least one dilation operation.
23. A method according to claim 22 wherein the at least one dilation operation is substantially isotropic.
24. A method according to any of claims 18-23 wherein erosion or dilation is performed at a resolution higher than a pixel size of the pixelization.
25. A method according to any of claims 18-24 wherein the image is pixelized according to any of claims 1-17.
26. A method of erosion/dilation of a pixelized image comprising: providing an image pixelized according to claim 17; and performing an erosion/dilation based on the position in the hierarchy of neighboring pixels.
27. A method according to claim 26 and comprising: changing a pixel value to a lower value in the hierarchy if it meets an erosion criterion.
28. A method according to claim 27 wherein said erosion criterion is based on the position in the hierarchy of neighboring pixels.
29. A method according to any of claims 26-28 and comprising: changing a pixel value to a higher value in the hierarchy if it meets a dilation criterion.
30. A method according to claim 29 wherein said dilation criterion is based on the position in the hierarchy of neighboring pixels.
31. A method according to any of claims 26-30 and including: further iteratively spatially dilating or eroding the image.
32. A method according to claim 31 wherein further iteratively spatially dilating or eroding utilize criteria for dilating or eroding having a compensating anisotropy for the different iterations.
33. A method according to any of claims 26-32 wherein erosion or dilation is performed at a resolution higher than a pixel size of the pixelization.
34. A method of performing a morphology operation comprising: providing a pixelated image that is pixelated at a given spatial resolution; and performing at least one morphology operation at a resolution finer than the given spatial resolution.
35. A method according to claim 34 wherein the at least one spatial morphology operation comprises a measurement of distance.
36. A method according to claim 34 or claim 35 wherein the at least one spatial morphology operation comprises at least one erosion operation.
37. A method according to claim 36 wherein the at least one erosion operation is substantially isotropic.
38. A method according to any of claims 34-37 wherein the at least one spatial morphology operation comprises at least one dilation operation.
39. A method according to claim 38 wherein the at least one dilation operation is substantially isotropic.
40. A method for analytically representing an image having therein at least two regions, the method comprising: dividing the image into pixels; and assigning to each pixel a multi-bit code, said code relating to sub-pixel portions of the pixel that are treated as being situated in the first region and the second region.
41. A method according to claim 40, wherein the assigning of said values includes: assigning a first predetermined value to pixels that are completely in the first region; and assigning a second predetermined value to pixels that are completely in the second region.
42. A method according to claim 41 and comprising: assigning a value to pixels that are partly in the first region and partly in the second region, said value being determined by the portion of the pixel in the first and second regions with reference to a boundary edge between the two regions.
43. A method according to claim 42 wherein assigning a value to pixels that are partly in the first region and partly in the second region comprises assigning one of said first or second values or a third value to the pixel.
44. A method according to claim 40 or claim 41 and comprising: assigning a different value to pixels that are partly in the first region and partly in the second region, said value being determined by a boundary edge between the two regions meeting a spatial condition.
45. A method according to claim 44 wherein assigning a value to pixels partly in the first and second regions comprises:
(a) defining at least one sub-area within the pixel; and
(b) assigning the code based on a spatial relationship between the at least one sub-area and the edge.
46. A method according to claim 45 wherein assigning the code (b) includes: assigning the first value to pixels for which the sub-area is completely in the first region; and assigning the second value to pixels for which the sub-area is completely in the second region.
47. A method according to claim 45 or claim 46 wherein assigning the code (b) includes: assigning the different value to pixels for which the at least one sub-area is partly in the first and partly in the second regions.
48. A method according to any of the preceding claims wherein the image is an image of a printed circuit board.
PCT/IL1998/000477 1998-09-28 1998-09-28 Pixel coding and image processing method WO2000019372A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/IL1998/000477 WO2000019372A1 (en) 1998-09-28 1998-09-28 Pixel coding and image processing method
AU94562/98A AU9456298A (en) 1998-09-28 1998-09-28 Pixel coding and image processing method
IL14202898A IL142028A0 (en) 1998-09-28 1998-09-28 Pixel coding and image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IL1998/000477 WO2000019372A1 (en) 1998-09-28 1998-09-28 Pixel coding and image processing method

Publications (1)

Publication Number Publication Date
WO2000019372A1 true WO2000019372A1 (en) 2000-04-06

Family

ID=11062362

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL1998/000477 WO2000019372A1 (en) 1998-09-28 1998-09-28 Pixel coding and image processing method

Country Status (3)

Country Link
AU (1) AU9456298A (en)
IL (1) IL142028A0 (en)
WO (1) WO2000019372A1 (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5123085A (en) * 1990-03-19 1992-06-16 Sun Microsystems, Inc. Method and apparatus for rendering anti-aliased polygons
US5438656A (en) * 1993-06-01 1995-08-01 Ductus, Inc. Raster shape synthesis by direct multi-level filling

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
EN-HUI LIANG ET AL: "HIERARCHICAL ALGORITHMS FOR MORPHOLOGICAL IMAGE PROCESSING", PATTERN RECOGNITION, vol. 26, no. 4, 1 April 1993 (1993-04-01), pages 511 - 529, XP000367195 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE38559E1 (en) 1984-12-20 2004-07-27 Orbotech Ltd Automatic visual inspection system
USRE38716E1 (en) 1984-12-20 2005-03-22 Orbotech, Ltd. Automatic visual inspection system
US7200259B1 (en) 1999-07-25 2007-04-03 Orbotech Ltd. Optical inspection system
US7388978B2 (en) 1999-08-05 2008-06-17 Orbotech Ltd. Apparatus and methods for the inspection of objects
US7181059B2 (en) 1999-08-05 2007-02-20 Orbotech Ltd. Apparatus and methods for the inspection of objects
US7206443B1 (en) 1999-08-05 2007-04-17 Orbotech Ltd. Apparatus and methods for the inspection of objects
US7218771B2 (en) 1999-12-23 2007-05-15 Orbotech, Ltd. Cam reference for inspection of contour images
US7177458B1 (en) 2000-09-10 2007-02-13 Orbotech Ltd. Reduction of false alarms in PCB inspection
EP1237358A3 (en) * 2000-12-29 2006-08-16 ETALK Corporation System and method for reproducing a video session using accelerated frame recording
EP1237358A2 (en) * 2000-12-29 2002-09-04 e-talk Corporation System and method for reproducing a video session using accelerated frame recording
US7231080B2 (en) 2001-02-13 2007-06-12 Orbotech Ltd. Multiple optical input inspection system
US7925073B2 (en) 2001-02-13 2011-04-12 Orbotech Ltd. Multiple optical input inspection system
US20100239187A1 (en) * 2009-03-17 2010-09-23 Sehoon Yea Method for Up-Sampling Depth Images
US8189943B2 (en) * 2009-03-17 2012-05-29 Mitsubishi Electric Research Laboratories, Inc. Method for up-sampling depth images

Also Published As

Publication number Publication date
AU9456298A (en) 2000-04-17
IL142028A0 (en) 2002-03-10


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 09787132

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 142028

Country of ref document: IL

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase