WO2003034050A1 - System and method for inspecting components - Google Patents

System and method for inspecting components

Info

Publication number
WO2003034050A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
feature
brightness
component
planarity
Prior art date
Application number
PCT/US2002/030629
Other languages
French (fr)
Inventor
Thomas Casey Carrington
Hak Chuah Sim
Mark Christopher Moore
Charles Kenneth Harris
Marvin Dean Turney
Sanjeev Mathur
Dennis Melvin Botkin
Garry Melvin Starnes
Original Assignee
Semiconductor Technologies & Instruments, Inc.
Priority date
Filing date
Publication date
Application filed by Semiconductor Technologies & Instruments, Inc.
Publication of WO2003034050A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/95Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/956Inspecting patterns on the surface of objects
    • G01N21/95684Patterns showing highly reflecting parts, e.g. metallic elements

Definitions

  • the present invention relates generally to the inspection of components, and more specifically to the use of 2D measurements to determine the co-planarity of components, including semiconductor circuits.
  • the techniques and methods of the invention can also be applied to the inspection of bumped contacts for ball grid array (BGA) integrated circuits, chip scale package (CSP) integrated circuits, bumped dies, wafers, and other suitable components.
  • Such systems can be used to perform lead inspection (such as to identify bent leads), component marking inspection (such as to verify that the proper vendor's mark, component number and type designators have been applied), and component defect inspection (such as to locate holes, cracks, or other irregularities in the component) of semiconductor packages.
  • a system and method for inspecting components are provided that overcome known problems with inspecting of components.
  • a system and method for inspecting components such as for unacceptable co- planarity or other conditions, by inferring 3D data from measurements of 2D data are disclosed.
  • a system for the inspection of a component having three or more features, such as ball contacts of a ball grid array includes a light source illuminating the component with non-coherent light, such as an on-axis light source.
  • An image system generates image data of the component that includes feature data for each feature.
  • a co-planarity system receives the image data and generates co-planarity data using feature data of the image data, such as by determining whether brightness data for each of the ball contacts indicates that one or more of the ball contacts is not co-planar with the other ball contacts.
  • Embodiments of the present invention provide many important technical advantages.
  • One advantage of an embodiment of the present invention is a system for inspection of components that reduces the amount of time spent measuring 3D data, such as co- planarity of ball-type leads on semiconductor devices.
  • the present invention thus decreases the production time for semiconductor devices.
  • FIGURE 1 is a diagram of a system for determining the co- planarity of a component from brightness data in accordance with an exemplary embodiment of the present invention
  • FIGURE 2 is a diagram of a system for inspecting components in accordance with an exemplary embodiment of the present invention
  • FIGURE 3 is a diagram of a system for determining the co- planarity of a component in accordance with an exemplary embodiment of the present invention
  • FIGURE 4 is a diagram of a system for determining feature quality data, such as ball contacts of a semiconductor device, in accordance with an exemplary embodiment of the present invention
  • FIGURE 5 is a flowchart of a method for determining the co- planarity of features of a component using 2D data in accordance with an exemplary embodiment of the present invention
  • FIGURE 6 is a flowchart of a method for determining feature quality data in accordance with an exemplary embodiment of the present invention.
  • FIGURE 7 is a flowchart of a method for establishing feature quality limit data in accordance with an exemplary embodiment of the present invention.
  • FIGURE 1 is a diagram of a system 100 for determining the co- planarity of a component from brightness data in accordance with an exemplary embodiment of the present invention.
  • System 100 uses the brightness data from uniform features on the component, such as ball contacts of a ball grid array, to determine the co- planarity and other suitable characteristics of the features.
  • System 100 includes a system support that allows it to be placed on an existing inspection system, so as to use the component handling systems of such existing inspection systems. In this manner, system 100 can be used to supplement or supplant the inspection functions of existing inspection systems but allows the component handling systems of those existing inspection systems to be used.
  • System 100 includes an image analysis system such as co- planarity system 102, which can be implemented in hardware, software, or a suitable combination of hardware and software and which can be one or more software systems operating on a general purpose server platform.
  • a software system can include one or more objects, agents, threads, subroutines, separate software applications, one or more lines of code or other suitable software structures operating in two or more separate software applications, on one or more different processors or other suitable software architectures.
  • a software system can include one or more lines of code or other suitable software structures operating in a general purpose software application, such as an operating system, and one or more lines of code or other suitable software structures operating in a specific purpose software application.
  • Co-planarity system 102 is coupled to digital image system 104 via communications medium 106.
  • the term "couple,” and its cognate terms such as “couples,” and “coupled,” can include a physical connection (such as through one or more copper conductors) , a virtual connection (such as one or more randomly assigned data memory locations of a data memory device) , a logical connection (such as through one or more logical devices of a semiconducting circuit) , a wireless connection, other suitable connections, or a suitable combination of such connections.
  • systems and components can be coupled to other systems and components through intervening systems and components, such as through an operating system of a general purpose server platform.
  • Communications medium 106 can be a serial data bus, a parallel data bus, a copper conductor, a data memory device, a semiconducting circuit, other suitable communications media, or a suitable combination of such communications media.
  • Digital image system 104 is mounted on system support 114 and generates an N x M array of pixels of brightness data, such as a relative brightness value ranging from 0 to 255.
  • the array can be 1024 x 1024 pixels having an adjustable image size ranging from 0.12 x 0.12 inches to 0.4 x 0.4 inches, such that when the image size is 0.2 x 0.2 inches, each pixel represents an area of approximately 5.12 x 5.12 micrometers.
  • Light source 108 is mounted on system support 114 and illuminates a component that is disposed underneath system 100 in a manner that causes uniform features on the component to be evenly illuminated.
  • light source 108 can be a commercially available, experimental or custom-made collinear illuminator, such as a light source which may be integral to a diffuser and partial mirror.
  • light source 108 can utilize a fiber optic cable or other suitable optical methods to project a field of illumination onto the component being measured.
  • Light source 108 allows an imaging system to generate direct-axis image data of the component.
  • Other suitable illumination sources can also or alternatively be used.
  • Light source 108 is oriented so as to provide direct illumination of the component (i.e. light perpendicular to the base plane of the component being inspected) .
  • System support 114 allows system 100 to be placed on an existing component inspection system that has a component handling system. In this manner, system support 114 can be used to supplement or supplant the inspection analysis functions of such other systems. Likewise, system support 114 can be used in conjunction with a component handling system that includes no inspection functionality, such as a component handling system that is used in conjunction with two or more different modular image analysis systems such as system 100.
  • system 100 inspects components and generates co-planarity data that is used to determine whether the component is coplanar within predetermined tolerances.
  • Co-planarity system 102 receives digital image data of the component from digital image system 104 that is generated when the component is illuminated by light source 108, and uses the brightness data of three or more known features on the component to determine whether the component is coplanar.
  • the features can be ball contacts of a ball grid array, chip scale package, a bumped die, or other suitable components for which co- planarity of the ball contacts is required in order to ensure that uniform electrical contact can be made with each ball contact.
  • Light source 108 illuminates the three or more known features in a manner that causes the features to generate uniform levels of brightness data when digital image system 104 generates an image, if those features are otherwise undamaged and coplanar.
  • the digital image data generated by digital image system 104 is received by co-planarity system 102, and the pixels of brightness data corresponding to the two or more known features are then identified.
  • the brightness data for these pixels is then analyzed by co-planarity system 102 in conjunction with empirically- or otherwise-determined rules that define allowable variations for the brightness data, and co-planarity data is generated that can be used to determine whether the component, the features, or the component and the features have an acceptable level of co- planarity, or are unacceptably non-planar.
  • any feature having brightness data that is statistically different from the brightness data for other features to a predetermined degree correlates to features that do not form acceptable electrical contact.
  • the brightness data for a component can be analyzed to determine whether the statistical data for the features of each component fall within the empirically observed ranges for the brightness data of the control components, whether the brightness data for any given feature correlates with unacceptable features to an acceptable level of certainty, or whether any other suitable conditions can be detected that can be used to establish whether the feature will operate properly in service.
  • FIGURE 2 is a diagram of a system 200 for inspecting components in accordance with an exemplary embodiment of the present invention.
  • System 200 shows system 100 in operation in conjunction with an existing component inspection system that inspects semiconductor devices that have been cut from silicon wafers.
  • System 200 includes pick and place arm 202, wafer support 204, positioning mechanism 212, wafer 208, and components 210.
  • Positioning mechanism 212 moves wafer support 204 into a position under system 100 for visual light-based inspection.
  • Pick and place arm 202 can include a telescoping head with a suitable orifice at its distal end connected to a source of vacuum (not explicitly shown) that provides a pick up and holding force on a component 210.
  • wafer support 204 holds components 210 that have been cut from wafer 208, which can include chip scale packages, ball grid arrays, bumped wafers, or other suitable components. The components 210 on wafer support 204 can be complete and ready for final inspection.
  • Pick and place arm 202 can move any of components 210 after they have been inspected and approved to a suitable packing media.
  • the packing media can be sticky tape 214, tape and reel 216, a waffle pack or small tray, a film frame, a gel pack, other suitable packing media, or a suitable combination of packing media.
  • the packing media can also be processed using a belt drive mechanism or other suitable systems or components.
  • FIGURE 3 is a diagram of a system 300 for determining the co- planarity of a component in accordance with an exemplary embodiment of the present invention.
  • System 300 includes co- planarity system 102 and image receiving system 302, grid alignment system 304, and 2D co-planarity system 306, each of which can be implemented in hardware, software, or a suitable combination of hardware and software, and which can be one or more software systems operating on a general purpose processing platform.
  • Image receiving system 302 receives digital image data of a component, such as image data generated when the component is being illuminated by light source 108, laser 110, or other suitable image data.
  • Image receiving system 302 controls the analysis of the image data by grid alignment system 304 and 2D co- planarity system 306, such as by determining the type of light that was used to generate the image data, by controlling the image analysis process in accordance with predetermined process flow controls, in response to operator-entered instructions, or in other suitable manners.
  • Grid alignment system 304 receives the image data from image receiving system 302, locates three or more features of the component in the image data that will be used for co-planarity analysis, and generates grid data.
  • Grid alignment system 304 can identify bumped contacts of a BGA, CSP, bumped wafer, or other suitable features.
  • grid alignment system 304 can identify the features using feature design data, such as the diameter of the bumped contacts.
  • a user can select a feature and the image data can then be analyzed to find other similar features, such as by using (copending application on location of image features in image data, incorporated by reference).
  • Grid alignment system 304 then generates a template that identifies the boundaries of the features of interest, so as to allow areas outside of the features to be ignored.
  • Grid alignment system 304 can also generate position control data, such as for controlling the position of belt drive mechanism 212 of FIGURE 2 or other suitable positioning devices or systems.
  • 2D co-planarity system 306 receives the image data from image receiving system 302 and the grid data from grid alignment system 304 and analyzes the brightness data of each feature.
  • 2D co-planarity system 306 determines ball brightness data for contact bumps of a bumped grid array, chip scale package, bumped wafer, or other suitable components, such as by measuring the brightness data for each pixel in a square inscribed within a circle representing the ball and having a diagonal length equivalent to the diameter of the ball.
  • 2D co-planarity system 306 then analyzes the feature image data to generate feature analysis data, such as the average feature brightness, the maximum feature brightness, the standard deviation of the feature brightness for all features of a component, and other suitable data.
  • 2D co-planarity system 306 then generates feature acceptance data based on the feature analysis data, such as by comparing the feature analysis data to acceptable ranges for feature analysis data.
  • 2D co-planarity system 306, image receiving system 302, or other suitable systems can then generate component control data based on the feature analysis data that causes pick and place arm 202 to transfer the component to a packing media, a holding bin for rejected components, to by-pass the component and select the next component, or to perform other suitable functions.
  • In operation, system 300 generates co-planarity data from image data of a component. Features, such as bumped leads on a semiconductor device, are first located and a grid pattern is generated that outlines the areas in which inspection and analysis of the image data will be performed. System 300 can accept or reject components by determining co-planarity from two-dimensional data or other suitable data.
  • FIGURE 4 is a diagram of a system 400 for determining feature quality data, such as ball contacts of a semiconductor device, in accordance with an exemplary embodiment of the present invention.
  • System 400 includes 2D co-planarity system 306 and feature brightness system 402, brightness sigma system 404, brightness deviation system 406, feature quality system 408, and average brightness system 410, each of which can be implemented in hardware, software, or a suitable combination of hardware and software, and which can be one or more software systems operating on a general purpose server platform.
  • Feature brightness system 402 generates brightness data for each feature of a component.
  • feature brightness system 402 calculates the average of the pixel brightness data values of the pixels within a square centered at the feature's centroid and having a diagonal equivalent to the length of the feature diameter, such as where the feature is a circle.
  • Pixel brightness data can have a value from zero for black to 255 for white or other suitable values correlated to a brightness measuring system.
  • feature brightness system 402 generates average brightness data by calculating the percent brightness, such as zero percent for pitch black (pixel brightness level zero) and one hundred percent for pure white (pixel brightness level 255) .
  • Feature brightness system 402 provides the brightness data to brightness sigma system 404, brightness deviation system 406, and average brightness system 410. Feature brightness system 402 also determines whether the brightness data for each feature exceeds predetermined maximum or minimum levels, such as in cases where a large number of features are flattened or otherwise unacceptable but would produce acceptable results through brightness sigma system 404 and brightness deviation system 406, and provides feature acceptance data to feature quality system 408 and to average brightness system 410 that defines whether a feature has acceptable, borderline, or unacceptable average brightness data.
  • Brightness sigma system 404 receives brightness data from feature brightness system 402 and generates brightness sigma data for the features. In one exemplary embodiment, brightness sigma system 404 determines the standard deviation of the brightness data for all features of a component, and then determines whether the values fall outside of a predetermined number of standard deviations from the norm of the average brightness data for all features. Brightness sigma system 404 then generates feature acceptance data based on the brightness sigma data and the standard deviation data. Brightness sigma system 404 then provides brightness sigma data, feature acceptance data, and other suitable data to feature quality system 408 that identifies those features that have average brightness data that is acceptable, borderline, or unacceptable in light of the standard deviation of the average brightness data.
  • Brightness deviation system 406 receives the feature brightness data from feature brightness system 402 and the average feature brightness data from average brightness system 410 and generates brightness deviation data for each feature. In one exemplary embodiment, brightness deviation system 406 determines whether any features have average brightness data values that are greater or lesser than the component average brightness data by more than a predetermined amount. Brightness deviation system 406 then generates feature acceptance data based on the brightness deviation data and the component average brightness data. Brightness deviation system 406 provides brightness deviation data, feature acceptance data, and other suitable data to feature quality system 408 that identifies those features that have average brightness data that is acceptable, borderline, or unacceptable in light of the component average brightness data.
  • Feature quality system 408 receives average brightness data, brightness sigma data, brightness deviation data, feature acceptance data, and other suitable data and determines whether the features of the component indicate that co-planarity of the features is acceptable. Feature quality system 408 can evaluate borderline feature data, feature data for all features, or other suitable data, and can generate component acceptance data based on the feature data. In one exemplary embodiment, feature quality system 408 can generate control data that causes pick and place arm 202 to move component 210 from wafer support 204 to a packing media or a holding bin for rejected components, can generate a prompt requesting operator assistance, or can perform other suitable functions.
  • Average brightness system 410 receives the feature brightness data for all features of the component from feature brightness system 402 and generates average feature brightness data for the component. The average feature brightness data is then provided to brightness deviation system 406 and other suitable systems. In operation, system 400 accepts or rejects components based on the co-planarity of features of the component by comparing generated feature quality data to predetermined limits. Components having feature quality data within predetermined limit data can be accepted, and components having feature quality data beyond the limit data can be rejected. In this manner, system 400 permits the use of 2D criteria for determining the co-planarity of features on the component, such as bumped contacts of a ball grid array or other suitable semiconductor devices.
  • FIGURE 5 is a flowchart of a method 500 for determining the co-planarity of features of a component using 2D data in accordance with an exemplary embodiment of the present invention.
  • Method 500 can be used to determine whether bumped contacts of a ball grid array, bumped wafer, chip scale package, or other suitable components are sufficiently co-planar to allow them to operate properly when placed in service.
  • Method 500 begins at 502 where image data of a component is captured.
  • the component can include a ball grid array, a chip scale package, a bumped wafer, or other suitable configurations of semiconductor packages. The method then proceeds to 504.
  • a grid pattern is generated.
  • the grid pattern is generated by determining the edges of two or more features of the component. Alternatively, a user can manually select the appropriate grid pattern.
  • the method then proceeds to 506 where it is determined whether the features within the image are aligned with the grid pattern selected at 504. If not, the method proceeds to 508 where the component can be adjusted so that the features are aligned with the grid.
  • control data can be sent to belt drive mechanism 212 to cause component 210 to be moved so as to be aligned with the grid pattern.
  • the grid pattern can be adjusted so that the features are aligned with the grid. Other suitable procedures can also be used. The method then returns to 502.
  • If the features are aligned with the grid pattern, the method proceeds to the step at which feature quality data is generated. Feature quality data can include feature brightness data, feature brightness sigma data, feature brightness deviation data, and other suitable data.
  • the method then proceeds to 512 where the feature quality data is analyzed, such as by comparing the feature quality data to predetermined limits.
  • the predetermined limits can be determined based upon empirical data for features from similar components that are determined to have acceptable coplanarity. Other suitable procedures can also be used. The method then proceeds to 514.
  • the component is accepted or rejected based on the results of the analysis at 512. In one exemplary embodiment, if any of the individual classes of feature quality data exceed a predetermined limit, then the component can be rejected. In another exemplary embodiment, additional empirical relationships between acceptable results for certain classes of feature quality data and borderline or unacceptable results for other classes of feature quality data can be determined, 3D image data can be analyzed, or other suitable procedures can be performed. The method then proceeds to 516 where the next component is prepared for inspection. The method then returns to 502.
  • method 500 allows image data of a component to be analyzed to determine whether two or more features on the component are co-planar.
  • Method 500 uses 2D image data to generate co-planarity pass/fail criteria, which results in faster inspection of components having features, such as ball grid arrays, chip scale packages, bumped wafers, or other suitable components.
  • FIGURE 6 is a flowchart of a method 600 for determining feature quality data in accordance with an exemplary embodiment of the present invention.
  • Method 600 begins at 602 where the pixels of each feature within the image are identified.
  • the pixels used to determine feature quality data can be located within a square inscribed within the circle in the image data representing a ball-type lead on a semiconductor device.
  • the inscribed square can have a diagonal of a length equivalent to the length of the diameter of the circle.
  • a rectangle, polygon, or other suitable shapes can be used to approximate the shape of the feature.
  • random pixels within the circle can be used to generate the feature quality data
  • the actual boundary of the feature can be detected and the brightness data for each pixel within the boundary can be used to determine feature quality, or other suitable procedures can be used. The method then proceeds to 604.
  • the pixel brightness of each pixel identified at 602 is measured.
  • the pixel brightness can be a value from zero to 255 where zero represents pitch black and 255 represents pure white.
  • the method then proceeds to 606 where the average brightness data of the feature is determined.
  • the average brightness data for a feature is determined by calculating the mean of the pixel brightness values of the pixels measured at 604. It can also be determined whether the average brightness data is borderline to or greater than a predetermined limit, such as where average brightness data above that limit has been empirically or otherwise determined to represent a damaged feature. If the average brightness data is borderline to or above the limit, feature discrepancy data can be generated and stored.
  • the statistical distribution of pixel brightness data within the feature can also be analyzed, such as to identify features having standard deviations that are borderline to or exceed predetermined allowable limits, a number of pixels having brightness values that lie outside of a predetermined number of standard deviations that is borderline to or greater than a predetermined limit, or other suitable per-feature statistical data.
  • Feature discrepancy data for such features can also or alternatively be stored. The average brightness data for the feature is then stored, and the method proceeds to 608.
  • the feature brightness is determined.
  • feature brightness is determined by converting the value of the average brightness data for the feature to a percentage, such as where zero percent represents black (feature brightness data value zero) and where one hundred percent represents white (feature brightness data value 255).
  • the method then proceeds to 610 where it is determined whether the feature brightness data has been determined for all of the features of the component. If not, the method proceeds to 612 where the next feature is identified. The method then returns to 602. If it is determined at 610 that the feature brightness data for all of the features has been measured, the method proceeds to 614.
  • the feature brightness sigma data for each feature of the component relative to other features of the component is determined.
  • the standard deviation of the feature brightness data for all features of a component can be determined, and then any features having a feature brightness data value that falls outside of a predetermined number of standard deviations from the norm of the feature brightness data for all features can be identified, where such number of standard deviations is empirically or otherwise determined.
  • Feature discrepancy data can then be generated and stored.
  • Feature brightness sigma data thus identifies features having large variations in brightness, such as those resulting from uneven feature surfaces or other feature damage. The method then proceeds to 616.
  • the feature brightness deviation data for each feature of the component relative to other features is determined.
  • the feature brightness deviation data is a per-feature measurement equal to the feature brightness data value minus the average feature brightness data. If the feature brightness deviation data is borderline to or exceeds a predetermined limit, such as one that is empirically or otherwise determined, then feature discrepancy data for the feature can be generated and stored. Feature brightness deviation data can identify unacceptable features using brightness data where most features have approximately the same brightness except for a few features that might otherwise have acceptable feature brightness sigma data or acceptable average brightness data. The method then proceeds to 618.
  • feature discrepancy data is evaluated and component acceptance data is generated.
  • Feature discrepancy data can be compared to predetermined criteria for determining acceptable feature co-planarity or other suitable characteristics, such as those that have been determined empirically, by analysis, or by other suitable procedures. If it is determined that the feature discrepancy data indicates that the features are acceptable, such as that the features as a whole have acceptable planarity, then the acceptance data can be used to allow the component to be placed in a packing media, moved on to the next manufacturing process, or other suitable actions can be performed based on the acceptance data.
  • the acceptance data can be used to place the component in a holding bin for rejected components, the component can be left on a wafer support, or other suitable processes can be performed.
  • method 600 determines the feature quality data, such as average brightness data, feature brightness sigma data, feature brightness deviation data, and other suitable data, of each feature within a component.
  • Method 600 allows the co- planarity and other suitable characteristics of features on a component to be determined using brightness data.
  • FIGURE 7 is a flowchart of a method 700 for establishing feature quality limit data in accordance with an exemplary embodiment of the present invention.
  • Method 700 begins at 702, where image data for a reference component, such as an acceptable semiconductor device, is generated.
  • the image data can include an N x M array of pixel data, such as a 1024 x 1024 array of pixel data where each pixel represents brightness data ranging from 0 (representing an absence of light) to 255 (representing the highest level of light) .
  • the focal field of the array can also be adjustable, such as ranging from 0.12 x 0.12 inches to 0.4 x 0.4 inches, or other suitable focal field sizes.
  • the method then proceeds to 704.
  • a grid pattern is generated, such as a grid that outlines two or more features on the reference component.
  • a reference grid pattern can be used, features can be selected manually or using a suitable process, or other suitable procedures can be used.
  • the method then proceeds to 706 where it is determined whether the features within the image data of the reference component are aligned with the grid pattern selected at 704, such as by allowing an operator to review the grid pattern and image data, by performing spot checking of features and the grid pattern, or by other suitable procedures. If the grid pattern is not aligned, the method proceeds to 708 where the component can be adjusted so the features are aligned with the grid. The method then proceeds to 702. If it is determined at 706 that the features within the image are aligned with the grid pattern, then the method proceeds to 710.
  • the pixels of each feature within the image are identified, such as by using a geometric approximation, by detecting the feature edges and using all pixels contained within the edge, or by other suitable procedures. The method then proceeds to 714.
  • the average pixel brightness data of each feature of the reference component is determined and stored as feature brightness reference data.
  • the method then proceeds to 716.
  • the feature brightness limit data is determined.
  • feature brightness limit data can be provided by a user, can be determined by using a statistically relevant number of measurements of average pixel brightness data for features and correlating that data with measurements of electrical conductivity, height, or other physical measurements, or in other suitable manners. For example, a statistically relevant population size of components can be determined using measurements, process-related variables, or other factors, and measurements of feature physical properties can be taken after the average pixel brightness data for each feature is measured.
  • the correlation between features having unacceptable feature physical properties can then be statistically analyzed to determine the confidence interval for the correlation between the feature physical property and the average pixel brightness data. If the confidence interval is less than a predetermined acceptable level, error data can be generated for operator intervention. Otherwise, the method then proceeds to 718 where it is determined whether this is the last feature in the image data for the reference component for which the average pixel brightness should be measured. If not, then the feature brightness limit data is stored and the method proceeds to 720 where the next feature is identified. The method then returns to 710.
  • the method proceeds to 722, where the feature brightness sigma limit data of the component is determined.
  • the feature brightness sigma limit data can be determined by using a statistically relevant number of measurements of average pixel brightness data for features and correlating that data with measurements of electrical conductivity, height, or other physical measurements, or in other suitable manners. For example, a statistically relevant population size of components can be determined using measurements, process-related variables, or other factors, and measurements of feature physical properties can be taken after the average pixel brightness data for each feature is measured.
  • the correlation between features having unacceptable feature physical properties can then be statistically analyzed to determine the confidence interval for the correlation between the feature physical property and the feature brightness sigma data. If the confidence interval is less than a predetermined acceptable level, error data can be generated for operator intervention. Otherwise, the feature brightness sigma limit data is stored and the method proceeds to 724.
  • the feature brightness deviation limit data for each feature of the reference component is determined.
  • a user can input a value for the feature brightness deviation limit data, or the feature brightness deviation limit data can be determined by using a statistically relevant number of measurements of average pixel brightness data for features and correlating that data with measurements of electrical conductivity, height, or other physical measurements, or in other suitable manners.
  • a statistically relevant population size of components can be determined using measurements, process-related variables, or other factors, and measurements of feature physical properties can be taken after the average pixel brightness data for each feature is measured.
  • the correlation between features having unacceptable feature physical properties can then be statistically analyzed to determine the confidence interval for the correlation between the feature physical property and the feature brightness deviation limit data. If the confidence interval is less than a predetermined acceptable level, error data can be generated for operator intervention. Otherwise, the feature brightness deviation limit data is stored.
  • method 700 can be used to determine feature quality limit data.
  • Method 700 allows physically-measured characteristics of features to be statistically correlated with the brightness data measured for features, so as to establish acceptable limits for determining feature quality using brightness data.
  • physical characteristics such as co-planarity can be correlated with brightness data in a manner that allows brightness data for components to be analyzed to determine whether the components have acceptable physical characteristics without requiring those physical characteristics to be directly measured. A minimal sketch of how such a calibration might be prototyped follows this list.
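The limit-calibration idea of method 700 can be illustrated compactly. The following Python sketch is not part of the patent: it assumes that per-feature average brightness values and directly measured ball heights are available for a reference component, and it uses a simple correlation threshold as a stand-in for the confidence-interval test described above. Every function name, parameter, and numeric default is an illustrative placeholder.

```python
import numpy as np

def calibrate_brightness_limits(avg_brightness, measured_height_um,
                                coplanarity_tol_um=50.0, min_abs_corr=0.8):
    """Illustrative calibration of feature brightness limit data.

    avg_brightness     -- per-feature average brightness from a reference part
    measured_height_um -- per-feature ball heights measured directly (e.g. by laser)
    coplanarity_tol_um -- height deviation treated as unacceptable (placeholder)
    min_abs_corr       -- minimum |correlation| required before brightness is
                          used as a proxy (stand-in for the confidence test)
    """
    b = np.asarray(avg_brightness, dtype=float)
    h = np.asarray(measured_height_um, dtype=float)

    corr = np.corrcoef(b, h)[0, 1]
    if abs(corr) < min_abs_corr:
        raise ValueError(f"brightness/height correlation too weak ({corr:.2f}); "
                         "operator intervention required")

    # Features whose measured height is within tolerance of the median height.
    good = np.abs(h - np.median(h)) <= coplanarity_tol_um
    if not good.any():
        raise ValueError("no acceptable reference features to calibrate from")

    # Brightness range spanned by the acceptably coplanar reference features
    # becomes the feature brightness limit data.
    return float(b[good].min()), float(b[good].max())
```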

Abstract

A system for the inspection of a component having two or more features, such as ball contacts of a ball grid array, is provided. The system includes a light source illuminating the component with non-coherent light. An image system generates image data of the component that includes feature data for each feature. A co-planarity system receives the image data and generates co-planarity data using feature data of the image data, such as by determining whether brightness data for each of the ball contacts indicates that one or more of the ball contacts is not co-planar with the other ball contacts.

Description

IN THE UNITED STATES PATENT AND TRADEMARK OFFICE U.S. RECEIVING OFFICE
SPECIFICATION accompanying
TITLE OF THE INVENTION: SYSTEM AND METHOD FOR INSPECTING COMPONENTS
FIELD OF THE INVENTION [0001] The present invention relates generally to the inspection of components, and more specifically to the use of 2D measurements to determine the co-planarity of components, including semiconductor circuits. The techniques and methods of the invention can also be applied to the inspection of bumped contacts for ball grid array (BGA) integrated circuits, chip scale package (CSP) integrated circuits, bumped dies, wafers, and other suitable components. BACKGROUND OF THE INVENTION [0002] Systems that perform component inspection using image data are known in the art. Such systems can be used to perform lead inspection (such as to identify bent leads), component marking inspection (such as to verify that the proper vendor's mark, component number and type designators have been applied) , and component defect inspection (such as to locate holes, cracks, or other irregularities in the component) of semiconductor packages. [0003] One drawback with such systems when used for measuring 3D features is the significant amount of time that is required. For example, such systems can be used to measure the dimensions of bumped leads of ball-grid array, chip scale package or bumped wafer by using a laser to trace a straight line on the feature. The coordinates of the feature are then determined based on deviations from the straight line. These inspection processes are time consuming because it is necessary to trace the laser over each feature that needs to be measured at least once. Furthermore, because dimensions are inferred based on deviations in the laser track from a straight line, it is possible that such deviations might not be properly measured or located, such as if the laser track does not fall on the deviation.
[0004] Thus, while systems and methods can be used to inspect components, there are types of components and defects for which inspection is time consuming and inaccurate.
SUMMARY OF THE INVENTION [0005] In accordance with the present invention, a system and method for inspecting components are provided that overcome known problems with inspecting of components. In particular, a system and method for inspecting components, such as for unacceptable co- planarity or other conditions, by inferring 3D data from measurements of 2D data are disclosed.
[0006] In accordance with an exemplary embodiment of the present invention, a system for the inspection of a component having three or more features, such as ball contacts of a ball grid array, is provided. The system includes a light source illuminating the component with non-coherent light, such as an on-axis light source. An image system generates image data of the component that includes feature data for each feature. A co-planarity system receives the image data and generates co-planarity data using feature data of the image data, such as by determining whether brightness data for each of the ball contacts indicates that one or more of the ball contacts is not co-planar with the other ball contacts.
[0007] Embodiments of the present invention provide many important technical advantages. One advantage of an embodiment of the present invention is a system for inspection of components that reduces the amount of time spent measuring 3D data, such as co- planarity of ball-type leads on semiconductor devices. The present invention thus decreases the production time for semiconductor devices.
[0008] Those skilled in the art will further appreciate the advantages and superior features of the invention together with other important aspects thereof on reading the detailed description that follows in conjunction with the drawings. BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS [0009] FIGURE 1 is a diagram of a system for determining the co- planarity of a component from brightness data in accordance with an exemplary embodiment of the present invention;
[0010] FIGURE 2 is a diagram of a system for inspecting components in accordance with an exemplary embodiment of the present invention;
[0011] FIGURE 3 is a diagram of a system for determining the co- planarity of a component in accordance with an exemplary embodiment of the present invention;
[0012] FIGURE 4 is a diagram of a system for determining feature quality data, such as ball contacts of a semiconductor device, in accordance with an exemplary embodiment of the present invention; [0013] FIGURE 5 is a flowchart of a method for determining the co- planarity of features of a component using 2D data in accordance with an exemplary embodiment of the present invention; [0014] FIGURE 6 is a flowchart of a method for determining feature quality data in accordance with an exemplary embodiment of the present invention; and
[0015] FIGURE 7 is a flowchart of a method for establishing feature quality limit data in accordance with an exemplary embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION [0016] In the description that follows, like parts are marked throughout the specification and drawings with the same reference numerals, respectively. The drawing figures are not necessarily to scale, and certain components can be shown in generalized or schematic form and identified by commercial designations in the interest of clarity and conciseness.
[0017] FIGURE 1 is a diagram of a system 100 for determining the co-planarity of a component from brightness data in accordance with an exemplary embodiment of the present invention. System 100 uses the brightness data from uniform features on the component, such as ball contacts of a ball grid array, to determine the co-planarity and other suitable characteristics of the features. System 100 includes a system support that allows it to be placed on an existing inspection system, so as to use the component handling systems of such existing inspection systems. In this manner, system 100 can be used to supplement or supplant the inspection functions of existing inspection systems but allows the component handling systems of those existing inspection systems to be used.
[0018] System 100 includes an image analysis system such as co- planarity system 102, which can be implemented in hardware, software, or a suitable combination of hardware and software and which can be one or more software systems operating on a general purpose server platform. As used herein, a software system can include one or more objects, agents, threads, subroutines, separate software applications, one or more lines of code or other suitable software structures operating in two or more separate software applications, on one or more different processors or other suitable software architectures. In one exemplary embodiment, a software system can include one or more lines of code or other suitable software structures operating in a general purpose software application, such as an operating system, and one or more lines of code or other suitable software structures operating in a specific purpose software application. [0019] Co-planarity system 102 is coupled to digital image system 104 via communications medium 106. As used herein, the term "couple," and its cognate terms such as "couples," and "coupled," can include a physical connection (such as through one or more copper conductors) , a virtual connection (such as one or more randomly assigned data memory locations of a data memory device) , a logical connection (such as through one or more logical devices of a semiconducting circuit) , a wireless connection, other suitable connections, or a suitable combination of such connections. In one exemplary embodiment, systems and components can be coupled to other systems and components through intervening systems and components, such as through an operating system of a general purpose server platform. Communications medium 106 can be a serial data bus, a parallel data bus, a copper conductor, a data memory device, a semiconducting circuit, other suitable communications media, or a suitable combination of such communications media.
[0020] Digital image system 104 is mounted on system support 114 and generates an N x M array of pixels of brightness data, such as a relative brightness value ranging from 0 to 255. In one exemplary embodiment, the array can be 1024 x 1024 pixels having an adjustable image size ranging from 0.12 x 0.12 inches to 0.4 x 0.4 inches, such that when the image size is 0.2 x 0.2 inches, each pixel represents an area of approximately 5.12 x 5.12 micrometers. [0021] Light source 108 is mounted on system support 114 and illuminates a component that is disposed underneath system 100 in a manner that causes uniform features on the component to be evenly illuminated. In one exemplary embodiment, light source 108 can be a commercially available, experimental or custom-made collinear illuminator, such as a light source which may be integral to a diffuser and partial mirror. Alternatively, light source 108 can utilize a fiber optic cable or other suitable optical methods to project a field of illumination onto the component being measured. Light source 108 allows an imaging system to generate direct-axis image data of the component. Other suitable illumination sources can also or alternatively be used. Light source 108 is oriented so as to provide direct illumination of the component (i.e. light perpendicular to the base plane of the component being inspected).
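As a rough, non-authoritative illustration of the exemplary figures above, the short Python sketch below models the captured frame as a 1024 x 1024 array of 8-bit brightness values and derives the approximate width of the area imaged by one pixel for a 0.2 x 0.2 inch field of view; the printed value is on the order of the 5 micrometer figure quoted in the text.

```python
import numpy as np

# Exemplary values from the text: a 1024 x 1024 sensor and a field of view
# that can be adjusted between 0.12 x 0.12 in and 0.4 x 0.4 in per side.
N = M = 1024
FIELD_OF_VIEW_IN = 0.2          # inches per side (one exemplary setting)
MICRONS_PER_INCH = 25_400.0

# The captured image: an N x M array of relative brightness values 0..255.
image = np.zeros((N, M), dtype=np.uint8)

# Approximate width of the square area imaged by a single pixel.
pixel_pitch_um = FIELD_OF_VIEW_IN * MICRONS_PER_INCH / N
print(f"each pixel covers roughly {pixel_pitch_um:.2f} x {pixel_pitch_um:.2f} um")
```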
[0022] System support 114 allows system 100 to be placed on an existing component inspection system that has a component handling system. In this manner, system support 114 can be used to supplement or supplant the inspection analysis functions of such other systems. Likewise, system support 114 can be used in conjunction with a component handling system that includes no inspection functionality, such as a component handling system that is used in conjunction with two or more different modular image analysis systems such as system 100.
[0023] In operation, system 100 inspects components and generates co-planarity data that is used to determine whether the component is coplanar within predetermined tolerances. Co-planarity system 102 receives digital image data of the component from digital image system 104 that is generated when the component is illuminated by light source 108, and uses the brightness data of three or more known features on the component to determine whether the component is coplanar. In one exemplary embodiment, the features can be ball contacts of a ball grid array, chip scale package, a bumped die, or other suitable components for which co-planarity of the ball contacts is required in order to ensure that uniform electrical contact can be made with each ball contact. Light source 108 illuminates the three or more known features in a manner that causes the features to generate uniform levels of brightness data when digital image system 104 generates an image, if those features are otherwise undamaged and coplanar. The digital image data generated by digital image system 104 is received by co-planarity system 102, and the pixels of brightness data corresponding to the two or more known features are then identified. The brightness data for these pixels is then analyzed by co-planarity system 102 in conjunction with empirically- or otherwise-determined rules that define allowable variations for the brightness data, and co-planarity data is generated that can be used to determine whether the component, the features, or the component and the features have an acceptable level of co-planarity, or are unacceptably non-planar.
[0024] For example, it can be empirically determined that any feature having brightness data that is statistically different from the brightness data for other features to a predetermined degree correlates to features that do not form acceptable electrical contact. In this example, the brightness data for a component can be analyzed to determine whether the statistical data for the features of each component fall within the empirically observed ranges for the brightness data of the control components, whether the brightness data for any given feature correlates with unacceptable features to an acceptable level of certainty, or whether any other suitable conditions can be detected that can be used to establish whether the feature will operate properly in service.
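The screening logic sketched in the preceding two paragraphs, reduced to its simplest form, might look like the following. This is a hypothetical prototype: the statistic names, ranges, and example numbers are assumptions for illustration, not values from the patent.

```python
import numpy as np

def within_control_ranges(feature_stats, control_ranges):
    """Check per-feature statistics against ranges observed on known-good
    control components.

    feature_stats  -- dict of statistic name -> per-feature 1D array
    control_ranges -- dict of statistic name -> (low, high) empirical range
    Returns a boolean array, True where every statistic is in range.
    """
    ok = None
    for name, (low, high) in control_ranges.items():
        values = np.asarray(feature_stats[name], dtype=float)
        in_range = (values >= low) & (values <= high)
        ok = in_range if ok is None else (ok & in_range)
    return ok

# Example with invented numbers: one ball is noticeably darker than the rest,
# which would correlate with a ball that may not make good electrical contact.
stats = {"average_brightness": [180, 183, 179, 120, 181]}
print(within_control_ranges(stats, {"average_brightness": (150, 210)}))
# -> [ True  True  True False  True]
```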
[0025] FIGURE 2 is a diagram of a system 200 for inspecting components in accordance with an exemplary embodiment of the present invention. System 200 shows system 100 in operation in conjunction with an existing component inspection system that inspects semiconductor devices that have been cut from silicon wafers.
[0026] System 200 includes pick and place arm 202, wafer support 204, positioning mechanism 212, wafer 208, and components 210. Positioning mechanism 212 moves wafer support 204 into a position under system 100 for visual light-based inspection. Pick and place arm 202 can include a telescoping head with a suitable orifice at its distal end connected to a source of vacuum (not explicitly shown) that provides a pick up and holding force on a component 210. In one exemplary embodiment, wafer support 204 holds components 210 that have been cut from wafer 208, which can include chip scale packages, ball grid arrays, bumped wafers, or other suitable components. The components 210 on wafer support 204 can be complete and ready for final inspection. [0027] Pick and place arm 202 can move any of components 210 after they have been inspected and approved to a suitable packing media. In one exemplary embodiment, the packing media can be sticky tape 214, tape and reel 216, a waffle pack or small tray, a film frame, a gel pack, other suitable packing media, or a suitable combination of packing media. The packing media can also be processed using a belt drive mechanism or other suitable systems or components .
[0028] In operation, components 210 are inspected by system 100 as they are being transferred to a packing media. Each component 210 can be inspected prior to being moved by pick and place head 202, after being moved by pick and place head 202, or at other suitable times. Likewise, all of the components 210 from a wafer 208 can be inspected prior to being moved into position for handling by pick and place head 202, or can be inspected at other suitable locations. The modularity of system 100 facilitates the flexibility at which inspection by system 100 can be performed. [0029] FIGURE 3 is a diagram of a system 300 for determining the co- planarity of a component in accordance with an exemplary embodiment of the present invention. System 300 includes co- planarity system 102 and image receiving system 302, grid alignment system 304, and 2D co-planarity system 306, each of which can be implemented in hardware, software, or a suitable combination of hardware and software, and which can be one or more software systems operating on a general purpose processing platform.
[0030] Image receiving system 302 receives digital image data of a component, such as image data generated when the component is being illuminated by light source 108, laser 110, or other suitable image data. Image receiving system 302 controls the analysis of the image data by grid alignment system 304 and 2D co- planarity system 306, such as by determining the type of light that was used to generate the image data, by controlling the image analysis process in accordance with predetermined process flow controls, in response to operator-entered instructions, or in other suitable manners.
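Purely as an illustration of the control role attributed to image receiving system 302, a thin dispatcher could route each frame through the grid-alignment and 2D co-planarity stages. The class, field, and parameter names below are hypothetical and do not come from the patent.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class InspectionController:
    """Hypothetical stand-in for the control role of image receiving system 302."""
    align_grid: Callable[[np.ndarray], np.ndarray]        # grid alignment stage
    analyze_2d: Callable[[np.ndarray, np.ndarray], dict]  # 2D co-planarity stage

    def process(self, image: np.ndarray, light_type: str) -> dict:
        # Only on-axis, non-coherent images are routed to the 2D brightness
        # analysis here; other illumination types could be routed elsewhere.
        if light_type != "on_axis":
            raise ValueError(f"no 2D brightness analysis defined for {light_type}")
        grid = self.align_grid(image)
        return self.analyze_2d(image, grid)
```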
[0031] Grid alignment system 304 receives the image data from image receiving system 302, locates three or more features of the component in the image data that will be used for co-planarity analysis, and generates grid data. Grid alignment system 304 can identify bumped contacts of a BGA, CSP, bumped wafer, or other suitable features. In one exemplary embodiment, grid alignment system 304 can identify the features using feature design data, such as the diameter of the bumped contacts. Alternatively, a user can select a feature and the image data can then be analyzed to find other similar features, such as by using (copending application on location of image features in image data, incorporated by reference). Grid alignment system 304 then generates a template that identifies the boundaries of the features of interest, so as to allow areas outside of the features to be ignored. Grid alignment system 304 can also generate position control data, such as for controlling the position of belt drive mechanism 212 of FIGURE 2 or other suitable positioning devices or systems.
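A minimal sketch of the template idea described above, assuming the ball layout (centers and diameter in pixel units) is known from feature design data; the geometry values in the example are invented for illustration.

```python
import numpy as np

def build_ball_mask(shape, centers_px, ball_radius_px):
    """Return a boolean mask that is True only inside the expected ball
    features, so pixels outside the features can be ignored.

    shape          -- (rows, cols) of the image
    centers_px     -- iterable of (row, col) ball centers from design data
    ball_radius_px -- expected ball radius in pixels
    """
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for r0, c0 in centers_px:
        mask |= (rows - r0) ** 2 + (cols - c0) ** 2 <= ball_radius_px ** 2
    return mask

# Example: a 3 x 3 ball grid on a 300-pixel pitch with a 60-pixel ball radius.
centers = [(150 + 300 * i, 150 + 300 * j) for i in range(3) for j in range(3)]
mask = build_ball_mask((1024, 1024), centers, ball_radius_px=60)
```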
[0032] 2D co-planarity system 306 receives the image data from image receiving system 302 and the grid data from grid alignment system 304 and analyzes the brightness data of each feature. In one exemplary embodiment, 2D co-planarity system 306 determines ball brightness data for contact bumps of a bumped grid array, chip scale package, bumped wafer, or other suitable components, such as by measuring the brightness data for each pixel in a square inscribed within a circle representing the ball and having a diagonal length equivalent to the diameter of the ball. 2D co- planarity system 306 then analyzes the feature image data ' to generate feature analysis data, such as the average feature brightness, the maximum feature brightness, the standard deviation of the feature brightness for all features of a component, and other suitable data. 2D co-planarity system 306 then generates feature acceptance data based on the feature analysis data, such as by comparing the feature analysis data to acceptable ranges for feature analysis data. 2D co-planarity system 306, image receiving system 302, or other suitable systems can then generate component control data based on the feature analysis data that causes pick and place arm 202 to transfer the component to a packing media, a holding bin for rejected components, to by-pass the component and select the next component, or to perform other suitable functions.
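The inscribed-square measurement and the per-feature statistics mentioned in this paragraph can be expressed in a few lines. This sketch assumes the ball center and diameter are already known in pixel coordinates (for example from the grid data) and is not presented as the patent's implementation.

```python
import numpy as np

def inscribed_square_stats(image, center, diameter_px):
    """Average/max/std of brightness inside a square inscribed in the circle
    that represents one ball contact.

    The square's diagonal equals the ball diameter, so its half-side is
    diameter / (2 * sqrt(2)).
    """
    half_side = int(diameter_px / (2.0 * np.sqrt(2.0)))
    r, c = center
    patch = image[r - half_side:r + half_side + 1,
                  c - half_side:c + half_side + 1].astype(float)
    return {
        "average_brightness": patch.mean(),
        "max_brightness": patch.max(),
        "std_brightness": patch.std(),
    }
```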
[0033] In operation, system 300 generates co-planarity data from image data of a component. Features, such as bumped leads on a semiconductor device, are first located and a grid pattern is generated that outlines the areas in which inspection and analysis of the image data will be performed. System 300 can accept or reject components by determining co-planarity from two-dimensional data or other suitable data.
[0034] FIGURE 4 is a diagram of a system 400 for determining feature quality data for features, such as ball contacts of a semiconductor device, in accordance with an exemplary embodiment of the present invention. System 400 includes 2D co-planarity system 306 and feature brightness system 402, brightness sigma system 404, brightness deviation system 406, feature quality system 408, and average brightness system 410, each of which can be implemented in hardware, software, or a suitable combination of hardware and software, and which can be one or more software systems operating on a general purpose server platform.
[0035] Feature brightness system 402 generates brightness data for each feature of a component. In one exemplary embodiment, feature brightness system 402 calculates the average of the pixel brightness data values of the pixels within a square centered at the feature's centroid and having a diagonal equivalent to the length of the feature diameter, such as where the feature is a circle. Pixel brightness data can have a value from zero for black to 255 for white or other suitable values correlated to a brightness measuring system. In one exemplary embodiment, feature brightness system 402 generates average brightness data by calculating the percent brightness, such as zero percent for pitch black (pixel brightness level zero) and one hundred percent for pure white (pixel brightness level 255). Feature brightness system 402 provides the brightness data to brightness sigma system 404, brightness deviation system 406 and average brightness system 410. Feature brightness system 402 also determines whether the brightness data for each feature exceeds predetermined maximum or minimum levels, such as in cases where a large number of features are flattened or otherwise unacceptable but would produce acceptable results through brightness sigma system 404 and brightness deviation system 406, and provides feature acceptance data to feature quality system 408 and to average brightness system 410 that defines whether a feature has acceptable, borderline, or unacceptable average brightness data.
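The following Python sketch illustrates one way the calculation described in paragraph [0035] could be implemented. It is offered only as an illustration; the array layout, the function name, and the zero-to-255 grayscale range are assumptions for this example, not values prescribed by the specification.

    import numpy as np

    def feature_brightness_percent(image, cx, cy, diameter):
        """Average brightness of the pixels in a square inscribed in a circular
        feature (diagonal equal to the feature diameter), returned as a percent.
        image is a 2D array of grayscale values, 0 (black) to 255 (white)."""
        half_side = diameter / (2.0 * np.sqrt(2))     # inscribed square: side = d / sqrt(2)
        x0, x1 = int(round(cx - half_side)), int(round(cx + half_side))
        y0, y1 = int(round(cy - half_side)), int(round(cy + half_side))
        window = image[y0:y1 + 1, x0:x1 + 1]
        mean_value = window.mean()                    # average pixel brightness, 0..255
        return 100.0 * mean_value / 255.0             # 0% = pitch black, 100% = pure white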
[0036] Brightness sigma system 404 receives brightness data from feature brightness system 402 and generates brightness sigma data for the features. In one exemplary embodiment, brightness sigma system 404 determines the standard deviation of the brightness data for all features of a component, and then determines whether the values fall outside of a predetermined number of standard deviations from the norm of the average brightness data for all features. Brightness sigma system 404 then generates feature acceptance data based on the brightness sigma data and the standard deviation data. Brightness sigma system 404 then provides brightness sigma data, feature acceptance data, and other suitable data to feature quality system 408 that identifies those features that have average brightness data that is acceptable, borderline, or unacceptable in light of the standard deviation of the average brightness data.
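As a sketch of the standard-deviation screen described in paragraph [0036], the routine below flags features whose average brightness lies more than a predetermined number of standard deviations from the mean of all features on the component. The three-sigma default and the function name are assumptions chosen for illustration.

    import numpy as np

    def sigma_outliers(feature_brightness, n_sigma=3.0):
        """Return a boolean array: True where a feature's average brightness is
        within n_sigma standard deviations of the component population mean."""
        values = np.asarray(feature_brightness, dtype=float)
        mean, sigma = values.mean(), values.std()
        if sigma == 0.0:
            return np.ones(values.shape, dtype=bool)   # identical features: nothing flagged
        return np.abs(values - mean) <= n_sigma * sigma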
[0037] Brightness deviation system 406 receives the feature brightness data from feature brightness system 402 and the average feature brightness data from average brightness system 410 and generates brightness deviation data for each feature. In one exemplary embodiment, brightness deviation system 406 determines whether any features have average brightness data values that are greater or less than the component average brightness data by more than a predetermined amount. Brightness deviation system 406 then generates feature acceptance data based on the brightness deviation data and the component average brightness data. Brightness deviation system 406 provides brightness deviation data, feature acceptance data, and other suitable data to feature quality system 408 that identifies those features that have average brightness data that is acceptable, borderline, or unacceptable in light of the component average brightness data.
[0038] Feature quality system 408 receives average brightness data, brightness sigma data, brightness deviation data, feature acceptance data, and other suitable data and determines whether the features of the component indicate that co-planarity of the features is acceptable. Feature quality system 408 can evaluate borderline feature data, feature data for all features, or other suitable data, and can generate component acceptance data based on the feature data. In one exemplary embodiment, feature quality system 408 can generate control data that causes pick and place head 202 to move component 210 from wafer support 208 to a packing media or to a holding bin for rejected components, can generate a prompt requesting operator assistance, or can perform other suitable functions.
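To make the deviation screen and the resulting accept/reject decision concrete, here is a minimal sketch assuming per-feature average brightness values and hypothetical limit names; the specification does not prescribe these names, the reject-on-any-unacceptable-feature policy, or the particular thresholds.

    def classify_features(brightness, limits):
        """Classify each feature by how far its average brightness deviates from
        the component average. limits is a dict with hypothetical keys
        'deviation_borderline' and 'deviation_reject' (empirically determined)."""
        component_avg = sum(brightness) / len(brightness)
        results = []
        for value in brightness:
            deviation = abs(value - component_avg)
            if deviation > limits['deviation_reject']:
                results.append('unacceptable')
            elif deviation > limits['deviation_borderline']:
                results.append('borderline')
            else:
                results.append('acceptable')
        return results

    def component_acceptable(results):
        """One possible policy: reject the component if any feature is unacceptable."""
        return 'unacceptable' not in results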
[0039] Average brightness system 410 receives the feature brightness data for all features of the component from feature brightness system 402 and generates average feature brightness data for the component. The average feature brightness data is then provided to brightness deviation system 406 and other suitable systems.
[0040] In operation, system 400 accepts or rejects components based on the co-planarity of features of the component by comparing generated feature quality data to predetermined limits. Components having feature quality data within predetermined limit data can be accepted, and components having feature quality data beyond the limit data can be rejected. In this manner, system 400 permits the use of 2D criteria for determining the co-planarity of features on the component, such as bumped contacts of a ball grid array or other suitable semiconductor devices.
[0041] FIGURE 5 is a flowchart of a method 500 for determining the co-planarity of features of a component using 2D data in accordance with an exemplary embodiment of the present invention. Method 500 can be used to determine whether bumped contacts of a ball grid array, bumped wafer, chip scale package, or other suitable components are sufficiently co-planar to allow them to operate properly when placed in service.
[0042] Method 500 begins at 502 where image data of a component is captured. In one exemplary embodiment, the component can include a ball grid array, a chip scale package, a bumped wafer, or other suitable configurations of semiconductor packages. The method then proceeds to 504.
[0043] At 504, a grid pattern is generated. In one exemplary embodiment, the grid pattern is generated by determining the edges of two or more features of the component. Alternatively, a user can manually select the appropriate grid pattern. The method then proceeds to 506 where it is determined whether the features within the image are aligned with the grid pattern selected at 504. If not, the method proceeds to 508 where the component can be adjusted so that the features are aligned with the grid. In one exemplary embodiment, control data can be sent to belt drive mechanism 212 to cause component 210 to be moved so as to be aligned with the grid pattern. In another exemplary embodiment, the grid pattern can be adjusted so that the features are aligned with the grid. Other suitable procedures can also be used. The method then returns to 502.
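As a sketch of the alignment check at 506 and the adjustment at 508, the routine below compares located feature centroids with the expected grid positions and returns an average offset that could be sent as position control data (for example, to a belt drive mechanism). The coordinate convention, the tolerance parameter, and the function name are assumptions for illustration.

    def grid_misalignment(feature_centroids, grid_centers, tolerance):
        """feature_centroids and grid_centers are index-matched lists of (x, y)
        pixel coordinates. Returns (aligned, (dx, dy)), where (dx, dy) is the
        average offset needed to bring the features onto the grid."""
        offsets = [(gx - fx, gy - fy)
                   for (fx, fy), (gx, gy) in zip(feature_centroids, grid_centers)]
        dx = sum(o[0] for o in offsets) / len(offsets)
        dy = sum(o[1] for o in offsets) / len(offsets)
        aligned = abs(dx) <= tolerance and abs(dy) <= tolerance
        return aligned, (dx, dy)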
[0044] If it is determined at 506 that the features within the image are aligned with the grid pattern, then the method proceeds to 510 where the feature quality data is generated. Feature quality data can include feature brightness data, feature brightness sigma data, feature brightness deviation data, and other suitable data. The method then proceeds to 512 where the feature quality data is analyzed, such as by comparing the feature quality data to predetermined limits. In one exemplary embodiment, the predetermined limits can be determined based upon empirical data for features from similar components that are determined to have acceptable co-planarity. Other suitable procedures can also be used. The method then proceeds to 514.
[0045] At 514, the component is accepted or rejected based on the results of the analysis at 512. In one exemplary embodiment, if any of the individual classes of feature quality data exceed a predetermined limit, then the component can be rejected. In another exemplary embodiment, additional empirical relationships between acceptable results for certain classes of feature quality data and borderline or unacceptable results for other classes of feature quality data can be determined, 3D image data can be analyzed, or other suitable procedures can be performed. The method then proceeds to 516 where the next component is prepared for inspection. The method then returns to 502.
[0046] In operation, method 500 allows image data of a component to be analyzed to determine whether two or more features on the component are co-planar. Method 500 uses 2D image data to generate co-planarity pass/fail criteria, which results in faster inspection of components having features, such as ball grid arrays, chip scale packages, bumped wafers, or other suitable components.
[0047] FIGURE 6 is a flowchart of a method 600 for determining feature quality data in accordance with an exemplary embodiment of the present invention. Method 600 begins at 602 where the pixels of each feature within the image are identified. In one exemplary embodiment, the pixels used to determine feature quality data can be located within a square inscribed within the circle in the image data representing a ball-type lead on a semiconductor device. In this exemplary embodiment, the inscribed square can have a diagonal of a length equivalent to the length of the diameter of the circle. Also or alternatively, a rectangle, polygon, or other suitable shapes can be used to approximate the shape of the feature. In a further exemplary embodiment, random pixels within the circle can be used to generate the feature quality data, the actual boundary of the feature can be detected and the brightness data for each pixel within the boundary can be used to determine feature quality, or other suitable procedures can be used. The method then proceeds to 604.
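The sketch below shows one way to select the pixels described at 602, either as the square inscribed in the feature's circle or as every pixel inside the circular boundary. The mask-based approach and the parameter names are assumptions for illustration.

    import numpy as np

    def feature_pixel_mask(shape, cx, cy, diameter, mode='inscribed_square'):
        """Boolean mask selecting the pixels of one circular feature in an image
        of the given (rows, cols) shape. 'inscribed_square' uses a square whose
        diagonal equals the feature diameter; 'circle' uses the full boundary."""
        ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
        if mode == 'inscribed_square':
            half_side = diameter / (2.0 * np.sqrt(2))
            return (np.abs(xs - cx) <= half_side) & (np.abs(ys - cy) <= half_side)
        return (xs - cx) ** 2 + (ys - cy) ** 2 <= (diameter / 2.0) ** 2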
[0048] At 604, the pixel brightness of each pixel identified at 602 is measured. In one exemplary embodiment, the pixel brightness can be a value from zero to 255 where zero represents pitch black and 255 represents pure white. The method then proceeds to 606 where the average brightness data of the feature is determined. In one exemplary embodiment, the average brightness data for a feature is determined by calculating the mean of the pixel brightness values of the pixels measured at 604. It can also be determined whether the average brightness data is borderline to or greater than a predetermined limit, such as where average brightness data above that limit has been empirically or otherwise determined to represent a damaged feature. If the average brightness data is borderline to or above the limit, feature discrepancy data can be generated and stored. Likewise, the statistical distribution of pixel brightness data within the feature can also be analyzed, such as to identify features having standard deviations that are borderline to or exceed predetermined allowable limits, a number of pixels having brightness values that lie outside of a predetermined number of standard deviations that is borderline to or greater than a predetermined limit, or other suitable per-feature statistical data. Feature discrepancy data for such features can also or alternatively be stored. The average brightness data for the feature is then stored, and the method proceeds to 608.
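A minimal sketch of the within-feature statistics mentioned at 606 follows: it computes the pixel mean, the pixel standard deviation, and the count of pixels lying outside a given number of standard deviations, and flags the feature when either statistic exceeds a limit. All limit names are placeholders for empirically determined values, not values taken from the specification.

    import numpy as np

    def pixel_statistics(pixel_values, sigma_limit, outlier_sigma, outlier_count_limit):
        """Per-feature pixel statistics used to generate feature discrepancy data.
        pixel_values: 1D array of brightness values measured for one feature."""
        values = np.asarray(pixel_values, dtype=float)
        mean, sigma = values.mean(), values.std()
        outliers = int(np.sum(np.abs(values - mean) > outlier_sigma * sigma))
        discrepant = sigma > sigma_limit or outliers > outlier_count_limit
        return {'mean': mean, 'sigma': sigma,
                'outlier_pixels': outliers, 'discrepant': discrepant}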
[0049] At 608, the feature brightness is determined. In one exemplary embodiment, feature brightness is determined by converting the value of the average brightness data for the feature to a percentage, such as where zero percent represents black (feature brightness data value zero) and where one hundred percent represents white (feature brightness data value 255). The method then proceeds to 610 where it is determined whether the feature brightness data has been determined for all of the features of the component. If not, the method proceeds to 612 where the next feature is identified. The method then returns to 602. If it is determined at 610 that the feature brightness data for all of the features has been measured, the method proceeds to 614.
[0050] At 614, the feature brightness sigma data for each feature of the component relative to other features of the component is determined. In one exemplary embodiment, the standard deviation of the feature brightness data for all features of a component can be determined, and then any features having a feature brightness data value that falls outside of a predetermined number of standard deviations from the norm of the feature brightness data for all features can be identified, where such number of standard deviations is empirically or otherwise determined. Feature discrepancy data can then be generated and stored. Feature brightness sigma data thus identifies features having large variations in brightness, such as those resulting from uneven feature surfaces or other feature damage. The method then proceeds to 616.
[0051] At 616, the feature brightness deviation data for each feature of the component relative to other features is determined. In one exemplary embodiment, the feature brightness deviation data is a per-feature measurement equal to the feature brightness data value minus the average feature brightness data. If the feature brightness deviation data is borderline to or exceeds a predetermined limit, such as one that is empirically or otherwise determined, then feature discrepancy data for the feature can be generated and stored. Feature brightness deviation data can identify unacceptable features using brightness data where most features have approximately the same brightness except for a few features that might otherwise have acceptable feature brightness sigma data or acceptable average brightness data. The method then proceeds to 618.
[0052] At 618, feature discrepancy data is evaluated and component acceptance data is generated. Feature discrepancy data can be compared to predetermined criteria for determining acceptable feature co-planarity or other suitable characteristics, such as those that have been determined empirically, by analysis, or by other suitable procedures. If it is determined that the feature discrepancy data indicates that the features are acceptable, such as that the features as a whole have acceptable planarity, then the acceptance data can be used to allow the component to be placed in a packing media, moved on to the next manufacturing process, or other suitable actions can be performed based on the acceptance data. Likewise, if the feature discrepancy data indicates that the features are not acceptable for a predetermined reason, then the acceptance data can be used to place the component in a holding bin for rejected components, the component can be left on a wafer support, or other suitable processes can be performed.
[0053] In operation, method 600 determines the feature quality data, such as average brightness data, feature brightness sigma data, feature brightness deviation data, and other suitable data, of each feature within a component. Method 600 allows the co-planarity and other suitable characteristics of features on a component to be determined using brightness data.
[0054] FIGURE 7 is a flowchart of a method 700 for establishing feature quality limit data in accordance with an exemplary embodiment of the present invention. Method 700 begins at 702, where image data for a reference component, such as an acceptable semiconductor device, is generated. In one exemplary embodiment, the image data can include an N x M array of pixel data, such as a 1024 x 1024 array of pixel data where each pixel represents brightness data ranging from 0 (representing an absence of light) to 255 (representing the highest level of light). The focal field of the array can also be adjustable, such as ranging from 0.12 x 0.12 inches to 0.4 x 0.4 inches, or other suitable focal field sizes. The method then proceeds to 704.
[0055] At 704, a grid pattern is generated, such as a grid that outlines two or more features on the reference component. A reference grid pattern can be used, features can be selected manually or using a suitable process, or other suitable procedures can be used. The method then proceeds to 706 where it is determined whether the features within the image data of the reference component are aligned with the grid pattern selected at 704, such as by allowing an operator to review the grid pattern and image data, by performing spot checking of features and the grid pattern, or by other suitable procedures. If the grid pattern is not aligned, the method proceeds to 708 where the component can be adjusted so the features are aligned with the grid. The method then proceeds to 702. If it is determined at 706 that the features within the image are aligned with the grid pattern, then the method proceeds to 710.
[0056] At 710, the pixels of each feature within the image are identified, such as by using a geometric approximation, by detecting the feature edges and using all pixels contained within the edge, or by other suitable procedures. The method then proceeds to 714.
[0057] At 714, the average pixel brightness data of each feature of the reference component is determined and stored as feature brightness reference data. The method then proceeds to 716.
[0058] At 716, the feature brightness limit data is determined. In one exemplary embodiment, feature brightness limit data can be provided by a user, can be determined by using a statistically relevant number of measurements of average pixel brightness data for features and correlating that data with measurements of electrical conductivity, height, or other physical measurements, or in other suitable manners. For example, a statistically relevant population size of components can be determined using measurements, process-related variables, or other factors, and measurements of feature physical properties can be taken after the average pixel brightness data for each feature is measured. The correlation between features having unacceptable feature physical properties can then be statistically analyzed to determine the confidence interval for the correlation between the feature physical property and the average pixel brightness data. If the confidence interval is less than a predetermined acceptable level, error data can be generated for operator intervention. Otherwise, the method then proceeds to 718 where it is determined whether this is the last feature in the image data for the reference component for which the average pixel brightness should be measured. If not, then the feature brightness limit data is stored and the method proceeds to 720 where the next feature is identified. The method then returns to 710.
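A hedged sketch of the correlation analysis described at 716 is shown below: it correlates per-feature brightness with a directly measured physical property (height, for example) over a reference population and reports an approximate Fisher-z confidence interval for the correlation coefficient. The 95% critical value, the function name, and the requirement of more than three paired samples are assumptions for this example.

    import numpy as np

    def correlation_confidence(brightness, physical_measurement, z_crit=1.96):
        """Pearson correlation between per-feature brightness and a measured
        physical property, with an approximate 95% Fisher-z confidence interval.
        Requires more than three paired samples."""
        x = np.asarray(brightness, dtype=float)
        y = np.asarray(physical_measurement, dtype=float)
        n = len(x)
        r = float(np.corrcoef(x, y)[0, 1])
        z = np.arctanh(r)                        # Fisher z-transform of r
        se = 1.0 / np.sqrt(n - 3)                # standard error of z
        lo, hi = np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)
        return r, (lo, hi)

If the resulting interval is too wide to support the required confidence level, that would correspond to generating error data for operator intervention; otherwise the brightness value associated with the unacceptable physical property could be stored as limit data.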
[0059] If it is determined at 718 that the average pixel brightness of the last feature of the reference component was measured at 714, then the method proceeds to 722, where the feature brightness sigma limit data of the component is determined. A user can input a value for the feature brightness sigma limit data, the feature brightness sigma limit data can be determined by using a statistically relevant number of measurements of average pixel brightness data for features and correlating that data with measurements of electrical conductivity, height, or other physical measurements, or in other suitable manners. For example, a statistically relevant population size of components can be determined using measurements, process-related variables, or other factors, and measurements of feature physical properties can be taken after the average pixel brightness data for each feature is measured. The correlation between features having unacceptable feature physical properties can then be statistically analyzed to determine the confidence interval for the correlation between the feature physical property and the feature brightness sigma data. If the confidence interval is less than a predetermined acceptable level, error data can be generated for operator intervention. Otherwise, the feature brightness sigma limit data is stored and the method proceeds to 724.
[0060] At 724, the feature brightness deviation limit data for each feature of the reference component is determined. A user can input a value for the feature brightness deviation limit data, the feature brightness deviation limit data can be determined by using a statistically relevant number of measurements of average pixel brightness data for features and correlating that data with measurements of electrical conductivity, height, or other physical measurements, or in other suitable manners. For example, a statistically relevant population size of components can be determined using measurements, process-related variables, or other factors, and measurements of feature physical properties can be taken after the average pixel brightness data for each feature is measured. The correlation between features having unacceptable feature physical properties can then be statistically analyzed to determine the confidence interval for the correlation between the feature physical property and the feature brightness deviation limit data. If the confidence interval is less than a predetermined acceptable level, error data can be generated for operator intervention. Otherwise, the feature brightness deviation limit data is stored.
[0061] In operation, method 700 can be used to determine feature quality limit data. Method 700 allows physically-measured characteristics of features to be statistically correlated with the brightness data measured for features, so as to establish acceptable limits for determining feature quality using brightness data. In this manner, physical characteristics such as co-planarity can be correlated with brightness data in a manner that allows brightness data for components to be analyzed to determine whether the components have acceptable physical characteristics without requiring those physical characteristics to be directly measured.
[0062] In view of the above detailed description of the present invention and associated drawings, other modifications and variations will now become apparent to those skilled in the art. It should also be apparent that such other modifications and variations may be effected without departing from the spirit and scope of the present invention.

Claims

CLAIMS What is claimed is:
1. A system for the inspection of a component having two or more features, comprising: a light source illuminating the component with non-coherent light; an image system generating image data of the component that includes feature data for each feature; and a co-planarity system receiving the image data and generating co-planarity data using feature data of the image data.
2. The system of claim 1 wherein the co-planarity system further comprises a feature brightness system determining whether feature brightness data exceeds feature brightness limit data.
3. The system of claim 1 wherein the co-planarity system further comprises a brightness sigma system determining whether feature brightness data exceeds brightness sigma limit data.
4. The system of claim 1 wherein the co-planarity system further comprises a brightness deviation system determining whether feature brightness data exceeds brightness deviation limit data.
5. The system of claim 1 wherein the co-planarity system further comprises a grid alignment system generating a grid pattern to isolate the feature data for each of the two or more features.
6. The system of claim 1 wherein the light source is a circular xenon flash lamp.
7. The system of claim 1 further comprising: a laser light source; a 3D co-planarity system; and wherein the image system generates image data of the component when it is illuminated by the laser light source and provides the image data to the 3D co-planarity system, and the 3D co-planarity system generates 3D co-planarity data.
8. A system for the inspection of a component, comprising: a light source illuminating the component with non-coherent light; an image system generating image data of the component; an image analysis system coupled to the image system, the image analysis system receiving the image data and generating image analysis data using the image data; and a system support coupled to the light source and the image system, wherein the system support allows the system to be placed on a device handling system.
9. The system of claim 8 wherein the image analysis system is a co-planarity system.
10. The system of claim 8 wherein the light source is a direct axis light source.
11. A method for the inspection of a component that includes one or more features comprising: generating image data of the component including feature data for each of the features; generating feature quality data using the feature data; and generating acceptance data using the feature quality data.
12. The method of claim 11 wherein generating feature quality data comprises generating feature brightness data.
13. The method of claim 11 wherein generating feature quality data comprises generating brightness sigma data.
14. The method of claim 11 wherein generating feature quality data comprises generating brightness deviation data.
15. The method of claim 11 wherein generating acceptance data using the feature quality data comprises: comparing the feature quality data to feature quality limit data; and generating acceptance data if the feature quality data is within the feature quality limit data.
16. The method of claim 15 wherein the feature quality limit data is feature brightness limit data.
17. The method of claim 15 wherein the feature quality limit data is brightness sigma limit data.
18. The method of claim 15 wherein the feature quality limit data is brightness deviation limit data.
PCT/US2002/030629 2001-09-28 2002-09-27 System and method for inspecting components WO2003034050A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US96671201A 2001-09-28 2001-09-28
US09/966,712 2001-09-28

Publications (1)

Publication Number Publication Date
WO2003034050A1 true WO2003034050A1 (en) 2003-04-24

Family

ID=25511775

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/030629 WO2003034050A1 (en) 2001-09-28 2002-09-27 System and method for inspecting components

Country Status (1)

Country Link
WO (1) WO2003034050A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5563703A (en) * 1994-06-20 1996-10-08 Motorola, Inc. Lead coplanarity inspection apparatus and method thereof
US6028673A (en) * 1998-03-31 2000-02-22 Ngk Spark Plug Co., Ltd. Inspection of solder bumps of bump-attached circuit board
WO1999056075A1 (en) * 1998-04-28 1999-11-04 Semiconductor Technologies & Instruments, Inc. Inspection system and method for leads of semiconductor devices
US6177682B1 (en) * 1998-10-21 2001-01-23 Novacam Tyechnologies Inc. Inspection of ball grid arrays (BGA) by using shadow images of the solder balls

Similar Documents

Publication Publication Date Title
US5862973A (en) Method for inspecting solder paste in printed circuit board manufacture
EP0638801B1 (en) Method of inspecting the array of balls of an integrated circuit module
US7729528B2 (en) Automated wafer defect inspection system and a process of performing such inspection
US6236747B1 (en) System and method for image subtraction for ball and bumped grid array inspection
US20090123060A1 (en) inspection system
US7336816B2 (en) Method and apparatus for measuring shape of bumps
JPH0814848A (en) Inspection device and inspection method
JP3019005B2 (en) LSI handler
WO2015069191A2 (en) An apparatus and method for inspecting a semiconductor package
JPH10242219A (en) Method of testing bumps
EP1109215A2 (en) Apparatus and method for solder bump inspection
JP3978507B2 (en) Bump inspection method and apparatus
US7747066B2 (en) Z-axis optical detection of mechanical feature height
EP1288668A2 (en) System and method for testing electronic device interconnections
WO2003034050A1 (en) System and method for inspecting components
US5412477A (en) Apparatus for measuring bend amount of IC leads
KR20010108632A (en) Apparatus and Method for Inspecting Solder Ball of Semiconductor Device
JP3355978B2 (en) Bond coating inspection apparatus and bond coating inspection method
JP2946570B2 (en) Coplanarity measuring device
JPS6336543A (en) Method and apparatus for automatic inspection of semiconductor device
JP3012939B2 (en) Solder bridge inspection method and apparatus for implementing the method
US6787378B2 (en) Method for measuring height of sphere or hemisphere
US6543127B1 (en) Coplanarity inspection at the singulation process
Zhang et al. Computer vision system for the measurement of IC wire-bond height
JPH06242016A (en) Visual inspection system for bump

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BY BZ CA CH CN CO CR CU CZ DE DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ OM PH PL PT RU SD SE SG SI SK SL TJ TM TN TR TZ UA UG US UZ VC VN YU ZA ZM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG US

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP