US20030025918A1 - Confocal 3D inspection system and process - Google Patents

Confocal 3D inspection system and process

Info

Publication number
US20030025918A1
Authority
US
United States
Prior art keywords
light
camera
beamsplitter
elevation
inspection device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/196,335
Inventor
Cory Watkins
David Vaughnn
Alan Blair
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
August Technology Corp
Original Assignee
August Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by August Technology Corp
Priority to US10/196,335
Assigned to AUGUST TECHNOLOGY CORP. Assignors: WATKINS, CORY; VAUGHNN, DAVID; BLAIR, ALAN
Publication of US20030025918A1
Priority to US10/696,871 (US20040102043A1)
Status: Abandoned


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/0004Microscopes specially adapted for specific applications
    • G02B21/002Scanning microscopes
    • G02B21/0024Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
    • G02B21/008Details of detection or image processing, including general computer control
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/95Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/9501Semiconductor wafers
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B13/00Optical objectives specially designed for the purposes specified below
    • G02B13/22Telecentric objectives or lens systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/0004Microscopes specially adapted for specific applications
    • G02B21/002Scanning microscopes
    • G02B21/0024Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/0004Microscopes specially adapted for specific applications
    • G02B21/002Scanning microscopes
    • G02B21/0024Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
    • G02B21/0036Scanning details, e.g. scanning stages
    • G02B21/004Scanning details, e.g. scanning stages fixed arrays, e.g. switchable aperture arrays
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/16Microscopes adapted for ultraviolet illumination ; Fluorescence microscopes

Definitions

  • the apertures are orthogonal or grid-like.
  • the apertures are non-orthogonal or non-grid-like such as a hexagonal or other geometric pattern. This non-orthogonal pattern in at least certain applications may reduce aliasing and alignment issues.
  • Pitch is preferably calculated from pinhole size which is optimized to numerical aperture size.
  • the pinhole size is chosen inter alia to match the diffraction of the object imager.
  • the pitch is twice the pinhole size which optimizes the reduction of cross talk between pinholes while maximizing the number of resolution elements. Magnification and spatial coverage may then be adjusted to optimize resolution at the wafer surface.
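  • As an illustrative sketch only (not part of the original disclosure), the relationship between diffraction spot size, pinhole size and pitch described above can be estimated as follows; the 550 nm wavelength is an assumed mid-band value, and the aperture-side numerical aperture is derived from the object-side NA of 0.23 and the 4× magnification quoted elsewhere in this document:

```python
import math

# Illustrative values: object-side NA and 4x magnification are taken from this
# document; the 550 nm wavelength is an assumed mid-band value for a broadband
# halogen source.
wavelength_um = 0.55          # assumed central wavelength, microns
na_object = 0.23              # object-side numerical aperture (from the text)
magnification = 4.0           # object imager magnification (from the text)

# NA scales inversely with magnification between conjugates.
na_array = na_object / magnification

# Diffraction-limited (Airy) spot diameter reimaged onto the aperture array.
spot_diameter_um = 1.22 * wavelength_um / na_array

# Per the text, the pinhole is sized to match the diffraction spot and the
# pitch is twice the pinhole size to limit cross talk between pinholes.
pinhole_um = spot_diameter_um
pitch_um = 2.0 * pinhole_um

print(f"aperture-side NA ~ {na_array:.3f}")
print(f"diffraction spot / pinhole ~ {spot_diameter_um:.1f} um")
print(f"pinhole pitch ~ {pitch_um:.1f} um")
```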
  • Another key feature of this invention is that the light passing from the aperture array to the camera is transmitted through the pellicle beamsplitter, so that any surface anomalies on the pellicle beamsplitter are irrelevant to the imaging properties of the system and the system is not susceptible to vibrations of the pellicle beamsplitter.
  • the positioning of the aperture array into the system provides a confocal response. Only light that passes through an aperture in the aperture array, passes through the dual telecentric object imager, reflects off of the sample S, passes back through the dual telecentric object imager, and passes back through an aperture in the aperture array is in focus.
  • This confocal principle results in bright illumination of a feature in focus while dim or no illumination of an out of focus feature.
  • Aperture array in the preferred embodiment is a fused-silica material such as chrome on glass or chrome on quartz because of the low coefficient of thermal expansion. It may alternatively be made of any other material having a low coefficient of thermal expansion such as air apertures, black materials, etc. This eliminates a mismatch potential between pixel sizes and the CCD camera elements.
  • the object imager 134 in the preferred embodiment shown is of a dual telecentric design.
  • the object imager includes a plurality of lenses separated by one or more stops or baffles.
  • the object imager includes two to six lenses, and preferably three to four, on the right side of the imager and two to six lenses, and preferably three to four, on the left side of the imager separated in the middle by the stop. Since the imager is dual telecentric, the stop is located one group focal length away from the cumulative location of the lenses on each side.
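  • The telecentric condition described above (stop one group focal length from the lenses) can be checked with a simple paraxial ABCD-matrix sketch, assuming each lens group is modeled as a single thin lens; the focal length used here is arbitrary and purely illustrative:

```python
import numpy as np

# Paraxial ray-transfer (ABCD) sketch: with the stop placed one focal length
# from a thin lens that stands in for a lens group, a chief ray (height 0 at
# the stop, arbitrary angle) leaves the lens parallel to the optical axis,
# i.e. the pupil is at infinity -- the telecentric condition.
f = 100.0  # illustrative group focal length, mm (assumed value)

def propagate(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Chief ray at the stop: height 0, some field angle (radians).
chief = np.array([0.0, 0.05])

# Stop -> (distance f) -> lens group.
system = thin_lens(f) @ propagate(f)
height, angle = system @ chief

print(f"chief-ray angle after the lens group: {angle:.6f} rad")  # ~0: telecentric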
  • the object imager functions to: (1) provide a front path for the light or illumination to pass from the aperture array to the object (wafer or sample S), and (2) provide a back path for the reimaging of the object (wafer or other sample S) to the aperture array 132 .
  • This system is unique because it is a dual telecentric optical imager/reimager.
  • This dual telecentric property means that when viewed from both ends the pupil is at infinity and that the chief rays across the entire field of view are all parallel to the optical axis.
  • This provides two major benefits.
  • One benefit which relates to the object or sample end of the imager is that magnification across the field remains constant as the objectives focus in and out in relation to the sample.
  • the second benefit relates to the aperture end of the imager where the light that comes through the aperture array is collected efficiently as the telecentric object imager aligns with the telecentric camera reimager.
  • the optical throughput is very high. This is a result of the numerical aperture of the system on the object side being in excess of 0.23, with a field of view on the object of 5 mm in diameter.
  • the numerical aperture of the object imager may be adjustable or changeable by substituting a mechanized iris for the stop. This would allow for different depth response profile widths, and thus broader ranges of bump or three dimensional measurements: the taller the object to be measured, the lower the desirable numerical aperture to maintain the speed of the system. Similarly, the smaller the object to be measured, the more desirable a higher numerical aperture to maintain sharpness, i.e., accuracy.
  • the magnification of the object imager is 4×.
  • the aperture array is four times larger than the object (sample S).
  • the camera reimager 136 in the preferred embodiment shown is of a telecentric design, although it may in other embodiments be a dual telecentric design.
  • the camera reimager includes a plurality of lenses separated by a stop.
  • the camera reimager includes two to six lenses, and preferably three to four, on the right side of the reimager and two to six lenses, and preferably three to four, on the left side of the reimager separated in the middle by the stop. Since the reimager is telecentric, on the telecentric side which is the side nearest the pellicle beamsplitter, the stop is located one group focal length away from the cumulative location of the lenses on that side.
  • the camera reimager functions to provide a path for the light passing through the aperture array from the object imager to the camera. It is preferable to match or optimize the camera reimager properties to the object imager and the camera where such properties include numerical aperture, magnifications, pixel sizes, fields of view, etc.
  • the telecentric properties of the camera reimager are on the aperture array side or end so that it efficiently and uniformly across the field of view couples the light coming through the aperture array from the object imager 134 . It is pixel sampling resolution limited so its aberrations are less than that from the degradation of the pixel sampling. Its numerical aperture is designed based upon the object imager so any misalignments between the reimagers do not translate into a field dependent change in efficiency across the field of view.
  • the combined system magnification of the object and camera imagers/reimagers is chosen to match spatial resolution at the object to pixel size.
  • the magnification of the camera reimager is 0.65×.
  • the CCD or detector array is 0.65 times the size of the aperture array.
  • the preferred combined magnification of the object imager and camera reimager is 2.6×.
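  • A minimal sketch of how these magnifications chain together; the 13 µm camera pixel pitch is an assumed, typical TDI value and is not stated in this document:

```python
# Combined magnification from the object (sample S) to the camera, using the
# values given above: 4x object imager followed by a 0.65x camera reimager.
m_object_imager = 4.0
m_camera_reimager = 0.65
m_total = m_object_imager * m_camera_reimager   # 2.6x, as stated above

# Mapping an assumed camera pixel pitch back to the object plane shows the
# spatial sampling at the wafer surface (13 um is a hypothetical TDI pitch).
pixel_pitch_um = 13.0
sample_at_object_um = pixel_pitch_um / m_total

print(f"combined magnification: {m_total:.2f}x")
print(f"one camera pixel spans ~{sample_at_object_um:.1f} um at the wafer")
```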
  • the imagers/reimagers have very high numerical apertures, and the greater the numerical aperture the finer the resolution and the sharper/narrower the depth response curve.
  • an optional feature in this invention that is used in certain embodiments is the canting of either the sample S with reference to the optical axis of the entire optical subsystem, or vice versa (that is the canting of the entire optical subsystem with respect to the sample S).
  • This option compensates for the canting of the aperture array as described above thus maintaining the Scheimpflug condition.
  • the canting is shown as α.
  • the cant angle α is 0 degrees, although in other embodiments it ranges from 0 to 5 degrees, such as a cant angle α of 1.2 degrees in one alternative embodiment.
  • the camera 126 may be any line scan camera, area scan camera, combination of multiple line scan cameras, time delay integration (TDI) line scan camera or other camera or cameras as one of skill in the art would recognize as functionally operational herewith.
  • the camera may be angled.
  • the camera 126 is a TDI camera.
  • TDI provides additional speed by transferring the charge such that the system integrates light over time.
  • with a line scan camera the aperture array uses only one line of pinholes, while with TDI the aperture array has 100 or more lines with multiple apertures in each line (an example is 100 lines by 1024 apertures per line).
  • Image acquisition is typically limited by camera read rates, stage velocity and light. This broadband solution eliminates or significantly reduces light issues. Thus continued scalability of the system will occur as read rates continue to improve for TDI cameras or related technology such as CMOS imagers. Alternatively, system throughput is also increasable by increasing the number of apertures from approximately 1000 to 2000 or even 4000.
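  • A back-of-the-envelope sketch of the throughput scaling mentioned above; the line rate and the sampling at the wafer are assumed, illustrative figures, not values from this document:

```python
# Throughput scales with the camera line (read) rate and with the number of
# apertures imaged per line. Doubling or quadrupling the aperture count
# (e.g. ~1000 -> 2000 or 4000) scales the area covered per unit time
# proportionally, for a fixed line rate and fixed sampling at the wafer.
line_rate_hz = 50_000            # assumed TDI line rate (illustrative only)
sample_at_object_um = 5.0        # spatial sampling at the wafer (illustrative)

for apertures_per_line in (1000, 2000, 4000):
    swath_width_mm = apertures_per_line * sample_at_object_um / 1000.0
    area_rate_mm2_s = swath_width_mm * (line_rate_hz * sample_at_object_um / 1000.0)
    print(f"{apertures_per_line} apertures: swath {swath_width_mm:.1f} mm, "
          f"~{area_rate_mm2_s:.0f} mm^2/s per pass")
```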
  • Sampling or viewing may be 1:1 or at another ratio. Where at 1:1, the camera operates at a 1 pinhole to 1 pixel ratio. Where under sampling is used, the camera is at a ratio other than 1:1 pinholes to pixels, and in one embodiment is at 1½ or 2 pinholes per pixel element at the camera sensor.
  • If the point that is illuminated is in or near focus, substantially all of the light reflects back into the object imager, while if it is not in focus then little or none is reflected back.
  • Light passes back through the object imager and is directed toward the aperture array.
  • Light that reaches the aperture array either passes through an aperture therein, or hits the plate around the holes in the aperture array and is reflected out of the system due to the cant.
  • Light that passed through the aperture array is in focus due to the confocal principle, and it is reimaged and collimated in the telecentric camera reimager. It is directed into the camera and the intensity recorded. In any given pass, the above process occurs for every point on the sample that is being viewed.
  • the light that passes through the system is received by camera 126 and stored. After this process has been repeated at different heights, and across at least a portion of the surface, all of the stored data is then processed by a computer or the like to calculate or determine the topography of the sample including the location, size, shape, contour, roughness, and/or metrology of the bumps or other three dimensional features thereon.
  • the process involves one, two or more (generally three or more) passes over all or a selected portion of the sample surface S, each at a different surface target elevation, to measure surface elevation, followed by two or more (generally three or more) passes, each at a different bump target elevation, to measure bump elevation, followed by calculations to determine bump height.
  • the result of the passes is an intensity measurement for each point at each elevation where these points as to surface elevation and separately as to bump elevation are plotted or fitted to a Gaussian curve to determine the elevation of both the surface and the bump from which the actual bump height at a given point is determined. It is the difference between the surface elevation and the bump elevation.
  • a pass is made over a portion or the entire surface of the sample S. Intensity is determined for each pixel. Initially, a coarse or approximate surface elevation is used that approximates the surface location or elevation of the sample S. The entire sample (or the portion it is desired to measure) is scanned and the intensities are noted for each pixel; if there is very small or no intensity at a given point, then the system is significantly out of focus at that location or pixel (an example is scanning at the surface elevation where bumps exist results in little or no intensity feedback).
  • This step is generally repeated twice more (though any number of passes may be used so long as a curve can be calculated from the number of passes) at a slightly different elevation such as 5, 10 or 20 microns difference in elevation to the first pass.
  • the result is three data points of intensity for each pixel to plot or fit a Gaussian curve to determine the actual wafer surface elevation at that location.
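  • A minimal sketch of one way such a three-point Gaussian fit could be computed per pixel, by fitting a parabola to the logarithm of the intensities (a Gaussian is a parabola in log space); the function name and the sample numbers are illustrative, not from this document, and a fuller scan-loop sketch appears further below:

```python
import numpy as np

def gaussian_peak_z(z, intensity):
    """Estimate the elevation of peak response from a few (z, intensity)
    samples by fitting a parabola to log(intensity), which is exact for a
    Gaussian depth response. Returns the z of the fitted peak."""
    z = np.asarray(z, dtype=float)
    log_i = np.log(np.asarray(intensity, dtype=float))
    a, b, _c = np.polyfit(z, log_i, 2)   # log I ~ a*z^2 + b*z + c, a < 0
    return -b / (2.0 * a)                # vertex of the parabola

# Illustrative pixel: three passes at 0, 10 and 20 microns about a nominal
# surface elevation (hypothetical intensities).
z_um = [0.0, 10.0, 20.0]
intensity = [140.0, 230.0, 180.0]
print(f"estimated surface elevation: {gaussian_peak_z(z_um, intensity):.2f} um")
```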
  • the wafer surface elevation is now known for the entire sample except where bumps or other significant three dimensional protrusions or valleys exist since each of these reported no intensity as they were too out of focus to reflect back any light. Curve fitting may be used to determine surface location under the bumps.
  • the second step is to determine the elevation of these significant protrusions or valleys (such as bumps). Another pass is made over a portion or the entire surface of the sample S (often only where bumps are expected, known, or no intensity was found in the surface elevation passes). This pass occurs at a coarse or rough approximation of the elevation of the expected bumps, such as 50, 100, 200, 300 or the like microns above the surface.
  • Intensity is determined at each pixel as the entire sample (or only select locations where bumps are expected, known or no intensity was previously found) is scanned and the intensities are noted for each pixel; if there is very small or no intensity at a given point, then the system is significantly out of focus at that location or pixel (an example is scanning at bump elevations where no bump exists results in little or no intensity feedback).
  • This step is generally repeated several more times (though any number of passes may be used so long as a curve can be calculated from the number of passes) at a slightly different elevation such as 5, 10 or 20 microns different.
  • the result is multiple data points of intensity for each pixel to plot or fit a Gaussian curve to determine the bump elevation at that point.
  • the bump heights can be determined.
  • the surface elevations are determined for the bump location based upon analysis, plotting, and/or other known curve extension techniques applied to all of the proximate surface elevations around the bump. The difference between a bump elevation and the proximate surface elevations therearound, or between the bump elevation and the calculated surface elevation thereunder, equates to the bump height for a given bump.
  • the scanning process for the above invention is as follows: The system will scan lines across the sample surface S at a fixed elevation above the sample surface S. This scan will generate one z axis elevation on a depth response curve for each pixel on the sample under the scan. The sensor will then be moved in the z axis direction to a second elevation and the scan will be repeated to generate a second z axis elevation on the depth response curve for each location on the sample S under the scan. This can then be repeated any number of times desired for the interpolation method used (typically at least two or three scans, although more are certainly contemplated and will improve accuracy). The multiple locations on the depth response curve are then interpolated for each pixel to generate a map of the surface height under the scan. The elevation of the sample surface S is now known.
  • this process may be repeated at the approximate elevation of the outermost portion of the protrusions just as it was performed above at the approximate elevation of the sample surface S.
  • the bump elevations will then be known, and the bump heights are then calculated as the difference between the surface elevation and the bump elevation.
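  • A compact sketch of the two-phase measurement loop described above, using the same kind of per-pixel log-Gaussian peak fit shown earlier; the array shapes, target elevations and the scan_at() routine are hypothetical stand-ins for the real stage and camera control:

```python
import numpy as np

# Hypothetical ground truth for the sketch: a 4x4 patch of wafer at ~2 um
# elevation with one ~100 um bump in the corner. scan_at() stands in for the
# real stage/camera and returns the confocal intensity image at elevation z.
TRUE_Z = np.full((4, 4), 2.0)
TRUE_Z[0, 0] = 102.0                                   # a bump

def scan_at(z_um, width_um=15.0):
    return 1.0 + 200.0 * np.exp(-((z_um - TRUE_Z) ** 2) / (2 * width_um ** 2))

def peak_map(z_targets):
    """Fit a parabola to log(intensity) per pixel and return the peak-z map."""
    z = np.asarray(z_targets, dtype=float)
    stack = np.log([scan_at(zi) for zi in z])          # shape (nz, 4, 4)
    coeffs = np.polyfit(z, stack.reshape(len(z), -1), 2)
    a, b = coeffs[0], coeffs[1]
    return (-b / (2 * a)).reshape(TRUE_Z.shape)

surface_map = peak_map([0.0, 10.0, 20.0])              # passes near the surface
bump_map = peak_map([90.0, 100.0, 110.0])              # passes near bump tops
bump_height = bump_map[0, 0] - np.median(surface_map)  # proximate surface
print(f"estimated bump height: {bump_height:.1f} um")  # ~100 um
```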
  • the size of the “in focus” region is determined by the telecentric imaging lens. If this lens has a larger numerical aperture (approximately the ratio of the lens diameter to its focal length) the focus range will be small, and conversely if the lens has a low numerical aperture the focus range will be large. The best in-focus range depends on the elevation range that needs to be measured.
  • the invention also in at least one embodiment is capable of adjusting depth response. This is desirable since with larger bumps a broader depth response is desirable while with smaller bumps a thinner or narrower depth response is desired. In effect, the system degrades the high numerical aperture to look at larger or taller bumps, and this assists in maintaining speed. Inversely, to view smaller or thinner bumps it is desirable to provide a higher numerical aperture. This broadening of depth response is accomplished either by providing a baffle to adjust the aperture, or by providing or increasing the tilt of the sensor.
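  • The trade-off described above can be illustrated with the standard depth-of-focus approximation, in which the axial response width scales roughly as wavelength divided by NA squared; this is a textbook approximation used here for illustration, not a value taken from this document:

```python
# Rough illustration of why a lower numerical aperture suits taller bumps:
# the axial (depth) response width of a confocal channel scales roughly as
# wavelength / NA^2 (textbook approximation, illustrative only).
wavelength_um = 0.55   # assumed mid-band wavelength of the broadband source

for na in (0.23, 0.15, 0.10, 0.05):
    depth_um = wavelength_um / na ** 2
    print(f"NA {na:.2f}: depth response on the order of {depth_um:5.1f} um")
```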
  • a significantly different alternative involves imaging multiple heights at each point rather than making multiple passes. This is accomplished by using multi-line line scan cameras where each camera or sensor is looking at different elevations. For example, a four line, line-scan camera system would involve line 1 reading elevation 0 , line 2 reading elevation plus 20 microns, line 3 reading elevation plus 40 microns, and line 4 reading elevation plus 60 microns. All four data points in this example are gathered simultaneously. It is also contemplated and recognized that a CMOS imager would work successfully. Alternatively, multiple TDI sensors could also be used stacked close together.
  • two modes of speed are provided.
  • a precise mode is provided where scanning occurs as to every die in either or both surface elevation determination and bump elevation determination.
  • a faster mode is provided where scanning as to wafer surface elevation is performed only in one or a few places along the wafer and interpolation is used to calculate the surface over the remaining surface including at the die.
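  • A minimal sketch of the kind of interpolation the faster mode could use, here a least-squares plane fit through a few measured surface sites; the site coordinates, elevations and die location are hypothetical:

```python
import numpy as np

# Hypothetical surface-elevation measurements at a few (x, y) sites on the
# wafer, in mm and microns respectively.
sites_xy = np.array([[10.0, 10.0], [150.0, 20.0], [80.0, 140.0], [20.0, 120.0]])
site_z_um = np.array([1.8, 3.1, 2.6, 2.0])

# Fit z = a*x + b*y + c by least squares, then evaluate at any die location.
A = np.column_stack([sites_xy, np.ones(len(sites_xy))])
(a, b, c), *_ = np.linalg.lstsq(A, site_z_um, rcond=None)

die_xy = (100.0, 75.0)  # hypothetical die location, mm
z_at_die = a * die_xy[0] + b * die_xy[1] + c
print(f"interpolated surface elevation at die: {z_at_die:.2f} um")
```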
  • Some alternative light sources include an illuminator with a filament designed to provide a uniformly filled area that is internally imaged first into a numerical aperture stop and then reimaged into the telecentric pupil of the object imager, and whereby the angular spectrum from the filament is mapped first into a field stop inside the illuminator and then reimaged to a field located in the intermediate focus or IFA of the object imager at the aperture array.
  • Another light source is an illuminator with a filament designed to provide a uniformly filled area that is imaged into the telecentric pupil of the imaging system (object imager) and whereby the angular spectrum from the filament is mapped into the field located in the intermediate focus or IFA of the imaging system at the aperture array and whereby the light outside the useful A ⁇ product of the imaging system is eliminated via a series of baffles.
  • Yet another light source is an illuminator with an array of bright monochromatic or quasi-monochromatic sources instead of a filament.
  • an even further illuminator is a bright monochromatic or quasi-monochromatic source that is collimated and directed into the field located in the intermediate focus or IFA of the imaging system at the aperture array, whereby preferably an array of lenslets is employed to create an angular spectrum at each aperture, and whereby it is preferable but optional that the source is apodized.

Abstract

A confocal three dimensional inspection system, and process for use thereof, allows for rapid inspecting of bumps and other three dimensional (3D) features on wafers, other semiconductor substrates and other large format micro topographies. The sensor eliminates out of focus light using a confocal principle to create a narrow depth response in the micron range.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims priority to the following provisional patent applications all filed on Jul. 16, 2001: U.S. Ser. No. 60/305,730, and U.S. Ser. No. 60/305,729.[0001]
  • BACKGROUND OF THE INVENTION
  • 1. Technical Field [0002]
  • The present invention relates to a system, and process for use thereof, for inspecting wafers and other semiconductor or microelectronic substrates, and specifically for inspecting three dimensional (3D) surfaces or features thereon such as bumps. Specifically, the present invention relates to a confocal optical system for inspecting bumps and other 3D features on wafers or like substrates, and a process of using such system. [0003]
  • 2. Background Information [0004]
  • Over the past several decades, the microelectronics and semiconductor industry has grown exponentially in use and popularity. Microelectronics and semiconductors have in effect revolutionized society by introducing computers and electronic advances, and by generally turning many previously difficult, expensive and/or time consuming mechanical processes into simple and quick electronic processes. This boom has been fueled by an insatiable desire by businesses and individuals for computers and electronics, and more particularly, faster, more advanced computers and electronics, whether it be on an assembly line, on test equipment in a lab, on the personal computer at one's desk, or in the home via electronics and toys. [0005]
  • The manufacturers of microelectronics and semiconductors have made vast improvements in end product quality, speed and performance as well as in manufacturing process quality, speed and performance. However, there continues to be demand for faster, more reliable and higher performing semiconductors. [0006]
  • One process that has evolved over the past decade plus is the microelectronic and semiconductor inspection process. The merit in inspecting microelectronics and semiconductors throughout the manufacturing process is obvious in that bad wafers may be removed at the various steps rather than processed to completion only to find out a defect exists either by end inspection or by failure during use. In the beginning, wafers and like substrates were manually inspected such as by humans using microscopes. As the process has evolved, many different systems, devices, apparatus, and methods have been developed to automate this process such as the method developed by August Technology and disclosed in U.S. patent application Ser. No. 09/352,564. Many of these automated inspection systems, devices, apparatus, and methods focus on two dimensional inspection, that is inspection of wafers or substrates that are substantially or mostly planar in nature. [0007]
  • One rapidly growing area in the semiconductor industry is the use of bumps or other three dimensional (3D) features that protrude outward from the wafer or substrate. The manufacturers, processors, and users of such wafers or like substrates having bumps or other three dimensional features desire to inspect these wafers or like substrates in the same or similar manner to the two dimensional substrates. However, many obstacles exist as the significant height of bumps or the like causes focusing problems, shadowing problems, and just general depth perception problems. Many of the current systems, devices, apparatus, and methods are either completely insufficient to handle these problems or cannot satisfy the speed, accuracy, and other requirements. [0008]
  • SUMMARY OF THE INVENTION
  • The inspecting of semiconductors or like substrates, and specifically the inspection of three dimensional surfaces or features, such as bumps, is accomplished by the present invention, which is a confocal sensor with a given depth response functioning using the principle of eliminating out of focus light thereby resulting in the sensor producing a signal only when the surface being inspected is in a narrow focal range. The result is an accurate height determination for a given point or area being inspected such that the accumulation of a plurality of height determinations from use of the confocal sensor system across a large surface allows the user to determine the topography thereof. [0009]
  • In sum, this system and process creates multiple parallel confocal optical paths whereby the out of focus light is eliminated by placing an aperture at a plane which is a conjugate focal plane to the surface of the sample. The result is that the sensor produces a signal only when the sample surface is in a narrow focal range.[0010]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Preferred embodiments of the invention, illustrative of the best mode in which applicant has contemplated applying the principles, are set forth in the following description and are shown in the drawings and are particularly and distinctly pointed out and set forth in the appended claims. [0011]
  • FIG. 1 is a drawing of one embodiment of the present invention. [0012]
  • Similar numerals refer to similar parts throughout the drawings.[0013]
  • DESCRIPTION OF THE PREFERRED EMBODIMENT
  • The three dimensional (3D) inspection system of the present invention is indicated generally at 120 as is best shown overall in FIG. 1 and is used in one environment to view, inspect, or otherwise optically measure three dimensional features or surfaces. One example is the measurement of bumps on wafers or like substrates. The 3D inspection system includes a light source 122, an optical subsystem 124, and a camera 126. The optical subsystem includes an intermediate focal assembly and a pair of imagers or reimagers. The intermediate focal assembly in one embodiment includes an optional critical baffle 129, a beamsplitter 130, a photon motel 131, and an array mount including an aperture array 132, while the imager/reimagers in one embodiment include an object imager 134 and a camera reimager 136. [0014]
  • The light source 122 is any source of light that provides sufficient light to illuminate the sample S, and the light source may be positioned in any position so long as it provides the necessary light to sample S to be viewed, inspected or otherwise optically observed. Examples of the light source include, but are not limited to white light sources such as halogen or arc lights, lasers, light emitting diodes (LEDs) including white LEDs or any of the various colored LEDs, fluorescent lights, or any other type of light source. [0015]
  • In the preferred embodiment, the light source 122 is an incoherent light source, preferably of an incandescent type, that includes a filament, a condenser lens and a heat absorbing glass. In one embodiment, which is most preferred and is shown in the Figures, the light source is an incoherent, optimally baffled, lamp. It is also preferred in certain embodiments that the light source is spatially uniform and of a quasi-telecentric or preferably telecentric design. The system also needs bright light which is typically provided by a broadband or broad-spectrum style light source. It is also very desirable to define the light source as one of a matched numerical aperture design with the object imager to reduce stray light and improve efficiency. [0016]
  • One style of incoherent light source is an incandescent lamp such as a halogen lamp such as a 12 volt, 100 watt quartz-style halogen lamp with a color temperature in the 3000-3500 Kelvin range. Halogen provides a very consistent, stable light output that is broadband or over many wavelengths and is cost effective and readily manufacturable. It is highly preferred that the light source is incoherent to avoid or reduce spatial non uniformity and/or speckle. [0017]
  • The light source is of a Köhler design such as a simple or reimaged Köhler design, and most preferably a reimaged Köhler design which includes additional lenses beyond the condenser lens, which effectively matches the telecentric properties of the optical subsystem thereby matching the numerical aperture and field properties needed by the system 120 to produce accurate height measurements of bumps on the surface of the sample S. One of the embodiments to create such a Köhler or reimaged Köhler system uses an aspheric condenser lens. [0018]
  • One of the advantages of our Köhler or reimaged Köhler design is that every field position or spot in our field “sees” all of the filament so the system has a very uniform irradiance. [0019]
  • The spatial extent of the source coupled with the numerical aperture of the condenser lens plus the focal length and the conjugate provides a combination of field of view and numerical aperture that is optimized. The system optimizes the AΩ where the A is the size of the area of the field and Ω is the solid angle of the cone of light. This provides a very uniform field. The Köhler illumination design (1) maps pupil of light source onto spatial extension of aperture array, and (2) maps spatial extension of filament in light source into numerical aperture or angle space of reimaging system. The reimaged Köhler design differs from a standard Köhler design in two ways: (1) reimaged Köhler designs have a filament that is reimaged to a circular aperture that very precisely defines a constant numerical aperture over an entire field, and (2) in between the filament and the sample there is a focal plane that is conjugated to our aperture array, and at that focal plane the light is baffled and masked so that light outside of the desired range at the aperture array never enters the system. One baffle defines the numerical aperture and another baffle limits the light that passes through to only light within the desired field of view. [0020]
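  • As a purely illustrative sketch of the AΩ (étendue) bookkeeping described above, using the object-side field of view and numerical aperture quoted earlier in this document (a field roughly 5 mm in diameter and an NA in excess of 0.23); since étendue is conserved through the imaging chain, the source and condenser must supply at least this product:

```python
import math

# Etendue (area * projected solid angle) of the object-side field, using
# values quoted elsewhere in this document; the source/condenser side must be
# designed to fill at least this A-Omega product for a fully, uniformly
# illuminated field (Kohler condition).
field_diameter_mm = 5.0
na_object = 0.23

area_mm2 = math.pi * (field_diameter_mm / 2.0) ** 2
projected_solid_angle_sr = math.pi * na_object ** 2     # pi * NA^2 for a cone
etendue = area_mm2 * projected_solid_angle_sr           # mm^2 * sr

print(f"field area        : {area_mm2:.2f} mm^2")
print(f"projected Omega   : {projected_solid_angle_sr:.4f} sr")
print(f"A*Omega (etendue) : {etendue:.3f} mm^2*sr")
```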
  • In one embodiment, the filament length, width and packing density are optimized. The filament length, width and packing density are adjustable to scale the system to inspect significantly larger or smaller bumps or the like. [0021]
  • In one embodiment, the light source is also broadband which provides significant light to the system, and avoids laser-based system speckle problems. The broadband concept provides light across many wavelengths versus a single or small range of wavelengths. This broadband is provided by mercury-type, incandescent-type or other broadband light sources. [0022]
  • In the telecentric environment of one of the embodiments, it is optional to provide an intermediate numerical aperture stop and intermediate field stop that provides improved stray light elimination. [0023]
  • In the quasi-telecentric environment of one of the embodiments, the system has an illuminator interface in the light path adjacent to the illuminator or light source 122. This illuminator interface provides optimal baffling to better match the illuminator cone of light with the field of view cone of light, i.e., in effect improving the quasi-telecentricity to as close as possible to complete telecentricity. In various embodiments, from one to multiple baffles are provided in front of the illuminator, and optimally sized and shaped. These baffles are basically windows that are sized and shaped to allow desirable light through while eliminating stray, peripheral or other undesirable light. This provides the optimal signal to noise ratio. Additionally in a non-preferred embodiment, an optical filter may be used in conjunction with baffling within the illuminator interface. This removes unwanted wavelengths of light. [0024]
  • A thermal barrier may optionally also be provided to reduce the heat transfer of the light from the light source to other components of the system. This reduces or eliminates thermal expansion which causes distortions. The thermal barrier is preferably placed between the lens and the baffle. [0025]
  • This light source provides sufficient energy to illuminate the sample S. The light emitted from the light source 122 is directed into the optical subsystem 124. Specifically the light is directed toward beamsplitter 130. [0026]
  • In more detail and in the embodiment shown in the Figures, the optical subsystem 124 includes the intermediate focal assembly including the critical baffle 129, beamsplitter 130, photon motel 131, and array mount with aperture array 132, and the system further includes object imager 134, and camera reimager 136. [0027]
  • Critical baffle 129 is an optional additional baffle that is positioned as close as possible to an intermediate focal surface. The critical baffle is positioned in between the light source and the beamsplitter and is preferably as close as possible to the beamsplitter. The critical baffle reduces stray light entering the intermediate focal assembly as well as removing stray reflections off of the beamsplitter. [0028]
  • Beamsplitter 130 in the embodiment shown is a pellicle beamsplitter. A pellicle beamsplitter has several advantages since it is achromatic, has very low polarization effects, has less variation with angle and fewer color issues, and provides light more uniformly after beam splitting than a polarized beamsplitter. A pellicle beamsplitter also allows for an optical system that does not need a ¼ waveplate. [0029]
  • Another important feature is the design, setup, alignment and configuration of the light source 122, pellicle beamsplitter 130 and the aperture array 132 as is shown in FIG. 1. The light or illumination source 122 provides light to the beamsplitter, whereby some of this light passes through the beamsplitter and emanates out of the entire system and is lost, a small amount may be lost within the beamsplitter, and the remaining light is reflected toward the aperture array. In one embodiment as is shown in the Figures, the camera is axial with the imager/reimagers while the light source is not, and the light source uses the beamsplitter to introduce the light into the axis defined between the imager/reimagers 134 and 136 and the camera. This design maintains a good transmitted wave front through a pellicle beamsplitter, i.e., the imaging performance is preserved between the imager/reimagers through the array and beamsplitter. The reason for maintaining this good transmitted wave front is the combination of the axial camera and reimager design coupled with a pellicle beamsplitter as defined below rather than a polarizing beamsplitter, since the pellicle beamsplitter has a good transmitted wave front in comparison to its reflected wave front, whereas the polarizing beamsplitter has a good reflected wave front in comparison to its transmitted wave front. [0030]
  • The beamsplitter 130 is pellicle and is of a broadband configuration, low polarizing effect that is spatially dependent, low scattering, non-absorbing or low absorbing, and is color independent with negligible internal or stray reflections. In contrast to a polarizing beamsplitter where incoming light is reflected at 90 degrees to the path of at least one of the paths of outgoing light such that incoming and all exiting light are basically near normal incident to the faces of the cube, the pellicle beamsplitter in this embodiment overcomes the detrimental design limitations of a typical achromatic beamsplitter. This broadband configuration is necessary because in a typical achromatic beamsplitter it is difficult to successfully achieve very small Fresnel reflections on the surfaces unless the beamsplitter includes coatings that adopt broad wavelength ranges which are very expensive, very sophisticated and difficult to provide. [0031]
  • The pellicle beamsplitter in one embodiment provides better performance than the polarizing beamsplitter in the above described arrangement with the axial camera and reimagers, even though a polarizing beamsplitter and ¼ wave plate with axial camera and reimagers would only require the system to lose half of its light once, while the pellicle system with axial camera and reimagers requires the system to lose half of its light twice or successively. This is acceptable due to the broadband illumination from the light source, which provides more light so the extra loss is allowed. [0032]
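  • A one-line arithmetic check of the light-budget comparison above (a sketch; the 50/50 split is the nominal value stated for the pellicle in this document):

```python
# Polarizing beamsplitter + 1/4 wave plate: the system loses half of its
# light once; the return pass is then efficiently directed to the camera.
polarizing_path_efficiency = 0.5

# Near-50/50 pellicle: half is lost sending light toward the aperture array,
# and half of the returning light is lost again passing back toward the camera.
pellicle_path_efficiency = 0.5 * 0.5   # = 0.25

print(f"polarizing + 1/4 wave plate: {polarizing_path_efficiency:.0%} of source light")
print(f"pellicle (50/50), two passes: {pellicle_path_efficiency:.0%} of source light")
```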
  • A pellicle beamsplitter is preferred over merely a beamsplitter because the pellicle removes internal obstructions and optical aberrations that are undesirable. [0033]
  • It has been discovered that using the above system, the pellicle beamsplitter is more efficient, provides less stray light, is more spatially uniform, and generally provides better properties than the polarizing beamsplitter when the system uses a broadband light source. The pellicle beamsplitter is all dielectric rather than containing a metallic layer, resulting in a beamsplitter that is non-absorbing or low absorbing. The dielectric pellicle beamsplitter is also preferably as close to 50/50 reflective/transmissive as possible. It is also preferred that the beamsplitter is low scattering. As a result, the system has optimized the amount of “good” light that passes through while minimizing the amount of “bad” light passing through, which is absorbed or scattered light. [0034]
  • [0035] Photon motel 131 is critical because 50% of the light is lost in the beamsplitter. This large amount of “lost” light needs to be eliminated from the system, so an optimized and efficient photon motel is critical. Photon motel 131 is a two walled device in which the first wall is a highly efficient light absorbing and controlled reflecting glass surface and the second wall is a highly efficient light absorbing surface optimally positioned to receive the light reflected from the first wall. The first wall is a piece of highly polished absorbing glass that eliminates a significant amount of the light, while the remaining light is reflected in a controlled manner rather than scattered. In one embodiment, 96% of the light is absorbed. The reflected light is directed toward the second wall, a flat black coated surface, where a significant amount of the light reflected from the first wall is absorbed while the remainder is scattered into a Lambertian distribution. In one embodiment, 90% of the light reflected to the second wall is absorbed while 10% is scattered. The result is that less than ½ of a percent is scattered back into the intermediate focal assembly, since 10% of 4% is less than ½ of a percent.
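The two-wall arithmetic can be checked with a minimal sketch; the 96% and 90% figures are the embodiment's example values quoted above, not fixed requirements:

    first_wall_absorption = 0.96    # highly polished absorbing glass (example value)
    second_wall_absorption = 0.90   # flat black coated surface (example value)

    reflected_to_second_wall = 1.0 - first_wall_absorption                       # 4% of the dumped light
    scattered_back = reflected_to_second_wall * (1.0 - second_wall_absorption)   # 10% of that 4%

    print(f"stray fraction returned toward the intermediate focal assembly: {scattered_back:.2%}")  # 0.40%, under 1/2 of a percent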
  • A pinned, anodized aluminum mounting holder holds the [0036] aperture array 132 in place. The pins allow the aperture array to be removed and then returned and/or replaced in exactly the same position.
  • [0037] Aperture array 132 in the embodiment shown is an opaque pinhole array. Specifically, the aperture array is chrome on glass or chrome on quartz, with the chrome on the first or reflective side and the pinholes etched out of the second side, which is the side facing the sample S (the chrome side), while the reflective side faces the beamsplitter. In one alternative embodiment, either one or both sides of the array include an anti-reflective (A/R) coating. The chrome coating has an optical density of 5.
  • The pinhole array may be of any x by y number of pinholes; in the most preferred embodiment it is an array of approximately 100 by approximately 1000 pinholes. The holes in this embodiment are circular, although other configurations are contemplated, as are other aperture, pinhole or like arrays of differing numbers and ranges of holes. [0038]
  • The aperture array is slightly canted as shown by [0039] β. This canting directs or steers stray reflections away in directions that do not affect the system. For instance, the canting keeps light that is reflected from the pellicle toward the aperture array, but that does not pass through a pinhole in the array, from being reflected back into the camera reimager and camera. In the embodiment shown the cant β is 4.5 degrees, although it may be at other angles between 0.1 degree and 25 degrees. As discovered, the greater the cant angle the easier it is to remove stray light such as that caused by reflection from the chrome surface; however, canting too much introduces other negative effects.
  • The pinholes in the aperture array are optimized in terms of size and pitch. In one embodiment, the size of the pinholes matches the camera pixels; that is, the size of each pinhole matches the diffraction size of the spot coming back from the object imager. [0040]
  • In another embodiment, however, under sampling is used, meaning the system has more pinholes than camera pixels, and as such more than one pinhole is mapped or correlated into each pixel. This under sampling reduces the effects of aliasing in the system so that holes do not have to match up directly with the pixels; alignment errors, distortions, imperfections in the optical system and other similar issues are thereby avoided, because this design assures that the same or substantially the same amount of light reaches each pixel regardless of the orientation, phase, etc. of the pixel with respect to a pinhole. The under sampling also broadens the depth response profile of the optical system, allowing the system to operate over a broad range of three dimensional heights on the sample S. [0041]
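A minimal sketch of this pinhole-to-pixel binning, assuming a purely illustrative one-dimensional geometry and a 2-pinholes-per-pixel ratio (neither is specified by the disclosure at this point):

    import numpy as np

    pinholes_per_pixel = 2                   # illustrative under-sampling ratio
    pinhole_signal = np.random.rand(2000)    # hypothetical light collected behind each pinhole

    # Summing groups of adjacent pinholes into one pixel makes the total per pixel nearly
    # independent of how the pinhole grid lines up (in phase) with the pixel grid.
    pixel_signal = pinhole_signal.reshape(-1, pinholes_per_pixel).sum(axis=1)
    print(pixel_signal.shape)                # (1000,) camera pixels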
  • In addition, in one embodiment the apertures are orthogonal or grid-like. However, in alternative embodiments the apertures are non-orthogonal or non-grid-like such as a hexagonal or other geometric pattern. This non-orthogonal pattern in at least certain applications may reduce aliasing and alignment issues. [0042]
  • Pitch is preferably calculated from the pinhole size, which is optimized to the numerical aperture. The pinhole size is chosen, inter alia, to match the diffraction of the object imager. The pitch is twice the pinhole size, which optimizes the reduction of cross talk between pinholes while maximizing the number of resolution elements. Magnification and spatial coverage may then be adjusted to optimize resolution at the wafer surface. [0043]
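As a rough illustration of this sizing chain only: the wavelength, the Airy-disk rule of thumb and the way the aperture-side numerical aperture is derived below are all assumptions made for the sketch, not values or formulas taken from the disclosure:

    WAVELENGTH_UM = 0.55                 # assumed mid-visible wavelength
    OBJECT_SIDE_NA = 0.23                # object-side NA quoted elsewhere in this description
    OBJECT_IMAGER_MAG = 4.0              # aperture array is four times the object

    aperture_side_na = OBJECT_SIDE_NA / OBJECT_IMAGER_MAG       # NA scales down with magnification
    spot_diameter_um = 1.22 * WAVELENGTH_UM / aperture_side_na  # Airy-disk diameter approximation
    pinhole_size_um = spot_diameter_um                          # pinhole matched to the diffraction spot
    pitch_um = 2.0 * pinhole_size_um                            # pitch is twice the pinhole size

    print(f"pinhole on the order of {pinhole_size_um:.0f} um, pitch on the order of {pitch_um:.0f} um")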
  • Another key feature of this invention is that light passing from the aperture array toward the camera passes the pellicle beamsplitter in transmission, so that any surface anomalies on the pellicle beamsplitter are irrelevant to the imaging properties of the system and the system is not susceptible to vibrations of the pellicle beamsplitter. [0044]
  • The positioning of the aperture array in the system provides a confocal response. Only light that passes through an aperture in the aperture array, passes through the dual telecentric object imager, reflects off of the sample S, passes back through the dual telecentric object imager, and passes back through an aperture in the aperture array is in focus. This confocal principle results in bright illumination of a feature that is in focus and dim or no illumination of a feature that is out of focus. [0045]
  • The aperture array in the preferred embodiment is a fused-silica material, such as chrome on glass or chrome on quartz, because of its low coefficient of thermal expansion. It may alternatively be made of any other material having a low coefficient of thermal expansion, such as air apertures, black materials, etc. The low thermal expansion eliminates a potential mismatch between the pixel sizes and the CCD camera elements. [0046]
  • The [0047] object imager 134 in the preferred embodiment shown is of a dual telecentric design. The object imager includes a plurality of lenses separated by one or more stops or baffles. In one embodiment, the object imager includes two to six lenses, and preferably three to four, on the right side of the imager and two to six lenses, and preferably three to four, on the left side of the imager separated in the middle by the stop. Since the imager is dual telecentric, the stop is located one group focal length away from the cumulative location of the lenses on each side.
  • The object imager functions to: (1) provide a front path for the light or illumination to pass from the aperture array to the object (wafer or sample S), and (2) provide a back path for the reimaging of the object (wafer or other sample S) to the [0048] aperture array 132.
  • This system is unique because it is a dual telecentric optical imager/reimager. This dual telecentric property means that when viewed from both ends the pupil is at infinity and that the chief rays across the entire field of view are all parallel to the optical axis. This provides two major benefits. One benefit which relates to the object or sample end of the imager is that magnification across the field remains constant as the objectives focus in and out in relation to the sample. The second benefit relates to the aperture end of the imager where the light that comes through the aperture array is collected efficiently as the telecentric object imager aligns with the telecentric camera reimager. [0049]
  • The optical throughput is very high. This results from a numerical aperture of the system on the object side in excess of 0.23 combined with a field of view on the object with a diameter of 5 mm. [0050]
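For orientation only, the throughput (AΩ product) implied by these two numbers can be estimated with the standard small-angle étendue approximation; the formula is a textbook assumption, not part of the disclosure:

    import math

    NA = 0.23                    # object-side numerical aperture quoted above
    FIELD_DIAMETER_MM = 5.0      # object-side field of view diameter quoted above

    field_area_mm2 = math.pi * (FIELD_DIAMETER_MM / 2.0) ** 2   # illuminated field area
    solid_angle_sr = math.pi * NA ** 2                          # small-angle collection cone
    a_omega = field_area_mm2 * solid_angle_sr                   # AΩ product in mm^2·sr

    print(f"approximate AΩ product: {a_omega:.1f} mm^2·sr")     # roughly 3 mm^2·sr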
  • In an alternative embodiment, the numerical aperture of the object imager may be adjustable or changeable by substituting a mechanized iris for the stop. This allows for different depth response profile widths and thus for broader ranges of bump or three dimensional measurements: the taller the object to be measured, the lower the numerical aperture that is desirable in order to maintain the speed of the system; similarly, the smaller the object to be measured, the higher the numerical aperture that is desirable in order to maintain sharpness, i.e., accuracy. [0051]
  • The magnification of the object imager is 4x. The aperture array is four times larger than the object (sample S). [0052]
  • The camera reimager [0053] 136 in the preferred embodiment shown is of a telecentric design, although it may in other embodiments be a dual telecentric design. The camera reimager includes a plurality of lenses separated by a stop. In one embodiment, the camera reimager includes two to six lenses, and preferably three to four, on the right side of the reimager and two to six lenses, and preferably three to four, on the left side of the reimager separated in the middle by the stop. Since the reimager is telecentric, on the telecentric side which is the side nearest the pellicle beamsplitter, the stop is located one group focal length away from the cumulative location of the lenses on that side.
  • The camera reimager functions to provide a path for the light passing through the aperture array from the object imager to the camera. It is preferable to match or optimize the camera reimager properties to the object imager and the camera where such properties include numerical aperture, magnifications, pixel sizes, fields of view, etc. [0054]
  • The telecentric properties of the camera reimager are on the aperture array side or end, so that it couples the light coming through the aperture array from the [0055] object imager 134 efficiently and uniformly across the field of view. It is pixel sampling resolution limited, so its aberrations are smaller than the degradation due to the pixel sampling. Its numerical aperture is designed based upon the object imager so that any misalignments between the imager and reimager do not translate into a field dependent change in efficiency across the field of view.
  • The combined system magnification of the object and camera imagers/reimagers is chosen to match spatial resolution at the object to pixel size. [0056]
  • The magnification of the camera reimager is 0.65x. The CCD or detector array is 0.65 times the size of the aperture array. Thus, the preferred combined object imager and camera reimager magnification is 2.6x. [0057]
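A trivial sketch of this magnification chain, using the values quoted above:

    OBJECT_IMAGER_MAG = 4.0       # object (sample S) to aperture array
    CAMERA_REIMAGER_MAG = 0.65    # aperture array to CCD/detector array

    combined_mag = OBJECT_IMAGER_MAG * CAMERA_REIMAGER_MAG
    print(f"combined object-to-camera magnification: {combined_mag:.1f}x")   # 2.6x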
  • The imagers/reimagers have very high numerical apertures, and the greater the numerical aperture the finer the resolution and the sharper/narrower the depth response curve. [0058]
  • In addition, an optional feature used in certain embodiments of this invention is the canting of either the sample S with respect to the optical axis of the entire optical subsystem, or vice versa (that is, the canting of the entire optical subsystem with respect to the sample S). This option compensates for the canting of the aperture array as described above, thus maintaining the Scheimpflug condition. In the Figure, the canting is shown as α. In the current preferred embodiment, the cant angle α is 0 degrees, although in other embodiments it ranges from 0 to 5 degrees, such as a cant angle α of 1.2 degrees in one alternative embodiment. [0059]
  • It is also an option not to cant the sample or the optical subsystem when the aperture array is canted. In this scenario, some desensitization of the signal occurs, but it is often not significant or noteworthy. [0060]
  • The [0061] camera 126 may be any line scan camera, area scan camera, combination of multiple line scan cameras, time delay integration (TDI) line scan camera or other camera or cameras as one of skill in the art would recognize as functionally operational herewith. The camera may be angled γ.
  • In the embodiment shown in the Figures, the [0062] camera 126 is a TDI camera. TDI provides additional speed by transferring the charge such that the system integrates light over time. With a line scan camera, the aperture array uses only one line of pinholes, while with TDI the aperture array comprises 100 or more lines with multiple apertures in each line (an example is 100 lines by 1024 apertures per line).
  • Image acquisition is typically limited by camera read rates, stage velocity and light. This broadband solution eliminates or significantly reduces the light limitation. Thus, continued scalability of the system will occur as read rates continue to improve for TDI cameras or related technologies such as CMOS imagers. Alternatively, system throughput can also be increased by increasing the number of apertures from approximately 1000 to 2000 or even 4000. [0063]
  • Sampling or viewing may be 1:1 or at another ratio. At 1:1, the camera operates at a one pinhole to one pixel ratio. Where under sampling is used, the camera is at a ratio other than 1:1 pinholes to pixels, and in one embodiment is at 1½ or 2 pinholes per pixel element at the camera sensor. [0064]
  • Light passes through the system as follows: [0065] Light source 122 illuminates and directs light toward beamsplitter 130. Some of the light that reaches the beamsplitter passes through the beamsplitter and emanates out of the entire system, thus avoiding interference with the system, a small amount is lost within the beamsplitter, and the remaining light is reflected toward the aperture array. Light that reaches the aperture array either passes through an aperture therein, or hits the plate around the holes in the aperture array and is reflected out of the system due to the cant. Light that passes through the aperture array is reimaged and collimated in the dual telecentric object imager. The light is directed toward the sample S and reflects off of the sample S. If the point that is illuminated is in or near focus, substantially all of the light reflects back into the object imager, while if the point is not in focus then little or none is reflected back. Light passes back through the object imager and is directed toward the aperture array. Light that reaches the aperture array again either passes through an aperture therein, or hits the plate around the holes in the aperture array and is reflected out of the system due to the cant. Light that passes back through the aperture array is in focus due to the confocal principle, and it is reimaged and collimated in the telecentric camera reimager. It is directed into the camera and the intensity recorded. In any given pass, the above process occurs for every point on the sample that is being viewed.
  • The light that passes through the system is received by [0066] camera 126 and stored. After this process has been repeated at different heights, and across at least a portion of the surface, all of the stored data is then processed by a computer or the like to calculate or determine the topography of the sample including the location, size, shape, contour, roughness, and/or metrology of the bumps or other three dimensional features thereon.
  • In one current embodiment for bumps or other three dimensional features, the process involves one, two or more (generally three or more) passes over all or a selected portion of the sample surface S, each at a different surface target elevation, to measure surface elevation, followed by two or more (generally three or more) passes, each at a different bump target elevation, to measure bump elevation, followed by calculations to determine bump height. The result of the passes is an intensity measurement for each point at each elevation; these points, separately for the surface elevation and for the bump elevation, are plotted or fitted to a Gaussian curve to determine the elevation of both the surface and the bump, from which the actual bump height at a given point is determined. It is the difference between the surface elevation and the bump elevation. [0067]
  • In more detail, a pass is made over a portion or the entire surface of the sample S. Intensity is determined for each pixel. Initially, a coarse or approximate surface elevation is used that approximates the surface location or elevation of the sample S. The entire sample (or the portion it is desired to measure) is scanned and the intensities are noted for each pixel; if the intensity at a given point is very small or zero, the system is significantly out of focus at that location or pixel (for example, scanning at the surface elevation where bumps exist results in little or no intensity feedback). This step is generally repeated twice more (though any number of passes may be used so long as a curve can be calculated from them) at a slightly different elevation, such as 5, 10 or 20 microns of difference from the first pass. The result is three intensity data points for each pixel, which are plotted or fitted to a Gaussian curve to determine the actual wafer surface elevation at that location. The wafer surface elevation is then known for the entire sample except where bumps or other significant three dimensional protrusions or valleys exist, since each of these reported no intensity because it was too far out of focus to reflect back any light. Curve fitting may be used to determine the surface location under the bumps. [0068]
  • The second step is to determine the elevation of these significant protrusions or valleys (such as bumps). Another pass is made over a portion or the entire surface of the sample S (often only where bumps are expected, known, or where no intensity was found in the surface elevation passes). This pass occurs at a coarse or rough approximation of the elevation of the expected bumps, such as 50, 100, 200, 300 or the like microns above the surface. Intensity is determined at each pixel as the entire sample (or only select locations where bumps are expected, known or where no intensity was previously found) is scanned, and the intensities are noted for each pixel; if the intensity at a given point is very small or zero, the system is significantly out of focus at that location or pixel (for example, scanning at bump elevations where no bump exists results in little or no intensity feedback). This step is generally repeated several more times (though any number of passes may be used so long as a curve can be calculated from them) at slightly different elevations, such as 5, 10 or 20 microns apart. The result is multiple intensity data points for each pixel, which are plotted or fitted to a Gaussian curve to determine the bump elevation at that point. [0069]
  • Once the surface elevations and the bump elevations are known, the bump heights can be determined. The surface elevation is determined for each bump location based upon analysis, plotting, and/or other known curve extension techniques applied to all of the proximate surface elevations around the bump. The difference between a bump elevation and the proximate surface elevations therearound, or between the bump elevation and the calculated surface elevation thereunder, equates to the bump height for a given bump. [0070]
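A minimal numerical sketch of this per-pixel peak estimation and height calculation, assuming exactly three intensity samples per elevation range and a Gaussian depth response; the function name, the parabolic fit to log-intensity, and the sample numbers are illustrative choices, not the patent's prescribed implementation:

    import numpy as np

    def gaussian_peak_z(z_samples, intensities):
        """Estimate the elevation of peak confocal response from (z, intensity) samples.
        Fitting a parabola to log-intensity is exact when the depth response is Gaussian."""
        z = np.asarray(z_samples, dtype=float)
        log_i = np.log(np.asarray(intensities, dtype=float))
        a, b, _c = np.polyfit(z, log_i, 2)   # log I ~ a*z^2 + b*z + c
        return -b / (2.0 * a)                # vertex of the parabola = elevation of peak response

    # Hypothetical samples for one pixel (elevations in microns, intensities in arbitrary units)
    surface_z = gaussian_peak_z([0.0, 10.0, 20.0], [0.60, 0.95, 0.55])    # surface-elevation passes
    bump_z = gaussian_peak_z([80.0, 90.0, 100.0], [0.50, 0.90, 0.48])     # bump-elevation passes

    bump_height = bump_z - surface_z   # bump height = bump elevation minus surface elevation
    print(f"surface ~{surface_z:.1f} um, bump top ~{bump_z:.1f} um, bump height ~{bump_height:.1f} um")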
  • In sum, the scanning process for the above invention is as follows: The system will scan lines across the sample surface S at a fixed elevation above the sample surface S. This scan will generate one z axis elevation on a depth response curve for each pixel on the sample under the scan. The sensor will then be moved in the z axis direction to a second elevation and the scan will be repeated to generate a second z axis elevation on the depth response curve for each location on the sample S under the scan. This can then be repeated any number of times desired for the interpolation method used (typically at least two or three scans, although more are certainly contemplated and will improve accuracy). The multiple locations on the depth response curve are then interpolated for each pixel to generate a map of the surface height under the scan. The elevation of the sample surface S is now known. [0071]
  • In the case of significant three dimensional protrusions (such as bumps), this process may be repeated at the approximate elevation of the outermost portion of the protrusions just as it was performed above at the approximate elevation of the sample surface S. The bump elevations will then be known, and the bump heights are then calculated as the difference between the surface elevation and the bump elevation. [0072]
  • It is important to understand that the size of the “in focus” region is determined by the telecentric imaging lens. If this lens has a large numerical aperture (approximately the ratio of the aperture diameter to twice the focal length), the focus range will be small; conversely, if the lens has a low numerical aperture, the focus range will be large. The best in focus range depends on the elevation range that needs to be measured. [0073]
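As a hedged rule of thumb only (a standard scalar-diffraction scaling, not a figure from the disclosure), the in-focus range shrinks roughly with the square of the numerical aperture:

    WAVELENGTH_UM = 0.55   # assumed mid-visible wavelength
    NA = 0.23              # object-side numerical aperture quoted earlier in this description

    in_focus_range_um = WAVELENGTH_UM / NA ** 2   # classical ~lambda / NA^2 depth-of-focus scaling
    print(f"in-focus range on the order of {in_focus_range_um:.0f} um")   # roughly 10 um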
  • The invention, in at least one embodiment, is also capable of adjusting the depth response. This is desirable since with larger bumps a broader depth response is desirable, while with smaller bumps a thinner or narrower depth response is desired. In effect, the system degrades the high numerical aperture to look at larger or taller bumps, which assists in maintaining speed. Conversely, to view smaller or thinner bumps it is desirable to provide a higher numerical aperture. This broadening of the depth response is accomplished either by providing a baffle to adjust the aperture, or by providing or increasing the tilt of the sensor. [0074]
  • A significantly different alternative involves imaging multiple heights at each point rather than making multiple passes. This is accomplished by using multi-line line scan cameras where each line or sensor looks at a different elevation. For example, a four line, line-scan camera system would involve [0075] line 1 reading elevation 0, line 2 reading elevation plus 20 microns, line 3 reading elevation plus 40 microns, and line 4 reading elevation plus 60 microns. All four data points in this example are gathered simultaneously. It is also contemplated and recognized that a CMOS imager would work successfully. Alternatively, multiple TDI sensors stacked close together could also be used. It is necessary to introduce a variable amount of optical path difference between the scan lines, either by shifting the aperture array or by introducing a difference in compensator thickness in a medium such as glass between the aperture arrays, which lie in a plane, and the end of the object imager closest to the aperture array. The result is multiple separate planes that are conjugate to separate z heights at the wafer or sample surface S. In this case, where multiple heights are imaged in a given pass, the surface height calculation and the bump height calculation each involve only one pass.
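A minimal sketch of the line-to-elevation bookkeeping in this multi-line mode, using the 20 micron spacing from the example above; the array shapes, the 1024-pixel line length and the peak-picking step are illustrative assumptions:

    import numpy as np

    LINE_SPACING_UM = 20.0
    NUM_LINES = 4
    line_elevations_um = np.arange(NUM_LINES) * LINE_SPACING_UM   # [0, 20, 40, 60] microns

    # One simultaneous read: an intensity per line for each of 1024 pixels along the scan line
    frame = np.random.rand(NUM_LINES, 1024)                       # hypothetical sensor data

    # The elevation of the strongest response per pixel gives a crude single-pass height estimate;
    # a Gaussian fit over the per-line samples, as described earlier, would refine it.
    best_line = frame.argmax(axis=0)
    coarse_height_um = line_elevations_um[best_line]
    print(coarse_height_um[:5])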
  • In yet another alternative embodiment, two modes of speed are provided. A precise mode is provided where scanning occurs as to every die in either or both surface elevation determination and bump elevation determination. A faster mode is provided where scanning as to wafer surface elevation is performed only in one or a few places along the wafer and interpolation is used to calculate the surface over the remaining surface including at the die. [0076]
  • Some alternative light sources include an illuminator with a filament designed to provide a uniformly filled area that is internally imaged first into a numerical aperture stop and then reimaged into the telecentric pupil of the object imager, whereby the angular spectrum from the filament is mapped first into a field stop inside the illuminator and then reimaged to a field located in the intermediate focus or IFA of the object imager at the aperture array. Another light source is an illuminator with a filament designed to provide a uniformly filled area that is imaged into the telecentric pupil of the imaging system (object imager), whereby the angular spectrum from the filament is mapped into the field located in the intermediate focus or IFA of the imaging system at the aperture array and whereby the light outside the useful AΩ product of the imaging system is eliminated via a series of baffles. Yet another light source is an illuminator with an array of bright monochromatic or quasi-monochromatic sources instead of a filament. Yet a further illuminator is a bright monochromatic or quasi-monochromatic source that is collimated and directed into the field located in the intermediate focus or IFA of the imaging system at the aperture array, whereby an array of lenslets is preferably employed to create an angular spectrum at each aperture and whereby the source is preferably, but optionally, apodized. [0077]
  • Accordingly, the invention as described above and understood by one of skill in the art is simplified, provides an effective, safe, inexpensive, and efficient device, system and process which achieves all the enumerated objectives, provides for eliminating difficulties encountered with prior devices, systems and processes, and solves problems and obtains new results in the art. [0078]
  • In the foregoing description, certain terms have been used for brevity, clearness and understanding; but no unnecessary limitations are to be implied therefrom beyond the requirement of the prior art, because such terms are used for descriptive purposes and are intended to be broadly construed. [0079]
  • Moreover, the invention's description and illustration is by way of example, and the invention's scope is not limited to the exact details shown or described. [0080]
  • Having now described the features, discoveries and principles of the invention, the manner in which it is constructed and used, the characteristics of the construction, and the advantageous, new and useful results obtained; the new and useful structures, devices, elements, arrangements, parts and combinations, are set forth in the appended claims. [0081]

Claims (9)

What is claimed is:
1. An inspection device including:
a light source;
a pellicle beamsplitter for receiving light from the light source and redirecting said light;
an aperture array for receiving light from the pellicle beamsplitter where the aperture array includes multiple arrays; and
an imaging system including an object imager including a plurality of lenses, a camera reimager including a plurality of lenses, and a camera for collecting focused light.
2. The inspection device of claim 1 wherein the multiple arrays include multiple arrays of pinholes.
3. The inspection device of claim 2 wherein the multiple arrays include multiple one dimensional arrays of pinholes.
4. The inspection device of claim 3 wherein each one dimensional array in the multiple one dimensional arrays of pinholes is conjugate to a different height from a surface to be inspected.
5. The inspection device of claim 4 wherein each one dimensional array in the multiple one dimensional arrays of pinholes is conjugate to a different height from a surface to be inspected.
6. The inspection device of claim 5 wherein the camera is one of a multi-sensor line scan camera, a multi-sensor TDI line scan camera, and a CMOS area scan camera.
7. The inspection device of claim 2 wherein the multiple arrays include multiple two dimensional arrays of pinholes.
8. The inspection device of claim 7 wherein each two dimensional array in the multiple two dimensional arrays of pinholes is conjugate to a different height from a surface to be inspected.
9. The inspection device of claim 8 wherein the camera is one of a multi-sensor line scan camera, a multi-sensor TDI line scan camera, and a CMOS area scan camera.
US10/196,335 2001-07-16 2002-07-16 Confocal 3D inspection system and process Abandoned US20030025918A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/196,335 US20030025918A1 (en) 2001-07-16 2002-07-16 Confocal 3D inspection system and process
US10/696,871 US20040102043A1 (en) 2001-07-16 2003-10-30 Confocal 3D inspection system and process

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US30572901P 2001-07-16 2001-07-16
US30573001P 2001-07-16 2001-07-16
US10/196,335 US20030025918A1 (en) 2001-07-16 2002-07-16 Confocal 3D inspection system and process

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/696,871 Continuation US20040102043A1 (en) 2001-07-16 2003-10-30 Confocal 3D inspection system and process

Publications (1)

Publication Number Publication Date
US20030025918A1 true US20030025918A1 (en) 2003-02-06

Family

ID=26974751

Family Applications (4)

Application Number Title Priority Date Filing Date
US10/196,335 Abandoned US20030025918A1 (en) 2001-07-16 2002-07-16 Confocal 3D inspection system and process
US10/196,349 Abandoned US20030030794A1 (en) 2001-07-16 2002-07-16 Confocal 3D inspection system and process
US10/196,741 Expired - Lifetime US6773935B2 (en) 2001-07-16 2002-07-16 Confocal 3D inspection system and process
US10/696,871 Abandoned US20040102043A1 (en) 2001-07-16 2003-10-30 Confocal 3D inspection system and process

Family Applications After (3)

Application Number Title Priority Date Filing Date
US10/196,349 Abandoned US20030030794A1 (en) 2001-07-16 2002-07-16 Confocal 3D inspection system and process
US10/196,741 Expired - Lifetime US6773935B2 (en) 2001-07-16 2002-07-16 Confocal 3D inspection system and process
US10/696,871 Abandoned US20040102043A1 (en) 2001-07-16 2003-10-30 Confocal 3D inspection system and process

Country Status (4)

Country Link
US (4) US20030025918A1 (en)
EP (1) EP1417473A4 (en)
JP (1) JP2004538454A (en)
WO (1) WO2003008940A1 (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6731383B2 (en) 2000-09-12 2004-05-04 August Technology Corp. Confocal 3D inspection system and process
US6870609B2 (en) 2001-02-09 2005-03-22 August Technology Corp. Confocal 3D inspection system and process
EP1417473A4 (en) * 2001-07-16 2006-05-10 August Technology Corp Confocal 3d inspection system and process
US6882415B1 (en) 2001-07-16 2005-04-19 August Technology Corp. Confocal 3D inspection system and process
US6970287B1 (en) 2001-07-16 2005-11-29 August Technology Corp. Confocal 3D inspection system and process
US7525659B2 (en) 2003-01-15 2009-04-28 Negevtech Ltd. System for detection of water defects
US7663734B2 (en) * 2003-04-11 2010-02-16 Tadahiro Ohmi Pattern writing system and pattern writing method
JP2007504445A (en) * 2003-08-26 2007-03-01 ブルーシフト・バイオテクノロジーズ・インコーポレーテッド Time-dependent fluorescence measurement
NL1024404C2 (en) * 2003-09-30 2005-03-31 Univ Delft Tech Optical microscope and method for forming an optical image.
US7551272B2 (en) * 2005-11-09 2009-06-23 Aceris 3D Inspection Inc. Method and an apparatus for simultaneous 2D and 3D optical inspection and acquisition of optical inspection data of an object
EP1801110A1 (en) 2005-12-22 2007-06-27 KRKA, tovarna zdravil, d.d., Novo mesto Esomeprazole arginine salt
AU2007215302A1 (en) * 2006-02-10 2007-08-23 Hologic, Inc. Method and apparatus and computer program product for collecting digital image data from microscope media-based specimens
US7764366B2 (en) * 2006-07-11 2010-07-27 Besi North America, Inc. Robotic die sorter with optical inspection system
US7535560B2 (en) * 2007-02-26 2009-05-19 Aceris 3D Inspection Inc. Method and system for the inspection of integrated circuit devices having leads
DE102007026900A1 (en) * 2007-06-11 2008-12-18 Siemens Ag Evaluate the surface structure of devices using different presentation angles
DE102007030985B4 (en) * 2007-07-04 2009-04-09 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Image sensor, method of operating an image sensor and computer program
GB0814039D0 (en) * 2008-07-31 2008-09-10 Imp Innovations Ltd Optical arrangement for oblique plane microscopy
KR101513602B1 (en) * 2009-02-11 2015-04-22 삼성전자주식회사 Method of scanning biochip
US9400414B2 (en) * 2012-11-19 2016-07-26 Raytheon Company Methods and apparatus for imaging without retro-reflection using a tilted image plane and structured relay optic
DE102013105586B4 (en) * 2013-05-30 2023-10-12 Carl Zeiss Ag Device for imaging a sample
CN103411556B (en) * 2013-08-15 2015-12-09 哈尔滨工业大学 The confocal annular microstructure measurement device of standard based on linear array angular spectrum illumination and method
KR102387134B1 (en) * 2014-05-07 2022-04-15 일렉트로 사이언티픽 인더스트리즈, 아이엔씨 Five axis optical inspection system
US9885671B2 (en) 2014-06-09 2018-02-06 Kla-Tencor Corporation Miniaturized imaging apparatus for wafer edge
US9645097B2 (en) 2014-06-20 2017-05-09 Kla-Tencor Corporation In-line wafer edge inspection, wafer pre-alignment, and wafer cleaning
GB201601960D0 (en) * 2016-02-03 2016-03-16 Glaxosmithkline Biolog Sa Novel device
DE102016103182B4 (en) * 2016-02-23 2018-04-12 Leica Microsystems Cms Gmbh Light sheet microscope and method for light microscopic imaging of a sample
US10739276B2 (en) 2017-11-03 2020-08-11 Kla-Tencor Corporation Minimizing filed size to reduce unwanted stray light

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3606541A (en) 1968-09-28 1971-09-20 Mitsubishi Electric Corp Contact-less probe system
JPS53135653A (en) * 1977-05-01 1978-11-27 Canon Inc Photoelectric detecting optical device
USRE32660E (en) * 1982-04-19 1988-05-03 Siscan Systems, Inc. Confocal optical imaging system with improved signal-to-noise ratio
US4925298A (en) * 1988-08-05 1990-05-15 Paolo Dobrilla Etch pit density measuring method
GB9102903D0 (en) 1991-02-12 1991-03-27 Oxford Sensor Tech An optical sensor
JP3385432B2 (en) * 1993-09-29 2003-03-10 株式会社ニュークリエイション Inspection device
JP3404607B2 (en) * 1993-09-30 2003-05-12 株式会社小松製作所 Confocal optics
JP3613402B2 (en) * 1993-10-28 2005-01-26 テンカー・インストルメンツ Method and apparatus for imaging fine line width structure using optical microscope
JP3404134B2 (en) * 1994-06-21 2003-05-06 株式会社ニュークリエイション Inspection device
US5568574A (en) * 1995-06-12 1996-10-22 University Of Southern California Modulator-based photonic chip-to-chip interconnections for dense three-dimensional multichip module integration
JP3467919B2 (en) * 1995-07-17 2003-11-17 株式会社ニコン Two-sided telecentric optical system
JP3306858B2 (en) * 1995-11-02 2002-07-24 株式会社高岳製作所 3D shape measuring device
JPH09275126A (en) * 1996-04-02 1997-10-21 Komatsu Ltd Appearance inspecting equipment and height measuring equipment of wafer bump
JPH09304700A (en) * 1996-05-14 1997-11-28 Nikon Corp Optical scanning type microscope
US6677936B2 (en) * 1996-10-31 2004-01-13 Kopin Corporation Color display system for a camera
JP3509088B2 (en) * 1997-02-25 2004-03-22 株式会社高岳製作所 Optical device for three-dimensional shape measurement
US6534308B1 (en) * 1997-03-27 2003-03-18 Oncosis, Llc Method and apparatus for selectively targeting specific cells within a mixed cell population
JPH11211439A (en) * 1998-01-22 1999-08-06 Takaoka Electric Mfg Co Ltd Surface form measuring device
US6366357B1 (en) * 1998-03-05 2002-04-02 General Scanning, Inc. Method and system for high speed measuring of microscopic targets
JP4229498B2 (en) * 1998-10-02 2009-02-25 オリンパス株式会社 Relay optical system used in confocal microscopes and confocal microscopes
JP2000275530A (en) * 1999-03-24 2000-10-06 Olympus Optical Co Ltd Confocal microscope
JP3544892B2 (en) * 1999-05-12 2004-07-21 株式会社東京精密 Appearance inspection method and apparatus
US6483327B1 (en) * 1999-09-30 2002-11-19 Advanced Micro Devices, Inc. Quadrant avalanche photodiode time-resolved detection
JP5087192B2 (en) * 1999-11-30 2012-11-28 インテレクソン コーポレイション Method and apparatus for selectively aiming a specific cell in a cell group
TW498152B (en) * 2000-09-11 2002-08-11 Olympus Optical Co Confocal microscope
US6731383B2 (en) * 2000-09-12 2004-05-04 August Technology Corp. Confocal 3D inspection system and process
EP1417473A4 (en) 2001-07-16 2006-05-10 August Technology Corp Confocal 3d inspection system and process
US6882415B1 (en) * 2001-07-16 2005-04-19 August Technology Corp. Confocal 3D inspection system and process
US6970287B1 (en) * 2001-07-16 2005-11-29 August Technology Corp. Confocal 3D inspection system and process
US20030103212A1 (en) * 2001-08-03 2003-06-05 Volker Westphal Real-time imaging system and method

Patent Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US32660A (en) * 1861-06-25 Excavating-machine
US4930896A (en) * 1985-03-27 1990-06-05 Olympus Optical Co., Ltd. Surface structure measuring apparatus
US4719341A (en) * 1986-10-01 1988-01-12 Mechanical Technology Incorporated Fiber optic displacement sensor with oscillating optical path length
US4802748A (en) * 1987-12-14 1989-02-07 Tracor Northern, Inc. Confocal tandem scanning reflected light microscope
US5072128A (en) * 1989-07-26 1991-12-10 Nikon Corporation Defect inspecting apparatus using multiple color light to detect defects
US5073018A (en) * 1989-10-04 1991-12-17 The Board Of Trustees Of The Leland Stanford Junior University Correlation microscope
US4965442A (en) * 1989-11-07 1990-10-23 Massachusetts Institute Of Technology System for ascertaining direction of blur in a range-from-defocus camera
US5067805A (en) * 1990-02-27 1991-11-26 Prometrix Corporation Confocal scanning optical microscope
US5083220A (en) * 1990-03-22 1992-01-21 Tandem Scanning Corporation Scanning disks for use in tandem scanning reflected light microscopes and other optical systems
US5329358A (en) * 1991-02-06 1994-07-12 U.S. Philips Corporation Device for optically measuring the height of a surface
US5248475A (en) * 1991-10-24 1993-09-28 Derafe, Ltd. Methods for alloy migration sintering
US5448359A (en) * 1991-12-04 1995-09-05 Siemens Aktiengesellschaft Optical distance sensor
US5248876A (en) * 1992-04-21 1993-09-28 International Business Machines Corporation Tandem linear scanning confocal imaging system with focal volumes at different heights
US5386317A (en) * 1992-05-13 1995-01-31 Prometrix Corporation Method and apparatus for imaging dense linewidth features using an optical microscope
US5408294A (en) * 1993-05-28 1995-04-18 Image Technology International, Inc. 3D photographic printer with direct key-subject alignment
US5594242A (en) * 1993-09-09 1997-01-14 Kabushiki Kaisha Topcon Position detecting apparatus which uses a continuously moving light spot
US5991040A (en) * 1995-06-30 1999-11-23 Siemens Aktiengesellschaft Optical distance sensor
US6108090A (en) * 1995-09-29 2000-08-22 Takaoka Electric Mfg. Co., Ltd. Three-dimensional shape measuring apparatus
US5737084A (en) * 1995-09-29 1998-04-07 Takaoka Electric Mtg. Co., Ltd. Three-dimensional shape measuring apparatus
US5696591A (en) * 1996-01-05 1997-12-09 Eastman Kodak Company Apparatus and method for detecting longitudinally oriented flaws in a moving web
US5734497A (en) * 1996-01-31 1998-03-31 Nidek Company, Ltd. Confocal scanning microscope
US6545265B1 (en) * 1998-05-30 2003-04-08 Carl Zeiss Jena Gmbh System and method for the microscopic generation of object images
US6224276B1 (en) * 1998-06-05 2001-05-01 Sony Corporation Ink ribbon cartridge including ink ribbon damaging means and rotation direction restricting means
US6288382B1 (en) * 1998-12-17 2001-09-11 Takaoka Electric Mfg. Co., Ltd. Micro-scanning multislit confocal image acquisition apparatus
US6426835B1 (en) * 1999-03-23 2002-07-30 Olympus Optical Co., Ltd. Confocal microscope
US20020145734A1 (en) * 2001-02-09 2002-10-10 Cory Watkins Confocal 3D inspection system and process
US20020148984A1 (en) * 2001-02-09 2002-10-17 Cory Watkins Confocal 3D inspection system and process
US20030052346A1 (en) * 2001-02-09 2003-03-20 Cory Watkins Confocal 3D inspection system and process

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6753528B1 (en) * 2002-04-18 2004-06-22 Kla-Tencor Technologies Corporation System for MEMS inspection and characterization
US20040165759A1 (en) * 2003-02-26 2004-08-26 Leo Baldwin Coaxial narrow angle dark field lighting
WO2004077336A2 (en) * 2003-02-26 2004-09-10 Electro Scientific Industries, Inc. Coaxial narrow angle dark field lighting
WO2004077336A3 (en) * 2003-02-26 2004-12-29 Electro Scient Ind Inc Coaxial narrow angle dark field lighting
US6870949B2 (en) * 2003-02-26 2005-03-22 Electro Scientific Industries Coaxial narrow angle dark field lighting
GB2413859A (en) * 2003-02-26 2005-11-09 Electro Scient Ind Inc Coaxial narrow angle data field lighting
GB2413859B (en) * 2003-02-26 2006-06-28 Electro Scient Ind Inc Coaxial narrow angle dark field lighting
KR100746114B1 (en) * 2003-02-26 2007-08-06 일렉트로 싸이언티픽 인더스트리이즈 인코포레이티드 Imaging system for imaging a defect on a planar specular object
US20110026141A1 (en) * 2009-07-29 2011-02-03 Geoffrey Louis Barrows Low Profile Camera and Vision Sensor
US20160305871A1 (en) * 2015-04-14 2016-10-20 Cognex Corporation System and method for acquiring images of surface texture
US10724947B2 (en) * 2015-04-14 2020-07-28 Cognex Corporation System and method for acquiring images of surface texture

Also Published As

Publication number Publication date
US20030030794A1 (en) 2003-02-13
WO2003008940A9 (en) 2004-06-24
US20030027367A1 (en) 2003-02-06
US20040102043A1 (en) 2004-05-27
WO2003008940A1 (en) 2003-01-30
EP1417473A4 (en) 2006-05-10
US6773935B2 (en) 2004-08-10
EP1417473A1 (en) 2004-05-12
JP2004538454A (en) 2004-12-24

Similar Documents

Publication Publication Date Title
US6773935B2 (en) Confocal 3D inspection system and process
US20020145734A1 (en) Confocal 3D inspection system and process
US6731383B2 (en) Confocal 3D inspection system and process
US6288780B1 (en) High throughput brightfield/darkfield wafer inspection system using advanced optical techniques
US7554655B2 (en) High throughput brightfield/darkfield water inspection system using advanced optical techniques
JP3697279B2 (en) Thin film thickness measuring device
TWI536012B (en) Dark field inspection system and method of providing the same
US20070258092A1 (en) Optical measurement device and method
JPH06250092A (en) Equipment and method for optical inspection of material body
US11204330B1 (en) Systems and methods for inspection of a specimen
EP0563221A1 (en) Method and apparatus for automatic optical inspection
US5963326A (en) Ellipsometer
US20100296096A1 (en) Imaging optical inspection device with a pinhole camera
US20090091751A1 (en) Multichip ccd camera inspection system
JP2008249386A (en) Defect inspection device and defect inspection method
US6882415B1 (en) Confocal 3D inspection system and process
TW201945688A (en) Multi-spot analysis system with multiple optical probes
WO2021197207A1 (en) Apparatus for surface profile measurement
US20080043313A1 (en) Spatial filter, a system and method for collecting light from an object
US6870609B2 (en) Confocal 3D inspection system and process
US6970287B1 (en) Confocal 3D inspection system and process
TW202204848A (en) High sensitivity image-based reflectometry
WO2021024319A1 (en) Defect inspection device and defect inspection method
US20020148984A1 (en) Confocal 3D inspection system and process
Boher et al. Light-scattered measurements using Fourier optics: a new tool for surface characterization

Legal Events

Date Code Title Description
AS Assignment

Owner name: AUGUST TECHNOLOGY CORP., MINNESOTA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WATKINS, CORY;VAUGHNN, DAVID;BLAIR, ALAN;REEL/FRAME:013411/0547;SIGNING DATES FROM 20021007 TO 20021010

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION