APPARATUS AND METHOD FOR SUPER-RESOLUTION OPTICAL MICROSCOPY

TECHNICAL FIELD

This invention relates generally to the fields of imaging, metrology and optical microscopy. More particularly, the invention relates to a method for obtaining a super-resolution image of an object by holding a predetermined membrane mask near or in contact with the object, using a conventional microscope and photodetector to receive the optical images jointly formed by the object and the said mask, and then constructing the image of the object via an image processing unit.

BACKGROUND

Optical microscopy uses a set of lenses to form a magnified image of a tiny object. By 1873, Abbe had developed his fundamental work on diffraction and microscopic imaging, which showed that the lateral resolution is limited by diffraction at the wavelength in use. Lens imperfections further limit resolution. Generally speaking, the lateral resolution (the minimum feature size that can be observed) of microscopy is half the wavelength being used (λ/2). Therefore, to increase resolution, the wavelengths used need to go deeper into the ultraviolet (UV), deep ultraviolet (DUV) or even X-ray regime. This is problematic due to the high absorption of UV or DUV by most materials. Moreover, DUV and X-ray radiation are highly ionizing and are therefore destructive to many materials and structures, e.g., living cells or biomolecules. The diffraction limit to resolution in optical microscopy is not fundamental, but rather arises from the fact that the detection element (e.g., lens) is typically many wavelengths away from the sample of interest. Thus, by laterally scanning a light source or detector in close proximity to the sample, one can generate an image at a resolution that depends only on the probe size and the probe-to-sample distance, each of which can be made much smaller than the wavelength, so a resolution much higher than the diffraction limit can be realized. See E. Betzig and J. Trautman (1992), Science, vol.
257 (5067), p.189-195. The first research on building such a near-field optical microscope came from Pohl et al (1982), European Patent application No. 0112401, filed December 27, 1982; US Patent 4,604,520, filed December 20, 1983. Since then, near-field microscopy has developed steadily. The first functional Near-field Scanning Optical Microscope (NSOM) came out in the 1990s, see E. Monson et al (1995), Ultramicroscopy, vol. 57, p.257-262. Today NSOMs are commercially available from a variety of vendors. There are many different implementations of NSOM, but the fundamental mechanism is the same. All NSOM systems use a sub-wavelength sized probe, under which a sample is scanned. The optical near-field interaction between the probe and the sample is detected in the far field for each
scanning position and then an image is constructed. A typical NSOM system can achieve lateral resolution down to a few tens of nanometers within a scan area of 50 μm². The main disadvantage of NSOM, however, is that it is a serial device, i.e., it takes information from each point or pixel while scanning the area of interest. This serial operation is very slow, which prevents real-time, whole-field imaging. In contrast, conventional optical microscopy, which is a parallel device, obtains the whole image of interest instantaneously. Conventional optical microscopy also captures successive images in real time, which means it has very high temporal resolution. Other uses of evanescent fields for measuring and visualizing surface topographic features with high resolution are known. Descriptions of evanescent field usage are contained, for example, in N. Harrick (1962), J. Appl. Phys., vol. 33, p.321; C. McCutchen (1964), The Review of Scientific Instruments, vol. 35, p.1340-45; J. Guerra (1988), in Proceedings from Surface Measurement and Characterization Meeting, Hamburg, SPIE vol. 1009, p.254-62; and U.S. Patent No. 5,349,443, entitled "Flexible transducers for photon tunneling microscopes and methods for making and using same", issued Sep. 20, 1994 to J. M. Guerra. U.S. Patent No. 4,681,451, entitled "Optical proximity imaging method and apparatus", issued Jul. 21, 1987 to J. Guerra and W. Plummer, discloses a real-time optical proximity imaging method in which the proximity of a glass surface to another surface is determined by frustration of total internal reflection of light energy from the glass surface to develop a light area pattern, calibrating gray scale densities of the pattern so that levels of density correspond to increments of surface proximity, and displaying a facsimile of the gray scale image to indicate variations in surface proximity. This is a whole-field reflected evanescent-light microscopy, which uses the exponentially varying amplitude of the evanescent field in the vertical direction to sense very small surface height variations.
The difficulty in detecting and measuring small changes in bright scenes limits the observable topographic depths to about 3/4 of the illuminating wavelength. Furthermore, the illumination and imaging optics are coupled because the objective element also serves as the condenser. This limits the use of such instruments to the availability of suitable commercial objectives, magnifications, fields of view, and numerical aperture. Devices in which evanescent light from transilluminated samples is scattered into objective pupils are described in: G. J. Stoney (1896), "Microscopic Vision", Phil. Mag. vol. 332, p.348-349; E. Ambrose (1956), "A Surface Contact Microscope for the Study of Cell Movements", Nature, vol. 178; E. Ambrose (1961), "The Movements of Fibrocytes", Experimental Cell Research, Suppl. 8, p.54-73; P. Temple (1981), "Total internal reflection
microscopy: a surface inspection technique", Applied Optics, vol. 20, No. 15. Stoney, Ambrose and Temple disclose optical evanescent light field microscopes in which the light that enters the objective pupil is evanescent field light that has been scattered from a sample surface. However, in all of the microscopes described in the above references, the sample is transilluminated with the requirement that the incident illumination be beyond the critical angle, so that the evanescent field from the sample surface is received. US Patent No. 5,327,223, entitled "Method and apparatus for generating high resolution optical images", awarded to H. Korth on July 5, 1994, discloses a microscope interferometer that scans an object surface through a membrane mask with a multitude of pinholes. Light reflected back through the pinholes interferes with the dark-field image of the mask surface. The mask-to-object separation can be stabilized by an air flow through the pinholes. This apparatus overcomes the diffraction limit for the lateral resolution. However, the resolution of this apparatus is limited by the pinhole size, which cannot be made much smaller than the illuminating wavelength; otherwise the light energy passing through the pinholes would be extremely low. Moreover, the apparatus only works for imaging the surface of an object, and does not allow for imaging transparent objects. According to Fourier optics, when an electromagnetic wave arrives at an object, the wave is modulated by this object and becomes a set of plane waves. This is called "diffraction", and the plane waves are called "diffracted waves". The Fourier transform of the pattern of the object is therefore carried by the diffracted plane waves. The diffracted waves traveling in different directions correspond to the Fourier transform at different spatial frequencies.
If all the diffracted waves, i.e., the whole Fourier transform of the object, are received by a detection element (e.g., lens), then the exact pattern of the object can be viewed by conducting an inverse Fourier transform of the received wave. However, if the spatial frequency is higher than 1/λ, where λ is the wavelength, the corresponding information is carried by an evanescent wave. An evanescent wave attenuates exponentially during propagation, and thus cannot be received by a detection element that is placed far from the object. As a consequence, the fine details of the object are not captured by a conventional optical microscope. Generally speaking, the resolution of an optical microscope is limited to no better than λ/2. If a second object is placed within one or a few wavelengths from the object of interest, then the electromagnetic waves diffracted by the object of interest will be diffracted again by the second object before the evanescent wave attenuates too much. The diffraction by the second object will convert part of the evanescent wave, which carries high spatial frequency information
of the first object, to propagating wave, which can be received by a detection element (e.g., a microscope) placed far away from the two objects. As a consequence, the image captured by this detection element contains high spatial frequency information (i.e., fine details) of the first object. If the second object is known, then the high spatial frequency information of the first object can be retrieved by varying or scanning the second object and processing the received waves. Based on the above principle, the present invention presents a method for high speed, whole-field imaging of a large object having many small features (e.g., a photomask with an integrated circuit layout, or a substrate holding many cells or biomolecules) by varying or scanning predetermined templates near the object, obtaining their images using a conventional optical microscope, and then reconstructing the image of the object with an image processing unit. This invention can achieve a resolution much higher than the diffraction limit.

SUMMARY OF THE INVENTION

Accordingly, it is one of the objects of the invention to address the above-described need in the art by providing a novel method for obtaining a high-resolution, whole-field image of an object by scanning or varying a membrane mask having predetermined patterns near or in contact with the object. The illuminating wave passing through the object and the said membrane mask is collected by a microscope. An image processing unit is then used to construct the image of the object. It is another object of the invention to provide a method for obtaining a high-resolution, whole-field image of an object by scanning or varying a membrane mask having predetermined patterns near or in contact with this object. The illuminating wave is reflected from the object, passes through the said mask, and is collected by a microscope. An image processing unit is then used to construct the image of the object.
It is still another object of the invention to provide a method for obtaining a high-resolution, whole-field image of an object by scanning or varying a membrane mask having predetermined patterns beneath this object. The illuminating wave passes through the object, is then reflected from the said membrane mask, and then passes through the object again. The returned wave is collected by a microscope. An image processing unit is then used to construct the image of the object. Additional objects, advantages and novel features of the invention will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following, or may be learned by practice of the invention. In one embodiment, then, the present invention provides a method for generating a high-resolution, whole-field image of an object by providing a membrane mask having predetermined patterns. The object and the said mask are very close or in contact so that the evanescent wave
diffracted by the object can arrive at the said mask; in other words, the separation between the object and the mask is less than several wavelengths of the illuminating wave. A coherent or partially coherent wave illuminates the object and the mask. In the illumination setup, either the object or the mask can receive the illuminating wave first. The wave passing through the object and the mask is collected by a detection element (e.g., lens and photodetector). Then the membrane mask or the object is scanned, and the above illumination and wave collection process is repeated. The scanning, illuminating and wave collecting process may be repeated many times. All collected waves are then processed by an image processing unit, from which the high-resolution image of the object is constructed. Finally the image is displayed. Another embodiment of the invention provides an apparatus for imaging an object, comprising: a membrane mask having predetermined patterns, means for holding the said membrane mask near or in contact with the object, means for optionally filtering the illuminating wave with a spatial filter on the projection pupil plane, means for scanning the said membrane mask or the object, means for collecting the waves passing through the object and the mask, and means for processing the collected waves and constructing the image of the object. Still another embodiment of the invention provides a method for generating a high-resolution, whole-field image of the surface of an object by providing a membrane mask having predetermined patterns. The object and the said mask are very close or in contact so that the evanescent wave diffracted by the object can arrive at the said mask, i.e., the separation between the object and the said mask is less than several wavelengths of the wave being used. A coherent or partially coherent wave illuminates the object through the said mask.
The illuminating wave reflected back from the object goes through the mask again and is collected by a detection element (e.g., lens and photodetector). Then the said mask or the object is scanned, and the above illumination and wave collection process is repeated. The scanning, illuminating and wave collecting process may be repeated many times. All collected waves are then processed by an image processing unit, from which the high-resolution image of the object is constructed. Finally the image is displayed. Yet another embodiment of the invention provides an apparatus for imaging an object, comprising: a membrane mask having predetermined patterns, means for holding the membrane mask near or in contact with an object, means for scanning the said membrane mask or the object, means for collecting the illuminating wave that is reflected from the object and then passes through the said mask, and means for processing the collected waves and constructing the image of the object.
Still another embodiment of the invention provides a method for generating a high-resolution, whole-field image of an object by providing membrane mask(s) having predetermined patterns. The object and the said mask are very close or in contact so that the evanescent wave diffracted by the object can arrive at the mask, i.e., the separation between the object and the mask is less than several wavelengths of the wave being used. A coherent or partially coherent wave illuminates the object and the mask. In the illumination setup, either the object or the mask can receive the wave first. The wave passing through the object and the mask is collected by a detection element (e.g., lens and photodetector). Then the mask is varied, which can be realized by using a tunable mask or replacing the current mask by another mask having different patterns, and the above illumination and wave collection process is repeated. The varying, illuminating and wave collecting process may be repeated many times. All collected waves are then processed by an image processing unit, from which the high-resolution image of the object is constructed. Finally the image is displayed. Yet another embodiment of the invention provides an apparatus for imaging an object, comprising: membrane mask(s) having predetermined patterns, means for holding the membrane mask near or in contact with the object, means for varying the membrane mask, means for collecting the waves passing through the object and the mask, and means for processing the collected waves and constructing the image of the object. Still another embodiment of the invention provides a method for generating a high-resolution, whole-field image of an object by providing membrane mask(s) having predetermined patterns.
The object and the said mask are very close or in contact so that the evanescent wave diffracted by the object can arrive at the mask; in other words, the separation between the object and the said mask is less than several illuminating wavelengths. A coherent or partially coherent wave illuminates the object through the said mask. The wave reflected back from the object goes through the mask again and is collected by a detection element (e.g., lens and photodetector). Then the said mask is varied, which can be realized by using a tunable mask or replacing the current mask by another mask, and the above illumination and wave collection process is repeated. The varying, illuminating and wave collecting process may be repeated many times. All collected waves are then processed by an image processing unit, from which the high-resolution image of the object is constructed. Finally the image is displayed. Yet another embodiment of the invention provides an apparatus for imaging an object, comprising: membrane mask(s) having predetermined patterns, means for holding the said membrane mask close to the surface of an object, means for varying the membrane mask(s), means
for collecting the wave that is reflected from the object and then passes through the mask, and means for processing the collected waves and constructing the image of the object. Still another embodiment of the invention provides a method for generating a high-resolution, whole-field image of an object by providing membrane mask(s) having predetermined patterns. The object and the said mask are very close or in contact so that the evanescent wave diffracted by the object can arrive at the mask; in other words, the separation between the object and the mask is less than several illuminating wavelengths. The object is placed on or above the mask. A coherent or partially coherent wave passes through the object and then arrives at the mask. The wave reflected back from the mask goes through the object again and is collected by a detection element (e.g., lens and photodetector). Then the said mask is scanned or varied, which can be realized by using a tunable mask or replacing the current mask by another one, and the above illumination and wave collection process is repeated. The varying, illuminating and wave collecting process may be repeated many times. All collected waves are then processed by an image processing unit, from which the high-resolution image of the object is constructed. Finally the image is displayed. Yet another embodiment of the invention provides an apparatus for imaging an object, comprising: membrane mask(s) having predetermined patterns, means for holding the said membrane mask near or in contact with the object, means for varying the membrane mask(s), means for collecting the wave that passes through the object, is reflected from the membrane mask and passes through the object again, and means for processing the collected waves and constructing the image of the object.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is an illustration of the principle of a conventional optical microscope. Fig.
2 is an illustration of the principle of one of the embodiments of the present invention used for imaging transparent objects, in which a membrane mask 10 is held near or in contact with the object of interest 9 and a microscope 14 is used to view the object and the mask. Fig. 3 is an illustration of how to hold the membrane mask 10 and the object 9 in the present invention. Fig. 4 is an illustration of optionally filtering the electromagnetic wave on the pupil plane 12 in the present invention. A possible pupil plane filter is shown. Fig. 5 is an illustration of the physical principle of the present invention.
Fig. 6 is an illustration of how to divide the Fourier transform of the object into many regions, each region having the size of the transparent region of the pupil plane filter shown in Fig. 4. Fig. 7 is an illustration of the working flow of the present invention. Fig. 8 is an illustration of the principle of one of the embodiments of the present invention used for imaging an opaque object or the surface of an object 23 through a transparent membrane mask 22. The photo detection system 25 includes the lenses 11 and 13, the pupil plane 12 and the photodetectors 15 and 16 in Fig. 2. The methods of holding the mask/object and optionally filtering the wave are illustrated in Fig. 3 and Fig. 4. Fig. 9 is an illustration of the principle of one of the embodiments of the present invention used for imaging a transparent or thin object or a collection of objects 28 held on top of an opaque membrane mask 29. The photo detection system 30 includes the lenses 11 and 13, the pupil plane 12 and the photodetectors 15 and 16 in Fig. 2. The methods of holding the mask/object and optionally filtering the wave are illustrated in Fig. 3 and Fig. 4. Fig. 10 is an illustration of a tunable mask, which consists of a glass substrate 35 and a number of tunable elements 34. The tunable elements have a transmission coefficient different from that of the substrate 35 (e.g., the substrate is transparent while the tunable elements are opaque, or the tunable elements shift the phase of the transmitted wave). The transmission coefficient of the tunable elements can be altered by applying different voltages between the electrodes 32 and 33. Fig. 11 is an illustration of the image construction method. Fig. 12 is a comparison of the imaging results obtained by conventional microscopy and by the present invention. Fig. 13 is a comparison of the imaging results obtained by conventional microscopy and by the present invention. Fig.
14 is an illustration of how the resolution of the present invention improves as the number of iterations increases.

DETAILED DESCRIPTION OF THE INVENTION

OVERVIEW AND DEFINITIONS

Before describing the present invention in detail, it is to be understood that this invention is not limited to specific compositions, components or process steps, as such may vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
It must be noted that, as used in this specification and the appended claims, the singular forms "a," "an" and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "an object" includes a combination of two or more objects that may or may not be the same, a "membrane mask" includes a mixture of two or more composition materials, and the like. In describing and claiming the present invention, the following terminology will be used in accordance with the definitions set out below. "Optional" or "optionally" means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where said event or circumstance occurs and instances where it does not. For example, the phrase "optionally filtering" means that a wave may or may not be filtered and that the description includes both filtered and unfiltered waves. The schematic diagram of the optical microscope illustrated in this disclosure only represents the conceptual arrangement of the actual implementations, and is not intended to be limiting. For example, a lens in the diagram may represent a set of lenses. The term "photodetector" is used to refer to a means for generating image signals having an amplitude that varies in accordance with the imagewise variation in intensity in said images. For additional information concerning terms used in the field of optics, reference may be had to Principles of Optics, 7th Edition, Max Born and Emil Wolf (Cambridge, UK: Cambridge University Press, 1999).

FOURIER OPTICS MODEL FOR LIGHT DIFFRACTION AND IMAGING

For the sake of simplicity, we only discuss the mathematical details of the present invention in transmission mode. The same method, with some modifications, can be used to obtain high-resolution optical images of an object by collecting the light reflected from its surface. For the sake of simplicity, in the discussion we assume the optical microscope being used is a 1X microscope, i.e., the image has the same size as the object. This microscope setup is for illustration only and is not intended to be limiting. The same method, with minimal modifications, can be used in situations where the image is larger or smaller than the object (i.e., a magnifying or demagnifying microscope). Still for the sake of simplicity, in the discussion we assume the illuminating wave being used is monochromatic, i.e., the wave has a single wavelength. In many practical applications, this monochromatic condition is satisfied. However, it is to be understood that the monochromatic condition is not intended to be limiting. The same method discussed in this section can be used with illuminations of multiple wavelengths.
Fig. 1 illustrates a simplified apparatus and process of conventional microscopy. Without loss of generality, suppose the object and the lenses are all lined up on the z-axis, and suppose the object is located at the position z = 0. The object is at the front focal plane of the lens 3, and the pupil plane 4 refers to the back focal plane of the lens 3. The pupil plane 4 and the image plane 7 are the front and back focal planes of the lens 5, respectively. Suppose the object is represented by a transmission function t(x, y), whose value may be complex (e.g., a phase-shift mask used in photolithography has a complex transmission function). The goal of optical microscopy is to obtain t(x, y). Suppose the object is illuminated by a coherent light wave with wavelength λ. The field at the front surface of the object is w(x, y). Then the field immediately after the object is given by

(1) E(x, y) = t(x, y) w(x, y)

In other words, the light wave is diffracted by the object and becomes E(x, y). The diffracted wave is collimated by the first lens, and thus the field on the pupil plane, denoted as A(f, g), is the Fourier transform of E(x, y):

(2) A(f, g) = ∬ E(x, y) e^(-j2π(fx+gy)) dx dy

wherein (f, g) is a proper coordinate system on the 2-dimensional pupil plane. A(f, g) is called the "angular spectrum" of the pattern E(x, y). Suppose the pupil plane is represented by a transmission function P(f, g). P(f, g) is usually called the "pupil function". Note that P(f, g) is not necessarily the effect of only the pupil. Instead, the pupil function P(f, g) represents the manifested overall effects of the whole microscope system, including lenses, pupil, aperture, filter, photodetector, etc. Then the field immediately after the pupil is given by

(3) B(f, g) = A(f, g) P(f, g)

The wave is then collimated by the second lens 5 and projected onto the image plane. The wave on the image plane is the Fourier transform of B(f, g), i.e.,

(4) U(x, y) = ∬ B(f, g) e^(-j2π(fx+gy)) df dg

wherein U(x, y) represents the electromagnetic wave on the image plane, and (x, y) is a proper coordinate system on the 2-dimensional image plane.
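Because the image-plane field (4) is a forward (not inverse) Fourier transform of the pupil-plane field, a perfect 1X system of this kind delivers an inverted copy of the object. This can be checked numerically with discrete transforms; the sketch below is illustrative only, and the grid size and random test pattern are arbitrary choices, not values from this disclosure.

```python
import numpy as np

# Discrete check of eqs. (1)-(4) with P(f, g) = 1: applying the forward DFT
# twice returns the input flipped through the origin, i.e. the microscope
# forms an inverted image of the object.
n = 64
rng = np.random.default_rng(0)
t = rng.random((n, n))          # arbitrary object transmission t(x, y)
E = t * 1.0                     # eq. (1), unit-amplitude plane-wave illumination
A = np.fft.fft2(E)              # eq. (2), field on the pupil plane
U = np.fft.fft2(A) / n**2       # eq. (4) with B = A (perfect pupil), normalized

# E(-x, -y) on the periodic grid: flip both axes about index 0.
inverted = np.roll(E[::-1, ::-1], shift=(1, 1), axis=(0, 1))
assert np.allclose(U, inverted)
```

The 1/n² factor compensates for the unnormalized DFT convention; in the continuous model the inversion of coordinates plays the same role.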
Usually photodetectors can only record the wave intensity, not the phase information, so the image that is seen is given by

(5) I(x, y) = |U(x, y)|²

In the above imaging model, A(f, g) represents a uniform plane wave traveling in the direction whose direction cosines are (λf, λg, √(1 - (λf)² - (λg)²)), and whose magnitude is |A(f, g)|. Herein (f, g) is called the spatial frequency of the plane wave. The whole light wave diffracted by the object is the superposition of the plane waves A(f, g), -∞ < f, g < ∞. If f² + g² > (1/λ)², the plane wave A(f, g) is an evanescent wave, i.e., it attenuates exponentially in the z-direction. As a consequence, this A(f, g) cannot be received by a lens or photodetector that is located more than several wavelengths away from the object. This cut-off is manifested in the pupil function:

(6) P(f, g) = 0, if f² + g² > (1/λ)²

Other imperfections of the lens also affect the pupil function. Due to the finite size of the lens and the apertures, a plane wave A(f, g) cannot be received if f² + g² > (NA/λ)², wherein NA is the numerical aperture of the lens. Usually NA < 1. Thus the pupil function must satisfy

(7) P(f, g) = 0, when f² + g² > (NA/λ)²

Lens aberrations or defocus of the photodetector also distort the pupil function. For the sake of simplicity, we assume that lens aberrations, defocus and other imperfections are negligible, so the pupil function is written as

(8) P(f, g) = 1, if f² + g² ≤ (NA/λ)²; P(f, g) = 0, if f² + g² > (NA/λ)²

Note that this simplified pupil function is for illustration of the present invention only, and is not intended to be limiting. The present invention also applies to cases where the microscope is imperfect, i.e., where P(f, g) satisfies (7) but may not have the form of (8). So in conventional microscopy, only the angular spectrum that satisfies f² + g² ≤ (NA/λ)² can arrive at the image plane.
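The full model of equations (1)-(5) with the ideal pupil (8) can be sketched with discrete Fourier transforms. In the fragment below, the grid size, wavelength and numerical aperture are hypothetical choices for demonstration only: a two-slit object spaced below the λ/(2·NA) limit is imaged through the circular pupil, and the recorded intensity no longer resolves the pair.

```python
import numpy as np

n = 256                        # samples per axis
pitch = 50e-9                  # sample spacing: 50 nm
lam = 500e-9                   # illuminating wavelength λ
na = 0.9                       # numerical aperture NA

# Object t(x, y): two slit columns 100 nm apart, below λ/(2·NA) ≈ 278 nm.
t = np.zeros((n, n))
t[:, n // 2 - 1] = 1.0
t[:, n // 2 + 1] = 1.0

E = t * 1.0                                  # eq. (1), unit plane-wave illumination
A = np.fft.fft2(E)                           # eq. (2), angular spectrum
f = np.fft.fftfreq(n, d=pitch)               # spatial-frequency axes (cycles/m)
fx, fy = np.meshgrid(f, f, indexing="ij")
P = (fx ** 2 + fy ** 2 <= (na / lam) ** 2)   # eq. (8), ideal pupil function
U = np.fft.ifft2(A * P)                      # eqs. (3)-(4), image-plane field
I = np.abs(U) ** 2                           # eq. (5), recorded intensity

# The pupil discards most of the spectrum, and the two slits merge into a
# single unresolved peak centered between them.
assert P.sum() < P.size
assert int(np.argmax(I[0])) == n // 2
```

An inverse FFT is used for the final transform here purely so that the simulated image is not coordinate-flipped; the resolution behavior is identical either way.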
DESCRIPTION OF THE PREFERRED EMBODIMENTS

Fig. 5 illustrates the physical principle of the present invention. The present invention uses a membrane mask near or in contact with an object of interest to move the high spatial frequency angular spectrum of the object into the low spatial frequency region that is viewable by a microscope, and then reconstructs the angular spectrum of the object from the received images by using an image processing unit. Thus the present invention can obtain the image of an object with a resolution much higher than that imposed by the condition (7). Fig. 2 is an illustration of the configuration of one of the embodiments of the present invention used for imaging transparent objects, in which a membrane mask 10 is held near or in contact with the object of interest 9 and a microscope 14 is used to view the object and the mask. The object 9 is at the front focal plane of the lens 11, and the pupil plane 12 refers to the back focal plane of the lens 11. The pupil plane 12 and the image plane 16 are the front and back focal planes of the lens 13, respectively. Photodetectors 15 and 16 are used to record the image intensity on the pupil plane 12 and the image plane 16. Note that the photodetector 15 is removed from the pupil plane 12 after recording the image intensity so that the electromagnetic wave can pass the pupil plane 12 and arrive at the photodetector 16. Fig. 3 is a detailed illustration of how to hold the membrane mask 10 and the object 9 in the present invention. Either the object 9 or the membrane mask 10 is held by a scanning stage 18 so that the mask can be moved relative to the object. In other embodiments of the present invention, a scanning stage is not necessarily needed. Instead, a controlling stage can be used to hold the mask near the object and vary the pattern of the mask in accordance with the control signal of the disclosed apparatus. Note that the gap between the object and the mask is for the purpose of illustration only; there is not necessarily a gap in the embodiments. In fact, it is preferred that there is no gap. For the sake of simplicity, assume the mask 10 has periodic patterns. Note that this periodicity condition is not intended to be limiting.
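The frequency down-conversion performed by such a mask can be illustrated with a short 1-D numerical sketch. The sample count, spatial frequencies and pass-band below are hypothetical values chosen only to make the effect visible: an object harmonic lying entirely outside the pass-band of the far-field optics produces, after multiplication by a known grating, a strong difference-frequency component that lies inside the pass-band.

```python
import numpy as np

n = 1024
x = np.arange(n) / n           # 1-D sample positions over a unit length
f_obj = 180.0                  # object spatial frequency (cycles/unit length)
f_mask = 150.0                 # known mask (grating) frequency
cutoff = 64                    # pass-band of the far-field optics, in cycles

t = np.cos(2 * np.pi * f_obj * x)    # fine object feature, outside the pass-band
s = np.cos(2 * np.pi * f_mask * x)   # known periodic mask
e = t * s                            # field after object and mask

def spectrum(u):
    """Magnitude of the discrete Fourier transform, indexed by frequency."""
    return np.abs(np.fft.rfft(u))

# Alone, the object has no energy inside the pass-band ...
assert spectrum(t)[:cutoff].max() < 1e-6
# ... but the product t·s carries a strong difference-frequency component at
# |f_obj - f_mask| = 30 cycles, inside the pass-band and detectable far away.
assert spectrum(e)[int(f_obj - f_mask)] > 100
```

This is the moiré-like mixing the disclosure relies on: the mask translates otherwise evanescent spatial frequencies into the propagating band, where a conventional microscope can record them.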
The transmission function of the membrane mask is represented by s(x, y) , which is a periodical function in both x and y directions. Suppose the period in x and y directions are L
x,L
y , respectively. So s(x, y) = s(x + mL
x , y + nL
y ) , where m and n are arbitrary integers. For the sake of simplicity, we assume that the object and the membrane mask are illuminated by a normally incident coherent uniform plane wave. This is not intended to be limiting. The present invention also applies to the illuminations with obliquely incident waves or partially coherent waves. Without loss of generality, assume the magnitude of the plane wave is 1. Denote the illuminating wavelength as λ. Then the field immediately after the object and the membrane mask is: (9) E(x,y) = t(x,y)s(x,y) It should be noted that the mask can be placed either in front of or behind the object. The angular spectrum of the membrane mask is its Fourier transform:
(10)  S(f, g) = Σ_{m,n} a_{mn} δ(f − m/L_x, g − n/L_y)

where δ(f, g) is the 2-dimensional Dirac function and a_{mn} are the Fourier coefficients of the periodic mask. Denote the angular spectrum of the object as T(f, g), which is the Fourier transform of t(x, y). That is,

(11)  T(f, g) = ∫∫ t(x, y) exp(−j2π(fx + gy)) dx dy

Then from (9), (10) and (11), the angular spectrum of the field immediately after the object and the membrane mask is given by

(12)  A(f, g) = ∫∫ E(x, y) exp(−j2π(fx + gy)) dx dy = T(f, g) * S(f, g)
wherein "*" denotes convolution. As discussed earlier, the field on the pupil plane is A(f, g). The present invention can work with any pupil function; nevertheless, to simplify the imaging process, the present invention uses an optional pupil filter. The filter, which is located on the pupil plane, is illustrated in Fig. 4. The transmission function of the pupil filter, i.e. the "pupil function", is denoted Π(f, g) and is given by

(13)  Π(f, g) = 1 if −a ≤ f ≤ a and −a ≤ g ≤ a; Π(f, g) = 0 otherwise

wherein a < 1/λ. Note that usually a ≤ NA/λ, where NA is the numerical aperture of the optical microscope, so this pupil function Π(f, g) satisfies the condition imposed by (7). It will be apparent to one of skill in the art that the means for applying pupil-plane filtering need not be limited to the rectangular pattern shown in Fig. 4. Other pupil filters are well known in the art, and the selection of the pupil function may be altered depending on the characteristics of the imaging needs. For example, a circular filter may be used. The field immediately after the pupil plane is therefore A(f, g)Π(f, g), and the field on the image plane, where the photodetector is located, is given by the Fourier transform of A(f, g)Π(f, g). That is,

(14)  U(x, y) = ∫∫ A(f, g)Π(f, g) exp(j2π(fx + gy)) df dg

The image obtained by the photodetector is thus given by

(15)  I(x, y) = |U(x, y)|²

Since the pupil plane works as a low-pass filter, only the part of the spectrum that satisfies |f| ≤ a and |g| ≤ a can pass the pupil plane and arrive at the image plane. It should be noted that
|A(f, g)Π(f, g)|² can be obtained by placing a photodetector 15 on the pupil plane, and |U(x, y)|² can be obtained by placing a photodetector 16 on the image plane; see Fig. 2. Then A(f, g)Π(f, g) as well as U(x, y) can be derived by using phase-retrieval algorithms. One example of a phase-retrieval algorithm is given by J. N. Cederquist (1989), "Wave-front phase estimation from Fourier intensity measurements", J. Opt. Soc. Am. A, vol. 6, p. 1020-1026. Other methods may be used, e.g., the phase-contrast microscope introduced by F. Zernike (1935), Z. Tech. Phys., vol. 16, p. 454. For a full discussion of the phase-contrast method, see A. Bennett et al., Phase Microscopy (NY, J. Wiley & Sons, 1952). The present invention then solves for T(f, g), given A(f, g)Π(f, g) and S(f, g). Let T_{mn}(f, g) = T(f + m/L_x, g + n/L_y), where m, n = 0, ±1, ±2, .... In other words, T_{mn}(f, g) is the shift of T(f, g) from the region (2m−1)/(2L_x) ≤ f ≤ (2m+1)/(2L_x), (2n−1)/(2L_y) ≤ g ≤ (2n+1)/(2L_y) to the region −1/(2L_x) ≤ f ≤ 1/(2L_x), −1/(2L_y) ≤ g ≤ 1/(2L_y). The spectrum T(f, g) composed of the T_{mn}(f, g) is depicted in Fig. 6. Note that Π(f, g) = 1 if −a ≤ f ≤ a and −a ≤ g ≤ a, and Π(f, g) = 0 otherwise. For the sake of simplicity, choose L_x = L_y = 1/(2a).
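The forward model of equations (9)-(15) — object times mask, Fourier transform, rectangular pupil, image intensity — can be sketched numerically. The grid size, field of view, and the example object and mask transmissions below are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

# Toy discretization of equations (9)-(15). Lengths are in units of the
# wavelength; the object t and period-1 mask s are illustrative assumptions.

n = 256                      # samples per side (assumed)
L = 8.0                      # field of view in wavelengths (assumed)
dx = L / n
f = np.fft.fftfreq(n, d=dx)  # spatial frequencies, cycles per wavelength
fx, fy = np.meshgrid(f, f, indexing="ij")

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

t = (np.abs(X) < 2.0).astype(float)      # object transmission t(x, y)
s = ((X % 1.0) < 0.5).astype(float)      # periodic mask s(x, y), L_x = 1

E = t * s                                # equation (9): field after the mask
A = np.fft.fft2(E)                       # angular spectrum A(f, g), eq. (12)

a = 0.5                                  # pupil half-width, a <= NA/lambda
Pi = ((np.abs(fx) <= a) & (np.abs(fy) <= a)).astype(float)   # eq. (13)

U = np.fft.ifft2(A * Pi)                 # image-plane field, eq. (14)
I = np.abs(U) ** 2                       # detected intensity, eq. (15)
```

The low-pass action of the pupil is explicit here: only the frequency samples inside the ±a square survive the multiplication by Π.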
It will be apparent to one of skill in the art that L_x need not equal L_y, and that L_x and L_y need not equal 1/(2a). Then the spectrum received by the microscope is given by

(16)  R(f, g) = A(f, g)Π(f, g) = Σ_{m,n} a_{−m,−n} T_{mn}(f, g),  −a ≤ f ≤ a, −a ≤ g ≤ a

As illustrated in Fig. 5, the image received by the microscope is a combination of the high-spatial-frequency information of the object. In order to reconstruct T(f, g), the present invention
solves for T_{mn}(f, g), m, n = 0, ±1, ±2, .... Note that if the membrane mask is displaced by (Δx, Δy), the Fourier transform of the membrane mask will become

(17)  S'(f, g) = Σ_{m,n} a_{mn} exp(−j2π(mΔx/L_x + nΔy/L_y)) δ(f − m/L_x, g − n/L_y)

wherein δ(f, g) is the 2-dimensional Dirac function. Therefore we can move the membrane mask parallel to the object and take images at different locations so as to calculate the T_{mn}(f, g). First, assume T_{mn}(f, g) = 0 when |m| > N_1 or |n| > N_2. In other words, the algorithm disclosed here reconstructs the spectrum T(f, g) within the region −(2N_1+1)a ≤ f ≤ (2N_1+1)a, −(2N_2+1)a ≤ g ≤ (2N_2+1)a, under the assumption that T(f, g) = 0 if (f, g) falls outside this region. This is not limiting, because one can always choose large N_1, N_2 to approximate the whole T(f, g) with arbitrary accuracy. Then choose a set of displacements. The number of displacements is N = (2N_1+1)(2N_2+1). The displacements are (x_1, y_1), (x_2, y_2), ..., (x_N, y_N), respectively. Then the membrane mask is moved N times. The i-th time, the membrane mask is moved from its original location by the displacement (x_i, y_i). Thus in the i-th time, the Fourier transform of the membrane mask is given by

(18)  S_i(f, g) = Σ_{m,n} a_{mn} exp(−j2π(m x_i/L_x + n y_i/L_y)) δ(f − m/L_x, g − n/L_y)
Then in the i-th time, the spectrum observed on the pupil plane of the microscope is given by

(19)  R_i(f, g) = (T(f, g) * S_i(f, g)) Π(f, g),  −a ≤ f ≤ a, −a ≤ g ≤ a
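The displacement-induced phase factors behind (17)-(18) can be verified numerically. The one-dimensional sketch below checks that shifting a periodic mask by d multiplies its m-th Fourier coefficient by exp(−j2πmd/L); the 30% duty-cycle mask profile and the displacement value are illustrative assumptions:

```python
import numpy as np

# Check of the Fourier shift property used in equations (17)-(18), 1-D case.

n = 1024
L = 1.0                                  # mask period
x = np.arange(n) * L / n                 # one period, sampled
s = (x < 0.3 * L).astype(float)          # mask profile s(x), assumed

def coeff(sig, m):
    """Fourier-series coefficient a_m of a period-L signal sampled on x."""
    return np.mean(sig * np.exp(-2j * np.pi * m * x / L))

k = 174                                  # displacement in whole samples
d = k * L / n
s_shifted = np.roll(s, k)                # s(x - d)

for m in (-2, -1, 0, 1, 2):
    predicted = coeff(s, m) * np.exp(-2j * np.pi * m * d / L)
    assert abs(predicted - coeff(s_shifted, m)) < 1e-12
```

Each displacement thus re-weights the same set of coefficients a_{mn} with known phases, which is what makes the linear system below solvable.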
Let a_{mni} = a_{mn} exp(−j2π(m x_i/L_x + n y_i/L_y)), −N_1 ≤ m ≤ N_1, −N_2 ≤ n ≤ N_2, 1 ≤ i ≤ N. Then equation (19) can be written as

(20)  R_i(f, g) = Σ_{m,n} a_{−m,−n,i} T_{mn}(f, g),  −a ≤ f ≤ a, −a ≤ g ≤ a

By solving the equation system (20), one can obtain T_{mn}(f, g), −N_1 ≤ m ≤ N_1, −N_2 ≤ n ≤ N_2. To illustrate the solution method, write (20) in matrix form.
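The assembly of the coefficient matrix can be sketched in numpy. The base coefficients a_{mn} and the uniform displacement grid below are illustrative assumptions; the column-index convention follows the (m, n) ordering of (21):

```python
import numpy as np

# Sketch of building the N x N coefficient matrix Q of equation (21),
# 0-indexed: row i is the i-th displacement, column (m + N1)*(2*N2 + 1)
# + (n + N2) holds a_mni. Coefficients and displacements are assumed.

N1, N2 = 1, 1
Lx = Ly = 1.0
N = (2 * N1 + 1) * (2 * N2 + 1)          # N = 9 exposures

rng = np.random.default_rng(0)
a = rng.normal(size=(2 * N1 + 1, 2 * N2 + 1)) \
    + 1j * rng.normal(size=(2 * N1 + 1, 2 * N2 + 1))  # stand-in a_mn

# Displacements (x_i, y_i): a uniform grid over one mask period (assumed).
disps = [(Lx * k / (2 * N1 + 1), Ly * l / (2 * N2 + 1))
         for k in range(2 * N1 + 1) for l in range(2 * N2 + 1)]

Q = np.zeros((N, N), dtype=complex)
for i, (xi, yi) in enumerate(disps):
    for m in range(-N1, N1 + 1):
        for n in range(-N2, N2 + 1):
            # a_mni = a_mn * exp(-j*2*pi*(m*x_i/Lx + n*y_i/Ly)), per (18)
            col = (m + N1) * (2 * N2 + 1) + (n + N2)
            Q[i, col] = a[m + N1, n + N2] * np.exp(
                -2j * np.pi * (m * xi / Lx + n * yi / Ly))

# Q must be invertible (well-conditioned) for the direct solve of (22)-(23).
assert np.linalg.matrix_rank(Q) == N
```

With the uniform displacement grid, the phase factors form a 2-D DFT matrix with columns scaled by a_{mn}, so Q is invertible whenever all retained a_{mn} are nonzero.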
Denote

(21)  Q = [ a_{−N_1,−N_2,1}   a_{−N_1,−N_2+1,1}   ...   a_{N_1,N_2,1} ;
            a_{−N_1,−N_2,2}   a_{−N_1,−N_2+1,2}   ...   a_{N_1,N_2,2} ;
            ...
            a_{−N_1,−N_2,N}   a_{−N_1,−N_2+1,N}   ...   a_{N_1,N_2,N} ]

So Q is an N by N matrix, where N = (2N_1 + 1)(2N_2 + 1). The Fourier coefficient a_{mni} is the element of Q in row i and column (m + N_1)(2N_2 + 1) + n + N_2 + 1. Then (20) can be written as
(22)  R(f, g) = Q T(f, g),  −a ≤ f ≤ a, −a ≤ g ≤ a

wherein R(f, g) denotes the column vector [R_1(f, g), R_2(f, g), ..., R_N(f, g)]^T and T(f, g) denotes the column vector of the T_{mn}(f, g), ordered as in (21). Therefore T_{mn}(f, g), −N_1 ≤ m ≤ N_1, −N_2 ≤ n ≤ N_2, are given by

(23)  T(f, g) = Q⁻¹ R(f, g),  −a ≤ f ≤ a, −a ≤ g ≤ a
Thereafter the spectrum T(f, g), −(2N_1+1)a ≤ f ≤ (2N_1+1)a, −(2N_2+1)a ≤ g ≤ (2N_2+1)a, is obtained by shifting the T_{mn}(f, g), −N_1 ≤ m ≤ N_1, −N_2 ≤ n ≤ N_2, to their corresponding locations, as depicted in Fig. 6:

(24)  T(f, g) = T_{mn}(f − m/L_x, g − n/L_y),  (2m−1)/(2L_x) ≤ f ≤ (2m+1)/(2L_x), (2n−1)/(2L_y) ≤ g ≤ (2n+1)/(2L_y),  −N_1 ≤ m ≤ N_1, −N_2 ≤ n ≤ N_2

Thus the image of the object is given by the inverse Fourier transform of T(f, g):

(25)  t(x, y) = ∫∫ T(f, g) exp(j2π(fx + gy)) df dg
The whole algorithm is illustrated in Fig. 7. It should be noted that the concept of the present invention is to collimate a wave near an object of interest by using a predetermined mask, so that the propagating wave diffracted by the two objects carries the information about the fine details of the object of interest. The diffracted wave is then received by a conventional optical microscope, and the image of the object of interest is reconstructed. The implementations of the present invention may vary depending on specific applications, and the method of reconstructing the image of the object may vary accordingly. Some of the apparatuses of the present invention are disclosed as follows. The present invention is also suitable for imaging an object by collecting the light reflected from the object. In this situation, the present invention is said to work in "reflection mode". Fig. 8 is an illustration of the principle of such an embodiment of the present invention, used to image an opaque object or the surface of an object 23 through a membrane mask 22. The photo-detection system 25 includes the lenses 11 and 13, the pupil plane 12 and the photodetectors 15 and 16 of Fig. 2. The methods of holding the mask/object and optionally filtering the wave are as described with reference to Fig. 3 and Fig. 4. The image construction algorithm is similar to the above method. In this embodiment, the illuminating wave passes through the mask, arrives at and is reflected from the object, and passes through the mask again. Then the wave is received by the photo-detection system. The mask is then varied or scanned, and the above illumination and wave-collection process is repeated. After a number of such iterations, the image of the object is constructed by using the construction algorithm.
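The construction loop can be exercised end to end in a one-dimensional numerical sketch: simulate the band-limited spectra R_i for several mask displacements, solve the linear system (20) at each in-band frequency, reassemble T(f) as in (24), and inverse-transform as in (25). The grid sizes, mask pattern, displacement set, and synthetic object are all assumptions, and phase retrieval is taken as already done, so complex spectra are used directly:

```python
import numpy as np

# 1-D end-to-end sketch of the disclosed reconstruction (all sizes assumed).

n, p = 256, 16                 # samples; mask period in samples
N1 = 2                         # recover orders |m| <= N1
half = n // (2 * p)            # pupil half-width a in frequency bins
band = np.arange(-half, half)  # in-band bin indices (half-open)

rng = np.random.default_rng(1)
T_true = np.zeros(n, dtype=complex)      # object spectrum, confined to the
idx = np.arange(-(2 * N1 + 1) * half, (2 * N1 + 1) * half)  # recoverable band
T_true[idx] = rng.normal(size=idx.size) + 1j * rng.normal(size=idx.size)
t = np.fft.ifft(T_true) * n              # object t(x): fft(t)/n == T_true

s = np.tile((np.arange(p) < 5).astype(float), n // p)        # periodic mask
am = np.fft.fft(s)[np.arange(-N1, N1 + 1) * (n // p)] / n    # a_m, |m| <= N1

shifts = np.arange(2 * N1 + 1) * 3       # N = 2*N1+1 displacements, samples
Q = np.zeros((len(shifts), 2 * N1 + 1), dtype=complex)
R = np.zeros((len(shifts), band.size), dtype=complex)
for i, r in enumerate(shifts):
    R[i] = (np.fft.fft(t * np.roll(s, r)) / n)[band]   # measured spectrum
    for j, m in enumerate(range(-N1, N1 + 1)):
        # R_i(f) = sum_m a_{-m} exp(+j*2*pi*m*r/p) T_m(f), cf. (20)
        Q[i, j] = am[N1 - m] * np.exp(2j * np.pi * m * r / p)

Tm = np.linalg.solve(Q, R)               # solve (22)-(23) per frequency bin

T_rec = np.zeros(n, dtype=complex)
for j, m in enumerate(range(-N1, N1 + 1)):
    T_rec[band + m * (n // p)] = Tm[j]   # shift to final locations, eq. (24)

t_rec = np.fft.ifft(T_rec) * n           # reconstructed object, eq. (25)
assert np.max(np.abs(T_rec - T_true)) < 1e-8
```

Because the synthetic object spectrum is confined to the (2N_1+1) recoverable blocks, the reconstruction here is exact up to floating-point error; a real object would additionally be truncated at the ±(2N_1+1)a boundary.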
Fig. 9 is an illustration of the principle of one of the embodiments of the present invention, used to image a transparent or thin object or a collection of objects 28 held on top of an opaque membrane mask 29. The photo-detection system 30 includes the lenses 11 and 13, the pupil plane 12 and the photodetectors 15 and 16 of Fig. 2. The methods of holding the mask/object and optionally filtering the wave are as described with reference to Fig. 3 and Fig. 4. The image construction algorithm is similar to the above method. In this embodiment, the illuminating wave passes through the object, arrives at and is reflected from the mask, and passes through the object again. Then the wave is received by a photo-detection system. The mask is then varied or scanned, and the above illumination and wave-collection process is repeated. After a number of such iterations, the image of the object is constructed by using the construction algorithm. In the embodiment discussed in the previous section, the membrane mask is moved N times, so the angular spectrum of the object received by the disclosed apparatus is N times larger than that received by a conventional optical microscope. Therefore the present invention achieves a resolution much higher than conventional microscopy. It should be noted that the membrane mask can be fixed while the object is moved N times; this is equivalent to moving the mask N times. When the invention works in the "reflection mode" illustrated in Fig. 8, moving the object instead of the membrane mask is also acceptable. Rather than scanning the mask or the object, an alternative approach is to use a tunable membrane mask, i.e., a mask that can change its pattern according to external signals. As illustrated in Fig. 10, a tunable mask can change its transmission or reflection coefficients under the control of electrical, optical or other signals.
Suitable materials for making tunable membrane masks may include, but are not limited to, liquid crystals, non-linear optical materials whose permittivity can be altered by optical illumination, etc. A tunable mask may be made of micro-electromechanical systems (MEMS) as well. Fig. 7 discloses the method of obtaining the image of an object via the present invention using a tunable membrane mask. To construct the image of the object, the pattern of the membrane mask is varied N times, and each time images are taken using an optical microscope. Each time the membrane mask is varied, the coefficients a_{mni} in (20) are varied. Using the coefficients a_{mni} determined by the varied membrane mask, one can obtain the matrix Q in (21), and thereafter use the same method shown in (21)-(25) to obtain the high-resolution image of the object.
It should be understood that the essential role of scanning or varying the membrane mask is to vary the coefficients a_{mni} in (20), i.e., to build the matrix Q in (21) and obtain R_i(f, g) in (22), so that the image of an object can be constructed via the present method. The scope of the present invention is not intended to be limited to the scanning or varying-mask approaches; any means of varying the coefficients a_{mni} in (20) is acceptable in the present invention. The minimum number of mask variations or scans needed to obtain T(f, g), the Fourier transform of the object, in the region −(2N_1+1)a ≤ f ≤ (2N_1+1)a, −(2N_2+1)a ≤ g ≤ (2N_2+1)a, is N = (2N_1+1)(2N_2+1). Note that conventional microscopy can only obtain T(f, g) in the region −a ≤ f, g ≤ a, or f² + g² ≤ a². In practice, to improve the signal-to-noise ratio (SNR) and image contrast, the number of variations/scans can be more than (2N_1+1)(2N_2+1), and then the least-squares method can be used to solve for T(f, g). In other words, the matrix Q in equation (22) is then an M × N matrix, where M is the number of mask scans/variations, N = (2N_1+1)(2N_2+1), and M > N. Then equation (22) can be solved via the least-squares method.
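The overdetermined case M > N reduces to a standard complex least-squares solve at each frequency. The sketch below uses a synthetic stand-in matrix Q and a noisy right-hand side, both assumptions rather than values from the disclosure:

```python
import numpy as np

# Least-squares solve of the overdetermined per-frequency system (22).
# Q and the noise level are synthetic stand-ins for illustration.

rng = np.random.default_rng(2)
N = 9                                    # unknowns, e.g. N1 = N2 = 1
M = 25                                   # number of mask variations, M > N

Q = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
T_true = rng.normal(size=N) + 1j * rng.normal(size=N)
noise = 0.01 * (rng.normal(size=M) + 1j * rng.normal(size=M))
R = Q @ T_true + noise                   # noisy measurements

T_ls, *_ = np.linalg.lstsq(Q, R, rcond=None)   # least-squares solve of (22)
assert np.linalg.norm(T_ls - T_true) < 0.1
```

The extra rows average down the measurement noise, which is the SNR benefit noted above.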
OTHER USES It is to be understood that, while the invention has been described in conjunction with the preferred specific embodiments thereof, the foregoing description as well as the examples that follow are intended to illustrate and not limit the scope of the invention. Other aspects, advantages and modifications within the scope of the invention will be apparent to those skilled in the art to which the invention pertains. All patents, patent applications, and publications mentioned herein are hereby incorporated by reference in their entirety. EXAMPLES The following examples are put forth so as to provide those of ordinary skill in the art with a complete disclosure and description of how to use the processes and methods disclosed and claimed herein. Efforts have been made to ensure accuracy with respect to assumptions (e.g., the equivalent patterns of the membrane mask, etc.), but some errors and deviations should be accounted for. Fig. 11 is an illustration of the image construction method: (a) the object to be imaged; (b) the membrane mask; (c) the pattern that is seen through the optical microscope; this pattern is the
multiplication of the object and the mask; (d) the low-spatial-frequency part of the Fourier transform of the pattern seen in (c); only this low-frequency spectrum can be received by the microscope; (e) the actual image received by the microscope; (f) the image constructed by using the disclosed method. The construction algorithm is to repeat the procedures (b)-(e) N times, each time with the membrane mask placed at a different location relative to the object, and to solve for the whole Fourier transform of the object from the spectra obtained in (d). In the following examples, assume the object of interest is very thin and transparent with opaque patterns. The apparatus illustrated in Fig. 2 is used to image the object. Assume the membrane mask being used is infinitely thin, with a transparent background and opaque patterns (e.g., a chrome-on-glass photomask used in photolithography). The mask is in contact with the object. The pattern on the mask is illustrated in Fig. 11(b); it has a period of one wavelength in both the x and y directions, i.e., L_x = L_y = λ. The size of each opaque rectangle on the mask is 0.3λ × 0.3λ. The numerical aperture (NA) of the microscope is assumed to be 0.7. A pupil filter illustrated in Fig. 4 is placed on the pupil plane, whose opening corresponds to a = 1/(2λ). Thus only the spatial frequencies satisfying −0.5/λ ≤ f, g ≤ 0.5/λ can be received by the microscope. The object is fixed and the mask is translated for a number of iterations. In each iteration, the center of the mask is moved to (x_k, y_l), 0 ≤ k, l ≤ n − 1. The number of iterations is n². The larger n is, the higher the resolution the apparatus can achieve. All the images are obtained by simulation using Fourier optics. Fig. 12 is a comparison of the imaging results obtained by conventional microscopy and by the present invention: (a) a transparent and very thin object; (b) the image obtained by conventional microscopy; (c) the image obtained by the apparatus of the present invention illustrated in Fig. 2. A 50×50 nm² defect on the object (indicated in the figure) is of interest. Assume the conventional microscopy uses a 150 nm wavelength and 0.8 NA, while the present invention uses a 250 nm wavelength and 25 iterations (n=5). The defect is very clear in (c) but very fuzzy in (b). In addition, the edges of the pattern are much clearer in (c) than in (b). This indicates that the present invention achieves much higher resolution than conventional microscopy. Fig. 13 is another comparison of the imaging results obtained by conventional microscopy and by the present invention: (a) the object; (b) the image obtained by conventional microscopy; (c) the image obtained by the present invention. A 30×30 nm² defect is of interest. The conventional microscopy uses a 150 nm wavelength and 0.8 NA, while the present invention uses a 250 nm wavelength and 25 iterations. The defect is very clear in (c) but completely missing in (b). In addition, the edges of the pattern are much clearer in (c) than in (b). This indicates that the present invention achieves much higher resolution than conventional microscopy. Fig. 14 is an illustration of how the resolution of the present invention improves as the number of iterations increases: (a) the object; (b) the image obtained with 25 iterations (n=5); (c) the image with 49 iterations (n=7). A 20×20 nm² defect indicated in the figure is of interest. Clearly (c) shows a much sharper image than (b). Thus the resolution is significantly improved as the number of iterations increases.
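The cutoff frequencies in the Fig. 12 example can be checked with simple arithmetic. The comparison below uses the coherent cutoff NA/λ for the conventional case and the extended cutoff (2N_1+1)a = n·a for the disclosed method; treating the factor-of-n gain as fully realized is an assumption of this sketch:

```python
# Back-of-envelope cutoff comparison for the example of Fig. 12.

lam_conv, na_conv = 150.0, 0.8        # nm; conventional microscope
lam_inv = 250.0                       # nm; disclosed method
a = 0.5 / lam_inv                     # pupil half-width, a = 1/(2*lambda)
n = 5                                 # 25 iterations -> orders |m| <= 2

f_conv = na_conv / lam_conv           # cycles/nm, conventional cutoff
f_inv = n * a                         # cycles/nm, extended cutoff n*a
assert f_inv > f_conv
# Half-period resolutions: 1/(2*f_conv) ~ 93.75 nm vs 1/(2*f_inv) = 50 nm,
# consistent with the 50 x 50 nm^2 defect being resolved in Fig. 12(c).
```

Despite its longer wavelength, the disclosed method's extended passband exceeds that of the shorter-wavelength conventional microscope, matching the simulated comparison.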