BACKGROUND
The present invention relates generally to imaging sensors, and more particularly, to scene based nonuniformity correction methods for use with such imaging sensors.
Nonuniformities appear at the output display of an imaging sensor as fixed pattern noise. The nonuniformities are described as noise because they constitute undesirable information. They are described as a fixed pattern because their characteristics do not change (or change relatively slowly) with time. These nonuniformities may also be thought of as detector gain and offset errors. In the method of the present invention, all errors are treated as offset errors. Thus, the present invention accurately measures the detector offsets using actual scene information.
Once the offset errors have been measured there are several ways in which the corrections may be applied. They may be used as the only source of error correction. They may also be used as fine offset correction terms, in conjunction with coarse offset terms and gain correction terms. These other terms may be calculated using a number of different methods. These methods include coarse offset terms calculated using a thermal reference source; coarse offset and gain terms calculated as part of system initialization; and fine gain terms calculated using thermal reference sources or scene-based methods.
In one current method employed by the assignee of the present invention, one or more thermal reference sources are used to measure nonuniformities for a scanning infrared sensor and provide data for the calculation of correction coefficients that are employed to correct for the nonuniformities. There are several disadvantages related to the use of a thermal reference source-based correction system. First, there is added mechanical complexity which leads to increased system cost. Secondly, system performance may suffer.
System performance suffers for two reasons. In many cases, a separate optical path is utilized for each thermal reference source. Thus, the correction coefficients calculated using the thermal reference source optical path may not be the proper ones for the optical path of the scanning infrared sensor. This leads to imperfect correction. In less sophisticated systems, the temperature of the thermal reference source cannot be controlled. In this case, the thermal reference source may not be at the same temperature as the scene that is viewed. The correction coefficients thus correspond to the wrong part of the detector response curve. This also leads to imperfect correction. The present method avoids these problems by using scene temperature information. Furthermore, the present invention does not degrade the scene in any manner.
SUMMARY OF THE INVENTION
The present scene-based nonuniformity correction method is used to eliminate image defects in an imaging sensor or video system, such as a scanning infrared sensor or pushbroom sensor, for example, resulting from nonuniformities caused by a detector (focal plane array) and detector readout, for example. The present invention detects, measures, and corrects for nonuniformities in the video output of an imaging sensor without degrading the image. A set of region-based correction terms is calculated and applied to a video signal produced by the sensor using either a feedback or feedforward configuration. After the correction terms are applied, the resultant video signal is suitable for display or further processing.
There are several advantages in using the present method. First, the mechanical complexity of the imaging system is reduced, which leads to lower costs because fewer components are required and there is reduced labor required for manufacturing and testing. Secondly, an imaging system incorporating the present invention provides better performance.
BRIEF DESCRIPTION OF THE DRAWINGS
The various features and advantages of the present invention may be more readily understood with reference to the following detailed description taken in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
FIG. 1 shows a block diagram of a generic imaging sensor system incorporating a scene based nonuniformity correction method in accordance with the principles of the present invention;
FIG. 2 is a block diagram illustrating calculation of fine offset terms used in the scene based nonuniformity correction method of the present invention that is employed in the imaging sensor system of FIG. 1; and
FIG. 3 is a flow diagram illustrating the scene based nonuniformity correction method in accordance with the principles of the present invention.
DETAILED DESCRIPTION
In order to better understand the present method or algorithm, reference is made to FIG. 1, which shows a block diagram of a generic scanning infrared sensor system 10, or imaging system 10, incorporating a scene based nonuniformity correction method 40 in accordance with the principles of the present invention. The scanning infrared sensor system 10 is comprised of a detector 11 and its readout 12, and the readout is coupled to system electronics 13 that implements the scene based nonuniformity correction method 40. The system electronics 13 implements correction logic that produces coarse and fine correction terms that are applied to the processed video signal. The correction logic includes two offset and gain pairs 14, 15, comprising a coarse offset and gain pair 14, and a fine offset and gain pair 15. The coarse offset and gain pair 14 is comprised of coarse offset level and coarse gain terms 16, 17 that may be calculated using a thermal reference source (internal or external) and pre-stored in a nonvolatile memory 28. The coarse offset level term 16 may also be calculated using a thermal reference source that is updated continuously.
The fine offset and gain pair 15 is comprised of fine offset level and fine gain terms 18, 19. The fine gain term 19 may be set to unity, calculated using thermal reference sources, or calculated using a scene-based algorithm. First and second adders 21, 23 and first and second multipliers 22, 24 are employed to appropriately combine the coarse and fine level and gain terms 16, 17, 18, 19 to produce a corrected video output signal. The present method 40 or algorithm is used to estimate the fine level correction terms 18 and is performed in a nonuniformity estimator 20. The output of the nonuniformity estimator 20 has a loop attenuation factor (k) 25 applied thereto and is coupled to a first input of a third adder 26. A second input of the third adder 26 is provided by the fine level term 18. As such the fine level term 18 is updated with the output of the nonuniformity estimator 20 multiplied by the loop attenuation factor (k) 25.
The expressions and equations that are used in implementing the present algorithm or method 40 will now be described. The variables associated with the system 10 are as follows:
x(m,n)≡input,
y(m,n)≡output,
LC (m)≡coarse offset level term 16,
GC (m)≡coarse gain term 17,
LF (m,n)≡fine offset level term 18,
GF (m)≡fine gain term 19,
L(m,n)≡error estimate, and
k≡loop attenuation factor 25,
where
m=(0,M-1);M=number of detectors, and
n=(0,N-1);N=samples/detector.
The system input and output are thus related by the equation:
y(m,n)=GF(m){GC(m)[x(m,n)+LC(m)]+LF(m,n)}.
The fine level terms are recursively updated after each frame. Thus,
LF(m,n)=LF(m,n)+k·L(m,n).
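As an illustrative sketch (not part of the claimed invention), the input/output relation and the recursive fine-level update may be expressed as follows; the array shapes, the use of NumPy, and the value of the attenuation factor k are assumptions for the example:

```python
import numpy as np

def apply_correction(x, Gc, Lc, Gf, Lf):
    """Apply coarse and fine correction terms to raw detector samples.

    x : (M, N) raw samples, one row per detector channel.
    Gc, Lc, Gf : (M,) coarse gain, coarse offset level, and fine gain terms.
    Lf : (M, N) fine offset level terms.
    Implements y(m,n) = GF(m) * (GC(m) * [x(m,n) + LC(m)] + LF(m,n)).
    """
    return Gf[:, None] * (Gc[:, None] * (x + Lc[:, None]) + Lf)

def update_fine_level(Lf, error_estimate, k=0.125):
    """Recursive update LF(m,n) <- LF(m,n) + k * L(m,n); k is illustrative."""
    return Lf + k * error_estimate
```

With unity gains and zero offsets the output equals the input, so the correction chain is transparent until the estimator accumulates nonzero fine level terms.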
FIG. 2 is a block diagram illustrating calculation of fine offset level terms 18 used in the scene based nonuniformity correction method 40 of the present invention that is employed in the scanning infrared sensor system 10 of FIG. 1. The following terms are defined and are used in implementing the method 40 of the present invention.
yI(m)≡horizontal average of a vertical region,
Fhp ≡high pass filter operator,
hpI(m)≡high pass version of yI,
T≡threshold operator,
bI(m)≡thresholded version of hpI,
B≡segment boundary operator,
Flp ≡low pass filter operator,
lpI(m)≡low pass version of yI, and
cI(m)≡vertical region correction term,
where
I={0, L-1}; L=number of regions, and
r={0, R-1}; R=samples/region.
During an active field time, the scene based nonuniformity correction method 40 collects scene data and calculates the average within each region, for each line therein, illustrated in box 31. This operation is equivalent to implementing the expression:
yI(m)=(1/R)·Σ y(m,I·R+r), where the sum runs over r=0 to R-1.
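The per-region horizontal averaging of box 31 may be sketched as follows (an illustrative example only; the frame layout, with one row per detector channel and the sample count evenly divisible into regions, is an assumption):

```python
import numpy as np

def region_averages(frame, num_regions):
    """Average samples horizontally within each vertical region.

    frame : (M, N) video samples, one row per detector channel.
    Splits the N samples into `num_regions` regions of R = N // num_regions
    samples and returns an (M, num_regions) array whose column I is the
    region vector yI(m).
    """
    M, N = frame.shape
    R = N // num_regions
    return frame[:, :num_regions * R].reshape(M, num_regions, R).mean(axis=2)
```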
yI(m) is thus comprised of several column vectors, one for each region. These vectors are then high pass filtered (Fhp), illustrated in box 32, and thresholded (T), illustrated in box 33, to detect edges. The edges are marked as boundaries. Thus,
hpI(m)=Fhp(yI), and
bI(m)=T(hpI).
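The high pass and threshold operations may be sketched as follows; the choice of a first-difference as the high pass operator Fhp is an assumption, since the patent does not fix the filter kernel:

```python
import numpy as np

def mark_boundaries(y_region, threshold):
    """Return the boundary mask bI(m) = T(Fhp(yI)) for one region vector.

    A simple first-difference serves as the high pass operator here
    (illustrative only); pixels whose high pass magnitude exceeds the
    threshold are marked as edges/boundaries.
    """
    hp = np.zeros_like(y_region)
    hp[1:] = np.diff(y_region)          # hpI(m): high pass version of yI
    return np.abs(hp) > threshold       # bI(m): boolean boundary mask
```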
These vectors are then low pass filtered, illustrated in box 34, using the boundary information. Pixels marked as boundaries are ignored. This is denoted by the low pass filter operator, Flp, and the boundary operator, B. That is,
lpI(m)=Flp(B yI).
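A boundary-aware low pass filter may be sketched as a sliding-window mean that skips pixels marked as boundaries; the window size and the kernel are assumptions, as the patent does not specify Flp:

```python
import numpy as np

def boundary_lowpass(y_region, boundary_mask, window=3):
    """Compute lpI(m) = Flp(B yI): low pass yI, ignoring boundary pixels.

    For each detector index m, average the non-boundary neighbors within
    the window; if every neighbor is a boundary, pass the input through.
    """
    out = np.empty_like(y_region)
    half = window // 2
    for m in range(len(y_region)):
        lo, hi = max(0, m - half), min(len(y_region), m + half + 1)
        keep = ~boundary_mask[lo:hi]
        out[m] = y_region[lo:hi][keep].mean() if keep.any() else y_region[m]
    return out
```

Because the boundary pixel is excluded from every window, an isolated edge sample does not contaminate the low pass estimate of its neighbors.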
Next, each region vector, yI(m), is subtracted from its low pass filtered version in an adder 35, producing the correction term for each region, cI(m). That is,
cI(m)=lpI(m)-yI(m).
Finally, the correction terms are either applied individually for each region, or averaged together, wherein the boundary pixels are ignored. If they are averaged together, illustrated in box 36, the error estimate is calculated using the equation:
L(m)=(1/L)·Σ cI(m), where the sum runs over regions I=0 to L-1 and boundary pixels are excluded.
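The subtraction of box 35 and the averaging of box 36 may be sketched together; holding the region vectors as columns of (M, L) arrays is an assumption of this example:

```python
import numpy as np

def error_estimate(y_regions, lp_regions, b_regions):
    """Combine per-region correction terms into one estimate L(m).

    y_regions, lp_regions, b_regions : (M, L) arrays holding, per detector
    m and region I, the region average yI(m), its low pass version lpI(m),
    and the boundary mask bI(m).
    """
    c = lp_regions - y_regions           # cI(m) = lpI(m) - yI(m)
    c = np.where(b_regions, np.nan, c)   # ignore boundary pixels
    L = np.nanmean(c, axis=1)            # average region terms per detector
    return np.nan_to_num(L)              # 0 where every region was a boundary
```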
For the purposes of completeness, FIG. 3 is a flow diagram illustrating the scene based nonuniformity correction method 40 in accordance with the principles of the present invention. A video input signal is provided, indicated in step 41, such as from the infrared sensor 11 derived from an image. The video input signal is processed such that a vector representing offset correction terms is formed, and this vector is initially set to zero. Each element in this vector represents a correction term for a particular detector of the scanning infrared sensor 11. The vector is applied to each pixel of the image by the processor 13 as the pixels are read from the focal plane array 12.
To measure the offset error, the image is separated into vertically oriented regions, each comprising a plurality of channels. The average of each channel within a region is computed, indicated in step 42, and a set of region vectors is formed, such that there is one region vector for each region. Each region vector is then globally high pass filtered, and edges larger than a predefined threshold are detected, indicated in step 43, and marked, indicated in step 44. Then, each region vector is further separated into sub-regions at the marked edges, indicated in step 45. The isolated sub-regions are low pass filtered without regard to adjacent sub-regions, indicated in step 46. That is, each sub-region is low pass filtered independently of the other sub-regions. In a first embodiment of the method 40, the correction terms for each vertical region vector are averaged together, resulting in a single correction vector, indicated in step 48.
The correction terms calculated for each vertical region may also be applied individually to each of the detectors. In this second embodiment, the offset level error in each region for each channel is calculated, indicated in step 49, wherein the offset level error at boundary edges is undefined, that is, has no value. The correction terms corresponding to a region are applied as the detector 11 scans the scene and views a portion corresponding to that particular region. The correction terms are smoothed at region boundaries to eliminate noise due to boundary transitions. This second method 40 is less sensitive to gain errors in the detector.
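The per-region application with smoothing at region boundaries may be sketched as follows; the moving-average smoothing kernel and the blend width are assumptions, since the patent does not specify the smoothing method:

```python
import numpy as np

def apply_region_terms(frame, c_regions, blend=1):
    """Apply each region's correction cI(m), smoothed at region boundaries.

    frame : (M, N) video samples; c_regions : (M, L) per-region terms.
    The per-sample correction starts as a piecewise-constant expansion of
    the region terms, then is smoothed across columns with a small
    moving average (edge-padded) to suppress boundary-transition noise.
    """
    M, N = frame.shape
    num_regions = c_regions.shape[1]
    R = N // num_regions
    corr = np.repeat(c_regions, R, axis=1)          # piecewise-constant terms
    kernel = np.ones(2 * blend + 1) / (2 * blend + 1)
    pad = np.pad(corr, ((0, 0), (blend, blend)), mode="edge")
    smooth = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="valid"), 1, pad)
    return frame + smooth
```

With equal terms in every region the smoothing is transparent; with differing terms the correction ramps linearly across each boundary instead of stepping.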
Thus there has been described a new and improved scene based nonuniformity correction method for use with imaging sensors. It is to be understood that the above-described embodiments are merely illustrative of some of the many specific embodiments which represent applications of the principles of the present invention. Clearly, numerous other arrangements can be readily devised by those skilled in the art without departing from the scope of the invention.