US6519372B1 - Normalized crosscorrelation of complex gradients for image autoregistration - Google Patents
- Publication number: US6519372B1
- Authority
- US
- United States
- Prior art keywords
- pixels
- gradients
- images
- sets
- crosscorrelating
- Prior art date
- Legal status: Expired - Lifetime (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/37—Determination of transform parameters for the alignment of images, i.e. image registration using transform domain methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06V10/7515—Shifting the patterns to accommodate for positional errors
Abstract
A system and method that computes the degree of translational offset between corresponding blocks extracted from images acquired by two sensors, such as electro-optic sensors, infrared sensors, and radar, for example, so that the images can be spatially registered. The present invention uses fast Fourier transform (FFT) correlation to provide for speed, and also uses gradient magnitude and phase (direction) information to provide for reliability and robustness.
Description
The present invention relates generally to aircraft-based and satellite-based imaging systems, and more particularly, to a system and method for computing the degree of translational offset between corresponding blocks extracted from images acquired by different sensors so that the images can be spatially registered.
The assignee of the present invention has developed and deployed a digital image processing system that registers images acquired under different conditions and using different types of sensors (e.g., electro-optical camera and synthetic aperture radar). This accurate pixel-to-pixel registration improves the exploitation of such imagery in three ways:
It makes it possible to detect changes in reconnaissance imagery acquired at two different times.
It improves the Image Analyst's ability to interpret the imagery by viewing the ground scene in two different spectral bands (e.g., visible and microwave).
It makes it possible to determine the exact location of objects detected in reconnaissance imagery by transferring them to reference imagery that has been very accurately referenced to the earth.
In order to properly exploit images acquired by different sensors, the two images must be registered to each other. One prior art technique to perform this image registration is disclosed in U.S. Pat. No. 5,550,937 entitled “Mechanism for Registering Digital Images Obtained from Multiple Sensors Having Diverse Image Collection Geometries”, issued Aug. 27, 1996.
In the known prior art disclosed in U.S. Pat. No. 5,550,937, methods for matching image blocks using image gradient magnitude information have been developed that work well with two images acquired with the same type of sensor (e.g., radar for both or electro-optical for both). However, heretofore, no method has been developed that uses a fast Fourier transform (for speed) and also makes use of both gradient magnitude and phase (direction) information to reliably and robustly spatially register the images from two different types of sensors.
Accordingly, it is an objective of the present invention to provide for a system and method for computing the degree of translational offset between corresponding blocks extracted from images acquired by different sensors so that the images can be spatially registered.
To accomplish the above and other objectives, the present invention provides for a system and method that computes the degree of translational offset between corresponding blocks extracted from images acquired by two sensors, such as electro-optic sensors, infrared sensors, and radar, for example, so that the images can be spatially registered. The present invention uses fast Fourier transform (FFT) correlation to provide for speed, and also uses gradient magnitude and phase (direction) information to provide for reliability and robustness.
In the present invention, two potentially dissimilar images acquired by two potentially different sensors, such as a synthetic aperture radar (SAR) and a visible band camera, for example, are resampled to a common resolution and orientation using image acquisition parameters provided with the imagery. The present invention provides an efficient, robust mechanism for computing the degree of translational offset between corresponding blocks extracted from the two resampled images so that they can be spatially registered.
The present system and method matches image blocks extracted from the two resampled images and makes use of the intensity gradient of both images that are matched. The two points of novelty implemented in the present invention are that both gradient magnitude and phase (direction) information are used by the matching mechanism to improve the robustness and reliability of the matching results, and that the matching mechanism uses a fast Fourier transform (FFT) so it can quickly match large image blocks even on a small personal computer.
The matching mechanism has several advantages over the prior art disclosed in U.S. Pat. No. 5,550,937. The present invention combines both gradient magnitude and phase information so that image structure is automatically and implicitly taken into account by the matcher. The present invention performs magnitude normalization so that the relative differences in intensity bias and gain between image blocks are ignored by the matcher. This normalization also makes the algorithm insensitive to spatial nonstationarity of edges (i.e., varying number of detailed features within each subarea) within the scene. The present invention makes use of the fast Fourier transform (FFT) so that matching results can be computed very rapidly even on a small personal computer.
The various features and advantages of the present invention may be more readily understood with reference to the following detailed description taken in conjunction with the accompanying drawing, wherein like reference numerals designate like structural elements, and in which:
FIGS. 1a-1c illustrate the nature of multisensor and multitemporal image matching in which the present invention is employed;
FIG. 2 illustrates an exemplary image misregistration that is corrected by the present invention;
FIG. 3 illustrates an exemplary system and method in accordance with the principles of the present invention; and
FIG. 4 is a flow diagram illustrating an exemplary method in accordance with the principles of the present invention.
Referring to the drawing figures, FIGS. 1a-1c illustrate the nature of multisensor and multitemporal image matching in which the present invention is employed. The present invention addresses the problem of automatically registering images acquired by potentially very different sensors, such as a synthetic aperture radar (SAR) and a visible band camera. Such images may be derived from sensors disposed on reconnaissance aircraft or an orbiting satellite, for example. The originally acquired images are resampled (using image acquisition parameters supplied with the imagery) to a common scale and orientation. However, corresponding blocks within the images may have residual offsets relative to one another due to errors in the image acquisition parameters. The present invention provides for a system and method for correcting the relative offset between a small image block (site) extracted from a first image and a corresponding image block (site) extracted from a second, potentially very dissimilar image.
FIG. 1a shows representative response tables for a synthetic aperture radar (SAR) and an electro-optical (EO) camera along with the composition of the ground scene imaged by both sensors. The image acquired by the synthetic aperture radar (SAR) shown in FIG. 1b differs from the image acquired by the electro-optical camera shown in FIG. 1c even though the ground scene shown in FIG. 1a is the same for both. This is due to the fact that a given material on the ground, such as bare earth (region 1), water (region 2), concrete (region 3), trees (region 4), and asphalt (region 5), for example, produces different brightness values when viewed by the two sensors. As is shown in FIG. 1b, the ground scene sensed by the synthetic aperture radar has brightness levels of 100 for bare earth, 50 for water, 150 for concrete, 250 for trees, and 200 for asphalt. In contrast, as is shown in FIG. 1c, the ground scene sensed by the electro-optic sensor has brightness levels of 200 for bare earth, 150 for water, 250 for concrete, 100 for trees, and 50 for asphalt. These contrast reversals make conventional intensity-based crosscorrelation methods, such as disclosed in U.S. Pat. No. 5,550,937, unreliable and inaccurate. The property that is consistent in the two images is the location and direction of edges separating different materials in the ground scene. These edges are detected as gradients in the respective gray-scale images.
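The contrast reversals described above can be checked numerically. The sketch below (illustrative only; the variable names are not from the patent) uses the brightness levels of FIGS. 1b and 1c and shows that the raw intensities reported by the two sensors are negatively correlated, which is why intensity-based crosscorrelation fails here:

```python
import numpy as np

# Response tables from FIGS. 1b and 1c: brightness of each material
# (bare earth, water, concrete, trees, asphalt) as seen by each sensor.
sar = np.array([100.0, 50.0, 150.0, 250.0, 200.0])
eo = np.array([200.0, 150.0, 250.0, 100.0, 50.0])

# The intensities are negatively correlated across the two sensors,
# so matching on raw brightness is unreliable.
corr = np.corrcoef(sar, eo)[0, 1]
print(round(corr, 3))
```

The edge locations, by contrast, coincide in both images, which is what motivates matching on gradients rather than intensities.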
Gradients (e.g., derived using a Sobel operator) have, at every pixel, both magnitude and direction (phase). This suggests that the similarity between two images (i.e., SAR and translated electro-optical) can be measured in terms of the amount of correlation between the respective directions of the gradients for all pixels within a prescribed region (i.e., a site). This is accomplished in accordance with the principles of the present invention by crosscorrelating the complex (i.e., magnitude and phase) gradients over the site. In particular, the gradient magnitudes are weighted by a multiplicative function that decreases with increasing difference in the directions of the gradients in the two images.
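A complex gradient of the kind described, carrying magnitude and phase at every pixel, can be sketched with a minimal Sobel operator. This is an illustrative implementation, not the patent's circuit; the image and function names are hypothetical:

```python
import numpy as np

def complex_gradient(img):
    """Sobel gradient at each interior pixel, returned as a complex
    number: real part = horizontal derivative, imaginary part =
    vertical derivative. The magnitude is the edge strength and the
    angle is the edge direction (phase)."""
    img = np.asarray(img, dtype=float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gh = np.zeros((h, w))
    gv = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = img[r - 1:r + 2, c - 1:c + 2]
            gh[r, c] = np.sum(kx * patch)
            gv[r, c] = np.sum(ky * patch)
    return gh + 1j * gv

# A vertical step edge: the gradient points horizontally (phase 0).
img = np.array([[0, 0, 1, 1]] * 4) * 100.0
g = complex_gradient(img)
print(np.angle(g[1, 1]), abs(g[1, 2]))
```

At a contrast reversal the phase flips by 180 degrees while the magnitude is unchanged, which is exactly the ambiguity the weighting function discussed below must tolerate.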
Three additional factors are taken into account. A weighting function W(δ+180) must be equal to W(δ), where δ is the difference (in degrees) between the respective gradient angles. This is due to the fact that the phase of the complex gradient can differ by ±180 degrees at a given edge depending on the polarity of the brightness difference across that boundary. A preferred weighting function that is used is W(δ) = cos^2n(δ), where n is a positive integer. In the explanation that follows, n is taken to be unity. The crosscorrelation function is normalized to reduce the dependence on the magnitudes of the complex gradients. This is because the magnitude of the gradient across a given boundary varies depending on the brightness difference across that boundary. The crosscorrelation function is also normalized to reduce the dependence on the number of edges within the site. Without this normalization, the crosscorrelation function would be highest for sites containing the most edges (i.e., clutter).
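The two key properties of the preferred weighting function, the 180-degree symmetry and the identity cos^2(a) = ½(1 + cos 2a) that makes it amenable to FFT correlation, can be verified with a short sketch (the sample angles are arbitrary):

```python
import math

def W(delta_deg, n=1):
    # Preferred weighting function W(delta) = cos^(2n)(delta),
    # n a positive integer (n = 1 in the explanation).
    return math.cos(math.radians(delta_deg)) ** (2 * n)

for d in (0.0, 30.0, 90.0, 137.5):
    # 180-degree phase ambiguity at an edge must not change the weight.
    assert abs(W(d + 180.0) - W(d)) < 1e-12
    # Identity cos^2(a) = (1 + cos 2a)/2, used to implement the
    # weighting via squared complex gradients.
    assert abs(W(d) - 0.5 * (1.0 + math.cos(2.0 * math.radians(d)))) < 1e-12
print("ok")
```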
FIG. 2 illustrates exemplary image misregistration that is corrected by the present invention. In FIG. 2, image block A is offset horizontally and vertically within image block B to determine the match point. By way of example, suppose image block A has w columns and h rows while image block B has W>w columns and H>h rows.
Suppose that a w by h chip is extracted from block B with the upper left corner at B(Δc,Δr) and that this chip is matched with block A. In this case, the range of permissible column and row offsets of image block A relative to image block B is Δc=0, . . . , W-w and Δr=0, . . . , H-h.
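The chip extraction and the resulting offset search range can be sketched as follows (array sizes are illustrative, and `chips` is a hypothetical helper, not from the patent):

```python
import numpy as np

def chips(B, w, h):
    """Yield every w-by-h chip of block B together with its
    (column, row) offset; offsets range over 0..W-w and 0..H-h."""
    H, W = B.shape
    for dr in range(H - h + 1):
        for dc in range(W - w + 1):
            yield dc, dr, B[dr:dr + h, dc:dc + w]

B = np.arange(6 * 8).reshape(6, 8)   # H = 6 rows, W = 8 columns
offsets = [(dc, dr) for dc, dr, _ in chips(B, w=5, h=4)]
print(len(offsets), max(offsets))
```

For W = 8, w = 5, H = 6, h = 4 there are (W-w+1)(H-h+1) = 12 candidate offsets, with Δc at most W-w = 3 and Δr at most H-h = 2.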
The following match measure M(Δc,Δr) is computed between image blocks A and B extracted from different images (which have been resampled, using the image acquisition parameters, so as to have the same scale and orientation), as shown in FIG. 3:

M(Δc,Δr) = [Σ(c,r) |∇A(c,r)|^2 |∇B(c+Δc,r+Δr)|^2 cos^2(θA(c,r) − θB(c+Δc,r+Δr))] / [Σ(c,r) |∇A(c,r)|^2 |∇B(c+Δc,r+Δr)|^2]
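A direct spatial-domain version of the normalized crosscorrelation of complex gradients can be sketched as below. It assumes the match measure is the sum of |∇A|^2 |∇B|^2 cos^2(θA − θB) normalized by the sum of |∇A|^2 |∇B|^2, an interpretation consistent with the FFT structure of FIG. 3; the function and variable names are illustrative:

```python
import numpy as np

def match_measure(gA, gB_chip):
    """Normalized crosscorrelation of complex gradients over one site.
    Assumed form: sum(|gA|^2 * |gB|^2 * cos^2(phase difference))
    divided by sum(|gA|^2 * |gB|^2); gA and gB_chip are same-shape
    complex gradient arrays."""
    m2 = (np.abs(gA) ** 2) * (np.abs(gB_chip) ** 2)
    delta = np.angle(gA) - np.angle(gB_chip)
    num = np.sum(m2 * np.cos(delta) ** 2)
    den = np.sum(m2)
    return num / den if den > 0 else 0.0

g = np.array([[1 + 1j, 2 - 1j], [0.5j, 3 + 0j]])
# Identical gradient fields match perfectly; rotating every edge
# direction by 90 degrees scores zero; a pure contrast reversal
# (180-degree phase flip) matches perfectly again, as required.
print(round(match_measure(g, g), 6),
      round(match_measure(g, g * 1j), 6),
      round(match_measure(g, -g), 6))
```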
More particularly, FIG. 3 illustrates an exemplary system 10 and method 20 for providing a normalized crosscorrelation of complex gradients using fast Fourier transforms (FFTs) 16.
The following symbols are used with reference to the above equation and FIG. 3:
c: pixel column index
r: pixel row index
{A}: a set of pixels in an image block A to which an offset is applied,
w: the width of (or number of columns in) image block A,
h: the height of (or number of rows in) image block A,
{B}: a set of pixels in image block B that image block A is offset relative to,
Δc: the column offset of image block A relative to image block B,
Δr: the row offset of image block A relative to image block B, and
∇A(c,r) = |∇A(c,r)|e^jθA(c,r) = (∇A)H(c,r) + j(∇A)V(c,r), and
∇B(c,r) = |∇B(c,r)|e^jθB(c,r) = (∇B)H(c,r) + j(∇B)V(c,r) are intensity
gradients at a pixel with (column, row) coordinates (c, r) in images A and B expressed in magnitude-phase form and horizontal-vertical gradient form.
Referring to FIG. 3, the system 10 comprises an algorithm that implements the above match measure equation. In FIG. 3, the above equation has been implemented using the identity cos^2(a) = ½(1 + cos 2a).
The system 10 processes blocks of images acquired by first and second sensors. The images are resampled to a common scale and orientation, and a relative offset exists between a set of pixels {A} extracted from the first image and a corresponding set of pixels {B} extracted from the second image.
The system 10 has first and second processing paths 11 a, 11 b that respectively process a set of pixels {A} in an image block A to which an offset is applied, and a set of pixels {B} in an image block B relative to which image block A is offset. In each processing path 11 a, 11 b, the respective sets of pixels {A}, {B} are processed using intensity gradient circuits 12 a, 12 b to generate intensity gradients at each pixel with (column, row) coordinates (c, r) in images A and B expressed in magnitude-phase form and horizontal-vertical gradient form. The intensity gradient circuits 12 a, 12 b may comprise Sobel operators, for example.
Outputs of the intensity gradient circuits 12 a, 12 b in each processing path 11 a, 11 b are multiplied together and by a factor of two in a first multiplier 13 a to produce complex gradients. Outputs of the first multipliers 13 a are input to a first fast Fourier transform 16 a that crosscorrelates the respective inputs.
The outputs of the intensity gradient circuits 12 a, 12 b in each processing path 11 a, 11 b are respectively input to squaring circuits 14 a, 14 b. Outputs of the squaring circuits 14 a, 14 b in each processing path 11 a, 11 b are input to a first adder 15 a where they are subtracted. Outputs of the first adders 15 a in each processing path 11 a, 11 b are input to a second fast Fourier transform 16 b that crosscorrelates the respective inputs. The outputs of the first and second fast Fourier transforms 16 a, 16 b are summed in a third adder 15 c.
Outputs of the squaring circuits 14 a, 14 b in each processing path 11 a, 11 b are input to a second adder 15 b where they are added. Outputs of the second adders 15 b in each processing path 11 a, 11 b are input to a third fast Fourier transform 16 c that crosscorrelates the respective inputs. The output of the third fast Fourier transform 16 c is inverted in an inverter circuit 17.
The output of the third adder 15 c and the output of the inverter circuit 17 are multiplied together, and thus normalized, in a second multiplier 13 b, which produces a match surface. A maximum value in the match surface is determined in a maximum value circuit 18 to produce a match offset, which is the output of the system 10.
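The pipeline of FIG. 3 can be sketched in software. The code below is one interpretation of the block diagram (crosscorrelations of the real and imaginary parts of the squared complex gradients, summed, then normalized by the crosscorrelation of the squared magnitudes); the function names and the synthetic gradient data are illustrative, not from the patent:

```python
import numpy as np

def xcorr_fft(a, b):
    """All-offsets crosscorrelation via zero-padded FFTs:
    out[dr, dc] = sum over (r, c) of a[r, c] * b[r + dr, c + dc],
    for dr = 0..H-h and dc = 0..W-w (no wraparound in that range)."""
    H, W = b.shape
    h, w = a.shape
    Af = np.fft.fft2(a, s=(H, W))
    Bf = np.fft.fft2(b, s=(H, W))
    full = np.real(np.fft.ifft2(np.conj(Af) * Bf))
    return full[:H - h + 1, :W - w + 1]

def match_offset(gA, gB):
    """One interpretation of the FIG. 3 pipeline. gA, gB are complex
    gradients of blocks A (h x w) and B (H x W). The squared complex
    gradient has real part H^2 - V^2 and imaginary part 2HV; their
    two correlations (FFTs 16b and 16a) are summed and normalized by
    the correlation of squared magnitudes (FFT 16c, then inverted)."""
    sA, sB = gA ** 2, gB ** 2
    num = xcorr_fft(sA.real, sB.real) + xcorr_fft(sA.imag, sB.imag)
    den = xcorr_fft(np.abs(gA) ** 2, np.abs(gB) ** 2)
    surface = num / np.where(den > 0, den, 1.0)   # match surface
    dr, dc = np.unravel_index(np.argmax(surface), surface.shape)
    return int(dc), int(dr)                       # (column, row) offset

# Plant block A inside a noisy block B at offset (column 3, row 2)
# and recover that offset from the peak of the match surface.
rng = np.random.default_rng(0)
gA = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
gB = 0.1 * (rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16)))
gB[2:10, 3:11] = gA
print(match_offset(gA, gB))
```

Because the real and imaginary parts of the squared gradient are H^2 − V^2 and 2HV, summing their two correlations yields Σ|∇A|^2|∇B|^2 cos 2(θA − θB), which by the identity cos^2(a) = ½(1 + cos 2a) ranks candidate offsets identically to the cos^2-weighted measure.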
For the purposes of completeness, FIG. 4 is a flow diagram illustrating an exemplary method 20 in accordance with the principles of the present invention. The exemplary method 20 comprises the following steps.
First and second images are acquired 21 using first and second sensors. The first and second images are resampled 22 to a common scale and orientation, such that a relative offset exists between a set of pixels {A} extracted from the first image and a corresponding set of pixels {B} extracted from the second image.
The respective sets of pixels {A}, {B} are processed to generate 23 intensity gradients at each pixel with (column, row) coordinates (c, r) in the images expressed in magnitude-phase form and horizontal-vertical gradient form. The intensity gradients are multiplied together and by a factor of two 24 to produce complex gradients. The complex gradients are crosscorrelated 25 by a first fast Fourier transform 16 a.
The intensity gradients are squared 26, are subtracted 27 from each other, and are crosscorrelated 28 in a second fast Fourier transform 16b. The crosscorrelated intensity gradients produced by the first and second fast Fourier transforms 16 a, 16 b are summed together 29.
The squared intensity gradients are added 30 and crosscorrelated 31 in a third fast Fourier transform 16 c. The crosscorrelated squared intensity gradients are normalized 32 and are multiplied 33 by the crosscorrelated summed intensity gradients to produce a match surface. The match surface is then processed 34 to generate a maximum value that corresponds to a match offset between the sets of pixels {A}, {B}.
Thus, a system and method for computing the degree of translational offset between corresponding blocks extracted from images acquired by different sensors so that the images can be spatially registered have been disclosed. It is to be understood that the above-described embodiments are merely illustrative of some of the many specific embodiments that represent applications of the principles of the present invention. Clearly, numerous and other arrangements can be readily devised by those skilled in the art without departing from the scope of the invention.
Claims (17)
1. A system for computing a degree of translational offset between corresponding sets of pixels {A}, {B} extracted from images acquired by first and second sensors so that the images can be spatially registered, which images are resampled to a common scale and orientation, and wherein a relative offset exists between a set of pixels {A} extracted from the first image and a corresponding set of pixels {B} extracted from the second image, comprising:
intensity gradient circuitry for generating intensity gradient magnitude and phase information regarding the sets of pixels;
multiplication circuitry for multiplying the gradients;
squaring circuitry for generating the square of gradient information regarding the sets of pixels;
addition circuitry for subtracting and adding, respectively, the squares of gradients and for adding the outputs of FFT correlators;
matching circuitry for crosscorrelating the magnitude and phase gradients over the sets of pixels; and
normalizing circuitry for normalizing the crosscorrelated magnitude and phase gradients.
2. The system recited in claim 1 wherein the matching circuitry comprises fast Fourier transforms.
3. The system recited in claim 1 wherein the matching circuitry implements a predetermined weighting function that comprises a multiplicative function that decreases with increasing difference in directions of the gradients in the set of pixels.
4. The system recited in claim 1 wherein the matching circuitry implements a predetermined weighting function that comprises a weighting function W(δ+180) that is equal to W(δ), where δ is the difference in degrees between respective gradient angles.
5. The system recited in claim 4 wherein the predetermined weighting function is W(δ)=cos²ⁿ(δ).
6. A system for computing a degree of translational offset between corresponding sets of pixels {A}, {B} extracted from first and second images acquired by first and second sensors so that the images can be spatially registered, which images are resampled to a common scale and orientation, and wherein a relative offset exists between a set of pixels {A} extracted from the first image and a corresponding set of pixels {B} extracted from the second image, comprising:
first and second processing paths that respectively process the set of pixels {A} in the first image to which an offset is applied, and a set of pixels {B} in the second image relative to which the first image is offset, which processing paths comprise:
intensity gradient circuits for processing the respective sets of pixels {A}, {B} to generate intensity gradients at each pixel with coordinates in the first and second images expressed in magnitude-phase form and horizontal-vertical gradient form;
a first multiplier for multiplying outputs of the intensity gradient circuitry together and by a factor of two to produce complex gradients;
squaring circuits for squaring the complex gradients output by the intensity gradient circuitry;
a first adder for subtracting outputs of the squaring circuits; and
a second adder for adding outputs of the squaring circuits;
a first crosscorrelating circuit for crosscorrelating outputs of the first multipliers;
a second crosscorrelating circuit for crosscorrelating outputs of the first adders;
a third adder for adding outputs of the first and second crosscorrelating circuits;
a third crosscorrelating circuit for crosscorrelating outputs of the second adders;
an inverter circuit for inverting the crosscorrelated output of the third crosscorrelating circuit;
a third multiplier for multiplying outputs of the third adder and the inverter circuit to produce a match surface; and
a maximum value circuit for processing the match surface to produce a match offset.
7. The system recited in claim 6 wherein respective crosscorrelating circuits each comprise a fast Fourier transform.
8. A method for computing a degree of translational offset between corresponding sets of pixels {A}, {B} extracted from images acquired by first and second sensors so that the images can be spatially registered, comprising the steps of:
resampling the first and second images to a common scale and orientation, and wherein a relative offset exists between a set of pixels {A} extracted from the first image and a corresponding set of pixels {B} extracted from the second image;
generating intensity gradient magnitude and phase information regarding the sets of pixels to detect edges in the images;
multiplying phase gradient information regarding the sets of pixels;
squaring the intensity gradient information regarding the sets of pixels;
crosscorrelating the magnitude and phase gradients over the sets of pixels; and
normalizing the crosscorrelated magnitude and phase gradients.
9. The method recited in claim 8 wherein the crosscorrelating step comprises fast Fourier transforming the magnitude and phase gradients over the sets of pixels.
10. The method recited in claim 8 wherein the matching circuitry implements a predetermined weighting function that comprises a multiplicative function that decreases with increasing difference in directions of the gradients in the sets of pixels.
11. The method recited in claim 8 wherein the matching circuitry implements a predetermined weighting function that comprises a weighting function W(δ+180) that is equal to W(δ), where δ is the difference in degrees between respective gradient angles.
12. The method recited in claim 11 wherein the predetermined weighting function is W(δ)=cos²ⁿ(δ).
13. A method for computing a degree of translational offset between corresponding sets of pixels {A}, {B} extracted from images acquired by first and second sensors so that the images can be spatially registered, comprising the steps of:
resampling the first and second images to a common scale and orientation, and wherein a relative offset exists between a set of pixels {A} extracted from the first image and a corresponding set of pixels {B} extracted from the second image;
processing the respective sets of pixels {A}, {B} to generate intensity gradients at each pixel with coordinates in images A and B expressed in magnitude-phase form and horizontal-vertical gradient form;
multiplying the intensity gradients together and by a factor of two to produce complex gradients;
crosscorrelating the complex gradients;
squaring the intensity gradients;
subtracting the squared intensity gradients from each other;
crosscorrelating the subtracted squared intensity gradients;
summing the crosscorrelated intensity gradients produced by the first and second fast Fourier transforms;
adding the squared intensity gradients;
crosscorrelating the added squared intensity gradients;
normalizing the crosscorrelated squared intensity gradients;
multiplying the normalized crosscorrelated squared intensity gradients by the crosscorrelated summed intensity gradients to produce a match surface; and
processing the match surface to generate a maximum value that corresponds to a match offset between the sets of pixels {A}, {B}.
14. The method recited in claim 13 wherein the crosscorrelating steps each comprise fast Fourier transforming the magnitude and phase gradients over the sets of pixels.
15. The method recited in claim 13 wherein the matching circuitry implements a predetermined weighting function that comprises a multiplicative function that decreases with increasing difference in directions of the gradients in the sets of pixels.
16. The method recited in claim 13 wherein the matching circuitry implements a predetermined weighting function that comprises a weighting function W(δ+180) that is equal to W(δ), where δ is the difference in degrees between respective gradient angles.
17. The method recited in claim 16 wherein the predetermined weighting function is W(δ)=cos²ⁿ(δ).
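The symmetry claimed for the weighting function in claims 4-5, 11-12, and 16-17 is easy to verify: W(δ)=cos²ⁿ(δ) is unchanged by adding 180° to δ, because the even exponent absorbs the sign flip of the cosine. The following is a minimal sketch (the helper name `weight` is illustrative, not from the patent):

```python
import math

def weight(delta_deg, n=1):
    # W(delta) = cos^(2n)(delta): a multiplicative weight that decreases as
    # the gradient directions diverge. The even power 2n makes
    # W(delta + 180) == W(delta), so contrast reversals between sensors
    # (gradients pointing in opposite directions) are weighted identically.
    return math.cos(math.radians(delta_deg)) ** (2 * n)
```

For example, `weight(30.0)` and `weight(210.0)` both evaluate to 0.75, while the weight is 1 for perfectly aligned gradients and falls to essentially 0 for perpendicular ones.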
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/386,246 US6519372B1 (en) | 1999-08-31 | 1999-08-31 | Normalized crosscorrelation of complex gradients for image autoregistration |
Publications (1)
Publication Number | Publication Date |
---|---|
US6519372B1 true US6519372B1 (en) | 2003-02-11 |
Family
ID=23524791
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/386,246 Expired - Lifetime US6519372B1 (en) | 1999-08-31 | 1999-08-31 | Normalized crosscorrelation of complex gradients for image autoregistration |
Country Status (1)
Country | Link |
---|---|
US (1) | US6519372B1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3748644A (en) * | 1969-12-31 | 1973-07-24 | Westinghouse Electric Corp | Automatic registration of points in two separate images |
US5274236A (en) * | 1992-12-16 | 1993-12-28 | Westinghouse Electric Corp. | Method and apparatus for registering two images from different sensors |
US5550937A (en) * | 1992-11-23 | 1996-08-27 | Harris Corporation | Mechanism for registering digital images obtained from multiple sensors having diverse image collection geometries |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050243323A1 (en) * | 2003-04-18 | 2005-11-03 | Hsu Stephen C | Method and apparatus for automatic registration and visualization of occluded targets using ladar data |
US7242460B2 (en) * | 2003-04-18 | 2007-07-10 | Sarnoff Corporation | Method and apparatus for automatic registration and visualization of occluded targets using ladar data |
US20080181487A1 (en) * | 2003-04-18 | 2008-07-31 | Stephen Charles Hsu | Method and apparatus for automatic registration and visualization of occluded targets using ladar data |
US20050276508A1 (en) * | 2004-06-15 | 2005-12-15 | Lockheed Martin Corporation | Methods and systems for reducing optical noise |
US20080232709A1 (en) * | 2007-03-22 | 2008-09-25 | Harris Corporation | Method and apparatus for registration and vector extraction of sar images based on an anisotropic diffusion filtering algorithm |
US7929802B2 (en) * | 2007-03-22 | 2011-04-19 | Harris Corporation | Method and apparatus for registration and vector extraction of SAR images based on an anisotropic diffusion filtering algorithm |
US20120281909A1 (en) * | 2010-01-06 | 2012-11-08 | Nec Corporation | Learning device, identification device, learning identification system and learning identification device |
US9036903B2 (en) * | 2010-01-06 | 2015-05-19 | Nec Corporation | Learning device, identification device, learning identification system and learning identification device |
EP2523162A4 (en) * | 2010-01-06 | 2018-01-10 | Nec Corporation | Learning device, identification device, learning identification system and learning identification device |
WO2012017187A1 (en) | 2010-08-06 | 2012-02-09 | Qinetiq Limited | Alignment of synthetic aperture images |
GB2482551A (en) * | 2010-08-06 | 2012-02-08 | Qinetiq Ltd | Alignment of synthetic aperture radar images |
US20130129253A1 (en) * | 2010-08-06 | 2013-05-23 | Qinetiq Limited | Alignment of synthetic aperture images |
US8938130B2 (en) * | 2010-08-06 | 2015-01-20 | Qinetiq Limited | Alignment of synthetic aperture images |
US20170178348A1 (en) * | 2014-09-05 | 2017-06-22 | Huawei Technologies Co., Ltd. | Image Alignment Method and Apparatus |
US10127679B2 (en) * | 2014-09-05 | 2018-11-13 | Huawei Technologies Co., Ltd. | Image alignment method and apparatus |
US20180070919A1 (en) * | 2015-03-23 | 2018-03-15 | Kyushu Institute Of Technology | Biological signal detection device |
US10058304B2 (en) * | 2015-03-23 | 2018-08-28 | Kyushu Institute Of Technology | Biological signal detection device |
US11087145B2 (en) * | 2017-12-08 | 2021-08-10 | Kabushiki Kaisha Toshiba | Gradient estimation device, gradient estimation method, computer program product, and controlling system |
CN110956640B (en) * | 2019-12-04 | 2023-05-05 | 国网上海市电力公司 | Heterogeneous image edge point detection and registration method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Casu et al. | Deformation time-series generation in areas characterized by large displacement dynamics: The SAR amplitude pixel-offset SBAS technique | |
EP0449303B1 (en) | Phase difference auto focusing for synthetic aperture radar imaging | |
US8306274B2 (en) | Methods for estimating peak location on a sampled surface with improved accuracy and applications to image correlation and registration | |
Inglada et al. | On the possibility of automatic multisensor image registration | |
Stow | Reducing the effects of misregistration on pixel-level change detection | |
US4490851A (en) | Two-dimensional image data reducer and classifier | |
US20050147324A1 (en) | Refinements to the Rational Polynomial Coefficient camera model | |
Kashef et al. | A survey of new techniques for image registration and mapping | |
US6519372B1 (en) | Normalized crosscorrelation of complex gradients for image autoregistration | |
Naraghi et al. | Geometric rectification of radar imagery using digital elevation models | |
US6677885B1 (en) | Method for mitigating atmospheric propagation error in multiple pass interferometric synthetic aperture radar | |
CN115060208A (en) | Power transmission and transformation line geological disaster monitoring method and system based on multi-source satellite fusion | |
Chureesampant et al. | Automatic GCP extraction of fully polarimetric SAR images | |
Rauchmiller Jr et al. | Measurement of the Landsat Thematic Mapper modulation transfer function using an array of point sources | |
Mumtaz et al. | Attitude determination by exploiting geometric distortions in stereo images of DMC camera | |
Huang et al. | SAR and optical images registration using shape context | |
Yang et al. | Relative geometric refinement of patch images without use of ground control points for the geostationary optical satellite GaoFen4 | |
Oh et al. | Automated georegistration of high-resolution satellite imagery using a RPC model with airborne lidar information | |
Kelany et al. | Improving InSAR image quality and co-registration through CNN-based super-resolution | |
Peterson et al. | Registration of multi-frequency SAR imagery using phase correlation methods | |
US9218641B1 (en) | Algorithm for calculating high accuracy image slopes | |
Hessel et al. | A global registration method for satellite image series | |
Kwoh et al. | DTM generation from 35-day repeat pass ERS-1 interferometry | |
Saidi et al. | A refined automatic co-registration method for high-resolution optical and sar images by maximizing mutual information | |
Meyer | Estimating the effective spatial resolution of an AVHRR time series |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: LOCKHEED MARTIN CORPORATION, MARYLAND. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:EPPLER, WALTER G.;PAGLIERONI, DAVID W.;PETERSEN, SIDNEY M.;AND OTHERS;REEL/FRAME:010215/0803;SIGNING DATES FROM 19990825 TO 19990827 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| FPAY | Fee payment | Year of fee payment: 4 |
| FPAY | Fee payment | Year of fee payment: 8 |
| FPAY | Fee payment | Year of fee payment: 12 |