EP2100270A1 - Methods and apparatus for stitching digital images - Google Patents

Methods and apparatus for stitching digital images

Info

Publication number
EP2100270A1
Authority
EP
European Patent Office
Prior art keywords
image
frequency band
processor
splicing
subimages
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP06840459A
Other languages
German (de)
French (fr)
Other versions
EP2100270A4 (en)
Inventor
Gregory John Ward
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dolby Laboratories Licensing Corp
Original Assignee
Dolby Laboratories Licensing Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dolby Laboratories Licensing Corp filed Critical Dolby Laboratories Licensing Corp
Publication of EP2100270A1 publication Critical patent/EP2100270A1/en
Publication of EP2100270A4 publication Critical patent/EP2100270A4/en
Withdrawn legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Circuits (AREA)

Abstract

A method for stitching input images which overlap in an overlap region comprises: obtaining a low-frequency band subimage from each input image; obtaining a high-frequency band subimage from each input image; blending the low-frequency band subimages across at least a portion of the overlap region to create a low-frequency band output image; splicing the high-frequency band subimages across at least a portion of the overlap region to create a high-frequency band output image; and combining the low-frequency band output image and the high-frequency band output image to create a full band output stitched image.

Description

METHODS AND APPARATUS FOR STITCHING DIGITAL IMAGES
Technical Field [0001] This invention relates to methods and apparatus for stitching together two or more digital images.
Background of the Invention
[0002] Digital images depicting overlapping fields of view may be combined together to form a composite image of a larger, unified field of view through a process known as "stitching". Stitching may be used to join a number of original images together to make a composite image having a larger field of view than any of the original images.
[0003] One problem which can occur in some prior art stitched images is the appearance of visually perceptible "seams" where the fields of view from the combined images meet. Known methods for stitching typically require exact or nearly exact overlay of the input images in order to produce a composite image. Most known methods for image stitching require identical image exposures, and do not work well, if at all, with high dynamic range images. Some known methods for stitching are computationally intensive and are not memory efficient when implemented in computer software.
[0004] There is a general need for effective methods for stitching together digital images to yield high quality composite images. There is a particular need for such methods that are capable of combining high dynamic range images into composite images.
Summary of the Invention
[0005] The following embodiments and aspects thereof are described and illustrated in ways which are meant to be exemplary and illustrative, not limiting in scope. In various embodiments, one or more of the above-described problems have been reduced or eliminated, while other embodiments are directed to other improvements.
[0006] One aspect of the invention relates to a method for stitching overlapping images. The method comprises: obtaining low-frequency band subimages from input images, obtaining high-frequency band subimages from input images, blending the low-frequency band subimages to create a low-frequency band stitched image, combining the high-frequency band subimages by splicing the high-frequency band subimages at splice locations to create a high-frequency band output stitched image, and combining the low-frequency band output stitched image and the high-frequency band stitched image to create a full-band output stitched image.
[0007] Another aspect of the invention relates to an apparatus for stitching overlapping images. The apparatus comprises a first image blending processor, an image splicing processor; and an image merger connected to the image blending processor and the image splicing processor.
Brief Description of the Drawings
[0008] Exemplary embodiments are illustrated in the attached drawings. The embodiments and figures disclosed herein are illustrative and not restrictive:
Figures 1a-c show overlapping input images sharing a common sub-field of view and an output stitched image;
Figures 2a-d show scanlines of example overlap regions shared by input images;
Figure 3 is a flowchart illustrating a method according to an example embodiment of the invention;
Figures 4a-c show an output stitched image which is non-rectangular;
Figure 5 is a dataflow diagram illustrating an example of splicing images together;
Figures 6a-c show two subimage scanlines spliced to create an output subimage scanline;
Figure 7 is a block diagram of an image splicing apparatus according to one embodiment of the invention; and,
Figure 8 is a block diagram of an image splicing apparatus according to another embodiment of the invention.
Detailed Description
[0009] Throughout the following description, specific details are set forth in order to provide a more thorough understanding of the invention. However, the invention may be practiced without these particulars. In other instances, well known elements have not been shown or described in detail to avoid unnecessarily obscuring the invention. Accordingly, the specification and drawings are to be regarded in an illustrative, rather than a restrictive, sense.
[0010] Apparatus and methods according to some embodiments of the invention described herein blend low spatial frequency components of one input image with low spatial frequency components of another input image and join high spatial frequency components of the input images discontinuously at locations selected to coincide with features of the input images. The low spatial frequency components may be blended smoothly, without introducing salient artifacts. The high spatial frequency components may be joined along each of a plurality of scanlines of an overlap region, at a splice location on each scanline closest to an edge feature in that scanline.
[0011] One aspect of the invention relates to methods for stitching images together. In the illustrative description that follows, an example method is described as being applied to two overlapping input images whose fields of view share a common sub-field of view, but it is to be understood that methods according to embodiments of the invention could also be used to combine any number of input images. The portion of each image depicting the common sub-field of view may be referred to as an overlap region. The overlap region in each image contains a plurality of points which correspond to points in the overlap region of the other image. A point in the overlap region of one image and the corresponding point in the overlap region of the other image may be collectively referred to as a pair of correspondence points.
[0012] Figures 1a and 1b show first and second input images 10a and 10b, which share a common sub-field of view depicted as input regions 11a and 11b. Figure 1c shows a stitched image 10c obtained by applying a method as described herein to images 10a and 10b. Stitched image 10c contains the common sub-field of view depicted as overlap region 11c. Methods according to embodiments of the invention may be applied to stitch together any number of images so long as every one of the images shares an overlap region with at least one of the other images.
[0013] Figures 2a-d respectively show example overlap regions 111a-d. Each overlap region 111 comprises a plurality of pixels, each pixel having a first pixel value from the first input image and a second pixel value from the second input image associated therewith. Each overlap region 111 is defined between a first boundary 123 and a second boundary 124, and may be divided into a plurality of scanlines. Example scanlines 121a-d are indicated with shading in Figures 2a-d, respectively. The scanlines in the example overlap region 111a shown in Figure 2a are defined generally orthogonal to parallel boundaries 123a and 124a.
[0014] In some situations, the input images may be oriented such that the boundaries of the overlap region are not parallel. In such situations, the input images may be cropped and/or processed with a rotation function such that the boundaries of the overlap regions are made to be parallel, but this is not required in all embodiments.
[0015] In the examples shown in Figures 2b-d, the input images are oriented such that the boundaries 123b-d and 124b-d of overlap regions 111b-d are not parallel. For example, scanline 121b of Figure 2b is defined generally orthogonal to first boundary 123b, scanline 121c of Figure 2c is defined generally orthogonal to second boundary 124c, and scanline 121d of Figure 2d is defined at oblique angles to both first and second boundaries 123d and 124d.
[0016] In some situations, the input images may be normalized to account for differing exposure levels. For example, in some embodiments the input images may be normalized based on exposure information associated with the input images. Alternatively or additionally, normalization may be accomplished by computing an average exposure and normalizing the exposure level for the overlap region.
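As an informal illustration of the normalization described in paragraph [0016], the following Python sketch scales the second input image so that the mean exposure of its overlap region matches that of the first. The function name, the indexing convention and the use of a simple mean are assumptions made for illustration, not details prescribed by this disclosure.

```python
import numpy as np

def normalize_exposure(img_a, img_b, overlap_a, overlap_b):
    """Scale img_b so that its overlap region has the same average
    exposure as the corresponding overlap region of img_a.

    overlap_a and overlap_b are (slice, slice) pairs selecting the
    shared overlap region in each image (an assumed convention)."""
    mean_a = img_a[overlap_a].mean()
    mean_b = img_b[overlap_b].mean()
    gain = mean_a / max(mean_b, 1e-12)  # guard against division by zero
    return img_a, img_b * gain
```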
[0017] Figure 3 shows an example method 20 for stitching first and second input images together. Method 20 may comprise aligning the first and second images according to image correspondence data, which may be obtained using known techniques. In some embodiments, method 20 comprises aligning the input images according to a pair of correspondence points between the images. A pair of correspondence points may be selected manually, or generated automatically using known techniques such as Lucas and Kanade motion estimation or optimal scanline feature matching. In general, a single feature match in the overlap region is sufficient for aligning the input images.
[0018] In block 22, method 20 obtains first and second low-frequency band subimages from the input images. The low-frequency band subimages each comprise at least part of the overlap region defined by the common sub-field of view shared by both input images. In some embodiments, the low-frequency band subimages are obtained by filtering the input images with a low-frequency spatial filter. The low-frequency band subimages may have resolutions which are lower than the resolutions of the input images. In one particular embodiment, the low-frequency band subimages have resolutions equal to 1/16th of the input image resolutions.
[0019] In block 24, method 20 obtains first and second high-frequency band subimages from the input images. The high-frequency band subimages also each comprise at least part of the overlap region defined by the common sub-field of view shared by both input images. In some embodiments, the high-frequency band subimages are obtained by filtering the input images with a high-frequency spatial filter. In one particular embodiment, the high-frequency band subimage has a resolution equal to that of the input images.
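The two filtering steps of blocks 22 and 24 can be sketched as follows. This is a minimal illustration, not the method of this disclosure: it assumes a Gaussian low-pass filter, keeps the low band at full resolution rather than the 1/16th resolution mentioned above, and stores the high band as the ratio image/low so that the bands later recombine by multiplication, a convention suggested by paragraph [0032] and suitable for positive-valued (e.g. linear high dynamic range) pixel data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bands(img, sigma=8.0, eps=1e-6):
    """Split a positive-valued image into a low-frequency band
    (Gaussian low-pass) and a high-frequency band stored as the
    ratio img / low, so that low * high reconstructs img."""
    low = gaussian_filter(img, sigma=sigma)
    high = img / np.maximum(low, eps)  # ratio form of the high band
    return low, high
```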
[0020] In block 26, method 20 blends the first and second low-frequency band subimages to create a low-frequency band output subimage. Blending of the low-frequency band subimages may proceed along one or more scanlines. In one embodiment, blending comprises using a sinusoidal blend function to create the low-frequency band output subimage. Other suitable blend functions, such as, for example, exponential blend functions or linear blend functions, may also be used.
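A minimal sketch of blending one pair of low-frequency scanlines follows. The raised-cosine weight used here is one plausible reading of a "sinusoidal blend function"; the weight shape and the assumption that both scanlines are already aligned and of equal length are illustrative choices.

```python
import numpy as np

def blend_scanline(line_a, line_b):
    """Blend two aligned overlap scanlines of equal length with a
    sinusoidal (raised-cosine) weight that falls smoothly from 1 at
    the first boundary to 0 at the second."""
    t = np.linspace(0.0, 1.0, len(line_a))
    w = 0.5 * (1.0 + np.cos(np.pi * t))
    return w * line_a + (1.0 - w) * line_b
```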
[0021] Blending low-frequency band subimages reduces the appearance of Mach band artifacts when the exposures of the input images are different. To further minimize the appearance of Mach band artifacts created by exposure differences between the input images, method 20 may additionally comprise a step of normalizing the subimages based on part or all of the shared overlap region.
[0022] In block 28, method 20 splices the first and second high-frequency band subimages to create a high-frequency band output subimage. Splicing of the high-frequency band subimages may proceed along one or more scanlines. In the following description of an illustrative embodiment, splicing is described for a single scanline, though the invention may be practiced by splicing a plurality of scanlines sequentially or simultaneously.
[0023] Splicing respectively takes first and second scanlines from the first and second high-frequency band subimages as inputs, and produces a stitched scanline as an output. A portion of each input scanline is from the overlap region of its corresponding high-frequency band subimage. In splicing, values of pixels from the first input scanline are assigned to pixels of a first side of the output scanline between a first boundary of the overlap region and a splice location, and values of pixels from the second input scanline are assigned to pixels of a second side of the output scanline between the splice location and a second boundary of the overlap region.
[0024] The splice location of the output scanline may be selected based on edge features in the first and second input scanlines. Locations of edge features in scanlines may be obtained by using known algorithms. In one embodiment, edge features are identified at pixel interfaces where the difference in intensity between pixels adjacent to a pixel interface exceeds a threshold. Other embodiments may identify edge features using gradient, Laplace or Sobel methods.
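The threshold test of paragraph [0024] might be sketched as below; the threshold value and the treatment of a scanline as a 1-D intensity array are assumptions for illustration.

```python
import numpy as np

def edge_interfaces(line, threshold=0.1):
    """Return the indices i of pixel interfaces (between pixels i and
    i+1) where the intensity difference across the interface exceeds
    the threshold."""
    return np.flatnonzero(np.abs(np.diff(line)) > threshold)
```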
[0025] The splice location of the output scanline may comprise, for example, a pixel interface corresponding to an edge feature common to both the first and second input scanlines. If an output scanline contains only one pixel interface corresponding to a common edge feature, that pixel interface may be selected as the splice location for that scanline. If an output scanline contains a plurality of pixel interfaces corresponding to common edge features, the selection of a particular pixel interface as the splice location may be determined according to one or a combination of the following factors: i) the intensity gradient of the edge feature that comprises the pixel interface; ii) whether the edge feature that corresponds to the pixel interface is present in adjacent scanlines; iii) the number of scanlines in the subimages that include the edge feature that corresponds to the pixel interface; and iv) a random selection among a plurality of suitable pixel interfaces.
[0026] Splice locations of adjacent output scanlines may be advantageously selected to be near each other in some embodiments, so that artifacts created by switching splice locations at each scanline are reduced. In some embodiments, if multiple pixel interfaces which correspond to common edge features could be selected as the splice location for a current scanline, the pixel interface corresponding to the edge feature closest to the splice location of a nearby scanline is selected as the splice location for the current scanline.
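Putting paragraphs [0023] to [0026] together, a hedged sketch of splicing one scanline follows. It reuses the edge_interfaces helper sketched above, takes only common edge features as candidate splice locations, prefers the candidate nearest the previous scanline's splice, and signals the no-common-edge case so the caller can fall back to blending as described in paragraph [0028]. The tie-breaking rule and the function signature are illustrative assumptions.

```python
import numpy as np

def splice_scanline(line_a, line_b, prev_splice=None, threshold=0.1):
    """Splice two aligned high-frequency overlap scanlines at a pixel
    interface corresponding to an edge feature common to both.
    Returns (output_scanline, splice_index), or (None, None) when no
    common edge feature exists and blending should be used instead."""
    common = np.intersect1d(edge_interfaces(line_a, threshold),
                            edge_interfaces(line_b, threshold))
    if common.size == 0:
        return None, None
    if prev_splice is None:
        splice = int(common[0])  # arbitrary first candidate; could be random
    else:
        # Prefer the candidate nearest the previous scanline's splice
        splice = int(common[np.argmin(np.abs(common - prev_splice))])
    out = np.concatenate([line_a[:splice + 1], line_b[splice + 1:]])
    return out, splice
```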
[0027] Some embodiments may also comprise varying the length of the scanlines, for example by either squashing or stretching the scanlines, to maximize the likelihood that edge features will be continuous from one scanline to the next and thus that splice locations of adjacent scanlines are near each other. Squashing or stretching may be accomplished using any suitable scaling techniques. The length of scanlines may be varied, for example, by up to 15% in some embodiments. In embodiments wherein the length of a scanline is varied, the length of a corresponding scanline of the low-frequency band output subimage may be varied by the same amount to minimize displacement artifacts. Also, any variance in the length of one scanline may be taken into account when varying the length of adjacent scanlines. Varying the length of scanlines may be desirable in situations where the input images have a visually perceptible vertical line in the overlap region. The length of scanlines may be varied either manually or automatically.
[0028] If an input scanline of one of the high-frequency band subimages does not contain any pixel interfaces corresponding to edge features, blending may be employed instead of splicing to obtain the output scanline, so that false edges are not created in the high-frequency band output subimage. As with the blending of the low-frequency band subimages, blending can be performed using a sinusoidal blend function or other suitable blend function.
[0029] In block 30, method 20 combines the low-frequency band output subimage and the high-frequency band output subimage to create a full band output stitched image. In some situations, the full band output stitched image may comprise a rectangular image, as shown in Figure 1c. However, the input images may be oriented such that the full band output stitched image is non-rectangular, as shown in Figures 4a-c.
[0030] Figure 4a shows a non-rectangular output image 40 produced from two input images 41 and 42. As shown in Figure 4b, output image 40 may be cropped as indicated by crop area 43 to obtain a rectangular image. However, cropping the output image results in the loss of some cropped image data 44, which may be undesirable. An alternative solution is to define an enlarged rectangular image as indicated by enlarged image area 45 in Figure 4c. Enlarging the image results in some empty border image areas 46, which may be visually distracting.
[0031] In some embodiments of the invention, border image areas of enlarged output images may be filled by a random neighborhood sampling method. In such embodiments, for each unfilled pixel, an input pixel may be randomly selected from a square or circular region with the unfilled (destination) pixel at its center. Rejection sampling may be used until a filled input pixel is located. With each failure (rejection), the radius of the search region may be increased slightly, thus guaranteeing that a filled input pixel will eventually be located. Once an unfilled destination pixel is filled, the method proceeds to the next (usually adjacent) unfilled pixel and the process may be repeated, starting with a search radius that is a fraction of the previous one. This avoids unconstrained growth of the sampling radius, keeping it on average a good size for finding valid neighbors. The net result is a border image area having pixels whose color and character match the nearest valid pixels, but with a "snowy" appearance.
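One possible reading of the random neighborhood sampling of paragraph [0031] is sketched below; the initial radius, the growth and shrink factors, and the square search region (one of the two shapes mentioned above) are illustrative assumptions.

```python
import numpy as np

def fill_border(img, filled, r0=4.0, grow=1.25, shrink=0.5, seed=0):
    """Fill unfilled pixels by rejection sampling from a square
    neighborhood centred on each destination pixel.  `filled` is a
    boolean mask of valid pixels.  The search radius grows on each
    rejection, guaranteeing a filled pixel is eventually found, and
    each new destination starts from a fraction of the last radius."""
    rng = np.random.default_rng(seed)
    h, w = filled.shape
    radius = r0
    for y, x in zip(*np.nonzero(~filled)):
        while True:
            dy, dx = rng.integers(-int(radius), int(radius) + 1, size=2)
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w and filled[sy, sx]:
                img[y, x] = img[sy, sx]
                filled[y, x] = True
                radius = max(r0, radius * shrink)  # smaller start next pixel
                break
            radius *= grow  # rejection: widen the search region
    return img
```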
[0032] Figure 5 shows an illustrative embodiment of the method described above. Adjacent input images 210a and 210b share a common sub-field of view captured respectively by image regions 211a and 211b.
Images 210a and 210b are input to high pass filters 212a and 212b to obtain high-frequency band subimages H-210a and H-210b. Images 210a and 210b are also input to low pass filters 213a and 213b to obtain low-frequency band subimages L-210a and L-210b. High-frequency band subimages H-210a and H-210b are combined by splicing 214. Low-frequency band subimages L-210a and L-210b are combined by blending 215. The high-frequency band spliced output H-210c and the low-frequency band blended output L-210c are combined at 216 by merging to create full-band output image 210c which contains the common sub-field of view depicted by overlap region 211c. Merging of the high-frequency band spliced output H-210c and the low-frequency band blended output L-210c may comprise, for example, multiplication of the high-frequency band spliced output H-210c and the low-frequency band blended output L-210c in order to avoid negative values and round-off errors which may occur if the outputs are added.
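To make the dataflow of Figure 5 concrete, the sketch below chains the illustrative helpers from the previous sketches (split_bands, blend_scanline, splice_scanline) over the aligned overlap regions only, and recombines the bands by multiplication, consistent with the ratio high band assumed in split_bands. Alignment, normalization, downsampling of the low band and the non-overlapping portions of the output are all omitted; this is a simplified sketch, not the full method.

```python
import numpy as np

def stitch_overlap(ov_a, ov_b):
    """Stitch the aligned, positive-valued overlap regions of two
    images: blend the low bands, splice the high bands scanline by
    scanline (falling back to blending where no common edge feature
    exists), then merge the bands by multiplication."""
    low_a, high_a = split_bands(ov_a)
    low_b, high_b = split_bands(ov_b)

    low_out = np.empty_like(ov_a)
    high_out = np.empty_like(ov_a)
    prev = None
    for y in range(ov_a.shape[0]):
        low_out[y] = blend_scanline(low_a[y], low_b[y])
        spliced, prev = splice_scanline(high_a[y], high_b[y], prev)
        if spliced is None:
            # No common edge feature on this scanline: blend instead,
            # so that no false edge is created (paragraph [0028]).
            spliced = blend_scanline(high_a[y], high_b[y])
        high_out[y] = spliced

    return low_out * high_out  # multiplicative merge of the two bands
```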
[0033] Figures 6a, 6b and 6c show an illustrative embodiment of the splicing described above applied to a scanline. Figures 6a and 6b show first and second input scanlines 330a and 330b, respectively, and corresponding graphs 340a and 340b of the pixel values along each scanline. Scanline 330a extends from beyond a first boundary 331, across an overlap region 332 to a second boundary 333. Scanline 330b extends from first boundary 331 across overlap region 332 to beyond second boundary 333. Scanlines 330a and 330b contain pixel interfaces 334a and 334b, respectively, that correspond to common edge features 335a, 335b, respectively. In the illustrated example, pixel interfaces 334a and 334b are the only pixel interfaces in their respective scanlines that are detected as corresponding to edge features and thus the location of pixel interfaces 334a and 334b is selected as the splice location.
[0034] Figure 6c shows an output stitched scanline 330c created by splicing input scanlines 330a and 330b. Output stitched scanline 330c results from splicing scanlines 330a and 330b about the splice location at pixel interface 334c. Values of pixels from first scanline 330a are used for the portion of output stitched scanline 330c to the left of pixel interface 334c, and values of pixels from second scanline 330b are used for the portion of output stitched scanline 330c to the right of pixel interface 334c. Accordingly, as shown in a graph 340c of pixel values along output scanline 330c, to the left of edge feature 335c the spliced output corresponds to graph 340a and to the right of edge feature 335c the spliced output corresponds to graph 340b.
[0035] Another aspect of the invention provides apparatus for stitching images together to form a composite or "stitched" image. The functional components of the apparatus may be provided by software executing on one or more data processors (which may comprise microprocessors, image processors, or the like) and/or by hardware components. Figure 7 shows apparatus 50 according to an example embodiment. Apparatus 50 comprises a scanline splicing processor 52 and a first scanline blending processor 54 that are both connected to an image merger 56. High-frequency band subimages, image correspondence data and image edge location data are input to the scanline splicing processor. Low-frequency band subimages and image correspondence data are input into the scanline blending processor. The apparatus may comprise a second scanline blending processor to which high-frequency band subimages and image correspondence data are input. The apparatus may also comprise a high-pass image filter connected to the scanline splicing processor for obtaining high-frequency band subimages from input images. The apparatus may additionally comprise a low-pass image filter connected to the first scanline blending processor for obtaining low-frequency band subimages from input images. The apparatus may further comprise an image alignment detector for generating image correspondence data for input into one or a combination of the first scanline blending processor, the second scanline blending processor or the scanline splicing processor. The apparatus may yet further comprise an edge feature detector for locating pixel interfaces corresponding to edge features for input into the scanline splicing processor.
[0036] Figure 8 shows an apparatus 440 according to another embodiment of the invention. Apparatus 440 comprises a scanline splicing processor H-441, a first scanline blending processor L-442 and a second scanline blending processor H-442. High-pass image filter H-443 filters input images 450 to obtain high-frequency band subimages H-450 that are input into scanline splicing processor H-441 and second scanline blending processor H-442. Low-pass image filter L-443 filters input images 450 to obtain low-frequency band subimages L-450 that are input into first scanline blending processor L-442. Edge feature detector 445 locates pixel interfaces corresponding to edge features in input images 450 and forwards pixel interface edge data 455 to splice location selector 446. Splice location selector 446 is connected to scanline splicing processor H-441 and second scanline blending processor H-442. Image alignment detector 444 generates image correspondence data 454 that is input into splice location selector 446, first scanline blending processor L-442 and second scanline blending processor H-442. High-frequency band spliced output image H-451S, high-frequency band blended output image H-451B and low-frequency band blended output image L-451B combine at image merger 447 which outputs full-band stitched image 451.
[0037] As will be apparent to those skilled in the art, in the light of the foregoing disclosure, many alterations and modifications are possible in the practice of this invention without departing from the spirit or scope thereof. Accordingly, the scope of the invention is to be construed in accordance with the substance defined by the following claims. Those of skill in the art will recognize certain modifications, permutations, additions and sub-combinations thereof. It is therefore intended that the following appended claims and claims hereafter introduced are interpreted to include all such modifications, permutations, additions and sub-combinations as are within their true spirit and scope.

Claims

WHAT IS CLAIMED IS:
1. A method for stitching input images which overlap in an overlap region, the method comprising: obtaining a low-frequency band subimage from each input image; obtaining a high-frequency band subimage from each input image; blending the low-frequency band subimages across at least a portion of the overlap region to create a low-frequency band output image; splicing the high-frequency band subimages across at least a portion of the overlap region to create a high-frequency band output image; and combining the low-frequency band output image and the high-frequency band output image to create a full band output stitched image.
2. A method according to claim 1 wherein each input image comprises a high dynamic range image.
3. A method according to claim 1 or claim 2 wherein splicing the high- frequency band subimages comprises selecting splice locations corresponding to edge features common to the input images and splicing the high-frequency band subimages at the selected splice locations.
4. A method according to claim 3 comprising blending the high-frequency band subimages where edge features common to both images cannot be identified.
5. A method according to claim 3 comprising defining a plurality of scanlines for each of the high-frequency band subimages, wherein splicing the high-frequency band subimages at the selected splice locations comprises splicing the high-frequency band subimages on a scanline by scanline basis.
6. A method according to claim 5 comprising, for each scanline of the high-frequency band subimages, selecting a pixel interface corresponding to an edge feature common to both input images as a splice location.
7. A method according to claim 3 comprising identifying edge features in the input images.
8. A method according to claim 3 comprising identifying correspondence points in the input images.
9. A method according to any of claims 1 to 8 wherein blending the low-frequency band subimages to create a low-frequency band stitched image comprises using a sinusoidal blend function.
10. A method according to any of claims 1 to 9 comprising normalizing the input images.
11. A method according to any of claims 1 to 9 comprising normalizing the low-frequency band subimages.
12. A method according to any of claims 1 to 9 comprising normalizing the high-frequency band subimages.
13. A method according to any of claims 1 to 12 comprising varying a size of the high-frequency band subimage.
14. A method according to any one of claims 1 to 13 comprising, when the full band output stitched image comprises a non-rectangular image, defining a rectangular enlarged image area and filling an empty border image area by random neighborhood sampling.
15. A method according to claim 14 wherein random neighborhood sampling comprises, for each unfilled pixel: defining a search area proximate to the unfilled pixel; and, selecting characteristics of an input pixel in the search area to fill the unfilled pixel.
16. A method according to claim 15 comprising expanding the search area if the search area comprises only unfilled pixels.
17. A method according to claim 16 comprising selecting a size of the search area for one unfilled pixel to be smaller than a size of the search area of a previously filled unfilled pixel.
18. An apparatus for stitching input images which overlap in an overlap region, the apparatus comprising: an image blending processor; an image splicing processor; and an image merger coupled to the image blending processor and the image splicing processor.
19. An apparatus according to claim 18 comprising a low-pass filter coupled to the image blending processor, and a high-pass filter coupled to the image splicing processor.
20. An apparatus according to claim 18 or claim 19 comprising an image alignment detector coupled to the image blending processor.
21. An apparatus according to any of claims 18 to 20 comprising an image alignment detector coupled to the image splicing processor.
22. An apparatus according to any of claims 18 to 21 comprising a splice location selector coupled to the image splicing processor.
23. An apparatus according to claim 22 comprising an edge feature detector coupled to the image splicing processor.
24. An apparatus according to any of claims 18 to 23 comprising a second image blending processor coupled to the image merger.
25. An apparatus according to any of claims 18 to 24 wherein the image merger comprises an image multiplier.
26. A computer program product comprising a medium carrying computer readable instructions which, when executed by a processor, cause the processor to execute a method according to claim 1.
27. A computer program product comprising a medium carrying computer readable instructions which, when executed by a processor, cause the processor to provide: a first facility for blending images; a second facility for splicing images; and a third facility for adding images.
EP06840459A 2006-12-13 2006-12-13 Methods and apparatus for stitching digital images Withdrawn EP2100270A4 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CA2006/002028 WO2008070949A1 (en) 2006-12-13 2006-12-13 Methods and apparatus for stitching digital images

Publications (2)

Publication Number Publication Date
EP2100270A1 true EP2100270A1 (en) 2009-09-16
EP2100270A4 EP2100270A4 (en) 2011-08-17

Family

ID=39511174

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06840459A Withdrawn EP2100270A4 (en) 2006-12-13 2006-12-13 Methods and apparatus for stitching digital images

Country Status (6)

Country Link
EP (1) EP2100270A4 (en)
JP (1) JP5230873B2 (en)
CN (1) CN101583974B (en)
CA (1) CA2671894C (en)
HK (1) HK1137072A1 (en)
WO (1) WO2008070949A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751659B (en) * 2009-12-24 2012-07-25 北京优纳科技有限公司 Large-volume rapid image splicing method
JP5754312B2 (en) 2011-09-08 2015-07-29 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
CN106600532B (en) * 2016-12-08 2020-01-10 广东威创视讯科技股份有限公司 Image amplification method and device
US20180241982A1 (en) * 2017-02-23 2018-08-23 Nokia Technologies Oy Method, apparatus and computer program product for generating composite images with three-dimensional effects and reducing pole contraction lines
US11734796B2 (en) * 2020-04-15 2023-08-22 Gopro, Inc. Methods and apparatus for shared image processing among multiple devices
JPWO2022075040A1 (en) * 2020-10-09 2022-04-14
CN113077387B (en) * 2021-04-14 2023-06-27 杭州海康威视数字技术股份有限公司 Image processing method and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6075905A (en) * 1996-07-17 2000-06-13 Sarnoff Corporation Method and apparatus for mosaic image construction

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5187754A (en) * 1991-04-30 1993-02-16 General Electric Company Forming, with the aid of an overview image, a composite image from a mosaic of images
US6434265B1 (en) * 1998-09-25 2002-08-13 Apple Computers, Inc. Aligning rectilinear images in 3D through projective registration and calibration
US6456323B1 (en) * 1999-12-31 2002-09-24 Stmicroelectronics, Inc. Color correction estimation for panoramic digital camera
JP3428581B2 (en) * 2000-12-22 2003-07-22 株式会社スクウェア Video game apparatus and control method thereof, and computer-readable recording medium on which video game program is recorded.
US20050063608A1 (en) * 2003-09-24 2005-03-24 Ian Clarke System and method for creating a panorama image from a plurality of source images

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6075905A (en) * 1996-07-17 2000-06-13 Sarnoff Corporation Method and apparatus for mosaic image construction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BERTALMIO M ET AL: "Simultaneous structure and texture image inpainting", PROCEEDINGS / 2003 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 18 - 20 JUNE 2003, MADISON, WISCONSIN; [PROCEEDINGS OF THE IEEE COMPUTER CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION], LOS ALAMITOS, CALIF. [U.A, vol. 2, 18 June 2003 (2003-06-18), pages 707-712, XP010644806, DOI: 10.1109/CVPR.2003.1211536 ISBN: 978-0-7695-1900-5 *
See also references of WO2008070949A1 *
WARD GREG: "HIDING SEAMS IN HIGH DYNAMIC RANGE PANORAMAS", PROCEEDINGS APGV 2006. SYMPOSIUM ON APPLIED PERCEPTION IN GRAPHICS AND VISUALIZATION. BOSTON, MA, JULY 28 - 29, 2006; [ACM SIGGRAPH SYMPOSIUM ON PERCEPTION IN GRAPHICS AND VISUALIZATION], NEW YORK, NY : ACM, US, 28 July 2006 (2006-07-28), page 150, XP001505710, DOI: 10.1145/1140491.1140527 ISBN: 978-1-59593-429-1 *

Also Published As

Publication number Publication date
CA2671894A1 (en) 2008-06-19
EP2100270A4 (en) 2011-08-17
CA2671894C (en) 2013-07-16
HK1137072A1 (en) 2010-07-16
CN101583974B (en) 2012-11-14
JP5230873B2 (en) 2013-07-10
WO2008070949A1 (en) 2008-06-19
JP2010512696A (en) 2010-04-22
CN101583974A (en) 2009-11-18

Similar Documents

Publication Publication Date Title
CA2671894C (en) Methods and apparatus for stitching digital images
EP1968008B1 (en) Method for content-aware image retargeting
US7805003B1 (en) Identifying one or more objects within an image
US9654765B2 (en) System for executing 3D propagation for depth image-based rendering
Levin et al. Seamless image stitching in the gradient domain
Shamir et al. Seam carving for media retargeting
US7889949B2 (en) Joint bilateral upsampling
US7599579B2 (en) Interpolated image filtering method and apparatus
CN103168315B (en) Solid (3D) panorama sketch on portable equipment creates
CN101542529B (en) Generation method of depth map for an image and an image process unit
DE69735488T2 (en) METHOD AND DEVICE FOR ALIGNING PICTURES
EP2709070A1 (en) Image generation device and image generation method
KR101049928B1 (en) Method, terminal and computer-readable recording medium for generating panoramic images
CN106447607B (en) A kind of image split-joint method and device
US20050105823A1 (en) Method and system for composing universally focused image from multiple images
CN109803086B (en) Method, device and camera for blending first and second images with overlapping fields of view
CN112862685B (en) Image stitching processing method, device and electronic system
CN107430762B (en) Digital zooming method and system
CN104680501A (en) Image splicing method and device
US11368661B2 (en) Image synthesis method, apparatus and device for free-viewpoint
Kim et al. Sredgenet: Edge enhanced single image super resolution using dense edge detection network and feature merge network
CN112365518A (en) Image splicing method based on optimal suture line self-selection area gradual-in and gradual-out algorithm
WO2014198029A1 (en) Image completion based on patch offset statistics
Kang et al. Seamless stitching using multi-perspective plane sweep
JPH04180176A (en) Picture editing device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20090617

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20110720

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 1/00 20060101ALI20110714BHEP

Ipc: G06T 11/60 20060101AFI20110714BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20170428

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20181031