US20030063089A1 - Image-based method and system for building spherical panoramas - Google Patents

Image-based method and system for building spherical panoramas Download PDF

Info

Publication number
US20030063089A1
Authority
US
United States
Prior art keywords
image
photographic
photographic images
contiguous
warped
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/235,190
Inventor
Ju-Wei Chen
Shu-Cheng Huang
Tse Cheng
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US10/235,190 priority Critical patent/US20030063089A1/en
Publication of US20030063089A1 publication Critical patent/US20030063089A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • G06T3/047
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Definitions

  • Any point on a sphere of radius ρ can also be represented using the spherical coordinate system (ρ, θ, φ), where 0 ≤ θ < 2π and 0 ≤ φ ≤ π represent the angles in units of radians. Alternatively, 0° ≤ θ < 360° and 0° ≤ φ ≤ 180° can be used to represent the angles in units of degrees.
  • the relationship between the rectangular coordinates (X_R, Y_R, Z_R) and the corresponding spherical coordinates (ρ, θ, φ), with φ measured from the north pole, is as follows:

      X_R = ρ·sin φ·cos θ,  Y_R = ρ·sin φ·sin θ,  Z_R = ρ·cos φ
  • the spherical surface has only two degrees of freedom, represented by two parameters, θ and φ.
  • a rectangular image can therefore be used to represent a parametric spherical environment map.
  • the resolution of each axis is defined in pixels per degree.
  • when the resolutions of the two axes are defined to be the same,
  • the width of the PSEM is twice its height.
  • the resolutions of the two axes can also differ.
  • x_resolution_PSEM and y_resolution_PSEM are the predefined resolutions of the x and y axes, respectively.
  • the corresponding spherical coordinates θ and φ can be obtained from the pixel coordinates (x_m, y_m) as follows: θ = x_m / x_resolution_PSEM and φ = y_m / y_resolution_PSEM.
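  • As a concrete illustration, the following C sketch converts between PSEM pixel coordinates and spherical coordinates in degrees; the function names and the example resolution values are illustrative assumptions, not taken from the patent's source listing.

      /* Predefined PSEM resolutions in pixels per degree (example values). */
      static const double x_resolution_PSEM = 4.0;
      static const double y_resolution_PSEM = 4.0;

      /* PSEM pixel (x_m, y_m) -> spherical coordinates (theta, phi), degrees. */
      void pixel_to_angles(double x_m, double y_m, double *theta, double *phi)
      {
          *theta = x_m / x_resolution_PSEM; /* azimuth, 0..360 across the map   */
          *phi   = y_m / y_resolution_PSEM; /* 0..180, measured from north pole */
      }

      /* Spherical coordinates (degrees) -> PSEM pixel coordinates. */
      void angles_to_pixel(double theta, double phi, double *x_m, double *y_m)
      {
          *x_m = theta * x_resolution_PSEM;
          *y_m = phi   * y_resolution_PSEM;
      }

  • With equal resolutions, the map is 360 · x_resolution_PSEM pixels wide and 180 · y_resolution_PSEM pixels high, which is why the width of the PSEM is twice its height.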
  • Referring to FIG. 3, the principle of taking pictures with a photographic system 12 for building a spherical panorama is illustrated.
  • a photographic film 14 and a camera lens 16 are the two primary components of photographic system 12.
  • the distance between camera lens 16 and film 14 is the focal length or focal distance (f).
  • An image of an object 18 is formed inverted on film 14.
  • the size of the image is proportional to the focal length, and is inversely proportional to the distance between the object and the camera. Therefore, the size of the view field captured by a single picture depends on the focal length, the film width, and the film height. As the focal distance increases, the view field decreases.
  • a 360-degree panoramic scene cannot be shown in a single picture taken with a camera having a standard lens.
  • more than one picture is needed to build a spherical panorama.
  • the actual number of required pictures depends on four factors: 1) the focal length of the camera lens; 2) the width of the film; 3) the height of the film; and 4) the overlapping ratio between contiguous pictures used to register adjacent pictures.
  • the corresponding perspective view of the spherical panorama should be included in one or more pictures, which are used to build the panorama.
  • two pictures 20 , 22 tangent to an imaginary sphere 24 of radius “f” are taken for respective sight directions extending through the equator (i.e., the circle of the equator whose plane is perpendicular to the axis extending through the poles of sphere 24 ).
  • suppose a portion of a spherical panorama is to be built from pictures photographed with photographic system 12 (FIG. 3). For example, let the film height be designated as "a", the film width as "b", the focal length as "f", and the overlapping ratio between contiguous pictures as "k" (in percent). In order to cover the entire surface of the sphere, several circles of latitude, each including a series of contiguous pictures, are required.
  • the pictures can be taken using either a landscape or a portrait style picture-taking approach.
  • the number of pictures taken from the two approaches to build a spherical panorama is the same.
  • the number of circles of the landscape style is larger than that of the portrait style, but the number of pictures in one circle of the landscape style is generally smaller than that of the portrait style.
  • the number of circles is generally restricted to an odd number so that one circle of latitude lies on the equator, where the elevation of the photographic equipment varies only slightly.
  • Horizontal registration is also used for the pictures of the circle at the equator to determine the horizontal relationship between the panoramic pictures. Due to these restrictions, the number of pictures in the landscape style is typically larger than that of the portrait style.
  • NumCircles_Landscape = ⌈ 180° / ((100 − k)% · 2·tan⁻¹(a/(2f))) − 1 ⌉ (4)
  • NumCircles_Portrait = ⌈ (180° − 2·tan⁻¹(b/(2f)) + 2k%·tan⁻¹(a/(2f))) / ((100 − k)% · 2·tan⁻¹(a/(2f))) ⌉ (5)
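  • The two formulas are small enough to check numerically. The following C sketch evaluates them, interpreting the outer brackets as a ceiling (an assumption); for the 24 mm × 36 mm film, 18 mm focal length, and 40% overlap of the example below, the portrait formula yields the three circles of latitude stated there.

      #include <math.h>

      /* Equation (4): circles of latitude, landscape style.
         a = film height (mm), f = focal length (mm), k = overlap in percent. */
      int num_circles_landscape(double a, double f, double k)
      {
          double band = ((100.0 - k) / 100.0)
                      * 2.0 * atan(a / (2.0 * f)) * 180.0 / M_PI;
          return (int)ceil(180.0 / band - 1.0);
      }

      /* Equation (5): circles of latitude, portrait style. b = film width. */
      int num_circles_portrait(double a, double b, double f, double k)
      {
          double ta  = atan(a / (2.0 * f)) * 180.0 / M_PI;
          double tb  = atan(b / (2.0 * f)) * 180.0 / M_PI;
          double num = 180.0 - 2.0 * tb + 2.0 * (k / 100.0) * ta;
          double den = ((100.0 - k) / 100.0) * 2.0 * ta;
          return (int)ceil(num / den);
      }

      /* num_circles_portrait(36.0, 24.0, 18.0, 40.0) evaluates to 3. */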
  • the number of pictures needed to completely cover the zone of each circle of the sphere surface depends on the sight direction above (or below) the equator: the nearer to a pole the sight direction is, the fewer pictures are needed. However, taking the design of the image stitching algorithm into consideration, the same number of pictures is taken for each circle along the top and bottom directions. The actual number of pictures required for a circle is determined by the possible width of the warped images and the overlapping ratio between contiguous pictures. Because standard photographic equipment is imperfect, it is difficult to take pictures with very accurate sight directions. Thus, the narrowest possible width of the warped images after cutting is used to derive the number of pictures to be taken in a circle of latitude.
  • the photographic system has a focal length of 18 mm, and the film has a width of 24 mm, and a height of 36 mm.
  • a portrait style picture taking approach is used.
  • three circles of latitude are required to provide an overlapping ratio of 40%.
  • the view field along the direction of the south pole will be obstructed by the tripod of the camera. It will be substituted by a marked or patterned image.
  • the narrowest width will occur where φ is 45° or 135°.
  • the number of pictures of one circle of latitude is 12.
  • 36 pictures plus an additional picture along the direction of the north pole are required to cover the sphere.
  • these 37 pictures will be used to explain the proposed image stitching algorithm and to show the effectiveness of the proposed method.
  • the photographic equipment used to take pictures includes a pan head which controls the rotation angles of the camera around the polar axis. It is desirable to keep errors in rotation angle within five degrees. However, it may be difficult to automatically control the panning above or below the equator, resulting in errors larger than five degrees.
  • a parametric spherical environment map (PSEM) is used to store the environment data of a spherical panorama.
  • each picture is transformed into the PSEM based on a sight direction designated as (θ_i, φ_i).
  • These warped images are stitched together as a complete seamless image of the PSEM using image registration.
  • Conventional image processing algorithms represent images with two-dimensional arrays. That is, the shape of most images used in image processing is rectangular. However, on the PSEM, the shapes of warped images of rectangular pictures are not rectangular. The shapes depend on the value of the angle φ below the north pole.
  • the origin of the two dimensional Cartesian coordinate system is defined to be at the center point of the picture.
  • the x axis is along the direction of film width (designated as “b”) and the y axis is along that of film height (designated as “a”).
  • the coordinates of the four corner points of the picture are (b/2, a/2), (−b/2, a/2), (−b/2, −a/2), and (b/2, −a/2); the coordinates of the center points of the four picture bounding edges (Q1, Q2, Q3, and Q4) are (0, a/2), (−b/2, 0), (0, −a/2), and (b/2, 0), respectively.
  • the coordinates of the four corner points P1, P2, P3, and P4 will be (b/2, a/2, f), (−b/2, a/2, f), (−b/2, −a/2, f), and (b/2, −a/2, f);
  • the coordinates of the four center points Q1, Q2, Q3, and Q4 will be (0, a/2, f), (−b/2, 0, f), (0, −a/2, f), and (b/2, 0, f), respectively.
  • Width_PSEM = 360° · x_resolution_PSEM (7)
  • the image height, independent of θ_j, is computed from the coordinates of point Q2 (−b/2, 0, f) and point Q3 (0, −a/2, f).
  • the sole cutting line is determined based on the minimum y-coordinate of point Q2 and point Q3.
  • y_m(φ_i, Q3) = y_resolution_PSEM · cos⁻¹[ (−a·sin φ_i + 2f·cos φ_i) / (a² + 4f²)^½ ] (9)
  • the two opposing horizontal lines and the two opposing vertical lines can also be derived from the coordinates of point Q1 (0, a/2, f), point Q3 (0, −a/2, f), point P3 (−b/2, −a/2, f), and point P4 (b/2, −a/2, f), respectively (see FIG. 8B).
  • the sight direction of picture j is designated as (θ_j, φ_j).
  • the coordinates (x_LT, y_LT) of the left-top corner point of the rectangular image are then equal to (x_m(θ_j, φ_j, P4), y_m(θ_j, φ_j, Q1)).
  • Referring to FIGS. 7A-7E, five warped images having different shapes are shown, each transformed from a rectangular picture of a photographic image based on a different sight direction.
  • the angle θ around the equator is set to 180°, while the angle φ below the north pole is set to the five different values 0°, 30°, 45°, 60°, and 90°.
  • Image warping to generate the PSEM from photographic images includes two kinds of functions: “image warping transformation” and “image cutting”.
  • the warped images with irregular shapes are cut into rectangular shapes to be used later during the stitching processing.
  • As shown in FIGS. 7A-7E, the shapes of warped images on the PSEM depend on the φ value below the north pole of the sphere. The nearer to a pole of the sphere the sight direction is, the wider the warped image is.
  • the manner of cutting images depends on the shape of a warped image which, in turn, depends on the value of φ in the sight direction.
  • a camera system having a focal length of 18 mm, a film width of 24 mm, and a film height of 36 mm is used to provide photographs for building a spherical panorama.
  • the rules of image cutting for such a camera system can be established as follows:
  • if φ is less than 10° or greater than 170°,
  • the warped image is cut into a rectangular region by one horizontal line (FIG. 8A). This manner of cutting is called horizontal cutting.
  • if φ is between 30° and 150°,
  • the image region is cut into a rectangular region by four lines (FIG. 8B).
  • the coordinates of the left-top corner point of the rectangle are designated as (x_LT, y_LT). This manner of image cutting is called vertical cutting.
  • if φ is between 10° and 30°, or between 150° and 170°, the type of image cutting is determined based on the requirements of the particular application.
  • a two-dimensional coordinate system is defined in which the left-top corner of the image is located at the origin, the positive x direction is to the right, and the positive y direction extends downward.
  • the x and y coordinates of a pixel indicate the serial ranks of the pixel from left to right and from top to bottom, respectively.
  • all pixels of a two-dimensional image are loaded into a one dimensional memory array from a disk file or CD-ROM.
  • Each pixel address in the memory array is denoted by a variable, here called Offset, which is defined and calculated based on the x and y coordinates of the pixel as follows:

      Offset = (y × ImageWidth + x) × k

  • where ImageWidth is the number of pixels in each row of the image, and k is the number of bytes used to represent each pixel.
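  • In C, this addressing scheme reduces to a one-line helper (the function name is illustrative):

      /* Byte offset of pixel (x, y) in a row-major one-dimensional array;
         ImageWidth is pixels per row, k is bytes per pixel (e.g., 3 for RGB). */
      long pixel_offset(int x, int y, int ImageWidth, int k)
      {
          return ((long)y * ImageWidth + x) * k;
      }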
  • the width of the rectangular image after cutting is called "WarpImageWidth".
  • WarpImageWidth(φ_j) = 2·tan⁻¹[ b / (a·cos φ_j + 2f·sin φ_j) ] (16)
  • WarpImageHeight(φ_i) = y_resolution_PSEM · ( cos⁻¹[ (−a·sin φ_i + 2f·cos φ_i) / (a² + 4f²)^½ ] − cos⁻¹[ (a·sin φ_i + 2f·cos φ_i) / (a² + 4f²)^½ ] ) (17)
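  • A C sketch of equations (16) and (17) follows. It assumes the width of equation (16) is an angle in degrees (converted to pixels with x_resolution_PSEM where needed), and the second cos⁻¹ term of (17) is reconstructed by symmetry from equation (9); both are assumptions about the garbled source. With a = 36, b = 24, f = 18, k = 40, and φ = 45°, the width is about 50.5° and the picture count is the 12 stated in the example above.

      #include <math.h>

      #define RAD(deg) ((deg) * M_PI / 180.0)
      #define DEG(rad) ((rad) * 180.0 / M_PI)

      /* Equation (16): angular width (degrees) of the cut warped image. */
      double warp_image_width_deg(double a, double b, double f, double phi_deg)
      {
          double phi = RAD(phi_deg);
          return 2.0 * DEG(atan(b / (a * cos(phi) + 2.0 * f * sin(phi))));
      }

      /* Equation (17): height (PSEM pixels) of the cut warped image. */
      double warp_image_height(double a, double f, double phi_deg, double y_res)
      {
          double phi = RAD(phi_deg);
          double r   = sqrt(a * a + 4.0 * f * f);
          double top = acos(( a * sin(phi) + 2.0 * f * cos(phi)) / r);
          double bot = acos((-a * sin(phi) + 2.0 * f * cos(phi)) / r);
          return y_res * (DEG(bot) - DEG(top));
      }

      /* Pictures per circle from the narrowest width and overlap k (percent). */
      int num_pictures_in_circle(double a, double b, double f, double k,
                                 double phi_deg)
      {
          double w = warp_image_width_deg(a, b, f, phi_deg);
          return (int)ceil(360.0 / (((100.0 - k) / 100.0) * w));
      }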
  • the original y coordinate of a pixel in the PSEM is equal to the y coordinate in the new coordinate system plus y_LT.
  • the y_LT variable is then designated as y_TranslatePixels.
  • the width and height of the rectangular warped image after cutting is first calculated based on sight direction.
  • the information of the image size will be useful in the image warping procedure described below. This procedure is described using a C-like programming language.
  • the source code listing is included in Appendix I. Other similar programming languages may be used to implement the image warping procedure.
  • the program includes input and output arguments both of which are listed at the start of the source code listing.
  • the input arguments include the focal length, designated as "f"; the film width, as "FilmWidth"; the film height, as "FilmHeight"; and the memory storing the picture image, designated as "PictureImage".
  • the width and height of the picture image to be processed is designated as “PictureImageWidth” and “PictureImageHeight”, respectively, and the width and height of the warped image on the PSEM after image cutting is designated as “WarpImageWidth” and “WarpImageHeight”, respectively.
  • the resolutions of the axes of the PSEM are designated as "x_resolution_PSEM" and "y_resolution_PSEM", respectively.
  • the sight direction of the picture below the north pole is designated as "φ_L", and the number of pixels translated for storing the cut image is designated as "y_TranslatePixels".
  • the sole output argument is the memory storing the warped image, which is designated as “WarpImage”.
  • Each pixel in the warped image is mapped from at least one pixel in the photographic image.
  • the attributes (e.g., color) of each pixel in the warped image are derived from those of a corresponding pixel in the photographic image.
  • the memory address of the photographic pixel corresponding to each pixel of the warped image therefore needs to be derived.
  • the x and y coordinates of each pixel in the warped image after cutting are designated as "x_index" and "y_index"; the image width and image height of the photographic image are designated as "m" and "n"; the address of the memory storing the photographic image is designated as "OffsetPictureImage"; and the address of the memory storing the warped image is designated as "OffsetWarpImage".
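  • Appendix I itself is not reproduced in this document. The following C sketch shows one possible shape of the inverse-mapping loop it describes, under assumed conventions: the sight direction is taken as (θ = 0°, φ_L), since translation along the equator can supply any other θ; nearest-neighbor sampling and 3 bytes per pixel are assumed; and the function name and camera basis vectors are choices made here, not the patent's.

      #include <math.h>
      #include <string.h>

      #define BPP 3 /* assumed bytes per pixel (24-bit RGB) */

      void WarpPictureToPSEM(const unsigned char *PictureImage,
                             int PictureImageWidth, int PictureImageHeight,
                             unsigned char *WarpImage,
                             int WarpImageWidth, int WarpImageHeight,
                             double f, double FilmWidth, double FilmHeight,
                             double x_resolution_PSEM, double y_resolution_PSEM,
                             double phi_L_deg, int y_TranslatePixels)
      {
          double phi_L = phi_L_deg * M_PI / 180.0;
          /* Camera basis for sight direction (theta = 0, phi_L). */
          double fwd[3] = { sin(phi_L), 0.0, cos(phi_L) };
          double rgt[3] = { 0.0, 1.0, 0.0 };
          double up[3]  = { -cos(phi_L), 0.0, sin(phi_L) };

          for (int y_index = 0; y_index < WarpImageHeight; y_index++)
              for (int x_index = 0; x_index < WarpImageWidth; x_index++) {
                  /* PSEM pixel -> direction on the unit sphere. */
                  double th = ((x_index - WarpImageWidth / 2.0)
                               / x_resolution_PSEM) * M_PI / 180.0;
                  double ph = ((y_index + y_TranslatePixels)
                               / y_resolution_PSEM) * M_PI / 180.0;
                  double d[3] = { sin(ph) * cos(th), sin(ph) * sin(th), cos(ph) };

                  /* Intersect the viewing ray with the film plane at distance f. */
                  double along = d[0]*fwd[0] + d[1]*fwd[1] + d[2]*fwd[2];
                  if (along <= 0.0) continue;          /* behind the camera    */
                  double t  = f / along;
                  double xf = t * (d[0]*rgt[0] + d[1]*rgt[1] + d[2]*rgt[2]);
                  double yf = t * (d[0]*up[0]  + d[1]*up[1]  + d[2]*up[2]);
                  if (fabs(xf) > FilmWidth / 2.0 || fabs(yf) > FilmHeight / 2.0)
                      continue;                        /* outside this picture */

                  /* Film coordinates -> picture pixel (nearest neighbor). */
                  int px = (int)((xf / FilmWidth  + 0.5) * (PictureImageWidth  - 1) + 0.5);
                  int py = (int)((0.5 - yf / FilmHeight) * (PictureImageHeight - 1) + 0.5);

                  long OffsetPictureImage = ((long)py * PictureImageWidth + px) * BPP;
                  long OffsetWarpImage    = ((long)y_index * WarpImageWidth + x_index) * BPP;
                  memcpy(WarpImage + OffsetWarpImage,
                         PictureImage + OffsetPictureImage, BPP);
              }
      }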
  • Building a spherical panorama using image stitching includes three important stages: 1) computing accurate sight directions of the pictures used to build the panorama; 2) adjusting the intensity of the picture images; and 3) stitch processing. Based on the predefined picture-taking approach discussed above, a set of overlapping pictures is taken with a conventional camera. The overlapping pictures are sufficient for building a spherical panorama. An image stitching approach (described below) is then used to stitch the photographic images together into a complete image of the spherical panorama.
  • the first and most important stage in stitching these overlapping photographic images is to compute the accurate sight direction of each picture.
  • the sight directions of the pictures are needed to determine whether the photographic images can be stitched together or not.
  • each picture is taken based on a predefined sight direction, errors in sight direction due to imperfect photographic equipment and the setup of the equipment may still exist. Therefore, the image stitching approach performs image registration in both the horizontal (latitudinal) and vertical (longitudinal) directions of the PSEM to compute accurate sight directions of each picture.
  • Referring to FIG. 10, a flow diagram shows the steps required for computing accurate sight directions for the pictures.
  • the process contains three steps: 1) vertical (or longitudinal) image registration (30); 2) horizontal (or latitudinal) image registration (32); and 3) fine tuning of the pictures' sight directions along longitudinal directions (34).
  • a picture is also taken along the zenith direction.
  • the spatial relationships of these pictures can be described with a two-dimensional array of rows and columns. The pictures taken in each row are taken at the same latitudinal angle, and the pictures of each column are taken at the same longitudinal angle.
  • images are registered in each row and each column.
  • both horizontal image registration and vertical image registration are required.
  • a semi-automatic approach accomplishes the required image registration.
  • a correlation-based algorithm as described in U.S. Ser. No. 08/933,758 is used to initially register the contiguous images.
  • inconsistent intensities in images may result in registration errors.
  • a software tool with a graphical user interface (GUI) can be used to display an array of warped images on a screen, positioned according to the results of image registration. Overlapping regions of two adjacent images can be seen on the screen; thus, the result of the image registration can be inspected with the software. If the result is not perfect, users can move the warped images (e.g., with a mouse) to obtain better registration.
  • all of the pictures to be registered are transformed onto the space of a PSEM.
  • Δθ_BA = Δx_BA / x_resolution_PSEM and Δφ_BA = Δy_BA / y_resolution_PSEM (19)
  • image registration is accomplished by computing the image phase correlation for all of the possible positions of image alignment and selecting the one resulting in a minimum alignment difference. Searching for the best alignment position (x, y) in a two-dimensional image array is accomplished through a series of translations along x and y directions.
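  • The correlation-based algorithm of U.S. Ser. No. 08/933,758 is not reproduced here; as a stand-in, the following C sketch performs the exhaustive search just described, scoring each candidate offset with a mean absolute difference over the overlap and keeping the minimum. Single-channel images and a symmetric search range are assumptions.

      #include <stdlib.h>

      /* Find the offset (dx, dy) within +/-range that minimizes the mean
         absolute difference between overlapping pixels of images A and B
         (both w x h, row-major, one byte per pixel). */
      void RegisterImages(const unsigned char *A, const unsigned char *B,
                          int w, int h, int range, int *best_dx, int *best_dy)
      {
          double best = 1e300;
          for (int dy = -range; dy <= range; dy++)
              for (int dx = -range; dx <= range; dx++) {
                  long sum = 0, n = 0;
                  for (int y = 0; y < h; y++) {
                      int yb = y + dy;
                      if (yb < 0 || yb >= h) continue;
                      for (int x = 0; x < w; x++) {
                          int xb = x + dx;
                          if (xb < 0 || xb >= w) continue;
                          sum += labs((long)A[y * w + x] - B[yb * w + xb]);
                          n++;
                      }
                  }
                  if (n > 0 && (double)sum / n < best) {
                      best = (double)sum / n;
                      *best_dx = dx;
                      *best_dy = dy;
                  }
              }
      }

  • The winning pixel offsets (Δx_BA, Δy_BA) are converted to sight-direction differences with equation (19) above.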
  • the PSEM has the characteristic that "translation" along the equatorial direction in the map corresponds to "rotation" around the axis through the two poles of the sphere.
  • the warped image of a particular picture can therefore be obtained from another warped image of the same picture having the same angle below the north pole but a different θ around the latitudinal direction.
  • warped images must, however, be recomputed for different φ's.
  • another new coordinate system is used to transpose a vertical image registration in the original coordinate system to a horizontal image registration in the new coordinate system.
  • the difference of sight directions around the polar axis in the new coordinate system corresponds to that along the longitudinal direction in the original coordinate system.
  • the time required for vertical registration can be dramatically sped up, particularly for registrations with large differences in φ, by eliminating the recomputation of warped images for different φ's.
  • a vertical image registration in the original coordinate system can be performed based on the following four steps:
  • 1) Rotate each photographic image 90° clockwise (step 50). This rotation can be accomplished by storing each row of image pixels into a column of another image array, indexed in the inverse sequence. 2) Image warp each rotated photographic image with φ_L set to 90°. 3) Apply the horizontal image registration to the warped images (step 54). 4) Determine the sight directions from the image positions after image registration.
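  • A minimal C sketch of the rotation in step 1, under the row-major, single-channel layout assumed earlier:

      /* Rotate a w x h image 90 degrees clockwise into dst (h x w pixels):
         row y of src becomes column (h - 1 - y) of dst. */
      void Rotate90Clockwise(const unsigned char *src, int w, int h,
                             unsigned char *dst)
      {
          for (int y = 0; y < h; y++)
              for (int x = 0; x < w; x++)
                  dst[x * h + (h - 1 - y)] = src[y * w + x];
      }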
  • the pictures arranged from left to right are in the order of decreasing φ's.
  • FIG. 13 shows, from left to right, pictures taken of a scene in which the elevation angle of the camera has been changed.
  • pictures 60a-60c were photographed below, at, and above the equator, respectively.
  • picture 60d is taken in the zenithal direction.
  • the respective predefined θ's of these pictures have errors within five degrees. However, the possible errors of the φ's may be much greater.
  • the four pictures are to be registered along the longitudinal direction.
  • the spatial relationships between contiguous pictures around the equatorial direction is determined.
  • the horizontal (or latitudinal) image registration is performed on the pictures of the circle at the equator because the variance of the pictures' sight directions in φ is smaller than that of the other circles of latitude.
  • accurate sight directions of all the pictures on the equator, (θ_e, φ_e)'s, are thereby obtained.
  • the sight directions of respective pictures in other circles of latitude can also be indirectly derived in a similar manner, as discussed above.
  • normal registration is divided into the horizontal registration and the vertical registration. Both registration steps process one dimensional arrays of images. Therefore, only two adjacent images are compared during the normal image registration.
  • a software tool is used to inspect registration of the stitched images.
  • the stitched image of the equatorial circle should be seamless because the horizontal image registration is applied to the pictures on the circle.
  • seams may exist between certain ones of the contiguous images of other circles along the equatorial (or latitudinal) direction because the spatial relationships are indirectly derived from the positions of pictures on the equator. If seams exist, fine tuning for picture positions of other circles along the longitudinal direction should be performed to eliminate the seams in the stitched image.
  • Fine tuning can be accomplished automatically using a software tool for image registration or performing manual fine tuning. Fine-tuning of image registration is performed to obtain better stitching of a two dimensional image array. Each column of images is processed by the vertical registration, and only the row of images at the equator is processed by the horizontal registration. The positions of images in the other rows are derived indirectly. In particular, the images of the other rows are processed using horizontal registration. For horizontal registration of the other rows of images, both the horizontal and vertical relationships can be tuned, but the horizontal relationship can only be modified a little. This type of horizontal registration is called “fine-tuning.” The final sight directions of all pictures after this fine tuning procedure are recorded for the latter stage of stitch processing.
  • the brightness of all pictures to be seamed together should be tuned to an average brightness (or intensity) value. Because the shutter of a camera opens to different degrees, and each picture is taken at a different instant in time, the brightness (or intensity) of contiguous pictures may differ greatly. It is necessary that, in building a panoramic image, the brightness of the contiguous pictures be smoothed.
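  • The patent does not spell out the adjustment formula; one minimal reading, sketched in C, scales each image so that its mean intensity matches the average of all the images' means.

      #include <stdlib.h>

      /* Scale each single-channel image (npixels pixels each) so that its
         mean intensity equals the average of the means of all images. */
      void EqualizeBrightness(unsigned char **images, int nimages, long npixels)
      {
          double *mean = malloc(nimages * sizeof *mean);
          double global = 0.0;
          for (int i = 0; i < nimages; i++) {
              long sum = 0;
              for (long p = 0; p < npixels; p++) sum += images[i][p];
              mean[i] = (double)sum / npixels;
              global += mean[i] / nimages;
          }
          for (int i = 0; i < nimages; i++) {
              double gain = global / mean[i];
              for (long p = 0; p < npixels; p++) {
                  double v = images[i][p] * gain;
                  images[i][p] = (unsigned char)(v > 255.0 ? 255.0 : v);
              }
          }
          free(mean);
      }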
  • Referring to FIGS. 15A and 15B, in the example of the invention, only the two pictures along the two poles are processed using horizontal cutting, while the others are all processed using vertical cutting.
  • the warped images provided by vertical cutting in one circle of latitude are first seamed together from left to right as a flat rectangular image (FIG. 15A).
  • the picture along the south pole is replaced by a rectangular marked or patterned image because the view field is hidden by the camera tripod. Therefore, including the picture along the north pole, there are four such flat rectangular images to be seamed together via image processing techniques.
  • the position of each flat rectangular image in the PSEM can be acquired from the sight directions of pictures. Therefore, based on one sequence from top to bottom or an inverse sequence, the four flat images can be seamed together, as shown in FIG. 15B.
  • discontinuities of image intensity may exist between two contiguous images even though the intensities of these pictures were adjusted. Therefore, for an overlapping image region between two images, image blending should be applied to the overlapping region to smooth the discontinuities between two images.
  • the spatial relationship between two images to be stitched together can be from left to right or from top to bottom.
  • two images 62 , 64 are to be stitched together.
  • a pixel 66 of image 62, located in the overlapping region, is denoted by p_i;
  • the pixel of image 64 located at the same position as pixel 66 in the stitched image is denoted by q_i;
  • the corresponding pixel in the stitched image is designated as r_i.
  • d_A is the distance between p_i and the boundary of image 62; d_B is the distance between q_i and the boundary of image 64; t is a power factor.
  • Image blending is observed visually, with the power factor t adjusted empirically to provide the optimum blending (e.g., set to 3) (see U.S. Ser. No. 08/933,758 for further details relating to determining the power factor t).
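  • The blending rule reduces to a small C helper; t = 3 follows the empirical choice above, and the function name is illustrative.

      #include <math.h>

      /* Blend two overlapping intensities; dA and dB are the distances of the
         pixel from the boundaries of the two images, t is the power factor:
         I(r) = (dA^t * I(p) + dB^t * I(q)) / (dA^t + dB^t). */
      double BlendIntensity(double I_p, double I_q,
                            double dA, double dB, double t)
      {
          double wA = pow(dA, t);
          double wB = pow(dB, t);
          return (wA * I_p + wB * I_q) / (wA + wB);
      }

  • Deep inside one image (d_A much larger than d_B) the result stays close to that image's intensity, and the weights shift smoothly across the overlap.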
  • the width of the stitched image, equal to the difference between sx_1 and sx′_1, should be the same as the width of the PSEM.
  • the y coordinate sy_1 should be the same as sy′_1 so that the stitched image does not slant.
  • a stitched image may be wider or narrower than the PSEM within which the stitched image is mapped.
  • modifications are then required so that the image width is equal to that of the PSEM. It may be necessary for columns of pixels to be eliminated, or inserted, to reduce or increase the image width, respectively.
  • Referring to FIG. 18A, for example, if a stitched image 76 is wider than a PSEM 78, one column of pixels is eliminated for every d columns.
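  • A C sketch of the too-wide case (one column eliminated every d columns); spacing the dropped columns evenly, and the function name, are assumptions.

      /* Shrink a single-channel stitched image from width w to target_w
         (target_w < w) by deleting one column every d columns. */
      void ShrinkToPSEMWidth(const unsigned char *src, int w, int h,
                             unsigned char *dst, int target_w)
      {
          int d = w / (w - target_w); /* one dropped column per d columns */
          for (int y = 0; y < h; y++) {
              int out = 0;
              for (int x = 0; x < w && out < target_w; x++) {
                  if ((x + 1) % d == 0) continue; /* eliminated column */
                  dst[y * target_w + out++] = src[y * w + x];
              }
          }
      }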
  • a stitched image 84 may also slant to one side due to, for example, the camera being mounted on the equipment in a tilted manner (i.e., not vertical).
  • in that case, the y coordinate sy_1 is not the same as sy′_1.
  • images on the left-most and the right-most boundaries are not contiguous.
  • the difference between the y coordinates (sy′_1 − sy_1) is used to determine how stitched image 84 is to be modified.
  • the number of pictures in a circle of latitude was previously designated as NumPictures. If the absolute value of the difference (sy′_1 − sy_1) is less than NumPictures, the y positions of
  • An additional buffer is used to store the stitched image after the rotation.
  • the new coordinates after rotation (x′, y′) can be computed from the old coordinates as follows:
  • Referring to FIG. 20, a system 90 for building a spherical panorama, including transforming photographic images of a panoramic scene into a spherical environment map and stitching together a two dimensional array of the photographic images, is shown.
  • System 90 includes a processor 92 (e.g., a Pentium CPU), RAM memory 94 (e.g., 32 MB), and disk storage 96 (e.g., at least 30 MB) for storing the software tool described above.
  • System 90 is connected to a camera system 98 having a lens with a focal length of 18 mm.
  • Camera system 98 includes a pan head 99 mounted to a tripod 100 . Pan head 99 provides 6 degrees of freedom.
  • the width of each sheet of film was 24 mm,
  • the height of each sheet was 36 mm
  • the focal length of the camera lens was 18 mm
  • the overlapping ratio between contiguous pictures was 40%.
  • the number of pictures needed to be taken for building a spherical panorama was determined to be 38.
  • the bottom picture, that is, the picture along the direction of the south pole, was replaced by a marked pattern because the view field along the direction of the south pole was hidden by the camera tripod.

Abstract

The invention relates to building spherical panoramas for image-based virtual reality systems. The image-based spherical panoramas can be navigated in any desired view direction (360 degrees) by suitable three-dimensional image browsers or viewers. The method and system include computing the number of photographs required to be taken, and the azimuth angle of the center point of each photograph, for building a spherical environment map representative of the spherical panorama. They also include an algorithm for computing the accurate azimuth angles of the taken photographs and seaming the photographs together to build the spherical environment map.

Description

    BACKGROUND OF THE INVENTION
  • The present invention relates to building photographic panoramas for virtual reality, and in particular to building spherical environment maps from a set of photographs. [0001]
  • In virtual reality systems, synthesis and navigation of virtual environments are usually accomplished using one of the following approaches: 1) three-dimensional modeling and rendering; 2) branching movies; and 3) image-based approach. [0002]
  • Although three-dimensional modeling and rendering is the traditional approach, creating the three-dimensional geometrical entities is a laborious manual process, and special hardware (e.g., a graphic accelerator) may be necessary for rendering, in real time, a highly complex scene. [0003]
  • The branching movies approach is used extensively in the video game industry. Multiple movie segments are connected together to depict spatial paths of selected branch points. This approach is limited in navigability and interaction, and also requires a large amount of storage space to store movies. [0004]
  • For rendering complex scenes, such as real-world scenery, the image-based approach is often considered to be the most reasonable and feasible approach. The image-based approach renders scenes within a fixed time, independently of rendering quality and scene complexity. [0005]
  • The image based approach uses an environment map, which is established from a collection of images and characterizes the appearance of a scene when viewed from a particular position. The environment map contains the pixel values used to display the scene. The image environment map is first wrapped onto an object surface having a certain geometry, such as a cube, a sphere, or a cylinder. Afterwards, by locating the viewing position at the geometrical center of the wrapped object, perspective-corrected views of scenes can be reconstructed from the image on the object surface during playback. [0006]
  • There are many types of environment maps for storing data, including cubic maps, spherical maps, cylindrical maps, fish-eye or hemispherical maps, and planar maps. Some environment maps are capable of storing data of omni-directional scenes, while others are not. Others have used two additional kinds of environment maps in experimental systems: 1) the textured cylinder-like prism, and 2) the textured sphere-like polyhedron (see e.g., W. K. Tsao, "Rendering scenes in the real world for virtual environments using scanned images", Master Thesis of National Taiwan University, Advisor: Ming Ouhyoung, 1996). Rendering methods for different types of environment maps are described in U.S. Pat. No. 5,396,583; U.S. Pat. No. 5,446,833; U.S. Pat. No. 5,561,756; and U.S. Pat. No. 5,185,667, all of which are incorporated herein by reference. [0007]
  • As for building image-based panoramas, commercially available software (e.g., the QuickTime VR authoring tool suite from Apple Computer Inc.) can be used to create a seamless panoramic cylindrical image from a set of overlapping pictures. An improved stitching method is described in Hsieh et al., U.S. Ser. No. 08/933,758, filed Sep. 23, 1997, which is incorporated herein by reference. Stitching together photographic images taken from a fish-eye lens is also described in the developer manual of InfinitePictures, Inc. ("SmoothMove Panorama Web Builder", Developer Manual (Version 2.0) of InfinitePictures, Inc., 1996). [0008]
  • Spherical environment maps are used to store data relating to a surrounding scene. There exist many types of spherical environment maps. Spherical mapping mechanisms are described in U.S. Pat. Nos. 5,359,363, 5,384,588, 5,313,306, and 5,185,667, all of which are incorporated herein by reference. The major advantage of spherical mapping systems is that the environment map is able to provide users with 360 degrees of both horizontal and vertical panning of views within the mapped sphere. However, a common problem with spherical mapping systems relates to image acquisition. Generally, to acquire the image for a spherical environment map, users usually require special and relatively expensive cameras having fish-eye lenses. Two types of particular interest are the "spherical reflection" map and the "parametric spherical environment" map. [0009]
  • A “spherical reflection” map stores the image of an environment as an orthographic projection on a sphere representing a perfect reflection of the surrounding scene (Zimmermann U.S. Pat. No. 5,185,667). These images with circular shapes are all stored in respective square arrays of pixels. The major disadvantage of spherical reflection maps is that the orientations near the silhouette of the sphere (i.e., the ring-like shape when the sphere is orthographically projected on a sphere) are sampled very sparsely. This makes it more difficult to render the spherical reflection maps for omni-directional and interactive viewers. [0010]
  • SUMMARY OF THE INVENTION
  • The invention relates to building spherical panoramas or spherical environment maps for image-based virtual reality systems. The image-based spherical panoramas can be navigated in any desired view direction (360-degrees) by suitable three-dimensional image browsers or viewers. [0011]
  • In one aspect of the invention, a method of photographing pictures for building the spherical panoramas with a camera system includes determining a number of circles of latitude to build the spherical panorama on the basis of the focal length of the camera system, the height of photographic film used in the camera system, and an overlapping ratio between contiguous ones of the pictures, each circle of latitude including a series of contiguous pictures; and photographing the determined number of pictures. [0012]
  • The approach permits the use of a camera having a standard or wide-angle lens rather than a fish-eye lens, which is more expensive and less common. [0013]
  • Embodiments of this aspect of the invention may include one or more of the following features. [0014]
  • Determining the number of circles of latitude includes calculating the following equation: [0015]

      NumCircles_Landscape = ⌈ 180° / ((100 − k)% · 2·tan⁻¹(a/(2f))) − 1 ⌉
  • where: a is the height of the photographic film; [0016]
  • f is the focal length of the camera system; and [0017]
  • k is the overlapping ratio between contiguous pictures. [0018]
  • Alternatively, the number of circles of latitude can be determined by calculating the following equation: [0019]

      NumCircles_Portrait = ⌈ (180° − 2·tan⁻¹(b/(2f)) + 2k%·tan⁻¹(a/(2f))) / ((100 − k)% · 2·tan⁻¹(a/(2f))) ⌉
  • where: a is the height of the photographic film; [0020]
  • b is the width of the photographic film; [0021]
  • f is the focal length of the camera system; and [0022]
  • k is the overlapping ratio between contiguous pictures. [0023]
  • In another aspect of the invention, a method of transforming photographic images of a panoramic scene into a spherical environment map includes warping each of the plurality of photographic images into a parametric spherical environment map; and image cutting each of the warped photographic images into a rectangular image based on an elevation angle of a predetermined sight direction. [0024]
  • This aspect of the invention provides a reliable and efficient approach for transforming photographic images into a spherical environment map. A spherical environment map can be linearly mapped using the spherical coordinate system, in which each pixel is specified in terms of the pixel's azimuthal angle around the horizon and its elevation angle below the north pole, denoted by (θ, φ). Based on the angles (θ_v, φ_v) which define a picture's sight (or view) direction, a two-dimensional image of a picture can be transformed onto an image region of a spherical environment map using an image warping formula. This is regarded as one kind of coordinate transformation. Although each picture is taken based on the predefined sight direction, direction errors in taking pictures may exist. For computing the actual azimuth angles of the camera sight direction, an additional coordinate transformation using the azimuthal coordinate system is performed. Using these computed azimuth angles, the picture images are seamed together into a spherical environment map utilizing an image stitching technique. [0025]
  • Embodiments of this aspect of the invention may include one or more of the following features. [0026]
  • During image cutting, the warped photographic images are cut either horizontally or vertically on the basis of the elevation angle of the sight direction for each of the warped photographic images. For example, if the elevation angle of the sight direction, relative to one of the poles, for a warped photographic image is less than 10° or greater than 170°, the warped photographic image is cut into a rectangular region defined by side edges of the warped photographic image and either a bottom or a top edge of the warped photographic image, respectively. On the other hand, if the elevation angle of the sight direction for a warped photographic image is between 30° and 150°, the warped photographic image is cut into a rectangular region defined by two opposing horizontal lines and two opposing vertical lines. [0027]
  • Warping each of the photographic images includes mapping each pixel of the warped photographic image from at least one pixel of an associated photographic image. The attributes (e.g., color) of each pixel of the warped photographic image are derived from at least one pixel of the associated photographic image. [0028]
  • In still another aspect of the invention, a method of stitching together a series of photographic images to build a spherical panorama, includes computing sight directions for each of the photographic images; adjusting the intensity of the photographic images to an intensity related to an average of the intensities of the photographic images; and stitching together contiguous ones of the photographic images. [0029]
  • Embodiments of this aspect of the invention may include one or more of the following features. [0030]
  • Computing sight directions includes performing both vertical and horizontal registration of contiguous photographic images. In one implementation, a correlation-based algorithm is used to compute image phase correlation for all possible positions of image alignment. The position with the minimum alignment difference is then selected. The results of performing horizontal and vertical registration are inspected with a software tool. [0031]
  • Vertical registering is used to derive vertical relationships between images. One approach for vertically registering contiguous photographic images includes rotating each photographic image 90°; image warping the rotated photographic image with φ_L set to 90°; performing horizontal registration of contiguous photographic images; and then determining sight directions from the image positions after image registration. [0032]
  • Horizontal registration is performed to derive horizontal relationships between images and is usually performed after vertical registration. The method can further include a second “fine-tuning” vertical registration. [0033]
  • Image blending of overlapping regions of the contiguous photographic images can also be performed. For example, intensity levels of pixels in the overlapping regions are computed using the following equation: [0034]

      I(r_i) = (d_A^t · I(p_i) + d_B^t · I(q_i)) / (d_A^t + d_B^t)

  • where: p_i is the location of a pixel from a first one of the photographic images; [0035]
  • q_i is the location of a pixel from a corresponding second one of the photographic images contiguous with the first one of the photographic images; [0036]
  • d_A is the distance between location p_i and a boundary of the first one of the photographic images; [0037]
  • d_B is the distance between location q_i and a boundary of the second one of the photographic images; [0038]
  • I(p_i) is the intensity of the pixel at location p_i; [0039]
  • I(q_i) is the intensity of the pixel at location q_i; and [0040]
  • I(r_i) is the resultant intensity of the pixel. [0041]
  • When the panorama is navigated, an image browser for viewing spherical panoramas can retrieve the scene image from the spherical environment map based on the desired direction of view. [0042]
  • Other aspects of the invention include systems for implementing the methods described above. In particular, the systems are for photographing pictures of a panoramic scene for use in building a spherical panorama, for transforming photographic images of a panoramic scene into a spherical environment map, and for stitching together a two dimensional array of the photographic images to build a spherical panorama. The systems include computer-readable media having computer instructions for implementing the methods described above. [0043]
  • A computer-readable medium includes any of a wide variety of memory media such as RAM or ROM memory, as well as, external computer-readable media, for example, a computer disk or CD ROM. A computer program may also be downloaded into a computer's temporary active storage (e.g., RAM, output buffers) over a network. For example, the above-described computer program may be downloaded from a Web site over the Internet into a computer's memory. Thus, the computer-readable medium of the invention is intended to include the computer's memory which stores the above-described computer program that is downloaded from a network. [0044]
  • Other advantages and features of the invention will be apparent from the following description and from the claims.[0045]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates the relationship between rectangular coordinates and spherical coordinates. [0046]
  • FIG. 2 shows a coordinate system of a parametric spherical environment map. [0047]
  • FIG. 3 illustrates the principle of taking pictures with a camera. [0048]
  • FIG. 4 is a top view of two overlapping pictures taken at different sight directions extending through the equator. [0049]
  • FIG. 5 illustrates a two dimensional coordinate system for the picture of FIG. 3. [0050]
  • FIG. 6 is a three dimensional coordinate system illustrating the spatial relationship between the film and the lens of a camera. [0051]
  • FIGS. [0052] 7A-7E are a series of five warped images with φ set to 0°, 30°, 45°, 60°, and 90°.
  • FIGS. 8A and 8B illustrate the concept of horizontal and vertical cutting, respectively. [0053]
  • FIG. 9 shows a two-dimensional coordinate system for representing pixels of an image stored in a memory. [0054]
  • FIG. 10 is a flowchart showing the steps for computing accurate sight directions for the pictures used to build a spherical panorama. [0055]
  • FIG. 11 shows the offset between two contiguous images after registration. [0056]
  • FIG. 12 is a flowchart showing the steps for an approach for performing vertical registration. [0057]
  • FIG. 13 shows a series of four pictures of a scene taken at different elevation angles. [0058]
  • FIG. 14 illustrates the four pictures of FIG. 13 after image warping and registration. [0059]
  • FIGS. 15A and 15B illustrate the concept of image stitching in accordance with the invention. [0060]
  • FIGS. 16A and 16B illustrate the concept of image stitching in accordance with the invention. [0061]
  • FIG. 17 shows the concept of registering and blending a contiguous stitched image. [0062]
  • FIGS. 18A and 18B show stitched images which are wider and narrower, respectively, than the spherical environment map within which the stitched images are mapped. [0063]
  • FIG. 19 shows a stitched image having discontinuities between two picture boundaries which are caused by tilting or slanting of the camera system used to take the pictures. [0064]
  • FIG. 20 shows a computer system which is suitable for use with the invention.[0065]
  • DETAILED DESCRIPTION
  • In accordance with the invention, a “parametric spherical environment map” (PSEM) is used to store the data of an environment into a rectangular image in which the coordinates of each pixel are mapped to points on the surface of a sphere. The points are generally denoted by spherical coordinates (θ, φ). [0066]
  • Using a parametric spherical environment map (PSEM) to store the environment data of a spherical panorama has several advantages. The environment data of an entire spherical panorama can be included in a single contiguous image. All of the regions of the panorama are always sampled at least as much as the regions at the equator of the sphere so that the nearer to any pole a region is, the more oversampled will be that region. Based on the coordinates of both horizontal and vertical axes, the environment data of the desired sight direction can be easily retrieved from the map. “Translation” along lines of latitude in the map corresponds to “rotation” around the axis of two poles of a sphere. [0067]
  • Referring to FIG. 1, in the rectangular (or Cartesian) coordinate system (R3), a sphere of radius ρ can be represented with the following equation: [0068]
    $$x_R^2 + y_R^2 + z_R^2 = \rho^2 \tag{1}$$
  • where (xR, yR, zR) are the coordinates of any point 10 on the sphere. [0069]
  • Any point on a sphere of radius ρ can also be represented using the spherical coordinate system (ρ, θ, φ), where 0≤θ<2π and 0≤φ≤π to represent the angles in units of radians. Alternatively, 0°≤θ<360° and 0°≤φ≤180° can be used to represent the angles in units of degrees. For any point on a sphere, the relationship between the rectangular coordinates (xR, yR, zR) and the corresponding spherical coordinates (ρ, θ, φ) is as follows: [0070]
    $$x_R = \rho\sin\phi\cos\theta, \quad y_R = \rho\sin\phi\sin\theta, \quad z_R = \rho\cos\phi \tag{2}$$
  • Thus, for a sphere with a predefined radius ρ, the spherical surface has only two degrees of freedom, represented by two parameters, θ and φ. [0071]
  • Referring to FIG. 2, two orthogonal axes (X and Y) can, therefore, be used to represent θ and φ, respectively. All points on the sphere surface will map to the pixels within a rectangular region bounded by the four lines: θ=0°, θ=360°, φ=0°, and φ=180°. [0072]
  • A rectangular image can be used to represent a parametric spherical environment map. When a rectangular image is used to store the image of a spherical panorama, the resolution of each axis is defined in pixels/degree. Typically, the resolutions of the two axes are defined to be the same; in this embodiment, the width of the PSEM is then twice its height. However, the resolutions of the two axes can also differ. Assume that x_resolution_PSEM and y_resolution_PSEM are the predefined resolutions of the x and y axes, respectively. For any pixel in the PSEM with coordinates (xm, ym), the corresponding spherical coordinates θ and φ can be obtained as follows: [0073]
    $$\theta = x_m / \mathrm{x\_resolution\_PSEM}, \quad \phi = y_m / \mathrm{y\_resolution\_PSEM} \tag{3}$$
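  • To make Equation 3 concrete, the following C sketch (C-like code is also how the warping procedure of Appendix I is expressed) maps a PSEM pixel to its spherical angles; the resolution constants are illustrative assumptions, not values from the specification:

```c
/* Assumed PSEM axis resolutions in pixels per degree (illustrative only). */
#define X_RESOLUTION_PSEM 4.0
#define Y_RESOLUTION_PSEM 4.0

/* Map a PSEM pixel (xm, ym) to spherical angles (theta, phi) in degrees,
   following Equation 3: theta = xm / x_resolution, phi = ym / y_resolution. */
static void psem_to_spherical(double xm, double ym, double *theta, double *phi)
{
    *theta = xm / X_RESOLUTION_PSEM;   /* 0 <= theta < 360 */
    *phi   = ym / Y_RESOLUTION_PSEM;   /* 0 <= phi <= 180  */
}
```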
  • Picture-Taking Procedure [0074]
  • Referring to FIG. 3, the principle of taking pictures with a photographic system 12 for building a spherical panorama is illustrated. A photographic film 14 and camera lens 16 are the two primary components of photographic system 12. The distance between camera lens 16 and film 14 is the focal length or focal distance (f). An image of an object 18 is formed inverted on film 14. The size of the image is proportional to the focal length, and is inversely proportional to the distance between the object and the camera. Therefore, the size of the view field captured by a single picture depends on the focal length, the film width, and the film height. As the focal distance increases, the view field decreases. A 360-degree panoramic scene cannot be shown in a single picture taken with a camera having a standard lens; thus, more than one picture is needed to build a spherical panorama. The actual number of required pictures depends on four factors: 1) the focal length of the camera lens; 2) the width of the film; 3) the height of the film; and 4) the overlapping ratio between contiguous pictures used to register adjacent pictures. Along any view direction, the corresponding perspective view of the spherical panorama should be included in one or more of the pictures used to build the panorama. [0075]
  • Referring to FIG. 4, two pictures 20, 22 tangent to an imaginary sphere 24 of radius "f" are taken for respective sight directions extending through the equator (i.e., the circle of the equator whose plane is perpendicular to the axis extending through the poles of sphere 24). A portion of a spherical panorama is to be built from pictures photographed with photographic system 12 (FIG. 3). For example, let the film height be designated as "a", the film width as "b", the focal length as "f", and the overlapping ratio between contiguous pictures as "k" (in percent). In order to cover the entire surface of the sphere, circles of latitude, each including a series of contiguous pictures, are required. [0076]
  • The pictures can be taken using either a landscape or a portrait style picture-taking approach. Theoretically, the number of pictures taken with the two approaches to build a spherical panorama is the same. The number of circles of the landscape style is larger than that of the portrait style, but the number of pictures in one circle of the landscape style is generally smaller than that of the portrait style. [0077]
  • However, in practical implementations, the number of circles is generally restricted to an odd number because the degree to which the elevation can be varied with the photographic equipment about the equator is small. Horizontal registration is also applied to the pictures of the circle at the equator to determine the horizontal relationship between the panoramic pictures. Due to these restrictions, the number of pictures in the landscape style is typically larger than that of the portrait style. [0078]
  • Nevertheless, users can adopt either picture-taking approach depending on their application. [0079]
  • If the pictures are taken in the landscape style, the fewest number of circles of latitude (NumCirclesLandscape) required to be photographed is: [0080]
    $$\mathrm{NumCirclesLandscape} = \left[\frac{180°}{(100-k)\% \times \left[2\tan^{-1}\left(\frac{a}{2f}\right)\right]} - 1\right] \tag{4}$$
  • On the other hand, if pictures are taken in the portrait style, the fewest number of circles of latitude (NumCirclesPortrait) will be: [0081]
    $$\mathrm{NumCirclesPortrait} = \left[\frac{180° - 2\tan^{-1}\left(\frac{b}{2f}\right) + 2k\% \times \tan^{-1}\left(\frac{a}{2f}\right)}{(100-k)\% \times \left[2\tan^{-1}\left(\frac{a}{2f}\right)\right]}\right] \tag{5}$$
  • Based on the number of circles predefined, we can determine the sight direction of each circle below the north pole of the sphere, that is, the value of φ. [0082]
  • The number of pictures needed to completely cover the zone of each circle of the sphere surface depends on the sight direction above (or below) the equator: the nearer to a pole the sight direction is, the fewer pictures are needed. However, taking the design of the image stitching algorithm into consideration, the same number of pictures is taken for each circle along the top and bottom directions. The actual number of pictures required for a circle is determined by the possible width of warped images and the overlapping ratio between contiguous pictures. Standard photographic equipment being imperfect, it is difficult to take pictures with very accurate sight directions. Thus, the narrowest possible width of warped images after cutting is used to derive the number of pictures to be taken in a circle of latitude. The narrowest width occurs where the first-order derivative of WarpImageWidth(φj) is equal to zero, that is, at φj = tan⁻¹(2f/a). The number of pictures taken for a circle of latitude, NumPictures(a, f, k), is: [0083]
    $$\mathrm{NumPictures}(a, f, k) = \left[\frac{360°}{\mathrm{WarpImageWidth}\left(\tan^{-1}\left(\frac{2f}{a}\right)\right) \times (100-k)\%}\right] \tag{6}$$
  • Assume, for example, that the photographic system has a focal length of 18 mm, and the film has a width of 24 mm and a height of 36 mm. Assume further that a portrait-style picture-taking approach is used. Based on Equation 5, in addition to two pictures along the two poles, three circles of latitude are required to provide an overlapping ratio of 40%. The view field along the direction of the south pole will be obstructed by the tripod of the camera; it will be substituted by a marked or patterned image. The narrowest width appears at φ equal to 45° or 135°. Using Equation 6, the number of pictures in one circle of latitude is 12. Thus, for three circles of latitude, 36 pictures plus an additional picture along the direction of the north pole are required to cover the sphere. These 37 pictures will be used to explain the proposed image stitching algorithm and show the effectiveness of the method proposed in this invention. [0084]
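  • The arithmetic of this example can be checked with a short C program, a minimal sketch assuming the reconstructed forms of Equations 5, 6, and 16 above (names and structure are illustrative):

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#define DEG(x) ((x) * 180.0 / M_PI)   /* radians -> degrees */

/* Width in degrees of a warped image after vertical cutting (Equation 16). */
static double warp_image_width(double a, double b, double f, double phi)
{
    return DEG(2.0 * atan(b / (a * cos(phi) + 2.0 * f * sin(phi))));
}

int main(void)
{
    double f = 18.0, a = 36.0, b = 24.0;  /* focal length, film height, width (mm) */
    double k = 40.0;                      /* overlapping ratio in percent */

    /* Equation 5: fewest circles of latitude, portrait style. */
    double circles = (180.0 - DEG(2.0 * atan(b / (2.0 * f)))
                      + 2.0 * (k / 100.0) * DEG(atan(a / (2.0 * f))))
                     / ((1.0 - k / 100.0) * DEG(2.0 * atan(a / (2.0 * f))));

    /* Equation 6: pictures per circle, using the narrowest warped width,
       which occurs at phi = arctan(2f/a) (45 degrees here). */
    double phi_narrow = atan(2.0 * f / a);
    double pictures = 360.0 / (warp_image_width(a, b, f, phi_narrow)
                               * (1.0 - k / 100.0));

    printf("circles = %d, pictures per circle = %d\n",
           (int)ceil(circles), (int)ceil(pictures));  /* prints 3 and 12 */
    return 0;
}
```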
  • In one embodiment, the photographic equipment used to take pictures includes a pan head which controls the rotation angles of the camera around the polar axis. It is desirable to keep errors in rotation angle within five degrees. However, it may be difficult to automatically control the panning above or below the equator resulting in errors larger than five degrees. [0085]
  • Transforming Photographic Images to a Parametric Spherical Environment Map [0086]
  • A parametric spherical environment map (PSEM) is used to store the environment data of a spherical panorama. First, using an "image warping procedure", each picture is transformed into the PSEM based on a sight direction designated as (θi, φi). These warped images are stitched together as a complete seamless image of the PSEM using image registration. Conventional image processing algorithms represent images with two-dimensional arrays; that is, the shape of most images used in image processing is rectangular. However, on the PSEM, the shapes of warped images of rectangular pictures are not rectangular. The shapes depend on the value of the angle φ below the north pole. [0087]
  • For a picture, we can utilize a two dimensional Cartesian coordinate system to describe the viewing plane. Referring to FIG. 5, the origin of the two dimensional Cartesian coordinate system is defined to be at the center point of the picture. The x axis is along the direction of film width (designated as “b”) and the y axis is along that of film height (designated as “a”). The coordinates of the four corner points of the picture (P1, P2, P3, and P4) are (b/2, a/2), (−b/2, a/2), (−b/2, −a/2), and (b/2, −a/2); the coordinates of the center points of the four picture bounding edges (Q1, Q2, Q3, and Q4) are (0, a/2), (−b/2, 0), (0, −a/2), and (b/2, 0), respectively. [0088]
  • Referring to FIG. 6, let the spatial relationship between the film and the lens within a camera be described by a three dimensional Cartesian coordinate system. The optical center of the lens of the camera is located at the origin of the three dimensional coordinate system, with the film located along the z-axis at point f. Thus, the coordinates of the four corner points P1, P2, P3, and P4 will be (b/2, a/2, f), (−b/2, a/2, f), (−b/2, −a/2, f) and (b/2, −a/2, f); the coordinates of the four center points Q1, Q2, Q3, and Q4 will be (0, a/2, f), (−b/2, 0, f), (0, −a/2, f) and (b/2, 0, f), respectively. [0089]
  • Assume that the sight direction of picture i is designated (θi, φi). With horizontal cutting, the width of the rectangular image after cutting, WarpImageWidth, is equal to the width of the PSEM, WidthPSEM, and is obtained as follows: [0090]
    $$\mathrm{WidthPSEM} = 360° \times \mathrm{x\_resolution\_PSEM} \tag{7}$$
  • The image height, independent of θi, is computed from the coordinates of point Q2 (−b/2, 0, f) and point Q3 (0, −a/2, f). Thus, the sole cutting line is determined based on the minimum y coordinate of point Q2 and point Q3. The y coordinate of Q2 in the PSEM is calculated as follows: [0091]
    $$y_m(\phi_i, Q2) = \mathrm{y\_resolution\_PSEM} \times \cos^{-1}\left[\frac{2f\cos\phi_i}{(b^2 + 4f^2)^{1/2}}\right] \tag{8}$$
  • and the y coordinate of Q3 in the PSEM is calculated as follows: [0092]
    $$y_m(\phi_i, Q3) = \mathrm{y\_resolution\_PSEM} \times \cos^{-1}\left[\frac{-a\sin\phi_i + 2f\cos\phi_i}{(a^2 + 4f^2)^{1/2}}\right] \tag{9}$$
  • where φi is the angle of picture i's sight direction below the north pole of the sphere. Therefore, the sole cutting line for performing horizontal cutting is [0093]
    $$y = \min\left(y_m(\phi_i, Q2),\; y_m(\phi_i, Q3)\right) = \mathrm{y\_resolution\_PSEM} \times \min\left(\cos^{-1}\left[\frac{2f\cos\phi_i}{(b^2+4f^2)^{1/2}}\right],\; \cos^{-1}\left[\frac{-a\sin\phi_i + 2f\cos\phi_i}{(a^2+4f^2)^{1/2}}\right]\right) \tag{10}$$
  • As for vertical cutting, the two opposing horizontal lines and the two opposing vertical lines can be derived from the coordinates of point Q1 (0, a/2, f), point Q3 (0, −a/2, f), point P3 (−b/2, −a/2, f), and point P4 (b/2, −a/2, f), respectively (see FIG. 8B). Similarly, the sight direction of picture j is designated as (θj, φj). The equations of the two vertical lines are x = xm(θj, φj, P3) and x = xm(θj, φj, P4), respectively. The x coordinate of P3 in the PSEM is [0094]
    $$x_m(\theta_j, \phi_j, P3) = \mathrm{x\_resolution\_PSEM} \times \left[\theta_j + \tan^{-1}\left(\frac{-b}{a\cos\phi_j + 2f\sin\phi_j}\right)\right] \tag{11}$$
  • and the x coordinate of P4 in the PSEM is [0095]
    $$x_m(\theta_j, \phi_j, P4) = \mathrm{x\_resolution\_PSEM} \times \left[\theta_j + \tan^{-1}\left(\frac{b}{a\cos\phi_j + 2f\sin\phi_j}\right)\right] \tag{12}$$
  • where the angle of picture j's sight direction below the north pole of the sphere is φj. [0096]
  • The equations of the two horizontal lines, independent of θj, are y = ym(φj, Q1) and y = ym(φj, Q3), respectively. The y coordinate of Q1 in the PSEM is [0097]
    $$y_m(\phi_j, Q1) = \mathrm{y\_resolution\_PSEM} \times \cos^{-1}\left[\frac{a\sin\phi_j + 2f\cos\phi_j}{(a^2 + 4f^2)^{1/2}}\right] \tag{13}$$
  • and the y coordinate of Q3 in the PSEM, ym(φj, Q3), can be computed using Equation 9. [0098]
  • For picture j, the coordinates (x_LT, y_LT) of the left-top corner point of the rectangular image are then equal to (xm(θj, φj, P3), ym(φj, Q1)). [0099]
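  • The four cutting lines of Equations 9 and 11-13 translate into a small routine; a C sketch under assumed conventions (resolution constants in pixels per degree, struct and function names are illustrative):

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#define X_RES 4.0                     /* assumed pixels per degree */
#define Y_RES 4.0
#define DEG(x) ((x) * 180.0 / M_PI)   /* radians -> degrees */

typedef struct { double x_lt, y_lt, x_rb, y_rb; } CutRect;

/* Vertical-cutting rectangle for picture j with sight direction
   (theta_j, phi); theta_j in degrees, phi in radians for brevity. */
static CutRect cut_rect(double a, double b, double f,
                        double theta_j_deg, double phi)
{
    double denom = a * cos(phi) + 2.0 * f * sin(phi);
    double norm  = sqrt(a * a + 4.0 * f * f);
    CutRect r;

    /* Vertical lines from the bottom corner points P3 (-b/2) and P4 (b/2). */
    r.x_lt = X_RES * (theta_j_deg + DEG(atan(-b / denom)));  /* Eq. 11 */
    r.x_rb = X_RES * (theta_j_deg + DEG(atan( b / denom)));  /* Eq. 12 */

    /* Horizontal lines from the edge midpoints Q1 and Q3. */
    r.y_lt = Y_RES * DEG(acos(( a * sin(phi) + 2.0 * f * cos(phi)) / norm)); /* Eq. 13 */
    r.y_rb = Y_RES * DEG(acos((-a * sin(phi) + 2.0 * f * cos(phi)) / norm)); /* Eq. 9 */
    return r;
}
```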
  • Referring to FIGS. 7A-7E, five warped images having different shapes are shown, each of which is transformed from a rectangular picture of a photographic image based on one of five different sight directions. For the five directions, the angle θ around the equator is set to 180°, but the angle φ below the north pole is set to five different values: [0100]
  • φ=0°, φ=30°, φ=45°, φ=60°, and φ=90°. [0101]
  • The image stitching algorithm will now be discussed. [0102]
  • Image warping to generate the PSEM from photographic images includes two kinds of functions: “image warping transformation” and “image cutting”. The warped images with irregular shapes are cut into rectangular shapes to be used later during the stitching processing. As illustrated by FIGS. [0103] 7A-7E, the shapes of warped images on the PSEM depend on the φ values below the north pole of the sphere. The nearer to any pole of a sphere the sight direction is, the wider the warped image is. Thus, the manner of cutting images depends on the shape of a warped image which, in turn, depends on the value of φ in the sight direction.
  • In one embodiment, for example, a camera system having a focal length of 18 mm, a film width of 24 mm, and a film height of 36 mm is used to provide photographs for building a spherical panorama. Referring to FIGS. 8A and 8B, the rules of image cutting for such a camera system can be established as follows: [0104]
  • If φ is less than 10° or greater than 170°, the warped image is cut into a rectangular region by one horizontal line (FIG. 8A). This manner of cutting is called horizontal cutting. [0105]
  • If φ is between 30° and 150°, the image region is cut into a rectangular region by four lines (FIG. 8B). The coordinates of the left-top corner point of the rectangle are designated as (x_LT, y_LT). This manner of image cutting is called vertical cutting. [0106]
  • If φ is between 10° and 30°, or between 150° and 170°, the type of image cutting is determined based on the requirements of the particular application. [0107]
  • Referring to FIG. 9, to represent an image using a two-dimensional array, a two-dimensional coordinate system is defined in which the left-top corner of the image is located at the origin, the positive x direction is to the right, and the positive y direction extends downward. The x and y coordinates of a pixel indicate the serial ranks of the pixel from left to right and from top to bottom, respectively. During image processing, all pixels of a two-dimensional image are loaded into a one-dimensional memory array from a disk file or CD-ROM. Each pixel address in the memory array is denoted by a variable, here called Offset, which is defined and calculated based on the x and y coordinates of the pixel as follows: [0108]
  • Offset = k*(y*ImageWidth + x)  (14)
  • where ImageWidth is the number of pixels in each row of the image; and k is the number of bytes used to represent each pixel. [0109]
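  • Equation 14 translates directly into a helper such as the following C sketch:

```c
/* Byte offset of pixel (x, y) in a one-dimensional image buffer (Equation 14);
   k is the number of bytes per pixel (e.g., 3 for 24-bit RGB). */
static long pixel_offset(int x, int y, int image_width, int k)
{
    return (long)k * ((long)y * image_width + x);
}
```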
  • For horizontal cutting, the width of the rectangular image after cutting, called "WarpImageWidth", is equal to the width of the PSEM, WidthPSEM, and can be calculated using Equation 7 above. [0110]
  • The image height after horizontal cutting, called WarpImageHeight, can be determined using the following relationship: [0111]
    $$\mathrm{WarpImageHeight}(\phi_i) = \mathrm{y\_resolution\_PSEM} \times \min\left(\cos^{-1}\left[\frac{2f\cos\phi_i}{(b^2+4f^2)^{1/2}}\right],\; \cos^{-1}\left[\frac{-a\sin\phi_i + 2f\cos\phi_i}{(a^2+4f^2)^{1/2}}\right]\right) \tag{15}$$
  • As for vertical cutting, the width of a rectangular image after vertical cutting is computed as follows: [0112]
    $$\mathrm{WarpImageWidth}(\phi_j) = 2\tan^{-1}\left(\frac{b}{a\cos\phi_j + 2f\sin\phi_j}\right) \tag{16}$$
  • The height of a rectangular image after vertical cutting, WarpImageHeight, is then computed as follows: [0113]
    $$\mathrm{WarpImageHeight}(\phi_j) = \mathrm{y\_resolution\_PSEM} \times \left(\cos^{-1}\left[\frac{-a\sin\phi_j + 2f\cos\phi_j}{(a^2+4f^2)^{1/2}}\right] - \cos^{-1}\left[\frac{a\sin\phi_j + 2f\cos\phi_j}{(a^2+4f^2)^{1/2}}\right]\right) \tag{17}$$
  • For each warped image, instead of storing the entire PSEM, only the rectangular image after cutting is required to be stored and processed during image stitching. When the rectangular image is represented by a two-dimensional array, its pixels are described using the coordinate system described above in conjunction with FIG. 9. The coordinates of the left-top corner point (x_LT, y_LT) are designated as (0, 0) in the new system. As for the coordinate transformation, the rectangular image is translated to the left-top of the PSEM. Then, the PSEM is cut to the same size as the rectangular image. The image remaining after cutting will be used and processed during image stitching, and is stored into a memory array. [0114]
  • In the PSEM space, "translation" along lines of latitude of the sphere (i.e., the equatorial direction) corresponds to "rotation" around the axis extending through the two poles of the sphere. Therefore, the angle of the sight direction around the horizon, θL, used for image warping, can be represented by the following equation: [0115]
    $$\theta_L = -\tan^{-1}\left(\frac{b}{a\cos\phi_j + 2f\sin\phi_j}\right) \tag{18}$$
  • The original y coordinate of a pixel in the [0116] PSEM is equal to the y coordinate in the new coordinate system plus y_LT. The y_LT variable is then designated as φ_TranslatePixels.
  • In one implementation, the width and height of the rectangular warped image after cutting is first calculated based on sight direction. The information of the image size will be useful in the image warping procedure described below. This procedure is described using a C-like programming language. The source code listing is included in Appendix I. Other similar programming languages may be used to implement the image warping procedure. [0117]
  • The program includes input and output arguments, both of which are listed at the start of the source code listing. In the following image warping procedure, the input arguments include the focal length, designated as "f"; the film width, as "FilmWidth"; the film height, as "FilmHeight"; and the memory storing the picture image, designated as "PictureImage". The width and height of the picture image to be processed are designated as "PictureImageWidth" and "PictureImageHeight", respectively, and the width and height of the warped image on the PSEM after image cutting are designated as "WarpImageWidth" and "WarpImageHeight", respectively. The resolutions of the axes of the PSEM are designated as "x_resolution_PSEM" and "y_resolution_PSEM", respectively. The sight direction of the picture below the north pole is designated as φL, and the number of pixels translated for storing the cut image is designated as "φ_TranslatePixels". The sole output argument is the memory storing the warped image, which is designated as "WarpImage". [0118]
  • Each pixel in the warped image is mapped from at least one pixel in the photographic image. The attributes (e.g., color) of each pixel in the warped image are derived from those of a corresponding pixel in the photographic image. Thus, for each pixel in the warped image, the memory address of the corresponding photographic pixel needs to be derived. In the description of the procedure, the x and y coordinates of each pixel in the warped image after cutting are designated as "x_index" and "y_index"; the image width and image height of the photographic image are designated as "m" and "n"; the address of the memory storing the photographic image is designated as "OffsetPictureImage"; and the address of the memory storing the warped image is designated as "OffsetWarpImage". [0119]
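  • The patent's actual warping procedure appears in Appendix I (not reproduced here); the following C sketch illustrates the same inverse-mapping idea under assumed conventions: the camera frame, nearest-neighbor sampling, 3-byte RGB pixels, the orientation signs, and all names are illustrative choices, not the patent's code.

```c
#include <math.h>
#include <string.h>

/* Inverse-mapping warp loop with nearest-neighbor sampling.  For every
   pixel of the cut warped image, the corresponding PSEM angles are
   computed, the viewing ray is projected onto the film plane z = f of
   the camera, and the matching picture pixel is copied.  Angles and
   resolutions are in radians here. */
static void warp_image(const unsigned char *pic, int pic_w, int pic_h,
                       unsigned char *warp, int warp_w, int warp_h,
                       double f, double film_w, double film_h,
                       double theta_i, double phi_i,   /* sight direction */
                       double x_res, double y_res,     /* pixels/radian */
                       int x_lt, int y_lt)             /* cut-region corner */
{
    /* Camera frame: zc = optical axis, xc = direction of increasing theta,
       yc completes a right-handed triad. */
    double zc[3] = { sin(phi_i)*cos(theta_i), sin(phi_i)*sin(theta_i), cos(phi_i) };
    double xc[3] = { -sin(theta_i), cos(theta_i), 0.0 };
    double yc[3] = { zc[1]*xc[2] - zc[2]*xc[1],
                     zc[2]*xc[0] - zc[0]*xc[2],
                     zc[0]*xc[1] - zc[1]*xc[0] };

    for (int y = 0; y < warp_h; y++) {
        for (int x = 0; x < warp_w; x++) {
            double theta = (x + x_lt) / x_res;     /* PSEM pixel -> angles */
            double phi   = (y + y_lt) / y_res;
            double v[3]  = { sin(phi)*cos(theta), sin(phi)*sin(theta), cos(phi) };

            double vz = v[0]*zc[0] + v[1]*zc[1] + v[2]*zc[2];
            if (vz <= 0.0)
                continue;                          /* ray behind the camera */
            double fx = f * (v[0]*xc[0] + v[1]*xc[1] + v[2]*xc[2]) / vz;
            double fy = f * (v[0]*yc[0] + v[1]*yc[1] + v[2]*yc[2]) / vz;

            /* Film coordinates -> picture pixel indices. */
            int px = (int)((fx / film_w + 0.5) * pic_w);
            int py = (int)((fy / film_h + 0.5) * pic_h);
            if (px < 0 || px >= pic_w || py < 0 || py >= pic_h)
                continue;

            memcpy(&warp[3 * ((long)y * warp_w + x)],
                   &pic[3 * ((long)py * pic_w + px)], 3);
        }
    }
}
```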
  • Image Stitching a Spherical Panorama [0120]
  • Building a spherical panorama using image stitching includes three important stages: 1) computing accurate sight directions of the pictures used to build the panorama; 2) adjusting the intensity of the picture images; and 3) stitch processing. Based on the predefined picture-taking approach discussed above, a set of overlapping pictures is taken with a conventional camera. The number of pictures, when overlapped, is sufficient for building a spherical panorama. An image stitching approach (described below) is then used to stitch the photographic images shown in the pictures together into a complete image of the spherical panorama. [0121]
  • The first and most important stage in stitching these overlapping photographic images is to compute the accurate sight direction of each picture. The sight directions of the pictures are needed to determine whether the photographic images can be stitched together or not. Although, as discussed above, each picture is taken based on a predefined sight direction, errors in sight direction due to imperfect photographic equipment and its setup may still exist. Therefore, the image stitching approach performs image registration in both the horizontal (latitudinal) and vertical (longitudinal) directions of the PSEM to compute the accurate sight direction of each picture. [0122]
  • Referring to FIG. 10, a flow diagram shows the steps required for computing accurate sight directions for the pictures. The process contains three steps: 1) vertical (or longitudinal) image registration (30); 2) horizontal (or latitudinal) image registration (32); and 3) fine tuning of the pictures' sight directions along longitudinal directions (34). In addition to taking the same number of pictures in each circle of latitude, a picture is also taken along the zenith direction. The spatial relationships of these pictures can be described with a two-dimensional array of rows and columns. The pictures in each row are taken at the same latitudinal angle, and the pictures in each column are taken at the same longitudinal angle. To obtain a seamless image of a spherical panorama, images are registered in each row and each column. Thus, both horizontal image registration and vertical image registration are required. [0123]
  • A semi-automatic approach accomplishes the required image registration. A correlation-based algorithm as described in U.S. Ser. No. 08/933,758 is used to initially register the contiguous images. However, inconsistent intensities in images may result in registration errors. Thus, a software tool having a convenient graphical user interface (GUI) is used to inspect the registration results and correct any existing errors. The tool can display an array of warped images on a screen based on the positions obtained from image registration. Overlapping regions of two adjacent images can be seen on the screen; thus, the result of the image registration can be inspected with the software tool. If the result is not perfect, users can move the warped images (e.g., with a mouse) to obtain better registration. Before performing image registration, all of the pictures to be registered are transformed onto the space of a PSEM. [0124]
  • Referring to FIG. 11, assume that an image A and an image B are two warped images provided through image cutting that have already been registered. A vector from the center 42 of image A to the center of image B is designated as (ΔxBA, ΔyBA). The difference in their sight directions, (ΔθBA, ΔφBA), can be obtained as follows: [0125]
    $$\Delta\theta_{BA} = \Delta x_{BA} / \mathrm{x\_resolution\_PSEM}, \quad \Delta\phi_{BA} = \Delta y_{BA} / \mathrm{y\_resolution\_PSEM} \tag{19}$$
  • As discussed above, precisely controlling the level of the camera during panning is difficult. Thus, deviations in panning above and below the equator of the sphere can result in significant errors. Basically, with a correlation-based algorithm, image registration is accomplished by computing the image phase correlation for all of the possible positions of image alignment and selecting the one resulting in a minimum alignment difference. Searching for the best alignment position (x, y) in a two-dimensional image array is accomplished through a series of translations along the x and y directions. [0126]
  • The PSEM has the characteristic that "translation" along the equatorial direction in the map corresponds to "rotation" around the axis of the two poles of the sphere. The warped image of a particular picture can be obtained from another warped image of the same picture with the same angle below the north pole but with a different θ around the latitudinal direction. However, warped images must be recomputed for different φ's. Thus, another coordinate system is used to transpose a vertical image registration in the original coordinate system into a horizontal image registration in the new coordinate system. [0127]
  • The difference of sight directions around the polar axis in the new coordinate system corresponds to that along the longitudinal direction in the original coordinate system. The time required for vertical registration can be dramatically sped up, particularly for those with large differences in φ, by eliminating the recomputation of warped images for different φ's. [0128]
  • Referring to FIG. 12, to compute accurate sight directions of pictures, a vertical image registration in the original coordinate system can be performed based on the following four steps: [0129]
  • 1. Rotate each photographic image 90° clockwise (step 50). This rotation can be accomplished by storing each row of image pixels into a column of another image array indexed in the inverse sequence, as sketched in the C code following this list. [0130]
  • 2. Apply the "image warping procedure" described above to the rotated image with φL set to 90° (step 52). [0131]
  • 3. Apply horizontal image registration to the warped images (step 54). The pictures arranged from left to right are in the order of decreasing φ's. [0132]
  • 4. Derive accurate sight directions from the image positions of image registration (step [0133] 56).
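  • A C sketch of the 90° clockwise rotation of step 1, storing each source row into a destination column in inverse sequence (3-byte RGB pixels are an assumption of this sketch):

```c
#include <string.h>

/* Rotate an RGB image 90 degrees clockwise (step 1 above): row y of the
   source becomes column (src_h - 1 - y) of the destination, whose
   dimensions are src_h x src_w. */
static void rotate90_cw(const unsigned char *src, int src_w, int src_h,
                        unsigned char *dst)
{
    for (int y = 0; y < src_h; y++)
        for (int x = 0; x < src_w; x++)
            memcpy(&dst[3 * (x * src_h + (src_h - 1 - y))],
                   &src[3 * (y * src_w + x)], 3);
}
```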
  • Assume that the sight direction of one picture on the equator is designated as (θe, φe), and that of a contiguous picture with the same longitudinal angle above the equator is designated as (θa, φa). Through a coordinate transformation and a horizontal image registration, the difference of sight directions in the new coordinate system, denoted by (Δθ(n), Δφ(n)), can be obtained. The sight direction of the picture above the equator in the original coordinate system can be derived as follows: [0134]
    $$\theta_a = \theta_e - \Delta\phi^{(n)}, \quad \phi_a = \phi_e - \Delta\theta^{(n)} \tag{20}$$
  • The sight directions of the pictures in other circles can also be calculated and derived in similar manner. [0135]
  • FIG. 13 shows, from left to right, pictures taken of a scene in which the elevation angle of the camera has been changed. In particular, pictures 60a-60c were photographed below, at, and above the equator, respectively. Picture 60d is taken in the zenithal direction. The respective predefined θ's of these pictures have errors within five degrees. However, the possible errors of the φ's may be much greater. The four pictures are to be registered along the longitudinal direction. [0136]
  • Referring to FIG. 14, the horizontal image registration of pictures [0137] 60 a-60 d through the coordinate transformation procedure described above is shown.
  • Following coordinate transformation, the spatial relationships between contiguous pictures around the equatorial direction are determined. To compute accurate sight directions of the pictures, the horizontal (or latitudinal) image registration is performed on the pictures of the circle at the equator because the variance of the pictures' sight directions in φ is smaller there than in the other circles of latitude. Following horizontal image registration, accurate sight directions of all the pictures on the equator, (θe, φe)'s, are obtained. The sight directions of the respective pictures in the other circles of latitude can then be indirectly derived as discussed above. [0138]
  • As described above, normal registration is divided into horizontal registration and vertical registration. Both registration steps process one-dimensional arrays of images; therefore, only two adjacent images are compared during normal image registration. After performing two-directional image registration, a software tool is used to inspect the registration of the stitched images. The stitched image of the equatorial circle should be seamless because horizontal image registration is applied to the pictures on that circle. However, seams may exist between certain contiguous images of the other circles along the equatorial (or latitudinal) direction because their spatial relationships are indirectly derived from the positions of the pictures on the equator. If seams exist, fine tuning of the picture positions of the other circles along the longitudinal direction should be performed to eliminate the seams in the stitched image. Fine tuning can be accomplished automatically using a software tool for image registration or by performing manual fine tuning. Fine tuning of image registration is performed to obtain better stitching of a two-dimensional image array. Each column of images is processed by vertical registration, and only the row of images at the equator is processed by horizontal registration; the positions of images in the other rows are derived indirectly and are then processed using horizontal registration. For horizontal registration of the other rows of images, both the horizontal and vertical relationships can be tuned, but the horizontal relationship can only be modified slightly. This type of horizontal registration is called "fine-tuning." The final sight directions of all pictures after this fine tuning procedure are recorded for the later stage of stitch processing. [0139]
  • The brightness of all pictures to be seamed together should be tuned to an average brightness (or intensity) value. Because the shutter of a camera opens to differing degrees, and each picture is taken at a different instant in time, the brightness (or intensity) of contiguous pictures may differ greatly. It is therefore necessary, in building a panoramic image, that the brightness of contiguous pictures be smoothed. [0140]
  • Referring to FIGS. 15A and 15B, in the example of the invention, only the two pictures along the two poles are processed using horizontal cutting, while the others are all processed by vertical cutting. During stitch processing, based on the computed accurate sight directions, the warped images provided by vertical cutting in one circle of latitude are first seamed together from left to right as a flat rectangular image (FIG. 15A). In this example, the picture along the south pole is replaced by a rectangular marked or patterned image because the view field is hidden by the camera tripod. Therefore, including the picture along the north pole, there are four such flat rectangular images to be seamed together via image processing techniques. The position of each flat rectangular image in the PSEM can be acquired from the sight directions of the pictures. Therefore, based on one sequence from top to bottom, or the inverse sequence, the four flat images can be seamed together, as shown in FIG. 15B. [0141]
  • During stitch processing, discontinuities of image intensity may exist between two contiguous images even though the intensities of these pictures were adjusted. Therefore, for an overlapping image region between two images, image blending should be applied to the overlapping region to smooth the discontinuities between two images. [0142]
  • Referring to FIGS. 16A and 16B, the spatial relationship between two images to be stitched together can be from left to right or from top to bottom. For example, assume that two images 62, 64 are to be stitched together. A pixel 66 of image 62, located in the overlapping region, is denoted by pi, and the pixel of image 64, located at the same position as pixel 66 in the stitched image, is denoted by qi. The corresponding pixel in the stitched image is designated as ri. To achieve image blending, the intensity of pixel ri is calculated from the intensities of pi and qi as well as the two distances from the pixel position to the respective boundaries, as follows: [0143]
    $$I(r_i) = \frac{d_A^t \cdot I(p_i) + d_B^t \cdot I(q_i)}{d_A^t + d_B^t} \tag{21}$$
  • where dA is the distance between pi and the boundary of image 62; dB is the distance between qi and the boundary of image 64; and t is a power factor. Image blending is observed visually, with power factor t adjusted empirically to provide the optimum blending (e.g., set to 3) (see U.S. Ser. No. 08/933,758 for further details relating to determining power factor t). [0144]
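  • The blend of Equation 21 can be expressed as a small C helper; a minimal sketch, with the power factor passed in (the text suggests t = 3 as an empirically good choice):

```c
#include <math.h>

/* Distance-weighted blend of two overlapping pixel intensities
   (Equation 21); dA and dB are the distances from the pixel position
   to the respective image boundaries. */
static double blend_intensity(double I_p, double I_q,
                              double d_A, double d_B, double t)
{
    double wA = pow(d_A, t), wB = pow(d_B, t);
    return (wA * I_p + wB * I_q) / (wA + wB);
}
```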
  • During stitch processing, it may be necessary to perform additional image processing to obtain a continuous stitched image between the left-most and the right-most boundaries. [0145]
  • Referring to FIG. 17, when warped images 70a-70n in a circle of latitude are to be seamed into a flat rectangular image from left to right using vertical cutting, the coordinates of the left-top corner 72 of the left-most image 70a in the circle of latitude are designated as (sx1, sy1). The warped images 70a-70n of the pictures in the circle are stitched together, one by one, based on the sight directions computed by image registration. Finally, by placing the left-most image 70a at the right side of the right-most image 70n, the two images are registered and blended to obtain a contiguous stitched image through 360° of panning. [0146]
  • Assume that the coordinates of the left-top corner of the left-most image, after being placed to the right side of the right-most image and registered, are (sx′1, sy′1). Ideally, the width of the stitched image, equal to the difference between sx1 and sx′1, should be the same as the width of the PSEM. Moreover, it would be ideal if the y coordinate sy1 were the same as sy′1 so that the stitched image would not slant. [0147]
  • However, in practical situations, the photographic equipment used to take the pictures is imperfect and the nodal point of the camera lens cannot be kept absolutely stationary for all of the pictures. [0148]
  • A stitched image may be wider or narrower than the PSEM within which it is mapped. In order to utilize the PSEM to store the stitched image, modifications are required so that the image width is equal to that of the PSEM: columns of pixels are eliminated or inserted to reduce or increase the image width, respectively. [0149]
  • Referring to FIG. 18A, for example, if a stitched image 76 is wider than a PSEM 78, one column of pixels is eliminated every d columns. On the other hand, as shown in FIG. 18B, if a stitched image 80 is narrower than a PSEM 82, one column of pixels is repeated every d columns. For example, let the difference between the two widths be designated as Δx (= sx′1 − WidthPSEM). The spacing d between two columns of pixels to be modified is defined as: [0150]
    $$d = \left[\frac{\Delta x + \mathrm{WidthPSEM}}{\Delta x}\right] = \left[\frac{sx'_1}{\Delta x}\right] \tag{22}$$
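  • A C sketch of the width correction for the too-wide case of FIG. 18A, dropping one column every d columns per Equation 22 (3-byte RGB pixels and all names are assumptions; the too-narrow case duplicates columns analogously):

```c
#include <string.h>

/* Width correction for a stitched RGB image that is wider than the PSEM:
   one column of pixels is eliminated every d columns (Equation 22).
   dst must hold dst_w x h pixels, with dst_w = src_w minus the number
   of dropped columns. */
static void drop_columns(const unsigned char *src, int src_w, int h,
                         unsigned char *dst, int dst_w, int d)
{
    for (int y = 0; y < h; y++) {
        int xd = 0;
        for (int xs = 0; xs < src_w && xd < dst_w; xs++) {
            if ((xs + 1) % d == 0)
                continue;                      /* eliminated column */
            memcpy(&dst[3 * (y * dst_w + xd)],
                   &src[3 * (y * src_w + xs)], 3);
            xd++;
        }
    }
}
```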
  • Referring to FIG. 19, a stitched image 84 may also slant to one side due to, for example, the camera being mounted on the equipment in a tilted manner (i.e., not vertical). In that case, the y coordinate sy1 is not the same as sy′1, and the images on the left-most and the right-most boundaries are not contiguous. [0151]
  • In this circumstance, it is necessary to modify stitched image 84 such that the discontinuities on the boundaries of the stitched image are eliminated. To do so, the difference between the y coordinates (sy′1 − sy1) is used to determine how stitched image 84 is to be modified. The number of pictures in a circle of latitude was previously designated as NumPictures. If the absolute value of the difference (sy′1 − sy1) is less than NumPictures, the y positions of a series of images are modified by equally increasing or decreasing their y coordinates so that the y coordinate sy′1 after the modification equals sy1 and stitched image 84 is no longer slanted. [0152]
  • On the other hand, if the absolute value of the difference (sy′1 − sy1) is greater than or equal to NumPictures, the image discontinuities cannot be eliminated using the above modification method. Therefore, an image rotation, a more time-consuming process, is applied to correct the slant of the stitched image. The rotation angle α is represented by: [0153]
    $$\alpha = \tan^{-1}\left(\frac{-(sy'_1 - sy_1)}{\mathrm{WidthPSEM}}\right) \tag{23}$$
  • An additional buffer is used to store the stitched image after the rotation. The new coordinates (x′, y′) after rotation can be computed from the old coordinates as follows: [0154]
    $$x' = sx_1 + (x - sx_1)\cos\alpha - (y - sy_1)\sin\alpha, \quad y' = sy_1 + (x - sx_1)\sin\alpha + (y - sy_1)\cos\alpha \tag{24}$$
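  • Equations 23 and 24 amount to a planar rotation of coordinates about (sx1, sy1); a minimal C sketch:

```c
#include <math.h>

/* Rotate stitched-image coordinates about (sx1, sy1) by angle alpha in
   radians (Equation 24); alpha itself comes from Equation 23. */
static void rotate_coords(double x, double y, double sx1, double sy1,
                          double alpha, double *xp, double *yp)
{
    *xp = sx1 + (x - sx1) * cos(alpha) - (y - sy1) * sin(alpha);
    *yp = sy1 + (x - sx1) * sin(alpha) + (y - sy1) * cos(alpha);
}
```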
  • After the image rotation, the images in the right and the left boundaries of the stitched image will be continuous. The process of modifying the width of the stitched image is further applied to the rotated image if necessary. [0155]
  • Experimental Results [0156]
  • To verify the effectiveness of the proposed method, two experiments were conducted. Referring to FIG. 20, a system 90 for building a spherical panorama, including transforming photographic images of a panoramic scene into a spherical environment map and stitching together a two-dimensional array of the photographic images, is shown. System 90 includes a processor 92 (e.g., a Pentium CPU), RAM memory 94 (e.g., 32 MB), and disk storage 96 (e.g., at least 30 MB) for storing the software tool described above. System 90 is connected to a camera system 98 having a lens with a focal length of 18 mm. Camera system 98 includes a pan head 99 mounted on a tripod 100. Pan head 99 provides six degrees of freedom. In the experiments, the width of each sheet of film was 24 mm, the height of each sheet was 36 mm, the focal length of the camera lens was 18 mm, and the overlapping ratio between contiguous pictures was 40%. The number of pictures needed to build a spherical panorama was determined to be 38. The bottom picture, that is, the picture along the direction of the south pole, was replaced by a marked pattern because the view field along the direction of the south pole was hidden by the camera tripod. [0157]
  • Other embodiments are within the scope of the claims.[0158]

Claims (32)

What is claimed is:
1. A method of photographing pictures of a panoramic scene using a camera system, the pictures used to build a spherical panorama, the method comprising:
determining a number of circles of latitude to build the spherical panorama on the basis of the focal length of the camera system, the height of photographic film used in the camera system, and an overlapping ratio between contiguous ones of the pictures, each circle of latitude including a series of contiguous pictures; and
photographing the determined number of pictures.
2. The method of claim 1 wherein determining the number of circles of latitude includes calculating the following equation:
$$\mathrm{NumCirclesLandscape} = \left[\frac{180°}{(100-k)\% \times \left[2\tan^{-1}\left(\frac{a}{2f}\right)\right]} - 1\right]$$
where: a is the height of the photographic film;
f is the focal length of the camera system; and
k is the overlapping ratio between contiguous pictures.
3. The method of claim 1 wherein the number of circles of latitude is further determined on the basis of the width of the photographic film.
4. The method of claim 3 wherein determining the number of circles of latitude includes calculating the following equation:
$$\mathrm{NumCirclesPortrait} = \left[\frac{180° - 2\tan^{-1}\left(\frac{b}{2f}\right) + 2k\% \times \tan^{-1}\left(\frac{a}{2f}\right)}{(100-k)\% \times \left[2\tan^{-1}\left(\frac{a}{2f}\right)\right]}\right]$$
where: a is the height of the photographic film;
b is the width of the photographic film;
f is the focal length of the camera system; and
k is the overlapping ratio between contiguous pictures.
5. A method of transforming a plurality of photographic images of a panoramic scene into a spherical environment map, each photographic image taken at a predetermined sight direction characterized by an elevation angle, the method comprising:
warping each of the plurality of photographic images into a parametric spherical environment map; and
image cutting each of the warped photographic images into a rectangular image based on the elevation angle of the sight direction for each of the warped photographic images.
6. The method of claim 5 wherein during image cutting, the warped photographic images are cut either horizontally or vertically on the basis of the elevation angle of the sight direction for each of the warped photographic images.
7. The method of claim 6 wherein:
if the elevation angle of the sight direction for a warped photographic image is less than 10° or greater than 170°, the warped photographic image is cut into a rectangular region defined by one horizontal line, which is either a bottom or top edge of the rectangular warped image of the photograph; and
if the elevation angle of the sight direction for a warped photographic image is between 30° and 150°, the warped photographic image is cut into a rectangular region defined by two opposing horizontal lines and two opposing vertical lines.
8. The method of claim 5 wherein warping of each of the plurality of photographic images includes mapping each pixel of the warped photographic image from at least one pixel of an associated photographic image, the attributes of each pixel of the warped photographic image being derived from at least one pixel of the associated photographic image.
9. A method of stitching together a two dimensional array of photographic images to build a spherical panorama, the method comprising:
computing sight directions for each of the photographic images;
adjusting the intensity of the photographic images to an intensity related to an average of the intensities of the photographic images; and
stitching together contiguous ones of the photographic images.
10. The method of claim 9 wherein computing sight directions includes:
performing vertical registration of contiguous photographic images for each column of the two dimensional array; and
performing horizontal registration of contiguous photographic images for each row of the two dimensional array.
11. The method of claim 10 wherein performing horizontal and vertical registration includes using a correlation-based algorithm which computes image phase correlation for all possible positions of image alignment and selects the position with the minimum alignment difference.
12. The method of claim 10 further including inspecting the results of performing horizontal and vertical registration with a software tool.
13. The method of claim 9 wherein performing vertical registration of contiguous photographic images for each column of the two dimensional array includes:
rotating each photographic image 90°;
image warping the rotated photographic image with φL set to 90°;
performing horizontal registration of contiguous photographic images for each column of the two dimensional array after being rotated 90° clockwise; and
determining sight directions from the image positions of image registration.
14. The method of claim 10 wherein horizontal registration is performed after vertical registration and further including performing a second “fine-tuning” registration.
15. The method of claim 9 wherein stitching together contiguous photographic images further includes image blending overlapping regions of the contiguous photographic images.
16. The method of claim 9 wherein image blending between contiguous photographic images includes determining intensity levels of pixels in the overlapping regions using the following equation:
$$I(r_i) = \frac{d_A^t \cdot I(p_i) + d_B^t \cdot I(q_i)}{d_A^t + d_B^t}$$
where: pi is the location of a pixel from a first one of the photographic images;
qi is the location of a pixel from a corresponding second one of the photographic images contiguous with the first one of the photographic images;
dA is the distance between location pi and a boundary of the first one of the photographic images;
dB is the distance between location qi and a boundary of the second one of the photographic images;
I(pi) is the intensity of the pixel at location pi;
I(qi) is the intensity of the pixel at location qi; and
I(ri) is the resultant intensity of the pixel.
17. A system for photographing pictures of a panoramic scene for use in building a spherical panorama, the system comprising:
a camera system for photographing the determined number of pictures; and
a computer-readable medium including computer instructions for determining a number of circles of latitude to build the spherical panorama on the basis of the focal length of the camera system, the height of photographic film used in the camera system, and an overlapping ratio between contiguous ones of the pictures, each circle of latitude including a series of contiguous pictures.
18. The system of claim 17 wherein the computer-readable medium further includes computer instructions for determining the number of circles of latitude including calculating the following equation:
$$\mathrm{NumCirclesLandscape} = \left[\frac{180°}{(100-k)\% \times \left[2\tan^{-1}\left(\frac{a}{2f}\right)\right]} - 1\right]$$
where: a is the height of the photographic film;
f is the focal length of the camera system; and
k is the overlapping ratio between contiguous pictures.
19. The system of claim 17 wherein the computer-readable medium further includes computer instructions for determining the number of circles of latitude on the basis of the width of the photographic film.
20. The system of claim 19 wherein the computer-readable medium further includes computer instructions for determining the number of circles of latitude by calculating the following equation:
$$\mathrm{NumCirclesPortrait} = \left[\frac{180° - 2\tan^{-1}\left(\frac{b}{2f}\right) + 2k\% \times \tan^{-1}\left(\frac{a}{2f}\right)}{(100-k)\% \times \left[2\tan^{-1}\left(\frac{a}{2f}\right)\right]}\right]$$
where: a is the height of the photographic film;
b is the width of the photographic film;
f is the focal length of the camera system; and
k is the overlapping ratio between contiguous pictures.
21. A system for transforming a plurality of photographic images of a panoramic scene into a spherical environment map, each photographic image taken at a predetermined sight direction characterized by an elevation angle, the system comprising:
a computer-readable medium including:
computer-readable instructions for warping each of the plurality of photographic images into a parametric spherical environment map; and
computer-readable instructions for cutting each of the warped photographic images into a rectangular image based on the elevation angle of the sight direction for each of the warped photographic images.
22. The system of claim 21 wherein the computer-readable medium further includes computer instructions for cutting the warped photographic images during image cutting either horizontally or vertically on the basis of the elevation angle of the sight direction for each of the warped photographic images.
23. The system of claim 22 wherein the computer-readable medium further includes computer instructions for:
cutting, if the elevation angle of the sight direction for a warped photographic image is less than 10° or greater than 170°, the warped photographic image into a rectangular region defined by one horizontal line, which is either a bottom or top edge of the rectangular warped image of the photograph; and
cutting, if the elevation angle of the sight direction for a warped photographic image is between 30° and 150°, the warped photographic image into a rectangular region defined by two opposing horizontal lines and two opposing vertical lines.
24. The system of claim 21 wherein the computer instructions for warping each of the plurality of photographic images includes computer instructions for mapping each pixel of the warped photographic image from at least one pixel of an associated photographic image, the attributes of each pixel of the warped photographic image being derived from at least one pixel of the associated photographic image.
25. A system for stitching together a two dimensional array of photographic images to build a spherical panorama, the system comprising a computer-readable medium including:
computer instructions for computing sight directions for each of the photographic images;
computer instructions for adjusting the intensity of the photographic images to an intensity related to an average of the intensities of the photographic images; and
computer instructions for stitching together contiguous ones of the photographic images.
26. The system of claim 25 wherein the computer instructions for computing sight directions includes:
computer instructions for performing vertical registration of contiguous photographic images for each column of the two dimensional array; and
computer instructions for performing horizontal registration of contiguous photographic images for each row of the two dimensional array.
27. The system of claim 26 wherein the computer instructions for performing horizontal and vertical registration include a correlation-based algorithm which computes image phase correlation for all possible positions of image alignment and selects the position with the minimum alignment difference.
28. The system of claim 26 further including computer instructions for inspecting the results of performing horizontal and vertical registration with a software tool.
29. The system of claim 25 wherein the computer instructions for performing vertical registration of contiguous photographic images for each column of the two dimensional array includes:
computer instructions for rotating each photographic image 90°;
computer instructions for image warping the rotated photographic image with φL set to 90°;
computer instructions for performing horizontal registration of contiguous photographic images for each column of the two dimensional array after the above rotation; and
computer instructions for determining sight directions from the image positions of image registration.
30. The system of claim 26 wherein the computer instructions for performing horizontal registration and vertical registration further includes computer instructions for performing a second “fine-tuning” registration.
31. The system of claim 25 wherein the computer instructions for stitching together contiguous ones of the photographic images further include computer instructions for image blending overlapping regions of the contiguous photographic images.
32. The system of claim 25 wherein the computer instructions for image blending between contiguous photographic images includes computer instructions for determining intensity levels of pixels in the overlapping regions using the following equation:
$$I(r_i) = \frac{d_A^t \cdot I(p_i) + d_B^t \cdot I(q_i)}{d_A^t + d_B^t}$$
where: pi is the location of a pixel from a first one of the photographic images;
qi is the location of a pixel from a corresponding second one of the photographic images contiguous with the first one of the photographic images;
dA is the distance between location pi and a boundary of the first one of the photographic images;
dB is the distance between location qi and a boundary of the second one of the photographic images;
I(pi) is the intensity of the pixel at location pi;
I(qi) is the intensity of the pixel at location qi; and
I(ri) is the resultant intensity of the pixel.
US10/235,190 1998-05-27 2002-09-04 Image-based method and system for building spherical panoramas Abandoned US20030063089A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/235,190 US20030063089A1 (en) 1998-05-27 2002-09-04 Image-based method and system for building spherical panoramas

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/085,479 US6486908B1 (en) 1998-05-27 1998-05-27 Image-based method and system for building spherical panoramas
US10/235,190 US20030063089A1 (en) 1998-05-27 2002-09-04 Image-based method and system for building spherical panoramas

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/085,479 Division US6486908B1 (en) 1998-05-27 1998-05-27 Image-based method and system for building spherical panoramas

Publications (1)

Publication Number Publication Date
US20030063089A1 true US20030063089A1 (en) 2003-04-03

Family

ID=22191879

Family Applications (4)

Application Number Title Priority Date Filing Date
US09/085,479 Expired - Lifetime US6486908B1 (en) 1998-05-27 1998-05-27 Image-based method and system for building spherical panoramas
US10/235,190 Abandoned US20030063089A1 (en) 1998-05-27 2002-09-04 Image-based method and system for building spherical panoramas
US10/235,609 Expired - Fee Related US7317473B2 (en) 1998-05-27 2002-09-04 Image-based method and system for building spherical panoramas
US11/949,197 Expired - Fee Related US7852376B2 (en) 1998-05-27 2007-12-03 Image-based method and system for building spherical panoramas

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/085,479 Expired - Lifetime US6486908B1 (en) 1998-05-27 1998-05-27 Image-based method and system for building spherical panoramas

Family Applications After (2)

Application Number Title Priority Date Filing Date
US10/235,609 Expired - Fee Related US7317473B2 (en) 1998-05-27 2002-09-04 Image-based method and system for building spherical panoramas
US11/949,197 Expired - Fee Related US7852376B2 (en) 1998-05-27 2007-12-03 Image-based method and system for building spherical panoramas

Country Status (1)

Country Link
US (4) US6486908B1 (en)

Families Citing this family (113)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7098914B1 (en) * 1999-07-30 2006-08-29 Canon Kabushiki Kaisha Image synthesis method, image synthesis apparatus, and storage medium
US6798923B1 (en) * 2000-02-04 2004-09-28 Industrial Technology Research Institute Apparatus and method for providing panoramic images
US6879338B1 (en) * 2000-03-31 2005-04-12 Enroute, Inc. Outward facing camera system for environment capture
US6975353B1 (en) * 2000-04-19 2005-12-13 Milinusic Tomislav F Immersive camera system
US6915484B1 (en) 2000-08-09 2005-07-05 Adobe Systems Incorporated Text reflow in a structured document
US6895126B2 (en) 2000-10-06 2005-05-17 Enrico Di Bernardo System and method for creating, storing, and utilizing composite images of a geographic location
US6765589B1 (en) 2000-11-16 2004-07-20 Adobe Systems Incorporated Brush for warping and water reflection effects
US7725604B1 (en) * 2001-04-26 2010-05-25 Palmsource Inc. Image run encoding
JP3744002B2 (en) * 2002-10-04 2006-02-08 ソニー株式会社 Display device, imaging device, and imaging / display system
EP1553521A4 (en) * 2002-10-15 2006-08-02 Seiko Epson Corp Panorama synthesis processing of a plurality of image data
JP4434624B2 (en) * 2003-05-12 2010-03-17 キヤノン株式会社 Imaging apparatus, imaging method, computer program, and computer-readable storage medium
TWI276044B (en) * 2003-12-26 2007-03-11 Ind Tech Res Inst Real-time image warping method for curve screen
US7375745B2 (en) * 2004-09-03 2008-05-20 Seiko Epson Corporation Method for digital image stitching and apparatus for performing the same
US20070030396A1 (en) * 2005-08-05 2007-02-08 Hui Zhou Method and apparatus for generating a panorama from a sequence of video frames
US7840032B2 (en) * 2005-10-04 2010-11-23 Microsoft Corporation Street-side maps and paths
US7764849B2 (en) * 2006-07-31 2010-07-27 Microsoft Corporation User interface for navigating through images
US20080027985A1 (en) * 2006-07-31 2008-01-31 Microsoft Corporation Generating spatial multimedia indices for multimedia corpuses
US7712052B2 (en) 2006-07-31 2010-05-04 Microsoft Corporation Applications of three-dimensional environments constructed from images
US20080043020A1 (en) * 2006-08-18 2008-02-21 Microsoft Corporation User interface for viewing street side imagery
JP4899803B2 (en) * 2006-11-06 2012-03-21 ソニー株式会社 Image processing apparatus, camera apparatus, image processing method, and program
US20080143709A1 (en) * 2006-12-14 2008-06-19 Earthmine, Inc. System and method for accessing three dimensional information from a panoramic image
US7961980B2 (en) * 2007-08-06 2011-06-14 Imay Software Co., Ltd. Method for providing output image in either cylindrical mode or perspective mode
US20090086021A1 (en) * 2007-09-27 2009-04-02 Rockwell Automation Technologies, Inc. Dynamically generating real-time visualizations in industrial automation environment as a function of context and state information
EP2044987B1 (en) * 2007-10-03 2013-05-22 Sony Computer Entertainment Europe Ltd. Apparatus and method of on-line reporting
US8217956B1 (en) 2008-02-29 2012-07-10 Adobe Systems Incorporated Method and apparatus for rendering spherical panoramas
US20090232415A1 (en) * 2008-03-13 2009-09-17 Microsoft Corporation Platform for the production of seamless orthographic imagery
US8355042B2 (en) * 2008-10-16 2013-01-15 Spatial Cam Llc Controller in a camera for creating a panoramic image
US8134589B2 (en) * 2008-07-17 2012-03-13 Eastman Kodak Company Zoom by multiple image capture
US9307165B2 (en) * 2008-08-08 2016-04-05 Qualcomm Technologies, Inc. In-camera panorama image stitching assistance
US8554014B2 (en) * 2008-08-28 2013-10-08 Csr Technology Inc. Robust fast panorama stitching in mobile phones or cameras
US8391640B1 (en) * 2008-08-29 2013-03-05 Adobe Systems Incorporated Method and apparatus for aligning and unwarping distorted images
US8340453B1 (en) 2008-08-29 2012-12-25 Adobe Systems Incorporated Metadata-driven method and apparatus for constraining solution space in image processing techniques
US8842190B2 (en) 2008-08-29 2014-09-23 Adobe Systems Incorporated Method and apparatus for determining sensor format factors from image metadata
US8724007B2 (en) 2008-08-29 2014-05-13 Adobe Systems Incorporated Metadata-driven method and apparatus for multi-image processing
US8368773B1 (en) 2008-08-29 2013-02-05 Adobe Systems Incorporated Metadata-driven method and apparatus for automatically aligning distorted images
US20100231581A1 (en) * 2009-03-10 2010-09-16 Jar Enterprises Inc. Presentation of Data Utilizing a Fixed Center Viewpoint
US8947502B2 (en) 2011-04-06 2015-02-03 Qualcomm Technologies, Inc. In camera implementation of selecting and stitching frames for panoramic imagery
JP5429291B2 (en) * 2009-09-17 2014-02-26 富士通株式会社 Image processing apparatus and image processing method
EP2517181A1 (en) * 2009-12-21 2012-10-31 Thomson Licensing Method for generating an environment map
WO2011106520A1 (en) 2010-02-24 2011-09-01 Ipplex Holdings Corporation Augmented reality panorama supporting visually impaired individuals
JP2012075088A (en) * 2010-09-03 2012-04-12 Pentax Ricoh Imaging Co Ltd Image processing system and image processing method
TWI423659B (en) * 2010-11-09 2014-01-11 Avisonic Technology Corp Image correction method and related image correction system thereof
UA100890C2 (en) * 2010-12-02 2013-02-11 Владимир Иванович Голуб Method for identification of geometric objects and image frame orthographic drawing
JP5620871B2 (en) * 2011-04-11 2014-11-05 大成建設株式会社 Panorama image data generation device
US20130021488A1 (en) * 2011-07-20 2013-01-24 Broadcom Corporation Adjusting Image Capture Device Settings
US9189891B2 (en) * 2011-08-16 2015-11-17 Google Inc. Systems and methods for navigating a camera
CN103139580B (en) * 2011-11-29 2015-11-25 长春理工大学 A kind of three-dimensional panoramic space stereo image generation method
US8818101B1 (en) * 2012-01-03 2014-08-26 Google Inc. Apparatus and method for feature matching in distorted images
CA2861391A1 (en) * 2012-01-18 2013-07-25 Logos Technologies Llc Method, device, and system for computing a spherical projection image based on two-dimensional images
US9135678B2 (en) 2012-03-19 2015-09-15 Adobe Systems Incorporated Methods and apparatus for interfacing panoramic image stitching with post-processors
JP2015156523A (en) * 2012-06-06 2015-08-27 ソニー株式会社 Image processing device, image processing method, and program
US9235923B1 (en) 2012-09-28 2016-01-12 Google Inc. Systems and methods for providing a visualization of satellite sightline obstructions
US9275460B2 (en) * 2012-10-17 2016-03-01 Google Inc. Reference orientations for viewing panoramic images
US8902322B2 (en) 2012-11-09 2014-12-02 Bubl Technology Inc. Systems and methods for generating spherical images
CN103020900B (en) * 2012-11-15 2015-06-24 小米科技有限责任公司 Method and device for image processing
US10262460B2 (en) * 2012-11-30 2019-04-16 Honeywell International Inc. Three dimensional panorama image generation systems and methods
KR20140100656A (en) * 2013-02-06 2014-08-18 한국전자통신연구원 Point video offer device using omnidirectional imaging and 3-dimensional data and method
KR102082300B1 (en) * 2013-07-01 2020-02-27 삼성전자주식회사 Apparatus and method for generating or reproducing three-dimensional image
EP3028187A1 (en) * 2013-07-30 2016-06-08 Kodak Alaris Inc. System and method for creating navigable views of ordered images
US9076238B2 (en) * 2013-08-21 2015-07-07 Seiko Epson Corporation Intelligent weighted blending for ultrasound image stitching
JP6434209B2 (en) * 2013-12-20 2018-12-05 株式会社リコー Image generating apparatus, image generating method, and program
JP5967504B1 (en) * 2015-05-18 2016-08-10 パナソニックIpマネジメント株式会社 Omni-directional camera system
US10475234B2 (en) * 2015-07-15 2019-11-12 George Mason University Multi-stage method of generating 3D civil site surveys
TWI547177B (en) * 2015-08-11 2016-08-21 晶睿通訊股份有限公司 Viewing Angle Switching Method and Camera Therefor
US10043237B2 (en) 2015-08-12 2018-08-07 Gopro, Inc. Equatorial stitching of hemispherical images in a spherical image capture system
US10484601B2 (en) * 2015-08-31 2019-11-19 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN105306887A (en) * 2015-09-21 2016-02-03 北京奇虎科技有限公司 Method and device for sharing panoramic data
US9681111B1 (en) 2015-10-22 2017-06-13 Gopro, Inc. Apparatus and methods for embedding metadata into video stream
US10033928B1 (en) 2015-10-29 2018-07-24 Gopro, Inc. Apparatus and methods for rolling shutter compensation for multi-camera systems
US9792709B1 (en) * 2015-11-23 2017-10-17 Gopro, Inc. Apparatus and methods for image alignment
US9973696B1 (en) 2015-11-23 2018-05-15 Gopro, Inc. Apparatus and methods for image alignment
US9848132B2 (en) 2015-11-24 2017-12-19 Gopro, Inc. Multi-camera time synchronization
US9667859B1 (en) 2015-12-28 2017-05-30 Gopro, Inc. Systems and methods for determining preferences for capture settings of an image capturing device
US9922387B1 (en) 2016-01-19 2018-03-20 Gopro, Inc. Storage of metadata and images
US9967457B1 (en) 2016-01-22 2018-05-08 Gopro, Inc. Systems and methods for determining preferences for capture settings of an image capturing device
US9665098B1 (en) 2016-02-16 2017-05-30 Gopro, Inc. Systems and methods for determining preferences for flight control settings of an unmanned aerial vehicle
US9973746B2 (en) 2016-02-17 2018-05-15 Gopro, Inc. System and method for presenting and viewing a spherical video segment
US9602795B1 (en) 2016-02-22 2017-03-21 Gopro, Inc. System and method for presenting and viewing a spherical video segment
US9743060B1 (en) 2016-02-22 2017-08-22 Gopro, Inc. System and method for presenting and viewing a spherical video segment
US9984436B1 (en) * 2016-03-04 2018-05-29 Scott Zhihao Chen Method and system for real-time equirectangular projection
US9779322B1 (en) * 2016-04-08 2017-10-03 Gopro, Inc. Systems and methods for generating stereographic projection content
US11080871B2 (en) 2016-05-03 2021-08-03 Google Llc Method and system for obtaining pair-wise epipolar constraints and solving for panorama pose on a mobile device
CN105915818B (en) * 2016-05-10 2019-07-02 网易(杭州)网络有限公司 A kind of method for processing video frequency and device
CN107371011B (en) * 2016-05-13 2019-05-17 爱眉电脑软体有限公司 The method that wide angle picture is converted into map projection's image and perspective projection image
US9973695B1 (en) 2016-07-29 2018-05-15 Gopro, Inc. Systems and methods for capturing stitched visual content
US9727945B1 (en) * 2016-08-30 2017-08-08 Alex Simon Blaivas Construction and evolution of invariants to rotational and translational transformations for electronic visual image recognition
US9858638B1 (en) 2016-08-30 2018-01-02 Alex Simon Blaivas Construction and evolution of invariants to rotational and translational transformations for electronic visual image recognition
US9934758B1 (en) 2016-09-21 2018-04-03 Gopro, Inc. Systems and methods for simulating adaptation of eyes to changes in lighting conditions
US9747667B1 (en) 2016-09-29 2017-08-29 Gopro, Inc. Systems and methods for changing projection of visual content
US10268896B1 (en) 2016-10-05 2019-04-23 Gopro, Inc. Systems and methods for determining video highlight based on conveyance positions of video content capture
CN107945101B (en) * 2016-10-13 2021-01-29 华为技术有限公司 Image processing method and device
US9973792B1 (en) 2016-10-27 2018-05-15 Gopro, Inc. Systems and methods for presenting visual information during presentation of a video segment
KR102589853B1 (en) * 2016-10-27 2023-10-16 삼성전자주식회사 Image display apparatus and method for displaying image
US10536702B1 (en) 2016-11-16 2020-01-14 Gopro, Inc. Adjusting the image of an object to search for during video encoding due to changes in appearance caused by camera movement
CN106780310B (en) 2016-12-20 2020-11-24 北京奇艺世纪科技有限公司 Projection graph construction method and device
CN108234929A (en) * 2016-12-21 2018-06-29 昊翔电能运动科技(昆山)有限公司 Image processing method and equipment in unmanned plane
US10250866B1 (en) 2016-12-21 2019-04-02 Gopro, Inc. Systems and methods for capturing light field of objects
CN110169044B (en) * 2017-01-06 2021-04-23 富士胶片株式会社 Image processing apparatus, image processing method, and recording medium storing program
CN106846245B (en) * 2017-01-17 2019-08-02 北京大学深圳研究生院 Panoramic video mapping method based on main view point
US10194101B1 (en) 2017-02-22 2019-01-29 Gopro, Inc. Systems and methods for rolling shutter compensation using iterative process
US10187607B1 (en) 2017-04-04 2019-01-22 Gopro, Inc. Systems and methods for using a variable capture frame rate for video capture
CN107123136B (en) * 2017-04-28 2019-05-24 深圳岚锋创视网络科技有限公司 Panoramic picture alignment schemes, device and portable terminal based on multiway images
US10586377B2 (en) 2017-05-31 2020-03-10 Verizon Patent And Licensing Inc. Methods and systems for generating virtual reality data that accounts for level of detail
US10347037B2 (en) * 2017-05-31 2019-07-09 Verizon Patent And Licensing Inc. Methods and systems for generating and providing virtual reality data that accounts for level of detail
US10311630B2 (en) 2017-05-31 2019-06-04 Verizon Patent And Licensing Inc. Methods and systems for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene
CN109429087B (en) * 2017-06-26 2021-03-02 上海优土视真文化传媒有限公司 Virtual reality video barrage display method, medium, and system
CN108124193A (en) * 2017-12-25 2018-06-05 中兴通讯股份有限公司 Method for processing video frequency and device
CN108765582B (en) * 2018-04-28 2022-06-17 海信视像科技股份有限公司 Panoramic picture display method and device
US10666863B2 (en) * 2018-05-25 2020-05-26 Microsoft Technology Licensing, Llc Adaptive panoramic video streaming using overlapping partitioned sections
US10764494B2 (en) 2018-05-25 2020-09-01 Microsoft Technology Licensing, Llc Adaptive panoramic video streaming using composite pictures
US11185774B1 (en) * 2020-01-22 2021-11-30 Gil-ad Goldstein Handheld computer application for creating virtual world gaming spheres
US11376502B2 (en) * 2020-05-28 2022-07-05 Microsoft Technology Licensing, Llc Adjudicating fault in a virtual simulation environment
CN113012032B (en) * 2021-03-03 2022-12-09 中国人民解放军战略支援部队信息工程大学 Aerial panoramic image display method capable of automatically labeling place names

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5083389A (en) * 1988-07-15 1992-01-28 Arthur Alperin Panoramic display device and method of making the same
US5259584A (en) * 1990-07-05 1993-11-09 Wainwright Andrew G Camera mount for taking panoramic pictures having an electronic protractor
US5313306A (en) * 1991-05-13 1994-05-17 Telerobotics International, Inc. Omniview motionless camera endoscopy system
US5384588A (en) * 1991-05-13 1995-01-24 Telerobotics International, Inc. System for omnidirectional image viewing at a remote location without the transmission of control signals to select viewing parameters
US5396583A (en) * 1992-10-13 1995-03-07 Apple Computer, Inc. Cylindrical to planar image mapping using scanline coherence
US6020931A (en) * 1996-04-25 2000-02-01 George S. Sheng Video composition and position system and media signal communication system
US6389179B1 (en) * 1996-05-28 2002-05-14 Canon Kabushiki Kaisha Image combining apparatus using a combining algorithm selected based on an image sensing condition corresponding to each stored image
US6331869B1 (en) * 1998-08-07 2001-12-18 Be Here Corporation Method and apparatus for electronically distributing motion panoramic images
JPH10178564A (en) * 1996-10-17 1998-06-30 Sharp Corp Panorama image generator and recording medium
US5963213A (en) * 1997-05-07 1999-10-05 Olivr Corporation Ltd. Method and system for accelerating warping
US6028584A (en) * 1997-08-29 2000-02-22 Industrial Technology Research Institute Real-time player for panoramic imaged-based virtual worlds
US6128108A (en) * 1997-09-03 2000-10-03 Mgi Software Corporation Method and system for compositing images
US6552744B2 (en) * 1997-09-26 2003-04-22 Roxio, Inc. Virtual reality camera
US6304284B1 (en) * 1998-03-31 2001-10-16 Intel Corporation Method of and apparatus for creating panoramic or surround images using a motion sensor equipped camera

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5185667A (en) * 1991-05-13 1993-02-09 Telerobotics International, Inc. Omniview motionless camera orientation system
US5359363A (en) * 1991-05-13 1994-10-25 Telerobotics International, Inc. Omniview motionless camera surveillance system
US5446833A (en) * 1992-05-08 1995-08-29 Apple Computer, Inc. Textured sphere and spherical environment map rendering using texture map double indirection
US5561756A (en) * 1992-05-08 1996-10-01 Apple Computer, Inc. Textured sphere and spherical environment map rendering using texture map double indirection
US5393583A (en) * 1992-05-29 1995-02-28 Yazaki Corporation Connector
US6327381B1 (en) * 1994-12-29 2001-12-04 Worldscape, Llc Image transformation and synthesis methods
US6593969B1 (en) * 1996-06-24 2003-07-15 Be Here Corporation Preparing a panoramic image for presentation
US6018349A (en) * 1997-08-01 2000-01-25 Microsoft Corporation Patch-based alignment method and apparatus for construction of image mosaics
US6157747A (en) * 1997-08-01 2000-12-05 Microsoft Corporation 3-dimensional image rotation method and apparatus for producing image mosaics
US6011558A (en) * 1997-09-23 2000-01-04 Industrial Technology Research Institute Intelligent stitcher for panoramic image-based virtual worlds
US6686970B1 (en) * 1997-10-03 2004-02-03 Canon Kabushiki Kaisha Multi-media editing method and apparatus

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040196282A1 (en) * 2003-02-14 2004-10-07 Oh Byong Mok Modeling and editing image panoramas
US20050195128A1 (en) * 2004-03-03 2005-09-08 Sefton Robert T. Virtual reality system
US7224326B2 (en) 2004-03-03 2007-05-29 Volo, Llc Virtual reality system
US20070229397A1 (en) * 2004-03-03 2007-10-04 Volo, Llc Virtual reality system
US10506157B2 (en) 2009-05-27 2019-12-10 Sony Corporation Image pickup apparatus, electronic device, panoramic image recording method, and program
US20100302347A1 (en) * 2009-05-27 2010-12-02 Sony Corporation Image pickup apparatus, electronic device, panoramic image recording method, and program
US10091416B2 (en) 2009-05-27 2018-10-02 Sony Corporation Image pickup apparatus, electronic device, panoramic image recording method, and program
US8791983B2 (en) 2009-05-27 2014-07-29 Sony Corporation Image pickup apparatus and associated methodology for generating panoramic images based on location and orientation information
US8627236B2 (en) * 2009-11-03 2014-01-07 Lg Electronics Inc. Terminal and control method thereof
US20110105192A1 (en) * 2009-11-03 2011-05-05 Lg Electronics Inc. Terminal and control method thereof
CN104268379A (en) * 2014-09-12 2015-01-07 北京诺亚星云科技有限责任公司 Digital panorama based three-dimensional indoor ranging system and device
CN107660337A (en) * 2015-06-02 2018-02-02 高通股份有限公司 For producing the system and method for assembled view from fish eye camera
CN105516569A (en) * 2016-01-20 2016-04-20 北京疯景科技有限公司 Method and device for obtaining omni-directional image
GB2565702A (en) * 2016-06-07 2019-02-20 Mediatek Inc Method and apparatus of boundary padding for VR video processing
WO2017211294A1 (en) * 2016-06-07 2017-12-14 Mediatek Inc. Method and apparatus of boundary padding for vr video processing
GB2565702B (en) * 2016-06-07 2021-09-08 Mediatek Inc Method and apparatus of boundary padding for VR video processing
CN106254779A (en) * 2016-08-30 2016-12-21 上海乐欢软件有限公司 A kind of panoramic video processing method and server and client side
US11388336B2 (en) * 2017-03-02 2022-07-12 Arashi Vision Inc. Horizontal calibration method and system for panoramic image or video, and portable terminal
KR101884565B1 (en) * 2017-04-20 2018-08-02 주식회사 이볼케이노 Apparatus and method of converting 2d images of a object into 3d modeling data of the object
US11062426B2 (en) 2018-03-05 2021-07-13 Samsung Electronics Co., Ltd. Electronic device and image processing method

Also Published As

Publication number Publication date
US7852376B2 (en) 2010-12-14
US20030063816A1 (en) 2003-04-03
US6486908B1 (en) 2002-11-26
US7317473B2 (en) 2008-01-08
US20080074500A1 (en) 2008-03-27

Similar Documents

Publication Publication Date Title
US7317473B2 (en) Image-based method and system for building spherical panoramas
US9961264B2 (en) Virtual reality camera
US6323862B1 (en) Apparatus for generating and interactively viewing spherical image data and memory thereof
Peleg et al. Mosaicing on adaptive manifolds
US6044181A (en) Focal length estimation method and apparatus for construction of panoramic mosaic images
US6018349A (en) Patch-based alignment method and apparatus for construction of image mosaics
US6157747A (en) 3-dimensional image rotation method and apparatus for producing image mosaics
US5986668A (en) Deghosting method and apparatus for construction of image mosaics
CN104246795B (en) The method and system of adaptive perspective correction for extrawide angle lens image
US6009190A (en) Texture map construction method and apparatus for displaying panoramic image mosaics
US7027049B2 (en) Method and system for reconstructing 3D interactive walkthroughs of real-world environments
US5987164A (en) Block adjustment method and apparatus for construction of image mosaics
US6097854A (en) Image mosaic construction system and apparatus with patch-based alignment, global block adjustment and pair-wise motion-based local warping
US6549651B2 (en) Aligning rectilinear images in 3D through projective registration and calibration
US6359617B1 (en) Blending arbitrary overlaying images into panoramas
US7899270B2 (en) Method and apparatus for providing panoramic view with geometric correction
Zelnik-Manor et al. Squaring the circle in panoramas
US7865013B2 (en) System and method for registration of cubic fisheye hemispherical images
US20060078215A1 (en) Image processing based on direction of gravity
US20060078214A1 (en) Image processing based on direction of gravity
KR100614004B1 (en) An automated method for creating 360 degrees panoramic image
JP2004046573A (en) Method for calculating tilt angle, method for synthesizing cylindrical panorama picture, and computer-readable recording medium with program for cylindrical panorama picture synthesis processing recorded thereon
JP3149389B2 (en) Method and apparatus for overlaying a bitmap image on an environment map
KR20030082307A (en) Image-based rendering method using orthogonal cross cylinder
Rawlinson Design and implementation of a spatially enabled panoramic virtual reality prototype

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION