WO2014043814A1 - Methods and apparatus for displaying and manipulating a panoramic image by tiles - Google Patents

Methods and apparatus for displaying and manipulating a panoramic image by tiles

Info

Publication number
WO2014043814A1
WO2014043814A1 (PCT/CA2013/050720)
Authority
WO
WIPO (PCT)
Prior art keywords
coordinates
image
method defined
panoramic image
texture
Prior art date
Application number
PCT/CA2013/050720
Other languages
French (fr)
Inventor
Gregory Pekofsky
Dongxu Li
Guillaume Racine
Original Assignee
Tamaggo Inc.
Application filed by Tamaggo Inc. filed Critical Tamaggo Inc.
Publication of WO2014043814A1 publication Critical patent/WO2014043814A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation

Definitions

  • This invention relates to panoramic content processing, and in particular to methods and apparatus for manipulating and displaying panoramic images and/or video.
  • panoramic content such as panoramic image content and panoramic video content.
  • Conventional image processing includes loading the panoramic content in main memory and processing it using a Central Processing Unit (CPU) of the electronic device. This is also a slow process as the panoramic content is rather large in comparison with a view window of interest to be displayed.
  • GPUs have been employed to repeatedly map textures on a mesh.
  • Content transfer bandwidth to and from graphics memory is a limitation of GPUs which limits the size of texture tiles transferred by the GPU, wherein content transfer bandwidth is orders of magnitude smaller than the graphics memory available to the GPU.
  • Some GPUs are further limited to uploading only textures having Power-Of-Two (POT) sizes.
  • a typical skybox based viewer introduces pincushion distortion when projecting the 3D skybox to a flat display, as shown in Figure 2B.
  • the projection process is not conformal, as longitude and latitude lines are not kept perpendicular to each other.
  • current environmental mapping schemes such as cubic mapping and skydome mapping, cannot support a Field-of-View (FOV) greater than 90°.
  • Significant distortion is apparent whenever the FOV gets close to 90°, and in practice the conventional environmental mapping methods are limited to about 45°. It would therefore be desirable to correct the pincushion distortion and limited FOV problems to avoid distorting the local shape of objects such as faces.
  • a non-standard skydome can be used, which has its texture coordinates determined according to a circular-to-skydome geometrical mapping, instead of using azimuth and polar angles as in an equirectangular-to-skydome mapping.
  • the skybox has texture coordinates according to a circular-to-skybox mapping, instead of texture coordinates being linear to pixel locations as in the case of standard cubic mapping provided by 3D GPUs.
  • the texture coordinates are generated for each circular panorama based on the camera lens mapping parameters of the circular image, and the texture coordinate generation process can be carried out by a CPU or by a GPU using vertex or geometry shaders. Correction of chromatic aberration can be included in the direct mapping algorithm.
  • Direct environmental mapping by dome-based panorama viewers requires acquired circular images to be loaded as textures. Due to the large size of circular images, these have to be split at loading time into sub-images of sizes acceptable to the GPU. Two device-dependent GPU limitations, maximum texture size and support for non-Power-Of-Two (non-POT), need to be considered in deciding sub-image sizes. After the sub-images of proper sizes are generated, texture coordinates are assigned according to geometrical mapping of the panorama.
  • a panorama image can be loaded by a GPU as tiles of sub-images of smaller sizes than the size of the original panorama image, as a workaround for the maximum texture size and POT limitations, or as a way to provide flexibility of ordered loading according to a Region-of-Interest (ROI).
  • a mesh is created and loaded to the GPU for each tile. The mesh contains information for both angular position and texture position, allowing the subsequent reconstruction of the panorama geometry without knowing the particular format (e.g., cylindrical, fisheye, circular, etc.) of the original panorama image. It is possible to display the panorama according to multiple projection types based on the same intermediate set of tiles/meshes generated from the original panorama image.
  • Another advantage of storing panorama images by the intermediate tiles is the ability to modify the panorama images according to projected geometry, based on the angular information that comes with the tiles.
  • a viewer in accordance with a non-limiting embodiment of the proposed solution relies on a conformal projection process to preserve local shapes.
  • a rotated cylindrical mapping can be used.
  • the source panoramic image which can be circular or non-circular, is cast on a sphere according to the angular location of pixels in the acquired panorama.
  • the sphere is rotated around its center to a desired orientation to select an ROI before being projected to a cylinder also centered at the sphere's center with its longitudinal axis along the sphere's z-axis.
  • the projected image on the cylinder is unwrapped and displayed by the viewer. Because the mapping algorithm is based on unwrapping a developable plane with projected panorama, FOV is not particularly limited.
  • a method for mapping a panoramic image to a 3-dimensional virtual object of which a projection is made for display on a screen comprising: providing the panoramic image in a memory, the panoramic image defined by a set of picture elements (pixels) in a 2-dimensional space; providing a model of the object, the model comprising a set of vertices in a 3- dimensional space; selecting a vertex on the model, the selected vertex being characterized by a set of angular coordinates; applying a transformation to the angular coordinates to obtain a set of polar coordinates; identifying a pixel whose position in the panoramic image is defined by the polar coordinates; storing in memory an association between the selected vertex on the model and a value of the identified pixel.
  • a non- transitory computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out a method for mapping a panoramic image to a 3-D virtual object of which a projection is made for display on a screen, the method comprising: providing the panoramic image in a memory, the panoramic image defined by a set of picture elements (pixels) in a 2-dimensional space; providing a model of the object, the model comprising a set of vertices in a 3-dimensional space; selecting a vertex on the model, the selected vertex characterized by a set of angular coordinates; applying a transformation to the angular coordinates to obtain a set of polar coordinates; identifying a pixel whose position in the panoramic image is defined by the polar coordinates; storing in memory an association between the selected vertex on the model and a value of the identified pixel.
  • a method of assigning a value to a vertex of an object of interest comprising: obtaining 3-D coordinates of the vertex; using a shader to derive 2-D coordinates based on the 3-D coordinates; and consulting a panoramic image to obtain a value corresponding to the 2-D coordinates.
  • a method performed by a CPU comprising: receiving a request for a value of a texture element having S and T coordinates; transforming the S and T coordinates into S1 , T1 and H coordinates, where S1 and T1 denote X and Y coordinates within a sub-texture indexed by H; sending a request to a GPU for a value of a texture element having S1 , T1 and H coordinates; receiving from the GPU the value of the texture element of the sub-texture indexed by H whose X and Y coordinates are S1 and T1 .
  • a non- transitory computer-readable medium comprising instructions which, when executed by a CPU, cause the CPU to carry out a method that comprises: receiving a request for a value of a texture element having S and T coordinates; transforming the S and T coordinates into S1 , T1 and H coordinates, where S1 and T1 denote X and Y coordinates within a sub-texture indexed by H; sending a request to a GPU for a value of a texture element having S1 , T1 and H coordinates; receiving from the GPU the value of the texture element of the sub-texture indexed by H whose X and Y coordinates are S1 and T1.
  • a method of displaying a 2-D panoramic image in a viewing window comprising: obtaining 2-D coordinates of an element of the viewing window; transforming the 2-D coordinates into a 3-D vector; rotating the vector; mapping the 3-D coordinates to 2-D coordinates of a panoramic image; and obtaining color information of the panoramic image at the 2-D coordinates.
  • a non- transitory computer-readable medium comprising instructions which, when executed by a computing apparatus, cause the computing apparatus to carry out a method that comprises: obtaining 2-D coordinates of an element of the viewing window; transforming the 2-D coordinates into a 3-D vector; rotating the vector; mapping the 3-D coordinates to 2-D coordinates of a panoramic image; and obtaining color information of the panoramic image at the 2-D coordinates.
  • Figures 1A and 1 B are a comparison between illustrations of (A) a cubic mapping and (B) a direct mapping;
  • Figure 2A is an illustration of a dome view in accordance with the proposed solution
  • Figure 2B is an illustration of a pincushion distortion
  • Figure 3 is a schematic diagram illustrating relationships between spaces
  • Figure 4(a) is a schematic diagram illustrating rendering a view of a texture surface on a screen in accordance with the proposed solution
  • Figure 4(b) is a schematic diagram illustrating a 2-D geometric mapping of a textured surface in accordance with the proposed solution
  • Figure 5A is a schematic plot showing a camera radial mapping function in accordance with the proposed solution
  • Figure 5B is a schematic diagram illustrating direct mapping from a circular image to skydome as defined by Eq. (1 ) in accordance with the proposed solution;
  • Figure 5C is a comparison between (a) a cubic mapping process and (b) a direct mapping process in accordance with the proposed solution;
  • Figure 6 is an algorithmic listing illustrating dome vertex generation in accordance with a non-limiting example of the proposed solution
  • Figure 7 is an algorithmic listing illustrating cube/box vertex generation in accordance with another non-limiting example of the proposed solution
  • Figure 8 is schematic diagram illustrating sub-images loaded as textures in accordance with the proposed solution
  • Figure 9 is a schematic diagram illustrating determining vertex coordinates from texture coordinates in accordance with the proposed solution.
  • Figure 10 is a schematic diagram illustrating obtaining texture values in accordance with the proposed solution.
  • Figure 11 is an algorithmic listing illustrating image splitting in accordance with a non-limiting example of the proposed solution
  • Figure 12 is an illustration of a split circular panorama in accordance with the proposed solution
  • Figure 13 is another algorithmic listing illustrating dome construction in accordance with another non-limiting example of the proposed solution
  • Figure 14 is an illustration of a dome construction from a split circular image in accordance with the non-limiting algorithmic example illustrated in Figure 13;
  • Figure 15 is schematic diagram illustrating a sketch of the geometry involved in accordance with the proposed solution.
  • Figure 16 is a table illustrating image processing in accordance with the proposed solution
  • Figure 17 is an illustration having reduced pincushion distortion compared to the illustration in Figure 2B in accordance with the proposed solution
  • Figure 18 is an algorithmic listing illustrating a rotated equirectangular mapping in accordance with a non-limiting example of the proposed solution
  • Figure 19 is an illustration of a mapping from a circular panorama image to a viewer window in accordance with the proposed solution
  • Figure 20 is an illustration of a 90° FOV mapping from a circular panorama image in accordance with the proposed solution.
  • Figure 21 is another illustration of a 90° FOV mapping from a circular panorama image in accordance with the proposed solution, wherein similar features bear similar labels throughout the drawings.
  • Reference to qualifiers such as “top” and “left” in the present specification is made solely with reference to the orientation of the drawings as presented in the application and does not imply any absolute spatial orientation.
  • Texture space is a 2D space of surface textures
  • object space is a local 3D coordinate system in which 3D objects such as polygons and patches can be defined.
  • a polygon is defined by listing the object space coordinates of each of its vertices.
  • World space is a global coordinate system that is related to each object's local object space using 3D modeling transformations such as translations, rotations and scaling.
  • 3D screen space is the 3D coordinate system of a display: a perspective space with pixel coordinates (x, y) and depth z using z-buffering.
  • 3D screen space is related to world space by the camera parameters such as position, orientation, and FOV.
  • 2D screen space is a 2D subset of 3D screen space without the z dimension: a projection of objects from an ROI in 3D screen space onto the display.
  • Use of the phrase "screen space” by itself can mean 2D screen space.
  • the correspondence between 2D texture space and 3D object space is called the "parameterization of the surface", and the mapping from 3D object space to 2D screen space is the "projection" defined by the camera and the modeling transformations.
  • Figure 4(a) it is noted that when rendering a particular view of a textured surface, it is the compound mapping from 2D texture space to 2D screen space that is of interest.
  • the intermediate 3D space can be ignored.
  • the compound mapping in texture mapping is an example of an image warp, the resampling of a source image to produce a destination image according to a 2D geometric mapping (see Figure 4(b)).
  • a vertex on a skydome mesh which is centered at the coordinate origin, can be located by its angular part in spherical coordinates (θ, φ), where θ and φ are the polar and azimuth angles respectively.
  • the direct mapping from a circular image to skydome is defined by Eq. (1): (r_E, θ_E) = (f(θ), φ).
  • the two-dimensional lens mapping is uniquely determined by a one-dimensional radial lens mapping function f(θ).
  • r_E and θ_E are the polar coordinates of the mapped location within a centered circular or non-circular source image
  • f(θ) is a mapping function defined by the camera lens projection.
  • the radial mapping function f(θ) is supplied by the camera in the form of a one-dimensional lookup table (more on this below).
  • Figure 5A illustrates an example of a radial mapping function for a non-circular source image.
  • the mapping defined by Eq. (1) is conceptually illustrated in Figure 5B. Note that Eq. (1) can be applied to 360° fisheye lens images. In that case, the radial mapping function f(θ) may be a straight line.
  • the texture coordinates of a vertex are obtained by transforming the polar coordinates into Cartesian coordinates according to Eq. (2).
  • the dome (an example of a 3D model) is created by generating vertices on a sphere, and the texture coordinates are assigned to the vertices according to Eqs. (1) and (2).
  • the lens mapping function as employed hereinabove is the one-to-one correspondence between the incident angle of an incoming ray and the pixel position on the image sensor of the camera: (s, t) = f(θ, φ).
  • the actual lens mapping function is found to be wavelength/color dependent. That is, the footprint of an incident ray on the imaging sensor is not a point but a color-separated region. When the color-separated region of the footprint spans a few neighboring pixels on the imaging sensor, this type of color dependency of the lens mapping function results in chromatic aberration, reducing the overall resolution of panorama images. It is desirable to include dispersion correction in a panorama viewer.
  • different lens mapping functions for Red/Green/Blue pixels are generated according to weighted ray tracing centroids, where the weights are proportional to R/G/B pixel response functions (pixel's sensitivities to a particular wavelength) and the corresponding single color intensity within the spectrum of the light source (the energy at a given wavelength from the light source times the sensor's response function per input light flux at that wavelength).
  • the lens mapping function for red pixels can be generated by f_R(θ, φ) = Σ_i w_i f_i(θ, φ) / Σ_i w_i, summing over a list of red wavelengths i.
  • the scaled polar coordinates are converted to spherical coordinates (θ, φ) according to, for example, a stereographic projection.
  • the texture coordinates for R/G/B are found by using dispersion-correcting lens mapping functions for R/G/B.
  • these three texture coordinates are used in texture look-up to retrieve the red, green and blue colors, respectively, and the color of the pixel within window space is constructed by combining the R/G/B colors.
  • the scaling and rotation in steps 1 and 3 above are required only for interactive viewers, and can be skipped when the default static view of the panorama is to be generated.
  • a variant of this algorithm could be mapping texture coordinates to 3 different locations within the display space, according to stereographic projection, and then performing the RGB dependent lens mapping. The red, green, and blue colors for the display pixels are then found by a single color texture look-up using the corresponding texture coordinates, respectively.
  • a skybox is used instead of the skydome as the 3D model.
  • the vertex locations on the skybox have the form (r(θ, φ), θ, φ) in spherical coordinates, with the radius being a function of angular direction (i.e., defined by θ and φ) instead of a constant as in the skydome case.
  • the radius is a function of θ and φ. This is the case with a cube, for example, although the same will also be true of other regular polyhedrons. Since Eq. (1) does not use the radial part, the texture coordinates are generated by Eqs. (1) and (2) using the angular part of the vertex coordinates.
  • direct mapping (which is implemented by certain embodiments of the proposed solution) avoids the need for a geometric mapping to transform an input 2D circular image into an intermediate rectangular (for a dome model) or cubic (for a cube/box model) image before mapping the intermediate image to the vertices of the 3D model.
  • the texture for a desired vertex can be found by transforming the 3D coordinates of the vertex into 2D coordinates of the original circular image and then looking up the color value of the original circular image at those 2D coordinates.
  • the transformation can be done using a vertex shader by applying a simple geometric transformation according to Eq. (1).
  • dome vertex generation is given by Algorithm 1 in Figure 6.
  • an originally acquired panoramic image (e.g., circular or non-circular) is split along its width and height into image segments of sizes up to the maximum texture size of the GPU.
  • When non-POT textures are supported, there is at most one remainder image segment of a size less than the maximum texture size. Where non-POT is not supported, this remainder segment can be further split into smaller POT sizes.
  • the original image can be padded with transparent color up to texture size.
  • the circular image is thus split into a plurality of rectangular sub-images according to segmentation in the width and height directions.
  • the sub-images are loaded as textures for example as illustrated in Figure 8.
  • the total memory of the GPU may be insufficient to accommodate the entire circular image. This may be the case on mobile devices. Therefore, another benefit of using a splitting algorithm is being able to load high resolution circular panorama images for mobile devices with tight limits on system memory.
  • One 3D mesh portion is created to cover each rectangular sub-image, and the vertex angular coordinates are determined from the texture coordinates of the circular image as (θ, φ) = (f⁻¹(r_E), θ_E), where r_E and θ_E are the texture coordinates in polar coordinates, and f⁻¹(r_E) is the reverse mapping function from the circular image to the panorama defined by the camera lens projection, for example as illustrated in Figure 9.
  • the texture coordinates of the circular image and the texture coordinates of sub-images are related by linear scaling, as in Eq. (3): s_S = (s_E · W_E − L_S) / W_S, t_S = (t_E · H_E − T_S) / H_S, where s and t are texture coordinates, W and H stand for image width and height respectively, subscripts E and S indicate circular image and sub-image respectively, and L_S and T_S are the pixel locations of the left and top edges of the sub-image, respectively.
  • A non-limiting example of image splitting according to width or height is given by Algorithm 1 listed in Figure 11.
  • Figure 12 illustrates an example of splitting of a circular panorama. Specifically, splitting was done by POT sizes up to a given maximum. Straight lines indicate borders of sub-images after splitting.
  • A non-limiting example of dome construction from a split circular image is given in Algorithm 2 listed in Figure 13.
  • Figure 14 illustrates a dome with the split images mapped thereon. Specifically, mapping of a circular panorama split to POT sizes up to a given maximum is illustrated by way of non-limiting example.
  • Tiles are generated according to the methods described hereinabove. Meshes are generated with angular information for each vertex.
  • a panorama viewer projects a vertex (x, y, z) to certain display coordinates (u, v) according to the projection methods of the viewer, and the displayed image is formed by texture look-up using the texture coordinates (u, v).
  • Examples of some viewers include: skydome environment mapping, equirectangular projection, and stereographic projection.
  • the pre-tiled set-up allows fast switching between projection types without having to reload the original image in the GPU.
  • each tile is directly projected to a rectangle of the same dimension, in parallel with a projection to display space by a given viewer projection, and the modification of tiles is carried out using information from the viewer projection method.
  • the trivial projection is to a rectangle of the same size. This operation is the same as copying of the tile if pixels are not modified according to the viewer display space coordinates.
  • the tiles can be used in viewers to display the modified panorama, or can be saved to a new panorama image.
  • the manipulation can be limited to tiles in the ROI. Examples of this type of image manipulation could be: cropping of panorama by projection on display, drawing shapes on to panorama in display, pixelation in display space.
  • advantages are derived from using pre-tiled panorama images which allows panorama viewers and editors to be implemented without being limited to GPU supported texture sizes.
  • the pre-tiled panorama can be modified according to display coordinates in viewers.
  • a pixel in the display, indexed as (u, v), is mapped to a cylinder with unit radius in 3-dimensional space by equirectangular projection, for example as shown by Eq. (4):
  • φ_c and z_c are the azimuth and height in cylindrical coordinates, respectively, and w and h are the width and height of the displayed image, respectively.
  • Linear mapping is used to preserve angular uniformity in both directions along the u-indices and v- indices.
  • the point on the cylinder (which was just found) is mapped to a unit sphere by normalization of its Cartesian coordinates, and the point on the unit sphere is rotated, which can for example be expressed as (x_s, y_s, z_s) = F · (x_c, y_c, z_c) / r_c, where:
  • x_c, y_c, z_c are respectively the Cartesian coordinates of the point on the cylinder
  • r_c is its distance to the origin
  • F is a rotation matrix
  • (x_s, y_s, z_s) are the Cartesian coordinates of the corresponding point on the unit sphere.
  • the rotation matrix F is a function of user input where navigation throughout the original image will induce changes in F.
  • the color of the displayed pixel (u, v) in the view window is the color of a corresponding location within a 2D panoramic image, which can be circular or non-circular.
  • This corresponding location can be obtained by first converting the Cartesian coordinates of the aforementioned point on the unit sphere (x_s, y_s, z_s) to spherical coordinates (1, θ_s, φ_s), then recognizing the existence of a mapping between (general) spherical coordinates (1, θ, φ) on the unit sphere and (general) polar coordinates (r_E, θ_E) on the circular or non-circular panoramic image.
  • this mapping can be defined by Eq. (1), where f(θ) is a mapping function defined by the camera lens projection, and may indeed be supplied by the camera in the form of a one-dimensional lookup table (a code sketch of this lookup pipeline appears after this list).
  • Figure 15 illustrates a sketch of the geometry involved in the aforementioned process.
  • Figure 16 illustrates a summary of the overall mapping process.
  • Figure 17 illustrates a screenshot from a viewer implemented in accordance with an embodiment of the present invention. It is noted that the pincushion distortion from Figure 2B has been reduced.
  • Algorithm 1 listed in Figure 18 finds the texture coordinates for a location (u, v) in the viewer window; v_s is a column vector of the Cartesian coordinates of the mapped point on the unit sphere.
  • Figures 20 and 21 show how a portion of the circular panorama image is mapped to a viewer window at a 90° FOV.
  • Panoramic Viewer employing Stereographic Projection
  • a typical panorama viewer supplies environmental mapping of wide angle panorama images according to viewing conditions (ROI) from user input.
  • Stereographic projection is well-known for being conformal, and gives satisfactory results when used as an environmental mapping method. Compared with traditional cubic mapping, stereographic projection generates less pronounced local distortion at the same size of FOV. Therefore, slightly larger FOV angles can be used than in the case of a cubic mapping viewer.
  • Stereographic projection maps a point on a unit sphere to a point on a tangential plane to the sphere.
  • Each point on the unit sphere can be rotated according to viewer rotation (ROI) and mapped to a location within the panorama image according to lens optics.
  • the direction of the vector (x', y', z') is identified, for example, by converting (x', y', z') to spherical coordinates:
  • (u, v) = M_L(θ, φ)
  • the M_L function is determined by the wide-angle lens which focuses incoming rays in direction (θ, φ) to a certain sensor location by lens optics.
  • the M_L function can be in the form of a look-up table to give pixel locations for all panorama angles covered within a field of view.
  • each pixel within the viewer viewport is uniquely mapped to one location within the original circular image, and color can be assigned to viewer pixels according to the mapped location within the panorama image.
  • the color assignment process is usually performed by GPU texture look up, or by traditional CPU based geometry mapping.
  • a computing device may implement the methods and processes of certain embodiments of the present invention by executing instructions read from a storage medium.
  • the storage medium may be implemented as a ROM, a CD, Hard Disk, USB storage device, etc. connected directly to (or integrated with) the computing device.
  • the storage medium may be located elsewhere and accessed by the computing device via a data network such as the Internet.
  • where the computing device accesses the Internet, the physical means by which the computing device gains access to the Internet is not material, and can be achieved via a variety of mechanisms, such as wireline, wireless (cellular, Wi-Fi, Bluetooth, WiMax), fiber optic, free-space optical, infrared, etc.
  • the computing device itself can take on just about any form, including a desktop computer, a laptop, a tablet, a smartphone (e.g., Blackberry, iPhone, etc.), a TV set, etc.
  • the panoramic image being processed may be an original panoramic image, while in other cases it may be an image derived from an original panoramic image, such as a thumbnail or preview image.
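  • By way of illustration of the rotated-cylindrical viewer lookup described above (display pixel → cylinder → unit sphere → rotation → lens mapping to the panorama), a minimal Python sketch follows. The linear form assumed for Eq. (4), the tangent used for the cylinder height, the one-dimensional lookup table standing in for f(θ), and all parameter names are assumptions made for this sketch; it is not the patent's own algorithm (Figure 18).

    import numpy as np

    def display_to_panorama(u, v, w, h, fov_h, fov_v, rot, lut_theta, lut_r):
        """Rotated-cylindrical viewer lookup for one display pixel (u, v).
        Returns the panorama polar coordinates (r_E, theta_E) whose color is
        assigned to the display pixel.  `rot` is the 3x3 ROI rotation matrix
        set from user input; `lut_theta`/`lut_r` tabulate the lens mapping."""
        # Display pixel -> cylinder of unit radius (assumed linear mapping).
        phi_c = (u / w - 0.5) * fov_h                 # azimuth on the cylinder
        z_c = np.tan((v / h - 0.5) * fov_v)           # height on the cylinder
        # Cartesian point on the cylinder, normalized onto the unit sphere.
        p = np.array([np.cos(phi_c), np.sin(phi_c), z_c])
        p_sphere = p / np.linalg.norm(p)
        # Rotate the sphere to the requested ROI.
        x_s, y_s, z_s = rot @ p_sphere
        # Spherical coordinates of the rotated point.
        theta_s = np.arccos(np.clip(z_s, -1.0, 1.0))
        phi_s = np.arctan2(y_s, x_s)
        # Lens mapping of Eq. (1): polar coordinates in the panoramic image.
        r_e = np.interp(theta_s, lut_theta, lut_r)
        return r_e, phi_s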

Abstract

There is provided a method for mapping a panoramic image to a 3D virtual object of which a projection is made for display on a screen. The method includes: providing the panoramic image in a memory, the panoramic image being defined by a set of pixels in a 2D space; providing a model of the object, the model having a set of vertices in a 3D space; selecting a vertex on the model, the selected vertex being characterized by a set of angular coordinates; applying a transformation to the angular coordinates to obtain a set of polar coordinates; identifying a pixel whose position in the panoramic image is defined by the polar coordinates; and storing in memory an association between the selected vertex on the model and a value of the identified pixel.

Description

METHODS AND APPARATUS FOR DISPLAYING AND MANIPULATING
A PANORAMIC IMAGE BY TILES
REFERENCE TO RELATED APPLICATIONS
This application claims priority from: U.S. Provisional Patent Application US 61/704,088 entitled "DIRECT ENVIRONMENTAL MAPPING METHOD AND SYSTEM" filed September 21, 2012, U.S. Provisional Patent Application US 61/704,060 entitled "SPLITTING OF ELLIPTICAL IMAGES" filed September 21, 2012, and U.S. Provisional Patent Application US 61/704,082 entitled "PANORAMIC IMAGE VIEWER" filed September 21, 2012, the entireties of which are incorporated herein by reference.
Technical Field
This invention relates to panoramic content processing, and in particular to methods and apparatus for manipulating and displaying panoramic images and/or video.
Background
There is a growing need for improving the processing of panoramic content, such as panoramic image content and panoramic video content.
Environmental mapping by skybox and skydome is widely used in displaying 360° panorama images. When an acquired panorama is provided in a circular fisheye form, the circular image is transformed into six cubic images to be shown on the six faces of a skybox (Figure 1A), or, in the case of a skydome, transformed into a single rectangular image with pixels scaled according to the azimuth and polar angles of the skydome (Figure 1B). The cubic or rectangular images are then loaded into a Graphics Processing Unit (GPU) as mesh textures and applied on a skybox-shaped or skydome-shaped mesh, respectively (Figures 1A and 1B). The geometrical mapping from the circular image to the cubic images or to the rectangular image has been found to be the speed-limiting step in manipulating and displaying a panorama. Chromatic aberrations also complicate such mappings.
Conventional image processing includes loading the panoramic content in main memory and processing it using a Central Processing Unit (CPU) of the electronic device. This is also a slow process as the panoramic content is rather large in comparison with a view window of interest to be displayed.
Recently, GPUs have been employed to repeatedly map textures on a mesh. Content transfer bandwidth to and from graphics memory is a limitation of GPUs which limits the size of texture tiles transferred by the GPU, wherein content transfer bandwidth is orders of magnitude smaller than the graphics memory available to the GPU. Some GPUs are further limited to uploading only textures having Power-Of-Two (POT) sizes.
A typical skybox based viewer introduces pincushion distortion when projecting the 3D skybox to a flat display, as shown in Figure 2B. The projection process is not conformal, as longitude and latitude lines are not kept perpendicular to each other. Moreover, due to the perspective projection with the viewer located at the center, current environmental mapping schemes, such as cubic mapping and skydome mapping, cannot support a Field-of-View (FOV) greater than 90°. Significant distortion is apparent whenever the FOV gets close to 90°, and in practice the conventional environmental mapping methods are limited to about 45°. It would therefore be desirable to correct the pincushion distortion and limited FOV problems to avoid distorting the local shape of objects such as faces.
There is a need to improve panoramic content processing.
Summary
Certain non-limiting embodiments of the proposed solution provide a direct mapping algorithm which combines the geometrical mapping and texture applying steps into a single step. To this end, a non-standard skydome can be used, which has its texture coordinates determined according to a circular-to-skydome geometrical mapping, instead of using azimuth and polar angles as in an equirectangular-to-skydome mapping. When a skybox is used, the skybox has texture coordinates according to a circular-to-skybox mapping, instead of texture coordinates being linear to pixel locations as in the case of standard cubic mapping provided by 3D GPUs. The texture coordinates are generated for each circular panorama based on the camera lens mapping parameters of the circular image, and the texture coordinate generation process can be carried out by a CPU or by a GPU using vertex or geometry shaders. Correction of chromatic aberration can be included in the direct mapping algorithm.
Direct environmental mapping by dome-based panorama viewers requires acquired circular images to be loaded as textures. Due to the large size of circular images, these have to be split at loading time into sub-images of sizes acceptable to the GPU. Two device-dependent GPU limitations, maximum texture size and support for non-Power-Of-Two (non-POT), need to be considered in deciding sub-image sizes. After the sub-images of proper sizes are generated, texture coordinates are assigned according to geometrical mapping of the panorama.
A panorama image can be loaded by a GPU as tiles of sub-images of smaller sizes than the size of the original panorama image, as a workaround for the maximum texture size and POT limitations, or as a way to provide flexibility of ordered loading according to a Region-of-Interest (ROI). A mesh is created and loaded to the GPU for each tile. The mesh contains information for both angular position and texture position, allowing the subsequent reconstruction of the panorama geometry without knowing the particular format (e.g., cylindrical, fisheye, circular, etc.) of the original panorama image. It is possible to display the panorama according to multiple projection types based on the same intermediate set of tiles/meshes generated from the original panorama image.
Another advantage of storing panorama images by the intermediate tiles is the ability to modify the panorama images according to projected geometry, based on the angular information that comes with the tiles. A viewer in accordance with a non-limiting embodiment of the proposed solution relies on a conformal projection process to preserve local shapes. For example, a rotated cylindrical mapping can be used. In the image generation process, the source panoramic image, which can be circular or non-circular, is cast on a sphere according to the angular location of pixels in the acquired panorama. The sphere is rotated around its center to a desired orientation to select an ROI before being projected to a cylinder also centered at the sphere's center with its longitudinal axis along the sphere's z-axis. The projected image on the cylinder is unwrapped and displayed by the viewer. Because the mapping algorithm is based on unwrapping a developable plane with the projected panorama, FOV is not particularly limited.
In accordance with an aspect of the proposed solution there is provided a method for mapping a panoramic image to a 3-dimensional virtual object of which a projection is made for display on a screen, comprising: providing the panoramic image in a memory, the panoramic image defined by a set of picture elements (pixels) in a 2-dimensional space; providing a model of the object, the model comprising a set of vertices in a 3- dimensional space; selecting a vertex on the model, the selected vertex being characterized by a set of angular coordinates; applying a transformation to the angular coordinates to obtain a set of polar coordinates; identifying a pixel whose position in the panoramic image is defined by the polar coordinates; storing in memory an association between the selected vertex on the model and a value of the identified pixel.
In accordance with another aspect of the proposed solution there is provided a non- transitory computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out a method for mapping a panoramic image to a 3-D virtual object of which a projection is made for display on a screen, the method comprising: providing the panoramic image in a memory, the panoramic image defined by a set of picture elements (pixels) in a 2-dimensional space; providing a model of the object, the model comprising a set of vertices in a 3-dimensional space; selecting a vertex on the model, the selected vertex characterized by a set of angular coordinates; applying a transformation to the angular coordinates to obtain a set of polar coordinates; identifying a pixel whose position in the panoramic image is defined by the polar coordinates; storing in memory an association between the selected vertex on the model and a value of the identified pixel.
In accordance with a further aspect of the proposed solution there is provided a method of assigning a value to a vertex of an object of interest, comprising: obtaining 3-D coordinates of the vertex; using a shader to derive 2-D coordinates based on the 3-D coordinates; and consulting a panoramic image to obtain a value corresponding to the 2-D coordinates.
In accordance with a further aspect of the proposed solution there is provided a method performed by a CPU, comprising: receiving a request for a value of a texture element having S and T coordinates; transforming the S and T coordinates into S1, T1 and H coordinates, where S1 and T1 denote X and Y coordinates within a sub-texture indexed by H; sending a request to a GPU for a value of a texture element having S1, T1 and H coordinates; receiving from the GPU the value of the texture element of the sub-texture indexed by H whose X and Y coordinates are S1 and T1.
In accordance with a further aspect of the proposed solution there is provided a non- transitory computer-readable medium comprising instructions which, when executed by a CPU, cause the CPU to carry out a method that comprises: receiving a request for a value of a texture element having S and T coordinates; transforming the S and T coordinates into S1 , T1 and H coordinates, where S1 and T1 denote X and Y coordinates within a sub-texture indexed by H; sending a request to a GPU for a value of a texture element having S1 , T1 and H coordinates; receiving from the GPU the value of the texture element of the sub-texture indexed by H whose X and Y coordinates are S1 and T1.
In accordance with a further aspect of the proposed solution there is provided a method of displaying a 2-D panoramic image in a viewing window, comprising: obtaining 2-D coordinates of an element of the viewing window; transforming the 2-D coordinates into a 3-D vector; rotating the vector; mapping the 3-D coordinates to 2-D coordinates of a panoramic image; and obtaining color information of the panoramic image at the 2-D coordinates.
In accordance with yet another aspect of the proposed solution there is provided a non- transitory computer-readable medium comprising instructions which, when executed by a computing apparatus, cause the computing apparatus to carry out a method that comprises: obtaining 2-D coordinates of an element of the viewing window; transforming the 2-D coordinates into a 3-D vector; rotating the vector; mapping the 3-D coordinates to 2-D coordinates of a panoramic image; and obtaining color information of the panoramic image at the 2-D coordinates.
Brief Description of the Drawings
The proposed solution will be better understood by way of the following detailed description of embodiments of the invention with reference to the appended drawings, in which: Figures 1A and 1 B are a comparison between illustrations of (A) a cubic mapping and (B) a direct mapping;
Figure 2A is an illustration of a dome view in accordance with the proposed solution; Figure 2B is an illustration of a pincushion distortion;
Figure 3 is a schematic diagram illustrating relationships between spaces;
Figure 4(a) is a schematic diagram illustrating rendering a view of a texture surface on a screen in accordance with the proposed solution; Figure 4(b) is a schematic diagram illustrating a 2-D geometric mapping of a textured surface in accordance with the proposed solution;
Figure 5A is a schematic plot showing a camera radial mapping function in accordance with the proposed solution;
Figure 5B is a schematic diagram illustrating direct mapping from a circular image to skydome as defined by Eq. (1 ) in accordance with the proposed solution;
Figure 5C is a comparison between (a) a cubic mapping process and (b) a direct mapping process in accordance with the proposed solution;
Figure 6 is an algorithmic listing illustrating dome vertex generation in accordance with a non-limiting example of the proposed solution; Figure 7 is an algorithmic listing illustrating cube/box vertex generation in accordance with another non-limiting example of the proposed solution;
Figure 8 is schematic diagram illustrating sub-images loaded as textures in accordance with the proposed solution;
Figure 9 is a schematic diagram illustrating determining vertex coordinates from texture coordinates in accordance with the proposed solution;
Figure 10 is a schematic diagram illustrating obtaining texture values in accordance with the proposed solution;
Figure 11 is an algorithmic listing illustrating image splitting in accordance with a non-limiting example of the proposed solution; Figure 12 is an illustration of a split circular panorama in accordance with the proposed solution;
Figure 13 is another algorithmic listing illustrating dome construction in accordance with another non-limiting example of the proposed solution; Figure 14 is an illustration of a dome construction from a split circular image in accordance with the non-limiting algorithmic example illustrated in Figure 13;
Figure 15 is schematic diagram illustrating a sketch of the geometry involved in accordance with the proposed solution;
Figure 16 is a table illustrating image processing in accordance with the proposed solution;
Figure 17 is an illustration having reduced pincushion distortion compared to the illustration in Figure 2B in accordance with the proposed solution;
Figure 18 is an algorithmic listing illustrating a rotated equirectangular mapping in accordance with a non-limiting example of the proposed solution; Figure 19 is an illustration of a mapping from a circular panorama image to a viewer window in accordance with the proposed solution;
Figure 20 is an illustration of a 90° FOV mapping from a circular panorama image in accordance with the proposed solution; and
Figure 21 is another illustration of a 90° FOV mapping from a circular panorama image in accordance with the proposed solution, wherein similar features bear similar labels throughout the drawings. Reference to qualifiers such as "top" and "left" in the present specification is made solely with reference to the orientation of the drawings as presented in the application and does not imply any absolute spatial orientation.
Detailed Description
To discuss texture mapping, several coordinate systems can be defined, see Figure 3. "Texture space" is a 2D space of surface textures and "object space" is a local 3D coordinate system in which 3D objects such as polygons and patches can be defined. Typically, a polygon is defined by listing the object space coordinates of each of its vertices. "World space" is a global coordinate system that is related to each object's local object space using 3D modeling transformations such as translations, rotations and scaling. "3D screen space" is the 3D coordinate system of a display: a perspective space with pixel coordinates (x, y) and depth z using z-buffering. 3D screen space is related to world space by the camera parameters such as position, orientation, and FOV. Finally, "2D screen space" is a 2D subset of 3D screen space without the z dimension: a projection of objects from an ROI in 3D screen space onto the display. Use of the phrase "screen space" by itself can mean 2D screen space.
With reference to Figure 3, the correspondence between 2D texture space and 3D object space is called the "parameterization of the surface", and the mapping from 3D object space to 2D screen space is the "projection" defined by the camera and the modeling transformations. As illustrated in Figure 4(a) it is noted that when rendering a particular view of a textured surface, it is the compound mapping from 2D texture space to 2D screen space that is of interest. For resampling purposes, once the 2D to 2D compound mapping is known, the intermediate 3D space can be ignored. The compound mapping in texture mapping is an example of an image warp, the resampling of a source image to produce a destination image according to a 2D geometric mapping (see Figure 4(b)).
Direct Environmental Mapping
In what follows, with reference to Figure 5C(b), a skydome and a skybox with texture coordinates set to allow direct mapping are given in detail. However, the algorithm described here is general and can be applied to generate other geometry shapes for panorama viewers.
Geometry of 3D Model (Dome)
A vertex on a skydome mesh, which is centered at the coordinate origin, can be located by its angular part in spherical coordinates (θ, φ), where θ and φ are the polar and azimuth angles respectively. The direct mapping from a circular image to skydome is defined below. For many fisheye lenses, the two-dimensional lens mapping is uniquely determined by a one-dimensional radial lens mapping function:

(r_E, θ_E) = (f(θ), φ)     (1)

where r_E and θ_E are the polar coordinates of the mapped location within a centered circular or non-circular source image, and f(θ) is a mapping function defined by the camera lens projection. The radial mapping function f(θ) is supplied by the camera in the form of a one-dimensional lookup table (more on this below). Figure 5A illustrates an example of a radial mapping function for a non-circular source image. The mapping defined by Eq. (1) is conceptually illustrated in Figure 5B. Note that Eq. (1) can be applied to 360° fisheye lens images. In that case, the radial mapping function f(θ) may be a straight line.
The texture coordinates of a vertex are obtained by transforming the polar coordinates into Cartesian coordinates as follows:
(s, t) = (r_E cos θ_E, r_E sin θ_E)     (2)
As such, the dome (an example of a 3D model) is created by generating vertices on a sphere, and the texture coordinates are assigned to the vertices according to Eqs. (1) and (2).
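As a concrete illustration of Eqs. (1) and (2), the following sketch generates dome vertices and assigns their texture coordinates directly from the circular image. It is a minimal Python/NumPy sketch, not the patent's Algorithm 1 (Figure 6): the function names, the [0, 1] texture normalization and the 0.5 center offset are assumptions, and the radial mapping f(θ) is taken to be the camera-supplied one-dimensional lookup table.

    import numpy as np

    def radial_lut(theta, lut_theta, lut_r):
        """Radial lens mapping f(theta): interpolation of the 1-D lookup table
        supplied by the camera (normalized image radius per polar angle)."""
        return np.interp(theta, lut_theta, lut_r)

    def dome_vertices_with_texcoords(n_theta, n_phi, max_theta, lut_theta, lut_r):
        """Generate skydome vertices and assign texture coordinates directly
        from the circular source image, per Eqs. (1) and (2):
            r_E = f(theta), theta_E = phi; then polar -> Cartesian."""
        verts, texcoords = [], []
        for theta in np.linspace(0.0, max_theta, n_theta):          # polar angle
            r_e = radial_lut(theta, lut_theta, lut_r)               # Eq. (1)
            for phi in np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False):
                # Unit-sphere vertex position (dome of radius 1).
                verts.append((np.sin(theta) * np.cos(phi),
                              np.sin(theta) * np.sin(phi),
                              np.cos(theta)))
                # Eq. (2): polar -> Cartesian; the 0.5 offset assumes texture
                # coordinates normalized to [0, 1] with the circle centered.
                texcoords.append((0.5 + r_e * np.cos(phi),
                                  0.5 + r_e * np.sin(phi)))
        return np.array(verts), np.array(texcoords)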
Once the textures of the vertices of the 3D model (in this case a sphere, or dome) are known, this results in a 3D object which can now undergo a projection from 3D object space to 2D screen space in accordance with the "camera" angle and the modeling transformation (e.g., perspective projection). This can be done by viewer software.
For general illustration purposes the lens mapping function as employed hereinabove is the one-to-one correspondence between the incident angle of an incoming ray and the pixel position on the image sensor of the camera:
(s, t) = f(θ, φ)

with (s, t) being the texture coordinates of the image sensor pixel, and θ, φ being the polar and azimuth angles, respectively; i.e., the texture coordinates of the pixel are a function of the incident angle. However, due to optical dispersion of lens materials, the actual lens mapping function is found to be wavelength/color dependent. That is, the footprint of an incident ray on the imaging sensor is not a point but a color-separated region. When the color-separated region of the footprint spans a few neighboring pixels on the imaging sensor, this type of color dependency of the lens mapping function results in chromatic aberration, reducing the overall resolution of panorama images. It is desirable to include dispersion correction in a panorama viewer.
In accordance with a preferred embodiment of the proposed solution, to correct dispersion, different lens mapping functions for Red/Green/Blue pixels (R/G/B) are generated according to weighted ray tracing centroids, where the weights are proportional to R/G/B pixel response functions (pixel's sensitivities to a particular wavelength) and the corresponding single color intensity within the spectrum of the light source (the energy at a given wavelength from the light source times the sensor's response function per input light flux at that wavelength). For example, the lens mapping function for red pixels can be generated by:
f_R(θ, φ) = Σ_i w_i f_i(θ, φ) / Σ_i w_i

where the summation over i goes through a list of red wavelengths, w_i is the overall red pixel response function at the i-th wavelength for a given light source spectrum, which for simplicity is assumed to be the solar spectrum (for brevity of description herein, light source spectra are assumed to be the solar spectrum; it will be understood that each light source has a corresponding light spectrum), and f_i(θ, φ) is the monochromatic lens mapping function for an incident ray of the i-th wavelength as above. Therefore the overall direct mapping process can include:
1. Each pixel in window space with polar coordinates (R, Θ) is scaled according to a zooming factor k:

(R, Θ) → (kR, Θ)
2. The scaled polar coordinates are converted to spherical coordinates (θ, φ) according to, for example, a stereographic projection:
(θ, φ) = (2 arctan(R), Θ)
3. The spherical coordinates are rotated to a new location to allow user panning:
(θ, φ) → (θ', φ')
4. The texture coordinates for R/G/B are found by using dispersion correcting lens mapping functions for R/G/B:
(s_c, t_c) = f_c(θ', φ'),   c ∈ {R, G, B}
Accordingly, these three texture coordinates are used in texture look-up to retrieve the red, green and blue colors, respectively, and the color of the pixel within window space is constructed by combining the R/G/B colors. The scaling and rotation in steps 1 and 3 above are required only for interactive viewers, and can be skipped when the default static view of the panorama is to be generated.
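The per-pixel pipeline of steps 1-4 can be sketched as follows. This is an illustrative Python rendition rather than the patent's implementation: the stereographic form θ = 2 arctan(R), the 0.5 texture-center offset and all parameter names are assumptions, and the three per-channel lens tables stand in for the dispersion-correcting mapping functions f_R, f_G, f_B.

    import numpy as np

    def render_pixel(R, Theta, k, rot, lut_theta, lut_r_rgb, sample):
        """Dispersion-corrected colour for one window-space pixel, following
        steps 1-4 above.  `lut_r_rgb` holds three radial lens-mapping tables
        (one per colour channel); `sample(channel, s, t)` reads the source
        image at normalized texture coordinates (s, t)."""
        # Step 1: zoom scaling of the window-space polar radius.
        R = k * R
        # Step 2: stereographic projection from the window plane to the sphere
        # (assumed form: theta = 2*arctan(R), phi = Theta).
        theta, phi = 2.0 * np.arctan(R), Theta
        # Step 3: rotate the direction on the sphere according to user panning.
        d = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        x, y, z = rot @ d
        theta, phi = np.arccos(np.clip(z, -1.0, 1.0)), np.arctan2(y, x)
        # Step 4: per-channel texture coordinates from the R/G/B lens tables,
        # then one texture look-up per channel; combine into the pixel colour.
        rgb = []
        for c in range(3):                      # 0=R, 1=G, 2=B
            r_e = np.interp(theta, lut_theta, lut_r_rgb[c])
            s, t = 0.5 + r_e * np.cos(phi), 0.5 + r_e * np.sin(phi)
            rgb.append(sample(c, s, t))
        return tuple(rgb)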
A variant of this algorithm could be mapping texture coordinates to 3 different locations within the display space, according to stereographic projection, and then performing the RGB dependent lens mapping. The red, green, and blue colors for the display pixels are then found by a single color texture look-up using the corresponding texture coordinates, respectively.
The following describes the proposed solution in terms of a monochromatic source panoramic image for brevity, however it is understood that in practice the source panoramic image is a color image and dispersion correction is employed in order to retain best resolution.
Geometry of 3-D Model (Box/Cube)
In a variant, illustrated in Figure 5C(a), a skybox is used instead of the skydome as the 3D model. In this case, the vertex locations on the skybox have the form (r(θ, φ), θ, φ) in spherical coordinates, with the radius being a function of angular direction (i.e., defined by θ and φ) instead of a constant as in the skydome case. In other words, at a given point on the surface of the mesh shape, the radius is a function of θ and φ. This is the case with a cube, for example, although the same will also be true of other regular polyhedrons. Since Eq. (1) does not use the radial part, the texture coordinates are generated by Eqs. (1) and (2) using the angular part of the vertex coordinates.
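For instance, for an axis-aligned cube of half-width 1, the radius along a direction (θ, φ) is fixed by the nearest face. The short sketch below shows such an r(θ, φ); the cube geometry and naming are illustrative assumptions, not the patent's Algorithm 2.

    import numpy as np

    def cube_radius(theta, phi):
        """Radius r(theta, phi) of an axis-aligned cube of half-width 1, measured
        from the center along the direction (theta, phi): the ray exits through
        the face where the largest |component| of the unit direction reaches 1."""
        d = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        return 1.0 / np.max(np.abs(d))

    # A skybox vertex then sits at spherical position (r(theta, phi), theta, phi);
    # its texture coordinates still come from Eqs. (1) and (2), which use only
    # the angular part of the vertex position.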
It is seen that direct mapping (which is implemented by certain embodiments of the proposed solution) avoids the need for a geometric mapping to transform an input 2D circular image into an intermediate rectangular (for a dome model) or cubic (for a cube/box model) image before mapping the intermediate image to the vertices of the 3D model. Specifically, in the case of direct mapping, the texture for a desired vertex can be found by transforming the 3D coordinates of the vertex into 2D coordinates of the original circular image and then looking up the color value of the original circular image at those 2D coordinates. Conveniently, the transformation can be done using a vertex shader by applying a simple geometric transformation according to Eq. (1). On the other hand, when conventional cubic mapping is used, the texture of a desired vertex is found by consulting the corresponding 2D coordinate of the unwrapped cube. However, this requires the original circular image to have been geometrically transformed into the unwrapped cube, which can take a substantial amount of time. A comparison of the direct mapping to the traditional "cubic mapping" is illustrated in Figures 1 and 5C.
General Mesh Shapes
Because the form (r(θ, φ), θ, φ) is the general case, where the function r(θ, φ) specifies the particular mesh shape, Eqs. (1) and (2) are applicable in generating any geometry where the radius is uniquely determined by the angular position relative to the coordinate origin.
Direct Environmental Mapping Implementation
A non-limiting example of dome vertex generation is given by Algorithm 1 in Figure 6.
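Algorithm 1 itself is only available as a figure; the following is a plausible sketch of dome vertex generation consistent with the description above, in which the lens mapping function f, the tessellation counts and the texture-coordinate normalization are assumptions:

```python
import numpy as np

def generate_dome_vertices(f, n_theta=32, n_phi=64, radius=1.0):
    """Generate (x, y, z, s, t) vertices for a skydome.

    f : lens mapping function r_E = f(theta) giving the normalized radial
        texture coordinate for polar angle theta (assumed, camera-specific).
    """
    vertices = []
    for i in range(n_theta + 1):
        theta = 0.5 * np.pi * i / n_theta          # polar angle, 0..pi/2
        for j in range(n_phi + 1):
            phi = 2.0 * np.pi * j / n_phi          # azimuth angle, 0..2*pi
            # vertex position on the dome (constant radius)
            x = radius * np.sin(theta) * np.cos(phi)
            y = radius * np.sin(theta) * np.sin(phi)
            z = radius * np.cos(theta)
            # texture coordinates taken directly from the circular image
            r_e = f(theta)
            s = 0.5 + 0.5 * r_e * np.cos(phi)
            t = 0.5 + 0.5 * r_e * np.sin(phi)
            vertices.append((x, y, z, s, t))
    return vertices
```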
A non-limiting example of cube/box vertex generation is given by Algorithm 2 in Figure 7.
Splitting Of Circular Images
In accordance with one embodiment of the proposed solution, at loading time, an originally acquired panoramic image (e.g., circular or non-circular) is split along its width and height into image segments of sizes up to the maximum texture size of the GPU. When non-POT is supported, there is at most one remainder image segment of a size less than the maximum texture size. Where non-POT is not supported, this remainder segment can be further split into smaller POT sizes. Alternatively, the original image can be padded with transparent color up to the texture size. The circular image is thus split into a plurality of rectangular sub-images according to segmentation in the width and height directions. The sub-images are loaded as textures, for example as illustrated in Figure 8.
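A minimal sketch of such a split is shown below; it assumes a NumPy image array and a greedy power-of-two segmentation up to a given maximum tile size, which is one of the options described above:

```python
import numpy as np

def pot_segments(length, max_size):
    """Split a length into power-of-two segment sizes no larger than max_size."""
    sizes, remaining = [], length
    while remaining > 0:
        size = min(max_size, 1 << (remaining.bit_length() - 1))
        sizes.append(size)
        remaining -= size
    return sizes

def split_image(image, max_size):
    """Split an image (H x W x C array) into rectangular sub-images whose
    widths and heights are powers of two no larger than max_size."""
    h, w = image.shape[:2]
    tiles = []
    y = 0
    for th in pot_segments(h, max_size):
        x = 0
        for tw in pot_segments(w, max_size):
            tiles.append(((x, y), image[y:y + th, x:x + tw]))
            x += tw
        y += th
    return tiles
```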
In accordance with a further embodiment of the proposed solution, the total memory of the GPU may be insufficient to accommodate the entire circular image. This may be the case on mobile devices. Therefore, another benefit of using a splitting algorithm is being able to load high resolution circular panorama images for mobile devices with tight limits on system memory.
One 3D mesh portion is created to cover each rectangular sub-image, and the vertex coordinates are determined from the texture coordinates of the circular image:
θ = f⁻¹(rE), φ = θE
where rE and θE are the texture coordinates in polar coordinates, and f⁻¹(rE) is the reverse mapping function from the circular image to the panorama defined by the camera lens projection, for example as illustrated in Figure 9.
The texture coordinates of the circular image and texture coordinates of sub-images are related by linear scaling, as in the following equation:
sS = (sE · WE − LS) / WS, tS = (tE · HE − TS) / HS (3)
where s and t are texture coordinates, W and H stand for image width and height respectively, subscripts E and S indicate the circular image and the sub-image respectively, and LS and TS are the pixel locations of the left and top edges of the sub-image, respectively. With reference to Figure 10, consider a vertex which, by virtue of the texture map, is mapped to coordinates (s, t) in the original circular image. In order to obtain the texture value for (s, t), it is necessary for the CPU to determine the sub-image corresponding to these coordinates and the location within that sub-image. This is obtained using Eq. (3) above, which will result in (sS, tS) for a given sub-image h (1 ≤ h ≤ H). The appropriate texture value can then be obtained using the GPU.
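A hedged sketch of this CPU-side lookup is given below; the tile records holding the left/top/width/height of each sub-image are an assumed bookkeeping structure used only to illustrate Eq. (3):

```python
def full_to_subimage_coords(s_e, t_e, full_w, full_h, tiles):
    """Map texture coordinates (s_e, t_e) of the full circular image to
    (s_s, t_s, h): coordinates within the sub-image indexed by h.

    tiles : list of dicts with keys 'left', 'top', 'width', 'height'
            describing each sub-image in pixels (assumed layout record).
    """
    px = s_e * full_w   # pixel location in the full image
    py = t_e * full_h
    for h, tile in enumerate(tiles):
        if (tile['left'] <= px < tile['left'] + tile['width'] and
                tile['top'] <= py < tile['top'] + tile['height']):
            s_s = (px - tile['left']) / tile['width']    # Eq. (3)
            t_s = (py - tile['top']) / tile['height']
            return s_s, t_s, h
    raise ValueError("coordinates fall outside all sub-images")
```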
A non-limiting example of image splitting according to width or height is given by Algorithm 1 listed in Figure 11. Figure 12 illustrates an example of splitting of a circular panorama. Specifically, splitting was done by POT sizes up to a given maximum. Straight lines indicate borders of sub-images after splitting. A non-limiting example of dome construction from a split circular image is given in Algorithm 2 listed in Figure 13. Figure 14 illustrates a dome with the split images mapped thereon. Specifically, mapping of a circular panorama split to POT sizes up to a given maximum is illustrated by way of non-limiting example.
Displaying and Manipulating a Panoramic image by Tiles
Tiles are generated according to methods described herein above. Meshes are generated with angular information for each vertex:
(x, y, z, s, t) with (x, y, z) being a direction vector of the vertex, and (s, t) being the texture coordinates. The association of the direction vector and texture coordinates is determined by optical design of the panorama lens. For example,
(x, y, z) = (sin θ · cos φ, sin θ · sin φ, cos θ), where θ and φ are the polar angle and azimuth angle from reverse ray tracing of the (s, t) location on the image sensor.
Subsequently, a panorama viewer projects a vertex (x, y, z) to certain display coordinates (u, v) according to the projection method of the viewer, and the displayed image is formed by texture look-up using the texture coordinates (s, t). Examples of some viewers include: skydome environment mapping, equirectangular projection, and stereographic projection. The pre-tiled set-up allows fast switching between projection types without having to reload the original image in the GPU.
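The sketch below illustrates the idea of switching the per-vertex projection while the tiled textures stay resident on the GPU; the two projection functions are simplified assumptions rather than the exact formulas of any particular embodiment:

```python
import numpy as np

def project_equirectangular(x, y, z):
    """Project a direction vector to display coordinates (u, v) in [0, 1]."""
    theta, phi = np.arccos(z), np.arctan2(y, x)
    return 0.5 + phi / (2.0 * np.pi), theta / np.pi

def project_stereographic(x, y, z):
    """Stereographic projection of a direction vector to (u, v)."""
    return 0.5 + x / (1.0 - z), 0.5 + y / (1.0 - z)

def project_tiles(tiles, projection):
    """Re-project every tile vertex (x, y, z, s, t) without touching the
    textures already uploaded to the GPU; only (u, v) changes."""
    return [[(*projection(x, y, z), s, t) for (x, y, z, s, t) in mesh]
            for mesh in tiles]

# Switching viewers amounts to calling project_tiles with a different function:
# projected = project_tiles(tiles, project_stereographic)
```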
To edit the panorama image according to a projected geometry, each tile is directly projected to a rectangle of the same dimension, in parallel with a projection to display space by a given viewer projection, and the modification of tiles is carried out using information from the viewer projection method.
(s, t) -> (s, t)
The trivial projection is to a rectangle of the same size. This operation is the same as copying of the tile if pixels are not modified according to the viewer display space coordinates.
(x, y, z) -> (u, v) gives the viewer display space coordinates according to the projection method of a panorama viewer.
After the manipulation is carried out for all tiles, the tiles can be used in viewers to display the modified panorama, or can be saved to a new panorama image. The manipulation can be limited to tiles in the ROI. Examples of this type of image manipulation include: cropping of the panorama by projection on the display, drawing shapes onto the panorama in the display, and pixelation in display space.
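As a sketch of this kind of display-space edit (under the same simplifying assumptions as the previous snippets), the following modifies every tile pixel whose viewer display coordinates fall inside a rectangular ROI (e.g., to implement pixelation or drawing); the per-pixel direction vectors are assumed to be precomputed from the tile geometry:

```python
def edit_tiles_in_display_space(tiles, projection, roi, edit_pixel):
    """Modify tile pixels whose viewer display coordinates fall inside roi.

    tiles      : list of (pixels, directions) pairs; 'directions' gives the
                 (x, y, z) direction vector for each pixel (assumed precomputed
                 from the tile geometry).
    projection : function (x, y, z) -> (u, v) of the current viewer.
    roi        : (u_min, v_min, u_max, v_max) in display coordinates.
    edit_pixel : function applied to each selected pixel value.
    """
    u0, v0, u1, v1 = roi
    for pixels, directions in tiles:
        for idx, (x, y, z) in enumerate(directions):
            u, v = projection(x, y, z)
            if u0 <= u <= u1 and v0 <= v <= v1:
                pixels[idx] = edit_pixel(pixels[idx])
    return tiles
```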
In accordance with the proposed solution, advantages are derived from using pre-tiled panorama images which allows panorama viewers and editors to be implemented without being limited to GPU supported texture sizes.
Given complete geometry information provided by a tiled panorama, multiple projection methods can be supported simultaneously, allowing switching between projection methods without reloading the original panorama image into the GPU. The pre-tiled panorama can be modified according to display coordinates in viewers.
For certainty, such a manipulation process is understood to be employed in conjunction (either in sequence or in parallel) with a projection process such as, but not limited to, the ones described above.
Panoramic Image Viewer
In accordance with an embodiment of the proposed solution, for an image to be generated by the viewer, a pixel in the display, indexed as (u, v), is mapped to a cylinder with unit radius in 3-dimensional space by equirectangular projection for example as shown by Eq. (4):
(Eq. (4): linear mapping of display indices (u, v) to cylindrical coordinates (φc, zc))
where φc and zc are the azimuth and height in cylindrical coordinates, respectively, and w and h are the width and height of the displayed image, respectively. Linear mapping is used to preserve angular uniformity in both directions along the u-indices and v-indices. Next, the point on the cylinder (which was just found) is mapped to a unit sphere by normalization of its cartesian coordinates, and the point on the unit sphere is rotated, which can for example be expressed by:
(xs, ys, zs) = F (xc, yc, zc) / rc (5)
where xc, yc, zc are respectively the cartesian coordinates of the point on the cylinder, rc is its distance to the origin, F is a rotation matrix, and (xs, ys, zs) are the cartesian coordinates of the corresponding point on the unit sphere. The rotation matrix F is a function of user input where navigation throughout the original image will induce changes in F.
The color of the displayed pixel (u, v) in the view window is the color of a corresponding location within a 2D panoramic image, which can be circular or non-circular. This corresponding location can be obtained by first converting the cartesian coordinates of the aforementioned point on the unit sphere (xs, ys, zs) to spherical coordinates (1, θs, φs) and then recognizing the existence of a mapping between (general) spherical coordinates (1, θ, φ) on the unit sphere and (general) polar coordinates (rE, θE) on the circular or non-circular panoramic image. In particular, this mapping can be defined by Eq. (1), where f(θ) is a mapping function defined by the camera lens projection, and may indeed be supplied by the camera in the form of a one-dimensional lookup table.
As a result, the texture coordinates in the original 2D circular image that correspond to the point (u, v) in the display viewing window are given by:
s = ½ + rE cos(θE) / 2, t = ½ + rE sin(θE) / 2 (6).
Figure 15 illustrates a sketch of the geometry involved in the aforementioned process.
Figure 16 illustrates a summary of the overall mapping process.
Figure 17 illustrates a screenshot from a viewer implemented in accordance with an embodiment of the present invention. It is noted that the pincushion distortion from Figure 2B has been reduced.
Implementation
Algorithm 1 listed in Figure 18 finds the texture coordinates for a location (u, v) in the viewer window. φc and zc are the cylindrical coordinates from Eq. (4), and vc = (xc, yc, zc) is a column vector of the corresponding cartesian coordinates; rc is the distance of the cylinder point to the origin; F is a rotation matrix which holds as columns the direction vectors along the x-, y- and z-axes of a frame fixed on the spherical source image; vs = (xs, ys, zs) is a column vector of the cartesian coordinates of the mapped point on the unit sphere.
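Since Algorithm 1 is only reproduced as a figure, the following is a hedged reconstruction of the per-pixel mapping it describes; the field-of-view constant used for the linear cylindrical mapping and the lens function f are assumptions:

```python
import numpy as np

def viewer_pixel_to_texture(u, v, w, h, F, f, fov=np.pi / 2):
    """Map a viewer-window pixel (u, v) to texture coordinates (s, t) in the
    original circular panoramic image.

    F   : 3x3 rotation matrix set by user navigation.
    f   : lens mapping function r_E = f(theta) (camera-specific, assumed).
    fov : field-of-view constant for the linear cylindrical mapping
          (an assumption standing in for Eq. (4)).
    """
    # Eq. (4): linear mapping of display indices to cylindrical coordinates
    phi_c = (u / w - 0.5) * fov
    z_c = (0.5 - v / h) * fov
    v_c = np.array([np.cos(phi_c), np.sin(phi_c), z_c])

    # Eq. (5): normalize to the unit sphere and rotate by F
    v_s = F @ (v_c / np.linalg.norm(v_c))

    # spherical coordinates of the rotated point
    theta_s = np.arccos(np.clip(v_s[2], -1.0, 1.0))
    phi_s = np.arctan2(v_s[1], v_s[0])

    # Eqs. (1) and (6): polar coordinates on the circular image
    r_e = f(theta_s)
    s = 0.5 + 0.5 * r_e * np.cos(phi_s)
    t = 0.5 + 0.5 * r_e * np.sin(phi_s)
    return s, t
```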
One example of mapping from a circular panorama image to the viewer window is shown in Figure 19.
Figures 20 and 21 show how a portion of the circular panorama image is mapped to a viewer window at a 90° FOV.
Panoramic Viewer employing Stereographic Projection
A typical panorama viewer supplies environmental mapping of wide angle panorama images according to viewing conditions (ROI) from user input.
Traditional cubic mapping preserves straight lines at the cost of losing orthomorphism. In contrast, conformal mapping based viewers are orthomorphic, i.e. local scaling factors are isotropic. Therefore, resulting local shapes from viewer output appear almost distortion free, while straight lines spanning large viewing angles are not mapped straight.
Stereographic projection is well-known for being conformal, and gives satisfactory results when used as an environmental mapping method. Compared with traditional cubic mapping, stereographic projection generates less pronounced local distortion at the same size of FOV. Therefore, slightly larger FOV angles can be used than in the case of a cubic mapping viewer.
Geometry of Rotated Stereographic mapping
Stereographic projection maps a point on a unit sphere to a point on a tangential plane to the sphere. Each point on the unit sphere can be rotated according to viewer rotation (ROI) and mapped to a location within the panorama image according to lens optics.
A detailed description of such a procedure for example includes: For the image to be generated by the viewer, a pixel indexed with normalized texture coordinates (u, v) is related to a point (x, y, z) on a unit sphere according to stereographic projection:
u = ½ + x/(1 − z), v = ½ + y/(1 − z)
The unit sphere point (x, y, z) is rotated according to rotation angles specified by the viewer (ROI):
(x', y', z') = RF(x, y, z)
The direction of the vector (x', y', z') is identified, for example, by converting (x', y', z') to spherical coordinates:
θ = arccos(z'), φ = arctan(y', x')
The corresponding texture coordinates of the mapped pixel within the original panorama image are found according to the optical mapping of the wide angle lens:
(u, v) = ML(θ, φ)
where the ML function is determined by the wide angle lens, which focuses incoming rays in direction (θ, φ) to a certain sensor location by lens optics. In practice, the ML function can be in the form of a look-up table that gives pixel locations for all panorama angles covered within the field of view.
With this procedure, each pixel within the viewer viewport is uniquely mapped to one location within the original circular image, and color can be assigned to viewer pixels according to the mapped location within the panorama image. The color assignment process is usually performed by GPU texture look up, or by traditional CPU based geometry mapping.
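A minimal sketch of this stereographic viewer mapping is shown below; the rotation matrix R_f and the lens_map function (which could be backed by the ML look-up table) are assumed interfaces, not part of any specific embodiment:

```python
import numpy as np

def stereographic_viewer_lookup(u, v, R_f, lens_map):
    """Map a viewer pixel with normalized coordinates (u, v) to a location
    in the original circular panorama image.

    R_f      : 3x3 viewer rotation matrix (ROI).
    lens_map : function (theta, phi) -> (pixel_x, pixel_y), e.g. backed by a
               look-up table supplied with the lens (assumed interface).
    """
    # inverse stereographic projection: (u, v) -> point on the unit sphere
    a, b = u - 0.5, v - 0.5
    d = 1.0 + a * a + b * b
    x, y, z = 2.0 * a / d, 2.0 * b / d, (d - 2.0) / d

    # rotate according to the viewer ROI
    x, y, z = R_f @ np.array([x, y, z])

    # direction angles of the rotated vector
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    phi = np.arctan2(y, x)

    # optical mapping of the wide-angle lens gives the panorama location
    return lens_map(theta, phi)
```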
Those skilled in the art will appreciate that a computing device may implement the methods and processes of certain embodiments of the present invention by executing instructions read from a storage medium. In some embodiments, the storage medium may be implemented as a ROM, a CD, Hard Disk, USB storage device, etc. connected directly to (or integrated with) the computing device. In other embodiments, the storage medium may be located elsewhere and accessed by the computing device via a data network such as the Internet. Where the computing device accesses the Internet, the physical interconnectivity of the computing device in order to gain access to the Internet is not material, and can be achieved via a variety of mechanisms, such as wireline, wireless (cellular, Wi-Fi, Bluetooth, WiMax), fiber optic, free-space optical, infrared, etc. The computing device itself can take on just about any form, including a desktop computer, a laptop, a tablet, a smartphone (e.g., Blackberry, iPhone, etc.), a TV set, etc.
Moreover, persons skilled in the art will appreciate that in some cases, the panoramic image being processed may be an original panoramic image, while in other cases it may be an image derived from an original panoramic image, such as a thumbnail or preview image.
Certain adaptations and modifications of the described embodiments can be made. Therefore, the above discussed embodiments are to be considered illustrative and not restrictive. Also it should be appreciated that additional elements that may be needed for operation of certain embodiments of the present invention have not been described or illustrated as they are assumed to be within the purview of the person of ordinary skill in the art. Moreover, certain embodiments of the present invention may be free of, may lack and/or may function without any element that is not specifically disclosed herein.
While the invention has been shown and described with reference to preferred embodiments thereof, it will be recognized by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims

What is claimed is:
1. A method for mapping a panoramic image to a 3-dimensional virtual object of which a projection is made for display on a screen, comprising:
- providing the panoramic image in a memory, the panoramic image defined by a set of picture elements (pixels) in a 2-dimensional space;
- providing a model of the object, the model comprising a set of vertices in a 3-dimensional space;
- selecting a vertex on the model, the selected vertex characterized by a set of angular coordinates;
- applying a transformation to the angular coordinates to obtain a set of polar coordinates;
- identifying a pixel whose position in the panoramic image is defined by the polar coordinates;
- storing in memory an association between the selected vertex on the model and a value of the identified pixel.
2. The method defined in claim 1 , wherein the selected vertex on the model is further characterized by a radial component that is constant over a range of vertices on the model.
3. The method defined in claim 1 or 2, wherein the selected vertex on the model is further characterized by a radial component that is constant for all vertices on the model.
4. The method defined in any of claims 1 to 3, wherein the selected vertex on the model is further characterized by a radial component that is a function of at least one of the angular coordinates.
5. The method defined in claim 1 or 4, wherein the selected vertex on the model is further characterized by a radial component that is not independent of the angular coordinates.
6. The method defined in any of claims 1 to 5, further comprising repeating the selecting, identifying and storing for a plurality of vertices on the model.
7. The method defined in any of claims 1 to 6, wherein the transformation is a function of optical properties of an image acquisition device used to capture the panoramic image.
8. The method defined in any of claims 1 to 7, wherein said association defines a surface pixel for the 3-D object.
9. The method defined in any of claims 1 to 8, wherein the angular coordinates include an azimuth coordinate and a polar coordinate.
10. The method defined in any of claims 1 to 9, further comprising: determining a desired viewing orientation in 3-D space; identifying a viewing window corresponding to the desired viewing orientation, the viewing window occupying a plane in 3-dimensional space; projecting the model onto the viewing window in order to determine a set of surface pixels of the 3-D virtual object that are visible in the desired viewing orientation.
11. The method defined in any of claims 1 to 10, wherein the panoramic image is a 360-degree image and wherein the set of pixels of the panoramic image defines a circle.
12. The method defined in any of claims 1 to 3, wherein the 3-D model is a dome.
13. The method defined in any of claims 1 , 4 and 5, wherein the 3-D model is a box.
14. A non-transitory computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out a method for mapping a panoramic image to a 3-D virtual object of which a projection is made for display on a screen, the method comprising:
- providing the panoramic image in a memory, the panoramic image defined by a set of picture elements (pixels) in a 2-dimensional space;
- providing a model of the object, the model comprising a set of vertices in a 3-dimensional space;
- selecting a vertex on the model, the selected vertex characterized by a set of angular coordinates;
- applying a transformation to the angular coordinates to obtain a set of polar coordinates;
- identifying a pixel whose position in the panoramic image is defined by the polar coordinates;
- storing in memory an association between the selected vertex on the model and a value of the identified pixel.
15. A method of assigning a value to a vertex of an object of interest, comprising:
- obtaining 3-D coordinates of the vertex;
- using a shader to derive 2-D coordinates based on the 3-D coordinates; and
- consulting a panoramic image to obtain a value corresponding to the 2-D coordinates.
16. The method defined in claim 15, wherein the panoramic image is a non-circular image.
17. The method defined in claim 15 or 16, wherein the shader is a vertex shader.
18. The method defined in any of claims 15 to 17, wherein the shader utilizes the following geometry in deriving the 2-D coordinates based on the 3-D coordinates: rE = f(θ), θE = φ.
19. A method performed by a GPU, comprising:
- receiving an image whose pixels span a range along each of two orthogonal axes;
- segmenting the image into indexed sub-images;
- storing the sub-images as texture maps;
- responding to a request for a value of a texture element having S1, T1 and H coordinates by returning the value of the texture element of the sub-image indexed by H whose X and Y coordinates are S1 and T1.
20. The method defined in claim 19, wherein segmenting is done according to maximum GPU size.
21. The method defined in claim 19 or 20, wherein segmenting is done according to powers of two.
22. The method defined in any of claims 19 to 21 , wherein the image is a panoramic image.
23. The method defined in any of claims 19 to 22, wherein the pixels of the image define a non-circular shape.
24. The method defined in any of claims 19 to 22, wherein the pixels of the image define a circle.
25. A method performed by a CPU, comprising:
- receiving a request for a value of a texture element having S and T coordinates;
- transforming the S and T coordinates into S1 , T1 and H coordinates, where S1 and T1 denote X and Y coordinates within a sub-texture indexed by H;
- sending a request to a GPU for a value of a texture element having S1 , T1 and H coordinates;
- receiving from the GPU the value of the texture element of the sub-texture indexed by H whose X and Y coordinates are S1 and T1.
26. The method defined in claim 25, wherein the request is received from a requesting function, the method further comprising returning the received value of the texture element to the requesting function.
27. The method defined in claim 25 or 26, wherein the sub-texture is a sub-image obtained from an original image through segmentation.
28. The method defined in any of claims 25 to 27, wherein the original image is a non-circular panoramic image.
29. A non-transitory computer-readable medium comprising instructions which, when executed by a GPU, cause the GPU to carry out a method that comprises:
- receiving an image whose pixels span a range along each of two orthogonal axes;
- segmenting the image into indexed sub-images;
- storing the sub-images as texture maps;
- responding to a request for a value of a texture element having S1 , T1 and H coordinates by returning the value of the texture element of the sub-image indexed by H whose X and Y coordinates are S1 and T1.
30. A non-transitory computer-readable medium comprising instructions which, when executed by a CPU, cause the CPU to carry out a method that comprises:
- receiving a request for a value of a texture element having S and T coordinates;
- transforming the S and T coordinates into S1 , T1 and H coordinates, where S1 and T1 denote X and Y coordinates within a sub-texture indexed by H;
- sending a request to a GPU for a value of a texture element having S1 , T1 and H coordinates;
- receiving from the GPU the value of the texture element of the sub-texture indexed by H whose X and Y coordinates are S1 and T1.
31. A method of displaying a 2-D panoramic image in a viewing window, comprising:
- obtaining 2-D coordinates of an element of the viewing window;
- transforming the 2-D coordinates into a 3-D vector;
- rotating the vector;
- mapping the 3-D coordinates to 2-D coordinates of a panoramic image; and
- obtaining color information of the panoramic image at the 2-D coordinates.
32. The method defined in claim 31 , wherein the transforming comprises applying a projection of the 2-D coordinates onto a virtual 3-D shape.
33. The method defined in claim 31 or 32, wherein the virtual 3-D shape includes a cylinder.
34. The method defined in any of claims 31 to 33, further comprising normalizing the 3-D vector between the transforming and mapping steps.
35. The method defined in any of claims 31 to 34, wherein normalizing the vector comprises projecting the vector to the surface of the unit sphere.
36. The method defined in any of claims 31 to 35, further comprising obtaining a desired orientation of the viewing window and rotating the vector in accordance with the desired orientation.
37. The method defined in any of claims 31 to 36, wherein the panoramic image is non-circular.
38. The method defined in any of claims 31 to 37, wherein the panoramic image is captured by a camera.
39. The method defined in any of claims 31 to 36, wherein the panoramic image is circular.
40. The method defined in any of claims 31 to 39, wherein the steps are repeated for multiple elements in the viewing window.
41. A non-transitory computer-readable medium comprising instructions which, when executed by a computing apparatus, cause the computing apparatus to carry out a method that comprises:
- obtaining 2-D coordinates of an element of the viewing window;
- transforming the 2-D coordinates into a 3-D vector;
- rotating the vector;
- mapping the 3-D coordinates to 2-D coordinates of a panoramic image; and
- obtaining color information of the panoramic image at the 2-D coordinates.
PCT/CA2013/050720 2012-09-21 2013-09-20 Methods and apparatus for displaying and manipulating a panoramic image by tiles WO2014043814A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201261704060P 2012-09-21 2012-09-21
US201261704082P 2012-09-21 2012-09-21
US201261704088P 2012-09-21 2012-09-21
US61/704,060 2012-09-21
US61/704,082 2012-09-21
US61/704,088 2012-09-21

Publications (1)

Publication Number Publication Date
WO2014043814A1 true WO2014043814A1 (en) 2014-03-27

Family

ID=50340496

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2013/050720 WO2014043814A1 (en) 2012-09-21 2013-09-20 Methods and apparatus for displaying and manipulating a panoramic image by tiles

Country Status (1)

Country Link
WO (1) WO2014043814A1 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827978A (en) * 2016-04-28 2016-08-03 努比亚技术有限公司 Semispherical panorama photographing method, apparatus and terminal
CN107146274A (en) * 2017-05-05 2017-09-08 上海兆芯集成电路有限公司 Image data processing system, texture mapping compression and the method for producing panoramic video
WO2017176345A1 (en) * 2016-04-05 2017-10-12 Qualcomm Incorporated Dual fisheye image stitching for spherical video
WO2017176346A1 (en) * 2016-04-05 2017-10-12 Qualcomm Incorporated Dual fisheye image stitching for spherical image content
WO2018215502A1 (en) * 2017-05-23 2018-11-29 Koninklijke Kpn N.V. Coordinate mapping for rendering panoramic scene
EP3416372A4 (en) * 2016-02-12 2019-03-13 Samsung Electronics Co., Ltd. Method and apparatus for processing 360-degree image
CN109547766A (en) * 2017-08-03 2019-03-29 杭州海康威视数字技术股份有限公司 A kind of panorama image generation method and device
CN110348138A (en) * 2019-07-15 2019-10-18 辽宁瑞华实业集团高新科技有限公司 A kind of real-time method, apparatus and storage medium for generating true underworkings model
WO2020018135A1 (en) * 2018-07-20 2020-01-23 Facebook, Inc. Rendering 360 depth content
WO2020018134A1 (en) * 2018-07-19 2020-01-23 Facebook, Inc. Rendering 360 depth content
CN110832877A (en) * 2017-07-10 2020-02-21 高通股份有限公司 Enhanced high-order signaling for fisheye metaverse video in DASH
CN111145085A (en) * 2019-12-26 2020-05-12 上海霁目信息科技有限公司 Method of sorting fragments and method, system, apparatus and medium for model rasterization
CN111461125A (en) * 2020-03-19 2020-07-28 杭州凌像科技有限公司 Continuous segmentation method of panoramic image
CN112308766A (en) * 2020-10-19 2021-02-02 武汉中科通达高新技术股份有限公司 Image data display method and device, electronic equipment and storage medium
CN112330785A (en) * 2020-11-02 2021-02-05 通号通信信息集团有限公司 Image-based urban road and underground pipe gallery panoramic image acquisition method and system
CN113593052A (en) * 2021-08-06 2021-11-02 北京房江湖科技有限公司 Scene orientation determining method and marking method
CN113593046A (en) * 2021-06-22 2021-11-02 北京百度网讯科技有限公司 Panorama switching method and device, electronic equipment and storage medium
CN115100813A (en) * 2022-06-07 2022-09-23 慧之安信息技术股份有限公司 Intelligent community system based on digital twins

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5446833A (en) * 1992-05-08 1995-08-29 Apple Computer, Inc. Textured sphere and spherical environment map rendering using texture map double indirection
US6192393B1 (en) * 1998-04-07 2001-02-20 Mgi Software Corporation Method and system for panorama viewing
US20030095131A1 (en) * 2001-11-08 2003-05-22 Michael Rondinelli Method and apparatus for processing photographic images
US20060023105A1 (en) * 2003-07-03 2006-02-02 Kostrzewski Andrew A Panoramic video system with real-time distortion-free imaging

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Shader From Wikipedia. the free encyclopedia", WIKIPEDIA, THE FREE ENCYCLOPEDIA, 26 February 2010 (2010-02-26), Retrieved from the Internet <URL:http://web.archive.ors/web/20100226071054/http://en.wikipedia.oi/wiki/Shader> [retrieved on 20140117] *
TRAPP ET AL.: "A Generalization Approach for 3D Viewing Deformations of Single-Center Projections", GRAPP 2008 - INTERNATIONAL CONFERENCE ON COMPUTER GRAPHICS THEORY AND APPLICATIONS, 1 January 2008 (2008-01-01), pages 162 - 170, Retrieved from the Internet <URL:http://www.hpi.uni-potsdam.de/fileadmin/hpi/FG_Doellner/publications/2008/TD08/NonPlanarProjection.pdf> [retrieved on 20140117] *

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3720114A1 (en) * 2016-02-12 2020-10-07 Samsung Electronics Co., Ltd. Method and apparatus for processing 360-degree image
US10992918B2 (en) 2016-02-12 2021-04-27 Samsung Electronics Co., Ltd. Method and apparatus for processing 360-degree image
US11490065B2 (en) 2016-02-12 2022-11-01 Samsung Electronics Co., Ltd. Method and apparatus for processing 360-degree image
EP3416372A4 (en) * 2016-02-12 2019-03-13 Samsung Electronics Co., Ltd. Method and apparatus for processing 360-degree image
US10102610B2 (en) 2016-04-05 2018-10-16 Qualcomm Incorporated Dual fisheye images stitching for spherical video
WO2017176345A1 (en) * 2016-04-05 2017-10-12 Qualcomm Incorporated Dual fisheye image stitching for spherical video
US10275928B2 (en) 2016-04-05 2019-04-30 Qualcomm Incorporated Dual fisheye image stitching for spherical image content
CN109074627A (en) * 2016-04-05 2018-12-21 高通股份有限公司 The fish eye images of spherical video splice
WO2017176346A1 (en) * 2016-04-05 2017-10-12 Qualcomm Incorporated Dual fisheye image stitching for spherical image content
CN105827978A (en) * 2016-04-28 2016-08-03 努比亚技术有限公司 Semispherical panorama photographing method, apparatus and terminal
US20180322685A1 (en) * 2017-05-05 2018-11-08 Via Alliance Semiconductor Co., Ltd. Methods of compressing a texture image and image data processing system and methods of generating a 360 degree panoramic video thereof
CN107146274A (en) * 2017-05-05 2017-09-08 上海兆芯集成电路有限公司 Image data processing system, texture mapping compression and the method for producing panoramic video
CN110663068B (en) * 2017-05-23 2024-02-02 皇家Kpn公司 Coordinate mapping for rendering panoramic scenes
WO2018215502A1 (en) * 2017-05-23 2018-11-29 Koninklijke Kpn N.V. Coordinate mapping for rendering panoramic scene
CN110663068A (en) * 2017-05-23 2020-01-07 皇家Kpn公司 Coordinate mapping for rendering panoramic scenes
US11182875B2 (en) 2017-05-23 2021-11-23 Koninklijke Kpn N.V. Coordinate mapping for rendering panoramic scene
CN110832877A (en) * 2017-07-10 2020-02-21 高通股份有限公司 Enhanced high-order signaling for fisheye metaverse video in DASH
CN110832877B (en) * 2017-07-10 2020-10-23 高通股份有限公司 Enhanced high-order signaling for fisheye metaverse video in DASH
US11012620B2 (en) 2017-08-03 2021-05-18 Hangzhou Hikvision Digital Technology Co., Ltd. Panoramic image generation method and device
CN109547766B (en) * 2017-08-03 2020-08-14 杭州海康威视数字技术股份有限公司 Panoramic image generation method and device
CN109547766A (en) * 2017-08-03 2019-03-29 杭州海康威视数字技术股份有限公司 A kind of panorama image generation method and device
US10652514B2 (en) 2018-07-19 2020-05-12 Facebook, Inc. Rendering 360 depth content
US20200029063A1 (en) * 2018-07-19 2020-01-23 Facebook, Inc. Rendering 360 depth content
WO2020018134A1 (en) * 2018-07-19 2020-01-23 Facebook, Inc. Rendering 360 depth content
US10733786B2 (en) 2018-07-20 2020-08-04 Facebook, Inc. Rendering 360 depth content
WO2020018135A1 (en) * 2018-07-20 2020-01-23 Facebook, Inc. Rendering 360 depth content
CN110348138B (en) * 2019-07-15 2023-04-18 北京瑞华高科技术有限责任公司 Method and device for generating real underground roadway model in real time and storage medium
CN110348138A (en) * 2019-07-15 2019-10-18 辽宁瑞华实业集团高新科技有限公司 A kind of real-time method, apparatus and storage medium for generating true underworkings model
CN111145085A (en) * 2019-12-26 2020-05-12 上海霁目信息科技有限公司 Method of sorting fragments and method, system, apparatus and medium for model rasterization
CN111145085B (en) * 2019-12-26 2023-09-22 上海杰图天下网络科技有限公司 Method for sorting primitives and method, system, device and medium for model rasterization
CN111461125A (en) * 2020-03-19 2020-07-28 杭州凌像科技有限公司 Continuous segmentation method of panoramic image
CN111461125B (en) * 2020-03-19 2022-09-20 杭州凌像科技有限公司 Continuous segmentation method of panoramic image
CN112308766A (en) * 2020-10-19 2021-02-02 武汉中科通达高新技术股份有限公司 Image data display method and device, electronic equipment and storage medium
CN112308766B (en) * 2020-10-19 2023-11-24 武汉中科通达高新技术股份有限公司 Image data display method and device, electronic equipment and storage medium
CN112330785A (en) * 2020-11-02 2021-02-05 通号通信信息集团有限公司 Image-based urban road and underground pipe gallery panoramic image acquisition method and system
CN113593046A (en) * 2021-06-22 2021-11-02 北京百度网讯科技有限公司 Panorama switching method and device, electronic equipment and storage medium
CN113593046B (en) * 2021-06-22 2024-03-01 北京百度网讯科技有限公司 Panorama switching method and device, electronic equipment and storage medium
CN113593052A (en) * 2021-08-06 2021-11-02 北京房江湖科技有限公司 Scene orientation determining method and marking method
CN115100813A (en) * 2022-06-07 2022-09-23 慧之安信息技术股份有限公司 Intelligent community system based on digital twins

Similar Documents

Publication Publication Date Title
WO2014043814A1 (en) Methods and apparatus for displaying and manipulating a panoramic image by tiles
CN113382168B (en) Apparatus and method for storing overlapping regions of imaging data to produce an optimized stitched image
AU2017246470B2 (en) Generating intermediate views using optical flow
US8803918B2 (en) Methods and apparatus for calibrating focused plenoptic camera data
US7336299B2 (en) Panoramic video system with real-time distortion-free imaging
TWI387936B (en) A video conversion device, a recorded recording medium, a semiconductor integrated circuit, a fish-eye monitoring system, and an image conversion method
RU2686591C1 (en) Image generation device and image display control device
US20140085295A1 (en) Direct environmental mapping method and system
WO2017116952A1 (en) Viewport independent image coding and rendering
US11189043B2 (en) Image reconstruction for virtual 3D
WO2007139067A1 (en) Image high-resolution upgrading device, image high-resolution upgrading method, image high-resolution upgrading program and image high-resolution upgrading system
CN106558017B (en) Spherical display image processing method and system
US20140169699A1 (en) Panoramic image viewer
CN113643414B (en) Three-dimensional image generation method and device, electronic equipment and storage medium
US9299127B2 (en) Splitting of elliptical images
WO2020151268A1 (en) Generation method for 3d asteroid dynamic map and portable terminal
US11270413B2 (en) Playback apparatus and method, and generation apparatus and method
US20230215047A1 (en) Free-viewpoint method and system
JP2011521372A (en) Shape invariant affine recognition method and device
US11922568B2 (en) Finite aperture omni-directional stereo light transport
US20220222842A1 (en) Image reconstruction for virtual 3d
CN111726594A (en) Implementation method for efficient optimization rendering and pose anti-distortion fusion
KR102442089B1 (en) Image processing apparatus and method for image processing thereof
JP7150460B2 (en) Image processing device and image processing method
CN111726566A (en) Implementation method for correcting splicing anti-shake in real time

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13840035

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13840035

Country of ref document: EP

Kind code of ref document: A1