US20010056574A1 - VTV system - Google Patents
- Publication number
- US20010056574A1 (application US 09/891,733)
- Authority
- US
- United States
- Prior art keywords
- electronic device
- video
- image
- panoramic
- hmd
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/002—Special television systems not provided for by H04N7/007 - H04N7/18
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/173—Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
- H04N7/17309—Transmission or handling of upstream communications
- H04N7/17318—Direct or substantially direct transmission and handling of requests
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/82—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
- H04N9/8205—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2215/00—Indexing scheme for image rendering
- G06T2215/08—Gnomonic or central projection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/426—Internal components of the client ; Characteristics thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/641—Multi-purpose receivers, e.g. for auxiliary information
Definitions
- the following patent relates to an overall hardware configuration that produces an enhanced spatial television-like viewing experience. Unlike normal television, with this system the viewer is able to control both the viewing direction and relative position of the viewer with respect to the movie action. In addition to a specific hardware configuration, this patent also relates to a new video format which makes possible this virtual reality like experience. Additionally, several proprietary video compression standards are also defined which facilitate this goal.
- the VTV system is designed to be an intermediary technology between conventional two-dimensional cinematography and true virtual reality.
- the overall VTV system consists of a central graphics processing device (the VTV processor), a range of video input devices (DVD, VCR, satellite, terrestrial television, remote video cameras), infrared remote control, digital network connection and several output device connections.
- in its most basic configuration, as shown in FIG. 2, the VTV unit would output imagery to a conventional television device.
- a remote control device (possibly infrared) would be used to control the desired viewing direction and position of the viewer.
- the advantage of this “basic system configuration” is that it is implementable utilizing current audiovisual technology.
- the VTV graphics standard is a forwards compatible graphics standard which can be thought of as a “layer” above that of standard video.
- conventional video represents a subset of the new VTV graphics standard.
- VTV can be introduced without requiring any major changes in the television and/or audiovisual manufacturers' specifications.
- VTV compatible television decoding units will inherently be compatible with conventional television transmissions.
- the VTV system uses a wireless HMD as the display device.
- the wireless HMD can be used as a tracking device in addition to simply displaying images.
- This tracking information in the most basic form could consist of simply controlling the direction of view.
- both direction of view and position of the viewer within the virtual environment can be determined.
- remote cameras on the HMD will provide real world images to the VTV system, which it will interpret into spatial objects; the spatial objects can then be replaced with virtual objects, thus providing an “environment aware” augmented reality system.
- the wireless HMD is connected to the VTV processor by a wireless data link, the “Cybernet link”.
- this link is capable of transmitting video information from the VTV processor to the HMD and transmitting tracking information from the HMD to the VTV processor.
- the cybernet link would transmit video information both to and from the HMD in addition to transferring tracking information from the HMD to the VTV processor. Additionally, certain components of the VTV processor may be incorporated in the remote HMD, thus reducing the data transfer requirement through the cybernet link.
- This wireless data link can be implemented in a number of different ways utilizing either analog or digital video transmission (in either an un-compressed or a digitally compressed format) with a secondary digitally encoded data stream for tracking information.
- a purely digital unidirectional or bi-directional data link which carries both of these channels could be incorporated.
- the actual medium for data transfer would probably be microwave or optical. However, either transfer medium may be utilized as appropriate.
- the preferred embodiment of this system is one which utilizes on-board panoramic cameras fitted to the HMD in conjunction with image analysis hardware on board the HMD or possibly on the VTV base station to provide real-time tracking information.
- retroflective markers may also be utilized in the “real world environment”. In such a configuration, switchable light sources placed near the optical axis of the on-board cameras would be utilized in conjunction with these cameras to form a “differential image analysis” system.
- Such a system features considerably higher recognition accuracy than one utilizing direct video images alone.
- the VTV system will transfer graphic information utilizing a “universal graphics standard”.
- a “universal graphics standard” will incorporate an object based graphics description language which achieves a high degree of compression by virtue of a “common graphics knowledge base” between subsystems.
- This patent describes in basic terms three levels of progressive sophistication in the evolution of this graphics language.
- in its most basic format the VTV system can be thought of as a 360 degree panoramic display screen which surrounds the viewer.
- This “virtual display screen” consists of a number of “video Pages”. Encoded in the video image is a “Page key code” which instructs the VTV processor to place the graphic information into specific locations within this “virtual display screen”.
- the VTV graphics standard consists of a virtual 360 degree panoramic display screen upon which video images can be rendered from an external video source such as VCR, DVD, satellite, camera or terrestrial television receiver such that each video frame contains not only the video information but also information that defines its location within the virtual display screen.
- Such a system is remarkably versatile as it provides not only variable resolution images but also frame rate independent imagery. That is to say, the actual update rate within a particular virtual image (entire virtual display screen) may vary within the display screen itself. This is inherently accomplished by virtue of each frame containing its virtual location information. This allows active regions of the virtual image to be updated quickly at the nominal perceptual cost of not updating sections of the image which have little or no change.
- Such a system is shown in FIG. 4.
- the basic VTV system can be enhanced to the format shown in FIG. 5.
- the cylindrical virtual display screen is interpreted by the VTV processor as a truncated sphere. This effect can be easily generated through the use of a geometry translator or “Warp Engine” within the digital processing hardware component of the VTV processor.
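The cylinder-to-truncated-sphere interpretation performed by the “Warp Engine” can be illustrated with a small sketch. The panorama dimensions and the tan() warp are illustrative assumptions, and the +/-45 degree vertical half-angle is taken from the prototype figure quoted later in this document; none of this is the patent's actual geometry pipeline:

```python
# Hedged sketch: sampling a cylindrical panorama as if it were a
# truncated sphere. CYL_W/CYL_H and the tan() warp are assumptions.
import math

CYL_W, CYL_H = 4096, 1024        # assumed stored panorama size
V_FOV = math.radians(45.0)       # vertical half-angle of the truncated sphere

def sphere_to_cylinder(azimuth: float, elevation: float):
    """Map a view ray (azimuth, elevation in radians) to (x, y) texel
    coordinates in the cylindrical source image, or None when the ray
    falls outside the truncated sphere (the exception case)."""
    if abs(elevation) > V_FOV:
        return None
    x = (azimuth % (2 * math.pi)) / (2 * math.pi) * CYL_W
    # On a sphere, equal elevation steps compress toward the poles; a
    # simple tan() warp maps spherical elevation onto cylinder height.
    y = CYL_H / 2 * (1 + math.tan(elevation) / math.tan(V_FOV))
    return x, y
```

The mapping degrades gracefully: at zero elevation it reduces to a plain cylindrical lookup, and rays steeper than the stored band report the out-of-bounds condition that the exception-handling modes described later must cover.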
- in addition to 360 degree panoramic video, the VTV standard also supports either 4 track (quadraphonic) or 8 track (octaphonic) spatial audio.
- a virtual representation of the 4 track system is shown in FIG. 6.
- in the 4 track system, sound through the left and right speakers of the sound system (or headphones, in the case of an HMD based system) is scaled according to the azimuth of the view port's direction of view within the VR environment.
- in the 8 track audio system, sound through the left and right speakers of the sound system is scaled according to both the azimuth and elevation of the view port, as shown in the virtual representation of the system, FIG. 7.
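The view-port-dependent scaling of the multi-track audio can be sketched as follows. The four track azimuths (front, right, back, left, per the virtual representation of FIG. 6) and the cosine panning law are assumptions chosen for illustration; the patent does not specify the scaling function:

```python
# Illustrative sketch of view-port-dependent mixing of the 4 track
# spatial audio into a stereo pair. Track placement and the cosine
# panning law are assumptions, not taken from the patent.
import math

# Assumed: tracks recorded at azimuths 0, 90, 180, 270 degrees.
TRACK_AZ = [0.0, 90.0, 180.0, 270.0]

def mix_4track(tracks, view_azimuth_deg):
    """Scale each track by its angular proximity to the listener's left
    and right ears (at +/-90 degrees off the view axis) and sum into
    (left, right) output samples."""
    left = right = 0.0
    for az, sample in zip(TRACK_AZ, tracks):
        rel = math.radians(az - view_azimuth_deg)
        right += sample * max(0.0, math.cos(rel - math.pi / 2))
        left += sample * max(0.0, math.cos(rel + math.pi / 2))
    return left, right
```

An 8 track version would extend the same idea with a second ring of tracks and an elevation-dependent gain term, matching the azimuth-plus-elevation scaling described for FIG. 7.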
- the VTV standard encodes the multi-track audio channels as part of the video information in a digital/analogue hybrid format as shown in FIG. 12.
- each audio scan line contains 512 audio samples.
- each audio scan line also contains a three bit digital code that is used to “pre-scale” the audio information. That is to say, the actual audio sample value is X*S, where X is the pre-scale number and S is the sample value.
- the dynamic range of the audio system can be extended from about 43 dB to over 60 dB.
- this extending of the dynamic range is done at relatively “low cost” to the audio quality because we are relatively insensitive to audio distortion when the overall signal level is high.
- the start bit is an important component in the system. Its function is to set the maximum level for the scan line (i.e. the 100% or white level). This level, in conjunction with the black level (which can be sampled just after the colour burst), forms the 0% and 100% range for each line.
- as a result, the system becomes much less sensitive to variations in black level due to AC coupling of video sub-modules and/or recording and playback of the video media, in addition to improving the accuracy of the decoding of the digital component of the scan line.
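The per-line level reconstruction described above amounts to rescaling each scan line against its own two measured references, which might be sketched as:

```python
# Sketch of per-line level normalization: the start bit peak supplies
# the 100% (white) reference and the post-colour-burst sample the 0%
# (black) reference, so each line is decoded relative to its own
# measured range. Names and value ranges are illustrative.
def normalize_line(samples, black_level, white_level):
    """Rescale raw ADC samples to the 0.0-1.0 range defined by this
    line's measured black and white levels, making the decode immune to
    slow black-level drift from AC coupling or record/playback."""
    span = white_level - black_level
    if span <= 0:
        raise ValueError("invalid reference levels")
    return [(s - black_level) / span for s in samples]
```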
- an audio control bit is included in each field (at line 21). When set, this control bit resets the audio buffer sequence to 0. This provides a way to synchronize the 4 or 8 track audio information so that the correct track is always being updated from the current data regardless of the sequence of the video Page updates.
- this spatial audio system/standard could also be used in audio only mode by the combination of a suitable compact tracking device and a set of cordless headphones to realize a spatial-audio system for advanced hi-fi equipment.
- the first two standards relate to the definitions of spatial graphics objects, whereas the third graphics standard relates to a complete VR environment definition language which utilizes the first two standards as a subset and incorporates additional environment definitions and control algorithms.
- the VTV graphic standard (in its basic form) can be thought of as a control layer above that of the conventional video standard (NTSC, PAL etc.). As such, it is not limited purely to conventional analog video transmission standards. Using basically identical techniques, the VTV standard can operate with the HDTV standard as well as many of the computer graphic and industry audiovisual standards.
- the VTV graphics processor is the heart of the VTV system.
- this module is responsible for the real-time generation of the graphics which is output to the display device (either conventional TV/HDTV or HMD).
- video input is provided by a media device such as a VCR, DVD, satellite, camera or terrestrial television receiver.
- More sophisticated versions of this module may real-time render graphics from a “universal graphics language” passed to it via the Internet or other network connection.
- the VTV processor can also perform image analysis. Early versions of this system will use this image analysis function for the purpose of determining tracking coordinates of the HMD.
- More sophisticated versions of this module will in addition to providing this tracking information, also interpret the real world images from the HMD as physical three-dimensional objects. These three-dimensional objects will be defined in the universal graphics language which can then be recorded or communicated to similar remote display devices via the Internet or other network or alternatively be replaced by other virtual objects of similar physical size thus creating a true augmented reality experience.
- VTV hardware itself consists of a group of sub modules as follows:
- VRM (Virtual Reality Memory)
- Video information is digitized and placed in the augmented reality memory on a field by field basis assuming an absolute Page reference of 0 degree azimuth, 0 degree elevation, with the origin of each Page being determined by the state of the Page number bits (P3-P0).
- Auxiliary video information for background and/or floor/ceiling maps is loaded into the virtual reality memory on a field by field basis dependent upon the state of the “field type” bits (F3-F0) and Page number bits (P3-P0).
- the digital processing hardware interprets this information held in augmented reality and virtual reality memory and, utilizing a combination of a geometry processing engine (Warp Engine), digital subtractive image processing and a new versatile form of “blue-screening”, translates and selectively combines this data into an image substantially similar to that which would be seen by the viewer if they were standing in the same location as the panoramic camera when the video material was filmed.
- VTV processor mode is determined by additional control information present in the source media and thus the processing and display modes can change dynamically while displaying a source of VTV media.
- the video generation module then generates a single or pair of video images for display on a conventional television or HMD display device.
- although the VTV image field will be updated at less than full frame rates (unless multi-spin DVD devices are used as the image media), graphics rendering will still occur at full video frame rates, as will the updates of the spatial audio. This is possible because each “Image Sphere” contains all of the required information for both video and audio for any viewer orientation (azimuth and elevation).
- ADC-0 would generally be used for live panoramic video feeds and ADC-2 would generally be used for virtual reality video feeds from pre-rendered video material
- both video input stages have full access to both augmented reality and virtual reality memory (i.e. they use a memory pool).
- This hardware configuration allows for more versatility in the design and allows several unusual display modes (which will be covered in more detail in later sections).
- the video output stages (DAC-0 and DAC-1) have total access to both virtual and augmented reality memory.
- the memory pool style of design means that the system can function with either one or two input and/or output stages (although with reduced capabilities) and as such the presence of either one or two input or output stages in a particular implementation should not limit the generality of the specification.
- the digital processing hardware would take the form of one or more field programmable logic arrays or custom ASIC.
- the advantage of using field programmable logic arrays is that the hardware can be updated at anytime.
- the main disadvantage of this technology is that it is not quite as fast as an ASIC.
- high-speed conventional digital processors may also be utilized to perform this image analysis and/or graphics generation task.
- the VTV base station hardware would act only as a link between the HMD and the Internet or other network, with all graphics image generation, image analysis and spatial object recognition occurring within the HMD itself.
- a VTV image frame consists of either a cylinder or a truncated sphere. This space subtends only a finite vertical angle to the viewer (+/-45 degrees in the prototype). This is an intentional limitation designed to make the most of the available data bandwidth of the video storage and transmission media and thus maintain compatibility with existing video systems. However, as a result of this compromise, there can exist a situation in which the view port exceeds the scope of the image data. There are several different ways in which this exception can be handled. The simplest is to make out-of-bounds video data black, which gives the appearance of being in a room with a black ceiling and floor.
- VRM (Virtual Reality Memory)
- the basic memory map for the system utilizing both augmented reality memory and virtual reality memory (in addition to translation memory) is shown in FIG. 8. As can be seen in this illustration, the translation memory area must have sufficient range to cover a full 360 degree * 180 degree field and ideally have the same angular resolution as that of the augmented reality memory bank (which covers 360 degree * 90 degree). With such a configuration, it is possible to provide both floor and ceiling exception handling and variable transparency imagery, such as looking through windows in the foreground and showing the background behind them.
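The angular addressing implied by this memory map can be sketched as a small function. Only the 360 x 90 degree versus 360 x 180 degree coverage comes from the text; the 0.5 degree angular resolution and the row/column layout are assumed values for illustration:

```python
# Hedged sketch of angular addressing into the augmented reality memory
# bank (360 x 90 degrees). Translation memory would use the same scheme
# with height=360 rows and elevation -90..90. Resolution is assumed.
DEG_PER_TEXEL = 0.5                  # assumed angular resolution

def ar_address(azimuth, elevation, width=720, height=180):
    """Map (azimuth in 0..360, elevation in -45..+45 degrees) to a
    (row, col) location in augmented reality memory; elevations outside
    the stored band raise the floor/ceiling exception case."""
    col = int((azimuth % 360.0) / DEG_PER_TEXEL) % width
    row = int((elevation + 45.0) / DEG_PER_TEXEL)
    if not 0 <= row < height:
        raise ValueError("elevation outside stored field")
    return row, col
```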
- the backgrounds can be either static or dynamic and can be updated in basically the same way as foreground (augmented reality memory) by utilizing a Paged format.
- the VTV system has two basic modes of operation. Within these two modes there also exist several sub modes.
- the two basic modes are as follows:
- in augmented reality mode 1, selective components of “real world imagery” are overlaid upon a virtual reality background.
- this process involves first removing all of the background components from the “real world” imagery. This can be easily done by using differential imaging techniques, i.e. by comparing current “real world” imagery against a stored copy taken previously and detecting differences between the two. After the two images have been correctly aligned, the regions that differ are new or foreground objects and those that remain the same are static background objects. This is the simplest of the augmented reality modes and is generally not sufficiently interesting, as most of the background will be removed in the process.
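The differential imaging step might be sketched as a simple thresholded difference between the aligned current image and the stored background; the threshold value is an assumed tuning parameter, not something the patent specifies:

```python
# Minimal sketch of the differential imaging step of augmented reality
# mode 1: keep only pixels where the aligned current panorama differs
# from the stored background. The threshold is an assumed parameter.
import numpy as np

def foreground_mask(current: np.ndarray, stored: np.ndarray,
                    threshold: int = 16) -> np.ndarray:
    """True where the (pre-aligned) current image differs from the
    stored background, i.e. where new or moving foreground objects
    appear; everything else is treated as static background."""
    diff = np.abs(current.astype(np.int16) - stored.astype(np.int16))
    return diff > threshold

stored = np.zeros((4, 4), dtype=np.uint8)
current = stored.copy()
current[1, 2] = 200            # a "new object" pixel
mask = foreground_mask(current, stored)
```

The signed 16-bit cast matters: subtracting unsigned 8-bit frames directly would wrap around and mis-detect dark foreground pixels against a bright background.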
- when operated in mobile Pan-Cam (telepresence) or augmented reality mode, the augmented reality memory will generally be updated in sequential Page order (i.e. in whole system frames) rather than by random Page updates. This is because constant variations in the position and orientation of the panoramic camera system during filming will probably cause mismatches in the image Pages if they are handled separately.
- Augmented reality mode 2 differs from mode 1 in that, in addition to automatically extracting foreground and moving objects and placing these in an artificial background environment, the system also utilizes the Warp Engine to “push” additional “real world” objects into the background. In addition to simply adding these “real world” objects into the virtual environment the Warp Engine is also capable of scaling and translating these objects so that they match into the virtual environment more effectively. These objects can be handled as opaque overlays or transparencies.
- Augmented reality mode 3 differs from mode 2 in that, in this case, the Warp Engine is used to “pull” background objects into the foreground to replace “real world” objects. As in mode 2, these objects can be translated and scaled and can be handled as either opaque overlays or transparencies. This gives the user the ability to “match” the physical size and position of a “real world” object with a virtual object. By doing so, the user is able to interact and navigate within the augmented reality environment as they would in the “real world” environment. This mode is probably the most likely to be utilized for entertainment and gaming purposes, as it would allow a Hollywood production to be brought into the user's own living room.
- Virtual reality mode is a functionally simpler mode than the previous augmented reality modes.
- “pre-filmed” or computer-generated graphics are loaded into augmented reality memory on a random Page by Page basis. This is possible because the virtual camera planes of reference are fixed.
- virtual reality memory is loaded with a fixed or dynamic background at a lower resolution. The use of both foreground and background image planes makes possible more sophisticated graphics techniques such as motion parallax.
- in the case of imagery collected by mobile panoramic camera systems, the images are first processed by a VTV encoder module. This device provides video distortion correction and also inserts video Page information, orientation tracking data and spatial audio into the video stream. This can be done without altering the video standard, thereby maintaining compatibility with existing recording and playback devices.
- although this module could be incorporated within the VTV processor, having it as a separate entity is advantageous for remote camera applications where the video information must ultimately be either stored or transmitted through some form of wireless network.
- tracking information must comprise part of the resultant video stream in order that an “absolute” azimuth and elevation coordinate system be maintained.
- this data is not required as the camera orientation is a theoretical construct known to the computer system at render time.
- the basic tracking system of the VTV HMD utilizes on-board panoramic video cameras to capture the required 360 degree visual information of the surrounding real world environment. This information is then analyzed by the VTV processor (whether it exists within the HMD or as a base station unit) utilizing computationally intensive yet relatively algorithmically simple techniques such as auto correlation. Examples of a possible algorithm are shown in FIGS. 13 - 19 .
- FIG. 20 shows a simplistic representation of the tracking hardware in which the auto correlators simply detect the presence or absence of a particular movement.
- a practical system would probably incorporate a number of auto correlators for each class of movement (for example, there may be 16 or more separate auto correlators to detect horizontal movement). Such a system would then be able to detect different levels or amounts of movement in all of the directions.
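The bank-of-correlators idea can be emulated in software: one correlator score per candidate shift, with the best-scoring shift giving both the direction and the amount of movement. One-dimensional rows stand in for full images here, and the shift range is an assumption:

```python
# Sketch of a correlator bank for movement detection: each candidate
# shift s plays the role of one auto correlator, and the winning score
# reports the inter-frame movement. 1-D data and the shift range are
# illustrative simplifications.
import numpy as np

def detect_shift(prev: np.ndarray, curr: np.ndarray, max_shift: int = 16):
    """Return the offset s (in pixels) at which prev, sampled at offset
    s, best correlates with the central window of curr. Content that
    moved right by k pixels between frames yields s = -k."""
    best_shift, best_score = 0, -np.inf
    n = len(prev)
    for s in range(-max_shift, max_shift + 1):
        a = prev[max_shift + s: n - max_shift + s]
        b = curr[max_shift: n - max_shift]
        score = float(np.dot(a, b))      # one correlator's output
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift
```

A hardware version would evaluate all correlators in parallel rather than in a loop, which is why the patent describes the technique as computationally intensive yet algorithmically simple.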
- the use of absolute reference points allows such a system to re-calibrate its absolute references and thus achieve an overall absolute coordinate system.
- This absolute reference point calibration can be achieved relatively easily utilizing several different techniques. The first, and perhaps simplest, technique is to use color sensitive retroflective spots as previously described. Alternately, active optical beacons (such as LED beacons) could also be utilized.
- a further alternative absolute reference calibration system is based on a bi-directional infrared beacon. Such a system would communicate a unique ID code between the HMD and the beacon, such that calibration would occur only once each time the HMD passed under any of these “known spatial reference points”. This is required to avoid “dead tracking regions” within the vicinity of the calibration beacons due to multiple origin resets.
- the image can then be processed as a series of horizontal and vertical strips such that auto correlation regions are bounded between highlight points/edges. Additionally, small highlight regions can very easily be tracked by comparing previous image frames against current images and determining “closest possible fit” between the images (i.e. minimum movement of highlight points). Such techniques are relatively easy and well within the capabilities of most moderate speed micro-processors, provided some of the image pre-processing overhead is handled by hardware.
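The “closest possible fit” matching of highlight points might be sketched as a nearest-neighbour pairing between frames; the greedy strategy used here is an assumed simplification of the minimum-movement criterion described above:

```python
# Sketch of "closest possible fit" highlight tracking: pair each
# highlight from the previous frame with the nearest unclaimed
# highlight in the current frame. Greedy matching is an assumed
# simplification of the minimum-total-movement idea.
import math

def match_highlights(prev_pts, curr_pts):
    """Return (previous, current) point pairs approximating minimum
    inter-frame movement of the tracked highlight points."""
    remaining = list(curr_pts)
    pairs = []
    for p in prev_pts:
        if not remaining:
            break                # more previous points than current ones
        q = min(remaining, key=lambda c: math.dist(p, c))
        remaining.remove(q)
        pairs.append((p, q))
    return pairs
```

As the text notes, this per-point bookkeeping is light enough for a moderate speed microprocessor once the highlight extraction itself is done in hardware.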
Abstract
The following patent relates to an overall hardware configuration that produces an enhanced spatial television-like viewing experience. Unlike normal television, with this system the viewer is able to control both the viewing direction and relative position of the viewer with respect to the movie action. In addition to a specific hardware configuration, this patent also relates to a new video format which makes possible this virtual reality like experience.
Description
- This application claims priority of U.S. provisional patent No. 60/212,862 titled “VTV System” filed Jun. 26, 2000 by Angus Duncan Richards.
- 1.1) The following patent relates to an overall hardware configuration that produces an enhanced spatial television-like viewing experience. Unlike normal television, with this system the viewer is able to control both the viewing direction and relative position of the viewer with respect to the movie action. In addition to a specific hardware configuration, this patent also relates to a new video format which makes possible this virtual reality like experience. Additionally, several proprietary video compression standards are also defined which facilitate this goal. The VTV system is designed to be an intermediary technology between conventional two-dimensional cinematography and true virtual reality. There are several stages in the evolution of the VTV system, ranging from, in its most basic form, a panoramic display system to, in its most sophisticated form, a full object based virtual reality system utilizing animated texture maps and featuring live actors and/or computer-generated characters in a full “environment aware” augmented reality system.
- 1.2) As can be seen in FIG. 1, the overall VTV system consists of a central graphics processing device (the VTV processor), a range of video input devices (DVD, VCR, satellite, terrestrial television, remote video cameras), infrared remote control, digital network connection and several output device connections. In its most basic configuration, as shown in FIG. 2, the VTV unit would output imagery to a conventional television device. In such a configuration a remote control device (possibly infrared) would be used to control the desired viewing direction and position of the viewer within the VTV environment. The advantage of this “basic system configuration” is that it is implementable utilizing current audiovisual technology. The VTV graphics standard is a forwards compatible graphics standard which can be thought of as a “layer” above that of standard video. That is to say, conventional video represents a subset of the new VTV graphics standard. As a result of this standard's compatibility, VTV can be introduced without requiring any major changes in the television and/or audiovisual manufacturers' specifications. Additionally, VTV compatible television decoding units will inherently be compatible with conventional television transmissions.
- 1.3) In a more sophisticated configuration, as shown in FIG. 3, the VTV system uses a wireless HMD as the display device. In such a configuration the wireless HMD can be used as a tracking device in addition to simply displaying images. This tracking information in its most basic form could consist of simply controlling the direction of view. In a more sophisticated system, both direction of view and position of the viewer within the virtual environment can be determined. Ultimately, in the most sophisticated implementation, remote cameras on the HMD will provide real world images to the VTV system, which it will interpret into spatial objects; the spatial objects can then be replaced with virtual objects, thus providing an “environment aware” augmented reality system.
- 1.4) The wireless HMD is connected to the VTV processor by a wireless data link, the “Cybernet link”. In its most basic form this link is capable of transmitting video information from the VTV processor to the HMD and transmitting tracking information from the HMD to the VTV processor. In its most sophisticated form the cybernet link would transmit video information both to and from the HMD in addition to transferring tracking information from the HMD to the VTV processor. Additionally, certain components of the VTV processor may be incorporated in the remote HMD, thus reducing the data transfer requirement through the cybernet link. This wireless data link can be implemented in a number of different ways utilizing either analog or digital video transmission (in either an uncompressed or a digitally compressed format) with a secondary digitally encoded data stream for tracking information. Alternately, a purely digital unidirectional or bi-directional data link which carries both of these channels could be incorporated. The actual medium for data transfer would probably be microwave or optical. However, either transfer medium may be utilized as appropriate. The preferred embodiment of this system is one which utilizes on-board panoramic cameras fitted to the HMD in conjunction with image analysis hardware on board the HMD, or possibly on the VTV base station, to provide real-time tracking information. To further improve system accuracy, retroflective markers may also be utilized in the “real world environment”. In such a configuration, switchable light sources placed near the optical axis of the on-board cameras would be utilized in conjunction with these cameras to form a “differential image analysis” system. Such a system features considerably higher recognition accuracy than one utilizing direct video images alone.
- 1.5) Ultimately, the VTV system will transfer graphic information utilizing a “universal graphics standard”. Such a standard will incorporate an object based graphics description language which achieves a high degree of compression by virtue of a “common graphics knowledge base” between subsystems. This patent describes in basic terms three levels of progressive sophistication in the evolution of this graphics language.
- 1.6) These three compression standards will for the purpose of this patent be described as:
- a) c-com
- b) s-com
- c) v-com
- 1.7) In its most basic format the VTV system can be thought of as a 360 Degree panoramic display screen which surrounds the viewer.
- 1.8) This “virtual display screen” consists of a number of “video Pages”. Encoded in the video image is a “Page key code” which instructs the VTV processor to place the graphic information into specific locations within this virtual display screen. As a result of this ability to place images dynamically, it is possible to achieve the effective equivalent of both high resolution and high frame rates without significant sacrifice to either, because only sections of the image which are rapidly changing require rapid updates. Unlike conventional cinematography, in which the key elements (which are generally moving) are located in the primary scene, the majority of a panoramic image is generally static.
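The Page mechanism described above can be sketched in software. The following is a minimal illustration, assuming a flat row-major pixel buffer and an arbitrary page count and page size (the actual VTV encoding is carried inside the video signal, not modelled here):

```python
# A minimal sketch of "Page" placement: each incoming video field carries a
# page key code, and only that page's region of the panoramic buffer is
# rewritten. Page count, page size and the flat layout are assumptions.

PAGES = 8                # pages tiling the 360-degree horizontal span
PAGE_W, PAGE_H = 64, 48  # assumed per-page resolution

def blank_screen():
    """The whole 'virtual display screen' as one row-major pixel buffer."""
    return [[0] * (PAGES * PAGE_W) for _ in range(PAGE_H)]

def place_page(screen, page_index, page_pixels):
    """Copy one decoded page into its slot; untouched pages keep their
    previous contents, so static regions need no updates at all."""
    x0 = page_index * PAGE_W
    for y in range(PAGE_H):
        screen[y][x0:x0 + PAGE_W] = page_pixels[y]
    return screen
```

Because `place_page` touches only the addressed region, rapidly changing pages can be refreshed every field while static pages are simply left alone, which is the frame-rate independence the text describes.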
- 2.1) In its most basic form the VTV graphics standard consists of a virtual 360 degree panoramic display screen upon which video images can be rendered from an external video source such as a VCR, DVD, satellite, camera or terrestrial television receiver, such that each video frame contains not only the video information but also information that defines its location within the virtual display screen. Such a system is remarkably versatile, as it provides not only variable resolution images but also frame rate independent imagery. That is to say, the actual update rate within a particular virtual image (the entire virtual display screen) may vary within the display screen itself. This is inherently accomplished by virtue of each frame containing its virtual location information, which allows active regions of the virtual image to be updated quickly at the nominal perceptual cost of not updating sections of the image which have little or no change. Such a system is shown in FIG. 4.
- 2.2) To further improve the realism of the imagery, the basic VTV system can be enhanced to the format shown in FIG. 5. In this configuration the cylindrical virtual display screen is interpreted by the VTV processor as a truncated sphere. This effect can be easily generated through the use of a geometry translator or “Warp Engine” within the digital processing hardware component of the VTV processor.
- 2.3) Due to constant variation of absolute planes of reference, mobile camera applications (either HMD based or Pan-Cam based) require additional tracking information for azimuth and elevation of the camera system to be included with the visual information in order that the images can be correctly decoded by the VTV graphics engine. In such a system, absolute camera azimuth and elevation becomes part of the image frame information. There are several possible techniques for the interpretation of this absolute reference data. Firstly, the coordinate data could be used to define the origins of the image planes within the memory during the memory writing process. Unfortunately this approach will tend to result in remnant image fragments being left in memory from previous frames with different alignment values. A more practical solution is simply to write the video information into memory with an assumed reference point of 0 azimuth, 0 elevation. This video information is then correctly displayed by correcting the display viewport for the camera angular offsets. The data format for such a system is shown in FIG. 11.
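The second decoding technique above (writing frames at an assumed 0 azimuth, 0 elevation reference and correcting the viewport instead) can be sketched as follows; the additive sign convention for the offsets is an assumption:

```python
def corrected_viewport(view_az, view_el, cam_az, cam_el):
    """Combine the user's viewport angles with the camera orientation
    recorded with the frame, since frames are written into memory as if
    shot at 0 azimuth / 0 elevation. Azimuth wraps; elevation clamps."""
    az = (view_az + cam_az) % 360.0
    el = max(-90.0, min(90.0, view_el + cam_el))
    return az, el
```

This keeps the memory contents clean (no remnant fragments from earlier alignments) because the image data is never re-origined; only the read-out angles change.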
- 2.4) In addition to 360 degree panoramic video, the VTV standard also supports either 4 track (quadraphonic) or 8 track (octaphonic) spatial audio. A virtual representation of the 4 track system is shown in FIG. 6. In the case of the simple 4 track audio system, sound through the left and right speakers of the sound system (or headphones, in the case of an HMD based system) is scaled according to the azimuth of the viewport (direction of view within the VR environment). In the case of the 8 track audio system, sound through the left and right speakers is scaled according to both the azimuth and elevation of the viewport, as shown in the virtual representation of the system in FIG. 7.
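A sketch of azimuth-dependent scaling for one audio track follows. The patent only states that left/right levels are scaled according to viewport azimuth; the cosine attenuation and sine-based panning law used here are illustrative choices, not the standard's actual law:

```python
import math

def track_gains(view_az_deg, track_az_deg):
    """Stereo (L, R) gain for one audio track located at bearing
    track_az_deg, heard from a viewport facing view_az_deg (degrees,
    clockwise). Sources ahead are loudest, sources behind are silent,
    and left/right balance follows the sine of the relative bearing."""
    rel = math.radians((track_az_deg - view_az_deg) % 360.0)
    level = (1.0 + math.cos(rel)) / 2.0  # 1 ahead, 0 behind
    pan = (1.0 + math.sin(rel)) / 2.0    # 0 = hard left, 1 = hard right
    return level * (1.0 - pan), level * pan
```

Summing `track_gains` over the 4 (or 8) track bearings gives the full mix for any viewport direction; an 8 track system would extend `rel` to include elevation.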
- 2.5) In its most basic form, the VTV standard encodes the multi-track audio channels as part of the video information in a digital/analogue hybrid format as shown in FIG. 12.
- As a result, video compatibility with existing equipment can be achieved. As can be seen in this illustration, the audio data is stored in a compressed analogue coded format such that each video scan line contains 512 audio samples. In addition to this analogue coded audio information, each audio scan line contains a three bit digital code that is used to “pre-scale” the audio information. That is to say that the actual audio sample value is X*S, where X is the pre-scale number and S is the sample value. Using this dual-coding scheme, the dynamic range of the audio system can be extended from about 43 dB to over 60 dB. This extension of the dynamic range comes at relatively “low cost” to the audio quality, because we are relatively insensitive to audio distortion when the overall signal level is high. The start bit is an important component in the system. Its function is to set the maximum level for the scan line (i.e. the 100% or white level). This level, in conjunction with the black level (which can be sampled just after the colour burst), forms the 0% and 100% range for each line. By dynamically adjusting the 0% and 100% marks on a line by line basis, the system becomes much less sensitive to variations in black level due to AC-coupling of video sub modules and/or recording and playback of the video media, in addition to improving the accuracy of the decoding of the digital component of the scan line.
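The X*S dual coding can be illustrated numerically. With roughly 141 distinguishable analogue levels (about 43 dB) and a 3-bit pre-scale of up to 8, the range extends by 20*log10(8), roughly 18 dB, which matches the "about 43 dB to over 60 dB" figure. The level count and the 1..8 pre-scale range are assumptions about the analogue channel; real scan lines carry 512 samples, though the sketch accepts any length:

```python
def encode_audio_line(samples, levels=141):
    """Pick one 3-bit pre-scale X (1..8) per scan line so the analogue
    samples S fit the line's range, then store S = round(sample / X).
    The decoded value is X * S; loud lines trade fine resolution for
    headroom, which the ear tolerates well."""
    peak = max((abs(s) for s in samples), default=0)
    x = 1
    while x < 8 and peak > (levels - 1) * x:
        x += 1
    coded = [max(-(levels - 1), min(levels - 1, round(s / x))) for s in samples]
    return x, coded

def decode_audio_line(x, coded):
    """Reconstruct sample values from the pre-scale and coded samples."""
    return [x * s for s in coded]
```

Quiet lines decode exactly (X = 1), while loud lines decode to within X/2 of the original, concentrating quantization error where signal levels are high.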
- 2.6) In addition to this pre-scaling of the digital information, an audio control bit (AR) is included in each field (at line 21). When set, this control bit resets the audio buffer sequence to 0. This provides a way to synchronize the 4 or 8 track audio information so that the correct track is always being updated from the current data, regardless of the sequence of the video Page updates.
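The AR-bit resynchronization can be sketched as a small state machine; the strict round-robin track ordering is an assumption:

```python
class AudioTrackSequencer:
    """Tracks which audio buffer the next decoded audio line belongs to.
    The AR control bit (line 21 of a field) resets the sequence so that
    decoding stays aligned with the 4 or 8 track layout regardless of
    the order in which video Pages arrive."""
    def __init__(self, tracks=4):
        self.tracks = tracks
        self.index = 0

    def on_field(self, ar_bit):
        # AR bit set -> restart the track sequence at track 0
        if ar_bit:
            self.index = 0

    def next_track(self):
        track = self.index
        self.index = (self.index + 1) % self.tracks
        return track
```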
- 2.7) In more sophisticated multimedia data formats, such as computer AV files and digital television transmissions, these additional audio tracks could be stored in other ways which may be more efficient or otherwise advantageous.
- 2.8) It should be noted that, in addition to its use as an audiovisual device, this spatial audio system/standard could also be used in an audio only mode: the combination of a suitable compact tracking device and a set of cordless headphones would realize a spatial audio system for advanced hi-fi equipment.
- 2.9) In addition to this basic graphics standard, there are a number of enhancements which can be used alone or in conjunction with it. These three graphics standards will be described in detail in subsequent patents; for the purpose of this patent, they are known as:
- a) c-com
- b) s-com
- c) v-com
- 2.10) The first two standards relate to the definitions of spatial graphics objects, whereas the third relates to a complete VR environment definition language which utilizes the first two standards as a subset and incorporates additional environment definitions and control algorithms.
- 2.11) The VTV graphic standard (in its basic form) can be thought of as a control layer above that of the conventional video standard (NTSC, PAL etc.). As such, it is not limited purely to conventional analog video transmission standards. Using basically identical techniques, the VTV standard can operate with the HDTV standard as well as many of the computer graphic and industry audiovisual standards.
- 3.1) The VTV graphics processor is the heart of the VTV system. In its most basic form this module is responsible for the real-time generation of the graphics which are output to the display device (either a conventional TV/HDTV or an HMD), in addition to digitizing raw graphics information input from a video media provision device such as a VCR, DVD, satellite, camera or terrestrial television receiver. More sophisticated versions of this module may real-time render graphics from a “universal graphics language” passed to it via the Internet or another network connection. In addition to this digitizing and graphics rendering task, the VTV processor can also perform image analysis. Early versions of this system will use this image analysis function for the purpose of determining tracking coordinates of the HMD. More sophisticated versions of this module will, in addition to providing this tracking information, also interpret the real world images from the HMD as physical three-dimensional objects. These three-dimensional objects will be defined in the universal graphics language and can then be recorded or communicated to similar remote display devices via the Internet or another network, or alternatively be replaced by other virtual objects of similar physical size, thus creating a true augmented reality experience.
- 3.2) The VTV hardware itself consists of a group of sub modules as follows:
- a) video digitizing module
- b) Augmented Reality Memory (ARM)
- c) Virtual Reality Memory (VRM)
- d) Translation Memory (TM)
- e) digital processing hardware
- f) video generation module
- 3.3) The exact configuration of these modules is dependent upon other external hardware. For example, if digital video sources are used then the video digitizing module becomes relatively trivial and may consist of no more than a group of latches or a FIFO buffer. However, if composite or Y/C video inputs are utilized then additional hardware is required to convert these signals into digital format. Additionally, if a digital HDTV signal is used as the video input source then an HDTV decoder is required as the front end of the system (as HDTV signals cannot be processed in compressed format).
- 3.4) In the case of a field based video system such as analogue TV, the basic operation of the VTV graphics engine is as follows:
- a) Video information is digitized and placed in the augmented reality memory on a field by field basis assuming an absolute Page reference of 0 degree azimuth, 0 degree elevation with the origin of each Page being determined by the state of the Page number bits (P3-P0).
- b) Auxiliary video information for background and/or floor/ceiling maps is loaded into the virtual reality memory on a field by field basis dependent upon the state of the “field type” bits (F3-F0) and Page number bits (P3-P0).
- c) The digital processing hardware interprets this information held in augmented reality and virtual reality memory and utilizing a combination of a geometry processing engine (Warp Engine), digital subtractive image processing and a new versatile form of “blue-screening”, translates and selectively combines this data into an image substantially similar to that which would be seen by the viewer if they were standing in the same location as that of the panoramic camera when the video material was filmed. The main differences between this image and that available utilizing conventional video techniques being that it is not only 360 degree panoramic but also has the ability to have elements of both virtual reality and “real world” imagery melded together to form a complex immersive augmented reality experience.
- d) The exact way in which the virtual reality and “real world imagery” is combined depends upon the mode that the VTV processor is operating in and is discussed in more detail in later sections of this specification. The particular VTV processor mode is determined by additional control information present in the source media and thus the processing and display modes can change dynamically while displaying a source of VTV media.
- e) The video generation module then generates a single or pair of video images for display on a conventional television or HMD display device. Although the VTV image field will be updated at less than full frame rates (unless multi-spin DVD devices are used as the image media) graphics rendering will still occur at full video frame rates, as will the updates of the spatial audio. This is possible because each “Image Sphere” contains all of the required information for both video and audio for any viewer orientation (azimuth and elevation).
- 3.5) As can be seen in FIG. 9, the memory write side of the VTV processor has two separate video input stages (ADCs). It should be noted that although ADC-0 would generally be used for live panoramic video feeds and ADC-2 would generally be used for virtual reality video feeds from pre-rendered video material, both video input stages have full access to both augmented reality and virtual reality memory (i.e. they use a memory pool). This hardware configuration allows for more versatility in the design and allows several unusual display modes (which will be covered in more detail in later sections). Similarly, the video output stages (DAC-0 and DAC-1) have total access to both virtual and augmented reality memory.
- 3.6) Although having two input and two output stages improves the versatility of the design, the memory pool style of design means that the system can function with either one or two input and/or output stages (although with reduced capabilities) and as such the presence of either one or two input or output stages in a particular implementation should not limit the generality of the specification.
- 3.7) For ease of design, high-speed static RAM was utilized as the video memory in the prototype device. However, other memory technologies may be utilized without limiting the generality of the design specification.
- 3.8) In the preferred embodiment, the digital processing hardware would take the form of one or more field programmable logic arrays or custom ASIC. The advantage of using field programmable logic arrays is that the hardware can be updated at anytime. The main disadvantage of this technology is that it is not quite as fast as an ASIC. Alternatively, high-speed conventional digital processors may also be utilized to perform this image analysis and/or graphics generation task.
- 3.9) As previously described, certain sections of this hardware may be incorporated in the HMD, possibly even to the point at which the entire VTV hardware exists within the portable HMD device. In such a case the VTV base station hardware would act only as a link between the HMD and the Internet or other network, with all graphics image generation, image analysis and spatial object recognition occurring within the HMD itself.
- 3.10) Note: The low order bits of the viewport address generator are run through a look-up table address translator for the X and Y image axes, which imposes barrel distortion on the generated images. This provides the correct image distortion for the current field of view of the viewport. This hardware is not shown explicitly in FIG. 10 because it will probably be implemented within FPGA or ASIC logic and thus comprises a part of the viewport address generator functional block. Likewise, roll of the final image will likely be implemented in a similar fashion.
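The address translator can be illustrated as a precomputed lookup table. The single-coefficient radial model and the value of k are assumptions; a hardware implementation would clamp or mask out-of-range addresses in logic, as the sketch does with `min`/`max`:

```python
def barrel_lut(width, height, k=0.18):
    """Viewport (x, y) -> source (x, y) address translation table using a
    single-coefficient radial barrel model; out-of-range addresses are
    clamped to the image bounds. In hardware this table would live inside
    the viewport address generator (FPGA/ASIC)."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    lut = {}
    for y in range(height):
        for x in range(width):
            nx, ny = (x - cx) / cx, (y - cy) / cy
            scale = 1.0 + k * (nx * nx + ny * ny)  # grows with radius
            sx = min(width - 1, max(0, round(cx + nx * scale * cx)))
            sy = min(height - 1, max(0, round(cy + ny * scale * cy)))
            lut[(x, y)] = (sx, sy)
    return lut
```

Image roll could be folded into the same table by rotating (nx, ny) before the radial scaling, which is why the text expects it to share the same functional block.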
- 3.11) It should be noted that only viewport-0 is affected by the translation engine (Warp Engine); viewport-1 is read out undistorted. This is necessary when using the superimpose and overlay augmented reality modes, because VR video material being played from storage has already been “flattened” (i.e. pincushion distorted) prior to being stored, whereas the live video from the panoramic cameras on the HMD requires distortion correction prior to being displayed by the system in augmented reality mode. After this preliminary distortion correction, images recorded by the panoramic cameras in the HMD should be geometrically accurate and suitable for storage as new VR material in their own right (i.e. they can become VR material). One of the primary roles of the Warp Engine is thus to provide geometry correction and trimming of the images from the panoramic cameras on the HMD. This includes the complex task of providing a seamless transition between camera views.
- 3.12) As can be seen in FIGS. 4 and 5, a VTV image frame consists of either a cylinder or a truncated sphere. This space subtends only a finite vertical angle to the viewer (+/−45 degrees in the prototype). This is an intentional limitation designed to make the most of the available data bandwidth of the video storage and transmission media and thus maintain compatibility with existing video systems. However, as a result of this compromise, there can exist a situation in which the viewport exceeds the scope of the image data. There are several different ways in which this exception can be handled. The simplest is to make out of bounds video data black. This will give the appearance of being in a room with a black ceiling and floor. An alternative and preferable configuration, however, is to use a secondary video memory store to hold a full 360 degree by 180 degree background image map at reduced resolution. This memory area is known as virtual reality memory (VRM). The basic memory map for the system, utilizing both augmented reality memory and virtual reality memory (in addition to translation memory), is shown in FIG. 8. As can be seen in this illustration, the translation memory area must have sufficient range to cover a full 360 degrees by 180 degrees and ideally have the same angular resolution as that of the augmented reality memory bank (which covers 360 degrees by 90 degrees). With such a configuration, it is possible to provide both floor and ceiling exception handling and variable transparency imagery, such as looking through windows in the foreground and showing the background behind them. The backgrounds can be either static or dynamic and can be updated in basically the same way as the foreground (augmented reality memory) by utilizing a Paged format.
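The floor/ceiling exception handling and the transparent-foreground fallback can be sketched as follows; the buffer shapes, the `None` transparency sentinel and the nearest-texel addressing are illustrative assumptions:

```python
TRANSPARENT = None  # assumed sentinel for keyed-out foreground texels

def sample_pixel(az_deg, el_deg, arm, vrm):
    """Resolve one viewport direction. The foreground store (ARM) covers
    360 x 90 degrees (elevation -45..+45); directions outside that span,
    or transparent foreground texels, fall back to the lower resolution
    360 x 180 degree background store (VRM)."""
    if -45.0 <= el_deg <= 45.0:
        ah, aw = len(arm), len(arm[0])
        ax = int((az_deg % 360.0) / 360.0 * aw) % aw
        ay = int((el_deg + 45.0) / 90.0 * (ah - 1))
        texel = arm[ay][ax]
        if texel is not TRANSPARENT:
            return texel
    vh, vw = len(vrm), len(vrm[0])
    vx = int((az_deg % 360.0) / 360.0 * vw) % vw
    vy = int((el_deg + 90.0) / 180.0 * (vh - 1))
    return vrm[vy][vx]
```

The same fallback path also yields the variable-transparency effect: a window in the foreground is simply a region of transparent texels through which the background is sampled.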
- 3.13) The VTV system has two basic modes of operation. Within these two modes there also exist several sub modes. The two basic modes are as follows:
- a) Augmented reality mode
- b) Virtual reality mode
- 3.14) In augmented reality mode 1, selective components of “real world imagery” are overlaid upon a virtual reality background. In general, this process involves first removing all of the background components from the “real world” imagery. This can be easily done by using differential imaging techniques, i.e. by comparing current “real world” imagery against a stored copy taken previously and detecting differences between the two. After the two images have been correctly aligned, the regions that differ are new or foreground objects and those that remain the same are static background objects. This is the simplest of the augmented reality modes and is generally not sufficiently interesting, as most of the background will be removed in the process. It should be noted that, when operated in mobile Pan-Cam (telepresence) or augmented reality mode, the augmented reality memory will generally be updated in sequential Page order (i.e. updated in whole system frames) rather than random Page updates. This is because constant variations in the position and orientation of the panoramic camera system during filming will probably cause mismatches in the image Pages if they are handled separately.
- 3.15) Augmented reality mode 2 differs from mode 1 in that, in addition to automatically extracting foreground and moving objects and placing these in an artificial background environment, the system also utilizes the Warp Engine to “push” additional “real world” objects into the background. In addition to simply adding these “real world” objects into the virtual environment, the Warp Engine is also capable of scaling and translating these objects so that they match the virtual environment more effectively. These objects can be handled as opaque overlays or transparencies.
- 3.16) Augmented reality mode 3 differs from mode 2 in that, in this case, the Warp Engine is used to “pull” the background objects into the foreground to replace “real world” objects. As in mode 2, these objects can be translated and scaled and can be handled as either opaque overlays or transparencies. This gives the user the ability to “match” the physical size and position of a “real world” object with a virtual object. By doing so, the user is able to interact and navigate within the augmented reality environment as they would in the “real world” environment. This mode is probably the most likely mode to be utilized for entertainment and gaming purposes, as it would allow a Hollywood production to be brought into the user's own living room.
- 3.16) Clearly the key to making augmented reality modes 2 and 3 work in this way is the system's ability to recognize and track “real world” spatial objects.
- 3.17) Virtual reality mode is a functionally simpler mode than the previous augmented reality modes. In this mode “pre-filmed” or computer-generated graphics are loaded into augmented reality memory on a random Page by Page basis. This is possible because the virtual camera planes of reference are fixed. As in the previous examples, virtual reality memory is loaded with a fixed or dynamic background at a lower resolution. The use of both foreground and background image planes makes possible more sophisticated graphics techniques such as motion parallax.
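The differential imaging step used by augmented reality mode 1 can be sketched as follows, assuming grayscale pixels, pre-aligned images and an illustrative fixed threshold:

```python
def extract_foreground(current, reference, threshold=12):
    """Keep pixels of `current` that differ from the stored background
    `reference` by more than `threshold`; everything else becomes
    transparent (None). Images are assumed to be already aligned."""
    return [[c if abs(c - r) > threshold else None
             for c, r in zip(cur_row, ref_row)]
            for cur_row, ref_row in zip(current, reference)]
```

The surviving (non-`None`) pixels are the new or foreground objects the text describes; the transparent pixels are where the virtual background shows through.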
- 3.18) The versatility of virtual reality memory (background memory) can be improved by utilizing an enhanced form of “blue-screening”. In such a system, a sample of the “chroma-key” color is provided at the beginning of each scan line in the background field. This provides a versatile system in which any color is allowable in the image. Thus, by surrounding individual objects with the “transparent” chroma-key color, problems and inaccuracies associated with the “cutting and pasting” of this object by the Warp Engine are greatly reduced. Additionally, the use of “transparent” chroma-keyed regions within foreground virtual reality images allows easy generation of complex sharp edged and/or dynamic foreground regions with no additional information overhead.
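The per-scan-line chroma keying can be sketched as follows, assuming the key colour occupies the first sample of each background line and comparing single-channel values against a scalar tolerance:

```python
def keyed_line(scanline, tolerance=4):
    """Decode one background scan line whose first sample carries that
    line's chroma-key colour: samples within `tolerance` of the key are
    transparent (None), all others pass through unchanged. Because each
    line declares its own key, any colour remains usable in the image."""
    key = scanline[0]
    return [None if abs(p - key) <= tolerance else p for p in scanline[1:]]
```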
- 4.1) As can be seen in the definition of the graphics standard, additional Page placement and tracking information is required for the correct placement and subsequent display of the imagery captured by mobile Pan-Cam or HMD based video systems. Additionally, if spatial audio is to be recorded in real-time then this information must also be encoded as part of the video stream. In the case of computer-generated imagery this additional video information can easily be inserted at the render stage. However, in the case of live video capture, this additional tracking and audio information must be inserted into the video stream prior to recording. This can effectively be achieved through a graphics processing module hereinafter referred to as the VTV encoder module.
- 4.2) In the case of imagery collected by mobile panoramic camera systems, the images are first processed by a VTV encoder module. This device provides video distortion correction and also inserts video Page information, orientation tracking data and spatial audio into the video stream. This can be done without altering the video standard, thereby maintaining compatibility with existing recording and playback devices. Although this module could be incorporated within the VTV processor, having it as a separate entity is advantageous for remote camera applications where the video information must ultimately be either stored or transmitted through some form of wireless network.
- 4.3) For any mobile panoramic camera system such as a “Pan-Cam” or HMD based camera system, tracking information must comprise part of the resultant video stream in order that an “absolute” azimuth and elevation coordinate system be maintained. In the case of computer-generated imagery this data is not required as the camera orientation is a theoretical construct known to the computer system at render time.
- 4.4) The basic tracking system of the VTV HMD utilizes on-board panoramic video cameras to capture the required 360 degree visual information of the surrounding real world environment. This information is then analyzed by the VTV processor (whether it exists within the HMD or as a base station unit) utilizing computationally intensive yet relatively algorithmically simple techniques such as auto correlation. Examples of a possible algorithm are shown in FIGS. 13-19.
- 4.5) The simple tracking system outlined in FIGS. 13-19 detects only changes in position and orientation. With the addition of several retroflective targets, which can be easily distinguished from the background images using differential imaging techniques, it is possible to gain absolute reference points. Such absolute reference points would probably be located at the extremities of the environmental region (i.e. the confines of the user space); however, they could be placed anywhere within the real environment, provided the VTV hardware is aware of the real world coordinates of these markers. The combination of these absolute reference points and differential movement (from the image analysis data) makes possible the generation of absolute real world coordinate information at full video frame rates. As an alternative to the placement of retroflective targets at known spatial coordinates, active optical beacons could be employed. These devices would operate in a similar fashion to the retroflective targets in that they would be configured to strobe light in synchronism with the video capture rate, thus allowing differential video analysis to be performed on the resultant images. However, unlike passive retroflective targets, active optical beacons could, in addition to strobing in time with the video capture, transmit additional information describing their real world coordinates to the HMD. As a result, the system would not have to explicitly know the locations of these beacons, as this data could be extracted “on the fly”. Such a system is very versatile and somewhat more rugged than the simpler retroflective configuration.
- 4.6) Note: FIG. 20 shows a simplistic representation of the tracking hardware, in which the auto correlators simply detect the presence or absence of a particular movement. A practical system would probably incorporate a number of auto correlators for each class of movement (for example, there may be 16 or more separate auto correlators to detect horizontal movement). Such a system would then be able to detect different levels or amounts of movement in all of the directions.
- 4.7) An alternative implementation of this tracking system is possible utilizing a similar image analysis technique to track a pattern on the ceiling for spatial positioning information, with simple “tilt sensors” to detect the angular orientation of the HMD/Pan-Cam system. The advantage of this system is that it is considerably simpler and less expensive than the full six axis optical tracker previously described. The fact that the ceiling is at a constant distance and known orientation from the HMD greatly simplifies the optical system, the quality of the required imaging device and the complexity of the subsequent image analysis. As in the previous six axis optical tracking system, this spatial positioning information is inherently in the form of relative movement only. However, the addition of “absolute reference points” allows such a system to re-calibrate its absolute references and thus achieve an overall absolute coordinate system. This absolute reference point calibration can be achieved relatively easily utilizing several different techniques. The first, and perhaps simplest, technique is to use color sensitive retroflective spots as previously described. Alternately, active optical beacons (such as LED beacons) could also be utilized. A further alternative absolute reference calibration system is based on a bi-directional infrared beacon. Such a system would communicate a unique ID code between the HMD and the beacon, such that calibration would occur only once each time the HMD passed under any of these “known spatial reference points”. This is required to avoid “dead tracking regions” within the vicinity of the calibration beacons due to multiple origin resets.
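The once-per-pass beacon calibration can be sketched as follows; the beacon ID and coordinate handling and the tracker representation are assumptions:

```python
class BeaconCalibrator:
    """Applies an absolute position fix when the HMD passes under an
    ID-transmitting beacon, but only once per pass: repeated fixes from
    the same beacon are ignored until contact is lost, avoiding the
    'dead tracking regions' caused by multiple origin resets."""
    def __init__(self):
        self._last_id = None

    def on_beacon(self, beacon_id, beacon_xy, tracker):
        """Returns True if a calibration fix was applied."""
        if beacon_id == self._last_id:
            return False
        self._last_id = beacon_id
        tracker['x'], tracker['y'] = beacon_xy
        return True

    def on_no_beacon(self):
        """Call when no beacon is in view; re-arms calibration."""
        self._last_id = None
```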
- 4.8) The basic auto correlation technique used to locate movement within the image can be simplified into reasonably straightforward image processing steps. Firstly, rotation detection can be simplified into a group of lateral shifts (up, down, left, right) symmetrical around the center of the image (the optical axis of the camera). Additionally, these “sample points” for lateral movement do not necessarily have to be very large. They do, however, have to contain unique picture information. For example, a blank featureless wall will yield no useful tracking information, whereas an image with high contrast regions such as edges of objects or bright highlight points is relatively easily tracked. Taking this thinking one step further, it is possible to first reduce the entire image to highlight points/edges. The image can then be processed as a series of horizontal and vertical strips such that auto correlation regions are bounded between highlight points/edges. Additionally, small highlight regions can very easily be tracked by comparing previous image frames against current images and determining the “closest possible fit” between the images (i.e. minimum movement of highlight points). Such techniques are relatively easy and well within the capabilities of most moderate speed microprocessors, provided some of the image pre-processing overhead is handled by hardware.
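The “closest possible fit” search over one strip reduces to a one-dimensional correlation of the following kind; a real system would run many such searches, in both axes, over small high-contrast regions only:

```python
def best_shift(prev, cur, max_shift=4):
    """Slide `cur` against `prev` and return the lateral shift (in
    samples) with the smallest mean absolute difference, i.e. the
    'closest possible fit' between consecutive frames of one strip."""
    best, best_err = 0, float("inf")
    n = len(prev)
    for shift in range(-max_shift, max_shift + 1):
        err, count = 0.0, 0
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                err += abs(prev[i] - cur[j])
                count += 1
        err /= count
        if err < best_err:
            best_err, best = err, shift
    return best
```

Combining the shifts found in symmetric regions around the optical axis distinguishes rotation (opposite shifts on opposite sides) from translation (equal shifts everywhere), which is how the grouped auto correlators of 4.8 recover the movement classes.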
Claims (56)
1. An electronic device that produces an enhanced spatial television like viewing experience utilizing conventional video devices for the provision of the source media.
2. An electronic device that produces graphical imagery depicting a panoramic (360 degree horizontal view) image such that this overall panoramic image (“Image Sphere”) is composed of a number of smaller image subsections (“Pages”).
3. An electronic device that produces graphical imagery as described in claims 1-2, such that the overall Image Sphere is updated on a Page by Page basis in real-time utilizing conventional video devices for the provision of the source media.
4. An electronic device that is described in claims 1-3 in which the Page order is determined by additional information present in the source media.
5. An electronic device as described in claims 1-4, which allows the viewer to view prerecorded audiovisual media in a wide screen format such that the width of the “virtual” screen can extend to a full 360 degrees horizontally and up to 180 degrees vertically.
6. An electronic device as described in claims 1-5, which allows the viewer to view prerecorded audiovisual material on a conventional screen based display device (TV, projection TV, computer screen) such that the display device represents a viewport or subset of the full 360 degree panoramic image.
7. An entertainment system consisting of: a range of alternative media provision devices (such as a VCR, DVD, satellite receiver etc.); an electronic device (VTV processor) which generates panoramic video imagery from video data provided from the aforementioned devices; and a display device such as a conventional flat screen television, helmet mounted display device (HMD) or other virtual reality display device, fitted with an optional single view or panoramic video capture device, in conjunction with a wireless data communication network to communicate this video information between the HMD and the VTV processor, as shown in FIGS. 1-3.
8. A new audiovisual standard (the virtual television, or VTV, standard) which consists of a modification to the existing television standard allowing for a variety of different "Frames", such that these Frames may contain graphical data, sound or control information while still maintaining compatibility with the existing television standards (NTSC, PAL, HDTV etc.).
9. A new audiovisual standard as described in claim 8, which includes, within one or more scan lines of a standard video image, additional digital and/or analog coded data which provides information defining control parameters and image manipulation data for the VTV graphics processor.
10. A new audiovisual standard as described in claim 8, which includes, within one or more scan lines of a standard video image, additional digital and analog coded data (hybrid coded data) which provides information to generate 4 or more audio tracks in real-time.
11. A new audiovisual standard as described in claim 8, which includes, within one or more scan lines of a standard video image, additional digital or analog coded data which provides information as to the absolute orientation (azimuth, or azimuth and elevation) of the camera that filmed the imagery.
12. A new audiovisual standard as described in claim 8, which includes, within one or more scan lines of a standard video image, additional digital or analog coded data which provides information as to the relative placement position of the current Page (video field or frame) within the 360 degree horizontal by X degree vertical "Image Sphere".
13. A new audiovisual standard as described in claims 8,10, which includes, within one or more scan lines of a standard video image, additional digital or analog coded data which provides information as to the number of audio tracks, the audio sampling rate and the track synchronization, allowing the VTV graphics processor to decode the audio information as described in claim 10 into spatial (position and orientation sensitive) sound.
14. A new audiovisual standard based around the concept of “Image Spheres” which are 360 degree horizontal by X degree vertical cylinders or truncated spheres, such that each Image Sphere consists of a number of subsections or “Pages”.
15. A new audiovisual standard as described in claim 8, which makes possible the encoding of multi-track audio for use with standard video storage and transmission systems, such that this information can subsequently be decoded by specific hardware (the VTV processor) to produce a left and a right audio channel (for headphones or speaker systems), such that the audio channels are mixed (mathematically combined) in such a way as to produce spatially correct audio for the left and right ears of the user. The parameters affecting this mathematical combination are primarily azimuth (in the case of a 4 track audio system) or both azimuth and elevation (in the case of an 8 track audio system).
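The azimuth-driven mix described in claim 15 can be sketched as below. The track layout (four mono sources fixed at 0, 90, 180 and 270 degrees) and the equal-power panning law are illustrative assumptions, not part of the claimed standard.

```python
import math

def mix_4track(samples, head_azimuth_deg):
    """Mix 4 mono tracks (sources at 0/90/180/270 deg azimuth) to L/R."""
    left = right = 0.0
    for i, s in enumerate(samples):
        # Source direction relative to where the listener is facing.
        rel = math.radians(i * 90.0 - head_azimuth_deg)
        # sin(rel) = +1 places the source hard right, -1 hard left.
        pan = (math.sin(rel) + 1.0) / 2.0
        # Equal-power panning between the two ears.
        left += s * math.cos(pan * math.pi / 2.0)
        right += s * math.sin(pan * math.pi / 2.0)
    return left, right

# A source dead ahead reaches both ears equally.
front_l, front_r = mix_4track([1.0, 0.0, 0.0, 0.0], 0.0)
# A source at 90 degrees favours the right ear.
right_l, right_r = mix_4track([0.0, 1.0, 0.0, 0.0], 0.0)
```

Re-running the mix each audio block with the current head azimuth yields the orientation-sensitive sound of claims 22-23.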
16. An electronic device as described in claims 1-6, which allows the viewer to view prerecorded audiovisual material using a helmet mounted display (HMD) or other virtual reality type display device such that the display device represents a viewport or subset of the full 360 degree horizontal panoramic image.
17. An electronic device as described in claims 1-6,16, such that the horizontal direction of view within the 360 degree horizontal by X degree vertical "virtual environment" is dynamically controllable by the user at runtime (while the images are being displayed).
18. An electronic device as described in claims 1-6,16-17, such that both the azimuth and elevation of the viewport within the 360 degree horizontal by X degree vertical "virtual environment" are dynamically controllable by the user at runtime (while the images are being displayed).
19. An electronic device as described in claims 1-6,16-18, in which the direction of view is automatically controlled by virtue of a tracking device which continuously measures the azimuth or both azimuth and elevation of the viewer's head.
20. An electronic device as described in claims 1-6,16-19, in which the virtual camera position within the "virtual environment" (i.e. the viewpoint of the viewer) is dynamically controllable by the user at runtime (while the images are being displayed).
21. An electronic device as described in claims 1-6,16-20, in which the virtual camera position within the "virtual environment" (i.e. the viewpoint of the viewer) is automatically controlled by virtue of a tracking device which continuously measures the physical position of the viewer's head in "real world coordinates".
22. An electronic device, in which orientation sensitive audio is provided in real-time, controlled by the direction of the viewer's head (azimuth and elevation).
23. An electronic device as described in claims 1-6,16-21, in which orientation sensitive audio is also provided in real-time, which is controlled by the direction of the viewport within the 360 degree Image Sphere (“virtual environment”).
24. An electronic device as described in claims 1-6,16-21, in which orientation and position sensitive audio is also provided in real-time, which is controlled by the direction of the viewport within the 360 degree Image Sphere and virtual position within the “virtual environment”.
25. An electronic device as described in claims 1-6,16-24, which is capable of displaying prerecorded computer graphic or live imagery in a 360 degree Image Sphere format to produce a virtual reality experience which is capable of being provided from standard video storage and transmission devices (VCR, DVD, satellite transmission etc.)
26. An electronic device as described in claims 1-6,16-25 which is capable of combining prerecorded computer graphic or live imagery with “real world imagery” captured utilizing a simple single view or panoramic camera system in real-time to produce an augmented reality experience.
27. An electronic device as described in claims 1-6,16-26, which is capable of selectively combining and geometrically altering either “real world” or prerecorded imagery to create a composite augmented reality experience.
28. An electronic device as described in claims 1-6,16-27, which is capable of analyzing "real world" images captured by a simple single view or panoramic camera system and, by utilizing differential imaging techniques and/or other image processing techniques, automatically removing the background "real world" scenery and replacing it with synthetic or prerecorded imagery provided from a video device (such as a VCR, DVD player etc.).
29. An electronic device as described in claims 1-6,16-25, which is capable of combining "foreground" and "background" pre-rendered video information utilizing chroma-keying techniques, in which the foreground and background information may be provided by the same video source, and in which additionally the chroma-key color is dynamically variable within an image by providing an analog or digital sample of the chroma-key color, coded either as a special control frame or as part of each scan line of the video image.
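A minimal chroma-key sketch of the compositing step in claim 29: foreground pixels near the key color are replaced by background pixels. The claim's per-scan-line variable key is approximated here by simply passing a different `key_rgb` per call; the function name and tolerance metric are illustrative assumptions.

```python
import numpy as np

def chroma_composite(foreground, background, key_rgb, tolerance=30.0):
    # Euclidean distance in RGB from the key color, per pixel.
    diff = foreground.astype(np.int32) - np.asarray(key_rgb, np.int32)
    mask = np.linalg.norm(diff, axis=-1) < tolerance  # True where "keyed out"
    out = foreground.copy()
    out[mask] = background[mask]  # keyed pixels show the background
    return out

fg = np.zeros((2, 2, 3), np.uint8); fg[...] = (0, 255, 0)   # all key-green
bg = np.zeros((2, 2, 3), np.uint8); bg[...] = (255, 0, 0)
composited = chroma_composite(fg, bg, key_rgb=(0, 255, 0))
```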
30. An electronic device which is capable of performing both of the functions described in claims 28 and 29.
31. An electronic device which is capable of analyzing images captured by a simple single view or panoramic camera system as described in claims 39-44 and interpreting the imagery as three-dimensional objects in real-time.
32. An electronic device as described in claim 31, which converts the three-dimensional objects into a "universal graphics description language" such as VRML or other appropriate language for storage or live transmission and subsequent decoding into graphical imagery by another VTV processor and appropriate display device.
33. An electronic device (otherwise known as the VTV graphics processor) described in claims 1-6,16-32, shown in FIGS. 8-10, and whose functionality is described in paragraphs 3.1-3.18, which comprises: one or more video digitizing modules; three areas of memory, known as augmented reality memory (ARM), virtual reality memory (VRM) and translation memory (TM); a digital processing module; and one or more video generation modules.
34. An electronic device as described in claim 33, in which the augmented reality memory (ARM) is "mapped" to occupy a smaller vertical field of view than the virtual reality memory (VRM) and translation memory (TM), so as to minimize the data requirement for the provision of the media whilst still maintaining a high-quality image.
35. An electronic device as described in claims 33-34, in which the augmented reality memory (ARM), virtual reality memory (VRM) and translation memory (TM) may be "mapped" at different resolutions (i.e. pixels in each memory region can represent a different degree of angular deviation).
36. An electronic device as described in claims 33-35, which displays imagery as described in claims 26-28 by first placing the "real world" video information into augmented reality memory (foreground memory) and source information from a video provision device (VCR, DVD player etc.) into virtual reality memory, and then combining these two sources of imagery, according to the pattern of data held in translation memory (part of the Warp Engine), into a "composite image" before display on the output device (such as a flat screen display or HMD).
37. An electronic device as described in claims 33-35, which displays imagery as described in claims 25,29 by first placing the foreground video information from a video provision device (VCR, DVD player etc.) into augmented reality memory and the background video information from a video provision device into virtual reality memory, and then combining these two sources of imagery, according to the pattern of data held in translation memory (part of the Warp Engine), into a "composite image" before display on the output device (such as a flat screen display or HMD).
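The translation-memory combination of claims 36-37 can be sketched as a per-pixel lookup table that both selects the source memory (ARM foreground vs. VRM background) and remaps coordinates (the "warp"). The array names, sizes and identity warp below are illustrative assumptions.

```python
import numpy as np

h, w = 4, 4
arm = np.full((h, w, 3), 200, np.uint8)  # augmented reality memory (foreground)
vrm = np.full((h, w, 3), 50, np.uint8)   # virtual reality memory (background)

tm_y, tm_x = np.indices((h, w))          # identity warp for this sketch
tm_select = np.zeros((h, w), bool)
tm_select[:2, :] = True                  # top half of the output taken from ARM

# Each output pixel fetches its warped source pixel from ARM or VRM.
composite = np.where(tm_select[..., None], arm[tm_y, tm_x], vrm[tm_y, tm_x])
```

Storing the selection and warp in memory, rather than computing them per frame, is what lets such a combiner run at video rates.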
38. An electronic device as described in claim 37, which in addition to using the Warp Engine for image combination also relies on chroma-keying information present in the video media to determine foreground and background priority for final combination and display.
39. An electro-optical assembly which consists of a plurality of electronic image capture devices (video cameras, HDTV cameras, digital still cameras etc.) which are configured with overlapping horizontal fields of view such that collectively the overlapping horizontal fields of view cover a full 360 degrees.
40. An electronic device which crops and aligns the individual images (Pages) produced by the assembly described in claim 39 to produce an overall 360 degree panoramic image with negligible distortion and overlap between the individual Pages.
41. An electronic device as described in claim 40, which in addition to cropping and aligning the separate images to produce a seamless 360 degree panoramic image, also applies distortion correction to the images so that the resulting 360 degree panoramic image is mathematically "flat" in the horizontal axis (i.e. each pixel in the horizontal axis of the image subtends an equal angle to the camera).
42. An electronic device as described in claims 40-41, which also applies distortion correction to the images so that the resulting 360 degree panoramic image is mathematically "flat" in the vertical axis (i.e. each pixel in the vertical axis of the image subtends an equal angle to the camera).
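The "mathematically flat" remapping of claims 41-42 can be sketched in one dimension. A perspective camera samples tan(angle) uniformly across a scan line; resampling so each output pixel subtends an equal angle undoes that. The function below is a simple nearest-neighbour illustration under that pinhole-camera assumption, not the patent's correction.

```python
import numpy as np

def equal_angle_resample(row, fov_deg):
    """Resample one perspective scan line to equal-angle spacing."""
    row = np.asarray(row)
    n = len(row)
    half = np.radians(fov_deg / 2.0)
    f = (n / 2.0) / np.tan(half)              # focal length in pixels for this FOV
    thetas = np.linspace(-half, half, n)      # equally spaced output angles
    src = f * np.tan(thetas) + (n - 1) / 2.0  # source column for each angle
    src = np.clip(np.round(src), 0, n - 1).astype(int)
    return row[src]

line = equal_angle_resample(np.arange(100), 90.0)
```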
43. An electronic device as described in claims 40-42, which additionally inserts "Page identification information", describing the location of the individual Pages that comprise the 360 degree panoramic image produced by the panoramic camera assembly, into the outgoing video stream.
44. An electronic device as described in claims 40-43, which additionally inserts "tracking information", describing the current orientation of the panoramic camera assembly (azimuth and elevation), into the video stream.
45. An electronic device which, utilizing data received from one or more video capture devices (video cameras etc.) and performing a series of simple image analysis processes such as autocorrelation, calculates relative movement in the azimuth of the camera (of the viewer, in the case of an HMD based camera assembly) as shown in FIGS. 13,14 and more completely described in paragraphs 4.1-4.8.
46. An electronic device which, utilizing data received from one or more video capture devices (video cameras etc.) and performing a series of simple image analysis processes such as autocorrelation, calculates relative movement in the elevation of the camera (of the viewer, in the case of an HMD based camera assembly) as shown in FIGS. 13,15 and more completely described in paragraphs 4.1-4.8.
47. An electronic device which, utilizing data received from one or more video capture devices (video cameras etc.) and performing a series of simple image analysis processes such as autocorrelation, calculates relative movement in the roll of the camera (of the viewer, in the case of an HMD based camera assembly) as shown in FIGS. 13,16 and more completely described in paragraphs 4.1-4.8.
48. An electronic device which, utilizing data received from one or more video capture devices (video cameras etc.) and performing a series of simple image analysis processes such as autocorrelation, calculates relative movement in the physical (spatial) position of the camera (of the viewer, in the case of an HMD based camera assembly) in any one or combination of the X, Y or Z axes as shown in FIGS. 13,17-18 and more completely described in paragraphs 4.1-4.8.
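The correlation-based motion estimate of claims 45-48 can be illustrated in one dimension: slide a brightness profile from one frame (e.g. its column sums) over the profile from the next, and keep the offset with the smallest error as the pan between frames. This is a generic sketch of the technique class, not the patent's algorithm.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=16):
    """Return the pixel offset of curr relative to prev with the lowest MSE."""
    best, best_err = 0, float("inf")
    ref = prev[max_shift:len(prev) - max_shift]  # window valid for all offsets
    for s in range(-max_shift, max_shift + 1):
        cand = curr[max_shift + s : len(curr) - max_shift + s]
        err = float(np.mean((ref - cand) ** 2))
        if err < best_err:
            best, best_err = s, err
    return best  # pixels of horizontal motion between the two frames

profile_a = np.sin(np.linspace(0.0, 10.0, 200))
profile_b = np.roll(profile_a, 5)  # simulate a 5-pixel pan
```

Dividing the recovered pixel shift by the camera's pixels-per-degree converts it into relative azimuth movement; the same scheme applied along other axes yields elevation, roll and translation estimates.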
49. An electronic device as described in claims 45-48, which utilizes a number of retroflective targets with known "real world" coordinates, in conjunction with constant or strobed on-axis light sources, to determine absolute angular/spatial references for the purpose of converting the relative angular and spatial data determined by the devices described in claims 45-48 into absolute angular and spatial data.
50. An electronic device as described in claim 49, which utilizes a combination of color filters over the retroflective targets in conjunction with controllable on-axis light sources which are synchronized to the video capture rate of the HMD based or remote panoramic cameras to improve the ability of the system to correctly identify and maintain tracking of the individual retroflective targets.
51. An electronic device as described in claims 49-50, which utilizes a combination of retroflective targets in conjunction with color controllable on-axis light sources which are synchronized to the video capture rate of the HMD based or remote panoramic cameras to improve the ability of the system to correctly identify and maintain tracking of the individual retroflective targets.
52. An electronic device as described in claims 49-51, which utilizes a combination of color filters over the retroflective targets in conjunction with color controllable on-axis light sources which are synchronized to the video capture rate of the HMD based or remote panoramic cameras to improve the ability of the system to correctly identify and maintain tracking of the individual retroflective targets.
53. An electronic device as described in claims 45-48, which utilizes a number of "active optical beacons" (controllable light sources which are synchronized to the video capture rate of the HMD based or remote panoramic cameras) such that pulse timing, color of light and/or combinations of these are used to transmit the "real world" coordinates of the beacon to the HMD or remote panoramic camera, to determine absolute angular/spatial references for the purpose of converting the relative angular and spatial data determined by the devices described in claims 45-48 into absolute angular and spatial data.
54. An electronic device as described in claims 45-48, which utilizes a number of "bi-directional infrared beacons" which communicate a unique ID code between the HMD and the beacon, such that this calibration would occur only once each time the HMD passed under any of these "known spatial reference points".
55. An electronic device which utilizes a single optical imaging device to monitor a pattern on the ceiling and, utilizing image processing techniques similar to those described in claims 45-48, determines relative spatial movement and azimuth, in conjunction with an alternative angular tracking system such as fluid level sensors to determine the remaining angular orientations (pitch and roll).
56. An electronic device as described in claim 55, which utilizes any of the calibration systems described in claims 49-54 to determine absolute references for the purpose of converting the relative spatial data determined by the device described in claim 55 into absolute spatial data.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/891,733 US20010056574A1 (en) | 2000-06-26 | 2001-06-25 | VTV system |
JP2003508064A JP2005500721A (en) | 2001-06-25 | 2001-12-21 | VTV system |
DE10197255T DE10197255T5 (en) | 2001-06-25 | 2001-12-21 | VTV system |
PCT/US2001/049287 WO2003001803A1 (en) | 2001-06-25 | 2001-12-21 | Vtv system |
US11/230,173 US7688346B2 (en) | 2001-06-25 | 2005-09-19 | VTV system |
US12/732,671 US20100302348A1 (en) | 2001-06-25 | 2010-03-26 | VTV System |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US21286200P | 2000-06-26 | 2000-06-26 | |
US09/891,733 US20010056574A1 (en) | 2000-06-26 | 2001-06-25 | VTV system |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/230,173 Division US7688346B2 (en) | 2001-06-25 | 2005-09-19 | VTV system |
US11/230,173 Continuation US7688346B2 (en) | 2001-06-25 | 2005-09-19 | VTV system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20010056574A1 true US20010056574A1 (en) | 2001-12-27 |
Family
ID=25398728
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/891,733 Abandoned US20010056574A1 (en) | 2000-06-26 | 2001-06-25 | VTV system |
US11/230,173 Active 2024-09-19 US7688346B2 (en) | 2001-06-25 | 2005-09-19 | VTV system |
US12/732,671 Abandoned US20100302348A1 (en) | 2001-06-25 | 2010-03-26 | VTV System |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/230,173 Active 2024-09-19 US7688346B2 (en) | 2001-06-25 | 2005-09-19 | VTV system |
US12/732,671 Abandoned US20100302348A1 (en) | 2001-06-25 | 2010-03-26 | VTV System |
Country Status (4)
Country | Link |
---|---|
US (3) | US20010056574A1 (en) |
JP (1) | JP2005500721A (en) |
DE (1) | DE10197255T5 (en) |
WO (1) | WO2003001803A1 (en) |
Cited By (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020147586A1 (en) * | 2001-01-29 | 2002-10-10 | Hewlett-Packard Company | Audio annoucements with range indications |
US20040125044A1 (en) * | 2002-09-05 | 2004-07-01 | Akira Suzuki | Display system, display control apparatus, display apparatus, display method and user interface device |
WO2004088994A1 (en) * | 2003-04-02 | 2004-10-14 | Daimlerchrysler Ag | Device for taking into account the viewer's position in the representation of 3d image contents on 2d display devices |
US20040222988A1 (en) * | 2003-05-08 | 2004-11-11 | Nintendo Co., Ltd. | Video game play using panoramically-composited depth-mapped cube mapping |
WO2005064440A2 (en) * | 2003-12-23 | 2005-07-14 | Siemens Aktiengesellschaft | Device and method for the superposition of the real field of vision in a precisely positioned manner |
US20050231532A1 (en) * | 2004-03-31 | 2005-10-20 | Canon Kabushiki Kaisha | Image processing method and image processing apparatus |
US20060069591A1 (en) * | 2004-09-29 | 2006-03-30 | Razzano Michael R | Dental image charting system and method |
US7118228B2 (en) | 2003-11-04 | 2006-10-10 | Hewlett-Packard Development Company, L.P. | Image display system |
US20070268316A1 (en) * | 2006-05-22 | 2007-11-22 | Canon Kabushiki Kaisha | Display apparatus with image-capturing function, image processing apparatus, image processing method, and image display system |
US20080007617A1 (en) * | 2006-05-11 | 2008-01-10 | Ritchey Kurtis J | Volumetric panoramic sensor systems |
US20080024594A1 (en) * | 2004-05-19 | 2008-01-31 | Ritchey Kurtis J | Panoramic image-based virtual reality/telepresence audio-visual system and method |
US20080117288A1 (en) * | 2006-11-16 | 2008-05-22 | Imove, Inc. | Distributed Video Sensor Panoramic Imaging System |
US7734070B1 (en) | 2002-12-31 | 2010-06-08 | Rajeev Sharma | Method and system for immersing face images into a video sequence |
US20120281128A1 (en) * | 2011-05-05 | 2012-11-08 | Sony Corporation | Tailoring audio video output for viewer position and needs |
US20120307001A1 (en) * | 2011-06-03 | 2012-12-06 | Nintendo Co., Ltd. | Information processing system, information processing device, storage medium storing information processing program, and moving image reproduction control method |
US8771064B2 (en) | 2010-05-26 | 2014-07-08 | Aristocrat Technologies Australia Pty Limited | Gaming system and a method of gaming |
US9236000B1 (en) * | 2010-12-23 | 2016-01-12 | Amazon Technologies, Inc. | Unpowered augmented reality projection accessory display device |
CN105324984A (en) * | 2013-12-09 | 2016-02-10 | Cjcgv株式会社 | Method and system for generating multi-projection images |
US20160127723A1 (en) * | 2013-12-09 | 2016-05-05 | Cj Cgv Co., Ltd. | Method and system for generating multi-projection images |
US9383831B1 (en) * | 2010-12-23 | 2016-07-05 | Amazon Technologies, Inc. | Powered augmented reality projection accessory display device |
CN106165402A (en) * | 2014-04-22 | 2016-11-23 | 索尼公司 | Information reproduction apparatus, information regeneration method, information record carrier and information recording method |
US9508194B1 (en) | 2010-12-30 | 2016-11-29 | Amazon Technologies, Inc. | Utilizing content output devices in an augmented reality environment |
US9607315B1 (en) | 2010-12-30 | 2017-03-28 | Amazon Technologies, Inc. | Complementing operation of display devices in an augmented reality environment |
US9721386B1 (en) | 2010-12-27 | 2017-08-01 | Amazon Technologies, Inc. | Integrated augmented reality environment |
US9766057B1 (en) | 2010-12-23 | 2017-09-19 | Amazon Technologies, Inc. | Characterization of a scene with structured light |
US20170329394A1 (en) * | 2016-05-13 | 2017-11-16 | Benjamin Lloyd Goldstein | Virtual and augmented reality systems |
FR3057430A1 (en) * | 2016-10-10 | 2018-04-13 | Immersion | DEVICE FOR IMMERSION IN A REPRESENTATION OF AN ENVIRONMENT RESULTING FROM A SET OF IMAGES |
US9950262B2 (en) | 2011-06-03 | 2018-04-24 | Nintendo Co., Ltd. | Storage medium storing information processing program, information processing device, information processing system, and information processing method |
US9958934B1 (en) * | 2006-05-01 | 2018-05-01 | Jeffrey D. Mullen | Home and portable augmented reality and virtual reality video game consoles |
US20180176708A1 (en) * | 2016-12-20 | 2018-06-21 | Casio Computer Co., Ltd. | Output control device, content storage device, output control method and non-transitory storage medium |
US20180342267A1 (en) * | 2017-05-26 | 2018-11-29 | Digital Domain, Inc. | Spatialized rendering of real-time video data to 3d space |
US20190007672A1 (en) * | 2017-06-30 | 2019-01-03 | Bobby Gene Burrough | Method and Apparatus for Generating Dynamic Real-Time 3D Environment Projections |
US20190104282A1 (en) * | 2017-09-29 | 2019-04-04 | Sensormatic Electronics, LLC | Security Camera System with Multi-Directional Mount and Method of Operation |
WO2019076667A1 (en) * | 2017-10-16 | 2019-04-25 | Signify Holding B.V. | A method and controller for controlling a plurality of lighting devices |
US20190149731A1 (en) * | 2016-05-25 | 2019-05-16 | Livit Media Inc. | Methods and systems for live sharing 360-degree video streams on a mobile device |
CN109996060A (en) * | 2017-12-30 | 2019-07-09 | 深圳多哚新技术有限责任公司 | A kind of virtual reality cinema system and information processing method |
TWI666912B (en) * | 2017-03-22 | 2019-07-21 | 聯發科技股份有限公司 | Method and apparatus for generating and encoding projection-based frame with 360-degree content represented in projection faces packed in segmented sphere projection layout |
US10375355B2 (en) | 2006-11-16 | 2019-08-06 | Immersive Licensing, Inc. | Distributed video sensor panoramic imaging system |
US10712810B2 (en) * | 2017-12-08 | 2020-07-14 | Telefonaktiebolaget Lm Ericsson (Publ) | System and method for interactive 360 video playback based on user location |
CN112233048A (en) * | 2020-12-11 | 2021-01-15 | 成都成电光信科技股份有限公司 | Spherical video image correction method |
US11086395B2 (en) * | 2019-02-15 | 2021-08-10 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
CN113824746A (en) * | 2021-11-25 | 2021-12-21 | 山东信息职业技术学院 | Virtual reality information transmission method and virtual reality system |
US11288937B2 (en) | 2017-06-30 | 2022-03-29 | Johnson Controls Tyco IP Holdings LLP | Security camera system with multi-directional mount and method of operation |
US11361640B2 (en) | 2017-06-30 | 2022-06-14 | Johnson Controls Tyco IP Holdings LLP | Security camera system with multi-directional mount and method of operation |
US11372474B2 (en) * | 2019-07-03 | 2022-06-28 | Saec/Kinetic Vision, Inc. | Systems and methods for virtual artificial intelligence development and testing |
US11816757B1 (en) * | 2019-12-11 | 2023-11-14 | Meta Platforms Technologies, Llc | Device-side capture of data representative of an artificial reality environment |
Families Citing this family (83)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9063633B2 (en) * | 2006-03-30 | 2015-06-23 | Arjuna Indraeswaran Rajasingham | Virtual navigation system for virtual and real spaces |
US9101279B2 (en) | 2006-02-15 | 2015-08-11 | Virtual Video Reality By Ritchey, Llc | Mobile user borne brain activity data and surrounding environment data correlation system |
CN101496387B (en) | 2006-03-06 | 2012-09-05 | 思科技术公司 | System and method for access authentication in a mobile wireless network |
US8570373B2 (en) * | 2007-06-08 | 2013-10-29 | Cisco Technology, Inc. | Tracking an object utilizing location information associated with a wireless device |
US8717412B2 (en) * | 2007-07-18 | 2014-05-06 | Samsung Electronics Co., Ltd. | Panoramic image production |
US8098283B2 (en) * | 2007-08-01 | 2012-01-17 | Shaka Ramsay | Methods, systems, and computer program products for implementing a personalized, image capture and display system |
US9703369B1 (en) | 2007-10-11 | 2017-07-11 | Jeffrey David Mullen | Augmented reality video game systems |
US8355041B2 (en) * | 2008-02-14 | 2013-01-15 | Cisco Technology, Inc. | Telepresence system for 360 degree video conferencing |
US8797377B2 (en) | 2008-02-14 | 2014-08-05 | Cisco Technology, Inc. | Method and system for videoconference configuration |
US10229389B2 (en) * | 2008-02-25 | 2019-03-12 | International Business Machines Corporation | System and method for managing community assets |
US8319819B2 (en) * | 2008-03-26 | 2012-11-27 | Cisco Technology, Inc. | Virtual round-table videoconference |
US8390667B2 (en) | 2008-04-15 | 2013-03-05 | Cisco Technology, Inc. | Pop-up PIP for people not in picture |
JP4444354B2 (en) * | 2008-08-04 | 2010-03-31 | 株式会社東芝 | Image processing apparatus and image processing method |
EP2157545A1 (en) * | 2008-08-19 | 2010-02-24 | Sony Computer Entertainment Europe Limited | Entertainment device, system and method |
US8694658B2 (en) * | 2008-09-19 | 2014-04-08 | Cisco Technology, Inc. | System and method for enabling communication sessions in a network environment |
US8529346B1 (en) * | 2008-12-30 | 2013-09-10 | Lucasfilm Entertainment Company Ltd. | Allocating and managing software assets |
US8659637B2 (en) | 2009-03-09 | 2014-02-25 | Cisco Technology, Inc. | System and method for providing three dimensional video conferencing in a network environment |
US8477175B2 (en) * | 2009-03-09 | 2013-07-02 | Cisco Technology, Inc. | System and method for providing three dimensional imaging in a network environment |
US20130176192A1 (en) * | 2011-09-30 | 2013-07-11 | Kenneth Varga | Extra-sensory perception sharing force capability and unknown terrain identification system |
WO2010124074A1 (en) * | 2009-04-22 | 2010-10-28 | Terrence Dashon Howard | System for merging virtual reality and reality to provide an enhanced sensory experience |
US8659639B2 (en) | 2009-05-29 | 2014-02-25 | Cisco Technology, Inc. | System and method for extending communications between participants in a conferencing environment |
US8762982B1 (en) * | 2009-06-22 | 2014-06-24 | Yazaki North America, Inc. | Method for programming an instrument cluster |
US9082297B2 (en) | 2009-08-11 | 2015-07-14 | Cisco Technology, Inc. | System and method for verifying parameters in an audiovisual environment |
US8812990B2 (en) * | 2009-12-11 | 2014-08-19 | Nokia Corporation | Method and apparatus for presenting a first person world view of content |
US9225916B2 (en) | 2010-03-18 | 2015-12-29 | Cisco Technology, Inc. | System and method for enhancing video images in a conferencing environment |
USD626102S1 (en) | 2010-03-21 | 2010-10-26 | Cisco Tech Inc | Video unit with integrated features |
USD626103S1 (en) | 2010-03-21 | 2010-10-26 | Cisco Technology, Inc. | Video unit with integrated features |
US9313452B2 (en) | 2010-05-17 | 2016-04-12 | Cisco Technology, Inc. | System and method for providing retracting optics in a video conferencing environment |
US8896655B2 (en) | 2010-08-31 | 2014-11-25 | Cisco Technology, Inc. | System and method for providing depth adaptive video conferencing |
US8599934B2 (en) | 2010-09-08 | 2013-12-03 | Cisco Technology, Inc. | System and method for skip coding during video conferencing in a network environment |
US20120075466A1 (en) * | 2010-09-29 | 2012-03-29 | Raytheon Company | Remote viewing |
WO2012048252A1 (en) | 2010-10-07 | 2012-04-12 | Aria Glassworks, Inc. | System and method for transitioning between interface modes in virtual and augmented reality applications |
US8599865B2 (en) | 2010-10-26 | 2013-12-03 | Cisco Technology, Inc. | System and method for provisioning flows in a mobile network environment |
US8699457B2 (en) | 2010-11-03 | 2014-04-15 | Cisco Technology, Inc. | System and method for managing flows in a mobile network environment |
US9338394B2 (en) | 2010-11-15 | 2016-05-10 | Cisco Technology, Inc. | System and method for providing enhanced audio in a video environment |
US8730297B2 (en) | 2010-11-15 | 2014-05-20 | Cisco Technology, Inc. | System and method for providing camera functions in a video environment |
US9143725B2 (en) | 2010-11-15 | 2015-09-22 | Cisco Technology, Inc. | System and method for providing enhanced graphics in a video environment |
US8902244B2 (en) | 2010-11-15 | 2014-12-02 | Cisco Technology, Inc. | System and method for providing enhanced graphics in a video environment |
US8576276B2 (en) | 2010-11-18 | 2013-11-05 | Microsoft Corporation | Head-mounted display device which provides surround video |
US8542264B2 (en) | 2010-11-18 | 2013-09-24 | Cisco Technology, Inc. | System and method for managing optics in a video environment |
US8723914B2 (en) | 2010-11-19 | 2014-05-13 | Cisco Technology, Inc. | System and method for providing enhanced video processing in a network environment |
US9017163B2 (en) | 2010-11-24 | 2015-04-28 | Aria Glassworks, Inc. | System and method for acquiring virtual and augmented reality scenes by a user |
US9111138B2 (en) | 2010-11-30 | 2015-08-18 | Cisco Technology, Inc. | System and method for gesture interface control |
USD678308S1 (en) | 2010-12-16 | 2013-03-19 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD682864S1 (en) | 2010-12-16 | 2013-05-21 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD682294S1 (en) | 2010-12-16 | 2013-05-14 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD682854S1 (en) | 2010-12-16 | 2013-05-21 | Cisco Technology, Inc. | Display screen for graphical user interface |
USD682293S1 (en) | 2010-12-16 | 2013-05-14 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD678894S1 (en) | 2010-12-16 | 2013-03-26 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD678307S1 (en) | 2010-12-16 | 2013-03-19 | Cisco Technology, Inc. | Display screen with graphical user interface |
USD678320S1 (en) | 2010-12-16 | 2013-03-19 | Cisco Technology, Inc. | Display screen with graphical user interface |
US8953022B2 (en) | 2011-01-10 | 2015-02-10 | Aria Glassworks, Inc. | System and method for sharing virtual and augmented reality scenes between users and viewers |
US8692862B2 (en) | 2011-02-28 | 2014-04-08 | Cisco Technology, Inc. | System and method for selection of video data in a video conference environment |
US8670019B2 (en) | 2011-04-28 | 2014-03-11 | Cisco Technology, Inc. | System and method for providing enhanced eye gaze in a video conferencing environment |
US8786631B1 (en) | 2011-04-30 | 2014-07-22 | Cisco Technology, Inc. | System and method for transferring transparency information in a video environment |
US8934026B2 (en) | 2011-05-12 | 2015-01-13 | Cisco Technology, Inc. | System and method for video coding in a dynamic environment |
US8947493B2 (en) | 2011-11-16 | 2015-02-03 | Cisco Technology, Inc. | System and method for alerting a participant in a video conference |
US8682087B2 (en) | 2011-12-19 | 2014-03-25 | Cisco Technology, Inc. | System and method for depth-guided image filtering in a video conference environment |
US20130250040A1 (en) * | 2012-03-23 | 2013-09-26 | Broadcom Corporation | Capturing and Displaying Stereoscopic Panoramic Images |
US9743119B2 (en) | 2012-04-24 | 2017-08-22 | Skreens Entertainment Technologies, Inc. | Video display system |
US10499118B2 (en) * | 2012-04-24 | 2019-12-03 | Skreens Entertainment Technologies, Inc. | Virtual and augmented reality system and headset display |
US11284137B2 (en) | 2012-04-24 | 2022-03-22 | Skreens Entertainment Technologies, Inc. | Video processing systems and methods for display, selection and navigation of a combination of heterogeneous sources |
US9179126B2 (en) * | 2012-06-01 | 2015-11-03 | Ostendo Technologies, Inc. | Spatio-temporal light field cameras |
US20130333633A1 (en) * | 2012-06-14 | 2013-12-19 | Tai Cheung Poon | Systems and methods for testing dogs' hearing, vision, and responsiveness |
US9626799B2 (en) | 2012-10-02 | 2017-04-18 | Aria Glassworks, Inc. | System and method for dynamically displaying multiple virtual and augmented reality scenes on a single display |
US9681154B2 (en) | 2012-12-06 | 2017-06-13 | Patent Capital Group | System and method for depth-guided filtering in a video conference environment |
US10769852B2 (en) | 2013-03-14 | 2020-09-08 | Aria Glassworks, Inc. | Method for simulating natural perception in virtual and augmented reality scenes |
WO2014176115A1 (en) * | 2013-04-22 | 2014-10-30 | Ar Tables, Llc | Apparatus for hands-free augmented reality viewing |
US9843621B2 (en) | 2013-05-17 | 2017-12-12 | Cisco Technology, Inc. | Calendaring activities based on communication processing |
US8982472B2 (en) * | 2013-05-21 | 2015-03-17 | Matvey Lvovskiy | Method of widening of angular field of view of collimating optical systems |
FR3006841B1 (en) * | 2013-06-07 | 2015-07-03 | Kolor | FUSION OF SEVERAL VIDEO STREAMS |
US9579573B2 (en) | 2013-06-10 | 2017-02-28 | Pixel Press Technology, LLC | Systems and methods for creating a playable video game from a three-dimensional model |
US10363486B2 (en) | 2013-06-10 | 2019-07-30 | Pixel Press Technology, LLC | Smart video game board system and methods |
US10977864B2 (en) | 2014-02-21 | 2021-04-13 | Dropbox, Inc. | Techniques for capturing and displaying partial motion in virtual or augmented reality scenes |
US9392212B1 (en) | 2014-04-17 | 2016-07-12 | Visionary Vr, Inc. | System and method for presenting virtual reality content to a user |
US9665170B1 (en) | 2015-06-10 | 2017-05-30 | Visionary Vr, Inc. | System and method for presenting virtual reality content to a user based on body posture |
DE102015116868A1 (en) | 2015-10-05 | 2017-04-06 | Christoph Greiffenbach | Presentation system for advertising purposes and for displaying a product |
AU2017214748B9 (en) * | 2016-02-05 | 2021-05-27 | Magic Leap, Inc. | Systems and methods for augmented reality |
US10547704B2 (en) | 2017-04-06 | 2020-01-28 | Sony Interactive Entertainment Inc. | Predictive bitrate selection for 360 video streaming |
US10217488B1 (en) * | 2017-12-15 | 2019-02-26 | Snap Inc. | Spherical video editing |
KR102157160B1 (en) * | 2018-12-27 | 2020-09-17 | 주식회사 다윈테크 | 360° virtual image experience system |
US11683464B2 (en) * | 2018-12-28 | 2023-06-20 | Canon Kabushiki Kaisha | Electronic device, control method, and non-transitory computer readable medium |
US11503227B2 (en) | 2019-09-18 | 2022-11-15 | Very 360 Vr Llc | Systems and methods of transitioning between video clips in interactive videos |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3580978A (en) * | 1968-06-06 | 1971-05-25 | Singer General Precision | Visual display method and apparatus |
US5495576A (en) * | 1993-01-11 | 1996-02-27 | Ritchey; Kurtis J. | Panoramic image based virtual reality/telepresence audio-visual system and method |
US5619255A (en) * | 1994-08-19 | 1997-04-08 | Cornell Research Foundation, Inc. | Wide-screen video system |
US5999220A (en) * | 1997-04-07 | 1999-12-07 | Washino; Kinya | Multi-format audio/video production system with frame-rate conversion |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3872238A (en) * | 1974-03-11 | 1975-03-18 | Us Navy | 360 Degree panoramic television system |
JPS5124211A (en) * | 1974-08-23 | 1976-02-27 | Victor Company Of Japan | Frequency conversion device for audio signals |
JPS60141087A (en) * | 1983-12-28 | 1985-07-26 | Tsutomu Ohashi | Reproducer of environment |
US5130794A (en) * | 1990-03-29 | 1992-07-14 | Ritchey Kurtis J | Panoramic display system |
US5130815A (en) * | 1990-07-20 | 1992-07-14 | Mti Associates | Method and apparatus for encoding a video signal having multi-language capabilities |
US5148310A (en) * | 1990-08-30 | 1992-09-15 | Batchko Robert G | Rotating flat screen fully addressable volume display system |
ES2043549B1 (en) * | 1992-04-30 | 1996-10-01 | Jp Producciones Sl | INTEGRAL RECORDING SYSTEM, PROJECTION-VISUALIZATION-AUDITION OF IMAGES AND / OR PERFECTED VIRTUAL REALITY. |
JPH06301390A (en) * | 1993-04-12 | 1994-10-28 | Sanyo Electric Co Ltd | Stereoscopic sound image controller |
US5850352A (en) * | 1995-03-31 | 1998-12-15 | The Regents Of The University Of California | Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images |
US5991085A (en) * | 1995-04-21 | 1999-11-23 | I-O Display Systems Llc | Head-mounted personal visual display apparatus with image generator and holder |
US5703604A (en) * | 1995-05-22 | 1997-12-30 | Dodeca Llc | Immersive dodecahedral video viewing system |
WO1999044698A2 (en) * | 1998-03-03 | 1999-09-10 | Arena, Inc. | System and method for tracking and assessing movement skills in multidimensional space |
JPH11308608A (en) * | 1998-02-19 | 1999-11-05 | Nippon Lsi Card Co Ltd | Dynamic image generating method, dynamic image generator, and dynamic image display method |
JP3232408B2 (en) * | 1997-12-01 | 2001-11-26 | 日本エルエスアイカード株式会社 | Image generation device, image presentation device, and image generation method |
US6064423A (en) * | 1998-02-12 | 2000-05-16 | Geng; Zheng Jason | Method and apparatus for high resolution three dimensional display |
CA2371349A1 (en) * | 1998-05-13 | 1999-11-18 | Scott Gilbert | Panoramic movies which simulate movement through multidimensional space |
JP3449937B2 (en) * | 1999-01-14 | 2003-09-22 | 日本電信電話株式会社 | Panorama image creation method, surrounding situation transmission method using panorama image, and recording medium recording these methods |
JP4453119B2 (en) * | 1999-06-08 | 2010-04-21 | ソニー株式会社 | Camera calibration apparatus and method, image processing apparatus and method, program providing medium, and camera |
GB9914914D0 (en) * | 1999-06-26 | 1999-08-25 | British Aerospace | Measurement apparatus for measuring the position and orientation of a first part to be worked, inspected or moved |
JP2001108421A (en) * | 1999-10-13 | 2001-04-20 | Sanyo Electric Co Ltd | Method and apparatus for three-dimensional modeling, and medium recording three-dimensional modeling program |
US6765569B2 (en) * | 2001-03-07 | 2004-07-20 | University Of Southern California | Augmented-reality tool employing scene-feature autocalibration during camera motion |
- 2001
  - 2001-06-25 US US09/891,733 patent/US20010056574A1/en not_active Abandoned
  - 2001-12-21 DE DE10197255T patent/DE10197255T5/en not_active Withdrawn
  - 2001-12-21 JP JP2003508064A patent/JP2005500721A/en active Pending
  - 2001-12-21 WO PCT/US2001/049287 patent/WO2003001803A1/en active Application Filing
- 2005
  - 2005-09-19 US US11/230,173 patent/US7688346B2/en active Active
- 2010
  - 2010-03-26 US US12/732,671 patent/US20100302348A1/en not_active Abandoned
Cited By (70)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020147586A1 (en) * | 2001-01-29 | 2002-10-10 | Hewlett-Packard Company | Audio announcements with range indications |
JP2005099064A (en) * | 2002-09-05 | 2005-04-14 | Sony Computer Entertainment Inc | Display system, display control apparatus, display apparatus, display method and user interface device |
US20040125044A1 (en) * | 2002-09-05 | 2004-07-01 | Akira Suzuki | Display system, display control apparatus, display apparatus, display method and user interface device |
US20100195913A1 (en) * | 2002-12-31 | 2010-08-05 | Rajeev Sharma | Method and System for Immersing Face Images into a Video Sequence |
US7826644B2 (en) | 2002-12-31 | 2010-11-02 | Rajeev Sharma | Method and system for immersing face images into a video sequence |
US7734070B1 (en) | 2002-12-31 | 2010-06-08 | Rajeev Sharma | Method and system for immersing face images into a video sequence |
WO2004088994A1 (en) * | 2003-04-02 | 2004-10-14 | Daimlerchrysler Ag | Device for taking into account the viewer's position in the representation of 3d image contents on 2d display devices |
US7256779B2 (en) * | 2003-05-08 | 2007-08-14 | Nintendo Co., Ltd. | Video game play using panoramically-composited depth-mapped cube mapping |
US20040222988A1 (en) * | 2003-05-08 | 2004-11-11 | Nintendo Co., Ltd. | Video game play using panoramically-composited depth-mapped cube mapping |
US7118228B2 (en) | 2003-11-04 | 2006-10-10 | Hewlett-Packard Development Company, L.P. | Image display system |
WO2005064440A2 (en) * | 2003-12-23 | 2005-07-14 | Siemens Aktiengesellschaft | Device and method for the superposition of the real field of vision in a precisely positioned manner |
WO2005064440A3 (en) * | 2003-12-23 | 2006-01-26 | Siemens Ag | Device and method for the superposition of the real field of vision in a precisely positioned manner |
US20050231532A1 (en) * | 2004-03-31 | 2005-10-20 | Canon Kabushiki Kaisha | Image processing method and image processing apparatus |
US7728852B2 (en) * | 2004-03-31 | 2010-06-01 | Canon Kabushiki Kaisha | Image processing method and image processing apparatus |
US20080024594A1 (en) * | 2004-05-19 | 2008-01-31 | Ritchey Kurtis J | Panoramic image-based virtual reality/telepresence audio-visual system and method |
US20060069591A1 (en) * | 2004-09-29 | 2006-03-30 | Razzano Michael R | Dental image charting system and method |
US20060285636A1 (en) * | 2004-09-29 | 2006-12-21 | Interactive Diagnostic Imaging, Inc. | Dental image charting system and method |
US9958934B1 (en) * | 2006-05-01 | 2018-05-01 | Jeffrey D. Mullen | Home and portable augmented reality and virtual reality video game consoles |
US10838485B2 (en) | 2006-05-01 | 2020-11-17 | Jeffrey D. Mullen | Home and portable augmented reality and virtual reality game consoles |
US20080030573A1 (en) * | 2006-05-11 | 2008-02-07 | Ritchey Kurtis J | Volumetric panoramic sensor systems |
US20080007617A1 (en) * | 2006-05-11 | 2008-01-10 | Ritchey Kurtis J | Volumetric panoramic sensor systems |
US8953057B2 (en) | 2006-05-22 | 2015-02-10 | Canon Kabushiki Kaisha | Display apparatus with image-capturing function, image processing apparatus, image processing method, and image display system |
EP1860612A3 (en) * | 2006-05-22 | 2011-08-31 | Canon Kabushiki Kaisha | Image distortion correction |
EP1860612A2 (en) * | 2006-05-22 | 2007-11-28 | Canon Kabushiki Kaisha | Image distortion correction |
US20070268316A1 (en) * | 2006-05-22 | 2007-11-22 | Canon Kabushiki Kaisha | Display apparatus with image-capturing function, image processing apparatus, image processing method, and image display system |
US10819954B2 (en) | 2006-11-16 | 2020-10-27 | Immersive Licensing, Inc. | Distributed video sensor panoramic imaging system |
US20080117288A1 (en) * | 2006-11-16 | 2008-05-22 | Imove, Inc. | Distributed Video Sensor Panoramic Imaging System |
US10375355B2 (en) | 2006-11-16 | 2019-08-06 | Immersive Licensing, Inc. | Distributed video sensor panoramic imaging system |
US8771064B2 (en) | 2010-05-26 | 2014-07-08 | Aristocrat Technologies Australia Pty Limited | Gaming system and a method of gaming |
US9236000B1 (en) * | 2010-12-23 | 2016-01-12 | Amazon Technologies, Inc. | Unpowered augmented reality projection accessory display device |
US10031335B1 (en) | 2010-12-23 | 2018-07-24 | Amazon Technologies, Inc. | Unpowered augmented reality projection accessory display device |
US9383831B1 (en) * | 2010-12-23 | 2016-07-05 | Amazon Technologies, Inc. | Powered augmented reality projection accessory display device |
US9766057B1 (en) | 2010-12-23 | 2017-09-19 | Amazon Technologies, Inc. | Characterization of a scene with structured light |
US9721386B1 (en) | 2010-12-27 | 2017-08-01 | Amazon Technologies, Inc. | Integrated augmented reality environment |
US9508194B1 (en) | 2010-12-30 | 2016-11-29 | Amazon Technologies, Inc. | Utilizing content output devices in an augmented reality environment |
US9607315B1 (en) | 2010-12-30 | 2017-03-28 | Amazon Technologies, Inc. | Complementing operation of display devices in an augmented reality environment |
US20120281128A1 (en) * | 2011-05-05 | 2012-11-08 | Sony Corporation | Tailoring audio video output for viewer position and needs |
US9950262B2 (en) | 2011-06-03 | 2018-04-24 | Nintendo Co., Ltd. | Storage medium storing information processing program, information processing device, information processing system, and information processing method |
US20120307001A1 (en) * | 2011-06-03 | 2012-12-06 | Nintendo Co., Ltd. | Information processing system, information processing device, storage medium storing information processing program, and moving image reproduction control method |
US10471356B2 (en) | 2011-06-03 | 2019-11-12 | Nintendo Co., Ltd. | Storage medium storing information processing program, information processing device, information processing system, and information processing method |
CN105324984A (en) * | 2013-12-09 | 2016-02-10 | CJ CGV Co., Ltd. | Method and system for generating multi-projection images |
US20160127723A1 (en) * | 2013-12-09 | 2016-05-05 | Cj Cgv Co., Ltd. | Method and system for generating multi-projection images |
US20160328824A1 (en) * | 2013-12-09 | 2016-11-10 | Cj Cgv Co., Ltd. | Method and system for generating multi-projection images |
CN106165402A (en) * | 2014-04-22 | 2016-11-23 | 索尼公司 | Information reproduction apparatus, information regeneration method, information record carrier and information recording method |
EP3136713A4 (en) * | 2014-04-22 | 2017-12-06 | Sony Corporation | Information reproduction device, information reproduction method, information recording device, and information recording method |
US20170329394A1 (en) * | 2016-05-13 | 2017-11-16 | Benjamin Lloyd Goldstein | Virtual and augmented reality systems |
US20190149731A1 (en) * | 2016-05-25 | 2019-05-16 | Livit Media Inc. | Methods and systems for live sharing 360-degree video streams on a mobile device |
FR3057430A1 (en) * | 2016-10-10 | 2018-04-13 | Immersion | DEVICE FOR IMMERSION IN A REPRESENTATION OF AN ENVIRONMENT RESULTING FROM A SET OF IMAGES |
US20180176708A1 (en) * | 2016-12-20 | 2018-06-21 | Casio Computer Co., Ltd. | Output control device, content storage device, output control method and non-transitory storage medium |
US10593012B2 (en) | 2017-03-22 | 2020-03-17 | Mediatek Inc. | Method and apparatus for generating and encoding projection-based frame with 360-degree content represented in projection faces packed in segmented sphere projection layout |
TWI666912B (en) * | 2017-03-22 | 2019-07-21 | 聯發科技股份有限公司 | Method and apparatus for generating and encoding projection-based frame with 360-degree content represented in projection faces packed in segmented sphere projection layout |
US20180342267A1 (en) * | 2017-05-26 | 2018-11-29 | Digital Domain, Inc. | Spatialized rendering of real-time video data to 3d space |
US10796723B2 (en) * | 2017-05-26 | 2020-10-06 | Immersive Licensing, Inc. | Spatialized rendering of real-time video data to 3D space |
US20190007672A1 (en) * | 2017-06-30 | 2019-01-03 | Bobby Gene Burrough | Method and Apparatus for Generating Dynamic Real-Time 3D Environment Projections |
US11288937B2 (en) | 2017-06-30 | 2022-03-29 | Johnson Controls Tyco IP Holdings LLP | Security camera system with multi-directional mount and method of operation |
US11361640B2 (en) | 2017-06-30 | 2022-06-14 | Johnson Controls Tyco IP Holdings LLP | Security camera system with multi-directional mount and method of operation |
US10713811B2 (en) | 2017-09-29 | 2020-07-14 | Sensormatic Electronics, LLC | Security camera system with multi-directional mount and method of operation |
US20190104282A1 (en) * | 2017-09-29 | 2019-04-04 | Sensormatic Electronics, LLC | Security Camera System with Multi-Directional Mount and Method of Operation |
US11234312B2 (en) | 2017-10-16 | 2022-01-25 | Signify Holding B.V. | Method and controller for controlling a plurality of lighting devices |
WO2019076667A1 (en) * | 2017-10-16 | 2019-04-25 | Signify Holding B.V. | A method and controller for controlling a plurality of lighting devices |
US10712810B2 (en) * | 2017-12-08 | 2020-07-14 | Telefonaktiebolaget Lm Ericsson (Publ) | System and method for interactive 360 video playback based on user location |
US11703942B2 (en) | 2017-12-08 | 2023-07-18 | Telefonaktiebolaget Lm Ericsson (Publ) | System and method for interactive 360 video playback based on user location |
CN109996060A (en) * | 2017-12-30 | 2019-07-09 | 深圳多哚新技术有限责任公司 | A kind of virtual reality cinema system and information processing method |
US11086395B2 (en) * | 2019-02-15 | 2021-08-10 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
US11372474B2 (en) * | 2019-07-03 | 2022-06-28 | Saec/Kinetic Vision, Inc. | Systems and methods for virtual artificial intelligence development and testing |
US11644891B1 (en) * | 2019-07-03 | 2023-05-09 | SAEC/KineticVision, Inc. | Systems and methods for virtual artificial intelligence development and testing |
US11914761B1 (en) * | 2019-07-03 | 2024-02-27 | Saec/Kinetic Vision, Inc. | Systems and methods for virtual artificial intelligence development and testing |
US11816757B1 (en) * | 2019-12-11 | 2023-11-14 | Meta Platforms Technologies, Llc | Device-side capture of data representative of an artificial reality environment |
CN112233048A (en) * | 2020-12-11 | 2021-01-15 | 成都成电光信科技股份有限公司 | Spherical video image correction method |
CN113824746A (en) * | 2021-11-25 | 2021-12-21 | 山东信息职业技术学院 | Virtual reality information transmission method and virtual reality system |
Also Published As
Publication number | Publication date |
---|---|
WO2003001803A1 (en) | 2003-01-03 |
JP2005500721A (en) | 2005-01-06 |
US20060082643A1 (en) | 2006-04-20 |
US20100302348A1 (en) | 2010-12-02 |
US7688346B2 (en) | 2010-03-30 |
DE10197255T5 (en) | 2004-10-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7688346B2 (en) | VTV system | |
US7719563B2 (en) | VTV system | |
JP2005500721A5 (en) | ||
US10645369B2 (en) | Stereo viewing | |
CN110463195B (en) | Method and apparatus for rendering timed text and graphics in virtual reality video | |
KR20170017700A (en) | Electronic Apparatus generating 360 Degrees 3D Stereoscopic Panorama Images and Method thereof | |
AU1463597A (en) | Method and apparatus for converting a two-dimensional motion picture into a three-dimensional motion picture | |
CN113099204A (en) | Remote live-action augmented reality method based on VR head-mounted display equipment | |
EP1919219A1 (en) | Video transmitting apparatus, video display apparatus, video transmitting method and video display method | |
EP3301933A1 (en) | Methods, devices and stream to provide indication of mapping of omnidirectional images | |
KR20200065087A (en) | Multi-viewpoint-based 360 video processing method and apparatus | |
KR101825063B1 (en) | The hardware system for inputting 3D image in a flat panel | |
JP2018033107A (en) | Video distribution device and distribution method | |
Zheng et al. | Research on panoramic stereo live streaming based on the virtual reality | |
CN114040097A (en) | Large-scene interactive action capturing system based on multi-channel image acquisition and fusion | |
JP2001148806A (en) | Video image synthesis arithmetic processing unit, its method and system | |
JP2007323481A (en) | Video data transmission system and method, transmission processing apparatus and method, and reception processing apparatus and method | |
JP2004048803A (en) | Video composition processing system | |
JP2022021886A (en) | Vr video generation device and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |