WO1997026758A1 - Method and apparatus for insertion of virtual objects into a video sequence - Google Patents
- Publication number
- WO1997026758A1 (PCT/GB1997/000029)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frame
- feature points
- virtual object
- points
- sequence
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
- H04N5/2723—Insertion of virtual advertisement; Replacing advertisements physically present in the scene by virtual advertisement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
- G11B27/32—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
Definitions
- the present invention relates to insertion of virtual objects into video sequences, and in particular to sequences which have been previously generated.
- CG images and characters are widely used in feature films and commercials. They provide special effects possible only with CG content, as well as the distinctive look of a cartoon character. While in many instances the complete picture is computer generated, in other instances CG characters are to be inserted into a live image sequence taken by a physical camera.
- the apparent motion of the objects and the characters is a combination of the object's ego-motion in a 3D world and the motion of the camera.
- the ego-motion is determined by the animator.
- One possible solution is to use motion control systems in shooting the live footage.
- the motion of the camera is computer-controlled and recorded. These records are then used in a straightforward manner to render the CG characters in synchronization with camera motion.
- a known 3D object may be used to solve camera motion, by matching image features to the object's model. If this is not the case, we may try to solve the structure and the motion concurrently [J. Weng et al., Error Analysis of Motion Parameter Estimation from Image Sequences, First Intl. Conf. on Computer Vision, 1987, pp. 703-707]. These non-linear methods are inaccurate, slowly converging and computationally unstable.
- the present application provides a method and apparatus for insertion of CG characters into an existing video sequence, independent of motion control records or a known pattern.
- a method of insertion of virtual objects into a video sequence consisting of a plurality of video frames, comprising the steps of: i. detecting in one frame (Frame A) of the video sequence a set of feature points; ii. detecting in another frame (Frame B) of the video sequence the set of feature points; iii. detecting in each frame other than frame A or frame B at least a sub-set of the feature points; iv. positioning a virtual object in a defined position in frame A; v. positioning the virtual object in the defined position in frame B.
- apparatus for insertion of virtual objects into a video sequence consisting of a plurality of video frames, said apparatus including: i. means for detecting in one frame (Frame A) a set of feature points; ii. means for detecting in another frame (Frame B) the set of feature points; iii. means for detecting in each frame other than frame A or frame B at least a sub-set of the feature points; iv. means for positioning a virtual object in a defined position in frame A; and v. means for positioning the virtual object in the defined position in frame B.
- the CG character is constrained relative to a cube or other regularly shaped box, the cube representing the virtual object. The CG character can thereby be animated.
- Figure 1 shows an exemplary video sequence, illustrating in Figure 1A a first frame of the video sequence; in Figure 1B an intermediate frame (K) of the video sequence; in Figure 1C a last frame of the video sequence; and in Figure 1D a virtual object to be inserted into the video sequence of Figures 1A to 1C;
- Figure 2 shows apparatus according to the present invention
- Figure 3 shows a flow diagram illustrating the selection and storage of feature points
- Figure 4 shows a flow diagram illustrating the positioning of the virtual object in the first, last and intermediate frames
- Figure 5 shows a cube (as defined) enclosing a three-dimensional moving virtual character
- Figure 6 shows a flow diagram illustrating the solution of camera transformation corresponding to a frame.
- the present invention is related to the investigation of properties of feature points in three perspective views. As an example, consider the concept of the fundamental matrix (FM) [R. Deriche et al., Robust recovery of the epipolar geometry for an uncalibrated stereo rig, Lecture Notes in Computer Science, Vol. 800, Computer Vision - ECCV 94, Springer-Verlag Berlin Heidelberg 1994, pp. 567-576]. Given two corresponding points in two views, q and q' (in homogeneous coordinates), we can write q'ᵀFq = 0, where F is the 3x3 fundamental matrix of the view pair.
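As a concrete illustration of the constraint above, the following sketch (our own synthetic setup and variable names, not part of the patent) builds a fundamental matrix from a known camera pair and checks that q'ᵀFq vanishes for a projected point:

```python
import numpy as np

def skew(t):
    # Skew-symmetric matrix such that skew(t) @ v == np.cross(t, v)
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental(K, R, t):
    # F = K^-T [t]x R K^-1 for a second camera at [R | t] relative to
    # the first; both cameras share the intrinsic matrix K.
    Kinv = np.linalg.inv(K)
    return Kinv.T @ skew(t) @ R @ Kinv

# Synthetic pair: first camera at the origin, second rotated and translated
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
a = 0.1
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([1.0, 0.2, 0.0])
F = fundamental(K, R, t)

X = np.array([0.3, -0.2, 5.0])          # a 3D point in front of both cameras
q = K @ X
q = q / q[2]                            # homogeneous projection in view 1
q2 = K @ (R @ X + t)
q2 = q2 / q2[2]                         # homogeneous projection in view 2
residual = q2 @ F @ q                   # epipolar constraint q'^T F q ~ 0
```

For perfectly corresponding points the residual is zero up to floating-point precision; in practice the constraint is satisfied only approximately and F is estimated from many point pairs.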
- Figure 1A shows a first video frame which is assumed to be the first frame of a sequence, selected as now described.
- the sequence can be selected manually or automatically.
- the operator or an automatic feature selection system searches for a number of feature points in both a first frame (Frame 1), Figure 1A, and a last frame (Frame N), Figure 1C.
- in any intermediate frame, such as Figure 1B (Frame K), at least a sub-set of the points must be visible.
- Figure 1D is computer generated and in this example comprises a cube 12 (XYZW).
- the cube 12 is to be positioned on a shelf 14 of a bookcase 16.
- the VDU 22 receives a video sequence from VCR 24.
- the video controller 26 can control VCR 24 to evaluate a sequence of video shots, as in Figures 1A to 1C, for a sequence having the desired number of feature points. Such a sequence could be very long for a fairly static camera or short for a fast-panning camera.
- the feature points may be selected manually, for example with mouse 28, or automatically. Preferably, as stated above, at least eight feature points are selected to appear in all frames of a sequence. When the controller 26, in conjunction with processor 30, detects that there are fewer than eight points, the video sequence is terminated. If further insertion of an object is required then a continuing further video sequence is generated using the same principles.
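The termination rule can be sketched as follows (a hypothetical helper with our own names, not the patent's implementation): given the set of feature points visible in each frame, the sequence ends at the first frame that shares fewer than eight points with the first frame.

```python
MIN_FEATURES = 8  # minimum number of common points the geometry needs

def sequence_end(visible_per_frame, min_features=MIN_FEATURES):
    # visible_per_frame[i] lists the feature ids detected in frame i.
    # Returns the index of the first frame sharing fewer than
    # min_features points with frame 0 (or the sequence length if
    # none does), i.e. where the video sequence would be terminated
    # and a continuing further sequence started.
    if not visible_per_frame:
        return 0
    tracked = set(visible_per_frame[0])
    for i, visible in enumerate(visible_per_frame):
        tracked &= set(visible)
        if len(tracked) < min_features:
            return i
    return len(visible_per_frame)
```

For example, a sequence whose frames carry ten common features until frame 3, where only five of them remain visible, would be cut at frame 3.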
- CG object 12 is created by generator 32.
- the CG object 12 is then positioned as desired in the first and last frames of the sequence.
- the orientation of the object in the first and last frames is accomplished manually such that the object appears to be naturally correct in both frames.
- the CG object 12 is then automatically positioned in all intermediate frames by the processors 30 and 34 as follows with reference to Figures 3 and 4.
- from a start 40, the processor searches for feature points in a first frame 42 and continues searching for these features until the sequence is lost 44. The feature positions are then stored in store 36 - step 46. The positions of these features in all intermediate frames are then stored in store 36 - step 48.
- the CG object 12 is then generated 50, 52 - Figure 4 and positioned on the shelf 14 in a first frame of the video sequence - step 54.
- One or more reference points are selected for the CG object - step
- the positions of the reference points in the first frame are stored in store 38 - step 58.
- the CG object is then positioned in the last frame of the sequence - step 60 and the position of the reference points is stored for this position of the CG object in store 38 - step 62.
- the positions of the reference points for the object 12 are calculated for each intermediate frame i by calculating the FM or the trilinear tensor (TT) using the triplets of reference points in the first frame, the last frame and frame i - step 64.
- the location of the reference points for the object in Frame i is computed from the locations of the corresponding object points in the first and in the last frames, as well as the FM or the TT as described before.
- the location of the reference point m can be computed using the TT and its locations in the first and in the last frames.
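One standard way to realise this transfer with the FM is epipolar point transfer: the point's position in frame i is the intersection of the two epipolar lines induced by its positions in frames A and B. The sketch below uses a synthetic three-camera setup and our own variable names (it is an illustration of the principle, not the patent's code):

```python
import numpy as np

def skew(t):
    # Skew-symmetric matrix such that skew(t) @ v == np.cross(t, v)
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental(K, R, t):
    # F maps a point in the first view to its epipolar line in the
    # second view (l' = F @ q), for a second camera at [R | t].
    Kinv = np.linalg.inv(K)
    return Kinv.T @ skew(t) @ R @ Kinv

def transfer(F_ai, F_bi, q_a, q_b):
    # Epipolar transfer: intersect the epipolar lines that the point's
    # positions in frames A and B induce in frame i.
    q_i = np.cross(F_ai @ q_a, F_bi @ q_b)
    return q_i / q_i[2]

def rot_y(a):
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

# Synthetic three-camera setup: frame A's camera at the origin
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R_i, t_i = rot_y(0.1), np.array([1.0, 0.2, 0.0])   # camera at frame i
R_b, t_b = rot_y(0.2), np.array([2.0, 0.1, 0.3])   # camera at frame B

X = np.array([0.3, -0.2, 5.0])                     # a reference point in 3D
def proj(R, t):
    x = K @ (R @ X + t)
    return x / x[2]
q_a = proj(np.eye(3), np.zeros(3))                 # position in frame A
q_b = proj(R_b, t_b)                               # position in frame B
q_i_true = proj(R_i, t_i)                          # ground truth in frame i

F_ai = fundamental(K, R_i, t_i)                    # frame A -> frame i
R_rel = R_i @ R_b.T                                # frame B -> frame i pose
F_bi = fundamental(K, R_rel, t_i - R_rel @ t_b)
q_i = transfer(F_ai, F_bi, q_a, q_b)               # transferred position
```

Epipolar transfer degenerates when the point lies on the plane of the three camera centres (the two lines coincide); the TT avoids this degeneracy, which is one reason to prefer it for three-view transfer.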
- the CG object is a cube or other regular solid shape (hereinafter referred to as a cube) there is a possibility of providing an animated figure which is associated with the cube.
- the figure may be completely within the cube or could be larger than the cube but constrained in its movement in relation to the cube.
- the animated figure will also be positioned.
- if the cube were made a rectangular box the size of shelf 14, then a rabbit could be made to dance along the shelf.
- in step 54, when we position the virtual object, the transformation applied to the model in step 52 can be stored; the inverse of this transformation constitutes a camera transformation, due to the duality between the camera and object motions. Therefore, when we generate the virtual object in step 52, we would prefer to generate it relative to a rectangular bounding box (see Figure 5), so that the vertices of this bounding box can be used as reference points in step 64.
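The camera/object duality invoked here amounts to inverting a rigid transform: moving the object by (R, t) in front of a fixed camera looks identical to moving the camera by the inverse transform in front of a fixed object. A minimal sketch with assumed names:

```python
import numpy as np

def invert_rigid(R, t):
    # The inverse of the rigid transform x -> R @ x + t is
    # x -> R.T @ x - R.T @ t, since R is orthonormal (R^-1 == R^T).
    return R.T, -R.T @ t

# Example: an object transformation stored at step 54, and its dual
# camera transformation obtained by inversion
a = 0.3
R_obj = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a), np.cos(a), 0.0],
                  [0.0, 0.0, 1.0]])
t_obj = np.array([1.0, 2.0, 3.0])
R_cam, t_cam = invert_rigid(R_obj, t_obj)
```

Composing the object transform with its inverse returns any point to where it started, which is exactly the sense in which the stored transformation and the camera transformation are duals.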
- the camera transformation corresponding to a frame can be solved as indicated in Figure 6: in step 68 the model coordinates for the reference points of the virtual object, from step 52 of Figure 4, are combined with the image coordinates of the reference points in the intermediate frame (step 70) to solve for the camera transformation (step 72), which is then stored in store 35 (Figure 1) - step 74.
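The patent does not specify the solver used in step 72; one standard way to recover a camera transformation from such model/image correspondences is the Direct Linear Transform (DLT). The sketch below, with our own variable names and a synthetic ground-truth camera, recovers a 3x4 projection matrix from the eight bounding-box vertices:

```python
import numpy as np
from itertools import product

def solve_camera(model_pts, image_pts):
    # Direct Linear Transform: recover the 3x4 projection matrix P
    # (up to scale) from >= 6 non-coplanar 3D/2D correspondences.
    A = []
    for (X, Y, Z), (u, v) in zip(model_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The null vector of A (last row of V^T) holds the 12 entries of P
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic ground-truth camera and the 8 vertices of a bounding box
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
a = 0.1
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([[0.2], [0.1], [0.5]])
P_true = K @ np.hstack([R, t])

model = [np.array(v, dtype=float) + [0.0, 0.0, 4.0]
         for v in product([0.0, 1.0], repeat=3)]   # unit cube pushed to z~4
image = [project(P_true, X) for X in model]

P = solve_camera(model, image)                     # recovered camera
reproj_err = max(np.linalg.norm(project(P, X) - q)
                 for X, q in zip(model, image))
```

The eight non-coplanar box vertices are exactly the kind of reference-point set this needs; with noisy image measurements the same linear system is solved in a least-squares sense.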
- this transformation is applied to the actual object: if we allow the virtual character 76 to move relative to the bounding box 78 in the object coordinate system, then we take the animated model (character) at each intermediate frame and further transform it by the camera transformation computed as described above.
- the animated model will therefore move naturally, and the correct perspective etc. will be provided by the camera transformation calculated as above.
- An alternative method to insert an object having ego-motion is to generate it manually only in the coordinate systems of frame A and frame B. This can be manually adjusted by an animator for correct appearance in both images. The entire object can then be reprojected into all other frames by using its locations in frames A and B, and the FM or TT methods.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP97900282A EP0875115A1 (en) | 1996-01-19 | 1997-01-07 | Method and apparatus for insertion of virtual objects into a video sequence |
AU13873/97A AU1387397A (en) | 1996-01-19 | 1997-01-07 | Method and apparatus for insertion of virtual objects into video sequence |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9601098.8 | 1996-01-19 | ||
GB9601098A GB2312582A (en) | 1996-01-19 | 1996-01-19 | Insertion of virtual objects into a video sequence |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1997026758A1 true WO1997026758A1 (en) | 1997-07-24 |
Family
ID=10787260
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB1997/000029 WO1997026758A1 (en) | 1996-01-19 | 1997-01-07 | Method and apparatus for insertion of virtual objects into a video sequence |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP0875115A1 (en) |
AU (1) | AU1387397A (en) |
GB (1) | GB2312582A (en) |
WO (1) | WO1997026758A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6360234B2 (en) | 1997-08-14 | 2002-03-19 | Virage, Inc. | Video cataloger system with synchronized encoders |
US6463444B1 (en) | 1997-08-14 | 2002-10-08 | Virage, Inc. | Video cataloger system with extensibility |
US6567980B1 (en) | 1997-08-14 | 2003-05-20 | Virage, Inc. | Video cataloger system with hyperlinked output |
US7206434B2 (en) | 2001-07-10 | 2007-04-17 | Vistas Unlimited, Inc. | Method and system for measurement of the duration an area is included in an image stream |
US7230653B1 (en) | 1999-11-08 | 2007-06-12 | Vistas Unlimited | Method and apparatus for real time insertion of images into video |
US7295752B1 (en) | 1997-08-14 | 2007-11-13 | Virage, Inc. | Video cataloger system with audio track extraction |
US9338520B2 (en) | 2000-04-07 | 2016-05-10 | Hewlett Packard Enterprise Development Lp | System and method for applying a database to video multimedia |
US9684728B2 (en) | 2000-04-07 | 2017-06-20 | Hewlett Packard Enterprise Development Lp | Sharing video |
US10089550B1 (en) | 2011-08-17 | 2018-10-02 | William F. Otte | Sports video display |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2351199B (en) * | 1996-09-13 | 2001-04-04 | Pandora Int Ltd | Image processing |
US6525765B1 (en) | 1997-04-07 | 2003-02-25 | Pandora International, Inc. | Image processing |
US6965397B1 (en) | 1999-11-22 | 2005-11-15 | Sportvision, Inc. | Measuring camera attitude |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5353392A (en) * | 1990-04-11 | 1994-10-04 | Multi Media Techniques | Method and device for modifying a zone in successive images |
US5436672A (en) * | 1994-05-27 | 1995-07-25 | Symah Vision | Video processing system for modifying a zone in successive images |
WO1995025399A1 (en) * | 1994-03-14 | 1995-09-21 | Scitex America Corporation | A system for implanting an image into a video stream |
WO1995030312A1 (en) * | 1994-04-29 | 1995-11-09 | Orad, Inc. | Improved chromakeying system |
-
1996
- 1996-01-19 GB GB9601098A patent/GB2312582A/en not_active Withdrawn
-
1997
- 1997-01-07 WO PCT/GB1997/000029 patent/WO1997026758A1/en not_active Application Discontinuation
- 1997-01-07 EP EP97900282A patent/EP0875115A1/en not_active Withdrawn
- 1997-01-07 AU AU13873/97A patent/AU1387397A/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5353392A (en) * | 1990-04-11 | 1994-10-04 | Multi Media Techniques | Method and device for modifying a zone in successive images |
WO1995025399A1 (en) * | 1994-03-14 | 1995-09-21 | Scitex America Corporation | A system for implanting an image into a video stream |
WO1995030312A1 (en) * | 1994-04-29 | 1995-11-09 | Orad, Inc. | Improved chromakeying system |
US5436672A (en) * | 1994-05-27 | 1995-07-25 | Symah Vision | Video processing system for modifying a zone in successive images |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6360234B2 (en) | 1997-08-14 | 2002-03-19 | Virage, Inc. | Video cataloger system with synchronized encoders |
US6463444B1 (en) | 1997-08-14 | 2002-10-08 | Virage, Inc. | Video cataloger system with extensibility |
US6567980B1 (en) | 1997-08-14 | 2003-05-20 | Virage, Inc. | Video cataloger system with hyperlinked output |
US6877134B1 (en) | 1997-08-14 | 2005-04-05 | Virage, Inc. | Integrated data and real-time metadata capture system and method |
US7093191B1 (en) | 1997-08-14 | 2006-08-15 | Virage, Inc. | Video cataloger system with synchronized encoders |
US7295752B1 (en) | 1997-08-14 | 2007-11-13 | Virage, Inc. | Video cataloger system with audio track extraction |
US7230653B1 (en) | 1999-11-08 | 2007-06-12 | Vistas Unlimited | Method and apparatus for real time insertion of images into video |
US9338520B2 (en) | 2000-04-07 | 2016-05-10 | Hewlett Packard Enterprise Development Lp | System and method for applying a database to video multimedia |
US9684728B2 (en) | 2000-04-07 | 2017-06-20 | Hewlett Packard Enterprise Development Lp | Sharing video |
US7206434B2 (en) | 2001-07-10 | 2007-04-17 | Vistas Unlimited, Inc. | Method and system for measurement of the duration an area is included in an image stream |
US10089550B1 (en) | 2011-08-17 | 2018-10-02 | William F. Otte | Sports video display |
Also Published As
Publication number | Publication date |
---|---|
EP0875115A1 (en) | 1998-11-04 |
GB2312582A (en) | 1997-10-29 |
AU1387397A (en) | 1997-08-11 |
GB9601098D0 (en) | 1996-03-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Kanade et al. | Virtualized reality: Concepts and early results | |
Pollefeys et al. | Visual modeling with a hand-held camera | |
US6084979A (en) | Method for creating virtual reality | |
Guillou et al. | Using vanishing points for camera calibration and coarse 3D reconstruction from a single image | |
US6124864A (en) | Adaptive modeling and segmentation of visual image streams | |
US6266068B1 (en) | Multi-layer image-based rendering for video synthesis | |
GB2391149A (en) | Processing scene objects | |
Saito et al. | Appearance-based virtual view generation from multicamera videos captured in the 3-d room | |
EP0903695B1 (en) | Image processing apparatus | |
WO1997026758A1 (en) | Method and apparatus for insertion of virtual objects into a video sequence | |
US7209136B2 (en) | Method and system for providing a volumetric representation of a three-dimensional object | |
US6404913B1 (en) | Image synthesizing apparatus and method, position detecting apparatus and method, and supply medium | |
JP2000268179A (en) | Three-dimensional shape information obtaining method and device, two-dimensional picture obtaining method and device and record medium | |
WO2003036384A2 (en) | Extendable tracking by line auto-calibration | |
US6795090B2 (en) | Method and system for panoramic image morphing | |
US5793372A (en) | Methods and apparatus for rapidly rendering photo-realistic surfaces on 3-dimensional wire frames automatically using user defined points | |
Kanade et al. | Virtualized reality: Being mobile in a visual scene | |
Kanade et al. | Virtualized reality: perspectives on 4D digitization of dynamic events | |
Ponto et al. | Effective replays and summarization of virtual experiences | |
Kang et al. | Tour into the video: image-based navigation scheme for video sequences of dynamic scenes | |
Havaldar et al. | Synthesizing Novel Views from Unregistered 2‐D Images | |
Chan et al. | A panoramic-based walkthrough system using real photos | |
Mayer et al. | Multiresolution texture for photorealistic rendering | |
Kim et al. | Digilog miniature: real-time, immersive, and interactive AR on miniatures | |
JPH10111934A (en) | Method and medium for three-dimensional shape model generation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1997900282 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1997900282 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
NENP | Non-entry into the national phase |
Ref country code: JP Ref document number: 97525765 Format of ref document f/p: F |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1997900282 Country of ref document: EP |