WO1995019093A1 - Viewing imaged objects from selected points of view - Google Patents

Viewing imaged objects from selected points of view

Info

Publication number
WO1995019093A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
view
taken
image
imaging devices
Prior art date
Application number
PCT/US1995/000303
Other languages
French (fr)
Inventor
Richard C. Fuisz
Gerald E. Battist
Original Assignee
Fuisz Richard C
Battist Gerald E
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuisz Richard C, Battist Gerald E
Priority to JP7518668A (JPH09507620A)
Priority to EP95907364A (EP0742987A4)
Publication of WO1995019093A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189 Recording image signals; Reproducing recorded image signals

Abstract

An object (18) is imaged from a plurality of spaced apart locations (16), having known relative positions with respect to each other, to generate a plurality of sequences of images of the object taken from different points of view (17). These images are displayable to the user, or are manipulated by a processor (22) under user direction (24) using the known relative position information, to construct a displayable individual image or sequence of images of the object as if taken from a user selected point of view (33) different from any of the taken points of view. The generated images or the constructed images are further processable under user control to facilitate zoom in and zoom out on the imaged object, and removal of unwanted portions of an image.

Description

VIEWING IMAGED OBJECTS FROM SELECTED POINTS OF VIEW
CROSS-REFERENCE TO RELATED APPLICATIONS
This Application is a continuation-in-part of prior co-pending United States application for patent Serial No. 08/179,383, filed January 10, 1994.
BACKGROUND OF THE INVENTION
Technical Field of the Invention
The present invention relates to the art of recording and displaying images of objects and, in particular, to the manipulation of such images for presentation from selected points of view.
Description of Related Art
The viewer of objects (for example, articles, events, scenes or locations) that have been optically imaged in, for example, a motion picture, has traditionally been restricted to viewing sequences of images of those objects from a limited number of points of view established at the time of imaging. In fact, when viewing objects recorded as videotape or motion picture film images, the viewer is commonly limited to only a single point of view as selected by another person (e.g., a director) when that object was imaged. With such sequences of images, the viewer is often frustrated by the fact that the point of view desired by the viewer was not selected and/or that a plurality of different views of the imaged object are not available for consideration. Furthermore, when multiple points of view concerning an object are imaged and made available, the viewer may continue to be frustrated by the fact that a certain other point of view with respect to the object was not selected and made available for viewing.
SUMMARY OF THE INVENTION
The foregoing concerns are addressed by the method and apparatus of the present invention wherein a plurality of imaging devices are positioned at spaced apart locations for recording sequences of images of an object from a plurality of different points of view. A laser tracking (direction and distance) device determines the relative locations of the imaging devices, and the relative location determination is synchronized with the recorded sequences of images. A processor presents the recorded images as taken by the imaging devices to the user as is, if desired, to display the imaged object from one of the taken points of view. In addition, the processor manipulates the taken sequences of images, in accordance with viewer input provided through an interface in conjunction with the determined relative locations of the imaging devices, to construct a sequence of images or a freeze frame image of the object for display as if taken from a viewer selected point of view. The processor further processes the images under user control to facilitate zoom in and zoom out on the imaged object, and removal of unwanted portions of the imaged object for display.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the method and apparatus of the present invention may be had by reference to the following Detailed Description when taken in conjunction with the accompanying Drawings wherein:
FIGURE 1 is a block diagram of the image processing system of the present invention;
FIGURE 2 illustrates the exemplary installation of a plurality of imaging devices around and about an object to be imaged in accordance with the teachings of the present invention;
FIGURE 3 shows a moveable structure for mounting and supporting the plurality of imaging devices around and about an object to be imaged;
FIGURE 4 is a flow diagram of a first embodiment of the method of the present invention; and
FIGURE 5 is a flow diagram of a second embodiment of the method of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
Referring now to FIGURE 1, there is shown a block diagram of the image processing system 10 of the present invention. The system includes a plurality of imaging devices 12 (such as digitizing optical cameras, high definition film cameras and/or three-dimensional imaging cameras) connected to a storage device 14. The sequences of images recorded by the imaging devices 12 are separately stored by the storage device 14 and synchronized in accordance with a time recording for future retrieval and processing in a manner to be described. In this connection, the sequences of images are stored on film, optical or magnetic tape, optical or magnetic disk, or other suitable storage media as desired or necessitated. It will, of course, be understood that the storage device 14 used to store the sequences of images need not also be used in the retrieval of the images. Thus, the functions of storing the image data on and subsequently retrieving the image data from the media may occur at different locations. Furthermore, and/or alternatively, the sequences of images may be simultaneously, separately and contemporaneously transmitted (also synchronized in accordance with a time recording) over lines 15 from the imaging devices 12 using digitization, polarization, different bandwidth or sequential simultaneous transmission techniques for further processing in a manner to be described. Transmission of the images over lines 15 is especially useful in "live" processing and display of images, as will be further described herein.
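To make the time-code synchronization concrete, the sketch below (in Python, which the patent of course does not prescribe) stores each device's sequence separately and retrieves the simultaneous frames for a given instant by matching the shared time recording. The Frame and FrameStore names and the matching tolerance are illustrative assumptions, not terms from the patent.
```python
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np


@dataclass
class Frame:
    """One image from one imaging device 12, stamped with the shared time recording."""
    camera_id: int
    timestamp: float   # seconds on the shared time recording
    image: np.ndarray  # H x W x 3 pixel array


@dataclass
class FrameStore:
    """Storage device 14, sketched: each device's sequence is stored separately."""
    sequences: Dict[int, List[Frame]] = field(default_factory=lambda: defaultdict(list))

    def record(self, frame: Frame) -> None:
        self.sequences[frame.camera_id].append(frame)

    def frames_at(self, timestamp: float, tolerance: float = 1e-3) -> List[Frame]:
        # Retrieve the simultaneous frames from every device for one instant,
        # matched on the shared time recording.
        return [f for seq in self.sequences.values() for f in seq
                if abs(f.timestamp - timestamp) <= tolerance]


# Example: two devices record a frame at the same time code.
store = FrameStore()
store.record(Frame(0, 0.04, np.zeros((480, 640, 3), np.uint8)))
store.record(Frame(1, 0.04, np.zeros((480, 640, 3), np.uint8)))
print(len(store.frames_at(0.04)))  # -> 2 simultaneous frames
```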
Referring now to FIGURE 2, there is shown an illustration of an exemplary installation of the plurality of imaging devices 12 at a plurality of selected spaced apart locations 16 around and about an object 18. The plurality of imaging devices 12 are preferably positioned at the locations 16 to simultaneously record a sequence of images of the object 18 from a plurality of different points of view (hereinafter referred to as "taken points of view" 17).
As used herein, the term "object" 18 is used generally to describe not only a single article, person or thing, but also a collection of articles, persons or things forming a scene, and thus further includes a particular geographic location or an event or activity occurring at that location. Imaged objects 18 thus include, without limitation, scenes for motion picture or television productions as well as "live" events such as televised concerts or sporting events.
In the installation of FIGURE 2, five imaging devices 12 are positioned one at each of the corners of an imaginary square 20, and one directly above the center of the square. It will, of course, be understood that other installation configurations not necessarily completely surrounding and/or with more or fewer imaging devices 12 are possible (including a spherical configuration) and that the use of the square 20 configuration is by way of example only. Thus, the imaging devices 12 may be positioned in accordance with the present invention in any manner desired provided that the imaging devices are spaced apart from each other at locations 16 that provide a plurality of different taken points of view 17 with respect to the object 18 being imaged.
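As a worked illustration of this example configuration only, the following sketch computes the five locations 16 (four corners of an imaginary square plus one directly above its center); the coordinate convention and the dimensions in the usage example are assumptions.
```python
import numpy as np

def square_configuration(side: float, height: float) -> np.ndarray:
    """Five device locations for the FIGURE 2 example: one at each corner of
    an imaginary square of the given side length (in the z = 0 plane), plus
    one directly above the center at the given height. Returns (5, 3) xyz."""
    half = side / 2.0
    corners = np.array([[-half, -half, 0.0],
                        [ half, -half, 0.0],
                        [ half,  half, 0.0],
                        [-half,  half, 0.0]])
    overhead = np.array([[0.0, 0.0, height]])
    return np.vstack([corners, overhead])

# Example: a 10 m square with the fifth device 5 m above the center.
print(square_configuration(10.0, 5.0))
```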
Referring now to both FIGURES 1 and 2, the plurality of sequences of images of the object 18 recorded by the imaging devices 12 and stored in the storage device 14 are simultaneously retrievable by a processor 22 from the storage device 14, or receivable over lines 15 by the processor from the imaging devices in a "live" operating mode. An interactive device 24 (such as a graphical user interface) is connected to the processor and operated by the user to choose a point of view from which the user desires to view the imaged object. This chosen point of view may comprise one of the plurality of different taken points of view 17 provided by the imaging devices 12 at the locations 16. Responsive to such a choice by the user through the interactive device 24, the processor 22 will acquire the sequence of images taken by the imaging device at the chosen location 16 and display those images on a display 26 comprising either a monitor 28 or a virtual reality type display helmet 30. Alternatively, in response to actuation of the device 24, the user may select one image in any of the taken sequences for viewing in a "freeze frame" mode.
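Freeze-frame selection amounts to retrieving the stored image whose time code is nearest the requested instant. A minimal sketch, with the (timestamp, image) pair representation assumed purely for illustration:
```python
def freeze_frame(sequence, timestamp):
    """Return the single image in one taken sequence whose time code is
    closest to the requested instant ("freeze frame" mode)."""
    return min(sequence, key=lambda pair: abs(pair[0] - timestamp))[1]

# Example with placeholder images represented as strings:
sequence = [(0.00, "frame-0"), (0.04, "frame-1"), (0.08, "frame-2")]
print(freeze_frame(sequence, 0.05))  # -> "frame-1"
```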
The chosen point of view may alternatively comprise a point of view defined by a selected location different from (for example, somewhere between) any of the locations 16 associated with the placement of the imaging devices 12. Representative selected locations are illustrated in FIGURE 2 at each reference numeral 32. Responsive to such a selection by the user through the interactive device 24 of such a point of view with respect to the object 18 (hereinafter referred to as the "selected point of view" 33), the processor 22 will acquire the sequences of images recorded by the imaging devices 12 at each of the locations 16, and through the use of fuzzy logic processing 34 construct a sequence of images or a single freeze frame image from the recorded sequences of images having a selected point of view 33 corresponding to the selected location 32. This constructed sequence of images or freeze frame image will also be presented to the user on the display 26. Thus, the present invention allows a user to select for viewing an individual image or sequence of images of the object 18 from a selected point of view 33 (at one of the locations 32) not provided by the placement of the imaging devices 12 at the locations 16 for the taken points of view 17.
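The patent leaves the fuzzy logic processing 34 unspecified, so the sketch below is only a crude stand-in for the construction step: it weights the simultaneous taken frames by inverse distance from the selected location 32 and blends them. A faithful implementation would geometrically warp each view before combining; the function and parameter names are assumptions.
```python
import numpy as np

def blend_nearest_views(frames: list, camera_positions: np.ndarray,
                        selected_position: np.ndarray) -> np.ndarray:
    """Stand-in for constructing a view at a selected location 32: weight each
    simultaneous taken frame (all assumed pre-registered and equal-sized) by
    inverse distance from the selected location, then blend. Not the patent's
    fuzzy logic; a real system would warp pixels geometrically first."""
    d = np.linalg.norm(camera_positions - selected_position, axis=1)
    w = 1.0 / np.maximum(d, 1e-6)   # nearer devices contribute more
    w /= w.sum()
    stacked = np.stack([f.astype(np.float64) for f in frames])  # (N, H, W, 3)
    blended = np.tensordot(w, stacked, axes=1)                  # (H, W, 3)
    return blended.clip(0, 255).astype(np.uint8)
```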
Through use of the interactive device 24, the user may "move" around and about, and zoom in and zoom out on the object 18 for viewing the object at any of a number of selected angles (including phantom angles defined by the selected points of view 33) that are supported by the images taken by the plurality of imaging devices 12. By "supported" it is meant that sufficient and suitable image data necessary for the processor 22 to construct the sequence of images or freeze frame image at the selected location 32 and distance must be obtained by the imaging devices 12. It is thus preferred that high definition film or digital cameras be used to obtain sufficient data. In situations where insufficient data exists to construct the sequence of images or freeze frame images, the processor 22 may limit user selection of such points of view or distances, or construct the images with a resolution and accuracy as best as possible from the existing image data. It should further be recognized that in a zoom out, as the distance from the location 32 to the object 18 increases, the degree of definition required in the constructed sequence of images correspondingly decreases. Thus, less image data will be needed by the processor 22 in constructing the individual image or sequence of images. When zooming out, the constructed or taken sequences of images or freeze frame images may shrink because of a lack of available image data as the zoom out moves past the original distances between the imaging devices 12 and the object 18. To assist the processor 22 in the processing of the zoom image, image data from an imaging device 12' utilizing a fish eye lens may be used to provide the bordering image data needed to fill out the constructed or taken images. In such instances, the image data provided by the imaging device 12' is cut out by the processor 22 and pasted around the sequences of images or freeze frame images provided by the conventional imaging devices 12. The curvature of the image associated with fish eye lenses is reduced or eliminated by the processor 22 using conventional image processing techniques.
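A minimal sketch of the zoom behavior described above, using nearest-neighbor resampling: zooming in crops about the center, while zooming out past the taken field of view leaves a border for which no image data exists. Here that border is simply black; per the passage above, the patent instead fills it with fish eye image data from device 12'.
```python
import numpy as np

def digital_zoom(image: np.ndarray, factor: float) -> np.ndarray:
    """Zoom in (factor > 1) by center-cropping, or zoom out (factor < 1) by
    padding, then resample back to the original size (nearest neighbor).
    The black padding marks where bordering image data would be needed."""
    h, w = image.shape[:2]
    if factor >= 1.0:
        ch, cw = max(1, int(h / factor)), max(1, int(w / factor))
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        crop = image[y0:y0 + ch, x0:x0 + cw]
    else:
        ph, pw = int(h / factor), int(w / factor)
        crop = np.zeros((ph, pw) + image.shape[2:], dtype=image.dtype)
        y0, x0 = (ph - h) // 2, (pw - w) // 2
        crop[y0:y0 + h, x0:x0 + w] = image
    ys = np.arange(h) * crop.shape[0] // h   # nearest-neighbor row indices
    xs = np.arange(w) * crop.shape[1] // w   # nearest-neighbor column indices
    return crop[ys][:, xs]
```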
It will be noticed that in many installation configurations for the plurality of imaging devices 12, the image of the object 18 taken from one point of view 17, or constructed for a selected point of view 33, may include another one of the imaging devices at another location 16, or some other unwanted object or item. For example, in the configuration shown in FIGURE 2, the sequence of images of the object 18 taken by the imaging device 12 at one corner of the square 20 may include therein the imaging device at the diagonally opposite corner of the square. In considering some objects 18, for example, a sports stadium or other venue with images taken during a sporting event or other "live" event, the presence of another imaging device 12 in the taken image of the object will not be distracting. However, with respect to other objects 18, such as a scene taken in a studio or on location for a motion picture or television show, the presence of the imaging device 12 (along with other unwanted objects such as the operators of the device) in the images is very distracting and hardly realistic. Accordingly, the processor 22 at the control of the user through the interactive device 24 further utilizes image subtraction logic processing 36 to remove unwanted items from either the taken or constructed sequences of images or freeze frame images. Alternatively, unwanted items may be color coded and removed through conventional blue masking techniques. Processing of certain ones of the taken images by the processor 22 to construct the sequence of images taken from a selected point of view 33 using fuzzy logic processing 34, or to zoom in and zoom out on the object 18, or to remove unwanted portions of the imaged object through image subtraction logic processing 36, may take more time than with other images. Such delays will cause a disruption in continuity of the constructed sequence of images output for display. Accordingly, the processor 22 further includes an image buffer 23 for sequentially storing for a predetermined time delay prior to display the individual images as constructed and/or output from the processor. Storage of the images in the buffer 23 during the time delay allows for more computationally intense subsequent images to catch up with previously generated and output images. When output from the buffer following delay, the images are presented by the display devices 26 in an uninterrupted, sequential, continuous manner for viewing from the selected point of view 33. Any sound associated with the images is likewise delayed in the buffer 23 for synchronized output.
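The blue masking alternative mentioned above can be sketched as a simple chroma test: pixels whose blue channel dominates are treated as the color-coded unwanted item and replaced from a fill source. The RGB channel order, threshold, and fill-image scheme are assumptions for illustration.
```python
import numpy as np

def blue_mask_subtract(image: np.ndarray, fill: np.ndarray,
                       threshold: int = 60) -> np.ndarray:
    """Replace color-coded (blue-masked) pixels with pixels from a fill image
    of the same size, e.g. the same view with the unwanted item absent or
    image data derived from another imaging device 12."""
    r, g, b = (image[..., i].astype(int) for i in range(3))
    unwanted = (b - np.maximum(r, g)) > threshold  # blue strongly dominant
    out = image.copy()
    out[unwanted] = fill[unwanted]
    return out
```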
Processing of the sequences of images by the processor 22 as discussed herein is facilitated and more quickly accomplished when the relative locations of the imaging devices 12 with respect to each other are known. The relative locations of the devices 12 taking the images may be input to the processor 22 through the interface 24. Alternatively, the present invention further includes a laser tracking device 31 positioned at a known location which may comprise the location of one of the imaging devices 12. The laser tracking device 31 emits a laser beam 35 directed at and reflected by each imaging device 12 (two such beams are shown in FIGURE 2). The reflected beam is processed by the device 31 to determine distance and direction to (i.e., relative location of) each imaging device. The relative locations of the imaging devices 12 are transmitted to the storage device 14 for storage with the recorded images or, alternatively in a "live" operating mode, are transmitted directly to the processor 22 for use in the image construction, zooming and image subtraction processes. In instances where the positions 16 of the imaging devices 12 are fixed, the relative location information need only be calculated once and transmitted to the processor 22 prior to recording images of the object 18. Conversely, when the imaging devices 12 move with respect to each other during imaging of an object 18, the relative location information is updated by the device 31 on a periodic basis and transmitted to the storage device 14 or processor 22 synchronized with the recorded images in accordance with the time recording.
Reference is now made to FIGURE 3, wherein there is shown a structure 39 for mounting and supporting the plurality of imaging devices 12 at the locations 16. The structure 39 includes a plurality of telescoping arms 41. A universal joint 43 is provided at each end of each one of the arms 41. A distal end one of the universal joints 43 on each arm 41 supports mounting of an imaging device 12. A proximal end one of the universal joints 43 is mounted to a support platform 45. The structure 39 is typically fixed in one location during the imaging of stationary objects 18. With non-stationary objects 18, however, the structure 39 is mounted to a vehicle 43 (schematically shown in phantom, for example comprising a track cart, truck, or helicopter) for movement along with the object to be imaged. The use of telescoping arms 41 and universal joints 43 facilitates the positioning of the plurality of imaging devices 12 in accordance with the present invention in any one of a number of positions at spaced apart locations 16 providing a plurality of points of view with respect to the object 18. The positions of the arms 41 and joints 43 for the structure 39 are lockable to maintain a substantially constant relative location between the plurality of imaging devices 12 even during object 18 and/or vehicle 43 movement.
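Returning to the laser tracking device 31: each measurement yields a distance and a direction, which can be converted into coordinates relative to the tracker. A minimal sketch, assuming the direction is parameterized as azimuth and elevation angles (the patent specifies only that distance and direction are determined):
```python
import math

def relative_location(distance: float, azimuth_deg: float,
                      elevation_deg: float):
    """Convert one tracker measurement (distance plus direction to an imaging
    device 12) into x, y, z coordinates with the tracker at the origin."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (distance * math.cos(el) * math.cos(az),
            distance * math.cos(el) * math.sin(az),
            distance * math.sin(el))

# Example: a device 12 m away, 45 degrees around, 10 degrees up.
print(relative_location(12.0, 45.0, 10.0))
```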
Reference is now made to FIGURE 4 wherein there is shown a flow diagram of a first embodiment of the method of the present invention. Sequences of images of the object 18 are first captured by a plurality of imaging devices 12 having known relative locations (step 38). Next, the captured image data is processed for phantom angle and zooming capability (step 40). The processed data is then either recorded on media (step 42) or transmitted live (step 44) for subsequent processing to display the sequences of images from user chosen and selected points of view 17 and 33, respectively. Such display comprises the steps of inputting from the user preferred viewing information such as a selected point of view and distance (step 46). The processed image data is then read responsive to the user input (step 48), and a sequence of images or freeze frame image is constructed from the data in accordance with the user input (step 49) and the known relative locations of the imaging devices 12 and transmitted to a monitor for viewing (step 50).
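The flow above ends with transmission to a monitor (step 50); as noted earlier, the image buffer 23 delays output so that computationally expensive frames can catch up before display. A minimal sketch of such a fixed-delay buffer; the delay value and the monotonic-clock scheme are illustrative assumptions.
```python
import time
from collections import deque

class DelayBuffer:
    """In the spirit of image buffer 23: frames become available for display
    only after a fixed delay, so frames that took longer to construct can
    catch up and the displayed sequence stays uninterrupted."""
    def __init__(self, delay_seconds: float):
        self.delay = delay_seconds
        self.queue = deque()  # (release_time, frame) pairs in arrival order

    def push(self, frame) -> None:
        self.queue.append((time.monotonic() + self.delay, frame))

    def pop_ready(self):
        # Yield every frame whose delay has elapsed, oldest first.
        while self.queue and self.queue[0][0] <= time.monotonic():
            yield self.queue.popleft()[1]
```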
Referring now to FIGURE 5, there is shown a flow diagram of a second embodiment of the method of the present invention. Sequences of images of the object 18 are first captured by a plurality of imaging devices 12 having known relative locations (step 52). The images are then recorded on appropriate media (step 54). In this connection, it will be understood that the recorded sequences of images may then be packaged for subsequent sale or lease. In using the images, the image data is simultaneously accessed by the processor 22 (step 56), and processed in accordance with the user preferred viewing information input through the use of an interactive viewing device and the known relative locations of the imaging devices 12 (step 58) to generate or construct sequences of images or freeze frame images for viewing (step 60) or, alternatively, the generated sequences of images or freeze frame images are recorded (step 62) and the recording is subsequently provided to the viewer for later access (step 64).
The present invention is not intended to replace or render obsolete the traditional role of a director in the production of movies, television shows and the like. In such productions, the taken points of view 17 of scenes and events selected by the director will be provided along with other taken points of view 17 provided by the imaging devices 12 for user selection. In addition, the selected points of view 33 from phantom angles will also be made available to the viewer through processing the sequences of images in accordance with the present invention. When the viewer wishes to view the director's selection, those images will be displayed. Otherwise, the viewer is free with the present method and apparatus to select any other point of view and distance (using either taken or constructed images) for viewing that is supported by the image data. Such viewing would include the viewing of freeze frame images as well as sequences of images.
One immediate use of the present invention would be for a director to record a scene from multiple taken points of view 17. By reviewing the sequences of images from the recorded taken points of view 17, and generating sequences of images or freeze frame images from phantom angles having other selected points of view 33, the director may select the optimum angles and imaging device positions for making the final recording. Furthermore, multiple imaging devices can be used in final filming to produce different versions of the production for general distribution. Furthermore, the processor of the present invention may be made available at home or at a theater to allow viewers to select other points of view 33 relating to phantom angles during a showing. The freeze frame feature of the present invention further facilitates careful consideration by the viewer of a recorded scene such as would be necessary in participating in an interactive or role playing game (e.g., a mystery/detective or combat game).
Although preferred embodiments of the method and apparatus of the present invention have been illustrated in the accompanying Drawings and described in the foregoing Detailed Description, it will be understood that the invention is not limited to the embodiment disclosed, but is capable of numerous rearrangements, modifications and substitutions without departing from the spirit of the invention as set forth and defined by the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method for processing sequences of images of an object taken from a plurality of different imaging points of view by a corresponding plurality of imaging devices positioned at different locations, comprising the steps of: determining the relative locations of the plurality of imaging devices with respect to each other; inputting a selected point of view for viewing of the object other than any of the plurality of imaging points of view; and constructing from the taken sequences of images of the object and the determined relative locations of the imaging devices, a constructed individual freeze frame image or sequence of images of the object as if taken from the input selected point of view.
2. The method as in claim 1 further including the steps of: inputting a selected viewing distance away from the object; and processing the taken or constructed sequences of images to zoom in or zoom out on the object as if taken from the selected viewing distance.
3. The method as in claim 1 wherein the taken sequences of images of the object include unwanted object images, further comprising the step of processing the taken or constructed images to subtract the unwanted object images.
4. The method as in claim 1 further including the step of synchronizing the determined relative locations to the taken sequences of images.
5. The method as in claim 1 further including the step of storing the taken sequences of images and determined relative locations of the imaging devices for subsequent retrieval and processing.
6. The method as in claim 1 wherein the locations for positioning of the imaging devices change during imaging of the object, further comprising the steps of: periodically re-determining the relative locations of the plurality of imaging devices with respect to each other, said determined relative locations changing as the locations of the imaging devices change; and synchronizing and associating the determined and changing relative locations with the taken sequences of images.
7. Apparatus for image processing, comprising: a plurality of imaging devices positioned at different locations for taking sequences of images of an object from different imaging points of view; means for determining the relative locations of the plurality of imaging devices with respect to each other; an input device for selecting a point of view for viewing of the object other than any of the plurality of imaging points of view; and a processor connected to the imaging devices, the means for determining and the input device, and including means for constructing from the taken sequences of images of the object and the determined relative locations of the imaging devices, a constructed individual freeze frame image or sequence of images of the object as if taken from the selected point of view.
8. The apparatus as in claim 7 wherein the means for determining comprises a user interface connected to the processor for manual input of the determined relative locations of the imaging devices.
9. The apparatus as in claim 7 wherein the means for determining comprises a laser tracking device for projecting a laser beam towards each one of the plurality of imaging devices to determine distance and direction thereto and identify the relative locations of the imaging devices with respect to each other.
10. The apparatus as in claim 7 wherein the input device further allows for selection of a distance away from the object for viewing of the object, the means for constructing of the processor further including means for processing the taken or constructed images to zoom in or zoom out on the object as if taken from the selected viewing distance.
11. The apparatus as in claim 7 wherein the taken sequences of images of the object include unwanted object images, the processor further including means for processing the taken or constructed images to subtract the unwanted object images.
12. The apparatus as in claim 7 wherein the determined relative locations of the plurality of imaging devices are synchronized to the taken sequences of images.
13. The apparatus as in claim 7 further including means for storing the taken sequences of images and determined relative locations of the imaging devices for subsequent retrieval and processing by the processor.
14. The apparatus as in claim 7 wherein the locations of the imaging devices change during imaging of the object, the means for determining further including means for periodically re-determining the relative locations of the plurality of imaging devices with respect to each other, with said determined changing relative locations being synchronized and associated with the taken sequences of images.
15. The apparatus as in claim 7 wherein the object comprises a moving object, further including means for moving the plurality of imaging devices along with the moving object during imaging while maintaining substantially constant relative locations of the imaging devices with respect to each other.
16. The apparatus as in claim 15 wherein the means for moving comprises: a movable support platform; a plurality of telescoping arms, each arm pivotally mounted at a proximal end thereof to the movable support platform; and means for pivotally mounting one of the plurality of imaging devices to a distal end of each of the plurality of telescoping arms.
17. The apparatus of claim 7 wherein the imaging devices comprise digitizing cameras.
18. The apparatus of claim 7 wherein the imaging devices comprise three-dimensional imaging cameras.
19. Apparatus for image processing, comprising: a plurality of cameras positioned at different locations for taking sequences of images of an object from different camera points of view; a laser tracking device for determining the relative locations of the cameras with respect to each other; an input device for selecting a point of view for viewing of the object other than any of the camera points of view; and a fuzzy logic processor connected to the cameras, the laser tracking device and the input device for constructing from the taken sequences of images of the object and the determined relative locations of the cameras a constructed individual freeze frame image or sequence of images of the object as if taken from the selected point of view.
20. The apparatus as in claim 19 wherein the input device further allows for selection of a distance away from the object for viewing of the object, further including a zooming processor for processing the taken or constructed sequences of images to zoom in or zoom out on the object as if taken from the selected viewing distance.
21. The apparatus as in claim 19 wherein the taken sequences of images of the object include unwanted object images, further including an image subtraction processor for processing the taken or constructed images to subtract the unwanted object images.
22. The apparatus as in claim 19 further including means for storing the taken sequences of images synchronized with the determined relative locations of the imaging devices for subsequent retrieval and processing.
23. A method for viewing an object from a viewer decided point of view comprising the steps of: visually recording an object from a plurality of different viewing positions to generate image data; inputting a viewer's selected viewing position different from any of the recorded viewing positions; processing said generated image data in accordance with the selected viewing position to construct from the image data an image of the object taken from the selected viewing position; and displaying the constructed image.
24. A method as in claim 23 wherein the processing step further comprises processing said constructed image for zoom in and zoom out viewing of the object.
25. A method for viewing an object from a viewer decided point of view comprising the steps of: storing information for images of an object taken from different camera points of view by each of a plurality of cameras; sending said stored information to a processor; selecting a point of view with respect to the object other than any of the camera points of view; processing said stored information to construct from the information an image of the object taken from said selected point of view; and viewing said constructed image.
26. An apparatus for viewing an object from a viewer decided point of view comprising: a plurality of cameras for recording images of an object taken from different camera points of view; means for selecting a point of view with respect to the object different from any of the camera points of view; means for processing said recorded images to construct an image of the object taken from the selected different point of view; and means for viewing the constructed image or recording the constructed image.
27. An apparatus for viewing an object from a viewer decided point of view comprising: a first camera means for obtaining image data of an object from a first point of view; a second camera means for obtaining image data of the object from a second point of view; a third camera means for obtaining image data of the object from a third point of view; a viewer interactive means capable of receiving viewer input for selecting a point of view for viewing the object different from the first, second and third points of view; and a means responsive to said viewer interactive means for processing the image data from the cameras to construct an image of the object taken from the selected point of view.
28. An apparatus as claimed in claim 27, wherein said processing means further comprises means for processing the constructed image for zoom in and zoom out viewing of the object.
29. An apparatus for viewing an object from a viewer decided point of view comprising: a plurality of cameras for recording images of an object taken from different camera points of view; a device for simultaneously transmitting recorded images synchronized in accordance with a time of recording; a computer processor for processing said simultaneously transmitted images to construct an image of the object taken from a selected point of view different from any of the camera points of view; a viewer interactive device connected to said computer processor for selecting the point of view for the constructed image; and a device for viewing the constructed image or recording the constructed image.
30. A method as in claim 23 wherein the image data of the object includes data of an unwanted portion of the image of the object further including the step of subtracting the data associated with the unwanted portion from the image data to remove the unwanted portion from the image.
31. A method as in claim 25 wherein the image data of the object includes data of an unwanted portion of the image of the object further including the step of subtracting the data associated with the unwanted portion from the image data to remove the unwanted portion from the image.
32. An apparatus as in Claim 26 wherein the image data of the object includes data of an unwanted portion of the image of the object and the means for processing includes means for subtracting the data associated with the unwanted portion from the image data to remove the unwanted portion from the image.
33. An apparatus as in Claim 27 wherein the image data of the object includes data of an unwanted portion of the image of the object and the means for processing includes means for subtracting the data associated with the unwanted portion from the image data to remove the unwanted portion from the image.
34. An apparatus as in Claim 29 wherein the image data of the object includes data of an unwanted portion of the image of the object and the computer processor includes means for subtracting the data associated with the unwanted portion from the image data to remove the unwanted portion from the image.
35. A method as in claim 24 wherein the step of processing further includes filling out around the constructed image with fish eye image data when zooming out.
PCT/US1995/000303 1994-01-10 1995-01-09 Viewing imaged objects from selected points of view WO1995019093A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP7518668A JPH09507620A (en) 1994-01-10 1995-01-09 Observing a captured object from a selected viewpoint
EP95907364A EP0742987A4 (en) 1994-01-10 1995-01-09 Viewing imaged objects from selected points of view

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US17938394A 1994-01-10 1994-01-10
US08/179,383 1994-01-10
US36689094A 1994-12-30 1994-12-30
US08/366,890 1994-12-30

Publications (1)

Publication Number Publication Date
WO1995019093A1 (en) 1995-07-13

Family

ID=26875277

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1995/000303 WO1995019093A1 (en) 1994-01-10 1995-01-09 Viewing imaged objects from selected points of view

Country Status (4)

Country Link
EP (1) EP0742987A4 (en)
JP (1) JPH09507620A (en)
CA (1) CA2179809A1 (en)
WO (1) WO1995019093A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0793392A1 (en) * 1996-02-29 1997-09-03 Matsushita Electric Industrial Co., Ltd. Method and apparatus for the transmission and the reception of three-dimensional television signals of stereoscopic images
EP0903695A1 (en) * 1997-09-16 1999-03-24 Canon Kabushiki Kaisha Image processing apparatus
EP0930585A1 (en) * 1998-01-14 1999-07-21 Canon Kabushiki Kaisha Image processing apparatus
GB2378341A (en) * 2001-07-31 2003-02-05 Hewlett Packard Co Altering the viewpoint of part of an image to simulate a different viewing angle
US20100007735A1 (en) * 2005-12-22 2010-01-14 Marco Jacobs Arrangement for video surveillance
DE19825302B4 (en) * 1997-06-09 2014-09-25 Evans & Sutherland Computer Corp. System for creating a three-dimensional garbage mat, which allows a simplified adjustment of spatial relationships between real and virtual scene elements

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3223125A1 (en) 2022-03-29 2023-10-05 Illumina, Inc. Chromenoquinoline dyes and uses in sequencing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5200818A (en) * 1991-03-22 1993-04-06 Inbal Neta Video imaging system with interactive windowing capability
US5285397A (en) * 1989-12-13 1994-02-08 Carl-Zeiss-Stiftung Coordinate-measuring machine for non-contact measurement of objects
US5315313A (en) * 1991-07-24 1994-05-24 Matsushita Electric Industrial Co., Ltd. Device for electing a figure from among figures depicted on a display device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4797942A (en) * 1987-03-02 1989-01-10 General Electric Pyramid processor for building large-area, high-resolution image by parts
US5187571A (en) * 1991-02-01 1993-02-16 Bell Communications Research, Inc. Television system for displaying multiple views of a remote location

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5285397A (en) * 1989-12-13 1994-02-08 Carl-Zeiss-Stiftung Coordinate-measuring machine for non-contact measurement of objects
US5200818A (en) * 1991-03-22 1993-04-06 Inbal Neta Video imaging system with interactive windowing capability
US5315313A (en) * 1991-07-24 1994-05-24 Matsushita Electric Industrial Co., Ltd. Device for electing a figure from among figures depicted on a display device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0742987A4 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0793392A1 (en) * 1996-02-29 1997-09-03 Matsushita Electric Industrial Co., Ltd. Method and apparatus for the transmission and the reception of three-dimensional television signals of stereoscopic images
US6104425A (en) * 1996-02-29 2000-08-15 Matsushita Electric Industrial Co., Ltd. Method and apparatus for transmitting television signals, method and apparatus for receiving television signals, and method and apparatus for transmitting/receiving television signals
DE19825302B4 (en) * 1997-06-09 2014-09-25 Evans & Sutherland Computer Corp. System for creating a three-dimensional garbage mat, which allows a simplified adjustment of spatial relationships between real and virtual scene elements
EP0903695A1 (en) * 1997-09-16 1999-03-24 Canon Kabushiki Kaisha Image processing apparatus
US6421459B1 (en) 1997-09-16 2002-07-16 Canon Kabushiki Kaisha Image processing apparatus
EP0930585A1 (en) * 1998-01-14 1999-07-21 Canon Kabushiki Kaisha Image processing apparatus
US6914599B1 (en) 1998-01-14 2005-07-05 Canon Kabushiki Kaisha Image processing apparatus
GB2378341A (en) * 2001-07-31 2003-02-05 Hewlett Packard Co Altering the viewpoint of part of an image to simulate a different viewing angle
GB2378341B (en) * 2001-07-31 2005-08-24 Hewlett Packard Co Improvements in and relating to dislaying digital images
US7432930B2 (en) 2001-07-31 2008-10-07 Hewlett-Packard Development Company, L.P. Displaying digital images
US20100007735A1 (en) * 2005-12-22 2010-01-14 Marco Jacobs Arrangement for video surveillance
US9241140B2 (en) * 2005-12-22 2016-01-19 Robert Bosch Gmbh Arrangement for video surveillance

Also Published As

Publication number Publication date
EP0742987A4 (en) 1998-01-07
EP0742987A1 (en) 1996-11-20
JPH09507620A (en) 1997-07-29
CA2179809A1 (en) 1996-08-20

Similar Documents

Publication Publication Date Title
US6522325B1 (en) Navigable telepresence method and system utilizing an array of cameras
AU761950B2 (en) A navigable telepresence method and system utilizing an array of cameras
US6741250B1 (en) Method and system for generation of multiple viewpoints into a scene viewed by motionless cameras and for presentation of a view path
EP2161925B1 (en) Method and system for fusing video streams
US20020190991A1 (en) 3-D instant replay system and method
US20040027451A1 (en) Immersive imaging system
KR20010023561A (en) Image processing method and apparatus
WO2002011431A1 (en) Video system and method of operating a video system
JPH11508384A (en) Method and apparatus for creating a spherical image
WO2001028309A2 (en) Method and system for comparing multiple images utilizing a navigable array of cameras
CA2794928A1 (en) System and method for capturing and displaying cinema quality panoramic images
US11703942B2 (en) System and method for interactive 360 video playback based on user location
EP0742987A1 (en) Viewing imaged objects from selected points of view
WO2002087218A2 (en) Navigable camera array and viewer therefore
US6525765B1 (en) Image processing
JP2000182058A (en) Three-dimensional motion input method and three- dimensional motion input system
JPH07105400A (en) Motion picture reproducing device
WO1998044723A1 (en) Virtual studio
JP2024056581A (en) Image generating device and control method thereof
GB2317299A (en) Processing digital image data derived from cinematographic film

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2179809

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 1995907364

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1995907364

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1995907364

Country of ref document: EP