WO1999033026A1 - Acquisition and animation of surface detail images - Google Patents

Acquisition and animation of surface detail images

Info

Publication number
WO1999033026A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
recited
images
database
Prior art date
Application number
PCT/US1998/025955
Other languages
French (fr)
Inventor
Eimar M. Boesjes
Original Assignee
Boesjes Eimar M
Priority date
Filing date
Publication date
Application filed by Boesjes Eimar M filed Critical Boesjes Eimar M
Priority to AU17151/99A priority Critical patent/AU1715199A/en
Priority to EP98961969A priority patent/EP1040450A4/en
Publication of WO1999033026A1 publication Critical patent/WO1999033026A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0007 Image acquisition

Definitions

  • the field of the present invention relates to animation.
  • apparatus and methods are described herein for generating and storing an image database and for interactively generating animation sequences using images in the database.
  • a picture is a two-dimensional view of one or more objects.
  • Film, or animation, is a linear sequence of pictures, and may be viewed as a one-dimensional array of two-dimensional pictures.
  • the present invention discloses apparatus and methods for creating two- or higher- dimensional arrays of pictures that would allow a viewer to create his/her own film or animation of the object.
  • the present invention provides apparatus and methods to create animations of objects that are very large relative to the amount of detail one might want to see in the animation, by using an image database. This is the case, for instance, when creating an aerial animation of a geographical area. The earth is very large as compared to the size of an individual house that one might want to view in an animation. This is also the case when creating a microscopic animation of a skin surface. Here the size of the object is also very large relative to the amount of detail one might wish to see. Each image will only show a very small detailed part of the object surface.
  • the details that one might want to see may not be visible in a straight-on picture (i.e., acquired from directly above).
  • the image database is generated using a combination of oblique cameras that are targeted at an angle greater than about 0° and less than 90° downward from horizontal toward the object, so that vertically oriented details of the object may be seen in an animation.
  • PANORAMIC ANIMATIONS Microsoft's SurroundVideo® and Apple Computer's QuickTime® VR use a system where a camera is placed inside a scene. The camera is rotated and images are acquired from all directions. The images are stored in a computer and then assembled so that a single seamless image strip is created showing all directions. The image is projected onto a cylindrical object around the viewer. The viewer can now interactively create panoramic animations by moving around and viewing in all directions.
  • An undesirable artifact of this technology is that wide-angle views of the scene are generated. This means that the true size and shape of objects are distorted and less comprehensible to the viewer. In the present invention, montages of images that are not wide-angle are created, so no wide-angle artifacts are generated.
  • the image database may be generated in the present invention using a large number of camera positions, thereby allowing camera movements from one point to another in an animation. In this way the viewer can create an animation of images acquired from different points, thereby showing an animation of movement over the object surface.
  • In SurroundVideo® and QuickTime® VR, only images acquired from a single point within the object are animated.
  • SurroundVideo® and QuickTime® VR acquire the images from inside the object, whereas in the present invention images are acquired at a distance from the object.
  • Stereo images can be used in still images or in animated images.
  • the present invention may be implemented with regular images and may also be implemented with stereo images.
  • 3D MODELING Three-dimensional (3D) models can be created in a computer, and perspective images can be generated and animated from the 3D models.
  • 3D models contain a three-dimensional geometrical description of the object, or the surfaces of the object.
  • Perspective images can be created from any position in the 3D Model.
  • the perspective images can be sequenced to create an animation.
  • the image database used in the present invention is a collection of two-dimensional images. It may employ a 2D array of grid points with a reference to the images that have been generated at each such grid point relative to the object.
  • A problem with 3D models is that it is exceedingly difficult to create a model of a large area that is completely accurate.
  • an optimal scheme may be devised to photographically map entire cities or areas and create a mosaic of images that are comprehensible to the average viewer. Images can be selected from the database and scaled in such a way that a nearly optically continuous image sequence is presented to a viewer as an animation.
  • the image database requires a large amount of computer storage space, typically much more than a 3D model.
  • a multi-perspective image database is also distinctly different from photogrammetry.
  • For a multi-perspective image database it is essential to create oblique images of an object.
  • images used for photogrammetry should have as little perspective as possible.
  • Because each image in the database contains perspective, it is possible to create a sequence of images that gives the feeling of moving around or over an object. This is not possible with images that show little or no perspective.
  • the change in perspective in the sequence of images makes it possible to see the exact relationship between details of the object. In a city database, for instance, the exact position and height of a tree next to a house can be seen. Without perspective images, and without a nearly optically continuous image sequence, this is not possible.
  • the sequence of images can produce a feeling of motion through the scene or movement of objects in the scene.
  • the sequence of images can only be played forward and backward, and the viewer has no control over what is viewed, in what order, or from what direction.
  • a viewer has full control of the animation, and may move and view in any direction, thereby creating his or her own "film" of the target object shown in the image database. This offers a great potential for data analysis, travel information, city planning, space exploration, computer gaming, and many other fields.
  • any viewer may "fly" over the object or city from anywhere in the world. With sufficient computer network connection bandwidth and/or network access speed, the viewer may remotely "fly" over a city interactively in real time.
  • Certain aspects of the present invention may overcome one or more aforementioned drawbacks of the previous art and/or advance the state-of-the-art of object animation apparatus and methods, and in addition may meet one or more of the following objects:
  • To provide apparatus and methods for generating an image database, the image database comprising images of a target object recorded from a variety of viewing positions and in a variety of viewing directions;
  • the image database comprising images of a target object recorded from a variety of viewing positions and in a variety of viewing directions;
  • the imaging assembly comprising at least one camera
  • the imaging assembly comprising at least one wide-angled camera;
  • the imaging assembly comprising a plurality of cameras pointing radially outward and downward toward the target object;
  • an apparatus comprising: an imaging assembly comprising at least one camera; means for moving the imaging assembly relative to the target object; means for determining a position of the imaging assembly relative to the target object; means for storing an image recorded by the camera; and means for storing image parameters and the position of the imaging assembly corresponding to each image.
  • One or more of the foregoing objects may be achieved in the present invention by a method comprising the steps of: a) recording an image of the target object with a camera on an imaging assembly; b) determining the position of the imaging assembly relative to the target object; c) storing the image recorded by the camera; d) storing image parameters and the position of the imaging assembly corresponding to the image; e) moving the imaging assembly to a new position relative to the target object; and f) repeating steps a) through e).
  • an apparatus comprising: means for storing the image database; means for storing image parameters and imaging assembly position for each image in the image database; means for selecting a trajectory of viewing position, viewing direction, viewing distance, and/or viewing time by a user; means for selecting, cropping or splicing, scaling, and sequencing images from the image database using the user-selected trajectory, stored image parameters, and stored imaging assembly positions; and means for presenting the selected, scaled, and sequenced images from the image database as an animation sequence.
  • One or more of the foregoing objects may be achieved in the present invention by a method comprising the steps of: a) selecting a trajectory of viewing position, viewing direction, viewing distance, and/or viewing time by a user; b) selecting, cropping or splicing, scaling, and sequencing images from the image database using the user-selected trajectory, stored image parameters, and stored imaging assembly positions; and c) presenting the selected, scaled, and sequenced images from the image database as an animation sequence.
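The animation-generation method above can be sketched as a minimal pipeline. All names, the cost function, and the data layout here are assumptions for illustration; the patent does not specify an implementation:

```python
import math

# Hypothetical sketch of the animation-generation method: for each point on a
# user-selected trajectory, pick the stored image whose camera position and
# azimuthal direction best match, then sequence the selections into frames.

def best_image(view_x, view_y, view_azimuth, image_params):
    """image_params: list of dicts with x, y, azimuth (degrees) per stored image."""
    def cost(p):
        dist = math.hypot(p["x"] - view_x, p["y"] - view_y)
        dazi = abs((p["azimuth"] - view_azimuth + 180) % 360 - 180)
        return dist + dazi  # naive combined cost; the weighting is a design choice
    return min(image_params, key=cost)

def animation_sequence(trajectory, image_params):
    """trajectory: list of (x, y, azimuth) viewpoints -> list of selected images."""
    return [best_image(x, y, a, image_params) for (x, y, a) in trajectory]
```

A real system would then crop/splice and scale each selected image, as described below, before playback.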
  • an imaging assembly may comprise a plurality of cameras, all pointed toward the target object at about the same vertical angle of greater than about 0° and less than 90° downward from horizontal.
  • the fields-of-view of the cameras preferably cover a full circle when viewed from above.
  • An imaginary grid of points may be defined over the object.
  • a viewer may interactively "move" over the grid and change the viewing direction and image size by selecting a trajectory of viewing position, viewing direction, viewing distance, and/or viewing time.
  • Images may be selected, cropped or spliced, and scaled so that a near optically-continuous sequence of images is created.
  • the images may be played back at a speed sufficiently great that a viewer may perceive "movement" relative to the object.
  • the images may be played back preferably at a rate greater than about 6 images per second, most preferably at about 24 images per second (a common frame rate for a motion picture). Additional objects and advantages of the present invention may become apparent upon referring to the preferred and alternative embodiments of the present invention as illustrated in the drawings and described in the following written description and/or claims.
  • Figure 1 is a block diagram showing an overview of apparatus and methods according to the present invention.
  • Figure 2A shows an isometric view of an imaging assembly according to the present invention
  • Figure 2B shows a side view of a single camera/gyro-stabilizer assembly.
  • Figures 3A and 3B show, respectively, top and isometric views of the camera directions of an imaging assembly according to the present invention.
  • Figure 4 shows a top view of the fields-of-view of cameras on an imaging assembly according to the present invention.
  • Figure 5 shows a flow diagram for generation of a multi-perspective image database according to the present invention.
  • Figure 6 shows a top view of multiple camera directions from each of multiple camera positions on grid points, according to the present invention.
  • Figure 7 shows a top view of a target object detail with camera positions and directions corresponding to images showing the detail, according to the present invention.
  • Figure 8 shows a flow diagram for generation of an animation sequence from a multi-perspective image database, according to the present invention.
  • Figure 9A shows a top view of a target object detail with camera positions and directions corresponding to images showing the detail
  • Figures 9B, 9C, and 9D show images in the multi-perspective image database showing the detail
  • Figures 9E, 9F, and 9G show images clipped and scaled to form part of an animation sequence showing the detail.
  • Figure 10 is a flow diagram showing linkage of data, images, maps, audio, and/or video to a multi-perspective image database according to the present invention.
  • Figure 11 is a flow diagram showing analysis and display of data linked to a multi-perspective image database according to the present invention.
  • Figure 12 is a flow diagram showing editing of data linked to a multi-perspective image database according to the present invention.

MODES FOR CARRYING OUT THE INVENTION
  • the viewer may perceive apparent motion and/or evolution of one or more objects in the scene, and/or apparent motion of the viewer's viewing position and/or viewing direction relative to the scene.
  • a preferred convention for such Cartesian coordinates has the xy-plane oriented substantially horizontally and substantially coinciding with the surface of the target object and the z-axis substantially perpendicular to the surface of the object, although other conventions may be used without departing from inventive concepts disclosed and/or claimed herein.
  • camera setup - alternatively "imaging assembly" - a set of cameras used to create a multi-perspective image database, typically comprising a plurality of oblique cameras and one vertical camera
  • clip an image - alternatively "crop an image" - select a part of an image for viewing
  • continuous image sequence - also "near-continuous image sequence", "optically-continuous image sequence", or "near-optically-continuous image sequence" - a series of images in which the changes in perspective and scale from each image to the next are (nearly) continuous, so that playback at sufficient speed produces the appearance of smooth camera movement.
  • the term “near-continuous” is used because from a mathematical point of view, the sequences generated by the present invention are only approximately continuous, not exactly continuous.
  • An object of the present invention is to produce a sequence that appears nearly continuous to the human eye. Such a sequence may be played back at a rate of preferably greater than about 6 images per second, most preferably about 24 images per second (a common frame rate for motion pictures). Even at relatively slow playback speeds (similar to a slide show), such a sequence may yield a perception of motion relative to the target object.
  • image database - alternatively "multi-perspective image database" or simply "database" - the collection of all images. Other databases referred to herein contain other types of data.
  • image parameter - camera position (x_CAM, y_CAM, z_CAM), camera vertical angle (θ_CAM), camera azimuthal angle (φ_CAM), camera constant (c), camera field-of-view (Δθ_CAM, Δφ_CAM), and/or other parameters describing the recording of a given image of the target object.
  • a database of these parameters for each image in the image database may be generated and stored for use in subsequent selection, manipulation, sequencing, and/or playback of images in an animation sequence.
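As one illustration of such a per-image record (the field names and types are assumptions, not taken from the patent), the parameters defined above might be stored as:

```python
from dataclasses import dataclass

# Sketch of one record in the image-parameter database. Symbols follow the
# glossary above: camera position (x_CAM, y_CAM, z_CAM), vertical angle
# theta_CAM, azimuthal angle phi_CAM, camera constant c, and field-of-view.
@dataclass
class ImageParameters:
    image_id: str
    x_cam: float          # camera position, object coordinates
    y_cam: float
    z_cam: float
    theta_cam: float      # vertical angle, degrees (0 = horizontal, 90 = straight down)
    phi_cam: float        # azimuthal angle, degrees
    c: float              # camera constant (roughly lens-to-focal-plane distance)
    fov_theta: float      # vertical field-of-view, degrees
    fov_phi: float        # horizontal field-of-view, degrees
```

Such records can be indexed by position and direction for the image-selection step of animation generation.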
  • object detail - alternatively "target object detail" or "detail of the target object" - a detail or feature of the target object which appears in one or more images of a multi-perspective image database
  • geographical coordinates - alternatively x_GEO, y_GEO, and z_GEO, "object coordinates", "target object coordinates", or "physical coordinates" - the three-dimensional Cartesian coordinates of a particular detail of the target object, preferably using the same convention set forth for the camera coordinates, although other conventions may be used without departing from inventive concepts disclosed and/or claimed herein.
  • geo-reference may be used when the target object is the earth's surface or a portion thereof, or may be used in the more general case of any target object for which an image database has been generated.
  • geo-reference - a link between a location on a map or image and data related to the detail of the object corresponding to that location
  • positioning system - any apparatus and/or method used to determine a position relative to the target object (i.e., for determining object coordinates).
  • a positioning system may include a system for measuring and/or storing an elevation profile for the object surface.
  • grid - a systematic field of points in a substantially horizontal plane, typically laid out as a regular array of rows and columns.
  • the grid spacing may be small relative to the size of the target object and to the size of the area of the target object corresponding to an image.
  • grid size - alternatively "grid spacing" - the distance between two adjacent grid points
  • the horizontal plane which substantially coincides with the surface of the object is also referred to as the xy-plane, although other conventions may be used without departing from inventive concepts disclosed and/or claimed herein.
  • a "horizontal plane” may be defined l o locally, and may not be parallel to or coincide with a "horizontal plane” at a distant location.
  • an "average horizontal plane” may be defined locally which most nearly approximates the rough surface locally.
  • hotspot - an area or collection of areas on an image sharing a common attribute. With the use of hotspots, data can be linked to images. Hotspots can also be used to display certain characteristics of the data superimposed on the image. Selection and activation of a hotspot by a pointing device in a graphical user interface (GUI) environment may result in a certain action, such as the retrieval of data, actions of the software, etc.
  • image coordinates - alternatively x_IM and y_IM - two-dimensional Cartesian coordinates used to specify a location in an image.
  • the xy-plane corresponds to the plane of the two-dimensional image, with the origin at the center of the image, the x-axis horizontal, and the y-axis vertical, although other conventions may be used without departing from inventive concepts disclosed and/or claimed herein.
  • Internet - any worldwide computer network that allows computers to communicate with each other using standardized communication protocols
  • intranet - a computer network that allows computers from a selected group (a company, a department, an organization) to communicate with each other using standard communication protocols
  • a vertical angle of 0° corresponds to a camera directed horizontally, while a vertical angle of 90° corresponds to a camera directed vertically downward.
  • a "vertical axis" may be defined locally, and may not be parallel to a
  • a vertical axis at a distant location.
  • a vertical axis may be defined substantially perpendicular to an "average horizontal plane", which may be defined locally and which most nearly approximates the rough surface locally.
  • vertical plane - a plane perpendicular to a horizontal plane.
  • a vertical plane is parallel to the vertical axis.
  • Vertical planes may be defined locally for curved and/or rough target object surfaces.
  • vertical camera - a camera pointed at the target object with a vertical angle of about 90°, i.e., vertically downward. The images created by such a camera will appear flat and show little perspective.
  • WAN - alternatively "wide area network" - a network of computers at multiple locations
  • z-direction - the direction perpendicular to a horizontal plane, i.e., directly upward or downward.
  • the imaging assembly is considered "above" the target object (positive value for z_CAM) and pointed "down" at the target object, although other conventions may be used without departing from inventive concepts disclosed and/or claimed herein.
  • zoom-in - to enlarge the size of the object in an image on the screen so it appears larger and closer
  • zoom-out - to decrease the size of the object in an image on the screen so it appears smaller and farther away.
  • any of the terms "means for storing the image database", "means for storing image parameters", "means for selecting a trajectory", "means for selecting, scaling, and sequencing images", "means for presenting", and any other "means" described herein shall denote any device, machine, apparatus, hardware, software, combination thereof, and/or equivalent thereof capable of performing the specified functions, including but not limited to one or more computers, storage media, servers, server software, browser software, user interface hardware and/or software, terminals, networked devices, the Internet, the World Wide Web, an intranet, an extranet, a LAN, a WAN, modems, memory units, storage devices, processors, distributed computing resources, integrated circuits, ASICs, functional equivalents thereof, and/or combinations thereof.
  • any medium suitable for storage of information, data, and/or images, whether in digitally-encoded form or in non-digital form, or any combination of and/or functional equivalents of such media may be utilized for any of the various storage functions described herein.
  • such storage media may be distributed over a plurality of storage locations.
  • network connection shall encompass any apparatus, hardware, software, and/or combination thereof capable of providing any of the various connections described herein, including but not limited to an Internet connection, an Internet service provider (ISP), browser software, user interface hardware and/or software, a modem hook-up, a cable, a phone line, a satellite link, a wireless link, a microwave link, television, a BBS system, a local area network, a wide area network, an intranet, an extranet, direct linkage of a plurality of computers, connections between devices within a single computer, terminal connections, program instructions, multiple user accounts on a single computer, multiple storage areas on a single computer, combinations thereof, functional equivalents thereof, and future apparatus, methods, systems, and/or protocols for performing analogous functions.
  • ISP Internet service provider
  • Images of the target object may be recorded at block 10 by an imaging assembly comprising one or more cameras. Images may be recorded from multiple viewing locations and in multiple viewing directions relative to the target object.
  • a positioning system may be employed at block 20 to record a camera position corresponding to each image recorded, and one or more of the camera position, camera direction, and/or other image parameters may be stored for each image at block 40.
  • the images themselves may be stored at block 30 as an image database.
  • the positioning system may be employed to control the recording of images. Data related to details of the target object may be linked to the image database via the image parameters at block 50.
  • a viewer may select a viewing trajectory at block 70.
  • the selected trajectory and image parameters may be employed at block 60 to select, scale and sequence images retrieved from the image database.
  • the image sequence may then be presented as an animation sequence at block 90.
  • the viewer may: view the animation sequence; select a subsequent trajectory based on the animation sequence; and/or view, analyze, and/or edit linked data at block 80. Any connections represented by arrows in Figure 1 may be provided by a network or other connection.
  • A preferred embodiment of an imaging assembly 100 according to the present invention is shown in Figures 2A and 2B, and comprises: a plurality of oblique cameras 110 pointing radially outward from a central point 101 and downward toward the object, each preferably having substantially the same vertical angle of greater than about 0° and less than 90°, preferably between about 15° and about 75°, and most preferably between about 15° and about 30°; and one vertical camera 120 pointing vertically downward at the object with about a 90° vertical angle.
  • the viewing directions of the cameras are illustrated in Figures 3A and 3B.
  • the vertical camera may be used to record reference images.
  • the fields-of-view 300 of the oblique cameras should preferably cover an entire 360° range of azimuthal angles around a central point and should preferably be substantially uniformly spaced around the central point.
  • an imaging assembly may comprise twelve oblique cameras each having a field-of-view of about 40°, although many other configurations having various numbers of cameras and various fields-of-view may be employed without departing from inventive concepts disclosed and/or claimed herein.
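Under the twelve-camera example above, the azimuthal pointing directions can be spaced uniformly and the combined fields-of-view checked for gap-free 360° coverage. This is an illustrative computation, not one prescribed by the patent:

```python
def camera_azimuths(n_cameras):
    """Uniformly spaced azimuthal pointing directions, in degrees."""
    return [i * 360.0 / n_cameras for i in range(n_cameras)]

def covers_full_circle(n_cameras, fov_deg):
    """True if adjacent fields-of-view overlap or touch, i.e. no azimuthal gaps."""
    return fov_deg >= 360.0 / n_cameras

# Twelve oblique cameras with ~40 degree fields-of-view: spacing is 30 degrees,
# so consecutive views overlap by about 10 degrees of azimuth.
```

The overlap between adjacent cameras is what lets a detail remain visible while the selected viewing azimuth sweeps around the assembly.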
  • the flow diagram of Figure 5 illustrates a preferred method for generating a multi-perspective image database for an object (an area of a city, for example).
  • a fine imaginary 2D grid may be defined (408) to overlay the object, as shown in Figure 6.
  • Images may be recorded with the imaging assembly positioned at each xy grid position 501 at a height z_CAM above the object.
  • the horizontal distance between adjacent grid points should preferably be relatively small compared to the size of the object.
  • the entire camera setup is prepared (406) and aimed at the object so that one vertical image and multiple oblique images (represented by arrows 502) may be recorded at each grid point.
  • All oblique cameras should preferably have the same focal length (or equivalently, the same camera constant, roughly the lens-to-focal-plane distance) and the same field-of-view. All oblique cameras are preferably aimed at the object under substantially the same vertical angle of greater than about 0° and less than 90°, preferably between about 15° and about 75°, most preferably between about 15° and about 30°, to record suitable perspective images. Since the focal length, the camera height, and the vertical angle are substantially the same for all oblique cameras, all perspective images will be substantially the same scale and have substantially the same perspective distortion. The area covered by an image is determined by the distance from the camera to the object, and the camera field-of-view.
  • the height of the imaging assembly above the object surface and the camera field-of-view should preferably be chosen so that each oblique image covers an area of the object greater than about the grid spacing, most preferably greater than about three times the grid spacing. This ensures that a detail 601 on the target object will be visible in multiple images 502 recorded from multiple grid points 501, as illustrated in Figure 7.
  • In order to generate an entire multi-perspective image database, as illustrated in the flow diagram of Figure 5, the imaging assembly must be moved to each of the grid points, at which each camera records an image. For the example of an area of a city, this may be done by mounting the imaging assembly in an aircraft and flying (412, 414) over the city along the imaginary gridlines, and recording (422) images with each camera as the aircraft crosses each grid point (416).
  • the imaging assembly may be gyro-stabilized (by gyro-stabilizers 130) to ensure that the cameras are always pointing in the same direction. Navigation of the aircraft along the imaginary gridlines and accurate recording of the camera position for each image may be facilitated by use of a global positioning system (GPS) or other positioning system (404, 410, 416). For an imaging assembly comprising twelve oblique cameras and one vertical camera, thirteen images will be recorded (422) at each grid point. Each image is stored (424, 426) in the image database, and image parameters are also stored (418, 420) for each image.
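The acquisition loop just described, one vertical plus twelve oblique exposures at every grid point, can be sketched as follows (names and the stand-in record format are assumptions for illustration):

```python
def acquire_database(grid_points, n_oblique=12):
    """For each grid point, record one vertical and n_oblique oblique images.
    Returns (grid_point, camera_index) stand-ins for the stored images:
    indices 0..n_oblique-1 are oblique cameras, index n_oblique is vertical."""
    database = []
    for point in grid_points:
        for cam in range(n_oblique + 1):
            database.append((point, cam))
    return database
```

With twelve oblique cameras this yields thirteen images per grid point, matching the count given above.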
  • GPS global positioning system
  • image parameters preferably include, but are not limited to, camera position (x_CAM, y_CAM, z_CAM), camera vertical angle (θ_CAM), camera azimuthal angle (φ_CAM), camera constant (c), and camera field-of-view (Δθ_CAM, Δφ_CAM).
  • the images and image parameters are preferably stored in digital form in a searchable format, for example, as a database in a computer memory or on one or more computer storage media. Without departing from inventive concepts disclosed and/or claimed herein, images, image parameters, data, and/or information may be stored in any suitable format, digital and/or non-digital, on any suitable storage medium.
  • image recording and the flight may be ended (428, 430).
  • Generation of a database of images as described above may be repeated at different points in time, and the time at which each image is recorded may be stored as an image parameter (t_CAM).
  • the camera height z_CAM may often be defined with respect to some reference point of the target object, but not necessarily the surface of the target object.
  • z_CAM is typically defined as the camera elevation above sea level. If the terrain imaged is not flat, the height of the camera above the ground may vary, changing the perspective parameters of the images.
  • a preferred embodiment of the present invention may therefore include as a component of the positioning system means for determining the elevation of the surface of the target object. Photogrammetry techniques may be employed using images recorded by the vertical camera to calculate an elevation profile for the area for which the image database is generated.
  • the imaging assembly may employ a range-finder or equivalent device for measuring the elevation profile of the area, preferably concurrently with the recording of the images for the image database.
  • the elevation profile may be stored and used in subsequent processing of images in the database, as set forth hereinbelow.
  • a series of images (9E, 9F, and 9G, for example) of a detail 601 of the object may be created that show the detail in the same size, and that form a near-optically-continuous sequence.
  • This may be accomplished according to the present invention by selecting (806, 808) images (9B, 9C, and 9D, for example), clipping (810) them (i.e. selecting a part of the image) or splicing them (i.e., assembling adjacent images), and scaling (812) the clipped or spliced images.
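The scaling step can be sketched numerically: to keep a detail at constant apparent size across frames, each clipped image is magnified in proportion to its camera-to-detail distance, since apparent size falls roughly as 1/distance for a fixed camera constant. This is an illustrative simplification, not the patent's prescribed method:

```python
import math

def scale_factor(cam_pos, detail_pos, ref_distance):
    """Magnification applied to one clipped image so the detail matches the
    apparent size it would have at ref_distance (>1 enlarges, <1 shrinks)."""
    d = math.dist(cam_pos, detail_pos)
    return d / ref_distance

def sequence_scales(camera_positions, detail_pos, ref_distance):
    """Per-frame scale factors for a sequence of camera positions."""
    return [scale_factor(p, detail_pos, ref_distance) for p in camera_positions]
```

Because all oblique cameras share the same focal length and vertical angle, the remaining frame-to-frame size variation comes mainly from distance, which is what these factors compensate.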
  • the continuity of the images may not be mathematically perfect, but perfect continuity is not necessary to produce an animation suitable for viewing by a human viewer when the images are played back.
  • the images in the animation sequence may preferably be played back (819) at greater than about six images per second, most preferably at about 24 images per second (a common frame rate for commercial motion pictures).
  • the animation may be played back at rates slower than about six images per second. Such a slow playback may still give a viewer the perception of motion relative to the target object, even if it appears as a "slide show" instead of a smooth animation.
  • a single camera may be used to record images in multiple camera directions.
  • the camera direction may be changed after recording each of multiple images at a given camera position, and the process repeated at successive camera positions.
  • a single camera may be employed having a wide-angled, or "fish-eye" lens. The camera may be directed vertically downward, so that the camera field-of-view may cover an entire 360° range of azimuthal angles. A single wide-angled image may therefore cover an area of the target object equivalent to the combined areas covered by a plurality of images recorded by an imaging assembly as described above.
  • Each wide-angled image may be divided into a plurality of images prior to storage, or may be stored as a single image and processed during subsequent generation of an animation sequence.
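Division of a downward-looking wide-angle image into a plurality of images might, for example, be done by azimuthal sector. The following sketch assigns each pixel to one of `n_sectors` equal wedges around the image center; the function name and sector convention are assumptions for illustration only.

```python
import math

def azimuth_sector(x, y, cx, cy, n_sectors=12):
    """Map a pixel (x, y) of a vertically-aimed fish-eye image centered at
    (cx, cy) to one of n_sectors equal azimuthal sectors (sector 0 begins
    at the +x direction and sectors increase counterclockwise)."""
    az = math.atan2(y - cy, x - cx) % (2 * math.pi)
    return int(az / (2 * math.pi / n_sectors))
```

Each sector's pixels could then be stored as a separate oblique-like image, or the whole frame stored and the division deferred to animation time.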
  • one or more cameras may be used in conjunction with one or more auxiliary optics, such as a plurality of mirrors, prisms, and/or lenses set at a variety of positions and/or orientations, to record images covering an area of the target object functionally equivalent to the combined areas covered by a plurality of images recorded by an imaging assembly as described above.
  • Each image thus recorded may be divided into a plurality of images prior to storage, or may be stored as a single image and processed during subsequent generation of an animation sequence.
  • trajectories 883 of viewing position, viewing direction, viewing distance, and/or viewing time relative to the target object may be simulated in an animation by selecting different image sequences from the large database of images.
  • Such trajectories may include as examples, but are not limited to: circular motion around an object detail with the view directed toward the detail; circular motion with the view directed radially outward from the center of the circular motion; motion along an arbitrary curvilinear path with the view directed in the direction of motion; motion along an arbitrary curvilinear path with an independently arbitrarily varying viewing direction; a view from a fixed position with a fixed viewing direction showing images recorded at successively later times, thereby showing the temporal evolution of the object details visible in the view.
  • software may be employed to select the images from the database, and clip or splice and scale them so that the viewer experiences near-continuous movement when the image sequence is played back.
  • When the images are selected and displayed at a speed of preferably greater than about six images per second, most preferably about 24 images per second, the viewer experiences the feeling of smooth motion over the target object.
  • An entire trajectory may be specified prior to any selection and/or presentation of images as an animation sequence.
  • the viewer may preferably interactively control the trajectory of viewing position, viewing direction, viewing distance, and/or viewing time, thereby allowing the viewer to virtually "fly over" the target object, looking in any desired direction as he/she does so.
  • Images may be selected, manipulated, and presented as an animation sequence and, concurrently, subsequent portions of the trajectory may be selected.
  • the viewer may zoom in and out during a trajectory so that he/she may view an animation of relatively small details of the object, or an animation of a relatively large portion of the object.
  • Pre-selected, default, and/or automatic trajectories may also be employed to generate animations.
  • An animation generated for a trajectory may be stored for later play-back.
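As a sketch of the first example trajectory above — circular motion around an object detail with the view directed toward the detail — the following generates a sequence of viewing positions and directions from which database images would then be selected. The function name and tuple format are hypothetical.

```python
import math

def circular_trajectory(cx, cy, radius, n_steps):
    """Viewing positions on a circle of the given radius around a detail at
    (cx, cy); each frame's viewing direction (azimuth in radians) is aimed
    back at the detail."""
    frames = []
    for i in range(n_steps):
        theta = 2 * math.pi * i / n_steps
        x = cx + radius * math.cos(theta)
        y = cy + radius * math.sin(theta)
        view_dir = math.atan2(cy - y, cx - x) % (2 * math.pi)
        frames.append((x, y, view_dir))
    return frames
```

For each frame, the database image recorded nearest that position and direction would be selected, clipped or spliced, and scaled as described above.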
  • a smaller vertical angle for the oblique cameras yields better viewing of vertical details of the target object (building facades and hills in the city example).
  • the smaller the camera vertical angle at a given camera height the farther away the camera is from the viewed object detail, and the more the camera needs to zoom in to get good images of the detail.
  • the more the camera is zoomed in the smaller the resulting field-of-view, and the more cameras are needed.
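This tradeoff follows directly from the camera geometry: at height z_CAM with vertical angle α below horizontal, the horizontal stand-off to the aimed-at detail is z_CAM/tan(α) and the slant range is z_CAM/sin(α), both of which grow as α shrinks. A quick sketch (function name assumed):

```python
import math

def camera_ranges(z_cam_m, vert_angle_deg):
    """Horizontal stand-off and slant range from an oblique camera at height
    z_cam_m, tilted vert_angle_deg below horizontal, to the detail it views."""
    a = math.radians(vert_angle_deg)
    ground_dist = z_cam_m / math.tan(a)   # smaller angle -> farther stand-off
    slant_range = z_cam_m / math.sin(a)   # distance the lens must cover
    return ground_dist, slant_range
```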
  • Additional data may be available about details of the target object for which a multi-perspective image database has been created. For a city, for example, address information, phone directory information, tax data, property information, business and/or Chamber of Commerce data, census data, Standard Metropolitan Statistical Area (SMSA) and/or other government data, and/or other alphanumeric data may be available.
  • Maps may be available as CAD drawings, as scanned images, or in other formats. Images acquired from the ground may also be available for many objects, and in some cases there may be accompanying digitized video or sound. Some or all of this information may have associated geographical coordinates (x_GEO, y_GEO, or equivalently longitude and latitude). Since the image parameters are known, the image position (x_IM, y_IM) for any such detail information/data can be calculated for each image. This enables generation (either manually or, preferably, automatically) of one or more links (i.e., geo-references) between the information/data and each image in the multi-perspective image database in which the relevant location appears, as illustrated in the flow diagram of Figure 10.
  • the image location of each address can be calculated for each image and so-called "hotspots" generated on the images.
  • the corresponding address, or any other information/data associated with the image location may be displayed, either superimposed on the image or in a separate display.
  • the viewer may be provided with an address finder, whereby entering an address into the system may automatically trigger the system to display an image or an animation of the lot and/or house, building, or other structure at the address.
  • the linked data can be used to do any type of data sorting and/or analysis, and the geo-references may be used to display the results of the analysis on an image or an animation of images, as shown in the flow diagram of Figure 11.
  • Analysis results may be color-coded and graphically overlaid on each image of the multi-perspective image database in real time as the viewer navigates through the database. For example, the viewer may graphically view the results of an analysis while viewing an animation and experiencing a three-dimensional "feel" for the entire target object.
  • Using vertical reference images recorded by the vertical camera of the imaging assembly, new geo-references may be created for each image, or the positions of existing geo-references may be improved.
  • the viewer may draw an outline of an object detail onto the different views simultaneously and attach data to the object detail. In this way the viewer can very finely define the size and shape of the detail in different dimensions, and the position of the object detail can be generated automatically for all images on which the detail appears.
  • Viewers may also edit data that is linked to the image database, as shown in the flow diagram of Figure 12. This can be done without affecting the images themselves. Changes in the data may be displayed superimposed on an image or animation, allowing the viewer to interactively view his/her changes visually.
  • a multi-perspective image database for an object or geographical area may be stored on a disk drive or other digital storage medium that is connected to a computer, to a computer network, or to the Internet.
  • the storage medium with the image database may be referred to as the Server (Figure 8, 802).
  • the navigation and viewing hardware/software (801) used by the viewer to move through the images can run on the Server, or on a separate computer or terminal with a network connection to the Server directly, to a common computer network with the Server, or to the Internet.
  • Navigation and viewing requests may be sent (803, 806) to the Server and the Server may select (806, 808), manipulate (clip 810 or splice, scale 812), and transmit (814, 817) the images to the viewer that comprise the desired view or animation.
  • the viewer may navigate and view an animation of an object, city, geographical area (for which an image database is stored on the Server) from anywhere in the world.
  • the viewing and navigating hardware/software, the image database, and Server software may also run on a single computer so that viewers may view animations on computers that are mobile, or computers that are not connected to the Internet or to a network.
  • a multi-perspective image database typically will be large, possibly hundreds of gigabytes for a single target object. Therefore, it may be advantageous to store the images centrally, and to transfer images to individual viewers over a network or via the Internet.
  • the apparatus and methods described above pertain primarily to a two-dimensional multi-perspective image database.
  • With such a two-dimensional image database, perceived movement in an animation (alternatively, the trajectory of viewing positions) occurs only within the horizontal plane from which the images are recorded, although a primitive sense of vertical movement can be achieved by zooming in or out.
  • a three-dimensional multi-perspective image database may be generated.
  • a multi-dimensional multi-perspective image database may also be generated having a temporal dimension.
  • a time-interval multi-perspective image database thus generated may be very useful for monitoring changes in a target object or geographical area over time.
  • a regular grid pattern for acquisition of images for the database is not strictly necessary.
  • an image database may be generated and employed in which images are recorded from an arbitrary set of viewing positions, in an arbitrary set of viewing directions (not necessarily the same for all viewing positions), and at an arbitrary set of viewing times (not necessarily the same for all viewing positions or viewing directions), provided that the image database contains sufficiently many views to cover adequately the target object for the desired viewing and/or data analysis, display, and/or editing.
  • acquisition of images from regular grid points offers the possibility of more efficient storage, selection, and/or processing of images and/or data.
  • fish-eye or wide-angle lenses may be used for recording images for the image database. Additional viewing and navigating hardware/software may be employed to create perceived camera movement within an animation wherein the vertical angle of the view changes, thereby enabling animations wherein a viewer may simulate looking forward, shifting the view to look downward, and swinging further to look backward.
  • stereo or holographic images may be stored in a multi-perspective image database and used to generate stereo or holographic animations, respectively.
  • the apparatus and methods disclosed herein are particularly well suited for generating an image database for a city or other geographical area, and for implementation using digitally recorded and stored images.
  • the cameras used in the imaging assembly preferably may be digital cameras, and preferably may be linked to a computer.
  • the computer may preferably be equipped with reception hardware/software for receiving and processing information from a global positioning system (GPS), and preferably may contain or be linked to a high volume digital storage medium, such as a large hard disk, for storage of images and image parameters.
  • the imaging assembly may be moved along grid lines over the area. At each grid point the computer triggers all cameras simultaneously, and the images are immediately downloaded to the computer and named (or numbered).
  • the names of the images and their corresponding image parameters are simultaneously stored in a database.
  • Image parameters such as x_CAM, y_CAM, and z_CAM may be obtained from, or calculated from data obtained from, the GPS.
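The trigger-and-store cycle described above might be organized as in the following sketch. `record_grid` and the `gps_fix` callback are hypothetical names, and a real system would also store the image data itself, here reduced to a named record per image.

```python
def record_grid(grid_points, camera_ids, gps_fix):
    """Simulated acquisition loop: at each grid point, 'trigger' every camera,
    name the resulting image, and store its parameters, with the imaging
    assembly position supplied by the positioning system (gps_fix)."""
    db = []
    for i, point in enumerate(grid_points):
        x, y, z = gps_fix(point)   # x_CAM, y_CAM, z_CAM for this grid point
        for cam in camera_ids:
            db.append({"name": f"img_{i:06d}_{cam}", "camera": cam,
                       "x_cam": x, "y_cam": y, "z_cam": z})
    return db
```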
  • images may be recorded by non-digital still or motion photography and converted to digital format and stored at a later time.
  • any means for determining the position of the imaging assembly relative to the target object (i.e., a positioning system) may be employed.
  • the GPS-based scheme disclosed herein is exemplary only.
  • the images may be stored in non-digital form, i.e., as physical photographs, slides, holograms, stereo images, or other form.
  • a mechanical device may be employed to store the pictures, slides, or other format, select from among them for an animation sequence, clip and/or scale projections of those images selected, and project the sequence as an animation.
  • storage of the images, image parameters, data, and/or other information, whether in digital or non-digital form, may be on any storage medium suitable for storing images, image parameters, data, and/or other information.
  • a multi-perspective image database may be created from within a target object.
  • a multi-perspective image database may be generated from a 3D computer model. Images for the database may be rendered on the computer from each grid point with 3D rendering software and then stored. Animations may be generated from the rendered and stored images as described herein. This technique may be more advantageous than real time 3D rendering when storage space may be abundant, but processing power may be limited.
  • a multi-perspective image database may be generated using a plurality of static imaging assemblies at a plurality of locations relative to the target object, rather than moving an imaging assembly to a plurality of locations.
  • If a target object includes surface details which move during acquisition of a multi-perspective image database (cars moving along city streets, for example), those moving surface details may be observed to appear, disappear, or move irregularly in an animation generated from the image database.
  • transient surface details may be removed from images in the database. This may be accomplished automatically for digitally stored images through the use of optical pattern recognition software to detect and remove the moving surface details.
  • EXAMPLE CREATING A MULTI-PERSPECTIVE IMAGE DATABASE FOR A CITY.
  • the following is exemplary only and shall not be construed so as to limit the scope of the present invention.
  • Mount an imaging assembly, comprising digital cameras mounted on a gyro-stabilizer, on the underside of a relatively slow-flying airplane, such as a Cessna 172 (which can fly at about 70 mph).
  • an unmanned drone aircraft or miniature aircraft may be used, either automated or by remote control.
  • Aim all twelve (or 16, 32, etc.) oblique cameras with substantially the same vertical angle of between about 15° and about 75° downward from horizontal.
  • the multi-perspective image database may be linked to any address database (such as a county address database) that has longitude and latitude information (or contains State Plane coordinates).
  • This link may be used to locate a given address on any image in which it appears.
  • the viewer may be provided with an address finder that will generate an animation of a house, building or other object, after entering the address.
  • the address finder and the resulting animation may be provided over the Internet, thereby enabling the process to be done remotely.
  • This type of link between geographical data and a location on a map or image is an example of a geo-reference.
  • the vertical images in the database may be used to link data to all images. For example, using the geo-reference link to the address database, determine for each house its exact longitude and latitude. Locate and measure the horizontal size of each building or house from the vertical images, and calculate the height of each building or house shown in each oblique image in the multi-perspective image database in which it appears. These calculations are possible since all image parameters are known and have been stored (elevation profile data for the terrain may be required). Store the calculated size information in the address database. It is now possible to calculate both the approximate location and size of each building or house in each image, thereby enabling creation of so-called "hotspots" on each image and linkage of those hotspots to information in the address database.
  • a "hotspot file” may be created for each image listing which structures are shown on which pixels of the image.
  • the hotspot files may be stored with the database.
  • a viewer may click on any building in any image to retrieve the available data for that building, or a viewer may search for houses which fit viewer-specified search parameters, and a search program may highlight houses in each image which fit the search parameters. Differing ranges of such search parameters may be highlighted in different ways (by color-coding, for example) to facilitate analysis of the data. For example, a viewer may define different colors for different price ranges for a house. After determining the color for each house, its corresponding hotspot in each image may be drawn in the appropriate color.
  • the analysis of the property values may be shown in real time as the viewer interactively "flies" over the virtual city.
  • the color-coded hotspots may be drawn on each image before it is displayed as part of an animation, thereby not affecting the "3D feel" of the animation.
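The viewer-defined price-range color-coding described above might be sketched as follows; the band format (a list of upper price limits with colors) and all names are illustrative assumptions.

```python
def price_color(price, bands):
    """Return the color for a price given viewer-defined bands, e.g.
    [(100000, 'green'), (250000, 'yellow')]; prices above every band
    fall through to 'red'."""
    for limit, color in bands:
        if price < limit:
            return color
    return "red"

def color_hotspots(houses, bands):
    """Attach a display color to each house's hotspot before the image
    containing it is drawn as part of an animation."""
    return {h["id"]: price_color(h["price"], bands) for h in houses}
```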
  • Lines indicating roads may be drawn on the vertical reference images, and used to graphically create a database of road identities and locations for each vertical image in the image database.
  • Road identities and locations may alternatively be obtained from digitally-stored and/or CAD-generated maps.
  • the position of a road may be calculated in each oblique image in the multi-perspective image database and this location overlaid on the images while moving through the image database, by highlighting the road in a certain color in each image, for example.
  • the pixel location in each image for each latitude and longitude point is known or can be calculated.
  • the image database may therefore be linked to a GPS receiver/card in a computer in a car.
  • the system may update and display the location of the car in real time as the viewer/driver simultaneously "moves" through the multi-perspective image database of the city and drives around the city.
  • EXAMPLE CITY DATABASE SIZE.
  • the following is exemplary only and shall not be construed so as to limit the scope of the present invention.
  • a medium-sized city such as the Eugene-Springfield (OR) area comprises roughly 200 square kilometers, requiring over 1,000,000 images occupying more than 150 gigabytes (GB) of storage space.
  • Image database size may be reduced by using a lower image resolution and/or by using 8-bit images, at the potential expense of degraded image quality. Compression algorithms may be used, at the potential expense of slower access to images in the database.
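The figures above can be reproduced with back-of-the-envelope arithmetic. The grid footprint per camera position, the cameras per position, and the per-image size used below are illustrative assumptions chosen only to match the order of magnitude quoted in the text (over 1,000,000 images, more than 150 GB), not disclosed values.

```python
def database_size(area_km2, km2_per_position, images_per_position, mb_per_image):
    """Rough image count and storage (in GB) for a city-scale database."""
    positions = area_km2 / km2_per_position
    n_images = positions * images_per_position
    return n_images, n_images * mb_per_image / 1024.0
```

For example, a 50 m grid (0.0025 km² per position) over 200 km² with 13 cameras (one vertical plus twelve oblique) at roughly 0.15 MB per compressed image gives about 1,040,000 images and about 152 GB.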
  • many city image databases would probably not fit onto current CD-ROM media or DVD-formatted media.
  • distribution of images over the Internet or over a computer network from a central storage medium is a preferred means for enabling viewer access to the image database, particularly since most viewers would only require access to a relatively small fraction of the total number of images available in the database.
  • a traveler may enter an address or city and start flying over the city, checking out hotels, restaurants, places to visit, and other travel information;
  • Public utilities may visually check the physical environment and existing structures and utility infrastructure when scheduling repair jobs or planning construction. Images, animations, and/or information from the database may be accessed from an office or from the field, and information may be added regarding work progress, repair reports, account activity, etc.;
  • Delivery or parcel services may use the image database to help find addresses and choose optimal routing;
  • Architects and/or city planners may use animations to get a feel for the existing buildings and environment, to superimpose designs onto images or animations, and/or use animations and images to determine location of certain objects;
  • Emergency services and law enforcement personnel may check out a building or site in an animation from the dispatch room or from the emergency vehicle before arriving at the site.
  • Databases for emergency call (i.e., 911 call) data, property ownership data, crime statistics, floor plans, ground photos, and/or hazardous materials data may be linked to the image database.
  • Pre-fire, pre-raid, or pre-emergency plans may be formulated with the aid of images and animations from the database;
  • Insurance companies may view a property in an animation in the course of processing an insurance application or claim;
  • Traffic planners may link the image database to traffic planning software, run analyses, and view the results in an animation;
  • Real estate professionals may view/display properties for rent or sale in animations, and may link the image database to databases containing information on such properties, such as floor plans, price, tax, and market data, and ground photos;
  • Animations may be used during litigation or a trial to show a crime scene or to illustrate a sequence of events;
  • An image database may be linked to a demographic database and used to display results of analyses superimposed on images or animations;
  • a vehicle may be linked to a positioning system and tracked in real time on an image or animation, either in the vehicle or at a remote location.
  • OTHER USE EXAMPLES.
  • The following is exemplary only and shall not be construed so as to limit the scope of the present invention.
  • a multi-perspective image database may have uses in other areas, including but not limited to:
  • a multi-perspective image database may be generated for a planetary surface and used to create animations for virtual exploration and research;
  • a multi-perspective image database may be generated for an undersea area and used to create animations for virtual exploration and research;
  • a multi-perspective image database may be used by geologists and/or cartographers.
  • a multi-perspective image database may be generated for a microscopic surface and used to create animations for virtual exploration and research in such diverse areas as medical applications, microbiology, materials science, etc.
  • the image database may comprise images recorded by any method of microscopy, including but not limited to optical microscopy, phase contrast microscopy, scanning tunneling microscopy, atomic force microscopy, near-field scanning microscopies, fluorescence microscopy, electron microscopy;
  • a multi-perspective image database may be generated for management of natural and industrial resources, including but not limited to forests, farmlands, watersheds, mining areas, oil and natural gas fields, pipelines, etc., and the image database may be linked to databases for inventory, growth, harvest, seasonal changes, etc., and used by government agencies, private industrial companies, or environmental activist organizations;
  • a multi-perspective image database may be used to plan construction or expansion of highways;
  • a multi-perspective image database may be generated for a virtual world and used as a backdrop for computer gaming, allowing a player to play the game from any animated view of the virtual world.
  • An array of stationary imaging assemblies may be installed around and above an arena, stadium, or sports facility and used to continuously record images during sporting or other events. Viewers may view the image sequences in real time to view the event live and may continuously and interactively change their view of the event while watching from a remote location (to "follow the play", for example). Earlier images may be reviewed, allowing "instant replay” of any portion of the event from any angle, with obvious implications for sports officiating, for example.
  • For an oblique camera at position x_CAM and elevation z_CAM, aimed downward at vertical angle α_CAM, the optical axis meets the ground at x_CTR = x_CAM + z_CAM / tan(α_CAM), at a slant range of z_CAM / sin(α_CAM) from the camera.
  • From these quantities, the image scale, and the foreshortening factor sin(α_CAM), the corresponding image coordinates (x_IM, y_IM) of a given object detail may be calculated for a given image.
  • This algorithm may also be used to determine whether a given object detail appears in a given image. If calculated values of x_IM and y_IM fall outside the range of the image in question, then the object detail does not appear in that image.
  • an irregular target object surface may be approximately characterized by a two-dimensional array of object surface height values (i.e., an elevation profile).
  • an elevation profile may be independently available, or may be generated within the scope of the present invention, as described hereinabove, by photogrammetry (using vertical reference images) or by direct measurement (range-finding or other equivalent methods).
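A flat-terrain sketch of the projection and visibility test described above follows. The camera is assumed aimed along the +x axis, the along-view foreshortening is taken as sin(α_CAM), and all names and constants are illustrative; the disclosed formulas involve additional image parameters not fully recoverable here, so this is an illustrative geometry rather than the disclosed algorithm.

```python
import math

def image_position(x_geo, y_geo, x_cam, y_cam, z_cam, vert_angle_deg,
                   scale_px_per_m, width_px, height_px):
    """Project ground point (x_geo, y_geo) into an oblique image, returning
    pixel coordinates relative to the image center, or None when the point
    falls outside the image (the visibility test)."""
    a = math.radians(vert_angle_deg)
    x_ctr = x_cam + z_cam / math.tan(a)      # optical axis meets the ground
    x_im = scale_px_per_m * math.sin(a) * (x_geo - x_ctr)  # along-view axis
    y_im = scale_px_per_m * (y_geo - y_cam)                # across-view axis
    if abs(x_im) > height_px / 2 or abs(y_im) > width_px / 2:
        return None                          # detail not visible in this image
    return x_im, y_im
```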

Abstract

A multi-perspective image database for a target object may be generated by: recording (element 10) an image of the object with a camera on an imaging assembly; determining the position (element 20) of the imaging assembly relative to the object; storing the image (element 30) recorded by the camera; storing image parameters and the position (element 40) of the imaging assembly corresponding to the image; moving the imaging assembly to a new position relative to the object; and repeating the preceding steps. An animation sequence may be generated using images in the database by: selecting a trajectory (element 70) of viewing position, viewing direction, viewing distance, and/or viewing time by a user; selecting, cropping or splicing, scaling, and sequencing (element 60) images from the image database using the user-selected trajectory, stored image parameters, and stored imaging assembly positions; and presenting (element 90) the selected, scaled, and sequenced images from the image database as an animation sequence.

Description

ACQUISITION AND ANIMATION OF SURFACE DETAIL IMAGES
Inventor: Eimar M. Boesjes
RELATED APPLICATIONS This international patent application filed under the PCT claims the benefit of the priority date established by prior-filed, co-pending United States Provisional Application Serial No. 60/068,414, filed 12/22/1997, said application being hereby incorporated by reference as if fully set forth herein.
TECHNICAL FIELD The field of the present invention relates to animation. In particular, apparatus and methods are described herein for generating and storing an image database and for interactively generating animation sequences using images in the database.
BACKGROUND ART
For the last 30 years or so, many people have looked for an inexpensive and easy way to create animations of large complex existing target objects such as cities or terrain. A picture is a two-dimensional view of one or more objects. Film, or animation, is a linear sequence of pictures. Film or animation may be viewed as a one-dimensional array of two-dimensional pictures. The present invention discloses apparatus and methods for creating two- or higher-dimensional arrays of pictures that would allow a viewer to create his/her own film or animation of the object.
ANIMATION AROUND AN OBJECT. Systems exist to acquire perspective images of relatively small target objects from all angles around the object. The images are acquired from an imaginary sphere around the object or the object is rotated to achieve the same effect. All images are acquired at the same scale, from the same distance to the object, and straight toward the object. The interval distance between the images is identical in all directions around the object. Usually, all images show the entire object or at least a significant part of it. The images are stored in a computer database. Viewers can now play the images back and view animations of moving around the object on an imaginary sphere. Users can interactively choose the camera movement while they are viewing it. This technology only works for small objects that cameras can move around and where the entire object, or a significant part of the object, is visible in each image. Tiny details in the object surface are not visible. The present invention provides apparatus and methods to create animations of objects that are very large relative to the amount of detail one might want to see in the animation, by using an image database. This is the case, for instance, when creating an aerial animation of a geographical area. The earth is very large as compared to the size of an individual house that one might want to view in an animation. This is also the case when creating a microscopic animation of a skin surface. Here the size of the object is also very large relative to the amount of detail one might wish to see. Each image will only show a very small detailed part of the object surface. The details that one might want to see may not be visible in a straight-on picture (i.e., acquired from directly above).
The image database is generated using a combination of oblique cameras that are targeted at an angle greater than about 0° and less than 90° downward from horizontal toward the object, so that vertically oriented details of the object may be seen in an animation.
PANORAMIC ANIMATIONS. Microsoft's SurroundVideo® and Apple Computer's QuickTime® VR use a system where a camera is placed inside a scene. The camera is rotated and images are acquired from all directions. The images are stored in a computer and then assembled so that a single seamless image strip is created showing all directions. The image is projected onto a cylindrical object around the viewer. The viewer can now interactively create panoramic animations by moving around and viewing in all directions. An undesirable artifact of this technology is that wide-angle type views are generated. This means that the true size and shape of objects is distorted and less comprehensible to the viewer. In the present invention montages of images that are not wide angle are created, so no wide angle artifacts are generated. Furthermore, the image database may be generated in the present invention using a large number of camera positions, thereby allowing camera movements from one point to another in an animation. In this way the viewer can create an animation of images acquired from different points, thereby showing an animation of movement over the object surface. With SurroundVideo® and QuickTime® VR only images acquired from a single point within the object are animated. Another difference is that SurroundVideo® and QuickTime® VR acquire the images from inside the object, whereas in the present invention images are acquired at a distance from the object.
SATELLITE IMAGES. With satellites, images of the earth's surface are acquired. These images can be stored and played back so that animations are created of the earth's surface. Because satellites are relatively far from the earth all images are essentially vertical, acquired at an angle close to 90° downward from horizontal. This means that vertical details cannot be seen very well, if at all. The facades of houses are usually not visible on satellite images, whereas the rooftops are. For comprehending a geographical area viewers are accustomed to seeing facades clearly, whereas the rooftops look unfamiliar. Facades are visible in oblique aerial images that are acquired at an angle greater than about 0° and less than 90° downward from horizontal. In the present invention, animations are created using oblique aerial photos. This allows viewers to clearly see the relationship of detailed objects, including their vertical components. The same image quality cannot be achieved by animating satellite images. STEREO IMAGES. Technologies exist to generate two images of an object that have a tiny perspective difference. This perspective difference corresponds with the perspective difference between a viewer's eyes when looking at a projection screen that is placed at a predetermined distance from the eyes. Both images are now projected simultaneously in such a way that the left eye only sees the left perspective image, and the right eye only sees the right perspective image. The result is that the viewer can see the image depth and gets a "true feel" for perspective. Stereo images can be used in still images or in animated images. The present invention may be implemented with regular images and may also be implemented with stereo images.
3D MODELING. Three-dimensional (3D) models can be created in a computer, and perspective images can be generated and animated from the 3D models. 3D models contain a three-dimensional geometrical description of the object, or of the surfaces of the object. Perspective images can be created from any position in the 3D model. The perspective images can be sequenced to create an animation. In the present invention there is no 3D geometrical model, and there are no lines or surfaces stored that describe the object geometry. The image database used in the present invention is a collection of two-dimensional images. It may employ a 2D array of grid points with a reference to the images that have been generated at each such grid point relative to the object. The problem with 3D models is that it is exceedingly difficult to create a model of a large area that is completely accurate. It is not feasible, for instance, to create a model of a city that has each leaf on each tree correctly positioned. It is therefore not feasible to create images that are as accurate as photographs. In the present invention, actual photos of the object are stored in the image database, and therefore accurate animations can be created. In addition, creating the image database involves less human work than generating a 3D model, since it can be created automatically. It would be extremely labor intensive to generate a 3D model for an object such as a city with accuracy sufficient to produce photographic-quality animations. Generation of an animation using such a multi-perspective image database requires less calculation than 3D rendering, and it yields more realistic, photographic image quality for existing cities or terrain. Experience has shown that creating photographic databases of cities is actually less work than expected.
By experimenting with flying at different heights, using different cameras, and using different camera lens lengths, an optimal scheme may be devised to photographically map entire cities or areas and create a mosaic of images that are comprehensible to the average viewer. Images can be selected from the database and scaled in such a way that a nearly optically continuous image sequence is presented to a viewer as an animation. However, the image database requires a large amount of computer storage space, typically much more than a 3D model.
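The 2D array of grid points described above, with a reference to the images generated at each grid point, can be sketched as a simple data structure. The class and field names below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of a multi-perspective image database: a 2D grid of
# camera positions, each holding one image reference per camera direction.
# Structure and names are illustrative, not the patent's actual format.

class ImageDatabase:
    def __init__(self, nx, ny, n_cameras):
        self.nx, self.ny, self.n_cameras = nx, ny, n_cameras
        self._images = {}  # (ix, iy, cam) -> stored image reference

    def store(self, ix, iy, cam, image_ref):
        """Record the image taken by camera `cam` at grid point (ix, iy)."""
        self._images[(ix, iy, cam)] = image_ref

    def images_at(self, ix, iy):
        """All images recorded at one grid point (one entry per camera)."""
        return [self._images.get((ix, iy, c)) for c in range(self.n_cameras)]

db = ImageDatabase(nx=100, ny=100, n_cameras=13)  # e.g. 12 oblique + 1 vertical
db.store(10, 20, 0, "img_10_20_cam0.jpg")
```

Because only two-dimensional images and grid references are stored, no geometrical description of the object is needed, consistent with the distinction from 3D modeling drawn above.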
PHOTOGRAMMETRY. A multi-perspective image database is also distinctly different from photogrammetry. In a multi-perspective image database it is essential to create oblique images of an object. In oblique images there is perspective, whereas images used for photogrammetry should have as little perspective as possible. Because each image in the database contains perspective, it is possible to create a sequence of images that gives the feeling of moving around or over an object. This is not possible with images that show little or no perspective. The change in perspective in the sequence of images makes it possible to see the exact relationship between details of the object. In a city database, for instance, the exact position and height of a tree next to a house can be seen. Without perspective images, and without a nearly optically continuous image sequence, this is not possible.
It is therefore desirable to provide apparatus and methods for generating a multi-perspective image database comprising images of a target object recorded from a variety of viewing positions and in a variety of viewing directions. It is desirable to provide apparatus and methods for storing the image database and linking each image in the database to information related to image parameters and/or to information related to details of the target object shown in the image. It is desirable to provide apparatus and methods for enabling a user to choose a trajectory of viewing positions, viewing directions, and/or viewing times for viewing an animation of the target object. It is desirable to provide apparatus and methods for selecting and manipulating a sequence of images from the database, and for presenting the sequence as an animation. In film or video, images are animated in a sequence. The sequence of images can produce a feeling of motion through the scene or movement of objects in the scene. However, the sequence of images can only be played forward and backward, and the viewer has no control over what is viewed, in what order, or from what direction. By using a multi-perspective image database according to the present invention, a viewer has full control of the animation, and may move and view in any direction, thereby creating his or her own "film" of the target object shown in the image database. This offers a great potential for data analysis, travel information, city planning, space exploration, computer gaming, and many other fields. By storing such a multi-perspective image database of an object or city on a storage medium connected to a computer network, such as the Internet, any viewer may "fly" over the object or city from anywhere in the world. With sufficient computer network connection bandwidth and/or network access speed, the viewer may remotely "fly" over a city interactively in real time.
DISCLOSURE OF INVENTION
Certain aspects of the present invention may overcome one or more aforementioned drawbacks of the previous art and/or advance the state-of-the-art of object animation apparatus and methods, and in addition may meet one or more of the following objects: To provide apparatus and methods for generating an image database, the image database comprising images of a target object recorded from a variety of viewing positions and in a variety of viewing directions;
To provide an imaging assembly for generating an image database, the image database comprising images of a target object recorded from a variety of viewing positions and in a variety of viewing directions;
To provide an imaging assembly for generating an image database, the imaging assembly comprising at least one camera;
To provide an imaging assembly for generating an image database, the imaging assembly comprising at least one wide-angled camera; To provide an imaging assembly for generating an image database, the imaging assembly comprising a plurality of cameras pointing radially outward and downward toward the target object;
To provide means for providing relative motion between the imaging assembly and the target object to record images of the object from a variety of viewing positions and in a variety of viewing directions;
To provide apparatus and methods for storing the image database and linking each image in the database to information related to its respective image parameters;
To provide apparatus and methods for storing the image database and linking each image in the database to information related to details of the target object shown in the image;
To provide apparatus and methods for storing the image database and linking each image in the database to the position of the imaging assembly, to the image parameters of the camera in the imaging assembly which generated the image, and/or to the time at which the image was recorded; To provide apparatus and methods for storing the image database and linking each image in the database to the target object coordinates of the corresponding area of the target object shown in the image;
To provide apparatus and methods for storing the image database and linking image coordinates of a detail of the target object shown in each image in the database to information related to the detail of the target object;
To provide apparatus and methods for enabling user selection of a trajectory of viewing positions, viewing directions, viewing distances, and/or viewing times for viewing an animation of the target object; To provide apparatus and methods for enabling interactive user selection of a trajectory of viewing positions, viewing directions, viewing distances, and/or viewing times for viewing an animation of the target object;
To provide apparatus and methods for selecting and/or manipulating a sequence of images from the database based on the user-selected trajectory of viewing positions, viewing directions, viewing distances, and/or viewing times;
To provide apparatus and methods for presenting a sequence of selected and/or manipulated images as an animation for viewing by the user;
To provide apparatus and methods for interactively presenting a sequence of selected and/or manipulated images as an animation for viewing by the user;
To provide apparatus and methods for presenting linked information pertaining to a detail shown in an image and/or an animation to a user; and
To provide apparatus and methods for interactively presenting linked information pertaining to a detail shown in an image and/or an animation to a user.
One or more of the foregoing objects may be achieved in the present invention by an apparatus comprising: an imaging assembly comprising at least one camera; means for moving the imaging assembly relative to the target object; means for determining a position of the imaging assembly relative to the target object; means for storing an image recorded by the camera; and means for storing image parameters and the position of the imaging assembly corresponding to each image. One or more of the foregoing objects may be achieved in the present invention by a method comprising the steps of: a) recording an image of the target object with a camera on an imaging assembly; b) determining the position of the imaging assembly relative to the target object; c) storing the image recorded by the camera; d) storing image parameters and the position of the imaging assembly corresponding to the image; e) moving the imaging assembly to a new position relative to the target object; and f) repeating steps a) through e). One or more of the foregoing objects may be achieved in the present invention by an apparatus comprising: means for storing the image database; means for storing image parameters and imaging assembly position for each image in the image database; means for selecting a trajectory of viewing position, viewing direction, viewing distance, and/or viewing time by a user; means for selecting, cropping or splicing, scaling, and sequencing images from the image database using the user-selected trajectory, stored image parameters, and stored imaging assembly positions; and means for presenting the selected, scaled, and sequenced images from the image database as an animation sequence.
One or more of the foregoing objects may be achieved in the present invention by a method comprising the steps of: a) selecting a trajectory of viewing position, viewing direction, viewing distance, and/or viewing time by a user; b) selecting, cropping or splicing, scaling, and sequencing images from the image database using the user-selected trajectory, stored image parameters, and stored imaging assembly positions; and c) presenting the selected, scaled, and sequenced images from the image database as an animation sequence.
Preferably an imaging assembly may comprise a plurality of cameras, all pointed toward the target object at about the same vertical angle of greater than about 0° and less than 90° downward from horizontal. The fields-of-view of the cameras preferably cover a full circle when viewed from above. An imaginary grid of points may be defined over the object. As the cameras are moved over the imaginary grid each camera records an image at each grid point, and these images may be stored, along with image parameters, thereby creating a multi-perspective image database. A viewer may interactively "move" over the grid and change the viewing direction and image size by selecting a trajectory of viewing position, viewing direction, viewing distance, and/or viewing time. Images may be selected, cropped or spliced, and scaled so that a near optically-continuous sequence of images is created. The images may be played back at a speed sufficiently great that a viewer may perceive "movement" relative to the object. The images may be played back preferably at a rate greater than about 6 images per second, most preferably at about 24 images per second (a common frame rate for a motion picture). Additional objects and advantages of the present invention may become apparent upon referring to the preferred and alternative embodiments of the present invention as illustrated in the drawings and described in the following written description and/or claims.
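The selection scheme just summarized — snapping each point of the viewer's trajectory to a grid point and choosing the camera whose azimuth best matches the requested viewing direction — might be sketched as follows. The grid spacing, camera count, and function names are illustrative assumptions, not the patent's specified values:

```python
def nearest_grid_point(x, y, spacing):
    """Snap a continuous viewing position to the nearest imaginary grid point."""
    return round(x / spacing), round(y / spacing)

def best_camera(view_azimuth_deg, n_oblique=12):
    """Pick the oblique camera whose azimuth is closest to the requested
    viewing direction; cameras are assumed evenly spaced over 360 degrees."""
    step = 360.0 / n_oblique
    return round(view_azimuth_deg / step) % n_oblique

# A viewer trajectory of (x, y, azimuth) samples, e.g. 24 samples per second
# of animation; units here are arbitrary, with a hypothetical 25 m grid.
trajectory = [(105.0, 212.0, 28.0), (118.0, 212.0, 33.0)]
frames = [(*nearest_grid_point(x, y, spacing=25.0), best_camera(az))
          for x, y, az in trajectory]
```

Each resulting tuple (grid x, grid y, camera index) identifies one stored image; cropping and scaling those images would then produce the near optically-continuous sequence described above.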
BRIEF DESCRIPTION OF DRAWINGS
Figure 1 is a block diagram showing an overview of apparatus and methods according to the present invention.
Figure 2A shows an isometric view of an imaging assembly according to the present invention, while Figure 2B shows a side view of a single camera/gyro-stabilizer assembly.
Figures 3A and 3B show, respectively, top and isometric views of the camera directions of an imaging assembly according to the present invention.
Figure 4 shows a top view of the fields-of-view of cameras on an imaging assembly according to the present invention. Figure 5 shows a flow diagram for generation of a multi-perspective image database according to the present invention.
Figure 6 shows a top view of multiple camera directions from each of multiple camera positions on grid points, according to the present invention.
Figure 7 shows a top view of a target object detail with camera positions and directions corresponding to images showing the detail, according to the present invention.
Figure 8 shows a flow diagram for generation of an animation sequence from a multi- perspective image database, according to the present invention.
Figure 9A shows a top view of a target object detail with camera positions and directions corresponding to images showing the detail, while Figures 9B, 9C, and 9D show images in the multi-perspective image database showing the detail, and Figures 9E, 9F, and 9G show images clipped and scaled to form part of an animation sequence showing the detail.
Figure 10 is a flow diagram showing linkage of data, images, maps, audio, and/or video to a multi-perspective image database according to the present invention.
Figure 11 is a flow diagram showing analysis and display of data linked to a multi- perspective image database according to the present invention.
Figure 12 is a flow diagram showing editing of data linked to a multi-perspective image database according to the present invention.

MODES FOR CARRYING OUT THE INVENTION
For purposes of the present written description and/or claims, the following definitions shall be used:
2D - two dimensional 3D - three dimensional animation - alternatively "object animation" or "animation sequence" - any sequence of images which may give a viewer a perception of a changing scene. The viewer may perceive apparent motion and/or evolution of one or more objects in the scene, and/or apparent motion of the viewer's viewing position and/or viewing direction relative to the scene. camera coordinates - alternatively xCAM, yCAM, and zCAM - the three-dimensional Cartesian coordinates of the imaging assembly relative to the target object. A preferred convention for such Cartesian coordinates has the xy-plane oriented substantially horizontally and substantially coinciding with the surface of the target object and the z-axis substantially perpendicular to the surface of the object, although other conventions may be used without departing from inventive concepts disclosed and/or claimed herein. camera setup - alternatively "imaging assembly" - a set of cameras used to create a multi-perspective image database, and typically comprising a plurality of oblique cameras and one vertical camera clip an image - alternatively "crop an image" - select a part of an image for viewing continuous image sequence - also "near-continuous image sequence", "optically-continuous image sequence", or "near-optically-continuous image sequence" - a series of images where the changes in perspective and scale of each consecutive image are (nearly) continuous, so that when played back at sufficient speed it produces the appearance of smooth camera movement. The term "near-continuous" is used because from a mathematical point of view, the sequences generated by the present invention are only approximately continuous, not exactly continuous. An object of the present invention is to produce a sequence that appears nearly continuous to the human eye.
Such a sequence may be played back at a rate of preferably greater than about 6 images per second, most preferably about 24 images per second (a common frame rate for motion pictures). Even at relatively slow playback speeds (similar to a slide show), such a sequence may yield a perception of motion relative to the target object. image database - alternatively "multi-perspective image database" or simply "database" - collection of all images. However, other databases are referred to herein containing other types of data. image parameter - camera position (xCAM, yCAM, zCAM), camera vertical angle (ΘCAM), camera azimuthal angle (ΦCAM), camera constant (c), camera field-of-view (ΔΘCAM, ΔΦCAM), and/or other parameters describing the recording of a given image of the target object. A database of these parameters for each image in the image database may be generated and stored for use in subsequent selection, manipulation, sequencing, and/or playback of images in an animation sequence. detail - alternatively "object detail", "target object detail", or "detail of the target object" - a detail or feature of the target object which appears in one or more images of a multi-perspective image database geographical coordinates - alternatively xGEO, yGEO, and zGEO, "object coordinates", "target object coordinates", or "physical coordinates" - the three-dimensional Cartesian coordinates of a particular detail of the target object, preferably using the same convention set forth for the camera coordinates, although other conventions may be used without departing from inventive concepts disclosed and/or claimed herein. The term "geographical" may be used when the target object is the earth's surface or a portion thereof, or may be used in the more general case of any target object for which an image database has been generated.
geo-reference - a link between a location on a map or image and data related to a detail of the object corresponding to the location positioning system - any apparatus and/or method used to determine a position relative to the target object (i.e., for determining object coordinates). A positioning system may include a system for measuring and/or storing an elevation profile for the object surface.
GPS - alternatively "global positioning system" - a positioning system that calculates its location on earth (latitude, longitude, altitude) using satellite-broadcast information. This location data may be digitally stored. grid - a systematic field of points in a substantially horizontal plane, typically laid out as
Cartesian coordinates. The grid spacing may be small relative to the size of the target object and to the size of the area of the target object corresponding to an image. grid size - alternatively "grid spacing" - the distance between two adjacent grid points
horizontal plane - plane substantially parallel to the surface of the target object. In a preferred convention, the horizontal plane which substantially coincides with the surface of the object is also referred to as the xy-plane, although other conventions may be used without departing from inventive concepts disclosed and/or claimed herein. For a target object having a curved surface (the earth, for example) a "horizontal plane" may be defined locally, and may not be parallel to or coincide with a "horizontal plane" at a distant location. Similarly, for an object with surface roughness on distance scales smaller than about the distances shown in an individual image, an "average horizontal plane" may be defined locally which most nearly approximates the rough surface locally. hotspot - an area or collection of areas on an image sharing a common attribute. With the use of hotspots, data can be linked to images. Hotspots can also be used to display certain characteristics of the data superimposed on the image. Selection and activation of a hotspot by a pointing device in a graphical user interface (GUI) environment may result in a certain action such as the retrieval of data, actions of the software, etc. image coordinates - alternatively xIM and yIM - two-dimensional Cartesian coordinates used to specify a location in an image. In a preferred convention, the xy-plane corresponds to the plane of the two-dimensional image, with the origin at the center of the image, the x-axis horizontal, and the y-axis vertical, although other conventions may be used without departing from inventive concepts disclosed and/or claimed herein.
Internet - any worldwide computer network that allows computers to communicate with each other using standardized communication protocols intranet - a computer network that allows computers from a selected group (a company, a department, an organization) to communicate with each other using standard communication protocols
KB - kilobyte
LAN - alternatively "local area network" - a network of computers at a location MB - megabyte object - alternatively "target object" - the physical/spatial object for which a multi-perspective image database is generated oblique camera - a camera pointed down at the target object with a vertical angle (i.e., the angle downward from horizontal) less than 90°, so that a perspective image is recorded pan - to move, or scroll, sideways, upward, or downward in an image on a screen vertical angle - alternatively ΘCAM - the angle at which a camera is aimed, measured downward from a horizontal plane. A vertical angle of 0° corresponds to a camera directed horizontally, while a vertical angle of 90° corresponds to a camera directed vertically downward. vertical axis - alternatively "z-axis" - coordinate axis directed substantially perpendicularly to the surface of the target object and in a preferred convention through the origin of the xy-plane, although other conventions may be used without departing from inventive concepts disclosed and/or claimed herein. For a target object having a curved surface (the earth, for example) a "vertical axis" may be defined locally, and may not be parallel to a
"vertical axis" at a distant location. Similarly, for an object with surface roughness on distance scales smaller than about the distances shown in an individual image, a vertical axis may be defined substantially peφendicular to an "average horizontal plane", which may be defined locally and which most nearly approximates the rough surface locally. vertical plane - a plane perpendicular to a horizontal plane. A vertical plane is parallel to the vertical axis. Vertical planes may be defined locally for curved and/or rough target object surfaces. vertical camera - a camera pointed at the target object with a vertical angle of about 90°, i.e., vertically downward. The images created by such a camera will appear flat and show little perspective.
WAN - alternatively "wide area network" - a network of computers at multiple locations z-direction - the direction perpendicular to a horizontal plane, i.e., directly upward or downward. In a preferred convention, the imaging assembly is considered "above" the target object (positive value for zCAM) and pointed "down" at the target object, although other conventions may be used without departing from inventive concepts disclosed and/or claimed herein. zoom-in - to enlarge the size of the object in an image on the screen so it appears larger and closer zoom-out - to decrease the size of the object in an image on the screen so it appears smaller and farther away.
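The per-image parameters listed in the definitions above (camera position, vertical and azimuthal angles, camera constant, field-of-view, and optionally recording time) could be gathered into a single record per image. The field names and example values below are illustrative assumptions, not the patent's storage format:

```python
from dataclasses import dataclass

# Hypothetical record of the image parameters stored for each image in the
# multi-perspective image database.
@dataclass
class ImageParameters:
    x_cam: float        # camera position in object coordinates
    y_cam: float
    z_cam: float
    theta_cam: float    # vertical angle, degrees downward from horizontal
    phi_cam: float      # azimuthal angle, degrees
    c: float            # camera constant (roughly lens-to-focal-plane distance)
    fov_theta: float    # vertical field-of-view, degrees
    fov_phi: float      # horizontal field-of-view, degrees
    t_cam: float        # recording time, e.g. seconds since a reference epoch

# Example with hypothetical values: an oblique camera 800 m up, aimed 25
# degrees downward, facing azimuth 90 degrees.
p = ImageParameters(x_cam=1200.0, y_cam=3400.0, z_cam=800.0,
                    theta_cam=25.0, phi_cam=90.0, c=0.15,
                    fov_theta=30.0, fov_phi=40.0, t_cam=0.0)
```

Storing such records in a searchable form is what later allows the system to compute which images show a given object detail and where in each image it appears.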
For purposes of the present written description and/or claims, any of the terms "means for storing the image database", "means for storing image parameters", "means for selecting a trajectory", "means for selecting, scaling, and sequencing images", "means for presenting", and any other "means" described herein shall denote any device, machine, apparatus, hardware, software, combination thereof, and/or equivalent thereof capable of performing the specified functions, including but not limited to one or more computers, storage media, servers, server software, browser software, user interface hardware and/or software, terminals, networked devices, the Internet, the World Wide Web, an intranet, an extranet, a LAN, a WAN, modems, memory units, storage devices, processors, distributed computing resources, integrated circuits, ASICs, functional equivalents thereof, and/or combinations thereof. For purposes of the present written description and/or claims, any medium suitable for storage of information, data, and/or images, whether in digitally-encoded form or in non-digital form, or any combination of and/or functional equivalents of such media, may be utilized for any of the various storage functions described herein. In particular, such storage media may be distributed over a plurality of storage locations.
For purposes of the present written description and/or claims, the term "network connection" shall encompass any apparatus, hardware, software, and/or combination thereof capable of providing any of the various connections described herein, including but not limited to an Internet connection, an Internet service provider (ISP), browser software, user interface hardware and/or software, a modem hook-up, a cable, a phone line, a satellite link, a wireless link, a microwave link, television, a BBS system, a local area network, a wide area network, an intranet, an extranet, direct linkage of a plurality of computers, connections between devices within a single computer, terminal connections, program instructions, multiple user accounts on a single computer, multiple storage areas on a single computer, combinations thereof, functional equivalents thereof, and future apparatus, methods, systems, and/or protocols for performing analogous functions. An overview of apparatus and methods according to the present invention is shown in the block diagram of Figure 1. Images of the target object may be recorded at block 10 by an imaging assembly comprising one or more cameras. Images may be recorded from multiple viewing locations and in multiple viewing directions relative to the target object. A positioning system may be employed at block 20 to record a camera position corresponding to each image recorded, and one or more of the camera position, camera direction, and/or other image parameters may be stored for each image at block 40. The images themselves may be stored at block 30 as an image database. The positioning system may be employed to control the recording of images. Data related to details of the target object may be linked to the image database via the image parameters at block 50. A viewer may select a viewing trajectory at block 70.
The selected trajectory and image parameters may be employed at block 60 to select, scale and sequence images retrieved from the image database. The image sequence may then be presented as an animation sequence at block 90. The viewer may: view the animation sequence; select a subsequent trajectory based on the animation sequence; and/or view, analyze, and/or edit linked data at block 80. Any connections represented by arrows in Figure 1 may be provided by a network or other connection.
A preferred embodiment of an imaging assembly 100 according to the present invention is shown in Figures 2A and 2B, and comprises: a plurality of oblique cameras 110 pointing radially outward from a central point 101 and downward toward the object, each preferably having substantially the same vertical angle of greater than about 0° and less than 90°, preferably between about 15° and about 75°, and most preferably between about 15° and about 30°; and one vertical camera 120 pointing vertically downward at the object with about a 90° vertical angle. The viewing directions of the cameras are illustrated in Figures 3A and 3B. The vertical camera may be used to record reference images. Viewed from above, as in Figure 4, the fields-of-view 300 of the oblique cameras should preferably cover an entire 360° range of azimuthal angles around a central point and should preferably be substantially uniformly spaced around the central point. For example, an imaging assembly may comprise twelve oblique cameras each having a field-of-view of about 40°, although many other configurations having various numbers of cameras and various fields-of-view may be employed without departing from inventive concepts disclosed and/or claimed herein.
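The azimuthal coverage of the example configuration, twelve oblique cameras each with a field-of-view of about 40°, can be checked with a short calculation. The numbers come from the example in the text; the variable names are illustrative:

```python
# Azimuthal coverage check for a ring of oblique cameras, using the example
# values from the text: twelve cameras, each with a 40-degree field-of-view.
n_cameras = 12
fov = 40.0
azimuths = [i * 360.0 / n_cameras for i in range(n_cameras)]  # 0, 30, ..., 330

# Each camera covers [azimuth - fov/2, azimuth + fov/2]. With 30 degrees
# between adjacent cameras and a 40-degree field-of-view, adjacent fields
# overlap by 10 degrees, so the full 360-degree range is covered.
spacing = 360.0 / n_cameras
overlap = fov - spacing
full_coverage = overlap >= 0.0
```

Any configuration with `fov >= 360 / n_cameras` gives full-circle coverage, which is why many other camera-count and field-of-view combinations also work.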
The flow diagram of Figure 5 illustrates a preferred method for generating a multi-perspective image database for an object (an area of a city, for example). After selection (402) of an area of the target object to be imaged, a fine imaginary 2D grid may be defined (408) to overlay the object, as shown in Figure 6. Images may be recorded with the imaging assembly positioned at each xy grid position 501 at a height zCAM above the object. The horizontal distance between adjacent grid points should preferably be relatively small compared to the size of the object. The entire camera setup is prepared (406) and aimed at the object so that one vertical image and multiple oblique images (represented by arrows 502) may be recorded at each grid point. All oblique cameras should preferably have the same focal length (or equivalently, the same camera constant, roughly the lens-to-focal-plane distance) and the same field-of-view. All oblique cameras are preferably aimed at the object under substantially the same vertical angle of greater than about 0° and less than 90°, preferably between about 15° and about 75°, most preferably between about 15° and about 30°, to record suitable perspective images. Since the focal length, the camera height, and the vertical angle are substantially the same for all oblique cameras, all perspective images will be substantially the same scale and have substantially the same perspective distortion. The area covered by an image is determined by the distance from the camera to the object and the camera field-of-view. The height of the imaging assembly above the object surface and the camera field-of-view should preferably be chosen so that each oblique image covers an area of the object greater than about the grid spacing, most preferably greater than about three times the grid spacing. This ensures that a detail 601 on the target object will be visible in multiple images 502 recorded from multiple grid points 501, as illustrated in Figure 7.
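The guideline that each oblique image should cover more than about three times the grid spacing can be checked, for flat terrain, from the camera height, vertical angle, and vertical field-of-view. The following sketch uses hypothetical numbers and a simplified flat-terrain approximation, not a method specified in the patent:

```python
import math

def oblique_footprint_depth(height, theta_deg, fov_deg):
    """Approximate along-view ground extent covered by an oblique camera over
    flat terrain: the difference in horizontal distance from the nadir point
    between the near and far edges of the vertical field-of-view."""
    near = math.radians(theta_deg + fov_deg / 2.0)  # steeper ray, near edge
    far = math.radians(theta_deg - fov_deg / 2.0)   # shallower ray, far edge
    return height / math.tan(far) - height / math.tan(near)

# Hypothetical numbers: 800 m flight height, 25-degree vertical angle,
# 20-degree vertical field-of-view, 400 m grid spacing.
depth = oblique_footprint_depth(height=800.0, theta_deg=25.0, fov_deg=20.0)
grid_spacing = 400.0
covers_enough = depth > 3 * grid_spacing  # the guideline from the text
```

With these hypothetical values the footprint depth is roughly 1.8 km, comfortably more than three grid spacings, so a detail would appear in images from several neighboring grid points.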
In order to generate an entire multi-perspective image database, as illustrated in the flow diagram of Figure 5, the imaging assembly must be moved to each of the grid points, at which each camera records an image. For the example of an area of a city, this may be done by mounting the imaging assembly in an aircraft and flying (412, 414) over the city along the imaginary gridlines, and recording (422) images with each camera as the aircraft crosses each grid point (416). The imaging assembly may be gyro-stabilized (by gyro-stabilizers 130) to ensure that the cameras are always pointing in the same direction. Navigation of the aircraft along the imaginary gridlines and accurate recording of the camera position for each image may be facilitated by use of a global positioning system (GPS) or other positioning system (404, 410, 416). For an imaging assembly comprising twelve oblique cameras and one vertical camera, thirteen images will be recorded (422) at each grid point. Each image is stored (424, 426) in the image database, and image parameters are also stored (418, 420) for each image. These image parameters preferably include, but are not limited to, camera position (xCAM, yCAM, zCAM), camera vertical angle (ΘCAM), camera azimuthal angle (ΦCAM), camera constant (c), and camera field-of-view (ΔΘCAM, ΔΦCAM). The images and image parameters are preferably stored in digital form in a searchable format, for example, as a database in a computer memory or on one or more computer storage media. Without departing from inventive concepts disclosed and/or claimed herein, images, image parameters, data, and/or information may be stored in any suitable format, digital and/or non-digital, on any suitable storage medium. When the area of the target object has been imaged, image recording and the flight may be ended (428, 430).
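The acquisition procedure just described — visit each grid point, record one image per camera, and store each image together with its parameters — might be organized as in the following sketch. All class and function names here are hypothetical placeholders for the camera, positioning-system, and storage interfaces:

```python
# Sketch of the acquisition loop: one image per camera at every grid point,
# each stored with the camera position and pointing angles.

def acquire_database(grid_points, cameras, position_system, store):
    for point in grid_points:                        # e.g. flown along gridlines
        position = position_system.position_at(point)  # e.g. from GPS
        for cam in cameras:
            image = cam.record()
            store(image=image, position=position,
                  theta=cam.theta, phi=cam.phi)

# Minimal stand-ins to show the call pattern.
class FakeCamera:
    def __init__(self, phi, theta=25.0):
        self.phi, self.theta = phi, theta
    def record(self):
        return f"image(phi={self.phi})"

class FakeGPS:
    def position_at(self, point):
        return (*point, 800.0)  # x, y, and a hypothetical flight height

records = []
acquire_database(
    grid_points=[(0, 0), (0, 1)],
    cameras=[FakeCamera(phi=i * 30.0) for i in range(12)]
            + [FakeCamera(phi=0.0, theta=90.0)],   # 12 oblique + 1 vertical
    position_system=FakeGPS(),
    store=lambda **kw: records.append(kw),
)
```

With twelve oblique cameras and one vertical camera, two grid points yield twenty-six stored records, matching the thirteen-images-per-grid-point figure in the text.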
Generation of a database of images as described above may be repeated at different points in time, and the time at which each image is recorded may be stored as an image parameter (tCAM).
The camera height, zCAM, may often be defined with respect to some reference point of the target object, but not necessarily the surface of the target object. In the example of generation of an image database for a geographical area, zCAM is typically defined as the camera elevation above sea level. If the terrain imaged is not flat, the height of the camera above the ground may vary, changing the perspective parameters of the images. A preferred embodiment of the present invention may therefore include, as a component of the positioning system, means for determining the elevation of the surface of the target object. Photogrammetry techniques may be employed using images recorded by the vertical camera to calculate an elevation profile for the area for which the image database is generated. Alternatively, the imaging assembly may employ a range-finder or equivalent device for measuring the elevation profile of the area, preferably concurrently with the recording of the images for the image database. However acquired, the elevation profile may be stored and used in subsequent processing of images in the database, as set forth hereinbelow.
Since the image parameters, or alternatively the perspective parameters, of each camera are known, the corresponding perspective parameters for each image are known. Therefore, if the geographical coordinates (xGEO, yGEO) of an object detail 601 (a house in a city, for example) are known, then calculations can be used to determine on which images 502 in the database this detail appears, as illustrated in Figure 7. The position of the detail in each image can also be calculated. Examples of algorithms for conversion of geographical coordinates to image coordinates and vice versa are given in the Appendix. As illustrated in the flow diagram of Figure 8 and shown in Figures 9A-9G, a series of images (9E, 9F, and 9G, for example) of a detail 601 of the object may be created that show the detail at the same size and that form a near-optically-continuous sequence. This may be accomplished according to the present invention by selecting (806, 808) images (9B, 9C, and 9D, for example), clipping (810) them (i.e., selecting a part of the image) or splicing them (i.e., assembling adjacent images), and scaling (812) the clipped or spliced images. The continuity of the images may not be mathematically perfect, but perfect continuity is not necessary to produce an animation suitable for viewing by a human viewer when the images are played back. The images in the animation sequence may preferably be played back (819) at greater than about six images per second, most preferably at about 24 images per second (a common frame rate for commercial motion pictures). Without departing from inventive concepts disclosed and/or claimed herein, the animation may be played back at rates slower than about six images per second. Such a slow playback may still give a viewer the perception of motion relative to the target object, even if it appears as a "slide show" instead of a smooth animation.
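The geographical-to-image conversion can be illustrated with a simplified pinhole projection (no lens distortion, flat image plane centered on the optical axis); the Appendix describes the full photogrammetric algorithms, and the function below is only a sketch under those simplifying assumptions.

```python
import math

def ground_to_image(xg, yg, zg, x_cam, y_cam, z_cam, theta_deg, phi_deg, c):
    """Project a ground point into image-plane coordinates for one camera.

    theta_deg is the vertical angle downward from horizontal, phi_deg the
    azimuth of the view, and c the camera constant.  Returns (x_img, y_img)
    in the same units as c, or None if the point lies behind the camera.
    Simplified pinhole sketch; not the full algorithm of the Appendix.
    """
    dx, dy, dz = xg - x_cam, yg - y_cam, zg - z_cam
    phi = math.radians(phi_deg)
    # Components of the camera-to-point vector along and across the azimuth.
    u = math.cos(phi) * dx + math.sin(phi) * dy
    v = -math.sin(phi) * dx + math.cos(phi) * dy
    th = math.radians(theta_deg)
    # Depth along the (tilted) optical axis, and offset within the view.
    depth = math.cos(th) * u - math.sin(th) * dz
    w = math.sin(th) * u + math.cos(th) * dz
    if depth <= 0:
        return None  # point behind the camera: not on this image
    return (c * v / depth, c * w / depth)
```

A detail is visible on an image when its projected coordinates fall within the image bounds implied by the field-of-view; repeating the test over all stored camera positions yields the set of images 502 on which detail 601 appears.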
In an alternative embodiment of the present invention, a single camera may be used to record images in multiple camera directions. In this case, the camera direction may be changed after recording each of multiple images at a given camera position, and the process repeated at successive camera positions. In an alternative embodiment of the present invention, a single camera may be employed having a wide-angled, or "fish-eye," lens. The camera may be directed vertically downward, so that the camera field-of-view may cover an entire 360° range of azimuthal angles. A single wide-angled image may therefore cover an area of the target object equivalent to the combined areas covered by a plurality of images recorded by an imaging assembly as described above. Each wide-angled image may be divided into a plurality of images prior to storage, or may be stored as a single image and processed during subsequent generation of an animation sequence. In an alternative embodiment of the present invention, one or more cameras may be used in conjunction with one or more auxiliary optics, such as a plurality of mirrors, prisms, and/or lenses set at a variety of positions and/or orientations, to record images covering an area of the target object functionally equivalent to the combined areas covered by a plurality of images recorded by an imaging assembly as described above. Each image thus recorded may be divided into a plurality of images prior to storage, or may be stored as a single image and processed during subsequent generation of an animation sequence.
Many different trajectories (803) of viewing position, viewing direction, viewing distance, and/or viewing time relative to the target object may be simulated in an animation by selecting different image sequences from the large database of images. Such trajectories may include as examples, but are not limited to: circular motion around an object detail with the view directed toward the detail; circular motion with the view directed radially outward from the center of the circular motion; motion along an arbitrary curvilinear path with the view directed in the direction of motion; motion along an arbitrary curvilinear path with an independently and arbitrarily varying viewing direction; and a view from a fixed position with a fixed viewing direction showing images recorded at successively later times, thereby showing the temporal evolution of the object details visible in the view.
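The first trajectory listed, circular motion around an object detail with the view directed toward the detail, might be reduced to a sequence of grid points and viewing azimuths roughly as follows; the grid-snapping scheme and interface are illustrative assumptions, not the claimed selection algorithm.

```python
import math

def circular_trajectory(cx, cy, radius, n_frames, grid_spacing):
    """Viewing positions for circular motion around a detail at (cx, cy).

    Each ideal position on the circle is snapped to the nearest imaging
    grid point, and the viewing azimuth is directed back toward the
    detail.  Returns a list of (x, y, azimuth_deg) tuples from which the
    corresponding database images would be selected.  Illustrative sketch.
    """
    frames = []
    for i in range(n_frames):
        a = 2.0 * math.pi * i / n_frames
        gx = round((cx + radius * math.cos(a)) / grid_spacing) * grid_spacing
        gy = round((cy + radius * math.sin(a)) / grid_spacing) * grid_spacing
        azimuth = math.degrees(math.atan2(cy - gy, cx - gx)) % 360.0
        frames.append((gx, gy, azimuth))
    return frames
```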
In a preferred embodiment of the present invention, in which the multi-perspective image database and associated image parameters have been stored in digital form, software may be employed to select the images from the database, and to clip or splice and scale them so that the viewer experiences near-continuous movement when the image sequence is played back. When the images are selected and displayed at a speed of preferably greater than about six images per second, most preferably about 24 images per second, the viewer experiences the feeling of smooth motion over the target object. An entire trajectory may be specified prior to any selection and/or presentation of images as an animation sequence. Alternatively, the viewer may preferably interactively control the trajectory of viewing position, viewing direction, viewing distance, and/or viewing time, thereby allowing the viewer to virtually "fly over" the target object, looking in any desired direction as he/she does so. Images may be selected, manipulated, and presented as an animation sequence while, concurrently, subsequent portions of the trajectory are selected. Within limits imposed by the resolution and quality of the images in the database, the viewer may zoom in and out during a trajectory so that he/she may view an animation of relatively small details of the object, or an animation of a relatively large portion of the object. Pre-selected, default, and/or automatic trajectories may also be employed to generate animations. An animation generated for a trajectory may be stored for later play-back.
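For digitally stored images, the clip-and-scale manipulation might look like the following nearest-neighbor sketch, in which the detail is kept centered and at constant output size from frame to frame; the interface and pixel representation are illustrative assumptions.

```python
def clip_and_scale(image, cx, cy, half, out_size):
    """Clip a square region of side 2*half centered on (cx, cy) and scale
    it to out_size x out_size by nearest-neighbor sampling.

    image is a row-major 2D list of pixel values (image[y][x]).  Keeping
    (cx, cy) at the detail's position in each selected frame yields frames
    that show the detail at the same size.  Illustrative sketch only.
    """
    out = []
    for j in range(out_size):
        row = []
        for i in range(out_size):
            # Map each output pixel back into the clipped source region.
            sx = cx - half + (2 * half * i) // out_size
            sy = cy - half + (2 * half * j) // out_size
            row.append(image[sy][sx])
        out.append(row)
    return out
```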
Creation of such near-continuous animations from the image database is enabled by knowledge of the image parameters for each image in the database, and by the large number of images in the database. The more oblique cameras are used to create the database, the more perspective images will be available from each grid point, and the more nearly continuous the resulting animations will be. The finer the grid (i.e., the smaller the grid spacing) relative to the field-of-view of each camera, the more grid points there are from which to select a perspective image, and the more nearly continuous the resulting animations will be. More cameras and/or more grid points result in more images and image parameters to store and to search/select/clip/scale, requiring more storage and processing capacity. A smaller vertical angle for the oblique cameras yields better viewing of vertical details of the target object (building facades and hills in the city example). However, the smaller the camera vertical angle at a given camera height, the farther away the camera is from the viewed object detail, and the more the camera needs to zoom in to get good images of the detail. The more the camera is zoomed in, the smaller the resulting field-of-view, and the more cameras are needed. Additional data may be available about details of the target object for which a multi-perspective image database has been created. For a city, for example, address information, phone directory information, tax data, property information, business and/or Chamber of Commerce data, census data, Standard Metropolitan Statistical Area (SMSA) and/or other government data, and/or other alphanumeric data may be available. Maps may be available as CAD drawings, as scanned images, or in other formats. Images acquired from the ground may also be available for many objects, and in some cases there may be accompanying digitized video or sound.
Some or all of this information may have associated geographical coordinates (xGEO, yGEO, or equivalently longitude and latitude). Since the image parameters are known, the image position (xIM, yIM) for any such detail information/data can be calculated for each image. This enables generation (either manually or, preferably, automatically) of one or more links (i.e., geo-references) between the information/data and each image in the multi-perspective image database in which the relevant location appears, as illustrated in the flow diagram of Figure 10. For the city example, the image location of each address can be calculated for each image and so-called "hotspots" generated on the images. When a viewer, in a graphical user interface (GUI) environment, points to such a hotspot and activates it (by clicking a mouse or other pointing device, for example), the corresponding address, or any other information/data associated with the image location, may be displayed, either superimposed on the image or in a separate display. The viewer may be provided with an address finder, whereby entering an address into the system may automatically trigger the system to display an image or an animation of the lot and/or house, building, or other structure at the address. The linked data can be used for any type of data sorting and/or analysis, and the geo-references may be used to display the results of the analysis on an image or an animation of images, as shown in the flow diagram of Figure 11. Analysis results may be color-coded and graphically overlaid on each image of the multi-perspective image database in real time as the viewer navigates through the database. For example, the viewer may graphically view the results of an analysis while viewing an animation and experiencing a three-dimensional "feel" for the entire target object.
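Hotspot activation reduces to a point-in-region test against the per-image hotspot data; a minimal sketch follows, assuming rectangular hotspots (actual hotspots may be arbitrary outlines).

```python
def find_hotspot(hotspots, px, py):
    """Return the data linked to the hotspot containing a clicked pixel.

    hotspots is a list of (x_min, y_min, x_max, y_max, data) tuples, one
    per structure shown in the image; returns None on a miss.  The
    rectangular representation is an illustrative assumption.
    """
    for x0, y0, x1, y1, data in hotspots:
        if x0 <= px <= x1 and y0 <= py <= y1:
            return data
    return None
```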
With use of vertical reference images, recorded by the vertical camera of the imaging assembly, new geo-references may be created for each image, or the positions of existing geo-references may be improved. By viewing a vertical image (i.e., plan view) as well as one or more perspective (i.e., oblique) images of the same object, the viewer may draw an outline of an object detail onto the different views simultaneously and attach data to the object detail. In this way the viewer can very finely define the size and shape of the detail in different dimensions, and the position of the object detail can be generated automatically for all images on which the detail appears. Viewers may also edit data that is linked to the image database, as shown in the flow diagram of Figure 12. This can be done without affecting the images themselves. Changes in the data may be displayed superimposed on an image or animation, allowing the viewer to view his/her changes interactively.
A multi-perspective image database for an object or geographical area may be stored on a disk drive or other digital storage medium that is connected to a computer, to a computer network, or to the Internet. The storage medium with the image database may be referred to as the Server (Figure 8, 802). The navigation and viewing hardware/software (801) used by the viewer to move through the images can run on the Server, or on a separate computer or terminal connected to the Server directly, to a common computer network with the Server, or to the Internet. Navigation and viewing requests may be sent (803, 806) to the Server, and the Server may select (806, 808), manipulate (clip 810 or splice, scale 812), and transmit (814, 817) to the viewer the images that comprise the desired view or animation. If network- or Internet-based communication is used between the viewer and the Server, the viewer may navigate and view an animation of an object, city, or geographical area (for which an image database is stored on the Server) from anywhere in the world. The viewing and navigating hardware/software, the image database, and Server software may also run on a single computer, so that viewers may view animations on computers that are mobile, or computers that are not connected to the Internet or to a network. A multi-perspective image database typically will be large, possibly hundreds of gigabytes for a single target object. Therefore, it may be advantageous to store the images centrally, and to transfer images to individual viewers over a network or via the Internet. The apparatus and methods described above pertain primarily to a two-dimensional multi-perspective image database.
With such a two-dimensional image database, perceived movement in an animation (alternatively, the trajectory of viewing positions) occurs only within the horizontal plane from which the images are recorded, although a primitive sense of vertical movement can be achieved by zooming in or out. In an alternative embodiment of the present invention, by repeating the acquisition of images at different distances (heights, or zCAM) from the object, a three-dimensional multi-perspective image database may be generated. In an alternative embodiment of the present invention, by repeating the acquisition of an entire two- or three-dimensional multi-perspective image database at chosen time intervals, a multi-dimensional multi-perspective image database may be generated also having a temporal dimension. A time-interval multi-perspective image database thus generated may be very useful for monitoring changes in a target object or geographical area over time. A regular grid pattern for acquisition of images for the database is not strictly necessary. Without departing from inventive concepts disclosed and/or claimed herein, an image database may be generated and employed in which images are recorded from an arbitrary set of viewing positions, in an arbitrary set of viewing directions (not necessarily the same for all viewing positions), and at an arbitrary set of viewing times (not necessarily the same for all viewing positions or viewing directions), provided that the image database contains sufficiently many views to cover the target object adequately for the desired viewing and/or data analysis, display, and/or editing. However, acquisition of images from regular grid points offers the possibility of more efficient storage, selection, and/or processing of images and/or data.
In an alternative embodiment of the present invention, fish-eye or wide-angle lenses may be used for recording images for the image database. Additional viewing and navigating hardware/software may be employed to create perceived camera movement within an animation wherein the vertical angle of the view changes, thereby enabling animations wherein a viewer may simulate looking forward, shifting the view to look downward, and swinging further to look backward. In an alternative embodiment of the present invention, stereo or holographic images may be stored in a multi-perspective image database and used to generate stereo or holographic animations, respectively.
The apparatus and methods disclosed herein are particularly well suited for generating an image database for a city or other geographical area, and for implementation using digitally recorded and stored images. The cameras used in the imaging assembly preferably may be digital cameras, and preferably may be linked to a computer. The computer may preferably be equipped with reception hardware/software for receiving and processing information from a global positioning system (GPS), and preferably may contain or be linked to a high-volume digital storage medium, such as a large hard disk, for storage of images and image parameters. After definition of a fine grid of longitude and latitude points covering the area, the imaging assembly may be moved along grid lines over the area. At each grid point the computer triggers all cameras simultaneously, and the images are immediately downloaded to the computer and named (or numbered). The names of the images and their corresponding image parameters are simultaneously stored in a database. Image parameters such as xCAM, yCAM, and zCAM may be obtained from, or calculated from data obtained from, the GPS.
In an alternative embodiment of the present invention, images may be recorded by non-digital still or motion photography and converted to digital format and stored at a later time. In an alternative embodiment of the present invention, any means for determining the position of the imaging assembly relative to the target object (i.e., positioning system) may be employed. The GPS-based scheme disclosed herein is exemplary only. In an alternative embodiment of the present invention, the images may be stored in non-digital form, i.e., as physical photographs, slides, holograms, stereo images, or other form. A mechanical device may be employed to store the pictures, slides, or other format, select from among them for an animation sequence, clip and/or scale projections of those images selected, and project the sequence as an animation. In an alternative embodiment of the present invention, storage of the images, image parameters, data, and/or other information, whether in digital or non-digital form, may be on any storage medium suitable for storing images, image parameters, data, and/or other information. In an alternative embodiment of the present invention, a multi-perspective image database may be created from within a target object. In an alternative embodiment of the present invention, a multi-perspective image database may be generated from a 3D computer model. Images for the database may be rendered on the computer from each grid point with 3D rendering software and then stored. Animations may be generated from the rendered and stored images as described herein. This technique may be more advantageous than real-time 3D rendering when storage space is abundant but processing power is limited.
In an alternative embodiment of the present invention, a multi-perspective image database may be generated using a plurality of static imaging assemblies at a plurality of locations relative to the target object, rather than moving an imaging assembly to a plurality of locations. When a target object includes surface details which move during acquisition of a multi-perspective image database (cars moving along city streets, for example), those moving surface details may be observed to appear, disappear, or move irregularly in an animation generated from the image database. In an alternative embodiment of the present invention, such transient surface details may be removed from images in the database. This may be accomplished automatically for digitally stored images through the use of optical pattern recognition software to detect and remove the moving surface details.
INDUSTRIAL APPLICABILITY
EXAMPLE: CREATING A MULTI-PERSPECTIVE IMAGE DATABASE FOR A CITY. The following is exemplary only and shall not be construed so as to limit the scope of the present invention. Define a grid over the city with grid points between 30 and 100 meters apart. Mount an imaging assembly comprising digital cameras, mounted on a gyro-stabilizer, on the underside of a relatively slow-flying airplane, such as a Cessna 172 (which can fly at about 70 mph). Alternatively, an unmanned drone aircraft or miniature aircraft may be used, either automated or by remote control. Aim all twelve (or 16, 32, etc.) oblique cameras with substantially the same vertical angle of between about 15° and about 75° downward from horizontal. Select and set all camera lenses in the same manner, so that an area of about 100 to 300 horizontal meters is covered by each image when recorded from the selected camera height (i.e., flying height). Choose the flying height and camera settings such that individual houses and/or buildings are clearly visible in each image when displayed on a computer screen. Mount a single camera vertically downward, and link all cameras to a computer that is also linked to a receiver for receiving and processing information from a Global Positioning System (GPS), as well as to a large hard disk for image storage.
Choose weather conditions in which either the sky is clear or the sky has relatively uniform cloud cover. Avoid early morning and late afternoon if the sky is clear, to avoid long shadows in the images. Fly low and slow at a constant height of, for example, between about 100 meters and about 1,000 meters above the city and at a constant speed. (Federal, state, and/or local regulations may prescribe a minimum legal altitude when flying above private and/or public property. These regulations should be obeyed, or permission or a waiver obtained.) Fly over the imaginary grid lines in a systematic manner, grid line after adjacent grid line. Try to record all images on the same day under about the same light conditions. Stop flying if the light changes dramatically, and continue on another day with similar light conditions.
When flying, use software to control the cameras and record 13 (or 17, 33, etc.) images at each grid point. At each grid point the computer triggers all 13 cameras substantially simultaneously. Create a directory for each camera on the computer hard disk and immediately download each image from each camera to its own directory with its own file name. Simultaneously create a database in which all image file names are stored together with all their corresponding image parameters (height, longitude, latitude, camera direction, time, and camera constant, or equivalently, xCAM, yCAM, zCAM, tCAM, θCAM, φCAM, ΔθCAM, ΔφCAM, and c), the image parameters being derived from camera properties, the time of the flight, and the GPS flight data. Constantly monitor the flight pattern to warn the operator and/or pilot if the airplane is not flying on the grid lines within a set margin, 10 meters for example. Fly the entire city or region, and land with the database of thousands of images on the computer. EXAMPLE: LINK CITY DATA TO THE IMAGE DATABASE. The following is exemplary only and shall not be construed so as to limit the scope of the present invention. As noted above, for any object detail (i.e., a building, house, address, etc.) for which the location is known, the approximate location of that detail in each image may be calculated. Therefore, the multi-perspective image database may be linked to any address database (such as a county address database) that has longitude and latitude information (or contains State Plane coordinates). This link may be used to locate a given address on any image in which it appears. The viewer may be provided with an address finder that will generate an animation of a house, building, or other object after the address is entered. Furthermore, the address finder and the resulting animation may be provided over the Internet, thereby enabling the process to be done remotely.
This type of link between geographical data and a location on a map or image is an example of a geo-reference.
The vertical images in the database may be used to link data to all images. For example, using the geo-reference link to the address database, determine for each house its exact longitude and latitude. Locate and measure the horizontal size of each building or house from the vertical images, and calculate the height of each building or house shown in each oblique image in the multi-perspective image database in which it appears. These calculations are possible since all image parameters are known and have been stored (elevation profile data for the terrain may be required). Store the calculated size information in the address database. It is now possible to calculate both the approximate location and size of each building or house in each image, thereby enabling creation of so-called "hotspots" on each image and linkage of those hotspots to information in the address database. A "hotspot file" may be created for each image listing which structures are shown on which pixels of the image. The hotspot files may be stored with the database. A viewer may click on any building in any image to retrieve the available data for that building, or a viewer may search for houses which fit viewer-specified search parameters, and a search program may highlight houses in each image which fit the search parameters. Differing ranges of such search parameters may be highlighted in different ways (by color-coding, for example) to facilitate analysis of the data. For example, a viewer may define different colors for different price ranges for a house. After determining the color for each house, its corresponding hotspot in each image may be drawn in the appropriate color. The analysis of the property values may be shown in real time as the viewer interactively "flies" over the virtual city. 
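One textbook route to the height calculation mentioned above is the relief-displacement relation for a vertical aerial photograph, h = d x H / r. It assumes flat terrain beneath the structure and is offered as an illustration rather than as the specific method claimed herein.

```python
def height_from_relief_displacement(d, r, flying_height):
    """Estimate structure height from a vertical aerial photograph.

    Classic relief-displacement relation h = d * H / r, where d is the
    image displacement between the top and base of the structure, r the
    radial image distance from the nadir point to the imaged top, and H
    the flying height above the structure's base (d and r in the same
    image units).  Flat-terrain textbook approximation, illustrative only.
    """
    return d * flying_height / r
```

For example, a 2 mm displacement at 80 mm radial distance under a 1,000 m flying height implies a structure about 25 m tall.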
With the use of double buffering for displaying the images, the color-coded hotspots may be drawn on each image before it is displayed as part of an animation, thereby not affecting the "3D feel" of the animation.
Lines indicating roads may be drawn on the vertical reference images, and used to graphically create a database of road identities and locations for each vertical image in the image database. Road identities and locations may alternatively be obtained from digitally-stored and/or CAD-generated maps. Using this information, the position of a road may be calculated in each oblique image in the multi-perspective image database and this location overlaid on the images while moving through the image database, by highlighting the road in a certain color in each image, for example. The pixel location in each image for each latitude and longitude point is known or can be calculated. The image database may therefore be linked to a GPS receiver/card in a computer in a car. The system may update and display the location of the car in real time as the viewer/driver simultaneously "moves" through the multi-perspective image database of the city and drives around the city.
EXAMPLE: CITY DATABASE SIZE. The following is exemplary only and shall not be construed so as to limit the scope of the present invention. The total number of images recorded from one object, such as a city, may be very large. If 13 images per grid point are recorded from a grid of 50 by 50 meters, 400 x 13 = 5,200 images per square kilometer must be recorded and stored. If the images are stored as 24-bit JPG files at a resolution of around 800x600 pixels, then each image is about 150KB in size, yielding about 780 megabytes (MB) of image data per square kilometer. A medium-sized city such as the Eugene-Springfield (OR) area comprises roughly 200 square kilometers, requiring over 1,000,000 images occupying more than 150 gigabytes (GB) of storage space. Currently this is a large database, but technically feasible. Further advances in storage media will enlarge the maximum size of an image database. Image database size may be reduced by using a lower image resolution and/or by using 8-bit images, at the potential expense of degraded image quality. Compression algorithms may be used, at the potential expense of slower access to images in the database. Given the large amount of storage space required, many city image databases would probably not fit onto current CD-ROM media or DVD-formatted media. Currently, distribution of images over the Internet or over a computer network from a central storage medium is a preferred means for enabling viewer access to the image database, particularly since most viewers would only require access to a relatively small fraction of the total number of images available in the database.
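The storage arithmetic of this example can be reproduced directly; the defaults below match the figures quoted above (50-meter grid, 13 images per grid point, roughly 150 KB per image, decimal units).

```python
def database_size_gb(area_km2, grid_spacing_m=50.0, images_per_point=13,
                     image_kb=150.0):
    """Estimate multi-perspective image database size in (decimal) GB.

    A 50 m grid gives (1000/50)**2 = 400 grid points per square kilometer,
    hence 400 * 13 = 5,200 images and about 780 MB per square kilometer
    at 150 KB per image, as in the example above.
    """
    points_per_km2 = (1000.0 / grid_spacing_m) ** 2
    n_images = area_km2 * points_per_km2 * images_per_point
    return n_images * image_kb / 1.0e6  # KB -> GB (decimal)
```

For the 200 square kilometer example, this yields about 156 GB, consistent with the "more than 150 gigabytes" quoted above.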
USE EXAMPLES. The following is exemplary only and shall not be construed so as to limit the scope of the present invention. Availability of a multi-perspective image database for a large number of metropolitan areas linked to information databases on the Internet may be used to facilitate a wide variety of uses and/or functionalities, including but not limited to:
1. A traveler may enter an address or city and start flying over the city, checking out hotels, restaurants, places to visit, and other travel information; 2. Public utilities may visually check the physical environment and existing structures and utility infrastructure when scheduling repair jobs or planning construction. Images, animations, and/or information from the database may be accessed from an office or from the field, and information may be added regarding work progress, repair reports, account activity, etc.;
3. Delivery or parcel services may use the image database to help find addresses and choose optimal routing;
4. Architects and/or city planners may use animations to get a feel for the existing buildings and environment, to superimpose designs onto images or animations, and/or use animations and images to determine location of certain objects;
5. Emergency services and law enforcement personnel may check out a building or site in an animation from the dispatch room or from the emergency vehicle before arriving at the site.
Databases for emergency call (i.e., 911 call) data, property ownership data, crime statistics, floor plans, ground photos, and/or hazardous materials data may be linked to the image database. Pre- fire, pre-raid, or pre-emergency plans may be formulated with the aid of images and animations from the database; 6. Insurance companies may view a property in an animation in the course of processing an insurance application or claim;
7. Traffic planners may link the image database to traffic planning software, run analyses, and view the results in an animation; 8. Real estate professionals may view/display properties for rent or sale in animations, and may link the image database to databases containing information on such properties, such as floor plans, price, tax, and market data, and ground photos;
9. Animations may be used during litigation or a trial to show a crime scene or to illustrate a sequence of events;
10. An image database may be linked to a demographic database and used to display results of analyses superimposed on images or animations;
11. A vehicle may be linked to a positioning system and tracked in real time on an image or animation, either in the vehicle or at a remote location. OTHER USE EXAMPLES: The following is exemplary only and shall not be construed so as to limit the scope of the present invention. A multi-perspective image database may have uses in other areas, including but not limited to:
1. A multi-perspective image database may be generated for a planetary surface and used to create animations for virtual exploration and research; 2. A multi-perspective image database may be generated for an undersea area and used to create animations for virtual exploration and research;
3. A multi-perspective image database may be used by geologists and/or cartographers.
4. A multi-perspective image database may be generated for a microscopic surface and used to create animations for virtual exploration and research in such diverse areas as medical applications, microbiology, materials science, etc. The image database may comprise images recorded by any method of microscopy, including but not limited to optical microscopy, phase contrast microscopy, scanning tunneling microscopy, atomic force microscopy, near-field scanning microscopies, fluorescence microscopy, electron microscopy;
5. A multi-perspective image database may be generated for management of natural and industrial resources, including but not limited to forests, farmlands, watersheds, mining areas, oil and natural gas fields, pipelines, etc., and the image database may be linked to databases for inventory, growth, harvest, seasonal changes, etc., and used by government agencies, private industrial companies, or environmental activist organizations;
6. A multi-perspective image database may be used to plan construction or expansion of highways; 7. A multi-perspective image database may be generated for a virtual world and used as a backdrop for computer gaming, allowing a player to play the game from any animated view of the virtual world.
8. An array of stationary imaging assemblies may be installed around and above an arena, stadium, or sports facility and used to continuously record images during sporting or other events. Viewers may view the image sequences in real time to view the event live and may continuously and interactively change their view of the event while watching from a remote location (to "follow the play", for example). Earlier images may be reviewed, allowing "instant replay" of any portion of the event from any angle, with obvious implications for sports officiating, for example.
APPENDIX. The following is exemplary only and shall not be construed so as to limit the scope of the present invention. Geographical coordinates can be transformed to image coordinates within a given image, and vice versa, using mathematical techniques well known in the field of photogrammetry. Such techniques are described in Handbook of Aerial Mapping and Photogrammetry by Trorey (Cambridge University Press, Cambridge, England, 1952),
Photogrammetry by Hallert (McGraw-Hill, New York, 1960) and Manual of Photogrammetry edited by Slama et al. (American Society of Photogrammetry, Falls Church, Virginia, 1980), each of said references being incorporated by reference as if fully set forth herein. Given the camera position (x_CAM, y_CAM, z_CAM), camera orientation (θ_CAM, φ_CAM), and the camera constant c for a given image, the image coordinates (x_IM, y_IM) can be used to calculate the corresponding geographical coordinates (x_GEO, y_GEO).

[The forward-conversion equations are reproduced as images in the published document and are not recoverable from this text. They express x_GEO and y_GEO in terms of x_IM, y_IM, the image center coordinates (x_CTR, y_CTR), the reference coordinates (x_REF, y_REF), and the camera parameters, with separate cases for φ_CAM = 0° or 180° and φ_CAM = ±90°. Among the legible fragments:

x_REF = x_CAM − z_CAM tan θ_CAM cos φ_CAM ]
Conversely, given the geographical coordinates (x_GEO, y_GEO) for a particular location, the corresponding image coordinates (x_IM, y_IM) may be calculated for a given image.

[The inverse-conversion equations are likewise reproduced as images in the published document. The legible intermediate definitions are:

x_1 = x_GEO if φ_CAM = 0° or 180°
y_1 = y_CAM if φ_CAM = 0° or 180°; y_1 = y_CAM + (x_1 − x_CAM) tan φ_CAM otherwise
x_2 = x_CTR if φ_CAM = 0° or 180°
y_2 = y_CTR if φ_CAM = ±90°; y_2 = y_CTR − (x_2 − x_CTR) / tan φ_CAM otherwise
y_REF = y_CAM − z_CAM tan θ_CAM sin φ_CAM ]
This algorithm may also be used to determine whether a given object detail appears in a given image. If the calculated values of x_IM and y_IM fall outside the range of the image in question, then the object detail does not appear in that image.
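Because the published conversion equations survive only as images, the following Python sketch is a generic stand-in rather than the patent's exact algorithm: a pinhole model over a substantially horizontal, flat target surface supporting the same ground-to-image and image-to-ground conversions and the visibility test just described. All names (camera_frame, ground_to_image, etc.) and the parameterization (depression angle theta below horizontal, azimuth phi, camera constant c) are our own assumptions.

```python
import math

def camera_frame(theta, phi):
    """Right-handed camera basis for depression angle theta (radians below
    horizontal) and azimuth phi (radians from +x toward +y)."""
    d = (math.cos(theta) * math.cos(phi),      # forward (viewing direction)
         math.cos(theta) * math.sin(phi),
         -math.sin(theta))
    r = (math.sin(phi), -math.cos(phi), 0.0)   # right (horizontal)
    u = (math.sin(theta) * math.cos(phi),      # up = r x d
         math.sin(theta) * math.sin(phi),
         math.cos(theta))
    return d, r, u

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ground_to_image(pt, cam, theta, phi, c):
    """Project a flat-ground point pt = (x, y, 0) into image coordinates
    (x_im, y_im) for a camera at cam = (x, y, z); None if behind camera."""
    d, r, u = camera_frame(theta, phi)
    v = tuple(p - q for p, q in zip(pt, cam))
    f = dot(v, d)                              # distance along viewing axis
    if f <= 0:
        return None
    return c * dot(v, r) / f, c * dot(v, u) / f

def image_to_ground(x_im, y_im, cam, theta, phi, c):
    """Intersect the viewing ray through (x_im, y_im) with the z = 0 plane."""
    d, r, u = camera_frame(theta, phi)
    ray = tuple(d[i] + (x_im / c) * r[i] + (y_im / c) * u[i] for i in range(3))
    if ray[2] >= 0:                            # ray never reaches the ground
        return None
    t = -cam[2] / ray[2]
    return cam[0] + t * ray[0], cam[1] + t * ray[1]

def appears_in_image(pt, cam, theta, phi, c, half_w, half_h):
    """Visibility test described in the text: the detail appears in the
    image iff its projected coordinates fall inside the image bounds."""
    proj = ground_to_image(pt, cam, theta, phi, c)
    return proj is not None and abs(proj[0]) <= half_w and abs(proj[1]) <= half_h
```

For a straight-down camera (theta = 90°) the projection reduces to a scaled orthographic map, consistent with the vertical reference images discussed earlier.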
The preceding algorithms apply to the case of a substantially horizontal and substantially flat target object surface. Modifications of these algorithms by those skilled in the mathematical and/or computational arts may be made to perform similar conversions for sloped, curved, and/or irregular target object surfaces without departing from inventive concepts disclosed and/or claimed herein. For example, an irregular target object surface may be approximately characterized by a two-dimensional array of object surface height values (i.e., an elevation profile). Such an elevation profile may be independently available, or may be generated within the scope of the present invention, as described hereinabove, by photogrammetry (using vertical reference images) or by direct measurement (range-finding or other equivalent methods).
Intermediate values could be found by interpolation, and a point of intersection of a particular viewing direction with the object surface could be found by a simple search algorithm. The preceding procedures are exemplary only. Any algorithm suitable for converting object coordinates to perspective image coordinates and vice versa may be employed without departing from inventive concepts disclosed and/or claimed herein.
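One possible concretization of the interpolation and search just outlined (illustrative only; the function names, the bilinear scheme, and the fixed-step ray march are our assumptions, not taken from the patent):

```python
def surface_height(grid, spacing, x, y):
    """Bilinearly interpolate an elevation profile stored as a 2-D list,
    grid[i][j] = surface height at geographic (i * spacing, j * spacing)."""
    i, j = int(x // spacing), int(y // spacing)
    i = max(0, min(i, len(grid) - 2))          # clamp to valid cells
    j = max(0, min(j, len(grid[0]) - 2))
    fx, fy = x / spacing - i, y / spacing - j  # fractional cell position
    h00, h10 = grid[i][j], grid[i + 1][j]
    h01, h11 = grid[i][j + 1], grid[i + 1][j + 1]
    return (h00 * (1 - fx) * (1 - fy) + h10 * fx * (1 - fy)
            + h01 * (1 - fx) * fy + h11 * fx * fy)

def ray_surface_intersection(cam, ray, grid, spacing, step=1.0, max_range=1e4):
    """Simple search: march along the viewing ray from the camera position
    until the ray point drops below the interpolated surface."""
    t = 0.0
    while t < max_range:
        x = cam[0] + t * ray[0]
        y = cam[1] + t * ray[1]
        z = cam[2] + t * ray[2]
        if z <= surface_height(grid, spacing, x, y):
            return x, y, z
        t += step
    return None                                # no intersection within range
```

A finer step (or a bisection refinement once the surface is crossed) trades speed for accuracy; the fixed step shown is the simplest search that meets the text's description.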
The present invention has been set forth in the forms of its preferred and alternative embodiments. It is nevertheless intended that modifications to the disclosed image database and interactive animation apparatus and methods may be made without departing from inventive concepts disclosed and/or claimed herein.

Claims

What is claimed is:
1. An apparatus for generating an image database for a target object, comprising: an imaging assembly comprising at least one camera; means for recording an image with the camera from a plurality of camera positions; means for storing the image recorded by the camera; and means for storing image parameters corresponding to the image.
2. An apparatus as recited in Claim 1, further comprising: means for selecting a trajectory of at least one of viewing position, viewing direction, viewing distance, and viewing time; means for selecting, scaling, and sequencing images from the image database using the selected trajectory and stored image parameters; and means for presenting the selected, scaled, and sequenced images from the image database as an animation sequence.
3. An apparatus as recited in Claim 1, wherein the means for storing the image further comprises a network connection.
4. An apparatus as recited in Claim 1, wherein the means for storing the image parameters further comprises a network connection.
5. An apparatus as recited in Claim 1, wherein the imaging assembly and the means for recording an image with the camera from a plurality of camera positions comprise: a plurality of cameras located at a plurality of camera positions relative to the target object and directed in a plurality of camera directions.
6. An apparatus as recited in Claim 1, further comprising a movable platform for moving the imaging assembly relative to the target object above a surface of the target object.
7. An apparatus as recited in Claim 6, wherein the camera is a wide-angle camera directed substantially vertically downward toward the target object.
8. An apparatus as recited in Claim 6, wherein the imaging assembly comprises a plurality of oblique cameras positioned substantially uniformly about a central point of the imaging assembly, with each oblique camera pointed substantially radially outward from the central point of the imaging assembly and pointed downward with substantially the same vertical angle, the vertical angle being greater than about 0┬░ and less than about 90┬░.
9. An apparatus as recited in Claim 8, wherein the vertical angle is greater than about 15┬░ and less than about 75┬░.
10. An apparatus as recited in Claim 9, wherein the vertical angle is greater than about 15┬░ and less than about 30┬░.
11. An apparatus as recited in Claim 8, wherein a range of azimuthal angles covered by angular fields-of-view of the plurality of oblique cameras is 360┬░.
12. An apparatus as recited in Claim 11 , wherein the plurality of oblique cameras comprises at least twelve oblique cameras.
13. An apparatus as recited in Claim 12, wherein the angular field-of-view of each of the plurality of oblique cameras is about 40┬░.
14. An apparatus as recited in Claim 8, further comprising a vertical camera having a vertical angle of about 90┬░.
15. An apparatus as recited in Claim 14, further comprising means for determining an elevation profile for a surface of the target object using images recorded by the vertical camera.
16. An apparatus as recited in Claim 6, wherein the imaging assembly further comprises a gyroscopic stabilizer.
17. An apparatus as recited in Claim 6, wherein the imaging assembly is mounted on the movable platform.
18. An apparatus as recited in Claim 17, wherein the movable platform is mounted on a vehicle.
19. An apparatus as recited in Claim 18, wherein the vehicle is an aircraft.
20. An apparatus as recited in Claim 6, wherein the target object is mounted on the movable platform.
21. An apparatus as recited in Claim 6, wherein the image storage means comprises a holographic storage medium and the image is stored as a holographic image.
22. An apparatus as recited in Claim 6, wherein the image storage means comprises a digital storage medium and the image is stored as a digital image.
23. An apparatus as recited in Claim 22, wherein the camera is a digital camera.
24. An apparatus as recited in Claim 22, wherein the camera is a non-digital camera recording a non-digital image, and wherein the image storage means further comprises means for converting the non-digital image into the digital image.
25. An apparatus as recited in Claim 6, wherein the image parameters comprise at least one of a camera position, a camera direction, a camera field-of-view, a camera constant, a camera lens focal length, and an image time-of-recording.
26. An apparatus as recited in Claim 25, further comprising a positioning system for determining the camera position.
27. An apparatus as recited in Claim 26, wherein the positioning system is a global positioning system.
28. An apparatus as recited in Claim 26, wherein the positioning system further comprises means for measuring an elevation profile for a surface of the target object.
29. An apparatus as recited in Claim 10: wherein a range of azimuthal angles covered by angular fields-of-view of the plurality of oblique cameras is 360°; wherein the plurality of oblique cameras comprises at least twelve oblique cameras; wherein the angular field-of-view of each of the plurality of oblique cameras is about 40°; further comprising a vertical camera having a vertical angle of about 90°; wherein the imaging assembly further comprises a gyroscopic stabilizer; wherein the imaging assembly is mounted on the movable platform and the movable platform is mounted on an aircraft; wherein the camera is a digital camera, the image storage means comprises a digital storage medium, and the image is stored as a digital image via a network connection; wherein the image parameters are stored via a network connection and comprise at least one of a camera position, a camera direction, a camera field-of-view, a camera constant, a camera lens focal length, and an image time-of-recording; and further comprising a global positioning system for determining the camera position.
30. A method for generating an image database for a target object, comprising the steps of: recording images with a camera from a plurality of camera positions relative to the target object; storing the images recorded by the camera; and storing image parameters corresponding to each of the images.
31. A method as recited in Claim 30, further comprising the steps of: selecting a trajectory of at least one of viewing position, viewing direction, viewing distance, and viewing time; selecting, scaling, and sequencing images from the image database using the selected trajectory and stored image parameters; and presenting the selected, scaled, and sequenced images from the image database as an animation sequence.
32. A method as recited in Claim 30, wherein the images are stored via a network connection.
33. A method as recited in Claim 30, wherein the image parameters are stored via a network connection.
34. A method as recited in Claim 30, wherein images are recorded by a plurality of stationary cameras positioned at a plurality of camera positions relative to the target object and directed in a plurality of camera directions.
35. A method as recited in Claim 30, wherein the step of recording images with a camera from a plurality of camera positions relative to the target object comprises the steps of: recording images of the target object with the camera on an imaging assembly; moving the imaging assembly to a new camera position relative to the target object; and repeating the recording step and the moving step.
36. A method as recited in Claim 35, wherein the camera is a wide-angle camera directed substantially vertically downward toward the target object.
37. A method as recited in Claim 35, wherein the imaging assembly comprises a plurality of oblique cameras positioned substantially uniformly about a central point of the imaging assembly, with each oblique camera pointed substantially radially outward from the central point of the imaging assembly and pointed downward with substantially the same vertical angle, the vertical angle being greater than about 0° and less than about 90°.
38. A method as recited in Claim 37, wherein the vertical angle is greater than about 15° and less than about 75°.
39. A method as recited in Claim 38, wherein the vertical angle is greater than about 15° and less than about 30°.
40. A method as recited in Claim 37, wherein a range of azimuthal angles covered by angular fields-of-view of the plurality of oblique cameras is 360°.
41. A method as recited in Claim 40, wherein the plurality of oblique cameras comprises at least twelve oblique cameras.
42. A method as recited in Claim 41, wherein the angular field-of-view of each of the plurality of oblique cameras is about 40°.
43. A method as recited in Claim 37, wherein the imaging assembly further comprises a vertical camera having a vertical angle of about 90°.
44. A method as recited in Claim 43, further comprising a step of determining an elevation profile for a surface of the target object using images recorded by the vertical camera.
45. A method as recited in Claim 35, wherein the imaging assembly further comprises a gyroscopic stabilizer.
46. A method as recited in Claim 35, wherein the imaging assembly is mounted on a movable platform.
47. A method as recited in Claim 46, wherein the movable platform is mounted on a vehicle.
48. A method as recited in Claim 47, wherein the vehicle is an aircraft.
49. A method as recited in Claim 35, wherein the target object is mounted on a movable platform.
50. A method as recited in Claim 35, wherein the image is stored as a holographic image on a holographic storage medium.
51. A method as recited in Claim 35, wherein the image is stored as a digital image on a digital storage medium.
52. A method as recited in Claim 51, wherein the camera is a digital camera.
53. A method as recited in Claim 51, wherein the camera is a non-digital camera recording a non-digital image, and the step of storing the images recorded by the camera further comprises the step of converting the non-digital image into the digital image.
54. A method as recited in Claim 35, wherein the image parameters comprise at least one of a camera position, a camera direction, a camera field-of-view, a camera constant, a camera lens focal length, and an image time-of-recording.
55. A method as recited in Claim 54, wherein a positioning system determines the camera position.
56. A method as recited in Claim 55, wherein the positioning system is a global positioning system.
57. A method as recited in Claim 55, wherein the positioning system measures an elevation profile for a surface of the target object.
58. A method as recited in Claim 54, wherein images are recorded from camera positions corresponding to grid points on an imaginary substantially planar substantially rectilinear grid defined above and substantially parallel to a surface of the target object.
59. A method as recited in Claim 58, wherein a spacing between adjacent grid points is smaller than a dimension of the area of the surface of the target object covered in the image.
60. A method as recited in Claim 39: wherein a range of azimuthal angles covered by angular fields-of-view of the plurality of oblique cameras is 360°; wherein the plurality of oblique cameras comprises at least twelve oblique cameras; wherein the angular field-of-view of each of the plurality of oblique cameras is about 40°; wherein the imaging assembly further comprises a vertical camera having a vertical angle of about 90°; wherein the imaging assembly further comprises a gyroscopic stabilizer; wherein the imaging assembly is mounted on an aircraft; wherein the camera is a digital camera and the image is stored via a network connection as a digital image on a digital storage medium; wherein the image parameters are stored via a network connection and comprise at least one of a camera position, a camera direction, a camera field-of-view, a camera constant, a camera lens focal length, and an image time-of-recording; wherein a global positioning system determines the camera position; and wherein images are recorded from camera positions corresponding to grid points on an imaginary substantially planar substantially rectilinear grid defined above and substantially parallel to a surface of the target object and a spacing between adjacent grid points is smaller than a dimension of the area of the surface of the target object covered in the image.
61. An apparatus for generating an animation sequence from a database of images of a target object, the apparatus comprising: means for storing the image database; means for storing image parameters for each image in the image database; means for selecting a trajectory of at least one of viewing positions, viewing directions, viewing distances, and viewing times; means for selecting, scaling, and sequencing images from the image database using the selected trajectory and stored image parameters; and means for presenting the selected, scaled, and sequenced images from the image database as an animation sequence.
62. An apparatus as recited in Claim 61, further comprising: an imaging assembly comprising at least one camera; and means for recording images with the camera from a plurality of camera positions.
63. An apparatus as recited in Claim 61, wherein the image database storing means comprises a digital storage medium and images from the image database are stored therein as digital images, and wherein the means for storing the image parameters comprises a digital storage medium.
64. An apparatus as recited in Claim 63, further comprising a network connection for providing access to at least one of the image database storing means and the image parameter storing means.
65. An apparatus as recited in Claim 61, wherein the image database storing means comprises a holographic storage medium and images from the image database are stored therein as holographic images.
66. An apparatus as recited in Claim 61, further comprising a network connection for providing access to the trajectory selecting means.
67. An apparatus as recited in Claim 61, wherein the trajectory selecting means comprises a graphical user interface for selecting the trajectory.
68. An apparatus as recited in Claim 67, wherein the trajectory may be selected interactively.
69. An apparatus as recited in Claim 61, further comprising means for storing an information database and wherein: the information database contains information related to details of the target object, and the information database is linked to at least one of the image database and the image parameters.
70. An apparatus as recited in Claim 69, further comprising a network connection for providing access to the information database storing means.
71. An apparatus as recited in Claim 69, further comprising means for selecting the trajectory based on information in the information database.
72. An apparatus as recited in Claim 69, further comprising means for displaying information from the information database with the animation sequence.
73. An apparatus as recited in Claim 61, wherein the image parameters comprise at least one of a camera position, a camera direction, a camera field-of-view, a camera constant, a camera lens focal length, and an image time-of-recording.
74. An apparatus as recited in Claim 73, wherein the image database comprises images recorded from a plurality of camera positions at grid points of an imaginary substantially planar substantially rectilinear grid defined above and substantially parallel to a surface of the target object and from a plurality of camera directions at each camera position.
75. An apparatus as recited in Claim 73, wherein the means for selecting, scaling, and sequencing images comprises means for: selecting, for each point on the trajectory, an image in the database most nearly approximating at least one of the viewing position, viewing direction, viewing distance, and viewing time of the trajectory point; selecting a portion of the selected image most closely approximating the viewing direction of the trajectory point; and scaling the selected portion of the selected image to approximate the viewing distance of the trajectory point.
76. An apparatus as recited in Claim 75, further comprising a network connection for providing access to the means for selecting, scaling, and sequencing images.
77. An apparatus as recited in Claim 61, wherein the animation sequence may be presented interactively.
78. An apparatus as recited in Claim 61, further comprising a network connection for presenting the animation sequence.
79. An apparatus as recited in Claim 61, wherein the animation sequence is presented at a rate of greater than about six images per second.
80. An apparatus as recited in Claim 61: wherein the image database storing means comprises a digital storage medium and images from the image database are stored therein as digital images, and wherein the image parameter storing means comprises a digital storage medium; further comprising a network connection for providing access to at least one of the image database storing means and the image parameter storing means; further comprising a network connection for providing access to the trajectory selecting means; wherein the trajectory selecting means comprises a graphical user interface for selecting the trajectory; further comprising means for storing an information database; wherein the information database contains information related to details of the target object and the information database is linked to at least one of the image database and the image parameters; further comprising a network connection for providing access to the information database storing means; further comprising means for selecting the trajectory based on information in the information database; further comprising means for displaying information from the information database with the animation sequence; wherein the image parameters comprise at least one of a camera position, a camera direction, a camera field-of-view, a camera constant, a camera lens focal length, and an image time-of-recording; wherein the image database comprises images recorded from a plurality of camera positions at grid points of an imaginary substantially planar substantially rectilinear grid defined above and substantially parallel to a surface of the target object and from a plurality of camera directions at each camera position; wherein the means for selecting, scaling, and sequencing images comprises means for selecting, for each point on the trajectory, an image in the database most nearly approximating at least one of the viewing position, viewing direction, viewing distance, and viewing time of the
trajectory point; wherein the means for selecting, scaling, and sequencing images comprises means for selecting a portion of the selected image most closely approximating the viewing direction of the trajectory point; wherein the means for selecting, scaling, and sequencing images comprises means for scaling the selected portion of the selected image to approximate the viewing distance of the trajectory point; further comprising a network connection for providing access to the means for selecting, scaling, and sequencing images; further comprising a network connection for presenting the animation sequence; and wherein the animation sequence is presented at a rate of greater than about six images per second.
81. A method for generating an animation sequence from an image database of images of a target object, comprising the steps of: selecting a trajectory of at least one of viewing position, viewing direction, viewing distance, and viewing time; selecting, scaling, and sequencing images from the image database using the selected trajectory and stored image parameters; and presenting the selected, scaled, and sequenced images from the image database as an animation sequence.
82. A method as recited in Claim 81, further comprising the steps of: recording images with a camera from a plurality of camera positions relative to the target object; and storing the images recorded by the camera.
83. A method as recited in Claim 81, wherein images from the image database are stored as digital images on a digital storage medium, and wherein the image parameters are stored on a digital storage medium.
84. A method as recited in Claim 83, wherein access to at least one of the image database and the image parameters is provided by a network connection.
85. A method as recited in Claim 81, wherein images from the image database are stored as holographic images on a holographic storage medium.
86. A method as recited in Claim 81, wherein the trajectory may be selected via a network connection.
87. A method as recited in Claim 81, wherein the trajectory may be selected using a graphical user interface.
88. A method as recited in Claim 81, wherein the trajectory may be selected interactively.
89. A method as recited in Claim 81, further comprising the step of linking at least one of the image database and the image parameters to an information database, the information database containing information related to details of the target object.
90. A method as recited in Claim 89, wherein access to the information database storing means is provided by a network connection.
91. A method as recited in Claim 89, wherein the trajectory may be selected based on information in the information database.
92. A method as recited in Claim 89, wherein information from the information database may be displayed with the animation sequence.
93. A method as recited in Claim 81, wherein the image parameters comprise at least one of a camera position, a camera direction, a camera field-of-view, a camera constant, a camera lens focal length, and an image time-of-recording.
94. A method as recited in Claim 93, wherein the image database comprises images recorded from a plurality of camera positions at grid points of an imaginary substantially planar substantially rectilinear grid defined above and substantially parallel to a surface of the target object and from a plurality of camera directions at each camera position.
95. A method as recited in Claim 93, wherein the step of selecting, scaling, and sequencing images comprises the steps of: selecting, for each point on the trajectory, an image in the database most nearly approximating at least one of the viewing position, viewing direction, viewing distance, and viewing time of the trajectory point; selecting a portion of the selected image most closely approximating the viewing direction of the trajectory point; scaling the selected portion of the selected image to approximate the viewing distance of the trajectory point; and presenting the selected and scaled image as an image in the animation sequence.
96. A method as recited in Claim 95, wherein the images may be selected, scaled, and sequenced via a network connection.
97. A method as recited in Claim 81, wherein the animation sequence may be presented interactively.
98. A method as recited in Claim 81, wherein the animation sequence may be presented via a network connection.
99. A method as recited in Claim 81, wherein the animation sequence may be presented at a rate of greater than about six images per second.
100. A method as recited in Claim 81: wherein images from the image database are stored as digital images on a digital storage medium, and wherein the image parameters are stored on a digital storage medium; wherein access to at least one of the image database and the image parameters is provided by a network connection; wherein the trajectory may be selected via a network connection; wherein the trajectory may be selected using a graphical user interface; further comprising the step of linking at least one of the image database and the image parameters to an information database, the information database containing information related to details of the target object; wherein access to the information database storing means is provided by a network connection; wherein the trajectory may be selected based on information in the information database; wherein information from the information database may be displayed with the animation sequence; wherein the image parameters comprise at least one of a camera position, a camera direction, a camera field-of-view, a camera constant, a camera lens focal length, and an image time-of-recording; wherein the image database comprises images recorded from a plurality of camera positions at grid points of an imaginary substantially planar substantially rectilinear grid defined above and substantially parallel to a surface of the target object and from a plurality of camera directions at each camera position; wherein the step of selecting, scaling, and sequencing images further comprises the step of selecting, for each point on the trajectory, an image in the database most nearly approximating at least one of the viewing position, viewing direction, viewing distance, and viewing time of the trajectory point; wherein the step of selecting, scaling, and sequencing images further comprises the step of selecting a portion of the selected image most closely approximating the viewing direction of the trajectory point; wherein the step
of selecting, scaling, and sequencing images further comprises the step of scaling the selected portion of the selected image to approximate the viewing distance of the trajectory point; wherein the step of selecting, scaling, and sequencing images further comprises the step of presenting the selected and scaled image as an image in the animation sequence; wherein the images may be selected, scaled, and sequenced via a network connection; wherein the animation sequence may be presented via a network connection; and wherein the animation sequence may be presented at a rate of greater than about six images per second.

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU17151/99A AU1715199A (en) 1997-12-22 1998-12-07 Acquisition and animation of surface detail images
EP98961969A EP1040450A4 (en) 1997-12-22 1998-12-07 Acquisition and animation of surface detail images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US6841497P 1997-12-22 1997-12-22
US60/068,414 1997-12-22

Publications (1)

Publication Number Publication Date
WO1999033026A1 true WO1999033026A1 (en) 1999-07-01

Family

ID=22082430

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/025955 WO1999033026A1 (en) 1997-12-22 1998-12-07 Acquisition and animation of surface detail images

Country Status (3)

Country Link
EP (1) EP1040450A4 (en)
AU (1) AU1715199A (en)
WO (1) WO1999033026A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4463380A (en) * 1981-09-25 1984-07-31 Vought Corporation Image processing system
US5259037A (en) * 1991-02-07 1993-11-02 Hughes Training, Inc. Automated video imagery database generation using photogrammetry

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5696892A (en) * 1992-07-10 1997-12-09 The Walt Disney Company Method and apparatus for providing animation in a three-dimensional computer generated virtual world using a succession of textures derived from temporally related source images
US5577175A (en) * 1993-08-06 1996-11-19 Matsushita Electric Industrial Co., Ltd. 3-dimensional animation generating apparatus and a method for generating a 3-dimensional animation
US5710875A (en) * 1994-09-09 1998-01-20 Fujitsu Limited Method and apparatus for processing 3-D multiple view images formed of a group of images obtained by viewing a 3-D object from a plurality of positions
US5706416A (en) * 1995-11-13 1998-01-06 Massachusetts Institute Of Technology Method and apparatus for relating and combining multiple images of the same scene or object(s)
US5850469A (en) * 1996-07-09 1998-12-15 General Electric Company Real time tracking of camera pose
US5852450A (en) * 1996-07-11 1998-12-22 Lamb & Company, Inc. Method and apparatus for processing captured motion data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1040450A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1227301A1 (en) * 2001-01-29 2002-07-31 Siemens Aktiengesellschaft Position determination with the assistance of a mobile telephone
WO2002061376A1 (en) * 2001-01-29 2002-08-08 Siemens Aktiengesellschaft Position determination using a mobile appliance
US10358235B2 (en) 2008-04-11 2019-07-23 Nearmap Australia Pty Ltd Method and system for creating a photomap using a dual-resolution camera system
US10358234B2 (en) 2008-04-11 2019-07-23 Nearmap Australia Pty Ltd Systems and methods of capturing large area images in detail including cascaded cameras and/or calibration features

Also Published As

Publication number Publication date
EP1040450A1 (en) 2000-10-04
AU1715199A (en) 1999-07-12
EP1040450A4 (en) 2001-04-11

Similar Documents

Publication Publication Date Title
US8893026B2 (en) System and method for creating and broadcasting interactive panoramic walk-through applications
US9858717B2 (en) System and method for producing multi-angle views of an object-of-interest from images in an image dataset
US10380410B2 (en) Apparatus and method for image-based positioning, orientation and situational awareness
US9420234B2 (en) Virtual observer
US8331611B2 (en) Overlay information over video
JP5861150B2 (en) Image information output method
US20180322197A1 (en) Video data creation and management system
US10191635B1 (en) System and method of generating a view for a point of interest
US9153011B2 (en) Movement based level of detail adjustments
Ciampa Pictometry Digital Video Mapping
EP2056256A2 (en) System and method for revealing occluded objects in an image dataset
US20040218910A1 (en) Enabling a three-dimensional simulation of a trip through a region
CA2705809A1 (en) Method and apparatus of taking aerial surveys
US11403822B2 (en) System and methods for data transmission and rendering of virtual objects for display
JP2009217524A (en) System for generating and browsing three-dimensional moving image of city view
Stal et al. Highly detailed 3D modelling of Mayan cultural heritage using an UAV
EP1040450A1 (en) Acquisition and animation of surface detail images
Zheng et al. Pervasive Views: Area exploration and guidance using extended image media
Nobre et al. Spatial Video: exploring space using multiple digital videos
US20220375175A1 (en) System For Improving The Precision and Accuracy of Augmented Reality
Cai et al. Locating key views for image indexing of spaces
Counsell Recording and retrieving spatial information with video and images
Fause et al. Development of tools for construction of urban databases and their efficient visualization
Yu et al. Pervasive Views: Area Exploration and Guidance Using Extended Image Media

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: KR

WWE Wipo information: entry into national phase

Ref document number: 1998961969

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1998961969

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWW Wipo information: withdrawn in national office

Ref document number: 1998961969

Country of ref document: EP