USRE41175E1 - GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site - Google Patents

GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site

Info

Publication number
USRE41175E1
Authority
US
United States
Prior art keywords
range
site
data
scanner
scanning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US11/480,248
Inventor
Robert M. Vashisth
James U. Jensen
James W. Bunger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
InteliSum Inc
Original Assignee
InteliSum Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by InteliSum Inc
Priority to US11/480,248
Assigned to SQUARE 1 BANK (security interest; see document for details). Assignors: INTELISUM, INC.
Application granted
Publication of USRE41175E1
Anticipated expiration
Legal status: Expired - Fee Related


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C15/00 Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only


Abstract

A system for capturing a virtual model of a site includes a range scanner for scanning the site to generate range data indicating distances from the range scanner to real-world objects. The system also includes a global positioning system (GPS) receiver coupled to the range scanner for acquiring GPS data for the range scanner at a scanning location. In addition, the system includes a communication interface for outputting a virtual model comprising the range data and the GPS data.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
Notice: More than one reissue application has been filed for the reissue of Pat. No. 6,759,979. The reissue applications are application No. 11/480,248 (the present application, filed on Jun. 30, 2006) and application No. 12/362,954 (filed on Jan. 30, 2009), all of which are reissue or divisional reissue applications of Pat. No. 6,759,979. The present application is a reissue application of U.S. Pat. No. 6,759,979 (application No. 10/348,275), which claims the benefit of U.S. Provisional Application No. 60/350,860, filed on Jan. 22, 2002, for “System and Method for Generating 3-D Topographical Visualizations,” with inventors Munish Vashisth and James U. Jensen, each application identified above being incorporated herein by this reference in its entirety.
BACKGROUND
1. Field of the Invention
The present invention relates generally to three-dimensional modeling. More specifically, the present invention relates to a system and method for capturing three-dimensional virtual models of a site that can be co-registered and visualized within a computer system.
2. Description of Related Background Art
Lidar (light detection and ranging) uses laser technology to make precise distance measurements over long or short distances. One application of lidar is the range scanner, or scanning lidar. In a typical range scanner, a lidar is mounted on a tripod equipped with a servo mechanism that continuously pans and tilts the lidar to scan a three-dimensional area. During the scanning process, the lidar makes repeated range measurements to objects in its path. The resulting range data may be collected and serve as a rough model of the scanned area.
Physical limitations of the range scanner constrain the maximum resolution of the range data, which decreases with distance from the range scanner. At large distances, the range scanner may not be able to discern surface details of an object. A lack of continuous spatial data (gaps between points) and a lack of color attributes are significant limitations of conventional range scanners. Furthermore, a range scanner only scans objects within the lidar's line-of-sight. As a result, no data is collected for the side of an object opposite to the lidar or for objects obscured by other objects (“occlusions”).
To obtain a more complete and accurate model, the range scanner can be moved to other scanning locations in order to scan the same area from different perspectives and thereby obtain range data for obscured objects. Thereafter, the resulting sets of range data can be merged into a single model.
Unfortunately, the merging of sets of range data is not automatic. Human decision-making is generally required at several steps in the merging process. For instance, a human surveyor is typically needed to determine the relative distances between the range scanning locations and the scanned area. Furthermore, a human operator must manually identify points in common (“fiducials”) between multiple sets of range data in order to align and merge the sets into a single model. Such identification is by no means easy, particularly in the case of curved surfaces. The need for human decision-making increases the cost of modeling and the likelihood of error in the process.
SUMMARY OF THE INVENTION
A system for capturing a virtual model of a site includes a range scanner for scanning the site to generate range data indicating distances from the range scanner to real-world objects. The system also includes a global positioning system (GPS) receiver coupled to the range scanner for acquiring GPS data for the range scanner at a scanning location. In addition, the system includes a communication interface for outputting a virtual model comprising the range data and the GPS data.
The system may further include a transformation module for using the GPS data with orientation information, such as bearing, for the range scanner to automatically transform the range data from a scanning coordinate system to a modeling coordinate system, where the modeling coordinate system is independent of the scanning location. A co-registration module may then combine the transformed range data with a second set of transformed range data for the same site generated at a second scanning location.
The system also includes a digital camera coupled to the range scanner for obtaining digital images of the real-world objects scanned by the range scanner. The system may associate the digital images of the real-world objects with the corresponding range data in the virtual model.
A system for building a virtual model of a site includes a communication interface for receiving a first set of range data indicating distances from a range scanner at a first location to real-world objects. The communication interface also receives a first set of GPS data for the range scanner at the first location. The system further includes a transformation module for using the first set of GPS data with orientation information for the range scanner to automatically transform the first set of range data from a first local coordinate system to a modeling coordinate system.
A system for modeling an object includes a range scanner for scanning an object from a first vantage point to generate a first range image. The system further includes a GPS receiver for obtaining GPS readings for the first vantage point, as well as a storage medium for associating the first range image and the GPS readings within a first virtual model.
The range scanner may re-scan the object from a second vantage point to generate a second range image. Likewise, the GPS receiver may acquire updated GPS readings for the second vantage point, after which the storage medium associates the second range image and the updated GPS readings within a second virtual model. A transformation module then employs the GPS readings of the virtual models with orientation information for the range scanner at each location to automatically transform the associated range images from local coordinate systems referenced to the vantage points to a single coordinate system independent of the vantage points.
BRIEF DESCRIPTION OF THE DRAWINGS
Non-exhaustive embodiments of the invention are described with reference to the figures, in which:
FIG. 1 is a high-level overview of a system for capturing and co-registering virtual models;
FIG. 2 is a detailed block diagram of the system of FIG. 1;
FIG. 3 is a schematic illustration of capturing one or more virtual models of a site at each of a number of positions or vantage points;
FIG. 4 is a schematic illustration of three sets of range data;
FIG. 5 is a representation of a scanning coordinate system;
FIG. 6 is a schematic illustration of transforming range data from various scanning coordinate systems to a single modeling coordinate system;
FIG. 7 is a schematic illustration of generating a combined or transformed virtual model;
FIG. 8 is a schematic illustration of generating a merged virtual model;
FIG. 9 is a schematic illustration of generating an interactive, three-dimensional visualization;
FIG. 10 is a flowchart of a method for capturing and co-registering virtual models of a site; and
FIG. 11 is a schematic illustration of generating an area model based on scans of multiple sites.
DETAILED DESCRIPTION
Reference is now made to the figures in which like reference numerals refer to like elements. For clarity, the first digit of a reference numeral indicates the figure number in which the corresponding element is first used.
In the following description, numerous specific details of programming, software modules, user selections, network transactions, database queries, database structures, etc., are provided for a thorough understanding of the embodiments of the invention. However, those skilled in the art will recognize that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In some cases, well-known structures, materials, or operations are not shown or not described in detail to avoid obscuring aspects of the invention. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
FIG. 1 is a high-level overview of a modeling system 100 according to an embodiment of the invention. A range scanner 102 includes a lidar 103 for scanning a site 104 to generate range data, i.e., distance measurements from the range scanner 102 to real-world objects within the site 104. The site 104 may be any indoor or outdoor three-dimensional region that includes one or more objects to which distance measurements can be made using the lidar 103.
The location and dimensions of the site 104 may be defined by an operator 105 using a control device, such as a personal data assistant (PDA) 106, computer 108, or the like, which may communicate with the range scanner 102 using any wired or wireless method. The operator 105 may specify, for instance, the degree to which the range scanner 102 pans and tilts during scanning, effectively determining the dimensions of the site 104.
In one embodiment, the range scanner 102 is equipped with a high-resolution, high-speed digital camera 110 for obtaining digital images of the site 104 during the scanning process. As explained more fully below, the digital images may be later used to apply textures to a polygon mesh created from the range data, providing a highly realistic three-dimensional visualization 112 of the site 104 for display on a computer monitor 114 or other display device.
The range scanner 102 also includes a global positioning system (GPS) receiver 116 for acquiring GPS data relative to the range scanner 102 at the location of scanning. The GPS data may include, for example, the latitude, longitude, and altitude of the range scanner 102. In other embodiments, the GPS data may include Universal Transverse Mercator (UTM) coordinates, Earth-Centered/Earth-Fixed (ECEF) coordinates, or other Earth-based locators. A GPS receiver 116 relies on three or more orbiting satellites 118 for triangulation and, in some configurations, can provide readings accurate to within a few centimeters.
In one embodiment, the range scanner 102 sends the range data, digital images, and GPS data to a computer 108, where they are used to create the visualization 112. The visualization 112 may be interactive, e.g., a user may “walk through” the site 104 depicted in the visualization 112. In addition, the user may delete or move objects depicted in the visualization 112 or modify the visualization 112 in other ways. Such visualizations 112 are highly beneficial in the fields of architecture, landscape design, land use, erosion control, etc.
FIG. 2 is a detailed block diagram of the system 100 of FIG. 1. As noted above, the range scanner 102 includes a lidar 103, a digital camera 110, and a GPS receiver 116. The lidar 103 may be embodied, for instance, as an LMS 291, available from SICK AG of Waldkirch, Germany, although various other models are contemplated.
The digital camera 110 may include a PowerShot G2™ camera available from Canon, Inc. In one configuration, the digital camera 110 is capable of capturing images with a resolution of 2272×1704 pixels at a rate of approximately 2.5 images per second. The digital camera 110 may be included within, attached to, or otherwise integrated with the range scanner 102. In alternative embodiments, the range scanner 102 includes multiple digital cameras 110.
The GPS receiver 116 may be embodied as a standard mapping-grade receiver, which may support L-band differential GPS (DGPS). Where higher accuracy is needed, survey-grade receivers may be used, such as a carrier phase (CPH) or real-time kinematic (RTK) GPS. In such embodiments, a base station (not shown) having a known Earth location broadcasts an error correction signal that is used by the GPS receiver 116 to achieve accuracy to within a few centimeters. An example of a suitable GPS receiver 116 is the ProMark2™ survey system available from Ashtech, Inc. of Santa Clara, Calif. Like the digital camera 110, the GPS receiver 116 may be included within, attached to, or otherwise integrated with the range scanner 102.
The range scanner 102 may also include one or more orientation indicator(s) 202 for providing information about the orientation of the range scanner 102 with respect to the Earth. For example, one indicator 202 may provide a bearing or heading (azimuth) of the range scanner 102. Azimuth is typically expressed as a horizontal angle of the observer's bearing, measured clockwise from a referent direction, such as North. A bearing indicator 202 may be embodied, for instance, as a high-accuracy compass capable of digital output.
Some GPS receivers 116 may include compasses, gyroscopes, inertial navigation systems, etc., for providing highly accurate bearing and/or other orientation information. For example, the ProMark2™ survey system described above provides an azimuth reading. Similarly, a bearing may be obtained indirectly from GPS readings, since two precise GPS coordinates define a bearing. Thus, the orientation indicator 202 need not be a separate component.
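As an illustrative sketch of that indirect approach, the bearing between two survey fixes can be computed from their UTM easting/northing values; the function name and coordinates below are hypothetical:

```python
import math

def bearing_from_fixes(e1, n1, e2, n2):
    """Azimuth, in degrees clockwise from north, of the direction from
    fix 1 to fix 2, given UTM easting/northing pairs in meters."""
    return math.degrees(math.atan2(e2 - e1, n2 - n1)) % 360.0

# A second fix 10 m east and 10 m north of the first lies at a
# bearing of 45 degrees (northeast).
print(bearing_from_fixes(434000.0, 4512000.0, 434010.0, 4512010.0))  # 45.0
```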
In certain implementations, an indicator 202 may provide the tilt or inclination of the range scanner 102 with respect to the Earth's surface. For example, the range scanner 102 may be tilted with respect to one or two axes. For simplicity, however, the following exemplary embodiments assume that the range scanner 102 is level prior to scanning.
As depicted, the range scanner 102 further includes a servo 203 for continuously changing the bearing and/or tilt of the range scanner 102 to scan a selected site 104. The servo 203 may include high-accuracy theodolite-type optical or electronic encoders to facilitate high-resolution scanning.
In one embodiment, the servo 203 only tilts the range scanner 102, while a continuously rotating prism or mirror performs the panning or rotation function. Alternatively, the range scanner 102 could be mounted at a 90° angle, in which case the servo 203 is used for panning. Thus, any appropriate mechanical and/or electronic means, such as stepper motors, diode arrays, etc., may be used to control the bearing and/or tilt of the range scanner 102 within the scope of the invention.
In one embodiment, the servo 203, as well as the other components of the range scanner 102, are directed by a controller 204. The controller 204 may be embodied as a microprocessor, microcontroller, digital signal processor (DSP), or other control device known in the art.
The controller 204 is coupled to a memory 206, such as a random access memory (RAM), read-only memory (ROM), or the like. In one configuration, the memory 206 is used to buffer the range data, digital images, and GPS data during the scanning process. The memory device 206 may also be used to store parameters and program code for operation of the range scanner 102.
In addition, the controller 204 is coupled to a control interface 208, such as an infrared (IR) receiver, for receiving IR-encoded commands from the PDA 106. Various other control interfaces 208 may be used, however, such as an 802.11b interface, an RS-232 interface, a universal serial bus (USB) interface, or the like. As previously noted, the PDA 106 is used to program the range scanner 102. For example, the PDA 106 may specify the size of the site 104 to be scanned, the resolution of the range data and digital images to be collected, etc.
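For illustration only, such a scan job might be expressed as a simple parameter set; the patent does not define a command format, so every field name below is an assumption:

```python
# Hypothetical scan-job parameters of the kind the PDA 106 might send.
scan_job = {
    "pan_start_deg": -60.0,            # horizontal extent of the site
    "pan_end_deg": 60.0,
    "tilt_start_deg": -10.0,           # vertical extent of the site
    "tilt_end_deg": 30.0,
    "angular_step_deg": 0.25,          # range-sample resolution
    "image_resolution": (2272, 1704),  # matches the camera mode above
}

def validate(job):
    """Reject jobs with inverted extents or a non-positive step."""
    assert job["pan_end_deg"] > job["pan_start_deg"]
    assert job["tilt_end_deg"] > job["tilt_start_deg"]
    assert job["angular_step_deg"] > 0

validate(scan_job)
```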
The controller 204 is also coupled to a communication interface 210 for sending the captured range data, digital images, and GPS data to the computer 108 for further processing. The communication interface 210 may include, for instance, an Ethernet adapter, an IEEE 1394 (FireWire) adapter, a USB adapter, or other high-speed communication interface.
The communication interface 210 of the range scanner 102 is coupled to, or in communication with, a similar communication interface 212 within the computer 108. The computer 108 may be embodied as a standard IBM-PC™ compatible computer running a widely-available operating system (OS) such as Windows XP™ or Linux™.
The computer 108 also includes a central processing unit (CPU) 214, such as an Intel™ x86 processor. The CPU 214 is coupled to a standard display interface 216 for displaying text and graphics, including the visualization 112, on the monitor 114. The CPU 214 is further coupled to an input interface 218 for receiving data from a standard input device, such as a keyboard 220 or mouse 222.
The CPU 214 is coupled to a memory 224, such as a RAM, ROM, or the like. As described in greater detail hereafter, the memory 224 includes various software modules or components, including a co-registration module 228, transformation module 229, a merging module 230, and a visualization module 232. The memory 224 may further include various data structures, such as a number of virtual models 234.
Briefly, the co-registration module 228 automatically co-registers sets of range data from different views (e.g., collected from different vantage points) using the GPS data and orientation information. Co-registration places the sets of range data 302 within the same coordinate system and combines the sets into a single virtual model 234. In addition, co-registration may require specific calibration of instruments for parallax and other idiosyncrasies.
The transformation module 229 performs the necessary transformations to convert each set of range data from a local scanning coordinate system referenced to a particular scanning location to a modeling coordinate system that is independent of the scanning location. Since transformation is typically part of co-registration, the transformation module 229 may be embodied as a component of the co-registration module 228 in one embodiment.
The merging module 230 analyzes the range data 302 to correct for errors in the scanning process, eliminating gaps, overlapping points, and other incongruities. Thereafter, the visualization module 232 produces the interactive, three-dimensional visualization 112, as explained in greater detail below.
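The patent does not spell out a merging algorithm at this point, but a standard technique for this kind of fine alignment is the iterative closest point (ICP) algorithm. Below is a minimal sketch, assuming NumPy and point clouds small enough for brute-force nearest-neighbour search:

```python
import numpy as np

def icp_align(source, target, iterations=20):
    """Iteratively nudge `source` (N x 3) toward `target` (M x 3)."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    for _ in range(iterations):
        # 1. Pair each source point with its closest target point.
        dists = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
        matches = tgt[dists.argmin(axis=1)]
        # 2. Best-fit rigid transform for those pairs (Kabsch algorithm).
        src_mean, match_mean = src.mean(axis=0), matches.mean(axis=0)
        H = (src - src_mean).T @ (matches - match_mean)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = match_mean - R @ src_mean
        # 3. Apply the increment and repeat.
        src = src @ R.T + t
    return src
```

Because the GPS-based placement should already put the scans close to alignment, a few iterations are typically sufficient.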
In alternative embodiments, one or more of the described modules may be implemented using hardware or firmware, and may even reside within the range scanner 102. Thus, the invention should not be construed as requiring a separate computer 108.
In one configuration, the computer 108 includes a mass storage device 236, such as a hard disk drive, optical storage device (e.g., DVD-RW), or the like, which may be used to store any of the above-described modules or data structures. Hence, references herein to “memory” or “storage media” should be construed to include any combination of volatile, non-volatile, magnetic, or optical storage media.
Referring to FIG. 3, one or more virtual models 234 of a site 104 may be captured at each of a number of positions or vantage points. At each position, the range scanner 102 generates range data 302 indicating distances to objects (e.g., a tree) within the site 104. A set of range data 302 is sometimes referred to as a “range image,” although the range data 302 need not be stored or presented in a conventional image format. The terms “range data” and “range image” are used herein interchangeably.
The pattern of marks depicted within the range data 302 represents sample points, i.e., points at which a range measurement has been taken. The density or resolution of the range data 302 depends on the distance of the object from the range scanner 102, as well as the precision and accuracy of the lidar 103 and the mechanism for panning and/or tilting the lidar 103 relative to its platform. Although FIG. 3 suggests a horizontal scanning pattern, the range data 302 could also be generated using a vertical or spiral scanning pattern.
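To make the resolution falloff concrete: for small angular steps, the linear spacing between adjacent samples is roughly the range multiplied by the step in radians. A quick check with a hypothetical 0.25-degree step:

```python
import math

def point_spacing(range_m, step_deg):
    """Approximate spacing (m) between adjacent samples at a given range."""
    return range_m * math.radians(step_deg)

print(point_spacing(10.0, 0.25))   # ~0.04 m between points at 10 m
print(point_spacing(100.0, 0.25))  # ~0.44 m between points at 100 m
```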
As previously noted, the GPS receiver 116 associated with the range scanner 102 obtains GPS data 304 (e.g., latitude, longitude, altitude) relative to the range scanner 102 at the scanning position. Additionally, the orientation indicator(s) 202 may provide orientation information 305, e.g., bearing, tilt.
The camera 110 associated with the range scanner 102 obtains one or more high-resolution digital images 306 of the site 104. The resolution of the digital images 306 will typically far exceed the resolution of the range data 302.
The range data 302, GPS data 304, orientation information 305, and digital images 306 are collected at each scanning position or location and represent a virtual model 234 of the site 104. Separate virtual models 234 are generated from the perspective of each of the scanning positions. Of course, any number of virtual models 234 of the site 104 can be made within the scope of the invention.
In certain instances, a data structure lacking one or more of the above-described elements may still be referred to as a “virtual model.” For example, a virtual model 234 may not include the digital images 306 or certain orientation information 305 (such as tilt data where the range scanner 102 is level during scanning).
FIG. 4 depicts the three sets of range data 302a-c from top-down views rather than the side views of FIG. 3. As shown, each set represents only a portion of the site 104, since the range scanner 102 is not able to “see” behind objects.
In general, the sets of range data 302a-c have separate scanning coordinate systems 402a-c that are referenced to the scanning positions. Typically, the range data 302 is initially captured in a polar (or polar-like) coordinate system. For example, as shown in FIG. 5, an individual range measurement may be represented by P(R, φ, θ), where R is the range (distance) from the range scanner 102, φ is the degree of tilt, and θ is the degree of panning.
Converting the polar range data 302 into the depicted Cartesian coordinates may be done using standard transformations, as shown below.
X = R cos φ cos θ  Eq. 1
Y = R sin φ  Eq. 2
Z = R cos φ sin θ  Eq. 3
In certain embodiments, the geometry of the range scanner 102 (e.g., the axis of rotation, offset, etc.) may result in a polar-like coordinate system that requires different transformations, as will be known to those of skill in the art. In general, the origin of each of the scanning coordinate systems 402a-c is the light-reception point of the lidar 103.
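For illustration, Eq. 1-3 can be sketched in a few lines of Python; the sketch assumes the idealized geometry described above, with angles supplied in radians and the origin at the lidar's light-reception point, and the function name is illustrative only:

```python
import math

def polar_to_cartesian(r, phi, theta):
    """Convert one range sample P(R, phi, theta) into scanner-local
    Cartesian coordinates per Eq. 1-3. phi is the tilt angle and
    theta the panning angle, both in radians; the origin is the
    light-reception point of the lidar 103."""
    x = r * math.cos(phi) * math.cos(theta)
    y = r * math.sin(phi)
    z = r * math.cos(phi) * math.sin(theta)
    return (x, y, z)

# Example: a 25 m return at 10 degrees of tilt and 45 degrees of pan
print(polar_to_cartesian(25.0, math.radians(10.0), math.radians(45.0)))
```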
Referring to FIG. 6, in order to combine or “co-register” the virtual models 234 from the various scanning positions, the transformation module 229 transforms the range data 302a-c from their respective scanning coordinate systems 402a-c to a single modeling coordinate system 602 that is independent of the scanning positions and the orientation of the range scanner 102.
In one embodiment, the modeling coordinate system 602 is based on a geographic coordinate system, such as Universal Transverse Mercator (UTM), Earth-Centered/Earth-Fixed (ECEF), or longitude/latitude/altitude (LLA). GPS receivers 116 are typically able to display Earth-location information in one or more of these coordinate systems. UTM is used in the following examples because it provides convenient Cartesian coordinates in meters. In the following examples, the UTM zone is not shown since the range data 302 will typically be located within a single zone.
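Where the receiver reports LLA rather than UTM, the fix can be projected into UTM before use. A minimal sketch using the third-party pyproj library follows; the WGS84 datum, UTM zone 12N (EPSG:32612), and the sample coordinates are illustrative assumptions, and the correct zone is whichever one covers the site:

```python
from pyproj import Transformer

# WGS84 longitude/latitude -> UTM zone 12N; the zone is an assumption
# chosen for illustration and must match the site being scanned.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32612", always_xy=True)

lon, lat, alt = -111.89, 40.76, 1300.0   # hypothetical GPS fix
easting, northing = to_utm.transform(lon, lat)
print(f"{easting:.1f} mE, {northing:.1f} mN, altitude {alt:.1f} m")
```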
As depicted in FIG. 6, the transformation module 229 initially rotates each set of range data 302a-c by the bearing of the range scanner 102 obtained from the orientation information 305. After a set of range data 302 has been converted into Cartesian coordinates, each point may be rotated around the origin by the following transformation, where b is the bearing.
X1 = X cos(b) − Z sin(b)  Eq. 4
Z1 = Z cos(b) + X sin(b)  Eq. 5
These equations assume that the range scanner 102 was level at the time of scanning, such that the XZ planes of the scanning coordinate system 402 and modeling coordinate system 602 are essentially co-planar. If, however, the range scanner 102 was tilted with respect to the X and/or Z axes, the transformations could be modified by one of skill in the art.
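A minimal sketch of the rotation of Eq. 4-5, again assuming a level scanner so that only the bearing b matters and Y passes through unchanged (the function name is illustrative):

```python
import math

def rotate_by_bearing(x, z, bearing_deg):
    """Rotate a Cartesian range point about the vertical (Y) axis by
    the scanner's bearing, per Eq. 4-5. Assumes the scanner was level
    at the time of scanning."""
    b = math.radians(bearing_deg)
    x1 = x * math.cos(b) - z * math.sin(b)
    z1 = z * math.cos(b) + x * math.sin(b)
    return (x1, z1)
```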
Next, as shown in FIG. 7, the transformation module 229 uses the GPS data 304 to translate the range data 302 to the correct location within the modeling coordinate system 602. In one embodiment, this is done by adding the coordinates from the GPS data 304 to each of the range data coordinates, as shown below.
X2 = X1 + GPSE  Eq. 6
Y2 = Y1 + GPSH  Eq. 7
Z2 = Z1 + GPSN  Eq. 8
where
    • GPSE is the UTM “easting,”
    • GPSH is the altitude (typically the height above the reference ellipsoid), and
    • GPSN is the UTM “northing.”
The UTM easting and northing for a number of points in the modeling coordinate system 602 are shown in FIG. 7, and are typically represented on maps using the “mE” (meters East) and “mN” (meters North) labels.
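A sketch of the translation of Eq. 6-8, assuming the UTM-based modeling coordinate system described above, with the easting carried on X, the altitude on Y, and the northing on Z (the function name is illustrative):

```python
def translate_to_modeling(x1, y1, z1, gps_e, gps_h, gps_n):
    """Place a rotated range point in the modeling coordinate system
    by adding the scanner's GPS-derived UTM coordinates (Eq. 6-8)."""
    x2 = x1 + gps_e   # Eq. 6: X carries the UTM easting (mE)
    y2 = y1 + gps_h   # Eq. 7: Y carries the altitude
    z2 = z1 + gps_n   # Eq. 8: Z carries the UTM northing (mN)
    return (x2, y2, z2)
```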
Those of skill in the art will recognize that the invention is not limited to UTM coordinates and that transformations exist for other coordinate systems, such as ECEF and LLA. In certain embodiments, the modeling coordinate system 602 may actually be referenced to a local landmark or a point closer to the range data 302, but will still be geographically oriented.
In the preceding example, the units of the range data 302 and GPS data 304 are both in meters. For embodiments in which the units differ, a scaling transformation will be needed. Furthermore, while FIGS. 6 and 7 show particular types of transformations, those of skill in the art will recognize that different transformations may be required based on the geometry of the range scanner 102, whether the range scanner 102 was tilted with respect to the XZ plane, and the like.
When the transformation is complete, the co-registration module 228 co-registers or combines the range data 302a-c from the various views into a co-registered model 702 of the entire site 104. This may involve, for example, combining the sets of range data 302a-c into a single data structure, while still preserving the ability to access the individual sets.
In one embodiment, the co-registered model 702 includes GPS data 304 for at least one point. This allows the origin of the modeling coordinate system 602 to be changed to any convenient location, while still preserving a geographic reference.
As illustrated in FIG. 7, a co-registered model 702 is not perfect. Noise and other sources of error may result in various gaps, incongruities, regions of overlap, etc. Thus, while the co-registration module 228 automatically places the range data 302a-c within close proximity to their expected locations, eliminating the need for human decision-making, the range data 302a-c are not truly merged. For example, two separate points may exist within the co-registered model 702 that should actually refer to the same physical location in the site 104.
Referring to FIG. 8, a merging module 230 addresses this problem by merging the range data 302a-c from the co-registered model 702 into a single merged model 802. The merging module 230 makes fine adjustments to the transformed range data 302a-c, eliminating the gaps, incongruities, and regions of overlap. In addition, the merging module 230 may eliminate redundancy by merging points from the transformed range data 302a-c that represent the same physical location. This is accomplished, in one embodiment, using an iterative closest point (ICP) algorithm, as known to those of skill in the art.
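A minimal point-to-point ICP sketch using NumPy and SciPy illustrates the fine-adjustment idea; this is one common formulation of ICP, not necessarily the one used by the merging module 230, and it assumes the two point sets have already been placed near each other by the GPS-based co-registration:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Iteratively match each source point to its nearest target
    point, then solve via SVD for the rigid rotation R and
    translation t that best align the matched pairs, and apply them.
    source and target are (N, 3) arrays of co-registered points."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                 # nearest neighbors
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)    # 3x3 covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                      # apply the update
    return src
```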
In one embodiment, the merging module 230 incorporates the Scanalyze™ product available from Stanford University. Scanalyze™ is an interactive computer graphics application for viewing, editing, aligning, and merging range images to produce dense polygon meshes.
Scanalyze™ processes three kinds of files: triangle-mesh PLY files (extension .ply), range-grid PLY files (also with extension .ply), and SD files (extension .sd). Triangle-mesh PLY files encode general triangle meshes as lists of arbitrarily connected 3D vertices, whereas range-grid PLY files and SD files encode range images as rectangular arrays of points. SD files also contain metadata that describe the geometry of the range scanner 102 used to acquire the data. This geometry is used by Scanalyze™ to derive line-of-sight information for various algorithms. PLY files may also encode range images (in polygon mesh form), but they do not include metadata about the range scanner and thus do not provide line-of-sight information.
Once the PLY or SD files have been loaded, they can be pairwise aligned using a variety of techniques—some manual (i.e. pointing and clicking) and some automatic (using a variant of the ICP algorithm).
Pairs of scans can be selected for alignment either automatically (so-called all-pairs alignment) or manually, by choosing two scans from a list. These pairwise alignments can optionally be followed by a global registration step whose purpose is to spread the alignment error evenly across the available pairs. The new positions and orientations of each PLY or SD file can be stored as a transform file (extension .xf) containing a 4×4 matrix.
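Applying such a stored transform is a single matrix product in homogeneous coordinates. The sketch below assumes the .xf file holds the 4×4 matrix as four whitespace-separated rows and uses the column-vector convention (p′ = M·p); both details are assumptions about the file layout:

```python
import numpy as np

def apply_xf(points, xf_path):
    """Apply a 4x4 transform stored in a Scanalyze-style .xf file to
    an (N, 3) array of points, using homogeneous coordinates."""
    M = np.loadtxt(xf_path).reshape(4, 4)
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ M.T)[:, :3]
```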
Referring to FIG. 9, the visualization module 232 uses the merged model 802 of FIG. 8 to create an interactive, three-dimensional visualization 112 of the site 104. To accomplish this, the visualization module 232 may convert the transformed/merged range data 302 into a polygon mesh 902. Various known software applications are capable of producing a polygon mesh 902 from range data 302, such as the Volumetric Range Image Processing Package (VripPack), available from Stanford University. VripPack is a set of source code, scripts, and binaries for creating surface reconstructions from range images. For example, the VripPack merges range images into a compressed volumetric grid, extracts a surface from the compressed volumetric grid, fills holes in the reconstruction by carving out empty space, removes small triangles from the reconstruction, and performs a simple 4-level decimation for interactive rendering.
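A greatly simplified stand-in for such reconstruction, which triangulates a single range grid directly rather than merging several grids volumetrically as VripPack does, can be sketched as follows; max_edge is a hypothetical threshold for rejecting triangles that span depth discontinuities:

```python
import numpy as np

def grid_to_mesh(grid, max_edge=0.5):
    """Triangulate a range grid (an H x W x 3 array of XYZ samples,
    NaN where the lidar got no return) by splitting each 2x2 cell of
    neighboring samples into two triangles. Cells with an edge longer
    than max_edge meters are skipped as likely discontinuities."""
    h, w, _ = grid.shape
    triangles = []
    for i in range(h - 1):
        for j in range(w - 1):
            a, b = grid[i, j], grid[i, j + 1]
            c, d = grid[i + 1, j], grid[i + 1, j + 1]
            if np.isnan(np.array([a, b, c, d])).any():
                continue
            edges = [b - a, c - a, d - b, d - c]
            if max(np.linalg.norm(e) for e in edges) > max_edge:
                continue
            triangles.append((a, b, c))    # upper-left triangle
            triangles.append((b, d, c))    # lower-right triangle
    return triangles
```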
The visualization module 232 also decomposes the digital images 306 into textures 904, which are then applied to the polygon mesh 902. In essence, the digital images 306 are “draped” upon the polygon mesh 902. Due to the relatively higher resolution of the digital images 306, the textures 904 add a high degree of realism to the visualization 112. Techniques and code for applying textures 904 to polygon meshes 902 are known to those of skill in the art.
In one embodiment, the mesh 902 and textures 904 are used to create the visualization 112 of the site 104 using a standard modeling representation, such as the virtual reality modeling language (VRML). Thereafter, the visualization 112 can be viewed using a standard VRML browser, or a browser equipped with a VRML plugin, such as the Microsoft™ VRML Viewer. Of course, the visualization 112 could also be created using a proprietary representation and viewed using a proprietary viewer.
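As a sketch of such a representation, the following writes an untextured VRML 2.0 IndexedFaceSet that a standard VRML browser can display; texture mapping via ImageTexture/texCoord nodes is omitted for brevity, and the function name is illustrative:

```python
def write_vrml(path, vertices, faces):
    """Write vertices (sequence of (x, y, z)) and triangular faces
    (sequence of (i, j, k) vertex indices) as a minimal VRML 2.0
    IndexedFaceSet."""
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        f.write("Shape {\n  geometry IndexedFaceSet {\n")
        f.write("    coord Coordinate { point [\n")
        for x, y, z in vertices:
            f.write(f"      {x:.3f} {y:.3f} {z:.3f},\n")
        f.write("    ] }\n")
        f.write("    coordIndex [\n")
        for i, j, k in faces:
            f.write(f"      {i}, {j}, {k}, -1,\n")
        f.write("    ]\n  }\n}\n")

# write_vrml("site.wrl", vertices, faces) produces a file viewable in
# a VRML browser or a browser equipped with a VRML plugin.
```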
As depicted in FIG. 9, the browser may provide navigation controls 906 to allow the user to “walk through” the visualization 112. In addition, the user may delete or move objects shown in the visualization 112 or modify the visualization 112 in other ways. As noted, such visualizations 112 are highly beneficial in the fields of architecture, landscape design, land use, erosion control, and the like.
FIG. 10 is a flowchart of a method 1000 for capturing and co-registering virtual models 234 of a site 104. Initially, the site 104 is scanned 1002 to generate a first set of range data 302 indicating distances from a range scanner 102 at a first location to real-world objects in the site 104. A GPS receiver then acquires 1004 GPS data 304 relative to the range scanner 102 at the first location, after which the range scanner 102 outputs 1006 a first virtual model 234 comprising the first sets of range data 302 and GPS data 304.
After the range scanner 102 is moved to a second location, the method 1000 continues by scanning 1008 the site 104 to generate a second set of range data 302 indicating distances from the range scanner 102 at the second location to real-world objects in the site 104. In addition, the GPS receiver 116 acquires 1010 a second set of GPS data 304 relative to the range scanner 102 at the second location, after which the range scanner 102 outputs 1012 a second virtual model 234 comprising the second sets of range data 302 and GPS data 304.
In one configuration, a transformation module 229 then uses 1014 the sets of GPS data 304 to transform the sets of range data 302 from scanning coordinate systems 402 to a single modeling coordinate system 602. Thereafter, the transformed range data 302 can be merged and visualized using standard applications.
As illustrated in FIG. 11, a range scanner 102 may be used to scan multiple sites 104a-b within a particular area 1102 to create multiple site models 1104a-b using the techniques discussed above. The sites 104a-b may or may not be contiguous, although they are typically in close proximity or related in some manner. For instance, the area 1102 may represent a town, campus, golf course, etc., while the sites 104a-b may correspond to different buildings or structures.
The site models 1104a-b may be co-registered models 702 or merged models 802, as previously shown and described. Furthermore, as previously noted, a site model 1104a-b may include GPS data 304.
In one embodiment, the transformation module 229 uses the sets of GPS data 304a-b to combine the individual site models 1104a-b into a single area model 1106. This may be done in the same manner as the sets of range data 302a-c of FIG. 6 were transformed and combined into the co-registered model 702. Specifically, the GPS data 304 provides a common reference point for each site model 1104a-b, allowing the co-registration and/or transformation modules 228, 229 to make any necessary transformations.
The resulting area model 1106 may then be used to produce an interactive, three-dimensional visualization 112 of the entire area 1102 that may be used for many purposes. For example, a user may navigate from one site 104 to another within the area 1102. Also, when needed, a user may remove any of the site models 1104 from the area model 1106 to visualize the area 1102 without the objects from the removed site model 1104. This may be helpful in the context of architectural or land-use planning.
While specific embodiments and applications of the present invention have been illustrated and described, it is to be understood that the invention is not limited to the precise configuration and components disclosed herein. Various modifications, changes, and variations apparent to those skilled in the art may be made in the arrangement, operation, and details of the methods and systems of the present invention disclosed herein without departing from the spirit and scope of the invention.

Claims (44)

1. A system for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the system comprising:
a range scanner for automatically scanning a site from a plurality of different fixed locations to generate a separate set of range data at each scanning location indicating distances from the range scanner to real-world objects within the site, each set of range data comprising a three-dimensional model of the same site from a different perspective, wherein at least one set of range data includes a surface of a real-world object that is occluded in at least one other set of range data;
a digital camera coupled to the range scanner for obtaining digital images of the real-world objects scanned by the range scanner at each location;
a global positioning system (GPS) receiver coupled to the range scanner for acquiring GPS data for the range scanner at each scanning location, wherein the GPS receiver interacts with a separate base station to achieve sub-meter accuracy;
an orientation indicator coupled to the range scanner for indicating an orientation of the range scanner at each scanner location;
a transformation module for using the GPS data with orientation information for the range scanner at each scanning location to automatically transform the sets of range data from individual scanning coordinate systems based on the scanning locations to a single modeling coordinate system; and
a co-registration module for automatically co-registering the transformed sets of range data into a single virtual model of the site that includes the one or more occluded surfaces.
2. The system of claim 1, further comprising:
a visualization module for converting the co-registered virtual model of the site into a polygon mesh and for applying textures to the polygon mesh to create a visualization of the site that is substantially free of occlusions, the textures being derived from the digital images.
3. The system of claim 1, further comprising:
a merging module for merging at least two points represented within the co-registered virtual model that correspond to the same physical location within the site.
4. The system of claim 1, wherein the modeling coordinate system is a geographic coordinate system.
5. The system of claim 2, wherein the orientation indicator comprises a bearing indicator for indicating the bearing of the range scanner.
6. The system of claim 1, wherein the GPS data is selected from the group consisting of longitude, latitude, altitude, Universal Transverse Mercator (UTM) coordinates, and Earth-Centered/Earth-Fixed (ECEF) coordinates.
7. The system of claim 1, wherein at least two of the sets of range data indicate a distance from the range scanner to the same physical location within the site.
8. The system of claim 1, wherein the virtual model associates the digital images of the real-world objects with the corresponding range data.
9. The system of claim 1, wherein the range scanner comprises:
a servo for continuously changing an orientation of the range scanner with respect to a fixed location to scan the site; and
a lidar to obtain range measurements to real-world objects along a changing path of the range scanner responsive to the servo.
10. A system for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the system comprising:
a range scanner for automatically scanning the site to generate a first set of range data indicating distances from the range scanner at a first location to real-world objects in the site, wherein the range scanner is to automatically re-scan the site to generate a second set of range data indicating distances from the range scanner at a second scanning location to real-world objects in the site, each set of range data comprising a three-dimensional model of the same site from a different perspective, wherein the second set of range data includes a surface of a real-world object that is occluded in the first set of range data;
a digital camera coupled to the range scanner for obtaining digital images of the real-world objects scanned by the range scanner at each location;
a global positioning system (GPS) receiver coupled to the range scanner for acquiring a first set of GPS data for the range scanner at the first scanning location and a second set of GPS data for the range scanner at the second location, wherein the GPS receiver interacts with a separate base station to achieve sub-meter accuracy;
an orientation indicator for indicating an orientation of the range scanner at each scanning location;
a transformation module for using the first and second sets of GPS data with orientation information for the range scanner at the scanning locations to automatically transform the first and second sets of range data from local coordinate systems referenced to the scanning locations to a single coordinate system independent of the scanning locations;
a co-registration module for automatically co-registering the first and second sets of range data into a single virtual model of the site that includes the one or more occluded surfaces; and
a merging module for merging at least two points represented within the co-registered virtual model that correspond to the same physical location within the site.
11. The system of claim 10, further comprising:
a visualization module for converting the co-registered virtual model of the site into a polygon mesh and for applying textures to the polygon mesh to create a visualization of the site that is substantially free of occlusions, the textures being derived from the digital images.
12. A system for modeling an object including one or more occluded surfaces when viewed from any vantage point, the system comprising:
a range scanner for automatically scanning an object from a plurality of fixed vantage points to generate a plurality of separate range images, each range image comprising a three-dimensional model of the object from a different perspective, wherein at least one range image includes a surface of the object that is occluded in at least one other range image;
a digital camera coupled to the range scanner for obtaining digital images of the object from each vantage point;
a global positioning system (GPS) receiver for obtaining GPS readings for the range scanner at each vantage point, wherein the GPS receiver interacts with a separate base station to achieve sub-meter accuracy;
a bearing indicator coupled to the range scanner for indicating a bearing of the range scanner at each scanning location;
a transformation module for using the GPS readings associated with each range image, as well as information about the range scanner's bearing at each vantage point, to automatically transform the range images from local coordinate systems relative to the vantage points to a single coordinate system independent of the vantage points; and
a co-registration module for automatically co-registering the transformed range images into a single virtual model of the object that includes the one or more occluded surfaces.
13. The system of claim 12, further comprising:
a visualization module for converting the co-registered virtual model of the object into a polygon mesh and for applying textures to the polygon mesh to create a visualization of the object that is substantially free of occlusions, the textures being derived from the digital images.
14. The system of claim 12, wherein
the range scanner comprises
a servo for continuously changing an orientation of the range scanner with respect to a fixed location to scan the object; and
a lidar to obtain range measurements of the object along a changing path of the range scanner responsive to the servo.
15. The system of claim 12, wherein the virtual model is to associate the digital images and the corresponding range images within the virtual model.
16. The system of claim 12, further comprising:
a merging module for merging at least two points represented within the co-registered range images that correspond to the same physical location on the object.
17. A method for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the method comprising:
automatically scanning a site from a plurality of different fixed locations to generate a separate set of range data at each scanning location indicating distances from a range scanner to real-world objects within the site, each set of range data comprising a three-dimensional model of the same site from a different perspective, wherein at least one set of range data includes a surface of a real-world object that is occluded in at least one other set of range data;
obtaining digital images of the real-world objects scanned by the range scanner at each location;
acquiring global positioning system (GPS) data for the range scanner at each scanning location using a GPS receiver that interacts with a separate base station to achieve sub-meter accuracy;
obtaining orientation information for the scanner at each scanning location;
automatically transforming the separate sets of range data from individual scanning coordinate systems to a modeling coordinate system using the GPS data with the orientation information for the range scanner at each scanning location; and
automatically co-registering the transformed sets of range data into a single virtual model of the site that includes the one or more occluded surfaces.
18. The method of claim 17, further comprising:
converting the co-registered virtual model of the site into a polygon mesh; and
applying textures to the polygon mesh to create a visualization of the site that is substantially free of occlusions, the textures being derived from the digital images.
19. The method of claim 17, further comprising:
merging at least two points represented within the co-registered virtual model that correspond to the same physical location within the site.
20. The method of claim 17, wherein the modeling coordinate system is a geographic coordinate system.
21. The method of claim 17, wherein the orientation information includes a bearing of the range scanner, the method further comprising:
determining the bearing of the range scanner.
22. The method of claim 17, wherein the GPS data is selected from the group consisting of longitude, latitude, altitude, Universal Transverse Mercator (UTM) coordinates, and Earth-Centered/Earth-Fixed (ECEF) coordinates.
23. The method of claim 17, wherein at least two of the sets of range data indicate a distance from the range scanner to the same physical location within the site.
24. The method of claim 17, further comprising:
associating the digital images of the real-world objects with the corresponding range data.
25. The method of claim 17, wherein scanning comprises:
continuously changing an orientation of the range scanner with respect to a fixed location to scan the site; and
obtaining range measurements to real-world objects along a changing path of the range scanner.
26. A method for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the method comprising:
automatically scanning the site to generate a first set of range data indicating distances from a range scanner at a first location to real-world objects in the site, wherein the first set of range data comprises a three-dimensional model of the site from a first perspective;
obtaining digital images of the real-world objects scanned by the range scanner at the first location;
acquiring a first set of global positioning system (GPS) data for the range scanner at the first location using a GPS receiver that interacts with a base station to achieve sub-meter accuracy;
determining orientation information for the range scanner at the first location;
scanning the same site from a second perspective to generate a second set of range data indicating distances from the range scanner at a second location to real-world objects in the site, wherein the second set of range data comprises a three-dimensional model of the site from a second perspective, wherein the second set of range data includes a surface of a real-world object that is occluded in the first set of range data;
obtaining digital images of the real-world objects scanned by the range scanner at the second location;
acquiring a second set of GPS data for the range scanner at the second location;
determining orientation information for the range scanner at the second location;
automatically transforming the first and second sets of range data from individual local coordinate systems to a single coordinate system independent of the range scanner locations using the first and second sets of GPS data with the orientation information;
automatically co-registering the first and second sets of range data into a single virtual model of the site that includes the one or more occluded surfaces;
converting the co-registered virtual model of the site into a polygon mesh; and
applying textures to the polygon mesh to create a visualization of the site that is substantially free of occlusions, the textures being derived from the digital images.
27. A method for modeling an object including one or more occluded surfaces when viewed from any vantage point, the method comprising:
automatically scanning an object from a plurality of fixed vantage points to generate a plurality of separate range images, each range image comprising a three-dimensional model of the object from a different perspective, wherein at least one range image includes a surface of the object that is occluded in at least one other range image;
obtaining digital images of the object from each vantage point;
obtaining a bearing of the scanner at each vantage point;
acquiring global positioning system (GPS) readings for the range scanner at each vantage point using a GPS receiver that accesses a separate base station to achieve sub-meter accuracy;
transforming the range images from local coordinate systems relative to the vantage points to a single coordinate system independent of the vantage points using the GPS readings associated with each range image, as well as information about the range scanner's bearing at each vantage point; and
automatically co-registering the transformed range images into a single virtual model of the object that includes the one or more occluded surfaces.
28. The method of claim 27, further comprising:
converting the co-registered virtual model of the object into a polygon mesh; and
applying textures to the polygon mesh to create a visualization of the object that is substantially free of occlusions, the textures being derived from the digital images.
29. The method of claim 27, wherein scanning comprises:
continuously changing an orientation of the range scanner with respect to a fixed location to scan the object; and
obtaining range measurements of the object along a changing path of the range scanner responsive to the servo.
30. The method of claim 27, wherein the GPS data is selected from the group consisting of longitude, latitude, altitude, Universal Transverse Mercator (UTM) coordinates, and Earth-Centered/Earth-Fixed (ECEF) coordinates.
31. The method of claim 27, further comprising:
associating the digital images with the corresponding range images within the virtual model.
32. The method of claim 27,
wherein at least two of the range images depict the same physical location within the site.
33. The system of claim 27, wherein the GPS data is selected from the group consisting of longitude, latitude, altitude, Universal Transverse Mercator (UTM) coordinates, and Earth-Centered/Earth-Fixed (ECEF) coordinates.
34. The method of claim 27,
wherein at least two of the range images depict the same physical location on the object.
35. An apparatus for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the apparatus comprising:
scanning means for automatically scanning a site from a plurality of different fixed locations to generate a separate set of range data at each scanning location indicating distances from the scanning means to real-world objects within the site, each set of range data comprising a three-dimensional model of the same site from a different perspective, wherein at least one set of range data includes a surface of a real-world object that is occluded in at least one other set of range data;
camera means coupled to the scanning means for obtaining digital images of the real-world objects scanned by the scanning means at each location;
position detection means coupled to the scanning means for acquiring global positioning system (GPS) data for the scanning means at each scanning location, wherein the position detection means interacts with a separate base station to achieve sub-meter accuracy;
orientation detection means coupled to the scanning means for indicating an orientation of the scanning means at each scanning location;
transformation means for using the GPS data with orientation information for the scanning means at each scanning location to automatically transform the sets of range data from individual scanning coordinate systems based on the scanning locations to a single modeling coordinate system; and
co-registration means for automatically co-registering the transformed sets of range data into a single virtual model of the site that includes the one or more occluded surfaces.
36. A computer program product comprising program code for performing a method for capturing a virtual model of a site including one or more occluded surfaces when viewed from any given perspective, the computer program product comprising:
program code for automatically scanning a site from a plurality of different fixed locations to generate a separate set of range data at each scanning location indicating distances from a range scanner to real-world objects within the site, each set of range data comprising a three-dimensional model of the same site from a different perspective, wherein at least one set of range data includes a surface of a real-world object that is occluded in at least one other set of range data;
program code for obtaining digital images of the real-world objects scanned by the range scanner at each location;
program code for acquiring global positioning system (GPS) data for the range scanner at each scanning location using a GPS receiver that interacts with a separate base station to achieve sub-meter accuracy;
program code for obtaining orientation information for the scanner at each scanning location;
program code for automatically transforming the separate sets of range data from individual scanning coordinate systems to a modeling coordinate system using the GPS data with the orientation information for the range scanner at each scanning location; and
program code for automatically co-registering the transformed sets of range data into a single virtual model of the site that includes the one or more occluded surfaces.
37. The system of claim 1, wherein the GPS receiver achieves sub-centimeter accuracy.
38. The system of claim 1, wherein the orientation indicator comprises a compass capable of digital output.
39. The system of claim 1, wherein the orientation indicator comprises at least two GPS readings to indicate the orientation of the range scanner at each location.
40. The system of claim 1, further comprising a second digital camera for obtaining digital images of the real-world objects scanned by the range scanner at each location, wherein the digital camera and the second digital camera are set to focus in a stereo vision arrangement.
41. The method of claim 17, wherein the GPS receiver achieves sub-centimeter accuracy.
42. The method of claim 17, wherein the orientation information comprises data from a compass capable of digital output.
43. The method of claim 17, wherein the orientation information comprises at least two GPS readings to indicate an orientation of the range scanner at each location.
44. The method of claim 17, further comprising obtaining digital images of the real-world objects scanned by the range scanner at each location using a second digital camera, wherein the digital camera and the second digital camera are set to focus in a stereo vision arrangement.
US11/480,248 2002-01-22 2006-06-30 GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site Expired - Fee Related USRE41175E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/480,248 USRE41175E1 (en) 2002-01-22 2006-06-30 GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US35086002P 2002-01-22 2002-01-22
US10/348,275 US6759979B2 (en) 2002-01-22 2003-01-21 GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site
US11/480,248 USRE41175E1 (en) 2002-01-22 2006-06-30 GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/348,275 Reissue US6759979B2 (en) 2002-01-22 2003-01-21 GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site

Publications (1)

Publication Number Publication Date
USRE41175E1 true USRE41175E1 (en) 2010-03-30

Family

ID=26995629

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/348,275 Ceased US6759979B2 (en) 2002-01-22 2003-01-21 GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site
US11/480,248 Expired - Fee Related USRE41175E1 (en) 2002-01-22 2006-06-30 GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/348,275 Ceased US6759979B2 (en) 2002-01-22 2003-01-21 GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site

Country Status (3)

Country Link
US (2) US6759979B2 (en)
AU (1) AU2003207644A1 (en)
WO (1) WO2003062849A2 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100118116A1 (en) * 2007-06-08 2010-05-13 Wojciech Nowak Tomasz Method of and apparatus for producing a multi-viewpoint panorama
US20110001795A1 (en) * 2008-02-21 2011-01-06 Hyun Duk Uhm Field monitoring system using a mobil terminal
US20110032507A1 (en) * 2004-06-23 2011-02-10 Leica Geosystems Ag Scanner system and method for registering surfaces
US20120150573A1 (en) * 2010-12-13 2012-06-14 Omar Soubra Real-time site monitoring design
US8207964B1 (en) * 2008-02-22 2012-06-26 Meadow William D Methods and apparatus for generating three-dimensional image data models
US20120265494A1 (en) * 2011-04-14 2012-10-18 National Central University Method of Online Building-Model Reconstruction Using Photogrammetric Mapping System
US20130321583A1 (en) * 2012-05-16 2013-12-05 Gregory D. Hager Imaging system and method for use of same to determine metric scale of imaged bodily anatomy
US20140298666A1 (en) * 2013-04-05 2014-10-09 Leica Geosystems Ag Surface determination for objects by means of geodetically precise single point determination and scanning
US8884950B1 (en) 2011-07-29 2014-11-11 Google Inc. Pose data via user interaction
US20150172628A1 (en) * 2011-06-30 2015-06-18 Google Inc. Altering Automatically-Generated Three-Dimensional Models Using Photogrammetry
US9098870B2 (en) 2007-02-06 2015-08-04 Visual Real Estate, Inc. Internet-accessible real estate marketing street view system and method
US9134339B2 (en) 2013-09-24 2015-09-15 Faro Technologies, Inc. Directed registration of three-dimensional scan measurements using a sensor unit
US9528834B2 (en) 2013-11-01 2016-12-27 Intelligent Technologies International, Inc. Mapping techniques using probe vehicles
WO2019204800A1 (en) * 2018-04-20 2019-10-24 WeRide Corp. Method and system for generating high definition map
US10458792B2 (en) 2016-12-15 2019-10-29 Novatel Inc. Remote survey system
US10634791B2 (en) 2016-06-30 2020-04-28 Topcon Corporation Laser scanner system and registration method of point cloud data
US10665035B1 (en) 2017-07-11 2020-05-26 B+T Group Holdings, LLC System and process of using photogrammetry for digital as-built site surveys and asset tracking
US11151782B1 (en) 2018-12-18 2021-10-19 B+T Group Holdings, Inc. System and process of generating digital images of a site having a structure with superimposed intersecting grid lines and annotations
US11215597B2 (en) 2017-04-11 2022-01-04 Agerpoint, Inc. Forestry management tool for assessing risk of catastrophic tree failure due to weather events

Families Citing this family (120)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5037765B2 (en) * 2001-09-07 2012-10-03 株式会社トプコン Operator guidance system
US6759979B2 (en) 2002-01-22 2004-07-06 E-Businesscontrols Corp. GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site
JP2003346185A (en) * 2002-05-24 2003-12-05 Olympus Optical Co Ltd Information display system and personal digital assistant
US20040001074A1 (en) * 2002-05-29 2004-01-01 Hideki Oyaizu Image display apparatus and method, transmitting apparatus and method, image display system, recording medium, and program
US7242460B2 (en) * 2003-04-18 2007-07-10 Sarnoff Corporation Method and apparatus for automatic registration and visualization of occluded targets using ladar data
US8170079B2 (en) * 2003-07-28 2012-05-01 Los Alamos National Security, Llc Code division multiple access signaling for modulated reflector technology
US8294712B2 (en) * 2003-09-19 2012-10-23 The Boeing Company Scalable method for rapidly detecting potential ground vehicle under cover using visualization of total occlusion footprint in point cloud population
JP2005167517A (en) * 2003-12-01 2005-06-23 Olympus Corp Image processor, calibration method thereof, and image processing program
US10721405B2 (en) 2004-03-25 2020-07-21 Clear Imaging Research, Llc Method and apparatus for implementing a digital graduated filter for an imaging apparatus
US9826159B2 (en) 2004-03-25 2017-11-21 Clear Imaging Research, Llc Method and apparatus for implementing a digital graduated filter for an imaging apparatus
WO2005093654A2 (en) 2004-03-25 2005-10-06 Fatih Ozluturk Method and apparatus to correct digital image blur due to motion of subject or imaging device
US7895020B2 (en) * 2004-04-01 2011-02-22 General Dynamics Advanced Information Systems, Inc. System and method for multi-perspective collaborative modeling
US7236235B2 (en) 2004-07-06 2007-06-26 Dimsdale Engineering, Llc System and method for determining range in 3D imaging systems
US7697748B2 (en) * 2004-07-06 2010-04-13 Dimsdale Engineering, Llc Method and apparatus for high resolution 3D imaging as a function of camera position, camera trajectory and range
US7974461B2 (en) * 2005-02-11 2011-07-05 Deltasphere, Inc. Method and apparatus for displaying a calculated geometric entity within one or more 3D rangefinder data sets
US7777761B2 (en) * 2005-02-11 2010-08-17 Deltasphere, Inc. Method and apparatus for specifying and displaying measurements within a 3D rangefinder data set
US7933001B2 (en) * 2005-07-11 2011-04-26 Kabushiki Kaisha Topcon Geographic data collecting system
US7911940B2 (en) 2005-09-30 2011-03-22 Genband Us Llc Adaptive redundancy protection scheme
US7711360B2 (en) * 2005-11-08 2010-05-04 Siemens Aktiengesellschaft Radio frequency planning with consideration of inter-building effects
JP2007218896A (en) * 2006-01-23 2007-08-30 Ricoh Co Ltd Imaging device, position information recording method and program
US7463270B2 (en) * 2006-02-10 2008-12-09 Microsoft Corporation Physical-virtual interpolation
US7583855B2 (en) * 2006-02-23 2009-09-01 Siemens Aktiengesellschaft Signal source data input for radio frequency planning
US20080036758A1 (en) * 2006-03-31 2008-02-14 Intelisum Inc. Systems and methods for determining a global or local position of a point of interest within a scene using a three-dimensional model of the scene
DE202006005643U1 (en) * 2006-03-31 2006-07-06 Faro Technologies Inc., Lake Mary Device for three-dimensional detection of a spatial area
JP4847192B2 (en) * 2006-04-14 2011-12-28 キヤノン株式会社 Image processing system, image processing apparatus, imaging apparatus, and control method thereof
WO2007136004A1 (en) * 2006-05-19 2007-11-29 Nissan Chemical Industries, Ltd. Hyperbranched polymer and method for producing the same
US20080002880A1 (en) * 2006-06-30 2008-01-03 Intelisum, Inc. Systems and methods for fusing over-sampled image data with three-dimensional spatial data
WO2008008210A2 (en) * 2006-07-12 2008-01-17 Apache Technologies, Inc. Handheld laser light detector with height correction, using a gps receiver to provide two-dimensional position data
JP5073256B2 (en) * 2006-09-22 2012-11-14 株式会社トプコン POSITION MEASUREMENT DEVICE, POSITION MEASUREMENT METHOD, AND POSITION MEASUREMENT PROGRAM
US20080075051A1 (en) * 2006-09-27 2008-03-27 Baris Dundar Methods, apparatus and articles for radio frequency planning
DE112008001380T5 (en) * 2007-05-22 2010-04-15 Trimble Navigation Ltd., Sunnyvale Handling raster image 3D objects
JP4926826B2 (en) * 2007-05-25 2012-05-09 キヤノン株式会社 Information processing method and information processing apparatus
US20090021367A1 (en) * 2007-07-19 2009-01-22 Davies Daniel F Apparatus, system, and method for tracking animals
JP5044817B2 (en) * 2007-11-22 2012-10-10 インターナショナル・ビジネス・マシーンズ・コーポレーション Image processing method and apparatus for constructing virtual space
JP5150307B2 (en) * 2008-03-03 2013-02-20 株式会社トプコン Geographic data collection device
JP5150310B2 (en) * 2008-03-04 2013-02-20 株式会社トプコン Geographic data collection device
FR2939263B1 (en) * 2008-12-01 2011-10-14 Mathieu Trusgnach METHOD FOR ACQUIRING DATA AND METHOD FOR CONSTRUCTING MULTIMEDIA VIRTUAL VISIT PRODUCT
JP5688876B2 (en) 2008-12-25 2015-03-25 株式会社トプコン Calibration method for laser scanner measurement system
JP5347144B2 (en) * 2009-02-03 2013-11-20 リコーイメージング株式会社 Camera capable of fixed point shooting
DE102009010465B3 (en) * 2009-02-13 2010-05-27 Faro Technologies, Inc., Lake Mary laser scanner
US9551575B2 (en) 2009-03-25 2017-01-24 Faro Technologies, Inc. Laser scanner having a multi-color light source and real-time color receiver
DE102009015920B4 (en) 2009-03-25 2014-11-20 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US9536348B2 (en) * 2009-06-18 2017-01-03 Honeywell International Inc. System and method for displaying video surveillance fields of view limitations
US20100321500A1 (en) * 2009-06-18 2010-12-23 Honeywell International Inc. System and method for addressing video surveillance fields of view limitations
DE102009035337A1 (en) 2009-07-22 2011-01-27 Faro Technologies, Inc., Lake Mary Method for optically scanning and measuring an object
DE102009035336B3 (en) * 2009-07-22 2010-11-18 Faro Technologies, Inc., Lake Mary Device for optical scanning and measuring of environment, has optical measuring device for collection of ways as ensemble between different centers returning from laser scanner
US9529083B2 (en) 2009-11-20 2016-12-27 Faro Technologies, Inc. Three-dimensional scanner with enhanced spectroscopic energy detector
DE102009055988B3 (en) 2009-11-20 2011-03-17 Faro Technologies, Inc., Lake Mary Device, particularly laser scanner, for optical scanning and measuring surrounding area, has light transmitter that transmits transmission light ray by rotor mirror
DE102009057101A1 (en) 2009-11-20 2011-05-26 Faro Technologies, Inc., Lake Mary Device for optically scanning and measuring an environment
US9210288B2 (en) 2009-11-20 2015-12-08 Faro Technologies, Inc. Three-dimensional scanner with dichroic beam splitters to capture a variety of signals
DE102009055989B4 (en) 2009-11-20 2017-02-16 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US9113023B2 (en) 2009-11-20 2015-08-18 Faro Technologies, Inc. Three-dimensional scanner with spectroscopic energy detector
US9879976B2 (en) 2010-01-20 2018-01-30 Faro Technologies, Inc. Articulated arm coordinate measurement machine that uses a 2D camera to determine 3D coordinates of smoothly continuous edge features
US9163922B2 (en) 2010-01-20 2015-10-20 Faro Technologies, Inc. Coordinate measurement machine with distance meter and camera to determine dimensions within camera images
US9607239B2 (en) 2010-01-20 2017-03-28 Faro Technologies, Inc. Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations
US9628775B2 (en) 2010-01-20 2017-04-18 Faro Technologies, Inc. Articulated arm coordinate measurement machine having a 2D camera and method of obtaining 3D representations
JP5192614B1 (en) 2010-01-20 2013-05-08 ファロ テクノロジーズ インコーポレーテッド Coordinate measuring device
DE102010020925B4 (en) 2010-05-10 2014-02-27 Faro Technologies, Inc. Method for optically scanning and measuring an environment
JP5912234B2 (en) 2010-07-16 2016-04-27 株式会社トプコン measuring device
DE102010032726B3 (en) 2010-07-26 2011-11-24 Faro Technologies, Inc. Device for optically scanning and measuring an environment
DE102010032725B4 (en) 2010-07-26 2012-04-26 Faro Technologies, Inc. Device for optically scanning and measuring an environment
DE102010032723B3 (en) 2010-07-26 2011-11-24 Faro Technologies, Inc. Device for optically scanning and measuring an environment
DE102010033561B3 (en) * 2010-07-29 2011-12-15 Faro Technologies, Inc. Device for optically scanning and measuring an environment
JP5698480B2 (en) 2010-09-02 2015-04-08 株式会社トプコン Measuring method and measuring device
JP5653715B2 (en) 2010-10-27 2015-01-14 株式会社トプコン Laser surveyor
US9168654B2 (en) 2010-11-16 2015-10-27 Faro Technologies, Inc. Coordinate measuring machines with dual layer arm
US9952316B2 (en) 2010-12-13 2018-04-24 Ikegps Group Limited Mobile measurement devices, instruments and methods
US9182229B2 (en) * 2010-12-23 2015-11-10 Trimble Navigation Limited Enhanced position measurement systems and methods
US9879993B2 (en) 2010-12-23 2018-01-30 Trimble Inc. Enhanced bundle adjustment techniques
US10168153B2 (en) 2010-12-23 2019-01-01 Trimble Inc. Enhanced position measurement systems and methods
TWI419015B (en) * 2010-12-31 2013-12-11 Hsin Feng Peng System for transforming and displaying coordinates dates
US8624929B2 (en) * 2011-01-30 2014-01-07 Hsin-Fung Peng System for transforming and displaying coordinate datum
US8675013B1 (en) * 2011-06-16 2014-03-18 Google Inc. Rendering spherical space primitives in a cartesian coordinate system
DE102012100609A1 (en) 2012-01-25 2013-07-25 Faro Technologies, Inc. Device for optically scanning and measuring an environment
EP2639597A1 (en) * 2012-03-14 2013-09-18 Technische Universität Dresden Method and assembly for locating and rescuing people
US9972120B2 (en) * 2012-03-22 2018-05-15 University Of Notre Dame Du Lac Systems and methods for geometrically mapping two-dimensional images to three-dimensional surfaces
US9176215B2 (en) * 2012-03-22 2015-11-03 Intermec Ip Corp. Synthetic aperture RFID handheld with tag location capability
US9671566B2 (en) 2012-06-11 2017-06-06 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
US8997362B2 (en) 2012-07-17 2015-04-07 Faro Technologies, Inc. Portable articulated arm coordinate measuring machine with optical communications bus
DE102012107544B3 (en) 2012-08-17 2013-05-23 Faro Technologies, Inc. Optical scanning device i.e. laser scanner, for evaluating environment, has planetary gears driven by motor over vertical motor shaft and rotating measuring head relative to foot, where motor shaft is arranged coaxial to vertical axle
WO2014039623A1 (en) 2012-09-06 2014-03-13 Faro Technologies, Inc. Laser scanner with additional sensing device
EP4221187A3 (en) 2012-09-10 2023-08-09 Aemass, Inc. Multi-dimensional data capture of an environment using plural devices
DE112013004489T5 (en) 2012-09-14 2015-05-28 Faro Technologies, Inc. Laser scanner with dynamic setting of the angular scanning speed
US10285141B1 (en) 2012-09-19 2019-05-07 Safeco Insurance Company Of America Data synchronization across multiple sensors
US10067231B2 (en) 2012-10-05 2018-09-04 Faro Technologies, Inc. Registration calculation of three-dimensional scanner data performed between scans based on measurements by two-dimensional scanner
US9513107B2 (en) 2012-10-05 2016-12-06 Faro Technologies, Inc. Registration calculation between three-dimensional (3D) scans based on two-dimensional (2D) scan data from a 3D scanner
DE102012109481A1 (en) 2012-10-05 2014-04-10 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US9235763B2 (en) 2012-11-26 2016-01-12 Trimble Navigation Limited Integrated aerial photogrammetry surveys
EP2967322A4 (en) 2013-03-11 2017-02-08 Magic Leap, Inc. System and method for augmented and virtual reality
CN108427504B (en) 2013-03-15 2021-06-11 奇跃公司 Display system and method
US9360888B2 (en) * 2013-05-09 2016-06-07 Stephen Howard System and method for motion detection and interpretation
US10891003B2 (en) 2013-05-09 2021-01-12 Omni Consumer Products, Llc System, method, and apparatus for an interactive container
US9465488B2 (en) * 2013-05-09 2016-10-11 Stephen Howard System and method for motion detection and interpretation
US10084871B2 (en) * 2013-05-23 2018-09-25 Allied Telesis Holdings Kabushiki Kaisha Graphical user interface and video frames for a sensor based detection system
US9430822B2 (en) 2013-06-14 2016-08-30 Microsoft Technology Licensing, Llc Mobile imaging platform calibration
US9247239B2 (en) 2013-06-20 2016-01-26 Trimble Navigation Limited Use of overlap areas to optimize bundle adjustment
US9714830B2 (en) 2013-07-26 2017-07-25 Chervon (Hk) Limited Measuring system and operating method thereof
US9917873B2 (en) * 2013-10-15 2018-03-13 Cyberlink Corp. Network-based playback of content in cloud storage based on device playback capability
US9824397B1 (en) 2013-10-23 2017-11-21 Allstate Insurance Company Creating a scene for property claims adjustment
US10269074B1 (en) 2013-10-23 2019-04-23 Allstate Insurance Company Communication schemes for property claims adjustments
US9613460B2 (en) * 2014-05-09 2017-04-04 Lenovo (Singapore) Pte. Ltd. Augmenting a digital image
CA2971280C (en) * 2014-12-30 2021-11-30 Omni Consumer Products, Llc System and method for interactive projection
DK3056923T3 (en) 2015-02-13 2021-07-12 Zoller & Froehlich Gmbh Scanning device and method for scanning an object
KR101835434B1 (en) * 2015-07-08 2018-03-09 고려대학교 산학협력단 Method and Apparatus for generating a protection image, Method for mapping between image pixel and depth value
DE102015122844A1 (en) 2015-12-27 2017-06-29 Faro Technologies, Inc. 3D measuring device with battery pack
EP3633985B1 (en) * 2017-05-24 2023-02-15 Furuno Electric Co., Ltd. Video generation device
US10346994B2 (en) * 2017-07-29 2019-07-09 Verizon Patent And Licensing Inc. Systems and methods for inward-looking depth scanning of a real-world scene
EP3441789B1 (en) * 2017-08-07 2020-10-21 Vestel Elektronik Sanayi ve Ticaret A.S. System and method for processing data and a user device for pre-processing data
US10586349B2 (en) 2017-08-24 2020-03-10 Trimble Inc. Excavator bucket positioning via mobile device
US10726299B2 (en) * 2017-10-12 2020-07-28 Sony Corporation Sorted geometry with color clustering (SGCC) for point cloud compression
CN108279421B (en) * 2018-01-28 2021-09-28 深圳新亮智能技术有限公司 Time-of-flight camera with high resolution color images
EP4276520A3 (en) 2018-08-31 2024-01-24 Magic Leap, Inc. Spatially-resolved dynamic dimming for augmented reality device
CN109212540A (en) * 2018-09-12 2019-01-15 百度在线网络技术(北京)有限公司 Distance measuring method, device and readable storage medium storing program for executing based on laser radar system
CN110176018B (en) * 2019-04-18 2021-02-26 中国测绘科学研究院 Pattern spot merging method for keeping structural ground feature contour characteristics
EP4014468A4 (en) 2019-08-12 2022-10-19 Magic Leap, Inc. Systems and methods for virtual and augmented reality
US10943360B1 (en) 2019-10-24 2021-03-09 Trimble Inc. Photogrammetric machine measure up
US11584315B2 (en) * 2020-01-31 2023-02-21 Denso Corporation Sensor system for vehicle
US11216005B1 (en) * 2020-10-06 2022-01-04 Accenture Global Solutions Limited Generating a point cloud capture plan
KR102594258B1 (en) * 2021-04-26 2023-10-26 한국전자통신연구원 Method and apparatus for virtually moving real object in augmetnted reality
US20220365217A1 (en) * 2021-05-12 2022-11-17 Faro Technologies, Inc. Generating environmental map by aligning captured scans

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5337149A (en) 1992-11-12 1994-08-09 Kozah Ghassan F Computerized three dimensional data acquisition apparatus and method
WO1997040342A2 (en) 1996-04-24 1997-10-30 Cyra Technologies, Inc. Integrated system for imaging and modeling three-dimensional objects
US6166744A (en) 1997-11-26 2000-12-26 Pathfinder Systems, Inc. System for combining virtual images with real-world scenes
WO2001004576A1 (en) 1999-07-14 2001-01-18 Cyra Technologies, Inc. Method for operating a laser scanner
US6249600B1 (en) 1997-11-07 2001-06-19 The Trustees Of Columbia University In The City Of New York System and method for generation of a three-dimensional solid model
US20010010546A1 (en) 1997-09-26 2001-08-02 Shenchang Eric Chen Virtual reality camera
US6292215B1 (en) 1995-01-31 2001-09-18 Transcenic L.L.C. Apparatus for referencing and sorting images in a three-dimensional system
US6307556B1 (en) 1993-09-10 2001-10-23 Geovector Corp. Augmented reality vision systems which derive image information from other vision system
WO2001088565A2 (en) 2000-05-18 2001-11-22 Cyra Technologies, Inc. Apparatus and method for identifying the points that lie on a surface of interest
WO2001088741A2 (en) 2000-05-18 2001-11-22 Cyra Technologies, Inc. System and method for concurrently modeling any element of a model
WO2001088849A2 (en) 2000-05-18 2001-11-22 Cyra Technologies, Inc. Apparatus and method for forming 2d views of a structure from 3d point data
WO2001088566A2 (en) 2000-05-18 2001-11-22 Cyra Technologies, Inc. System and method for acquiring tie-point location information on a structure
WO2002016865A2 (en) 2000-08-25 2002-02-28 3Shape Aps Object and method for calibration of a three-dimensional light scanner
US20020060784A1 (en) 2000-07-19 2002-05-23 Utah State University 3D multispectral lidar
US6420698B1 (en) 1997-04-24 2002-07-16 Cyra Technologies, Inc. Integrated system for quickly and accurately imaging and modeling three-dimensional objects
US6526352B1 (en) 2001-07-19 2003-02-25 Intelligent Technologies International, Inc. Method and arrangement for mapping a road
US20030090415A1 (en) 2001-10-30 2003-05-15 Mitsui & Co., Ltd. GPS positioning system
US20040105573A1 (en) 2002-10-15 2004-06-03 Ulrich Neumann Augmented virtual environments
US6759979B2 (en) 2002-01-22 2004-07-06 E-Businesscontrols Corp. GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site
US20050057745A1 (en) 2003-09-17 2005-03-17 Bontje Douglas A. Measurement methods and apparatus

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5166789A (en) * 1989-08-25 1992-11-24 Space Island Products & Services, Inc. Geographical surveying using cameras in combination with flight computers to obtain images with overlaid geographical coordinates
US5124915A (en) * 1990-05-29 1992-06-23 Arthur Krenzel Computer-aided data collection system for assisting in analyzing critical situations
US5894323A (en) * 1996-03-22 1999-04-13 Tasc, Inc, Airborne imaging system using global positioning system (GPS) and inertial measurement unit (IMU) data
US6009359A (en) * 1996-09-18 1999-12-28 National Research Council Of Canada Mobile system for indoor 3-D mapping and creating virtual environments
US5940172A (en) * 1998-06-03 1999-08-17 Measurement Devices Limited Surveying apparatus
US6064335A (en) * 1997-07-21 2000-05-16 Trimble Navigation Limited GPS based augmented reality collision avoidance system
US6215498B1 (en) * 1998-09-10 2001-04-10 Lionhearth Technologies, Inc. Virtual command post

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5337149A (en) 1992-11-12 1994-08-09 Kozah Ghassan F Computerized three dimensional data acquisition apparatus and method
US6307556B1 (en) 1993-09-10 2001-10-23 Geovector Corp. Augmented reality vision systems which derive image information from other vision system
US6292215B1 (en) 1995-01-31 2001-09-18 Transcenic L.L.C. Apparatus for referencing and sorting images in a three-dimensional system
US6330523B1 (en) 1996-04-24 2001-12-11 Cyra Technologies, Inc. Integrated system for quickly and accurately imaging and modeling three-dimensional objects
WO1997040342A2 (en) 1996-04-24 1997-10-30 Cyra Technologies, Inc. Integrated system for imaging and modeling three-dimensional objects
US5988862A (en) 1996-04-24 1999-11-23 Cyra Technologies, Inc. Integrated system for quickly and accurately imaging and modeling three dimensional objects
US6473079B1 (en) 1996-04-24 2002-10-29 Cyra Technologies, Inc. Integrated system for quickly and accurately imaging and modeling three-dimensional objects
US6246468B1 (en) 1996-04-24 2001-06-12 Cyra Technologies Integrated system for quickly and accurately imaging and modeling three-dimensional objects
US6420698B1 (en) 1997-04-24 2002-07-16 Cyra Technologies, Inc. Integrated system for quickly and accurately imaging and modeling three-dimensional objects
US20010010546A1 (en) 1997-09-26 2001-08-02 Shenchang Eric Chen Virtual reality camera
US6249600B1 (en) 1997-11-07 2001-06-19 The Trustees Of Columbia University In The City Of New York System and method for generation of a three-dimensional solid model
US6166744A (en) 1997-11-26 2000-12-26 Pathfinder Systems, Inc. System for combining virtual images with real-world scenes
WO2001004576A1 (en) 1999-07-14 2001-01-18 Cyra Technologies, Inc. Method for operating a laser scanner
WO2001088565A2 (en) 2000-05-18 2001-11-22 Cyra Technologies, Inc. Apparatus and method for identifying the points that lie on a surface of interest
WO2001088741A2 (en) 2000-05-18 2001-11-22 Cyra Technologies, Inc. System and method for concurrently modeling any element of a model
WO2001088566A2 (en) 2000-05-18 2001-11-22 Cyra Technologies, Inc. System and method for acquiring tie-point location information on a structure
WO2001088849A2 (en) 2000-05-18 2001-11-22 Cyra Technologies, Inc. Apparatus and method for forming 2d views of a structure from 3d point data
US20020060784A1 (en) 2000-07-19 2002-05-23 Utah State University 3D multispectral lidar
US6664529B2 (en) * 2000-07-19 2003-12-16 Utah State University 3D multispectral lidar
WO2002016865A2 (en) 2000-08-25 2002-02-28 3Shape Aps Object and method for calibration of a three-dimensional light scanner
US6526352B1 (en) 2001-07-19 2003-02-25 Intelligent Technologies International, Inc. Method and arrangement for mapping a road
US20030090415A1 (en) 2001-10-30 2003-05-15 Mitsui & Co., Ltd. GPS positioning system
US6759979B2 (en) 2002-01-22 2004-07-06 E-Businesscontrols Corp. GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site
US20040105573A1 (en) 2002-10-15 2004-06-03 Ulrich Neumann Augmented virtual environments
US20050057745A1 (en) 2003-09-17 2005-03-17 Bontje Douglas A. Measurement methods and apparatus

Non-Patent Citations (122)

* Cited by examiner, † Cited by third party
Title
""Automatic" Multimodal Medical Image Fusion," http://csdl2.computer.org/persagen/DLAbsToc.jsp, Jul. 7, 2006, pp. 1.
"2-D and 3-D Image Registration: A Tutorial," http://www.cs.wright.edu/~agoshtas/CVPR04_Registration_Tutorial.html, Jul. 7, 2006, pp. 1-3.
"3D Reality Modelling: Photo-Realistic 3D Models of Real World Scenes," Vitor Sequeira, Joäo G.M. Gonçalves, Proceedings of the First International Symposium on 3D Data Processing Visualization and Transmission, 2002, pp. 1-8.
"A Contour-Based Approach to Multisensor Image Registration," Hui Li, B. S. Manjunath, Sanjit K. Mitra, IEEE Transactions On Image Processing, vol. 4, No. 3, Mar. 1995, pp. 320-334.
"A flexible mathematical model for matching of 3D surfaces and attributes," Devrim Akca, Armin Gruen, Electronic Imaging, SPIE vol. 5665, 2005, pp. 184-195.
"A Multi-Resolution ICP with Heuristic Closest Point Search for Fast and Robust 3D Registration of Range Images," Timothée Jost and Heinz Hügli, Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-7.
"A Point-and-Shoot Color 3D Camera," Askold V. Strat, Manuel M. Oliveira, Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.
"A simple MATLAB interface to FireWire cameras," F. Wörnle, May 2006, pp. 1-25.
"A Survey of Medical Image Registration," J. B. Antoine Maintz and Max A. Viergever, Image Sciences Institute, Utrecht University Hospital, Utrecht, the Netherlands, Oct. 16, 1997, pp. 1-37.
"Adaptive Enhancement of 3D Scenes using Hierarchical Registration of Texture-Mapped 3D models," Srikumar Ramalingam and Suresh K. Lodha, Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.
"Advanced Nonrigid Registration Algorithms for Image Fusion," Simon K. Warfield et al., Brain Mapping: The Methods, Second Edition, 2002, pp. 661-690.
"Alignment by Maximization of Mutual Information," Paul A. Viola, Massachusetts Institute of Technology, 1995, pp. 1-156.
"An Integrated Multi-Sensory System for Photo-Realistic 3D Scene Reconstruction," Kia Ng et al., School of Computer Studies, University of Leeds, European Commission-Joint Research Centre, pp. 1-9.
"Automated reconstruction of 3D models from real environments," V. Sequeira, K. Ng, E. Wolfart, J.G.M. Gonçalves, D. Hogg, ISPRS Journal of Photogrammetry & Remote Sensing 54, 1999, pp. 1-22.
"Automated Registration and Evaluation of Laser Scanner Point Clouds and Images, Automatic Point Cloud Registration Using Template Shaped Targets," http://www.photogrammetry.ethz.ch/research/pointcloud/withtargets.html, Aug. 25, 2004.
"Automated Registration, 3DD Optix 400 Series,"3D Digital Corp., Apr. 2004.
"Automatic Registration of Range Images Based on Correspondence of Complete Plane Patches," Wenfeng He, Wei Ma, Hongbin Zha, Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-6.
"Combining texture and shape for automatic crude patch registration," Joris Vanden Wyngaerd et al., Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.
"Consistent Linear-Elastic Transformations for Image Matching," Gary E. Christensen, A. Kuba et al. (Eds.): IPIM'99, LNCS 1613, 1999, pp. 224-237.
"Edge and Line Detection Image Fusion Systems Research," http://www.imgfsr.com/ifsr_is_ed.html, Jul. 7, 2006, pp. 1-8.
"Effective Nearest Neighbor Search for Aligning and Merging Range Images," Ryusuke Sagawa et al., Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling. 2003, pp. 1-8.
"Enhanced, Robust Genetic Algorithms for Multiview Range Image Registration," Luciano Silva et al., Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.
"Fast Normalized Cross-Correlation," http://www.idiom.com/~zilla/Work/nvisionInterface/nip.htm, Jul. 7, 2006, pp. 1-11.
"Feature point detection in multiframe images," Barbara Zitova et al., Czech Pattern Recognition Workshop 2000, Feb. 2, 2000-Feb. 4, 2000, pp. 1-6.
"Fully automatic registration of multiple 3D data sets," Daniel F. Huber, Martial Hebert, Image and Vision Computing 21, 2003, pp. 637-650.
"Fusion and interpretation of medical images," http://www.research.ibm.com/hc/VISUALIZE/visualize.html, Jul. 7, 2006, pp. 1-3.
"Future standard," http://www.medicalimagingmag.com/issues/articles/2000-08_01.asp, Jul. 7, 2006, pp. 1.
"Gaussian Random Fields on Sub-Manifolds for Characterizing Brain Surfaces," http://portal.acm.org/citation.cfm?id=645595.660540&coll=GUIDE&d..., Jul. 6, 2006, pp. 1-3.
"HLODs: Hierarchical Levels of Detail, hierarchical Simplification for Faster Display of Massive Geometric environments," Department of Computer Science, University of North Carolina at Chapel Hill, Feb. 2004.
"Image Fusion Systems Research," http://www.imgfsr.com/, Jul. 7, 2006, pp. 1-2.
"Image Fusion-a new era in diagnosis," Dr Michael Kitchener, Imaging Update, Issue 8, Apr. 2002, pp. 1-4.
"Image Guidance Laboratories," Complete Laboratory Publications, http://www-igl.stanford.edu/papers.php?pg=main..., Jul. 7, 2006.
"Image Image-Guided Interventions Workshop Guided Interventions Workshop," John Haller, National Institute of Biomedical Imaging and Bioengineering, May 13, 2004-May 14, 2004, pp. 2-11.
"Image Registration and Mosaicking," http://prettyview.com/mtch/mtch.shtml, Jul. 7, 2006, pp. 1.
"Image registration methods: a survey," Barbara Zitova et al., Department of Image Processing, Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Jun. 26, 2003, pp. 977-1000.
"Image-Based Registration of 3D-Range Data Using Feature Surface Elements," Gerhard Heinrich Bendels et al., Institute for Computer Science II-Computer Graphics, University of Bonn, Germany, 2004, pp. 1-10.
"Infrastructure for Image Guided Surgery," John W. Haller, Timothy C. Ryken, Thomas A. Gallagher and Michael W. Vannier, National Institute of Neurological Disease and Stroke, Jun. 28, 2001, pp. 1-6.
"Inverse Problems: Image Restoration and Parameter Identification," http://www.mit.jyu.fi/majkir/tutkimus/index.html, Tommi Kärkkäinen and Kirsi Majava, University of Jyväskylä Dept. of Mathematical Information Technology, Aug. 2, 2004, pp. 1-3.
"Investigations of Image Fusion," http://www.ece.lehigh.edu/SPCRL/IF/image_fusion.htm, Jul. 7, 2006, pp. 1-13.
"iPhotoMeasure-The Contractors Photo Measuring Tool," http://www.iphotomeasure.com/faq.asp, Jan. 25, 2007.
"Keith Price Bibliography Fusion of Medical Data," http://iris.usc.edu/Vision-Notes/bibliography/match-p1504.html, Jul. 7, 2006.
"Least Squares 3D Surfaces Matching," Armin Gruen, Devrim Akca, Geospatial Goes Global: From Your Neighborhood to the Whole Planet, ASPRS 2005 Annual Conference, Mar. 7, 2005-Mar. 11, 2005, pp. 1-13.
"Lecture Notes In Computer Science," http://portal.acm.org/toc.cfm?id=645595&type=proceeding&coll=GUI..., Jul. 6, 2006, pp. 1-6.
"Leica Cyclone 5.6 Register," Leica Geosystems, Heerbrugg, Switzerland, 2006.
"Medical image fusion by wavelet transform modulus maxima," Guihong Qu, Dali Zhang and Pingfan Yan, Department of Automation, Tsinghua University, Beijing 100084,China, May 10, 2001, pp. 184-190.
"Medical Image Fusion," http://www.uihealthcare.com/news/currents/vol1issue1/figimfus.html, Jul. 6, 2006, pp. 1-4.
"Medical Image Processing," http://www.sce.carleton.ca/faculty/adler/elg7173/elg7173.html, Jul. 7, 2006, pp. 1-5.
"Multi-modality image registration using mutual information based on gradient vector flow," Yujun Guo, May 1, 2006, pp. 1-31.
"Multi-Sensor Image Fusion Using the Wavelet Transform," Hui Li, B.S. Manjunath, Sanjit K. Mitra, IEEE, 1994, pp. 51-55.
"Non-parametric 3D Surface Completion," Toby P. Breckon, Robert B. Fisher, Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.
"Open Scene Graph Home Page," http://www.openscenegraph.org/osgwiki/pmwiki.php/Home/HomePage, Jan. 25, 2007.
"Project: Fully Automated Registration and Composite Generation of Multisensor and Multidate Satellite Image Data," http://vision.ece.ucsb.edu/registration/satellite/, Jul. 7, 2006, pp. 1-2.
"Projective Surface Matching of Colored 3D Scans," Kari Pulli et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.
"Registration and Integration of Textured 3-D Data," Andrew Johnson and Sing Bing Kang, Digital Equipment Corporation Cambridge Research Lab, Sep. 1996, pp. 1-48.
"Reliability of Functional MRI for Motor and Language Cortex Activation," http://dolphin.radiology.uiowa,edu/ge/BME/public_html/projects/project, Jul. 6, 2006, pp. 1.
"Robust Detection of Significant Points in Multiframe Images," http://staff.utia.cas.cz/zitova/corners.htm, Jul. 7, 2006, pp. 1-5.
"Selected Papers on Image Registration, Image Fusion Systems Research," http://www.imgfsr.com/ifsr_irb.html, Jul. 7, 2006.
"The Registration of Three Dimensional Images from one or more Imaging Modalities," http://www.biomed.abdn.ac.uk/Abstracts/A00331/, Jul. 7, 2006, pp. 1-11.
"Workshop on symmetries, inverse problems and image processing," http://www.indmath.uni-linz.ac.at/people/bila/Workshop.html, Jan. 12, 2005, pp. 1-4.
"2-D and 3-D Image Registration: A Tutorial," http://www.cs.wright.edu/˜agoshtas/CVPR04_Registration_Tutorial.html, Jul. 7, 2006, pp. 1-3.
"3D Modeling of Outdoor Environments by Integrating Omnidirectional Range and Color Images," Toshihiro ASAI et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.
"Construction of Large-Scale Virtual Environment by Fusing Range Data, Texture Images, and Airborne Altimetry Data," Conny Riani Gunadi et al., Proceedings of the First International Symposium on 3-D Data Processing Visualization and Transmission, 2002 pp. 1.
"Construction of Large-Scale Virtual Environment by Fusing Range Data, Texture Images, and Airborne Altimetry Data," Conny Riani Gunadi et al., Proceedings of the First International Symposium on 3-D Data Processing Visualization and Transmission, 2002, pp. 1-4.
"Correction of color information of a 3D model using a range intensity image," Kazunori Umeda et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.
"Evaluating Collinearity Constraint for Automatic Range Image Registration," Yonghuai Liu et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.
"Fast Alignment of 3D Geometrical Models and 2D Color Images using 2D Distance Maps," Yumi Iwashita et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.
"Fast Normalized Cross-Correlation," http://www.idiom.com/˜zilla/Work/nvisionInterface/nip.htm, Jul. 7, 2006, pp. 1-11.
"Image-Based Object Editing," Holly Rushmeier et al., Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.
"Registration and Fusion of Intensity and Range Data for 3D Modelling of Real World Scenes," Paulo Dias et al., Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, 2003, pp. 1-8.
"Semi-automatic range to range registration: a feature-based method," Chen Chao et al., Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, pp. 1-8.
A Volumetric Method for Building Complex Models from Range Images by Brian Curless et al.; Proc. SIGGRAPH '96, Aug. 1996; pp. 1-10. *
Allen, P. et al, "Avenue: automated site modeling in urban environments," 3-D Digital Imaging and Modeling, 2001. Proceedings. Third International Conference on, 2001, pp. 357-364. *
Ameesh Makadia, et al., "Fully Automatic Registration of 3D Point Clouds," Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006.
Andrew Johnson, et al. "Registration and Integration of Textured 3-D Data," Bibliography, 1996.
Armin Gruen, et al., "Least Squares 3D Surface and Curve Matching," Elsevier, ISPRS Journal of Photogrammetry & Remote Sensing 59, Feb. 16, 2005.
Ayman F. Habib, et al., "Automatic Surface Matching for the Registration of LIDAR Data and MR Imagery," ETRI Journal, vol. 28, No. 2, Apr. 2006.
B. Vangelder, et al., "Modern Technologies for Design Data Collection. " Civil Engineering, Joint Transportation Research Program, Purdue Libraries, 2005.
Beraldin, J-A. et al, "Portable Digital 3-d Imaging System For Remote Sites"-Circuits and Systems, 1998. ISCAS '98. Proceedings of the 1998 IEEE International Symposium on, pp. V488-V493. *
Beraldin, J-A. et al, "Object Model Creation From Multiple Range Images: Acquisition, Calibration, Model Building and Verification," 3-D Digital Imaging and Modeling, 1997. Proceedings., International Conference on Recent Advances, pp. 326-333. *
Besl, Paul, "Geometric modeling and computer vision"-Proceedings of the IEEE, vol. 76, No. 8, Aug. 1988, pp. 936-958. *
Christoph Dold, "Extended Gaussian Images for the Registration of Terrestrial Scan Data," Institute of Cartography and Geoinformatics, Workshop "Laser Scanning 2005," Sep. 1214, 2005.
Christoph Dold, et al., "Automatic Matching of Terrestrial Scan Data as a Basis for the Generation of Detailed 3D City Models," International Archives of Phtotogrammetry Remote Sensing and Spatial Information Sciences, vol. 35, Part 3, p. 1091-1096, 2004.
D. Akca, "Registration of Point Clouds Using Range and Intensity Information," International Workshop on Recording, Modeling and Visualization of Cultural Heritage, 2005.
David T. Gering, et al., "An Integrated Visualization System for Surgical Planning and Guidance using Image Fusion and Interventional Imaging," MIT AI Laboratory, 1999.
Dinesh Manadhar, "Extraction of linear features from vehicle-borne laser data," Remote Sensing, Singapore, vol. 2, p. 113-1118, Nov. 5-9, 2001.
Faysal Boughorbal, et al., "Registration and Integration of Multi-Sensor Data for Photo-realistic Scene Reconstruction," Applied Imagery Pattern Recognition, Sponsored by SPIE, Washington, DC, Oct. 13-15, 1999.
Fiorella Sgallari, "Numerical Solution of Inverse Problems in Image Processing," University of Bologna, Presented at Association for Computing Machinery Conference, Jan. 13, 2006.
Geoff Jacobs, "Field Productivity Factors in Laser Scanning, Part 1," Professional Surveyor Magazine, Jan. 2007.
Google Search, "Medical Image Fusion," http://www.google.com/search?hl=cn&q=medical+image+fusion&btnG..., Jul. 7, 2006.
Gueorguiev, A. et al, "Design, architecture and control of a mobile site-modeling robot"-Robotics and Automation, 2000. Proceedings. ICRA '00. IEEE International Conference on, pp. 3266-3271. *
Hans J. Johnson, et al., "Consistent Landmark and Intensity-based Image Registration," IEEE Transactions on Medical Imaging, p. 126, 2002.
Helmut Pottmann, et al., "Registration without ICP," Geometric Modeling and Industrial Geometry Group, Vienna University of Technology, Mar. 5, 2004.
Hendrik P. A. Lensch, et al., "Automated Texture Registration and Stitching for Real World Models," Max-Planck-Institute for Computer Science, Saarbrücken, Germany, 2000.
Isi-Gi, Dec. 21, 2006.
Kamgar-Parsi et al, "Registration Algorithms For Geophysical Maps"-Oceans '97, MTS/IEEE Conference Proceedings, pp. 974-980. *
Kia Ng, et al., "An Integrated Multi-Sensory System for photo-Realistic 3D Scene Reconstruction," ISPRS Comm, p. 356-363, 1998.
Klein, Konrad et al, "View Planning for the 3D Modelling of Real World Scenes," Proc. of the 2000 IEEE/RSJ International Co on Intelligent Robots and Systems, pp. 943-948. *
Kropp, A. et al, OMNIVIS '00: "Acquiring and Rendering High-Resolution Spherical Mosaics," pp. 1-7. *
Kwang-Ho Bae, et al., "Automated registration of Unorganised Point Clouds from Terrestrial Laser Scanners," Department of Spatial Sciences, Curtin University of Technology, Perth, Australia, 2004.
Li, Rongxing, "Mobile Mapping-An Emerging Technology For Spatial Data Acquisition," Dept. of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, 2000, pp. 1-23. *
Matthew P. Tait, "Point Cloud registration: Current State of the Science," Schulich School of Engineering, University of Calgary, Mar. 27, 2006.
Mehdi Bouroumand, et al., "The Fusion of Laser Scanning and Close Range Photogrammetry in Bam Laser-Photogrammetric Mapping of Bam Citadel (Arg-E-Bam)/Iran," KNT University in Tehran, 2004.
Modeling and Rendering of Real Environments by Wagner T. Correa et al.; RITA, vol. IX, Número 1, Aug. 2002; pp. 1-32. *
Naser El-Sheimy, et al., "Digital Terrain Modeling, Acquisition, Manipulation, and Applications," Artech House, Inc., p. 117-119, 184-186, 2005.
Natasha Gelfand, et al., "Robust Global Registration," Eurographics Symposium on Geometry Processing, 2005.
Nathaniel Williams, et al., "Automatic Image Alignment for 3D Environment Modeling," Bibliography, Presented at the Siggraph conference, Aug. 11, 2004.
Nathaniel Williams, et al., "Automatic Image Alignment for 3D Environment Modeling," Department of Computer Science, University of North Carolina at Chapel Hill, 2004.
Nhat Xuan Nguyen, "Numerical Algorithms for Image Supreresolution," http://citeseer.ist.psu.edu/nguyen00numerical.html, 2000.
Paul Rademacher, "Ray Tracing: Graphics for the Masses," ACM, New York, vol. 3, Issue 4, p. 3-7, 1997.
Pedro F. Felzenszwalb, et al., "Pictorial Structures for Object Recognition," Artificial Intelligence Lab, Massachusetts Institute of Technology, Computer Science Department, Cornell University, 2003.
Peter K. Allen, et al., "3D Modeling of Historic Sites Using Range and Image Data," Dept. of Computer Science, Columbia University, Dept. of Computer Science, Hunger College, May 16, 2008.
Richard A. Robb, et al., "Adaptive Piece-wise. Registration for Automated Fusion of Point Cloud Coordinates and Anatomic Volume Images," http://www.mayoclinictechnology.com/tc/software/MMV-04-247-565.html, 2004.
S.T. Dijkman, et al., "Semi Automatic Registration of Laser Scanner Data," International Archives of Photogrammetry Remote Sensing and Spatial Information Sciences, Natural Resources, Canada, vol. 34, Part 5, . 12-17, 2002.
Sato, Yukio et al, "Three-Dimensional Shape Reconstruction by Active Rangefinder," Proc. CVPR., IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, Jun. 1993, pp. 142-147. *
Scanalyze: a system for aligning and merging range data; http://graphics.stanford.edu/software/scanalyze; dated Dec. 9, 2002; pp. 1-7. *
Soucy, M. et al, "A general surface approach to the integration of a set of range views"-Pattern Analysis and Machine Intelligence, vol. 17, No. 4,, IEEE Transactions on, Apr. 1995, pp. 34-358. *
Steven Alexander Sablerolle, "Automatic Registration of Laser Scanning Data and Colour Images," TU Delft, Faculty of Civil Engineering, Final Presentation Msc. Geomatics, Oct. 31, 2006.
Steven Alexander Sablerolle, "Automatic Registration of Laser Scanning Data and Colour Images," TUDelft, Oct. 2006.
Steven Sablerolle, "Graduation Research Report," TU Delft, Sep. 2006.
T. Rabbani, et al., "Segmentation of Point clouds Using Smoothness Constraint," ISPRS Commission V Symposium 'Image Engineering and Vision Metrology,' 2006.
Tahir Rabbani Shah, "Automatic Reconstruction of Industrial Installations Using Point Clouds and Images," Geodesy 6.2, Nederlandse Commissie voor Geodesie Netherlands Geodetic Commission, Delft, May 2006.
Tahir Rabbani, et al., "Efficient Hough Transform for Automatic detection of Cylinders in Point Clouds," Workshop "Laser Scanning 2005," Enschede, the Netherlands, Sep. 12-14, 2005.
Wikipedia, the free encyclopedia, "Level of Detail," http://en.wikipedia.org/wiki/Level_of_detail_(programming), May 23, 2008.
Yu Lifeng, et al., "Multi-Modality Medical Image Fusion Based on Wavelet Pyramid and Evaluation," Peking University, Beijing China, 1994.
Zippered Polygon Meshes from Range Images by Greg Turk et al.; Proc. SIGGRAPH '94, Jul. 1994; pp. 1-8. *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110032507A1 (en) * 2004-06-23 2011-02-10 Leica Geosystems Ag Scanner system and method for registering surfaces
US8379191B2 (en) * 2004-06-23 2013-02-19 Leica Geosystems Ag Scanner system and method for registering surfaces
US8890866B2 (en) 2004-08-31 2014-11-18 Visual Real Estate, Inc. Method and apparatus of providing street view data of a comparable real estate property
US9384277B2 (en) 2004-08-31 2016-07-05 Visual Real Estate, Inc. Three dimensional image data models
US9311396B2 (en) 2004-08-31 2016-04-12 Visual Real Estate, Inc. Method of providing street view data of a real estate property
US9311397B2 (en) 2004-08-31 2016-04-12 Visual Real Estate, Inc. Method and apparatus of providing street view data of a real estate property
US8558848B2 (en) 2004-08-31 2013-10-15 William D. Meadow Wireless internet-accessible drive-by street view system and method
USRE45264E1 (en) * 2004-08-31 2014-12-02 Visual Real Estate, Inc. Methods and apparatus for generating three-dimensional image data models
US8902226B2 (en) 2004-08-31 2014-12-02 Visual Real Estate, Inc. Method for using drive-by image data to generate a valuation report of a selected real estate property
US9098870B2 (en) 2007-02-06 2015-08-04 Visual Real Estate, Inc. Internet-accessible real estate marketing street view system and method
US20100118116A1 (en) * 2007-06-08 2010-05-13 Wojciech Nowak Tomasz Method of and apparatus for producing a multi-viewpoint panorama
US8436901B2 (en) * 2008-02-21 2013-05-07 Id. Fone Co., Ltd. Field monitoring system using a mobile terminal and method thereof
US20110001795A1 (en) * 2008-02-21 2011-01-06 Hyun Duk Uhm Field monitoring system using a mobile terminal
US8207964B1 (en) * 2008-02-22 2012-06-26 Meadow William D Methods and apparatus for generating three-dimensional image data models
US20120150573A1 (en) * 2010-12-13 2012-06-14 Omar Soubra Real-time site monitoring design
US20120265494A1 (en) * 2011-04-14 2012-10-18 National Central University Method of Online Building-Model Reconstruction Using Photogrammetric Mapping System
US8600713B2 (en) * 2011-04-14 2013-12-03 National Central University Method of online building-model reconstruction using photogrammetric mapping system
US20150172628A1 (en) * 2011-06-30 2015-06-18 Google Inc. Altering Automatically-Generated Three-Dimensional Models Using Photogrammetry
US8884950B1 (en) 2011-07-29 2014-11-11 Google Inc. Pose data via user interaction
US20130321583A1 (en) * 2012-05-16 2013-12-05 Gregory D. Hager Imaging system and method for use of same to determine metric scale of imaged bodily anatomy
US9367914B2 (en) * 2012-05-16 2016-06-14 The Johns Hopkins University Imaging system and method for use of same to determine metric scale of imaged bodily anatomy
US9377298B2 (en) * 2013-04-05 2016-06-28 Leica Geosystems Ag Surface determination for objects by means of geodetically precise single point determination and scanning
US20140298666A1 (en) * 2013-04-05 2014-10-09 Leica Geosystems Ag Surface determination for objects by means of geodetically precise single point determination and scanning
US9134339B2 (en) 2013-09-24 2015-09-15 Faro Technologies, Inc. Directed registration of three-dimensional scan measurements using a sensor unit
US9528834B2 (en) 2013-11-01 2016-12-27 Intelligent Technologies International, Inc. Mapping techniques using probe vehicles
US10634791B2 (en) 2016-06-30 2020-04-28 Topcon Corporation Laser scanner system and registration method of point cloud data
US10458792B2 (en) 2016-12-15 2019-10-29 Novatel Inc. Remote survey system
US11215597B2 (en) 2017-04-11 2022-01-04 Agerpoint, Inc. Forestry management tool for assessing risk of catastrophic tree failure due to weather events
US10665035B1 (en) 2017-07-11 2020-05-26 B+T Group Holdings, LLC System and process of using photogrammetry for digital as-built site surveys and asset tracking
WO2019204800A1 (en) * 2018-04-20 2019-10-24 WeRide Corp. Method and system for generating high definition map
US11151782B1 (en) 2018-12-18 2021-10-19 B+T Group Holdings, Inc. System and process of generating digital images of a site having a structure with superimposed intersecting grid lines and annotations

Also Published As

Publication number Publication date
WO2003062849A3 (en) 2004-02-05
US20030137449A1 (en) 2003-07-24
WO2003062849A2 (en) 2003-07-31
US6759979B2 (en) 2004-07-06
AU2003207644A1 (en) 2003-09-02

Similar Documents

Publication Publication Date Title
USRE41175E1 (en) GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site
Fernández‐Hernandez et al. Image‐based modelling from unmanned aerial vehicle (UAV) photogrammetry: an effective, low‐cost tool for archaeological applications
US7689032B2 (en) Scanning system for three-dimensional objects
Li et al. Quantitative photogrammetric analysis of digital underwater video imagery
KR100912715B1 (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
AU629624B2 (en) A method of surveying for the compilation of detailed three-dimensional topographic data
US7187401B2 (en) System and a method of three-dimensional modeling and restitution of an object
US5774826A (en) Optimization of survey coordinate transformations
Sanz‐Ablanedo et al. Reducing systematic dome errors in digital elevation models through better UAV flight design
Schuhmacher et al. Georeferencing of terrestrial laserscanner data for applications in architectural modeling
Honkamaa et al. Interactive outdoor mobile augmentation using markerless tracking and GPS
Benjamin et al. Improving data acquisition efficiency: Systematic accuracy evaluation of GNSS-assisted aerial triangulation in UAS operations
Redweik Photogrammetry
CN110986888A (en) Aerial photography integrated method
Voyat et al. Advanced techniques for geo structural surveys in modelling fractured rock masses: application to two Alpine sites
Novak et al. Development and application of the highway mapping system of Ohio State University
Wu Photogrammetry: 3-D from imagery
Shi et al. Reference-plane-based approach for accuracy assessment of mobile mapping point clouds
Hendriatiningsih et al. 3D Model Based on Terrestrial Laser Scanning (TLS) Case study: The Cangkuang Temple, Garut District, West Java, Indonesia.
Rönnholm et al. A method for interactive orientation of digital images using backprojection of 3D data
Setkowicz Evaluation of algorithms and tools for 3D modeling of laser scanning data.
Lee et al. Automatic building reconstruction with satellite images and digital maps
Abu Hanipah et al. Development of the 3D dome model based on a terrestrial laser scanner
Abbas et al. Three-dimensional data quality assessment: Unmanned aerial vehicle photogrammetry and mobile laser scanner
Lerma et al. Fusion of range-based data and image-based datasets for efficient documentation of cultural heritage objects and sites

Legal Events

Date Code Title Description
AS Assignment

Owner name: SQUARE 1 BANK, NORTH CAROLINA

Free format text: SECURITY INTEREST;ASSIGNOR:INTELISUM, INC.;REEL/FRAME:020930/0037

Effective date: 20070518

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees