Publication number: US 20070141534 A1
Publication type: Application
Application number: US 11/542,682
Publication date: 21 Jun 2007
Filing date: 2 Oct 2006
Priority date: 14 Dec 2004
Also published as: US8026916, US20060127852, US20080316209
Inventor: Huafeng Wen
Original Assignee: Huafeng Wen
Image-based orthodontic treatment viewing system
US 20070141534 A1
Abstract
Systems and methods are disclosed for visualizing changes in a three dimensional (3D) model by receiving an initial 3D model for a patient; determining a target 3D model; and generating one or more intermediate 3D models by morphing one or more of the 3D models.
Claims(20)
1. A method for visualizing changes in a three dimensional (3D) model, comprising:
a. receiving an initial 3D model for a patient;
b. determining a target 3D model; and
c. generating one or more intermediate 3D models by morphing one or more of the 3D models.
2. The method of claim 1, comprising extracting environment information from the model.
3. The method of claim 1, comprising rendering one or more images of the model.
4. The method of claim 1, wherein the model is represented using one of: polyhedrons and voxels.
5. The method of claim 1, wherein the model is a patient model.
6. The method of claim 5, comprising generating a virtual treatment for the patient.
7. The method of claim 5, comprising generating a post-treatment 3D model.
8. The method of claim 5, comprising rendering an image of the model.
9. The method of claim 1, comprising geometry subdividing and tessellating the model.
10. The method of claim 1, comprising generating an inside model of the 3D model.
11. A visualization system, comprising:
a. means for receiving an initial three dimensional (3D) model for a patient;
b. means for determining a target 3D model; and
c. means for generating one or more intermediate 3D models by morphing one or more of the 3D models.
12. The system of claim 11, comprising means for extracting environment information from the model.
13. The system of claim 11, comprising means for rendering one or more images of the model.
14. The system of claim 11, wherein the model is represented using one of: polyhedrons and voxels.
15. The system of claim 11, wherein the model is a patient model.
16. The system of claim 15, comprising means for generating a virtual treatment for the patient.
17. The system of claim 15, comprising means for generating a post-treatment 3D model.
18. The system of claim 15, comprising means for rendering an image of the model.
19. The system of claim 11, comprising means for geometry subdividing and tessellating the model.
20. The system of claim 11, comprising means for generating an inside model of the 3D model.
Description
  • [0001]
    The present invention relates to techniques for generating three dimensional (3D) graphics for orthodontic treatment.
  • [0002]
    Conventionally, a 3D modeling and rendering process is used for representing different views of a 3D scene. The usual steps in constructing a 3D model include: loading an image or previously saved work; displaying the image; identifying one or more object features in the image; finding the object features in 3D space; displaying a model of the object features in 3D space; measuring lengths, distances, and angles of the objects; and saving the work. These steps can be repeated until satisfactory results are obtained. This process requires a great deal of user interaction and is time-consuming. The user has to construct detailed models (e.g., polygon or wire frame) of the objects appearing in an image.
  • [0003]
    Once 3D models are obtained, the models may be animated by varying them and displaying the varied models at a predetermined frame rate. However, it is difficult to manipulate computer graphics representations of three-dimensional models, for example to rotate the object or “fly through” a scene. If many objects need to be displayed, or many surface textures need to be filled, the time required to compute new views can be prohibitive. The conventional 3D rendering process is thus compute-intensive, and the rendering time depends on the complexity of the visible part of the scene.
  • [0004]
    Separately, in many graphics applications, a special effect operation known as “warping” or “morphing” is used to gradually transform one image into another image. This is accomplished by creating a smooth transitional link between the two images. Some computer programs, for example, use warping to generate an animation sequence using the image transformations. Such an animation might, for example, show a first person's face transforming into a second person's face.
  • [0005]
    The warping process preserves features associated with each image by mapping the features from a source image to corresponding features in a destination image. In particular, mesh warping warps a first image into a second image using a point-to-point mapping from the first image to the second image. A first lattice (mesh) is superimposed on the first image and a second lattice is superimposed on the second image. For each point in the first lattice, a one-to-one correspondence with a corresponding point in the second lattice is defined. Mesh warping is generally described in George Wolberg, Digital Image Warping, IEEE Computer Society Press (1990). Variations on mesh warping include a version in which the user specifies lines on the first image corresponding to lines on the second image.
  • [0006]
    Morphing is a name for animation sequences that display gradual transformation. This concept has been used for transformations of 2D images, 3D polygons, and voxels. The morphing operation changes one picture to another by creating a smooth transitional link between the two pictures. The process preserves features associated with each image by mapping the features from a source image to corresponding features in a destination image. Morphing couples image warping with color interpolation. Image warping applies two-dimensional geometric transformations on images to align their features geometrically, while color interpolation blends their colors. In this way, a seamless transition from one picture to another is achieved.
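    The warping-plus-color-interpolation pipeline described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not the implementation described in this application: it fits a single global affine warp to the feature correspondences (a true mesh warp uses a lattice of local mappings) and uses nearest-neighbor sampling; the names `affine_from_points`, `warp_image`, and `morph` are illustrative.

```python
import numpy as np

def affine_from_points(src_pts, dst_pts):
    """Least-squares 2D affine transform (2x3) mapping src_pts -> dst_pts."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # (n, 3) homogeneous points
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2) solution
    return P.T                                     # rows: [a b c], [d e f]

def warp_image(img, A):
    """Warp img by affine A via inverse mapping, nearest-neighbor sampling."""
    h, w = img.shape[:2]
    M = np.vstack([A, [0.0, 0.0, 1.0]])
    Minv = np.linalg.inv(M)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy, _ = Minv @ coords                      # source coords per output pixel
    sx = np.clip(np.rint(sx).astype(int), 0, w - 1)
    sy = np.clip(np.rint(sy).astype(int), 0, h - 1)
    return img[sy, sx].reshape(img.shape)

def morph(img0, img1, pts0, pts1, t):
    """Morph frame at t in [0, 1]: warp both images toward the
    interpolated feature points, then cross-dissolve the colors."""
    pts_t = (1.0 - t) * np.asarray(pts0, float) + t * np.asarray(pts1, float)
    w0 = warp_image(img0, affine_from_points(pts0, pts_t))
    w1 = warp_image(img1, affine_from_points(pts1, pts_t))
    return (1.0 - t) * w0 + t * w1
```

    At t = 0 the result is the source image and at t = 1 the destination image; intermediate t values give the seamless transition.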
  • [0007]
    U.S. Pat. No. 6,268,846 discloses a computer-implemented method that generates a new view of a three-dimensional scene by receiving three or more pictures representing three or more different view points on a plane, each picture taken from a viewing direction perpendicular to the plane; selecting a new point on the plane; and generating the new view of the three dimensional scene from the new point by morphing among the three or more pictures.
  • [0008]
    U.S. Pat. No. 6,573,889 discloses a computer-implemented system that performs a conformal warp operation using a unique warping function to map a first area to a second area. The first area is defined by a first enclosing contour and the second area is defined by a second enclosing contour. The system defines the first enclosing contour; modifies the first enclosing contour into the second enclosing contour; generates an analytic function to conformally warp the first area into the second area; and performs the conformal warp using the analytic function. The system does not require the user to define mappings from individual points within the first contour to individual points within the second contour. Rather, the user needs only to specify the first and second contours and correspondences between them. This increases the ease with which the user can define a mapping between the first and second images and also allows for a more uniform warping which preserves angles.
  • SUMMARY
  • [0009]
    Systems and methods are disclosed for visualizing changes in a three dimensional (3D) model by receiving an initial 3D model for a patient; determining a target 3D model; and generating one or more intermediate 3D models by morphing one or more of the 3D models.
  • [0010]
    In one embodiment, 3D geometry information is used to morph an untreated photograph of a patient into a photorealistic rendering of post-treatment view(s) of the patient's teeth, face, or organs based on the predicted 3D geometry after treatment.
  • [0011]
    Advantages of the system include one or more of the following. The system enables patients, doctors, and dentists to look at a photorealistic rendering of the patient as he or she would appear after treatment. In the case of orthodontics, for example, a patient will be able to see what kind of smile he or she would have after treatment. The system uses 3D morphing, which is an improvement over 2D morphing since true 3D models are generated for all intermediate models. The resulting 3D intermediate object can be processed with an environmental model such as lighting, color, and texture to realistically render the intermediate stage. Camera viewpoints can be changed, and the 3D models can render the intermediate object from any angle. The system permits the user to generate any desired 3D view, if provided with a small number of appropriately chosen starting images. The system avoids the need for 3D shape modeling. System performance is enhanced because the morphing process requires less memory space, disk space, and processing power than the 3D shape modeling process. The resulting 3D images are lifelike and visually convincing because they are derived from images and not from geometric models. The system thus makes a powerful and lasting impression, engages audiences, and creates a sense of reality and credibility.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0012]
    FIG. 1 shows an exemplary visualization process for 3D animation using morphing.
  • [0013]
    FIG. 2 shows an exemplary process for 3D morphing in the 3D visualization process of FIG. 1.
  • [0014]
    FIG. 3 shows a system for visualizing 3D animation.
  • [0015]
    FIG. 4 shows exemplary teeth before morphing.
  • [0016]
    FIG. 5 shows an exemplary display of teeth after the 3D morphing process.
  • DESCRIPTION
  • [0017]
    FIG. 1 shows an exemplary process that uses 3D geometry information to morph an untreated photograph of a patient into a photorealistic rendering of post-treatment view(s) of the patient's teeth, face, or organs based on the predicted 3D geometry after treatment.
  • [0018]
    The process of FIG. 1 first receives a 3D face model for the patient and extracts environment information from the model (10). Next, a Virtual Treatment is designed (12). The process then predicts and generates a Post-Treatment 3D Model/Environment using 3D Morphing and previously generated information (14). A photorealistic image is then rendered (16), and the predicted Post-Treatment Photo can be viewed (18).
  • [0019]
    In the virtual treatment design (12), the system generates or otherwise obtains one or more treatment plans specifying the treatment process in which the teeth are moved in order to perform the orthodontic treatment. The input to this process is the 3D geometry of the patient's jaw and/or teeth. In this process, a computer or computer operators design treatments for the patient. The treatment results in a predicted shape/position for the jaw and its teeth after the orthodontic treatment has been applied.
  • [0020]
    In predicting and generating the 3D Post-Treatment Model/Environment (14), the Treatment Design is combined with the 3D Teeth/Face Model, including texture, environment, shadow, and shading information, in order to predict the 3D post-treatment teeth/jaw and/or face model, which will include the changes in the 3D geometry, position, texture, environment, shading, and shadow of the face.
  • [0021]
    Certain treatment design information, such as how the teeth move during the orthodontic treatment and changes in the tooth movement, can be used with the database on faces and teeth to determine how changes in a particular tooth position result in changes in the jaw and facial model. Since all data at this stage is 3D data, the system can compute the impact of any tooth movement using true 3D morphing of the facial model based on prior knowledge of teeth and facial bone and tissue. In this manner, movements in the jaw/teeth result in changes to the 3D model of the teeth and face. Techniques such as collision computation between the jaw and the facial bone and tissue are used to calculate deformations on the face. The information is then combined with curve- and surface-based smoothing algorithms specialized for the 3D models and the database containing prior knowledge of faces to simulate the changes to the overall face due to localized changes in tooth position. The gradual changes in the teeth/face are visualized and computed using true 3D morphing.
  • [0022]
    At this stage, the 3D models and environmental information for the predicted post-treatment facial and dental models are completed, and the computed data can be sent to a photorealistic renderer for high-quality rendering of the post-treatment teeth and/or face (16). The predicted post-treatment photo is then generated from the renderer (18).
  • [0023]
    FIG. 2 shows an exemplary process for 3D morphing. The process determines a final 3D model from an initial 3D model based on an Initial Facial Model and Treatment Plan (102). Next, the process finds feature points to map corresponding features on the Initial and Final 3D models (104). Based on the mapping, the process interpolates between the initial and final models to determine intermediate 3D models (106). The result is a photorealistic rendering of the patient's jaw/teeth/face after the proposed treatment has been applied.
  • [0024]
    The morphing operation gradually converts one graphical object into another graphical object. Morphing is used to affect both the shape and the attributes of graphical objects. 3D morphing is the process in which there is a gradual transformation between 3D objects. Using start and final 3D models, a 3D morphing sequence is produced by interpolating these objects. For example, 3D objects are commonly represented using polyhedrons, i.e., vertices, edges, and faces. For these polyhedrons, the process results in a polyhedron sequence in which each intermediate polyhedron looks more like the final polyhedron and less like the start polyhedron.
  • [0025]
    In one implementation, the process morphs between objects described by a voxel representation. Morphing can include changes in the 3D model, possibly including deformations, changes in the 3D structure, and surface properties, among others. The process can morph surface features, such as gradually changing color attributes. The color morphing can be used, for example, to gradually show the whitening of teeth.
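    As a sketch of attribute morphing, the teeth-whitening example can be expressed as a linear blend of color attributes over a sequence of frames. This is an assumed minimal implementation, not the one specified in this application; a production renderer might interpolate in a perceptual color space instead of raw RGB, and `whiten_sequence` is an illustrative name.

```python
import numpy as np

def whiten_sequence(color_init, color_target, n_steps):
    """Per-vertex (or per-voxel) color morph, e.g. gradual teeth whitening.

    Linearly interpolates color attributes from the initial color to the
    target color over n_steps frames, returning one color array per frame.
    """
    c0 = np.asarray(color_init, dtype=float)
    c1 = np.asarray(color_target, dtype=float)
    return [(1.0 - t) * c0 + t * c1 for t in np.linspace(0.0, 1.0, n_steps)]
```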
  • [0026]
    The feature mapping may include teeth or lips on the initial and final models. Also, when using a 3D polyhedral representation, the process specifies faces, edges, or vertices in each group. Alternatively, when using a voxel representation, appropriate voxels in each group are specified. The group concept can also be applied to the morphing of surface properties, such as in teeth whitening.
  • [0027]
    In one implementation, pseudo-code for the 3D morphing algorithm is as follows:
      • 1. Determine the final 3D model using the facial/teeth model, based on the initial 3D model and the treatment plan.
      • 2. Map features from initial 3D Model to the final 3D model.
      • 3. Interpolate the 3D models at any step to determine the intermediate 3D model (true 3D Model).
      • 4. Apply true 3D model for realistic rendering.
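    Assuming the feature mapping of step 2 yields initial and final meshes with the same vertex count and ordering, the interpolation of step 3 reduces to a linear blend of vertex positions, with the face/edge topology reused unchanged. A minimal sketch under that assumption (`interpolate_model` is an illustrative name; the application does not specify an implementation):

```python
import numpy as np

def interpolate_model(v_init, v_final, t):
    """Intermediate 3D model at step t in [0, 1].

    v_init, v_final: (N, 3) vertex arrays with one-to-one correspondence
    (features already mapped). Returns the linearly interpolated vertices;
    t = 0 reproduces the initial model and t = 1 the final model.
    """
    v_init = np.asarray(v_init, dtype=float)
    v_final = np.asarray(v_final, dtype=float)
    return (1.0 - t) * v_init + t * v_final
```

    Rendering the sequence of interpolated models at a fixed frame rate then yields the morphing animation.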
  • [0032]
    Turning now to the generation of the patient 3D model (10), scanned information from various sources is combined to generate 3D model(s) of the face, jaw, and teeth of the patient before treatment (the initial model). The process also obtains information about the rendering environment (e.g., shadow, shading, and color information). The resulting detailed initial 3D model and rendering environment are stored for subsequent operations such as rendering/visualization as well as collision determination, among others. The process also receives information about the Treatment Design specifying each tooth movement during the orthodontic treatment. The information on the changes in the tooth movement is used in conjunction with information on the faces and teeth to determine how a change in tooth position changes the overall view of the teeth in the jaw, in the soft tissue of the face, and in the rest of the facial model. This is done using true 3D model morphing. The 3D data is used to compute the impact of any tooth movement using true 3D morphing of the teeth/facial model based on previously determined teeth and facial models.
  • [0033]
    Patient data can be generated through a color 3D scan of the patient's face and can include other data such as X-ray data and CT data, among others. Alternatively, a picture of the patient can be used with a generic face model, and operation 10 can texture map the picture onto the 3D face. The original 2D pictures are saved in the process to eventually provide surface texture, shadow, and shading information for the patient. The following exemplary data, among others, can be collected:
      • 3D scan image of the patient's head/face. This is how the patient currently looks before treatment, including data that represents the soft tissue of the face.
      • Untreated photo of the patient. 2D pictures are provided as input to texture mapping techniques and known facial models to generate a facial model based on 2D pictures as an alternative to a 3D scan.
      • 3D scans of the jaw and teeth of the patient to provide information on the initial orientation of the jaw and teeth prior to the treatment.
      • X-rays for bone and tissue information.
      • Environmental information. This is used to separate the color pigment information from the shading and shadow information of the patient.
  • [0039]
    The patient's color pigments can be separated from shadow/shading in a photo of the patient. The system generates initial environmental information by placing light sources at known coordinates and using these coordinates as inputs to the system. Alternatively, lighting from many angles can be used so that there are no shadows, and the system can subsequently incorporate lighting into the 3D environment.
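    Under the simplifying assumptions of a Lambertian surface and a single known directional light (the application does not specify a separation method, so this is a hypothetical sketch), the pigment/shading separation can be illustrated by dividing the observed intensity by the shading term n·l; `separate_albedo` is an illustrative name.

```python
import numpy as np

def separate_albedo(intensity, normals, light_dir, eps=1e-6):
    """Recover color pigment (albedo) from shading for a Lambertian surface.

    intensity: (H, W) observed brightness; normals: (H, W, 3) unit surface
    normals; light_dir: unit 3-vector pointing toward the light source.
    The shading term is n.l per pixel; dividing it out leaves the albedo.
    """
    l = np.asarray(light_dir, dtype=float)
    shading = np.tensordot(normals, l, axes=([2], [0]))  # per-pixel n.l
    shading = np.clip(shading, eps, None)                # avoid divide-by-zero
    return intensity / shading
```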
  • [0040]
    In one implementation, the above data is combined to create a complete 3D model of the patient's face using the patient's 3D geometry, texture, environment, shading, and shadow. The result is a true hierarchy model with bone, teeth, gingiva, joint information, muscles, soft tissue, and skin. Missing data, such as internal muscle, can be added using a database of known facial models.
  • [0041]
    The result is the initial 3D orthodontic and facial model with environmental information extracted. This gives the system the ability to change the direction of the light sources and see changes in the shading and the shadows. The arrangement also provides the ability to extract the color of the patient's skin.
  • [0042]
    In one implementation of the generation of the 3D face model for the patient and the extraction of the environment, a true hierarchical face model is built with teeth, bone, joints, gingiva, muscle, soft tissue, and skin. Changes in the position/shape of one level of the hierarchy change all dependent levels in the hierarchy. As an example, a modification in the jaw bone will impact the muscle, soft tissue, and skin. This includes changes in the gingiva.
  • [0043]
    The process extrapolates missing data using prior knowledge of the particular organ. For example, for missing data on a particular tooth, the system consults a database to estimate expected data for the tooth. For missing facial data, the system can consult a soft tissue database to estimate the muscle and internal tissue to be extrapolated.
  • [0044]
    The system also estimates the behavior of the organ based on its geometry and other models of the organ. An expert system computes the model of the face and how the face should change if pressure is applied by moved teeth. In this manner, the impact of tooth movement on the face is determined. Changes in the gingiva can also be determined using this model.
  • [0045]
    In one implementation, geometry subdivision and tessellation are used. Based on changes in the face caused by changes in teeth position, at times it is necessary to subdivide the soft face tissue geometry for more detailed/smooth rendering. At other times the level of detail needs to be reduced. The model uses prior information to achieve this. True 3D morphing connects the initial and modified geometry to show gradual changes in the face model.
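    One common way to subdivide geometry for smoother rendering is midpoint subdivision, which splits each triangle into four. This is a generic sketch, not the specific algorithm of this application; `subdivide` is an illustrative name.

```python
def subdivide(vertices, faces):
    """One round of midpoint subdivision: each triangle becomes four.

    vertices: list of (x, y, z) tuples; faces: list of (i, j, k) index
    triples. Midpoints are cached per edge so triangles sharing an edge
    reuse the same new vertex and the mesh stays watertight.
    """
    verts = list(vertices)
    midpoint_cache = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            vi, vj = verts[i], verts[j]
            verts.append(tuple((a + b) / 2.0 for a, b in zip(vi, vj)))
            midpoint_cache[key] = len(verts) - 1
        return midpoint_cache[key]

    new_faces = []
    for i, j, k in faces:
        a, b, c = midpoint(i, j), midpoint(j, k), midpoint(k, i)
        new_faces += [(i, a, c), (a, j, b), (c, b, k), (a, b, c)]
    return verts, new_faces
```

    Repeated rounds refine the surface where more detail is needed; reducing the level of detail is the inverse problem (mesh decimation) and is not shown here.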
  • [0046]
    Certain applications need the external 3D model of the face and the 3D model of the jaw/teeth as well as internal models, such as the inner side of the facial tissue and the muscle tissue; in these applications, hole filling and hidden geometry prediction operations are performed on the organ. The internal information is required in these applications to model the impact of changes at various levels of the model hierarchy on the overall model. As an example, teeth movement can impact facial soft tissue or bone movements. Hence, jaw movements can impact the muscles and the face. A database containing prior knowledge can be used for generating the internal model information.
  • [0047]
    In one implementation, gingiva prediction is performed. The model recomputes the gingiva's geometry based on changes in other parts of the facial model to determine how teeth movement impacts the gingiva.
  • [0048]
    In another implementation, a texture-based 3D geometry reconstruction is performed. The actual face color/pigment is stored as a texture. Since different parts of the facial skin can have different colorations, texture maps store colors corresponding to each position on the 3D face model.
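    Storing colors corresponding to each position on the model amounts to a texture lookup keyed by per-vertex UV coordinates. A minimal nearest-neighbor sketch (illustrative names; real renderers typically use bilinear filtering rather than nearest-neighbor):

```python
import numpy as np

def sample_texture(texture, uvs):
    """Look up per-vertex colors from a texture map.

    texture: (H, W, 3) color array; uvs: (N, 2) coordinates in [0, 1],
    with u running across the width and v down the height. Returns an
    (N, 3) array of colors via nearest-neighbor sampling.
    """
    h, w = texture.shape[:2]
    uvs = np.asarray(uvs, dtype=float)
    xs = np.clip(np.rint(uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    ys = np.clip(np.rint(uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return texture[ys, xs]
```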
  • [0049]
    In another implementation, multiple cameras are used for photo geometry reconstruction. Multiple camera shots are used to generate the face geometry to produce a true 3D model of the face.
  • [0050]
    An alternative to scanning the model is to have a 2D picture of the patient. The process then maps point(s) on the 2D picture to a 3D model using prior information on typical 3D head models (for example, by applying texture mapping). The simulated 3D head is used for making the final facial model.
  • [0051]
    FIG. 3 shows another implementation of a 3D morphing system for treatment planning purposes. Initial untreated photographs of a patient are scanned (200). From the scan, the patient's 3D geometry is determined (202). Next, exemplary 3D data is determined, for example, the patient's 3D geometry, texture, environment, shading, and shadow (206). The output of 206 is combined with a treatment design or prescription (208) to arrive at a predicted post-treatment 3D model with geometry, position, texture, environment, shading, and shadow, among others (210). The output is then rendered as a photorealistic output (212). The result can be used as the predicted post-treatment photo (214).
  • [0052]
    FIG. 4 shows exemplary teeth before morphing, while FIG. 5 shows an exemplary display of teeth after the 3D morphing process. The system enables patients, doctors, dentists, and other interested parties to view photorealistic renderings of the expected appearance of patients after treatment. In the case of orthodontics, for example, a patient can view his or her expected post-treatment smile.
  • [0053]
    The system can also be used for other medical and surgical simulation systems. Thus, for plastic surgery applications, the system can show the before and after results of the procedure. In tooth whitening applications, given an initial tooth color and a target tooth color, the tooth surface color can be morphed to show changes in the tooth color and the impact on the patient's face. The system can also be used to perform lip sync. The system can also perform face detection: depending on facial expression, a person can have multiple expressions on their face at different times, and the model can simulate multiple expressions based on prior information; the multiple expressions can then be compared to a scanned face for face detection. The system can also be applied to show wound healing on the face through progressive morphing. Additionally, a growth model based on a database of prior organ growth information can predict how an organ would be expected to grow, and the growth can be visualized using morphing. For example, a hair growth model can show a person his or her expected appearance three to six months from the day of a haircut using one or more hair models.
  • [0054]
    The techniques described here may be implemented in hardware or software, or a combination of the two. Preferably, the techniques are implemented in computer programs executing on programmable computers, each of which includes a processor, a storage medium readable by the processor (including volatile and nonvolatile memory and/or storage elements), and suitable input and output devices. Program code is applied to data entered using an input device to perform the functions described and to generate output information. The output information is applied to one or more output devices.
  • [0055]
    One such computer system includes a CPU, a RAM, a ROM and an I/O controller coupled by a CPU bus. The I/O controller is also coupled by an I/O bus to input devices such as a keyboard and a mouse, and output devices such as a monitor. The I/O controller also drives an I/O interface which in turn controls a removable disk drive such as a floppy disk, among others.
  • [0056]
    Variations are within the scope of the following claims. For example, instead of using a mouse as the input devices to the computer system, a pressure-sensitive pen or tablet may be used to generate the cursor position information. Moreover, each program is preferably implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language.
  • [0057]
    Each such computer program is preferably stored on a storage medium or device (e.g., CD-ROM, hard disk or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer to perform the procedures described. The system also may be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner.
  • [0058]
    While the invention has been shown and described with reference to an embodiment thereof, those skilled in the art will understand that the above and other changes in form and detail may be made without departing from the spirit and scope of the following claims.
Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US4488173 * | 4 Feb 1983 | 11 Dec 1984 | Robotic Vision Systems, Inc. | Method of sensing the position and orientation of elements in space
US4600012 * | 22 Apr 1985 | 15 Jul 1986 | Canon Kabushiki Kaisha | Apparatus for detecting abnormality in spinal column
US4971069 * | 28 Sep 1988 | 20 Nov 1990 | Diagnospine Research Inc. | Method and equipment for evaluating the flexibility of a human spine
US4983120 * | 12 May 1988 | 8 Jan 1991 | Specialty Appliance Works, Inc. | Method and apparatus for constructing an orthodontic appliance
US5568384 * | 13 Oct 1992 | 22 Oct 1996 | Mayo Foundation For Medical Education And Research | Biomedical imaging and analysis
US5753834 * | 23 Jul 1997 | 19 May 1998 | Lear Corporation | Method and system for wear testing a seat by simulating human seating activity and robotic human body simulator for use therein
US5867584 * | 22 Feb 1996 | 2 Feb 1999 | NEC Corporation | Video object tracking method for interactive multimedia applications
US5889550 * | 10 Jun 1996 | 30 Mar 1999 | Adaptive Optics Associates, Inc. | Camera tracking system
US5937083 * | 28 Apr 1997 | 10 Aug 1999 | The United States Of America As Represented By The Department Of Health And Human Services | Image registration using closest corresponding voxels with an iterative registration process
US6099314 * | 4 Jul 1996 | 8 Aug 2000 | Cadent Ltd. | Method and system for acquiring three-dimensional teeth image
US6210162 * | 14 May 1999 | 3 Apr 2001 | Align Technology, Inc. | Creating a positive mold of a patient's dentition for use in forming an orthodontic appliance
US6217325 * | 23 Apr 1999 | 17 Apr 2001 | Align Technology, Inc. | Method and system for incrementally moving teeth
US6227850 * | 13 May 1999 | 8 May 2001 | Align Technology, Inc. | Teeth viewing system
US6252623 * | 15 May 1998 | 26 Jun 2001 | 3Dmetrics, Incorporated | Three dimensional imaging system
US6264468 * | 19 Feb 1999 | 24 Jul 2001 | Kyoto Takemoto | Orthodontic appliance
US6275613 * | 3 Jun 1999 | 14 Aug 2001 | Medsim Ltd. | Method for locating a model in an image
US6315553 * | 30 Nov 1999 | 13 Nov 2001 | Orametrix, Inc. | Method and apparatus for site treatment of an orthodontic patient
US6318994 * | 13 May 1999 | 20 Nov 2001 | Align Technology, Inc. | Tooth path treatment plan
US6341016 * | 4 Aug 2000 | 22 Jan 2002 | Michael Malione | Method and apparatus for measuring three-dimensional shape of object
US6406292 * | 13 May 1999 | 18 Jun 2002 | Align Technology, Inc. | System for determining final position of teeth
US6415051 * | 24 Jun 1999 | 2 Jul 2002 | Geometrix, Inc. | Generating 3-D models using a manually operated structured light source
US6556706 * | 26 Jan 2001 | 29 Apr 2003 | Z. Jason Geng | Three-dimensional surface profile imaging method and apparatus using single spectral light condition
US6563499 * | 19 Jul 1999 | 13 May 2003 | Geometrix, Inc. | Method and apparatus for generating a 3D region from a surrounding imagery
US6602070 * | 25 Apr 2001 | 5 Aug 2003 | Align Technology, Inc. | Systems and methods for dental treatment planning
US6786721 * | 26 Apr 2002 | 7 Sep 2004 | Align Technology, Inc. | System and method for positioning teeth
US6851949 * | 28 Apr 2000 | 8 Feb 2005 | Orametrix, Inc. | Method and apparatus for generating a desired three-dimensional digital model of an orthodontic structure
US6948931 * | 22 Oct 2003 | 27 Sep 2005 | Align Technology, Inc. | Digitally modeling the deformation of gingival tissue during orthodontic treatment
US20010002310 * | 21 Dec 1999 | 31 May 2001 | Align Technology, Inc. | Clinician review of an orthodontic treatment plan and appliance
US20010005815 * | 4 Jan 2001 | 28 Jun 2001 | Immersion Corporation | Component position verification using a position tracking device
US20010006770 * | 21 Feb 2001 | 5 Jul 2001 | Align Technology, Inc. | Method and system for incrementally moving teeth
US20010008751 * | 8 Jan 2001 | 19 Jul 2001 | Align Technology, Inc. | Method and system for incrementally moving teeth
US20020028418 * | 26 Apr 2001 | 7 Mar 2002 | University Of Louisville Research Foundation, Inc. | System and method for 3-D digital reconstruction of an oral cavity from a sequence of 2-D images
US20020119423 * | 26 Apr 2002 | 29 Aug 2002 | Align Technology, Inc. | System and method for positioning teeth
US20030039941 * | 24 Oct 2002 | 27 Feb 2003 | Align Technology, Inc. | Digitally modeling the deformation of gingival tissue during orthodontic treatment
US20030129565 * | 10 Jan 2002 | 10 Jul 2003 | Align Technology, Inc. | System and method for positioning teeth
US20040038168 * | 22 Aug 2002 | 26 Feb 2004 | Align Technology, Inc. | Systems and methods for treatment analysis by teeth matching
US20040137408 * | 24 Dec 2003 | 15 Jul 2004 | Cynovad Inc. | Method for producing casting molds
US20040185422 * | 19 Mar 2004 | 23 Sep 2004 | Sirona Dental Systems GmbH | Data base, tooth model and restorative item constructed from digitized images of real teeth
US20040253562 * | 4 Mar 2004 | 16 Dec 2004 | Align Technology, Inc. | Systems and methods for fabricating a dental template
US20050019732 * | 23 Jul 2003 | 27 Jan 2005 | Orametrix, Inc. | Automatic crown and gingiva detection from three-dimensional virtual model of teeth
US20050153257 * | 8 Jan 2004 | 14 Jul 2005 | Durbin Duane M. | Method and system for dental model occlusal determination using a replicate bite registration impression
US20050208449 * | 19 Mar 2004 | 22 Sep 2005 | Align Technology, Inc. | Root-based tooth moving sequencing
US20050244791 * | 29 Apr 2004 | 3 Nov 2005 | Align Technology, Inc. | Interproximal reduction treatment planning
US20060003292 * | 24 May 2004 | 5 Jan 2006 | Lauren Mark D | Digital manufacturing of removable oral appliances
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US8503763 | 4 Jan 2009 | 6 Aug 2013 | 3M Innovative Properties Company | Image signatures for use in motion-based three-dimensional reconstruction
US8803958 | 4 Jan 2009 | 12 Aug 2014 | 3M Innovative Properties Company | Global camera path optimization
US8830309 * | 4 Jan 2009 | 9 Sep 2014 | 3M Innovative Properties Company | Hierarchical processing using image deformation
US9418474 | 4 Jan 2009 | 16 Aug 2016 | 3M Innovative Properties Company | Three-dimensional model refinement
US20100283781 * | 4 Jan 2009 | 11 Nov 2010 | Kriveshko Ilya A | Navigating among images of an object in 3D space
US20110007137 * | 4 Jan 2009 | 13 Jan 2011 | Janos Rohaly | Hierarchical processing using image deformation
US20110007138 * | 4 Jan 2009 | 13 Jan 2011 | Hongsheng Zhang | Global camera path optimization
US20110043613 * | 4 Jan 2009 | 24 Feb 2011 | Janos Rohaly | Three-dimensional model refinement
US20110164810 * | 4 Jan 2009 | 7 Jul 2011 | Tong Zang | Image signatures for use in motion-based three-dimensional reconstruction
WO2009089125A2 * | 4 Jan 2009 | 16 Jul 2009 | 3M Innovative Properties Company | Navigating among images of an object in 3D space
WO2009089125A3 * | 4 Jan 2009 | 24 Sep 2009 | 3M Innovative Properties Company | Navigating among images of an object in 3D space
WO2016083519A1 * | 26 Nov 2015 | 2 Jun 2016 | 3Shape A/S | Method of digitally designing a modified dental setup
Classifications
U.S. Classification: 433/213
International Classification: A61C11/00
Cooperative Classification: A61C9/0046, A61C7/00
European Classification: A61C7/00