US20150313559A1 - System and method for detecting a problem tooth - Google Patents

System and method for detecting a problem tooth

Info

Publication number
US20150313559A1
Authority
US
United States
Prior art keywords
tooth
objects
data
processing module
dental
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/797,321
Inventor
Mireya Ortega
Roger Daugherty
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VISIONARY TECHNOLOGIES Inc
Original Assignee
VISIONARY TECHNOLOGIES Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VISIONARY TECHNOLOGIES Inc
Priority to US14/797,321
Assigned to VISIONARY TECHNOLOGIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ORTEGA, MIREYA; DAUGHERTY, ROGER
Publication of US20150313559A1
Legal status: Abandoned

Links

Images

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T19/00 Manipulating 3D models or images for computer graphics
          • G06T7/00 Image analysis
            • G06T7/0002 Inspection of images, e.g. flaw detection
              • G06T7/0012 Biomedical image inspection
          • G06T2200/00 Indexing scheme for image data processing or generation, in general
            • G06T2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/10 Image acquisition modality
              • G06T2207/10072 Tomographic images
            • G06T2207/30 Subject of image; Context of image processing
              • G06T2207/30004 Biomedical image processing
                • G06T2207/30036 Dental; Teeth
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
            • G06V2201/03 Recognition of patterns in medical or anatomical images
    • A HUMAN NECESSITIES
      • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
            • A61B6/02 Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
              • A61B6/03 Computerised tomographs
                • A61B6/032 Transmission computed tomography [CT]
            • A61B6/14 Applications or adaptations for dentistry
            • A61B6/46 Apparatus for radiation diagnosis with special arrangements for interfacing with the operator or the patient
              • A61B6/461 Displaying means of special interest
                • A61B6/463 Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
              • A61B6/467 Apparatus with special arrangements for interfacing with the operator or the patient characterised by special input means
            • A61B6/51
            • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
              • A61B6/5211 Devices using data or image processing involving processing of medical diagnostic data
                • A61B6/5217 Devices extracting a diagnostic or physiological parameter from medical diagnostic data
                • A61B6/5223 Devices generating planar views from image data, e.g. extracting a coronal view from a 3D image

Definitions

  • the present invention relates to a system and method for detecting a problem tooth. More specifically, the present invention is related to mathematically modeling the growth of at least one tooth object, the decay for at least one tooth object, or the combination thereof.
  • dental images are displayed in two dimensions using light tables, e.g. X-rays. These two-dimensional views provide a single perspective of the image.
  • Three-dimensional (3-D) imaging systems have also been developed. These systems provide high-definition digital imaging with relatively short scan times, e.g. 20 seconds.
  • the image reconstruction takes less than two minutes.
  • the X-ray source is typically a high frequency source with a cone x-ray beam, and employs an image detector with an amorphous silicon flat panel.
  • the images are 12-bit gray scale and may have a voxel size of 0.4 mm to 0.1 mm.
  • Image acquisition is performed in a single session and is based on a 360 degree rotation of the X-ray source.
  • the output data are digital images that are stored using conventional imaging formats such as the Digital Imaging and Communications in Medicine (DICOM) standard.
  • the 3-D volumetric imaging system provides complete views of oral and maxillofacial structures.
  • the volumetric images provide complete 3-D views of anatomy for a more thorough analysis of bone structure and tooth orientation. These 3-D images are frequently used for implant and oral surgery, orthodontics, and TMJ analysis.
  • the software techniques for visualization of the dental images do not provide a dentist with sufficient flexibility to manipulate the 3-D image. Additionally, the visualization features provided by current third party solutions lack the ability to detect objects, detect irregularities, and detect anomalies.
  • a system and method for visualizing a dental image that includes a plurality of high resolution dental data, a plurality of tooth objects, at least one threshold and a processing module is described.
  • the plurality of high resolution dental data is generated using computed tomography.
  • the plurality of tooth objects selected for each tooth from the dental data includes at least one of an enamel object, a dentin object, a pulp object, a root object, and a nerve object.
  • the at least one threshold is used to detect at least one problem tooth.
  • the processing module detects at least one problem tooth. Additionally, the processing module mathematically models the growth of at least one tooth object, the decay for at least one tooth object, or the combination thereof. Furthermore, the processing module determines the effect the problem tooth has on at least one other tooth object.
  • the system and method includes a database that further includes a plurality of data fields that include a plurality of standard shapes associated with each tooth and a plurality of bone density data for each section of tooth. Additionally, the database includes a plurality of normative standards and at least one statistical standard for anomaly detection.
  • the system and method includes identifying a common boundary between at least two tooth objects.
  • the system and method includes identifying a particular tooth for further analysis. Also, the system and method includes analyzing the tooth objects for the particular tooth. Furthermore, the system and method includes analyzing the particular tooth object by slicing the tooth object at one or more locations.
  • FIG. 1A shows an illustrative system overview.
  • FIG. 1B is an illustrative general purpose computer.
  • FIG. 1C is an illustrative client-server system.
  • FIG. 2 is an illustrative raw image.
  • FIG. 3 is an illustrative object identification flowchart.
  • FIG. 4A is an illustrative drawing showing jaw object identification.
  • FIG. 4B is an illustrative drawing showing tooth object identification.
  • FIG. 5 is an illustrative 3-D image of a tooth object.
  • FIG. 6 is an illustrative first slice of the tooth object in FIG. 5.
  • FIG. 7 is an illustrative second slice of the tooth object in FIG. 5.
  • FIG. 8 is an illustrative third slice of the tooth object in FIG. 5.
  • FIG. 9 is an illustrative flowchart for anomaly detection and for modeling growth rates.
  • FIGS. 10A and 10B show a normal orientation for a wisdom tooth.
  • FIGS. 11A and 11B show the beginning phase of horizontal impaction.
  • FIGS. 12A and 12B show an illustrative example of cyst formation.
  • FIGS. 13A and 13B show an illustrative example of cyst growth.
  • FIGS. 14A and 14B show the resulting tooth decay and continuing cyst growth.
  • tomographic imaging includes analyzing the attenuation of the captured image using the Radon transform and filtered back projection.
  • There are a variety of different types of tomography including but not limited to Atom Probe Tomography, Computed Tomography, Electrical Impedance Tomography, Magnetic Resonance Tomography, Optical Coherence Tomography, Positron Emission Tomography, Quantum Tomography, Single Photon Emission Computed Tomography, and X-Ray Tomography. Attenuation refers to any reduction in signal strength.
  • Visualization refers to the process of taking one or more images and incorporating a comprehension of the physical relationship or significance of the features contained in the images.
  • An object is a physical relationship within an image that is capable of being grasped through visualization, and an object is composed of a plurality of voxels that share a common basis.
  • a variety of techniques, methods, algorithms, mathematical formulae, or any combination thereof may be used to identify a common basis.
  • elements such as location, bone density, shape, or a combination thereof may be used to identify at least one common basis that is used for object identification. Bone density is the measure of the mass of bone in relation to volume. Therefore, one or more common bases may be used for object identification.
  • a modality in a medical image is any of the various types of equipment or probes used to acquire images of the body.
  • Magnetic Resonance Imaging is an example of a modality in this context.
  • the illustrative system 200 receives a 3-D dental image 202 that is stored in a first database 204 that stores archived images.
  • a digital acquisition and processing component 208 processes received 3-D dental images.
  • Particular information that is used to process the 3-D dental images is stored in the second database 206 .
  • An interactive graphical user interface 210 permits a user to manipulate the processed images and to interact with each illustrative dental object.
  • the 3-D dental image is generated by a medical imaging device such as an i-CAT 3-D Imaging System from Imaging Sciences International.
  • the databases 204 and 206 comprise a plurality of data fields including, but not limited to, data fields that correspond to the location for a plurality of teeth, a plurality of locations for each section of tooth, a plurality of standard shapes associated with each tooth, a plurality of standard shapes associated with each of the sections of tooth, and a plurality of bone density data for each section of tooth.
  • the digital processing component 208 is configured to process the 3-D image, and is in operative communication with the database.
  • the digital processing component is configured to provide improved visualization of the medical image.
  • the digital processing component 208 is configured to identify an object by combining a plurality of voxels having a common density and tagging the object using the methods described herein.
  • a voxel is a volume element that represents a value in 3-D space.
  • Common density is a density associated with a particular object in an image, in which a degree of attenuation within the image is associated with density.
  • the digital processing component 208 is also configured to permit modifying the shape of at least one object. Furthermore, the digital processing component 208 is configured to provide a method for detecting anomalies and mathematically modeling growth rates.
  • the digital processing component 208 is a computer having a processor as shown in FIG. 1B .
  • the illustrative general purpose computer 10 is suitable for implementing the systems and methods described herein.
  • the general purpose computer 10 includes at least one central processing unit (CPU) 12 , a display such as monitor 14 , and an input device 15 such as cursor control device 16 or keyboard 17 .
  • the cursor control device 16 can be implemented as a mouse, a joystick, a series of buttons, or any other input device which allows a user to control the position of a cursor or pointer on the display monitor 14.
  • Another illustrative input device is the keyboard 17 .
  • the general purpose computer may also include random access memory (RAM) 18, hard drive storage 20, read-only memory (ROM) 22, a modem 26 and a graphic co-processor 28. All of the elements of the general purpose computer 10 may be tied together by a common bus 30 for transporting data between the various elements.
  • the bus 30 typically includes data, address, and control signals.
  • Although the general purpose computer 10 illustrated in FIG. 1B includes a single data bus 30 which ties together all of the elements of the general purpose computer 10, there is no requirement that there be a single communication bus which connects the various elements of the general purpose computer 10.
  • the CPU 12 , RAM 18 , ROM 22 , and graphics co-processor might be tied together with a data bus while the hard disk 20 , modem 26 , keyboard 24 , display monitor 14 , and cursor control device are connected together with a second data bus (not shown).
  • the first data bus 30 and the second data bus could be linked by a bi-directional bus interface (not shown).
  • the elements such as the CPU 12 and the graphics coprocessor 28 could be connected to both the first data bus 30 and the second data bus and communication between the first and second data bus would occur through the CPU 12 and the graphics co-processor 28 .
  • the methods of the present invention are thus executable on any general purpose computing architecture, but there is no limitation that this architecture is the only one which can execute the methods of the present invention.
  • BioImage and BioPSE Power App are visualization and analysis applications developed by the University of Utah that may run on the computer 10.
  • the software programs explore scalar data sets such as medical imaging volumes. In operation, the user chooses an input data set.
  • BioImage supports a variety of different industry standard formats including DICOM and Analyze. For example, a dental data set containing a single tooth may be loaded into these programs.
  • the illustrative software program permits the user to resample, crop, histogram or median filter the data. Using a cropping filter permits visually removing the excess data from the borders of the volume.
  • the GUI permits the user to explore the data volume in both 2-D and 3-D using the rendering panes in the software.
  • the software also permits slice views wherein the user can change slices and can adjust the contrast and brightness of the data.
  • Yet another feature of BioImage is the volume rendering engine. From the volume rendering tab, the user turns on the direct volume rendering visualization.
  • the volume rendering algorithm uses a transfer function to assign color and opacity based on both data values and gradient magnitudes of the volume.
  • the dentin is a calcified tissue of the body and, along with enamel, cementum, and pulp, is one of the four major components of teeth. Pulp is the part in the center of a tooth made up of living soft tissue and cells called odontoblasts.
  • the methods described herein may use a client/server architecture which is shown in FIG. 1C .
  • the client/server architecture 50 can be configured to perform similar functions as those performed by the general purpose computer 10 .
  • In the client-server architecture communication generally takes the form of a request message 52 from a client 54 to the server 56 asking for the server 56 to perform a server process 58 .
  • the server 56 performs the server process 58 and sends back a reply 60 to a client process 62 resident within client 54 .
  • Additional benefits from use of a client/server architecture include the ability to store and share gathered information and to collectively analyze gathered information.
  • a peer-to-peer network (not shown) can be used to implement the methods described herein.
  • In operation, the general purpose computer 10, client/server network system 50, or peer-to-peer network system execute a sequence of machine-readable instructions. These machine readable instructions may reside in various types of signal bearing media.
  • one aspect of the present invention concerns a programmed product, comprising signal-bearing media tangibly embodying a program of machine-readable instructions executable by a digital data processor such as the CPU 12 for the general purpose computer 10 .
  • the computer readable medium may comprise, for example, RAM 18 contained within the general purpose computer 10 or within a server 56 .
  • the computer readable medium may be contained in another signal-bearing media, such as a magnetic data storage diskette that is directly accessible by the general purpose computer 10 or the server 56 .
  • the machine readable instructions within the computer readable medium may be stored in a variety of machine readable data storage media, such as a conventional “hard drive” or a RAID array, magnetic tape, electronic read-only memory (ROM), an optical storage device such as CD-ROM or DVD, or other suitable signal bearing media including transmission media such as digital and analog communication links.
  • the machine-readable instructions may comprise software object code from a programming language such as C++, Java, or Python.
  • Referring to FIG. 2, there is shown an illustrative raw image of a mouth and a tooth.
  • FIG. 2 provides a visual aid of the basic anatomy of the mouth and the tooth similar to what may be generated using the illustrative i-CAT imaging system described above.
  • This visual aid has many limitations, namely, the multiple objects in the image have not been identified. Additionally, the image is essentially a raw image that has not been standardized using some type of calibrated sample.
  • the illustrative method 70 for visualizing a 3-D medical image comprises receiving a plurality of high resolution 3-D medical data at block 72 that are generated using a tomography technique, e.g. computed tomography (CT) scans.
  • the method for visualizing a dental image comprises receiving a plurality of high resolution 3-D dental data associated with a patient's mouth that is generated using x-ray tomography.
  • the high resolution 3-D data received at block 72 includes a standard for calibration purposes.
  • the standard may have a particular density that can be associated with a bone density.
  • the standard is composed of a material that can be associated with the bone density of an illustrative tooth.
  • the standard may be placed adjacent to the patient and held physically or mechanically in place. Alternatively, the patient may place the standard in the mouth and bite the standard.
  • the method then proceeds to block 74 where the high resolution data is converted into an image comprised of a plurality of cubic voxels.
  • the method proceeds to identify the location for each cubic voxel.
  • the method then proceeds to identify a degree of attenuation for the voxels at block 78 . Attenuation is the reduction in amplitude and intensity of a signal.
  • the method associates a common density with the degree of attenuation.
  • the method also associates the degree of attenuation for the standard with the previously determined standard density related to block 72 .
  • the degree of attenuation for each voxel is associated with at least one of a plurality of common bone densities.
  • the method then proceeds to block 82 that identifies an object by combining the voxels having the common density and determines an initial shape for the object.
  • the identifying of the object may comprise comparing the initial shape of the object to a standard shape.
  • the common density may also be modified by a user, thereby resulting in the object having a different shape.
  • the 3-D data may be dental data and the common density is a bone density associated with teeth, mouth, or jaw.
  • the method then proceeds to generate dental objects by combining voxels having one of the common bone densities. The boundaries for each dental object are also determined.
  • Various opportunities may be presented where each dental object is compared to a standard shape to confirm identification of each dental object.
  • the method then proceeds to block 84 where at least one object is then tagged for further analysis 78 .
  • the tagged objects in the tooth may be associated with a “metatag” so that each object can be quickly identified and viewed.
  • a “metatag” as used herein refers to a “tag” that is associated with each object, wherein the “tag” is searchable and is used to provide a structured means for identifying objects so that the “tagged” object or objects can be viewed.
  • the tagged object may then be extracted for further analysis.
  • the extracted object may then be viewed using a plurality of different perspectives.
  • the tagged and extracted object can then be viewed by slicing the object at desired locations.
  • An illustrative object may be a particular tooth object, an enamel object, a dentin object, a pulp object, a root object, a nerve object, or any other such dental object associated with mouth and jaw.
  • the plurality of objects may also be identified such as teeth objects, a plurality of nerve objects, and a plurality of bone objects. More generally, a plurality of objects may also be identified by combining the voxels having one of a plurality of different common densities and by determining the boundaries for each object.
  • each of the objects is then compared to a standard shape, and each of the objects is then tagged to permit one or more objects to be combined.
  • the plurality of objects may also be identified such as teeth objects, a plurality of nerve objects, a plurality of bone objects, or any other such object.
  • the flowchart also describes modifying the shape of at least one object in a scanned 3-D medical image at block 88 .
  • a common boundary between the first object and the second object is identified.
  • the common boundary is configured to identify a change in bone density between the first object and the second object.
  • the method permits a user to modify the common boundary by permitting the user to modify the apparent bone density of the first object.
  • the method also provides for coloring each voxel according to each of the bone densities.
  • the common boundary spans a relatively broad area when there is little change in bone density between the first object and the second object.
  • the method also permits evaluating a plurality of standard shapes when generating the first object and the second object.
  • each of the plurality of objects may have a plurality of tags, in which each tag may be extracted from the image as represented by block 90 .
  • the first object is tagged as a first tooth and the second object is a second tooth.
  • the first object and said second object is selected from a group consisting of a tooth object, an enamel object, a dentin object, a pulp object, a root object, a nerve object, a plurality of teeth objects, a plurality of nerve objects, or a plurality of bone objects.
  • the method 70 also supports performing imaging operation such as slicing objects as represented by block 92 and described in further detail below.
  • the method involves known physiologies discovered by the method above and supports analyzing relevant materials.
  • a 3-D DICOM file is converted to a 3-D volumetric image if the process has not already been completed, and the volume is oriented according to axes, the body portion contained, scale, etc.
  • Object identification may be performed as a function of common densities, density transitions, and known or standard shape similarities.
  • a map of the objects can then be created and displayed.
  • the systems and methods described may be applied to non-specific objects, tissue identification by density, adjacent material, and general location.
  • Margin (junction) shape determination, specific material shape (object) determination based on material profile, and cataloging of same may also be performed.
  • objects, passageways, etc. (e.g. teeth, nerve canals, implants, vertebrae, jaw) may be identified.
  • the objective is to identify recurring examples of similar objects such as teeth, and to catalog their identification, both by normative standards and by reference to statistically compiled identifiers and shape.
  • Referring to FIG. 4A, there is shown an illustrative drawing 100 with dental and jaw object identification.
  • the tagged objects in the tooth are associated with a “metatag” or “searchable tag” so that each object can be quickly identified and viewed.
  • a variety of soft tissue objects such as nerve objects are shown.
  • the nerve objects refer to sensitive tissue in the pulp of a tooth, or any bundle of nerve fibers running to various organs in the body.
  • a standard 102 for calibration purposes is shown. By way of example and not of limitation, these nerve objects are typically identified using MRI or CT scans.
  • Referring to FIG. 5, there is shown an illustrative 3-D image of an illustrative tooth object 120.
  • the tooth object 120 comprises each of the objects described above such as the enamel, dentin and pulp.
  • a variety of different slices of the 3-D image are presented.
  • FIG. 6 provides an illustrative first slice 122 of the tooth object in FIG. 5 .
  • FIG. 7 provides an illustrative second slice 124 of the tooth object in FIG. 5
  • FIG. 8 is an illustrative third slice 126 of the tooth object.
  • Each of these drawings depicts that the tagged objects can be “sliced” to provide a clearer view of the particular tooth. This slicing process may also be used for anomaly detection as described below.
  • Referring to FIG. 9, there is shown an illustrative flowchart for anomaly detection and for modeling growth rates that is a continuation of the flowchart in FIG. 3.
  • An anomaly is a deviation or departure from a normal, or common order, or form, or rule, and is generally used to refer to a substantial defect.
  • An “irregularity” is distinguishable from an anomaly since “irregular” simply means lacking symmetry, evenness, or having a minor defect.
  • the flowchart describes a method for identifying anomalies in a scanned 3-D dental image. The method accesses a database 206 (shown in FIG. 1A) having a plurality of data fields related to the location for a plurality of teeth, a plurality of locations for each section of tooth, a plurality of standard shapes associated with the teeth, a plurality of standard shapes associated with each of the sections of tooth, and a plurality of bone density data for each section of tooth.
  • a 3-D image having a plurality of cubic voxels is generated, and the location for each voxel is identified.
  • the method then proceeds to identify a signal strength for each cubic voxel, and associates the signal strength for each voxel with the bone density data.
  • Signal strength refers to the total amount of RF power received by the receiver. This is divided into the useful signal, referred to as Ec/Io, and the noise floor.
  • the method at block 92 performs object identification at block 132 where an illustrative first object is generated by combining a first grouping of voxels having a first bone density.
  • the illustrative first object is compared to objects in the database 206 (shown in FIG. 1A).
  • the method then proceeds to identify irregularities at block 136 .
  • anomalies are identified after comparing the first object to one or more fields in the database 206 .
  • the database comprises a plurality of normative standards and statistical standards for anomaly detection that distinguish between anomalies and irregularities.
  • the anomaly detection at block 138 may also comprise generating a plurality of other objects and tagging the objects so that one or more objects may be combined.
  • the method may then proceed to identify one or more anomalies associated with the plurality of objects.
  • the method supports identifying one or more anomalies associated with at least one object that is tagged as a tooth object, in which the tooth object further comprises a plurality of tagged objects selected from a group consisting of an enamel object, a dentin object, a pulp object, a root object, and a nerve object.
  • teeth are not the only objects that grow and it shall be appreciated that the systems and methods described herein may be used to model bone growth and bone decay in general. Growth rate projections may be based on such parameters as age, gender, height, weight, ethnicity, and other such parameters that may be valuable to mathematically modeling growth rates. Those skilled in the art shall appreciate that measurements such as bone growth are also primary indicators and are provided for illustrative purposes only.
  • the relational effects resulting from having modeled the growth of a particular object are determined.
  • the modeled growth results in changes to the local conditions, and these changes are presented to the user.
  • the method then proceeds to decision diamond 144, where it is determined whether any of the parameters described above need to be changed. Therefore, modeled growth rates may be changed, thresholds for anomaly detection may be changed, and the basis for object identification may also be modified.
  • At least one expected growth rate is provided for at least one tooth.
  • the method then proceeds to mathematically model a growth rate for the first tooth object using the expected growth rate, and modifies the location of a plurality of objects surrounding the first tooth object due to the growth of the first tooth object.
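  • The growth modeling described in the preceding items can be pictured with a short sketch. The code below is not from the patent: it assumes a simple linear growth law and a purely geometric check for encroachment on neighboring objects, and parameters such as age or gender would enter only through the choice of growth rate.

```python
# Minimal sketch of growth-rate modeling for a tooth object: project the
# object's size forward using an expected growth rate, then flag neighboring
# objects whose recorded locations would be reached by the grown object.
# The linear growth law and the clearance value are assumptions.
import numpy as np

def project_growth(size_mm: float, rate_mm_per_month: float, months: int) -> list:
    return [size_mm + rate_mm_per_month * t for t in range(months + 1)]

def affected_neighbors(center, grown_reach_mm, neighbors, clearance_mm=0.5):
    # A neighbor is affected when the grown object reaches its recorded location.
    affected = []
    for name, location in neighbors.items():
        distance = float(np.linalg.norm(np.asarray(center) - np.asarray(location)))
        if distance < grown_reach_mm + clearance_mm:
            affected.append(name)
    return affected

# Hypothetical usage: a wisdom tooth growing 0.2 mm per month over 24 months.
sizes = project_growth(size_mm=8.0, rate_mm_per_month=0.2, months=24)
neighbors = {"second molar root": (3.0, 9.5, 0.0), "nerve canal": (1.0, 14.0, -2.0)}
print(affected_neighbors(center=(0.0, 0.0, 0.0), grown_reach_mm=sizes[-1],
                         neighbors=neighbors))
```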
  • the method may then proceed to identify an anomaly after comparing the first object to one or more fields in the database.
  • Objects may then be tagged so that one or more objects may be combined.
  • Anomalies may then be associated with one or more tooth objects selected from a group consisting of an enamel object, a dentin object, a pulp object, a root object, and a nerve object.
  • anomaly detection may be performed by identifying at least one threshold for anomaly detection. The gathered data is then compared to the threshold to determine if one or more anomalies have been detected.
  • the potential anomaly may also be associated with a first mathematical model, which is then compared to a second “normative” mathematical model using recently extracted data.
  • the first mathematical model may have variables that can be modified, which mirrors the ability to modify the object.
  • the correlation between the first mathematical model and second mathematical model is determined by a correlation estimate that may be based on the concordances of randomly sampled pairs.
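  • As a sketch of the pair-concordance idea mentioned above, the function below estimates agreement between a fitted model and a normative model from randomly sampled index pairs; the sampling scheme is an assumption and is not taken from the patent.

```python
# Estimate how well a fitted growth model agrees with a normative model by
# sampling random pairs of time points and counting concordant pairs, i.e.
# pairs where both models move in the same direction.
import random

def pair_concordance(fitted: list, normative: list, samples: int = 1000) -> float:
    assert len(fitted) == len(normative) >= 2
    concordant = 0
    for _ in range(samples):
        i, j = random.sample(range(len(fitted)), 2)
        if (fitted[i] - fitted[j]) * (normative[i] - normative[j]) > 0:
            concordant += 1
    return concordant / samples   # near 1.0: models agree; near 0.0: opposite trends

print(pair_concordance([1.0, 1.2, 1.5, 1.9], [1.0, 1.3, 1.6, 2.0]))  # -> 1.0
```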
  • the method may also provide for the use of clustering analysis.
  • Clustering provides an additional method for analyzing the data.
  • Spatial cluster detection has two objectives, namely, to identify the locations, shapes, and sizes of potentially anomalous spatial regions, and to determine whether each of these potential clusters is more likely to be a valid cluster or simply a chance cluster.
  • the process of spatial cluster detection can be separated into two parts: first, determining the expected result; second, determining which regions deviate from the expected result.
  • the process of determining which regions deviate from the expected result can be performed using a variety of techniques. For example, simple statistics can be used to determine a number of spatial standard deviations, and anomalies simply fall outside the standard deviations.
  • spatial scan statistics can be used as described by Kulldorff. (M. Kulldorff. A Spatial Scan Statistic. Communications in Statistics: Theory and Methods 26(6), 1481-1496, 1997.) In this method, a given set of spatial regions is searched and anomalous regions are found using hypothesis testing.
  • a generalized spatial scan framework can also be used. (M. R. Sabhnani, D. B. Neill, A. W. Moore, F.-C. Tsui, M. M. Wagner, and J. U. Espino. Detecting anomalous patterns in pharmacy retail data. KDD Workshop on Data Mining Methods for Anomaly Detection, 2005.)
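  • The two detection approaches above can be sketched as follows: simple statistics that flag regions more than a chosen number of standard deviations from the expected result, and a Kulldorff-style Poisson likelihood ratio for a candidate region. The region layout and thresholds are assumptions; only the general formulas follow the cited approach, and in practice the ratio would be maximized over candidate regions with significance assessed by Monte Carlo replication.

```python
# Spatial anomaly detection over per-region measurements (e.g. mean density
# per tooth section). deviation_anomalies() flags regions whose deviation from
# the expected value exceeds k standard deviations; kulldorff_llr() scores one
# candidate region with the Poisson log likelihood ratio used in spatial scan
# statistics.
import math
import numpy as np

def deviation_anomalies(observed: np.ndarray, expected: np.ndarray, k: float = 3.0) -> np.ndarray:
    residual = observed - expected
    sigma = residual.std() + 1e-9
    return np.flatnonzero(np.abs(residual) > k * sigma)   # indices of anomalous regions

def kulldorff_llr(n_in: float, mu_in: float, n_total: float, mu_total: float) -> float:
    # n_in / mu_in: observed and expected counts inside the candidate region;
    # n_total / mu_total: totals over the whole search area. Assumes non-zero counts.
    n_out, mu_out = n_total - n_in, mu_total - mu_in
    if n_in / mu_in <= n_out / mu_out:          # rate inside the region is not elevated
        return 0.0
    return (n_in * math.log(n_in / mu_in)
            + n_out * math.log(n_out / mu_out)
            - n_total * math.log(n_total / mu_total))

observed = np.array([10.0, 11.0, 9.5, 25.0])    # last region looks suspicious
expected = np.array([10.0, 10.0, 10.0, 10.0])
print(deviation_anomalies(observed, expected, k=2.0))   # -> [3]
print(round(kulldorff_llr(25.0, 10.0, 55.5, 40.0), 2))
```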
  • an anomaly 150 is detected in FIG. 11A and the exploded view in FIG. 11B .
  • the anomaly reflects that this is a problem tooth, and this anomaly can immediately be brought to the physician's attention using the systems and method described herein.
  • This image also shows the beginning phase of horizontal impaction, and the formation of a cyst.
  • a cyst is an abnormal membranous sac containing a gaseous, liquid, or semisolid substance. Additionally, at one location there is shown a dental cavity/caries that is just starting.

Abstract

A system and method for visualizing a dental image that includes a plurality of high resolution dental data, a plurality of tooth objects, at least one threshold and a processing module is described. The plurality of high resolution dental data is generated using computed tomography. The plurality of tooth objects selected for each tooth from the dental data includes at least one of an enamel object, a dentin object, a pulp object, a root object, and a nerve object. The at least one threshold is used to detect at least one problem tooth. The processing module detects at least one problem tooth. Additionally, the processing module mathematically models the growth of at least one tooth object, the decay for at least one tooth object, or the combination thereof. Furthermore, the processing module determines the effect the problem tooth has on at least one other tooth object.

Description

    CROSS-REFERENCE
  • This patent application is a continuation of patent application Ser. No. 11/890,533 filed on Aug. 6, 2007, which claims the benefit of provisional patent application 60/837,311, filed Aug. 11, 2006, all of which are incorporated herein by reference in their entirety.
  • FIELD
  • The present invention relates to a system and method for detecting a problem tooth. More specifically, the present invention is related to mathematically modeling the growth of at least one tooth object, the decay for at least one tooth object, or the combination thereof.
  • BACKGROUND
  • Generally, dental images are displayed in two dimensions using light tables, e.g. X-rays. These two-dimensional views provide a single perspective of the image. Three-dimensional (3-D) imaging systems have also been developed. These systems provide high-definition digital imaging with relatively short scan times, e.g. 20 seconds. The image reconstruction takes less than two minutes. The X-ray source is typically a high frequency source with a cone x-ray beam, and employs an image detector with an amorphous silicon flat panel. The images are 12-bit gray scale and may have a voxel size of 0.4 mm to 0.1 mm. Image acquisition is performed in a single session and is based on a 360 degree rotation of the X-ray source. The output data are digital images that are stored using conventional imaging formats such as the Digital Imaging and Communications in Medicine (DICOM) standard.
  • The 3-D volumetric imaging system provides complete views of oral and maxillofacial structures. The volumetric images provide complete 3-D views of anatomy for a more thorough analysis of bone structure and tooth orientation. These 3-D images are frequently used for implant and oral surgery, orthodontics, and TMJ analysis. There are a variety of different software solutions that can be integrated into the 3-D dental imaging systems. These third party solutions are generally related to implant planning, and assist in planning and placement of the implants. Additionally, the 3-D dental images can be used for developing models to assist in planning an operation.
  • In spite of the advances in the 3-D imaging systems and the 3-D imaging software, the software techniques for visualization of the dental images do not provide a dentist with sufficient flexibility to manipulate the 3-D image. Additionally, the visualization features provided by current third party solutions lack the ability to detect objects, detect irregularities, and detect anomalies.
  • SUMMARY
  • A system and method for visualizing a dental image that includes a plurality of high resolution dental data, a plurality of tooth objects, at least one threshold and a processing module is described. The plurality of high resolution dental data is generated using computed tomography. The plurality of tooth objects selected for each tooth from the dental data includes at least one of an enamel object, a dentin object, a pulp object, a root object, and a nerve object. The at least one threshold is used to detect at least one problem tooth. The processing module detects at least one problem tooth. Additionally, the processing module mathematically models the growth of at least one tooth object, the decay for at least one tooth object, or the combination thereof. Furthermore, the processing module determines the effect the problem tooth has on at least one other tooth object.
  • In one illustrative embodiment, the system and method includes a database that further includes a plurality of data fields that include a plurality of standard shapes associated with each tooth and a plurality of bone density data for each section of tooth. Additionally, the database includes a plurality of normative standards and at least one statistical standard for anomaly detection.
  • In another illustrative embodiment, the system and method includes identifying a common boundary between at least two tooth objects.
  • In yet another illustrative embodiment, the system and method includes identifying a particular tooth for further analysis. Also, the system and method includes analyzing the tooth objects for the particular tooth. Furthermore, the system and method includes analyzing the particular tooth object by slicing the tooth object at one or more locations.
  • FIGURES
  • Embodiments for the following description are shown in the following drawings:
  • FIG. 1A shows an illustrative system overview.
  • FIG. 1B is an illustrative general purpose computer.
  • FIG. 1C is an illustrative client-server system.
  • FIG. 2 is an illustrative raw image.
  • FIG. 3 is an illustrative object identification flowchart.
  • FIG. 4A is an illustrative drawing showing jaw object identification.
  • FIG. 4B is an illustrative drawing showing tooth object identification.
  • FIG. 5 is an illustrative 3-D image of a tooth object.
  • FIG. 6 is an illustrative first slice of the tooth object in FIG. 5.
  • FIG. 7 is an illustrative second slice of the tooth object in FIG. 5.
  • FIG. 8 is an illustrative third slice of the tooth object in FIG. 5.
  • FIG. 9 is an illustrative flowchart for anomaly detection and for modeling growth rates.
  • FIGS. 10A and 10B show a normal orientation for a wisdom tooth.
  • FIGS. 11A and 11B show the beginning phase of horizontal impaction.
  • FIGS. 12A and 12B show an illustrative example of cyst formation.
  • FIGS. 13A and 13B show an illustrative example of cyst growth.
  • FIGS. 14A and 14B show the resulting tooth decay and continuing cyst growth.
  • DESCRIPTION
  • In the following detailed description, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that structural, logical and electrical changes may be made without departing from the spirit and scope of the claims. The following detailed description is, therefore, not to be taken in a limited sense.
  • Note, the leading digit(s) of the reference numbers in the Figures correspond to the figure number, with the exception that identical components which appear in multiple figures are identified by the same reference numbers.
  • The systems and methods described herein are generally related to visualization tools that operate with 3-D images generated using tomography. Tomography is imaging by sections or sectioning. The mathematical procedures for imaging are referred to as tomographic reconstruction. Imaging is the process of creating a virtual image of a physical object, its detailed structure, its substructure or any combination thereof. Those skilled in the art shall appreciate that tomographic imaging includes analyzing the attenuation of the captured image using the Radon transform and filtered back projection. There are a variety of different types of tomography including but not limited to Atom Probe Tomography, Computed Tomography, Electrical Impedance Tomography, Magnetic Resonance Tomography, Optical Coherence Tomography, Positron Emission Tomography, Quantum Tomography, Single Photon Emission Computed Tomography, and X-Ray Tomography. Attenuation refers to any reduction in signal strength.
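  • The Radon transform and filtered back projection mentioned above can be illustrated with a short script. This is not part of the patent; it is a minimal sketch that uses scikit-image (an assumed tooling choice) to simulate projections of a single 2-D slice and reconstruct it.

```python
# Minimal sketch of tomographic reconstruction: the Radon transform produces
# a sinogram of attenuation projections, and filtered back projection (iradon,
# ramp filter by default) recovers the slice. A standard phantom stands in for
# one dental CT slice; scikit-image and NumPy are assumed dependencies.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

slice_2d = rescale(shepp_logan_phantom(), 0.5)          # stand-in for one CT slice
angles = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles in degrees

sinogram = radon(slice_2d, theta=angles)                # simulated attenuation projections
reconstruction = iradon(sinogram, theta=angles)         # filtered back projection

print("mean reconstruction error:", float(np.abs(reconstruction - slice_2d).mean()))
```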
  • The systems and methods described herein allow improved visualization, object identification, anomaly detection, and predictive growth rate features. Visualization refers to the process of taking one or more images and incorporating a comprehension of the physical relationship or significance of the features contained in the images. An object is a physical relationship within an image that is capable of being grasped through visualization, and an object is composed of a plurality of voxels that share a common basis. A variety of techniques, methods, algorithms, mathematical formulae, or any combination thereof may be used to identify a common basis. In the illustrative examples, elements such as location, bone density, shape, or a combination thereof may be used to identify at least one common basis that is used for object identification. Bone density is the measure of the mass of bone in relation to volume. Therefore, one or more common bases may be used for object identification.
  • It shall be appreciated by those of ordinary skill in the art that the systems and methods described herein can be applied to a plurality of different modalities. A modality in a medical image is any of the various types of equipment or probes used to acquire images of the body. Magnetic Resonance Imaging is an example of a modality in this context.
  • Referring to FIG. 1A, there is shown an illustrative system. The illustrative system 200 receives a 3-D dental image 202 that is stored in a first database 204 that stores archived images. A digital acquisition and processing component 208 processes received 3-D dental images. Particular information that is used to process the 3-D dental images is stored in the second database 206. An interactive graphical user interface 210 permits a user to manipulate the processed images and to interact with each illustrative dental object. By way of example and not of limitation, the 3-D dental image is generated by a medical imaging device such as an i-CAT 3-D Imaging System from Imaging Sciences International.
  • The databases 204 and 206 comprise a plurality of data fields including, but not limited to, data fields that correspond to the location for a plurality of teeth, a plurality of locations for each section of tooth, a plurality of standard shapes associated with each tooth, a plurality of standard shapes associated with each of the sections of tooth, and a plurality of bone density data for each section of tooth.
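  • One way to picture the data fields listed above is the hypothetical record layout below; the class and field names are illustrative assumptions rather than the patent's actual schema.

```python
# Hypothetical layout for the reference data described above: per-tooth and
# per-section locations, standard shapes, and bone density data.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class SectionRecord:
    name: str                        # e.g. "enamel", "dentin", "pulp", "root"
    location: Point3D                # representative location of the section
    standard_shape: List[Point3D]    # reference surface points for the section
    bone_density: float              # reference bone density for the section

@dataclass
class ToothRecord:
    tooth_id: str                    # e.g. "lower-right third molar"
    location: Point3D                # expected location within the jaw
    standard_shape: List[Point3D]    # reference shape for the whole tooth
    sections: Dict[str, SectionRecord] = field(default_factory=dict)

# Databases 204/206 would hold one ToothRecord per tooth, alongside the
# normative and statistical standards used later for anomaly detection.
reference_db: Dict[str, ToothRecord] = {}
```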
  • The digital processing component 208 is configured to process the 3-D image, and is in operative communication with the database. The digital processing component is configured to provide improved visualization of the medical image. The digital processing component 208 is configured to identify an object by combining a plurality of voxels having a common density and tagging the object using the methods described herein. A voxel is a volume element that represents a value in 3-D space. Common density is a density associated with a particular object in an image, in which a degree of attenuation within the image is associated with density.
  • Additionally, the digital processing component 208 is also configured to permit modifying the shape of at least one object. Furthermore, the digital processing component 208 is configured to provide a method for detecting anomalies and mathematically modeling growth rates.
  • In one embodiment, the digital processing component 208 is a computer having a processor as shown in FIG. 1B. The illustrative general purpose computer 10 is suitable for implementing the systems and methods described herein. The general purpose computer 10 includes at least one central processing unit (CPU) 12, a display such as monitor 14, and an input device 15 such as cursor control device 16 or keyboard 17. The cursor control device 16 can be implemented as a mouse, a joystick, a series of buttons, or any other input device which allows a user to control the position of a cursor or pointer on the display monitor 14. Another illustrative input device is the keyboard 17. The general purpose computer may also include random access memory (RAM) 18, hard drive storage 20, read-only memory (ROM) 22, a modem 26 and a graphic co-processor 28. All of the elements of the general purpose computer 10 may be tied together by a common bus 30 for transporting data between the various elements.
  • The bus 30 typically includes data, address, and control signals. Although the general purpose computer 10 illustrated in FIG. 1B includes a single data bus 30 which ties together all of the elements of the general purpose computer 10, there is no requirement that there be a single communication bus which connects the various elements of the general purpose computer 10. For example, the CPU 12, RAM 18, ROM 22, and graphics co-processor might be tied together with a data bus while the hard disk 20, modem 26, keyboard 24, display monitor 14, and cursor control device are connected together with a second data bus (not shown). In this case, the first data bus 30 and the second data bus could be linked by a bi-directional bus interface (not shown). Alternatively, some of the elements, such as the CPU 12 and the graphics coprocessor 28 could be connected to both the first data bus 30 and the second data bus and communication between the first and second data bus would occur through the CPU 12 and the graphics co-processor 28. The methods of the present invention are thus executable on any general purpose computing architecture, but there is no limitation that this architecture is the only one which can execute the methods of the present invention.
  • Various visualization and analysis applications may be run on the illustrative general purpose computer 10. For example, BioImage and BioPSE Power App are visualization and analysis applications developed by the University of Utah that may run on the computer 10. The software programs explore scalar data sets such as medical imaging volumes. In operation, the user chooses an input data set. BioImage supports a variety of different industry standard formats including DICOM and Analyze. For example, a dental data set containing a single tooth may be loaded into these programs.
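  • Independent of any particular viewer, loading a DICOM data set into a voxel volume might look like the sketch below; pydicom is an assumed dependency and the directory path is hypothetical.

```python
# Read a directory of single-slice DICOM files into one 3-D voxel array,
# ordered by slice position and rescaled with the stored slope/intercept so
# voxel values are comparable across scans. Assumes pydicom and NumPy.
import glob
import numpy as np
import pydicom

def load_dicom_volume(directory: str) -> np.ndarray:
    slices = [pydicom.dcmread(path) for path in sorted(glob.glob(f"{directory}/*.dcm"))]
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))   # order along the scan axis
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])
    slope = float(getattr(slices[0], "RescaleSlope", 1.0))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
    return volume * slope + intercept

volume = load_dicom_volume("dental_scan")    # hypothetical folder of .dcm slices
print(volume.shape)                          # (slices, rows, columns)
```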
  • After the data is loaded, the illustrative software program permits the user to resample, crop, histogram or median filter the data. Using a cropping filter permits visually removing the excess data from the borders of the volume. The GUI permits the user to explore the data volume in both 2-D and 3-D using the rendering panes in the software.
  • The software also permits slice views wherein the user can change slices and can adjust the contrast and brightness of the data. Yet another feature of BioImage is the volume rendering engine. From the volume rendering tab, the user turns on the direct volume rendering visualization. The volume rendering algorithm uses a transfer function to assign color and opacity based on both data values and gradient magnitudes of the volume. Thus, the interface between the dentin and pulp of the tooth may be colored differently. Dentin is a calcified tissue of the body and, along with enamel, cementum, and pulp, is one of the four major components of teeth. Pulp is the part in the center of a tooth made up of living soft tissue and cells called odontoblasts.
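  • The transfer-function idea, assigning color and opacity from both the data value and the gradient magnitude, can be sketched generically as follows; this is not BioImage's implementation, and the particular color mapping is an assumption.

```python
# Sketch of a transfer function for direct volume rendering: color follows the
# data value, while opacity is weighted by gradient magnitude so material
# boundaries (such as the dentin/pulp interface) stand out. NumPy only.
import numpy as np

def transfer_function(volume: np.ndarray) -> np.ndarray:
    gx, gy, gz = np.gradient(volume.astype(np.float32))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)

    value = (volume - volume.min()) / (volume.max() - volume.min() + 1e-6)   # 0..1
    edge = grad_mag / (grad_mag.max() + 1e-6)                                # 0..1

    rgba = np.empty(volume.shape + (4,), dtype=np.float32)
    rgba[..., 0] = value              # red channel follows density
    rgba[..., 1] = 0.8 * value        # green slightly attenuated
    rgba[..., 2] = 1.0 - value        # blue highlights low-density material
    rgba[..., 3] = edge * value       # opaque only at dense boundaries
    return rgba
```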
  • Alternatively, the methods described herein may use a client/server architecture which is shown in FIG. 1C. It shall be appreciated by those of ordinary skill in the art that the client/server architecture 50 can be configured to perform similar functions as those performed by the general purpose computer 10. In the client-server architecture, communication generally takes the form of a request message 52 from a client 54 to the server 56 asking for the server 56 to perform a server process 58. The server 56 performs the server process 58 and sends back a reply 60 to a client process 62 resident within client 54. Additional benefits from use of a client/server architecture include the ability to store and share gathered information and to collectively analyze gathered information. In another alternative embodiment, a peer-to-peer network (not shown) can be used to implement the methods described herein.
  • In operation, the general purpose computer 10, client/server network system 50, or peer-to-peer network system execute a sequence of machine-readable instructions. These machine readable instructions may reside in various types of signal bearing media. In this respect, one aspect of the present invention concerns a programmed product, comprising signal-bearing media tangibly embodying a program of machine-readable instructions executable by a digital data processor such as the CPU 12 for the general purpose computer 10.
  • It shall be appreciated by those of ordinary skill that the computer readable medium may comprise, for example, RAM 18 contained within the general purpose computer 10 or within a server 56. Alternatively, the computer readable medium may be contained in another signal-bearing media, such as a magnetic data storage diskette that is directly accessible by the general purpose computer 10 or the server 56. Whether contained in the general purpose computer or in the server, the machine readable instructions within the computer readable medium may be stored in a variety of machine readable data storage media, such as a conventional “hard drive” or a RAID array, magnetic tape, electronic read-only memory (ROM), an optical storage device such as CD-ROM or DVD, or other suitable signal bearing media including transmission media such as digital and analog communication links. In an illustrative embodiment, the machine-readable instructions may comprise software object code from a programming language such as C++, Java, or Python.
  • Referring to FIG. 2 there is shown an illustrative raw image of a mouth and a tooth. In general, FIG. 2 provides a visual aid of the basic anatomy of the mouth and the tooth similar to what may be generated using the illustrative i-CAT imaging system described above. This visual aid has many limitations, namely, the multiple objects in the image have not been identified. Additionally, the image is essentially a raw image that has not been standardized using some type of calibrated sample.
  • Referring to FIG. 3 there is shown an illustrative flowchart of a method for visualizing objects in a 3-D image 70. The illustrative method 70 for visualizing a 3-D medical image comprises receiving a plurality of high resolution 3-D medical data at block 72 that are generated using a tomography technique, e.g. computed tomography (CT) scans. By way of example and not of limitation, the method for visualizing a dental image comprises receiving a plurality of high resolution 3-D dental data associated with a patient's mouth that is generated using x-ray tomography.
  • The high resolution 3-D data received at block 72 includes a standard for calibration purposes. For example, with respect to CT scans, the standard may have a particular density that can be associated with a bone density. The standard is composed of a material that can be associated with the bone density of an illustrative tooth. The standard may be placed adjacent to the patient and held physically or mechanically in place. Alternatively, the patient may place the standard in the mouth and bite the standard.
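  • One way to use such a calibration standard is a linear mapping from attenuation to approximate density, anchored by air and by the standard's known density. The function below is an assumed approach, not a procedure specified in the patent.

```python
# Map raw attenuation values to approximate densities using two references
# present in the scan: air (density taken as zero) and the calibration
# standard of known density that the patient bites or that is held in place.
import numpy as np

def calibrate_density(attenuation: np.ndarray,
                      standard_mask: np.ndarray,
                      standard_density: float,
                      air_mask: np.ndarray) -> np.ndarray:
    att_air = float(attenuation[air_mask].mean())        # attenuation of air voxels
    att_std = float(attenuation[standard_mask].mean())   # attenuation of the standard
    scale = standard_density / (att_std - att_air)       # assumed linear response
    return (attenuation - att_air) * scale               # estimated density per voxel
```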
  • The method then proceeds to block 74 where the high resolution data is converted into an image comprised of a plurality of cubic voxels. At block 76, the method proceeds to identify the location for each cubic voxel. The method then proceeds to identify a degree of attenuation for the voxels at block 78. Attenuation is the reduction in amplitude and intensity of a signal.
  • At block 80, the method associates a common density with the degree of attenuation. The method also associates the degree of attenuation for the standard with the previously determined standard density related to block 72. For example, the degree of attenuation for each voxel is associated with at least one of a plurality of common bone densities.
  • The method then proceeds to block 82 that identifies an object by combining the voxels having the common density and determines an initial shape for the object. By way of example and not of limitation, the identifying of the object may comprise comparing the initial shape of the object to a standard shape. The common density may also be modified by a user, thereby resulting in the object having a different shape. For example, the 3-D data may be dental data and the common density is a bone density associated with teeth, mouth, or jaw. By way of example and not of limitation, the method then proceeds to generate dental objects by combining voxels having one of the common bone densities. The boundaries for each dental object are also determined. Various opportunities may be presented where each dental object is compared to a standard shape to confirm identification of each dental object.
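  • One way to picture block 82 is as connected-component labeling over the voxels that share a common density class, followed by a bounding-box (boundary) computation per component. The sketch below uses SciPy for that purpose; it is an assumed implementation, not the specific algorithm of this disclosure.

```python
import numpy as np
from scipy import ndimage

def objects_from_density_class(class_volume, target_class):
    """Combine contiguous voxels of one density class into candidate objects;
    return the label volume and a bounding box per object."""
    mask = (class_volume == target_class)
    labeled, count = ndimage.label(mask)       # connected-component labeling
    boxes = ndimage.find_objects(labeled)      # tuple of slices per labeled object
    return labeled, boxes

# Synthetic class volume containing two separate dense regions.
classes = np.zeros((32, 32, 32), dtype=int)
classes[5:10, 5:10, 5:10] = 3
classes[20:25, 20:25, 20:25] = 3
labeled, boxes = objects_from_density_class(classes, target_class=3)
print(len(boxes))   # -> 2 candidate objects
```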
  • The method then proceeds to block 84 where at least one object is tagged for further analysis. For example, the tagged objects in the tooth may be associated with a “metatag” so that each object can be quickly identified and viewed. A “metatag” as used herein refers to a “tag” that is associated with each object, wherein the “tag” is searchable and is used to provide a structured means for identifying objects so that the “tagged” object or objects can be viewed. The tagged object may then be extracted for further analysis. The extracted object may then be viewed using a plurality of different perspectives. The tagged and extracted object can then be viewed by slicing the object at desired locations. An illustrative object may be a particular tooth object, an enamel object, a dentin object, a pulp object, a root object, a nerve object, or any other such dental object associated with the mouth and jaw. A plurality of objects may also be identified, such as a plurality of teeth objects, a plurality of nerve objects, and a plurality of bone objects. More generally, a plurality of objects may also be identified by combining the voxels having one of a plurality of different common densities and by determining the boundaries for each object.
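  • A metatag can be as simple as a searchable string mapped to an object label, as in the sketch below. The tag naming scheme, the extract_tagged helper, and the synthetic label volume are illustrative assumptions.

```python
import numpy as np

# Hypothetical metatag registry: searchable tag string -> object label number.
TAGS = {"tooth/18/enamel": 1, "tooth/18/dentin": 2}

def extract_tagged(volume, labeled, tag):
    """Keep only the voxels of the tagged object; zero out everything else."""
    return np.where(labeled == TAGS[tag], volume, 0.0)

def slice_object(extracted, axis, index):
    """View the extracted object sliced at a chosen location and axis."""
    return np.take(extracted, index, axis=axis)

# Synthetic volume and label array for demonstration.
volume = np.random.default_rng(2).uniform(0.0, 2.5, (32, 32, 32))
labeled = np.zeros((32, 32, 32), dtype=int)
labeled[10:20, 10:20, 10:20] = 1
enamel = extract_tagged(volume, labeled, "tooth/18/enamel")
axial_view = slice_object(enamel, axis=2, index=15)
```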
  • At block 86, each of the objects is then compared to a standard shape, and each of the objects is then tagged to permit one or more objects to be combined. The plurality of objects may include, for example, a plurality of teeth objects, a plurality of nerve objects, a plurality of bone objects, or any other such objects.
  • The flowchart also describes modifying the shape of at least one object in a scanned 3-D medical image at block 88. After the method proceeds to tag a first object and a second object, a common boundary between the first object and the second object is identified. The common boundary is configured to identify a change in bone density between the first object and the second object. The method permits a user to modify the common boundary by modifying the apparent bone density of the first object. The method also provides for coloring each voxel according to each of the bone densities.
  • The common boundary spans a relatively broad area when there is little change in bone density between the first object and the second object. The method also permits evaluating a plurality of standard shapes when generating the first object and the second object. For example, each of the plurality of objects may have a plurality of tags, in which each tag may be extracted from the image as represented by block 90. By way of example and not of limitation, the first object is tagged as a first tooth and the second object is tagged as a second tooth. In another illustrative example, the first object and the second object are selected from a group consisting of a tooth object, an enamel object, a dentin object, a pulp object, a root object, a nerve object, a plurality of teeth objects, a plurality of nerve objects, or a plurality of bone objects. Additionally, the method 70 also supports performing imaging operations, such as slicing objects, as represented by block 92 and described in further detail below.
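  • The common boundary between two tagged objects can be located by looking for voxels of the first object that touch the second, for example with a one-voxel dilation as in the sketch below. The helper and the synthetic two-object label volume are assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage

def common_boundary(labeled, first, second):
    """Voxels of `first` that touch `second`: the shared boundary on the
    first object's side of the density transition."""
    grown_second = ndimage.binary_dilation(labeled == second)  # expand by one voxel
    return (labeled == first) & grown_second

# Synthetic label volume with two abutting objects.
labeled = np.zeros((16, 16, 16), dtype=int)
labeled[:, :, :8] = 1
labeled[:, :, 8:] = 2
boundary = common_boundary(labeled, first=1, second=2)
print(int(boundary.sum()))   # number of boundary voxels (one 16x16 plane here)
```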
  • In operation, the method involves the known physiologies discovered by the method above and supports analyzing relevant materials. After a 3-D DICOM file is converted to a 3-D volumetric image, if the process has not already been completed, the volume is oriented according to its axes, the body portion contained, scale, and the like. Object identification may be performed as a function of common densities, density transitions, and known or standard shape similarities. A map of the objects can then be created and displayed.
  • The systems and methods described may be applied to non-specific objects, tissue identification by density, adjacent material, and general location. Margin (junction) shape determination, specific material shape (object) determination based on material profile, and cataloging of the same may also be performed. For example, the identification of objects, passageways, etc. (e.g. teeth, nerve canals, implants, vertebrae, jaw, etc.) is performed. The objective is to identify recurring examples of similar objects, such as teeth, and to catalog their identification both by normative standards and by reference to statistically compiled identifiers and shapes.
  • Referring to FIG. 4A there is shown an illustrative drawing 100 with dental and jaw object identification. As presented, the tagged objects in the tooth are associated with a “metatag” or “searchable tag” so that each object can be quickly identified and viewed. A variety of soft tissue objects, such as nerve objects, are shown. The nerve objects refer to the sensitive tissue in the pulp of a tooth, or any bundle of nerve fibers running to various organs in the body. Additionally, a standard 102 for calibration purposes is shown. By way of example and not of limitation, these nerve objects are typically identified using MRI or CT scans.
  • Referring to FIG. 4B there is shown an exploded view of a third molar tooth object 110, which is identified using the systems and methods described herein. A tooth is one of a set of hard, bone-like structures rooted in sockets in the jaws of vertebrates, typically composed of a core of soft pulp surrounded by a layer of hard dentin that is coated with cementum or enamel at the crown, and used for biting or chewing food or as a means of attack or defense. The tooth object is composed of a variety of different objects. One such object is the enamel object, which is the hard, calcareous substance covering the exposed portion of a tooth. Another object is the dentin object, which is the main, calcareous part of a tooth, beneath the enamel, surrounding the pulp chamber and root canals. The pulp object is the soft tissue forming the inner structure of a tooth and containing nerves and blood vessels. The root object is the embedded part of an organ or structure such as a tooth or nerve, and includes the part of the tooth that is embedded in the jaw and serves as support. The root canal object refers to the portion of the pulp cavity inside the root of the tooth, namely, the chamber within the root of the tooth that contains the pulp. The gingiva or gum object is the firm connective tissue covered by mucous membrane that envelops the alveolar arches of the jaw and surrounds the necks of the teeth. The neck object is the constriction between the root and the crown and can also be referred to as the cemento-enamel junction.
  • Referring to FIG. 5 there is shown an illustrative 3-D image of an illustrative tooth object 120. The tooth object 120 comprises each of the objects described above, such as the enamel, dentin, and pulp. A variety of different slices of the 3-D image are presented. For example, FIG. 6 provides an illustrative first slice 122 of the tooth object in FIG. 5. FIG. 7 provides an illustrative second slice 124 of the tooth object in FIG. 5, and FIG. 8 is an illustrative third slice 126 of the tooth object. Each of these drawings depicts that the tagged objects can be “sliced” to provide a clearer view of the particular tooth. This slicing process may also be used for anomaly detection as described below.
  • Referring to FIG. 9 there is shown an illustrative flowchart for anomaly detection and for modeling growth rates that is a continuation of the flowchart in FIG. 3. An anomaly is a deviation or departure from the normal or common order, form, or rule, and is generally used to refer to a substantial defect. An “irregularity” is distinguishable from an anomaly, since “irregular” simply means lacking symmetry or evenness, or having a minor defect. The flowchart describes a method for identifying anomalies in a scanned 3-D dental image. The method accesses a database 206 (shown in FIG. 1A) having a plurality of data fields related to a location for each of a plurality of teeth, a plurality of locations for each section of tooth, a plurality of standard shapes associated with the teeth, a plurality of standard shapes associated with each of the sections of tooth, and a plurality of bone density data for each section of tooth.
  • As previously described in FIG. 3, a 3-D image having a plurality of cubic voxels is generated, and the location for each voxel is identified. For the illustrative example described herein, the method then proceeds to identify a signal strength for each cubic voxel and associates the signal strength for each voxel with the bone density data. Signal strength refers to the total amount of RF power received by the receiver; this is divided into the useful signal, referred to as Ec/Io, and the noise floor.
  • Continuing from block 92, the method performs object identification at block 132, where an illustrative first object is generated by combining a first grouping of voxels having a first bone density. At block 134, the illustrative first object is compared to objects in the database 206 (shown in FIG. 1A). The method then proceeds to identify irregularities at block 136. At block 138, anomalies are identified after comparing the first object to one or more fields in the database 206. The database comprises a plurality of normative standards and statistical standards for anomaly detection that distinguish between anomalies and irregularities.
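  • The comparison at blocks 134 through 138 can be sketched as scoring how far a candidate object's descriptor falls from a normative entry in the database. The descriptor (voxel count), the field names, and the thresholds separating an irregularity from an anomaly below are all assumptions made for this example.

```python
import numpy as np

# Hypothetical normative record for one object type; a real database would
# hold richer shape and density fields than a single volume statistic.
NORMATIVE = {"third_molar": {"volume_mean": 1200.0, "volume_std": 150.0}}

def classify_deviation(object_mask, kind, minor=1.5, major=3.0):
    """Return 'normal', 'irregularity', or 'anomaly' according to how many
    standard deviations the object's volume lies from the normative mean."""
    ref = NORMATIVE[kind]
    z = abs(object_mask.sum() - ref["volume_mean"]) / ref["volume_std"]
    if z >= major:
        return "anomaly"
    return "irregularity" if z >= minor else "normal"

mask = np.zeros((32, 32, 32), dtype=bool)
mask[8:15, 8:15, 8:15] = True                    # 343 voxels, well under the mean
print(classify_deviation(mask, "third_molar"))   # -> "anomaly"
```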
  • The anomaly detection at block 138 may also comprise generating a plurality of other objects and tagging the objects so that one or more objects may be combined. The method may then proceed to identify one or more anomalies associated with the plurality of objects. For example, the method supports identifying one or more anomalies associated with at least one object that is tagged as a tooth object, in which the tooth object further comprises a plurality of tagged objects selected from a group consisting of an enamel object, a dentin object, a pulp object, a root object, and a nerve object.
  • The method then proceeds to block 140 and performs the process of mathematically modeling growth rates. Although the illustrative example of teeth is described herein, teeth are not the only objects that grow and it shall be appreciated that the systems and methods described herein may be used to model bone growth and bone decay in general. Growth rate projections may be based on such parameters as age, gender, height, weight, ethnicity, and other such parameters that may be valuable to mathematically modeling growth rates. Those skilled in the art shall appreciate that measurements such as bone growth are also primary indicators and are provided for illustrative purposes only.
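  • A growth-rate projection of the kind described above can be parameterized by patient attributes such as age. The linear model and its coefficients below are assumptions made for illustration; the disclosure does not prescribe a particular formula.

```python
def projected_volume(current_volume, months, age_years,
                     base_rate=0.8, juvenile_factor=1.5):
    """Project an object's volume after `months`, growing faster for younger
    patients (rates in voxels per month; illustrative values only)."""
    rate = base_rate * (juvenile_factor if age_years < 18 else 1.0)
    return current_volume + rate * months

print(projected_volume(343.0, months=12, age_years=15))   # -> 357.4
```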
  • At block 142, the relational effects resulting from having modeled the growth of a particular object are determined. Thus, the modeled growth results in changes to the local conditions, and these changes are presented to the user.
  • The method then proceeds to decision diamond 144, where it is determined whether any of the parameters described above should be changed. Thus, modeled growth rates may be changed, thresholds for anomaly detection may be changed, and the basis for object identification may also be modified.
  • In operation, at least one expected growth rate is provided for at least one tooth. After the first tooth object is compared to the first tooth object data in the database, the method then proceeds to mathematically model a growth rate for the first tooth object using the expected growth rate, and modifies the location of a plurality of objects surrounding the first tooth object due to the growth of the first tooth object. The method may then proceed to identify an anomaly after comparing the first object to one or more fields in the database. Objects may then be tagged so that one or more objects may be combined. Anomalies may then be associated with one or more tooth objects selected from a group consisting of an enamel object, a dentin object, a pulp object, a root object, and a nerve object.
  • By way of example and not of limitation, anomaly detection may be performed by identifying at least one threshold for anomaly detection. The gathered data is then compared to the threshold to determine if one or more anomalies have been detected.
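  • Threshold-based detection of this kind reduces to comparing gathered measurements against configured limits, as in the sketch below. The measurement names and threshold values are illustrative assumptions.

```python
# Hedged sketch: per-feature detection thresholds (names and values assumed).
THRESHOLDS = {"cyst_diameter_mm": 3.0, "caries_depth_mm": 1.0}

def flag_anomalies(measurements):
    """Return the subset of gathered measurements that exceed their thresholds."""
    return {name: value for name, value in measurements.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]}

print(flag_anomalies({"cyst_diameter_mm": 4.2, "caries_depth_mm": 0.4}))
# -> {'cyst_diameter_mm': 4.2}
```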
  • The potential anomaly may also be associated with a first mathematical model, which is then compared to a second “normative” mathematical model using recently extracted data. The first mathematical model may have variables that can be modified, which mirrors the ability to modify the object. The correlation between the first mathematical model and second mathematical model is determined by a correlation estimate that may be based on the concordances of randomly sampled pairs.
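  • A correlation estimate based on the concordances of randomly sampled pairs can be sketched as a Kendall-tau-style count of pairs in which the two models move in the same direction. The sampling scheme below is an assumed illustration, not the disclosure's exact estimator.

```python
import random

def concordance_correlation(first_model, second_model, pairs=1000, seed=0):
    """Estimate agreement between two model outputs by sampling index pairs
    and counting whether both series move in the same direction."""
    rng = random.Random(seed)
    n, concordant, counted = len(first_model), 0, 0
    for _ in range(pairs):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        dx = first_model[i] - first_model[j]
        dy = second_model[i] - second_model[j]
        if dx * dy != 0:
            counted += 1
            concordant += dx * dy > 0
    return 2.0 * concordant / counted - 1.0 if counted else 0.0

observed = [1.0, 1.4, 2.1, 2.9, 3.8]    # potential-anomaly model output
normative = [1.0, 1.3, 1.9, 2.6, 3.4]   # "normative" model output
print(concordance_correlation(observed, normative))   # -> 1.0 for these series
```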
  • Additionally, the method may also provide for the use of clustering analysis. Clustering provides an additional method for analyzing the data. Spatial cluster detection has two objectives: first, to identify the locations, shapes, and sizes of potentially anomalous spatial regions; and second, to determine whether each of these potential clusters is more likely to be a valid cluster or simply a chance cluster. The process of spatial cluster detection can be separated into two parts: first, determining the expected result, and second, determining which regions deviate from the expected result.
  • The process of determining which regions deviate from the expected result can be performed using a variety of techniques. For example, simple statistics can be used to determine a number of spatial standard deviations, and anomalies are simply the regions that fall outside those standard deviations. Alternatively, spatial scan statistics can be used as described by Kulldorff (M. Kulldorff, “A Spatial Scan Statistic,” Communications in Statistics: Theory and Methods 26(6), 1481-1496, 1997). In this method, a given set of spatial regions is searched and regions are found using hypothesis testing. A generalized spatial scan framework can also be used (M. R. Sabhnani, D. B. Neill, A. W. Moore, F.-C. Tsui, M. M. Wagner, and J. U. Espino, “Detecting Anomalous Patterns in Pharmacy Retail Data,” KDD Workshop on Data Mining Methods for Anomaly Detection, 2005).
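  • The simple-statistics variant mentioned above, in which anomalies are the regions falling outside a chosen number of spatial standard deviations, can be sketched as follows; the region counts and the two-standard-deviation cutoff are assumptions for illustration. The Kulldorff scan statistic and the generalized framework cited above would replace this step with a likelihood-ratio search over candidate regions.

```python
import numpy as np

def spatial_outlier_regions(region_values, k=2.0):
    """Flag regions whose value lies more than `k` standard deviations from
    the mean value across all regions."""
    values = np.asarray(region_values, dtype=float)
    z_scores = (values - values.mean()) / values.std()
    return np.where(np.abs(z_scores) > k)[0]   # indices of anomalous regions

# Suspect-voxel counts per spatial region (e.g., per tooth); illustrative data.
region_counts = [3, 4, 5, 4, 3, 21, 4, 5]
print(spatial_outlier_regions(region_counts))   # -> [5]
```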
  • It shall be appreciated by those skilled in the art that the particular algorithm that is used for anomaly detection will depend on the particular application and be subject to system limitations. Thus, a variety of different algorithms for anomaly detection may be used.
  • An illustrative method for anomaly detection for a tooth object is shown in FIG. 10 through FIG. 14. The anomaly detection also includes modeling the crown of a tooth object and the effect the tooth object has on surrounding objects. Referring to FIG. 10A and the exploded view in FIG. 10B, there is shown a normal orientation for a wisdom tooth. In this orientation, the wisdom tooth in question has sufficient space so that there will be no horizontal impaction.
  • With respect to another patient, an anomaly 150 is detected in FIG. 11A and the exploded view in FIG. 11B. The anomaly reflects that this is a problem tooth, and this anomaly can immediately be brought to the physician's attention using the systems and methods described herein. This image also shows the beginning phase of horizontal impaction and the formation of a cyst. A cyst is an abnormal membranous sac containing a gaseous, liquid, or semisolid substance. Additionally, at the indicated location there is shown a dental cavity/caries that is just starting.
  • Referring to FIG. 12A and the exploded view in FIG. 12B there is shown an illustrative example of the progression, i.e. growth, of the cyst and cavity/caries after the appropriate growth models have been associated with the particular tooth 150 and the surrounding teeth. In FIG. 13A and the exploded view in FIG. 13B, there is shown the effect of cyst growth and the initial stages of tooth decay on tooth 150. Tooth decay is an infectious, transmissible disease caused by bacteria. The damage done to teeth by this disease is commonly known as cavities. Tooth decay can cause pain and, if not treated properly, lead to infections in surrounding tissues and tooth loss. The progression of the tooth decay and cyst formation is then shown in FIG. 14A and the exploded view in FIG. 14B.
  • The illustrative systems and methods described above have been developed to assist in visualizing objects, permitting a user to modify objects, detecting anomalies in objects, and modeling growth rates associated with particular objects. It shall be appreciated by those of ordinary skill in the various arts having the benefit of this disclosure that the systems and methods described can be applied to many disciplines outside of the field of dentistry. Furthermore, alternate embodiments of the invention that implement the systems in hardware, firmware, or a combination of hardware and software, as well as distributing the modules or the data in a different fashion, will be apparent to those skilled in the art. Further still, the illustrative methods described may vary as to order and implemented algorithms.
  • Although the description above contains many limitations in the specification, these should not be construed as limiting the scope of the claims but as merely providing illustrations of some of the presently preferred embodiments of this invention. Many other embodiments will be apparent to those of skill in the art upon reviewing the description. Thus, the scope of the invention should be determined by the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (20)

What is claimed is:
1. A system for visualizing a dental image, comprising:
a plurality of high resolution dental data generated using computed tomography;
a plurality of tooth objects selected for each tooth from the dental data from the group consisting of an enamel object, a dentin object, a pulp object, a root object, and a nerve object;
at least one threshold to detect at least one problem tooth;
a processing module that detects at least one problem tooth;
the processing module mathematically models the growth of at least one tooth object; and
the processing module determines the effect the problem tooth has on at least one other tooth object.
2. The system of claim 1 further comprising a database that includes a plurality of data fields that include a plurality of standard shapes associated with each tooth and a plurality of bone density data for each section of tooth.
3. The system of claim 2 wherein the database includes a plurality of normative standards and at least one statistical standard for anomaly detection.
4. The system of claim 2 further comprising a common boundary between at least two tooth objects that is identified by the system.
5. The system of claim 2 further comprising a particular tooth that is identified by the system for further analysis.
6. The system of claim 5 wherein the processing module analyzes the tooth objects for the particular tooth.
7. The system of claim 6 wherein the processing module analyzes the particular tooth object by slicing the tooth object at one or more locations.
8. A system for visualizing a dental image, comprising:
a plurality of high resolution dental data generated using computed tomography;
a plurality of tooth objects selected for each tooth from the dental data from the group consisting of an enamel object, a dentin object, a pulp object, a root object, and a nerve object;
at least one threshold to detect at least one problem tooth;
a processing module that detects at least one problem tooth;
the processing module mathematically models the decay for at least one tooth object; and
the processing module determines the effect the problem tooth has on at least one other tooth object.
9. The system of claim 8 further comprising a database that includes a plurality of data fields that include a plurality of standard shapes associated with each tooth and a plurality of bone density data for each section of tooth.
10. The system of claim 9 wherein the database includes a plurality of normative standards and at least one statistical standard for anomaly detection.
11. The system of claim 9 further comprising a common boundary between at least two tooth objects that is identified by the system.
12. The system of claim 9 further comprising a particular tooth that is identified by the system for further analysis.
13. The system of claim 12 wherein the processing module analyzes the tooth objects for the particular tooth and analyzes the particular tooth object by slicing the tooth object at one or more locations.
14. A method for visualizing a dental image, comprising:
receiving a plurality of high resolution dental data that is generated using computed tomography;
identifying a plurality of tooth objects for each tooth from the dental data, wherein the plurality of tooth objects is selected from the group consisting of an enamel object, a dentin object, a pulp object, a root object, and a nerve object;
providing at least one threshold to detect at least one problem tooth;
detecting the at least one problem tooth;
mathematically modeling at least one of the growth or decay of the at least one tooth object; and
determining the effect the problem tooth has on at least one other tooth object.
15. The method of claim 14 further comprising providing a database that includes a plurality of data fields that include a plurality of standard shapes associated with each tooth and a plurality of bone density data for each section of tooth.
16. The method of claim 15 wherein the database includes a plurality of normative standards and at least one statistical standard for anomaly detection.
17. The method of claim 15 further comprising identifying a common boundary between at least two tooth objects.
18. The method of claim 15 further comprising identifying a particular tooth for further analysis.
19. The method of claim 18 further comprising analyzing the tooth objects for the particular tooth.
20. The method of claim 19 further comprising analyzing the particular tooth object by slicing the tooth object at one or more locations.
US14/797,321 2006-08-11 2015-07-13 System and method for detecting a problem tooth Abandoned US20150313559A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/797,321 US20150313559A1 (en) 2006-08-11 2015-07-13 System and method for detecting a problem tooth

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US83731106P 2006-08-11 2006-08-11
US11/890,533 US9111372B2 (en) 2006-08-11 2007-08-06 System and method for object identification and anomaly detection
US14/797,321 US20150313559A1 (en) 2006-08-11 2015-07-13 System and method for detecting a problem tooth

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/890,533 Continuation US9111372B2 (en) 2006-08-11 2007-08-06 System and method for object identification and anomaly detection

Publications (1)

Publication Number Publication Date
US20150313559A1 true US20150313559A1 (en) 2015-11-05

Family

ID=40641999

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/890,533 Active 2030-05-16 US9111372B2 (en) 2006-08-11 2007-08-06 System and method for object identification and anomaly detection
US14/797,321 Abandoned US20150313559A1 (en) 2006-08-11 2015-07-13 System and method for detecting a problem tooth

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/890,533 Active 2030-05-16 US9111372B2 (en) 2006-08-11 2007-08-06 System and method for object identification and anomaly detection

Country Status (1)

Country Link
US (2) US9111372B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020123167A1 (en) * 2018-12-14 2020-06-18 Colgate-Palmolive Company System and method for oral health monitoring using electrical impedance tomography
US11207161B2 (en) 2016-05-30 2021-12-28 3Shape A/S Predicting the development of a dental condition

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090204338A1 (en) * 2008-02-13 2009-08-13 Nordic Bioscience A/S Method of deriving a quantitative measure of the instability of calcific deposits of a blood vessel
WO2009139110A1 (en) * 2008-05-13 2009-11-19 パナソニック株式会社 Intraoral measurement apparatus and intraoral measurement system
GB201002778D0 (en) * 2010-02-18 2010-04-07 Materialise Dental Nv 3D digital endodontics
US9779504B1 (en) 2011-12-14 2017-10-03 Atti International Services Company, Inc. Method and system for identifying anomalies in medical images especially those including one of a pair of symmetric body parts
US9401021B1 (en) 2011-12-14 2016-07-26 Atti International Services Company, Inc. Method and system for identifying anomalies in medical images especially those including body parts having symmetrical properties
US8724871B1 (en) 2011-12-14 2014-05-13 Atti International Services Company, Inc. Method and system for identifying anomalies in medical images
US9135498B2 (en) 2012-12-14 2015-09-15 Ormco Corporation Integration of intra-oral imagery and volumetric imagery
GB2548149A (en) * 2016-03-10 2017-09-13 Moog Bv Model generation for dental simulation
EP3582717B1 (en) 2017-02-17 2023-08-09 Silvio Franco Emanuelli System and method for monitoring optimal dental implants coupleable with an optimized implant site
IT201700017965A1 (en) * 2017-02-17 2018-08-17 Silvio Franco Emanuelli METHOD AND SIMULATION SYSTEM OF SITE IMPLANT TO OPTIMIZED
CA3068526A1 (en) * 2017-06-30 2019-01-03 Frank Theodorus Catharina CLAESSEN Classification and 3d modelling of 3d dento-maxillofacial structures using deep learning methods
CA3100495A1 (en) 2018-05-16 2019-11-21 Benevis Informatics, Llc Systems and methods for review of computer-aided detection of pathology in images
US11744681B2 (en) 2019-03-08 2023-09-05 Align Technology, Inc. Foreign object identification and image augmentation for intraoral scanning
CN113573642A (en) * 2019-03-25 2021-10-29 伯尼维斯公司 Apparatus, method and recording medium for recording instructions for determining bone age of tooth
JP7245577B1 (en) * 2022-06-23 2023-03-24 Rightly株式会社 Computer system, three-dimensional intraoral data coloring method and program
CN115688461B (en) * 2022-11-11 2023-06-09 四川大学 Device and method for evaluating degree of abnormality of dental arch and alveolar bone arches based on clustering

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020178032A1 (en) * 2001-05-03 2002-11-28 Benn Douglas K. Method and system for recording carious lesions
US7826646B2 (en) * 2000-08-16 2010-11-02 Align Technology, Inc. Systems and methods for removing gingiva from computer tooth models

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020176619A1 (en) 1998-06-29 2002-11-28 Love Patrick B. Systems and methods for analyzing two-dimensional images
US7234937B2 (en) 1999-11-30 2007-06-26 Orametrix, Inc. Unified workstation for virtual craniofacial diagnosis, treatment planning and therapeutics
US8073101B2 (en) * 1999-12-01 2011-12-06 Massie Ronald E Digital modality modeling for medical and dental applications
US6944262B2 (en) * 1999-12-01 2005-09-13 Massie Ronald E Dental and orthopedic densitometry modeling system and method
US7373286B2 (en) 2000-02-17 2008-05-13 Align Technology, Inc. Efficient data representation of teeth model
US7717708B2 (en) 2001-04-13 2010-05-18 Orametrix, Inc. Method and system for integrated orthodontic treatment planning using unified workstation
US7156655B2 (en) 2001-04-13 2007-01-02 Orametrix, Inc. Method and system for comprehensive evaluation of orthodontic treatment using unified workstation
US8021147B2 (en) 2001-04-13 2011-09-20 Orametrix, Inc. Method and system for comprehensive evaluation of orthodontic care using unified workstation
JP2002352563A (en) * 2001-05-25 2002-12-06 Toshiba Corp Recording device, data management system and data management method
US7474932B2 (en) 2003-10-23 2009-01-06 Technest Holdings, Inc. Dental computer-aided design (CAD) methods and systems
EP1714255B1 (en) * 2004-02-05 2016-10-05 Koninklijke Philips N.V. Image-wide artifacts reduction caused by high attenuating objects in ct deploying voxel tissue class
US7536461B2 (en) * 2005-07-21 2009-05-19 International Business Machines Corporation Server resource allocation based on averaged server utilization and server power management
US8417010B1 (en) * 2006-01-12 2013-04-09 Diagnoscan, LLC Digital x-ray diagnosis and evaluation of dental disease
US7813591B2 (en) 2006-01-20 2010-10-12 3M Innovative Properties Company Visual feedback of 3D scan parameters
US20090061381A1 (en) 2007-09-05 2009-03-05 Duane Milford Durbin Systems and methods for 3D previewing

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7826646B2 (en) * 2000-08-16 2010-11-02 Align Technology, Inc. Systems and methods for removing gingiva from computer tooth models
US20020178032A1 (en) * 2001-05-03 2002-11-28 Benn Douglas K. Method and system for recording carious lesions

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11207161B2 (en) 2016-05-30 2021-12-28 3Shape A/S Predicting the development of a dental condition
US11918438B2 (en) 2016-05-30 2024-03-05 3Shape A/S Predicting the development of a dental condition
WO2020123167A1 (en) * 2018-12-14 2020-06-18 Colgate-Palmolive Company System and method for oral health monitoring using electrical impedance tomography
AU2019397263B2 (en) * 2018-12-14 2022-07-07 Colgate-Palmolive Company System and method for oral health monitoring using electrical impedance tomography

Also Published As

Publication number Publication date
US20090129639A1 (en) 2009-05-21
US9111372B2 (en) 2015-08-18

Similar Documents

Publication Publication Date Title
US9111372B2 (en) System and method for object identification and anomaly detection
US10629309B2 (en) Method and system for 3D root canal treatment planning
Agrawal et al. Artificial intelligence in dentistry: past, present, and future
Torres et al. Characterization of mandibular molar root and canal morphology using cone beam computed tomography and its variability in Belgian and Chilean population samples
Grauer et al. Working with DICOM craniofacial images
Saghiri et al. A new approach for locating the minor apical foramen using an artificial neural network
CN113223010B (en) Method and system for multi-tissue full-automatic segmentation of oral cavity image
Porto et al. Evaluation of volumetric changes of teeth in a Brazilian population by using cone beam computed tomography
DE102015122604B4 (en) Apparatus for assisting diagnosis of periodontal disease, system for assisting diagnosis of periodontal disease, program for assisting diagnosis of periodontal disease, and method for assisting diagnosis of periodontal disease
JP5696146B2 (en) Method for determining at least one segmentation parameter or optimal threshold of volumetric image data of a maxillofacial object, and method for digitizing a maxillofacial object using the same
Kim et al. Automatic extraction of inferior alveolar nerve canal using feature-enhancing panoramic volume rendering
Roden-Johnson et al. Comparison of hand-traced and computerized cephalograms: landmark identification, measurement, and superimposition accuracy
Stoetzer et al. Advances in assessing the volume of odontogenic cysts and tumors in the mandible: a retrospective clinical trial
Benyó Identification of dental root canals and their medial line from micro-CT and cone-beam CT records
Robles et al. A step-by-step method for producing 3D crania models from CT data
Kanuri et al. Trainable WEKA (Waikato Environment for Knowledge Analysis) segmentation tool: machine-learning-enabled segmentation on features of panoramic radiographs
Linney et al. Three-dimensional visualization of computerized tomography and laser scan data for the simulation of maxillo-facial surgery
Zhu et al. An algorithm for automatically extracting dental arch curve
Khatri et al. Unfolding the mysterious path of forensic facial reconstruction: Review of different imaging modalities
Iurino et al. Medical CT scanning and the study of hidden oral pathologies in fossil carnivores
Alzaid et al. Revolutionizing Dental Care: A Comprehensive Review of Artificial Intelligence Applications Among Various Dental Specialties
US11576631B1 (en) System and method for generating a virtual mathematical model of the dental (stomatognathic) system
Agarwal et al. A review paper on diagnosis of approximal and occlusal dental caries using digital processing of medical images
Zhao et al. Cone-Beam Computed Tomography Image Features under Intelligent Three-Dimensional Reconstruction Algorithm in the Evaluation of Intraoperative and Postoperative Curative Effect of Dental Pulp Disease Using Root Canal Therapy
Jatti et al. Image processing and parameter extraction of digital panoramic dental X-rays with ImageJ

Legal Events

Date Code Title Description
AS Assignment

Owner name: VISIONARY TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ORTEGA, MIREYA;DAUGHERTY, ROGER;SIGNING DATES FROM 20071228 TO 20071230;REEL/FRAME:036066/0852

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION