WO2012061452A1 - Automatic image-based calculation of a geometric feature - Google Patents

Info

Publication number
WO2012061452A1
WO2012061452A1 (PCT/US2011/058882)
Authority
WO
WIPO (PCT)
Prior art keywords
measurement elements
geometric feature
image
user
measurement
Prior art date
Application number
PCT/US2011/058882
Other languages
French (fr)
Inventor
Peter Huber
Arun Krishnan
Takahisa Taniguchi
Xiang Sean Zhou
Original Assignee
Siemens Medical Solutions USA, Inc.
Siemens Japan K.K.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Medical Solutions USA, Inc. and Siemens Japan K.K.
Priority to JP2013537773A (granted as JP5837604B2)
Publication of WO2012061452A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/68 Analysis of geometric attributes of symmetry
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30008 Bone
    • G06T 2207/30012 Spine; Backbone
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30048 Heart; Cardiac
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30061 Lung

Definitions

  • the classifier may be trained to verify the consistency of measurement elements amongst themselves, or to recognize one or more supporting landmarks from which occluded or missing measurement elements may be inferred.
  • supporting landmarks for CTR computation include boundary points at the mid- or upper-level of the lung, boundary points along the heart, and landmarks on other organs such as the liver or spleen, etc.
  • Training may be performed based on a training database of images annotated by experts, wherein at least some of the images include occluded measurement elements.
  • the classifier comprises a regression model that represents statistical relationships for prediction. Exemplary regression analysis techniques include, but are not limited to, linear regression, least squares regression, Bayesian linear regression, least absolute deviations, distance metric learning, nonparametric regression, etc. It should be understood that other types of classifiers may also be trained.
  • the system may use the regression model to check the consistency of the measurement elements amongst themselves, or predict missing or occluded measurement elements.
  • similar images may be retrieved from a database of pre-annotated images, and annotated measurement elements from these images may be used to infer the locations of the missing or occluded measurement elements, as sketched below. Similarity may be determined based on, for example, similar detected measurement elements, image features or supporting landmarks.
  • the database may include images that are annotated by experts or previous cases identified by the present framework.
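One way to realize this retrieval-based inference is a nearest-neighbor lookup over feature vectors of annotated cases. The sketch below is illustrative only: the feature representation, array shapes and function names are assumptions rather than part of the disclosure, and the random data stands in for a real training database.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def infer_occluded_point(query_features, database_features, database_points, k=5):
    """Infer an occluded measurement point from similar annotated cases.

    query_features     -- feature vector describing the visible landmarks of
                          the current image, shape (n_features,)
    database_features  -- matching feature vectors of pre-annotated cases,
                          shape (n_cases, n_features)
    database_points    -- annotated (x, y) location of the target measurement
                          element in each case, shape (n_cases, 2)
    """
    nn = NearestNeighbors(n_neighbors=k).fit(database_features)
    _, idx = nn.kneighbors(query_features.reshape(1, -1))
    # "Borrow" the annotation from the most similar cases: here simply the
    # mean location over the k retrieved neighbors.
    return database_points[idx[0]].mean(axis=0)

# Toy usage with random stand-ins for a real annotated database.
rng = np.random.default_rng(0)
db_feats = rng.normal(size=(100, 8))
db_pts = rng.uniform(0, 512, size=(100, 2))
query = rng.normal(size=8)
print(infer_occluded_point(query, db_feats, db_pts))
```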
  • At least one of the measurement elements to be automatically identified includes a centerline.
  • the relevant measurement element is an optional torso centerline.
  • the torso centerline facilitates the measurement of the maximum transverse diameter of the heart in a skewed image, as will be described in more detail later.
  • the torso centerline may be extracted by automatically detecting two end points, multiple points or a curve along the spine.
  • the torso centerline may be derived by identifying a plurality of primary and supporting landmarks. This is especially useful in cases where the spine is not a good indicator of the centerline, such as in scoliotic patients.
  • Primary landmarks include those located along the centerline (e.g., spine), while supporting landmarks include those located outside the centerline (e.g., sternum, liver or head) but whose geometric relation with the centerline allows the centerline to be inferred.
  • the midpoint between landmarks located on the ribs on both sides, or between the liver dome and the splenic dome, may be used to define the centerline of the torso.
  • machine learning may be performed to learn a predictor.
  • a regression model may be learned to predict the orientation of the centerline.
  • a similar case may be retrieved from an annotated database which includes images that are marked by experts to indicate where the landmarks and centerline are located.
  • the landmarks are detected, and the centerline may either be predicted (or regressed) or "borrowed" from the most similar case(s) in the database.
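As a concrete illustration of deriving a centerline from bilateral landmarks, the following sketch fits a straight line through the midpoints of left/right landmark pairs by least squares. It assumes (x, y) landmark coordinates and a roughly vertical centerline; the names and the straight-line model are illustrative assumptions (the disclosure also contemplates curves and learned predictors).

```python
import numpy as np

def fit_torso_centerline(left_pts, right_pts):
    """Fit a torso centerline through midpoints of bilateral landmark pairs.

    left_pts, right_pts -- arrays of shape (n, 2) holding (x, y) positions of
    paired landmarks (e.g., corresponding points on the left and right ribs).
    Returns (slope, intercept) of the line x = slope * y + intercept; since a
    roughly vertical centerline is assumed, x is modeled as a function of y.
    """
    mid = (np.asarray(left_pts, float) + np.asarray(right_pts, float)) / 2.0
    slope, intercept = np.polyfit(mid[:, 1], mid[:, 0], deg=1)
    return slope, intercept

# Example: three rib-pair midpoints on a slightly skewed torso.
left = [(100, 50), (95, 150), (90, 250)]
right = [(300, 50), (295, 150), (290, 250)]
slope, intercept = fit_torso_centerline(left, right)
skew_deg = np.degrees(np.arctan(slope))  # deviation from a vertical line
print(f"x = {slope:.3f} * y + {intercept:.1f}, skew = {skew_deg:.1f} deg")
```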
  • the geometric feature calculation unit 206 automatically computes the geometric feature based on the measurement elements that are automatically identified by the image interpretation engine 202.
  • the geometric feature computed by the image interpretation system may be a geometric measurement, such as a distance between measurement points or an angle between lines determined by measurement points, or a functional value of geometric measurements (e.g., ratio, area, volume, etc.).
  • the geometric feature calculation unit 206 computes the Cardio-Thoracic Ratio (CTR), which measures the enlargement of the heart and is defined as the ratio of the transverse diameter of the heart (L1) to the internal diameter of the thoracic cage (L2): CTR = L1 / L2 = (a + b) / L2
  • a + b is the maximum transverse diameter of the heart, measured between the lateral extrema point and the apical lateral extrema point. The transverse diameter may be arbitrarily split into discrete distances a and b that together represent the total distance between these two measurement points.
  • the maximum transverse diameter (a) of the right side of the heart is defined by the perpendicular distance of the measurement point 406 from the centerline 404
  • the maximum transverse diameter (b) of the left side of the heart is defined by the perpendicular distance of the measurement point 408 from the centerline 404.
  • the internal diameter of the thoracic cage (L2) may be obtained by calculating the distance between the measurement points 410 and 412.
  • the measurement elements 404, 406, 408, 410 and 412 may be automatically identified by the image interpretation engine 202.
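A minimal sketch of this computation, assuming the centerline is given as a point plus a unit direction vector and the measurement elements as (x, y) points; the function names, and the use of a straight centerline, are illustrative assumptions.

```python
import numpy as np

def point_line_distance(p, line_pt, line_dir):
    """Perpendicular distance from 2-D point p to the line through line_pt
    with unit direction line_dir."""
    d = np.asarray(p, float) - np.asarray(line_pt, float)
    # Remove the component along the line; what remains is perpendicular.
    along = np.dot(d, line_dir) * np.asarray(line_dir, float)
    return float(np.linalg.norm(d - along))

def cardio_thoracic_ratio(heart_right, heart_left, lung_right, lung_left,
                          center_pt, center_dir):
    """CTR = (a + b) / L2, with a and b the perpendicular distances of the two
    heart extrema from the centerline and L2 the distance between the two
    lateral lung extrema."""
    a = point_line_distance(heart_right, center_pt, center_dir)
    b = point_line_distance(heart_left, center_pt, center_dir)
    l2 = float(np.linalg.norm(np.asarray(lung_left, float)
                              - np.asarray(lung_right, float)))
    return (a + b) / l2

# Toy example with a vertical centerline at x = 200; the arguments loosely
# mirror reference numerals 406, 408, 410, 412 and 404 above.
ctr = cardio_thoracic_ratio(
    heart_right=(160, 300), heart_left=(260, 320),
    lung_right=(80, 280), lung_left=(330, 280),
    center_pt=(200, 0), center_dir=(0.0, 1.0))
print(f"CTR = {ctr:.2f}")   # (40 + 60) / 250 = 0.40
```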
  • FIG. 5 shows an exemplary skewed image 502 taken from a patient in a non- perfect upright position (e.g., due to disease, clinical condition, scoliosis, weakness, etc.).
  • the centerline 504 is not perfectly perpendicular to the top or bottom of the image 502. If horizontal lines are used to measure the diameters L1 and L2, a calculation error will result, potentially leading to misdiagnosis.
  • the respective maximum transverse diameters (a and b) are determined by projecting the measurement points (506 and 508) onto the centerline 504 to obtain the perpendicular distances.
  • the centerline 504 may be used to obtain the skew angle of the image 502.
  • the image 502 is then rotated to compensate for the skew angle and align the centerline parallel to the Y-axis.
  • the diameters are then determined by simple orthogonal projections onto the X-axis, i.e., by taking the differences between the X-coordinates, as in the sketch below.
  • the internal diameter of the thoracic cage (L2) may likewise be obtained by orthogonally projecting the measurement points 510 and 512 onto the X-axis and determining the difference between the X-coordinates.
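The following sketch illustrates the skew compensation on coordinates rather than on the full image: it estimates the skew angle from the centerline direction and rotates the measurement points so the centerline becomes parallel to the Y-axis, after which the diameters reduce to X-coordinate differences. A mathematical y-up coordinate convention and all variable names are assumptions for illustration.

```python
import numpy as np

def rotate_points(points, angle_rad, origin=(0.0, 0.0)):
    """Rotate 2-D points counter-clockwise by angle_rad around origin
    (mathematical y-up convention assumed)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])
    pts = np.asarray(points, float) - np.asarray(origin, float)
    return pts @ rot.T + np.asarray(origin, float)

# Unit direction of the detected centerline, e.g. from the fit sketched above.
center_dir = np.array([0.10, 1.0])
center_dir /= np.linalg.norm(center_dir)

# Skew angle: deviation of the centerline from the +Y axis (0 when vertical).
skew = np.arctan2(center_dir[0], center_dir[1])

# Rotating all points CCW by `skew` maps the centerline onto the Y-axis, so
# transverse diameters become simple X-coordinate differences.
lung_right, lung_left = (80.0, 280.0), (330.0, 300.0)
pr, pl = rotate_points([lung_right, lung_left], skew)
l2 = abs(pl[0] - pr[0])
print(f"skew = {np.degrees(skew):.1f} deg, L2 = {l2:.1f} px")
```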
  • the measurement elements identified by the image interpretation engine 202 are provided to the user for review.
  • the geometric feature computed by the geometric feature calculation unit 206 may also be provided to the user for review.
  • the user can either accept the proposed measurement elements and geometric feature, or - in the case of a suboptimal proposal - manually modify them. By allowing the user to interact with the present framework, systematic errors in the results may be identified and corrected.
  • the user input is checked to see if the user has modified any of the measurement elements. If modifications were made, the geometric feature calculation unit 206 computes the final geometric feature based on the user-modified measurement elements at 308. Steps 308, 310, and 312 are repeated until the user is satisfied with the results and no modifications are made.
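The accept-or-modify loop can be expressed compactly as below. All callables are hypothetical placeholders for the HCI and calculation components described above; the sketch only fixes the control flow (propose, present, modify, recompute until accepted).

```python
def review_loop(image, propose, compute_feature, present, get_modifications):
    """Interactive accept-or-modify loop around the automatic proposal.

    propose            -- returns automatically identified measurement elements
    compute_feature    -- maps measurement elements to the geometric feature
    present            -- shows elements and feature to the user
    get_modifications  -- returns user-modified elements, or None when the
                          user accepts the current proposal
    """
    elements = propose(image)
    feature = compute_feature(elements)
    while True:
        present(elements, feature)
        modified = get_modifications()
        if modified is None:           # user accepted the results
            return elements, feature
        elements = modified            # recompute with the user's changes
        feature = compute_feature(elements)
```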
  • the final geometric feature is output.
  • the results processing unit 208 stores and/or distributes the final geometric feature to adjacent systems (e.g. RIS, HIS, PACS, etc.).
  • the final geometric feature may be displayed to the user via the HCI 203.
  • the final geometric feature may be visually overlaid on the interpreted image, along with the measurement elements used to compute the geometric feature.
  • the final geometric feature value (e.g., "CTR" in percent (%)) may be stored in the system 101 or propagated to adjacent system (e.g. RIS, HIS, PACS, etc.).

Abstract

Systems and methods for automatically interpreting digital images and calculating a geometric feature based on the interpreted digital images are described herein. In accordance with one aspect of the present disclosure, a plurality of measurement elements within a digital medical image are automatically identified (306). The measurement elements are used to compute a geometric feature (308), and provided to a user for review (310). If the user modified the measurement elements (312), a final geometric feature is computed based on the user-modified measurement elements (314).

Description

AUTOMATIC IMAGE-BASED CALCULATION OF A
GEOMETRIC FEATURE
CROSS-REFERENCE TO RELATED APPLICATION
[0001] The present application claims the benefit of U.S. provisional application no. 61/409,144 filed November 2, 2010, the entire contents of which are herein incorporated by reference.
TECHNICAL FIELD
[0002] The present disclosure relates to systems and techniques for automatically interpreting digital medical images and, more specifically, to systems and techniques for automatically calculating a geometric feature based on the interpreted digital images.
BACKGROUND
[0003] The field of medical imaging has seen significant advances since the time X-rays were first used to determine anatomical abnormalities. Medical imaging hardware has progressed in the form of newer machines such as Magnetic Resonance Imaging (MRI) scanners, Computed Tomography (CT) scanners, etc. Digital medical images are constructed using raw image data obtained from a scanner, for example, a CT scanner, an MRI scanner, etc. Digital medical images are typically either a two-dimensional ("2-D") image made of pixel elements or a three-dimensional ("3-D") image made of volume elements ("voxels"). Because of the large amount of image data generated by such modern medical scanners, there has been and remains a need for image processing techniques that can automate some or all of the process of interpreting scanned medical images to provide insight relevant to the determination of a specific disease.
[0004] Geometric measurements are widely used as an examination analysis tool to aid in the diagnosis of a disease or disease severity. An example of a frequently performed measurement is the computation of the cardio-thoracic ratio (CTR), which facilitates the determination of a disease related to the cardiovascular system and supports the monitoring of the clinical status of patients. For example, CTR is particularly useful for monitoring patients suffering from renal insufficiency. The CTR is defined as the ratio of the transverse diameter of the heart, at the level of the apex, to the internal diameter of the thorax, and is used for, among other reasons, determination of the enlargement of the heart. In terms of clinical relevance, a CTR of more than 0.5 or 50% is typically considered abnormal in an adult; more than 0.66 or 66% is typically considered abnormal in a neonate. The cardiac diameter itself can also be measured and monitored for changes. In normal individuals, the cardiac diameter is typically less than 15.5 cm in males, and less than 14.5 cm in females. A change in cardiac diameter of greater than 1.5 cm between two X-ray images is typically considered significant.
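For illustration, the thresholds quoted above translate directly into a trivial check; the function name and return values below are assumptions for the sketch, not part of any clinical guideline beyond the numbers stated in the preceding paragraph.

```python
def classify_ctr(ctr, neonate=False):
    """Flag an abnormal cardio-thoracic ratio using the quoted thresholds:
    above 0.50 in an adult, above 0.66 in a neonate."""
    threshold = 0.66 if neonate else 0.50
    return "abnormal" if ctr > threshold else "normal"

print(classify_ctr(0.48))                # normal
print(classify_ctr(0.55))                # abnormal for an adult
print(classify_ctr(0.60, neonate=True))  # normal for a neonate
```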
[0005] Traditionally, CTR measurements are performed using chest X-Ray (CXR) images. In order to ensure that the location and shape of internal organs (e.g., the heart, lungs) are not affected or distorted, the patient needs to stand upright. A lying down posture during acquisition of the images can distort the internal organs. To make the CTR calculation, four locations on the image need to be defined by a user: (1) the right side of the thorax; (2) the left side of the thorax; (3) the right side of the heart; (4) the left side of the heart. For each of these measurements, the respective point is selected which constitutes the start/end point of the widest diameter of the corresponding anatomical structure (e.g., thorax, heart).
[0006] Standard state-of-the-art solutions allow a user to manually identify the four relevant measurement points (shown as crosses in FIG. 4) directly on the image, e.g. by performing four mouse-clicks at the respective locations. Typically, this measurement functionality is integrated into an image-related system such as Picture Archiving and Communication system (PACS). After all four measurement points have been manually identified, the system will calculate the CTR and output the value to a user.
[0007] This simple four-click approach works well for younger patients and patients without significant disease, because they can stand perfectly upright during image acquisition. However, elderly patients or patients with significant disease are typically less capable of standing perfectly upright during image acquisition, and thus the measurement of the CTR can be flawed because a tilted body position is measured on a straight image. To compensate for this issue, a centerline can be defined to serve as a geometric reference line from which all distances are measured. All distances are then measured perpendicular to this centerline. The manual definition of this centerline requires at least two mouse clicks, and sometimes also requires complex operations such as the rotation or translation of the line. In addition, as discussed previously, the four measurement points also need to be identified manually. The CTR value can only be computed after performing these time-consuming tasks.
[0008] Therefore, there is a need for improved systems and methods for determining the CTR of a patient, where the system can adjust for imperfect imaging conditions automatically.
SUMMARY [0009] Described herein are systems and methods for automatically interpreting digital images and calculating a geometric feature based on the interpreted digital images. In accordance with one aspect of the present disclosure, a plurality of measurement elements within a digital medical image are automatically identified. The measurement elements are used to compute a geometric feature, and provided to a user for review. If the user modified the measurement elements, a final geometric feature is computed based on the user-modified measurement elements.
[0010] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the following detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS
[0011] A more complete appreciation of the present disclosure and many of the attendant aspects thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings.
[0012] FIG. 1 shows an exemplary image interpretation system;
[0013] FIG. 2 shows an exemplary image interpretation system in more detail;
[0014] FIG. 3 shows an exemplary image interpretation method;
[0015] FIG. 4 shows an exemplary chest X-ray image; and
[0016] FIG. 5 shows another exemplary chest X-ray image.
DETAILED DESCRIPTION
[0017] In the following description, numerous specific details are set forth such as examples of specific components, devices, methods, etc., in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice embodiments of the present disclosure. In other instances, well-known materials or methods have not been described in detail in order to avoid unnecessarily obscuring embodiments of the present disclosure. While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure.
[0018] The term "x-ray image" as used herein may mean a visible x-ray image (e.g., displayed on a video screen) or a digitized representation of an x-ray image (e.g., a file corresponding to the pixel output of an x-ray detector). The term "in-treatment x-ray image" as used herein may refer to images captured at any point in time during a treatment delivery phase of a radiosurgery or radiotherapy procedure, which may include times when the radiation source is either on or off. From time to time, for convenience of description, CT imaging data may be used herein as an exemplary imaging modality. It will be appreciated, however, that data from any type of imaging modality including but not limited to X-Ray radiographs, MRI, CT, PET (positron emission tomography), PET-CT, SPECT, SPECT-CT, MR-PET, 3D ultrasound images or the like may also be used in various embodiments of the disclosure.
[0019] Unless stated otherwise as apparent from the following discussion, it will be appreciated that terms such as "segmenting," "generating," "registering," "determining," "aligning," "positioning," "processing," "computing," "selecting," "estimating,"
"detecting," "tracking" or the like may refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage, transmission or display devices. Embodiments of the methods described herein may be implemented using computer software. If written in a programming language conforming to a recognized standard, sequences of instructions designed to implement the methods can be compiled for execution on a variety of hardware platforms and for interface to a variety of operating systems. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement embodiments of the present invention.
[0020] As used herein, the term "image" refers to multi-dimensional data composed of discrete image elements (e.g., pixels for 2-D images and voxels for 3-D images). The image may be, for example, a medical image of a subject collected by computed tomography, magnetic resonance imaging, ultrasound, or any other medical imaging system known to one of skill in the art. The image may also be provided from non-medical contexts, such as, for example, remote sensing systems, electron microscopy, etc. Although an image can be thought of as a function from R^3 to R or R^7, the methods of the invention are not limited to such images, and can be applied to images of any dimension, e.g., a 2-D picture or a 3-D volume. For a 2- or 3-dimensional image, the domain of the image is typically a 2- or 3-dimensional rectangular array, wherein each pixel or voxel can be addressed with reference to a set of 2 or 3 mutually orthogonal axes. The terms "digital" and "digitized" as used herein will refer to images or volumes, as appropriate, in a digital or digitized format acquired via a digital acquisition system or via conversion from an analog image.
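In code, such an image is naturally represented as a rectangular array indexed along mutually orthogonal axes. A minimal NumPy illustration, where the shapes and dtypes are arbitrary examples:

```python
import numpy as np

# A 2-D image: a rectangular array where each pixel is addressed by two
# mutually orthogonal axes (row, column).
image = np.zeros((512, 512), dtype=np.uint16)
pixel = image[256, 128]

# A 3-D volume: the same idea with a third axis; each voxel is addressed by
# three indices, e.g. (slice, row, column) for a CT series.
volume = np.zeros((64, 512, 512), dtype=np.int16)
voxel = volume[10, 256, 128]
axial_slice = volume[10]          # one 2-D slice out of the 3-D volume
```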
[0021] The following description sets forth one or more implementations of systems and methods that facilitate automatic interpretation of medical images.
According to aspects of the present disclosure, an image interpretation system is provided that includes a virtual assistant that automatically identifies measurement elements within an image and computes a geometric feature based on these measurement elements. The results are then proposed to the user to be accepted or adapted if desired. An example of a geometric feature of clinical relevance is the cardio-thoracic ratio (CTR), which is often used in the diagnosis of cardiomegaly. It is understood that while a particular application directed to calculating CTR is shown, the technology is not limited to the specific embodiments illustrated. The present technology has application to, for example, calculating the waist-to-hip ratio for measuring visceral body fat, or determining the scoliotic curvature of a spine.
[0022] Compared to the traditional manual approach, the present framework advantageously achieves tremendous time savings for the user, which are especially significant when deployed in settings where hundreds of examinations are performed per day. A further advantage is that it will result in highly standardized results that are perfectly reproducible, provided the user did not modify the proposed results. Even further, an equally high level of standard can be obtained independent of the clinical expertise of the respective user, thereby achieving a good reduction in skill-related inter-person quality deviations. Since the present diagnostic testing is much more efficient, it can be administered to a much larger patient population than would normally be covered by conventional testing. For example, all acquired chest X-ray examinations from routine occupational health check-ups can be analyzed instead of only a small subset. This may lead to much earlier detection of disease (e.g., cardiomegaly) in patients that otherwise would not have been checked thoroughly for this specific disease.
[0023] FIG. 1 is a block diagram illustrating an exemplary image interpretation system 100. The image interpretation system 100 includes a computer system 101 for implementing the framework as described herein. The computer system 101 may be further connected to a digital image source 102 and a workstation 103, over a wired or wireless network. The digital image source 102 may be an image acquisition system (e.g., magnetic resonance (MR) scanner, CT scanner, other radiology scanner or X-ray device) or a database, such as a Picture Archiving and Communication System (PACS), Hospital Information System (HIS), Advanced Visualization (AV) system, Electronic Medical Record (EMR) system, Vendor Neutral Archive (VNA), or a Radiology Information System (RIS), embodied in a storage medium.
[0024] Computer system 101 may be a desktop personal computer, a portable laptop computer, a tablet personal computer, another portable device, a mini-computer, a mainframe computer, a server, a storage system, a dedicated digital appliance, or another device having a storage sub-system configured to store a collection of digital data items. In one implementation, computer system 101 comprises a processor or central processing unit (CPU) 104 coupled to one or more non-transitory computer-readable media 106 (e.g., computer storage or memory), display device 108 (e.g., monitor) and various input devices 110 (e.g., mouse, touch-screen, keyboard, etc.) via an input-output interface 121. Computer system 101 may further include support circuits such as a cache, power supply, clock circuits, printer interface, local area network (LAN) data transmission controller, LAN interface, a network controller, and a communications bus. Even further, computer system 101 may be provided with a graphics controller chip, such as a graphics processing unit (GPU) that supports high performance graphics functions.
[0025] It is to be understood that the present technology may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. In one implementation, the techniques described herein are implemented by image interpretation unit 107. Image interpretation unit 107 may include computer-readable program code tangibly embodied in non-transitory computer-readable media 106. Non-transitory computer-readable media 106 may include random access memory (RAM), read only memory (ROM), magnetic floppy disk, flash memory, and other types of memories, or a combination thereof. The computer-readable program code is executed by CPU 104 to process images (e.g., X-ray, MR or CT images) from digital image source 102 (e.g., X-ray, MR or CT scanner). As such, the computer system 101 is a general-purpose computer system that becomes a specific purpose computer system when executing the computer-readable program code. The computer-readable program code is not intended to be limited to any particular programming language and implementation thereof. It will be appreciated that a variety of programming languages and coding thereof may be used to implement the teachings of the disclosure contained herein.
[0026] In one implementation, computer system 101 also includes an operating system and microinstruction code. The various techniques described herein may be implemented either as part of the microinstruction code or as part of an application program or software product, or a combination thereof, which is executed via the operating system. Various other peripheral devices, such as additional data storage devices and printing devices, may be connected to the computer system 101.
[0027] The workstation 103 may be a desktop personal computer, a portable laptop computer, a tablet personal computer, a personal digital assistant, another portable device, a communications device, a smart phone, a mini-computer, a mainframe computer, a server, a storage system, a dedicated digital appliance, or another device having a storage sub-system configured to store a collection of digital data items. The workstation 103 may include a processor, non-transitory computer-readable media and appropriate peripherals, such as an input device and display device, and can be operated in conjunction with the entire system 100. For example, the workstation 103 may communicate with the digital image source 102 so that the retrieved image data can be rendered at the workstation 103 and viewed on the display. The workstation 103 may include a human-computer interface (HCI) that allows the radiologist or any other skilled user (e.g., physician, technician, operator, scientist, etc.), to manipulate the image data and provide user input. For example, the user may identify or modify measurement points in the images via the HCI. Further, the workstation 103 may communicate directly with computer system 101 to display processed image data and results. For example, a radiologist can interactively accept or modify the measurement elements or geometric features determined by the computer system 101.
[0028] FIG. 2 shows an exemplary image interpretation system 100 in more detail. It should be noted that the various components shown in FIG. 2 may be hosted in whole or in part by different computer systems in some implementations. Thus, the techniques described herein may occur locally on the computer system 101, or may occur in other computer systems and be reported to computer system 101. Although the environment is illustrated with one computer system, it is understood that more than one computer system or server, such as a server pool, as well as computers other than servers, may be employed.
[0029] As shown in FIG. 2, the image interpretation unit 107 is coupled to a digital image source 102 and a human-computer interface (HCI) 203. In one implementation, the image interpretation unit 107 includes an image interpretation engine 202 for performing image analysis to generate measurement elements, and a user interface controller 204 for managing the visual output of, for example, digital images (via, e.g., a PACS viewer, AV/DICOM viewer, etc.), computation parameters and results (e.g., proposed measurement elements, geometric features), and handling input from the user and system components. The user interface controller 204 is communicatively coupled to the HCI 203 that displays the images and results for the user, and allows the user to accept the results as "being correct" or to make any modifications to the results as desired. The HCI 203 may be implemented at the workstation 103, computer system 101 or another system.
[0030] The image interpretation unit 107 may further include a geometric feature calculation unit 206 and a results processing unit 208. The geometric feature calculation unit 206 computes the actual geometric feature (e.g., CTR value) based on the image interpretation engine's 202 proposed measurement elements and any user modifications received via the HCI 203. The results processing unit 208 captures and distributes the results from the various components in the image interpretation unit 107. The results processing unit 208 may send the results to adjacent systems, such as a PACS, RIS or HIS system, or create Digital Imaging and Communications in Medicine (DICOM) Structured Report (SR) objects that can be stored in the digital image source 102 or any other storage medium.
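For a sense of what creating such a DICOM object looks like, here is a heavily simplified sketch using the pydicom library. It builds a skeletal SR-style dataset carrying a CTR value as a single text content item; a conformant SR additionally requires patient/study modules, concept-name codes and more, and the field values here are illustrative assumptions rather than the disclosed implementation.

```python
from pydicom.dataset import Dataset
from pydicom.uid import generate_uid

def build_ctr_sr(ctr_percent):
    """Build a skeletal SR-style dataset carrying a CTR value."""
    ds = Dataset()
    ds.SOPClassUID = "1.2.840.10008.5.1.4.1.1.88.11"  # Basic Text SR Storage
    ds.SOPInstanceUID = generate_uid()
    ds.Modality = "SR"
    ds.ValueType = "CONTAINER"        # root content item
    ds.ContinuityOfContent = "SEPARATE"

    item = Dataset()                  # a single TEXT child content item
    item.RelationshipType = "CONTAINS"
    item.ValueType = "TEXT"
    item.TextValue = f"Cardio-thoracic ratio: {ctr_percent:.1f}%"
    ds.ContentSequence = [item]
    return ds

ds = build_ctr_sr(48.0)
print(ds)
# Persisting to a file (ds.save_as / pydicom.dcmwrite) additionally requires
# file meta information and transfer syntax details, omitted here.
```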
[0031] In accordance with one aspect of the present disclosure, one or more digital images may be retrieved from the digital image source 102 and transferred to the image interpretation engine 202. The interpretation engine 202 interprets the images by performing one or more image analysis techniques to automatically identify, for example, one or more measurement elements (e.g., measurement points, lines, etc.) for calculating geometric features. The output of this analysis may be propagated to the user interface controller 204. The user interface controller 204 transfers the results to the HCI 203 for display. Supporting visual elements (e.g. lines, dots, markers, text, etc.) may also be provided to the user interface controller 204 for facilitating better visual orientation.
[0032] The user 210 interacts with the system 100 by using the input capabilities of the HCI 203 to either directly accept without changes or modify the results proposed by the image interpretation engine 202. The HCI 203 may provide different means of user interaction, such as a mouse, voice or handwriting recognition engine, virtual reality (VR) glove, three-dimensional (3D) mouse, keyboard, eye movement/gaze capture engine, touchpad, etc. Once this user-interaction phase is completed, the results of this interaction may be transferred back from the HCI 203 to the user interface controller 204, and from there propagated to the geometric feature calculation unit 206. Optionally, the user-induced modifications may also be fed back to the image interpretation engine 202 to improve future performance through machine learning. For example, the user- modified measurement elements may be stored in a training database as annotated images for learning a classifier, as will be described in more detail later.
[0033] The geometric feature calculation unit 206 calculates the geometric feature value based on the proposed and/or modified measurement elements, which may be returned to the user interface controller 204 for display via the HCI 203. The geometric feature value may be a numerical data value, such as a cardio-thoracic ratio (e.g., 48%). It may be visually presented as overlay text or graphics on the interpreted image, along with the relevant geometric elements and measurements used to compute it, such as shown in FIG. 4. The geometric feature value and/or the interpreted image may be transferred to the results processing unit 208, where it is distributed to adjacent systems (e.g. RIS, HIS, PACS, etc.) and/or stored in a persistent storage device (e.g., digital image source 102). The geometric feature value and/or the interpreted image may be stored in the form of an electronic medical record (EMR), a findings list (e.g., via a findings navigator), a
DICOM secondary capture object, a DICOM presentation state, a DICOM SR object or a standard file format (e.g., JPEG, GIF, PNG, etc.).
[0034] FIG. 3 is a flow chart illustrating an exemplary image interpretation method 300. The exemplary method 300 may be implemented by the image
interpretation unit 107, as previously described with reference to FIGS. 1 and 2.
[0035] At step 304, the image interpretation engine 202 in the image interpretation unit 107 receives at least one digital medical image to be interpreted. In one implementation, the image is received from the digital image source 102, which may acquire the image by techniques that include, but are not limited to, magnetic resonance (MR) imaging, computed tomography (CT), helical CT, X-ray, positron emission tomography, fluoroscopy, ultrasound or single photon emission computed tomography (SPECT). The digital medical image may comprise two dimensions, three dimensions, four dimensions or any other number of dimensions.
[0036] In one implementation, the digital medical image is a chest radiograph or a chest X-ray (CXR). A CXR is a projection radiograph of the chest that is used to diagnose conditions affecting the chest, its contents and nearby structures. In addition, the digital medical image may be in a standard digital format, such as a PACS, EMR or AV format. Other types of digital medical images, including images of other anatomical structures (e.g., skeletal spine), may also be received and interpreted by the image interpretation engine 202.
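As a minimal sketch of this receiving step, assuming the CXR arrives as a DICOM file readable with the pydicom package (the file path below is hypothetical):

import pydicom

# Read a chest PA radiograph from a DICOM file (hypothetical path) and
# obtain its pixel data as a two-dimensional numpy array.
ds = pydicom.dcmread("chest_pa.dcm")
image = ds.pixel_array
print(image.shape, ds.get("Modality", "unknown"))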
[0037] FIGS. 4 and 5 show exemplary CXR images (402 and 502) that may be interpreted by the image interpretation engine 202. Image 402, as shown in FIG. 4, was acquired from a patient with a relatively straight spine who was in an upright position, such that the determination of the centerline 404 is relatively straightforward, either by analyzing the objects in the image or by identifying the spine as the centerline, as will be described in more detail later. FIG. 5 shows a more complex case involving a skewed image 502. In such a case, a framework in accordance with the present disclosure can still identify the skew angle and compensate for it by automatically identifying the centerline 504 based on anatomical features in the image 502, so that the CTR may be calculated, as will be described in more detail later.
[0038] Referring back to FIG. 3, at 306, the image interpretation engine 202 automatically identifies the measurement elements in the image that are relevant to computing the geometric feature of interest. These measurement elements may be points, lines or planes on which geometric measurements are based. As shown in FIGS. 4 and
5, the relevant measurement elements for computing a CTR in a CXR posteroanterior (PA) image (402 or 502) may include, for example, the apex point of the heart (408 or 508), the lateral extrema point of the heart (406 or 506), the lateral extrema point of the right lung (410 or 510), the lateral extrema point of the left lung (412 or 512), as well as the torso centerline (404 or 504).
[0039] Because these measurement elements are relatively well defined, advanced image analysis techniques may be applied to identify them automatically, without user intervention. In one implementation, the image interpretation engine 202 applies a trained classifier to recognize or predict the positions of the relevant measurement elements in the image. The classifier may be trained by applying a machine learning technique to a training database that includes training cases provided by experts or cases previously identified by the present framework. The training cases may include, for example, a sample set of clinical CXR images with pre-identified measurement elements. Exemplary machine learning techniques include Bayesian networks, support vector machines, decision tree learning, etc. Other image analysis techniques, such as segmentation, may also be performed.
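A schematic illustration of such training, assuming (purely for illustration) a multi-output random-forest regressor that maps downsampled pixel features to landmark coordinates; the array shapes and random data stand in for annotated CXRs and are not the disclosure's actual detector:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Schematic only: learn a mapping from image features to landmark positions.
rng = np.random.default_rng(0)
X_train = rng.random((200, 32 * 32))   # 200 images, 32x32 downsampled pixels
y_train = rng.random((200, 10))        # 5 landmarks x (x, y) per image

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Predict five (x, y) landmark positions for a new image.
landmarks = model.predict(rng.random((1, 32 * 32))).reshape(5, 2)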
[0040] In addition, when one or more of these relevant measurement elements are missing or occluded by, for example, other anatomical or pathological structures, the classifier may be trained to verify the consistency of measurement elements amongst themselves, or to recognize one or more supporting landmarks from which occluded or missing measurement elements may be inferred. Examples of supporting landmarks for CTR computation include boundary points at the mid- or upper-level of the lung, boundary points along the heart, and landmarks on other organs such as the liver or spleen, etc.
[0041] Training may be performed based on a training database of images annotated by experts, wherein at least some of the images include occluded measurement elements. In one implementation, the classifier comprises a regression model that represents statistical relationships for prediction. Exemplary regression analysis techniques include, but are not limited to, linear regression, least squares regression, Bayesian linear regression, least absolute deviations, distance metric learning, nonparametric regression, etc. It should be understood that other types of classifiers may also be trained.

[0042] For a given test case, the system may use the regression model to check the consistency of the measurement elements amongst themselves, or to predict missing or occluded measurement elements. Alternatively, similar images may be retrieved from a database of pre-annotated images, and annotated measurement elements from these images may be used to infer the locations of the missing or occluded measurement elements. Similarity may be determined based on, for example, similar detected measurement elements, image features or supporting landmarks. The database may include images annotated by experts or previous cases identified by the present framework.
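A minimal sketch of the retrieval alternative, in which the nearest pre-annotated case lends its annotation; the per-image feature vectors, database layout and element name are hypothetical stand-ins:

import numpy as np

# Hypothetical retrieval sketch: borrow the annotation of the most similar
# annotated case, with similarity taken as Euclidean distance between
# (assumed) per-image feature vectors.
def infer_missing_element(query_feat, db_feats, db_annotations, element):
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return db_annotations[int(np.argmin(dists))][element]

db_feats = np.random.rand(50, 64)                       # 50 annotated cases
db_annotations = [{"heart_apex": (400 + i, 520)} for i in range(50)]
apex = infer_missing_element(np.random.rand(64), db_feats, db_annotations,
                             "heart_apex")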
[0043] In accordance with one implementation, at least one of the measurement elements to be automatically identified includes a centerline. In the case of computing a CTR in a CXR posteroanterior (PA) image, one such measurement element is an optional torso centerline. The torso centerline facilitates the measurement of the maximum transverse diameter of the heart in a skewed image, as will be described in more detail later. The torso centerline may be extracted by automatically detecting two end points, multiple points or a curve along the spine. Alternatively, the torso centerline may be derived by identifying a plurality of primary and supporting landmarks. This is especially useful in cases where the spine is not a good indicator of the centerline, such as in scoliotic patients. Primary landmarks are those located along the centerline (e.g., on the spine), while supporting landmarks are those located outside the centerline (e.g., on the sternum, liver or head) but whose geometric relation to the centerline allows the centerline to be inferred. For example, the mid-point between landmarks located on the ribs on both sides, or between the liver dome and the splenic dome, may be used to define the centerline of the torso.
[0044] To extract the centerline from supporting landmarks, machine learning may be performed to learn a predictor. For example, a regression model may be learned to predict the orientation of the centerline. Alternatively, a similar case may be retrieved from an annotated database which includes images that are marked by experts to indicate where the landmarks and centerline are located. At run-time, given a test image, the landmarks are detected, and the centerline may either be predicted (or regressed) or "borrowed" from the most similar case(s) in the database.
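For instance, one simple realization of this idea fits a least-squares line through the midpoints of paired left/right rib landmarks; the coordinates below are illustrative only:

import numpy as np

# Illustrative (x, y) coordinates for paired right/left rib landmarks.
right_ribs = np.array([[150.0, 200.0], [160.0, 300.0], [170.0, 400.0]])
left_ribs = np.array([[650.0, 205.0], [645.0, 305.0], [640.0, 402.0]])

# Midpoints of the pairs are expected to lie on the torso centerline.
midpoints = (right_ribs + left_ribs) / 2.0

# Fit x = m*y + c (x as a function of y, since the line is near-vertical).
m, c = np.polyfit(midpoints[:, 1], midpoints[:, 0], deg=1)
skew_deg = np.degrees(np.arctan(m))  # deviation of the centerline from vertical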
[0045] Referring back to FIG. 3, at 308, the geometric feature calculation unit 206 automatically computes the geometric feature based on the measurement elements that are automatically identified by the image interpretation engine 202. The geometric feature computed by the image interpretation system may be a geometric measurement, such as a distance between measurement points or an angle between lines determined by measurement points, or a functional value of geometric measurements (e.g., ratio, area, volume, etc.). In one implementation, the geometric feature calculation unit 206 computes the Cardio-Thoracic Ratio (CTR), which measures the enlargement of the heart and is defined as the ratio of the maximum transverse diameter of the heart (L1) to the internal diameter of the thoracic cage (L2):

CTR = L1 / L2 = (a + b) / L2

where a + b is the maximum transverse diameter of the heart, measured between the lateral extrema point and the apical lateral extrema point. The transverse diameter is split into the component distances a and b because the two measurement points lie at different vertical positions, on opposite sides of the centerline.
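A direct transcription of this formula; the pixel distances are illustrative and chosen to give the 48% example value mentioned earlier:

def cardiothoracic_ratio(a: float, b: float, l2: float) -> float:
    """CTR = (a + b) / L2, returned as a fraction of 1."""
    if l2 <= 0:
        raise ValueError("thoracic diameter L2 must be positive")
    return (a + b) / l2

# Illustrative pixel distances; (55 + 95) / 310 is roughly 0.48, i.e. 48%.
print(f"CTR = {cardiothoracic_ratio(a=55.0, b=95.0, l2=310.0):.0%}")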
[0046] As shown in FIG. 4, the maximum transverse diameter (a) of the right side of the heart is defined by the perpendicular distance of the measurement point 406 from the centerline 404, while the maximum transverse diameter (b) of the left side of the heart is defined by the perpendicular distance of the measurement point 408 from the centerline 404. The internal diameter of the thoracic cage (L2) may be obtained by calculating the distance between the measurement points 410 and 412. As discussed previously with respect to step 306 of FIG. 3, the measurement elements 404, 406, 408, 410 and 412 may be automatically identified by the image interpretation engine 202.
[0047] FIG. 5 shows an exemplary skewed image 502 taken from a patient who was not in a perfectly upright position (e.g., due to disease, clinical condition, scoliosis, weakness, etc.). In such a case, the centerline 504 is not perfectly perpendicular to the top or bottom of the image 502. If horizontal lines were used to measure the diameters L1 and L2, a calculation error would result, eventually leading to misdiagnosis. To compensate for the image skew, the respective maximum transverse diameters (a and b) are determined by projecting the measurement points (506 and 508) onto the centerline 504 to obtain the perpendicular distances. Alternatively, the centerline 504 may be used to obtain the skew angle of the image 502. The image 502 is then rotated to compensate for the skew angle and align the centerline parallel to the Y-axis. The diameters a and b are then determined by orthogonally projecting the measurement points onto the X-axis and taking the differences between the X-coordinates. The internal diameter of the thoracic cage (L2) may likewise be obtained by orthogonally projecting the measurement points 510 and 512 onto the X-axis and determining the difference between their X-coordinates.
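The perpendicular-distance variant can be sketched as follows, with the centerline given by two detected spine points; all coordinates are illustrative:

import numpy as np

def perpendicular_distance(point, line_p1, line_p2) -> float:
    """Distance from `point` to the infinite line through line_p1 and line_p2."""
    p, a, b = (np.asarray(v, dtype=float) for v in (point, line_p1, line_p2))
    d = b - a
    cross = d[0] * (p - a)[1] - d[1] * (p - a)[0]  # 2-D cross product (scalar)
    return float(abs(cross) / np.linalg.norm(d))

# Illustrative coordinates: a slightly skewed centerline through two spine points.
top, bottom = (400.0, 50.0), (430.0, 900.0)
a = perpendicular_distance((340.0, 420.0), top, bottom)  # right heart border (506)
b = perpendicular_distance((540.0, 500.0), top, bottom)  # left heart border (508)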
[0048] At 310, the measurement elements identified by the image interpretation engine 202 are provided to the user for review. In addition, the geometric feature computed by the geometric feature calculation unit 206 may also be provided to the user for review. The user can either accept the proposed measurement elements and geometric feature or, in the case of a suboptimal proposal, manually modify them. Because the user can interact with the present framework in this way, systematic errors in the results may be identified and corrected.
[0049] At 312, the user input is checked to see if the user has modified any of the measurement elements. If modifications were made, the geometric feature calculation unit 206 computes the final geometric feature based on the user-modified measurement elements at 308. Steps 308, 310, and 312 are repeated until the user is satisfied with the results and no modifications are made.
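Schematically, steps 308 through 312 form the following loop; propose_elements, measure and present_for_review are hypothetical stand-ins for the interpretation engine 202, the calculation unit 206 and the HCI 203, and cardiothoracic_ratio is the function sketched above:

def review_loop(image, propose_elements, measure, present_for_review):
    # `measure` turns measurement elements into (a, b, l2) distances; all
    # callables here are hypothetical stand-ins, not the disclosed units.
    elements = propose_elements(image)                  # step 306
    while True:
        ctr = cardiothoracic_ratio(*measure(elements))  # step 308
        modified = present_for_review(elements, ctr)    # steps 310 and 312
        if modified is None:                            # accepted unchanged
            return elements, ctr                        # step 314 follows
        elements = modified                             # recompute with edits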
[0050] At 314, the final geometric feature is output. In one implementation, the results processing unit 208 stores the final geometric feature and/or distributes it to adjacent systems (e.g., RIS, HIS, PACS, etc.). In addition, the final geometric feature may be displayed to the user via the HCI 203. For example, the final geometric feature may be visually overlaid on the interpreted image, along with the measurement elements used to compute it. In some implementations, the final geometric feature value (e.g., the CTR in percent (%)) may be stored in the system 101 or propagated to adjacent systems (e.g., RIS, HIS, PACS, etc.).
[0051] While the present disclosure has been described in detail with reference to exemplary embodiments, those skilled in the art will appreciate that various
modifications and substitutions can be made thereto without departing from the spirit and scope of the disclosure as set forth in the appended claims. For example, elements and/or features of different exemplary embodiments may be combined with each other and/or substituted for each other within the scope of this disclosure and appended claims.

Claims

WHAT IS CLAIMED IS:
1. A method for automatically calculating a geometric feature, comprising:
(i) receiving at least one digital medical image from a digital image source;
(ii) automatically identifying within the digital medical image, by a processor, a plurality of measurement elements relevant to computing the geometric feature;
(iii) automatically computing, by the processor, the geometric feature based upon the plurality of measurement elements;
(iv) providing the measurement elements to a user for review; and
(v) if the user modified the measurement elements, computing, by the processor, a final geometric feature based upon the user-modified measurement elements.
2. The method of claim 1 wherein the digital medical image comprises a chest X-ray image.
3. The method of claim 1 wherein the geometric feature comprises a cardio-thoracic ratio (CTR).
4. The method of claim 3 wherein the relevant measurement elements comprise an apex point and a lateral extrema point of a heart and lateral extrema points of a right lung and a left lung.
5. The method of claim 1 wherein automatically identifying the plurality of measurement elements comprises applying an image analysis technique to recognize the measurement elements.
6. The method of claim 5 wherein automatically identifying the plurality of measurement elements comprises applying a trained classifier to recognize the measurement elements.
7. The method of claim 1 wherein automatically identifying the plurality of measurement elements comprises applying a trained classifier to recognize one or more supporting landmarks from which one or more occluded or missing measurement elements of the image are inferred.
8. The method of claim 7, further comprising training the classifier by performing regression analysis.
9. The method of claim 1 wherein automatically identifying the plurality of measurement elements comprises:
retrieving one or more similar images from a database of pre-annotated images; and
inferring one or more locations of occluded or missing measurement elements of the image from annotated measurement elements of the similar images.
10. The method of claim 1 wherein at least one of the plurality of
measurement elements comprises a centerline of an anatomical structure.
11. The method of claim 10 wherein automatically identifying the plurality of measurement elements comprises detecting a plurality of primary landmarks along the centerline.
12. The method of claim 10 wherein automatically identifying the plurality of measurement elements comprises detecting a plurality of secondary landmarks from which the centerline is inferred.
13. The method of claim 10 wherein the anatomical structure comprises a torso.
14. The method of claim 13 wherein automatically identifying the plurality of measurement elements comprises detecting two end points, multiple points or a curve along a spine.
15. A non-transitory computer readable medium embodying a program of instructions executable by a machine to perform steps for calculating a geometric feature, the steps comprising:
(i) receiving at least one digital medical image from a digital image source;
(ii) automatically identifying within the digital medical image a plurality of measurement elements relevant to computing the geometric feature;
(iii) automatically computing the geometric feature based upon the plurality of measurement elements;
(iv) providing the measurement elements to a user for review; and
(v) if the user modified the measurement elements, computing a final geometric feature based upon the user-modified measurement elements.
16. The non-transitory computer readable medium of claim 15 wherein the geometric feature comprises a cardio-thoracic ratio (CTR).
17. The non-transitory computer readable medium of claim 15 wherein at least one of the plurality of measurement elements comprises a centerline of an anatomical structure.
18. An image interpretation system, comprising:
a memory device for storing computer readable program code; and
a processor in communication with the memory device, the processor being operative with the computer readable program code to:
(i) receive at least one digital medical image from a digital image source;
(ii) automatically identify within the digital medical image a plurality of measurement elements relevant to computing the geometric feature;
(iii) automatically compute the geometric feature based upon the plurality of measurement elements;
(iv) provide the measurement elements to a user for review; and
(v) if the user modified the measurement elements, compute a final geometric feature based upon the user-modified measurement elements.
19. The system of claim 18 wherein the geometric feature comprises a cardio-thoracic ratio (CTR).
20. The system of claim 18 wherein at least one of the plurality of
measurement elements comprises a centerline of an anatomical structure.
PCT/US2011/058882 2010-11-02 2011-11-02 Automatic image-based calculation of a geometric feature WO2012061452A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2013537773A JP5837604B2 (en) 2010-11-02 2011-11-02 Geometric feature automatic calculation method, non-transitory computer-readable medium, and image interpretation system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US40914410P 2010-11-02 2010-11-02
US61/409,144 2010-11-02

Publications (1)

Publication Number Publication Date
WO2012061452A1 true WO2012061452A1 (en) 2012-05-10

Family

ID=44936575

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/058882 WO2012061452A1 (en) 2010-11-02 2011-11-02 Automatic image-based calculation of a geometric feature

Country Status (2)

Country Link
JP (1) JP5837604B2 (en)
WO (1) WO2012061452A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006175036A (en) * 2004-12-22 2006-07-06 Fuji Photo Film Co Ltd Rib shape estimating apparatus, rib profile estimating method, and its program
JP2007222325A (en) * 2006-02-22 2007-09-06 Canon Inc Information processing apparatus and method and program for controlling the same
JP4964171B2 (en) * 2008-02-29 2012-06-27 富士フイルム株式会社 Target region extraction method, apparatus, and program
WO2010095508A1 (en) * 2009-02-23 2010-08-26 コニカミノルタエムジー株式会社 Midline determining device and program
JP2010238040A (en) * 2009-03-31 2010-10-21 Konica Minolta Medical & Graphic Inc Image measuring apparatus and program
JP2010277231A (en) * 2009-05-27 2010-12-09 Konica Minolta Medical & Graphic Inc Data processing apparatus, method, and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060122480A1 (en) * 2004-11-22 2006-06-08 Jiebo Luo Segmenting occluded anatomical structures in medical images
US20070047789A1 (en) * 2005-08-30 2007-03-01 Agfa-Gevaert N.V. Method of Constructing Gray Value or Geometric Models of Anatomic Entity in Medical Image
EP1780671A1 (en) * 2005-11-01 2007-05-02 Medison Co., Ltd. Image processing system and method for editing contours of a target object using multiple sectional images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHAOHUA KEVIN ZHOU ET AL: "A boosting regression approach to medical anatomy detection", CVPR '07. IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION; 18-23 JUNE 2007; MINNEAPOLIS, MN, USA, IEEE, PISCATAWAY, NJ, USA, 1 June 2007 (2007-06-01), pages 1 - 8, XP031114396, ISBN: 978-1-4244-1179-5 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9636181B2 (en) 2008-04-04 2017-05-02 Nuvasive, Inc. Systems, devices, and methods for designing and forming a surgical implant
US11453041B2 (en) 2008-04-04 2022-09-27 Nuvasive, Inc Systems, devices, and methods for designing and forming a surgical implant
US10500630B2 (en) 2008-04-04 2019-12-10 Nuvasive, Inc. Systems, devices, and methods for designing and forming a surgical implant
US11207132B2 (en) 2012-03-12 2021-12-28 Nuvasive, Inc. Systems and methods for performing spinal surgery
US9848922B2 (en) 2013-10-09 2017-12-26 Nuvasive, Inc. Systems and methods for performing spine surgery
WO2015114834A1 (en) * 2014-02-03 2015-08-06 株式会社島津製作所 Image processing method
JPWO2015114834A1 (en) * 2014-02-03 2017-03-23 株式会社島津製作所 Image processing method
CN105960203A (en) * 2014-02-03 2016-09-21 株式会社岛津制作所 Image processing method
US9913669B1 (en) 2014-10-17 2018-03-13 Nuvasive, Inc. Systems and methods for performing spine surgery
US10433893B1 (en) 2014-10-17 2019-10-08 Nuvasive, Inc. Systems and methods for performing spine surgery
US10485589B2 (en) 2014-10-17 2019-11-26 Nuvasive, Inc. Systems and methods for performing spine surgery
US11213326B2 (en) 2014-10-17 2022-01-04 Nuvasive, Inc. Systems and methods for performing spine surgery
US20190073510A1 (en) * 2015-03-18 2019-03-07 David R. West Computing technologies for image operations
US10614285B2 (en) 2015-03-18 2020-04-07 Proscia Inc. Computing technologies for image operations
WO2016149468A1 (en) * 2015-03-18 2016-09-22 Proscia Inc. Computing technologies for image operations
US11244458B2 (en) 2018-01-29 2022-02-08 Fujifilm Corporation Image processing apparatus, image processing method, and program
US20220245793A1 (en) * 2021-01-29 2022-08-04 GE Precision Healthcare LLC Systems and methods for adaptive measurement of medical images
US11875505B2 (en) * 2021-01-29 2024-01-16 GE Precision Healthcare LLC Systems and methods for adaptive measurement of medical images

Also Published As

Publication number Publication date
JP5837604B2 (en) 2015-12-24
JP2014502176A (en) 2014-01-30

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11781951; Country of ref document: EP; Kind code of ref document: A1)

ENP Entry into the national phase (Ref document number: 2013537773; Country of ref document: JP; Kind code of ref document: A)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 11781951; Country of ref document: EP; Kind code of ref document: A1)