WO2014123395A1 - Image display to display 3d image and sectional images - Google Patents


Info

Publication number
WO2014123395A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, display, sectional, point, displaying
Application number
PCT/KR2014/001080
Other languages
French (fr)
Inventor
Youngkyu Jin
Original Assignee
Ewoosoft Co., Ltd.
Application filed by Ewoosoft Co., Ltd. filed Critical Ewoosoft Co., Ltd.
Priority to US14/766,747 (US9665990B2)
Priority to KR1020157024562A (KR101731589B1)
Publication of WO2014123395A1

Classifications

    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • A61B 6/032: Transmission computed tomography [CT]
    • A61B 6/463: Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
    • A61B 6/466: Displaying means of special interest adapted to display 3D data
    • A61B 6/467: Apparatus with special arrangements for interfacing with the operator or the patient, characterised by special input means
    • A61B 6/51
    • A61B 6/5223: Devices using data or image processing, generating planar views from image data, e.g. extracting a coronal view from a 3D image
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/0012: Biomedical image inspection
    • G09G 5/14: Display of multiple viewports
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/10116: X-ray image
    • G06T 2207/30036: Dental; Teeth
    • G06T 2210/41: Medical
    • G06T 2219/028: Multiple view windows (top-side-front-sagittal-orthogonal)
    • G09G 2380/08: Biomedical applications


Abstract

The disclosure relates to displaying a three-dimensional image and corresponding sectional images. A display device may include a data storage, an input interface, an image display, and a processor. The data storage may store image data. The input interface may receive position information of a first image point from a user. The image display may display a 3D image in a first display region and a sectional image of the 3D image in a second display region of the screen. The processor may generate the 3D image data to display the 3D image by locating the first image point of the 3D image at a first position in the first display region, and the sectional image data to display the sectional image of the 3D image by locating a second image point of the sectional image at a second position in the second display region.

Description

IMAGE DISPLAY TO DISPLAY 3D IMAGE AND SECTIONAL IMAGES
The present disclosure relates to an image display, and more particularly to an image display capable of displaying a three-dimensional image and sectional images concurrently in defined relation.
An X-ray computerized tomography (CT) imaging apparatus emits X-rays to an object and detects the X-rays transmitted through the object using an X-ray detector. Based on the detected X-rays, the X-ray CT imaging apparatus generates image data. The X-ray CT imaging apparatus produces and displays a three-dimensional (3D) image and corresponding sectional images (e.g., axial images, sagittal images, and coronal images) based on the generated image data.
FIG. 1 illustrates dental X-ray images displayed on a display of a typical image display apparatus. As shown in FIG. 1, screen 10 includes four image display regions R1, R2, R3 and R4 for displaying a volume (e.g., three-dimensional) image 3D, an axial image AX, a coronal image CR, and a sagittal image SG. Initially, these four images are displayed with a default display condition. In order to examine a tooth of interest accurately, a user (e.g., a dentist) must change the display conditions of the four images.
To change the display conditions of each image, the user must perform at least seven steps: i) specifying a height H of the tooth of interest on the coronal image CR (FIG. 2), ii) specifying a center C of the tooth of interest on the axial image AX (FIG. 3), iii) rotating one of two axes, perpendicular to each other at the center C, by a rotating angle R so that the axis becomes parallel to the tangent of the dental arch (FIG. 4), iv) magnifying the sectional images (FIG. 5), v) moving the axial image so that the tooth of interest is located at the center of image display region R2 (FIG. 6), vi) moving the coronal image so that the tooth of interest is located at the center of image display region R3 (FIG. 7), and vii) moving the sagittal image so that the tooth of interest is located at the center of image display region R4 (FIG. 8).
As described above, after selecting the tooth of interest, the user performs a relatively large number of steps, including rotating the axis, determining a magnification ratio, and moving the image in each display region, in order to observe the 3D image and the sectional images in association with one another. Further, the user must be highly skilled in the use of the image display device to generate the desired views. Even when used by an expert, the large number of steps described above wastes time and invites error.
Manipulating an image, such as a dental image, between several views to focus on a particular area of a tooth of interest thus requires the user to perform many steps before obtaining the desired image. Such excessive manipulation steps require time, skill and training on the part of the user, and increase the possibility of error.
A volume image and related sectional images may be shown in association with one another by arranging a designation point on the volume image and the corresponding points on the sectional images, rotating the volume image in consideration of the designation point, and magnifying the sectional images concurrently.
In accordance with at least one embodiment, an image display apparatus may be provided for producing and displaying a 3D image and corresponding sectional images. The image display apparatus may include a data storage, an input interface, an image display, and a processor. The data storage may be configured to store image data. The input interface may be configured to receive position information of a first image point from a user. The image display may be configured to display, based on 3D image data and sectional image data, i) a three-dimensional (3D) image in a first display region of a screen, and ii) a sectional image of the 3D image in a second display region of the screen. The processor may be configured to generate, with the image data stored in the data storage, i) the 3D image data to display the 3D image by locating the first image point of the 3D image at a first position in the first display region and ii) the sectional image data to display the sectional image of the 3D image by locating a second image point of the sectional image at a second position in the second display region. The second image point may correspond to the first image point, and the second position may correspond to the first position.
In accordance with at least one embodiment, a method may be provided for displaying images. The method may include displaying a second three-dimensional (3D) image in a first display region by locating a first image point of the second 3D image at a first position in the first display region of a screen, and displaying sectional images of the second 3D image in a plurality of second display regions of the screen, respectively, by locating a second image point of each sectional image at a second position in each second display region. The second image points may correspond to the first image point, and the second positions may correspond to the first position.
With this arrangement, a user does not require substantial skill or training to obtain appropriate images in association with one another for diagnosis and treatment, and the time required to obtain the desired images is reduced.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate one or more embodiments described herein and, together with the description, explain these embodiments. In the drawings:
FIG. 1 is a schematic diagram showing an example of displaying a three-dimensional (3D) image and sectional image on a screen of a typical X-ray image display device;
FIG. 2 is a schematic diagram showing a step of specifying a height H of a tooth of interest on a coronal image CR displayed on a screen of a typical X-ray image display device;
FIG. 3 is a schematic diagram showing a step of specifying a center C of a tooth of interest on an axial image AX;
FIG. 4 is a schematic diagram showing a step of rotating one of two axes, perpendicular to each other at the center of a tooth of interest, by a rotating angle R, on an axial image AX;
FIG. 5 is a schematic diagram showing a step of magnifying sectional images AX, CR and SG;
FIG. 6 to FIG. 8 are schematic diagrams showing movement of an axial image, a coronal image and a sagittal image in each display region;
FIG. 9 is a schematic diagram showing a display device in accordance with at least one embodiment;
FIG. 10 and FIG. 11 are schematic diagrams for explaining image data of a CT image;
FIG. 12 is a schematic diagram showing displaying a 3D image and a plurality of sectional images in association with one another on a screen of a display unit in accordance with at least one embodiment;
FIG. 13 is a schematic diagram showing a designation of a first image point IP1 on a 3D image displayed on a screen in accordance with at least one embodiment;
FIG. 14 is a schematic diagram for explaining a method of determining a second image point, a pair of axial axes and calculating a rotation angle O of the 3D image in accordance with at least one embodiment; and
FIG. 15 is a schematic diagram showing displaying another 3D image and corresponding sectional images on a screen of a display device, after designation of a first image point IP1 on a previous 3D image, in accordance with at least one embodiment.
An image display apparatus in accordance with the present invention includes: a data storage configured to store image data; an input interface configured to receive position information of a first image point from a user; an image display configured to display, based on 3D image data and sectional image data, i) a three-dimensional (3D) image in a first display region of a screen, and ii) a sectional image of the 3D image in a second display region of the screen; and a processor configured to generate, with the image data stored in the data storage, i) the 3D image data to display the 3D image by locating the first image point of the 3D image at a first position in the first display region and ii) the sectional image data to display the sectional image of the 3D image by locating a second image point of the sectional image at a second position in the second display region, wherein the second image point corresponds to the first image point and the second position corresponds to the first position.
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention.
In accordance with at least one embodiment, a display device may produce and display a 3D image of an object and corresponding sectional views based on an image point designated on a part of interest in the object. Particularly, the display device may detect a single user input to designate an image point on a part (e.g., a tooth) of interest in an object. Based on the detected single user input, the display device may produce and display a 3D image and corresponding multiplanar reconstruction (MPR) views of the part of interest without requiring further user inputs or user interaction. Hereinafter, the overall configuration and operation of such a display device will be described with reference to FIG. 9. For convenience and ease of understanding, the display device will be described as displaying a 3D image and corresponding sectional images of dental structures. However, the present invention is not limited thereto. The display device may be applied to producing and displaying 3D images and sectional views of any object.
FIG. 9 illustrates a display device in accordance with at least one embodiment. Referring to FIG. 9, display device 100 displays a three-dimensional (3D) image and at least one sectional image in association with one another on a screen. Display device 100 may include data storage 110, image display 120, processor 130 and user input interface 140. Display device 100 may be configured and implemented via a typical computer.
Data storage 110 may store 3D X-ray image data. The stored 3D X-ray image data may be generated based on image signals obtained from an X-ray CT imaging apparatus. As an example, FIG. 10 illustrates the generation of 3D X-ray image data. As shown in FIG. 10, image signals may be obtained by emitting an X-ray to an object (OB) and detecting the X-ray transmitted through the object (OB) using an X-ray detector. The obtained image signals may be processed to generate the 3D X-ray image data. After the generation of 3D X-ray image data, the 3D X-ray image data may be stored in data storage 110. The 3D X-ray image data may include information on voxel values and CT numbers (e.g., Hounsfield scale).
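For illustration, such 3D X-ray image data might be held in memory as a plain 3D array of CT numbers together with its voxel spacing. The following is a minimal sketch only; the class name and array layout are hypothetical, not prescribed by this disclosure:

```python
import numpy as np

class CTVolume:
    # Hypothetical in-memory form of the 3D X-ray image data: one CT
    # number (Hounsfield unit) per voxel, plus the physical voxel size
    # needed to convert indices to millimetres.
    def __init__(self, ct_numbers: np.ndarray, voxel_size_mm: float):
        assert ct_numbers.ndim == 3             # (z, y, x): slice, row, column
        self.ct = ct_numbers.astype(np.int16)   # HU values, roughly -1000..3000
        self.voxel_size_mm = voxel_size_mm

# Example: a synthetic 256^3 volume with 0.2 mm isotropic voxels.
volume = CTVolume(np.zeros((256, 256, 256), dtype=np.int16), voxel_size_mm=0.2)
```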
User input interface 140 represents one or more devices that allow a user to interact with display device 100 such as, but not limited to, touch screen input, mouse input, and keyboard input. Advantageously, some embodiments employ a mouse input to select portions of the images on the screen for further processing.
In accordance with at least one embodiment, user input interface 140 may receive a user input for designating an image point that indicates a part of interest in an object. For example, such a user input may be a single mouse click that points to a part of interest in an image displayed on a screen of image display 120. User input interface 140 may transfer such a user input to processor 130 to produce a 3D image and corresponding sectional images in association with the image point made through the user input.
Processor 130 may produce a CT image of the object (OB) based on the 3D X-ray image data stored in data storage 110 and a user input detected by user input interface 140. The CT image may include a 3D image and corresponding sectional images. As described, the 3D X-ray image data may include information on voxel values and CT numbers (e.g., Hounsfield scale). Processor 130 may process such 3D X-ray image data and produce 3D image data for displaying a 3D image of an object and sectional image data for displaying a plurality of sectional images of the object. For example, the sectional images may include an axial image, a coronal image, and a sagittal image.
The sectional images may be referred to as multiplanar reconstruction views, the axial image may be referred to as an image of an axial plane view, the coronal image may be referred to as an image of a coronal plane view, and the sagittal image may be referred to as an image of a sagittal plane view or a transverse plane view. FIG. 11 illustrates an axial image produced based on the 3D X-ray image data stored in data storage 110. Besides the axial image, processor 130 may produce the coronal images and the sagittal images directly or indirectly from the 3D X-ray image data.
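In the simplest case, producing the three orthogonal sectional views from the stored voxel data reduces to slicing the volume along each axis. A sketch, assuming the (z, y, x) layout of the hypothetical CTVolume above:

```python
import numpy as np

def mpr_views(ct: np.ndarray, z: int, y: int, x: int):
    # Extract the three orthogonal planes passing through voxel (z, y, x).
    # Assumes the volume is stored as (z, y, x); an oblique (rotated)
    # section would instead need interpolation along an arbitrary plane,
    # e.g. with scipy.ndimage.map_coordinates.
    axial = ct[z, :, :]      # horizontal plane at height z
    coronal = ct[:, y, :]    # front-to-back plane
    sagittal = ct[:, :, x]   # left-to-right plane
    return axial, coronal, sagittal
```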
Processor 130 may also produce the 3D image and the corresponding sectional views at predetermined magnification ratios. Such magnification ratios may be stored in data storage 110. The magnification ratios may be initially determined by a system designer as a default magnification ratio or set by a user in response to a user input made through user input interface 140.
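As one way to apply such a stored ratio, a sectional image can simply be resampled before display. A minimal sketch, assuming a scipy-based implementation and a hypothetical default ratio:

```python
import numpy as np
from scipy import ndimage

DEFAULT_MAGNIFICATION = 2.0  # hypothetical default set by the system designer

def magnify(section: np.ndarray, ratio: float = DEFAULT_MAGNIFICATION) -> np.ndarray:
    # Resample a 2D sectional image by the given magnification ratio
    # (bilinear interpolation); the same ratio would typically be
    # applied to all three sectional views.
    return ndimage.zoom(section, ratio, order=1)
```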
Image display 120 may receive the 3D image data and the sectional image data from processor 130 and display the 3D image and the sectional images. For example, image display 120 may display the 3D image data in a first display region and the sectional image data in at least one second display region. In accordance with at least one embodiment, image display 120 may provide first display region R1 for displaying 3D image data and three second display regions R2, R3, and R4 for displaying sectional image data. However, the present invention is not limited thereto. Hereinafter, displaying a 3D image and corresponding sectional images on image display 120 will be described with reference to FIG. 12.
FIG. 12 illustrates an image display that displays a 3D image and sectional images in accordance with at least one embodiment. Referring to FIG. 12, image display 120 may receive the 3D image data and form first volume X-ray image 3D1 within first display region R1 of screen 10, with first image point IP1 of volume X-ray image 3D1 positioned at a first position in first display region R1. Image display 120 may display first volume X-ray image 3D1 with a contour surface passing through first image point IP1 in parallel to screen 10. That is, the contour surface passing through first image point IP1 may face a user in front of screen 10. In other words, first image point IP1 faces the front of screen 10.
Furthermore, image display 120 may display a plurality of sectional images: axial image AX1, coronal image CR1, and sagittal image SG1, within a plurality of second display regions R2, R3, and R4, respectively. For example, image display 120 may form axial image AX1, coronal image CR1, and sagittal image SG1 based on first volume X-ray image 3D1 in different directions. Image display 120 may i) display axial image AX1 in display region R2 with image point IP2a at a second position of display region R2, ii) display coronal image CR1 in display region R3 with image point IP2c at a second position of display region R3, and iii) display sagittal image SG1 in display region R4 with image point IP2s at a second position of display region R4. Second image points IP2a, IP2c and IP2s correspond to first image point IP1, and the second positions correspond to the first position.
For example, if the first position is the center of first display region R1, the second positions are also the centers of display regions R2, R3 and R4, respectively. The cross-sectional images in different directions may be displayed at predetermined magnification ratios stored in data storage 110. The stored magnification ratios may be a default magnification ratio set by at least one of a system designer and a user. However, the present invention is not limited thereto. For example, a user may be allowed to input or set the magnification ratio through user input interface 140. Desirably, the magnification ratios of all the sectional images are the same value, but the present invention is not limited thereto.
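The four-region screen layout described above can be sketched with a plotting library; here matplotlib stands in for the actual display, and `volume_render` is a placeholder 2D rendering of the 3D view (this disclosure does not prescribe a rendering method):

```python
import numpy as np
import matplotlib.pyplot as plt

def show_regions(volume_render, axial, coronal, sagittal):
    # Lay out the four display regions: R1 (3D view) and R2-R4
    # (axial, coronal, sagittal), mirroring screen 10 in FIG. 12.
    fig, axes = plt.subplots(2, 2, figsize=(8, 8))
    images = (volume_render, axial, coronal, sagittal)
    titles = ("R1: 3D image", "R2: axial AX1", "R3: coronal CR1", "R4: sagittal SG1")
    for ax, img, title in zip(axes.flat, images, titles):
        ax.imshow(img, cmap="gray")
        ax.set_title(title)
        ax.axis("off")
    plt.show()
```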
Processor 130 may generate the 3D image data from the 3D X-ray image data stored in data storage 110 so as to locate first image point IP1 on volume X-ray image 3D1 at a first position in first display region R1 and to orient the contour surface passing through first image point IP1 so that it faces the user. Such a contour surface passing through first image point IP1 may be displayed in parallel to screen 10. For example, processor 130 may produce the 3D image data based on the 3D X-ray image data in data storage 110 and a user input detected by user input interface 140. The user input may be a single mouse click that points to a part of interest in a target object. Processor 130 may receive position information of the user input made through user input interface 140 and produce the 3D image data based on the position information to display the image part associated with first image point IP1 positioned at a first position of first display region R1. For example, the first position may be the center of first display region R1.
Processor 130 may also generate the sectional image data for displaying axial image AX1, coronal image CR1, and sagittal image SG1 so as to locate the second image points, which correspond to first image point IP1, at second positions in second display regions R2, R3, and R4, respectively. For example, the second positions may be the centers of the respective second display regions R2, R3, and R4, but the present invention is not limited thereto.
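Placing a second image point at the center of its display region amounts to choosing the displayed window of each sectional image around that point. A simplified sketch (edge handling reduced to clamping, so points near a border are only approximately centered):

```python
import numpy as np

def centered_window(section: np.ndarray, point_rc, size: int = 128) -> np.ndarray:
    # Return a size x size window of `section` centered on the given
    # (row, col) point, so the point lands at the center of its display
    # region; near the image borders the window is simply clamped.
    half = size // 2
    r0 = int(np.clip(point_rc[0] - half, 0, max(section.shape[0] - size, 0)))
    c0 = int(np.clip(point_rc[1] - half, 0, max(section.shape[1] - size, 0)))
    return section[r0:r0 + size, c0:c0 + size]
```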
First image point IP1 may be designated by a user through user input interface 140 of display device 100. For example, a user designates first image point IP1 to point to a part of interest in an object (e.g., a tooth of interest in a dental structure). Such designation may be performed by pointing to an image of the tooth of interest using an input device (e.g., a mouse), and may be made with a single user input, such as a single mouse click.
Second image points IP2a, IP2c and IP2s may be automatically designated to be located at the same tooth of interest in the respective sectional images, corresponding to first image point IP1, in accordance with at least one embodiment. For example, processor 130 may calculate position information of second image points IP2a, IP2c, and IP2s based on position information of first image point IP1. Particularly, processor 130 may consider various characteristics of each sectional view (e.g., axial plane view, coronal plane view, and sagittal plane view). That is, although first image point IP1 indicates a contour surface of a tooth of interest, second image point IP2a of the axial plane view may not indicate the same contour surface of the tooth of interest. Second image point IP2a of the corresponding axial plane view may instead indicate a center axial plane of the same tooth of interest, or an axial plane passing through the top of the gum of the same tooth.
Furthermore, second image points IP2c and IP2s of the corresponding coronal plane view and sagittal plane view may not indicate the same contour surface of the tooth of interest. For example, they may indicate a center coronal plane or a center sagittal plane of the same tooth of interest. Alternatively, they may indicate a coronal plane or a sagittal plane passing through the end of a root of the same tooth of interest. Such second image points IP2a, IP2c, and IP2s may be set by at least one of a system designer and a user.
As described, display device 100 may detect a single user input that designates first image point IP1 pointing to a part of interest in a 3D image, automatically calculate second image points IP2a, IP2c, and IP2s pointing to corresponding planes of the same part of interest in the sectional images, and reconstruct the 3D image and the corresponding sectional images to show the part and planes of interest without requiring further user inputs or interaction. Hereinafter, an operation for designating a first image point and reconstructing a 3D image and corresponding sectional images based on the first image point will be described with reference to FIG. 13 to FIG. 15.
FIG. 13 illustrates designation of a first image point in accordance with at least one embodiment. FIG. 14 is a diagram for describing reconstruction of an axial plane image based on a first image point in accordance with at least one embodiment. FIG. 15 illustrates displaying a 3D image and corresponding sectional images reconstructed based on a first image point designated by a user, in accordance with at least one embodiment.
Referring to FIG. 13, screen 10 displays first volume image 3D1 in a first display direction (step 1). For example, processor 130 may produce first volume image 3D1 of a dental structure to show a contour surface of the incisors. That is, image display 120 may display the contour surface of the incisors parallel to screen 10 (e.g., in the first display direction).
First image point IP1 may be designated by a user on first volume image 3D1 (step 2). For example, a user might want to carefully examine a lower right cuspid. In this case, the user makes a single user input to designate first image point IP1; that is, the user points to the lower right cuspid using an input device (e.g., a mouse). Processor 130 may recognize such a single user input as designating first image point IP1 and obtain position information of first image point IP1 (e.g., the lower right cuspid).
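The specification does not prescribe how a 2D click is converted into a 3D position. One plausible sketch, assuming a front-facing, unrotated rendering and a hypothetical CT-number threshold for tooth and bone surfaces, marches the viewing ray into the volume until it meets the contour surface:

```python
import numpy as np

def pick_first_image_point(volume, click_uv, threshold=1200):
    """Return the first voxel along the viewing ray whose CT number
    exceeds `threshold`, i.e., where the ray meets the contour surface.

    `volume` is indexed as [z, y, x] with y as the viewing (depth) axis,
    matching a frontal view of the dental arch; `click_uv` is the screen
    click mapped into image coordinates (u -> x, v -> z). A real renderer
    would first transform the ray by the current model-view matrix, and
    `threshold` is only a hypothetical bone/tooth cutoff.
    """
    u, v = click_uv
    for depth in range(volume.shape[1]):      # march front to back along y
        if volume[v, depth, u] > threshold:
            return (u, depth, v)              # first image point IP1 as (x, y, z)
    return None                               # the ray missed the object
```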
Processor 130 may generate 3D image data for displaying second volume image 3D2 and sectional image data for displaying sectional images AX2, CR2, and SG2 of second volume image 3D2 (step 3).
Image display 120 may display second volume image 3D2 so as to locate first image point IP1 at a first position along a front direction in first display region R1, and may display the sectional images while locating second image points IP2a, IP2c, and IP2s at the second positions corresponding to the first position, with given magnification ratios or a magnification ratio input through user input interface 140 (step 4). Desirably, all of the sectional images may be displayed with the same magnification ratio. Second volume image 3D2 and the sectional images thereof may be displayed concurrently, or substantially simultaneously.
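A minimal sketch of this placement step, with illustrative names (the patent does not define this API): given one magnification ratio shared by all sectional views, each slice is drawn so that its second image point lands at the center of its display region, the second position in this example:

```python
def place_slice(ip2, region_size, magnification):
    """Top-left drawing position that puts second image point IP2 at
    the center of a display region.

    Drawing the slice scaled by `magnification` with its top-left corner
    at the returned position places IP2 exactly at the region's center.
    Calling this with the same `magnification` for the axial, coronal,
    and sagittal views yields the equal-ratio display described above.
    """
    center_x, center_y = region_size[0] / 2.0, region_size[1] / 2.0
    return (center_x - ip2[0] * magnification,
            center_y - ip2[1] * magnification)

# Example: a 512x512 region, IP2a at slice pixel (300, 180), 1.5x zoom.
origin = place_slice((300, 180), (512, 512), 1.5)   # -> (-194.0, -14.0)
```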
As described, display device 100 reconstructs a 3D image and corresponding sectional images based on first image point IP1 designated by a single user input. Hereinafter, an operation for generating sectional image data based on first image point IP1 will be described in detail with reference to FIG. 14. Referring to FIG. 14, processor 130 may generate the sectional image data based on first image point IP1 made on the lower right cuspid of FIG. 13, as follows. i) Processor 130 may select an axial image (AX) (e.g., axial plane view) showing a dental arch image at a height (e.g., a height value or a z-axis value) of first image point IP1 designated by a user. ii) Processor 130 may analyze CT numbers of pixels of the axial image (AX) included in a virtual line drawn to pass through first image point IP1 along the first display direction D1 on the axial image. Such a virtual line and the first display direction D1 may be identical to the vertical line (VL) shown in FIG. 14. The CT numbers of pixels may also be referred to as voxel values or CT numbers of voxels.
iii) Processor 130 may determine edge point EP, where the difference of CT numbers between two adjacent pixels (e.g., voxels) is maximum. iv) Processor 130 may determine second image point IP2a by finding a point that has CT numbers within a predetermined range and is apart from edge point EP by a predetermined distance along the first display direction D1. For example, the predetermined distance may be about 3 mm. v) Processor 130 may establish a pair of axial axes X and Y perpendicular to each other at second image point IP2a on the axial image (AX), and establish a pair of axes for a coronal image and a sagittal image, respectively, on the basis of the pair of axial axes. vi) Processor 130 may determine a rotation angle θ.
vii) Processor 130 may generate sectional image data based on each pair of the axes. The axial axes X and Y may be determined by fixing the direction of the axial axis X toward a center of dental arch AC, as shown in FIG. 14, but the present invention is not limited thereto. In accordance with another embodiment, the axial axis Y may be determined by a tangential line to dental arch AC at second image point IP2a. In the case of fixing the direction of the axis X, the rotation angle is determined as an angle between the axis X and the vertical line (VL) of screen 10. In FIG. 14, the vertical line (VL) of screen 10 may be identical to the first display direction D1 since they are parallel. In the case of fixing the direction of the axis Y, the rotation angle may be determined as an angle between the axis Y and a horizontal line of screen 10. The procedures related to FIG. 14 are not shown on screen 10; they merely illustrate examples of procedures performed by and within processor 130.
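Steps ii) to vi) can be sketched in a few lines of Python. This is only an illustration under stated assumptions: a hypothetical pixel spacing, the virtual line taken along the image's vertical axis as in FIG. 14, and no check that the candidate point's CT numbers fall within the predetermined range, which a real implementation would add:

```python
import numpy as np

def reconstruct_axial_axes(axial, ip1_xy, arch_center, dist_mm=3.0, mm_per_px=0.2):
    """Sketch of steps ii)-vi) on one axial slice (illustrative only).

    `axial` is the axial image selected at the height of IP1, `ip1_xy` the
    in-plane position of IP1, `arch_center` the center of dental arch AC,
    and `mm_per_px` a hypothetical pixel spacing.
    """
    x, _ = ip1_xy
    line = axial[:, x].astype(float)           # ii) CT numbers along VL

    # iii) edge point EP: maximum difference between adjacent pixels
    ep = int(np.argmax(np.abs(np.diff(line))))

    # iv) second image point IP2a: about 3 mm beyond EP along D1
    #     (a real implementation would also verify that the CT numbers
    #     there fall within the predetermined range)
    ip2a = (x, ep + int(round(dist_mm / mm_per_px)))

    # v)-vi) fix axis X toward the arch center; the rotation angle is the
    #        angle between axis X and the vertical line VL of the screen
    dx, dy = arch_center[0] - ip2a[0], arch_center[1] - ip2a[1]
    theta = np.degrees(np.arctan2(dx, dy))     # 0 deg when X lies along VL

    return ip2a, theta
```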
Once first image point IP1 is designated on first volume image 3D1 by the user through user input interface 140, processor 130 may generate the 3D image data for second volume image 3D2 and the sectional image data in accordance with at least one embodiment, as described above. As shown in FIG. 15, image display 120 may display four images, second volume image 3D2, axial image AX2, coronal image CR2, and sagittal image SG2, in respective display regions R1, R2, R3, and R4. As described, when the user designates only a single image point (e.g., first image point IP1) with a single user input, display device 100 automatically rotates the 3D image and rearranges the corresponding sectional images thereof at corresponding positions with respect to the first image point, with the predetermined magnification, by automatically calculating second image points IP2a, IP2c, and IP2s based on first image point IP1 in accordance with at least one embodiment.
According to embodiments of this invention, the volume image and the sectional images thereof may be shown in association with one another by arranging the designation point on the volume image and the corresponding points on the sectional images, rotating the volume image in consideration of the designation point, and magnifying the sectional images concurrently. Thus, the user does not require substantial skill or training to obtain appropriate images in association with one another for diagnosis, and the time for obtaining such images can be reduced effectively.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments necessarily mutually exclusive of other embodiments. The same applies to the term "implementation."
As used in this application, the word "exemplary" means serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word "exemplary" is intended to present concepts in a concrete fashion.
Additionally, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise, or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.
Moreover, the terms "system," "component," "module," "interface," "model" or the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
The present invention can be embodied in the form of methods and apparatuses for practicing those methods. The present invention can also be embodied in the form of program code embodied in tangible, non-transitory media, such as magnetic recording media, optical recording media, solid state memory, floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. The present invention can also be embodied in the form of program code, for example, whether stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium or carrier, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits. The present invention can also be embodied in the form of a bitstream or other sequence of signal values electrically or optically transmitted through a medium, stored as magnetic-field variations in a magnetic recording medium, etc., generated using a method and/or an apparatus of the present invention.
It should be understood that the steps of the exemplary methods set forth herein are not necessarily required to be performed in the order described, and the order of the steps of such methods should be understood to be merely exemplary. Likewise, additional steps may be included in such methods, and certain steps may be omitted or combined, in methods consistent with various embodiments of the present invention.
As used herein in reference to an element and a standard, the term "compatible" means that the element communicates with other elements in a manner wholly or partially specified by the standard, and would be recognized by other elements as sufficiently capable of communicating with the other elements in the manner specified by the standard. The compatible element does not need to operate internally in a manner specified by the standard.
Although embodiments of the present invention have been described herein, it should be understood that the foregoing embodiments and advantages are merely examples and are not to be construed as limiting the present invention or the scope of the claims. Numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure, and the present teaching can also be readily applied to other types of apparatuses. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.
The present invention may be applied to display a three-dimensional image and sectional images concurrently in defined relation.

Claims (20)

  1. An image display apparatus comprising:
    a data storage configured to store image data;
    an input interface configured to receive position information of a first image point from a user;
    an image display configured to display, based on 3D image data and sectional image data, i) a three-dimensional (3D) image in a first display region of a screen, and ii) a sectional image of the 3D image in a second display region of the screen; and
    a processor configured to generate, with the image data stored in the data storage, i) the 3D image data to display the 3D image by locating the first image point of the 3D image at a first position in the first display region and ii) the sectional image data to display the sectional image of the 3D image by locating a second image point of the sectional image at a second position in the second display region,
    wherein the second image point corresponds to the first image point and the second position corresponds to the first position.
  2. An image display apparatus according to claim 1, wherein the processor is configured to generate the 3D image data to display a part of the 3D image, indicated by the first image point, to be directed to a front direction of the image display.
  3. An image display apparatus according to claim 1, wherein the sectional image data is generated for displaying the sectional images with the same magnification ratio.
  4. An image display apparatus according to claim 2, wherein the sectional image data is generated for displaying the sectional images with the same magnification ratio.
  5. An image display apparatus according to claim 1, wherein:
    the data storage stores 3D X-ray image data of a dental arch including teeth; and
    the 3D image is a 3D X-ray image of the dental arch.
  6. An image display apparatus according to claim 5, wherein the sectional images include an axial image, a coronal image, and a sagittal image.
  7. An image display apparatus according to claim 6, wherein the first image point is a point on a tooth of interest.
  8. An image display apparatus according to claim 5, wherein the image display displays the 3D image and the sectional images substantially at the same time.
  9. An image display apparatus according to claim 1, wherein the first position and the second position are center points in the first display region and the second display region, respectively.
  10. An image display apparatus according to claim 4, wherein the first position and the second position are center points in the first display region and the second display region, respectively.
  11. A method of displaying images, comprising:
    displaying a second three-dimensional (3D) image in a first display region by locating a first image point of the second 3D image on a first position in the first display region of a screen; and
    displaying sectional images of the second 3D image in a plurality of second display regions of the screen, respectively, by locating a second image point of each sectional image on a second position in each second display region,
    wherein the second image points correspond to the first image point, and the second positions correspond to the first position.
  12. A method of displaying images according to claim 11, wherein a part of the 3D image, indicated by the first image point, is displayed to be directed to a frontal direction of the screen.
  13. A method of displaying images according to claim 11, wherein the sectional images are displayed with the same magnification ratio.
  14. A method of displaying images according to claim 12, wherein the sectional images are displayed with the same magnification ratio.
  15. A method of displaying images according to claim 11, wherein the 3D image is a 3D X-ray image of a dental arch.
  16. A method of displaying images according to claim 15, wherein the sectional images include an axial image, a coronal image, and a sagittal image.
  17. A method of displaying images according to claim 16, wherein the first image point is a point on a tooth of interest.
  18. A method of displaying images according to claim 15, wherein the displaying of the 3D image and the displaying of the sectional images are concurrent.
  19. A method of displaying images according to claim 11, wherein the first position and the second position are center points in the first display region and the second display region, respectively.
  20. A non-transitory machine-readable storage medium, having encoded thereon program code, wherein, when the program code is executed by a machine, the machine implements a method for displaying images, comprising the steps of:
    displaying a second three-dimensional (3D) image in a first display region by locating a first image point of the second 3D image on a first position in the first display region of a screen; and
    displaying sectional images of the second 3D image in a plurality of second display regions of the screen, respectively, by locating a second image point of each sectional image on a second position in each second display region,
    wherein the second image points correspond to the first image point, and the second positions correspond to the first position.
PCT/KR2014/001080 2013-02-08 2014-02-08 Image display to display 3d image and sectional images WO2014123395A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/766,747 US9665990B2 (en) 2013-02-08 2014-02-08 Image display to display 3D image and sectional images
KR1020157024562A KR101731589B1 (en) 2013-02-08 2014-02-08 Image display to display 3d image and sectional images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20130014496 2013-02-08
KR10-2013-0014496 2013-02-08

Publications (1)

Publication Number Publication Date
WO2014123395A1 true WO2014123395A1 (en) 2014-08-14

Family

ID=51299939

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/KR2014/001080 WO2014123395A1 (en) 2013-02-08 2014-02-08 Image display to display 3d image and sectional images
PCT/KR2014/001081 WO2014123396A1 (en) 2013-02-08 2014-02-08 Image display to display internal structure with change of depth

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/KR2014/001081 WO2014123396A1 (en) 2013-02-08 2014-02-08 Image display to display internal structure with change of depth

Country Status (3)

Country Link
US (2) US10210667B2 (en)
KR (2) KR101731593B1 (en)
WO (2) WO2014123395A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IT201600083061A1 (en) * 2016-08-05 2018-02-05 Aldo Amato METHOD OF DETERMINING AND DESIGNING THE INDIVIDUAL IDEAL FORM OF TWO UPPER FRONT TEETH
KR101877895B1 (en) * 2016-10-06 2018-07-12 주식회사 메가젠임플란트 Image Generation System for implant Diagnosis and the same
KR101865701B1 (en) * 2016-10-06 2018-06-11 주식회사 메가젠임플란트 Mobile iinked implant diagnosis system
WO2019045144A1 (en) * 2017-08-31 2019-03-07 (주)레벨소프트 Medical image processing apparatus and medical image processing method which are for medical navigation device
US20210137653A1 (en) * 2019-11-12 2021-05-13 Align Technology, Inc. Digital 3d models of dental arches with accurate arch width
KR102352985B1 (en) * 2020-07-07 2022-01-20 한국과학기술원 Method and apparatus system for volume data visualization interface through virtual widget based on touch screen
CN112288886B (en) * 2020-09-15 2022-02-15 陈学鹏 Tooth position arrangement method of accurate digital tooth model
KR102633824B1 (en) * 2021-07-21 2024-02-06 오스템임플란트 주식회사 Dental image providing apparatus and method thereof
CN116779093B (en) * 2023-08-22 2023-11-28 青岛美迪康数字工程有限公司 Method and device for generating medical image structured report and computer equipment

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5734384A (en) 1991-11-29 1998-03-31 Picker International, Inc. Cross-referenced sectioning and reprojection of diagnostic image volumes
US7581191B2 (en) * 1999-11-15 2009-08-25 Xenogen Corporation Graphical user interface for 3-D in-vivo imaging
US6633789B1 (en) * 2000-02-17 2003-10-14 Align Technology, Inc. Effiicient data representation of teeth model
US7113818B2 (en) * 2002-04-08 2006-09-26 Oti Ophthalmic Technologies Inc. Apparatus for high resolution imaging of moving organs
KR200303415Y1 (en) 2002-11-01 2003-02-07 박창수 computed tomography
US8295432B2 (en) 2005-05-02 2012-10-23 Oy Ajat Ltd Radiation imaging device with irregular rectangular shape and extraoral dental imaging system therefrom
US7742560B2 (en) 2005-05-02 2010-06-22 Oy Ajat Ltd. Radiation imaging device with irregular rectangular shape and extraoral dental imaging system therefrom
US7336763B2 (en) 2005-05-02 2008-02-26 Oy Ajat Ltd Dental extra-oral x-ray imaging system and method
US7676022B2 (en) 2005-05-02 2010-03-09 Oy Ajat Ltd. Extra-oral digital panoramic dental x-ray imaging system
CN101288102B (en) 2005-08-01 2013-03-20 拜奥普蒂根公司 Methods and systems for analysis of three dimensional data sets obtained from samples
KR100947826B1 (en) 2006-05-24 2010-03-18 주식회사 메디슨 Apparatus and method for displaying an ultrasound image
US8401257B2 (en) 2007-01-19 2013-03-19 Bioptigen, Inc. Methods, systems and computer program products for processing images generated using Fourier domain optical coherence tomography (FDOCT)
US20100255445A1 (en) * 2007-10-03 2010-10-07 Bernard Gantes Assisted dental implant treatment
JP5390377B2 (en) 2008-03-21 2014-01-15 淳 高橋 3D digital magnifier surgery support system
EP2123223B1 (en) 2008-05-19 2017-07-05 Cefla S.C. Method and Apparatus for Simplified Patient Positioning in Dental Tomographic X-Ray Imaging
EP2335596A1 (en) 2009-12-15 2011-06-22 Medison Co., Ltd. Ultrasound system and method of selecting slice image from three-dimensional ultrasound image
KR101183767B1 (en) 2009-12-15 2012-09-17 삼성메디슨 주식회사 Ultrasound system and method of selecting two-dimensional slice image from three-dimensional ultrasound image
KR101126891B1 (en) 2010-01-12 2012-03-20 삼성메디슨 주식회사 Ultrasound system and method for providing slice image
KR101117930B1 (en) * 2010-05-13 2012-02-29 삼성메디슨 주식회사 Ultrasound system and method for providing additional information with slice image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5371778A (en) * 1991-11-29 1994-12-06 Picker International, Inc. Concurrent display and adjustment of 3D projection, coronal slice, sagittal slice, and transverse slice images
WO1998053428A1 (en) * 1997-05-20 1998-11-26 Cadent Ltd. Computer user interface for orthodontic use
US20090191503A1 (en) * 2008-01-29 2009-07-30 Align Technology, Inc. Method and system for optimizing dental aligner geometry
US20110033026A1 (en) * 2008-02-12 2011-02-10 Sirona Dental Systems Gmbh Method for Creating a Tomographic Image
US20090316966A1 (en) * 2008-05-16 2009-12-24 Geodigm Corporation Method and apparatus for combining 3D dental scans with other 3D data sets

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104600141A (en) * 2015-02-06 2015-05-06 保利协鑫(苏州)新能源运营管理有限公司 Solar module
CN104600141B (en) * 2015-02-06 2018-04-03 协鑫集成科技股份有限公司 Solar cell module
CN109963100A (en) * 2017-12-14 2019-07-02 浙江宇视科技有限公司 A kind of caching amended record method and device shared based on multicast
US20230298163A1 (en) * 2022-03-15 2023-09-21 Avatar Medical Method for displaying a 3d model of a patient
WO2023175001A1 (en) * 2022-03-15 2023-09-21 Avatar Medical Method for displaying a 3d model of a patient
EP4258216A1 (en) * 2022-04-06 2023-10-11 Avatar Medical Method for displaying a 3d model of a patient

Also Published As

Publication number Publication date
KR101731593B1 (en) 2017-05-11
KR101731589B1 (en) 2017-05-11
WO2014123396A1 (en) 2014-08-14
KR20160002703A (en) 2016-01-08
KR20150122678A (en) 2015-11-02
US9665990B2 (en) 2017-05-30
US20150379780A1 (en) 2015-12-31
US20150374316A1 (en) 2015-12-31
US10210667B2 (en) 2019-02-19

Similar Documents

Publication Publication Date Title
WO2014123395A1 (en) Image display to display 3d image and sectional images
US7965304B2 (en) Image processing method and image processing apparatus
US8189002B1 (en) Method and apparatus for visualizing three-dimensional and higher-dimensional image data sets
JP4588736B2 (en) Image processing method, apparatus, and program
Thurfjell et al. CBA—an atlas-based software tool used to facilitate the interpretation of neuroimaging data
CN110753954B (en) System and method for combining color 3D images
WO2010126307A2 (en) Apparatus and method for a real-time multi-view three-dimensional ultrasonic image user interface for ultrasonic diagnosis system
WO2020184875A1 (en) Tooth number selection method using panoramic image, and medical image processing device therefor
JP4885042B2 (en) Image processing method, apparatus, and program
CN103177471A (en) Three-dimensional image processing apparatus
CN103534733A (en) Medical image system and method
CN109887048A (en) PET scatter correction method, image rebuilding method, device and electronic equipment
Kim et al. Conveying shape with texture: experimental investigations of texture's effects on shape categorization judgments
CN1827046A (en) Image processing apparatus
JP2005521960A (en) Method, system and computer program for stereoscopic observation of three-dimensional medical images
WO2015198835A1 (en) Image processing apparatus and image processing method
WO2015072807A1 (en) Device and method for generating dental three-dimensional surface image
JP5065740B2 (en) Image processing method, apparatus, and program
WO2011159085A2 (en) Method and apparatus for ray tracing in a 3-dimensional image system
US20160157726A1 (en) Projection image generating device, method and program
JP2008067915A (en) Medical picture display
US10347032B2 (en) Slice representation of volume data
WO2021187675A1 (en) Reliable subsurface scattering method for volume rendering in three-dimensional ultrasound image
EP2515137A1 (en) Processing a dataset representing a plurality of pathways in a three-dimensional space
CN106296707A (en) Medical image processing method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14749423

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14766747

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20157024562

Country of ref document: KR

Kind code of ref document: A

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 30.11.2015)

122 Ep: pct application non-entry in european phase

Ref document number: 14749423

Country of ref document: EP

Kind code of ref document: A1